mysqlguy.net


my mac essentials

Planet MySQL - July 15, 2008 - 9:42pm
whenever i see somebody's list of essential mac applications, i am always a little surprised at how little overlap it has with mine. now that i've mostly switched over to the new macbook pro, here's the list of applications that i have installed:
  • acorn ($50): this is a nifty little image editing application. in the last few days, i have been using it to mock up shelving layouts for the store.
  • bzr (free): this is the distributed version control system of choice at mysql these days.
  • busysync: it would be nice to keep my google calendar and ical in sync. after trying spanning sync for a bit, i am giving this a try as an alternative.
  • delivery status: this dashboard widget is great for tracking the way-too-many packages that i get from amazon and other places.
  • google notifier (no cost): now that i have switched almost entirely to using gmail, this is useful to let me know when i have new mail.
  • linkinus ($20): i use this irc client for accessing the company chat server to connect with my mysql coworkers.
  • menucalendarclock for ical (no cost or $20 for more features): i like this replacement for the date/time display in the upper-right of the menu bar, which drops down a full calendar, including upcoming ical events.
  • myob accountedge ($300): this is for doing the books for the store and gallery.
  • mysql (free): i have the standard mysql server package installed for testing.
  • twitterrific (ad-supported or $15): this is a not-too-obtrusive way of participating in twitter nonsense.
  • virtualbox (free): i used parallels on my last machine, but i figured i would give sun's own virtualization technology a spin. i use it to run a windows xp image for development using the microsoft toolchain and for accessing sun's vpn.
  • xcode (no cost): i don't really use xcode itself, just many of the unix development tools that come along with it.


of the bundled software, i regularly use address book, ical, iphoto, itunes, mail (for my sun/mysql email), preview, safari, and terminal. and i use time machine, but i hope i don't have to regularly use it.
Categories: MySQL

High Performance MySQL is going to press, again

Planet MySQL - July 15, 2008 - 9:17pm

Apparently High Performance MySQL, 2nd Edition is selling quite well — I’m not sure exactly how well — because we’re preparing for a second printing. This makes me very happy. I don’t think they anticipated going back to press for quite some time.

The book fluctuates between sales rank 1000 and 2000 on Amazon during the day, and has reached as high as 600 or so. This is just phenomenal. The O’Reilly team was psyched when it broke 5000, and so was I — but now we’ve stayed under 2000 for a long time (except when Amazon sold out of it). Frankly I’d have thought that for a niche-market book like this, we’d have been in the 10,000 range or something like that.

Clearly we (the authors, editors, publisher, etc) have done something right! This is a great feeling.

Thanks for sending errata, by the way. I have just completed proofreading the whole book myself, and found a number of things that may be fixed in the second printing. I think certain types of errors won’t be fixed, but the important ones certainly will be.

Books, mysql, writing
Categories: MySQL

Flexviews for MySQL 1.0.3-alpha released

Planet MySQL - July 15, 2008 - 8:26pm
Make sure you set max_sp_recursion_depth - see the instructions file.
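For example, before calling the flexviews procedures (255 is just an illustrative depth; use whatever the instructions file recommends):

SET max_sp_recursion_depth = 255;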

changes:
  • functions/procedures now take parameters in DB, TABLE order.
  • demo database removed from sourceforge and added into SVN/release tarball.
  • GRANT USAGE ON *.* TO flexviews; is now the default permission level of the flexviews user.
  • I removed the mview_ prefix from filenames.
  • flexviews.add_table() now checks for the existence of the table and the table's mvlog.
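For example, a call under the new argument order might look like this (a sketch only - the exact signature may take additional arguments, so check the included documentation):

CALL flexviews.add_table('my_db', 'my_table');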

get it at http://flexviews.sourceforge.net
Categories: MySQL

Sphinx 0.9.8 is released!

Planet MySQL - July 15, 2008 - 7:40pm

The Sphinx project just released version 0.9.8, with many enhancements since the previous release. There’s never been a better time to try it out. It’s really cool technology.

What is Sphinx? Glad you asked. It’s fast, efficient, scalable, relevant full-text searching and a heck of a lot more. In fact, Sphinx complements MySQL for a lot of non-search queries that MySQL frankly isn’t very good at, including WHERE clauses on low-selectivity columns, ORDER BY with a LIMIT and OFFSET, and GROUP BY. A lot of you are probably running fairly simple queries with these constructs and getting really bad performance in MySQL. I see it a lot when I’m working with clients, and there’s often not much room for optimization. Sphinx can execute a subset of such queries very efficiently, due to its smart I/O algorithms and the way it uses memory. By “subset” I mean you don’t get the full complexity of SQL, but you get enough functionality for lots of the poorly-performing queries I see in the wild. It’s a 95% solution.
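To make that concrete: the query below (schema and values hypothetical) has the shape I mean - a low-selectivity WHERE plus a deep ORDER BY ... LIMIT/OFFSET - and is exactly the kind of thing MySQL executes badly but Sphinx handles well:

SELECT id FROM items WHERE status = 'active' ORDER BY created DESC LIMIT 10 OFFSET 100000;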

Is Sphinx for you? Good question. You can find answers in Appendix C in High Performance MySQL. And yes, that is why I wrote this blog post — to put in a plug for the book. *grin* But before I go, let me put in another plug for Sphinx: go vote for it on Sourceforge! If it’s voted as one of the Community Choice projects of the year, that will be fantastic.

Andrew Aksyonoff, mysql, Sphinx
Categories: MySQL

OSS helps hedge funds facing difficult times

Planet MySQL - July 15, 2008 - 5:00pm
Open source Marketcetera Trading Platform allows hedge funds and trading institutions to maximize control and minimize cost.

Categories: MySQL

Quick note on bulk performance using on duplicate key update

Planet MySQL - July 15, 2008 - 3:56pm

Had an interesting thought today: I wondered how much faster (if at all) bulk updates via INSERT ... ON DUPLICATE KEY UPDATE would be compared to individual UPDATE statements. The client app I was looking at receives updates to a counter from various servers around the world, and they update a row that contains time metrics. For instance: update traffic set visitcount = visitcount + 1000 where customer = 123 and hour = '12' and date = '7/15/2008'. These statements are driven by feeds from the various servers, so the workload lends itself to bulk operations very easily. It seemed like batching these up would, at minimum, save the parse and network round-trip time.

Basically instead of :

update bulk_update set val=10+val where val1 = 20;
update bulk_update set val=10+val where val1 = 21;
update bulk_update set val=10+val where val1 = 22;
update bulk_update set val=10+val where val1 = 23;

use something like this:

insert into bulk_update ( val , val1 , testdate ) values ( 10, 20, now() ),( 10, 21, now() ),( 10, 22, now() ),( 10, 23, now() ) on duplicate key update val=val+values(val);
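For the ON DUPLICATE KEY path to kick in, val1 has to be covered by a primary or unique key; a minimal sketch of the test table (the column types are my assumption):

create table bulk_update (
  val int not null,
  val1 int not null,
  testdate datetime,
  unique key (val1)
);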

A real quick test showed that the bulk on duplicate key syntax resulted in about a 30% performance improvement.

Categories: MySQL

Creating an Intermediate Replication Layer

Planet MySQL - July 15, 2008 - 12:52pm
A few weeks ago, I discussed how to keep binlogs in sync in a tree or pyramid replication scheme. That thread discussed how to re-distribute load in case of failure in one of the intermediate slaves. Today we will look at how to create those intermediate replication slaves. We start with a traditional flat replication [...]
Categories: MySQL

GSoC Weekly Report - Week 7

Planet MySQL - July 15, 2008 - 12:42pm
Project: MySQL Forge RSS/Atom feeds

KEY ACCOMPLISHMENTS LAST WEEK

* Merging feeds with the BZR branch.
* Writing first PHPUnit tests.
* Extending the existing PHPUnit test suite to make it more suitable for use in the BZR branch.
* Reading the SeleniumRC documentation.

KEY TASKS THAT STALLED LAST WEEK

* Installing SeleniumRC and exercising its functionality.

KEY CONCERNS

* None

TASKS IN THE UPCOMING WEEK

* Install SeleniumRC
* Write new PHPUnit tests
* Final merge of RSS/Atom feeds into BZR trunk
Categories: MySQL

New apps for the iPhone

Planet MySQL - July 15, 2008 - 11:50am
Looks like there's been a slew of new applications enabled by Apple's new iPhone SDK and App Store. While a lot of the emphasis is on communications and games, there's a surprising number of good business applications. First to market among open source business applications for the iPhone are Pentaho, with their open source Business Intelligence extension for dashboards, reports, and drilldowns; Funambol, with an open source contact sync program; Zimbra, with an open source email client; Zmanda, with an iPhone client for their enterprise backup tool; and SugarCRM, with their offline client edition. I have no doubt that many more...

Categories: MySQL

Mycat beta 0.3.0 released

Planet MySQL - July 15, 2008 - 8:00am
After a very long time distracted with other projects, I finally added the third component of the MyCAT project: binlog_mon, a binary log manager for MySQL.

The main feature of this tool is that it has two disk usage thresholds which determine when it purges your binary logs:
  • a lower, "nominal", threshold above which binary logs will be purged if-and-only-if none of the replication slaves are still reading it,
  • and a higher, "critical", threshold at which the behavior is configurable.
It can simply send you an alert if disk usage is above critical and the oldest file is still needed - or it can purge one file, all files until usage is below critical, or all files until usage is below the nominal level. (Other options could be added fairly easily.) The "critical" behavior is so configurable because purging any binary log while a slave is still reading it will most likely corrupt the slave, but in some (probably most) cases that is better than filling up the log disk and locking the master database!
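For reference, purging logs up to a given file on a MySQL master uses the standard statement below, presumably what binlog_mon issues under the hood (the file name is hypothetical):

PURGE BINARY LOGS TO 'mysql-bin.000123';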

For those not familiar with the MyCAT project: it is an open-source Perl toolset for managing MySQL/Linux servers that I wrote initially for my own use, but after a few people asked for similar tools I decided to publish it. Currently, it is composed of three programs that: monitor replication (rep_mon), monitor and rotate binary logs (binlog_mon), and allow remote shell and scp access to predefined groups of servers (rcall). This third tool is great for simplifying tasks such as syncing a new httpd.conf file and running a rolling restart across a large web farm. All three tools read the same XML configuration file, which defines servers and their properties (such as which servers have mysql, what volume the binlogs are on and what the nominal/critical levels are, what an acceptable replication "lag" is, and whether or not exceeding that lag is important enough to send you an email, etc.). It really does a lot of things useful to any linux or mysql admin working in a LAMP environment, and I encourage anyone to send me feedback or feature requests.

MyCAT wiki
MyCAT on Sourceforge


Cheers,
Devananda
Categories: MySQL

Variable's Day Out #15: bulk_insert_buffer_size

Planet MySQL - July 15, 2008 - 6:45am

Properties:

Applicable To: MyISAM
Server Startup Option: --bulk_insert_buffer_size=<value>
Scope: Both
Dynamic: Yes
Possible Values: Integer, range 0 - 4294967295 (4G)
Default Value: 8388608 (8M)
Category: Performance

Description:

This cache, a special tree-like structure, is used by MyISAM to optimize bulk inserts. Bulk inserts include statements like LOAD DATA INFILE ..., INSERT ... SELECT, and INSERT ... VALUES (...), (...), ..., (...).

Bulk inserts are often used to minimize disk writes and are very common in applications inserting lots of data. I often do bulk inserting using the following technique (pseudocode; a concrete SQL sketch follows the list):

  • CREATE TEMPORARY TABLE tmp_table LIKE main_table
  • LOOP:
    • INSERT ROWS INTO TEMPORARY TABLE
  • INSERT INTO main_table SELECT * FROM tmp_table
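A minimal SQL sketch of that technique, using the table names from the pseudocode (the two-column schema is invented for illustration):

CREATE TEMPORARY TABLE tmp_table LIKE main_table;
-- loop: batch incoming rows into the temporary table
INSERT INTO tmp_table (id, val) VALUES (1, 10), (2, 20), (3, 30);
-- then flush the whole batch into the main table as one bulk insert
INSERT INTO main_table SELECT * FROM tmp_table;
DROP TEMPORARY TABLE tmp_table;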

If you are using any technique for bulk insertion, you should set this parameter carefully.
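For example, to raise it for the current session before a big load (the 64M value is purely illustrative):

SET SESSION bulk_insert_buffer_size = 67108864; -- 64M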

Optimal Value:

This depends on the way your bulk inserts are structured. If you are doing bulk inserts with data volumes close to or above 10k, this buffer comes in really handy.


Hope you enjoyed reading this.

Categories: MySQL

MySQL Camp in Bangalore- July 29th 2008

Planet MySQL - July 15, 2008 - 5:19am

Sun Microsystems, India and OSSCube are organizing a MySQL camp on 29th July, 2008.


A highlight of the event is the keynote by Kaj Arno, VP for Community Relations from MySQL AB.

The venue for the camp is:

Hi-Tech Seminar Hall,
   DES Block (Dept of Electrical Science)
   M S Ramaiah Institute of Technology
   Near MS Ramaiah hospital
   MSR Nagar, Bangalore - 560054 

The timings for the camp are 4PM to 7PM, July 29th, 2008.

The scheduled talks include:

  • MySQL - The Community, The Product, The Company (Kaj Arno, one hour): An overview of all things MySQL, from a combined technical and business perspective. A short history of MySQL, the company, the product. What the MySQL Community is, and how MySQL works with it. How MySQL is being integrated into Sun.
  • How to Contribute Code to MySQL (Thava Alagu, about 30 minutes): A discussion on how the community can contribute code to the MySQL DB project.
  • Performance Optimisation in Enterprise Applications (Sonali Minocha, about 30 minutes): A discussion on how performance optimisation can be carried out using MySQL for enterprise applications.



The event will also mark the launch of the Bangalore MySQL User Group.

So, come join us and have a blast!

Please add your name at http://www.osscube.com/mysql/camp.php and/or http://mysql.meetup.com/297/

Categories: MySQL

Auditing your MySQL Data - Part 2

Planet MySQL - July 15, 2008 - 12:15am

Continuing from my earlier post Auditing your MySQL Data: Roland has accurately highlighted that my initial post leaves out some important information for auditing. As the original charter was only to keep a history for the purpose of comparing certain columns, a history was all that was needed.

Providing a history of changes forms the basis of auditing, and in keeping with my post title and intended follow-up, this is the all-important second part. However, in order to provide true auditing, additional information is necessary. This includes:

  1. When was an operation performed
  2. What operation was performed, i.e. INSERT, UPDATE or DELETE
  3. Who performed the operation

Date and operation can be determined via the database, but in order to gather all of this information, interaction with the application is necessary to obtain the true user information (this can't be determined via a trigger).
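One common workaround (my sketch, not something from the original post) is for the application to record its own notion of the user in a session variable right after connecting; triggers can then read it when writing audit rows:

SET @app_user_id = 42; -- set by the application; 42 is an illustrative user id

A trigger body can then copy @app_user_id into last_update_user_id.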

The issue then becomes a greater need for design understanding. What is the purpose of the audit data? How will it be accessed? How much complexity in maintaining the data are you willing to take on?

One alternative is to keep a separate log of audit history. The benefits are a clear and easy way to provide a history of a user's actions; the structure of the base and audit tables can remain the same; and triggers can remain relatively simple. However, if you want to look at the data together with its audit history, it is better to embed these columns within each table, and triggers then have to be customized and maintained in more detail than in my original post.

When considering the progression of these points, the design process normally returns to the following conclusion: the columns below are added to the base table (a DDL sketch follows the list).

  • A create_timestamp column is added
  • A last_update_timestamp column is added
  • A last_update_user_id column is added
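In DDL terms that usually looks something like this sketch (the customer table and column types are my assumptions; note that MySQL allows only one TIMESTAMP column per table to default to CURRENT_TIMESTAMP, so create_timestamp is declared DATETIME here):

ALTER TABLE customer
  ADD COLUMN create_timestamp DATETIME NOT NULL,
  ADD COLUMN last_update_timestamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  ADD COLUMN last_update_user_id INT NOT NULL;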

The create_timestamp is optional from an auditing perspective, because the last_update_timestamp of the first audit row will contain the same value; however, experience has shown this column is valuable for other design considerations.

The only remaining issue is the type of operation: INSERT, UPDATE, and DELETE. INSERT and UPDATE can be inferred; DELETE cannot. To maintain the simplicity of the model, a common approach is to use a BEFORE DELETE trigger to insert an audit record with all the same values as the previous row, with the last_update_timestamp set manually. A DELETE can then be identified by an audit row that shows no difference in any updated values.
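A minimal sketch of such a trigger, assuming a hypothetical base table customer (id, name, plus the audit columns above) and a customer_audit table with the same columns:

DELIMITER //
CREATE TRIGGER customer_before_delete
BEFORE DELETE ON customer
FOR EACH ROW
BEGIN
  -- copy the row being deleted into the audit table, stamping the delete time
  INSERT INTO customer_audit (id, name, create_timestamp, last_update_timestamp, last_update_user_id)
  VALUES (OLD.id, OLD.name, OLD.create_timestamp, NOW(), OLD.last_update_user_id);
END//
DELIMITER ;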

The ultimate conclusion comes down to your application design and needs. For example, your design may include a flag or row-status column to indicate deletes, which are later cleaned up via a batch process, so you don't really care about the date/time of the actual purging of data. This then negates the need for any DELETE trigger.

Again, thanks to Roland for providing a link to Putting the MySQL information_schema to Use which provides a number of SQL statements that help in the generation of Triggers to support full auditing.

You should be aware that CURRENT_USER normally serves no purpose if all changes are made via a single application user.

At this time, you also have another design consideration: do you introduce a procedure to re-create the triggers via automated means for each schema change, or do you maintain the triggers manually alongside schema changes? With either approach, additional checking and verification is necessary to ensure your triggers are correctly configured.

Categories: MySQL

Pensacola MySQL Users Group Meeting

Planet MySQL - July 14, 2008 - 10:04pm

I just put together the first Pensacola MUG meeting on meetup.com. The meeting is scheduled for August 5th at 7:00 pm at the Panera Bread right outside the entrance to Cordova Mall.

I have thoroughly enjoyed the previous user group meetings I have attended and wanted to see if we can get a group growing here in Pensacola. There is no formal agenda for the first meeting; I just want to meet everyone and find out what people want to hear about in the future. I can be reached at bmurphy AT paragon-cs.com.

The information and signup is at: http://mysql.meetup.com/300/.

Categories: MySQL

Falcon Transactional Characteristics

Planet MySQL - July 14, 2008 - 9:56pm

It’s time to continue our series on the transactional storage engines for MySQL. Some might question why I even include Falcon, because it is very much beta at this time. MySQL, however, has made quite an investment in Falcon, and while it is currently beta, the code is improving and it looks like it will be production-worthy when MySQL Server 6.0 hits GA.

If this is the case, it is important to begin to understand what Falcon was designed for and how it differs from other transactional engines such as InnoDB. I am going to concentrate quite a bit on the Falcon/InnoDB comparison as that is what everyone wants to talk about. This is despite my having heard MySQL employees repeatedly make statements to the effect of, “Falcon is not going to replace InnoDB,” or “Falcon is not competing with InnoDB.” Well, take that with a grain of salt. It certainly seems to me that they are competing for the same spot.

Warning

As I said, Falcon is beta. First off, don’t even try to use it in production. Using it in production means you will also be using MySQL Server 6.0, which itself is considered alpha. Your data will explode, be corrupted, or be eaten by jackals. It won’t be pretty. It will cause great pain.

In addition, the features of Falcon are still changing. What I say here might or might not be accurate in the future.

End of Warning

So, why was Falcon even created?

(more…)

Categories: MySQL

How Skoll Collects MySQL Runtime Information

Planet MySQL - July 14, 2008 - 8:37pm

To understand the runtime behavior of MySQL under different configurations, Skoll needs to collect runtime data while testing MySQL builds. To accomplish this, the Skoll client takes advantage of gcov, a test coverage program that's part of the GNU Compiler Collection (GCC). gcov collects runtime information such as how many times a line of code, a method, or a file was executed, what the coverage was, and much more.

The MySQL source tree actually provides a few build scripts under the BUILD directory that enable gcov. However, these scripts build MySQL using the default configuration - not exactly what Skoll needs. The Skoll client builds and tests MySQL in different configurations by passing compile-time and run-time flags to the configure script before compilation. For example:

./configure --enable-thread-safe-client --with-extra-charsets=all --with-innodb --with-ndbcluster

To enable gcov under non-default configurations, the Skoll client passes a few more compiler flags to the configure script:

CC=gcc CFLAGS="-fprofile-arcs -ftest-coverage" LDFLAGS="-fprofile-arcs -ftest-coverage" CXX=gcc CXXFLAGS="-fprofile-arcs -ftest-coverage"

After compilation is complete, most of the source files will have corresponding .gcno files.

When the Skoll client has finished running all of the MySQL tests, it recursively goes through the source directories and runs

gcov <filename>

on every source file to generate gcov runtime data. These runtime data files are then zipped and sent to the Skoll server for later processing and analysis.
Categories: MySQL

MySQL Support Engineer needs our help!

Planet MySQL - July 14, 2008 - 8:04pm
Categories: MySQL

MySQL UUID() function, so fast it makes me nervous

Planet MySQL - July 14, 2008 - 6:23pm
So I was hacking around in the MySQL server code this weekend, and took a look at the implementation of the UUID() function.

Instead of using Theodore Ts'o's uuid library, which now ships by default on all Linux and MacOS machines (I don't know about BSD and Solaris), we implemented our own from scratch - with a global mutex lock on generation, which will utterly blow on a multicore machine.

So I pulled down a branch of the server source code from Launchpad, and then branched it again twice locally. In those two branches, I replaced the implementation with a call to libuuid's uuid_generate_random() and with a call to libuuid's uuid_generate_time(). Then I built all three trees, and benchmarked 10 million calls to UUID() (SELECT BENCHMARK(10000000,UUID());).

I was surprised at the results.

implementation          time in seconds
built-in                1.95
uuid_generate_random()  96.76
uuid_generate_time()    70.62

How is the builtin one so fast?!

Something is either very wrong with the Ts'o implementation, which nobody has cared about because they aren't using it for thousands of calls per second, or else something is very wrong with the MySQL internal implementation, and we're not safely generating properly unique UUIDs.

My money is on the second guess...
Categories: MySQL

A community for life

Planet MySQL - July 14, 2008 - 6:00pm

The MySQL community is united for a noble purpose. Leaving aside their usual differences about release cycles and openness philosophy, the community is united in helping the son of Andrii Nikitin, a MySQL Support engineer, to overcome the difficult challenge of financing a life-saving bone marrow transplant.

The MySQL project has made its e-commerce site available to help raise the funds as quickly as possible. A very young life is in danger, and many people from inside and outside the company have donated for Ivan.

This is not the first time open source and saving lives have been tied together. For example, Matthew Swift, one of the lead developers of the OpenDS project, has combined a sporting challenge with support for research on leukodystrophy. Check out his Tri4Joe blog.

Categories: MySQL

building c/odbc 5.1 on mac os x

Planet MySQL - July 14, 2008 - 5:57pm
to build connector/odbc 5.1 on mac os x leopard, the first thing you will need is xcode. then you will want to install a recent version of mysql (5.0 or 5.1, or even 6.0 if you are feeling adventurous).

to be able to build the gui setup library, you will need to install qt, but i have found it easiest to work with qt3, not the latest qt4. you can download the last release of that from trolltech's ftp server - the file is called qt-mac-free-3.3.8.tar.gz. you will need to apply this patch to allow it to compile on leopard. i configure it with some options to eliminate stuff i don't care about, and to build statically:

./configure -no-tablet -no-accessibility -no-cups -thread -static

with those prerequisites met, you can download the source for connector/odbc (using the release packages or svn for now, and we will migrate to bzr soon). i build with the gui enabled and with debugging symbols and no optimization:

./configure --with-qt-dir=/path/to/qt-mac-free-3.3.8 \
  --with-extra-xlibs="-framework Carbon -framework QuickTime -lz" \
  --enable-dmlink --with-debug CFLAGS="-O0 -g"

that should be enough to get the driver compiled. the test suite should mostly pass - there is one test in the my_basics suite that fails because of bugs in the iODBC implementation of SQLCancel, which i reported to them last november.

there is also one other test that fails right now, but we are working on that. it is a test for a workaround for a server bug, and it's not clear whether the test case needs to be updated for the unicode-aware connector/odbc 5.1, or if the fix we applied in connector/odbc 3.51 didn't get merged correctly into the 5.1 tree.
Categories: MySQL
