mysqlguy.net

MySQL

MySQL Memory allocation & TMPDIR

Planet MySQL - 0 sec ago

A quick one here: we often talk about effectively utilizing memory to get the most out of your MySQL server.  I wanted to remind folks not to forget about allocating memory to a tmpfs for the tmpdir.  Certainly this is not going to help everyone, but those who create lots of temporary tables will find the performance boost most welcome. In fact, in some cases you may be better off allocating memory to a tmpfs than to the InnoDB buffer pool, but like everything it depends on your workload and environment.

What's tmpfs?  In a nutshell, a filesystem on top of a ramdisk, used for temporary file access.  Read the wiki page for more.   Sounds like a great thing for /tmp, right? By the way, it really irks me that most people leave /tmp as part of the root filesystem ….  shameful; read my common mistakes in LUN layouts post for my complaining about that.  Anyway, back to the topic.

What brought this post on? I am working with a client who was having horrendous performance on several seemingly simple queries.  The worst query, which will remain hidden to protect the innocent, was consistently finishing around 13 seconds each run…. see here:

1028 rows in set (13.75 sec)
1028 rows in set (13.95 sec)
1028 rows in set (13.64 sec)

Looking at the explain for the query, it was using a temporary table.  It is one of those queries that is always going to build a temporary table on disk and then do a filesort.  You know, the ones where you need to rethink the business case for that data because there is no great way to execute it. So without a way to prevent the temp table, and with no way to force it into memory within the database, I forced it into memory outside the database by pointing the tmpdir to a tmpfs.  This is a fairly dramatic difference, but it really underscores that huge performance gains can come from small changes:

1028 rows in set (0.26 sec)
1028 rows in set (0.26 sec)
1028 rows in set (0.27 sec)

In summary, if you're using lots of text fields, or doing lots of sorting, GROUP BYs, etc., you may see a nice performance boost by pointing your tmpdir at a tmpfs.
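To check whether your workload actually creates many on-disk temporary tables before going down this road, you can compare the server's standard status counters:

```
mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp%';
```

If Created_tmp_disk_tables is a large fraction of Created_tmp_tables, a tmpfs-backed tmpdir is likely to help.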

You can create a new tmpfs by issuing the following:

mkdir -p /tmpmysql
mount -t tmpfs -o size=2048M,mode=0777 tmpfs /tmpmysql

then point tmpdir over to /tmpmysql
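For example, in my.cnf (tmpdir is not a dynamic variable, so a server restart is required; the path is the one created above):

```
[mysqld]
tmpdir = /tmpmysql
```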

Don’t forget to add it to your fstab to make the change permanent.
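A typical fstab entry matching the mount command above would look like this:

```
tmpfs  /tmpmysql  tmpfs  size=2048M,mode=0777  0  0
```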

Categories: MySQL

Yet more on Stored Procedure performance

Planet MySQL - 17 min 44 sec ago
In Karlsson's blog post, More on Stored Procedure performance, he has written a simple C program that sends queries repeatedly to the MySQL server in order to do performance measurement. Unfortunately, his sample client will end up factoring a lot of the synchronous communications overhead into his performance comparison. I think that he should enable the MySQL multi-statement feature and then
Categories: MySQL

Improving Downloads at MySQL.com

Planet MySQL - 48 min 21 sec ago

The MySQL Web Team and the MySQL Community Team want to improve your experience with downloading software from MySQL.com. Now, we have many ideas on how to improve things, some subtle, some not so subtle, but we would like to hear from you.

As with all UI improvements, what we want to do is to come up with solutions and features which make the process easier, faster or more reliable. In this case it should be easier to identify what you need, faster to get to the download and more reliable in getting the correct download or your previous downloads.

What pages/sections are we dealing with specifically? We are talking about the downloads of MySQL and MySQL-related software available at the following pages: http://dev.mysql.com/downloads/

Here are some screenshots of some of the pages we are thinking of revamping:

  • Main downloads page at dev.mysql.com


  • Software Summary and Quick Links

 

  • Sample software listing

If you have any thoughts and ideas, please post them here in the comments or email me at dups (at) sun (dot) com.

In a later post, I'll discuss some of the suggestions that we get and some of the suggestions that we have internally.

Categories: MySQL

When are you required to have a commercial MySQL license?

Planet MySQL - 1 hour 4 min ago
As you may know, MySQL has a dual-licensing model. You can get the source under the GPL version 2, or you can buy a commercial license. I’ve recently been hearing a lot of confusion about when you have to buy a commercial license. People I’ve spoken to wrongly believe that they’re required to purchase [...]
Categories: MySQL

DBA_OBJECTS View for MySQL

Planet MySQL - 3 hours 18 min ago

When using Oracle, the data dictionary provides us with tons of tables and views, allowing us to fetch information about pretty much anything within the database. We do have information like that in MySQL 5.0 (and up) in the information_schema database, but it’s scattered through several different tables.

Sometimes a client asks us to change the datatype of a column, but forgets to mention the schema name, and sometimes even the table name. As you can imagine, having this kind of information is vital to locate the object and perform the requested action. This kind of behaviour must be related to Murphy’s Law.

In any case, I’d like to share with you a simple stored procedure that has helped us a lot in the past.

(more…)

Categories: MySQL

More on Stored Procedure performance

Planet MySQL - 4 hours 40 min ago
There has been some discussion on the performance of MySQL stored procedures here; last up was Anthony T Curtis in his blog, where he writes about Perl stored procedures. Brooks Johnson writes on SP performance in his blog and concludes that procedures really are slow, at least in terms of compute-intensive operations.

As a follow-up to this, I wanted to test what the difference really is when we use database-intensive operations. Here, procedures really should be faster, as they run inside MySQL itself. Brooks has already compared the performance of MySQL compute-intensive operations to MSSQL, and regrettably, MySQL came out behind the Redmond thingy.

But now to my really simple database-operations performance test. I'm using a procedure to INSERT data into a table, and to reduce the impact of the INSERT itself, I'm using the Blackhole storage engine. Then I write a simple procedure to INSERT into the table. All in all, it looks like this:
CREATE TABLE IF NOT EXISTS foo(bar INT NOT NULL PRIMARY KEY, col2 CHAR(100))
ENGINE=Blackhole;

DROP PROCEDURE IF EXISTS perf;
delimiter //
CREATE PROCEDURE perf(nOper INTEGER)
BEGIN
  DECLARE i INTEGER DEFAULT 0;

  WHILE i < nOper DO
    INSERT INTO foo VALUES(57, 'Some data');
    SET i = i + 1;
  END WHILE;
END
//

Now, I need to run this puppy somehow and I want to run the same inserts from a client program, and for this I write a simple C program, like this:
#include <stdio.h>
#include <stdlib.h>
#include <mysql.h>
#define MY_SOCKET "/tmp/mysql5131.sock"

int main(int argc, char *argv[])
  {
  MYSQL *pMySQL;
  int i, nLoop;
  char *pStmt;

  if(argc < 3)
    {
    fprintf(stderr, "Usage: %s <loop count> <SQL statement>\n", argv[0]);
    return 0;
    }
  nLoop = atoi(argv[1]);
  pStmt = argv[2];

  pMySQL = mysql_init(NULL);
  if(mysql_real_connect(pMySQL, NULL, "perf", "perf", "test",
0, MY_SOCKET, CLIENT_COMPRESS) == NULL)
    {
    fprintf(stderr, "Error %s connecting to MySQL.\n",
    mysql_error(pMySQL));
    mysql_close(pMySQL);
    return 1;
    }

  for(i = 0; i < nLoop; i++)
    {
    if(mysql_query(pMySQL, pStmt) != 0)
      {
      fprintf(stderr, "Error %s in MySQL query.\n", mysql_error(pMySQL));
      mysql_close(pMySQL);
      return 1;
      }
    }
  mysql_close(pMySQL);
  return 0;
  }

The C program is called spperf, and it takes two arguments: a count of how many times the statement should be run, and the text of the SQL statement to run. This way I can do something remotely interesting, which is to run X loops inside the SP and Y loops outside it, i.e. I can loop both in the SP and in the C program. I'll show what I mean real soon.

To begin with, I run 100000 INSERTs into the table foo created above. I use the Linux time command to time the execution. Yes, I know this is crude, but it works for this simple test. So:
[root@moe spperf]# time ./spperf 100000 "INSERT INTO foo VALUES(57, 'Some data')"

real 0m6.967s
user 0m0.580s
sys 0m0.702s

As we can see, it took some 7 seconds for this execution. Now, to run the same INSERTs using the procedure, I do this:
[root@moe spperf]# time ./spperf 1 "CALL PERF(100000)"

real 0m3.439s
user 0m0.003s
sys 0m0.001s

And as we can see, this is a bit faster. Now we can try some other combinations, like running 4 statements at a time in the procedure and calling the procedure 25000 times, which will cause the same 100000 INSERTs as in the examples above:
[root@moe spperf]# time ./spperf 25000 "CALL PERF(4)"

real 0m5.609s
user 0m0.184s
sys 0m0.268s

And the procedure, even when called 25000 times, still outperforms 100000 straightforward client INSERTs.

There are more tests that can be done here, I'm going to do two more now, one where I only do 1 INSERT per Stored Procedure call, and one where I do 2.
[root@moe spperf]# time ./spperf 100000 "CALL PERF(1)"

real 0m12.703s
user 0m0.770s
sys 0m0.853s
[root@moe spperf]# time ./spperf 50000 "CALL PERF(2)"

real 0m7.775s
user 0m0.337s
sys 0m0.461s

As we can see, for a simple procedure it's not terribly fast, but as soon as there is something slightly more to do, a procedure really isn't such a bad idea, at least as long as that something is not computationally intensive. I have just one more thing to test now.
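A rough way to read the numbers so far: the difference between 100000 single-INSERT calls (12.703s) and one call doing all 100000 INSERTs (3.439s) is almost entirely client/server round-trip overhead. A back-of-envelope estimate (mine, not a measurement from the original tests):

```shell
# Estimate per-call round-trip overhead from the two timings above.
awk 'BEGIN { printf "%.1f microseconds per client round trip\n", (12.703 - 3.439) / 100000 * 1e6 }'
# prints: 92.6 microseconds per client round trip
```

At roughly 90 microseconds per round trip, it is no surprise that batching work inside the procedure pays off.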

As Brooks has already determined, computational operations in a stored procedure really aren't fast at all. So my perf procedure above, when run with just 1 INSERT per call 100000 times, really can be simplified. Let's see what happens if I do that. I create a new, simpler procedure like this:
DROP PROCEDURE IF EXISTS perf2;
delimiter //
CREATE PROCEDURE perf2()
BEGIN
INSERT INTO foo VALUES(57, 'Some data');
END
//
And then I run it and compare it with running the first procedure with the argument 1 and see what happens:
[root@moe spperf]# time ./spperf 100000 "CALL PERF(1)"

real 0m12.744s
user 0m0.698s
sys 0m0.779s
[root@moe spperf]# time ./spperf 100000 "CALL PERF2()"

real 0m7.630s
user 0m0.491s
sys 0m0.779s

Ouch! That was some difference compared to looping just 1 time in the first procedure. Lesson: do as little computational stuff as possible in any procedure.

/Karlsson
Categories: MySQL

DRBD Management Console

Planet MySQL - 7 hours 44 min ago

Wow, check out what just came out from Linbit: The DRBD Management Console. Written in Java (so it runs anywhere), completely open source (GPLv3), and allows you to manage DRBD and Heartbeat based clusters. You can install, configure, see your systems graphically, and a lot more. I’m interested to try the beta out, as soon as I get back to my lab (sitting in the airport now). If you know how to use DRBD/Heartbeat, and use it in production for your MySQL setup, it might be a good application to test out, and improve if need be.

From the screenshots, I’m surprised this isn’t a value added extra that Linbit would like to charge for. Kudos, Linbit, for keeping it GPLv3!

Paper: Optimizing MySQL Database Application Performance with DTrace

Planet MySQL - 10 hours 16 min ago

Just came across this paper:  Optimizing MySQL Database Application Performance with Solaris Dynamic Tracing

Looks useful.

Categories: MySQL

How To Add Two-Factor Authentication To phpBB

Planet MySQL - 11 hours 26 min ago

How To Add Two-Factor Authentication To phpBB

This document describes how to add WiKID two-factor authentication to phpBB through Apache using mod_auth_xradius. Given the recent attack against phpBB and the exposure of its users' passwords, we thought two-factor authentication might be timely.

Categories: MySQL

Offline climbing the easiest of the Seven Summits

Planet MySQL - 12 hours 46 min ago

If you send me an email during the rest of February, you are highly unlikely to get an answer. However, I promise not to spam you with an “Out of office” auto reply. I dislike receiving those messages, as they seemingly have little correlation with when the person actually will reply to the email.

My excuse for not responding is that I won’t be around my laptop and that connectivity is bad in rural Tanzania anyway. I will make an attempt at conquering Kilimanjaro, one of the Seven Summits, and to twitter while doing it — to the extent there is sufficient SMS coverage, blood sugar and my fingers aren’t numb.

I also hope to take some pics, which will illustrate a blog hopefully titled “Over 14% of the Seven Summits conquered“, rather than “Taking the Milk Train from Africa“.

Huh, Milk Train? Explanation: if you’re dishonourably released from the Reserve Officer School in Finland, you’re said to take the “dairy train” home (I took course 174 in 1984 and luckily returned on a non-dairy train). And while I’m not too worried about whether I’ve trained sufficiently, I am worried about how I’ll handle the altitude (5895 m!), and how my knees will take the long hikes.

BTW, there are MySQLers who have made it up Kilimanjaro. Olivier Beutels (in Marketing in Finland) and Duleepa “Dups” Wijayawardhana (in, drumroll, the Community Team in Canada) are the ones I know of. Quoting selected passages from Dups’s Kili blog from summit day:

And then off we went into the moonlit night. It is cold, brutally cold. Colder even than Edmonton in the dead of winter. 

After passing 5000m I start having breathing problems. I am not getting enough oxygen into my system. It is now painfully slow for all of us, three steps, three breaths. 

The horizon is getting lighter and the glacier wall is to our left. I have never seen the horizon that wide, that immense. It is a scene I will never forget.

Slowly I make it to the summit with the others. This is the moment I had waited for.

If Uhuru Peak was the summit of my emotion, the path down was the deepest well of despair. My blood sugar is extremely low, my water is frozen and I am feeling the effects of AMS more severely. At one point Victor is holding me and I black out and come back. Even more disturbing I’ve developed a fever. My temper is short and I immediately pass out on my return to Barafu Huts.

If you’re interested in the schedule, take a look at my blog entry “Attempting the Kilimanjaro“.

Categories: MySQL

A Better Parser Needed?

Planet MySQL - 15 hours 38 min ago

Taking a little break from refactoring temporal data handling this evening, I decided to run some profiles against both Drizzle and MySQL 5.1.33. I profiled the two servers with callgrind (a valgrind tool/skin) while running the drizzleslap/mysqlslap test case. In both cases, I had to make a small change to the drizzled/tests/test-run.pl Perl script.

For the MySQL build, I used the BUILD/compile-amd64-debug-max build script. For Drizzle, I used my standard build process which builds Drizzle with maximum debugging symbols and hooks. It's worth noting that the debug and build process for MySQL and Drizzle are very different, and the MySQL debug build contains hooks to the DBUG library, which you'll notice appear on the MySQL call graphs and consume a lot of the overall function calls. You won't see this in the Drizzle graphs because we do not use DBUG. For all intents and purposes, just ignore the calls to anything in the DBUG library in the MySQL graphs since in a non-debug build all that stuff is NOOPed out...

FYI, the drizzleslap/mysqlslap test case is a decent one to run profiling against because it tests a range of different SQL statements in a concurrent environment, something you won't really see in the other test cases. This is the reason I like using it when profiling with valgrind/callgrind/oprofile...
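For anyone wanting to reproduce this kind of profile, the general callgrind workflow looks like the following. This is a sketch with assumed binary names and paths, not the author's exact commands:

```shell
# Run the server binary under callgrind while the slap test exercises it,
# then summarize the collected profile, sorted by inclusive cost.
valgrind --tool=callgrind --callgrind-out-file=drizzled.callgrind ./drizzled/drizzled
callgrind_annotate drizzled.callgrind | head -40
```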

OK, so on to the graphs...

Drizzle Callgrind Profile -- The Function Calls

Here is the top portion of the function call tree for the drizzled server over the time of the drizzleslap test run. I ordered the result by the percentage of total execution time that was spent in each function call. At the top, you'll see that the get_text function used by the Lex_input_stream class and the InnoDB log_group_write_buf() function are the two top execution time consumers.

For the log_group_write_buf() function (defined in /storage/innobase/log/log0log.c) , this makes sense: Drizzle's default storage engine is InnoDB (yes, even for the INFORMATION_SCHEMA) and therefore you'll notice this function, which is responsible for writing to one of InnoDB's log group files.

More interestingly, a fairly alarming 6.45% of the total execution time goes to the get_text() static function (defined in drizzled/sql_lex.cc). If we look at this routine, it's fairly clear what the function does: it simply reads an unescaped text literal, without quotes. Nothing particularly fancy, although this comment above the routine might be telling: "Return an unescaped text literal without quotes. Fix sometimes to do only one scan of the string"

Perhaps it's time to look into that single-scan thing...or perhaps not.

The next "biggie" is the my_mb_wc_utf8mb4() routine, which is called an astonishing 20,779,971 times over the course of the execution of approximately 17,000 statements. How'd I come up with 17,000 statements? I just looked at the number of times DRIZZLEparse() is called, which is close enough...

my_mb_wc_utf8mb4() is called from a number of places, most notably from within the parser and lexer, and when converting to various string classes and primitives. It's really scary that this function is called so many times! Why? Well, one reason is that Drizzle does not support any other character set than UTF8 full 4-byte. Although we support many collations, we don't support the myriad character sets that MySQL does. So, it kind of makes sense that UTF8 routines would be called quite a bit. But...20M executions for 17K statements seems like there is an obvious inefficiency here.

This leads nicely to MySQL. If Drizzle is spending so much time in UTF8 routines, and MySQL doesn't use UTF8 as its default character set (and MySQL 5.1.33 doesn't support 4-byte UTF8), it would make sense that MySQL would NOT be spending nearly as much time executing character set conversion routines, right? Well, not so much.

MySQL Callgrind Profile -- The Function Calls

Remember when looking at the MySQL call graph to disregard the DBUG library calls (look for dbug.c in the source file column) since these would be optimized away in a production environment....

What we notice in the MySQL calls is that pthread_getspecific is the number one execution time consumer, followed by a few of those DBUG library calls. I have a suspicion that the pthread_getspecific calls are actually related to the DBUG library calls, which track debugging information in the threads. I might be wrong about this, but given the stark difference between pthread_getspecific()'s top spot in MySQL's call graph and its absence from Drizzle's graph, it makes sense that this is related to the DBUG library. So, I'll ignore it for now.

So, after those, you'll notice a bunch of calls to memcpy(), and the number of calls to memcpy() very closely matches the number of calls in the Drizzle graph. This makes sense. Drizzle's mechanism for transporting data across the wire and for translating record formats between the database kernel and the storage engine has not yet changed much, and this is where most of the calls to memcpy() are coming from. (This will change with ValueObjects, BTW, but more on that later...)

After memcpy(), though, if you scan the function call list, you'll notice that MySQL, even without UTF8 as the default character set, still makes a whole lot of calls, just like Drizzle, to various character set routines: notably my_uni_utf8(), my_utf8_uni(), copy_and_convert(), my_mb_wc_latin1(), my_ismbchar_utf8(), and so on.

It turns out that if you add up all the character set conversion and comparison routine executions, those function calls are taking up more than 12% of the total execution time for both Drizzle and MySQL!

Call Trees Seem to Blame the Parser

I'm not going to go on too much further, as it's getting late and I'm tired, but I'm putting the call trees for Drizzle and MySQL for the profiling runs below. I think it's fairly clear that the parser is eating up a large chunk of execution time. Perhaps it's time to look into prototyping and benchmarking other parsers, or at the very least, looking into streamlining the existing parser to be more efficient when it comes to character set routines...

Feel free to click on the images below for the fullsize versions.

Cheers! -jay

Here is the Drizzle call tree:

And here is the MySQL one:

Categories: MySQL

Loops plugin for rails and merb released

Planet MySQL - February 16, 2009 - 11:41pm

loops is a small and lightweight framework for Ruby on Rails and Merb created to support simple background loops in your application which are usually used to do some background data processing on your servers (queue workers, batch tasks processors, etc).

Originally, the loops plugin was created to make our (Scribd.com) own loops code more organized. We used to have tens of different modules with methods that were called with script/runner and then run with nohup and other not-so-convenient backgrounding techniques. When you have such a number of loops/workers to run in the background, it becomes a nightmare to manage them on a regular basis (restarts, code upgrades, status/health checking, etc).

After a short time of writing our loops in a more organized way, we were able to generalize most of the loops code, so now our loops look like classes with a single mandatory public method called run. Everything else (spawning many workers, managing them, logging, backgrounding, pid-file management, etc) is handled by the plugin itself.

The major idea behind this small project was to create a dead simple and yet robust framework for running tasks in the background without having to think about spawning many workers, restarting them when they die, etc. So, if you need to run one or many copies of your worker, or you do not want to think about re-spawning dead workers and do not want to spend megabytes of RAM on separate copies of the Ruby interpreter (as you would when running each copy of your loop as a separate process controlled by monit/god/etc), then I’d recommend trying this framework; you’ll like it.

For more information, visit the project site and, of course, read the sources

Categories: MySQL

Replication Checksumming Through Encryption

Planet MySQL - February 16, 2009 - 6:36pm
Problem

A problem we occasionally see is relay log corruption, which is most frequently caused by network errors. At this point in time, the replication IO thread does not perform checksumming on incoming data (checksums are currently scheduled for MySQL 6.x). In the meantime, we have a relatively easy workaround: encrypt the replication connection. Because of the nature of encrypted connections, they have to checksum each packet.

Solution 1: Replication over SSH Tunnel

This is the easiest to setup. You simply need to do the following on the Slave:

shell> ssh -f user@master.server -L 4306:master.server:3306 -N

This sets up the tunnel. slave.server:4306 is now a tunnelled link to master.server:3306. So now, you just need to alter the Slave to go through the tunnel:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO master_host='localhost', master_port=4306;
mysql> START SLAVE;

Everything else stays the same. Your Slave is still connecting to the same Master, just in a different manner.

This solution does have a couple of downsides, however:

  • If the SSH tunnel goes down, it won’t automatically reconnect. This can be fixed with a small script that restarts the connection if it fails. The script can be added to your init.d setup, so it automatically opens on server startup.
  • If you use MySQL Enterprise Monitor, it won’t be able to recognize that the Master/Slave pair go together.
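As a sketch of such a restart script (the user and host names are assumptions matching the tunnel example above), something like this can be run from cron every minute:

```shell
#!/bin/sh
# Re-open the replication SSH tunnel if it is no longer running.
if ! pgrep -f "ssh.*-L 4306:master.server:3306" > /dev/null; then
    ssh -f user@master.server -L 4306:master.server:3306 -N
fi
```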
Solution 2: Replication with SSL

Replication with SSL can be trickier to setup, but it removes the two downsides of the previous solution. Luckily, the MySQL Documentation Team have done all the hard work for you.

  • Step 1: Create the certificates
  • Step 2: Setup the servers to recognize the certificates
  • Step 3: Change the Slave to use SSL
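Once the certificates are in place, the slave-side change is a standard CHANGE MASTER statement. The file paths below are the usual documentation-style examples, not anything specific to this setup:

```
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    ->   MASTER_SSL=1,
    ->   MASTER_SSL_CA='/etc/mysql/certs/ca-cert.pem',
    ->   MASTER_SSL_CERT='/etc/mysql/certs/client-cert.pem',
    ->   MASTER_SSL_KEY='/etc/mysql/certs/client-key.pem';
mysql> START SLAVE;
```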
Conclusion

If you’re seeing corruption problems in your relay log, but not in your master binary log, try Solution 1. It’s quick to set up and will determine whether encryption is the solution to your problem. If it works, set up Solution 2. It will take a little bit of fiddling around, but is certainly worth the effort.

Categories: MySQL

Free Culture vs. Fear Culture vs. Fee Culture

Planet MySQL - February 16, 2009 - 6:12pm

Last week, my good colleague Gerv gently took me to task about requiring that videos submitted to the Mozilla Net Effects video program be licensed under the Creative Commons NonCommercial-ShareAlike license (instead of an actual Free Culture license like the Creative Commons ShareAlike license or Creative Commons Attribution license). I thought about this for a while and got to wondering why I'd ever let fear of misuse overcome my experience and common sense.

Licensing and contract choices are often driven by fear and greed. We work, play, love and give in an environment where "How much profit can I make?" and "How can other people take the risks associated with my profit?" are the questions most often asked when working up legal agreements.

Further compounding this problem is the fact that lawyers profit off of the fear and entropy generated by complex and hostile legal environments - at least, they will until the system becomes unsustainable and collapses under a Himalayas-worth of contracts, precedents, book law, treaties and paranoia.

Free Culture, Free Software and Open Source licensing provide some antidote to this toxic environment by providing terms that fairly balance opportunity and risk across the parties involved. Under these licensing schemes, each party has nearly the same set of rights and gets to behave much as if they own the work being licensed: under any of these licenses, I am free to make, modify and share or sell copies of the original work. The benefits of this are enormous and, at times, subtle. For example, when a work is licensed under a Free Culture or Free Software license, it becomes much harder and less economically viable to take the rights associated with the work away from the original author at some point in the future.

Even though I deal with Free Culture, Free Software and Open Source licensing on a daily basis and my career was built in the PHP and MySQL communities, I still breathe in the general atmosphere of paranoia.

This is probably why I chose to require that videos submitted to the Mozilla Net Effects video program be licensed under the Creative Commons NonCommercial-ShareAlike license: I let my fear of someone misusing the videos outweigh the benefits of allowing commercial use. People with bad intentions will do bad things with the videos, often regardless of the license on the work. Preventing people who do create good in society from using the work in a broad way just provides extra advantage to those who break the rules.

So, as soon as I go and edit the relevant wiki pages, I'll be asking people to choose Free Culture licensing for their videos.

Categories: MySQL

Open Source in a Wider World

Planet MySQL - February 16, 2009 - 5:46pm
Last week, I wrote about how the European Commission's Tool East project leveraged opentaps Open Source ERP + CRM to create an open source ERP system for the Eastern European tool and die making industry. I thought this was a very interesting example of how open source software could be used to advance social and economic development.

I was pleasantly surprised to hear encouraging feedback from many people about this project. In a time where our environment and our economies are facing unprecedented challenges, it's gratifying to know that our work could help our societies meet those challenges. I hope that the open source communities could come together and solve greater social problems in the future.
Categories: MySQL

Life goes on and making the internet more secure with Web of Trust (WOT)

Planet MySQL - February 16, 2009 - 11:52am
It's now more than a week since I left Sun and I have been very busy with old commitments; I had one talk at the Tampere University about "Open source licensing and how this affects quality" and a keynote about "Open Source Licensing" for the "2nd Symposium of the HyperTransport Center of Excellence" in Mannheim, Germany.

My web site, askmonty.org, is coming around nicely but it will take a couple of more weeks to add some missing information to it. After that I will start working on the Maria, MariaDB and MySQL code bases for real.

The most exciting thing that has happened so far is that my investment company, Open Ocean, has just closed a funding round with Web of Trust, i.e. WOT. I will take a seat on the company's Board of Directors.

What I like about WOT is that it solves a practical problem I have often experienced myself when I browse and want to buy things from web sites: "Can I really trust this web site with my personal information, like my credit card?" In addition, WOT solves the problem in an elegant and user-friendly way, easy enough that anyone can use it.

So what, then, is WOT? WOT is a popular and free browser add-on that works with Firefox and Internet Explorer. It tells you whether a web site is known to be involved in Internet scams, identity theft, spyware or spam, or if it's just an unreliable online shop.

WOT provides safety ratings on search results when using Google, Yahoo!, Wikipedia, Digg and other popular sites. The website rating information is continuously updated by the WOT user community and numerous trusted sources, such as listings of known malware and phishing sites.

I encourage everyone to try out WOT to get a better Internet experience. You should consider registering as a WOT user to be able to rate web sites.

In addition, if you encounter a web site that is untrustworthy, please rate it through WOT to tell other Internet users about your experience and save them from the trouble you possibly suffered. Also, if you really like a web site or get excellent service from it, please use WOT to tell others about that too!

I hope we can all work together and make the Internet a place where you can easily know where it's safe to browse and shop!

You can find a lot more information about WOT, including downloads, at http://www.mywot.com. If you are using Firefox, you can install it via the "Tools/Add-ons" menu: click on "Get Add-ons" and search for "WOT".
Categories: MySQL

MySQL University: Developing MySQL on Solaris

Planet MySQL - February 16, 2009 - 10:49am

This Thursday (February 19th, 14:00 UTC), MC Brown & Trond Norbye will give a MySQL University session on Developing MySQL on Solaris. MC works on the MySQL Documentation Team and has been involved with quite a few Solaris things, for example porting MySQL to openSolaris. Trond has been involved with many things, including openSolaris, as you can see from his blog.

For MySQL University sessions, point your browser to this page. You need a browser with a working Flash plugin. You may register for a Dimdim account, but you don't have to. (Dimdim is the conferencing system we're using for MySQL University sessions. It provides integrated voice streaming, chat, whiteboard, session recording, and more.) All MySQL University sessions are recorded, that is, slides and voice can be viewed as a Flash movie (.flv). You can find those recordings on the respective MySQL University session pages which are listed on the MySQL University home page.

MySQL University is a free educational online program for engineers/developers. MySQL University sessions are open to anyone, not just Sun employees. Sessions are recorded (slides and audio), so if you can't attend the live session you can look at the recording anytime after the session.

Here's the schedule for the upcoming weeks:

All sessions start at 14:00 UTC / 8am CST (Central) / 9am EST (Eastern) / 14:00 GMT / 15:00 CET / 17:00 MDT (Moscow):

  • February 19, 2009: Developing MySQL on Solaris (MC Brown & Trond Norbye)
  • February 26, 2009: Backing up MySQL using file system snapshots (Lenz Grimmer)
  • March 5, 2009: Good Coding Style (Konstantin Osipov)
  • March 12, 2009: MySQL and ZFS (MC Brown)
  • March 19, 2009: How to Use Charsets and Collations Properly (Susanne Ebrecht)


Selective restoring using ndb_restore

Planet MySQL - February 16, 2009 - 10:29am
We've added some new options in MySQL Cluster 6.3.22 that make it possible to selectively restore tables. The new options for ndb_restore are:

--include-databases=name
Comma separated list of databases to restore.
Example: db1,db3
--exclude-databases=name
Comma separated list of databases to not restore.
Example: db1,db3
--include-tables=name
Comma separated list of tables to restore. Table name
should include database name. Example: db1.t1,db3.t1
--exclude-tables=name
Comma separated list of tables to not restore. Table name
should include database name. Example: db1.t1,db3.t1

To demonstrate with a few examples, let's assume you have the following tables:

mysql> SELECT TABLE_SCHEMA AS `Schema`,TABLE_NAME AS `Table`
FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA LIKE 'db_';
+--------+-------+
| Schema | Table |
+--------+-------+
| db1    | t1    |
| db1    | t2    |
| db1    | t3    |
| db2    | t2    |
| db2    | t3    |
| db3    | t1    |
| db3    | t4    |
+--------+-------+

If you need to restore table db3.t4 and the complete database db2, you should run the following on all data nodes (some important options are omitted!):

shell> ndb_restore [...] -r --include-tables=db3.t4
shell> ndb_restore [...] -r --include-databases=db2

In a similar way, you can exclude. For example, to restore all databases except db1:

shell> ndb_restore [...] -r --exclude-databases=db1

To exclude the table db3.t1 and restore everything else:

shell> ndb_restore [...] -r --exclude-tables=db3.t1

I helped create the initial patches for this feature and hope there are not too many bugs! Perhaps you have suggestions on how to improve it?
Of course, all this is documented in the MySQL manual.

The importance of network latency in application performance – part 2

Planet MySQL - February 16, 2009 - 10:00am

I harped on this earlier this month. The network is an often overlooked but vital component of every application. I have been in many shops content with running 100Mb/s between the application and database simply because they are nowhere near maxing out the available network bandwidth between the two servers. What they are forgetting is that there is a big latency difference between 10Mb/s, 100Mb/s, and 1000Mb/s. Speaking from my testing on Waffle Grid, we see that under load a 100Mb/s connection routinely has network latency in the 3000-4000 microsecond range, while the same tests under load over 1GbE routinely run at around 1100 microseconds. By the way, the same test using a Dolphin interconnect card finishes with an average latency of less than 300 microseconds. These tests average less than 5Mb/s being pushed over the network, which would not even hit half the available bandwidth of a 100Mb/s link. What this means is that you may think a 100Mb/s connection is good enough, but it could be adding 3x more latency to each network operation. 3x more over how many packets per day? Over how many operations each day? Ouch, that's really going to add up.
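If you want to see what round-trip latency looks like on your own links, a measurement in the same spirit is easy to sketch. The following is a minimal illustration (not from the original post, and not the Waffle Grid benchmark itself): it starts a tiny TCP echo server and times many small request/response exchanges, which is roughly the traffic pattern of a remote cache or database round trip. Host, port, and trial count are arbitrary; point the client at a remote echo service to compare 100Mb/s vs. 1GbE for yourself.

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept one connection and echo everything back until it closes."""
    conn, _ = listener.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

def measure_rtt_us(host, port, trials=200):
    """Average round-trip time, in microseconds, for 1-byte payloads."""
    with socket.create_connection((host, port)) as s:
        # Disable Nagle so each tiny write goes out immediately,
        # as a latency-sensitive client would.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        start = time.perf_counter()
        for _ in range(trials):
            s.sendall(b"x")
            s.recv(64)
        elapsed = time.perf_counter() - start
    return elapsed / trials * 1_000_000

if __name__ == "__main__":
    listener = socket.socket()
    listener.bind(("127.0.0.1", 0))  # loopback, ephemeral port
    listener.listen(1)
    port = listener.getsockname()[1]
    threading.Thread(target=echo_server, args=(listener,), daemon=True).start()
    print(f"avg loopback RTT: {measure_rtt_us('127.0.0.1', port):.1f} us")
```

Loopback numbers will be far below anything on real hardware, but run the client half against an echo server on the other end of each network and the per-hop latency gap described above becomes directly visible.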


MySQL Partitions at PHPCon Italia

Planet MySQL - February 16, 2009 - 6:05am

I will speak at PHPCon Italia 2009, in Rome, on March 19th.

The subject is a trendy one. I will cover efficiency with partitions, a topic that every DBA and MySQL developer should enjoy.


About Me

Jay Janssen
Yahoo!, Inc.
jayj at yahoo dash inc dot com

MySQL
High Availability
Global Load Balancing
Failover
