mysqlguy.net

Feed aggregator

Sun & MySQL at Linuxtag 2008 Berlin (2008-05-28/2008-05-31)

Planet MySQL - May 26, 2008 - 3:00am

From May 28th-31st, the annual LinuxTag will take place in Berlin, Germany. I followed the growth and evolution of LinuxTag from the very early days and I have fond memories of the event back when it still took place at the University of Kaiserslautern and our SuSE "booth" was just a regular table taken from the lecture rooms...

Things have evolved a lot since then. Today, LinuxTag is one of the largest Linux/Open Source Events in Europe and my new employer Sun is a major sponsor this year. In addition to several talks and keynotes, there will be a large Sun booth in the exhibition area (Booth #205) and we will have a dedicated MySQL demo pod! Some of the things we plan to demo there are the upcoming MySQL Server releases (5.1, 6.0 with Falcon and Online Backup), MySQL Workbench, MySQL Enterprise Monitor as well as how to combine these with other Sun products like Glassfish, NetBeans, OpenSolaris or OpenOffice.

Some other stuff that we will be showcasing on the Sun booth:

  • Be Brilliant Faster with OpenSolaris: Develop, Debug, Deploy Apps Faster with ZFS and Dtrace, OpenSolaris Live CD – Fast, Free, and Easy to Install
  • Virtualize Your Business with xVM and VirtualBox: OpenSolaris, Windows, Linux & Mac OS X Virtualized, Develop on VirtualBox, Deploy on xVM, Free & Open
  • Sun Studio Software for OpenSolaris and Linux: C/C++/Fortran Compilers and Tools, x86 and SPARC
  • Cool New Features in OpenOffice.org 3.0: Importing PDFs and Managing Appointments, now with full support for Mac OS X (Aqua)
  • Discovering Open High Availability Cluster: Overview about HA Clusters, Community Group Projects, Single Node Cluster – Service Failover between Zones
  • GlassFish - the Open Source Java EE 5 Application Server: JRuby/ Rails, Ajax & Comet

I look forward to being there! Please contact me if you are interested in visiting LinuxTag and would like to receive a free pass!

 

 

Categories: MySQL

Corruption using MySQL AES_[EN|DE]CRYPT functions

Planet MySQL - May 25, 2008 - 3:52pm

I was contacted this week by a previous client regarding a failure in data processing. This was a Contact, Financial and Media Management system I developed for a not-for-profit organization on a LAMJ stack, and I have had to do nothing to it since deployment 3 years ago: no bug fixes, no feature enhancements. The only thing lacking is additional management reporting, and data is currently extracted for that purpose.

It runs on commodity hardware with Linux and MySQL, and its only unscheduled downtime was a power failure before UPS power was installed. However, this all changed last week: processing of regularly scheduled encrypted data simply failed unexpectedly.

A summary of the environment:

  • Data is inserted with AES_ENCRYPT(data,key);
  • Data is retrieved with AES_DECRYPT(data,key);
  • Data is never updated.
  • New data is always added, and historical data always remains.
  • The application has no end user access to modify data.
  • The application has no function anywhere to modify the data, so no rogue process could have altered it.
  • An AUTO_INCREMENT column and TIMESTAMP columns provide a level of auditing for data forensics.
  • Backup copies of data exist up to 3 years for reference.
  • The seed key has not changed.
The problem

Selecting the first 10 rows saved in the table (by AUTO_INCREMENT key, and confirmed by dates), 8 of 10 are now corrupt. Selecting the last 10 rows inserted, zero are corrupt. Across 20,000 records, 75% are now corrupt.

A lot of analysis has been performed to identify and track the data that was recorded, a certain amount of data forensics, and it was confirmed information was successfully processed last month for example. As this performs financial transactions, there is a lot more auditing available and being reviewed however it is simply a mystery that I can’t solve.

  • What options remain? Is this a hardware problem: disk, or even memory?
  • What other data may be corrupt?
  • How can more investigation be done to track down the cause of the problem?
mysql> select version();
+------------------+
| version()        |
+------------------+
| 4.1.10a-standard |
+------------------+
1 row in set (0.00 sec)
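Since AES_DECRYPT() returns NULL when it is given data it cannot decrypt (invalid data or incorrect padding), one way to start the forensics is to map where the corrupt rows sit along the AUTO_INCREMENT sequence. This is a sketch only; the table, column, and key names are hypothetical, as the post does not show the schema:

```sql
-- Count corrupt vs. readable rows per block of 1000 AUTO_INCREMENT ids,
-- to see whether corruption correlates with the age of the rows.
SELECT FLOOR(id / 1000)                               AS id_block,
       SUM(AES_DECRYPT(data, 'seed_key') IS NULL)     AS corrupt_rows,
       SUM(AES_DECRYPT(data, 'seed_key') IS NOT NULL) AS readable_rows
FROM   encrypted_log
GROUP  BY id_block
ORDER  BY id_block;
```

Note this is only a lower bound: a corrupted ciphertext that happens to still decrypt with valid padding would escape the NULL check.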
Categories: MySQL

Let the coding begin! The Google Summer

Planet MySQL - May 25, 2008 - 2:44pm
Let the coding begin! The Google Summer of Code 2008 coding session has begun.

I am maintaining this blog as a way of recording my progress for others to follow (and to prevent me from slacking off).

I am working on integrating Memcached with the MySQL Query Cache (mentored by Brian Aker from MySQL).

I've done some community bonding through the mailing list, blogs, and IRC, but I feel I should have done a bit more. I just moved back from college to my parents' house and started working full time in Boston (wow, what a commute). I've read through the code a few times and feel I have an understanding of how it works. It compiles, tests, and runs on my dev box (Gentoo, 2.6 kernel). I've started reading the book Google sent me (SPOILER) and it is wonderful.


In reading through the MySQL codebase I've come up with a list of design decisions I will need to make in the first week of my project.

1)
Implement this as a different query cache storage engine of sorts by building an interface. One version uses the classic single system memory, the other uses Memcached and is controlled by a flag somewhere.
OR
Code this as a straight patch.

2)
How do I optimize the functionality of the cache? Tables that update often are actually slower when cached, due to the way caches are currently pruned. Is there a cleaner way to do this? Do I pursue this concurrently, or wait until I've finished the initial goal of Memcached support and then try to rework it?

3)
Using one Memcached cluster for several databases would be a good way to save on memory, but there are several things to consider in order to retain 'ACID' principles. Again, do I try to work on this concurrently, or be more cautious so as not to bite off more than I can chew (which I have a tendency to do...)?

4)
How much time should I leave at the end for perf testing, regression testing, etc...
Making this work is wonderful, but if it slows down the cache further is it really worthwhile?
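For question 2, the existing cache's pruning and invalidation behaviour can at least be observed on a stock server before reworking it. These variables and counters are standard in MySQL 5.x:

```sql
-- Query cache configuration (size, type, limits)
SHOW VARIABLES LIKE 'query_cache%';
-- Activity counters: Qcache_hits, Qcache_inserts, Qcache_lowmem_prunes, ...
SHOW STATUS LIKE 'Qcache%';
```

A cache where Qcache_inserts runs far ahead of Qcache_hits is being invalidated by writes, or pruned, faster than entries can be reused, which is the slowdown described above.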


Goals for Monday June 2nd:
-Communicate more.
-Have a solid timeline complete with estimated goals that I can rework at each iteration.
-Have answers to these design decisions that I can begin actively working on.
-Have real code, that compiles, and maybe does something (stubs and such).
Categories: MySQL

Screenshots of improved MySQL Cacti templates

Planet MySQL - May 25, 2008 - 1:10pm

I finally have some images to show you what my improved Cacti templates look like.

These aren’t a perfect demo, since for example this server doesn’t have the query cache enabled, but they should show you what I’ve done. Note, for example, that each graph is labeled with the actual values of the data drawn on it. You don’t have to guess at the values by squinting at the graphs.

You can click on any image to go to a larger version. Enjoy:

Cacti, Cacti templates, graphing, monitoring, mysql
Categories: MySQL

MySQL Stored Procedures book - Corrected

Planet MySQL - May 25, 2008 - 11:59am

Long ago I wrote a 100-page book on MySQL 5.0 stored procedures. It’s on our dev-zone pages. But it’s out of date. In 2006 I made some corrections for MySQL 5.1. It took me a long time to publish the revised book, but here it is: http://blogs.mysql.com/peterg/files/2008/05/book01.pdf.

Categories: MySQL

Q4M and the need for XA

Planet MySQL - May 25, 2008 - 11:16am
Q4M is a new storage engine that supports persistent queues. For a project that claims to have been started in December 2007, the code and features are impressive. But one thing is missing. I want to perform a dequeue and update InnoDB in one transaction. MySQL could use XA internally to support this, as XA is already used on the master between the binlog and storage engines. However, XA is not supported to group changes between multiple storage engines into one transaction. Until now, there has not been a demand for that feature. Q4M may create that demand.
Categories: MySQL

Improved release model for MySQL Cluster

Planet MySQL - May 25, 2008 - 3:00am

Contrary to mounting rumors, MySQL is not shutting down or removing MySQL Cluster. The reason the cluster applications are missing from the latest 5.1 binaries is a split release model, as Kaj Arnö explains. The almost simultaneous announcement of the Cluster 6.2 release is confirmation that Cluster is alive and kicking.

To reinforce the Cluster buzz, another blogging entity has surfaced. The MySQL Telecom Team is mostly involved in Cluster technologies, although not exclusively.

Categories: MySQL

INFORMATION_SCHEMA tables in the InnoDB pluggable storage engine

Planet MySQL - May 24, 2008 - 8:14am

Much has been written about the new InnoDB pluggable storage engine, which Innobase released at the MySQL conference last month. We've written posts ourselves about its fast index creation capabilities and the compressed row format, and how that affects performance. One of the nice things they added in this InnoDB release is INFORMATION_SCHEMA tables that show some status information about InnoDB. Here are the tables:

mysql> SHOW TABLES FROM INFORMATION_SCHEMA LIKE 'INNODB%';
+----------------------------------------+
| Tables_in_INFORMATION_SCHEMA (INNODB%) |
+----------------------------------------+
| INNODB_CMP                             |
| INNODB_CMP_RESET                       |
| INNODB_CMPMEM                          |
| INNODB_CMPMEM_RESET                    |
| INNODB_LOCK_WAITS                      |
| INNODB_LOCKS                           |
| INNODB_TRX                             |
+----------------------------------------+

The _CMP tables show statistics about compression: compression and decompression operations, memory management, fragmentation, etc. Beware that selecting from the tables whose names contain RESET has a side effect: it resets the statistics back to 0.
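For example, the per-page-size compression counters can be read like this (column names as documented in the InnoDB plugin manual):

```sql
-- Compression attempts vs. successes for each compressed page size
SELECT page_size, compress_ops, compress_ops_ok, uncompress_ops
FROM INFORMATION_SCHEMA.INNODB_CMP;

-- INNODB_CMP_RESET exposes the same columns, but selecting from it
-- resets the counters to zero as a side effect.
```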

There are also locks and transactions tables. A while ago, the InnoDB developers contacted me to ask my opinion about what would be useful to put in the INFORMATION_SCHEMA. I told them the single biggest thing I could not get from InnoDB at the time was visibility into which transactions are blocking others when there are lock waits. It appears that they agreed this was important to add. (I subsequently discovered that it is possible to find out more information on InnoDB locks even in the older versions of InnoDB, but it's not really easy.)
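As a sketch of the kind of visibility this gives (column names from the plugin's INNODB_LOCK_WAITS and INNODB_TRX tables):

```sql
-- For each lock wait, show the waiting and the blocking client threads
SELECT r.trx_mysql_thread_id AS waiting_thread,
       b.trx_mysql_thread_id AS blocking_thread,
       b.trx_query           AS blocking_query
FROM   INFORMATION_SCHEMA.INNODB_LOCK_WAITS w
JOIN   INFORMATION_SCHEMA.INNODB_TRX r ON r.trx_id = w.requesting_trx_id
JOIN   INFORMATION_SCHEMA.INNODB_TRX b ON b.trx_id = w.blocking_trx_id;
```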

These tables are fully documented in the InnoDB plugin manual, along with extensive examples of how to use them to find out what is blocking what and so on. Note that the InnoDB plugin manual is being maintained on www.innodb.com, not as part of the regular MySQL manual.

Entry posted by Baron Schwartz


Categories: MySQL

Speaking at Sun Extended Horizons Summit 2008

Planet MySQL - May 24, 2008 - 6:07am

Sun Microsystems - Australia - APAC Extended Horizons Summit 2008

I’m giving a Cluster session tomorrow (Sunday) and talking about Scaling MySQL on Monday. Hope to see you there!

Categories: MySQL

Using flow control functions for performance monitoring queries

Planet MySQL - May 24, 2008 - 3:43am

I'm not a big fan of flow control functions like IF or CASE in MySQL queries, as they are often abused to create queries that are poorly readable and that the MySQL optimizer can hardly optimize well.

One case where I find the IF function very useful is computing multiple aggregates over different sets of rows in a single query sweep.

Here is how I like to use it for web site performance analysis. As you can see in this table, we record "wtime", the wall-clock time it took to generate the page. We also track the type of each page, because different page types often have different performance profiles.

mysql> SELECT count(*) cnt, avg(wtime) avw, sum(IF(wtime>0.3,1,0))/count(*) soso, sum(IF(wtime>1,1,0))/count(*) poor, sum(IF(wtime>5,1,0))/count(*) fatal FROM performance_log_080523;
+---------+------------------+--------+--------+--------+
| cnt     | avw              | soso   | poor   | fatal  |
+---------+------------------+--------+--------+--------+
| 4412134 | 0.18669403011609 | 0.1280 | 0.0396 | 0.0017 |
+---------+------------------+--------+--------+--------+
1 row in set (7.51 sec)

This query scans through the page generation log for a site and reports the number of requests and the average request time, and also classifies the requests into several classes. From the same query we can see the portion of requests that took over 300ms (12.8%), which we call "so so", since we set 300ms or less as the performance goal for this web site. About 4% of requests are ranked "poor", being over 1 second, and 0.17% are classified as "fatal", because we assume a user will not wait over 5 seconds and will already have gone away.

As a side note, the average time is the least useful number here, because an average does not tell you a lot. It is much better to set performance goals at a high percentile, such as 95% or 99%, and see what fraction of requests meets the goal.

In this example, if our 95th-percentile goal were 1 second we would pass, but with a 95th-percentile goal of 0.3 seconds we would fail, and likewise for a goal of 99% of requests served within 1 second.
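The same IF() trick checks a percentile goal directly, as the fraction of requests that meet it (same table as above):

```sql
-- Fraction of requests within the 0.3s and 1s goals; a 95% goal
-- passes when the corresponding fraction is at least 0.95.
SELECT SUM(IF(wtime <= 0.3, 1, 0)) / COUNT(*) AS within_03s,
       SUM(IF(wtime <= 1.0, 1, 0)) / COUNT(*) AS within_1s
FROM performance_log_080523;
```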

Here is another example:

mysql> SELECT count(*) cnt, avg(wtime) avw, sum(IF(wtime>0.3,1,0))/count(*) soso, sum(IF(wtime>1,1,0))/count(*) poor, sum(IF(wtime>5,1,0))/count(*) fatal, page_type FROM performance_log_080523 WHERE bot='google' GROUP BY page_type;
+-------+-------------------+--------+--------+--------+---------------+
| cnt   | avw               | soso   | poor   | fatal  | page_type     |
+-------+-------------------+--------+--------+--------+---------------+
| 21954 |  0.36869757085721 | 0.4373 | 0.0545 | 0.0008 | search        |
| 25843 |   1.1290003426468 | 0.9114 | 0.4267 | 0.0081 | profile       |
...
|  4393 |  0.63011296296095 | 0.5852 | 0.1689 | 0.0052 | rss           |
+-------+-------------------+--------+--------+--------+---------------+
15 rows in set (5.87 sec)

In this example I hid some rows to obfuscate some data.

In this query we look at the response time for different pages: the "search" page responds within 1 second about 95% of the time, while the profile page does so less than 60% of the time. It is very important to do this kind of grouping by user-facing function, because otherwise you will not catch performance problems in important but perhaps less used functions.

Another thing you may notice: I look at performance stats not for all pages but just for pages retrieved by the Google crawler. In many cases this is worth looking at (maybe for all search bots rather than just Google) because bots have a very different site access pattern. In many cases your human visitors will visit relatively few hot pages, which use content from the cache (or are served from the cache altogether). Bots, however, tend to visit distinct pages, which tend to have a significantly lower cache hit rate.

Entry posted by peter


Categories: MySQL

Italian Event: Marten Mickos in Rome, Friday May 30th

Planet MySQL - May 24, 2008 - 3:38am
Marten Mickos will be at an event at La Sapienza University in Rome on Friday, May 30th.
Marten will talk about the sustainability of the Open Source model and about the present and future of MySQL. The agenda and the registration form, for anyone interested, are available here.
Following Marten's talk, we will discuss scalability and compare the approach taken with MySQL-based architectures to the one used with other relational databases.
Giuseppe Maxia will illustrate the role of the Community in the MySQL ecosystem and explain how you can contribute to improving its products and solutions.
There will also be room for an extensive question-and-answer session... all the more reason to come!
I hope to see many of you in Rome!
-ivan
Categories: MySQL

New webinar on MySQL and memcached

Planet MySQL - May 24, 2008 - 3:34am
Are you ready for the next webinar? This time we will talk about the use of MySQL with memcached. Most of you may already know memcached and how to use it in a typical web environment with high traffic and requirements for high scalability.

During the webinar we will cover different scenarios and best practices regarding the use of MySQL and memcached together.

The webinar will be live next week, Wednesday 28th of May @ 9am GMTD (London, Dublin, Lisbon) / 10am CETD (Paris, Madrid, Munich, Rome etc.)

The registration is available here.
I hope to see you all online next week!
-ivan
Categories: MySQL

Italian Webinar - Materials, Questions and Answers for the "MySQL Solutions for High Availability" Webinar

Planet MySQL - May 24, 2008 - 3:25am
The slides and the recording of the webinar held on May 7th are available here.
Below is the question-and-answer session.

Q from Andrea: Is it possible to create a multi-master system by assigning different auto_increment_offset settings?
A: Yes, this is the most typical mechanism. Each master server uses a different offset to generate primary keys when inserting new rows into the same table.

Q from Massimo: What happens if a read on a slave hits records that have been modified on the master but not yet replicated? Can this situation be prevented?
A: The situation can be prevented, but the logic required is fairly complex. That said, if the requirements call for this kind of real-time, up-to-date read, using replication is not advisable. I can suggest two solutions:
1. Use two connections: one for reads against the replicated servers, the other for writes and for reads of data that must be up to date in real time. The latter connection is directed at the master.
2. Use a shared-storage, active-active configuration, provided the use of MyISAM makes sense (active-active is only possible with MyISAM).

Q from Angelo: Who decides where to route queries destined for the slave servers?
A: The choice must be made at the application level, using logic that is an integral part of the application, a connection-management layer, or in some cases a middle layer such as MySQL Proxy.

Q from Andrea: Does automatic failback only work with two servers? Or, given the circular topology, can the servers that make up the ring be defined dynamically?
A: The ring can be changed, but the operation is always manual (there are no ready-made automatic mechanisms).

Q from Angelo: If I have a database that takes up 8GB, do I need 8GB of RAM on each of the N nodes that make up the MySQL Cluster?
A: No. The formula is: the sum of the memory available on all nodes, divided by the number of replicas (normally 2).
For example: 4 nodes with 8GB available on each node = 32GB, divided by 2 (2 copies of the data) = 16GB of memory available for data and indexes.

Q from Massimo: How can connections to the various SQL nodes be managed transparently to the user? Do I have to go through MySQL Proxy, or are there other solutions that do not depend on the application?
A: There are no automatic mechanisms, since any logic handled at the connection level would need to know the type of transaction being executed. Indeed, that would have to happen through a proxy, which is why the safest solution by far is to handle this routing at the application level.

Q from Eleonora: With DRBD, is there a possibility of data loss if the master fails before replicating the data to the slave node?
A: No, this cannot happen, because the replication is synchronous. Only by changing the configuration parameters and lowering the safety level of the replication is there a risk of data loss, which is obviously not recommended in an HA configuration.

Q from Angelo: During failover, are the connection, and therefore any in-flight query, lost?
A: Yes, open connections and their open transactions are interrupted. Open transactions are not written to disk, which keeps the state of the DB consistent.

Q from Massimo: Can DRBD be used on virtual machines?
A: It is possible, but you need to be sure there are no Single Points Of Failure at the hardware level.

Q from Stefano: In master-master replication, if the active master fails and is later reactivated, is resynchronization automatic or does it require manual steps?
A: Synchronization is automatic. The only precaution is to verify, before activating the resynchronization mechanism, that all operations had been transferred from the "active master" machine to the "passive master" machine before the fault.

Q from Alessandro: Can MySQL Replication and MySQL Cluster also be deployed on Windows systems?
A: Replication is also available on Windows; MySQL Cluster is only available on Solaris, Linux, OS X, FreeBSD and Windriver systems.

Q from Alberto: Isn't MySQL Proxy considered an HA service?
A: Not in its basic functionality, and not yet, as it is still in an alpha phase. Through the implementation of appropriate scripts, it will certainly also be used for automatic failover operations.

Q from Marco: Can I replicate between two instances running on servers with different operating systems, for example Windows and Linux?
A: Yes, this is possible and supported.

Q from Mauro: I would like some more information about managing load balancing in a Cluster environment.
A: My answer refers to the use of MySQL Cluster, not clustering at the operating-system level. Load balancing is handled automatically and transparently by the MySQL server and the data nodes that are part of the Cluster. Balancing between the client system and the MySQL server must be handled with the usual server-balancing techniques: software balancers, hardware balancers, or the capabilities of some connectors, such as the JDBC connector.

Q from Eleonora: Is there a minimum number of data nodes and server nodes in a cluster configuration?
A: Yes, the minimum is two mysql nodes, two data nodes and one management node, spread across a minimum of three servers. This configuration avoids any Single Point Of Failure. It is possible, and advisable, to duplicate the management node as well, although this component is only used during startup or when another node fails, which is why it is not considered part of the "minimum configuration".

Q from Stefano: master-master: do automatic failback and multi-master auto-increment also work under Windows?
A: Replication works the same way on Windows as on Linux, Solaris or other systems. Remember, though, that if you use replication for high availability you cannot also use it to split writes with the auto-increment mechanism.

Q from Gabriele: When is it better to use replication managed by MySQL, and when replication managed by storage-replication technologies?
A: MySQL replication is simpler to implement and is cross-platform, but there is always a possibility, however remote, of data loss. This is due to the asynchronous nature of the replication. Replication with DRBD, on the other hand, is synchronous, and therefore guarantees that the data on both replicas are consistent.

Updated Website and Professional Blog

Planet MySQL - May 24, 2008 - 1:10am

For those that have my existing blog bookmarked, or use any RSS/Atom feeds please update your information now.

My new blog can be found at http://ronaldbradford.com/blog/

RSS 2 is http://ronaldbradford.com/blog/feed/rss2, Atom is http://ronaldbradford.com/blog/feed/atom

OK, so before you read on, please change it now. While all old links will redirect, I encourage you to take a few moments to update your browser bookmarks and RSS reader.

So what’s new? Well, as Baron stated in “I moved this blog to pairLite with zero downtime, and it was easy”, I moved my old blog site blog.arabx.com.au, my temporary staging site ronaldbradford.com, and my passworded beta site, all with no downtime. Even with all my testing over several weeks, and several test users, my production migration had one hitch: some internal URL redirects did not work correctly, but the site’s main links were not affected. You would have had to be quick at 1:30am on Saturday to catch it.

Other changes include a consolidation of information from various sites, now all under my brand ronaldbradford.com, which allows me to expand on that. I’ve also split my professional blog from my personal views, photos and comments, and added new SEO/SEM optimizations. More on these experiences later. My theme was actually inspired by some business cards I had made in Australia while home last week: quality 300GSM color cards that were designed and printed in just a few hours and are very good quality (which was very impressive).

I’m now free to work on so many other projects, including many MySQL draft posts, now that Version 1.0 of my new site is out. As with all release-early, release-often sites, expect 1.1 soon!

Categories: MySQL

MySQL Cluster splits from MySQL — not pluggable?

Planet MySQL - May 24, 2008 - 12:13am

Kaj just announced that as of 5.1.25, MySQL Cluster will no longer be included in the normal MySQL packages. Instead, a new branch is being created for MySQL Cluster releases, where the first release is informally called “6.2.15” but the releases are really named “mysql-5.1.23-ndb-6.2.15”. This new branch is based on the MySQL Cluster Carrier Grade Edition of MySQL.

Overall, this seems like a good idea — the needs for releases, schedules, pressures, etc., are at odds between MySQL Cluster and MySQL’s core. I am, however, baffled by the decision of how to release the new product: coupled with MySQL as a single monolithic package with compiled-in storage engine. After all, 5.1 has long been touted to have this amazing new pluggable storage engine architecture. Why not use it?

With Oracle/InnoDB recently announcing that they are decoupling from MySQL and releasing their storage engine as a plugin, this would make so much sense. The only thing I can think of is that MySQL and/or the MySQL Cluster team must think that the pluggable storage engine concept is not workable in its current state or easy enough to use… which I would agree with absolutely [1]. However, having a team within MySQL really pushing a product and using the pluggable interface to make their releases would help dramatically to make the interface really usable for the rest of the world.

Why not do it? Let’s hear it…

[1] My short list of gripes, by no means complete: the error message interface sucks, there is no way to compile a plugin outside of the MySQL source (or even symlink it into the source tree), plugins are tied to an exact version of the MySQL server, and there is no reasonable way to manage plugins in an OS context (RPMs, .debs, etc.).

Categories: MySQL

MySQL 6.2 is GA, but 5.1 is RC and 6.0 is alpha

Planet MySQL - May 23, 2008 - 9:26pm

MySQL’s version numbering is getting harder and harder to understand. In fact, it’s getting surreal.

Let me state up front that there’s probably a lot I don’t know here. But if I don’t know, how on earth can the general public figure it out?

Before we begin, let’s define terms: GA is completely done, ready for use. RC is a release candidate: don’t change anything, just fix bugs because we’re charging towards a release here. Beta is possibly unsafe code, use at your own risk. Alpha is known to have significant bugs, but if you’re curious please play with it.

Now for the releases/versions game. Let’s recap:

What is going on here?

How is this an improved release model? What is improved about this?

How in the world can anyone figure out what versions of the software have what features? Who can make an educated decision about what product to use in this situation? Are people supposed to just rely on the sales people to help them figure out what to use? Boy, is that trusting the fox to guard the henhouse.

Why didn’t they just release 5.1 Cluster as GA separately, if that reflected the reality in the code? They certainly missed an opportunity to show some progress on 5.1. As it is, 5.1 got robbed of its chance to have at least some of its code go GA after more than 2.5 years in development. Now 5.1 looks like even more of an embarrassment — hey 5.1 team, how come you can’t get anything out the door when these 6.2 people are releasing GA products? Not to mention 6.0 — you guys look bad now too! (Just kidding.)

I tried to draw a timeline of MySQL’s release history, in some detail in the 5.0 history and in very basic detail in the 5.1 and 6.0 and 6.2 trees. You can take a look at that. It’s worth studying for 5 minutes or so, even though it’s kind of ugly. There are lots of oddities to notice about it. Enjoy:

The inmates are running the asylum. This gets more and more amusing as time goes on.

humor, Kaj Arno, mysql, MySQL Cluster
Categories: MySQL

Product Pulse — May 23, 2008

Yodel Anecdotal - May 23, 2008 - 7:48pm

Today is Penny Day, when we pay homage to the most wee of our coinage. It might not buy you much today, but how else could you give someone your two cents or compensate them for their thoughts? What would the frugal among you pinch instead? But for Pete’s sake, if you don’t see Honest Abe staring up at you from the pavement, just keep right on walking. Here’s what we picked up this week:

  • Photo Mailer: Ever downloaded photos from someone’s email and wished you hadn’t? The new image preview feature in Yahoo! Mail will help you avoid such hassles in the future. It makes it easier to preview photo attachments right within your email. Received multiple images? You can preview them as a slideshow or as full-sized images before you download, which you can do in one fell swoop or for just selected images. And, as always, you’ll have the comfort of Norton AntiVirus scans to make sure they’re A-OK. Watch the screencast here.
  • My Flickr: If you’re as addicted to Flickr as I am, get moving right over to My Yahoo! to add the new Flickr module. Not only can you view your latest snaps, you can check out the “activity” tab to see when you have comments or responses to your comments. All this without ever having to leave your My Yahoo! page. What’s more, you’ll have access to your sets, your groups, your favorites, your contacts’ photos, and even Flickr’s “Interestingness” feature. BTW: While you’re dolling up your My Yahoo! page, you might want to show that your colors don’t run by trying on the Memorial Day theme this weekend.

Subscribe to the RSS feed (or add it to My Yahoo!) to get this Product Pulse every week.

Categories: Yahoo

Ubuntu Developer Summit - Prague

Planet MySQL - May 23, 2008 - 7:42pm

On Monday this week I attended the first day of the Ubuntu Developer Summit in Prague.  The summit, which just ended today, was intended to drive plans and decisions for the next Ubuntu release, "Intrepid Ibex", which is due out on October 30th.  (Check out the reports from the summit here.)


Mark welcomes the masses while Jono scans the crowd for hecklers.

Sun had about 12 folks there representing GlassFish, Open JDK, NetBeans, Hudson, Sun Studio and MySQL.  We attended sessions, had side meetings and had a big Canonical/Sun sync up at the end of the day to talk about the slew of software we hope to get into Intrepid. 

This was my third UDS and my favorite so far -- I loved the location, I loved all the Sun participation and I loved getting to see people who are becoming old friends! :)

The Many Faces of UDS Intrepid



Rick Clark brings the Ubuntu server team to its feet. 


Post pow-wow pic: Kaj Arno (MySQL), Mark Shuttleworth (Canonical), Arseniy Kuznetsov (NetBeans), Matt Zimmerman (Canonical) 

 
Robert has a bazaar conversation with the NetBeans guys (Yarda & Arseniy) 


Abhijit and Nitya talkin' GlassFish 


The road we took to dinner. 

Pau for now... 

Categories: MySQL

Data load speed test

Planet MySQL - May 23, 2008 - 6:18pm
I've run some data load tests with various databases using DBMonster, connecting to them through JDBC on a Windows XP PC.
Here are the results. In both cases I loaded 100 rows into the parent table and 1000 into the child table, with foreign keys enabled.


Firebird 2.1 with Jaybird 2.1.3 and DBMonster 1.0.3 (and Java 6)

Table structure is:

CREATE TABLE GUYS (
  GUY_ID   Integer NOT NULL,
  GUY_NAME Varchar(45) NOT NULL,
  CONSTRAINT PK_GUYS PRIMARY KEY (GUY_ID)
);
GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE
  ON GUYS TO SYSDBA WITH GRANT OPTION;

CREATE TABLE BADS_ATTRIBUTES (
  ATTRIBUTE_ID   Integer NOT NULL,
  GUY_ID         Integer NOT NULL,
  ATTRIBUTE_NAME Varchar(45) NOT NULL,
  CONSTRAINT PK_BADS_ATTRIBUTES PRIMARY KEY (ATTRIBUTE_ID, GUY_ID)
);
ALTER TABLE BADS_ATTRIBUTES ADD CONSTRAINT FK_BADS_ATTRIBUTES_1
  FOREIGN KEY (GUY_ID) REFERENCES GUYS (GUY_ID) ON UPDATE CASCADE ON DELETE CASCADE;
GRANT DELETE, INSERT, REFERENCES, SELECT, UPDATE
  ON BADS_ATTRIBUTES TO SYSDBA WITH GRANT OPTION;


D:\dbmonster-core-1.0.3\bin>dbmonster --grab -t guys bads_attributes -o d:/firebird_2_1_schema.xml

D:\dbmonster-core-1.0.3\bin>rem Batch file to run dbmonster under Windows

D:\dbmonster-core-1.0.3\bin>rem Contributed by Peter De Bruycker
2008-05-23 22:43:03,203 INFO SchemaGrabber - Grabbing schema from database. 2 tables to grab.
2008-05-23 22:43:03,265 INFO SchemaGrabber - Grabbing table GUYS. 50% done.
2008-05-23 22:43:03,359 INFO SchemaGrabber - Grabbing table BADS_ATTRIBUTES. 100% done.
2008-05-23 22:43:03,359 INFO SchemaGrabber - Grabbing schema from database complete.

D:\dbmonster-core-1.0.3\bin>dbmonster -s d:/firebird_2_1_schema.xml

D:\dbmonster-core-1.0.3\bin>rem Batch file to run dbmonster under Windows

D:\dbmonster-core-1.0.3\bin>rem Contributed by Peter De Bruycker
2008-05-23 22:49:32,828 INFO DBMonster - Let's feed this hungry database.
2008-05-23 22:49:32,984 INFO DBCPConnectionProvider - Today we are feeding: Firebird 2.1 Beta 2=WI-T2.1.0.16780 Firebird 2.1 Beta 2/tcp (xxx)/P10 WI-T2.1.0.16780 Firebird 2.1 Beta 2=WI-T2.1.0.16780 Firebird 2.1 Beta 2/tcp (pm-7071b5d42629)/P10
2008-05-23 22:49:33,187 INFO Schema - Generating schema .
2008-05-23 22:49:33,187 INFO Table - Generating table .
2008-05-23 22:49:33,187 INFO Table - Generating table .
2008-05-23 22:49:33,359 INFO Table - Generation of table finished.
2008-05-23 22:49:39,375 INFO Table - Generation of table finished.
2008-05-23 22:49:39,375 INFO Schema - Generation of schema finished.
2008-05-23 22:49:39,375 INFO DBMonster - Finished in 6 sec. 547 ms.

D:\dbmonster-core-1.0.3\bin>

The same test with MySQL 5.1.23 and Connector/J 5.1.6 (InnoDB tables, of course, since I wanted foreign keys)

Table structure is:

DROP TABLE IF EXISTS `test`.`guys`;
CREATE TABLE `test`.`guys` (
  `guy_id`   int(10) unsigned NOT NULL AUTO_INCREMENT,
  `guy_name` varchar(45) NOT NULL,
  PRIMARY KEY (`guy_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1001 DEFAULT CHARSET=utf8;

DROP TABLE IF EXISTS `test`.`bads_attributes`;
CREATE TABLE `test`.`bads_attributes` (
  `attribute_id`   int(10) unsigned NOT NULL AUTO_INCREMENT,
  `guy_id`         int(10) unsigned NOT NULL,
  `attribute_name` varchar(45) NOT NULL,
  PRIMARY KEY (`attribute_id`,`guy_id`),
  KEY `FK_bads_attributes_1` (`guy_id`),
  CONSTRAINT `FK_bads_attributes_1` FOREIGN KEY (`guy_id`) REFERENCES `guys` (`guy_id`) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=139 DEFAULT CHARSET=utf8;
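As an aside, the ON DELETE/ON UPDATE CASCADE behavior declared in both schemas is easy to sanity-check in miniature. Here's a rough sketch using Python's built-in sqlite3 module as an illustrative stand-in (not InnoDB or Firebird, and note that SQLite needs foreign keys switched on per connection):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default

# Same parent/child shape as the tables above, scaled down
conn.execute("CREATE TABLE guys (guy_id INTEGER PRIMARY KEY, guy_name TEXT NOT NULL)")
conn.execute("""CREATE TABLE bads_attributes (
    attribute_id INTEGER NOT NULL,
    guy_id INTEGER NOT NULL REFERENCES guys(guy_id)
        ON UPDATE CASCADE ON DELETE CASCADE,
    attribute_name TEXT NOT NULL,
    PRIMARY KEY (attribute_id, guy_id))""")

conn.execute("INSERT INTO guys VALUES (1, 'jay')")
conn.execute("INSERT INTO bads_attributes VALUES (1, 1, 'grumpy')")

# Deleting the parent row should cascade and remove the child row too
conn.execute("DELETE FROM guys WHERE guy_id = 1")
remaining = conn.execute("SELECT COUNT(*) FROM bads_attributes").fetchone()[0]
print(remaining)  # 0 -- the child row went with its parent
```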

D:\dbmonster-core-1.0.3\bin>dbmonster --grab -t guys bads_attributes -o d:/mysql_5_1_23_schema.xml

D:\dbmonster-core-1.0.3\bin>rem Batch file to run dbmonster under Windows

D:\dbmonster-core-1.0.3\bin>rem Contributed by Peter De Bruycker
2008-05-23 22:59:40,515 INFO SchemaGrabber - Grabbing schema from database. 2 tables to grab.
2008-05-23 22:59:40,671 INFO SchemaGrabber - Grabbing table guys. 50% done.
2008-05-23 22:59:40,703 INFO SchemaGrabber - Grabbing table bads_attributes. 100% done.
2008-05-23 22:59:40,703 INFO SchemaGrabber - Grabbing schema from database complete.

D:\dbmonster-core-1.0.3\bin>dbmonster -s d:/mysql_5_1_23_schema.xml

D:\dbmonster-core-1.0.3\bin>rem Batch file to run dbmonster under Windows

D:\dbmonster-core-1.0.3\bin>rem Contributed by Peter De Bruycker
2008-05-23 23:00:02,531 INFO DBMonster - Let's feed this hungry database.
2008-05-23 23:00:02,953 INFO DBCPConnectionProvider - Today we are feeding: MySQL 5.1.23-rc-community
2008-05-23 23:00:03,093 INFO Schema - Generating schema .
2008-05-23 23:00:03,093 INFO Table - Generating table .
2008-05-23 23:00:03,125 INFO Table - Generating table .
2008-05-23 23:00:12,812 INFO Table - Generation of table finished.
2008-05-23 23:00:49,000 INFO Table - Generation of table finished.
2008-05-23 23:00:49,000 INFO Schema - Generation of schema finished.
2008-05-23 23:00:49,000 INFO DBMonster - Finished in 14 sec. 187 ms.

D:\dbmonster-core-1.0.3\bin>

The difference is quite large!
You can compare my results to those obtained by the SQLite team; I hope these numbers make sense to you.

I'll try PostgreSQL too; I just don't know when.
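If you want to reproduce a load of the same shape (100 parent rows, 1000 child rows, foreign keys on) without DBMonster, here is a minimal sketch using Python's built-in sqlite3 module. The absolute timings won't be comparable to the JDBC runs above, but the structure of the test is the same:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default

conn.execute("CREATE TABLE guys (guy_id INTEGER PRIMARY KEY, guy_name TEXT NOT NULL)")
conn.execute("""CREATE TABLE bads_attributes (
    attribute_id INTEGER NOT NULL,
    guy_id INTEGER NOT NULL REFERENCES guys(guy_id),
    attribute_name TEXT NOT NULL,
    PRIMARY KEY (attribute_id, guy_id))""")

start = time.perf_counter()
# 100 rows in the parent table, 1000 in the child -- same shape as the DBMonster runs
conn.executemany("INSERT INTO guys VALUES (?, ?)",
                 [(i, f"guy_{i}") for i in range(100)])
conn.executemany("INSERT INTO bads_attributes VALUES (?, ?, ?)",
                 [(i, i % 100, f"attr_{i}") for i in range(1000)])
conn.commit()
elapsed = time.perf_counter() - start

print(f"Finished in {elapsed * 1000:.0f} ms")
```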
Categories: MySQL

MySQL Cluster’s Improved Release Model

Planet MySQL - May 23, 2008 - 1:56pm

The upcoming 5.1.25 release will see a change with regards to MySQL Cluster:

MySQL 5.1.25 binaries will be released without Cluster support.


This is due to a change in the development cycle of MySQL Cluster. The work on Cluster-specific features is largely independent of the rest of the MySQL Server, which has led to our decision to release MySQL Cluster separately from the rest of the MySQL Server, effective with MySQL 5.1.25.

This change just goes for the binaries, though. Let me reassure you that:

  • Sun is still releasing MySQL Cluster under the GPL, as before.
  • MySQL 5.1 source code will still include Cluster, and you can download and compile it with Cluster, if you wish.

The only change is that we’re now releasing two separate packages.

MySQL Cluster is developed and maintained at a different pace from the regular server. In addition, we have learned that our current MySQL Cluster users are not as concerned about the regular server's features: typical Cluster customer installations rely mostly on updated Cluster features, and many Cluster users use the direct API rather than the SQL interface.

We hope this division will bring benefits for all parties involved:

  • Cluster releases will happen more quickly, since they are fairly independent of MySQL Server releases. With new releases of MySQL Cluster no longer tied to the release of the regular server, Cluster users will get their updates as soon as the Cluster software is updated, whether or not the regular server is being patched.
  • The new Cluster package includes the Carrier Grade Edition, which contains a juicy list of features (http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-changes-5-1-ndb-6-2.html).
  • Users of the main MySQL Server will now be able to download a smaller package with a quicker test suite. This should really help those with low bandwidth connections.

The change is evident when you look at the developer zone (http://dev.mysql.com), where MySQL Cluster uses a release numbering scheme different from MySQL Server's and is labeled 6.2.15. This is not the regular server version: 6.2.15 is the Cluster version, and the package is based on MySQL 5.1.

Bug fixes to the Cluster will be ported to the main server when appropriate. Bug fixes to the main server will be merged to the Cluster sources on a regular basis.

Finally, where shall you turn for the downloads?

Categories: MySQL

About Me

Jay Janssen
Yahoo!, Inc.
jayj at yahoo dash inc dot com

MySQL
High Availability
Global Load Balancing
Failover