<quote>I admit it, Wei, you lost me completely. You just don't use an RDBMS when "you have millions message[sic] to log". You just don't.</quote>
What do you do then?
<quote>Coupled with this piece "When you call a log method, the call is blocked until the event "happens" at database. " - yikes! You're logging millions of log records and blocking the log call?</quote>
What do you do?
<quote>Sounds like you'll be measuring this app using geological time frames, not milliseconds.</quote>
If you say my way is expensive, tell us which way is better.
<quote>On top of all this, you state:" If your application crashed, you can trace and find that condition A happened, which caused condition B happened... So we should record log messages according to the order of happenings. When you call a log method, the call is blocked until the event "happens" at database. The database can record them according to the order of happening."This only works if you are strictly synchronous in everything you do and on the same JVM. And this approach will be ungodly slow. </quote>
It works regardless of the number of JVMs. For example, you trace the log and find that an entity bean tried to insert a large amount of data in one JVM, which caused a database access to time out in another JVM, and so on.
Again, if this is slow, tell us which way is faster.
<quote>If everything isn't strictly synchronous, then you lose "The database can record them according to the order of happening" bit.</quote>
No. No matter what happens outside, there is only one log database, and you can use the database's own features to record the "happenings" in the correct order.
Of course, the log method call must block; that is where this discussion started. The asynchronous way does not block, and it is not chronological, neither by request time nor by happening time.
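A minimal sketch of the blocking idea (my own illustration, not code from this discussion): every log call performs a synchronous write, and a single shared sequence, standing in for the database's own sequence or identity column, stamps each record in the order the writes actually complete. With a real database the `records.add` line would be a JDBC `INSERT`.

```java
import java.util.ArrayList;
import java.util.List;

public class BlockingLog {
    private long sequence = 0;                               // plays the role of a DB sequence
    private final List<String> records = new ArrayList<>();  // plays the role of the log table

    // The caller is blocked until the record "happens" at the store,
    // so the sequence number reflects the true order of happenings,
    // even when callers live in different threads (or, with a real
    // database, in different JVMs).
    public synchronized long log(String message) {
        long seq = ++sequence;
        records.add(seq + " " + message);  // with JDBC: an INSERT into the log table
        return seq;
    }

    public synchronized List<String> records() {
        return new ArrayList<>(records);
    }

    public static void main(String[] args) {
        BlockingLog log = new BlockingLog();
        long a = log.log("entity bean inserts large data set");
        long b = log.log("database access times out");
        // Because each call blocked until the write completed,
        // the sequence numbers agree with the order of happening.
        System.out.println(a < b);  // prints "true"
        System.out.println(log.records());
    }
}
```

The point of the sketch is only the ordering guarantee: because the call does not return until the write has happened, the store's sequence and the order of happening cannot disagree.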
<quote>If you've got to log millions of log records, the first thing to do is to use every resource at your disposal to cut that quantity down by a couple of orders of magnitude. Failing that, _don't use a database_.</quote>
This is a third topic, which is "how to prepare". The main topic in this discussion is "how to log". I assume you have already cut all the "fat" and only log the necessary messages.
<quote>As Cameron said, databases are not optimized for doing millions of sequential writes. In my opinion you would be _much_ better off with flat files you control and which are safe on RAID arrays, and log mining utilities to combine such logs after a "bad" condition occured to help analyze the logs.In fact, you would probably find perl mining text files to be a much better log analysis tool than SQL queries.</quote>
If you have many servers and log files scattered all over the place, how can log-mining utilities determine the order of those log messages? Let's go back to the previous example: in one flat file, a log message reports inserting a large amount of data; in another flat file, a log message reports a timeout. Which came first?
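To make that question concrete, here is a hypothetical sketch (the numbers are invented for illustration, not taken from the discussion) of why merging per-server flat files by timestamp cannot reliably recover the order of happening once server clocks drift:

```java
public class SkewedMerge {
    public static void main(String[] args) {
        // True order of happening: the big insert (server A) precedes
        // the timeout it causes (server B) by 50 ms.
        long trueInsertTime  = 1_000_000;   // ms, server A's event
        long trueTimeoutTime = 1_000_050;   // ms, server B's event, 50 ms later

        long clockSkewB = -200;             // suppose server B's clock runs 200 ms slow

        long loggedInsert  = trueInsertTime;                // stamped with A's clock
        long loggedTimeout = trueTimeoutTime + clockSkewB;  // stamped with B's clock

        // A log miner sorting the merged files by timestamp now sees
        // the timeout "before" the insert that caused it.
        System.out.println(loggedTimeout < loggedInsert);  // prints "true"
    }
}
```

A single log database sidesteps this entirely: there is only one clock (the database's own sequence), so no amount of skew between application servers can invert the recorded order.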