Discussions

News: Article: Asynchronous logging using JMS and Hibernate

  1. Madhusudhan Konda has written a piece which discusses the development of an asynchronous logging service using JMS and Hibernate.

    Conclusion
    This article discussed how to develop an asynchronous log service. JMS is a powerful feature of J2EE. JMS's asynchronous behavior, coupled with Hibernate's object-relational persistence, provides a flexible and reliable framework. Many Java applications that rely on protocols such as RMI, SOAP, and CORBA can benefit greatly from JMS. Since EJB 2.0, JMS has been integrated into the application-server architecture, so EJBs can take advantage of messaging models. A messaging proxy program that invokes session beans can be replaced by message-driven beans.

    Asynchronous messaging principles can be applied to develop services such as property services, configuration services, and other data services. This article's asynchronous log service can be extended by making its controls manageable through JMX managed beans (MBeans). You could also create a log viewer to watch log messages as they are persisted to the database.
    Read Develop an asynchronous log service using JMS and Hibernate

    Threaded Messages (35)

  2. Good Article

    I have been thinking about something like this for a long time. It's a critical
    requirement for a distributed application to log errors in a central
    log database. The other approach is to hold logs inside each middleware node
    but provide a centralized log-management facility. Anyway, I prefer the approach
    of this article, but only for error logs; logging everything, even info-level
    messages, will hurt the application's performance.

    --alireza
  3. Good Article

    Log4j includes a syslog appender. We have used this for various distributed logging requirements. It seemed quick enough and uses UDP. A good syslog server can also forward any messages as log files, alerts, emails, etc. You might have to work hard to build this functionality in an MDB.
  4. Log Management Solution

    Another great way is to use Log4j and write the log events into files. This is the fastest and most reliable method, and you don't add network traffic as overhead.

    For data collection, archiving, analysis, and a complete log-management solution, you can add XpoLog Center from XPLG
    at http://www.xplg.com

    This product can complete the picture, whether you choose Hibernate with JMS or Log4j.
  5. Asynchronous logging is not chronological: JMS guarantees delivery, but not timing. If part of JMS is busy or unavailable at the time you log, your message will be delivered later. If there are two related errors and the order of arrival is not chronological, it will be even more confusing.

    <quote>
    In a traditional synchronous log model, the caller cannot execute further unless the log service returns successfully. All calls are blocked until the record is persisted or acknowledged by the log service. That clearly results in an overhead, especially if an application is designed to code numerous log messages. Imagine a source file consisting of a large number (sometimes hundreds) of log statements. Each statement should be logged before the following statement is processed—clearly a time-consuming process.
    </quote>

    That is true. When you call asynchronous JMS, the method call returns immediately. It looks fast. But who does the real work? The underlying JMS provider does. It competes for resources (CPU, memory, network, ...) with the main program. No resources are saved. The log message still has to be sent; the amount of work is not reduced, but JMS overhead is added.

    Wei Jiang
    Perfecting J2EE!
  6. <quote>
    Asynchronous logging is not chronological: JMS guarantees delivery, but not timing. If part of JMS is busy or unavailable at the time you log, your message will be delivered later. If there are two related errors and the order of arrival is not chronological, it will be even more confusing.
    </quote>

    Of course, you have to timestamp the log events when you send them to the JMS bus, not when they arrive.
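A minimal plain-Java sketch of stamping at creation time rather than arrival time (LogEvent is a hypothetical carrier class, not from the article):

```java
// Sketch: capture the timestamp when the event is created, and carry it
// inside the message payload, instead of relying on delivery time.
public class LogEvent {
    private final long createdAt;   // wall-clock time at the call site
    private final String message;

    public LogEvent(String message) {
        this.createdAt = System.currentTimeMillis();
        this.message = message;
    }

    public long getCreatedAt() { return createdAt; }
    public String getMessage() { return message; }

    public static void main(String[] args) {
        long before = System.currentTimeMillis();
        LogEvent e = new LogEvent("order failed");
        long after = System.currentTimeMillis();
        // The stamp reflects creation time, regardless of when the event
        // is eventually delivered or persisted.
        System.out.println(e.getCreatedAt() >= before && e.getCreatedAt() <= after);
    }
}
```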

    <quote>
    That is true. When you call asynchronous JMS, the method call returns immediately. It looks fast. But who does the real work? The underlying JMS provider does. It competes for resources (CPU, memory, network, ...) with the main program. No resources are saved. The log message still has to be sent; the amount of work is not reduced, but JMS overhead is added.
    </quote>

    Well, it's about saving time (or better, preventing blocking calls) in your main program flow. Of course the grand total of resources used is higher if you use JMS to log, but the main program will operate more efficiently; besides, in distributed environments memory and CPU resources aren't really the issue.
  7. <quote>of course you have to timestamp the log events when you send them to the JMS bus, not when they arrive</quote>

    The clock resolution for Java (on both Windows and Solaris) is 10 milliseconds. The tolerance of network clock synchronization is 10 milliseconds, at best. You cannot use a timestamp to establish order.

    <quote>Well, it's about saving time (or better, preventing blocking calls) in your main program flow. Of course the grand total of resources used is higher if you use JMS to log, but the main program will operate more efficiently; besides, in distributed environments memory and CPU resources aren't really the issue.</quote>

    If the grand total of resources used is higher, you do not save time. And if, at any point, you produce more messages than the destination can consume (a "message backlog"), you will run into further issues.

    This idea was rejected when we designed SuperLogging. Logging in the J2EE world is more complicated than it looks at first glance.

    Wei Jiang
    Perfecting J2EE!
  8. 1. Even if you get a lot of messages, can you not queue them through JMS and then make sure they are delivered in the order they were received?
    2. For the exact timestamp of the error, you can always append the timestamp in the source code before calling the logger.
    3. As for millions of logging messages: well, it's a problem in itself if you are logging millions of messages, and you can always have more/bigger queues. Current JMS tools like SonicMQ can handle loads up to a few million.

    Please enlighten me if I am missing something...
  9. <quote>1. Even if you get a lot of messages, can you not queue them through JMS and then make sure they are delivered in the order they were received?</quote>

    NO. By the JMS spec (actually by the underlying messaging spec; JMS is a wrapper for the underlying messaging system), JMS does not guarantee timing. With one producer and one consumer, most JMS implementations would (most likely) deliver in the order received. But in the J2EE world there are multiple clients, and with multiple producers the order is undefined.

    <quote>2. For the exact timestamp of the error, you can always append the timestamp in the source code before calling the logger.</quote>

    You did not understand my previous post. Suppose at SCIENTIFIC time 9:00:00.000 you log messages from machines A and B. Because the tolerance of network synchronization is 10 milliseconds (at best), the actual clock on machine A may read 9:00:00.010 and on machine B 8:59:59.990. If these two machines send log messages at that moment, the messages do not appear to be simultaneous.

    Try the following program:
    class t {
      public static void main(String[] args) throws Exception {
        long[] time = new long[10000];
        for (int i = 0; i < time.length; i++)
          time[i] = System.currentTimeMillis();
        for (int i = 0; i < time.length; i++)
          System.out.println(i + ": " + time[i]);
      }
    }

    You will find hundreds or thousands of entries have the same timestamp.



    <quote>3. As for millions of logging messages: well, it's a problem in itself if you are logging millions of messages, and you can always have more/bigger queues. Current JMS tools like SonicMQ can handle loads up to a few million. Please enlighten me if I am missing something...</quote>

    I am sure most JMS systems can deliver a large volume of messages. But if everything goes smoothly, you do not need logging. Logging is needed when something is wrong. At that point, probably a few things are wrong at almost the same time. The order is important.

    It all depends on your application. Your web server gives you some statistics about your web pages: number of visitors, times of visits, ... That is logging. Is accuracy important? Maybe. But if your system crashes, you probably want all error messages in the correct order, so you can trace the problem.

    When you design a logging system, you should do it right the first time.

    Wei Jiang
    Perfecting J2EE!
  10. Suppose at SCIENTIFIC time 9:00:00.000 you log messages from machines A and B. Because the tolerance of network synchronization is 10 milliseconds (at best), the actual clock on machine A may read 9:00:00.010 and on machine B 8:59:59.990.

    I am confused here.

    The timestamp issue can probably be resolved by either:
    1 - assign the timestamp when the logging message reaches the logging server, and use the logging server's clock. This is problematic because the time taken to deliver the message shifts the timestamp from when the event really happened to when the message arrived. Not so good.

    2 - call a remote API on a central machine/device that provides the "time" to all who request it, and use this for the message's timestamp. This of course means a synchronous wait on the timestamp request. In a highly loaded environment, this may not be acceptable. One of the big advantages of log4j is its performance.

    3 - differentiate between when you REALLY need accurate timestamps and when you don't (perhaps stack traces need accurate timings, but debug messages don't).

    I haven't looked at SuperLogging yet, but I do like the idea of using JMS to capture
    logging information, because you can do all sorts of routing with message selectors and the like. The caveat here is that the logging "server", i.e. the machine that processes the JMS logging messages, must be a separate box.
  11. (sorry if this is a dupe, got a posting error)

    Let's not forget JMS is merely an API to what could be a wildly inappropriate provider for logging. The problem here is (sadly) a common one: it may be easy to bind JMS into code for logging, but are you using the right technology abstractions? I think not.

    My employer uses direct bindings into log4j asynchronous logging (to files AND Netcool, http://www.micromuse.com/index.html). Pick a proper logging abstraction; don't layer abstractions for demonstration purposes because it looks easy. The important stuff goes to the likes of Netcool and the less important stuff goes to files - then, when you're finally stable, turn down the noise on the files...

    Even with such enterprise management systems (Netcool and the like) you have to watch the volume of logging - with millions of events, how can your support staff deal with them, not to mention the logging provider? Support only want the important stuff; the rest gets logged to files, hopefully on a SAN or NAS so we can get at them if the box blows up.

    Colin
  12. <quote>
    Let's not forget JMS is merely an API to what could be a wildly inappropriate provider for logging. The problem here is (sadly) a common one: it may be easy to bind JMS into code for logging, but are you using the right technology abstractions? I think not.
    </quote>

    It can be the right technology, but it largely depends on the architecture and on the log level you need.
    In a loosely-coupled architecture - or any other architecture with independent components - where you need, for example, end-to-end reporting, this type of logging is the right tech.
    I just finished a project where we had a B2B server and some Tibco components glued together with JMS. To provide end-to-end reporting for support, we had the choice of either tapping into each component's log facility or sending report events to a JMS queue and eventually persisting them. Guess which one I chose :)

    But I have to agree that this type of logging only makes sense for business events ("Document A was received by the app") rather than system events ("Database connection 2 dropped").
    I also have to agree that sending log messages at debug level over JMS is not worth it.
  13. 1. JMS is designed for "tolerance": if the server is busy or even down, JMS still works; it will deliver when the server comes back up. That could be hours or days later.

    2. You can get a unique id from a central place. But that is too expensive.

    3. It is too complicated to decide on the fly what kind of logging you need.

    If you can kill all of the above birds with one stone, why not?

    Wei Jiang
    Acelet
  14. 1. JMS is designed for "tolerance": if the server is busy or even down, JMS still works; it will deliver when the server comes back up. That could be hours or days later.
    This is generally not true. JMS only gives guarantees for messages marked persistent _and which the JMS API accepts_. The problem is, the JMS send() or publish() call may not accept your data. For example, if you use a server-based JMS provider and the server is down, it's legal for a JMS provider to throw an exception on every send()/publish() call (indeed, it might signal an error via onException() and shut the whole kit 'n' kaboodle down). Only some commercial JMS providers provide automatic, transparent failover, and I don't think any open source ones do at this time. This means if you lose your server connection with those providers, you are hosed.

    The PERSISTENT part of JMS guarantees can also be a big problem - PERSISTENT messages can be quite expensive to send - and most JMS providers will wait for the message to be persisted before returning successfully from the send() or publish() call.

    JMS guarantees do not come for free - they come at a price, and the specifics vary from JMS provider to JMS provider (under the auspices of the old "quality of implementation" bit).

        -Mike
  15. Async logging

    I'm using a similar approach here in my application.

    I receive lots of requests in my Server App.

    Each request generates at least 5 log entries (db logging, oracle).

    So I decided to use an async approach. Instead of calling a DAO to insert my log information, I just send a message to JMS.

    I have another application (this one must be an MDB, so you can easily use transactions between JMS and Oracle) listening to my queue and inserting into the DB as soon as possible.

    Whether it is done in 0.01 ms or 1 minute, I don't mind. I just need this information in the DB to generate my reports (day - 1).

    Today I figured out that log4j addresses this issue:

    http://logging.apache.org/log4j/docs/api/org/apache/log4j/AsyncAppender.html
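    For reference, in log4j 1.x the AsyncAppender has to be configured through the XML configurator (the properties configurator cannot express the nested appender-ref). A minimal fragment might look like this, where "FILE" is a placeholder for an appender you already define elsewhere:

```xml
<!-- log4j.xml fragment: wrap an existing appender so writes happen
     on a background thread instead of blocking the caller -->
<appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
  <param name="BufferSize" value="512"/>
  <!-- "FILE" is a stand-in for whatever appender you already use -->
  <appender-ref ref="FILE"/>
</appender>
```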

    Regards,
    Marco Campelo
  16. Wei: This idea was rejected when we designed SuperLogging.

    The last time I checked, SuperLogging was a 100-line Java class that did direct JDBC writes to a hardcoded database table. As a general-purpose logger, it gets automatic disqualification (code issues aside), since using a transactional data manager for logging debug messages is like using a safety deposit box as a sock drawer.

    And what happens if the database backs up? What happens if it goes down? How do you log that the database is down?

    One more thing, using a database doesn't guarantee any ordering either. The database is under no obligation to commit its transactions in the order that your distributed application is attempting to log them. Those same "10 millisecond" issues that you were suggesting exist in a JMS/Hibernate logger will exist in SuperLogger.

    shsh: 1. Even if you get a lot of messages, can you not queue them through JMS and then make sure they are delivered in the order they were received?

    Yes, you can queue them. JMS, like a database suffers from potential backups and outages, although there are some HA implementations of JMS available that will handle the outage scenario.

    On the other hand, the data being logged needs to be pretty important (i.e. something more like an audit trail) before one starts using an HA JMS server or database server to act as a logger.

    shsh: 2. For the exact timestamp of the error, you can always append the timestamp in the source code before calling the logger.

    The timestamp is only part of what you'll need to make sense of the data. You'll also want the thread id and a sequence number (for that one JVM) so you can filter and order later.
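A per-JVM sequence tag like the one described could be sketched as follows (stdlib only; class and method names are illustrative, not from any of the products discussed):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch: tag each record with a per-JVM sequence number plus the thread
// name, so records can be totally ordered within one JVM and filtered by
// thread afterwards, even when the millisecond clock does not advance.
public class LogTagger {
    private static final AtomicLong SEQ = new AtomicLong();

    public static String tag(String message) {
        return SEQ.incrementAndGet() + "/" + Thread.currentThread().getName()
                + " " + System.currentTimeMillis() + " " + message;
    }

    public static void main(String[] args) {
        String a = tag("first");
        String b = tag("second");
        // Sequence numbers increase even if both calls land in the same
        // millisecond tick.
        long seqA = Long.parseLong(a.substring(0, a.indexOf('/')));
        long seqB = Long.parseLong(b.substring(0, b.indexOf('/')));
        System.out.println(seqB > seqA);
    }
}
```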

    shsh: 3. As for millions of logging messages: well, it's a problem in itself if you are logging millions of messages, and you can always have more/bigger queues. Current JMS tools like SonicMQ can handle loads up to a few million.

    For millions of logging messages you are just plain screwed, because the database will back up then the JMS server will back up and eventually either the application will slow to a crawl (because someone is avoiding running out of memory, maybe by paging to disk) or one of the tiers will simply run out of memory and die.

    Wei: 2. You can get a unique id from a central place. But that is too expensive.

    You can get a unique id locally. It may not be globally sequential, though. Is that what you meant?

    James: Certainly collating logging output from multiple machines into one place is a very useful diagnostic tool. For many use cases just using files is fine but now and again performing real time, consolidated monitoring is very useful, so a JMS solution sounds reasonable to me.

    Right .. it's the KISS principle. That doesn't mean that one never logs using JMS or to a database, but that it's a choice that needs to have a compelling reason for it to be chosen.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  17. Hi Cameron,

    <Cameron>The last time I checked, SuperLogging was a 100-line Java class that did direct JDBC writes to a hardcoded database table. As a general-purpose logger, it gets automatic disqualification (code issues aside),</Cameron>

    Can you explain "disqualification" please?

    <Cameron>since using a transactional data manager for logging debug messages is like using a safety deposit box as a sock drawer.</Cameron>

    Yes. Logging should not use transactions, not only to save time, but because of the nature of its business. You can use a non-transactional database, such as MySQL.


    <Cameron>And what happens if the database backs up? What happens if it goes down? How do you log that the database is down?</Cameron>

    You have to store your log messages in a "data source". A database is one of them. Any data source can fail. When the log database is down, SuperLogging sends an alert email and stops logging until you reset it.


    <Cameron>One more thing, using a database doesn't guarantee any ordering either. The database is under no obligation to commit its transactions in the order that your distributed application is attempting to log them. </Cameron>

    You are right: the database does not record data according to the order of your requests.

    SuperLogging does guarantee it is in chronological order. The chronological order is the order of "happening".

    Let's say you and I go to a grocery store together (that would be nice). You go to cashier 1 to check out first. I go to cashier 2 to check out later. Does that mean you "purchase" before me? No. The purchase _happens_ when the grocery store's database commits the transaction.


    <Cameron>Those same "10 millisecond" issues that you were suggesting exist in a JMS/Hibernate logger will exist in SuperLogger.</Cameron>

    No. SuperLogging does not keep the order according to timestamps. SuperLogging records timestamp for human reading only.


    <Cameron>For millions of logging messages you are just plain screwed, because the database will back up then the JMS server will back up and eventually either the application will slow to a crawl (because someone is avoiding running out of memory, maybe by paging to disk) or one of the tiers will simply run out of memory and die.</Cameron>

    I will address this issue soon.

    Wei Jiang
    Acelet
  18. Wei: Can you explain "disqualification" please?

    I should have been less sweeping with my conclusion, and clearer in my claim. I meant that logging to a database is disqualified for "general purpose" logging. I've seen applications with tens of thousands of log messages per second (in total, across a large cluster); logging those to a database is a non-starter. Even logging them to local files (on 100 servers) slows the app down significantly, and that's with the writes being managed asynchronously by the OS.

    (When you turn off logging in a load test and throughput increases by 30% that is a good indicator that logging has overhead.)

    I'm fine with the idea of logging relatively low-frequency application-level stuff to a database. I'd probably call it "auditing" and not "logging" but that's just IMHO.

    Wei: Yes. Logging should not use transactions, not only to save time, but because of the nature of its business. You can use a non-transactional database, such as MySQL.

    True. I was referring more to the weight of the solution. A database engine isn't optimized for high levels of concurrent sequential appends (a.k.a. logging).

    Wei: SuperLogging does guarantee it is in chronological order. The chronological order is the order of "happening".

    One cannot possibly put concurrent items from multiple servers into a true chronological order. That's what I was trying to convey; it can be shown relatively easily using a concept such as the uncertainty principle. While the log messages themselves can be ordered, if they are emitted before the events being logged, one cannot tell when those events will actually occur, and if they are emitted after, one cannot tell truly (exactly) when those events did occur, particularly vis-a-vis events from other servers.

    With before-event and after-event logging, one can stitch together a rough concept of ordering, but it is not always possible to determine which event (one occurring on each of two machines) actually occurred first. That's what I was trying to say.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  19. We are now discussing two different topics "asynchronously". One is the original topic: "how to log". The other is "whether to log", which is a legitimate issue as well. You have to realize that logging does not come for free. Just like the black boxes on commercial airliners: they are expensive, but necessary. When you have millions of messages to log, you have two choices: buy a big machine to log them, or close your eyes and let it go. I am working on this issue now and hope I can address it soon.

    Let's get back to the original topic for a moment. There are two different concepts: the time of request and the time of happening. When you trace log messages, the most important thing is the chronological order of happenings, not necessarily the order of requests. If your application crashed, you can trace back and find that condition A happened, which caused condition B to happen... So we should record log messages according to the order of happenings. When you call a log method, the call is blocked until the event "happens" at the database. The database can then record messages in the order of happening.
  20. I admit it, Wei, you lost me completely. You just don't use an RDBMS when "you have millions message[sic] to log". You just don't. Coupled with this piece "When you call a log method, the call is blocked until the event "happens" at database. " - yikes! You're logging millions of log records and blocking the log call? Sounds like you'll be measuring this app using geological time frames, not milliseconds.

    On top of all this, you state:

    " If your application crashed, you can trace and find that condition A happened, which caused condition B happened... So we should record log messages according to the order of happenings. When you call a log method, the call is blocked until the event "happens" at database. The database can record them according to the order of happening."

    This only works if you are strictly synchronous in everything you do and on the same JVM. And this approach will be ungodly slow. If everything isn't strictly synchronous, then you lose "The database can record them according to the order of happening" bit.

    If you've got to log millions of log records, the first thing to do is use every resource at your disposal to cut that quantity down by a couple of orders of magnitude. Failing that, _don't use a database_. As Cameron said, databases are not optimized for millions of sequential writes. In my opinion you would be _much_ better off with flat files that you control and that are safe on RAID arrays, plus log-mining utilities to combine such logs after a "bad" condition occurred to help analyze them.

    In fact, you would probably find perl mining text files to be a much better log analysis tool than SQL queries.

          -Mike
  21. <quote>I admit it, Wei, you lost me completely. You just don't use an RDBMS when "you have millions message[sic] to log". You just don't.</quote>

    What do you do then?

    <quote>Coupled with this piece "When you call a log method, the call is blocked until the event "happens" at database. " - yikes! You're logging millions of log records and blocking the log call?</quote>

    What do you do?


    <quote>Sounds like you'll be measuring this app using geological time frames, not milliseconds.</quote>

    If you say my way is expensive, tell us which way is better.

    <quote>On top of all this, you state: "If your application crashed, you can trace back and find that condition A happened, which caused condition B to happen... So we should record log messages according to the order of happenings. When you call a log method, the call is blocked until the event "happens" at the database. The database can then record messages in the order of happening." This only works if you are strictly synchronous in everything you do and on the same JVM. And this approach will be ungodly slow.</quote>

    It works regardless of the number of JVMs. For example, you trace back and find that an entity bean tried to insert a large amount of data in one JVM. That caused a database access to time out in another JVM, ....

    Again, if this is slow, tell us which way is faster.


    <quote>If everything isn't strictly synchronous, then you lose "The database can record them according to the order of happening" bit.</quote>

    No. No matter what happens outside, there is only one log database, and you can use the database's features to record "happenings" in the correct order.

    Of course, the log method call must be blocked. This is where this discussion began. The asynchronous way does not block, and the asynchronous way is not chronological - neither for request time nor for happening time.


    <quote>If you've got to log millions of log records, the first thing to do is to use every resource at your disposal to cut that quantity down by a couple of orders of magnitude. Failing that, _don't use a database_.</quote>

    This is a third topic: "how to prepare". The main topic in this discussion is "how to log". I assume that you have already cut all the "fat" and log only necessary messages.


    <quote>As Cameron said, databases are not optimized for millions of sequential writes. In my opinion you would be _much_ better off with flat files that you control and that are safe on RAID arrays, plus log-mining utilities to combine such logs after a "bad" condition occurred to help analyze them. In fact, you would probably find perl mining text files to be a much better log-analysis tool than SQL queries.</quote>

    If you have many servers and log files all over the place, how can log-mining utilities identify the order of those log messages? Back to the previous example: in one flat file, a log message reports inserting a large amount of data. In another flat file, a log message reports a timeout. Which came first?

    Wei Jiang
    Acelet
  22. Asynchronous logging is not chronological: JMS guarantees delivery, but not timing. If part of JMS is busy or unavailable at the time you log, your message will be delivered later. If there are two related errors and the order of arrival is not chronological, it will be even more confusing.
    There is no reason whatsoever why asynchronous logging can't preserve ordering - though you have to be a little careful.

    e.g. each log entry would go on an ordered List/Queue which would then have a separate thread pulling items off the list/queue and using a single JMS MessageProducer to publish each log item to a topic.

    Typically most decent JMS providers will maintain the order of messages sent by a single publisher - though YMMV depending on the underlying transport.
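A stdlib-only sketch of the pattern described above (no JMS dependency; the `published` list stands in for the topic, and a sentinel entry stands in for a real shutdown protocol):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Callers append to an in-memory queue and return immediately; one
// drainer thread forwards entries in FIFO order through a single
// producer (here a List; in real code, one JMS MessageProducer).
public class AsyncLogPump {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final List<String> published = new ArrayList<>(); // stand-in for the topic
    private final Thread drainer = new Thread(() -> {
        try {
            while (true) {
                String entry = queue.take();          // FIFO: preserves enqueue order
                if (entry.equals("__STOP__")) break;  // sentinel to shut down
                published.add(entry);                 // real code: producer.send(...)
            }
        } catch (InterruptedException ignored) { }
    });

    public void log(String entry) { queue.add(entry); } // non-blocking for the caller

    public static void main(String[] args) throws InterruptedException {
        AsyncLogPump pump = new AsyncLogPump();
        pump.drainer.start();
        for (int i = 0; i < 5; i++) pump.log("entry-" + i);
        pump.log("__STOP__");
        pump.drainer.join();
        System.out.println(pump.published); // order preserved
    }
}
```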


    I think choosing the right granularity is the key here - whether we're talking about files, JMS or database rows.

    Certainly collating logging output from multiple machines into one place is a very useful diagnostic tool. For many use cases just using files is fine but now and again performing real time, consolidated monitoring is very useful, so a JMS solution sounds reasonable to me.

    However sending a JMS message per line of log file may well be overkill; similarly a database insert per log entry could be a large load on your database (depending on how much logging you're doing).

    However buffering up (say) pages of logging to send around a JMS network is easy & simple.
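The buffering idea could be sketched like this (stdlib only; a timer-based flush is omitted, and the `shipped` list stands in for sent JMS messages):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: batch log lines into "pages" and ship each page as a single
// message, cutting per-message overhead. In real code each flush would
// send one JMS TextMessage containing the whole page.
public class PagedLogBuffer {
    private final int pageSize;
    private final StringBuilder page = new StringBuilder();
    private int lines = 0;
    private final List<String> shipped = new ArrayList<>(); // stand-in for sent messages

    public PagedLogBuffer(int pageSize) { this.pageSize = pageSize; }

    public synchronized void log(String line) {
        page.append(line).append('\n');
        if (++lines >= pageSize) flush();
    }

    public synchronized void flush() {
        if (lines == 0) return;
        shipped.add(page.toString()); // real code: producer.send(...)
        page.setLength(0);
        lines = 0;
    }

    public static void main(String[] args) {
        PagedLogBuffer buf = new PagedLogBuffer(3);
        for (int i = 0; i < 7; i++) buf.log("line-" + i);
        buf.flush(); // ship the final partial page
        System.out.println(buf.shipped.size()); // 7 lines in pages of 3 -> 3 messages
    }
}
```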

    James
  23. <quote>There is no reason whatsoever why asynchronous logging can't preserve ordering - though you have to be a little careful. e.g. each log entry would go on an ordered List/Queue which would then have a separate thread pulling items off the list/queue and using a single JMS MessageProducer to publish each log item to a topic.</quote>

    If you store important log messages in a List/Queue in MEMORY, they will get lost if the system crashes. You also introduce more overhead here.

    If, at any point, you generate more messages than the consumer can consume, you will run into more issues.
  24. Building the session factory is a time-consuming operation. In my opinion, placing it in the persistMessage method is not appropriate. I'd build it somewhere else, cache it, and reuse it. See the service locator pattern in the many "enterprise patterns" references out there.
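The "build once, cache, reuse" fix suggested here can be sketched generically (ExpensiveFactory is a hypothetical stand-in for Hibernate's SessionFactory; in real code the holder would wrap the `new Configuration()...buildSessionFactory()` call):

```java
// Sketch of caching an expensive factory in a lazily-initialized
// static holder instead of rebuilding it on every persist call.
public class FactoryHolder {
    static class ExpensiveFactory {
        static int built = 0;            // count constructions for the demo
        ExpensiveFactory() { built++; }  // imagine heavy start-up work here
    }

    // Initialized exactly once by JVM class initialization, which is
    // thread-safe without explicit locking.
    private static final ExpensiveFactory INSTANCE = new ExpensiveFactory();

    public static ExpensiveFactory get() { return INSTANCE; }

    public static void main(String[] args) {
        // Many "persistMessage" calls, one construction.
        for (int i = 0; i < 1000; i++) FactoryHolder.get();
        System.out.println(ExpensiveFactory.built);
    }
}
```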
  25. Overkill

    Isn't using Hibernate in this example really just massive overkill? I mean, the guy has one table with 5 fields. Why not write the 8 lines or so of JDBC code required to do inserts here and be done with it?

    The original article author had this to say when someone asked about overkill:
    Thanks! I didn't want to go toward entities for persistence, nor toward direct access. I realised that DAOs and Hibernate are good candidates for this purpose. Hibernate is cool from a developer's point of view, and I implemented it with Hibernate. Given the simplistic scenario, the usage might not be justified, but if the same framework is extended, a lot of stuff will need to be persisted.
    I guess this proves a few things:

    • Anybody can write an article these days :-)
    • A Masters in Technology doesn't mean all that much
    • The author was too lazy (or too busy polishing his degrees) to find the JMS bits built into Log4j
    • It's not a big leap from Junior Consultant to Senior Consultant :-)
         -Mike
  26. Overkill[ Go to top ]

    "This article will help you develop a simple log service. The service creates some log messages, sends them across the network to a JMS provider, and persists them in the database. For this purpose, we use JMS for its asynchronous benefits and Hibernate for its simple yet powerful persistence features. You can persist data in many ways, such as through standalone JDBC (Java Database Connectivity) connections, EJB (Enterprise JavaBeans), and stored procedures. I vote for tools that create domain entities from plain-old Java objects. In this article, I move away from constructing the domain model using EJB entities by using Hibernate."

    This log service is supposed to reduce the overhead in a high-volume logging service. How about the author getting his head out of his ass? A "domain model" for a logging service???? MDBs as subscribers of log messages?

    But this piece of code takes the cake:


    private void persistMessage(LogMessage message) throws HibernateException {
       net.sf.hibernate.Session session = null;
       SessionFactory sessionFactory = null;
       Transaction tx = null;
       try {
          // Create a session factory from the configuration
          // (hibernate.properties should be present in the class path)
          sessionFactory = new Configuration().addClass(LogMessage.class)
                .buildSessionFactory();

          // Create a session
          session = sessionFactory.openSession();

          // Begin a transaction
          tx = session.beginTransaction();
       } catch (HibernateException ex) {
          // ...
       }
       try {
          // Assign the message id
          message.setMessageId("" + System.currentTimeMillis());

          // Save the object to the database and commit the transaction
          session.save(message);
          tx.commit();
       } catch (Exception ex) {
          // ...
       } finally {
          // Close the session
          session.close();
       }
    }
  27. Overkill[ Go to top ]

    Totally agree with Mike. He hit the nail on the head - as always :-). In fact, only a couple of weeks ago I responded to a question in a Java forum saying "this is like using JMS and Hibernate for logging" :-). I never thought someone was actually trying to do it. Wow!

    -Sanjay.
  28. Logging using JMS is a bad idea...[ Go to top ]

    I honestly think that this is a very bad idea. There are several reasons why. Just some of them:

    - When a log message documents an error, the application often needs to guarantee that the message is actually written. Unfortunately, if the error lies within the JMS service itself, chances are that you cannot create the log message at all.

    - JMS adds overhead because of network transport etc.

    - Ordering of log messages with the same timestamp may be needed and cannot be guaranteed.

    - Logging for auditing purposes may well require that the message be persisted. It might *not* be acceptable to persist it in the local JMS storage file. Technically, this is not much better than logging to a local file and consolidating at defined intervals.

    - If sending alerts to system management is required in error conditions, there are tools on the market that can be used. There is no real need to reinvent the wheel. Besides, the alerting infrastructure should be mostly independent of the runtime infrastructure.

    Karl
  29. The best way to do asynchronous messaging is JavaSpaces.
    For asynchronous messaging with persistency, GigaSpaces provides the world's first JavaSpaces data grid with built-in high availability and scalability. It can deliver 10K messages per second between different clients using standard machines.
     
    Best Regards,
            Shay
    ----------------------------------------------------
    Shay Hassidim
    Product Manager, GigaSpaces Technologies
    Email: shay at gigaspaces dot com
    Website: www.gigaspaces.com
  30. Shay, a space represents state, and state is orthogonal to messaging, therefore by design, JavaSpaces is not The best way for Asynchronous messaging. Wouldn't JMS (being designed for async messaging) be a better fit?

    BTW were those numbers for distributed? Or for within a single JVM? I'm just wondering because no one I've talked to has seen numbers anywhere near that from Gigaspaces.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Clustered JCache for Grid Computing!
  31. Cameron,
    Your reply represents exactly what many JMS "addict" guys think – but in fact the opposite is true. The JavaSpaces approach provides both synchronous and asynchronous messaging with one single API.
    You can have one-to-many, many-to-many, and many-to-one, FIFO or non-FIFO messaging. A short dive into the JavaSpaces world will show you how simple it is… Running the examples provided as part of the GigaSpaces Platform can be a pretty good starting point for many of the patterns you can build with JavaSpaces. We will soon provide the ability for JMS users to migrate to/use JavaSpaces very easily, to let them benefit from the grid server we provide to enhance their applications' performance and distributed processing capabilities.
    On top of basic JavaSpaces, GigaSpaces provides a reliable distributed cache with the ability to have a fully clustered master cache – within a single click.
    Our latest benchmark results with the 3.2.1 release show that we can deliver 10K messages per second between 2 remote machines (2-CPU Intel-based, 2.4 GHz) using our P2P cluster technology over a 1G network.
    Using our embedded space technology we can handle about 50K-100K messages per second. We will publish this benchmark soon.

    Best Regards,
            Shay
    ----------------------------------------------------
    Shay Hassidim
    Product Manager, GigaSpaces Technologies
    Email: shay at gigaspaces dot com
    Website: www.gigaspaces.com
  32. Our latest benchmark results with the 3.2.1 release show that we can deliver 10K messages per second between 2 remote machines (2-CPU Intel-based, 2.4 GHz) using our P2P cluster technology over a 1G network. Using our embedded space technology we can handle about 50K-100K messages per second. We will publish this benchmark soon.
    I guess that is an OK number for embedded spaces, but why would your customers want to accept a single point of failure like that? FWIW, we can do over 100K messages per second in a P2P cluster running good old 100Mbit ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  33. Hmm, I guess there is no limit to the complexity of the solutions engineers can create for simple problems. Log4j out of the box comes with an asynchronous appender that can wrap any other appender (e.g. a database one; as a matter of fact, Log4j comes with a JDBC appender). Using JMS for logging is huge overhead.
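    The Log4j setup the poster describes can be sketched as a configuration file. Note that in Log4j 1.x the AsyncAppender has to be wired via the XML configurator, since the properties format cannot express appender references. The driver class, JDBC URL, table, and buffer size below are placeholders of my own, not values from the article:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
    <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

      <!-- JDBCAppender: one INSERT per log event (driver/URL/table are placeholders) -->
      <appender name="DB" class="org.apache.log4j.jdbc.JDBCAppender">
        <param name="driver" value="org.hsqldb.jdbcDriver"/>
        <param name="URL" value="jdbc:hsqldb:hsql://localhost/logdb"/>
        <param name="user" value="sa"/>
        <param name="password" value=""/>
        <param name="sql"
               value="INSERT INTO LOG_MESSAGE (CREATED, PRIORITY, CATEGORY, MESSAGE) VALUES ('%d', '%p', '%c', '%m')"/>
      </appender>

      <!-- AsyncAppender buffers events and forwards them on a background thread -->
      <appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
        <param name="BufferSize" value="256"/>
        <appender-ref ref="DB"/>
      </appender>

      <root>
        <priority value="error"/>
        <appender-ref ref="ASYNC"/>
      </root>
    </log4j:configuration>
    ```

    This gets asynchronous, database-backed logging with no custom code at all, which is the poster's point about JMS being unnecessary here.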
  34. I totally agree with you, Philippe - JMS for logging is huge overhead.
    My point was about synchronous vs. asynchronous messaging, not the logging issue.
  35. Hi all, is there any recent reference implementation of async logging using Log4j? If yes, please do let me know.
  36. Awesome article[ Go to top ]

    Describes each component involved in the project in sufficient detail.