Discussions

News: $100 from Gavin King? Win his Hibernate Performance Challenge

  1. Gavin King is offering $100 to anyone who can show a significant performance difference between a piece of Hibernate code and handcoded JDBC that executes the same SQL calls. This is an interesting challenge that can help Hibernate by having you try to find any performance issues! Why not take the challenge?

    http://www.hibernate.org/News/PerformanceChallenge

    Gavin: Are the winnings in US dollars, or AUS dollars (pesos?) ;)

    Threaded Messages (71)

  2. Performance of O/R mappers

    Hi,

    This performance challenge is surely a very interesting idea, but I am afraid that the main performance problems of O/R mappers are not covered by it. I believe that any good O/R mapper will be no more than a couple of percent (if any) slower than plain JDBC when executing the same SQL. The real problem is finding the optimal SQL to perform a business task. This is hard to do for a particular database, and much harder still to do optimally across different databases.

    I would like to know: how good is Hibernate at finding the optimal queries/DML?

    Mileta
  3. Performance of O/R mappers

    Absolutely!
  4. When I changed the persistence layer of my project from CMP entity beans with container-managed relations (EJB 2.0) to Hibernate, it was about twice as fast. (I agree with the purists: it wasn't a calibrated scientific benchmark.) But still, given those results, I actually don't care how Hibernate does it :-)

    And on top of that I got inheritance, a powerful query language, and better portability... what more does a developer's heart want!

    Regards,
    Tom Baeyens
    Founder of Java Business Process Management (jBpm)
  5. Performance of O/R mappers

    If it is hard for O/R mappers to come up with optimal SQL, then it is next to impossible for average developers, who often do not even know the basics of RDBMS theory and SQL.

    In my experience Hibernate performance is great and I do not care even if it is 5% slower than _optimally_ handcoded SQL. IMO, a 5% difference at the ORM level causes less than a 0.1% difference in the overall performance of a typical web application.
  6. Performance of O/R mappers

    This benchmark suggestion is rather ridiculous. As has been pointed out earlier, reality is not about comparing both options executing the same simple SQL statement.

    IMO, it is still easy to fill a report by formulating one SQL query that fetches a complex result set from multiple tables using subselects, outer joins, projections and all the rest, where achieving the same effect with an O/R mapper will force you to navigate the object net, having the mapper issue scores of queries under the hood and getting horrible performance. The strength of "raw" relational database access lies in the availability of an extremely powerful *declarative* query language (called SQL) that lets you offload the processing to the server, using the sophisticated optimization technologies built into the DBMS and minimizing network traffic. A client-only technology will always fall short in this contest.

    As an aside, I remember when I first read about the access strategies used by early EJB implementations (and mandated by the spec), where loading an EJB would first fetch the key data, and then issue another DBMS call to load the bean proper when it was accessed. It was clear at first glance: if you were looking for a way to kill your application's performance - look no further, there it was. The only way to make sense of it was to understand that the spec was invented by a company selling server hardware.

    Bottom line: by all means use a mapper where it is appropriate, but don't forget about the other capabilities that are already in your hands. You've made it this far in the programming trade; learning a little SQL won't kill you.

    Christian
  7. Performance of O/R mappers

    >> This benchmark suggestion is rather ridiculous. As has been pointed out earlier, reality is not about comparing both options executing the same simple SQL statement. <

    It depends on what you are trying to do. In this case I am concerned with finding problems like thread contention, memory leaks and just plain inefficient code that might affect the _scalability_ of Hibernate under load. Obviously, I don't think there _are_ any :)


    Of course ORM performance is all about generating efficient SQL. It's weird to have this thrown back at me when I've been saying it till I'm blue in the face for the past two years....


    >> IMO, it is still easy to fill a report by formulating one SQL query that fetches a complex result set from multiple tables using subselects, outer joins, projections and all the rest <
    The Hibernate query language provides ALL the things you just mentioned: subselects, outer joins, projection. Also aggregation, etc. And it's all MUCH less verbose than handwritten SQL. And in HQL you refer to classes and properties, and you have polymorphism.

    Or, you can write a SQL query and have Hibernate populate objects for you. And you _still_ benefit from the use of the mapper!
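
    For illustration, here is a rough sketch of the kind of HQL in question (the ReportRow / LineItem classes are invented for the example, and exact API details vary by Hibernate version):

    // given an open Hibernate Session; projection into a report object,
    // with an implicit join via the path expression item.order.customer
    List report = session.find(
        "select new ReportRow(item.order.customer.name, item.amount) " +
        "from LineItem item where item.order.status = 'OPEN'");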


    >> achieving the same effect with an O/R mapper will force you to navigate the object net, having the mapper issue scores of queries under the hood and getting horrible performance.<
    You must be thinking of Some Other O/R Mapper. I encourage you to check out Hibernate2. You will be amazed at what is now possible.


    >> You've made it this far in the programming trade, learning a little SQL wont kill you. <
    Funny, this is something else I usually say to people.....


    :-)
  8. Hibernate and SQL

    The Hibernate query language provides ALL the things you just mentioned: subselects, outerjoins, projection. Also aggregation, etc. And all MUCH less verbose than handwritten SQL. But then in HQL you refer to classes and properties, and have polymorphism.


    > You must be thinking of Some Other O/R Mapper. I encourage you to check out Hibernate2. You will be amazed at what is now possible.

    I admit my statement was a shot in the dark, as I don't know Hibernate well enough. I am glad to hear that you have already covered these issues, and look forward to my next opportunity to learn/use Hibernate. Certainly heard a lot of good stuff about it.

    regards,
    christian
  9. Performance of O/R mappers


    > IMO, it is still easy to fill a report by formulating one SQL query that fetches a complex result set from multiple tables using subselects, outer joins, projections and all the rest, where achieving the same effect with an O/R mapper will force you to navigate the object net, having the mapper issue scores of queries under the hood and getting horrible performance. The strength of "raw" relational database access lies in the availability of an extremely powerful *declarative* query language (called SQL) that lets you offload the processing to the server, using the sophisticated optimization technologies built into the DBMS and minimizing network traffic. A client-only technology will always fall short in this contest.
    >

    This is one of the common misperceptions I often see... Look! I can merge this all into one query, so it must be faster!

    The reality is that your one query with many subselects and projections will be slower than the 5 queries used by an ORM to populate your main object and its 4 collections of sub-objects... But, as Gavin said, you can choose to do this with Hibernate as well.
  10. Performance of O/R mappers

    Personally, I think O/R mappers are great things for those who a) want to worry about other areas of the application, such as flow control and business logic, and aren't too reliant on the database to do any serious work for them, or b) don't know much about SQL. My main problem with them is that, like any layer of abstraction, they abstract away some of the power of the underlying implementation, either by imposing artificial limits or by not exposing features. I don't know much about Hibernate and have never used it, so I may be wrong, but I can't see an easy way it would achieve set-based processing, for instance.

    One of the fastest ways to do multi-record database operations is "by set". That is, you compile a set of data in the DB (usually in a global session temp table or some other table that isn't in the sys catalogs and isn't logged) and then perform set-based operations on it.
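
    For illustration, a minimal JDBC sketch of that set-based idea (the table and column names are hypothetical, and temp-table conventions vary by database):

    // given a java.sql.Connection conn; temp_order_ids is assumed to be a
    // session-scoped temporary table created elsewhere
    Statement stmt = conn.createStatement();
    try {
        // 1. compile the working set in the temp table
        stmt.executeUpdate(
            "insert into temp_order_ids (order_id) " +
            "select order_id from orders where status = 'OPEN'");
        // 2. one set-based operation against that set, instead of row-by-row updates
        stmt.executeUpdate(
            "update order_lines set flagged = 1 " +
            "where order_id in (select order_id from temp_order_ids)");
    } finally {
        stmt.close();
    }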

    Surely, though, the whole debate just boils down to a "horses for courses" type of decision. If you want to leverage all the power of the DB (say, in a very data-operation-intensive scenario), use JDBC; but if all you want to do is stuff your information in and out of the DB and not really worry too much about how it is done, an O/R mapper can save you an awful lot of time and heartache.

    Jesse
  11. misconception

    This is one of the common misperceptions I often see... Look! I can merge this all into one query, so it must be faster!

    >
    > The reality is that your one query with many subselects and projections will be slower than the 5 queries used by an ORM to populate your main object and its 4 collections of sub-objects... But, as Gavin said, you can choose to do this with Hibernate as well.

    That is one of the most outlandish statements I have heard in a long time, and one that is obviously totally unsubstantiated, as you don't even know the SQL query.
    Anyone who has dealt with databases more closely knows that query performance can vary widely, depending on the RDBMS in use, the database configuration (indexes and the like), and even details of the query formulation. To get really good performance you will have to do some work, that much is true. This is also an area where I have some serious doubts with regard to any query language supplied by an O/R mapper. It can sometimes be a matter of a slight modification to the SQL statement that decides between perfect and unacceptable performance.

    I have seen applications that would issue literally hundreds of SQL statements through an O/R mapper to collect data for a complex dialog, taking several minutes before the dialog appeared. After getting past the "pure OO at all costs" and "databases are for weaklings" fundamentalists, it was an easy task to improve performance by several orders of magnitude.

    As I said before, I am all for O/R mappers; however, I am against myths and propaganda. A good database query optimizer is still a good database query optimizer, and a network roundtrip will always be a network roundtrip.

    peace (as Cameron Purdy always says),
    Christian
  12. ooorrrmmmm

    Guys, stop talking as if it's an either/or thing!

    Three things are true:

    * If you are doing a complex query across five huge tables of millions of rows then a hand-optimized query certainly might be faster than what Hibernate will generate for you (you can do query hints and that kind of nonportable stuff that is quite hard for an ORM)

    * 98% of all queries in 90% of Java applications are not like that

    * for the remaining 98%, Hibernate can probably do a better job of achieving optimal data access than your development team can given time/money constraints - furthermore, it will be easier to tune the performance later (even w/o code changes, perhaps)

    So, what we do is make it extremely easy to mix 'n match direct handwritten SQL with tool generated stuff. And this works nicely in practice.

    You can mix Hibernate DAOs with direct JDBC DAOs, by using a Hibernate UserType, for example. Or, you can just grab the JDBC connection at any time and execute any SQL you like, in the same transaction. Or, you can give us some SQL you wrote yourself, and have Hibernate instantiate and manage objects for you.
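
    For illustration, a rough sketch of dropping down to plain JDBC inside the same Hibernate unit of work (the SQL and variable names are invented; exception handling omitted):

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();

    session.save(newOrder);                 // ordinary mapped-object work

    // hand-written SQL against the same connection, in the same transaction
    Connection jdbc = session.connection();
    PreparedStatement ps = jdbc.prepareStatement(
        "update order_stats set order_count = order_count + 1 where customer_id = ?");
    ps.setLong(1, customerId);
    ps.executeUpdate();
    ps.close();

    tx.commit();
    session.close();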

    In practice, people find that they need this stuff rarely, because HQL is so powerful. But from time to time, when it is needed, Just Do It. :)
  13. ooorrrmmmm

    Gavin, I like the tradeoffs you've made. No O/R mapper can do everything perfectly; after all, we're abstracting away details.

    Will you ever start porting Hibernate to .NET? I know of NHibernate, but it's a dead project.

    A lot of people are moving to .NET / mono on Linux these days, and a product such as Hibernate on top of MySQL is a killer application.

    Our company would have paid good bucks for it (well... in the range of, say, 2-3000 USD for a site license.) Do you plan to go commercial, offering support programs and such? Perhaps that could help finance .NET porting... we need both a Java and .NET version badly ;-)
  14. JLCA?

    <han theman>
    we need both a Java and .NET version badly ;-)
    </han theman>

    You can give it a try using JLCA.

    I have the same need. But first I have to install .NET 1.1 to run JLCA :-(.
  15. ooorrrmmmm

    Han

    > Will you ever start porting Hibernate to .NET? I know of NHibernate, but it's a dead project.

    I am not sure if you are looking for an open source .NET alternative to Hibernate, but there are a couple of O/R mappers out there on the market for .NET.

    check out these:
    http://www.llblgen.com/defaultgeneric.aspx
    http://www.thona-consulting.com/content/products/entitybroker.aspx

    --Dilip
  16. My real world example

    I had been using hand-tooled SQL inside DAOs for our system. As we began to migrate the legacy systems/databases into our new system, we found many areas where the database design was lacking. After doing some database refactoring and deciding on a new design that would meet our needs, the thought of going out and rewriting SQL code for some 300-400 tables with multiple queries made me nauseated. Anyway, to make a long story short, it is easy to write a value object to map to a table, and the Hibernate mapping files are much easier to maintain than raw SQL. I do still write some SQL for special circumstances, but for about 90% of my data access, Hibernate handles it just fine, even better than I probably could have done. It has reduced the code inside my DAOs by about 80%. That's my real-world example, and I would imagine many other people have been in similar situations.
  17. While you can never get away from hand-written SQL (unless you are developing extremely simple applications), using O/R mappers for most of the SQL is a huge productivity gain. Maintaining mundane persistence SQL as hand-written JDBC calls is neither elegant nor maintainable. Today's O/R mapping tools have smart caches (distributed, too). They also avoid unnecessary SQL (e.g. they do not generate SQL for columns that have not changed). This is important for a high-throughput application.

    my 2 cents

    Sriram
  18. misconception

    The fact is that multi-subselect queries and projections don't scale. Maybe we're talking different types of apps, but the database usually ends up being the bottleneck in large-scale systems (ok, Cameron, insert plug for Coherence here :-)) and loading it with all of this work is not an efficient use of resources. For one thing, you're tying up the CPU of the database server to do more work. For another, with big queries like that you end up wasting a lot of bandwidth, returning columns on every record that are only used once or a few times.

    In this case, it's better to be able to put some of the processing to pull the records together onto the middle tier and only have the database do 5 quick and simple queries.
  19. misconception

    The fact is that multi-subselect queries and projections don't scale.


    I'd accept this as your perception.

    >the database usually ends up being the bottleneck in large-scale systems

    Not my perception. We are obviously talking about different apps. I can certainly say that mine is a real one.

    IMO, saying that the database will be the bottleneck doesn't make sense for another reason. The apps I know usually consist of a lot of simple CRUD accesses, where in terms of the SQL it makes no difference whether you use a mapper or not. In these cases a mapper is a must, as you will have to wrap the data into objects somehow anyway.
    But then there are a few places where a complex report is required for display or other processing purposes. In those cases one should be open to look at all available options to meet the requirement - even if that means leaving the nice cosy world of the O/R mapper. Beyond that, it is always wise to keep an eye on the DBMS access profile of your app.
    As I said, trying to state that a hand-coded and optimized SQL query will (always) be slow or won't scale is plain propaganda, at least until you know the query, and the environment in which it is run.

    Hmm... I don't know how many times I have read or led this discussion. Up to some point it can still be fun. But I guess we're getting close to the limit.

    Christian
  20. misconception

    Christian wrote:
    > But then there are a few places where a complex report is required for display or other processing purposes. In those cases one should be open to look at all available options to meet the requirement - even if that means leaving the nice cosy world of the O/R mapper.

    Gavin introduced me to a Hibernate feature - I think he called it projection?

    Where you can do the following:

    "select new ReportObject(someobject.field, anotherobject.anotherfield) from someobject, anotherobject where ....."

    So, if the overhead of the above statement is minimal compared to straight SQL for a complex report, then I'd rather use an O/R mapper such as Hibernate to do complex reporting, as it reduces my code size astronomically.
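
    (For anyone who hasn't seen the feature: the ReportObject in that query is just a plain class with a matching constructor, roughly like the sketch below - the names simply follow the query above and are purely illustrative.)

    public class ReportObject {
        private final String field;
        private final String anotherField;

        // constructor invoked by the "select new" projection
        public ReportObject(String field, String anotherField) {
            this.field = field;
            this.anotherField = anotherField;
        }

        public String getField() { return field; }
        public String getAnotherField() { return anotherField; }
    }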

    Bill
  21. misconception

    So, if if the overhead of the above statement is minimal to straight SQL for a complex report, then the I'd rather use an O/R mapper such as Hibernate to do complex reporting as it reduces my codesize astronomically.


    My words exactly. If Hibernate's QL does the job, I will probably prefer it anytime (haven't tried it yet). If it reaches its limits (which I fear it will, long before the full power of SQL has been exploited), I can use straight SQL, and Gavin tells me I can populate my objects from the query result.

    As Gavin said, it's not a black/white world. I'd argue, though, that the black/white ratio is probably less extreme than his earlier figures suggest (98% vs. 2%).
  22. misconception

    "The fact is that multi-subselect queries and projections don't scale. "

    Absolutely NOT. This is completely dependent on the database.

     "In this case, it's better to be able to put some of the processing to pull the records together onto the middle tier and only have the database do 5 quick and simple queries."

    <RUNNING AND COWERING BEHIND THE HILLS>

    Please tell me you didn't say that.

    That assertion is a complete falsehood with an Oracle database. It might be true on MySQL or Sybase. But not on Oracle. Oracle will be faster at doing a complex query than you can in memory with your for loops. Guaranteed.

    J2EE developers have a tendency to underestimate the power available to them in the database. O/R mappers need to USE this power, not just give you objects so you can iterate over them. That will lead to unscalable, slow, and resource hogging applications.
  23. Open Source vs. closed source databases

    Stu:
    That assertion is a complete falsehood with an Oracle database. It might be true on MySQL or Sybase. But not on Oracle. Oracle will be faster at doing a complex query than you can in memory with your for loops. Guaranteed.
     
    J2EE developers have a tendency to underestimate the power available to them in the database.


    Actually, I would rephrase your statement to make the distinction between "Open Source developers" and "commercial developers" (a bit arbitrary, but bear with me).

    In short, there are developers out there who have only ever used open-source databases such as MySQL or Postgres, and they draw all kinds of conclusions based on their experience. I suspect Jason falls in that category (correct me if I'm wrong, Jason, but if your assertion included Oracle, then I have a few extra questions for you :-)).

    Then, there are the other developers, who are involved in mission-critical applications and have to deal with the intricacies of Oracle and SQL Server on a daily basis.

    These two worlds are vastly different.

    --
    Cedric
  24. Java vs SQL?

    Stu:

    > That assertion is a complete falsehood with an Oracle database. It might be true on MySQL or Sybase. But not on Oracle. Oracle will be faster at doing a complex query than you can in memory with your for loops. Guaranteed.

    > J2EE developers have a tendency to underestimate the power available to them in the database.

    Funny thing, I used to say exactly the opposite: that DB developers tend to underestimate the power of J2EE (well... let's say "Java" so the thread doesn't branch into Entity EJBs...)

    I have programmed, tested and benchmarked a complex query in Oracle that ran at least 10 times slower than the equivalent "multi-query, post-processing" Java counterpart. I have to confess that I did very little optimization of the original query, but doing it the "Java way" took me only 20 minutes, and it was 10 times faster (and fast enough, btw).

    If I had programmed a stored procedure with some queries and the post-processing, I bet that it would have been even faster, but my hands were tied in that respect (no stored procedures).

    Also there is another point to consider: Normalized databases are optimized for the use of SQL and to maintain data integrity at the DB layer while conserving space.

    BUT, I bet that if you denormalize your DB just a bit (or perhaps more than that), maintain data integrity at the application level (not difficult to do, if you set out to do it), and don't mind losing a little bit of disk space (measure it just in case), THEN using an O/R mapper can be as fast as or faster than hand-coded SQL.

    Why? Because in the typical case (one object per table, with simple relationships between them) there is little difference between the kind of queries generated for the ORM and the handcoded ones...

    Am I too far off the target?

    Rafael
  25. Java vs SQL?

    Rafael,

    "I have programmed, tested and benchmarked a complex query in ORACLE that ran at least 10 slower than the equivalent "multi-query, post-processing" Java counterpart. I have to confess that I did very little optimization of the original query, but doing in the "Java Way" took me only 20 minutes, and it was 10 times faster (and fast enought, btw). "

    If it was fast enough and took you 20 minutes, good for you, it probably was the right solution.

    However, I'm quite sure the Oracle query could be rewritten to be just as fast, if not faster, than your Java approach that used multiple queries.

    A lot of times this just comes down to skill sets and comfort levels. Hey, I get it, you wanna get the job done, you're more comfortable in Java, so you do it in Java. The problem is that doing it in Java requires network overhead, processing overhead, and is largely re-inventing the wheels that are already inside of Oracle.

    "If I had programmed a store procedure with some queries and the post-proc, I bet that it would have been even faster, but my hands where tied in that respect (no store procedures) "

    Well you could send an anonymous PL/SQL block "begin <code> end;" from Java that has the same effect. And I'm not advocating record-at-a-time processing, just putting the whole thing in a block and using PL/SQL's features to get the job done.
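
    For illustration, a rough sketch of what sending such a block through JDBC can look like (the PL/SQL body and bind variables are invented; this assumes an Oracle connection):

    // one round trip: the whole block runs inside the database
    CallableStatement cs = conn.prepareCall(
        "begin " +
        "  update accounts set balance = balance * (1 + ?) where region = ?; " +
        "end;");
    cs.setDouble(1, rate);
    cs.setString(2, region);
    cs.execute();
    cs.close();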

    Doing it in Java is fine too, it's a trade-off between network and CPU on your app server vs. your database.

    "BUT, i bet that if you denormalize your DB just a bit (or perhaps more than that) and maintain data integrity at Application Level (not difficult to do, if you propose to do it) and don't mind losing a little bit of disk space (measure it just in case) THEN using a OR mapper can be as fast or faster than hand-coded SQL"

    I think OR mappers are usually quite fast for simple SQL. I like ORMs. I just don't like it when people use ORMs to grab a bunch of objects and then loop through them in memory, and then wonder why their app server's CPU is getting killed when they try to scale. A database has a query processor and indices that will enable such operations to execute with much less resource consumption.

    There are problems with maintaining integrity in the application level. These problems have existed for decades in the old mainframe COBOL-based systems from the 60's and 70's -- maintaining integrity in your object model makes a LOT of assumptions about your system and your data:

    a) Are you certain your application will be the ONLY one touching the data? History shows this is rarely the case after some time in production. Other applications now must only access data through your object model if it is to take advantage of integrity constraints. Or they can write their own integrity model (hope it's not buggy!). This is a tremendous problem with the big CRM and ERP vendors like SAP, PeopleSoft, Siebel, Clarify, etc, which is why you have to throw an excessive amount of hardware at these solutions.

    b) You can't take advantage of SQL tools that can read the structure of your database's constraints, so reverse-engineering efforts will be harder. In the future, when your application is re-written or incorporated into a new, larger system, which is easier: reverse-engineering your object model to determine the constraints, or taking a look at a data dictionary? I've had to go through 1.5 million lines of COBOL code to figure out its constraints. It's not fun.

    Certainly you can replicate your constraints in your application for responsiveness purposes. But that does not eliminate the need for checks at the database level - they are the last chance to save your data from inconsistency.
  26. Just use an OODBMS, people!

    Why don't we all just use an OODBMS and forget these RDBMS things?!

    Resistance is futile. Your objects will be persisted.
  27. Java vs SQL?

    A lot of times this just comes down to skill sets and comfort levels. Hey, I get it, you wanna get the job done, you're more comfortable in Java, so you do it in Java. The problem is that doing it in Java requires network overhead, processing overhead, and is largely re-inventing the wheels that are already inside of Oracle.

    >

    Having large result sets with many columns you only use once per 1000 records (in joins) has its own network overhead.
    >
    > Well you could send an anonymous PL/SQL block "begin <code> end;" from Java that has the same effect. And I'm not advocating record-at-a-time processing, just putting the whole thing in a block and using PL/SQL's features to get the job done.
    >

    Do you really recommend this? I can't imagine anything uglier...

    > I think OR mappers are usually quite fast for simple SQL. I like ORMs. I just don't like it when people use ORMs to grab a bunch of objects and then loop through them in memory, and then wonder why their app server's CPU is getting killed when they try to scale. A database has a query processor and indices that will enable such operations to execute with much less resource consumption.
    >

    I started this thread by saying that sometimes it's better to have multiple smaller queries... note that I didn't say this was through an ORM layer. The specific example I'm thinking of is a large batch process that involved synchronizing the records from 4 or 5 open scrollable result sets... It was 3 orders of magnitude faster than the ORM solution we had (of course it was a horrible Castor implementation, but still).

    > There are problems with maintaining integrity in the application level. These problems have existed for decades in the old mainframe COBOL-based systems from the 60's and 70's -- maintaining integrity in your object model makes a LOT of assumptions about your system and your data:
    >
    > a) Are you certain your application will be the ONLY one touching the data? History shows this is rarely the case after some time in production. Other applications now must only access data through your object model if it is to take advantage of integrity constraints. Or they can write their own integrity model (hope it's not buggy!). This is a tremendous problem with the big CRM and ERP vendors like SAP, PeopleSoft, Siebel, Clarify, etc, which is why you have to throw an excessive amount of hardware at these solutions.
    >

    So we should throw it all in the database? You don't work for Oracle, by any chance, do you? :-)

    > b) you can't take advantage of SQL tools that can read the structure of your database's constraints, so reverse-engineering efforts will be harder. in the future when your application is re-written or incorporated into a new, larger system, which is easier: reverse-engineering your object model to determine the constraints, or to take a look at a data dictionary? I've had to go through 1.5 million lines of COBOL code to figure out its constraints. It's not fun.
    >
    > Certainly you can replicate your constraints in your application for responsiveness purposes. But that does not eliminate the need for checks at the database level - they are the last chance to save your data from inconsistency.


    Referential Integrity is definitely your friend, but putting business logic there is not the direction I would go.


    Note here that I'm not the most adamant about this in my organization... One of our guys is from the COBOL world doing mods to a large ERP system, and he wanted us to have NO joins, because he was concerned about their performance.
  28. Joins


    >> Note here that I'm not the most adamant about this in my organization... One of our guys is from the COBOL world doing mods to a large ERP system, and he wanted us to have NO joins, because he was concerned about their performance. <
    Jason, joins are fast in modern RDBs. Faster than roundtrips. I think some older guys remember when they were more expensive, but this was before my time ;)
  29. Joins


    > Note here that I'm not the most adamant about this in my organization... One of our guys is from the COBOL world doing mods to a large ERP system, and he wanted us to have NO joins, because he was concerned about their performance.
    >
    > Jason, joins are fast in modern RDBs. Faster than roundtrips. I think some older guys remember when they were more expensive, but this was before my time ;)

    Yeah, I know, and that's what I tell him... but on the other hand, there's still a lot of cost in doing complicated queries with correlated subqueries, and it's often more efficient NOT to do them. If you're building a big report, you'll often end up with tens of thousands of records. If you're adding 100 columns to every one of those records for the header info, which you only use once every 1000 records, then isn't it more efficient to select the line items and the header separately? This gets to be more and more true the more levels deep you go and the more subqueries you add.
  30. Hibernate Book?

    Gavin, how is Hibernate in Action coming along? I am looking forward to some more beta chapters.
  31. Hibernate Book?

    We are about halfway there now ... maybe ;)

    It is a Hard Thing. Our goals were/are pretty ambitious, perhaps too ambitious :)


    A new chapter will be ready for TSS soon.
  32. Distributed cache

    Does Hibernate handle invalidating the cache between JVMs? If so, can it do this at an object and/or property level? Can you point me to some doco on this subject?

    Thanks,
    Rod Brehm
  33. Distributed cache

    Hibernate has a pluggable cache API that can support caching in a cluster, including support for Tangosol Coherence.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  34. caching

    I've also just added caching of query result sets (in addition to object lookups by id) to CVS... it's something that I always thought was better done in the application, but it was very easy to implement, and Bill Burke convinced me there were some use cases for it.
  35. Java vs SQL?

    Having large result sets with many columns you only use once per 1000 records (in joins) has its own network overhead.

    Why would you even bring them across the wire then? You bring across what you need.

    You could send an anonymous PL/SQL block "begin <code> end;" from Java that has the same effect. And I'm not advocating record-at-a-time processing, just putting the whole thing in a block and using PL/SQL's features to get the job done.

    Do you really recommend this? I can't imagine anything uglier...


    Why is it any uglier than sending SQL across with a JDBC call? Proprietary is not ugly, it's a choice. It's not an option if your team doesn't have PL/SQL as a skill set. If that's the case, move along.

    I started this thread by saying that sometimes it's better to have multiple smaller queries... note that I didn't say this was through an ORM layer. The specific example I'm thinking of is a large batch process that involved synchonizing the records from 4 or 5 open scrollable result sets... It was 3 orders of magnitude faster than the ORM solution we had (of course it was a horrible Castor implementation, but still)

    Many large batch processes do not need record-at-a-time semantics to synchronize result sets. I do not know the specifics of your situation, so I won't comment further other than to suggest you could have written it as a stored procedure job that got it done quickly with very little code.

    So we should throw it all in the database? You don't work for Oracle, by any chance, do you? :-)


    No, I don't, but it's DATA, it SITS in the database. Does it make sense to
    a) grab all your data and throw it into another machine
    b) figure out what to do with it
    c) send it back to the database after you've figured out what you're doing with it

    vs.
    a) grab it, figure out what to do with it, save it -- on the same machine?

    I mean really, It's not that weird of an idea. It's how most systems were written before Java :-)

    Referential Integrity is definitely your friend, but putting business logic there is not the direction I would go.

    Business logic, domain logic often is referential integrity. Tell me, if I had a requirement in my data model: "no employee may report to a subordinate", how would you implement it?

    From what I can tell, you have three choices:
    a) a stored proc
    b) triggers
    c) a java domain model.

    (or choice (d), don't bother validating it :)

    Each approach is loaded with gotchas and tradeoffs, none are perfect. I think the stored proc approach is easiest to understand and maintain, however.
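
    As a concrete illustration of option (c), here is a minimal domain-model sketch of the "no employee may report to a subordinate" rule (class and field names are hypothetical, and of course this only guards writes that go through the model):

    public class Employee {
        private Employee manager;

        public Employee getManager() { return manager; }

        public void setManager(Employee newManager) {
            // walk up from the proposed manager; reaching ourselves means the
            // assignment would create a cycle, i.e. reporting to a subordinate
            for (Employee m = newManager; m != null; m = m.getManager()) {
                if (m == this) {
                    throw new IllegalArgumentException(
                        "an employee may not report to a subordinate");
                }
            }
            this.manager = newManager;
        }
    }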

    Note here that I'm not the most adamant about this in my organization... One of our guys is from the COBOL world doing mods to a large ERP system, and he wanted us to have NO joins, because he was concerned about their performance.

    Well, apparently you've got bigger problems.. :) On some older databases, pre-1995 timeframe, maybe this was a problem.
  36. huh?

    >> I think OR mappers are usually quite fast for simple SQL. I like ORMs. I just don't like it when people use ORMs to grab a bunch of objects and then loop through them in memory, and then wonder why their app server's CPU is getting killed when they try to scale. <

    What kind of ORM technology are you talking about here? This might be a problem for your homemade ORM that you spent a week on. It is NOT a problem for Hibernate! Our query language has had joins forever. We now have such notions as dynamic outer join fetching, where you can (easily) tell Hibernate at runtime how deep you want to fetch into the graph of objects (all in a single select).
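
    Something along these lines, for example - Cat/kittens are the stock documentation classes, and the exact Criteria/fetch-mode API differs between releases, so treat this as a sketch only:

    // fetch each Cat and its kittens collection in a single outer-joined select
    List cats = session.createCriteria(Cat.class)
        .setFetchMode("kittens", FetchMode.EAGER)
        .list();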



    >> There are problems with maintaining integrity in the application level. <
    >> Are you certain your application will be the ONLY one touching the data? <
    >> you can't take advantage of SQL tools that can read the structure of your database's constraints <
    I think you are vastly confused about how Hibernate and other modern ORM implementations work. None of these things is the slightest problem for us. Geez, Hibernate's schema export tool will *automatically* generate your PK, FK, UK constraints. We even interoperate nicely with triggers and things like that.

    And Hibernate was designed from the ground up to assume it is not the only application accessing the database.



    >>This is a tremendous problem with the big CRM and ERP vendors like SAP, PeopleSoft, Siebel, Clarify, etc, which is why you have to throw an excessive amount of hardware at these solutions.<
    An ORM like Hibernate is not remotely similar to PeopleSoft. This is a complete bait-and-switch.
  37. huh?

    What kind of ORM technology are you talking about here? This might be a problem for your homemade ORM that you spent a week on. It is NOT a problem for Hibernate!

    Perhaps I misspoke. I was not discussing Hibernate's capabilities; I was suggesting that it is common for developers to grab a bunch of objects out of the database and then proceed to perform filters using for & while loops instead of actually taking advantage of Hibernate QL or SQL to do the join in the database.

    I think you are vastly confused about how Hibernate and other modern ORM implementations work. None of these things is the slightest problem for us. Geez, Hibernate's schema export tool will *automatically* generate your PK, FK, UK constraints. We even interoperate nicely with triggers and things like that.

    I am not confused at all, and I think you're reading things into my comments that were not intended. My comment had nothing to do with limits within Hibernate.

    I was commenting about performing integrity checks within your Java code instead of the database. Most "domain logic" is usually integrity checks of some sort; otherwise it is application-level logic. Good developers will rely on database foreign key and check constraints, but I can't tell you how many times I've seen these disabled or turned off in favor of doing pure application-level integrity checks for mythical "performance reasons".

    But even assuming PK/FK constraints are properly set in the database, there are 3 general approaches to coding data integrity for more complicated integrity rules that go beyond simple PK/FK relationships:
    - triggers in the database
    - stored procedures
    - java domain models

    So if you don't rely on stored procedures or triggers, you're going to have to code the data integrity in the Java domain model. That implies now that architecturally, other applications must either a) talk to the database through your domain model, or b) replicate the constraints in their own code. This is not a palatable approach.

    An ORM like Hibernate is not remotely similar to peoplesoft. This is a complete bait-and-switch.

    It is not a bait and switch at all; I am merely stating that ERP vendors such as PeopleSoft have to develop for multiple database servers and do not properly take into account the differences between them.

    I'm well aware that Hibernate is capable of varying SQL dialects, but there are still *differences* between database locking models that need to be taken into account! Oracle, for example, implements the serializable isolation level very differently from Sybase and SQL Server.

    Gavin, I am not criticizing Hibernate in particular with the above assertions. I am suggesting that an ORM implies serious trade-offs and architectural questions about why it's being used. It is not a decision that should be 'automatic', in my opinion.
  38. Open Source vs. closed source databases

    Actually, I would rephrase your statement to make the distinction between "Open Source developers" and "commercial developers" (a bit arbitrary, but bear with me).

    >

    <snip jealous nonsense/>

    Cedric, your jealousy of open source developers is evident, you wish you were out there frolicking in green technology pastures instead of being a drone at BEA. All the best developers are in open source, everyone knows that, get over it.

    marcf
  39. is he serious?

    .. or is he joking? I can't make up my mind.
  40. is he serious?

    .. or is he joking? I cant make up my mind.


    of course I am serious, I am always serious,

    hehe, it is the "all the best developers are in open source" line that throws Cedric off. He used to rant about it a year ago; he was so adamant about proving that they had good people at BEA. It amused us to no end because, while we do think the best are at JBoss ;) it's obvious to everyone that there are good people everywhere - yet he took it so seriously.

    My next line of t-shirts is "JBOSS GROUP: all your best developers are belong to us" and I will send a few to cedric's team, just to get on his nerves.

    I got to go back to serious work ;)
  41. Open Source vs. closed source databases

    Actually, I would rephrase your statement to make the distinction between "Open Source developers" and "commercial developers" (a bit arbitrary, but bear with me).

    > >
    >
    > <snip jealous nonsense/>
    >
    Marc:
    Cedric, your jealousy of open source developers is evident, you wish you were out there frolicking in green technology pastures instead of being a drone at BEA.


    Getting a bit defensive, aren't we?

    Where did I ever say which databases were best?

    All I said was that each database is different and just because certain things work on certain databases doesn't mean it extends to all of them.

    All the best developers are in open source, everyone knows that, get over it.

    You mean "All the best developers work for JBoss", right? And those who no longer do get their commit privileges revoked.

    Come on, all the best developers work for BEA, everybody knows that, get over it (and stop trying to hire them ;-)).

    You crack me up.

    --
    Cedric
  42. Open Source vs. closed source databases

    All the best developers are in open source, everyone knows that, get over it.

    >
    > Come on, all the best developers work for BEA, everybody knows that, get over it (and stop trying to hire them ;-)).
    >
    > You crack me up.

    that was my line. It is the way you come up with "open source bad", "BEA/proprietary good" any time there is a thread about *something*... you need to let go, it is becoming obsessive. I am reading a thread on SQL vs CMP and *somehow* you manage to chime in with open vs closed. LOL.

    You are a nervous wreck, time to jump ship and come work for me, we got work for you and you are french, you may even fit.

    marcf
  43. Open Source vs. closed source databases

    Marc:
    You are a nervous wreck, time to jump ship and come work for me, we got work for you and you are french, you may even fit.


    Nah, I'm a closed-source developer, I'm not worthy.

    --
    Cedric
  44. Open Source vs. closed source databases

    Cedric:
    > Then, there are the other developers, who are involved in mission-critical applications and have to deal with the intricacies of Oracle and SQL Server on a daily basis.
    >
    > These two worlds are vastly different.

    Quite true, Cedric, and thank you for providing the distinction. I guess I'm too steeped in the "corporate" world at the moment.
  45. In short, there are developers out there who have only ever used open-source databases such as MySQL or Postgres, and they draw all kinds of conclusions based on their experience. I suspect Jason falls in that category (correct me if I'm wrong, Jason, but if your assertion included Oracle, then I have a few extra questions for you :-)).

    >

    We use only Oracle. Here's an ugly little secret that has to figure in: Oracle is EXPENSIVE. Scaling Oracle is EVEN MORE EXPENSIVE. If you have to scale up in one box, you end up paying more and more per CPU. If you have to cluster it, you have to get an expensive shared disk array, plus even more expensive Oracle licenses.

    If you can avoid it, don't push all of your processing onto your database. Let it do what it can do quickly. Processing power is much cheaper, especially if you can do it without BEA licenses (sorry Cedric).

    > Then, there are the other developers, who are involved in mission-critical applications and have to deal with the intricacies of Oracle and SQL Server on a daily basis.
    >

    Yep. That's us. I just have different assumptions about where I want to spend my money (or not).
  46. Here's an ugly little secret that has to figure in: Oracle is EXPENSIVE. Scaling Oracle is EVEN MORE EXPENSIVE.

    Granted, but this thread was about performance and scalability.

    If you want to factor cost into it, you're opening up a can of worms. How expensive is it to buy an extra CPU + processor license for Oracle vs. buying an extra 4-CPU application server that needs to be administered? It varies from environment to environment. Some places won't let you install x86 for their apps; it has to be HP-UX or Solaris.

    If you can avoid it, don't push all of your processing onto your database. Let it do what it can do quickly. Processing power is much cheaper, especially if you can do it without BEA licenses (sorry Cedric).

    The point of this debate was that the database can do most processing and integrity checks in a much quicker and more scalable fashion than an application server. This does, however, assume that the added hardware and administrative costs of scaling out an application-server solution exceed those of the Oracle licenses.
  47. Performance of O/R mappers

    This is one of the common misperceptions I often see... Look! I can merge this all into one query, so it must be faster!

    >
    I have a question about this statement. Agreed that merging queries together may have an undesired impact in terms of the "cost of query", but is that the only cost we need to consider? If we could merge 5 queries into 1, it would reduce the network traffic from the JVM server to the DB server. Also, since DB calls are blocking network I/O calls, 5 queries will be more expensive than 1. So to me it appears that, considering the overall impact on performance, it will always be better to combine queries as much as possible. That is, for a particular business process, get as much data as possible from the DB in one shot and then process it programmatically. Please note that I am not suggesting merging queries for unrelated business processes.
    My question is: what parameters should one consider when trying to merge queries, if a business process permits it? Is database cost alone the correct parameter?
  48. Performance of O/R mappers

    Also, since DB calls are blocking network IO calls, 5 queries will more expensive than 1.


    Many times, separate queries are faster than an all-in-one query.
  49. $100 from Gavin King?

    How much do we get if we show Hibernate code that runs 100 times faster than raw JDBC?

    *"100 times faster than raw JDBC" is a trademark of Thought, Inc. Please contact Ward Mullins for licensing details.
  50. Is this the right challenge?

    -- I am a hibernate fan --

    Any O-R mapping tool will need to perform the following activities :

    (a). Convert a set of data access calls into a set of SQL statements.
    (b). Process the SQL queries and get the result sets.
    (c). Process the result sets and get a set of populated objects.

    Assuming Gavin doesn't get poorer by $100, it would go to prove that
    Hibernate does activities (b) and (c) efficiently. His statements in
    this thread seem to indicate that he is fairly confident about
    activity (a) as well, though that is not included in the performance
    challenge.

    The biggest impact on performance due to the use of O/R mappers has
    IMHO always been due to (a), and sometimes the sequence in which the
    statements are executed. I believe however that the simplicity
    and maintainability of resultant code is a sufficient benefit to
    compensate for most (as in > 95%) cases. We need a better understanding
    of how to deal with the other 5%.

    It might be a useful exercise to actually start documenting specific
    scenarios where O-R mappers (or hibernate specifically) were found to
    be challenged in step (a). This could result in a set of suggested
    alternatives or improvements in the hibernate code itself.

    As an example, think of a threaded forum scenario. Here each message
    has a reference to its parent thread. During display one needs
    to show all the threads in thread order (not like TSS does ;) ).
    It would be interesting to see what the performance difference is
    between Hibernate and raw JDBC. I believe in the scenario below it
    should be possible to get much better performance using raw JDBC
    than an O/R mapper, though it would require some complex coding.

    eg.
    CREATE TABLE MESSAGE
    (
      message_id INTEGER NOT NULL,
      parent_message_id INTEGER,
      thread_sequence_number INTEGER,
      thread_id INTEGER,
      parent_thread_id INTEGER,
      author VARCHAR(50),
      text TEXT
    );

    // define necessary indices

    // In the display code

    public void displayThread(int thread_id)
    {
        // Returns the top level message for the thread
        // (alternative implementations using Hibernate / JDBC)
        Message message = getMessageForThread(thread_id);
        displayMessage(message, "");
    }

    public void displayMessage(Message message, String buffer)
    {
        System.out.println(buffer + message.getText());
        String newBuffer = buffer + " ";
        // recurse over the children, indenting one level deeper
        Iterator iterator = message.getChildrenIterator();
        while (iterator.hasNext())
        {
            Message child = (Message) iterator.next();
            displayMessage(child, newBuffer);
        }
    }

    I shall look forward to being pleasantly surprised if O/R
    mappers work as fast as (not to speak of 100 times faster
    than ;) ) JDBC.

    -- Dhananjay
  51. Could I use DataSource with Hibernate?

    I'm a novice at Hibernate, but I want to know how to use a DataSource with it - or is there no need for us to deal with a DataSource at all?
  52. this challenge is flawed

    "the results must be reproduceable on various databases"
    "the actual SQL executed should be the same in both tests"

    The real competitor to Hibernate is wrapping a bunch of stored procedures. This challenge does not try to compete with that approach. This is a challenge over CPU overhead, not total execution time.

    And if it were a real challenge, it would allow me to vary the SQL code per database.
  53. SPs

    >> The real competitor to Hibernate is wrapping a bunch of stored procedures. <

    Really??? Wow!

    If you want to use stored procedures, use stored procedures. Hibernate does not compete with stored procedures.

    Stored procedures are a non-relational, non-object-oriented view of data, and do not work very well with domain models. They work well in some other kinds of less object-oriented architectures. It's up to you what kind of application architecture you have, and whether you want your app to be completely nonportable.

    >> This challenge does not try to compete with this approach. <
    No, of course not. I don't know of many businesses who are tossing up between ORM and stored procedures! What a strange notion.
  54. SPs

    "Stored procedures are a non-relational, non-object-oriented view of data, and do not work very well with domain models."

    Stored procedures are non-relational? Non object-oriented?

    Stored procedures are just procedures. They have absolutely no bearing on your design model. If you want to wrap them in an object, go ahead!

    And while one certainly can invoke record-at-a-time logic with a stored procedure, that is not what I am advocating. So I fail to see how they're not "relational".

    "They work well in some other kinds of less object oriented architectures. Its up to you what kind of application architecture you have, and if you want your app to be completely nonportable. "

    Which is why they are competitors. The ORM approach is one architectural view, the wrapped stored procedure package is another view.

    Take a look at Martin Fowler's Patterns of Enterprise Application Architecture. He sets this up right away as a set of tradeoffs: use a transaction script, use a table module, or use a domain model + an ORM. There are tradeoffs among these approaches.

    "No, of course not. I don't know of many businesses who are tossing up between ORM and stored procedures! What a strange notion."

    A "strange" notion? This is easily the most common data persistence decision encountered on every single project I've consulted on or any group I've trained. When do you decide to ditch the stored procedures and move to a domain model and an ORM? When do you *need* a domain model? What are the performance trade-offs of a domain model?

    WAY, WAY too many projects are jumping on the ORM bandwagon when they don't need to, and a stored procedure based solution would be more maintainable, and much faster.
  55. Scope of challenge

    While I can see the merits of this challenge (how much overhead does Hibernate bring, compared to plain JDBC using the same SQL statements), I'd be more interested in a challenge that, given a specific problem, compares the performance of using Hibernate for persistence to hand-made persistence. Both solutions should support the same functionality, but each solution can use whatever SQL statements it needs. This would help clarify in which cases (if any) hand coding can be beneficial, as well as give the Hibernate team some ideas for possible performance enhancements.

    ///Odd Möller
  56. Scope of challenge

    Give Gavin a break!

    I am personally impressed by the knack for publicity, this is a genius stunt. I wish I had thought of it,

    marcf
  57. This is ridiculous! Don't buy in to the Sun Java Cartel ... you don't need an O/R Mapper! It's all a conspiracy! The ONLY technology you need is XUL. XUL subsumes all other technologies. XUL is based on open specifications! Don't let Sun or Gavin King dictate how you write your applications; you don't need a database even, just XUL. Fred Ward may have patented O/R mapping but he can't touch XUL so just use that. And Java Web Start. Using XML. Because it's open and there's all these motors for it. Microsoft is afraid of XUL ... that's why they're building it into the next IE. I have trust-worthy sources. They fear XUL and so should you ... all must bow down before almighty XUL lest it smite thee. And its OPEN!

    (Sorry ... something was obviously missing from this discussion. It was making me nervous).
  58. This is ridiculous! Don't buy in to the Sun Java Cartel ... you don't need an O/R Mapper! It's all a conspiracy! The ONLY technology you need is XUL. XUL subsumes all other technologies. XUL is based on open specifications! Don't let Sun or Gavin King dictate how you write your applications; you don't need a database even, just XUL. Fred Ward may have patented O/R mapping but he can't touch XUL so just use that. And Java Web Start. Using XML. Because it's open and there's all these motors for it. Microsoft is afraid of XUL ... that's why they're building it into the next IE. I have trust-worthy sources. They fear XUL and so should you ... all must bow down before almighty XUL lest it smite thee. And its OPEN!

    >
    > (Sorry ... something was obviously missing from this discussion. It was making me nervous).

    Hehehe, nice try! If it weren't for the lack of a link to his site about XUL, you'd have convinced me you were Gerald Bauer. :)
  59. As I see it, Hibernate is a way for me to achieve database independence. If I want the app to run with Oracle, I set hibernate.dialect to cirrus.hibernate.sql.OracleDialect. For DB2, cirrus.hibernate.sql.DB2Dialect. For SAPDB, cirrus.hibernate.sql.SAPDBDialect. Etc.

    THIS is the true benefit of Hibernate.

    (Well - one of them; not having to learn CMP is another :) )
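
    To make the dialect switch concrete, it comes down to one line in hibernate.properties. A minimal sketch follows (only the hibernate.dialect line is the point; the connection property names and values below are placeholders that should be checked against the Hibernate docs):

    # swap this single line to target Oracle, DB2, SAP DB, etc.
    hibernate.dialect=cirrus.hibernate.sql.OracleDialect
    # placeholder connection settings
    hibernate.connection.driver_class=oracle.jdbc.driver.OracleDriver
    hibernate.connection.url=jdbc:oracle:thin:@dbhost:1521:MYSID
    hibernate.connection.username=appuser
    hibernate.connection.password=secret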
  60. Hibernate is Great but[ Go to top ]

    I learned many things about Hibernate on this thread that I did not know otherwise, like projections, dynamic outer joins and caching. I have also often found myself learning how to use Hibernate from the mailing list.

    Gavin, I really wish your new book would come out soon; at least then there will be more examples. And I hope the documentation improves, as it currently comes with only a single example of the Cat table/class.

    Is it really enough?
  61. Does the comparison have to be between Hibernate and handcoded JDBC or
    can it be with another O/R mapper? I describe a scenario below where
    Hibernate will be significantly slower than other O/R engines.

    I'm a Hibernate user and I think it's a good tool, but I feel that the
    refusal to map private instance variables is a mistake. According to
    the documentation: "Many other ORM tools directly persist instance
    variables. We believe it is far better to decouple this implementation
    detail from the persistence mechanism. [...] Properties need not be
    declared public - Hibernate can persist a property with a default,
    protected or private get / set pair".

    This constraint doesn't make sense to me. If I have to add private
    accessor methods to a class that have no use other than for Hibernate,
    you haven't decoupled me much from the implementation details of the
    O/R engine.

    So what does this have to do with performance? Let's say I have a Java
    bean with several bean properties, where modifications to those
    properties are checked for access authorization and logged. When an
    object is swizzled from the database, neither of these is a concern:
    initializing the private fields from persistent data does not
    represent a modification to the bean properties, so no authorization
    or logging is required. Furthermore, assume the bean is used in a
    context where there are many, many readers and very few writers.

    If I'd like to persist this object with Hibernate I am faced with a
    dilemma. Mapping the existing bean properties would have several
    undesirable results: it could cause a serious performance impact, and
    I would get many spurious access log entries for the object (one for
    each mapped property each time the object is loaded). If I were able
    to map the private fields directly, none of this would be an issue.
    The workaround is to create a separate, private get/set pair for each
    property just for Hibernate (for example, setFooFromHibernate /
    getFooFromHibernate). In other words, I'd have to infect my existing
    domain objects with code specific to Hibernate. If I ever change to
    another O/R mapper that can map private fields, these methods become
    dead code. In fact, IDEs that flag dead code (like IntelliJ) will warn
    about them, since there are no compile-time references. On our team we
    pay attention to those warnings, so this can be quite an annoyance,
    and the extra noise could lead to overlooked legitimate warnings.
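
    To make the dilemma concrete, a hypothetical bean might look like the
    sketch below (the Account class, the owner field and the comments are
    invented for illustration; only the setFooFromHibernate naming pattern
    comes from the scenario above):

    public class Account {

        private String owner;

        // Public property: performs an authorization check and writes an
        // audit log entry -- exactly what should NOT happen when the
        // object is merely being loaded from the database.
        public void setOwner(String owner) {
            // authorization check and audit logging would go here
            this.owner = owner;
        }

        public String getOwner() {
            return owner;
        }

        // Hibernate-only accessors: no checks, no logging. They have no
        // compile-time callers, so IDEs flag them as dead code, and they
        // exist purely for the mapper.
        private void setOwnerFromHibernate(String owner) {
            this.owner = owner;
        }

        private String getOwnerFromHibernate() {
            return owner;
        }
    }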

    I can easily create an example like the one described above that will
    show Hibernate exhibiting much slower mapping performance (due to the
    overhead of using the bean property methods) than other O/R mapping
    engines that map to private fields. You could demand that I modify
    my objects to add Hibernate-specific methods, but that doesn't seem
    quite fair, since the other O/R engine will not require them. If you
    require the same SQL statements, why not require the same domain
    objects? I'm guessing this example will not be considered valid for
    the competition, but I'd rather have the private field access than the
    $100 anyway. ;)

    The early Hibernate documentation claimed that mapping to private
    fields broke encapsulation, therefore one should map private methods
    instead (as if that doesn't break encapsulation). I believe this has
    been flawed thinking from the beginning and that it's more accurate to
    think of Hibernate as an externalized constructor that initializes
    instances from persistent data. A normal object constructor can access
    private fields, so it makes sense (using this analogy) that Hibernate
    could do the same when it's acting in that constructor role. A similar
    analogy would be object serialization where property methods are not
    used to initialize private data when an object instance is restored
    from persisted data.

    Steve
  62. This is so funny!

    About 3.5 hours ago I committed a small patch to allow Hibernate to access instance variables directly. You just add access="field" to the <property> mapping.
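
    For example, on an existing mapping it becomes (the property name here is only a placeholder):

    <property name="description" access="field"/>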

    I still strongly prefer the abstraction of properties, but now it's up to you.


    P.S. This is a religious debate, apparently, so I'm not going to try to change your mind ;)
  63. That's great news! I didn't think it would be a difficult change.

    I'm not sure that our views are that far apart. Some people have the
    belief that *all* access to private fields should be through methods.
    From looking at the Hibernate source code it doesn't appear that you
    and the other developers have that belief. Private variables are
    often accessed directly within the class.

    I'm suggesting that tools like O/R mappers, serializers and
    marshallers/demarshallers are types of externalized instance
    constructors and should have access to private instance data.
    Hibernate has always allowed this private access via methods, so it's
    nice that it can now also access the fields directly.
  64. I wonder what will be the next fad?[ Go to top ]

    Those well-meaning, impractical J2EE theoreticians remind me of the cadre of physiotherapists, running from fad to fad as soon as the old one is out (was it false dream resurrections that was the last one?).

    Now that nobody defends EJB any longer, they try to convince the public that it is a good idea to put a massive bloat of code between the application and the database, ignoring the fact that the database companies might know something about their business, as might front-line soldiers with 10, 15 or 20+ years of experience in the field.

    And to actually hear Marc Fleury saying "I am personally impressed by the knack for publicity"

    !!! That was the right person to say so! (He was serious too, he really meant it:)

    In fact I have more sympathy for the physiotherapists; after all, they need to make a living! J2EE'ers base their reason for living on idiotic persistence mechanisms like EJB and ORM; it makes them feel superior.

    I wonder what will be the next fad?

    Regards
    Rolf Tollerud
  65. Rolf, baaaaby[ Go to top ]

    Hey Rolf: love ya work!


    ;->
  66. iBatis is the King[ Go to top ]

    To Gavin King - YAWIT (Yet Another Well-meaning Impractical Theoretician)

    Having worked with iBatis.NET (C#) for a while, I wonder how anyone can contemplate, even for a second, using Hibernate or any other O/R system.

    Best Regards
    Rolf Tollerud
    (BTW, how is life at JBoss? Have you learned the correct way to save dates yet?)
  67. Prove[ Go to top ]

    Hi Gavin,

    I am sure that you know the Test class from the org.hibernate.auction package.

    Here is my test result (on MySQL v4):

    long t0 = System.currentTimeMillis();
    test.viewAuctionsByDescription("auction item 1", 44);
    System.out.println("TOTAL=" + (System.currentTimeMillis()-t0));

    -------------> TOTAL=157

    The resulting SQL is:

    select this.id as id0_, this.description as descript2_0_, this.ends as ends0_, this.condition as condition0_, this.seller as seller0_, this.successfulBid as successf6_0_ from AuctionItem this where (lower(this.description) like ? and this.condition=?)

    I ran a trivial JDBC test using the above SQL with the same conditions, and created the AuctionItem objects from the ResultSet.
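
    For reference, the hand-coded side was roughly like the following sketch (the AuctionItem setters, the bound parameter values and the open connection variable are assumptions, not the actual test code; error handling is omitted):

    java.sql.PreparedStatement ps = connection.prepareStatement(
        "select this.id as id0_, this.description as descript2_0_, " +
        "this.ends as ends0_, this.condition as condition0_, " +
        "this.seller as seller0_, this.successfulBid as successf6_0_ " +
        "from AuctionItem this " +
        "where (lower(this.description) like ? and this.condition=?)");
    ps.setString(1, "%auction item 1%"); // assumed like-pattern
    ps.setInt(2, 44);                    // assumed condition code
    java.sql.ResultSet rs = ps.executeQuery();
    java.util.List items = new java.util.ArrayList();
    while (rs.next()) {
        AuctionItem item = new AuctionItem();               // assumed bean-style class
        item.setId(rs.getLong("id0_"));                     // assumed setter names
        item.setDescription(rs.getString("descript2_0_"));
        item.setEnds(rs.getTimestamp("ends0_"));
        item.setCondition(rs.getInt("condition0_"));
        // seller and successfulBid would be resolved/set similarly
        items.add(item);
    }
    rs.close();
    ps.close();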

    The result is conclusive:
    -------------> TOTAL=31

    Please tell me what is wrong; if nothing is, I will send you my bank details :)

    Thank you,
    Cornel
  68. easy money[ Go to top ]

    Cornel +1
  69. Cornel,

    Did you get the easy money, or any response from the Hibernate team?

    Peng
  70. Re: Prove[ Go to top ]

    Not yet ... still waiting.
  71. I wonder what will be the next fad?[ Go to top ]

    Next "fad"? .NET with ORM?

    See...
    http://www.nolics.net/
    http://www.mongoosesolutions.com/mg/objectz_net.aspx
    http://www.olero.com/ormweb/index.aspx
    http://www.2lkit.com/
    http://www.alachisoft.com/
    ... and on and on.

    Seriously, ORM has been around for almost as long as object-oriented techniques (maybe you think those are a fad too, I don't know). It has become more mainstream, and better, more general tools have been developed in the last several years, but I've personally been working with ORM technology for going on 10 years now.
  72. Hi,

    I started using Hibernate 1 month ago and now I'm stuck with some horrible performance degradation. I have a query that joins 6 tables and does some sum, count, max and group by aggregation. The resulting method looks very nice, but the execution time is almost 2s. I wrote the same method using a PreparedStatement and the Connection obtained from the Session object and, surprise: it took 0.2s!

    I also noticed that most of the queries take more time than with standard JDBC! What can I do?
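
    For what it's worth, the kind of fallback described above looks roughly like the sketch below (the report SQL, table and column names are placeholders; it assumes an open Hibernate Session named session that exposes its JDBC connection, as described in the post, and error handling is omitted):

    // Drop down to plain JDBC for the aggregate report query, reusing
    // the connection the Hibernate Session already holds (as the poster
    // describes).
    java.sql.Connection con = session.connection();
    java.sql.PreparedStatement ps = con.prepareStatement(
        "select o.customer_id, count(*), sum(l.amount), max(l.amount) " +
        "from orders o join order_lines l on l.order_id = o.id " +
        "where o.status = ? " +
        "group by o.customer_id");
    ps.setString(1, "OPEN");
    java.sql.ResultSet rs = ps.executeQuery();
    while (rs.next()) {
        // build lightweight report rows straight from the ResultSet
    }
    rs.close();
    ps.close();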