Real-World Experiences With Hibernate

  1. Real-World Experiences With Hibernate (72 messages)

    Shine Technologies has recently released an article on Real-World Experiences With Hibernate. In it, we share our thoughts on Hibernate after using it on a couple of projects. We'll also present a liberal sprinkling of real-world examples. On balance, we think Hibernate is great. However, we did learn a couple of hard lessons along the way. To summarise:
    • Hibernate is big. Sounds obvious, but if you're using it on a real project for the first time, it can be easy to underestimate how long it will take to learn if you want to fully leverage its power. This is especially the case when it comes to relationship mappings.
    • Hibernate works best in an environment where everything is in-memory and you get to maintain object trees for as long as you want. The problem is that such an environment is actually pretty rare. This in itself is not a fault of Hibernate, but don't think that Hibernate is going to bridge the mismatch for you.
    • Hibernate can perform, but it's a good idea to think about performance sooner rather than later. This is especially the case with batch processing, where you can encounter many problems beyond the simple 'n+1 select' issue (see the sketch after this summary).
    • In Hibernate, it can appear that there are many ways to skin a cat. However, whether you realise it or not, in each case you will probably be trading off a number of factors - for example performance, development time, conciseness and even correctness.
    • Don't blindly apply "best practices". They may be overkill for what you are doing and end up being more trouble than they're worth.
    On reflection, most of the problems we encountered probably stemmed from us having vague and/or incorrect expectations of Hibernate to begin with. We hope that by reading this article, other new adopters of Hibernate will be able to avoid making similar mistakes. We'd also be interested in your opinion of some of the issues that we raise.
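
    To make the 'n+1 select' point above concrete, here is a minimal sketch (not from the article - Customer and its lazy 'orders' collection are invented mapped classes) of the difference between a lazy walk over a collection and an HQL fetch join:

    import java.util.List;
    import org.hibernate.Session;

    // A minimal sketch of the 'n+1 select' issue; Customer and its lazy 'orders'
    // collection are hypothetical mapped classes, not from the article.
    public class NPlusOneSketch {

        static List naive(Session session) {
            // One select for the customers, then one extra select per customer the
            // first time its lazy orders collection is touched: n+1 selects in total.
            return session.createQuery("from Customer").list();
        }

        static List fetchJoined(Session session) {
            // A fetch join brings customers and their orders back in a single select.
            return session.createQuery(
                    "select distinct c from Customer c left join fetch c.orders").list();
        }
    }

    The batch-processing problems the article mentions go further than this, but the fetch join is the usual first fix.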

    Threaded Messages (72)

  2. Thanks for posting the link to this excellent article. It clearly shows the pros and cons of Hibernate: the pro is its power; the cons are the openness and huge complexity of the API, and the vague ways of achieving a result, which often end in a lot of trial and error until things work.
  3. There's nothing fundamentally new in these lessons learnt; it is the same stuff you have to consider when using any serious O-R mapping technology. It's easy to break if the wrong people get at it, it's easy to misrepresent business data if you bypass the O-R layer, the performance is only good if you stay inside the envelope you planned, etc etc etc.
  4. It's a nice attitude brain dump on how to attack Hibernate.
  5. It's a nice attitude brain dump on how to attack Hibernate.
    I have found nothing particularly upsetting either, and saying this is not meant as an attack on Hibernate. And neither this: I have not experienced some of the troubles ("uncomfortabilities") pointed out in the article using my favourite JDO implementation. It is simply my experience. What I would not suggest, and so I totally disagree with the article, is the use of annotations. Guido.
  6. And neither this: I have not experienced some of the troubles ("uncomfortabilities") pointed out in the article using my favourite JDO implementation.
    Personally, I have, and I think the article is useful for more than just Hibernate users.
    What I would not suggest, and so I totally disagree with the article, is the use of annotations.
    I agree.
  7. And neither this: I have not experienced some of the troubles ("uncomfortabilities") pointed out in the article using my favourite JDO implementation.


    Personally, I have, and I think the article is useful for more than just Hibernate users.

    What I would not suggest, and so I totally disagree with the article, is the use of annotations.


    I agree.
    I disagree here; I am perfectly in line with the article regarding annotations. You can still keep your mapped POJOs in one package, so you still have a central configuration location. But having been on several Hibernate-related projects in the past: once you cannot handle the entire mapping issue with tools, the XML mapping files become a major burden. You are unable to do decent refactoring, and having to maintain two separate files per class does not help either. Worse, it slows down startup time significantly - not for small DBs, but once you have to deal with several hundred mapped POJOs, startup times become a real issue in Hibernate, partly to be blamed on the XML, which has to be parsed as well. I think the article was dead on in many spots. I see the biggest problem in Hibernate as being open to a lot of things without giving a clear roadmap to follow (unlike Rails and its ORM solution, which says follow the roadmap or you are on your own).
  8. Having implemented a domain model using mapping files and then re-implemented it using annotations, I have to say this: I completely agree with the article that you should only use the annotations if you are already comfortable with the mapping files. The main reason is that the documentation on the annotations is weak, and I personally don't think you can intuit what they will do without understanding how Hibernate works under the hood. That said, I agree with the parent that there are issues with annotations: 1) not all of the functionality of Hibernate is exposed via the annotations - there's a good chance you'll want/need to do something and will have to fall back to HBM files (fortunately, Hibernate lets you do this where needed, though with some complications); 2) if you have a domain model class that is shared between two different databases that may be modeled differently, you've got no choice but to go the HBM route. With #2, that is typically only going to be a problem when you are dealing with a legacy application and database to which you are now trying to apply Hibernate. It's a pain I know I'm going to feel in about 6 months. Using annotations has been a big help for me, though: it makes things go much quicker and makes refactoring so much easier. I highly recommend them, but also recommend keeping a very open mind about not using them.
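    For readers weighing the two styles, here is a rough sketch (class and column names invented, JPA-style annotations) of a mapping expressed in the class itself rather than in an Employee.hbm.xml file - the refactoring argument above boils down to this metadata moving with the class when things get renamed:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    // Hypothetical entity: roughly what an Employee.hbm.xml mapping of the same
    // three columns would express, kept next to the code it describes.
    @Entity
    @Table(name = "EMPLOYEE")
    public class Employee {

        @Id
        @GeneratedValue
        @Column(name = "ID")
        private Long id;

        @Column(name = "SPECIALISM")
        private String specialism;

        @Column(name = "SALARY")
        private double salary;

        public Long getId() { return id; }
        public String getSpecialism() { return specialism; }
        public double getSalary() { return salary; }
        public void setSalary(double salary) { this.salary = salary; }
    }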
  9. I'd like to present a practical case study. We developed a rather big piece of software - a kind of internet banking system - which uses Hibernate under the covers. We didn't want to create XML mappings by hand, and the annotations project didn't exist yet - those were Java 1.4 times. After looking at the possible solutions we decided to use XDoclet annotations.
    1. Keeping source code and meta-information together is a big advantage for maintainability.
    2. XDoclet support is rich enough (at least for our purposes), so nobody has had to manually create XML mappings.
    3. It is not built into the language like annotations, but requires an extra preprocessing run. This is easily done using an Eclipse plugin.
    4. The data classes drive the database schema; create or update scripts can be generated automatically.
    5. It is good to externalize HQL queries. That way the syntactic correctness of all queries can be checked in one unit test. In case of a small schema change (an insert, rename or delete of a field) the test fails, showing the query that was not modified accordingly.
    On the opposite side, another project in our company was developed using a different approach. It started from a relational database schema, then Java classes and XML mappings were created. Soon, synchronization between the Java code, the mappings and the database schema became a difficult, error-prone and time-consuming task. So in the end I also recommend using annotations (or something similar in spirit, like XDoclet tags). In every non-trivial application, synchronization between separate Java code and XML mappings can easily become a nightmare.
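    For comparison with the approach described above, the XDoclet style keeps the metadata in Javadoc-style tags and generates the hbm.xml files in a preprocessing step; a hedged sketch (tag names from memory, class invented) looks roughly like this:

    /**
     * Hypothetical XDoclet-tagged class; an XDoclet run generates Account.hbm.xml
     * from these tags before Hibernate ever sees it.
     *
     * @hibernate.class table="ACCOUNT"
     */
    public class Account {

        private Long id;
        private String number;

        /**
         * @hibernate.id generator-class="native" column="ID"
         */
        public Long getId() { return id; }
        public void setId(Long id) { this.id = id; }

        /**
         * @hibernate.property column="ACCOUNT_NUMBER"
         */
        public String getNumber() { return number; }
        public void setNumber(String number) { this.number = number; }
    }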
  10. We also have authored a large system using Hibernate (~300 tables). We chose to use the Hibernate mapping file as the sole manually maintained artifact. From this we generate 1) an abstract base class for each persistent class, including getters/setters/equals/hashCode, 2) the DDL to create the database, 3) HTML that documents our data model, and 4) Java constants for each property of each class. We think that this scheme is ideal because 1) the XML allows the most concise possible expression of the domain model in both relational and OO forms, and 2) it is fully DRY and more concise than the annotation or XDoclet approach - we maintain no boilerplate Java (getters/setters, etc). Also of note: we create manual subclasses of the generated entity classes to add business logic. When we refactor, we let the IDE refactor both manual and generated code, then adjust the XML, then regenerate. Slightly less than ideal, but very workable. We do not hand-write HQL but instead have developed a Java query abstraction that produces HQL. We invoke this with the generated property constants, so it is validated at coding time by our IDE, not at test time. We also have a declarative approach to the UI that uses these constants.
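    To make the 'generated property constants' idea concrete, here is a purely hypothetical sketch (the poster's actual generator is not shown in the thread) of what such generated constants might look like - the point being that a rename breaks the build rather than a test:

    // Hypothetical output of the generator described above, one constants class per entity.
    public final class EmployeeProperties {
        public static final String SALARY = "salary";
        public static final String SPECIALISM = "specialism";
        private EmployeeProperties() { }
    }

    A query abstraction can then assemble HQL such as "from Employee e where e." + EmployeeProperties.SPECIALISM + " = :s", so the IDE flags the reference the moment the property is renamed and the constants are regenerated.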
  11. old lessons but still important

    It is the same old lesson with picking any tool, even the good ones: the more complicated the tool, the more likely it will be used incorrectly. Hibernate has been blindly adopted by many without consideration for its learning curve and appropriateness. I'm sure it has also been taken seriously by others where it has shown itself to be useful. I'm glad the article took a fairly balanced look at Hibernate. There was a recent post on TSS that had some interesting points but devolved mostly into a flame war. Hibernate is a hot topic and there is a lot of hyperbole surrounding it and its usefulness. I'm happy to have an article that doesn't bash or over-praise this tool. ______________ George Coller DevilElephant
  12. +1
  13. Startup time

    Our major gripe with Hibernate is what seems to be a startup time that is linear with the number of classes mapped (roughly .5 seconds per class). This is a major hassle since we have quite a few (too many!) tests that use Hibernate. Some of these run really, really slow, all because of Hibernate. Other than that, the Hibernate experience has been mostly a positive one.
  14. I think Hibernate is OK as long as one realizes that it does not fit everything. It is too rigid for (sub)systems where not everything has to be guarded to the max. In that case a customer can lose a lot of money on unnecessary quests to track down the origin of unclear messages (the kind that make you jump through the roof).
  15. Perhaps I am biased in favour of the simplicity of pure JDBC, and so I read the article's undertone to imply exactly that. Which is: a) writing a simple DAO is easy enough no matter what you use (pure JDBC, iBatis, Spring, Hibernate etc.), BUT b) writing complex queries involves defining complex relations in another format (usually XML for iBatis, Spring or Hibernate) or using elaborate/complex SQL for pure JDBC. If I know the RDBMS' native language, SQL, reasonably well am I not better off using JDBC with some good quality SQL? Have people found the complexity worth it or do they still drop down to SQL and pure JDBC for anything complex? Do people find it easier to use these OR mapping tools than writing SQL, and do they know for sure that the generated SQL is as good as what could be written with a little bit of SQL knowledge?
  16. Using an ORM tool you get more functionality than is usually written into hand-crafted JDBC DAOs:
    - caching (first level & second level)
    - lazy loading of related objects
    - transaction support (EJB3, transaction context propagation)
    - a better/cleaner query language ('fetch')
    - fetching of related objects in a join, which is good for performance
    - and more...
    My experience is that using an ORM is easier and gives cleaner code compared to plain JDBC when writing comparable functionality - not just the trivial select or update statement. I'm not saying that it is not possible to get the same functionality with plain JDBC by creating your own caching etc. It's just tedious to write the resultset-to-POJO (and reverse) code. An ORM tool can only be compared to the JDBC solution when you create the same functionality. Edwin van der Elst
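    To illustrate the 'tedious resultset-to-pojo' point, here is a minimal hand-written JDBC sketch (the table, columns and Employee POJO are invented) of a lookup that an ORM would otherwise generate, cache and lazy-load for you:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    // Every column is copied into the POJO by hand; the reverse mapping is
    // needed again for inserts and updates.
    public class EmployeeJdbcDao {

        public List<Employee> findBySpecialism(Connection con, String specialism) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                    "SELECT ID, SPECIALISM, SALARY FROM EMPLOYEE WHERE SPECIALISM = ?");
            try {
                ps.setString(1, specialism);
                ResultSet rs = ps.executeQuery();
                List<Employee> result = new ArrayList<Employee>();
                while (rs.next()) {
                    Employee e = new Employee();
                    e.setId(rs.getLong("ID"));
                    e.setSpecialism(rs.getString("SPECIALISM"));
                    e.setSalary(rs.getDouble("SALARY"));
                    result.add(e);
                }
                rs.close();
                return result;
            } finally {
                ps.close();
            }
        }
    }

    The Hibernate equivalent is roughly session.createQuery("from Employee e where e.specialism = :s").setParameter("s", specialism).list(), with first-level caching and lazy loading of related objects thrown in.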
  17. If I know the RDBMS' native language, SQL, reasonably well am I not better off using JDBC with some good quality SQL?
    You may know that RDBMS's native language, but what about another RDBMS's native language? SQL is simply not easily portable for significant applications, no matter what relational database advocates may say. Java is a language designed from the start around the idea of portability. For this reason alone, ORMs and Java go well together.
    Have people found the complexity worth it or do they still drop down to SQL and pure JDBC for anything complex?
    The good thing about a quality ORM is that you don't need to drop down to SQL and JDBC for things that need it. You can use SQL together with the ORM.
  18. Dropping down - it's not nuts

    All ORM tools allow you to "drop down" to a more native SQL when needed. Like Steve said, with Hibernate it isn't JDBC. When you drop down it is more like using Spring SQL Maps or iBatis. Hibernate still does some of the grunt work like connection handling, which is what you want. ______________ George Coller DevilElephant
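    As a hedged sketch of what 'dropping down' looks like in Hibernate 3 (the query and the Employee entity are invented), the native-SQL API keeps connection handling and entity hydration while letting the SQL be as vendor-specific as you need:

    import java.util.List;
    import org.hibernate.Session;

    public class NativeSqlSketch {

        // Native SQL through Hibernate: the statement can use vendor-specific syntax,
        // but Hibernate still manages the connection and maps rows back to entities.
        static List employeesPaidMoreThanTheirManager(Session session) {
            return session.createSQLQuery(
                    "SELECT {e.*} FROM EMPLOYEE e JOIN EMPLOYEE m ON e.MANAGER_ID = m.ID"
                  + " WHERE e.SALARY > m.SALARY")
                    .addEntity("e", Employee.class)
                    .list();
        }
    }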
  19. ORMs & Portability

    You may know that RDBMS's native language, but what about another RDBMS's native language?
    For some projects, db portability is very interesting (e.g. switch to HSQLDB for quick tests), but in many large organisations, you might have to use the DB that IT supports. In those cases, portability is not really a Java issue, but a political one since IT probably has the responsibility to optimise the heck out of the DB (providing stored procs, and so on). You have to look at portability as a "nice-to-have" in these situations.
    you don't need to drop down to SQL and JDBC
    In a large project, it's a safe bet to assume that you will need to drop down to SQL (but hopefully not too often). I don't want to be argumentative, but the DB is at the heart of most performance problems.
  20. Re: ORMs & Portability

    You may know that RDBMS's native language, but what about another RDBMS's native language?

    For some projects, db portability is very interesting (e.g. switch to HSQLDB for quick tests), but in many large organisations, you might have to use the DB that IT supports. In those cases, portability is not really a Java issue, but a political one since IT probably has the responsibility to optimise the heck out of the DB (providing stored procs, and so on). You have to look at portability as a "nice-to-have" in these situations.
    I profoundly disagree. Let IT do the substantial optimisation when and if it is needed, but that should not in any way prevent the use of ORM for the majority of work, and if it does then this is definitely a form of premature optimisation. Portability is not just interesting - my view is that it should be a core part of any development: targeting applications for a single relational database vendor's products should surely be discouraged. Use a good ORM and you get portability for most of your code with little effort.
    you don't need to drop down to SQL and JDBC

    In a large project, it's a safe bet to assume that you will need to drop down to SQL (but hopefully not too often). I don't want to be argumentative, but the DB is at the heart of most performance problems.
    Which is why using a quality ORM with transparent caching can often be a major performance benefit.
  21. Re: ORMs & Portability

    targeting applications for a single relational database vendor's products should surely be discouraged. Use a good ORM and you get portability for most of your code with little effort.
    I agree, ORMs will limit the effect of a change of db. In small organisations, that might happen regularly. In a large organisation, saying that you're moving from one db to another is similar to saying you're changing the deployment OS. It is rare especially considering that the DB probably already exists and will outlive the Java project (and perhaps the Java language itself). Of course most ORMs provide ways to link in to the existing systems and other interesting features that make them worthwhile.
  22. Re: ORMs & Portability

    I know we have discussed this a million times on other threads but ...
    I agree, ORMs will limit the effect of a change of db.
    They actually make it easy. Even when you switch dbs (Sybase to DB2) and OS's (Linux to Mainframe) AND the naming conventions are not the same, so one has to change the mappings. (BTW, we were done before the DBA)
    In small organisations, that might happen regularly.
    Not really. Just as much as large organizations.
    In a large organisation, saying that you're moving from one db to another is similar to saying you're changing the deployment OS.
    Oddly, in large organizations the development is usually on Windows and deployment is Linux/Unix/Mainframe. Usually the db stays the same. The issue is mostly because they can't (apps are tied to the db) not because they won't (yes, there is a lot of won't too). Just think how much leverage an organization would have come contract renewal time when they can switch database vendors by changing a config file.
    It is rare especially considering that the DB probably already exists and will outlive the Java project (and perhaps the Java language itself).
    Maybe the RDBMS. Even then the RDBMS version is updated more often than the app. Unless one coded the app to work with a specific version. But the existing database structure? Seldom. Usually the current db schema is reworked too.
    Of course most ORMs provide ways to link in to the existing systems and other interesting features that make them worthwhile.
    Not sure what you mean by this.
  23. Re: ORMs & Portability

    My experience (only 6 years) is simply that portability in some situations is really the main driver to use an ORM - and, yes, I agree with everything that you've both said.
    Usually the db stays the same.

    The issue is mostly because they can't (apps are tied to the db) not because they won't (yes, there is a lot of won't too). Just think how much leverage an organization would have come contract renewal time when they can switch database vendors by changing a config file.
    Fair enough. I've worked in companies that had company-wide agreements with vendors. I thought it might be a more generalised practice. In my case, there was no way that we could change the deployment db.
    Not sure what you mean by this.
    Was unclear. I meant that you can easily use an ORM to connect to a database schema that you don't own. Was slightly off topic.
  24. ORM good for simple crud apps

    Hmmm - I would have to disagree that portability should be a 'core' goal of any project - that seems to me to be putting the technical horse before the business requirements cart. There may be a smallish subset of experimental projects for which portability may be useful because of a longer term business strategy (e.g starting a risky, seat of the pants projects on a cheap mysql install with the idea of trading up to oracle or db2 if things go well).

    However for any large, reasonably funded, enterprise db backed project then IMHO portability should not even be considered.

    As we all know, enterprise db's cost a _lot_ of money. For all this money, you get a _lot_ of features - these features vary radically across db's. If you allow ORM to dictate how you access these features then you will miss out on the 80% of db functionality that you cannot access using ORM. Developers would also be encouraged to develop the mindset of treating the db like a 'black box' - something that they don't really need to worry about because hibernate or whatever is doing all the work for them. This is a recipe for disaster.

    Hibernate is excellent at performing straightforward selects, updates and inserts - indeed it is an integral part of most projects I'm involved in. However, if for example a shop was using oracle and was told that 'portability' was a core requirement of their project, then they could not use things like Fine Grained Access Control, hierarchical queries, and Analytic Functions.

    Know thy database. -
  25. horse before cart = good.

    Of course that should be the technical _cart_ before the business requirements _horse_. Silly me.
  26. Hmmm - I would have to disagree that portability should be a 'core' goal of any project - that seems to me to be putting the technical horse before the business requirements cart. There may be a smallish subset of experimental projects for which portability may be useful because of a longer term business strategy (e.g starting a risky, seat of the pants projects on a cheap mysql install with the idea of trading up to oracle or db2 if things go well).

    However for any large, reasonably funded, enterprise db backed project then IMHO portability should not even be considered.

    As we all know, enterprise db's cost a _lot_ of money. For all this money, you get a _lot_ of features - these features vary radically across db's.

    If you allow ORM to dictate how you access these features then you will miss out on the 80% of db functionality that you cannot access using ORM. Developers would also be encouraged to develop the mindset of treating the db like a 'black box' - something that they don't really need to worry about because hibernate or whatever is doing all the work for them. This is a recipe for disaster.

    Hibernate is excellent at performing straightforward selects, updates and inserts - indeed it is an integral part of most projects I'm involved in. However, if for example a shop was using oracle and was told that 'portability' was a core requirement of their project, then they could not use things like Fine Grained Access Control, hierarchical queries, and Analytic Functions.

    Know thy database.
    -
    This sounds very much like a 'mainframe' attitude, which fitted well with the 'no-one gets fired for buying IBM' approach from decades ago. Things have changed - they changed in the 80s with the introduction of Open Systems, which allowed a choice of operating system while retaining the same software and infrastructure. They have changed just as fundamentally, in my view, with the introduction of Java - where true and guaranteed portability of software, even at the binary level, is finally available. The advantages - indeed the long-term necessity - of portability seem to have been recognised throughout most of the IT industry. There are a few remaining areas where such ideas have not yet taken hold. Relational databases are one of these, with poor standards compliance in query languages and vendor-specific tools. What I find hard to understand is why, when portability is not easily available, developers in this area aren't complaining loudly instead of stating that it isn't required!

    Firstly, the additional features you mention are just that - additional features. My experience is that the core of most database work is pretty mundane. Also, the idea that you are missing out on 80% of functionality by using ORM is plain wrong - at least for good ORM products. You certainly would be if you used ORM with some minor portable subset of SQL (this is one of my objections to Rails), but there are good-quality ORM products that generate (largely automatically) optimised SQL using vendor-specific features of your particular database. The idea that a developer using ORM treats the thing like a 'black box' is only the case for a very poor developer. Maybe there will come a time when this is a reasonable approach, but for now good use of ORM requires a good understanding of SQL and relational systems.

    Portability can be profoundly useful, not just in a smallish subset of experimental products. Developers should not have to sit around some expensive relational database installation all the time (again, the idea of consoles onto a mainframe comes to mind) - much development and testing of ideas should be lightweight and mobile. For example, I do most of my prototyping and development and testing on PostgreSQL, although my final deployment and optimisation is on Oracle. This is not 'risky, seat of the pants' stuff - it is the way new parts of a major application can be developed, and it is a huge advantage of portable development.

    Finally, it is my view that portability should definitely be a significant business requirement - we expect it with things like infrastructure, hardware, and (at least server-side) operating systems - why are we stuck with such old-fashioned attitudes with databases? ORM helps with this, and even if it means that only 75% of your database use is portable, that is still a benefit if licensing or requirements changes means you need to look around for alternate or additional vendors.
  27. Finally, it is my view that portability should definitely be a significant business requirement - we expect it with things like infrastructure, hardware, and (at least server-side) operating systems - why are we stuck with such old-fashioned attitudes with databases? ORM helps with this, and even if it means that only 75% of your database use is portable, that is still a benefit if licensing or requirements changes means you need to look around for alternate or additional vendors.
    Ok, you're starting to get there, but this was one-too-many portability posts and I have to call "bullshit". In any medium to large sized database schema, there's a LOT more to using a database than the particulars of how your SQL selects and inserts are written to take advantage of different SQL dialects. You don't use stored procedures? No triggers? Some data structures, such as hierarchical / tree data structures, are just INCREDIBLY expensive to query at runtime if you just keep it in its normalized form. It's common to use triggers to maintain flattened out structures to let you find related nodes quickly. There's no ORM standard for writing triggers, nor for stored procedures, etc. Plus, there's the deployment setup, the development environment and QA setup, the operations teams to get in the loop, etc. It's just not realistic to think you're going to go about changing database providers willy-nilly.
  28. Finally, it is my view that portability should definitely be a significant business requirement - we expect it with things like infrastructure, hardware, and (at least server-side) operating systems - why are we stuck with such old-fashioned attitudes with databases? ORM helps with this, and even if it means that only 75% of your database use is portable, that is still a benefit if licensing or requirements changes means you need to look around for alternate or additional vendors.


    Ok, you're starting to get there, but this was one-too-many portability posts and I have to call "bullshit".

    In any medium to large sized database schema, there's a LOT more to using a database than the particulars of how your SQL selects and inserts are written to take advantage of different SQL dialects.

    You don't use stored procedures? No triggers? Some data structures, such as hierarchical / tree data structures, are just INCREDIBLY expensive to query at runtime if you just keep it in its normalized form. It's common to use triggers to maintain flattened out structures to let you find related nodes quickly. There's no ORM standard for writing triggers, nor for stored procedures, etc.

    Plus, there's the deployment setup, the development environment and QA setup, the operations teams to get in the loop, etc.

    It's just not realistic to think you're going to go about changing database providers willy-nilly.
    I wasn't implying anyone would. I was also not implying that on substantial databases (or databases of any scale) you would not require triggers or non-normalised data for efficiency, or stored procedures. (Although I don't see why such issues of data structure are of any relevance to the issue of portability.) Of course there is a lot more to using a database than the query language dialect, but the skills and knowledge about how to optimise, when to normalise and when not to, and so on are, surely, transferable skills - they don't have that much to do with the issue of portability. (There are obviously some highly vendor-specific matters of management and tuning, but should they be the basis around which other decisions are made?) I have seen far too many projects stuck in the mire of vendor-dependence, and I have seen the consequences. Anything that can help with this surely needs to be considered, in contrast to an attitude that portability should only be a minor consideration, if thought of at all. I may seem over-the-top about this; perhaps I am - perhaps I do tend to rant on the extreme wing of ORM politics! But it is the result of harsh lessons learned.
  29. Ok, you're starting to get there, but this was one-too-many portability posts and I have to call "bullshit"...

    It's just not realistic to think you're going to go about changing database providers willy-nilly.
    We just changed from Sybase to Oracle last week. Next month we have MS SQL Server scheduled. Next year we are looking at PostGres, MySQL and most likely will try Sybase again. Dude, we change DBs all the time.
  30. Ok, you're starting to get there, but this was one-too-many portability posts and I have to call "bullshit"...

    It's just not realistic to think you're going to go about changing database providers willy-nilly.


    We just changed from Sybase to Oracle last week. Next month we have MS SQL Server scheduled. Next year we are looking at PostGres, MySQL and most likely will try Sybase again. Dude, we change DBs all the time.
    Ah... irony. Not all companies are centralised with a single large data centre. Some companies may perhaps want the freedom to investigate going with alternative vendors for new projects, or for extensions to existing ones, or for installations at new sites. I find this 'all we will ever need is a single vendor' attitude baffling.
  31. Ok, you're starting to get there, but this was one-too-many portability posts and I have to call "bullshit"...

    It's just not realistic to think you're going to go about changing database providers willy-nilly.


    We just changed from Sybase to Oracle last week. Next month we have MS SQL Server scheduled. Next year we are looking at PostGres, MySQL and most likely will try Sybase again. Dude, we change DBs all the time.
    What's wrong, the trial license keys keep on expiring? :-)
  32. Not sure what the mainframe attitude is - I've never worked on mainframes (thankfully). I think comparing the portability of java across platforms to the non-portability of sql across RDBMS's is like comparing apples and oranges. The sql itself has different capabilities across different DB's - I think a more accurate comparison would be to compare the bytecode generated by java against that of ruby or python (which no-one expects to be portable).

    Your third paragraph here is simply begging the question. You are right - the core of most database work is pretty mundane. ORM is ideal for reducing the drudgery involved with most CRUD actions. However, ORM only directly supports a small subset of what a database can do - for sure it may be the small subset that makes up 70% of a typical application, but it is a small subset technically. As anyone with any experience in using ORM on complex schemas will tell you it does not take a particularly complicated report or query to render ORM unusable.

    I can never envisage a time when treating the db like a black box would be feasible. Being in a position of ignorance on how any of your application's inputs works would be a problem IMHO. Being ignorant of how your most critical input works would be foolish indeed.

    I find it interesting that you do your development on Postgre and deploy to Oracle - what is the thinking behind this? I would have thought it would make more sense to develop on the same platform as you deploy - at least you will catch certain classes of bugs in development that otherwise would escape into pre-production or whatever.

    As someone pointed out already in most real-world environments it is the java front end applications that are frequently end-of-lifed - the databases tend to last a long time because there tends to be many other applications reliant on the data contained therein (I realise I'm sort of begging the question here as well). Also, non-developers in the business tend to build up valuable skills dealing with the database in question. The best thing developers can do in my opinion is to realise this and educate themselves about the vagaries of their environment. At the very least, if the database changes then the developers have got into the habit of learning about their target database, and they can begin the process of learning their new target platform. Unless their company is incredibly flighty they won't change the db again for a long time to come.
    I think comparing the portability of java across platforms to the non-portability of sql across RDBMS's is like comparing apples and oranges. The sql itself has different capabilities across different DB's - I think a more accurate comparison would be to compare the bytecode generated by java against that of ruby or python (which no-one expects to be portable).
    I disagree, as what I would consider to be core parts of an SQL implementation - such as text handling functions - differ from vendor to vendor.
    As anyone with any experience in using ORM on complex schemas will tell you it does not take a particularly complicated report or query to render ORM unusable.
    I am not sure I follow. I can see that ORM can be inappropriate for a particularly complicated query or report, but surely that does not make ORM unusable elsewhere?
    I can never envisage a time when treating the db like a black box would be feasible. Being in a position of ignorance on how any of your application's inputs works would be a problem IMHO. Being ignorant of how your most critical input works would be foolish indeed.
    I agree. I was not implying that treating the db like a black box was a good thing, just that as ORM standards become more and more accepted, I can see a time when at least more tools than now are targeted at the ORM rather than the database.
    I find it interesting that you do your development on Postgre and deploy to Oracle - what is the thinking behind this?
    The thinking is - why should developers have to sit all the time at the target database, which may be expensive and have substantial resource requirements? There is no reason why developers should not work for a time with a subset of the data, allowing off-site working, and demonstrations of the application.
    I would have thought it would make more sense to develop on the same platform as you deploy - at least you will catch certain classes of bugs in development that otherwise would escape into pre-production or whatever.
    Well, I was not suggesting that development code is never tested on the target database - of course it is. But if you have gone to the trouble (as I have) of purchasing a commercial ORM that has full support for a range of databases, and guarantees portability, it makes no sense if you ignore that capability.
    As someone pointed out already in most real-world environments it is the java front end applications that are frequently end-of-lifed - the databases tend to last a long time because there tends to be many other applications reliant on the data contained therein (I realise I'm sort of begging the question here as well). Also, non-developers in the business tend to build up valuable skills dealing with the database in question. The best thing developers can do in my opinion is to realise this and educate themselves about the vagaries of their environment.

    At the very least, if the database changes then the developers have got into the habit of learning about their target database, and they can begin the process of learning their new target platform. Unless their company is incredibly flighty they won't change the db again for a long time to come.
    Some databases change, some don't. There may be other reasons why portability is useful: a company may merge with another which uses a different vendor, or a company may grow and want to investigate the use of different products for new sites.
  34. For example, I do most of my prototyping and development and testing on PostgreSQL, although my final deployment and optimisation is on Oracle.
    Why? Oracle licenses for development purposes don't cost anything. You can run full Oracle Enterprise Edition on your local PC or a small server. Sure, the machines need to be beefy enough, but that's mostly RAM and RAM is cheap.
    Finally, it is my view that portability should definitely be a significant business requirement - we expect it with things like infrastructure, hardware, and (at least server-side) operating systems - why are we stuck with such old-fashioned attitudes with databases?
    It's not a requirement, it's a constraint. I completely understand why an ISV would want portability - failing to support the right platform (OS, DB, AppServer, etc) can be a deal killer. But investing in preserving portability within an enterprise seems like a waste of money. A well-managed infrastructure alleviates or eliminates the licensing concerns. Pursuing portability within an enterprise just seems like tilting at windmills - or an excuse to avoid real business concerns that require thought outside the realm of technology.
  35. It's not a requirement, it's a constraint.
    The two aren't exclusive. You are constraining things when you choose a particular vendor or particular development language or framework that meet your requirements.
    But investing in preserving portability within an enterprise seems like a waste of money.
    My experience is the opposite - it is an investment in the future.
    A well-managed infrastructure alleviates or eliminates the licensing concerns.
    But removes flexibility.
    Pursuing portability within an enterprise just seems like tilting at windmills - or an excuse to avoid real business concerns that require thought outside the realm of technology.
    The long-term future of a company is surely a real business concern. Part of the way that survival is helped is, in my view, through the use of standards and abstractions that allow choice. This happened with the use of open systems. It happened with the use of Unix/Linux. It has happened with the use of relational systems and to some degree with SQL. My view here is that ORM also has a role to play.
  36. A well-managed infrastructure alleviates or eliminates the licensing concerns.
    But removes flexibility.
    In my experience a well-managed infrastructure greatly enhances flexibility. If it didn't, it wouldn't be well-managed.
    The long-term future of a company is surely a real business concern. Part of the way that survival is helped is, in my view, through the use of standards and abstractions that allow choice. This happened with the use of open systems. It happened with the use of Unix/Linux. It has happened with the use of relational systems and to some degree with SQL. My view here is that ORM also has a role to play.
    If we were talking about fly-by-night application vendors then I would agree, but I would say the future of Oracle and its database lineup is probably far more certain than the futures of most well-established companies. The same probably can be said for IBM and DB2. It certainly can be said for Microsoft. Alternatively - isn't locking yourself into an ORM provider just as bad? Or would you say one would only use "standard" ORM frameworks and features?
  37. In my experience a well-managed infrastructure greatly enhances flexibility.
    I guess it depends on what you call flexibility - for me it means the ability to install different systems at a new site, or to move at least parts of systems onto different databases and platforms as needed without consideration of licensing. I realise that these things can be planned for, and licensing arranged, but how much more flexible not to have to worry at all?
    If we were talking about fly-by-night application vendors then I would agree, but I would say the future of Oracle and its database lineup is probably far more certain than the futures of most well-established companies. The same probably can be said for IBM and DB2. It certainly can be said for Microsoft.
    Just say that to the OS/2 or VB6 developers :) It is hard to know what can be said for Microsoft, as the company remains while products are abandoned. It is not just a case of things being certain - I would say it is also about being versatile. It is about being able to implement subsets of your system on different, smaller (perhaps even open source) databases. It is about being able to bargain with vendors, knowing that a migration is at least possible if necessary.
    Alternatively - isn't locking yourself into an ORM provider just as bad? Or would you say one would only use "standard" ORM frameworks and features?
    Yes, I would. I have the same objections to locking myself into an ORM provider as I would have to locking myself into a DB provider. This is why I think Hibernate's support for JPA is a thoroughly good thing. It is also why I am somewhat concerned about JPA, as I believe its limitations encourage vendor-specific enhancements (in the way that JDO 1.0 did).
  38. I guess it depends on what you call flexibility - for me it means the ability to install different systems at a new site,
    What's the motivation for multiple instances of a system within a single corporation? What part of that motivation justifies supporting multiple configurations?
    or to move at least parts of systems onto different databases and platforms as needed without consideration of licensing.
    A sufficiently sophisticated software sourcing organization should make any licensing issues near transparent. But where does the need for variation come from?
    Just say that to the OS/2 or VB6 developers :) It is hard to know what can be said for Microsoft, as the company remains while products are abandoned.
    Touche.
    It is about being able to bargain with vendors, knowing that a migration is at least possible if necessary.
    Ahhh, the old multi-sourcing argument. You may have circumstances that make it important. I don't know your environment. I just don't think it is common. I'm really curious as to what about your requirements and/or constraints makes database portability so important.
  39. What's the motivation for multiple instances of a system within a single corporation? What part of that motivation justifies supporting multiple configurations?
    There can be many. One is when there is a multi-site arrangement - the main site may deal with the majority of the information and traffic. However, the equivalent server+database installation may make no sense at a smaller subsidiary office with far lesser needs. The other situation is development - the way I work allows developers to prototype and design systems off-site, often on their own PCs and systems. It makes a lot of sense for those developers to use lesser configurations for at least the initial stages of this work. The current setup is that developers are prototyping, doing initial testing, and demonstrating their work using JDO+PostgreSQL, with future on-site final-stage testing and deployment on Oracle. Using a quality JDO implementation we have had few issues so far with this approach, and it allows a great deal of flexibility of working.
    Ahhh, the old multi-sourcing argument. You may have circumstances that make it important. I don't know your environment. I just don't think it is common.

    I'm really curious as to what about your requirements and/or constraints makes database portability so important.
    I assumed that multi-sourcing was a common-sense approach to any technology or resource that was critical to an organisation. I have come from a UK academic background (some time ago). A project proposal would be rejected unless it allowed for competitive tendering for all commercial resources, for each new project. An argument like "we have always used Oracle and IBM, so we want to continue" just would not be accepted. I guess this explains a lot about my attitude! Also, I have had many experiences where portability has been and continues to be important. I hope you will forgive me, but I can't talk about specific examples, but I hope I can give some idea. Let's just say that I am currently attempting to migrate a legacy system that was implemented on a database system that was (at the time) very well-known, and would obviously never become out-of-date! Of course, this system is live and critical, and can't be shut down during this process... Also, I have come across developers who have been so enticed into a particular vendor's products that they won't offer solutions which allow the client any choice about the database. These developers have built up their skillset, and that is an end to it. Any idea that they could have used any widely available ORM solution (such as Hibernate) instead, allowing the end-users choice, is rejected. (And, these are indeed simple applications, where few could argue that ORM would be effective). My personal experience means that for me, database portability is a vital part of any development I undertake. I often work with others for whom it is not such an issue, but it is a priority for me.
  40. Re: ORM good for simple crud apps

    And you never know when you'll be called upon to port. We had a request from one customer to port one of our apps from Oracle to SQL Server. The Hibernate portions of this app were trivially ported. The JDBC sections, which represented a much, MUCH smaller fraction, took the lion's share of the porting time. We had another request for one of our apps that integrates with a particular system to be ported to a similar system. You never know what a customer will want.
  41. As was mentioned before, there are I'm sure a subset of projects for which DB portability should be taken into consideration at development time. I'm thinking of a definition like 'DB-driven applications developed for sale to third parties who already have existing database expertise'. Perhaps there are other project types I'm missing out on. However, I believe applications for which portability should be an important consideration are still in a relatively small minority.

    For those projects that do end up ported to a different database, for sure hibernate mappings are almost trivially portable in many cases (assuming of course that the schema is ported as-is). I must disclose that I have never participated in a porting job. I also know that porting hibernate code would be very straightforward (I suppose just changing the dialect and your primary key generator, if even). However I would imagine that porting simple jdbc would not be too difficult either. I would imagine that what would eat up time would be the complicated queries - and it is precisely these sort of queries that are not conducive to modelling in hibernate.

    FWIW, if I was in charge of such a porting effort I would not begin the porting until all developers involved had RTFM for Sql Server's core concepts - e.g. how it handles transactions, whether selects are blocking, how inserts/updates lock the table, how commits/rollbacks are handled under the covers, etc. With that under their belt, the porting would not be a simple search/replace job; instead, developers could use their knowledge of the DBs in question to ensure that any code that worked well on the old database but may have trouble on the new one can be discussed, before it becomes a problem on the new one.
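    To make the 'just changing the dialect' remark concrete, here is a minimal sketch (the dialect and driver class names are standard Hibernate 3 / JDBC ones; the URLs and the two-environment split are invented) of how little typically has to change between a lightweight development database and the deployment one:

    import org.hibernate.SessionFactory;
    import org.hibernate.cfg.Configuration;

    public class PortableConfigSketch {

        // The mappings stay the same; only dialect, driver and URL differ per environment.
        static SessionFactory buildFor(boolean development) {
            Configuration cfg = new Configuration().configure(); // shared mappings from hibernate.cfg.xml
            if (development) {
                cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.HSQLDialect");
                cfg.setProperty("hibernate.connection.driver_class", "org.hsqldb.jdbcDriver");
                cfg.setProperty("hibernate.connection.url", "jdbc:hsqldb:mem:testdb");
            } else {
                cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.SQLServerDialect");
                cfg.setProperty("hibernate.connection.driver_class", "com.microsoft.sqlserver.jdbc.SQLServerDriver");
                cfg.setProperty("hibernate.connection.url", "jdbc:sqlserver://dbhost;databaseName=app");
            }
            return cfg.buildSessionFactory();
        }
    }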
  42. As was mentioned before, there are I'm sure a subset of projects for which DB portability should be taken into consideration at development time.

    I'm thinking of a definition like 'DB-driven applications developed for sale to third parties who already have existing database expertise'. Perhaps there are other project types I'm missing out on. However, I believe applications for which portability should be an important consideration are still in a relatively small minority.
    You may be right... for now. But, as I said, this is precisely the same 'mainframe' attitude that was common in the 70s and early 80s with regards to operating systems - there was simply no point worrying, as ICL and DEC would always be around, so we might as well code assembler for their systems. A surprising number of projects from those times are still around, and having to be supported. OK, so it is definitely not that bad regarding relational databases, SQL and other features, but it just seems to me that lessons have not been learned.
    However I would imagine that porting simple jdbc would not be too difficult either. I would imagine that what would eat up time would be the complicated queries - and it is precisely these sort of queries that are not conducive to modelling in hibernate.
    There can be a lot more than that. Hibernate and other ORMs do a far better job of hiding the details of a database than JDBC.
  43. If I know the RDBMS' native language, SQL, reasonably well am I not better off using JDBC with some good quality SQL?


    You may know that RDBMS's native language, but what about another RDBMS's native language? SQL is simply not easily portable for significant applications, no matter what relational database advocates may say. Java is a language designed from the start around the idea of portability. For this reason alone, ORMs and Java go well together.

    Have people found the complexity worth it or do they still drop down to SQL and pure JDBC for anything complex?


    The good thing about a quality ORM is that you don't need to drop down to SQL and JDBC for things that need it. You can use SQL together with the ORM.
    I understand why people prefer SQL to ORM. We evaluated Hibernate 2.0 last year, but the experience was not so exciting. For example,

    UPDATE EMPLOYEE SET SALARY = SALARY*1.2 WHERE SPECIALISM = 'ORM';

    Above SQL is simple & straightforward enough, but we just couldn't get it done easily with Hibernate. We had to fetch rows and update them one by one, which kills performance and is far too complicated. I don't know whether JDO & JPA have the same limitation built in - would you please show me how to express the above SQL effectively with JDO / JPA?
  44. I am sure bulk update exists in Hibernate as it does in other leading ORM solutions.

    When using JPA you can perform such operations using:

    entityManager.createQuery("UPDATE Employee e SET e.salary = e.salary*1.2 WHERE e.specialism = 'ORM'").executeUpdate();

    It is true earlier versions of ORM solutions required the approach you described but based on consumer demand this functionality has been available for a while now.

    Doug Clarke
    Oracle TopLink
  45. I am sure bulk update exists in Hibernate as it does in other leading ORM solutions.

    When using JPA you can perform such operations using:


    entityManager.createQuery("UPDATE Employee e SET e.salary = e.salary*1.2 WHERE e.specialism = 'ORM'").executeUpdate();


    It is true earlier versions of ORM solutions required the approach you described but based on consumer demand this functionality has been available for a while now.

    Doug Clarke
    Oracle TopLink
    I just checked that Hibernate3 supports bulk updating. Great! I believe this is a very basic need for any project. At the time of our evaluation (last year) Hibernate was at version 2.x - not 0.8, 0.9 or a beta release - so we were surprised. Does anyone know if JDO supports bulk updating?
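    For reference, the Hibernate 3 form is essentially the same HQL bulk operation as the JPA snippet quoted above (a sketch, assuming an Employee mapping like the one discussed earlier in the thread):

    import org.hibernate.Session;
    import org.hibernate.Transaction;

    public class BulkUpdateSketch {

        static int giveOrmSpecialistsARaise(Session session) {
            Transaction tx = session.beginTransaction();
            // Executed as a single UPDATE statement in the database; no entities are loaded.
            int updated = session.createQuery(
                    "update Employee e set e.salary = e.salary * 1.2 where e.specialism = 'ORM'")
                    .executeUpdate();
            tx.commit();
            return updated;
        }
    }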
  46. Does anyone know if JDO supports bulk updating?
    No, it doesn't. I suspect this is for two reasons - firstly that sort of updating is primarily only useful for relational systems, and JDO is general purpose. Secondly, it is potentially very dangerous in terms of data integrity, as it works directly with the database and bypasses any objects being handled in memory by the persistence manager: there could be major problems with optimistic locking and version management at the very least.
  47. Does anyone know if JDO supports bulk updating?


    No, it doesn't. I suspect this is for two reasons - firstly that sort of updating is primarily only useful for relational systems, and JDO is general purpose. Secondly, it is potentially very dangerous in terms of data integrity, as it works directly with the database and bypasses any objects being handled in memory by the persistence manager: there could be major problems with optimistic locking and version management at the very least.
    I strongly disagree! You cannot, firstly, complicate the bulk-updating, and then declare that bulk-updating is not good. Therefore, your argument is simply a "strawman fallacy"!!! Many developers have handled such UPDATE statements in millions of projects without any problems. Most importantly, bulk-updating is critical for performance. I cannot imagine retrieving and updating rows one by one, slowly, while a simple statement serves the same purpose easily. JDO gurus should seriously consider this point.
  48. You cannot, firstly, complicate the bulk-updating, and then declare that bulk-updating is not good. Therefore, your argument is simply a "strawman fallacy"!!!

    Many developers have handled such UPDATE statements in millions of projects without any problems. Most importantly, bulk-updating is critical for performance. I cannot imagine retrieving and updating rows one by one, slowly, while a simple statement serves the same purpose easily.

    JDO gurus should seriously consider this point.
    Firstly, I did not say bulk updating is not good in general. What I mean to say is that it is potentially dangerous if used with an ORM. The issue of how other developers use bulk update outside of an ORM is irrelevant - they are not dealing with in-memory caches of objects, which have state that needs to be maintained and matters of versioning. You can use bulk updating in some ORMs - Hibernate and JPA for example, but you have to be careful how you use it - for example, it is best used within its own transaction. Secondly, no-one is saying you have to retrieve rows one by one - that is a "straw man" argument about how ORMs work. For example in JDO, you can use FetchPlan.setFetchSize(..) to set for batch retrieval - large batches of objects can be retrieved, updated and persisted efficiently. This is certainly not as fast as bulk updating using SQL directly, but in my view it is not a 'show stopper'.
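    A small sketch of the 'be careful how you use it' caveat (Hibernate 3 shown here; entity and method names invented): because a bulk statement goes straight to the database, running it in its own transaction and clearing the session afterwards avoids working with stale in-memory objects.

    import org.hibernate.Session;
    import org.hibernate.Transaction;

    public class IsolatedBulkDelete {

        static void retireSqlOnlySpecialists(Session session) {
            Transaction tx = session.beginTransaction();
            session.createQuery("delete from Employee where specialism = 'SQL-ONLY'")
                   .executeUpdate();
            tx.commit();
            // The statement bypassed the session's first-level cache, so any Employee
            // instances already loaded here may now be stale - drop them.
            session.clear();
        }
    }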
  49. For example in JDO, you can use

    FetchPlan.setFetchSize(..)

    to set for batch retrieval - large batches of objects can be retrieved, updated and persisted efficiently.

    This is certainly not as fast as bulk updating using SQL directly, but in my view it is not a 'show stopper'.
    It seems that an extra strategy is needed for UPDATE and DELETE with JDO.

    SQL 1) UPDATE EMPLOYEE SET SALARY = SALARY*1.2 WHERE SPECIALISM = 'ORM';

    SQL 2) DELETE FROM EMPLOYEE WHERE SPECIALISM = 'SQL-ONLY';

    A) Would you please demonstrate how to perform the above 2 tasks with JDO? (pseudocode please)

    B) Would somebody else please document a design pattern for UPDATE and DELETE with JDO?
  50. For example in JDO, you can use

    FetchPlan.setFetchSize(..)

    to set for batch retrieval - large batches of objects can be retrieved, updated and persisted efficiently.

    This is certainly not as fast as bulk updating using SQL directly, but in my view it is not a 'show stopper'.



    It seems that an extra strategy is needed for UPDATE and DELETE with JDO.


    SQL 1) UPDATE EMPLOYEE SET SALARY = SALARY*1.2 WHERE SPECIALISM = 'ORM';

    SQL 2) DELETE FROM EMPLOYEE WHERE SPECIALISM = 'SQL-ONLY';


    A) Would you please demonstrate how to perform the above 2 tasks with JDO? (pseudocode please)

    B) Would somebody else please document a design pattern for UPDATE and DELETE with JDO?
    1)
    Query query = persistenceManager.newQuery("SELECT FROM Employee WHERE specialism == 'ORM'");
    List employees = (List) query.execute();
    for (Employee employee : employees)
    employee.setSalary(employee.getSalary() * 1.2);

    For efficiency, you may want to add:

    query.getFetchPlan().setFetchSize(FETCH_SIZE);

    after the first line, where FETCH_SIZE is something large - this will indicate that you want rows fetched in units of 'FETCH_SIZE', not individually.

    If this involves too much memory use (if there are a very large number of rows), you can add a

    persistenceManager.flush();

    call after every (say) few thousand objects have been processed.

    2)
    Query query = persistenceManager.newQuery("SELECT FROM Employee WHERE specialism == 'SQL-ONLY'");
    query.deletePersistentAll();
  51. For example in JDO, you can use

    FetchPlan.setFetchSize(..)

    to set for batch retrieval - large batches of objects can be retrieved, updated and persisted efficiently.

    This is certainly not as fast as bulk updating using SQL directly, but in my view it is not a 'show stopper'.



    It seems that an extra strategy is needed for UPDATE and DELETE with JDO.


    SQL 1) UPDATE EMPLOYEE SET SALARY = SALARY*1.2 WHERE SPECIALISM = 'ORM';

    SQL 2) DELETE FROM EMPLOYEE WHERE SPECIALISM = 'SQL-ONLY';


    A) Would you please demonstrate how to perform the above 2 tasks with JDO? (pseudocode please)

    B) Would somebody else please document a design pattern for UPDATE and DELETE with JDO?



    1)
    Query query = persistenceManager.newQuery("SELECT FROM Employee WHERE specialism == 'ORM'");
    List employees = (List) query.execute();
    for (Employee employee : employees)
    employee.setSalary(employee.getSalary() * 1.2);

    For efficiency, you may want to add:

    query.getFetchPlan().setFetchSize(FETCH_SIZE);

    after the first line, where FETCH_SIZE is something large - this will indicate that you want rows fetched in units of 'FETCH_SIZE', not individually.

    If this involves too much memory use (if there are a very large number of rows), you can add a

    persistenceManager.flush();

    call after every (say) few thousand objects have been processed.

    2)
    Query query = persistenceManager.newQuery("SELECT FROM Employee WHERE specialism == 'SQL-ONLY'");
    query.deletePersistentAll();
    1) The UPDATE statement becomes very intricate, doesn't it?

    2) Is this really a DELETE statement? You are showing a SELECT query + a DELETE action. But for developers, the most explicit method for DELETE should be "DELETE XXX ..." instead of "SELECT XXX ...".

    Too much work, but no gains, especially on performance!
  52. But for developers, the most explicit method for DELETE should be "DELETE XXX ..." instead of "SELECT XXX ...".

    Too much work, but no gains, especially on performance!
    Oddly enough, when I use an SQL tool to do deletes, I always do a SELECT first to ensure I am deleting what I think I am deleting.
  53. The UPDATE statement becomes very intricate, doesn't it?

    2) Is this really a DELETE statement? You are showing a SELECT query + a DELETE action. But for developers, the most explicit method for DELETE should be "DELETE XXX ..." instead of "SELECT XXX ...".

    Too much work, but no gains, especially on performance!
    No gains? Shall I chop down the tree directly in front of you so you can see the forest?
  54. For example in JDO, you can use

    FetchPlan.setFetchSize(..)

    to set for batch retrieval - large batches of objects can be retrieved, updated and persisted efficiently.

    This is certainly not as fast as bulk updating using SQL directly, but in my view it is not a 'show stopper'.



    It seems that an extra strategy is needed for UPDATE and DELETE with JDO.


    SQL 1) UPDATE EMPLOYEE SET SALARY = SALARY*1.2 WHERE SPECIALISM = 'ORM';

    SQL 2) DELETE FROM EMPLOYEE WHERE SPECIALISM = 'SQL-ONLY';


    A) Would you please demonstrate how to perform the above 2 tasks with JDO? (pseudocode please)

    B) Would somebody else please document a design pattern for UPDATE and DELETE with JDO?
    Now it's your turn: how would you do "Distributed Write-Behind Caching" with the above? :) (Yes, it's a quotation, not my own words.)
  55. For example,

    UPDATE EMPLOYEE SET SALARY = SALARY*1.2 WHERE SPECIALISM = 'ORM';

    The above SQL is simple and straightforward enough,
    but we just failed to get it done easily with Hibernate.

    We have to fetch rows and update them one by one, which is performance-killing and far too complicated.
    'O' in ORM stands for 'Object'. As I mentioned in my post above, ORMs are great for object-oriented systems that use the Domain Model architectural pattern.

    The quoted example shows a typical case of using an ORM for static class-table mapping in a system that is NOT object-oriented - the primary notion is the ROW, which is fetched, processed and updated. In an OO system the primary notion is an Employee object with getSalary() and setSalary() methods, and changes in the db are just a SIDE EFFECT of calling those methods.

    This means that an object graph cannot be updated by a single SQL UPDATE invocation. The objects have to be updated one by one. The question remains how those changes should be efficiently propagated to the db (a general solution could be sending a batch of updates at the end of the transaction, but a case of updating e.g. 1 million objects should rather be treated differently).

    http://www.enterpriseware.eu
  56. This means that an object graph cannot be updated by a single SQL UPDATE invocation.
    The object graph can't be, but JPA does allow a bulk update. However, this should be used with caution because it bypasses the object graph.
    The objects have to be updated one by one. The question remains how those changes should be efficiently propagated to the db (a general solution could be sending a batch of updates at the end of the transaction, but a case of updating e.g. 1 million objects should rather be treated differently).
    Modern ORMs have a 'flush' which allows batches of updates to be performed even in the middle of a transaction. This can allow very large transactions to be performed.
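
    For reference, a minimal sketch of the JPA bulk update mentioned above; the entityManagerFactory variable and the Employee/specialism names are assumptions carried over from the earlier examples:

    // A JPA bulk UPDATE bypasses the persistence context, so it is safest in
    // its own transaction, before any Employee instances are loaded into memory.
    EntityManager em = entityManagerFactory.createEntityManager();
    em.getTransaction().begin();
    int updated = em.createQuery(
            "UPDATE Employee e SET e.salary = e.salary * 1.2 WHERE e.specialism = :spec")
        .setParameter("spec", "ORM")
        .executeUpdate();
    em.getTransaction().commit();
    em.close();

    For the object-by-object route, periodic em.flush() (and em.clear()) calls are the usual way to keep the persistence context from growing without bound.
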
  57. For example,

    UPDATE EMPLOYEE SET SALARY = SALARY*1.2 WHERE SPECIALISM = 'ORM';

    The above SQL is simple and straightforward enough,
    but we just failed to get it done easily with Hibernate.

    We have to fetch rows and update them one by one, which is performance-killing and far too complicated.


    'O' in ORM stands for 'Object'. As I mentioned in my post above, ORMs are great for object-oriented systems that use the Domain Model architectural pattern.

    The quoted example shows a typical case of using an ORM for static class-table mapping in a system that is NOT object-oriented - the primary notion is the ROW, which is fetched, processed and updated. In an OO system the primary notion is an Employee object with getSalary() and setSalary() methods, and changes in the db are just a SIDE EFFECT of calling those methods.
    This means that an object graph cannot be updated by a single SQL UPDATE invocation. The objects have to be updated one by one. The question remains how those changes should be efficiently propagated to the db (a general solution could be sending a batch of updates at the end of the transaction, but a case of updating e.g. 1 million objects should rather be treated differently).

    http://www.enterpriseware.eu
    IMO, even in object-oriented systems there are times when we need to bulk-update or bulk-delete "objects". As I said, this is a very basic need in any project.

    After our last evaluation we finally chose iBatis instead. So far we have built 2 systems with iBatis, and we do think our systems are "object-oriented". So the problem is not whether a system is object-oriented or not - one can build OO systems without an ORM or iBatis. The problem is that some ORM tools simply make simple things far too complicated and inefficient. But it is good to know that ORMs are improving.
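
    For comparison, this is roughly what the same bulk update looks like with iBATIS 2: the SQL lives in a mapped statement and the Java side just invokes it. The statement name 'giveRaise' and the sqlMapClient variable are hypothetical, not taken from the poster's projects:

    // In the SQL map:
    //   <update id="giveRaise">
    //     UPDATE EMPLOYEE SET SALARY = SALARY*1.2 WHERE SPECIALISM = #value#
    //   </update>
    // In the Java code:
    int rows = sqlMapClient.update("giveRaise", "ORM");
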
  58. Organizational Issues

    To cope with the complexities of Hibernate you need:
    1. A uniform mapping strategy. It makes no sense to mix the diverse mapping options in one project.
    2. A person (or two) dedicated to working with Hibernate. Project members delegate their persistence requirements to the person responsible for the smooth and consistent working of Hibernate.
  59. Wrong use of ORMs

    ORMs can help to generate SQL, and many people (maybe most of them) use them only for this purpose. But the real challenge is not HOW to generate the SQL (that's easy), but WHEN to execute it so that the overall state (db + app layer) stays consistent. This is the real advantage of ORMs.

    http://www.enterpriseware.eu
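
    A minimal Hibernate sketch of the 'WHEN' point (Employee, employeeId and sessionFactory are assumed from the earlier examples): the UPDATE is not issued where you might expect it to be.

    Session session = sessionFactory.openSession();
    Transaction tx = session.beginTransaction();

    Employee employee = (Employee) session.get(Employee.class, employeeId);
    employee.setSalary(employee.getSalary() * 1.2);   // only an in-memory change so far

    tx.commit();    // dirty checking detects the change and the UPDATE runs here
    session.close();
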
  60. Performance issues vs. AOP

    Having worked with Hibernate, I must add the following to the comments about Hibernate performance. The basic Hibernate framework is not 'production quality' right out of the box; anyone who forgets to tune the caches and performance bottlenecks is making a grave error - a newbie one. Hibernate used in conjunction with an AOP framework improved performance tenfold in our projects. In our case, a simple Method Interceptor made with Spring classes actually divided the execution times by 100 in some cases. Don't forget that Hibernate is a general framework and cannot come pre-tuned for your specific project's needs.
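
    As a rough illustration of the kind of tuning meant here - the property values are my own choices, assuming a Hibernate 3.2-era setup with EHCache on the classpath:

    Configuration cfg = new Configuration().configure();    // reads hibernate.cfg.xml
    cfg.setProperty("hibernate.cache.use_second_level_cache", "true");
    cfg.setProperty("hibernate.cache.provider_class", "org.hibernate.cache.EhCacheProvider");
    cfg.setProperty("hibernate.jdbc.batch_size", "50");      // batch JDBC statements
    SessionFactory sessionFactory = cfg.buildSessionFactory();

    The second-level cache only takes effect for classes and collections that are also marked cacheable in their mappings.
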
  61. Re: Performance issues vs. AOP

    In our case, a simple Method Interceptor made with Spring classes actually divided the execution times by 100 in some cases.
    As a matter of interest, what does this interceptor do? I appreciate it's probably something specific to your project, but if it's not commercially sensitive I'd be interested to know how you managed such a large performance gain.
  62. Re: Performance issues vs. AOP

    The trick is to wrap your management beans (business logic) in proxy factory beans and intercept calls to their methods.

    Example: you have a manager responsible for fetching POJOs from the database through your DAO objects. This manager has a method called getCached(Long id). Set up your proxy to redirect calls to an org.aopalliance.intercept.MethodInterceptor which is injected with a cache - say EHCache, for example. The MethodInterceptor hashes the request and looks in its cache for previous similar requests. If the result is in the cache, it returns the previously serialized object. If not, it executes the method through an invoker, serializes the result for future use, and finally returns the result object.

    The advantage over Hibernate's cache is that when you access objects with lots of links to other objects, the lazy initialization process fetches far too much information for nothing on each request, even with the Hibernate cache turned on. When you intercept the manager method, the object is already constructed and in memory, ready for immediate access, instead of requiring access to many caches and a heck of a lot of setter calls.
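
    A minimal sketch of such an interceptor; the class name, the key scheme and the injected EHCache region are my own, and unlike the poster's version it stores the result object directly rather than a serialized copy:

    import java.io.Serializable;
    import java.util.Arrays;

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.Element;

    import org.aopalliance.intercept.MethodInterceptor;
    import org.aopalliance.intercept.MethodInvocation;

    public class CachingInterceptor implements MethodInterceptor {

        private final Cache cache;   // an EHCache region, injected via Spring

        public CachingInterceptor(Cache cache) {
            this.cache = cache;
        }

        public Object invoke(MethodInvocation invocation) throws Throwable {
            // Key the cache on the method name plus its arguments.
            Serializable key = invocation.getMethod().getName()
                    + Arrays.deepToString(invocation.getArguments());

            Element hit = cache.get(key);
            if (hit != null) {
                return hit.getObjectValue();       // cache hit: skip Hibernate entirely
            }

            Object result = invocation.proceed();  // call the real manager method
            cache.put(new Element(key, (Serializable) result));   // assumes results are Serializable
            return result;
        }
    }

    The interceptor is then wrapped around the manager bean with a Spring ProxyFactoryBean (or an auto-proxy creator), so on a cache hit callers get a fully constructed object without any lazy-loading round trips.
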
  63. Good Tools Need Good Developers

    I started a comment here, but it turned into a blog entry: Good Tools Need Good Developers
  64. ... so that even not so good developers can use them! Nobody says: 'Good cars need good drivers'.
  65. ... so that even not so good developers can use them!
    Nobody says: 'Good cars need good drivers'.
    Yeah, you're right... We should all drive Indy cars. That should make everyone safer and let us get to work at 200 MPH!
  66. I started a comment here, but it turned into a blog entry:

    Good Tools Need Good Developers
    Agree. Should someone who doesn't yet understand relational database theory, indexes, JDBC, or how to write a performant query use Hibernate? Probably not without some expert guidance.

    I think the comment by Casual about good cars needing good drivers is a bit off topic. Is the car really classified as a tool? Software development isn't carpentry, but it is true that when you are starting out you don't get the expensive air hammer, or lathe, or sanders - you start out with the simple tools, even if they take more time. I also blogged on a similar topic, criticizing the push for smart tools over smart developers.

    Getting back to the original post, I agree that Hibernate is a good tool, just not an easy tool to be applied blindly.

    ______________
    George Coller
    DevilElephant
  67. Agree. Should someone who doesn't yet understand relational database theory, indexes, JDBC, or how to write a performant query use Hibernate? Probably not without some expert guidance.
    Clearly, the more you know the better, but I don't think it is an absolute necessity in all cases. However, the reasons for using Hibernate are entirely different; ease of use would not be anywhere near the top of that list.
    Getting back to the original post, I agree that Hibernate is a good tool, just not an easy tool to be applied blindly.
    I don't think any tool is meant to be used blindly.
  68. Re: Good Tools Need Good Developers

    I don't think any tool is meant to be used blindly.
    Unless you are a double ninja master! (Call back - )
  69. Seeing eye dog?

    I don't think any tool is meant to be used blindly.
    Doesn't matter how a tool is meant to be used. I've caught my wife using an expensive screwdriver as a hammer and as a paint stirrer.

    A point I made earlier in this thread is that good lessons need to be taught over and over, because people forget and make the same mistakes. My point is that Hibernate has been, and is being, used blindly. The important take-away from this article, I feel, is that a team should really think about whether Hibernate will gracefully solve their problem and whether it is worth the effort over a lighter, more transparent framework.

    I think Hibernate gets picked on because it is over-hyped. It is the EJB of this decade, and just like EJB there are many who use it because it is widely used, rather than for the discrete set of use cases it excels at. In fairness to Hibernate, I'd like to see more articles discussing those points of excellence.

    ______________
    George Coller
    DevilElephant
  70. Is this a joke? "It makes it more difficult for people to setup classpaths in your deployment environment. Admittedly, this will probably be less of a problem with the introduction of Mustang's Class-Path Wildcards, but it's still rather messy." Who is crazy enough to set the CLASSPATH environment variable, compromising ALL projects' classpaths? If you really need to run something from the command line, you can use Ant to handle jar wildcards. Maybe they use command-line javac to build their projects...
  71. About portability

    There are a number of posts about portability, but it seems that everyone is focused on the possibility (likelihood) of having different DBs within an enterprise. No one has pointed out the possibility of systems (or reused parts of systems) being deployed in different environments: some could have or prefer Oracle, others MySQL, and so on. Is "built-in" portability worth it in this case? A statement like "for now we support these DBs" looks rather dated when you could claim "we support all DBs (RDBMSs) supported by XXX" - or better, "we support any DB with a JDO interface on top".

    Guido
  72. Hi,

    There is yet another very good tutorial on Hibernate at http://www.roseindia.net/hibernate. It provides step-by-step instructions on using Hibernate 3 and explains advanced topics like:
    * Hibernate Query Language
    * HQL Examples
    * HQL from Clause Example
    * HQL Select Clause Example
    * HQL Where Clause Example

    Read more about this at http://www.roseindia.net/hibernate

    Thanks
    Deepak Kumar
  73. darn got here too late

    This was a good article. I've used Hibernate for a few projects and I really like it. It speeds up dev time significantly once you are familiar with its quirks.

    About 10% of the time I find myself dropping down to JDBC. There are some queries Hibernate cannot do, like aggregates with createSQLQuery(). With HQL, select new(...) sometimes doesn't quite work when Hibernate has to figure out a complex expression inside the new(...). Other than those things, I fiddle with it and figure out the correct query; once you have that query down you're golden and everything works like magic. I also had to add some code to Hibernate to allow for certain database functions. That was easy.

    I wasn't aware of the hilo solution - I just generate an ID with each insert - but given that I'm looking at a lot of transactions in a few months, I'll need that solution for a few tables.

    Overall, Hibernate is amazing; it's my secret tool. People tell me, "You can't do that in that amount of time" and I just laugh. Oh yes I can.
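
    For what it's worth, this is the kind of 'select new' constructor query being referred to; the EmployeeSummary class and package are hypothetical, not from the article:

    public class EmployeeSummary {
        private final String specialism;
        private final Double averageSalary;

        public EmployeeSummary(String specialism, Double averageSalary) {
            this.specialism = specialism;
            this.averageSalary = averageSalary;
        }
    }

    // With an open Hibernate Session:
    List summaries = session.createQuery(
        "select new com.example.EmployeeSummary(e.specialism, avg(e.salary)) " +
        "from Employee e group by e.specialism")
        .list();

    When the expression inside new(...) gets too complex for Hibernate to parse, falling back to a plain projection list or createSQLQuery() is often the simpler route.
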