Cartoon: In search of a better persistence API


  1. Everyone knows the Java community needs a POJO persistence spec. Should we start from scratch under the auspices of EJB, or work with JDO, which is available now? This latest cartoon pokes fun at the current political state of affairs with Java persistence.

    Watch: In search of a better persistence API

    Editor's Note: Due to past responses, we would like to remind readers that cartoons are not to be taken seriously; they are intended to be satirical.

    Threaded Messages (309)

  2. cartoon makeover

    I have one little request. If we're gonna make cartoon characters of real people, can we jazz it up a bit and give them a *cough* make-over? After all, it's a cartoon :) We can pretend geeks look like Tom Cruise, are built like Schwarzenegger, and have oodles of fans.

    </dump-joke>
  3. Maybe we have enough persistence technologies already?

    Many Java developers are managing just fine (and have been for many years) in delivering world-class solutions using Java. The constant re-invention of key areas like persistence technology is only increasing the risk of new projects failing as we all jump onto the next fad.

    My advice, albeit highly biased, is to design your db schema using good old-fashioned database design tools, blast out your POJO persistence code using one of the many code generators out there (but preferably using this one), and then concentrate on writing some business logic and presentation logic. There are enough challenges there to keep us all busy without looking for a new persistence technology every time we start a new project.
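    To make that concrete, here is a rough sketch of the kind of POJO persistence code a generator might emit from a table definition. The `Customer` class and column names are hypothetical, not the output of any particular tool:

```java
// Hypothetical generator-style persistence code for a CUSTOMER table:
// a plain POJO plus hand-rolled SQL, no ORM framework involved.
public class Customer {
    private final int id;
    private final String name;

    public Customer(int id, String name) {
        this.id = id;
        this.name = name;
    }

    public int getId() { return id; }
    public String getName() { return name; }

    // A real generator would typically emit JDBC code using
    // PreparedStatement parameters; a literal SQL string is used here
    // only to keep the example self-contained.
    public String toInsertSql() {
        return "INSERT INTO customer (id, name) VALUES ("
                + id + ", '" + name + "')";
    }
}
```

    The point of this style is that the mapping is frozen at generation time, so there is nothing to learn or configure at runtime.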

    You'd think after 20 years or so we would have cracked the problem of mapping objects to relational databases, wouldn't you?

    Andy Grove
    http://www.codefutures.com
  4. Maybe we have enough persistence technologies already? Many Java developers are managing just fine (and have been for many years) in delivering world-class solutions using Java. The constant re-invention of key areas like persistence technology is only increasing the risk of new projects failing as we all jump onto the next fad.
    --snip--

    My advice, albeit highly biased, is to design your db schema using good old-fashioned database design tools and blast out your POJO persistence code using one of the many code generators out there (but preferably using [CodeFutures Firestorm])
    The fact is that plenty of people have seen a need beyond EJB and the code generation approach, hence Hibernate, iBatis, TopLink, OJB, Castor, etc. I would say that the risk of developers putting the future of their applications in the hands of numerous incompatible, proprietary frameworks is greater than the risk of some people adopting JDO. Standardized ORM has some merit and validity, and as such, a standard, portable solution is a good idea.

    Furthermore, JDO has an active userbase. The current debate is essentially whether to let that standard stagnate while vendors extend their JDO solution to meet needs in a non-standard way while they wait to port to EJB3, or whether to standardize the way some needed features are implemented and give old and new JDO users a migration path toward EJB3.
  5. The current debate is essentially whether to let that standard stagnate while vendors extend their JDO solution to meet needs in a non-standard way while they wait to port to EJB3, or whether to standardize the way some needed features are implemented and give old and new JDO users a migration path toward EJB3.

    Not everyone necessarily wants to migrate towards EJB3! JDO 2.0 (or whatever its possible non-JCP alternative might be if the vote goes the wrong way) is a full-featured POJO persistence specification. Many of us are doing pretty well even with JDO 1.0.1 and some vendor extensions. The EJB3.0 persistence mechanism will have to be a significantly superior superset of JDO features to encourage everyone to eventually migrate.
  6. Not everyone necessarily wants to migrate towards EJB3! JDO 2.0 (or whatever its possible non-JCP alternative might be if the vote goes the wrong way) is a full-featured POJO persistence specification. Many of us are doing pretty well even with JDO 1.0.1 and some vendor extensions. The EJB3.0 persistence mechanism will have to be a significantly superior superset of JDO features to encourage everyone to eventually migrate.
    Okay, not toward full EJB3, but you will be moving toward a single POJO persistence model that is being spun off from the EJB3 spec. That's already been agreed upon. I misphrased it, but sharing the same foundation as EJB3 is the direction of JDO2+, if it survives.
  7. Not everyone necessarily wants to migrate towards EJB3! JDO 2.0 (or whatever its possible non-JCP alternative might be if the vote goes the wrong way) is a full-featured POJO persistence specification. Many of us are doing pretty well even with JDO 1.0.1 and some vendor extensions. The EJB3.0 persistence mechanism will have to be a significantly superior superset of JDO features to encourage everyone to eventually migrate.
    Okay, not toward full EJB3, but you will be moving toward a single POJO persistence model that is being spun off from the EJB3 spec. That's already been agreed upon. I misphrased it, but sharing the same foundation as EJB3 is the direction of JDO2+, if it survives.

    I understand all this, but it provides no incentive (so far) for me as a developer and decision maker to plan to migrate. I have a substantial amount of code that uses JDO. I foolishly expected that a common POJO persistence model would not require re-writing of that existing code. I expected a superset of the JDO API (an existing JCP specification). (The word 'unified' - as in 'unified model' - seems to have been re-defined as 'entirely new'). I'm not interested in 'leveraging the new common persistence API' (from your link) - whatever that vague phrase is supposed to mean. Unless there is an easy migration path, there is no reason for me to use this new specification unless it is a major superset of JDO features. Simply stating that it is the intended new 'unified model' achieves nothing. There has to be a reason to use it.
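    For a sense of scale, typical JDO client code is built around the PersistenceManager and Transaction interfaces. The sketch below mimics that call pattern with toy in-memory stand-ins (the real interfaces live in javax.jdo and do much more; the `Toy*` names and the `Customer` class are simplified assumptions) just to show the style of code that would need rewriting under a from-scratch spec:

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-ins for the javax.jdo interfaces, so the call pattern of
// typical JDO client code can be shown without a vendor jar.
class Customer {
    final int id;
    final String name;
    Customer(int id, String name) { this.id = id; this.name = name; }
}

class ToyTransaction {
    private boolean active;
    void begin()    { active = true; }
    void commit()   { active = false; }
    void rollback() { active = false; }
    boolean isActive() { return active; }
}

class ToyPersistenceManager {
    private final ToyTransaction tx = new ToyTransaction();
    final List<Object> store = new ArrayList<>();
    ToyTransaction currentTransaction() { return tx; }
    // Real JDO tracks persistent state transparently via enhancement;
    // here we just record the object.
    void makePersistent(Object o) { store.add(o); }
    void close() { }
}

public class JdoStyleExample {
    // The same begin / makePersistent / commit shape as JDO 1.0 code.
    static ToyPersistenceManager saveCustomer() {
        ToyPersistenceManager pm = new ToyPersistenceManager();
        ToyTransaction tx = pm.currentTransaction();
        try {
            tx.begin();
            pm.makePersistent(new Customer(1, "Acme"));
            tx.commit();
        } finally {
            if (tx.isActive()) tx.rollback();
            pm.close();
        }
        return pm;
    }
}
```

    An application with hundreds of units of work in this shape is exactly the investment a "unified" model would need to preserve.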

    'JDO2+' will survive in some form; outside of the JCP if necessary.
  8. I have a substantial amount of code that uses JDO. I foolishly expected that a common POJO persistence model would not require re-writing of that existing code.
    Do not forget to wrap EJB3 using Spring next time :)
  9. History is repeating itself

    History is repeating itself. Just look at what was said about EJB2 four years ago by people such as Tyler Jewell and Billy Newport.
  10. Maybe we have enough persistence technologies already? Many Java developers are managing just fine (and have been for many years) in delivering world-class solutions using Java. The constant re-invention of key areas like persistence technology is only increasing the risk of new projects failing as we all jump onto the next fad. My advice, albeit highly biased, is to design your db schema using good old-fashioned database design tools and blast out your POJO persistence code using one of the many code generators out there (but preferably using this one) and then concentrate on writing some business logic and presentation logic.

    Good point. As Smith said in the Matrix, "Never send a man to do a machine's job!" It will be interesting to try your product.

    You'd think after 20 years or so we would have cracked the problem of mapping objects to relational databases, wouldn't you? - Andy Grove, http://www.codefutures.com

    We have. It's called JDO 2.0! :-)
  11. Please don't expect many sales here; many of us are not corporate moneybags and have had bad experiences with closed-source products, i.e. no support for a paid-for old product or product version when the developer gets tired of it!

    I've found that OSS products have saved me many times because I could rapidly code a fix for a bug or annoyance and release the product on time; closed source doesn't allow this. I've had to adapt and extend some old OSS project versions because the target environment only has JRE 1.3 installed.

    In the XML domain I have found some commercial products to be inferior to OSS products for marshalling/unmarshalling XML in Java.
  12. Dear Java geeks,
    you all think you are pretty smart. But at the end of
    the day the people who suffer are the people you build the solutions for. Have you people forgotten what a "standard" is? Moreover, what an "open standard" is and why people are supposed to use open standards? Remember, in the J2EE world the standards define a contract between the application server vendors, the component developers and the tool vendors, so we can all be talking the same language. In the end, the people who are supposed to benefit are the people you build the systems for. This should apply to persistence as well. I think clients should demand "solutions" that application server vendors support, and not whatever is the latest fad for the next Java geek who can't be bothered learning what J2EE is all about.
  13. Instead of Gavin...

    Instead of Gavin with his arm around Craig, it should have been Dion Almaer saying "Don't worry Craig, I'll make sure TSS promotes JDO." Or, better yet, Gehr Magnusson saying "Don't worry, I'll defend you on the EC through GlueCode's, err...whoops..., I mean Apache's seat on the EC."

    Bill
  14. Instead of Gavin with his arm around Craig, it should have been Dion Almaer saying "Don't worry Craig, I'll make sure TSS promotes JDO.". Or...better yet, Gehr Magnusson say "Don't worry, I'll defend you on the EC through GlueCode's, err...whoops..., I mean Apache's seat on the EC."Bill

    This is why I don't trust the JBoss/Hibernate grouping as a viable business technology choice. The JBoss/Hibernate grouping always attacks JDO, even though they will never do an implementation themselves. This is a mystery to me. Why so involved in something one doesn't care about? JDO people (myself included) don't mind Hibernate or EJB3, as long as the JCP realises that many users have a lot of investment in the JDO standard and need JDO 2 to be approved.

    If the JCP doesn't guarantee the standards it has itself initially approved (JSR-12 and JSR-243), then why should the JCP have any credibility at all? Why should I trust that the coming EJB3 is a "standard" when it's coming from a "standards" group that doesn't protect its users' investments? Why should the JCP be regarded as a standards forum at all?

    In a technology decision I take into consideration whether a technology promoter is bluntly putting more effort into politics and being annoyed by competing standards than into making consistent and sustainable products and APIs themselves.

    Both the EJB persistence vendors (EJB versions 1.1, 2.x and now 3) and Hibernate (versions 2 and 3) are making big shifts in their APIs and still want to be regarded as good for standards and users. This kind of wobbling is not a standards effort at all to me. JDO has a 99.9% backwards-compatible API path, with a huge API standard feedback loop from actual users since August 2003.

    As a decision maker in persistent storage solutions, I am losing trust in the JCP as a whole if JDO 2.0 doesn't become approved. Even if EJB3 comes, I cannot regard the JCP as a credible forum, and hence EJB3 is not credible either. And if Bill B or Gavin K will be happy about it, they have to know that the alternative for me in order to get a stable API is not Hibernate or EJB3; it's plainly and simply .NET.

    Johan Strandler
    Smart Connexion
  15. If the JCP doesn't guarantee the standards it has itself initially approved (JSR-12 and JSR-243), then why should the JCP have any credibility at all? Why should I trust that the coming EJB3 is a "standard" when it's coming from a "standards" group that doesn't protect its users' investments? Why should the JCP be regarded as a standards forum at all?

    My view exactly.
    And if Bill B or Gavin K will be happy about it, they have to know that the alternative for me in order to get a stable API is not Hibernate or EJB3; it's plainly and simply .NET.

    I would suggest that such a drastic move would not be necessary - the JDO 1.0.1 specification remains whatever happens, and I believe that there is considerable vendor and developer interest in providing JDO 2.0-type features with multi-vendor support outside of the JCP if necessary. There will be a stable API, even if it is not a JSR.
  16. JDO people (as my self) don't mind Hibernate nor EJB3 as long as the JCP realises that many users have a lot of investments in the JDO standard and need the JDO 2 to be approved.

    It appears to me that the problem is that your investment in JDO technology seems to have been contingent on promises of future functionality (JDO 2), rather than on the current spec (JDO 1).
    If the JCP doesn't guarantee the standards it has itself initially approved (JSR-12 and JSR-243), then why should the JCP have any credibility at all... I am losing trust in the JCP as a whole if JDO 2.0 doesn't become approved.

    JSR-12 (JDO 1) has been finalized; JSR-243 (JDO 2) has not been finalized (or approved, as you put it).
    ...the alternative for me in order to get a stable API is not Hibernate or EJB3, it's just plain and simply .NET

    I may be wrong, but as far as I know there is no ORM / persistence standard in .NET, so that the API you are tied to is the vendor's API. This is the worst-case scenario in J2EE (since there *are* standards here), rather than the best-case scenario as it appears to be in .NET...
  17. I may be wrong, but as far as I know there is no ORM / persistence standard in .NET, so that the API you are tied to is the vendor's API. This is the worst-case scenario in J2EE (since there *are* standards here), rather than the best-case scenario as it appears to be in .NET...

    And like their data access has been stable - DAO/RDO/ADO/ADO.NET and, sometime in the future, an object store. Probably why one doesn't see too many .NET ORM solutions being used (yes, many are available, but their product lifespan will be short). It has not been declared from on high to use ORM. Currently, for any MS projects I do, I use NHibernate.
  18. Daniel,

    As you might see from my comments, I am starting to distrust the JCP itself, based on the fact that it promotes changing APIs (EJB 1.1, EJB 2.x, EJB3) instead of stable ones (JDO 1 and 2), and that it focuses on those users who totally accept product and API changes in their systems instead of users who actually use products long term. So, dismissing JDO on standardization grounds is pure nonsense to me, and proof of the failure of the JCP itself.

    Also, I distrust those who dismiss JDO in favor of other ORM technologies. Clearly, anyone who has at least a basic knowledge of ORM knows the JDO API and its products just work and are highly performant. Also, JDO 2 will allow for both the SQL and EJBQL query languages. So dismissing JDO (or Hibernate, for that matter) on technical grounds is really, really big nonsense to me, and tells me more about the technical knowledge of those dismissing it than anything about the technology choices. I have worked with OODB since late 1980, with ORM throughout the '90s, and with really large EJB projects since December 1998, and JDO simply works within and without an EJB server, and is a truly stable API.
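    To illustrate the two query styles side by side, the toy translator below maps a trivial JDOQL-style equality filter (such as "name == :n") to its SQL counterpart. This is purely illustrative, not part of any JDO implementation; in JDO 2 both strings would be handed to the real javax.jdo.Query machinery:

```java
// Toy illustration only: shows the JDOQL and SQL syntaxes for the
// same lookup. Real execution goes through a JDO implementation.
public class QuerySyntaxDemo {
    // Handles only the "field == :param" shape used in this demo.
    static String jdoqlToSql(String table, String jdoqlFilter) {
        String[] parts = jdoqlFilter.split("==");
        String field = parts[0].trim();
        return "SELECT * FROM " + table + " WHERE " + field + " = ?";
    }
}
```

    The object-level filter and the hand-written SQL express the same predicate, which is the sense in which JDO 2 lets you pick whichever language fits the job.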

    Finally, what I tried to say with the .NET comment is that I am starting to believe much more in the stability of that API than in "standards" from the JCP - and stability is key to decision making. The JCP has become a useless playground for Java vendors and will soon meet the same fate as the DCE open-platform initiative of the early '90s, where the vendors easily and happily killed their own market by fighting each other, with the users in the middle of the bullet-rain.

    Cheers,
    Johan Strandler
    Smart Connexion
  19. I have worked with OODB since late 1980

    Ok, that was a little bit of a bold statement. It should have been "late 80'ies". I guess everybody understood that, but better safe than sorry. :-)

    /Johan Strandler
  20. Instead of Gavin...

    Instead of Gavin with his arm around Craig, it should have been Dion Almaer saying "Don't worry Craig, I'll make sure TSS promotes JDO.". Or...better yet, Gehr Magnusson say "Don't worry, I'll defend you on the EC through GlueCode's, err...whoops..., I mean Apache's seat on the EC."Bill

    Are you channeling Vic now?

    Apache's vote was based on Apache's interest in supporting open standards and open communities, leaving the ecosystem/market free to choose technologies and architectures. Competition is good, Bill. Embrace it, don't fear it.

    geir

    (P.S. Gluecode doesn't have a JDO implementation, nor do we have any intention of implementing "EJayBernate", the POJO persistence sub-spec from JSR-220. What are you worried about?)

    [I am the Apache representative on the JCP EC and voted in favor of JDO2. I also happen to work for Gluecode Software]
  21. Geir...

    [I am the Apache representative on the JCP EC and voted in favor of JDO2. I also happen to work for Gluecode Software]

    As the Apache rep, for now you have to vote for Apache. How long would you hold that position if you f_ up? Not long! You had no choice, trust me on this.

    You just happened to get a job offer from GlueCode while GlueCode Core Developer Network members were taking JBoss ideas - as per the response document sent by you (need a link?) - and filtering them through Apache Geronimo, something you railroaded through Apache.
    Something very few people would have done. Did you get a $ bonus for messing over 2 open source communities? Feel good about it?
    Did I get something wrong? Please correct it.

    Had JBoss patented its ideas, you could not have done that, but they did not patent!
    The GlueCode license is now no longer open source: http://www.gluecode.com/website/products/BinaryCodeLicense-JOE.pdf - you have to sign up just to get the binary and agree not to reverse-engineer. Now that's odd.
    People can still get the original current open source code from JBoss under a standard open source license, from the people that own the ideas but did not patent them.

    This is typical of EJB, because EJB itself is a lie, so everyone is pretending it's OK and applying that level of ethics to their work. JDO wants a piece of that O/R pie.
    (http://www.agiledata.org/essays/impedanceMismatch.html)
    How many failed EJB projects can you name?

    In the meantime, the majority of projects are using standard SQL and E/R, as per http://jroller.com/page/yizhou/20050121#ultimate_java_persistence_design

    Tomcat works fine, Apache iBatis too.

    .V
  22. Geir...

    How has EJB3 gotten hijacked by a bunch of arrogant hackers???

    George
    (Also from down under)
  23. Geir...

    How's EJB3 got hijacked by a bunch of arrogant hackers??? - George (also from down under)

    What?

    - geir
  24. Geir...

    By EJB3, I mean "EJB3 spec". Actually they are not arrogant hackers, they are JBossers.
  25. JBoss

    What really puzzles me is this:

    Why does JBoss (or Hibernate) want to get into the business of EJB-EB, which is much closer to demise than JDO?
  26. JBoss

    Why does JBoss (or Hibernate) want to get into the business of EJB-EB, which is much closer to demise than JDO?

    They're doing it just to piss you off. Didn't you realize that the EJB3.0 committee was formed just to upset you? Well, not *just* you, Steve Zara too. I've seen some of the meetings in action, so you can definitely trust what I'm saying.

    Pretty scary, huh? Bet you didn't realize this, did you? You thought this was about technology? LOL. Nope, it's purely personal.

     - Don
  27. JBoss

    You thought this was about technology? LOL.
    Nope.

    It is about marketing. And it is bad marketing.

    Cheers
    George
  28. JBoss

    Why does Jboss (or Hibernate) want to get into the business of EJB-EB which is much closer to demise than JDO.
    They're doing it just to piss you off. Didn't you realize that the EJB3.0 committee was formed just to upset you? Well, not *just* you, Steve Zara too. I've seen some of the meetings in action, so you can definitely trust what I'm saying. Pretty scary, huh? Bet you didn't realize this, did you? You thought this was about technology? LOL. Nope, it's purely personal. - Don

    I would prefer it if you did not try to read my mind. I have nothing against EJB 3.0 as such. I have nothing against the new EJB 3.0 POJO persistence API. What I do object to is the JCP seeming to (and that is all I can say, as I can't read their minds either) deprecate an existing technology which is used by many developers and implemented by many vendors. There may be certain good political and business reasons for this, but I believe it is damaging to the JCP. It's not just me that feels this - it is pretty much what Sun says in its comments regarding the JDO vote.

    I have been through all this before, years ago. I used to develop using a portable GUI API called IFC (from Netscape). It was a sort of mini-Swing. Sun promised a simple migration to the 'new standard' - Swing. This migration path led nowhere, and applications had to be re-written from scratch.

    I would not be a happy person if the same eventually happened with JDO -> the new API.
  29. Geir...

    I know I'm going to regret this...
    [I am the Apache representative on the JCP EC and voted in favor of JDO2. I also happen to work for Gluecode Software]
    As an Apache rep for now you have to vote for Apache. How long would you hold that position if you f_ up? Not long! You had no choice, trust me on this

    Not really. If you look back, I've held this position about JDO and EJB before I became a Gluecode employee. Do we want to revisit that? About having a separate JSR to look at POJO persistence to get rid of the politics between the two communities, and the problem of tying the spec to J2EE 1.5 w/o ever being tested or implemented? Go look here :

    http://blogs.codehaus.org/people/geir/archives/2004_06.html#000758_persisting_problems

    You just happened to get a job offer from GlueCode while GlueCode Core Developer Network members were taking JBoss ideas - as per the response document sent by you (need a link?) - and filtering them through Apache Geronimo, something you railroaded through Apache. Something very few people would have done.

    That's an interesting but factually incorrect view of history. I think that everything was fairly explained in the letters, and that we all hope to put this behind us and move on.

    Did you get a $ bonus for messing over 2 open source communities? Feel good about it? Did I get something wrong? Please correct it.

    Yah, you got it wrong. Apache Geronimo started in August of 2003, and I began working for Gluecode in August of 2004. That's a whole year. At the time, my current employer didn't give a flying rat's patootie about Geronimo, nor my open source activities. I did community-based open source before Gluecode, and I will do community-based open source after Gluecode. And JBoss seems to be doing just fine.

    Gluecode really has nothing to do with this. It's my job. The line between my Apache duties and Gluecode duties is clear. I don't cross it, and I don't mix them. I can't imagine any reason why anyone would think that Gluecode has an interest in JDO and EJB3.
    Had JBoss patented its ideas, you could not have done that, but they did not patent!

    Vic, go look at what happened. Better still, go look at the code, and give us a full report, telling us about both sides.
    The GlueCode license is now no longer open source: http://www.gluecode.com/website/products/BinaryCodeLicense-JOE.pdf - you have to sign up just to get the binary and agree not to reverse-engineer. Now that's odd. People can still get the original current open source code from JBoss under a standard open source license, from the people that own the ideas but did not patent them.

    Yes, Gluecode sells a proprietary software product for which you can get the source if you are a customer of Gluecode. It really isn't that revolutionary an idea. As a matter of fact, they did that before they had JOE - they've done it for years, well before I got there, and in fact before Geronimo was founded as a project. Gluecode contributes a lot to many open source projects by allowing employees to contribute to OSS projects, including Geronimo, and is happy to have its employees be able to participate.
    This is typical of EJB, because EJB itself is a lie, so everyone is pretending it's OK and applying that level of ethics to their work.

    I'm no fan of EJB. I think it has its place, and that's not for everyone, in every situation. But there are people who do need it.
    JDO wants a piece of that O/R pie. (http://www.agiledata.org/essays/impedanceMismatch.html)

    Actually, I thought that JDO does object persistence, and O/R mapping is just one aspect, as the store to which you persist doesn't have to be an RDBMS. Of course, it can be, but that's not required. For example, they can use modern databases like ORDBMSs, such as Oracle with its SQL-99 support. Try that with EJB3.

    Peace, as Cam would say.

    - geir
  30. EJB

    Geir wrote:
    ... no fan of EJB. I think it has its place, and that's not for everyone, in every situation.

    I'll bite: what place, purpose, or use do you think EJB has?

    .V
  31. Gehr...

    Instead of Gavin with his arm around Craig, it should have been Dion Almaer saying "Don't worry Craig, I'll make sure TSS promotes JDO.". Or...better yet, Gehr Magnusson say "Don't worry, I'll defend you on the EC through GlueCode's, err...whoops..., I mean Apache's seat on the EC."Bill
    (P.S. Gluecode doesn't have ... any intention of implementing "EJayBernate", the POJO persistence sub-spec from JSR-220.)

    Which is why you don't want JSR-220 to succeed as Gluecode (err... I mean Geronimo...err...) really has no chance of implementing it or J2EE 1.5 for that matter. You're having a hard enough time getting the TCK work done. Believe me. I feel for you. We had to put a lot of resources into the TCK so we know how hard it is.

    But... at least JBoss is blunt about what we want. The smashmouth style has made us a few enemies, but I'd personally prefer this approach to political weaseling. We don't hide behind Apache or the veil of "the desires of the community". I'm so sick of hearing about your so-called pure intentions. Fess up, Gehr.

    Bill
  32. Gehr...

    Bill:
    But... at least JBoss is blunt about what we want. The smashmouth style has made us a few enemies, but I'd personally prefer this approach to political weaseling. We don't hide behind Apache or the veil of ..
    .. Joe Murray?

    He that is without sin among you, let him cast the first stone ..

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  33. Speaking of astroturfing...

    Bill:
    But... at least JBoss is blunt about what we want. The smashmouth style has made us a few enemies, but I'd personally prefer this approach to political weaseling. We don't hide behind Apache or the veil of ..
    .. Joe Murray? He that is without sin among you, let him cast the first stone .. Peace, Cameron Purdy, Tangosol, Inc. - Coherence: Shared Memories for J2EE Clusters

    LOL! This is coming from Mr. I'll-do-anything-to-get-attention...Anyways...

    Anybody remember "I Know"? Well, Mr. "I Know" just sent an email to ejb3-feedback. I replied, to which I got a response:

    Out of the office reply: wcheng at versant dot com

    Still, astroturfing on TSS is a bit different than astroturfing on the EC for Gluecode.
  34. Speaking of astroturfing...[ Go to top ]

    Still, astroturfing on TSS is a bit different than astroturfing on the EC for Gluecode.

    Bill, you're like a little wind-up bunny!

    First I'm voting for Gluecode, and now I'm astroturfing for Gluecode. What's next? Stealing candy from babies for Gluecode?

    How about this - why don't you explain why voting to allow JDO2 to continue in line with their chartered purpose is

    a) "for Gluecode", since Gluecode doesn't have a JDO product or one competing with it (or Hibernate) and
    b) different and out of character for me, since I voted positively for the original JDO2 JSR proposal (and argued to separate EJayBernate from JSR-220 to be an independent spec) before ever working for Gluecode?

    And if you can't, maybe just show what I've done that's out of character with previous history, now that I work for Gluecode.

    And if you can't, maybe you can find some other way to attack Gluecode, other than impugning my integrity? I've always thought of you as a nice guy that just worried about the technical stuff, so I'm a little surprised by this.

    And if you can't, maybe we can talk the motivations for JBoss' vote ;) ?

    -- geir
  35. Gehr...

    Worried, Bill?

    Oh, and in case you've forgotten: http://dictionary.reference.com/search?q=community
    "Sharing, participation, and fellowship."
  36. Gehr...

    Instead of Gavin with his arm around Craig, it should have been Dion Almaer saying "Don't worry Craig, I'll make sure TSS promotes JDO.". Or...better yet, Gehr Magnusson say "Don't worry, I'll defend you on the EC through GlueCode's, err...whoops..., I mean Apache's seat on the EC."Bill
    (P.S. Gluecode doesn't have ... any intention of implementing "EJayBernate", the POJO persistence sub-spec from JSR-220.)
    Which is why you don't want JSR-220 to succeed as Gluecode (err... I mean Geronimo...err...) really has no chance of implementing it or J2EE 1.5 for that matter. You're having a hard enough time getting the TCK work done. Believe me. I feel for you. We had to put a lot of resources into the TCK so we know how hard it is.

    Geronimo is focused on J2EE 1.4, and given that the people who developed the JBoss EJB implementation are now working on Geronimo, I am confident we'll get Geronimo done as well. ;) Yes, doing the TCK work is hard, but we're having a good time, and we invite others to come help if interested.

    As for J2EE 1.5, there is no J2EE 1.5 yet - work is still going on. I'm sure that when the spec does come out, Geronimo will do that too. But let's wait for it.

    As for JSR-220, my position about this clearly hasn't changed, and it's provable that it has nothing to do with my employer. It also has nothing to do with wanting or not wanting JSR-220 to succeed, as I want it to succeed. I actually believe that JSR-220 making EJBs easier for developers is a necessary thing for the J2EE platform. I just would like to see it be compatible with the existing platform to help preserve current investments.

    But... at least JBoss is blunt about what we want. The smashmouth style has made us a few enemies, but I'd personally prefer this approach to political weaseling. We don't hide behind Apache or the veil of "the desires of the community". I'm so sick of hearing about your so-called pure intentions. Fess up, Gehr. - Bill
    I'm starting to understand the problem. I don't think that you can grok an open source project that's independent of a commercial interest, or, from the other direction, being part of a commercial organization that has an interest in an open source project that it doesn't think it "owns".

    Believe it or not, I am able to be both an Apache member, and a Gluecode employee and keep them separate. Also remember that there's far more to Apache than Geronimo (like Apache Tomcat, Apache Axis, Apache Log4j... check your app server... they're there...), so the interests of the foundation, and the community, are what I represent. Ironically, they are your interests too, as JBoss is a participant in the Apache community.

    -- geir

    P.S. the deliberate misspelling of my name is a nice touch :)
  37. Gehr...[ Go to top ]

    Instead of Gavin with his arm around Craig, it should have been Dion Almaer saying "Don't worry Craig, I'll make sure TSS promotes JDO." Or... better yet, Gehr Magnusson saying "Don't worry, I'll defend you on the EC through GlueCode's, err... whoops..., I mean Apache's seat on the EC."
    Bill

    (P.S. Gluecode doesn't have ... any intention of implementing "EJayBernate", the POJO persistence sub-spec from JSR-220.)
  38. Instead of Gavin...[ Go to top ]

    Competition is good, Bill.

    Competition of implementations is good.

    Competition of standards is bad.

    Very bad.

    --
    Cedric
  39. Instead of Gavin...[ Go to top ]

    Competition is good, Bill.
    Competition of implementations is good.
    Competition of standards is bad.
    Very bad.
    -- Cedric

    True, but it's not really clear that we should be standardizing before we have specifications that we know work. Specs aren't standards.

    The claim is that JDO is a specification that wasn't solving the problem appropriately for EJB, and thus the EJB expert group started a new specification to solve the problem. (I know that you don't believe that JDO was a standard, or you wouldn't be advocating an alternative that competes with it.)

    I have every belief that the spec from the EJB expert group will be a good specification for EJB's needs. I'm not convinced (because I have zero information, as there are no production-tested implementations of the spec since it isn't even complete...) that it's something we need to make into a universal standard (yet).

    There's a difference between specification and standardization. Let's make specifications so we can make implementations and test them, and choose the best specification based on the experience.

    geir
  40. Instead of Gavin...[ Go to top ]

    <geir>
    True, but it's not really clear that we should be standardizing before we have specifications that we know work. Specs aren't standards.
    </geir>

    Ahmm... are you saying that all the specs (EJB, Servlet, Portlet, J2EE, J2SE, ...) which come from the JCP are not standards? At least not "wanna-be standards"? IMO, this is not correct. All specs from the JCP (including JDO + EJB) are standards! You can only deliver your spec together with a TCK and Reference Implementation. So,

    Spec (API) + TCK + Reference Impl. from JCP is a standard! This includes JDO and EJB!

    I can truly understand how all the JDO users and developers are angry because of the attitude of the JCP members against the JDO standard:

    1. The new EJB EB API and Hibernate are very similar to JDO. Hibernate is a great product, I really like it, and I think it would not be a problem at all to make the API just the same as the JDO API. I remember reading an article comparing the JDO and Hibernate APIs; they are very similar. How it is going to be implemented (reflection vs. code generation) is a different story. For "normal" developers (users of the APIs), what I need to know is "how" to use the APIs, no more, no less. The new EJB EB API also seems to be very similar to the JDO API, so why not use the already standardized JDO API as the foundation of the persistence layer?

    2. I know it is very hard for the EJB spec folks to do this, because then you have to admit that EJB EB was not a good solution. On the other side, all the vendors of EJB EB will also not be satisfied with the situation. So how can we make a "WIN - WIN" situation? In the end, the point of standards is to have a WIN - WIN situation for all, correct?

    3. Bring up a new persistence API! (I thought they - the JDO and EJB EB camps - all agree on this??). What does this mean?

    - to the JDO camp: work on the new spec and discontinue the JDO spec. Offer a migration path to all the users of JDO.

    - to the EJB camp: work on the new spec and discontinue the EJB EB (again, Entity Bean) spec. Offer a migration path to all the users of EJB EB.

    So, what's the problem?

    It would be very wise for Apache, ObjectWeb, JBoss - all of the Open Source community - to be the "example" and implementors of the new persistence APIs. Drop the JDO APIs, Hibernate APIs, and EJB EB APIs and work together on the new persistence APIs instead of fighting each other here in this forum...

    BTW, I prefer an API which is similar to Hibernate and JDO and which also supports JDK 1.4 (instead of using all of the annotations...). In the future I hope not to see this trouble again (at least to some degree). Each JCP spec and *standard* should offer an AndroMDA cartridge:

    AndroMDA Cartridge + API spec + TCK + Reference Implementation == JCP standard.

    So that we can migrate without too much trouble :-)

    Cheers,
    Lofi.
  41. Instead of Gavin...[ Go to top ]

    It would be very wise for Apache, ObjectWeb, JBoss - all of the Open Source community - to be the "example" and implementors of the new persistence APIs. Drop the JDO APIs, Hibernate APIs, and EJB EB APIs and work together on the new persistence APIs instead of fighting each other here in this forum...

    Open Source does it by duking it out in a friendly way: CodeHaus vs. sf.net vs. Apache.org vs. some dark horse....
    May the best one win.

    Check out Apache's entry if you have not yet; see what the people say on its mailing list.
    http://incubator.apache.org/ibatis/site/index.html

    (It has a .NET port and a jPetStore sample, and it's pure SQL, in externalized files so the DBA can have at it.)

    RespeKt,
    .V
  42. Instead of Gavin...[ Go to top ]

    Vic,

    yes, I know iBatis and also like it a lot, because it makes it easier to handle those SQL statements in a Java application :-) I also agree with you that every Java programmer who wants to work with relational databases *must* understand SQL. No more, no less. Full stop. SQL is designed to handle "set" operations in an optimal way, and relational databases are based on "set" theory; that's it.

    Let's analyse our problem once again. The problem of O/R mapping (and of OO databases) is an old story:

    1. We all "think" and "model" in "objects/classes" to be able to capture our subject/domain in the real world.
    2. But in the end we need to save our objects in relational databases.

    So here we have a "gap" between the object way and the relational way. How can we have a solution which bridges the gap?

    The answer could be:

    1. We need an O/R mapper: JDO, Hibernate, etc. You are unsatisfied, since you think all those O/R mappers are slow and not optimized - they have some burdens - compared with pure SQL. For many normal applications, I would say, an O/R mapper is actually fast enough.

    2. Use automatic transformation to turn all your OO operations from your UML diagram (which is OO) into your own SQL statements. An example of this can be seen in the AndroMDA Hibernate cartridge: it transforms/translates all the OCL queries into Hibernate queries automatically.

    So you, Vic, as an expert in SQL, could put your knowledge into a new AndroMDA iBATIS cartridge which translates all those UML OCL queries into *optimized* SQL query implementations, which can then be used by all OO developers.
    In the end you "pour" your SQL knowledge into the OO world. All involved roles will be happy:

    1. You will be happy, because you know that all those SQL commands are optimized, since you made the translations yourself.
    2. All OO developers and designers will be happy, since they can think and model in the OO world with no need to worry about performance, because the operations are translated automatically.

    So, life could be bright and beautiful, right? :-)

    Cheers,
    Lofi.
  43. AndroMDA iBATIS cartridge?[ Go to top ]

    There are a fair number of tools that let you map an entity/business class along with its properties to/from a user interface so you do not have to write most of the UI and CRUD operations. Some of them don't even generate any code! (Naked Objects, Visual Studio 2005 table wizards).

    The idea is tempting but masks a very real problem: the generated UI is data oriented, not task oriented. What's the use of something that is functional but not usable?
    By mapping the Domain Model or Domain Objects directly to a UI, you are creating screens that reflect the structure of your domain objects - not screens that reflect the way people want to use those domain objects.

    But it is fine for creating screens that are rarely used, like admin screens etc.

    In other words, it can create the 80% of screens that are used only 20% of the time. But it can't help with the 20% of screens that are used 80% of the time!

    Of course this doesn't mean that this is not useful. After all it still allows you to generate 80% of the screens! :)

    But how do you generate the UI and business domain operations that the customer will use 80% of the time?

    Only actual talent will do. No AndroMDA cartridge will help you there.

    Regards
    Rolf Tollerud
  44. AndroMDA iBATIS cartridge?[ Go to top ]

    Rolf,

    <rolf>
    But how do you generate the UI and business domain operations that the customer will use 80% of the time?
    Only actual talent will do. No AndroMDA cartridge will help you there.
    </rolf>

    I was not talking about the UI at all... I only talked about the domain model (the UI is a different story). If I model and design my app, I think in the OO world (a Person, a Student extends a Person, etc.). With AndroMDA you can, for example, generate 100% of the domain model into a Hibernate model (UML2Hibernate). You can see that this cartridge translates UML OCL into Hibernate QL, for example; see:
    http://www.andromda.org/andromda-translation-libraries/index.html

    Vic does not like Hibernate as an O/R mapping solution and likes to have pure SQL commands (iBATIS), therefore I asked him to build an AndroMDA cartridge (UML2iBATIS) which would have a query translation library (OCL2PureSQL). So we all could have optimized SQL everywhere. That's it - no UI generation and so on :-) I simply model my UML diagrams (for example the hierarchy he mentioned above) with OCL, and the AndroMDA iBATIS cartridge translates all the queries into *optimized* SQL queries...

    Cheers,
    Lofi.
  45. follow up question[ Go to top ]

    Lofi,

    Can your proposed AndroMDA "UML2iBATIS system" handle queries like,

    SELECT DISTINCT COM.NAME, COM.ADR1, COM.ZIP, COM.CITY, COM.ID
    FROM COM, CAT CAT0, CAT CAT1, CAT CAT2, CAT CAT3
    WHERE (COM.ID = CAT0.ID AND COM.ID = CAT1.ID AND COM.ID = CAT2.ID AND COM.ID = CAT3.ID)
    AND ((CAT0.CATEGORI = 'Campanj') AND (CAT1.CATEGORI = 'Concerntip') AND (CAT2.CATEGORI = 'Weekly') AND (CAT3.CATEGORI = 'Customer-B'))

    Dynamically created by the user, with arbitrary columns?

    Regards
    Rolf Tollerud
    (right from an app I am working on at the moment)
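    For what it's worth, a query of this shape can be assembled mechanically from the list of selected categories. A minimal sketch in plain Java (a hypothetical helper, not taken from iBATIS, AndroMDA, or any other tool discussed in this thread):

```java
import java.util.List;

public class CategoryQueryBuilder {

    // Builds the self-join query Rolf shows above: one CAT alias per
    // selected category, all joined back to COM by ID, with one
    // placeholder per category value.
    static String build(List<String> categories) {
        StringBuilder sql = new StringBuilder(
            "SELECT DISTINCT COM.NAME, COM.ADR1, COM.ZIP, COM.CITY, COM.ID FROM COM");
        for (int i = 0; i < categories.size(); i++) {
            sql.append(", CAT CAT").append(i);
        }
        sql.append(" WHERE ");
        for (int i = 0; i < categories.size(); i++) {
            if (i > 0) sql.append(" AND ");
            sql.append("COM.ID = CAT").append(i).append(".ID");
        }
        for (int i = 0; i < categories.size(); i++) {
            // The category values stay as '?' placeholders, to be bound
            // as PreparedStatement parameters rather than inlined.
            sql.append(" AND CAT").append(i).append(".CATEGORI = ?");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        System.out.println(build(List.of("Campanj", "Weekly")));
    }
}
```

    Binding the category values as PreparedStatement parameters (instead of concatenating the user's strings) keeps the dynamically built query safe from SQL injection.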
  46. follow up question[ Go to top ]

    It is possible to draw such a query, but it does not make much sense. Probably this is really about DDL, and you need to edit that with a text editor to tune it anyway (partitions, clusters, dataspaces, indexes). It is possible to reuse some stuff from the picture at development time, but diagrams are designed for modeling and communication and do not need to know about implementation details and tuning. Transformation can help avoid some double typing and can generate some of the boring stuff. But
    I think it is better to generate the boring stuff from the iBATIS files and the model (DB schema); the model can be designed using a diagram.
  47. follow up question[ Go to top ]

    Juozas,

    I don't understand a word of what you are saying! But I would be interested in a possible UML2iBATIS generator...
  48. follow up question[ Go to top ]

    I develop an E/R editor on weekends. If you want to help me develop tools for EJB4 persistence, then we can set up a project on SF.
  49. follow up question[ Go to top ]

    BTW "The E-R (entity-relationship) data model views the real world as a set of basic objects (entities) and relationships among these objects"
    http://www.cs.sfu.ca/CC/354/zaiane/material/notes/Chapter2/node1.html
  50. follow up question[ Go to top ]

    Hi Rolf,

    <rolf>
    Dynamicly created by the user With arbitrary colums?
    </rolf>

    dynamic SQL is difficult to translate from the UML diagram (+ OCL), since you need to have the domain model... The question is: why do you need "dynamically" created tables or columns? Are you implementing "Microsoft Access" (a database application) in your application? :-) If this is the case, you need to do different things. What I meant was "business-oriented" applications... But this is also an old story: some like "dynamically" creating everything (weak typing) and some like "generating" everything (compiler, strong typing)...

    Cheers,
    Lofi.
  51. Cultural differences???[ Go to top ]

    "The question is why do you need "dynamic" created tables or colums"

    But Lofi,

    We give the user the possibility to categorize any company (or any kind of info, really), and later to select a list of companies based on one or more categories. Surely that must be very common - how do you do it???

    To personalize the view by adding or detracting columns is a feature that exists in all MS applications, like cut & paste.

    Regards
    Rolf Tollerud
  52. Cultural differences???[ Go to top ]

    Sure that must be very common, how do you do it??? To personalize the view by adding or detracting columns is a feature that exists in all MS applications, like cut & paste.
    Regards
    Rolf Tollerud
    I do not think it is common; you can model it without dynamic DDL. Dynamic DDL is not secure, and a dynamic schema is unmaintainable. Do you tune it at runtime too?
  53. Cultural differences???[ Go to top ]

    Hi Rolf,

    maybe I don't understand you correctly - sorry for this - but as Juozas said, you don't need dynamic columns and tables for this example. This is just:

    Category (n) <-> (m) Company (n:m relationship)...

    A Category can contain many Companies and A Company can have many Categories...

    Cheers,
    Lofi.
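    The same n:m shape can be sketched in plain Java collections (illustrative names, no persistence framework involved). Finding companies in several categories is then just a set intersection - the same result the self-join SQL earlier in the thread computes:

```java
import java.util.*;

// A Company can belong to many Categories and vice versa; in the
// database this is a plain join table (e.g. COMPANY_CATEGORY), so the
// shared schema never changes at runtime.
public class CategoryModel {
    final Map<String, Set<String>> companiesByCategory = new HashMap<>();

    void categorize(String company, String category) {
        companiesByCategory
            .computeIfAbsent(category, k -> new TreeSet<>())
            .add(company);
    }

    // Intersection over several categories: only companies tagged with
    // every requested category survive.
    Set<String> findCompanies(List<String> categories) {
        Set<String> result = null;
        for (String c : categories) {
            Set<String> hits = companiesByCategory.getOrDefault(c, Set.of());
            if (result == null) result = new TreeSet<>(hits);
            else result.retainAll(hits);
        }
        return result == null ? Set.of() : result;
    }
}
```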
  54. Cultural differences???[ Go to top ]

    Lofi,

    "maybe I don't understand you correctly - sorry for this"

    Well, being sorry don't help much! :)

    That a Category can contain many Companies and a Company can have many Categories was obvious, I thought, in my naiveté.

    The question is: how do you handle the very, very common scenario where the user categorizes and searches companies by category when there are large files involved?

    Regards
    Rolf Tollerud
  55. Cultural differences???[ Go to top ]

    Rolf,

    <quote>
    Well, being sorry don't help much! :)
    </quote>

    OK, I'll try to help you, no worry :-)

    <quote>
    The question is, how do you handle the very, very common scenario where the user categorizes and searches companies by category when there are large files involved.
    </quote>

    Simply like this:

    1. Model your domain in UML (n:m relationship) as I said above -> Category - Company.
    2. Describe the OCL for the findCompaniesByCategory(category) method.
    3. Compile the UML model with AndroMDA to get working Java files. If I'm using the AndroMDA Hibernate cartridge, I'll get Hibernate QL. If I'm using Vic's AndroMDA iBATIS cartridge, I'll get an optimized SQL command (because Vic has implemented the translation lib UML2iBATIS).

    Very simple, right? Everyone will be happy, all the specs for the application are completely fulfilled... Nobody needs to discuss which is better, SQL or O/R - just use the correct cartridge :-)

    Cheers,
    Lofi.
  56. 1) That the compiler correctly translates the UML OCL into correct SQL I'll believe when I see it. In fact I will eat my hat without any sugar the day it happens!

    2) And how does the situation look for the user as he sits there wanting to do his work?

    Regards
    Rolf Tollerud
  57. He he Rolf,
    we don't even need AI for this part :-)

    <quote>
    1) That the compiler correctly translates the UML-OCL to correct SQL i believe when I see it. In fact I will eat my hat without any sugar the day it happens!
    </quote>

    Just take a look again (but this time more seriously ;-)) at this example: OCL -> Hibernate QL and OCL -> EJB QL:
    http://www.andromda.org/andromda-translation-libraries/index.html

    You can even take a look at how this translation is implemented, because AndroMDA is Open Source... Many thanks to all the AndroMDA developers for one of the greatest products I've seen so far...

    Rolf, it is a pleasure to chat with you again!

    Cheers,
    Lofi.
  58. by large files do you mean blobs?[ Go to top ]

    Lofi,"maybe I don't understand you correctly - sorry for this"Well, being sorry don't help much! :)

    That a Category can contain many Companies and a Company can have many Categories was obvious I thought, in my naiveté.

    The question is, how do you handle the very very common scenario when the user categories and search companies by categories when there are large files involved.RegardsRolf Tollerud

    If we're talking about storing text files in SQL Server 2000, you'd have to set up the table correctly, right? Here is an excerpt from an MSDN page http://www.microsoft.com/resources/documentation/sql/2000/all/reskit/en-us/part3/c1161.mspx:
    Full-text index and search operations

    You can index and search certain types of data stored in BLOB columns. When a database designer decides that a table will contain a BLOB column and the column will participate in a full-text index, the designer must create, in the same table, a separate character-based data column that will hold the file extension of the file in the corresponding BLOB field. During the full-text indexing operation, the full-text service looks at the extensions listed in the character-based column, applies the corresponding filter to interpret the binary data, and extracts the textual information needed for indexing and querying.

    When a field in a BLOB column contains documents with one of the following file extensions, the full-text search service uses a filter to interpret the binary data and extract the textual information.
    •.doc
    •.txt
    •.xls
    •.ppt
    •.htm

    The extracted text is indexed and becomes available for querying.

    In addition, after it has been full-text indexed, text data stored in BLOB columns can be queried using the CONTAINS or FREETEXT predicates. For example, this query searches the Description column in the Categories table for the phrase "bean curd". Description is an ntext column.

    Although that technique works, if you want better performance it might be better to pre-process the text files by stripping all non-noun words. Once you have that, you can save it in a column in the same table or a different table and index it. This technique is used in data mining quite a bit. When I worked on personalization features, some systems performed statistical analysis by counting the nouns and sorting them in sequence. This topic is pretty rich, and there are numerous algorithms for analyzing text. Some build a knowledge base and weight specific words, while others process a large amount of sample text to generate weights. The resulting weights are then used to pre-process the text, and the information is saved in summary tables to optimize queries.
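    As a toy illustration of that pre-processing step, here is a self-contained Java sketch that counts term frequencies and keeps the top terms. A small stop-word list stands in for real part-of-speech tagging - a deliberate simplification; actual systems use a tagger or weighted dictionaries as described above:

```java
import java.util.*;
import java.util.stream.*;

public class TermSummarizer {
    // Tiny stop-word list as a stand-in for "strip all non-noun words".
    static final Set<String> STOP =
        Set.of("the", "a", "an", "of", "and", "is", "in", "to", "for");

    // Returns terms ordered by descending frequency, ready to store in
    // a summary column for indexing and querying.
    static List<String> topTerms(String text, int limit) {
        Map<String, Long> counts = Arrays.stream(
                text.toLowerCase().split("\\W+"))
            .filter(w -> !w.isEmpty() && !STOP.contains(w))
            .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
        return counts.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .map(Map.Entry::getKey)
            .limit(limit)
            .collect(Collectors.toList());
    }
}
```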
  59. by large files do you mean blobs? No[ Go to top ]

    Peter,

    How is it that you and Lofi have such a big problem understanding simple everyday routines? I am not talking about BLOBs or free-text search here, but structured searches on predefined categories in a table.

    If I do a search by some categories, I want to be able to say with some authority, "this market segment has grown/shrunk so and so since February last year".

    You give the user a dialog where she can drill down through the category groups to find the right one(s). Then she should be able both to set and to search. In other words, KISS (for the user).

    By large files I mean only that they are too large to pre-load.

    But because of your background, the word "large" triggers something completely different in your mind.

    Regards
    Rolf Tollerud
  60. sorry i can't read minds[ Go to top ]

    well, I tried my best to understand what you meant. Apparently I was totally off, since you were talking about something totally different. By pre-load do you mean pre-load on the client side? Pre-loading on the database side wouldn't make much sense. If the dataset is multi-dimensional, one could approach it by generating summary tables or using OLAP. I'll use your sales example - it's a very common scenario used to sell OLAP. Say I have the following tables:

    product (
    sku
    category
    subcategory
    desc
    long_desc
    count
    unit_price
    cost
    manufacturer
    )

    inventory (
    date_purchased
    lot_size
    destination
    unit_price
    total
    sku
    region
    warehouse
    )

    sales (
    sku
    units_sold
    month
    store
    region
    price
    )

    Now say I want to create an OLAP cube with the following dimensions:
    region
    store
    sku
    profit_margin
    category
    subcategory
    manufacturer

    When I create the cube, I can map it several ways. One way is to join by sku, since all three tables have that column. To get the profit margin, I can average the cost of the product over that month and use the value for the calculation. In other words:
    sales.price - average cost = profit margin

    That is just an estimate and isn't really the actual profit margin. I'm not a CPA, so I don't know what is really reported in financial statements. Doing the real profit/loss calculation is rather difficult because I may buy 10 pallets of a product, but it's divided between 3 stores. Some stores will sell out within that month, while others may not; therefore the real actual profit/loss will be different. Anyway, back to the query problem.

    Once I have the cube created, running drill-down queries is pretty standard. When you run an MDX query, you get slices and navigate to that particular cell. In the case of sales reports using OLAP, ROLAP is way overkill. Multidimensional OLAP makes more sense and will generally take less than 5ms for a database with 1 million rows. For a database that is 10 million rows, the query time should be around 10-15ms. These are all guesstimates and will vary depending on the sparseness of your data. Hope that provides something useful.

    peter
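    Peter's margin estimate (sale price minus the average purchase cost over the period) can be sketched in plain numbers - a rough approximation, as he notes, not real P&L accounting:

```java
import java.util.List;

public class MarginEstimate {
    // profit margin ~= sales.price - average purchase cost for the month
    static double estimateMargin(double salePrice, List<Double> monthlyCosts) {
        double avgCost = monthlyCosts.stream()
            .mapToDouble(Double::doubleValue)
            .average()
            .orElse(0.0);
        return salePrice - avgCost;
    }

    public static void main(String[] args) {
        // Three purchase lots in the month at different unit costs.
        System.out.println(estimateMargin(9.99, List.of(6.0, 7.0, 8.0)));
    }
}
```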
  61. The Principle of "Always Surprise"[ Go to top ]

    You know what Peter, I think your answers are interesting, even if they have nothing to do with what we are discussing at the moment. I especially enjoyed the one about stripping out all non-noun words!

    I am convinced that if I gave you 10 marks, numbered from 1 to 10, and asked you to sort them, you would find some other way to sort them than by number - for example sorting the clean ones into one heap and the not-so-clean into another, or whatever; you just never ever do what one would expect. Instead of "The Principle of Least Surprise" you go by The Principle of "Always Surprise"!

    I am not talking about some exotic, far-out technology tonight, but an ordinary run-of-the-mill mainstream problem - namely the claim that we will never have any software that auto-generates SQL queries like this.
    That is what we are talking about: the advantage of handwritten SQL compared to ORM.

    Regards
    Rolf Tollerud
  62. uhh, sort is built into java and C#[ Go to top ]

    You're too funny. If someone asked me to sort 10 marks (whatever a mark is supposed to be) I'd yawn and go back to sleep. Actually, I'd politely point them to the API docs and then go back to sleep. I've implemented quicksort and bubble sort a couple of times over the last 4 years and it's trivial and boring. Most CS majors will have implemented them once in their undergraduate course work.

    By the way, none of these techniques are exotic, bizarre or strange. They are standard techniques for dealing with large datasets or large BLOBs. They're especially common in data warehousing applications that store stuff like terabytes of articles, papers, emails and books.

    I don't understand why you think generating efficient SQL is impossible with an ORM. The source code for numerous tools is available, so it's rather easy to look at the code and point out weaknesses/flaws. Vic provided concrete cases, but you still haven't. Feel free to prove me wrong and point out a specific case where an ORM fails to map correctly. Repeating what Vic has already said doesn't count.
  63. some progress made[ Go to top ]

    "Repeating what vic has already said doesn't count"

    Don't say that Peter; after all, I have made some small discoveries of my own in this little thread...

    Apparently Java users are not allowed to classify and search by category, nor are they allowed to change the columns in their views. That may seem insignificant, but if you multiply it by 100 (not-yet-discovered missing pieces) it could become a major factor.

    Regards
    Rolf Tollerud
  64. P.S.[ Go to top ]

    Further, a modern trend in contemporary UI (I almost said art) is to allow the (authorized) customer to add fields even in the detail view (they show up in the grid too, of course) - actually adding columns in the database! How do you ORM guys handle that?
  65. P.S.[ Go to top ]

    Further, a modern trend in contemporary UI (I almost said art) is to allow the (authorized) customer to add fields even in the detail view (shows up in the grid too of course). Actually add columns in the database! How do you ORM guys handle that?
    Forget it, the DBA will never let you do it. It is probably not a problem for a single-user DB (a.k.a. "Dynamic Data Binding"), but it is a very bad idea to add/remove columns or tables dynamically in a shared database (shared by many users and applications).
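    For completeness: the usual way to satisfy Rolf's requirement without runtime DDL is to keep the shared schema fixed and store customer-defined fields in an attribute table (entity_id, field_name, field_value) or, in code, a per-record map. A minimal sketch of the map approach, with hypothetical names:

```java
import java.util.*;

// Fixed schema plus an open-ended set of customer-defined fields; in
// SQL this maps to a three-column attribute table, so the shared
// schema never changes when a customer adds a field.
public class CompanyRecord {
    final String name;
    private final Map<String, String> customFields = new LinkedHashMap<>();

    CompanyRecord(String name) { this.name = name; }

    void setCustomField(String field, String value) {
        customFields.put(field, value);
    }

    Optional<String> customField(String field) {
        return Optional.ofNullable(customFields.get(field));
    }
}
```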
  66. Juozas,

    "Forget it, DBA will never let you do it"

    Why don't you point this out to the architects of SalesForce.com, the world's biggest online ASP provider, 8 times bigger than Siebel in that area and with a booming third-party market. As far as I know, the most successful Java app ever (built upon Resin).

    Because they do just that.

    Regards
    Rolf Tollerud
    I hope you will not pollute my database with your fields after I become a SalesForce.com user; I can promise not to add fields to your database too :)
  68. Juozas,"Forget it, DBA will never let you do it" Why don't you point out this for the architects of SalesForce.com, the worlds biggest online ASP provider, 8 times bigger than Siebel in that area and with a booming third party market. As far as I know, the most successful Java App ever (build upon Resin).Because they do just that.RegardsRolf Tollerud

    Resin is a J2EE server. So you are describing not only a successful Java App, but a successful J2EE App? Amazing!
    Juozas,

    "I hope you will not pollute my database with your fields after I become SalesForce.com user, I can promisse not to add fields to your database too :)"

    Thank you Juozas! Agreed.

    Henrique,
    "I have already done that"
    I knew there was hope for you Henrique, I have always defended you before strangers!

    Steve,

    It is only recently that Resin could be bought in a version with a full J2EE stack, and Salesforce has never used that version. They started out very early with Resin, then shifted to JRun, but after a while went back to Resin for the superior performance. EJB never.

    Regards
    Rolf Tollerud
    It is only recently that Resin could be bought in a version with a full J2EE stack, and Salesforce has never used that version. They started out very early with Resin, then shifted to JRun, but after a while went back to Resin for the superior performance. EJB never.

    So, to confirm - all your posts criticising J2EE were really only about Enterprise JavaBeans? So it's only EJBs that have been costing the world billions, causing recessions, etc.?
  71. In fact yes[ Go to top ]

    "So, to confirm - all your posts critising J2EE were really only about Enterprise Java Beans? So its only EJBs that have been costing the world billions, causing recessions etc.?"

    In fact yes - and the attitude that goes with it. If only people had been more like Vic - I don't mean exactly like, but more like - maybe lots of people would have been as rich as "the founders and employees of salesforce.com that have retained the vast majority of the company's ownership, leaving a relatively small amount of stock for investors".
    Salesforce.com (NYSE: CRM) soared on its first day of public trading Wednesday as the second-most hotly anticipated technology IPO of the year -- behind only Google's (Nasdaq: GOOG) still-to-come initial public offering -- gained 56 percent and valued the company at US$1.7 billion.

    Salesforce's new valuation is more than 400 times 2003 earnings and 10 times current-year revenues.

    Regards
    Rolf Tollerud
  72. you're gonna fall for that again[ Go to top ]

    After all the bogus valuations in the late 90's, are you going to believe that hype? I for one think that kind of valuation is a wild exaggeration. That's not to say Salesforce isn't a solid company - just that the valuation of what Salesforce is worth is questionable.
  73. In fact yes[ Go to top ]

    "So, to confirm - all your posts critising J2EE were really only about Enterprise Java Beans? So its only EJBs that have been costing the world billions, causing recessions etc.?"In fact yes - and the attitude that goes with it.

    Well, I think there is a consensus that Entity Beans suffered from a lack of flexibility and performance - hence the move towards a POJO persistence API. What I do find difficult to understand is the huge resistance to ORM, often by people who have never used it, or who haven't used modern high-performance products. Take a typical statement by the pro-SQL-ers, and substitute the word 'assembler' for 'SQL' and you get my attitude:

    "SQL(assembler) is portable because you can do the same thing in many products and the same features are available."

    "You will never be able to get high performance unless you deal with finely tuned SQL(assembler)"

    "If you move the logic away from SQL(assembler) you will lose performance."

    It's the same old arguments that have been going on for decades and decades. Every time there is an advance in IT that makes things more portable and easier for developers, there is huge resistance from supporters of existing technologies, insisting that their specialised knowledge is vital to maintain performance, and that everything can and should be expressed in terms of their way of working. Their knowledge is important in some critical situations, but that does not mean you need to write everything explicitly at the lower level, any more than all applications have to be written in assembler or C.

    I would remind those who keep bringing up the KISS principle that simplicity does not mean always writing something at the lowest level (SQL) or a library that uses it (JDBC). Abstractions like ORM are a very good way to hide complexity in many situations. Simplicity depends on your point of view, and for many of us, it's certainly not SQL or JDBC.
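    As an aside, the separation of concerns being argued for here can be sketched in a few lines of plain Java. This is only an illustration - the interface and class names below are invented, and the in-memory store merely stands in for whatever JDBC-, Hibernate- or JDO-backed implementation a real project would plug in:

```java
import java.util.HashMap;
import java.util.Map;

// Business logic sees only this interface -- no SQL, no JDBC, no record sets.
interface AccountStore {
    Account load(String id);
    void save(Account a);
}

class Account {
    final String id;
    long balanceCents;
    Account(String id, long balanceCents) { this.id = id; this.balanceCents = balanceCents; }
}

// One possible implementation; an ORM-backed or JDBC-backed one is a drop-in swap.
class InMemoryAccountStore implements AccountStore {
    private final Map<String, Account> rows = new HashMap<>();
    public Account load(String id) { return rows.get(id); }
    public void save(Account a) { rows.put(a.id, a); }
}

public class TransferDemo {
    // Pure domain logic: move money between accounts, persistence hidden behind the interface.
    static void transfer(AccountStore store, String from, String to, long cents) {
        Account src = store.load(from);
        Account dst = store.load(to);
        src.balanceCents -= cents;
        dst.balanceCents += cents;
        store.save(src);
        store.save(dst);
    }

    public static void main(String[] args) {
        AccountStore store = new InMemoryAccountStore();
        store.save(new Account("A", 10_000));
        store.save(new Account("B", 0));
        transfer(store, "A", "B", 2_500);
        System.out.println(store.load("A").balanceCents + " " + store.load("B").balanceCents);
        // prints: 7500 2500
    }
}
```

    The point is that transfer() reads as pure business logic: swapping the persistence mechanism means swapping the AccountStore implementation, not rewriting the domain code.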
  74. Re: In fact yes[ Go to top ]

    Abstractions like ORM are a very good way to hide complexity in many situations. Simplicity depends on your point of view, and for many of us, it's certainly not SQL or JDBC.

    Well put Steve!!! Well put! I couldn't have said it better myself. Thank you for providing some sane thought to this discussion.

    Long live higher levels of abstraction!!
  75. Re: In fact yes[ Go to top ]

    <quote>
    Well put Steve!!! Well put! I couldn't have said it better myself. Thank you for providing some sane thought to this discussion.
    Long live higher levels of abstraction!!
    </quote>

    If we were talking about a HIGHER LEVEL OF ABSTRACTION, why do we always talk about JDO, Hibernate, iBatis, JDBC? If we were talking about a HIGHER LEVEL OF ABSTRACTION, we should only talk about:
    - "BUSINESS OBJECT" or "ENTITY" and
    - "WORKFLOW" or "SYSTEM OBJECT" or "SERVICES"

    Who cares about those implementation details (Hibernate, JDO, etc.)? Wake up folks, go towards MDA!

    Cheers,
    Lofi.
  76. Re: In fact yes[ Go to top ]

    If we were talking about a HIGHER LEVEL OF ABSTRACTION, we should only talk about "BUSINESS OBJECT" or "ENTITY" and "WORKFLOW" or "SYSTEM OBJECT" or "SERVICES". Who cares about those implementation details (Hibernate, JDO, etc.)?

    We talk about Hibernate, JDO etc. because these systems were specifically designed to hide implementation details. They allow the developer to access and modify plain old Java objects, without embedded persistence logic in most situations. This might not necessarily be a 'higher' level of abstraction, but it's clean and allows separation of concerns.
  77. Re: In fact yes[ Go to top ]

    MDA, WORKFLOW, RULE engines and such make sense for very large projects only (it takes a lot of time to set up and learn). "Abstraction" on TSS forums is "transparence"; as I understand it, it is used as a workaround for broken drivers and invalid models.
  78. Re: In fact yes[ Go to top ]

    "Abstraction" on TSS forums is "transparence"; as I understand it, it is used as a workaround for broken drivers and invalid models.

    No! It is about having the code in the appropriate place. Transparence means that I can deal with data in terms of objects without having to explicitly state how I retrieve or store that data. I like transparence because I like to keep different concerns of my code separate. When I am writing code that deals with (for example) accounts and payments, I don't want to have to include the terms 'record set' or 'field', or have embedded JDBC calls.

    Of course, you can deal with all these things in terms of pure SQL and stored procedures if you wish to. There is nothing wrong with that. But I choose not to (I have issues with SQL portability). I want to deal with it in Java, and I want to express my logic in pure Java wherever possible. This is what transparence allows me to do.

    This is not a workaround, and nothing to do with broken drivers or invalid code.
  79. Re: In fact yes[ Go to top ]

    As I understand it, one of the posters is an MDA advocate; "Abstraction" has nothing to do with OOP or Java in that camp (Java is ASM from the MDA point of view too).
  80. Re: In fact yes[ Go to top ]

    Who cares about those implementation details (Hibernate, JDO, etc.)? Wake up folks, go towards MDA!

    I agree with you Lofi. And in a perfect world we would already be firmly planted in the MDA world.

    But, while MDA continues to mature, we have excellent specs such as JDO that act as baby steps up the ladder to successively higher levels of abstraction.

    Walk before you run and all that...... :-)
  81. ORM in large projects[ Go to top ]

    Oh and by the way, just as a note to the "YOU MUST USE OPTIMIZED HAND CODED SQL IN LARGE PROJECTS" crowd...I have, as have many software architects and developers, used and recommended ORM products in many "large" projects. Being professionals we are usually not at liberty to reveal the companies and/or the projects. Suffice to say, companies like Solarmetric, Versant, etc. will give you customer testimonials if you would like them.

    Anyone using an OO language like Java with a Relational data store and has a non-trivial business object model is writing some ORM type code anyway. Why not let a tried and tested product do that work for you!? Most of the top notch JDO implementations out there produce highly optimized SQL that is modifiable by the developer if need be. The best of both worlds!

    To follow up on Steve's assembler analogy...Let's look at a completely different field for a second: that of video games. This is a field where performance is paramount. At one time all video games were written in assembler. Then when languages like C, C++ etc. started to gain traction and good, performant compilers, developers moved to these higher-level languages and dropped down into assembler only if absolutely required. Now you would be hard pressed to find any hand-written assembler in any modern fast-action video game.

    I feel that using a good ORM tool is similar in nature to the above. For all developers that build applications with a strong object model, I highly suggest that you at least try a top-notch JDO implementation or Hibernate. You will probably walk away wondering how you ever wrote apps the old way! :-)
  82. ORM in large projects[ Go to top ]

    I have, as have many software architects and developers, used and recommended ORM products in many "large" projects. Being professionals we are usually not at liberty to reveal the companies and/or the projects.

    Bullshit!

    .V
  83. ORM in large projects[ Go to top ]

    I have, as have many software architects and developers, used and recommended ORM products in many "large" projects. Being professionals we are usually not at liberty to reveal the companies and/or the projects.

    Bullshit!

    .V

    I realise now that it is not just time to stop participating in individual threads, but to give up on theserverside entirely for a while. If posts like this are considered appropriate, what is the point?
  84. ORM in large projects[ Go to top ]

    This reminds me of another discussion: "TheServerSide Calls for Real World J2EE Project Stories". Success stories with J2EE Java servers and EJB in large projects do not exist, just as success stories with ORM in large projects do not exist. It is more or less the same thing, as J2EE developers use EJB (intended as a replacement for Corba) as a persistence mechanism.

    There has not come forth any evidence that a high-load, mission-critical system written with EJBs exists anywhere on the planet. Sites like eBay, Salesforce and some bank systems do not use EJB at all.

    Floyd Marinescu can decide this matter. Poor Floyd was forced by political pressure to say that he had received a lot of good stories. Why don't you admit the real situation, Floyd?

    Regards
    Rolf Tollerud
  85. ever hear of NDA[ Go to top ]

    I'm guessing you've never worked for a large international corporation that requires all the developers to sign NDAs and promise never to expose any information. You can troll all you want; cases exist in numerous places, but no one is going to tell you and risk losing their job. If you want to know first hand, learn the technology, become proficient at it and move to NY, Boston, LA, SF, London, or Tokyo. You'll find out pretty quickly after a few interviews how much data and transactions these companies are pushing every second/hour/day/week/month/year.
  86. ever hear of NDA[ Go to top ]

    I'm guessing you've never worked for a large international corporation that requires all the developers to sign NDAs and promise never to expose any information.

    Well put Peter. Thank You.
  87. ever hear of NDA[ Go to top ]

    In this thread there was general consensus that SQL is key. Then Steve Zara and JJ said SQL is "bad" and we should do OQL.
    You'll find out pretty quickly how much data and transactions these companies are pushing every second/hour/day/week/month/year.
    I have no doubt about VLDB and large loads, I work on them myself from time to time.
    Ex: 82,000 tpc:
    http://tpc.org/results/individual_results/RackSaver/RackSaverQuatreX-64ES030708.pdf

    I am questioning that EJB/EQL could do that. I know of cases where EJB/EQL could not scale, and the best thing was to have them removed and use SQL + caching DAO.
    You said you know, third-hand, of a case where EJB could do VLDB. Maybe they have a large DB and EJBs, but not the same application; I don't know.

    What we can see for free is the Javapolis slides, where Rod Johnson again repeated on his slide:
    "EJB does not have any of its many promised benefits"
    and also
    "It's worse for persistence than almost anything else". You can just google it or go to JL.

    I am for scientific development, meaning that it can be reproduced or verified what works and how effective it is.

    Ex: These types of applications have a lot of reference user sites listed, and they are SQL centric.
    http://www.opensourcecms.com/index.php?option=com_wrapper&Itemid=139
    It'd be better to duplicate a success than risk a failed project, I think.

    I know 1st hand of J2EE SQL-centric VLDB success projects!
    If anyone wants to do Steve Zara designs, go ahead, no skin off my back. If you decide to do SQL E/R type DAO, even in Java... I get zero benefits. What's wrong with a debate?


    .V
  88. clarification[ Go to top ]

    I am questioning that EJB/EQL could do that. I know of cases where EJB/EQL could not scale, and the best thing was to have them removed and use SQL + caching DAO.

    I hope I didn't give the impression these deployments are using EQL. A lot of what I know and have stated is either from first hand or from talking to friends who build these systems. My perspective is most likely skewed because I'm seeing it from a pre-trade compliance perspective. If you're looking at it from a pure TPS perspective it's very different. In fact I would say they are completely different problems and require completely different approaches.

    I am familiar with using Sql to run compliance, though once you see the process written down in detail, it becomes obvious that a pure Sql-based approach will be difficult to scale. That's not to say it can't scale, just that scaling vertically using big mainframes is a very hard approach to sell. Rather than talk abstractly, I'll use an example. Say I stick with ANSI Sql to calculate a weight rule. If I use Oracle 8i, it doesn't have temp tables, right? In order to calculate the weights of the positions in a given account, I have to query the positions and sort by any number of columns like GISC (sector, industry group, industry, subindustry), security type (bond, mm, stock), issuer, rating (s&p, moodies, fitch), state, exchange and country, to name a few.

    This means I have to get sets of sets and run the sum() function on those sets. Each set can then contribute to additional aggregates. For example, there are rules like:

    1. account cannot exceed 10% in sector X
    2. account cannot exceed 15% in industry X

    Since GISC categories are hierarchical, any sum() of a child branch contributes to the sum of the parent. Now, if I'm doing 10-15 aggregate calculations for a small account with 25 positions, it's no big deal. If I get a block order of 2K in one message, it still might not be a big deal if I queue it up, sort and process it by account. One of the many challenges is that all those shifts in positions contribute to a firm's exposure to any given issuer. If there are firm-wide rules and the database has 50 million positions, there's no way it can be run in real time.

    At that point, one has to go with an event driven approach and incrementally estimate the potential exposure. In the event a shift is significant like 3-4x the delta, the system may want to do a partial calculation for a spot check. Under normal circumstances, the system cannot automatically initiate a firm wide compliance check. A person must manually run it and there's usually a protocol for doing this. From a compliance perspective, what Cameron says is right on. The total size of the database really isn't as important as the type of process it's running.

    If I'm just building a tree using cartesian joins, it's no big deal. But if I'm calculating all the aggregate combinations for 50 million positions, I have a huge problem. It's going to take hours or days for a clean run. Hopefully this provides a context for my ramblings.
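    A rough sketch of the weight-rule aggregation described above, in plain Java with invented names and a hard-coded account standing in for the position table (a real system would of course pull positions from the database and handle far more dimensions):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class WeightRuleDemo {
    static class Position {
        final String sector, industry;
        final double marketValue;
        Position(String sector, String industry, double marketValue) {
            this.sector = sector; this.industry = industry; this.marketValue = marketValue;
        }
    }

    // Weight of each group (sector, industry, ...) as a fraction of the
    // account's total market value -- the sum-of-sets step described above.
    static Map<String, Double> weightsBy(List<Position> positions, Function<Position, String> key) {
        double total = 0;
        for (Position p : positions) total += p.marketValue;
        Map<String, Double> weights = new HashMap<>();
        for (Position p : positions) {
            weights.merge(key.apply(p), p.marketValue / total, Double::sum);
        }
        return weights;
    }

    public static void main(String[] args) {
        List<Position> account = List.of(
            new Position("Tech", "Software", 60.0),
            new Position("Tech", "Hardware", 20.0),
            new Position("Energy", "Oil", 20.0));

        // Because the hierarchy nests, industry weights roll up into the sector
        // weight: Software + Hardware together make up the Tech sector weight.
        Map<String, Double> bySector = weightsBy(account, p -> p.sector);

        // Rule 1: account cannot exceed 10% in any one sector -- fails here,
        // since Tech is 80% of this toy account.
        boolean rule1 = bySector.values().stream().allMatch(w -> w <= 0.10);
        System.out.println(bySector.get("Tech") + " rule1=" + rule1);
    }
}
```

    Every extra dimension (issuer, rating, country, ...) is just another call to weightsBy with a different key, which is why the combinations blow up so quickly at 50 million positions.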

    peter
  89. you call that clarification?[ Go to top ]

    Keeping to the subject is always difficult for Peter.

    But I thought that is what we have MOLAP, ROLAP, and HOLAP for, Peter; as you yourself explained earlier, what has that to do with ORM and OQL?

    Each post from you obscures the discussion. Remember: "A thinker makes clear what is obscure." - Hugh Kingsmill

    It was not the other way around!

    Regards
    Rolf Tollerud
  90. silly me[ Go to top ]

    I was attempting to address Vic's question of "do they use eql or sql." The answer to that is I don't know anyone using EQL with EJB. Most are using entity beans, which means SQL.
    But I though that is what we have MOLAP, ROLAP, and HOLAP for Peter, as you yourself explained earlier, what have that to do with ORM and OQL?

    I was attempting to point out that even if someone is a SQL master with powerful kung-fu, it isn't always appropriate or desirable to do it with SQL. My apologies; I keep forgetting others are not familiar with the field and haven't had to solve these classes of problems.

    enjoy.

    peter
  91. clarification[ Go to top ]

    Do you mean OQL and O/R mapping can help to calculate this stuff? Probably the tool you use is not an O/R mapper (a runtime object-to-relation mapping engine).
    BTW iBatis is ORM too, it is just implemented in the right way ("right" means Old Plain SQL and Old Plain JAVA data structures). Probably it is possible to drop the XML to make it better.
  92. clarification[ Go to top ]

    Do you mean OQL and O/R mapping can help to calculate this stuff?

    Not at all; ORM doesn't help you calculate this stuff. ORM just helps you get the data from the database into your app. For example, most systems make positions a single table, with a dozen or so supporting tables for additional information. A typical position table might look like this:

    position (
    account_id
    ticker
    cusip
    quantity
    purchase_price
    current_price
    cost
    security_type
    gisc_sector
    gisc_industry_group
    gisc_industry
    gisc_sub_industry
    exchange
    date_purchased
    country
    state
    isAssetBacked
    issuer_id
    s&p_long_rating
    s&p_short_rating
    moody_long_rating
    moody_short_rating
    )

    Since Rolf also asked, I'll answer both at once. Since the positions are constantly shifting for the most active accounts, calculating the aggregates is challenging.

    Using MOLAP assumes there is some mechanism that periodically updates analysis service, so there's some time lag. In some cases it's preferable to use MOLAP, like overnight batch reports. For realtime, it's not feasible for a couple of reasons: the time to refresh MOLAP is directly related to the number of dimensions in a cube, the number of rows participating in the cube and how frequently the positions shift. For a small-ish database of 1 million rows it's feasible to refresh periodically. Analysis service is a solid product and handles that well.

    Using ROLAP (relational OLAP) is an option, but it can be slow. ROLAP will do queries directly against the tables in Sql Server. The challenge here is you can't run ROLAP queries with locking; it would affect TPS (transactions per second) in Sql Server. So at best it would still be a snapshot. Using ROLAP in this case saves you the trouble of writing the sql, since the user has already defined the mapping between the cube and the tables. ROLAP is good at generating optimized queries. In fact, I would argue the queries ROLAP generates are just as good as hand-tuned sql.

    Using Sql is also an option. In Sql Server, you can do a select into a temp table and then run your select queries on that. But the catch is this: when should a transaction trigger the aggregation stored procedure? If I insert an order with 100 transactions, I should only insert 1 row and trigger the validation process. I should also be able to trigger validation for a specific transaction. Needless to say, doing all this in sql isn't practical to me. Others will disagree, and there are products out there claiming to do it all with sql.

    If I don't use stored procedures, and stick with ANSI sql, my application would have to contain those queries.

    What do I use? I use a mix of sql, OLAP, Java/C# and rule engines to calculate and perform compliance validation. There are plenty of papers out there on this topic and these are stock techniques. Say I have to get the positions into my app, but I have a mix of databases. Here an ORM can save me some time in development, because it's one table or one view in most cases. Using an ORM helps me smooth over the differences in naming convention: Position.SecurityType might be mapped to security_type, sectype, sec_type or sec_code. I always try to benchmark. If my stress tests show the ORM queries are 5-8% slower than hand-tuned sql under load, seriously, who cares. In many cases the query is less than 5% of the total validation time, so it's chicken scratch. On the other hand, if the queries are 25% of the total validation time, I'd hand-tune the sql.
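    To make that arithmetic concrete: if the query is 5% of total validation time and the ORM query is 8% slower than hand-tuned SQL, the end-to-end run slows by only 0.05 × 0.08 = 0.4%. A throwaway sketch with illustrative numbers only:

```java
public class OrmOverheadDemo {
    // Fraction by which the end-to-end run slows down when only the query
    // portion of the work gets slower.
    static double totalSlowdown(double queryShareOfTotal, double querySlowdown) {
        return queryShareOfTotal * querySlowdown;
    }

    public static void main(String[] args) {
        // Query is 5% of validation time, ORM query 8% slower than tuned SQL:
        System.out.printf("%.4f%n", totalSlowdown(0.05, 0.08)); // 0.0040 -> 0.4%
        // Query is 25% of validation time and 25% slower: now it costs 6.25%,
        // which is the territory where hand-tuning the SQL starts to pay off.
        System.out.printf("%.4f%n", totalSlowdown(0.25, 0.25)); // 0.0625
    }
}
```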

    These are techniques and tools; if they don't work for your case, then don't use them. It's that simple. In the case of compliance, I'm better off looking at how to improve the validation algorithm. Once I've exhausted all my other options for performance improvement, I'll look at my sql.

    Sorry for the off-topic post, but I don't agree with the "all this/that" mentality. I prefer to be flexible.

    peter
  93. clarification[ Go to top ]

    I know how ORM can work for web applications, to preload data for a 10/20 kb page, but ORM + OLAP sounds exotic. Probably you have a very good product; it must take a lot of time to load "a small-ish database of 1 million rows" using ORM (and probably it will throw OOME after the DB becomes large). I do not want to recommend some silly thing for you, but probably a materialized view can help to calculate and to cache this stuff. I am not sure it is a good idea for your use case, but some people use "replica" for "hot" reports without problems.
  94. clarification[ Go to top ]

    BTW, I am not an expert, but experts recommend bitmap indexes for this stuff too (they are slow to update, so you need some workaround with replication anyway):
    http://www.dba-oracle.com/oracle_tips_bitmapped_indexes.htm
  95. clarification[ Go to top ]

    Probably a materialized view can help to calculate and to cache this stuff.

    ORM is not used with OLAP at all. OLAP is already abstracted into dimensions. If you're talking about Analysis Service, the query language is MDX. If you're talking about Oracle, it's sql. OLAP products like Analysis Service integrate tightly with the database; in the case of Analysis Service, it integrates very closely with Sql Server. I have loaded 1 million rows on a system with 1gb of ram; it's not a big deal if you know how to do it.

    Yup, materialized views can help a lot. Before OLAP products were popular, many people were using materialized views to create aggregate views where a column uses sum() or other built-in functions. Again, these are standard techniques.

    It's expected that some customers will not want an OLAP server, so absolutely one would use materialized views to create summary tables. The benefit of OLAP is that it builds a bitmap index so that multi-dimensional queries use just the indexes to traverse to the cell. For simple things like sum(), OLAP will generally be 3-10x faster than raw sql. Where you win with OLAP is that it doesn't have to pre-calculate.

    Using materialized views to generate summary tables is a precalculated approach, which means every time you insert into a table, it has to index and update all the materialized views it affects. For high-transaction systems, that may not be acceptable. The number of materialized views a table participates in increases the performance cost of an insert. It can be done in a performant way, but I'd spend several months profiling and tuning to make sure it was within the performance requirements.

    If you want to learn more about this stuff, I would recommend www.olapreport.com. I'm done blabbing about this stuff; it's a dry topic. Hope that helps clarify any idiotic/confusing/obtuse comments in my previous posts.

    peter
  96. clarification[ Go to top ]

    I do not need OLAP at this time, but it is interesting to learn too. Do you know a good online reference for MDX stuff?
  97. msdn is decent[ Go to top ]

    There are some decent articles on msdn, but if you really want to get familiar with it, I would install it and play with it. MS and Essbase are supposed to support XMLA in a future release, but I haven't kept track of whether that is done or still in progress.

    peter
  98. msdn is decent[ Go to top ]

    It looks like MS reinvented iBatis in this XMLA stuff:
    <Command>
    <Statement>
    select [Measures].members on Columns from Sales
    </Statement>
    </Command>
  99. LOL, it's not much of a standard[ Go to top ]

    Actually, we wrote our own markup language for MDX, since XMLA didn't meet our needs. I wanted to be able to describe aggregates with metadata and then auto-generate the appropriate MDX query. Our approach was to describe the cubes in XML. That data is then used as a map to generate our queries. We could also hand-tune the MDX if needed. This was a design/deployment-time approach, not a runtime dynamic MDX query approach. Hopefully XMLA will mature and become useful. I'm totally biased here :)
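    A rough sketch of that kind of design (cube metadata in XML driving MDX generation). The element names, attributes and the MDX template here are invented for illustration - this is not the actual format the poster's product used:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class MdxGenDemo {
    // Hypothetical cube metadata describing one axis of a query.
    static final String META =
        "<cube name='Sales'><axis dimension='Measures' on='Columns'/></cube>";

    // Parse the metadata and emit an MDX query of the same shape as the
    // XMLA example earlier in the thread.
    static String generateMdx(String metaXml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(metaXml.getBytes(StandardCharsets.UTF_8)));
            Element cube = doc.getDocumentElement();
            Element axis = (Element) cube.getElementsByTagName("axis").item(0);
            return "select [" + axis.getAttribute("dimension") + "].members on "
                + axis.getAttribute("on") + " from " + cube.getAttribute("name");
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(generateMdx(META));
        // prints: select [Measures].members on Columns from Sales
    }
}
```

    Keeping the mapping declarative like this leaves room to hand-tune individual queries later, which matches the design/deployment-time approach described above.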
  100. LOL, it's not much of a standard[ Go to top ]

    I respect MS Research too: http://research.microsoft.com/research/detail.aspx?id=3 - it is more scientific than the JCP marketing show. Competition is a good thing, but I see it in marketing only.
  101. please, please[ Go to top ]

    Please Peter,

    I will do anything, repeat, anything, if you stop talking about those boring financial aggregates.

    You remind me of when I was in elementary school. At that time I was crazy about Edison, and I read and learned everything about him, and as a result every time we were given a school essay test, no matter the subject, I skewed it to be about Edison one way or another.

    But I was a little boy! You do not have that excuse.

    We all know by now what your expertise is; enough is enough. Heard of overselling? Put your CV on a website and set a link in your signature as Cameron does. We are so tired of hearing about this stuff, which usually has absolutely no relation whatsoever to what we are talking about.

    It is quite another matter that if I ever become the owner of a financial services company I know exactly what person to call:

    Vic Cekvenich

    Regards
    Rolf Tollerud
  102. ORM in large projects[ Go to top ]

    I have, as have many software architects and developers, used and recommended ORM products in many "large" projects. Being professionals we are usually not at liberty to reveal the companies and/or the projects.

    Bullshit!

    .V

    Obviously Vic, you don't know what it is like in the world of professional consulting. I will not attempt to educate you further on the matter.
  103. ORM in large projects[ Go to top ]

    Anyone using an OO language like Java with a Relational data store and has a non-trivial business object model is writing some ORM type code anyway.

    You are full of it JJ!
    According to this, which I linked already:
    http://jroller.com/page/yizhou/20050121#ultimate_java_persistence_design
    the majority IS NOT USING O/R; they are using JDBC and SQL.

    What other OO langs do you know? C#? PowerBuilder? Delphi? What? And what O/R tool would that be? EJB for C#?
    PowerBuilder was quite OO, and it used SQL.

    Taking a RowSet from JDBC and instantiating it into memory does not make it OO. I am quite ready to see if you know what OO is.
    But to limit the scope... let's see this auto-magic non-ANSI query optimization.
    A simple 2-way join? 8-way? Self join? Correlated? String % search?
    Show me the OQL that is optimizable based on platforms - I will do the work!!
    Is there? Not, methinks.


    .V
    <the only way I would feel bad here is if it turns out this dude has a learning disability >
  104. ORM in large projects[ Go to top ]

    Anyone using an OO language like Java with a Relational data store and has a non-trivial business object model is writing some ORM type code anyway.
    You are full of it JJ! ... Show me the OQL that is optimizable based on platforms, I will do the work!!
    Take it easy Vic, he said "ORM type code", not "ORM Tool".

    I guess you are aware you have crossed the line which divides normal posters from the trolls, aren't you? I wouldn't like you to take sides with Rolf the trollmaster. One is enough in this forum. Please.

    Regards,
    Henrique Steckleberg
  105. ORM in large projects[ Go to top ]

    he said "ORM type code", not "ORM Tool"
    ??

    I quoted what was said:
    "some O/R product will look at "OQL" and depending on if it's connected to MS SQL or Oracle for example, optimize and write an optimized Select sql command? .... this is EXACTLY what Kodo JDO from Solarmetric does."

    And I said... I will give him $ if he sends me the example OQL (EQL, JQL, HQL, whatever) so that I can prove he is a liar.
    I assure you I took no chances; I knew I would win (12 years of SQL it says on my resume, some of it Performance Tuning, and it's real, I was there ;-). For all of you running slow systems: you need pros, not fakers.
    I was not even in this thread; "HorseGlue, the code just sticks to us" brought me in.

    -We agree that SQL is important at "enterprise" scale or larger.
    -We agree that track record is important.

    In summary: There are pros and pretenders amongst us.
    .V
  106. ORM in large projects[ Go to top ]

    I hope this link can help clear up whether JDO can issue vendor specific SQL or not:
    http://www.solarmetric.com/jdo/Documentation/3.3.0beta2/docs/ref_guide_dbsetup_dbsupport.html

    Regards,
    Henrique Steckelberg
  107. ORM in large projects[ Go to top ]

    You can use adapters and bridges without ORM; it is not an ORM-specific feature, and some RDBMS products can emulate a competitor's product too.
  108. ORM in large projects[ Go to top ]

    I hope this link can help clear up whether JDO can issue vendor specific SQL...

    I can see how this could be confusing:
    "CreatePrimaryKeys: If false, then do not create database primary keys for identifiers. Defaults to true.
    MaxColumnNameLength: The maximum number of characters in a column name. Defaults to 128."
    Any E/R tool, like ERwin from the early '80s, does that.

    Nothing about optimizing query performance using vendor-native techniques. Which ones, I asked? Force an index? Others?
    This is the issue on the table, as called out:
    "some O/R product will look at "OQL" and depending on if it's connected to MS SQL or Oracle for example, optimize and write an optimized Select sql command? .... this is EXACTLY what Kodo JDO from Solarmetric does."
    All JJ/Zara (or you) have to do is give me the "OQL" so we can see how it's optimized and what native optimization techniques it used. Come on, make a $, plus you get to prove me wrong publicly. So far the clues are Oracle + Kodo + DB2 (has to be DB2?).

    I need one more part of the equation: the "OQL" query. PLEASE! Make like we are working in a scientific field and there are no magicians. We can all reproduce the steps.
    You can just change table names to A, B, C.


    .V
  109. ORM in large projects[ Go to top ]

    All JJ/Zara (or you) have to do is give me the "OQL" so we can see how it's optimized and what native optimization techniques it used. Come on, make a $, plus you get to prove me wrong publicly.
    It is a dirty game: OQLs support 10% of standard SQL functionality, so I do not think it needs any specific optimization. I want your money, but I am afraid I have no chance of taking it.
  110. ORM in large projects[ Go to top ]

    OQLs support .. % of "ANSI" SQL functionality
    An anti-climactic end.
    .V
  111. ORM in large projects[ Go to top ]

    OQLs support .. % of "ANSI" SQL functionality
    An anti-climactic end. .V
    Yes, OQL is a very silly thing; as I understand it, it is dead stuff from the DOT-COM ERA (it was possible to sell any crap), but some people cannot forget it for some reason.
  112. ORM in large projects[ Go to top ]

    OQLs support .. % of "ANSI" SQL functionality
    An anti-climactic end. .V
    Yes, OQL is a very silly thing; as I understand it, it is dead stuff from the DOT-COM ERA (it was possible to sell any crap), but some people cannot forget it for some reason.

    I think the posters were talking about JDOQL, not OQL (from the ODMG).

    JDOQL is alive and kicking in JDO 1.0, JDO 1.0.1, and the proposed JDO 2.0.
  113. ORM in large projects[ Go to top ]

    We use Kodo and Hibernate in "large" projects, and we will use EJB 3 without problems too. I am an ORM expert myself; I do not have any problems using it, and I know how not to use it.
    I see future JDO versions are going to use a relational language, but it must be possible to improve this stuff: just support SQL in the right way. It is trivial to reinvent some new relational language, but I see no reason to do it.
    "Relational language" means it operates on "relations" (a well-defined data structure) using the select, project, rename, cartesian product, union, and set-difference functions/operators. SQL is one relational language; it is probably possible to invent something better, but it is a de facto standard. I hope you know it well enough to use RDBMS products without portability problems. If you do not know it, then ask your DBA and stop this nonsense about "vendor-specific, evil SQL", because any QL is more vendor-specific.
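    The relational operators named in the post above can be sketched over in-memory data. This is a minimal, illustrative Java sketch, assuming relations modeled as lists of column-to-value maps; the names and representation are invented for illustration, not any product's API:

    ```java
    import java.util.*;
    import java.util.function.Predicate;
    import java.util.stream.*;

    // Minimal sketch of two relational-algebra operators (select and project)
    // over relations modeled as lists of column->value maps. Illustrative only.
    public class RelAlg {
        // select: keep the tuples matching a predicate (SQL's WHERE)
        static List<Map<String, Object>> select(List<Map<String, Object>> rel,
                Predicate<Map<String, Object>> p) {
            return rel.stream().filter(p).collect(Collectors.toList());
        }

        // project: keep only the named columns (SQL's SELECT list)
        static List<Map<String, Object>> project(List<Map<String, Object>> rel,
                Set<String> cols) {
            return rel.stream()
                    .map(t -> {
                        Map<String, Object> out = new LinkedHashMap<>();
                        cols.forEach(c -> out.put(c, t.get(c)));
                        return out;
                    })
                    .distinct()   // relations are sets: drop duplicate tuples
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<Map<String, Object>> orders = List.of(
                Map.of("id", 1, "status", "OPEN"),
                Map.of("id", 2, "status", "CLOSED"));
            // roughly: SELECT id FROM orders WHERE status = 'OPEN'
            List<Map<String, Object>> open =
                select(orders, t -> "OPEN".equals(t.get("status")));
            System.out.println(project(open, Set.of("id")));  // [{id=1}]
        }
    }
    ```

    The point of the sketch is only that "relational" refers to these operators over set-shaped data, independent of SQL's concrete syntax.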
  114. ORM in large projects[ Go to top ]

    I hope this link can help clear up whether JDO can issue vendor specific SQL...
    I can see how this could be confusing: [...] Nothing about optimizing query performance using vendor-native techniques. [...] This is the issue on the table, as called out:
    "some O/R product will look at "OQL" and depending on if it's connected to MS SQL or Oracle for example, optimize and write an optimized Select sql command? .... this is EXACTLY what Kodo JDO from Solarmetric does."
    All JJ/Zara (or you) have to do is give me the "OQL" so we can see how it's optimized. [...] We can all reproduce the steps. You can just change table names to A, B, C. .V

    Just wanted to jump into this very entertaining (if somewhat mean-spirited) thread as a successful implementer of JDO technology.

    Here's an interesting link from the Solarmetric site that indicates the advantages of JDOQL from both a flexibility standpoint and an optimization standpoint:

    http://solarmetric.com/Software/Documentation/3.2.4/docs/jdo_overview_sqlquery_conclusion.html

    I have to say, Vic, that you should really improve your posting style. Try to get less personal and discuss things from a technological standpoint. Quite frankly, your behaviour in this thread has been deplorable.
  115. to be vaccinated against FUD[ Go to top ]

    Vic didn't ask for marketing fluff but for a single practical sample, and so do I.

    No, I don't really. The question is so absurd that for people who believe such nonsense there is nothing else to do than to start from the beginning with
    Sir James George Frazer's The Golden Bough.

    Happy reading.

    Regards
    Rolf Tollerud
  116. to be vaccinated against FUD[ Go to top ]

    Vic didn't ask for marketing fluff but for a single practical sample, and so do I. [...] Happy reading. Regards, Rolf Tollerud

    There is no point in giving an example. You can see examples for yourself! All you have to do is download any of the JDO vendors' free trials, or download Hibernate, and look at the results. I could post an example, but I won't, as I know what will happen. Anything that is posted will be questioned as being either an unrealistic example (because it does not involve terabytes of data or thousands of tables), or because it does not deviate from ANSI SQL 'enough' to be called 'truly using vendor-specific SQL', or because it is not a 'sufficiently complex example of relational calculus', or does not 'use any more than a fraction of the power of SQL', and so on ad infinitum.

    This reminds me of the repeated 'Java is slow' trolls on forums such as Slashdot. No amount of showing how Java is practical and fast for almost all uses will ever win over the doubters. No amount of benchmarking or samples ever works to convince those who are expert in C or C++ that Java is practical because they can always come up with a specific case showing how raw pointers and finely tuned memory allocation are vital for performance, forgetting that their specialised situation is rare.

    However, as I promised earlier, I'm not going to get into any more technical arguments on this - there is so much name-calling and goalpost-moving that I can't see the point.
  117. to be vaccinated against FUD[ Go to top ]

    Vic didn't ask for marketing fluff but for a single practical sample, and so do I. [...] Rolf Tollerud
    There is no point in giving an example. You can see examples for yourself! [...] However, as I promised earlier, I'm not going to get into any more technical arguments on this.

    What a lot of the doubters don't understand is the 80-20 rule. In most applications, an ORM tool's performance will be good enough 80% of the time. For the other 20%, drop down into the SQL and go nuts! Fine-tune to your heart's content.

    In terms of productivity, it works for me and my team.
  118. ORM in large projects[ Go to top ]

    http://solarmetric.com/Software/Documentation/3.2.4/docs/jdo_overview_sqlquery_conclusion.html
    "SQL queries tie your application to the particulars of your current data model and database vendor." It means SQL support in your product is wrong.
  119. ORM in large projects[ Go to top ]

    http://solarmetric.com/Software/Documentation/3.2.4/docs/jdo_overview_sqlquery_conclusion.html
    "SQL queries tie your application to the particulars of your current data model and database vendor." It means SQL support in your product is wrong.

    Firstly, I'm assuming that you meant "using SQL in your product is wrong", not "SQL support in your product is wrong".

    Given that assumption, then that's absolutely not what that line in our docs is indicating. The line is there to highlight that, well, using a SQL query will tie your application to the particulars of your current data model and database vendor.

    A careful reader will note that we are highlighting two limitations here: first, that you will couple your application logic (your SQL queries) to the current mappings that you're using; and second, that you will (probably) end up coupling your application logic to your database vendor as well.

    Nothing in either of these two limitations is "wrong", per se. They are simply limitations compared to when using JDOQL for query execution. We feel it's useful to educate our clients about these types of limitations so that they can make a good decision about when to use SQL vs. when to use JDOQL.

    For example, if a developer wanted to find the top 10 open orders in an accounting system, they could do it in JDOQL like so:

    Query q = pm.newQuery("select from PurchaseOrder where status == ? order by grossPrice range 0 to 10");
    List orders = (List) q.execute(PurchaseOrder.STATUS_OPEN);

    The SQL route for MySQL might look like:

    Query q = pm.newQuery("javax.jdo.query.SQL", "SELECT * FROM PURCHASE_ORDERS WHERE STATUS = ? ORDER BY GROSS_PRICE LIMIT 0, 10");
    List orders = (List) q.execute(PurchaseOrder.STATUS_OPEN);

    This relies on a particular mapping as well as on MySQL's limit syntax. This isn't resilient to database schema changes or to database vendor changes.

    Clearly, this is not always a problem -- most of our clients have existing schemas that will *never* change, and are pretty much fixed on a particular database. However, for those that target multiple databases (for example, MySQL for most development work and Oracle for testing / QA / deployment), this can be inconvenient.
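    The two query variants above differ only in how the "top 10" is expressed per vendor. As a hypothetical sketch of what a pluggable dialect layer can do with that difference (these class and method names are invented for illustration; they are not Kodo's, or any ORM's, actual internals):

    ```java
    // Hypothetical sketch of a pluggable dialect layer rendering one logical
    // "top N" query as vendor-specific SQL. Names invented for illustration.
    public class DialectDemo {
        interface Dialect {
            String topN(String baseSelect, int n);
        }

        static class MySQLDialect implements Dialect {
            public String topN(String baseSelect, int n) {
                return baseSelect + " LIMIT 0, " + n;   // MySQL's LIMIT syntax
            }
        }

        static class OracleDialect implements Dialect {
            public String topN(String baseSelect, int n) {
                // classic Oracle has no LIMIT; wrap the query in a ROWNUM filter
                return "SELECT * FROM (" + baseSelect + ") WHERE ROWNUM <= " + n;
            }
        }

        public static void main(String[] args) {
            String q = "SELECT * FROM PURCHASE_ORDERS WHERE STATUS = ? ORDER BY GROSS_PRICE";
            System.out.println(new MySQLDialect().topN(q, 10));
            System.out.println(new OracleDialect().topN(q, 10));
        }
    }
    ```

    The JDOQL `range 0 to 10` above stays the same in both cases; only the dialect-selected SQL rendering changes.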

    Also, Juozas' original quotation of our docs only grabbed a piece of the text at the quoted URL. He ignored one pretty significant performance-related issue: Kodo does not cache direct SQL statement results (Kodo does almost no parsing of direct SQL statements, so it does not know which tables are involved in a given query). So, results from queries defined in JDOQL are potentially cacheable, whereas ones defined in SQL are not.

    -Patrick

    --
    Patrick Linskey
    Kodo JDO
    SolarMetric Inc.
  120. ORM in large projects[ Go to top ]

    ....for example, MySQL for most development work and Oracle for testing / QA / deployment.

    Woooow. It could happen, it could happen.
    I got the munchies.
    Hey man, let's party.

    .V
  121. ORM in large projects[ Go to top ]

    ....for example, MySQL for most development work and Oracle for testing / QA / deployment.
    Woooow. It could happen, it could happen. I got the munchies. Hey man, lets party. .V

    Yup, nothing but a troll.

    I'm done.
  122. ORM in large projects[ Go to top ]

    So, the results from queries defined in JDOQL are potentially cacheable, whereas ones defined in SQL are not.
    Do you think this is consistent, proper SQL support? Your product is good (it performs better than its analogs), and I use it myself, but SQL is very important too. It is very strange to hear this from experts and from database development tool vendors.
  123. In fact yes[ Go to top ]

    Everyone so far has agreed that knowing SQL is most important. So here is Steve Zara. I think he also said that he does JSF, not that there is anything wrong with that.
    So Venn diagrams are old, we don't need those any more. How about information theory? It's from the '40s; no need for that?
    Substitute the word 'assembler' for 'SQL' and you get my attitude:

    "SQL is portable because ...
    .... it's a long-standing standard that has been supported on multiple platforms for decades with a stable API.
    The successful P-langs are SQL-centric and not O/R-centric. For example, Friendster converted from J2EE to PHP. Most enterprise-level DBs have multiple platforms connecting to them, not just Java.
    Compare ANSI SQL with EQL v1.0, HQL, JQL, OQL, ... they are not portable to each other, don't work with Ruby, P-langs, etc., and tend to change year to year.
    ANSI SQL is portable across platforms! (except the old versions of MySQL). People that like portability would choose SQL, I think.
    You will never be able to .... "
    ..... write more than a simple design in O/R and have it go to production. The track record for EJB is horrible; best to wait and see on EQL3.

    Compare this with ANSI SQL, by Celko for example. Not one of the designs in his books is possible with O/R, and by being ANSI SQL they work on DB2, Oracle, Informix, PostgreSQL, MS SQL, Sybase, Firebird, etc. WITHOUT using any of the proprietary APIs that each of them has. I'd be glad to make a $ offer to do a single example from "SQL for Smarties" in _EQL.

    Sometimes there is an "advance" in IT that
    .... proves it works in theory under lab conditions, and adds risk in the real production world.

    By using E/R, you get a good architecture with separation of concerns. People that are good at persistence can do the SQL; for example, in iBATIS (an object-oriented component) the SQL strings are kept separate from the Java code.

    SQL is a high-level, modern, declarative language. In more traditional langs like Java, you have to code how you want something to happen.
    People that use only O/R, I have found, tend also to be the ones that pretend they know how to apply OO, in addition to not knowing basic SQL. Feel free to put OQL expert on your resume and then say that you know large-scale development.
    If you do small projects, then maybe you don't see the ocean from your rowboat in the harbor. You can choose to use marketing-oriented designs, and choose not to learn.

    THE ONLY THING THAT MATTERS IS TRACK RECORD; all else is theory. How many failed EJB projects are there?
    You can say, oh well, that was not O/R enough.
    O/R hurt J2EE because it became OK to lie.

    Now as to your attitude that assembly is as hard as SQL:
    "SELECT * from A, B where A.id = B.aid"
    What does this look like in assembly syntax? Please educate me as to the similarities.

    There are pretenders amongst us!!!
    I wish you luck on your projects,
    .V
  124. In fact yes[ Go to top ]

    write more than a simple design in O/R and have it go to production.

    I'm doing that now. I also know personally of projects with large O/R designs that have gone beyond production into successful deployment.
    The track record for EJB is horrible; best to wait and see on EQL3.

    EJB is not the start and end of O/R.
    SQL is a high-level, modern, declarative language.

    This is historically wrong. Both SQL and Java are based on old principles (most of the principles of relational calculus are from the '70s). I'm not saying that SQL is redundant, but it is not modern.
    Compare this with ANSI SQL, by Celko for example. Not one of the designs in his books is possible with O/R, and by being ANSI SQL they work on DB2, Oracle, Informix, PostgreSQL, MS SQL, Sybase, Firebird, etc. WITHOUT using any of the proprietary APIs that each of them has. I'd be glad to make a $ offer to do a single example from "SQL for Smarties" in _EQL.

    I am not after clever tricks in SQL. I am after routine (but optimised) storage and retrieval. What I do object to is the idea that I should purchase (or download) a full-featured database engine and then not take advantage of it by sticking to 'ANSI SQL'. That is a waste. The ORM I use contains detailed knowledge of how to make best use of the particular database engine I use, so as to generate optimised queries using proprietary features. I would love to see you give examples of common portable stored procedures for Oracle, PostgreSQL, SQL Server and MySQL!
    If you do small projects, then maybe you don't see the ocean from your rowboat in the harbor. You can choose to use marketing-oriented designs, and choose not to learn.

    I don't think you are in a position to judge whether or not I am doing small projects (I do both small and large projects). I don't think it is reasonable to state that someone who wants abstractions over the database is 'choosing not to learn' - I think that is a silly thing to say. Just because I am not personally in favour of your preferred technology does not mean I am ignorant.
    THE ONLY THING THAT MATTERS IS TRACK RECORD,
    I absolutely agree. That is why I am using O/R. It allows me to combine the good track record of SQL and relational databases with pure Java. I also know of the long history of problems that have arisen with non-portable legacy SQL. SQL dialects have a track record of incompatibility. I know this from painful personal experience.
  125. In fact yes[ Go to top ]

    I don't think you are in a position to judge whether or not I am doing small projects.

    OK, if you want to play "mine is larger than yours" ;-} maybe there is a URL of "an ORM project in production" so others can see whether they think it's large or small.

    You would not be one of them pretenders; you talk the talk.

    .V

    ps: my definition of VLDB is several terabytes with at least 10K concurrent users, but I agree, others may differ on VLDB. A 400-gig drive is now $400; by the time you RAID it... (and the projects I tech-lead in that range are 1up, associates and vantage)
  126. In fact yes[ Go to top ]

    OK, if you want to play "mine is larger than yours" ;-} maybe there is a URL of "an ORM project in production" so others can see whether they think it's large or small.

    You would not be one of them pretenders; you talk the talk.

    My project is an internal commercial development. I have no intention of releasing any information about it to you; it would be unprofessional if I did. If I did, we would simply get into a debate about what you think is 'large or small'! If you want to see evidence of large projects, just ask Hibernate users or JDO vendors. There are plenty of them, and plenty of large projects.
    ps: my definition of VLDB is several terabytes with at least 10K concurrent users, but I agree, others may differ on VLDB. A 400-gig drive is now $400; by the time you RAID it... (and the projects I tech-lead in that range are 1up, associates and vantage)

    Complexity has nothing to do with database size or concurrent users. Those are database performance issues, which are irrelevant to the question of whether or not an ORM is used.

    I'm sure you are a real expert in SQL. That is fine; it is an important skill. What I still find disconcerting is your continued insistence that yours is the only way, and that EJB project failures mean that all O/R is doomed to be an unscalable, low-performance mess.

    Anyway, this is going around in circles; can we agree to respect each other's views and skills and stop at this point?
  127. In fact yes[ Go to top ]

    this is going around in circles - can we agree to respect each others view and skills and stop at this point?

    My view is that:
        There are pretenders amongst us.

    I am not sure about anyone's skills or designs unless they show me a production track record that can be verified; none of this "it's internal" stuff. If it can't be measured, it's not scientific, end of story!

    You said you do O/R and JSF and that it's "internal commercial development." Internal and commercial, and it's a large development in production?
    And you want me to trust you?

    I don't trust you. Nothing personal; I trust no one! That's just me.
    What you say is the opposite of what I have been trained in and have experienced.
    If someone trusts your designs, they should enjoy the associated benefits.

    As to your comments about "portability", that is something Linda DeMichiel says. I take that as an insult.
    Her line is that some people who don't care about portability can use SQL.

    That implies that some don't like portability? Or that we are stupid in some way. Screw her.
    Celko's examples are all ANSI SQL in all of his books and cross-portable. He states that stored procedures are to be avoided because declaring what you want should be enough.

    Simple things should be simple and complex things should be possible.

    There are pretenders amongst us.

    .V
  128. In fact yes[ Go to top ]

    I can understand this "internal commercial development"; some of my stuff is not public too. Believe it or not, I design and develop MDA and workflow stuff. I am an ORM "supporter" too; this library, http://cglig.sf.net, was designed for ORM (it supports JDO too), and it is public. But that does not mean I am happy about the political garbage and lies around this technology.
  129. In fact yes[ Go to top ]

    I said I would finish this discussion, but...
    As to your comments about "portability", that is something Linda DeMichiel says. I take that as an insult.
    Her line is that some people who don't care about portability can use SQL.

    I honestly do not wish you any disrespect or insult. You seem to be someone who is highly skilled at what you do. If I have personally caused you to feel insulted, I apologise.

    However, I think there is a valid point: Database vendors often sell their products on the basis of proprietary extensions to improve performance. If you stick to portable SQL, you are using their products at less than top performance. You are using a minimal compatible subset of SQL. If you use a high quality ORM, that ORM product can include knowledge of the proprietary extensions of each database product, so it can be a high-performance superset, not a subset, of features.

    There is a useful analogy in terms of Java GUIs: the Eclipse SWT API uses native GUI features, so can usually get very good performance. However, the common set of GUI components between all the platforms that SWT runs on is small. What SWT does is to provide a superset of GUI features, and emulates what is missing on less-capable platforms. This way, the developer can code to a rich API, and ensure that all the widgets run at the best possible speed on all platforms - where possible they are run as native. Exactly the same applies to a high quality JDO or Hibernate system, in terms of the querying capabilities on different databases.
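    The SWT-style "superset with emulation" idea above can be sketched for persistence too. This is a hypothetical, illustrative Java sketch, assuming a case-insensitive search that is pushed down to a native operator (PostgreSQL's ILIKE) where the dialect has one and emulated in memory where it does not; none of these names are any real ORM's API:

    ```java
    import java.util.*;
    import java.util.stream.*;

    // Sketch of "superset with fallback": offer a case-insensitive search
    // everywhere, delegating to a native SQL operator when the dialect has
    // one and emulating in application code when it does not. Illustrative
    // only; these names are not any real ORM's API.
    public class FallbackDemo {
        interface Dialect {
            // null means "no native case-insensitive operator; emulate it"
            String caseInsensitiveLikeSql(String column, String pattern);
        }

        static class PostgresDialect implements Dialect {
            public String caseInsensitiveLikeSql(String column, String pattern) {
                return column + " ILIKE '" + pattern + "'";  // native operator
            }
        }

        static class PlainDialect implements Dialect {
            public String caseInsensitiveLikeSql(String column, String pattern) {
                return null;                                  // must emulate
            }
        }

        // Prints the pushed-down fragment when available; the demo result is
        // always produced by the in-memory path so both dialects agree.
        static List<String> search(Dialect d, List<String> rows, String prefix) {
            String sql = d.caseInsensitiveLikeSql("name", prefix + "%");
            if (sql != null) {
                System.out.println("would push down: " + sql);
            }
            return rows.stream()
                       .filter(r -> r.toLowerCase().startsWith(prefix.toLowerCase()))
                       .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<String> rows = List.of("Alpha", "beta", "ALmond");
            System.out.println(search(new PlainDialect(), rows, "al")); // [Alpha, ALmond]
        }
    }
    ```

    Either way the caller sees one feature; only where the work happens differs, which is the SWT analogy in miniature.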

    Now, assuming you realise that I do honestly respect your views, I shall end my contribution to this debate.
  130. In fact yes[ Go to top ]

    If you use a high quality ORM, that ORM product can include knowledge of the proprietary extensions of each database product, so it can be a high-performance superset


    Steve, do you believe that?

    You want me to think that some O/R product will look at "OQL" and, depending on whether it's connected to MS SQL or Oracle, for example, optimize and write a different Select sql command?

    $10 via PayPal to your email address if you can give a URL of SQL optimization based on DB vendor by an O/R tool.


    THERE ARE PRETENDERS AMONGST US.

    .V

    "... people seem to misinterpret complexity as sophistication, which is baffling — the incomprehensible should cause suspicion rather than admiration. Possibly this trend results from a mistaken belief that using a somewhat mysterious device confers an aura of power on the user."
  131. So?[ Go to top ]

    You want me to think that some O/R product will look at "E\OQL" and, depending on whether it's connected to MS SQL or Oracle, for example, optimize and write a different Select sql command?
    $75 via PayPal to your email address if you can give a URL of SQL select statement optimization based on DB vendor by an O/R tool.

    "SQL is for those that don't like portability."
    It's an insult to the real developers with production track records.
    It's like the TV show "The Pretender", where he pretends to be a doctor one episode and is designing a building the next....
    A lot of J2EE projects fail because it's OK to lie, and some people can't tell a pretender from a developer.

    THERE ARE PRETENDERS AMONGST US.
    .V
    (if there are posts here... email me at vin at friendVU.com so that I can pay up)
  132. In fact yes[ Go to top ]

    If you use a high quality ORM, that ORM product can include knowledge of the proprietary extensions of each database product, so it can be a high-performance superset
    Steve, do you believe that? You want me to think that some O/R product will look at "OQL" and depending on if it's connected to MS SQL or Oracle for example, optimize and write a different Select sql command?

    Uh, Vic, this is EXACTLY what Kodo JDO from Solarmetric does. It has pluggable dialects of SQL optimized for each supported database. I'm sure there are many other ORM vendors that do the same.
  133. In fact yes[ Go to top ]

    You want me to think that some O/R product will look at "OQL" and depending on if it's conected to MS SQL or Oracle for example,optimize and write a diferent Select sql command?
    .... this is EXACTLY what Kodo JDO from Solarmetric does.

    J. Java, aka S. Zara? It doesn't matter.
    You are quite the bullshitter, JJ; it would be fun to have a beer with you, of that I am sure. Give me your other opinions and crack me up. You should go apply for the EG; we would so vote for you, keep us entertained. Hey, you can stick a pole up you and go bar to bar; you can tell them you are a popsicle. I bet you could fool some of the people some of the time.

    I looked at the site docs, and they do not even say that! They make no claim that they optimize select query performance using proprietary non-ANSI SQL extensions based on the DB vendor you are connected to. The brown stuff is over your head.

    How much can you lie? Tell me another one, about computers and magic.

    THERE ARE ... you-know-whats amongst us. And I just ID'd me one.
    Where's the closest cold beer? A toast to you, "JJ".
    "Dyn-o-mite"
    .V
  134. In fact yes[ Go to top ]

    You want me to think that some O/R product will look at "OQL" and depending on if it's connected to MS SQL or Oracle for example, optimize and write a different Select sql command? .... this is EXACTLY what Kodo JDO from Solarmetric does.

    [..] Get your other opinions and crack me up. [...] I looked at the site docs, it does not even say that! They make no claim that they optimize select query performance using proprietary non-ANSI SQL extensions based on the DB vendor you are connected to. The brown stuff is over your head.

    So if it turns out that Kodo does do database-specific optimizations, what are you going to do to make up for your brash remarks? It seems you've just raised the ante quite a bit .. we're past "eating crow" now, that's for sure ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  135. In fact yes[ Go to top ]

    So if it turns out that Kodo does do database-specific optimizations, what are you going to do to make up for your remarks? It seems you've just raised the ante quite a bit .. we're past "eating crow" now, that's for sure

    I am all for it. I'll eat crow, post a picture of it on my blog, and make a donation to ... your favorite charity!

    Let's go, boys, bring it.
    Let's reproduce the example "JJ" claims?
    Does anyone want to suggest an OQL query that is optimizable?
    The Oracle 10 download is free. Can we get hands on DB2, or can it be another?
    The Kodo docs say they have a "SHOW SQL" based on OQL, or we can use a JDBC snooper.

    See, that would be scientific!
    Suggest an OQL query, please!!! Anyone game?

    .V
  136. In fact yes[ Go to top ]

    You want me to think that some O/R product will look at "OQL" and, depending on whether it's connected to MS SQL or Oracle for example, optimize and write a different SELECT SQL command? .... this is EXACTLY what Kodo JDO from Solarmetric does.
    J. Java /aka S. Zara? I don't matter. You are quite the bullshitter JJ, it'd be fun to have a beer with you; of that I am sure. Get your other opinions and crack me up. You should go apply for the EG, we would so vote for you; keep us entertained. Hey, can you stick a pole up you, and go bar to bar, you can tell them you are a popsicle. I bet you could fool some of the people some of the time. I looked at the site docs, it does not even say that! They make no claim that they optimize SELECT query performance using proprietary non-ANSI SQL extensions based on the DB vendor you are connected to. The brown stuff is over your head. How much can you lie? Tell me another one, about computers and magic. THERE ARE ... you know whats amongst us. And I just ID'd me one. Where's the closest cold beer? A toast to you "JJ". "Dyn-no-mite" .V

    Obviously YOU did not look hard enough. This is how the tool works. Plain and simple. I have used Kodo on both Oracle and DB2 databases. The SQL produced for each IS different. Disregard this knowledge at your own peril.

    Hopefully, Abe White or Patrick Linskey of Solarmetric (who frequent these forums) will chime in to rebut your incoherent accusations.

    Until then, I will cease responding to your poorly framed, insult-laden posts. Good day.
  137. In fact yes[ Go to top ]

    I have used Kodo on both Oracle and DB2 databases. The SQL produced for each IS different.
     I will cease responding to your poorly framed, insult-laden posts.

    Liar! Cut and run.
    You are telling me that a SELECT statement has been optimized using non-ANSI vendor-specific extensions based on OQL you write? You are so a liar!
    <g>
    You may not even have ever done "optimization using non-ANSI vendor-specific extensions" EVER.
    Like which ones do you know? Let me tell you: YOU HAVE NEVER OPTIMIZED USING NON-ANSI SQL.

    We are not talking about CREATE TABLE; E/R design tools will do things like timestamp or datetime based on the vendor, nothing to do with query optimization.

    "JJ", you loser. I can get Oracle 10, Does it have to be DB/2, can it be something else?
    Suggest an OQL querry 1st please!!!!!!
    PLEASE!!! I'll paypall you $10 to tell me the OQL querry.

    I predict he does not, because JJ/Steve Zara believes in the "aura of power" that I quoted from the KISS page.

    .V
    (feel free to take side bets people, this is for play/play, not your real work).
  138. In fact yes[ Go to top ]

    I have used Kodo on both Oracle and DB2 databases. The SQL produced for each IS different. I will cease responding to your poorly framed, insult-laden posts.
    Liar! Cut and run. You are telling me that a SELECT statement has been optimized using non-ANSI vendor-specific extensions based on OQL you write? You are so a liar!

    Someone brought this thread to my attention, and I can clear things up very easily: Kodo absolutely generates different SELECT statements for different databases. I believe all major ORM products do. Any product that tries to use ANSI SQL for everything won't get far in the market.

    There are many reasons why different databases require different SELECTs. Off the top of my head:
    - Different join abilities
    - Different join syntax
    - Different locking syntax
    - Different locking abilities
    - Different syntax for string manipulation functions
    - Different ways of getting a specific range of results
    - Different subselect aliasing rules

    Some of these are optimizations (efficient locking, result ranges, eager fetching via joins), and some are just differences. Point is, Vic, you should apologize to the people you accused of lying.
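    The "range of results" case is easy to see in isolation. As a minimal sketch (class and method names are invented here for illustration, not Kodo's actual code), the same logical query renders differently per database:

```java
// Hypothetical dialect sketch: the same logical "first N rows" query
// rendered for two databases' paging syntax. Not any vendor's real code.
public class RangeSqlSketch {
    static String selectRange(String dialect, String table, int limit) {
        String base = "SELECT * FROM " + table;
        if (dialect.equals("mysql")) {
            return base + " LIMIT " + limit;                // MySQL / PostgreSQL style
        } else if (dialect.equals("oracle")) {
            // classic Oracle: wrap and filter on the ROWNUM pseudocolumn
            return "SELECT * FROM (" + base + ") WHERE ROWNUM <= " + limit;
        }
        return base; // plain ANSI SQL had no portable row-limit syntax at the time
    }

    public static void main(String[] args) {
        System.out.println(selectRange("mysql", "person", 10));
        System.out.println(selectRange("oracle", "person", 10));
    }
}
```

    An ORM keeps one such dialect per supported database, which is exactly why the generated SELECTs differ.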

    Abe White
    Kodo JDO
    SolarMetric, Inc.
  139. You forget something[ Go to top ]

    Vic's large-scale applications work.

    Whether ORM large-scale applications work we don't know yet, because nobody has been able to present any! ;)

    So until then... I am sure you get the gist of my speculation.

    Regards
    Rolf Tollerud
  140. You forget something[ Go to top ]

    Vic's large-scale applications work.

    Funny how you choose to accept without evidence the things that you want to believe.
    Whether ORM large-scale applications work we don't know yet, because nobody has been able to present any!

    Funny how you choose to disbelieve despite the evidence the things that you do not want to believe. For example, would large-scale involve hosting 2500 dynamic web sites off a single cluster, or are you looking for a site that just serves tons of pages? Those both use ORM.

    Don't you (and Vic for that matter) realize how ridiculous you sound, claiming that it's impossible to use ORM successfully? It's rare to see any OLTP apps being built now that don't use ORM.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  141. No, you forget something[ Go to top ]

    "It's rare to see any OLTP apps being built now that don't use ORM"

    Have you forgotten the statistic?

    60% JDBC
    20% Homegrown Persistence Framework
    10% O/R Mapping Tools
    5% Java Data Objects (JDO)
    5% EJB CMP / BMP
    0% Service Data Objects (SDO)

    "do you realize how ridiculous you sound?"

    I never said that ORM is the only reason. J2EE applications' typically poor performance is caused by a number of factors that can be grouped together as over-architecture. However, O/R adds neither performance nor quality.

    Nowadays anything counts as a "Web Application". I myself like to reserve that denotation for really complex Windows-forms business apps transformed to the web; the ones that before were done in PowerBuilder, VB6, Delphi, C++, etc.

    Salesforce is a good example. I have mentioned Salesforce on more than one occasion before (you see, my crystal ball works just fine), but the TSS members casually dismissed it.

    I now come to my suggestion. The last time I mentioned Salesforce, you mentioned that you had been visiting and talked to them. So even if you may not know these people, at least you have an acquaintance relationship with them!

    So why don't you ask them if they are using ORM?

    Regards
    Rolf Tollerud
  142. No, you forget something[ Go to top ]

    So why don't you ask them if they are using ORM? Regards, Rolf Tollerud
    If you use Java and an RDBMS then you use ORM too; there are many right ways to map programming language data structures to relations, and if you know how to use it, a runtime transformation engine is not so bad either. "Small" result sets and a read-mostly application: it sounds like a web page, does it not? Performance, scalability and security depend more on the developer than on the framework, but an engine with good SQL support is very useful too (see iBatis).
  143. what's the obsession with salesforce?[ Go to top ]

    I have before on more than one occasion mentioned Salesforce (you see, my crystal ball works just fine), but the TSS members casually dismissed it. I now come to my suggestion. The last time I mentioned Salesforce, you mentioned that you had been visiting and talked to them. So even if you may not know these people, at least you have an acquaintance relationship with them! So why don't you ask them if they are using ORM?
    Regards, Rolf Tollerud

    I find it odd, you feel it is acceptable to speak for salesforce.com. Do you work for them?
  144. once a long time ago in another Galaxy,[ Go to top ]

    before Salesforce was even a thought, I and some others had the same idea. We already had a Windows CRM system that was selling well in Sweden. We built the new system in Microsoft Java and ASP, calling the Java objects directly from ASP JavaScript. The work was progressing well and we had funding coming. Then Sun sued MS and our investors and customers disappeared overnight. So you see I have no reason to love Sun.

    But I think we are even now..

    Best regards
    Rolf Tollerud
  145. No, you forget something[ Go to top ]

    "It's rare to see any OLTP apps being built now that don't use ORM"

    Have you forgotten the statistic?

    60% JDBC
    20% Homegrown Persistence Framework
    10% O/R Mapping Tools
    5% Java Data Objects (JDO)
    5% EJB CMP / BMP
    0% Service Data Objects (SDO)

    Well, Rolf, since you gave no evidence for the origin of that stat when you first quoted it, I'd expect that most readers here have been largely ignoring it.

    However. We on TSS have never been sticklers for citations, so let's assume that your numbers are roughly correct, and that they apply to Cameron's situation ("OLTP apps being built now").

    Clearly, most of the 40% non-JDBC access in your "statistic" are using some sort of ORM. (I say "most of" because presumably, some of the homegrown, JDO, and EJB crowds are using LDAP or XML or an ODB or a mainframe or something.)

    Of the 60% JDBC users in your study, I bet that a significant percentage of them are also using some sort of ORM. Maybe it's not a declaratively-configured tool, but the venerable DAO pattern is still ORM. The only folks *not* using ORM at all are those that are not sucking the data from their database into some sort of object model.

    I would expect that closer questioning of the folks in your study would reveal that a big chunk of the 60% were actually using an object model for representation of their data, and that they were doing some basic ORM in their own code. I would not be surprised if the majority of OLTP apps being built now were doing something recognizable as ORM in their app, regardless of whether it was being done by a third-party tool or hand-built SQL statements and mappings.
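    To make that concrete: the sketch below (table, column and class names are invented for illustration) does by hand what a declarative mapping tool automates, and it is still object-relational mapping.

```java
import java.util.Map;

// A hand-rolled DAO mapping step: copying a relational row into an object.
// In real code the row would come from a java.sql.ResultSet; a Map stands
// in for it here so the sketch stays self-contained.
public class PersonDao {
    public static class Person {
        public long id;
        public String name;
    }

    static Person fromRow(Map<String, Object> row) {
        Person p = new Person();
        p.id = ((Number) row.get("PERSON_ID")).longValue(); // column -> field
        p.name = (String) row.get("NAME");                  // column -> field
        return p;
    }
}
```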

    -Patrick

    --
    Patrick Linskey
    Kodo JDO
    SolarMetric Inc.
  146. You forget something[ Go to top ]

    Funny how you choose to disbelieve despite the evidence the things that you do not want to believe.
    Exactly! If one can't verify or reproduce... it could be a fantasy.
    So we wait on this OQL that can show me how it's optimized beyond ANSI SQL.

    1up.com you can go to and run. All the content is stored in SQL + iBatis.
    No one so far has doubted the scalability of SQL.
    ...hosting 2500 dynamic web sites off a single cluster <snip> use ORM.


    That to me at least does not look DB-intensive or VLDB. How many concurrent users, and what is the DB size?
    I'd believe you that OQL could work on small to medium scale, where the DB fits in RAM or similar.

    For VLDB you have to use SQL ... and people that know SQL; that is the difference. If you have people that know SQL, they can use OQL. SQL-centric works better; this is silly, kids.
    No one is doubting that the ultimate O/R, the EJB, can wreck projects, right?

    JDO2 can do SQL, EJB3 can do SQL, Spring does SQL-JDBC, iBatis. It's silly to learn limited XQL.
    You know who else has external SQL strings in an XML file like iBatis? Hibernate! Some people don't have a learning disability.
    You'll see, SQL wins.


    .V
  147. You forget something[ Go to top ]

    JDO2 can do SQL, EJB3 can do SQL, Spring does SQL-JDBC, iBatis. It's silly to learn limited XQL. You know who else has external SQL strings in an XML file like iBatis? Hibernate! Some people don't have a learning disability. You'll see, SQL wins. .V
    Most O/R mapping frameworks support SQL; this is the main ORM feature for me. Some frameworks support SQL in the right way, but some frameworks fail to support it. Some ORM experts think SQL is wrong, but probably that is just a lie to sell incomplete ORM products.
  148. You forget something[ Go to top ]

    Vic -
    So we wait on this OQL that can show me how it's optimized beyond ANSI SQL.

    I'm still confused about what OQL has to do with this. I haven't personally seen any ORM products in the Java space using OQL.

    Do you mean JDOQL? Which is basically SQL except that the column names are replaced with the object property names?
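    As a toy illustration of that point (this is not real JDOQL parsing; the filter, property and column names are invented), substituting mapped column names for property names is the essence of the translation:

```java
import java.util.Map;

// Toy translation of a JDOQL-style filter into a SQL WHERE clause:
// each object property name is replaced by its mapped column name.
public class FilterSketch {
    static String toWhereClause(String filter, Map<String, String> columns) {
        String sql = filter;
        for (Map.Entry<String, String> e : columns.entrySet()) {
            sql = sql.replace(e.getKey(), e.getValue()); // property -> column
        }
        return sql.replace("==", "="); // JDOQL equality -> SQL equality
    }

    public static void main(String[] args) {
        // prints: LAST_NAME = 'Smith'
        System.out.println(toWhereClause("lastName == 'Smith'",
                Map.of("lastName", "LAST_NAME")));
    }
}
```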
    1up.com you can go to and run. All the content is stored in SQL + iBatis. No one so far has doubted the scalability of SQL.

    Relational databases scale both poorly and expensively. I've never even heard of 1up.com.
    ...hosting 2500 dynamic web sites off a single cluster <snip> use ORM.

    That to me at least does not look DB-intensive or VLDB. How many concurrent users, and what is the DB size?

    Well, how would you know if it's DB intensive? Honestly, you're getting as bad as Rolf with these instant prognostications. It turns out that it had completely saturated a database (i.e. 100% load) with only 400 sites hosted. You can read some of the details here (there's a PDF link with more info). In short, it was extremely DB intensive, but it doesn't qualify as what I'd call a VLDB.

    The main value of ORM isn't performance, though. (They got performance from pre-loading and caching all the Os, which isn't what I'd call a "traditional" use of ORM.) The main value of ORM is that you can have many pieces of application code that work against the same "O" design, just like years ago a shared database was a huge leap forward (inefficient as they were) because you could have one schema shared by multiple applications, and the applications didn't have to deal directly with storage any more.
    I'd believe you that OQL could work on small to medium scale, where the DB fits in RAM or similar.

    What is OQL? You keep talking about OQL. I know what SQL is, and I know what EJBQL is, and I know what JDOQL is, but what is OQL? Is this a trick to keep your $10, since nobody uses this OQL thing? ;-)
    VLDB you have to use SQL ... and people that know SQL; that is the difference.

    Look, there are apps that are best built with straight SQL access -- agreed. What I'm suggesting is that more and more apps can be built without going down to that level. ORM is one of the tools making that possible.
    No one is doubting that the ultimate O/R, the EJB, can wreck projects, right?

    Container-Managed Entity EJB, previous to version 3, has basically no ORM capability.
    JDO2 can do SQL, EJB3 can do SQL, Spring does SQL-JDBC, iBatis. It's silly to learn limited XQL.

    What is XQL? First it was OQL, now XQL? Are you talking about XQuery or something? What does XQuery have to do with ORM? Do you know what ORM is? You really do have to find out what ORM is before you do any more damage to your reputation.
    You know who else has external SQL strings in an XML file like iBatis? Hibernate! Some people don't have a learning disability. You'll see, SQL wins.

    Hibernate is ORM. It sounds like you are saying that ORM wins, right?

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  149. You forget something[ Go to top ]

    I haven't personally seen any ORM products in the Java space using OQL. Do you mean JDOQL?
    Yes, the future standard JDO query looks very similar to SQL; probably it is relational too, but nobody has tried to model it (I hope future specifications will define the differences in some scientific way)
  150. You forget something[ Go to top ]

    First it was OQL, now XQL?
    JDOQL, EQL, HQL, OQL.....

    xQL is anything but SQL. Just replace your value for x.

    SQL is ANSI SQL, by vendors that claim ANSI SQL compliance.


    .V
  151. You forget something[ Go to top ]

    Hibernate is ORM. It sounds like you are saying that ORM wins, right?

    iBatis offers SQL mapping!
    New Hibernate offers SQL mapping! That is people w/ a learning ability.
    I do not know your definition of ORM.

    I am saying SQL is important.
    I am saying ORM will not do query performance and tuning (which is what Steve Zara is saying)

    .V
  152. You forget something[ Go to top ]

    I am saying ORM will not do query performance and tuning (which is what Steve Zara is saying) .V

    No! Please stop misquoting me (yet again). I have never mentioned tuning.

    I have said:
    The ORM I use contains detailed knowledge of how to make best use of the particular database engine I use so as to generate optimised queries using proprietary features

    By 'optimised' I meant that the queries used proprietary features, making them more optimised for a database than queries which did not use proprietary features. Whether you agree with me or not, that was the point I was making. I'm sorry if this was not clear.

    I also said:
    If you use a high quality ORM, that ORM product can include knowledge of the proprietary extensions of each database product, so it can be a high-performance superset, not a subset, of features.

    At no point have I mentioned 'performance tuning'. In fact, the word 'tuning' does not appear in any of my posts!

    I really don't think you are helping your case by repeatedly mis-representing what I have said.
  153. You forget something[ Go to top ]

    Vic -
    Hibernate is ORM. It sounds like you are saying that ORM wins, right?

    [..]

    I do not know your defintion of ORM. I am saying SQL is important. [..]

    I may not be using the official definition of ORM, but O is for Object, R is for Relational and M is for Mapping, which I have always assumed meant "I have a relational database, some Java objects, and I need some mechanism that knows how to convert between the two."

    Primarily, this means to me that I can populate the data members of a Java object from a database query, and I can modify a database from the changes that occur to that Java object.

    As a possible addition, an ORM "system" could automatically track those changes for me, allowing "units of work" or "transactions" to be collected into a chunk of work for the database.

    I assume that if it were an ORM "system", then I would absolutely need to be able to ask for an object by its persistent key (i.e. its primary key). I haven't spent as much time pondering the other querying capabilities; I assume that's what JDOQL (etc.) handles.
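    As a toy sketch of those two capabilities (all names here are invented; no real ORM is this simple), an identity map plus a dirty set is the core idea:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Identity map (lookup by persistent key) plus change tracking: on commit,
// only the keys in the dirty set would be flushed to the database.
public class UnitOfWorkSketch<K, V> {
    private final Map<K, V> identityMap = new HashMap<>();
    private final Set<K> dirty = new HashSet<>();

    public void register(K key, V obj) { identityMap.put(key, obj); }
    public V byKey(K key)              { return identityMap.get(key); }
    public void markDirty(K key)       { dirty.add(key); }
    public Set<K> keysToFlush()        { return dirty; }
}
```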

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  154. Vic does not belong to the infamous group of "Well-meaning Impractical Theoreticians" that for so long now has dominated the J2EE arena.

    He is in fact the world's most renowned expert on PRACTICAL deployments of large databases. The one you call to save projects. If you are of another opinion, please bring forth this person's track record so we can compare it with Vic's.

    Furthermore he is free from all commercial connections, unlike you. (The more that fall into the EJB and ORM trap - the more needs to buy Tangosol's Coherence to save at least something from the wreck).

    So I suggest it is wise to follow his advice.

    Me - I am also an admirer of style and taste. So I wonder how people can even think of replacing beautiful and elegant high-level SQL set operations with the ugly hack that is called ORM.

    Regards
    Rolf Tollerud
  155. "Well-meaning Impractical Theoreticians"
    Believe it or not, I work in some "large"
    R&D department, so you can blame me as an "Impractical Theoretician" too.
  156. Juozas,

    These people never work in R&D departments, and may never have been to university at all! Look, the ("all the buggy/dodgy JDBC drivers out there") person above does not even know that all ORM tools use JDBC as the underlying protocol.

    To understand you need profound knowledge of the human nature, and with Cameron’s permission I give it to you,

    1) When I was very young I thought that sex was the most powerful drive in the human being.

    2) But as I grew older I realized that I was wrong, obviously the besserwisser drive is more powerful!

    3) Unfortunately that was not correct either, but by now, I know the real truth. The compulsion or urge, to sit behind your desk and construct large systems is the most powerful drive in human beings.

    Regards
    Rolf Tollerud
  157. buggy / dodgy JDBC drivers[ Go to top ]

    These people never work in R&D departments, and may never have been to university at all! Look, the ("all the buggy/dodgy JDBC drivers out there") person above does not even know that all ORM tools use JDBC as the underlying protocol.

    Rolf,

    FYI, David is certainly intimately aware of the intricacies of JDO on top of JDBC -- he's one of the authors of Versant's relational JDO implementation. In my experience, nobody really understands the bugs and dodginess of the JDBC driver world until getting involved in a product that supports more than one variant thereof.

    -Patrick

    --
    Patrick Linskey
    Kodo JDO
    SolarMetric Inc.
  158. Juozas, These people never work in R&D departments, and may never have been to university at all! Look, the ("all the buggy/dodgy JDBC drivers out there") person above does not even know that all ORM tools use JDBC as the underlying protocol.

    Rolf, David Tinker is the primary author of an ORM JDO implementation (Versant's Open Access), and as such has very detailed knowledge of numerous JDBC drivers. As one of the authors of Kodo JDO, I have the same experience, and I fully agree with him. JDBC drivers on the whole are incredibly buggy. You are the one who is demonstrating his lack of practical experience, not David.
  159. Yes, some drivers are "broken" (probably it is true for most), but I do not think it is a good motivation for a query language and ORM specification; there are better reasons to use ORM.
  160. Are you saying that Versant is not using JDBC as the underlying protocol?

    BTW, I did a quick visit to their site. Assuming just for the sake of the argument that everything works as advertised,

    "What is the use of a tool that is more complicated than the problem you are trying to solve in the first place?"

    Wondering
    Rolf Tollerud
  161. Are you saying that Versant is not using JDBC as the underlying protocol? BTW, I did a quick visit to their site. Assuming just for the sake of the argument that everything works as advertised, "What is the use of a tool that is more complicated than the problem you are trying to solve in the first place?" Wondering, Rolf Tollerud

    We use complicated tools all the time to make us productive. For example, a C compiler or a Java VM is usually a far more complicated program than the software that they typically compile.

    What matters is not the complexity of the tool itself, but whether it makes solving problems simple.

    The beauty of a system like JDO is that it is very simple to use, and makes code simple. It hugely reduces the volume of code I have to write to get things done in contrast to using JDBC. I have never understood why the use of volumes of proprietary SQL or tedious JDBC is supposed to be 'simple'.
  162. Rolf, David Tinker is the primary author of an ORM JDO implementation (Versant's Open Access), and as such has very detailed knowledge of numerous JDBC drivers. As one of the authors of Kodo JDO, I have the same experience, and I fully agree with him. JDBC drivers on the whole are incredibly buggy. You are the one who is demonstrating his lack of practical experience, not David.

    I for one cannot prevent myself from smiling when I read this.

    My first comment, by far the least important, is that I don't share your point of view on the bugginess aspect ... has it occurred to you that it may be related to you using the technology incorrectly/not understanding it more than anything else? But I do not wish to debate this aspect.

    However the real point here is that how can you motivate a different technology set by implying that the first set is full of bugs? I usually stop listening to a vendor that denigrates others, it shows bad attitude.

    If you are so good at writing bug free code, why don't you implement a JDBC driver and sell it for a fortune as the others don't work?

    Regards,

    Cedric

    ps: Regardless of what others may say, I for one find the cartoons excellent and I'm always avidly waiting for the next one.
  163. Rolf, David Tinker is the primary author of an ORM JDO implementation (Versant's Open Access), and as such has very detailed knowledge of numerous JDBC drivers. As one of the authors of Kodo JDO, I have the same experience, and I fully agree with him. JDBC drivers on the whole are incredibly buggy. You are the one who is demonstrating his lack of practical experience, not David.
    I for one cannot prevent myself from smiling when I read this. My first comment, by far the least important, is that I don't share your point of view on the bugginess aspect ... has it occurred to you that it may be related to you using the technology incorrectly/not understanding it more than anything else? But I do not wish to debate this aspect. However the real point here is how can you motivate a different technology set by implying that the first set is full of bugs? I usually stop listening to a vendor that denigrates others, it shows bad attitude.

    I feel especially qualified to comment here as I have had to battle with JDBC bugs. The bugginess is not a point of view, or an attitude. It is established fact, usually documented honestly by the JDBC writers. All you have to do is go to their websites and look - it's all there, although sometimes it is simply labelled as a 'lack of a performance feature' or 'as yet unimplemented API call'.

    Implementors of specifications such as JDO have very deep understandings of the technologies. They need to in order to have any chance of selling their products.
  164. My first comment, by far the least important, is that I don't share your point of view on the bugginess aspect ... has it occurred to you that it may be related to you using the technology incorrectly/not understanding it more than anything else?

    I think that my experience has given me a good understanding of the JDBC specification. But even if I'm mistaken, it's pretty easy to spot bugs when you work with as many JDBC drivers as Kodo supports. When 14 drivers act in a certain way that seems to match the JDBC spec, and one driver is acting some different way that doesn't seem to match the spec, it's fairly obvious that that one driver has a bug. A lot of the time, though, it's even more obvious than that: the driver will throw a NotImplementedException or an internal NullPointerException or something that clearly indicates a problem.
    However the real point here is that how can you motivate a different technology set by implying that the first set is full of bugs? I usually stop listening to a vendor that denigrates others, it shows bad attitude.

    I made no implications that JDBC driver bugginess is an excuse to use any other technology. The only reason I brought it up at all was to point out that Rolf's veiled insult to David Tinker (and by extension, all those other supposedly-impractical people who like ORM) was totally off base. David knows what he's talking about when he says JDBC drivers have bugs, and I was just backing up his experiences with my own. As authors of ORM products, we have to work around those bugs every day.

    I also didn't denigrate anyone that I know of, though I'd be happy to apologize if I did so unwittingly. I certainly didn't single out any JDBC driver authors, because many of them do a fantastic job.
  165. Juozas, These people never work in R&D departments, and may never have been to university at all! Look, the ("all the buggy/dodgy JDBC drivers out there") person above does not even know that all ORM tools use JDBC as the underlying protocol. To understand you need profound knowledge of human nature, and with Cameron's permission I give it to you: 1) When I was very young I thought that sex was the most powerful drive in the human being. 2) But as I grew older I realized that I was wrong; obviously the besserwisser drive is more powerful! 3) Unfortunately that was not correct either, but by now, I know the real truth. The compulsion, or urge, to sit behind your desk and construct large systems is the most powerful drive in human beings. Regards, Rolf Tollerud

    Ok, you need to get a serious clue here. I've discovered bugs in jdbc drivers myself a few times over the last 4 years. I won't bother reciting the ones I am aware of and which version fixed those bugs. 10 minutes on google will return several hits on known bugs in various JDBC drivers.

    google is your friend.

    peter
  166. Rolf -
    To understand you need profound knowledge of the human nature, and with Cameron's permission I give it to you,

    1) When I was very young I thought that sex was the most powerful drive in the human being.

    2) But as I grew older I realized that I was wrong, obviously the besserwisser drive is more powerful!

    3) Unfortunately that was not correct either, but by now, I know the real truth. The compulsion or urge, to sit behind your desk and construct large systems is the most powerful drive in human beings.

    .. sounds like you're a lonely guy who can't keep his beer down. ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  167. Beer?

    Ah, I forgot! You are American..
    I suggest you look at the title again.

    In Europe we are drinking wine Cameron.
  168. In Europe we are drinking wine Cameron.

    Speak for yourself old chap. I'm British. When I last looked, we were part of Europe. We British are proud of our warm beer.
  169. When did the British join Europe?
    I never noticed..

    Regards
    Rolf Tollerud
  170. When did the British join Europe? I never noticed.. Regards, Rolf Tollerud

    Geographically, we have been part of Europe for tens of millions of years (although parts of Scotland only joined recently as a result of continental drift).

    We joined the EEC in the early 1970s. We may talk the same language (with significant differences - they spell things wrongly) as the Americans, but many of us are enthusiastic Europeans - hence the channel tunnel.
  171. So I wonder how people can even think of replacing beautiful and elegant high-level SQL set operations with the ugly hack that is called ORM. Regards, Rolf Tollerud
    Well, I've never been on an airplane, and I don't know the details of how they are built and work, but man, do I hate them! All that trouble and noise just to get someone to the other side of the world! We've been using ships like forever, what's the problem with them??? I can get on a beautiful and elegant ship and float all the way to Europe, so why do we need to be stuck in planes' small seats for hours anyway? No one needs that stupid hack called planes.

    Note: I am being ironic, in case someone mistakes this for a real post. :)

    Regards,
    Henrique Steckelberg
  172. god gave man feet[ Go to top ]

    So I wonder how people can even think of replacing beautiful and elegant high-level SQL set operations with the ugly hack that is called ORM. Regards, Rolf Tollerud
    Well, I've never been on an airplane, and I don't know the details of how they are built and work, but man, do I hate them! All that trouble and noise just to get someone to the other side of the world! We've been using ships like forever, what's the problem with them??? I can get on a beautiful and elegant ship and float all the way to Europe, so why do we need to be stuck in planes' small seats for hours anyway? No one needs that stupid hack called planes. Note: I am being ironic, in case someone mistakes this for a real post. :) Regards, Henrique Steckelberg

    Why would people think of replacing the human feet with expensive, impractical things like cars, ships and planes? The human body is much more efficient and doesn't cost millions like a ship or plane. Plus, by walking you improve your health, have a sunny disposition and keep the environment clean. Here in Denmark, I think the queen protests too much. She should stop whining about not having a car and realize god gave her gorgeous feet to woo men. Plus, she can always use her charms and elegance to get men to do the work for her.

    </sarcasm>

    peter
  173. Henrique, it is no use being ironic if you have to explain that you are ironic..

    I was just making fun of people who say that the whole underlying technology for their products is full of bugs. Talk about sawing off the branch you’re sitting on!

    For of course, in addition to all the possible JDBC bugs come all the bugs in a product with a million or so lines of code. :)

    Regards
    Rolf Tollerud
    Henrique, it is no use being ironic if you have to explain that you are ironic.. I was just making fun of people who say that the whole underlying technology for their products is full of bugs. Talk about sawing off the branch you’re sitting on! For of course, in addition to all the possible JDBC bugs come all the bugs in a product with a million or so lines of code. :) Regards, Rolf Tollerud
    Well, if you like making fun of people who are trying to sell you a car which drives smoothly over bumpy roads... ;)

    Regarding my note about being ironic: after reading post after post full of ORM misconceptions, I decided I should take no chances and make things exceedingly clear for everyone. Better to be on the safe side. :)

    Regards,
    Henrique Steckelberg
  175. In fact yes[ Go to top ]

    Any product that tries to use ANSI SQL for everything won't get far in the market. There are many reasons why different databases require different SELECTs. Off the top of my head:
    - Different join abilities
    - Different join syntax
    - Different locking syntax
    - Different locking abilities
    - Different syntax for string manipulation functions
    - Different ways of getting a specific range of results
    - Different subselect aliasing rules
    Some of these are optimizations (efficient locking, result ranges, eager fetching via joins), and some are just differences. Point is, Vic, you should apologize to the people you accused of lying. Abe White, Kodo JDO, SolarMetric, Inc.
    Yes, it is possible to find broken database implementations, but it is possible to find broken ORM implementations too.
  176. In fact yes[ Go to top ]

    Any product that tries to use ANSI SQL for everything won't get far in the market. There are many reasons why different databases require different SELECTs. Off the top of my head:
    - Different join abilities
    - Different join syntax
    - Different locking syntax
    - Different locking abilities
    - Different syntax for string manipulation functions
    - Different ways of getting a specific range of results
    - Different subselect aliasing rules
    Some of these are optimizations (efficient locking, result ranges, eager fetching via joins), and some are just differences. Point is, Vic, you should apologize to the people you accused of lying. Abe White, Kodo JDO, SolarMetric, Inc.
    Yes, it is possible to find broken database implementations, but it is possible to find broken ORM implementations too.

    Juozas I don't think that you understand Abe's point. He did not say that there are "broken" relational database products that rely on ANSI SQL.

    What he did say was that ORM products that rely solely on ANSI SQL will be less than satisfactory, since they will only support a subset of the database's functionality. He then went on to describe why different databases require ORMs to produce vendor-specific SQL. Quite simple really.
  177. In fact yes[ Go to top ]

    Juozas, I don't think that you understand Abe's point. He did not say that there are "broken" relational database products that rely on ANSI SQL. What he did say was that ORM products that rely solely on ANSI SQL will be less than satisfactory, since they will only support a subset of the database's functionality. He then went on to describe why different databases require ORMs to produce vendor-specific SQL. Quite simple really.
    Does it mean JDOQL is a superset of ANSI SQL and includes all vendor-specific SQL features? Can you provide an example query? Vic will pay $10 for you; I have tried to take his money, but I failed. I hope you know this stuff better than me.
  178. In fact yes[ Go to top ]

    Juozas, I don't think that you understand Abe's point. He did not say that there are "broken" relational database products that rely on ANSI SQL. What he did say was that ORM products that rely solely on ANSI SQL will be less than satisfactory, since they will only support a subset of the database's functionality. He then went on to describe why different databases require ORMs to produce vendor-specific SQL. Quite simple really.
    Does it mean JDOQL is a superset of ANSI SQL and includes all vendor-specific SQL features? Can you provide an example query? Vic will pay $10 for you; I have tried to take his money, but I failed. I hope you know this stuff better than me.

    No, JDOQL is not a superset of ANSI SQL. It does have some features that cannot be satisfied in an ANSI-compliant way, however.

    Let me give you an example:

    Many web applications have the requirement to display query results a page at a time. If a query returns, say, a million records, instead of returning the full amount to the presentation tier, the requirement could be to display, say, 40 records at a time. It might be postulated that after three or so pages most users would be done with the query results. Think of a Google search for an example of the above. Thus it is much more efficient to return results a page (whatever the page size is defined to be) at a time than to return the entire result set.

    Unfortunately, ANSI SQL does not define a standard method of doing this. Different databases support this requirement differently. JDOQL in JDO 2.0 does have direct support for paging however.

    In JDOQL the code for doing this would look something like this (from the Robin Roos article):

    Query q = pm.newQuery (AuctionItem.class);
    q.setRange (10, 15); // start from the 10th up to but excluding the 15th
    q.setOrdering ("title ascending");
    Collection results = (Collection) q.execute ();

    The generated SQL for Oracle would be something like this:
    SELECT * FROM AUCTIONITEM WHERE ROWNUM between 10 and 15


    While the generated SQL for MySQL would be something like this:
    SELECT * FROM AUCTIONITEM LIMIT 10, 5 ORDER BY TITLE

    And finally generated SQL for HSQLDB would be something like this:
    SELECT LIMIT 10 5 * FROM AUCTIONITEM ORDER BY TITLE Asc


    So to support the JDOQL for paging, the JDO implementation would generate different vendor-specific SQL, since there is no ANSI SQL standard way of doing paging that is supported by the various popular databases out there.

    I hope this helps.
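    To make the "page at a time" point concrete, here is a little sketch you can actually run, using Python and SQLite purely as a stand-in database (the table and data are invented for illustration; SQLite happens to use the LIMIT/OFFSET dialect):

```python
import sqlite3

# Illustration only: an invented AUCTIONITEM table in an in-memory SQLite DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auctionitem (title TEXT)")
conn.executemany("INSERT INTO auctionitem (title) VALUES (?)",
                 [("title%03d" % i,) for i in range(100)])

# The equivalent of q.setRange(10, 15): the database returns only rows
# 10..14 of the ordered result instead of shipping all 100 rows back.
page = [t for (t,) in conn.execute(
    "SELECT title FROM auctionitem ORDER BY title LIMIT 5 OFFSET 10")]
print(page)  # ['title010', 'title011', 'title012', 'title013', 'title014']
```

    Only the five requested rows ever cross the wire, which is the whole argument for pushing the range down to the database.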
  179. In fact yes[ Go to top ]

    Standard SQL way is to "FETCH" as many rows as you need from a cursor, but connectivity APIs like JDBC help to "scroll" or "setMaxRows" too.
  180. In fact yes[ Go to top ]

    Standard SQL way is to "FETCH" as many stuff as you need from cursor, but conectivity API's like JDBC helps to "scroll" or "setMaxRows" too.

    Juozas,

    Doing the range selection on the database side is typically much more efficient. By issuing a single non-cursored statement, the database can do the entire query in an atomic fashion, and does not need to maintain any additional resources after the query is executed.

    -Patrick

    --
    Patrick Linskey
    Kodo JDO
    SolarMetric Inc.
  181. In fact yes[ Go to top ]

    Standard SQL way is to "FETCH" as many stuff as you need from cursor, but conectivity API's like JDBC helps to "scroll" or "setMaxRows" too.
    Juozas, doing the range selection on the database side is typically much more efficient. By issuing a single non-cursored statement, the database can do the entire query in an atomic fashion, and does not need to maintain any additional resources after the query is executed. -Patrick -- Patrick Linskey, Kodo JDO, SolarMetric Inc.

    Exactly. Thanks Patrick.
  182. In fact yes[ Go to top ]

    Standard SQL way is to "FETCH" as many stuff as you need from cursor, but conectivity API's like JDBC helps to "scroll" or "setMaxRows" too.
    Juozas, doing the range selection on the database side is typically much more efficient. By issuing a single non-cursored statement, the database can do the entire query in an atomic fashion, and does not need to maintain any additional resources after the query is executed. -Patrick -- Patrick Linskey, Kodo JDO, SolarMetric Inc.
    Yes, it is true for web applications. "LIMIT" is a useful feature for web applications, but SQL experts decided this stuff does not belong in the query and moved it to embedded SQL and cursors (I am not sure, but it looks like SQL 99 allows it for interactive queries too).
  183. In fact no[ Go to top ]

    In JDOQL the code for doing this would look something like this (from the Robin Roos article):

    Query q = pm.newQuery (AuctionItem.class);
    q.setRange (10, 15);
    q.setOrdering ("title ascending");
    Collection results = (Collection) q.execute ();

    The generated SQL for Oracle would be something like this:

    SELECT * FROM AUCTIONITEM WHERE ROWNUM between 10 and 15

    this is sooo wrong, mate

    Make sure you have read the docs before posting examples like that, because tomorrow some newbie will copy&paste it and then spend the rest of the week wondering why it's not working
  184. In fact no[ Go to top ]

    In JDOQL the code for doing this would look something like this (from the Robin Roos article):
    Query q = pm.newQuery (AuctionItem.class);
    q.setRange (10, 15);
    q.setOrdering ("title ascending");
    Collection results = (Collection) q.execute ();
    The generated SQL for Oracle would be something like this:
    SELECT * FROM AUCTIONITEM WHERE ROWNUM between 10 and 15
    this is sooo wrong, mate. Make sure you have read the docs before posting examples like that, because tomorrow some newbie will copy&paste it and then spend the rest of the week wondering why it's not working.

    The SQL given was off the top of my head. It was meant to be illustrative, not ironclad cut-and-pastable code! LOL.

    Hence the words "something like." If the statement is "sooo wrong," as you put it, why not do the newbies out there a public service and post the correction?
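    For the record, a corrected Oracle version would be something like this (a sketch, untested here, using the thread's AUCTIONITEM example). The catch is that ROWNUM is assigned before ORDER BY is applied and before the outer predicate filters, so a bare ROWNUM BETWEEN 10 AND 15 matches nothing at all; the classic workaround nests the ordered query:

```sql
SELECT *
  FROM (SELECT a.*, ROWNUM rn
          FROM (SELECT * FROM AUCTIONITEM ORDER BY TITLE) a
         WHERE ROWNUM <= 15)
 WHERE rn > 10
```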
  185. In fact yes[ Go to top ]

    generated SQL for Oracle would be something like this:
    SELECT * FROM AUCTIONITEM WHERE ROWNUM between 10 and 15

    While the generated SQL for MySQL would be something like this:
    SELECT * FROM AUCTIONITEM LIMIT 10, 5 ORDER BY TITLE
    (I assume you wanted to say ORDER BY TITLE in Oracle as well, else you'd get random records. What about the MySQL syntax? LIMIT should be last, I think. HSQLDB is a memory-resident db and non-ANSI, so that'd not be my area.)

    So more theory?!
    Would LIMIT/OFFSET work like in here:
    http://p2p.wrox.com/topic.asp?TOPIC_ID=7064
    You could get LIMIT/OFFSET always.
    That is how this other OQL did it:
    http://jroller.org/page/CastorLive/20050209
    So this:

    SELECT * FROM AUCTIONITEM ORDER BY TITLE
    LIMIT 5 OFFSET 3

    would be the iBatis way. I do the above a lot, since I do my pagination on the SQL side, not in Java. In Java, iBatis caches the frequent "page" combinations up to softmap memory.

    Or
    select *
    from (
        select *,
                rank() over (order by id) rnk
        from emp
    )
    where rnk between 10 and 15

    (Of course, as per ANSI 99 you could have optimized it cross-platform by using SQL/CLI environment settings in a portable way, but even human SQL writers tend not to do that. FYI... CREATE PROCEDURE is an ANSI 99 and ANSI 2003 standard.)
    Query q = pm.newQuery (AuctionItem.class);
    q.setRange (10, 15); // start from the 10th up to but excluding the 15th
    q.setOrdering ("title ascending");
    Collection results = (Collection) q.execute ();

    Which, I wonder, would a DBA be able to help you with?
    SQL strings stored externally, like iBatis and Hibernate do, or your code above? (If Hibernate does external SQL strings, maybe EJB 3 will???)
    This I think is KISS:
    SELECT * FROM AUCTIONITEM ORDER BY TITLE
    LIMIT 5 OFFSET 3
    I have pgSQL 8 installed and it works in MySQL 5 also on linux. Do you want a screen shot?

    Was that your example, JJ/Zara? Optimized? There was no performance tuning done.

    It'd be easier if you used SQL even w/ JDO, instead of JDOQL... and more reusable learning.

    .V
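    And since the rank()-over query above is close to the ANSI way, here it is actually running (a quick sketch, not my production code: Python with in-memory SQLite 3.25+, which supports window functions; table and data invented), doing the same rows-11-to-15 range:

```python
import sqlite3

# Quick harness: invented data in an in-memory SQLite DB (3.25+ needed
# for window functions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auctionitem (title TEXT)")
conn.executemany("INSERT INTO auctionitem (title) VALUES (?)",
                 [("item%02d" % i,) for i in range(20)])

# ANSI-style pagination: number the ordered rows with a window function,
# then keep row numbers 11..15 (the same range as setRange(10, 15)).
rows = [t for (t,) in conn.execute("""
    SELECT title FROM (
        SELECT title, ROW_NUMBER() OVER (ORDER BY title) AS rn
        FROM auctionitem
    ) WHERE rn > 10 AND rn <= 15
""")]
print(rows)  # ['item10', 'item11', 'item12', 'item13', 'item14']
```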
  186. In fact yes[ Go to top ]

    Was that your example, JJ/Zara?

    Leave me out of it! Address your responses to the people whose posts you are replying to. This debate has gone on and on as I predicted, and I have no interest in pursuing it, as I can't see a way of getting agreement. Maybe more skilled debaters than me can deal with the constant goalpost-moving and denial of evidence, but it is beyond me!
  187. In fact yes[ Go to top ]

    Was that your example, JJ/Zara? Optimized? There was no performance and tuning done.
    It was all about ORM issuing vendor-specific SQL, then performance, now you demand tuning too? What's next? Reading/writing directly to DB proprietary files? :)

    Aren't you stretching people's meaning a little bit?

    Regards,
    Henrique Steckelberg
  188. In fact yes[ Go to top ]

    So this: SELECT * FROM AUCTIONITEM ORDER BY TITLE LIMIT 5 OFFSET 3 would be the iBatis way.

    Except that that SQL won't work on most databases. At all. Won't work in Oracle. Won't work in DB2. Won't work in SQLServer. Won't work in Sybase. I could go on...

    The original point of contention was whether Kodo changes its SQL SELECT statements to optimize for different databases. I pointed out that Kodo changes its SQL in several ways, and that some of these ways were optimizations:
    - In pessimistic transactions, locking the original SELECT where possible based on the abilities of the DB (some DBs can't lock a select with multiple tables, etc) to avoid trips back to the DB to lock individual records.
    - Using the join abilities of the DB (some DBs can't perform proper outer joins under some conditions, or have non-ANSI syntax for outer or cross joins) to eagerly fetch data.
    - Using the DB's native LIMIT/TOP/ROWNUM abilities to retrieve a result range, to avoid faking it with a much less efficient and/or resource-consuming cursor.

    These are all major optimizations that give you a lot more benefit than other fine-tuning steps you might undertake later with your DBA to get the most out of the final SQL.

    I have no idea what the point of your last response was supposed to be, but all it did was prove the point that the last bullet above is completely valid. Your post and the links you cited actually go out of their way to highlight how you need vendor-specific SQL to get a result range efficiently, and how ORM tools use this vendor-specific SQL.

    You can debate all you want about ORM in general, and I'll leave you to it. But there is no debate that ORM products use vendor-specific, non-ANSI SQL to optimize data retrieval. If you can't admit you were wrong and apologize to those you insulted, you will lose all credibility.
  189. In fact yes[ Go to top ]

    So this: SELECT * FROM AUCTIONITEM ORDER BY TITLE LIMIT 5 OFFSET 3 would be the iBatis way.
    Except that that SQL won't work on most databases. At all. Won't work in Oracle. Won't work in DB2. Won't work in SQLServer. Won't work in Sybase. I could go on... The original point of contention was whether Kodo changes its SQL SELECT statements to optimize for different databases.
    JDOQL won't work on most programming languages and ORM implementations at all, and as I understand it, JDO 2 is not a standard yet either. But I think Vic must pay the $10: cursor elimination using non-ANSI SQL is an optimization for web applications (workarounds and SQL 99 do not count).
    I hope iBatis will parse SQL to support "LIMIT" or fake "FETCH" at the end of the query. It will be a good political argument too :)
  190. Eating Crow....[ Go to top ]

    So who wants me to email them the $ for the education? (take a screen shot of the transaction and post it on your blog)
    Steve, JJ or/and KODO?
    Just ping me your email at vin at friendVU . com and it's yours!
    Won't work in Oracle. Won't work in DB2. Won't work in SQLServer. Won't work in Sybase.


    Point KODO!
    Kodo changes its SQL in several ways, and that some of these ways were optimizations:
    - In pessimistic transactions, locking the original SELECT where possible based on the abilities of the DB (some DBs can't lock a select with multiple tables, etc) to avoid trips back to the DB to lock individual records.

    I assume I can tell it not to lock on select?
    Using the DB's native LIMIT/TOP/ROWNUM abilities to retrieve a result range, to avoid faking it with a much less efficient and/or resource-consuming cursor.
    Yes, it seems. iBatis only does the Java side. Some newbie developers might get nipped.
    ...use vendor-specific, non-ANSI SQL to optimize data retrieval. ... Admit you were wrong and apologize to those you insulted

    Yes! Hmm:

    I apologize to all I insulted.

    Now... to see which one I like the best. JQL, HQL or EQL. They all do pagination DB-side like this?

    .V
  191. Paging in query languages[ Go to top ]

    JQL, HQL or EQL. They all do pagination DB-side like this?

    JDOQL (as of JDO2) and EJBQL (as of EJB3) definitely do. I imagine that HQL does as well.

    -Patrick

    --
    Patrick Linskey
    Kodo JDO
    SolarMetric Inc.
  192. Paging in query languages[ Go to top ]

    JQL, HQL or EQL. They all do pagination DB-side like this?
    JDOQL (as of JDO2) and EJBQL (as of EJB3) definitely do. I imagine that HQL does as well. -Patrick -- Patrick Linskey, Kodo JDO, SolarMetric Inc.
    HQL does it at the API level.
  193. Eating Crow....[ Go to top ]

    Admitting your mistake was an admirable thing to do, Vic. And I'm sure those who were involved appreciate the apology. As far as I'm concerned, though, you can keep your money. To answer your questions:
    In pessimistic transactions, locking the original SELECT where possible based on the abilities of the DB (some DBs can't lock a select with multiple tables, etc) to avoid trips back to the DB to lock individual records.
    I assume I can tell it not to lock on select?

    Of course. Kodo supports both optimistic locking (with multiple versioning strategies, and the ability to have multiple "lock groups" in an object) and pessimistic locking. There are APIs for controlling pessimistic locking at a fine-grained level.
    Now... to see which one I like the best. JQL, HQL or EQL. They all do pagination DB-side like this? .V

    I believe they all have pagination capabilities, though you have to look at JDOQL in JDO 2 and EJBQL in EJB 3, because prior versions didn't have pagination to the best of my knowledge.
  194. In fact yes[ Go to top ]

    Hi All
    So this: SELECT * FROM AUCTIONITEM ORDER BY TITLE LIMIT 5 OFFSET 3 would be the iBatis way.
    Except that that SQL won't work on most databases. At all. Won't work in Oracle. Won't work in DB2. Won't work in SQLServer. Won't work in Sybase.

    I find it amazing that anyone who has worked on more than one database could suggest that an ORM tool could issue the same SQL for all databases! Tuning the SQL to match the target is an absolute requirement, not an optional feature. There are so many basic SQL things that are non-portable:

    * Outer joins.
    * Subqueries (not on MySQL so must do outer join with distinct).
    * Order by (name or index, must columns be in select list?).
    * etc etc

    And then you get to all the buggy/dodgy JDBC drivers out there. Anyone tried to do BLOBs/CLOBs on Oracle 8? You have to use proprietary features of Oracle's JDBC driver.

    Cheers
    David
    Versant Open Access
  195. In fact yes[ Go to top ]

    Liar! Cut and run. You are telling me that a SELECT statement has been optimized using non-ANSI vendor-specific extensions based on OQL you write? You are so a liar!
    Kodo absolutely generates different SELECT statements for different databases

    And optimizes for performance? Not!

    The thread starter claimed that different databases had native performance improvements, words to that effect. By using Kodo, it would optimize for performance. Let me see it! He said DB/2 and Oracle,
    and I begged for an example OQL.

    I will more than apologize; I am not wrong.

    .V
    (and what's my upside? someone PayPal me $10 if it proves that it was a lie that it would create an optimized non-ANSI command)
  196. In fact yes[ Go to top ]

    Liar! Cut and run. You are telling me that a SELECT statement has been optimized using non-ANSI vendor-specific extensions based on OQL you write? You are so a liar!
    Kodo absolutely generates different SELECT statements for different databases
    And optimizes for performance? Not! The thread starter claimed that different databases had native performance improvements, words to that effect. By using Kodo, it would optimize for performance. Let me see it! He said DB/2 and Oracle, and I begged for an example OQL. I will more than apologize; I am not wrong. .V (and what's my upside? someone PayPal me $10 if it proves that it was a lie that it would create an optimized non-ANSI command)

    Vic, you do realize that Abe stated the following in his post:
    Some of these are optimizations (efficient locking, result ranges, eager fetching via joins), and some are just differences. Point is, Vic, you should apologize to the people you accused of lying.

    This is directly from the vendor of Kodo. Yet you maintain your stance. I guess you really are just a troll. Oh well, look away, nothing to see here.
  197. In fact yes[ Go to top ]

    - Different join abilities: non-ANSI SQL? Example? Like, Oracle ANSI SQL join is like this and DB/2 (or MS SQL) ANSI SQL join is like that.
    - Different join syntax: non-ANSI SQL? Example?

    So if I write in iBatis, or SQL-based:
    select * from A, B where a.id = b.aid
    this is less portable, and your code would do better for performance optimization?

    Great, sounds like it grows hair. Show me an OQL, and what DBs and what performance did it optimize, as opposed to me typing:
    select * from A, B where a.id = b.aid
    My guess is that this is not the query that you optimize; show me something you do. (and I will do the work, anyone can replicate). This magic is the big benefit of using a non-SQL ORM?

    I think what you are saying is that JDO works with flat files that don't have an ANSI SQL engine. For people that do OLTP on a flat file?


    .V
  198. Steve, we agreed that SQL was important and track record in this thread.

    You came into the thread and said SQL is not important.
    You added this:
    Database vendors often sell their products on the basis of proprietary extensions to improve performance. If you stick to portable SQL, you are using their products at less than top performance. You are using a minimal compatible subset of SQL. If you use a high quality ORM, that ORM product can include knowledge of the proprietary extensions of each database product, so it can be a high-performance superset, not a subset, of features.

    I then said:
    it be fun to have a beer with you; of that I am sure. Get your other opinions and crack me up. You should go apply for the EG, we would so vote for you, keep us entertined. Hey, can you stik a pole up you, and go bar to bar, you can tell them you are a pop-sickle. I bet you could fool some of the people some of the time.

    As it turns out... JDO people said they do syntax for DBs that fall short, but *NO performance query tuning.*
    SELECT * FROM AUCTIONITEM ORDER BY TITLE
    LIMIT 5 OFSET 3
    Long after Java... we'll be writing ANSI SQL.

    So I take it you retract:
    "proprietary extensions of each database for high-performance"

    .V
  199. This is simply to clear up what you are suggesting happened.
    Steve, we agreed that SQL was important and track record in this thread.You came into the thread and said SQL is not important.

    I have never said that SQL is not important. Maybe you have misread 'not important' for 'not portable'?
    You added this:
    Database vendors often sell their products on the basis of proprietary extensions to improve performance. If you stick to portable SQL, you are using their products at less than top performance. You are using a minimal compatible subset of SQL. If you use a high quality ORM, that ORM product can include knowledge of the proprietary extensions of each database product, so it can be a high-performance superset, not a subset, of features.

    True. I did then say this.
    I then said:
    it be fun to have a beer with you; of that I am sure. Get your other opinions and crack me up. You should go apply for the EG, we would so vote for you, keep us entertined. Hey, can you stik a pole up you, and go bar to bar, you can tell them you are a pop-sickle. I bet you could fool some of the people some of the time.

    You did, but not to me. This was not a response to anything I posted. (I'm sure I would have remembered it!)
    As it turns out... JDO people said they do syntax for DBs that fall short, but *NO performance query tuning.*
    SELECT * FROM AUCTIONITEM ORDER BY TITLE LIMIT 5 OFFSET 3
    Long after Java... we'll be writing ANSI SQL. So I take it you retract: "proprietary extensions of each database for high-performance" .V

    As I keep saying, I am not going to respond to these technical matters anymore. There is no point in addressing such posts to me. What I am going to insist on is that you don't misquote me and that you don't mislead people as to the sequence of responses in this thread.
  200. In fact yes[ Go to top ]

    ps: my definition of VLDB is several terabytes with at least 10K concurrent users, but I agree, others may differ on VLDB. A 400-gig drive is now $400; by the time you RAID it... (and the projects I teach/lead in that range are 1up, asociates and vantage)

    that's pretty big. Though some of the firms I know using EJB have databases in the 20TB range. This might be off topic, but EJB doesn't have to use ORM. You're free to back it up with stored procs, hand-tuned SQL or whatever you like. Misapplying technology in my mind does not invalidate its inherent value. Whether that technology is mainstream or niche is a different story. From all these posts, the one thing I walk away with is "everyone is different and has different needs."

    what works for one person is no guarantee it works for someone else.
  201. In fact yes[ Go to top ]

    some of the firms I know using EJB have databases over 20Tb range.

    Is this a fact ... that we can verify? Anyone?

    Who? What container? How many users?
    Something like this would CHANGE EVERYTHING.

    .V
  202. I would I could say[ Go to top ]

    Honestly, I wish I could say the names of the companies, but I do not want to get in trouble or get my friends in trouble. Don't take my word for it though, since you don't know me. I know it sucks big time to make claims that others can't verify with cold hard facts, but I know of at least 3 large production systems with multiple-TB databases. This isn't including data archives, which don't count. In the mutual fund sector, a ton of data is generated and logged. You can get a rough idea based on my previous posts on compliance reports. If a system is handling 100K trades a day and each one of those trades generates several megs of compliance logs, it's pretty easy to generate hundreds of gigs of data every week. Add that up for a month and the system easily generates a TB or more of data. In the financial sector in Boston, there's really only two main players. I'm sure you can guess which containers they are.
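    As a back-of-envelope check (taking "several megs" to be roughly 2 MB per trade; these are the post's ballpark figures, not real measurements), the arithmetic does land in multi-TB-per-month territory:

```python
# Rough volume arithmetic; all inputs are ballpark figures, not measured data.
trades_per_day = 100_000
mb_per_trade = 2                        # "several megs" assumed to be ~2 MB

gb_per_day = trades_per_day * mb_per_trade / 1024
tb_per_month = gb_per_day * 21 / 1024   # ~21 trading days per month

print(round(gb_per_day))       # ~195 GB per trading day
print(round(tb_per_month, 1))  # ~4.0 TB per month
```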
  203. boston financials[ Go to top ]

    In the financial sector in Boston, there's really only two main players. I'm sure you can guess which containers they are.

    Peter, can you drop me an email (first name at tangosol dot com), because Wellington is currently looking for some Java + Coherence people. (Are you currently there?)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  204. thanks for the message[ Go to top ]

    In the financial sector in Boston, there's really only two main players. I'm sure you can guess which containers they are.
    Peter, can you drop me an email (first name at tangosol dot com), because Wellington is currently looking for some Java + Coherence people. (Are you currently there?)Peace,Cameron PurdyTangosol, Inc.Coherence: Shared Memories for J2EE Clusters

    I'll shoot you an email in a few hours. I just got home, the turnpike was slow :( The funny thing is, I've interviewed at Wellington and worked with a few ex-Wellington compliance guys :)

    peter
  205. In fact yes[ Go to top ]

    my definition of VLDB is several terabytes with at least 10K concurents users, but I agree, others may differ on VLDB.

    It's not how much data, and not how many users, but how those users are using that data. I know of an extremely well-optimized app that took several hours to do a single simple (non-ordered, non-unioned, non-joined) select, and that was for all intents and purposes a single-user system. (It just happens to be the world's largest database, not counting the spooks of course.)

    Vic, instead of whipping out your willies, why don't you and Steve compare the specific things that you like about your approaches. Since I've never used the iBatis stuff (I've just looked at the examples Clinton did, like Petstore), I'm curious to hear what areas you find it works well in.

    Also, you should consider that Steve has some experience with something that you don't, and try to learn from what he's done. Maybe you'll be pleasantly surprised that not all apps have the same exact requirements ;-)

    Otherwise you guys are just going to repeat this conversation once a week for the next five years, every time someone mentions one of {SQL, database, Oracle, JDO, EJB, TopLink, ORM, JSR 220, iBatis, Hibernate, Kodo, JSR 243, DB2, impedance mismatch, RDBMS}.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  206. In fact yes[ Go to top ]

    Vic, instead of whipping out your willies,

    Tsk. Shame on you Sir! I believe I was only responding to the suggestion that my will^H^H^H^Hproject was smaller. (Actually, it (the project) is, and Vic wins in that respect).
    why don't you and Steve compare the specific things that you like about your approaches.

    But I have... again and again, round and round, ad nauseam, probably resulting in boredom. But here I go again (I don't mind being boring): transparency, quick development, separation of concerns, and portability.

    Portability is key for me:

    I have seen large SQL projects drag a company into disaster because they were tied into proprietary code that meant they could not quickly migrate to a more scalable and robust database engine. I have seen other projects fail because the developers would not let the customer choose what database they wanted to use.

    These may not be the experience of others. They are mine, and have determined my priorities.

    When I argue for portability, I get two replies: 1. You really should use SQL because you can make best use of the proprietary features of each database (after all, you have paid enough for Oracle/SQL Server/DB2); 2. SQL IS portable - you should use ANSI SQL and not use proprietary features.
    Otherwise you guys are just going to repeat this conversation once a week for the next five years, every time someone mentions one of {SQL, database, Oracle, JDO, EJB, TopLink, ORM, JSR 220, iBatis, Hibernate, Kodo, JSR 243, DB2, impedence mismatch, RDBMS}

    No, I promise, I'll take a holiday from this debate. I'll get friends to hold me back from the keyboard when these topics arise. I'll try and resist my usual compulsion to have the last word.
  207. In fact yes[ Go to top ]

    Now as to your attitude of assembly is as hard as SQL:
    "SELECT * from A,B where A.id = B.aid"
    What does this look like in assembly syntax, please educate me as to the similarities?

    I did not say that assembly was as hard as SQL. I said that the same arguments were being put forward for sticking with SQL and not abstracting from it as have been put forward in the past for sticking with assembler.

    My point was that these kinds of arguments have always been used throughout the history of computing whenever new levels of abstraction have been devised. Even SQL and relational theory were controversial when they were introduced in the 70s.

    Nowhere did I say that one was harder or easier than another.
  208. P.S.[ Go to top ]

    Further, a modern trend in contemporary UI (I almost said art) is to allow the (authorized) customer to add fields even in the detail view (shows up in the grid too of course). Actually add columns in the database! How do you ORM guys handle that?
    I am working with a system that provides just that; we "stole" the idea from a legacy system (from '92) originally written in C. The user needed to create new DB entities and/or change their attributes very often. We ended up with metadata inside the DB which maps, for each entity, which tables and fields compose it. For each field we also store its label, so whenever we have a CRUD form, we go to the DB metadata to get the entity fields, display them, get the user data, verify it against the field definitions and domain (also stored in the DB metadata), and generate the proper SQL for the CRUD operation, all dynamically. Works like a charm, and given the age of the original implementation, it should not be seen as "magic" nowadays... :)

    PS: the system is not allowed to issue any DDL commands to the DB; the DBAs do that, and afterwards we just update the DB metadata to reflect the changes, so the system is able to use the new entities/fields without any recompilation or UI refactoring: users are happy, DBAs are happy, developers are sad (less work for them!!) ;)
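    The metadata-driven CRUD idea described above can be sketched in a few lines. This is a minimal illustration in Python with SQLite (entity and column names are invented for the example, not taken from the actual system, which was in C):

```python
import sqlite3

# Metadata table mapping each entity to its columns; in the real scheme the
# DBAs create the tables and only the metadata rows are updated afterwards.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE meta_field (entity TEXT, column_name TEXT, label TEXT);
    CREATE TABLE customer (first_name TEXT, last_name TEXT);
""")
conn.executemany("INSERT INTO meta_field VALUES (?, ?, ?)", [
    ("customer", "first_name", "First name"),
    ("customer", "last_name",  "Last name"),
])

def insert_entity(conn, entity, values):
    """Generate and run an INSERT from the DB metadata, not from compiled code."""
    cols = [row[0] for row in conn.execute(
        "SELECT column_name FROM meta_field WHERE entity = ?", (entity,))]
    placeholders = ", ".join("?" for _ in cols)
    # Table/column names come from DBA-controlled metadata, so interpolating
    # them here mirrors the trust model described in the post.
    sql = f"INSERT INTO {entity} ({', '.join(cols)}) VALUES ({placeholders})"
    conn.execute(sql, [values[c] for c in cols])

insert_entity(conn, "customer", {"first_name": "Ada", "last_name": "Lovelace"})
print(conn.execute("SELECT first_name, last_name FROM customer").fetchall())
# → [('Ada', 'Lovelace')]
```

    When the DBAs add a column, one new `meta_field` row makes every CRUD form and generated statement pick it up, with no recompilation.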

    Regards,
    Henrique Steckelberg
  209. P.S.[ Go to top ]

    Yes, this dynamic binding stuff and modeling ignorance were very popular in the past. I was sure everybody knew about this mistake.
  210. P.S.[ Go to top ]

    http://www.databasejournal.com/features/oracle/article.php/3335331
  211. P.S.[ Go to top ]

    Juozas,

    I don't know if you posted that link as a criticism of the solution I described, but I assure you we maintain proper DB normalization with that solution, not only on the entity tables but also on the DB metadata tables. In fact, that was one of the implementation requirements.

    Regards,
    Henrique Steckelberg
  212. P.S.[ Go to top ]

    I have seen a few "legacy" databases with this design "trick". It is a workaround for "Mistake 1" in every case I have ever seen. It is not trivial to find this flaw in a "complex" domain, but it is the same mistake; the sample in this article is just very trivial.
  213. P.S.[ Go to top ]

    I have seen a few "legacy" databases with this design "trick". It is a workaround for "Mistake 1" in every case I have ever seen. It is not trivial to find this flaw in a "complex" domain, but it is the same mistake; the sample in this article is just very trivial.
    So I present you a case which maintains proper DB normalization in every aspect of the application! :)

    Since every change demanded by users is done by system admins and DBAs, we are able to detect changes that would break DB normalization before they get applied. The end user never changes the DB structure, just the DBAs, so it is almost impossible to mess up the database using this solution.

    Regards,
    Henrique Steckelberg
  214. P.S.[ Go to top ]

    "Typically, the developer will have to put some unusual logic, functions or application code around or within queries just to make the data manageable"

    ORM is a useful tool in many ways, but typically this "unusual logic, functions or application code" is the ORM.
  215. P.S.[ Go to top ]

    So I present you a case which maintains proper DB normalization in every aspect of the application! :) Since every change demanded by users is done by system admins and DBAs, we are able to detect changes that would break DB normalization before they get applied. The end user never changes the DB structure, just the DBAs, so it is almost impossible to mess up the database using this solution. Regards, Henrique Steckelberg
    I believe it, but it means you do not have any impedance mismatch problem either.
  216. P.S.[ Go to top ]

    So I present you a case which maintains proper DB normalization in every aspect of the application! :) Since every change demanded by users is done by system admins and DBAs, we are able to detect changes that would break DB normalization before they get applied. The end user never changes the DB structure, just the DBAs, so it is almost impossible to mess up the database using this solution. Regards, Henrique Steckelberg
    I believe it, but it means you do not have any impedance mismatch problem either.

    Here's a bit of humor. On a recent project, the person responsible for data access said this: "I want the objects to match the database tables exactly."

    When asked by the whole team "why do you want to do that?", his reply was this: "So that my framework has less work to do when you call the framework to save an object." I'm not joking either. This individual wanted the objects to have exactly the same naming convention as the tables. As in:

    class customer {
        String first_name;
        String last_name;
        address primary_address;
        // etc....
    }

    There are lots of ways to write data abstraction layers, is all I'll say :)
  217. P.S.[ Go to top ]

    I believe it, but it means you do not have any impedance mismatch problem either.
    I guess you can say so, because the mapping is a simple 1:1 between entities/fields and tables/columns. Besides, the original implementation was in C, which has nothing OO about it!
  218. P.S.[ Go to top ]

    I believe it, but it means you do not have any impedance mismatch problem either.
    I guess you can say so, because the mapping is a simple 1:1 between entities/fields and tables/columns. Besides, the original implementation was in C, which has nothing OO about it!
    I see no impedance mismatch problems with "complex" mappings either; there is a finite set of rules to transform a conceptual model into a logical or physical model (probably fewer than 10).
    There is nothing wrong with transforming models; you can do it at design, development, deployment or run time. Most O/R tools do it at runtime (probably to reduce tool setup time).
    I am not an ORM technology enemy, but I see it used to motivate crap and political stuff. There is nothing wrong with designing some cool home-made query language if you can design a calculus for it and can prove the calculus is more complete, or can use more performant implementation algorithms, than the relational stuff.
  219. P.S.[ Go to top ]

    There is nothing wrong with designing some cool home-made query language if you can design a calculus for it and can prove the calculus is more complete, or can use more performant implementation algorithms, than the relational stuff.

    Juozas,

    Enough with the calculus references .. you're totally failing to impress the resident math geniuses and the rest of the mensa crowd. Application developers need solutions, not math problems. There are a lot of things (including most of SQL) that are not based on calculus, and they work just fine.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  220. P.S.[ Go to top ]

    Juozas,Enough with the calculus references .. you're totally failing to impress the resident math geniuses and the rest of the mensa crowd. Application developers need solutions, not math problems. There are a lot of things (including most of SQL) that are not based on calculus, and they work just fine.Peace,Cameron PurdyTangosol, Inc.Coherence: Shared Memories for J2EE Clusters
    Yes, application developers do not need to know this stuff, but "experts" and QL designers must. SQL is based on relational calculus; this calculus is used in theory, but it is used in practice too. The SQL optimizer transforms a query into this calculus (it is a good way to optimize at this level using theory).
  221. P.S.[ Go to top ]

    I believe it, but it means you do not have any impedance mismatch problem either.
    I guess you can say so, because the mapping is a simple 1:1 between entities/fields and tables/columns. Besides, the original implementation was in C, which has nothing OO about it!
    I see no impedance mismatch problems with "complex" mappings either; there is a finite set of rules to transform a conceptual model into a logical or physical model (probably fewer than 10). There is nothing wrong with transforming models; you can do it at design, development, deployment or run time. Most O/R tools do it at runtime (probably to reduce tool setup time). I am not an ORM technology enemy, but I see it used to motivate crap and political stuff. There is nothing wrong with designing some cool home-made query language if you can design a calculus for it and can prove the calculus is more complete, or can use more performant implementation algorithms, than the relational stuff.
    Juozas, you are mixing things here: I am not talking about transforming from conceptual to implementation, or from logical to physical representation. I am talking about translating data that is in one representation (relational) to another representation (object). These are different in many aspects and similar in others, as I have already pointed out, and thus there usually has to be some kind of translation between them.

    I'll give you an analogy: take a phrase in English, "I like you", and translate it to Portuguese: "eu gosto de você". Both phrases have the exact same meaning (the same conceptual model), but each language has a different phrase (a different implementation): the words are different (I = eu, you = você), the number of words may be different (like = gosto de), the order of the words may change too, etc. It's the same when you have to store objects in a relational database; it depends on how your system is implemented, and it has nothing to do with the conceptual model. Got it now?

    Regards,
    Henrique Steckelberg
  222. P.S.[ Go to top ]

    There is a problem with the "object" representation if you cannot define it in a formal way. The common way to define it in data modeling is to use "entity relationship" concepts; if you use something different, then you are probably using ORM in a wrong way, because it is a tool to transform "entities", attributes, generalization and
    1:1, 1:N, N:M relationships into "tables", "fields" and 1:N relationships. Some of the valid rules are defined in the EJB3 spec too.
  223. P.S.[ Go to top ]

    There is a problem with the "object" representation if you cannot define it in a formal way. The common way to define it in data modeling is to use "entity relationship" concepts; if you use something different, then you are probably using ORM in a wrong way, because it is a tool to transform "entities", attributes, generalization and 1:1, 1:N, N:M relationships into "tables", "fields" and 1:N relationships. Some of the valid rules are defined in the EJB3 spec too.
    Sorry Juozas, but you are mistaken: ORM tools are not used to transform "entities", attributes, generalization and 1:1, 1:N, N:M relationships into "tables", "fields" and 1:N relationships; that is something you do once, manually, during the design phase of your project, and no tool will do it for you automatically. Even if there are tools that do it (translate from conceptual to implementation), they are NOT called ORM tools. I emphasize: translating from conceptual to implementation IS NOT what ORM tools do.

    ORMs are about interfacing your object-oriented application with a relational DB server to store data at execution time, loading and saving object-oriented data from your application into a relational DB server. Given that the OO data stored in application memory and manipulated (added, edited, deleted) by the user has to be persisted in a relational database, and given that this OO data inside your running app usually differs in structure from the persisted relational data inside your RDBMS, wouldn't having a tool that could seamlessly handle this data movement from the OO side to the DB side at application runtime be a big win-win situation? :)

    Ok, this is the last time I try to explain this. It shouldn't be that hard to understand ORM, but I guess some people have some concepts so deeply ingrained that seeing things outside the box becomes simply impossible.

    Life goes on, meanwhile.
    Henrique Steckelberg
  224. P.S.[ Go to top ]

    I know it; many people use ORM as a fake object database, but I think this is a wrong way to use ORM. If your problem is just to "save" objects, then you probably do not need any ORM engine.
  225. P.S.[ Go to top ]

    And just to make things clear: in the solution I described, the application never issues DDL commands (create table, alter table, etc.) at all. DBAs are responsible for managing the DB structure as they usually would, and just after they make the changes we map the updates into the DB metadata so the application can use the new/changed entities (tables) and fields (columns). Everything uses PONT (Plain Old Normalized Tables ;), so we are able to use PKs, FKs, AKs, checks, triggers, etc. normally on the entity tables to enforce data integrity.

    Regards,
    Henrique Steckelberg
  226. +1[ Go to top ]

    Further, a modern trend in contemporary UI (I almost said art) is to allow the (authorized) customer to add fields even in the detail view (shows up in the grid too of course). Actually add columns in the database! How do you ORM guys handle that?
    I am working with a system that provides just that; we "stole" the idea from a legacy system (from '92) originally written in C. The user needed to create new DB entities and/or change their attributes very often. We ended up with metadata inside the DB which maps, for each entity, which tables and fields compose it. For each field we also store its label, so whenever we have a CRUD form, we go to the DB metadata to get the entity fields, display them, get the user data, verify it against the field definitions and domain (also stored in the DB metadata), and generate the proper SQL for the CRUD operation, all dynamically. Works like a charm, and given the age of the original implementation, it should not be seen as "magic" nowadays... :) PS: the system is not allowed to issue any DDL commands to the DB; the DBAs do that, and afterwards we just update the DB metadata to reflect the changes, so the system is able to use the new entities/fields without any recompilation or UI refactoring: users are happy, DBAs are happy, developers are sad (less work for them!!) ;) Regards, Henrique Steckelberg

    Now I see what Rolf means: adding tables and columns using DDL, like alter table or create table. That is strictly prohibited by all the DBAs I know personally, and most seasoned developers I know would never consider doing such a thing. Having said that, I recently saw a developer write a session management framework that created a new table for each user and deleted the table when they closed the session. Needless to say, it was horrible. The approach that I've used is very similar to what Henrique described for handling "new fields/attributes". The only case where I would allow the application to create new tables is in system provisioning. In those cases it's simple: autogenerate the SQL for the "create table ......" and any required ORM mapping. Taking it a step further, it's simple enough to generate a schema, use the schema to generate source code + compile, and generate a simple mapping. Where it would get tricky is when a person wants the tables to use many-to-many relationships. In those cases, I wouldn't attempt to autogenerate. Not because it's impossible to write a tool to do it, but usually the design can be revised so that it doesn't have to use many-to-many.

    If the database model really requires many-to-many relationships, then it's a custom solution and ORM would not be a good fit. On the other hand, if that data is primarily viewed in tree format, I would generate the necessary views to make querying easier. With a view, I can still use an ORM to map to my objects. It's up to each person to choose what they want to live with.

    peter
  227. Hello Rolf Tollerud. I need your Help[ Go to top ]

    Pardon my intrusion into this realm.

    I have been on your trail from
    http://www.experts-exchange.com/Web/Web_Languages/Q_20694402.html

    I am really interested in your implementation of "zipping and unzipping zipped files on a remote webserver".

    Yes, the server supports ASP.NET and ASP (as indicated in your post on EE).

    I need this if you can spare some code time.

    WHAT I NEED
    1: 2 functions that take 2 parameters
    Example
    -----------------------------------------------
    Function UnzipToFolder(sTargetZipFile, sDestinationFolder)

    Function UnzipToFile (sTargetZipFile, sDestinationFile)

    Thanks
  228. J# supports gzip[ Go to top ]

    You should be able to add the J# library to your project and use the gzip utility that is in there. It's rather simple.

    peter
  229. Instead of Gavin...[ Go to top ]

    Use automatic transformation to transform all your OO operations from your UML diagram (which is OO) automatically into your own SQL statements.
    Yes, this is a good way, and it is very common using ERD; E/R diagrams are as OO as UML class diagrams. It is possible to draw valid E/R diagrams using UML too, if you prefer that notation.
  230. SQL anyone?[ Go to top ]

    I also agree with you that every Java programmer who wants to work with relational databases *must* understand SQL. No more, no less.

    The problem of O/R mapping ...
    1. We need an O/R mapper ...

    Consider this solution.
    We eliminate the O/R in our applications!
    Just like iBatis.

    The best part of Java is Collections, so we use those as DTOs.
    In essence ... you have iBatis return a DTO, and now you are off to map your model to your UI forms.
    (Mostly I go the other way: create the forms, show them to the client, and if they like them, I make a model mapping the UI and write my SQL to populate the model. So I don't do MDA, I do requirements-driven development, starting with a mockup/prototype so I build bonds and responsiveness with the client. They love adding/removing fields, so I don't even code until they say OK.)

    This is everything I learned in development on one short page here:
    http://www.sandrasf.com/kiss
    It says, among other things:
    ..."designed not by piling feature on top of feature, but by removing the weaknesses and restrictions that make additional features appear necessary."

    But the main point of OUR agreement:
    Some Java developers are limited by a belief that they can design DB applications without MASTERING SQL.
    The more successful P-lang developers tend to understand SQL.

    Peter, SQL has only 4 or 6 commands, and it's declarative.
    By declarative, you tell it what you want, not how to execute it. It's beyond 4GL. It figures it out for you. (But you can see the execution path if you want.) By gaining experience writing larger and more complex applications in SQL... you end up learning how to become a good designer... and a tech lead.
    Simple things in SQL are easy, and complex ones are possible.
    You do not have to be an expert in SQL to write anything that is possible in O/R more quickly and easily. An outer join is not hard.
    O/R... can't do complex things, it does not scale, and it creates impedance on the technology... and often on the entire project.
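    The "an outer join is not hard" point can be seen in a few declarative lines. A minimal sketch using Python's built-in SQLite, with hypothetical dept/emp tables invented for the example:

```python
import sqlite3

# You state *what* you want; the engine decides how to execute it.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE emp  (name TEXT, dept_id INTEGER);
    INSERT INTO dept VALUES (1, 'Sales'), (2, 'R&D');
    INSERT INTO emp  VALUES ('Ann', 1);
""")
# LEFT OUTER JOIN: departments keep their row even with no employees.
rows = conn.execute("""
    SELECT d.name, e.name
      FROM dept d LEFT OUTER JOIN emp e ON e.dept_id = d.id
     ORDER BY d.id
""").fetchall()
print(rows)   # → [('Sales', 'Ann'), ('R&D', None)]
```

    The empty department comes back with NULL instead of being dropped, which is the whole point of the outer join.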

    Let me try this: DO NOT READ CELKO BOOKS. Do not look at Apache iBatis jPetStore.

    ;-) Ok, now let me try to make some $.

    .V

    OT: One day UI will be declarative too, w/ XAML, Flex or JDNC. We have to keep learning, but ... that is why we are in this profession. (I wonder about the profession part sometimes)
  231. SQL anyone?[ Go to top ]

    Peter, SQL has only 4 or 6 commands, and it's declarative. By declarative, you tell it what you want, not how to execute it. It's beyond 4GL. It figures it out for you. (But you can see the execution path if you want.) By gaining experience writing larger and more complex applications in SQL... you end up learning how to become a good designer... and a tech lead.
    Simple things in SQL are easy, and complex ones are possible. You do not have to be an expert in SQL to write anything that is possible in O/R more quickly and easily. An outer join is not hard. O/R... can't do complex things, it does not scale, and it creates impedance on the technology... and often on the entire project. Let me try this: DO NOT READ CELKO BOOKS. Do not look at Apache iBatis jPetStore. ;-) Ok, now let me try to make some $. .V OT: One day UI will be declarative too, w/ XAML, Flex or JDNC. We have to keep learning, but ... that is why we are in this profession. (I wonder about the profession part sometimes)

    I hope I didn't give the impression I don't like SQL. I like declarative languages like Lisp and SQL. PL/SQL I'm not so fond of, compared to T-SQL in Sybase. But that's another story.

    I'm not sure I agree that it's necessarily "quicker", since debugging stored procedures remotely on the database can be a real pain. I agree it's powerful and can make things easier, but I've also seen hardcore OO guys create super-normalized database models with 1000 tables. What's funnier is when I ask "can I please create some materialized views for these 4 queries" and the answer is "no, just do the subqueries and/or joins". Now back to work for me too.

    peter
  232. SQL anyone?[ Go to top ]

    OO guys
    ...
     create super normalized database models with 1000 tables.
    ....
    when I ask "can I please create some materialized views for these 4 queries"

    I think I am an OO guy!!!! Composition, extension, polymorphism.... I do it all to get more reuse. It works with iBatis.

    I just showed how you can do a ceo/boss/employee in one table w/ self-joins for unlimited levels. Creating 1000 tables: who thinks that is EVER good for anything?
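    The one-table ceo/boss/employee idea can be sketched with a self-referencing manager_id column; fixed-depth self-joins work when you know the number of levels, and for genuinely unlimited depth most modern engines offer a recursive query. A sketch in Python/SQLite with invented table and column names:

```python
import sqlite3

# Whole hierarchy in one table: each row points at its manager's row.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE emp (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
    INSERT INTO emp VALUES (1, 'ceo', NULL), (2, 'boss', 1), (3, 'employee', 2);
""")
# Walk from 'employee' up to the top, however many levels there are.
chain = conn.execute("""
    WITH RECURSIVE up(id, name, manager_id) AS (
        SELECT id, name, manager_id FROM emp WHERE name = 'employee'
        UNION ALL
        SELECT e.id, e.name, e.manager_id
          FROM emp e JOIN up ON e.id = up.manager_id
    )
    SELECT name FROM up
""").fetchall()
print([name for (name,) in chain])   # → ['employee', 'boss', 'ceo']
```

    Adding a fourth or tenth level is just another INSERT; the query never changes, which is what "unlimited levels" buys you over N explicit self-joins.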

    re: views: Yes, DBAs HATE programmers, and we hate them back. I have 3 DBAs that report to me on a project... and I think they want to beat me and my guys up. But we respect each other's skills.

    .V

    Rolf, I will cash in right now, since you owe me! As a favor to repay me, install Fedora on one of your old PCs. And .... every now and then try to do something on it. This way you see the "Dark Side of The Moon" where the heavy lifting happens.
  233. SQL anyone?[ Go to top ]

    I think I am an OO guy!!!! Composition, extension, polymorphism.... I do it all to get more reuse. It works with iBatis.

    I just showed how you can do a ceo/boss/employee in one table w/ self-joins for unlimited levels. Creating 1000 tables: who thinks that is EVER good for anything?

    re: views: Yes, DBAs HATE programmers, and we hate them back. I have 3 DBAs that report to me on a project... and I think they want to beat me and my guys up. But we respect each other's skills. .V Rolf, I will cash in right now, since you owe me! As a favor to repay me, install Fedora on one of your old PCs. And.... every now and then try to do something on it. This way you see the "Dark Side of The Moon" where the heavy lifting happens.

    I wasn't disagreeing about self-joins for unlimited levels, though I would rather stay away from unlimited levels. Six levels deep is about as far as I would want to go in the compliance scenario. I've used these techniques. In cases where cartesian joins were appropriate, we used them. Specifically, in the case of reconciliation, where we need to look at pending, open, filled and closed transactions, they are all in the same table. So like you said, it's a rather straightforward query to get out any branch or node. What isn't straightforward is implementing an accounting system that accurately calculates profit/loss for tax calculations in semi-real time; this is often needed to accurately calculate compliance and account exposures. I'll stop there, since this stuff is rather dry and monotonous.

    peter
  234. SQL anyone?[ Go to top ]

    Specifically, in the case of reconciliation where we need to look at pending, open, filled and closed transactions they are all in the same table. So like you said, it's a rather straight forward query to get out any branch or node. What isn't straight forward is implementing an accounting system that accurately calculates the Profit/loss for tax calculations in semi-realtime. this is often needed to accurately calculate compliance and account exposures. I'll stop there, since this stuff is rather dry and monotonous.

    Actually, it's a pretty good basis for your argument. Scalable real-time compliance cannot be done using SQL (i.e. dishing the work off to a relational database) for two big reasons:

    1) Insufficient TPS to handle peak load (even with in-memory databases);
    2) The latency of the operation is such that the compliance check will prevent the trade execution from making it into its prescribed time window.

    A couple systems that I'm aware of measure the portion of their execution window reserved for real-time compliance in milliseconds (not single-digits yet, but definitely less than 50ms).

    You combine legacy systems with business directives (e.g. what is required that the system check and keep by the fund rules) and add awkward government regulations, and the resulting database operations go well into the thousands of milliseconds, and that's when there's no load on the system.

    FWIW / $.02

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  235. SQL anyone?[ Go to top ]

    Actually, it's a pretty good basis for your argument. Scalable real-time compliance cannot be done using SQL
    As I understand it, if you cannot do it using SQL then you cannot do it using a wrapper API either.
  236. you read my mind[ Go to top ]

    Specifically, in the case of reconciliation where we need to look at pending, open, filled and closed transactions they are all in the same table. So like you said, it's a rather straight forward query to get out any branch or node. What isn't straight forward is implementing an accounting system that accurately calculates the Profit/loss for tax calculations in semi-realtime. this is often needed to accurately calculate compliance and account exposures. I'll stop there, since this stuff is rather dry and monotonous.
    Actually, it's a pretty good basis for your argument. Scalable real-time compliance cannot be done using SQL (i.e. dishing the work off to a relational database) for two big reasons:
    1) Insufficient TPS to handle peak load (even with in-memory databases);
    2) The latency of the operation is such that the compliance check will prevent the trade execution from making it into its prescribed time window.
    A couple systems that I'm aware of measure the portion of their execution window reserved for real-time compliance in milliseconds (not single-digits yet, but definitely less than 50ms). You combine legacy systems with business directives (e.g. what is required that the system check and keep by the fund rules) and add awkward government regulations, and the resulting database operations go well into the thousands of milliseconds, and that's when there's no load on the system.FWIW / $.02Peace,Cameron PurdyTangosol, Inc.Coherence: Shared Memories for J2EE Clusters

    I have to say my experience is very close to what you just described. Many of the firms I am aware of only run simple restriction rules like "cannot buy IBM" for pre-trade because of the hit on TPS. Running any sort of weight (i.e. regulation) rules or diversification rules makes a big dent and is rather hard to scale :) I did an extensive performance analysis of current market compliance systems back in 2002, and even the best systems top out at 20-40 TPS running just restriction and weight rules. That's not including firm-wide rules, and all the various regulatory rules like 1940 Act, 2A7, FSA and foreign regs.

    peter
  237. You know what is wrong with these discussions? This thread is a good example.

    With the question "what to use for a persistence API" it is understood that we are talking about mainstream, run-of-the-mill projects under 5 million dollars or so (or choose your own limit).

    But every time we try to solve everyday problems here on TSS, there always comes some idiot who works with atomic bomb simulations for the Pentagon or NASA, or the World's Biggest Financial Service Provider with billions of transactions per day.

    Why do they do it? Not to contribute anything to the discussion, of course, for their problems are so much different and far out from ours. No, they do it to show everybody what important persons they are, working on such big projects. (This syndrome is called "Queen Elisabeth's Summer Cottage".)

    Cameron, for example, I am sure, could not create a simple stateless RIA application even if his life depended on it.

    No need to be impressed or intimidated. Remember that the competition is always more fierce and unforgiving in common sports like 400 or 800 m running than in, for instance, horse polo.

    Regards
    Rolf Tollerud
  238. SQL anyone?[ Go to top ]

    A couple systems that I'm aware of measure the portion of their execution window reserved for real-time compliance in milliseconds

    I assume you are talking about caching the domain in the local application's RAM, so you can get it as fast as the RAM is, with as much RAM as your 64-bit app servers have.
    So if you have 8 application servers and one large DB server, each application server has some recent cache that decays, giving performance and off-loading.

    The domain could be E/R just as much; there is nothing magic about O/R and caching. It could be RowSet caching, or an automatic soft-map cache like in iBatis. There is no advantage of O/R vs E/R for domain caching either way.
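    A "recent cache that decays" of the kind described above can be sketched as a simple per-server TTL map. This is a toy illustration only, with invented names; real products (the replicated caches and iBatis's soft-reference cache mentioned in this thread) add eviction policies, clustering and consistency on top:

```python
import time

class TTLCache:
    """A toy decaying cache: entries expire after a fixed time-to-live."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}          # key -> (value, expiry timestamp)

    def put(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key, load):
        entry = self.store.get(key)
        if entry and entry[1] > time.monotonic():
            return entry[0]      # cache hit: served from local RAM
        value = load(key)        # cache miss (or decayed): fall through to the DB
        self.put(key, value)
        return value

cache = TTLCache(ttl_seconds=30)
first = cache.get("cust:1", load=lambda k: "loaded-from-db")
print(first)   # → loaded-from-db  (a second get within 30s skips the loader)
```

    The trade-off Cameron raises below is exactly this `load` fallback: a decayed entry means a slow trip to the database, which is why transactionally consistent compliance systems cannot tolerate misses at all.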

    What is more interesting is if there is a cache miss:
    Which will then be faster with a realistic query, native SQL or _XQL?

    And which do you want to build the expertise in.

    .V

    ps: What happened to the mud here? It’s almost Friday!!!!
  239. SQL anyone?[ Go to top ]

    A couple systems that I'm aware of measure the portion of their execution window reserved for real-time compliance in milliseconds

    I assume you are talking about caching the domain in the local application's RAM, so you can get it as fast as the RAM is, with as much RAM as your 64-bit app servers have. So if you have 8 application servers and one large DB server, each application server has some recent cache that decays, giving performance and off-loading.

    Compliance can't use decaying caches. They basically have to be transactionally consistent.

    It's not about 64-bit, either, since big heaps on 64-bit machines are more painful than big heaps on 32-bit servers. (No full GC allowed.)
    What is more interesting is if there is a cache miss: Which will then be faster with a realistic query, native SQL or _XQL? And which do you want to build the expertise in.

    It doesn't even matter. If you had to do either, you'd just throw away the trade because you won't make the window.

    In other words, there can be no cache misses. ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  240. SQL anyone?[ Go to top ]

    A couple systems that I'm aware of measure the portion of their execution window reserved for real-time compliance in milliseconds
    It doesn't even matter. If you had to do either, you'd just throw away the trade because you won't make the window.

    I do not know what the company's secret sauce is. Push replicated cache w/ async transaction writes is a guess.

    Just network pipes are .25ms, so it all has to fit in RAM, else you have cache misses. I am sure it rocks, sounds like fun.

    .V
  241. SQL anyone?[ Go to top ]

    I hope I didn't give the impression I don't like SQL. I like declarative languages like LISP and SQL. PL/SQL I'm not so fond of compared to T-SQL in Sybase, but that's another story. I'm not sure I agree that it's necessarily "quicker", since debugging stored procedures remotely on the database can be a real pain. I agree it's powerful and can make things easier, but I've also seen hardcore OO guys create super-normalized database models with 1000 tables. What's funnier is when I ask "can I please create some materialized views for these 4 queries" the answer is "no, just do the subqueries and/or joins". Now back to work for me too. peter

    A thousand tables sounds a bit excessive and must have a negative performance impact on the database. It sounds like those developers need to simplify their object model and use XML CLOBs for plain persistence of object trees.
  242. SQL anyone?[ Go to top ]

    iBatis has a very big problem: it is too simple. There is no way to sell support, training, or licenses for this kind of tool.
  243. The Principle of Least Surprise :)[ Go to top ]

    Vic!

    Thank you for the brilliant collection of KISS quotations!

    I owe you.

    Regards
    Rolf Tollerud
  244. Ok, we have the OO world, where your app lives, and we have the Relational world, where your data lives. How do we make both talk to each other? There are some differences between these two worlds that must be dealt with before both can talk, and there are many ways to achieve a solution.
    Some people like writing tons of SQL in order to make them talk. Some would rather create tons of DB procedures for that. Others like using Recordsets. And some prefer using a tool that does the mapping from one world to the other, declaratively.
    Fact is, we will have to deal with this transformation from one world to the other, and the best solution depends on many things, like your experience, the kind of app, the DB features available, etc. So to simply dismiss any option just because you happen not to like it will just lower your chance of finding the solution best fitted to a particular problem.
    Each of the above solutions has its pros and cons. And sometimes the simpler solution may become a maintenance hell later on: sometimes a KISS is not goodbye... :) VB apps are rather famous for that: it just can't get any simpler, but try to do some maintenance, and you'll find out things should've been done differently in the first place.
    I love KISS, but like anything else, it must be applied with some sense, not indiscriminately, or you may end up with something so weak that it collapses under its own weight later on, and no refactoring will ever save it.

    An example: a dept in a company I know tried to apply KISS and decided to use Excel spreadsheets for inventory control. The day audit knocked at their door, they saw what was the price of that.

    Regards,
    Henrique Steckelberg
  245. Ok, we have the OO world, where your app lives, and we have the Relational world, where your data lives. How do we make both talk to each other? There are some differences between these two worlds that must be dealt with before both can talk, and there are many ways to achieve a solution.
    Can you explain this difference? Is it naming conventions? I am sure it is possible to create a database view for any world, including "transparency".
  246. Ok, we have the OO world, where your app lives, and we have the Relational world, where your data lives. How do we make both talk to each other? There are some differences between these two worlds that must be dealt with before both can talk, and there are many ways to achieve a solution.
    Can you explain this difference? Is it naming conventions? I am sure it is possible to create a database view for any world, including "transparency".
    There are many levels of differences:
    - naming conventions (OO app: userName, DB: USER_NAME)
    - types (OO app: String, DB: VARCHAR2(30))
    - inheritance (OO app: Woman inherits Person, Man inherits Person; DB: one table with person + woman + man attributes, or two tables, one with person + woman attributes and one with person + man attributes, or three tables just like the OO app)
    - relationship (OO app: job M:N employee, DB: JOB table 1:N JOB_EMPLOYEE table N:1 EMPLOYEE table)
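    Even the first item on the list, naming conventions, already forces a translation step. A minimal sketch of that translation in plain Java (this helper is hypothetical, for illustration only, not taken from any real mapping tool):

```java
// Sketch of the naming-convention translation an O/R layer performs:
// DB column USER_NAME <-> Java property userName.
// Hypothetical helper for illustration; not from any mapping tool.
public class NameMapper {
    // USER_NAME -> userName
    public static String toProperty(String column) {
        StringBuilder sb = new StringBuilder();
        boolean upperNext = false;
        for (char c : column.toLowerCase().toCharArray()) {
            if (c == '_') {
                upperNext = true;       // next letter starts a new "word"
            } else {
                sb.append(upperNext ? Character.toUpperCase(c) : c);
                upperNext = false;
            }
        }
        return sb.toString();
    }

    // userName -> USER_NAME
    public static String toColumn(String property) {
        StringBuilder sb = new StringBuilder();
        for (char c : property.toCharArray()) {
            if (Character.isUpperCase(c)) sb.append('_');
            sb.append(Character.toUpperCase(c));
        }
        return sb.toString();
    }
}
```

    Types, inheritance, and relationships each need a similar (and usually much hairier) translation, which is what mapping tools automate.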

    Not only DB structure differences, but also how your app is coded, can influence the gap between the O and R worlds: whether you have a rich OO domain model (complex mapping between O and R), or use only hashmaps to transfer form data to/from the db (1:1 mapping between O and R), or route all db access through stored procedures (the mapping may or may not be complex).

    If you compare UML and E/R models, they surely look very similar, but when it comes to implementation, the differences can be staggering.

    More details here: http://www.agiledata.org/essays/impedanceMismatch.html

    Regards,
    Henrique Steckelberg
  247. If you compare UML and E/R models, they surely look very similar, but when it comes to implementation, the differences can be staggering.
    E/R is a conceptual model. The implementation model can be any valid model. Implementation is more about performance and security than about domain concepts. There is nothing very OOP-specific in UML class diagrams; if it makes sense, then you can use ASM to implement it too. It is very common to use the relational model to implement an E/R model. ORM is a good thing if you are a domain expert and a programmer at the same time (probably that is common too): you can use modeling concepts as the implementation model, and the runtime engine helps transform them to the relational model. If you use E/R modeling concepts in the application implementation, then ORM is the right tool for the job. A valid OO data model is E/R; some modeling languages like UML just use a different notation.
  248. If you compare UML and E/R models, they surely look very similar, but when it comes to implementation, the differences can be staggering.
    E/R is a conceptual model. The implementation model can be any valid model. Implementation is more about performance and security than about domain concepts. There is nothing very OOP-specific in UML class diagrams; if it makes sense, then you can use ASM to implement it too. It is very common to use the relational model to implement an E/R model. ORM is a good thing if you are a domain expert and a programmer at the same time (probably that is common too): you can use modeling concepts as the implementation model, and the runtime engine helps transform them to the relational model. If you use E/R modeling concepts in the application implementation, then ORM is the right tool for the job. A valid OO data model is E/R; some modeling languages like UML just use a different notation.
    Yeah, in conceptual world, you assume that everything is beautiful and perfect: you have unlimited processing power, memory and storage. When you translate conceptual model to implementation, the differences show up, because each world has their own way of dealing with performance, data structure, rule enforcement, etc. That is what is called impedance mismatch: each world has its particularities, and one can minimize their differences, but not make these differences disappear completely.

    Regards,
    Henrique Steckelberg
  249. Probably we understand the "impedance mismatch" term in different ways. I understand it as a myth (it was invented by OODBMS vendors, was it not?) to sell object databases; OODBMS vendors just do not have better arguments to sell this stuff, while RDBMS vendors have math. I think there is no "mismatch" if you can transform stuff to match, and some tools can do it automatically (a runtime ORM engine is one way).
  250. Probably we understand the "impedance mismatch" term in different ways. I understand it as a myth (it was invented by OODBMS vendors, was it not?) to sell object databases; OODBMS vendors just do not have better arguments to sell this stuff, while RDBMS vendors have math. I think there is no "mismatch" if you can transform stuff to match, and some tools can do it automatically (a runtime ORM engine is one way).
    Well, at least for me, "impedance mismatch" is just another name for the very need to transform, be it simply to translate attribute names and/or types, or because of structural differences. If there were no mismatch, there would be no need to transform in the first place, or even JDBC for that matter. We would just say "store!" to our objects, and it would be done! :)

    Here is a nice discussion around this topic: http://c2.com/cgi/wiki?ObjectRelationalImpedanceMismatchDoesNotExist

    Regards,
    Henrique Steckelberg
  251. sometimes a KISS is not goodbye[ Go to top ]

    Probably we understand the "impedance mismatch" term in different ways. I understand it as a myth (it was invented by OODBMS vendors, was it not?) to sell object databases; OODBMS vendors just do not have better arguments to sell this stuff, while RDBMS vendors have math. I think there is no "mismatch" if you can transform stuff to match, and some tools can do it automatically (a runtime ORM engine is one way).

    Actually, the fact that you mention that a transformation is required indicates that there is an impedance mismatch.

    Look here for more:
    http://www.agiledata.org/essays/impedanceMismatch.html
  252. sometimes a KISS is not goodbye[ Go to top ]

    Actually, the fact that you mention that a transformation is required indicates that there is an impedance mismatch. Look here for more: http://www.agiledata.org/essays/impedanceMismatch.html
    If an object database can be accessed from more than one programming language, then there is an impedance mismatch too, because you need to transform data types and probably data structures.
    Transformations are common in programming; you can transform data using a view in the RDBMS itself, and I do not think transformation is a problem: javac does it on code too.
  253. sometimes a KISS is not goodbye[ Go to top ]

    Ok, we have the OO world, where your app lives, and we have the Relational world, where your data lives. How do we make both talk to each other? There are some differences between these two worlds that must be dealt with before both can talk, and there are many ways to achieve a solution.
    Can you explain this difference? Is it naming conventions? I am sure it is possible to create a database view for any world, including "transparency".

    The following whitepaper describes the benefits of using an OR-Mapping technology (instead of using raw JDBC/SQL) to bridge the gap between these 2 worlds.

    SIMPLIFYING DATA INTEGRATION - LINKING JAVA OBJECTS TO RELATIONAL DATABASES TECHNOLOGY WHITE PAPER

    -- Damodar
  254. Ok, we have the OO world, where your app lives, and we have the Relational world, where your data lives. How do we make both talk to each other?

    Henrique, How would one answer this question:
    A relational DB returns a Set of records, right?
     
    < ignore this for a moment: a list of rows; each row has name/value pairs (column names and the values) />

    How do we represent Sets in Java?

    Now... let's say you are using a more modern language than Java
    (Ruby, Groovy, P-langs, etc.)

    How would you answer it then?

    IMO, the answer is the same, regardless of the OO language you use. There is nothing special about Java that prevents one from using OO with collections.
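    That answer, a set of rows where each row is a bag of name/value pairs, maps directly onto core collections with no framework at all. A self-contained sketch (the rows below are hard-coded stand-ins for what a real ResultSet would return):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// A relational result set represented with nothing but core collections:
// a List of rows, each row a Map of column name -> value.
// The data is a made-up stand-in for a real query result.
public class RowsAsCollections {
    public static List<Map<String, Object>> customers() {
        List<Map<String, Object>> rows = new ArrayList<>();
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("CUSTOMER_ID", 42);
        row.put("NAME", "Acme");
        rows.add(row);
        return rows;
    }
}
```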

    From my Kiss page:
    "... people seem to misinterpret complexity as sophistication, which is baffling — the incomprehensible should cause suspicion rather than admiration. Possibly this trend results from a mistaken belief that using a somewhat mysterious device confers an aura of power on the user."

    .V
  255. Vic,

    do you agree that using collections to map R to O is just one way of doing it? Sometimes it is better to have a rich domain model in your OO app; are you aware of this, or do you always dismiss it as a possible solution?

    Regards,
    Henrique Steckelberg
  256. Just for the record (pun intended!): Recordset

    Regards,
    Henrique Steckelberg
  257. rich domain model in your OO app, ... do you always dismiss it as a possible solution?

    I think the P-lang people always dismiss it.
    I agree that most of the time an E/R represents the domain better, for example:
    Get a list of customers.
    Click and get a list of invoices.
    Click and get a list of items.

    .V
  258. Instead of Gavin...[ Go to top ]

    Competition is good, Bill.
    Competition of implementations is good. Competition of standards is bad. Very bad. -- Cedric

    Took the words right out of my mouth....
    Competition of implementations is good. Competition of standards is bad. Very bad. --

    If only the open source community would embrace this as a mantra.

    Tapestry, Velocity, Struts, Spring web, etc. etc. Even Log4j vs. Java logging. Only the authors care about their implementations being the best; the rest of us just want to have less to read.

    How about an open source project to DISCONTINUE open source frameworks/designs/architectures. It will never happen.

    Jonathan
    MicroSpring - IOC container in 30K
  260. Better this way...[ Go to top ]

    ...with the JCP and EJB EG grabbing their ankles and a big Hibernate monster (with a very small appendage) preparing to make their entry.
  261. poll ?[ Go to top ]

    Hi,

    Did TSS ever consider having a simple poll to go with the cartoons? Somehow I suspect the majority of votes would be in the "not funny at all, waste of time" category...

    If I need a laugh, there is plenty of stuff to laugh about in the regular discussions, if not there is always the Onion.

    Luc.
  262. Is this not the second cartoon with a lame 'persistence' punch line? see here

    Please stop the PUNishment.
  263. Misplaced Gavin[ Go to top ]

    I could be off base. I don't keep up with the latest gossip, but based on last year's TSSS I have a different perception.

    I got the distinct impression that Gavin would be in the room with the EJB persistence experts, not hugging the JDO guy outside.
  264. Misplaced Gavin[ Go to top ]

    I got the distinct impression that Gavin would be in the room with the EJB persistence experts, not hugging the JDO guy outside.

    hugging? I thought he was about to administer the Vulcan neck pinch.
  265. Misplaced Gavin[ Go to top ]

    I got the distinct impression that Gavin would be in the room with the EJB persistence experts, not hugging the JDO guy outside.
    hugging? I thought he was about to administer the Vulcan neck pinch.

    LOL... That was my impression as well. Perhaps he was sent out of the room the back way to get rid of the agitator out front?
  266. Misplaced Gavin[ Go to top ]

    You just exposed a lie about a unified POJO persistence API under EJB3 (Sun's letter about "EJB and JDO" joining forces).

    What's wrong with Sun? If BEA couldn't save EJB-EB, if IBM couldn't save EJB-EB, if Oracle couldn't save EJB-EB, why do they think JBoss will be able to save EJB-EB?

    By simply renaming Hibernate to EJB-CMP3? Or by creating yet another elephant?
  267. EJB-EB[ Go to top ]

    You just exposed a lie about a unified POJO persistence API under EJB3 (Sun's letter about "EJB and JDO" joining forces). What's wrong with Sun? If BEA couldn't save EJB-EB, if IBM couldn't save EJB-EB, if Oracle couldn't save EJB-EB, why do they think JBoss will be able to save EJB-EB? By simply renaming Hibernate to EJB-CMP3? Or by creating yet another elephant?
    There is nothing wrong with EJB-EB BMP; unfortunately there was CMP, which was collectively mistaken for ORM....
  268. EJB-EB[ Go to top ]

    You just exposed a lie about a unified POJO persistence API under EJB3 (Sun's letter about "EJB and JDO" joining forces). What's wrong with Sun? If BEA couldn't save EJB-EB, if IBM couldn't save EJB-EB, if Oracle couldn't save EJB-EB, why do they think JBoss will be able to save EJB-EB? By simply renaming Hibernate to EJB-CMP3? Or by creating yet another elephant?
    There is nothing wrong with EJB-EB BMP; unfortunately there was CMP, which was collectively mistaken for ORM....

    Why would you use EJB-EB BMP instead of a POJO DAO with JDBC?
  269. EJB-EB[ Go to top ]

    You just exposed a lie about a unified POJO persistence API under EJB3 (Sun's letter about "EJB and JDO" joining forces). What's wrong with Sun? If BEA couldn't save EJB-EB, if IBM couldn't save EJB-EB, if Oracle couldn't save EJB-EB, why do they think JBoss will be able to save EJB-EB? By simply renaming Hibernate to EJB-CMP3? Or by creating yet another elephant?
    There is nothing wrong with EJB-EB BMP; unfortunately there was CMP, which was collectively mistaken for ORM....

    Below is quoted from Rod Johnson's J2EE Development without EJB:

    "BMP entity beans are particularly onerous to implement and pose intractable performance problems such as the well-known n+1 finder problem. Using BMP adds significant complexity compared to ... (such as JDBC) directly, but delivers little value."
  270. Good cartoon, but it ...[ Go to top ]

    Should have been in blue!
    But that would be the licensed blue, the one off the spec.
    And those lines aren't really correct, after all we know orthogonal is best.
    As for the thickness, it's obvious that the thickness of the line is in direct relation to the alpha-blending, I mean thats just bleedin obvious.

    Makes me sick it does.

    Jonathan
    ps MicroSpring! Go on, take a glance, IOC in 30k
  271. Off topic: MicroSpring[ Go to top ]

    MicroSpring! Go on, take a glance, IOC in 30k

    Now that's cute :-)

    However, Jonathan, please note that the large list of libraries that Spring includes in the distribution is only due to the pre-integrations that we ship (J2EE support, persistence frameworks, web view technologies, etc).

    As stated in readme.txt, the only dependencies of the core IoC container are JDK 1.3 and Commons Logging. Even the AOP framework just requires AOP Alliance in addition (as long as just proxying interfaces). See readme.txt for details on dependencies of other modules.

    So in the current packaging, all you need is spring-core.jar (~265 KB) and commons-logging.jar (~31 KB) to get a full-fledged IoC container. Add spring-aop.jar (~140 KB) and aopalliance.jar (~5 KB), and you got a full-fledged proxy-based AOP framework too.

    Juergen
  272. Off topic: MicroSpring[ Go to top ]

    Hi,

    But it's a great way to learn your APIs - write my own version. And it did prove a pretty simple activity.

    Which made me wonder....

    Jonathan
  273. Off topic: MicroSpring[ Go to top ]

    Jonathan,
    Hi, but it's a great way to learn your APIs - write my own version. And it did prove a pretty simple activity. Which made me wonder.... Jonathan

    Sure, no doubt that it is an interesting exercise! And if it is even sufficient for your application's requirements, then all the power to you :-)

    I just wanted to point out the minimal dependencies of the Spring core because you gave a different impression on the MicroSpring website (that the Spring core depends on all those libraries that are part of the distribution).

    Juergen
  274. just never choose the obvious[ Go to top ]

    60% JDBC
    20% Homegrown Persistence Framework
    10% O/R Mapping Tools
    5% Java Data Objects (JDO)
    5% EJB CMP / BMP
    0% Service Data Objects (SDO)

    Considering that ORM only has 10% of the market, and "seldom is a good choice in systems typically with very large dataset and complex queries", one can wonder what the fuss is all about?

    Why not settle for a common abstraction layer and let each run according to their own taste? It should be obvious by now that there is never going to be any consensus between the different persistence camps.

    One big advantage is that it will not be necessary to frisk for weapons at the entrance to conferences and seminars! :)

    Yi Zhou:
    "I propose a cohesive persistence layer based on Spring Persistence Layer"

    The obvious solution. (Think logic, reason, common sense. Exists for both Java and .NET.) Can anyone imagine how much money could be saved by this approach? All over the world?

    Unfortunately, it is never possible to settle for anything obvious as long as a committee is involved.

    Regards
    Rolf Tollerud
  275. just never choose the obvious[ Go to top ]

    "I propose a cohesive persistence layer based on Spring Persistence Layer"

    Which is ...? I thought that Spring is an IoC + AOP container which integrates tightly with the major persistence mechanisms out there, but does not define any persistence semantics by itself? Has it changed since I last looked at it?
  276. just never choose the obvious[ Go to top ]

    60% JDBC 20% Homegrown Persistence Framework 10% O/R Mapping Tools 5% Java Data Objects (JDO) 5% EJB CMP / BMP 0% Service Data Objects (SDO)
    Ok. Who let Rolf have the crystal ball back?
    Can anyone imagine how much money that could be saved by this approach? All over the world?
    Well, at least you got one thing right (A very bright guy at the last place I was working was going to push this after I presented him with NHibernate and Spring.Net and NAnt and told him where they got their start). At least until Microsoft ships their solutions for the things they are currently missing. Then it is back to square one. As usual. :(
  277. Fair comment, however....[ Go to top ]

    Spring's focus on integration with other open source projects does tend to obscure the inherent simplicity of IoC and your core APIs.

    I've had an extra hour or two since the weekend to use Spring and it is a doddle. So a different site, documentation set, and release bundle - a PicoSpring (who?) version - would be great.

    MicroSpring works and I'll use it because I don't need other Spring features. Maybe if you had the tiny site simply for IOC I wouldn't have bothered.

    Jonathan
    MicroSpring - tiny IOC container
  278. Here's a question 4 U[ Go to top ]

    60% JDBC
    20% Homegrown Persistence Framework
    10% O/R Mapping Tools
    5% Java Data Objects (JDO)
    5% EJB CMP / BMP
    0% Service Data Objects (SDO)
    Considering that ORM only has 10% of the market, and "seldom is a good choice in systems typically with very large dataset and complex queries", one can wonder what the fuss is all about? Why not settle for a common abstraction layer and let each run according to their own taste? It should be obvious by now that there is never going to be any consensus between the different persistence camps. One big advantage is that it will not be necessary to frisk for weapons at the entrance to conferences and seminars! :) Yi Zhou: "I propose a cohesive persistence layer based on Spring Persistence Layer". The obvious solution. (Think logic, reason, common sense. Exists for both Java and .NET.) Can anyone imagine how much money could be saved by this approach? All over the world? Unfortunately, it is never possible to settle for anything obvious as long as a committee is involved. Regards, Rolf Tollerud

    Here's a question 4 U, Rolf. Let's say you have a business process which performs some kind of analytics, such that for each insert it validates that entry and generates a report. The report may contain 100-1000K rows of data. Speaking from a high-level perspective, which would you use?

    I have my own bias on just about every approach, based on experience. In cases like the one I just described, trying to save the reports in real time is seriously asking for trouble. If the system were to get 50 concurrent transactions that require validation, it could potentially generate 20-40K rows of inserts. Without some kind of batch process and caching mechanism, it would overwhelm the database very quickly. It doesn't really matter which database you use; trying to save that many entries in a short amount of time is limited by the laws of physics.
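    The batch-process idea above can be sketched: accumulate rows and flush them in fixed-size chunks instead of issuing one insert per row. The chunking logic below is generic and self-contained; the actual JDBC addBatch/executeBatch calls are indicated only in comments, since they need a live connection, and all names are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of chunked ("batch") writes: rows are flushed N at a time rather
// than one insert per row, the usual way to avoid overwhelming the database
// under a burst of transactions. Hypothetical class, for illustration.
public class BatchWriter {
    private final int batchSize;
    private final List<String> pending = new ArrayList<>();
    private int flushes = 0;

    public BatchWriter(int batchSize) { this.batchSize = batchSize; }

    public void add(String row) {
        pending.add(row);               // with JDBC: stmt.addBatch(...)
        if (pending.size() >= batchSize) flush();
    }

    public void flush() {
        if (pending.isEmpty()) return;
        flushes++;                      // with JDBC: stmt.executeBatch()
        pending.clear();
    }

    public int flushCount() { return flushes; }
}
```

    With a batch size of 100, writing 250 rows costs three round trips instead of 250, which is the whole point of batching under load.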

    JDBC - if I try to use plain JDBC without some abstraction, it's going to be a pain to maintain. All the approaches have to use JDBC anyway, so I don't consider it a valid choice. It's a minimum requirement, not a choice by itself.

    Home grown - having done this, it's fine in simpler/smaller projects that don't have a lot of changes or a large code base.

    ORM - works well if you have an object graph which has to be persisted. It reduces the need to write a ton of stored procedures. If I am just saving a single row to the database, ORM is a bit heavy-handed. If my object graph varies dramatically, then a good ORM can generate more performant SQL than if I hand-code a ton of stored procedures. Plus, most databases today cache execution plans and the compiled SQL, so ORM-generated SQL won't necessarily be slower than hand-tuned stored procedures.

    JDO - I haven't used it on a project, so can't say.

    EJB - having used mostly entity beans, they have a place. Though I would still use something like Hibernate to make things easier to maintain. If I need to tune the SQL, I'll use a good profiler like TOAD, which still works fine with Hibernate. If I don't need to optimize the SQL, Hibernate can save me some time. In most of the apps I've worked on the last few years, though, we log the query performance and generate nightly stats. If a particular query is executed a lot but isn't optimal, we go in and tune it.

    As Gump would say, "life is like a box of chocolates."

    peter
  279. Here's a question 4 U[ Go to top ]

    Peter,

    although I'm not Rolf, I'm replying!

    I think what you are saying about your approach is:

    1) Where performance is not a problem use some off the shelf thing to avoid needless donkey work. i.e. select/insert/update/delete auto generated.

    2) Where volumes/design is such that a holding area or cache is appropriate use that.

    3) Where performance is really slow, go in and hand craft it to make it really fast.

    4) Ensure your codebase can monitor performance so that you can step from 1, to 2, to 3 above.

    Or to put it another way: use sprocs and JDBC when you need to, cache when you need to, and when you don't, use Emeraldjb (plug), Torque, iBatis, CMP, Hibernate, whatever.

    Seems sensible to me.

    Jonathan
    Emeraldjb - the sparkly green code generator
  280. just never choose the obvious[ Go to top ]

    60% JDBC 20% Homegrown Persistence Framework 10% O/R Mapping Tools 5% Java Data Objects (JDO) 5% EJB CMP / BMP 0% Service Data Objects (SDO)

    Now that is a good idea that seems to be in line with the current JCP EC: write an open letter signed by Sun and then kill the JDBC standard. EJB3 will solve the problem soon, and probably much better than JDBC! We have to conform the storage thinking in the Java community - right?

    ;-)

    Cheers,
    Johan Strandler
  281. real men use assemblies[ Go to top ]

    What's all this high-level language junk? Everyone should be using assembly. None of that JDBC/ODBC/ADO/ADO.NET silliness. Unfortunately, that would mean I'd be out of work, since I have no assembly experience :) So much to learn, so little time.
  282. It is strange that such a small problem has not been solved satisfactorily after all these years. Even MS has problems, remember (ObjectSpaces is delayed again and will not see the light until after the release of Longhorn, sometime in the distant future).

    That MS too has run into trouble is a pretty good sign that there is something inherently wrong with O/R altogether, IMO!

    Regards
    Rolf Tollerud
  283. on that level[ Go to top ]

    After all these years, there's still DLL hell. There must be something inherently wrong with the design :)

    By the way, I'm just joking. Everything has warts; just because Microsoft hasn't played in the enterprise data center market for 20+ years and doesn't know which direction they want to go with ObjectSpaces doesn't prove ORM is bad.

    That's like saying "I can't rebuild a Hemi engine without a manual, therefore Hemi engines are flawed." There are plenty of mechanics that could rebuild a Hemi engine in their sleep. That I can't do it in no way invalidates the technology. Most things in life are not black and white.
  284. Peter is naive[ Go to top ]

    Well Peter, I don't know you, but I guess you are rather young! When you get older you will learn to read the small signs on the wall. "Something's rotten in the state of ObjectSpaces, er... Denmark."

    Take my word for it.

    Regards
    Rolf Tollerud
  285. What does age have to do with it? Whether I am young or old has nothing to do with the merits of ORM. I asked in a previous post, so I'll ask again: "How would you save an object graph efficiently, making sure it is reliable and robust?" Surprise me for once and attempt to answer the question with hard technical facts :) By the way, there are plenty of things I am naive and ignorant about, but saving large object graphs to the database is something I have solid experience in. Don't take my word for it. Try it yourself and see how painful it is to hand-code a ton of SQL to save an object graph that is four levels deep.
  286. Try it yourself and see how painful it is to hand code a ton of Sql to save an object graph that is 4 levels deep.

    It makes no difference if it's 100 levels deep; why would it?
    http://www.onlamp.com/pub/a/onlamp/2004/08/05/hierarchical_sql.html
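    Path enumeration, the main technique in the linked article, stores each node's full path from the root so that "give me the whole subtree" becomes a string-prefix match (in SQL, roughly WHERE path LIKE '1/2/%'). A minimal sketch in plain Java over made-up sample paths:

```java
import java.util.List;
import java.util.stream.Collectors;

// Path-enumeration sketch: each node carries its full path from the root,
// so selecting a subtree is a prefix match. Sample data is hypothetical.
public class PathEnumeration {
    public static List<String> subtree(List<String> paths, String rootPath) {
        String prefix = rootPath + "/";
        return paths.stream()
                .filter(p -> p.equals(rootPath) || p.startsWith(prefix))
                .collect(Collectors.toList());
    }
}
```

    As the article notes, this is fast for reads from the root down, but it does little for the bulk-insert problem discussed further on.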

    .V
  287. good point[ Go to top ]

    Try it yourself and see how painful it is to hand code a ton of Sql to save an object graph that is 4 levels deep.
    It makes no dif. if it's 100 levels deep, why would it?http://www.onlamp.com/pub/a/onlamp/2004/08/05/hierarchical_sql.html.V

    the article makes some good points, but there are limitations. In the summary of that article it states:
    Path enumeration models have problems with deeper trees and with trees that do not have a natural way of naming the nodes or edges. Maintaining proper constraints can be difficult in actual SQL products because of the lack of support for Full SQL-92 features.

    On the plus side, path enumeration is a very fast and natural technique for trees without great depth and frequent changes to the structure. If you perform most of your searches and aggregations from the root down, it can handle surprisingly wide tree structures.

    the rough structure of the type of object graph looks something like this:

    report
      |- report item
      | |- rule[]
      | | |- rule name
      | | |- rule desc
      | | |- rule version
      | | |- rule type
      | | |- aggregate[]
      | | | |- values[]
      | | | |- attributes[]
      | | | |- type
      | | | |- previous values[]
      | | |- transaction
      | |- Order id
      |- report summary (aggregates one or more items)

    The real graph is quite a bit more complex than that; in fact, it gets worse. In practice, the reports are also aggregated and involve updates to existing entries in the database :) Using hierarchical SQL helps in some cases, but the biggest problem is how to save so much data efficiently, using batch/bulk inserts when possible. The techniques described in the article help with selects, but in the case of inserts I doubt they provide much benefit.

    In my previous post, I wasn't clear. I wasn't saying hand-writing the SQL is inherently unmaintainable or something crazy like that, just that there's considerable overhead in profiling and maintenance. Most developers I've met haven't had to deal with saving large object graphs, and it takes a bit of time to learn through experience. If using an ORM makes it easier and less error-prone, then it's worthwhile. On the other hand, if everyone on the team is a SQL expert with powerful kung-fu, then I wouldn't bother using an ORM.

    In this specific case, depth matters because the amount of data that needs to be saved is directly related to the depth of the graph. As the validation rules get more complex, the graph gets deeper and generates much more data to save. If runtime efficiency and latency aren't important, then I would agree depth makes no difference. On the other hand, if there's a constant stream of transactions that have to go through validation, and the generated data has to be saved, depth matters quite a bit.

    peter
  288. So are you saying that you are using ORM in your precocious super-complex financial applications, over which you have been pontificating for a while now? (After 20 years I begin to understand.. etc).

    Then I understand why you've got problems.

    "On the other hand, if everyone on the team is a Sql expert with powerful kung-fu, than I wouldn't bother using an ORM."

    So why are we discussing then, when we agree on everything?

    A better test than making a person answer questions is to let him make up the questions. For a while now you have been asking me to answer poorly phrased questions based on insufficient information.

    JDBC is of course the underlying protocol for ORM, which at best is good only for saving development time. If you know what you are doing, handwritten JDBC by a SQL expert will always be faster and more robust.

    So what I want to ask you is: what ORM product(s) do you use in your financial applications? Home-grown?

    Regards
    Rolf Tollerud
  289. I negotiate with the project manager[ Go to top ]

    So are you saying that you are using ORM in you precocious super-complex financial applications over which you have been pontificating for a while now? (After 20 years I begin to understand.. etc).Then I understand why you got problems. "On the other hand, if everyone on the team is a Sql expert with powerful kung-fu, than I wouldn't bother using an ORM."So why are we discussing then, when we agree on everything?A better test than making a person answer questions is to let him make up the questions. For a while now you have been asking me to answers poorly phrased questions based on insuffient information. JDBC is of course the underlying protocol for ORM which is good only for saving developing time at the best. If you know what you are doing, handwritten JDBC by a Sql expert will always be faster and more robust.So what I want is to ask you, what ORM product(s) do you use in your financial applications. Home-grown?RegardsRolf Tollerud

    If time is not a constraint and I have all the time I need to profile and optimize all the queries, my preference is to hand-tune the most frequently used inserts. In reality, most managers say, "don't worry about tuning the sql right now, just get these other (new) features prototyped."

    Now, one could moan, groan and scream, but that's life. Even though I usually ask for a month of profiling, tuning and testing, it never happens. I usually have to sneak it in and do it on my own time. By "my own time" I mean not charging my client for the time I spend beyond 8 hours that day.

    I'll be the first to say I'm no SQL expert, but I have used various techniques to optimize and tune SQL. In the last 4 years, I've only met one person I consider a SQL expert, with 10+ years of Oracle experience. The financial firms I know are very tight about the database. Many firms require developers to send query requests to the DBA. The DBAs then take 1-2 weeks to write, test, and profile the SQL.

    I wouldn't call it "pontificating for a while now." I've worked on and written applications to handle pre-trade compliance for realtime and semi-realtime applications. Saving the data is just one step of a process that has 4 distinct steps in the simple case and upwards of 10 in the more complex cases. If only it were as easy as "save a bunch of data" to the database.
  290. SQL trees[ Go to top ]

    Check out nested set model
    http://www.developersdex.com/gurus/articles/112.asp

    It works from the top, from the bottom, and from any node; extremely convenient for hierarchical aggregates and queries.
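    For readers who haven't met it, the nested set model gives each node a (left, right) interval, and a subtree query becomes a single self join on interval containment. A minimal hand-numbered sketch; all names are invented, and Python's sqlite3 is used only to make the SQL runnable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tree (name TEXT, lft INTEGER, rgt INTEGER)")
# Intervals assigned by a depth-first walk: a child's (lft, rgt)
# always nests inside its parent's.
conn.executemany("INSERT INTO tree VALUES (?, ?, ?)", [
    ("root", 1, 8), ("a", 2, 5), ("a1", 3, 4), ("b", 6, 7),
])

# Subtree of 'a': every node whose interval lies within a's (2, 5).
subtree = conn.execute("""
    SELECT child.name
    FROM tree parent JOIN tree child
      ON child.lft BETWEEN parent.lft AND parent.rgt
    WHERE parent.name = 'a'
    ORDER BY child.lft""").fetchall()
print([r[0] for r in subtree])  # ['a', 'a1']
```

    The trade-off is the mirror image of the adjacency list: reads over whole subtrees are one cheap join, but inserting a node means renumbering the intervals to its right.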
  291. good point[ Go to top ]

    Maintaining proper constraints can be difficult in actual SQL products because of the lack of support for Full SQL-92 features.
    ....
    As the validation rules get more complex, the graph gets deeper and generates much more data to save

    If they are not compliant... then they are not ANSI SQL. Even MySQL 5 is almost 100% compliant: stored procedures, views, correlated queries, etc.

    As far as complexity... with SQL and a self join, one can make it quite simple and fast, and even tune it. Plus it's cross-platform (for P-languages, etc.). Most SQL dbs have special tuning for trees and graphs, because they happen a lot. To bypass that tuning is silly, when it's there.

    I would say that trees and graphs are a huge plus for E/R vs O/R.
    AFAIK O/R can't do self joins.
    Also, Celko wrote 3 books full of SQL examples; my guess is not one of them could be implemented in EQL or OQL or JQL.
    The thing is, SQL is a Set language... that's what it's built for.
    I'll go further. I not only think O/R people don't know E/R, set theory, etc.... I sincerely believe that the O/R crowd does not even know OO.
    (They think that by using the O. word in a sentence, that makes them more OO. Fact is that an E/R DAO or an O/R DAO... either one of them could be more or less OO; it's how you use it.) Or do they know other langs, or large or complex applications that may have multiple platforms against them?

    O/R can only handle limited cases. And it's not bad to be a newbie! I was once a newbie and did a lot of silly things that I believed in at the time. Until I got production experience w/ VLDB. Some things are opposite from small db to large db.
    (I also used to be proud that I could write complex code. I now take severe steps to write the simplest code.)
    ...and they did sucker me too, but now I want to see a same-scale production project ahead of me.

    Good people are newbies too!

    I implemented trees and graphs 2 times with iBatis and did not think it unusual. Some Java members on the team freaked for a few days when they started using self joins and walking the trees, but... that is what tech leads are for ;-), and that's why we make the big $. You always have a bell curve of a team: 2 humble bad-ass dudes you keep in reserve, 2 opinionated newbies (you let them code BEFORE you get the requirements; they don't know what that is) and everyone else in the middle. Then there are the DBAs. (But let's not talk about that.)

    To have infinite depth, you do one table, w/ a self join.
    To optimize, you may store the path, denormalizing it (does anyone remember this anymore?). Discrete data structures!
    Ex:
    id, parent_id, node_name, node_type, path_d, etc.

    To get a list of nodes:
    select * from bla where parent_id = '123';
    If you click on a node, it then becomes the parent.
    Going up is just as easy.
    You can add some multi-column indexes if you see fit.

    Insert a node? Do I really need to show that? It's too easy.
    You just need to tell it what parent it is, and there it is.
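    The single-table scheme sketched above is the classic adjacency list. A runnable sketch of both directions (listing children, and the self join going up); the table name and ids follow the example in the post, everything else is invented, and sqlite3 is used only to make the SQL executable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE bla (
    id TEXT PRIMARY KEY, parent_id TEXT, node_name TEXT)""")
conn.executemany("INSERT INTO bla VALUES (?, ?, ?)", [
    ("123", None,  "root"),
    ("124", "123", "child one"),
    ("125", "123", "child two"),
])

# Children of a node: the query from the post.
kids = conn.execute(
    "SELECT node_name FROM bla WHERE parent_id = '123' ORDER BY id").fetchall()

# Going up is a self join: each child row joined to its parent row.
up = conn.execute("""
    SELECT p.node_name
    FROM bla c JOIN bla p ON c.parent_id = p.id
    WHERE c.id = '124'""").fetchone()

print([r[0] for r in kids], up[0])  # ['child one', 'child two'] root
```

    Walking a subtree of arbitrary depth this way costs one query per level (or a recursive CTE where the database supports it), which is where the denormalized path column mentioned above earns its keep.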
    No way would relational slow down or get complex ahead of O/R. I think this is a toy for E/R to handle at large transaction loads.

    Your example looks like a report that subtotals. Some reports require multiple queries; I have run into that. Or sometimes you pass the data off to denormalize and pre-compute.
    Ex: As of last night, your bank balance is... (notice they don't tell you the balance now, but wait for the batch job)

    I am willing to learn. Can someone link me an O/R implementation that handles graphs, trees or a similar use case?

    Whoever told people that they can get away with knowing only Java... lied.
    SQL, scripting langs, etc. Sooner or later you will have to write a non-Java application. There is no Easter Bunny either. Don't tell that to the O/R people; I really want EJB 3 to stink it up again.

    .V
  292. good point[ Go to top ]

    First thing, thanks for responding Vic. I did consider flattening the database model, but after a lot of debates with the domain expert and the DBA, we were only able to de-normalize and flatten the tables a little bit.
    I would say that trees and graphs are a huge plus for E/R vs O/R. AFAIK O/R can't do self joins. Also, Celko wrote 3 books full of SQL examples, my guess is not one of them could be implemented in EQL or OQL or JQL. The thing is SQL is a Set langage... that's what it's built for. I'll go further. I not only think O/R people don't know E/R, set theory, etc.... I sincerely belive that O/R crowd does not even know OO.

    Agree 100% on set theory. Not that I'm an expert or anything. Before I used Oracle to query sets of sets, I was happily naive about databases. Luckily a co-worker was an Oracle DBA with 10+ years of experience on large military projects, so he helped me understand.
    (they think by using the O. word in a sentence, that makes them more OO. Fact is that using E/R DAO or O/R DAO... either one of them could be more or less OO; it's how you use it.). Or do they know other langs, or large or complex applications that may have mutiple platforms aginst it. O/R can only handle limited cases. And it's not bad to be a newbie! I was once a newbie and did a lot of sillythings, that I belived in at the time. Untill I got production experience w/ VLDB. Some things are oposite from small db to large db.(I also used to be proud that I could write complex code. I now take severe steps to write simplest code.). and they did sucker me too, but now I want to see same scale prouduciton project ahead of me.Good people are newbies too!

    I would agree that O/R can only handle limited cases. Your example of a self join is one. This is my own experience and not necessarily true for other cases, but in my case, the reports are aggregated at every level to generate a variety of reports based on any number of dimensions: by trader, time, order status, violations, account, rule, rule group, security type, security rating, rating class, GICS code and issuer. Usually the reports involve 2 or more dimensions.
    I implemented trees and graphs 2 times with iBatis and did not think it unusual. Some Java members on the team freqed for a few days when they started using self Joins, and walking the trees, but ... that is what tech leads are for ;-), and that's why we make the big $. You allways have a bell curve of a team. 2 humble bad ass dudes you keep in reserve, 2 opinionated newbies (you let them code BEFORE you get the requirments, they don't know what that is) and everyone else in the middle. Then there are the DBAs. (but lets not talk about that)To have infiniate depth, you do one table, w/ self join. To optimize, you may store the path denormalizing it (does anyone remeber this anymore). Descreate data structures!Ex:id, parent_id, node_name, node_type, path_d, etc. To get a list of nodes:select * from bla where parent_id ='123';If you click on a node, that then becomes parent. Going up is just as easy. You can add some muti colum indexes if you see fit. Insert a node? Do I realy need to show that, it's too easy. You just need to tell it what parent it is, and there it is.

    In this specific case, it's not just inserting a node. Inserting a node is trivial, and I have used those techniques. Part of the challenge here is that the compliance system generates a report, which must be saved due to regulations. The generated report participates in the report the user wants to see. So the first step in persistence is just a trigger for a reporting process. Even if I had a big E15K server, I would still prefer to handle that outside the database. The heuristic of trading, from the data mining I performed for this particular project, is that a small percentage of the accounts contribute a significant % of the trades, which means that most of the accounts trade infrequently. Rather than bog down the database, which has a ton of other work to do, it makes sense to treat the most active accounts as the working set and load them in memory.
    No way relational would slow down or get complex ahead of O/R. I think this is a toy for E/R to handle at large trnsaction loads. Your example looks like a report that subtotals. Some reports require mutiple queries, I have run into that. Or sometimes you pass the data to denormlize and pre-compute. Ex: As of last night, you bank blance is... (notice they don't tell you the balance now, but wait for the batch job)I am willing to learn. Can somone link me a O/R implementation that handle grpahs, trees or similar use case?V

    I did try using Hibernate on another part of the project dealing with reconciliation, and it worked well. More accurately, a member of my team did and I assisted. I didn't try it on the report object, since my plan from day 1 was to batch process those. The reports are compliance reports. For example, in 1940Act/2A7 there's a concept of safe harbor, where a trade may be allowed to execute but must be resolved in 3 days. Part of the reporting process has to generate a detailed summary. The challenge here is that the security designated as the "safe harbor" may change numerous times in those three days. If it were only 1 safe harbor for all three days, it would be much simpler, and trivial for an experienced SQL developer. The fact that the safe harbor changes, and the security may involve one or more positions in an account, makes for a rather interesting time. Using summary tables to aggregate all that data is usually the first step. If only lawyers wrote laws that are easier to implement, it would make it easier :)
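    The summary-table step mentioned above usually amounts to an INSERT ... SELECT with a GROUP BY that pre-aggregates the detail rows. A toy sketch; every table and column name here is invented, and sqlite3 stands in for the real database:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE violation (account TEXT, rule TEXT, amount REAL);
    CREATE TABLE violation_summary (account TEXT, total REAL, cnt INTEGER);
""")
conn.executemany("INSERT INTO violation VALUES (?, ?, ?)", [
    ("acct1", "2a7", 100.0), ("acct1", "2a7", 50.0), ("acct2", "2a7", 25.0),
])

# Pre-aggregate per account, so report queries hit the small summary
# table instead of re-scanning the detail rows.
conn.execute("""
    INSERT INTO violation_summary
    SELECT account, SUM(amount), COUNT(*) FROM violation GROUP BY account""")

rows = conn.execute(
    "SELECT * FROM violation_summary ORDER BY account").fetchall()
print(rows)  # [('acct1', 150.0, 2), ('acct2', 25.0, 1)]
```

    Each extra reporting dimension (trader, rule group, rating class, ...) just becomes another column in the GROUP BY, at the cost of a larger summary table.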

    peter
  293. It is strange that such a small problem has not been solved satisfactorily after all these years. Even MS has problems, remember (ObjectSpaces is delayed again and will not see the light until after the release of Longhorn, sometime in the distant future). That MS too has run into trouble is a pretty good sign that there is something inherently wrong with O/R altogether, IMO!

    Well, there's only two logical choices:

    1) There's something wrong with "O", or ..
    2) There's something wrong with "R"

    Maybe _you_ can't see the little writing on the wall ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  294. Well, there's only two logical choices:1) There's something wrong with "O", or ..2) There's something wrong with "R"

    Actually, there are four choices:
    3) there's nothing wrong with either one of them (not very likely)
    4) there's something wrong with both of them...
  295. It is strange that a so little problem has not solved satisfactory after all this years. Even MS has problems remember (ObjectSpaces is delayed again and will not see the light before after the release of Longhorn sometimes in the distant future).That MS too has run into trouble is a pretty good sign that it is something inherently wrong with O/R altogether IMO!RegardsRolf Tollerud

    More like they have so much to do and they have to deliver something sometime, don't they? :) Now everything is going to be pushed back even more, because they've got to fix IE to stop the migration to Firefox (http://www.computerworld.com/softwaretopics/os/windows/story/0,10801,99793,00.html).
  296. That MS too has run into trouble is a pretty good sign that it is something inherently wrong with O/R altogether IMO!RegardsRolf Tollerud
    By your logic, I think there's something inherently wrong with operating system security and reliability altogether... :)
  297. just never choose the obvious[ Go to top ]

    60% JDBC 10% O/R Mapping Tools

    The whole EJB3 POJO saga is based on the popularity of Hibernate. Should Hibernate fade, the saga would fade too.

    Ironically, the buzzword 'POJO' (plain old Java object) was created to be the opposite of EJB.
  298. Interesting, but to whom?[ Go to top ]

    I'm sure this would all be very interesting and entertaining if I were one of the people selling JDO, but as a user/developer, I don't really care which way things went or whether the process was fair, as long as the end result (in this case, EJB3) gives me sufficient value.

    Politics is an important and necessary part of teamwork in heterogeneous environments; a lack of skill in politics is a valid reason to reject a technical spec from the group that lacks it. I have no idea whatsoever if this was the reason in this case, but the cartoon seems to promote the idea.
  299. Remember ...[ Go to top ]

    Persistence is futile, you will be garbage collected.
  300. Remember ...[ Go to top ]

    Persistence is futile , you will be garbage collected.

    Very funny!
  301. Everybody seems to be in search of a better j2ee sense of humour. Is there any JSR HAHAHA for that?
  302. One EJB project[ Go to top ]

    Done by the PetStore book:
    300 EJB screens, 3 years, $300 Million and counting.
    Vendors are listed here, as the project started:
    http://news.com.com/1200-1070-975486.html
    It's 1 year late and just got scrapped this Monday; all the consultants got fired.

    Tom
  303. For anyone who wants JDO 2 to be passed by the JCP as a standard and hasn't yet sent an email to the JCP Executive Committee, their email addresses are here:

    http://www.jcp.org/en/participation/committee#SEEE

    As far as I know, they're voting this week...
  304. Persistence is a problem[ Go to top ]

    ORM tools only try to make the best of a bad situation. SQL is unfortunately better at dealing with the database as a disconnected, proprietary data repository.

    It's ironic to see this discussion pit ORM vs SQL when the database is the real problem. Technology is unfortunately heading backwards from mainframe databases, which had capabilities that companies nowadays are too cheap to provide.

    Solid distribution, transaction management, audit trails and distributed recovery have become relics that are too expensive for a common, cheap client/server database. We now see simple commit and rollback as nearly the entirety of database functionality. Just talk to someone who used to build solid mainframe databases and is now out of a job. They will tell you first hand how lacking Oracle, Sybase and client/server DB2 are.

    So here we are at the new paradigm shift, comparing ORM to SQL as if that's even a comparison: a tightly coupled database query language vs an object integration technology built on top of those same proprietary database engines, one which merely hopes to translate 80 percent of database calls well through proprietary, inconsistent SQL.

    Of course it will be somewhat inefficient! Of course the mapping sucks! Of course the relationships are brittle and the caching is problematic! Of course tightly coupled, disconnected SQL is more powerful! The users who think of the situation as an object problem will be misguided. The DB guys who blame Java will be wrong.

    Tables are, well, a joke. SQL (the table language) is the only way to get data out of them. Our whole notion of data organization must change before the situation gets better. We need better concepts, better strategies to provide persistence for applications.

    Objects have their problems too. Encapsulation is something of a farce these days when you consider bytecode manipulation; universal getters and setters effectively making everything public; the lack of inheritance as a serious strategy for reuse, due to the need for endless object representations; pervasive application layering limiting classes to a certain domain of behavior; and the need to constantly copy those data values from layer to layer (loose coupling? NOT!). After all the component technologies have spoken, the POJO is all the rage! How we can innovatively molest a POJO is the question of the day.

    It seems we have a coupling/granularity problem. Object guys would like to couple persistence to objects. DB guys would like to couple persistence to SQL or a db-specific domain.

    Hmm. Both wrong? Probably. We have not found the ideal object persistence mechanism yet, and maybe objects aren't ideal to start with.

    Or maybe we just haven't found the best data structures and the best query language.

    Or maybe we just can't accept pure object persistence and integrate it with legacy data stores. Not sure.

    But SQL vs ORM is not the discussion; it's just a distraction. Obviously SQL is not the future, and ORM is a band-aid. Maybe we should all get a clue.
  305. O/R it seems is best at non-sql[ Go to top ]

    After reading a bit on JDO... it seems that OrMs such as JDO are best at just serializing a class; they do not relate to SQL joins well, etc. (note the small r)
    It seems that the ultimate OrM is something like Prevayler, where you create an object and then serialize and deserialize or find it. (At least Prevayler is a KISS OrM; the API for JDO is silly. Like, detach an object? Huh?)

    I can't imagine a VLDB using OrM, and I can see why P-languages are so successful in production relative to Java.
    You want to be able to tune your SQL and design, even if you have to use non-ANSI SQL. Some Java developers are weak on the R part of OrM, so if they get to larger systems, they will be at a steep cliff drop-off. E/R I find superior for designing a business application. I am sorry that I can't help you see that. Trying to be a better EJB is the wrong direction. Venn diagrams, set theory and relational algebra are more accurate.

    It's good to be a SQL design tuner. I wish you non-SQL people lots of luck; you'll need it. I'll stick with SQL-centric DAO tools, such as iBatis, and I am confident SQL will win out.


    .V
  306. O/R it seems is best at non-sql[ Go to top ]

    After reading a bit on JDO... it seems that OrM's such as JDO, are best at just serlizing a class, they do not realte to sql joins well, etc. (note small r)It seems that the utimate OrM is something like Prevayer, where you create an objhect and the seralize and deserialize or find it. (At least Pervayer is a Kiss OrM, the API for JDO is silly. Like detach an object? Huh?)

    That isn't ORM. The "R" in ORM still stands for "relational" and the "M" still stands for "mapping", meaning that the state of the Java Object is mapped to the relational database tables and columns.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  307. Vic,

    You (and others) are having serious problems understanding what ORM is all about; maybe that's why you are polarizing so much. It shouldn't be like this, as ORM concepts are not such a complex thing to grasp and involve no rocket science at all.

    You know, despite the fact that someone can be the best at what they do, they can still make mistakes, or fail to see the value in lines of work other than their own, as this thread has shown clearly. Nothing that a bit of common sense can't fix. :)

    Regards,
    Henrique Steckelberg
  308. O/R it seems is best at non-sql[ Go to top ]

    Vic,You (and others) are having serious problems understanding what ORM is all about
    Yup.
    iBatis (SQL maps + DAO) is an ORM tool. Users of it just get to do more work than some other ORM tools.
  309. O/R it seems is best at non-sql[ Go to top ]

    Vic,You (and others) are having serious problems understanding what ORM is all about
    Yup.iBatis (SQL maps + DAO) is an ORM tool. Users of it just get to do more work than some other ORM tools.
    Yes, iBatis is an ORM, but iBatis users do not need to think in two query languages. Some ORM users I know (including myself) think in SQL but are trying to convert it to some ORM-specific OQL, and need to read the log to validate this undefined stuff again. A workaround is to use views for everything (which is probably not a bad idea either).
  310. POJOs Considered Sexy[ Go to top ]

    Linda DeMichiel and Craig Russell, though, "Specification Leads, JSR-220 and JSR-243", simply decided to hijack the definition of POJO from Martin Fowler and, thanks to their "A Letter to the Java Technology Community", the new JDO+EJB overhead is now known as "POJO persistence".

    http://www.people4objects.org/Klaus/2005/02/pojos-considered-sexy.html

    See you, Klaus.