Discussions

News: Article: AJAX JSF Frameworks Review

  1. Article: AJAX JSF Frameworks Review (92 messages)

    This review gives an overview of current commercial JSF frameworks that use Ajax to update web pages. The frameworks ICEfaces, NetAdvantage and QuipuKit are compared by analyzing specific components. The authors also describe the positives and negatives they experienced while installing and using each framework.
    In the end, none of these three frameworks could fully satisfy our wishes for the treetable, list and autocomplete components. QuipuKit and ICEfaces are nearly completely functional for the treetable; they can be customized easily and quite quickly with all the features we need except for one. The list component is average and every framework can be used for it, depending on your preference. Autocomplete is only supported by ICEfaces, but its implementation is excellent: it is easy to code and it fetches its data via Ajax to keep network traffic low. These three JSF frameworks are just the tip of the iceberg. There are many more Ajax JSF products, some of them free. We wrote an overview of the most popular representatives in this genre and the features they bring along.
    What JSF products do you see using Ajax successfully? Which would you recommend?
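    As a rough sketch of what the autocomplete finding above looks like in practice (the bean name and its properties are hypothetical; ice:selectInputText is the ICEfaces component in question), a page fragment might look like:

```xhtml
<!-- Hypothetical ICEfaces autocomplete field: as the user types, the
     valueChangeListener is invoked over Ajax and only the match list
     is re-rendered, keeping network traffic low. -->
<ice:selectInputText rows="6" width="300"
                     value="#{autoCompleteBean.text}"
                     valueChangeListener="#{autoCompleteBean.updateList}">
  <f:selectItems value="#{autoCompleteBean.matches}"/>
</ice:selectInputText>
```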

    Threaded Messages (92)

  2. Using an open-source product

    I use ajax4jsf, an open-source library that allows you to 'ajaxify' standard JSF components without writing any JavaScript.
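    A minimal sketch of what that looks like (the bean and component ids here are hypothetical; a4j:support is the ajax4jsf tag): attaching a4j:support to a standard h:inputText makes it fire an Ajax request and re-render another component, with no hand-written JavaScript:

```xhtml
<h:inputText value="#{userBean.name}">
  <!-- Fires an Ajax request on each keystroke and re-renders only
       the component with id "greeting" -->
  <a4j:support event="onkeyup" reRender="greeting"/>
</h:inputText>
<h:outputText id="greeting" value="Hello, #{userBean.name}"/>
```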
  3. Should focus on JSF AJAX framework

    I think for simple webapps most commercial JSF Ajax components will do the job, but for large enterprise rich-UI webapps, none of the out-of-the-box commercial components will meet the requirements. This is why the review/research would be better if it focused more on the framework: how easy it is to extend, how easy the API is to use, how easy it is to change the skin, and so on. If the commercial components don't offer that flexibility, then they are not worth a dime. Most likely you will end up hiring consultants from the vendor to customize those components to your business requirements, and that will end up costing a lot more than the license anyway. bill h....
  4. Re: Should focus on JSF AJAX framework

    I think for simple webapps most commercial JSF Ajax components will do the job, but for large enterprise rich-UI webapps, none of the out-of-the-box commercial components will meet the requirements.
    I completely agree. No JSF component suite/framework to date is complete enough for serious enterprise apps. Either the component suite is not rich enough, or the implementation is unacceptably slow or lacks some crucial features, like support for conversations, easy layout management and customisation, partial page updates, etc... Personally, I really doubt we'll ever see such a component suite! If you look at ASP.NET web component suites, you'll see a similar situation (add to that more browser incompatibilities than JSF components, as they are spawned from an MS-centric world). The reason is that it is really very hard to write such a suite! You have to write and maintain a lot of tricky and unreadable JavaScript code, handle AJAX round trips efficiently and avoid concurrency problems, handle session and conversational state efficiently, and fight the browser differences and JavaScript/DHTML performance problems that arise around every corner... Just too many problems to solve. We have to admit: HTML (with all the dynamic hacks) is not for rich-UI enterprise apps. That's why we should concentrate on using real windowing toolkits for rich-UI enterprise Internet apps and deploy them with Java Web Start or ClickOnce, using an appropriate remoting technology (web services, CORBA, RMI, ICE, depending on the context).
  5. I really doubt we'll ever see such a component suite!
    Me too.
    If you look at ASP.NET web component suites, you'll see a similar situation (add to that more browser incompatibilities than JSF components, as they are spawned from an MS-centric world). The reason is that it is really very hard to write such a suite! You have to write and maintain a lot of tricky and unreadable JavaScript code, handle AJAX round trips efficiently and avoid concurrency problems, handle session and conversational state efficiently, and fight the browser differences and JavaScript/DHTML performance problems that arise around every corner... Just too many problems to solve. We have to admit: HTML (with all the dynamic hacks) is not for rich-UI enterprise apps.

    That's why we should concentrate on using real windowing toolkits for rich-UI enterprise Internet apps and deploy them with Java Web Start or ClickOnce, using an appropriate remoting technology (web services, CORBA, RMI, ICE, depending on the context).
    Unfortunately, they have their own problems. For the programming model, I'm sure everyone would rather write JWS apps. But HTML component libraries are doable, and the fact that JSF has a bunch, whereas, say, Struts doesn't, means we're seeing at least some progress. Just don't buy the idea of component libraries that solve all your (UI) problems. Rather, be sure you understand JSF (or .NET) and be ready to develop your own widgets when the ones that are available don't suffice.
  6. Just don't buy the idea of component libraries that solve all your (UI) problems.
    Wow, that seems very wrong; I do expect out-of-the-box usable libraries. At the end of the last millennium, in 1996 or so, I was using excellent component libraries in Delphi and VB, and they were extremely easy to use, convenient, easy to extend, etc. Now, are you saying that we should not expect such component libraries in the year 2006?
  7. Just don't buy the idea of component libraries that solve all your (UI) problems.


    Wow, that seems very wrong; I do expect out-of-the-box usable libraries. At the end of the last millennium, in 1996 or so, I was using excellent component libraries in Delphi and VB, and they were extremely easy to use, convenient, easy to extend, etc. Now, are you saying that we should not expect such component libraries in the year 2006?
    Well, that is kind of what I'm saying, yes. We're not building Delphi applications anymore. User interfaces have become more diverse, and you can actually see that really successful applications have distinctive user interfaces. Btw, those 'earlier times were better' arguments are getting old. Ask a bloody RPG/400 programmer what he thinks of Delphi and he'll say it's crap, slows the user down and serves no business purpose.
  8. Btw, those 'earlier times were better' arguments are getting old. Ask a bloody RPG/400 programmer what he thinks of Delphi and he'll say it's crap, slows the user down and serves no business purpose.
    Bloody RPG/400... cripes, what a waste of time. Back when I was programming we could only use 1s and 0s, at least until Microsoft patented them: Microsoft Patents Ones, Zeroes.
    Peace, Cameron Purdy
    Tangosol Coherence: The Java Data Grid
  9. I completely agree. No JSF component suite/framework to date is complete enough for serious enterprise apps. Either the component suite is not rich enough, or the implementation is unacceptably slow or lacks some crucial features, like support for conversations, easy layout management and customisation, partial page updates, etc... Personally, I really doubt we'll ever see such a component suite!
    Nor should there be one! If you want fat clients on the desktop you have plenty of alternatives. Fortunately all attempts to bloat the Internet with "rich" components have failed so far.
  10. I completely agree. No JSF component suite/framework to date is complete enough for serious enterprise apps.
    Well, I have to disagree, as I am currently using JSF for just such applications.
    either the implementation is unacceptably slow or lacks some crucial features, like support for conversations, easy layout management and customisation, partial page updates, etc...
    I have not come across performance issues, and existing web frameworks have been used for serious enterprise apps without support for such 'crucial' features for a very long time. It may not have been easy, but calling such features 'crucial' seems misleading to me.
  11. Editable tree table

    If you're interested, have a look at the Editable Tree Table Demo. But it's not JSF, it's Wicket :) The server seems to respond very slowly, so bear with it. The tree table is backed by a Swing TreeModel. -Matej
  12. I would recommend not using JSF at all. If you want an AJAX solution, use GWT. GWT allows you to test the UI; JSF doesn't. GWT allows you to write a server-stateless UI web application; JSF doesn't (writing a stateless UI application simplifies clustering). GWT is easy to use (IMHO JSF + AJAX is too complex). GWT is free; the listed JSF AJAX frameworks are not. So what are the reasons to use JSF + AJAX when you have GWT? --Mark
  13. Re: Article: AJAX JSF Frameworks Review

    So what are the reasons to use JSF + AJAX when you have GWT?

    --Mark
    Multiple implementations competing on quality. The ability to implement interfaces that can fall back to alternatives when JavaScript is not available. The ability to visually design interfaces in IDEs. Things aren't as simple as you imply.
  14. The Common Theory Of Everything

    So what are the reasons to use JSF + AJAX when you have GWT?

    --Mark


    Multiple implementations competing on quality. The ability to implement interfaces that can fall back to alternatives when JavaScript is not available. The ability to visually design interfaces in IDEs. Things aren't as simple as you imply.
    IMHO it's a good idea to have different implementations of servers, but not different implementations of a framework. It's very funny to see 10 implementations of a class that prints

    Hello World

    The ability to implement interfaces that can fall back to alternatives when JavaScript is not available. And even when HTML is not available... But what comes next? Interfaces that work when the computer is not available? It seems one should read "The Common Theory Of Everything" before starting work on a JSF project. --Mark
  15. Re: The Common Theory Of Everything

    IMHO it's a good idea to have different implementations of servers, but not different implementations of a framework.
    It's very funny to see 10 implementations of a class that prints

    Hello World



    The ability to implement interfaces that can fall back to alternatives when JavaScript is not available.
    And even when HTML is not available...
    But what comes next? Interfaces that work when the computer is not available? It seems one should read "The Common Theory Of Everything" before starting work on a JSF project.

    --Mark
    I disagree with every point you make here! Competing implementations of standards have led to very high quality products. I want the best printing of "Hello World"! The ability to implement interfaces that still work when JavaScript is not available certainly exists: most JSF standard components can work like this. (Check out the MyFaces ALLOW_JAVASCRIPT option, which can be set to false). The point of JSF is that you don't have to understand any "theory of everything" when you write an interface. You can let the JSF implementation deal with differences in client-side interfaces, even when they aren't HTML - Oracle has shown great examples of this working with ADF. (This may not be an ideal way to work, but it is certainly possible). I find it puzzling that developers are so keen on portability in general with Java, but so many are resistant to the idea of portable interfaces, such as independence from HTML, which JSF can potentially give you.
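    For reference, the MyFaces option mentioned above is a web.xml context parameter; a sketch of the relevant fragment (the rest of web.xml omitted):

```xml
<!-- Ask MyFaces to render components without generated JavaScript,
     so pages degrade to plain HTML forms and links. -->
<context-param>
  <param-name>org.apache.myfaces.ALLOW_JAVASCRIPT</param-name>
  <param-value>false</param-value>
</context-param>
```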
  16. Re: The Common Theory Of Everything

    (Check out the MyFaces ALLOW_JAVASCRIPT option, which can be set to false).

    The point of JSF is that you don't have to understand any "theory of everything" when you write an interface. You can let the JSF implementation deal with differences in client-side interfaces, even when they aren't HTML - Oracle has shown great examples of this working with ADF. (This may not be an ideal way to work, but it is certainly possible).

    I find it puzzling that developers are so keen to have portability in general with Java, but so many are resistant to the idea of portable interfaces, such as independence from HTML, which JSF can potentially give you.
    Each time you have to add Flash, an applet, JavaScript or even plain HTML to your web application, the portability of the interface disappears. Do you know many sites that were written without hard-coded/static HTML? --Mark.
  17. Re: The Common Theory Of Everything

    Each time you have to add Flash, an applet, JavaScript or even plain HTML to your web application, the portability of the interface disappears.
    Yes, but this is not necessarily the way things have to be.
    Do you know many sites that were written without using hard coded/static HTML?

    --Mark.
    I personally know of a couple, and they are for internal use. There aren't many, to be sure, but the point is that JSF has the potential to allow such sites to be developed. I am not saying that this is a good way to develop most sites; just that it is possible. I really like the potential of JSF to allow rendering to a range of different client presentation technologies - even though I have never once used it!
  18. Re: The Common Theory Of Everything[ Go to top ]

    Each time you have to add Flash, an applet, JavaScript or even plain HTML to your web application, the portability of the interface disappears.


    Yes, but this is not necessarily the way things have to be.

    Do you know many sites that were written without using hard coded/static HTML?

    --Mark.


    I personally know of a couple, and they are for internal use. There aren't many, to be sure, but the point is that JSF has the potential to allow such sites to be developed. I am not saying that this is a good way to develop most sites; just that it is possible. I really like the potential of JSF to allow rendering to a range of different client presentation technologies - even though I have never once used it!
    That's what I'm talking about. JSF is too theoretical; it's over-engineering to me, while GWT meets the needs of real web applications. --Mark
  19. Re: The Common Theory Of Everything

    Each time you have to add Flash, an applet, JavaScript or even plain HTML to your web application, the portability of the interface disappears.


    Yes, but this is not necessarily the way things have to be.

    Do you know many sites that were written without using hard coded/static HTML?

    --Mark.


    I personally know of a couple, and they are for internal use. There aren't many, to be sure, but the point is that JSF has the potential to allow such sites to be developed. I am not saying that this is a good way to develop most sites; just that it is possible. I really like the potential of JSF to allow rendering to a range of different client presentation technologies - even though I have never once used it!


    That's what I'm talking about. JSF is too theoretical; it's over-engineering to me, while GWT meets the needs of real web applications.

    --Mark
    It is hardly theoretical - these features have been demonstrated in practical applications, and you will find JSF is far more widely used right now for real web applications than GWT. GWT certainly has its merits, but JSF has a wider range of uses. A highly AJAX-intensive approach is not suitable for all websites; perhaps not even for the majority right now.
  20. It is hardly theoretical - these features have been demonstrated in practical applications, and you will find JSF is far more widely used right now for real web applications than GWT. GWT certainly has its merits, but JSF has a wider range of uses.
    Here is why I consider JSF theoretical and over-engineered. JSF claims that it is possible to substitute renderers for different clients. I claim it's not possible with current standards. E.g., it's impossible to simply replace HTML with WAP, because WAP devices are too small; you have to REDESIGN the whole site to make it work on WAP devices. The same goes for disabling JavaScript: you cannot simply disable JavaScript in most cases. BTW, does anybody know ONE well-known site that runs on JSF? --Mark
  21. It is hardly theoretical - these features have been demonstrated in practical applications, and you will find JSF is far more widely used right now for real web applications than GWT. GWT certainly has its merits, but JSF has a wider range of uses.


    Here is why I consider JSF theoretical and over-engineered.
    JSF claims that it is possible to substitute renderers for different clients. I claim it's not possible with current standards. E.g., it's impossible to simply replace HTML with WAP, because WAP devices are too small; you have to REDESIGN the whole site to make it work on WAP devices.
    The same goes for disabling JavaScript: you cannot simply disable JavaScript in most cases.

    BTW, does anybody know ONE well-known site that runs on JSF?

    --Mark
    You have set up a straw-man argument. No one is suggesting that an entire site can be instantly flipped from one technology to another. What can happen is that significant sections of sites can be re-used for different technologies, re-using considerable amounts of the view layer and most if not all of the rest of the code. It also means that you don't have to learn a new technology for coding forms, navigation, etc. This alone makes JSF a useful specification. And, if you design a page correctly, you definitely can simply disable JavaScript and still have a respectable website operating. I am not sure if this is a wise way to code a site these days, but the MyFaces implementation of JSF allows exactly that. We obviously aren't talking about most cases, but a good technology, in my view, allows uncommon cases that would otherwise require a lot of work to be easily implemented. For example, Oracle's ADF (now on Apache) includes a renderkit for telnet text screens. They did not do this just out of curiosity - it was a real use case, and it meant that the same approach to forms, navigation, buttons and so on could be used.
    Instead of simply stating what you believe can't be done, why not look into examples of how it actually works? Here is a good example: http://www.oracle.com/technology/products/iaswe/adfmb.html - using ADF for designing mobile and telnet interfaces. And this presentation: http://www.nyoug.org/Presentations/2005/20050929jsf.pdf includes on page 37 the same JSF page being rendered on a web browser, a PDA, telnet and (I believe) some sort of instant messaging client, directly contradicting your claim that such rendering is not possible. Having a single technology that can do all of this is not over-engineering: it is a considerable time and code saver.
    As for well-known sites that use JSF, I don't know any obvious ones, however a list of public sites that definitely use JSF is: http://wiki.java.net/bin/view/Projects/RealWorldJSFLinks
  22. Do any of the listed sites have a WAP version based on JSF rendering? PS: Of course, by well-known sites I mean WELL KNOWN sites. Well-known sites are ADOBE.COM, AMAZON.COM, EBAY.COM, FUJITSU.COM, GOOGLE.COM, HP.COM, IBM.COM, JBOSS.COM, NOVELL.COM, ORACLE.COM, REDHAT.COM, SONY.COM, SUN.COM
  23. Do any of the listed sites have a WAP version based on JSF rendering?

    PS: Of course, by well-known sites I mean WELL KNOWN sites.
    well known sites are
    ADOBE.COM
    AMAZON.COM
    EBAY.COM
    FUJITSU.COM
    GOOGLE.COM
    HP.COM
    IBM.COM
    JBOSS.COM
    NOVELL.COM
    ORACLE.COM
    REDHAT.COM
    SONY.COM
    SUN.COM
    But here is the issue: this is only 13 sites, and most people would not agree that Novell, Red Hat, and JBoss would make a "well known" cut. If there are, say, hundreds of in-house corporate apps using JSF (and I don't know if there are or aren't), that would mean as much to me as whether or not Sony or Adobe uses the technology. No one, I think, would deny that Struts was huge and an unqualified success, but I'll bet none of those sites used Struts either.
  24. The type of high-throughput commercial sites you speak of tend not to use Java for the web interface at all. Companies like Google and Amazon tend to follow the KISS principle. Their sites are extremely simple and only use fancy things like Ajax where absolutely necessary (e.g. Google Earth). The rest of the time they stick to plain old HTTP/HTML and CGI with a LAMP technology stack. In the enterprise space, however, things tend to be very different. We have all types of .NET/Java-based web "frameworks" and "toolkits". IMO these have more to do with vendors trying to make a buck than with the delivery of appropriate "web" technology. Personally, I believe that a web interface should be as simple as possible. We all know that when it comes to a rich user experience, JavaScript is a hack, AJAX is an even worse hack IMO, and HTTP/CGI/HTML is just not designed for the job. No amount of "framework" layering will ever change these basic facts. We need to get back to calling a spade a spade (or a web page a web page) IMO, and if you want a rich user experience then simply look elsewhere. Paul.
  25. The type of high-throughput commercial sites you speak of tend not to use Java for the web interface at all. Companies like Google and Amazon tend to follow the KISS principle. Their sites are extremely simple and only use fancy things like Ajax where absolutely necessary (e.g. Google Earth). The rest of the time they stick to plain old HTTP/HTML and CGI with a LAMP technology stack.
    Some of the highest-throughput sites make substantial use of Java at the web interface. eBay - by far the most successful - is a good example, and their infrastructure is based primarily on a highly tuned J2EE/EJB/ORM setup. You are also wrong about Google. They have a broad range of services, many of which use AJAX extensively, not just 'where it is absolutely necessary'. Well-known examples are Google Mail, Google Calendar, and Google Docs and Spreadsheets. How does Google handle these high-volume sites? Let me quote from http://www.devx.com/webdev/Article/31868/1954?pf=true : "A small team at Google created a Java-to-JavaScript compiler that takes in Java code and spits out JavaScript, enabling the Google developers to design, develop, debug, and test in Java and leave the compiler to deal with the vicissitudes of JavaScript. Heeding Google's 'Do No Evil' motto, the team decided to share its technology freely with the developer community under the name Google Web Toolkit (GWT)." Google develops these newer services - these rich, high-volume sites - in Java, not LAMP.
    IMO these have more to do with vendors trying to make a buck than with the delivery of appropriate "web" technology.
    As a matter of fact, not opinion, J2EE is a well-established way of delivering appropriate web technology - something that can handle very high-demand transactional systems. The highest-performance sites in this area, such as stock markets, use J2EE.
    No amount of "framework" layering will ever change these basic facts. We need to get back to calling a spade a spade (or a web page a web page) IMO, and if you want a rich user experience then simply look elsewhere.
    All well and good, but in the real world users want rich experiences now, and we have to find ways to deliver that experience with existing mechanisms, not some ideal mechanism that isn't there yet. For now, AJAX seems to help, and JSF is a good way to avoid having to deal with much of it manually. One of the advantages of JSF is that it will allow a simple plain-object-based interface to work with future, better technologies.
  26. Hi Steve, As always, your enthusiasm has blinkered your thinking. What you say about Google is true, but Google Mail, Google Calendar, etc. just aren't your standard commercial websites. If my business model required me to push the bounds of web 2.0 technology after successfully dominating the web 1.0 technology space, I would be looking at GWT-like technology too. You ignored my reference to Amazon, whose business goal is just to run an online shop (not to challenge Microsoft on the desktop). And if you really think that the bulk of commercial websites are written in Java then you are deluded. For most of us, we simply need to present data-centric applications over an intranet and through a browser. In that space web 1.0 and the KISS principle are more than adequate IMO. It's funny how everyone knows what the solution should be, yet no one is focusing on the problem. Who are these users crying out for AJAX? In my experience users are more concerned with getting working applications in a timely fashion than with the latest web 2.0 technology hype. Paul.
  27. Hi Steve,

    As always, your enthusiasm has blinkered your thinking.
    Nice way to debate :)
    What you say about Google is true, but Google Mail, Google Calendar, etc. just aren't your standard commercial websites.
    Sorry, but Google is a high volume commercial website. You can't simply ignore the bits of it that don't fit your thesis. However, you are right in one way - Google is not quite a standard website - they have set standards in user-friendliness and in pioneering. Their increasing use of Java+AJAX is surely a pointer to the future.
    You ignored my reference to Amazon, whose business goal is just to run an online shop (not to challenge Microsoft on the desktop). And if you really think that the bulk of commercial websites are written in Java then you are deluded.

    For most of us, we simply need to present data-centric applications over an intranet and through a browser. In that space web 1.0 and the KISS principle are more than adequate IMO.

    It's funny how everyone knows what the solution should be, yet no one is focusing on the problem. Who are these users crying out for AJAX? In my experience users are more concerned with getting working applications in a timely fashion than with the latest web 2.0 technology hype.

    Paul.
    You can choose any particular technology and find at least one site that uses it successfully, and Amazon does indeed use Python for some of their site. However, being selective like that does not help an argument; as I said, the vast majority of high-throughput commercial sites use J2EE. For them, J2EE is the KISS approach to this requirement. (I am beginning to think that 'KISS' is one of the most misunderstood and misused terms in IT - whichever approach someone favours is simply labelled 'KISS', because simplicity is relative). I shall mention, yet again, banks, stock markets, and VERY high-volume sites like eBay. They have no interest in challenging Microsoft on the desktop either; they simply want high-volume transactions. J2EE dominates there.
    Of course, if you now move the goalposts and change your criterion from "high throughput commercial sites", as in your previous post, to simply "commercial sites", you are right - many do use LAMP, because they don't have requirements for high-performance transactions, and so they don't need the resources that sites like Amazon have to make this sort of approach scale. However, by any measure of what commercial websites are written in, J2EE dominates. From dice.com today: "Java" jobs: 14,498; "J2EE" jobs: 6993; "PHP" jobs: 1204; "Ruby on Rails" jobs: 92. I realise it is nowhere near a direct extrapolation from job adverts to technology use, but a ratio of nearly 6:1 of J2EE to PHP is hard to interpret as any kind of delusion.
    And now, back on topic! As for users crying out for AJAX - well, they obviously don't, any more than they would cry out for Java or Python or Ruby. They 'cry out' for more versatile sites, which perform better. This means more client-side response and partial page refreshes - things that JavaScript/AJAX enable. I agree it is a poor substitute for better approaches, but it seems to work.
  28. Hi Steve, Your misunderstanding stems from the fact that you do not appreciate that in most instances back-end and front-end processing have different needs and hence often employ different technology. I read an article recently that clearly spells out the strategy adopted by most commercial websites, and it is not J2EE. J2EE is dominant in the enterprise space, where vendors are selling mostly to IT managers. Elsewhere people split the roles, using web technology like LAMP for the web front end, and traditional back-end technology like C++ (or perhaps Java) for back-end processing (order fulfilment, billing etc). Like I said, you are deluded if you believe that J2EE is the dominant web technology for commercial web sites. But hey, don't let the odd fact spoil the party :^) Paul.
  29. Hi Steve,

    Your misunderstanding stems from the fact that you do not appreciate that in most instances back-end and front-end processing have different needs and hence often employ different technology.

    I read an article recently that clearly spells out the strategy adopted by most commercial websites, and it is not J2EE. J2EE is dominant in the enterprise space, where vendors are selling mostly to IT managers.

    Elsewhere people split the roles, using web technology like LAMP for the web front end, and traditional back-end technology like C++ (or perhaps Java) for back-end processing (order fulfilment, billing etc).

    Like I said, you are deluded if you believe that J2EE is the dominant web technology for commercial web sites.

    But hey, don't let the odd fact spoil the party :^)

    Paul.
    You are just plain wrong about this, and as Cameron has shown, you were wrong about Google and Amazon too. Like it or not, J2EE is the dominant web stack for commercial sites, at least for now (PHP use is certainly growing). Sure, you can be selective and refine what you mean by 'commercial' until you find a set of sites that backs your point of view. Sure, there is still a lot of C++ and other software at the back end. However, Java is often used as the front end to all that. Further evidence is given by the TIOBE software index (http://www.tiobe.com/tpci.htm), which measures the number of developers and resources available for each language - a quantitative measure of popularity. Java is the top-rated language, with LAMP languages well behind. The thing is, Java has never had great use on the client side (although this is changing) - it has always been dominant on the server. By far the main way Java is used on the server is J2EE, so it is reasonable to claim that what the TIOBE index is measuring there is largely Java used for websites.
    So, when I claim that Java is indeed the main language used for commercial websites, I have a lot of evidence to back me up: job-site statistics (what commercial companies require in terms of web-site development skills), the relative popularity of Java versus other languages using metrics such as TIOBE (and SourceForge), years of reports documenting the implementation and history of J2EE in specific high-volume sites and in general, and a lot of information and experience about the appropriate use cases for LAMP, and where (in spite of the earnest wishes of its supporters) it has major limitations. And Sun has recently overtaken Dell in terms of server sales; I would suggest that although LAMP (if you can still call it that!) can be hosted on Solaris, it is far more likely to be used as a J2EE host.
    There is a lot of excitement around LAMP, and there is much innovation there, but the IT industry as a whole is slow-moving. It took a long time for Java to become very widely adopted (it only recently surpassed C and C++ as the most-in-demand language), and it will take a long time for other approaches to significantly impact J2EE use.
  30. Hi Steve,

    Your misunderstanding stems from the fact that you do not appreciate that in most instances back-end and front-end processing have different needs and hence often employ different technologies.

    I read an article recently that clearly spells out the strategy adopted by most commercial websites, and it is not J2EE. J2EE is dominant in the enterprise space, where vendors are selling mostly to IT managers.

    Elsewhere people split the roles, using web technology like LAMP for the web front end, and traditional back-end technology like C++ (or perhaps Java) for back-end processing (order fulfilment, billing etc).

    Like I said, you are deluded if you believe that J2EE is the dominant web technology for commercial web sites.

    But hey, don't let the odd fact spoil the party :^)

    Paul.


    You are just plain wrong about this, and as Cameron has shown, you were wrong about Google and Amazon too. Like it or not, J2EE is the dominant web stack for commercial sites, at least for now (PHP use is certainly growing). Sure, you can be selective and refine what you mean by 'commercial' until you find a set of sites that back your point of view. Sure, there is still a lot of C++ and other software at the back end. However, Java is often used as a front end to all that. Further evidence that this is the case is given by the TIOBE software index (http://www.tiobe.com/tpci.htm) - a measure of the number of developers and resources available for each language, and a quantitative measure of popularity. Java is the top-rated language, with LAMP languages well behind. The thing is, Java has never had great use on the client side (although this is changing) - it has always been dominant on the server. By far the main way that Java is used on the server is J2EE, so it is reasonable to claim that what the TIOBE index is measuring there is largely Java used for websites.

    So, when I claim that Java is indeed the main language used for commercial websites, I have a lot of evidence to back me up: job site statistics (what commercial companies require in terms of website development skills), the relative popularity of Java to other languages using metrics such as TIOBE (and SourceForge), and years of reports documenting the implementation and history of J2EE in specific high-volume sites and in general. There is also a lot of information and experience about the appropriate use cases for LAMP, and where (in spite of the earnest wishes of its supporters) it has major limitations.

    And Sun has recently overtaken Dell in terms of server sales. I would suggest that although LAMP (if you can still call it that!) can be hosted on Solaris, Solaris is far more likely to be used as a J2EE host.

    There is a lot of excitement around LAMP, and there is much innovation there, but the IT industry as a whole is slow-moving. It took a long time for Java to become very widely adopted (it only recently surpassed C and C++ as the most-in-demand language) and it will take a long time for other approaches to significantly impact J2EE use.
    Hi Steve,

    Your analysis is accurate. My point was, as Cameron points out, that many large commercial sites use several technologies (the standard languages at Google are Java, Python and C++, for example). My other point is that many commercial sites keep their interface extremely simple. In fact this is how Google search became popular in the first place: a simple, uncluttered UI coupled with an extremely fast search engine.

    So back to KISS. Users are increasingly becoming used to the web paradigm, and I work hard to set expectations so that my users do not expect a Visual Basic-like GUI from a web interface. Using this philosophy, attractive and usable web interfaces can be created quickly (often with the help of a web designer). They can also be easily tested. I don't know how to effectively test JavaScript on a browser, or AJAX either, and I think there is a lot of mileage in a simple web-centric approach, instead of trying to pretend that a browser is a rich client platform. If a rich interface is really required then there is always Java Webstart, etc.

    BTW, there are component-centric web approaches that both simplify and speed up web development, meeting the KISS principle IMO. Continuation servers come to mind. But as you know they are better suited to low user load scenarios, and are limited to dynamic languages with continuation support.

    Paul.
  31. commercial sites keep their interface extremely simple. In fact this is how Google search became popular in the first place, a simple uncluttered UI coupled with an extremely fast search engine.
    Google's initial design was introduced many years ago, way before the term 'AJAX' was even thought of. The success of Google was because of one thing - their searches were actually useful (I remember the amazement of colleagues when they were first introduced to Google after having used older search engines years ago. I remember my amazement!). Users would not have cared what the Google page looked like - content was what mattered. However, as Google is now showing, functionality (and fast functionality) is becoming the rule.
    So back to KISS. Users are increasingly becoming used to the web paradigm and I work hard to set expectations such that my users do not expect a Visual Basic-like GUI from a web interface.
    I think one has to compromise. From personal experience, talking down what websites can do is not that effective when the client's competitors' websites have rich functionality. Like it or not, sites like Flickr and Google Mail have raised users' expectations in general.
    I don't know how to effectively test javascript on a browser or AJAX either
    At the moment I am using HttpUnit. It has some JavaScript support. I don't explicitly test JavaScript, as I don't explicitly code it (I use JSF), but this works for my pages, which do require JavaScript for functionality. GWT is designed to allow full testing of UI functionality. Run your application in 'hosted mode' and you can debug the whole interface. In this respect, a Java solution seems hugely in advance of other languages (for now!). I have heard that Selenium is a good cross-language way to test JavaScript and AJAX applications. So, testing of AJAX is certainly feasible.
    If a rich interface is really required then there is always Java Webstart, etc.
    Java Webstart is not a bad idea for internal use, but totally impractical for public-facing commercial sites.
    BTW. There are component centric web approaches that do both simplify and speed-up web development, meeting the KISS principle IMO. Continuation servers come to mind. But as you know they are better suited to low user load scenarios, and are limited to dynamic languages with continuation support.


    Paul.
    Not quite. Component use and continuations are quite separate approaches. Continuation servers are not limited to dynamic languages; there are Java-based solutions like RIFE, Cocoon, and Apache Commons Javaflow. I would agree that continuations may indeed be better suited to low-load situations (although I believe this is starting to change - memory is going to be far less of an issue on 64-bit platforms), but component-based approaches are far less limited in this respect. I have to admit I have not researched continuations enough to comment in detail, and what I have said may be out of date. Component-based systems like JSF can be tuned for different uses, with component state saved either on the server or the client. AJAX can actually help with this, as partial page updates can involve less traffic to the server.
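The continuation-server idea being debated here can be sketched in a few lines of Python. This is a toy illustration only - not the API of Seaside, RIFE, Cocoon or Javaflow - showing how a multi-step flow can be written as one linear handler, with a suspended generator acting as the stored continuation:

```python
# Toy continuation-server sketch: a multi-step flow written as one
# linear handler. Each `yield` suspends the handler until the next
# "request" arrives; the generator object *is* the stored continuation.

def checkout_flow():
    name = yield "Page 1: who are you?"          # wait for request 1
    item = yield f"Page 2: hi {name}, buy what?" # wait for request 2
    yield f"Receipt: {name} bought {item}"       # final page

class ContinuationServer:
    def __init__(self):
        self.sessions = {}   # session id -> suspended generator

    def request(self, sid, data=None):
        if sid not in self.sessions:
            gen = checkout_flow()
            self.sessions[sid] = gen
            return next(gen)                  # run to the first yield
        return self.sessions[sid].send(data)  # resume with the input

server = ContinuationServer()
print(server.request("s1"))           # Page 1: who are you?
print(server.request("s1", "Paul"))   # Page 2: hi Paul, buy what?
print(server.request("s1", "a book")) # Receipt: Paul bought a book
```

Note that every in-progress conversation keeps a live generator on the server, so memory grows with concurrent users - which is exactly the low-load caveat Paul raises.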
  32. Hi Steve, Agree mostly with what you've said. but...
    From personal experience, talking down what websites can do is not that effective when the client's competitors' websites have rich functionality. Like it or not, sites like Flickr and Google Mail have raised users' expectations in general.
    Perhaps, but setting user expectation is useful. In an enterprise situation, it is easy to present trade-offs to users, e.g. a simple interface in two weeks versus a "rich" interface in a month. Invariably the response to this question is that we'll have the simple interface now and the rich one in a later release. But when it comes to planning the later release, the users have often 'discovered' higher-priority functionality they would prefer rather than jazzing up the interface. It's their money, so they can choose.
    I have heard that Selenium is a good cross-language way to test JavaScript and AJAX applications.
    I am aware of Selenium and other web testing tools, but these tend to fall into the acceptance/functional testing arena. What concerns me is unit testing and TDD in particular.
    GWT is designed to allow full testing of UI functionality. Run your application in 'hosted mode' and you can debug the whole interface. In this respect, a Java solution seems hugely in advance of other languages (for now!).
    Debugging is one thing. Design-level, repeatable unit tests are another. BTW, Seaside provides some impressive debugging facilities (in the browser). I would be surprised if GWT could compete, but I haven't seen GWT in action.
    Java Webstart is not a bad idea for internal use, but totally impractical for public-facing commercial sites.
    Yes, I agree. But out there on the web, I believe the user expectation is for well-designed and simple 'web' interfaces. Most commercial sites take this approach and tend to look like web pages, rather than rich GUIs. The drive for rich UIs, as far as I can tell (outside Google of course), is coming from the corporate intranet space. Even here, I'm not sure if it is the users; my guess is that the drive to use AJAX is coming from developers and vendors.
    Not quite. Component use and continuations are quite separate approaches.
    I think you should take a closer look at continuation servers. By removing the page metaphor and the stateless request/response paradigm, continuation servers allow you to use the same programming model that you would use with a traditional rich client component framework, like Swing. There is no request/response, and there is no HTML/XML. There are just components with registered callbacks, which can optionally embed JavaScript and AJAX. Take a look at Seaside: they provide a demo shopping cart app, which is the shortest, most componentised implementation of such an application that I have yet come across. Paul.
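The component-with-registered-callbacks model Paul describes can be sketched as a toy in Python. All the names here (`Counter`, `Canvas`, `anchor`) are hypothetical illustrations, not Seaside's real API: the component renders itself and registers callbacks under opaque ids, and the framework routes the next click to the right callback, so the component author never handles a raw request or response.

```python
# Toy sketch of a callback-registering component model (in the spirit
# of what is described for Seaside above; not Seaside's real API).

class Counter:
    def __init__(self):
        self.value = 0

    def render(self, canvas):
        # Register a callback instead of handling a request/response:
        canvas.anchor("++", callback=lambda: self.increment())
        canvas.text(str(self.value))

    def increment(self):
        self.value += 1

class Canvas:
    def __init__(self):
        self.callbacks = {}   # callback id -> function
        self.html = []

    def anchor(self, label, callback):
        cid = f"cb{len(self.callbacks)}"
        self.callbacks[cid] = callback
        self.html.append(f'<a href="?_k={cid}">{label}</a>')

    def text(self, s):
        self.html.append(s)

# One render/click cycle, as the framework would drive it:
counter, canvas = Counter(), Canvas()
counter.render(canvas)
canvas.callbacks["cb0"]()   # the user clicks the "++" anchor
print(counter.value)        # 1
```

The point of the sketch is that the page metaphor disappears: the framework owns the URL/dispatch plumbing, and the application is just stateful components and callbacks, much like a Swing program.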
  33. Hi Steve,

    Agree mostly with what you've said. but...

    From personal experience, talking down what websites can do is not that effective when the client's competitors' websites have rich functionality. Like it or not, sites like Flickr and Google Mail have raised users' expectations in general.


    Perhaps, but setting user expectation is useful. In an enterprise situation, it is easy to present trade-offs to users, e.g. a simple interface in two weeks versus a "rich" interface in a month. Invariably the response to this question is that we'll have the simple interface now and the rich one in a later release. But when it comes to planning the later release, the users have often 'discovered' higher-priority functionality they would prefer rather than jazzing up the interface. It's their money, so they can choose.
    But now we are back on topic, and a major advantage of JSF comes to the fore. You don't have to put work into providing a rich interface - you get off-the-shelf components that provide that additional functionality for you. Using Java Studio Creator you can incorporate AJAX in a matter of seconds, just by placing a component on a form and adding a few lines of code. Here is an example: http://developers.sun.com/prodtech/javatools/jscreator/learning/tutorials/2/textcompletion.html#02
    What concerns me is unit testing and TDD in particular.
    I am indeed using HttpUnit for unit testing.
    GWT is designed to allow full testing of UI functionality. Run your application in 'hosted mode' and you can debug the whole interface. In this respect, a Java solution seems hugely in advance of other languages (for now!).


    Debugging is one thing. Design-level, repeatable unit tests are another. BTW, Seaside provides some impressive debugging facilities (in the browser). I would be surprised if GWT could compete, but I haven't seen GWT in action.
    Well, you should look at it. It does indeed allow unit testing and test-driven development of interfaces. That was one of its design considerations. From the GWT website: "You can use all of your favorite Java development tools (Eclipse, IntelliJ, JProfiler, JUnit) for AJAX development."
    Java Webstart is not a bad idea for internal use, but totally impractical for public-facing commercial sites.


    Yes, I agree. But out there on the web, I believe the user expectation is for well-designed and simple 'web' interfaces. Most commercial sites take this approach and tend to look like web pages, rather than rich GUIs.
    So far. But I disagree with you about user expectation. My experience is that users are increasingly requiring highly functional websites. What matters is the amount of work we have to put in to deliver them. AJAX (and JSF) can be a huge time-saver.
    The drive for rich UIs as far as I can tell (outside Google of course) is coming from the corporate intranet space.
    No, it isn't. I really don't see how you can possibly claim that given the influence of blogging sites, Flickr, YouTube, Digg etc. It is the home user who is often the first to experience AJAX and/or rich website interfaces.
    I think you should take a closer look at continuation servers. By removing the page metaphor and the stateless request/response paradigm, continuation servers allow you to use the same programming model that you would use with a traditional rich client component framework, like Swing. There is no request/response, and there is no HTML/XML. There are just components with registered callbacks, which can optionally embed JavaScript and AJAX. Take a look at Seaside: they provide a demo shopping cart app, which is the shortest, most componentised implementation of such an application that I have yet come across.

    Paul.
    Seaside mixes components with continuations. Other continuation servers don't. Continuations are not connected to component use; they are simply approaches that can work well together to allow the stateless interface of the web to be handled more easily. There are component-based web servers that allow abstraction away from HTML that don't use continuations. JSF has often been 'accused' of trying to emulate a Swing-based approach, and it is not based on continuations. If you actually look at the Java continuation systems I mentioned you will see the separation.
  34. Hi Steve,

    Took a look at the sites you mentioned, and the UI looked like a standard web page to me. I guess that people are using AJAX as a way to make the normal form controls more responsive. In this role, I agree AJAX doesn't have to cost you much; then again, IMO you don't get much in return either. Also I've been on the script.aculo.us website, and they do provide extensive unit testing for AJAX. I guess I'm swimming against the tide on this one!

    And I don't think AJAX as such is what I have a difficulty with. I think for me the problem is the idea of creating a desktop-like GUI through a browser using JavaScript, XMLHttpRequest and HTML. Why? Personally, I would explore Webstart more if I really needed to (there are some interesting Webstart/Applet-based products out there whose makers claim to have resolved many of the problems with applets), before looking at something like GWT or Echo or the other JavaScript-based "rich client in a browser" technologies.

    As for JSF, as a component framework it's fine and you do get rudimentary AJAX for free; you wouldn't write Google Earth with it though. IMO when there is a compelling reason to use AJAX (e.g. Google Earth) then it can really shine; otherwise, why not keep things simple?

    Paul.
  35. Hi Steve,

    Took a look at the sites you mentioned, and the UI looked like a standard web page to me. I guess that people are using AJAX as a way to make the normal form controls more responsive. In this role, I agree AJAX doesn't have to cost you much; then again, IMO you don't get much in return either.
    You can get a lot in return. Just take a look at how Flickr works. I am involved in the development of a site that allows some minor image manipulation on the browser. This use of JavaScript is incredibly responsive, and saves a huge amount of bandwidth.
    And I don't think AJAX as such is what I have a difficulty with. I think for me the problem is the idea of creating a desktop like GUI through a browser using Javascript, XMLHttpRequest and HTML. Why?
    The answer to 'why' is that it is a mechanism to deliver rich client functionality to users without them having to download large plug-ins. I would expect the downloading of an up-to-date JRE to be a major put-off for potential users of high-volume sites. Users often want to get quick answers from a commercial website; they may do things on impulse. Asking them to wait for even just 5 minutes to download a plugin can be a commercial disadvantage for a site. And why bother, when toolkits like GWT make development almost as easy as writing Applets?
    As for JSF, as a component framework it's fine and you do get rudimentary AJAX for free; you wouldn't write Google Earth with it though.
    You get far more than just rudimentary AJAX. See below.
    IMO when there is a compelling reason to use AJAX (e.g. Google Earth) then it can really shine; otherwise, why not keep things simple?


    Paul.
    AJAX does keep things simple. AJAX means that the developer can produce more highly responsive pages with a lot less effort. AJAX+JSF is a very simple approach, as you re-use existing components that provide functionality that would otherwise involve a considerable amount of coding. An example is the presentation of geographical information on websites. In the past this has involved a considerable amount of specialised coding or the purchase of software. Sun's new PetStore demo shows how you can combine JSF+AJAX with Google Maps to do this easily.
  36. Hi Steve, As you know, I see most web programming as one big hack. After finally concluding that web programming is only tolerable and practical if it can be done cheaply, I've settled on Rails as my preferred solution. No components, just templates; low cost, and high productivity.
    The answer to 'why' is that it is a mechanism to deliver rich client functionality to users without them having to download large plug-ins.
    Well you've got me thinking, and I could be about to make a u-turn. It was this quote on a blog that finally swung it for me:
    One of my favorite answers was Steve Yegge’s answer to What do you think will be the next big thing in computer programming? (mostly because it mirrors my own opinion): I think web application programming is gradually going to become the most important client-side programming out there. I think it will mostly obsolete all other client-side toolkits: GTK, Java Swing/SWT, Qt, and of course all the platform-specific ones like Cocoa and Win32/MFC/etc. It’s not going to happen overnight. It’s very slowly been going that direction for ten years, and it could well be another ten years before web apps “win”. The tools, languages, APIs, protocols, and browser technology will all have to improve far beyond what you can accomplish with them today. But each year they get a little closer, and I’ve finally decided to switch all my own app development over to browser-based programming from now on. Microsoft and Apple definitely don’t want this to happen, so a necessary first step will be for an open-source browser such as Firefox to achieve a dominant market position, which will in turn require some sort of Firefox-only killer app. (A killer app would be something like iTunes, something that everyone in the world wants to use, badly enough to download Firefox for it.)
    http://curthibbs.wordpress.com/2006/07/23/interesting-answers-from-great-programmers/

    I can envision the browser becoming something akin to an X Windows client. In such a scenario the browser becomes a standard rich client presentation platform instead of the barely compatible HTML/CSS/JavaScript/XMLHttpRequest hack it is today.

    Lots of hurdles until we get there though, mostly political I think. In the meanwhile I guess GWT is a glimpse of what is possible. For once I can see myself being a technology laggard here.

    Until it's clear what type of browser technology wins through, I'll probably be sticking to low-cost, low-investment, high-productivity solutions like Rails, where I can deliver solutions quickly at low risk, throwing in AJAX here and there when needed.

    When Microsoft introduced XMLHttpRequest, I'm sure they didn't expect it to be used to challenge them on the desktop. Who knows, the Firefox guys could come up with a winner that makes JavaScript and GWT look positively weak. And there is always the risk of that killer app turning up :^)

    I guess GWT will shake things up a bit, and shaking things up is always a good idea :^).

    Paul.
  37. Hi Steve,

    As you know, I see most web programming as one big hack. After finally concluding that web programming is only tolerable and practical if it can be done cheaply, I've settled on Rails as my preferred solution. No components, just templates; low cost, and high productivity.
    The reason I have settled on JSF is for exactly the same reason - high productivity because of components. With a large number of components out there in rich libraries like Oracle's ADF and ICEfaces, it cuts out the coding, and I don't have to compromise on what I can deliver to the client. I would not say that web programming is that much of a hack - what is a real hack (due to past politics) is the way that the GUI is delivered, with browser-specific JavaScript and CSS. But, if most of that is handled by high-quality components, and enclosed in an agile view framework like Facelets, development can be elegant.
    The answer to 'why' is that it is a mechanism to deliver rich client functionality to users without them having to download large plug-ins.


    Well you've got me thinking, and I could be about to make a u-turn. It was this quote on a blog that finally swung it for me:
    This is one of the reasons why I find debate so valuable - occasionally one does a U-turn. This happened last year to me when I was discussing Ruby on this site, and I came to understand why it is such an important language.

    One of my favorite answers was Steve Yegge’s answer to What do you think will be the next big thing in computer programming? (mostly because it mirrors my own opinion):

    I think web application programming is gradually going to become the most important client-side programming out there. I think it will mostly obsolete all other client-side toolkits: GTK, Java Swing/SWT, Qt, and of course all the platform-specific ones like Cocoa and Win32/MFC/etc.

    It’s not going to happen overnight. It’s very slowly been going that direction for ten years, and it could well be another ten years before web apps “win”. The tools, languages, APIs, protocols, and browser technology will all have to improve far beyond what you can accomplish with them today. But each year they get a little closer, and I’ve finally decided to switch all my own app development over to browser-based programming from now on.

    Microsoft and Apple definitely don’t want this to happen, so a necessary first step will be for an open-source browser such as Firefox to achieve a dominant market position, which will in turn require some sort of Firefox-only killer app. (A killer app would be something like iTunes, something that everyone in the world wants to use, badly enough to download Firefox for it.)



    http://curthibbs.wordpress.com/2006/07/23/interesting-answers-from-great-programmers/

    I can envision the browser becoming something akin to an X Windows client. In such a scenario the browser becomes a standard rich client presentation platform instead of the barely compatible HTML/CSS/Javascript/XMLHttpRequest hack it is today.

    Lots of hurdles until we get there though, mostly political I think. In the meanwhile I guess GWT is a glimpse of what is possible. For once I can see myself being a technology laggard here.

    Until it's clear what type of browser technology wins through, I'll probably be sticking to low-cost, low-investment, high-productivity solutions like Rails, where I can deliver solutions quickly at low risk, throwing in AJAX here and there when needed.

    When Microsoft introduced XMLHttpRequest, I'm sure that they didn't expect it to be used to challenge them on the desktop. Who knows, the Firefox guys could come up with a winner that makes javascript and GWT look positively weak. And there is always the risk of that killer app turning up :^)

    I guess GWT will shake things up a bit, and shaking things up is always a good idea :^).

    Paul.
    You well know my view of risk :) This is precisely why I am so enthusiastic about relatively technology-neutral approaches like JSF - when that new client-side technology appears, I believe I can be reasonably confident that JSF (or some successor of it) will render to that technology. I am hoping that JSF will protect me from client-side technology dependence in the same way that Java protects me from operating system dependence. Even if I am wrong, I find it a very productive, elegant, and even fun way to develop web pages.
  38. Hi Steve,
    The reason I have settled on JSF is for exactly the same reason - high productivity because of components
    I agree; the lack of visual components in Rails is its biggest weakness IMO. I'm hoping that the speed of development associated with REPLs and DSLs in Ruby will make up for this deficiency. Rails is simple though. Hopefully its simplicity will mean that there are few gotchas down the line. With Rails you are in control and you know what you're getting.

    I like components, but I'm not fully sold on JSF. I have used a home-grown component web framework in the past which was much simpler than JSF. In particular, I didn't like the templating in JSF when I looked at it in detail. However, you say that this can be replaced by Facelets, which is much better in your opinion, so I'm keeping an open mind. My preferred component approach is the one used in Seaside. There is a Ruby Seaside clone that may become popular down the line and could end up being married with Active Record. Rails 2.0 perhaps? Who knows?

    Ultimately, I'm pretty tired of Java, and am moving to a dynamic language for a broad set of reasons. So Rails is probably a good choice for me personally.
    You well know my view of risk :)
    Talking about risk, I see integration technologies as a key mechanism for allowing interoperability between languages, thus reducing the risks associated with any given language choice. Apparently when you do a search on Google, there are about 100 services that work together to deliver content (a mash-up) to the page you finally see. Tying together service components in this way using web technologies like REST (or even SOAP) is a way forward which people like Google and Amazon are already using. So web technologies may end up succeeding where CORBA and EJBs have failed. Paul.
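The late-binding property Paul attributes to REST - a client that knows only a URL and a document format, with any implementation bindable to that URL at runtime - can be sketched as a toy dispatcher. No real HTTP stack is involved, and `bind`, `post` and `order_service_v1` are illustrative names, not any real API:

```python
# Toy sketch of REST-style late binding: the only contract between
# client and service is a URL and a document format. Which
# implementation handles the URL is decided at runtime, so either
# side can be swapped without the other changing.

import json

routes = {}   # URL path -> handler, bound at runtime

def bind(path, handler):
    routes[path] = handler

def post(path, document):
    """Client side: send a JSON document to a URL, get one back."""
    return json.loads(routes[path](json.dumps(document)))

# One possible service implementation, bound late:
def order_service_v1(doc):
    order = json.loads(doc)
    return json.dumps({"status": "accepted", "item": order["item"]})

bind("/orders", order_service_v1)
print(post("/orders", {"item": "book"}))
# A v2 implementation could later be bound to "/orders" without
# touching the client at all.
```

The client never names a class or compiles against an interface; it only knows a URL and a document shape, which is the low-coupling point being made above.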
  39. Rails is simple though. Hopefully its simplicity will mean that there are few gotchas down the line. With Rails you are in control and you know what you're getting.

    I like components, but I'm not fully sold on JSF. I have used a home grown component web framework in the past which was much simpler than JSF.
    Undoubtedly, as the renderkit approach of JSF can make component writing fiddly.
    In particular, I didn't like the templating in JSF when I looked at it in detail. However, you say that this can be replaced by Facelets, which is much better in your opinion, so I'm keeping an open mind.
    Facelets is truly superb. I keep finding new features which speed up development.
    My preferred component approach is the one used in Seaside. There is a Ruby Seaside clone, that may become popular down the line and could end up being married with Active Record. Rails 2.0 perhaps? Who knows?
    Yes, but that is yet more uncertainty....
    You well know my view of risk :)


    Talking about risk, I see integration technologies as a key mechanism for allowing interoperability between languages, thus reducing the risks associated with any given language choice. Apparently when you do a search on Google, there are about 100 services that work together to deliver content (a mash-up) to the page you finally see.

    Tying together service components in this way using web technologies like REST (or even SOAP) is a way forward which people like Google and Amazon are already using. So web technologies may end up succeeding where CORBA and EJBs have failed.

    Paul.
    I don't see them solving this problem at all, if it encourages heterogeneous environments, with all the associated support and version issues. SOA can be a way to integrate existing legacy and hard-to-update functionality with new approaches. I am more concerned with avoiding that hard-to-update situation. I think language choice is better solved by really very tight integration. A good example of this is Groovy, or the very latest release of JRuby, where you can do: class MyClass < my.java.Klass - that is integration!
  40. Hi Steve,
    don't see them solving this problem at all, if it encourages heterogenous environments, with all the associated support and version issues. SOA can be a way to integrate existing legacy and hard-to-update functionality with new approaches. I am more concerned with avoiding that hard-to-update situation. I think language choice is better solved by really very tight integration. A good example of this is Groovy, or the very latest Release of JRuby, where you can do: class MyClass < my.java.Klass now that is integration!</blockquote> I agree. I think there are two issues here. Personally I don't believe that we should still be building software one statement at a time. By now we should have found a way to utilise larger re-usable components. No one in the motor industry still makes their own tires, or distributors etc. There are specialist that make those things, and motor manufacturing is mostly assembly. So components is one issue. The other issue is language interoperability. I like the JRuby approach, but when it comes to deep integration between Java and Ruby then the language differences do come to the fore. My optimism with REST is that it could help address the first problem, namely re-usable components. IMO, the issue here has been coupling and early (compile-time) binding. REST reduces coupling by making very few assumptions about the target service. All that is known by the client is the URL of the service and the data format of the document (message) sent. The service then acts on the document and performs an implicit operation. HTTP and Webservers provide late binding. Any service implementation can be bound to a URL at runtime. So in a way REST is similar to Smalltalk style message sending between objects. As you know from what I've said in the past, I believe that messaging, low coupling and late binding hold out the possibility for an effective component model, and re-useable components. Back to tight language integration. 
I agree with the approach of JRuby, but as Eric has pointed out in the past, seamless integration is more than just library calls. Ruby programming has certain idioms and approaches that just aren't available in Java. So JRuby programmers could end up as Java programmers using Ruby syntax. I'm not sure what the solution is here, but I do know that there is more differences between languages than syntax. I like the layered approach where low level services are consumed by higher level abstractions. As you pointed out, the interface between the operating system and the programmng language is one such layered interface. I see dynamic languages ultimately winning out here. The language platform (VM) can be extended in a standard way by plugging in system components written in a low level language like C or C++. Ultimately how low level system services are implemented (hardware, operating system, VM plug-in etc) should be opaque to the programmer. Currently we see things like I/O and threading as in the realm of the system. Croquet as extended this idea to include 3D graphics rendering, and spatial sound. With the right set of service primtitives, then performance should no longer be a reason to change languages. So why have more than one high level language? Perhaps different problem domains. E.g. Lisp like languages have a strong tradition in Artificial Intelligience. Ultimately though, I think that modern OO languages are essentially all the same in vision and the differences stem from practical compromises made in their implementation. As processors get faster and VM technology improves there is no longer any practical reason IMO why these languages (Java, C#, Ruby, Python, Smalltalk) could not share a common object model (the same object memory), in such a scenario the differences would only come down to syntax, and JRuby like integration could be true integration (an in-memory Java Object is equivalent to an in-memory Ruby Object). 
Imagine compiling Java or C# to the Strongtalk VM and inspecting the in-memory objects using the Smalltalk Inspector! Paul.
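Paul's point about REST-style decoupling (a client that knows only a URL and a document format, with any implementation bound to that URL at runtime) can be sketched as a toy in-process model. All class, route, and document names below are invented for illustration; this is a sketch of the idea, not an actual HTTP stack:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.UnaryOperator;

// Toy model of REST-style low coupling: the client knows only a "URL"
// and the document it sends; which implementation handles that URL is
// decided at runtime, so a service can be rebound without the client
// changing. All names here are hypothetical.
public class LateBindingSketch {
    private final Map<String, UnaryOperator<String>> routes = new HashMap<>();

    // Bind an implementation to a URL at runtime (the "web server" role).
    public void bind(String url, UnaryOperator<String> service) {
        routes.put(url, service);
    }

    // The client posts a document to a URL; the operation is implicit.
    public String post(String url, String document) {
        UnaryOperator<String> service = routes.get(url);
        if (service == null) {
            return "404";
        }
        return service.apply(document);
    }

    public static void main(String[] args) {
        LateBindingSketch server = new LateBindingSketch();
        server.bind("/orders", doc -> "stored:" + doc);
        System.out.println(server.post("/orders", "<order id='1'/>"));
        // Rebind the same URL to a different implementation at runtime;
        // the calling code above would not need to change at all.
        server.bind("/orders", doc -> "queued:" + doc);
        System.out.println(server.post("/orders", "<order id='1'/>"));
    }
}
```

Because the client holds no reference to a service class, only a URL string and a document, the implementation behind a URL can change at runtime without touching the caller, which is the late binding being described.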
  41. Hi Steve,
    I am hoping that JSF will protect me from client-side technology dependence in the same way that Java protects me from operating system dependence. Even if I am wrong, I find it a very productive, elegant, and even fun way to develop web pages.
    I hear what you say about fun! After being subjected to Struts, JSP, JSTL, etc over the last 8 years or so, I can definitely do with more fun. I can imagine how graphically assembling components with JSF could be both productive and fun. I'm finding using Ruby, DSLs and Rails fun too. Rails or JSF could be gone in a few years. But if we've had fun, learnt something, and been productive to boot, then I guess that counts as a result! Paul.
  42. I can imagine how graphically assembling components with JSF could be both productive and fun.
    Actually, I have never used JSF graphically. I enjoy using a component-based approach in Facelets.
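A component-based Facelets view of the kind mentioned here might look like the following minimal sketch; the template path and the #{userBean} managed bean are invented for illustration:

```xhtml
<!-- Minimal Facelets composition: the ui: tags come from the Facelets
     tag library, h: from the standard JSF HTML components. The template
     path and #{userBean} are hypothetical. -->
<ui:composition xmlns="http://www.w3.org/1999/xhtml"
                xmlns:ui="http://java.sun.com/jsf/facelets"
                xmlns:h="http://java.sun.com/jsf/html"
                template="/layout/main.xhtml">
  <ui:define name="content">
    <h:form>
      <h:inputText value="#{userBean.name}"/>
      <h:commandButton value="Save" action="#{userBean.save}"/>
    </h:form>
  </ui:define>
</ui:composition>
```

The page plugs its content into a shared template, which is what makes Facelets feel component-based rather than page-at-a-time.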
    I'm finding using Ruby, DSLs and Rails fun too. Rails or JSF could be gone in a few years. But if we've had fun, learnt something, and been productive to boot, then I guess that counts as a result!

    Paul.

    One of the reasons why I have chosen JSF as my web framework of choice is that it is extremely unlikely to be gone in a few years. It is part of JEE5, has multiple implementations and has wide industry support.
  43. Hi Steve,
    One of the reasons why I have chosen JSF as my web framework of choice is that it is extremely unlikely to be gone in a few years. It is part of JEE5, has multiple implementations and has wide industry support.
    I know. But if we've established anything during this debate it is that the future is pretty uncertain, and the eventual outcome has more to do with politics than technical merit. Everyone believed that EJBs would be the standard component technology of choice for a very long time, and then Spring came along. Who knows what is around the corner in the web space. 5 years from now you'll still be able to use JSF for sure, and your implementation is still likely to be supported by your vendor. But by then, there is likely to be something new on the block, and JSF may no longer be seen as a "forward looking" approach. If this happens, you'll find yourself learning yet another new API all over again. Our industry is still immature and unstable. If you've been productive, delivered business value and had fun along the way, then you've come out ahead in my book, whatever happens. Paul.
  44. Everyone believed that EJBs would be the standard component technology of choice for a very long time, and then Spring came along. Who knows what is around the corner in the web space. 5 years from now you'll still be able to use JSF for sure, and your implementation is still likely to be supported by your vendor.

    But by then, there is likely to be something new on the block, and JSF may no longer be seen as a "forward looking" approach. If this happens, you'll find yourself learning yet another new API all over again.
    That is not the point. I have no objection to learning new APIs. What troubles me (because I have considerable experience of it) is the use of APIs that lose support. Firstly, EJBs show no sign of losing their status as a standard component technology (indeed, Spring works well with them). Secondly, even though there are new APIs for EJBs, the old APIs are still supported. That is what matters for those writing software that is expected to last for a long time.
  45. Hi Steve,
    Secondly, even though there are new APIs for EJBs, the old APIs are still supported. That is what matters for those writing software that is expected to last for a long time.
    This is an interesting sentence. What do you mean by supported? If a working implementation of the version of the API is still available, does that count as supported? I clearly stated that JSF is still likely to be supported by vendors in years to come. But I'm talking about a different investment: the intellectual investment in the ideas and concepts behind an API, and the practical investment in learning how best to work through the practical issues with any given implementation. There is no guarantee that vendors will support this investment in the future. I've invested countless hours mastering the EJB 1.0/2.0/2.1 APIs and the finer details of the WebLogic 4.5/6.0/7.1 class loaders. There was no guarantee that this investment would be protected. In fact it was almost a given certainty that the issues with class loading in WebLogic needed to be rediscovered afresh with each new release. And the only thing the EJB 3.0 API has in common with EJB 2.1 is the name. In contrast, until recently I had tyres for a car I bought ten years ago that would still fit a modern car today. The word standard is much abused in the software industry, and in reality it seldom means what we take it to mean. Paul.
  46. Hi Steve,

    Secondly, even though there are new APIs for EJBs, the old APIs are still supported. That is what matters for those writing software that is expected to last for a long time.



    This is an interesting sentence. What do you mean by supported? If a working implementation of the version of the API is still available does that count as supported?
    Obviously.
    I clearly stated that JSF is still likely to be supported by vendors in years to come. But I'm talking about a different investment: the intellectual investment in the ideas and concepts behind an API, and the practical investment in learning how best to work through the practical issues with any given implementation. There is no guarantee that vendors will support this investment in the future. I've invested countless hours mastering the EJB 1.0/2.0/2.1 APIs and the finer details of the WebLogic 4.5/6.0/7.1 class loaders.

    There was no guarantee that this investment would be protected. In fact it was almost a given certainty that the issues with class loading in WebLogic needed to be rediscovered afresh with each new release. And the only thing the EJB 3.0 API has in common with EJB 2.1 is the name.


    In contrast, until recently I had tyres for a car I bought ten years ago that would still fit a modern car today.


    The word standard is much abused in the Software Industry, and in reality it seldom means what we take it to mean.

    Paul.
    EJB 3.0 may well be a different API in some respects, but a certified JEE5 server must still support EJB 2.1, and there is a migration path to EJB 3.0. One thing that is improving with specifications coming out of the JCP is that there are fewer vendor-specific issues. For example, code using JDO 2.0 and EJB 3.0/JPA implementations should be (and in my experience is) pretty easily transferred between vendors. One can then learn general approaches to fine-tuning rather than vendor-specific approaches. Much work is being put in to overcome such issues in JSF, so that components from any source can be used together. Anyway, having issues with fine details is not the same as an entire API being dropped. Believe me, individual issues with different implementations are nothing compared to having to deal with long-abandoned legacy APIs which were supported by a now non-existent single vendor or developer group.
  47. Hi Steve,

    Secondly, even though there are new APIs for EJBs, the old APIs are still supported. That is what matters for those writing software that is expected to last for a long time.



    This is an interesting sentence. What do you mean by supported? If a working implementation of the version of the API is still available does that count as supported?


    Obviously.

    I clearly stated that JSF is still likely to be supported by vendors in years to come. But I'm talking about a different investment: the intellectual investment in the ideas and concepts behind an API, and the practical investment in learning how best to work through the practical issues with any given implementation. There is no guarantee that vendors will support this investment in the future. I've invested countless hours mastering the EJB 1.0/2.0/2.1 APIs and the finer details of the WebLogic 4.5/6.0/7.1 class loaders.

    There was no guarantee that this investment would be protected. In fact it was almost a given certainty that the issues with class loading in WebLogic needed to be rediscovered afresh with each new release. And the only thing the EJB 3.0 API has in common with EJB 2.1 is the name.


    In contrast, until recently I had tyres for a car I bought ten years ago that would still fit a modern car today.


    The word standard is much abused in the Software Industry, and in reality it seldom means what we take it to mean.

    Paul.


    EJB 3.0 may well be a different API in some respects, but a certified JEE5 server must still support EJB 2.1, and there is a migration path to EJB 3.0. One thing that is improving with specifications coming out of the JCP is that there are fewer vendor-specific issues. For example, code using JDO 2.0 and EJB 3.0/JPA implementations should be (and in my experience is) pretty easily transferred between vendors. One can then learn general approaches to fine-tuning rather than vendor-specific approaches. Much work is being put in to overcome such issues in JSF, so that components from any source can be used together. Anyway, having issues with fine details is not the same as an entire API being dropped. Believe me, individual issues with different implementations are nothing compared to having to deal with long-abandoned legacy APIs which were supported by a now non-existent single vendor or developer group.
    Hi Steve, I do not want to dwell on this particular subject too long, but I believe you are oversimplifying what is in effect a very complex mix of issues. My point is that as far as the future is concerned there aren't any guarantees. As a contractor I have worked for a large number of companies, and the maintainability of their legacy code has had little to do with standards. I advocated the adoption of J2EE standards back in 1998. I quickly began to understand that for some subset of J2EE that was a good idea, and for others it wasn't. The most important issue IMO over the years is that the APIs used were well designed and well implemented, and a good fit for the task for which they were selected. Also, another thing that seems to be important is that a given implementation has broad developer community support, whether the implementation has been standardised or not. The issue of whether an implementation completely disappears off the map can be overcome with access to the source code. In fact this is why source-code escrow was set up in the first place. The access to source code addresses several support issues. Vendors can develop partial deafness when you request bug fixes for an implementation that is no longer flavour of the month. Whether you get that fix or not depends on their commercial priorities, not yours. Even if they are still listening, your support problem may become their commercial opportunity. I have experienced a number of times, when I have had problems with a vendor implementation, that they went into denial and said it was our code, when clearly it wasn't. They would then sell us a consultant to 'help us' who would cost the earth. After finding no fault in our code, the consultant would then offer to sell us something else, because we had adopted the wrong strategy, or fob us off with some promise of a future patch.
When I first started using open source, it was a revelation when I found that I could debug problems into library and framework code myself. Exploring all the code revealed two things: 1. The framework wasn't that complicated and, given the time, I could write it myself. 2. It is easier to find and fix a bug in the framework and submit that fix than to read reams of vendor documentation, trawl through known bugs online, visit developer forums for tips, download and try out patches to see if they help, and finally submit a support call just to be told that it is your fault and you don't know what you're doing. I digress, but this is born of years of hard-earned experience. I would not go back there and I am amazed that vendors can still tout 'compliance to standards and vendor support' as potential benefits after years of exploiting both mercilessly for commercial gain. Of course some vendors are better than others, but like I say it is a complex set of issues and there is no golden rule! Paul.
  49. I digress, but this is born of years of hard-earned experience. I would not go back there and I am amazed that vendors can still tout 'compliance to standards and vendor support' as potential benefits after years of exploiting both mercilessly for commercial gain.
    One does not blindly run for any standard just because it is a standard. I don't particularly like some standards because they aren't complete enough - vendor extensions are required to get respectable functionality. So I don't use them. I carefully review the ones that suit what I am doing. This is why I have used JDO for years, rather than EJB persistence.
    The most important issue IMO over the years is that the APIs used were well designed and well implemented, and a good fit for the task for which they were selected.
    I strongly disagree. This is of no relevance at all if the API is designed and supported by a developer group or single vendor that either breaks up or goes out of business. Some great APIs and products have gone into limbo in the past, leaving developers stranded. Good design is no guarantee of long life.
    The access to source code addresses several support issues. Vendors can develop partial deafness when you request bug fixes for an implementation that is no longer flavour of the month. Whether you get that fix or not depends on their commercial priorities, not yours.
    And of course, this never happens for open source products....
    When I first started using open source, it was a revelation when I found that I could debug problems into library and framework code myself.
    I don't see why this should have been a revelation.
    Of course some vendors are better than others, but like I say it is a complex set of issues and there is no golden rule!
    Of course some vendors are better than others! That is the whole point of having a 'market' based on 'standards', so that when one vendor messes you up, you can move easily to a competing (possibly even open source) implementation. I have done exactly that in the past. What is supposed to happen if a rich and complex framework (let's call it 'Frails') goes wrong at some point? Do you really expect to be able to debug it yourself? What if the bug you are interested in is not part of the Frails team's priorities? Do you post on a mailing list and hope? What alternative competing implementation of Frails do you switch to? The exact same issues are there in free software as in commercial software. At least with commercial software you have SLAs and legal agreements. I don't think that a warm fuzzy 'developer community' feeling is a good substitute when you have a major problem and a deadline. You see, my long-term experience is that I don't trust anyone - I have been let down by both free and open source projects and proprietary commercial software. So, I choose to use standards, especially those with quality open source implementations. JSF and JDO are such standards. Use of standards gives me an escape route if a product is failing me. That way I get the best of all situations (or you may say 'the least worst').
  50. Hi Steve, The crux is community support, whether open source or commercial. If enough teams adopt and use a framework in their systems, then the likelihood is that the framework will succeed. I believe Rails has crossed that crucial point of critical mass, and so probably has JSF. Neither is going to disappear off the map.
    One does not blindly run for any standard just because it is a standard. I don't particularly like some standards because they aren't complete enough - vendor extensions are required to get respectable functionality. So I don't use them.
    I'm glad to hear it. I fully understand the idea behind industry standards, but IMO we aren't there yet. For me a prerequisite for a standard is that it is based on a best of breed (or a combination of best of breed) implementation(s) that has been proven over many years in the industry. A good example of this is the Hayes AT command set, used for call control on modems. This was a proven de facto standard long before it became an official ITU standard. I come from a telecoms background, so I know what real standards look like. In software we have very few such standards IMO. One of the reasons IMO is that much of the technology 'standards' we use in software is clearly not best practice and has more to do with competing commercial interests. The whole web fiasco IMO falls into this category. Technically we could do so much better, but we don't. As far as moving between standard implementations is concerned, it would be nice if it was easy, but given that the vendors tend to invest the most in defining the standard in the first place, the likelihood of this happening is low. As for support: if an implementation is complex and poorly written, then it will be buggy and difficult to support, whether it is open source or commercial. Personally I like to know what I'm getting, and being able to see the source code is a big bonus. For open source projects there is also an incentive to keep things simple. Rails was written by one person in four weeks. For some commercial standards and implementations, there is an incentive to make things as complex as possible. This can be another cynical trick to tie customers in, and keep them coming back for consultancy and support. Let's drop this subject. I think your point about being careful about what you sign up to just about covers it. BTW. What do you think about my views on REST, component models and mash-ups? This seems to be the direction in which things are moving. Paul.
  51. Hi Steve,

    The crux is community support, whether open source or commercial. If enough teams adopt and use a framework in their systems, then the likelihood is that the framework will succeed.
    Yes, but that does not prevent things losing support. What matters is multiple sources for an API, not multiple users. While you have a single vendor you are always at some risk of product incompatibilities or licence changes. Some very popular products have gone that way in the past.
    I believe Rails has crossed that crucial point of critical mass, and so probably has JSF. Neither is going to disappear off the map.
    Perhaps not, but there is nothing to stop the Rails developers deciding to make a new incompatible version. Indeed, future versions of Ruby and Python may well not just 'deprecate' current features of the language, but stop supporting them altogether. If I were a Ruby or Python developer, that would greatly concern me. On the other hand, JSF conforms to a specification, and future specifications will require backward support (just as happens with EJB).
    I'm glad to hear it. I fully understand the idea behind industry standards, but IMO we aren't there yet. For me a prerequisite for a standard is that it is based on a best of breed (or a combination of best of breed) implementation(s) that has been proven over many years in the industry.
    There are many, many ways that things in software aren't up to the standards in other industries. I could rant on about this at length. But, having some standards is certainly better than none.
    One of the reasons IMO is that much of the technology 'standards' we use in software is clearly not best practice and has more to do with competing commercial interests. The whole web fiasco IMO falls into this category. Technically we could do so much better, but we don't.
    Sorry, but calling the web a 'fiasco' is silly. It evolved the way it did because it worked. It may look silly now, but that is with hindsight. At the time, having a simple stateless system was what was needed.
    As far as moving between standard implementations is concerned, it would be nice if it was easy, but given that the vendors tend to invest the most in defining the standard in the first place, the likelihood of this happening is low.
    This is just plain wrong. It really happens, and it is realistic to expect it. I simply don't accept that the way I use standards is 'low likelihood'. Even if it is, it is nothing compared to not having any standards. Your attitude here is an unnecessary counsel of despair. Things are really not that bad, as I know from experience.
    For open source projects there is also an incentive to keep things simple. Rails was written by one person in four weeks. For some commercial standards and implementations, there is an incentive to make things as complex as possible. This can be another cynical trick to tie customers in, and keep them coming back for consultancy and support.
    Sorry, I am going to sound harsh here :) I think this is just plain cynical nonsense. Commercial products have as much incentive to keep products lean and mean as open-source and free products. This is especially true if they compete. They compete if they offer implementations of a popular standard. I really think you are letting experience with a subset of Java specifications unfairly tarnish what vendors and other developers do.
    BTW. What do you think about my views on REST, component models and mash-ups? This seems to be the direction in which things are moving.
    Erm.. I have lost track - perhaps best wait for another thread, where this would be on-topic :)
  52. Yes, but that does not prevent things losing support. What matters is multiple sources for an API, not multiple users. While you have a single vendor you are always at some risk of product incompatibilities or licence changes. Some very popular products have gone that way in the past.
    I haven't got a degree in economics, but it sort of does in my book. If enough large companies depend on an API, the vendor producing it is very unlikely to go bust. They will be providing maintenance releases and support for a long time to come. Legacy APIs, like legacy code, don't go away; they just mature. BEA has been making money out of Tuxedo for years for this very reason, with very little investment. In fact some companies seek out others with products in this phase as ripe for acquisition. Computer Associates are famous for this. Basically, if you depend on their code then they've got you by the B..ls.
    I think this is just plain cynical nonsense. Commercial products have as much incentive to keep products lean and mean as open-source and free products. This is especially true if they compete. They compete if they offer implementations of a popular standard.
    Really? Maybe. But check the blogs of other developers, and you will see that many others are as cynical as me. Try speaking to people that have worked in sales in large vendors like IBM or Oracle and you may come to the conclusion that I'm not being cynical enough. IMO the software industry is blighted by vendor interests. At least in the telecoms industry governments had the good sense to introduce regulation ... but we are moving way off subject. You pays your money and you take your choice, and increasingly for a lot of people that choice is open source.
    BTW. What do you think about my views on REST, component models and mash-ups? This seems to be the direction in which things are moving.
    This is actually more on subject than the above discussion. You said that you didn't see web-based component services as a viable way forward for cross-language interoperability, and I responded with this post: http://www.theserverside.com/news/thread.tss?thread_id=42603#220806 Paul.
  53. I haven't got a degree in economics, but it sort of does in my book. If enough large companies depend on an API, the vendor producing it is very unlikely to go bust. They will be providing maintenance releases and support for a long time to come. Legacy APIs, like legacy code, don't go away; they just mature.
    There are, unfortunately for developers, major examples that show exactly the opposite process. One of the best examples is how Microsoft has dropped support for 'legacy' Visual Basic (pre .NET), requiring major re-writing of software for those who wish to carry on the Microsoft route.
    Really? Maybe. But check the blogs of other developers, and you will see that many others are as cynical as me. Try speaking to people that have worked in sales in large vendors like IBM or Oracle and you may come to the conclusion that I'm not being cynical enough.
    Perhaps :) I may have been too harsh, but I still believe that the distinction between 'Free' software and commercial software is not that great. Software bloat and tie-in can be just as possible away from commercial pressure.
    IMO the software industry is blighted by vendor interests.
    I truly think that this is a very harsh generalisation.
    You pays your money and you take your choice, and increasingly for a lot of people that choice is open source.
    I think you are confusing things by mentioning open source in this context. Vendors do open source too! Sun has open sourced their JSF implementation, their app server, their operating system, and soon their Java. BEA is open sourcing their JPA implementation, as is Oracle. 'Vendor' and 'open source' are not opposites, and these are examples of how much of the software industry, far from being 'blighted' by vendors, is driven positively forward by them.

    BTW. What do you think about my views on REST, component models and mash-ups? This seems to be the direction in which things are moving.


    This is actually more on subject than the above discussion. You said that you didn't see web-based component services as a viable way forward for cross-language interoperability, and I responded with this post:


    http://www.theserverside.com/news/thread.tss?thread_id=42603#220806

    Paul.
    OK ....
    Currently we see things like I/O and threading as in the realm of the system. Croquet has extended this idea to include 3D graphics rendering and spatial sound. With the right set of service primitives, performance should no longer be a reason to change languages.
    Software development isn't like that. There are real needs for high performance that can't be reduced to simply connecting services together. Part of my job involves numerical simulation work. I don't expect very specialised iterative equation solutions to be available as a system call (or even as a library)! Your view here is too simple, I believe.
    So why have more than one high level language? Perhaps different problem domains. E.g. Lisp-like languages have a strong tradition in Artificial Intelligence.
    Absolutely.
    Ultimately though, I think that modern OO languages are essentially all the same in vision and the differences stem from practical compromises made in their implementation. As processors get faster and VM technology improves there is no longer any practical reason IMO why these languages (Java, C#, Ruby, Python, Smalltalk) could not share a common object model (the same object memory), in such a scenario the differences would only come down to syntax, and JRuby like integration could be true integration (an in-memory Java Object is equivalent to an in-memory Ruby Object).
    I don't know enough about object memory models to comment.
    Imagine compiling Java or C# to the Strongtalk VM and inspecting the in-memory objects using the Smalltalk Inspector!
    I would rather imagine compiling Java or C# or Smalltalk or LISP to the JVM - a VM that has proven speed, proven portability and proven security. I am FAR more interested in work to make this awesome platform more suitable for other languages than any attempt to bring the long-neglected (and buggy) Strongtalk VM up to speed.
  54. Hi Steve, I agree with you on vendors, standards and open source. It is not black and white, and things are changing. I must insist though that all vendors want to tie you in; it is just plain good business sense. If you look through many of the vendor-dominated standards this aspect is clear. They may call it vendor "differentiation" or suchlike, but it is the same thing. I believe that the JCP have woken up to this too, and they are looking for more non-vendor participation, probably to increase their credibility (which has taken a knock in recent years) and the quality of their standards. As for the W3C, they pretty much rubber-stamp whatever it is the big vendors propose as the next big thing.

    Whatever way you cut it, our industry is dominated by monopolies and cartels, and we have a long way to go before we reach the openness of the telecoms industry. Imagine 3G mobile phones with incompatible SIM cards. It is unthinkable. I must agree though that some standardisation is better than none. It is just not a battle I choose to fight. The EU seems to be getting tougher with Microsoft, local governments are looking at open source solutions like Linux, and emerging economies like China are choosing to regulate their software marketplace. So who knows, telecoms-like regulation may come to the software industry too.
    I don't know enough about object memory models to comment.
    Imagine compiling Java or C# to the Strongtalk VM and inspecting the in-memory objects using the Smalltalk Inspector!
    I would rather imagine compiling Java or C# or Smalltalk or LISP to the JVM - a VM that has proven speed, proven portability and proven security. I am FAR more interested in work to make this awesome platform more suitable for other languages than any attempt to bring the long-neglected (and buggy) Strongtalk VM up to speed.
    The issue here is indirection. Object memory is just an in-memory representation of an object allocated off the heap. A Smalltalk system has a collection with pointers to all objects in the system. Each object has a pointer to its class, which is just another object, and each class object has a pointer to a method dictionary, which is just a hashmap of method objects. Processes are objects too, and stack frames are objects known as Contexts. So there is a lot of indirection, but it means that you end up with a fully dynamic and reflective representation of the running objects in memory. This is how Smalltalk and Ruby do a lot of their tricks.

    The Java VM doesn't have object memory; this has been optimised out at compile time. So the JRuby implementation creates all of this object memory stuff on top of the JVM in Java code. Ruby running on the JVM is actually being interpreted by the JRuby runtime on top of the JVM. It just so happens that they believe that eventually they can do this faster than the C Ruby interpreter, which parses source code at runtime and has very little optimisation.

    So to make a Java object look like a Ruby object, the JRuby guys proxy each Java object with a real Ruby object. The Ruby object is then interpreted as I describe. Compiling Java to a VM with object memory support would simplify things greatly and get rid of the proxies. As I understand it, the proxies in JRuby are limited anyway when compared to true Ruby objects. With object memory all objects would be equal: no proxies, no translations. With a common base class Object with method support for all languages, a single runtime object could be passed between Java and Ruby seamlessly. A Java object could be introspected by Ruby in the same way that a Ruby object can. I don't believe you can do this with JRuby today. Paul.
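To make the indirection Paul describes concrete, the method-dictionary idea can be sketched in plain Java. This is a toy model under stated assumptions: ToyClass, ToyObject and send are made-up names for illustration, not any real Smalltalk or JRuby API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A toy "object memory": a class is just an ordinary object holding a
// method dictionary, and a message send is a dictionary lookup.
class ToyClass {
    final Map<String, Function<ToyObject, Object>> methodDictionary = new HashMap<>();
}

class ToyObject {
    final ToyClass toyClass;                     // pointer to the class, itself just an object
    final Map<String, Object> slots = new HashMap<>();

    ToyObject(ToyClass c) { this.toyClass = c; }

    // A "message send" is resolved at runtime by looking the selector up
    // in the class's method dictionary.
    Object send(String selector) {
        Function<ToyObject, Object> m = toyClass.methodDictionary.get(selector);
        if (m == null) throw new RuntimeException("doesNotUnderstand: " + selector);
        return m.apply(this);
    }
}

public class ObjectMemoryDemo {
    public static void main(String[] args) {
        ToyClass point = new ToyClass();
        // Methods can be added to the class at runtime - the "open class" trick.
        point.methodDictionary.put("x", self -> self.slots.get("x"));

        ToyObject p = new ToyObject(point);
        p.slots.put("x", 3);
        System.out.println(p.send("x")); // prints 3
    }
}
```

Because everything is a lookup through these dictionaries, behaviour can be changed while the program runs; the cost is exactly the extra indirection the post describes.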
  55. I believe that the JCP have woken up to this too, and they are looking for more non-vendor participation, probably to increase their credibility (which has taken a knock in recent years) and the quality of their standards.
    The JCP have been looking for non-vendor participation for as long as I can remember.
    I must agree though, that some standardisation is better than none. It is just not a battle I choose to fight.
    There is no battle. One simply opts for standards that are useful.
    The EU seems to be getting tougher with Microsoft, local governments are looking at open source solutions like Linux, and emerging economies like China are choosing to regulate their software marketplace.
    Some vendors have been pushing for quality standards (and openness) for years. Sun was one of the great pioneers for the use of common standards in IT in the 80s; the idea of using open systems and compatible UNIX implementations rather than proprietary incompatible systems. This is yet more evidence that any division between vendors and open source solutions here is meaningless.
    The issue here is indirection. Object memory is just an in-memory representation of an object allocated off the heap. A Smalltalk system has a collection with pointers to all objects in the system. Each object has a pointer to its class, which is just another object, and each class object has a pointer to a method dictionary, which is just a hashmap of method objects. Processes are objects too, and stack frames are objects known as Contexts. So there is a lot of indirection, but it means that you end up with a fully dynamic and reflective representation of the running objects in memory. This is how Smalltalk and Ruby do a lot of their tricks. The Java VM doesn't have object memory; this has been optimised out at compile time. So the JRuby implementation creates all of this object memory stuff on top of the JVM in Java code.
    Sorry, I don't understand this. You have a fully reflective representation of Java objects in memory at run time. You can inspect individual objects, run .getClass() on them, test them with instanceof, and find out what methods and fields they have. Nothing about this is optimised out.
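The runtime reflection Steve is pointing at can be shown directly with the standard java.lang.reflect API; a minimal sketch (the choice of ArrayList is just for illustration):

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        Object o = new ArrayList<String>();

        // The JVM keeps full class information for every live object...
        Class<?> c = o.getClass();
        System.out.println(c.getName());                 // java.util.ArrayList

        // ...including its type relationships...
        System.out.println(o instanceof List);           // true

        // ...and its methods, which can even be invoked reflectively.
        Method add = c.getMethod("add", Object.class);
        add.invoke(o, "hello");
        System.out.println(((ArrayList<?>) o).size());   // 1
    }
}
```

None of this information was "optimised out": it is available at runtime through the reflection API, which is the substance of the objection above.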
    So Ruby running on the JVM is actually being interpreted by the JRuby runtime on top of the JVM. It just so happens that they believe that eventually they can do this faster than the C Ruby interpreter, which parses source code at runtime and has very little optimisation.
    No. They believe they can make it faster because they intend compiling as much as possible into bytecodes, so it looks like Java classes, and can be hotspotted.
    So to make a Java object look like a ruby object the JRuby guys proxy each Java object with a real Ruby object. The Ruby object is then interpreted as I describe.


    Compiling Java to a VM with object memory support would simplify things greatly and get rid of the proxies. As I understand it the proxies in JRuby are limited anyway when compared to true Ruby objects. With object memory all objects would be equal. No proxies no translations. With a common base class Object with method support for all languages, a single runtime object could be passed between Java and Ruby seamlessly. A Java object could be introspected by Ruby in the same way that a Ruby object can.

    I don't believe you can do this with JRuby today.

    Paul.
    In principle, you don't need any kind of change of the JVM to allow tight interaction between Java and dynamic languages. Groovy already has this integration. The issue is the difference in class hierarchies between Java and Ruby. There is always going to be a mismatch here.
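The "proxy each Java object" approach quoted above can be pictured with the JDK's own java.lang.reflect.Proxy, which routes every call through a single handler much as a dynamic runtime routes message sends. This is a simplified sketch of the general idea, not JRuby's actual mechanism:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class ProxyDemo {
    public static void main(String[] args) {
        List<String> real = new ArrayList<>();

        // Every call on the proxy funnels through one handler, which can
        // log, translate, or dispatch dynamically before delegating.
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("message send: " + method.getName());
            return method.invoke(real, methodArgs);
        };

        @SuppressWarnings("unchecked")
        List<String> wrapped = (List<String>) Proxy.newProxyInstance(
                List.class.getClassLoader(),
                new Class<?>[] { List.class },
                handler);

        wrapped.add("hello");               // prints "message send: add"
        System.out.println(wrapped.size()); // prints "message send: size", then 1
    }
}
```

The limitation Paul mentions is visible here too: the proxy can only expose what the wrapped interface declares, whereas a native dynamic object could grow new methods at runtime.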
  56. Hi Steve,
    Some vendors have been pushing for quality standards (and openness) for years. Sun was one of the great pioneers for the use of common standards in IT in the 80s; the idea of using open systems and compatible UNIX implementations rather than proprietary incompatible systems. This is yet more evidence that any division between vendors and open source solutions here is meaningless.
    Some vendors ... division between vendors and open source is meaningless? You are contradicting yourself here. Sun has always been in a minority when it comes to its relationship with the developer community (choosing University-based open source BSD over commercial System V Unix, hosting Free software, etc.) and the customer community at large. This is why people were willing to trust Sun as the custodian of Java. Other vendors like IBM, Oracle and Microsoft have a very different reputation. It is this reputation that creates a very large difference in people's perception of the motives behind vendor solutions and vendor-led standards versus open source. You may not share that perception yourself, but it is widely held by many (most?). Try telling Stallman that any difference between vendors and open source is meaningless.
    In principle, you don't need any kind of change of the JVM to allow tight interaction between Java and dynamic languages. Groovy already has this integration. The issue is the difference in class hierarchies between Java and Ruby. There is always going to be a mismatch here.
    For someone who says he doesn't know much about object memory, this is a very bold statement. Here is a link to a presentation on just how the JVM works and its applicability to dynamic languages:

    http://download.microsoft.com/download/9/4/1/94138e2a-d9dc-435a-9240-bcd985bf5bd7/GiladBracha-final.wmv

    And here is a presentation on how to implement Ruby on the CLR. The issues for Ruby on the JVM are more or less the same:

    http://download.microsoft.com/download/9/4/1/94138e2a-d9dc-435a-9240-bcd985bf5bd7/JohnGough-RubyOnTheCLR.wmv

    Paul.
  57. Hi Steve,

    Some vendors have been pushing for quality standards (and openness) for years. Sun was one of the great pioneers for the use of common standards in IT in the 80s; the idea of using open systems and compatible UNIX implementations rather than proprietary incompatible systems. This is yet more evidence that any division between vendors and open source solutions here is meaningless.


    Some vendors ... division between vendors and open source is meaningless? You are contradicting yourself here. Sun has always been in a minority when it comes to its relationship with the developer community (choosing University-based open source BSD over commercial System V Unix, hosting Free software, etc.) and the customer community at large. This is why people were willing to trust Sun as the custodian of Java. Other vendors like IBM, Oracle and Microsoft have a very different reputation. It is this reputation that creates a very large difference in people's perception of the motives behind vendor solutions and vendor-led standards versus open source.

    You may not share that perception yourself, but it is widely held by many (most?).
    But you are precisely backing up my point. Not all vendors are the same. I recognise the fact, whereas you put forward broad unqualified accusations like "I am amazed that vendors can still tout 'compliance to standards and vendor support' as potential benefits after years of exploiting both mercilessly for commercial gain." And while you are throwing mud at IBM and Oracle, you are ignoring the contributions they have made to open source in recent years: IBM with Linux and Eclipse, Oracle with TopLink. Things are not as simple as you repeatedly claim. Even a single vendor can have a mixed reputation, and things can change.
    Try telling Stallman that any difference between vendors and open source is meaningless.
    Please don't distort what I was saying. I was talking about particular contexts, such as software bloat, meeting standards, and so on. This is why I used the phrase "This is yet more evidence that any division between vendors and open source solutions here is meaningless." and not "This is yet more evidence that any division between vendors and open source solutions is meaningless." I am careful what I write.
    In principle, you don't need any kind of change of the JVM to allow tight interaction between Java and dynamic languages. Groovy already has this integration. The issue is the difference in class hierarchies between Java and Ruby. There is always going to be a mismatch here.



    For someone who says he doesn't know much about object memory, this is a very bold statement.

    Here is a link to a presentation on just how the JVM works and its applicability to dynamic languages:

    http://download.microsoft.com/download/9/4/1/94138e2a-d9dc-435a-9240-bcd985bf5bd7/GiladBracha-final.wmv

    And here is a presentation on how to implement Ruby on the CLR. The issues for Ruby on the JVM are more or less the same:

    http://download.microsoft.com/download/9/4/1/94138e2a-d9dc-435a-9240-bcd985bf5bd7/JohnGough-RubyOnTheCLR.wmv

    Paul.
    Well, I might, but you keep giving me links that won't open in my Windows Media, even with the latest updates :( However, I know how Java works in relation to dynamic languages. I know the issues, and I know the details of how JRuby works with Java objects - I have even contributed a (very, very minor) change to the JRuby project.

    I was talking about the specifics of object memory for all the languages you were talking about - Smalltalk, C# and so on. The thing is, the exact way that Java object memory is implemented is specific to the implementation; for example, the precise memory size of objects in native memory can vary... but to claim that it is 'optimised out' is nonsense, as you can inspect objects at runtime, manipulate them, get meta-information and so on.

    But anyway, what is bold? To claim that Ruby has a different object hierarchy? This is true. Or to claim that Groovy works fine and dynamically with current JVMs? This is also true. You can't just compile 'Object' in Ruby into some common memory and hope. There would be classloader/namespace issues. When you have the same object names, and when they have different behaviour and security (such as the ability to be 'open' at the class and instance levels, which is different from Java - which thankfully does not allow such behaviour with Java classes), you are going to have a mismatch. Any idea of just 'putting things into the same object memory' isn't going to work. There is always going to have to be some layer of services around a language like Ruby to allow the class/instance functionality, and to isolate that functionality from other languages, even if it is just a specific ClassLoader.
  58. Hi Steve,

    When it comes to standards I think we are in violent agreement. The stamp of a "standards" body in itself doesn't mean much. What matters is the substance, quality and detail behind any standard. Then you've still got the issue of choosing an implementation, whether open source or commercial closed source. So some standards are good (best of breed, open, etc.) and some are bad (over-complex, serving vendor interests, transient and unproven technology, vendor lock-in, etc.). Some vendor implementations are good and some open source ones are bad. We each need to choose on a case by case basis. But there are no golden rules or guarantees.

    What is worse than no standard, however, is the adoption of a bad standard through herd mentality or FUD. Avoiding poor standards can be just as important as adopting good ones. So how do you know which of the countless JSRs is worth adopting? Well, it will come down to judgement, and each individual's judgement is likely to be different; coloured by their circumstances and their experiences.
    There would be classloader/namespace issues.
    This is my point: in a dynamic VM there is no class loader. There is no class verifier either.

    The links I supplied work on my machine. They are from presentations at the Lang.NET symposium this year. Here is the link: http://www.langnetsymposium.com/ Go to speakers.

    The first link was a presentation by Gilad Bracha, the leader of JSR 292. The second link is to a presentation by John Gough of Queensland University. He has taken porting Ruby to a static language VM (the CLR) much further than the JRuby guys. He has a Ruby compiler that generates CLR bytecode (something on the todo list for the JRuby guys), yet there is still a lot more to do. Sun's Gilad Bracha makes reference to John Gough's presentation in his own, and explains why pretty much the same issues apply for the JVM. Hopefully you'll get to see these. They are a real eye opener.

    Paul.
  59. Hi Steve,

    When it comes to standards I think we are in violent agreement. The stamp of a "standards" body in itself doesn't mean much. What matters is the substance, quality and the detail behind any standard. Then you've still got the issue of choosing an implementation, whether open source or commercial closed source.


    So some standards are good (best of breed, open etc) and some are bad (over complex, serving vendor interests, transient and unproven technology, vendor lock-in etc). Some vendor implementations are good and some open source ones are bad. We each need to choose on a case by case basis.
    OK.
    But there are no golden rules or guarantees.

    What is worse than no standard however, is the adoption of a bad standard through herd mentality or FUD.
    This is where we disagree. I would rather adopt a somewhat poor standard than risk using a higher-quality single-source product, as I have fallen foul of the latter approach.
    Avoiding poor standards can be just as important as adopting good ones. So how do you know which of the countless JSRs is worth adopting? Well, it will come down to judgement, and each individual's judgement is likely to be different; coloured by their circumstances and their experiences.
    Exactly. Which is why generalisations about vendors, open source and so on are not the primary issue in my view. It is about whether or not a standard is suitable, and how much weight you put on using a standard or not. I am also not sure that it need be down to individuals. I believe you can make generalisations (I know, I have been complaining about you generalising!). You can broadly judge a spec using the following criteria:

    1. Is it functionally complete enough for most purposes - will users of implementations be able to avoid much use of vendor-specific extensions? (Yes for JDO and JSF... perhaps less so for current JPA.)
    2. Does it have reasonable industry and developer support? Do more than just one or two vendors support the standard? (Yes(ish) for JDO, yes for JPA and JSF.)
    3. Are there open source implementations? (Again, yes for JDO, JPA and JSF.)

    I am sure others could add more... but I believe these are reasonably objective measures.
    There would be classloader/namespace issues.


    This is my point, in a dynamic VM there is no class loader. There is no class verifier either.
    Depends entirely on the language, and the implementation of the language. Groovy and BeanShell are certainly dynamic languages. This is also true for Kawa, dSelf, Rhino, Dawn, and many others. They all run on the JVM and they all have class loaders. There is nothing about dynamic languages that does or does not imply use of class loaders or name spaces.
    The links I supplied work on my machine.
    I am afraid all I get is 'Windows media cannot perform the requested action at this time'... which is odd, as other windows media presentations work fine. I'll see if I can troubleshoot this.
  60. Hi Steve,
    Exactly. Which is why generalisations about vendors, open source and so on are not the primary issue in my view. It is about whether or not a standard is suitable, and how much weight you put on using a standard or not.
    You've got to laugh. You were the one generalising about the benefits of standards. I was just trying to point out that the choice is not that simple.
    This is where we disagree. I would rather adopt a somewhat poor standard than risk using a higher-quality single-source product, as I have fallen foul of the latter approach.
    Well, "somewhat poor" is subjective. Also the risk of being left high and dry with a single-source implementation, with no access to the source code, is a subjective judgement too. Steve, I have my opinions, but what I don't do is try and make hard and fast rules, because there is always an exception.

    Microsoft "coercing" VB programmers to "migrate" to VB.NET is just one example of a vendor with excessive power (a monopoly) doing what they like, which comes back to my point about regulation. Most vendors could not have done this and got away with it! I wouldn't buy anything from Microsoft :^) I see this example as an exception.

    Generally you need to make a judgement balancing the benefits of an API against the likelihood of that implementation still being around in the future. Access to source code gives you more confidence. Multiple implementations give you more confidence too. Also to be balanced is the cost of implementation, and the rate at which you can deliver a working solution today with a "somewhat poor" standard API versus a faster single-sourced API. In some circumstances this could be the difference between business failure and business success. There are no hard and fast rules here. Just subtle (business) judgements.
    1. Is it functionally complete enough for most purposes - will users of implementations be able to avoid much use of vendor-specific extensions (yes for JDO, JSF... perhaps less so for current JPA). 2. Does it have reasonable industry and developer support? Do more than just one or two vendors support the standard (yes(ish) for JDO, yes JPA and JSF) 3. Are there open source implementations? (again, yes for JDO, JPA and JSF)
    Oh yes, many more. And I'm sure we would place different emphasis on each. So it is not as simple as just looking for the kite mark :^)
    Depends entirely on the language, and the implementation of the language. Groovy and BeanShell are certainly dynamic languages. This is also true for Kawa, dSelf, Rhino, Dawn, and many others. They all run on the JVM and they all have class loaders. There is nothing about dynamic languages that does or does not imply use of class loaders or name spaces.
    I can implement any language with any other language, given enough indirection. That doesn't mean that it will perform optimally though. Dynamic language VMs are optimised for dynamic languages; static language VMs are optimised for static languages. A dynamic language will never run as fast on a static language VM as it would on a dynamic language VM.

    I do not know about Groovy etc., but Ruby and Python on the CLR and JVM are both implemented by building a dynamic VM in Java/IL bytecode on top of the JVM/CLR C/C++ runtime. The use of a compiler gets rid of the runtime parsing of source files, but the resultant bytecode still needs to be "interpreted" before it can run on the native VM.

    It is a shame that you can't see the presentations yourself - another opinion would be good to check my understanding - but I have watched both a couple of times and I believe my description is mostly right.

    Paul.
  61. Hi Steve,

    Exactly. Which is why generalisations about vendors, open source and so on are not the primary issue in my view. It is about whether or not a standard is suitable, and how much weight you put on using a standard or not.


    You've got to laugh. You were the one generalising about the benefits of standards. I was just trying to point out that the choice is not that simple.
    Yes, I was. My point is indeed very simple; so simple that it seems plainly obvious - having more than one implementation of an API is a major benefit. The way that usually happens is if the API conforms to some sort of standard. So yes, I am generalising about this, as it is indeed a general point.
    Well, "somewhat poor" is subjective. Also the risk of being left high and dry with a single-source implementation, with no access to the source code, is a subjective judgement too.
    Indeed, which is why one goes for standards with multiple implementors. You have less chance of being left high and dry. We have been over this.
    Access to source code gives you more confidence.
    I think this is a vastly over-rated issue. This is only the case if your developer team feels it has the resources to take on support of the codebase of the product or tool. Although I think having the source code can be very useful (such as for diagnosing bugs), the idea that it gives any long-term security is potentially very misleading, in my view.
    There are no hard and fast rules here. Just subtle (business) judgements.
    I agree. It is just that my experience means I put different weights on these things than you do. Your experiences mean you have other opinions. The thing is that I keep reading very general and broad statements from you about vendors and standards, and I think things are more complicated. Perhaps I am just misreading things based on your writing style?
    Depends entirely on the language, and the implementation of the language. Groovy and BeanShell are certainly dynamic languages. This is also true for Kawa, dSelf, Rhino, Dawn, and many others. They all run on the JVM and they all have class loaders. There is nothing about dynamic languages that does or does not imply use of class loaders or name spaces.


    I can implement any language with any other language, given enough indirection. That doesn't mean that it will perform optimally though. Dynamic language VMs are optimised for dynamic languages, Static language VMs are optimised for static languages. A dynamic language will never run as fast on a static language VM as it would on a dynamic language VM.

    I do not know about Groovy etc., but Ruby and Python on the CLR and JVM are both implemented by building a dynamic VM in Java/IL bytecode on top of the JVM/CLR C/C++ runtime.

    The use of a compiler gets rid of the runtime parsing of source files, but the resultant bytecode still needs to be "interpreted" before it can run on the native VM.

    It is a shame that you can't see the presentation yourself, another opinion would be good to check my understanding, but I have watched both a couple of times and I believe my description is mostly right.

    Paul.
    Firstly, this is nothing to do with whether or not you have a classloader (I have no idea why that was raised). Secondly, dynamic languages don't necessarily need to be interpreted; they can compile to byte code, which can be hotspotted to native code. The 'invokedynamic' bytecode for the JVM will allow many dynamic languages to compile more or less fully, although there will be issues with, for example, multiple inheritance.

    My impression is that the issue for many dynamic languages is not so much the nature of the VM as what happens when you try and interface static and dynamic languages. All VMs have restrictions and are suited to particular languages; I think the distinction between 'static VM' and 'dynamic VM' does not help. A typical Smalltalk VM would run other dynamic languages very slowly, as features would have to be emulated. I really would suggest you take at least a brief look at Groovy; it has a lot of what Ruby has, but compiles on the JVM. Grails is a fully agile dynamic approach to web development along the lines of Rails, but with few of the disadvantages.

    I have managed to watch the presentation (it works when downloaded, but not on-line, for some reason) - there was little new to me, although I did not realise the full horror of trying to call Java methods overloaded by type from a dynamic language. I also noted the mention of the very poor speed of Ruby :) You need to realise that what you are getting from Gilad Bracha is a point of view from a major Smalltalk fan, so he would say that (as in his talk) 'dynamic languages are going to take over the world' :) There are no objective views on this, other than the observation that everyone disagrees!
  62. Hi Steve,
    You need to realise that what you are getting from Gilad Bracha is a point of view from a major Smalltalk fan, so he would say that (as in his talk) 'dynamic languages are going to take over the world' :) There are no objective views on this, other than the observation that everyone disagrees!
    Let's not blame the messenger. Gilad's presentation was very detailed and technical. He goes into invokedynamic, but that is not the only issue. A big problem is how overloading is handled in static languages versus dynamic ones; method invocation in general is very different in a dynamic language. The JVM uses the VTable idea borrowed from C++, so a method invocation is just an offset (a hard-coded number) from a fixed reference. You just can't do that in a dynamic language. Another problem is the verifier in static languages. John's presentation on Ruby and .NET presented the exact issues with regard to a specific language (Ruby) implementation; also amongst the presentations at the symposium is one on IronPython on the CLR, and the issues and general approach are the same there too.

    There are serious issues with hosting dynamic languages on a static platform. The advantage of a static language is that the compiler does more; the downside is that the runtime is just presented with a collection of bits (or bytes). The byte codes in the JVM use hard-coded offsets and magic numbers determined at compile time. The verifier checks that all these magic numbers are safe and don't present a security breach (point off into areas of memory they shouldn't), but there is little information for the runtime to go on after this. This is why the metaprogramming model in dynamic languages is so much more sophisticated. Like it or loathe it, the JVM cannot deal with things like changing an object's class at runtime, or adding methods to a class (or an object) at runtime. The design of the byte codes does not allow for this. So what you have to do is build up infrastructure to support such things.

    I need to watch the presentations again, but what I believe John Gough did was compile Ruby source into an Abstract Syntax Tree (AST) in IL bytecode. He then has a runtime that reads the AST, executing the IL byte code in the same way a dynamic VM would do. So you have byte code running byte code. This is not the same as byte code running natively on the VM. This is why Ruby on the JVM will never be as fast as Ruby on Strongtalk, for instance. Incidentally, with dynamic compilation there is good reason to believe that Java on a Strongtalk-based VM could eventually run as fast as Java on the JVM. I am not sure whether the same techniques can be applied to the JVM too (probably), but for sure a dynamic language on the JVM will not be as fast (perhaps by an order of magnitude) as the same dynamic language on a dynamic VM.

    Incidentally, the speed-up of languages like Python and Ruby that can eventually be gained from using the JVM or CLR is only possible because the current C interpreters for these languages are so slow. You have got to remember that these are open source languages, so improving on their implementation doesn't take much if you have the right technical expertise and the time and resources. Even so, as I understand it both the Python and Ruby C interpreters still outperform the JVM/CLR implementations today (2 to 3 times faster I think, but you would need to check the presentations).

    Paul.
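The "byte code running byte code" point - a compiled runtime walking a guest program represented as data, rather than the VM executing it directly - can be illustrated with a miniature expression interpreter. This is an illustrative sketch of the technique, not John Gough's or JRuby's actual design:

```java
// A miniature AST interpreter. The interpreter itself is compiled code
// (here, ordinary Java), but the guest program is walked node by node at
// runtime, which is why it cannot match code the VM executes natively.
interface Node { int eval(); }

class Lit implements Node {
    final int value;
    Lit(int v) { value = v; }
    public int eval() { return value; }
}

class Add implements Node {
    final Node left, right;
    Add(Node l, Node r) { left = l; right = r; }
    public int eval() { return left.eval() + right.eval(); }
}

public class AstDemo {
    public static void main(String[] args) {
        // Guest program: (1 + 2) + 3, represented as data, not as bytecode.
        Node program = new Add(new Add(new Lit(1), new Lit(2)), new Lit(3));
        System.out.println(program.eval()); // prints 6
    }
}
```

Every guest operation costs at least one virtual call plus an object traversal here, which is the per-step overhead the post attributes to hosting a dynamic language on a static-language VM.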
  63. Hi Steve,

    You need to realise that what you are getting from Gilad Bracha is a point of view from a major Smalltalk fan, so he would say that (as in his talk) 'dynamic languages are going to take over the world' :) There are no objective views on this, other than the observation that everyone disagrees!



    Lets not blame the messenger. Gilad's presentation was very detailed and technical.
    I think you misunderstand. I was only intending to comment on a few light comments he made at the end, not criticising the presentation.
    He goes into invokedynamic, but that is not the only issue. A big problem is how overloading is handled in static languages versus dynamic ones; method invocation in general is very different in a dynamic language. The JVM uses the vtable idea borrowed from C++, so a method invocation is just an offset (a hard-coded number) from a fixed reference.
    Actually, there is nothing that specifies how the JVM does this. This is implementation-dependent. A JVM may use a range of ways of invoking a method, depending on the situation, or whether it is for an interface, and so on.
    You just can't do that in a dynamic language.

    Another problem is the verifier in static languages. John's presentation on Ruby and .NET presented the exact issues, but with regard to a specific language (Ruby) implementation. Also among the presentations at the symposium is one on IronPython on the CLR, and the issues and the general approach are the same there too.

    There are serious issues with hosting dynamic languages on a static platform: the advantage of a static language is that the compiler does more; the downside is that the runtime is just presented with a collection of bits (or bytes). The byte codes in the JVM use hard-coded offsets and magic numbers determined at compile time. The verifier checks that all these magic numbers are safe and don't present a security breach (point off into areas of memory they shouldn't), but there is little information for the runtime to go on after this. This is why the metaprogramming model in dynamic languages is so much more sophisticated. Like it or loathe it, the JVM cannot deal with things like changing an object's class at runtime, or adding methods to a class (or an object) at runtime. The design of the byte codes does not allow for this.
    Which is why Sun is working on adding new features to the JVM.
    So what you have to do is build up infrastructure to support such things. I need to watch the presentations again, but what I believe John Gough did was compile Ruby source into an Abstract Syntax Tree (AST) in IL bytecode. He then has a runtime that reads the AST, executing the IL byte code in the same way a dynamic VM would do. So you have byte code running byte code. This is not the same as byte code running natively on the VM.

    This is why Ruby on the JVM will never be as fast as Ruby on Strongtalk, for instance. Incidentally, with dynamic compilation there is good reason to believe that Java on a Strongtalk-based VM could eventually run as fast as Java on the JVM. I am not sure whether the same techniques can be applied to the JVM too (probably), but for sure a dynamic language on the JVM will not be as fast (perhaps by an order of magnitude) as the same dynamic language on a dynamic VM.

    Incidentally, the speed-up of languages like Python and Ruby that can eventually be gained from using the JVM or CLR is only possible because the current C interpreters for these languages are so slow. You have got to remember that these are open source languages, so improving on their implementation doesn't take much if you have the right technical expertise and the time and resources.

    Even so, as I understand it both the Python and Ruby C interpreters still outperform the JVM/CLR implementations today (2 to 3 times faster I think, but you would need to check the presentations).

    Paul.
    Fair comments. However, my impression was that Gilad was far less pessimistic than you - he was talking about significant performance benefits. Check about 8 minutes in - "The performance potential of these things is really awesome". This may require some changes to the JVM, but it would retain its fundamental nature. The issues vary considerably from one dynamic language to another. There are different problems with Smalltalk (closed classes, single inheritance) than with, say, Ruby.

    My view is that you are equating hard-to-implement with slow. The two don't necessarily go together. For example, awkward method lookups can be cached.

    I am really not sure what your theme here is. If it is 'abandon the JVM, and go for Strongtalk on which everything will run well', that is never going to happen (I make a firm prediction!), and for good reason. The JVM has security features that are vital for its widespread acceptance. The way forward is to adapt the JVM to make it better, and even good, for dynamic languages. What I got from Gilad's presentation is that this is happening. Time to finish, I think. Thanks for the discussion.
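    The point that awkward method lookups can be cached is the basis of real techniques such as inline caches. Here is a minimal sketch of the idea with invented names - this is not how any particular VM implements it, only the shape of the optimisation:

```java
import java.util.Map;
import java.util.function.Function;

// Sketch of a monomorphic inline cache: a call site (which always uses the
// same selector) remembers the last receiver class and the method it
// resolved to, so repeated calls on the same class skip the dictionary
// lookup entirely. All names are illustrative.
class CallSite {
    private Object cachedClass;
    private Function<Object[], Object> cachedMethod;

    Object invoke(Map<String, Function<Object[], Object>> methodDict,
                  Object receiverClass, String selector, Object... args) {
        if (receiverClass != cachedClass) {       // miss: do the slow lookup
            cachedMethod = methodDict.get(selector);
            cachedClass = receiverClass;
        }
        return cachedMethod.apply(args);          // hit: near-direct call
    }
}
```

    As long as the receiver's class stays the same at a given call site, the cost approaches that of a static call.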
  64. Hi Steve,
    I am really not sure what your theme here is. If it is 'abandon the JVM, and go for Strongtalk on which everything will run well', that is never going to happen (I make a firm prediction!), and for good reason. The JVM has security features that are vital for its widespread acceptance. The way forward is to adapt the JVM to make it better, and even good, for dynamic languages. What I got from Gilad's presentation is that this is happening. Time to finish, I think. Thanks for the discussion.
    No, thank you. My theme is interoperability between languages. It has become clear to me that the semantics of a static language at runtime are a subset of the semantics available to a dynamic language at runtime. This being the case, in the long run for optimum interoperability it seems clear to me that it would make more sense to base your universal VM platform on the design of a dynamic VM rather than on the design of a static VM. It so happens that at this moment in history we have a lot invested in static VMs. Over time, though, the balance of this investment may change, in which case adopting a new VM becomes more attractive.

    My other point was that, along with the semantics, the object model of a static language is a subset of the object model of a dynamic language at runtime. Given this, I believe that full interoperability without the need for proxies and translations should be possible if the abstract runtime object model is based on the object memory used in a dynamic language like Smalltalk (actually the Self object memory is more abstract and probably a better template). It is easier, IMO, to implement specific instances from an abstract base (selecting a subset of the features possible) than to implement abstract semantics upon a specific base.

    In short, IMO static language runtimes provide a subset of the semantics of dynamic language runtimes (something like Self being the most abstract), so if in the future tight interoperability like you mentioned earlier really does take off, then IMO a dynamic VM provides a much better substrate. If true, what will it mean? Not sure, but it is another reason to hedge your bets.

    BTW, watch John Gough's presentation when you get time. He is not as much fun as Gilad, but he does go into the details of a specific implementation, which really brings the point home. Thanks again, and speak soon. Paul.
  65. Hi Steve, The exact details of John Gough's implementation of Ruby on the CLR slipped me before. I've just watched the video again. The compiler does a lot more than generate an AST (parsing Ruby source using a tool equivalent to yacc); it also uses a lex-like tool to generate C#. The C# is a representation of the Ruby object memory in C# (classes as objects, with method dictionaries, methods as objects, etc.), which is eventually compiled to IL byte code. The runtime then executes the object memory, following pointers to class objects, method objects, etc. Think of the loaded, compiled IL files (one per C# class, I guess) as a sort of Smalltalk image (with objects, classes, metaclasses, method dictionaries, method objects, etc.). This image is then executed by a runtime infrastructure in C# the same way a Smalltalk VM would. So my explanation was sort of correct. He shows a slide where there are about a dozen C# classes (and about half a dozen dictionaries) connected by pointers to represent the implementation of a single Ruby class:

    class A < B
      include M
    end

    So the overhead is considerable. Even so, he claims that the performance is about the same as that of the Ruby C interpreter, and they haven't tried to optimise yet. Paul.
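    The "classes as objects with method dictionaries" shape described above can be sketched in a few lines of Java. This is purely illustrative (it is not John Gough's code, and the names are invented), but it shows why even a tiny class definition expands into a web of objects and dictionaries:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative object-memory sketch: a class is itself a runtime object
// holding a method dictionary and a superclass pointer, and method lookup
// walks the superclass chain, as a Smalltalk VM would.
class RClass {
    final String name;
    final RClass superclass;                       // the B in 'class A < B'
    final Map<String, Function<Object[], Object>> methodDict = new HashMap<>();

    RClass(String name, RClass superclass) {
        this.name = name;
        this.superclass = superclass;
    }

    Function<Object[], Object> lookup(String selector) {
        for (RClass c = this; c != null; c = c.superclass) {
            Function<Object[], Object> m = c.methodDict.get(selector);
            if (m != null) return m;               // found in this class
        }
        throw new RuntimeException("method_missing: " + selector);
    }
}
```

    A method defined on the superclass is found from the subclass by following pointers at call time - the indirection that makes this representation so much heavier than native byte code.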
  66. StackOverflowException[ Go to top ]

    A StackOverflowException has occurred:
        at org.apache.ai.SteveZara.post() [SteveZara.java line 1683]
        at org.apache.ai.PaulBeckford.post() [PaulBeckford.java line 1257]
        at org.apache.ai.SteveZara.post() [SteveZara.java line 1683]
        at org.apache.ai.PaulBeckford.post() [PaulBeckford.java line 1257]
        ... (the two frames above repeat a few dozen more times) ...
        at org.apache.ai.SteveZara.post() [SteveZara.java line 1683]
        at org.apache.ai.PaulBeckford.criticizeEstablishment() [PaulBeckford.java line 275]
        at org.apache.ai.SteveZara.defendStandard() [SteveZara.java line 791]
        at org.apache.jsf.Detractor.flameOn() [Detractor.java line 612]
        at org.apache.jsf.Supporter.autopost() [Supporter.java line 612]
        at org.apache.community.Forums.main() [Forums.java line 482]
  67. Re: StackOverflowException[ Go to top ]

    A StackOverflowException has occurred:
    Fair point......
  68. Re: StackOverflowException[ Go to top ]

    Sorry Steve, I really couldn't resist ;-) Peace, Cameron Purdy Tangosol Coherence: Clustered Cache for Java
  69. Re: StackOverflowException[ Go to top ]

    Sorry Steve, I really couldn't resist ;-)

    Peace,

    Cameron Purdy
    Tangosol Coherence: Clustered Cache for Java
    No, you were right! Some debates are better carried on in e-mail or on personal blogs, where long discussions can drift into all kinds of areas without potentially boring a considerable number of readers..... Paul - you have my e-mail if you would like to continue this discussion at any time.
  70. Hi Steve,

    Secondly, even though there are new APIs for EJBs, the old APIs are still supported. That is what matters for those writing software that is expected to last for a long time.



    This is an interesting sentence. What do you mean by supported? If a working implementation of the version of the API is still available does that count as supported?


    Obviously.

    I clearly stated that JSF is still likely to be supported by vendors in years to come. But I'm talking about a different investment. The intellectual investment in the ideas and concepts behind an API and the practical investment in learning how best to work through the practical issues with any given implementation. There is no guarantee that vendors will support this investment in the future. I've invested countless hours mastering the EJB1.0/2.0/2.1 APIs and the finer details of the Weblogic 4.5/6.0/7.1 class loaders.

    There was no guarantee that this investment would be protected. In fact, it was almost a certainty that the issues with class loading in WebLogic needed to be rediscovered afresh with each new release. And the only thing the EJB 3.0 API has in common with EJB 2.1 is the name.


    In contrast, until recently I had tyres for a car I bought ten years ago that would still fit a modern car today.


    The word standard is much abused in the Software Industry, and in reality it seldom means what we take it to mean.

    Paul.


    EJB 3.0 may well be a different API in some respects, but a certified JEE5 server must still support EJB 2.1, and there is a migration path to EJB 3.0. One thing that is improving with specifications coming out of the JCP is that there are fewer vendor-specific issues. For example, code using JDO 2.0 and EJB 3.0/JPA implementations should be (and in my experience is) pretty easily transferred between vendors. One can then learn general approaches to fine-tuning rather than vendor-specific approaches. Much work is being put in to overcome such issues in JSF, so that components from any source can be used together. Anyway, having issues with fine details is not the same as an entire API being dropped. Believe me, individual issues with different implementations are nothing compared to having to deal with long-abandoned legacy APIs which were supported by a now non-existent single vendor or developer group.
    Hi Steve, I do not want to dwell on this particular subject too long, but I believe you are oversimplifying what is in effect a very complex mix of issues. My point is that as far as the future is concerned there aren't any guarantees. As a contractor I have worked for a large number of companies, and the maintainability of their legacy code has had little to do with standards. I advocated the adoption of J2EE standards back in 1998. I quickly began to understand that for some subset of J2EE that was a good idea, and for others it wasn't. The most important issue IMO over the years is that the APIs used were well designed and well implemented, and a good fit for the task for which they were selected. Also, another thing that seems to be important is that a given implementation has broad developer community support, whether the implementation has been standardised or not.

    The issue of whether an implementation completely disappears off the map can be overcome with access to the source code. In fact, this is why escrow was set up in the first place. Access to source code addresses several support issues. Vendors can develop partial deafness when you request bug fixes for an implementation that is no longer flavour of the month. Whether you get that fix or not depends on their commercial priorities, not yours. Even if they are still listening, your support problem may become their commercial opportunity. A number of times when I have had problems with a vendor implementation, they went into denial and said it was our code, when clearly it wasn't. They would then sell us a consultant to 'help us' who would cost the earth. After finding no fault in our code, the consultant would then offer to sell us something else, because we had adopted the wrong strategy, or fob us off with some promise of a future patch.

    When I first started using open source, it was a revelation when I found that I could debug problems into library and framework code myself. Exploring all the code revealed two things: 1. The framework wasn't that complicated, and given the time I could write it myself. 2. It is easier to find and fix a bug in the framework and submit that fix than to read reams of vendor documentation, trawl through known bugs online, visit developer forums for tips, download and try out patches to see if they help, and finally submit a support call just to be told that it is your fault and you don't know what you're doing.

    I digress, but this is borne of years of hard-earned experience. I would not go back there, and I am amazed that vendors can still tout 'compliance to standards and vendor support' as potential benefits after years of exploiting both mercilessly for commercial gain. Of course some vendors are better than others, but like I say it is a complex set of issues and there is no golden rule! Paul.
  71. I am aware of Selenium and other web testing tools, but these tend to fall into the acceptance/functional testing arena. What concerns me is unit testing and TDD in particular.
    Well, Selenium wrapped in STIQ could do the trick http://storytestiq.sourceforge.net/
  72. Companies like Google and Amazon tend to follow the KISS principle. Their sites are extremely simple and only use fancy things like Ajax where absolutely necessary (e.g. Google Earth). The rest of the time they stick to plain old HTTP/HTML and CGI with a LAMP technology stack.
    I'm guessing you're joking, since both Google and Amazon have been using predominantly Java for server-side development for years. Same with Yahoo, AOL, Expedia, and dozens more of the most prominent web properties. I have no idea who is using JSF or even JSP, and none of these companies is fortunate enough to have a single, simple technology stack. Peace, Cameron Purdy Tangosol Coherence: The Java Data Grid
  73. Do any of the listed sites have a WAP version based on JSF rendering?
    I don't know. I am not sure how I would tell, apart from attempting to contact the IT departments of each company and asking them... But what is the issue? JSF is available; it is widely supported (being a standard part of JEE5); its ability to render appropriately designed pages to alternate client technology has been clearly demonstrated, and there are high-quality components available that can do this. This flexibility and extensibility (a major feature of JSF that has allowed frameworks like facelets to be developed) has allowed many JSF implementors to incorporate AJAX. Undoubtedly it will allow JSF to adapt to future client-side technologies as they arise. Basing your choice of web framework on what that selected group of sites do or don't use does not seem a useful approach to me - I don't see myself developing a website that would have the same size or number of users as Amazon or EBay! Also, you are going to have to put in a lot of research trying to find out the range of frameworks that each company uses....
  74. well sure[ Go to top ]

    BTW Does anybody know ONE well known site that works on JSF?
    There are quite a number although I suppose it depends on what you mean by well known. A few examples:
  75. Re: The Common Theory Of Everything[ Go to top ]

    Each time you have to add a flash, applet, javascript or even plain HTML to your web application the portability of interfaces disappears.


    Yes, but this is not necessarily the way things have to be.

    Do you know many sites that were written without using hard coded/static HTML?

    --Mark.


    I personally know of a couple, and they are for internal use. There aren't many, to be sure, but the point is that JSF has the potential to allow such sites to be developed. I am not saying that this is a good way to develop most sites; just that it is possible. I really like the potential of JSF to allow rendering to a range of different client presentation technologies - even though I have never once used it!


    That's what I'm talking about. JSF is too theoretical; it's overengineering to me. While GWT meets the needs of real web applications.

    --Mark
    I would argue that ICESoft fulfills that potential. Standard JSF tags compatible with servers from Tomcat to WebLogic, compatible with MyFaces, yet it allows for the partial page refreshes that someone mentioned by providing what I believe is a different render kit. Tree controls, progress indicators, tabs, menus, etc., in addition to asynchronous server-side updates. Their sample apps look good. Soon, I will try their community version.
  76. Re: The Common Theory Of Everything[ Go to top ]

    I personally know of a couple, and they are for internal use. There aren't many, to be sure, but the point is that JSF has the potential to allow such sites to be developed. I am not saying that this is a good way to develop most sites; just that it is possible. I really like the potential of JSF to allow rendering to a range of different client presentation technologies - even though I have never once used it!
    There's your problem right there. Isn't that one of the cardinal rules of software design? That is - only design and code exactly the features you need and none that you don't. I hate to say it but it shows ;)
  77. Re: The Common Theory Of Everything[ Go to top ]

    I personally know of a couple, and they are for internal use. There aren't many, to be sure, but the point is that JSF has the potential to allow such sites to be developed. I am not saying that this is a good way to develop most sites; just that it is possible. I really like the potential of JSF to allow rendering to a range of different client presentation technologies - even though I have never once used it!


    There's your problem right there. Isn't that one of the cardinal rules of software design? That is - only design and code exactly the features you need and none that you don't. I hate to say it but it shows ;)
    Just because JSF implementations can include the option for different rendering, does not mean that an implementation has to. Implementations can target specific client technologies and handle them well. This is just like Java itself - it is available on a wide range of platforms, but individual VM implementations don't cover all the range. The only way that the flexibility of JSF in this area can be a problem for developers is potential additional complexity in the construction of components. I have not yet found the need to implement a component myself, so I have had no problem with this issue. Also, that is only, as far as I know, a cardinal rule of some specific approaches to software design. My view is that a good approach to design is to build in flexibility...
  78. Re: The Common Theory Of Everything[ Go to top ]

    Just because JSF implementations can include the option for different rendering, does not mean that an implementation has to. Implementations can target specific client technologies and handle them well. This is just like Java itself - it is available on a wide range of platforms, but individual VM implementations don't cover all the range.

    The only way that the flexibility of JSF in this area can be a problem for developers is potential additional complexity in the construction of components. I have not yet found the need to implement a component myself, so I have had no problem with this issue.

    Also, that is only, as far as I know, a cardinal rule of some specific approaches to software design. My view is that a good approach to design is to build in flexibility...
    And what does it say about a technology when you find that the one core thing it does - provide reusable components - is hard to implement? Doesn't the irony just make you sick? ;)
  79. Re: The Common Theory Of Everything[ Go to top ]

    Just because JSF implementations can include the option for different rendering, does not mean that an implementation has to. Implementations can target specific client technologies and handle them well. This is just like Java itself - it is available on a wide range of platforms, but individual VM implementations don't cover all the range.

    The only way that the flexibility of JSF in this area can be a problem for developers is potential additional complexity in the construction of components. I have not yet found the need to implement a component myself, so I have had no problem with this issue.

    Also, that is only, as far as I know, a cardinal rule of some specific approaches to software design. My view is that a good approach to design is to build in flexibility...


    And what does it say about a technology when you find that the one core thing it does - provide reusable components - is hard to implement? Doesn't the irony just make you sick? ;)
    I fail to see why this is ironic. If you take a look at other approaches based on components the number of developers who are component users usually greatly outweighs the number of developers who produce components. For this to be the case there has to be a rich set of components available, and this certainly seems to be the case with JSF. But anyway, JSF components are not hard to implement; they simply involve more than just a few lines of code. However, there are plenty of resources that demonstrate how this can be done, and it is well within the capabilities of a competent Java developer. Of course, if you really wish to stick with HTML, then facelets makes implementation of JSF components trivial, and removes the need to write Java code at all.
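    As a concrete illustration of that last point, a Facelets composition component is just an XHTML fragment plus a taglib entry. The file names, tag name and namespace below are invented for the example; only the ui:composition mechanism itself comes from Facelets:

```xml
<!-- tags/labeledInput.xhtml: a hypothetical reusable component,
     written entirely in markup (no Java code) -->
<ui:composition xmlns="http://www.w3.org/1999/xhtml"
                xmlns:ui="http://java.sun.com/jsf/facelets"
                xmlns:h="http://java.sun.com/jsf/html">
    <h:outputLabel for="field" value="#{label}"/>
    <h:inputText id="field" value="#{value}"/>
</ui:composition>

<!-- demo.taglib.xml: registers the file above as a tag -->
<facelet-taglib>
    <namespace>http://example.com/demo</namespace>
    <tag>
        <tag-name>labeledInput</tag-name>
        <source>tags/labeledInput.xhtml</source>
    </tag>
</facelet-taglib>
```

    A page that imports that namespace could then write something like <demo:labeledInput label="Name" value="#{bean.name}"/> and get the label/input pair expanded in place.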
  80. Re: The Common Theory Of Everything[ Go to top ]

    I fail to see why this is ironic. If you take a look at other approaches based on components the number of developers who are component users usually greatly outweighs the number of developers who produce components. For this to be the case there has to be a rich set of components available, and this certainly seems to be the case with JSF.

    But anyway, JSF components are not hard to implement; they simply involve more than just a few lines of code. However, there are plenty of resources that demonstrate how this can be done, and it is well within the capabilities of a competent Java developer.

    Of course, if you really wish to stick with HTML, then facelets makes implementation of JSF components trivial, and removes the need to write Java code at all.
    Ok, so then can you explain how this "simple" development style is proven by reading this article? http://radio.weblogs.com/0118231/2006/10/10.html#a741 It looks to be about the size of a nice book, and it's only a tutorial. I'm not convinced what you're saying is actually true.
  81. Re: The Common Theory Of Everything[ Go to top ]

    I fail to see why this is ironic. If you take a look at other approaches based on components the number of developers who are component users usually greatly outweighs the number of developers who produce components. For this to be the case there has to be a rich set of components available, and this certainly seems to be the case with JSF.

    But anyway, JSF components are not hard to implement; they simply involve more than just a few lines of code. However, there are plenty of resources that demonstrate how this can be done, and it is well within the capabilities of a competent Java developer.

    Of course, if you really wish to stick with HTML, then facelets makes implementation of JSF components trivial, and removes the need to write Java code at all.


    Ok, so then can you explain how this "simple" development style is proven by reading this article? http://radio.weblogs.com/0118231/2006/10/10.html#a741

    It looks to be about the size of a nice book, and it's only a tutorial. I'm not convinced what you're saying is actually true.
    Well, firstly, that is not about developing components, it is about designing with existing components; specifically, it is about visually developing a rich data-linked application using JDeveloper, with complete step-by-step instructions. If you favour development using visual tools, this is a good approach, and certainly is simple (any powerful visual tool will look complex at first glance if you are taken through it step by step). There are simpler tools, such as Studio Creator - I have seen total novices at web development create data-linked applications very quickly with this product. My preferred way to develop with JSF is to code pages directly, in which case it is no more complex, in my experience, than most other web frameworks. I believe what I am saying is true; but then I would, as I actually use JSF for large-scale projects. Also, as I said, the only relatively complex aspect of JSF is the creation of components, and that need not be too hard, as shown by: http://www-128.ibm.com/developerworks/java/library/j-jsf4/
  82. Isn't that one of the cardinal rules of software design? That is - only design and code exactly the features you need and none that you don't. I hate to say it but it shows ;)
    While I am certain that such a cardinal rule is important to quote on occasion, I for one cannot think of any software worth using that was fashioned in such a manner. ;-) Peace, Cameron Purdy Tangosol Coherence: The Java Data Grid
  83. Re: The Common Theory Of Everything[ Go to top ]

    I really like the potential of JSF to allow rendering to a range of different client presentation technologies - even though I have never once used it!
    I'm looking forward to the JSF->SecondLife renderkit. :o) Kit
  84. The ability to implement interfaces which can fall back to alternatives when JavaScript is not available.
    And we will use Asynchronous JavaScript (Ajax) without JavaScript :-) Dmitry Non-existed components for Struts
  85. I would recommend not to use JSF at all.

    If you want AJAX solution use GWT.

    GWT allows you to test the UI - JSF doesn't.

    GWT allows you to write a server-stateless UI web application - JSF doesn't (writing a stateless UI application simplifies clustering).

    GWT is easy to use (IMHO JSF + AJAX is too complex).

    GWT is FREE - the listed JSF AJAX frameworks are not.

    so what are the reasons to use JSF + AJAX when you have GWT?

    --Mark
    By using GWT you're completely forced to use DTOs. If you're using JSF, Tapestry or any other server-side framework, you can forget about that layer and use entity objects directly.
    DTOs are great and provide a much cleaner architecture, but require another layer you have to deal with (and all the libs I've seen to deal with DTO<=>Entity mapping are quite rudimentary).
    Most projects can work well using entity objects directly (JPA-persistent objects, session-bound Hibernate objects...), but with GWT you lose this ability because your objects have to go over the wire - it's the same as developing a three-tier desktop app. (Don't throw a detached Hibernate object over the wire... it's ugly.)
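    The DTO layer under discussion can be sketched as follows. The class names are hypothetical; the point is only that the persistence-managed entity (which may drag ORM state along with it) is copied into a plain serializable object before crossing the wire:

```java
import java.io.Serializable;

// Illustrative sketch of an entity/DTO split; all names are invented.
class CustomerEntity {            // server-side, persistence-managed
    Long id;
    String name;
    Object lazyOrders;            // stands in for an ORM lazy collection
}

class CustomerDto implements Serializable {   // safe to send to the client
    Long id;
    String name;
}

class CustomerMapper {
    // The extra layer the poster mentions: copy field by field, deliberately
    // leaving the lazy ORM state behind.
    static CustomerDto toDto(CustomerEntity e) {
        CustomerDto dto = new CustomerDto();
        dto.id = e.id;
        dto.name = e.name;
        return dto;
    }
}
```

    It is boilerplate, but it guarantees that nothing unserializable (or lazily loaded) ever reaches the GWT client.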
  86. Most projects can work well using entity objects directly (JPA-persistent objects, session-bound Hibernate objects...), but with GWT you lose this ability because your objects have to go over the wire - it's the same as developing a three-tier desktop app. (Don't throw a detached Hibernate object over the wire... it's ugly.)
    Yes, the DTO is the price we have to pay. The thing I don't like in GWT is the JDK 1.4 restriction for client classes. But otherwise, developing a GWT application is as simple as developing a thick client application. --Mark
  87. Most projects can work well using entity objects directly (JPA-persistent objects, session-bound Hibernate objects...), but with GWT you lose this ability because your objects have to go over the wire - it's the same as developing a three-tier desktop app. (Don't send a detached Hibernate object over the wire... it's ugly.)


    Yes, DTOs are the price we have to pay.
    The thing I don't like in GWT is the JDK 1.4 restriction for client classes.
    Other than that, developing a GWT application is as simple as developing a thick-client application.

    --Mark
    I've gotten into numerous discussions on this one - one of the main issues is that when we deal with client-side components, or UI widgets in general, it's not your business model (DTO) that should be transferred, but a UI model specific to the widget. JSF gets this for free by virtue of its EL bindings... in the case of some of the new JSF-AJAX components, the widget's model is transferred, not some specialized business DTO that requires development.
  88. By using GWT you're completely forced to use DTOs. If you're using JSF, Tapestry or any other server-side framework, you can forget about that layer and use entity objects directly.

    DTOs are great and provide a much cleaner architecture, but they require another layer you have to deal with (and all the libraries I've seen for DTO<=>entity mapping are quite rudimentary).

    Most projects can work well using entity objects directly (JPA-persistent objects, session-bound Hibernate objects...), but with GWT you lose this ability because your objects have to go over the wire - it's the same as developing a three-tier desktop app. (Don't send a detached Hibernate object over the wire... it's ugly.)
    +1. You have to watch what you're doing, as it might actually kill the one great advantage that GWT potentially has: scalability. I like GWT, but it's not the perfect tool either.
  89. GWT databindings?[ Go to top ]

    What I like about JSF is that it offers me simple databinding and validation; I'm afraid I don't get that from Google.
  90. Unstable Mix[ Go to top ]

    The "rich client" term describes the behavior of an app on the client device, not the server. JSF was originally designed to be a server-driven technology. Now JSF vendors have retrofitted their server-driven components to be "Ajaxified" in order to remain competitive. JSF does a great job of abstracting HTTP's request/response model into a rich component model designed for the server side: the server handles the event lifecycles, component management, and session management. Ajax frameworks and Ajax-based components, on the other hand, are designed for the client. They provide levels of abstraction for event lifecycles and component management on the client side. So mixing these two technologies (which mostly solve the same issues) can (and will) create new challenges for your development team. These "Ajaxified" JSF frameworks look great in demos. However, once you throw them into the trenches, they seem to choke. Invariably, you end up writing custom JavaScript code to compensate for their shortcomings. Bottom line, if you want rich Ajax-based thin-client applications:
    - You must know/learn JavaScript/CSS - period
    - Use a proven Ajax client-side framework (Yahoo, Google, Dojo, Rico, Backbase, etc.)
    - Consider data-remoting with DWR
    Or, alternatively, consider non-Ajax thin-client solutions:
    - Still learn/know JavaScript/CSS
    - Implement a Flash-based solution using OpenLaszlo or Adobe Flex
    Or, if you want to leverage your Java knowledge to build a thicker client:
    - Look into the NetBeans Platform and WebStart (Swing-based)
    - Eclipse RCP (learn/know SWT)
    - Jaxx (XML-driven Swing)
    vlv
  91. One more promising Framework[ Go to top ]

    I think the Oracle-backed Apache Trinidad holds some promise as a rich client JSF/AJAX based framework. Oracle's original press release regarding their contribution stated that the goal of the Trinidad project was to develop a set of Ajax-enabled rich user interface components.
  92. Re: One more promising Framework[ Go to top ]

    I think the Oracle-backed Apache Trinidad holds some promise as a rich client JSF/AJAX based framework. Oracle's original press release regarding their contribution stated that the goal of the Trinidad project was to develop a set of Ajax-enabled rich user interface components.
    This is the new name for the excellent Oracle ADF Faces JSF implementation and component set, which was donated to the Apache MyFaces project. There are around 100 components in Trinidad, each of which can currently render to either HTML or WML.
  93. Is it AJAX JSF Frameworks Review?[ Go to top ]

    I have just one question for the author: if you are comparing component libraries, why do you call it a "Frameworks Review"?