Discussions

News: Jetty 5 released and potential future mapped out

  1. Jetty 5 released and potential future mapped out (18 messages)

    Greg Wilkins has announced the release of version 5.0 of the popular Jetty servlet container. He has also provided some valuable insight into the future of Jetty, and how it might take advantage of newer Java features such as NIO.
    JettyExperimental now implements most of HTTP/1.1 in a push-pull architecture that works with either BIO or NIO. When using NIO, gather writes are used to combine header and content into a single write, and static content is served directly from mapped file buffers. An advanced NIO scheduler avoids many of the NIO problems inherent in a producer/consumer model.
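    The gather-write idea mentioned above can be sketched with plain java.nio. This is an illustrative sketch, not Jetty's actual code; the class and method names are made up:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.GatheringByteChannel;
import java.nio.charset.StandardCharsets;

// Illustrative sketch: combine the response header and body into one
// gathering write, instead of issuing two separate channel writes.
public class GatherWriteSketch {
    public static long writeResponse(GatheringByteChannel channel,
                                     String header, ByteBuffer body)
            throws IOException {
        ByteBuffer headerBuf =
                ByteBuffer.wrap(header.getBytes(StandardCharsets.ISO_8859_1));
        // A single call hands both buffers to the channel (writev under
        // the hood on most platforms). A real server would loop until
        // both buffers are fully drained.
        return channel.write(new ByteBuffer[] { headerBuf, body });
    }
}
```

    Serving static content "directly from mapped file buffers" is the same trick with the body buffer coming from FileChannel.map(), so the file bytes never pass through the Java heap.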

    Thus JE is ready as a platform to experiment with the content API ideas introduced above. I plan to initially work toward a pure content based application API and thus to discover what cannot be easily and efficiently fitted into that model. Hopefully what will result is lots of great ideas for the next generation servlet API and a great HTTP infrastructure for Jetty6.
    What would you like to see in Jetty, or from the Servlet API itself?

    Read more on Greg's blog

    Threaded Messages (18)

  2. From a "code perspective," Jetty is still my favorite. You can tell that Greg & co. really put a lot of love into making the code readable and well-done. Anyone who wants to learn about the non-complexities of a Servlet Container (i.e. how simple it really is) should look at Jetty.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  3. From a "developer's perspective," Jetty is great as well. Great for running your J(Web)Unit tests and debugging your web apps.

    The Eclipse plugin JettyLauncher just moved to a separate project: http://jettylauncher.sourceforge.net. Not only does the new version support Jetty 5, but it also supports Jetty Plus, which is great for people developing apps that use JNDI etc.

    Eelco Hillenius
  4. slightly odd thought

    I've been pondering this question for a while. Does it make sense to create a high-availability/high-performance specification that uses a SEDA-style architecture? I've been thinking it's just too daunting to retrofit the servlet specification to support an event-driven approach using non-blocking IO and lots of resource sharing. Having read through SEDA and Haboob on several occasions to learn exactly how it works, I find it elegant and very powerful.

    The servlet approach is easy for new programmers to pick up, but at a certain point, the servlet container just can't handle any more connections. Most people don't actually need to support crazy loads or handle ten thousand concurrent connections. It's probably a crazy idea, but what do people think?
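    A SEDA stage, in the sense discussed above, is roughly a bounded event queue feeding its own worker thread(s). A minimal sketch, with names of my own invention rather than anything from the SEDA codebase:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Consumer;

// Minimal SEDA-style stage: events arrive on a bounded queue and are
// processed by a dedicated worker thread. A full SEDA implementation
// would also adapt the thread count per stage and shed load when the
// queue fills, which is where the "well-conditioned" behavior comes from.
public class Stage<E> {
    private final BlockingQueue<E> queue;

    public Stage(int capacity, Consumer<E> handler) {
        this.queue = new ArrayBlockingQueue<>(capacity);
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    handler.accept(queue.take());
                }
            } catch (InterruptedException e) {
                // Stage shut down.
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    // Admission control: returns false instead of blocking when the
    // stage is saturated, so the caller can drop or reroute the event.
    public boolean enqueue(E event) {
        return queue.offer(event);
    }
}
```

    Chaining stages (parse, query, render, write) gives you the interconnecting-queues structure from the SEDA and Haboob papers.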
  5. slightly odd thought

    I think you are 100% spot on. Applications need a rational, scalable architecture. Servlets are just an entry point for requests and responses, like RMI, SOAP, or whatever. Better stuff is needed underneath.

    For some of my thoughts, take a look at http://www.possibility.com/epowiki/Wiki.jsp?page=AppBackplane.
  6. slightly odd thought

    Thanks for the link, it's interesting. I'll have to read it more thoroughly and compare it to SEDA and other event-driven approaches :)
  7. I have a question.

    Has/Is Jetty used in production apps? Small or medium?
  8. Definitely, I know of at least one *large* deployment that uses Jetty as the webserver and servlet container. In the past, Jetty enjoyed better performance than its competitors (mainly Tomcat), but after large refactorings on the Tomcat code base, the difference is less noticeable now.

    Tomcat 5 is probably a reasonable competitor to Jetty 5, but older versions don't come close to the reliability and predictability of Jetty.
    Has/Is Jetty used in production apps? Small or medium?
    I can't tell if you're joking or not, but Jetty's been used in production apps for years.

    I've personally seen it being used in one very big cluster. (Coherence includes HttpSession and ServletContext clustering support for Jetty.)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  10. Has/Is Jetty used in production apps? Small or medium?
    I would add that Tomcat and Resin both get asked this question quite frequently. Unless it's from one of the top 5 commercial providers, many people still think it's not used in production or in large deployments.
    I know of a few large sites getting 5 million+ page views a day using one of the three servlet containers. An easy way to find out is to use JMeter to send a request to a site and look at the raw response and HTTP headers.
  11. An easy way to find out is to use JMeter to send a request to a site and look at the raw response and HTTP headers.
    Sometimes... but the site I'm thinking of re-writes those headers (actually, it strips them) to hide its implementation details. Other sites have Apache (etc.) in front, but the content comes from a servlet container, so you cannot tell from "outside".

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  12. An easy way to find out is to use JMeter to send a request to a site and look at the raw response and HTTP headers.
    Hehe, it's funny you mention that. I've had several people ask me, "how can we hide the fact we're using server X?" I've definitely come across the situation you describe where someone puts Apache in front to hide things, or uses a router to hide which server they're using.

    Whenever I see someone do that, I have to laugh. Just because site X uses server X doesn't mean someone else can magically achieve the same level of service. It's as if server X is the secret sauce to great performance. Back in '99, a co-worker attended a talk given by one of the architects of MP3.com before they sold out. He took good notes and gave us a detailed summary of the talk. Although they were using Apache, MySQL, and other OSS technologies, it was how they applied the technology that allowed them to scale rapidly and maintain fast response times.
  13. Just curious, but if you use a SEDA architecture or a push/pull architecture, don't you break with the common one thread per request scenario that us developers have come to depend on?

    For example, take a framework like Spring (or JBoss, Struts, etc.). These frameworks will often use ThreadLocals to maintain context information regarding transactions, logging, security, you name it, and to propagate the context throughout the framework. Heck, we use ThreadLocals in our application code to pass context.

    So, won't switching to a non-thread-based request model render all of these frameworks ineffective?
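    The ThreadLocal pattern being discussed looks roughly like this. A generic sketch with made-up names, not Spring's or JBoss's actual classes:

```java
// Generic sketch of the thread-bound context pattern many frameworks use:
// the container sets the context at the start of a request, and any code
// running on that thread can read it without it being passed explicitly.
public final class RequestContext {
    private static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    public static void set(String userId) { CURRENT.set(userId); }

    public static String get() { return CURRENT.get(); }

    // Must be called when the request finishes, or the value leaks to
    // the next request served by this pooled thread.
    public static void clear() { CURRENT.remove(); }
}
```

    The pattern silently assumes one request stays on one thread from start to finish, which is exactly the assumption an event-driven container breaks.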
  14. one thread per request is broken

    > don't you break with the common one thread per request scenario that us developers have come to depend on?

    It needs to be broken. These frameworks force an application architecture. Your application architecture shouldn't be determined by a servlet or a database or anything but the needs of your application.

    Sure, a single-threaded approach may work fine for a stateless web back end.

    But what if you are doing a real application on the backend, like handling air traffic control or a manufacturing process?

    In these cases a single-threaded approach makes no sense, because a web page is just one of a thousand different events an application will be handling.

    All events are not created equal. Threads, queues, priorities, CPU limits, batching, etc. are all tools you can use to handle it all.

    It took me a while to figure out why I was having problems with certain frameworks. It is because they hard-code a threading architecture into your apps.

    If I want an object to participate in transactions from multiple threads, Hibernate would barf, saying an object can't be in more than one session. Or an AOP approach would just assume it knew my transaction scope.

    That perplexed me until I saw that everything works that way. It makes some sense as the default mode for simple web apps.

    But if I have work to do that I want to handle smartly, I can't use the common frameworks.

    Why different threads? Read the SEDA papers for a good introduction.

    It has a lot to do with viewing your application performance as a whole, instead of a vertical slice in time. With a multi-threaded approach you can create the idea of quality of service. You can have certain work done at higher priorities. You can aggregate work together even though it came in at different times. You can green-light high-priority traffic. You can reschedule lower-priority traffic. You can drop duplicate work. You can limit the CPU usage for work items so you don't starve other work. You can do lots of things, none of which you can do with a single task that runs until completion.
  15. one thread per request is broken

    > don't you break with the common one thread per request scenario that us developers have come to depend on?

    It needs to be broken.
    "Needs to be broken" aside...

    My point was relating to Jetty, the Servlet Container, and its redesign to a selector/event based approach. IMO this will cause a great deal of existing infrastructure to stop working.

    I'm asking if this is true, or is there a work-around available?
    Read the SEDA papers for a good introduction. It has a lot to do with viewing your application performance as a whole, instead of a vertical slice in time. With a multi threaded approach you can create the idea of quality of service. <...snip...> You can do lots of things, none of which you can do with a single task that runs until completion.
    I read Matt Welsh's SEDA documents a couple of years ago. The concept is stages that each perform a discrete task, with interconnecting queues to propagate the request from stage to stage. All well and good, but most SEDA-type designs feature non-blocking I/O on the front-end. Not sure where multi-threaded comes into play here; typically these systems will have only one or two threads allocated per CPU.

    Hence the question: in an asynchronous I/O implementation of a servlet container, how can frameworks or applications layered on top leverage the ThreadLocal concept that so many of them depend on for basic context propagation?
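    One common workaround for the problem raised above (my own sketch, not anything the servlet spec defines) is to snapshot the thread-local context when the work is created and reinstall it on whichever thread picks the work up:

```java
// Sketch: instead of relying on thread identity, the context is captured
// on the submitting thread and reinstalled around execution on whatever
// stage/worker thread eventually runs the task.
public class ContextCapturingTask implements Runnable {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void setContext(String value) { CONTEXT.set(value); }
    public static String getContext() { return CONTEXT.get(); }

    private final String captured;   // snapshot from the submitting thread
    private final Runnable delegate;

    public ContextCapturingTask(Runnable delegate) {
        this.captured = CONTEXT.get();
        this.delegate = delegate;
    }

    @Override
    public void run() {
        String previous = CONTEXT.get();
        CONTEXT.set(captured);            // restore on the worker thread
        try {
            delegate.run();
        } finally {
            CONTEXT.set(previous);        // leave the worker thread clean
        }
    }
}
```

    This keeps framework code that reads the ThreadLocal working, at the cost of the container wrapping every piece of work it hands across threads.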
  16. one thread per request is broken

    I'm asking if this is true, or is there a work-around available?

    I don't know.

    > All well and good, but most SEDA-type designs feature non-blocking I/O on the front-end. Not sure where multi-threaded comes into play here; typically these systems will have only one or two threads allocated per CPU.

    That doesn't really impact the backend, which is the processing of the work. SEDA will add threads to stages if it needs to. That's part of the adaptive part.

    > Hence the question: in an asynchronous I/O implementation of a servlet container, how can frameworks or applications layered on top leverage the ThreadLocal concept that so many of them depend on for basic context propagation?

    From an app perspective, an async IO implementation can be transparent. You block. I do everything async behind the scenes. I get a complete response, give the result to you, and unblock you. Nothing needs to change.
  17. one thread per request is broken

    From an app perspective an async IO implementation can be transparent. You block. I do everything async behind the scenes. I get a complete response, give the result to you, and unblock you. Nothing needs to change.
    I understand what you're saying, but it seems very counterproductive to add NIO to a servlet container like Jetty, when you just have to resort to a thread-per-request blocking IO approach once the full request has been received.

    Perhaps Greg can comment, but I would draw the conclusion that the performance impact of a NIO frontend is minimal in a servlet-based application, because the thread-per-request paradigm probably still must exist. It's only been pushed back to the handling of the HTTP request instead of the socket communication.
  18. one thread per request is broken

    > I understand what you're saying, but it seems very counterproductive to add NIO to a servlet container like Jetty, when you just have to resort to a thread-per-request blocking IO approach once the full request has been received.

    I guess it is an application stack.

    You are still getting a win because Jetty is more capable.

    If you could extend that model all the way down into the apps, then there would be some win. But such a radical change in application architecture makes it impossible.
  19. one thread per request is broken

    Interesting discussion. My biased take on it is that once the performance requirements increase dramatically, the best option might be to re-architect the performance-sensitive pieces. If the problem is such that scaling horizontally is a real option, then whether or not a servlet container multiplexes the network IO with NIO and events probably isn't going to be a huge win. At a previous job, we wrote a local cache for a well-known servlet container, because the data changed only once a month.

    In cases like these, breaking request parsing, data queries, page generation, and writing the response into separate stages like SEDA definitely will scale better than a single-threaded approach. A good example of this would be the /. effect, when several thousand concurrent requests hit a small cluster of webservers. I think Matt Welsh had a good write-up about this specific scenario for one of the talks he gave.

    I've seen other people tackle this specific type of problem by building a system which periodically generates/re-generates the pages as static content. MP3.com used a similar approach, and wrote some custom scripts to propagate data from the main database out to the webservers. Of course this approach doesn't work for highly dynamic pages that are specific to a user and their context. Breaking an existing app that uses ThreadLocal is a tough call, but I don't see that as a Jetty problem. In my mind, each user has to make that decision and weigh the costs.