Azul's Java hardware appliance approach getting recognized

Discussions

News: Azul's Java hardware appliance approach getting recognized

  1. Azul Systems in September announced the launch of its unique hardware server appliance, designed exclusively to execute JVM bytecode, with hardware features to accelerate garbage collection and concurrency. The vision behind the appliance is to eliminate capacity planning at the application level, along with much of the cost and complexity associated with the conventional delivery of computing resources.

    While the company has not yet announced any deployments (it is still in field-testing mode), the company and its "network attached processing" vision have recently been recognized with awards from FastCompany (the Fast50 top innovators) and eWeek's Excellence Awards in the hardware server category, announced today. JavaWorld has also featured them today in a new article, "The next wave in J2EE deployment."

    According to Azul:
    Network attached processing delivers compute as a shared network resource comprising compute pools of ultra-high-capacity compute appliances. Any Java or J2EE platform-based application in the datacenter can tap the pool without application modifications or long-term binary or instruction-set lock-in. The design provides a small footprint, high rack density, low power consumption and simple administration.
    The appliances (compute pools) contain up to 384 coherent processor cores and 256 gigabytes of memory in a fully symmetric multiprocessing (SMP) system. Transparency as far as application code is concerned is achieved by virtual machine proxy technology that redirects application workloads to the compute pool.

    When first announced on TSS, one TSS member pointed out that this doesn't solve the performance bottlenecks of IO. In the same thread, Sam Pullara of Accel Partners (an investor in Azul) says that their work has definitely moved us closer to the infinite CPU source in the sky that storage systems have had for years now.

    Will network attached processing become an essential reality in large scale enterprise Java deployments?
  2. In the same thread, Sam Pullara of Accel Partners (an investor in Azul) says ..

    Just to clarify the record, Sam was a huge booster of Azul long before joining Accel. In fact, he seriously considered going to work for Azul back when nobody even knew what they were doing.

    And for those of you that remember 3dfx (alter caps as desired), check out who started Azul.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Cluster your POJOs!
  3. When first announced on TSS, one TSS member pointed out that this doesn't solve the performance bottlenecks of IO.

    Last month a U.S. software patent was granted for the Howard Cascade, which optimizes circuit switching. It can be applied to a cheap Ethernet switch or to a high-performance interconnection network of switches. I read the patent and believe that it does much to relieve the network bottleneck Cameron complained of.

    Supposedly the patent increases network scalability so much that Amdahl's Law becomes the relevant limit, and large numbers of cheap CPUs can be aggregated without the efficiency traditionally lost to packet collisions.

    Hmmm, large numbers of cheap CPUs? Is the Azul appliance cheap? How many dollars per FLOP does Azul cost? Or is its primary use not numerics?
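    As a rough illustration of what it means for Amdahl's Law to become the limit once the network no longer is, here is a small sketch; the parallel fraction and core counts are made up for illustration, not taken from Azul or the Howard Cascade patent:

    // Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n),
    // where p is the fraction of the work that can run in parallel.
    public class Amdahl {
        static double speedup(double parallelFraction, int cores) {
            return 1.0 / ((1.0 - parallelFraction) + parallelFraction / cores);
        }

        public static void main(String[] args) {
            double p = 0.95; // hypothetical: 95% of the workload parallelizes
            for (int cores : new int[] {2, 16, 96, 384}) {
                System.out.printf("%3d cores -> %.1fx speedup%n", cores, speedup(p, cores));
            }
            // Even with a perfect interconnect, the 5% serial portion caps the
            // speedup at 1/0.05 = 20x no matter how many cheap CPUs are added.
        }
    }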
  4. Is the Azul appliance cheap? How many dollars per FLOP does Azul cost?

    The price of their system AFAIK is a tad above the equivalent number of cores from Dell, so fairly competitive. However, when you then factor in the "greenness" of Azul - i.e. the floor space, power consumption, and cooling requirements vs. Dell - there is a significant difference in the real TCO over a few years.

    Of more interest to me is what kind of licensing the app server and packaged-app vendors will negotiate. We have seen some early indications of this with the dual-core Intel schemes to date. However, now consider 2 cores vs. the 384 cores (once again, AFAIK) that Azul is pitching. For any notable size of cluster this could be huge and could truly change the model of per-CPU pricing.
  5. missing the point!

    I love the idea, but slide 3 of the JavaWorld article makes me want to cry.

    The RUBiS benchmark showed once and for all that client/server communication between the web tier and the "business tier" was the bottleneck for typical web apps. So even if the Azul box brings the total CPU crunching time to near zero in the middle tier, the system will still run slower than if you had never "remoted" the methods from the web tier to begin with.

    If you create an architecture with a "method tier" you are putting leg irons on your app in most cases. Even if the methods execute instantly, you can never recover the communication and context-switching costs incurred from going over the network. So if you are stuck with an app-server "method tier", maybe Azul can help you reduce the pain, but you will get the biggest benefit from pulling out the "method tier" entirely.

    The only time a "method tier" makes sense is when the methods are service-scale in their execution time, not method-scale - and at that point it's a web service, which does make sense.
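    To put some rough numbers on that argument, here is a small sketch; the call counts and per-call costs are invented for illustration, not measured from any particular app server:

    // Models one page render that makes N fine-grained calls into a remote
    // "method tier" vs. making the same calls in-process.
    public class MethodTierCost {
        public static void main(String[] args) {
            int callsPerRequest = 20;        // hypothetical fine-grained calls per page
            double roundTripMs = 0.5;        // assumed LAN round trip per call
            double marshallingMs = 0.3;      // assumed serialize/deserialize cost per call
            double methodCpuMs = 0.2;        // actual business logic per call

            double inProcess = callsPerRequest * methodCpuMs;
            double remoted = callsPerRequest * (roundTripMs + marshallingMs + methodCpuMs);
            double remotedZeroCpu = callsPerRequest * (roundTripMs + marshallingMs);

            System.out.printf("in-process:            %.1f ms%n", inProcess);
            System.out.printf("remoted:               %.1f ms%n", remoted);
            System.out.printf("remoted, CPU time = 0: %.1f ms%n", remotedZeroCpu);
            // Even if the remote box ran every method instantly, the request still
            // pays 16 ms of per-call overhead that the in-process design never incurs.
        }
    }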
  6. missing the point!

    I love the idea, but slide 3 of the JavaWorld article makes me want to cry. The RUBiS benchmark showed once and for all that client/server communication between the web tier and the "business tier" was the bottleneck for typical web apps.

    Post a link, please. A lot of early J2EE apps suffered performance issues by marshalling (RMI using Object streams) between the JSP/Servlet and the EJB tiers. Note that the performance problem was NOT the network, since often the marshalling was simply between two classloader contexts within the same JVM, and even when it was across the network, most of the wall-clock time was spent in serialization and deserialization (which is way slower than the ~76 microseconds that it takes to move some data over gigE).

    At any rate, I think that I/O is a legitimate concern for Azul, but it's not one that they have ignored.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Cluster your POJOs!
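    A minimal sketch of the kind of measurement behind that comparison (the Order class below is a hypothetical DTO, and the absolute numbers will vary widely by JVM and object graph): time a full Java serialization round trip and set it against the raw wire time for the same bytes on gigabit Ethernet, roughly 125 bytes per microsecond.

    import java.io.*;

    public class MarshallingCost {
        // A hypothetical DTO of the sort an early J2EE app might pass between tiers.
        static class Order implements Serializable {
            String customer = "ACME Corp";
            String[] lineItems = new String[50];
            double[] prices = new double[50];
            Order() {
                for (int i = 0; i < 50; i++) { lineItems[i] = "item-" + i; prices[i] = i * 1.5; }
            }
        }

        public static void main(String[] args) throws Exception {
            Order order = new Order();
            int iterations = 10000;
            int bytes = 0;
            long start = System.nanoTime();
            for (int i = 0; i < iterations; i++) {
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                ObjectOutputStream out = new ObjectOutputStream(buf);
                out.writeObject(order);                                        // marshal
                out.flush();
                bytes = buf.size();
                new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray())).readObject(); // unmarshal
            }
            double microsPerRoundTrip = (System.nanoTime() - start) / 1000.0 / iterations;
            double wireMicros = bytes / 125.0;   // gigE moves roughly 125 bytes per microsecond
            System.out.printf("serialize + deserialize: %.1f us for %d bytes%n", microsPerRoundTrip, bytes);
            System.out.printf("raw gigE transfer time:  ~%.1f us%n", wireMicros);
        }
    }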
  7. Strange behavior...

    Regarding this new technology, I'm a bit surprised to see no tech papers anywhere describing the Azul box/architecture in detail.
    As if Azul had something to hide...
    As if this were just marketing... (although I believe not)

    If one of you guys has had a chance to read something concrete about it, please let us know.

    Laurent.
  8. white papers are coming

    Regarding this new technology, I'm a bit surprised to see no tech papers anywhere describing the Azul box/architecture in detail. As if Azul had something to hide... As if this were just marketing... (although I believe not). If one of you guys has had a chance to read something concrete about it, please let us know. - Laurent

    White papers are coming.
    We've been making the technology work first.
    Papers come second. :-)

    Cliff
  9. A lot of early J2EE apps suffered performance issues by marshalling (RMI using Object streams) between the JSP/Servlet and the EJB tiers.

    It's nice you know a performance limitation of "early J2EE apps". Maybe you could share some knowledge about modern distributed Java programs. Is object serialization CPU load still a common bottleneck? Or has the bottleneck now been pushed all the way down to the physical layer at the bottom of the OSI protocol stack, where it has always been with other cluster languages such as C++ and Fortran?
  10. A lot of early J2EE apps suffered performance issues by marshalling (RMI using Object streams) between the JSP/Servlet and the EJB tiers.

    It's nice you know a performance limitation of "early J2EE apps". Maybe you could share some knowledge about modern distributed Java programs. Is object serialization CPU load still a common bottleneck? Or has the bottleneck now been pushed all the way down to the physical layer at the bottom of the OSI protocol stack, where it has always been with other cluster languages such as C++ and Fortran?

    The thing is that the performance hit was there in NON-distributed environments. In other words, when a JSP called an EJB that was IN THE SAME VM it would go through marshalling!

    First rule of distributed computing: Don't distribute.

    (Second rule: Use Coherence ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Cluster your POJOs!
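    A small, self-contained illustration of that point in plain Java (no app server involved, and the Cart class is made up): a call with remote-call semantics hands the callee a serialized copy even when caller and callee share a VM, whereas a plain local call just passes the reference.

    import java.io.*;

    public class PassByValueDemo {
        static class Cart implements Serializable {
            int items = 3;
        }

        // What an in-VM, pass-by-reference call sees.
        static void addItemLocal(Cart cart) { cart.items++; }

        // What remote-call semantics effectively do, even inside one VM:
        // the argument is marshalled, so the callee works on a copy.
        static void addItemRemoteStyle(Cart cart) throws Exception {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(buf);
            out.writeObject(cart);
            out.flush();
            Cart copy = (Cart) new ObjectInputStream(
                    new ByteArrayInputStream(buf.toByteArray())).readObject();
            copy.items++;   // the mutation happens on the copy, not the caller's object
        }

        public static void main(String[] args) throws Exception {
            Cart cart = new Cart();
            addItemLocal(cart);
            System.out.println("after local call:        " + cart.items); // 4
            addItemRemoteStyle(cart);
            System.out.println("after remote-style call: " + cart.items); // still 4
        }
    }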
  11. network attached colorful zoo

    Interesting trend - more and more network attached appliances are coming.

    I know that our sysadmins bought some Google search appliances recently and are very happy with them. At the last FOSE show here in Washington, DC, I saw the text search appliance from Thunderstone for the first time and spent some time with one of their representatives. He said that eBay uses their appliances to index auctions, and that this is why eBay's search speed is so impressive. The color of the appliance is dark blue - he said "Google sells yellow boxes, so we selected blue." He has seen companies with both Google (yellow) and their blades (blue) in one rack.

    I read somewhere that there are appliances that specialize in XSLT transformation. I don't know what color they picked, though. Did Azul pick their color?

    Just for your information, the Thunderstone appliance is a Linux-based blade. The price is comparable to a one-CPU license of WebLogic Server or ILOG JRules.

    It's probably a good trend to have fast, specialized appliances -- as long as they stay specialized and do not produce too much mutual redundancy.
  12. Increase in utilization

    The JavaWorld article focused on improving the utilization of deployments by offloading the JVMs to the appliance.

    Now, modulo any hardware improvements they add to speed up the JVM (GC being noted as one area they're working on), I think they're going to have to do a lot of work to keep their prices in line with "commodity" hardware.

    As multi-core CPUs become more commonplace, and we start getting "16 cores" where we had 4 CPUs before, those prices are going to come down.

    Layer on top of that the potential of things like the new MVM being worked on by Sun, or things available today like Solaris containers (I know Linux has something similar, though not necessarily comparable), and you're going to be able to get a lot more instances and deployments on single machines.

    Through OS virtualization, standard deployments are going to end up being easier to pull off, in terms of stacking more and more software onto a single machine with little or no chance of conflict with other software on the same machine.

    So, save for hardware redundancy, there is less and less reason today to buy boxes for applications when I can simply slice up an existing box and isolate the applications completely and easily (assuming I have "bandwidth" on the box to support the new app, of course).

    Now your area of isolation on a single machine is no longer limited to the process or virtual IP interface, but actually extends even higher up.

    When I can do this with commodity off-the-shelf hardware from a variety of vendors, specialized hardware vendors such as this one end up competing with generic hardware that can do similar tasks. They'll need to shine pretty bright, and fast, I'm afraid, to get into the market and compete. If I can get 80% of the performance for 60-70% of the price, well, price drives most decisions.
  13. There are two main kinds of benefits with Azul:

    1. Application performance and throughput

    We know from queuing theory that during the busiest times of the day, we need more bank tellers to keep the time spent in line low. Certainly you can decrease the service time by having the tellers work faster, but we are seeing the limits of Moore's Law, and dual-core systems are now the hot thing. Azul's processors have 24 cores, and a 7-foot rack can fit 3 big boxes and 1 small box, for a total of 1248 cores, each of which (system overhead aside) can run a separate Java thread. Such a rack would require about 8500 watts.

    Azul is not just a big SMP, but a hardware/software combination specifically designed to run VMs. The instruction set was designed in conjunction with the JIT developers, and there is specific hardware support for things like GC, synchronization, and object allocation. Those of you who were at JavaOne might have heard presentations on these subjects, and they will be covered shortly in whitepapers.*

    Some of the technological benefits are that you can now realistically have up to 96 GIGAbytes of heap; you no longer have to do a lot of capacity planning, because the Azul machine will tell you how many resources your application is using, rather than the other way around; you get nice, flat, predictable response times even under unpredictable load; and you can use coarser-grained locking because of optimistic (rather than pessimistic) locking. These are unique to the product.
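    To make the bank-teller analogy concrete, here is a rough M/M/c queueing sketch using the Erlang C formula; the arrival rate and service time are invented numbers, and a real app server is of course messier than this model:

    // M/M/c queue: average wait in line Wq = C(c, A) / (c*mu - lambda),
    // where A = lambda/mu is the offered load and C(c, A) is the Erlang C
    // probability that an arriving request has to wait at all.
    public class Tellers {
        static double erlangC(int servers, double offeredLoad) {
            double sum = 0, term = 1;                 // term walks through A^k / k!
            for (int k = 0; k < servers; k++) {
                sum += term;
                term *= offeredLoad / (k + 1);
            }
            double top = term * servers / (servers - offeredLoad); // term is now A^c / c!
            return top / (sum + top);
        }

        public static void main(String[] args) {
            double lambda = 400;                  // hypothetical: 400 requests/sec at peak
            double mu = 10;                       // each "teller" (a thread on a core) serves 10 req/sec
            double offeredLoad = lambda / mu;     // 40 Erlangs of work
            for (int servers : new int[] {42, 48, 64, 96}) {
                double waitMs = 1000 * erlangC(servers, offeredLoad) / (servers * mu - lambda);
                System.out.printf("%3d concurrent threads -> avg queue wait %.2f ms%n", servers, waitMs);
            }
            // Adding parallel capacity during peaks collapses queueing delay,
            // which is the argument for having hundreds of cores to burst into.
        }
    }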

    2. Consolidation

    Yes, blade servers are cheap to buy, but they have a lot of hidden costs in management, physical aspects (size, power, A/C), and networking ports (including for NAS). This is also covered in the whitepapers mentioned above.

    --bob
    who works for Azul
    *[yes, I know you have to register for the whitepapers, but I don't think I should have to repeat all the information here.]