Azul releases new server appliance that processes J2EE tasks

Discussions

News: Azul releases new server appliance that processes J2EE tasks

  1. A new startup, Azul Systems, is introducing a new concept for offloading our J2EE code. Azul Systems has created a server appliance that, when attached to existing servers, takes over their processing tasks.

    Customers install a proxy component on their existing servers, and the server appliance acts as a booster. It runs Java applications divided into smaller, discrete tasks called threads, which should enable much faster processing times than today's servers achieve.
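The execution model the article describes can be sketched in plain Java (all names here are illustrative; this is ordinary J2SE threading, nothing Azul-specific):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of a Java workload divided into smaller, discrete tasks that a
// pool of threads executes concurrently -- the kind of highly
// multi-threaded code a many-core appliance is built to accelerate.
public class ParallelSum {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Long>> parts = new ArrayList<Future<Long>>();
        final long chunk = 250000L;
        for (int i = 0; i < 4; i++) {
            final long from = i * chunk, to = (i + 1) * chunk;
            parts.add(pool.submit(new Callable<Long>() {  // one discrete task
                public Long call() {
                    long s = 0;
                    for (long n = from; n < to; n++) s += n;
                    return s;
                }
            }));
        }
        long total = 0;
        for (Future<Long> f : parts) total += f.get();    // join the results
        pool.shutdown();
        System.out.println(total);                        // sum of 0..999999
    }
}
```

The more independent tasks like these an application has, the more an SMP-style box with many hardware threads can help.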

    "An interesting concept" said Kevin Krewell, editor-in-chief of Microprocessor Report, "The issue is the overhead".
    What makes Azul's approach different? Today's servers are designed to run a particular brand of software. For example, PCs are tuned to run Windows-compatible programs. But nearly all new corporate software is developed with so-called "virtual-machine" technologies, such as Java or Microsoft's (MSFT) .net, that let it run on any type of underlying hardware. Azul's server, dubbed the compute appliance, is the first designed from scratch to do one thing: run this "virtual machine" code faster and more efficiently than existing servers.
    Azul press release on network attached processing

    What are your thoughts about the concept?

    ~Jay
    http://JavaRSS.com

    Threaded Messages (16)

  2. VMs have been especially designed for running on any platform and now there is a new platform especially designed to run VMs? Am I missing something?
  3. VMs have been especially designed for running on any platform and now there is a new platform especially designed to run VMs? Am I missing something?
    Yeah, you're missing something: One Azul box will have more Java horsepower than an entire rack of servers (I don't know yet if they've unveiled just how much, so that's all I can say), and it can handle a couple of hundred GB on a Java heap with no full GC pauses.

    One word: Sweet.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  4. What is"special" about VM code ?[ Go to top ]

    Hi Cameron,
     
    From the article, ' "virtual machine" code faster and more efficiently than existing servers.'

    PMI, but what is so different about VM code from, say, C code?
  5. What is"special" about VM code ?[ Go to top ]

    I imagine the point of this hardware is that it will be optimised so that things that require complex code in current VMs (such as garbage collection, synchronization, security etc) no longer do so using the new hardware. Then they'll ship special VMs which are to a greater or lesser extent wrappers for this hardware.

    My guess is that they'll be designing something along the line of the CPUs that were supposed to run byte code natively - but of course they'll be a bit more general so that they're not tied to running Java byte code.
  6. What is"special" about VM code ?[ Go to top ]

    From the article, ' "virtual machine" code faster and more efficiently than existing servers.'

    PMI, but what is so different about VM code from, say, C code?
    There's a standard for VM code ;-)

    Seriously, there's no such thing as executable "C code" .. C code typically compiles to an intermediate language (higher level than assembly, but designed for assembly production) which is then optimized/assembled into machine code. That machine code is designed for a hardware processor, hence it's "machine code."

    What Azul does is run "machine code" that is designed for a Java virtual machine, instead of for a particular hardware processor. I am not allowed to talk about the performance, but I can say that it can run more threads concurrently in a relatively small box than the biggest existing server on the market today, and there's little chance that anyone will pass them for quite a while. ;-)
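That "standard for VM code" is quite literal: the JVM specification fixes the class-file format, starting with a well-known magic number, independent of the hardware or OS underneath. A small self-check (the program reads its own compiled class file from the classpath):

```java
import java.io.DataInputStream;
import java.io.InputStream;

// Every compiled Java class file begins with the magic number 0xCAFEBABE,
// as defined by the JVM specification -- the same bytes no matter which
// hardware will eventually execute the code. This reads our own class
// file from the classpath and prints that header.
public class ClassFileMagic {
    public static void main(String[] args) throws Exception {
        InputStream in =
            ClassFileMagic.class.getResourceAsStream("ClassFileMagic.class");
        DataInputStream data = new DataInputStream(in);
        System.out.printf("magic = 0x%08X%n", data.readInt());
        data.close();
    }
}
```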

    Regarding some of the other coolness, they own the hardware design, so they do some (in retrospect) pretty obvious stuff with memory that allows them to do GCs without stopping execution.

    So, as soon as this hits the market, there's no such thing as "compute bound" anymore for Java applications. And the technology can apply to other "virtual machine languages" too, including the CLI, if/when there is market demand.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  7. Compute Bound and IO Bound

    Hi Cameron,

    This all sounds fascinating, and for many of the GC issues associated with J2EE containers under heavy concurrency it may offer some relief, but it is not going to solve the main IO issues (JCA, JMS, JDBC, and SOAP).

    Honestly, how much of the J2EE work can be offloaded without the transfer of the required data and code offsetting any gains? Have we got any figures on this? Note I am focusing on the integration of the product into an existing environment rather than a total migration to the Azul server. I would assume in the latter case the gains would be greater by a significant margin.

    By the way, does this represent a business opportunity or a threat to your "Shared Memories for J2EE Clusters" product?

    Regards,


    William Louth
    CTO, JInspired

    "J2EE tuning, tracing and testing with Insight"
    http://www.jinspired.com
  8. Compute Bound and IO Bound

    By the way, does this represent a business opportunity or a threat to your "Shared Memories for J2EE Clusters" product?
    I'll just make up a new tag-line. ;-)

    To be honest, it's such a tangent from everything we've seen in the market that it's hard to say how it will affect anyone. If it naturally increases the need for scalable access to data, then I'm going to guess that it helps open new doors for Coherence.

    The real question is how we get rid of the "main IO issues" in the long term .. for example, are there ways of scaling up IO along with processing power besides caching? I'm looking at cluster-durable JMS queues and topics as one good example of achieving scalable performance for messaging (to/from or within an application, not as an ESB). For database scalability, a well-built pure-Java database should now easily be able to achieve the highest non-clustered throughput numbers (running on this box) for any benchmark. SOAP is mostly compute bound (parsing, etc.) so this box is perfect for XML stuff (XML is certified 99.999% inefficient.)
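The caching idea above fits in a few lines of ordinary Java; this is a single-JVM read-through cache sketch (the loader stands in for an expensive JDBC/SOAP call; clustered caches such as Coherence generalize the same idea, and all names here are illustrative):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Read-through cache sketch: the first read for a key goes to the backing
// store (a stand-in for JDBC/JMS/SOAP IO), later reads are served from
// memory -- one way to keep a compute-rich box from being IO bound.
public class ReadThroughCache {
    private final ConcurrentMap<String, String> cache =
        new ConcurrentHashMap<String, String>();
    private final AtomicInteger backingStoreHits = new AtomicInteger();

    String load(String key) {            // pretend this is expensive IO
        backingStoreHits.incrementAndGet();
        return "value-for-" + key;
    }

    String get(String key) {
        return cache.computeIfAbsent(key, k -> load(k));
    }

    public static void main(String[] args) {
        ReadThroughCache c = new ReadThroughCache();
        c.get("order:42"); c.get("order:42"); c.get("order:42");
        System.out.println("reads=3 backingStoreHits=" + c.backingStoreHits.get());
    }
}
```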

    The things that we can't just "get rid of" are the legacy application integration points, including mainframe services, the well-established datastores, and the enterprise-adopted messaging systems. Over time, some of them can be phased out one-by-one, but it's rare for IT to get rid of something when they can build something complex and slower on top of it instead. ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Shared Memories for J2EE Clusters
  9. My Analysis of Azul

    When I first heard about what they were doing over at Azul last year, I have to admit I was pretty skeptical. Creating a new piece of hardware with a custom chip, custom architecture, and a custom JVM is a tall order. Additionally, no one in the past has been able to build custom hardware for generic computation that performs, in the long run, better than commodity processors at a similar price point. This project is different, though, because they are not targeting a real CPU but a VM, which gives them tremendous opportunity for scale, in a similar way to the GPU market. You'll notice that few computers use their CPU to draw graphics on the screen. They can take advantage of some of the same architectural choices to run Java code.

    The multi-core architecture that they have created is definitely the future. If you look at announcements from Intel (both x86 and ARM), Sun (the Niagara project), and IBM (PowerPC), you will see the whole industry is headed for a multi-core world where multi-threaded code will get the most return. Azul is just getting there first, and with a really, really huge box. Their box will have power similar to racks and racks of Dell blade systems, except it will appear to be just a big SMP machine with all the management characteristics of a single system. If one application spikes its load, it just uses more compute power instantaneously, instead of using esoteric management software to try to shuffle resources in an inefficient and ultimately fragile way.

    Their proxy VM system is equally innovative. The boxes themselves have no disk and no direct network access; everything goes through the proxy machine that connects to Azul. Fewer parts that can fail, fewer things to configure on the new system. To switch your application from running locally to running remotely on their box, you would simply change JAVA_HOME and point it at the Azul system you want to run on. I was initially skeptical that this proxy system would be a major bottleneck. It has an appreciable effect, but it is minimized through buffered/prefetched I/O. When it goes GA you can expect to see the biggest SPECjAppServer number ever.
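The "just change JAVA_HOME" claim is easy to sanity-check from inside an application; this tiny program reports which JVM installation it is actually running on (standard system properties, nothing vendor-specific):

```java
// Prints which JVM installation the program is actually running on --
// a quick way to verify that repointing JAVA_HOME at a different J2SDK
// (Azul's or anyone else's) really switched VMs.
public class WhichVm {
    public static void main(String[] args) {
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
        System.out.println("java.vm.name = " + System.getProperty("java.vm.name"));
    }
}
```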

    At WebLogic we often speculated on the things you could do if you owned the hardware to improve the efficiency of the VM. They have done these optimizations. Their chips have specific instructions that optimize concurrency, virtual invocations, and garbage collection. Nearly pauseless garbage collection is a benefit that I think almost any deployed application can take advantage of and it would not be possible without the kinds of control they have in an Azul system.

    Their work has definitely moved us closer to the infinite CPU source in the sky that storage systems have had for years now.

    Sam Pullara
    EIR @ Accel Partners
  10. SOAP is mostly compute bound?

    It has a high GC hit - like serialisation in general.

    The biggest hit I have seen is on the wire though...
    (unless you talk about compression...)

    -Nick
  11. Compute grid here we come

    I have a suspicion that this is going to be really interesting for finance analytics computation.

    In maths-intensive calculations - where typically C/C++ was considered the default - it is increasingly becoming the case that Java/C# performance is comparable with native language performance.
    With all of the productivity benefits associated with VM languages, the consensus is growing that the default choice is no longer C++...

    Theoretically, even on current hardware, VM language performance can exceed that of compile-time optimised languages like C/C++.

    If specialised hardware like this starts appearing, then the native-vs-vm perf gap is going to reverse sharply. And, as Sam mentioned, with some serious vertical scalability to boot.

    I wonder if Azul - or friends - will be interested in offering a compute service model similar to what Sun are doing with http://www.technewsworld.com/story/36822.html

    It's going to be interesting when some hard data comes out on this thing.

    -Nick
  12. One Azul box will have more Java horsepower than an entire rack of servers (I don't know yet if they've unveiled just how much, so that's all I can say,) and it can handle a couple of hundred GB on a Java heap with no full GC pauses.
    Interestingly, AzulSystems.com makes no claim of accelerating Java execution. Instead, Azul delivers what it has named "network attached processing", aka utility computing. Azul's value-proposition pitch is:

    - "costs can be spread across multiple applications"

    - "dramatically reduce the number of systems under management"

    The entrenched rival of Azul is grid-based cycle scavenging, eg Condor. BusinessWeekAsia.com claims that Azul avoids inherent faults of its grid rival, which "...requires expensive software and often lots of consulting services. And [grid] does nothing to control the proliferation of more and more computers to do all of this..."
  13. One Azul box will have more Java horsepower than an entire rack of servers
    Well, I guess that having 48 times more CPUs and 16 times more memory than the typical server box will help :-)
    And this would help with any kind of language, not only Java or .NET.

    However, it seems to me that their biggest argument does not concern horsepower but TCO. Reducing the TCO by reducing the number of servers needed to survive peaks?!
    It is not uncommon for large data centers to have hundreds of applications running on thousands of servers. Managing and maintaining these applications becomes more challenging and costly every year as more applications are brought on-line. Additionally each application requires dedicated host servers or partitions with headroom capacity to handle unpredictable workloads. The result is millions of dollars spent to deploy and maintain servers with low utilization, slow return on investment and very high total cost of ownership.
    AFAIK, when several servers are put together to serve 24x7 applications, having one more can help to support peaks, but most of the time it is there for failover.
    If the argument was "our server is 40 times more powerful than others', just buy two of them and dump your 100-server pool" I would say that sounds good; however, this is not the case. Unless I misinterpret, they just say: drop some, use our JVM on the rest, and we will take the overload.

    By the way, I have never seen any client claiming to have "unpredictable workloads".

    Sweet, if I could have one to run a Quake server.

    Prosperity,

    Claude Vedovini
    Anonymous geek
  14. Java Co-Processor

    How about selling this technology as a Java co-processor? It could be a PCI board that I drop in my server. OK, I'm dreaming, never mind.
  15. Unbound Compute

    How about selling this technology as a Java co-processor? It could be a PCI board that I drop in my server. OK, I'm dreaming, never mind.
    You're not dreaming. We'll do even better. You just drop our compute appliances in your network and use our J2SDK on any of your existing servers, and the full power of our compute pools becomes transparently available to your Java and J2EE programs.

    Gil Tene
    VP Technology, CTO
    Azul Systems, Inc.
    Unbound Compute.
  16. Proprietary JDK/VM?

    You're not dreaming. We'll do even better. You just drop our compute appliances in your network and use our J2SDK on any of your existing servers, and the full power of our compute pools becomes transparently available to your Java and J2EE programs.
    Gil,

    So will Azul be developing a custom JDK/VM around your product line or will there just be proxy components as specified in your literature to allow flexibility on choosing a "commodity" JDK/VM for a given task?

    Frank Bolander
  17. Azul Usage with Fortune 500 Enterprises

    I am an Enterprise Architect for a Fortune 100 company who will be putting Azul through our gauntlet in the December timeframe. Don't listen to speculation from others on this topic; go and figure it out for yourself.

    If time permits, I may even blog any discoveries that are generally applicable to Java development at: http://blogs.ittoolbox.com/eai/leadership

    Industry analysts and those in the media are welcome to contact me in the January timeframe for an in-depth customer perspective.

    If you happen to be an Azul customer, maybe we could trade notes...