Andrew Nash on SOA teamwork, Binary XML and client-side demand


  1. Andrew Nash is the CTO at XML networking vendor Reactivity Inc. He sat for a two-part Q&A, talking about the use of network-agent caching to improve application and service performance, business logic in the network, Binary XML, and the pending boom of clients ready to consume Web services.

    In Part 1 he discusses the need to create SOA-specific operations groups. He also goes into the differences between network agents and network appliances and what their functions are.
    You really want to move threat-based security away from the platform so you've created a protective shield around the servers that are actually dealing with these services. For a whole host of reasons, dealing with these things with network intermediaries is a much smarter thing to do than with anything that runs on the platform itself, bearing in mind that there is always a class of problems that you'll want to solve on the platform.
    In Part 2 he talks about how Binary XML would make the world a better place [Editor's note: Really? I'm all for it, then.], the need for transferable policies between SOA/Web services tools and the scale problems presented by a glut of new clients ready to consume Web services.
    The most interesting thing I wish the software guys would get their act together on is Binary XML. At the XML 2005 conference I saw a roundtable with Microsoft, IBM, Sun, Oracle and two other vendors talking about Binary XML, and basically what they said was, "We're not going to do it. It's too hard. There are too many issues. We can't work out how to make it backwards compatible."

    Yet the reason I think it would be incredibly valuable is that the piece I would most like to add to our appliance, with the processing that we currently do, is the ability to handle it all and ensure that what is passed back to the application is a small, pre-processed sort of structure that you can open without all this SOAP and XML parsing that you need to handle it at the moment.

    Threaded Messages (12)

  2. He wants CORBA?

    Yet the reason I think it would be incredibly valuable is that the piece I would most like to add to our appliance, with the processing that we currently do, is the ability to handle it all and ensure that what is passed back to the application is a small, pre-processed sort of structure that you can open without all this SOAP and XML parsing that you need to handle it at the moment.

    Isn't he just talking about CORBA?

    Seems that if you try to get rid of the XML parsing and SOAP, and make it so you can go from an object structure on one platform to the same object structure on another platform over a binary transport, then you are just going down the CORBA road.

    Maybe Microsoft, IBM, etc. have a point when they say they just won't do it.

    Seems like compressing XML is one way to reduce network traffic. Fast hardware could do it in performance-critical settings.
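    [Editor's note: as a rough illustration of the compression point, here is a minimal Java sketch using only the JDK's built-in GZIP streams and a modern JDK. The payload is invented and the actual ratio depends on the document, but repetitive XML markup generally compresses very well.]

        import java.io.ByteArrayOutputStream;
        import java.nio.charset.StandardCharsets;
        import java.util.zip.GZIPOutputStream;

        public class XmlGzipDemo {
            public static void main(String[] args) throws Exception {
                // Invented, repetitive payload; real documents vary.
                StringBuilder xml = new StringBuilder("<orders>");
                for (int i = 0; i < 500; i++) {
                    xml.append("<order id=\"").append(i)
                       .append("\"><status>SHIPPED</status></order>");
                }
                xml.append("</orders>");

                byte[] raw = xml.toString().getBytes(StandardCharsets.UTF_8);

                // Compress the serialized document with gzip.
                ByteArrayOutputStream buffer = new ByteArrayOutputStream();
                try (GZIPOutputStream gzip = new GZIPOutputStream(buffer)) {
                    gzip.write(raw);
                }
                byte[] compressed = buffer.toByteArray();

                System.out.printf("raw: %d bytes, gzipped: %d bytes (%.1fx smaller)%n",
                        raw.length, compressed.length,
                        (double) raw.length / compressed.length);
            }
        }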
  3. He wants CORBA?

    binary transport, then you are just going down the CORBA road. Maybe Microsoft, IBM, etc. have a point when they say they just won't do it.

    Of course the vendors do not want CORBA: who would buy new tools or consultancy if the technology "just works"?
  4. XML? Too much like HTML

    Yes... XML is too much like HTML. It will never work over the web. Too "chatty". :-)

    Um, since HTTP includes the capability to request that content be returned zipped, why do we need binary XML? Once again, XML-over-HTTP is the path of the righteous. (A sketch of the gzip negotiation follows below.)

    SOAP is the EJB of XML. Give up the ghost.
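    [Editor's note: a minimal sketch of the gzip-over-HTTP negotiation described above, using the JDK's HttpURLConnection against a made-up URL. The server is free to ignore the request, which is why the Content-Encoding header is checked before unwrapping.]

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.zip.GZIPInputStream;

        public class GzipXmlClient {
            public static void main(String[] args) throws Exception {
                // Hypothetical service URL, purely for illustration.
                URL url = new URL("http://example.com/service/orders.xml");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();

                // Ask the server to gzip the response body if it can.
                conn.setRequestProperty("Accept-Encoding", "gzip");

                InputStream body = conn.getInputStream();
                if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                    body = new GZIPInputStream(body); // inflate transparently
                }

                try (BufferedReader reader = new BufferedReader(
                        new InputStreamReader(body, StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        System.out.println(line); // the XML, already decompressed
                    }
                }
            }
        }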
  5. Nash's summary of the XML 2005 binary XML panel provides a synopsis of the positions articulated by the large vendors on the panel. However, I was on that panel representing AgileDelta, arguing for exactly what Nash wants and providing results that show it is possible.

    The 900-pound gorillas argued that binary XML was very hard to do and was not worthwhile because at best it could achieve results two times smaller and two times faster than XML; any more compression would reduce speed and vice versa. I gave a presentation about Efficient XML which showed otherwise. I included examples where Efficient XML was five times faster and five times smaller than XML and other examples where it is over 100 times smaller. You can find a copy of the Efficient XML presentation on the XML 2005 site.

    In 2003, we proposed that the W3C create a global standard for Efficient XML. As with all standards activities, these things take time, but the W3C just established the Efficient XML Interchange Working Group last November and plans to complete the standard by November 2007. Here is a copy of our original W3C proposal.

    In the meantime, our customers have been using Efficient XML for a very diverse range of use cases. I visited one customer facility last week where they were using it to efficiently share XML information between aircraft systems, handheld mobile devices, application servers, web servers and high-volume content-based message routers. They were processing hundreds of thousands of XML messages a day ranging in size from 300 bytes to 5.5 Mbytes and seeing compression ratios of over 100:1 for large documents.

         John Schneider
         CTO, AgileDelta, Inc.
         http://www.agiledelta.com
  6. That looks very interesting.

    Do you have some numbers about processing performance?

    The PDF only looks at % size, and the W3C proposal link (18 in 3.3) is no longer valid.
    That looks very interesting. Do you have some numbers about processing performance?

    Yes, for these examples Efficient XML was 2 to 6 times faster than XML using an unoptimized implementation. The implementation we're currently working on will be even faster.

    BTW: Efficient XML is tunable and lets you choose whether you want to optimize for speed over size or vice versa. Users with high-volume streams of small messages or 100 Mbps intranets generally optimize for speed. Users with larger messages, wireless networks, or Internet connections often optimize for size because the network I/O speedup dwarfs the processor speedup.
    The PDF only looks at % size, and the W3C proposal link (18 in 3.3) is no longer valid.

    Yep, I fat-fingered the URL. :-) Here's the correct URL for our W3C proposal.
  8. Yes, for these examples Efficient XML was 2 to 6 times faster than XML using an unoptimized implementation. The implementation we're currently working on will be even faster.

    How about CORBA being more than 50 times faster and smaller?
  9. How about CORBA being more than 50 times faster and smaller?

    I kind of mentioned CORBA as a joke, since I felt that as the XML web-services stack gets heavier, many of the same concerns and complexity creep back in too. I'm not saying that CORBA was overly complex, just that the first push of web services came with a claim of simplicity over previous methods.

    I do like the EFX PDF and even read about the ECMAScript for XML. I especially like that these are open technologies.

    I will stay tuned.
  10. Binary XML

    I saw the thread on binary XML and thought it was quite interesting... Performance is often thought of as an issue belonging to XML, but the real problem is the poor performance of XML processing models such as DOM and SAX. So the right problem to solve is improving XML processing models, instead of creating new data formats...
  11. Binary XML whatever

    People have been doing interoperable, extensible binary message stuff for ages. It's called ASN.1, and using the E-XER encoding you can compile it from XSD anyhow.

    Using BER/PER is damn fast; many tens of thousands of messages can be processed per second. Yes, even in Java, and the same message can be farmed out to your C++, C, and C# applications too.

    OSS Nokalva has libraries for all this, but it's not free. Their product is very good: http://www.oss.com/asn1 (I don't work for them, but have used their product).

    If you are into your "document-centric" SOA stuff, then just make the document a nice ASN.1 message, and there you are.
  12. Binary XML whatever

    People have been doing interoperable, extensible binary message stuff for ages. It's called ASN.1, and using the E-XER encoding you can compile it from XSD anyhow. Using BER/PER is damn fast; many tens of thousands of messages can be processed per second. Yes, even in Java, and the same message can be farmed out to your C++, C, and C# applications too.

    Great point here, James. I think it's time to step back and "clean out the attic" instead of pushing a new thing. There are time-proven, industry-adopted, robust technologies already out there.
  13. Binary exists already in XML Schema

    Instead of inventing another standard, you could actually use XML Schema elements inside your Web services. That probably doesn't sit well with being able to offer first-to-market boxes for Reactivity to sell, though.

    Base64-encoded fields can be filled with binary messages.
    Use the XML text to pass metadata on how to process your Base64-encoded fields, and this works pretty well. Performance is good because you are not parsing large XML documents.

    Base64 can encode text or binary objects. Text can be zip-compressed within the Base64. Zip is universal. XML and XML Schema are universal.

    The only downside is that if you put XML inside your Base64 element, then you need an additional schema. But this can be pointed to by using metadata around the Base64 element.
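    [Editor's note: a rough Java sketch of the pattern described above, written against a modern JDK. The payload is gzipped, Base64-encoded, and wrapped in a small envelope whose attributes carry the metadata telling the receiver how to decode and validate it. The element and attribute names are invented for illustration.]

        import java.io.ByteArrayOutputStream;
        import java.nio.charset.StandardCharsets;
        import java.util.Base64;
        import java.util.zip.GZIPOutputStream;

        public class Base64EnvelopeDemo {
            public static void main(String[] args) throws Exception {
                // The "inner" content; this could just as easily be a binary object.
                String innerXml = "<invoice><total currency=\"USD\">42.00</total></invoice>";

                // Zip the content, then Base64-encode the compressed bytes.
                ByteArrayOutputStream zipped = new ByteArrayOutputStream();
                try (GZIPOutputStream gzip = new GZIPOutputStream(zipped)) {
                    gzip.write(innerXml.getBytes(StandardCharsets.UTF_8));
                }
                String base64 = Base64.getEncoder().encodeToString(zipped.toByteArray());

                // Invented envelope: the attributes are the metadata telling the
                // receiver to un-base64, un-gzip, and validate against the right schema.
                String envelope =
                        "<payload contentType=\"application/xml\" encoding=\"gzip+base64\" "
                        + "schema=\"urn:example:invoice\">" + base64 + "</payload>";

                System.out.println(envelope);
            }
        }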

    Another option in a systems environment is JMS. JMS can pass objects or text. JMS doesn't need to parse XML if it is passing objects around.
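    [Editor's note: and for the JMS route, a bare-bones sketch of sending a plain Java object instead of an XML string. It assumes a JMS 1.1 provider with an already-configured ConnectionFactory and Queue (normally looked up via JNDI, omitted here); the Order class is invented and only needs to be Serializable.]

        import java.io.Serializable;
        import javax.jms.Connection;
        import javax.jms.ConnectionFactory;
        import javax.jms.MessageProducer;
        import javax.jms.ObjectMessage;
        import javax.jms.Queue;
        import javax.jms.Session;

        public class JmsObjectSender {
            // Invented payload type; any Serializable object will do.
            public static class Order implements Serializable {
                public int id;
                public String status;
                public Order(int id, String status) { this.id = id; this.status = status; }
            }

            // connectionFactory and queue would normally come from JNDI or the provider.
            public static void send(ConnectionFactory connectionFactory, Queue queue) throws Exception {
                Connection connection = connectionFactory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    MessageProducer producer = session.createProducer(queue);

                    // No XML, no parsing: the object itself is the message body.
                    ObjectMessage message = session.createObjectMessage(new Order(42, "SHIPPED"));
                    producer.send(message);
                } finally {
                    connection.close();
                }
            }
        }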