White Paper: High Performance and Standard Transformation

Discussions

News: White Paper: High Performance and Standard Transformation

  1. In this white paper, John Davies, CTO of software development firm Century 24 Solutions Limited [C24], introduces the company's Integration Objects (IO) and describes the move to XSLT and XQuery for transformation. He then links to a PDF on XQuery by Mike Kay, author of books on XSLT and XQuery. XQuery is a language designed for querying complex XML documents. As the complexity of XML has now exceeded even the largest databases, XQuery has become a serious alternative to SQL. When used for integration, documents are traditionally converted into XML first, transformed using XQuery (or XSLT), and then converted back into the destination format. This makes many EAI/ESB/SOA tools extremely slow when transforming non-XML formats such as comma-delimited files with standards like XSLT and XQuery. C24 asked Mike Kay, author of the XSLT Programmer's Reference, editor of the W3C's XSLT 2.0 specification and founder of Saxonica, to write them a native XPath 2.0, XSLT 2.0 and XQuery engine that fits into their modelling and binding tool "Integration Objects" (IO). What do you think of Integration Objects?

    Threaded Messages (30)

  2. Code demo

    Just a little FYI: if you're interested in the code, you can download reference implementations (FpML, SWIFT, ISO 20022 etc.) here: http://www.c24s.biz/confluence/display/IO/C24+IO+DevNet. As a quick demo, the following will execute the XQuery string (myXQuery) on a C24 ISO-20022 object and write the output to System.out; this example was taken from the paper.
    // Parse the source document into a C24 IO object
    DataDocument dataDoc = (DataDocument) readFpMLFromFile( filename );

    // Compile the XQuery string against a Saxon configuration
    Configuration config = new Configuration();
    StaticQueryContext staticContext = new StaticQueryContext(config);
    XQueryExpression exp = staticContext.compileQuery( myXQuery );

    // Wrap the IO object as a Saxon document node, make it the context item and run the query
    DynamicQueryContext context = new DynamicQueryContext(config);
    DocumentNode docNode = new DocumentNode(config, dataDoc, true, true);
    context.setContextItem(docNode);
    exp.run(context, new StreamResult(System.out), new Properties());
    Obviously this code is the same for any XQuery, so it could be refactored into a simple method. -John-
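    For example, the refactored method might look something like this (a sketch only; the class and method names are made up, while DataDocument and DocumentNode are the C24 classes used in the snippet above):

    import java.io.OutputStream;
    import java.util.Properties;
    import javax.xml.transform.stream.StreamResult;
    import net.sf.saxon.Configuration;
    import net.sf.saxon.query.DynamicQueryContext;
    import net.sf.saxon.query.StaticQueryContext;
    import net.sf.saxon.query.XQueryExpression;

    public class IOQueryRunner {

        // Compile and run an XQuery against any C24 IO object, writing the
        // serialized result to the supplied stream.
        public static void runQuery(DataDocument dataDoc, String xquery, OutputStream out)
                throws Exception {
            Configuration config = new Configuration();

            // Compile once; the compiled expression could be cached and reused
            // if the same query is run repeatedly.
            StaticQueryContext staticContext = new StaticQueryContext(config);
            XQueryExpression exp = staticContext.compileQuery(xquery);

            // Wrap the IO object so Saxon can navigate it as a document node.
            DynamicQueryContext context = new DynamicQueryContext(config);
            context.setContextItem(new DocumentNode(config, dataDoc, true, true));

            exp.run(context, new StreamResult(out), new Properties());
        }
    }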
  3. IFX experiences

    I was wondering, do you have any experience with IFX messaging? If C24 is focused on the financial services industry, in my opinion IFX should be considered.
  4. Re: IFX experiences

    I was wondering, do you have any experience with IFX messaging? If C24 is focused on the financial services industry, in my opinion IFX should be considered.
    Alan, since IFX is expressed as XML Schema, it's a simple task to import it. Having said that, we have now added IFX as one of our supported standards. I was interested to see that they use some of the ISO-20022 PAIN messages. If you need any information on this, please contact me (John) at C24 dot biz. Thanks, -John-
  5. IFX is fully supported

    Sorry Alan, I'm out of date on my own product :-) We already fully support IFX in the current version of IO. -John-
  6. I haven't gotten to use it yet in dev myself, but John and I whipped up an example (Coherence Continuous Query with FIX messages via C24 IO) in about an hour. Will be interested in seeing how XQuery integrates... Peace, Cameron Purdy, Tangosol Coherence: The Java Data Grid
  7. C24+Tangosol

    Hi Cam, I seem to remember the hour in question was somewhat impaired by a rather nice bottle of Bourbon. We have the ability to create IO objects for pretty much any payload and then "write" them to a Coherence cache. While you could do this with JAXB et al., you would be restricted to relatively simple XML payloads without constraints. With IO you can model the data, add the constraints (e.g. tradeDate must not be after valueDate) and write your data into a Coherence cache. Any Coherence node could create/read/update/delete the data while still being able to fully validate it. Indices can be built, for example, using a "ChainedExtractor":
    NamedCache map = CacheFactory.getCache("myTradeCache");

    // Write a new trade (anything FpML, SWIFT, CSV etc. etc.)
    map.put( trade.getKey(), trade );

    // Add an index on TradeDate
    ChainedExtractor sequenceExtractor = new ChainedExtractor("getTradeDate");
    map.addIndex(sequenceExtractor, false, null);

    // Read a trade using the simple map interface...
    Trade cachedTrade = (Trade) map.get("BNKNYC06075304");
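    For illustration, querying through that index with the standard Coherence filter API might look something like this (a sketch continuing the snippet above; tradeDate stands for whatever value you're matching on):

    // Uses com.tangosol.util.filter.EqualsFilter and java.util.Set.
    // Because the filter uses the same extractor the index was built with,
    // Coherence can answer the query from the index instead of
    // deserializing every cached trade.
    EqualsFilter filter = new EqualsFilter(new ChainedExtractor("getTradeDate"), tradeDate);
    Set matchingEntries = map.entrySet(filter);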
    I've not tried to integrate the XQuery capabilities with Tangosol, although I have discussed it with Cameron; it would be an interesting addition to the Coherence query engine. -John-
  8. Query engine details?

    Is there any information available about how the query engine is implemented? -Patrick -- Patrick Linskey http://bea.com
  9. Re: Query engine details?

    The XQuery engine is written as an implementation of Saxon's tree model (Saxon being by far the best implementation of the XPath 2.0, XQuery 1.0 and XSLT 2.0 specifications available). The Saxon website provides links to download the source code for the non-schema-aware version of Saxon, which contains similar implementations for DOM, JDOM, etc. It's interesting to note that when Mike profiled the C24 Saxon implementation he found it ran only fractionally slower than native Saxon. This is quite a feat given the enormous amount of additional semantic and syntactic information held by C24 IO objects in order to model flat, fixed and delimited files, SWIFT FIN, CREST, FIX, the ISO 20022 and FpML validation rules, relational database structures, Java classes, etc. Simon Heinrich, Product Development Director, C24
  10. As the complexity of XML has now exceeded even the largest databases, XQuery has become a serious alternative to SQL.
    WHAT?! You mean like... let's query databases with XQuery? It would be the next buzz; I hope it never happens :-P
  11. WHAT?! You mean like... let's query databases with XQuery? It would be the next buzz; I hope it never happens :-P
    Definitely not! Why would you want to query a database in XQuery? It is not designed for that. The point is that things like FpML, ISO 20022, TWIST et al. need a query language, and it would be virtually impossible, not to mention crazy, to create a relational mapping and then use SQL. If you've got complex data structures then XQuery is an excellent choice for querying them. We have the majority of large investment banks using our technology, not to mention ESB vendors like IONA (Artix for Financial Services), BEA (BEA SWIFT-certified through C24) and PolarLake (financial messages for the PolarLake ESB). There are many reasons they chose to use IO, but many of them need something that can handle the extreme complexity of these standards. Gone are the days when you could just chuck everything into a database and run endless SQL queries; see my comment above on Tangosol. The database is not a good solution for this level of complexity. Regards, -John-
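    To make that concrete, a query over a trade structure might read like the following (the element names are purely illustrative; the compile/run API is the one shown in the code demo in message 2):

    // A FLWOR query navigating and joining within one document -- the kind
    // of traversal that would need a painful relational mapping first if
    // you insisted on SQL. Element names are made up for illustration.
    String myXQuery =
        "for $t in /tradeBatch/trade " +
        "where $t/notional > 10000000 " +
        "order by $t/tradeDate " +
        "return <bigTrade id=\"{ $t/@id }\">{ $t/counterparty/name }</bigTrade>";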
  12. Oops, the BEA link should be this one, not as above. Sorry. -John-
  13. As the complexity of XML has now exceeded even the largest databases, XQuery has become a serious alternative to SQL.


    WHAT?! You mean like... let's query databases with XQuery? It would be the next buzz; I hope it never happens :-P
    Yeah, no kidding. That quote has to be one of the dumbest sentences I've read this year. From a data modelling aspect, if you're dumb enough to use XML for dynamic data storage, XML can do no better than 1970s-era hierarchical databases. The limitations of hierarchical databases were the reason RDBMSs were created. Nothing in the XML kool-aid approaches the notion of ACID, or the various vendor-specific options for query optimization such as indexes and the like. I haven't seen practical uses of XML in the real world beyond: a) messaging, and b) configuration/read-only information. And there are many other approaches and languages I would use for XML manipulation rather than XSLT. I did XSLT hell for three years; no more. Slow, ugly, difficult to teach to people, ponderous syntax. XPath and parts of XQuery were the last meaningful work produced that made XML easier. But this pipe dream of XML data replacing RDBMSs is just stupid. It's like someone declaring that all modern Turing machine architectures will be replaced with state machines. Uhhh...no: neither in theory nor in practice.
  14. XML database

    But this pipe dream of XML data replacing RDBMSs is just stupid.

    It's like someone declaring that all modern Turing machine architectures will be replaced with state machines. Uhhh...no: neither in theory nor in practice.
    This is exactly what I thought till I tried Berkeley XML DB. We were trying to evaluate various options for storing hierarchical data. It allows you to store XML in a DB (as the name suggests) and provides full XQuery support. As an experiment we converted the WordNet (Princeton) data files to XML (two XML files, one 8 MB and the other 25 MB) and saved them in Berkeley DB. We then indexed the fields (tags/attributes) that are used in the query and ran a fairly complicated XQuery (multiple joins and loops). It was amazingly fast. We tried all of this using a command-line tool without writing a single line of code (barring the XQuery). I am quite surprised to find that Berkeley XML DB is not seriously discussed. Have any of you used/experimented with it? Have I missed something important?
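    For anyone curious, the Java flavour of that experiment would look roughly like this. This is a sketch from memory of the com.sleepycat.dbxml Java bindings, so treat the exact signatures as approximate, and the container/element names are made up:

    import com.sleepycat.dbxml.XmlContainer;
    import com.sleepycat.dbxml.XmlManager;
    import com.sleepycat.dbxml.XmlQueryContext;
    import com.sleepycat.dbxml.XmlResults;
    import com.sleepycat.dbxml.XmlUpdateContext;

    public class WordNetQuery {
        public static void main(String[] args) throws Exception {
            XmlManager mgr = new XmlManager();
            XmlContainer container = mgr.openContainer("wordnet.dbxml");

            // Index the element used in the query predicate -- indexing the
            // queried tags/attributes is what made the joins fast.
            XmlUpdateContext uc = mgr.createUpdateContext();
            container.addIndex("", "lemma", "node-element-equality-string", uc);

            // Run an XQuery (joins and loops), just as from the command-line tool.
            XmlQueryContext qc = mgr.createQueryContext();
            XmlResults results = mgr.query(
                "for $w in collection('wordnet.dbxml')//word" +
                " where $w/lemma = 'bank' return $w", qc);
            while (results.hasNext())
                System.out.println(results.next().asString());
        }
    }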
  15. Re: XML database

    But this pipe dream of XML data replacing RDBMSs is just stupid.

    It's like someone declaring that all modern Turing machine architectures will be replaced with state machines. Uhhh...no: neither in theory nor in practice.


    This is exactly what I thought till I tried Berkeley XML DB. We were trying to evaluate various options for storing hierarchical data.
    You guys are talking past each other. The OP is talking about XML replacing *relational* data, Krishnan is talking about hierarchical data -- there is a difference. XML is definitely well-suited for modeling hierarchical data.
  16. Berkeley XML DB

    Berkeley XML DB is pretty cool; it's got good support for XQuery too, but it only works on XML. Try storing a SWIFT message in the database and querying that with XQuery. Of course, you could use IO to change it into XML and then use Berkeley XML DB. I think that's the point you're missing: I'm talking about being able to run XQuery on something other than XML; the XML stuff is commoditised. I am a fan of Berkeley DB though; it's interesting to see what Red Hat's doing with it. There are Apache projects that need it but can't use it because of the licensing, all very amusing. -John-
  17. Re: XML database

    But this pipe dream of XML data replacing RDBMSs is just stupid.

    It's like someone declaring that all modern Turing machine architectures will be replaced with state machines. Uhhh...no: neither in theory nor in practice.


    This is exactly what I thought till I tried Berkeley XML DB. We were trying to evaluate various options for storing hierarchical data. It allows you to store XML in a DB (as the name suggests) and provides full XQuery support. As an experiment we converted the WordNet (Princeton) data files to XML (two XML files, one 8 MB and the other 25 MB) and saved them in Berkeley DB. We then indexed the fields (tags/attributes) that are used in the query and ran a fairly complicated XQuery (multiple joins and loops). It was amazingly fast. We tried all of this using a command-line tool without writing a single line of code (barring the XQuery).

    I am quite surprised to find that Berkeley XML DB is not seriously discussed. Have any of you used/experimented with it?

    Have I missed something important?
    Yeah, Berkeley is pretty good. There are other vendors that provide more complex functionality and better enterprise features; take a look at RainingData's TigerLogic. I can't believe there are still opponents screaming that XML can't be used to represent hierarchical data structures in an efficient way. Most of these people have never had to solve a storage need that doesn't fit into a rectangular data structure, so they don't understand. There is a whole level of capability built around storing data as XML that allows for a dynamic hierarchical model, which is not very elegantly resolved with a rectangular data set. The performance argument is no longer there either: there are XML databases that perform on par with enterprise relational DBs, and even beat them when dealing with particular data sets that are not well suited to relational storage. Ilya
  18. You're missing the point

    You seem to think I'm proposing XML to replace databases; this couldn't be further from my message. If you Google back to papers I've written in the past (posted here) and talks I've given at various conferences, you'll find I'm not a huge fan of XML; in fact I think it's one of the worst formats we've ever had. I advocate design through UML and metadata modelling, then working from the model to produce messaging, business objects and persistence artefacts. The world no longer runs on large relational databases BUT we still need them and I still use them.

    If your system processes derivatives, then model the derivative and derive the messages, business objects and persistence from that model. Odds are you will be exchanging derivative messages with third parties, and odds are they won't have the same view as you. Only by managing the metadata definitions of the derivative can you integrate in a manageable way. If you work from the database upwards you will not be able to handle change, reuse or third-party models. XML Schema is a good way of describing a model: it is type-safe and has reasonable field-level constraints. Like it or not, the world has globalised and we find ourselves exchanging complex and ever-changing messages between businesses; unless you manage these at the metadata level you will end up with an ever-increasing integration problem.

    Using an XML schema does not mean you have to use XML, though. The model is the key here; once you have the model you can choose the implementation. This isn't pie-in-the-sky stuff: I've worked at some pretty large banks and this is the way it's done. It isn't XML-centric, far from it; this is model-centric. XML happens to be used, mostly for external messaging but also internally when we're not bothered about performance. Given a model we can also generate more efficient payload formats for application-to-application integration; one of these is C24's IO. We import the model (from schema or XMI etc.) and generate code that can read/write XML as well as generate DDL through generated Hibernate mappings. If you don't like XML then we support plenty of other (more efficient) payloads. If you want to run queries then you have a choice: stick to SQL if you like (have fun), use Java on an in-memory database/cache through something like Tangosol or GigaSpaces (far quicker than your best query optimisation on a relational database), or query the object model. Choice is the important part here.

    Take a look at TWIST, ISO 20022, SWIFT 15022 or FpML and tell me you can model that in a relational database. Parts of it may be possible, but after you've spent a man-year modelling the several thousand elements you'll be about four versions out of date; oh yes, your relational database needs to support several versions. -John-
  19. But this pipe dream of XML data replacing RDBMSs is just stupid.
    Nobody said XML will replace RDBMSs, at least not in this post :-). It is all my fault: I see even John replied seriously to my post, when I was just laughing at Regina's. I'd never think seriously of XQuery becoming a serious alternative to SQL. It's pretty much apples and oranges, I suppose; Regina forgot to be clear about the context. XQuery could replace SQL eventually, in the context of XML databases (or other XML data storage formats) replacing RDBMSs. This: "As the complexity of XML has now exceeded even the largest databases, XQuery has become a serious alternative to SQL." just seems to be an unfortunate phrase from Regina's repertoire :-(. That's what I was laughing at. My apologies.
  20. Many financial applications need to handle market feeds at high performance and low latency. One of the commonly used standards for those feeds is the FIX format, which is basically a tokenized name-value list. To improve the performance of handling those feeds, it is common practice to transform them into some sort of binary format, usually a proprietary one. While this method can address the performance of parsing and validating those feeds, you lose many of the capabilities associated with a standard format, such as query capabilities, portability, etc. C24 provides tools that deliver the benefits of both worlds: they solve the performance issue by converting any text-based format (mostly XML) into a binary format through an automated code-generation tool, and they apply the standard's validation rules during this transformation. In addition, they provide XPath query support on that binary data, which adds real power: you can query those objects using XPath with the performance of a binary format. We (GigaSpaces) have used this solution on various occasions in the past and it proved very useful. That led us to partner with C24 to provide an integrated solution with our product. Nati S., CTO, GigaSpaces
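    As a rough sketch of what XPath-over-binary can look like in code, reusing the C24 DocumentNode wrapper and dataDoc from the code demo in message 2 together with Saxon's JAXP XPath support (the exact integration API used by C24/GigaSpaces may differ, and the path is illustrative):

    import javax.xml.xpath.XPath;
    import net.sf.saxon.Configuration;
    import net.sf.saxon.xpath.XPathEvaluator;

    // Evaluate an XPath directly against the binary IO object -- no
    // serialization to text XML in between.
    Configuration config = new Configuration();
    XPath xpath = new XPathEvaluator(config);
    DocumentNode docNode = new DocumentNode(config, dataDoc, true, true);
    String price = xpath.evaluate("/FIXML/Order/@Px", docNode);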
  21. For readers interested in XML and non-XML transformation solutions, check out Milyn Smooks. Smooks is an open-source (LGPL) XML transformation toolkit that allows you to mix and match XSLT, Java etc. within the context of a single message transformation. It has quite a few other features too. Take a look :-)
  22. Milyn Smooks

    I'm sure Milyn has its uses, but it's a message transformation tool. Some points/questions... It doesn't seem to work at the model level, i.e. it's far better to define a transformation for the model than for the instance, and better still to define a transformation for the metadata; the model then stays flexible. It seems to use a DOM internally; how would it manage streamed data or anything large? DOMs are generally very inefficient. Using a DOM you don't get an API for the model. The examples are very simple; how would you handle a transformation of an interest rate derivative? The EDIFACT example is a good one for non-XML, but in the example you're only transforming the instance; the model behind X12 is vastly more complex than the example you have. What happens when you get a different implementation of the same model? -John-
  23. Re: Milyn Smooks

    I'm sure Milyn has its uses, but it's a message transformation tool.
    Thanks John. Yes, you're right on both counts: Smooks has its uses, and it's a tool for performing character data transformations. I wasn't really trying to make a direct comparison with C24's IO solution, which certainly does look cool. To be honest, I was shamelessly using the thread in an effort to direct a bit of attention at an open-source project that I thought would be of interest to the same audience :-) However, the intention is that Smooks is (and will be) quite a bit more than "just another transformation tool". I would summarise the main concepts behind Smooks as follows:
    1. It's a framework in which you can manage multiple "mini" transformations (sub-transformations that perform a specific transformation task on a specific part of a "profile" of message). This allows you to reuse your transformation logic across your message set, with Smooks providing the framework for targeting these "mini" transformations based on predefined message "profiles".
    2. It doesn't mandate one transformation technology over another, i.e. you can write your "mini" transformations in XSLT, Java, whatever... So Smooks isn't a new "Transformation Tool" in that sense, because you still write the actual transformations using standard techniques... but Smooks does allow you to select the technique most appropriate to a given case.
    3. Smooks doesn't employ a template-based transformation model à la XSLT. Smooks works off the "original" DOM. More on this further down.
    It doesn't seem to work at the model level, i.e. it's far better to define a transformation for the model than for the instance, and better still to define a transformation for the metadata; the model then stays flexible.
    As I say, I wouldn't compare Smooks to C24's IO. I think they'd be used in different environments, possibly for different problems, and by people/org's with a different budget ;-)
    It seems to use a DOM internally; how would it manage streamed data or anything large?
    Yep, Smooks uses DOM internally. Not sure what you mean by "how would it manage streamed data" but anyway... by converting it to a DOM using SAX. How would it handle a "large" stream? Now that, for sure, is something that Smooks would have problems with at the moment, but I have been working on some ideas in that area.
    DOMs are generally very inefficient.
    Are you saying DOM implementations are generally inefficient, or that how DOM is typically used (e.g. in the XSLT model) is inefficient? If you're saying the latter, then I totally agree - XSLT creates a completely new message DOM every time, even if your transformation only needs to modify a single document node - classic templating. Put this into a pipeline of XSLTs and we make the situation even worse. Smooks has purposely avoided this by not implementing a "template"-based model. Smooks works on the source document, i.e. it doesn't (effectively) create second, third, ... copies of the document. This has proven to eliminate a lot of overhead. So in that sense, I'm not so sure that DOM itself is inefficient. I think it's generally been a case of how it's been used.
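    A trivial illustration of the difference, in plain JAXP/DOM (nothing Smooks-specific; the file and element names are made up): modify the one node that needs changing, instead of templating an entire new tree the way an XSLT identity-plus-change stylesheet would.

    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;

    public class InPlaceEdit {
        public static void main(String[] args) throws Exception {
            // Parse once, change only the target node; the rest of the tree
            // is untouched and no second document is ever built.
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse("order.xml");
            Element status = (Element) doc.getElementsByTagName("status").item(0);
            status.setTextContent("PROCESSED");
        }
    }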
    Using a DOM you don't get an API for the model
    Again, I'm not trying to compare Smooks to IO!! However, what Smooks does offer is the ability to extract data from the "instance" (as you call it) and populate JavaBeans. In this way, it is possible for your model to interact with value objects populated from the message and you can do this without writing code that interacts with the DOM in any way.
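    For contrast, this is the kind of hand-rolled plumbing that bean population removes: plain JAXP XPath copying values into a value object field by field (illustrative only; the Order bean and paths are made up):

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPath;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;

    // The manual alternative: walk the DOM with XPath and copy each value
    // into the bean by hand.
    Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().parse("order.xml");
    XPath xpath = XPathFactory.newInstance().newXPath();

    Order order = new Order();  // hypothetical value object
    order.setCustomer(xpath.evaluate("/order/header/customer", doc));
    order.setTotal(Double.parseDouble(xpath.evaluate("/order/total", doc)));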
    The examples are very simple;
    Is that a bad thing? ;-) The examples are purposely very simple! They're not trying to teach people how to write XSLT, Java or whatever. Their sole purpose is to illustrate how to get started with Smooks.
    how would you handle a transformation of an interest rate derivative?
    Of course I'd need the details (since I'm not working in the financial markets), and today is Saturday. However, I'd be confident that it's very possible with Smooks, without having to jump through endless hoops. Of course, I could be completely wrong, and I'm sorry if I underestimate the complexity of a derivative.
    The EDIFACT example is a good one for non-XML, but in the example you're only transforming the instance; the model behind X12 is vastly more complex than the example you have.
    Sure. The intention of that tutorial is purely to illustrate a technique for making a non-XML data stream "look like" an XML data stream to Smooks by converting the EDI stream into a stream of SAX events. It purposely doesn't go any further simply because I haven't had time to expand the example and also because the other tutorials try to do the job of illustrating how to perform transformations. The basic idea of the tutorial was... "get it into a DOM, and you can do what you like with it afterwards using Smooks...".
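    The technique generalises to any non-XML format: anything that can fire SAX events becomes consumable by XML tooling. A bare-bones sketch of the idea (illustrative only, not the Milyn EDI parser) that surfaces CSV records as SAX events:

    import java.io.BufferedReader;
    import java.io.Reader;
    import org.xml.sax.ContentHandler;
    import org.xml.sax.helpers.AttributesImpl;

    // Reads CSV records and fires the same SAX events an XML parser would
    // fire for <records><record><field>...</field>...</record>...</records>.
    public class CsvSaxEmitter {
        public void parse(Reader csv, ContentHandler handler) throws Exception {
            BufferedReader in = new BufferedReader(csv);
            AttributesImpl noAttrs = new AttributesImpl();
            handler.startDocument();
            handler.startElement("", "records", "records", noAttrs);
            String line;
            while ((line = in.readLine()) != null) {
                handler.startElement("", "record", "record", noAttrs);
                for (String field : line.split(",")) {
                    handler.startElement("", "field", "field", noAttrs);
                    handler.characters(field.toCharArray(), 0, field.length());
                    handler.endElement("", "field", "field");
                }
                handler.endElement("", "record", "record");
            }
            handler.endElement("", "records", "records");
            handler.endDocument();
        }
    }

    Feed those events to any SAX consumer (a DOM builder, for instance) and the CSV stream "looks like" XML from that point on.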
    What happens when you get a different implementation of the same model?
    Using message profiles... different implementations -> different profiles -> different handlers etc. targeted at the different profiles. Thanks for taking the time to look at Smooks, John; I really appreciate your feedback.
  24. Re: Milyn Smooks

    Tom, Thanks for your great reply, I'm going to download it and have a more in depth play. You never know, there might be something we (C24) could release into your project. Regards, -John-
  25. Re: Milyn Smooks

    Tom,
    Thanks for your great reply, I'm going to download it and have a more in depth play.

    You never know, there might be something we (C24) could release into your project.

    Regards,

    -John-
    Thanks John. FYI... we're currently in the process of integrating Smooks into JBoss ESB for its GA release, due out at the end of this year. So in JBoss ESB, Smooks will provide the capability to manage message transformations on a "message profile" basis, using e.g. message routing and content information (à la WS-Addressing) to dynamically build up the transformation as a message flows from producer to consumer. It's not fully integrated yet (only days away), but it is something worth keeping an eye on as an example of how Smooks can be used.
  26. a very helpful tool

    Thanks for posting this, John. This seems to me to represent a helpful step towards the emerging financial services architecture in which: (1) financial message standards are dealt with in a uniform manner, (2) the event model is abstracted to include caching at the client, messaging, or both, and (3) worker services run in client-side containers and access only local, query-managed active collections. This approach is a winner for all sorts of reasons, mostly to do with flexibility and integration cost, but it can also deliver performance benefits. We are not quite there yet, but some of the other commenters on this thread are certainly contributing as well.
  27. XBRL

    I used C24 on a project at a London bank a short while ago, and I found the modelling approach quite interesting: it's flexible enough to handle very complex custom data transformation tasks. It makes sense to look at protocols like SWIFT and FIX at the metamodel level; this is in fact how they are specified and defined. Incidentally, one standard that I have been looking at recently is XBRL, an XML-based accounting integration standard that attempts to provide a universal electronic interchange for accounting data that can handle the various accounting standards in use (e.g. US GAAP, IFRS). It is being encouraged by the SEC in the US; however, the FSA in the UK seems to be defining a less complex standard for the UK market. John, have you looked at this area at all?
  28. Re: XBRL

    I used C24 on a project at a London bank a short while ago, and I found the modelling approach quite interesting: it's flexible enough to handle very complex custom data transformation tasks. It makes sense to look at protocols like SWIFT and FIX at the metamodel level; this is in fact how they are specified and defined. Incidentally, one standard that I have been looking at recently is XBRL, an XML-based accounting integration standard that attempts to provide a universal electronic interchange for accounting data that can handle the various accounting standards in use (e.g. US GAAP, IFRS). It is being encouraged by the SEC in the US; however, the FSA in the UK seems to be defining a less complex standard for the UK market. John, have you looked at this area at all?
    Hi Rory, I know XBRL reasonably well but we haven't added it to our list of supported standards. We tend to add standards based on demand; however, if they're simple, i.e. without too many rules external to the schema, then we add the odd one from further down the wish-list. I think XBRL would be an interesting addition; I've just downloaded the latest 2.1 schema and taxonomy sets, and I'll take a look over the coming days. As for US GAAP and IFRS, I personally know little about these, although there seems to be quite a bit on Google; those two I will look into. It's always amusing following all the politics behind these standards; we're currently very much into TWIST and ISO 20022. There's enough politics to write a book on; the problem is, there are only a dozen or so people who would read it. Please get in contact with us and we'll let you know about XBRL etc. Thanks, -John-
  29. Love that XML

    Posts about XML on TSS seem to whip up a hornet's nest. I just love the hyperbole. I'm looking forward to reading the C24 white paper. In the meanwhile, let me add my 2 cents on Java and XML. For my part, I love XML. I love how many choices I have for using it in Java. I love the choices I have to store XML natively. I love that it has become a lingua franca for data interchange. I also love the XML data model, because the parent/child relationships make it easy for me to map out the objects I will write in code. On the performance front, the tests I have run comparing relational storage of XML to native XML databases make me believe that there is a niche market for native XML persistence. Check out http://www.rainingdata.com/products/tl for TigerLogic, a commercial native XML DB, and http://www.rainingdata.com/products/soa/soatestkit/index.html for the performance kit the numbers come from. In my experience, the case for native XML persistence comes down to three questions: 1) Do the message schemas stay the same? 2) Are the messages small? 3) Are the messages simple in terms of orders of hierarchy? If any of the answers is no, then I would point developers to native XML engines. -Frank Cohen
  30. archetypes do a better job

    The fundamental issue is how to tackle business rules and constraints while avoiding hard-coded solutions. The approach adopted by openEHR (www.openEHR.org) for an equally complex domain - health care - is to develop two levels of models: information and knowledge. The latter, via a generic knowledge modelling methodology known as archetypes, creates libraries of formally expressed notions that convert rules or concepts defined in plain English into computable objects. They can be used to create complex objects represented as a group of archetypes, or processed individually. Ocean Informatics (www.oceaninformatics.biz) has built a free archetype editor based on the Archetype Definition Language (ADL) that allows clinicians to create archetypes without any knowledge of the underlying software. Archetypes are knowledge structures that organise the storing of information as well as subsequent queries, where XPath is also used for navigational queries. A template builder tool constructs intelligent forms by dragging and dropping archetypes into the form. These archetypes do the data validation behind the scenes, thus making sure that only valid object instances are created or accepted from incoming messages.
    We can import the extremely complex schema, including idrefs and substitution groups, etc., and deploy an fpml.jar. Ignoring several other advanced features for now, we can use the API in this jar to parse FpML instance documents and validate them. This includes the several dozen rules that exist outside of the schema, something that very few tools can currently handle; rules typically defined in plain English or, more recently, in Schematron and OCL. Once parsed, the object can be passed from application to application, either locally by reference or remotely via its own customized serialization. The object can be accessed for reading and updating through a Java-bean API or built-in XPath 2 navigators.
    Yes, archetypes do that and more... Shared archetype libraries and common data types are the basis of this combination: a very small and stable information model, with the parallel development of archetype libraries whose evolution is decoupled from the task of building software. We are working in financial services as well. There, we find the same need for a new approach to standards that will create a clear separation of concerns between information models and domain ontologies.
  31. John, thanks for the info. I was asking about IFX support because of the transformation performance that an IFX app may demand, considering IFX is sometimes used as a messaging format for critical low-latency financial transactions, like a withdrawal from an ATM or so...