Discussions

News: Podcast: John Davies on the Investment Banking Technology Stack

  1. In this podcast, taken from a keynote presentation given at TSSJS-Europe, C24 co-founder and CTO John Davies discusses the technology stack used by investment banks. In his presentation, Davies surveys various technologies as they are used by some of the world's largest financial institutions. The presentation covers what banks like or don't like and why, what they use, what they'd rather not use, and what they're looking at for the future. Technologies covered include POJOs, JSE 5 and 6, JEE, Spring, Linux, JavaSpaces, messaging, caching, databases, and Web services. Java 1.5:
    If you get (programmers in banks) programming badly, it creates a nightmare to manage. A lot of these features we had to limit, so we said "Fine, you use Java 1.5, but you can only use the traditional features." We allowed the loops and we allowed generics, generics is very good.
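    As a concrete illustration of the Java 1.5 subset Davies describes (the enhanced for loop plus generics, with the more exotic features kept off-limits), here is a minimal sketch; the class name and trade IDs are invented for illustration:

        import java.util.ArrayList;
        import java.util.List;

        public class TradeReport {
            public static void main(String[] args) {
                // Generics: the compiler enforces the element type, no casts needed
                List<String> tradeIds = new ArrayList<String>();
                tradeIds.add("T-1001");
                tradeIds.add("T-1002");

                // The enhanced for loop, the other "traditional" Java 5 feature mentioned
                for (String id : tradeIds) {
                    System.out.println("Processing trade " + id);
                }
            }
        }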
    Messaging:
    Typically...you'll find Tibco Rendezvous. It's a very high-performance network - one of the main reasons is it sits and works very well with Microsoft. We don't see Microsoft in the banking world, but you do see it on people's desktops.
    Open source:
    Banking and open source - we are seriously into open source, but I think not for the same reasons that you're imagining why we like open source. As a consumer, we're worried about how long these companies will be around...One of the main reasons we like open source is it gives us security - we have access to the source code. If the company disappears, we still have the source code.
    View the slides for this presentation. What technologies do you think are best suited for investment banking?

    Threaded Messages (51)

  2. Yikes!

    Wow, you nasty guys, I didn't know you'd recorded it! Please bear in mind I hadn't had much sleep :-) -John-
  3. If Online Processing matters: Tandem and C++
  4. If Online Processing matters: Tandem and C++
    Welcome to 1995? ;-) I rarely see any _new_ C++ applications, and I definitely haven't seen _any_ applications being built for or moving to Tandem in years. Even the infinitely deep pockets (e.g. LSE) are working on moving off of Tandem. Peace, Cameron Purdy Tangosol Coherence: Clustered Shared Memory for Java
  5. I see (some) new C++ apps still being developed, but most seem to be in maintenance mode.
  6. Any hope of a transcript?
  7. Typically...you'll find Tibco Rendezvous. It's a very high-performance network - one of the main reasons is it sits and works very well with Microsoft. We don't see Microsoft in the banking world, but you do see it on people's desktops.
    Hi, just some clarification, as from this post some people *might* think that Tibco Rendezvous is tightly linked to Microsoft technology. Although Tibco Rendezvous is a robust, high-performance, technology-agnostic messaging bus that you can run on various flavours of Unix and Linux, as well as on mainframes and the various Microsoft Windows flavours, the main reason it is so widely used in the financial world is historical. You can certainly find TIBCO Rendezvous implementations on 100% Unix (Solaris, HP-UX, ...) based systems, with no Microsoft technology on the server side at all, and no messaging on the client desktop.

    Tibco Rendezvous was implemented some 15 years ago, and for a long time the only competition was IBM MQSeries, although their respective application fields did not really overlap: MQSeries was more focused on transactional messaging, while Rendezvous was more focused on fast, robust message delivery. Incidentally, the Java JMS specification was designed with these two messaging systems in mind, with IBM and Tibco among the spec leads: the queue concept came from MQSeries while the topic concept came from Rendezvous, hence the two messaging modes now part of the standard. Since the publication of the JMS standard, Tibco also sells a JMS server, named Tibco EMS.

    Cheers, Christian

    Disclaimer: I am a former Tibco employee, having worked with Rendezvous and Tibco EMS (and competed against MQSeries) in various fields such as international financial and telco companies.
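    To make the queue/topic distinction concrete, here is a minimal JMS sketch (JMS 1.1 unified API; the JNDI names and destinations are invented for illustration, and connection-factory setup is provider-specific, whether Tibco EMS, MQSeries or anything else):

        import javax.jms.*;
        import javax.naming.InitialContext;

        public class TwoMessagingModes {
            public static void main(String[] args) throws Exception {
                InitialContext ctx = new InitialContext(); // provider-specific JNDI config assumed
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
                Connection conn = cf.createConnection();
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);

                // Point-to-point (the MQSeries heritage): exactly one consumer takes each message
                Queue queue = (Queue) ctx.lookup("trade.settlement");
                session.createProducer(queue).send(session.createTextMessage("settle T-1001"));

                // Publish/subscribe (the Rendezvous heritage): every subscriber sees each message
                Topic topic = (Topic) ctx.lookup("prices.EURUSD");
                session.createProducer(topic).send(session.createTextMessage("1.2734"));

                conn.close();
            }
        }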
  8. Sorry for the previous bold text, I did not mean to be rude! Unfortunately the "preview" is not really WYSIWYG... :o) Christian
  9. Indeed, Tibco in financials is mostly historical, and we all know how easy it is to switch ;-). However, not everyone who's on RV has stayed there, and there are plenty of other messaging buses that can compete in the same space.
  10. Indeed, Tibco in financials is mostly historical, and we all know how easy it is to switch ;-). However, not everyone who's on RV has stayed there, and there are plenty of other messaging buses that can compete in the same space.
    Between MQ and Tib, that is about 97% of finserv "enterprise" messaging. Sonic is maybe 1% and the other 80 vendors fight for the remainder. Peace, Cameron Purdy Tangosol Coherence: The Java Data Grid
  11. Between MQ and Tib, that is about 97% of finserv "enterprise" messaging. Sonic is maybe 1% and the other 80 vendors fight for the remainder.
    I've personally seen Oracle AQ more often than Tibco, but I don't know the exact market share numbers for Oracle. http://www.enterpriseware.eu
  12. Between MQ and Tib, that is about 97% of finserv "enterprise" messaging. Sonic is maybe 1% and the other 80 vendors fight for the remainder.

    I've personally seen Oracle AQ more often than Tibco, but I don't know the exact market share numbers for Oracle.

    http://www.enterpriseware.eu
    I was careful to say "enterprise" messaging, i.e. the standard for application-to-application messaging within various companies. I have yet to see Oracle AQ used as a company standard message bus. Within a particular application it is totally different: you may have Oracle AQ or WebLogic JMS or ActiveMQ or SwiftMQ or Sonic, but outside of the application IBM and Tib still dominate. Peace, Cameron Purdy Tangosol Coherence: Clustered Shared Memory for Java
  13. Indeed, Tibco in financials is mostly historical, and we all know how easy it is to switch ;-).
    Jin, I did and won so many proofs-of-concept using Tibco Rendezvous, and believe it or not, the performance and robustness have always been on the side of Rendezvous, whoever the competitor :o) This is the reason why this product is still so widely used in highly stressed architectures, such as financials and telcos. These customers do not feel the need to take the risk of migrating to another product to supposedly save some money; their messaging needs are 200% fulfilled ;o) Cheers, Christian Disclaimer: I still have friends in this company or at customers :o)
  14. If you get (programmers in banks) programming badly, it creates a nightmare to manage.
    The majority of maintenance problems in financial applications comes not from bad programming but from bad design or architecture. And very often that part of development (design) is either missing or done in a haphazard way during coding. So I would like to add things like OOAD, UML and RUP / UDP to the list of technologies well suited for investment banking. http://www.enterpriseware.eu
  15. The majority of maintenance problems in financial applications comes not from bad programming but from bad design or architecture.
    I wouldn't know about the majority; I haven't seen enough. But from what I have seen and heard in banks, a lot of financial applications are made for very specific user groups, with those groups dictating requirements and priorities in isolation from each other. That's fine as long as it is not too prevalent, but it quickly becomes a labyrinth of tightly-coupled stovepipe systems with little or no reuse or interoperability. That's when you get into management and maintenance hell: when there is no overview or strategy whatsoever for how applications are built (and which applications are built), reused or integrated. At that point it doesn't matter if the code in the individual applications is perfect and does exactly what it is required to do.
  16. But from what I have seen and heard in banks, a lot of financial applications are made for very specific user groups, with those groups dictating requirements and priorities in isolation from each other.
    Are you by any chance referring to the City? :-) http://www.enterpriseware.eu

  17. Are you by any chance referring to the City? :-)

    http://www.enterpriseware.eu
    Yes, I am London-based if that is what you are asking. :)

  18. Are you by any chance referring to the City? :-)

    http://www.enterpriseware.eu


    Yes, I am London-based if that is what you are asking. :)
    Your previous statement was literally what I heard from a City-based contractor some time ago, hence my (apparently correct) presumption :-) http://www.enterpriseware.eu
  19. Databases

    "Databases are on the way out." That's a pretty strong statement. I can understand the performance benefits of working with in-VM data. I don't understand though how such an architecture helps you when data needs to be shared across VMs. Do in-memory "DBs" + distributed caches always = better performance than just updating a central DB? At some stage, you need to notify others of changes, and it always seems to me that the n/w roundtrips required end up being roughly equal to just maintaining one DB. Maybe a separate LAN hosting a javaspace? Puzzled, Kit
  20. Re: Databases

    "Databases are on the way out."

    That's a pretty strong statement.

    I can understand the performance benefits of working with in-VM data. I don't understand though how such an architecture helps you when data needs to be shared across VMs. Do in-memory "DBs" + distributed caches always = better performance than just updating a central DB? At some stage, you need to notify others of changes, and it always seems to me that the n/w roundtrips required end up being roughly equal to just maintaining one DB.

    Maybe a separate LAN hosting a javaspace?

    Puzzled,
    Kit
    I can see using distributed caches for the transactional store, but what about regulatory requirements? Don't things like SOX mean that you need to save each transaction separately to an on-disk archival store? Aren't RDBMSs uniquely qualified in this area?
  21. Compliance is an 800 pound Gorilla

    We are a mid-sized financial firm that sees compliance as the leviathan on the horizon. Our firm uses a system based on Informix. Informix was purchased for a billion dollars by IBM, and one of the features IBM liked (besides the user base) enough to incorporate into DB2 was (and is) the Replication feature of Informix: a low-overhead method to replicate and merge data (as in a data warehouse, something many large financial firms consider an asset).

    I have pondered the question of data grids (like JavaSpaces) versus centralized databases. Compliance and Marketing seem to favor a central DB; it is interesting that several large banks have taken back their outsourced credit-card processing from First Data. But the "new world order" of e-payments via mobile or "contactless" means data must be processed near light speed as the number of transactions and verifications rises exponentially. This e-reality tips the scale in favor of data grids like JavaSpaces. But where will the twain meet? I suppose we end up with a chimeric animal lashed together with code for baling wire. Are there some novel (perhaps better) solutions to this financial dilemma?
  22. Re: Databases

    "Databases are on the way out."

    That's a pretty strong statement.

    I can understand the performance benefits of working with in-VM data. I don't understand though how such an architecture helps you when data needs to be shared across VMs. Do in-memory "DBs" + distributed caches always = better performance than just updating a central DB? At some stage, you need to notify others of changes, and it always seems to me that the n/w roundtrips required end up being roughly equal to just maintaining one DB.
    You would not use an in-memory DB plus a distributed cache, since the distributed cache (speaking in terms of Coherence anyhow) _is_ the in-memory DB. See:

    http://wiki.tangosol.com/display/COH32UG/Cluster+your+objects+and+your+data
    http://wiki.tangosol.com/display/COH32UG/Provide+a+Queryable+Data+Fabric
    http://wiki.tangosol.com/display/COH32UG/Provide+a+Data+Grid

    Notifying others of changes is challenging with a central DB model, while it could not be simpler with a distributed cache. See:

    http://wiki.tangosol.com/display/COH32UG/Deliver+events+for+changes+as+they+occur

    Regarding performance, for the types of applications that we work with, the central database model cannot come close on a large-scale system. With our latest release, we can sustain hundreds of thousands of cluster-durable transactions per second on commodity hardware. At that rate, those transactions are each very small, but humorously -- or ironically! -- enough, those transaction rates are often driven by SOX '02 ;-)

    Obviously, there are also many applications which lend themselves quite naturally to the central database model, so one must not imply that there is a one-size-fits-all solution. We tend to see a class of financial applications (e.g. risk analytics, order book / trading systems, compliance, metrics, data feeds, CEP, etc.) that simply cannot meet stated requirements if they are built on the central database model, so my personal sample set (and John's ;-) is likely badly skewed.

    Peace, Cameron Purdy Tangosol Coherence: Clustered Shared Memory for Java
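    As a sketch of the event model linked to above, registering for change notifications in Coherence 3.x looks roughly like this (class and method names as commonly documented for that release; the cache name and printouts are illustrative assumptions):

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.MapEvent;
        import com.tangosol.util.MapListener;

        public class PositionWatcher implements MapListener {
            public static void main(String[] args) {
                NamedCache positions = CacheFactory.getCache("positions");
                // Each registered listener is called back as changes occur,
                // with no polling of a central database
                positions.addMapListener(new PositionWatcher());
            }

            public void entryInserted(MapEvent evt) { System.out.println("new: " + evt.getNewValue()); }
            public void entryUpdated(MapEvent evt)  { System.out.println("changed: " + evt.getNewValue()); }
            public void entryDeleted(MapEvent evt)  { System.out.println("removed: " + evt.getOldValue()); }
        }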
  23. Re: Databases

    Regarding performance, for the types of applications that we work with, the central database model cannot come close on a large-scale system. With our latest release, we can sustain hundreds of thousands of cluster-durable transactions per second on commodity hardware. At that rate, those transactions are each very small, but humorously -- or ironically! -- enough those transaction rates are often driven by SOX '02 ;-)
    Yeah, but don't those have to be written out somewhere durable for SOX compliance?
  24. Re: Databases

    With databases on the way out, what happens to the 'D' in ACID transactions? Would you use a Tx manager on a raw partition, a journaled file system, flash memory, something else? Whatever it is, it needs to support failover, logging, concurrency, etc.
  25. Re: Databases

    With databases on the way out, what happens to the 'D' in ACID transactions?
    There are various forms of "D" just as there are various forms of "I"; for example, ANSI defines READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ and SERIALIZABLE. Similarly, durability can include in-memory durability, redundant in-memory durability, flash, local disk, RAID, SAN, etc. When it comes to durability, the ability to select the required durability is very important. For example, we support synchronous write-through caching for when durability to a database or a SAN is absolutely required, and in-memory durability (local or clustered) when no disk durability is required, and everything in between (write-behind, etc.) Peace, Cameron Purdy Tangosol Coherence: The Java Data Grid
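    In Coherence 3.x terms, the disk end of that durability spectrum is typically plugged in as a CacheStore that the cache invokes either synchronously (write-through) or on a delay (write-behind); which one you get is a configuration choice, not a code change. A minimal sketch, with the persistence calls left as comments since the backing store is application-specific:

        import com.tangosol.net.cache.CacheStore;
        import java.util.Collection;
        import java.util.Map;

        public class TradeStore implements CacheStore {
            public Object load(Object key) { return null; /* e.g. SELECT from the archive DB */ }
            public Map loadAll(Collection keys) { return null; }
            public void store(Object key, Object value) { /* e.g. INSERT into the archive DB */ }
            public void storeAll(Map entries) { /* batch variant, used heavily by write-behind */ }
            public void erase(Object key) { }
            public void eraseAll(Collection keys) { }
        }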
  26. Re: Databases

    "Databases are on the way out." Kit's reply: "That's a pretty strong statement."
    Yes, you're right, it might strike a sore point with a few database vendors and die-hard DBAs, but I could have said COBOL was on its way out 20 years ago when Objective-C and C++ started to appear on the scene, and COBOL is still around. Databases will still be around in 20 years, but they ARE on their way out; it's a technology that worked well for large companies, but the needs of globalisation will kill it. I suspect (and hope) Oracle will still be the leader; they do make a damn good database.

    Let me give you a little background to my thinking, purely personal, but I do have a lot of experience in this area. You can now put an entire day's trades from any of the world's largest banks into memory. True, at the end of the day we need to "archive" it for a number of reasons, not least regulatory controls, but you rarely need to index it for complex searches once it's archived; in effect this is pure archival. Electronic trading has led to huge network traffic increases, but most of these are requests for quotes, i.e. one machine asking another for a price. We can store these prices in memory and replace them with the new price. We only need to store them (archive them, I should say) when the trade is executed. In this area databases disappeared a long time ago; they just can't keep up with the new electronic trading speeds (often well over 10,000 messages a second).

    As we move from an enterprise scale (say just the bank or large institution and its several hundred internal applications) to global scale (hundreds of banks and thousands of clients, not individuals but other businesses, all interchanging complex messages), we run up against some very interesting problems representing the complexity of data in relational form. Not only that, but relational models cannot easily manage multiple versions, something needed on this global scale. Even if we manage to somehow design a relational model for our complex data and ignore the continuing changes driven by standards outside of our control, how do we manage the complex searches joining several hundred tables? The several hundred tables are the DBA's attempt to model the messages, now the model for our global business. I've seen this so many times in large banks: by the time they've finished designing a relational model and published it in TOAD, new requirements have come through and it all goes back into development again. Finally it runs like a dog because the developers expected to use an ORM tool and the DBA claims the queries have to be run in stored procs if you want anything reasonable in performance; it never works.

    You see, globalisation has driven the model towards messaging, not relational databases. If we model the world in UML and technologies like OWL (Web Ontology Language) we can start to do away with the extra layers of ORM tools and start to view the world differently. From ontologies and metadata we can generate code (what my company C24 does); with this code we can parse, route, persist and build distributed applications in Java, with no need for ORM. Some of, in fact most of, the world's largest central banks, investment banks and exchanges are using this technology with grid/caching vendors like Tangosol and GigaSpaces to replace "classic" databases with distributed, transactional, reliable in-memory databases. Very complex messages (EDIFACT, SWIFT, FpML, ISO-20022, TWIST etc.) can be stored after a few minutes' work in an in-memory database. I've seen these over a terabyte in size; millions of rows of very complex data, from more than one version and even message type, can be searched several orders of magnitude faster than the best-tuned relational equivalent.

    Microsoft are currently developing similar in-memory architectures. They know damn well it will compete with and eventually replace their own database, but they also know where things are heading; if they don't keep up they'll fall behind. I was speaking to Nati (CTO of GigaSpaces) the other day and he was telling me how well their .NETSpaces are selling. I'd normally call him a traitor for touching .NET, but we all need to earn money to feed the kids. What's interesting is that .NET has nothing as powerful in its arsenal, so it's selling like hot-cakes.

    Relational Modelling RIP -John-
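    A sketch of the kind of in-memory search John describes, using the Coherence 3.x filter API over a cache of generated message objects; the cache name and the getCurrency/getNotional accessors are invented for illustration:

        import com.tangosol.net.CacheFactory;
        import com.tangosol.net.NamedCache;
        import com.tangosol.util.filter.AndFilter;
        import com.tangosol.util.filter.EqualsFilter;
        import com.tangosol.util.filter.GreaterFilter;
        import java.util.Set;

        public class TradeQuery {
            public static void main(String[] args) {
                NamedCache trades = CacheFactory.getCache("trades");
                // Query the day's trades in memory, with no relational mapping in between;
                // the filters call the named accessors on each cached message object
                Set results = trades.entrySet(new AndFilter(
                        new EqualsFilter("getCurrency", "EUR"),
                        new GreaterFilter("getNotional", new Double(10000000))));
                System.out.println(results.size() + " large EUR trades today");
            }
        }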
  27. Re: Databases

    Yes, you're right, .... -John-
    John, excellent explanation. I passed it on to some "lay people" and they were able to understand it easily. It helped explain what I have been trying to tell them. I wonder how this can be leveraged in a knowledge base / "data" warehouse type environment. The "dataset" is so large it takes hours to get relatively simple answers. This is mostly risk-analytics type processing in the medical field. mark
  28. Re: Databases

    Excellent explanation.
    Thanks Mark, great to get feedback. It's a favourite topic of mine; I'm just waiting for someone from Oracle to come along with a large wodge of cash to stop me talking. Failing that, it gets me free drinks from the Tangosol, GigaSpaces and Gemstone guys. :-)

    Data warehouses are simply a whole year's worth of daily data (or more), as I'm sure you know. So if the data was held in a relational model then the data warehouse is best left in this format; however, I have seen some interesting data warehouses using more complex models, i.e. message-based. This is quite an interesting new area: essentially we need to store all the messages, usually in their original form, since converting them to a relational model creates all the issues I mentioned above, problems with multiple versions (increased with the duration) etc., not to mention the complexity of searches and even the transformation into a relational model. Very few commercial data warehouses can do this, so it's an interesting niche for someone to pick up on.

    I'm currently doing some work at a very large financial services company and we're looking at some interesting ideas for searching through a warehouse of transactions; by keeping the data in its original structure and querying it with things like XQuery we're getting some interesting results. This is opening up new ideas for AML (Anti Money Laundering) and fraud detection. -John- CTO, C24
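    As a hedged sketch of what querying archived messages in their original form can look like, here is an XQuery run from Java using Saxon's s9api; the query, file name and element names are invented for illustration, and this is not C24's actual tooling:

        import java.io.File;
        import javax.xml.transform.stream.StreamSource;
        import net.sf.saxon.s9api.*;

        public class AmlScan {
            public static void main(String[] args) throws SaxonApiException {
                Processor proc = new Processor(false);
                XQueryExecutable exp = proc.newXQueryCompiler().compile(
                    // Flag any counterparty appearing on more than 100 archived trades
                    "for $c in distinct-values(//trade/counterparty)" +
                    " where count(//trade[counterparty = $c]) > 100" +
                    " return $c");
                XQueryEvaluator eval = exp.load();
                eval.setSource(new StreamSource(new File("trades-archive.xml")));
                for (XdmItem hit : eval) {
                    System.out.println("Review counterparty: " + hit.getStringValue());
                }
            }
        }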
  29. Re: Databases

    John, the thing is, it is not really a data warehouse. I think they tried, or are trying, but it doesn't work because the data is so bad. Anyway, what they want to do with it (run analysis) is virtually impossible because it is too slow and the data is so "bad". They are not really trying to warehouse the data, but to pull it in from diverse sources and run analysis. Some of it is what-if type, some of it is standard. We need to be able to query complex objects that know how to return values based on what they know about themselves. My thoughts were along the lines of a messaging system plus a grid/space, or something like that. I am trying to get a handle on what is being done in this type of system. I would say what we are doing is more like Geico and less like a bank, but that is just my observation; I am currently on the outside looking in. I would like to hear more about the XQuery stuff.
  30. XQuery

    Mark, you can read a little bit more about the technology we're using for XQuery in my XQuery paper on TheServerSide. I forgot to mention that I've seen the Google hardware used a lot for data mining too; very interesting results. This thread is dropping off the bottom of TheServerSide news and I can't publish too much detail on what we're doing, so why not contact me directly if you need any more info (John dot Davies at C24 dot com). -John- CTO C24
  31. Re: XQuery

    Mark,
    You can read a little bit more about the technology we're using for XQuery in my XQuery paper on TheServerSide.

    I forgot to mention that I've seen the Google hardware used a lot for datamining too, very interesting results.

    This thread is dropping off the bottom of TheServerSide news and I can't publish too much detail on what we're doing, so why not contact me directly if you need any more info (John dot Davies at C24 dot com).

    -John-
    CTO C24
    Thanks John. I have emailed Cameron a little on this in the past.
  32. Re: Databases

    I was speaking to Nati (CTO of GigaSpaces) the other day and he was telling me how well their .NETSpaces are selling. I'd normally call him a traitor for touching .NET but we all need to earn money to feed the kids. What's interesting is that .NET has nothing as powerful in it's arsenal so it's selling like hot-cakes.
    It's sometimes amazing to see how fast information flies :) As for being a traitor, John, I actually think that I'm doing a great service for Java, since I'm bringing lots of .NET users to an underlying Java-based implementation; along that path they are getting exposed to the power of Java. Another interesting thing that I found during that experience is that you can get .NET and Java frameworks to live very well together, and in that way get the benefits of both worlds without a real penalty in performance or even complexity. This is achieved through the introduction of something we refer to as PONO, which is basically the equivalent of POJO in .NET. Nati S. GigaSpaces – Write Once Scale Anywhere (Including .Net)
  33. Re: Databases

    "Databases are on the way out."

    That's a pretty strong statement.
    It also depends on what you mean by "database". Is it only RDBMS, or databases in general? Our product does content management, so there's a lot of data in our "database". However, instead of storing it in a relational database we have opted for a persistent hashtable a la Berkeley DB, for fast store/load, and then we index whatever needs to be queryable using Lucene. Our next generation of the platform will store the object model (which is entirely based on Java using AOP introductions) as XML documents which are then indexed using a separate RDF database. The RDF query model is perfect for what we need, and using XML ensures that we can easily do schema migration and store complex objects without any relational overhead when we serialize/deserialize them. So, with this model we get raw performance, good long-term storage and schema migration possibilities, support for AOP domain models, and excellent queryability. We're also looking at integrating Coherence for clustering all of this. So far it looks very promising.
    I can understand the performance benefits of working with in-VM data. I don't understand though how such an architecture helps you when data needs to be shared across VMs. Do in-memory "DBs" + distributed caches always = better performance than just updating a central DB? At some stage, you need to notify others of changes, and it always seems to me that the n/w roundtrips required end up being roughly equal to just maintaining one DB.
    I would suggest that you take a look at how for example Coherence works here, because the algorithm used there does not imply n/w roundtrips. It is much more efficient than that.
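    The store-then-index pattern described above fits in a few lines; the sketch below uses the classic Lucene 2.x API and a plain map standing in for the Berkeley DB-style persistent hashtable, so treat the details as assumptions rather than the poster's actual code:

        import java.util.HashMap;
        import java.util.Map;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;

        public class StoreThenIndex {
            // Stand-in for the persistent hashtable (Berkeley DB in the post)
            private final Map store = new HashMap();

            public void save(String id, String content) throws Exception {
                store.put(id, content); // fast primary store: key -> serialized object

                // Index only what needs to be queryable; "true" (re)creates the
                // index on each call purely to keep the sketch short
                IndexWriter writer = new IndexWriter("/tmp/cms-index", new StandardAnalyzer(), true);
                Document doc = new Document();
                doc.add(new Field("id", id, Field.Store.YES, Field.Index.UN_TOKENIZED));
                doc.add(new Field("content", content, Field.Store.NO, Field.Index.TOKENIZED));
                writer.addDocument(doc);
                writer.close();
            }
        }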
  34. Re: Databases

    Our product does content management
    Good approach with Berkeley DB for structure/storage and Lucene for access. I do the same for a CMS, but with Subversion/property files/Lucene. Content providers get by with TortoiseSVN. But it's read-only. I figure banks are more OLTP. A replacement for an RDBMS still needs to provide all the features of an RDBMS: fault-tolerance, logging, concurrency, ACIDity (isolation, locking, coordination), access (SQL), IO, durable storage, yadda yadda. What replaces all that?
  35. Re: Databases

    "Do in-memory "DBs" + distributed caches always = better performance than just updating a central DB? At some stage, you need to notify others of changes, and it always seems to me that the n/w roundtrips required end up being roughly equal to just maintaining one DB. "
    The key is that you don't distribute all the data across all nodes all the time; in addition, you do not maintain a distributed lock in a way that implies a centralized point of synchronization. This is achieved mostly through partitioning of the lock, the data, or both. If you like, this is pretty much the in-memory equivalent of RAID disks. Anyway, there's much more to it than I could cover in this post. You can find plenty of information on our site and possibly others'.
    Maybe a separate LAN hosting a javaspace?
    Yes, and I think the numbers on our benchmark page speak for themselves. Nati S. GigaSpaces – Write Once Scale Anywhere
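    Nati's partitioning point can be illustrated with a toy sketch: each key is owned by exactly one node, so locking or updating one key never touches a central coordinator. Real grids add backups, rebalancing and much more, so this is only the shape of the idea:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class ToyPartitionedMap {
            private final List<Map<Object, Object>> nodes = new ArrayList<Map<Object, Object>>();

            public ToyPartitionedMap(int nodeCount) {
                for (int i = 0; i < nodeCount; i++) {
                    nodes.add(new HashMap<Object, Object>());
                }
            }

            // The key's hash decides which node owns it - no central lookup table
            private Map<Object, Object> owner(Object key) {
                return nodes.get((key.hashCode() & 0x7fffffff) % nodes.size());
            }

            public void put(Object key, Object value) {
                Map<Object, Object> part = owner(key);
                synchronized (part) { part.put(key, value); } // per-partition lock only
            }

            public Object get(Object key) {
                Map<Object, Object> part = owner(key);
                synchronized (part) { return part.get(key); }
            }
        }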
  36. I worked with TIBCO RV in the context of some other TIBCO products at my previous job and I was very unimpressed. I'm not sure whether it was RV or the applications that were using it, but it was fairly unreliable. When message receivers puked on the input, they would often halt all messaging on the subject in question. RV components would often shut down on the system for no apparent reason and TIBCO support wouldn't be able to explain to us what had happened. We commonly had to blow away ledger files (and all the messages in them) in order to get things running again. But when I see that the banking industry uses it so much, I feel there is some disconnect. I can't imagine it would still be used if they had the kinds of experiences we had. I guess it must be the other TIBCO components that were at fault, or our configuration of the environment. There were many, many issues with these other components, so it seems likely that they were the cause of our issues. It just seems that for the price we paid for these packages, they would be more reliable, configurable and better documented.
  37. I worked with TIBCO RV in the context of some other TIBCO products at my previous job and I was very unimpressed. I'm not sure whether it was RV or the applications that were using it, but it was fairly unreliable.
    Hi James, I was a Tibco employee for 6 years before I left one year ago. I really know the products, from Rendezvous as a messaging system accessed via its C, C++ or Java interface, to the higher-level Tibco tools. The good part of the job was that I could design and develop some pretty interesting low-level Java systems for mission-critical, highly stressed applications, as well as designing higher-level IT architecture (SOA-like; back in 2001 Tibco implemented the SOA-on-MOM concept before anyone else that I am aware of). The bad side of the job was the customer waking you up late in the evening or at weekends because the IT system was not under control anymore, claiming that it was because of Tibco Rendezvous bugs or whatever... So the Tibco consultant had to troubleshoot, even though we had not been involved in the design or the implementation of their IT system. 99.9% of the time the problem was a clear and obvious misuse of the products by the customer himself, or by the consultants he hired to do the job:
    - customer programming bugs in multithreading code in the client implementation
    - customer programming bugs in the marshalling/unmarshalling of the data
    - customer programming bugs in the opening/closing of Rendezvous client sessions
    - change of data format on the data provider side while some or all clients remained on the old data format
    - upgrade of the Rendezvous software on all the servers but one server "forgotten" and still on a very old version
    - fundamental architectural design errors made while designing the solution, ignoring best practices
    - maintenance deciding to change some middleware technical settings without making a serious impact study
    - ...the list could go on...
    But my best is this one:
    - Hey Tibco consultant, Rendezvous was working for the last 3 days/weeks/months but is not working anymore, this is a bug!
    - OK, what did you change in the system?
    - We changed nothing, suddenly it is not working anymore.
    - I see, what did you change in the system?
    - Nothing, for sure.
    - I see, but maybe something was changed by someone, accidentally?
    - Oh well, we upgraded the database, migrated to a different Unix server, cleaned up some garbage data, upgraded to a new version of our application, changed the router multicast behaviour, but none of this should impact the system, so it should be a Rendezvous bug, shouldn't it?
    As Tibco consultants, our role was not only to teach and help the customer to use Tibco products, but also (if not mainly) to help them do their job as IT engineers. Sometimes because they did not have the knowledge, sometimes because the management would listen to us more than they would listen to their own employees, and sometimes because they were too busy competing against each other to be able to work together at all. If an employee says "we need 2 more weeks" he won't be listened to at all and gets more pressure on his shoulders; if an external consultant says the same, the team might get one month. Sad but true. For sure Rendezvous is not 100% guaranteed bug-safe, like any software, but it is robust enough that some telcos have their real-time provisioning systems based ("standardized") on Rendezvous, and the same goes for some financial institutions. At least this is my experience with Rendezvous and other Tibco-related products. Cheers, Christian
  38. The bad side of the job was the customer waking you up late in the evening or at weekends because the IT system was not under control anymore, claiming that it was because of Tibco Rendezvous bugs or whatever... So the Tibco consultant had to troubleshoot, even though we had not been involved in the design or the implementation of their IT system.

    99.9% of the time the problem was a clear and obvious misuse of the products by the customer himself, or by the consultants he hired to do the job
    I'm completely open to the suggestion that at least part of the issue was our configuration. I was not very involved in this, so it's hard for me to say. I do know that in some cases some of the suggestions we received from Tibco support were not followed because they started with buying another $30K+ server when we asked for the ability to set a specific timeout. Perhaps it's just that the banking industry is willing to throw more hardware at RV than my previous employer. That said, we didn't write any code that talked to RV directly. All our RV-based stuff was sold to us by TIBCO, e.g. BW and BC. So there isn't any possibility of client (meaning us) bugs. It could be client, as in BW and BC, bugs, however. It just seems to me that TIBCO should have been able to use their own platform more effectively in their products. But RV-related issues were just the tip of the iceberg in these products. I guess the most frustrating thing to me was the unwillingness of the company to admit that things like massive space leaks and the lack of options such as trading-partner posting thread limits were not features but serious issues.
  39. All our RV-based stuff was sold to us by TIBCO, e.g. BW and BC. So there isn't any possibility of client (meaning us) bugs. It could be client, as in BW and BC, bugs, however. It just seems to me that TIBCO should have been able to use their own platform more effectively in their products. But RV-related issues were just the tip of the iceberg in these products. I guess the most frustrating thing to me was the unwillingness of the company to admit that things like massive space leaks and the lack of options such as trading-partner posting thread limits were not features but serious issues.
    James, I clearly understand what you mean now. I did only a few projects in the B2B field, so I could not pretend to master every detail of the BusinessConnect line of products. But I know for sure that BW is a flagship product for its productivity, expressiveness, and manageability. I have to admit that the first releases a couple of years ago were a bit buggy and had fewer tuning capabilities. And even though you are using BW and not the direct Rendezvous API, you still have to deal with naming, and it is better to double-check the messaging elements of the system. Cheers, Christian
  40. I clearly understand what you mean now. I did only a few projects in the B2B field, so I could not pretend to master every detail of the BusinessConnect line of products.

    But I know for sure that BW is a flagship product for its productivity, expressiveness, and manageability. I have to admit that the first releases a couple of years ago were a bit buggy and had fewer tuning capabilities. And even though you are using BW and not the direct Rendezvous API, you still have to deal with naming, and it is better to double-check the messaging elements of the system.

    Cheers,

    Christian
    BC is much more reliable and robust, in my experience. We determined the reason for some of these things. For one, if BW had a schema that was not exactly the same as BC's version, a message could be sent to BW that contained unexpected elements. BW would fail to acknowledge, and in certain circumstances this would require bringing basically every TIBCO application down on all servers. While this could be avoided with painstaking effort, BW really should have been more robust and handled this more gracefully, as it is sold as an enterprise-quality product.

    The other issues were things like how every deployment was saved in memory on production. After a number of deployments, the footprint of BW would surpass 2 GB, resulting in unacceptable thrashing. There was no way to limit the number of saved deployments, and the only option was to wipe the whole repository clean and deploy everything from scratch. Deployments could take up to 5 hours, as the server would time out on RV requests and there was no way to increase the timeout. You could shut down all the engines, but this is problematic when you are supposed to be up basically 24/7. In addition, non-GUI deployments were unstable and would appear to succeed when they did not, resulting in semi-deployed apps and bizarre behavior.

    Those were just the server issues. I would not suggest attempting to use the BW developer tool on any machine with less than 1 GB of RAM, and I would recommend 2 GB. Large projects could take more than half an hour to open and 'index', and because there was no concept of a shared library, it was difficult to split these without serious maintenance issues. On top of that, modifications were sometimes unstable, causing internal inconsistencies in the BW proprietary files that had to be fixed manually (without documentation). The documentation that did exist was superficial at best, explaining only the most obvious things. Selecting help on one element of the GUI often linked to something completely unrelated. Sometimes builds would fail, producing a garbage file, but the tool would report them as successful.

    I could go on for quite a while on other problems, but I have bad memories of very late nights and lost holidays. Conceptually, this tool is a good idea, but its implementation is nowhere near enterprise quality. It should really be a tool that creates apps deployable on any J2EE-compliant app server. Version 5.x was a huge improvement over 2.x, but there were still a lot of issues. I haven't even mentioned any of the issues that we had in 2.x, which was, IMO, a soul-crushing tool.
    James, nice discussion, it reminds me of some relationships with some customers :o)

    BW has evolved a lot during the 2 and a half years I was using it on 75% of my projects. Last year when I left, the product was really stable, and was used not only in very large integration projects, as an SOA orchestrator, but also as a graphical and very productive development tool for business applications. You will for sure never get the memory and speed efficiency of carefully tailor-made pure Java code. But when you realise that for many applications more than 90% of the time is spent in the database and presentation tiers, there is less need to optimise the application tier. And the productivity you get with this kind of graphical tool is absolutely awesome. I remember winning a proof-of-concept against 10+ IBM consultants developing with WebSphere; we were only 2 Tibco developers in contrast, with much more maintainable business logic, while the server load was about equal. But nevertheless, BW is not a silver bullet for all kinds of applications.

    And yes, there were some bugs, but Tibco support was really reactive from my point of view. Yes, the repository eating bunches of memory was a lasting problem that was resolved around the time I was leaving. It was kind of annoying, but not really something that the customers I was working with could not work around one way or another. And 5 hours for a deployment really means that your deployment strategy was not the right one, having each and every process or adapter loading the whole repository from the repository server. You can have the components read a local file repository instead of contacting the repository server. You can really fragment your repository using different strategies, each component loading the minimum of information.

    Regarding the documentation, customers were satisfied, so it may be a matter of taste. To most of their questions I would answer: please check the docs, and they would come back saying: OK, I got it, thanks.

    To sum up my feeling, maybe you experienced bugs with some early version of BW? Yes, I agree that there were bugs, but quickly really few, and they were rapidly patched by support. The memory consumption problem could be worked around, and the new release offered a more fine-grained repository. And for sure BW is of enterprise quality; I never witnessed a project not going into production because of BW. I worked for one year for a major national telco who standardized 100% on BusinessWorks for 24/7 mission-critical, heavily loaded processes, and believe me, they are more than happy and keep on buying Tibco products, from Rendezvous to BW and adapters, to build a technical ecosystem around Tibco products to get maximum productivity. Time to market for new business processes is more important to them than the price of the RAM, I believe.

    Regarding deployment on a J2EE server: for the kind of applications the customers were using the tool for, it was not a problem. Integration with J2EE-based applications could be made using web services or direct EJB client invocation, and it ensured Tibco that the BW underlying server was not "patched", "enhanced" or "customized" by the customer, with potential bugs/library incompatibilities introduced accidentally. But as always, one size does not fit all, and of course I am biased :o) Cheers, Christian
  42. James,

    Nice discussion, it reminds me of some relationships with some customers :o)

    BW has evolved a lot during the 2 and a half years I was using it on 75% of my projects. Last year when I left, the product was really stable, and was used not only in very large integration projects, as an SOA orchestrator, but also as a graphical and very productive development tool for business applications.

    You will for sure never get the memory and speed efficiency of carefully tailor-made pure Java code.
    I never asked for that. Just something that will run on a Solaris server with 4 CPUs and 4 GB of RAM without thrashing. For what we were deploying, this should have been a cakewalk.
    But when you realise that for many applications more than 90% of the time is spent in the database and presentation tiers, there is less need to optimise the application tier.
    Completely unrelated to what my issues were. Application performance was acceptable in terms of speed.
    And the productivity you get with this kind of graphical tool is absolutely awesome.
    Completely disagree. Waiting 30 minutes to load a project and wait for it to 'index namespaces', and then often having to close down and reopen because of 'internal errors' that prevent saving one's work, does not contribute to productivity.
    And 5 hours for a deployment really means that your deployment strategy was not the right one, having each and every process or adapter loading the whole repository from the repository server.
    Wrong. That's exactly what we were doing. It wasn't taking that long to deploy; it's that the deployment would fail repeatedly. We eventually realized it was failing about ten minutes before the administrator would alert us to the failure, which sped things up slightly. The only workaround was to shut down engines that had nothing to do with the deployment.
    Regarding the documentation customers were satisfied, so it may be a matter of taste.
    I was a customer; I was not satisfied.
    I worked for one year for a major national telco who standardized 100% on BusinessWorks for 24/7 mission-critical, heavily loaded processes, and believe me, they are more than happy and keep on buying Tibco products, from Rendezvous to BW and adapters, to build a technical ecosystem around Tibco products to get maximum productivity. Time to market for new business processes is more important to them than the price of the RAM, I believe.
    The space leak problem is unbounded. You can't add RAM forever.
    Regarding deployment on a J2EE server: for the kind of applications the customers were using the tool for, it was not a problem. Integration with J2EE-based applications could be made using web services or direct EJB client invocation, and it ensured Tibco that the BW underlying server was not "patched", "enhanced" or "customized" by the customer, with potential bugs/library incompatibilities introduced accidentally.
    The reason I want this is not for integration. It is because the BW 'app server' was of low quality. Tibco should focus on improving the core functionality of BW instead of reinventing the wheel.

    The biggest issues were really with configuration. BC, for example, only allows setting outbound posting thread limits by protocol. We had a customer that couldn't handle more than a handful of concurrent inbound messages without crashing their server. This meant we were required to limit the threads for that entire protocol to 5 threads on two servers. This caused major production backlogs because it was impossible to keep up with the volume for all the partners using that protocol with that low number of threads. This caused us to miss SLAs and angered major trading partners, risking many millions of dollars worth of business. Tibco's response was basically 'tough shit, maybe we'll fix that when you buy the next version in a year', and your attitude is reminiscent of the arrogant attitude of the representatives. If the company I was working for had any balls, they would have kicked Tibco to the curb right then, but that would make the people who chose it look bad, so we had to 'make it work'. I literally spent as much time screwing around with problems with the Tibco tools as I did doing actual work. That's not productive. I think the Tibco reps realized we were neck deep in Tibco proprietary code and felt like they could ignore us.

    The biggest fatal flaw with BW, as I see it, was that it appears to have exponential time and space complexity with project size. As the project got large, the problem got worse and worse. So the more we did, the longer it took to do it. Maybe this could have been solved by a better approach, but in my opinion BW led developers down a path of code proliferation and little reuse. I've used lots of different platforms and tools and I feel that this was one of the worst. It had some nice features, but these were far outweighed by all the issues, and really, these features can be found elsewhere, including in open-source products.
    James, I somehow understand what you mean; I supported some customers who were having this kind of trouble. But I always managed to get around it, either by applying the latest patch from support to work around bugs, or by re-architecting the project organisation in terms of BW resources management, or some other tricks. Then deployment time goes from 20 minutes down to 50 seconds or so. As a long-time BW consultant, I can tell you that we could regularly demonstrate to our customers its flexibility and robustness. Not as flexible as Java code on an application server, but so much more productive that even newbies fresh out of school could do complex stuff after one week of training. Stuff they would never have been able to achieve using Eclipse to develop a J2EE-based application, whatever framework they were using. I won enough proof-of-concepts to back up my claim. And I am an IT engineer 100%, not a "sell at any cost" salesman. BW has evolved from a basic integration tool to a complete business-process-oriented orchestration tool. There were bugs, I do not deny it, for sure. But I do not recognize the latest versions when you describe such crap :o) Nevertheless, I am not a Tibco employee anymore, and have no more financial interest in the success of this company, so hopefully you can achieve your projects using different tools and different frameworks. I am really not trying to be an evangelist here, believe me. Cheers, Christian
  44. I can tell you that we could regularly demonstrate to our customers its flexibility and robustness.
    I could too. The problem is not in a demo; it works great for demos. The problems were related to large-scale use of the product.
    Not as flexible as Java code on an application server, but so much more productive that even newbies fresh out of school could do complex stuff after one week of training. Stuff they would never have been able to achieve using Eclipse to develop a J2EE-based application, whatever framework they were using.
    And that is great for newbies. I mean it when I say the actual idea of what BW does is great, although I felt the way it was being used where I worked was stretching it beyond its limits. What I mean about the app server is that if BW were to create real EAR files that could be deployed on a J2EE app server, the product would be much more enterprise-ready. Where I worked, there were few newbies and a lot of experienced professionals. What we needed was a tool that allowed us to do sophisticated things, and BW was not doing that. We were hamstrung by its remedial features and GUI-centric design. It was geared toward the dumbass, copy-paste, drag-and-click developer. The Tibco trainer I worked with even told me that it was initially developed for non-developers, which means it was not developed with developers in mind, and that's who is using it now. I think maybe the company I was working for bought it thinking it was a way they could fire all their developers. The thing is that beyond the most simple uses of this tool, a fairly expert developer is required, at least for what we were doing with it, and it didn't cater in any way to that developer. The mapping tools were great from a development standpoint. I don't think I could go back to dealing with XML without some kind of similar tool. This functionality is what Tibco should be honing and selling before someone eats their lunch. App servers are a commodity. Tibco's 'app server' doesn't have the capabilities of the worst J2EE server on the market. Why maintain and enhance something that's so far behind free products? It's silly and annoying for users who probably already have an app server.
    I won enough proof-of-concepts to back up my claim. And I am an IT engineer 100%, not a "sell at any cost" salesman.

    BW has evolved from a basic integration tool to a complete business-process-oriented orchestration tool. There were bugs, I do not deny it, for sure. But I do not recognize the latest versions when you describe such crap :o)

    Nevertheless, I am not a Tibco employee anymore, and have no more financial interest in the success of this company, so hopefully you can achieve your projects using different tools and different frameworks. I am really not trying to be an evangelist here, believe me.

    Cheers,

    Christian
    I'm trying to be constructive here. I'm not trying to bash you or Tibco. The point is that software companies cannot market these 4GL-type languages as 'no developers needed' and not expect a backlash when expert developers are inevitably brought in to clean up. Yes, their customers are trapped in a tarpit of poorly written code in a proprietary language, but the developers are not. They will move to other companies and relate their experiences to others. I for one cannot in good conscience recommend BW to anyone after my experience with this tool. I am also wary of Tibco in general.
  45. James, I am really trying to be constructive here too :o)
    I can tell you that we could regularly demonstrate to our customers its flexibility and robustness.


    I could too. The problem is not in a demo; it works great for demos. The problems were related to large-scale use of the product.

    I was doing proof-of-concepts 20% of my time, and 80% architecture and product consulting. Sorry about using the word demo. As I told you, I've worked for one full year for a country leader in telecom (tens of millions of mobile phones and equivalent land lines); they standardized 100% on Tibco, and they would never have come close to their delivery dates for their new business processes without such a tool, not even talking about J2EE servers.
    Not as flexible as Java code on an application server, but so much more productive that even newbies fresh out of school could do complex stuff after one week of training.

    Stuff they would never have been able to achieve using Eclipse to develop a J2EE-based application, whatever framework they were using.


    And that is great for newbies. I mean it when I say the actual idea of what BW does is great, although I felt the way it was being used where I worked was stretching it beyond its limits.

    What I mean about the app server is that if BW were to create real EAR files that could be deployed on a J2EE app server, the product would be much more enterprise-ready. Where I worked, there were few newbies and a lot of experienced professionals. What we needed was a tool that allowed us to do sophisticated things, and BW was not doing that. We were hamstrung by its remedial features and GUI-centric design. It was geared toward the dumbass, copy-paste, drag-and-click developer. The Tibco trainer I worked with even told me that it was initially developed for non-developers, which means it was not developed with developers in mind, and that's who is using it now. I think maybe the company I was working for bought it thinking it was a way they could fire all their developers. The thing is that beyond the most simple uses of this tool, a fairly expert developer is required, at least for what we were doing with it, and it didn't cater in any way to that developer.

    The mapping tools were great from a development standpoint. I don't think I could go back to dealing with XML without some kind of similar tool. This functionality is what Tibco should be honing and selling before someone eats their lunch. App servers are a commodity. Tibco's 'app server' doesn't have the capabilities of the worst J2EE server on the market. Why maintain and enhance something that's so far behind free products? It's silly and annoying for users who probably already have an app server.

    I won enough proof-of-concepts to back up my claim. And I am an IT engineer 100%, not a "sell at any cost" salesman.

    BW has evolved from a basic integration tool to a complete business-process-oriented orchestration tool. There were bugs, I do not deny it, for sure. But I do not recognize the latest versions when you describe such crap :o)

    Nevertheless, I am not a Tibco employee anymore, and have no more financial interest in the success of this company, so hopefully you can achieve your projects using different tools and different frameworks. I am really not trying to be an evangelist here, believe me.

    Cheers,

    Christian


    I'm trying to be constructive here. I'm not trying to bash you or Tibco. The point is that software companies cannot market these 4GL-type languages as 'no developers needed' and not expect a backlash when expert developers are inevitably brought in to clean up. Yes, their customers are trapped in a tarpit of poorly written code in a proprietary language, but the developers are not. They will move to other companies and relate their experiences to others. I for one cannot in good conscience recommend BW to anyone after my experience with this tool. I am also wary of Tibco in general.
    I do not know about the constraints of your project, so I will not talk about it. For sure BW had bugs, but it is surely not the only product in this case. Showstopper bugs are/should be given high priority by Tibco support so that engineering can release a patch accordingly (generally a couple of days to a week). If they don't, Tibco loses a customer, and it seems that you are lost forever (joke ;o). Nobody wins all the time :o)

    Regarding the history of BW: yes, it was originally developed for small integration departments, a few people with low constraints. But the GUI and the spirit behind it were so well received that Tibco decided to make it its flagship product. Then BW evolved to get more and more functionality, until it became easier to design full business processes with its GUI than it is to develop, debug, and maintain the same complexity with any J2EE server that I know of, at least until now. Competition will come - is coming, I am sure. But of course this is my personal opinion about a tool I practiced heavily for about 3 years.

    Last, regarding the idea of BW being expensive: it all depends. I know customers who do not think they are throwing their money in the trash bin. They like the ability of inexperienced developers to be very productive from day one, they like the ability to clearly understand a business process implementation even from a business analyst's point of view, and all these qualities that (at least) some customers are willing to pay for. Anyway, happy design, happy coding, whatever tool you are using today; again, I am not an evangelist, just giving my personal experience, as you do. And of course, happy beer :o) Cheers, Christian

    PS: If you would like to talk more about BW and the overall concepts behind it, I can drop you my email address.
  46. I do not know the constraints of your project, so I will not talk about it. For sure BW had bugs, but it is surely not the only product in that position. Show-stopper bugs are (or should be) given high priority by Tibco support so that engineering can release a patch accordingly (generally within a couple of days to a week). If they don't, Tibco loses a customer, and it seems that you are lost forever (joke ;o).
    The problem was that these really big problems for us were never considered bugs by Tibco. They were always considered features. I think you are missing the point, because you mention show-stopper bugs. It wasn't that we couldn't get things done; it was that doing so was horribly time-consuming, and we almost never got something into production without bugs. So when you talk about productivity and ease of use, I'm thinking you are living on another planet. Here are a couple of examples of what I mean.

    We had a team of on-shore and off-shore contractors doing a major project in our code base. At the same time, we had to support smaller projects that could not wait. It was inevitable that someone on one project would need to modify the same thing as someone on another project. If it was Java code, this was no big deal, because the source control system could usually merge the code and, if not, would walk us through the changes. But with BW, the process files were often completely rearranged even when there were only trivial changes, and the source control system (ClearCase) was completely lost. Basically, we had to merge changes manually. I was doing this once or twice a week. It was extremely slow going and very error-prone. Lots of productivity lost. (A partial mitigation is sketched after this post.)

    Another good example was the flat file adapter plugin. Aside from its asinine inability to guess the next offset from the previous field's offset and length, I spent two full days walking one of our 'newbie' developers through setting up a single adapter. He made some mistakes initially, and when he tried to change things around, the three (yes, three) separate files that define the adapter became inconsistent with each other. He got a bunch of nonsensical errors; everything looked OK in the designer, but you had to open the backing XML files in a text editor to see that they didn't agree on anything. In order not to lose the many hours he had already spent, I had to figure out what the files should look like and modify them by hand. That's not productive.

    Other problems: While it was easy to get something coded, maintenance was a nightmare, and there was no way to set a default handler for unhandled errors in a process other than 'suspend', which was an unacceptable option. Designer could not handle XSDs with more than one enumeration; the only way to tell it had a problem was that the XSD didn't get syntax highlighting in the schema designer. Schema downloads from BC were indiscriminate and would break things unrelated to what you were working on, requiring manual modification of the underlying XML files. Processes were non-reentrant and would not allow cycles, making proper logic difficult to express in a lot of cases. Designer would not tell you if there were paths in a process that failed to initialize data structures that were used later. BC and BW created WSDLs that were incompatible with each other. There was no supported palette designer for custom activities, and if you created a custom icon, BW would store a separate copy in the deployment for every use, adding to the already extreme bloat of an engine (100 MB baseline).
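    To make the merge complaint above concrete: one partial mitigation is to normalize the process XML before every check-in, so that pure whitespace and indentation churn does not defeat text-based merging. Below is a minimal sketch using only the JDK; it cannot undo element reordering done by the tool, and the in-place file handling is deliberately simplistic:

    // Minimal sketch (JDK only): normalize an XML file before check-in so
    // whitespace/indentation churn does not defeat text-based merging.
    // This reduces diff noise but cannot undo reordering done by the tool.
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.transform.OutputKeys;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.dom.DOMSource;
    import javax.xml.transform.stream.StreamResult;
    import org.w3c.dom.Document;
    import java.io.File;

    public class NormalizeXml {
        public static void main(String[] args) throws Exception {
            File file = new File(args[0]); // e.g. a BW .process file
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            dbf.setNamespaceAware(true);
            Document doc = dbf.newDocumentBuilder().parse(file);
            doc.normalizeDocument(); // merge adjacent text nodes, etc.

            Transformer t = TransformerFactory.newInstance().newTransformer();
            t.setOutputProperty(OutputKeys.INDENT, "yes"); // consistent indentation
            t.transform(new DOMSource(doc), new StreamResult(file));
        }
    }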
    Nobody wins all the time :o)
    That's a specious argument. Garry Kasparov didn't win every chess match he ever played, but that doesn't mean I'm just as good a chess player as he is.
  47. James, I do really understand what you mean; I have personally experienced what you describe, except for the conclusion: I could always manage to get things up and running, with the help of support or with some tweaking - and I mean up and running and perfectly usable for developers, testers, and production administrators. That is where our experiences do not converge. I never had the source control problem, for instance, just as an example. Maybe we are simply not talking about the same version of the product, but I and the customers really did get it up and running on time, if not ahead of time, project-wise. While there were far fewer developers on the Tibco team, we literally spent half our time waiting for other people on the project to meet their deadlines!

    Regarding winning all the time, it was a joke, really. You had a disastrous experience with BW, and Tibco support did not support you, so you lost time and energy and Tibco is losing a customer. You will not change your mind on this tool given your personal experience, and from what you are saying I completely understand why. So please let us close the case on BW - not because I do not want to make bad advertisement for Tibco products, I do not really care anymore, but because I do think we will never agree there.

    But at a higher level, I am ready to talk about the concept of it. I personally have not seen such an integrated tool elsewhere - have you? If yes, could you please name a few?

    Thanks and cheers,

    Christian
  48. But at a higher level, I am ready to talk about the concept of it. I personally have not seen such an integrated tool elsewhere - have you? If yes, could you please name a few?
    There were some tools I saw a while back that were BPEL-based and looked very similar to BW in terms of the process management part. I saw this http://www.active-endpoints.com/products/activewebflow/awfpro/index.html a while ago but never looked too closely. JBoss has a similar tool, but it looked pretty crappy, at least in the GUI. I don't know if those tools are very flexible or if they support XML in the way that BW does. I have seen some open-source XSLT generators that have the same basic layout as the BW mapping tool. What I would like is a tool like BW's XML mapping that creates complex XML transformations as components usable in any Java application, combined with a business process management tool. The tool would provide basic activities like calling a web service, writing to a file, etc., and would also allow custom activities to be defined in Java with JavaBeans or something like that. I really don't think it would be that hard to pull together a stack of open-source tools for this, but I have moved to another job and don't have a need for it at the moment (bigger fish to fry).
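    For what it's worth, the JDK already covers the reusable-component half of that idea: compile a stylesheet once into a Templates object and wrap it as a plain Java component. A minimal sketch (the stylesheet file name and sample input are hypothetical):

    // Minimal sketch: an XML transformation packaged as a reusable Java
    // component, along the lines described above. javax.xml.transform is
    // part of the standard JDK; the stylesheet name below is hypothetical.
    import javax.xml.transform.Templates;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;
    import java.io.StringReader;
    import java.io.StringWriter;

    public class XmlMapper {
        private final Templates templates; // compiled once, thread-safe

        public XmlMapper(StreamSource stylesheet) throws Exception {
            this.templates = TransformerFactory.newInstance().newTemplates(stylesheet);
        }

        /** Applies the compiled transformation to an XML string. */
        public String transform(String xml) throws Exception {
            StringWriter out = new StringWriter();
            templates.newTransformer()
                     .transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
            return out.toString();
        }

        public static void main(String[] args) throws Exception {
            XmlMapper mapper = new XmlMapper(new StreamSource("order-to-fixml.xsl")); // hypothetical
            System.out.println(mapper.transform("<order id=\"42\"/>"));
        }
    }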
  49. Hi James, Thanks for the feedback. It is also my opinion that there is no such GUI tool on the market apart from BW. Usually these other tools let you design process orchestration at a high level, but then you need to map your processes to Java components by writing plumbing code. Combine a handful of basic activities with the graphical mapper, and that is where BW shines. Without a proper standard library, you end up with each and every team having to write, debug, and maintain its own evolving set of activities, which is very expensive in the long run. This is really of very high value. I do believe that one of the main factors in Java's success is its base classes, for instance. Good luck with the big fish, though; I keep catching mine :o) Christian
  50. We commonly had to blow away ledger files (and all the messages in them) in order to get things running again.
    James, blowing away ledger files is typically an important symptom of bad design in the naming of subjects, publishers, and subscribers in a Rendezvous architecture: some acknowledgement that should be sent is missing, hence the ledger files keep growing. Regarding the other components, I too have been fighting some bugs, for sure :o) Cheers, Christian
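    In JMS terms (which Tibco EMS speaks), the same failure mode shows up as messages piling up server-side whenever a consumer never acknowledges them, much as unconfirmed certified messages accumulate in an RV ledger. A minimal sketch with explicit acknowledgement; the JNDI names are hypothetical:

    // Minimal JMS sketch: explicit acknowledgement. If acknowledge() is never
    // called (the analogue of a missing RV confirmation), the server retains
    // the messages and its store keeps growing. JNDI names are hypothetical.
    import javax.jms.*;
    import javax.naming.InitialContext;

    public class ExplicitAckConsumer {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext();
            ConnectionFactory cf = (ConnectionFactory) jndi.lookup("ConnectionFactory"); // hypothetical
            Queue queue = (Queue) jndi.lookup("queue/trades");                           // hypothetical

            Connection conn = cf.createConnection();
            Session session = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(queue);
            conn.start();

            Message msg = consumer.receive(5000);
            if (msg != null) {
                // ... process the message ...
                msg.acknowledge(); // without this, the server keeps the message
            }
            conn.close();
        }
    }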
  51. I posted this in the "Performance and Scalability" section, but it seems nobody reads that anymore, so I've reposted it here, where it seems most relevant. I have been reading the thread posted by John Davies about the usage of data fabric and messaging/middleware technologies at investment banks (http://www.theserverside.com/news/thread.tss?thread_id=42563#220125). As a financial industry architect who has struggled for many years with the operational and development pitfalls of MOM, database performance constraints, and the difficulties of integrating relational and object models for different end-users, I am starting to see a new paradigm emerging that targets some of these issues.

    With several projects at GemStone, we are starting to see a strong shift towards a much more information-centric architecture rather than a message-centric architecture. As you may know, the various data fabric vendors (whom John finds so adept at buying rounds at the pub) all have the ability to let you register for event notification callbacks when data is created, modified, or destroyed. The potential of this model has been stymied until recently by inadequate notification delivery guarantees, particularly in failure/failover scenarios. An information-centric architecture must go beyond the simple idea of cache coherence; that simply isn't good enough anymore for demanding use cases. For data fabric technology to become more useful, you have to guarantee not just cache consistency, but that each and every delta to the cache is also propagated to all critically interested applications. GemFire 5.0 now incorporates strong event notification guarantees, including store-and-forward event queues and the concept of durable subscribers, so that even an application that fails to start before transaction activity begins is still assured of later receiving all relevant events.

    So, instead of the older model of performing a transaction and then packaging a message to notify other systems, the new model is to either join a cache server cluster or become a client to a cluster, and then register interest in notifications by one of these common methods: (a) being a full mirror, (b) registering for a set of keys of interest, (c) evolving your "interest list" naturally based on what you have previously accessed or created (via get(), put(), or create()), or (d) through simple or complex filter expressions. The owner of any particular piece of data within the cache server cluster automatically becomes responsible for tracking who is interested in what data, pushing the update notifications, and tracking notification completion status, while the backup owner of the data automatically mirrors all relevant interest registrations and delivery queues to provide fast and seamless failover.

    The picture that emerges turns the old model of data storage combined with messaging middleware inside out. Messaging and notifications become intrinsic duties of the operational datastore, which is quite natural since it is already the logical hub of data activity. This is conceptually similar to using database triggers and built-in database queues to push data updates to interested parties directly from the source, except much, much more efficient thanks to a data fabric's distributed nature, much easier to configure, and much easier to code and maintain.

    For most of our projects, customers are still reluctant to eliminate the relational database as a long-term storage and archival mechanism. After all, the analytics and compliance ecosystems that exist around RDBMSs are massive and quite mature. Because of this, guaranteed database write-behind queues have become critical technology in many trading system projects, permitting the operational efficiency of a data fabric while still assuring that the RDBMS eventually catches up.

    Cheers,

    Gideon

    GemFire - The Enterprise Data Fabric
    http://www.gemstone.com
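    To make the interest-registration model above concrete, here is a deliberately generic sketch of the pattern. This is NOT the GemFire API (every name here is invented): subscribers register interest in keys, the data owner pushes deltas to connected listeners, and a per-subscriber queue is retained and replayed while a durable subscriber is offline:

    // Deliberately generic sketch of key-based interest registration with
    // durable subscribers. Not the GemFire API; all names are invented.
    import java.util.*;
    import java.util.concurrent.*;

    interface CacheListener { void onUpdate(String key, Object newValue); }

    class InterestRegistry {
        private final Map<String, Set<String>> interestByKey = new ConcurrentHashMap<>();
        private final Map<String, Queue<Object[]>> pending = new ConcurrentHashMap<>();
        private final Map<String, CacheListener> online = new ConcurrentHashMap<>();

        /** A durable subscriber registers interest in a key; this survives disconnects. */
        public void registerInterest(String subscriberId, String key) {
            interestByKey.computeIfAbsent(key, k -> ConcurrentHashMap.newKeySet()).add(subscriberId);
            pending.putIfAbsent(subscriberId, new ConcurrentLinkedQueue<>());
        }

        /** Subscriber (re)connects: replay queued deltas first, then go live. */
        public void connect(String subscriberId, CacheListener listener) {
            Queue<Object[]> q = pending.getOrDefault(subscriberId, new ConcurrentLinkedQueue<>());
            for (Object[] e; (e = q.poll()) != null; ) listener.onUpdate((String) e[0], e[1]);
            online.put(subscriberId, listener);
        }

        /** The data owner pushes each delta to interested parties, or queues it. */
        public void put(String key, Object value) {
            for (String sub : interestByKey.getOrDefault(key, Collections.emptySet())) {
                CacheListener l = online.get(sub);
                if (l != null) l.onUpdate(key, value);
                else pending.get(sub).add(new Object[] { key, value });
            }
        }
    }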
  52. .. customers are still reluctant to eliminate the relational database as a long-term storage and archival mechanism. After all, the analytics and compliance ecosystems that exist around RDBMSs are massive and quite mature. Because of this, guaranteed database write-behind queues have become critical technology in many trading system projects, permitting the operational efficiency of a data fabric while still assuring that the RDBMS eventually catches up.
    Exactly. Trading, order management, execution/fill, reconciliation ... anything that has high data rates. For more information, see:
    - Cluster your objects and your data
    - Provide a Queryable Data Fabric
    - Read-Through, Write-Through, Refresh-Ahead and Write-Behind Caching
    Peace,
    Cameron Purdy
    Tangosol Coherence: The Java Data Grid
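    To illustrate write-behind concretely: in Coherence it is expressed as a CacheStore on the backing map whose store()/storeAll() calls are deferred and coalesced by the configured write-delay, so the cache update returns immediately and the RDBMS catches up asynchronously. A minimal sketch, assuming the CacheStore contract (store/storeAll/erase/eraseAll plus the inherited load methods) roughly as documented; the DataSource, table, and column names are hypothetical and error handling is elided:

    // Minimal sketch of a CacheStore used for write-behind. With a write-delay
    // configured on the backing map, store()/storeAll() run asynchronously
    // after the cache update. All SQL/table names here are hypothetical.
    import com.tangosol.net.cache.CacheStore;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.Collection;
    import java.util.Iterator;
    import java.util.Map;
    import javax.sql.DataSource;

    public class TradeCacheStore implements CacheStore {
        private final DataSource ds; // hypothetical JDBC DataSource

        public TradeCacheStore(DataSource ds) { this.ds = ds; }

        public void store(Object key, Object value) {
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                     "UPDATE trades SET payload = ? WHERE trade_id = ?")) { // hypothetical table
                ps.setObject(1, value);
                ps.setObject(2, key);
                if (ps.executeUpdate() == 0) { /* row missing: INSERT instead; elided */ }
            } catch (Exception e) { throw new RuntimeException(e); }
        }

        public void storeAll(Map entries) { // write-behind delivers coalesced batches here
            for (Iterator i = entries.entrySet().iterator(); i.hasNext(); ) {
                Map.Entry e = (Map.Entry) i.next();
                store(e.getKey(), e.getValue());
            }
        }

        public void erase(Object key) { /* DELETE FROM trades ...; elided */ }
        public void eraseAll(Collection keys) { for (Object k : keys) erase(k); }
        public Object load(Object key) { return null; /* SELECT ...; elided */ }
        public Map loadAll(Collection keys) { return java.util.Collections.emptyMap(); }
    }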