Why should you combine Reliable Messaging with Dist. Caching?

Discussions


  1. "Why should you combine Reliable Messaging with Distributed Caching?," by Jags Ramnarayan, chief architect for GemStone systems, discusses how reliable messaging and distributed caching can - and should - be combined to offer consistent views of data, even when a message is consumed after a cache might have been updated.
    When several applications cooperate closely, they typically need to share information as well as events. For instance, the application that takes in customer trade orders notifies the order-routing application when an order arrives, but also has to share related information such as customer credit or delivery details. The common architectural pattern is to combine traditional messaging, used to notify events between applications, with a common database used to share the required contextual information.
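    To make the pattern concrete, here is a minimal Java sketch of the traditional approach (illustrative only; the class, table, and topic names are invented for this example): write the context to the shared database, then publish a JMS notification. The consistency question the article tackles sits exactly in the gap between those two steps, since a consumer may receive the message before, or long after, the database write becomes visible to it.

    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import javax.sql.DataSource;
    import java.sql.PreparedStatement;

    // Sketch of the "traditional" combination: contextual data goes into the
    // shared database, while the event itself travels over a JMS topic.
    public class OrderIntake {
        private final ConnectionFactory jmsFactory; // e.g. looked up from JNDI
        private final DataSource sharedDb;          // the database shared by the applications

        public OrderIntake(ConnectionFactory jmsFactory, DataSource sharedDb) {
            this.jmsFactory = jmsFactory;
            this.sharedDb = sharedDb;
        }

        public void onNewOrder(String orderId, String customerId, double amount) throws Exception {
            // 1. Share the contextual information (customer, amount, ...) via the common database.
            try (java.sql.Connection db = sharedDb.getConnection();
                 PreparedStatement ps = db.prepareStatement(
                         "INSERT INTO orders (order_id, customer_id, amount) VALUES (?, ?, ?)")) {
                ps.setString(1, orderId);
                ps.setString(2, customerId);
                ps.setDouble(3, amount);
                ps.executeUpdate();
            }

            // 2. Notify the routing application that the order has arrived.
            javax.jms.Connection jms = jmsFactory.createConnection();
            try {
                Session session = jms.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Topic topic = session.createTopic("orders.new");
                MessageProducer producer = session.createProducer(topic);
                TextMessage message = session.createTextMessage(orderId);
                producer.send(message);
            } finally {
                jms.close();
            }
        }
    }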

    Threaded Messages (30)

  2. One of the core principles of service design in an SOA is to have isolated services with little or no shared state, so that you can reduce coupling and increase re-use opportunities. This article suggests a pattern that goes against that grain; wouldn't this be more suited to EAI than to SOA?
  3. One of the core principles of service design in an SOA is to have isolated services with little or no shared state, so that you can reduce coupling and increase re-use opportunities. This article suggests a pattern that goes against that grain; wouldn't this be more suited to EAI than to SOA?
    I agree. This sounds to me like an attempt to optimize an anti-pattern. The author specifically states that he is discussing co-operating applications rather than the internal implementation of a single application, so sharing data using a distributed cache seems wrong to me. Of course, using a cache is no worse than using a shared database, which seems to be the starting point for the article.
  4. This is not a service, but the distributed implementation of a computing system. Such a system may support one service through some interface in an SOA setting. That service will not necessarily suffer from the above-mentioned anti-pattern on the outer side of the service bus, especially if the state info is purely internal to itself.
  5. This is not a service, but the distributed implementation of a computing system. Such a system may support one service through some interface in an SOA setting.

    That service will not necessarily suffer from the above-mentioned anti-pattern on the outer side of the service bus, especially if the state info is purely internal to itself.
    "several closely cooperating applications" The state info is not "purely internal to itself".
  6. "several closely cooperating applications"

    The state info is not "purely internal to itself".
    Hi Steve, yes, we seem to be talking about two different situations; brief postings and a large amount of unexpressed, assumed context are the cause, I think. "Closely cooperating applications" and "SOA-style services" do indeed not belong together. They are as different as the relationship between two ping-pong players and the relationship between a customer and a supermarket employee. In one interaction the steps depend on previous passes; in the other, every transaction stands on its own.

    For our discussion I had assumed that the SOA-style services would face systems outside of the messaging + cache system, while the subsystems within would closely cooperate with each other to realise such services, in analogy to closely cooperating supermarket employees offering a simple transaction service to customers. I agree with you that state would be important for those cooperating applications on the inside. I also agree that comparing their interaction to SOA-style services would indicate an anti-pattern: supermarket employees should avoid transactions that look like a ping-pong game with customers (the customer should not have to know employee state), while several exchanges with colleagues might be necessary to get the customer served. In this analogy the cache would be analogous to an electronic device in each employee's pocket giving them wireless access to central data, or a rolling basket containing the goods that the employee carries along to process.
  7. I must be expressing myself very badly, I think. The article describes a situation where multiple systems/apps/services/components (I don't care) are integrated using message-oriented middleware, but also share a database. It recommends adding a distributed cache in front of the database. My assertion is that this is a badly designed system at a fairly fundamental level. The situations in which MOM integration is appropriate are (by and large) different from those where deeply shared state is appropriate. This is why I say that this is optimising an anti-pattern. It has nothing to do with whether any particular system should be closely or loosely coupled.
  8. My assertion is that this is a badly designed system at a fairly fundamental level. The situations in which MOM integration is appropriate are (by and large) different from those where deeply shared state is appropriate.

    This is why I say that this is optimising an anti-pattern.
    Thanks for clearing that up. I do agree with your assertion, with the following remark. My experience is that integration using asynchronous messages and state info made available in RAM/cache is motivated mainly by speed and scalability. If we make low latency and high scalability our main criteria, then I would say this is a relatively OK design. If one can ensure some discipline on the data side and use the database/cache mainly for serialization, object survival, and failover, then it becomes a great design. From the point of view of maintainability and adaptability to future requirements other than low latency and high scalability, especially if the data side is uncontrolled and becomes a conduit for "deeply shared state", then yes, it is not the optimal design.
  9. One of the core principles of service design in an SOA is to have isolated services with little or no shared state, so that you can reduce coupling and increase re-use opportunities. This article suggests a pattern that goes against that grain; wouldn't this be more suited to EAI than to SOA?
    Hi Aaron, yes, this design pattern is not trying to go against the principle of loose coupling between services. I see loose coupling along two dimensions: (1) isolation at a data structure level - two apps have their own definition of a domain class and use a common agreed-upon protocol, typically XML, to notify each other; (2) availability - trying to notify an app irrespective of its availability, traditionally the role played by async reliable publish-subscribe messaging products.

    Now, the point of this note is that many applications end up sharing data through a database - essentially they share a common data model. My definition of what an "Order" is made up of is the same as yours, and if the data model were to change, both applications would likely be impacted. If your services/applications belong to this class, I broadly categorize them as "closely cooperating". But they still want the semantics of async publish-subscribe, and this is where data fabric or data grid products can potentially help.

    The idea we are trying to promote is that if you have a bunch of applications sharing data (through a database/cache), wouldn't it be nice if the database/cache was "active"? You express interest in data and relationships, and you are notified when they change. Why go through the pain of constructing explicit JMS messages, topics, etc., when the underlying data management system can simply send you notifications when something changes? Not just that: data integrity is preserved, ordering is preserved, and you have all the contextual data you need to make decisions when the notification arrives. For instance, a new order came in, but I need the related customer object before I can act on it; in the data fabric all the related data is already available in memory, permitting your application to operate at much higher speeds. Cheers! Jags Ramnarayan
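    As a rough illustration of the "active" cache idea described above (the Region and EntryListener interfaces here are hypothetical stand-ins, not the API of GemFire or any other product), the order-routing application registers interest in order data and reacts to change notifications with the related customer data already in local memory:

    // Hypothetical data-fabric interfaces, used only to illustrate the idea; real
    // products (GemFire, JavaSpaces, JBoss Cache, ...) expose their own APIs.
    interface Region<K, V> {
        V get(K key);                                   // reads are served from local memory
        void registerListener(EntryListener<K, V> listener);
    }

    interface EntryListener<K, V> {
        void onUpdate(K key, V newValue);               // fired when data of interest changes
    }

    class Order {
        String orderId;
        String customerId;
        double amount;
    }

    class Customer {
        String customerId;
        double creditLimit;
    }

    class OrderRouter {
        OrderRouter(Region<String, Order> orders, Region<String, Customer> customers) {
            // Instead of consuming an explicit JMS message and then querying a shared
            // database for context, subscribe to changes in the order data itself.
            orders.registerListener((orderId, order) -> {
                // The related customer object is already resident in the fabric, so the
                // contextual lookup is a local, in-memory read.
                Customer customer = customers.get(order.customerId);
                if (customer != null && order.amount <= customer.creditLimit) {
                    routeToExchange(order);
                }
            });
        }

        void routeToExchange(Order order) {
            // forward the order to the trading exchange (out of scope for this sketch)
        }
    }

    The point of the sketch is that the notification and the contextual read come from the same data management layer, which is what lets ordering and consistency be preserved between them.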
  10. The idea is good in principle, I think - a data fabric that any independent application (or "agent", as I would call it) can access to update its own local memory. It reminds me of a blackboard architecture I used in a former life, and this works for simple agents and relatively small data structures and volumes. What happens, though, when you get gigs of data flowing across all these agents, or the agents are more like complex applications that abuse the fabric much like session state is abused in web apps? Won't we get queuing and throughput problems emerging?
  11. The idea is good in principle, I think - a data fabric that any independent application (or "agent", as I would call it) can access to update its own local memory. It reminds me of a blackboard architecture I used in a former life, and this works for simple agents and relatively small data structures and volumes. What happens, though, when you get gigs of data flowing across all these agents, or the agents are more like complex applications that abuse the fabric much like session state is abused in web apps? Won't we get queuing and throughput problems emerging?
    Hi Frank, the blackboard analogy is a good one. The fabric, by design, is highly partitioned - data is partitioned and the event queues can also be partitioned. Queues automatically overflow to disk or can also be conflated (if the same object changes at a rate higher than a consumer can keep up with, you simply keep the last value in the queue; that is the only thing that gets sent to the consumer). Of course, as with any software, there are limits and bounds to how much you can scale. Cheers! Jags Ramnarayan
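    For readers unfamiliar with conflation, a toy sketch of the idea (not GemFire's actual implementation) looks something like this: the queue keeps at most one pending value per key, so a lagging consumer always receives the latest state rather than every intermediate update.

    import java.util.AbstractMap;
    import java.util.Iterator;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Toy illustration of conflation: if a producer updates the same key faster
    // than the consumer can drain the queue, only the latest value per key is kept.
    class ConflatingQueue<K, V> {
        private final LinkedHashMap<K, V> pending = new LinkedHashMap<>();

        public synchronized void offer(K key, V value) {
            pending.remove(key);     // discard any stale value still queued for this key
            pending.put(key, value); // re-insert so the entry moves to the tail of the queue
        }

        public synchronized Map.Entry<K, V> poll() {
            Iterator<Map.Entry<K, V>> it = pending.entrySet().iterator();
            if (!it.hasNext()) return null;
            Map.Entry<K, V> head = it.next();
            Map.Entry<K, V> copy = new AbstractMap.SimpleEntry<>(head.getKey(), head.getValue());
            it.remove();
            return copy;
        }
    }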
  12. The article mentions, "The common architectural pattern is to combine traditional messaging, used to notify events between applications, with a common database used to share the required contextual information." Is this a standard pattern that I can find more information about? It appears we have managed to stumble on something similar in implementing our SOA / EDA environment and would be interested to see any documentation that supports our approach! Cheers, John
  13. There is a class of applications where new events (e.g. trades or market data ticks) cause new calculations to be performed against the combination of the new data and some pre-existing data. The point of combining the messaging (for the new event) and distributed caching is to ensure that you have the least possible overhead for retrieving the pre-existing data when the new data arrives. In the event that there is a very large amount of trade data, you may want to partition it across multiple machines to achieve horizontal scale. This tends to make locality of reference to the required pre-existing data more difficult unless there is a way built into the fabric for keeping multiple copies of that pre-existing data consistent across all of those machines. I'm not suggesting this is at all trivial, but it is a real-world situation that people are wrestling with every day.
  14. In the event that there is a very large amount of trade data, you may want to partition it across multiple machines to achieve horizontal scale.
    Within a single application, using a distributed cache across a cluster to get better scalability is a very sensible approach. And there's obviously nothing wrong with combining data from external events with existing data. What I am arguing against is extending the cache across application boundaries. I don't believe the current generation of distributed cache products offer significant protection from the tight coupling that will inevitably result as deadlines loom and shortcuts are taken. BTW, I am one of the people wrestling with this every day in the real world. And in the real world I have to live with the nightmare caused by applications directly accessing each other's data :-)
  15. In the event that there is a very large amount of trade data, you may want to partition it across multiple machines to achieve horizontal scale.

    Within a single application, using a distributed cache across a cluster to get better scalability is a very sensible approach. And there's obviously nothing wrong with combining data from external events with existing data.

    What I am arguing against is extending the cache across application boundaries. I don't believe the current generation of distributed cache products offer significant protection from the tight coupling that will inevitably result as deadlines loom and shortcuts are taken.

    BTW, I am one of the people wrestling with this every day in the real world. And in the real world I have to live with the nightmare caused by applications directly accessing each other's data :-)
    I want to understand your point here. Let's say I have the operations "place order", "cancel order" and "track order". Clearly these will need to share at least some state. Do you think these are one service or multiple services?
  16. I want to understand your point here. Let's say I have the operations "place order", "cancel order" and "track order". Clearly these will need to share at least some state. Do you think these are one service or multiple services?
    Obviously a single application. Possibly a single service with multiple operations, maybe multiple services. They obviously share state (the state of the order), and if the application is clustered then a distributed cache is a great way to manage that state. Now, let's extend this to include a different application using JMS-based integration to invoke these services. Should the two applications share the state of the order through the distributed cache?
  17. I want to understand your point here. Let's say I have the operations "place order", "cancel order" and "track order". Clearly these will need to share at least some state. Do you think these are one service or multiple services?

    Obviously a single application. Possibly a single service with multiple operations, maybe multiple services. They obviously share state (the state of the order), and if the application is clustered then a distributed cache is a great way to manage that state.

    Now, let's extend this to include a different application using JMS-based integration to invoke these services. Should the two applications share the state of the order through the distributed cache?
    If I understand the question correctly, I would say no: the second application should depend only on the contract of the services provided by the first application. Earlier you mentioned that one of the core tenets of SOA was that services should be independent. I understand the principle, I think, but I struggle with how this would be implemented in practice in situations like the above. I wonder if the idea is more that services shouldn't depend on each other directly (i.e. call each other). I completely agree with your main point that different applications shouldn't access each other's data stores, but the larger concept of service independence seems ill-defined to me. P.S. I would be surprised if your direct data-access nightmares are worse than mine.
  18. Earlier you mentioned that one of the core tenets of SOA was that services should be independent. I understand the principle, I think, but I struggle with how this would be implemented in practice in situations like the above
    Actually that wasn't me, but it is a principle I agree with. In the context of your place/cancel/track order, I would implement it as a single service (WSDL) with multiple operations, making the whole point moot.
    I wonder if the idea is more that services shouldn't depend on each other directly (i.e. call each other).
    Now we are getting into the whole area of service tiering, etc., which is fascinating but probably wandering off topic.
  19. I agree with you that extending a cache across application boundaries is dangerous. In fact, we generally recommend that users actually insert our WAN gateway product between the cache instances for separate applications that just happen to cache similar data so that they get store-and-forward semantics, and significantly looser coupling. The only exceptions to that are either applications that themselves are actually tightly coupled for other reasons, or situations where duplicating the memory footprint is prohibitive. And even the memory footprint exception is arguable.

  20. Within a single application, using a distributed cache across a cluster to get better scalability is a very sensible approach. And there's obviously nothing wrong with combining data from external events with existing data.

    What I am arguing against is extending the cache across application boundaries
    I think of traditional messaging, whether based on JMS, SOAP over HTTP, or whatever, as quite appropriate for applications that are inherently loosely coupled - an order management system interacting with inventory management or some CRM application. That said, the focus of this article is all the cases where several distributed services (or call them components or "closely cooperating apps") all access and use the same underlying database, sharing data and events. Services communicate asynchronously with each other and continue functioning irrespective of the availability of the dependent service. There is widespread use of traditional messaging APIs for such communication. I think this leads to unnecessary complexity and, frankly, is prone to data consistency problems. The fact that these services depend on a common shared database makes them tightly coupled at the data model level. Hopefully this clears up any confusion about the use of the term "closely cooperating applications".
  21. That said, the focus of this article is all the cases where several distributed services (or call them components or "closely cooperating apps") all access and use the same underlying database, sharing data and events. Services communicate asynchronously with each other and continue functioning irrespective of the availability of the dependent service. There is widespread use of traditional messaging APIs for such communication. I think this leads to unnecessary complexity and, frankly, is prone to data consistency problems. The fact that these services depend on a common shared database makes them tightly coupled at the data model level.
    I still believe the situation you are describing is a bad design, and my preference would be to fix the root cause rather than patch it with a distributed cache.
    I'd hate to see people saying: "Hmmm, we've closely coupled our apps and violated our architectural principles, what should we do? I know, we'll just stick a distributed cache in and everything will be fine". :-)
  22. JMS + JavaSpaces?

    Sounds to me like something that combines distributed message passing with a central data store such as JavaSpaces or JBoss Cache or something like that. I think this is a good idea in case there is indeed distributed processing going on.

    "Depending on the application usage, the data fabric can be configured to make multiple copies of data resident close to the application that is consuming it or spread the data across many nodes or a combination of the two." May not be easy to keep the copies consistent.

    "When data is updated it merely calculates the delta" I don't want to be the developer who is in charge of calculating the "delta". Is calculating the delta application-specific or generic? In either case, if there is a subtle bug in calculating the delta, the result is total disaster ...

    The term "data fabric" doesn't tell you anything about what the architecture is like. MoM tells you exactly how data is exchanged; "data fabric" tells you nothing. For that reason I don't find the term meaningful. Regards, Oliver Plohmann
  23. Re: JMS + JavaSpaces?

    Oliver, I hope some of my responses below help to clarify the importance of Data Fabric technology as a new fundamental tool of application development.
    Sounds to me like something that combines distributed message passing with a central data store such as JavaSpaces or JBoss Cache or something like that.
    It is something like that, but a key advantage of this new category of data management infrastructure is that the data store itself is distributed and inherently scalable. For the most part you're able to develop the application as if you're interacting with a centralized data store, but when it comes to deployment or future scalability, you're able to re-configure the storage model to partition and replicate the data in the manner that best optimizes application data access patterns. Partitioning strategies are pluggable, replication strategies are merely configuration, and the Data Fabric technology then takes care of optimizing resource utilization for all players. Metadata is dynamically published to data consumer drivers so that they automatically connect directly to the actual processes storing and serving the specific data and events of interest. This includes the metadata necessary to maintain open connections to the HA backup processes for the same data, thus assuring immediate failover and no service disruption when any process fails. It is also important to understand that no single process need be a pure "primary" or "backup": the architecture is multi-master, with any given process generally managing at least a pair of unrelated primary/backup data partitions, and usually managing several of each. A truly "centralized" architecture is antithetical to the Data Fabric value proposition.
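    A very rough sketch of the kind of partition-and-backup placement being described (illustrative only; it uses simple modulo hashing over a static member list, whereas real fabrics use richer metadata and rebalance dynamically):

    import java.util.List;

    // Illustrative sketch only (not a real product's API): place each key's data on a
    // primary and a backup member using simple modulo hashing over the member list.
    class PartitionRouter {
        private final List<String> members;   // e.g. ["node-1", "node-2", "node-3"]

        PartitionRouter(List<String> members) {
            this.members = members;
        }

        int bucketFor(Object key) {
            return Math.floorMod(key.hashCode(), members.size());
        }

        String primaryFor(Object key) {
            return members.get(bucketFor(key));
        }

        String backupFor(Object key) {
            // The next member in the ring hosts the redundant copy, so every member ends
            // up holding a mix of primary and backup buckets (the multi-master point above).
            return members.get((bucketFor(key) + 1) % members.size());
        }
    }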


    "Depending on the application usage, the data fabric can be configured to make multiple copies of data resident close to the application that is consuming it or spread the data across many nodes or a combination of the two."

    May not be easy to keep the copies consistent.
    Keeping primary and secondary copies of data consistent is very application- and/or usage-specific in Data Fabric technology. There is no "one size fits all" easy button for this. Developers and architects need to make configuration and access control choices for the various classes of data managed in the fabric, often trading performance for consistency when contention can't be factored out. The fabric does, however, help you to factor out contention by leveraging the data locality metadata described above. If all updates for a given partition of data (say, all Orders for Customer A) always come to the same process, then you can easily maintain distributed consistency without the use of distributed locking. A transaction can occur locally (and acquire the necessary locks only locally), safe in the knowledge that updates aren't occurring anywhere else. What makes this particularly valuable is (a) the dynamic and automatically managed distribution of, and connectivity to, specific data partitions, and (b) the machinery built into the data management infrastructure that assures data integrity in failover boundary conditions. The latter never shows up as a sexy product feature demonstrable by whiz-bang trade-show floor GUIs, but it is absolutely core to any mission-critical application using a Data Fabric as its (if you will) data backbone. For cases where processing locality isn't possible to control, optimistic or pessimistic distributed transactions are of course part of the technology as well and are still optimized to the greatest degree possible.


    "When data is updated it merely calculates the delta"

    I don't want to be the developer who is in charge of calculating the "delta". Is calculating the delta application-specific or generic?
    There are actually several strategies for achieving efficient delta propagation. One is to use transactions in combination with dynamically sub-classed instances of your domain objects, where the "setter" methods are intercepted and the changes accumulated in the transaction context. This is similar to what Terracotta does via bytecode enhancement of existing domain objects; ObjectWave Technologies has implemented this approach in conjunction with GemFire in their JGrinderX product.

    Another approach is to implement referential integrity maintenance strategies through customized serialization logic, essentially allowing you to break a larger object graph down so that it is stored in multiple logical namespaces (i.e. multiple Regions as defined by the JCache specification). If you're able to prevent transitive closure in object serialization, propagating logical pointers instead, then data replication becomes much more granular. This reduces both network replication overhead and garbage collection overhead. We are working with several partners that combine a model-driven development methodology with this pattern to code-generate intelligent serialization logic capable of maintaining referential integrity across different processes in the Data Fabric (using Externalizable or GemFire's DataSerializable). A good example is Orders and their associated collection of line items; for finance, this would generally be the standard object graph of an Order and its Executions. A non-granular replication approach would simply replicate the Order and all of its Executions each time any part of the graph is updated. If the Orders and the Executions are stored in different Regions, and each has the intelligence to propagate references to the other during serialization, then all of a sudden you're replicating only the parts of the object graph that have changed, and you get the double benefit of increased granularity (basically, a kind of delta propagation) and referential integrity maintenance across multiple instances in the same cached data. Here we solve not one but two of the major traditional drawbacks of distributed caching technology.

    When servicing Continuous Queries, delta calculation is actually very safe because the query itself provides the granularity of updates in the SELECT part of the query. This is what Jags is referring to in the context of CQ-based subscriptions. Finally, there are data patterns that simply lend themselves very naturally to delta propagation through a "delta" API that delegates delta calculation to the application. One such example is maintaining the state of a standard messaging session protocol such as WSRM: you can store a single "fat" object in the cache representing the entire session's state information, but then propagate to the backup only each new message generated or received - as a delta!
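    A hedged sketch of that last strategy, the application-assisted delta API (the method names here are illustrative, not a specific product's interface): an Order-like object tracks the executions added since the last replication and ships only those.

    import java.io.DataInput;
    import java.io.DataOutput;
    import java.io.IOException;
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical delta-propagation sketch: instead of re-serializing the whole
    // order graph on every change, only executions added since the last sync are
    // written to, and applied from, the delta stream.
    class CachedOrder implements Serializable {
        String orderId;
        List<String> executions = new ArrayList<>();
        private List<String> addedSinceLastSync = new ArrayList<>();

        void addExecution(String execution) {
            executions.add(execution);
            addedSinceLastSync.add(execution);
        }

        boolean hasDelta() {
            return !addedSinceLastSync.isEmpty();
        }

        // Primary copy: write only the changed portion of the object graph.
        void toDelta(DataOutput out) throws IOException {
            out.writeInt(addedSinceLastSync.size());
            for (String e : addedSinceLastSync) out.writeUTF(e);
            addedSinceLastSync.clear();
        }

        // Replica copy: apply the delta to the already-cached instance.
        void fromDelta(DataInput in) throws IOException {
            int count = in.readInt();
            for (int i = 0; i < count; i++) executions.add(in.readUTF());
        }
    }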
    In either case, if there is a subtle bug in calculating the delta, the result is total disaster ...
    Sure, it could be. On the other hand, subtle bugs anywhere can result in total disaster, so I'm not sure how meaningful this statement is. The fact that this is difficult to implement reliably means that a successful/reliable solution is all the more valuable.
    The term "data fabric" doesn't tell you anything of what the architecture is like. MoM tells you exactly how data is exchanged. "Data fabric" tells you nothing. For that reason I dont find this term meaningful.
    I think this is rather unfair. The term comes from early adopters of the technology and is intended as an apt metaphor for some key features: you can gracefully weave together applications that do in fact have certain data interdependencies, and the deployment is fundamentally able to stretch when additional scale is needed. Of course no metaphor is perfect, but most people get the idea. Terminology aside, this category of data management infrastructure software does a tremendously good job of managing very large and fast-changing operational data sets, and of providing applications with advanced subscription mechanisms for event notifications whenever data in the fabric is updated. Having a robust distributed data management infrastructure also manage the distribution of event notifications to subscribing applications is fundamentally more efficient than the traditional approach of building MoM as a separate layer for such notifications. You can't touch the architectural efficiencies made possible by this coupling with any other approach. Cheers, Gideon
  24. A very short article, with no examples, and much of it at the level of principle. The essence is the synchronisation difficulties that arise when information comes through both messages and "database" replication. Conflict avoidance or resolution is not mentioned; I assume it is seen as something the application has to take care of. Throughput challenges require good application design but also calibration, configuration, and good workload distribution. The devil is again in the details. Are there such controls? Can a given problem be resolved easily and quickly with them? The article gives hope that there indeed are. One can only know for sure after spending time and effort with the platform implementation in question.
  25. This is not primarily about messaging. There are all sorts of situations where cooperating applications that have to share state use a central database as "shared memory". The question is: wouldn't it be better to use shared memory in the form of a distributed cache? Sure, and with something like Terracotta that is simple enough now.
  26. This is not primarily about messaging. There are all sorts of situations where cooperating applications that have to share state use a central database as "shared memory". The question is: wouldn't it be better to use shared memory in the form of a distributed cache? Sure, and with something like Terracotta that is simple enough now.
    Sorry, I don't agree. The entire subject here is the proposed combination of messaging and distributed caching, so messaging is one of the two primary concepts. The author seems to be talking about a situation where applications have been integrated using a messaging technology (a widespread and viable pattern), but where that architecture has been polluted (my phrasing, not a quote) by also sharing state directly. Rather than optimising that pattern (by introducing the distributed cache) I'm suggesting we should fix the root cause and stop sharing state. Alternatively, we may realise that the shared state is fundamental to the solution, in which case we are really talking about a single application. In that case, call it a single application and the distributed cache is just an implementation detail. But you can now scrap the MOM based integration and just use Java method calls or whatever.
  27. I am not going by the words used - "messaging", "distributed caching". I read it two or three times to understand the problem to which this is applied. It looks to me like the problem here is driving the solution. I am assuming the scenario is read-heavy with relatively few writes. The immediate benefits of this architecture are: 1. serving from memory with a method call rather than an IPC; 2. with the proposed architecture, the centralized information store has a limited set of agents to serve (to notify about an information update) rather than serving each request. Say I have 200 machines and each machine handles 70 requests/second, so a centralized database would need to process 14,000 requests/second. With each agent having the intelligence to serve information itself, the central server now needs to handle only updates (5 updates/sec * 200 = 1,000 requests/sec). So I see a possibility of scalability here.

    For a few years I worked on mobile solutions, where synchronization is a big deal. There are numerous mobile devices syncing data with a primary master database. Intellisync-like products do this with primary-key matching on an updated timestamp and a trigger on delete. Here, the delta sync is possible with upfront design consideration of the database schema. Still, I am a little worried about the processing cost of the delta; if there are very few writes, it's OK.

    Consider a hybrid solution; see the sketch below. I feel humans use 20% of the information 80% of the time. I am talking about caching that 20% in a distributed model near the application, with a cache miss going to the central server - a little intelligence on the client side. There are concerns about dirty reads; I am assuming this is not a one-design-for-all solution. Moreover, in most cases we can allow users to be sticky to a server (I consider click time to be 2 seconds, and a faster system will propagate in millis). Cheers, Abinash & Sunil http://sunilabinash.vox.com/
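    A small sketch of the hybrid idea mentioned above (illustrative names only, not any product's API): keep the hot 20% in a local map, fall through to the central store on a miss, and let update notifications refresh the local copy so that only changes, not every read, cross the network.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Illustrative "hybrid" cache: hot data is served from a local in-memory map,
    // and a miss falls through to the central store; the loader function stands in
    // for a database or remote-cache call.
    class HybridCache<K, V> {
        private final Map<K, V> local = new ConcurrentHashMap<>();
        private final Function<K, V> centralStore;

        HybridCache(Function<K, V> centralStore) {
            this.centralStore = centralStore;
        }

        V get(K key) {
            return local.computeIfAbsent(key, centralStore);   // local hit, or load on miss
        }

        // Called when an update notification arrives from the central server.
        void onUpdate(K key, V newValue) {
            local.put(key, newValue);
        }
    }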
  28. This article is appropriate as marketing collateral for GemStone, not as a pattern here. I can't understand why distributed caching is treated as a panacea for every performance issue. First the database is trashed as the performance bottleneck, then caching and distributed caching are proposed as the solutions. Here is an example of an architect at Twitter who lost his job going with a very safe pattern in the Java world. I can't post the picture - you can read it here: http://www.techcrunch.com/2008/05/01/twitter-said-to-be-abandoning-ruby-on-rails/ The exact words from the architect's presentation when he got hired:

    --------------------
    It's Easy, Really (title)
    1. Realize your site is slow
    2. Optimize the database
    3. Cache the Hell out of everything
    4. Scale messaging
    5. Deal with Abuse
    6. Profit
    --------------------

    Caching is an important tool for building performant systems, but in no way the only solution. Thanks, Sunil & Abinash http://sunilabinash.vox.com/
  29. This article is appropriate as marketing collateral for GemStone, not as a pattern here. I can't understand why distributed caching is treated as a panacea for every performance issue . . . Caching is an important tool for building performant systems, but in no way the only solution
    Sure, we're promoting GemStone's GemFire with these capabilities, but you're wrong that this is just about "Distributed Caching", and we are not the only vendor focused on raising our game. If you browse through the vendors building similar capabilities, you'll see that most of them are dancing around the same themes and growing their customer bases like gangbusters. The adoption cycle is already well underway, and you can forgive both Jags and myself for focusing on the implementation we are (by far) most familiar with.

    Also very important: the concept of a "Data Fabric", rather than just a "Distributed Cache", advances earlier caching technologies to the point where we now have a full-fledged distributed and transactional operational data management platform. Once you can rely on the "cache" to be the authoritative source of operational data, the integration of distributed event notifications follows quite naturally. The key is to add this feature in a way that is as scalable, reliable, and performant as the distributed caching foundation. There is even now a JSR specifically targeting the integration of data caching and distributed event notifications, and I noticed a recent discussion thread about this JSR on TSS.

    No technology solution is a "panacea", but building an inherently scalable architecture, synergistic with current enterprise strategies of leveraging computing resources as a utility, does require a transition to this kind of data management technology. You can create something similar by paying through the nose for large teams of developers to re-invent the wheel, or you can consider looking at existing products in the data fabric space. I guess some will stick with the mantra that "nobody ever got fired for using a database/MoM/". Cheers, Gideon
  30. Re: Deliberate plug for a product

    Why not use an in-memory database and set up triggers?
  31. Useless discussion

    Guys, data integration and EAI are two different things. When you want to combine them, the main questions are: what is the driver, and is it cost-effective? Sending data over messaging is a waste of resources; there are data integration vendors and products, so use them. Once you have assumed that all applications should use the same database vendor for accessing the database, you are going down a proprietary route. You can do anything in IT, but is it the right thing? This is where the quality of the architect/designer comes in handy.