News: Cloud + Event Driven Architecture = Internet Scale SOA

  1. [Editor: I suppose it was inevitable that cloud computing would drive the need for queueing and ESB-type services in the cloud. This has interesting implications for the transition of current distributed architectures entirely to the cloud.] cloudMQ is a way to start exploring the integration of messaging into applications, since no installation or configuration is necessary. If you are looking for:
    • Cross-platform integration for your enterprise
    • On-demand, real-time Business-to-Business information exchange
    • Real-time Business Intelligence
    • Complex Event Processing
    cloudMQ provides these benefits and more.
    Performance: cloudMQ has the capacity to hold a virtually unlimited number of messages and support thousands of clients. Unlike Amazon's SQS service, cloudMQ provides all of the enterprise messaging features, such as message order preservation, single-phase and two-phase transactions, and unlimited message sizes.
    Reliability: Using the Amazon EC2 compute cloud and S3 storage, we have created a state-of-the-art AMQP messaging backbone that spans thousands of messaging instances.
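    [Editor: For readers who want to picture what integrating a hosted queue looks like from the application side, here is a minimal JMS 1.1 sketch. The broker URL, credentials and queue name are hypothetical placeholders, and the ActiveMQ connection factory is just one stand-in for whatever factory a hosted provider exposes; cloudMQ's actual client API may differ.]

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class HostedQueueSend {
        public static void main(String[] args) throws Exception {
            // Hypothetical hosted broker endpoint and credentials.
            ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker.example.com:61616");
            Connection connection = factory.createConnection("demo-user", "demo-pass");
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("ORDERS");          // hypothetical queue name
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage("order-id=42");
                producer.send(message);   // one network round trip to the hosted broker
            } finally {
                connection.close();
            }
        }
    }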

    Threaded Messages (39)

  2. Isn't latency going to be a problem?
  3. It's a good idea so long as someone is happy with Amazon managing all the infrastructure and is happy, security-wise, about the data being up there. I think we'll see more and more of this type of managed middleware service hosted in EC2 or similar cloud farms. Latency should be <200ms maybe. You won't see front office trading systems there but everything else should be fine and I'd imagine throughput for non-single-threaded applications should be fine also.
  4. Latency should be <200ms maybe. You won't see front office trading systems there but everything else should be fine and I'd imagine throughput for non-single-threaded applications should be fine also.
    Let's say that the latency is 100 milliseconds. We have triggers on tables that write to queues. Imagine a batch update with 100,000 records (pretty common). That's 100,000 * 0.1 seconds = 10,000 seconds = 167 minutes added to the batch. That's more than 2 and a half hours added to a single batch. Cloud computing has some very big advantages, but I think people are not thinking critically about some of this stuff. It makes sense to have a queuing infrastructure in the cloud for cloud-hosted services and for distributed delivery across the internet. But people, come on, there's a huge disadvantage to the cloud: latency, and until someone comes up with a way to eliminate it (i.e. send messages faster than the speed of light) that isn't going away.
  5. Imagine a batch update with 100,000 records (pretty common).
    While in some instances batch might be needed, in most it is not. It exists because people don't realize it's 2009. I have some use cases in mind where batch could go away if we could use something like cloudMQ.
  6. Imagine a batch update with 100,000 records (pretty common).

    While in some instances batch might be needed, in most it is not. It exists because people don't realize it's 2009.

    I have some use cases in mind where batch could go away if we could use something like cloudMQ.
    You can rationalize it all you want; the latency is still there. Perhaps you can direct me to your favorite magic-wand retailer. I need one that makes everything 'the way it should be'. My problems have to do with this thing called 'reality'. My job would be so much easier if imagining something were the same as it being real. http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
  7. Imagine a batch update with 100,000 records (pretty common).

    While in some instances batch might be needed, in most it is not. It exists because people don't realize it's 2009.

    I have some use cases in mind where batch could go away if we could use something like cloudMQ.


    You can rationalize it all you want; the latency is still there. Perhaps you can direct me to your favorite magic-wand retailer. I need one that makes everything 'the way it should be'. My problems have to do with this thing called 'reality'. My job would be so much easier if imagining something were the same as it being real.

    http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
    Also responding to:
    Let's say that the latency is 100 milliseconds.
    The ability of any messaging framework to fire messages quickly will be decimated if the network is slow, because the overall speed of the system is determined by the weakest link in the architecture. A latency of 100ms is incredibly high and would be unacceptable for anything other than a toy application.
  8. A latency of 100ms is incredibly high and would be unacceptable for anything other than a toy application.
    So it would be perfect for an RoR app? [insert devil emoticon, :) ]
  9. First, I wasn't saying it should be used for everything. But it could be useful for some things. Your example of batch was ... well, having batch, period, is a problem in and of itself.
    You can rationalize it all you want; the latency is still there
    I wasn't. But holding ALL transactions up till EOD or EOM because you can only send things via FTP is better? Even with the latency, it might be overall faster than "hold and send a pile". And the data in the pile is NEVER bad. :0 The sooner I get some data, the sooner I can tell the sender that it is crap.
    Perhaps you can direct me to your favorite magic-wand retailer
    My wife has a magic wand. She tells me what to do and hits me with it and TA-DA I do it. I will ask her where she got it. We used to have someone here who had a crystal ball. :)
    I need one that makes everything 'the way it should be'. My problems have to do with this thing called 'reality'.
    I understand. Other people's stupidity affects our reality. Let's fix that.
  10. You can rationalize it all you want; the latency is still there

    I wasn't. But holding ALL transactions up till EOD or EOM because you can only send things via FTP is better? Even with the latency, it might be overall faster than "hold and send a pile". And the data in the pile is NEVER bad. :0 The sooner I get some data, the sooner I can tell the sender that it is crap.

    You don't have to convince me but if my answer to batching is a 100 millisecond latency per event, it's going to be hard to convince people that it's an improvement.
    I need one that makes everything 'the way it should be'. My problems have to do with this thing called 'reality'.
    I understand. Other people's stupidity affects our reality. Let's fix that.
    It doesn't make sense to rewrite a bunch of stuff that works just fine in order to make it possible to use an immature and unproven approach no matter how fabulously hypely-advertastic it is. Remember when EJBs were going to make us all shit golden eggs?
  11. You can rationalize it all you want; the latency is still there

    I wasn't. But holding ALL transactions up till EOD or EOM because you can only send things via FTP is better? Even with the latency, it might be overall faster than "hold and send a pile". And the data in the pile is NEVER bad. :0 The sooner I get some data, the sooner I can tell the sender that it is crap.



    You don't have to convince me but if my answer to batching is a 100 millisecond latency per event, it's going to be hard to convince people that it's an improvement.

    I need one that makes everything 'the way it should be'. My problems have to do with this thing called 'reality'.
    I understand. Other people's stupidity affects our reality. Let's fix that.


    It doesn't make sense to rewrite a bunch of stuff that works just fine in order to make it possible to use an immature and unproven approach no matter how fabulously hypely-advertastic it is. Remember when EJBs were going to make us all shit golden eggs?
    Valid points. The areas where I am thinking I could use this stuff are places where they won't understand what you just said anyway. :) People who, when I ask them to send me XML instead of comma-delimited files, send me 1,,1,1,1. LOL (this did happen)
  12. Let's suppose we have an e-commerce application that is using this to process online orders. The user clicks 'order' and then the application uses this or something like Amazon SQS to hold the order contents for another system to pick up and process. 100ms isn't so bad here, is it? Let's not paint the world a single color because latency is bad for batch. It's all about the use case. If latency was all that mattered, this whole web thing would never have taken off...
  13. Let's suppose we have an e-commerce application that is using this to process online orders. The user clicks 'order' and then the application uses this or something like Amazon SQS to hold the order contents for another system to pick up and process. 100ms isn't so bad here, is it? Let's not paint the world a single color because latency is bad for batch. It's all about the use case. If latency was all that mattered, this whole web thing would never have taken off...
    I'm not arguing that this can't be useful. What doesn't make sense is to consider this a complete replacement for local (as in LAN-based) queuing. The example I gave for batch processing is not the only one that has issues with latency; it's just easy to explain. Moreover, latency isn't the only issue. If you need guaranteed delivery, what happens if the cloud goes down? Or what happens if you are unable to get to the cloud for any reason along the way? Your entire enterprise will grind to a halt. Contrast that with something like MQSeries (a.k.a. WebSphere MQ) where you can have local queue managers that will hold onto a message until the destination can receive it. The cloud is undoubtedly going to change the world of computing but it's not going to make local computing obsolete. More realistically, I think we'll see a world where programs can run locally and on the cloud, and move around from local systems to different clouds and back seamlessly. If you are putting all your eggs in Amazon's basket and not considering contingencies and mitigating the risks of vendor lock-in, you are just repeating the mistakes of the past. Of course, no one seems to have any accountability in IT, so being ignorant of the past is often rewarded.
  14. ... people, come on, there's a huge disadvantage to the cloud: latency and until someone comes up with a way to eliminate it (i.e. send messages faster than the speed of light) that isn't going away.
    Come on indeed, latency is a problem if you need a single round-trip request-response in under x milliseconds, but in the vast, I mean VAST, majority of cases these are parallelized such that 1000 requests are sent in one second and 1000 responses are received 100ms later (respectively). Total throughput is the same as it would be with the same sized "pipe" sitting on the same subnet. Latency delays serialised, non-parallelizable or dependent messages, and those are very rare in real life. I'm currently using EC2+EBS+S3 to store and process several tens of terabytes of exchange data (i.e. stock exchange) in "pseudo" real time; we're only hundreds of milliseconds behind the co-located machines on the exchange and far more up to date and accurate than the classic financial feed services. We can then simulate, test or back-play data from most of the major exchanges from any period in the past few months. FIX 5 over AMQP is a serious option for re-distributing the data, so I will be taking a look at this. -John- Incept5
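    [Editor: A sketch of the parallelization John describes, assuming a JMS provider and a shared Connection; the broker URL and queue name are placeholders. Sessions are created per worker thread because JMS Sessions are single-threaded. Serially, 1000 sends at ~100ms each is ~100 seconds of waiting; overlapped across 50 sessions the same work completes in a few seconds.]

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class ParallelSend {
        public static void main(String[] args) throws Exception {
            // Placeholder broker URL; any JMS provider would behave similarly.
            ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker.example.com:61616");
            Connection connection = factory.createConnection();
            connection.start();

            int workers = 50, perWorker = 20;                 // 1000 messages in total
            ExecutorService pool = Executors.newFixedThreadPool(workers);
            for (int w = 0; w < workers; w++) {
                final int worker = w;
                pool.submit((Callable<Void>) () -> {
                    // JMS Sessions are single-threaded, so each worker creates its own.
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    try {
                        MessageProducer producer =
                            session.createProducer(session.createQueue("MARKET.DATA"));
                        for (int i = 0; i < perWorker; i++) {
                            producer.send(session.createTextMessage("tick-" + worker + "-" + i));
                        }
                    } finally {
                        session.close();
                    }
                    return null;
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            connection.close();
        }
    }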
  15. ... people, come on, there's a huge disadvantage to the cloud: latency and until someone comes up with a way to eliminate it (i.e. send messages faster than the speed of light) that isn't going away.


    Come on indeed, latency is a problem if you need a single round-trip request-response in under x milliseconds, but in the vast, I mean VAST, majority of cases these are parallelized such that 1000 requests are sent in one second and 1000 responses are received 100ms later (respectively). Total throughput is the same as it would be with the same sized "pipe" sitting on the same subnet.
    Of course you could rewrite millions of lines of COBOL to be multithreaded. The question is, should you do it just so you can be on the cloud bandwagon?
    Latency delays serialised, non-parallelizable or dependent messages, and those are very rare in real life.
    So the latency could be anything, say 10 seconds. An hour! It doesn't matter, right?
  16. Of course you could rewrite millions of lines of COBOL to be multithreaded. The question is, should you do it just so you can be on the cloud bandwagon?
    "Clouds" are here to stay and they're going to play a big part in the future of IT. If you're still in COBOL land then it doesn't look like you're into keeping up with technology, so why bother with clouds? Just wait for the next thing and slate that too.
    So the latency could be anything, say 10 seconds. An hour! It doesn't matter, right?
    The post (snail mail) takes days; they still manage to send millions a day though. -John-
  17. Of course you could rewrite millions of lines of COBOL to be multithreaded. The question is, should you do it just so you can be on the cloud bandwagon?


    "Clouds" are here to stay and they're going to play a big part in the future of IT. If you're still in COBOL land then it doesn't look like you're into keeping up with technology, so why bother with clouds? Just wait for the next thing and slate that too.
    I have no problem with clouds and I'm not in COBOL land; I'm building the bridge out of COBOL land. What I have a problem with is people trying to push a technology by glossing over the downsides. I think clouds will become part of the future of IT. I don't think they are the future of IT. If you'd actually read my posts in this thread, you'd already know that. In the real world (as in not the financial industry) companies have to work hard to earn money and they can't just rewrite everything because they see a shiny new toy.
    So the latency could be anything, say 10 seconds. An hour! It doesn't matter, right?


    The post (snail mail) takes days; they still manage to send millions a day though.

    -John-
    You're right! Why are we wasting our time with these silly computers?
  18. I think clouds will become part of the future of IT. I don't think they are the future of IT.
    I totally agree on that.
    In the real world (as in not the financial industry) companies have to work hard to earn money and they can't just rewrite everything because they see a shiny new toy.
    Again, agreed, but I'm sure you agree someone has to innovate and speculate; the financial services industry often does that, and the "real world" then benefits from the good bits and rarely has to suffer the mistakes. If cloud computing doesn't have a silver lining then you won't have to bother with it; we (the financial services industry) will take the risk.
    Why are we wasting our time with these silly computers?
    It saves having to find a pen, paper, envelope, stamp and postbox; it works for me :-) -John-
  19. Again, agreed, but I'm sure you agree someone has to innovate and speculate; the financial services industry often does that, and the "real world" then benefits from the good bits and rarely has to suffer the mistakes. If cloud computing doesn't have a silver lining then you won't have to bother with it; we (the financial services industry) will take the risk.
    First, sorry for the attack. I was having a really bad night. Anyway, when it comes down to it, we really aren't far off from each other. What I don't understand is why my pointing out that there's an inherent latency in cloud-based transactions is treated as some sort of partisan stance against the cloud in general. My company is suffering pretty significantly from getting on the SaaS bandwagon without considering all the implications. While the cloud isn't the same thing as SaaS, it's got a lot of the same problems. When we went with this vendor, everyone (especially the vendor) kept telling me that it was going to be the best thing ever. The reality is that it's one of the worst things we've ever done. I know everyone laughs at COBOL, and I do too sometimes, but the reality is that a lot of COBOL applications work well and the SaaS shit that we bought doesn't, at least at the level of quality we require. The world runs on COBOL. The cloud is a blip on the radar, an upstart. I think the cloud will be an important part of IT in the future; I know COBOL will be. Not because we want it to, but because it's not going away. All I am really trying to say is that you should consider how this added latency will affect your systems. You might decide to go with it, understanding that you may have to design things a little differently, e.g. add more parallel processing, and therefore do a lot more testing.
  20. Only read latency matters

    There are several flaws in this reasoning. Message size will determine latency. Currently it takes 25 milliseconds on average to place a 1K message and 45 milliseconds for a 100K message; this is from my laptop at home, via FiOS, to EC2. So let's take that 167 minutes and say it would be 80 minutes (100,000 messages * ~0.045 seconds = 4,500 seconds, roughly 75-80 minutes). What is 80 minutes? What do we compare it with? 80 minutes is the time it took to write all those messages to the queue. Assuming that reading messages will take slightly longer than writing them, because you will be performing some additional function to process each message, queue buffer space will be utilized more than 90% of the time. You will have a backlog, at least according to queuing theory. Here is a demo: http://www.dcs.ed.ac.uk/home/jeh/Simjava/queueing/mm1_q/mm1_q.html
    The real question is how much faster your application would be able to process the messages had they been sitting on a LAN queue. I am not sure what the latency on your LAN is. So three points:
    1. Message size matters.
    2. Write-related latency does not matter most of the time, because in most cases you will have a backlog anyway.
    3. The cloud-contributed delay is (get throughput from the LAN) minus (get throughput from EC2), assuming the consumer does not run in the cloud.
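    [Editor: The backlog claim follows from the standard M/M/1 queueing formulas rather than anything cloudMQ-specific. A rough sketch, assuming Poisson arrivals and exponential service times; the arrival and service rates are made-up illustrative numbers.]

    public class MM1Backlog {
        public static void main(String[] args) {
            // Illustrative numbers only: producers write ~22 msg/s, the consumer
            // drains ~24 msg/s (reads assumed slightly slower than writes).
            double lambda = 22.0;                 // arrival (write) rate, messages/second
            double mu = 24.0;                     // service (read + process) rate, messages/second

            double rho = lambda / mu;                          // utilization; here ~0.92, i.e. > 90%
            double avgQueueLength = (rho * rho) / (1 - rho);   // M/M/1 mean number waiting (~10)
            double avgWaitSeconds = rho / (mu - lambda);       // M/M/1 mean time spent in the queue

            System.out.printf("utilization      = %.2f%n", rho);
            System.out.printf("avg backlog size = %.1f messages%n", avgQueueLength);
            System.out.printf("avg queue wait   = %.2f s%n", avgWaitSeconds);
            // Push rho past 1 (reads slower than writes) and the backlog grows without bound.
        }
    }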
    Latency should be <200ms maybe. You won't see front office trading systems there but everything else should be fine and I'd imagine throughput for non-single-threaded applications should be fine also.


    Let's say that the latency is 100 milliseconds. We have triggers on tables that write to queues. Imagine a batch update with 100,000 records (pretty common).

    That's 100,000 * 0.1 seconds = 10,000 seconds = 167 minutes added to the batch. That's more than 2 and a half hours added to a single batch.

    Cloud computing has some very big advantages, but I think people are not thinking critically about some of this stuff. It makes sense to have a queuing infrastructure in the cloud for cloud-hosted services and for distributed delivery across the internet. But people, come on, there's a huge disadvantage to the cloud: latency, and until someone comes up with a way to eliminate it (i.e. send messages faster than the speed of light) that isn't going away.
  21. Re: Only read latency matters

    There are several flaws in this reasoning. Message size will determine latency. Currently it takes 25 milliseconds on average to place a 1K message and 45 milliseconds for a 100K message; this is from my laptop at home, via FiOS, to EC2. So let's take that 167 minutes and say it would be 80 minutes. What is 80 minutes? What do we compare it with?
    In a real-world situation, the entire job executes in well under 60 minutes now. And realize, this is just one job of many.
    80 minutes is the time it took to write all those messages to the queue. Assuming that reading messages will take slightly longer than writing them, because you will be performing some additional function to process each message, queue buffer space will be utilized more than 90% of the time. You will have a backlog, at least according to queuing theory. ...
    A backlog is fine. That's kind of the point of queueing. The processing of the events is not in the critical path. The key is to get the event onto the queue and back to processing as fast as possible, in the scenarios I have been involved in (which include B2B at one of the highest-volume wholesalers in the world).
    I am not sure what the latency on your LAN is.
    The latency in question is local to the machine and I've seen (while debugging issues) that it is often under a millisecond.
    So three points:
    1. Message size matters
    And how does that relate to the question at hand, other than "unlimited message size" -> unlimited potential latency?
    2. Write-related latency does not matter most of the time, because in most cases you will have a backlog anyway.
    That's just plain wrong. In asynchronous processing, read latency is barely consequential.
    3. The cloud-contributed delay is (get throughput from the LAN) minus (get throughput from EC2), assuming the consumer does not run in the cloud.
    As I already stated, this makes perfect sense if you are running in the cloud. I'm not talking about that. I'm talking about the notion that cloud-based MQ can be a wholesale replacement for local/LAN-based queuing.
  22. If you limit the definition of messaging strictly to inter-process, queue-based communications, your criticism is absolutely correct. When I think of MOM, I think of distributed messaging, different platforms and network protocols. I think about examples such as the Blue Exchange network, transportation management systems, hospitality integration, social network updates. I think of this: "Message-oriented middleware (MOM) is a client/server infrastructure that increases the interoperability, portability, and flexibility of an application by allowing the application to be distributed over multiple heterogeneous platforms. It reduces the complexity of developing applications that span multiple operating systems and network protocols by insulating the application developer from the details of the various operating system and network interfaces. APIs that extend across diverse platforms and networks are typically provided by the MOM." You limit the scope of messaging to this: http://en.wikipedia.org/wiki/Message_passing And in that case, sure, no distributed messaging will ever make sense, let alone cloud-based.
  23. You limit the scope of messaging to this.
    http://en.wikipedia.org/wiki/Message_passing

    And in that case, sure, no distributed messaging will ever make sense, let alone cloud-based.
    Based on this, I don't think you understand my point at all. As a practitioner, I'm quite familiar with MOM. When you send a message to a queue and require guaranteed delivery, you must wait for a response before continuing on. The queue cannot guarantee the delivery of a message it does not receive. This has to be handled locally at some level. Generally this means blocking. You can use some sort of internal queuing, but what happens if the app crashes? Ultimately you need to write the message to some sort of persistent storage or wait for a response from the queuing system.
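    [Editor: A sketch of the blocking point: with a persistent, transacted JMS send, the producer does not get past commit() until the broker has accepted and stored the message, so every enqueue pays at least one round trip to wherever the broker lives. Broker URL and queue name are placeholders; any JMS provider would behave similarly.]

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class GuaranteedSend {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker.example.com:61616"); // placeholder
            Connection connection = factory.createConnection();
            try {
                // Transacted session: nothing is really "sent" until commit().
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                Queue queue = session.createQueue("PAYMENTS");                   // placeholder
                MessageProducer producer = session.createProducer(queue);
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);               // survive broker restarts

                producer.send(session.createTextMessage("payment-77"));
                session.commit();   // blocks until the broker acknowledges the transaction;
                                    // over a WAN, this is where the latency is paid
            } finally {
                connection.close();
            }
        }
    }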
  24. Latency should be <200ms maybe. You won't see front office trading systems there but everything else should be fine and I'd imagine throughput for non-single-threaded applications should be fine also.

    Let's say that the latency is 100 milliseconds. We have triggers on tables that write to queues. Imagine a batch update with 100,000 records (pretty common).

    That's 100,000 * 0.1 seconds = 10,000 seconds = 167 minutes added to the batch. That's more than 2 and a half hours added to a single batch.
    What is this, application design 101's assignment "give an example of as crappy a design as possible"?
  25. Latency should be <200ms maybe. You won't see front office trading systems there but everything else should be fine and I'd imagine throughput for non-single-threaded applications should be fine also.

    Let's say that the latency is 100 milliseconds. We have triggers on tables that write to queues. Imagine a batch update with 100,000 records (pretty common).

    That's 100,000 * 0.1 seconds = 10,000 seconds = 167 minutes added to the batch. That's more than 2 and a half hours added to a single batch.


    What is this, application design 101's assignment "give an example of as crappy a design as possible"?
    It's an example of how the vast majority of business applications work today and how you can tie them into a MOM architecture without rewriting a line of code.
  26. cloudmq.jar content

    Unexpected cloudmq.jar content, don't you think?
    bash-3.2$ jar tvf cloudmq.jar
         0 Mon Jan 19 02:28:18 GMTST 2009 META-INF/
        60 Mon Jan 19 02:28:18 GMTST 2009 META-INF/MANIFEST.MF
         0 Fri Jan 16 03:41:30 GMTST 2009 lib/
     13143 Fri Jan 16 03:41:26 GMTST 2009 lib/CL3Export.jar
     33166 Fri Jan 16 03:41:26 GMTST 2009 lib/CL3Nonexport.jar
    454099 Fri Jan 16 03:41:26 GMTST 2009 lib/com.ibm.mq.jar
     19296 Fri Jan 16 03:41:26 GMTST 2009 lib/com.ibm.mq.jms.Nojndi.jar
    135390 Fri Jan 16 03:41:26 GMTST 2009 lib/com.ibm.mq.soap.jar
      2472 Fri Jan 16 03:41:26 GMTST 2009 lib/com.ibm.mqetclient.jar
   1294108 Fri Jan 16 03:41:28 GMTST 2009 lib/com.ibm.mqjms.jar
    335558 Fri Jan 16 03:41:28 GMTST 2009 lib/commonservices.jar
     17978 Fri Jan 16 03:41:28 GMTST 2009 lib/connector.jar
   1997409 Fri Jan 16 03:41:30 GMTST 2009 lib/dhbcore.jar
     22769 Fri Jan 16 03:41:30 GMTST 2009 lib/fscontext.jar
     25998 Fri Jan 16 03:41:30 GMTST 2009 lib/jms.jar
     98496 Fri Jan 16 03:41:30 GMTST 2009 lib/jndi.jar
      8809 Fri Jan 16 03:41:30 GMTST 2009 lib/jta.jar
    123717 Fri Jan 16 03:41:30 GMTST 2009 lib/ldap.jar
      8261 Fri Jan 16 03:41:30 GMTST 2009 lib/libmqjbdf02.so
     41051 Fri Jan 16 03:41:30 GMTST 2009 lib/libmqjbnd.so
     95208 Fri Jan 16 03:41:30 GMTST 2009 lib/libmqjbnd05.so
     29651 Fri Jan 16 03:41:30 GMTST 2009 lib/libmqjexitstub01.so
     12169 Fri Jan 16 03:41:30 GMTST 2009 lib/libMQXAi02.so
     27017 Fri Jan 16 03:41:30 GMTST 2009 lib/libPgmIpLayer.so
         0 Fri Jan 16 14:32:16 GMTST 2009 lib/OSGI/
   2272217 Fri Jan 16 14:32:16 GMTST 2009 lib/OSGI/com.ibm.mq.osgi.client_6.0.2.0.jar
    375536 Fri Jan 16 14:32:14 GMTST 2009 lib/OSGI/com.ibm.mq.osgi.directip_6.0.2.0.jar
    329943 Fri Jan 16 14:32:16 GMTST 2009 lib/OSGI/com.ibm.mq.osgi.prereq_6.0.2.0.jar
      3723 Fri Jan 16 14:32:16 GMTST 2009 lib/OSGI/com.ibm.mq.osgi.xa_6.0.2.0.jar
    445782 Fri Jan 16 03:41:32 GMTST 2009 lib/postcard.jar
     77116 Fri Jan 16 03:41:32 GMTST 2009 lib/providerutil.jar
    889896 Fri Jan 16 03:41:32 GMTST 2009 lib/rmm.jar
         0 Fri Jan 16 14:32:14 GMTST 2009 lib/soap/
     71442 Fri Jan 16 14:32:14 GMTST 2009 lib/soap/commons-discovery.jar
     31605 Fri Jan 16 14:32:14 GMTST 2009 lib/soap/commons-logging.jar
     35759 Fri Jan 16 14:32:14 GMTST 2009 lib/soap/jaxrpc.jar
     18501 Fri Jan 16 14:32:14 GMTST 2009 lib/soap/saaj.jar
     36202 Fri Jan 16 14:32:14 GMTST 2009 lib/soap/servlet.jar
    113853 Fri Jan 16 14:32:14 GMTST 2009 lib/soap/wsdl4j.jar
    Regards, Colin. http://hermesjms.com
  27. Re: cloudmq.jar content

    Strange that - their website User Guide clearly states: "4. cloudmq.jar is cloudMQ JMS implementation library. Download it here." ;-)
  28. Re: cloudmq.jar content

    Strange that - their website User Guide clearly states:

    "4. cloudmq.jar is cloudMQ JMS implementation library. Download it here."

    ;-)
    Not only that but they also say:
    we have created a state-of-the-art AMQP messaging backbone
    Last I knew, IBM were not on the AMQP bandwagon... Odd indeed... Colin.
  29. What about Business to Business exchanges? I've done a couple of B2B projects where Event Driven Architecture would have been appropriate, but we had to go with a kludgy Web Services mechanism... Imagine that various businesses exchange and react to real-time events on topics... Even government agencies like Motor Vehicle, etc... Opens up a lot of possibilities for value-add stuff like Complex Event Processing and real-time Business Intelligence... Thoughts?
  30. What about Business to Business exchanges? I've done a couple of B2B projects where Event Driven Architecture would have been appropriate, but we had to go with a kludgy Web Services mechanism...
    I was actually thinking that is exactly where this kind of thing might make sense, but I don't really understand why web services caused you problems. A web service hosted on the web is a web service. If that service is a queue, it's still a web service. I've built and maintained web services that merely wrote to a queue and returned a confirmation of receipt. Sometimes I think that the problems with web services have more to do with people's assumptions about them than any real issues. Web services receive messages and respond to them, often with trivial confirmations. How's that different from how a cloud-hosted queue would work?
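    [Editor: A sketch of the "web service that merely writes to a queue and returns a confirmation of receipt" pattern described above. It uses the JDK's built-in HttpServer and an in-memory BlockingQueue as a stand-in for the real MOM endpoint; the path, port, and receipt format are made up for illustration.]

    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;
    import java.util.UUID;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class ReceiptWebService {
        // Stand-in for the real queue (JMS, SQS, cloudMQ, ...).
        private static final BlockingQueue<String> ORDERS = new LinkedBlockingQueue<>();

        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/orders", exchange -> {
                byte[] body = exchange.getRequestBody().readAllBytes();
                ORDERS.offer(new String(body, StandardCharsets.UTF_8));   // enqueue for later processing

                // Respond immediately with a trivial confirmation of receipt.
                byte[] receipt = ("accepted:" + UUID.randomUUID()).getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(202, receipt.length);
                exchange.getResponseBody().write(receipt);
                exchange.close();
            });
            server.start();
            System.out.println("POST order payloads to http://localhost:8080/orders");
        }
    }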
  31. What about Business to Business exchanges? I've done a couple of B2B projects where Event Driven Architecture would have been appropriate, but we had to go with a kludgy Web Services mechanism...


    I was actually thinking that is exactly where this kind of thing might make sense, but I don't really understand why web services caused you problems. A web service hosted on the web is a web service. If that service is a queue, it's still a web service.

    I've built and maintained web services that merely wrote to a queue and returned a confirmation of receipt. Sometimes I think that the problems with web services have more to do with people's assumptions about them than any real issues. Web services receive messages and respond to them, often with trivial confirmations. How's that different from how a cloud-hosted queue would work?
  32. What about Business to Business exchanges? I've done a couple of B2B projects where Event Driven Architecture would have been appropriate, but we had to go with a kludgy Web Services mechanism...


    I was actually thinking that is exactly where this kind of thing might make sense, but I don't really understand why web services caused you problems. A web service hosted on the web is a web service. If that service is a queue, it's still a web service.

    I've built and maintained web services that merely wrote to a queue and returned a confirmation of receipt. Sometimes I think that the problems with web services have more to do with people's assumptions about them than any real issues. Web services receive messages and respond to them, often with trivial confirmations. How's that different from how a cloud-hosted queue would work?
    It was not Web Service technology that was the problem, but the request-reply paradigm, which necessitated polling for changes... In general, this type of pattern should be handled by a publish-subscribe architecture, which is usually done through message queuing engines such as cloudMQ... Pub-sub allows for a lot more extensibility in your architecture...
  33. It was not Web Service technology that was the problem, but the request-reply paradigm, which necessitated polling for changes...
    In general, this type of pattern should be handled by a publish-subscribe architecture, which is usually done through message queuing engines such as cloudMQ...
    Pub-sub allows for a lot more extensibility in your architecture...
    If I'm not mistaken, pub-sub is generally implemented using polling. When I worked on this kind of thing, our customers and vendors would have their own web services that would receive updates. No polling. Less sophisticated customers would use polling, though. I guess with pub-sub you don't have to think about the polling that happens. I would say that pub-sub is less well understood than web services and queueing, though. Again, this kind of thing could be a really great addition to a lot of architectures, but if you think this is going to magically make your job easy, you are sorely mistaken. The reality is that you are taking on a lot of risk by moving things to the cloud right now. We are squarely in the hype phase of cloud computing and SaaS. The backlash has barely started.
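    [Editor: For contrast with the polling discussion, a sketch of push-style pub-sub with a JMS topic: the subscriber registers a MessageListener and the provider delivers events as they are published, so the application code never polls (whatever the provider does under the covers). The topic name and broker URL are placeholders.]

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSException;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class VehicleEventSubscriber {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://broker.example.com:61616"); // placeholder
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            Topic topic = session.createTopic("DMV.REGISTRATION.EVENTS");        // placeholder topic
            MessageConsumer subscriber = session.createConsumer(topic);

            // Events are pushed to this callback; the subscriber never polls.
            subscriber.setMessageListener(message -> {
                try {
                    System.out.println("event: " + ((TextMessage) message).getText());
                } catch (JMSException e) {
                    e.printStackTrace();
                }
            });
            connection.start();   // deliveries begin; keep the JVM alive to receive them
            Thread.sleep(60_000);
            connection.close();
        }
    }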
  34. I also would be interested in the difference between SQS and cloudMQ... From what I read, Amazon's SQS does not support enterprise messaging features such as message grouping and sequencing, XA transactions and other facilities that JMS supports...
  35. I don't get it.

    Okay, so nobody bit on my posting about what's in cloudmq.jar but got all distracted by latency, or maybe didn't realise what Jose and I were alluding to. FWIW, yes, the cloud will incur latency as it's the internet and latency QoS does not apply, doh, no news there, and no, it's unlikely you'd outsource such a key bit of infrastructure for intra-company use. B2B however is different, and having a guaranteed messaging backbone to link your customers to you over the internet has obvious benefit for building on. As for security, well, an extra few milliseconds for decent encryption is not so hard.
    I've wandered around the website for a few minutes, downloaded the cloudmq.jar and found it full of WebSphere MQ and RMM client jars. Where is the AMQP there? The guide on connecting talks about a cloudMQ JMS provider that supports AMQP. No sign of it in the client jar. Maybe I should sign up and see. Are there web pages to do all the JMS management stuff? Browse queues, organise durable subscriptions? Manage JMS logins and so on? It would be nice to see a bit more on the web.
    FreedomOSS seem to peddle services over ActiveMQ (no AMQP there either), ServiceMix and a few other freely licensed open source projects. My experience tells me something smells a bit iffy on this one.
  36. Re: I don't get it.

    FWIW, yes, the cloud will incur latency as it's the internet and latency QoS does not apply, doh, no news there, and no, it's unlikely you'd outsource such a key bit of infrastructure for intra-company use.
    Why is it unlikely? Because it's foolish? If you believe that, you don't know enough people. And it's specifically one of the things that's recommended by the article, is it not? I think the PT Barnum quote goes: "there's a sucker born every minute."
    B2B however is different and having a guaranteed messaging backbone to link your customers to you over the internet has obvious benefit for building on. As for security well an extra few milliseconds for decent encryption is not so hard.
    Even for B2B, what happens if you temporarily can't reach the queues? Does it block? If not, how does it guarantee delivery? I'm not saying that these issues can't be solved, just that they can't be solved on the cloud.
  37. Re: I don't get it.

    FWIW, yes, the cloud will incur latency as it's the internet and latency QoS does not apply, doh, no news there, and no, it's unlikely you'd outsource such a key bit of infrastructure for intra-company use.


    Why is it unlikely? Because it's foolish? If you believe that, you don't know enough people. And it's specifically one of the things that's recommended by the article, is it not? I think the PT Barnum quote goes: "there's a sucker born every minute."

    B2B however is different and having a guaranteed messaging backbone to link your customers to you over the internet has obvious benefit for building on. As for security well an extra few milliseconds for decent encryption is not so hard.


    Even for B2B, what happens if you temporarily can't reach the queues? Does it block? If not, how does it guarantee delivery? I'm not saying that these issues can't be solved, just that they can't be solved on the cloud.
    I just said unlikely; for some reason you used the word foolish. My comments are based on my experience of quite a few years in the field. I have no axe to grind or product to sell; instead I am at the buyer end of middleware and have to make this stuff work for clients. The complexities of outsourcing key services are huge and require very well defined processes for management. Remember, I am talking about intra-organisation messaging here. B2B is where the interesting action will be in the next few years for higher-level network services, probably backed by a telco who can provide a service level above their private network.
    All the products I have used in production, and for sure AMQP, will handle a break between a client and the server just fine, assuming that is what you mean by not being able to reach a queue (all the major products detect failure in an atomically secure way, in sync with your messaging, with the right flags set). You get to deal with the issue in various ways that suit your design, what with sync points, JMS transactions, idempotent receivers and even XA should you feel the need. The fact that the service is on the cloud is not relevant; you have to restrict the patterns you use to deal with the longer latency, lower throughput and so on. There is no reason you cannot extend guaranteed messaging over the unpredictable cloud - delivery will still be as guaranteed as the software you engineer and deploy for the service.
    The so-called cloud is just what we've been doing for a long time in distributed systems, packaged up in an easier to consume and manage fashion. Latency will have a massive distribution tail over a public network and you'll get heaps of duplicates when things go wrong in the cloud, but functionally there is no reason why it won't work.
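    [Editor: A sketch of one of the patterns Colin mentions for surviving a break to a hosted broker: the send is transacted, failures are caught, and the message is parked in a local store and retried later. The local store here is just an in-memory queue for illustration; a real deployment would persist it, and the receiver should be idempotent since retries can produce duplicates.]

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.DeliveryMode;
    import javax.jms.JMSException;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class StoreAndForwardSender {
        // Stand-in for a persistent local store (file, local queue manager, database, ...).
        private final BlockingQueue<String> localBuffer = new LinkedBlockingQueue<>();
        private final ConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://broker.example.com:61616");     // placeholder

        public void send(String payload) {
            localBuffer.offer(payload);   // always park locally first
            drain();
        }

        /** Try to push everything buffered to the hosted broker; give up quietly on failure. */
        public void drain() {
            Connection connection = null;
            try {
                connection = factory.createConnection();
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageProducer producer = session.createProducer(session.createQueue("B2B.OUT"));
                producer.setDeliveryMode(DeliveryMode.PERSISTENT);
                String next;
                while ((next = localBuffer.peek()) != null) {
                    producer.send(session.createTextMessage(next));
                    session.commit();        // only now is the message safely with the broker
                    localBuffer.poll();      // ...so only now do we drop the local copy
                }
            } catch (JMSException unreachable) {
                // Cloud/broker not reachable: messages stay buffered; call drain() again later.
            } finally {
                if (connection != null) try { connection.close(); } catch (JMSException ignored) { }
            }
        }

        public static void main(String[] args) {
            StoreAndForwardSender sender = new StoreAndForwardSender();
            sender.send("invoice-1001");
            sender.send("invoice-1002");   // if the broker was down, these wait in localBuffer
            sender.drain();                // retry later, e.g. from a timer
        }
    }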
  38. cloudMQ

    Colin, I am one of the engineers on the cloudMQ project.
    Okay, so nobody bit on my posting about what's in cloudmq.jar but got all distracted by latency, or maybe didn't realise what Jose and I were alluding to. I've wandered around the website for a few minutes, downloaded the cloudmq.jar and found it full of WebSphere MQ and RMM client jars. Where is the AMQP there?
    AMQP and a REST-style interface will be available in the next version of cloudMQ. IBM JMS libraries are included because we're experimenting with JMS HTTP tunneling, as described here: http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/topic/com.ibm.mq.csqzaw.doc/jm34690_.htm
    Are there web pages to do all the JMS management stuff? Browse queues, organise durable subscriptions? Manage JMS logins and so on?
    cloudMQ does come with an easy-to-use Flex-based GUI that helps users provision various messaging resources.
    FreedomOSS seem to peddle services over ActiveMQ (no AMQP there either), ServiceMix and a few other freely licensed open source projects.

    My experience tells me something smells a bit iffy on this one.
    Freedom Open Source Solutions is a successful organization that employs several hundred people. Freedom has facilitated the adoption of open source in industries, such as the public sector and healthcare, that have traditionally relied exclusively on proprietary technology. Our strategy is to provide customers with compelling technology to build competitive advantage.
  39. Re: cloudMQ

    Hi Mikhail, are there any resources explaining how the backbone of your messaging network is engineered? It is far from transparent what is inside your cloud. I was surprised, from reading your website with its claims of an AMQP network, to see a WAS MQ JMS download in cloudmq.jar. Sorry, I did not mean to show any level of disrespect; rather, I found your website did not add up. There are no names of employees or your board or investors, or even a nice techie blog; Google only has press releases, and I don't see you sponsoring work in the open source world you freely use, and so with several hundred employees you definitely keep things close to your chest. Regards, Colin.
  40. Latency?

    Surely latency for Batch type message exchanges wouldn't be too much of a problem? It's possbile the message transport layer would utilise negative ack's and batch multiple logical messages in a single network transfer. Bandwith becomes more important than latency in this case. Is it possible to use negative acks on a WAN rather than a LAN using multicast/broadcast?