The scenario is different from a lot of requests occurring at the same time. That is:
As the customer base grows, the calculation on one computer takes more and more load. Say, if calculating for 1,000 customers takes 10 minutes, then calculating for 50,000 will take about 8 hours, which is not acceptable. Could you give me some direction on this? Can any cluster technology solve the problem? i.e., a cluster of computers where each takes on its own load dynamically according to some scheduling logic.
You can use load balancing for this problem.
You install a single main server and replicate another server as a mirror of it. On the server you can specify a maximum load; when the load exceeds that limit, requests are automatically redirected to the mirror server, which handles them. If the limit is exceeded, the load is automatically shared between the servers. How it is shared depends on the load-balancing algorithm implemented in the server. In this way you can handle thousands of requests in less time.
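The threshold-and-redirect idea above can be sketched in plain Java. This is a minimal illustration, not a real server product's API; the class, the "main"/"mirror" labels, and the load limit are all made up for the example:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: requests go to the main server until its configured
// maximum load is reached, then overflow is redirected to the mirror server.
public class ThresholdBalancer {
    private final int maxLoad;
    private final AtomicInteger mainLoad = new AtomicInteger();

    public ThresholdBalancer(int maxLoad) { this.maxLoad = maxLoad; }

    /** Returns which server should handle the next request. */
    public String route() {
        if (mainLoad.incrementAndGet() <= maxLoad) {
            return "main";
        }
        mainLoad.decrementAndGet();   // main is saturated; don't count this request
        return "mirror";
    }

    /** Called when the main server finishes a request. */
    public void release() { mainLoad.decrementAndGet(); }
}
```

A real setup would let the balancer's algorithm (round-robin, least-load, etc.) decide how the overflow is shared, as the post notes.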
But what if it is not thousands of requests, but a single request that results in a large amount of calculation? Is there some mature technology we should use?
Looks like you are looking for parallel computing. I don't think any server technology gives you this built in. This is an application requirement that has to be built into the application...
One architecture would be to split the process into independent modules.
Say, for example, you need to calculate the sale price of a car.
This can be broken into logically independent processes:
1. Calculate State dependent price
2. Calculate dealer basis price
3. Add accessories price
Now you can think of building three different stateless session beans deployed on three different machines, with a controller on the main server that pools this information from the three distributed components and returns the sale price...
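The controller idea can be sketched locally with threads standing in for the three remote session beans. The pricing functions, amounts, and state codes below are invented for illustration; in the real architecture each task would be a remote call to a deployed bean:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: three independent pricing steps run in parallel, and the
// controller sums their results, as the three-bean design describes.
public class PriceController {
    static int statePrice(String state)   { return "CA".equals(state) ? 800 : 500; }
    static int dealerBasisPrice()         { return 20000; }
    static int accessoriesPrice()         { return 1500; }

    public static int salePrice(String state) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(3);
        try {
            // Each submit stands in for a call to one distributed component.
            Future<Integer> f1 = pool.submit(() -> statePrice(state));
            Future<Integer> f2 = pool.submit(() -> dealerBasisPrice());
            Future<Integer> f3 = pool.submit(() -> accessoriesPrice());
            return f1.get() + f2.get() + f3.get();  // controller pools the three results
        } finally {
            pool.shutdown();
        }
    }
}
```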
I hope you got what I am trying to say....
If I haven't misunderstood, your suggestion is to deploy different components on different computers. That may be a good choice. But imagine one scenario:
At first we have only 2-3 clients; distributing components across 3 different computers is a waste, and the lookup process may cost more time than the calculation itself.
When the client count grows to 2,000-3,000 and we still use 3 computers, they may not be able to keep up. What do we do with 20,000-30,000 clients? So I think the scalability is poor.
Happy new year!
As I figured out, most of the time is not spent on server processing, but on the synchronous remote call made between the client and server for each customer (in your case 1,000 calls, or 50,000, whatever it may be).
With Message Driven Beans we can reduce the total processing time.
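The gain from going asynchronous can be sketched without a J2EE container: instead of one blocking remote call per customer, the client drops all the customer IDs onto a queue and several consumers (standing in for Message Driven Bean instances) drain it concurrently. The worker count and the no-op "calculation" are placeholders:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Plain-Java analogue of the message-driven approach: fire-and-forget sends
// to a queue, processed in parallel by MDB-like consumers.
public class AsyncBatch {
    public static int process(int customers, int workers) throws Exception {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
        for (int i = 0; i < customers; i++) queue.put(i);   // no per-customer wait

        AtomicInteger done = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        for (int w = 0; w < workers; w++) {
            pool.submit(() -> {
                Integer id;
                while ((id = queue.poll()) != null) {
                    done.incrementAndGet();   // stand-in for the per-customer calculation
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }
}
```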
Please go through the new pattern I published on this site recently, on asynchronous multiple-file loading, along with its test results, at
(or) visit this url
Thanks and regards
Most of the calculations done on the computer are repetitive. Say, for example, you have 10 clients and 10 types of cars, and 100 car combinations are possible across the 10 client sessions.
In this case it would be wise to do the calculations in "real time", that is, compute them as needed.
But if you have 1,000,000 customers and 10 cars, and there can still be only 100 car combinations, it would be pretty ugly to do it in real time.
It would be wiser to make a DB table containing the 100 combinations, available to all 100,000 client sessions at the same time, or even to 100,000,000 clients. "Real time" calculations of the same type are then avoided, since they are a waste of time.
So what if there were 100,000 combinations of cars? This would still be the best idea, because at any given time a customer ends up with just one result.
Saves plenty of time, huh?
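The precompute-and-look-up idea above can be sketched with an in-memory map standing in for the DB table. The class name, the combination keys, and the pricing function are all invented for the example; in practice the expensive calculation would be whatever your real pricing logic is:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: compute each car combination's price once, store it keyed by
// combination, and let every client session do a cheap lookup instead of
// repeating the same "real time" calculation.
public class PriceCache {
    private final Map<String, Integer> table = new HashMap<>();

    // Placeholder for the expensive pricing calculation.
    static int calculate(String combination) { return combination.hashCode() & 0xffff; }

    /** Precompute all known combinations up front (e.g. into a DB table). */
    public void warm(String... combinations) {
        for (String c : combinations) table.put(c, calculate(c));
    }

    /** Every client session reads the shared result instead of recalculating. */
    public int lookup(String combination) {
        return table.get(combination);
    }
}
```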