While going through the J2EE specs, one question kept coming to mind: why are we trying to avoid the power of our PCs?
There was once a star topology (Unix timesharing is a good example). Later, with PCs coming into existence, it changed to a ring topology, and with the WWW it went back to a star again.
But in the meantime one tends to forget that PCs also have a lot of processing power, and with the current app server architecture we reduce the PC to a dumb terminal.
I propose a different approach to the AppServer as an entity.
What I think we should go for is a middle-ground compromise. Can we make the AppServer responsible only for transaction management, entity bean management, and queues, and move our session beans, servlets, and JSPs into an app server component on the client machine?
Consider this: the new AppServer will be a distributed entity. Part of it will sit on a centralized machine (which will in turn have connectivity with the persistence layer), and the other part will be distributed across the clients that want to use the application.
The model will be something like this: every client will have a lightweight AppServer component that acts as a stub for the main AppServer. As we all know, users don't use every feature of an application but visit certain specific functionality regularly, so we can move those client-side components (session beans, servlets, and JSPs) to the client AppServer stub. Whenever a persistence layer call is required, a message will be posted over HTTP (or TCP/IP, T3, RMI, IIOP), handled by a JMS queue or topic, and acted on accordingly.
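To make the idea above concrete, here is a minimal sketch of a client-side stub: business logic and validation run locally, and only the final persistence request is packaged as a message for the central server. All class and field names here are invented for illustration; a real version would serialize the message and post it over HTTP (or T3/RMI/IIOP) into a server-side JMS queue rather than a Java interface.

```java
import java.io.Serializable;

// Envelope for a persistence request, the only thing that crosses the wire.
class PersistenceMessage implements Serializable {
    final String entity;     // e.g. "Customer"
    final String operation;  // e.g. "update"
    final String payload;    // serialized entity state

    PersistenceMessage(String entity, String operation, String payload) {
        this.entity = entity;
        this.operation = operation;
        this.payload = payload;
    }
}

// Stand-in for the transport; the real one would enqueue over the network.
interface PersistenceChannel {
    void post(PersistenceMessage msg);
}

class ClientStub {
    private final PersistenceChannel channel;

    ClientStub(PersistenceChannel channel) {
        this.channel = channel;
    }

    // Validation and session work happen locally on the client;
    // only the final persistence call is posted to the server.
    void saveCustomer(String id, String name) {
        if (name == null || name.length() == 0)
            throw new IllegalArgumentException("name required"); // local validation
        channel.post(new PersistenceMessage("Customer", "update", id + "=" + name));
    }
}
```

The point of the split is that a round trip happens once per persistence call, not once per validation or per page interaction.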
What we achieve is:
1. Reduced server-client communication.
2. Client-level and server-level validations done on a single machine, and hence improved performance.
3. Immediate response to client actions.
4. An effectively unlimited session pool, since session state is maintained within the client AppServer components.
5. No hassle maintaining session information, and no problems with distributed global variables.
6. Low demands on the real server (only persistence and some core functionality).
7. As a result, very high reliability and the capacity to handle almost n times more requests.
The client-side AppServer components will need the following features:
1. Downloading requested business logic and view components from the server (something like Java Web Start applications or applets).
2. Running a routine that tracks the least-used components and removes them as needed.
3. Running a separate thread that checks with the main server for component updates and fetches the latest versions when something changes.
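Feature 2 above is essentially a least-recently-used cache of downloaded components. A minimal sketch, assuming a fixed component limit and using a string as a placeholder for the component itself:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Evicts the least-recently-used component once the cache is full.
// The capacity limit and String "component" are placeholders.
class ComponentCache extends LinkedHashMap<String, String> {
    private final int maxComponents;

    ComponentCache(int maxComponents) {
        // accessOrder = true: iteration order runs from least- to
        // most-recently accessed entry.
        super(16, 0.75f, true);
        this.maxComponents = maxComponents;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        // Returning true drops the least-recently-used entry.
        return size() > maxComponents;
    }
}
```

For example, with a limit of 2, installing a third component evicts whichever of the first two was touched least recently.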
With local interfaces coming into existence, entity beans are in any case going to become part of a database server / app server amalgamation module.
Session beans with remote calls will need connectivity to entity beans through local interfaces; I guess J2EE is going to provide this in the next set of specs.
Please put in your inputs on this thought; somehow I have a feeling this might work.
>> The model will be something like; every client will have a lightweight AppServer component that will act as a stub to main AppServer.
Something that could be downloaded as a plug-in for a standard browser.
Instead of going into the gory details, I can just tell you that this strategy has been tested (applets), and the industry moved toward distributed computing with thin clients precisely because not all clients are configured the same way. Nobody is wasting the power of the PC; rather, we are giving even the least significant users the benefit of Internet connectivity. There are dumb, low-configuration machines that may be accessing your server, so it's kind of like designing for the lowest-configuration PC that can run your app.
Also, infrastructure-wise, a million powerful PCs would cost you more than a couple of server-class horses. And if you adopt the strategy you are suggesting, then you have to mandate a minimum configuration for the client PCs. You have to go to each client for the initial installation, and you have to support each client instead of a couple of servers (even though the client app server may be smart enough to download the latest files, etc.), because the clients will then be app-specific.
Bottom line: everyone knows the power of the PC, but the applications I write shouldn't be restricted to power users; they should be accessible to anyone who can just run a browser.
For example, if I have an application that just stores customer interaction data in a central repository, why would I give each of my customer support people a powerful PC? I'd rather use dumb terminals and put the money into clustering the server.
These comments are specific to J2EE browser-based, thin-client apps.
I don't know quite where to start with this, but I think I'll pose some questions about your achievement points. My main line of thinking is that appservers go to great lengths to provide you with load balancing, replication across clusters, and resiliency.
1) Low server-client communications: I disagree with this. When you begin actions on an entity, are you assuming you're the only one using that entity? For the duration of your method call, we may enter a transactional state on the server, potentially protecting against concurrent updates. If you want to offload that onto the client, you effectively have to "check out" your object and return it when you're done. Introducing transactional messaging makes this process harder to synchronize across multiple clients all interested in the same objects. Either that, or the state must be constantly communicated to the server and to everyone else who has registered interest in the object. I therefore think that your comms would have to increase to implement this properly. And what happens if the client dies? When does the server "unregister" your interest, and how do you deal with intermittent comms failures?
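One common way to handle the "check out / check in" problem described above is optimistic version checking: the server tags each entity with a version number, and a write-back from a client is rejected if another client checked in first. This is a hedged sketch, not anything from the J2EE specs; all names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Server-side store that detects stale write-backs via version numbers.
class VersionedStore {
    private final Map<String, String> data = new HashMap<>();
    private final Map<String, Integer> versions = new HashMap<>();

    // "Checkout": the client records the version it saw.
    synchronized int checkout(String key) {
        return versions.getOrDefault(key, 0);
    }

    // "Checkin": accept the write only if nobody updated it in between.
    synchronized boolean checkin(String key, int seenVersion, String newValue) {
        int current = versions.getOrDefault(key, 0);
        if (current != seenVersion) {
            return false; // stale copy: another client checked in first
        }
        data.put(key, newValue);
        versions.put(key, current + 1);
        return true;
    }
}
```

Note this only detects conflicts; the losing client still has to re-fetch and retry, which is exactly the extra communication the objection above is pointing at.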
2) I'm not sure what you mean here. Why are the validations done on one machine, and not on two, as I might expect?
3) This would be a good point. However, seeing as server response times can be fractions of a second, are you giving your users much more than could be obtained with some kind of rich client (applet, Flash app, Java app, whatever) and the server infrastructures we see now?
4) Multiply this number by the comms involved; this is rather dependent on your implementation. Remember that wherever you deploy this application, clients will have to download all of the components that allow them to run these mini client-side appservers. Consider that Web Start can take long enough to check the integrity of a client application, let alone all your appserver components.
5) I think it's far from it; see 1.
6) You have a point here: if this could be successfully implemented, then any shared resources would still have to be managed. Also, I have rarely worked on or designed a large enterprise app that didn't involve some kind of integration with another system or systems. If you want distributed integration, things get a lot more difficult, so you'll probably want the server to be the single touchpoint for that. When you say "core functionality", I think you need to have a bigger think about what you mean and what the implications are.
7) Capacity is a good point, although what do you mean about reliability? Because you can run less on the server, the server lives longer? You're not putting all your eggs into one basket? Could you expand?
At the end of the day, I think you're right: in a lot of environments the client machines are powerful. But there are many reasons the servers do what they do, and so much investment has already gone into them that it's not about to go away. Offloading some functionality from server to client is a good idea (one that has gone back and forth, as you've noted in your post, and will continue to do so). I'm not convinced by your approach, though. Not to shoot it down, but do some more thinking about it.
Hi Max and Vishwas,
I agree with you both on the following points:
1. One needs to support the whole range of users and all kinds of PC configurations.
2. We are putting our eggs into many baskets and hence increasing the complexity.
On the first point: there are hardly any low-end machines now. With the new Intel chipsets, and with every PC owner needing to run Microsoft Office or equivalent software, the generally accepted configuration is a Pentium with 256 MB RAM and a 40 GB HDD.
Regarding point 2, I agree that transaction management will be a problem, but remember that the transaction now comes into existence within a single network call, not the way we are used to.
Regarding downloading of components: yes, the initial download time will be a setback, but it is much like downloading an applet or Flash app and then running it.
I was not keen on putting in all the details in one go, but let me put it this way: the server will host the application just as it does now, but when a client starts accessing the pages and business modules, the download process will trigger and a copy will be made on the client machine. If the component is available locally (assuming one component per business process), the client side will be used; otherwise the request will be pushed to the server side.
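The fallback rule just described can be sketched in a few lines: check a local component registry first, and forward to the server proxy only on a miss. The interfaces and names are invented for illustration, assuming the background download installs components into the registry as they arrive.

```java
import java.util.HashMap;
import java.util.Map;

// Placeholder for a downloadable business component.
interface BusinessComponent {
    String handle(String request);
}

class ClientDispatcher {
    private final Map<String, BusinessComponent> localComponents = new HashMap<>();
    private final BusinessComponent serverProxy; // forwards over the wire

    ClientDispatcher(BusinessComponent serverProxy) {
        this.serverProxy = serverProxy;
    }

    // Called by the background download thread once a component arrives.
    void install(String name, BusinessComponent component) {
        localComponents.put(name, component);
    }

    String dispatch(String name, String request) {
        BusinessComponent local = localComponents.get(name);
        if (local != null) {
            return local.handle(request);   // run on the client
        }
        return serverProxy.handle(request); // fall back to the server
    }
}
```

Before "Billing" is downloaded, a request for it goes to the server; once installed, the same request is handled locally.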
What say ?