The leaky bucket algorithm is used in network traffic control. Can it also be used to optimize applications that use entity beans?

We hit a rare scenario where objects are not loaded properly, i.e. they are loaded only partially. When we try to locate the beans using primary keys, we get finder exceptions. This doesn't happen with all of them, only some. For example: a customer at a bank. The operator can do a basic search on the customer and pull out the details, but cannot modify them, because the address bean instance associated with this customer cannot be located. At the same time, some customers are fine: their details can be found and new details can be added.

What if this were something that happened deliberately in all our applications? We could create holes in the bucket, where the holes selectively filter out unnecessary instances and destroy them once the application realizes there is no use in keeping an instance around. This doesn't make a difference for a small customer base, but it certainly would when there are millions of customers and we are looking at optimizing.

Yes, of course we can archive the data which is no longer required, but archiving is not always the solution: with several million instances, some may not have been ready to be archived when the archiving process ran. What would this approach result in? When a customer tries to access an account that has not been used for a year, the customer gets a message: "We have your details in our system, but you will have to contact the bank to reactivate the account, as you have not accessed it for a year." The old addresses and other details would simply never be loaded.
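To make the "holes in the bucket" idea concrete, here is a minimal sketch of what such an eviction scheme might look like in plain Java. All names here are hypothetical, and this is just one interpretation: the "bucket" is an in-memory store of instances, and the "hole" leaks out the least-recently-accessed entries whenever the bucket grows past its capacity.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical "leaky bucket" style instance cache: entries leak out,
 *  least-recently-accessed first, once the bucket exceeds its capacity. */
class LeakyInstanceCache<K, V> {
    private final int capacity;
    // accessOrder=true: iteration order = least-recently-accessed first
    private final LinkedHashMap<K, V> bucket;

    LeakyInstanceCache(int capacity) {
        this.capacity = capacity;
        this.bucket = new LinkedHashMap<>(16, 0.75f, true);
    }

    V get(K key) { return bucket.get(key); }

    void put(K key, V value) {
        bucket.put(key, value);
        leak();
    }

    /** The "hole": drain least-recently-used entries above capacity. */
    private void leak() {
        Iterator<Map.Entry<K, V>> it = bucket.entrySet().iterator();
        while (bucket.size() > capacity && it.hasNext()) {
            it.next();
            it.remove();
        }
    }

    int size() { return bucket.size(); }
}
```

In a real system the eviction criterion would presumably be something like "not accessed for a year" rather than a size cap, but the shape of the mechanism is the same.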
Loading too much data is always bad: we need extra resources to carry it, and other parts of the system start slowing down. In the case of container-managed persistence, application servers (GlassFish/JBoss/WebSphere/WebLogic) probably have other mechanisms to optimize this. Of course, there is a danger of introducing bugs whenever such logic is added, and it would require extensive testing to verify its efficiency. Is this approach appropriate, or am I missing something?
I am not clear about the benefit of that 'leaky bucket' approach. Try 'lazy loading' in the case of massive, complex data structures.
I.e. you initially fetch only the customer detail, and only when you invoke getAddressDetails() do you load the address details from the DB, instead of trying to prefetch all the data at once. The least frequently accessed details can be lazy loaded, and you "remove" the unwanted stuff from memory once you are finished processing it... if that makes life easier!
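A hand-rolled version of that pattern might look like the sketch below (class and method names are illustrative, not from any real schema; the DB call is a stand-in):

```java
/** Sketch of hand-rolled lazy loading: the Customer holds only its id
 *  until getAddressDetails() is first called. */
class Customer {
    private final long id;
    private String addressDetails; // null until first requested
    private int dbHits = 0;        // counter for demonstration only

    Customer(long id) { this.id = id; }

    String getAddressDetails() {
        if (addressDetails == null) {
            addressDetails = loadAddressFromDb(id); // deferred fetch
        }
        return addressDetails;
    }

    /** Stand-in for a real DAO/ORM call. */
    private String loadAddressFromDb(long id) {
        dbHits++;
        return "address-of-" + id;
    }

    int getDbHits() { return dbHits; }
}
```

Constructing a Customer costs nothing on the address side; the DB is hit at most once, on first access, and repeated calls reuse the cached value.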
In your specific case, it is just a matter of fetching the relevant data from the table;)
or am I missing something ;)
Thanks. No you are not missing anything.
The whole thing came up when lazy loading was not happening as it should (using Hibernate). For some reason the addresses were not loaded completely, i.e. some addresses were found and some were not, although new addresses could be added successfully. Despite the addresses existing in the database, the application was throwing NullPointerExceptions for some of them. Just restarting the application server resolved the issue. (I think the response times were great while this was happening, though I don't have any scientific measurements to back that up.)
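For reference, the usual way to request this behavior in Hibernate/JPA is through the association mapping. A minimal illustrative fragment (entity and field names are assumptions, not from the actual application) would be:

```java
import javax.persistence.*;
import java.util.List;

/** Illustrative JPA mapping: with FetchType.LAZY, Hibernate defers
 *  loading the addresses collection until it is first accessed.
 *  (LAZY is already the default for @OneToMany; stated for clarity.) */
@Entity
public class Customer {
    @Id
    private Long id;

    @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
    private List<Address> addresses;
}
```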
Thanks for your reply and for highlighting that lazy loading is beneficial in the case of massive, complex data structures.