We have a case where we have to retrieve 70K to 100K (1 lakh) records from an Oracle 9 DB (say, records A). We fetch these via a call from our stateless session bean to a DAO, which queries the database. These records are then used to populate info objects (value objects), which are all put in an ArrayList and sent to the JSP for display purposes.
We need to filter the records we have retrieved according to some criteria. These criteria are quite complex and involve 14 huge queries. The result returned by the criteria is then compared against records A, and only the records common to both are returned to the user.
We are using OC4J as the app server and the Oracle CLEVELAND framework for the application. The problem is that we display the records with PAGINATION, e.g. in pages of 100 records. If we still end up with all 70K records even after applying the filter criteria (which is a possibility :(...), how do we paginate? We usually do DB-based pagination, where the query itself fetches ROWNUM 1 to 100. But in this case we don't know beforehand how many records there will be in total; we only know once the filter criteria have been applied. We thought of paginating by keeping all 70K records in the session, but that literally kills the session. Please suggest an alternative.
It will be of great help.
You can keep just the keys from the records in the session, and then retrieve the next 100 full records for the keys in the list. While this can still be quite large, it should be MUCH smaller than keeping the whole records. If possible, initially retrieve only the keys instead of the whole records.
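A minimal sketch of this "keys only in the session" idea (class and table names here are hypothetical): keep just the primary keys after filtering, slice out one page of keys, and build an IN-list query for only that page.

```java
import java.util.Collections;
import java.util.List;

// Sketch: session holds only the filtered keys; full rows are fetched per page.
public class KeyPager {

    /** Returns the keys for the requested page (0-based page index). */
    static List<Long> pageOfKeys(List<Long> allKeys, int page, int pageSize) {
        int from = page * pageSize;
        int to = Math.min(from + pageSize, allKeys.size());
        if (from >= to) {
            return Collections.emptyList();
        }
        return allKeys.subList(from, to);
    }

    /** Builds an IN-list query for one page of keys, one bind placeholder per key. */
    static String buildPageQuery(List<Long> pageKeys) {
        StringBuilder sb = new StringBuilder("SELECT * FROM records WHERE id IN (");
        for (int i = 0; i < pageKeys.size(); i++) {
            if (i > 0) sb.append(", ");
            sb.append('?');
        }
        return sb.append(')').toString();
    }
}
```

The DAO would then bind each key of the page into the prepared statement, so only 100 full rows cross the wire per request.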
Another, similar idea is possible if your model is set up to allow it. You can simply perform the query each time, but use the primary key to position where in the list you start, and use a row count to limit the number of records.
I.e., if you have 100 records with primary keys 1, 2, 3, 4, ..., you can simply add a "primary key > X" condition to the query and pass X in from the client.
Note that the implementation of this second idea is not very "pure", but you sometimes have to be pragmatic.
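The "primary key > X" approach above (often called keyset pagination) can be sketched as follows. The real Oracle query would look something like SELECT * FROM (SELECT * FROM records WHERE id > ? ORDER BY id) WHERE ROWNUM <= ?; here the same page-selection logic is simulated over an in-memory sorted key list so the mechanics are visible.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: fetch the next page as "everything after the last key the client saw".
public class KeysetPager {

    /** Returns up to pageSize keys strictly greater than lastSeenKey
     *  (sortedKeys must be in ascending order). */
    static List<Long> nextPage(List<Long> sortedKeys, long lastSeenKey, int pageSize) {
        List<Long> page = new ArrayList<>();
        for (Long key : sortedKeys) {
            if (key > lastSeenKey) {
                page.add(key);
                if (page.size() == pageSize) {
                    break;
                }
            }
        }
        return page;
    }
}
```

The client passes back the last key of the page it is viewing, and the next request starts right after it, so no state needs to live in the session at all.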
After getting the "final" desired result, keep it in a HashMap or Hashtable.
It can live in a bean.
Then, on each request, fetch however many records you want to show.
Remember that browsers have buffer size limits.
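A minimal sketch of this "cache the final result in a bean" suggestion (the class name is illustrative): a session-scoped bean holds the filtered list once and hands out one page at a time, along with the total page count needed for a "Page 1 of X" display.

```java
import java.util.List;

// Sketch: session-scoped holder for the already-filtered result list.
public class ResultCacheBean {
    private final List<Object> filtered;   // the "final" filtered records
    private final int pageSize;

    ResultCacheBean(List<Object> filtered, int pageSize) {
        this.filtered = filtered;
        this.pageSize = pageSize;
    }

    /** Total number of pages, for the "Page 1 of X" header (ceiling division). */
    int totalPages() {
        return (filtered.size() + pageSize - 1) / pageSize;
    }

    /** One page of the cached list (0-based page index). */
    List<Object> page(int page) {
        int from = page * pageSize;
        int to = Math.min(from + pageSize, filtered.size());
        return filtered.subList(Math.min(from, to), to);
    }
}
```

Note this still keeps every filtered record in memory, which is exactly the memory pressure the original poster was worried about; it is only reasonable for result sets far smaller than 70K.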
Scott, I appreciate your response!
But the thing is that we need to display the complete record at the front end, not just the PKs. So, following your suggestion, if I keep the PKs in the session (after all the filtering is done), I would have to take those PKs back to the DB to fetch the complete record for each key. That is one more hit on the DB, and I am not sure about the performance implications.
As for using ROWNUM, the problem is that I have no way of knowing the total number of records that will be shown to the user, so I cannot calculate "Page 1 of X". I get some records from my main query, then I independently run some 7 huge queries to get another set of records (of the same type as the main query returns). The final output must contain only the records that appear in both lists. So the DB pagination approach will not be useful in this case.
Looking forward to your valuable input!
Thanks & Regards
I would question a design that requires those 70K results.
Review the requirements. I would be VERY surprised if you really need to retrieve all 70K rows.
I am facing a similar issue... I am retrieving around 35K records from an Oracle DB, and due to past coding and design decisions, the code tries to display them all in a single JSP. The catch in implementing pagination is this...
1. The user can select any displayed rows at random (via checkboxes) and delete them.
2. The user may add new rows to the displayed table.
How do I capture the deleted rows and newly added rows information during pagination?
Any help would be really good, Rohit.
Thanks in advance
I too face the same issue,
but my result set returns far fewer records.
I don't agree with the above suggestion of storing the results in a Hashtable, which is a big performance hit.
But it also cannot be done by just hitting the DB again for each page display.
My suggestion is to review the design so that there is no need to show 70K records.
Otherwise, don't stick with the Hashtable; it may hurt your performance drastically.
Let me know when you find a better solution.
Thanks and regards,
Check this article.
Fetch only the top N through M results that need to be displayed, based on the user's request.
You really never need to pull all 70K (or however many) records from the database into application code.
You can pretty much limit the result set size at the database level.
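For reference, the usual Oracle way to fetch just rows N through M is the three-level ROWNUM wrapper: the inner query is ordered, ROWNUM caps the upper bound, and the outer filter trims the lower bound, so only one page ever leaves the database. A sketch that wraps an arbitrary inner query (the column aliases are the conventional ones, not from this thread):

```java
// Sketch: build the classic Oracle "top N through M" pagination query.
public class TopNthroughM {

    /** Wraps innerQuery so that only rows N..M (bound as ? parameters) are returned.
     *  Bind the upper bound M first, then the lower bound N. */
    static String paginate(String innerQuery) {
        return "SELECT * FROM ("
             + "SELECT a.*, ROWNUM rnum FROM (" + innerQuery + ") a "
             + "WHERE ROWNUM <= ?"        // upper bound M
             + ") WHERE rnum >= ?";       // lower bound N
    }
}
```

The inner query must have its own ORDER BY for the row numbering to be stable from one page request to the next.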