Performance and scalability: Architectural advice needed on indexing binary files.

  1. I have a "photo album" type of application where users log on to our website and can organise their photos.

    On the server side, we handle these images by storing them inside an Oracle database as BLOB fields in a table. When an image is needed, a servlet is called with the image name as its parameter. The servlet goes into the database, grabs the BLOB associated with the image name, passes it on to Java classes that manipulate the image, and the result is finally returned to the browser as a GIF.
    This obviously isn't an efficient way of doing things. The only advantage of this method is that our database is load-balanced (across three boxes), which is important for us.

    I was thinking of improving this by having an organised directory structure where the image files are kept. The database can store relationships between users and the image paths (rather than users and the actual images themselves). The only problem is redundancy (would the image files need to be uploaded to all three boxes?).

    I know a little bit about JNDI. Would that help in what I'm doing?

    Any suggestions are welcome. Also any documentation or resources I can look at would be greatly appreciated.
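
    The "paths in the database" design above could be sketched as follows. This is only an illustrative sketch, not code from the poster's application: the table would store (userId, imageName, relativePath), and the servlet would resolve the stored relative path against a base directory visible to every app server. All names here are made up.

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Resolves relative paths stored in the database against a shared
// base directory, rejecting anything that would escape it.
public class ImagePathResolver {
    private final Path baseDir;

    public ImagePathResolver(String baseDir) {
        this.baseDir = Paths.get(baseDir).toAbsolutePath().normalize();
    }

    // Turn a path stored in the DB into an absolute file path.
    public Path resolve(String storedRelativePath) {
        Path resolved = baseDir.resolve(storedRelativePath).normalize();
        if (!resolved.startsWith(baseDir)) {
            throw new IllegalArgumentException(
                "path escapes base dir: " + storedRelativePath);
        }
        return resolved;
    }
}
```

    The traversal check matters because the path now comes from a database row rather than being generated by the DB engine itself.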

  2. Hello,

    A file system is, as you wrote, a better approach for storing binary files; it is the common way used by content management systems. Then it isn't the database itself that has to be load balanced, but the application servers which host the download/upload servlet. Each application server can run on a different machine. After a servlet has obtained all the metadata for an image, it can access the binary file stored on a RAID server, which automatically manages redundancy and high availability. I don't think it is useful or efficient to use JNDI instead of your database to get the metadata.


  3. What if the 3 app servers have access to a file-sharing cluster? I mean, it feels like a good old Gnutella layout! Or BitTorrent if you want...

    Basically, you don't seem to want all servers to host the same image file, which means you are willing to let go of full redundancy. Unless you want at least 2 servers to have the image file, but not all of them. At that point, all your servlets care about is getting the file from whoever's got it. The storage strategy of your cluster of file servers is a topic by itself.

    If all servers have the same file, just like your current DBs, well, you probably already know how to load balance simple httpd servers.
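
    One simple way to get "at least 2 servers have the file, but not all of them" without any lookup table is rendezvous (highest-random-weight) hashing: every servlet can compute, from the file name alone, which k servers are supposed to hold it. This is a sketch of that general technique, not something from the thread; server names and the hash function are illustrative.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Rendezvous hashing: score every server by hash(server + file) and
// keep the k highest. Any servlet repeating this computation agrees
// on the same k owners for a given file.
public class Rendezvous {
    public static List<String> owners(String fileName, List<String> servers, int k) {
        List<String> sorted = new ArrayList<>(servers);
        sorted.sort(Comparator.comparingInt((String s) -> score(s, fileName)).reversed());
        return sorted.subList(0, Math.min(k, sorted.size()));
    }

    // Cheap deterministic score; a production version would use a
    // better-distributed hash than String.hashCode().
    private static int score(String server, String fileName) {
        int h = (server + "/" + fileName).hashCode();
        return h ^ (h >>> 16);  // spread the low bits a little
    }
}
```

    A nice property is that adding or removing one server only moves the files that server owned, not the whole mapping.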
  4. But what is wrong with the blobs?

    If you have a table mapping imageId to image BLOB, one image per BLOB,
    you just need a robust record-streaming method (small memory usage) for pulling the image BLOB, nothing more. And my guess is the images aren't that big anyway...

    Thinking about it, you will always perform the same basic operation: access disk, stream over wire from image server to app server, send to client.

    The DB does exactly the same. As long as there is no stupid overhead (hence my comment on BLOB I/O), I don't see how this is inefficient.

    Plus, you get the benefit of your current DB replication immediately. No changes.
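
    The "robust record streaming" mentioned above boils down to a fixed-size buffer copy from the BLOB's input stream (e.g. what JDBC's Blob.getBinaryStream() returns) to the servlet response's output stream, so heap use stays constant no matter how big the image is. A minimal sketch, with the stream endpoints left abstract:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copy a stream in fixed-size chunks; memory use is bounded by the
// buffer size regardless of how large the BLOB is.
public class BlobStreamer {
    public static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }
}
```

    The anti-pattern this avoids is reading the whole BLOB into a byte array before writing it out, which is where "stupid overhead" usually creeps in.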
  5. Thanks for the suggestions

    The idea by "Fred JEANNE" about setting up a RAID is interesting. I wonder why I didn't think of that.

    Quartz Quartz, I'm not sure how a BitTorrent network works, so I should probably read up on it. The reason I don't think the blob method is feasible is that the number of images we will soon be storing could get anywhere up to 100,000+. Wouldn't this place extra strain on the database, which we already use for other things? Besides, I have personally never seen this blob method done before, so I have nothing to compare it with.

    Thanks again for the suggestions.

  6. 100k records in db

    Forget it. Not for production-level services, because it's not predictable enough.

    >amount of images [..] upto 100,000+.
    >Wouldn't this place extra strain on the database,
    >which we already use for other things?

    So? With an index on the image primary key (the name, for you, I guess), it is no worse than asking an OS to find a file by name in a folder. Can you imagine the load on a folder that holds 100,000 entries? You would probably partition into folders, for example using the last 10 bits of a (well-known, well-balanced) hash code of the filename (to get folders 0 to 1023). The same idea could just as well be applied to table names, giving you more tables with fewer entries in each.

    But again, DB servers are designed for that. It is not the BLOB size that counts; it is the number of entries and the disk space the DB server needs to hold the data. I have seen tables with 7 million records working fine. And you should prepare yourself for testing size limits, because if your images average 100 KB, you are about to store at least a raw 10 GB, plus structural overhead.
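
    The "last 10 bits of a hash" partitioning described above is a one-liner, and the same bucket number can name either a directory or a table suffix. A sketch, using String.hashCode() purely for illustration (the poster's caveat was to use a well-balanced hash):

```java
public class Bucketing {
    // Map a file name to one of 1024 buckets using the low 10 bits
    // of its hash; 0x3FF is 1023, i.e. ten one-bits.
    public static int bucket(String fileName) {
        return fileName.hashCode() & 0x3FF;
    }

    // Illustrative layout: a directory such as images/0417/beach.gif,
    // or equally a table name such as IMAGES_0417.
    public static String directoryFor(String fileName) {
        return String.format("images/%04d/%s", bucket(fileName), fileName);
    }
}
```

    Masking with 0x3FF keeps the result in 0..1023 even when hashCode() is negative.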
  7. 100k records in db

    > Can you imagine the load on a folder that holds 100,000 entries? You would
    > probably partition into folders, for example using the last 10 bits of a
    > (well-known, well-balanced) hash code of the filename (to get folders 0 to 1023).

    Yeah, this is true. We had a lot of problems in a previous project because they were trying to save hundreds of thousands of entries in the *SAME* directory, and of course, that doesn't perform very well... ;)

    They solved the problem using different directories and subdirectories. The final approach was very simple: they created a single directory for each year, and inside this folder, sub-folders for the month, inside the month the day of the month, and so on...

    Apart from this structure, they had some tables in Oracle with just the location of the file, in a folder/subfolder/file fashion. So, Oracle held only the names of the files, not the files themselves.

    Of course, you should consider using a real document management product, such as Stellent. It works great, and of course it is much more robust than a file-system approach.

    Jose R. Huerga
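
    The year/month/day layout described above is easy to reproduce; the Oracle row then stores only the resulting relative path. A sketch using java.time (the path shape is illustrative, not taken from that project):

```java
import java.time.LocalDate;

public class DateLayout {
    // Build the year/month/day relative path an upload on the given
    // date would be stored under, e.g. "2004/07/15/beach.gif".
    public static String relativePath(LocalDate uploadDate, String fileName) {
        return String.format("%04d/%02d/%02d/%s",
                uploadDate.getYear(),
                uploadDate.getMonthValue(),
                uploadDate.getDayOfMonth(),
                fileName);
    }
}
```

    Date-based layouts keep directories small only if uploads are spread over time; a hash-based scheme balances better when they are not.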
  8. Is there any specific reason (other than cost, obviously) why you haven't looked at Oracle Internet Directory? These are precisely the situations it is meant for. As for redundancy, as with databases you could have it hot-replicated onto a different host, or have a redundant RAID setup.

    For a cost-effective solution, you can set up a cluster of Linux boxes using LVS (and other services such as fake, mon and Coda). In such a setup, you can have two kinds of redundancy: data redundancy, where you replicate the data on more than one host, or the data stored on a RAID array and served via different endpoints.
  9. Have you looked at Jini at all? You could set up a cluster/array of servers and write a service to expose the file system, then use Jini's query-like API to find the file on the network. Jini will take care of durability and searching for you. As more storage is needed, just throw more servers onto the network that expose the same service.
  10. Hi Gautam,
    Just to echo Jacob's idea of using Jini for your project. I think you will find Jini a perfect solution for your situation. Jini offers a simple and very powerful mechanism for storing data of any kind in a distributed manner. As a GigaSpaces employee I am obviously partial, but I suggest that you look at mature implementations with added services like GigaSpaces: you'll find advanced services such as monitoring, clustering and provisioning on top of Jini, and in-memory levels of performance. We also have JNDI, JMS and JDBC support, so if you already have JDBC code you're working with, you can simply (in most cases) plug in GigaSpaces and try it out.

    Check us out at:

    Good luck with your project.

    Gad Barnea
    GigaSpaces Technologies