Dynamic translation of XML into CSV using XSD information



  1. Hi,

I have a web application that needs to accept an XSD and a corresponding XML file as input via form parameters. I need to parse the XSD, identify the simple and complex types along with other necessary information, and represent this in some sort of structural format (preferably a tree hierarchy). This structural information will then be used to help parse the associated XML file, record by record, into a CSV file. The XSD will use only a limited set of XSD constructs (namespaces and imports can be ignored for the time being). What is the best way to go forward with this? I have considered these options.

XSD-Java binding tools (XMLBeans, JAXB). I can get a type hierarchy using these tools:

countries
  country
    id
    name
    states
      state
        id
        name

This would be the ideal scenario: I could create a new instance hierarchy based on the above type hierarchy (acting as an intermediate in-memory representation), populate the simple types (or attributes) with their respective values as I parse through the XML, and write it out to the file. However, due to the high number of class files that would be generated on the system, this option cannot be considered. (For each XSD uploaded, the system would have to generate a set of class files, and this is not acceptable.)

XSOM. Using XSOM, I can create a simple tree structure with two user-defined types, 'ComplexType' and 'SimpleType':

ComplexType
  Name
  Set of simple types
  Set of complex types

SimpleType
  Name
  Value

The problem here is that I am not sure how to go about parsing the XML and building an intermediate representation that could be committed to the CSV file. DOM is not an option at all due to memory constraints.
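The two user-defined node types described above might be sketched as plain Java classes. (The names ComplexTypeNode/SimpleTypeNode and the collectColumns helper are illustrative, not part of XSOM; a real version would populate this tree while walking XSOM's schema model.)

```java
import java.util.ArrayList;
import java.util.List;

// Leaf node: a name plus, eventually, a value captured from the XML.
class SimpleTypeNode {
    final String name;
    String value = "";
    SimpleTypeNode(String name) { this.name = name; }
}

// Inner node: a named set of simple types and nested complex types.
class ComplexTypeNode {
    final String name;
    final List<SimpleTypeNode> simpleTypes = new ArrayList<>();
    final List<ComplexTypeNode> complexTypes = new ArrayList<>();
    ComplexTypeNode(String name) { this.name = name; }

    // Walk the tree depth-first and emit CSV column names such as
    // "countries_country_id" (path segments joined with '_').
    void collectColumns(String prefix, List<String> out) {
        String path = prefix.isEmpty() ? name : prefix + "_" + name;
        for (SimpleTypeNode s : simpleTypes) out.add(path + "_" + s.name);
        for (ComplexTypeNode c : complexTypes) c.collectColumns(path, out);
    }
}
```

Flattening the countries/country/states/state tree with collectColumns produces exactly the CSV header used in the example below.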
Example XML:

<countries>
  <country>
    <id>1</id>
    <name>Country1</name>
    <states>
      <state>
        <id>1</id>
        <name>State1</name>
      </state>
      <state>
        <id>2</id>
        <name>State2</name>
      </state>
    </states>
  </country>
  <country>
    <id>2</id>
    <name>Country2</name>
    <states>
      <state>
        <id>3</id>
        <name>State3</name>
      </state>
    </states>
  </country>
</countries>

Expected CSV:

countries_country_id,countries_country_name,countries_country_states_state_id,countries_country_states_state_name
1,Country1,1,State1
1,Country1,2,State2
2,Country2,3,State3

What I have in mind for the intermediate structure representing a single record (line) of the CSV file is an array of key-value pairs. I could parse through the XSD, identify all the simple types (along with their positions relative to the root), and initialise the array keys with the simple type names, e.g.:

[(countries_country_id=''), (countries_country_name=''), (countries_country_states_state_id=''), (countries_country_states_state_name='')]

Further on, I could parse the XML (using SAX/StAX) and populate the above array one simple type value at a time. I could make use of a stack to maintain state information by pushing the element names. Once I encounter a type that is already present in the stack, that identifies the end of a line (record) in the CSV file. The contents of the array would then be written to the CSV file and reset (sanity check), and the parsing process would continue.

I was also thinking of replacing the array of key-value pairs with a type that extends HashMap, but I do not really need the flexibility a HashMap offers. Also, the ordering of elements might become an issue, and I would have to move over to a TreeMap (performance hit?). I feel that in the case of a deeply nested XML, hashing would provide a considerable performance improvement. So should I implement my own custom hashtable-backed fixed-size array?

Awaiting your feedback and comments. I have been burning my head for the past few days trying to figure out the most efficient and scalable way of doing this. If you could suggest any better alternative approaches, it would be really helpful.

Many Thanks,
.J.
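For illustration, a minimal StAX variant of the stack-plus-fixed-row idea might look like the following. The class and method names are made up; it assumes the columns are already known (e.g. from the XSD), and that there are no attributes or mixed content. Rather than checking the stack for repeats, this sketch flushes the current record whenever a starting element would overwrite columns already filled under its path, which amounts to the same "end of record" detection:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class XmlToCsvRows {
    // columns: CSV column names derived from the XSD, in document order.
    static List<String> rows(String xml, List<String> columns) throws Exception {
        LinkedHashMap<String, String> row = new LinkedHashMap<>();
        for (String c : columns) row.put(c, "");
        List<String> out = new ArrayList<>();
        Deque<String> stack = new ArrayDeque<>();
        StringBuilder text = new StringBuilder();
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (r.hasNext()) {
            switch (r.next()) {
                case XMLStreamConstants.START_ELEMENT:
                    stack.addLast(r.getLocalName());
                    String prefix = String.join("_", stack) + "_";
                    // A repeating element is starting over columns that are
                    // already filled: flush the current record, then clear
                    // only the columns under this element's path.
                    boolean repeat = false;
                    for (Map.Entry<String, String> e : row.entrySet())
                        if (e.getKey().startsWith(prefix) && !e.getValue().isEmpty())
                            repeat = true;
                    if (repeat) {
                        out.add(String.join(",", row.values()));
                        for (Map.Entry<String, String> e : row.entrySet())
                            if (e.getKey().startsWith(prefix)) e.setValue("");
                    }
                    text.setLength(0);
                    break;
                case XMLStreamConstants.CHARACTERS:
                    text.append(r.getText());
                    break;
                case XMLStreamConstants.END_ELEMENT:
                    String path = String.join("_", stack);
                    if (row.containsKey(path)) row.put(path, text.toString().trim());
                    stack.removeLast();
                    text.setLength(0);
                    break;
            }
        }
        out.add(String.join(",", row.values())); // flush the final record
        return out;
    }
}
```

Because outer columns are only cleared when their own element repeats, the country id/name are carried across that country's multiple state rows. A production version would still need CSV escaping, attribute handling, and streaming output instead of an in-memory list.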
  2. You are certainly right that binding tools (JAXB, etc.) aren't right for this kind of problem. You need to be working with an API that handles arbitrary XML. From what you have written, you could choose to ignore the Schema; all of the information that you need will be in the XML document itself.

While I understand that you want "the most efficient and scalable way of doing this", most real development problems have to find a balance between computational efficiency, scalability, and the time and money required to code/debug/optimise the solution. For that reason, I don't recommend that you try to make the solution any faster or more scalable than it really needs to be (rarely is the XML processing the slowest point in any complete business process, at least in my experience), not if you can save some development/debugging time/cost by being less ambitious.

From a programming point of view, a "DOM-like" API is probably the easiest for this. If the DOM uses too much memory for your application (and you said that it does), then look at XOM or VTD-XML. If these also use too much memory for your application, then you are correct that your only remaining choice (but not such a bad one for this problem) is to use SAX or StAX and maintain your own stack. I've certainly written this kind of code before (it's similar to what Excel does when it imports an XML document), and it's not too difficult.

If you use SAX or StAX (and if you don't use the Schema), you will have to parse each document twice: once to get the structure so that you can generate the columns and the mapping from the XML paths to the columns, and a second time to generate the data rows. If the files really are so big that parsing them twice is a problem, then the alternative is to process the Schema first to get the structure. I often use XSLT for that kind of thing, but you could use XOM if you prefer to write Java, or you could try the Eclipse API for Schemas (the "org.eclipse.xsd" packages).
Once you have the structure information, you can then use SAX or StAX to process the XML file just once and generate the data rows.

Cheers, Tony.

--
Anthony B. Coates
Author, "XML APIs" chapter, "Advanced XML Applications from the Experts at The XML Guild"
http://www.amazon.com/XML-Power-Comprehensive-Guide-Guides/dp/1598632140/
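For what it's worth, the structure-discovery pass described above can also be done with StAX alone, by recording (in document order) each distinct element path at which character data appears. This is only a sketch under that assumption; the class name is made up, and a schema-driven pass would of course use the XSD instead:

```java
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.StringReader;
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;

class ColumnDiscovery {
    // First pass: stream the document once and collect the CSV columns,
    // i.e. every distinct path that carries non-whitespace text.
    static List<String> leafPaths(String xml) throws Exception {
        LinkedHashSet<String> cols = new LinkedHashSet<>(); // dedupes, keeps order
        Deque<String> stack = new ArrayDeque<>();
        XMLStreamReader r = XMLInputFactory.newInstance()
                .createXMLStreamReader(new StringReader(xml));
        while (r.hasNext()) {
            int ev = r.next();
            if (ev == XMLStreamConstants.START_ELEMENT) {
                stack.addLast(r.getLocalName());
            } else if (ev == XMLStreamConstants.CHARACTERS && !r.isWhiteSpace()) {
                cols.add(String.join("_", stack)); // leaf with text content
            } else if (ev == XMLStreamConstants.END_ELEMENT) {
                stack.removeLast();
            }
        }
        return new ArrayList<>(cols);
    }
}
```

The LinkedHashSet keeps the first-seen order of paths while collapsing the repeats from later records, which is what the CSV header needs.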
  3. Thank you for your feedback, Tony. In some cases, the XML files are so huge (100 MB+) that a DOM-like API is out of the question. The same reason rules out parsing the XML twice, when in fact the users of the product are interested in providing both the XSD and the XML. I'll have a look at the Eclipse API for schemas. Do you have any idea how it compares with XSOM? I have not found much documentation for XSOM other than the Javadoc.