I’ve had this idea for schemaless data collection for a while now. It seems like everyone is trying to ETL data in. Inevitably, people start writing mapper programs and documentation, trying to build a massive Rosetta stone. “They call person_name name? We’ll call everything name. Let’s write this all down. person_name = name, so say we all.” What happens is a lot of investigation and work to determine their ENTIRE schema, just to make a copy of it. In the case of XML parsing, sometimes I actually do need to know what the entire source looks like just so I can loop through it. What a pain. Not to mention, if the source changes formats, I have to redo all that work to re-understand their schema and change my mappers to match. Maybe I even have to do a mass migration on my end to bring everything up to date. Schemaless data collection lets you keep copying the data when the source changes, and can even tell you historically when the schema changed. In an RDBMS, this is impossible without blobs or something else horrible.

This example shows just the collection of the data, but I will show that any kind of querying can be done very easily later. What I won’t show is that normalization can also be done later, in parallel and in batches, and that the whole thing lives in a horizontally scalable database. Of course, the catch is that you need keys and a structured data source to start with. This won’t work with CSV and other flat formats.

For example, say we want to load XML files into a database. MongoDB is great for this because it can make the coding dead simple: for each attribute and child in the source document, create a corresponding attribute and child in the database.
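To make that concrete, here is a toy sketch of the idea using Ruby’s stdlib REXML. The `element_to_hash` helper is made up for illustration (the real script below leans on ActiveSupport’s `Hash.from_xml` instead, which also handles attributes and repeated siblings properly):

```ruby
require 'rexml/document'

# Toy recursive mapping: each child element becomes a hash key.
# (Simplification: attributes are ignored and repeated siblings clobber
# each other -- Hash.from_xml deals with those cases for real.)
def element_to_hash(el)
  children = el.elements.to_a
  return el.text if children.empty?
  children.each_with_object({}) do |child, h|
    h[child.name] = element_to_hash(child)
  end
end

xml  = '<page><title>Ford Motor</title><revision><text>hi</text></revision></page>'
page = element_to_hash(REXML::Document.new(xml).root)
puts page["title"]            # Ford Motor
puts page["revision"]["text"] # hi
```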

In a normal database, I would have to parse the XML and create normalized rows all over the place, or use blobs. Of course, blobs are useless for search. Let me show you an example.

First, let’s take a look at the XML returned by a Wikipedia exporter URL (trimmed the XSD line a bit for readability):

    <mediawiki>
      <siteinfo>
        <generator>MediaWiki 1.18wmf1</generator>
        <!-- ... -->
        <namespaces>
          <!-- ... -->
          <namespace>User talk</namespace>
          <namespace>Wikipedia talk</namespace>
          <namespace>File talk</namespace>
          <namespace>MediaWiki talk</namespace>
          <namespace>Template talk</namespace>
          <namespace>Help talk</namespace>
          <namespace>Category talk</namespace>
          <namespace>Portal talk</namespace>
          <namespace>Book talk</namespace>
        </namespaces>
      </siteinfo>
      <page>
        <title>Ford Motor</title>
        <!-- ... -->
        <revision>
          <!-- ... -->
          <text>#REDIRECT [[Ford Motor Company]]</text>
        </revision>
      </page>
    </mediawiki>
Even though this page is just a redirect (see the text element at the end, in MediaWiki markup), it’s still very long. For a multitude of pages or documents, creating a mapper and tightly coupling to the document format might be very annoying. Even worse, we might delay sucking data in because there is so much mapping work to do. We might even start writing documents detailing the source format, what we will call the attributes internally, and documenting schema changes as they occur.

What I propose is to forget all that mapping and just load the document as-is. We will use a document database (MongoDB) to make this magic happen.

# we are going to intentionally use the vanilla mongo driver
require 'mongo'
require 'nokogiri'
require 'open-uri'
require 'active_support/core_ext' # from rails

include Mongo
pages = Connection.new('localhost', 27017).db('loadtest').collection('pages')

wikipedia_page = "http://en.wikipedia.org/wiki/Special:Export/Ford_Motor"

# noblanks strips whitespace-only text nodes; had problems without it
# (the blank nodes confuse Hash.from_xml)
doc = Nokogiri::XML(open(wikipedia_page)) { |config| config.noblanks }
page = Hash.from_xml(doc.to_s)    # here's the magical method from rails
pages.insert page

Ok, this is pretty cool. In 10 lines of Ruby, I’m downloading an XML file and inserting it into a new collection called ‘pages’ in a database that hasn’t even been created yet; MongoDB creates both on the first insert (as long as MongoDB is running). Great! But it quickly falls apart.

If you run it again, you now have two documents (rows) in Mongo. Boo. Not to mention, if you actually query Mongo you see this (again trimmed a bit for readability):

> db.pages.find()
{
   "_id" : ObjectId("..."),
   "mediawiki" : {
      "siteinfo" : {
         "generator" : "MediaWiki 1.18wmf1",
         ...
         "namespaces" : {
            "namespace" : [
               ...
               "User talk",
               "Wikipedia talk",
               "File talk",
               "MediaWiki talk",
               "Template talk",
               "Help talk",
               "Category talk",
               "Portal talk",
               "Book talk"
            ]
         }
      },
      "page" : {
         "title" : "Ford Motor",
         ...
         "revision" : {
            ...
            "text" : "#REDIRECT [[Ford Motor Company]]"
         }
      }
   }
}
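As an aside, the duplicate-on-rerun problem has an easy fix once we pick a natural key. Here is a sketch, assuming the page title is unique (which it is within a wiki) and the same legacy mongo driver as above; the actual database call is shown commented out since it needs a running MongoDB:

```ruby
# Sketch: use the title as a natural key and upsert instead of insert,
# so re-running the loader replaces the document rather than duplicating it.
page = { "title"    => "Ford Motor",
         "revision" => { "text" => "#REDIRECT [[Ford Motor Company]]" } }

selector = { "title" => page["title"] }

# With the legacy mongo driver this would be:
#   pages.update(selector, page, :upsert => true)
puts selector["title"] # Ford Motor
```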

There’s a whole lot of metadata in that document, and really all I care about is the content (maybe). So in some cases you might want to filter incoming data. That means I have to actually look at my data and pick which attributes I want, and now I’m tightly bound to the source document and have to worry about it changing.
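If we did want to filter, one hypothetical approach is a simple whitelist of top-level attributes. A sketch (the extra "ns" and "id" keys here are invented for illustration):

```ruby
# Hypothetical whitelist filter; "ns" and "id" are made-up example keys.
page = {
  "title"    => "Ford Motor",
  "ns"       => "0",
  "id"       => "12345",
  "revision" => { "text" => "#REDIRECT [[Ford Motor Company]]" }
}

WANTED = %w[title revision]
filtered = page.select { |key, _| WANTED.include?(key) }

puts filtered.keys.inspect # ["title", "revision"]
# pages.insert filtered
```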

But let’s do it anyway. All we have to do is change one line:

pages.insert page["mediawiki"]["page"]

Now our inserted document looks like this:

> db.pages.find()
{
   "_id" : ObjectId("4ec6c1775a498d64f2000001"),
   "title" : "Ford Motor",
   ...
   "revision" : {
      ...
      "text" : "#REDIRECT [[Ford Motor Company]]"
   }
}

Of course, we had to clear out the pages collection and re-run it. That’s just because we haven’t written any logic to check for existence yet. But let’s take a break here and talk about what we can do even with this piddly little 10 lines of Ruby. We can query:

> db.pages.find({}, {'title':1} )
{ "_id" : ObjectId("4ec6c1775a498d64f2000001"), "title" : "Ford Motor" }
{ "_id" : ObjectId("4ecafffd5a498d0136000001"), "title" : "Nissan" }

Here we are just showing that we have two pages from Wikipedia stored (a Nissan page was loaded the same way). The title:1 projection is like SELECT title FROM pages; in SQL. So if we wanted to search on attributes, it’s pretty easy:

> db.pages.find({ title:/^N/ }, {title:1})
{ "_id" : ObjectId("4ecafffd5a498d0136000001"), "title" : "Nissan" }

The shell is pretty forgiving about quoting keys (title works with or without quotes).
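For completeness, here is the same regex query sketched from the Ruby side, against the legacy driver API used by the loader script. The actual `pages.find` call is commented out since it needs a running MongoDB:

```ruby
# The shell query { title:/^N/ } as a Ruby selector. With the legacy
# driver, the projection would go in :fields, like the shell's {title:1}:
#   pages.find(selector, :fields => { 'title' => 1 }).each { |doc| puts doc['title'] }
selector = { 'title' => /^N/ }

puts selector['title'].match?('Nissan')     # true
puts selector['title'].match?('Ford Motor') # false
```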

In the next part, we’ll dive into handling updates and other formats.