Working with vast volumes of information has been remarkably enlightening. Every day, almost 50 GB of MLS data pours through our system, and that volume climbs every time we bring on a new region. Filtering these huge data sets has presented our tech team with serious storage and processing challenges.
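For the curious, here's roughly what "filtering" means at this scale. This is a minimal sketch, not our actual pipeline; the file layout, the region field, and the region codes are all invented for illustration. The one real trick is streaming the feed one record at a time, so a 50 GB day never has to fit in memory.

```python
# Hypothetical sketch: filtering a large daily MLS feed without loading
# it all into memory. File names, delimiter, and field layout are
# assumptions -- real MLS feeds vary by provider.
import csv

WANTED_REGIONS = {"SEATTLE", "BELLEVUE"}  # assumed region codes

def filter_feed(in_path: str, out_path: str) -> int:
    """Stream one day's feed, keeping only listings in regions we serve."""
    kept = 0
    with open(in_path, newline="") as src, open(out_path, "w", newline="") as dst:
        reader = csv.DictReader(src, delimiter="\t")
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames, delimiter="\t")
        writer.writeheader()
        for row in reader:  # one row at a time, so memory use stays flat
            if row.get("region", "").upper() in WANTED_REGIONS:
                writer.writerow(row)
                kept += 1
    return kept

if __name__ == "__main__":
    print(filter_feed("daily_feed.tsv", "filtered_feed.tsv"), "listings kept")
```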
Initially, we built what seemed like a bulletproof system, following standard Microsoft best practices and well-established database designs. Everything looked peachy with the test data, so we flipped the switch.
We began accepting live MLS data, and the site slowed to a crawl. Usability hit an all-time low, and complaints flowed in. Large quantities of Mountain Dew, coffee, and Hot Pockets produce amazing results in both lab rats and computer programmers: over a 72-hour stretch, we ripped the existing database structure to bits and re-tooled the app around a whole new structure designed to handle massive quantities of data. We had a database specialist come in, and he said one word: "hideous." That confirmed we were on to something. Maybe it would work, maybe not, but we were breaking new ground. Initial stress tests showed a huge spike in performance (on the order of 10^6 better). We celebrated with 12 hours of uninterrupted nap time.
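The post doesn't spell out what the "hideous" structure actually was, so here's a hedged guess at the family of tricks usually behind this kind of win: flatten the schema into something write-optimized, batch the inserts, and build indexes after the load instead of before. Everything below is illustrative, not our schema, and sqlite3 stands in for SQL Server only so the sketch runs on its own.

```python
# Hypothetical sketch of the general idea: trade a clean normalized schema
# for a flat, write-optimized one and batch the writes. We were on SQL
# Server; sqlite3 is used here purely to keep the sketch self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE listings_flat (
        mls_id   TEXT,
        region   TEXT,
        price    INTEGER,
        beds     INTEGER,
        raw_blob TEXT   -- everything we don't query on, kept as-is
    )
""")

def bulk_load(rows, batch_size=5_000):
    """Insert in large batches, one transaction per batch. Per-row
    commits are a classic reason a live feed brings a site to a crawl."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= batch_size:
            with conn:  # one transaction per batch
                conn.executemany(
                    "INSERT INTO listings_flat VALUES (?, ?, ?, ?, ?)", batch)
            batch.clear()
    if batch:
        with conn:
            conn.executemany(
                "INSERT INTO listings_flat VALUES (?, ?, ?, ?, ?)", batch)

bulk_load([("MLS1", "SEATTLE", 450_000, 3, "{}")])
# Index only after the load, not before -- another cheap, ugly win.
conn.execute("CREATE INDEX idx_region ON listings_flat (region)")
```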
Anyway, it looks strange, but it works beautifully. We've added new feeds with barely a blip in resource usage. Blood, sweat, and tears … just a standard week in the life of a start-up. Our next step is scalable storage for all the images in the MLS; the programmers are drooling over that project. We'll let you know how it goes. And this time, we're testing it on a development server.
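Since that project hasn't started yet, take this as pure speculation rather than its design: one pattern that comes up again and again for photo storage is content-addressing, where each image is stored under a hash of its bytes. The paths and names below are invented, but the idea is that duplicate photos (common when a property is re-listed) get stored exactly once, and fanning files out across subdirectories keeps any single folder from holding millions of entries.

```python
# Hypothetical sketch of content-addressed image storage. The store root
# is an assumption; the same layout maps cleanly onto bucket/key prefixes.
import hashlib
from pathlib import Path

STORE = Path("image_store")  # assumed local root for the sketch

def put_image(data: bytes) -> str:
    """Store image bytes, return their content address (SHA-256 hex).
    Identical photos hash to the same address, so they are stored once."""
    digest = hashlib.sha256(data).hexdigest()
    path = STORE / digest[:2] / digest[2:4] / digest  # e.g. ab/cd/abcd...
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():
        path.write_bytes(data)
    return digest

def get_image(digest: str) -> bytes:
    """Fetch image bytes back by their content address."""
    return (STORE / digest[:2] / digest[2:4] / digest).read_bytes()

if __name__ == "__main__":
    addr = put_image(b"fake jpeg bytes")
    assert get_image(addr) == b"fake jpeg bytes"
    print(addr)
```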