How much data do YOU have?
How do you get massive amounts of live data from publishers around the world? We kicked off our social feed reader beta (aka our v1.5 release).
That was the tool we used to validate the performance and flexibility of our architecture at scale. We hit a number of issues, but quickly got to the point where we could serve instant personalized views of 20 million unique assets from 100,000 publishers. At that point we figured scale was solid enough, so we could beef up the failover.
I’d like a chicken sandwich, to go please
I’m lazy about monitoring systems, so we wanted the software to be as automatic as possible. Sort of like ordering the hot and spicy chicken sandwich when it’s too late at night.
- automatic multi-box data replication
- automatic host failover
- automatic re-mastering of assets onto newly introduced machines
- automatically rebuilding and syncing those distributed, replicated databases behind the scenes
- and bringing them back into the fold
- and maybe some ranch dressing on the side
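The post doesn’t show how any of this was built, but the behaviors in that list (minus the ranch dressing) can be sketched as a toy in-memory model. Everything here is hypothetical — the `Host`/`Cluster` names and the synchronous write-to-all-replicas scheme are illustration only, not the actual architecture:

```python
class Host:
    """One machine holding a replicated key/value copy of the data."""
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.data = {}

class Cluster:
    """Toy cluster: one master, N replicas, writes fanned out to all healthy hosts."""
    def __init__(self, names):
        self.hosts = [Host(n) for n in names]
        self.master = self.hosts[0]

    def write(self, key, value):
        # automatic multi-box replication: every write lands on each healthy host
        for h in self.hosts:
            if h.healthy:
                h.data[key] = value

    def check_and_failover(self):
        # automatic host failover: if the master is down, promote the
        # first healthy replica to master
        if not self.master.healthy:
            for h in self.hosts:
                if h.healthy:
                    self.master = h
                    break

    def rejoin(self, host):
        # bring a recovered (or newly introduced) host back into the fold by
        # rebuilding its copy from the current master behind the scenes
        host.healthy = True
        host.data = dict(self.master.data)
```

A real system would replicate asynchronously, detect failures via heartbeats, and resync incrementally rather than copying the whole dataset, but the promote-a-replica-and-resync loop is the shape of it.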
Let’s just say the start of v1.5 wasn’t exactly seamless or automatic, but we battled on and seem to have beaten the system into submission. WIN!
Then we unceremoniously kicked the reader over to our “labs” section until we can clean up its severely overloaded UI. Then again, I might be the only one who wants a million features in my web app.
That’s what we learned with v1.5. What have you been up to? We’re about to take our next release out for a spin.
What’s the best way to say it?