I recently came across this article from GigaOm about CDN architecture and its “scary future”, revealing insights from a paper published in 2010. The whole thing made me feel like I’d been transported back to the mid-1990s. A world where, according to Wikipedia, we were wearing “hippie-style floral dresses, turtleneck shirts, lace blouses, conch shell necklaces, straw hats, Gypsy tops, long floral skirts and chunky wedge heeled shoes.”
In an odd piece of marketing material dressed up to look like an academic paper, the document beat the tired old drum of a CDN architecture built for the mid-1990s. A world where the Internet was really just starting to grow; where people were still trying to figure out how to manage this massively interconnected network. But it’s more than fifteen years since those days. During that time the Internet has grown massively, and the skills of the IP engineers and the sophistication of the systems and equipment they use have improved out of all recognition. The Internet, in the parts of the world that account for well over 90% of all the traffic, is a well-managed, high-quality system.
The document also downplays the dramatic improvements in the TCP/IP stacks on the majority of PCs over those fifteen or so years. Windows is dramatically better, as are Mac OS and Linux. They operate better without the users becoming IP experts, and they offer hooks for CDNs to enable faster start-ups and greater sustained throughput. While it’s true that there are still lots of old computers out there, the vast majority have high-performing TCP/IP stacks.
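To see why those stack improvements matter, consider the bandwidth-delay product: sustained TCP throughput is roughly capped at window size divided by round-trip time. Here’s a quick back-of-the-envelope sketch in Python; the 100 ms RTT and the window sizes are illustrative assumptions, not measurements of any particular network:

```python
# Rough ceiling on sustained TCP throughput: window / RTT.
# The RTT and window sizes below are illustrative assumptions.

def max_throughput_mbps(window_bytes: int, rtt_seconds: float) -> float:
    """Bandwidth-delay-product ceiling, in megabits per second."""
    return window_bytes * 8 / rtt_seconds / 1_000_000

RTT = 0.100  # assume a 100 ms cross-country round trip

# Classic TCP without window scaling: 16-bit window field, 64 KB max.
legacy = max_throughput_mbps(64 * 1024, RTT)

# A modern stack with window scaling (RFC 1323): e.g. a 4 MB window.
modern = max_throughput_mbps(4 * 1024 * 1024, RTT)

print(f"64 KB window: {legacy:.2f} Mbps ceiling")
print(f"4 MB window:  {modern:.1f} Mbps ceiling")
```

On those assumptions, an old stack tops out around 5 Mbps on that path no matter how fat the pipe is, while a modern one can sustain hundreds of Mbps — which is exactly the kind of improvement the document glosses over.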
And of course the document’s “answer” to the “poor IP network” and the “problems with TCP/IP” is a CDN architecture that is inside thousands of networks. What does “inside” mean? The document draws a bottleneck diagram showing a “good CDN” on one side of the bottleneck and Level 3 on the other. Ignoring for a moment that the bottleneck doesn’t actually exist at the interconnection of networks, what does this mean? Well, if you are a CDN operator without a network, then by definition every single one of your servers is inside another network. But Level 3 does have a network.
So, if a non-network CDN has servers in a network-interconnect facility, say an Equinix building, and they interconnect with lots of networks there, then they are “inside” all those other networks. In contrast, if Level 3 interconnects its IP network with those same networks in many more physical locations, none of them are classified as “inside”. But which is better? The only thing that matters is the quality experienced by the person streaming a video.
In order to deliver a high-quality experience for video streaming (or download), do you need thousands of locations? No, of course not. And that’s why no CDN built for the modern Internet has taken that approach. In test after test undertaken by our customers, Level 3 comes out equal to or better than ALL of our competition when it comes to quality. And that’s quality measured from the client’s PC. Data from every single stream is analysed by our customers (or an independent quality measurement system like Conviva’s Insights) in order to ensure we meet our service-level guarantees. We can’t spoof that. We can’t obfuscate that by talking about CDN architecture or the internal workings of TCP/IP. We just deliver great performance.
But for all you IP geeks out there, I’ll leave you with this suggestion. Despite all the claims of thousands of locations, find a video site that the “thousand-location CDN” delivers video for, then trace a few of the locations from which that video is actually served. You might be surprised at the number … which, by the way, isn’t close to a thousand.
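If you want to try the counting step yourself, here’s a minimal sketch in Python. It assumes you’ve already collected the A-record answers for the CDN’s hostname from several vantage points (for example with `dig +short` against different public resolvers); the addresses below are made-up placeholders from documentation ranges, and grouping by /24 is only a crude proxy for “distinct serving location”:

```python
from ipaddress import ip_address, ip_network

# Hypothetical A-record answers collected for a CDN hostname from
# resolvers in different regions. These are placeholder addresses
# (RFC 5737 documentation ranges), not real measurements.
answers = [
    "203.0.113.10", "203.0.113.11",   # vantage point 1
    "198.51.100.7", "198.51.100.8",   # vantage point 2
    "203.0.113.10",                   # vantage point 3 (a repeat)
]

# Collapse answers into /24 prefixes as a rough stand-in for
# "distinct serving location" -- different IPs in one /24 are very
# likely racked in the same place.
locations = {ip_network(f"{ip_address(a)}/24", strict=False) for a in answers}

print(f"{len(locations)} distinct /24s across {len(answers)} answers")
```

Run something like this against enough vantage points and the number of distinct prefixes stops growing surprisingly quickly.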