When the first government stay-at-home order was issued, everyone in the streaming industry probably held their collective breath. First, because they knew that after one stay-at-home order, there would be more. Second, because it meant that people were going to do a lot more streaming.
Although there was probably some trepidation about the prospect of so many people streaming so much content for so much of the day, most big platform operators (and even those launching just before or during the pandemic, like Disney+, HBO Max, Quibi, and Peacock) had already planned their services to accommodate the traffic bursts associated with sudden demand. In fact, most streaming service providers, especially those that operate a network, likely have people whose sole job is to plan for capacity needs that reflect both current and future demand. I am sure none of those capacity-planning engineers predicted everyone staying home and streaming 24/7 for several months, but they still planned for both steady growth and sudden spikes. As a result, at least in the U.S. market, the networks fared very well.
But network capacity is only one part of the streaming workflow. Unlike a broadcast workflow, in which technologies are connected via established standards, a streaming workflow is a hodgepodge of technologies stitched together through middleware and other custom code. In many cases, when streaming providers employ open source software within their operations, custom code is what allows one streaming component to communicate with another. And, of course, there are companies like Netflix, which has built much of its streaming infrastructure from the ground up. The kind of custom and proprietary technology development often necessary to enable streaming workflows is both a blessing and a curse. It gives streaming providers immense flexibility to design and build workflows that meet their specific requirements (product functionality, monetization mechanisms, etc.). But it can also expose the fragility of those workflows. Custom encoders can choke. Multi-technology stacks, such as security, can become real-time bottlenecks when traffic exceeds the capacity of the software, or of the server on which it runs.
And that gets back to the title of this column.
What we’ve learned is that the internet is not going to break, but network operators have to remain vigilant about bandwidth consumption (as evidenced by the European Union’s request that streaming operators like Netflix and YouTube lower video quality while stay-at-home orders remained in effect). The amount of streaming we’ve seen since the beginning of the pandemic is not an outlier use case. It is going to be the new normal at some point, and network operators will need to upgrade equipment to ensure they have the necessary capacity to support it.
What we also learned, though, is that the network is not where the real issues will lie. They will be in other, unexpected components, like authorization web applications installed on web servers that were never configured for the kind of traffic they’ve now seen. It is easy to tip over a web server that can handle 1 million requests per hour when it is receiving 10 million. What we learned is that streaming is a combination of hardware and software, and that the machines on which streaming workflow software runs must be built to support the demand they will be subjected to. The worst thing you can do is build edge caches, for example, that are not optimized or balanced to maximize throughput and performance under high demand or for specific traffic profiles (downloaded content requires different server configurations than segmented streaming does to achieve optimal performance).
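To put rough numbers on that, here is a minimal back-of-the-envelope sketch in Python (my own illustration, not drawn from any particular provider’s stack) of what happens to an authorization service rated for 1 million requests per hour when 10 million arrive; the queue limit and the decision to shed the overflow are assumptions made for the sake of the example.

def auth_capacity_check(capacity_per_hour, demand_per_hour, max_queue):
    # Requests the service can actually complete in the hour
    served = min(demand_per_hour, capacity_per_hour)
    # Everything above capacity either waits or gets dropped
    overflow = demand_per_hour - served
    queued = min(overflow, max_queue)   # waiting requests show up as added latency
    shed = overflow - queued            # the rest time out or are rejected outright
    return {"served": served, "queued": queued, "shed": shed}

# Using the figures from the example above; max_queue is an arbitrary assumption
print(auth_capacity_check(capacity_per_hour=1_000_000,
                          demand_per_hour=10_000_000,
                          max_queue=50_000))
# {'served': 1000000, 'queued': 50000, 'shed': 8950000}

In other words, at 10 times the rated load, nearly 9 out of every 10 authorization attempts fail no matter how healthy the network underneath is, which is exactly the kind of non-network failure that catches providers by surprise.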
What we learned is that we have work to do. Not because things broke, but because we saw a vision of the streaming future that has given us the opportunity to be proactive rather than reactive, to stay out in front of the curve we know is coming.