Scaling Beyond the CDN – Jake Holland, Principal Architect @ Akamai

How to scale beyond the CDN with 8K video, millions of simultaneous downloads and streams, local caches, and multicast. This episode is the last in a series of three in which we discuss scaling the internet.

The main links discussed in this episode are:

https://github.com/GrumpyOldTroll/multicast-ingest-platform

https://github.com/GrumpyOldTroll/wicg-multicast-receiver-api/blob/master/explainer.md

Other main things we referenced:

https://blog.apnic.net/2020/07/28/why-inter-domain-multicast-now-makes-sense/

https://tools.ietf.org/html/rfc6726 (FLUTE)

https://tools.ietf.org/html/rfc8777 (DRIAD)

https://datatracker.ietf.org/doc/draft-ietf-mboned-dorms/

https://datatracker.ietf.org/doc/draft-ietf-mboned-cbacc/

https://datatracker.ietf.org/doc/draft-ietf-mboned-ambi/

https://github.com/GrumpyOldTroll/chromium/tree/multicast_new


One Reply to “Scaling Beyond the CDN – Jake Holland, Principal Architect @ Akamai”

  1. Matthew Walster

    In what seems to be a regular feature for me, a few counter-arguments to the points made in this show:

    1. It took nearly 20 years for roughly 1/3 of all bits in access networks to be carried over IPv6, so I think it’s incredibly unlikely for multicast to be supported there. Where multicast has seen usage is in networks operated by those bundling services: delivering the ISP’s own content to the set-top box they provide. Expanding that (for video) to other devices is laudable, but fraught with issues as previously discussed (wrt WiFi etc).

    2. Just because 20Mbit/s is recommended for 4K streams doesn’t mean that’s the average bitrate; it’s the peak. Realistically, the average bitrate is lower than this, but even if we use the 20Mbit/s figure you can be assured that the vast majority of streaming clients are not subscribed to the highest-bitrate streams, and that HD (or even SD) streams will make up the vast majority of streams.

    For 100M concurrent viewers of the Super Bowl, that’s almost certainly counting viewers as people, not devices, and the device count is probably much less than half of this figure… In a non-COVID year anyway!

    Regardless, when the Super Bowl is not on you need the capacity to stream what viewers want to watch on demand, so even if on this one day per year you find most people are watching the same content, you still need the network capacity for all the other days of the year.

    3. Streaming OS and game updates via multicast is a terrible idea outside of the enterprise setting. You have to rely on all the target devices being turned on (which is possible if you schedule the download for an overnight session where the device wakes itself up, but then the traffic demands on the last-mile access network are considerably smaller anyway), and you can only send data at the lowest common denominator rate.

    Taking the 150GB update as a figure, if you send that multicast stream at 10Mbit/s, you need roughly 33 hours of constant streaming to download everything. If someone uses their internet connection during that time, you are almost certainly going to lose packets and therefore have to “fill in the gaps” later via a separate unicast catch-up channel. Whereas if you allow users to download via unicast from a local cache node, spread out over a day or so, a user with a 100Mbit/s internet connection can download that same amount of data in a little over 3 hours.
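    A quick back-of-the-envelope sketch of the figures above (assuming decimal units, i.e. 1 GB = 8,000 Mbit, as ISPs typically advertise; the helper name is just for illustration):

    ```python
    # Sketch: transfer time for a 150 GB update at multicast vs. unicast rates.

    def download_hours(size_gb: float, rate_mbps: float) -> float:
        """Hours to move size_gb gigabytes at rate_mbps megabits per second."""
        megabits = size_gb * 8_000          # 1 GB = 8,000 Mbit (decimal)
        seconds = megabits / rate_mbps
        return seconds / 3600

    # Multicast stream at 10 Mbit/s vs. unicast download at 100 Mbit/s.
    print(round(download_hours(150, 10), 1))   # -> 33.3 (roughly 33 hours)
    print(round(download_hours(150, 100), 1))  # -> 3.3  (a little over 3 hours)
    ```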

    Multicast is a great tool, for sure, but almost all of the use cases shared here are solutions looking for a problem. I don’t for a moment doubt that there are good cases for multicast, but on a sliding scale of effort versus benefit, unicast almost always wins out, considering you need a well-built unicast network to start with anyway.
