Tuesday, 24 May 2016

NBN and the missing superfast customers

Australia’s national broadband network (NBN) is the Kim Kardashian of government broadband plans – bigger, bolder and more exposed than any other. (Though to be fair, it has left ‘breaking the internet’ to Telstra.)

The NBN represents a renationalisation of the access network, as well as a significant upgrade to fibre - initially FTTP, now mixed technology. Whether or not government ownership is a good idea, it does at least bring a degree of openness on operational detail. This is very handy for broadband watchers around the world.

For instance, NBN publishes the mix of speeds its customers take, which are as follows:

Source: Various NBN Co publications
Note: Excludes plans above 100/40, with trivial take-up

The most popular speed tier is 25 Mbps down, 5 Mbps up. At the end of March 2016, 46% of customers were on this plan. Certainly such speeds would be beyond an ADSL connection, so NBN has provided some benefit.

However, a moderate uplift vs ADSL was never the goal for the NBN. When it was announced in 2009, its primary ambition was to offer “up to 100 Megabits per second”. In this regard, consumers seem to have been less impressed. Only 15% of those on FTTP are taking the 100/40 product, a percentage that has been steadily falling since 2012. Despite the fact that the premium for moving from 25/5 to 100/40 is just £10/US$14, the great majority of consumers simply aren’t willing to pay the extra. (The premium from 12/1 to 25/5 is half this).

Based on the European 30 Mbps threshold, just 19% of NBN FTTP customers (or 155,000) are on superfast. The original NBN plan expected roughly double this percentage (and a far higher absolute number).

One consequence of this change in speed mix is that the average capacity of an NBN line is actually falling over time – from 36 to 34 Mbps between June 2014 and December 2015, for example. However, traffic per line has grown appreciably in the same period – yet more evidence that traffic and bandwidth growth are two very different things.

Source: Author's analysis of data from various NBN Co publications
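
For anyone who wants to replicate the sums, the average is just a take-up-weighted mean of the tier speeds. Here's a minimal sketch in Python – the tier shares are illustrative assumptions chosen to land near the 36 and 34 Mbps figures, not NBN Co's actual published mix:

    # Sketch: a shift toward lower speed tiers drags down the average
    # line capacity. Tier shares below are illustrative assumptions,
    # not NBN Co's actual published figures.

    tiers_jun_2014 = {12: 0.35, 25: 0.38, 50: 0.09, 100: 0.18}  # Mbps -> share
    tiers_dec_2015 = {12: 0.31, 25: 0.46, 50: 0.08, 100: 0.15}

    def average_capacity(mix):
        """Take-up-weighted mean downstream speed across the tiers."""
        return sum(speed * share for speed, share in mix.items())

    print(f"Jun 2014: {average_capacity(tiers_jun_2014):.1f} Mbps")  # ~36
    print(f"Dec 2015: {average_capacity(tiers_dec_2015):.1f} Mbps")  # ~34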

Oh, and demand for speeds beyond 100 Mbps, striding towards a gigabit future? Just 65 customers have taken them – 40 on 250/100, 3 on 500/200 and 22 on 1000/400. Clearly Australians have not been bitten by the gigabit bug.

So what does this mean for policy makers? I’d suggest:

  • Don’t overestimate willingness-to-pay for higher speeds
  • Leave room for price differentiation of fibre products – lots of customers are happy at the low end, and you need to enable appropriately low prices for them
  • Report equivalent data, so the debate about demand for bandwidth can be anchored in reality

(Note that all the above figures are unaffected by the change of the NBN plan to a mixed-technology model. They are for customers on the FTTP portion of the network.)

Thursday, 5 November 2015

Web page weights, and the rise of the baby hippo

Web pages, like our large friend on the left there, are big and getting bigger. Once upon a time, web pages were just text, but these days they may include many high-res images, JavaScript, fonts and many other elements that all contribute to the total amount of data that needs to be transferred to display the page. This is leading to concerns over 'web bloat'.

Not all of these files need to be downloaded before you start using the page. 'Below the fold' content (which initially sits off the bottom of your screen) can be downloaded while you're reading the content at the top.

For some sites, below-the-fold content is massive. The 'height' of the Daily Mail homepage is 5.16 metres, with less than 10% of the content initially visible. In one sense this approach is quite wasteful of internet traffic - the Daily Mail will send you all 5.16 metres, even if you never scroll past the top 30cm (assuming you don't click away elsewhere). But internet traffic is cheap, so the Mail isn't unduly worried.

The net result of larger and richer pages has been steady growth in 'page weights' - the amount of data that makes up a web page. They are now averaging a little over 2MB on the desktop:

Source: HTTP Archive
Note: A technical change in Oct 2012 means data on either side are not comparable

It's a toss-up whether this growth is exponential (20-25% per year) or linear (+345 KB/year), but either way it's substantial and ongoing. That means more traffic for networks to carry, and more bandwidth needed to ensure web pages load briskly. (In practice, for technical reasons, latency is often a more important factor than bandwidth, and beyond 5 Mbps there seem to be diminishing returns.)
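
Distinguishing the two models is straightforward, as it happens: fit a line to the raw weights for the linear case, and to their logarithms for the exponential case. A quick sketch - the yearly weights below are placeholders for illustration, not the actual HTTP Archive series:

    import math

    # Hypothetical average desktop page weights (KB) by year --
    # placeholder values, not the actual HTTP Archive series.
    years = [2011, 2012, 2013, 2014, 2015]
    weights_kb = [900, 1250, 1600, 1950, 2250]

    def ls_slope(xs, ys):
        """Ordinary least-squares slope of ys against xs."""
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
                / sum((x - mx) ** 2 for x in xs))

    # Linear model: a constant number of KB added per year
    print(f"Linear: +{ls_slope(years, weights_kb):.0f} KB/year")

    # Exponential model: fit the logs; the slope converts to a growth rate
    log_slope = ls_slope(years, [math.log(w) for w in weights_kb])
    print(f"Exponential: {math.exp(log_slope) - 1:.0%}/year")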

However, the growth in desktop website page weights is not the whole story - the mother hippo has been joined by a baby hippo. In recent years there has been a massive shift to mobile consumption, and page views from mobile devices now represent almost 40% of the total. (In Africa it's over 60%).

This matters in the context of page weights because mobile pages tend to be much lighter - roughly half the weight of fixed pages. For mobile devices, web page designers need to be conscious of higher consumer data charges, fit their content onto smaller screens, and so on. Consequently both the number and the size of files transferred are lower.

Source: HTTP Archive, StatCounter, author's analysis
Weighted average based on UK traffic mix

Clearly mobile page weights are growing steadily too, but because they start so much lower than desktop page weights, the shift to mobile is suppressing the growth in average consumed page weight, just as that hippo calf has reduced the average hippo weight in the enclosure. While desktop pages have been growing at +345 KB/year, the average consumed page is only growing at +230 KB/year.
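
The arithmetic of the suppression is simple, as this sketch shows (the weights are rough round numbers in line with the figures above, not the precise dataset):

    # Mix-shift effect: even with both page types growing, a rising
    # mobile share holds down the blended average. Rough round numbers,
    # not the precise HTTP Archive / StatCounter dataset.

    desktop_kb, mobile_kb = 2050, 1050   # approximate page weights

    for mobile_share in (0.20, 0.40, 0.60):
        blended = mobile_share * mobile_kb + (1 - mobile_share) * desktop_kb
        print(f"mobile share {mobile_share:.0%}: blended average {blended:.0f} KB")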

However, baby hippos don't stay baby hippos, and once the transition to mobile devices is complete, the growth in average page weight will accelerate again - unless of course we've shifted all our usage to apps by then, which are even lighter than mobile pages.

Thursday, 22 October 2015

4K TV: 0.004K TV after compression

High resolution video is often cited as a driver for ultra-fast broadband. Here, for example, is Hyperoptic (a UK ISP) suggesting that if you want to watch 4K TV, you need 1 Gbps. 100 Mbps supposedly won't be enough.

A 4K TV isn't for everyone - apart from anything else it's very large, as you can see (though it isn't mandatory to install two Korean ladies with each set). However, it's certainly becoming more popular, and by 2020 over a third of West European households are expected to have a 4K set.

But what is frequently glossed over in broadband discussions is how little bandwidth is currently required for 4K TV, and how much less will be required in future. To be sure, how much bandwidth is needed for 4K is not a simple question. It depends on (at least) three things: the resolution of the video, the nature of the content and the time you have to compress it.

Uncompressed, high quality 4K can require 3 Gbps or more. However, in practice 4K is never delivered to consumers uncompressed. A compression algorithm (codec) is used to convert the raw digital video into a far smaller data stream. Many techniques are used in such algorithms. For instance, if a portion of the image is unchanged since the previous frame, the algorithm may (effectively) say 'for this portion of the screen, same again'. This requires far less data than retransmitting each pixel in that part of the screen. Or, if a large part of the image is all the same colour, the algorithm may transmit the boundaries of the colour block, rather than separately transmitting the colour of each pixel within it.
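
To make the 'same again' idea concrete, here is a toy sketch of inter-frame differencing. Real codecs such as H.264 and HEVC are vastly more sophisticated, but the principle - transmit only what changed - is the same:

    # Toy inter-frame compression: send only the pixels that changed
    # since the previous frame. Static regions cost almost nothing.

    def encode_delta(prev_frame, frame):
        """Return (index, new_value) pairs for changed pixels only."""
        return [(i, v) for i, (p, v) in enumerate(zip(prev_frame, frame)) if p != v]

    def decode_delta(prev_frame, delta):
        frame = list(prev_frame)
        for i, v in delta:
            frame[i] = v
        return frame

    prev = [0] * 1000          # previous frame: 1,000 pixels
    curr = list(prev)
    curr[10:20] = [255] * 10   # only 10 pixels have changed

    delta = encode_delta(prev, curr)
    assert decode_delta(prev, delta) == curr
    print(f"Full frame: {len(curr)} values; delta: {len(delta)} values")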

The effectiveness of such techniques depends on many things, including the sophistication of the algorithm, the available processing power & time and the nature of the content (content with lots of movement is inherently more difficult, for instance).

However, the reduction in bandwidth is generally dramatic. Netflix, who know as much about 4K streaming as anyone, say they average 15.6 Mbps. However, sports content (which has lots of movement and must be compressed in real-time) can require more. BT's 4K Sport currently uses 20-30 Mbps.

Thus even today 4K is well within the capabilities of sub-FTTH broadband, and it is baffling that Hyperoptic think 100 Mbps is insufficient. Moreover, 4K's requirements are only going to fall. Moore's Law means we have ever more processing power to play with, which can be traded off against bandwidth to maintain picture quality while using fewer Mbps. In addition, compression algorithms grow ever more sophisticated. As a result, roughly 9% less bandwidth has been needed each year to support a given picture quality. Because video is such an important component of traffic these days, investment in codecs appears to be growing, meaning that the 9% rate may actually accelerate.
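
Compounding a 9% annual saving makes the trajectory clear. Taking Netflix's quoted 15.6 Mbps as the starting point (the projection itself is mine, and purely illustrative):

    # If a given picture quality needs ~9% less bandwidth each year,
    # the requirement compounds down quickly. Starting from Netflix's
    # quoted 15.6 Mbps average for 4K; projection is illustrative only.

    mbps = 15.6
    for year in range(2016, 2026):
        mbps *= 1 - 0.09
        print(f"{year}: {mbps:.1f} Mbps")   # falls below 8 Mbps by 2023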

Companies are already claiming dramatically lower bandwidths for 4K in trials. For instance, V-Nova has reported streaming 4K at just 6 Mbps in a trial with EE (the UK's largest mobile operator). Tveon, a Canadian start-up, is even more aggressive, suggesting that with their technology 2 Mbps will be enough. (That's better than a 1000:1 compression of the raw stream).

While these claims will need to be proven out, they nonetheless suggest the potential for dramatic improvement. Indeed, even at double V-Nova's 6 Mbps, most ADSL lines would be able to support 4K TV.

Your future TV may or may not be 4K, and you may or may not be able to see the difference even if it is. However, that monster TV won't be a justification for bringing fibre to your front door.

Monday, 12 January 2015

Killer Gigabit Apps - and why 1,259 experts are wrong

Sandy Lindsay, Master of Balliol College Oxford (1924-49), was once locked in debate with the fellows (professors) at the college on a contentious issue. It came to a final vote, in which the fellows, to a man, voted against the Master. He scowled around the room, saying “Gentlemen, we appear to have reached an impasse.”

In this post I’m going to take a similarly hubristic approach, by disagreeing with 1,259 experts. The 1,259 experts are cited in a recent report from the Pew Research Center, Killer Apps in the Gigabit Age. The Pew Research Center is a US non-partisan body which publishes much valuable material on media and the internet (among other topics). I’ve frequently cited their work. This report too is full of interesting ideas – my main problem with it is its title, for reasons I’ll come on to.

For the report Pew took responses from 1,464 experts, of whom 1,259 said they believed major new applications would capitalise on a significant rise in US bandwidth in the years ahead – the Gigabit Age of the title.

Pew also asked the experts what those applications might be – and here’s where it gets interesting. The experts had many, many responses – Pew needs almost 50 pages just to summarise them. But almost none of the proposed applications need gigabit speeds or anything like it.

To take one example, telepresence is a recurring theme in the responses. This may or may not become widespread in the future - but the key point is that it does not require a gigabit. Even professional telepresence systems, with a screen down the middle of the conference table, seating six at your end and another six in Timbuktu (or wherever your counterparts are), require just 18 Mbps, according to Cisco and Polycom, who make such systems. So if you decide to chop your dining table in two and install multiple hi-def screens so you can have permanent telepresence with your Auntie Ethel, bandwidth will be the least of your worries.

Virtual reality is also oft mentioned in Pew's report. Oculus Rift is the closest we have to usable VR. It's at the advanced prototype stage, and is already impressive. The official verdict of this 90-year-old tester (taking a virtual tour of Tuscany) is 'holy mackerel!'

I haven't been able to track down official views on the bandwidth required for Oculus Rift, but the displays are 1,000 x 1,000 pixels per eye. In combination that's about a quarter of the resolution of a 4K TV (with similar frame rates). Given that 4K requires 16 Mbps, this suggests that VR may actually be a relatively low bandwidth application.
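
Here's the back-of-envelope version of that claim - crude, since it assumes bandwidth scales linearly with pixel count and ignores frame-rate and codec differences:

    # Scale 4K streaming bandwidth by the ratio of pixel counts to get
    # a rough VR estimate. Assumes bandwidth scales linearly with
    # pixels; ignores codec and frame-rate differences.

    pixels_4k = 3840 * 2160          # ~8.3 million pixels
    pixels_vr = 2 * (1000 * 1000)    # 1,000 x 1,000 per eye, two eyes

    bandwidth_4k_mbps = 16           # figure cited above
    estimate = bandwidth_4k_mbps * pixels_vr / pixels_4k
    print(f"Estimated VR stream: {estimate:.1f} Mbps")   # ~3.9 Mbps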

Some of the experts mentioned holographic displays. Bandwidth for these? Who knows. We'll put them in the 'maybe' category.

A number of the experts mentioned e-health, including monitoring vital signs, remote consultation and so on. Again, these are not high speed apps – they require kilobits, or a few megabits at most. Several of the respondents cited that old chestnut, remote surgery. Does anyone seriously think this is enabled by improved home bandwidth?

Wearable computing, the internet of things, life logging and a wide array of other possibilities were mentioned in the report – but again, there is no reason to expect these to need gigabit speeds or anything like it.

So the real story here is not that there's a cornucopia of apps that require gigabits. Rather, it is that a respected research institute could ask over a thousand experts, and still not find a single clear case of an application requiring gigabit speeds. Change the title to 'Lack of Killer Apps for a Gigabit Age', and the Pew report is spot on.

Wednesday, 28 May 2014

Do we need the Delphic Oracle to make sensible telecoms investments?

The Delphic Oracle was the leading seer of ancient Greece. This was reliably established by King Croesus (as in 'rich as'), who had his messengers ask a sample of seers at a pre-agreed time what he was doing at that very moment. The Delphic Oracle correctly said he was making lamb and tortoise stew.

However, the Oracle's statements were rarely this unambiguous - they tended to be a bit more, well, Delphic. For instance, asked about a prospective military expedition, she replied: "You will go you will return never in war will you perish" - place punctuation at your own risk.

It is sometimes argued that, absent reliable seers, we have to invest in superfast broadband because of the unknown unknowns - the applications that are surely coming, but which we just lack the foresight to predict today.

There are many problems with this argument, but one of them is that its proponents tend to underestimate our foresight. Here, for instance, is the view of my old friends at the FTTH Council on how little we knew in 2000 about the drivers of demand for today's broadband:

Just how accurate is the claim that these things were unforeseen in 2000?

Videoconferencing with Skype
Skype wasn't founded until 2003, but video calls over the internet have long been discussed - at least since 1995, when Stewart Loken of the Berkeley Lab said "internet videoconferencing is about to become commonplace". Obviously, to know what bandwidth you might need, you don't need to know the name of the company that's going to be most successful; you just need to know what the application is. So the fact that Skype didn't exist in 2000 is neither here nor there.

HD-TVs with 42" and more in 3D
The first formal HDTV research programme began in 1970. Consumer sets went on sale in the US in 1998, and some of them had 55" displays. 3D TV was trialled as early as 1994.

Facebook
We certainly didn't know about Facebook in 2000 - it wasn't founded until 2004. (Though you can discuss with the Winklevoss twins exactly when it was conceived). However, social media had been around for a long time - GeoCities, an early example, was founded in 1994. And again, we don't need to know the name of the provider to know the necessary bandwidth. Facebook uses text, pictures and a bit of video - all well understood as internet media in 2000.

Online shops
Amazon was founded in 1994. 'nuff said.

Google
By 2000, Google was already available in 10 languages (and had hired its first chef a year prior).

Digital Photography
The first consumer digital cameras were released in 1990 (the same year as Photoshop). Webshots, founded in 1999, was one of the first web-based photo sharing sites, but consumers had been uploading photos to BBSs (not always savoury ones) for some years before that.

iPad and Smartphones
The Palm VII, one of the first PDAs with wireless capability for internet access, shipped in 1999. I'll give the FTTH Council the iPad, which wasn't widely anticipated. Of course, it doesn't need particularly high bandwidth (though it has driven more traffic by extending hours of internet use in the home).

So, of the Council's seven things "we did not know in 2000" it turns out we did know 6½ of them. Their view of our ignorance is a bit ... ignorant.

The vast majority of things we do with the internet today were in fact anticipated in 2000, at least in broad brush strokes. That's why it's particularly problematic for FTTH fans that there are (in their own words) 'no really compelling applications yet' for FTTH. In 2000 we knew (roughly) what we would do with broadband speeds. In 2014 we have no real idea what we might do with superfast.

Thursday, 15 May 2014

The killer app for FTTH? Piracy

Fibre enthusiasts like to point out that those with very high speed connections have higher usage of the internet. This is certainly true, though whether this is because heavier users trade up to superfast broadband, or because superfast actually changes behaviour is a vexed question.

However, a more intriguing issue is what makes up that extra usage. Markets like Hong Kong and Korea have very fast broadband, and high per-line traffic. When you ask insiders in those markets what all that traffic is, they often look a bit shifty and then whisper 'piracy'. Until today I hadn't seen any hard data to back up this assertion.

However, I've just come across a report from COMBO, an EU-funded project looking at fixed-mobile convergence. (It has blue-chip participants - France Telecom, Deutsche Telekom, Telefonica, Alcatel Lucent, Ericsson and several others). On page 104 of the report is the following data for the traffic of FTTH and ADSL customers of France Telecom in October 2013:

Upstream and downstream traffic combined.

As you can see, FTTH users have roughly double the usage of ADSL customers. However, 58% of the additional traffic is down to P2P. Peer-to-peer is a protocol with legitimate uses, but it's very largely used for piracy. It's what enables Bittorrent for example (and is beloved of the copyright-sceptics of The Pirate Bay).

FTTH customers have 4.2x the P2P traffic of ADSL customers, mostly because they upload 7.5x more than ADSL customers. Indeed, for every pirate-byte they download, FTTH users are uploading 2.6. In the words of the report, "this allows us to conclude that some FTTH customers are becoming P2P servers".
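
For those who like to check such numbers, the quoted ratios do hang together. This sketch uses hypothetical per-line monthly volumes chosen to match the report's ratios; the actual France Telecom figures aren't reproduced here:

    # Consistency check on the ratios quoted above, using hypothetical
    # per-line monthly P2P volumes (GB) chosen to match them -- these
    # are not the actual France Telecom figures.

    adsl_p2p = {"down": 14.7, "up": 10.0}
    ftth_p2p = {"down": 28.8, "up": 75.0}

    def total(t):
        return t["down"] + t["up"]

    print(f"FTTH/ADSL P2P traffic:    {total(ftth_p2p) / total(adsl_p2p):.1f}x")  # ~4.2x
    print(f"FTTH/ADSL P2P upload:     {ftth_p2p['up'] / adsl_p2p['up']:.1f}x")    # 7.5x
    print(f"FTTH upload per download: {ftth_p2p['up'] / ftth_p2p['down']:.1f}")   # ~2.6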

Superfast advocates claim significant externalities (societal and economic impacts) for faster broadband. But externalities can be negative as well as positive. See the rigging on the rather fine pirate ship above? Fibre-optic cable, every strand.

Friday, 2 May 2014

Why websites are like turnips

We are sometimes told to eat locally - to consume food that is well suited to our domestic climate and is grown nearby. (For those of us in the UK, this implies a diet heavy in turnips, one of our relatively few native vegetables). By eating locally, we can reduce food-miles and our carbon footprint.

While perhaps less threatening to the planet, bit-miles can be damaging too, harming your surfing experience.

In my previous post I noted that increasing the speed of your home broadband connection doesn't necessarily lead to an improved experience. One scenario where this can happen is when you are accessing a remote service, such as a website hosted far away. While the transfer of data between your computer and the server may be constrained by your access bandwidth, latency (the time taken for a packet to travel between the two) can make an enormous difference. Moreover, the more hops the packets take between your computer and the server, the more opportunities there are for them to hit congestion somewhere along the way.

It's easy to test the impact of this. Speedtest.net allows you to run a test of transfer rates between your computer and one of their servers. By default, they automatically select a server near you, reducing the likelihood that transfer rates are constrained by factors other than your own access link. (There's a lot of subtlety in speed tests though - see this for a detailed discussion of how they work and their limits).

Here's the result of such a test from my computer:

This test used Speedtest's defaults, and it has selected a server in Maidenhead, about 50 miles away. As a result, my ping (round trip latency) is 31 milliseconds. In terms of bandwidth, I'm getting 18 Mbps, which sounds about right - I'm on DSL, relatively close to the exchange.

Now here's a test where I've overridden the defaults, and used a server in LA:

Ping has quintupled to 162 ms (though before we get too outraged, this still means that the packets were averaging 1/3 the speed of light, given a 10,800 mile round trip). The speed of connection has dropped to 11.4 Mbps. This certainly suggests that the bandwidth of the last mile - the access link to my house - is not the key constraint, since we know that's capable of at least 18 Mbps.

Of course, there are places more remote from my UK home than LA. Here's what happens when we run the test on a Sydney server:

Ping has doubled again, and the effective bandwidth has dropped to 8 Mbps. By visiting a remote server, I've 'lost' 10 Mbps of the capability of my line. Put another way, I'd likely get just as good a performance from the Sydney server if I had an 8 Mbps line as opposed to my actual 18 Mbps line.
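
One plausible mechanism is TCP's throughput ceiling: a single connection can't move data faster than its window size divided by the round-trip time, however fat the access pipe. A sketch - the window size is my assumption for illustration, and the Sydney ping is simply double LA's, as above; real stacks negotiate windows dynamically:

    # A single TCP connection is capped at (window size / RTT),
    # regardless of access line speed. Window size here is an assumed
    # 256 KB for illustration; real stacks negotiate it dynamically.

    window_bytes = 256 * 1024

    for server, rtt_ms in [("Maidenhead", 31), ("LA", 162), ("Sydney", 324)]:
        ceiling_mbps = window_bytes * 8 / (rtt_ms / 1000) / 1e6
        print(f"{server:10s} RTT {rtt_ms:3d} ms -> ceiling {ceiling_mbps:5.1f} Mbps")

For the nearby server the ceiling (roughly 68 Mbps on these assumptions) sits far above my 18 Mbps line rate, so the access link is the constraint; for LA and Sydney the ceiling drops into the same region as the measured speeds.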

While one could in theory abandon tasty vegetables and only eat turnips, it is a bit harder to source only local websites. If I'm desperate for the results of the Brisbane cockroach races, then substituting a visit to a local website about snail racing in Norfolk just won't do. (Though you've got to love a sport where a six-year-old can be the world champion). There's always going to be a portion of our internet use where improving our last-mile bandwidth won't make any difference at all.