Monday, 12 January 2015

Killer Gigabit Apps - and why 1,259 experts are wrong

Sandy Lindsay, Master of Balliol College Oxford (1924-49), was once locked in debate with the fellows (professors) at the college on a contentious issue. It came to a final vote, in which the fellows, to a man, voted against the Master. He scowled around the room, saying “Gentlemen, we appear to have reached an impasse.”

In this post I’m going to take a similarly hubristic approach, by disagreeing with 1,259 experts. The 1,259 experts are cited in a recent report from the Pew Research Center, Killer Apps in the Gigabit Age. The Pew Research Center is a US non-partisan body which publishes much valuable material on media and the internet (among other topics). I’ve frequently cited their work. This report too is full of interesting ideas – my main problem with it is its title, for reasons I’ll come on to.

For the report Pew took responses from 1,464 experts, of whom 1,259 said they believed major new applications would capitalise on a significant rise in US bandwidth in the years ahead – the Gigabit Age of the title.

Pew also asked the experts what those applications might be – and here’s where it gets interesting. The experts had many, many responses – Pew needs almost 50 pages just to summarise them. But almost none of the proposed applications need gigabit speeds, or anything close.

To take one example, telepresence is a recurring theme in the responses. This may or may not become widespread in the future – but the key point is that it does not require a gigabit. Even professional telepresence systems with a screen down the middle of the conference table, seating six at your end and another six in Timbuktu (or wherever your counterparts are), require just 18 Mbps according to Cisco and Polycom, who make such systems. So if you decide to chop your dining table in two and install multiple hi-def screens so you can have permanent telepresence with your Auntie Ethel, bandwidth will be the least of your worries.

Virtual reality is also oft mentioned in Pew's report. Oculus Rift is the closest we have to usable VR. It's at an advanced prototype stage, and is already impressive. The official verdict of this 90-year-old tester (taking a virtual tour of Tuscany) is 'holy mackerel!'

I haven't been able to track down official views on the bandwidth required for Oculus Rift, but the displays are 1,000 x 1,000 pixels per eye. In combination that's about a quarter of the resolution of a 4K TV (with similar frame rates). Given that 4K requires 16 Mbps, this suggests that VR may actually be a relatively low bandwidth application.
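For the sceptical, here's the back-of-envelope sum in a few lines of Python. The flat scaling of bitrate with pixel count is my own crude assumption – it ignores codec and frame-rate differences – but it's enough for an order-of-magnitude estimate:

```python
# Back-of-envelope: scale 4K's bitrate by the ratio of pixel counts.
# Crude assumption: similar codec efficiency and frame rate for both.

vr_pixels = 2 * 1000 * 1000          # two eyes, 1,000 x 1,000 each
uhd_pixels = 3840 * 2160             # a 4K ("UHD") TV panel
ratio = vr_pixels / uhd_pixels       # roughly a quarter

uhd_bitrate_mbps = 16                # the 4K figure cited above
vr_estimate_mbps = uhd_bitrate_mbps * ratio

print(f"VR/4K pixel ratio: {ratio:.2f}")            # 0.24
print(f"Implied VR bitrate: {vr_estimate_mbps:.1f} Mbps")  # 3.9 Mbps
```

On this rough sum, first-generation VR would sit comfortably within an ordinary DSL line.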

Some of the experts mentioned holographic displays. Bandwidth for these? Who knows. We'll put them in the 'maybe' category.

A number of the experts mentioned e-health, including monitoring vital signs, remote consultation and so on. Again, these are not high speed apps – they require kilobits or a few megabits at most. Several of the respondents cited that old chestnut, remote surgery. Does anyone seriously think this is enabled by improved home bandwidth?

Wearable computing, the internet of things, life logging and a wide array of other possibilities were mentioned in the report – but again, there is no reason to expect these to need gigabit speeds, or anything close.

So the real story here is not that there's a cornucopia of apps that require gigabits. Rather, it is that a respected research institute could ask over a thousand experts and still not find a single clear case of an application requiring gigabit speeds. Change the title to 'Lack of Killer Apps for a Gigabit Age', and the Pew report is spot on.

Wednesday, 28 May 2014

Do we need the Delphic Oracle to make sensible telecoms investments?

The Delphic Oracle was the leading seer of ancient Greece. This was reliably established by King Croesus (as in 'rich as'), who had his messengers ask a sample of seers at a pre-agreed time what he was doing at that very moment. The Delphic Oracle correctly said he was making lamb and tortoise stew.

However, the Oracle's statements were rarely this unambiguous – they tended to be a bit more, well, Delphic. For instance, asked about a prospective military expedition, she replied: "You will go you will return never in war will you perish" – place punctuation at your own risk.

It is sometimes argued that, absent reliable seers, we have to invest in superfast broadband because of the unknown unknowns - the applications that are surely coming, but which we just lack the foresight to predict today.

There are many problems with this argument, but one of them is that its proponents tend to underestimate our foresight. Here for instance is the view of my old friends the FTTH Council on how little we knew in 2000 about the drivers of demand for today's broadband:

Just how accurate is the claim that these things were unforeseen in 2000?

Videoconferencing with Skype
Skype wasn't founded until 2003, but video calls over the internet have been long discussed - at least since 1995, when Stewart Loken of the Berkeley Lab said "internet videoconferencing is about to become commonplace". Obviously to know what bandwidth you might need, you don't need to know the name of the company that's going to be most successful, you just need to know what the application is, so the fact that Skype didn't exist in 2000 is neither here nor there.

HD-TVs with 42" and more in 3D
The first formal HDTV research programme began in 1970. Consumer sets went on sale in the US in 1998, and some of them had 55" displays. 3D TV was trialled as early as 1994.

Facebook
We certainly didn't know about Facebook in 2000 – it wasn't founded until 2004. (Though you can discuss with the Winklevoss twins exactly when it was conceived). However, social media had been around for a long time – GeoCities, an early example, was founded in 1994. And again, we don't need to know the name of the provider to know the necessary bandwidth. Facebook uses text, pictures and a bit of video – all well understood as internet media in 2000.

Online shops
Amazon was founded in 1994. 'nuff said.

Google
By 2000 Google was already available in 10 languages (and had hired its first chef a year prior).

Digital Photography
The first consumer digital cameras were released in 1990 (the same year as Photoshop). Webshots, founded in 1999, was one of the first web-based photo sharing sites, but consumers had been uploading photos to BBSs (not always savoury ones) for some years before that.

iPad and Smartphones
The Palm VII, one of the first PDAs with wireless capability for internet access, shipped in 1999. I'll give the FTTH Council the iPad, which wasn't widely anticipated. Of course, it doesn't need particularly high bandwidth (though it has driven more traffic by extending hours of internet use in the home).

So, of the Council's seven things "we did not know in 2000" it turns out we did know 6½ of them. Their view of our ignorance is a bit ... ignorant.

The vast majority of things we do with the internet today were in fact anticipated in 2000, at least in broad brush strokes. That's why it's particularly problematic for FTTH fans that there are (in their own words) 'no really compelling applications yet' for FTTH. In 2000 we knew (roughly) what we would do with broadband speeds. In 2014 we have no real idea what we might do with superfast.

Thursday, 15 May 2014

The killer app for FTTH? Piracy

Fibre enthusiasts like to point out that those with very high speed connections have higher usage of the internet. This is certainly true, though whether this is because heavier users trade up to superfast broadband, or because superfast actually changes behaviour is a vexed question.

However, a more intriguing issue is what makes up that extra usage. Markets like Hong Kong and Korea have very fast broadband, and high per-line traffic. When you ask insiders in those markets what all that traffic is, they often look a bit shifty and then whisper 'piracy'. Until today I hadn't seen any hard data to back up this assertion.

However, I've just come across a report from COMBO, an EU-funded project looking at fixed-mobile convergence. (It has blue-chip participants – France Telecom, Deutsche Telekom, Telefonica, Alcatel-Lucent, Ericsson and several others). On page 104 of the report is the following data for traffic of FTTH and ADSL customers of France Telecom in October 2013:

Upstream and downstream traffic combined.

As you can see, FTTH users have roughly double the usage of ADSL customers. However, 58% of the additional traffic is down to P2P. Peer-to-peer has legitimate uses, but it's very largely used for piracy. It's what enables BitTorrent, for example (and is beloved of the copyright-sceptics of The Pirate Bay).

FTTH customers have 4.2x the P2P traffic of ADSL customers, mostly because they upload 7.5x more than ADSL customers. Indeed, for every pirate-byte they download, FTTH users are uploading 2.6. In the words of the report, "this allows us to conclude that some FTTH customers are becoming P2P servers".
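As a quick sanity check, the three ratios quoted above hang together arithmetically. Taking ADSL P2P upload as one unit (my own framing, and assuming the 7.5x figure refers to P2P upload specifically), a few lines of Python show what they jointly imply about ADSL users:

```python
# Check the quoted ratios are mutually consistent, and derive the
# implied ADSL upload/download ratio for P2P traffic.
# Assumption: the 7.5x upload figure refers to P2P upload specifically.

total_ratio = 4.2       # FTTH P2P traffic / ADSL P2P traffic
upload_ratio = 7.5      # FTTH P2P upload / ADSL P2P upload
ftth_up_per_down = 2.6  # FTTH uploads 2.6 bytes per byte downloaded

adsl_up = 1.0                                    # take ADSL upload as 1 unit
ftth_up = upload_ratio * adsl_up                 # 7.5 units
ftth_down = ftth_up / ftth_up_per_down           # ~2.9 units
adsl_total = (ftth_up + ftth_down) / total_ratio # ~2.5 units
adsl_down = adsl_total - adsl_up                 # ~1.5 units

print(f"Implied ADSL up/down ratio: {adsl_up / adsl_down:.2f}")  # 0.68
```

In other words, the figures imply that even ADSL P2P users upload roughly 0.68 bytes for every byte they download – it's the fibre users who tip over into being net seeders.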

Superfast advocates claim significant externalities (societal and economic impacts) for faster broadband. But externalities can be negative as well as positive. See the rigging on the rather fine pirate ship above? Fibre-optic cable, every strand.

Friday, 2 May 2014

Why websites are like turnips

We are sometimes told to eat locally - to consume food that is well suited to our domestic climate and is grown nearby. (For those of us in the UK, this implies a diet heavy in turnips, one of our relatively few native vegetables). By eating locally, we can reduce food-miles and our carbon footprint.

While perhaps less threatening to the planet, bit-miles can be damaging too, harming your surfing experience.

In my previous post I noted that increasing the speed of your home broadband connection doesn't necessarily lead to an improved experience. One scenario where this can happen is when you are accessing a remote service, such as a website hosted in a remote location. While the transfer of data between your computer and the server may be constrained by your access bandwidth, latency (the time taken for a packet to travel between the two) can make an enormous difference. Moreover, the more hops the packets are taking between your computer and the server, the more opportunities for them to hit congestion somewhere along the way.

It's easy to test the impact of this. Speedtest.net allows you to run a test of transfer rates between your computer and one of their servers. By default, they automatically select a server near you, reducing the likelihood that transfer rates are constrained by factors other than your own access link. (There's a lot of subtlety in speed tests though - see this for a detailed discussion of how they work and their limits).

Here's the result of such a test from my computer:

This test used Speedtest's defaults, and it has selected a server in Maidenhead, about 50 miles away. As a result, my ping (round trip latency) is 31 milliseconds. In terms of bandwidth, I'm getting 18 Mbps, which sounds about right - I'm on DSL, relatively close to the exchange.

Now here's a test where I've overridden the defaults, and used a server in LA:

Ping has quintupled to 162 ms (though before we get too outraged, this still means that the packets were averaging 1/3 the speed of light, given a 10,800 mile round trip). The speed of connection has dropped to 11.4 Mbps. This certainly suggests that the bandwidth of the last mile - the access link to my house - is not the key constraint, since we know that's capable of at least 18 Mbps.
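If you want to check the speed-of-light claim yourself, the sum is short. Here it is in Python, using the 10,800-mile round trip and 162 ms ping from above:

```python
# How fast were the packets to LA actually travelling, as a fraction of c?

round_trip_miles = 10_800
round_trip_m = round_trip_miles * 1_609.344  # miles to metres
ping_s = 0.162                               # 162 ms round-trip time

effective_speed = round_trip_m / ping_s      # metres per second
c = 299_792_458                              # speed of light in vacuum, m/s

print(f"Effective speed: {effective_speed / c:.2f} c")  # 0.36 c
```

Light in fibre travels at about two-thirds of c, so roughly half the round trip is physics and the rest is routers, queues and indirect paths – not a bad showing, and nothing a fatter last mile can fix.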

Of course, there are places more remote from my UK home than LA. Here's what happens when we run the test on a Sydney server:

Ping has doubled again, and the effective bandwidth has dropped to 8 Mbps. By visiting a remote server, I've 'lost' 10 Mbps of the capability of my line. Put another way, I'd likely get just as good a performance from the Sydney server if I had an 8 Mbps line as opposed to my actual 18 Mbps line.

While one could in theory abandon tasty vegetables and only eat turnips, it is a bit harder to only source local websites. If I'm desperate for the results of the Brisbane cockroach races, then substituting a visit to a local website about snail racing in Norfolk just won't do. (Though you've got to love a sport where a 6 year old can be the world champion). There's always going to be a portion of our internet use where improving our last mile bandwidth won't make any difference at all.

Thursday, 24 April 2014

Tilting at Windmills - Latest scores

Sometimes setting yourself entirely arbitrary objectives can be useful. There's good evidence it helps marathon runners improve their times, for instance. However, sometimes it can be entirely quixotic.

The pursuit of best national broadband speed is in the latter category. While there are perhaps reasons to worry if you're at the bottom of the league tables for broadband (and here's Phil Dobbie doing exactly that for Australia this week), there's really no evidence at all that you benefit from being at the top.

That hasn't stopped countries such as the UK and Korea setting what we might call 'league table targets'. The UK for example has a goal to 'have the best superfast broadband network in Europe by 2015'. This is perilously close to 'if you overinvest in broadband, we'll overinvest more', though happily in practice the UK is being judicious in its superfast broadband spending.

Of course the people who most love league table targets are equipment vendors, who benefit greatly from that overinvestment. As I've noted before, the FTTH Council love them. They publish annual data on who has rolled out most FTTH (and FTTB – fibre to the building).

Worrying about FTTH league tables is even more perverse than worrying about general broadband league tables. FTTH is a long way removed from actual consumer or societal benefit, in a chain like this:

If any of the steps in this chain are weak, then the societal return on the investment in FTTH will be less. For example, Martin Geddes argues that increased speeds may not lead to improved technical performance (in the sense of reliable and predictable packet delivery), and may in fact sometimes degrade it.

My focus in this post is narrowly on the second step in the chain - does increased FTTH adoption lead to increased speeds? Of course FTTH will deliver greater speeds in the last mile - but this isn't necessarily the weakest link in any particular flow of traffic. Problems can occur in many places - the user's device, their wifi network, congestion in the backhaul, congestion in peering and transit links, problems at the content server and so on. If one of these is the binding constraint, then more last-mile bandwidth helps very little. And of course even if the underlying connection is FTTH, the consumer might choose a slower (artificially constrained) product if they aren't willing to pay the premium for higher speed.

One way to test the extent of this issue is to compare levels of fibre adoption to measured broadband speeds from the content server to the end user - if FTTH adoption makes a big difference, we might expect to see a good correlation between these two metrics. Akamai, a content delivery network, publishes national data on just such speeds. While they're not perfect for our purposes (a certain amount of mobile network traffic is likely mixed in, for example), they are the best available. Here are the results for Europe:

Sources: FTTH Council (end 2013), Akamai (Q4 2013)

As you can see, not much of a correlation there (R² = 0.01), suggesting that FTTH adoption isn't a magic bullet for experienced speed.
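For anyone who wants to replicate the test with their own country data, the calculation is a few lines of Python. The numbers below are made-up illustrative figures, not the actual FTTH Council / Akamai dataset:

```python
# Pearson correlation between FTTH penetration and measured speed,
# using HYPOTHETICAL illustrative data (not the real dataset).
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ftth_penetration = [0.5, 2.0, 5.0, 10.0, 13.0, 20.0]  # % of households (hypothetical)
akamai_speed = [7.1, 5.9, 6.8, 6.2, 7.4, 6.5]         # avg Mbps (hypothetical)

r = pearson(ftth_penetration, akamai_speed)
print(f"R^2 = {r ** 2:.2f}")
```

An R² near zero, as in the chart above, means FTTH penetration explains almost none of the country-to-country variation in measured speed.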

Some of the countries which have been least interested in FTTH to date, like the UK (firmly in the FTTH Council's sin bin), are actually doing just fine on measured broadband speed. Conversely, Portugal, which has spent billions to secure 67% coverage for FTTH and 13% penetration, isn't seeing great speeds as a result.

Of course I'm not saying that there's no linkage between FTTH and national broadband speeds (and South Korea with lots of FTTH/B tops Akamai's speed tables), but the linkage is rather weaker than might be assumed.

The fundamental point here is that to begin with infrastructure targets is to start at precisely the wrong end of the problem. The correct starting point is 'what user experiences do we want to enable?' and to work backwards from this to the required bandwidth, latency, mobility and so on. Then from that you can think sensibly about infrastructure needs.

Otherwise we're arguing about who's tilted at most windmills.

[PS: Apologies for the extended absence, I've been busy with clients, not least on figuring out bandwidth requirements - a post for another day]

Thursday, 16 May 2013

Rising mobile offload to Wifi - saviour of the case for FTTH, or warning sign?

The ever-interesting Benoît Felten has a post up about WiFi offload, with some stats from Mobidia. These figures show that between two-thirds and three-quarters of traffic to mobile devices travels by WiFi, not macrocellular.

His conclusion is:

   Take that, “we’ll only need mobile networks in the future” posse…

And in that he's surely right. No matter how exciting the potential of mobile data, it's simply not, at a nationwide level, a substitute for fixed broadband. If the traffic off-loaded to WiFi and the fixed network today were to be 'on-loaded' on to the cellular network, the cellular network would fall over. (This is different of course from decisions at an individual consumer level - some households may decide they can get by with mobile broadband and no fixed broadband).

However, while the future of fixed broadband seems secure, I think usage of mobile devices has a more subtle message regarding the prospects for superfast broadband. Sandvine have recently released their fascinating Global Internet Phenomena Report. One finding is that in North America, 25% of streaming audio and video traffic is delivered to a mobile device in the home - up, presumably, from 0% just a few years ago in the era before smartphones and tablets. This is roughly consistent with figures from the BBC regarding usage of their iPlayer on-demand TV service - 30% of requests for programmes are coming from mobiles and tablets (up from 15% a year ago).

What's this got to do with FTTH? Well, one of the supposed drivers of the need for FTTH is 4K TVs – here's NBN Co getting excited about 85" TV screens. However, if usage of on-demand TV is already shifting to small, handheld devices, that suggests usage on mega-TVs may not be quite what the enthusiasts hope. Mobile devices offer the convenience of a personal device that they can watch in any room of the house (or perhaps in bed - iPlayer's peak of requests is after 10pm). Consumers increasingly seem to be choosing this over watching the content on a bigger screen, be that a TV or a PC monitor.

This isn't an absolute, of course. Households may want the huge screen for the big film on a Saturday night. But for day-to-day use, convenience may trump resolution.

[A footnote : I suspect the Mobidia numbers may be somewhat too high. The data is drawn from the users of Mobidia's 'My Data Manager' app, tag-line: "Take control of your mobile data". I suspect this may not be a representative sample, since such users may be a little more inclined than average to ... err ... take control of their mobile data. They perhaps are more diligent than the average in offloading. However, there's no doubt that WiFi offload is very significant, and drives an ongoing need for widespread fixed broadband.]

Wednesday, 15 May 2013

Are there any serious analysts still gung-ho for fibre?

Once upon a time it was a lonely path being a fibre sceptic. These days I am in very good company indeed. Here's what leading international telecoms analysts have to say about FTTH:

"The disproportionately high spend on [FTTH] is highly problematic. It may come to be seen as inappropriate use of capital in the emerging competitive environment." [March 2012]

"The time required to roll out and install FTTH is as large a barrier as cost … There are very few plausible combinations of home services that will require over 100Mbps bandwidth by 2017." [April 2012]

"FTTH deployment costs are about five times greater than FTTC costs." [March 2013]

"In the current economic climate, it seems unnecessarily dogmatic to espouse ubiquitous FTTH, particularly when the broader ecosystem and regulatory support is out of step with market realities ... [A]lthough deeper penetration of fiber into the network is a must, universal FTTH is an impractical luxury that telcos cannot really afford. … It is difficult for a significant volume of users to justify paying a premium for higher speeds when the applications that they currently use function sufficiently well over high-speed DSL lines" [February 2013]

"The combination of sunk costs and highly uncertain demand (both in terms of take up and willingness to pay for ultra-fast broadband services) makes [FTTH] investments very risky … services that would make full use of the higher bandwidth of FTTH are not at present available" [Summer 2012]

"The initial focus of the European institutions and of national governments to date has been largely on deployment of fibre-based NGA – [FTTH] – largely to the exclusion of other high speed broadband capable infrastructure. This focus was arguably excessive … More recent statements by the European Commission suggest an increasing recognition of the need for a … strategy that acknowledges the potentially complementary role of other technologies." [September 2012]

"Operators’ interest in pay-as-you-grow strategies is also motivated by a dawning realisation of quite how hard – and expensive – rolling fiber right to the home is. In particular, the challenge of installing fibre in front gardens, buildings and individual homes has been “vastly underestimated”" [November 2012]

"Australia’s current FTTH-led model … is actually going very much against the global trend of operators using existing network assets to avoid the huge costs of FTTH in brown field sites." [March 2013]