Wednesday, 28 May 2014

Do we need the Delphic Oracle to make sensible telecoms investments?

The Delphic Oracle was the leading seer of ancient Greece. This was reliably established by King Croesus (as in 'rich as'), who had his messengers ask a sample of seers at a pre-agreed time what he was doing at that very moment. The Delphic Oracle correctly said he was making lamb and tortoise stew.

However, the Oracle's statements were rarely this unambiguous - they tended to be a bit more, well, Delphic. For instance, asked about a prospective military expedition, she replied: "You will go you will return never in war will you perish" - place punctuation at your own risk.

It is sometimes argued that, absent reliable seers, we have to invest in superfast broadband because of the unknown unknowns - the applications that are surely coming, but which we just lack the foresight to predict today.

There are many problems with this argument, but one of them is that its proponents tend to underestimate our foresight. Here, for instance, is the view of my old friends at the FTTH Council on how little we knew in 2000 about the drivers of demand for today's broadband:


Just how accurate is the claim that these things were unforeseen in 2000?

Videoconferencing with Skype
Skype wasn't founded until 2003, but video calls over the internet had long been discussed - at least since 1995, when Stewart Loken of the Berkeley Lab said "internet videoconferencing is about to become commonplace". Obviously, to know what bandwidth you might need, you don't need to know the name of the company that will be most successful; you just need to know what the application is. So the fact that Skype didn't exist in 2000 is neither here nor there.

HD-TVs with 42" and more in 3D
The first formal HDTV research programme began in 1970. Consumer sets went on sale in the US in 1998, and some of them had 55" displays. 3D TV was trialled as early as 1994.

Facebook
We certainly didn't know about Facebook in 2000 - it wasn't founded until 2004. (Though you can discuss with the Winklevoss twins exactly when it was conceived.) However, social media had been around for a long time - GeoCities, an early example, was founded in 1994. And again, we don't need to know the name of the provider to know the necessary bandwidth. Facebook uses text, pictures and a bit of video - all well understood as internet media in 2000.

Online shops
Amazon was founded in 1994. 'nuff said.

Google
By 2000 Google was already available in 10 languages (and had hired its first chef a year prior).

Digital Photography
The first consumer digital cameras were released in 1990 (the same year as Photoshop). Webshots, founded in 1999, was one of the first web-based photo sharing sites, but consumers had been uploading photos to BBSs (not always savoury ones) for some years before that.

iPad and Smartphones
The Palm VII, one of the first PDAs with wireless capability for internet access, shipped in 1999. I'll give the FTTH Council the iPad, which wasn't widely anticipated. Of course, it doesn't need particularly high bandwidth (though it has driven more traffic by extending hours of internet use in the home).


So, of the Council's seven things "we did not know in 2000", it turns out we did know 6½ of them. Their view of our ignorance is a bit ... ignorant.

The vast majority of things we do with the internet today were in fact anticipated in 2000, at least in broad brush strokes. That's why it's particularly problematic for FTTH fans that there are (in their own words) 'no really compelling applications yet' for FTTH. In 2000 we knew (roughly) what we would do with broadband speeds. In 2014 we have no real idea what we might do with superfast.

Thursday, 15 May 2014

The killer app for FTTH? Piracy

Fibre enthusiasts like to point out that those with very high speed connections have higher usage of the internet. This is certainly true, though whether this is because heavier users trade up to superfast broadband, or because superfast actually changes behaviour is a vexed question.

However, a more intriguing issue is what makes up that extra usage. Markets like Hong Kong and Korea have very fast broadband, and high per-line traffic. When you ask insiders in those markets what all that traffic is, they often look a bit shifty and then whisper 'piracy'. Until today I hadn't seen any hard data to back up this assertion.

However, I've just come across a report from COMBO, an EU-funded project looking at fixed-mobile convergence. (It has blue-chip participants - France Telecom, Deutsche Telekom, Telefonica, Alcatel-Lucent, Ericsson and several others.) On page 104 of the report is the following data for traffic of FTTH and ADSL customers of France Telecom in October 2013:

Upstream and downstream traffic combined.

As you can see, FTTH users have roughly double the usage of ADSL customers. However, 58% of the additional traffic is down to P2P. Peer-to-peer is a family of protocols with legitimate uses, but it is very largely used for piracy. It's what enables BitTorrent, for example (and is beloved of the copyright-sceptics of The Pirate Bay).

FTTH customers have 4.2x the P2P traffic of ADSL customers, mostly because they upload 7.5x more than ADSL customers. Indeed, for every pirate-byte they download, FTTH users are uploading 2.6. In the words of the report, "this allows us to conclude that some FTTH customers are becoming P2P servers".
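Just to show how those ratios hang together, here's a small sketch. The per-line volumes in it are placeholders chosen purely for illustration (the real figures are in the COMBO chart above); only the ratios matter:

    # How the headline P2P ratios relate to one another.
    # The GB/month volumes below are illustrative placeholders, not COMBO data.
    adsl = {"p2p_down": 5.1, "p2p_up": 3.47}
    ftth = {"p2p_down": 10.0, "p2p_up": 26.0}

    total_ratio = sum(ftth.values()) / sum(adsl.values())
    upload_ratio = ftth["p2p_up"] / adsl["p2p_up"]
    up_per_down = ftth["p2p_up"] / ftth["p2p_down"]
    print(f"{total_ratio:.1f}x the P2P traffic, {upload_ratio:.1f}x the uploads, "
          f"{up_per_down:.1f} bytes uploaded per byte downloaded")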

Superfast advocates claim significant externalities (societal and economic impacts) for faster broadband. But externalities can be negative as well as positive. See the rigging on the rather fine pirate ship above? Fibre-optic cable, every strand.

Friday, 2 May 2014

Why websites are like turnips

We are sometimes told to eat locally - to consume food that is well suited to our domestic climate and is grown nearby. (For those of us in the UK, this implies a diet heavy in turnips, one of our relatively few native vegetables.) By eating locally, we can reduce food-miles and our carbon footprint.

While perhaps less threatening to the planet, bit-miles can be damaging too, harming your surfing experience.

In my previous post I noted that increasing the speed of your home broadband connection doesn't necessarily lead to an improved experience. One scenario where this happens is when you are accessing a remote service, such as a website hosted far away. While the transfer of data between your computer and the server may be constrained by your access bandwidth, latency (the time taken for a packet to travel between the two) can make an enormous difference. Moreover, the more hops the packets take between your computer and the server, the more opportunities there are for them to hit congestion somewhere along the way.
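To see why latency bites even when the access link is quick, remember that a single TCP connection can carry at most one receive window of data per round trip. Here's a minimal sketch of that bound; the 64 KB window is an assumption for illustration (real stacks scale their windows and speed tests open several connections in parallel, so measured speeds come out higher, but the direction of the effect is the same):

    # Upper bound on single-TCP-connection throughput: one window per round trip.
    def max_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
        return (window_bytes * 8) / (rtt_ms / 1000) / 1e6

    WINDOW = 64 * 1024   # assumed 64 KB receive window, for illustration only
    for rtt in (31, 162, 320):   # roughly the UK, LA and Sydney pings measured below
        print(f"RTT {rtt:3d} ms -> at most {max_throughput_mbps(WINDOW, rtt):5.1f} Mbps")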

It's easy to test the impact of this. Speedtest.net allows you to run a test of transfer rates between your computer and one of their servers. By default, they automatically select a server near you, reducing the likelihood that transfer rates are constrained by factors other than your own access link. (There's a lot of subtlety in speed tests though - see this for a detailed discussion of how they work and their limits).
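If you'd rather script the test than use the website, here's a rough sketch using the third-party speedtest-cli Python package (pip install speedtest-cli) - treat it as illustrative rather than definitive:

    # A scriptable version of the tests below, using the speedtest-cli package.
    import speedtest

    st = speedtest.Speedtest()
    st.get_best_server()   # by default, pick a nearby server - as the website does
    st.download()
    st.upload()

    r = st.results
    print(f"Ping: {r.ping:.0f} ms, down: {r.download / 1e6:.1f} Mbps, "
          f"up: {r.upload / 1e6:.1f} Mbps")
    # To test against a specific server instead, call st.get_servers([server_id])
    # before st.get_best_server().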

Here's the result of such a test from my computer:



This test used Speedtest's defaults, and it has selected a server in Maidenhead, about 50 miles away. As a result, my ping (round trip latency) is 31 milliseconds. In terms of bandwidth, I'm getting 18 Mbps, which sounds about right - I'm on DSL, relatively close to the exchange.

Now here's a test where I've overridden the defaults, and used a server in LA:


Ping has quintupled to 162 ms (though before we get too outraged, this still means that the packets were averaging 1/3 the speed of light, given a 10,800 mile round trip). The speed of connection has dropped to 11.4 Mbps. This certainly suggests that the bandwidth of the last mile - the access link to my house - is not the key constraint, since we know that's capable of at least 18 Mbps.
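For anyone who wants to check that arithmetic, a quick sketch (the round-trip distance is the rough figure quoted above):

    # How fast were the packets travelling, as a fraction of the speed of light?
    round_trip_miles = 10_800       # rough UK-to-LA round trip, as above
    rtt_seconds = 0.162             # the measured ping
    c_miles_per_s = 186_282         # speed of light in a vacuum

    packet_speed = round_trip_miles / rtt_seconds
    print(f"{packet_speed:,.0f} miles/s, i.e. {packet_speed / c_miles_per_s:.0%} of c")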

Of course, there are places more remote from my UK home than LA. Here's what happens when we run the test on a Sydney server:


Ping has doubled again, and the effective bandwidth has dropped to 8 Mbps. By visiting a remote server, I've 'lost' 10 Mbps of the capability of my line. Put another way, I'd likely get just as good a performance from the Sydney server if I had an 8 Mbps line as opposed to my actual 18 Mbps line.

While one could in theory abandon tasty vegetables and only eat turnips, it is a bit harder to source only local websites. If I'm desperate for the results of the Brisbane cockroach races, then substituting a visit to a local website about snail racing in Norfolk just won't do. (Though you've got to love a sport where a 6-year-old can be the world champion.) There's always going to be a portion of our internet use where improving our last-mile bandwidth won't make any difference at all.


Thursday, 24 April 2014

Tilting at Windmills - Latest scores

Sometimes setting yourself entirely arbitrary objectives can be useful. There's good evidence it helps marathon runners improve their times, for instance. However, sometimes it can be entirely quixotic.

The pursuit of the best national broadband speed is in the latter category. While there are perhaps reasons to worry if you're at the bottom of the league tables for broadband (and here's Phil Dobbie doing exactly that for Australia this week), there's really no evidence at all that you benefit from being at the top.

That hasn't stopped countries such as the UK and Korea setting what we might call 'league table targets'. The UK for example has a goal to 'have the best superfast broadband network in Europe by 2015'. This is perilously close to 'if you overinvest in broadband, we'll overinvest more', though happily in practice the UK is being judicious in its superfast broadband investment.

Of course the people who most love league table targets are equipment vendors, who benefit greatly from that overinvestment. As I've noted before, the FTTH Council love them. They publish annual data on who has rolled out the most FTTH (and FTTB - fibre to the building).

Worrying about FTTH league tables is even more perverse than worrying about general broadband league tables. FTTH is a long way removed from actual consumer or societal benefit, in a chain like this:


If any of the steps in this chain are weak, then the societal return on the investment in FTTH will be lower. For example, Martin Geddes argues that increased speeds may not lead to improved technical performance (in the sense of reliable and predictable packet delivery), and may in fact sometimes degrade it.

My focus in this post is narrowly on the second step in the chain - does increased FTTH adoption lead to increased speeds? Of course FTTH will deliver greater speeds in the last mile - but this isn't necessarily the weakest link in any particular flow of traffic. Problems can occur in many places - the user's device, their WiFi network, congestion in the backhaul, congestion in peering and transit links, problems at the content server and so on. If one of these is the binding constraint, then more last-mile bandwidth helps very little. And of course even if the underlying connection is FTTH, the consumer might choose a slower (artificially constrained) product if they aren't willing to pay the premium for higher speed.
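To make the 'binding constraint' point concrete, here's a toy sketch; the capacities are invented purely for illustration, not measurements from any real network:

    # End-to-end throughput is limited by the slowest element in the path.
    path_mbps = {
        "last mile (FTTH)": 1000,      # illustrative figures only
        "home WiFi": 40,
        "ISP backhaul share": 60,
        "peering/transit": 25,
        "content server": 30,
    }
    bottleneck = min(path_mbps, key=path_mbps.get)
    print(f"Effective speed ~{path_mbps[bottleneck]} Mbps, limited by: {bottleneck}")
    # Upgrading the last mile from 1,000 to 10,000 Mbps changes nothing here.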

One way to test the extent of this issue is to compare levels of fibre adoption to measured broadband speeds from the content server to the end user - if FTTH adoption makes a big difference, we might expect to see a good correlation between these two metrics. Akamai, a content delivery network, publishes national data on just such speeds. While they're not perfect for our purposes (a certain amount of mobile network traffic is likely mixed in, for example), they are the best available. Here are the results for Europe:

Sources: FTTH Council (end 2013), Akamai (Q4 2013)

As you can see, not much of a correlation there (R² = 0.01), suggesting that FTTH adoption isn't a magic bullet for experienced speed.
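For anyone who wants to redo that calculation, here's a minimal sketch. The five country pairs below are placeholders, not the actual FTTH Council and Akamai numbers behind the chart (with the real data, R² comes out at about 0.01):

    # Correlation between FTTH/B penetration and measured average speed.
    import numpy as np

    ftth_penetration = np.array([2, 13, 25, 40, 5])          # % of households, placeholders
    avg_speed_mbps   = np.array([9.1, 7.0, 9.8, 8.5, 9.4])   # Akamai-style averages, placeholders

    r = np.corrcoef(ftth_penetration, avg_speed_mbps)[0, 1]
    print(f"R^2 = {r ** 2:.2f}")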

Some of the countries which have been least interested in FTTH to date, like the UK (firmly in the FTTH Council's sin bin), are actually doing just fine on measured broadband speed. Conversely, Portugal, which has spent billions to secure 67% coverage for FTTH and 13% penetration, isn't seeing great speeds as a result.

Of course I'm not saying that there's no linkage between FTTH and national broadband speeds (and South Korea with lots of FTTH/B tops Akamai's speed tables), but the linkage is rather weaker than might be assumed.

The fundamental point here is that to begin with infrastructure targets is to start at precisely the wrong end of the problem. The correct starting point is 'what user experiences do we want to enable?' and to work backwards from this to the required bandwidth, latency, mobility and so on. Then from that you can think sensibly about infrastructure needs.
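That working-backwards exercise needn't be complicated. A toy sketch, with per-application rates that are my own rough assumptions rather than anything from this post:

    # Work backwards from the experiences we want to enable to a required access speed.
    household_use_mbps = {
        "4K video stream": 25,           # rough assumption, per typical streaming guidance
        "HD video stream": 5,
        "video call": 3,
        "web browsing / cloud backup": 10,
    }
    headroom = 1.25   # margin for bursts and overheads (assumption)
    required = sum(household_use_mbps.values()) * headroom
    print(f"Required downstream: ~{required:.0f} Mbps")   # ~54 Mbps in this example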

Otherwise we're arguing about who's tilted at most windmills.



[PS: Apologies for the extended absence, I've been busy with clients, not least on figuring out bandwidth requirements - a post for another day]


Thursday, 16 May 2013

Rising mobile offload to Wifi - saviour of the case for FTTH, or warning sign?

The ever-interesting Benoît Felten has a post up about WiFi offload, with some stats from Mobidia. These figures show that between two-thirds and three-quarters of traffic to mobile devices travels by WiFi, not macrocellular.

His conclusion is:

   Take that, “we’ll only need mobile networks in the future” posse…

And in that he's surely right. No matter how exciting the potential of mobile data, it's simply not, at a nationwide level, a substitute for fixed broadband. If the traffic off-loaded to WiFi and the fixed network today were to be 'on-loaded' onto the cellular network, the cellular network would fall over. (This is different of course from decisions at an individual consumer level - some households may decide they can get by with mobile broadband and no fixed broadband.)

However, while the future of fixed broadband seems secure, I think usage of mobile devices has a more subtle message regarding the prospects for superfast broadband. Sandvine have recently released their fascinating Global Internet Phenomena Report. One finding is that in North America, 25% of streaming audio and video traffic is delivered to a mobile device in the home - up, presumably, from 0% just a few years ago in the era before smartphones and tablets. This is roughly consistent with figures from the BBC regarding usage of their iPlayer on-demand TV service - 30% of requests for programmes are coming from mobiles and tablets (up from 15% a year ago).

What's this got to do with FTTH? Well, one of the supposed drivers of the need for FTTH is 4K TVs - here's NBN Co getting excited about 85" TV screens. However, if usage of on-demand TV is already shifting to small, handheld devices, that suggests usage on mega-TVs may not be quite what the enthusiasts hope. Mobile devices offer the convenience of a personal screen that can be watched in any room of the house (or perhaps in bed - iPlayer's peak of requests is after 10pm). Consumers increasingly seem to be choosing this over watching the content on a bigger screen, be that a TV or a PC monitor.

This isn't an absolute, of course. Households may want the huge screen for the big film on a Saturday night. But for day-to-day use, convenience may trump resolution.


[A footnote : I suspect the Mobidia numbers may be somewhat too high. The data is drawn from the users of Mobidia's 'My Data Manager' app, tag-line: "Take control of your mobile data". I suspect this may not be a representative sample, since such users may be a little more inclined than average to ... err ... take control of their mobile data. They perhaps are more diligent than the average in offloading. However, there's no doubt that WiFi offload is very significant, and drives an ongoing need for widespread fixed broadband.]

Wednesday, 15 May 2013

Are there any serious analysts still gung-ho for fibre?

Once upon a time it was a lonely path being a fibre sceptic. These days I am in very good company indeed. Here's what leading international telecoms analysts have to say about FTTH:




"The disproportionately high spend on [FTTH] is highly problematic. It may come to be seen as inappropriate use of capital in the emerging competitive environment." [March 2012]

"The time required to roll out and install FTTH is as large a barrier as cost … There are very few plausible combinations of home services that will require over 100Mbps bandwidth by 2017." [April 2012]

"FTTH deployment costs are about five times greater than FTTC costs." [March 2013]




"In the current economic climate, it seems unnecessarily dogmatic to espouse ubiquitous FTTH, particularly when the broader ecosystem and regulatory support is out of step with market realities ... [A]lthough deeper penetration of fiber into the network is a must, universal FTTH is an impractical luxury that telcos cannot really afford. … It is difficult for a significant volume of users to justify paying a premium for higher speeds when the applications that they currently use function sufficiently well over high-speed DSL lines" [February 2013]



"The combination of sunk costs and highly uncertain demand (both in terms of take up and willingness to pay for ultra-fast broadband services) makes [FTTH] investments very risky … services that would make full use of the higher bandwidth of FTTH are not at present available" [Summer 2012]



"The initial focus of the European institutions and of national governments to date has been largely on deployment of fibre-based NGA – [FTTH] – largely to the exclusion of other high speed broadband capable infrastructure. This focus was arguably excessive … More recent statements by the European Commission suggest an increasing recognition of the need for a … strategy that acknowledges the potentially complementary role of other technologies." [September 2012]




"Operators’ interest in pay-as-you-grow strategies is also motivated by a dawning realisation of quite how hard – and expensive – rolling fiber right to the home is. In particular, the challenge of installing fibre in front gardens, buildings and individual homes has been “vastly underestimated”" [November 2012]

"Australia’s current FTTH-led model … is actually going very much against the global trend of operators using existing network assets to avoid the huge costs of FTTH in brown field sites." [March 2013]

Tuesday, 14 May 2013

Japan - FTTP with tumbleweeds

Japan is often cited as one of the world leaders for superfast broadband. It has been rolling out fibre-to-the-premises for more than a decade, and now has one of the fastest fixed broadband networks in the world, second only to South Korea in terms of measured speed, with average speeds clocking in at 10.8 Mbps. (The speed of the last mile will be appreciably higher than this).

Of course, we all know that by itself FTTP is just glass in the ground. What matters is what you do with it. How much traffic is travelling across this network? Happily, Japan is one of the countries that tracks and reports this, so we can take a look. [Health warning - what follows is in part based on Google Translate, so there is the possibility I've missed a critical footnote].

Here's Japan's traffic per fixed broadband line since 2004:



A few observations:

  • Bandwidth usage is growing, but not exponentially. The growth rate in the last year was 17%
  • This is despite the fact that there was major adoption of FTTP in this period - as a share of fixed broadband it rose from 36% at the start of 2007 to 67% today (data here)
  • Usage is low relative to the line capacity - 54 kbps vs the average speed of 10.8 Mbps measured by Akamai, representing a utilisation of 0.5% (though certainly it will periodically spike far higher than this for any given line) - a quick check of this figure follows after this list
  • It's also not the case that fibre delivered massive growth in per-line upload traffic - this peaked in late 2009, and has fallen almost 30% since then
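As a quick check of that utilisation figure (both inputs are just the numbers cited above):

    # Average per-line traffic as a share of Akamai's measured average speed.
    avg_traffic_kbps = 54        # per-line average from the Japanese traffic data above
    akamai_speed_mbps = 10.8     # average measured connection speed (Akamai)
    utilisation = (avg_traffic_kbps / 1000) / akamai_speed_mbps
    print(f"Average utilisation: {utilisation:.1%}")   # ~0.5%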
It seems fair to call this a little underwhelming. One factor may be that consumers are choosing to use the LTE mobile network rather than fixed connections. Indeed, it seems some consumers are seeing LTE as a complete substitute. According to Informa:
"NTT East and NTT West have been forced to slash their FTTH prices for new subscribers by an eye-watering 34% from ¥5,460 (US$66.70) to ¥3,600 per month to try and re-ignite their subscriber growth and stop the outflow of subscribers to cheaper LTE mobile broadband services."
Be that as it may, given that other countries are spending billions to replicate Japan's superfast fibre-optic infrastructure, presumably Japan is streets ahead of typical usage on copper networks? Another country that publishes usage stats is Australia, where the current government is investing massively to build FTTP (though 96% of households are still on cable or DSL).

Here's how Japan and Australia's per line traffic compares:

Source: Australia fixed lines and traffic from ABS. Japan as above
Note: Japan units converted from average kbps to monthly total GB
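For reference, here's roughly how that conversion works - a minimal sketch assuming a 30-day month and decimal gigabytes:

    # Convert an average traffic rate in kbps into an approximate monthly total in GB.
    def kbps_to_gb_per_month(avg_kbps: float, days: int = 30) -> float:
        bits = avg_kbps * 1000 * days * 24 * 3600   # average rate -> total bits in a month
        return bits / 8 / 1e9                       # bits -> bytes -> decimal GB

    print(f"{kbps_to_gb_per_month(54):.1f} GB/month")   # Japan's 54 kbps average is ~17.5 GB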

As of the end of last year, Australia's per-line fixed broadband traffic was roughly 70% higher than that in Japan. Despite a network that is undoubtedly technically inferior to Japan's, Australia is seeing robust growth. Of course all traffic is not created equal, but it seems fair to guess that Australia is getting more value out of the fixed internet than Japan is. By contrast, Japan's very expensive investment in FTTP doesn't seem to have delivered that much.

Network infrastructure is only as valuable as the usage it enables. Is it time for Japan to start envying Australia?