After my earlier experiments with B2, I had an extremely interesting call with Backblaze about B2 features and performance.
Firstly, they have recently added a caching layer to speed up serving repeatedly requested files. This reduces the delay as the file is reassembled from Reed-Solomon slices. They also suggested that I do some new tests, as they thought I should be seeing faster speeds, even for first-access.
I ran some new tests, but I was still getting lacklustre download performance. Some streams downloaded at under 150 KiB/s, with the fastest reaching 1.6 MiB/s. This is much better than the last set of tests I ran, but still slower than Backblaze thought I should be getting (25-50 MiB/sec).
Speed tests - PlusNet UK Domestic Broadband, 40:20 Mbit/sec:
| Upload | Sequential: Min | Sequential: Max | Sequential: Average | Parallel: Min | Parallel: Max | Parallel: Average | Parallel: Total |
|---|---|---|---|---|---|---|---|
| 1.3 MiB/sec | 0.1 MiB/sec | 4.1 MiB/sec | 2.1 MiB/sec | 0.1 MiB/sec | 1.3 MiB/sec | 1.0 MiB/sec | 0.4 MiB/sec |
PlusNet are known for their traffic shaping, so I decided to try the same test from the cloud. In this case, an Amazon EC2 instance in Ireland.
Speed tests - EC2 t2.medium, EU-West-1:
| Upload | Sequential: Min | Sequential: Max | Sequential: Average | Parallel: Min | Parallel: Max | Parallel: Average | Parallel: Total |
|---|---|---|---|---|---|---|---|
| 3.3 MiB/sec | 1.6 MiB/sec | 5.3 MiB/sec | 2.9 MiB/sec | 1.2 MiB/sec | 1.9 MiB/sec | 1.5 MiB/sec | 4.6 MiB/sec |
That's a bit better, but the range from minimum to maximum still seems quite large. t2.medium instances have low-performance networking, but I would still expect them to be able to handle > 5 MiB/sec. This source benchmarks a t1.micro instance at around 60 Mbit/s, if I'm reading the charts correctly. So, suspecting the problem may be disk IO rather than the network, I added a mode to my tool which discards the download rather than writing it to disk:
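My tool itself isn't shown here, but the discard mode is simple to sketch in Python. This is a minimal illustration, not the actual implementation: the URL handling is generic, and a real B2 download would need the account's authorisation token in the request headers.

```python
import time
import urllib.request

def timed_discard_download(url, chunk_size=1 << 20):
    """Stream `url` in chunks, counting bytes but never touching disk.

    Returns the average throughput in MiB/sec, so the measurement
    reflects network speed rather than local disk IO.
    """
    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(url) as resp:
        while True:
            chunk = resp.read(chunk_size)
            if not chunk:
                break
            total += len(chunk)  # count the bytes, then drop them
    elapsed = time.monotonic() - start
    return (total / (1024 * 1024)) / elapsed
```

Because each chunk is discarded as soon as it is counted, a slow or contended disk can no longer mask the true download rate.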
Speed tests - EC2 t2.medium, EU-West-1, --discard:
| Upload | Sequential: Min | Sequential: Max | Sequential: Average | Parallel: Min | Parallel: Max | Parallel: Average | Parallel: Total |
|---|---|---|---|---|---|---|---|
| N/A | 3.1 MiB/sec | 6.0 MiB/sec | 4.2 MiB/sec | 1.3 MiB/sec | 1.4 MiB/sec | 1.3 MiB/sec | 4.8 MiB/sec |
A little better again, but not much. Finally, I tried a DigitalOcean droplet in San Francisco. This is as close to Backblaze as I could get.
Speed tests - Digital Ocean SFO1:
| Upload | Sequential: Min | Sequential: Max | Sequential: Average | Parallel: Min | Parallel: Max | Parallel: Average | Parallel: Total |
|---|---|---|---|---|---|---|---|
| 1.8 MiB/sec | 2.4 MiB/sec | 49 MiB/sec | 33.9 MiB/sec | 1.5 MiB/sec | 42 MiB/sec | 21.9 MiB/sec | 5.9 MiB/sec |
So, the speed is definitely there, I'm just not close enough.
Finally, although it feels a bit like a sledgehammer, I spun up an Amazon EC2 m4.xlarge instance (“high” network performance) in US-West-1:
```
$ b2 get -v -j1 --discard riscos-2012-11-01-RC6.zip riscos-2012-11-01-RC6.zip riscos-2012-11-01-RC6.zip riscos-2012-11-01-RC6.zip
riscos-2012-11-01-RC6.zip  98 MiB [==================] 2.1 MiB/sec 100%
riscos-2012-11-01-RC6.zip  98 MiB [==================]  51 MiB/sec 100%
riscos-2012-11-01-RC6.zip  98 MiB [==================] 2.1 MiB/sec 100%
riscos-2012-11-01-RC6.zip  98 MiB [=================>]  50 MiB/sec 100%
```
In summary, the maximum speeds achievable seem to be around 3 MiB/sec upload and 50 MiB/sec download, but these speeds depend on being close to the Backblaze servers (this applies equally to any cloud service, and is one reason CDNs exist). It should be possible to get more throughput by running multiple streams in parallel.
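The multiple-streams idea can be sketched with a thread pool: download the same object over several concurrent connections and measure the aggregate rate. As above, this is illustrative only; the `fetch` helper and the plain-URL access are assumptions, and a real B2 request would carry an authorisation header.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

def fetch(url):
    """Read one URL to completion and return the number of bytes received."""
    with urlopen(url) as resp:
        return len(resp.read())

def parallel_total_throughput(urls, workers=4):
    """Download `urls` with up to `workers` simultaneous streams.

    Returns the aggregate throughput in MiB/sec across all streams,
    matching the 'Parallel: Total' column in the tables above.
    """
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        total = sum(pool.map(fetch, urls))
    elapsed = time.monotonic() - start
    return (total / (1024 * 1024)) / elapsed
```

Per-stream rates may drop under parallelism (as the Parallel columns show), but the aggregate can still beat a single stream when each connection is individually throttled or rate-limited.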
Downloads seem to start in either a 'fast' or 'slow' mode, which presumably corresponds to cached vs non-cached access, but which speed you get doesn't seem to be predictable, at least in my very limited testing.
This is certainly fast enough for home backup use now, and I can only imagine speeds will improve as systems are optimised. If you want to serve files from B2, it seems like a good idea to use some kind of caching layer or CDN until Backblaze have something in place to speed up international transfers.
All tests were conducted with the same 98MiB file: riscos-2012-11-01-RC6.zip
Sequential downloads were performed 4 times in a row; parallel downloads used 4 simultaneous streams.
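For clarity, the Min/Max/Average columns in the tables above are simple summary statistics over the four per-run throughputs. A minimal sketch (the function name is mine, not from the tool):

```python
def summarise(rates):
    """Summarise a list of per-run throughputs (MiB/sec) the way the
    tables above do: minimum, maximum and mean across the runs."""
    return {
        "min": min(rates),
        "max": max(rates),
        "avg": sum(rates) / len(rates),
    }
```

Applied to the m4.xlarge run above (2.1, 51, 2.1, 50 MiB/sec), this gives the wide min/max spread that the 'fast' vs 'slow' modes produce.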