Hacker News

I'm in the linode SSD beta and results are favorable so far. Here's a simple block write test I ran across Linode (SSD and HDD), Ramnode (SSD), and DO (SSD):

    "dd if=/dev/zero of=test.tmp bs=4k count=1000000"

    Local Hardware (Bare metal, SSD): 633 MB/s
    Linode SSD: 338 MB/s 
    Ramnode SSD: 212 MB/s
    DO SSD: 199 MB/s
    Linode HDD: 30.6 MB/s
The same test with count=2000000 is a bit more revealing:

    Local Hardware (Bare metal, SSD): 240 MB/s
    Local Hardware 2 (Newer, Metal, SSD): 387 MB/s 
    Linode SSD: 355 MB/s
    Ramnode SSD: 293 MB/s
    DO SSD: 236 MB/s
    Linode HDD: n/a, out of space
(tests run with 'time' plus a final sync showed consistent relative performance, but I need to get back to work - will post more later if I get a chance)
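To fold that final sync into the measurement itself, the timed command can include it - a minimal sketch (the file name and the smaller count here are arbitrary):

```shell
# Flush anything already pending so it isn't charged to this run.
sync
# Time the write *and* a final sync, so data still sitting in the
# guest page cache when dd exits is included in the measurement.
time sh -c 'dd if=/dev/zero of=test.tmp bs=4k count=100000 && sync'
rm -f test.tmp
```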

That aside - when I'm looking for remote dev nodes to mess around on, DO and Ramnode are both good bets: cheap and discardable.

But when I want top-notch support and uptime, I go with linode. (Currently I have nodes at all three.)



I think you might need to add the 'conv=fdatasync' option to your benchmark there in order to get real results - otherwise you could just be looking at the host's cache:

"dd if=/dev/zero of=test.tmp bs=4k count=1000000 conv=fdatasync"

Even then, the host could be lying - telling your VPS that a write succeeded when it was never actually written to the host's disk, just held in the host's disk cache. That means great apparent performance, but if the host ever crashes, data on your VPS disk will likely be lost.
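For completeness, dd offers two ways to cut guest-side cache effects out of the numbers - a sketch, with arbitrary file name and sizes; note that neither can control the host-side cache described above, and oflag=direct depends on filesystem support:

```shell
# conv=fdatasync: call fdatasync() once before dd exits, so the reported
# throughput includes flushing the guest's page cache to the (virtual) disk.
dd if=/dev/zero of=test.tmp bs=4k count=100000 conv=fdatasync
# oflag=direct: open with O_DIRECT and bypass the guest page cache entirely
# (needs filesystem support; a larger block size keeps syscall overhead
# from dominating).
dd if=/dev/zero of=test.tmp bs=1M count=400 oflag=direct
rm -f test.tmp
```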



