Using Amazon EC2

This article describes the hardware used for the Shodan Go Bet, including fine details about how Amazon EC2 was configured and the costs involved.


For the Shodan Go Bet event, held in London at the end of December 2010, one of the bet conditions was that the hardware be physically present, but my attempts to borrow some beefy hardware all fell through [1]. However, as the bet conditions already capped the hardware at $5000 in value, the additional "physically present" condition was not really needed to prevent the use of a massive supercomputer. So John and I agreed to use my notebook for the day 1 games, but to try Amazon EC2 from day 2 onwards.

My notebook is quite fast, for a notebook: an i7-740QM, four 1.7Ghz cores (running as 8 hyper-threaded cores) and 8Gb memory. The program that was used, Many Faces, is a Windows program; it was run under Linux, using Wine. Then for day 2 we used an "m2.4xlarge" instance, with Windows OS, on Amazon EC2. This is the fastest single machine Amazon offer: 26 ECUs, 8 cores, 68.4Gb of memory. One ECU is roughly 1Ghz (implying my notebook is about 6.8 ECU), or 400 CPUmark points (implying my notebook is 9 ECU), though in real terms (playouts/second) the EC2 machine was approximately twice as fast as my notebook. (See Appendix I, below, for more on hardware speeds, and Appendix II for playout speeds.) Remote desktop was used to connect to the Windows machine, reducing my notebook to the role of dumb terminal; it was as if we had an 8-core Windows machine present in the room, just with a very long video and keyboard cable. 300 miles long...
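The ECU arithmetic is easy to check. A quick sketch, using only the figures quoted in this article:

```python
ec2_ecu = 26.0                        # m2.4xlarge rating

# Estimate 1: one ECU is roughly 1Ghz, and the notebook has 4 x 1.7Ghz cores
notebook_ecu_by_ghz = 4 * 1.7         # = 6.8 ECU

# Estimate 2: one ECU is roughly 400 CPUmark points; the notebook scores 3,593
notebook_ecu_by_cpumark = 3593 / 400  # = 9.0 ECU, near enough

print(round(ec2_ecu / notebook_ecu_by_ghz, 1))      # 3.8
print(round(ec2_ecu / notebook_ecu_by_cpumark, 1))  # 2.9
```

So the ECU figures predict a roughly 2.9x to 3.8x speed-up over the notebook; the playout numbers in Appendix II show the real-world figure was closer to 2x.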


The combination of m2.4xlarge, Windows and using the EU (Ireland) data centre is the most expensive one, but is still just $2.48/hour. Going straight to the bottom line, my EC2 usage in December was $34.85 [2], all but 13 cents of which was the 14 hours using the m2.4xlarge instance. 9 hours of that was the usage on Dec 29th (from 8am to 6pm), and the other four hours were tests and setup prior to that. Note: Dec 27th, 10am to 11am, was billed as two hours: I started the instance up, did a test, and disconnected; then I had another idea, started it up again, tried something, and disconnected again. Each partial hour is billed as a full hour, so be aware that this pattern gets expensive if you do it a lot!
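The bill is easy to reconcile: it is just hours times the hourly rate, plus the small change. A trivial check, using the figures above:

```python
rate = 2.48    # $/hour for m2.4xlarge, Windows, EU (Ireland)
hours = 14     # total m2.4xlarge hours billed in December
other = 0.13   # everything else

print("$%.2f" % (rate * hours + other))   # $34.85
```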

Start-up Time

A fresh Linux instance starts up in a few minutes, including the time to sort out the key pair you use. Excuse a bit of Microsoft-bashing, but despite what the market has been telling it for the past decade, Microsoft still don't get that people sometimes want to use the command line and ssh [3]. What this means is you won't be able to use your Windows instance for the first 10-15 minutes. It will be running, and costing you money, but you have to keep polling the Amazon instance to find out your password. No, I'm serious. This wait-for-password stage was also not noticeably quicker on an m2.4xlarge instance than on an m1.small instance. Restarting a stopped Windows instance (see below) takes two or three minutes.

Once I had the software installed, password set, etc., I took a snapshot. But it turns out there was no need for that: you can simply "stop" your instance, instead of choosing "terminate", and "stop" saves everything for you anyway. After the event I didn't do any cleanup, so both my stopped instance and snapshot are still there, costing me money! Panic!! But it turns out it is not much: $2.91 for the full month of January. (I don't know how much of that is for the snapshot storage, and how much for the stopped instance's storage.) That is very reasonable for the convenience of being able to: a) restart a ready-configured 8-core Windows machine with about 2-3 minutes' notice; b) clone multiple instances of such a machine. I can see how that is a very cheap insurance policy for a business [4].

Detailed Steps

To get a feel for what is involved, here are my detailed steps for getting Many Faces running on EC2:

  1. Log in to AWS.
  2. Switch to "EU West" (no, it never remembers my previous zone, grrr...)
  3. Click "0 running instances" on the right.
  4. Click the stopped instance, and choose start.
  5. When it says 'running' (a few seconds), click it to get the public DNS address.
  6. Wait 2 or 3 minutes, then from Linux run:
    rdesktop -u Administrator -K -g 1200x840 ec2-NN-NN-NN-NN.eu-west-1.compute.amazonaws.com &
    where the NNs are from the public DNS address. If you try before Windows is ready you get a cannot-connect error.
  7. Give the Administrator password.
  8. Double-click the Many Faces icon on the (remote) desktop.
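The only fiddly part of step 6 is the cannot-connect error while Windows finishes booting, which is easily wrapped in a retry loop. A small sketch (generic, so it can wrap any command; the rdesktop line in the comment is the one from step 6):

```python
import subprocess
import time

def retry_command(cmd, attempts=10, delay=30):
    # Run cmd until it exits 0 (success); give up after `attempts` tries.
    for _ in range(attempts):
        if subprocess.call(cmd) == 0:
            return True
        time.sleep(delay)
    return False

# Step 6, with the NNs filled in from the AWS console:
# retry_command(["rdesktop", "-u", "Administrator", "-K", "-g", "1200x840",
#                "ec2-NN-NN-NN-NN.eu-west-1.compute.amazonaws.com"])
```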

And, the setup steps specific to Many Faces:

  1. Download the trial version of the software.
  2. Install it in the suggested default location, and test it runs.
  3. Become a customer, then get the key from David Fotland.
Note: the GUI version is a 32-bit only program. It is limited to using 1.5Gb of memory, which possibly hurts performance on long thinking times. Games 1, 2 and 3 were run using the GUI version.

For the GTP player (which is not generally available, but David Fotland can supply for special events) I used the Many Faces GUI as the front-end (the alternative was installing GoGui, but I didn't want all the hassle of getting Java installed on Windows). Here are those steps:

  1. Put the exe in c:\mf
  2. Run it from a DOS window once to get the product ID, then get the key.
  3. Start the GUI version and choose GTP player (under New Game)
  4. Give the directory as "c:\mf"
  5. Give this command: gtpmfgo-754.exe -l6 -key XXXXX -loggtp -logengine -threads 8 -chinese -memory 12000 -pad_time 30
I chose 12Gb for the memory, despite having 68Gb available, as 12Gb is enough (1Gb per core seems a good guideline), and also David said he had never tested with more.


To delete the items (and stop those small charges), first find the "AMI", right-click it and choose delete. Then go to "Snapshots", and delete. Finally go to "Volumes", and delete (if it says "in use", even after deleting the corresponding "snapshot", just click "refresh").

Why Amazon EC2?

I chose Amazon EC2, over the alternatives, for these reasons:

  • It is the most well-known, meaning:
    • Most useful for me (as a professional software developer) to get familiar with;
    • I could buy books, find mailing list archives, etc.
  • As of December 2010, 26 ECUs was the highest entry listed on CloudPriceCalculator.
  • Data centres in Ireland, not just in U.S., implying lower latency to London.

I see that CloudPriceCalculator now lists OpSource with a 34 ECU machine, but after a quick look at their web site it is not clear, beyond "8 CPUs", exactly what you get. However, whatever the actual speed, an 8 CPU, 16Gb RAM machine, with one network and 1Gb of bandwidth used, would cost $1.07/hour, compared to $2/hour for EC2 (U.S. data centre, Linux, though with 68Gb of memory, not 16Gb). So they seem notably cheaper for the particular configuration I needed. (Though if you actually need 68Gb, their price goes from $1.07/hour to $2.37/hour.)

Appendix I: Hardware

(The prices given here, and the links, are as of early December 2010.)

My notebook has a CPU mark of 3,593. Amazon don't describe the actual hardware, but according to Phoronix the m2.4xlarge instance is a dual Intel Xeon X5550 machine at 2.67Ghz; if so, that configuration has a CPU mark of 10,633, i.e. 2.96 times faster than my notebook (but see Appendix II below for the application numbers that actually matter).
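That 2.96 ratio is just the two CPU mark scores divided:

```python
notebook_cpumark = 3593       # i7-740QM
m2_4xlarge_cpumark = 10633    # dual Xeon X5550, per Phoronix

print(round(m2_4xlarge_cpumark / notebook_cpumark, 2))   # 2.96
```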

I've estimated the $5000 limit would just about get me 48 cores at 2.2Ghz, a CPUMark of 29,628 (8.24 times quicker than my notebook, but also 4 times the cores, so there might be a scaling cost). A 12-core 3.33Ghz Intel machine (CPUMark of 18,601) could be put together for about $4500. Check out this page for other multi-CPU configurations, their benchmarks and approximate costs.

There is an article on Phoronix that benchmarks all the Amazon EC2 instances. Their conclusion is that if your application is CPU-bound, and scales with the number of cores, then the c1.medium (5 ECU) and c1.xlarge (20 ECU) instances represent the current sweet spot. The OpenBenchmarking web site is (ironically) not open yet, but when it opens in February 2011 it should provide a direct comparison of each EC2 instance against known CPUs.

Appendix II: Playout Speed

For playouts/second I have these notes for game 1 (running on the notebook):

  • move 81: 12,053
  • move 83: 11,599
  • move 117: 12,518
  • move 143: 13,024
And game 2 (notebook):
  • move 36: 9,927
  • move 56: 10,169
  • move 80: 9,050
  • move 118: 9,992
  • move 142: 7,939
And game 3 (EC2-32-bit):
  • move 51: 20,590
  • move 53: 20,332
  • move 89: 24,007
  • move 125: 25,472
  • move 165: 27,768
For game 4 (EC2-64-bit) we have the luxury of full information in the log file; here are a few excerpts:
  • move 30-40: 21,100 to 28,300
  • move 50-60: 20,500 to 31,100
  • move 80-90: 21,300 to 22,400
  • move 120-130: 20,800 to 36,000
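Averaging the notebook games against the EC2 games gives the "more than twice as quick" figure. A rough check; per-move playout rates vary a lot, so treat the second decimal place with suspicion:

```python
notebook = [12053, 11599, 12518, 13024,        # game 1
            9927, 10169, 9050, 9992, 7939]     # game 2
ec2 = [20590, 20332, 24007, 25472, 27768]      # game 3 (EC2, 32-bit)

avg = lambda xs: sum(xs) / len(xs)
print(round(avg(ec2) / avg(notebook), 2))   # 2.21
```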

Conclusion: There is wide variance, both move to move and game to game, but I think we can say the 26 ECU instance was running more than twice as quick as the notebook, though clearly well below the 3 times faster the CPUmarks would suggest, and below the 2.8 to 3.8 times faster the ECU estimates suggest. Both notebook and EC2 instance ran with 8 cores, so overhead due to thread contention and data passing should not have been a factor.


[1]: Learning 1: It is hard to make useful contacts in London when you are living in Tokyo.
Learning 2: Trying to organize an event for the week after Christmas can be hard work, as the companies and universities are all shut, and many individuals who might be interested are out of town.

[2]: The Amazon Web Services billing page is a bit confusing and frustrating. There is an Account Activity page that shows how much you have to pay. To see the breakdown by server and data centre you press the cross icon; but there is no further drill-down (e.g. by date). On the other hand, the Download Reports link gives you a csv file with StartTime, EndTime and UsageValue, but no prices. So to get a good feel for what costs what, you have to flip back and forth between these two reports. If a Cost column were added to the Download Reports, I would have no complaint.

[3]: Perhaps Amazon could offer a Windows instance that has cygwin pre-installed and ssh already set up. Then key pair login could be used, users can set their own Administrator password, and be up and running in just a couple of minutes (?). Actually this might be very popular anyway, as cygwin can be a bit of a pain to install and especially to keep it working properly.

[4]: For instance, a web site might set up a DB server and a couple of web servers on EC2, even if they are hosting elsewhere. Once tested, they just let them sleep. Then they keep a "micro" EC2 instance running all the time ($15/month), along with a disk volume, and just use that to keep the data and files up to date. If they ever need to start those web servers up, they can use that disk volume. It could mean a catastrophic failure at your current ISP results in only 15 minutes of downtime. (Well, assuming DNS is not hosted at the same place, and your TTLs are not higher than 15 minutes.)

© Copyright 2011 Darren Cook <>

Revision History
2011-01-24: First public release