2011-12-06

My other computer is a datacentre. A very small one.

Once upon a time, every computer filled a whole room, and many people had to look after it. The people who wanted to use it had to submit work on punched cards, then wait an indeterminate period of time to see the results of that work.

Then minicomputers came along, which were only the size of a cabinet in a room. These could be shared between fewer people and, with terminals, be more interactive. And from these machines came Unix, time_t, and buffer overflows in sprintf() calls.

It was still shared; you still had to compete with other people for finite resources. This is why, when the microcomputer came along, and then the interactive workstation, something profound happened: you could do more. You could write interactive programs and be more productive without the long waits. That was before web browser updates, interactive downloading of emergency Flash updates and AV scanners got in the way of productive work, so desktop workstations were actually useful. Of course, all these machines were not that well connected on their own, so people would run round holding floppy disks until networking products became available. Ethernet itself dates from the era of the workstation, though apparently its frame length was partly driven by DEC's need to keep memory costs down, hence its pitiful size today.

Ethernet, Netware, email and the like evolved, so now people can send me emails inviting me to phone conferences at 01:00 UK time, invitations that will somehow be tentatively stuck in my shared calendar without my knowledge, then synchronised to my phone, so that it can wake me from my sleep to let me know that I am late for a phone conference that I didn't know about. Truly, we are blessed.

And yet the wheel goes round. What is fashionable again? The Datacentre. A room full of boxes turning electrons into waste heat the way thermodynamic entropy requires, routing many of them through the CPUs in the process and so doing useful things. The first big datacentres used many racks of machines, each box with 1-2 CPUs, a couple of 512MB HDDs and 1 GbE between them. Now you can build a single rack with the same storage capacity and compute power.

But you'd still have to share it. Which is why I'm pleased to show off a little addition to our facility: a very small Hadoop cluster.

My other computer is a datacentre

These are four SL390s servers in 2U of rack space; the two bits below are just expansion slots in the 4U chassis.

Each one is the same basic node used in one of the top-ten supercomputers, though that machine has many more units, an InfiniBand interconnect and a total power budget of 1.4MW, which is not something I'd want.

The front of the boxes contains all the interconnects: 2x10GbE on the motherboard, and a management port hooked up to 100MbE for iLO management. Having the ports at the front is something that Allen W has complained about. It does make sense if your switches work that way and you can set your hot/cold aisles up so that the fronts are accessible. If your switch has its ports at the back, well, it's "suboptimal".

Round the back: power only. Shared PSUs and fans for a bit more resilience.

My other computer is a datacentre

The twin-socketed Xeon E-series parts have a relatively low power budget for x86-64 servers, though not in ARM terms; the multiple SFF HDDs you can fit into each unit give pretty good aggregate bandwidth and 4TB of storage. If you opt for 3.5" HDDs you fit fewer spindles, so your bandwidth drops, but you get 6TB/node.

Then there's the RAM: up to 192GB/node. These ones have a bit less than that installed, but there's still more per core than the entire RAM supply in my house.

From a storage perspective, there's not that much capacity: 16-24TB across the four nodes (4-6TB each). The ratio of storage to compute and of disk bandwidth to compute is pretty good, though, and as you can also sneak in a GPU, these four machines make for a nice little setup if you have compute-intensive work. And given that those TB of storage don't need to be shared with anyone else, it's not so bad.

This, then, is capable of storing and working through a reasonable amount of data, building up complex in-memory structures and being as responsive mid-afternoon as it is on a weekend, as nobody else is trying to do stuff on it.

At this scale HDFS makes no sense: with only four nodes there isn't the spare capacity to re-replicate after a server failure, and 3x replication is too expensive. Better to RAID everything and NFS cross-mount the filesystems.
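As a rough sketch of what that looks like on the Hadoop side: point the default filesystem at file:// instead of an hdfs:// URI and the FileSystem API carries on as before. The /mnt/shared path below is a made-up example, and fs.default.name is the 0.20-era property name, so adjust both for your own mounts and Hadoop version:

  // Minimal sketch: run Hadoop against a RAIDed, NFS cross-mounted
  // filesystem instead of HDFS. The /mnt/shared mount point is hypothetical.
  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class SharedFsCheck {
    public static void main(String[] args) throws IOException {
      Configuration conf = new Configuration();
      // Make the local filesystem the default; every node sees the same
      // data over the NFS cross-mounts, so no hdfs:// URIs are needed.
      conf.set("fs.default.name", "file:///");

      FileSystem fs = FileSystem.get(conf);
      // List a directory on the shared mount, just to show the FileSystem
      // API works unchanged on top of a plain local/NFS filesystem.
      for (FileStatus status : fs.listStatus(new Path("/mnt/shared/input"))) {
        System.out.println(status.getPath() + "\t" + status.getLen());
      }
    }
  }

The trade-off is that you lose HDFS's data locality, so everything travels over NFS and the 10GbE links; at four nodes that matters far less than it would at scale.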

I know the big cluster people will look at these boxes with bemusement, but think about this:
  1. It's not the only cluster I have access to. This one is free for me to play with new versions and code on without causing problems.
  2. I'm sure the mainframe people didn't think much of minicomputers, minicomputer aficionados looked down on desktop computers, and - as we can see - desktop computers are having to accept the growing functionality of phones and other devices.
This, then, is my own personal datacentre.

3 comments:

  1. What's the electricity bill you have to pay every month?
    Thanks!

  2. No idea. The nice thing about a dedicated cluster for play is that you can suspend it overnight and at weekends, which cuts its consumption during those hours. You still have to pay the (depreciating) capital costs, which is why there's pressure for shared resources you keep busy (shared clusters, VMs running on "cloud" infrastructures, etc.).

    -This rack is in an existing raised-floor room; its PUE (total facility power divided by the power that actually reaches the IT equipment) will be pretty bad compared to the new in-container designs, which have a PUE of about 1.1. And with a GPU you can get some compute-intensive work done more efficiently.

    I should hook up one of those little home power meters to measure it.
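    Until then, here's a back-of-the-envelope sketch; every number in it (average node draw, hours of use, the raised-floor PUE) is a guess rather than a measurement, so treat the output as illustrative only:

    // Rough weekly energy estimate for a 4-node rack, run 24x7 vs
    // suspended off-hours. All inputs are assumptions, not measurements.
    public class EnergyEstimate {
      public static void main(String[] args) {
        double nodeWatts = 350.0;     // assumed average draw per node
        int nodes = 4;
        double pueRaisedFloor = 1.8;  // guess for an old raised-floor room
        double pueContainer = 1.1;    // the in-container figure quoted above

        double itKw = nodes * nodeWatts / 1000.0;
        double hoursAlwaysOn = 24 * 7;
        double hoursWorkWeek = 10 * 5;  // ~10h/day, 5 days, suspended otherwise

        System.out.printf("24x7 on a raised floor: %.0f kWh/week%n",
            itKw * pueRaisedFloor * hoursAlwaysOn);
        System.out.printf("Suspended off-hours:    %.0f kWh/week%n",
            itKw * pueRaisedFloor * hoursWorkWeek);
        System.out.printf("24x7 in a PUE 1.1 room: %.0f kWh/week%n",
            itKw * pueContainer * hoursAlwaysOn);
      }
    }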

    Also, search for HPL+sustainable+datacenter to see real work on datacentre sustainability.


Comments are usually moderated -sorry.