Almost two years ago, I bought myself what I thought at the time was a “real” home lab. The components I bought back then were similar to what MVP Jeff Guillet had described on his blog: http://www.expta.com/2013/04/updated-blistering-fast-hyper-v-2012.html
That server consisted of the following components:
- Intel Core i7-2600K
- Asus motherboard (P8Z77-VLX)
- 32GB DDR3 RAM
- 3x 120GB SSD
- 1x 1TB Western Digital 7200rpm SATA
- Antec NSK4400 (400W PSU)
Additionally, I use my “main desktop” machine to run a few VMs:
- Intel Core i5-3550
- Gigabyte Motherboard (Z68AP-D3)
- 16GB DDR3 RAM
- 2x 120GB SSD
- 1x 2TB Western Digital 7200rpm SATA
- Antec NSK4400 (400W PSU)
From a networking point of view, I bought a relatively cheap HP ProCurve 1810-24G, which gives me more than enough ‘power’ to tie everything together. What I also like about this switch is that it’s relatively easy to configure, quiet, and supports features such as VLAN tagging.
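For what it’s worth, the VLANs on the switch pair up with VLAN tagging on the Hyper-V virtual NICs. The sketch below shows roughly how that looks in PowerShell; the VM name and VLAN ID are placeholders, not my actual lab values.

```powershell
# Tag a VM's virtual NIC with VLAN 10 so the ProCurve keeps the lab subnets separated
# (the VM name and VLAN ID are examples only)
Set-VMNetworkAdapterVlan -VMName "LAB-DC01" -Access -VlanId 10

# Verify the VLAN configuration of that VM's virtual NICs
Get-VMNetworkAdapterVlan -VMName "LAB-DC01"
```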
Challenges
Over time, and especially while preparing for the recently retired MCSM training, I started to experience some ‘problems’. Under load, my server would randomly freeze up. While that usually isn’t much of a problem, I sometimes dare to run my demos from these machines. The last time I did that, it froze up in the middle of a demo! Those who were present at TechDays in Belgium will remember what that was like…
Given that very unpleasant experience, I decided to build a new “home lab”. From a requirements point of view, this ‘new’ lab has to accommodate all the VMs I want to run (roughly 60) and preferably be more stable. To get there, I definitely need more memory (CPU wasn’t an issue) and more storage. I found that I was able to run quite a lot of VMs out of the limited amount of storage I have right now (360GB of SSD) by using deduplication in Windows Server 2012. That is also why I decided to keep using SSDs, which ultimately cost me the most.
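For illustration, enabling deduplication on a lab volume comes down to something like the sketch below. The drive letter is just an example, and keep in mind that deduplicating running VMs isn’t an officially supported scenario in Windows Server 2012; for a lab it works well enough.

```powershell
# Install the Data Deduplication role service (File and Storage Services)
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the data volume that holds the lab VHDs ("D:" is an example)
Enable-DedupVolume -Volume "D:"

# Kick off an optimization job right away instead of waiting for the schedule
Start-DedupJob -Volume "D:" -Type Optimization

# Check how much space is actually being saved
Get-DedupStatus -Volume "D:" | Format-List Volume, SavedSpace, OptimizedFilesCount
```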
From a network point of view, I’ll also be looking to replace (or complement) my current switch with a high-performance router. In my persistent lab environment, I have a few subnets which are currently routed through a virtual machine (Windows Server 2012). In the future I would like this to be handled by a Layer 3 switch. I’ve done some research and found that HP’s 1910 series actually offers up to 32 static routes while remaining relatively cheap. Another option would be to use one of MikroTik’s RouterBoard devices… I’m still not sure which way to go.
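For context, routing between the lab subnets through that Windows Server 2012 VM essentially boils down to the sketch below; the interface aliases, prefix and next-hop address are placeholders, not my actual addressing plan.

```powershell
# Inside the routing VM: enable IP forwarding on the vNICs attached to the lab subnets
# (interface aliases are examples; adjust them to the actual adapter names)
Set-NetIPInterface -InterfaceAlias "Lab-Subnet-A" -Forwarding Enabled
Set-NetIPInterface -InterfaceAlias "Lab-Subnet-B" -Forwarding Enabled

# Add a static route towards a subnet that lives behind another router or VM
New-NetRoute -DestinationPrefix "10.0.30.0/24" -InterfaceAlias "Lab-Subnet-B" -NextHop "10.0.20.1"
```

A Layer 3 switch (or a RouterBoard) would simply take over those static routes in hardware.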
The process
When I first started looking for solutions, I ended up with a few possibilities. One of them was to move my lab out of the house and either host it in a datacenter or build it on a platform like Amazon Web Services or – preferably – Windows Azure.
The problem with either option is that – given the number of VMs I’m running at times – it can become quite a costly business. Even though it would be the most elegant solution of all, it’s just not something I can afford.
Next, I looked at moving my own hardware to a datacenter. Through a colleague of mine I was able to rent 1U of rack space at roughly 35 EUR a month (which is relatively cheap). While from a connectivity point of view this was an awesome idea, I had to find hardware that would fit in 1 or 2 units. For this, I came up with two options:
- Modified Mac Minis
- Custom-built 1U servers
Unfortunately, both options turned out to be either inefficient or too expensive. I could easily fit four Mac Minis in 1U of rack space, but they can only hold 16GB of RAM and – even when adding a second disk myself – each one would cost up to 850 EUR. The alternative of building a server myself based on SuperMicro hardware (which is decent quality for a fair price) seemed doable, except when trying to fit as much as possible into 1U. Basically, I ended up with the following hardware list – which I nearly ended up buying:
- Supermicro 1027R-73DAF
- 1x Intel Xeon E5-2620
- 6x Samsung 250GB EVO SSD
- 8x Kingston 16GB DDR3 ECC
The problem here is cost. Next to buying the hardware (roughly 2,700 EUR), I would also need to take into account the monthly recurring cost for the datacenter. All in all, a little overkill for what I want to get out of it.
The (final) solution
So I started looking, AGAIN, for something that combined the best of both worlds, and ended up building my server with a mix of server and desktop hardware. In the end, I decided to host the hardware at home (despite the advantages of putting it in a datacenter) and to request a new VDSL line that gives me a handful of external IP addresses to use. This is what I ordered:
- Antec Three Hundred Two
- Corsair 430W modular power supply
- Asus P9X79 Pro
- Intel Xeon E5-2620
- 64GB DDR3 1333MHz
- 4x Samsung 840 EVO 250GB SSD
- 1x Western Digital “Black” 1TB 7200rpm
The total price for this machine was approximately 1700 EUR, which isn’t too bad given what I get in return.
The reason I chose the Intel Xeon CPU and not a regular Core i7 is simple: even though some motherboards claim to support 64GB of RAM with most i7 CPUs, there is only a single one that actually addresses more than 32GB (the i7-3930K). Compared to the Xeon, that CPU is insanely expensive, which is why I went for the latter.
Because I’m using a “regular” motherboard [P9X79] instead of, say, one from SuperMicro, I was able to drive down the cost of memory as well. Even though I’m now limited to ‘only’ 64GB per host, the additional cost of ECC RAM and a SuperMicro motherboard wasn’t worth it in my humble opinion.
The future?
My ultimate goal is to end up with two (maybe three) of these machines and leverage Windows Server 2012 R2’s new storage and networking capabilities (enhanced deduplication, SMB Multichannel, Storage Spaces, …). This would also allow me to configure the Hyper-V hosts in a cluster, which ‘unlocks’ some additional testing scenarios.
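To give an idea of where that is heading, the sketch below shows the kind of building blocks involved on the storage side; all friendly names, resiliency settings and host names are placeholders rather than a finished design.

```powershell
# On the future storage host: pool the available SSDs and carve out a virtual disk
# (friendly names and the resiliency setting are examples only)
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "LabPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
New-VirtualDisk -StoragePoolFriendlyName "LabPool" -FriendlyName "VMStore" -ResiliencySettingName Simple -UseMaximumSize

# On a Hyper-V host: check that SMB Multichannel is spreading traffic across the NICs
Get-SmbMultichannelConnection

# Later, once there are two hosts: form the cluster (run from one of the nodes)
# New-Cluster -Name "LabCluster" -Node "HV01","HV02" -NoStorage
```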
As such, I would like to get to the following setup (which will take months to acquire, for sure!):
I’m still in doubt about how I will do the networking, though. Given that 10GbE is becoming cheaper by the day [Netgear has a really affordable 8-port 10GbE switch], I might end up throwing that into the mix. It’s not that I need it, but at least it gives me something to play with and get familiar with.
I’m most likely to transform my current Hyper-V host into the iSCSI target over time; the sketch below shows roughly what that would involve. But let’s first start at the beginning.
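A minimal sketch, assuming the built-in iSCSI Target Server role in Windows Server 2012 R2; the path, size, target name and initiator IQN are placeholders.

```powershell
# On the box that becomes the storage node: install the iSCSI Target Server role service
Install-WindowsFeature -Name FS-iSCSITarget-Server

# Create a VHDX-backed LUN and a target, then map them together
# (path, size, target name and initiator IQN are examples only)
New-IscsiVirtualDisk -Path "D:\iSCSI\LabLUN01.vhdx" -SizeBytes 200GB
New-IscsiServerTarget -TargetName "LabTarget" -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hv01.lab.local"
Add-IscsiVirtualDiskTargetMapping -TargetName "LabTarget" -Path "D:\iSCSI\LabLUN01.vhdx"
```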
Once I have received the hardware, I’ll definitely follow up with a post on how I put the components together, along with my (hopefully positive) first impressions. So make sure to look out for it!
Cheers,
Michael