21.4.10

nVidia GTX 480 vs Radeon HD 5870 vs Radeon HD 5970

This question has been getting a lot of debate recently.  Which card is currently king?  The benchmarks and comparisons are coming in from all over the show.  Tom's Hardware has one.  As does Benchmark Reviews.  As does Gizmodo.  As does Maingear Forums.  As does Dutch site hardware.info (excuse the Google translation).  The latter two even have SLI comparisons.  (For anyone considering a truly high-end system, the SLI comparisons have to be the ones that matter.)

Unfortunately, the benchmarks are far from conclusive.  They all agree that the GTX 480 is the fastest single GPU on the market.  Fair enough.  The Radeon HD 5970, with 2 GPU's, is faster.  Also fair enough - so the nVidia chip isn't double the speed of the ATi chip.

The nVidia card does well enough on the synthetics (although here you see it beaten out by a stock Radeon HD 5870), but that doesn't really mean anything.  In general, the nVidia chip edges out the Radeon, while falling short of the dual chip board on almost every benchmark.

However, as soon as you start to run SLI setups, it becomes less clear.  The results are all over the place - or are they?  Where the reviews fall down is that they don't put their finger on the important difference between the chips.  They will tell you that the nVidia chip doesn't scale well in SLI.  They will also tell you that the Radeons actually come out ahead in some of the benchmark comparisons.

I think I see a pattern emerging.

If you look at the more taxing games, at high res and max'ed out detail, like Metro 2033, (you might need to google translate that,) Crysis and STALKER Call of Pripyat, then the nVidia card has a solid edge.

Other, less taxing games, like DiRT 2, Far Cry 2, and HAWX show the Radeons ahead in places.  You'll note that in these games the frame rates delivered by either chip are so far into playable territory that you won't really care.

What matters here is that, at the top resolutions and maxed out detail settings of the most demanding games, the nVidia always wins.  Basically, this is a chip with a mission.  The GTX 480 is here to show you the most beautiful games at their very best.  For those games that give it up for just any old GPU, there are the Radeons - but for those of you that want to see something you've never seen before, the GTX 480 in 3-way SLI makes 2560x1600 more playable at max detail in Crysis, STALKER CoP and Metro 2033, than anything else you can buy today.

So, if you want to play these games through and enjoy them, ATi has the cards for you.  They are affordable and playable.  But, if you want to see what the state of the art looks like in PC games today, nVidia has the only chip that will show you that.

With Crysis 2 on its way this September, I'm guessing there will be some eye candy that will really show off the capabilities of the top-end rigs.  I wanna be ready for it.

18.4.10

Choosing a power supply

I remember when this bit was simple.  Your case came with a 200W power supply and that was the end of it - you even plugged your monitor into it!  Now it's all become complicated.  First you have to figure out what size power supply you need, then you need to decide if you want a single rail or multi rail unit and then it really starts to get difficult.

After a fair bit of research and reading of reviews, I've brought together the factors that I think are important to this decision.

First, a Word About Efficiency and Heat Output
All power supplies draw more power from the wall socket than they provide to your computer equipment.  The extra power is wasted - mostly as heat - during the transformation from AC to DC and to the voltages that the various outputs require.

A decent power supply these days will be 80 Plus rated - meaning it is at least 80% efficient at 20%, 50% and 100% of its rated load.  The best ones might be over 85% efficient.  Most units hit their peak efficiency somewhere around 50-80% of rated output.  I want my PSU to be at peak efficiency while my system is at peak load, because peak efficiency means the least amount of power being converted to heat - thereby helping case cooling.  If it can also be efficient at average load, even better.
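
To put rough numbers on this, here's a quick sanity check (the 1000W load and the two efficiency figures are just illustrative):

```python
# Wall draw and waste heat for a PSU at a given efficiency and DC load.
def psu_waste_heat(dc_load_w, efficiency):
    """Return (wall draw, waste heat) in watts."""
    wall_draw = dc_load_w / efficiency
    return wall_draw, wall_draw - dc_load_w

# A 1000W load on an 80%-efficient unit vs an 85%-efficient one.
for eff in (0.80, 0.85):
    wall, waste = psu_waste_heat(1000, eff)
    print(f"{eff:.0%} efficient: {wall:.0f}W from the wall, {waste:.0f}W wasted as heat")
```

Roughly 75W less heat from the more efficient unit, at the same load - heat the case fans never have to deal with.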

Now - How Big?
High-end CPU's and GPU's require more power.  Peak power draw on an overclocked CPU or GPU can easily hit 300W and more.  Hard disks are in the range of 5-12W, when loaded.  (SSD's seem to stay below 2W, so they shouldn't cause too many headaches.)  A water cooling pump will be up to 25W.  Case fans range from less than 1W, up to about 4W.  Other items are typically not so big as to be a major consideration - unless you have a lot of them.

If I'm going to have enough juice for a CPU and 3 GPU's, there's around 1200W (peak draw) right there without even thinking about the rest of the system.  Figuring in 2 SSD's, (4W,) 3 HDD's, (25W,) 5 case fans (say, 10W) and a water pump, (20W,) leaves me at 1259W, with no headroom for anything extra at all.  There are a few power supplies that will pump out that much juice and more - all the way up to 1600W - but they are awfully expensive and tend to run hot, noisy and inefficiently.  It would be best if I could get away with one of the smaller units.

Fortunately, it is virtually impossible to have every component drawing peak power at the same time.  Even in a perfectly balanced system, at least one of my 3 GPU's will be waiting on the CPU to send it some work to do, which in turn might be waiting on the hard disk to return from an I/O request.  I am unlikely to be sucking data off of all of my hard disks simultaneously.  (To be fair, the pump is likely to be maxed when the rest of the system is maxed, because that's when it will be generating the most heat.)  The point is, I'm pretty unlikely ever to be drawing the full 1259W at any one moment.

I'm guessing that my GPU's won't be maxed out while I'm reading my hard disks - games generally avoid disk I/O during gameplay, as you don't want your expensive, lightning fast graphics card(s) waiting on a slow old disk operation.  I'm guessing that a more realistic peak power draw for the system will be around 75-80% of the CPU and GPU's combined peak, plus the pump and fans, plus the rest of the system at idle.  That gives me a guesstimate of 1200 (CPU + 3xGPU's) * 0.8 + 17 (SSD's and HDD's at idle) + 30 (pump and fans) = 1007 watts, peak power draw.

Trying to fit my system into the peak efficiency band of most PSU's further increases this number.  1007W is 80% of 1259W - so I'm still going to need a big power supply!  They don't tend to make them in that exact size, but they get pretty close.  They seem to be available in 1000W, 1050W, 1200W, 1250W, 1500W and 1600W varieties.
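
The estimate is easy enough to write down as a quick script (the loading factor and wattages are my guesses from the reasoning above, not measured figures):

```python
# Guesstimate of realistic peak draw, per the reasoning above.
cpu_and_gpus_peak = 1200   # CPU + 3 GPU's at worst-case peak (W)
loading_factor = 0.8       # assume ~80% of combined peak in practice
drives_idle = 17           # 2 SSD's + 3 HDD's at idle (W)
pump_and_fans = 30         # water pump plus 5 case fans (W)

peak_draw = cpu_and_gpus_peak * loading_factor + drives_idle + pump_and_fans
print(f"Estimated peak draw: {peak_draw:.0f}W")

# To keep that draw inside the 50-80% efficiency sweet spot,
# the PSU needs a rating of at least peak_draw / 0.8:
print(f"Minimum PSU rating: {peak_draw / 0.8:.0f}W")
```

Which lands at 1007W of estimated draw and a ~1259W minimum rating - hence the shortlist of 1200W+ units.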

The high-output PSU's tend to be less efficient, too.  I couldn't find any 80 Plus Gold rated PSU's over 1000W - and precious few rated 80 Plus Silver.

Do Overclockers All Need Single-Rail PSU's?
This post from jonnyGURU seems to be the authoritative word on single rail vs. multi rail PSU's; I recommend it to anyone that is considering buying a high-end power supply.  Basically, if you want any inbuilt protections against short circuit (e.g. you do not want to get electrocuted or your house to burn down), get a multi-rail power supply.  This incurs the overhead of having to be more careful about how you plug your system together, so that no single rail's capacity is exceeded - or your computer will shut down.

You will also need to be careful to buy a high quality PSU that has enough rails, with enough capacity, to power your equipment.  If your highest capacity 12V rail is only rated at 20A, then you won't be able to drive a high-end CPU or GPU, as it will shut down if you try to draw over 240W.  (12V x 20A.)  If you're overclocking, you will likely need even more current on your rails.  You may also find yourself in the situation where you are unable to draw the full rated power from your PSU because (e.g.) you have a component needing 30W and your 6 rails only have 10W of capacity left, each.  This last is more likely to be a problem when you are near the limit of your PSU's capacity anyway - another reason to allow yourself a little headroom.
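
The rail arithmetic is just volts times amps, but it's worth checking for each big component (the 300W figure below is a hypothetical overclocked GPU, not a spec):

```python
# A rail's capacity in watts is its voltage times its current rating.
def rail_capacity_w(volts, amps):
    return volts * amps

print(rail_capacity_w(12, 20))  # a 20A 12V rail tops out at 240W

# So a component that can spike to 300W needs a beefier rail:
gpu_peak_w = 300  # hypothetical overclocked GPU
print(f"{gpu_peak_w / 12:.0f}A needed on its 12V rail")
```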

Single rail PSU's do away with this configuration headache and allow you to push your PSU closer to its limit by providing its full capacity with one gigantic rail.

How big did you say?
The more mundane side of buying a high-end power supply is that a higher output rating will typically mean a physically larger unit.  Some PSU's today are so big that they won't fit in some cases.  Others will fit, but might prevent a 5.25" bay, or perhaps a hard drive enclosure, from being used.  Whether this is a problem depends on your case and what you are trying to fit into it.

I'm planning on using the Lian-Li PC-P80, one of the few 10-slot cases on the market, giving me the option of 4 dual-slot graphics cards.  This case will fit a maximum PSU length of 220mm - just short of the largest PSU's about.

Noise
PSU manufacturers - particularly high end ones - are starting to pay more attention to noise reduction now.  Units with bigger fans, moving more air at lower revolutions, tend to stay cool for less noise output.  Some of the less powerful units even get away without cooling fans.  I want mine to be as quiet as possible, but higher end units are going to have some fan noise.

So, which one?
I looked at the reviews for a number of units before settling on the Enermax Revolution 85+ 1250W.  It is 190mm long - leaving 30mm more space than the Silverstone Strider ST1500.  It gets a "Best in Class" award from Hardware Heaven.  JonnyGURU has good things to say about its cousins.  It is over 89% efficient between 500W and 1125W power draw, which should cover my normal operating range.  If/when I eventually hit 4-way SLI and max this unit out, I can always add one, or even two, supplementary units in 5.25" bays to take care of the excess.

17.4.10

Hard Disk Drives

With the better SSD's offering at least 10 times the real-world performance of the best HDD's, the lure of super-fast Windows startup and application load times is hard to resist.  Tom's Hardware has some interesting benchmarks on SSD performance and on HDD performance.  Putting the OS on a 160GB Intel X25-M seems like the way forward here.

The Fusion-io ioXtreme is another interesting player here.  Having one just for the pagefile and Windows temp directory seems like the best use.  While the review on Tom's Hardware shows some blistering performance figures, it also shows some so-so figures for the consumer version.  It seems unlikely to have a big enough effect on frame rates to justify the cost.  Perhaps the price of the professional version will come down enough for this to be revisited at a later phase?

16.4.10

Upping the Ante

In this post, I explained the reasons for my initial decision to go down the ATi route for graphics.  Since then, new evidence has come to light, which has swung my thinking in the other direction.
  • This review shows the relative performance of 2 Radeon HD 5970's in Quadfire vs 3 GeForce GTX 480's in Tri-SLI.  While not decisive, nVidia has an advantage at max settings, with AA enabled, plus more room for optimisation with their brand new drivers and the possibility of scaling to a 4th GPU for either PhysX or Quad-SLI.  (Assuming that they can get their Quad-SLI scaling sorted out!)
  • This news item shows a dual CPU motherboard from EVGA with SATA 6Gb, USB 3.0, and 2 NF200 chipsets, for 72 lanes of PCIe goodness!  That will support 4 graphics cards at a full 16x speed!  Being based on the EVGA Classified 4-Way SLI, the EVGA board should be more overclocker-friendly than the ASUS P6T7 WS Supercomputer board, which seems aimed more at professionals than enthusiasts.
It seems that motherboards are on the verge of a technology refresh.  Setting aside SATA 6Gb and USB 3.0, the dual CPU option offers a far better upgrade path.  I'm willing to bet that the price premium on the dual-CPU board will be worth it.

In a year or two, 2 cheap, mid-range CPU's will likely keep my frame rates comparable to whatever n-core top-of-the-line monster chip Intel is offering at the time - just as 2 Core i7-930 chips offer more CPU power for less money than a single Core i7-980X does today.  Sure, some applications and games don't take advantage of multiple cores at the moment - but that's changing.  DirectX 11 is multi-threaded, so games that use it will be, too.  Both nVidia and ATi have gone multi-threaded with their drivers.  Multi-core is the brave new world of high-end 3D graphics.

With this in mind, it strikes me that I might be best advised not to invest too heavily in that area just yet.
  • I can save £130 in phase 1 by going with an ASUS Rampage II GENE.  That should perform exactly the same as any other board on the market, up until the point that I want to add a 3rd graphics card - by which point it might be time to upgrade to a next-generation motherboard, with dual CPU and Quad-SLI support.  I even get Creative SupremeFX audio thrown in!
  • I can save another £100 by choosing the GTX 480 over the Radeon HD 5970, which gets me a faster GPU and more headroom for expansion.
Unfortunately, this change of direction means my previous decision on case needs revisiting.  If the EVGA dual-processor motherboard is the same odd size as the Classified 4-Way SLI, then it won't fit most cases, due to the 10 expansion slots required for a quad-GPU config.  The few cases that will fit the EVGA board all look like they belong on the bridge of the Starship Enterprise, but the Lian-Li PC-P80 is the least offensive of them, with good air cooling, while supporting water cooling and a massive 390mm graphics card length.  Once I replace the fans with LED-less ones, it should be acceptable.

Revised build roadmap and phase 1 parts list to come soon.

8.4.10

Thoughts on Water Cooling vs Air Cooling

I'm hoping to get by on stock fans and heat sinks - plus a few extra case fans - for Phase 1, but I doubt that will be enough as I move towards an overclocked, multi-GPU configuration.  Something will have to be done about cooling for Phase 2.

Although there seems to be some debate over the real-world advantages of water over a decent, after-market air cooling system, the consensus seems to be that water is better - if you are willing to spend the money and do the job properly.

After much reading on the subject and a little elementary physics, here is how I think it works.  (If you know better, feel free to comment.)

From a physics perspective, the specific heat of water is over 4 times that of air, while its thermal conductivity is 24 times better.  In simple terms, that means heat will transfer 24 times more quickly into water, and a given mass of water will carry 4 times more heat away than the same mass of air.

Air's poorer thermal conductivity tends to be offset by the fact that an air-cooled heat sink has a far larger surface area exposed to the air than a water block exposes to the water.  This comes at a cost, though; air cooling systems take up far more room in the case, which in turn impedes air flow - further reducing the system's effectiveness.

The lower specific heat of air seems to be what really holds it back, though.  To carry away the heat produced by a hot CPU and GPU(s) you need to move a lot of air.

To expand upon this a little, let's take a fairly decent, standard 120mm fan as an example.  It has a flow rate of 54.3 CFM, or 1,537 l/m.  Now, a fairly modest water pump has a flow rate of about 5 l/m.  Sounds like the fan is in the clear, but a closer inspection shows that this is far from the case.  Heat capacity is based on mass, not volume - and water is a fair bit denser than air.

At 20C, air has a density of about 1.204 kg/m3.  At the same temperature, water has a density of 998.2071 kg/m3 - about 829 times more than air.  Remembering that water also has 4 times the specific heat of air, we would need to move 829x4=3,316 (rounded down a little, for simplicity,) times the amount of air to carry away the same amount of heat.  So our little 5 l/m water pump is really doing the job of 5x3,316=16,580 l/m of air - or over ten fans!  (All those fans, moving all that air is what makes air cooled systems so noisy.)

Most cases - even full tower cases - don't have room for 10 fans, let alone enough fans to compete with some of the more powerful water pumps, which can move up to 25 l/m.  (It is also unlikely that 10 fans would really move 10 times more air.)  Basically, there is no way that you will ever move enough air to compete with the heat carrying capacity of water.
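
Running the numbers as a script makes the gap obvious (the figures are the same ones quoted in the last two paragraphs):

```python
# How much airflow matches a modest water pump, by heat-carrying capacity?
air_density = 1.204        # kg/m^3 at 20C
water_density = 998.2071   # kg/m^3 at 20C
specific_heat_ratio = 4    # water vs air, approximately

density_ratio = water_density / air_density            # ~829x
capacity_ratio = density_ratio * specific_heat_ratio   # ~3316x

pump_lpm = 5        # modest water pump, litres per minute
fan_lpm = 1537      # one 120mm fan at 54.3 CFM

equivalent_air_lpm = pump_lpm * capacity_ratio
print(f"capacity ratio: {capacity_ratio:.0f}x")
print(f"that 5 l/m pump is worth {equivalent_air_lpm / fan_lpm:.1f} fans")
```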

However, the story is not over.  A water cooling system is a closed loop - so the water that is carrying all that heat away will eventually come back around again.  Unless something is done, it will just get hotter and hotter, until it can no longer absorb any more heat - then your system fries.  The remaining part of the equation is the ability to radiate heat away, once it has been absorbed from the hot components.  Both air and water cooling systems ultimately rely on radiating heat into the atmosphere.

Air coolers rely on radiating heat from the components, via a heat sink, directly into the air inside the case and then exhausting that air out of the case, replacing it with cooler ambient air.  Unfortunately, even the best case fans tend to leave the air inside the case around 10 degrees hotter than ambient.  With the rate of heat transfer being proportional to the difference in temperature between the two media, that 10 degrees is going to reduce the rate of heat transfer, compared to an external radiator used in a water cooling system.  In a system crammed with foot-long graphics boards and huge after-market coolers slowing down the airflow, venting of hot air will be even less efficient.
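
That 10 degree penalty can be quantified, if we take transfer rate as proportional to temperature difference (Newton's law of cooling - the temperatures here are illustrative assumptions, not measurements):

```python
# Relative heat-transfer rate into case air vs ambient air,
# assuming rate is proportional to temperature difference.
heatsink_temp = 60        # C - a hot component's heat sink (assumed)
ambient = 20              # C - room air, as seen by an external radiator
case_air = ambient + 10   # C - air inside the case runs ~10 degrees hotter

rate_internal = heatsink_temp - case_air   # 30 degree difference
rate_external = heatsink_temp - ambient    # 40 degree difference
print(f"internal transfer rate: {rate_internal / rate_external:.0%} of external")
```

With these assumed temperatures, radiating into case air is a quarter less effective than radiating into ambient - and the hotter the case air runs, the worse the penalty gets.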

The killer feature of watercooling systems is that they carry heat well away from the components being cooled before shedding it.

Many water cooled systems have their radiators attached to the fan vents of the case, either inside or outside, with fans blowing air over them.  (There seems to be a fair bit of debate over whether ambient air should be blown over the radiator and into the case, or whether case air should be blown over the radiator before being exhausted outside the case.)

Other water cooled systems actually have their radiators outside the case, so that there is no question of blowing heated air back onto the components being cooled, nor is there any problem with trying to radiate heat into pre-heated air.

The final clincher for air vs water cooling is that it need not be an either/or situation.  You can still use case fans to vent hot air and you can still use air-cooled heat sinks on some components.  Water cooling simply routes some of the heat to an external radiator, thereby reducing the load on the case fans - with the added benefit of letting them run slower and therefore quieter.

So, in pursuit of a quieter rig, with a higher theoretical limit for heat dissipation, I'm going to opt for water cooling in phase 2 - let's see if the reality lives up to the theory!

Phase 1 in Detail

Phase 1 will be air-cooled, with just one graphics board and stock clocks.  This phase is about getting a working system up and running with minimal expense and hassle.

That doesn't mean ignoring my design goals, though.  I want to leave myself with the longest and smoothest upgrade path possible, avoiding buying anything that I'll just have to throw away in a later phase.  To that end, I'll be investing in the best motherboard I can find, along with a case that will go the distance and a graphics board that will scale well.

Parts List
Monitor: I happen to have an old Samsung 22", 1680x1050 LCD hanging around, so I'm going to use that for the early build phases.  With all the detail turned up, I doubt I'll have enough grunt to max most titles out on this screen until at least phase 2, depending on how well the overclocking goes.
Motherboard: ASUS Rampage III Extreme
This one is important to get right, as replacing it later is both expensive and a serious hassle.  The OC'ing potential of this board, along with its support for future CPU, IO device and RAM models means it should last the distance as well as any motherboard could.  Also supports both nVidia SLI and ATi CrossfireX, which means I can switch ponies later on, when/if nVidia takes the lead in the 3D graphics race.
Graphics Card: ASUS EAH5970/G/2DIS/2GD5/A
One would hope that using an ASUS graphics board with an ASUS mainboard might reduce the chances of compatibility woes by a smidgen.  This ASUS board is pretty reasonably priced anyway.  nVidia vs ATi was discussed in an earlier post, so I won't go into that again here.
CPU: Intel Core i7 930 (LGA1366)
While not the best CPU available, this one ought to see me through until the rest of the system outpaces it.
RAM: 3x2GB DDR3 1600
No need to go all out here.  RAM isn't the most critical part of the overall system when it comes to frame rates.  As long as I have a reasonable amount of moderately spec'ed RAM it shouldn't be the bottleneck - at least in the early phases.  I can upgrade this in a future phase, without throwing away too much money.
Case: Zalman GS1000 Plus
Really pushing the boat out on this one.  This is the one part of the system that will outlast all the other bits.  Until the ATX form factor goes out of fashion, I'll be able to keep on building systems in this enclosure.  Ironically, it's actually the cheapest component in this list.  Further discussion here.

Choosing a Case

If I'm gonna run multiple high-end GPU's, plus a watercooling rig, I'm going to need plenty of room in the case - for both parts and airflow.  That means going full tower.

If I want to keep the girlfriend sweet with my new project, the case is going to have to be tidy, too.  Besides, if I'm gonna be spending a bit of time with my hands inside this thing, I don't want it to have too many sharp edges!

After reading all the reviews I could find, one manufacturer stands out from the rest: Zalman.  They are expensive, compared to other manufacturers, but they consistently get rated top for build quality and noise levels.  They have 4 gaming cases in their lineup that could potentially fit the bill:

  1. GT1000 Z-Machine.  This is Zalman's flagship case.  It is an extra-wide mid-tower.  It is supposed to fit "full-sized" PCI cards, but without having seen one, I have my doubts about how much room might be left for getting fingers around a 12.5" long Radeon HD 5970.  I can't seem to find exact internal dimensions anywhere.
  2. LQ1000 Z-Machine.  This is the liquid-cooled version of the GT1000.  A thing of beauty, but with an internal water cooling rig, it can only have less space inside than the GT1000.
  3. MS1000-HS2.  This one is a mid tower with dust filters and sound dampening.  It also says it has room for "full-size" PCI cards and further qualifies this to mean 300mm cards.  Unfortunately, the HD 5970 cards on the market today are all pushing 320mm, so I'm afraid this one is out.
  4. GS1000 Plus.  This is a recent update to the GS1000, full tower case.  The Plus adds fans around the drive enclosures, seemingly in response to a review that cited uncomfortably high HDD temps.  The GS1000 is 80mm deeper than the MS1000 and 150mm deeper than the GT1000.  It also supports larger E-ATX motherboards.  Without knowing the internal dimensions or seeing one, I have to say that if this baby can't hold an HD5970, nothing will!
So, the GS1000 Plus seems like the go.  The full tower gives plenty of room inside and several separate reviews have commended its build quality.  It even has a pair of holes in the back for routing coolant tubes to an external watercooling system.  At not much more than half the price of the GT1000, the GS1000 is a bargain!

Another thing worth noting is that a couple of reviews have complained about the noisy fans, but they were specifically talking about the noise when run at 12V.  When run in "silent" mode at 5V, these fans perform very quietly indeed.  The GS1000, which was the variant in all the reviews, comes with 5 120mm fan vents, but only 2 fans.  The other 3 are optional.  With the extra 2 92mm fans sported by the Plus on the front drive enclosure, plus the 3 optional fans, plus the watercooling planned for this rig, I have high hopes that the airflow will be sufficient to cool any un-watercooled components, even in silent mode.

7.4.10

Choosing a CPU

While AMD CPU's offer the best mid-range performance for the money, at the top end they aren't even in the race.  In order to avoid CPU bottlenecks in later phases, with multi-GPU's, I'm gonna need the fastest chip available - and that's an Intel.

At the moment the Gulftown Core i7-980X Extreme Edition is king, but it is also very expensive.  I can pick up a Bloomfield Core i7-930 for less than a third of the price.  It has 2 fewer processing cores and a lower clock speed, but it is still a socket LGA1366 chip, with the associated higher system bus speed.  The necessary X58 chipset gives it access to more PCIe lanes than the P55 chipset used by socket LGA1156 CPU's - and 3-channel memory, too.

It looks like socket LGA 1366 is here for a while.  All of the latest high-end motherboards use it, as do Intel's latest CPU's.

So, I'm gonna go for the i7-930 and hope that watercooling and overclocking it will let it hold up its end until the higher-end chips come down in price.  If the i7-980X EE drops by just a third, over the next few months, it will pay for itself.  Meanwhile, I get to spend the difference on other goodies instead - like a pair of solid state drives for the system partition, to make boot and load times fly by!

Choosing a Graphics Board

Although reviews have nVidia's GeForce GTX 480 as the fastest GPU, the fastest graphics card available today is ATi's dual-GPU Radeon HD 5970, by a clear margin.  While in theory, quad-SLI GTX 480's would get me the best graphics performance possible, I have decided to go down the ATi route for this build, simply because I can get to quad-CrossfireX with only 2 cards.  Given the heat output and power draw of the GTX 480, it seems unlikely that dual GPU boards will be available in the near future.


Using only 2 boards has a number of benefits:

  1. The cards take up less space in the case, leaving more room to route coolant tubes, making it easier to work on and improving airflow.
  2. 4 HD 5970 GPU's generate less heat than 4 GTX 480's, making it easier to keep them cool.
  3. Only 2 PCIe x16 slots are required, rather than 4, making it easier to find a compatible motherboard.  (Boards with 4 PCIe x16 slots capable of x16/x16/x16/x16 are uncommon, as the Intel X58 chipset does not support that many PCIe lanes without help from at least 2 nVidia nForce 200 chips.  I have yet to find a motherboard that has 4 full-speed PCIe slots and supports both SLI and CrossfireX.)
There are some reviews that suggest that 3-SLI GTX 480 GPU's might perform as well as 4 HD 5970's, but I am unconvinced.  Different games and different benchmarks seem to favour different GPU's and there seems to be no clear leader there.  When nVidia brings out their next generation of cards, I'll re-evaluate the ATi choice - assuming ATi's next generation don't grab the lead again!

Preliminary musings

Several things seem clear from the start:
  1. One GPU won't do the trick - I'll need CrossfireX or SLI.
  2. Given the potential performance gains, I'm gonna wanna overclock.
  3. With (overclocked) high-end CPU and GPU(s) generating more and more heat, watercooling will be necessary sooner or later.  In fact, for a rig of the scale that this one is likely to grow to, it will have to be a fairly beefy watercooling rig at that.
  4. I'm going to need a big, well-designed case, with enough room for several foot-long PCIe boards.

6.4.10

Introduction

Every few years, I get the urge to see what the state of the art in 3D games looks like now.  Last time was in 2007, when I built a dual 8800 GTX SLI rig and played FEAR for a bit.  That was when I discovered that building a modern gaming rig requires a fair bit more expertise than playing Doom on my first 486 did, over a decade and a half ago.  Computers have evolved since then.

From cooling problems, to space issues in the case, my last effort turned out to be more difficult than expected and the results were somewhat mediocre, with (from memory) frame rates of around 50fps at 1680x1050, but frequent slow-downs, lock-ups and stuttering.  This time it's going to be different.  I'm planning to build an extreme gaming PC that will play the best-looking games at their best-looking settings.

This blog will log the evolution of this PC, as I build it up from nothing, into a machine capable of maxing out any game that interests me.  I should point out that I am no expert at building gaming rigs - I'm a bit behind the times and I will be learning as I go - so don't expect to find expert advice here!

Just now, my rig is in the planning phase; it exists only in my imagination.  I don't plan to begin the physical build until June 2010, as there is a fair bit of research to be done - and then parts to be ordered - before I begin.  (And also because I am on holiday in France until May!)

After that, I'll begin building the rig in phases, in part due to financial constraints and in part so that I can see the results and learn from each phase of the build before investing in the next phase.  At the end of each phase, I'll publish benchmark results, so that those interested can see the mileage I'm getting from my efforts.  Meanwhile, I'll publish my build plan and parts list on this blog, along with the reasons for my choices.  

Over time, I'll continue to upgrade the rig - either until it can max out all the games that I am playing at the time, or until it becomes unreasonably difficult or expensive to achieve better performance.  Eventually, I suppose, I'll reach a technology dead-end where, despite my best efforts at "future-proofing", I'll be left with an obsolete piece of junk.  So, in a way, this will also be an experiment in how far it is possible to get with incrementally upgrading a computer, before you have to throw it away and start again.

Any comments or feedback on the plan are welcome.  If you know something I don't, feel free to enlighten me.  :-)