For most of the last decade, the graphics card industry has been pretty much a two-horse race. nVidia, having taken out and absorbed 3dfx along the way, has been sparring with ATi ever since.
What has become apparent to me, at least on a basic level, is that progress in the graphics industry seems to hinge on a few key areas, specifically:
– shaders (stream processors),
– bus capacity (or memory width in bits),
– clock speed of GPU and memory (in MHz or GHz),
– amount of onboard memory (in MB or GB),
– DirectX compliance.
The philosophy, pretty much, has been to crank up the first four items on that list whilst keeping pace with each new DirectX release, all helped along by the ongoing shrinking of transistors.
What was discovered with CPUs years ago was that simply increasing transistor count and clock speeds was not going to cut the mustard. With the Pentium 4 in particular, heat output and power consumption became a real problem, and a fundamental change in architecture was required. That change culminated in the Intel “Core” platform, which focussed more on efficiency and parallelism than raw speed. The current i7 CPUs offer up to six physical cores which, thanks to Hyper-Threading, provide parallel processing across twelve virtual cores.
It is my belief that the graphics card industry needs a “reboot” of sorts. We already have SLI (and its equivalents) to let two or more GPUs work together for a performance boost, but we also need improvements in power consumption and a reduction in heat output. At the top end we are seeing triple-slot cards, purely because of the huge fans required to keep a single graphics card cool. I am no expert on GPU architecture, but it may be more effective to have sixteen lower-powered GPUs on one card, each handling a portion of the screen, rather than one or two powerful GPUs.
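To make the idea concrete, here is a minimal sketch of how a frame might be divided among many modest GPUs. This is purely illustrative: the function name, the band-per-GPU scheme, and the sixteen-GPU figure are my assumptions, not any real driver API.

```python
def tile_frame(width, height, num_gpus):
    """Split a frame into num_gpus horizontal bands, one band per GPU.

    Hypothetical sketch only -- returns (gpu_index, y_start, y_end)
    assignments; leftover rows from uneven division go to the last GPU.
    """
    band = height // num_gpus
    tiles = []
    for gpu in range(num_gpus):
        y_start = gpu * band
        # The last GPU absorbs any remainder rows.
        y_end = height if gpu == num_gpus - 1 else y_start + band
        tiles.append((gpu, y_start, y_end))
    return tiles

# A 1920x1080 frame across sixteen GPUs: sixteen bands of roughly
# 67 rows each, each small enough for a cooler, lower-powered chip.
assignments = tile_frame(1920, 1080, 16)
```

Each GPU would then only need the fill rate and memory bandwidth for its own band, which is the whole appeal: many cool, low-power chips instead of one furnace.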
At any rate, enthusiasts will still go for the top end, but I think nVidia and ATi have an innovative challenge ahead of them to keep up the pace for the next decade.