Random question to hardware guys about CPU/GPU speeds...
The thread about the clock multiplier and CPU speeds got me thinking about CPU vs. GPU speeds. Basically, what makes the speed difference so big between current CPUs and the fastest video card "GPU" chips? Is it just the design route AMD/Intel have chosen vs. Nvidia/ATI? Is it something inherent in a general-purpose CPU vs. one made just for graphics? Do the CPU guys just focus more on clock speeds because of marketing? If so, could we have desktop CPUs that are just as powerful as, say, a 3 GHz P4 while running at only 500 MHz, if they were designed differently?
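To put numbers on what I mean (every figure here is made up just for illustration), a chip doing more work per clock could in principle match a faster-clocked one:

Code:
    # Back-of-envelope comparison: a fast, narrow chip vs. a slow, wide one.
    # All numbers below are invented purely to illustrate the idea.

    def throughput(clock_hz, ops_per_cycle):
        # Peak operations per second = clock rate x useful work per cycle.
        return clock_hz * ops_per_cycle

    # Hypothetical 3 GHz chip retiring ~1 useful op per cycle.
    narrow = throughput(3.0e9, 1)

    # Hypothetical 500 MHz chip doing 6 ops per cycle in parallel.
    wide = throughput(500e6, 6)

    print(f"3 GHz x 1 op/cycle:    {narrow:.1e} ops/s")
    print(f"500 MHz x 6 ops/cycle: {wide:.1e} ops/s")  # same peak rate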
I think I'm asking questions that could take a semester's worth of teaching to explain, but I have nothing better to do at the moment. :D
Re: Random question to hardware guys about CPU/GPU speeds...
The answer is actually pretty simple.
Originally Posted by Jason Becker
Both AMD and Intel have legions of circuit layout engineers who spend months tuning circuit paths, not only for best performance but also to maximize clock rate for their particular architecture. They can afford to do this because the selling price ranges from $50 to $750.
Nvidia and ATI use "off-the-shelf" tools to assemble their chip layouts. Some hand-tuning and hand-layout occurs, but most of it is very cookie-cutter stuff, which simply can't run at multi-GHz speeds. The graphics guys can't afford to hire hundreds of circuit layout engineers, because all their products cost well under $100, and many cost under $20. The tradeoff is that these design tools give them much shorter design cycles, at the cost of raw clock speed.
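To put rough numbers on that cost argument (every figure below is invented just to illustrate the point):

Code:
    # Rough amortization of one-time hand-layout engineering (NRE) cost
    # over unit volume. All numbers here are invented for illustration.

    def nre_per_chip(engineers, cost_per_engineer_year, months, units_sold):
        # Spread the one-time engineering cost across every chip sold.
        nre = engineers * cost_per_engineer_year * (months / 12)
        return nre / units_sold

    # Hypothetical CPU: 300 layout engineers for 18 months, 30M units sold.
    cpu = nre_per_chip(300, 200_000, 18, 30_000_000)   # ~$3.00 per chip

    # Hypothetical GPU: same effort spread over far fewer units.
    gpu = nre_per_chip(300, 200_000, 18, 2_000_000)    # ~$45.00 per chip

    print(f"Hand-layout NRE per CPU sold: ${cpu:.2f}")
    print(f"Hand-layout NRE per GPU sold: ${gpu:.2f}")

With those made-up figures, a few dollars of layout effort disappears into a $500 CPU, but $45 on a part that sells for $20 is a non-starter.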
Wow Lloyd! That was a great explanation! :)
You should get a job as the editor on a tech site or something.
Hmm, that's an answer I didn't expect. So if the graphics makers had a big enough budget, we could have video chips running at 1.5 GHz or more? Interesting. I wonder if the fast-paced six-month cycle is really best in the long run. It seems like hardware is outstripping games more and more. It's been a year since the first DX9 cards came out, and we have like 2 or 3 commercial games that use DX9 features. It's going to be a year or more until that's commonplace. I wonder if ATI and Nvidia are best served by this six-month cycle, or if it's just marketing having too much influence (gotta keep moving up). Maybe spreading out the cycles would be better in the long run for them and for consumers. I mean, the Athlon and the P4 are still the CPUs used in new PCs, and they're both, what, 3-4 years old now?
It's more a cost issue than a time-to-market one. If you want to pay $1,500 for that Radeon 9800 XT, then ATI might be able to afford to hand-tune the GPUs.
Originally Posted by Jason Becker
In a sense, though, the graphics guys are migrating to longer architecture lifespans. The Radeon 9800 XT is actually pretty similar to the original 9700.
However, there are other technologies in the works that can help them speed up their graphics cores.
Our labs develop new materials for high-speed CPUs, so we chat quite a bit with the folks at IBM, Intel, AMD, etc. As you know, these guys have pretty impressive roadmaps for where they need to be (and thus the specs required) one year from now, five years from now, etc.
I'm not familiar with the folks who are behind GPUs. Do they also have technology and product specs roadmapped out as extensively as the CPU folks?
What I'm thinking about is some new tools that are suitable for increasing the speed of standard logic, not just graphics stuff:
Originally Posted by jeff lackey
It's well suited for anything that has lots of logic but not much embedded memory (the tools can enable 2+ GHz speeds for standard logic, but not for memory).