
Banned · Joined · 23,263 Posts · Discussion Starter · #1
4/22/2009 by: Theo Valich


Over the past six months, we have heard various bits and pieces of information about GT300, nVidia's next-gen part. We decided to stay silent until the information was confirmed by multiple sources, and now we feel confident enough to disclose what is cooking in Santa Clara, India, China and other nVidia sites around the world.
GT300 isn't the architecture that was envisioned by nVidia's Chief Architect, former Stanford professor Bill Dally, but this architecture will give you a pretty good idea of why Bill told Intel to take a hike when the larger chip giant from Santa Clara offered him a job on the Larrabee project.
Thanks to Hardware-Infos, we have managed to complete the puzzle of what nVidia plans to bring to market a couple of months from now.

What is GT300?


Even though it shares its first two letters with the GT200 architecture [GeForce Tesla], GT300 is the first truly new architecture since SIMD [Single Instruction, Multiple Data] units first appeared in graphics processors.
The GT300 architecture groups processing cores in sets of 32, up from 24 in the GT200 architecture. The bigger difference is that GT300 parts ways with the SIMD design that dominates today's GPUs: GT300 cores rely on MIMD-like [Multiple Instruction, Multiple Data] operation, with all units working in MPMD mode and executing simple and complex shader and compute operations on the go. We're not exactly sure whether we should keep using the terms "shader processor" or "shader core", since these units are now almost on equal terms with the FPUs inside the latest AMD and Intel CPUs.
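To make the SIMD vs. MIMD point concrete, here is a minimal CUDA sketch of my own (the kernel name and branch condition are made up, nothing GT300-specific). Threads branch on their own data; on a SIMD-style GPU like GT200, a warp shares one instruction stream and has to walk through both sides of the branch with part of the warp masked off, whereas the MIMD-like cores described above could in principle follow each path independently.

// Illustrative only: a kernel whose threads branch on their own data.
// On a SIMD-style GPU (GT200 and earlier), the 32 threads of a warp share one
// instruction stream, so both sides of the branch execute one after the other
// with inactive threads masked off. MIMD-like cores, as the article describes
// them, could in principle follow each path independently.
__global__ void divergent_shade(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    if (in[i] > 0.5f)
        out[i] = sqrtf(in[i]);      // "complex" path
    else
        out[i] = in[i] * 2.0f;      // "simple" path
}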
GT300 itself packs 16 groups of 32 cores - yes, we're talking about 512 cores for the high-end part. That count alone raises GT300's computing power by more than 2x compared to the GT200 core. Before the chip tapes out there is no way anybody can predict working clocks, but if the clocks remain the same as on GT200, we would have more than double the computing power.
If, for instance, nVidia gets a 2 GHz clock for the 512 MIMD cores, we are talking about no less than 3 TFLOPS in single precision. Double-precision performance is highly dependent on how efficient the MIMD-like units turn out to be, but you can count on a 6-15x improvement over GT200.
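For what the 3 TFLOPS figure works out to, here is a back-of-the-envelope sketch. The 3 FLOPs per core per clock (MAD + MUL, the way GT200's single-precision peak was usually quoted) is my assumption, not something the article states; this is host-only code, nothing runs on a GPU.

// Rough peak-throughput arithmetic under the assumptions stated above.
#include <cstdio>

int main()
{
    const double flops_per_core_per_clock = 3.0;   // assumed MAD + MUL dual issue

    // GT200 reference point (GTX 285): 240 cores at a 1.476 GHz shader clock.
    double gt200 = 240 * 1.476e9 * flops_per_core_per_clock;

    // GT300 as speculated: 512 cores at a hypothetical 2 GHz clock.
    double gt300 = 512 * 2.0e9 * flops_per_core_per_clock;

    printf("GT200 peak: %.2f TFLOPS\n", gt200 / 1e12);   // ~1.06
    printf("GT300 peak: %.2f TFLOPS\n", gt300 / 1e12);   // ~3.07
    return 0;
}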

This is not the only change - cluster organization is no longer static. The Scratch Cache is much more granular and allows for greater interaction between the cores inside a cluster. GPGPU, i.e. GPU computing, applications should really benefit from this architectural choice. When it comes to gaming, the obvious question is: how good can GT300 be? Do bear in mind that this 32-core cluster will be used in next-generation Tegra, Tesla, GeForce and Quadro parts.
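Today's CUDA already exposes a per-cluster scratch space as __shared__ memory, so here is a rough sketch of what "cores inside the cluster" cooperating through it looks like: one thread block of 32 threads summing its inputs through on-chip scratch storage. How much more granular GT300's Scratch Cache actually ends up being is still speculation; the kernel below is illustrative, launch it with 32 threads per block.

// Cores of one cluster (one 32-thread block) cooperating through the
// on-chip scratch storage that CUDA exposes as __shared__ memory.
__global__ void block_sum(const float *in, float *out)
{
    __shared__ float scratch[32];            // one slot per core in the cluster

    int tid = threadIdx.x;
    scratch[tid] = in[blockIdx.x * blockDim.x + tid];
    __syncthreads();

    // Simple tree reduction inside the cluster's scratch storage.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (tid < stride)
            scratch[tid] += scratch[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        out[blockIdx.x] = scratch[0];
}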
This architectural change should result in a dramatic increase in double-precision performance, and if GT300 packs enough registers, both single-precision and double-precision performance might surprise all the players in the industry. Given the timeline on which nVidia began work on GT300, it looks to us like the GT200 architecture was a test run for the real thing coming in 2009.
Just like a CPU, GT300 gives direct hardware access [HAL] to CUDA 3.0, DirectX 11, OpenGL 3.1 and OpenCL. You can also program the GPU directly, though we're not exactly sure whether developing such a solution would be financially feasible. The point is that now you can do it. It looks like Tim Sweeney's prophecy is slowly but surely coming to life.
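For reference, this is roughly what "direct programming on the GPU" already looks like through the CUDA runtime API: a complete, minimal host program. The kernel, buffer size and scale factor here are placeholders of my own, not anything GT300-specific.

// Minimal end-to-end CUDA runtime example: allocate, copy, launch, copy back.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;
    float *host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);   // run on the GPU
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("first element after scaling: %f\n", host[0]);   // 2.0
    cudaFree(dev);
    delete[] host;
    return 0;
}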

http://www.brightsideofnews.com/news/2009/4/22/nvidias-gt300-specifications-revealed---its-a-cgpu!.aspx


:eek:


Should this be true, it fully excuses the rebranding fiasco.
And it also explains why nVidia was so cocky about Larrabee sucking :p
 

Curiously Cheddar · Joined · 2,077 Posts
Let's hope it lives up to the hype. Really interested in how AMD intends to counter too.
 

The one and only · Joined · 3,660 Posts
Excellent, hopefully my folding for EVGA will net me a free gtx300 :)
 

No sir, I don't like it. · Joined · 5,571 Posts
Dammit! <Throws GTX285 in the trash>

Ah, well. It's a good thing I haven't yet begun building my new system.
 

Level 9998 · Joined · 9,384 Posts
I keep imagining these to be big brutes at CUDA...

...then gaming performance would at most be a bit shy of GTX 295 at launch. No, I'm actually quite confident that GTX 380 would be a tiny little bit slower than GTX 295 in a number of games... and then some. :p And it'd most likely be priced at $599.99 MSRP at launch, whereas GTX 295 would be dropped to $399.99 or $449.99 MSRP. But the good thing with GTX 380 is that you won't have to deal with SLI. The bad thing might be heat...

But those are just predictions. Usually, I'm not right... :innocent:
 

The one and only · Joined · 3,660 Posts
Yes, I'll be sure to get an aftermarket cooler for this one; right now my 260 sounds like a jet at 90% fan.
 

Registered · Joined · 2,882 Posts
If they can really do it, and still go lower than $600, then kudos to them. (Which I doubt.)

I hope they don't go the 3DFX way. Funny I'm seeing some similarities right now.....

My guesses :

Realistic gaming: at the enthusiast level, GT300 will be about 50% faster than the GTX 285.
Computing performance: it will rival or even beat the Core i7. (LOL, missing the point like the PS3 did. IMHO.)

While for the low end, they'll just relabel their old GT200 video cards another level or two.

My rough estimate? Decent for gaming. Wondrous for servers/specific users. Useless for low end users? :p

I don't think DirectX is in any danger of dying anytime soon; it's too widespread. That's why my guess is the gaming performance gain won't be as dramatic.
 