Next Generation Emulation

AMD Radeon or nVidia GeForce?


  • Total voters: 68
641 - 660 of 671 Posts

... · 302 Posts
Previously:
nVidia - RTX brings real time ray tracing and DLSS to the plate.
AMD - So how many games feature this right now? Just three? Expensive gimmick LOL

Currently:
AMD - Radeon 7 comes with 16gb of high bandwidth memory.
nVidia - I heard that cost you $300 of the overall price. How many games actually use that much RAM anyway?
AMD - Shut up, we're future-proofing our cards!
Some 4K/8K texture packs for games (mods) and emulators could theoretically need 24 GB of VRAM (75% of a 32 GB card) for optimal performance.
Some NVIDIA cards lose performance once they use more than 75% (in some cases 87.5%) of their VRAM, and some are unable to perform true async compute (the GTX 970, for instance).
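If you take the 75%/87.5% ceilings above at face value, the arithmetic works out like this. A toy sketch only; `effective_vram` is a made-up helper, not anything from a real tool:

```python
# If a card only performs well below some fraction of its VRAM,
# the "effective" capacity is total * fraction.
def effective_vram(total_gb, safe_fraction=0.75):
    """Usable VRAM in GB under the claimed safe-usage fraction."""
    return total_gb * safe_fraction

# A 0.75 ceiling on a 32 GB card leaves 24 GB actually usable:
print(effective_vram(32))          # 24.0
# The GTX 970's fast partition is 3.5 of 4 GB, i.e. a 0.875 fraction:
print(effective_vram(4, 0.875))    # 3.5
```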
 

Registered · 6,868 Posts
Previously:
nVidia - RTX brings real time ray tracing and DLSS to the plate.
AMD - So how many games feature this right now? Just three? Expensive gimmick LOL

Currently:
AMD - Radeon 7 comes with 16gb of high bandwidth memory.
nVidia - I heard that cost you $300 of the overall price. How many games actually use that much RAM anyway?
AMD - Shut up, we're future-proofing our cards!
I don't know if it's worth using HBM2 instead of GDDR, but 16gb of VRAM isn't as wasteful as it seems.
People who spend that kind of money typically do so to play games at 4K, and games have already moved past 8gb VRAM. AMD is also marketing their card toward content creation, where they claim a large amount of VRAM will be useful (10+ gb).

Another issue is the RTX features struggling on the 2060. I think DXR looks gorgeous, but the performance issues made people wonder why it was added to the weaker cards. If AMD added 16gb to every single card, including the 1080p cards, that would be quite dumb; adding an expensive feature to the 4K/workload card makes more sense.
 

Meow Meow Meow · 1,386 Posts
And all that power consumption savings went where exactly on the Radeon VII?

Another issue is the RTX features struggling on the 2060. I think DXR looks gorgeous, but the performance issues made people wonder why it was added to the weaker cards.
Remember DLSS? That's how they plan to do it with the 2060. If you're rendering at 1440p, then the ray-traced render will be done at 1080p and upsampled to 1440p. If you believe the hype, a lot of the performance penalty can be mitigated. The biggest problem with DLSS is that you need to train the neural network on your game first, so it isn't possible with every title in your library.
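The pixel arithmetic behind that claim is easy to check. The resolutions are the standard 1440p and 1080p; the helper function is just illustrative:

```python
# Rendering the ray-traced pass at 1080p instead of 1440p shades far
# fewer pixels; the upscale then recovers the output resolution.
def pixel_count(width, height):
    return width * height

native = pixel_count(2560, 1440)    # 1440p output target
internal = pixel_count(1920, 1080)  # 1080p internal render
saving = 1 - internal / native

print(f"internal render is {internal / native:.0%} of native")  # 56%
print(f"~{saving:.0%} fewer pixels to ray-trace per frame")     # ~44%
```

That ~44% reduction in shaded pixels is the headroom DLSS is supposed to buy back, assuming the upscale itself is cheap.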

u3L05 vh9jV: Are you suggesting this card is best for gamers who mod then? That's a very small demographic to go after.

I hope the Vega 56 version of the VII will have a saner RAM quantity for those of us who aren't doing 4K gaming, perhaps 12gb at most. Hopefully that will bring the price down significantly. As it stands, I don't view team red as a viable competitor right now.
 

Ya'ver drink Brazilian bold from fkn dunkn donuts! · 7,825 Posts
Vega VII evidently is going to receive Radeon Pro drivers.

That is fucking bonkers! It's a strange card design-wise, with so many questionable design choices (60 ROPs instead of the usual 64...), but for a (lol) 5,000-unit card with more than enough VRAM, having professional driver support at that price point tickles my taint.

That card just went from a firm lol to a casual maybe. If it can handle my rendering and the API is good enough to make use of those 16gb of fast-as-fuck memory, I may be inclined to buy. Fairly certain Navi isn't going to get such treatment.
 

Meow Meow Meow · 1,386 Posts
To think I was actually waiting for team red's response to the RTX series. Now I'll probably settle on getting an RTX 2060 next month; I need to replace the 670 in my current rig because PyTorch won't run on it anymore. :|
 

Better be better than yesterday · 4,087 Posts
So... did you expect Navi? Yeah? Then you'll have to Navi-gate elsewhere until it *MAY* be released somewhere around October 2019. The latest rumours on the "RTX Killer" cards (well, let's say GTX killers at least) say that they are delayed.

Source: https://www.digitaltrends.com/computing/amd-navi-graphics-delay-october/

It doesn't smell good for the Red team: delays, inconsistent high-range/mid-range products, and a nearly non-existent presence on the laptop market. I really miss the old ATI days.
 

Ya'ver drink Brazilian bold from fkn dunkn donuts! · 7,825 Posts
Just sold my R9 Nano and bought a Galax 1070 EX for my media pc.

Lacking HDMI 2.0 is a real bummer for 4K 60fps, and I'd still have kept the AMD if it hadn't cheaped out on ports.
 

Better be better than yesterday · 4,087 Posts
^
Using Jupyter notebooks, I trained ResNet models 18 to 152 on each CIFAR dataset with FP32 then FP16, to compare the time required for 30 epochs.
Ahem. Maybe you can translate this for me, @DinJerr? Because right now, I am completely lost.


So in the end, the point of this article is that RTX cards are better suited for A.I. use? In that case, I may be right: instead of using it for lighting, they should use it for improving artificial-intelligence scripts in video games.
 

... · 302 Posts
In short, all this has to do with raw power (GFLOPS) using the GPU as a coprocessor. Anyone can do that using OpenCL on almost any platform and modern hardware. Additionally, CUDA can be used in NVIDIA hardware to obtain better results.
In this case, they are comparing the new "optimizations" in the GeForce 20 series (native FP16) vs the GeForce 10 series (native FP32). It is to be expected that the GeForce 10 series performs worse in FP16 operations, because it wastes cycles doing them through native FP32 units; the reverse is true for the GeForce 20 series, since it needs two cycles to perform an FP32 operation.
I don't understand exactly what the "tensor core" really does, but by the description of the Volta series and the supposed "mixed precision", I suspect that the "tensor core" can somehow mix two different types of "precision" in fewer cycles.
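For what it's worth, the "mixed precision" idea can be emulated in NumPy to see the trade-off: store the inputs in FP16 but accumulate in FP32. This is only a software sketch of the concept, not what a tensor core literally executes:

```python
import numpy as np

# Inputs stored in half precision (2 bytes each)...
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float16)
b = rng.standard_normal((64, 64)).astype(np.float16)

# ...multiplied with the result kept in FP16 vs. accumulated in FP32.
pure_fp16 = a @ b                                    # FP16 result
mixed = a.astype(np.float32) @ b.astype(np.float32)  # FP32 accumulate

# FP16 storage is half the size of FP32:
print(np.float16(0).itemsize, np.float32(0).itemsize)  # 2 4
# The FP16 result drifts slightly from the FP32 one due to rounding:
print(float(np.max(np.abs(pure_fp16.astype(np.float32) - mixed))))
```

The point is the same one made in the post: half-precision inputs take half the bandwidth and storage, and the extra accumulation precision is what keeps the rounding error from piling up across the dot products.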
 

Better be better than yesterday · 4,087 Posts
In short, all this has to do with raw power (GFLOPS) using the GPU as a coprocessor. Anyone can do that using OpenCL on almost any platform and modern hardware. Additionally, CUDA can be used in NVIDIA hardware to obtain better results.
In this case, they are comparing the new "optimizations" in the GeForce 20 series (native FP16) vs the GeForce 10 series (native FP32). It is to be expected that the GeForce 10 series performs worse in FP16 operations, because it wastes cycles doing them through native FP32 units; the reverse is true for the GeForce 20 series, since it needs two cycles to perform an FP32 operation.
I don't understand exactly what the "tensor core" really does, but by the description of the Volta series and the supposed "mixed precision", I suspect that the "tensor core" can somehow mix two different types of "precision" in fewer cycles.
Thanks. If you'll allow another question:

Why not have two groups of cores in the GPU? One specialized in FP16 calculations, the other specialized in FP32 calculations? Wouldn't this "hybrid" GPU solve the speed issue?
 

... · 302 Posts
Thanks. If you'll allow another question:

Why not have two groups of cores in the GPU? One specialized in FP16 calculations, the other specialized in FP32 calculations? Wouldn't this "hybrid" GPU solve the speed issue?
I'm not a CPU/GPU architect, but having two different kinds of cores in a single GPU sounds impractical and unnecessary... or maybe they already exist?
"The speed issue" exists because the GF 10 series lacks the "tensor core" and the code is not optimized. In theory, two FP16 operations can be executed in a single FP32 cycle. That's why I mentioned the Volta series, they are recycled GF 10 series + the "tensor core".

Didactic reading (I have not read it yet):
GPGPU
 

No sir, I don't like it. · 5,451 Posts
Not really buying the whole "better for AI" thing. It doesn't make sense to use real-time AI in video games because characters/enemies/environments/etc. can only react and interact in a limited fashion due to time constraints on developers.

All of the AI-powered stuff I've come across through deep learning and data-set training is performed on a workstation, and the results are summarized in a neat little file that can be run on virtually any device.

DinJerr had previously posted about a real-time renderer that handles anime/cartoon material really well. It's able to do this because it was trained on that sort of material. If live action material was used, the renders would look really messed up.

Anyway, I've always thought of AI as just a buzzword that companies throw around to get people excited or obtain funding for yet another soon to fail or soon to greatly underwhelm project.

Yes, deep learning is something of a step forward for the future evolution of AI, as it will likely be needed for a "true" AI, and data-set learning is basically just a tool for creating very complex algorithms that programmers cannot easily translate to code by hand. But both of these methods are still just algorithms written by programmers at the end of the day, meaning the so-called AI is limited by the programmer's ability and imagination.
 

Meow Meow Meow · 1,386 Posts
DinJerr had previously posted about a real-time renderer that handles anime/cartoon material really well. It's able to do this because it was trained on that sort of material. If live action material was used, the renders would look really messed up.
Yup. This is the weakness of DLSS: you can't just "turn it on" and expect a game to look nice. You have to train the network first on what is considered nice, on a per-game basis, hence why so few titles currently support DLSS mode. This is also why you can't just turn on DLSS without DXR, for example, if the network wasn't trained on a non-DXR sample set. You need to generate the images for both the low-res and high-res versions, which is easier said than done because a lot of games (action games especially) are rarely static. So unless you jerry-rig your game engine to enable resolution switching on the fly for a paused frame, training the AI will be a major pain.
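A toy illustration of the paired-data problem described above: given a captured high-res frame, the low-res counterpart could be derived by nearest-neighbour downsampling so the network can learn the low-to-high mapping. Real pipelines render both resolutions in-engine; `downsample` here is just a stand-in:

```python
# Nearest-neighbour downsample of a frame held as a 2D list of pixels:
# keep every `factor`-th row and every `factor`-th column.
def downsample(frame, factor):
    return [row[::factor] for row in frame[::factor]]

high_res = [[x + y * 4 for x in range(4)] for y in range(4)]  # 4x4 frame
low_res = downsample(high_res, 2)                              # 2x2 frame

# One (input, target) pair for the upscaler's training set:
training_pair = (low_res, high_res)
print(low_res)  # [[0, 2], [8, 10]]
```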

Anyway, image upscaling is just one application of machine learning; see this paper about using ML to do frame interpolation (which previously relied more on optical flow):
Anyone working in video production will find this to be quite amazing.
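As a point of reference, the naive non-ML baseline for frame interpolation is a straight linear blend of the two neighbouring frames: cheap, but prone to ghosting on motion, which is exactly what flow-based and learned methods improve on. A minimal sketch with frames as flat pixel lists:

```python
# Linear blend of two equally sized frames at time t in [0, 1]:
# t = 0.5 gives the classic 50/50 "in-between" frame.
def blend_frames(frame_a, frame_b, t=0.5):
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

frame_a = [0, 100, 200]
frame_b = [100, 100, 0]
print(blend_frames(frame_a, frame_b))  # [50.0, 100.0, 100.0]
```

A moving edge averaged this way shows up as a translucent double image instead of an object at its halfway position, which is the ghosting the ML approach avoids.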
 

Better be better than yesterday · 4,087 Posts
Well, I'm still going to wait before putting money on an RTX laptop. However, after some investigation, it seems Lenovo is doing a great job with the Y series, especially the Y740:


  • Very good temperatures
  • Nice professional design with an aluminium chassis (no fancy red colors or anything to make you stand out stupidly from the crowd)
  • 144Hz screen
  • 16GB Dual Channel RAM
  • Place for a SATA HDD or SSD
The laptop starts at $1800. However, I will wait for the Y540, which costs half as much. I will lose the aluminium chassis, but I will keep the 2060 and gain a numeric keypad. I will have to wait until May, though.
 