The official lineup is:
Radeon RX 6900 XT, 80 CU, $999, 300 W, December 8
Radeon RX 6800 XT, 72 CU, $649, 300 W, November 18
Radeon RX 6800, 60 CU, $579, 250 W, November 18
AMD showed off benchmarks with the 6900 XT mostly a little faster than the RTX 3090, the 6800 XT mostly a little faster than the RTX 3080, and the RX 6800 handily beating the RTX 2080 Ti. The big question is how representative those benchmarks are. Given AMD's history of mild cherry-picking in its benchmarks, my guess is that AMD has priced the cards in line with their real performance relative to the RTX 3090, 3080, and 3070.
All three of the new GPUs will have 16 GB of memory. That's a lot more than the 10 GB of an RTX 3080 or the 8 GB of an RTX 3070. They also have a 128 MB "infinity cache", which I'm guessing is just a marketing name for a big last-level cache. I've explained how that could help with graphics in the past. AMD also claims that it helps a lot with caching ray-tracing data. With only a 256-bit memory bus, AMD's cards will have a lot less memory bandwidth than the RTX 3080 or 3090, but the large cache might roughly make up for that in graphics.
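For rough scale (the 16 Gbps GDDR6 speed below is my assumption, and the cache hit rate and on-die cache bandwidth are outright made up for illustration), the arithmetic looks something like this:

    # Back-of-the-envelope bandwidth sketch. The GDDR6 speed, the cache hit
    # rate, and the cache bandwidth are illustrative assumptions, not AMD's
    # published numbers.
    bus_width_bits = 256                 # RX 6000 series memory bus
    data_rate_gbps = 16                  # assumed per-pin GDDR6 speed
    dram_bw = bus_width_bits * data_rate_gbps / 8
    print(dram_bw)                       # 512.0 GB/s, vs ~760 GB/s on an RTX 3080

    # Crude serial model: some fraction of memory traffic hits the 128 MB
    # cache at on-die speed, and the rest goes out to DRAM.
    hit_rate = 0.58                      # hypothetical cache hit rate
    cache_bw = 2000.0                    # GB/s, a made-up on-die figure
    effective = 1 / ((1 - hit_rate) / dram_bw + hit_rate / cache_bw)
    print(round(effective))              # ~901 GB/s under these assumptions

Under those made-up numbers, the effective bandwidth lands in the neighborhood of the RTX 3090's 936 GB/s, which is presumably the sort of trade AMD is counting on.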
The new cards do support DirectX 12 Ultimate, and that includes ray-tracing. That's largely expected, but still good to have it confirmed.
AMD is also pushing "smart access memory". Basically, if you pair a Ryzen 5000 series CPU with a Radeon RX 6000 series GPU, the CPU gets full access to the GPU's memory. Of course, a CPU can already tell a GPU to put stuff into memory, process it, and send the results back to the CPU. There is a little bit of memory reserved for the GPU's internal use, but giving the CPU explicit access to that seems like a dumb idea to me. I'm pessimistic that "smart access memory" will ever do anything useful; to me, it sure looks like a marketing gimmick. Or as I often say, any product or concept that feels the need to call itself "smart", isn't.
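For what it's worth, "smart access memory" appears to be AMD's branding of the PCIe resizable BAR feature: without it, the CPU can only map VRAM through a fixed window of roughly 256 MB; with it, the whole 16 GB is CPU-addressable. Here's a toy model of that difference (a sketch of the concept, not how a driver actually moves data):

    # Toy model of resizable BAR (illustrative only; real drivers mostly
    # hide the window behind DMA transfers). Without resizable BAR, the
    # CPU-visible slice of VRAM is a fixed ~256 MB aperture, so a large
    # upload has to be staged through it in pieces; with it, the CPU can
    # address all 16 GB directly.
    APERTURE = 256 * 2**20               # classic BAR window into VRAM
    VRAM = 16 * 2**30                    # 16 GB on the new cards

    def staging_steps(payload_bytes, window=APERTURE):
        """Number of window-sized pieces a copy gets staged through."""
        return -(-payload_bytes // window)   # ceiling division

    print(staging_steps(4 * 2**30))               # 4 GB upload: 16 pieces
    print(staging_steps(4 * 2**30, window=VRAM))  # SAM-style mapping: 1

Whether removing that staging actually matters for games is exactly the open question, since drivers already hide most of it behind DMA transfers.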
Comments
They may win the battle for the top spot, but lose the whole battleground.
If they continue down this road, there will be no more PC gaming, because people on average incomes will just buy consoles.
It's easy to cram top-priced components into some monster piece of hardware that is quickly growing bigger than a whole PC and draws as much power as a toaster oven.
It's hard to make a card good enough to be both affordable and effective.
Whoever makes that will be the true winner.
The Radeon RX Vega 64 was sure a miss, as it wasn't competitive with the GTX 1080 Ti. But that's really the only generation AMD has had in the last 13 years that wasn't competitive where it was supposed to be. The RX 5700 XT and RX 480 weren't competitive at the high end, but small dies like that aren't supposed to be. That's like claiming that Nvidia is a failure because the GTX 1660 isn't nearly as fast as the Radeon RX 5700 XT. A small die isn't supposed to be a high-end card.
Need benchmarks, as you say, to make sure we're comparing apples to apples.
Remember how Fermi was something like 12x as fast as Evergreen at tessellation? That sure didn't matter in actual gameplay, even in games that used tessellation. Well, except for that one random, Nvidia-sponsored game that put invisible water under the level, massively tessellated it, and then discarded it.
It's not unusual for synthetic benchmarks to show really lopsided results. For example, a GeForce RTX 2080 Ti is a lot faster than a Radeon RX Vega 64 in nearly all games. But if you do a synthetic benchmark of local memory bandwidth, the Vega 64 will win by a wide margin and be something like 60% faster than the RTX 2080 Ti. If you do a synthetic benchmark of L2 cache bandwidth, the RTX 2080 Ti will probably be several times as fast as the Vega 64. Those are both likely to be more relevant to actual game performance than the synthetic benchmark that you're citing. And that's without wandering way off into the weeds to benchmark performance on an instruction that one architecture has and the other doesn't.
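For what it's worth, bandwidth microbenchmarks all have the same basic shape: stream a buffer far larger than whatever cache you're not trying to measure, and time it. A minimal host-side sketch of the idea (numpy over system RAM standing in for a GPU kernel over local or device memory):

    # Minimal STREAM-style bandwidth microbenchmark. This one times host
    # RAM; a GPU version runs the same idea as a copy kernel over the
    # memory being measured.
    import time
    import numpy as np

    N = 2**27                            # 128M float32s = 512 MB per buffer
    a = np.ones(N, dtype=np.float32)
    b = np.empty_like(a)

    start = time.perf_counter()
    np.copyto(b, a)                      # reads 512 MB and writes 512 MB
    elapsed = time.perf_counter() - start
    print(f"{2 * a.nbytes / 2**30 / elapsed:.1f} GiB/s")

A test like that exercises exactly one subsystem, which is both why it's useful and why it can be wildly unrepresentative of whole-game performance.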
So, in order to demonstrate the effects of smart access memory, would you (1) pair a Ryzen 5000 with a 6900 XT and then with a 3090, and compare the results, or (2) pair a 6900 XT with a Ryzen 5000 and then with an i9-10900? Or both?
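Since smart access memory is apparently just a firmware toggle on supported boards, the cleanest comparison is the same Ryzen 5000 + Radeon RX 6000 system with the feature switched on and off; the cross-vendor pairings mainly show whether other combinations leave anything on the table. A sketch of the matrix (the specific parts named here are examples, not a published test setup):

    # Possible test matrix for isolating smart access memory. The specific
    # CPU and GPU models are illustrative examples.
    configs = [
        ("Ryzen 9 5900X", "RX 6900 XT", "SAM on"),
        ("Ryzen 9 5900X", "RX 6900 XT", "SAM off"),  # cleanest A/B pair
        ("Ryzen 9 5900X", "RTX 3090",   "no SAM"),
        ("i9-10900",      "RX 6900 XT", "no SAM"),
    ]
    for cpu, gpu, sam in configs:
        print(f"{cpu:14} + {gpu:11} ({sam})")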