"Let’s assume these Ampere cards do move on to 7nm . If they do, then this will be Nvidia’s first foray into 7nm. They will have had absolutely zero learnings, zero large scale field tests from which they can draw data.
AMD, on the other hand, will have had roughly two years of data to draw upon when they inevitably release their next generation cards. These cards will undoubtedly contain more efficient and more powerful hardware."
See what I did there?
Also, in what universe would a company close to bankruptcy, as was AMD a few years back, decide to attack on two major fronts simultaneously?
Their CPU position vs Intel was abysmal, while their GPUs were very competitive (2xx generation) to mildly competitive (3xx and later). It was a no-brainer: maintain the okay-ish GPU position while your CPU division regroups and attacks on its front.
Now that they have their CPU counteroffensive well under way and have recaptured large parts of the territory, their GPU division will work on catching up with Nvidia. What's so difficult to understand here?
At the moment the price difference between the RX 5600 XT and the RTX 2060 is $40/€40 if you look at actual in-store prices, and AMD is doing well enough; their cards are even somewhat better options than NVidia's.
I agree that AMD has an engineering deficit, but until NVidia manages to launch their 7nm products, AMD's process node advantage more than offsets that deficit.
First of all, if you only care about top end performance and not the price tag, you'll prefer Nvidia most of the time. For many years now, Nvidia has been willing to build huge dies on old process nodes, while AMD would typically prefer to move to a new process node with a small die. That AMD is on 7 nm and Nvidia isn't is not a historical anomaly. We saw similar things at 80 nm, 55 nm, 40 nm, and 28 nm. The 16/14 nm node is the only exception since AMD bought ATI, and that's largely because AMD was stuck having to produce their chips at Global Foundries while Nvidia could use TSMC.
Huge dies let you get top end performance that you just can't get with small ones. But the market for $1200 video cards that AMD cedes to Nvidia, and often the $700 tier as well, isn't a huge chunk of the market.
One thing you're missing is the price tags. If you go to Newegg and want the cheapest new cards you can find, the list prices are:
Radeon RX 5600 XT: $280
GeForce RTX 2060: $320
Radeon RX 5700: $330
It's worth noting that the RX 5700 has 8 GB of memory and a lot more bandwidth than the other two cards there. Nvidia may be claiming that the RTX 2060 is $300, but Newegg apparently didn't get that memo. It's a fair question whether prices are just taking a while to adjust or the $300 price tag is a marketing stunt rather than a real price.
As for AMD's updated RX 5600 XT BIOS, yeah, that's a last minute stunt. Unless your card says that it comes with the new BIOS, I'd assume that the old BIOS is the performance you'll get.
Even so, the Radeon RX 5600 XT is a third bin down of a GPU chip. Since when has AMD or Nvidia ever launched a full GPU chip and a salvage part of it, and then later another, further crippled salvage part, and had the latter actually be a good deal? The Radeon HD 4830 from 2008 is the most recent example that comes to mind. Further crippled salvage parts tend to be low volume parts, and are often sold in only one market, as that's all it takes to get rid of your stock.
Tensor cores are a waste of die space in a consumer part. Putting them in top end compute cards like the Tesla V100 makes sense, but in consumer hardware they just inflate production costs without offering any actual benefit to gamers. And however much people like to deny it, costs of production inevitably get factored into retail price tags. AMD made the right call by not having anything analogous. It's Nvidia that goofed on that round.
Nvidia does have a head start on real-time ray tracing. But realistically, it's going to be several years before that matters, and by the time it does, this generation's hardware isn't going to be able to handle it. The added experience might ultimately give Nvidia some benefit by the time it matters or it might not. AMD had four generations of hardware tessellation on the market before Nvidia's first, but that didn't end up being a huge, obvious advantage for AMD.
"Let’s assume these Ampere cards do move on to 7nm . If they do, then this will be Nvidia’s first foray into 7nm. They will have had absolutely zero learnings, zero large scale field tests from which they can draw data.
AMD, on the other hand, will have had roughly two years of data to draw upon when they inevitably release their next generation cards. These cards will undoubtedly contain more efficient and more powerful hardware."
7nm isn't produced by AMD. The chips are made by either TSMC or Samsung, and both AMD and NVidia are just buying that manufacturing tech.
Additionally, AMD's next generation GPUs will likely be on a 7nm EUV process instead of the current 7nm manufacturing process.
With NVidia, I can generally tell that x is more recent, therefore more powerful than y because of its naming scheme. A 1060 is more recent in the series 10xx than the 1040. And anything in series 10xx is more recent and more powerful than anything 9xx or below.
With AMD, I'm just like "fuck it!" *throws hands up in frustration.*
With NVidia, I can generally tell that x is more recent, therefore more powerful than y because of its naming scheme. A 1060 is more recent in the series 10xx than the 1040. And anything in series 10xx is more recent and more powerful than anything 9xx or below.
With AMD, I'm just like "fuck it!" *throws hands up in frustration.*
That's not how Nvidia's naming works, and by introducing a 16xx and a 20xx series at the same time they actually made the naming worse than what AMD is doing.
With NVidia, I can generally tell that x is more recent, therefore more powerful than y because of its naming scheme. A 1060 is more recent in the series 10xx than the 1040. And anything in series 10xx is more recent and more powerful than anything 9xx or below.
With AMD, I'm just like "fuck it!" *throws hands up in frustration.*
That's actually not true:
the 70 series from a generation ago is still stronger than the 50 series of this generation.
A 970 is better than a 1050.
Many people make the mistake of thinking the new generation is always better.
edit- Proof: http://gpuboss.com/gpus/GeForce-GTX-970-vs-GeForce-GTX-1050-Ti
Note that's even comparing to the Ti (I hadn't seen that), which is the "best" 1050 you can get.
The way it works is the initial number is the generation (9xx or 10xx) and the secondary number determines whether it's budget or not (more or less).
Like I said, Voodoo 2 FTW
With NVidia, I can generally tell that x is more recent, therefore more powerful than y because of its naming scheme. A 1060 is more recent in the series 10xx than the 1040. And anything in series 10xx is more recent and more powerful than anything 9xx or below.
With AMD, I'm just like "fuck it!" *throws hands up in frustration.*
That's not how Nvidia's naming works, and by introducing a 16xx and a 20xx series at the same time they actually made the naming worse than what AMD is doing.
True enough, but isn't the 20 designation for ray tracing? It's been a while since I delved into this madness.
I didn't read this nonsense post, because everyone knows Nvidia is faster, and I respect Radeon for not giving up and continuing to make cards. Good luck paying £1000-2000 for overpriced Nvidia cards. Half a year ago I bought an RX Vega 64 Nitro for £300 and it runs everything at ultra at over 100 fps. Good enough for me.
I'm an Nvidia guy myself but I don't really disagree - way overpriced.
For my next rig (or GPU upgrade, depending) I'll be seriously considering going AMD... I'm at a point where I have more than enough graphics power, and most of the games I play don't really require that much... edit- Looking at the games I play on Steam, many are a decade old or older lol
Bang for the buck is what I'll be looking at next, over raw power and features.
Also, I think I am part of the biggest market of graphics card buyers. I find the cheapest card that gives the most bang for the buck. Sometimes it's $200, sometimes $400. As long as it can play new games at reasonable settings with reasonable fps and lasts a couple of years.
Graphics cards are probably the easiest upgrade to research, with the easiest-to-obtain info, benchmarks, and useful reviews aimed at people with my level of knowledge. It doesn't matter if it's AMD or Nvidia - just the sweet spot of price and performance.
Also, I think I am part of the biggest market of graphics card buyers. I find the cheapest card that gives the most bang for the buck. Sometimes it's $200, sometimes $400. As long as it can play new games at reasonable settings with reasonable fps and lasts a couple of years.
Graphics cards are probably the easiest upgrade to research, with the easiest-to-obtain info, benchmarks, and useful reviews aimed at people with my level of knowledge. It doesn't matter if it's AMD or Nvidia - just the sweet spot of price and performance.
I think some of us looked at our rigs the same way baby boomers looked at their muscle cars: they would pay whatever it took (in terms of time and money) to shave a quarter second off their quarter-mile time.
Maybe a bad analogy and maybe not, but that's how I saw things. However, these days we've reached a point where you can game fairly well even with an 'off the shelf' computer, and those small increases are not really worth the huge price tag anymore... For me anyhow... Especially when most of the games I play don't require much power at all.
Clearly doing something right.
Meh... when I build I look around and get whichever fits my budget and performance goals. My first PC ever used an AMD 386-40 and I forget what video card but my first 3D video card was an ATI.
Over the years I've built PCs using probably just as many AMD CPUs and GPUs as Intel and Nvidia: whichever fits the bill when I'm ready to build. Brand loyalty in these things is for fools.
"Social media gives legions of idiots the right to speak when they once only spoke at a bar after a glass of wine, without harming the community ... but now they have the same right to speak as a Nobel Prize winner. It's the invasion of the idiots”
― Umberto Eco
“Microtransactions? In a single player role-playing game? Are you nuts?” ― CD PROJEKT RED
One other point about the article: the Radeon VII is a compute card that just happens to have a graphics driver. It's mildly surprising that the card exists at all, as with four stacks of HBM2, it's expensive to build. If you just want to play games, AMD would just as soon have you buy a Radeon RX 5700 XT. They don't feel any need to make the Radeon VII a compelling value for gaming as that's just not the point of the card.
With NVidia, I can generally tell that x is more recent, therefore more powerful than y because of its naming scheme. A 1060 is more recent in the series 10xx than the 1040. And anything in series 10xx is more recent and more powerful than anything 9xx or below.
With AMD, I'm just like "fuck it!" *throws hands up in frustration.*
There isn't a GeForce 1040. There is a GeForce GTX 1060, GTX 1050, and GT 1030. The GTX 1060 came out before the GTX 1050, which came out before the GT 1030. So no, you don't understand Nvidia's naming scheme either.
For both vendors' consumer parts, the first digit (or occasionally two) is the generation, and the next digit is relative performance within a generation. Since AMD bought ATI, they've had the 2000, 3000, 4000, 5000, 6000, 7000, 200, 300, 400, Vega, 500, and now another 5000 series. Meanwhile, Nvidia has had an 8000, 9000, 200, 400, 500, 600, 700, 900, 1000, 2000, and now 1600 series. You could argue that Nvidia's numbering is a little better than AMD's, but it's not a huge gap.
If you want a numbering scheme that makes sense, try the Sony PlayStation line.
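Just to make that concrete, here's a tiny Python sketch of the "first digit (or two) is the generation, next digit is the tier" convention described above. It's purely illustrative - the function and its name are my own, not anything either vendor publishes - and the comments flag exactly where a rule this simple falls apart (the 16xx/20xx overlap, AMD's RX 5700, the 970-vs-1050 case).

    # Rough sketch of the naming convention discussed above. Illustrative only:
    # the helper is my own assumption, and the 16xx/20xx overlap or AMD's
    # RX 5700 (generation 5, tier 700) are exactly where it guesses wrong.
    def split_model_number(model: str):
        digits = "".join(ch for ch in model if ch.isdigit())
        if len(digits) == 4:               # e.g. GTX 1060 -> ('10', '60')
            return digits[:2], digits[2:]
        if len(digits) == 3:               # e.g. GTX 970 -> ('9', '70')
            return digits[:1], digits[1:]
        raise ValueError(f"Doesn't fit the usual scheme: {model}")

    # The generation only tells you which card is newer, not which is faster:
    # ('9', '70') vs ('10', '50') is the 970-vs-1050 case from earlier.
    print(split_model_number("GTX 1060"))  # ('10', '60')
    print(split_model_number("GTX 970"))   # ('9', '70')
    print(split_model_number("GTX 1050"))  # ('10', '50')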
I agree that AMD has an engineering deficit, but until NVidia manages to launch their 7nm products, AMD's process node advantage more than offsets that deficit.
Maybe, but we don't really know how good the 7 nm process node is for GPUs compared to 12 nm. It might not be much better; surely there are reasons why Nvidia skipped it, other than stupidity. AMD and Nvidia both skipped the various 10 nm and 20 nm nodes as being unsuitable for discrete GPUs.
Most likely, Nvidia's "7 nm" GPUs will be on a 7 nm EUV node such as what TSMC is calling 7+. AMD is moving to that node as well, and it's a significantly better node than the 7 nm that they're on right now. It's also a major redesign as compared to plain 7 nm. TSMC is claiming that their "6 nm" node is the natural successor to 7 nm, while 7+ is totally different.
Furthermore, every new generation has a chance to change arbitrarily many things. At the end of 28 nm, AMD and Nvidia were basically tied on efficiency (comparing a Fury X to a GTX 980 Ti). Then they both did a die shrink, and suddenly Nvidia was way ahead.
Or to go back a little further, AMD's Tahiti and Fiji GPUs were both of the GCN architecture and both built on TSMC's 28 nm process node. Yet Fiji offered about double the energy efficiency of Tahiti. Apparently they found a lot of ways to make it more efficient that they hadn't by the time they launched the first chip. There might be a whole lot of things that they can fix about RDNA in subsequent chips, just as there were with GCN.
Or more dramatically, consider their previous VLIW architecture. The Radeon HD 2900 XT was a disaster, as it was hot, late, and slow. It was inferior to Nvidia's GeForce 8800 GTX in pretty much every way, in spite of being on a better process node. A little over a year later, AMD had fixed the architecture with the Radeon HD 4870, and suddenly had an enormous die size advantage over Nvidia that would last for about 3 1/2 years. AMD used that to cut prices so aggressively that Nvidia discontinued products before they had successors and completely withdrew from the market for consumer GPUs that cost over $100 for several months.
I'm not predicting how AMD and Nvidia will compare with their next generation of GPUs, likely on a 7 nm EUV node. I'm only saying that how they compare can flip around dramatically in a single generation and has in the past.
Well, I have to disagree with the author. He has been smelling too many Nvidia salts. What does Nvidia have to offer? Their ray tracing gimmick only works with their $500+ cards and even then still adds a huge performance hit. You really need a $1000+ card to actually get decent ray tracing performance.
DLSS is basically a complete disaster unless you are into blurry images. So people buying Nvidia are paying for all that extra junk on the die that, in reality, they will never use.
People that make G-Sync monitors have to pay an extra premium to Nvidia to use the tech, while AMD's FreeSync is just that: free. Hence buying a G-Sync monitor to use with your Nvidia card will cost you another $100-150. Far more monitors offer FreeSync. As for Nvidia's claim that G-Sync will work with FreeSync monitors, that is a very shaky claim.
I can get AMD cards at discounted prices; with Nvidia, not a chance.
AMD has been sticking it to Nvidia lately on price and will continue to do so. If you want to go with Nvidia, expect to pay a premium for the name.
Seems to me that AMD is doing great in the GPU market at present, and I don't see Nvidia doing much about it; they still cannot compete on price, which is the deciding factor in many buying decisions.
EVERY single manufacturer of computer peripherals marginally improves their product, because NEW sells even at ridiculous prices. It has been this way since Aureal, 3DFX, Sound Blaster, Nvidia, Radeon, Intel, etc., etc.: only give marginal improvements so you can sell the same thing year after year after year.
Millions... billions of dollars later, and after 30+ years we barely see any improvement in games or PCs. We still have subpar games only attaining 30-50 frames on super expensive rigs, console games are predominantly 28-30 frames, poorly optimized games, poorly done BIOSes, poorly done drivers, etc., etc. It's almost like we are moving backwards, but costs go up.
Like, what is the sense of making PCs 10x better and GPUs 10x better when games are only 10% better but run 50% worse? We never make headway.
Never forget Three Mile Island, and never trust a government official or company spokesman.
EVERY single manufacturer of computer peripherals marginally improves their product, because NEW sells even at ridiculous prices. It has been this way since Aureal, 3DFX, Sound Blaster, Nvidia, Radeon, Intel, etc., etc.: only give marginal improvements so you can sell the same thing year after year after year.
Millions... billions of dollars later, and after 30+ years we barely see any improvement in games or PCs. We still have subpar games only attaining 30-50 frames on super expensive rigs, console games are predominantly 28-30 frames, poorly optimized games, poorly done BIOSes, poorly done drivers, etc., etc. It's almost like we are moving backwards, but costs go up.
Like, what is the sense of making PCs 10x better and GPUs 10x better when games are only 10% better but run 50% worse? We never make headway.
Say what you will about game design decisions, but there is no plausible doubt that CPU and GPU hardware have vastly improved in the last 30 years, or even in the last 10. If you think that games aren't putting it to good use, then that's a software problem, and not the fault of AMD (or Nvidia or Intel).
While it's true that AMD is behind in the highest end cards, they are doing fantastic in the 1440p range.
The 5700 XT is the best bang for the buck if you're looking for 1440p gaming. You can preach features like DLSS and ray tracing all day long, but at the moment they are doing almost nothing for gamers.
First off, both features have to be implemented by the devs, and few games support either of them.
Also, unless you're playing at 1080p you can forget ray tracing and a decent frame rate, because it's still way too demanding.
Nvidia has convinced gamers they would be missing out on something if they didn't have these two features. So gamers buy into their marketing and still can't even use these features unless it's at 1080p or they have a $1000 GPU, and by the time ray tracing is optimized enough to use, you will have bought a whole new card. But don't worry, Nvidia will convince you there's some other tech you just have to have but can't use.
And you'll pay too much for it too... the future is bright!
DLSS is complete garbage and always will be, but real-time ray-tracing is likely to be important to gaming in the future. I wish people didn't constantly lump them together merely because Nvidia's Turing architecture introduced them both.
Author, kudos for posting hard truths. AMD has good marketing, better than Intel and Nvidia, but they don't have the best products to back it up. Consider that AMD has had to almost give away its CPUs: free games, free cooler, cheap price. For an inferior chip. Intel is still at the top of all gaming benchmarks. People have to rationalize buying AMD: "Well, it's only ten fps less," "Well, it's a good value," "Well, Intel and Nvidia are evil boogeymen companies that are evil!"
AMD has sold gamers on slow clock speeds and bad memory bandwidth in exchange for moar cores and threads!!! And yet a $180 six-core, six-thread Intel outperforms every Ryzen chip in gaming, and since the Intel price cuts, AMD has nothing left but marketing and budget gamer fanboys to keep them going.
You've got a really weird notion of "almost give away its CPUs". The cheapest of AMD's third gen Ryzen CPUs that currently dominate the enthusiast market is $180. For the third gen Threadripper parts that dominate the HEDT market, the cheapest is $1400.
Yeah, AMD had to sharply discount some of their CPUs several years ago in order to sell them. But that's not the case today. AMD just had their highest quarterly revenue ever, even though console sales were down and their coming flood of sales from Ryzen Mobile 4000 series and EPYC Rome mostly hasn't kicked in yet.
That Core i5-9400F that you left unnamed but seem to be referring to gets smoked by a Ryzen 5 3600 for about the same price. Single-threaded performance will at least be close, but once you push a lot of cores, the AMD CPU will win by a lot.
Intel does still have the best single-threaded performance if you're willing to buy one of their top few mainstream consumer parts (basically, a $370 Core i7-9700KF or better), but not by a very large margin. And even that is likely to go away later this year when Zen 3 arrives.