From reading the article, it appears to be what Vulkan and DirectX 12 can both do now anyway. Not sure.
Regardless of that fact, 'coding at a machine level instruction set level' and the graphic they are showing in this article are a bit misleading and, in my view, nearly trolling PC fans.
Please do not respond to me, even if I ask you a question; it's rhetorical.
Since most people probably have no idea what this is talking about, I'll explain. GPU shader/kernel languages have a bunch of built-in functions to do various things. Some, such as fma (fma(a, b, c) = a * b + c as floats), exist so that you can call instructions that the GPU has in silicon. Others, such as normalize (rescale a vector to have length 1), aren't a single instruction in silicon, but the drivers can translate them into a chain of instructions. In some cases, it could be different instructions on different hardware.
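To make that concrete, here is a minimal OpenCL C sketch (the kernel and variable names are made up for illustration): fma typically maps to a single hardware instruction, while normalize expands to whatever instruction sequence the driver chooses for the target GPU.

    // Illustrative kernel: fma() usually compiles to one hardware
    // instruction; normalize() compiles to a short chain (dot product,
    // reciprocal square root, multiply) that can differ per architecture.
    __kernel void builtins_demo(__global const float4* dirs,
                                __global float* out,
                                float a, float b, float c)
    {
        size_t i = get_global_id(0);
        float4 unit = normalize(dirs[i]); // driver picks the instruction chain
        out[i] = fma(a, b, c) * unit.x;   // a * b + c in one instruction
    }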
Some built-in functions correspond to a single instruction on some architectures and a chain of instructions on others. For example, if you call rotate in OpenCL, on GCN or Maxwell, it uses the rotate instruction in silicon. On Fermi or Kepler, it has to call other instructions, as it doesn't have rotate as a single instruction. But if rotate is what you need, then using a rotate built-in function in OpenCL lets you call that instruction directly on the architectures that have it, while still giving you the most efficient implementation available on other architectures. Ideally, this lets you write code once and have it be reasonably well optimized on all modern GPU architectures, even though it gets compiled to different instructions on different architectures.
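A hypothetical kernel showing the idea (the manual expansion below is roughly what a compiler would emit on hardware without a rotate instruction; it assumes 0 < n < 32):

    __kernel void rotate_demo(__global uint* data, uint n)
    {
        size_t i = get_global_id(0);
        uint v = data[i];

        // Built-in: one instruction on GCN/Maxwell, a short chain elsewhere.
        uint fast = rotate(v, n);

        // Roughly the fallback sequence (valid only for 0 < n < 32 here).
        uint manual = (v << n) | (v >> (32u - n));

        data[i] = fast ^ manual;  // the two agree, so this stores 0
    }

Either way, you write the rotate once and let each driver emit the best code for its hardware.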
GPU architectures sometimes have instructions that aren't available as built-in functions in at least some GPU kernel/shader languages. That means that programmers have no way to call that instruction. Sometimes this means that the hardware could do what you want more efficiently, but the programmer has no way to make it do this. Exposing those instructions allows programmers to write more efficient code. The point of GPUOpen is to expose the instructions on AMD GPUs so that programmers can use them.
For example, AMD's GCN architecture has a mad24 instruction. All modern GPUs have a 32-bit floating-point fma instruction, and computing that involves a 24-bit multiply followed by an add on the mantissa. AMD's mad24 instruction presumably reuses mostly the same silicon to offer an integer version of that operation.
The problem is that most programming languages don't have a way to explicitly call mad24. You don't have a 24-bit data type in most CPU or GPU languages. OpenCL has a mad24 built-in function that says that (in the unsigned case), if a, b, and c are uints with a and b < 2^24, then mad24(a, b, c) = a * b + c. If a or b are too large, you can still call mad24, but the results are "implementation defined", which means different hardware could legally give you different answers. Calling mad24 is a programmer's promise that a and b are actually 24-bit numbers, even though they're declared as 32-bit, and if the programmer is wrong, any problems are his fault, not driver bugs.
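In OpenCL C, that promise looks something like this (a minimal sketch with made-up names):

    // Caller promises a < 2^24 and b < 2^24. If that promise is broken,
    // the result is implementation-defined -- different hardware may
    // legally disagree, and it is not a driver bug.
    __kernel void mad24_demo(__global uint* out, uint a, uint b, uint c)
    {
        out[get_global_id(0)] = mad24(a, b, c);  // a * b + c, fast on GCN
    }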
Nvidia's Kepler architecture doesn't have a mad24 instruction in silicon. It does, however, have a mad32, so it can do a * b + c as a single instruction with arbitrary 32-bit integers. If you use mad24 in OpenCL, Nvidia's compiler presumably compiles it to a mad32, and thus still runs correctly. GCN has a mad32, as well, but it's much slower than mad24. Thus, having mad24 available as opposed to only mad32 lets you write code that runs faster on GCN without hurting performance on Kepler. This is a good thing.
But you can only do it if you have a way to call mad24. OpenGL doesn't, so even if you know that you're doing some small integer arithmetic, you have no way to call it. In some cases, you can use floats and call fma, which is fast on all modern GPUs. The point of GPUOpen is to give programmers a way to call instructions available in AMD hardware and make their code faster.
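As a sketch of that float workaround (a hypothetical helper, written in OpenCL C for consistency): routing a small-integer multiply-add through float fma is exact only as long as the product and the final result stay below 2^24, because that is all a float's mantissa can hold.

    // Small-integer a * b + c via float fma. Exact only while a * b + c
    // stays below 2^24 (the size of a float's mantissa).
    uint imad_via_fma(uint a, uint b, uint c)
    {
        float r = fma((float)a, (float)b, (float)c);
        return (uint)r;
    }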
Ideally, you'd want access to call instructions available in either AMD or NVidia GPUs, and for both to write code paths to efficiently emulate the instructions they don't have. The downside of this is that a novice programmer who doesn't know which hardware has what instructions may use intrinsics for instructions that many GPUs don't have, and not realize that they're slow on a lot of GPUs. Alternatively, a sponsored programmer might do this intentionally to try to harm performance on a competitor's hardware. It's probable that Nvidia's GameWorks does quite a bit of this.
does Vulkan and DIrectX 12 do the same thing given that they have 'low level' access as well
No. Vulkan and DirectX 12 massively redid host code (the portion that runs on the CPU), but Vulkan didn't introduce a new shader/kernel language. The low level access is all about host code.
The intention is that Vulkan will mostly use the same GLSL shader language that OpenGL uses. Vulkan does support SPIR-V, so it will be able to use the OpenCL C kernel language of OpenCL if you wanted to go that route. You could even write code directly in SPIR-V, but it's not a high level language, so I'd expect that to be exceedingly rare apart from people writing compilers. It's kind of like writing a program in assembly. You can, but you probably shouldn't.
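To illustrate the host/shader split, here is a hedged C sketch of the Vulkan host side (error handling omitted; it assumes you already have a VkDevice and a SPIR-V blob compiled offline): the driver never sees GLSL source, only the SPIR-V binary you hand it.

    #include <vulkan/vulkan.h>

    // Host code: runs on the CPU and merely hands the pre-compiled
    // GPU-side program (SPIR-V) to the driver.
    VkShaderModule make_shader_module(VkDevice device,
                                      const uint32_t* spirv_words,
                                      size_t byte_size)
    {
        VkShaderModuleCreateInfo info = {0};
        info.sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO;
        info.codeSize = byte_size;    // in bytes, a multiple of 4
        info.pCode    = spirv_words;  // the SPIR-V binary itself

        VkShaderModule module = VK_NULL_HANDLE;
        vkCreateShaderModule(device, &info, NULL, &module); // check VkResult in real code
        return module;
    }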
So in layman's terms, when the DX12 and Vulkan folks say 'down to the metal', this system from AMD that we are talking about is basically 'even closer to the metal'?
Well, if that is the case, it's a good thing. This might also explain why a game can run on a console at spec X but on a PC it needs spec X+++++.
They also have better benchmarks for the VR headsets. AMD stock is only $4.35 a share right now, so it's time to buy some, and it doesn't take much to create an account and grab a few hundred shares. I'm hoping it will be around $10 by the end of the year and possibly $20 by the end of next year. They have a good shot at taking out Nvidia, and if they do it will be a massive increase for them. They are already a year ahead of them, and sometimes with technology that is all it takes.
It's not closer to the metal. It's a different type of thing entirely. DirectX 12 and Vulkan overhauled host code, which is the code that runs on the CPU and tells the video drivers what to do. GPUOpen is about shader code, which is the code that runs on the GPU directly.
OK, well, for the common man a one-liner that doesn't make their head explode is going to be best.
AMD stock right now is $7.00 a share. I wish I had the money to invest, and the problem is it might be too late, considering that Nvidia has had time to adjust and fight back. We really won't know until January or so if AMD stock is going to keep climbing.
My gut tells me that Zen will be the big driver for AMD in the near term, and I don't know exactly what will happen to AMD.
Yes, they have the consoles now. And Polaris isn't bad. And they make decent APUs for low-range systems. And Vega is due "soon (tm)".
But I really feel that AMD's future hinges on Zen. Their strength for years has been as an x86 competitor. You can pretty much chart the value of the company against the strength of their CPUs. Even when their GPU business was strong (and there has been more than one period where AMD had the best GPU lineup), the company overall has sunk or swum on the back of its x86 business.
Even when Cypress was dominating the GPU benchmarks over nVidia's Fermi, AMD shares sank when Intel introduced the first of the Core lineup and it crushed the Phenom IIs. That year, 2009, was probably the year AMD had the best success in their GPU lineup and the best market share for their GPU business. But AMD stock fell to a low of $2.00 that year. It has traded as high as the $40 range, in the mid-2000s (when the Athlon was king).
I don't think AMD has pivoted away from their x86 business that much. They have a bigger emphasis on "APU", but so does Intel with their iGPU, and that puts the focus right back on the x86 core price/power/performance metrics.
AMD doesn't need to beat Intel to win - they don't need the fastest chip, or the lowest power chip, or the cheapest chip. But they need to be the best when you look at all 3 of those metrics combined.
Really, I don't think Vega will do much for AMD's bottom line as a company, but as a gamer I'd love to see some 1070/1080 competition to help bring nVidia's prices down.
For my money, I'm staying out of tech entirely right now. And AMD is recently infamous for hyping up a release, and providing everyone with a big letdown, so I don't have a lot of faith in Zen (although I do hope that it does knock it out of the park, I just don't think it will). Without Zen, I don't see where AMD moves forward, at least as the same company it is today.
All that being said, @filmoret's predictions look to be playing out so far.
I'd like to point out that on AMD's CPUs being good or not, we really have a small sample size. So far this millennium, AMD has really only introduced four major new architectures: K8 (Athlon 64), K10 (Phenom), Bulldozer, and Bobcat. Everything else launched in the last 13 years is derivatives of those. So while Bulldozer was terrible and K10 was "meh", that's too small a sample size to really tell us much about Zen. And it's not like AMD hasn't made a decent CPU since K8; Bobcat and its derivatives (including Jaguar and Puma) are actually pretty good for their intended low-price, low-power use.
Bulldozer's problems were inherited by Piledriver, Steamroller, and Excavator cores, as they're all derivatives of Bulldozer. AMD did manage to improve upon the basic architecture quite a bit, but being stuck with something so bad as the base idea really hampered them. Zen is all new, so it will face no such problems.
For most of its history, AMD has faced a serious drawback of being behind Intel in what its fabs could do. If Intel reached a process node a year before AMD, that's as if AMD was constantly having to fight with CPUs a year older than what Intel offered. If you go through history comparing the GPUs that AMD/ATI had at one point in time to what Nvidia had a year later, AMD/ATI usually isn't competitive. But Nvidia usually isn't competitive if you reverse the year advantage. That AMD was able to take the lead for a while with the Athlon 64 is quite remarkable.
Intel's fab advantage probably isn't going away, but it may be diminishing. In servers, where you can use 20 cores if you have them, or in laptops, where the difference between 20 W and 30 W is enormous, Intel's fabs will still provide a solid edge.
But once AMD gets everything on 14 nm, die shrinks are going to be nearly irrelevant to desktop CPUs for the foreseeable future. Lower power doesn't gain you anything in desktops anymore, as it doesn't matter if a desktop CPU uses 45 W or 65 W or 95 W. Higher clock speeds are limited by physics, not transistor counts, and have been for years. We can get enough cores that more is irrelevant to desktop use; AMD just announced a 32-core "Naples" server CPU coming next year, and Intel already sells 22-core Broadwell-EP CPUs.
That doesn't mean that AMD will automatically be competitive in desktop CPUs; Kaveri on 28 nm already isn't competitive with Sandy Bridge on 32 nm. But it does mean that it's all about the architecture, not the fabs anymore. The diminishing gains from die shrinks in desktops are why, while Sky Lake is better than Sandy Bridge, it's by a massively smaller margin than, say, Pascal (GeForce 1000 series) is better than Fermi (GeForce 500 series). GPUs still scale very well with die shrinks and will for the foreseeable future.
As mentioned before, it seems AMD's GPU department's performance is irrelevant to its share price, irrelevant to its shareholders, and almost irrelevant to anyone who's not a pure consumer (it seems to play the role of a stop-gap measure, a money lifeline until the CPU part is doing well again). Whether that's because the CPU department makes orders of magnitude more money when it's in a better position than the competition, I have no clue; it's weird.
So the only thing that makes AMD sink or swim seems to be the CPU department. And that's the reason AMD's share price has skyrocketed more than 330% in the past few months (optimism for Zen).
AMD is at the moment seriously competing only against the slowest of Nvidia's new GPUs, because they can't match the GTX 1070 and GTX 1080 at all.
It looks like Nvidia has so much better power efficiency that I wouldn't be surprised to see them win this round of the match for mobile GPUs.
In processors, AMD is offering serious competition only against Intel's i3 and cheaper models.
AMD's situation is not totally hopeless, but they've been flopping around in the water and trying not to drown for some years now.
On the GPU side, Nvidia launched their larger, higher power, higher performance GPUs first, while AMD launched their smaller, lower power, lower performance GPUs first. That's really all that has happened so far with the new generation. Vega is coming, and should be competitive with Nvidia's higher end GPUs.
On the CPU side of things, yeah, AMD has been a mess for several years, but that's all because of Bulldozer way back in 2011. Zen offers AMD the chance to reset the situation.
AMD could end up doing quite well as integrated graphics become increasingly prevalent, provided that they have at least competitive CPUs and GPUs simultaneously. Intel can't compete with AMD on the GPU side of things, while Nvidia can't compete with AMD on the CPU side.
That's what I've always wanted on my PC... the "better" lower FPS.
I think it also has some code that removes the viewer's ability to see any difference beyond 1080p.
Graham Wihlidal says:
May 24, 2016 at 7:57 pm
"This is awesome!!!!! Time to bring all our crazy console optimizations to AMD PC GPUs"
( Graham Wihlidal is a senior rendering engineer on the Frostbite engine team )
Man, AMD is really flopping around in the water trying not to drown, aren't they?
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
http://forums.mmorpg.com/discussion/453873/amd-identifies-microsoft-s-project-scorpio-as-one-of-its-design-wins-expects-revenue-increase#latest
http://forums.mmorpg.com/discussion/453061/radeon-rx-480-and-what-the-future-may-hold-for-amd-and-gamers#latest
http://forums.mmorpg.com/discussion/453293/amds-gpus-are-more-compute-heavy-than-nvidias#latest
But you already know this considering that you made a post saying they could sit back for a while comfortably.
"but for now they can sit back and enjoy another successful launch"
http://forums.mmorpg.com/discussion/452988/actual-rx480-benchmarks-not-cherry-picked#latest
Yes, and I made that post BEFORE NVidia released the GTX 1060. What's your point?
My point was that these types of press releases smack of desperation.
"The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently."
- Friedrich Nietzsche
You made a post on 22nd August, well after anything was "released".