nVidia or AMD? Why not both.

Ridelynn Member EpicPosts: 7,383
http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview

Interesting preliminary testing on DX12 Multi-Adapter support. Apparently mixing vendors provides better results than sticking with the same vendor for multi-GPU setups.

It's all still very preliminary, but interesting nonetheless. What interests me most is the possibility of finally putting to use that iGPU that's been wasting silicon in my CPU.

The results kind of make sense - you get two different GPU architectures with different strengths and weaknesses, and as long as the driver/API is competent enough to play to each architecture's relative strengths, you should come out ahead of a homogeneous setup.
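As a minimal sketch of the starting point for that kind of explicit multi-adapter setup (an illustration, not from the article): under DX12 the application can enumerate every adapter the system exposes - discrete card, iGPU, and the software WARP adapter alike - and decide for itself what to run where. Error handling is omitted.

```cpp
// List every adapter DXGI exposes. An explicit DX12 multi-adapter renderer
// starts here, then creates a D3D12 device on each adapter it wants to use.
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        wprintf(L"Adapter %u: %ls (%u MB dedicated VRAM)\n",
                i, desc.Description,
                static_cast<unsigned>(desc.DedicatedVideoMemory >> 20));
    }
    return 0;
}
```

Creating a device per adapter, scheduling work across them, and copying results between them is then entirely the application's job - which is both the power and the burden of the explicit multi-adapter model.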

Comments

  • Dagon13 Member UncommonPosts: 566
    I feel like this will summon something dark and horrible.

    Seriously though, the only experience I've had with multi-GPU setups comes from a close friend of mine who seems to have issues with almost every new game we start playing.  I know that multi-GPU setups always come with higher marks in benchmark scenarios, but I'm not sure it's worth the headache in the real world.  Maybe his experience is unique, but considering his issues with one vendor, I can't even imagine combining two vendors.
  • GladDog Member RarePosts: 1,097
    Dagon13 said:
    I feel like this will summon something dark and horrible.

    Seriously though, the only experience I've had with multi-GPU setups comes from a close friend of mine who seems to have issues with almost every new game we start playing.  I know that multi-GPU setups always come with higher marks in benchmark scenarios, but I'm not sure it's worth the headache in the real world.  Maybe his experience is unique, but considering his issues with one vendor, I can't even imagine combining two vendors.
    DX12 is the real deal, and it will forever change PC gaming.  That 3-year-old DX11 card in the box of parts is now potential graphics power for any machine, no matter how new!

    The biggest improvement, of course, is going to be how DX12 puts the CPU's integrated graphics to work.  Even if it is only a 10% boost, it's a 10% boost you get for free!

    There is something wrong with your friend's setup - maybe an underpowered power supply that can't feed both cards the maximum power they need in some games, causing the game to crash.  I've run SLI and Crossfire machines several times over the years with no problems at all.  My biggest problem with multi-card video is that the cost of a pair of matching cards could have bought a single faster card that performs far better in games that don't support SLI or Crossfire, and nearly as well in games that do.  With DX12, the cards don't have to match, so you can buy a good one and a cheesy one and still get better video performance.


    The world is going to the dogs, which is just how I planned it!


  • tawess Member EpicPosts: 4,227
    Well, the whole idea is that it's getting easier to do... =P

    And I guess on paper it's a good idea, as each one can cover the other's weaknesses.

    This has been a good conversation

  • l2avism Member UncommonPosts: 386
    edited November 2015
    You guys do know that AMD outperforms Nvidia on DX12, right? At least until Nvidia actually adds full DX12 support to its hardware.

    http://www.hardwareluxx.com/index.php/news/software/benchmarks/34730-3dmark-api-overhead-test-amd-outperforms-nvidia-dx12-on-par-with-mantle.html



    Overall, it can therefore be said that AMD is currently handling DirectX 12 better - as the comparison between the GeForce GTX Titan X and Radeon R9 290X shows. Also of note is that Mantle appears to be faster than DirectX 12



  • Quizzical Member LegendaryPosts: 25,483
    I would interpret your graph as meaning that Nvidia handles DirectX 11 better than AMD, but that advantage goes away with DirectX 12.  If the overhead for DirectX 12 drops by more than an order of magnitude, then it's no longer a bottleneck, and AMD's advantage barely matters.  Of course, it's also not clear if that synthetic benchmark is representative of real DirectX 11 games.
  • l2avism Member UncommonPosts: 386
    edited November 2015
    Quizzical said:
    I would interpret your graph as meaning that Nvidia handles DirectX 11 better than AMD, but that advantage goes away with DirectX 12.  If the overhead for DirectX 12 drops by more than an order of magnitude, then it's no longer a bottleneck, and AMD's advantage barely matters.  Of course, it's also not clear if that synthetic benchmark is representative of real DirectX 11 games.
    The advantages of DX12 should spread quickly, since most game studios simply use one of three common off-the-shelf game engines.
    DX12 allows for more draw calls per frame. This is extremely important.

    Before DX12, Mantle, and Vulkan, you had to bake all of your static scene elements (trees, rocks, terrain, anything unmoving) into one mesh with a texture atlas and try to render it with a single draw call, which was really inflexible and a lot of work. Anything that moved or was animated still needed its own draw calls.
    In an MMO this means that your framerate won't drop just because hundreds of players are in the same place at the same time. As long as the polygon count stays reasonable, drawing hundreds or even thousands of distinct meshes at the same time won't be an issue.

    This is the biggest thing since programmable shaders in 2002.
    Ironically, DirectX 1-3 and OpenGL 1.x both had something similar.
    DirectX had "execute buffers" (in its immediate mode, as opposed to retained mode) and OpenGL had "display lists", but both APIs did away with them because programmers found them too difficult to use.
    Mantle, Vulkan, and DX12 brought back the command-list idea. A command list stores rendering commands (work) in a precomputed form, allowing the list to be built on a separate thread; the thread that owns the main graphics context then submits all of the command lists for final rendering. (DX11 had command lists, but did not have the following ....)
    Also, there aren't any texture objects, vertex objects, etc.; you just copy stuff to raw places in video memory and tell your shader where it is (an overly simplified explanation) by setting registers before the draw call. It's an extremely low-level and brutal reversal of how things used to be done. This basically means that the API does almost nothing except the bare minimum to get your data and commands to the GPU so your shaders can run.
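    A minimal sketch of that record-on-worker-threads, submit-on-the-main-thread pattern, using D3D12 command lists (an illustration, not taken from any particular engine): each worker gets its own command allocator and list and closes it when done; the main thread then submits everything on the queue in one call. Device selection, pipeline state, real draw calls, error handling, and the fence needed before teardown are all omitted, so this only shows the threading and submission shape.

    ```cpp
    // Record (empty) command lists on worker threads, submit on the main thread.
    #include <d3d12.h>
    #include <wrl/client.h>
    #include <thread>
    #include <vector>
    #pragma comment(lib, "d3d12.lib")

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<ID3D12Device> device;
        D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

        D3D12_COMMAND_QUEUE_DESC qdesc = {};
        qdesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&qdesc, IID_PPV_ARGS(&queue));

        const int kThreads = 4;
        std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(kThreads);
        std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kThreads);
        std::vector<std::thread> workers;

        for (int i = 0; i < kThreads; ++i)
        {
            // One allocator + list per thread; they must not be shared while recording.
            device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                           IID_PPV_ARGS(&allocators[i]));
            device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                      allocators[i].Get(), nullptr,
                                      IID_PPV_ARGS(&lists[i]));
            workers.emplace_back([&, i] {
                // ... record state changes and draw calls for this thread's
                // slice of the scene here ...
                lists[i]->Close();   // finish recording on the worker thread
            });
        }
        for (auto& w : workers) w.join();

        // The main thread owns the queue and submits all lists in one go.
        std::vector<ID3D12CommandList*> raw;
        for (auto& l : lists) raw.push_back(l.Get());
        queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
        return 0;   // real code would signal and wait on a fence before exiting
    }
    ```

    In a real engine you would typically keep one allocator per worker thread per frame in flight, reset it at the start of each frame, and reuse the lists, rather than creating everything from scratch like this.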
  • Iselin Member LegendaryPosts: 18,719
    I didn't know that multi-any-GPU support was built into DX12. That, all by itself, is pretty exciting.
    "Social media gives legions of idiots the right to speak when they once only spoke at a bar after a glass of wine, without harming the community ... but now they have the same right to speak as a Nobel Prize winner. It's the invasion of the idiots”

    ― Umberto Eco

    “Microtransactions? In a single player role-playing game? Are you nuts?” 
    ― CD PROJEKT RED

  • DarLorkar Member UncommonPosts: 1,082
    Iselin said:
    I didn't know that multi-any-GPU support was built into DX12. That, all by itself, is pretty exciting.
    Agree... but as with all new things, I will wait and see if it really helps in my world, not just in some graph on some site that might show something I would never notice on my computer.
  • Quizzical Member LegendaryPosts: 25,483
    l2avism said:
    Quizzical said:
    I would interpret your graph as meaning that Nvidia handles DirectX 11 better than AMD, but that advantage goes away with DirectX 12.  If the overhead for DirectX 12 drops by more than an order of magnitude, then it's no longer a bottleneck, and AMD's advantage barely matters.  Of course, it's also not clear if that synthetic benchmark is representative of real DirectX 11 games.
    The advantages of DX12 should spread quickly, since most game studios simply use one of three common off-the-shelf game engines.
    DX12 allows for more draw calls per frame. This is extremely important.

    Before DX12, Mantle, and Vulkan, you had to bake all of your static scene elements (trees, rocks, terrain, anything unmoving) into one mesh with a texture atlas and try to render it with a single draw call, which was really inflexible and a lot of work. Anything that moved or was animated still needed its own draw calls.
    In an MMO this means that your framerate won't drop just because hundreds of players are in the same place at the same time. As long as the polygon count stays reasonable, drawing hundreds or even thousands of distinct meshes at the same time won't be an issue.

    This is the biggest thing since programmable shaders in 2002.
    Ironically, DirectX 1-3 and OpenGL 1.x both had something similar.
    DirectX had "execute buffers" (in its immediate mode, as opposed to retained mode) and OpenGL had "display lists", but both APIs did away with them because programmers found them too difficult to use.
    Mantle, Vulkan, and DX12 brought back the command-list idea. A command list stores rendering commands (work) in a precomputed form, allowing the list to be built on a separate thread; the thread that owns the main graphics context then submits all of the command lists for final rendering. (DX11 had command lists, but did not have the following ....)
    Also, there aren't any texture objects, vertex objects, etc.; you just copy stuff to raw places in video memory and tell your shader where it is (an overly simplified explanation) by setting registers before the draw call. It's an extremely low-level and brutal reversal of how things used to be done. This basically means that the API does almost nothing except the bare minimum to get your data and commands to the GPU so your shaders can run.
    I'm not denying that DirectX 12 and Vulkan are important.  (Mantle, on the other hand, is going away, as it's made redundant by the others.)  Rather, my point is that if the single-threaded host code bottleneck for communicating with the GPU goes away to the extent that doing X before would kill your performance while now doing 3X is no big deal, then it doesn't matter very much which GPU can do that 3X while using less CPU time.

    It's kind of like how, back when we were stuck with hard drives, "this hard drive is 20% faster than that one" was important.  Now that we have SSDs, "this SSD has 2X the performance of that one" rarely matters.  Once something is fast enough that it's not the bottleneck, that's good enough.
  • mastersam21 Member UncommonPosts: 70
    I can't wait for development to really start ramping up DX12 support. I have an AMD A10-7850K in my secondary build, and the thought that I will get a lot more performance without any additional or upgraded hardware makes me very happy. This might give consumers more reason to pick an AMD APU over Intel, making things a bit more competitive, which will benefit everyone overall.
  • Quizzical Member LegendaryPosts: 25,483

    Iselin said:
    I didn't know that multi-any-GPU support was built into DX12. That, all by itself, is pretty exciting.
    There's an enormous difference between:

    a)  an API gives programmers the tools to use multiple GPUs in heterogeneous ways to scale to more GPUs however they like, and
    b)  an API automatically scales to multiple GPUs without programmers having to do anything special to use it

    DirectX 12 and Vulkan are (a), but not (b).  CrossFire and SLI try to do (b) with two identical (or at least related) GPUs purely via driver magic, but that's why they don't work very well and often require custom-done driver magic for particular games.

    Now, (a) isn't nothing, but I don't think it's that big of a deal.  I don't expect games implementing heterogeneous GPU support to be all that common.  Furthermore, from a consumer perspective, the next time I get a new GPU, I want it to replace and not supplement the current one.  I'm not keen on GPUs cranking out 500 W whenever I play games, with the possible exception of a few months during the winter.

    I'll often downplay the importance of the difference between, say, 150 W and 200 W in a desktop GPU.  But the difference between 200 W and 500 W?  That matters.
  • makasouleater69 Member UncommonPosts: 1,096
    Dagon13 said:
    I feel like this will summon something dark and horrible.

    Seriously though, the only experience I've had with multi-GPU setups comes from a close friend of mine who seems to have issues with almost every new game we start playing.  I know that multi-GPU setups always come with higher marks in benchmark scenarios, but I'm not sure it's worth the headache in the real world.  Maybe his experience is unique, but considering his issues with one vendor, I can't even imagine combining two vendors.
    Nah, it's not just him. It honestly depends on the game, and it's a rare game where SLI or Crossfire actually works right. As for DX12, it isn't gonna work out like they say it's gonna. It never does......... But hey, if it gets a bunch of people to buy Windows 10 and DX12 graphics cards, I am sure they don't care if it doesn't work like it's supposed to.
  • l2avism Member UncommonPosts: 386
    Quizzical said:

    Iselin said:
    I didn't know that multi-any-GPU support was built into DX12. That, all by itself, is pretty exciting.
    There's an enormous difference between:

    a)  an API gives programmers the tools to use multiple GPUs in heterogeneous ways to scale to more GPUs however they like, and
    b)  an API automatically scales to multiple GPUs without programmers having to do anything special to use it

    DirectX 12 and Vulkan are (a), but not (b).  CrossFire and SLI try to do (b) with two identical (or at least related) GPUs purely via driver magic, but that's why they don't work very well and often require custom-done driver magic for particular games.

    Now, (a) isn't nothing, but I don't think it's that big of a deal.  I don't expect games implementing heterogeneous GPU support to be all that common.  Furthermore, from a consumer perspective, the next time I get a new GPU, I want it to replace and not supplement the current one.  I'm not keen on GPUs cranking out 500 W whenever I play games, with the possible exception of a few months during the winter.

    I'll often downplay the importance of the difference between, say, 150 W and 200 W in a desktop GPU.  But the difference between 200 W and 500 W?  That matters.
    Because it requires work from the programmers, and only 5 people on the face of the earth will use this option, I'd expect no games to support it.

    In the past, to run multiple GPUs you basically had to have multiple instances of DirectX (or OpenGL, or whatever) running, each one tied to the GPU you wanted to use. There was no easy way to share data between them; you basically had to copy stuff to RAM and then copy it over to the other device.
    This is typically how most "cave" VR setups work. Each wall of the cave is a separate monitor (or projector) that plugs into a separate GPU. The software then renders the scene from a different angle on each GPU every frame.
    In Windows, you tell the OS which screen to place your window on, and when you start Direct3D for that window it will appear on that screen. (Even in fullscreen mode, every DirectX game has a window - the only difference is that in fullscreen mode DirectX takes over the backbuffer instead of rendering into the window's buffer and letting the driver blit that to the backbuffer. Linux and macOS are similar.) You couldn't use multiple GPUs to render one window, though.

    SLI and Crossfire appear to games as one GPU. Even if you have 4 Nvidia Titans in SLI, your game only sees 1 GPU. When it uses that GPU, the graphics driver handles sharing the workload among all 4 GPUs without the game knowing about it. That is not the case when you aren't using SLI or Crossfire.
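    For illustration, here is a rough sketch of that older one-device-per-GPU arrangement in D3D11 terms (an assumption about how such a setup would typically be coded, not anyone's actual CAVE code): every adapter gets its own device and immediate context, and resources created on one device cannot be touched by another, so sharing anything means staging it through system RAM. Error handling and feature-level selection are skipped.

    ```cpp
    // One D3D11 device (and immediate context) per physical adapter.
    #include <d3d11.h>
    #include <dxgi.h>
    #include <wrl/client.h>
    #include <vector>
    #pragma comment(lib, "d3d11.lib")
    #pragma comment(lib, "dxgi.lib")

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<IDXGIFactory1> factory;
        CreateDXGIFactory1(IID_PPV_ARGS(&factory));

        std::vector<ComPtr<ID3D11Device>>        devices;
        std::vector<ComPtr<ID3D11DeviceContext>> contexts;

        ComPtr<IDXGIAdapter1> adapter;
        for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
        {
            ComPtr<ID3D11Device>        dev;
            ComPtr<ID3D11DeviceContext> ctx;
            // D3D_DRIVER_TYPE_UNKNOWN is required when an explicit adapter is passed.
            if (SUCCEEDED(D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN,
                                            nullptr, 0, nullptr, 0, D3D11_SDK_VERSION,
                                            &dev, nullptr, &ctx)))
            {
                devices.push_back(dev);
                contexts.push_back(ctx);
            }
        }
        // Each devices[i]/contexts[i] pair now drives one GPU (e.g. one wall of
        // a CAVE setup), with its own textures, buffers, and swap chains.
        return 0;
    }
    ```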


  • sZakootaisOP Member UncommonPosts: 19
    I would say both.

    It's really a choice you have to make, and it depends on various factors:
    1. Budget
    2. The games you intend to play on it
    3. Your current specification
    Money is always the first thing you will check. I am an Nvidia fanboy and I won't deny it, but I do accept that AMD is also a reality and it's progressing further. If you have more money to spend, then Nvidia would be my first preference; but if money is a problem and you are on a tight budget, then AMD will do the trick. AMD is delivering performance on par with Nvidia, but it does lack a few features.

    The games you want to play on this graphics card will help you decide even better. If you are more into MOBA or MMORPG games, then you can select a mid-range to high-end card. If you are into FPS games, then high-end is always the best thing to go for. It really is like a filter you put on, and it really helps!

    Last thing: your current specification. If you are running an old processor with an old motherboard, you won't be able to utilize the full potential of these new-generation cards, be it AMD or Nvidia. So these factors do play a vital role in selecting the right hardware.

    I hope this will help! :)

    Cheers!
  • LadyGagarocks Member UncommonPosts: 3
    @Ridelynn Nvidia for sure.