http://www.anandtech.com/show/9740/directx-12-geforce-plus-radeon-mgpu-preview
Interesting preliminary testing on DX12 Multi-Adapter support. Apparently mixing vendors provides better results than sticking with the same vendor for multi-GPU setups.
It's all still very preliminary, but interesting nonetheless. I'm more interested in the possibility of finally putting to use that iGPU that's been wasting silicon in my CPU.
The results kind of make sense: you get two different GPU architectures, with different strengths and weaknesses, and as long as the driver/API is competent enough to play to each architecture's relative strengths, you should come out ahead of a homogeneous setup.
Seriously though, the only experience I've had with multi-GPU setups is through a close friend of mine, who seems to have issues with almost every new game we start playing. I know multi-GPU setups always post higher marks in benchmarks, but I'm not sure it's worth the headache in the real world. Maybe his experience is unique, but considering his issues with one vendor, I can't even imagine combining two.
The biggest improvement, of course, is going to be how DX12 turns the CPU's integrated graphics into a benefit. Even if it's only a 10% boost, it's a 10% boost you get for free!
There is something wrong with your friend's setup, maybe an underpowered power supply that can't feed both cards the maximum power they need in some games, causing the game to crash. I've run SLI and CrossFire machines several times over the years with no problems at all. My biggest problem with multi-card video is that the cost of a matched pair of cards could have bought a single faster card that performs far better in a game that doesn't support SLI or CrossFire, and nearly as well in a game that does. With DX12 the cards don't have to match, so you can buy a good one and a cheesy one and still get better video performance.
The world is going to the dogs, which is just how I planned it!
And I guess on paper it's a good idea, as each one can cover the other's weaknesses.
This has been a good conversation.
http://www.hardwareluxx.com/index.php/news/software/benchmarks/34730-3dmark-api-overhead-test-amd-outperforms-nvidia-dx12-on-par-with-mantle.html
DX12 allows for more draw calls per frame. This is extremely important.
Before DX12, Mantle, and Vulkan, you had to bake all of your static scene elements (trees, rocks, terrain, anything unmoving) into one mesh with a texture atlas and try to render it with one draw call, which was really inflexible and a lot of work. And anything that moved or was animated still needed its own draw call.
In an MMO this means that your framerate won't drop just because hundreds of players are in the same place at the same time. As long as the polygon count stays reasonable, drawing hundreds or even thousands of distinct meshes at the same time won't be an issue.
This is the biggest thing since programmable shaders in 2002.
Ironically, DirectX 1-3 and OpenGL 1.x both had something similar.
DirectX had "execute buffers" in its immediate mode (as opposed to retained mode) and OpenGL had "display lists", but both APIs did away with them because programmers found them too difficult to use.
Mantle, Vulkan, and DX12 brought back the "command lists". A command list stores rendering commands (work) in a precomputed form, allowing the list to be built on a separate thread; the thread that owns the main graphics context then submits all of the command lists for final rendering. (DX11 had command lists, but did not have the following ....)
Also, there aren't any texture objects, vertex objects, etc. You just copy stuff to raw places in video memory and tell your shader where it is (an overly simplified explanation) by setting registers before the draw call. It's extremely low level and a brutal reversal of how things used to be done. This basically means the API does almost nothing except the bare minimum to get your data and commands to the GPU so your shaders can run.
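For anyone who hasn't poked at the new APIs yet, here is roughly what both of those points look like in D3D12. This is a minimal sketch, not a working renderer: the device, per-thread command allocator, pipeline state, root signature, queue, and the GPU virtual address of the constant data are all assumed to exist already, and the function names are mine, not from any real codebase.

```cpp
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

// Meant to be called from a worker thread: recording only touches this
// thread's own allocator and list, never the queue.
ComPtr<ID3D12GraphicsCommandList> RecordSceneChunk(
    ID3D12Device* device,
    ID3D12CommandAllocator* alloc,  // per-thread; caller keeps it alive until the GPU is done
    ID3D12PipelineState* pso,
    ID3D12RootSignature* rootSig,
    D3D12_GPU_VIRTUAL_ADDRESS sceneConstantsVA)  // raw address in video memory
{
    ComPtr<ID3D12GraphicsCommandList> list;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              alloc, pso, IID_PPV_ARGS(&list));

    list->SetGraphicsRootSignature(rootSig);
    // "Tell the shader where the data is": bind a raw GPU virtual address
    // to root parameter 0 rather than a texture/buffer object.
    list->SetGraphicsRootConstantBufferView(0, sceneConstantsVA);
    // ... vertex buffer bindings and draw calls for this chunk go here ...

    list->Close();   // the list is now a precomputed block of work
    return list;
}

// Main thread: collect the closed lists from all the workers and submit
// them to the queue in a single call.
void SubmitAll(ID3D12CommandQueue* queue,
               ID3D12CommandList* const* lists, UINT count)
{
    queue->ExecuteCommandLists(count, lists);
}
```

The point is that recording touches nothing global, so you can fan it out across as many threads as you have cores and pay for submission only once.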
“Microtransactions? In a single player role-playing game? Are you nuts?”
― CD PROJEKT RED
It's kind of like how, when we were stuck with hard drives, "this hard drive is 20% faster than that hard drive" was important. Now that we have SSDs, "this SSD has 2x the performance of that one" rarely matters. Once something is fast enough that it's not the bottleneck, that's good enough.
There's an enormous difference between:
a) an API gives programmers the tools to use multiple GPUs in heterogeneous ways and scale to more GPUs however they like, and
b) an API automatically scales to multiple GPUs without programmers having to do anything special to use it
DirectX 12 and Vulkan are (a), but not (b). CrossFire and SLI try to do (b) with two identical (or at least related) GPUs purely via driver magic, but that's why they don't work very well and often require custom driver magic for particular games.
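To make (a) concrete: under DX12 the game itself enumerates every adapter in the box and explicitly creates a device on whichever ones it wants to use; nothing gets pooled behind its back. A rough sketch of that first step, with error handling trimmed and the function name being my own:

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Create an independent D3D12 device on every hardware adapter,
// discrete cards and the iGPU alike.
std::vector<ComPtr<ID3D12Device>> CreateDeviceForEachAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;   // skip the software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            devices.push_back(device);   // one device per physical GPU
        }
    }
    return devices;
}
```

Everything after that step, splitting the frame up and moving intermediate results between the cards, is the application's problem rather than the driver's.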
Now, (a) isn't nothing, but I don't think it's that big of a deal. I don't expect games implementing heterogeneous GPU support to be all that common. Furthermore, from a consumer perspective, the next time I get a new GPU, I want it to replace and not supplement the current one. I'm not keen on GPUs cranking out 500 W whenever I play games, with the possible exception of a few months during the winter.
I'll often downplay the importance of the difference between, say, 150 W and 200 W in a desktop GPU. But the difference between 200 W and 500 W? That matters.
In the past, to run multiple GPUs you basically had to have multiple instances of DirectX (or OpenGL or whatever) running, with each one tied to the GPU you wanted to use. There was no easy way to share data between them; you basically had to copy it to RAM and then copy it over to the other one.
This is typically how most "cave" VR setups work. Each wall of the cave is a separate monitor (or projector) that plugs into a separate GPU. The software then renders the scene from a different angle on each GPU every frame.
In Windows, you tell Windows which screen to place the window you create on, and when you start Direct3D for that window it will appear on that screen. (Even in fullscreen mode, every DirectX game has a window; the only difference is that in fullscreen mode DirectX takes over the backbuffer instead of rendering into the window's buffer and letting the driver blit that to the backbuffer.) Linux and macOS are similar. You couldn't use multiple GPUs to render one window, though.
SLI and CrossFire appear to games as one GPU. Even if you have four Nvidia Titans in SLI, your game only sees one GPU. When it uses that GPU, the graphics driver handles sharing the workload between all four cards without the game knowing about it. This is not the case when you aren't using SLI or CrossFire.
It's really a choice you have to make, and it depends on various factors:
- Budget
- The games you intend to play on it
- Your current specs
Money is always the first thing you'll check. I'm an Nvidia fanboy and I won't deny it, but I accept that AMD is also a reality and it's progressing further. If you have more money to spend, Nvidia would be my first preference, but if money is a problem and the budget is tight, AMD will do the trick. AMD puts up performance graphs on par with Nvidia's but does lack a few features. The games you want to play on the card will help you decide even better: if you are more into MOBA or MMORPG games, you can pick a mid-range to high-end card; if you are into FPS games, high end is always the best way to go. It's really like a filter you apply, and it really helps!
Last thing, your current specs. If you are running an old processor with an old motherboard, you won't be able to utilize the full potential of these new-generation cards, be it AMD or Nvidia. So these factors do play a vital role in selecting the right hardware.
I hope this will help!
Cheers!