Nice setup. To those saying the PSU is a little weak: an 800 W unit will run nearly anything at the moment, and he only has one GPU, so he isn't going to strain it at all.
Only thing I'd suggest is to run SLI if your case has enough room; then you'll be future-proofed well into the next console generation.
800 seems real low. Let's not forget case fans, optical drives, USB devices, and capacitor aging.
Case fans rarely take more than about 3 W each, and sometimes take a lot less than that, such as if you're running them slowly. The USB 2.0 standard only requires a USB port to deliver up to 2.5 W. USB 3.0 bumps that to 4.5 W, but USB 2.0 or earlier devices (which includes nearly everything except for a handful of external hard drives) have to stay within the 2.5 W limit. Anything that comes with a long cord (e.g., keyboards or mice) isn't going to come remotely near that 2.5 W limit, either.
For most systems, when determining how much power you need, I say take the TDP of the processor and video card and estimate 100 W for everything else. That's usually a huge overestimate for everything else. For this system, I'd pull the hard drives out of the "everything else" and toss in 100 W for the hard drives and RAID controller. That's probably still a huge overestimate. A quick and dirty approximation would give:
Processor: 100 W
Video card: 250 W
Storage: 100 W
Everything else: 100 W
Total: 550 W
There's enough generous overestimates built in there that I'd regard it as unlikely that he ever pulls 500 W from it, and not entirely shocking if he never pulls 400 W (though he could readily do the latter if so inclined).
Unless, of course, he's going to stick waterblocks on everything and overclock it as far as it will go. In that case, 800 W might be cutting it tight. But at stock speeds, it's plenty of power.
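The quick-and-dirty budget above can be put into a few lines of code. This is only a sketch of that same estimation method; the wattages are the same rough TDP-based guesses from the list, not measurements:

```python
# Rough PSU sizing sketch following the estimate above. The component
# figures (100 W CPU, 250 W GPU, 100 W lump sums for storage and misc)
# are the thread's own ballpark numbers, not measured draws.
def estimate_draw(cpu_tdp_w, gpu_tdp_w, storage_w=100, misc_w=100):
    """Sum worst-case component draws to get a ballpark system load."""
    return cpu_tdp_w + gpu_tdp_w + storage_w + misc_w

def psu_headroom(psu_rating_w, estimated_draw_w):
    """Fraction of the PSU's rating left unused at the estimated load."""
    return 1.0 - estimated_draw_w / psu_rating_w

draw = estimate_draw(cpu_tdp_w=100, gpu_tdp_w=250)  # the 550 W total above
print(draw)                                # 550
print(round(psu_headroom(800, draw), 2))   # 0.31, i.e. ~31% headroom at 800 W
```

Even with every estimate padded, an 800 W unit keeps roughly a third of its rating in reserve, which is why the stock-speed case is comfortable.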
As for the poster suggesting quad-SLI: there is no good reason that's ever needed unless you're a high-end graphics designer, or maybe a radiologist, who needs that kind of power for rendering, compiling, and processing. Having 4 cards doesn't mean a 400% power boost; it would be more like 250% if you're lucky. You'd be better off with tri-SLI at most, but dual should be more than enough, even with a dedicated PhysX card, though that's not really needed either.
Let me put this another way, with all the overkill on hard drives, you skimped on the video card.
Nice rig. I think the PSU is pretty weak for all those drives. Why not a 580? I think the point of a NAS is the simplicity and convenience. I like not having to remember which computer has which movie, and whether that computer is on.
Network attached storage devices are used for more than just knowing where your stuff is. It will be my backup machine for all of the devices in my house, which is about 6 different computers and a shitload of other things. On top of that, I will be hosting a domain controller as one of the virtual machines on that computer, and all of the systems capable of utilizing it will run their profiles from that location. Instead of everyone being stuck with their profile on one machine, they can log in to any machine in this house and have the same access to their stuff and the same desktop wherever they go. That also includes Outlook settings and PST files.
Yes, a server motherboard with dual CPUs would be a benefit for me, but once again I need to worry about licensing limitations with VMware and Windows. I also don't feel like spending 5 grand on Xeon processors and even more money on ECC RAM to populate that board's memory DIMMs. This is why I stuck with a well-made home computer running enterprise hardware.
A domain controller is one thing; a NAS is something else. I'm not sure if you knew that a NAS is an actual device you can buy. I didn't realize you put your home on a domain and actually require AD and a server OS. I am aware of how roaming profiles work.
I didn't say anything about Xeon. I was wondering why you stopped at a slightly-less-than-spectacular video card when you have spent so much money on HDDs.
I suspect an appliance-grade NAS is severely underpowered for his purpose, and a mission-critical server-grade NAS is severely over budget for his purpose :)
It sounds like he went cheap on the video card because this was never meant to be a primary gaming box; it's just something he can two-box on for a secondary character.
So this box is a server first and a gaming box second, not the other way around.
Sorry about the lack of replies; I've been rather busy and have been forgetting to check on my older threads. The reason I skimped on the GPU is that the one I got, which turned out to be the Gigabyte GTX 570 with three cooling fans, can be overclocked quite a bit. It is capable of running everything on max/ultra at 60 fps with Vsync.
To further clarify, I do use Linux operating systems, but I prefer Windows for simplicity and reliability, with guaranteed support and fixes if they're required. I run VMware Server, where I host my phone system, domain controller, Untangle, network security system, databases, etc. Yes, I know Linux is awesome and all, but when setting up all these things I prefer to get running as quickly and simply as possible; why give yourself extra work if it's not needed?
I got the i7 2600 because it cost me only $200, and I didn't feel at this time that I needed the 2600K, the only difference being overclocking limitations. Although I spent a lot of money on my machine, I tried to stick with what I need over what I want. I have over 30 devices at home that use my home network: 4 laptops, 3 desktops, phones, iPods, consoles, etc.
As for the guy who recommended 10 Gb Ethernet: yes, that's a great idea, but for it to be truly utilized my system would need to receive requests requiring more than 100 MB/s, which is highly unlikely given it's still just a home environment. Hell, even at the office, where we have 50 workstations and people constantly accessing all our data, we still run flawlessly on 1 Gbps. Not only that, but I'm not going to fork out the money to buy a 10 Gbps switch, then the cards needed to utilize it.
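The 1 Gbps vs. 10 Gbps point checks out with quick arithmetic. A tiny sketch of the conversion (this ignores protocol overhead, so real throughput is a bit lower than these figures):

```python
# Convert raw link speed in Gbps to theoretical MB/s (decimal units).
def link_mb_per_s(gbps):
    return gbps * 1000 / 8  # 1 Gbps = 1000 Mb/s = 125 MB/s

print(link_mb_per_s(1))   # 125.0 -> already above a single HDD's ~100 MB/s sequential
print(link_mb_per_s(10))  # 1250.0 -> only useful with fast arrays or many readers
```

A single gigabit link already saturates one spinning disk, so 10 GbE only pays off with many concurrent clients or a fast array on both ends.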
If there are any more questions, feel free to keep posting.
Why not a 2600K for better overclocking? The 2600 overclocks like a champ (up to 5.0 GHz stable, in my experience). I won't leave it there because it just feels wrong, but I run 4.2 frequently.
I run 4.2 with my 2600 and it idles at 23 °C; my hard drives and SSD idle at 10 °C, and my GPU idles at about 23 °C as well. No liquid cooling, just many well-placed fans. Under full load, nothing goes over 45 °C; the GPU might rarely go over 50 °C for some unknown reason I couldn't care less about.
Is there a good reason why you are trying to merge your NAS and your gaming rig?
It seems like it would be more cost-effective to put the NAS on its own rig: an ultra-low-power CPU, minimal RAM, and mostly just the HDs. That way you could leave it running 24/7 for file access without it being a huge power hog, as well as pulling it out of your gaming machine (which needs to reboot for driver updates, will crash from time to time due to crappy games, etc.).
Similarly with your hypervisor: yes, a gaming machine will run virtual machines well, but it won't necessarily run VMs and game well at the same time, especially if you are already pushing the hardware with the hypervisor. You run the risk of your games impacting the performance of your virtual machines (and vice versa), as well as their reliability.
If it's "mission critical", you don't game on it; it's a huge liability. The cost of the hardware is something you write off, and is a minute part of the total investment. I don't think you should try to mix it with a gaming rig; it's just bad news waiting to happen.
The only mission-critical part of my VMs is the phone system, since I am not yet using this machine for any business purposes.
I'm someone who dislikes having to move hardware; having everything in one chassis makes moving it a lot simpler. Yes, I have an older machine that could handle being a NAS, but the network storage I need is only for the other people that live here, who won't be hitting the drives as much as I would be. By running it on my main machine I can have full access without congesting the network for everyone else. I would prefer to run ESXi on one of my machines, but the only other machine I have capable of handling it doesn't have any expansion ports for a reliable network card that's compatible with it. Energy efficiency isn't something I'm concerned about either.
On a further note, it takes about 15 seconds for my PC to boot from POST, I have scripts to auto-start my VMs after I log in, I have remote access to the system from anywhere in the world, and an alert system emails me if the system stops responding.
Don't get me wrong: when I start hosting game servers and move into a more production-type environment, I will not be gaming on the machine, as it wouldn't have the resources it needs to survive. At that point I would build a second, much smaller unit to handle my gaming and run ESX on the main system purely as a VM environment.
Before anyone asks why so much for a home system: as I've stated before, there is a plethora of devices in my home that require network access, and as the only IT person around here I'm often stuck fixing everyone's problems. I don't have the time for that on top of monitoring what they are installing. Running these VMs as I do, I can pre-emptively block things from reaching their machines and stop them from installing software that could harm their machine and keep me busy for 10 hours fixing it. So far it's working as intended: I have not needed to fix anyone's machine since Christmas, whereas I would normally be doing it every week or more.
I do something similar with Fusion for our workplace's Windows clients. When they virus-bomb their computer, it's much easier to just hand them a fresh VM with all the software preloaded than to try to clean it up and fix it.
When you do make the transition, pull your GPU out of the server: you won't need a gaming-class GPU in there, and it will just suck power. You can re-use that puppy in your new smaller rig. If you don't have integrated video on the unit (which nearly every new Intel chip does), then throw in the lowest-power card you can: it doesn't take a lot of video card to make Windows work; you only need it to play games. Even if you aren't concerned with power, there's still the heat an unnecessary GPU creates; even at idle it's a lot more than integrated graphics produce.
You can really make do with a smaller CPU. VMs are all about RAM and drive access, as they don't share RAM across workloads well. They don't share hard drive access well either; a couple of moderately active VMs can seriously push an HD deep into its queue (especially if any of them are hitting swap, or the host is).
But they do share CPU power very well. Unless you have a lot of high-transaction databases or someone is trying to stress-test CPUs, most any off-the-shelf CPU is fine for VMs. If you can allocate them enough RAM so they don't hit swap, and limit it to one or two per physical drive, they will run fairly speedily even on low-horsepower CPUs and will share across cores very well. For example: I have a dual-core that's running 8 VMs just fine right now. I have another quad-core that I'm having to rebuild because it has four moderate-activity VMs that are choking the only hard drive in the unit, and I need to split a couple off onto their own drive.
If you're planning on putting 8 VMs on a RAID 6, that may be a mistake unless you have a high-end RAID controller. The RAID 6 will be fine to back them up on, but you may see performance issues if all your VMs start to crank up at the same time.
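The "one or two VMs per physical drive" rule of thumb above can be checked with back-of-envelope numbers. This sketch assumes a single 7200 rpm HDD sustains very roughly 100 random IOPS (an illustrative figure, not a measurement), and the per-VM demands are hypothetical:

```python
# Back-of-envelope check of drive saturation under concurrent VM I/O.
# drive_iops=100 is a rough assumption for one 7200 rpm HDD's random I/O.
def drive_is_saturated(vm_iops_demands, drive_iops=100):
    """True if the combined random-I/O demand of the VMs exceeds the drive."""
    return sum(vm_iops_demands) > drive_iops

print(drive_is_saturated([40, 45]))          # False: two moderate VMs fit on one drive
print(drive_is_saturated([40, 45, 35, 30]))  # True: four of them choke a single HDD
```

That matches the quad-core anecdote above: four moderate VMs on one drive push it past what the spindle can serve, while two fit comfortably.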
Did you read the specs that I gave for the machine? lol. It has a dedicated RAID card that runs at PCIe x8, and I avoid using a swap file or page file where possible. I plan on upgrading this machine to 32 GB of RAM eventually. I do agree I need to get another system for gaming, but right now this one runs perfectly fine for the little gaming that I do.
Just looking at the HDD setup is like nails on a chalkboard to me. I understand you use it for networking and probably need it for that, but the gamer in me wants to see a Corvette, not a school bus.
Error: 37. Signature not found. Please connect to my server for signature access.
All that storage for a gaming rig; what are you going to store, the entire World Wide Web? And then you have an entry-level video card installed. I smell something fishy with the OP's post.
A GeForce GTX 570 is a $300 card. That's not at all entry level.
And how terrible that someone should have a computer that isn't purely a gaming rig! </sarcasm>
Originally posted by xS0u1zx: Did you read the specs that I gave for the machine? lol. It has a dedicated RAID card that runs at PCIe x8, and I avoid using a swap file or page file where possible. I plan on upgrading this machine to 32 GB of RAM eventually. I do agree I need to get another system for gaming, but right now this one runs perfectly fine for the little gaming that I do.
I did. You said you're putting all your VMs on a single array. I mentioned that was a poor idea and that you should spread them out. You have a lot of bottlenecks in there, even with a dedicated RAID card on PCIe x8, if you have moderate-to-high-volume VMs running on it. 8 drives in 4x RAID 1, with a limit of 2-3 VMs per array, would give you better performance than 1x RAID 6 across 8 drives. And even then, you really don't need RAID 6; RAID 5 will give you similar performance and more storage without significantly sacrificing reliability.
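The capacity trade-off between those layouts follows from the standard RAID definitions, and is easy to tabulate. A sketch for 8 equal drives, with capacity counted in units of one drive:

```python
# Usable capacity (in whole drives) for the standard RAID levels discussed:
# RAID 1 mirrors (half the drives hold copies), RAID 5 spends one drive on
# parity, RAID 6 spends two.
def usable_drives(n, level):
    if level == "raid1":
        return n // 2
    if level == "raid5":
        return n - 1
    if level == "raid6":
        return n - 2
    raise ValueError(f"unknown RAID level: {level}")

print(usable_drives(8, "raid6"))      # 6 drives of usable space
print(usable_drives(8, "raid5"))      # 7 drives of usable space
print(4 * usable_drives(2, "raid1"))  # 4 drives of usable space across 4x RAID 1
```

So 4x RAID 1 costs the most capacity but isolates each VM group on its own spindles, RAID 5 gives the most space per drive, and RAID 6 sits between them while paying a double parity-write penalty on every random write.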
For a gaming system, you went cheap.
Show a video with your ID proving you actually own this rig and that it's in your home, or it's not true :P
Hoping to build a full AMD system: Ryzen/Vega/AM4!
MB: Asus V De Luxe Z77
CPU: Intel Core i7-3770K
GPU: AMD Fury X (waiting for big Vega 10 or 11 with HBM2? (a bit unclear now))
Memory: Corsair Platinum DDR3 1866 MHz 16 GB
PSU: Corsair AX1200i
OS: Windows 10 64-bit