Anyone using a Cubix GPU Xpander?



Mar10
06-22-2012, 08:12 PM
Link (https://www.cubixgpu.com/)

First off, I am focusing on adding CUDA cores to reduce rendering times, since I currently see better times on my system in GPU-only mode. Feel free to let me know if that notion is misguided.

Does anyone have experience with these GPU Xpanders or other similar systems? It seems like a reasonable way to add CUDA cores. We are running HP z400 workstations at work with a single Quadro FX 3800 and have been wanting to do more animation work with Move. Due to the PSU limitations of our z400s, we could probably add another Quadro 2000 (65W) safely, but that would only bring us up to 384 cores. For the same reason, I think swapping out the FX 3800 for a Maximus setup (Quadro 2000 at 65W + Tesla C2075 at 215W) is out of the question in the current system.

Cubix has a ton of configurations, but I think I have found a somewhat reasonable compromise. The Cubix GPU Xpander II running 4 Quadro 4000s would run around $5,500-$6,000 and would add 1,024 CUDA cores, for a total of 1,216. Then, as GPUs and core counts advance, we can update the cards in the Xpander. From looking around, I don't think we could build a high-end workstation with 1,200+ CUDA cores for $6K. Am I wrong? Any help would be appreciated. I am looking into options to recommend to my boss and get approved. For this discussion, our management prefers we stick with Quadro cards for SolidWorks compatibility.

thanks

artem
06-22-2012, 08:49 PM
A workstation? No, but a desktop computer is a total possibility. The GTX 580 has 512 CUDA cores and costs about $612 (http://www.newegg.com/Product/Product.aspx?Item=N82E16814130757). With a budget of $6,000 you can get a quad-SLI motherboard and 4 x GTX 580 for under $3,000, for a total of 2,048 CUDA cores, and still have the other $3,000 for the case, memory, power supply, and anything else you need.

david.randle
06-22-2012, 09:00 PM
The best thing to do would be to load the Xpander with Teslas. Since you already have a GPU (the 3800 is fine for Solidworks and controlling the display of our products), you just want processing cores. You can get more cores for less money with Teslas. I think the Tesla c2075's are a bit more in cost than the Quadro 4000's but have 3x the number of CUDA cores. We have used GPU expanders and they work just fine.

Hope this helps

Mar10
06-22-2012, 10:53 PM
Thanks David, I am definitely thinking about going the expander route. Although the custom PC route that artem put forward is intriguing but I am not sure that is really feasible in my corporate environment. We could possibly go the premade way from BOXX or someone else but at that point I am not sure we will save anything.

I am not seeing how the Tesla GPUs give me more CUDA cores for the money. The C2075 averages $2,000 for 448 CUDA cores, while the Quadro 4000 is around $750 for 256. So take the Cubix Xpander II as a starting point; it can hold 4 single-slot cards or 2 double-slot cards. I can stuff it with 4 Quadro 4000s, since they are single-slot cards. At 256 CUDA cores per card, that is 1,024 cores added. With each 4000 costing around $750 plus the Xpander itself, we are looking at around $5,800. Divide that by 1,024 and you have $5.66 per added core. Now, if we stuff it with C2075s, since they are double-slot cards we can only add 2. The C2075s look to be around $2,000 at 448 CUDA cores apiece, so only 896 cores. Add the Xpander, divide, etc., and you get a total of $6,495, or $7.25 per core. If we were comparing the larger-VRAM cards like the Quadro 5000/6000, then you would be right that the Teslas are cheaper, but among the lower-VRAM cards, which are fine for our use, they are not.

Config 1: GPU X II and 4 Quadro 4000s = $5,800 (adds 1,024 CUDA cores @ $5.66/core)
Config 2: GPU X II and 2 Tesla C2075s = $6,495 (adds 896 CUDA cores @ $7.25/core)

Since we are using the FX 3800 for SolidWorks, could we run the GTX 580s that artem mentioned, or even 590s, in the Xpander purely for the added processing? The Xpander IIII has space for 4 double-slot cards. Say we did 4 GTX 590s in that case; you can get them for $1,000 or less, so it would be something like below. Or even just use 2 590s for a more budget-friendly setup and add 2 more down the road.

Config 3: GPU X IIII and 4 GTX 590s = $7,000 (adds 4,096 CUDA cores @ $1.71/core)

Now that would be nice. Are there any issues with running multiple 590s given their dual-GPU design?
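For anyone checking my math, the cost-per-core comparison above works out like this in a few lines of Python (prices and core counts are the rough figures quoted in this thread, not current street prices):

```python
# Cost per added CUDA core for each Xpander configuration discussed above.
# Totals include the Xpander chassis, per the thread's rough quotes.
configs = {
    "GPU X II + 4x Quadro 4000": (5800, 4 * 256),   # $5,800 total, 1,024 cores
    "GPU X II + 2x Tesla C2075": (6495, 2 * 448),   # $6,495 total, 896 cores
    "GPU X IIII + 4x GTX 590":   (7000, 4 * 1024),  # $7,000 total, 4,096 cores
}

for name, (total_cost, cores) in configs.items():
    print(f"{name}: {cores} cores at ${total_cost / cores:.2f}/core")
```

The GTX 590 config wins on raw cost per core by a wide margin, which is what prompted the question above.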

Aselert
06-23-2012, 02:26 PM
Hi Mar10,
No issues with the GTX 590... but the memory (VRAM) is very small at just 1.5 GB; don't forget this parameter!

artem
06-25-2012, 02:35 PM
Yeah, if you go the GTX route, try to get the 3GB VRAM cards.

Mar10
06-25-2012, 02:39 PM
Hi Mar10,
No issues with the GTX 590... but the memory (VRAM) is very small at just 1.5 GB; don't forget this parameter!

Aselert, We are currently using a Quadro FX 3800 card with 1GB VRAM and have not run into any issues except long render times in Move. Am I correct that VRAM requirements are based on the model complexity only or do camera and model animations ramp up the VRAM needed as well?

andy
06-25-2012, 03:28 PM
Textures and environment can add up very quickly. Your render size also takes up that memory, which is why Bunkspeed automatically tiles at 4,000 px. But you can often be fine with the smaller RAM. I only have 1.5 GB on my machine, and although I crash all the time, a simple restart, plus making sure not to run anything else 3D or Adobe, fixes it 99% of the time.
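As a back-of-envelope illustration of why render size matters (assuming the renderer accumulates into a 32-bit float RGBA buffer; that's my assumption, not a documented Bunkspeed figure, and textures, geometry, and framework overhead all come on top of it):

```python
# Rough VRAM taken by the render target alone, assuming a 32-bit float
# RGBA accumulation buffer (an assumption for illustration only).
def render_buffer_mb(width_px, height_px, channels=4, bytes_per_channel=4):
    return width_px * height_px * channels * bytes_per_channel / (1024 ** 2)

# A 4000 x 4000 px tile works out to roughly 244 MB before any scene
# data or textures are even loaded.
print(f"{render_buffer_mb(4000, 4000):.0f} MB")
```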

I've been intrigued by these expansions as well, but I think I'll hold off until someone comes up with a reasonably priced Thunderbolt version (and I have a computer with Thunderbolt) before doing that. I have a feeling the PCIe bottleneck gets hit pretty quickly. I've heard of people using them over on the Max forums, but no one has seen as large a speed increase as they expected. One person ran an experiment: they priced one of those and then put the same amount (I think theirs was closer to $15K) into a new computer with everything inside the machine. The new machine outperformed the one with the expansion in every way. My takeaway was that there's probably a point at which it's better to add an expansion, but if you have the budget, it's probably best to have it all in one machine.

I've also been questioning the speed of the Quadros, but haven't seen any iray benchmarks. We did some back when I had both, and the Quadros were slower, but now I'm hearing that the Quadros can handle floating-point data better, so they might be faster on certain scenes? We don't currently have any capable Quadros in the building, so I can't test it.

Mar10
05-16-2013, 08:23 PM
It's been a long time, but I finally have an update and more questions. We are moving forward with 2 GPU Xpanders, but sadly with a slimmed-down budget. Hopefully we can upgrade the GPUs over time. Regardless, we have only $1,320 left in the budget for GPUs, split between the 2 Xpander boxes, so $660 per box. What GPU or GPUs would you buy for $660? As both boxes need to be the same, we can discuss just one to keep it simple. Each Xpander box can hold 4 single-wide GPUs or 2 double-wide.


Scenario 1: Sticking to the budget (most likely)
Upon first look, we could easily do 2 GTX 660 Tis, since they are only $263 each at B&H. Yes, I know the 660 Ti only has 2 GB VRAM, but we seem to be getting along fine with our current Quadro FX 3800, which has only 1 GB. So I am not sure this is a big issue for rendering our products. What other options should I look into?


Scenario 2: Pushing the budget (probably a 25% chance)
I am going to push for an extra $1,000 in the budget. That gets us to $1,160 per box and would allow a single Titan per box, with space to add another next year (or another GPU). Is there a better option at this price point?


I have read some of the threads here saying Fermi cards are outperforming Kepler cards. Is there a single- or double-Fermi-card setup that would fit either of these budgets and perform better?

Thanks in advance for the help.

blitz
05-18-2013, 12:40 AM
I still think 580s are the best bang for your buck. We made the mistake of purchasing 2 x 680s with poorer performance than 2 x 580s at the same price point. Titans are a worse price/performance investment than 680s. Granted, Titans give you more VRAM, so if you are dealing with large projects, Titans are the way to go. But from what you are describing, Titans may be overkill for the size of projects you deal with.

Unless Bunkspeed staff can really shed some light on the situation and convince us that the next update will better utilize the Kepler cards, Fermi currently still prevails, even though those cards are less efficient and run hotter.

Mar10
05-20-2013, 04:39 PM
Thanks, Blitz, for the response. The big issue here is budget. We may be able to do a single 580, but for about the same price we could do twin 660s. Is the Kepler/Fermi performance difference so big that you would suggest a single 580 over 2 660s?
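To frame it in raw numbers (core counts are published specs; the single-580 price is my assumption at roughly $400 new, since none was quoted here; and a Kepler core does much less iray work than a Fermi core, so this is cost per core, not cost per unit of performance):

```python
# Raw CUDA cores per dollar for the two options under discussion.
# Caveat: a Kepler core (660 Ti) is not equivalent to a Fermi core (580)
# in iray, so this is NOT a performance comparison.
options = {
    "1x GTX 580 (Fermi, ~$400 assumed)": (400, 512),
    "2x GTX 660 Ti (Kepler, $263 each)": (2 * 263, 2 * 1344),
}

for name, (price, cores) in options.items():
    print(f"{name}: {cores} cores, ${price / cores:.3f}/core")
```

On core count alone the twin 660 Tis look far better, which is exactly why the per-core performance gap between the architectures is the deciding question.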

david.randle
05-20-2013, 06:45 PM
Have you considered just doing a single Titan? A single Titan is almost as fast as 2 GTX 680s and has 6 GB VRAM. It may not be that far off budget-wise from 2 580s or 2 680s.

Mar10
05-20-2013, 07:57 PM
Thanks David. Yes, we have considered single Titans. I mentioned this in post #9 of this thread. That would be great with the large VRAM and it leaves us space to expand in the box. There is maybe a 25% chance of upping the budget to do this.

I would bet the most likely scenario is that I will have to come in at or under budget. I don't see any possible way to get 2 580s or 2 680s for the $660 left in my budget, let alone a Titan.

Does anyone have a suggestion that would work within the $660 budget?

thanks

redrum
05-22-2013, 12:37 AM
GTX 580 all the way. Stay away from GTX 600s... they don't mix well with current iray.

You should find a custom PC builder and see if they have used GTX 580s. You can get 4 of them with a $700 budget. They'll last you long enough until you can upgrade again.

AB.Design
05-22-2013, 04:41 PM
GTX 580 all the way. Stay away from GTX 600s... they don't mix well with current iray.

In what way do the 6xx series cards not "mix well" with iray? I'm running two 4 GB GTX 680 cards and iray works perfectly.

If you are referring to the speed differential between Kepler and Fermi under iray, then yes, Fermi performs better overall at this moment in time with this build of iray.

Frankly, considering the new iray build is around the corner, I would be putting my money in the 680/670 camp to:
a - Leverage the additional 1 GB of VRAM over a 580.
b - Ensure that I'm not spending $$$ on second-hand goods that may previously have been OC'ed to within an inch of their lives.
c - Take a gamble that the new iray build will close, if not reverse, the performance gap with Fermi.

Sorry to deviate from the OP's topic, but the above comment is nonsense.

If I were personally filling a GPU Xpander, I would choose either 4 GB 680s, Titans, or Teslas, depending on budget. Whatever you do, make sure the cooling is ROCK SOLID.

Mar10
05-29-2013, 04:20 PM
Since buying used 580s isn't going to fly at my company, we would be stuck buying a single new 580 or twin (new) 660 Tis to fit the budget. Even with the Kepler/Fermi performance difference, would anyone really still opt for the single 580 over 2 660 Tis?

Mar10
06-03-2013, 03:49 PM
I saw some news on the new GTX 780, and that may be the answer for now. I have quoted prices that put me $80 over budget, which isn't too far off and leaves room to add a card next budget cycle.

Will the new software version supporting the 780 and Titan be a free update for current users or a paid upgrade? The 780 is really close to working with my budget, but not if we have to pay for a software upgrade. Hopefully David knows the answer to this.

thanks