3DCoat Forums
AbnRanger

Don't upgrade that NVidia card just yet...


I think this is probably one reason why Cebas (a third-party software vendor for 3ds Max, Maya and C4D) has dragged its feet for the past 3-4+ years on the R4 GPU upgrade. It looked like they were going to be all CUDA-based (hybrid CPU + GPU), but at one point this Intel card came into the picture, and I think that made them take another look. Who knows? GPU computing IS indeed the future, because CPU tech has been stuck in the mud, with no significant advances for the past 3-5 years. We've had 4-8 core CPUs since 2009... and we're still here.

Well, it looks to me like there's a big CPU coprocessor/GPU war coming, and that will be a good thing.


Actually, after all this discussion, I'm thinking the whole GPU parallel programming business may not be the right way to go. Andrew thinks it's a bitch to program, and so do the guys over at V-Ray.

V-Ray has a much more interesting take on it; they're going with Intel Xeons and the Xeon Phi coprocessor, which is much easier to program for multithreading and parallel computing.

Needs to be highlighted.
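To illustrate what "easier to program" means in practice, here is a minimal sketch (my own, not V-Ray's code) of the Phi model: the same C++/OpenMP loop that already runs multithreaded on the host Xeons can, with the Intel compiler, be pushed to the coprocessor via an offload directive. The function and variable names are made up for the example.

    #include <cstdio>

    // Shade a bucket of pixels: plain C++ that already runs multithreaded on the host.
    void shade_bucket(float* pixels, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            pixels[i] = pixels[i] * 0.5f + 0.25f;   // stand-in for real shading work
    }

    int main()
    {
        const int n = 1 << 20;
        float* pixels = new float[n]();

        // With the Intel compiler, roughly the same call can be sent to the Phi by
        // prefixing it with an offload directive (sketched here only as a comment):
        //   #pragma offload target(mic) inout(pixels : length(n))
        shade_bucket(pixels, n);

        std::printf("first pixel: %f\n", pixels[0]);
        delete[] pixels;
        return 0;
    }

The appeal is that the code stays ordinary C++: no kernels, no explicit device memory management, no separate toolchain, which is presumably what they mean when they say CUDA is painful by comparison.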


http://lotsofcores.com/

http://www.drdobbs.com/parallel/programming-the-xeon-phi/240152106


http://www.sgi.com/products/servers/accelerators/phi.html

  • SGI Rackable Servers
    Leveraging the latest Intel® Xeon® Processor architecture, and Intel® Xeon Phi™ coprocessors, these servers deliver top value and performance. SGI Rackable twin-socket Intel® Xeon® servers are tailored to meet your exact specifications for high-density, high coprocessor to CPU ratios, high I/O or high memory.

  • SGI UV 20
    SGI UV 20, combined with Intel® Xeon Phi™ is ideal for development or field deployed solutions, with nearly 3 Teraflops peak compute, 1.5TB of memory, four PCIe gen 3 x16 slots in a 2U, four Intel® Xeon® E5-4600 processors, and two Intel® Xeon Phi™ coprocessors.

So the SGI UV blade pairs Xeon CPUs with Xeon Phi coprocessors, and the Xeon Phi works entirely out-of-core... so not only does it access its own 8 GB of GDDR5 RAM, it draws on the blade's total RAM as well, to whatever scale it needs, it seems.

I'm wondering how much that costs...

48 blades means 48 buckets instantly, so 48 blades gets you real-time CPU rendering at 4K of the finest quality, with all the shaders, displacement, etc. :D

This may sound pretty off base for what we want, but I'm hearing rumors that this may be the route Apple will take in its new Mac Pro desktops, the first new ones in three years. We shall see what comes out of WWDC, June 10-14. Maybe we'll all want to run out and buy the new Mac Pros for our 3D Coat work. It would be very cool if Apple became more like SGI: an affordable SGI.



Then again, CentiLeo has its own approach:

<<< and plus some of Intel's inventions...
this is not a miracle, but another GPU competitor. CUDA programs were optimized for the massively parallel architecture of GPUs. People will probably be able to run C++ programs on Xeon Phi, but that doesn't mean they will run fast or scale efficiently across more parallel cores on Xeon Phi.
Efficient algorithms for a parallel architecture are different from the ones originally optimized with a CPU target in mind.
You can't even imagine what crazy things we implement in our GPU software. Such things would be impossible to even think about in a C++ program running on any CPU.

http://www.centileo.com/news.html
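A tiny C++ illustration of the point the CentiLeo developer is making (my own toy example, not their code): an algorithm written with a serial CPU in mind often carries a step-by-step dependency that simply doesn't map onto thousands of cores, and has to be restructured into a tree/reduction shape before a GPU or a Phi can do anything useful with it.

    #include <cstdio>
    #include <vector>

    // CPU-style running sum: each iteration depends on the previous one,
    // so as written it cannot be split across parallel cores.
    float serial_sum(const std::vector<float>& v)
    {
        float acc = 0.0f;
        for (float x : v) acc += x;
        return acc;
    }

    // Parallel-friendly restructuring: pairwise (tree) reduction. Each pass halves
    // the problem, and every pair inside a pass can be summed independently,
    // which is the shape massively parallel hardware actually wants.
    float tree_sum(std::vector<float> v)
    {
        for (std::size_t stride = 1; stride < v.size(); stride *= 2)
        {
            #pragma omp parallel for
            for (std::size_t i = 0; i + stride < v.size(); i += 2 * stride)
                v[i] += v[i + stride];
        }
        return v.empty() ? 0.0f : v[0];
    }

    int main()
    {
        std::vector<float> data(1 << 16, 1.0f);
        std::printf("serial: %.1f  tree: %.1f\n", serial_sum(data), tree_sum(data));
        return 0;
    }

Both functions return the same answer; the difference is purely in the dependency structure, which is what decides whether extra cores help at all.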



Told you, the future is not in the GPU war, it's in CPU optimization. You can argue all you want; ZBrush has been doing fine for about 15 years relying on the CPU only... and there's still a lot to be leveraged in CPU usage optimization.

Of course you can grab a huge boost with the GPU, but it's not always reliable: it's highly dependent on the card manufacturer and prone to changes that invalidate huge portions of code without warning.

If you're a huge company like Autodesk, with tight links to the card manufacturers and hundreds of programmers, maybe it's a somewhat safe road. But in Pilgway's case it's suicide.


You can argue all you want; ZBrush has been doing fine for about 15 years relying on the CPU only...

OK, got to confess that I am no tech head, but after reading most of this thread here - what do you really want?

Speed while working?

Workable poly counts numbering in the millions of polygons - the more the better?

A stable program that handles all the above and more?

If you can agree to that - get ZBrush :).

32-bit, no CUDA - no worries.

And, just to make this clear - no fanboy here; I've got both (ZBrush/3D Coat), so I am not biased - although I find that there is a lot to be said for "32-bit, no CUDA..." :).

i7-2600K @ 3.40 GHz, 16 GB RAM, W7/64-bit, GTX 580


What do I want?

I'm only interested in sculpting in 3DC, because I have other tools for the other parts of the software.

What I want: a sculpting feel on par with ZBrush.

3DC has the freeform workflow that traditional sculptors have dreamt about for nearly 25 years, but so far it has the worst brush feel of the competition (even Sculptris, which is very simple, has effective brushes that do what they were designed to do CLEANLY).

Of course you can sculpt in 3DC, I can't say it's impossible: I've done a few things with it. BUT it could be THE liberating tool for traditional sculptors IF the brushes were as polished and easily tunable as ZBrush's...

I'm not talking about performance; so far the app is OK on that side. As for stability, if you start a new project you may finish it without issue (depending on which tools you use ^^')... I just want polish (see the voxel room in my sig...).

Otherwise, like you said (and as an awful lot of people coming from ZBrush/Mudbox and a traditional background think): if 3DC can't provide that last missing piece of the puzzle, I have no business here, and I'd better go back to ZBrush...

You can't expect artists to stick around for long when the tool on offer is a joy to work with, but the brushes are destroying the work put into your sculpture...


BeatKitano pretty much nailed it dead on with his last post. I use 3D Coat primarily for retopo, UV setup, and bitmaps, because those are what it excels at, IMHO. I appreciate what 3D Coat is trying to do insofar as sculpting is concerned, particularly its ability to freely add, subtract, cut and whatnot, which comes closer to mimicking real clay than ZBrush does, especially when ease of use is considered. It just doesn't handle these and other sculpting tasks cleanly, which is where ZBrush, and even Mudbox, trumps it. If Pixologic not only refines ZBrush but also adds all the stuff currently missing when they release 5.0, it will truly become unstoppable, with the final hurdles being its interface (which I'm fine with) and higher price. Not even going to comment on rendering, lol.


Told you, the future is not in the GPU war, it's in CPU optimization. You can argue all you want; ZBrush has been doing fine for about 15 years relying on the CPU only... and there's still a lot to be leveraged in CPU usage optimization.

Of course you can grab a huge boost with the GPU, but it's not always reliable: it's highly dependent on the card manufacturer and prone to changes that invalidate huge portions of code without warning.

If you're a huge company like Autodesk, with tight links to the card manufacturers and hundreds of programmers, maybe it's a somewhat safe road. But in Pilgway's case it's suicide.

I have to disagree totally here. GPU acceleration doesn't have to use CUDA or OpenCL; it can simply be optimized for OpenGL or DX. This is how Mudbox stole ZBrush's lunch money (the 2009 version) in a single release: they went from being considerably slower than ZB with CPU multithreading... to being GPU accelerated. So anyone saying it can't be done needs to review their study notes again.
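To make the distinction concrete, here is a minimal sketch of the idea (my own illustration, not Mudbox's or 3D Coat's actual code): the per-vertex brush displacement is evaluated in a plain GLSL vertex shader, so the work rides on the graphics pipeline every GL-capable card exposes, with no CUDA or OpenCL anywhere. It assumes an existing OpenGL 2.0+ context and loader (GLEW/GLAD), and the uniform names are invented for the example.

    // Assumes a valid OpenGL 2.0+ context and loaded entry points (e.g. GLEW or GLAD).
    #include <glad/glad.h>

    // Vertex shader: push each vertex along its normal by a brush falloff computed
    // on the GPU, instead of looping over vertices on the CPU.
    static const char* kBrushVS = R"(
        #version 120
        uniform vec3  brushCenter;
        uniform float brushRadius;
        uniform float brushStrength;
        void main() {
            float d    = distance(gl_Vertex.xyz, brushCenter);
            float fall = max(0.0, 1.0 - d / brushRadius);          // 0 outside the brush
            vec3  pos  = gl_Vertex.xyz + gl_Normal * brushStrength * fall * fall;
            gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0);
        }
    )";

    // Compile and link the shader; bind with glUseProgram() before drawing the mesh.
    GLuint buildBrushShader()
    {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &kBrushVS, nullptr);
        glCompileShader(vs);

        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glLinkProgram(prog);
        glDeleteShader(vs);
        return prog;
    }

Whether Mudbox actually does its displacement this way I can't say; the point is only that the graphics API alone is enough to move that kind of per-vertex work off the CPU, on any vendor's card.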

I put the 670 on eBay. I'm fed up with this wireframe issue, and I think there is a little bit of lag when sculpting in Voxel mode... due to its narrow memory bus and scaled-down CUDA performance. Going to try and pick up a 580 instead.


Ah, now we're talking! Proprietary GPU/CPU acceleration is the way to go if you absolutely want to benefit from those components.

BUT, like CUDA and OpenCL, it comes with a price: you have to constantly monitor the card market to make your software support a crazy number of hardware variations, and you need at least one guy dedicated to this task alone (and a very talented one). Not sure it's a safe road for Pilgway, but I agree it's the best route (you don't depend on anyone else's tech).


I actually discovered recently that the large-brush lag (on 4K+ maps) in the Paint Room... vanished (for the most part)... when I used an Intel i7 instead of an AMD Phenom X6. It was indeed a limitation of the CPU, mostly AMD's. Back around SIGGRAPH 2010, I recall some talk about Andrew using some Intel libraries. I guess AMD CPUs don't have access to those, or Intel has some wicked prefetch voodoo going on with their CPUs.

On paper, the AMD CPU I was using was neck and neck with this i7 950 in most benchmarks, including Cinebench and others. And the 670 has a higher framerate overall, but is really, really crappy with wireframe, and I suspect with CUDA. When I threw my old GTX 275 in, it seemed to perform smoother when sculpting with voxels. I think some of your issues with sculpting could be a little bit on the hardware side, too.


Your CPU thing is why I never go with AMD on CPUs (nor their cards, mind you, but for other reasons ;)). They can claim performance parity with Intel hardware, but Intel has always put optimized libs out there, and they always end up integrated into CPU-intensive apps... I remember the SSE2/SSE3 days, when you didn't want to use AMD for that reason alone; the perf boost was pretty dope...
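For anyone who wasn't around for those days, here's a toy example of the kind of win those SSE-optimized paths gave (my own illustration, not anything from 3D Coat or Intel's libraries): the same layer-blend math done one float at a time versus four floats per instruction with SSE2 intrinsics. In real apps the compiler or Intel's libraries did this work for you; the function names here are made up.

    #include <emmintrin.h>   // SSE2 intrinsics
    #include <cstdio>

    // Blend two paint layers, scalar version: one float per iteration.
    void blend_scalar(const float* a, const float* b, float* out, int n, float t)
    {
        for (int i = 0; i < n; ++i)
            out[i] = a[i] + t * (b[i] - a[i]);
    }

    // Same blend with SSE2: four floats per iteration. For brevity, n is assumed
    // to be a multiple of 4 and the pointers 16-byte aligned.
    void blend_sse2(const float* a, const float* b, float* out, int n, float t)
    {
        __m128 vt = _mm_set1_ps(t);
        for (int i = 0; i < n; i += 4)
        {
            __m128 va = _mm_load_ps(a + i);
            __m128 vb = _mm_load_ps(b + i);
            __m128 r  = _mm_add_ps(va, _mm_mul_ps(vt, _mm_sub_ps(vb, va)));
            _mm_store_ps(out + i, r);
        }
    }

    int main()
    {
        alignas(16) float a[8]   = {0, 1, 2, 3, 4, 5, 6, 7};
        alignas(16) float b[8]   = {8, 9, 10, 11, 12, 13, 14, 15};
        alignas(16) float out[8] = {};
        blend_sse2(a, b, out, 8, 0.5f);
        std::printf("%.1f %.1f\n", out[0], out[7]);   // expect 4.0 and 11.0
        return 0;
    }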


Ah, now we're talking! Proprietary GPU/CPU acceleration is the way to go if you absolutely want to benefit from those components.

I completely agree with this. If Andrew did it himself, I think that would be a better effort than relying on a third party.


I would be happy if the brushing and large edit operations in the Voxel Room (Move, Transform, Primitives with the Models palette, and the Pose tool) were all OpenGL- and/or DX-accelerated instead. Somehow Mudbox is doing it, and I don't know if they have moved to DX10 or 11, or are still on DX9.

I think a move to DX11 or OpenGL 3/4 overall would be a good move. Both have been out for 3-4 years now, and all newer games are written for them. DX9 is about 10 years old now.

