3DCoat Forums

Don't upgrade that NVidia card just yet...


Recommended Posts

  • Reputable Contributor

After upgrading from an NVidia GTX 470 to a 670 (mostly to upgrade the video RAM), I'm somewhat disappointed. I noticed right off the bat that it struggles with wireframe turned on. The 470 handles it smoothly. On the other hand, a very complex scene where the 470 would choke (maybe due to its low video RAM...only 1.28GB), the 670 handles just fine. And I think the larger VRAM helps with larger texture map sizes.

So, I'm kind of torn. I wish Andrew was available to chime in on this. One thing I noticed is that NVidia reduced the memory bus width going from Fermi to Kepler: from 320-bit on the 470 to 256-bit on the 670. WTH? If you were thinking about upgrading yourself...wait. I can understand wanting to bump up the VRAM, but it will probably be best to wait and see what they have in store for the 700 series due soon.

Found a good comparison between the two:

http://gpuboss.com/gpus/GeForce-GTX-670-vs-GeForce-GTX-470

Compute performance is about equal.
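
For what it's worth, the narrower bus doesn't automatically mean less bandwidth, because the memory clock went way up between Fermi and Kepler. Here's a rough sketch (my own, not 3D Coat code; the 2x factor assumes GDDR5's double data rate) that computes the theoretical peak from what the CUDA runtime reports:

```cpp
// Rough sketch: theoretical peak memory bandwidth from what CUDA reports.
// Assumes a GDDR5 card (hence the 2x data-rate factor). Not 3D Coat code.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);
    // memoryClockRate is in kHz, memoryBusWidth is in bits.
    double gbps = 2.0 * p.memoryClockRate * 1000.0 * (p.memoryBusWidth / 8.0) / 1.0e9;
    printf("%s: %d-bit bus, ~%.0f GB/s theoretical peak\n", p.name, p.memoryBusWidth, gbps);
    return 0;
}
```

By the published specs that works out to roughly 320/8 bytes x 3.35 GT/s ≈ 134 GB/s on the GTX 470 versus 256/8 bytes x 6.0 GT/s ≈ 192 GB/s on the GTX 670, so the wireframe slowdown is probably not a raw-bandwidth problem.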

Edited by AbnRanger

  • Reputable Contributor

The new Titan Ultras will have 6 GB of GDDR5 ECC RAM and 4,000 CUDA cores.

They're almost here..

The problem with the number of CUDA cores is that Nvidia designed each one to do less work than a CUDA core in the previous generation (Fermi). I was really surprised by the 670's difficulty navigating with wireframe toggled on in the Voxel Room.

  • Advanced Member

I think part of the problem may be that CUDA in 3D Coat has been put on the back burner and has not been upgraded to the latest versions. Also, there's no plan for implementing OpenCL, so all those AMD Radeon people are getting locked out. Things are moving fast in GPUs, not only in hardware but in how GPUs are programmed, with C# and F# in particular. This stuff needs to be implemented properly if we're going to get the most out of it. Ignore it and let it lapse and you get nothing from it.

This is where it's going..or rather how it should be proceeding.

https://www.quantale...s/introduction/

Edited by L'Ancien Regime

  • Reputable Contributor

Yeah, I read that the Kepler shader cores (CUDA) now run at the same speed as the rest of the GPU's clock.

"On Fermi, the cores run on double the frequency of the rest of the logic. On Kepler, they run at the same frequency."

So the CUDA cores do not seem as powerful individually... You get more CUDA cores with Kepler, but not the same per-core clock speed.
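
If you want to put rough numbers on that point, here is a little sketch (mine, not from 3D Coat) that estimates peak single-precision throughput from what the CUDA runtime reports; the cores-per-SM table is an assumption that only covers Fermi and Kepler:

```cpp
// Rough sketch: estimate peak single-precision GFLOPS from device properties.
#include <cstdio>
#include <cuda_runtime.h>

// Assumption: cores per multiprocessor for Fermi (2.x) and Kepler (3.x) only.
static int coresPerSM(int major, int minor) {
    if (major == 2) return (minor == 0) ? 32 : 48;  // GF100 / GF104-class Fermi
    if (major == 3) return 192;                     // Kepler
    return 0;                                       // unknown architecture here
}

int main() {
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);
    // clockRate is in kHz; on Fermi it reflects the doubled shader clock,
    // on Kepler it is the ordinary core clock -- exactly the change being discussed.
    double ghz   = p.clockRate / 1.0e6;
    int    cores = p.multiProcessorCount * coresPerSM(p.major, p.minor);
    printf("%s: %d cores @ %.2f GHz, ~%.0f GFLOPS SP peak\n",
           p.name, cores, ghz, 2.0 * cores * ghz);  // 2 flops per core per clock (FMA)
    return 0;
}
```

By published specs that's roughly 448 x 1.215 GHz x 2 ≈ 1,090 GFLOPS for the GTX 470 versus 1344 x 0.915 GHz x 2 ≈ 2,460 GFLOPS for the GTX 670: three times the cores, but each running at about 75% of the old shader clock, and real workloads usually gain less than the paper numbers suggest.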

Now I understand why some Blender users have been a little disappointed as well when rendering with Cycles after they upgraded from Fermi to Kepler. They did not get the rendering speedup they thought they would. Of course it is better and faster, but not quite what they expected.

I will still upgrade sometime. AbnRanger, keep us posted if you figure out the slowdown with the wireframe display...


  • Reputable Contributor

I think part of the problem may be that CUDA in 3D Coat has been put on the back burner and has not been upgraded to the latest versions. Also, there's no plan for implementing OpenCL, so all those AMD Radeon people are getting locked out. Things are moving fast in GPUs, not only in hardware but in how GPUs are programmed, with C# and F# in particular. This stuff needs to be implemented properly if we're going to get the most out of it. Ignore it and let it lapse and you get nothing from it.

This is where it's going..or rather how it should be proceeding.

https://www.quantale...s/introduction/

Can you e-mail Andrew about this? He just brushes off my requests about it. He doesn't think CUDA updates/recompiling will make that dramatic of a difference. But he also didn't think multi-threading would help much, either. It did. I think he's wrong here, too. When he added CUDA SMOOTH BOOST, it made a huge difference. I showed him (via screen share) a comparison of painting with large brush sizes in 3D Coat and Mudbox. Mudbox was faster with an 8k map than 3D Coat was with a 4k map, and it had absolutely no problem whatsoever with large brushes.

Even after seeing the difference with his own eyes, he brushed that off, too. Much of the reason I upgraded was that I mistakenly thought the 4GB VRAM buffer would make a big difference in this regard. It did not. I have upgraded all the components before, thinking it would make a big difference in 3D Coat. It never has. So, these bottlenecks in 3D Coat need to be addressed, because they cause people like me to spend money needlessly on hardware upgrades that don't help much. It's the architecture of the app...not a lack of hardware capability.

Perhaps upgrading from DX9 to DX11 could help a great deal, too. We've been on 9 since 3D Coat's earliest days. It's time to bring the app up to date. C4D now uses OpenGL 3. Now, to be fair, I want to be clear that when large brush sizes are not used, and the objects are not excessively dense for the task, performance is quite good...even remarkable in some cases. It's just a matter of addressing the bottlenecks when large brush sizes are used/needed.

Edited by AbnRanger

  • Advanced Member

Andrew is just one man. He's accomplished a great deal on his own, and at moments he's produced revolutionary innovations in his program. But it's a lot to do on his own. I'm not sure what he wants to do with it, or where he wants to take it all.

I look at it and I can see where I would like it to go but I'm sure if I said that it would ignite a big quarrel among all of us.

Also, the CPU vs GPU debate is perhaps one I'm not qualified to step into. I'm just a beginning programmer and, frankly, the more I learn about programming, the more I realize how little I know.

From what I can gather, the CPU is for doing the big geometry number crunching and the GPU is for pushing pixels around.

Of course, you can harness the GPU as a kind of coprocessor with CUDA and OpenCL.
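
For anyone who hasn't seen what that looks like, here is a minimal, purely illustrative CUDA sketch of the coprocessor idea (nothing to do with 3D Coat's actual code): the CPU allocates a buffer on the card, the GPU runs the same tiny function across many threads, and the CPU waits for the result.

```cpp
// Minimal "GPU as coprocessor" sketch. Illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scaleVerts(float* v, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) v[i] *= s;
}

int main() {
    const int n = 1 << 20;                          // ~1M floats
    float* d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));
    scaleVerts<<<(n + 255) / 256, 256>>>(d, 2.0f, n);  // ~4096 blocks of 256 threads
    cudaDeviceSynchronize();                        // CPU waits for the GPU to finish
    cudaFree(d);
    printf("done\n");
    return 0;
}
```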

Andrew is a pretty impressive programmer and perhaps he has his own philosophy his own insights on this.

All I'll say about this is that the thing that really sold me on 3d Coat was that Andrew seemed to be a man who was addicted to making dramatic technological jumps far ahead of less agile big players like Autodesk or even Pixologic.

Think of it: first adopter of PTEX within weeks of it being announced, voxel sculpting, Live Clay subdivision modeling, and CUDA support.

I just keep hoping that he'll get inspired again and fire up the genius factory to produce some other new outstanding innovation in the face of overwhelming competition from some people with deep pockets.

Edited by L'Ancien Regime

  • Reputable Contributor

In all fairness to AbnRanger, he posted a request through Mantis for upgrading CUDA and a large number of users +1'd his Mantis report... I cannot see into Andrew's mind for his reasons for not recompiling against the latest CUDA version. Blender is still compiled for CUDA version 4 and Octane Render for version 4.2, so CUDA must be somewhat hard to work with, as companies do not upgrade quickly to the newest version.

If anything, I fault Nvidia for making CUDA so funky to work with. One program gets compiled using one version of CUDA; another program you work with uses a different one. Not all of us want to take the time to learn how to install several versions of CUDA and make sure each program sees only the version it needs. I could dig into it and learn how to run more than one version of CUDA in Linux, but I would rather sculpt in 3DCoat than learn some more computer stuff. I have done that many times to fix and address issues, but I would rather programmers make my life simpler...

I do agree that recompiling for the new CUDA version 5, after 3DCoat version 4 is released, needs to be near the top of the list.
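
If you are curious what your own machine exposes, a tiny sketch using two CUDA runtime calls shows the two numbers that get out of sync: what the installed driver supports versus what a given binary was built against.

```cpp
// Sketch: the CUDA version the driver supports vs. the runtime this binary
// was compiled against -- the mismatch behind "which CUDA version?" headaches.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int driverVer = 0, runtimeVer = 0;
    cudaDriverGetVersion(&driverVer);
    cudaRuntimeGetVersion(&runtimeVer);
    printf("driver supports CUDA %d.%d, app built against CUDA %d.%d\n",
           driverVer / 1000, (driverVer % 100) / 10,
           runtimeVer / 1000, (runtimeVer % 100) / 10);
    return 0;
}
```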


  • Contributor

I remember Andrew saying somewhere on this forum that when he is coding for CUDA, he writes some code and then just prays that it won't crash.

CUDA must be a real nightmare to code for if prayer is an integral part of the coding process!

Edited by TimmyZDesign

  • Reputable Contributor

I remember Andrew saying somewhere on this forum that when he is coding for CUDA, he writes some code and then just prays that it won't crash.

CUDA must be a real nightmare to code for if prayer is an integral part of the coding process!

It probably crashes because he hasn't recompiled 3D Coat for newer CUDA versions. We're still essentially going on code written for CUDA 1. There is a lot of technology NVidia has incorporated into CUDA since then that 3D Coat isn't taking advantage of. For example, with the new Kepler cards, Nvidia introduced dynamic parallelism and Hyper-Q. If a vendor doesn't recompile for CUDA 5, their app won't get any benefit from that tech.

http://docs.nvidia.com/cuda/kepler-tuning-guide/index.html
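
For reference, this is roughly what dynamic parallelism looks like in a toy, hypothetical sketch: a kernel launching another kernel without a round trip to the CPU. It needs a compute capability 3.5 part and nvcc flags along the lines of -arch=sm_35 -rdc=true, which is exactly the sort of thing a binary built against an old toolkit can never use.

```cpp
// Toy sketch of CUDA dynamic parallelism (compute capability 3.5+ only).
// Build roughly with: nvcc -arch=sm_35 -rdc=true dp.cu -lcudadevrt
#include <cstdio>
#include <cuda_runtime.h>

__global__ void child(int parentBlock) {
    printf("child kernel launched by parent block %d\n", parentBlock);
}

__global__ void parent() {
    // The GPU launches more work for itself -- no CPU involvement needed.
    child<<<1, 1>>>(blockIdx.x);
}

int main() {
    parent<<<2, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
```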

With some things, you just won't know the impact until you do it. Multi-threading was the same exact thing: a lot of work, and Andrew didn't really know if it would help much. It turns out it did. In fact, it's one of the chief reasons why 3D Coat can be legitimately compared to ZBrush and Mudbox today. Otherwise, the performance would be so underwhelming that it would just be considered a cheap toy. In many areas of the app, CPU multi-threading makes 3D Coat perform so well that it's hard to distinguish it from the other two. It's only when you start to push it a bit, mainly with large brush sizes, that you start to see separation.
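
For contrast with the GPU examples above, the kind of CPU multi-threading being credited here is conceptually simple; a purely hypothetical sketch of splitting per-vertex brush work across hardware threads might look like this (stand-in math, not 3D Coat's code):

```cpp
// Hypothetical sketch: split per-vertex "brush" work across CPU hardware threads.
#include <algorithm>
#include <cmath>
#include <thread>
#include <vector>

void applyBrushRange(std::vector<float>& h, size_t begin, size_t end, float strength) {
    for (size_t i = begin; i < end; ++i)
        h[i] += strength * std::sin(0.001f * static_cast<float>(i));  // stand-in for real brush math
}

int main() {
    std::vector<float> heights(4000000, 0.0f);                 // pretend displacement buffer
    unsigned nThreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    size_t chunk = heights.size() / nThreads;
    for (unsigned t = 0; t < nThreads; ++t) {
        size_t begin = t * chunk;
        size_t end   = (t + 1 == nThreads) ? heights.size() : begin + chunk;
        pool.emplace_back(applyBrushRange, std::ref(heights), begin, end, 0.5f);
    }
    for (auto& th : pool) th.join();                           // wait for all workers
    return 0;
}
```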

Edited by AbnRanger

  • Advanced Member

This may be a dumb idea, but since the level of coding/innovation now required is orders of magnitude bigger than when 3DCoat started, and likely too big for one man regardless of his level of genius, perhaps a different approach is required to get to the next level.

Maybe another approach could be to do a Kickstarter to raise the funds to give Andrew the development resources/staff and management he needs to tackle issues such as these. Has this sort of thing been considered before? I really don't like it when users speculate about a company's business objectives etc. in forums, but maybe in this day of crowdfunding, this sort of thing could be useful?


  • Advanced Member

Or maybe it's to ditch the bitch; drop CUDA and go OpenCL :D

Then everyone, Nvidia and AMD fans alike, will be happy.

As for raising money, Andrew just has to pull the trigger and release Version 4, and ask the entire body of 3D Coat users to ante up for the new version.

Companies like Autodesk do it year in, year out without offering a whole lot for the upgrade money, and my oh my, are their yearly upgrades ever buggy pieces of shit.

No point even installing them till Service Pack 2 comes out in September.

Edited by L'Ancien Regime

  • Advanced Member

Drop CUDA support for now. Make the program work well on medium-end systems, like ZBrush, Mudbox and Blender do.

It's not a good idea to rely on third-party tech like CUDA. This is especially true if you're a small team or a one-man team like 3DCoat. Heck, it's not even a good idea to support multiple OS platforms for small dev teams. Modo took more than 10 years to support Linux; LightWave hasn't; neither has 3ds Max.

The teams that make good use of CUDA are medium-sized and up, and they are mostly making renderers that do parallel processing, not something as complex as sculpting software.

Edited by geo_n

  • Reputable Contributor

In theory, OpenCL would make sense, but in practice, it's not so easy to just make a wholesale switch...only to find out that AMD's drivers suck in this area. They are concerned about games and games only with regard to GPU compute capability. So, even if they do theoretically offer OpenCL support, it doesn't mean their drivers will be up to par. The list of apps that make use of AMD cards' streaming capabilities is pretty small compared to NVidia's. This is why Octane Render doesn't utilize OpenCL. VRay RT offers it, and has an option for CUDA. I find the CUDA option much faster than OpenCL.

Nevertheless, after the V4 launch, I hope business is good enough that Andrew could contract a GPU programming specialist to take the ball and run with it: use the GPU wherever it can possibly outperform CPU multi-threading in the app. It could be that the architecture isn't utilizing DirectX or OpenGL fully. I think that is a possibility, as Mudbox is largely GPU dependent, but I haven't heard anything about it utilizing the compute side of the GPU...just the shading and tessellation engine.


  • Reputable Contributor

Drop CUDA support for now. Make the program work well on medium-end systems, like ZBrush, Mudbox and Blender do.

It's not a good idea to rely on third-party tech like CUDA. This is especially true if you're a small team or a one-man team like 3DCoat. Heck, it's not even a good idea to support multiple OS platforms for small dev teams. Modo took more than 10 years to support Linux; LightWave hasn't; neither has 3ds Max.

The teams that make good use of CUDA are medium-sized and up, and they are mostly making renderers that do parallel processing, not something as complex as sculpting software.

I disagree completely. CUDA still makes a difference in the app...just not as much as it would if it were recompiled every few years to take advantage of NVidia's newer development and technology. It has never been recompiled at all, and thus it cannot take advantage of Kepler technologies such as dynamic parallelism and Hyper-Q.

He has already laid the foundation for its usage, and CUDA is a tool specifically for software vendors to use...not just for scientific research labs. And it's the current industry standard for GPU compute tasks. Any task that can use parallel threading can utilize CUDA. That means a ton of work in 3D Coat could be accelerated by the GPU instead of the CPU. Andrew just has to commit to it. CPU multi-threading already does a pretty good job in many cases, so I doubt he's interested in doing so. However, if Andrew set his mind to capturing the performance crown, I have no doubt he could do it.
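
As a rough illustration of how "any task that can use parallel threading" maps onto CUDA, a per-vertex relaxation pass might look something like this hypothetical grid-stride sketch (names and math are invented for illustration, not taken from 3D Coat):

```cpp
// Hypothetical sketch: per-element relaxation with a grid-stride loop,
// so one launch covers a buffer of any size.
#include <cuda_runtime.h>

__global__ void relax(float* pos, const float* target, float amount, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += blockDim.x * gridDim.x) {
        pos[i] += amount * (target[i] - pos[i]);  // nudge each value toward its target
    }
}

void launchRelax(float* dPos, const float* dTarget, float amount, int n) {
    relax<<<256, 256>>>(dPos, dTarget, amount, n);  // pointers must be device memory
}
```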

Edited by AbnRanger

  • Advanced Member

You can disagree, but you're not a developer. It's not as easy as just recompiling.

It's not only about whether it can improve 3DCoat's performance; it's also about how much time and effort it takes versus how much performance you gain for that time spent.

It's also about maintenance: things can break later on, outside your control, because it's third-party tech. That's not good for a small dev team. Please don't compare it with other teams.

Edited by geo_n

  • Contributor

Just a way of saying that AMD got the upper hand because Nvidia has CUDA and didn't find it necessary to develop OpenCL acceleration (the LuxMark case), which is perfectly logical from a business standpoint, IMHO.

And yet another way of saying that, like geo_n, I don't think GPU acceleration is the holy grail, because on each side of the business the manufacturers are doing THEIR OWN thing...

It doesn't matter anyway. Andrew doesn't plan on maintaining CUDA anyway (or he's so stubborn it's mind-boggling). And OpenCL, even though it has been progressing nicely these last few months, is still way behind CUDA (when CUDA is exploited correctly, that is). I don't really think it would be beneficial to switch from one API to another... In any case you're stuck with driver support from each side, and Nvidia engineers are already hard enough to reach; I'm not sure it's a good idea to add AMD engineers to the discussions.

On another note, the app could benefit greatly from CPU threading in a lot of processes that have gone untouched for a few years now, and that would not have any negative impact on anyone if done correctly...

Oh...and...huh....refactoring. YEAH.


  • Advanced Member

CUDA works as an emulated OpenCL, so it's slower.

A year on from its release, LuxMark v1.0 has been widely used as an OpenCL benchmark by AnandTech, Tom's Hardware, VR-Zone and other sites. AMD used LuxMark as one of the 5 GPU computing benchmarks to present the new HD 7970.

LuxMark v2.0 includes SLG2 as its rendering engine, with Metropolis Light Transport, Multiple Importance Sampling, image reconstruction done on the GPU, support for multiple OpenCL platforms (i.e. Nvidia users can use an Intel or AMD CPU device) and many more new features. The new features raise the complexity of the benchmark by nearly an order of magnitude, and it should be able to put some serious stress on the new generation of GPUs. The capability to submit results to a centralised web database looks like the most interesting new feature of LuxMark v2.0: http://www.luxrender.net/luxmark
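
Incidentally, "multiple OpenCL platforms" just means every vendor's runtime shows up side by side on one machine; a minimal, hypothetical host-side listing would look roughly like this:

```cpp
// Sketch: list every OpenCL platform/device on a machine. An NVIDIA GPU,
// an AMD GPU and an Intel CPU runtime can all appear side by side.
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platforms[8];
    cl_uint nPlat = 0;
    clGetPlatformIDs(8, platforms, &nPlat);
    for (cl_uint p = 0; p < nPlat; ++p) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(pname), pname, nullptr);
        cl_device_id devs[8];
        cl_uint nDev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devs, &nDev);
        for (cl_uint d = 0; d < nDev; ++d) {
            char dname[256];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof(dname), dname, nullptr);
            printf("%s : %s\n", pname, dname);  // platform : device
        }
    }
    return 0;
}
```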

Why is Adobe going OpenCL? Because they don't like monopolies.

Edited by L'Ancien Regime

  • Reputable Contributor

Wireframe overlay of high-poly stuff in 3D Coat was always slow as hell, no matter whether I used a dated 9800 GTX or a new GTX 660 Ti 3GB.

It wasn't with my GTX 470. I am thinking that the scaled-back bus width (from 320-bit to 256-bit) is the culprit. Everything else is faster. That was a ridiculous move on NVidia's part. Nevertheless, AMD talks out of both sides of their mouth. They trumpet their streaming capability and yet consistently offer crappy driver support. It's like bragging about how big an engine you have in your car while the transmission is broken down and no attempt is made to get it up and running.

Again, it ain't about monopolies or using a standard everyone can utilize. It's about AMD not giving a flip whether or not your gaming card works with a professional CG app. They put all their development effort toward making sure their drivers work well with games and games alone. The whole bit about Adobe offering some support for OpenCL is not a show of commitment from AMD, but an effort on Adobe's part to kick AMD in the rear and make their shizzle work...likely due to all the push from Adobe's consumers for that support.

OpenCL working with some of Adobe's apps doesn't mean OpenCL support in 3D Coat would work well with AMD cards. The way AMD looks at it, if you buy a gaming card from them, you get gaming support/drivers. If you buy a professional card from them, then you get professional/CG support.


  • Contributor

Man, it's hard to believe the VRAM bus has that much impact here when it comes to wireframe display in 3D Coat. I mean, my old and slow 9800 GTX had a 256-bit VRAM bus and my brand new GTX 660 3GB has a 192-bit one (but notice the technology jump!), and yet they perform almost the same when it comes to wireframe display (meaning: SLOW)?

And AMD pretty much sealed their fate when they stated they'd cease competing with Intel. This is a freakin' disaster for CPU/GPU development. Freakin' disaster. Unless...

Unless that's a part of their big, sneaky plan of dominating the world with ultra fast processors. Heh heh. :D

Competition is good.

No competition - f* bad for all of us.


  • Reputable Contributor

Here is a YouTube video (warning...some colorful language is used) where one of those online tech review channels spells it out a bit. It states that if you are doing mostly CG work, NVidia is the only way to go.


  • Advanced Member

I've been a loyal nVIDIA follower since day one; not anymore. I was having "display driver stopped working" issues in Windows 7 with a GeForce GTX 460 (Fermi) card in a Core 2 Quad Q6600 machine. Out of disgust with nVIDIA for not addressing the issue, I replaced the GTX 460 with a Radeon HD 7850. The HD 7850 works flawlessly and even smokes the GeForce GTX 660 I have in my i7-3770K machine. I won't view AMD as inferior to nVIDIA anymore. Just my opinion, but I think (hope) OpenCL is the future.

Edited by bisenberger

  • Reputable Contributor

That's rather odd. I've had a GTX 470 for about two years now and haven't experienced a single issue with the drivers. In fact, I've had about four Nvidia cards and don't recall a single issue with any of their drivers. That's with Windows Vista all the way through Win 8.

The only thing that chaps my hide with them is all this dang marketing fluff about Kepler, when they knew full well they crippled its compute power, and it blows my mind that they would go backwards and reduce the memory bus width over and over. It was 512-bit back in the GTX 200 series. Now, in the 600 series, it's 256! What is wrong with this picture?

Edited by AbnRanger

  • Advanced Member

I'm having the same problem with my Nvidia 560 Ti with 2 GB of GDDR5 RAM, particularly on startup.

I'm not going to take sides on this. There's a lot to be said for CUDA and NVidia, but there's a lot to be said for OpenCL and AMD Radeon too. And the older drivers that we're mostly speaking of here are not necessarily relevant to the latest breed of GPUs.

The Titan Ultras are about to come out, and they've got 6 GB of GDDR5 RAM and 4,000 cores.

AMD Radeon is coming out with their own GPU to match, with 4,000 cores and 6 GB of GDDR5 RAM.

And the new Quadro K6000 is coming out with up to 24 GB of GDDR5 RAM.

They're all affordable.

Well, the Titan and the AMD 4,000-core GPUs will be around $1,400.00 each. 2013 is a very exciting year for CG.

Edited by L'Ancien Regime

  • Contributor

Nevertheless, after the V4 launch, I hope business is good enough that Andrew could contract a GPU programming specialist to take the ball and run with it.

Andrew did say this in another thread (which is good news):

"Don't worry. I have no any plans in visible future to stop or sell 3dc. We rather expect to expand."

So maybe there will be some new staff and great improvements after Version 4 is out...

Edited by TimmyZDesign

  • Member

It wasn't with my GTX 470. I am thinking that the scaled-back bus width (from 320-bit to 256-bit) is the culprit. Everything else is faster. That was a ridiculous move on NVidia's part. Nevertheless, AMD talks out of both sides of their mouth. They trumpet their streaming capability and yet consistently offer crappy driver support. It's like bragging about how big an engine you have in your car while the transmission is broken down and no attempt is made to get it up and running.

Again, it ain't about monopolies or using a standard everyone can utilize. It's about AMD not giving a flip whether or not your gaming card works with a professional CG app. They put all their development effort toward making sure their drivers work well with games and games alone. The whole bit about Adobe offering some support for OpenCL is not a show of commitment from AMD, but an effort on Adobe's part to kick AMD in the rear and make their shizzle work...likely due to all the push from Adobe's consumers for that support.

OpenCL working with some of Adobe's apps doesn't mean OpenCL support in 3D Coat would work well with AMD cards. The way AMD looks at it, if you buy a gaming card from them, you get gaming support/drivers. If you buy a professional card from them, then you get professional/CG support.

Where are you getting this from? Have you written a line of code in your life? OpenCL is just as powerful as CUDA, and it works on CPU, GPU, FPGA and ASIC hardware. Sadly, CUDA only works on NVIDIA GPUs.

All Adobe software supports OpenCL. It will continue to rise because it is an open standard, much like OpenGL. Everyone was talking down OpenGL when it was introduced. Now look where it is: it's used in every CG/CAD application on this planet.

In fact, the next one starting to rise is OpenRL. Anyway, this whole crap about AMD having bad drivers is a total sham. I have an AMD 5570 and 3D Coat works perfectly with it, in both OpenGL and DirectX.

