
AMD and 3D Coat


AbnRanger


  • Reputable Contributor

...it ain't a good match. Let's just put it that way. If you use an AMD graphics card, you're screwed (3D Coat's GPU acceleration is CUDA only; no OpenCL). If you use an AMD CPU, you're screwed too. Andrew used Intel's Threading Building Blocks (TBB) libraries as the core of the multi-threading in 3D Coat. That's good news if you own an Intel CPU. If, however, you have an AMD CPU (regardless of how well it performs in games and even in rendering apps), brace yourself.
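For context, TBB-style threading looks roughly like the sketch below. This is a hypothetical illustration of my own (not 3D Coat's source code) of a brush pass parallelized over a texture buffer with tbb::parallel_for; the point is that the library decides how the pixel range is split across worker threads.

```cpp
// Hypothetical illustration only -- not 3D Coat's actual source code.
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Brighten every pixel of a single-channel texture; TBB splits the range
// across worker threads automatically.
void apply_brush(std::vector<std::uint8_t>& texture, int strength)
{
    tbb::parallel_for(
        tbb::blocked_range<std::size_t>(0, texture.size()),
        [&](const tbb::blocked_range<std::size_t>& r) {
            // Each worker thread handles its own slice of the texture.
            for (std::size_t i = r.begin(); i != r.end(); ++i)
                texture[i] = static_cast<std::uint8_t>(
                    std::min(255, texture[i] + strength));
        });
}

int main()
{
    std::vector<std::uint8_t> tex(4096 * 4096, 0);  // a 4K single-channel map
    apply_brush(tex, 32);
    return 0;
}
```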

http://www.osnews.com/story/22683/Intel_Forced_to_Remove_quot_Cripple_AMD_quot_Function_from_Compiler_

"However, the Intel CPU dispatcher does not only check which instruction set is supported by the CPU, it also checks the vendor ID string," Fog details, "If the vendor string says 'GenuineIntel' then it uses the optimal code path. If the CPU is not from Intel then, in most cases, it will run the slowest possible version of the code, even if the CPU is fully compatible with a better version."

It turns out that while this is known behaviour, few users of the Intel compiler actually seem to know about it. Intel does not advertise the compiler as being Intel-specific, so the company has no excuse for deliberately crippling performance on non-Intel machines.

"Many software developers think that the compiler is compatible with AMD processors, and in fact it is, but unbeknownst to the programmer it puts in a biased CPU dispatcher that chooses an inferior code path whenever it is running on a non-Intel processor," Fog writes, "If programmers knew this fact they would probably use another compiler. Who wants to sell a piece of software that doesn't work well on AMD processors?"

In fact, Fog points out that even benchmarking programs are affected by this, up to a point where benchmark results can differ greatly depending on how a processor identifies itself. Ars found out that by changing the CPUID of a VIA Nano processor to AuthenticAMD you could increase performance in PCMark 2005's memory subsystem test by 10% - changing it to GenuineIntel yields a 47.4% performance improvement! There's more on that here [print version - the regular one won't load for me].

In other words, this is a very serious problem.
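To make the dispatcher behavior Fog describes concrete, here is a minimal, hypothetical sketch (not Intel's actual compiler code): read the CPUID vendor string and gate the fast path on it, rather than on the feature flags the CPU actually reports.

```cpp
// Hypothetical sketch of a vendor-gated dispatcher -- not Intel's real code.
// Builds with GCC/Clang on x86; MSVC would use __cpuid() from <intrin.h>.
#include <cpuid.h>
#include <cstdio>
#include <cstring>
#include <string>

static std::string cpu_vendor()
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    __get_cpuid(0, &eax, &ebx, &ecx, &edx);   // leaf 0 returns the vendor ID string
    char vendor[13] = {};
    std::memcpy(vendor + 0, &ebx, 4);         // e.g. "Genu" / "Auth"
    std::memcpy(vendor + 4, &edx, 4);         //      "ineI" / "enti"
    std::memcpy(vendor + 8, &ecx, 4);         //      "ntel" / "cAMD"
    return vendor;                            // "GenuineIntel", "AuthenticAMD", ...
}

// Stand-ins for the optimized and fallback code paths.
static void fast_simd_path() { std::puts("optimized path"); }
static void generic_path()   { std::puts("slow fallback path"); }

int main()
{
    // The biased check: the fast path is chosen by vendor string,
    // not by whether the CPU actually supports the needed instructions.
    if (cpu_vendor() == "GenuineIntel")
        fast_simd_path();
    else
        generic_path();
    return 0;
}
```

A fair dispatcher would test the CPUID feature bits (SSE2, SSE3, and so on) instead of the vendor field.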

I bring this up because, as I mentioned elsewhere, I had to use a backup PC recently, which had an i7 950 as the CPU. I tested how well it performed using large brushes on a 4K map, thinking the extra two threads might help. It's almost like night and day. The AMD Phenom X6, running at 4.1 GHz with 1866 MHz memory, bogs down with large texture maps and large brushes. Even after upgrading the video card (thinking that would help, having a much bigger RAM buffer), it made no difference.

So, to summarize...this Intel Library business means you can blow a wad of cash upgrading components, but it won't really matter. 3D Coat runs slower on AMD components and that's the short version of it. That means you have to buy into Intel and Nvidia if you want to get the most out of 3D Coat.


  • Member

So, how do I change my CPUID? Thanks for giving me something to think about.

I've never had an Intel desktop, but this is a good reason to consider Intel for my next machine. It's also the kind of shady anti-competitive behavior I have always expected of Intel, so there's that also.

So even with nVidia/CUDA, AMD systems are sluggish with 3D-Coat? That really surprised me.


  • Reputable Contributor

So, how do I change my CPUID? Thanks for giving me something to think about.

I've never had an Intel desktop, but this is a good reason to consider Intel for my next machine. It's also the kind of shady anti-competitive behavior I have always expected of Intel, so there's that also.

So even with nVidia/CUDA, AMD systems are sluggish with 3D-Coat? That really surprised me.

Not to the point that it's unusable or that you can't get decent performance, but there is a very noticeable difference. I was stunned to see how much better an i7 950 (not even the top model in that line) was able to paint with a large brush. The AMD machine was my main one...until I found this out. I was even thinking about upgrading the CPU to an FX 8350, but I'm afraid it won't matter either, for this very reason. And as much as I'd like to go with an AMD card every once in a while, very few software vendors are writing for OpenCL. It's not as mature as CUDA, and I think there are more tools in the CUDA toolkit to make it easier for developers to code for the GPU.


  • Advanced Member

Actually, I have an AMD CPU and 3DC has no lag or significant performance issues. However, I stick with Nvidia cards. I noticed that you have the same CPU, but your clock speed was higher. Did you overclock it, or buy it overclocked? I never do this, as I've found over the years that overclocking isn't worth it. It tends to push the temperatures way up and can cause problems.

Edited by alvordr

Anyone see this news?

AMD unveils first eight-core 5GHz processor

AMD introduces today its FX-9000 Series, a family of eight-core processors that are said to be the first in the world to have a clock speed of up to 5GHz.

:blink:


  • Reputable Contributor

Actually, I have an AMD CPU and 3DC has no lag or significant performance issues. However, I stick with Nvidia cards. I noticed that you have the same CPU, but your clock speed was higher. Did you overclock it, or buy it overclocked? I never do this, as I've found over the years that overclocking isn't worth it. It tends to push the temperatures way up and can cause problems.

This CPU (Black Edition) has a lot of headroom for overclocking. It can run at 4 GHz comfortably, and I use a good CPU cooler (Zalman CNPS 12X). That isn't the issue. It's an issue of large brush sizes with large texture maps (4K+). The performance bottleneck threshold with the AMD CPU is much lower than with the i7. Even Andrew said he's sure Intel pulls some dirty tricks in this regard.


  • Advanced Member

AbnRanger,

Yeah, I can't speak to it entirely, except that they used Intel CPUs at our school, and in my long experience with IT systems I haven't been impressed with Intel chips. That said, you've got more first-hand knowledge here when it comes to comparing both with 3DC. I have an Intel-based laptop, but the specs are lower. I just haven't really experienced any major lag issues with my desktop.


  • 1 month later...
  • Advanced Member

OSX 3dc builds don't support CUDA. 

Is this what you meant, Andrew?

"Mac does not support CUDA" means something else.

OSX does support CUDA alright. (OSX 10.7 - OSX 10.8.4)

Checked and confirmed with Blender Cycles GPU (CUDA) rendering. Excellent results, as expected.
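For anyone who wants to check CUDA support directly rather than through Blender, a minimal device query against the CUDA runtime (assuming the CUDA toolkit and driver are installed) looks roughly like this:

```cpp
// Minimal CUDA device query -- assumes the CUDA toolkit is installed.
// Build with: nvcc query.cu -o query   (host-only code, no kernels).
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device found (%s)\n",
                    cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("Device %d: %s, compute capability %d.%d\n",
                    i, prop.name, prop.major, prop.minor);
    }
    return 0;
}
```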

BTW

OSX 3DC builds run wonderfully on my machine (dual Xeon, 16 threads, ATI HD 5870, 24 GB RAM). The 5870 is still a decent GPU for OpenGL; NVIDIA cards, not so much.

