3DCoat Forums

What are the specs my computer needs to run 3D Coat?


Raybrite


  • Member

The main spec I was missing was not in the computer at all: it was my own ignorance of the different ways to do things with the program. I am fixing that one now, and things are beginning to work out.

I have a new CPU coming in a few weeks so things will be a lot faster then.

I am not getting it for this program but for another that will be released in December. (iClone6).

I also plan to move up to 2 monitors if possible.

Ken :)


  • Advanced Member

Be aware of one thing:

 

If you get "out of memory" errors, "disappearing" / "invisible" layers, or sudden crashes, download GPU-Z and check whether your VRAM is getting full. That was my problem some time ago... voxel sculpts can consume quite a lot of VRAM if you crank up the layer count and resolution, and I had an old GTX 580 with a measly 1.5 GB of VRAM in my workstation.

 

Since I swapped the graphics card for a GTX 670 in my other system, 3D Coat has been running rock stable. No crashes, no errors, no layer problems! In fact, I am pretty amazed how far I can crank up the poly count with just some FPS dips, as long as I make sure the VRAM usage always stays below the VRAM size.


Get GPU-Z (it will also show you the specs of your video card if you don't know them, like the VRAM size) and keep it open on the Sensors tab while you work in 3D Coat. If the VRAM hits 100% full right before you get your error, you either need to swap your card for one with more VRAM or reduce the resolution / layer count (if you are in fact voxel sculpting).
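
GPU-Z is a GUI tool, so if you would rather check from a script, a minimal Python sketch like the one below gives the same used / total readout. It uses the pynvml bindings for NVIDIA's NVML (pip install nvidia-ml-py), so it assumes an NVIDIA card, which happens to be what everyone in this thread runs:

    import pynvml

    # Talk to the NVIDIA driver and grab the first GPU in the system.
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Same numbers GPU-Z shows on its Sensors tab for memory usage.
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    used_mb = mem.used / (1024 ** 2)
    total_mb = mem.total / (1024 ** 2)
    print("VRAM: %.0f / %.0f MB (%.0f%% full)"
          % (used_mb, total_mb, 100.0 * mem.used / mem.total))

    pynvml.nvmlShutdown()

Run it while 3D Coat is open; if the "% full" figure sits near 100 right before your errors appear, VRAM is your bottleneck.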


On the other hand, I was able to run 3D Coat for smaller / lower-resolution voxel sculpts on my old Samsung Series 7 Slate: Intel iGPU, an old slow mobile i5, about 4 GB of RAM, and 3D Coat still ran stably and quite well as long as I kept my resolution and layer count in check.

So yes, 3D Coat is able to run on pretty slow machines.


  • Member

Thanks for the mention of GPU-Z, Gian. I'm going to download it and give it a shot just to see how much VRAM I am eating in my heavier work. It should be interesting and help me calculate what my needs will be in the future. I never thought of checking VRAM usage. I just upgraded from an old 768 MB card to a new but low-end 2 GB card. I know it definitely helped, but it would be great to see how much is being used before I hit that ceiling, or what it might take to hit that ceiling.


  • Advanced Member


 

Well, my current project is really big: many layers filled with high-resolution objects. I do hit 2 GB of VRAM now that I am working in the last few details in voxels before starting the retopo, so I am pretty happy I upgraded to a new card with 4 GB of VRAM a few days ago :)

 

For most of my sculpts I stay well below 1 GB; it really only becomes a problem when you crank resolution / layer count up to the max your CPU / GPU can still handle.


2 GB of VRAM should get you quite far. Still, give GPU-Z a try. If your VRAM is filling up and you cannot afford a new card, you can always either split the sculpt in two (export layers into their own file) or lower the resolution where you don't need it. Not keeping unneeded layers hanging around forever also helps.

Of course, to do that, you must still be able to open the file. I was once able to restore a file that was just creeping over the memory limit to working condition by reducing the resolution of the main layer. If the file goes really too far, you will get instant crashes. But usually the crash happens way before you can save the file, so problems with saved files are mostly borderline cases.
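
Since the crash usually comes before you get a chance to save, a small watchdog that nags you while there is still headroom can buy you time. Here is a rough sketch in the same pynvml vein as above; the 90% threshold and the 5-second poll are arbitrary numbers I picked, not anything 3D Coat prescribes:

    import time
    import pynvml

    THRESHOLD = 0.90  # warn at 90% VRAM usage (arbitrary choice)
    INTERVAL = 5      # seconds between polls (arbitrary choice)

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    try:
        while True:
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            if mem.used / mem.total >= THRESHOLD:
                # Time to save the scene and reduce resolution / layers.
                print("WARNING: VRAM at %.0f%% -- save now!"
                      % (100.0 * mem.used / mem.total))
            time.sleep(INTERVAL)
    finally:
        pynvml.nvmlShutdown()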

Edited by Gian-Reto

  • Member

Very cool, used the program on my work file here:

[attached image: 30MilTrisCountFinalStatue_Hell_Rangers_R]

 

30.5 million tris, fully painted (i.e. direct paint on the high-poly mesh) with lots of paint layers:

 

GTX 750 Ti with 2 GB VRAM in an old 2.66 GHz Core 2 Duo machine with 8 GB RAM.

 

Sculpt Room
Memory Used: 1349 MB of VRAM
GPU Load: 99% when moving the model around
Memory Controller Load: 51%

Render Room
Memory Used: 1489 MB of VRAM
GPU Load: 99% when moving the model around
Memory Controller Load: 51%

 

Power consumption peaked at about 51%.
CPU temp never exceeded 42 °C.
Fan speed never went above 29%.

 

When I turned the base of the statue off, reducing the model to 21.3 million tris, GPU load never went above 74%, mostly bouncing between 36% and 66%.

Memory controller load at 21 million tris never exceeded 38% and averaged in the mid-20s.

 

The longer 3D Coat stayed open, the more VRAM consumption crept up; it eventually peaked at 1719 MB, almost topping out the card.
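
For anyone who wants to collect numbers like these without watching GPU-Z for a whole session, here is a rough Python sketch (pynvml again, NVIDIA cards only) that tracks the same three sensors and prints the peaks when you stop it with Ctrl-C:

    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    peak_vram_mb = peak_gpu = peak_memctl = 0
    try:
        while True:
            mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
            # .gpu and .memory are GPU load and memory controller load in %.
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            peak_vram_mb = max(peak_vram_mb, mem.used / (1024 ** 2))
            peak_gpu = max(peak_gpu, util.gpu)
            peak_memctl = max(peak_memctl, util.memory)
            time.sleep(1)
    except KeyboardInterrupt:
        print("Peak VRAM: %.0f MB" % peak_vram_mb)
        print("Peak GPU load: %d%%" % peak_gpu)
        print("Peak memory controller load: %d%%" % peak_memctl)
    finally:
        pynvml.nvmlShutdown()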

 

Pretty cool, I've never tested my work this way before.

Good to know the processor isn't bottlenecking the video card, so to speak.

However, the processor definitely lags when switching rooms, painting heavier sculpt strokes, and undoing. But it is not unbearable by any means.

Pretty amazed I can run such a heavy model on such a low-end PC (a mid-range PC I bought 7 years ago). A clean program, to say the least (3D Coat). Interesting data for me concerning what I do, what I need, etc. Very cool to know. Definitely stuff you don't find in your average video card reviews...

Edited by Rkhane

  • Advanced Member

Yeah, I have never really experienced any problem with 3D Coat that was not GPU related. It is a rock-stable application as long as the GPU and VRAM can keep up.

 

From what you posted above, it looks like you shouldn't see any problems yet. Your peak VRAM usage just tells you that you shouldn't increase your layer count or resolution much more, or 3D Coat will blow up.

Lucky for you, VRAM sizes keep going up thanks to the push toward higher screen resolutions, so your next card will most probably have much more VRAM (seeing that your current one is still fairly new).


Don't worry too much about GPU load... mine is at 99% too in big 3B files, and I have tested with a GTX 580, a GTX 670, and now a GTX 970. 3D Coat will just grab all the GPU resources it can.

If I had to guess, I would say it is quite understandable: as soon as you use a CUDA version, the GPU has to work a double shift, both rendering the scene in the viewport and doing its CUDA magic. And of course, 30M polygons will not render fast on a GTX 750...


One last thing to note: if you are using the OpenGL version, you might want to check out the DX version. I used the GL version for a long time because the text after installation vaguely states "...use the GL version for faster cards...".

What I did not know is that the "faster cards" meant here are professional cards like Quadros. When I tested the file I was having problems with in GPU-Z at the time, and then switched to DX after other users here urged me to do so, I saw VRAM usage go down by about 10-20% and the whole scene gain FPS.

 

So give DX or DX CUDA a shot if you are still using the GL version. It might further improve your performance.


  • Member

Hmmm... I don't mind the rendering speed at 1920x1080 in the Render room; it only takes a couple of seconds to render and look nice at 30 million polys with Realtime Render turned on. However, when I render out a large file for print and save it to a folder as BMP/PNG, a 9000x5000 image takes maybe 6-7 minutes to render.

 

If I am not mistaken, though, rendering to file (not for preview) is largely processor based, correct? When my last video card died just over a month ago, I was forced to use my old 512 MB ASUS card, like really old, about 9 years old. While it made looking at anything in the Render room with Realtime Render turned on hugely slow, it really didn't seem to have an impact on rendering a 3D turnaround to BMPs versus my dead card, which was much faster and had CUDA. The old ASUS card choked a bit on the first few renders, because the program initially tries to display them while rendering out to file, but then it just steamed along and rendered the BMPs for a turnaround at pretty much the previous speeds.

 

For DX, I'll just post something from another thread with ABNRanger:

 

I'm using a GTX 580 3GB, and it works flawlessly in Windows CUDA DX 64. Every time I tried using the GL version, it was noticeably slower navigating about the scene.

 

(My reply) That's bizarre; I have the opposite experience. I'm on Win 7 Pro 64-bit, Service Pack 1, DirectX 11 installed, a 2.66 GHz Core 2 Duo, 8 GB of system RAM, and a GTX 750 Ti 2 GB. (I was using an EVGA GTX 460 768 MB version, but it died. Amusingly, the 750 Ti is about 3x faster at displaying renders, especially with large poly counts of 20 million+, despite having a narrower memory bus, 128-bit vs. 192-bit; I think the larger RAM amount made the difference.) As you said, but with DirectX: it is easily, noticeably slower when moving the camera around while working. It just drags in DX but is substantially faster in OGL. I just checked it again with a 28-million-triangle model: the GL version displays way more smoothly and flies when moving the camera around. The DX version displays okay, but the camera has very noticeable lag and stutters when moving around. I would say the GL version is twice as smooth/fast for me. I can't figure that out, but eh, it's great to have the options!

 

It is kind of trippy that everyone else gets better performance with DX than me; it makes me wonder what's up on my side of things :)


  • Advanced Member

Maybe it's the new Maxwell architecture of the 750 Ti that favors OpenGL? Maybe they artificially limited the DirectX performance, since it's a low-end card, but left the OpenGL path intact. Will have to check it out on my 970... my 560 definitely performed way better in DirectX.


  • Member

No, both my former (now dead) EVGA 8800 GTS and EVGA GTX 460 behaved the same: GL responded best for me. Makes me think it's my system config or something... At first I thought maybe DX wasn't up to date, but that's not it either. I'm happy, though, no complaints here; it runs great in GL, just odd that it does better! :)

