3DCoat Forums

3DCoat 2025 Development



  • Advanced Member
49 minutes ago, Mihu83 said:

Ok, I'm working on a pretty heavy model inside 3DCoat (90+ million tris, multiple meshes) and the performance is pretty weak; the whole UI and viewport get choppy/laggy...

I have a Ryzen 9 5900X, 128GB of RAM, an RTX 4070 12GB, and NVMe drives, so it's not a weak machine. 3DC utilizes a fraction of the CPU, maybe 30GB of RAM, and around 40% of the GPU, and the performance drops significantly. FPS is around 20-24 max... Also, I think it's worse on Windows 11 than it was on Windows 10.

 

I've noticed the same. I haven't looked into specifics, but 2025.12 vs 4.9.6 has a large performance drop. I was hoping for a cut-down version that doesn't contain the hybrid modelling, renderer, nodes, etc. ... just the sculpting and retopo side.



  • Advanced Member
51 minutes ago, Elemeno said:

I've noticed the same. I haven't looked into specifics, but 2025.12 vs 4.9.6 has a large performance drop. I was hoping for a cut-down version that doesn't contain the hybrid modelling, renderer, nodes, etc. ... just the sculpting and retopo side.

I'm wondering if that has something to do with Incremental rendering, at least partially. In old versions it was ON, but it got broken at some point and needs to be turned off if you want a smooth sculpting experience.

Anyway, no matter what the cause is, 3DC needs some work in the performance department, ASAP.

Edited by Mihu83
Edited by Mihu83

  • Advanced Member
2 hours ago, Mihu83 said:

I'm wondering if that has something to do with Incremental rendering, at least partially. In old versions it was ON, but it got broken at some point and needs to be turned off if you want a smooth sculpting experience.

Anyway, no matter what the cause is, 3DC needs some work in the performance department, ASAP.

I'm not sure who works on the sculpting side, if there actually is someone working on the sculpting side.


  • Reputable Contributor
18 hours ago, Mihu83 said:

Ok, I'm working on a pretty heavy model inside 3DCoat (90+ million tris, multiple meshes) and the performance is pretty weak; the whole UI and viewport get choppy/laggy...

I have a Ryzen 9 5900X, 128GB of RAM, an RTX 4070 12GB, and NVMe drives, so it's not a weak machine. 3DC utilizes a fraction of the CPU, maybe 30GB of RAM, and around 40% of the GPU, and the performance drops significantly. FPS is around 20-24 max... Also, I think it's worse on Windows 11 than it was on Windows 10.

 

Can you open the scene again and look at the PERFORMANCE tab of Windows Task Manager > GPU? What does it indicate when working in the scene? Is it showing heavy utilization (75% or more), and what about the GPU memory utilization... how high is that? I am just trying to help spot the culprit, because I just tested a scene with 180 million tris and the viewport performance is still reasonably good for such a heavy scene. The amount of polygons and textures that can be handled in the viewport mostly depends on the graphics card. I have a Ryzen 9 9950X, 192GB RAM (running @ 5600MHz) and an RTX 3090. It has double the Memory Bus bandwidth (384-bit) and double the VRAM, so it can handle a heavy scene better than a card (no matter which generation) with a small Memory Bus and low levels of VRAM. An RTX 4070 sounds very up to date, but it is not really intended for high-end content creation. The 4070 Ti Super is better suited for that, because it has 16GB of VRAM and a bigger Memory Bus (256-bit vs 192-bit). I had a 4070 Ti Super and it worked very well, but I sold it and bought a used RTX 3090, because the 3090 was almost neck and neck in terms of overall performance but had a much bigger Memory Bus and 6GB more VRAM. I also wanted the extra VRAM for VFX simulations (namely TurbulenceFD in LightWave) and realtime renderers like EEVEE and LightWave's new RiPR. I was afraid that 16GB of VRAM would not be quite enough in some situations.

Bottom line, Memory Bus bandwidth really matters, as does VRAM capacity. For comparison's sake, I was in Blender 4.5 yesterday doing some Applink tests: I took 5 rocks from the Poly Haven asset library > applied a Subdivision Surface modifier to each of them with 4-5 subdivisions each... because I wanted to export them via the Applink to the Sculpt workspace and have 3DCoat bake the color texture onto the vertices (for that, the imported mesh needs to have a pretty high polycount). I could see it bogging my video card down with each little adjustment. I was kind of shocked, because I had heard how awesome Blender has gotten lately, but in some respects it's still not in 3DCoat's or ZBrush's league in terms of handling large polycounts or scenes. I had a 5-million-tri rhino mesh (imported from 3DCoat) that the sculpting brushes worked VERY well on... but I wanted to push it a bit to get a more accurate comparison with 3DCoat. So I added a Multiresolution modifier to it and tried to add one subdivision level, and after thinking about it for a few minutes, it crashed Blender... over and over and over.

So, that adds a little perspective here. With your graphics card, you can still handle a MUCH larger polycount/scene in 3DCoat than in Blender. 
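The bus-width argument above can be made concrete: peak memory bandwidth is roughly the bus width in bytes times the effective memory data rate. A quick sketch (the data rates below are the published GDDR6X figures for these cards, but treat the exact numbers as approximate):

```python
# Rough theoretical GPU memory bandwidth: (bus width in bytes) x (effective data rate in GT/s)
def bandwidth_gbps(bus_width_bits: int, effective_gtps: float) -> float:
    """Approximate peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * effective_gtps

cards = {
    "RTX 4070 (192-bit, ~21 GT/s)": (192, 21.0),
    "RTX 4070 Ti Super (256-bit, ~21 GT/s)": (256, 21.0),
    "RTX 3090 (384-bit, ~19.5 GT/s)": (384, 19.5),
}
for name, (bus, rate) in cards.items():
    print(f"{name}: ~{bandwidth_gbps(bus, rate):.0f} GB/s")  # ~504, ~672, ~936
```

By this estimate the 3090 moves nearly twice the data per second of a 4070, which lines up with the viewport difference described above.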

 


  • Reputable Contributor
15 hours ago, Elemeno said:

I'm not sure who works on the sculpting side, if there actually is someone working on the sculpting side.

Andrew was/is the one working on the Sculpting tools. He said on his Twitter/X account that he is developing part-time due to serving his country in a technical capacity during the ongoing war. He is mainly focusing on bug fixing now, while other developers continue their assigned tasks.


  • Advanced Member
2 hours ago, AbnRanger said:

Can you open the scene again and look at the PERFORMANCE tab of Windows Task Manager > GPU? What does it indicate when working in the scene? Is it showing heavy utilization (75% or more), and what about the GPU memory utilization... how high is that? I am just trying to help spot the culprit, because I just tested a scene with 180 million tris and the viewport performance is still reasonably good for such a heavy scene. The amount of polygons and textures that can be handled in the viewport mostly depends on the graphics card. I have a Ryzen 9 9950X, 192GB RAM (running @ 5600MHz) and an RTX 3090. It has double the Memory Bus bandwidth (384-bit) and double the VRAM, so it can handle a heavy scene better than a card (no matter which generation) with a small Memory Bus and low levels of VRAM. An RTX 4070 sounds very up to date, but it is not really intended for high-end content creation. The 4070 Ti Super is better suited for that, because it has 16GB of VRAM and a bigger Memory Bus (256-bit vs 192-bit). I had a 4070 Ti Super and it worked very well, but I sold it and bought a used RTX 3090, because the 3090 was almost neck and neck in terms of overall performance but had a much bigger Memory Bus and 6GB more VRAM. I also wanted the extra VRAM for VFX simulations (namely TurbulenceFD in LightWave) and realtime renderers like EEVEE and LightWave's new RiPR. I was afraid that 16GB of VRAM would not be quite enough in some situations.

Bottom line, Memory Bus bandwidth really matters, as does VRAM capacity. For comparison's sake, I was in Blender 4.5 yesterday doing some Applink tests: I took 5 rocks from the Poly Haven asset library > applied a Subdivision Surface modifier to each of them with 4-5 subdivisions each... because I wanted to export them via the Applink to the Sculpt workspace and have 3DCoat bake the color texture onto the vertices (for that, the imported mesh needs to have a pretty high polycount). I could see it bogging my video card down with each little adjustment. I was kind of shocked, because I had heard how awesome Blender has gotten lately, but in some respects it's still not in 3DCoat's or ZBrush's league in terms of handling large polycounts or scenes. I had a 5-million-tri rhino mesh (imported from 3DCoat) that the sculpting brushes worked VERY well on... but I wanted to push it a bit to get a more accurate comparison with 3DCoat. So I added a Multiresolution modifier to it and tried to add one subdivision level, and after thinking about it for a few minutes, it crashed Blender... over and over and over.

So, that adds a little perspective here. With your graphics card, you can still handle a MUCH larger polycount/scene in 3DCoat than in Blender. 

 

I've checked the performance before; it shows max 7-7.1 GB VRAM usage (138 million tris on screen), and 3D usage varies from 18 to 40%, and no matter what, it's choppy, with FPS varying from around 20 to 80 (80 is with all objects hidden, vertical sync disabled, GPU set to max performance). After hiding and unhiding all objects, the viewport and overall UI are smoother, but the FPS count doesn't change; it stays at 20-21.

Anyway, I was working with 64GB RAM and a GTX 1080 Ti 8GB for years and it could handle 90+ million without issue. Yes, the 1080 had a higher bandwidth (256-bit), but still.

Also, even the overall startup of 3DC is kinda slow, and one thing I've seen lately when closing 3DC is a PowerShell window that shows for 10-15 seconds after closing the app, plus this "Installing" red sign inside 3DC (lower right corner).

@AbnRanger By the way, how the hell do you have 190+ GB of RAM? What kind of MOBO are you using? Is that something dedicated to Threadripper? I thought it was a bit tricky to even go with 128GB of RAM on AM5, especially at full speed.
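As a sanity check on the ~7 GB reading at 138 million tris, here is a back-of-the-envelope VRAM estimate for a raw indexed triangle mesh. The per-vertex layout (float3 position + float3 normal) and vertex-sharing ratio are assumptions for illustration; 3DCoat's actual internal representation isn't public, so treat this as a rough lower bound for geometry alone:

```python
def mesh_vram_gb(tris_millions: float,
                 bytes_per_vertex: int = 24,  # assumed: float3 position + float3 normal
                 verts_per_tri: float = 0.5,  # assumed: shared vertices in a dense mesh
                 index_bytes: int = 4) -> float:
    """Very rough GPU buffer size for an indexed triangle mesh, in GB."""
    tris = tris_millions * 1e6
    vertex_buf = tris * verts_per_tri * bytes_per_vertex
    index_buf = tris * 3 * index_bytes  # 3 indices per triangle
    return (vertex_buf + index_buf) / 1e9

print(f"138M tris: ~{mesh_vram_gb(138):.1f} GB")  # ~3.3 GB
```

Under those assumptions the raw geometry alone is ~3.3 GB, so ~7 GB with textures, shaders, and driver overhead is plausible, and uncomfortably close to a 12GB card's limit once Windows and other apps take their share.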

Edited by Mihu83

  • Reputable Contributor
4 hours ago, Mihu83 said:

I've checked the performance before; it shows max 7-7.1 GB VRAM usage (138 million tris on screen), and 3D usage varies from 18 to 40%, and no matter what, it's choppy, with FPS varying from around 20 to 80 (80 is with all objects hidden, vertical sync disabled, GPU set to max performance). After hiding and unhiding all objects, the viewport and overall UI are smoother, but the FPS count doesn't change; it stays at 20-21.

Anyway, I was working with 64GB RAM and a GTX 1080 Ti 8GB for years and it could handle 90+ million without issue. Yes, the 1080 had a higher bandwidth (256-bit), but still.

Also, even the overall startup of 3DC is kinda slow, and one thing I've seen lately when closing 3DC is a PowerShell window that shows for 10-15 seconds after closing the app, plus this "Installing" red sign inside 3DC (lower right corner).

@AbnRanger By the way, how the hell do you have 190+ GB of RAM? What kind of MOBO are you using? Is that something dedicated to Threadripper? I thought it was a bit tricky to even go with 128GB of RAM on AM5, especially at full speed.

I still think the 192-bit Memory Bus is your main bottleneck. It's like the highway your memory data travels on. And if your card is close to its max memory limit, it will probably try to use that NVidia shared memory (with system memory) feature, and that will slow things down a lot. Maybe you have another heavy scene, around 100 million tris, and can test it on that too?

I agree with you on scenes seeming to load more slowly now. I said something about this to development, but no one responded.

As for the RAM, I had to search for 48GB sticks, and of course a motherboard that would support them. It's a Gigabyte X670E Aorus Master, and at first I could only run 4 modules at 4800MHz or the system would constantly crash. Recent BIOS updates improved the memory compatibility and somehow enabled faster timings. I don't want to try to push it past 5600MHz, for stability's sake, even though the memory is rated at 6000MHz. For some reason, 4 modules cannot run as fast as 2 modules. I ran just 2 modules for a few months and then put the other 2 in when some of the BIOS updates improved the memory timings.
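The 4800 vs 5600 tradeoff can also be put in numbers: peak dual-channel DDR5 bandwidth is roughly transfer rate x 8 bytes per channel x 2 channels. This ignores timings and latency, which also affect real-world performance, so it's only the headline figure:

```python
def ddr5_bandwidth_gbps(mtps: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    """Peak DDR5 bandwidth in GB/s (ignores latency and timings)."""
    return mtps * 1e6 * bytes_per_transfer * channels / 1e9

for speed in (4800, 5600, 6000):
    print(f"DDR5-{speed} dual-channel: ~{ddr5_bandwidth_gbps(speed):.1f} GB/s")
```

So DDR5-5600 gives roughly 89.6 GB/s vs 76.8 GB/s at 4800: a ~17% peak bandwidth gain from the BIOS updates enabling the faster speed.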


  • Advanced Member
20 hours ago, AbnRanger said:

I still think the 192-bit Memory Bus is your main bottleneck. It's like the highway your memory data travels on. And if your card is close to its max memory limit, it will probably try to use that NVidia shared memory (with system memory) feature, and that will slow things down a lot. Maybe you have another heavy scene, around 100 million tris, and can test it on that too?

I agree with you on scenes seeming to load more slowly now. I said something about this to development, but no one responded.

As for the RAM, I had to search for 48GB sticks, and of course a motherboard that would support them. It's a Gigabyte X670E Aorus Master, and at first I could only run 4 modules at 4800MHz or the system would constantly crash. Recent BIOS updates improved the memory compatibility and somehow enabled faster timings. I don't want to try to push it past 5600MHz, for stability's sake, even though the memory is rated at 6000MHz. For some reason, 4 modules cannot run as fast as 2 modules. I ran just 2 modules for a few months and then put the other 2 in when some of the BIOS updates improved the memory timings.

I'll test it, but you might be right. Also, 12GB of VRAM could be quite limiting too. I think I'll swap that GPU for a 5070-series card with 16GB and a 256-bit bus. I wish I could go for a 24GB GPU, but that's out of my price range right now, and I don't want to go for a used 3090.

 

Oh, I wasn't aware there are 48GB RAM sticks. I'm still on a DDR4 platform and probably won't jump on the DDR5 bandwagon soon.


  • Advanced Member

[attached screenshot]

That stuff hasn't worked properly from 2025.12 to this day, 2025.15.


5 hours ago, tcwik said:

That stuff hasn't worked properly from 2025.12 to this day, 2025.15.

Is this a known issue? Are the developers aware of it? I haven't seen any issues with these settings.


19 hours ago, tcwik said:

[attached screenshot]

That stuff hasn't worked properly from 2025.12 to this day, 2025.15.

Can you explain what is not working, and the steps to replicate it?
Thanks


  • Advanced Member

Nothing works like before; shortcuts do nothing; all those previous types of surfaces seem like they won't change anything; maybe some big changes were made; perhaps I need to reinstall all the software again :/..


Yes, please. We don't have any other reports but yours.


  • Advanced Member

Hey, I have a strange issue in the latest builds (definitely in 2025.12 and 2025.15): when I create a new folder in Alphas, Smart Materials, and so on, the folder name is 0 and I can't rename it. Is that a bug, or does it have something to do with the number of already existing folders (is there any limitation or something)?


  • Advanced Member
On 10/29/2025 at 5:10 AM, animk said:

I don't use Windows, but I can confirm:
2025 Linux has much poorer performance than 4.9.72 Linux at 16 million tris in Surface mode.
The 2025 Windows version running in Wine has about the same (or slightly better) smooth performance as 4.9.72 Linux at 16 million tris.

My PC: i9 13900K, RTX 4090

I downloaded the latest Linux version, 2025.15; the performance is a lot better than the previous 2025.01. I can go back to the Linux version now.

