3DCoat Forums

3DCoat 2025 Development


AndrewShpagin

Recommended Posts

  • Advanced Member
49 minutes ago, Mihu83 said:

Ok, I'm working on a pretty heavy model inside 3DCoat (90+ million polys, multiple meshes) and the performance is pretty weak; the whole UI and viewport get choppy/laggy...

I have a Ryzen 9 5900X, 128GB of RAM, an RTX 4070 12GB and NVMe drives, so it's not a weak machine. 3DC utilizes a fraction of the CPU, maybe 30GB of RAM and around 40% of the GPU, and the performance drops significantly. FPS is around 20-24 max... Also, I think it's worse on Windows 11 than it was on Windows 10.

 

I've noticed the same. I haven't looked into specifics, but 2025.12 vs 4.9.6 has a large performance drop. I was hoping for a cut-down version that doesn't contain the hybrid modelling, renderer, nodes etc., just the sculpting and retopo side.

Link to comment
Share on other sites


  • Advanced Member
51 minutes ago, Elemeno said:

I've noticed the same. I haven't looked into specifics, but 2025.12 vs 4.9.6 has a large performance drop. I was hoping for a cut-down version that doesn't contain the hybrid modelling, renderer, nodes etc., just the sculpting and retopo side.

I'm wondering if that has something to do with Incremental rendering, at least partially. In old versions it was ON, but it got screwed up at some point and needs to be turned off if you want a smooth sculpting experience.

Anyway, no matter what the cause is, 3DC needs some work in the performance department, ASAP.

Edited by Mihu83

  • Advanced Member
2 hours ago, Mihu83 said:

I'm wondering if that has something to do with Incremental rendering, at least partially. In old versions it was ON, but it got screwed up at some point and needs to be turned off if you want a smooth sculpting experience.

Anyway, no matter what the cause is, 3DC needs some work in the performance department, ASAP.

I'm not sure who works on the sculpting side, if there actually is someone working on the sculpting side.


  • Reputable Contributor
18 hours ago, Mihu83 said:

Ok, I'm working on a pretty heavy model inside 3DCoat (90+ million polys, multiple meshes) and the performance is pretty weak; the whole UI and viewport get choppy/laggy...

I have a Ryzen 9 5900X, 128GB of RAM, an RTX 4070 12GB and NVMe drives, so it's not a weak machine. 3DC utilizes a fraction of the CPU, maybe 30GB of RAM and around 40% of the GPU, and the performance drops significantly. FPS is around 20-24 max... Also, I think it's worse on Windows 11 than it was on Windows 10.

 

Can you open the scene again and look at the PERFORMANCE tab of Windows Task Manager > GPU? What does it indicate when working in the scene? Is it showing heavy utilization (75% or more), and what about the GPU memory utilization... how high is that? I am just trying to help spot the culprit, because I just tested a scene with 180 million tris and the viewport performance is still reasonably good for such a heavy scene. The amount of polygons and textures that can be handled in the viewport mostly depends on the graphics card.

I have a Ryzen 9 9950X, 192GB RAM (running @ 5600MHz) and an RTX 3090. It has double the memory bus width (384-bit) and double the VRAM, so it can handle a heavy scene better than a card (no matter which generation) with a small memory bus and low levels of VRAM. An RTX 4070 sounds very up to date, but its intent is not really high-end content creation. The 4070 Ti Super is better suited for that because it has 16GB of VRAM and a bigger memory bus (256-bit vs 192-bit). I had a 4070 Ti Super and it worked very well, but I sold it and bought a used RTX 3090, because the 3090 was almost neck and neck in terms of overall performance but had a much bigger memory bus and 6GB more VRAM. I also wanted the extra VRAM for VFX simulations (namely Turbulence FD in LightWave) and realtime renderers like EEVEE and LightWave's new RiPR. I was afraid that 16GB of VRAM would not be quite enough in some situations.

Bottom line, memory bus bandwidth really matters, as does VRAM capacity. For comparison's sake, I was in Blender 4.5 yesterday doing some Applink tests. I took 5 rocks from the Poly Haven asset library and applied a Subdivision Surface modifier to each of them with 4-5 subdivisions, because I wanted to export them via the Applink to the Sculpt workspace and have 3DCoat bake the color texture onto the vertices (for that, the imported mesh needs a pretty high polycount). I could see it bogging my video card down with each little adjustment. I was kind of shocked, because I had heard how awesome Blender has gotten lately, but in some respects it's still not in 3DCoat's or ZBrush's league in terms of handling large polycounts or scenes. I had a 5-million-tri Rhino mesh (imported from 3DCoat) that the sculpting brushes worked VERY well on... but I wanted to push it a bit to get a more accurate comparison with 3DCoat. So I added a Multiresolution modifier to it and tried to add one subdivision level, and after thinking about it for a few minutes, it crashed Blender... over and over and over.

So, that adds a little perspective here. With your graphics card, you can still handle a MUCH larger polycount/scene in 3DCoat than in Blender. 
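To put rough numbers on the memory-bus point above: theoretical VRAM bandwidth is just the bus width divided by 8, times the per-pin data rate. A quick sketch (the per-pin rates below are the published GDDR6X figures for these cards; treat the exact values as approximate):

```python
def vram_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Theoretical VRAM bandwidth in GB/s: (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * gbps_per_pin

# RTX 4070: 192-bit bus at ~21 Gbps per pin
print(vram_bandwidth_gbs(192, 21.0))   # 504.0 GB/s
# RTX 3090: 384-bit bus at ~19.5 Gbps per pin
print(vram_bandwidth_gbs(384, 19.5))   # 936.0 GB/s
```

So the 3090 can move data to and from VRAM nearly twice as fast, which is the kind of gap that shows up in viewport performance on very heavy scenes.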

 


  • Reputable Contributor
15 hours ago, Elemeno said:

I'm not sure who works on the sculpting side, if there actually is someone working on the sculpting side.

Andrew was/is the one working on the Sculpting tools. He said on his Twitter/X account that he is developing part-time due to serving his country in a technical capacity during the ongoing war. He is mainly focusing on bug fixing now, while other developers continue their assigned tasks.


  • Advanced Member
2 hours ago, AbnRanger said:

Can you open the scene again and look at the PERFORMANCE tab of Windows Task Manager > GPU? What does it indicate when working in the scene? Is it showing heavy utilization (75% or more), and what about the GPU memory utilization... how high is that? I am just trying to help spot the culprit, because I just tested a scene with 180 million tris and the viewport performance is still reasonably good for such a heavy scene. The amount of polygons and textures that can be handled in the viewport mostly depends on the graphics card.

I have a Ryzen 9 9950X, 192GB RAM (running @ 5600MHz) and an RTX 3090. It has double the memory bus width (384-bit) and double the VRAM, so it can handle a heavy scene better than a card (no matter which generation) with a small memory bus and low levels of VRAM. An RTX 4070 sounds very up to date, but its intent is not really high-end content creation. The 4070 Ti Super is better suited for that because it has 16GB of VRAM and a bigger memory bus (256-bit vs 192-bit). I had a 4070 Ti Super and it worked very well, but I sold it and bought a used RTX 3090, because the 3090 was almost neck and neck in terms of overall performance but had a much bigger memory bus and 6GB more VRAM. I also wanted the extra VRAM for VFX simulations (namely Turbulence FD in LightWave) and realtime renderers like EEVEE and LightWave's new RiPR. I was afraid that 16GB of VRAM would not be quite enough in some situations.

Bottom line, memory bus bandwidth really matters, as does VRAM capacity. For comparison's sake, I was in Blender 4.5 yesterday doing some Applink tests. I took 5 rocks from the Poly Haven asset library and applied a Subdivision Surface modifier to each of them with 4-5 subdivisions, because I wanted to export them via the Applink to the Sculpt workspace and have 3DCoat bake the color texture onto the vertices (for that, the imported mesh needs a pretty high polycount). I could see it bogging my video card down with each little adjustment. I was kind of shocked, because I had heard how awesome Blender has gotten lately, but in some respects it's still not in 3DCoat's or ZBrush's league in terms of handling large polycounts or scenes. I had a 5-million-tri Rhino mesh (imported from 3DCoat) that the sculpting brushes worked VERY well on... but I wanted to push it a bit to get a more accurate comparison with 3DCoat. So I added a Multiresolution modifier to it and tried to add one subdivision level, and after thinking about it for a few minutes, it crashed Blender... over and over and over.

So, that adds a little perspective here. With your graphics card, you can still handle a MUCH larger polycount/scene in 3DCoat than in Blender. 

 

I've checked the performance before; it shows max 7-7.1 GB VRAM usage (138 million tris on screen) and 3D usage varies from 18 to 40%, and no matter what, it's choppy, with FPS varying from around 20 to 80 (80 is with all objects hidden, vertical sync disabled, GPU set to max performance). After hiding and unhiding all objects, the viewport and overall UI are smoother, but the FPS count doesn't change; it stays at 20-21.

Anyway, I was working with 64GB RAM and a GTX 1080 Ti 8GB for years and it could handle 90+ million without issue. Yes, the 1080 had a higher bandwidth (256-bit), but still.

Also, even the overall startup of 3DC is kinda slow, and one thing I see lately when closing 3DC is a PowerShell window that shows for 10-15 seconds after closing the app, plus this "Installing" red sign inside 3DC (lower right corner).

@AbnRanger By the way, how the hell do you have 190+ GB of RAM, what kind of mobo are you using? Is that something dedicated for Threadripper? I thought it was a bit tricky to go even with 128GB of RAM on AM5, especially at full speed.

Edited by Mihu83
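As a rough sanity check on those figures: 138 million triangles in ~7.1 GB of VRAM works out to about 51 bytes per triangle, so the mesh itself appears to fit in VRAM with room to spare. A back-of-envelope sketch (the comparison layout at the bottom is an illustrative assumption, not 3DCoat's actual internal format):

```python
def bytes_per_triangle(vram_bytes: float, triangles: int) -> float:
    """Average VRAM cost per triangle, given total usage and triangle count."""
    return vram_bytes / triangles

# Reported numbers above: ~7.1 GB VRAM with 138 million triangles on screen.
print(round(bytes_per_triangle(7.1e9, 138_000_000), 1))  # 51.4

# For comparison, a plain indexed mesh (float3 position + float3 normal, with
# roughly 0.5 shared vertices per triangle on a closed mesh, plus 3 * 4-byte
# indices) costs around 24 bytes per triangle; the remainder presumably goes
# to color layers, render buffers and other per-vertex data.
```

Since usage sits well under the card's 12 GB, the choppiness is unlikely to be simple VRAM exhaustion.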

  • Reputable Contributor
4 hours ago, Mihu83 said:

I've checked the performance before; it shows max 7-7.1 GB VRAM usage (138 million tris on screen) and 3D usage varies from 18 to 40%, and no matter what, it's choppy, with FPS varying from around 20 to 80 (80 is with all objects hidden, vertical sync disabled, GPU set to max performance). After hiding and unhiding all objects, the viewport and overall UI are smoother, but the FPS count doesn't change; it stays at 20-21.

Anyway, I was working with 64GB RAM and a GTX 1080 Ti 8GB for years and it could handle 90+ million without issue. Yes, the 1080 had a higher bandwidth (256-bit), but still.

Also, even the overall startup of 3DC is kinda slow, and one thing I see lately when closing 3DC is a PowerShell window that shows for 10-15 seconds after closing the app, plus this "Installing" red sign inside 3DC (lower right corner).

@AbnRanger By the way, how the hell do you have 190+ GB of RAM, what kind of mobo are you using? Is that something dedicated for Threadripper? I thought it was a bit tricky to go even with 128GB of RAM on AM5, especially at full speed.

I still think the 192-bit memory bus is your main bottleneck. It's like the highway your memory data travels on. And if your card is close to its max memory limit, it will probably try to use that NVIDIA shared-memory (with system memory) feature, and that will slow things down a lot. Maybe you have another heavy scene around 100 million tris and can test it on that, too?

I agree with you on scenes seeming to load more slowly, now. I said something about this to development, but no one responded.

As for the RAM, I had to search for 48GB sticks and, of course, a motherboard that would support them. It's a Gigabyte X670E Aorus Master, and at first I could only run 4 modules at 4800MHz or the system would constantly crash. Recent BIOS updates improved the memory compatibility and somehow enabled faster timings. I don't want to try to push it past 5600MHz, for stability's sake, even though the memory is rated at 6000MHz. For some reason, 4 modules cannot run as fast as 2 modules. I ran just 2 modules for a few months and then put the other 2 in when some of the BIOS updates improved the memory timings.
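For context on the 4800 vs 5600 question: peak dual-channel DDR5 bandwidth is the transfer rate times 8 bytes per channel, so the difference is real but modest. A quick sketch:

```python
def ddr_bandwidth_gbs(mega_transfers: int, channels: int = 2) -> float:
    """Peak bandwidth in GB/s: MT/s * 8 bytes per transfer * channel count."""
    return mega_transfers * 8 * channels / 1000

print(ddr_bandwidth_gbs(4800))  # 76.8 GB/s
print(ddr_bandwidth_gbs(5600))  # 89.6 GB/s, roughly 17% more
```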


  • Advanced Member
20 hours ago, AbnRanger said:

I still think the 192-bit memory bus is your main bottleneck. It's like the highway your memory data travels on. And if your card is close to its max memory limit, it will probably try to use that NVIDIA shared-memory (with system memory) feature, and that will slow things down a lot. Maybe you have another heavy scene around 100 million tris and can test it on that, too?

I agree with you on scenes seeming to load more slowly, now. I said something about this to development, but no one responded.

As for the RAM, I had to search for 48GB sticks and, of course, a motherboard that would support them. It's a Gigabyte X670E Aorus Master, and at first I could only run 4 modules at 4800MHz or the system would constantly crash. Recent BIOS updates improved the memory compatibility and somehow enabled faster timings. I don't want to try to push it past 5600MHz, for stability's sake, even though the memory is rated at 6000MHz. For some reason, 4 modules cannot run as fast as 2 modules. I ran just 2 modules for a few months and then put the other 2 in when some of the BIOS updates improved the memory timings.

I'll test it, but you might be right. Also, 12GB of VRAM could be quite limiting too. I think I'll change that GPU for a 5070-series card with 16GB and a 256-bit bus. I wish I could go for a 24GB GPU, but that's out of my price range right now and I don't want to go for a used 3090.

 

Oh, I wasn't aware there are 48GB RAM sticks. I'm still on the DDR4 platform and probably won't jump on the DDR5 bandwagon soon.


5 hours ago, tcwik said:

That stuff hasn't worked properly from 2025.12 to this day, 2025.15.

Is this a known issue? Are the developers aware of it? I haven't seen any issues with these settings.


19 hours ago, tcwik said:

[screenshot attached]

That stuff hasn't worked properly from 2025.12 to this day, 2025.15.

Can you explain what is not working and the steps to replicate it?
Thanks.


  • Advanced Member

Nothing works like before: shortcuts do nothing, and all those previous surface types seem like they won't change anything. Maybe some big changes were made; perhaps I need to reinstall the software again :/


Yes, please. We don't have any reports other than yours.


  • Advanced Member

Hey, I have a strange issue in the latest builds (definitely in 2025.12 and 2025.15): when I create a new folder in Alphas, Smart Materials and so on, the folder name is 0 and I can't rename it. Is that a bug, or does it have something to do with the number of already existing folders (is there any limitation or something)?


  • Advanced Member
On 10/29/2025 at 5:10 AM, animk said:

I don't use Windows, but I can confirm:
2025 Linux has much poorer performance than 4.9.72 Linux at 16 mil tris in Surface mode.
The 2025 Windows version running in Wine has about the same (or slightly better) smooth performance as 4.9.72 Linux at 16 mil tris.

My PC: i9-13900K, RTX 4090.

I downloaded the latest Linux version, 2025.15; the performance is a lot better than the previous 2025.01. I can go back to the Linux version now.


Sorry, but did you press RMB before brushing, to pick up the normal position?


  • Contributor

2025.15 Hello! A bit late, but I wanted to say thanks for repairing the Smart Retopo tool, @Gorbatovsky and maybe others involved. It works snappily now, glitch-free, snapping in the right places. Very nice. Also I want to thank @carrots for the huge Python updates and for providing some information on how to use and find the new modules. :friends:

Edited by Ctc_nick

  • Advanced Member

Tried version 2025.16 and the problem with folders is still there: I can't rename a created folder, it's still named 0. New folders work correctly otherwise, though, so I'll move alphas to the new folder, I guess.

Oh, it was mentioned in another thread, but I'll bring it up here: the "Don't use sticky keys" option in Preferences doesn't work; 3DC uses sticky keys anyway.

Edited by Mihu83

  • Contributor

Is there a way to disable the boolean when using tools that append new meshes, like Spline/Import? Fairly simple test cases fail to boolean when using repeated shapes.

[screenshot attached]


  • Reputable Contributor
7 hours ago, wendallhitherd said:

Is there a way to disable the boolean when using tools that append new meshes, like Spline/Import? Fairly simple test cases fail to boolean when using repeated shapes.

[screenshot attached]

Surface-mode Booleans are less reliable than Voxel-mode Booleans because of self-intersection issues and such. Try switching your Sculpt Tree layer to Voxel mode first. If you need to switch it back to Surface mode afterward, you can.


  • Member

In 3DCoat 2025.15 on an M1 Pro, when exporting a mesh and choosing the filename/location to save it, sometimes the regular, useless Finder dialog opens up, and other times the select-path version of Finder. I am really tired of the bugs in 3DCoat for Mac. Please, do you even test your software in QA? I have rarely seen an app in this state; I am disappointed. Please bring the 2025.15+ fixes to the macOS versions.

 

I just want to use the software and not report bugs.

Can I download the 2024 version or something more stable?


  • Contributor
16 hours ago, AbnRanger said:

Surface-mode Booleans are less reliable than Voxel-mode Booleans because of self-intersection issues and such. Try switching your Sculpt Tree layer to Voxel mode first. If you need to switch it back to Surface mode afterward, you can.

Hey there! The reason I'd want to keep it in Surface mode is to preserve the topology: as soon as something becomes voxels, I have to retopo it before it can be used. As such, I try to keep things in Surface mode as much as possible unless I'm sculpting details or already have a retopo ready. If there were a way to control whether or not the new meshes get booleaned when applying, it would allow me to preserve the mesh topo in the sculpt.


  • Advanced Member
1 hour ago, wendallhitherd said:

Hey there! The reason I'd want to keep it in Surface mode is to preserve the topology: as soon as something becomes voxels, I have to retopo it before it can be used. As such, I try to keep things in Surface mode as much as possible unless I'm sculpting details or already have a retopo ready. If there were a way to control whether or not the new meshes get booleaned when applying, it would allow me to preserve the mesh topo in the sculpt.

It doesn't matter if you keep it in Surface mode or Voxel mode; it's still going to be a triangulated mesh, and you will need to do retopo, or bring it into ZBrush, duplicate the subtool, ZRemesh it and project details, or something similar.

By the way, the Curve tool has been screwed up since... well, it's been messed up for such a long time, I don't even remember.

Edited by Mihu83

  • Advanced Member
9 hours ago, wendallhitherd said:

Hey there! The reason I'd want to keep it in Surface mode is to preserve the topology: as soon as something becomes voxels, I have to retopo it before it can be used. As such, I try to keep things in Surface mode as much as possible unless I'm sculpting details or already have a retopo ready. If there were a way to control whether or not the new meshes get booleaned when applying, it would allow me to preserve the mesh topo in the sculpt.

3DCoat works differently to others. Voxels give you complete freedom to create, so I use 3DCoat a lot on the concepting side, and I then have to use other software for polishing. So when in 3DCoat, just enjoy the freedom and create :D


  • Contributor
On 12/19/2025 at 5:03 PM, Mihu83 said:

Tried version 2025.16 and the problem with folders is still there - can't rename created folder, it's still named 0. New folders correctly, though, so I'll move alphas to the new folder, I guess.

Oh, it was mentioned in another thread, but I'll bring it here - Don't use sticky keys option in Preferences doesn't work, 3DC is using Sticky keys anyway.

@Dmitriy Nos 2025.17 Hi! The same happens on creating an export preset: when you create an export preset, the name in the list and in the file becomes "0". I have noticed something similarly strange in an exported FBX: the fallback path of the FBX's textures sometimes points to C:\Program Files\3dcoat, the install folder. So that makes me think: if 3DCoat tries to write to the write-protected folders instead of Documents, the system denies it and the name becomes 0. Just a guess. Greetings!
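The write-protection guess is easy to probe from outside 3DCoat. A minimal sketch that actually attempts a write rather than trusting `os.access` (the install path mentioned below is just an example):

```python
import tempfile

def is_writable(folder: str) -> bool:
    """Return True if we can actually create a file in the folder."""
    try:
        # Creating and deleting a real temp file avoids false positives
        # that permission-bit checks can give under Windows UAC.
        with tempfile.TemporaryFile(dir=folder):
            return True
    except OSError:
        return False

# e.g. is_writable(r"C:\Program Files\3dcoat") is typically False without
# elevation on Windows, while the user's Documents folder should be True.
print(is_writable(tempfile.gettempdir()))  # True
```

If the install folder comes back non-writable while 3DCoat still tries to save presets there, that would be consistent with the "0" names.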


  • Contributor
12 hours ago, Mihu83 said:

It doesn't matter if you keep it in Surface mode or Voxel mode; it's still going to be a triangulated mesh, and you will need to do retopo, or bring it into ZBrush, duplicate the subtool, ZRemesh it and project details, or something similar.

By the way, the Curve tool has been screwed up since... well, it's been messed up for such a long time, I don't even remember.


If you can preserve the original topo, even if it's triangulated, it's much easier to make the game-res mesh than if you are just using a voxelized mesh, because sharp corners are preserved. I could take my mesh into ZBrush to polygroup -> ZRemesh -> quadmesh for my sub-d high/lowpoly foundation, but at that rate you may as well just do all your detailing in ZB. Small edits to 3DC's sculpt tools like this are probably pretty easily doable and would save a lot of time getting the final mesh out, because you have a foundation instead of starting from scratch with your retopo.
 

4 hours ago, Elemeno said:

3DCoat works differently to others. Voxels give you complete freedom to create, so I use 3DCoat a lot on the concepting side, and I then have to use other software for polishing. So when in 3DCoat, just enjoy the freedom and create :D

Unfortunately for me, I'm not a concept artist, and it comes down to time. Working in voxels requires 100% manual retopo, especially for hard surface. The more surface-based features sculpt mode gets, the easier it is to use 3DC for final work. I definitely think 3DC supporting subdivision-level sculpting is a massive step in the right direction, but all the different "rooms" and data types that are 100% separated from one another hold back a lot of the advantages you get out of ZB, like the typical workflow where you dynamesh-sculpt a hard-surface thing and then polygroup -> cusp smooth -> ZRemesh, then detail with dynamic sub-d + ZModeler to make your production asset, all in the same app you started in. 3DCoat has a lot of these features already; they are just boxed off in their own rooms, so you can't use them together seamlessly.

Imagine if you could use retopo tools in sculpt mode, then add subdivision levels, or use 3DC's quad remesher in sculpt mode directly without having to leave to another room and pull it back. Imagine if 3DC had the equivalent of polygroups or face sets you could use to hide or show parts of your sculpt without having to separate them into other vox layers. (I think this could also be transferred back and forth to voxel space, just like the colors are!)

Anyway, that's a ramble. I just see a lot of potential, and I'm suggesting some baby steps which would make it easier for me personally to stay in 3DC instead of bouncing between apps. For me that's "better surface-mode stuff" and "better booleans in Surface mode" or, alternatively, "avoid booleans in Surface mode".

Edited by wendallhitherd
