3D Coat Forums
Rygaard

I will not be silent this time. Just my opinion!

Recommended Posts

I would just ask anyone who truly thinks 3DCoat needs Quad-Based SubD meshes/levels (a MASSIVE undertaking, if it's possible at all) to screen-record their case, using ZBrush and 3DCoat, to show where 3DCoat's current workflow is inferior to the one proposed. This way, if there is indeed sufficient merit for such a major development effort, Andrew can see it. You have to show him why it would help, many times; just telling him often won't convince him, not for large-scale undertakings.

The Conform Retopo Mesh and Proxy systems were designed to address those needs. I'm not at all opposed to incorporating good features from other apps, like ZBrush. I told Andrew that a really well thought out and implemented Sculpt Layer system (with Erase, Magnify/Reduction, Layer Masking and Condition Masking) + a Procedural Noise Library (for the NOISE tool and the Fill tool in the Paint Room) + Vector Displacement Brushes, all for V5, would bring 3DCoat so close to ZBrush in overall production-level capability that it puts 3DCoat clearly in the conversation, where it really wasn't before. Maybe after V5 it might be worth exploring, but in the meantime I'd rather have a more robust system built around the current Voxel and Surface mode platform.


...and for what it's worth, Rygaard already did something like this in pitching the Reproject tool to Andrew. I'm glad he asked for it. It can be very handy in the right scenario.


I quickly did a small test project of a face in Blender: Dynamic Tessellation + Multi-Resolution.
[Attached image: face test sculpt in Blender]

My goal was to demonstrate the benefits we would have if 3D-Coat used a single mesh for all Rooms.

My aim was to mix techniques simulating the Rooms of 3D-Coat. In this project I did not worry about the workflow or about the look of the face (I know it is bad); what I did care about was the ease of the techniques, and the control and maintainability of the project.

I could have used different kinds of workflow, and I have enough knowledge to do so, but my biggest focus was on the freedom I had during the creative process. In a real workflow I would have the same chance to do anything in my favor. I am aware that many things I did in this demonstration were forced and exaggerated, but that was to show the potential of the approach.
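If anyone wants to try the same setup, here is a rough Blender Python sketch of the idea (the detail value and subdivision count are just placeholders, and operator or property names may vary slightly between Blender versions). It is only meant to show that the dynamic-topology pass and the Multiresolution pass happen on the very same object:

```python
import bpy

obj = bpy.context.active_object  # the one mesh everything happens on

# Rough pass: dynamic topology ("dyntopo"), Blender's equivalent of
# dynamic tessellation -- the base mesh is retessellated while sculpting.
bpy.ops.object.mode_set(mode='SCULPT')
bpy.ops.sculpt.dynamic_topology_toggle()                  # assumes dyntopo starts disabled
bpy.context.scene.tool_settings.sculpt.detail_size = 8    # placeholder detail value

# ... sculpt the rough forms here ...

bpy.ops.sculpt.dynamic_topology_toggle()                  # turn dyntopo back off
bpy.ops.object.mode_set(mode='OBJECT')

# Detail pass: add a Multiresolution modifier to the same object,
# subdivide a few times, and sculpt on the highest level.
multires = obj.modifiers.new(name="Multires", type='MULTIRES')
for _ in range(3):
    bpy.ops.object.multires_subdivide(modifier=multires.name)
multires.sculpt_levels = multires.total_levels
```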


Sorry, I forgot the process ... here it is:

[Attached image: workflow with tessellation and multi-resolution]


Thanks Rygaard for the nice demonstration. But I'm not sure this kind of freedom is really the freedom we need in 3D-Coat. For example, I have never had a situation where I would need to poly-model into a high-poly mesh the way you did in the picture. I would go back to sculpt mode and sculpt those details, or add another mesh. I think that way of modelling is not very efficient. The more I think about this, the more I'm shifting to the idea that we do need rooms, but not as many as we have right now.


It is true that, to optimize the workflow, many tasks could be carried out in a single step, and many commands are repeated or scattered across the interface.

The tools for carrying out the work are fine; it's just that accessing them is sometimes a bit tricky.

But it's not just that Andrew needs time to create a new interface.

First you have to understand how 3DC works, understand how it is used by doing different jobs for the different needs of artists, and then, with the road map drawn, link all the blocks together with a new paradigm for the user.

Fluency is achieved with simplicity.

4 hours ago, haikalle said:

Thanks Rygaard for the nice demonstration. But I'm not sure this kind of freedom is really the freedom we need in 3D-Coat. For example, I have never had a situation where I would need to poly-model into a high-poly mesh the way you did in the picture. I would go back to sculpt mode and sculpt those details, or add another mesh. I think that way of modelling is not very efficient. The more I think about this, the more I'm shifting to the idea that we do need rooms, but not as many as we have right now.

 @haikalle Thank you for your opinion!  :)

I know that what I did would not be ideal in a workflow. You are completely right about sculpting the details or adding another mesh with better topology! But I was not worried about that.

My intention was to demonstrate that in a real project, with this freedom, the mesh flows more easily through all the program features you might need at any given moment.
I was much more concerned with demonstrating that I had no difficulties or problems performing any kind of technique, even in the worst-case scenario I put myself in.
I purposely pushed to the extreme, in a very exaggerated way, to see what I could do and whether this kind of approach would give me any sort of problem. At no point did I waste time; everything behaved as I wished and was very quick to do.

Of course, in the ideal scenario, shortly after finishing the sculpt using Dynamic Tessellation, I would quickly retopologize that mesh and continue the project with the mesh that has good topology.
Afterwards, I would reproject the details from one mesh to the other and add Multi-Resolution to the good mesh to refine the sculpt.
At this point, I would have the choice to keep detailing with Multi-Res, or I could create and paint a detail texture map (including texture projection) and apply that map with the Displacement Modifier in a non-destructive way. That gives total control between physical sculpting (Multi-Resolution) and painted detailing via the texture map (painting + Displacement Modifier).
Then I could paint the Diffuse (albedo) map and every other type of texture map the project needs in the proper way.
In this project I would have complete control of everything, and if I needed to change anything I could do so very easily. ;)
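If it helps to picture that last step, here is a minimal Blender Python sketch of the non-destructive displacement setup I mean (the file path, names and strength are only placeholders, and it assumes the retopologized mesh already has UVs and a painted detail map):

```python
import bpy

obj = bpy.context.active_object  # retopologized mesh with UVs

# Load the painted detail map (placeholder path).
detail_img = bpy.data.images.load("//textures/detail_map.png")

# Wrap it in an image texture that the Displace modifier can read.
detail_tex = bpy.data.textures.new("DetailMap", type='IMAGE')
detail_tex.image = detail_img

# Non-destructive displacement: the base mesh stays editable, and the
# painted map can be repainted or swapped at any time.
disp = obj.modifiers.new(name="PaintedDetail", type='DISPLACE')
disp.texture = detail_tex
disp.texture_coords = 'UV'   # drive the displacement through the mesh UVs
disp.mid_level = 0.5         # mid grey = no displacement
disp.strength = 0.05         # placeholder strength
```

A Multiresolution modifier can sit in the same stack, which is what I mean by keeping both kinds of detail under control at once.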

In a real scenario, you could import a mesh with UVs and use Multi-Resolution to get on with your project. :)

In another scenario, if you wanted to add Multi-Resolution directly to a sphere, you could start your project from scratch and proceed normally.

Honestly, my project stays fully efficient in a very intuitive and easy way. Because I had this single mesh, I had everything the program could offer working in my favor, regardless of which process I chose to follow. :)

One important thing to say is that in 3D-Coat I could accomplish many of the things I did here, but at a certain point in the project I would get stuck and have to work something out before moving on. Honestly, I would have to perform some workarounds that are not intuitive for an artist trying to follow through with a project.
For that, a new user, or even a user with more experience in the program, would need complete knowledge of how 3D-Coat works, and the problem is that things are sometimes not so simple to understand.
I also love the way 3D-Coat works, but I do not have the interactivity I would expect between the Rooms; I would have to create a different type of mesh to get access to a certain Room's functionality.

Just one example:
maybe there will be a new Room called Modeling. That would mean yet another program within 3D-Coat, and it might present the same problems that exist today.
Maybe this Room would communicate with the Retopo Room, and maybe with the Paint Room, but it would not communicate directly with the Sculpt Room.
What I mean is that currently you cannot use the Retopo Room tools on the mesh that is in the Sculpt Room; for a Sculpt Room mesh to be used in the Retopo Room, you have to run Autopo or do a retopology pass to create another mesh.

12 hours ago, AbnRanger said:

I would just ask anyone who truly thinks 3DCoat needs Quad-Based SubD meshes/levels (a MASSIVE undertaking, if it's possible at all) to screen-record their case, using ZBrush and 3DCoat, to show where 3DCoat's current workflow is inferior to the one proposed. This way, if there is indeed sufficient merit for such a major development effort, Andrew can see it. You have to show him why it would help, many times; just telling him often won't convince him, not for large-scale undertakings.

The Conform Retopo Mesh and Proxy systems were designed to address those needs. I'm not at all opposed to incorporating good features from other apps, like ZBrush. I told Andrew that a really well thought out and implemented Sculpt Layer system (with Erase, Magnify/Reduction, Layer Masking and Condition Masking) + a Procedural Noise Library (for the NOISE tool and the Fill tool in the Paint Room) + Vector Displacement Brushes, all for V5, would bring 3DCoat so close to ZBrush in overall production-level capability that it puts 3DCoat clearly in the conversation, where it really wasn't before. Maybe after V5 it might be worth exploring, but in the meantime I'd rather have a more robust system built around the current Voxel and Surface mode platform.

I completely understand what you mean.

I made that Sculpt Layers video for Andrew, rich in detail, so he would realize how important Sculpt Layers are in 3D-Coat for us artists, and it was well worth all my effort! :)
I would very much like to do this kind of video too, but I do not know if I will have time for it. I'll see what I can do.
If anyone else can make a video as well, please help!

3D-Coat is one of the most powerful programs I've ever worked with. I feel really good working with it, and it has great potential to become even more powerful.

@AbnRanger All the features you mentioned are perfect, and artists really would have a lot of power in their hands to accomplish many tasks.

But some things are important to do, which is why I've been trying to explain the benefits of all this.
3D-Coat is fantastic at everything, but I don't think the program is simple and intuitive; it does not have an easy flow of the mesh across all the Rooms.

The first step would not be a new interface.
We already have everything in our favor; everything is ready! It would only concern how the Rooms operate on the mesh. I think it would just be an adaptation of how things work within the program.

I love all the power 3D-Coat has; you can do everything, but that power is concentrated in different places that do not interact. I know there is a certain amount of interactivity when you send a mesh to another Room, but it's not the same as being able to act directly on the same mesh.

If you open a project by importing a mesh with UVs into the Paint Room (per-pixel or microvertex painting) and send that mesh to the Sculpt Room, you will not have the details of that mesh in the Sculpt Room, and in the Sculpt Room you would be working on another type of mesh that has no UVs. Of course, after you change the mesh you can bake to return to the Paint Room, but sometimes the result of that bake is not very good (depending on whether you know how to set up the bake correctly).
This will be a problem for users who do not know how to use 3D-Coat fully. I know that this is how 3D-Coat works, and I agree that you have to know how to handle the program, but it becomes very elaborate to get things done.

13 hours ago, AbnRanger said:

...and for what it's worth, Rygaard already did something like this in pitching the Reproject tool to Andrew. I'm glad he asked for it. It can be very handy in the right scenario.

And I think there are more good things coming around... ;)

6 hours ago, Carlosan said:

It is true that, to optimize the workflow, many tasks could be carried out in a single step, and many commands are repeated or scattered across the interface.

The tools for carrying out the work are fine; it's just that accessing them is sometimes a bit tricky.

But it's not just that Andrew needs time to create a new interface.

First you have to understand how 3DC works, understand how it is used by doing different jobs for the different needs of artists, and then, with the road map drawn, link all the blocks together with a new paradigm for the user.

Fluency is achieved with simplicity.

I agree with you. Great points raised!

Carlosan, I don't think this would call for a new interface.
We already have everything in 3D-Coat; everything is ready. If any change happened, it would be more an adaptation of how the mesh works across all the Rooms.
The interface would matter less, since of course it would evolve over time.

In my opinion, the most important thing would be to make all the features and tools available on a single mesh, simplifying the whole process in a much more intuitive and easy way.

I agree that we have to learn to master the program, but sometimes that becomes a bit difficult, and unfortunately a lot of confusion results.

I remember when I was starting to learn 3D-Coat, I watched a video explaining how 3D-Coat worked with Rooms: why a mesh is present in one Room and not in another.
At one point I started to laugh (sometimes when I get nervous I start laughing, you'll understand!) because I had become even more confused.
I remember that the author of the video had difficulty explaining it, not because he did not know how, but because it is complicated to make a new user understand this somewhat strange way 3D-Coat operates, completely different from the various other programs I was used to.

Honestly, I almost lost the will to learn 3D-Coat, but I thought the program was fantastic, so I decided not to give up and kept trying to learn it.
After a long time I began to understand a little, and of course nowadays I understand the process.

For me, that learning path was very painful and difficult! And I think this unusual 3D-Coat workflow is why many users end up giving up on the program.
I'm not saying that being different is bad, never! Being different is very good, but in the case of 3D-Coat I did not have a good initial experience.

Today I see 3D-Coat with different eyes. It is a program in the style of: choose a task and perform it linearly. That is often powerful and efficient.
The problem is that you often need finer control over your project, and that is where the confusion and problems with the 3D-Coat workflow begin.
Artists who want to change something need to be very knowledgeable about the program's workings and do a lot of workarounds, and at that point new users are out of the game, wondering: what now? How can I do this? Why that?

I'm going to talk about Blender again: you have a single mesh and you know you can do everything you have learned since the beginning of your journey in the 3D world.
Everything is more intuitive, simple, practical and easy to use, and every choice you need to make, you make right there, with no problem, because everything is connected to this single mesh. The whole program is geared towards that mesh.

I've never used Mudbox, but from the little I've seen it seems to operate the same way in terms of the simplicity and ease of using a single mesh. If I wanted to learn Mudbox it would be a very smooth and fast transition.

In ZBrush, setting aside the fact that you first need to activate the EDIT button to use the program (every user who starts learning it is confused and frustrated because they cannot sculpt if they don't know that EDIT must be enabled, among other steps), the operation of the program is easy to understand because of this single mesh.

I'm sure that if I had a single mesh working inside 3D-Coat, my workflow would improve enormously and become much more intuitive, fast and powerful.

But in the end, regardless of everything, I will always love working on 3D-Coat. :) 

 

 

On 4/13/2019 at 5:11 AM, haikalle said:

About SubD and LiveClay: I'm a fan of LiveClay and I think that is the future. But I have read some documents showing that to get the same amount of detail, you need more triangles in LiveClay than you need in SubD. That makes sense, because in SubD the base mesh is well planned and built, so it gives that nice poly flow when you subdivide the mesh. And when you need fewer triangles in SubD, you can crank the poly count even higher and get an even more detailed sculpt.

Can I honestly tell you my opinion?

When I work with subdivisions, I get impressive detailing quality!
From the time I used ZBrush, and now using Blender through the Multiresolution modifier, I can see how good that kind of system is. I can't explain the reason for it, because programming is not my specialty, but I can attest that this system provides excellent sculpting quality and rich, well-defined detail.
When I did the face demonstration, I felt the difference between Dynamic Tessellation and Multi-Resolution.
As soon as I began to sculpt, the brush stroke on the surface of the mesh felt more fluid, and the way the system behaved when detailing was fantastic.

Dynamic tessellation is great and gives me fantastic things too.

The only thing I can tell you is that both systems have their good and bad points.


I agree that subdivision gives fantastic results. I have seen many pro users who start with LiveClay, then retopo, and continue with subdivisions. So both ways are needed, and I think it's hard to combine the two and their best parts into one. I don't like that if I resample in Surface mode, 3D-Coat first converts my mesh to voxels, then does the resampling, and then converts it back to a surface mesh; I lose a lot of detail in that process. About the Rooms, I really like @AbnRanger's suggestion. Even if the final goal is to have one Room, I would like to start by combining two Rooms and refining that workflow, then see how it all works together. If everything works OK, combine another two Rooms and refine that workflow, and so on...

2 hours ago, Rygaard said:

Can I honestly tell you my opinion?

When I work with subdivisions, I get impressive detailing quality!
From the time I used ZBrush, and now using Blender through the Multiresolution modifier, I can see how good that kind of system is. I can't explain the reason for it, because programming is not my specialty, but I can attest that this system provides excellent sculpting quality and rich, well-defined detail.
When I did the face demonstration, I felt the difference between Dynamic Tessellation and Multi-Resolution.
As soon as I began to sculpt, the brush stroke on the surface of the mesh felt more fluid, and the way the system behaved when detailing was fantastic.

Dynamic tessellation is great and gives me fantastic things too.

The only thing I can tell you is that both systems have their good and bad points.

I think this is the reason Pixologic has spent so much of their development time and resources refining and improving ZRemesher, so that the topology flows better. I don't think it would work well using 3DCoat's Auto-Retopo. It does a great job on many things, but it's a bit fussy, and that is why Instant Meshes was added. Problem is, Instant Meshes gives very poor topology for this purpose, as it has too many termination points/poles. SubD levels on an Instant Meshes result would be a nightmare, and the current Auto-Retopo would be too fussy to be usable half the time.

That's why I say, even if Andrew could implement a Quad-Based SubD mode, it wouldn't help much, and there are better solutions already in place (Conform Retopo & Multi-Res Proxy system). 


I wanted to show a sketch I made, just to show how the Rooms are already very well connected together.

It began as voxels, was then sculpted with adaptive tessellation, and then textured in the Paint Room (AO and curvature layers created from the high-res mesh).

I can now make changes to the high poly and still get my nice PBR materials inside the Paint Room. After that I can start baking each piece to a mid/low-poly mesh with a UV set of its own, and once all the pieces are baked I can merge the UV sets into one. In the meantime everything stays interconnected, which means I can update the high poly or the texture at any time, even AFTER the bake. This pipeline is a huge timesaver (for me at least) and not very apparent to new users.

E.g. the proxy slider should pop up when caching, or there should be some sort of hint area where mini guides help the user.

I also think that 3DC should not be afraid of experimenting rather than copying the industry standards, because that is what differentiates it from the rest.

 

[Attached screenshots: Capture2.JPG and Capture.JPG]


I also think most users want SubD in sculpting because sculpting on quads gives a smoother result much faster. This could be addressed with extra tools inside the sculpting phase (if necessary).

2 hours ago, AbnRanger said:

I think this is the reason Pixologic has spent so much of their development time and resources refining and improving ZRemesher, so that the topology flows better. I don't think it would work well using 3DCoat's Auto-Retopo. It does a great job on many things, but it's a bit fussy, and that is why Instant Meshes was added. Problem is, Instant Meshes gives very poor topology for this purpose, as it has too many termination points/poles. SubD levels on an Instant Meshes result would be a nightmare, and the current Auto-Retopo would be too fussy to be usable half the time.

That's why I say, even if Andrew could implement a Quad-Based SubD mode, it wouldn't help much, and there are better solutions already in place (Conform Retopo & Multi-Res Proxy system). 

I do not know if that was Pixologic's real intent.
Even with all the technology that produces a good automatically generated mesh, that mesh will not be used for production. You can even use curves to guide a supposedly correct topology, but even then the result is not an ideal production mesh, just a good mesh to sculpt on. Now, if you work in 3D printing, you do not need correct or ideal topology.

So the artist will still have to do the retopology manually to get the polygon flow right and obtain an ideal mesh. Of course, this manual process comes before detailing the mesh. Or they can import a production mesh from the start and continue the project with it.

On the other hand, you're right that the time they spent perfecting ZRemesher gives the artist a better mesh to work with in this subdivision system.

I've used the Multi-Resolution system many times, and it has never given me problems. Even if the mesh has topology problems such as triangles or other things we would call bad topology, Multi-Resolution has always handled any kind of mesh I've worked on very well.

I had not used the Multi-Resolution system for a long time, because I obviously use 3D-Coat (Dynamic Tessellation) for my work.
But in the small face project I demonstrated using Blender, I was pleasantly surprised at how the system handled a mesh whose topology was completely horrible.

I really felt a big difference when I started to sculpt. It is difficult to put into words; maybe it is the way the subdivision is calculated, I do not know the technical terms. But the mesh became excellent for sculpting and for accurate, sharp detailing.

Of course, if you do not know how to work with this system, you will get stretched polygons and end up fighting the mesh.
One of the good things about the Multi-Resolution system is that it allows the mesh to keep its UVs.

I remember you made a video about sculpting that creature from the Avatar movie (Voxel Surface Mode Sculpting pt. 2).

I noticed that you had difficulty working the mesh in the creature's cavity, not because you did not know how to sculpt, but because the mesh topology (dynamic tessellation) you were working on in Surface mode was against you.

That is, the mesh looked stretched, and there were problems with holes appearing in it. You tried using Smooth in the area and felt you were not solving the problem. After smoothing, you tried to sculpt with other brushes, but the mesh did not behave the right way. When you used the Expand brush things got worse, and when you switched to InflateClay (LiveClay) the mesh had several problems. To try to solve the mesh problem you used CleanClay to rebuild the mesh and smooth it, but even then the mesh was not good for sculpting; I got that impression from observing how the brushes behaved on it. CreaseClay also gave you problems with the mesh.

Of course, this video is old and Andrew has improved the system since then, but in my experience sculpting with 3D-Coat's Dynamic Tessellation, it sometimes gives me a bad surface to sculpt on, where I am fighting against the area. Even so, most of the time I can sculpt with great quality. :)

I have said all this to point out that dynamic tessellation also has problems. I love sculpting in Surface mode, but I cannot deny that I run into problems from time to time with this system. I use the Sculpt Room all the time, so I know what I'm talking about.

But since I'm not a programmer, I do not know whether Multi-Resolution would cause problems in 3D-Coat; only talking to Andrew would really tell.
I think we would not have problems with Multi-Resolution if it were implemented in 3D-Coat. It would bring good things, and the artist could choose between the two types of system, which would be wonderful. I think it would be worth it.


Long video, but hopefully it clearly describes why SubDs would be a welcome addition in 3D Coat. The video is currently unlisted, because I made it primarily for this discussion and for @Andrew Shpagin and the 3D Coat developers to look at.

Please watch if you are wondering why SubD modeling is necessary in 3D Coat and how it could help a production pipeline.

 

2 hours ago, gbball said:

Long video, but hopefully it clearly describes why SubDs would be a welcome addition in 3D Coat. The video is currently unlisted, because I made it primarily for this discussion and for @Andrew Shpagin and the 3D Coat developers to look at.

Please watch if you are wondering why SubD modeling is necessary in 3D Coat and how it could help a production pipeline.

Thanks for the explanation; you've raised great points, so people can get a better sense of the benefits of this Multi-Resolution (quads) system.

I would very much like this Multi-Resolution (quads) system in 3D-Coat, because I know the program's potential, and I wonder what that system would look like inside it...

The Multi-Resolution system has the stretched-mesh problem you demonstrated, but in my opinion the artist has to know how to use the system, otherwise the problem really will happen; the mesh gets stretched according to what the artist does.
The artist has to understand that he cannot stretch the mesh as if he were working in dynamic tessellation and expect the Multi-Resolution system to do magic and fix the problem.

Another interesting thing in the video: when you were demonstrating things inside Blender, I don't know if you intended it, but you also demonstrated the power of a single mesh inside Blender.

You applied Multi-Resolution to the mesh, which gives us all the benefits of the system non-destructively, the UVs were kept correctly, and when necessary you modeled directly on the mesh with the tools of Blender's polygon Edit Mode, which is equivalent to the tools and functionality of the Retopo Room.

You entered Sculpt Mode and made the changes you wanted, going up and down the subdivision levels.
Then you entered UV editing, and if you had needed to, you could have quickly created new UVs or changed the UVs you had created previously.

On this same mesh, you created a new texture map, then entered Texture Paint mode and started painting the diffuse map in real time directly on the mesh, but it could have been any other type of texture map you wanted (bump, normal, displacement, specular, etc.).

In addition, you could have taken advantage of vertex/polygon selection and created a group of vertices as a Vertex Group.
And you could certainly have used all the different, powerful Modifiers non-destructively; with the help of Vertex Groups and their per-vertex weights, you could restrict the influence of the Modifiers to just the vertices/polygons in those Vertex Groups.
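Just to make that last point concrete, here is a small Blender Python sketch (the group name, the vertex selection and the modifier values are only placeholder examples I made up): a Vertex Group is created and then used to confine a modifier, in this case a Smooth modifier, to that region of the mesh:

```python
import bpy

obj = bpy.context.active_object
mesh = obj.data

# Build a Vertex Group from an arbitrary selection of vertices
# (here: everything above the object's midline, purely as an example).
group = obj.vertex_groups.new(name="UpperRegion")
upper_verts = [v.index for v in mesh.vertices if v.co.z > 0.0]
group.add(upper_verts, 1.0, 'REPLACE')   # full weight for those vertices

# Any modifier that exposes a vertex_group field can be restricted to it.
smooth = obj.modifiers.new(name="MaskedSmooth", type='SMOOTH')
smooth.factor = 0.5
smooth.iterations = 10
smooth.vertex_group = group.name   # the modifier now only affects "UpperRegion"
```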

In short, everything I've been saying about the benefits of a single mesh, you were able to accomplish in seconds. All the features were there, and you used them the way you wanted, the moment you wanted.

Now answer me one thing: given the freedom you had to do things the way you did, how would you rate that kind of workflow?

Thanks again for your time having made the video!

6 hours ago, gbball said:

Long video, but hopefully it clearly describes why SubDs would be a welcome addition in 3D Coat. The video is currently unlisted, because I made it primarily for this discussion and for @Andrew Shpagin and the 3D Coat developers to look at.

Please watch if you are wondering why SubD modeling is necessary in 3D Coat and how it could help a production pipeline.

 

Thanks for taking the time to create the video. I'm halfway through and I already have two objections. Early on, you say, "This is supposed to be the equivalent of stepping up and down subdivision levels, but I disagree." How so? It uses a DIFFERENT APPROACH than traditional SubD levels, but it accomplishes the same goal: reduce the mesh to a lower-poly version to make large-scale edits (which are normally slow at higher resolutions) while still letting the user keep any smaller details made on the higher-poly version. MISSION ACCOMPLISHED.

You showed an example where you import an extremely low-poly mesh > INCREASE RESOLUTION. Triangulation does poorly in such cases, especially when the model is not prepped properly in one's host application with extra edge loops to preserve hard edges and details. If you import a prepped mesh, it will subdivide with much better results. So it's somewhat of a bad example to show the shortcomings of an app when the user doesn't take the necessary steps to prepare the model to be subdivided; it's much the same with 3D apps in general. When you subdivided in Blender, because the model didn't really have those supporting edge loops, it smoothed poorly as well, just not as poorly as Loop (triangulated) subdivision. In 3DCoat, this is how you would handle an uber-low-poly mesh on import.
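And just to sketch what that prep could look like in a host app like Blender (a rough Python example, not the steps from my video; a Bevel modifier stands in here for manually added support loops, and all the values are placeholders, so treat it as a sketch rather than a recipe):

```python
import bpy
import math

obj = bpy.context.active_object  # the uber-low-poly asset

# Stand-in for manually added support loops: a narrow bevel limited to
# sharp edges, so hard corners survive subdivision.
bevel = obj.modifiers.new(name="HoldEdges", type='BEVEL')
bevel.limit_method = 'ANGLE'
bevel.angle_limit = math.radians(30)   # placeholder sharpness threshold
bevel.width = 0.005                    # placeholder width
bevel.segments = 2

# Preview how the prepped mesh will subdivide before exporting it.
subsurf = obj.modifiers.new(name="PreviewSubdiv", type='SUBSURF')
subsurf.levels = 2
subsurf.render_levels = 2

# Apply the bevel so the exported mesh carries the supporting geometry;
# keep the subdivision modifier as a preview only.
bpy.ops.object.modifier_apply(modifier=bevel.name)
```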

 


I tried to show new users how to properly import low-poly meshes, especially uber-low-poly meshes, and it works. The best practice is to PREP YOUR LOW POLY ASSET BEFORE IMPORTING IT INTO THE SCULPT WORKSPACE. It's a lot easier to do that than for Andrew to spend a year developing a quad SubD mesh option.

The other issue is that you use a bad example to demonstrate supposed shortcomings of the CONFORM RETOPO MESH option. You bring in two completely different mesh states, with entirely different poly counts and shapes. That is using the tool in a way it was never intended. Subdivide the mesh the way you want in your host app, import the mesh into the Retopo Room, then in the Sculpt Room go to GEOMETRY > RETOPO MESH TO SCULPT MESH. Now you have a perfect copy: a quad mesh in the Retopo Room and a triangulated surface mesh in the Sculpt Room.

If you are going to use an example to show a tool's shortcomings, please try to make it practical, not an extreme case few users would even attempt. Having said this, I am starting to see some practical benefits of possibly adding it in the future, as it would allow more control when modeling in the Sculpt Room. Maybe after Andrew comes out with V5. I think he would need someone who can bugfix while he works on the core architecture for such a feature. I hope he consolidates the Paint and Retopo meshes into one unified mesh (he could then remove the Tweak Room and remove the UV tools from the Retopo Room) in the same development effort. That way, any quad mesh in the Sculpt Room would be the same mesh that exists in the Paint and "Topo" Room. All of this would take a colossal effort on Andrew's part.


Bottom line is this: if you don't prep your model (with supporting edge loops around hard edges and details) BEFORE IMPORTING it into the Sculpt workspace, you make your job that much harder. The technique I showed above (using voxels to fix subdividing problems) is a usable workaround for dealing with a model that isn't prepped properly.

8 hours ago, gbball said:

Long video, but hopefully it clearly describes why SubDs would be a welcome addition in 3D Coat. The video is currently unlisted, because I made it primarily for this discussion and for @Andrew Shpagin and the 3D Coat developers to look at.

Please watch if you are wondering why SubD modeling is necessary in 3D Coat and how it could help a production pipeline.

 

I think I get it. You want to be able to sculpt a mesh that already has UVs and texture on it; say you make a model in a modeling app and wish to sculpt it later and extract bumps, displacement, etc.

For that to happen I don't think a SubD implementation is necessary, only a way to project UVs/texture to vertex color and then back onto your starting model.

I haven't tried this pipeline and I am not sure whether it's already supported.

 

11 minutes ago, micro26 said:

I think I get it. You want to be able to sculpt a mesh that already has UVs and texture on it; say you make a model in a modeling app and wish to sculpt it later and extract bumps, displacement, etc.

For that to happen I don't think a SubD implementation is necessary, only a way to project UVs/texture to vertex color and then back onto your starting model.

I haven't tried this pipeline and I am not sure whether it's already supported.

 

If it has UVs AND texture already on it (which is a somewhat rare case, because it reverses the normal workflow), there is already a decent workflow for that. These videos show how you can do just that and still keep your UVs and textures.

 

3 hours ago, AbnRanger said:

Thanks for taking the time to create the video. I'm halfway through and I already have two objections. Early on, you say, "This is supposed to be the equivalent of stepping up and down subdivision levels, but I disagree." How so? It uses a DIFFERENT APPROACH than traditional SubD levels, but it accomplishes the same goal: reduce the mesh to a lower-poly version to make large-scale edits (which are normally slow at higher resolutions) while still letting the user keep any smaller details made on the higher-poly version. MISSION ACCOMPLISHED.

You showed an example where you import an extremely low-poly mesh > INCREASE RESOLUTION. Triangulation does poorly in such cases, especially when the model is not prepped properly in one's host application with extra edge loops to preserve hard edges and details. If you import a prepped mesh, it will subdivide with much better results. So it's somewhat of a bad example to show the shortcomings of an app when the user doesn't take the necessary steps to prepare the model to be subdivided; it's much the same with 3D apps in general. When you subdivided in Blender, because the model didn't really have those supporting edge loops, it smoothed poorly as well, just not as poorly as Loop (triangulated) subdivision. In 3DCoat, this is how you would handle an uber-low-poly mesh on import.

 

I know it's long, but please watch the whole video before picking out one or two things to comment on.  At least that way we'll be on the same page.  There are other things that I've addressed that the current workflow isn't capable of.

33 minutes ago, gbball said:

I know it's long, but please watch the whole video before picking out one or two things to comment on.  At least that way we'll be on the same page.  There are other things that I've addressed that the current workflow isn't capable of.

I understand that, but CONFORM RETOPO is actually a feature I requested from Andrew so we would have a means of making a quad mesh conform to the sculpting changes we make, and it works really well for me. It was never intended to make two different meshes with totally different shapes conform. I just had to say something the moment I saw that. I don't think it's fair to Andrew to claim it doesn't work well when it's not being used properly; none of the video tutorials showing how CONFORM RETOPO works demonstrates it being used that way. Can we at least agree on that much?

I'm coming around a bit, but not because I think 3DCoat falls short the way you think it does. It's mainly because I think it could be a great asset to poly-model with, right in the Sculpt Workspace. Still, it would be a massive undertaking and I'm not sure Andrew would be willing to do that.

