3DCoat Forums

  • Advanced Member
Uh... 3DC has had that since the beginning, it's like, the core of what 3DC is. :blink:

lol yeah, I haven't used that part of it much; I use 3DC for voxel stuff. I was talking about direct painting onto UV-mapped textures on a true low-poly model while viewing normal maps, rather than 3DC's old method... maybe I'm just confused, but I was under the impression that 3DC 2.0 expected me to import my model and let 3DC subdivide it weirdly before I could paint it.

lol yeah, I haven't used that part of it much; I use 3DC for voxel stuff. I was talking about direct painting onto UV-mapped textures on a true low-poly model while viewing normal maps, rather than 3DC's old method... maybe I'm just confused.

As far as I understand direct painting, the only difference is that it doesn't physically deform the mesh, otherwise it's the same. Well, and it's faster, says Andrew.

  • Advanced Member
As far as I understand direct painting, the only difference is that it doesn't physically deform the mesh, otherwise it's the same. Well, and it's faster, says Andrew.

Not quite. I'm pretty sure it's not reliant on subdividing the mesh. It's per-pixel rather than per-vertex or per-polygon painting.

  • Advanced Member
I was under the impression that 3DC 2.0 expected me to import my model and let 3DC subdivide it weirdly before I could paint it.

It does that internally (because colors are stored in the microvertices of a hi-res mesh), but you don't have to export (or even view) the hi-res mesh to work on it. You can paint on the low-res mesh (View > Low-res) and export the texture/normal map for use with the low-res model as well.

Voxel sculpt, example LP paint, and LP + rendered SSS result (attached).

The advantage of direct painting is that it'll supposedly be faster. Displacement painting will still be allowed, according to Andrew, so I'm not sure what else will change other than the internal implementation. Andrew?

It does that internally (because colors are stored in the microvertices of a hi-res mesh), but you don't have to export (or even view) the hi-res mesh to work on it.

This would make a lot more sense. Every time this question came up before, the answer sounded (to me, anyway) like what I said above. It's strange, though; I thought I remembered hearing a lot of people complaining that they didn't like painting on microvertices the way ZB does.

  • Advanced Member
This would make a lot more sense. Every time this question came up before, the answer sounded (to me, anyway) like what I said above. It's strange, though; I thought I remembered hearing a lot of people complaining that they didn't like painting on microvertices the way ZB does.

Well, unlike ZB (or at least the last version I used, 2.0 or something), 3DC uses adaptive tessellation to generate the hi-res mesh, AFAIK. That means the mesh doesn't have to be modeled with a uniform level of detail all over (though it's still a good idea if your renderer doesn't support adaptive subdivision, so you generate enough detail for your displacement map). Big polys in 3DC are subdivided more. That means you don't have to get into the uber-millions of polys to have enough verts for enough texture detail. You pick a vertex count roughly equal to the number of pixels in the image, and the mesh is subdivided (bigger polys more heavily) until it reaches that count.
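
Just to illustrate the idea (a toy sketch of "split the biggest polygons first until you hit a target count", not Andrew's actual code):

```python
import heapq

def adaptive_tessellate(triangles, target_count):
    """Keep splitting the largest remaining triangle until there are about
    target_count of them. Each triangle is three (x, y, z) points; the split
    here is just a midpoint split of the b-c edge, purely for illustration."""
    def area(tri):
        (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
        return 0.5 * (nx * nx + ny * ny + nz * nz) ** 0.5

    heap = [(-area(t), i, t) for i, t in enumerate(triangles)]  # max-heap by area
    heapq.heapify(heap)
    next_id = len(triangles)  # unique tie-breaker so tuples never compare triangles
    while len(heap) < target_count:
        _, _, (a, b, c) = heapq.heappop(heap)                  # biggest triangle first
        m = tuple((b[i] + c[i]) / 2 for i in range(3))         # midpoint of edge b-c
        for child in ((a, b, m), (a, m, c)):
            heapq.heappush(heap, (-area(child), next_id, child))
            next_id += 1
    return [t for _, _, t in heap]
```

The point is just that the subdivision budget is driven by a target count (roughly the pixel count of the texture), not by sculpt-style subdivision levels.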

I don't see why people don't like the approach, personally. I like it better, as it allows cool features such as being able to swap out UV layouts at any time, and there are no seam problems, since the mesh itself (vertex colors) doesn't have any. Does it really matter how the data is stored internally as long as the result is the same? If Andrew didn't advertise how it worked, nobody might ever know.

  • Advanced Member
Well, unlike ZB (or at least the last version I used, 2.0 or something), 3DC uses adaptive tessellation to generate the hi-res mesh, AFAIK. That means the mesh doesn't have to be modeled with a uniform level of detail all over (though it's still a good idea if your renderer doesn't support adaptive subdivision, so you generate enough detail for your displacement map). Big polys in 3DC are subdivided more. That means you don't have to get into the uber-millions of polys to have enough verts for enough texture detail. You pick a vertex count roughly equal to the number of pixels in the image, and the mesh is subdivided (bigger polys more heavily) until it reaches that count.

I don't see why people don't like the approach, personally. I like it better, as it allows cool features such as being able to swap out UV layouts at any time, and there are no seam problems, since the mesh itself (vertex colors) doesn't have any. Does it really matter how the data is stored internally as long as the result is the same? If Andrew didn't advertise how it worked, nobody might ever know.

I agree; provided performance is fine and the results are good, it shouldn't really matter what methods 3D Coat uses. But a lot of people seem to want "direct painting", so I can only assume that current painting performance isn't so great for them.

  • Advanced Member

DP allows for:

Overlapping and mirrored UVs. Very handy when you don't have much texture memory available for your target platform.

Since it works in UV space, you can come up with varying texel ratios on the same model, independently of the model topology, and save some more texture memory.

Better accuracy, as what you see is what you get and the texels won't go through multiple reprojections during painting and at export (like they do right now). You might not see it when working with 2K images, but try to do a character with a few 128x128 textures right now and it will show. It is important when doing 'down to the texel' work.

All these things are important for game assets, especially low-res work.

Hope this helps,

Franck.

  • Advanced Member

gonna play devil's advocate here for a minute... Personally I'm a big fan of the microvertex approach Andrew uses.

DP allows for:

Overlapping and mirrored UVs. Very handy when you don't have much texture memory available for your target platform.

True, but mirroring breaks TS normal mapping in most engines, since the tangent-space calculation is based in part on which side is "up" in UV space. Flipping UVs flips the effect (and results in seams).
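
For anyone wondering why: the handedness of the tangent basis comes from the sign of the triangle's signed area in UV space, so a mirrored island gets the opposite sign. A tiny illustrative check (generic math, not any particular engine's code):

```python
def uv_winding_sign(uv0, uv1, uv2):
    """Signed area of a triangle in UV space. A mirrored UV island gives a
    negative sign, which is what flips the tangent basis (and the baked
    normal map) in most engines."""
    (u0, v0), (u1, v1), (u2, v2) = uv0, uv1, uv2
    det = (u1 - u0) * (v2 - v0) - (u2 - u0) * (v1 - v0)
    return 1.0 if det >= 0.0 else -1.0

print(uv_winding_sign((0, 0), (1, 0), (0, 1)))   #  1.0
print(uv_winding_sign((1, 0), (0, 0), (1, 1)))   # -1.0 -> the mirrored copy flips
```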

Since it works in UV space, you can come up with varying texel ratios on the same model, independently of the model topology, and save some more texture memory.

But since 3DC already uses adaptive tessellation to evenly generate a hi-res mesh, it's already independent of the mesh topology (AFAIK). If you're working on LP meshes, you can always just crank up the vertex count to compensate. It's as smooth as can be for me at 5M verts for 2K textures. It should be dandy with lower-res textures, even with downsampling/scaling in the vert-to-texel conversion.

Better accuracy, as what you see is what you get and the texels won't go through multiple reprojections during painting and at export (like they do right now).

Yeah, but most simple Photoshop operations can be performed in 3DC, including adjustments to the layers (though admittedly no curves or levels, which would be nice). Exporting and importing a lot... I'd have to agree; lots of detail is lost in the conversion process. But consider also that you can export to Photoshop from 3DC and import in such a way that only what has changed in the bitmap version replaces the micropoly version. This "only what's changed" approach saves a lot of quality. Also keep in mind that you can import changes into new layers and use blend modes to apply them (for example, if you touch something up with a clone brush in PS, you could clone to a new layer and import that into 3D Coat). There are lots of workarounds.
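
Roughly, that "only what's changed" merge amounts to a per-texel diff against the image that was exported (my guess at the mechanism, not Andrew's code; the function name is just for illustration):

```python
import numpy as np

def changed_texel_mask(exported, edited, tolerance=0):
    """Compare the bitmap that was exported with the edited one that comes back
    from Photoshop and return a per-texel mask of what actually changed. Only
    those texels would overwrite the existing (higher-precision) microvertex
    data; untouched areas keep their original quality. Arrays are HxWx3 uint8."""
    diff = np.abs(exported.astype(np.int16) - edited.astype(np.int16))
    return np.any(diff > tolerance, axis=-1)   # HxW boolean mask
```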

You might not see it when working with 2K images, but try to do a character with a few 128x128 textures right now and it will show. It is important when doing 'down to the texel' work.

Isn't this problem solved by using a high number of vertices? I've never tested with such a low-res texture. Usually I work on a high-res texture and scale down if necessary; the downsampling produces higher-quality textures anyway. I know it's not technically WYSIWYG, but neither is high-poly sculpting baked onto a low-poly mesh: it's an approximation of a finer version. I'm sure Andrew could implement hardware texel downsampling/scaling to give you a real-time preview of, say, a 256 texture scaled down to 128. I assume this would give you the quality and workflow you're looking for while storing an intermediate higher-res version internally.
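
Downsampling really is a cheap operation; a 2x2 box filter is just a few lines (a toy sketch, nothing to do with 3DC's internals):

```python
import numpy as np

def box_downsample(texture):
    """Halve a texture's resolution by averaging each 2x2 block of texels:
    the simplest flavour of the 256 -> 128 preview described above.
    Expects an HxWxC array with even H and W."""
    h, w, c = texture.shape
    blocks = texture.reshape(h // 2, 2, w // 2, 2, c).astype(np.float32)
    return blocks.mean(axis=(1, 3)).astype(texture.dtype)

preview_128 = box_downsample(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
```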

... just my opinion... don't shoot me!

  • Advanced Member
gonna play devil's advocate here for a minute... Personally I'm a big fan of the microvertex approach Andrew uses.

I do like it too, although I always wondered why UVs were required in the first place when it's microvertex painting. You could just as well import UVs later on, but that's another story.

Both DP and microverts are useful approaches; it's just that one is better suited to pure low-res stuff while the other is great for the hi-res-baked-to-low-res approach.

True, but mirroring breaks TS normal mapping in most engines, since the tangent-space calculation is based in part on which side is "up" in UV space. Flipping UVs flips the effect (and results in seams).

Just assign two shaders, one per side, flip the binormal in one of them, and off you go. Anyway, for diffuse-only work (low-res specifically) this isn't a problem. I'm thinking of the many artists working on handheld-level resources here. Think DS or PSP, for instance.

But since 3DC already uses adaptive tessellation to evenly generate a hi-res mesh, it's already independent of the mesh topology (AFAIK). If you're working on LP meshes, you can always just crank up the vertex count to compensate. It's as smooth as can be for me at 5M verts for 2K textures. It should be dandy with lower-res textures, even with downsampling/scaling in the vert-to-texel conversion.

That's just what I said. But what if I want an uneven texel ratio, because the sole of my character's shoe isn't as important as his face? And I want to see exactly how it looks while I'm painting it.

Yeah, but most simple Photoshop operations can be performed in 3DC, including adjustments to the layers (though admittedly no curves or levels, which would be nice). Exporting and importing a lot... I'd have to agree; lots of detail is lost in the conversion process. But consider also that you can export to Photoshop from 3DC and import in such a way that only what has changed in the bitmap version replaces the micropoly version. This "only what's changed" approach saves a lot of quality. Also keep in mind that you can import changes into new layers and use blend modes to apply them (for example, if you touch something up with a clone brush in PS, you could clone to a new layer and import that into 3D Coat). There are lots of workarounds.

Sure, there are workarounds. And right now, for me, the workaround is to skip 3DCoat altogether and stick with Maya's 3D Paint for low-res work, which is a shame because it is nowhere near as complete as 3DCoat in terms of tools. But it is WYSIWYG.

Isn't this problem solved by using a high number of vertices? I've never tested with such a low-res texture. Usually I work on a high-res texture and scale down if necessary; the downsampling produces higher-quality textures anyway. I know it's not technically WYSIWYG, but neither is high-poly sculpting baked onto a low-poly mesh: it's an approximation of a finer version. I'm sure Andrew could implement hardware texel downsampling/scaling to give you a real-time preview of, say, a 256 texture scaled down to 128. I assume this would give you the quality and workflow you're looking for while storing an intermediate higher-res version internally.

Of course you can spend lots of time putting details into an uber-high-resolution texture, not knowing what's going to be kept and what's not when you downsample. That's an important point here, because I think the whole problem with the hi-to-low approach in general is the often immense waste of time spent on detail you'll never see in the end. DP in general might be a nice addition to the artist's palette of tools, letting them choose the fastest route to the end result. Because only the end result matters in game art.

All I'm saying is, when you're on a tight budget and schedule on... say, a PSP title, you're looking for the most direct approach to creating your textures, and I think DP can help.

... just my opinion... don't shoot me!

You're a dead man... just kidding ;)

Franck.

  • Advanced Member
I do like it too, although I always wondered why UVs were required in the first place when it's microvertex painting. You could just as well import UVs later on, but that's another story.

I think it's because the mesh would be too heavy to display in the viewport (painting on a 5M-poly mesh...). Maybe with incremental rendering and/or CUDA technology Andrew could make this feasible (it would be nice to see CUDA tech implemented in paint mode... and on a Mac).

Just assign two shaders, one per side, flip the binormal in one of them, and off you go.

Yup. One way to skin that cat.

Anyway, for diffuse-only work (low-res specifically) this isn't a problem. I'm thinking of the many artists working on handheld-level resources here. Think DS or PSP, for instance.

Aha. In which case there is no normal map capability anyway. I can see where you are coming from better now.

That's just what I said. But what if I want an uneven texel ratio, because the sole of my character's shoe isn't as important as his face? And I want to see exactly how it looks while I'm painting it.

I see what you mean. Well, you could crank up the vertex count until it reached the highest level of detail you needed for a particular area.

Sure, there are workarounds. And right now, for me, the workaround is to skip 3DCoat altogether and stick with Maya's 3D Paint for low-res work, which is a shame because it is nowhere near as complete as 3DCoat in terms of tools. But it is WYSIWYG.

Last time I used it, there were also horrible problems with seams... even with projection on (which made it slow anyway). Maya is a gigantic piece of buggy, overpriced bloatware, IMO. They keep releasing new versions but nothing much changes: Mental Ray gets a new shader here (which is really Mental Images' work), some minor feature nobody will ever use there...

Of course you can spend lots of time putting details into an uber-high-resolution texture, not knowing what's going to be kept and what's not when you downsample.

Unless... as I said, there was a real-time downsample preview. I assume scaling the texture down is a relatively simple operation. It would be hard to argue against that approach if it provided exactly (or almost exactly) the same result as DP.

That's an important point here, because I think the whole problem with the hi-to-low approach in general is the often immense waste of time spent on detail you'll never see in the end. DP in general might be a nice addition to the artist's palette of tools, letting them choose the fastest route to the end result. Because only the end result matters in game art.

I suppose, yeah. But in some cases having a reference might be useful (multiple levels of detail, multiple platforms, etc.). Having different LOD meshes allows objects to be swapped out based on distance to the camera / size on screen, and if you're rendering hi-res cutscenes you can use those reference meshes as well. I suppose it depends on the engine, the platform you're targeting, and the particular project.

All I'm saying is, when you're on a tight budget and schedule on... say, a PSP title, you're looking for the most direct approach to creating your textures, and I think DP can help.

I'm still skeptical, but I think you've made a pretty good case for DP's use on an LP mesh. Use on a hi-res mesh... I'm not sure what the benefit is there. Andrew will probably prove me wrong, though, as he reports he's been able to fix the famed painting-across-seams problem and will probably pull other sorts of voodoo out of his hat.

  • Advanced Member

Newer engines flip the binormal automatically, as far as I can tell. I mean, I did some stuff for the Electron engine some time back and it had no problems with mirrored normal maps, and that was over a year and a half ago.
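
That matches the usual trick: a per-vertex handedness sign is stored alongside the tangent (often in tangent.w), and the shader rebuilds the bitangent with it, so mirrored islands just get the opposite sign. A generic sketch (not the Electron engine's actual code, and the exact sign convention varies by engine):

```python
def bitangent(normal, tangent, handedness):
    """Rebuild the bitangent as cross(N, T) * sign. Mirrored UV islands carry
    handedness = -1, so their binormal is flipped automatically."""
    nx, ny, nz = normal
    tx, ty, tz = tangent
    cx, cy, cz = ny * tz - nz * ty, nz * tx - nx * tz, nx * ty - ny * tx
    return (cx * handedness, cy * handedness, cz * handedness)

print(bitangent((0, 0, 1), (1, 0, 0), +1.0))  # (0.0, 1.0, 0.0)
print(bitangent((0, 0, 1), (1, 0, 0), -1.0))  # (-0.0, -1.0, -0.0) on a mirrored island
```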

That said, it's not really a problem, because if you're creating content for an engine you create your UVs with this in mind, and it never becomes an issue.

Hi Andrew, how's the DP coming along? Will there be a beta soon?

Yes, soon, I hope this week.

Edit: There is still much work: export/import, the fill tool, external normal-map import, some seam adjustments, painting in the UV plane... So I don't promise that it will all be done tomorrow. If all goes well I will upload tomorrow or the day after, but next week will also be busy with the final touches. It was a huge piece of work.

Now I am finishing the curve & text tools.

  • Member
Yes, soon, I hope this week.

Great news Andrew, really looking forward to it :)

I've been using the retopo tools for much of the morning at work, and I'm wondering if you could put in a triangle counter to go with the vertex and face counts?

(No rush or anything, just something that would be handy, and maybe it's already there and I don't know how to turn it on ;)).

I have pointed out before that I didn't care for the viewport rotation, the way the last click becomes the center of rotation. I have pretty much gotten used to it the way it is, so I just accepted that that's the way 3DC works. However, someone else was just complaining about it on the NewTek forums and pretty much said it's the only thing keeping him from buying the program.

He said:

"First issue for me was the rotation of objects - I didn't know (I still don't) how to set the rotation to object's center, so it was highly annoying trying to rotate the objects while trying to sculpt... The objects just slowly crawled away from my view and I had to move them constantly because of that. Is there an option to center the object's rotation (and the object), some shortcut, anything? Is there similar normal and local mode for rotation like in ZBrush?"

and after I explained:

"I'm just not sure I would ever really get comfortable with that rotation scheme - or would I even want to. It is not as bad as the old ZBrush's drop-to-canvas-workflow-killer, but it's cumbersome and, well, in my opinion just a wrong way to implement rotation..."

  • Advanced Member

Phil,

I understand where he is coming from. Most of the time when you're sculpting you just want to rotate around the COG (center of gravity) of a mesh, but there are some instances where you do need the current method of rotation, like when you're trying to reach a hard-to-paint/sculpt place such as the inside of the mouth, which the current rotation method excels at. I think the ideal solution would be to add a pull-down menu near the top, next to the navigation icons, or just allow the user to right-click on the rotate icon and select between the two rotation methods. Personally I would use the COG method more, and when I need to reach hard places I'd switch to the other method. Being able to toggle between methods via a hotkey would be awesome too!
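
The two schemes really only differ in which pivot you orbit around; a minimal sketch of the idea (nothing 3DC-specific, and the names are just for illustration):

```python
import numpy as np

def orbit_view(eye, pivot, yaw_radians):
    """Orbit the camera around a chosen pivot (the mesh's COG or the
    last-clicked point): move the pivot to the origin, rotate about the
    world up axis, move back. Toggling behaviours is just a different pivot."""
    c, s = np.cos(yaw_radians), np.sin(yaw_radians)
    rot = np.array([[ c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    p = np.asarray(pivot, dtype=float)
    return p + rot @ (np.asarray(eye, dtype=float) - p)

cog = np.array([0.0, 1.0, 0.0])           # hypothetical mesh center of gravity
eye = np.array([0.0, 1.0, 5.0])
print(orbit_view(eye, cog, np.pi / 2))    # camera swings around the COG
```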

BTW, if anyone is looking for a good NVIDIA driver, try the new 182.08; it's really stable on my GTX 260!! (Every other driver I tried before had driver failure/recovery issues.)

@Andrew, I'm looking forward to DP; I hope it's a lot faster than the current technology.

I have pointed out before that I didn't care for the viewport rotation, the way the last click becomes the center of rotation. I have pretty much gotten used to it the way it is, so I just accepted that that's the way 3DC works. However, someone else was just complaining about it on the NewTek forums and pretty much said it's the only thing keeping him from buying the program.

He said:

"First issue for me was the rotation of objects - I didn't know (I still don't) how to set the rotation to object's center, so it was highly annoying trying to rotate the objects while trying to sculpt... The objects just slowly crawled away from my view and I had to move them constantly because of that. Is there an option to center the object's rotation (and the object), some shortcut, anything? Is there similar normal and local mode for rotation like in ZBrush?"

and after I explained:

"I'm just not sure I would ever really get comfortable with that rotation scheme - or would I even want to. It is not as bad as the old ZBrush's drop-to-canvas-workflow-killer, but it's cumbersome and, well, in my opinion just a wrong way to implement rotation..."

Yes, I was suggesting to him how this method works really well for working on a character's fingers (in a T-pose). A nice compromise is doing it the way LW Modeler does, rotating around the center of the viewport. Also, something he suggested that I had forgotten about: ZB has a toggle switch that changes between the COG type and 3DC's type.

  • Advanced Member
Yes, I was suggesting to him how this method works really well for working on a character's fingers (in a T-pose). A nice compromise is doing it the way LW Modeler does, rotating around the center of the viewport. Also, something he suggested that I had forgotten about: ZB has a toggle switch that changes between the COG type and 3DC's type.

I'm fine with the current method, but if this kind of COG rotation is put in, it would also be nice to have a null that represents the current user-defined COG, something you can make visible and move when needed, and/or have it so that, like in XSI, you can frame any currently selected/visible items and have that set the COG to those items' COG.

This is handy for quickly re-centering when moving between layers, etc. How all this is implemented, though, is the most important factor. While I've grown used to, and in some ways like, the current 3DC model, I think it's important that 3DC addresses these issues so it doesn't become an oddball to use like a certain application starting with zed.

  • Advanced Member

I, personally, love 3DCoat's navigation scheme.

It's the only one where I can intuitively go anywhere I want, no matter how small the space or how weird the angle. It's much better than ZBrush's "on last stroke" option, because sometimes you just want to "grab" an area to inspect, which doesn't have to be where you last touched the model.

Personal opinion aside, I fully agree there should be an option to change it so rotation takes place around the center of the model. It has been a long-standing request for many.

Of course, this will have to be a very convenient toggle near/against the viewport so it's very easy to reach/use. (I like sonk's suggestions.)

GrtZ JW

  • Advanced Member

I also like the current rotation scheme: place your cursor over the location on the object you want to rotate around before you Alt + left-mouse-button. Works great ;) very intuitive and predictable.

  • Advanced Member

I'm curious whether any of you who have a 3D Connexion device still have problems with rotation. For me, a little side pressure on the 3D navigator keeps the model centered as I rotate it.

Of course, not everyone has one, but that's one of the selling points of a 3D navigation device: one hand can control rotation/zoom/positioning.

  • Advanced Member

Talking about view rotation, what I would like is to be able to pick a point to be the center of rotation, a la Mudbox, because sometimes the current method is annoying when a part of your mesh passes through your line of sight and you accidentally pick that part of the mesh with your cursor.

It would be great to have several options.
