3DCoat Forums

probiner
Member
  • Posts: 79
Everything posted by probiner

  1. Hey AbnRanger, thanks for taking the time to write a thorough answer. You pretty much convinced me right there to go with the ATI. The 7950 has a 384-bit memory bus against the 256-bit of the GTX 760. The extra memory might also come in handy if I attempt any GPU rendering, though I admit that even 3GB is not enough for serious stuff. As for the drivers, being an ATI user myself I've seen how bad things have gotten, though now they seem OK on my end and from what I read people are happy these days. About CUDA, yeah, I'll be cutting myself short on that front... The OpenCL front still isn't in 3D-Coat now or in the near future, right? Cheers
  2. My old HD3870 still delivers, but I'm on my way to an upgrade. In my price range I'm split between the nVidia GTX 760 2GB and the ATI Radeon HD 7950 3GB. Anyone using them? How's navigation? Would I gain anything by upping my budget a little, or will only a damn Titan give a real performance kick? Is CUDA a game changer? I guess with the 7950 I would be left out. Thanks
  3. Hey AbnRanger. Just passing by to say I was a jerk for not returning to this thread :P I got busy and moved on to other stuff, but I just got V4 and hope to find time this weekend to come back to this. Cheers
  4. So I tried to replicate carlosa's list, but something is going wrong on my end. Maybe it's a graphics card issue. I import one of the content meshes into Retopo (they already come with a UV), then go to Voxels, do Merge from Retopo and hit Apply. The voxels look transparent... the way an object looks when it has no UV map. I tried to import an object into the Retopo room without a UV map and that seemed to work. What is going on here? Do you guys get it too? Thanks. Moreover, Conform with Retopo Mesh seemed to have no effect. Cheers
  5. Thanks, I do think it helps. Will try to replicate it. Cheers.
  6. Hi AbnRanger, I kind of know that workflow. Like I said in the previous post, in my work I model first (LightWave in this case) and once models get approved I move on to detailing, so my question in "C" is: does 3D-Coat now have a way to import an established model, detail it with true 3D vectors (not just along vertex normals as in microvertex painting) and get a map, and maybe a new low poly, to render back in LW? I ask because if I've already spent time modeling the object in a polygonal modeler, I'm not really interested in spending that time again in 3D-Coat remaking the topology and controlling edge loops for subdivision. This type of flow is what my mates follow with ZBrush, but mostly Mudbox. I see I can import a model into Surface mode, so it keeps the same topology (plus triangulation) as my polygonal model, unlike voxels, which convert it to a fixed resolution. I wouldn't feel like working with real voxels given the computer resource limitations, but I guess it's OK to start an object with them and switch to Surface. I only see the need to go back to Voxels when the Surface gets all messed up and self-intersecting, and Voxels would recover it, though with loss of detail. Maybe I'm missing something. I guess next you'll tell me to start my models in 3D-Coat :P And maybe you're right; I'm just not feeling in control the way I am in LW. To me LiveClay replaces Sculptris and Meshmixer, not Mudbox or ZBrush. But those first two are free, so there's a gap, in my humble POV. Anyway, I'll take a better look at the videos later, but the quality improvements are noticeable. Cheers
  7. It's been quite a while since I last used 3DC's sculpting features. I've been playing around with v4's LiveClay and I'm liking it a lot, but I have a lot of questions and some worries.
      A - LiveClay uses Surface mode. What are the disadvantages?
      B - I've noticed that the mesh can become messy and the symmetry gets lost. Even though the Clean Clay tool does a nice job, it seems it can't solve everything.
      C - Sometimes the pipeline requires polygonal modeling for approval, and only then does the fine detailing kick in. ZBrush and Mudbox solve this with multi-resolution subdivision meshes. How do you deal with assets that only need detailing, where you just want to end up with a mesh close to what you had plus some maps? I've noticed that one can import a mesh in Surface mode in the voxel room, but when I export it back, no UVs...
      D - I'm mostly a LW user, with Softimage at work too. How is vector displacement baking in 3DC? VD maps seem perfect for not being strapped to shaping only along the vertex normals, as seems to happen in the painting room.
      E - I've had some crashes. How are stability and performance for sculpting lately? This is the kind of thing that can make one feel like a fool afterwards.
      F - How, and for what, do you mix Voxel and Surface modes? I've noticed that sometimes switching to Voxels is the only way to fix a messy mesh back into a single surface shell.
      G - Are there masks for voxels, so edits can be isolated?
      Cheers
  8. For what it's worth, here is my latest effort at laying down some topology concepts. Cheers http://oneupontheroc...bege.com/?p=667
  9. The most important thing for this type of wiki, more than the manual side, is really the workflows and the workarounds for the quirky, unintuitive brick walls in the creative process. Cheers
  10. Congrats indeed. Brush spacing set to 1% now works much, much faster and behaves more like Photoshop does. It was overdue. Good to see you always keep pushing forward. Best wishes
  11. Has anyone here ever used an interpolating subdivision scheme? Wikipedia. Here's the difference: [images: Approximation vs. Interpolation]. Imagine, just imagine, if the cage you model kept the SAME positions when you freeze the mesh... Imagine how the retopology mesh would not bulge inward when subdivided, since the retopo points would stay in the same place on the subdivision surface, just like they did in the sculpting app!! I don't know of any application using an interpolating scheme, do you? I'm sure it would have its issues and differences, but I would love to toy with it (a small toy sketch of the difference is appended after this post list). Cheers
  12. SOLVED WIREFRAMES! I was afraid that if I checked 'Weld Vertices' on import it would close my edges and change the normals, changing the smoothing. That didn't happen: it looks just as if the model were unwelded, and when I export it the polygons are unwelded again. Just what I wanted. Case closed.
  13. Ahh, makes sense. A multiplier that would apply across all layers would also be cool, but this does the trick, thank you. Now I just need to find a way to import a normal map as depth into a layer. If you already have a handle on it, go ahead and do it; I would have to spend some time figuring out how. Still getting the omitted wireframe lines. Cheers
  14. A solution, a problem, a question =)
      ))) Solution
      Thanks for the answers, Andrew and AbnRanger. The Bevel is not that hidden; I went down that path in LightWave for a while, until I quit it completely. It's just NOT a good solution, for two reasons:
      1 - If you already have the UVs done, beveling will destroy the UVs and the compatibility with any other mesh sharing the same UVs.
      2 - The averaging that happens on the vertex normals after beveling leaves flat areas less flat. They look a bit flat, but in truth they are bulgy.
      The only solution I found that both keeps the UVs intact and fills the vertex normal gap while preserving good flat areas would be something like the above, but that's a lot of work... http://i153.photobucket.com/albums/s202/animatics/Lightwave/Polycount-Hard-Angles-C.png
      So I moved on to a bit more serious baking with xNormal, where there is a cage that averages the bake projections while respecting the vertex normal map. That way I get no gaps on hard edges, because the bake projection goes all the way around the hard edge, and I get well-defined flat shading in the normal map (a rough sketch of the cage idea is appended after this post list). Pilgway should really consider updating the baking system to use a cage if it wants 3D-Coat users to have clean, controllable bake results, at least for the retopo-room bake that normally deals with voxels.
      ))) Problem
      Now I have another issue. The attachment shows two wireframe views: the first from a previous version of my model, the second from the current version. Both have open edges in roughly the same places, with small differences, but the current model shows lots and lots of wireframe omission. It happens on open edges, yet the previous model had no problems. Any tips? Using version 3.5.24.
      ))) Question
      Is there a way to make the maximum blend of normal map painting work per stroke rather than per layer properties? (Attachment.) Sometimes you want to add strokes in the same layer, or even a different layer, and I don't want crossings to end up with 2x the height. Maybe a multiplier for crossings would be great; it would let you control whether the height at crossings becomes 2x or less.
      Thanks for any light on the subject. Cheers
  15. Hi, I noticed that I don't get the smoothing I want in renders. I'm probably doing something wrong. Left is what I get, right is what I want. If my OBJ has a vertex normal map / smoothing groups in it, will they be used in the baking process? I don't see any smoothing groups option on import to the retopo room, like I see in the painting room. How could I get flat surfaces in my renders like I have on the voxel model? Thank you and cheers
  16. SG_AmbOcc is exactly the node I used in my previous post. Dpkit Curvatures is the other one.
  17. More on this: I think what the Merge Voxels → Retopo → Per-Pixel Painting step calls "Baked Occlusion" is in fact curvature/cavity and not really occlusion. Image attached. Occlusion is about proximity to other mesh, while curvature is about the angle made with the neighbouring mesh. They give two different kinds of information, and it's nice to be able to bake both. 3D-Coat's use of curvature with "Modulate 2x" is very interesting, since it preserves the black and white points while the grey is replaced by color; ambient occlusion, on the other hand, would normally be blended with "Multiply", darkening the occluded areas (the blend arithmetic is sketched after this post list). Cheers
  18. This! That's the only thing I have against voxels: you can't just brute-force extract your work from 3D-Coat, like you would with Sculptris/ZB/MeshMixer, and bake the maps somewhere else. Also, a way to bake the lighting onto the model (meaning the lighting you get in the Render tab) would be a win! Looking forward to trying 3.5.21 tomorrow
  19. Is there a way to import a model from Meshmixer or Sculptris that has many different levels of detail and keep those levels with LiveClay? Right now when I import a model like that, unless I go to enormous amounts of voxel resolution the fine details are gone =( Cheers
  20. I have to complain: my post here has nothing to do with displacements. It's only about getting a retopo mesh to have the correct shape when subdivided in the main 3D app, which doesn't happen right now because the retopo mesh is stuck to the subdivided reference shape, so your object in the main 3D app won't have the same shape as the one in 3D-Coat. I was only asking that, after retopo, the vertices could be brought to the cage position instead of the subdivision position, for correct meshes, especially light ones (a toy version of this calculation is sketched after this post list). The focus is not on the divisions, but on the deformation that SubD applies to a cage. http://wscg.zcu.cz/wscg2006/Papers_2006/Full/B89-full.pdf Anyway, Andrew's implementation already does this to some extent. It's not perfect, but it's there. Cheers
  21. And how is that? When you subdivide the object you are already distorting the SDS lvl1.
  22. Great! Looking forward to seeing that. I'm thinking about how to explain my idea for take 2 and will post it later. Hopefully, by the time I post it, Andrew will again already have posted a new implementation covering it. That's how fast he is =P
  23. Hey there. I saw the new auto-retopo method, and while I think it is very impressive, there is also room for improvement. I have some ideas that I would like to discuss here in takes.
      Take 1 - Subdivision Deformation
      One thing I noticed when you sculpt, make a retopo and bring it into LightWave (or any other main 3D package) is that your retopo mesh will not have the exact same shape as your sculpt in 3D-Coat once you apply subdivision to it, especially if the retopo mesh is light, because the SubD deformation is not taken into account in the retopo process.
      The current workaround is to increase the density of the retopo mesh, i.e. increase the number of polygons, to make that SubD deformation pretty much null, although it is still there (a toy illustration of this shrinkage is appended after this post list). So my suggestion would be for 3D-Coat to keep two meshes going at the same time during retopo: a working mesh that sticks to the sculpt shape and, calculated from it, an output mesh (the SubD cage). The output mesh is the one to be exported to the main 3D package, and when subdivision is applied to it, it will look like the 3D-Coat sculpt without the need for dense meshes.
      What do you think? No need, or does it make perfect sense? Is it possible to do a reversed SubD calculation? If I gave you the 'Intended Shape' in the picture below and told you to do a retopo, what kind of mesh would you give me back from the 'Faces State' line in the picture: the first or the last? It is true that even for animation you sometimes need dense meshes, but it would be nice to be able to choose and not get stuck with dense meshes just so the shape matches in the primary 3D package. Cheers
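A note on post 11: below is a minimal 1D sketch, in Python, of the difference between an approximating scheme (Chaikin corner cutting) and an interpolating one (the classic 4-point scheme) on a closed polyline. It's only a toy under those assumptions, not anything from 3D-Coat or this thread, and the function names are made up for the example.

```python
import numpy as np

def chaikin(points):
    """Approximating: every new point is a blend, the originals are discarded."""
    out = []
    n = len(points)
    for i in range(n):
        p, q = points[i], points[(i + 1) % n]
        out.append(0.75 * p + 0.25 * q)
        out.append(0.25 * p + 0.75 * q)
    return np.array(out)

def four_point(points):
    """Interpolating: originals are kept, new midpoints use the (-1, 9, 9, -1)/16 mask."""
    out = []
    n = len(points)
    for i in range(n):
        p0 = points[(i - 1) % n]
        p1 = points[i]
        p2 = points[(i + 1) % n]
        p3 = points[(i + 2) % n]
        out.append(p1)                                   # old point survives as-is
        out.append((9 * (p1 + p2) - (p0 + p3)) / 16.0)   # inserted midpoint
    return np.array(out)

cage = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
approx = chaikin(chaikin(cage))
interp = four_point(four_point(cage))

# Does the subdivided curve still pass through the first cage corner?
print(any(np.allclose(p, cage[0]) for p in approx))   # False: pulled away from it
print(any(np.allclose(p, cage[0]) for p in interp))   # True: corner is preserved
```

The interpolating curve keeps passing through every cage point after each round, which is exactly why a retopo cage under such a scheme would stay where it was snapped on the sculpt.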
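On the cage baking described in post 14: a rough sketch of the idea, assuming the usual xNormal-style setup where each ray starts on an inflated cage and shoots back toward the low-poly surface. `raycast_high_poly` and `flat_high_poly` are hypothetical stand-ins for a real intersection query; this is not 3D-Coat's or xNormal's code.

```python
import numpy as np

def bake_texel(low_pos, low_normal, cage_pos, raycast_high_poly):
    """Sample the high-poly normal for one texel, cage-style.

    low_pos     -- position on the low-poly surface for this texel
    low_normal  -- the low-poly shading normal (possibly split at a hard edge)
    cage_pos    -- corresponding position on the inflated cage
    raycast_high_poly(origin, direction) -- hypothetical helper returning the
                                            hit normal, or None on a miss
    """
    origin = cage_pos                        # start outside the surface, on the cage
    direction = low_pos - cage_pos           # shoot back toward the low-poly point
    direction = direction / np.linalg.norm(direction)
    hit_normal = raycast_high_poly(origin, direction)
    if hit_normal is None:
        return low_normal                    # ray missed: keep the low-poly normal
    return hit_normal

# Dummy "high poly": a plane at z = 0 facing +Z, just so the sketch runs.
def flat_high_poly(origin, direction):
    return np.array([0.0, 0.0, 1.0]) if direction[2] < 0 else None

print(bake_texel(
    low_pos=np.array([0.0, 0.0, 0.0]),
    low_normal=np.array([0.0, 0.0, 1.0]),
    cage_pos=np.array([0.0, 0.0, 0.2]),      # cage pushed out along an averaged normal
    raycast_high_poly=flat_high_poly,
))   # -> [0. 0. 1.]
```

Because the cage positions vary continuously across a hard edge even when the low-poly vertex normals are split, neighbouring texels on either side of the edge get coherent ray origins, so the projection sweeps around the edge instead of leaving a gap.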
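On the blend modes mentioned in post 17: a small sketch of the arithmetic as I understand the standard Photoshop-style definitions, with all values assumed to be in [0, 1]. The function names are mine; this is not code from 3D-Coat.

```python
def multiply(base, ao):
    """AO blended with Multiply: even neutral grey (0.5) darkens the base."""
    return base * ao

def modulate_2x(base, curvature):
    """'Modulate 2x' (multiply, times two): mid-grey leaves the base untouched,
    brighter curvature values brighten it, darker values darken it."""
    return min(1.0, 2.0 * base * curvature)

print(multiply(0.6, 0.5))      # 0.30 -- darkened even by neutral grey
print(modulate_2x(0.6, 0.5))   # 0.60 -- unchanged by neutral grey
print(modulate_2x(0.6, 0.9))   # ~1.0 (clamped) -- bright edges pop
print(modulate_2x(0.6, 0.1))   # 0.12 -- dark cavities stay dark
```

This matches the observation in the post: mid-grey curvature is neutral and lets the base color through, while the dark and bright ends of the map still read clearly.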
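On post 20's request to bring the retopo vertices to the cage position: a 1D toy, assuming cubic B-spline subdivision of a closed polyline as a stand-in for Catmull-Clark. For that scheme the limit stencil is (1, 4, 1)/6, so solving the cyclic system below gives a cage whose subdivided curve passes exactly through the points that were snapped to the sculpt. It's only a sketch of the math, not how 3D-Coat or the linked paper does it.

```python
import numpy as np

def cage_from_targets(targets):
    """Solve (P[i-1] + 4*P[i] + P[i+1]) / 6 = T[i] for the cage points P."""
    n = len(targets)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i - 1) % n] = 1 / 6
        A[i, i]           = 4 / 6
        A[i, (i + 1) % n] = 1 / 6
    return np.linalg.solve(A, targets)

# "Sculpt" points the retopo was snapped to.
targets = np.array([[0, 0], [4, 0], [4, 4], [0, 4]], dtype=float)
cage = cage_from_targets(targets)

# Limit positions of the cage under cubic B-spline subdivision.
limit = (np.roll(cage, 1, axis=0) + 4 * cage + np.roll(cage, -1, axis=0)) / 6
print(np.allclose(limit, targets))   # True: the subdivided shape hits the sculpt
```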
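On the "Take 1" shrinkage in post 23: the same 1D setting run forward, showing how a coarse cage whose vertices lie exactly on the "sculpt" (here, a unit circle) pulls inward when subdivided, and how densifying the cage only shrinks the error rather than removing it. Again, cubic B-spline subdivision stands in for Catmull-Clark; none of this is 3D-Coat code.

```python
import numpy as np

def subdivide(points):
    """One round of cubic B-spline subdivision on a closed polyline."""
    out = []
    n = len(points)
    for i in range(n):
        p0, p1, p2 = points[(i - 1) % n], points[i], points[(i + 1) % n]
        out.append((p0 + 6 * p1 + p2) / 8.0)   # repositioned old vertex
        out.append((p1 + p2) / 2.0)            # inserted edge vertex
    return np.array(out)

def circle_cage(k):
    """A k-point cage whose vertices lie exactly on the unit-circle 'sculpt'."""
    t = np.linspace(0, 2 * np.pi, k, endpoint=False)
    return np.stack([np.cos(t), np.sin(t)], axis=1)

for k in (4, 8, 16, 32):
    pts = circle_cage(k)
    for _ in range(5):
        pts = subdivide(pts)
    worst_gap = np.max(np.abs(np.linalg.norm(pts, axis=1) - 1.0))
    print(k, round(worst_gap, 4))   # the gap shrinks as the cage gets denser
```

A dense enough cage makes the gap negligible, which is the brute-force workaround described in the post; keeping a separate output cage (or solving the inverse system as in the previous sketch) would remove it without the extra polygons.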