Eric Cosky

Member
  • Content count

    87

Community Reputation

7 Neutral

About Eric Cosky

  • Rank
    Neophyte

Contact Methods

  • Website URL
    http://www.cosky.com

Profile Information

  • Gender
    Male
  1. It would be really useful if Sketch had an option to use greyscale images as a bump map (with minimum/maximum thresholds) instead of just an on/off stencil, so I could paint some elevation details in Photoshop before importing. Or perhaps there is another way to do this in 3DC already?
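The thresholded bump-map idea can be sketched in a few lines. This is only an illustration of the remapping I have in mind; the function name and threshold values are my own, not anything in 3DC or Sketch:

```python
import numpy as np

def grey_to_height(grey, lo=0.2, hi=0.8, max_height=1.0):
    """Remap greyscale values (0..1) to a bump/height field.

    Values at or below `lo` flatten to 0, values at or above `hi`
    saturate at `max_height`, and everything in between scales
    linearly -- one plausible reading of "minimum/maximum thresholds".
    """
    grey = np.asarray(grey, dtype=np.float64)
    t = np.clip((grey - lo) / (hi - lo), 0.0, 1.0)
    return t * max_height

# Tiny 1x4 "image": below the low threshold, at it, midway, above the high one.
heights = grey_to_height(np.array([[0.1, 0.2, 0.5, 0.9]]))
```

With a stencil you only get the clipped 0/1 extremes; the linear ramp between the thresholds is the part an on/off stencil can't express.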
  2. Just have the original high-res painted mesh visible in the Paint room and use Merge with NM (per pixel) with your reduced mesh in the Retopo room. This causes the object in the Retopo room to be copied into the Paint room as an object, baking the texture sampled from the objects currently visible in the Paint room in the process. The docs do a poor job of explaining this, but that seems to be how it works. I'm pretty sure there are other ways to bake textures in 3DC, but I've never looked into it much; it was hard enough to figure out how to do just this. It's really hard to tell from the docs what is and isn't possible with respect to baking textures. It seems to be mentioned only in passing, as if people just needed to be reminded of something they already knew how to do, rather than treated as a significant feature that warrants a detailed explanation of what is possible and specifically how to use it.
  3. I'm no expert at 3DC, but since nobody else responded I'll try; take all this with a big rock of salt because I'll probably be a bit off somewhere. I think what might work best for LOD is to export your mesh (don't bother with textures/UVs) to another program that has poly-reduction features (which is probably most of them). I use Softimage, where the Poly Reduce command lets me dial down the poly count while keeping the general form. Then export that model to OBJ. In 3DC, with your high-res model already painted and loaded in the Paint room, go to the Retopo room and import the OBJ as a reference mesh. Unwrap the new topology (i.e., create a new, different UV set), taking the time to manually place seams if you want, then use the "Merge with NM (per pixel)" command to bake the texture found on the object in the Paint room to the new UV set. Obviously you'll want a suitably smaller texture for this lower-res UV set so the lower LOD model provides better performance (smaller textures = better perf = why you do a low LOD in the first place). Check my thread history for a few links on related topics; this is something I've had some trouble figuring out myself, and people were kind enough to help out.
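As an aside, the "smaller textures = better perf" part is easy to quantify: halving the texture edge per LOD level cuts texture memory by 4x each step. A trivial sketch (the base size of 2048 and RGBA8 assumption are just examples of mine):

```python
def lod_chain(base_texture=2048, levels=4):
    """Texture edge sizes and memory (RGBA8 bytes) per LOD level,
    halving the edge each step; each level costs 1/4 the memory
    of the previous one."""
    sizes = [base_texture >> i for i in range(levels)]
    memory = [s * s * 4 for s in sizes]
    return sizes, memory

sizes, memory = lod_chain()
```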
  4. Hi, I recently created a model that uses a mirrored UV set, and it looked as intended in 3DCoat, with both sides of the model showing the normal maps correctly. When exported to Softimage, I had difficulty getting the normal maps to look correct, and I eventually found this to be the result of tangents not getting set up or otherwise not reaching Softimage correctly (the apparently typical problem of mirrored tangents flipping the normal map shading angle). I have since been told that the OBJ format doesn't support tangent data, which would explain part of the problem. I eventually wound up exporting half the model and doing the symmetry in Softimage, which allowed it to work correctly. I'd like to avoid this next time around, though, so I'm curious whether anyone has had more success with the FBX or LWO formats while using mirrored UV sets with normal maps. I made several attempts with those, but had the same results as with OBJ. Maybe I missed something, though. Anyone out there have better luck getting mirrored normal maps out of 3DCoat? It would be nice to know if there is a particular format known to export the correct tangent data for mirrored UVs. Thanks for any feedback.
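For anyone curious what the mirrored-tangent problem actually is: the tangent frame of a UV-mirrored triangle has opposite handedness, and that single +/-1 sign is exactly the data a format needs to carry (OBJ has no slot for it). A toy numpy sketch of the standard per-triangle computation, my own illustration rather than 3DCoat's exporter:

```python
import numpy as np

def tangent_handedness(p0, p1, p2, uv0, uv1, uv2):
    """Sign of a triangle's tangent frame (the +/-1 usually stored in a
    tangent's w component). When UVs are mirrored, this sign flips."""
    e1, e2 = p1 - p0, p2 - p0
    du1, dv1 = uv1 - uv0
    du2, dv2 = uv2 - uv0
    r = du1 * dv2 - du2 * dv1              # signed UV-space area
    tangent = (e1 * dv2 - e2 * dv1) / r
    bitangent = (e2 * du1 - e1 * du2) / r
    normal = np.cross(e1, e2)
    return 1.0 if np.dot(np.cross(normal, tangent), bitangent) >= 0 else -1.0

p0, p1, p2 = (np.array([0.0, 0.0, 0.0]),
              np.array([1.0, 0.0, 0.0]),
              np.array([0.0, 1.0, 0.0]))
uv = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
uv_mirrored = [np.array([1.0, 0.0]), np.array([0.0, 0.0]), np.array([1.0, 1.0])]

h = tangent_handedness(p0, p1, p2, *uv)
h_m = tangent_handedness(p0, p1, p2, *uv_mirrored)   # opposite sign
```

If the importer can't receive that sign, it has to guess, and guessing wrong on the mirrored half gives exactly the flipped shading described above.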
  5. Thanks!
  6. I'm trying to figure out a good reason why I would want to use virtual mirror mode instead of plain old symmetry when doing retopo for a model. What's the practical benefit? It seems like it should be pretty similar to regular symmetry mode, and the docs don't even mention this feature. I was doing a retopo with virtual mirror mode enabled (not intentionally) and somehow managed to lose some retopo work when I turned it off; I think it might be because I didn't "Apply symmetry" before doing something else. So now I'm a bit hesitant to use virtual mirror mode, and I'd like to understand it better. Thanks for any comments.
  7. Always like to see more resources like this
  8. Thanks that was the problem
  9. Thanks guys! I knew it had to be simple. One problem, though: I don't actually see the Add Rotation field! Maybe this is a bug in 3.7.10A? Please check out the attached screenshot. I have a point selected, and "Add Rotation" simply isn't there for me. Any ideas?
  10. I've been trying to learn how to use the Curves tool in the voxel room. It looks like a very useful tool, but one thing is really holding me up: I can't figure out how to reliably control the rotation of the object along the spline. Many of the shapes, such as the first "Dragon4.obj", have a spike or other feature that needs to be oriented correctly for a given curve to work. I've tried to use this tool every now and then for a long time and always gave up before figuring this out, so I'm hoping someone can give me a few pointers. I've tried the rotate tool (visible when "apply to whole curve" is set), but it rotates the points on the spline, not the shape extruded along the spline. Often, when I do find a control that has any influence on the rotation of the extrusion, it rotates so fast for even slight changes in the curve point positions that it is uncontrollable, and it seems to be just a side effect of the spline manipulation rather than a specific "rotate the profile" control. I haven't yet seen anything that rotates the extrusion without also moving, rotating, or scaling the spline itself. I'm sure I'm overlooking something simple and fundamental here, because I've seen plenty of examples of people doing things that are clearly the result of well-controlled use of the curve tool. Can someone help me out with a few tips on how to use the curve tool effectively? Thanks
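On the rotation-along-the-spline question: tools that extrude a profile along a curve typically stabilize the profile's orientation with a parallel-transport ("rotation minimizing") frame. I don't know what 3DC's Curves tool does internally; this is just a sketch of the general technique:

```python
import numpy as np

def transport_frames(points, up=(0.0, 0.0, 1.0)):
    """Parallel-transport frames along a polyline: carry the previous
    normal forward, rotating it only by the minimal rotation between
    successive tangents, so the extruded profile doesn't spin freely.
    Assumes `up` is not parallel to the first segment."""
    pts = np.asarray(points, dtype=np.float64)
    tangents = pts[1:] - pts[:-1]
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    n = np.asarray(up, dtype=np.float64)
    n = n - np.dot(n, tangents[0]) * tangents[0]   # project off the tangent
    n /= np.linalg.norm(n)
    frames = [(tangents[0], n)]
    for t_prev, t_next in zip(tangents[:-1], tangents[1:]):
        axis = np.cross(t_prev, t_next)
        s = np.linalg.norm(axis)                         # sin(angle)
        c = np.clip(np.dot(t_prev, t_next), -1.0, 1.0)   # cos(angle)
        if s > 1e-12:
            axis /= s
            # Rodrigues rotation of the normal about `axis`.
            n = n * c + np.cross(axis, n) * s + axis * np.dot(axis, n) * (1.0 - c)
        frames.append((t_next, n))
    return frames

# A 90-degree elbow: the frame turns the corner without spinning.
frames = transport_frames([(0, 0, 0), (1, 0, 0), (1, 1, 0)])
```

The point of the technique is that the profile's "up" direction at each curve point is derived from the previous point's frame, never recomputed from scratch, which is what prevents the wild spinning described above.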
  11. AbnRanger helped me out via IMs, but I wanted to share a few key points that really helped me. These are almost certainly obvious to people more familiar with 3DC than I am, but they were news to me, and discovering them gave me a fuller understanding of what 3DC really is, so maybe this will help someone else.
     * This was a fundamental one for me, and looking back I don't know how I managed to miss it: getting a voxel object triangulated does *not* require the AUTOPO command in the voxel layer's context menu. In the Retopo room you create or modify a polygonal hull around the voxels, which can be subdivided (manually) as necessary to increase triangle resolution where it is needed; then you "merge it with the scene", which seems synonymous with "make it paintable in the Paint room". The manual talks about importing reference meshes and whatnot, but the real power and purpose here is (to me) to create a mesh that conforms to the voxels you have created; there seems to be no need to export anything or refer to a reference mesh. Adding points will cause them to stick to the voxels. When triangles are subdivided, the new points will shrinkwrap to the voxels, allowing you to create a mesh with whatever topology you need that conforms to whatever is in the Voxels room. Once the topology is prepared, the different "Merge" commands in the Retopo menu are used to create the actual paintable mesh in the Paint room. I'll probably continue to use Autopo for the time being because my needs, time, and skills with 3DC are very limited, but I expect to be manually building topologies at some point in the future because it helps with making geometry that behaves better when animated.
     * You have a couple of ways of getting paint layers from an old mesh to a new one based on the original. The right approach seems to depend on whether you can have multiple UV sets, or need a different single UV set to account for topology changes.
     ** When keeping the same, original UV set: export the layers of the original, create the retopo mesh for the new shape, use the original UV set, and import the layers onto the new mesh (I haven't done this, but I think this is how it works). New faces need a new secondary UV set and a separate texture. This doesn't meet my needs, since my output needs a single texture and UV set, but I expect it's not a bad way to go if you can, because the original faces are left unmodified. I haven't tried this beyond a very simple test.
     ** With a single UV set: you need to create a new UV set and then bake the original layers to it. This means having a retopo mesh in the Retopo room for the new shape, and the original layers/model in the Paint room. Create a UV set for the new retopo mesh (in the UV sidebar; auto-seams + unwrap works for me, but you can mark seams manually if you want; note you don't need to enter the UV room). Use "Merge with NM (per pixel)" or similar to get the old paint applied to the new mesh, which will be based on the retopo mesh with the color baked into a single layer. I understand that "Use names that correspond for baking" will cause the original layers to be used, or something along those lines, which should allow you to transfer complex layer setups (I haven't had a chance to try this, though). This method has produced good results for me, but the pixels are not exactly the same because they had to be resampled to the new UV set. For me it looks fine, and this is the approach I wound up going with. There have been some crashes in the past when using this command, but as of today the current version (3.7.06C) is working for me without a crash. There are probably more ways to do what I was originally asking about, but this works for me and I'm happy with the results.
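The "pixels are not exactly the same because they had to be resampled" point can be made concrete: a per-pixel rebake walks every texel of the new layout, maps it back through the shared geometry to the old UV set, and samples the old texture there. Here's a toy single-triangle, nearest-sample version; it's my own illustration of the idea, not 3DC's actual "Merge with NM (per pixel)" code:

```python
import numpy as np

def barycentric(p, a, b, c):
    """2D barycentric coordinates of point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def rebake(old_tex, old_uvs, new_uvs, new_size):
    """For each texel of the new map inside the triangle's new UV
    footprint, map its barycentric coords to the old UV set and
    point-sample the old texture. The resampling step is why rebaked
    pixels never match the originals exactly."""
    new_tex = np.zeros((new_size, new_size), dtype=old_tex.dtype)
    h, w = old_tex.shape
    for y in range(new_size):
        for x in range(new_size):
            p = np.array([(x + 0.5) / new_size, (y + 0.5) / new_size])
            u, v, wgt = barycentric(p, *new_uvs)
            if min(u, v, wgt) < 0:        # texel outside the triangle
                continue
            old_uv = u * old_uvs[0] + v * old_uvs[1] + wgt * old_uvs[2]
            ox = min(int(old_uv[0] * w), w - 1)
            oy = min(int(old_uv[1] * h), h - 1)
            new_tex[y, x] = old_tex[oy, ox]
    return new_tex

old_tex = np.arange(16.0).reshape(4, 4)          # stand-in "painted" texture
tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
new_tex = rebake(old_tex, tri, tri, 4)           # identity relayout
```

Even in this identity case, a different new resolution or shifted UVs would land texel centers between old samples, which is exactly the resampling loss mentioned above.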
  12. I just realized that Retopo's "Use current low poly mesh" simply replaces the current retopo mesh with whatever is in the Paint room. Not much closer to the goal, but at least I get that part now. After some more testing, I'm starting to think I should be able to get a retopo mesh from the modified voxels without exporting an FBX, remove all unrelated groups in the Retopo room (perhaps I could make a new UV set in the Retopo room instead?), and merge the modified/damaged retopo mesh with the scene to get the colors over. This is where it crashes for me, but I think this is the intended workflow; if so, that would be great, so I'm going to wait until the fix for the bake-texture crash is available in another version before spending more time on it.
  13. Hold on a minute, are you implying I can do this process without having to export an FBX for use as a reference mesh, and instead just modify the voxels and somehow get the original paint layers applied to the newly created geometry for those voxels? That is my ultimate goal here, but right now the only way I know how to do this involves a lot of hoop-jumping with exporting to FBX, etc. It would be so much nicer if I could simplify this process a huge amount by just going to the Voxel room, editing the original voxels to make the damaged geometry, and hitting a button or two to have all my layers of paint transferred to the new model wherever it shares surfaces with the original model, leaving unpainted areas where it doesn't. I don't know if that's possible, but it would be my preferred workflow. The only sequence of steps that has worked for me feels pretty cumbersome:
     1. Save the original .3b as a second file, because step 4 below requires deleting the original object before exporting the damaged FBX.
     2. Modify the original voxels to create the damaged shape.
     3. AUTOPO for Per Pixel to get a mesh with a UV set.
     4. Delete the original undamaged model in the Objects window of the Paint room so only the damaged model is present.
     5. Export the resulting damaged model as FBX (it has UVs).
     6. Load the original .3b file with the undamaged model.
     7. Enter the Retopo room.
     8. Import the damaged version's FBX as a reference mesh.
     9. Use "Merge with NM" to bake the original paint layers to a single layer on the new geometry.
(Incidentally, it took me a while to discover that "NM" meant "Normal Map". It would be nice if 3DC didn't use acronyms so often; it only increases the challenge for new users. It's one thing if spelling things out would break the UI layout, but there's plenty of room here.)
I haven't tried the NAMES CORRESPONDENCE yet, but when you said the retopo layer name needs to match the voxel layer name, that told me I'm probably overlooking something significant here, because the imported FBX reference mesh doesn't create any kind of voxel layer with which to discover a matching name. I'm wondering if perhaps "Retopo->Use Current Low Poly Mesh" is related to this somehow. I don't really understand how that menu item relates to all this, because so far it seems like this process applies textures to the "reference model" (currently just an FBX that is visible only as a retopo mesh) using whatever is in the Paint room as the source for color. Where would I specify the low-poly mesh if it isn't the mesh in the Paint room? And if it is the mesh in the Paint room, how would it know not to pull color data from the untextured damage object? Is it as simple as turning off the object's visibility?
I searched for "Use Current Low Poly Mesh" and read every thread that matched; none of them explain what exactly the low-poly mesh is, and the manual does not define the term clearly. I think "low poly mesh" means "the entire mesh visible in the Paint room". It's such a basic term for 3DC, but it isn't clearly specified anywhere from what I can tell, and I just don't know for sure. The manual's definition for the menu item is this: "Use current low-poly mesh: A reference mesh can be imported to retopologize big objects made in another 3D modeling program. They can contain reference to textures. In this case the objects will be colored; color will be used in baking and merging into the scene." This doesn't really suggest anything other than that the "low poly mesh" is the reference mesh. But then, down in the manual for the UV Manager, it says: "Texture Baking Tool. This lets you bake details to a normal or displacement map. This can be used even when the surface topology doesn’t match perfectly between your reference mesh and low-poly mesh,"
which clearly indicates the reference mesh isn't the low-poly mesh. Section 17.0, regarding the Texture Baking tool, says "Here are some detailed steps to use this tool", but they are anything but detailed, or even a set of steps to follow. It's very unclear how to use these tools at all based on what is in the manual. I'll continue pushing related buttons nearly at random to see what happens, though; maybe I'll stumble into something that works the way I'm now hoping it will. It's a bit of a struggle because it seems my assets have triggered a problem in 3DC's texture baking that causes a crash. I submitted repro data to Andrew and he confirmed the problem, so I'm sure it will be fixed soon, but for now it only adds to the challenge here and might be part of the reason I'm unable to answer these questions myself. It could be that these crashes are preventing me from using the ideal workflow; hard to say. Thanks again for your insights.
  14. Good to know, thanks.
  15. The main reason for a single UV set is that this asset is headed to a game engine that performs better with a single set of textures and UVs. I don't use the AppLink to Softimage; instead I rely on FBX for the geometry and then manually hook the textures up to the HLSL shader setup. Softimage is used to rig/animate prior to exporting to the game engine. I've considered making another version of my AppLink script that sets up my specific shader configuration, but I'm not really generating assets at a rate that would justify that effort just yet. In theory I could have done a render map in Softimage to get it down to a single UV set, but that would risk losing some image quality in the process. One thing I now realize would be very useful is if 3DC would bake all the layers individually instead of combining them like it does; I'd be able to make better use of the previous layers to create variations of the damaged asset.
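On baking layers individually vs. combined: once layers are flattened with the usual "over" operator, the per-layer information is unrecoverable, which is why separate per-layer bakes would make variations easier. A minimal sketch (the layer colors and alpha are made-up example values):

```python
import numpy as np

def over(top_rgb, top_a, bottom_rgb):
    """Standard 'over' compositing of one paint layer onto another.
    After flattening, the individual layers can't be reconstructed
    from the result alone."""
    return top_rgb * top_a + bottom_rgb * (1.0 - top_a)

base = np.array([0.2, 0.2, 0.2])    # base-coat layer
rust = np.array([0.6, 0.3, 0.1])    # damage/detail layer at 50% opacity
flat = over(rust, 0.5, base)        # what a combined bake stores
```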