3DCoat Forums

Posts posted by ChadCapeland

  1. So using voxels will always be vastly less efficient vs regular polygonal methods in terms of overall geometry achievable on current hardware?

    No, it depends on the ratio of surface area to volume, and the range of frequencies.

    Take my suggestion of an option to remove the polygonal skin from the voxel room and work directly on the levelset (as I understand it, there are several alternative methods of smoothing a voxel volume that don't use a polygonal skin): would this be more resource-friendly and allow us to work at an additional subdivision level beyond what is currently possible?

    You would still need SOME way to render the levelset. A raymarcher would work; it would be more memory-efficient than polygon skinning, but slower, especially when the levelset is not being updated but the view is changing. If you are rotating or zooming or whatever, the polygon skin is very fast, since the conversion from voxels to polygons is cached and the polygon rendering is done in hardware.
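
    For illustration only, a minimal sphere-trace loop over a sampled distance grid might look like the sketch below (the grid layout, resolution, and sampling scheme are my own assumptions, not 3D-Coat's internals). Every camera move re-runs this loop per pixel, which is why the cached polygon skin wins during navigation:

        #include <cmath>
        #include <vector>

        // Hypothetical dense signed-distance grid; 3D-Coat's real layout is not public.
        struct LevelSet {
            int n = 0;                 // grid resolution per axis
            std::vector<float> d;      // signed distances, n*n*n values

            // Nearest-neighbor sample in the unit cube; trilinear would look smoother.
            float sample(float x, float y, float z) const {
                int i = (int)(x * n), j = (int)(y * n), k = (int)(z * n);
                if (i < 0 || j < 0 || k < 0 || i >= n || j >= n || k >= n)
                    return 1e9f;       // outside the volume: "very far away"
                return d[(k * n + j) * n + i];
            }
        };

        // March a ray until the distance field crosses zero (the isosurface).
        bool raymarch(const LevelSet& ls, float ox, float oy, float oz,
                      float dx, float dy, float dz, float& tHit) {
            float t = 0.0f;
            for (int step = 0; step < 256; ++step) {
                float dist = ls.sample(ox + t * dx, oy + t * dy, oz + t * dz);
                if (dist < 1e-4f) { tHit = t; return true; }  // hit the surface
                t += std::fmax(dist, 1e-3f);                  // sphere-trace step
                if (t > 2.0f) break;                          // marched past the volume
            }
            return false;
        }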

    This is exactly the stuff I am trying to find out. I realise voxels have many advantages, but if we won't be able to get much finer control over surface details at some point, I think it's important to know this in advance so people don't get their hopes up too much. If voxels will be limited to mid-to-high-frequency details, with the final touches being done in the Paint room or another application, then I can live with that; I just want to know one way or the other what's going to be possible in the near future with the voxel room.

    For me the idea of working on a model from start to finish (including painting when Andrew implements this) entirely in voxels is very exciting, but the overall geometry needed for fine detailing is just not possible at the moment.

    There are really two options. Either break the voxel array up into a hierarchy of some sort, or do only low-frequency work in voxels and do the high-frequency work in polygons or displacements. What's the problem with shifting over to polygon surface modeling for the high-frequency detailing?

    - Chad

  2. Sorry, maybe I should have been clearer in my post. I'm aware of the differences between voxels and polygons (but don't forget 3D-Coat uses a polygonal skin to display the volume, so poly count directly affects performance).

    No, voxel count + polygon count affects performance. In ZBrush, you are only worried about polygon count. If you want to compare apples to apples, don't add the voxel overhead; compare the two when using only polygons. The mesh shown is only a proxy of the underlying levelset surface. The SDK allows you to export this array to your rendering software so you can render the levelset directly. Zero polygons.

    - Chad

  3. 1: Would switching to a pure voxel render solution be more efficient (in terms of performance) and allow us to scale object density up further than the current polygon skin render? I know this would result in discrete 'steps' between each voxel, but if we could get the density high enough I think it could work very well.

    2: If voxel rendering is not a valid option, could the existing 'polygon skin' method be optimized to allow us to work on 60-80m+ poly objects to achieve the same detail level as ZBrush does at ~25m?

    It's not the render that is the issue, it's the data structure. Voxels (a level set) cannot be compared to polygons, ESPECIALLY when you have something like a sphere, which is a pretty ideal case for polygons but a huge waste for voxels. With voxels, all the voxels away from the surface still have to be stored in memory (unless there is a hierarchical data structure) and you aren't using them. Consider: you could create dozens or even hundreds of concentric spherical shells and not use any more memory with voxels, but if you used polygons you would have dozens or hundreds of times more polygons to store and process.

    Polygons do better on objects where the ratio of surface to volume is small: spheres, cubes, etc. Voxels do better when the ratio of surface to volume is high: sponges, trees, etc. Polygons also do better when there is a high variance of frequencies; voxels do better when the frequencies are even.

    Try making a sponge with voxels and polygons and see how the change in data structure makes the voxels so much faster and lighter in memory consumption.
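
    As a sketch of why the data structure dominates here (the brick size and layout are my assumptions, not 3D-Coat's actual structure), a sparse, blocked grid allocates storage only for bricks near the surface, so a sponge's huge surface stays cheap while a dense array would pay for every voxel in the bounding box:

        #include <cstdint>
        #include <cstdio>
        #include <unordered_map>

        // Hypothetical sparse brick grid: only 8x8x8 bricks that intersect the
        // narrow band around the surface get allocated; empty and deep-interior
        // space costs nothing.
        constexpr int B = 8;                       // brick edge length
        struct Brick { float d[B * B * B]; };      // distances inside the brick

        struct SparseLevelSet {
            std::unordered_map<uint64_t, Brick> bricks;   // brick index -> storage

            // Pack non-negative brick coordinates into one key (assumes < 2^21 each).
            static uint64_t key(uint64_t bx, uint64_t by, uint64_t bz) {
                return (bx << 42) | (by << 21) | bz;
            }
            float read(int x, int y, int z) const {
                auto it = bricks.find(key(x / B, y / B, z / B));
                if (it == bricks.end()) return 1e9f;  // unallocated: far from surface
                return it->second.d[((z % B) * B + (y % B)) * B + (x % B)];
            }
        };

        int main() {
            SparseLevelSet ls;
            ls.bricks[SparseLevelSet::key(0, 0, 0)] = Brick{};  // one brick near the surface
            std::printf("allocated bricks: %zu\n", ls.bricks.size());
            return 0;
        }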

    - Chad

  4. What does sound of use, as Frankie pointed out, is using it for masking and such. Very wicked tools that would not require a lot of RAM to implement.

    The way ClayTools does masking, it's basically an on-or-off thing. If you could implement segmentation of the voxels based on a scalar value, then you could assign properties to the voxels.

    So you could have a segment of "skin" and a segment of "bone". The bone voxels would be very stiff, while the skin ones would be stretchy and flexible. So you could push the skin into the bone, but the bone wouldn't move. Or you could push the top of the bone with enough force, and it would move the ENTIRE bone, not just the point where you touched it. With an 8-bit scalar, you could define 256 unique "materials" for your object.

    And when you converted the voxels to an isosurface mesh, the texture vertices could have the nearest segmentation scalar assigned to them in a segmentation channel, giving you automatic masking: you would have a mask for "hair", "skin", "eyeball", "shirt", "scar tissue", "teeth", etc. That would make painting textures much easier.

    Also, you could choose to extract a mesh of JUST the "shirt" channel or whatever, so you could get multiple meshes out of your main voxel object.
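
    A rough sketch of what that could look like (the struct layout and names are hypothetical, not anything 3D-Coat exposes today): one extra byte per voxel indexes a table of up to 256 materials, and the same byte doubles as the mask when extracting a single segment:

        #include <cstdint>
        #include <vector>

        // Hypothetical voxel carrying one extra 8-bit segment ID next to the density.
        struct Voxel {
            int16_t density;   // the existing 16-bit level-set value
            uint8_t segment;   // 0..255: "skin", "bone", "shirt", ...
        };

        // Per-segment physical properties for the sculpting response.
        struct Material {
            float stiffness;   // 1.0 = rigid like bone, 0.0 = fully stretchy
        };

        // Scale a sculpting push by the touched voxel's material stiffness:
        // stiff segments resist local denting.
        float effectivePush(const Voxel& v, const std::vector<Material>& mats,
                            float toolForce) {
            return toolForce * (1.0f - mats[v.segment].stiffness);
        }

        // Pull out only the voxels of one segment, e.g. just the "shirt",
        // so it can be meshed on its own.
        std::vector<Voxel> extractSegment(const std::vector<Voxel>& grid, uint8_t id) {
            std::vector<Voxel> out;
            for (const Voxel& v : grid)
                if (v.segment == id) out.push_back(v);
            return out;
        }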

    - Chad

  5. This is very interesting. I would like to hear from artists what they would want to use this for. What kinds of projects would benefit from painting voxels? Would you bake the voxel painting to a lower-poly mesh?

    Two things come to mind right away.

    1) You could use real 3D maps. Either explicit OR procedural data could be stored in the voxels and deformed with the voxels. So you could have a 3D skin texture that gets deformed correctly as you sculpt. It wouldn't have to be RGB data, either.

    2) You could extrapolate 3D maps from 2D maps. The imported mesh could transfer its textures, either from UV maps or from per-texture-vertex colors, to a voxel array. That way you could modify an already-textured model and preserve the original colors and UVs.
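
    As a hedged sketch of idea 2 (the types and the nearest-voxel splat are my own illustration, not 3D-Coat's pipeline), transferring per-texture-vertex colors into a color volume could start like this:

        #include <cstdint>
        #include <vector>

        struct Color { uint8_t r, g, b; };

        // Hypothetical color volume sharing the sculpt grid's resolution.
        struct ColorGrid {
            int n;
            std::vector<Color> c;                           // n*n*n entries
            Color& at(int x, int y, int z) { return c[(z * n + y) * n + x]; }
        };

        struct TexturedVertex { float x, y, z; Color color; };

        // Splat each imported vertex's color into the nearest voxel. A real
        // transfer would then diffuse the colors through the narrow band so the
        // whole surface shell is covered, and the colors would deform with the
        // sculpt from then on.
        void bakeVertexColors(ColorGrid& grid, const std::vector<TexturedVertex>& verts) {
            for (const TexturedVertex& v : verts) {
                int i = (int)(v.x * grid.n), j = (int)(v.y * grid.n), k = (int)(v.z * grid.n);
                if (i >= 0 && j >= 0 && k >= 0 && i < grid.n && j < grid.n && k < grid.n)
                    grid.at(i, j, k) = v.color;
            }
        }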

    The reason you feel like you are sculpting in wax, clay, bronze, etc. is a limitation, not a design intent. If you could model a human face and feel like you were working with a real live ACTOR and not a plaster MAQUETTE, wouldn't that be better?

    - Chad

  6. If you were to store not just the density value in each voxel but also an X, Y, Z vector, and you initialized the X, Y, and Z values to the INITIAL location of the voxel, could you not record the local per-voxel vectors of the sculpting tools? A pinch or a smear would then add its vector to the voxels, so that they recorded the changes to the voxel array.

    It wouldn't be perfect, especially if it is only 16-bit, but it might be enough for some modest modifications of a mesh or voxel dataset. The advantage, of course, is that when you are done, you export not the mesh but the 3D vector field representing the transformation. This could then be applied to deform the original mesh. No need to re-topologize!

    The other advantage is that it can be applied to ANY mesh, so you could deform all of the morph targets, or you could do a volumetric sculpt on one mesh, and when someone says "Oh wait, I made a few changes to the model I sent you, here's a new version" you can just apply the transformations to the new mesh.

    This would also work for voxel data, including paint. So you record XYZ (three channels) but can apply that to dozens of channels, like opacity, diffuse color, specular color, shininess, bump, self-illumination, diffuse level, etc. You could save a lot of memory by only sculpting the XYZ and reusing it on multiple, higher-frequency, more complex maps.
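
    A minimal sketch of the whole idea, assuming a dense grid and nearest-voxel lookup (a real tool would interpolate, and all the names here are mine):

        #include <vector>

        struct Vec3 { float x, y, z; };

        // Hypothetical deformation field: one vector per voxel, initialized to the
        // voxel's own rest position, so (stored - rest) is the displacement.
        struct DeformField {
            int n;
            std::vector<Vec3> p;                  // n*n*n recorded positions
            explicit DeformField(int res) : n(res), p(res * res * res) {
                for (int z = 0; z < n; ++z)
                    for (int y = 0; y < n; ++y)
                        for (int x = 0; x < n; ++x)
                            p[(z * n + y) * n + x] =
                                { (x + 0.5f) / n, (y + 0.5f) / n, (z + 0.5f) / n };
            }
            const Vec3& at(int x, int y, int z) const { return p[(z * n + y) * n + x]; }
        };

        // Deform ANY mesh: read the nearest voxel's recorded position and add its
        // displacement to the vertex. Sculpt tools would have edited the stored
        // vectors; trilinear interpolation would make this smoother.
        void applyField(const DeformField& f, std::vector<Vec3>& meshVerts) {
            for (Vec3& v : meshVerts) {
                int i = (int)(v.x * f.n), j = (int)(v.y * f.n), k = (int)(v.z * f.n);
                if (i < 0 || j < 0 || k < 0 || i >= f.n || j >= f.n || k >= f.n)
                    continue;                     // vertex outside the sculpt volume
                Vec3 rest = { (i + 0.5f) / f.n, (j + 0.5f) / f.n, (k + 0.5f) / f.n };
                const Vec3& cur = f.at(i, j, k);
                v.x += cur.x - rest.x;            // displacement = current - rest
                v.y += cur.y - rest.y;
                v.z += cur.z - rest.z;
            }
        }

    Because the field is independent of any particular mesh, the same applyField call works on morph targets or on a revised version of the model, exactly as described above.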

    - Chad

  7. Yes. Exactly. Export/import/realtime manipulations/scientific modeling.

    If it were possible for us to build a "Sculpt" module based on 3D Coat into our scientific software, that would be very interesting indeed. We'd have to work something out with the licensing of course, but that shouldn't be hard.

    But if we have to stop processing the dataset, export it, open the dataset in 3DC, sculpt, export the dataset, then re-open it in our other software, that's less interactive.

    - Chad

  8. 2) Paint over voxels in volume. Not easy to implement because:

    - every voxel takes 2 bytes now (16-bit precision). Color is 3 or even 4 bytes more. Much more memory consumption, less speed because I need to preserve color (3 additional channels) during all operations. + layers... uff

    So if you have any idea, please share it there.

    Sorry for jumping in late on this...

    You don't HAVE to have 3 additional channels, nor do they have to be 16-bit. You could have the user specify how many channels are needed and at what depth. Adding an 8-bit scalar to the existing voxel structure would add only 50% more memory usage. Even if you locked the depths down, adding a single 16-bit scalar would only double the memory size. The speed hit would be smaller, since the extra channel doesn't require reconstructing the isosurface. Also, the operation is identical on the second scalar, so it's just moving another channel of data through the same instructions, which should be very efficient.
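
    The arithmetic is easy to sanity-check; the 512-voxels-per-axis resolution below is just an example figure:

        #include <cstdio>

        int main() {
            const long long n = 512;                   // voxels per axis (example only)
            const long long voxels = n * n * n;
            const long long baseBytes = voxels * 2;    // existing 16-bit density
            const long long plus8bit  = voxels * 3;    // + one 8-bit scalar  (+50%)
            const long long plus16bit = voxels * 4;    // + one 16-bit scalar (x2)
            std::printf("density only : %lld MB\n", baseBytes >> 20);
            std::printf("+ 8-bit chan : %lld MB\n", plus8bit  >> 20);
            std::printf("+ 16-bit chan: %lld MB\n", plus16bit >> 20);
            return 0;
        }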

    Regarding memory, CUDA hardware currently runs up to 4GB in size in the Tesla range, and the GeForce line will soon be at 1.7GB. As applications for CUDA grow, I'm sure NVidia would love to sell you hardware with even more memory, so I don't think it is an impossible problem.

    - Chad

  9. I will make a Volumetric SDK before release, so everyone will be able to use 3DC's volumetric capabilities to export/import/manage/modify. So you will be able to use 3DC for scientific visualisation using Visual Studio.

    Meaning, you will have an SDK so that we can get voxel data in/out of 3DC, or that you will allow us to implement portions of 3DC in our scientific applications?

    - Chad

  10. Do you use any haptic device?

    We have a Phantom Omni. It uses the same SDK as the other Phantoms. Unfortunately, I don't know of any third-party APIs for supporting haptic devices in general, so it seems like you have to support each manufacturer separately.

    - Chad

  11. Export is not a trivial thing in VS because a raw mesh is not always useful. So if you have ideas, please write them there.

    Anyone with a volumetric rendering software like a voxel raymarcher or raytracer could render the voxels directly, provided they have access to your voxel data format. They can convert the data themselves if the spec is documented. Likewise, they could import their data into 3D Coat for editing.
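
    For instance, a minimal self-describing dump format (hypothetical, not 3D-Coat's actual spec) would only need a small header ahead of the raw density array:

        #include <cstdint>
        #include <cstdio>
        #include <vector>

        // Hypothetical minimal dump format -- NOT 3D-Coat's actual file spec, just
        // the kind of self-describing header that lets a third-party raymarcher or
        // raytracer consume the volume directly. A real spec would also pin down
        // endianness and channel layout.
        struct VolumeHeader {
            char     magic[4];        // e.g. "VOX1"
            uint32_t nx, ny, nz;      // grid resolution
            uint32_t bytesPerVoxel;   // 2 for the current 16-bit density
            float    origin[3];       // world-space placement of the grid
            float    spacing;         // voxel size in world units
        };

        bool writeVolume(const char* path, const VolumeHeader& h,
                         const std::vector<int16_t>& density) {
            FILE* f = std::fopen(path, "wb");
            if (!f) return false;
            std::fwrite(&h, sizeof h, 1, f);
            std::fwrite(density.data(), sizeof(int16_t), density.size(), f);
            return std::fclose(f) == 0;
        }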

    - Chad
