3DCoat Forums

ChadCapeland

Member
  • Posts: 17
  • Joined
  • Last visited
  1. Where do you reference texture resources then? You can't create those in the shader.
  2. No, I mean our own shader from scratch in HLSL. Can we create our own texture resources as well?
  3. Is it possible for us to make our own shaders? What about textures? Ideally, we'd like to apply 3D textures (perhaps from .dds, but we'd be open to directly making the texture resources too) and use them in custom shaders. - Chad Capeland
  4. No, it depends on the ratio of surface area to volume, and the range of frequencies. You would still need SOME way to render the level set. A raymarcher would work (a minimal sketch follows this list), but it may be slower than polygon skinning: more memory efficient, but slower, especially if the level set is not being updated but the view is changing. If you are rotating or zooming or whatever, the polygon skin is very fast, since the conversion from voxels to polygons is cached and the polygon rendering is done in hardware. There are two options, really: either break the voxel array up into a hierarchy of some sort, or do only low-frequency work in voxels and do high-frequency work in polygons or displacements. What's the problem with shifting over to polygon surface modeling for the high-frequency detailing? - Chad
  5. No, voxel count plus polygon count affects performance. In ZBrush, you are only worried about polygon count. If you want to compare apples to apples, don't add the voxel overhead; compare the two when using only polygons. The mesh shown is only a proxy of the underlying level-set surface. The SDK allows you to export this array to your rendering software so you can render the level set directly. Zero polygons. - Chad
  6. It's not the render that is the issue, it's the data structure. Voxels (a level set) cannot be compared to polygons, ESPECIALLY when you have something like a sphere, which is a pretty ideal case for polygons but a huge waste for voxels. With voxels, all the voxels distant from the surface still have to be stored in memory (unless there is a hierarchical data structure), and you aren't using them. Consider: you could create dozens or even hundreds of concentric spherical shells and not use any more memory with voxels, but if you used polygons you would have dozens or hundreds of times more polygons to store and process (some rough numbers are sketched after this list). Polygons do better on objects where the ratio of surface to volume is small: spheres, cubes, etc. Voxels do better when the ratio of surface to volume is high: sponges, trees, etc. Polygons also do better when there is a high variance of frequencies; voxels do better when the frequencies are even. Try making a sponge with voxels and with polygons and see how the change in data structure makes the voxels so much faster and lighter in memory consumption. - Chad
  7. The way Claytools does masking, it's basically an on-or-off thing. If you could implement segmentation of the voxels based on a scalar value, then you could assign properties to the voxels. You could have a segment of "skin" and a segment of "bone": the bone voxels would be very stiff, while the skin ones would be stretchy and flexible. So you could push the skin into the bone, but the bone wouldn't move. Or you could push the top of the bone with enough force that it would move the ENTIRE bone, not just the point where you touched it. With an 8-bit scalar, you could define 256 unique "materials" for your object. And when you converted the voxels to an isosurface mesh, the texture vertices could have the nearest segmentation scalar assigned to them in a segmentation channel, so you would get automatic masking: a mask for "hair", "skin", "eyeball", "shirt", "scar tissue", "teeth", etc. (a data-layout sketch follows this list). That would make painting textures much easier. Also, you could choose to extract a mesh of JUST the "shirt" channel or whatever, so you could get multiple meshes out of your main voxel object. - Chad
  8. Two things come to mind right away. 1) You could use real 3D maps. Either explicit OR procedural data could be stored in the voxels and deformed with the voxels, so you could have a 3D skin texture that gets deformed correctly as you sculpt. It wouldn't have to be RGB data, either. 2) You could extrapolate 3D maps from 2D maps. The imported mesh could transfer its textures, either from UV maps or from per-texture-vertex colors, to a voxel array. That way you could modify an already textured model and preserve the original colors and UVs. The reason you feel like you are sculpting in wax, clay, bronze, etc. is a limitation, not a design intent. If you could model a human face and feel like you were working with a real live ACTOR and not a plaster MAQUETTE, wouldn't that be better? - Chad
  9. If you were to store not just the density value in each voxel, but the density as well as an X, Y, Z vector, and you initialized the X, Y, and Z values to the INITIAL location of the voxel, could you not record the local per-voxel vectors of the sculpting tools? A pinch or a smear would add its vector to the voxels, so they recorded the changes to the voxel array. It wouldn't be perfect, especially if it is only 16 bit, but it might be enough for some modest modifications of a mesh or voxel dataset. The advantage, of course, is that when you are done, you export not the mesh but the 3D vector field representing the transformation. This could then be applied to deform the original mesh. No need to re-topologize! The other advantage is that it can be applied to ANY mesh (a sketch follows this list), so you could deform all of the morph targets, or you could do a volumetric sculpt on one mesh and, when someone says "Oh wait, I made a few changes to the model I sent you, here's a new version", just apply the transformations to the new mesh. This would also work for voxel data, including paint. So you record XYZ (three channels) but can apply that to dozens of channels, like opacity, diffuse color, specular color, shininess, bump, self-illumination, diffuse level, etc. You could save a lot of memory by only sculpting the XYZ and reusing that on multiple, higher-frequency, more complex maps. - Chad
  10. If it were possible for us to build a "Sculpt" module based on 3D Coat into our scientific software, that would be very interesting indeed. We'd have to work something out with the licensing, of course, but that shouldn't be hard. But if we have to stop processing the dataset, export it, open the dataset in 3DC, sculpt, export the dataset, then re-open it in our other software, that's less interactive. - Chad
  11. Sorry for jumping in late on this... You don't HAVE to have three additional channels, nor do they have to be 16 bit. You could have the user specify how many channels, and at what depth, are needed. Adding an 8-bit scalar to the existing voxel structure would add only 50% more memory usage, and even if you locked the depths down, adding a single 16-bit scalar would only double the memory size (the arithmetic is sketched after this list). The speed hit would be smaller than that, since you aren't reconstructing the isosurface from the extra channel. Also, the operation is identical on the second scalar, so it's just moving another channel of data through the same instructions, which should be very efficient. Regarding memory, CUDA hardware currently runs up to 4GB in the Tesla range, and the GeForce line will soon be at 1.7GB. As applications for CUDA grow, I'm sure NVidia would love to sell you hardware with even more memory, so I don't think it is an impossible problem. - Chad
  12. Meaning, you will have an SDK so that we can get voxel data in/out of 3DC, or that you will allow us to implement portions of 3DC in our scientific applications? - Chad
  13. We have a Phantom Omni. It uses the same SDK as the other Phantoms. Unfortunately, I don't know of any third-party APIs for supporting haptic devices in general, so it seems like you have to support each manufacturer separately. - Chad
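
A minimal sketch of the ray-marching option mentioned in post 4 above. The analytic sphere in sampleSDF() stands in for a trilinearly interpolated lookup into the stored voxel array; the names and loop here are just the generic sphere-tracing idea, not anything from the 3D Coat SDK.

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  mul(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }
static float length(Vec3 v)       { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// Stand-in for sampling the level set: a unit sphere at the origin.
// A real integration would trilinearly interpolate the voxel grid here.
static float sampleSDF(Vec3 p) { return length(p) - 1.0f; }

// Sphere tracing: step the ray forward by the sampled distance until it
// reaches the zero level set (hit) or gives up (miss).
static bool raymarch(Vec3 origin, Vec3 dir, Vec3& hit)
{
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p = add(origin, mul(dir, t));
        float d = sampleSDF(p);
        if (d < 1e-3f) { hit = p; return true; }
        t += d;                    // safe step: never overshoots the surface
        if (t > 100.0f) break;     // left the volume
    }
    return false;
}

int main()
{
    Vec3 hit;
    if (raymarch({ 0.0f, 0.0f, -5.0f }, { 0.0f, 0.0f, 1.0f }, hit))
        std::printf("hit at z = %.3f\n", hit.z);   // expect roughly -1.0
    return 0;
}
```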
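
Rough numbers for the concentric-shell argument in post 6. The 512^3 grid, 2 bytes per voxel, triangles per shell, and bytes per triangle are illustrative assumptions, not 3D Coat figures; the point is only that the dense-grid cost stays flat while the polygon cost grows with every added shell.

```cpp
#include <cstdio>

int main()
{
    // Assumed dense grid: 512^3 voxels, 2 bytes (16-bit distance) each.
    const double voxels     = 512.0 * 512.0 * 512.0;
    const double voxelBytes = voxels * 2.0;

    // Assumed tessellation: ~80,000 triangles per spherical shell,
    // ~36 bytes per triangle (three 3-float vertices, no sharing).
    const double trisPerShell = 80000.0;
    const double triBytes     = 36.0;

    for (int shells = 1; shells <= 100; shells *= 10) {
        double polyBytes = shells * trisPerShell * triBytes;
        std::printf("%3d shells: voxels %.0f MB, polygons %.1f MB\n",
                    shells, voxelBytes / (1024 * 1024),
                    polyBytes / (1024 * 1024));
    }
    // The dense grid costs the same no matter how many shells it holds;
    // the polygon cost grows linearly with the number of shells.
    return 0;
}
```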
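
A sketch of the per-voxel segmentation idea from post 7: one extra 8-bit segment ID per voxel, copied onto the extracted isosurface so every vertex carries a mask channel. The struct layout and names are invented for illustration, not taken from 3D Coat.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// One voxel: the existing level-set value plus an 8-bit segment ID,
// which gives 256 possible "materials" per object.
struct Voxel {
    int16_t distance;   // signed distance to the surface
    uint8_t segment;    // 0 = unassigned, 1 = skin, 2 = bone, ...
};

// Human-readable names the artist assigns to the segment IDs.
static const std::vector<std::string> kSegmentNames = {
    "unassigned", "skin", "bone", "hair", "eyeball", "shirt", "teeth"
};

// On isosurface extraction, each output vertex copies the segment ID of
// the nearest voxel, giving an automatic per-material mask for painting.
struct MeshVertex {
    float   x, y, z;
    uint8_t segment;
};

// Pull out just one segment as its own mesh, e.g. only the "shirt".
std::vector<MeshVertex> extractSegment(const std::vector<MeshVertex>& mesh,
                                       uint8_t segment)
{
    std::vector<MeshVertex> out;
    for (const MeshVertex& v : mesh)
        if (v.segment == segment)
            out.push_back(v);
    return out;
}
```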
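
A sketch of the vector-field export from post 9: each voxel records how far its content moved from its initial position, and the resulting displacement field can later be applied to any mesh. The dense grid, nearest-voxel lookup, and names are illustrative simplifications (a real version would at least interpolate).

```cpp
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Dense grid of per-voxel displacements recorded during sculpting.
struct DisplacementField {
    int               nx, ny, nz;   // grid resolution
    Vec3              origin;       // world position of voxel (0,0,0)
    float             voxelSize;    // uniform spacing
    std::vector<Vec3> delta;        // nx*ny*nz displacement vectors

    // Nearest-voxel lookup; interpolation is omitted for brevity.
    Vec3 sample(Vec3 p) const {
        auto clampi = [](int v, int lo, int hi) {
            return v < lo ? lo : (v > hi ? hi : v);
        };
        int i = clampi(int((p.x - origin.x) / voxelSize), 0, nx - 1);
        int j = clampi(int((p.y - origin.y) / voxelSize), 0, ny - 1);
        int k = clampi(int((p.z - origin.z) / voxelSize), 0, nz - 1);
        return delta[(k * ny + j) * nx + i];
    }
};

// Apply the recorded sculpt to any mesh: the original, a morph target,
// or a revised model delivered later. No retopology needed.
void deformMesh(std::vector<Vec3>& vertices, const DisplacementField& field)
{
    for (Vec3& v : vertices) {
        Vec3 d = field.sample(v);
        v.x += d.x; v.y += d.y; v.z += d.z;
    }
}

int main()
{
    // Tiny usage example: a single voxel whose content moved +1 in X.
    DisplacementField field { 1, 1, 1, { 0.0f, 0.0f, 0.0f }, 1.0f,
                              { { 1.0f, 0.0f, 0.0f } } };
    std::vector<Vec3> verts = { { 0.5f, 0.5f, 0.5f } };
    deformMesh(verts, field);
    std::printf("vertex now at x = %.1f\n", verts[0].x);   // expect 1.5
    return 0;
}
```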
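
The memory claim in post 11, worked through under the assumption that the existing structure stores a single 16-bit value per voxel:

```cpp
#include <cstdio>

int main()
{
    // Assumption: the existing voxel holds one 16-bit density value.
    const double baseBits = 16.0;

    const double with8  = baseBits + 8.0;    // add an 8-bit scalar
    const double with16 = baseBits + 16.0;   // add a 16-bit scalar

    std::printf("+8-bit channel : %.0f%% more memory\n",
                100.0 * (with8  - baseBits) / baseBits);   // 50%
    std::printf("+16-bit channel: %.0f%% more memory\n",
                100.0 * (with16 - baseBits) / baseBits);   // 100% (doubled)
    return 0;
}
```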