3DCoat Forums

SMcQ

  1. I tried importing a point cloud set into the voxel room, and while the task bar at the bottom reported success, there is no visible result. 332,464 triangles are in memory but not on the screen. It saved as a 3b file, about 2 MB. Saving to obj produces empty files, as I expected, because it is a voxel file. Documentation would help, as I just accepted the default input parameters. Maybe the spheres at the points are too small? It is a big scene, from a LiDAR dataset, about 13 MB as a csv file. 3D Coat did not complain or crash; it just didn't show anything on screen. The file was a point-cloud-only obj created in MeshLab from LiDAR xyz data. Supposing I eventually get this to work, what do I do with it? I mean, how do I get it back as a geometry file with many thousands of little spheres? I'm expecting the obj to be much larger than the data, because each point will be represented by the multiple vertexes of a sphere. Hope they are economical spheres! The objective is to render the point cloud with the animation capability of my general-purpose 3D program. I have point cloud viewers, but I can't make stereo 3D movies with them. Seriously, I need some orientation to this functionality, because bumping around in the dark is just getting my shins bruised. SMcQ
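For anyone wondering what a points-to-geometry export could look like, here is a minimal Python sketch. This is my own illustration, not 3D Coat's actual exporter: it writes one tiny octahedron per xyz point into OBJ text, the most economical "sphere" possible at 6 vertices and 8 faces. The function name and the radius are made up for the example.

```python
# Sketch: turn xyz points into OBJ text with a tiny octahedron at each point.
# An octahedron (6 vertices, 8 triangles) is the cheapest "sphere" stand-in.
# points_to_obj and the default radius are illustrative, not a real 3DC API.

OCT_VERTS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
OCT_FACES = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
             (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

def points_to_obj(points, radius=0.05):
    """Return OBJ text with one small octahedron per (x, y, z) point."""
    lines = []
    for i, (px, py, pz) in enumerate(points):
        for vx, vy, vz in OCT_VERTS:
            lines.append(f"v {px + vx * radius} {py + vy * radius} {pz + vz * radius}")
        base = i * 6 + 1  # OBJ face indices are 1-based
        for a, b, c in OCT_FACES:
            lines.append(f"f {base + a} {base + b} {base + c}")
    return "\n".join(lines) + "\n"

obj_text = points_to_obj([(0.0, 0.0, 0.0), (10.0, 5.0, 2.0)])
```

Even with this economical shape, the output is 6 vertex lines and 8 face lines per point, so a big LiDAR set will indeed produce an OBJ much larger than the raw data.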
  2. I'm making a dumb mistake somewhere in my first experience with the sketch tool. Clicking Apply does nothing, not in the OpenGL version and not in the DX version. Curiously, when I undo one step and redo it, the estimated poly count shows up, but only that; no shaping results. The profiles are all 256 pixels square, white inner shapes with black outer frames, and they should be reconcilable with a 3D shape. They load, but no voxel shape computes. Win7 64, 8 GB RAM, i7, Radeon 5570. The DX version crashes a lot; the OpenGL version doesn't, but it won't produce a shape either. No CUDA, of course. After hitting Apply I wait for something to compute, and nothing does. What am I doing wrong? SMcQ
  3. Winging it here, but I think the logic of the subdivision on import is to "shrink wrap" the original model with a version that allows displacement and normals painting. It is not that the original model is subdivided in the traditional sense, with the polys broken down into smaller ones. Rather, the polys are recast onto the internal, flat normal map that 3DC 2.x uses for painting. The higher the poly resolution of that normal map, the finer the details and the better they join across boundaries. So, your initial setting of 0 polys created a very coarse surrogate for the import. (Although I like the foam rubber architecture look, seems to suit Starbucks.) With 5 million polys the shrink wrap could bend acutely around the sharp edges. When you paint near those edges with displacement or normals, you can affect them within the range of the brush. All this may be a mistaken surmise, but it is my mental model of how 3DC currently works. As for low poly painting, from what I've seen, displacement for low poly UV painting creates some artifacts along UV edges that are not joined on the map but are adjacent in the model. I use Carrara displacement, and it can show these artifacts at joins. I'm wondering if it would be possible to paint the normals or displacement with the subdiv model, and the other channels directly onto the low poly UV map with 3DC version 3? SMcQ
  4. It's not about saving time, it's about making possible something that can't be done now, something that people will want when they see it. We are all going to be happy with the outcome, eventually. You all will get what you want first. (I want it too.) And then, only after refining the tools in his current set to the satisfaction of the user base, will Andrew turn to the challenge of keyframing the voxel sculptor. I have as much confidence in his judgment as I do in his skill. Keyframe animation is really a natural extension of the technology. View the screen animations recently posted and imagine the ability to keyframe them as exportable morphs. http://www.3d-coat.com/v3_voxel_sculpting.html It was never my intention to get anyone riled up, by the way, so I'll abstain from any further replies to this thread. The majority opinion is clear, but it will change.
  5. Thank you for making my point! I do think it self-evident that the replies, including your most recent one, are hostile to new ideas that you seem to fear will knock the development progress off kilter. I think such fears underestimate Andrew. He will choose wisely the allocation of his genius. Meantime, we lose out if we practice self-censorship of creativity or shout down the ideas that anyone thinks will distract Andrew. A forum is for sharing, not dominance. You should bear in mind that I have no suggested timeline for this proposal. I expect it could not be done until voxel sculpting is mature and stable. "Someday" is soon enough for me. There is something of a precedent for keyframed voxel-type animation. The metaballs modelers I am familiar with, a very limited sample, have some keyframing capability built into the programs of which they are a part. There are 3rd party plugins that use the keyframing capability of the host application. These all use marching cubes meshes. If there are standards for keyframing across platforms, then the voxel modeler could perhaps, without much trouble, generate the data that various programs would need in their metaball-type modelers. Another approach would be to morph between incommensurate meshes, which I've heard can be done by some applications. It would be fun to get a constructive riff going on using the 3D Coat voxel modeler to paint animation. SMcQ
  6. I don't know why there's all the worry. This hostility toward brainstorming seems odd. A request is a request, not a demand. I have no power over Andrew, and he's a practical guy. It's an unconventional concept, sure, and maybe something that would not be in high demand until used creatively to produce exciting results. I broach it to Andrew because if anyone could do it >eventually< it would be Andrew. There's nothing to be gained by censoring ideas. Andrew knows how to focus. Trust him.
  7. OK, thanks. Obviously I haven't tried it yet. I'm a little daunted by how 3D software is outpacing my hardware.
  8. That's great! Now all I have to do is learn how to program. Should be a cinch. Couple of weeks to master it, right? (Insert dismayed emoticon here, I can't even get that to work.) SMcQ
  9. Andrew, Can you post dual versions of the alpha in the future, one without CUDA for folks like me whose legacy cards cannot handle CUDA requirements? Or maybe one version that allows CUDA to be switched off so it won't crash, or has automatic detection so it can match the host card's capability? SMcQ
  10. There are many different formats for volume data sets, depending on where the data came from, but they all amount to 3D grids with scalar values. Some scientific visualization programs, like UCSF Chimera (open source), can create isosurfaces that define certain density values, and these can be exported as surface meshes in VRML or OBJ formats. It would be very nice if someone could program a converter that would translate volume data sets directly into the 3DC volume format, for enhancement, artistic interpretation and remeshing. What would be required in 3DC, then, is the ability to set the sensitivity to scalar values in order to define the working isosurface. Andrew's technology sure beats every volumetric scientific visualization program I've seen. SMcQ
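The "sensitivity to scalar values" idea above can be sketched in a few lines of numpy. This is only an illustration of thresholding a scalar grid into a binary occupancy grid, the crude voxel analogue of picking a working isosurface value; the 3DC volume format itself is not public, so the sketch stops at the occupancy grid, and a synthetic density field stands in for real volume data.

```python
import numpy as np

# Sketch: pick a working "isosurface" from a scalar volume by thresholding.
# threshold_volume and the synthetic density field below are illustrative
# assumptions, not part of any real converter or the 3DC format.

def threshold_volume(volume, iso):
    """Voxels with density >= iso count as 'inside' the isosurface."""
    return volume >= iso

# Synthetic density field: highest at the grid center, falling off with
# distance, so iso = 0.0 selects a solid ball of radius 10 in a 32^3 grid.
n = 32
z, y, x = np.mgrid[:n, :n, :n]
density = 10.0 - np.sqrt((x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2)

occupancy = threshold_volume(density, 0.0)
```

Raising the iso value shrinks the selected region, which is exactly the "sensitivity" control the post asks for; a converter would then pack an occupancy (or signed-distance) grid like this into whatever 3DC expects internally.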
  11. What I'm requesting here is direct animation within volumetric sculpting, not animation rigging of the mesh conversion. I don't know how this could be done, but I've tried various metaball programs and found them lacking. An example application: growth of a tree or a tumor. Trees mature by apical growth; that is, the trunk and limbs build length without stretching, while the limb girth increases in place. If you hammer a nail into a young tree, then come back in 5 years, the nail will be at the same height but partially enveloped. In 20 years the trunk will have completely enveloped the nail. If you took a time lapse of a tree over 20 years of its growth, it would seem as if there were an invisible tree template that the maturing tree was growing into, the limbs snaking along the template paths and getting fatter. A tree does not enlarge, it fills out, and it drops limbs. The kinks in a mature limb are places where branches have dropped or broken off. All this could be animated using volumetric sculpting, I am sure, but haven't a clue how it could be set up. Keyframing? Morphing? Progressively filling out a template? Another example application: landscape formation processes. A landslide, a volcano growing and blowing off its side, a timelapse of fault movement, an extreme timelapse (over eons) of mountain formation. An artistic example: an alternative to mesh morphing, where the alteration added volume and details by obvious growth of mass, not just stretching. Think of horns growing out of a devil's head. Maybe too much to ask for, but it would be cool. EDIT: Actually, I'm playing with the VS now for the first time, version 41, and the spike tool grows just like a tree limb. The trick to animating a tree growth would be to have several spikes active, and keyframe their simultaneous progress. SMcQ
  12. Maybe this can be done in 3DCoat but I don't see how. If it can't, then consider this post a feature request. I want to create a normal map from a landscape digital elevation model (DEM) of very high resolution and immense file size, especially as an obj conversion (>20 GB). This normal map would be applied to a mesh of much lower resolution. I have approximated this approach using hi-res baked shadows as a texture on a low-res triangular conversion of a DEM. Works pretty well, and a normal map would look even better. The data set is of Lake Tahoe, CA/NV. Seems to me this would have to be a command line operation that worked on the file stored on disk, chunk-wise. DEMs come in various formats at 32-bit precision, and there is a seldom-used portable grayscale map alternative at 16-bit precision. I have software to convert between several formats, but the obj and xyz outputs are prohibitively large. Direct DEM to normal map may be too much to ask for. By the way, it would be interesting to re-topologize a height field quad, which is just stretched vertically, to a quad that followed the features better. My maxed-out, five-year-old system is underpowered for the full capabilities of 3DCoat as it evolves. Thus, evolving software drives hardware purchases! SMcQ
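As a rough illustration of the chunk-wise idea (my own sketch, not an actual 3DCoat feature), here is Python that converts a height grid to unit normals with finite differences, processing a limited number of rows at a time with a one-row halo so the gradients at chunk seams stay correct. The cell size and chunk height are assumed parameters.

```python
import numpy as np

# Sketch: DEM height grid -> normal map via finite differences, in row
# chunks so an immense grid never has to be fully resident in RAM.
# heights_to_normals, cell_size, and chunk_rows are illustrative assumptions.

def heights_to_normals(heights, cell_size=1.0, chunk_rows=1024):
    """Return an (H, W, 3) float32 array of unit normals for a height grid."""
    h, w = heights.shape
    normals = np.empty((h, w, 3), dtype=np.float32)
    for r0 in range(0, h, chunk_rows):
        r1 = min(r0 + chunk_rows, h)
        # One-row halo on each side so chunk-seam gradients are correct.
        lo, hi = max(r0 - 1, 0), min(r1 + 1, h)
        block = heights[lo:hi]
        dz_dy, dz_dx = np.gradient(block, cell_size)
        n = np.dstack([-dz_dx, -dz_dy, np.ones_like(block)])
        n /= np.linalg.norm(n, axis=2, keepdims=True)
        normals[r0:r1] = n[r0 - lo : r1 - lo]
    return normals

# A simple ramp: slope 1 in x, so every normal is (-1, 0, 1) / sqrt(2).
ramp = np.tile(np.arange(8, dtype=np.float64), (8, 1))
nm = heights_to_normals(ramp, chunk_rows=4)
```

In a real converter the chunks would be read from the on-disk DEM (the 32-bit formats mentioned above) rather than sliced from an in-memory array, and the normals quantized to an 8- or 16-bit image; the seam-halo logic stays the same.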