3DCoat Forums

Everything posted by Skaven252

  1. I'm planning to upgrade my video card (Radeon X1950 Pro) soon too. Still trying to decide between NVidia and ATI, so the above messages made me think. How much of a performance increase does CUDA actually bring? I'm deciding between two options: ATI: Sapphire Radeon HD4870 1GB GDDR5, or NVidia: Asus GeForce ENGTX280/HTDP/1GB. I guess it all boils down to CUDA. If it brings a big speedup, I might go for NVidia; otherwise, for overall performance I'd go for ATI. Besides, I have a quad core CPU, so there should be plenty of calculation power already available, no?
  2. It would be a handy way to produce workable Materials for sculpting. Sure, it is possible to paint fur or scales on a creature one by one, strand by strand, but in a hectic production environment you may be tempted to take a few shortcuts... ... but I agree with GED. It doesn't fit 3D-Coat's feature set in my opinion either.
  3. It would be cool, sure, but you're talking about a pretty complex feature set for a program that isn't originally built for 2D painting on a flat canvas (though 3DC can handle it on picture planes). Can the 3DC team do all this work with their hands already full? Extracting depth data from photos based on lighting ("shape from shading") involves a lot of trickery. Here's one interesting technique: they take two photos of the same surface, one with a flash and one with only ambient light: http://gizmodo.com/5042393/scientists-work...info-in-a-flash (a rough sketch of that flash/no-flash idea is at the end of this list). But it only works on relatively flat surfaces that don't have much specularity. And after all that, you still have to make the texture tile; the example video on the CrazyBump site showed nothing about tiling. PixPlant, though, is pretty good at creating tiling textures out of photos. You could create a tiling texture with PixPlant first, then pass it on to CrazyBump. But oh yeah, that's yet another expensive program to buy.
  4. I made a quick test with voxel sculpting. Just took a sphere and smudged some detail onto it: I then switched to retopo, and imported a sphere which projected neatly onto the voxel object and automatically created the UV clusters: However, when I merged it into the scene, the object had artifacts on it (which I've circled in red): In low poly mode it displays discontinuities along the UV cluster seams. These appear to be holes in the texture (caused by alpha mapping) rather than split edges. I presume this is a known issue, and the feature is being worked on?
  5. Just a quick comment. I think the "Vox Follow" tool could be renamed to "Vox Smudge", as that's what it basically does. People who have used Photoshop, ZBrush, et al are probably quite familiar with what "smudge" does.
  6. Thanks for the tip! I was wondering about that too.
  7. Doesn't Alpha48 already do that? Right click and select "Quadrangulate and paint" from the vox tree menu. It creates a low poly model and projects the voxel details onto it. You can also use the Retopology tools on the voxel object if you want to do it manually; the voxel details will also be projected onto the retopo mesh. (I ran into some instability and artifacts when I just tried it, though... it's still a work in progress.) It would be cool to be able to draw some guides on the vox model to steer the edge flow like you said, though. The Retopo tool already has the nifty "draw strokes" mode. It could be used as a starting point: you could draw a few guide circles and lines over the vox model in the places where you want to steer the flow and place edge loops, and the rest would be handled automatically. I'm sure something like this will come up eventually. This program grows at an awesome speed.
  8. The Mask doesn't seem to work correctly if you increase the resolution of the voxel object. Try this: 1. Start sculpting on an object through the mask using the Build, Extrude or Airbrush tool; the mask is projected onto the object correctly. 2. Click "Increase Resolution". 3. Paint on the object again using the same mask. The mask now seems to be projected onto the object at half its previous size.
  9. Having used 3D-Coat only for a short time, I'm still a bit unfamiliar with the best ways to do certain things. One of them is sculpting versus height painting, because the two seem to overlap a bit in functionality, but I got the impression they should not be confused with each other. After running into an oddity in another thread ("Sculpting does not update normals"), I became aware that layers don't only contain displacement; they can also contain sculpting changes - but only the sculpting done on Layer 0 affects the surface's normals. It also became clear that moving vertices around in Sculpt mode after the UVs have been applied will cause stretching - but painting displacement does not cause stretched UVs? So, would someone be so kind as to shed some light on the subject? For example, if I do some sculpting after doing some texture work and the UVs end up looking stretched, what is the best way to "un-stretch" the mapping without destroying all of the texture work done so far? Do you think there should be a warning in 3D-Coat if the user tries to sculpt on a layer other than Layer 0? As a matter of fact, there kind of is one already. When you switch layers in Sculpt mode, you get the following warning: "In sculpt mode, all geometry changes are applied to the current layer when you change modes. Switching between layers in sculpt mode is therefore unnecessary." ... but I don't understand exactly what it means. Does "changing modes" refer to the layer's blend mode (add depth, subtract depth, etc.) or to switching between tools (from Paint to Sculpt)? If it's the latter, does that mean you have to select the layer you want to sculpt onto before switching to the Sculpt tool, and that changing layers afterwards has no effect? Thanks for the help, guys.
  10. I am a bit worried about this, and in color painting too, as I have seen it happen in other applications when you paint over seams. A highly subdivided model in 3D-Coat performs remarkably well across seams, but will this be the case in direct low-poly painting? How will visible seam artifacts be avoided, especially if the clusters are at slightly different scales or the adjacent polys across the seam are stretched slightly differently? Also, will cavity / height masking work on the model's corners in low poly, like it does in high poly?
  11. It's a map that determines how emissive the texture is. A self-illuminated object "glows in the dark", quite literally. It doesn't cast light on other objects; it's just visible in an otherwise dark scene. Here is a good example of a self-illuminated object: the glowing parts of the texture are determined by the self-illumination map. (There's a small sketch of how an emissive channel enters the shading at the end of this list.)
  12. This could, kind of sort of, be a feature request, but I thought I'd bring the topic up here before posting in the Feature Requests channel, just to discuss it. 3D-Coat textures already support color, bump/normal/displacement and specularity channels. I was thinking, would it be useful if there were also support for a self-illumination channel? It's not as widely used in games and rendering as color, bump and specularity are, but it would be useful for various glow-in-the-dark effects (lava, LEDs, what not). So this is not really a "request" yet, just a thought. How much work would it be to implement? How many people here would find it useful? It's not an absolute must for me, and it would introduce an additional degree of complication to the paint UI (one more material channel to worry about in addition to Color, Height and Specularity). Perhaps one way to do this without changing the UI would be to add a "Self Illumination" blend mode for the texture layers. If it were done that way, the UI would not need to change much, but it would also mean that you couldn't build materials with built-in self-illumination maps.
  13. I can ask around. Yes, almost all DarkTree procedurals are volumetric by nature. There are only a few components that are 2D. They can be combined together into a "tree" with various operations and operators to produce very versatile textures.
  14. That clears it up, thanks. Yes, this would be very useful. You could also author the normal maps by using the Depth map as a bump map only, rather than displacing the geometry in any way - right? I tried importing a low-poly, mechanical-looking model I had into 3DC via the "no smoothing" option. It also had lots of overlapping UVs. I tried painting on it, but I think I ran into some problems: there were large multi-vertex polygons (not quads), for example the cap of a 20-sided cylinder, and when I painted on that the texture looked distorted if I used the original UVs. Is this because the painting does not currently work in UV space? I wonder about Cavity Masking, though. How are cavities and heights calculated? Will cavity masking work on un-smoothed (non-tessellated) geometry? It would be great for painting worn corners and dirty cavities on angular low-poly metal objects. I think cavity calculation requires tessellation, and if the polygons are not quads (like the cylinder cap) and are also very long (like a single-poly side of a long metal bar), the results may get ugly. (There's a sketch of one possible cavity calculation at the end of this list.)
  15. Yep, I've already tried that and it works nicely. But there is a bit of seaming because the textures are reduced to 2D (i.e. not volumetric), so if there's a recognizable pattern in the texture it will break up where the cubic mapping blends. If you could use a volumetric DarkTree texture to modulate a Fill (via Simbiont), it would give users access to a huge variety of volumetric procedurals that can be authored with DarkTree, and you would not need to expand 3DC's own fractal selection. From the DarkTree download page you can get the DarkTree Engine, which allows Simbiont to be integrated into any graphics software. On the Simbiont download page there are examples of several "community projects", where people have integrated Simbiont into software such as Blender and Mental Ray for Maya. DarkTrees can also be Tweaked with maps: you can use a color or grayscale map to control a variable within a DarkTree texture. For example, there is a material called "Ghoul" which looks like green skin, but by adjusting a Tweak you can make it look red and boiled. By applying a map to this Tweak parameter, you can make the redness and boiledness appear only on certain parts of the model. In 3DC, maybe this could be done either via Cavity masking, or by creating a grayscale Layer and using that to control the Tweak (a tiny sketch of that idea is at the end of this list). So, all in all, it's doable and would be useful, but of course there are more exciting areas of development in 3DC right now, such as volumetric sculpting, so I'm not rushing you or anything, just cherishing the thought. Since 3DC also supports 3rd party plugins, I suppose some 3rd party could also try the integration. How many 3D-Coat users use DarkTree? I could set up a poll in the discussion forum.
  16. But I can already paint directly onto a low-poly model in Alpha 3_0, and even in 2_10. Could someone elaborate on how this direct painting differs from what's currently available? How must an object be modified prior to painting? Does it need to be tessellated within 3D-Coat?
  17. Just bought 3D-Coat, and it's mah-velous. Already modeled another weird blob with voxels, then built topology onto it and started painting, awesomeness. Anyway... I'm using dual 1600x1200 displays at work. Many programs allow you to move the toolbars outside the main program window to nicely arrange the views. With 3D-Coat, I need to stretch the whole viewport across two displays, which distorts the perspective and slows the program down as there's a huge view window to update. And I only want to move some of the tools/windows to the other display. Would floating toolbars and windows be doable? Then you could, for example, have the 2D texture view in the second display and the 3D view in the primary display. What do you think?
  18. This gave me a thought on how to take it one step further (I haven't actually tried the tree generator yet): how about making the adaptive simplification also work on the radial polycount, according to the thickness of the branch? If the branch is very thick it needs to look round, so you need a dense loop of vertices around the branch to keep it round (say, 12 vertices). But as the branch gets thinner towards the tip, you could reduce the number of vertices in the loops one by one, until near the tip there would be only a 3-vertex loop (a small sketch of this is at the end of this list). The only problem I can think of is that this would introduce triangles into the branch geometry. Is that a problem in 3D-Coat? (At least ZBrush runs into problems with triangles.)
  19. ^ Yeah, I agree. Gesture-invoked manipulators aren't the most intuitive option. Another problem, which at least I personally have, is that every program seems to have its own logic behind these manipulators. So if you jump a lot between programs in your pipeline, you tend to keep using the previous program's commands in the current one for a while, which causes confusion. But I guess it's mostly a matter of getting accustomed to them...
  20. Please see this thread. The "bubble" issue has been addressed in the Volumetric Sculpting Development forum.
  21. Yeah, to me it seems like a feature, not a bug. Voxels are really cool because it is actually possible to create hollows inside your models. But indeed, some sort of "detect and fill hollows" voxel operation could be useful for the reasons you mentioned. How complicated is it to detect empty spaces that are completely surrounded by voxel volume? (There's a sketch of the usual flood-fill approach at the end of this list.)
  22. Hello all, I just recently got introduced to 3D-Coat and it looks really interesting as a part of a 3D modeling and texturing pipeline. Plus the way it's constantly being developed and the new voxel sculpting tools... awesome. I haven't bought it quite yet, but with the discount and all it's a matter of days (I'm on Xmas holiday now). I've tried out the Fractal Fills function in 3D-Coat, and read somewhere here that Andrew has plans to expand the fractal selection and fill functions (rotation? two-axis anisotropy?) someday. I immediately started thinking about DarkTree. It's a program that allows you to combine multiple different fractals to create really good-looking volumetric materials, and through the Simbiont plugin it works with Maya, 3DS Max and what not. DarkSim is also working on a program called Tactile Ink (currently in alpha), which allows you to paint procedural materials on a flat plane. They're working on the ability to paint on meshes, but that's still under development. But then, 3D-Coat already allows you to paint on meshes! I think DarkTree/TI and 3D-Coat - each with their own features, advantages and limitations - could complement each other very well. I tried searching this forum but found no mention of DarkTree, so I thought I'd bring the topic up. So, instead of re-inventing the wheel and trying to build all these features into both programs, how about some kind of integration between the two? It's an interesting thought, no?
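
Regarding the flash/no-flash technique in post 3: here's a minimal sketch (Python with NumPy) of the general idea as I understand it - dividing the flash photo by the ambient-only photo largely cancels the surface color and leaves shading, which can serve as a very crude height estimate. The function name and the normalization details are my own illustration, not the method from the linked article.

```python
import numpy as np

def crude_bump_from_flash_pair(ambient, flash, eps=1e-4):
    """Estimate a rough bump/height map from an ambient-only photo and a
    flash photo of the same (mostly flat, non-specular) surface.

    Both inputs are float arrays in [0, 1], pixel-aligned. Dividing the
    flash image by the ambient image cancels much of the albedo (paint
    color), leaving mostly shading, which is normalized into a 0..1
    height estimate.
    """
    ratio = flash / (ambient + eps)          # albedo largely cancels out
    ratio = np.log(ratio + eps)              # compress the dynamic range
    lo, hi = np.percentile(ratio, [2, 98])   # robust normalization
    return np.clip((ratio - lo) / (hi - lo + eps), 0.0, 1.0)
```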
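
Regarding the self-illumination discussion in posts 11 and 12: a minimal sketch of how an emissive channel typically enters per-pixel shading. The function and channel names are hypothetical, not 3D-Coat's internals; the point is only that the emissive term is added after lighting, so it stays visible even when no light reaches the surface.

```python
def shade_pixel(albedo, light, specular, emissive):
    """Combine per-pixel channels into a final color.

    albedo, emissive: (r, g, b) tuples in 0..1 taken from texture layers
    light:            (r, g, b) incoming diffuse lighting at this pixel
    specular:         (r, g, b) specular contribution already computed

    The emissive term is added on top and does not depend on lighting,
    so the surface "glows in the dark" without lighting other objects.
    """
    return tuple(
        min(1.0, a * l + s + e)
        for a, l, s, e in zip(albedo, light, specular, emissive)
    )

# Example: an unlit pixel (light = 0) still shows its emissive color.
print(shade_pixel((0.8, 0.2, 0.2), (0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.9, 0.1)))
```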
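
Regarding the cavity question in post 14: I don't know how 3D-Coat actually computes cavities, but one common estimate (assumed here) compares each vertex normal with the direction toward the average of its neighbors - which also illustrates why sparse, non-tessellated geometry only gives a coarse result.

```python
import numpy as np

def cavity_per_vertex(positions, normals, neighbors):
    """Crude per-vertex cavity/convexity estimate.

    positions: (N, 3) array of vertex positions
    normals:   (N, 3) array of unit vertex normals
    neighbors: list of index lists; neighbors[i] holds vertices sharing an edge with i

    Returns values roughly in [-1, 1]: positive in cavities (neighbors sit
    "above" the vertex along its normal), negative on exposed corners. On a
    sparsely tessellated mesh the neighbors are far apart, so cavities and
    corners are only detected very coarsely.
    """
    cavity = np.zeros(len(positions))
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        offset = positions[nbrs].mean(axis=0) - positions[i]
        dist = np.linalg.norm(offset)
        if dist > 1e-9:
            cavity[i] = np.dot(offset / dist, normals[i])
    return cavity
```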
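
Regarding the map-driven Tweak idea in post 15: a tiny sketch, assuming a grayscale control layer sampled at a UV coordinate and remapped into the Tweak's range. All names here are made up for illustration.

```python
def tweak_value(grayscale_map, u, v, tweak_min=0.0, tweak_max=1.0):
    """Look up a grayscale control map at (u, v) and map it to a Tweak value.

    grayscale_map: 2D sequence of floats in 0..1 (e.g. a painted layer)
    u, v:          texture coordinates in 0..1
    The returned value would then drive the procedural texture's Tweak
    parameter (e.g. the "boiledness" of the Ghoul material).
    """
    h = len(grayscale_map)
    w = len(grayscale_map[0])
    x = min(w - 1, int(u * w))
    y = min(h - 1, int(v * h))
    t = grayscale_map[y][x]
    return tweak_min + t * (tweak_max - tweak_min)
```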
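
Regarding the adaptive branch rings in post 18: a minimal sketch that keeps the edge length around each cross-section ring roughly constant, clamped between a 3-vertex tip loop and a 12-vertex trunk loop. The numbers are just example values.

```python
import math

def ring_vertex_count(radius, target_edge_length, max_count=12, min_count=3):
    """Number of vertices for a circular cross-section of a branch.

    Keeps the edge length around the ring roughly constant, so thick
    branches stay round while thin tips drop down to a triangular loop.
    """
    circumference = 2.0 * math.pi * radius
    count = int(round(circumference / target_edge_length))
    return max(min_count, min(max_count, count))

# Example: a thick trunk vs. a thin twig (units are arbitrary).
print(ring_vertex_count(5.0, 2.5))   # -> 12 (clamped to the maximum)
print(ring_vertex_count(1.0, 2.5))   # -> 3
```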
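
Regarding hollow detection in post 21: assuming the voxels can be treated as a boolean occupancy grid (which may not match 3D-Coat's internal representation), the usual approach is to flood-fill the empty space from the grid boundary; any empty cell the fill never reaches is enclosed and can be filled in.

```python
from collections import deque

def fill_hollows(solid):
    """Fill empty regions that are completely enclosed by solid voxels.

    solid: 3D nested list of booleans, solid[x][y][z] == True where there is volume.
    Returns a new grid in which enclosed hollows have been set to True.
    """
    nx, ny, nz = len(solid), len(solid[0]), len(solid[0][0])
    outside = [[[False] * nz for _ in range(ny)] for _ in range(nx)]
    queue = deque()

    # Seed the fill with every empty cell on the boundary of the grid.
    for x in range(nx):
        for y in range(ny):
            for z in range(nz):
                if (x in (0, nx - 1) or y in (0, ny - 1) or z in (0, nz - 1)) \
                        and not solid[x][y][z]:
                    outside[x][y][z] = True
                    queue.append((x, y, z))

    # 6-connected flood fill through empty space reachable from outside.
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            i, j, k = x + dx, y + dy, z + dz
            if 0 <= i < nx and 0 <= j < ny and 0 <= k < nz \
                    and not solid[i][j][k] and not outside[i][j][k]:
                outside[i][j][k] = True
                queue.append((i, j, k))

    # Anything still empty but unreachable from outside is a hollow: fill it.
    return [[[solid[x][y][z] or not outside[x][y][z] for z in range(nz)]
             for y in range(ny)]
            for x in range(nx)]
```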