3DCoat Forums



Community Reputation: 781 Reputable
Followers: 1

About ajz3d

Profile Information
  • Location: Warsaw, Poland

Recent Profile Visitors: 3,638 profile views
  1. ajz3d

    3DCoat 4.8 BETA testing thread

    Hey, thanks for the info @SERGYI. I was on vacation for almost a month, away from technology, recharging my batteries a bit. That's why I'm replying with such a long delay. It's great to hear that something is happening, though it's a pity I don't have a 3DConnexion device to test things out. It would be fantastic if you could drop us GNU/Linux users some info about the progress from time to time, no matter whether it's significant or not. You can bet we would appreciate it. Cheers.
  2. ajz3d

    3DCoat 4.8 BETA testing thread

    I second the requests of my fellow GNU/Linux colleagues. Please update us on the progress.
  3. Are there any (slow-paced) videos of subdiv hard-surface modeling with this asset? With commentary on which areas this asset shines in and which of its drawbacks to be aware of?
  4. For some time now I've been thinking about switching to an AMD card, because that company is more open to the idea of liberating their drivers than Nvidia is, so I'm interested in your experience too. Especially if you've tried one of their recent cards in Houdini.
  5. ajz3d

    Import/export heightmap for sphere?

    Just perform standard texture baking of your high-poly detailed sculpture onto a sphere that has equirectangular UVs. I'm not sure whether it will reintroduce texture distortion at the poles, though.
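The pole distortion mentioned above comes from the equirectangular mapping itself: every pixel in the top (or bottom) row of the texture collapses onto a single point of the sphere. A minimal sketch of that mapping (the function name and UV convention are my own, purely for illustration):

```python
import math

def equirect_uv_to_dir(u, v):
    """Map an equirectangular UV pair (both in 0..1) to a unit direction.
    u wraps around in longitude; v runs from the south pole (0) to the north pole (1)."""
    lon = (u - 0.5) * 2.0 * math.pi
    lat = (v - 0.5) * math.pi
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
```

Note that at v = 1 the result is (0, 0, 1) regardless of u, which is exactly why texels get squeezed together near the poles.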
  6. Yeah, the problem of destructive workflows...
  7. At the start of a project I usually import a 1m x 1m x 1m reference box for calibration. In the import tool I reset scale and axis and check whether my mesh is too small or too large to work with. If either is true, I multiply the object's scale in my DCC program in increments of 10, 100, or 1000, whichever gives the best results in 3DCoat, and export it again. 3DCoat does offer to remember the transforms applied to the imported mesh, lets you specify a precise scale of the imported object in percents (why percents and not a 0..1 factor?), and reverts them on export (that is, if you clicked "Yes" in the "This is the first time you have tried to import an object..." modal window). The scaling is stored in Geometry->Edit Scene Scale, but somehow I've always found it too cumbersome to work with. On object reimport I divide the mesh by the same factor.
     I use Houdini 90% of the time, so rescaling before export and after import is merely a matter of configuring xform nodes with the appropriate scale values, so the rescaling happens automatically. This effectively makes it a one-click operation: "reload file".
     PS. Off-topic, but if any Blender guru knows a way to reload a geo file that is already loaded in a .blend file, please let me know. Many times I've had everything set up for EEVEE rendering, with just a single mesh passing through multiple iterations, and all of the data bound to it was lost on reimport (shaders, modifiers, transforms, etc.), while I only wanted to update its geometry.
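The "multiply by 10, 100, or 1000 until it feels right" step above can be mechanized. Here is a small sketch, with a hypothetical helper name and a working range I made up, that finds the power-of-ten exponent bringing a bounding-box size into a comfortable range (in Houdini the resulting 10**n would then go into the xform nodes mentioned above):

```python
def calibration_exponent(size, target_min=0.1, target_max=100.0):
    """Return an integer n such that size * 10**n falls inside
    [target_min, target_max]. Steps by whole powers of ten, mirroring
    the manual x10/x100/x1000 rescaling workflow."""
    n = 0
    while size * 10.0 ** n < target_min:
        n += 1   # mesh too small: scale up by another factor of ten
    while size * 10.0 ** n > target_max:
        n -= 1   # mesh too large: scale down by a factor of ten
    return n
```

For example, a 1 mm-wide mesh (size 0.001) yields n = 2, i.e. scale by 100 on export and divide by 100 on reimport.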
  8. ajz3d

    3DCoat 4.8 BETA testing thread

    @SERGYI - GNU/Linux? What's the progress?
  9. ajz3d

    Import/export heightmap for sphere?

    A valid question. The only difference is that the method above is UV-agnostic and in theory should work with any convex mesh, whatever its UVs may be. But if you have an icosphere with polar coordinates that fit the displacement map nicely, there's no reason not to use it. How and what are you importing to 3DC? My guess is that it depends solely on your hardware. The memory footprint of a mesh that is supposed to be mapped 1:1 in vertex-to-displacement-map-pixel ratio can be significant, especially if we're talking about a 16k x 8k displacement map. If you need to do close-ups of a specific area (do you?), why not cheat your way out by using a separate mesh for that zoomed-in shot and leave the rest of the moon low-res, or not import it at all? This way you could import just a fraction of the moon into 3DC for detail sculpting, and then combine it with the rest of the celestial body in your main DCC software downstream.
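To put a rough number on that memory footprint: a back-of-envelope estimate (my numbers, not from the thread) for one vertex per pixel of a 16k x 8k map, counting raw 32-bit positions only, i.e. a hard lower bound before normals, connectivity, or sculpt-app overhead:

```python
# One vertex per displacement-map pixel, positions only.
width, height = 16384, 8192            # a 16k x 8k displacement map
verts = width * height                 # ~134 million vertices
bytes_per_vert = 3 * 4                 # x, y, z as 32-bit floats
gib = verts * bytes_per_vert / 2**30   # GiB for raw positions alone
print(verts, gib)                      # 134217728 vertices, 1.5 GiB
```

Real sculpting applications store far more per vertex, so the actual footprint is a multiple of this.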
  10. ajz3d

    Import/export heightmap for sphere?

    Your baked lightmap becomes the new height map. You then use the Displace modifier to bake it into the mesh before exporting to 3DC. With the icosphere's topology, you won't have to worry about pinching at the poles. Make sure to vertically flip your original height map before baking it into the lightmap texture, because otherwise it will end up inverted (due to how refraction works). Also, set the Glass BSDF IOR to 0. Some screenshots of the procedure: You can compare the displacement result to this photograph: https://en.wikipedia.org/wiki/Ceres_(dwarf_planet)#/media/File:PIA19310-Ceres-DwarfPlanet-20150225.jpg Cheers. PS. Andrew Price has a good video on lightmap baking in Cycles: https://www.youtube.com/watch?v=sB09T--_ZvU
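The vertical flip mentioned above is just a row reversal of the image. A trivial sketch (hypothetical helper, heightmap represented as a plain list of rows rather than an image file):

```python
def flip_heightmap_vertically(rows):
    """Return the heightmap with its rows reversed (top row becomes bottom).
    `rows` is a list of pixel rows, stored top-to-bottom."""
    return list(reversed(rows))
```

In Blender itself this would simply be an image flip before baking; the function only shows what that flip does to the pixel data.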
  11. ajz3d

    Import/export heightmap for sphere?

    @Innovine you can use Blender to prepare the mesh for sculpting. Use the Ceres texture as a linear environment map and bake the lighting into a high-res, UV-unwrapped icosphere. Be sure to set the icosphere's material to a perfect mirror beforehand and to use a linear image as the render target. Then use this texture to displace the points of the sphere via the Displace modifier and export the model to 3DCoat. The greater the icosphere's resolution, the finer the detail you will get from the displacement.
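What the Displace modifier does to each vertex here can be sketched in a few lines. This is an illustration with a made-up function name and scale, assuming a sphere centered at the origin (so the normal is just the normalized position):

```python
import math

def displace_sphere_point(p, height, scale=0.05):
    """Push a point on an origin-centered sphere outward along its normal
    by `height * scale`, the way a Displace modifier would with a heightmap
    sample driving `height`."""
    length = math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)
    r = length + height * scale
    return (p[0] / length * r, p[1] / length * r, p[2] / length * r)
```

Applying this per vertex with the baked texture sampled at each vertex's UV is, conceptually, the whole displacement step; the denser the icosphere, the more heightmap detail survives.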
  12. ajz3d

    3DCoat 4.8 BETA testing thread

    I'd like to know about the progress too.
  13. ajz3d

    Retopo - next big Step

    I'm a little bit torn about what I'm about to say, because I like 3DCoat a lot and have been using it for many years (and will continue to use it), but in my opinion none of the retopo solutions currently available on the market (including Houdini's TopoBuild, which, even as a long-time Houdini user, I don't understand why everyone is so excited about, as it doesn't offer anything new) can compare, feature-wise, to NVil's Draw Mesh StreamLine tool. For me this tool offers the most efficient semi-automatic retopo right now, and NVil has been my go-to app for this kind of job ever since the tool was developed (to the point where I spent two years trying to run it under WINE, as it's unfortunately a Windows-only app, but it works on WINE 3.20).
     When Farsthary first announced that he was working on new retopo tools (and that was way before the Draw Mesh work even began, I think), I imagined those tools would look exactly like Draw Mesh. You draw the main edge loops and let the algorithm generate the fill geometry for you. This is extremely productive, because you don't have to waste time drawing geometry that needs to follow some kind of rules anyway, and thus shouldn't need your attention at all (or should require minimal attention at most).
     But Draw Mesh isn't a standalone tool in the retopo process. When retopologizing an asset in NVil, you still have access to the full modeling suite offered by the program. And its modeling toolset is pretty damn powerful, in my eyes way more powerful than what Blender 2.8 has to offer in its current state (minus modifiers, because NVil doesn't have them).
     Pilgway could make some sort of deal with DigitalFossils to combine forces against some of the more powerful competition. Because why the hell not? Maybe it's worth trying to reach some form of agreement?
We could get the best semi-automatic retopo algorithms available on the market, a powerful modeling solution, and excellent sculpting/texturing software combined together in one superpackage. Let's kick the new ZRemesher's ass by giving users more control over their retopo mesh. You can watch Draw Mesh in action (on a 3DC-made sculpture) in my playlist, but do note that this is an old demo of a very early Draw Mesh version, and many improvements have been introduced to the tool since those videos were recorded: https://www.youtube.com/watch?v=44ndpC8lMO0&list=PLNPeRk-wjBGiod2fk0YSaYCK0Oh6JpMQj
  14. ajz3d

    3DCoat 4.8 BETA testing thread

    Of course, but nothing stands in the way of unifying the brushes between the two modes. Like I said (implicitly), I'm aware of the difference between how 3DC's surface mode works in contrast to the voxel mode. Not every aspect of each brush is portable to voxels (like dynamic subdivision). But I think Dynamesh (based on voxels) uses a similar principle (add/subtract volume, then remesh), minus the dynamic part of 3DC's voxel sculpting (because in ZB you still need to manually remesh the model after stretching it too much). And still, all ZB brushes feel and act the same way in any of its sculpting modes, and all brushes are available in all modes, with maybe the exception of a few very specialized ones. Brush customization began with the introduction of the General Clay Brush, or at least that's how it looked from the perspective of a 3DC end user. That brush was a pretty damn good start at the time, but it does have some quirks (like this one, for example: https://3dcoat.com/mantis/view.php?id=2331). If we're going to get a fully customizable brush system that affects the surface in an even more predictable and efficient way than the existing brushes, that's already awesome. But if this system were also consistent between surface and voxel modes, that's even better!