Zeddicus
  1. I came across a comparison video that illustrates the different methods for creating sharp edges quite well. Insofar as poly counts are concerned, there really is a big difference between smoothing groups and adding extra edges all over the place. And that's just one simple mesh, nothing like a typical real scene (such as a game running on the Unreal Engine).
  2. Yeah, but it's still worth discussing from time to time. I like your idea, but wouldn't it only work if you UV unwrapped each mesh based on its original smoothing group configuration in Max? That would be both challenging and time consuming, probably just as time consuming as redoing them all from scratch once you're back in whichever program supports them.

Speaking of which, the number of programs that do is growing. 3ds Max and the Unreal game engine have been mentioned; another is Modo. Oddly, Mudbox doesn't support them despite being an Autodesk app. Most apps, like Maya and Blender for example, do support something akin to smoothing groups, but it's pretty basic in functionality, generally amounting to setting edges to hard/creased versus soft/uncreased, and thus not quite as advanced as what Modo and Max offer. Hopefully this will change over time.

I find smoothing groups particularly useful with the Turbosmooth modifier in Max (for rendering purposes), which has an option to separate by smoothing group. This is superior to adding a bunch of edge loops, which can crank the poly count way up. It actually makes sense for a game engine like Unreal to support them when you think about it, as they can help give the impression that a low poly model is high res when it actually isn't. Normal maps are another such trick, one that was also unsupported by most apps at one time and also had to be fought for (ubiquitous PTEX support currently being another). These simple rendering tricks are less computationally intensive than adding lots of unnecessary edges, which add up quickly depending on the mesh, especially when you have a lot of such assets in a scene. I won't be surprised at all when more game engines eventually adopt smoothing groups for this very reason. The side benefit of keeping each individual mesh as simple as possible is also useful because, once you start adding lots of edge loops to a mesh, it becomes unwieldy, especially when last minute edits are required.

3D Coat does support smoothing groups, though it's been a challenge to get it to do so properly (see the link below). 3D Coat even exports them properly to the OBJ format, and 3ds Max sees them just fine upon import. Unfortunately, 3D Coat still refuses to respect and preserve them when importing an OBJ file that already has them. As the original topic starter pointed out, if you export an OBJ with smoothing groups from 3ds Max and then re-import it straight back into Max, all the smoothing groups are still there, intact and unchanged. For some reason 3D Coat wipes them out, either replacing them with its own automated setup (if the smoothing option is enabled during import) or putting all faces into the same smoothing group (if it's disabled). You can verify exactly what survived a round trip by inspecting the OBJ file itself; see the sketch after this post. If only 3D Coat would respect the smoothing groups already there instead of changing them, we could export our UV'ed meshes from 3D Coat back to programs like 3ds Max without the painful, hugely time wasting rigmarole of setting them all up again from scratch. I'd give my left you know what for that... seriously, I would.

http://3d-coat.com/mantis/view.php?id=493

PS: Importing an OBJ with smoothing groups for PTEX painting is even worse. I'm not at all sure what 3DC is doing to the normals of the mesh, but once back in 3ds Max, any smoothing groups you try to set back up won't have any effect. The only way to fix it is to set the OBJ importer in 3ds Max to faceted, or to apply two Normal modifiers, each set to flip normals. After collapsing the stack, the mesh will be faceted and smoothing groups work once again.
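For anyone who wants to check whether the smoothing groups actually survived a round trip without re-importing into Max, the `s` statements in the OBJ file can be inspected directly. Here's a minimal Python sketch (mesh.obj is just a placeholder filename):

```python
# Count the faces assigned to each smoothing group in an OBJ file.
# OBJ encodes smoothing groups as "s <n>" (or "s off") statements
# that apply to every "f" (face) line that follows them.
from collections import Counter

def smoothing_group_report(path):
    counts = Counter()
    current = "off"
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "s":
                current = parts[1] if len(parts) > 1 else "off"
            elif parts[0] == "f":
                counts[current] += 1
    return counts

for group, faces in sorted(smoothing_group_report("mesh.obj").items()):
    print(f"smoothing group {group}: {faces} faces")
```

Running it on the original Max export and again on 3D Coat's re-export should make the difference plain.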
  3. Using an overclocked GTX 670 (256-bit / 2048 MB GDDR5) with driver version 331.65 and 3D Coat 4.1.12 DX CUDA on Windows 7 SP1 64-bit, the lowest my frame rate will drop to with a 30 million triangle mesh is 17 fps. I downloaded the 340.52 driver yesterday but haven't installed it yet; still trying to figure out why I can't move, rename, or delete the installer lol.
  4. BeatKitano pretty much nailed it dead on with his last post. I use 3D Coat primarily for retopo, UV setup, and bitmaps because those are what it excels at, IMHO. I appreciate what 3D Coat is trying to do insofar as sculpting is concerned, particularly its ability to freely add, subtract, cut and whatnot, which comes closer to mimicking real clay than ZBrush does, especially when ease of use is considered. It just doesn't handle these and other sculpting tasks cleanly, which is where ZBrush, and even Mudbox, trump it. If Pixologic not only refines ZBrush but also adds all the stuff currently missing when they release 5.0, it will truly become unstoppable, with the final hurdles being its interface (which I'm fine with) and higher price. Not going to even comment on rendering lol.
  5. Were you upgrading from v3 EDU to v4 PRO? When I entered my v3 EDU serial, it took me to a page that said I would have to pay $84, but it didn't say which version I'd be getting, EDU or PRO. I posted a question about this in the v4 upgrade thread a few days ago but unfortunately haven't gotten an answer yet.
  6. I followed your instructions, AbnRanger, and the experience was exactly as you predicted. It's a very jarring, sudden change in performance, completely dependent on brush size. It's speedy and just fine, then performance totally drops off with just a tiny increase to the brush size; shrink it back down a tiny fraction and it goes right back to being speedy. Knowing Nvidia (and how corporate types think in general), their gaming cards are probably designed to work great only in games, while their pro cards are designed to work well only in CG apps. That way you're forced to buy both, or so they hope. Greed makes people do strangely illogical things. It would be interesting to see how AMD's 7970 would perform with 3DC if Andrew were to add OpenCL support.

The article posted by L'Ancien Regime is an interesting read. Thanks for sharing it with us! I'll probably replace my GTX 670 with whatever blows away the AMD 7970. I try not to upgrade too often because, even though it can be fun, it's often also time consuming, and I do so hate the inevitable troubleshooting that tends to go with it lol.

About memory with XMP: I had to turn it off because the timings it set would prevent my PC from getting past the BIOS screen, and sometimes not even that far. What I did was write down the settings it wanted to use, then enter identical settings into the BIOS myself using its manual override mode. Then it would boot perfectly fine and even ended up being super stable that way. I don't know why one way would work and the other wouldn't when the settings were identical, but there you have it. FWIW, they were Mushkin Enhanced Blackline Frostbyte DDR3-1600 rated for 9-9-9-24 timings at 1.5v. They easily ran at higher clock speeds so long as the timings were loosened, but after a lot of benchmarking I found that a slower frequency with tighter timings was actually a fair bit faster than a higher frequency with loose timings (a quick illustration of why follows this post). Naturally, YMMV.
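To illustrate why tighter timings at a lower clock can win: CAS latency is specified in clock cycles, so the real wait in nanoseconds depends on both the timing and the transfer rate. A quick Python sketch using the standard first-word latency formula (the configurations are just examples, not my exact kit):

```python
# First-word latency in nanoseconds = CL * 2000 / transfer rate (MT/s),
# since the I/O clock runs at half the transfer rate. Lower is better.
configs = [
    ("DDR3-1600 CL9",  1600, 9),
    ("DDR3-1866 CL11", 1866, 11),
    ("DDR3-2133 CL13", 2133, 13),
]
for name, rate, cl in configs:
    print(f"{name}: {cl * 2000 / rate:.2f} ns")
```

Run it and DDR3-1600 CL9 comes out at 11.25 ns, slightly ahead of the higher-clocked examples, which lines up with what my benchmarks showed.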
  7. It sounds as if they could have actually released the hardware a while ago. If it truly was delayed for the reasons given, then it surely could have been shipped right now, given the other concerns are all things that could have been dealt with after the fact (e.g., taken from their forum: "building our developer program, Airspace, OS interaction, etc"). I highly doubt the pre-order folks would mind finally having the finished, polished hardware now while downloading more stable software down the road. In a way it's a bit like the boy who cried wolf, the kind of thing that can make one start to doubt their sincerity, which is dangerous considering this particular product is a want, not a need, for most people. Hopefully they'll get it right come June. After looking at some images of it, it's a lot smaller than I thought, and I think it could actually be built into a Wacom tablet. In fact, I could see myself attaching it to the top of my tablet with stick-on velcro strips. What I see in my mind's eye is holding the tablet pen in my right hand and sculpting/texturing with it while using my left hand to manipulate the mesh (pan, zoom, rotate), kind of like holding one of those plastic models in one hand while painting it with the other. Can't really see myself using it to actually model, though.
  8. "I have no idea what LuxMark scores are supposed to mean/prove?" I'm wondering the same thing. Synthetic benchmarks don't mean much IMHO, but I'm sure this one does to owners of AMD cards whom, after reading my post, are no doubt foaming from the mouth at this very moment lol. I used Nvidia cards in the beginning, switched to ATI for a while as they were called at the time, then went back to Nvidia due in part to driver support. In over a decade of use, I've only ever run into two problems with an Nvidia product. The first was a driver issue about three years ago that sounds very similar to bisenberger's problem, and the second was when I tried to set up SLI back when it was still relatively new tech. That one may have actually involved firmware rather than drivers and the manufacturer bent over backwards to make it right. As for my experience with CLI versus CUDA, using the former in Vray RT results in approximately the same performance as having it set to CPU mode, while the latter is much faster than either. On the other hand I don't see any difference between CUDA and SIMP when comparing these two 3D Coat versions. Take from that what you will, but personally I think AbnRanger is right about the situation. Not sure what the problem is with his 670 though as mine (made by Gigabyte) works great. And before anyone freaks out, I don't have anything against AMD and I would happily use a card designed by them provided some basic reassurances were met. After all, it's not just all about performance, is it now.
  9. There is a button in the tool palette that says "Ptex local resolution" when you hover over it; its icon is a square with four smaller squares in its upper left corner. Clicking it brings up a bunch of options, at which point you can select which polygons you want to affect by left clicking on them (hold the Ctrl key and left click to deselect).
  10. The "Import images as mesh" in the file menu is a good one too.
  11. Now that this thread has been resurrected by AbnRanger, I can post this, since nobody seems to have mentioned it yet: http://corona-renderer.com/ It's a really nice renderer that will probably go commercial a few years down the road, but for now it's free to any Max users interested in testing it, and lots of great examples can be found on their forum after signing up.
  12. The excitement of coming across a new tool he's never seen before and the rush to let everyone else in on it? And a free tool, no less. It's enough to make anyone gloss over the minor detail that it's not created by Pixar, so I both forgive and thank him.
  13. This has been bugging me for a while now too, FWIW. That Mantis entry needs to be updated to include this ability. Then we'd not only know what data is contained on each layer, but also be able to limit what type of data is allowed on each layer.
  14. Yay for zombie thread resurrections! ZBrush has gotten just as bad, especially now that users can save everything, including undo information, within one file. My 3D Coat test file, a pre-beta project from a while back that I've been using a lot lately for testing purposes, saved out at 582 MB using v4, and all it contained was a high poly mesh (2,328,005 polygons, no UVs) and a low poly retopo mesh (5,184 quads, 3 UV sets); there is no color information at all. Like the thread creator, kay_Eva, I save incrementally and often, so this eats up storage space pretty fast.

Thankfully, I've found 7z compression actually works pretty well: those big 600 MB files become closer to 100 MB files, though that takes a little over a minute at the Ultra setting, which isn't exactly fast (test results below). I've wondered whether it might be feasible to implement 7z compression right into 3D Coat, complete with the option to choose the compression level or disable it altogether depending on your patience (a rough sketch of the idea follows this post). It's open source with a GNU LGPL license, so I think that means Andrew could implement it if he wanted to (not really my area of expertise though).

Test system:
Intel i7 3770K @ 4.5 GHz
Two 256 GB OCZ Vertex 4s in RAID-0
16 GB of PC3-12800 9-9-9-24 1.5v
Windows 7 Ultimate SP1 x64

7-Zip 9.22 beta / LZMA2 / 582 MB .3b file:
Fastest: 0m 5s / 149 MB
Fast: 0m 8s / 151 MB
Normal: 0m 46s / 134 MB
Maximum: 0m 51s / 133 MB
Ultra: 1m 06s / 134 MB

The first three took about 8 seconds to decompress. Thankfully, Andrew improved file loading a lot in the more recent betas: this file used to take about a minute to load in 3D Coat, and now it only takes about six seconds. Adding just the Fastest level of 7z compression/decompression wouldn't hamper performance all that much IMHO, and probably wouldn't even be noticeable for that matter. The benefit is pretty big too, at about a 4:1 savings ratio.
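For the curious, here's a rough Python sketch of the kind of thing I mean, using the standard lzma module. LZMA is the same algorithm family 7-Zip's LZMA2 belongs to, though the container and preset tuning differ, so the numbers won't match my 7-Zip results exactly; scene.3b is just a placeholder filename:

```python
# Compress a saved scene at several preset levels and report the time
# taken, the packed size, and the savings ratio, roughly mirroring the
# 7-Zip test above (preset 0 = fastest, 9 = slowest/tightest).
import lzma
import time

def compress_test(path, presets=(0, 1, 6, 9)):
    with open(path, "rb") as f:
        data = f.read()
    for preset in presets:
        start = time.perf_counter()
        packed = lzma.compress(data, preset=preset)
        elapsed = time.perf_counter() - start
        print(f"preset {preset}: {elapsed:5.1f}s, "
              f"{len(packed) / 2**20:6.1f} MB "
              f"({len(data) / len(packed):.1f}:1)")

compress_test("scene.3b")
```

Even the fastest preset should land in the same ballpark as my Fastest result above, which is why I think a fast-by-default option with an off switch would be the way to go.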
  15. I saw this movie in the theater when I was a child, and it's still one of my top ten favorites. As great as CG is, there is still something to be said for doing it old school lol. Anyways, I absolutely LOVE what you've done here with SkekUng. Beautiful coloring, and the sculpt looks exactly like him. I'd actually buy it if it were available as a small figurine. Awesome work, Garagarape!