3DCoat Forums

TimmyZDesign


Everything posted by TimmyZDesign

  1. Thanks Andrew and Team! 3D-Coat really makes modeling and texturing so much fun!
  2. Cool submarine Spiraloid! Good to see you back on these forums again!
  3. That must be it. All the people having trouble with the OpenGL version are probably using GeForce cards.
  4. The only OpenGL bug I ever encountered in 3D-Coat was the ghosting artifacts on matcap shaders in the Render Room. At first Andrew said he couldn't fix it, but then a few years later he actually did fix it. Otherwise I always thought it was weird how people complained about the OpenGL version, since it has always worked pretty well for me. On the other hand, I've been using Quadro cards the whole time, so maybe the issues people were having had to do with GeForce drivers or something.
  5. Ha ha! Yeah I was thinking the same thing recently. I guess we just need to accept the fact that any of our beloved apps could be discontinued at some point in the future, and we don't have much control over that. Usually it happens over the course of a few years though, so at least we have a little time to adapt. It doesn't bother me that much anyway; getting up to speed in a new app can go pretty quickly. Just finish one or two projects in it and it starts to feel like home.

     I liked the presentation about the Keds shoes. Those were 3D-printed shoes he had on the table there, and they looked just like real shoes! Someday I want to 3D print a dinner for my wife and laugh when she tries to eat it!

     That is an interesting point. I've often wondered whether I save more time using a ton of specialized apps, or whether it would be faster for me to just do it all in one app. I like to think it is faster using the perfect tool for each step, but constantly exporting and importing stuff wastes time too. I saw a guy making 2D vector art using MOI (the 3D app)! He then finished it off in Illustrator. He probably could have done the whole thing in Illustrator, but his strange multi-app method did actually seem faster and easier.
  6. Yeah I tried to watch the live event too, but it was very laggy and kept freezing up. I tried lowering the video resolution, but that didn't help. Then I tried closing the chat function, and that seemed to help quite a bit, but not completely. At that point the presentation was watchable, but still laggy. Towards the end of the presentation I opened up the chat function again and found that the moderators had been carefully answering ALL of the questions (much better than the 801 presentation, where they ignored any question they didn't like). I noticed several other viewers commenting about the lag problems too.

     The most interesting part of the presentation was hearing real employees at real studios talking about how they use Modo, and why they like it. It seems like the fast preview render was one of the most important features for them. The renderer seems quite nice. I also like that there now seems to be a good Houdini connection via OpenVDB, and that it can also be viewed pretty quickly in Modo's preview render. Still no animation layers in Modo though. Several of the studios that were interviewed mentioned using Maya for their animation instead of Modo. Overall though, Modo is starting to look good!
  7. YES! Adding these kinds of features to the Retopo Room is a great idea! Partly automatic retopology is exactly what we need! Please continue to work on the Retopo Room! Also, will the retopo polygons still snap to a high-density voxel mesh? Snapping is very important. In your demo video it does not show that happening...
  8. You can use the Fill Tool in the Paint Room to create a mask that looks like feathers or hair. Then go to the Sculpt Room and use that mask to extrude the feathers or hair. You can see the method in the videos below. In those videos the user is creating a mask that looks like little arrows, but you can make your mask look like feathers or hair instead. The videos are in Japanese, but you can still basically see what is going on. I hope this will give you some ideas on how to proceed. Good luck!
  9. Or maybe, if you have time, take a few minutes to create a similar model just for testing purposes: nothing special, just a basic model that has the same characteristics as your production model (multi-UV etc.). If that still crashes 3D-Coat, then share it with Andrew (and Javis etc.) so they can solve the bug. I know none of us have time to spare, but a little extra effort goes a long way toward fixing bugs. I have often been amazed by how quickly Andrew solves bugs that I have taken a little time to succinctly report to him. The key is to be very brief and clear with Andrew (no lengthy explanations). He is a great programmer, but he is very busy. Quick occasional reminders also help.
  10. Yes, like Javis said, this can give the user more control and speed up work in the Autopo or manual retopology tools. We could paint areas where we want edge flow to change direction, and then click apply to get automatic retopology there.
  11. Scripting is supported in 3D-Coat with AngelScript. Some users on this forum have written and shared useful scripts. For example: http://3d-coat.com/forum/index.php?showtopic=17561
  12. When painting a model in 3D-Coat, you can sync with Photoshop; once you save in Photoshop, the changes you made there will automatically update back in 3D-Coat.
  13. Quick update for AbnRanger: So I checked with my client today to be sure, and I'm not allowed to share anything. But as I said before, I don't think the sim I made would be good for comparing with other commercial simulators anyway, since it is just meant to be black dust blown off a table in an NPR scene.

     Back to the topic of this thread: I am an indie/freelancer, so I like all the "Indie" prices that have appeared in recent years. Houdini has an Indie license, as do Substance and Fusion, and Maya LT is essentially an Indie product as well. Even Autodesk's new CAD/CAM app "Fusion 360" is offering free licenses to Indie businesses. There are some limitations on all of these Indie products, but many of those limitations don't really matter to me. I also like the subscription payment models because I can afford paying small amounts every month instead of thousands all at once. These Indie licenses and subscription-style pricing are perfect for me right now. It makes me wonder if the number of Indie licenses being sold is a significant source of income for the companies making these apps. Maybe they found a serious revenue stream in the Indie demographic. I think if Adobe buys the Foundry, it is possible that a Modo/Mari/Nuke package set at a low monthly subscription price could sell very successfully to people like me. I'm kinda hoping that it will actually happen.
  14. Sorry, my client doesn't want me to share any videos of it until it is all done. I even had to fax the NDA to Rebusfarm just for the smoke sim (to be sure I covered all the bases, lol). But...I think I can post some screenshots of the Blender scene setup here, and probably one screenshot of the smoke, without causing any problems. I will check with my client to be sure and try to post some images here tomorrow. I don't know if you will find the images very informative though, because the smoke is just being used as black dust being blown off a table, but each frame is 4K (for compositing), so that is a big part of the reason why the renders are taking so long.

     And yes, I think some volumetrics are GPU-accelerated in Cycles, but specifically smoke and fire sims are not yet (as far as I know). I am using Blender 2.73 on this project, but I don't think fire and smoke are GPU-accelerated in 2.74 either. I even panicked for a while because I had GPU rendering turned on and nothing was showing up in the render, but then I switched to CPU and it finally appeared!
  15. Good news! I had already heard rumors that animation layers were supposed to be in 901; hopefully they will also get Houdini Engine for Modo (or at least fluid sims will somehow render easily and quickly in Modo). This weekend I made a nice smoke sim in Blender, and I was getting 15 minutes per frame because Cycles only renders smoke sims on the CPU. The fans in my computer went full blast when it was rendering, and everything froze up until the render was done. 15 minutes per frame was just too long, so I had to send the full animation to Rebusfarm. If Modo gets fluid sims, I hope they are GPU-accelerated.
  16. Yeah if Adobe sets a low price subscription for a special Modo/Nuke/Mari suite, then this could be great! I also hope Modo development doesn't slow down, it still needs a bunch of things like animation layers, a good link to Houdini for fluid sims, etc.
  17. I think a big part of making VFX look realistic has to do with lighting. Putting a CG character in a live-action environment will always look fake unless the lighting from the live-action shot matches the lighting on the CG character. There are a lot of special tricks that are used to make an accurate record of the lighting in a real-life scene so that it can be reused later in a 3D environment on a computer.

     For example, if you shoot video of a real-life environment, you can also take HDR photos of a large, highly reflective chrome ball which you place on the set. The photos of the ball must be taken right after you shoot the video (before the daylight or other environment lighting changes). Then you can use those photos to recreate the same lighting in a 3D scene on a computer. The photos of the chrome ball can be mapped to a dome-shaped object inside the software, and that HDR photo itself can be used to light the 3D scene. This lighting trick alone adds a great deal of realism to the CG character. This technique can also now be used to light scenes in the upcoming 3D-Coat 4.5!

     Of course, the big studios who worked on Transformers don't use the chrome-ball trick anymore. That method is reserved for productions on a much more limited budget. Instead they have very expensive cameras that take spherical photos (much like the camera Google uses to take its Street View photos for Google Maps). They also use special camera rigs that record all the movements the camera makes in 3D space while shooting live action on set. That 3D movement data is then matched to a camera within 3D software like Maya, so the CG characters appear to fit perfectly within the live-action footage. There is also motion-tracking software (like Boujou) which is used for the same purpose when 3D tracking data was not recorded on the live-action set.
So basically what I am saying is that a big part of creating the realism in movies like Transformers has to do with high tech expensive tricks, which are executed carefully by a big team of people. Each person excels at performing one small part of the project, and when all those hours of work are put together, they end up with some amazing stuff. It is the teamwork combined with clever technology which makes those movies so good. To be honest I have often been surprised to work for film directors who can't draw storyboard pictures any better than a child. I know professional character animators who can't model anything worthwhile in 3D. Some of the people working in the film industry aren't even artistic people at all. Some of them couldn't even really be described as creative! But they are all good at doing something, and if you put them all together on a project, with a lot of expensive hardware/software, then it will probably turn out pretty cool...most of the time.
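The chrome-ball trick described above boils down to simple reflection geometry: every pixel on the photographed ball records the light arriving from one world direction, so the photo can be unwrapped into a full environment map. Here is a minimal Python sketch of that mapping (my own illustration under a common simplifying assumption of a distant, orthographic camera; it is not code from 3D-Coat or any particular package):

```python
import math

def mirror_ball_direction(x, y):
    """Map a point (x, y) on a mirror-ball photo to the world
    direction whose light is reflected there.

    Assumes the ball fills the unit disk centred at the origin and
    the camera is far away on the +Z axis (orthographic
    approximation), so every incoming view ray is v = (0, 0, -1).
    """
    r2 = x * x + y * y
    if r2 > 1.0:
        raise ValueError("point lies outside the ball")
    # Surface normal of a unit sphere at this pixel.
    nx, ny, nz = x, y, math.sqrt(1.0 - r2)
    # Reflect the view ray about the normal: r = v - 2 (v . n) n,
    # with v = (0, 0, -1) so v . n = -nz.
    vdotn = -nz
    return (-2.0 * vdotn * nx,
            -2.0 * vdotn * ny,
            -1.0 - 2.0 * vdotn * nz)

# The centre of the ball reflects straight back toward the camera
# (+Z), while points near the rim reflect directions behind the
# ball -- which is why a single photo captures almost the entire
# lighting environment.
```

Unwrapping software essentially runs this mapping in reverse for every direction in the output environment map, then samples the HDR photo at the corresponding disk position.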
  18. Your work looks great Piacenti! Welcome to the forums!
  19. Cool! I will check this out soon. I am already very satisfied with my own Maya-style setup in Blender, but maybe this one will be even better.
  20. It looks like the conversion to NURBS is dependent on the edge flow of the polygonal mesh, am I right? For example, the rhino model in your demo would already have a nice curve around the tail/head/leg areas if there were an edgeloop there on the original polygonal mesh, correct? That way it would be easier to break the mesh up into nice parts or perform edits when it becomes a NURBS object. So, if we used edgeloop guides when performing automatic retopo in 3D-Coat, then I guess we would get a nicer resultant mesh after the NURBS conversion. I think it should be a priority of your plug-in to get the best possible logical placement of NURBS curves after the conversion, instead of just random patches stitched together all over the body. It seems to do a much better job with a subdivision surface with nice edge flow, as opposed to an autoretopo mesh with bad (illogical) edge flow. I am guessing the ZBrush autoretopo algorithm will probably yield better results for this than the current 3D-Coat autoretopo algorithm. Have you tried it yet?
  21. I am getting near to finishing a motion graphics project that I did about 50% in Blender. Normally I would prefer Maya, but I decided to try Blender as an experiment because I was given ample time to finish this particular project. My client seems to be satisfied with the part I did with Blender, and I am satisfied with it too.

     Of course I have used Blender for little things in the past, so it is not brand new to me, but I hadn't ever used it seriously until this project. The project required me to do some modeling, rigging, painting bind weights, animating a human arm, and rendering with Freestyle and vector motion blur. I would describe the overall experience as pleasant. I did notice, however, that Blender is not as good as Maya in many ways. Of course I am not an expert with Blender yet, but I did a bunch of research trying to find the same tools and workflows in Blender that I would have used in Maya, and I was a bit disappointed with the Blender versions of those tools and workflows. For example, soft select is very nice in Maya, but the Blender version (called proportional editing) is not as good because you cannot visually see the falloff like you can in Maya. In Maya the vertices are colored with a gradient, but in Blender there is just a big circle which appears on the screen, and you have to imagine what the falloff looks like inside of the circle. Of course you can see the effects of the falloff by moving the mesh, but it becomes more of a trial-and-error process than it is in Maya because you don't have good visual feedback for what you are doing.

     On the other hand, there are some things I really enjoyed about using Blender. It may sound strange, but I actually like the Blender UI. (I think the Maya 2016 UI has some nice improvements too.) Also, I set up Maya-style navigation in Blender and it works fine for me. All the hotkeys I left as Blender defaults. The Blender animation graph editor is nice too.
Basically I still prefer Maya so far, but I definitely like Blender despite its apparent shortcomings. I am writing a list of all the things I think need to be improved in Blender, and at some point I will post it somewhere. I don't know where feature requests, improvements, or bug fixes are supposed to be posted for Blender, or if the Blender developers are as responsive to users as Andrew is to 3D-Coat users, but there is a new release every two months, so I guess things are somehow getting fixed and improved. I foresee that I might be willing to use Blender again in the future depending on what the project requires. I might even be able to skip the Maya rental fee for a month or two and save some money!
  22. My guess is that Autodesk is trying to slowly push their architectural visualization and CAD customers out of Max and into other Autodesk alternative products like Revit (for architecture) and Fusion 360 (for CAD). Of course I think the transition process will take some years, but I think there is evidence showing that the process has already begun. Autodesk probably feels that Max is spread out across too many different industries and it is therefore hard to develop Max to suit all of those differences appropriately. Should they spend time developing character rigging tools or spend time making tools for design viz? Too difficult. So they drop Max and send the Media and Entertainment customers to Maya, the Architectural customers to Revit, the CAD/CAM customers to Fusion 360. That is my theory at least.
  23. My personal theory is that most (not all) of the new Max features are just stuff originally developed for Maya that was ported to work within Max. I think Autodesk wants to eventually EOL Max, but there are still too many people using it, so they are first gonna try to get people to move to other products. Once they see the subscriptions for Max drop low enough, they will EOL it. They want to get rid of Max because they see it as redundant. Maya pretty much does everything that Max does, so why pay developers to work on both apps when they could save money and just work on one?

     Why do I have this theory? First of all, Autodesk already has a history of getting rid of apps they see as redundant. It also looks to me like Maya development in the last few years has been much more intense than Max development. They created Maya LT to try to grab all the customers who use Max for game art. They also stopped working on the "Design" version of Max. They want customers who use Max for architecture to instead switch to the new BIM apps like Revit (which is designed specifically for architecture). Additionally, they are putting a big development effort into Fusion 360, which seems to be the new and improved all-inclusive CAD app that they want to eventually replace all their other design apps. It looks to me like Autodesk is trying to streamline their huge portfolio of apps. In some ways this is sad, because in the end they will EOL a bunch of stuff, but on the other hand it is great, because they are building some very awesome super apps to replace all the other redundant stuff! Maya 2016 is looking really great with big improvements and a big feature set, and Fusion 360 is turning out to be awesome too. Of course these are just my personal theories, and Autodesk's strategies may still change a lot in the future, but it's fun to share my speculations with other people who are as interested in this stuff as I am!
  24. Yeah it seems to me that UV Sets pretty much serve the same purpose as UV Tiles, but somehow UV tiling became the more popular method; people just got used to it, so now they expect it in all 3D software. UV tiling really is just a matter of spreading out the UV Sets in the x and y directions, so you can look at all of them in rows and columns at the same time. I guess seeing them spread out all at once really is better than searching through a pile of UV Sets stacked on top of each other, so in that respect UV Tiles are superior to UV Sets. Andrew really should just make the UV space bigger to allow for tiling, and then all the people who are used to tiles would be happy...but I guess that somehow isn't an easy change to make in the software, so he is hoping people will just accept using UV Sets instead.

     The biggest problem for people using tiling is that the location of each tile is actually important, since each tile is given a specific number, and each external 3D app uses that number to reference which texture map is applied to the 3D mesh. So if all the tiles are converted to UV Sets while working in 3D-Coat, then they need to be put back into their exact original location when they are exported back out of 3D-Coat (by both row and column). It would be best to keep them as tiles in their original locations for the entire process of moving them between apps. That's why Andrew really needs to support UV tiling in 3D-Coat.
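For anyone unfamiliar with why tile location matters: under the common Mari-style UDIM convention, a tile's number is pure arithmetic on its row and column, so moving a tile changes which texture file every external app looks up. A minimal Python sketch of that convention (my own illustration, not code from 3D-Coat or any specific app):

```python
def udim_tile(u, v):
    """Return the UDIM tile number for a UV coordinate.

    Mari-style convention: tiles run left to right in rows of ten,
    starting at 1001 for the 0-1 square, so one unit in U adds 1
    and one unit in V adds 10.
    """
    if not (0.0 <= u < 10.0 and v >= 0.0):
        raise ValueError("UV coordinate outside the UDIM range")
    col = int(u)   # column index, 0-9
    row = int(v)   # row index, 0 and up
    return 1001 + col + 10 * row

# udim_tile(0.5, 0.5) -> 1001 (the default tile)
# udim_tile(1.5, 0.5) -> 1002 (one tile to the right)
# udim_tile(0.5, 1.5) -> 1011 (one row up)
```

This is why converting tiles to stacked UV Sets and back has to preserve both row and column exactly: shift a shell by one UV unit and it lands in a different numbered tile, so the app binds the wrong texture.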