3D Coat Forums

All Activity


  1. Past hour
  2. Shift tool in the paint room broken?

    Testing vertex paint mode here; it is working fine. Can I ask what happens if you change the alpha and switch off any strips and stencils?
  3. Shift tool in the paint room broken?

    I think it was because I was in vertex paint mode; I was having problems with the gradient not working before as well.
  4. Shift tool in the paint room broken?

    I guess it does not work in vertex paint mode; I think that's what it is.
  5. Today
  6. As for memory, there has been quite a change in RAM compatibility since the first few months after Ryzen 7 was released. The CAS 14 / Samsung B-die modules cost more because they were considered the premium RAM modules. I got 2 x 16GB modules of the G.Skill Trident Z, the CAS 16 / 3200 MHz stock. Not bad for 32GB, and I have room to add 2 more if needed. It's pretty rare for me to run out of RAM at 32GB, but I will probably add 2 more modules for a total of 64GB soon, just to be on the safe side. https://www.newegg.com/Product/Product.aspx?Item=N82E16820232415&cm_re=gskill_trident_z-_-20-232-415-_-Product

At any rate, when I first put my system together, my RAM would only run as high as 2400 MHz, even though it was rated at 3200 MHz. That's where the whole Samsung B-die stock issue came into play: up to that point, those were the only modules able to reach 3200 MHz. However, in August, AMD released new code for motherboard manufacturers to greatly expand memory compatibility, and a few weeks later most of those manufacturers released BIOS updates to include this optimization. That worked. On my ASRock board, I was able to get the full 3200 MHz with no issues: I just choose the XMP 2.0 profile for 3200 MHz and it works great.

I'm still using an SSD (1TB) for my main drive, but an M.2 card would definitely be faster. I haven't invested in one yet, because this system already boots up really fast. I have my Ryzen 7 1700X running at 3.8 GHz, and it runs like a champ. 3ds Max loads really fast now (10-15 sec), whereas before it might take up to a full minute to fully load. No kidding.
  7. It's good that Blender Cycles can utilize both an AMD and an NVidia card simultaneously, but you probably won't find that compatibility in other GPU rendering engines. In fact, most GPU engines still require CUDA-enabled cards (NVidia only, as that is their exclusive technology). That's why I have a GTX 1080 & 1070: it allows me to render with any GPU render engine, and it's really fast. However, Cycles and AMD's ProRender can use AMD cards right now, and V-Ray RT has an OpenCL mode in 3ds Max and Maya, so it can render with AMD cards as well.

With all of that said, the AMD Pro Duo is effectively 2 workstation GPUs on one card, hence the 32GB (16GB per GPU). Since it is a workstation card, it should handle dense geometry in the 3D viewport much better than any GTX/gaming card. But since the Pro Duo is seen by 3D programs like Blender, 3ds Max, Maya, etc., as 2 separate GPUs, there is no real advantage to the 2nd GPU or its dedicated 16GB of VRAM: in those applications, you are only using one of those GPUs and 16GB of VRAM for all work except GPU rendering.

The new Radeon Vega Frontier Edition is on sale at Newegg for $699. https://www.newegg.com/Product/Product.aspx?Item=N82E16814105073 It's basically a blend between the gaming performance of a gaming card (on par with the GTX 1080) and the benefits of a workstation card, with 16GB of VRAM. So, if you want a relatively inexpensive workstation card that is also very good at gaming, that might be the best option.

I went with 2 NVidia GTX 10xx's because they hit on compatibility (CUDA & OpenCL), pricing, performance, and energy efficiency. Granted, each card has only 8GB of VRAM, but that is more than adequate for what I may use it for. Plus, some GPU render engines have "out of core" support, meaning they will still render if you run out of VRAM by falling back to system RAM; two that I know of are Redshift and now AMD's ProRender.

In 3ds Max, and maybe Blender too, rendering can use both the GPU and CPU simultaneously. So it really comes down to whether you REALLY need more than 8GB of VRAM on your primary display card. If you do, then either of the AMD workstation cards would be the best cost/benefit option. The 1080 Ti has 11GB of VRAM, so that might be another option to consider.
  8. Hi Falconius, thanks for your comment! It's great to know that 3D-Coat works very well with AMD technology in your PC build; I'm glad to hear that! Every time I tried to put together a PC, I wasn't successful, because the person guiding me didn't know much about the 3D programs we're accustomed to using and recommended a build that, once assembled, didn't have the performance I was expecting, which left me very frustrated. In recent days I've done a lot of research and put together the configuration I posted here. Regarding the memory, I was afraid of choosing something that wasn't appropriate for the system; there are so many brands and specifications that I got confused about what would be suitable. I'll try to find out more about the GTX 1080 Ti video card; from your comment it should be excellent. Thanks for the guidance!

Watching reviews on YouTube and Google, I took note of the AMD Radeon Pro Duo 32GB (a pro video card). In the YouTube link I posted above, the presenter demonstrated this card with Blender. It had great points, and what caught my attention was his comment that with the Radeon Pro Duo 32GB he can use another program while rendering without any problems, unlike with the two 1080s, which locked up the system while rendering, forcing you to wait for the render to finish before using the PC. He also paired the Radeon Pro Duo 32GB with an NVidia 1080 and got better system performance that way.

As I said earlier, what do you think about using two cards working at the same time in the system? My intention was to have a pro card like the Radeon Pro Duo 32GB for professional 3D work and, in the future, add a card like the 1080 Ti (as you suggested) to be able to play games and boost 3D performance even more. I don't know whether I would see any benefit from using the two cards at the same time for 3D work; when I wanted to play, I would choose the 1080 Ti, for example.
  9. Uploading Instant Light 1.4.19 with a ton of additions: the Unbiased Engine, tessellation shaders, triplanar shader, etc. BLACK FRIDAY DEALS COMING UP SOON

Instant Light OFFICIAL RELEASE Version 1.4.19 STABLE - BETA
CRUCIAL BETA UPDATE ON OFFICIAL RELEASE

----Core optimisations----
System optimization (1.4.19)
Save/load optimizations: faster save and load (1.4.19)
UI improvements and speed optimisations (1.4.19)
Primitives (cube, sphere, plane, etc.) compatible with almost any system in Instant Light (1.4.19)
On new scene, the main engine system fully auto-reboots (1.4.19)

----Render Engine Updates----
Displacement / real-time tessellation on the advanced PBR shader (1.4.19)
Triplanar shader works on any model without UVs or UV unwrap: just import your textures and scale without seams! (1.4.19)
Added white point control to the STRT unbiased render engine (1.4.19)
Updated benchmark scene with STRT UNBIASED and tessellation (1.4.19)
Shadow catcher now auto-disables the stochastic reflections of STRT Unbiased when we force shadows on the HDRI images; the user needs to enable them manually (1.4.19)

----Additions----
Procedural cloth: added pin handles, so now you can create almost anything very accurately (1.4.19)
MatCap materials: Monster Clay, fully procedural (1.4.19)
Added STRT UNBIASED render engine in elements (1.4.19)
Primitives can be animated (1.4.19)
"Make cloth interactive" now becomes user-interactable only upon simulation start (1.4.19)

----Corrections----
Droplets and snow secondary textures: the model needs to be re-selected for them to come up correctly (1.4.19)
Triplanar auto-seam transparent color for seamless results (1.4.19)
Start/stop sim pins fixed (1.4.19)
Fixed 3D icons on/off bug (1.4.19)
Fixed elements drag (1.4.19)
Fixed HDRI floating UI random anomalies with textures and 3D real-time particles, among other issues (1.4.19)
Tessellation values up (1.4.19)

----Known Issues----
FBX export currently works on Win x64 systems and needs full access under Win 10

----Near-future release will contain----
Save/load transfer to GPU for even faster save/load (PRIORITY)
Texture creation addon (PRIORITY / almost done)
Procedural cloth: re-add pins on sim (FEATURE ADDED)
Decimator: keep original mesh and revert to it
Asset bundle for materials, models, etc.
Wacom FULL support (UPDATED, NEEDS TESTING)

logo_wizzard.bmp
  10. UV Automap cluster size?

    There isn't any way to control the result so that islands must be of a minimum size/cluster. 0002112: Auto Seem creating too many islands.

Below Auto Seams there is another tool, Sharp Seams. Does it help to solve this task?

------------------------------------------

Add Clusters tool (add a cluster, then unwrap): use the LMB to add a cluster center directly on a polygon face; clicking again on the same face will remove it. This tool lets you mark local selections of polygons, or clusters, and marks the bordering edges of these clusters as UV seams, allowing a whole UV island to be split into parts with little work.
  11. Decimation like Proxy Decimate setting?

    Which proxy method are you using: Decimate or Reduce/Resample?
  12. curve bug

    Try switching Brush along Curve = off
  13. I just bought a computer with a Ryzen 7 1700 a week or two ago, with a GTX 1080, and 3D-Coat runs butter smooth. I'm running 16GB of RAM at 3000 MHz and have had no problems. With the system specs you are going for, you aren't going to have any issues. For the video cards, if you are thinking about going that high-end, I'd suggest looking at the GTX 1080 Ti, which has significant performance advantages over the 1080: it has 11GB of memory, and in all the benchmarks I watched it was about 20% faster than the 1080, which was otherwise the fastest card in general (the Vega 64 LC matched the 1080 or was a little faster in some production tests, particularly Blender, but the Vega didn't come close to a Ti). I wasn't looking closely at the pro cards because they were out of my price range, but my understanding from what I saw was that the 1080 Ti was the more worthwhile buy; I didn't see any hard data like benchmarks to confirm whether that was true or not.
  14. Hi Andrew, I did enable the checkbox to import the UDIMs correctly. The issue was that the texture size setting and the number of tiles/UV sets can slow down 3DC and cause huge memory usage. If the mesh data loaded in has no texture loaded or connected (blank data), perhaps don't allocate memory? Memory should only be allocated once you start to interact with that mesh data with paint, or load/import a texture into that UV set/tile.

I have seen from recent research that an octree data structure can be used to make texture data more efficient to work with. Some form of proxy mode would also help (cache to a fast M.2 NVMe SSD? A compressed/low-res texture proxy, loading full-res textures only for the current and nearby UV islands around the brush, perhaps on stroke?). 3DC could also proxy/cache by taking a large texture like 8k/16k and treating it as several smaller tiles/textures; that would make streaming more feasible for memory efficiency, and smaller tiles can also load their neighbours in parallel in less time.

Take a look at the active branch of the Crunch texture compression library being maintained by the Unity engine: it has good compression with a recent 5x speedup of compression time, and it is open source, so 3DC could make use of it for building a texture cache. Then, during export, the texture chunks can be written back into the original 8k/16k textures.

Currently I only know of Mari as being able to effectively handle painting over multiple UDIMs without big issues, thanks to texture streaming. Unreal Engine 4 has a third-party tool called Graphite that also approaches texture handling differently to get performance gains (but it may not be applicable to real-time texture editing).
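The tile-splitting idea above can be sketched roughly as follows. This is a minimal NumPy illustration of chunking a large texture into fixed-size tiles and writing them back on export, not 3D-Coat's actual implementation; the tile size and helper names are hypothetical.

```python
import numpy as np

TILE = 1024  # hypothetical tile edge length, e.g. 1024 px

def split_tiles(texture):
    """Split an (H, W, C) texture into TILE x TILE chunks keyed by grid position."""
    h, w = texture.shape[:2]
    tiles = {}
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            # each chunk could be compressed/cached to disk independently
            tiles[(ty // TILE, tx // TILE)] = texture[ty:ty + TILE, tx:tx + TILE].copy()
    return tiles

def merge_tiles(tiles, shape):
    """Write the chunks back into a full-size texture, e.g. on export."""
    out = np.zeros(shape, dtype=next(iter(tiles.values())).dtype)
    for (gy, gx), tile in tiles.items():
        ty, tx = gy * TILE, gx * TILE
        out[ty:ty + tile.shape[0], tx:tx + tile.shape[1]] = tile
    return out
```

With a layout like this, only the tiles under and near the brush need to stay resident at full resolution; untouched tiles can remain compressed on disk until first paint, which is the memory-allocation behaviour suggested above.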
  15. Meshes take many hours to import into 3D Coat

    I can confirm that this hasn't been an issue with importing PLY or any other format now. It does not seem to handle file sizes (at least for binary PLY/FBX) larger than 500MB. Import in chunks of that size, though, and it is fine afterwards; just merge the pieces back together (I have not tested whether this causes export issues for files over 500MB).
  16. UV Automap cluster size?

    This auto UV map feature of 3D-Coat looks very good! One small issue: in noisy areas of the mesh it made many small islands, and I want to just UV unwrap the broader/bigger surfaces automatically as a starting point. Is there any way to control the result so that islands must be of a minimum size/cluster? Or must I go through afterwards, find all these tiny islands, and merge/weld them back manually?
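As a thought experiment, the minimum-size constraint could also be applied as a post-process that folds undersized islands into a neighbour. A minimal sketch, assuming island areas and island adjacency have already been computed from the unwrap (the `merge_small_islands` helper and its inputs are hypothetical, not a 3D-Coat API):

```python
def merge_small_islands(areas, adjacency, min_area):
    """Fold every island whose area is below min_area into its largest
    neighbouring island. `areas` maps island id -> UV area, `adjacency`
    maps island id -> set of neighbouring island ids. Returns a dict
    mapping each original island id to its final (merged) island id."""
    parent = {i: i for i in areas}

    def find(i):
        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # visit the smallest islands first so tiny slivers merge away early
    for i in sorted(areas, key=areas.get):
        root = find(i)
        if areas[root] >= min_area:
            continue
        # candidate targets: neighbours' current roots, excluding ourselves
        neighbours = {find(j) for j in adjacency[i]} - {root}
        if not neighbours:
            continue  # isolated tiny island; leave it alone
        target = max(neighbours, key=lambda j: areas[j])
        parent[root] = target
        areas[target] += areas[root]

    return {i: find(i) for i in areas}
```

The same idea works whether "merge" means welding the UV seam between the two islands or simply re-unwrapping the combined cluster, which is essentially what doing it by hand achieves.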
  17. I have noticed that with a mesh I import with bad geometry, decimation does not help much, but when I tried enabling proxy mode with Decimate 16x, it produced a much nicer topology (lost detail aside). Unfortunately, I cannot seem to replicate this proxy decimation (decimating a 16M mesh to 1M does not give the same results). I can use the Clean Clay brush to get a similar result manually, but I would like the decimation approach the proxy feature uses instead of the current decimation method.
  18. Done, thanks. Looking forward to seeing the bug fix.
  19. FYI this is what it does when you smooth
  20. Thanks all. From the advice given, I figured out a good workaround: with symmetry off, cut out one side with the Cut Off tool; do your sculpting; turn symmetry on; use the Clone tool, select the mesh, and apply. Done.
  21. Hi Brad, thank you very much for your comment! Really, there's no way to resist joining the AMD 1950X club; I believe we will not regret it! I've been doing a lot of research on the internet regarding memory that is optimized for the Threadripper 1950X; people mention the quad channels and recommend models that are very expensive. I did not know that a RAM CAS of 14 (C14) was ideal. On speed, I'm lost :( because I do not know whether it affects anything. I also did not know about the Samsung B-die architecture... Thanks for the tips! If you do not mind, which memory did you decide to use?

The video card I was interested in was the AMD Radeon Pro Duo 32GB, but I am hesitant about not being able to play games, because my current video card is an NVidia Quadro 600 and I cannot play with it. Of course, I will prioritize my work, but being able to play a little should be allowed! So I thought about getting the AMD Radeon Pro Duo 32GB and, in the future, adding another video card running at the same time. I do not know if my reasoning is right, but with the 2 video cards, performance in 3D programs could improve, and when I wanted to play a game, I could choose the gaming-oriented card for that. What do you think? It looks like this would not be highly recommended, though, as AbnRanger commented. Thanks for your help, and I hope you have great luck with your new machine too!
  22. curve bug

    For some reason I can't make the 'apply' action for my curves do anything, which means the tool is useless for now. See the attached video. Any ideas, or is this a bug? 2017-11-22_19h56_01.mp4
  23. Hey! I just recently joined the AMD 1950X club myself! Micro Center has a KILLER deal on it right now: $700 USD. I went ahead and got a Gigabyte X399 Aorus Gaming 7 motherboard with it too, at a discount. Though it's all still sitting in boxes (still waiting on other parts). I've been trying to do as much research as I can as well. While Intel's new chips seem to outshine AMD in single-threaded work, for the most part Threadripper's multithreaded performance is incredible. Again, being able to get mine as cheap as I did definitely helped with the decision. I also snagged the same Enermax AIO and look forward to getting it soon.

As for RAM, that seems to be the biggest conundrum so far. From my understanding, RAM with a CAS of 14 (C14) is ideal for getting the most out of a Ryzen chip. As for speed, I'm not sure how big a difference 3000 vs. 3200 will make; for me, I'm sure I'd never notice it. It also seems RAM built on the Samsung B-die architecture is best for Ryzen. The TR4 platform is quad-channel, which supposedly means it runs most efficiently with 4 or 8 DIMMs. With RAM being so terribly expensive now, I only ordered 2 x 8GB sticks for the time being. It won't be optimal, but at least it'll be enough to get the system up and running. Hopefully prices come down and I can start maxing out those slots!

I'm going to use my old GTX 770 for the time being; a new GPU isn't in the "cards" (pun intended!) right now. So I really can't comment on the GPU, aside from: the more VRAM, the better. I do know that NVidia cards use CUDA cores which, depending on your application of choice, can be very important. Well, I hope this helps! I'm definitely not an expert, so feel free to fact-check all this. Best wishes on your build! I hope to have mine up this weekend. - Brad

Here's a thread I've been browsing to try to make sense of it all.
  24. Yesterday
  25. How to 'hide' an island when UV Mapping

    Bug reported. Thanks! The Clear button causes some issues.
  26. Sandbox Tiling Engine

    Thanks Ratchet. I already tried this technique, and it isn't very nice to use.
  27. How to 'hide' an island when UV Mapping

    Calosan, Please review this very SHORT video. It appears there is a bug in UNHIDE ALL. Your second solution, where I select the island and press UNHIDE (not UNHIDE ALL) does work however. If you're interested in identifying the bug for yourself, please review with the obj I posted above.