3DCoat Forums

L'Ancien Regime

Everything posted by L'Ancien Regime

  1. Somebody once said to me that unless you were an experienced engineer/programmer in image processing from the JPL, then even with two years of that very expensive schooling from a place like Gnomon or VFS, you were looking at 8 to 10 years as an unpaid intern before anybody would even think of giving you a job as a TD. I'm not sure how true that is, but you'd better be from a rich family if you want to be a TD regardless.
  2. I noticed in one of those videos that there's a choice of NVLink bridge sizes, depending on the spacing of your PCIe slots and the need to cool the video cards.
  3. International student awards scheme The Rookies has released its 2018 rankings for the world's best CG schools, rating the performance of 580 schools in 87 countries. The USA's Gnomon – School of VFX and Animation for Film and Games topped the Animation rankings for the second year running, and was also rated the world's best school for visual effects. Other category winners were unchanged from 2017, with Belgium's Howest leading in game development, the USA's SCAD in motion graphics, and Australia's CDW Studios and Flinders University in illustration.

     So how much does it cost? $102,544.00 USD. Each term is 2.5 months (say Oct 8 to Dec 17), so the program works out to 30 months of rent. If you want/need solitude to study in peace and tranquility, then at $1,700 per month for 30 months that's $51,000.00 USD. If you don't care and can share, it's $800 a month for 30 months, and that's $24,000.00 USD.

     So for 2.5 years of schooling you're looking at from $126,544.00 to $153,544.00, and that's without counting the lost wages, the groceries you'll have to buy in that time, and transportation too if you can't find a place within walking distance of the school. And when you get out, what kind of a job will you get? How long will you have to serve as an unpaid intern? This is what happens to the best. But who knows? Maybe I'm just a cynical pessimist...
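The cost arithmetic above, as a quick sanity check (tuition and rent figures are the ones quoted in the post; wages, groceries, and transport are excluded, as noted):

```python
# Rough cost-of-attendance arithmetic for the figures quoted above.
# Tuition and rent numbers come from the post; lost wages, groceries,
# and transportation are deliberately excluded, as in the original.
tuition = 102_544          # USD, full program
months = 30                # 2.5 years of terms

rent_shared = 800 * months     # sharing accommodation
rent_solo = 1_700 * months     # studying in solitude

low = tuition + rent_shared
high = tuition + rent_solo

print(f"Shared: ${low:,}")   # → Shared: $126,544
print(f"Solo:   ${high:,}")  # → Solo:   $153,544
```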
  4. So let's just say you decided to buy that RTX 2080 or that RTX 2080 Ti, or even wanted to drop $3,000+ on a Titan V with its odd Volta chip in it, and you liked it so much you decided to buy two and run them in tandem with NVLink (assuming the PCIe link could even handle all that bandwidth)... Yup, that's right: $599.00 USD just for the bridge...
  5. Actually I haven't had a computer I can work on for over a year now, and I'm aiming for Cyber Monday to get all the parts for my next rig, so it's a matter of growing urgency to me. I have to say in favor of this RTX 2080 Ti that it's scoring really highly for V-Ray. This is very, very impressive, and the pricing doesn't seem unreasonable to me given that Nvidia hasn't come out with a new broadly issued GPU since Pascal two years ago. So I'm probably going to be posting a lot of news about this over the next couple of months. This thread isn't disappearing off the bottom of the page anytime soon.
  6. "Somebody in marketing realized that the 2080 and the 2080 Ti, strictly in terms of teraflops for FP32 (single-precision floating point), don't look all that much better than the 10 series on paper. So what do we do to make it look better? And the answer was to invent a new metric of measurement." The new metric is based around RTX-OPS (1:32 mark). "Nvidia calls a Floating Point Unit a CUDA Core..." It's all beyond me... I'm glad Andrew understands this stuff.
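The on-paper FP32 comparison that quote refers to comes down to a simple convention: peak FP32 TFLOPS ≈ shader cores × boost clock × 2 (one fused multiply-add per core per clock). A minimal sketch using the published Founders Edition specs, which shows why the gap looks modest on paper:

```python
# Peak FP32 TFLOPS ≈ cores × boost clock (GHz) × 2 FLOPs (one FMA) / 1000.
# Core counts and boost clocks are the published Founders Edition specs;
# real-world clocks vary, so treat these as paper numbers only.
def fp32_tflops(cores: int, boost_ghz: float) -> float:
    return cores * boost_ghz * 2 / 1000

gtx_1080_ti = fp32_tflops(3584, 1.582)   # ~11.3 TFLOPS
rtx_2080_ti = fp32_tflops(4352, 1.545)   # ~13.4 TFLOPS

print(f"GTX 1080 Ti: {gtx_1080_ti:.1f} TFLOPS")
print(f"RTX 2080 Ti: {rtx_2080_ti:.1f} TFLOPS")
```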
  7. https://wccftech.com/exclusive-nvidia-rtx-series-msrp-pc-cost/ Potentially some bad news... This story goes into one of the biggest problems stopping AIB (Add-In-Board) partners from achieving MSRP pricing, and what it has to do with President Trump's trade tariffs: "…there is a 10% tariff impacting $200B of goods that is scheduled to take effect on 10/1/2018. Every Monday there is an update on whether there is any progress made by US and China in the negotiations. If the tariff does take effect on 10/1, then […] would try to move assembly and testing over to Taiwan in order to avoid the tariff, but most likely they would have to push back shipment lead times or else raise prices while they get it all sorted out." – Red feathered bird.
  8. https://wccftech.com/nvidia-dlss-explained-nvidia-ngx/ Standing for Neural Graphics Acceleration, NGX is a new deep-learning-based technology stack that's part of the RTX platform. Here's a brief description from NVIDIA: "NGX utilizes deep neural networks (DNNs) and a set of Neural Services to perform AI-based functions that accelerate and enhance graphics, rendering, and other client-side applications. NGX employs the Turing Tensor Cores for deep learning-based operations and accelerates delivery of NVIDIA deep learning research directly to the end-user." Note that NGX does not work on GPU architectures before Turing.
  9. https://wccftech.com/amd-epyc-rome-7nm-64-core-cpu-performance-benchmark-leak/ Alleged AMD EPYC 'Rome' 7nm-based 64-core processor performance leaks out – scores an incredible 12,500 points in the Cinebench multi-threaded benchmark. The chip was tested in the Cinebench R15 multi-thread benchmark and scores an astonishing 12,587 points, which is beyond anything current-generation processors can achieve. The AMD Ryzen Threadripper 2990WX scores around 5,500 points in the same benchmark with 32 cores and 64 threads, so the leaked score shows more than twice the performance of the flagship Threadripper SKU. There's also the EPYC 7601 SKU, which scores around 6,000 points thanks to its octa-channel memory support, compared to quad-channel on the Threadripper CPUs.
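The scaling claim in that leak can be sanity-checked with simple per-thread arithmetic (scores as quoted above; the leaked chip's 128-thread count is an assumption based on 64 cores with SMT enabled):

```python
# Per-thread Cinebench R15 arithmetic for the scores quoted above.
# (score, thread count); Rome's 128 threads assume SMT on 64 cores.
scores = {
    "EPYC 'Rome' (leak)":  (12587, 128),
    "Threadripper 2990WX": (5500, 64),
    "EPYC 7601":           (6000, 64),
}

for name, (score, threads) in scores.items():
    print(f"{name}: {score / threads:.0f} points/thread")

ratio = 12587 / 5500
print(f"Rome vs 2990WX: {ratio:.2f}x")   # more than twice, as claimed
```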
  10. Better hurry up and register for the Montreal Houdini Users Group then, and if you don't get an invite right away, email Chris. It's free, with 2 drinks and snacks, and these things are often in very cool venues. I've been to other industry meetups like this in Vancouver where there were cool door prizes too... like Nvidia graphics cards and shit. I won a $35 hardcover book on Maya at one. https://www.meetup.com/pro/houdini
  11. Yes, the ZBrush UI is a mess and 3D Coat has a vastly superior workflow and UI. I attribute this mainly to the early influence of Meats Meier and ZBrush's emphasis on the 2.5D workflow (I may be wrong on this, but at the time that's how it seemed to me, and it's in no way intended as an insult to Meats Meier as an artist whatsoever). I never liked the 2.5D workflow and the compromises it imposed on the UI at all, and I found Andrew's straight-up 3D workflow vastly superior. And back to the main subject: I've personally found 3D Coat to be much less prone to crashes than other programs, even though I've always used the latest untested betas and alphas provided here. For me, 3D Coat has always been one of the more stable programs, so I have no complaints on that account.
  12. Actually one of the things that most intrigues me about 3d Coat is that Andrew has been the innovator that others must follow. Zbrush's creators would be well advised to use 3d Coat and find out where they should go, not vice versa. And that's a joke because I think they already have.
  13. This is the kind of data I've been searching for to no avail; there's a lot of BS about which cards perform best for artists and designers, and this analysis seems the clearest in its results. I see it's gotten rid of the HDMI port and has 4 x DisplayPort 1.4, which at 6' to 8' supports up to 8K, and there's a special plate for a single VR interface stereo plug. http://www.planar.com/blog/2018/2/28/displayport-14-vs-hdmi-21/

     He concludes that for handling complex models the Quadro P4000 can handle thousands of parts in a single model and is the optimal price/performance solution, while even the best gamer cards don't come close. I won't even bother posting the 32GB VRAM Quadro P6000 Amazon ad; it's something like $3,400. Oh God... it's still so complicated.

     For modeling, single-core speed is of paramount importance, so the i7-8700K was rated the best despite the Threadripper having more cores. At 12 threads it's a pretty good price too, and the motherboard for it is pretty cheap. Throw in $130 for a case (I want a server rack-mounted case), a PSU, etc., wait for Cyber Monday, and you're coming in with a really powerful 3D modeling machine for around $2,200 or so, maybe less depending on the sale prices then.

     Then in a year or two, when the 64-core/128-thread Threadrippers appear with the latest-gen motherboards, I'll still have plenty of dough to go for one then as a render box, game server, crypto miner, etc. Ditto for all the 7nm and RTX stuff...
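The ~$2,200 build target above can be tallied the same way. Only the $130 case and the rough total come from the post, so the other line items below are purely illustrative placeholders, not quoted prices:

```python
# Hypothetical parts list sketching the ~$2,200 target mentioned above.
# Only the case price ($130) and the rough total come from the post;
# every other line item is an illustrative placeholder.
parts = {
    "i7-8700K CPU":    370,
    "Motherboard":     150,
    "Quadro P4000":    800,
    "32 GB RAM":       300,
    "1 TB NVMe SSD":   250,
    "Rack-mount case": 130,
    "Power supply":    120,
}

total = sum(parts.values())
print(f"Estimated build total: ${total:,}")
```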
  14. So when you're doing animation cels in 2D with this program, and you've got, say, a character, do you only have to color in one line-drawing cel and the rest of the cels of the animated character get automatically painted, or do they each have to be painted individually? And... "Compatible with operations that use the Microsoft Surface Dial" – OMG, that would be fun. Not going to go out and buy an MS Surface, but that would be a lot of fun...
  15. That image I posted above was modeled by Phillipe von Prueschen. He did an animated short using some of his models made with Alexey's Houdini modeling plugin. https://www.behance.net/cyte The guy does some excellent work in Houdini.
  16. https://wccftech.com/review/gigabyte-x399-aorus-xtreme-motherboard-review/ 12nm LP process technology – 1st-generation Ryzen and 1st-generation Threadripper were manufactured on GLOBALFOUNDRIES' 14nm LPP (Low Power Plus) process, whereas 2nd-generation Threadripper, based on the Zen+ microarchitecture, is manufactured on GLOBALFOUNDRIES' 12nm LP (Leading Performance) process. AMD's pitch is that at the same power consumption it clocks higher than first-generation Threadripper, and at the same clock it draws less power.

     Precision Boost 2 – The automatic clock-up technology "Precision Boost" in 1st-generation Ryzen and Threadripper determined the operating clock purely by the number of loaded cores. It has been redesigned to monitor CPU voltage, current, and core temperature and select an appropriate operating clock, so regardless of the number of cores under load, the chip clocks up according to the situation.

     XFR 2 (Extended Frequency Range 2) – XFR, which runs the CPU above the maximum Precision Boost clock "if the CPU temperature permits," moves to its 2nd generation, and as with Precision Boost 2 the core-count restriction has been removed. Depending on the performance of the CPU cooling system, performance improves by up to 7%.

     Reduced cache and main-memory access latency – Access latency to cache and main memory is lower than on first-generation Threadripper: up to 13% better in L1, up to 34% in L2, up to 16% in L3, and up to 11% in main memory, which is said to result in a 3% increase in instructions per clock (IPC).
  17. This comes as a big surprise... I wasn't expecting this for another year or so. https://wccftech.com/amd-confirms-new-7nm-radeon-graphics-cards-launching-in-2018/ AMD Confirms New 7nm Radeon Graphics Cards Launching in 2018. With all the media buzz surrounding NVIDIA's brand spanking new 12nm RTX 20 series Turing graphics cards over the past couple of weeks, which promise to deliver 40% better performance than their predecessors, a similarly exciting news story on the Radeon side has seemingly flown under the radar. Earlier this week the company confirmed in a press release, and later President and CEO Dr. Su confirmed in an interview with Marketwatch, that AMD is on track to launch the world's first 7nm graphics cards this year, while the world's first 7nm CPUs, built on the company's next-generation Zen 2 x86-64 core, are on track to be on shelves next year. The company had already demonstrated working 7nm GPU silicon back in June at Computex, which has been sampling since and is set to be available for purchase later this year. Based on an improved iteration of the Vega architecture which debuted last year, 7nm Vega is nothing short of a beast. The new GPU supports intrinsic AI instructions and features four 8GB HBM2 stacks running across a 4096-bit memory interface for a total of 32GB of VRAM. While the company hasn't disclosed detailed specifications for the new GPU, we could reasonably expect around one terabyte/s of memory bandwidth, higher clock speeds, and significantly better power efficiency thanks to TSMC's leading-edge 7nm process technology, which has reportedly enabled the company to extract an unbelievable 20.9 TFLOPS of graphics compute out of 7nm Vega, according to one source. If true, it would make it the world's first 20 TFLOPS GPU. https://www.anandtech.com/show/12910/amd-demos-7nm-vega-radeon-instinct-shipping-2018 In a fairly unexpected move, AMD formally demonstrated at Computex its previously-roadmapped 7nm-built Vega GPU.
As per AMD's roadmaps on the subject, the chip will be used for AMD’s Radeon Instinct series accelerators for AI, ML, and similar applications. The 7nm Vega GPU relies on the 5th Generation GCN architecture and in many ways resembles the Vega 10 GPU launched last year. Meanwhile, the new processor features a number of important hardware enhancements, particularly deep-learning ops specifically for the AI/ML markets. AMD isn't detailing these operations at this point, though at a minimum I'd expect to see Int8 dot products on top of Vega's native high speed FP16 support. AMD also briefly discussed the use of Infinity Fabric with the new 7nm GPU. AMD already uses the fabric internally on Vega 10, and based on some very limited comments it looks like they are going to use it externally on the 7nm GPU. On AMD's Zeppelin CPU dies - used in the EPYC CPU lineup - AMD can switch between Infinity Fabric and PCIe over the same lanes depending on how a product is configured, so it's possible we're going to see something similar here. In other words, AMD can kick in Infinity Fabric when they have something else to connect it to on the other end. https://wccftech.com/amds-infinity-fabric-detailed/
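The ~1 TB/s bandwidth estimate follows directly from the quoted memory configuration: bandwidth (GB/s) = bus width in bits ÷ 8 × per-pin data rate in Gbps. A quick sketch; the 2.0 Gbps pin rate is an assumed typical HBM2 speed at the time, not a confirmed spec for this GPU:

```python
# Memory bandwidth = (bus width in bits / 8) × per-pin data rate (Gbps),
# giving GB/s. The 4096-bit interface is quoted above; the 2.0 Gbps pin
# rate is an assumed HBM2 speed, not a confirmed spec.
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits / 8 * pin_rate_gbps

vega_7nm = bandwidth_gbs(4096, 2.0)
print(f"7nm Vega: {vega_7nm:.0f} GB/s")   # → 1024 GB/s, i.e. ~1 TB/s
```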
  18. https://wccftech.com/amd-ryzen-threadripper-2950x-16-core-cpu-899-usd-launch/ Black Friday and Cyber Monday aren't that far away now.
  19. What is it with Russian guys? Andrew, this Alexey dude, and Arseniy Korablev over at Polybrush...they're brilliant...they get some mad idea and BAM, they deliver on it. Compare that with the guys over at Silo. Every few months there's some little update where they announce they've corrected some memory leak that's been making it crash in some key operation like it's a big deal and their program isn't an antiquated app that's going nowhere, creatively speaking. And Alexey loves giving you all these nice touches, like this one;
  20. This is coming in V2... can you do this with MeshFusion in Modo?? Stephen Hallquist (PLUS), 1 year ago: "Just a thought, but shouldn't this be considered something other than what you have going on in Flux? Flux is procedural, and as soon as you go in and change the mesh settings for smoothness all the insert-mesh stuff would break? Still very cool!" Alexey Vanzhula, 1 year ago: "No. Insert Mesh can be a procedural or linear tool. Selection places can be converted to bounding regions and you can increase the quality of upstream Flux nodes without losing the inserted meshes."