Everything posted by L'Ancien Regime
-
Houdini 17 Direct Modeling Sneak Peek..
L'Ancien Regime replied to Nossgrr's topic in CG & Hardware Discussion
Better hurry up and register for the Montreal Houdini Users Group then, and if you don't get an invite right away, email Chris. It's free, with 2 drinks and snacks, and these things are often in very cool venues. I've been to other industry meetups like this in Vancouver where there were cool door prizes too...like Nvidia graphics cards and shit. I won a $35 hardcover book at one for Maya. https://www.meetup.com/pro/houdini -
Houdini 17 Direct Modeling Sneak Peek..
L'Ancien Regime replied to Nossgrr's topic in CG & Hardware Discussion
October 2 downtown Montreal. -
Yes, the ZBrush UI is a mess and 3D Coat has a vastly superior workflow and UI. I attribute this mainly to the early influence of Meats Meier and ZBrush's emphasis on the 2.5D workflow (I may be wrong on this, but at the time that's how it seemed to me, and it's in no way intended as an insult to Meats Meier as an artist whatsoever). I never liked the 2.5D workflow and the compromises it imposed on the UI, and I found Andrew's straight-up 3D workflow to be vastly superior. And back to the main subject: I've personally found 3D Coat to be much less prone to crashes than other programs, even though I've always used the latest untested betas and alphas provided here. For me, 3D Coat has always been one of the more stable programs, so I have no complaints on that account.
-
This is the kind of data I've been searching for to no avail; there's a lot of BS about which cards perform best for artists and designers, and this analysis seems to be the clearest in its results. I see the card has gotten rid of the HDMI port in favor of 4x DisplayPort 1.4, which at cable lengths of 6' to 8' supports up to 8K, and there's a special plate for a single VR interface stereo plug. http://www.planar.com/blog/2018/2/28/displayport-14-vs-hdmi-21/

He concludes that for handling complex models the Quadro P4000 can handle thousands of parts in a single model and is the optimal price/performance solution, while even the best gamer cards don't come close. I won't even bother posting the Amazon ad for the 24GB Quadro P6000; it's something like $3,400.

Oh God...it's still so complicated. For modeling, single-core speed is of paramount importance, so the i7-8700K was rated the best despite the Threadripper having more cores. At 12 threads it's a pretty good price too, and the motherboard for it is pretty cheap. Throw in $130 for a case (I want a rack-mounted server case) plus PSU etc., wait for Cyber Monday, and you're coming in with a really powerful 3D modeling machine for around $2,200 or so, maybe less depending on the sale prices then.

Then in a year or two, when the 64-core/128-thread Threadrippers appear with the latest-gen motherboards, I'll still have plenty of dough to go for one then as a render box, game server, crypto miner, etc. Ditto for all the 7nm and RTX stuff...
-
So when you're doing animation cels in 2D with this program, and you've got, say, a character, do you only have to color in one line-drawing cel and the rest of the cels of the animated character get automatically painted, or do they each have to be painted individually? And... "Compatible with operations that use the Microsoft Surface Dial" OMG, that would be fun to do...not going to go out and buy an MS Surface, but that would be a lot of fun..
-
Houdini 17 Direct Modeling Sneak Peek..
L'Ancien Regime replied to Nossgrr's topic in CG & Hardware Discussion
That image I posted above was modeled by Phillipe von Prueschen. He did an animated short using some of his models made with Alexey's Houdini modeling tools plug-in. https://www.behance.net/cyte The guy does some excellent work in Houdini. -
https://wccftech.com/review/gigabyte-x399-aorus-xtreme-motherboard-review/

12 nm LP process technology – 1st-generation Ryzen and 1st-generation Threadripper were manufactured on GLOBALFOUNDRIES' 14 nm LPP (Low Power Plus) process, whereas 2nd-generation Threadripper, based on the Zen+ microarchitecture, is manufactured on GLOBALFOUNDRIES' 12 nm LP (Leading Performance) process. AMD's pitch is that it delivers higher clocks at the same power consumption as first-generation Threadripper, or lower power consumption at the same clock.

Precision Boost 2 – The automatic clock-up technology "Precision Boost" in 1st-generation Ryzen and Threadripper set the operating clock based only on the number of loaded cores. It has been redesigned to monitor CPU voltage, current, and core temperature and select an appropriate operating clock, so the chip now clocks up according to conditions regardless of how many cores are under load.

XFR 2 (Extended Frequency Range 2) – XFR, which runs the chip beyond Precision Boost's maximum clock when CPU temperature permits, moves to its second generation; as with Precision Boost 2, the core-count restriction is gone. Depending on the performance of the CPU cooling system, performance improves by up to 7%.

Reduced cache and main-memory latency – Access latency to cache and main memory is lower than on first-generation Threadripper: up to 13% better in L1, 34% in L2, 16% in L3, and 11% in main memory, resulting in a claimed 3% increase in instructions per clock (IPC).
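The quoted figures invite a quick back-of-the-envelope check: if you weight each level's latency cut by an assumed hit rate, you can see how the per-level reductions translate into an overall average-access-time gain. A minimal sketch, where the hit rates and base cycle counts are my own illustrative assumptions, not AMD's measurements:

```python
# Rough sanity check of the quoted latency figures: estimate how the
# per-level latency cuts shrink average memory access time (AMAT).
# Hit rates and base cycle counts are illustrative assumptions.

def amat(l1, l2, l3, dram, h1=0.90, h2=0.06, h3=0.03, hm=0.01):
    """Hit-rate-weighted average access latency in cycles."""
    return h1 * l1 + h2 * l2 + h3 * l3 + hm * dram

# Assumed first-gen Threadripper latencies (cycles): L1, L2, L3, DRAM.
base = amat(4, 12, 40, 250)

# Apply the quoted reductions: L1 -13%, L2 -34%, L3 -16%, DRAM -11%.
improved = amat(4 * 0.87, 12 * 0.66, 40 * 0.84, 250 * 0.89)

gain = (base - improved) / base
print(f"AMAT improvement: {gain:.1%}")
```

With these assumed numbers the AMAT drops by roughly 15%; actual IPC moves much less (the quoted 3%) because most instructions never touch memory at all.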
-
AMD Introduces Radeon Pro WX 8200
L'Ancien Regime replied to Carlosan's topic in CG & Hardware Discussion
This comes as a big surprise...I wasn't expecting this for another year or so.. https://wccftech.com/amd-confirms-new-7nm-radeon-graphics-cards-launching-in-2018/

AMD Confirms New 7nm Radeon Graphics Cards Launching in 2018

With all the media buzz surrounding NVIDIA's brand spanking new 12nm RTX 20 series Turing graphics cards over the past couple of weeks, which promise to deliver 40% better performance than their predecessors, a similarly exciting news story on the Radeon side has seemingly flown under the radar. Earlier this week the company confirmed in a press release, and later President and CEO Dr. Su confirmed in an interview with Marketwatch, that AMD is on track to launch the world's first 7nm graphics cards this year, while the world's first 7nm CPUs, built on the company's next-generation Zen 2 x86 64-bit core, are on track to be on shelves next year. The company had already demonstrated working 7nm GPU silicon back in June at Computex, which has been sampling since and is set to be available for purchase later this year.

Based on an improved iteration of the Vega architecture which debuted last year, 7nm Vega is nothing short of a beast. The new GPU supports intrinsic AI instructions and features four 8GB HBM2 stacks running across a 4096-bit memory interface for a total of 32GB of VRAM. Whilst the company hasn't disclosed detailed specifications for the new GPU, we could reasonably expect around one terabyte/s of memory bandwidth, higher clock speeds and significantly better power efficiency thanks to TSMC's leading-edge 7nm process technology, which has reportedly enabled the company to extract an unbelievable 20.9 TFLOPS of graphics compute out of 7nm Vega, according to one source. If true, it would make it the world's first 20 TFLOPS GPU.

https://www.anandtech.com/show/12910/amd-demos-7nm-vega-radeon-instinct-shipping-2018

In a fairly unexpected move, AMD formally demonstrated at Computex its previously-roadmapped 7nm-built Vega GPU.
As per AMD's roadmaps on the subject, the chip will be used for AMD's Radeon Instinct series accelerators for AI, ML, and similar applications. The 7nm Vega GPU relies on the 5th Generation GCN architecture and in many ways resembles the Vega 10 GPU launched last year. Meanwhile, the new processor features a number of important hardware enhancements, particularly deep-learning ops specifically for the AI/ML markets. AMD isn't detailing these operations at this point, though at a minimum I'd expect to see Int8 dot products on top of Vega's native high-speed FP16 support.

AMD also briefly discussed the use of Infinity Fabric with the new 7nm GPU. AMD already uses the fabric internally on Vega 10, and based on some very limited comments it looks like they are going to use it externally on the 7nm GPU. On AMD's Zeppelin CPU dies - used in the EPYC CPU lineup - AMD can switch between Infinity Fabric and PCIe over the same lanes depending on how a product is configured, so it's possible we're going to see something similar here. In other words, AMD can kick in Infinity Fabric when they have something else to connect it to on the other end. https://wccftech.com/amds-infinity-fabric-detailed/ -
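The ~1 TB/s bandwidth estimate follows directly from the bus width. A minimal arithmetic check, assuming a typical HBM2 per-pin data rate of 2.0 Gbps (the actual rate is not disclosed in the article):

```python
# Back-of-the-envelope check of the ~1 TB/s figure quoted for 7nm Vega.
# The 2.0 Gbps per-pin rate is an assumption (a common HBM2 speed bin),
# not a disclosed spec.

bus_width_bits = 4096        # four HBM2 stacks x 1024-bit interface each
pin_rate_gbps = 2.0          # assumed data rate per pin

bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8   # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s ~= {bandwidth_gbs / 1000:.2f} TB/s")
```

That lands at about 1 TB/s, which is why the "around one terabyte/s" expectation is a safe guess for any 4096-bit HBM2 design.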
Houdini 17 Direct Modeling Sneak Peek..
L'Ancien Regime replied to Nossgrr's topic in CG & Hardware Discussion
What is it with Russian guys? Andrew, this Alexey dude, and Arseniy Korablev over at Polybrush...they're brilliant...they get some mad idea and BAM, they deliver on it. Compare that with the guys over at Silo. Every few months there's some little update where they announce they've corrected some memory leak that's been making it crash in some key operation, like it's a big deal and their program isn't an antiquated app that's going nowhere, creatively speaking. And Alexey loves giving you all these nice touches, like this one: -
Houdini 17 Direct Modeling Sneak Peek..
L'Ancien Regime replied to Nossgrr's topic in CG & Hardware Discussion
This is coming in V2..can you do this in MeshFusion in Modo??

Stephen Hallquist (1 year ago): Just a thought, but shouldn't this be considered something other than what you have going on in Flux? Flux is procedural, and as soon as you go in and change the mesh settings for smoothness, all the insert mesh stuff would break? Still very cool!

Alexey Vanzhula (1 year ago): No. Insert Mesh can be a procedural or linear tool. Selection places can be converted to bounding regions, and you can increase the quality of upstream Flux nodes without losing the inserted meshes. -
Houdini 17 Direct Modeling Sneak Peek..
L'Ancien Regime replied to Nossgrr's topic in CG & Hardware Discussion
Personally I really like the network/dependency graph/history way of working. It's openly parametric, and so any clumsiness in manipulation is more than made up for in sheer power. It's like working in Catia or Siemens NX in that way. But after watching that video of their new modeling tools I have to say they seem to suck (if that's really all they're offering). I think that Russian guy's (Alexey Vanzhula's) $100 plug-in for Houdini is far better. It's kind of weird that they wasted their time and money making such a disappointing tool set when they should have just gone to Alexey, bought his tool outright, and hired him as a staff developer. While they're at it they should buy 3D Coat and make Andrew a staff developer, and then they'd rule the world. But there it is; Alexey has created a MeshFusion for Houdini. Looks great. https://gumroad.com/l/GVLLS -
Houdini 17 Direct Modeling Sneak Peek..
L'Ancien Regime replied to Nossgrr's topic in CG & Hardware Discussion
Hey, she mentions 3D Coat, and BEFORE ZBrush, when talking about retopoing a high-poly mesh!! (4 min mark) -
Intel Optane Memory Tested, Makes Hard Drives Perform Like SSDs. ... In short, it's a new memory tier, a faster storage repository for the most often used data and metadata, that resides between system memory (RAM) and the main storage subsystem. https://www.forbes.com/sites/davealtavilla/2017/04/26/intel-optane-memory-tested-makes-hard-drives-perform-like-ssds/#3946b7fb6090

So you're still going to need RAM even if you have Optane memory.

Intel Optane Memory for PCs looks like the average M.2 gumstick and in fact plugs into an M.2 slot on Intel 200 series chipset motherboards (7th gen Kaby Lake or newer). However, it's designed to cache slower storage volumes like hard drives, offering orders of magnitude faster response times and essentially enabling spinning media to perform more like a high-performance SSD in many applications, from workstation and content-creation workloads to gaming, web browsing and even productivity apps. It does this by storing the most frequently used data, metadata and access patterns on either a 16GB or 32GB Optane Memory stick, allowing the system to make far fewer trips to a much slower hard drive for data access.

For now it's just for caching and speeding up hard drives/solid state drives with an Intel 7th-generation processor. We still need RAM; DRAM latency is still lower than Optane's. Soooo...just save up your empties and maybe you'll be able to afford..
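The caching idea described above, a small fast tier holding the most recently used blocks in front of a slow drive, can be sketched with a toy LRU cache. This is a minimal illustration of the concept only, not Intel's actual Optane caching algorithm:

```python
# Toy model of a fast cache tier in front of a slow hard drive: keep
# the most recently used blocks so repeat reads skip the disk.
from collections import OrderedDict

class CacheTier:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()   # block id -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_id)   # mark as recently used
            return self.blocks[block_id]
        self.misses += 1
        data = f"disk:{block_id}"               # stand-in for a slow HDD read
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)     # evict least recently used
        return data

cache = CacheTier(capacity_blocks=4)
for b in [1, 2, 3, 1, 2, 1, 1, 5, 1]:           # hot block 1 dominates
    cache.read(b)
print(cache.hits, cache.misses)                 # prints 5 4
```

Most of the reads of the hot block are served from the fast tier, which is the whole pitch: the working set lives on the 16GB/32GB Optane stick while the bulk of the data stays on cheap spinning media.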
-
One thing I do notice is that for Boxx and other suppliers of workstations, as well as for people using photogrammetry programs, 64GB is no longer really enough. 128GB is the new standard, so take the money you save by not buying the top-of-the-line new GPUs or a 32-core CPU and spend it on DDR4 RAM, 128GB of it.
-
Forbes seems to be quoting your opinions now haha https://www.forbes.com/sites/jasonevangelho/2018/08/21/nvidia-rtx-20-graphics-cards-why-you-should-jump-off-the-hype-train/#41dcba773f8e So get the 16-core/32-thread AMD Threadripper and the 1080 Ti for now, and wait for the next-gen 7nm Nvidia GeForce RTX in 2019.
-
It's going to be quite a while before AMD can match the current Turings, it appears; we're looking at the second half of 2020. https://www.extremetech.com/gaming/272764-new-amd-gpu-rumors-suggest-polaris-refresh-in-q4-2018

Next up, Navi. There's a rather confused suggestion that Navi will be both a mainstream and high-end part arriving sometime in the 2019 timeframe, and that it will debut in the budget segment first before eventually launching as a high-end, HBM2-equipped part sometime "much later." The suggested time frame is:

Q4 2018: Polaris 30 (performance up 15 percent)
H1 2019: Navi 10 (budget part; timing on this introduction is unclear, with additional reference to a Q1 release)
H2 2020: A new, high-end Navi part, as a "true" successor to Vega
-
Well, AMD does have HBM2, and the cryptocurrency boom is over, to the degree that it's lowering stock prices, and there are Black Friday price reductions to look forward to. Someone should have told Intel that new-generation chips should outperform their predecessors; then they wouldn't be in the fix they're in now. I'm closely watching the situation. It'll be interesting.
-
I don't feel this is price gouging at all (and it's $1,119 USD, not $1,219; I should be the one crying, since I'll be paying in Canadian pesos). Stop and think how Intel would have done this: they would have held back Turing, put out Volta, squeezed all the profit out of Volta they could, and then a year from now they would have come out with Turing. If Turing actually does what they say it does, then this is a huge technological jump and an amazing bargain at these prices. RTX with Turing is not just another incremental advancement; this is five years of progress jammed into one. Maybe AMD can catch them, but so far they're not even close. The AMD/ATI vs Nvidia race is in no way comparable to the AMD vs Intel race.

What I do think is somewhat of a ripoff has been Quadro prices, historically. As I've said before here, I think, or rather suspect, the whole pro card/gamer card thing is a bit of snake oil salesmanship, particularly if you're not into high-end engineering. It's going to be interesting to see how Intel fits into this competition.
-
One Turing is pretty much equivalent to four Voltas...so they actually jumped a year in GPU development, just leapfrogged the Volta and cast it aside, which is pretty amazing. Nvidia is a company in a big hurry. And yes, there is NVLink and some kind of variant of the NVSwitch for the GeForce Turing cards. This stuff is just getting bizarre... NVSWITCH_1920x1080_MUSIC-final.mp4
-
I contacted Nvidia support yesterday; no announcement yet on whether the new Turing GeForce GPUs support NVLink or NVSwitch. And from what I've read in the PDFs available from Nvidia's site, NVSwitch is only documented for use with a specific Intel multi-Xeon board (if you're interested, ask me and I'll look it up) with the three Quadro Turing cards. There doesn't seem to be any documentation showing what an actual implementation of NVSwitch looks like aside from the diagram I've posted below.