3DCoat Forums

L'Ancien Regime

Advanced Member
  • Posts

  • Joined

  • Last visited

Everything posted by L'Ancien Regime

  1. This comes as a big surprise...I wasn't expecting this for another year or so... https://wccftech.com/amd-confirms-new-7nm-radeon-graphics-cards-launching-in-2018/ AMD Confirms New 7nm Radeon Graphics Cards Launching in 2018 With all the media buzz surrounding NVIDIA's brand spanking new 12nm RTX 20 series Turing graphics cards over the past couple of weeks, which promise to deliver 40% better performance than their predecessors, a similarly exciting news story on the Radeon side has seemingly flown under the radar. Earlier this week the company confirmed in a press release, and later President and CEO Dr. Su confirmed in an interview with Marketwatch, that AMD is on track to launch the world's first 7nm graphics cards this year, while the world's first 7nm CPUs, built on the company's next-generation Zen 2 x86 64-bit core, are on track to be on shelves next year. The company had already demonstrated working 7nm GPU silicon back in June at Computex, which has been sampling since and is set to be available for purchase later this year. Based on an improved iteration of the Vega architecture which debuted last year, 7nm Vega is nothing short of a beast. The new GPU supports intrinsic AI instructions and features four HBM2 8GB stacks running across a 4096-bit memory interface for a total of 32GB of vRAM. Whilst the company hasn't disclosed detailed specifications relating to the new GPU, we could reasonably expect around one terabyte/s of memory bandwidth, higher clock speeds and significantly better power efficiency thanks to TSMC's leading-edge 7nm process technology, which has reportedly enabled the company to extract an unbelievable 20.9 TFLOPS of graphics compute out of 7nm Vega, according to one source. If true, it would make it the world's first 20 TFLOPS GPU. https://www.anandtech.com/show/12910/amd-demos-7nm-vega-radeon-instinct-shipping-2018 In a fairly unexpected move, AMD formally demonstrated at Computex its previously-roadmapped 7nm-built Vega GPU. 
As per AMD's roadmaps on the subject, the chip will be used for AMD’s Radeon Instinct series accelerators for AI, ML, and similar applications. The 7nm Vega GPU relies on the 5th Generation GCN architecture and in many ways resembles the Vega 10 GPU launched last year. Meanwhile, the new processor features a number of important hardware enhancements, particularly deep-learning ops specifically for the AI/ML markets. AMD isn't detailing these operations at this point, though at a minimum I'd expect to see Int8 dot products on top of Vega's native high speed FP16 support. AMD also briefly discussed the use of Infinity Fabric with the new 7nm GPU. AMD already uses the fabric internally on Vega 10, and based on some very limited comments it looks like they are going to use it externally on the 7nm GPU. On AMD's Zeppelin CPU dies - used in the EPYC CPU lineup - AMD can switch between Infinity Fabric and PCIe over the same lanes depending on how a product is configured, so it's possible we're going to see something similar here. In other words, AMD can kick in Infinity Fabric when they have something else to connect it to on the other end. https://wccftech.com/amds-infinity-fabric-detailed/
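The article's "around one terabyte/s" estimate for 7nm Vega falls straight out of the bus width. A quick back-of-the-envelope sketch (my arithmetic, not from the article; the ~2 Gbps per-pin rate is an assumption based on what shipping HBM2 stacks were reaching at the time):

```python
# Peak memory bandwidth = (bus width in bytes) x (per-pin data rate).
# Assumption: HBM2 at roughly 2 Gbps per pin.

def memory_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Four HBM2 stacks at 1024 bits each -> 4096-bit interface
print(memory_bandwidth_gbs(4096, 2.0))  # -> 1024.0 GB/s, i.e. ~1 TB/s
```

So the "one terabyte/s" figure is simply what four full-width HBM2 stacks deliver at that per-pin rate.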
  2. https://wccftech.com/amd-ryzen-threadripper-2950x-16-core-cpu-899-usd-launch/ Black Friday and Cyber Monday aren't that far away now.
  3. What is it with Russian guys? Andrew, this Alexey dude, and Arseniy Korablev over at Polybrush...they're brilliant...they get some mad idea and BAM, they deliver on it. Compare that with the guys over at Silo: every few months there's some little update where they announce they've corrected some memory leak that's been making it crash in some key operation, like it's a big deal and like their program isn't an antiquated app that's going nowhere, creatively speaking. And Alexey loves giving you all these nice touches, like this one:
  4. This is coming in V2...can you do this in MeshFusion in Modo?? Stephen Hallquist (1 year ago): "Just a thought, but shouldn't this be considered something other than what you have going on in Flux? Flux is procedural, and as soon as you go in and change the mesh settings for smoothness all the insert mesh stuff would break? Still very cool!" Alexey Vanzhula (1 year ago): "No. Insert Mesh can be a procedural or linear tool. Selection places can be converted to bounding regions and you can increase the quality of upstream Flux nodes without losing the inserted meshes."
  5. Personally I really like the network/dependency graph/history way of working. It's openly parametric, so any clumsiness in manipulation is more than made up for in sheer power. It's like working in Catia or Siemens NX in that way. But after watching that video of their new modeling tools I have to say they seem to suck (if that's really all they're offering). I think that Russian guy's (Alexey Vanzhula's) $100 plug-in for Houdini is far better. It's kind of weird that they wasted their time and money making such a disappointing tool set when they should have just gone to Alexey, bought his tool outright, and hired him as a staff developer. While they're at it they should buy 3D Coat and make Andrew a staff developer, and then they'd rule the world. But there it is; Alexey has created a MeshFusion for Houdini. Looks great. https://gumroad.com/l/GVLLS
  6. Hey, she mentions 3D Coat, and BEFORE ZBrush, when talking about retopoing a high-poly mesh!! (4 min mark)
  7. Intel Optane Memory Tested, Makes Hard Drives Perform Like SSDs. ... In short, it's a new memory tier, a faster storage repository for the most often used data and metadata, that resides between system memory (RAM) and the main storage subsystem. https://www.forbes.com/sites/davealtavilla/2017/04/26/intel-optane-memory-tested-makes-hard-drives-perform-like-ssds/#3946b7fb6090 So you're still going to need RAM even if you have Optane memory. Intel Optane Memory for PCs looks like the average M.2 gumstick and in fact plugs into an M.2 slot on Intel 200 series chipset motherboards (7th gen Kaby Lake or newer). However, it's designed to cache slower storage volumes like hard drives, offering orders of magnitude faster response times and essentially enabling spinning media to perform more like a high-performance SSD in many applications, from workstation and content creation workloads, to gaming, web browsing and even productivity apps. It does this by storing the most frequently used data, metadata and access patterns on either a 16GB or 32GB Optane Memory stick, allowing the system to make far fewer trips to a much slower hard drive for data access. For now it's just for caching and speeding up hard drives/solid state drives with an Intel 7th generation processor. We still need RAM; DRAM latency is still lower than Optane's. Soooo...just save up your empties and maybe you'll be able to afford..
  8. One thing I do notice is that for Boxx and other workstation suppliers, as well as people using photogrammetry programs, 64GB is no longer really enough. 128GB is the new standard, so take the money you save by not buying the top-of-the-line new GPUs or a 32-core CPU and spend it on DDR4 RAM, 128GB of it.
  9. Forbes seems to be quoting your opinions now ahah https://www.forbes.com/sites/jasonevangelho/2018/08/21/nvidia-rtx-20-graphics-cards-why-you-should-jump-off-the-hype-train/#41dcba773f8e So get the 16-core/32-thread AMD Threadripper, get the 1080 Ti for now, and wait for the next-gen 7nm Nvidia GeForce RTX.
  10. It's going to be quite a while before AMD can match the current Turings it appears; we're looking at the second half of 2020 https://www.extremetech.com/gaming/272764-new-amd-gpu-rumors-suggest-polaris-refresh-in-q4-2018 Next up, Navi. There’s a rather confused suggestion that Navi will be both a mainstream and high-end part arriving sometime in the 2019 timeframe, and that it will debut in the budget segment first before eventually launching as a high-end, HBM2 equipped part sometime “much later.” The suggested time frame is: Q4 2018: Polaris 30 (performance up 15 percent). H1 2019: Navi 10 (budget part, and timing on this introduction is unclear, with additional reference to a Q1 release) H2 2020: A new, high-end Navi part, as a “true” successor to Vega
  11. Well, AMD does have HBM2, and the cryptocurrency boom is over, to the degree that it's lowering stock prices, and there are Black Friday price reductions to look forward to. Someone should have told Intel that new-generation chips should outperform their predecessors; then they wouldn't be in the fix they're in now. I'm closely watching the situation. It'll be interesting.
  12. I don't feel this is price gouging at all (and it's $1119 USD, not $1219; I should be the one crying, since I'll be paying in Canadian pesos). Stop and think how Intel would have done this: they would have held back the Turing, put out the Volta, squeezed all the profit out of Volta they could, and then a year from now they would have come out with the Turing. If Turing actually does what they say it does then this is a huge technological jump and an amazing bargain at these prices. RTX with Turing is not just another incremental advancement; this is five years of progress jammed into one. Maybe AMD can catch them, but so far they're not even close. The AMD/ATI vs Nvidia race is in no way comparable to the AMD vs Intel race. What I do think has been somewhat of a ripoff, historically, is Quadro pricing. As I've said before here, I think, or rather suspect, the whole pro card/gamer card thing is a bit of snake oil salesmanship, particularly if you're not into high-end engineering. It's going to be interesting to see how Intel fits into this competition.
  13. One Turing is pretty much equivalent to four Voltas...so they actually jumped a year in GPU development, just leapfrogged the Volta and cast it aside, which is pretty amazing. Nvidia is a company in a big hurry. And yes, there is NVLink and some kind of variant of the NVSwitch for the GeForce Turing cards. This stuff is just getting bizarre... NVSWITCH_1920x1080_MUSIC-final.mp4
  14. The Nvidia GeForce livestream is about to begin https://wccftech.com/nvidia-geforce-rtx-20-series-announcement-livestream/
  15. I contacted Nvidia support yesterday; no announcement yet on whether the new Turing GeForce GPUs support NVLink or NVSwitch. And from what I've read in the PDFs available on Nvidia's site, NVSwitch is only documented for use with a specific Intel multi-Xeon board and the three Quadro Turing cards (if you're interested, ask me and I'll look it up). There doesn't seem to be any documentation showing what an actual implementation of NVSwitch looks like aside from the diagram I've posted below.
  16. The same motherboard that the 2950X uses will take the 64-core/128-thread Threadripper, or so AMD is saying, so an upgrade to the 7nm part would be available in a year and a half. I see what you mean on the price point...
  17. https://wccftech.com/what-does-radeon-do-now-to-stay-competitive/
  18. https://wccftech.com/nvidia-geforce-rtx-2080-ti-and-rtx-2080-specs-leak/ NVIDIA GeForce RTX 2080 Ti 11 GB and RTX 2080 8 GB Graphics Cards Core Specifications Confirmed – 2080 Ti With TU102 GPU Rocks 4352 CUDA Cores, 2080 With TU104 Rocks 2944 CUDA Cores NVIDIA Turing GPU Based GeForce RTX 2080 Ti Comes With 11 GB GDDR6 Memory and 4352 Cores, GeForce RTX 2080 Comes With 8 GB GDDR6 Memory and 2944 Cores
  19. The 64-core/128-thread AMD EPYC server CPU is coming out in January 2019. How long after that do we have to wait until they do some tweaks on it so they can sell it as a 64-core/128-thread AMD Threadripper? If their past strategy in the CPU market is any indication, that would mean an AMD Threadripper with 64 cores and 128 threads by August 2020. https://www.servethehome.com/amd-epyc-rome-details-trickle-out-64-cores-128-threads-per-socket/ The next generation of the AMD EPYC 7000 series is shaping up to be a Xeon killer. It is codenamed "Rome" and it is going to be a big deal. Instead of adopting Zen+ like the desktop Ryzen CPUs, the new EPYC generation will use the Zen 2 architecture, which means improved IPC gains from two generations of core tweaks. Beyond the IPC gains, the next generation parts will be based on 7nm production. The impact of leapfrogging Intel and using 7nm is several-fold. First, Rome will have up to 64 cores and 128 threads in a single socket. (Edit June 6, 2018: Mea Culpa. Looks like we got some generational information "confirmed" to us incorrectly. Expect a 48 core / 96 thread generation before a 64 core / 128 thread generation. Still quite a huge gap. DDR4 and interconnect improvement information held up to further confirmations. 64 core / 128 thread apparently is still coming, just missed one generation due to a few words not being typed in messages to us.) The new CPUs will be socket compatible with the current SP3 socket motherboards with a small caveat. At STH, we expect Rome to adopt PCIe Gen 4, so motherboards will have to support the higher signaling rates to achieve PCIe Gen 4. We also expect the next generation to have greatly improved Infinity Fabric, an area where the first generation product has room to improve. The other key disclosure is that AMD already has next-generation EPYC Rome silicon in their labs. 
They will be sampling to partners in the second half of 2018 and will launch in 2019. This is going to put a lot of pressure on Intel Xeon, as Cascade Lake is not going to come anywhere close to the core count of AMD EPYC's next generation. Intel is scrambling to build a competitive response. 2019 is going to be extremely interesting in the server market. Yeah, Intel is having some big problems getting its next process node working, while the TSMC 7nm process that AMD is using is already up and running. https://www.tomshardware.com/news/intel-cpu-10nm-earnings-amd,36967.html As we pointed out earlier this year, the delay may seem a minor matter, but Intel has sold processors based on the underlying Skylake microarchitecture since 2015, and it's been stuck at the 14nm process since 2014. That means Intel is on the fourth (or fifth) iteration of the same process, which has hampered its ability to bring new microarchitectures to market. That doesn't bode well for a company that regularly claims its process node technology is three years ahead of its competitors. https://www.pcgamesn.com/intel-amd-7nm-cpu-euv But there may be another technology, with its own troubled history, that could finally be close to saving the day for this ageing law: Extreme Ultraviolet Lithography, or EUV. EUV is a revolutionary new production process that will allow 7nm CPU production to offer higher yields, with lower complexity, and potentially lower costs too. It's been the holy grail of chip manufacturers for years and is about to become a genuine reality.
  20. You know this is going to take us back to the old argument on whether pro cards like the Quadros and Radeons are worth the extra coin. They definitely tend to receive the best-binned chips, since the lithography tends to get less perfect toward the edges of the wafer... But look at this now: NVIDIA Allegedly Launching Monstrous 4352 CUDA Core RTX 2080 Ti According to at least three separate sources, NVIDIA is said to be looking to surprise everyone with the launch of an absolutely monstrous RTX 2080 Ti graphics card, and not just an RTX 2080 as was previously thought. Additionally, according to TPU, the new gaming flagship features a very slightly cut-down version of the big daddy Turing GPU that we saw in all its glory earlier this week at NVIDIA's keynote. A version that's in fact very similar to the GPU that the company leverages in its $10,000 Quadro RTX 8000. We're going to call this chip GT102 for the time being, although TPU alleges that it may actually be called RT102. So, what exactly are we looking at? Well, the RTX 2080 Ti is said to feature 4352 CUDA cores, 576 Tensor cores, 272 TMUs and 88 ROPs paired with a 352-bit memory interface and 11GB of 14 Gbps GDDR6 memory for a whopping 616 GB/s of bandwidth. Please be reminded that these specifications are very much rumored and in no way, shape, or form confirmed at this moment. https://wccftech.com/rumor-nvidia-launching-surprise-rtx-2080-ti-with-4352-cuda-cores-11gb-gddr6-vram/ What I've heard from engineers and people in CAD/CAM is that the pro cards make sense for engineering apps that demand floating-point precision/correction, but that for artists like us a card like this is every bit as good for a fraction of the price. And the price for all this is $699. I could afford two of them..
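That 616 GB/s figure isn't arbitrary; it's exactly what a 352-bit bus of 14 Gbps GDDR6 works out to. A quick check (my arithmetic, not from the rumor post):

```python
# Peak GDDR6 bandwidth = (bus width in bytes) x (per-pin data rate in Gbps).

def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: 352 bits = 44 bytes, times 14 Gbps per pin."""
    return bus_width_bits / 8 * data_rate_gbps

print(gddr6_bandwidth_gbs(352, 14))  # -> 616.0 GB/s, matching the leak
```

The 352-bit width itself is what you get from eleven 32-bit GDDR6 channels, one per 1GB chip on the rumored 11GB card.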
  21. Wow, thanks for this post.... And it's coming in at under $1000, so that's comparable performance to the Quadro P5000 at $2500 Cdn. That's last year's Nvidia Quadro though, not the new RTX: the RTX has GDDR6 VRAM, not the GDDR5 of the P5000. So basically this is a card that competes with Nvidia's last-generation 2017 Pascal cards but can't stand up to the new Turing RTX cards. And at that price point I'm wondering if even the new Nvidia GeForce RTX 2080 at $649 won't be a better deal. It'll only have 8 GB of VRAM, but it'll be clocked faster for gamers than the Quadro series base model at $2300.
  22. It seems to be all about Tensor cores, as opposed to CUDA cores... https://www.nvidia.com/en-us/data-center/tensorcore/ A BREAKTHROUGH IN TRAINING AND INFERENCE Designed specifically for deep learning, Tensor Cores deliver groundbreaking performance: up to 12X higher peak teraflops (TFLOPS) for training and 6X higher peak TFLOPS for inference. This key capability enables Volta to deliver 3X performance speedups in training and inference over the previous generation. Each of the Tesla V100's 640 Tensor Cores operates on a 4x4 matrix, and their associated data paths are custom-designed to dramatically increase floating-point compute throughput with high energy efficiency. 58-page white paper PDF on Tensor Cores http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf NVIDIA-Tensor-Core-SXS_30fps_FINAL_700x394.mp4
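The per-core operation the whitepaper describes can be sketched in a few lines of NumPy. This is a minimal illustration of the math (mine, not NVIDIA's code): D = A x B + C on 4x4 matrices, with FP16 inputs and FP32 accumulation, which is what each Tensor Core fuses into one step:

```python
import numpy as np

def tensor_core_mma(a, b, c):
    """One fused 4x4 matrix multiply-accumulate, Volta-style:
    A and B are rounded to FP16, products are accumulated in FP32 with C."""
    a16 = a.astype(np.float16)
    b16 = b.astype(np.float16)
    # Multiply in FP32 so the accumulation keeps full single precision
    return a16.astype(np.float32) @ b16.astype(np.float32) + c.astype(np.float32)

A = np.eye(4)                # identity, so the product just returns B
B = np.full((4, 4), 2.0)
C = np.zeros((4, 4))
print(tensor_core_mma(A, B, C))
```

A V100 has 640 of these units, each doing one such fused step per clock, which is where the quoted teraflops numbers come from.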
  23. Previously, when NVIDIA announced their RTX real-time ray tracing technology, there were questions about whether or not the technology would get any significant traction among developers. At the SIGGRAPH 2018 press conference, NVIDIA's CEO Jensen Huang announced a slew of professional ISVs that are adopting NVIDIA's RTX technology to enable its use in their software. These include programs like Adobe Dimension CC, Autodesk Arnold, Clarisse, DaVinci Resolve, Dassault Systemes Catia and Solidworks, Octane Render, ParaView, Redshift, Siemens NX, Unity and Unreal Engine. Having all of these companies onboard with RTX means that there's a much higher probability that RTX will gain enough traction in the industry. Principal Analyst Patrick Moorhead summed up ISV support by saying it was a "done deal". https://www.forbes.com/sites/moorinsights/2018/08/14/nvidia-doubles-down-on-ray-tracing-with-turing/#494606665bb2 Amazing to see Clarisse included in there; I've talked to Sam Assadian, its founder, and he was pretty adamant that it would always be CPU-intensive and they wouldn't bother to go the GPU route... In addition to giving details on the Quadro RTX series and the Turing GPU, NVIDIA also announced yet another appliance reference architecture. This new reference architecture from NVIDIA is called the RTX Server and is designed to serve the needs of VFX studios and other companies doing visualization who want photorealistic rendering with ray tracing in real time or near real time. Each NVIDIA RTX Server features eight RTX 8000 GPUs inside at a price of $500,000, which according to NVIDIA is a steal compared to the $2 million worth of CPUs that it would take to accomplish the same amount of rendering. These servers are designed to sit rack-mounted in a data center or a cabinet so that users can have quick access to a lot of rendering horsepower without having to see or hear it.
  24. The Quadro options are pretty pricey, but this price list for the Nvidia GeForce RTX 2080 coming this week shows a more affordable range. Same GPU family as the Quadros but clocked faster (for gaming) and with perhaps fewer CUDA cores than the Quadro series. At $699 I could get two of those and really boost my power. There's a special new NVLink for Nvidia Turings that makes them operate effectively as one. Also, if you watch that keynote speech, the CEO says these GPUs will handle the heavy lifting on photo stitching and 3D photogrammetry jobs too.