3DCoat Forums

Everything posted by L'Ancien Regime

  1. https://www.fudzilla.com/news/graphics/46014-vega-7nm-is-not-a-gpu https://www.fudzilla.com/news/graphics/46038-amd-navi-is-not-a-high-end-card Start at around the 11-minute mark.
  2. It's going to be interesting to see if that Radeon VII uses some kind of Crossfire.
  3. At last year's Game Developers Conference (GDC 2018), Microsoft announced "Windows ML," a framework for developing machine-learning applications on the Windows 10 platform, and "DirectML," which makes it available from DirectX 12. We are currently experimenting with the preview SDK of DirectML, but Radeon VII shows excellent results so far. By the way, Radeon VII scored about 1.62 times the GeForce RTX 2080 in Luxmark, which uses an OpenCL-based GPGPU ray-tracing renderer. Based on these facts, I think an NVIDIA DLSS-like feature can be done with a GPGPU approach on our GPUs.
(A general-purpose GPU (GPGPU) is a graphics processing unit that performs non-specialized calculations which would typically be run on the CPU; ordinarily, the GPU is dedicated to graphics rendering.) https://whatis.techtarget.com/definition/GPGPU-general-purpose-graphics-processing-unit
DirectML is currently due to be available in Spring 2019. We actually reached out to Microsoft a while ago and received the following statement regarding its capabilities: "DirectML provides a DirectX 12-style API that was designed to integrate well into rendering engines. By providing both performance and control, DirectML will enable real-time inferencing for game studios that want to implement machine-learning techniques and integrate them into their games. These scenarios can include anything from graphics-related work, like super-resolution, style transfer, and denoising, to real-time decision making, leading to smarter NPCs and better animation. Game studios may also use this for internal tooling to help with things like content and art generation. Ultimately, we want to put the power into creators' hands to deliver the cutting-edge experiences gamers want across all of the hardware that gamers have." https://wccftech.com/amd-radeon-vii-excellent-result-directml/
  4. Plus, everything you're saying about Nvidia using gimmicks to get higher resolutions is almost identical to what an interior architect I knew personally, who did high-end medical installations, used to say about Mental Ray. Basically all they did was stack *****, just blurring pixels and then blurring the blurred pixels to get higher res. And who owns Mental Ray? There are some good reasons why it's been discontinued.
  5. Like I said, the Radeon VII is a geared-down Radeon Instinct MI50 (there's also an MI60 that's even more powerful). They're scientific, engineering, and datacenter cards for research purposes. The Radeon VII strips away the stuff artists don't need and just gives them that terabyte per second of memory bandwidth and those 3,840 stream processors, which is incredible. I suppose if they'd really wanted to go crazy they could have made a Radeon VIIb from the Radeon Instinct MI60, with 4,096 stream processors and 32 GB of HBM2 VRAM, but doubling the VRAM like that would have gotten really expensive. So what does an MI50 or an MI60 cost? We don't know yet, and won't until the end of March 2019. But let's look at earlier editions of the Radeon Instinct so we can broadly surmise what that will be; So you're going to get all the power you need as an artist from that Radeon Instinct MI50 silicon for $699.00 instead of $10,568.99. I'm waiting on this one myself.
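That terabyte-per-second figure isn't marketing mist; it falls straight out of the HBM2 configuration (a 4,096-bit bus at an effective 2.0 Gbps per pin, per the published Radeon VII specs). A quick back-of-envelope check:

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte
bus_width_bits = 4096   # four HBM2 stacks, 1024 bits each
pin_rate_gbps = 2.0     # effective data rate per pin
bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(bandwidth_gbs)    # 1024.0 GB/s, i.e. ~1 TB/s
```

For comparison, the RTX 2080's 256-bit GDDR6 at 14 Gbps works out to 448 GB/s by the same formula, which is why the HBM2 cards stand out for memory-hungry render scenes.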
  6. I didn't get the feeling that WCCFTech was throwing trash on it. That's a story that's identical throughout the press. In fact, that's the first indication I've gotten that it was actually a much higher-end card that had been toned down for a lower-priced sale. I thought the material I posted, far from being a dumping of trash, was an impressive advertisement for the Radeon VII that made me far more likely to entertain buying it. Basically it's an expensive scientific and datacenter card that has been cut down into a very affordable, super powerful artist's card.
  7. https://wccftech.com/amd-radeon-vega-vii-5000-units-64-rops-no-fp64-compute/ AMD Radeon Vega VII Rumored To Have Less Than 5000 Units Made – Confirmed To Feature 64 ROPs, Cut-Down FP64 Compute Compared To Instinct MI50
(The render output unit, often abbreviated "ROP" and sometimes called the raster operations pipeline, is a hardware component in modern graphics processing units (GPUs) and one of the final steps in the rendering process of modern graphics cards. The pixel pipelines take pixel and texel information (each pixel is a dimensionless point) and process it, via specific matrix and vector operations, into a final pixel or depth value; this process is called rasterization. ROPs also control antialiasing, where more than one sample is merged into one pixel. The ROPs perform the transactions between the relevant buffers in local memory, which includes writing or reading values as well as blending them together. Dedicated antialiasing hardware used for hardware-based antialiasing methods like MSAA is contained in the ROPs. All rendered data has to travel through the ROPs in order to be written to the framebuffer, from where it can be transmitted to the display.)
Alright, so first up we have a rumor by TweakTown which states that the AMD Radeon Vega VII graphics card will have fewer than 5,000 units made during its production cycle, with no AIB models, and that each card is going to be sold at a loss, considering these are just repurposed Instinct MI50 parts that could have been sold for much higher prices to the HPC sector.
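To make that ROP description concrete, here's a toy Python sketch of the two jobs it mentions: blending a translucent fragment into the framebuffer, and resolving multiple MSAA samples into one pixel. This is just the arithmetic, not how the hardware is actually wired.

```python
def blend_over(src_rgb, src_alpha, dst_rgb):
    """Standard 'source over' alpha blending, as a ROP applies it when
    writing a translucent fragment over the existing framebuffer value."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha)
                 for s, d in zip(src_rgb, dst_rgb))

def msaa_resolve(samples):
    """Box-filter resolve: average a pixel's sub-samples into one value
    (the 'more than one sample is merged into one pixel' step)."""
    n = len(samples)
    return tuple(sum(channel) / n for channel in zip(*samples))

# A white fragment at 50% opacity over a black background -> mid grey
print(blend_over((1.0, 1.0, 1.0), 0.5, (0.0, 0.0, 0.0)))  # (0.5, 0.5, 0.5)
# 4x MSAA: two samples hit white geometry, two hit the black background
print(msaa_resolve([(1, 1, 1), (1, 1, 1), (0, 0, 0), (0, 0, 0)]))  # (0.5, 0.5, 0.5)
```

The point of the 64-ROP count is simply that every one of these per-pixel writes, blends, and resolves has to pass through that fixed pool of units on its way to the framebuffer.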
https://www.amd.com/en/products/professional-graphics/instinct-mi50 https://wccftech.com/amd-radeon-instinct-mi60-first-7nm-vega-20-gpu-official/ https://arrayfire.com/explaining-fp64-performance-on-gpus/ Also, since the Vega VII is basically an Instinct MI50 with Radeon RX drivers, it was thought that the card would retain its heavy FP64 compute, making it a formidable compute option at its price point, but that isn't the case anymore. Confirming through AMD's Director of Product Marketing, Sasa Marinkovic, TechGage reports that the Radeon VII does not have double precision enabled: it's capped to 1:16 FP64 compute like the RX Vega cards, at just 0.862 TFLOPs, while the Instinct MI50 features 6.7 TFLOPs of FP64 compute. But you're still getting the incredible 1 terabyte per second of memory bandwidth that the Radeon Instinct MI50 and MI60 provide.
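Those TFLOPs figures check out arithmetically: peak FP32 is stream processors x 2 FLOPs per clock (one FMA) x clock speed, and FP64 is that divided by the rate cap. A sketch, assuming the published 3,840 SPs and an approximate 1.8 GHz boost clock:

```python
sps = 3840                # Radeon VII stream processors
boost_ghz = 1.8           # approximate peak boost clock (assumption)
fp32_tflops = sps * 2 * boost_ghz / 1000   # 2 FLOPs per SP per clock (FMA)
fp64_tflops = fp32_tflops / 16             # driver-capped 1:16 FP64 rate
print(round(fp32_tflops, 3))  # 13.824
print(round(fp64_tflops, 3))  # 0.864 -- matches the ~0.862 TFLOPs quoted
```

The Instinct MI50 runs essentially the same silicon at a 1:2 FP64 rate, which is how the same chip gets to the quoted ~6.7 TFLOPs of double precision.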
  8. Now THAT would be very interesting... especially with 16 GB of HBM2 and a terabyte per second of memory bandwidth.
  9. https://www.techpowerup.com/gpu-specs/radeon-vii.c3358 The RTX 2070 is $549 USD; the Radeon VII is $699 USD. TechPowerUp rates the 2070 at 97% of the Radeon VII's 100% relative performance...
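Turning those two numbers into performance per dollar (prices and relative-performance percentages as quoted above; this ignores the 16 GB HBM2 advantage):

```python
cards = {"RTX 2070": (549, 97), "Radeon VII": (699, 100)}
value = {name: perf / price * 100 for name, (price, perf) in cards.items()}
for name, v in value.items():
    print(f"{name}: {v:.1f} performance points per $100")
# RTX 2070: 17.7, Radeon VII: 14.3 -- the 2070 is ~23% better value on raster alone
```

Of course, for rendering workloads the VRAM and bandwidth can matter more than raster benchmarks, so this is only one axis of the comparison.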
  10. Here's another sobering fact; thanks to that Level1 guy's videos I'd decided on this motherboard at $560 Cdn. That was its price 3 days ago on Amazon. I guess a lot of other people saw his video too because I checked it last night and this was the new price hahaha...Amazon.com instead of Amazon.ca...that's $516.00
  11. I sure wish all these GPU guys, be they Nvidia or AMD, would at least throw the content creators a few bones with their publicity, especially with these high-end cards. I really don't care about playing The Division or Final Fantasy. If we're expected to fork out this kind of money for a piece of technology, they could at least print a few paragraphs on how it runs with Arnold or Renderman or Keyshot.
  12. I wonder where that new Radeon VII would fall in that graph with its 16 GB of HBM2 VRAM? And how do you find AMD's Radeon ProRender for SSS and caustics? Does it measure up to something like Maxwell Render? I'm reading that for things like particle cloud renders the CPU is still superior, thanks to highly optimized ray-tracing kernels like Intel's Embree, and Embree runs on AMD Ryzen CPUs as well. https://software.intel.com/en-us/rendering-framework https://software.intel.com/en-us/articles/embree-highly-optimized-visibility-algorithms-for-monte-carlo-ray-tracing "All recent AMD CPUs support Embree, including Ryzen. Performance in V-Ray (for Max) is in between an 8-core and a 10-core i7. At least on very old releases of V-Ray it was better to avoid mixing AMD and Intel when caching IM, because some parts of the computation were random and behaved slightly differently on the two hardware platforms. Don't know if this is a problem with the current release... anyway, you can always save GI maps on a single system and distribute the render for final frames; this should always work." http://forum.vrayforc4d.com/index.php?threads/19169/ CPUs = complex, elegant solutions. GPUs = brute-force, simplistic solutions. Or is that a concept that's now 5 years out of date?
  13. Aren't the algorithms on a CPU more complex, due to the hard-coded math engines in them, compared to the ones used in a GPU, no matter how fast the GPU is? Well, based simply on noise elimination, BOXX says GPU rendering is over 6 times faster than CPU rendering. https://blog.boxx.com/2014/10/02/gpu-rendering-vs-cpu-rendering-a-method-to-compare-render-times-with-empirical-benchmarks/ But that's only one criterion. https://www.fxguide.com/featured/look-at-renderman-22-and-beyond/ Well, it would seem they've attained parity, at least in specific render software. https://renderman.pixar.com/news/renderman-xpu-development-update So if that's the case, what is the correct way to proceed in the purchase of a new rig? What is the optimal price/performance configuration, especially if you've been frustrated by the bottleneck of testing new shaders/textures and want to make that workflow more agile and responsive?
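Worth remembering why sample rate dominates that BOXX comparison: Monte Carlo render noise falls off as 1/sqrt(N) in the sample count, so halving the noise costs 4x the samples, and a renderer that traces 6x more samples per second reaches a target noise level about 6x sooner regardless of how "elegant" each sample is. A toy illustration of the 1/sqrt(N) law using a pi estimator (pure Python, standard library only; not a renderer, just the same statistics):

```python
import random

def mc_pi_error(n, seed=1):
    """Monte Carlo estimate of pi from n random points in the unit square;
    returns the absolute error of the estimate."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return abs(4.0 * hits / n - 3.141592653589793)

# Quadrupling the sample count roughly halves the error (noise ~ 1/sqrt(N))
for n in (1_000, 4_000, 16_000, 64_000):
    print(n, round(mc_pi_error(n), 4))
```

Any single run is noisy, but averaged over many seeds the error at 4N samples comes out close to half the error at N, which is exactly the diminishing-returns curve render-time noise targets fight against.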
  14. So this is interesting... Nvidia is coming out with a GTX 1180, a graphics card that doesn't have the RT or the other specialized cores in it, for a cheaper non-RTX price. From the article (by Bogdan Solca, 2018/06/15): New leaks reveal that Nvidia's GTX 1180 will support higher refresh rates through one VR-friendly connection, making it possible to render 4K@120 Hz content for each eye, and this would also apply to the TV-sized G-Sync 4K monitors. Price-wise, the GTX 1180 will not match the original launch MSRP of the GTX 1080, as there will be two versions and the most affordable one is supposed to cost US$999.
Nvidia's CEO Jensen Huang claimed at Computex this year that the next-gen gaming GPUs would be released "a long time from now," but trusted sources have already indicated that the new GTX 11xx series should be announced in late July / early August, with mobile GPU versions expected to land some time in Q4 2018. Huang most likely did not want to spoil a larger marketing scheme and had to cut it short for people who were expecting any teasers. Now, according to Tom's Hardware's anonymous sources, the upcoming GPUs should integrate a brand new VR-friendly connector that allows for much higher refresh rates over a single cable. This should translate to 120 Hz per eye at 4K resolution, probably delivered through a new HDMI 2.1 output.
Previous reports claimed that the next-gen GPUs from Nvidia would be priced quite similarly to the launch MSRPs of the GTX 1080 series, but the latest info from TweakTown suggests that the GTX 1180 will come in two variants: a US$999 model and a US$1,499 model with more VRAM. Nvidia will probably start selling its Founders Editions in early August, while third-party integrators could start shipping their custom versions in September. The updated Quadro professional lineup is also expected to make an appearance at Siggraph in August.
I'd rather get an RTX 2070 at under $700 than this GTX 1180 at $999 or $1,499.
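The "4K@120 Hz per eye over one cable" claim is at least plausible arithmetic for an HDMI 2.1-class link. A back-of-envelope check, assuming uncompressed 24-bit color and ignoring blanking overhead:

```python
w, h, hz, bits_per_pixel = 3840, 2160, 120, 24
per_eye_gbps = w * h * hz * bits_per_pixel / 1e9   # raw pixel data only
print(round(per_eye_gbps, 1))       # 23.9 Gbps per eye
print(round(2 * per_eye_gbps, 1))   # 47.8 Gbps for both eyes
```

Both eyes together land right at HDMI 2.1's 48 Gbps link rate, so a single-cable 4K120-per-eye connector is tight but not magic, especially once you allow for stream compression.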
  15. Yep, and for me, like you, only the renders count. Games disappoint me. The AI, game design, and aesthetics have for the most part failed, unless you love shooting and blowing stuff up. With 16 GB of VRAM that thing is actually very good value for the money.
  16. But what about the Wraith Ripper? http://www.coolermaster.com/cooling/cpu-air-cooler/wraith-ripper/ And thanks for all that info on the heat sinks. You're also right that the 2990WX's problems are solely due to Microsoft; here are the Linux results on those renders. https://www.phoronix.com/scan.php?page=article&item=amd-2920x-2970wx&num=9
  17. http://www.entagma.com/building-your-own-houdini-workstation/#comment-20906 This guy is always brilliant.
  18. Thanks Nossgrr. I've been really studying up for my next build and it's a revelation every day. I was prepared to go big bucks, even for the 2990WX, but it seems that thousands of extra dollars don't necessarily buy you a proportional increase in performance. I just came across this sobering fact; That's crazy. And that 2990WX would be worth it if it gave you double the speed on renders (imagine 64 render buckets all going at 4 GHz!), but because of the way AMD took its 32-core EPYC silicon and crippled the memory access, with two of the four chiplet dies having no direct path to memory, it just doesn't perform the way you'd think all those extra threads and render buckets should. https://bitsum.com/portfolio/coreprio/ It appears to be mainly oriented toward scientific and computational researchers. The 1950X is by far the superior buy.
  19. I posted this a couple of days ago in the Nvidia RTX thread near the top of the page; check out the 2070 performance. There's no NVLink or SLI for the 2070. Two of them outperform the 2080 Ti. Two 2070s cost $1,300 Cdn; one 2080 Ti costs $1,900 Cdn. So without any other data I'd say the answer to your question would be "YES," at least when it comes to rendering. For gaming, I don't know.
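The two-cards-beat-one math works because GPU path tracers generally scale close to linearly across independent GPUs (each card keeps its own copy of the scene, so no NVLink is needed just to render). The cost side, using the Cdn prices quoted above:

```python
price_2070_cad = 650        # ~$1,300 Cdn for the pair
price_2080ti_cad = 1900
pair_cost = 2 * price_2070_cad
savings = price_2080ti_cad - pair_cost
pair_pct = round(pair_cost / price_2080ti_cad * 100)
print(pair_cost, savings, pair_pct)   # 1300 600 68
```

So the pair costs about 68% of a single 2080 Ti while outperforming it for rendering; the trade-off is that without NVLink memory pooling, each 2070's 8 GB has to hold the whole scene.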