3DCoat Forums
Everything posted by L'Ancien Regime

  1. I didn't get the feeling that WCCFTech was throwing trash on it; that story is identical throughout the press. In fact, that's the first hint I've gotten that it was actually a much higher-end card that had been toned down for a lower-priced sale. The material I posted, far from being a trashing, was an impressive advertisement for the Radeon VII that made me far more likely to entertain buying it. Basically it's an expensive scientific and database card that has been cut down into a very affordable, super-powerful artist's card.
  2. https://wccftech.com/amd-radeon-vega-vii-5000-units-64-rops-no-fp64-compute/
     AMD Radeon Vega VII Rumored To Have Less Than 5000 Units Made – Confirmed To Feature 64 ROPs, Botched FP64 Compute Compared To Instinct MI50

     (ROP: the render output unit, sometimes called the raster operations pipeline, is a hardware component in modern GPUs and one of the final steps in the rendering process. The pixel pipelines take pixel and texel information (each pixel is a dimensionless point) and process it, via specific matrix and vector operations, into a final pixel or depth value; this process is called rasterization. ROPs control antialiasing, where more than one sample is merged into one pixel. They perform the transactions between the relevant buffers in local memory – writing and reading values, as well as blending them together – and they contain the dedicated hardware for antialiasing methods like MSAA. All rendered data has to travel through a ROP to be written to the framebuffer, from which it can be sent to the display.)

     First up, we have a rumor by TweakTown which states that the AMD Radeon Vega VII will have fewer than 5000 units made during its production cycle, and that each card is going to be sold at a loss, considering these are just repurposed Instinct MI50 parts that could've been sold at much higher prices to the HPC sector.
     https://www.amd.com/en/products/professional-graphics/instinct-mi50
     https://wccftech.com/amd-radeon-instinct-mi60-first-7nm-vega-20-gpu-official/
     https://arrayfire.com/explaining-fp64-performance-on-gpus/

     Also, since the Vega VII is basically an Instinct MI50 with Radeon RX drivers, it was thought that the card would retain its heavy FP64 compute, making it a formidable compute option at its price point, but that isn't the case anymore. Confirming through AMD's Director of Product Marketing, Sasa Marinkovic, TechGage reports that the Radeon VII does not ship with double precision enabled: it's 1:32 FP64 compute like the RX Vega 64 cards, at just 0.862 TFLOPs, while the Instinct MI50 features 6.7 TFLOPs of FP64 compute. But you're still getting the incredible 1 terabyte per second of memory bandwidth that the Radeon Instinct MI50 and MI60 provide.
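A quick back-of-the-envelope check on those FP64 figures (a sketch only; the TFLOPs numbers are the approximate ones quoted above and in the linked spec pages, not exact spec-sheet values):

```python
# Rough FP64 throughput from an FP32 peak and the hardware's FP64:FP32 rate.
# All TFLOPs figures are the approximate ones quoted in the articles above.

def fp64_tflops(fp32_tflops: float, ratio: int) -> float:
    """Peak FP64 throughput given FP32 peak and a 1:<ratio> FP64 rate."""
    return fp32_tflops / ratio

# Instinct MI50: ~13.4 TFLOPs FP32 at a full 1:2 FP64 rate
mi50_fp64 = fp64_tflops(13.4, 2)
print(f"MI50 FP64:       {mi50_fp64:.2f} TFLOPs")     # ~6.70

# Radeon VII as reported above: capped at 0.862 TFLOPs FP64
radeon_vii_fp64 = 0.862
print(f"Radeon VII FP64: {radeon_vii_fp64:.3f} TFLOPs")
print(f"Fraction of the MI50's FP64 kept: {radeon_vii_fp64 / mi50_fp64:.1%}")  # ~12.9%
```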
  3. Now THAT would be very interesting... especially with 16 GB of HBM2 and a terabyte per second of memory bandwidth.
  4. https://www.techpowerup.com/gpu-specs/radeon-vii.c3358

     RTX 2070: $549 USD
     Radeon VII: $699 USD

     TechPowerUp rates the 2070 at 97% of the Radeon VII's relative performance...
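Taken at face value, those two data points make the price/performance gap easy to quantify. A minimal sketch, using only the prices and TechPowerUp relative-performance index quoted above:

```python
# Relative performance per dollar, from the prices and TechPowerUp
# relative-performance index quoted above (higher is better).
cards = {
    "RTX 2070":   {"price_usd": 549, "rel_perf": 97},
    "Radeon VII": {"price_usd": 699, "rel_perf": 100},
}

for name, c in cards.items():
    perf_per_dollar = c["rel_perf"] / c["price_usd"]
    print(f"{name}: {perf_per_dollar:.3f} perf-points per dollar")
# RTX 2070:   0.177
# Radeon VII: 0.143  -> the 2070 delivers ~24% more performance per dollar
```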
  5. Here's another sobering fact: thanks to that Level1 guy's videos I'd decided on this motherboard at $560 CAD. That was its price 3 days ago on Amazon. I guess a lot of other people saw his video too, because I checked it last night and this was the new price hahaha... Amazon.com instead of Amazon.ca... that's $516.00
  6. I sure wish all these GPU guys, be they Nvidia or AMD, would at least throw the content creators a few bones with their publicity, especially with these high-end cards. I really don't care about playing The Division or Final Fantasy. If we're going to be expected to fork out this kind of money for a piece of technology, they could at least print a few paragraphs on how it runs with Arnold or RenderMan or KeyShot.
  7. I wonder where that new Radeon VII would fall in that graph, with its 16 GB of HBM2 VRAM? And how do you find AMD's Radeon ProRender for SSS and caustics? Does it measure up to something like Maxwell Render?

     I'm reading that for things like particle cloud renders the CPU is still superior, thanks to highly optimized ray tracing kernels like Embree, and Embree runs on Ryzen CPUs as well:
     https://software.intel.com/en-us/rendering-framework
     https://software.intel.com/en-us/articles/embree-highly-optimized-visibility-algorithms-for-monte-carlo-ray-tracing

     "All recent AMD CPUs support Embree, including Ryzen. Performance in V-Ray (for Max) is in between an 8-core and a 10-core i7. At least on a very old release of V-Ray it was better to avoid mixing AMD and Intel when caching the irradiance map, because some parts of the computation were random and behaved slightly differently on the two hardware platforms. I don't know if this is a problem with the current release... anyway, you can always save GI maps on a single system and distribute the render for the final frames; this should always work."
     http://forum.vrayforc4d.com/index.php?threads/19169/

     CPUs = complex, elegant solutions. GPUs = brute-force, simplistic solutions. Or is that a concept that's now 5 years out of date?
  8. Aren't the algorithms on a CPU more complex, thanks to the hard-coded math engines in them, compared to the ones used in a GPU, no matter how fast the GPU is? Well, based simply on noise elimination, BOXX says GPU rendering is over 6 times faster than CPU rendering.
     https://blog.boxx.com/2014/10/02/gpu-rendering-vs-cpu-rendering-a-method-to-compare-render-times-with-empirical-benchmarks/

     But that's only one criterion.
     https://www.fxguide.com/featured/look-at-renderman-22-and-beyond/

     It would seem that they've attained parity, at least in specific render software.
     https://renderman.pixar.com/news/renderman-xpu-development-update

     So if that's the case, what is the correct way to proceed in the purchase of a new rig? What is the optimal price/performance configuration, especially if you've been frustrated by the bottleneck of test-rendering new shaders and textures, and you want to make that workflow more agile and responsive?
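One caveat about comparing renderers by noise elimination: Monte Carlo noise falls off roughly as 1/√(samples per pixel), so the fair metric is time to reach a target noise level, and halving the noise costs 4x the samples. A toy sketch of that relationship; the samples-per-second rates are hypothetical placeholders, not benchmark figures:

```python
# Path-tracing noise scales ~ 1/sqrt(samples per pixel), so "X times faster
# based on noise elimination" really means "X times less time to a target
# noise level". The sample rates below are made-up placeholders, NOT benchmarks.

def time_to_target(samples_per_sec: float, base_noise: float, target_noise: float) -> float:
    """Seconds to reach target_noise, given noise = base_noise / sqrt(N samples)."""
    needed_samples = (base_noise / target_noise) ** 2
    return needed_samples / samples_per_sec

cpu_rate, gpu_rate = 1.0, 6.0           # GPU assumed 6x the raw sample rate
for target in (0.10, 0.05):             # halving noise costs 4x the samples
    t_cpu = time_to_target(cpu_rate, 1.0, target)
    t_gpu = time_to_target(gpu_rate, 1.0, target)
    print(f"target noise {target}: CPU {t_cpu:.0f}s, GPU {t_gpu:.0f}s")
```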
  9. So this is interesting... Nvidia is coming out with a GTX 1180, that is, a graphics card that doesn't have the RT cores or the other specialized cores in it, for a cheaper non-RTX price:

     "New leaks reveal that Nvidia's GTX 1180 will support higher refresh rates through one VR-friendly connection, making it possible to render 4K@120 Hz content for each eye, and this would also apply to the TV-sized G-Sync 4K monitors. Price-wise, the GTX 1180 will not match the original launch MSRP of the GTX 1080, as there will be two versions and the most affordable one is supposed to cost US$999." (by Bogdan Solca, 2018/06/15)

     Nvidia's CEO Jensen Huang claimed at Computex this year that the next-gen gaming GPUs would be released "a long time from now", but trusted sources had already indicated that the new GTX 11xx series should be announced in late July / early August, with mobile GPU versions expected to land some time in Q4 2018. Huang most likely did not want to spoil a larger marketing scheme and had to cut it short for people who were expecting teasers.

     Now, according to Tom's Hardware's anonymous sources, the upcoming GPUs should integrate a brand-new VR-friendly connector that allows for much higher refresh rates over a single cable. This should translate to 120 Hz per eye at 4K resolution, probably delivered through a new HDMI 2.1 output.

     Previous reports claimed that the next-gen GPUs from Nvidia would be priced quite similarly to the launch MSRPs of the GTX 1080 series, but the latest info from TweakTown suggests that the GTX 1180 will come in two variants: a US$999 model and a US$1,499 model with more VRAM. Nvidia will probably start selling its Founders Editions in early August, while third-party integrators could start shipping their custom versions in September. The updated Quadro professional lineup is also expected to make an appearance at SIGGRAPH in August.

     I'd rather get an RTX 2070 at under $700 than get this GTX 1180 at $999 or $1,499.
  10. Yep, and for me, like you, only the renders count. Games disappoint me; the AI, game design, and aesthetics have for the most part failed, unless you love shooting and blowing stuff up. With 16 GB of VRAM that thing is actually very good value for the money.
  11. But what about the Wraith Ripper? http://www.coolermaster.com/cooling/cpu-air-cooler/wraith-ripper/ And thanks for all that info on the heat sinks. You're also right that the 2990WX's problems are solely due to Microsoft; here are the Linux results on those renders: https://www.phoronix.com/scan.php?page=article&item=amd-2920x-2970wx&num=9
  12. http://www.entagma.com/building-your-own-houdini-workstation/#comment-20906 This guy is always brilliant.
  13. Thanks Nossgrr. I've been really studying up for my next build and it's a revelation every day. I was prepared to go big bucks, even for the 2990WX, but it seems thousands of extra dollars don't necessarily buy you a proportional increase in performance. I just came across this sobering fact; that's crazy. The 2990WX would be worth it if it gave you double the speed on renders (imagine 64 render buckets all going at 4 GHz!), but because of the way AMD took its 64-thread EPYC design and crippled memory access on half of the four dies, it just doesn't perform the way you'd think all those extra threads and render buckets should. https://bitsum.com/portfolio/coreprio/ It appears to be mainly oriented towards scientific and computational researchers. The 1950X is by far the superior buy.
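A crude way to see why those extra threads underdeliver: on the 2990WX, two of the four dies have no direct memory channels, so memory-bound render work scheduled on those cores pays an extra-hop penalty on every access. A toy model of the effect; the 30% penalty is an illustrative assumption, not a measurement:

```python
# Toy scaling model for the 2990WX's asymmetric memory topology.
# Two of its four dies have no direct memory channels, so threads there
# pay an extra hop for every memory access. The 30% penalty below is an
# illustrative assumption, not a measured figure.

def relative_throughput(dies: int, cores_per_die: int, memory_penalty: float) -> float:
    direct = (dies // 2) * cores_per_die          # dies with memory channels
    remote = (dies - dies // 2) * cores_per_die   # "compute-only" dies
    return direct * 1.0 + remote * (1.0 - memory_penalty)

tr_1950x = relative_throughput(dies=2, cores_per_die=8, memory_penalty=0.0)
tr_2990wx = relative_throughput(dies=4, cores_per_die=8, memory_penalty=0.30)
print(f"2990WX vs 1950X (memory-bound work): {tr_2990wx / tr_1950x:.2f}x")
# ~1.70x from 2x the cores -- not the doubling the core count suggests
```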
  14. I posted this a couple of days ago in the Nvidia RTX thread near the top of the page; check out the 2070 performance. There's no NVLink or SLI for the 2070, yet two of them outperform the 2080 Ti.

     Two 2070s cost $1,300 CAD.
     One 2080 Ti costs $1,900 CAD.

     So without any other data I'd say the answer to your question would be "YES", at least when it comes to rendering. For gaming I don't know.
  15. I would go with the more recent one because of certain possibly important subtleties in the UEFI. (I'm not pretending to be an expert here; I'm just studying up on all these complexities myself right now in preparation for buying a 2950X and an X399 motherboard. I'm even tempted to buy a 2990WX, but there's a lot of sophisticated stuff in that high-end CPU that baffles even the experts, so I'm getting sort of intimidated at the prospect of buying it and trying to make it work properly.)

     For example, the later revision of that board will probably have better XMP support, so that with a simple switch in the UEFI you can get the full value of your RAM. Extreme Memory Profile (XMP) is a specification serving as an extension to the standard JEDEC SPD specifications, developed by Intel. XMP is intended to make overclocking easier and more accessible to new users through profiles and predefined overclocking configurations that are known to be stable. It's only for memory; it will overclock the CPU only if you choose a CPU speed under the XMP settings.

     XMP is proprietary to Intel, so check closely with your motherboard to see whether it's supported for AMD too. It probably is if it's second-gen, like the X399 boards. You don't want to buy 3200 MHz RAM and never be able to use all that extra speed you paid for.
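For a sense of what that UEFI switch is worth: peak DDR4 bandwidth is transfer rate × 8 bytes × number of channels, so on a quad-channel X399 board the JEDEC fallback of 2133 MT/s versus a kit's rated 3200 MT/s is a large gap. A sketch of the theoretical peaks (real-world throughput will be lower):

```python
# Theoretical peak DDR4 bandwidth: transfers/sec * 8 bytes per transfer
# per channel * number of channels. Real-world throughput is lower.

def peak_bandwidth_gbps(mt_per_sec: int, channels: int) -> float:
    return mt_per_sec * 8 * channels / 1000  # GB/s

for rate in (2133, 3200):   # JEDEC fallback vs the kit's rated XMP speed
    bw = peak_bandwidth_gbps(rate, channels=4)   # X399 is quad-channel
    print(f"DDR4-{rate}, quad channel: {bw:.1f} GB/s peak")
# DDR4-2133: 68.3 GB/s; DDR4-3200: 102.4 GB/s -- ~50% left on the table
```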
  16. So the plot gets even more interesting: Nvidia is in a lot of trouble as its business model becomes less and less viable despite the propaganda.

     1. Far from being a viable choice for big data centers, Nvidia is being pushed out by superior custom solutions, particularly by Google.
     2. Nvidia has failed in the automotive sector, and Tesla has replaced their AI cards with its own custom solution.
     3. The 2000 series has been a massive failure with consumers, and it seems a new (3000?) series will be forthcoming in 2019, since the 2000 series simply doesn't perform well enough for what it aspires to do.
     4. As APUs get more and more prevalent, the discrete GPU card will become less and less viable, the same way sound cards did. CPUs and motherboard chipsets will simply push the GPU form factor aside.

     I love Coreteks' videos. He's got the best analysis.
  17. I'm afraid of eBay, but I'm giving this serious consideration. My EE friend in Eastern Europe is advising the same thing. Thx
  18. He says the RTX is comparable to the GTX 1080, but look at the Canadian prices and comparative performance. The 2070 is almost 12% faster and 36% cheaper. And note, once again, the 2080 underperforms; it beats the 2070 by only ONE SECOND, for all that extra money you have to pay. And look at the price/performance of the 1070 Ti: it's 6% more expensive and 16% slower than the 2070.
  19. Echoechonoisenoise... this is interesting: https://echoechonoisenoise.wordpress.com/ “after internet, abuse could become a default approach to technology. to technological means of representation in particular. ubiquitous amateur extruder machines may be regarded as dull interfaces between virtual junk and plastic debris. on the other hand they offer cheap and easy access to direct material manipulation and can therefore be turned into monstrous yet valid bypass between desire of saturation and limited bandwidth of digital modeling, delivering multiscale depth beyond the flat screen.” or simply: it’s an established method of any avant-garde to appreciate media for what they can become.
  20. I discovered it in this excellent video by Gleb Alexandrov, at the 17-minute mark: http://structuresynth.sourceforge.net/