
Nvidia launches the 12GB GDDR5 TITAN Black. $900


Recommended Posts

  • Advanced Member

Is AMD going to rock Nvidia's world with DX12 or what??

 

http://wccftech.com/amd-r9-290x-fast-titan-dx12-enabled-3dmark-33-faster-gtx-980/

 

DX12 support has just been added to 3DMark, showing unbelievable results, with DX12 delivering up to 20 times the draw-call performance of DX11. AMD Radeon graphics cards are showing the most significant gains compared to their Nvidia GeForce GTX counterparts.

nvidia-amd-DX12.png

 

 

Testing has in fact shown that, with AMD's most recent driver update, the R9 290X is not only on par with the $999 12GB GeForce GTX Titan X but also maintains a minute edge over the Nvidia flagship. When compared against Nvidia's $549 GTX 980, the R9 290X delivered a 33% performance lead.

AMD R9 290X As Fast As Titan X in DX12-Enabled 3DMark – 33% Faster Than GTX 980

  • Reputable Contributor

 

Is AMD going to rock Nvidia's world with DX12 or what??

 

http://wccftech.com/amd-r9-290x-fast-titan-dx12-enabled-3dmark-33-faster-gtx-980/

 

DX12 support has just been added to 3DMark, showing unbelievable results, with DX12 delivering up to 20 times the draw-call performance of DX11. AMD Radeon graphics cards are showing the most significant gains compared to their Nvidia GeForce GTX counterparts.

nvidia-amd-DX12.png

 

 

Testing has in fact shown that, with AMD's most recent driver update, the R9 290X is not only on par with the $999 12GB GeForce GTX Titan X but also maintains a minute edge over the Nvidia flagship. When compared against Nvidia's $549 GTX 980, the R9 290X delivered a 33% performance lead.

AMD R9 290X As Fast As Titan X in DX12-Enabled 3DMark – 33% Faster Than GTX 980

 

I may start using an AMD card here soon, seeing that Andrew has never recompiled/updated CUDA in 3D Coat, and that leaves a lot of newer CUDA advancements untapped by 3D Coat. And Thea Render has an OpenCL build (might still be in beta), so I could still use the AMD card for rendering if needed. We'll see, but I'm not real happy with Nvidia recently.


  • Advanced Member

I'm not happy with most hardware/software development recently, but I think I'm just getting old and grumpy... because objectively everything is awesome! =)


  • Advanced Member

I'm not happy with most hardware/software development recently, but I think I'm just getting old and grumpy... because objectively everything is awesome! =)

 

 

 

hahaha too true...

 

THESE GODDAMN 4K REAL-TIME RENDERS ARE SLOWING DOWN TO A MERE 126 FPS! I DEMAND MY MONEY BACK!


1181360874_3.jpg


  • Reputable Contributor

I'm not happy with most hardware/software development recently, but I think I'm just getting old and grumpy... because objectively everything is awesome! =)

Yeah... I'm just talking about how they trumpet the new advances in a new model, but they never mention how much they actually crippled part of the card in the process. I simply EXPECT, and rightfully so, a company as large as Nvidia to move all the features forward, not backward. It made me very angry...

marvin-martian-angry-mc-1-1.gif

 

...to see Nvidia cripple CUDA as well as the memory bus on the Kepler cards. When you fork over $500+, you expect to get MORE, not less, than what you had.


  • Advanced Member

I see where they are coming from: gamers have less money, and hence they sell them crippled hardware. I mean, for a gamer, the advances in video card technology have been pretty solid with every cycle.

 

As a pro, Nvidia wants more of your money, and I guess rightfully so, since we are actually making money with their products; and when you consider the amount of time GPU rendering can save, even $3k per card is a bargain...


  • Contributor

Could you please share another link with that information? That link does not show FP64 = 1/32 FP32. Maybe it is buried; if so, point to the location.

EDIT:

OK, I found one. Their wording is somewhat confusing, but yes, it looks like FP64 = 1/32 FP32 for the CUDA cores on the Titan X is correct...

Titan X = just an expensive gaming card if this is true, with 3072 crippled CUDA cores...

I will wait, though, till there are some more reviews to post here...

Is the FP64 really that important?

I wish somebody would make a complex scene in Maya with millions of rigged and animated polygons, with a big fluid simulation in it, with volumetrics, particles, big textures, complex lighting, etc., and then test it on both GTX and Quadro cards (multiple comparable models) to see which ones perform better. They should test viewport navigation/editing as well as rendering. Then we would definitively see whether paying the extra money for a Quadro is worth it or not.

I've seen videos showing scenes with millions of polygons being laggy on gaming cards in Maya but smooth with Quadros. I've also seen videos showing gaming cards performing better than Quadros in video editing software like Premiere. I've also read people claiming that the hardware is more heavy-duty and durable in Quadro cards, while others claim the parts are identical.

But I don't want to buy all these cards and test this out for myself. I can't afford it.

Just reading spec sheets is obviously not enough, and benchmarks are not the same as real-life use. It's very much like comparing an iPhone 6 and a Samsung Galaxy Note 4. The Note 4 had better specs, but I sold it and switched to the iPhone 6 Plus because it honestly seems faster and less laggy than the Note 4 when I use it. The user experience is just more fluid on the iPhone, while the Note 4 stutters and freezes up occasionally. I think Apple has optimized everything so well that it still beats the Note 4, even though the Note 4 had better specs for less money. Or at the very least, I feel iOS is more enjoyable to use than TouchWiz Android.

Maybe somebody should make a Kickstarter campaign to put together enough money to buy a bunch of these graphics cards and really compare them (and make demo videos, etc.). Would any freelancers on a budget actually donate to a Kickstarter like that? Probably not. I guess we are at the mercy of Nvidia. Hopefully AMD will get back in this game soon and create some real competition (more OpenCL adoption and DirectX 12 should help get things going).


  • Advanced Member

As far as I have seen, most GPU renderers use single precision and don't touch double precision at all. Octane's devs have said that they do use DP a bit for hair and displacement, but those don't affect the rendering enough to make much of a difference between the cards. For Nvidia GPUs it all comes down to the drivers and what those drivers are optimized for. That is, GTX is optimized for single precision and DirectX, while Quadro is optimized for double precision and OpenGL. This is an artificial distinction enforced by Nvidia, so if you have a GTX and a Quadro card with the same chip type, the Quadro will not be as fast as the GTX card for rendering. If you are using a heavy OpenGL program, then the Quadro will be much faster than the GTX.
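If anyone wants to see that single- vs. double-precision gap on their own card instead of trusting the spec sheets, here is a rough CUDA micro-benchmark sketch (my own illustration, not from any post here; the kernel, loop count, and launch sizes are arbitrary choices, and boost-clock behavior will skew the exact ratio):

```cuda
// fp_ratio.cu -- hypothetical micro-benchmark: time an FMA-heavy kernel in
// float vs. double to estimate a card's FP64/FP32 throughput ratio.
// Build: nvcc fp_ratio.cu -o fp_ratio
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void fma_loop(T *out, int iters) {
    const T a = (T)1.000001, b = (T)0.999999;
    T c = (T)0.5;
    for (int i = 0; i < iters; ++i)
        c = a * c + b;                               // one FMA per iteration
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;  // keep the result live
}

template <typename T>
float time_ms(T *buf, int iters) {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    fma_loop<<<1024, 256>>>(buf, iters);             // warm-up launch
    cudaEventRecord(start);
    fma_loop<<<1024, 256>>>(buf, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main() {
    const int n = 1024 * 256;                        // one slot per thread
    const int iters = 1 << 16;
    float *f = NULL;
    double *d = NULL;
    cudaMalloc((void **)&f, n * sizeof(float));
    cudaMalloc((void **)&d, n * sizeof(double));
    float fp32 = time_ms(f, iters);
    float fp64 = time_ms(d, iters);
    printf("FP32 %.1f ms, FP64 %.1f ms -> FP64 throughput ~ 1/%.0f of FP32\n",
           fp32, fp64, fp64 / fp32);
    cudaFree(f);
    cudaFree(d);
    return 0;
}
```

On a DP-crippled gaming card the printed ratio should land somewhere near the advertised figure (1/24, 1/32, etc.); on a card with full DP enabled it should come out much closer to 1/3.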


  • Reputable Contributor

I see where they are coming from: gamers have less money, and hence they sell them crippled hardware. I mean, for a gamer, the advances in video card technology have been pretty solid with every cycle.

 

As a pro, Nvidia wants more of your money, and I guess rightfully so, since we are actually making money with their products; and when you consider the amount of time GPU rendering can save, even $3k per card is a bargain...

I don't buy that, personally, because the differences between the GTX and Quadro lines are extremely minimal, but the cost difference is not. They are simply price-gouging by charging thousands for 99% the same card. What people use them for is none of their business; whether it's a studio or an individual artist, if every hardware and software vendor tries to gouge them for 400-600% profit margins, then those same studios will find it increasingly difficult to compete. That's why many small to mid-sized studios largely stick with consumer cards rather than the uber-expensive Quadro models.


  • Advanced Member

But it's not about the production cost; it's about the R&D cost. They worked to develop these cards, and they are entitled to try to make as much money as they can from them. It's what the free market is all about. They make it cheaper for people who can't afford to pay a lot (gamers), and they try to make the money back on people who can actually afford it.

 

Also, it is possible that the prices of gaming cards are artificially reduced to compete with AMD, and the real prices (R&D-wise) are in fact those of the Quadro cards.


  • Reputable Contributor

The only thing I wish is that Nvidia would have picked a certain double-precision ratio and stuck with it from the beginning: keep improving the gamer side of the card, but keep the same DP throughout development. My 550 Ti is FP64 = 1/8 FP32.

 

That way you know you are only going to get so much DP, and if you want better double precision, then you know to start saving up for the pricier card. Nvidia wins here on both counts: they make the sale of a gaming card with some DP compute power, and then they get my business for the more expensive DP cards.

 

Nvidia has been all over the board with DP on their gaming-type cards. No consistent marketing policy here.

550 Ti: 1/8 FP32 (my current card)

Then, from there, less and less...

Titan and Titan Black: 1/3 FP32. Here they were trying to bring on board buyers who could not afford the higher-priced DP cards. It appears to have been too successful, as the Titan X is 1/32 FP32.
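To put those ratios in rough numbers (peak FP64 is just peak FP32 scaled by the ratio, using Nvidia's published peak figures, which are approximate and clock-dependent): a Titan Black at roughly 5.1 TFLOPS FP32 with a 1/3 ratio gives about 1.7 TFLOPS FP64, while a Titan X at roughly 6.1 TFLOPS FP32 with a 1/32 ratio gives only about 0.19 TFLOPS FP64; the newer $999 card has around a ninth of the older card's DP throughput.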

This back-and-forth handling of DP is not a very smart customer policy. Pick a DP ratio and stick with it. That is a personal opinion, of course, but as a long-time Nvidia customer it makes the most logical sense to me.

 

Most renderers handle only single precision, so no problem there, but if you use Adobe After Effects or other video editing software that takes advantage of DP in some areas of the program, then this back-and-forth would, I think, be frustrating at the least.

Edited by digman

  • Advanced Member

 

Most renderers handle only single precision, so no problem there, but if you use Adobe After Effects or other video editing software that takes advantage of DP in some areas of the program, then this back-and-forth would, I think, be frustrating at the least.

 

That is very true; in that case you are stuck with Quadros, at least with Nvidia. :angry: Unless those programs can use OpenCL? Then AMD cards can be used, especially since they started fixing their drivers. I only mentioned renderers because TimmyZDesign asked about rendering. :)

Edited by Grimm

  • Reputable Contributor

But it's not about the production cost; it's about the R&D cost. They worked to develop these cards, and they are entitled to try to make as much money as they can from them. It's what the free market is all about. They make it cheaper for people who can't afford to pay a lot (gamers), and they try to make the money back on people who can actually afford it.

 

Also, it is possible that the prices of gaming cards are artificially reduced to compete with AMD, and the real prices (R&D-wise) are in fact those of the Quadro cards.

I understand your premise, but I don't believe R&D actually makes the cards sold to consumers a LOSS that they have to make up by gouging the professional/workstation market. They make a handsome profit in the consumer card market all by itself. Nvidia would be HUGE without the workstation cards; it's a tiny niche. They charge more for the Quadro cards because they believe the studios that use them have deeper pockets, and thus they price-gouge. It's why you can go into the ritzy/upscale parts of a major city and notice that the prices for the VERY SAME ITEMS are exorbitant. How does a gallon of gas at one station just a block away cost only $2.50, but on the upscale side cost $3.50? Greed. That's how. I get the free-market concept. I like it, but there is always the presence of greed here and there. Workstation cards are pure greed, not R&D. The cards they cripple cost the very same to make as the Quadros. They just know they can't price-gouge and compete well in the consumer market, because there is just too much competition there, and it's too large a market to ignore.

 

But rather than ask governments to crack down on Nvidia, I prefer to vote with my pocketbook. That's why I never have bought, and never plan to buy, a Quadro card. There are many studios who feel the same way. They can't afford to blow $3k-5k on a Quadro card; with all of this in mind, they are struggling just to keep their staff paid and the lights on.


  • Contributor

That is very true; in that case you are stuck with Quadros, at least with Nvidia. :angry: Unless those programs can use OpenCL? Then AMD cards can be used, especially since they started fixing their drivers. I only mentioned renderers because TimmyZDesign asked about rendering. :)

Thanks for the info, Grimm and Digman!

AbnRanger, you seem so sure that these Quadros are a ripoff. Have you tried a Quadro in DCC apps to see if it performs the same as a less expensive gaming card with similar specs? I'm not saying you are wrong about the price gouging; I'm sure it is a real possibility. But other than comparing spec sheets, have you seen in real-world scenarios that both kinds of cards perform essentially the same in DCC apps?

I mean, I see videos on YouTube showing a Quadro card with lower specs performing much better than a gaming card in Maya, and I really start to wonder how much the specs matter.

After all, I remember some videos you made which showed that 3D-Coat is much faster on Intel chipsets than on AMD, simply because Andrew used libraries that were optimized for Intel. Maybe there are similarly significant optimizations for Quadros in DCC apps that don't exist for gaming cards?

Here is a link to one of those videos. It is almost a year old and not in English, but there are subtitles, so it is still easy to understand the points he is making.

What do you think of videos like this? Are there still big differences like this between the newer models of Quadros and gaming cards, or are the gaming cards still the best bang for the buck?

http://youtu.be/iJ3oEZEP0bs


  • Reputable Contributor

Thanks for the info, Grimm and Digman!

AbnRanger, you seem so sure that these Quadros are a ripoff. Have you tried a Quadro in DCC apps to see if it performs the same as a less expensive gaming card with similar specs? I'm not saying you are wrong about the price gouging; I'm sure it is a real possibility. But other than comparing spec sheets, have you seen in real-world scenarios that both kinds of cards perform essentially the same in DCC apps?

I mean, I see videos on YouTube showing a Quadro card with lower specs performing much better than a gaming card in Maya, and I really start to wonder how much the specs matter.

After all, I remember some videos you made which showed that 3D-Coat is much faster on Intel chipsets than on AMD, simply because Andrew used libraries that were optimized for Intel. Maybe there are similarly significant optimizations for Quadros in DCC apps that don't exist for gaming cards?

Here is a link to one of those videos. It is almost a year old and not in English, but there are subtitles, so it is still easy to understand the points he is making.

What do you think of videos like this? Are there still big differences like this between the newer models of Quadros and gaming cards, or are the gaming cards still the best bang for the buck?

http://youtu.be/iJ3oEZEP0bs

I was never able to do a side-by-side comparison between GTX cards and Quadros; I've just seen detailed articles on sites like Tom's Hardware and others like it. Quadros do make a difference when working with heavy-polygon scenes, but offer no advantage when GPU rendering or when used in sculpting apps. I don't work with heavy scenes in Max much, so it wouldn't be worth the investment. It does seem like Nvidia knows how to cripple the GTX cards with drivers to make them less effective than their workstation counterparts, because most of the difference isn't in the hardware but in the drivers alone. So, just as a 980 card can be crippled to be sold as a 970, Nvidia basically turns certain features on in software.

 

It's still gouging, because they are artificially creating the disparity and using it to leverage heavy profit margins on the Quadro cards. I could understand if they were marginally more expensive, say $100-200 more for their top-end cards, but they know that many studios will pay 300-500% more because they need the extra performance in those situations. That means on a top-end card they are making about $3000 in straight profit above the cost of creating the card. They also try to justify the exorbitant increase by saying the support is better. Who really needs support on these cards very often? It's all a ruse.


  • Advanced Member

All I know is the Quadros at work make mincemeat of GTX cards with the same chipset when used in 3ds Max or Maya on heavy scenes with lots of particles and deformation.

Not good for rendering with Octane, though.


  • Advanced Member

The best possible setup to cover all bases is to have a Quadro card to power the monitors, and then have one, two, or more GTX cards just for rendering. That way you have all the OpenGL goodness from the Quadro and all the single-precision power for rendering from the GTX cards.
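For what it's worth, a CUDA app can make that split explicit by steering compute work away from the display card. Here is a minimal sketch (my own illustration; the name check is just a heuristic I'm assuming, not an official way to classify cards, and real renderers like Octane expose a device picker in their UI instead):

```cuda
// pick_gpu.cu -- hypothetical device-selection sketch: leave the Quadro to
// drive the monitors/OpenGL and route CUDA work to the GeForce cards.
// Build: nvcc pick_gpu.cu -o pick_gpu
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Heuristic: treat anything with "Quadro" in its name as the
        // display card and skip it for compute work.
        bool isQuadro = strstr(prop.name, "Quadro") != NULL;
        printf("Device %d: %s -> %s\n", i, prop.name,
               isQuadro ? "leave for display/OpenGL" : "use for rendering");
        if (!isQuadro) {
            cudaSetDevice(i);   // subsequent kernel launches target this GPU
            // ... launch render kernels here ...
        }
    }
    return 0;
}
```

With multiple GeForce cards you would typically spin up one host thread or stream per device; the point is just that nothing forces the display GPU to carry the compute load.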


  • Advanced Member

The best possible setup to cover all bases is to have a Quadro card to power the monitors, and then have one, two, or more GTX cards just for rendering. That way you have all the OpenGL goodness from the Quadro and all the single-precision power for rendering from the GTX cards.

 

 

Good idea... for Nvidia.

 

milking_cows_chiquinquira_boyaca_colombi


  • Contributor

I was never able to do a side-by-side comparison between GTX cards and Quadros; I've just seen detailed articles on sites like Tom's Hardware and others like it. Quadros do make a difference when working with heavy-polygon scenes, but offer no advantage when GPU rendering or when used in sculpting apps. I don't work with heavy scenes in Max much, so it wouldn't be worth the investment. It does seem like Nvidia knows how to cripple the GTX cards with drivers to make them less effective than their workstation counterparts, because most of the difference isn't in the hardware but in the drivers alone. So, just as a 980 card can be crippled to be sold as a 970, Nvidia basically turns certain features on in software.

 

It's still gouging, because they are artificially creating the disparity and using it to leverage heavy profit margins on the Quadro cards. I could understand if they were marginally more expensive, say $100-200 more for their top-end cards, but they know that many studios will pay 300-500% more because they need the extra performance in those situations. That means on a top-end card they are making about $3000 in straight profit above the cost of creating the card. They also try to justify the exorbitant increase by saying the support is better. Who really needs support on these cards very often? It's all a ruse.

 

Here is an interesting comparison:

1. Quadro K5200 (8GB memory, 256-bit, 2304 CUDA cores, 150W power consumption)

Sold new on Amazon for: $1900

2. GTX Titan Black (6GB memory, 384-bit, 2880 CUDA cores, 250W power consumption)

Sold new on Amazon for: $1600

I wonder if this Quadro outperforms the Titan in viewport navigation speed and animation playback? In this case the Quadro isn't much more expensive than the gaming card, and it might even be a better choice for someone doing lots of character animation in Maya. Maybe the Quadro is faster for viewport navigation when modeling, too? But I can't really know for sure without testing these cards side by side...

 

All I know is the Quadros at work make mincemeat of GTX cards with the same chipset when used in 3ds Max or Maya on heavy scenes with lots of particles and deformation.

Not good for rendering with Octane, though.

 

That's quite interesting. I'm starting to get the impression that Quadros are for people doing lots of intense animation, and the gaming cards are for people who mainly just do product renders and things like that.

I've always used Quadros, but I've been tempted to switch to gaming cards. Now I'm starting to think that I shouldn't bother with those after all.

 

The best possible setup to cover all bases is to have a Quadro card to power the monitors, and then have one, two, or more GTX cards just for rendering. That way you have all the OpenGL goodness from the Quadro and all the single-precision power for rendering from the GTX cards.

 

Did you hear about Nvidia's Maximus setup? In that setup you use a Tesla card for compute and a Quadro card for graphics. I wonder if a Maximus setup is better than a Quadro-and-GTX combination? I've heard people have trouble with drivers when they mix GTX and Quadro... but I guess Quadro and Tesla must work fine together. I'm also wondering if the Maximus thing has become outdated technology, since there doesn't seem to be much news about it since around 2013 or so.

 

http://www.nvidia.com/object/multi-gpu-technology.html

 

https://www.youtube.com/watch?v=s7niAKeAVxY

 

EDIT: I originally put the wrong price for the GTX card I was talking about above. It looks like it is actually selling for $1600 right now.


  • Reputable Contributor

A Titan Black is not $1600. More like $1100.

 

The way I look at it is this: you could build a stout PC for just $1500, including a GTX 980. Spending $2k+ on a Quadro card just to get better tumbling performance takes a complete system from $1500 to about $3000-5000. The CPU also has some bearing on overall application performance, so one could get the best CPU one can afford to help mitigate the need for an uber-expensive Quadro card.

 

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&IsNodeId=1&N=100007709%20600488129


  • Member

I've been working with GPU rendering for some time now, and I really can't recommend the Quadro line of cards. When you compare GeForce vs. Quadro at a similar price point, you will always get more "bang" for your buck with the GeForce cards. The Quadros were always the weakest CUDA rendering performers, and the viewport performance difference was negligible. I guess I'm a big proponent of the GeForce cards because I've never gotten such returns out of any hardware at that price point before. If I didn't already own a Titan Black, I would seriously look into getting a Titan X, because it really sounds like a fantastic card.

 

 

