3DCoat Forums

Best GPU value for 3D-Coat


probiner


  • Member

My old HD 3870 still delivers, but I'm on my way to an upgrade.

 

In my price range I'm split between:
NVIDIA GTX 760 2GB

ATI Radeon HD 7950 3GB

Is anyone using either of them? How's navigation?
Would I gain anything by upping my budget a little, or will only a damn Titan give a real performance kick?
Is CUDA a game changer? I guess with the 7950 I'd be out of luck there.

Thanks


  • Reputable Contributor

CUDA does help when you are sculpting in Voxel mode, but the brushes in Surface mode are so refined and fast now that they pretty much blow voxel sculpting out of the water. There are still a number of operations that are best done in Voxel mode, but it seems Andrew has placed more focus on Surface mode over the past two years. You can see it just in the larger number of brushes in Surface mode...not to mention LiveClay. So, in 3D Coat alone, I don't think you'll be hindered much by going with an ATI instead. However, it's nice to know you have CUDA in your back pocket, and with more and more GPU renderers coming onto the market (most are CUDA accelerated), it really makes sense to be ready to take advantage of them by having a newer NVidia card.

 

I recently bought a GTX 670 4GB, and I was severely disappointed by how it struggled navigating with wireframe toggled on (in the Voxel Room). I recorded a video for Andrew to see, and he forwarded it to his NVidia rep. That rep said it was a known issue and that they were looking to correct it in an upcoming driver update. I waited about a month and nothing had changed. Plus, it seems that NVidia crippled CUDA performance in the Kepler series (a GTX 770 or lower is still Kepler architecture). So, you could either fork out the cash for a 780 or do what I did...go with a GTX 580 3GB. It is still the best CUDA card outside of the Titan, and it's slightly better than the GTX 780 in terms of CUDA performance. I don't know if the wireframe issue has been solved with the 780.

 

What I think it is, is the memory bus. With the 600 series, NVidia scaled the memory bus BACK (instead of UP): from 512-bit in the GTX 280/285 and 384-bit in the GTX 480/580, down to 256-bit in the 680 (the 770 is essentially a slightly upgraded 680). So, here you have an app like 3D Coat displaying 10 million+ triangles with wireframe toggled on...it seems the narrow memory bus is the bottleneck. It's like having a Corvette engine connected to a Cavalier transmission.
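For a rough sense of why bus width matters, here's a back-of-envelope sketch (illustrative numbers only; real cards also differ in effective memory clock, caches, and compression, so treat it as a simplification rather than a benchmark):

// Theoretical memory bandwidth = (bus width in bits / 8) * effective memory clock.
// Assumes the SAME memory clock for both buses, purely to isolate the bus width.
#include <cstdio>

static double bandwidth_gbs(int bus_width_bits, double mem_clock_gtps) {
    return (bus_width_bits / 8.0) * mem_clock_gtps;  // bytes per transfer * GT/s = GB/s
}

int main() {
    const double clock = 4.0;  // hypothetical 4 GT/s effective memory clock
    std::printf("384-bit bus: %.0f GB/s\n", bandwidth_gbs(384, clock));  // ~192 GB/s
    std::printf("256-bit bus: %.0f GB/s\n", bandwidth_gbs(256, clock));  // ~128 GB/s
    return 0;
}

At the same memory clock, a 256-bit bus moves only two-thirds of the data per second that a 384-bit bus does, which is the kind of gap that shows up when you are pushing tens of millions of wireframe edges every frame.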

 

The 580 I bought on eBay has been working like a champ. I'd recommend that option until the 780 comes down in price or the next generation comes out. ATI cards might be a good alternative, but you hamstring yourself from being able to use any GPU renderers, and ATI tends to have driver issues with CG apps. I used to buy nothing but ATI cards for the longest time, until about 4-5 years ago. The drivers on my 4850 would not let me use Combustion (the compositing app I was using at the time), among other things. Once I switched to an NVidia card, the problems went away, and I've stuck with NVidia ever since. NVidia aggressively develops CUDA, whereas ATI doesn't spend a dime on GPU stream-computing software.

Edited by AbnRanger

  • Member

Hey AbnRanger, thanks for taking the time to write a thorough answer.

You kind of convinced me right there to go with the ATI. The 7950 has a 384-bit memory bus against the 256-bit of the GTX 760. Also, the extra memory might come in handy if I attempt to render anything on the GPU, though I admit that even 3GB is not enough for serious stuff.

As for the drivers, as an ATI user myself I've seen how bad things can get. They seem OK on my end now though, and from my reading people are happy these days.

About CUDA, yeah, I'll be cutting myself short on that front... The OpenCL front isn't coming to 3D-Coat now or in the near future, right?

Cheers

 


  • Reputable Contributor


 

I don't think Andrew is big on GPU acceleration at this stage...so I doubt he'll even update/recompile the CUDA code anytime soon. I've been trying to get him to do that for the longest time, and he's been very reluctant thus far. He doesn't think the speed benefit will be noticeable enough to warrant spending a few months recompiling. I do hope he can add CUDA acceleration for the Pose tool...it needs it. Other than that, the speed in Surface mode has gotten really good in V4...using CPU multi-threading.

 

http://www.youtube.com/watch?v=RIGMArh0myo&list=PL0614F2A03AD725CD&index=15


  • Member

I think most developers are taking a "wait and see" approach to GPGPU (general-purpose GPU computing), aka CUDA and OpenCL. The main problems are:

- There are only two GPU vendors, and they are arch enemies with no motivation to cooperate on standards. NV is not interested in promoting OpenCL, while CUDA is NV-proprietary.

- A lot of development effort has to go into a multi-processing architecture: deciding what to do on the GPU vs. the CPU, and how to coordinate those threads of execution. With CPU capability continuing to advance (more cores, higher clock speeds at reduced power usage), it is much more straightforward to simply make an app take advantage of multiple cores/threads on that well-known architecture (see the sketch after this list).
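To illustrate the CPU-side path described above, here is a minimal sketch of splitting a per-vertex brush operation across CPU threads. The data layout and the displacement are hypothetical stand-ins (3D Coat's internals are not public); it only shows the general multi-core pattern.

#include <algorithm>
#include <functional>
#include <thread>
#include <vector>

struct Vec3 { float x, y, z; };

// Apply a (made-up) displacement to a contiguous range of vertices.
static void displace_range(std::vector<Vec3>& verts, size_t begin, size_t end, float amount) {
    for (size_t i = begin; i < end; ++i)
        verts[i].z += amount;  // stand-in for a real brush falloff along the normal
}

// Split the vertex array across the available hardware threads.
static void displace_parallel(std::vector<Vec3>& verts, float amount) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    size_t chunk = (verts.size() + n - 1) / n;
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t) {
        size_t begin = t * chunk;
        size_t end   = std::min(verts.size(), begin + chunk);
        if (begin < end)
            pool.emplace_back(displace_range, std::ref(verts), begin, end, amount);
    }
    for (auto& th : pool) th.join();
}

Moving that same loop to the GPU adds device memory transfers, kernel launches, and CPU/GPU synchronization on top of this, which is exactly the extra coordination cost described above.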

 

probiner, I'm going to be making a similar decision in the next few weeks, but I personally am only considering NV cards. For one thing, I believe NV's OpenGL support is significantly better than AMD/ATI's and pretty much always has been; AMD concentrates on DirectX. AbnRanger pointed out some of the current state of things in the NV GeForce lineup that makes a buying decision harder than it ought to be. I'm currently leaning toward this one due to the 4GB of VRAM, but haven't decided for sure:

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16814125462

GIGABYTE GV-N770OC-4GD GeForce GTX 770 4GB 256-bit GDDR5 PCI Express 3.0 HDCP Ready WindForce 3X 450W Video Card

 

There are also 780s with 3GB for about the same price. I think the VRAM will be more important to me in the long run than CUDA cores... Sigh. More research...


  • Reputable Contributor


Well...you have to look at it from NVidia's vantage point. They have invested heavily in CUDA development. ATI/AMD has invested nothing on this front; instead, they have just sat on their hands waiting for OpenCL to mature on its own. In the meantime, though, the end user gets stuck waiting for years while that OpenCL development is underway. You can see this in the video below, where the developer of VRay (Vlado) talks about the still-new CUDA/GPU rendering paradigm.

 

http://www.youtube.com/watch?v=S74JeoPIbL4

 

There are a LOT of GPU-based renderers already on the market or coming onto it. I use both VRay and finalRender in 3ds Max, and there are GPU options for both. VRay already has its interactive renderer working in GPU mode. FinalRender R4 is due out at some point soon; it uses CUDA to accelerate not only the interactive renderer but the production rendering, too.

 

So, no matter which app you are using, there is a 3rd-party rendering option that uses the GPU/CUDA. That is the main reason why I would stick with NVidia...despite the issues I previously mentioned. The 780 has the bigger memory bus (384-bit), while the whole GTX 600 line, all the way to the GTX 770, has the small memory bus (256-bit)...which is what I think causes the wireframe bottleneck issue in the Voxel Room. There are times when you want to turn wireframe on and actually rotate around the model and work on it (so you can see the tessellation occurring when using LiveClay brushes or the SHIFT-key action options). If you are using one of those cards with a 256-bit memory bus, you will be frustrated with this limitation, so I'd heavily advise staying away from them. The GTX 580 3GB has some juice, compares very favorably with the 760/770...and can be bought for a few hundred dollars less on eBay. The 780 might be the best option if you have the $$$ to spend, but the 580 is neck and neck with it when GPU rendering, and it's not far behind in most other benchmarks, either.

 

http://www.youtube.com/watch?v=-GEGwWAYlz0

 

http://gpuboss.com/gpus/GeForce-GTX-770-vs-GeForce-GTX-580

 

A GTX 580 is practically $200 cheaper than a GTX 770 while performing better in 3D Coat, Blender Cycles, and Octane Render:

http://www.ebay.com/itm/MSI-NVIDIA-GeForce-GTX-580-3GB-GDDR5-Lightning-Limited-Edition-Dual-Fan-/321205199928?pt=PCC_Video_TV_Cards&hash=item4ac9526438

Edited by AbnRanger

  • Member

I'm going to have my new PC built at a shop :D and no one around here sells the 580. Nor do I see a big advantage in getting a 3-year-old card just for CUDA plus a good memory bus. The memory bus will give me a nice punch not only in 3D-Coat's wireframe mode but also in Lightwave and other 3D apps; CUDA is the only extra I'd be getting, as far as I can see.
€300 is my cap for the GPU budget, and I don't feel like going higher at the moment, so any nVidia 384-bit card seems to be out.
Anyway, still a week or two before I buy, so I'm reading up :)


  • Reputable Contributor

Don't let the "2-year-old card" bit fool you. It was the newest model on the market until about a year ago. If NVidia had played their cards right, the 780 would be twice as fast as the 580 in most benchmarks; it's not even close. Plus, with the Kepler cards, NVidia took a step back, not a step forward. Trust me, I know this firsthand, after investing in a GTX 670 4GB right about the time the 780 hit the market. I didn't want to have to bother stepping back, and I actually lost money doing so, but that Kepler card just sucked so badly (the wireframe issue I mentioned) that I wasn't willing to deal with it just so I could say I have a brand-new card.

 

After exchanging a few e-mails with that NVidia rep, he seemed to imply that NVidia realized they screwed up with the Kepler series (which is what you are proposing to buy with the 760/770...because it's new). So, if you mostly want to play games, it might be a better buy. But it seems that NVidia purposely crippled the Kepler architecture so it wouldn't compete with the uber-expensive Quadro cards in the CG market.

 

I didn't notice any appreciable benefit of the 670 over the 580. Again...when you are talking CUDA, we aren't simply talking 3D Coat here. If you want to do any GPU rendering, the 580 is the best card to go with outside of a 780 or Titan. The stats (especially the Blender Cycles rendering comparison in my previous post) don't lie. The memory bus will matter more when working with DX and OpenGL, too.


  • 1 year later...
  • Advanced Member

Hello, nice thread. I see some GTX 580s on eBay too, but I lost over €380 (500 USD) three years ago on a bad lawnmower, so I'm afraid of buying old things. Or should I wait for the 800 series? Will they be better again? I still have my 550 Ti with only 1GB VRAM... and it's not so good... also with some other programs...

PS: I only have a 600-watt PSU.

Edited by Monkeybrain

  • Advanced Member

I'm +1 with AbnRanger. Get a 580. The only thing out there that is better than the 580 is a 780, but it costs like 5 times more.

 

Everything in between is just depressing.

 

eBay has a really good buyer-protection policy in case of faulty goods.


  • 1 month later...
  • Advanced Member

Hey, does anyone have a GTX 970 or 980 now? I think I will buy a Palit GTX 970; it's one of the cheap GTX 970 models, the reference model. Or is a Palit Jetstream (it's overclocked) better? MSI and Asus also have top models, but the Palit is shorter and cheaper. Is it also good?

 

Do I need an overclocked GPU model for 3DCoat? I think not, because it's mostly only important to have a lot of VRAM. I'm not a gamer.

Edited by Monkeybrain

  • Advanced Member

I have an MSI 980 and it's very nice. It's only as fast as a 780, though, and much slower than a 780 Ti. There might be some improvements with drivers and software, but I don't think it will be the fastest card on the planet by any means. I use Octane, and the devs have gotten the 980 almost as fast as a Titan Black with the path-tracing kernel. The Blender/Cycles devs have been looking into speeding up the 900 series as well. Compared to my old 460 it's blazing fast. :D The other advantage of these cards is that they use much less power and run cooler than the 700 series cards. Even at full load my 980 has only ever gotten up to 76°C.


  • Advanced Member

I got myself an OC'd GTX 970, as I was able to get a good price and my GTX 580 with 1.5GB ran out of VRAM on some 3D Coat projects of mine.

 

Pro: bigger VRAM! Weeeeeeh... finally I can finish that 120+ million-triangle sculpt I have going. Yes, it is slightly insane, but 3D Coat on my machine seems to be able to cope.

 

Con: overall the FPS seems to be the same as with the GTX 580, and maybe even took a hit at times. Nowhere near the 65% average performance increase that the benchmarks promised. But I guess gaming performance and app performance are two different things.

 

 

The good thing is: you get an awesome card for gaming, 4GB of VRAM (which is plenty for 3D Coat), and almost the render/viewport performance of a GTX 580, for much less power drawn from the socket. The card is cool and quiet, and you can already get a good OC'd GTX 970 for $350.

If you think this card will blow the humble GTX 580 out of the water, though, think again. This is the Maxwell midrange card, same as the GTX 980, both sold as high-end cards this year.

 

The big Maxwell is yet to come, hopefully not in the guise of another Titan rip-off. 6GB of VRAM is likely, and around 2,500 shader cores (compared to 1,600 and 2,000 for the GTX 970/980) are expected. No word yet on the memory interface; 384-bit is possible.

So yes, this card might better both the GTX 780 Ti and the humble GTX 580 in 3D Coat when it finally arrives in its uncut version. By how much, nobody knows yet. When exactly that will happen and how much it will cost have still not been announced.

 

 

I'd say the GTX 970 is a fine card if you get a good deal. The GTX 980 is just too expensive for what is basically an early version of next year's mid-range card. If you manage to get your mitts on a double-VRAM GTX 580 for a great price, get it, and sit out the next 2-3 years to see where Maxwell is headed, because the GTX 580 still rocks in 3D Coat (if it weren't for the measly VRAM size of the standard model)!

 

As an interesting side note, I was using a GTX 670 for a short while in my work rig, just to bridge the time until my GTX 970 arrived (it has 2GB VRAM, so slightly more than my 580). Viewport FPS was the same as with the GTX 580, and even slightly better than on my new Maxwell card.

Somehow I suspect the Maxwell drivers are not well optimized yet...


  • Reputable Contributor

Yeah...I've been looking at either the 970 or buying a 780 6GB version on eBay. I'm not real happy with NVidia going backwards AGAIN with the memory bus (like they did with the 600-770 series), going to 256-bit from 384-bit. I get that they are trying to get the power consumption down, but when I invest in a card, that is a very small consideration compared to performance. So, performance again has to take a back seat to power efficiency; I don't get that strategy. I am pretty happy with my 580 3GBs for the time being. Lots of GPU rendering punch, and they perform very well in 3D Coat and 3ds Max. I may just wait until NVidia decides to quit compromising on the memory bus size. Maxwell architecture with a 384-bit bus might provide a sizable enough gain to warrant a new purchase, but until then, I just don't trust that their tricks to mitigate the smaller bus size are enough.


  • Advanced Member

If it weren't for my VRAM shortage, I wouldn't have bought a new card this year.

 

And the reason I chose the GTX 970 in the end was the combination of 4GB VRAM, good value, the difficulty of finding a double-sized 500 or 600 series card, and the fact that the GTX 780 Ti was still even more expensive than the GTX 980. Of course, just as I start using my 4GB card, rumours of 8GB cards start popping up :) ...but then the GTX 900 cards will be too slow to do anything useful with THAT much VRAM anyway.

 

The good thing is: the GTX 970 is still a very capable gaming card (where Maxwell's tricks for mitigating the memory-interface limitations seem to do the job). If the big Maxwell turns out to be a killer, I might still get it when prices come down in 2016 and put the GTX 970 into my gaming rig, replacing the aging GTX 670.

My GTX 670 is fast enough to play everything at max at 1080p, so I guess the GTX 970 should do the same at 1440p when I finally upgrade my screen. I don't expect single-card 4K to be there by 2016, so this card might still see some use even if it gets replaced in my workstation.

 

 

So yeah, the GTX 970 is a fine bridging solution, but certainly not the big upgrade for GTX 580 owners.

Edited by Gian-Reto

  • Advanced Member

I think I will also spring for a GTX 780 with 6GB, but what's the difference between the 780 Ti and the 780? So a 780 with 6GB VRAM is much better than a 970 with 4GB and only a 256-bit memory bus... and not only for this reason.

 

And I'm against a 970 also...

Just search YouTube for "coil whining" / "coil noise" on the 970s :/ it's sometimes real noise terror.


  • Advanced Member

I have had a GTX 970 for a month now. No coil whining, no coil noise here...just incredible silence, thanks to a very cool-running chip (it's a small midrange Maxwell chip, after all) and a very good 3rd-party cooler (WindForce, yay). There is a somewhat louder hum at startup; I guess it takes a while for the WindForce fan throttling to kick in. But whoever has problems with short startup noise bursts like that should go back to an Android tablet as a workstation :P

 

Of course, as usual, coil noise is luck of the draw. And as always, your chance of getting a bad unit is higher when you are an early adopter.

 

 

The difference between the GTX 780 and the 780 Ti is that the 780 has about 10-15% fewer activated cores and a marginally lower stock clock.

 

That said, no, a GTX 780 will not be faster than a GTX 970. A GTX 780 Ti will be...a little bit, and will in turn lose to the GTX 980 marginally. Really, these cards are very close, so you might want to look at other factors like memory, price, or even the bus size before deciding. The raw power of the GTX 780, 970, 780 Ti, and 980 is within 15-20% across the lineup, if you believe the tests.

 

So yeah, getting a GTX 780 6GB for a good price is a good decision. You will get a marginally slower card than the others, but probably enough memory for the next few years of sculpting, and you don't have to worry about whether the small Maxwell bus size and memory magic will limit your VRAM access speed.

Still, from my experience the GTX 970 is a fine card for 3D Coat, while being cool, quiet, and less power hungry than older NVidia cards (once again, this is a midrange card as powerful as last gen's maxed-out chip, the GTX 780).

 

 

If you have money to burn and REALLY want the best card you will be able to get in the next two years, wait for the big Maxwell. It will most probably blow the GTX 780 out of the water power-wise, will most probably have a bigger memory interface than the midrange Maxwells again, and most probably at least 6GB of VRAM (the next Titan might have 12GB, if you believe the rumours)...the only question is whether NVidia will release it as a Titan first this time again.

Edited by Gian-Reto

  • 2 weeks later...
  • Reputable Contributor

@abn: here is your 384bit memory bus =)

 

http://www.game-debate.com/hardware/index.php?gid=2490&graphics=GeForce%20GTX%20980%20Ti%208GB

 

coming december :D

It will be interesting to see what they charge for it. I've got two GTX 580s handling the GPU rendering, and they are just flat-out kicking butt and taking names. :D I don't know that I'll be upgrading until NVidia conclusively proves their cards are twice as fast as the GTX 580.


  • Member


Exactly my thinking about upgrading my Intel processor, too.


  • Advanced Member

I have been waiting for my EVGA GTX 970 for a week and a half now; I hope I will get the card this week or the week after next :)

If this isn't enough in a few months, maybe a second GTX 970 from EVGA, the same type of card? But I read that SLI isn't a plus for graphics software, only a plus for games. Is this true?

Edited by Monkeybrain

  • Reputable Contributor


Dual cards are not going to give you twice the performance in apps like 3D Coat, 3ds Max, Maya, Modo, etc., except when you use them for GPU-accelerated rendering...like VRay RT (in GPU mode), Octane, Thea, Moskito, FurryBall, Redshift, Arion, Cycles, etc. Those renderers recognize multiple cards and assign them to the rendering automatically. There is no need to use an SLI bridge, and in fact they may not work correctly if such a bridge is used. SLI is indeed intended mostly to benefit gaming.


  • Advanced Member

Yeah, as AbnRanger said: SLI is only for gaming, and for that you do need identical cards. But for GPU rendering (which is where multiple cards are useful) you can mix and match any cards you want.


  • Reputable Contributor


The caveat with mixing cards for GPU rendering is that they are typically constrained to the video memory of the smallest card. So, for example, if you have one card with 2GB of video RAM and another with 6GB, the renderer is constrained to 2GB only. You could mix a 780 and a 970 if you want, but keep this "weak link" principle in mind regarding video RAM. Another thing you can do with multiple cards is use the weakest one to drive your video display and assign the most powerful one to rendering only. It is basically a render node inside your case...which is pretty cool.
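As a rough sketch of what a CUDA-based renderer sees when it enumerates your cards (these are standard CUDA runtime calls; the "weak link" budget report is just an illustration, not any particular renderer's policy):

// List every CUDA device and note the smallest VRAM pool; a scene split across
// mismatched cards generally has to fit inside that smallest pool.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        std::printf("No CUDA devices found.\n");
        return 1;
    }
    size_t smallest = (size_t)-1;
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        std::printf("GPU %d: %s, %zu MB VRAM\n", i, prop.name, prop.totalGlobalMem >> 20);
        if (prop.totalGlobalMem < smallest) smallest = prop.totalGlobalMem;
    }
    std::printf("Per-GPU scene budget (weak link): %zu MB\n", smallest >> 20);
    return 0;
}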

 

Because when you have interactive rendering going, it can slow down your viewport interaction if there isn't a second card to handle the display graphics. What Thea does is use each card but scale down the usage of one, so it doesn't kill the interactivity in your viewport.


  • Advanced Member

I concur with AbnRanger. I have a 460 and a 980; I use the 460 to drive my displays and just use the 980 for compute. Has Andrew updated the CUDA version for 3D-Coat? The last time I tried to run the CUDA-enabled version it wanted CUDA version 3, I think. I'm currently running version 6.5, so that doesn't make the CUDA version of 3D-Coat very useful. :(


  • Reputable Contributor


I don't think he has. I've been asking him to use CUDA (the version 6 toolkit) on the Pose tool, as it can be rather sluggish on anything over a few million triangles. I think it is just single-threaded as well, so it would be great to see CUDA doing some of the heavy lifting.
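For what it's worth, "CUDA doing some of the heavy lifting" for a pose-style operation could look roughly like the kernel below. Purely an illustration: 3D Coat's actual Pose tool code is not public, and the per-vertex weight/translation scheme here is a made-up stand-in.

// One GPU thread per vertex: apply a falloff-weighted offset to the whole selection.
#include <cuda_runtime.h>

struct Vec3 { float x, y, z; };

__global__ void pose_kernel(Vec3* verts, const float* weights, int n,
                            float dx, float dy, float dz) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float w = weights[i];  // per-vertex falloff weight, 0..1 (hypothetical)
    verts[i].x += w * dx;  // simple weighted translation as a stand-in
    verts[i].y += w * dy;  // for a full bone/transform blend
    verts[i].z += w * dz;
}

void pose_on_gpu(Vec3* d_verts, const float* d_weights, int n, float dx, float dy, float dz) {
    int block = 256;
    int grid  = (n + block - 1) / block;
    pose_kernel<<<grid, block>>>(d_verts, d_weights, n, dx, dy, dz);
    cudaDeviceSynchronize();  // wait for the kernel before reading results back
}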


  • Advanced Member


Two GPUs, not SLI, one used for the monitor.

Will programs like 3D-Coat automatically use the other GPU that isn't driving the monitor, or does it need to be configured?

Not having SLI has thrown me a bit.

