3DCoat Forums

Scott

Member
  1. Yes, you are incorrect. CUDA 3.0 is compatible with G80 and above, as were previous versions.
  2. I would imagine that, as Andrew suggests, version 2.2 is needed, not the latest 2.3-supported drivers and toolkits from Nvidia. Ideally, Andrew could easily include the CUDART runtime himself with the distribution (Nvidia made it so end users should not need the toolkit anymore); this would make these issues easier to avoid.
  3. Yes, I agree the Nvidia CUDA angle is something both Nvidia and Andrew should play up, as is cross-platform D3D and OGL support. It would be really nice to see an early Linux version running, just to let people know that it is a real option for 3DC; it's something Pixologic and ZBrush won't be able to show, and having a real sculpting application with real 2D layer-paint capability is sure to impress the larger studios, which are often Linux-based. But I also agree that if he can't have some fast, impressive sculptors on the floor, he should at least get some impressive video demos running: retopology, voxels, and particularly things like Curves and Snake, which are unique to voxels and 3DC. Also, try to print up some CDs with manuals, videos, and 3DCoat trials to hand out to ensure a popular Siggraph meet. And it might be time for 3DC loyalists to try to turn out some really nice sculpts/renders for Andrew to slideshow.
  4. Yes, if you wish to use both, you will need both installed... Hopefully Andrew starts to use CUDA 2.2 and includes CUDART.dll in future versions of 3DC, mainly so I don't have to answer this question 100 more times.
  5. You are incorrect. I have used Mudbox 2009 daily since it was released, on an 8800GTS, and have never had any issues apart from some AO shader problems, which were fixed with SP updates. It also works well on the 9650GT in my notebook, also a non-Quadro card. As mentioned, Autodesk generally only officially supports Quadros for a range of their 3D and CAD software; if you want technical support with an issue, you generally only get it if you are running certified drivers, as mentioned previously. Back on topic, but with a similar issue: Andrew, a small suggestion would be that now that CUDA 2.2 has been released (today), perhaps you could update to CUDA 2.2 and then include the runtime .dlls with the 3DC install. This would be less confusing and would allow users to need only 3DC and an Nvidia driver to use CUDA; currently the toolkit is needed as well. CUDA 2.2 may offer some slight performance increases over 2.1, too. Not a big deal, but it may be useful when 3DC 3.0 is officially released.
  6. Hi. Well, it wasn't in this forum, and I don't plan on searching every forum topic before I post on a subject. It seems Andrew didn't see or answer the previous thread, so I guess it cannot hurt to have another. I'm not sure whether Andrew has intimate knowledge of his OGL code, so perhaps this thread will be lucky enough to get a reply from someone with knowledge of the subject. Personally, 3DC has slowed down a bit over the 3.0 cycle, and having 5x-faster OGL is a priority. Speed is always going to be an issue for 3DC; Mudbox and ZBrush handle laptops and the like much better, so the bindless OGL extensions would be quite useful to me.
  7. I'm wondering if Andrew has had a chance to evaluate Nvidia's bindless graphics OGL extensions. Nvidia claims significant speed and performance increases in the 185 drivers and above when using the two new extensions: http://developer.nvidia.com/object/bindless_graphics.html So I'm wondering if the 3DC OGL versions on PC, Mac, and Linux could gain some speed by adopting them. D3D doesn't have an equivalent yet, but I'm sure it will soon enough. The key idea is the ability to use GPU address pointers, which is quite a powerful change to OGL; many developers seem to believe it's a very good idea. It would be nice to think that our current video cards could push 3DC 5x faster than they currently do. More discussion here: http://www.opengl.org/discussion_boards/ub...729&fpart=1
  8. CUDA is NOT going anywhere. Nvidia are well on their way to releasing 2.2, with many other releases beyond that, and have invested far too much time and money in this technology to leave or abandon it. So forget what a few LW users don't know. Andrew should now be able to change the way CUDA is distributed: I believe the latest license allows him to distribute CUDART.DLL with 3DC itself (as of recently). So Andrew should be able to compile against, say, the new version 2.2 (beta) and ship the CUDART.DLL binary with 3DC, rather than needing us to download the matching toolkit. The EULA now gives the right to distribute Nvidia's libraries: http://developer.download.nvidia.com/compu...EULA_081215.pdf I'm sure this would be good for Andrew in the final release, as it would cause less confusion for new and potential customers. Andrew would need to compile 3DC against 2.2, and then users would already have CUDART rather than having to download the toolkit. All more modern Nvidia drivers support CUDA, but the toolkit is currently still needed (though it shouldn't be, once Andrew updates 3DC). So in the future the toolkit should only be necessary for developers; customers will just need a CUDA-enabled application (3DC) and the Nvidia drivers for it all to work well.
  9. I would love to see a "Subdivision Level" display given in the same place. So if I have increased resolution twice, this would show "SubDLevel = 2". It just lets me know where I am, or how many times I have increased the resolution.
  10. 3DC exports various file formats: .LWO, .OBJ, etc. Maxwell comes with plugins for 3D host applications, or it can be used as a standalone package. So you could take an .OBJ from 3DC, use it in Maxwell Studio to set up lighting, cameras, etc., and then do a full Maxwell render. Blender doesn't connect to Maxwell; SketchUp is likely the cheapest software with a plugin for it. I would download and try the demo, as it will give you an idea of how the pipeline workflow would work for you. I use it via LW and Houdini, but the standalone Studio is good enough to get a decent render. (Being an unbiased renderer, beware the LONG render times.)
  11. Sounds like a good plan: after the interface implementation is done, bug fixes and optimizations could be considered the beta stage, and it seems quite achievable for your expected May release. I would have thought the Qt library worth using, being free or commercial (LGPL); it would allow virtually no code changes between PC/Mac/Linux compiles and OGL/DX, and it also helps with language translations: http://www.qtsoftware.com/products/ Standard features like docking are available throughout the Qt SDK, and I see other companies like Newtek and Daz both heading that way too. Either way, I look forward to next week.
  12. The interface already has customizable colors, etc.; obviously dark grey is nice for my current UI. But the newer interface will likely be more customizable.
  13. Avoiding that topology isn't always an option. If LW can handle the object without smoothing errors, I really think most other applications should be able to handle the same objects with the same methods. So I agree with Phil that 3DC should be able to handle the object at least as well as LW does. Edit: I see Andrew has fixed the problem. Kudos!
  14. The variables are many, and it's hard to give an exact speedup. Your video card has 32 SPs, which won't give you a large speedup, but then any speedup is always welcome, really. Keep an eye on the heat, though I doubt that CUDA would change the temperature much. I would still give it a go; it does tend to help when large brush sizes are an issue. Andrew is meant to receive official speedup results from Nvidia sometime soon, to try to quantify the numbers.
  15. +1 for an OpenSUSE x64 distro. As for Photoshop, CS2 works via Wine; I'm not sure about the others, but thankfully Andrew's painting tools are likely enough for most things anyway.
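For anyone curious what the bindless extensions in post 7 actually change: instead of binding buffers by name on every draw, you fetch a 64-bit GPU address once and bind by address. This is a pseudocode sketch based on the public GL_NV_shader_buffer_load / GL_NV_vertex_buffer_unified_memory specs linked above, not on any 3DC code, and it won't compile as-is without a GL context and extension loading:

```
// 1. Upload vertex data as usual, then ask the driver for the buffer's
//    64-bit GPU address and make it resident so the address stays valid.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &gpuAddr);
glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);

// 2. At draw time, bind by address instead of by buffer name, which
//    skips the driver's name-to-address lookup on every draw call.
glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0, gpuAddr, bufSize);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);
```

The claimed speedups come from step 2: with thousands of small draw calls (a plausible load for a dense sculpt mesh), removing the per-bind lookup adds up.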