3DCoat Forums

Scott

Member
  • Posts: 60
  • Joined
  • Last visited
Everything posted by Scott

  1. Yes, you are incorrect. CUDA 3.0 is compatible with G80 and above, as were previous versions.
  2. I would imagine that, as Andrew suggests, version 2.2 is needed rather than the latest 2.3 drivers and toolkit from Nvidia. Ideally, Andrew could include the matching CUDART runtime with the distribution himself (Nvidia made it so end users should not need the toolkit anymore); that would make issues like this easier. A rough sketch of the startup check such a build could do follows below.
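A minimal sketch of that check, assuming a build linked against the bundled cudart; `cudaDriverGetVersion` and `cudaRuntimeGetVersion` are the real CUDA runtime API calls, while the helper name and messages are illustrative only:

```cpp
// Sketch: confirm at startup that the installed NVIDIA driver is new
// enough for the CUDA runtime this binary was compiled against.
#include <cuda_runtime_api.h>
#include <cstdio>

// Hypothetical helper: returns true when the CUDA path is safe to enable.
bool cudaUsable()
{
    int driverVersion = 0, runtimeVersion = 0;

    // Reports 0 when no CUDA-capable driver is installed at all.
    if (cudaDriverGetVersion(&driverVersion) != cudaSuccess || driverVersion == 0)
        return false;

    // Version of the cudart the application ships and was built against.
    if (cudaRuntimeGetVersion(&runtimeVersion) != cudaSuccess)
        return false;

    // The driver must be at least as new as the bundled runtime.
    return driverVersion >= runtimeVersion;
}

int main()
{
    std::printf(cudaUsable() ? "CUDA path available\n"
                             : "Using the plain CPU path\n");
    return 0;
}
```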
  3. Yes, I agree the Nvidia CUDA angle is something both Nvidia and Andrew should play up, as is cross-platform D3D and OGL support. It would be really nice to see an early Linux version running, just to let people know that it is a real option for 3DC. It's something Pixologic and ZBrush won't be able to show, and having a real sculpting application with real 2D layer paint capability is sure to impress the larger studios, which are often Linux based. I also agree that if he can't have some fast, impressive sculptors on the floor, he should at least get some impressive video demos running: retopology, voxels, and particularly tools like Curves and Snake, which are unique to voxels and 3DC. It would also be worth printing up some CDs with manuals, videos, and 3DCoat trials to hand out to ensure a popular SIGGRAPH meet. And it might be time for 3DC loyalists to try to turn out some really nice sculpts/renders for Andrew to slideshow.
  4. Yes, if you wish to use both, you will need both installed... Hopefully Andrew starts to use CUDA 2.2 and includes CUDART.dll in future versions of 3DC, mainly so I don't have to answer this question 100 more times.
  5. You are incorrect. I have used Mudbox 2009 daily since it was released, with an 8800GTS, and have never had any issues apart from some AO shader problems, which were fixed with SP updates. It also works well on the 9650GT in my notebook, also a non-Quadro. As mentioned, Autodesk generally only officially supports Quadros for a range of their 3D and CAD software; if you want technical support with an issue, you generally only get it if you are running certified drivers, as mentioned previously. Back on topic, but with a similar issue: Andrew, a small suggestion would be that now CUDA 2.2 has been released (today), perhaps you could update to CUDA 2.2 and then include the runtime .dll's with the 3DC install. This would be less confusing and would allow users to need only 3DC and an Nvidia driver to use CUDA; currently the toolkit is needed as well. CUDA 2.2 may also offer some slight performance increases over 2.1. Not a big deal, but it may be useful when 3DC 3.0 is officially released.
  6. Hi. Well, it wasn't in this forum, and I don't plan on searching every forum topic before I post on every subject. It seems Andrew didn't see or answer the previous thread, so I guess it cannot hurt to have another? I'm not sure if Andrew has intimate knowledge of his OGL code, so perhaps this thread may be lucky enough to get a reply from someone with knowledge on the subject. Personally, 3DC has slowed down a bit over the 3.0 cycle, and having 5x faster OGL is a priority. Speed is always going to be an issue for 3DC; Mudbox and ZBrush handle laptops etc. much better, so the bindless OGL extensions are quite useful to me.
  7. I'm wondering if Andrew has had a chance to evaluate Nvidia's "bindless graphics" OGL extensions. Nvidia claims that significant speed and performance increases are offered in the 185 drivers and above when using the two new extensions. http://developer.nvidia.com/object/bindless_graphics.html So I'm wondering if the 3DC OGL versions on PC, Mac, and Linux could gain some speed from this change? D3D doesn't have an equivalent yet, but I'm sure it will soon enough. The core idea is the ability to use GPU address pointers, which is quite a powerful change to OGL, and many developers seem to believe it's a very good idea; in rough outline, you pin a buffer in GPU memory and feed the GPU its raw address, as sketched below. It would be nice to think that our current videocards could push 3DC 5x faster than they currently do. More discussion here: http://www.opengl.org/discussion_boards/ub...729&fpart=1
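A minimal sketch of how the extensions are used, going by the GL_NV_shader_buffer_load and GL_NV_vertex_buffer_unified_memory specs; the gl*NV entry points are the real extension functions, while GLEW for loading and the single-attribute layout are my assumptions:

```cpp
// Sketch: NVIDIA "bindless" vertex pulling via
// GL_NV_vertex_buffer_unified_memory. Assumes a current GL context
// and GLEW (or similar) to resolve the extension entry points.
#include <GL/glew.h>

void setupBindlessVBO(GLuint vbo, GLsizeiptr bytes, int floatsPerVertex)
{
    GLuint64EXT gpuAddress = 0;

    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Lock the buffer into GPU memory and fetch its 64-bit GPU address.
    glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
    glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV,
                                &gpuAddress);
    glBindBuffer(GL_ARRAY_BUFFER, 0); // no buffer binds needed from here on

    // Switch attribute 0 to unified-memory mode: the GPU pulls vertex
    // data straight from the address instead of a bound buffer object.
    glEnableClientState(GL_VERTEX_ATTRIB_ARRAY_UNIFIED_NV);
    glEnableVertexAttribArray(0);
    glVertexAttribFormatNV(0, floatsPerVertex, GL_FLOAT, GL_FALSE,
                           floatsPerVertex * sizeof(GLfloat));
    glBufferAddressRangeNV(GL_VERTEX_ATTRIB_ARRAY_ADDRESS_NV, 0,
                           gpuAddress, bytes);
}
```

After this, an ordinary glDrawArrays pulls vertex data straight from that GPU address with no buffer binds in the draw loop, which is where the claimed CPU-side overhead savings come from.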
  8. CUDA is NOT going anywhere. Nvidia are well on their way to releasing 2.2, with many more releases planned beyond that, and have invested far too much time and money in this technology to leave or abandon it. So forget what a few LW users don't know. Andrew should now be able to change the way CUDA is distributed: I believe the latest license allows him to distribute CUDART.DLL with 3DC itself (as of recently). So Andrew should be able to compile against, say, the new 2.2 (beta) version and ship the CUDART.DLL binary with 3DC, rather than needing us to download the matching toolkit. The EULA now grants the right to distribute Nvidia libraries: http://developer.download.nvidia.com/compu...EULA_081215.pdf I'm sure this would be good for the final release, as it would cause less confusion for new and potential customers. Andrew would need to build 3DC against 2.2, and then users would already have CUDART rather than downloading the toolkit. All modern Nvidia drivers support CUDA, but the toolkit is still needed until Andrew updates 3DC. In the future the toolkit should only be necessary for developers; customers will just need an application with CUDA (3DC) and Nvidia drivers for it all to work well (see the probe sketched below).
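For the shipped-DLL case, a tiny Win32 probe sketch; `LoadLibraryA`/`FreeLibrary` are the real Win32 calls, while the DLL file name and layout are assumptions. The point is that a machine without the runtime just skips the CUDA path instead of failing to launch:

```cpp
// Sketch (Win32): check whether the cudart DLL shipped next to the
// executable can actually be loaded before enabling the CUDA code path.
#include <windows.h>
#include <cstdio>

static bool cudartPresent()
{
    // The loader searches the application directory first, which is
    // where a bundled runtime DLL would live (file name assumed here).
    HMODULE h = LoadLibraryA("cudart.dll");
    if (!h)
        return false;   // no runtime present -> stay on the CPU path
    FreeLibrary(h);     // only probing; real code would keep the handle
    return true;
}

int main()
{
    std::printf(cudartPresent() ? "cudart found, CUDA path enabled\n"
                                : "cudart missing, CPU path only\n");
    return 0;
}
```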
  9. I would love to see a "Subdivision Level" display in the same place. So if I have increased resolution twice, this would read "SubD Level = 2". It just lets me know where I am, or how many times I have increased the resolution.
  10. 3DC exports various file formats: .LWO, .OBJ, etc. Maxwell comes with plugins for 3D host applications, or can be used as a standalone package. So you could take an .obj from 3DC, use it in Maxwell Studio to set up lighting, cameras, etc., and then do a full Maxwell render. Blender doesn't connect to Maxwell; SketchUp is likely the cheapest software with a plugin for it. I would download and try the demo, as it will give you an idea of how the pipeline would work for you. I use it via LW and Houdini, but the standalone Studio is good enough to get a decent render. (Being an unbiased renderer, beware the LONG render times.)
  11. Sounds like a good plan: after the interface implementation is done, bug fixes and optimizations could be considered the beta stage, and it seems quite achievable for your expected release in May. I would have thought the Qt library a good fit, being available free or commercial (LGPL); it would allow virtually no code changes between PC/Mac/Linux compiles and OGL/DX, and also helps with language translations. http://www.qtsoftware.com/products/ Standard features like docking are available throughout the Qt SDK (a minimal example follows below). I also see other companies like Newtek and Daz heading that way too. Either way, I look forward to next week.
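For what it's worth, a dockable panel in Qt really is only a few lines. A minimal sketch using standard Qt classes; the window and panel names are just placeholders:

```cpp
// Sketch: the kind of dockable UI Qt provides out of the box
// (QMainWindow + QDockWidget). The user can drag, float, or
// re-dock the panel on any edge with no extra code.
#include <QApplication>
#include <QMainWindow>
#include <QDockWidget>
#include <QListWidget>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QMainWindow window;
    window.setWindowTitle("Dockable panels demo");

    // A tool panel hosted in a dock widget; content here is a stub list.
    QDockWidget *tools = new QDockWidget("Tools", &window);
    tools->setWidget(new QListWidget(tools));
    window.addDockWidget(Qt::LeftDockWidgetArea, tools);

    window.resize(800, 600);
    window.show();
    return app.exec();
}
```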
  12. The interface already has customizable colors etc.; obviously dark grey is nice for my current UI. But the newer interface will likely be more customizable.
  13. Avoiding such topology isn't always an option. If LW can handle the object without smoothing errors, I really think most other applications should be able to handle the same objects with the same methods. So I agree with Phil that 3DC should be able to handle the object at least as well as LW does. Edit: I see Andrew has fixed the problem. Kudos!
  14. The variables are many, and it's hard to give an exact speedup. Your videocard has 32 SPs, which won't give you a large speedup, but any speedup is welcome really. Keep an eye on the heat, though I doubt CUDA would change the temperature much. I would still give it a go; it does tend to help when large brush sizes are an issue. Andrew is meant to receive official speedup results from Nvidia sometime soon, to try to quantify the numbers.
  15. +1 for an OpenSUSE x64 distro. As for Photoshop, CS2 works via Wine; I'm not sure about the others. But thankfully Andrew's painting tools are likely enough for most things anyway.
  16. Why doesn't it work? It seems others use Nvidia 8xxx-series cards without the need to complain; why is it suddenly no good? Neither CUDA nor Nvidia is a requirement for 3DCoat: CUDA is an optional enhancement you can choose to take advantage of, if you have the means to. Also try to remember it's unlikely your videocard is only used by 3DC; other programs utilize it too. Videocards and computers will never be fast enough. If you are using 3DC professionally, then the cost is built into the income you make by using the tool; if you are a hobbyist, then perhaps use the hardware you have until you are ready to turn professional and can afford the Nvidia upgrade. As it's an alpha, you haven't paid money, which makes your comments about cost laughable. And then you threaten the author with leaving for another application. Classy! Please feel free to go and check out ZBrush, and remember the cost difference right up front; it might help make the Nvidia choices seem rather more logical. Nobody is forcing you to buy a new videocard. If the applications you wish to run don't run well on your hardware, I'd say it's your computer that is the problem, not the software you choose to run on it. 3D sculpting software is computationally expensive, and therefore the cat-and-mouse game of performance will always be a hurdle to overcome. It seems you made a silly choice investing in a substandard card for the thing you want to do.
  17. I would at least like a full-screen option: being able to hit Tab, like in Photoshop or Mudbox, and get a completely full screen is often all I need when using a Wacom; hitting Tab again brings the menus and buttons back. I do like Shadow's look; it is at least clean and pro-looking. I'm not 100% behind all the ideas, but it's cleaner. Maybe Andrew should start there and let users weigh in on certain decisions from then on. This discussion shows why customization is important, because we will never please everybody; it's all about getting the basics right and letting users skin whatever they want. Silo got this bit right. For me, the interface is about workflow, i.e. how fluidly I can keep sculpting, painting, and working without having to search through menus, click 50 times to achieve a single function, or stop working to think of where the next tool or function may be. Pretty is nice and all, but workflow is far more important! (Do you hear me, Microsoft?) CS3, for example, was a huge step backwards: menus used to list all their items, and now I have a little down arrow to press just to see what else is on the menu. It's thinking like this that should be avoided at all costs... :P
  18. Cool! Now for your next trick: how about realtime paint physics? Drop a paint bucket on the object and let CFD drip it over the surface via gravity, while still allowing rotation etc. of the object being splashed. I want drips of ink running down my 3D objects...
  19. Doh! No, I still had an older CUDA toolkit installed. I uninstalled the previous version, installed the latest 2.1 beta toolkit for Vista x64, and all is working now, so thanks for the efforts. It looks like there are a few issues, as mentioned by others (precision?), but the performance with large-radius brushes is quite excellent, although smoothing behaves slightly oddly. Cannot wait to see more; thanks for posting the early tests! Does 3DC need full double-precision FP? I know CUDA supports it, but only on 200-series Nvidias, which I guess means you cannot use it currently? (A quick capability check is sketched below.) Keep up the good work.
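The double-precision question is easy to test per machine: doubles arrived with compute capability 1.3 (the GTX 200 series), so a G80/G92 card like an 8800 GTS reports 1.0/1.1 and must stick to floats. A quick sketch using the real `cudaGetDeviceProperties` call (the printout is illustrative only):

```cpp
// Sketch: query whether the first CUDA device supports double precision.
#include <cuda_runtime_api.h>
#include <cstdio>

int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        std::printf("No CUDA device found\n");
        return 1;
    }

    // Double precision requires compute capability 1.3 or higher.
    bool hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
    std::printf("%s: compute %d.%d, double precision: %s\n",
                prop.name, prop.major, prop.minor,
                hasDouble ? "yes" : "no");
    return 0;
}
```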
  20. I unfortunately cannot seem to run either the 64-bit OGL or the 64-bit DX version. The 32-bit version works, but of course I wanted to test CUDA. It doesn't seem to be a CUDA-related error, and all the other CUDA examples I have work fine. "3D-CoatGL64.exe - Application Error: The application failed to initialize properly (0xc000007b). Click OK to terminate the application." I will also try it out on my laptop ASAP, but I have to load the CUDA toolkit for the 9650GT on the lappy first. Any ideas, Andrew? Vista x64, 512MB 8800GTS. (Fault module name: ntdll.dll)
  21. Lol @ the cute comment; you are an animal... Cannot wait to see what kind of results you get. Not at the moment, but in the next version of CUDA, CUDA should be able to emulate on the CPU. That means Andrew could ship a single CUDA .exe that runs in CUDA mode on Nvidia, but also works if an ATI or Intel card is found; at the moment the .exe has to be separate for ATI and Nvidia users (a sketch of that single-binary dispatch follows below). ATI/Apple/AMD and the Khronos Group will likely adopt similar ideas in their own open standard, OpenCL. PS: Andrew, Linux has been available in 64-bit for quite some time, and also has support for OGL and CUDA.
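A sketch of what that single-binary dispatch could look like; `cudaGetDeviceCount` is the real runtime call, and the two brush functions are placeholders, not 3DC's actual code:

```cpp
// Sketch: one binary that takes the CUDA path only when an NVIDIA
// device is actually present, otherwise falls back to the CPU path.
#include <cuda_runtime_api.h>
#include <cstdio>

// Placeholder implementations; the real ones would be 3DC's brush code.
static void sculptBrushCPU()  { std::puts("CPU brush path"); }
static void sculptBrushCUDA() { std::puts("CUDA brush path"); }

int main()
{
    int deviceCount = 0;

    // On machines without an NVIDIA card/driver the call fails or
    // reports zero devices, so the same binary degrades gracefully.
    bool haveCuda = (cudaGetDeviceCount(&deviceCount) == cudaSuccess
                     && deviceCount > 0);

    haveCuda ? sculptBrushCUDA() : sculptBrushCPU();
    return 0;
}
```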
  22. I will take a guess and say quite a while. I have been involved in CUDA development for about 12 months, and Andrew's code will require quite a rewrite (I imagine). CUDA is still constrained by many technical limitations, and getting a good speedup, even when rewriting code, is not always easy; sometimes the benefits are lost to the stupidity of the limitations. I imagine Andrew will want to thoroughly test it and work around these issues, which could be a long process. But I hope Andrew's genius will handle it with the same professionalism and speed he shows elsewhere. Compared to the 64-bit port, there are a lot more coding changes to be done for the CUDA-enhanced versions.
  23. Not at the moment. Apple stuffed up their implementation of 64-bit under Mac OS X, and thus nobody can really make true 64-bit software for the Mac currently (not the way it's meant to be done, anyway).
  24. Where is the link to the 64-bit version? I can only see the standard 32-bit link on page 1.
  25. Not sure that is a problem; many 3D programs use "Transpose" because it's a mathematical term. It would seem a little silly not to standardize terminology, since standardizing only simplifies things. Anyway, no complaints either way, but just to let you know it's a legit 3D term from linear algebra (written out below): http://en.wikipedia.org/wiki/Transpose
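For reference, the definition from that link, written out: transposing swaps rows and columns, so entry (i, j) of the transpose is entry (j, i) of the original matrix. In LaTeX:

```latex
% Transpose: rows and columns swap places.
(A^{\mathsf{T}})_{ij} = A_{ji},
\qquad\text{e.g.}\quad
\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}^{\mathsf{T}}
= \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}.
```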