About tiburbage

Profile Information

  • Location
    Redwood City, CA
  • Interests
    3D modeling, texturing, and animation, game development in Unreal 4, programming and writing, movies, games, guitar, music
  1. I'm using the Universal Manipulator in the UV Preview window in the UV room, e.g. with an island selected, and wanted to move a shell in just U or just V, but can only seem to do "free-form" moves. I tried dragging on the horizontal or vertical lines of the gizmo, and thought Ctrl or Shift might work as constraint modifiers, but so far no luck. Does the tool support 1D translates?
  2. Not a feature request per se; I just thought people might find these things interesting, and who knows, there may be some ideas that would fit well in a 3DC context. Allegorithmic's Substance Painter implements what looks like a particle-system approach to generating physics-driven, semi-random texture effects: http://www.allegorithmic.com/products/substance-painter Quixel's dDo and nDo are really interesting tools as well. dDo: http://quixel.se/ddo/ nDo: http://quixel.se/ndo/ These tools are basically Photoshop plugins, so they can leverage the PS environment in general, including layers etc., as well as its selection and painting tools. nDo provides some really interesting normal map tools.
  3. I think most developers are taking a "wait and see" approach to GPGPU (general-purpose GPU computing), i.e. CUDA and OpenCL. The main problems are:
     - There are only two GPU vendors, and they are arch-enemies with no motivation to cooperate on standards. NVIDIA has little interest in promoting OpenCL, while CUDA is NVIDIA-proprietary.
     - A lot of development effort has to go into a multi-processing architecture: deciding what to do on the GPU vs. the CPU, and how to coordinate those threads of execution. With CPU capability continuing to advance (more cores, higher clock speeds at reduced power usage), it is much more straightforward to simply make an app take advantage of multiple cores/threads on that well-known architecture.
     probiner, I'm going to be making a similar decision in the next few weeks, but I personally am only considering NV cards. For one thing, I believe NV's OpenGL support is significantly better than AMD/ATI's and pretty much always has been; AMD concentrates on DirectX. AbnRanger pointed out some of the current state of things in the NV GeForce lineup that makes a buying decision harder than it ought to be. I'm currently leaning toward this one due to the 4GB of VRAM, but haven't decided for sure: http://www.newegg.com/Product/Product.aspx?Item=N82E16814125462 GIGABYTE GV-N770OC-4GD GeForce GTX 770 4GB 256-bit GDDR5 PCI Express 3.0 HDCP Ready WindForce 3X 450W Video Card There are also 780s with 3GB for about the same price. I think the VRAM will matter more to me in the long run than CUDA cores... Sigh. More research...
  4. I wouldn't spend too much time reading C++ books; maybe look for some short tutorials on basic C++ syntax and usage. If you know JavaScript at all, you're already part-way there. Based on a quick look at the AngelScript site, AngelScript code looks much like C++, and unlike many scripting implementations it is statically typed, but like other interpreted languages it handles memory management for you (no raw pointers or heap allocation). The problem with Python is that it has a big footprint. I have heard of AngelScript being used in game engines before to provide scriptability, probably because of its familiar syntax, small footprint with no external dependencies, and easy integration with apps/libraries built in C/C++. If its primary purpose is to create macro-like automation within 3DC, it should do fine. Where Python really shines is as an external interface, letting TDs build glue/workflow applications in Python that drive e.g. Maya, MotionBuilder, LightWave, etc. in production. With some apps, such as Maya, you can effectively write plugins in Python as well. Neither of those may be 3DC objectives, though.
  5. Just mentioning that the Textures > Texture UV Editor panel does have a "Normals" mode to display normal-direction info. It would be nice if the "UV Preview" panel could toggle between stretching hinting (what it has now), normal direction, and revealing overlapping UVs. Maybe the latter could be incorporated into the normals hinting.
  6. Well, the PNG specification does support things like indexed color if you wanted to go that route. The important thing in choosing a file format is the combination of what characteristics your workflow requires (grayscale, RGB, RGBA, 8bpc, 16bpc, etc.) and what I/O support the apps you use provide. It's kind of a "least common denominator" thing. I've never had any issues with .PNG in my workflows, which include PS, Maya, LW, 3DC, and ZB. I can't vouch for import into game engines like Unity or UDK, though...
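     As a rough illustration of what those "format characteristics" look like at the byte level, here's a minimal Python sketch (not tied to any particular app's importer, and purely my own example) that reads a PNG's IHDR chunk to report its bit depth and color type -- the same grayscale/RGB/RGBA/indexed distinctions discussed above:

     ```python
     import struct
     import zlib

     # Color types defined by the PNG specification
     COLOR_TYPES = {0: "grayscale", 2: "RGB", 3: "indexed", 4: "grayscale+alpha", 6: "RGBA"}

     def png_ihdr_info(data: bytes) -> dict:
         """Parse the IHDR chunk (always the first chunk) of a PNG byte stream."""
         if data[:8] != b"\x89PNG\r\n\x1a\n":
             raise ValueError("not a PNG file")
         # Chunk layout: 4-byte big-endian length, 4-byte type, body, 4-byte CRC
         length, ctype = struct.unpack(">I4s", data[8:16])
         if ctype != b"IHDR" or length != 13:
             raise ValueError("malformed PNG: IHDR must be the first chunk")
         width, height, depth, color, _comp, _filt, _ilace = struct.unpack(">IIBBBBB", data[16:29])
         return {"width": width, "height": height,
                 "bit_depth": depth, "color_type": COLOR_TYPES.get(color, "unknown")}

     def _chunk(tag: bytes, body: bytes) -> bytes:
         """Build one PNG chunk, including its CRC."""
         return struct.pack(">I", len(body)) + tag + body + struct.pack(">I", zlib.crc32(tag + body))

     # Build a 1x1 8-bit RGBA PNG in memory just to exercise the parser.
     demo_png = (b"\x89PNG\r\n\x1a\n"
                 + _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 6, 0, 0, 0))
                 + _chunk(b"IDAT", zlib.compress(b"\x00\xff\x00\x00\xff"))  # filter byte + one RGBA pixel
                 + _chunk(b"IEND", b""))

     print(png_ihdr_info(demo_png))
     # {'width': 1, 'height': 1, 'bit_depth': 8, 'color_type': 'RGBA'}
     ```

     Swapping the color byte (6) for 0 and the depth for 16 would describe a 16-bit grayscale image, which is exactly the kind of per-app support question that decides whether PNG fits a given pipeline.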
  7. My use of 3D-Coat is currently pretty limited, so I can't comment on most of the issues you raise. I use LightWave Modeler for hard-surface modeling, and its UV tools pretty much suck, so I use 3D-Coat for initial UV map generation/unwrapping, and Maya where good lower-level UV tools are needed. Maya's hard-surface UV tools are pretty good, and its organic/unwrap tools are also decent, but 3DC's are faster at getting the UV islands initially defined and uniformly parameterized. Anyway, I do have a few comments based on my own 20+ years now in software dev. johnnycore's list of stability and performance issues is so long that most developers I know, if confronted with it, would just let it "fall on the floor" (too much information). My thoughts on what it would be best for Andrew to work on in the drive toward the v4 release, based on my own experience, are:
     * Stability -- there can still be things that are awkward, features not yet there, or areas where performance isn't great, but if the user experience is frequently punctuated by crashes, most users will reluctantly move on. A reputation for instability can ultimately sink an app and is hard to dispel once acquired.
     * UI freeze -- adoption of a new app depends heavily on user documentation and videos/tutorials. Even if there are some inconsistencies or rough patches in workflow, those videos especially can explain them. However, both documentation and videos quickly become confusing or invalid if the user interface keeps changing. The quality and scope of those (YouTube) videos is, I think, critical to bringing new users in. This also implies a temporary moratorium on feature additions.
     * Performance -- I would caution against significant architectural changes to chase better performance this late in the v3 cycle. I think the key for driving toward the v4 release would be to identify the places where inadequate performance has the greatest impact on the general professional user experience and which would not require major rewrites (which WILL introduce additional bugs and probably new instabilities, and lengthen the v3 dev cycle). The best time in a development cycle to make architectural changes is at the beginning of a product version cycle, so the impact of those changes gets a long bake time (beta user testing).
     * App-links -- make sure they work with the current versions of the software they link with. It's great that folks outside 3DC have contributed these, but they have to work well for the key link apps or 3DC will look bad regardless of who initially developed them.
     The hard work for those of you who use 3DC and really push it to its limits is to help Andrew focus on, say, the top 1-3 areas of greatest concern -- especially ones you believe truly limit wider adoption of the app once those constraints become known. If the initial 3DC v4 release is stable, well documented, and has adequate performance (on hardware at least up to some well-documented minimum standard) for the majority of users, it should be well received (and bring in needed new-customer revenue), and then the developers will have the breathing room to review their architectural decisions and begin the long process of redesign. Maybe even add staff!
  8. TGA is popular because it is apparently very easy to write an importer/exporter for, and at least in the past it was kind of the common denominator for game-engine usage. I pretty much always use PNG for non-HDR work because it is lossless, has good compression, and supports 8bpc RGBA as well as 8- or 16-bit grayscale.
  9. With "Preview Islands" enabled, 3DC shows the island currently hovered over in the model view in the preview window, zoomed to fill the window. While I guess that is useful for giving a more detailed view of the island, I would find it more useful to have an option where the island under the cursor highlights in the preview window while the preview still shows the overall layout. I know that in "Islands" mode you can go in the opposite direction -- select the island in the preview window and see what gets highlighted on the model -- but I would still find the requested option useful.
  10. +1 I came here to log a couple of similar/related requests, which I'll post separately.
  11. In Maya, a Shape is equivalent to a LW Modeler layer. A Maya Shape, like a LW layer, can contain any number of unconnected meshes (shells). Maya doesn't distinguish between a "model" and a "scene" in its native .ma/.mb format. Maya does not have LWO I/O support, but does have .obj and .fbx import/export.
  12. Folks, while I was evaluating 3DC, I tried both the OGL and DX 64-bit versions and didn't notice any particular difference. The DX version is said to be "faster." Does anyone have comments about one vs. the other from the standpoint of stability, performance, or display quality? -Tom
  13. I just spent a couple of weeks going back and forth between ZBrush (which I already own) and 3D-Coat, just to get a feel for how they overlap and how they differ. I decided that, as good as ZBrush is, 3D-Coat is a very useful tool in its own right, and it is pushing the technology in some very interesting directions. 3D-Coat represents a trend I really like in CG software these days: away from huge, monolithic "try to do everything" apps and toward small, highly focused, rapidly developing ones. It seems like this is where the real innovation is coming from. It was initially 3D-Coat's 3D paint/texturing and UV tools that caught my attention, but the retopo tools look top-notch, and I think the time has finally come, in terms of computer horsepower, for voxel-based modeling to really show its potential. I've long enjoyed the straight polygonal modeling process, but it can be incredibly time-consuming and at times downright tedious (UV mapping included). I'm really looking forward to thinking more in terms of shape and form and less about topology details. That dovetails nicely with the retopo tools, since eventually a good base mesh will have to be derived for external use. Anyway, I'm looking forward to some creative experimentation in the coming months. By way of a short intro, I've worked for the last 20 years in software development, mostly on the testing side, primarily as a tools and automation systems developer, with Adobe in San Jose for the past 11. For a number of years now, though, my real interest has been in CG, professionally in both video/film and high-end game production. I have used LightWave since v6 and Maya since v6, and am comfortable with After Effects, PS, and ZBrush. I split my creative time between "artist" and "TD" endeavors, and would be happy to find production opportunities anywhere between those two poles.
I'm looking forward to hearing how folks are integrating 3D-Coat into their own creative pipelines. -Tom