3DCoat Forums

popwfx

Advanced Member
  • Posts: 467
  • Joined
  • Last visited
Everything posted by popwfx

  1. I've been away for a month after buying the upgrade, so I'm a bit out of the loop... Is 4.0.04B on the download page the latest version, or is there a beta or update somewhere in the forums? Thanks
  2. Oops. So if I already went through the purchase and only entered one serial, how can I pay the additional $10 and get the other one?
  3. Just bought my upgrade - thanks Andrew! One question though - I have 2 serials for 3.x, one Win & one Mac. When I bought this it just gave me a Win serial - does this also work for Mac, or is it a separate charge now per serial? I thought one license worked for both platforms as long as it wasn't used simultaneously? I forget, it has been so long since I bought 3.x. Can you tell me what the deal is and how to get a v4 Mac serial - or if I need to purchase that too? Thanks!
  4. In this discussion, what would you suggest for the reverse situation - where the user is very experienced with poly modeling, but not so much with sculpting? What is the best way to learn sculpting when you've already mastered poly (and spline) modeling?
  5. Come on guys, you are forgetting the best painting software of them all ;-) MetaCreations Painter 3D. It's even still for sale: http://www.amazon.com/Painter-3D-MetaCreations-MAC-WIN/dp/B000I5JYNQ Lol
  6. If you mean scanning range for 3D scanning, that hasn't been done yet so it isn't possible, but if it does happen down the line, it will have to be handheld and waved over your object. For hand motion interaction, with the device placed in front of your keyboard on a desk, the range is a bit wider than shoulder width from side to side, roughly the distance to your monitor going forward, and about the same distance back if you lean back in your chair. Up and down, I'd say it goes from desk height (where the device sits) to about head height. And this is all in a wide cone shape from the device. Not sure if that explanation is clear enough or makes sense though...
  7. It sure would be, but the lack of an RGB camera means no textures - even if they could get point cloud data and scanning to work, there'd be no color recorded, so all you'd get is white models. That's too bad, because scanning would be great for faces. Looks like Kinect is the only hope for the short term.
  8. Thanks lol - I was mostly kidding about your avatar; your last one was much less distressing than this one with its eyeballs falling out. Yes, there is full 3D depth with any hand motions - you can move side to side, up and down, and in (towards your screen) and out, with all five fingers on both hands. So painting would be achieved by pushing in towards the screen a certain amount, and lifting the brush off the canvas would be achieved by just pulling your fingers back (see the depth-to-brush sketch after the post list). How it feels is up to how the developer implements it. And whether or not it is truly useful depends on that and on how they give you visual feedback on screen as you paint - since there is nothing in mid air you can feel...
  9. It also has a few issues if your desk is under bright lights - so if you have a ceiling light right above you, it won't work as well. Plus it is quite small (and wired - they do plan to make a wireless one down the line, but it's too power hungry to stream all that data at that frame rate). The wired aspect means you tend to move it accidentally and then may need to recalibrate before using it again next time - which is a pain. Hopefully this will be sorted by launch. It may turn out to work better once built into Asus & HP laptops eventually. It is a nice device though.
  10. phil - 3D scanning is at least a year away for this device, unless some magic occurs between now and then. The Leap does NOT currently give access to point cloud data, and there is no RGB camera in the hardware, so texture scanning is not possible. Leap themselves are unwilling to pursue that at present and open up the SDK for it, because they want to perfect the hand interaction thing for everyone first. They realize 3D scanning is a possibility down the line, but my guess from where their devs are positioning things is that they will do it with a different device later, after the Leap Motion has been out for a while. Don't bet on this tiny device being a scanner. It has amazing precision and very, very fast responsiveness (I can move my hands super fast like mental and it doesn't stop tracking the motion and keeps the precision to the mm), but there's no RGB texture camera, so that limits scanning.
      Your best bet for a low-cost scanning solution is for someone to take the latest Microsoft SDK for the Kinect, with its Kinect Fusion scanning, and turn that into a product (as opposed to just the current tech demos out there now) - which may have happened in the last month, but I don't know of one yet. Though I suspect Kinect 2 will hit before Christmas and may have even better functionality than the Kinect 1. For scanning right now, the Leap can't do it - unless these guys have reverse engineered something..?
      Ajz3d - you're going to have to change that avatar back to your old one, this one is freaking me out too much - but I don't know what you mean about pressure detection? It is just motion detection using infrared depth cameras - no pressure or touching or physical feedback. That being said, for hands it will be a great device. My dev device works really nicely, and the latest SDKs have nice gesture detection built in, so pokes, swipes, circles etc. are now all built into the driver. It's good hardware - the software needs work, which is why the delay. They also want to release it with rock solid OS integration, which is what they're working on.
      I also wouldn't worry too much about preordering with these guys. (It won't be like the Lightwave preorder, where it was like 2+ years or something before they finally released, lol.) Their dev guys are really good, the customer service I've dealt with has been impeccable, they are really trying hard to please developers, and they are doing well launching internationally (with many int'l devs treated just as well as the ones stateside). The hardware I've received (though older than what they are currently going to release) seems well built and worth the money. The big question for whether this takes off is if there is enough good software and useful stuff (that is not just gimmicky) that works, and if anyone can just rock up to the device and "get it". That's still a tricky thing, as calibration issues and just plain interaction design will be the hard part. Some of the tech demos and games I've seen seem useless after the first 5 minutes of fun, but I have seen JavaScript addons which enable it to work on websites, which has been surprisingly useful. Hard to see where this will all go, but they've got a good shot at revolutionizing things...
  11. Btw, I see sculpting in air as a toy and not the real use case of it. That's a cool tech demo, but probably not practical. A lot of devs in the Leap forums are forgetting UX stuff and things like arm fatigue. It might be fine for 5 minutes, but you don't want to work 8 hours like Tom Cruise in Minority Report. I'm aiming for more subtle, smaller gestures like finger flicks and catches, to make it work in conjunction with a Wacom tablet or mouse. This magical holodeck sculpting idea is not practical in real use (maybe as a toy, but not for pro work). Physical feedback is important, and most people don't have the body awareness to know exactly what they are pointing at. I've had 20 people try to calibrate it by pointing chopsticks at dots on their screen, and almost all of them were way off from what they were really pointing at, and inconsistent between subsequent attempts. Believe it or not, only dancers or people with extreme body awareness (i.e. their arm is where they think it is in space) did well at tasks like this. All this makes building a single interface for all people with complex gestures very tricky - your app has to compensate for what people *think* they are doing (see the calibration-correction sketch after the post list). It is not as simple as writing an app and hooking it up. This is a whole new interaction paradigm.
  12. Can't really say too much because of the dev agreement. But I'm having a bit of trouble with the hardware I have. The last two SDK updates have improved things, but hopefully the final hardware will be much better. There are still software issues to work out in the drivers and APIs. It is an amazing device, but I'm having trouble making bulletproof things for the masses. And I do see big things for it (they have deals with Asus & HP, so expect to see it built into laptops in the future), but the delay until July-ish (it was supposed to be out in May - though I've had this dev device since Jan) and my experience say it's not ready for prime time yet - i.e. granny or a five-year-old won't find it intuitive yet. I do need to spend more time working on the plugin (as opposed to work that pays the bills), but I was waiting for improvements in the SDK to make things easier for me...
  13. I'll +1 this too. I've found when creating textures for games that get modified in-game (for colorization or personalization of the characters), the padding problems cause me issues with scaling and mipmapping etc. Being able to control this from 3DCoat, and also when packing (to ensure a minimum amount of space between islands), is important to me. Check out this link; it was useful to me for working around this issue in Photoshop after 3DC exported the textures: http://wiki.polycount.com/EdgePadding (see the edge padding sketch after the post list).
  14. If it's a good class that is not too specific to the instructor's project, I would be interested as well. Even though I've been using it for a while, I'm sure there are a few holes in my knowledge that would benefit from something like this. The online class would also need to be at least a little interactive in some way, allowing submitted questions to be answered...
  15. This may be a dumb idea, but since the level of coding/innovation required now is orders of magnitude bigger than when 3DCoat started, and likely too big for one man regardless of his level of genius, perhaps a different approach is required to get to the next level. One option could be a Kickstarter to raise the funds to give Andrew the development resources/staff & management he needs to tackle issues such as these. Has this sort of thing been considered before? I really don't like it when users speculate about a company's business objectives etc. in forums, but in this day of crowd funding, maybe this sort of thing could be useful?
  16. Does this mean "No Center Snap" is broken in 15A? In the past, for me, symmetry was only on the bounding box, not on the actual X = 0, when this was not checked on import or whatever. I wish this setting was on by default. But you guys may be talking about a different symmetry issue?
  17. Since we seem to be discussing the renderer, I'd just mention that for me it is not a vital part of 3DCoat. As I render either in realtime engines for games, or use LW or other rendering engines for real renders, I'm not particularly interested in stills only - I tend to care only about renderers that let me create moving, animated stuff with other effects (like physics), and that render passes tied into a compositing workflow (when it's a render and not realtime). Therefore, other than for WIP demos, 3DCoat's renderer is superfluous to my needs. To me it's kind of like Poser Pro's renderer, in that it kind of needs to be there for integrated testing, but I hardly ever (read: never) use Poser's renderer for output.
      Please don't consider this a flame war or a conflict - we are all speculating on 3DC's future anyway - but the things 3DC does better than anything else (or equal to, for a much more reasonable price) are namely: UV mapping, retopo, painting (and for most people here it is a contender for sculpting too). For those tasks it is really amazing to me (though there seems to be a debate here about the sculpting improvements needed). Rendering (and to some degree the Tweaking room) is anemic to me, and I personally can't see investing too much in improving rendering. I mean, am I ever going to render animated, rigged characters with physics, or architectural stuff, or anything else moving, in 3DC? Probably not. For stills, art photos and print I can see that need (and it is a professional need) - it's just not my need and currently not where 3DC's strength lies. 3DC is a vital tool in the arsenal of my workflow, but I can't see it being the sole tool. Just my 2 cents...
  18. Thanks, I appreciate the tip, but if I did that, then I'm converting a poly model to voxels, which gives me the headache of having to retopo the new voxel model and also texture bake it. In this case, I don't want to add extra time-consuming steps to my workflow. If I'm in polys, I want to stay in polys (as in this case I might just be using 3DC for painting or UVs and don't want to re-jig everything - especially if I've already externally weighted the object or whatever)... While this may be the case for many sculptors here (using 3DC primarily as a sculpting tool, or as the sole tool to create content), I find that because it plays nice with a lot of packages (like LW), you should be able to fit 3DC's power in wherever you want at any time in your workflow. If something I model and rig in LW or Modo or whatever needs UVs or texture painting, I can pull it in, do what I need, and pop it out. Not everybody creates every single thing exclusively in 3DC - especially if you are talking about dealing with existing assets. That being said, I will get more into sculpting in 3DC when a project demands it, and then all these voxel-only tips will finally come in handy ;-)
  19. Thank you for clarifying; unfortunately, half of the time I'm not using voxels. A lot of the time I'm using 3DCoat's power on externally created models. I guess sculpting is the primary use of 3DC (despite the rest of the app being very strong), so I find most answers I get are from the sculpting perspective...
  20. I'd really like to get into 3D printing, but the printers seem soooo fiddly and time-consuming. I'm not sure I have the patience or time to wait 10 hours for a print and then clean up the machine, only to find the thing I printed breaks. I'd rather go to a local printer (assuming the OfficeMax ones will be any good) or mail-order until they are more mature. I'd rather spend a few grand and not have to deal with cleanup, long prints, or fiddly adjustments - more money for less pain would be welcome. Do you know of a really stable, fast one that isn't a time-sink?
  21. Did you download the texture? It's 4K - or do you mean that the resolution within the repeating pattern is too small? Assuming the texture were bigger, how do you apply this sort of thing easily and seamlessly over an irregular surface without moiré patterns appearing? The resolution may contribute to it, but I don't think the res of the depth texture is what is causing the moiré - is it?
  22. I fully agree. Currently, importing an LWO with skelegons and then exporting it strips the skelegons from the file, so you end up having to keep an older version saved and then combine the two if, say, you want to use 3DC for making UVs and then go back to LW for more modeling etc. What is in the model should not be altered unless you alter it in 3DC. But the import fixes did help (not sure if they broke anything else though).
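
Post 8 describes painting by pushing the fingertips toward the screen and lifting the brush by pulling them back. The snippet below is a minimal sketch of that depth-to-brush mapping, assuming a hypothetical tracker that reports a fingertip position in millimetres relative to a virtual canvas plane; the names (brush_state, ENGAGE_Z_MM) are invented for illustration and are not part of the Leap SDK or 3DCoat.

    # Hypothetical constants: distance (mm) from the virtual canvas plane at which
    # the brush engages, and at which it reaches full pressure.
    ENGAGE_Z_MM = 40.0
    FULL_PRESSURE_Z_MM = 0.0

    def brush_state(fingertip_z_mm):
        """Map fingertip depth in front of the canvas plane to (is_down, pressure 0..1)."""
        if fingertip_z_mm > ENGAGE_Z_MM:
            return False, 0.0  # fingers pulled back: brush lifted off the canvas
        span = ENGAGE_Z_MM - FULL_PRESSURE_Z_MM
        pressure = min(1.0, max(0.0, (ENGAGE_Z_MM - fingertip_z_mm) / span))
        return True, pressure  # pushed in: brush down, deeper push = more pressure

    # Example: a fingertip 10 mm in front of the canvas plane
    print(brush_state(10.0))  # -> (True, 0.75)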
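
Post 11 argues that an app has to compensate for where people *think* they are pointing. One minimal sketch of such a correction, under the assumption that you collect a few calibration samples (the raw point the tracker measured vs. the on-screen dot the user was aiming at) and fit a least-squares affine correction with NumPy; the sample coordinates below are made up for illustration.

    import numpy as np

    # (x, y) the tracker measured while the user aimed at each calibration dot
    measured = np.array([[110.0,  95.0], [520.0, 110.0], [500.0, 390.0], [130.0, 410.0]])
    # (x, y) screen positions of the dots they were actually aiming at
    intended = np.array([[100.0, 100.0], [500.0, 100.0], [500.0, 400.0], [100.0, 400.0]])

    # Solve intended ~= [measured | 1] @ A for a 3x2 affine correction matrix A
    ones = np.ones((len(measured), 1))
    A, *_ = np.linalg.lstsq(np.hstack([measured, ones]), intended, rcond=None)

    def correct(x, y):
        """Apply the fitted per-user correction to a raw pointing sample."""
        return np.array([x, y, 1.0]) @ A

    print(correct(300.0, 250.0))  # raw reading pulled toward where the user meant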
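
Post 13 refers to edge padding between UV islands. Below is a minimal sketch of the usual dilation trick, assuming an (H, W, 4) uint8 texture whose empty texels have alpha 0: it bleeds each island's border colour a few texels into the empty space so scaling and mipmapping don't average island colours with the background. It uses NumPy/SciPy and is only an illustration of the idea, not how 3DCoat or Photoshop implement it.

    import numpy as np
    from scipy import ndimage

    def pad_edges(rgba, pixels=8):
        """Bleed each UV island's border colour `pixels` texels into empty space."""
        out = rgba.copy()
        alpha = out[..., 3].copy()
        filled = alpha > 0                               # texels belonging to a UV island
        # Distance from every texel to the nearest island texel, plus that texel's index
        dist, idx = ndimage.distance_transform_edt(~filled, return_indices=True)
        bleed = (~filled) & (dist <= pixels)             # empty texels close enough to an island
        out[bleed] = out[idx[0][bleed], idx[1][bleed]]   # copy the nearest island colour outward
        out[..., 3] = alpha                              # keep the original coverage/alpha mask
        return out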