Vaiktorg — Posted January 18, 2015

I found a pretty neat website of SIGGRAPH project documents about different methods of capturing normals, albedo, specular, and other surface properties. So I was thinking: is it possible to get real-world readings with just a camera, a linear polarizing filter, a black box, and sequential hemispherical lighting covering the full 360 degrees? What I mean is: take pictures with a polarized camera aimed straight down (90 degrees) at the surface, with each picture lit from a different direction, and then recreate all the maps with some editing. Or is it more complicated than that?

http://www.pauldebevec.com/index.html
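(The setup described above — fixed camera, one image per known light direction — is essentially classic Lambertian photometric stereo. Below is a minimal sketch of how the per-pixel solve works; it assumes diffuse reflectance and known unit light directions, ignores shadows and specular highlights, and the function name `photometric_stereo` is just an illustrative choice, not taken from any of the linked papers.)

```python
import numpy as np

def photometric_stereo(images, lights):
    """Recover per-pixel surface normals and albedo from a stack of
    images of the same surface, each lit from a known direction.

    images: (k, h, w) array of grayscale intensities, one per light.
    lights: (k, 3) array of unit light-direction vectors.
    Returns (normals, albedo): (3, h, w) unit normals and (h, w) albedo.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                         # (k, h*w)
    # Lambertian model: I = L @ g, where g = albedo * normal per pixel.
    # Solve for g in the least-squares sense across all pixels at once.
    G, *_ = np.linalg.lstsq(lights, I, rcond=None)    # (3, h*w)
    albedo = np.linalg.norm(G, axis=0)                # length of g = albedo
    normals = G / np.maximum(albedo, 1e-8)            # normalize to unit vectors
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

With three or more non-coplanar light directions the solve is well determined; in practice you would mask out shadowed and clipped pixels before fitting, and the cross-polarized filter mentioned above helps by suppressing the specular component so the Lambertian assumption holds better.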
Carlosan — Posted January 18, 2015

related
Vaiktorg (author) — Posted January 18, 2015

Yeah, I've seen it. I've been looking through a LOT of documents, and pretty much all of them use either lasers, structured light (lines projected onto the model), or some kind of orbital rig.
Vaiktorg (author) — Posted January 18, 2015 (edited)

Here I found an idea of how I believe Quixel's MegaScanner works:

https://udn.epicgames.com/Three/TakingBetterPhotosForTextures.html
http://www.polycount.com/forum/showthread.php?t=136486

Edited January 18, 2015 by Vaiktorg