3DCoat Forums

L'Ancien Regime

Everything posted by L'Ancien Regime

  1. FreeCAD is BREP modeling, MOI is NURBS modeling. The two are related but different. If you were going to design a car's aerodynamic body, you'd use NURBS modeling. If you were going to design a car's brake system or engine, you'd use BREP. You can do some pretty fine work in MoI...
  2. http://www.cgchannel.com/2018/04/check-out-kozinarium-a-procedural-cg-creature-generator/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+cgchannel%2FnHpU+(CG+Channel+-+Entertainment+Production+Art) CG artist Denis Kozlov has posted a crazily compelling demo video of Kozinarium, a procedural creature generation system based on Houdini and Fusion. The tool isn’t publicly available, but it’s a neat example of what can be achieved with automated systems – and a lot of lateral thinking. Create convincing creepy-crawlies, curtailing conscious control According to Kozlov, it takes 30 minutes to model, rig and animate a creature in Kozinarium, with results ranging from things that look vaguely like fish, worms or insects to things that look like nothing on Earth. The guts of the system consist of “about 1,700” Houdini nodes, with the user simply entering numbers to generate random seeds, and Kozinarium outputting a new creature or animation based on the results. The core modules for generating body shape and motion are CHOP-based generators. Intermediate geometry is generated as both polygons and NURBS, and the final meshing is based on VDB volumes. The system uses “flexible, marionette-like rigging” with Houdini’s FEM solver generating realistically squishy secondary motion, and the results are rendered in Mantra with procedural displacement. Surface colours are also generated procedurally: this time in Fusion as a post process. https://www.the-working-man.org/2018/04/procedural-bestiary-and-next-generation.html Then finally comes the implementation stage. While the work definitely requires much skill and deserves proper recognition in its own right, it’s not the primary focus of this piece and is covered in many other sources. My main choice is Houdini for 3D work and Fusion for 2D (I find both tools just absolutely fantastic). 
C++ is arguably the most versatile, yet quite low-level solution; Java and Processing seem to be popular within the procedural circles; Python is an industry standard in commercial CGI. This is the process in a nutshell. It can be boiled down to a basic analysis-synthesis-implementation chain, but does require certain expertise. Erudition is important, but pattern recognition and the ability to see/translate between the structural, the verbal and the visual are probably key. The rewarding part is that once the system is in place, life usually becomes notably easier. Of course each new project involves tons of research, but with experience comes the vision, seeing through patterns and approaches. And eventually pretty much anything can be expressed. Future Tools Now imagine a system where you can create any visuals or 3D objects by merely describing them. It doesn’t matter whether you do it with words or parametric sliders, existing or totally made-up. What’s important is that you don’t need to understand the technicalities in order to create, unless you want to. This is high-level graphics creation, as opposed to directly manipulating pixels or polygons at the low level.
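The "enter a number, get a creature" workflow described above boils down to deriving every parameter deterministically from one seed. Here's a minimal Python sketch of that idea; it is a toy illustration, not Kozinarium's actual code, and every parameter name is invented:

```python
import random

def generate_creature(seed: int) -> dict:
    """Derive a full set of creature parameters from a single random seed.

    A toy stand-in for a seed-driven procedural generator: the user supplies
    only an integer, and every attribute follows deterministically from it.
    All parameter names here are hypothetical.
    """
    rng = random.Random(seed)  # a dedicated RNG so the seed fully determines the result
    return {
        "body_segments": rng.randint(3, 40),
        "limb_pairs": rng.randint(0, 8),
        "body_taper": round(rng.uniform(0.2, 1.0), 3),
        "undulation_hz": round(rng.uniform(0.1, 3.0), 3),
    }

# Reproducibility is the whole point: a creature *is* its seed.
assert generate_creature(1701) == generate_creature(1701)
```

In a real system each of these numbers would drive a network of nodes (Houdini, in Kozinarium's case) rather than fill a dictionary, but the principle is the same: one integer in, a deterministic creature out.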
  3. http://www.cgchannel.com/2018/04/foundry-unveils-kanova-a-volumetric-sculpting-tool-for-vr/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+cgchannel%2FnHpU+(CG+Channel+-+Entertainment+Production+Art)
  4. https://www.amazon.com/Figure-Sculpting-Planes-Construction-Techniques/dp/0975506587/ref=pd_sim_14_3?_encoding=UTF8&pd_rd_i=0975506587&pd_rd_r=WM4ZW4TKTJH0F5Z8H3Y0&pd_rd_w=pGeRP&pd_rd_wg=peHbh&psc=1&refRID=WM4ZW4TKTJH0F5Z8H3Y0 https://www.amazon.com/Portrait-Sculpting-Anatomy-Expressions-Clay/dp/0975506501/ref=pd_bxgy_14_img_2?_encoding=UTF8&pd_rd_i=0975506501&pd_rd_r=S2MZPG4FR3NW6Z2JPVNG&pd_rd_w=qKK9I&pd_rd_wg=qlKNO&psc=1&refRID=S2MZPG4FR3NW6Z2JPVNG https://www.amazon.com/Mastering-Portraiture-Advanced-Analyses-Sculpted/dp/0975506560/ref=pd_bxgy_14_img_3?_encoding=UTF8&pd_rd_i=0975506560&pd_rd_r=QBPYCRK3X4806MYVEC7F&pd_rd_w=ixALe&pd_rd_wg=vV4px&psc=1&refRID=QBPYCRK3X4806MYVEC7F
  5. Aha...this is the exact same solution that Modo used for its autoretopo. It works very well; I've tried it as a standalone, and within Modo on the same model, and got exactly the same results, polygon for polygon. And I never found one ngon...just pure quads. But having said that, I'm still strongly of the opinion that ZBrush has the best autoretopo in the business by far, though kudos to Andrew for being the first, the guy that started the autoretopo movement, and he did it with impressive panache too. But Pixologic just had more money and could hire the big guns.
  6. I'd rather spend that money on a real CNC machine, one that mills real steel and aluminum, etc. https://www.ebay.com/itm/Bolton-Tools-CNC-Milling-Machining-XQK9630S-GSK-Controller-Free-Shipping-/253270984266 https://www.ebay.com/itm/PM-727-M-VERTICAL-BENCH-TOP-MILLING-MACHINE-3-AXIS-DRO-INSTALLED-FREE-SHIPPING/332554933892?_trkparms=aid%3D222007%26algo%3DSIM.MBE%26ao%3D2%26asc%3D50527%26meid%3D5a5b1e3c4408432c88e1cbc5c93ac167%26pid%3D100005%26rk%3D6%26rkt%3D6%26sd%3D382213467946%26itm%3D332554933892&_trksid=p2047675.c100005.m1851 Cutting cardboard with a laser is a waste of time and money. Frankly, if you're not specifically into making plastic prototypes and casts for vacuum moulding of plastic, then 3D printing is a waste of money too.
  7. excellent pdf download. thx
  8. http://www.cgchannel.com/2018/01/download-free-tree-generation-tool-tree-it/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+cgchannel%2FnHpU+(CG+Channel+-+Entertainment+Production+Art)
  9. AMD...AMD...AMD Intel can spy on somebody else. Not paying for NSA backdoors.
  10. It uses its own proprietary software, Presto https://www.digitaltrends.com/computing/pixar-shows-software-at-gtc-2016/ I have no problem admitting I’m a huge Pixar nerd. From Toy Story to Finding Nemo, it’s hard not to be enchanted by the creative storytelling and compelling characters, and most of all, the unique Pixar look. Some of the most complex frames in Finding Dory took over 600 hours to render. The team at Pixar doesn’t just color and animate though, and the tech side there is constantly searching for new ways to improve the work others are doing. That’s led to Presto, a program built for Pixar in cooperation with Maya, as well as a library of real-time rendering and modeling tools. At Nvidia’s GPU Technology conference three Pixar employees — graphics software engineer Pol Jeremias, lead software engineer Jeremy Cowles, and software engineer Dirk Van Gelder — explained how movie making led to software creation, with some appearances from favorite Pixar characters thrown in for good measure. A unique challenge As you might imagine, Pixar’s cutting-edge 3D animation demands impressive hardware. Part of the challenge specific to Pixar is that most machines are built for speed, not beauty. That’s why the company built its own systems purpose-built for movie making. The standard machine at Pixar is powered by a 2.3GHz, 16-core Intel processor with 64GB of RAM, and a 12GB Nvidia Quadro M6000. If the team needs a little more oomph, there’s a dual-CPU configuration with two of the 16-core chips, a pair of M6000s, and 128GB of RAM. And even those machines are pushed to their limit during an active work day. There are over 100 billion triangles in a small shot, more than even the fastest gaming desktop could handle. Mater, from Cars, is made up of over 800 meshes, and almost all of them are deformed in some way. 
Add to that the schools of fish in Finding Nemo, or the swarms of robots in Wall-E, and the need to develop software in-house only becomes more pressing. Presto At the heart of Pixar’s software suite is the reclusive, proprietary Presto. The modeling software, built in cooperation with Maya, is responsible for everything from scene layout, to animation, to rigging, to even simulating physics and environments. Pixar doesn’t show it off in public often. Fortunately, during the presentation at GTC, we were treated to a live demo. A lot of Pixar’s articulation, animation, effects, and subdivision happens in real time. Presto’s interface might look familiar to anyone who has spent time in 3D modeling applications like Maya or 3ds Max, but it has workflow innovations that help artists in different parts of the process stay focused on their work, and not have to deal with unnecessary information. At the same time, animators and riggers can find an extensive amount of data relevant to their particular role, and multiple methods of articulating parts of the mesh. The models for characters aren’t just individual pieces. Grabbing Woody’s foot and moving it up and down also articulates his other joints, and the fabric in surrounding areas. As a long-time Pixar fan, I couldn’t immediately point out any artifacts or graphical oddities in the live demo. It helps that it was just Woody and Buzz on a gray background, but textures were sharp, animation was clean, and reflections were accurate and realistic. Even a close-up focused on Woody’s badge looked spot-on. And it all happened in real time. Harnessing collaborative power One of Presto’s early limitations was its inability to handle collaborative work, so Pixar set out to bring the functionality into its workflow. The result is Universal Scene Description, or USD. This collaborative interface allows many Pixar artists to work on the same scene or model, but on different layers, without stepping on each other’s feet. 
By managing each aspect of the scene individually — the background, the rigging, the shading, and more — an animator can work on a scene while an artist is touching up the characters’ look, and those changes will be reflected in renders across the board. Instead of frames, scenes are described in terms of layers and references, a much more modular approach to traditional 3D modeling. USD was first deployed at Pixar in the production of the upcoming film Finding Dory, and quickly became an integral part of the workflow. Its success hasn’t been limited to Pixar, and programs like Maya and Katana are already integrating USD. Assets in these programs can be moved and copied freely, but that’s not all there is to the story. Van Gelder showed how Pixar is taking USD a step further with a new program called USDView. It’s meant for quick debugging and general staging, but even that’s becoming increasingly sophisticated. In a demo, USDView opened a short scene with 52 million polygons from Finding Dory in just seconds on a mobile workstation. In fact, Van Gelder did it several times just to stress how snappy the software is. It’s not just a quick preview, either. There’s a limited set of controls for playback and camera movement, but it’s a great way for artists to get an idea of the blocking or staging of a scene without needing to launch it in Presto. USD, with USDView built in, will launch as open-source software this summer. It will initially be available for Linux, but Pixar hopes to release it for Windows and Mac OS X later on. Multiplying polygons One of the main methods of refining 3D models is subdivision. By continually breaking down and redefining polygons, the complexity of the render increases — but so does the accuracy and level of detail. In video games, there’s a limit to how far subdivision can go before it hurts performance. In Pixar’s movies, though, the sky’s the limit. 
To offer an example of how far subdivision could go, Jeremias showed a simple 48-polygon mesh. The next image showed the mesh after a round of subdivision, looking much cleaner, and sporting 384 polygons. After another round, the shape had smoothed out completely, but the cost was a mesh with over 1.5 million polygons. Jeremias noted that these subdivisions are most noticeable at contact points between two models, and especially on a character’s fingertips. Pixar relies on subdivision so much that the company built its own subdivision engine, OpenSubdiv. It’s based on Pixar’s original RenderMan libraries, but features a much broader API. It’s designed with USD in mind, as well, for easy integration into the workflow. Summoning the Hydra If you want to see how those elements are adding up without having to render a final scene, Hydra is the answer. It’s Pixar’s real-time rendering engine, built on top of OpenGL 4.4. Importantly, it’s built specifically for feature-length film production, and it’s built for speed. It’s not an end-all, be-all solution for final rendering, but it can help bring together a lot of effects and details for a more accurate representation of what a scene will look like than USDView can provide. It also supports features like hardware tessellated curves, highlighting, and hardware instance management. Even other effects and media companies have been working with Pixar to integrate Hydra into their workflow. Industrial Light and Magic, the special effects company behind the Star Wars films, has built a hybrid version of its software that’s built around Pixar’s technology. In the case of the Millennium Falcon, that means 14,500 meshes and 140 textures at 8K each — no small feat, even for extreme workstations. It’s not just about creating the models and animating them, however. 
A huge part of setting the mood and polishing a film involves post-processing effects. The artists and developers at Pixar wanted an equally intuitive and streamlined process for adding and managing effects. And there are quite a few to manage. Cowles showed off a list of post-processing effects that wouldn’t look out of place in Crysis’ graphics settings. That includes ambient occlusion, depth of field, soft shadows, motion blur, a handful of lighting effects, and masks and filters in a variety of flavors. When you look closely at a rendering of an underwater scene with Dory and Nemo, the cumulative impact of these extras adds up quickly. Real time, a recent development Today, a lot of Pixar’s articulation, animation, effects, and subdivision happens in real time. That wasn’t always the case. Van Gelder showed this by turning off the features that are now possible to instantly preview using the modern tool set. Shadows were gone, major details like pupils and markings disappeared, and all but the most basic color blocking vanished. That example drove home the massive scale of each scene in these movies. The complexity of just a small scene far outweighs even the most advanced video games, and the payoff is immense. Even with all of that impressive hardware and purpose-built software, some of the most complex frames in Finding Dory took over 600 hours to finish. It’s a cost that companies like Pixar have to consider in the budget for a film, but in-house, purpose-built software helps streamline the important areas.
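The subdivision arithmetic in the article is easy to reproduce: on an all-quad mesh, each uniform Catmull-Clark-style step splits every face into four, so face counts grow geometrically. A quick sketch (the exact counts quoted in the talk differ slightly, since real meshes mix quads with other polygons):

```python
def faces_after(base_faces: int, levels: int) -> int:
    """Face count of an all-quad mesh after `levels` uniform subdivisions.

    Each step splits every quad into four faces, so growth is 4**levels.
    """
    return base_faces * 4 ** levels

# A 48-face cage explodes fast under uniform subdivision:
for level in range(6):
    print(level, faces_after(48, level))
# ...and passes a million faces by level 8, which is why film pipelines
# need a dedicated subdivision engine rather than brute force.
```

This is why games cap subdivision levels while film renderers like Pixar's can keep refining until surfaces are effectively smooth.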
  11. http://ca.ign.com/articles/2017/12/11/kojima-explains-death-stranding-gameplay-and-lore KOJIMA EXPLAINS DEATH STRANDING GAMEPLAY AND LORE Hideo Kojima on what happens when your character dies, "Timefall," and that ubiquitous baby. BY MARTY SLIVA “I’m trying to make something different. How to show that, how to have people see that, that’s something I’m trying to figure out.” As usual, Hideo Kojima was calm and collected as I spoke to him just hours after the newest look at Death Stranding at the 2017 Game Awards. I’ve spoken with him several times throughout the past few years, from the forging of his partnership with Sony and his decision to use the Decima Engine, to the opening of his new studio deep in the heart of Shinagawa. But this time, something different happened. This time, I walked away from my time with Kojima with some actual, tangible details about the gameplay mechanics, philosophy, and lore of Death Stranding. “Games started over 40 years ago with arcades. When the player dies, it’s game over. You continue, and time goes back to before you die. You can die as many times as you want, but you always go back to a little bit before you die. That was a mechanic made specifically for putting in coins, and it hasn’t changed since then.” As Kojima spoke of the way a vast majority of games treat the concepts of life, death, and mortality, it was clear that Death Stranding was aiming to eschew this tradition. “One of the themes of this game is life and death. So I want people to realize that when they die in the game, that isn’t the end.” Partway through the latest clip from Death Stranding, Sam (played by Norman Reedus) watches as an explosion engulfs the giant, Lovecraftian kaiju that projects through the fog. Suddenly we see an upside-down world submerged in water, where Sam exists among the flotsam and jetsam of the universe around him. 
From the sound of it, this is where Sam, and the player, go every time they die. But don’t make the mistake of calling this “game over.” When you die in Death Stranding, you’re transported to this purgatory, where you’re free to explore in first-person. Because of some mysterious “unique” abilities Sam possesses, you can wander outside of your body, recovering items among other things. As Kojima explains, “At that point, you’re not dead or alive. It’s the equivalent of that screen that says ‘Continue?’ and a counter ticking down towards zero.”
  12. https://wccftech.com/nvidia-titan-v-volta-gaming-benchmarks/
  13. In case you're not acquainted with this incredible new game in development, here are the previous two trailers. An analysis...
  14. Isn't the entire Metro franchise an offshoot of S.T.A.L.K.E.R.? Didn't Andrew work on STALKER? Correct me if I'm wrong.
  15. https://wccftech.com/nvidia-titan-v-volta-gpu-hbm2-announcement/ Announced by NVIDIA founder and CEO Jensen Huang at the annual NIPS conference, TITAN V excels at computational processing for scientific simulation. Its 21.1 billion transistors deliver 110 teraflops of raw horsepower, 9x that of its predecessor, and extreme energy efficiency. “Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.” via NVIDIA Not only are you getting the awesome new Volta “GV100” GPU architecture; buyers also get 12 GB of HBM2 memory. Yup, this is the first TITAN graphics card and also the first NVIDIA line of graphics cards (non-Quadro / non-Tesla) to feature HBM2 memory. The NVIDIA TITAN V is based on the GV100 GPU architecture and features a total of 5120 CUDA cores and 320 texture units. This is the exact same number of cores featured on the Tesla V100. In addition to the regular cores, the card also packs 640 Tensor Cores inside the Volta GPU. These are geared for maximum deep learning performance as the card can crunch up to 110 TFLOPs of GPU performance for AI-related algorithms. The entirety of the core is clocked at 1200 MHz base and 1455 MHz boost. Even with such hefty specs, the card only requires an 8-pin and 6-pin power connector configuration to boot and comes in a 250W package. So, coming to the HBM2 VRAM: yes, there’s 12 GB of that on board the graphics card, and it comes with a data rate of 1.7 Gbps along a 3072-bit memory bus. This gives the card a total bandwidth of 652.8 GB/s, which is way faster than the previous TITAN Xp. 
Compared to the Tesla V100, we are looking at a cut-down bus interface (3072-bit vs the V100's 4096-bit) and also lower VRAM of 12 GB compared to 16 GB on that board. Overall, this graphics card can be used for both professional and regular workloads such as gaming, and it will be interesting to see what kind of punch this card packs.
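The 652.8 GB/s figure follows directly from the quoted specs: peak memory bandwidth is the per-pin data rate times the bus width, divided by eight to convert bits to bytes. A quick check (the TITAN Xp comparison line uses its roughly 11.4 Gbps GDDR5X on a 384-bit bus):

```python
def mem_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s from per-pin data rate and bus width."""
    return data_rate_gbps * bus_width_bits / 8  # 8 bits per byte

# TITAN V: 1.7 Gbps HBM2 on a 3072-bit bus
print(round(mem_bandwidth_gbs(1.7, 3072), 1))   # 652.8

# TITAN Xp for comparison: ~11.4 Gbps GDDR5X on a 384-bit bus -> ~547 GB/s,
# which is why the article calls the TITAN V "way faster"
print(round(mem_bandwidth_gbs(11.4, 384), 1))
```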
  16. I was just sitting here thinking about you and decided to log in here and see what was up with you. I hope everything has worked out. We'll talk when there's time.
  17. I left this out: https://www.pcworld.com/article/2899351/everything-you-need-to-know-about-nvme.html
  18. A couple weeks ago I became aware of M.2 drives, which have their own bus to the CPU. Here's a 2TB Samsung model you can get for around $1200: PCI Express 3.0 M.2 interface, read/write transfer rates of 3500MB/s / 2100MB/s. Then I was over at WCCF looking at their computer builder app and saw that they had this matched with this motherboard, which is exclusively built for the 16-core AMD Threadripper: https://www.amazon.com/ROG-ZENITH-EXTREME-Threadripper-Motherboard/dp/B0748K1F99 It takes up to three of these 2 TB drives. "Powered by AMD Ryzen Threadripper TR4 processors to maximize connectivity and speed with support for up to 128GB of DDR4 memory, three (3) NVMe M.2 drives, front side U.2 and front panel USB 3.1 Gen2 port. Connect With Unparalleled Speed: ROG Zenith Extreme's DIMM.2 module is a bundled expansion card that allows two M.2 drives to be connected via a DDR4 interface. It is also equipped with an M.2 heatsink integrated into the PCH heatsink. With a huge cooling surface, the heatsink perfectly chills an inserted M.2 SSD." Question: Can this board take 2 Samsung 960 Pro M.2 SSDs? I can't tell from the photos. Wanting to run them in RAID 0 for the operating system and games. Answer: Can use 3 M.2 natively. NVMe RAID boot will be supported in upcoming BIOS update (Fall17). By GoodwinAJ on September 2, 2017 Now I don't pretend to be any expert on computers. I'm an artist who loves computers, not a computer guy. But this piqued my curiosity; NVMe RAID boot? What the hell is that? So I went looking and found this: "Serial ATA and Serial Attached SCSI (SAS) offer plenty of bandwidth for hard drives, but for increasingly speedy SSDs, they’ve run out of steam. Because of SATA’s 600MBps ceiling, just about any top-flight SATA SSD will score the same in our testing these days—around 500MBps. Even 12Gbps SAS SSD performance stalls at around 1.5GBps. SSD technology is capable of much more. The industry knew this impasse was coming from the get-go. 
SSDs have far more in common with fast system memory than with the slow hard drives they emulate. It was simply more convenient to use the existing PC storage infrastructure, putting SSDs on relatively slow (compared to memory) SATA and SAS. For a long time this was fine, as it took a while for SSDs to ramp up in speed. Those days are long gone." https://www.techpowerup.com/236644/amd-to-enable-nvme-raid-on-x399-threadripper-platform When AMD's Ryzen Threadripper HEDT platform launched earlier this year, a shortcoming was its lack of NVMe RAID support. While you could build soft-RAID arrays using NVMe drives, you couldn't boot from them. AMD is addressing this by adding support for NVMe RAID through a software update, scheduled for 25th September. This software update is in the form of both a driver update (including a lightweight F6-install driver), and a motherboard BIOS update, letting AMD X399 chipset motherboards boot from RAID 0, RAID 1, and RAID 10 arrays made up of up to ten NVMe drives. AMD confirmed that it has no plans to bring NVMe RAID support to the X370 or B350 platforms. Leveraging existing technology Fortunately, a suitable high-bandwidth bus technology was already in place—PCI Express, or PCIe. PCIe is the underlying data transport layer for graphics and other add-in cards, as well as Thunderbolt. PCIe 2.x (Gen 2) offers approximately 500MBps per lane, and version 3.x (Gen 3), around 985MBps per lane. Put a card in a x4 (four-lane) slot and you’ve got 2GBps of bandwidth with Gen 2 and nearly 4GBps with Gen 3. That’s a vast improvement, and in the latter case, a wide enough pipe for today’s fastest SSDs. PCIe expansion card solutions such as OCZ’s RevoDrive, Kingston’s HyperX Predator M.2/PCIe, Plextor’s M6e and others have been available for some time now, but to date, they have relied on the SCSI or SATA protocols with their straight-line hard drive methodologies. Obviously, a new approach was required. 
One of the best things about NVM Express is that you don’t have to worry about drivers showing up. Linux has had NVMe support since kernel version 3.3; Windows 8.1 and Server 2012 R2 both include a native driver, and there’s a FreeBSD driver in the works. When Apple decides to support NVMe, the latter should make it easy to port. However, BIOS support is largely lacking. Without an NVMe-aware BIOS, you can’t boot from an NVMe drive, though anyone with a x4 PCIe slot or M.2 connector can benefit from employing an NVMe drive as secondary storage. An NVMe BIOS is not a difficult technical hurdle, but it does require engineering hours and money, so it’s unlikely it will stretch far back into the legacy pool. Equally daunting for early adopters is the connection conundrum. Early on, you’ll see a lot of expansion card NVMe drives using Gen 3 PCIe slots. That's because all 2.5-inch NVMe SSDs use the new SFF-8639 (Small Form Factor) connector that’s been specially developed for NVMe and SATA Express, but is currently found only on high-end servers. An SFF-8639 connection features four Gen 3 PCIe lanes, two SATA ports, plus sideband channels and both 3.3-volt and 12-volt power. There are adapters and cables that allow you to connect 2.5-inch NVMe SSDs to M.2, but as M.2 lacks a 12-volt rail, the adapters draw juice from a standard SATA power connector. The real issue with M.2 is that on Intel systems it's generally implemented behind the PCH (Platform Controller Hub), which features only Gen 2 PCIe. That's because the PCH lies behind the DMI (Direct Media Interface), which is capped at 2GBps. You can see the problem. Note that NVMe via M.2 isn’t 3.3 times faster than SATA. But if you pay the money, you’re going to want your SSD to be all it can be. At least I would. That means an expansion card drive until SFF-8639 connectors show up on consumer PCs. NVMe SSDs actually showed up last summer with Samsung’s 1.6TB MZ-WEIT10, which shipped in Dell’s $10,000 PowerEdge R920 server. Gulp. 
Intel followed suit with the announcement of its pricey DC P3600 and P3700 series NVMe SSDs, which are available in capacities up to 2TB. The first consumer NVMe drive to show up is Intel’s 750. It’s fast. Read our review. The Current Outlook Enthusiasts will want to take a hard look at Intel’s 750. Most recent high-end motherboards will get firmware upgrades to support NVMe so you can boot from the drive. Most legacy mainstream boards will probably not. But our talks with Intel and other vendors indicate that the flood gates have opened, and you should see a torrent of NVMe support later in the year. I'm sure there are people on this board who know more about this than me; maybe some of you have already implemented this technology, and if you have, please post your experience or knowledge here. I'd welcome it, and I'm sure the other 3DCoat enthusiasts here who are planning new rigs would welcome it too.
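The per-lane figures quoted in the article make the bandwidth argument easy to verify yourself. A rough sketch, using the article's approximate per-lane throughput rather than exact PCIe spec numbers:

```python
# Approximate usable throughput per PCIe lane, as quoted in the article (MB/s).
PER_LANE_MBPS = {"gen2": 500, "gen3": 985}

SATA_CEILING_MBPS = 600  # the ceiling that fast SSDs outgrew

def slot_bandwidth_mbps(gen: str, lanes: int) -> int:
    """Aggregate bandwidth of a PCIe slot with the given number of lanes."""
    return PER_LANE_MBPS[gen] * lanes

# An x4 slot vs the SATA ceiling: ~3.3x for Gen 2, ~6.6x for Gen 3,
# which is why NVMe drives sit on PCIe instead of SATA.
for gen in ("gen2", "gen3"):
    bw = slot_bandwidth_mbps(gen, 4)
    print(f"{gen} x4: {bw} MB/s = {bw / SATA_CEILING_MBPS:.1f}x SATA")
```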
  19. http://zivadynamics.com/character-platform-beta
  20. I find it's not the nodes that are formidable so much as the VEX scripts that the nodes take on as you dig deeper and deeper into them. I'm no programmer, and the tiny scripts in the attribute editor boxes are really hard to follow and even harder to learn.