Photorealism is no longer confined to the entertainment sector. More and more fields that rely on digital images – and on human representation – are seeking expertise to move forward in creating clothing, hair and faces. Syflex, based in the USA, designed software first intended for simulating clothing in animation and, in 2010, opened a company to work with apparel manufacturers and help them in their online business. The ADN firm, for its part, chose to develop a set of technical and legal services to create and manage digital doubles, not only for stunt work but also for close-ups. Based on a three-stage data acquisition, the capture provides a duplicate of a celebrity's face which the celebrity can then control from start to finish. Hair is likewise very difficult to animate properly without complex, developer-oriented tools. Building on research by Inria Grenoble, Neomis Animation offers a Maya plug-in that simplifies the creation of dynamic hairstyles, with numerous variables for either cartoon-like or realistic rendering.
Syflex, clothes, simulation, ADN, digital double, legal protection, Neomis, superhelix, hair, wigs, photorealism
Photorealism is no longer confined to the entertainment sector. More and more fields that rely on digital images – and on human representation – are seeking expertise to move forward in creating clothing, hair and faces.
As such, the Syflex company was founded in the United States in 2002 by Gérard Banel. He introduced the discussion on clothing simulation through a brief historical review: in 1997 the first "convincing" clothing was shown on screen in the Pixar short, Geri's Game.
Soon after, the fashion world picked up on this new potential and in 1998 Thierry Mugler held his much-noticed virtual fashion show. New hurdles were subsequently cleared with the video-game-inspired feature film Final Fantasy.
"Until then, clothes had to be animated manually," as Gérard Banel recalled, "but this was simply impossible for such an ambitious project and so, for the very first time, the production people turned to research and development (R&D) to try and get photorealistic esthetics for the clothing, suited to the graphic elements of the characters."
The R&D department, managed by Gérard Banel and endowed with an ample budget, first worked on tee-shirts, generating their animation automatically to match the movements of the characters wearing them. "Once we got our results, we extended our efforts to other items of clothing." The aim was two-pronged: to improve the realism of a graphic element that was often ill-suited to a character's movement, and to simplify animation through automated simulations. The outcome: the R&D work of Square, the film's producer, made it possible to complete more than one computed scene per animator per day! Unfortunately, not only did the film fare poorly, but this technological breakthrough went almost unnoticed. Square did attempt to repeat the experience with Animatrix, an animation film based on the Wachowski brothers' trilogy, but the company finally had to close down.
With this experience in hand, Gérard Banel then founded Syflex in 2002, building on improved technology and a new simulation algorithm. The software did not require in-depth technical knowledge and was quickly adopted not only for animation but also for advertising and video games (for their cinematics).
Concretely, the Syflex software is built on a system of particles linked together by "springs". These springs, endowed with multiple variables and distinctive features, make it possible to play with the physical properties of a piece of fabric and its dynamics, and to correlate all of this with the movements of the characters wearing it. As the founder of Syflex noted: "The properties are similar to those of real physical fabrics on which we apply stress – force, friction, etc. We then establish collisions, with bodies for example, but also 'self-collisions' (clothing colliding with itself) to obtain realistic animation, via real-time computing. Beyond clothing, the software also manages hair and skin (via a double-tiered system wherein the skin 'slides' over another, lower-tier skin) as well as muscles (with the addition of a third tier)."
The software also allows proxies to be added (buttons, bows, etc.), which connect into the model and the generated simulation.
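To make the idea concrete, here is a minimal sketch of such a particle-and-spring system in Python, purely as an illustration of the general principle and not of Syflex's actual implementation; the grid size, stiffness, damping and time-step values are arbitrary assumptions.

    # Minimal mass-spring cloth sketch (illustrative only, not Syflex's code).
    # Particles are linked by "springs" whose stiffness and damping stand in
    # for the physical properties of the fabric.
    import numpy as np

    n = 10                                    # 10 x 10 grid of particles
    rest = 0.1                                # rest length between neighbours
    stiffness, damping, mass, dt = 500.0, 0.5, 0.01, 1e-3
    gravity = np.array([0.0, -9.81, 0.0])

    # positions and velocities of the n*n particles (a flat square of cloth)
    pos = np.array([[i * rest, 0.0, j * rest] for i in range(n) for j in range(n)])
    vel = np.zeros_like(pos)
    init_pos = pos.copy()
    pinned = (0, n - 1)                       # two corners held in place

    # structural springs between horizontal and vertical neighbours
    springs = [(i * n + j, i * n + j + 1) for i in range(n) for j in range(n - 1)]
    springs += [(i * n + j, (i + 1) * n + j) for i in range(n - 1) for j in range(n)]

    def step():
        """Advance the cloth by one time step (semi-implicit Euler)."""
        force = np.tile(gravity * mass, (n * n, 1))
        for a, b in springs:
            d = pos[b] - pos[a]
            length = np.linalg.norm(d)
            f = stiffness * (length - rest) * d / length      # Hooke's law
            f += damping * (vel[b] - vel[a])                  # spring damping
            force[a] += f
            force[b] -= f
        vel += dt * force / mass
        pos += dt * vel
        for p in pinned:                                      # keep pinned corners fixed
            vel[p] = 0.0
            pos[p] = init_pos[p]

    for _ in range(1000):
        step()

In a production tool, the stiffness, damping and friction values become the user-facing controls mentioned above, and collisions with the character's body take the place of the pinned corners used here.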
In 2010 the company decided to look into other fields of use and contacted the apparel industry. Online sales were booming but a major issue remained: many disappointed customers were returning items of clothing. Syflex therefore started a new company, Embodee, with Hurley as its first customer. The aim was to let online customers try on items virtually by creating a virtual copy of their measurements, starting from a generic mannequin whose parameters of weight, height, waist, bust and so on can be adjusted. In addition to this visualization, the software also provides a so-called comfort map, using colored zones to indicate how close-fitting the clothing is to the body (and any potential constraints). "Very quickly, not only did sales increase, but above all the rate of returned items dropped by 30%." Today, Nike is also on Embodee's customer list.
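Such a comfort map can be pictured as a simple per-vertex computation: measure the clearance between each garment point and the body, then map tight zones to one colour and loose zones to another. The sketch below illustrates that idea with hypothetical data and arbitrary thresholds; it is not Embodee's actual code.

    # Hypothetical comfort-map sketch: colour garment vertices by how close
    # they sit to the body (tight = red, loose = green). Not Embodee's code.
    import numpy as np

    def comfort_map(garment_vertices, body_vertices, tight=0.005, loose=0.03):
        """Return one RGB colour per garment vertex from its clearance to the body."""
        colours = []
        for v in garment_vertices:
            # clearance = distance to the closest body vertex (brute force here)
            clearance = np.min(np.linalg.norm(body_vertices - v, axis=1))
            # normalise clearance into [0, 1]: 0 = tight, 1 = loose
            t = np.clip((clearance - tight) / (loose - tight), 0.0, 1.0)
            colours.append((1.0 - t, t, 0.0))      # red -> green gradient
        return np.array(colours)

    # toy example: a "garment" offset slightly from a "body" point cloud
    body = np.random.normal(scale=0.1, size=(500, 3))
    garment = body[:100] * 1.1
    print(comfort_map(garment, body)[:5])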
Over time, improvements in the technology will increase simulation speed, approaching real time, and will also change how the modeling is done. "We have three prospective approaches: sewing patterns, direct stereoscopic modeling, and direct mannequin dressing." Gérard Banel pointed out that many studios, including Disney, have now hired "pattern-makers" to produce real clothing items for their future productions, following design and fashion rules. This shows that such tools are giving rise to new, increasingly specialized professions.
In 2010, Christian Guillon, a visual effects pioneer, and Cédric Guiard, a software and artificial intelligence engineer with a PhD in mathematics and computer science, founded the Agence de Doublures Numériques (ADN). Their analysis was as follows: thanks to advances in analysis techniques and computer-generated images, indistinguishable close-up stereoscopic reproduction of celebrities is right around the corner. For the time being this remains the prerogative of the major studios because of the costs involved, as in films such as Matrix, Beowulf and The Curious Case of Benjamin Button. For the latter, the Digital Domain studio produced no fewer than 21 representations of Brad Pitt, at a cost of US$1 million each. But this sphere of reproduction should soon open up more widely.
Cédric Guiard also pointed to an ongoing evolution between performance doubles (such as Sigourney Weaver's Na'vi embodiment in Avatar) and appearance doubles (John Lennon or Marilyn Monroe in recent ads). For now, this area is manifestly a legal jungle.
ADN extends these digital-double principles within a proprietary yet open technological environment (for use that is sustainable, generic and interoperable). It also offers a standardized framework so that the digitized celebrity can keep control over his or her double and its uses, whether for movies, ads or video games.
The process takes place in three major phases: establishing a model approved by the actor and protected by a contract (creation); implementation and animation (implementation); and postproduction integration of the double (integration).
On the technology side, the process begins with the capture of a reference picture, chosen by the celebrity, which is used throughout the entire capture process. "We begin with capture under structured lighting, that is, projections of a light grid onto the face. The deformations of this grid give us the depth (the Z) that is later applied to the generic model. Capture is done first in static and then in dynamic mode, because the scanned celebrities are asked to show an entire range of 'physemes', the physical equivalents of phonemes, which can be summed up as the most representative facial expressions, so as to obtain a series with 104 degrees of freedom (and therefore a set of expressions)."
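The principle behind this grid-deformation capture is classic structured-light triangulation: the projector and camera are offset, so how far a projected line shifts in the camera image encodes the depth of the surface at that point. The toy sketch below illustrates that geometry with made-up baseline and focal-length values; it is not ADN's pipeline.

    # Illustrative structured-light triangulation: the shift (disparity) of a
    # projected grid line in the camera image encodes depth. Made-up parameters.
    import numpy as np

    baseline = 0.20      # metres between projector and camera (assumed)
    focal_px = 1400.0    # camera focal length in pixels (assumed)

    def depth_from_disparity(disparity_px):
        """Depth Z along the optical axis for a given pixel shift of the pattern."""
        disparity_px = np.asarray(disparity_px, dtype=float)
        return baseline * focal_px / disparity_px

    # a grid line observed shifted by 350 px on the cheek and 420 px on the nose
    print(depth_from_disparity([350.0, 420.0]))   # the nose is closer, hence the larger shift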
Next, data is acquired on diffuse light, specular light, displacement mapping, subsurface scattering (how a given quantity of light is absorbed and scattered beneath the skin), and more.
The final data acquisition involves reconstructing first the eyes and then the inside of the mouth. "These three acquisition phases take no more than two hours," said Cédric Guiard. "We then incrementally analyze these data to obtain a model for the actor to approve." The model is built over a period of 8 to 10 weeks and results in a digital file of about 30 GB. ADN then carries out a series of performance captures to catch the dynamics through facially positioned markers, "but we do not plaster the animation on top. The reference that we have of the face is what we use as a base."
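In broad terms, a set of approved expressions like this can drive a face as a blendshape rig: each frame of the captured performance supplies a weight per expression, and the posed face is the neutral scan plus a weighted sum of the expression offsets. The sketch below is a generic illustration of that idea, not ADN's solver; apart from the 104 expressions quoted in the talk, every value in it is assumed.

    # Generic blendshape illustration: a face mesh as the neutral scan plus a
    # weighted sum of expression offsets ("physemes"). Not ADN's actual solver.
    import numpy as np

    n_vertices = 5000        # assumed mesh resolution
    n_expressions = 104      # degrees of freedom quoted in the talk

    neutral = np.random.rand(n_vertices, 3)                       # neutral scan
    deltas = np.random.rand(n_expressions, n_vertices, 3) * 0.01  # per-expression offsets

    def pose_face(weights):
        """Blend the neutral face with the expression offsets for one frame."""
        weights = np.asarray(weights).reshape(n_expressions, 1, 1)
        return neutral + np.sum(weights * deltas, axis=0)

    # one frame of a captured performance: mostly neutral, a hint of two expressions
    frame_weights = np.zeros(n_expressions)
    frame_weights[3], frame_weights[40] = 0.6, 0.2
    posed = pose_face(frame_weights)
    print(posed.shape)        # (5000, 3)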
Last step of the process: delivering the model to the postproduction studio so it can be used.
While movies represent the major area of operations for this technology, ADN is canvassing any number of potential applications. Recently, the Mikros Image studio, which took part in this work, used Marilyn Monroe's image for a Dior J'Adore advertisement, involving a triple search: for video loops, for a skin reference (from an actress with skin similar to Monroe's), and for an actor for the performance.
The average cost of an acquisition is approximately €100,000 to €120,000. Once the model has been established, prices, calculated in terms of use and exposure, range from €1,000 per second for close shots to €20,000 per minute for video-game cinematics.
In animation, a single hair is still difficult to simulate convincingly, and a head can carry up to 150,000 of them! In 2006 Bruno Gaumetou, founder of the Neomis animation studio, was facing this challenge, so he contacted Inria Grenoble, which was then working on an algorithm based on the superhelix, a mathematical curve that can mimic hair shapes. The starting point for these studies was research work by L'Oréal, specialists in the matter. The 2008-2011 Hair project was supported by the French national research agency (ANR) in partnership with CNRS-IJLRA, Inria Grenoble and BeeLIGHT.
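Geometrically, the elementary building block of a superhelix is a strand with constant curvature and torsion, that is, a helix. The sketch below simply traces such a strand to give an idea of the shape being manipulated; the radius, pitch and length are arbitrary, and this is only an illustration, not Inria's mechanical model.

    # Illustrative helical strand: constant curvature and torsion, the elementary
    # shape of a superhelix. Parameters are arbitrary; not Inria's dynamic model.
    import numpy as np

    def helix_strand(radius=0.01, pitch=0.02, turns=6.0, samples=200):
        """Return sample points along a helical hair strand hanging along -y."""
        t = np.linspace(0.0, 2.0 * np.pi * turns, samples)
        x = radius * np.cos(t)
        z = radius * np.sin(t)
        y = -pitch * t / (2.0 * np.pi)          # descend one pitch per turn
        return np.column_stack([x, y, z])

    strand = helix_strand()
    print(strand[:3])      # first few points near the scalp attachment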
Within the framework of a technological transfer, Neomis took up the research but went at it from a more practical angle: generating digital wigs for animation projects. Even if there are already tools on the market such as Shave & Haircut, Autodesk Maya Hair, ZBrush or the proprietary tools of Disney and Weta, the Neomis approach is intended to be more intuitive and artistic, with easy-to-configure software that's relatively inexpensive and usable for cartoon-style or realistic animation.
The resulting Maya plug-in (with Mental Ray as final renderer) is therefore based on superhelices and three key steps: creating the hair style, giving it movement, and rendering light and color. "You start with a single hair and add to it variables of curl, density and more. Then the hair is incorporated by means of an implantation map. Tools for styling hair (scissors, comb, layering) and for adjusting the dynamics (animation masks, damping) make it possible to obtain a dynamic curve so that later it's possible to move into baking phase (verification with the dynamics 'frozen'). Overall, the graphic artists will have harvested data so that they can easily manipulate the curves in CG software before the final rendering."
Equipped with a clear three-part interface (superhelix on the left, visualization of the strand in the center, adjustment variables such as ethnic type and general ellipticity data on the right), the plug-in first operates on a single hair. Once the test is validated, "the process is extended to a bunch of 20 hairs which will be the 'master hairs' of the hair style," continued Bruno Gaumetou. "We can thus provide computed images, with colors, ready for compositing." In addition to the ethnic typing function, the newly created hair has variables for simulating how the hairstyle "falls" (on the shoulders, for example), or for oily hair (when the downward movement is fluid) or dry hair (when the downward movement is more abrupt).
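Going from 20 master hairs to a full head of hair is typically an interpolation step: each rendered strand borrows its shape from its nearest master hairs, weighted by distance on the scalp. The sketch below shows that general idea with hypothetical data structures; it is not the plug-in's actual code.

    # Hypothetical guide-hair interpolation: each rendered strand blends the
    # shapes of its nearest "master hairs". Not the Neomis plug-in's code.
    import numpy as np

    n_masters, n_points = 20, 50
    master_roots = np.random.rand(n_masters, 3)              # scalp positions
    master_strands = np.random.rand(n_masters, n_points, 3)  # master hair shapes

    def interpolated_strand(root, k=3):
        """Blend the k closest master hairs, weighted by inverse scalp distance."""
        dists = np.linalg.norm(master_roots - root, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] + 1e-6)
        weights /= weights.sum()
        shape = np.tensordot(weights, master_strands[nearest], axes=1)
        # re-root the blended shape at this strand's own scalp position
        return shape - shape[0] + root

    # generate a (tiny) head of 150 strands from the 20 masters
    head = np.array([interpolated_strand(np.random.rand(3)) for _ in range(150)])
    print(head.shape)      # (150, 50, 3)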
Neomis holds the exclusive operating license for this plug-in, which is still in its first version. As its founder noted, "Inria has quite naturally continued its research, especially on friction and self-collisions, and these constraints will need to be integrated in the coming months." Several national and international studios have already expressed interest in the plug-in, which has the advantage of being flexible and easy to integrate into existing production flows.
In answer to a question concerning the availability of Syflex student licenses, Gérard Banel explained that they do exist, at 10% of the list price, but that "the schools can also obtain a floating license if need be."
In answer to a question about creating crossovers to link these technologies, Cédric Guiard and Bruno Gaumetou replied that "there are ongoing discussions concerning joint projects based on both technologies, such as placing a wig on a model."
In answer to a question concerning the relevance of ADN's approach in other entertainment sectors, Cédric Guiard pointed out that "6,000 athletes are currently suing the games publisher Electronic Arts over image rights: since the level of photorealism has greatly improved, they are now dissatisfied with the quality of their representations."
Drafted by Stéphane Malagnac, Prop'Ose, France
Conferences organized by CITIA
under the editorial direction of René Broca and Christian Jacquemart
Translated by Sheila Adrian
Contact: christellerony@citia.org