
Talk: ILM

metamanda's friend Vijay gave a talk at PARC yesterday about some of his work at ILM. He worked on the movie The Hulk and is currently working on The Day After Tomorrow. I had been trying to schedule this talk while I was still at PARC, but of course he waits until I switch jobs before he comes and speaks :).

Hulk Movie:

It took a year to create the character model, which is as long as it took to create all the rest of the special effects. Three models were developed for the Hulk, representing the three sizes his character takes in the movie. They couldn't just scale a single model because it didn't look right.

The Hulk movie was the first time they tore cloth. The cloth tearing was created programmatically, rather than by animators.

Ang Lee: you have 24fps, so every frame is an opportunity to make a dynamic pose. They tried to use this principle throughout the film.

Ang Lee did a lot of the reference and motion capture work himself. The videos of him acting as the Hulk are rather amusing.


There is also an R&D department that works on the in-house software and an art department that is the soul of the film. Everyone's office is filled with art department images, which is necessary for creating a unified look and feel.

The creature people are responsible for skin and hair. For the skin they have to do enveloping -- determining how the skin deforms as the skeleton is posed.
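Vijay didn't go into how ILM's enveloping works internally, but the standard technique for this kind of skin deformation is linear blend skinning: each skin vertex follows a weighted mix of the bones that influence it. A minimal 2D sketch of that idea (my reconstruction, not ILM's actual pipeline):

```python
import numpy as np

def blend_skin(vertex, bone_transforms, weights):
    """Linear blend skinning: the deformed vertex is a weighted
    average of the vertex transformed by each influencing bone."""
    v = np.append(vertex, 1.0)  # homogeneous 2D coordinates
    blended = sum(w * (T @ v) for w, T in zip(weights, bone_transforms))
    return blended[:2]

def rotation(theta):
    """2D rotation as a 3x3 homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# A skin vertex near the elbow, influenced equally by the upper-arm
# bone (unposed, identity) and the forearm bone (bent 90 degrees).
vertex = np.array([1.0, 0.0])
transforms = [np.eye(3), rotation(np.pi / 2)]
weights = [0.5, 0.5]
print(blend_skin(vertex, transforms, weights))  # halfway between the two poses
```

The weights are painted per-vertex by an artist, which is where most of the enveloping labor goes.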

Simulations are a bit unpredictable, but that's nice because the director can't tell you exactly what to do like he does with animators. In essence, you have more artistic control, although you are playing with parameters rather than controlling the model directly.

Digital crash test dummies

Vijay worked on the dog fight scene in the Hulk and showed us a bunch of clips of how the scene was created. Ang Lee wanted the scene to be violent. The level of violence was too much for motion capture, though they did motion-capture a dog for the first time.

It's hard for animators to deal with collisions because they have to figure out the secondary motions resulting from the impact. It is also difficult for animators to attach one model to another, which was the case in the dog fight when a dog bites the Hulk. This is because the model's skeleton is rooted in the torso, and the other body parts inherit their motion from the torso. That makes it difficult to attach a body part to an object, because the body part wants to move with the rest of the model rather than vice-versa. (NOTE: in Harry Potter they worked with a new modelling system (Dobby) where the model can be rooted anywhere.)

Someone had the idea to use a rigid body solver, which they regularly used for airplane explosions and the like. The rigid body solver uses weighted volumes, along with springs connecting them, to physically model the forces on the model. The volumes act as a hard stop on the model's motion: they collide and cannot pass through each other. One of the dog models had two large disks sticking out of the back, which were there to prevent the back from bending backwards too much. The springs control the stiffness/damping, which creates the equilibrium "rest pose" of the model. They will use different rest poses for different scenes.
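As described, the setup is essentially a mass-spring-damper system: each spring pulls its volumes back toward a rest configuration while damping bleeds off energy. A toy 1D sketch of a single damped spring relaxing toward its rest length (illustrative parameters, not ILM's solver):

```python
def step(pos, vel, rest, stiffness, damping, dt):
    """One semi-implicit Euler step of a 1D damped spring with unit mass.
    The spring force pulls pos toward rest; damping resists velocity."""
    force = -stiffness * (pos - rest) - damping * vel
    vel += force * dt
    pos += vel * dt
    return pos, vel

# Stretch the spring, then let the simulation settle back to the "rest pose".
pos, vel = 2.0, 0.0
for _ in range(2000):
    pos, vel = step(pos, vel, rest=1.0, stiffness=50.0, damping=5.0, dt=0.01)
print(round(pos, 3))  # prints 1.0 -- the spring has settled at its rest length
```

Raising the stiffness or changing the rest lengths is how you'd get the different rest poses per scene that Vijay mentioned.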

The rigid body solver solved the two main problems the animators were having. They didn't have to worry about secondary motion as that would be modelled physically. It was also a lot easier to animate the dogs when they were attached to the Hulk, because the rigid body solver, while perhaps not getting everything right, could give the animators a much better starting point.

Vijay showed us numerous funny, though slightly cruel, images of dogs being hurled to test the simulation results. There were clips of mastiffs and poodles being tossed great distances, as well as dogs being punched and hit with tree trunks. Needless to say, the rigid body solver worked.

Minor notes:

The solver doesn't animate fingers and toes -- the extra effort isn't worth the results.

They can combine animation and the solver, using animation as input into the solver and vice-versa. The rigid body solver is smart enough to look back a couple of frames to inherit the motion the model had as input.
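That hand-off can be sketched with finite differences: estimate a velocity from the last couple of animated frames and hand it to the solver as an initial condition, so the simulation continues the motion rather than starting from rest. A guess at the idea (hypothetical function, not the actual implementation):

```python
def inherit_velocity(frames, fps=24.0):
    """Estimate velocity from the last two animated frames so a physics
    solver can take over the motion without a visible hitch."""
    (x0, y0), (x1, y1) = frames[-2], frames[-1]
    dt = 1.0 / fps
    return ((x1 - x0) / dt, (y1 - y0) / dt)

# Last few frames of a hand-animated arc; the solver starts from here.
animated = [(0.0, 0.0), (0.5, 0.1), (1.0, 0.3)]
vx, vy = inherit_velocity(animated)
print(vx, vy)  # 12.0 and roughly 4.8 units/second at 24fps
```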

When they create a character they model six facial expressions (fear, surprise, disdain, stern, ?, ?). There is a theory that all faces are a blend of these six expressions.
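Whether or not the six-expression theory holds, the mechanism it suggests is blend shapes: store each expression as offsets from a neutral face, then mix them with weights to get any in-between face. A generic sketch with made-up one-dimensional "faces" (one value per control point):

```python
import numpy as np

def blend_face(neutral, expressions, weights):
    """Blend-shape interpolation: add weighted expression offsets
    (expression minus neutral) onto the neutral face."""
    face = neutral.astype(float).copy()
    for expr, w in zip(expressions, weights):
        face += w * (expr - neutral)
    return face

neutral  = np.array([0.0, 0.0, 0.0])
fear     = np.array([1.0, 0.0, 0.0])  # toy data, not real expression shapes
surprise = np.array([0.0, 1.0, 0.0])

# Half fear mixed with a quarter surprise.
print(blend_face(neutral, [fear, surprise], [0.5, 0.25]))
```

In production the arrays would be thousands of vertex positions per expression, but the blend itself is the same weighted sum.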

Motion capture: 24 cameras, infrared light. They have to post-process manually to fill in gaps in the data.



Comments (2)

I had noticed that "happy" wasn't one of those six fundamental facial expressions. That really surprised me.


please send me a blueprints of hulk so i can try it


This page contains a single entry from kwc blog posted on November 18, 2003 1:33 PM.
