Pixar Used AI to Stoke Elemental's Flame
It’s harder to make fire than you might think. But the Pixar team was determined to do the impossible.
PIXAR HAD A problem. It had a great new idea for a movie—Elemental, based on characters from The Good Dinosaur director Peter Sohn—but actually animating the film’s titular elements was proving difficult. After all, it’s one thing to draw a crumbling mound of sentient dirt, but how do you capture the ethereal nature of fire onscreen, and how would a corporeal body made of water even work? Can you see through it? Do the eyes just float around?
While some of those questions could be answered with good old-fashioned suspension of disbelief, Pixar’s animators thought the fire issue was a real conundrum, especially considering that one of their movie’s leads, Ember, was actually supposed to be made of the stuff. They had tools to make a flame effect from years of previous animations, but when you actually tried to shape it into a character, the results were pretty terrifying, a cross between Studio Ghibli’s Calcifer and Nicolas Cage’s Ghost Rider, but somehow harsher.
“Our fire fluid simulations are very naturalistic and they're designed to mimic reality,” says VFX supervisor Sanjay Bakshi. With a character like Ember, Bakshi says, “it's really important to concentrate on the performance of the face,” but the studio was having trouble balancing the dynamism of the fire with the character’s shape and sensibilities. Paul Kanyuk, a crowds technical supervisor at Pixar, says that at first crack, Ember looked like a ghost or even a demon. “It can look horrifying if it's too realistic, like you actually have a human figure made of real pyro,” he explains.
Even if you can get the scary tamped down, Sohn says, you still have to craft something that’s recognizably fiery. “Fire naturally is so busy, but if you slow it down, it can turn into something that looks like a plasma,” he explains. “It was interesting to compare it to other anthropomorphized characters, because they’re all very fantastical and you can do anything with them. If you’re drawing an emotion, there is no one-to-one, but everyone knows what fire looks like.”
Basically, Sohn explains, to make Ember, every single shot of Elemental would need an effects pass, something that’s not only incredibly time-consuming, but also very expensive.
Fortunately, Kanyuk had an idea. He’d been working on crowd animation at Pixar since 2005, starting with Ratatouille, and always struggled with ways to make the clothes on big groups of people look right. While trying to solve the problem he’d gotten involved with the Association for Computing Machinery’s Siggraph, a community organization devoted to the advancement of computer graphics. Around 2016, he found some of the group’s research on using machine learning to hone cloth simulations and has been trying to master it ever since.
Elemental gave him an opportunity to apply what he learned.
AROUND 2019, KANYUK came across a paper out of Siggraph Asia about using neural style transfer (NST)—the type of artificial intelligence used to make a photo look like a Van Gogh or a Picasso—to move voxels (basically 3D pixels, with volume) around in an animation, all with the goal of giving a character a certain look. Kanyuk thought NST could help Pixar crack its flame problem, though he told Sohn (who’d also signed on to direct the film) that, like much of machine learning, there was only about a 50 percent chance it would work. “I said, ‘I’m going to give you five ideas, and maybe two of them will work.’ But he said, ‘Let’s do all of them,’” Kanyuk says.
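For readers who haven’t run into neural style transfer before, the sketch below shows the classic 2D version of the idea (in the spirit of Gatys et al.): take the features a pretrained network extracts from a “content” image and a “style” image, then optimize a new image so its content features match one and its style statistics match the other. This is only an illustration of the general technique the article describes, not Pixar’s pipeline, which reportedly operated on 3D voxel data rather than flat images; the file names, layer choices, loss weights, and step counts here are placeholder assumptions.

```python
# Minimal 2D neural style transfer sketch in PyTorch (Gatys-style).
# Illustrative only: image paths, layer indices, and weights are assumptions,
# and this skips niceties like ImageNet normalization and pixel clamping.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load_image(path, size=256):
    tf = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),
    ])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load_image("character_shape.png")  # placeholder: the shape to preserve
style = load_image("real_fire.png")          # placeholder: the look to transfer

# Frozen pretrained VGG-19 used purely as a feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

content_layers = {21}               # conv4_2
style_layers = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1

def extract(x):
    """Run x through VGG, collecting content and style feature maps."""
    content_feats, style_feats = [], []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in content_layers:
            content_feats.append(x)
        if i in style_layers:
            style_feats.append(x)
    return content_feats, style_feats

def gram(feat):
    """Gram matrix of a feature map: captures texture, discards layout."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

with torch.no_grad():
    target_content, _ = extract(content)
    _, target_style = extract(style)
    target_grams = [gram(f) for f in target_style]

# Start from the content image and optimize its pixels directly.
image = content.clone().requires_grad_(True)
opt = torch.optim.Adam([image], lr=0.02)

for step in range(300):
    opt.zero_grad()
    c_feats, s_feats = extract(image)
    c_loss = sum(F.mse_loss(a, b) for a, b in zip(c_feats, target_content))
    s_loss = sum(F.mse_loss(gram(a), g) for a, g in zip(s_feats, target_grams))
    loss = c_loss + 1e4 * s_loss   # style weight is illustrative, not tuned
    loss.backward()
    opt.step()
```

The appeal for a character like Ember is that the content term can hold onto the shape and facial performance the animators authored, while the style term pushes the texture toward something that reads as real fire, which is the balance Bakshi and Kanyuk describe above.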