
Are StarCraft 2's in-game cinematics considered machinima? Blizzard says no, because they don't use in-game assets, even though they could. Assets look good from the top-down view, but in a cinematic view they might not hold up at all due to lack of detail. There are lots of things the top-down RTS engine doesn't need that the in-game cinematics do, such as fog, advanced lighting, and depth of field (which directs the viewer's eye where the artists want and is used for hiding flaws). So the engine team added features that wouldn't otherwise be used.

Blizzard's story-mode team consists of five modelers. They developed pipelines and workflows to get what they wanted done. They took Tychus and converted his pre-rendered model to an in-game one, making sure he read well from up close as well as from far away. Not all assets could be taken from pre-rendered sources; many had to be made from scratch, such as Matt Horner. They first developed a concept for Matt Horner and then created a model. The 2007 version of him looked meaner, but the cinematics team overhauled him. The old Matt Horner model was used for the body-bag scene during the Zerg invasion. Many of the models were upgraded from Shader Model 1.0 to 2.0.

Storyboarding is still done with a writer and an artist getting together and churning something out. The next step is previsualization (previs): staging the characters, setting the cameras, and giving the characters actions. Previs is not actual animation. The pacing is continually ramped up. Next, people get in front of the camera to rehearse and experiment before doing animations. The lore is also considered, and performances are built from each character's traits (for example, Warfield showing off his experience under pressure: with a zergling coming at him and the mission at stake if he died, he had to think fast and use his bayonet). Weight, anticipation, and strong posing are the core values in the first pass of animation. In the second pass, body mechanics and the characteristics of the characters are refined.
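The workflow described above is a fixed sequence of stages. As a rough illustration only (the stage names paraphrase the talk; this is not Blizzard's tooling), it could be sketched as:

```python
# Ordered stages of the cinematic workflow described in the panel.
# Purely illustrative; names and descriptions paraphrase the talk.
PIPELINE = [
    ("storyboard", "writer and artist rough out the scene"),
    ("previs", "stage characters, set cameras, block out actions"),
    ("rehearsal", "experiment in front of the camera before animating"),
    ("animation pass 1", "weight, anticipation, strong posing"),
    ("animation pass 2", "refine body mechanics and character traits"),
]

def next_stage(current: str):
    """Return the stage that follows `current`, or None at the end."""
    names = [name for name, _ in PIPELINE]
    i = names.index(current)
    return names[i + 1] if i + 1 < len(names) else None

assert next_stage("previs") == "rehearsal"
assert next_stage("animation pass 2") is None
```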

Then the scene is translated into the game. Many textures and animations are missing, and lots of problems need to be debugged. When things are exported to the game engine, they sometimes disappear, and even if they work for a while, they can suddenly stop working. These are common problems. For the Escape from Mar Sara cinematic, the mutalisks moved from outside the Hyperion to the inside of the bridge for some reason. The game engine wants to do many things to a model, such as blending, so it takes a lot of work to tell the models not just what to do, but what not to do.

For lighting, they first ask what the environment is, then lay down a default lighting pass. Where does the specular shine? Where doesn't it? Where could it? Then they add ambient occlusion (contact shadows). In SC2 most characters have three lights. In Fire and Fury, one was on Warfield's face, a second on his bayonet, and the last on his body.
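A per-character three-light rig like the Warfield example could be represented as simple data. This is a minimal sketch under assumed names (the `Light` type, light roles, and intensity values are hypothetical, not Blizzard's engine API):

```python
from dataclasses import dataclass

@dataclass
class Light:
    """One light in a character rig (illustrative, not an engine API)."""
    role: str          # e.g. key, accent, fill
    target: str        # what the light is aimed at
    intensity: float   # arbitrary units, hypothetical values

def warfield_rig():
    """Three lights per character, matching the Fire and Fury example."""
    return [
        Light("key", target="face", intensity=1.0),
        Light("accent", target="bayonet", intensity=0.6),
        Light("fill", target="body", intensity=0.3),
    ]

rig = warfield_rig()
assert len(rig) == 3  # most SC2 characters use three lights
```

Since the talk notes that every shot is lit from scratch, a rig like this would be rebuilt per shot rather than shared.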

Localization is a huge challenge for Blizzard. The game shipped simultaneously in 11 languages. Every single sign, poster, and line of dialog (around 50,000 words) was localized. Blizzard used FaceFX software: you feed it the text, the character, and the sound, and it creates the animation. Some close-up scenes require extra animation to remove pops and smooth out the dialog. Casting was really important.
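The core idea of text-and-audio-driven lip sync is mapping a line of dialog to timed mouth shapes (visemes). The toy sketch below only illustrates that idea; real middleware like FaceFX analyzes the audio signal itself, and the letter-to-viseme table here is a made-up simplification:

```python
# Hypothetical letter-to-viseme table (greatly simplified; real systems
# work from phonemes extracted from the audio, not from spelling).
VISEMES = {"a": "AA", "e": "EH", "i": "IY", "o": "OW", "u": "UW",
           "m": "MBP", "b": "MBP", "p": "MBP", "f": "FV", "v": "FV"}

def lip_sync(text: str, duration: float):
    """Return (time, viseme) keyframes spread evenly over the audio length."""
    shapes = [VISEMES.get(c, "REST") for c in text.lower() if c.isalpha()]
    if not shapes:
        return []
    step = duration / len(shapes)
    return [(round(i * step, 3), v) for i, v in enumerate(shapes)]

frames = lip_sync("My mind is on fire", 2.0)
assert frames[0] == (0.0, "MBP")  # "M" maps to the closed-lips shape
```

Because the animation is generated from text and audio, the same scene can be re-synced per language, which is what makes shipping 11 localizations at once tractable; the extra hand animation mentioned above then cleans up close-ups.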

StarCraft II comes with all the cinematic assets and tools to make machinima.


Q: Have you guys ever considered putting pre-rendered material in the in-game pipeline, such as the Zeratul vs. Kerrigan fight?

Q: Do you feel that given current hardware you can push the level of detail for these graphics such that the cinematics can get even closer to pre-rendered? Do you plan to do this in Heart of the Swarm?

A: Yes, we do plan to add other graphical features, but we also want our games to be available to the largest possible audience. So any features we add can't be critical to the storytelling.

Q: How do you bounce ideas off of each other?

A: I try to keep meetings to a minimum, but the teams sit together, so even the most dedicated work session turns into a brainstorming session.

Q: How hard are certain features to program for fans such as rain?

A: Rain particles falling through the sky can be done, but rain running down the textures cannot.

Q: What about lip syncing?

A: I don't think so. We use FaceFX, which is not included. If you got it on your own, you could do it.

Q: Are there any plans to make longer cinematics for the game? Like 30 minutes plus?

A: Yes, we've talked about downloadable content, episodic TV shows. If we can get to the point where we can support the teams we'd be interested, but we're not there yet.

Q: Have you ever thought about pre-rendering the in-game cinematics?

A: We've been debating that pretty heavily. We're proud of the fact that the in-game cinematics run in real-time. That said, it does bother me that people who aren't running high-end systems aren't seeing all the detail we've put into them. The trade-off, though, is that the interactive portion is still lower quality.

Q: Are you changing the lighting per shot?

A: Yes, every shot is basically rendered from scratch.

Q: I assume you storyboard the entire story before you decide to do a cinematic. Do you have criteria for what becomes an in-game cinematic versus a pre-rendered one?

A: We make the decision during the storyboard stage. We try to figure out the overall layout of where they will fit in. We want to open with a pre-rendered cinematic, end with one, and try to fit some in between.

Q: You had mentioned that you tripled the number of polys used for a cinematic. How many polys are you looking at for a good character?

A: About 4,000. The highest character is 20,000. Sets are 10,000 to 50,000.
