Animation/VFX Award Contest Tips

Some unofficial personal opinions on how to prepare a submission for Animation/VFX award contests:

– Choose the appropriate category for the material; e.g., don’t submit to an “environment” category if there isn’t substantial environment work in the submission.

– Explain what you did fully in the behind-the-scenes material and/or notes. Don’t go into great detail on the specific techniques (unless they are original and interesting), but do include a complete list of all the relevant elements you worked on. This makes it easier to “parse” the visuals. The quality of the behind-the-scenes material is important – don’t make it an afterthought.

– Show the plate! If there is a major live-action element, the breakdown should show this element alone first. Go to the bare plate, THEN add any intermediate steps afterward. Otherwise it’s hard to figure out what you had to work with at the start.

Some ways your piece might be evaluated:

– Has this been done before? Given that new techniques emerge constantly, pieces that merely re-use older methods are not going to look competitive, UNLESS they are executed extremely well (e.g., with much larger scale or better art direction than has been seen before).

– Did you take risks? Did you take on a big challenge, in terms of technology, artistic ambition, or budget/schedule? In my opinion, part of the purpose of awards is to encourage risk-taking, so this makes a piece more attractive.

– Did it matter? Was your work essential to supporting the overall film/story? If your work was taken away, would the film suffer? If you hadn’t come up with new and better techniques but used old ones instead, would the results have looked just as good?

– Don’t count on scale (in terms of scene complexity) alone to be impressive – “Avatar” came out over a year ago now. Moore’s Law continues to make time and space cheaper. China and India continue to make basic artist labor cheaper.


2012

The new disaster film 2012 pleasantly surprised me. In addition to being a vehicle for excellent visual effects, the story itself was fairly interesting, at least if you look at it as a sophisticated farce. It’s much like Starship Troopers, a movie I love (but most people love to hate), which doesn’t seem to have much depth until you view it as an ironic commentary on the seductiveness of foolish warfare.

2012’s filmmakers deal with a difficult theme – the end of human civilization as we know it. Treated literally, this could make for an extremely depressing film. 2012 did incorporate some serious elements of this setting, such as the dilemma of whether to inform the public of impending doom, and how to decide who gets a seat on the “arks” that can save a limited number of people.

Thankfully, however, these horrifying scenes are balanced out with surreal action sequences and characters that remind the audience that 2012 is only fantasy. 2012’s action is the sort that you have to turn off the analytical part of your brain to enjoy: ignore the glaring technical inconsistencies and just appreciate the clever timing and creative destruction of famous landmarks. The intentional one-dimensionality of the well-cast main characters, like John Cusack’s protagonist and his conspiracy-theorist buddy, further reminds us that 2012 isn’t to be taken literally.

I felt 2012 succeeded where other films like The Day After Tomorrow and Poseidon failed. Neither of those two films included any lighthearted elements to balance the catastrophic setting, nor did they treat the scenarios realistically enough to provide thought-provoking insight.


Animatic-itis

Previsualization is an extremely valuable tool for filmmakers working with digital effects. Animatics (rough stand-in animation) can be generated on a quick turn-around and dropped into edited scenes to check timing and continuity.

But there is a drawback – animatics are often unable to convey the true feeling of a fully-rendered shot, in terms of motion and object scale. A movement that looks OK for untextured stand-in models may look too quick or awkward once the final models are substituted and motion blur is added.

Ideally the production pipeline should be able to accommodate additional iterations at this stage, but often it’s too late to make changes. Usually the in/out points and action timing must be locked early on during 3D production to allow music and sound work to proceed.

This problem results in shots that look mis-timed or out of scale, even though the lighting is as good as it can be. For example, I think I saw many cases of this in the new Spider-Man movies, like the shot where Spider-Man jumps onto the moving car.

The timing of the jump and landing probably looked fine in animatics, but when fully lit Spider-Man seems like he’s moving too quickly; there is a feeling of lightness to his body where instead there should be solid weight. I imagine the animator really wanted to have Spider-Man hang in the air for four or five more frames, but was stuck because Spider-Man has to touch the car at frame 19, since that’s when the sound effect is going to go off, and it’s too late to get that changed.

I call this phenomenon “animatic-itis”: wishing you could make a timing improvement, but having your hands tied by the locked-down animatic.

I’ve been guilty of this myself, by the way. The airbag-bouncing shots in Roving Mars have less-than-ideal motion because we had to lock down the bounce timing for sound effects work a week or two before we finished tweaking the animation.

The Karate Kid Remake

The other day I was just thinking, “Wouldn’t it be cool to remake The Karate Kid? Make a 2000s version of the ’80s classic?” Lo and behold, here it is:


Well, they’d better call it “The Kung Fu Kid,” because it’s now set in China, and Jackie Chan plays the Miyagi character!

The setting has a lot of potential – I’m delighted to see the US expat experience depicted in a mainstream film.

On the other hand, I’m afraid of how they will treat the clash of cultures. Will they resolve the boy’s conflict with his Chinese peers in an evenhanded let’s-understand-each-other-as-equals way, or will it just have the usual triumph-of-the-American-Imperialist ending? (Or will they avoid treating the issues in anything but a superficial way, a la Lost In Translation?)

The “Karate” title has me worried – mis-labeling the actual martial art being practiced is, at best, seriously disrespectful of the setting.

Lens Flares Revisited

I was just watching the excellent behind-the-scenes material on the Blu-ray release of the new Star Trek film. J. J. Abrams discussed his choice to use lens flares and fake dirt/moisture elements to enhance the realism of CGI space shots. While he may have gone a little too far in this particular instance, I believe lens effects are a very helpful technique to make CG imagery appear less artificial.

I developed a custom lens flare tool for my Mars Phoenix animation project in 2006. Although lens flares are traditionally a feature of 3D rendering packages, I chose instead to handle them as a 2D compositing element. I implemented a simple “renderer” that draws radial glows and discs based on the projected location of a light source in screen space. My custom pipeline integration package (interp) already had the ability to color each pixel of a 2D layer based on an arbitrary function of x,y position. Using this feature I was able to implement the flare renderer entirely in script, without adding a single line of C++ code to the core interpreter.

The original script takes as input the screen-space location of a single light source (as determined by sampling the baked motion data and transforming it through the camera-to-screen projection). Then it generates a list of radial glows and lens reflection rings, and finally goes down the image line by line, iterating through the list and adding up the contribution from each element.
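The loop above can be sketched in Python. This is a hedged approximation only – the original was implemented in the interp scripting language, and the element shapes, ghost positions, and falloff constants here are invented for illustration:

```python
import math

def flare_elements(lx, ly, cx=0.5, cy=0.5):
    """Build flare elements for a light at screen position (lx, ly).

    Lens-reflection "ghosts" are placed along the line through the image
    center, at fractions of the light-to-center offset (values hypothetical).
    """
    elements = [
        # (center_x, center_y, radius, intensity, hardness)
        (lx, ly, 0.25, 1.0, 2.0),   # broad radial glow at the source
        (lx, ly, 0.04, 2.0, 8.0),   # hot core
    ]
    dx, dy = cx - lx, cy - ly
    for t in (0.4, 0.9, 1.5, 2.1):  # ghost positions along the axis
        elements.append((lx + t * dx, ly + t * dy, 0.06 * t, 0.15, 4.0))
    return elements

def render_flare(width, height, elements):
    """Go down the image line by line, summing each element's radial falloff."""
    img = [[0.0] * width for _ in range(height)]
    for j in range(height):
        y = (j + 0.5) / height
        for i in range(width):
            x = (i + 0.5) / width
            v = 0.0
            for ex, ey, radius, intensity, hardness in elements:
                r = math.hypot(x - ex, y - ey) / radius
                v += intensity * math.exp(-hardness * r * r)  # smooth glow
            img[j][i] = v
    return img

img = render_flare(64, 64, flare_elements(0.3, 0.4))
```

The per-pixel function here is a simple Gaussian-of-radius; the real renderer could use any radial profile, since each element is just an arbitrary function of distance from its center.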

This system works great for single point light sources like the sun. But today I wanted to push the technique further, to handle many sources and large area glows. I imagine the ultimate lens-effect system would not require you to specify the light source locations numerically; instead you should be able to give it an HDR image and then automatically generate lens flare for each sufficiently bright pixel. This way you could design the flare effect just once, and then apply it to any scenario like multiple small point sources or large glows and trails.

It was not too difficult to implement this within interp. I just had to add one C++ function to scan through an image and return a list of all the non-zero pixels. I used script functions to darken and clamp the source image so only very bright pixels would be identified. Then I instantiated a glow-and-ring lens flare for each pixel in the list.
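A sketch of that bright-pixel pass, again in Python with hypothetical names and a made-up threshold – in the real system the scan was the single added C++ function, and the darken/clamp happened in script beforehand:

```python
def bright_pixels(img, threshold):
    """Return (x_index, y_index, excess) for every pixel above threshold."""
    hits = []
    for j, row in enumerate(img):
        for i, v in enumerate(row):
            if v > threshold:
                hits.append((i, j, v - threshold))  # keep only excess energy
    return hits

def flares_from_image(img, threshold, width, height):
    """Instantiate one flare element per bright pixel, scaled by its energy."""
    flares = []
    for i, j, energy in bright_pixels(img, threshold):
        x, y = (i + 0.5) / width, (j + 0.5) / height  # pixel -> screen space
        flares.append((x, y, energy))
    return flares

# Tiny example: a 4x4 "HDR" image with one hot pixel yields one flare.
hdr = [[0.0] * 4 for _ in range(4)]
hdr[1][2] = 5.0
flares = flares_from_image(hdr, 1.0, 4, 4)
```

Each resulting (x, y, energy) tuple would then be fed to the glow-and-ring renderer, so a large soft highlight naturally becomes a cluster of many small overlapping flares.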

Here are the results:

I was particularly surprised by the smooth appearance of anamorphic radial glows. In the past I had always made these using a 2D blur filter. But the new flare renders looked a lot better. It’s probably because they use a true radially-symmetric blur, as compared to the blobby Gaussian effect you get with a separable 2D filter.

Note the superior appearance of the horizontal glow bands in the “radial glow” image compared to the glow that was based on a 2D Gaussian blur.

Unfortunately, the renderer slows down from a few seconds per frame to a minute or more when it has to draw more than a handful of flares. Despite interp’s efficient line-by-line SIMD compositing architecture, there is still too much interpreter overhead for this operation. At first I thought about implementing more of the glow-and-ring drawing in C++, which would involve finding efficient bounding boxes for each element and sorting them in screen space. But given that I have RenderMan handy, it would be easier just to translate the necessary operations into a RIB scene and let PRMan draw it.

I will probably use this technique for my next 3D production. It can produce beautiful area glows and will save me the effort of specifying individual flare elements for the sun and specular glints.

The New Client Application Landscape

Back in the early 2000s, selecting a development platform for client software applications was easy. You wrote C++ code using the native OS APIs (Win32, Carbon, X11) or thin wrappers on top of them. It also became practical to write clients in higher-level languages like Python or Java, provided you were willing to put up with some rough edges in the implementations of GUI toolkits and cross-platform issues.

There’s nothing fundamentally wrong with these platforms, and despite all the hype about web applications taking over the world, I think it’s perfectly reasonable to develop new client software with them (provided you “pay your taxes” to keep up with modern OS requirements like packaging systems and user data persistence). In fact, good cross-platform toolkits like Qt and wxWidgets have made OS-native client development easier over time.

However, as I consider developing a new, graphically-intensive client program intended for a large user base, I feel obligated to look at browser-based platforms, specifically Flash and JavaScript.


The Flash Player has already become popular as a game client platform, notably for Zynga’s Facebook games, and even some MMOs. It has no standards-compliance issues because there is only one canonical implementation. Its sound and graphics features are solid and well-optimized, although the 3D API is still pretty limited. There are some GUI controls libraries for Flash, but they are not as well-developed as those in a desktop OS or a good JavaScript framework.


From a desktop developer’s point of view, JavaScript is essentially a bare-bones scripting language interpreted by the browser, coupled with a small “standard library” including an API for manipulating the browser’s layout and display engine (the HTML DOM). JavaScript applications work by pretending to be a web page with various elements that appear, disappear, or move around according to the script. There are many “widget libraries” for JavaScript that emulate desktop GUI controls with varying degrees of success. Although JavaScript-based controls sometimes feel clunky, it’s important to remember that you get the full benefit of the web browser’s page-layout engine and its HTML controls (buttons, sliders, etc.) for very little code. This could be a powerful advantage for writing GUIs that make heavy use of traditional desktop-style controls.

Future browsers will (hopefully) also include WebGL, a complete JavaScript interface to OpenGL which renders into an HTML canvas. In theory, you could port most sophisticated rendering code, like a modern 3D game engine, to this interface. Unfortunately, WebGL is just an emerging standard and there is a risk that browsers won’t ship it widely or support it well. It could join the long history of widely-promoted but seldom-implemented standards like VRML or the non-audiovisual parts of MPEG.

Limitations of the browser as a platform

Flash and JavaScript both have some important limitations:

  • You can’t communicate over arbitrary TCP sockets. In both platforms, client/server communication has to go through the platform’s HTTP-based RPC mechanisms.
  • You have no control over the “event loop.” This is a potential source of problems with input polling and timing control. In particular, the WebGL code I’ve seen uses a wonky technique for frame timing: the script schedules an “update” function to run at periodic intervals (or with a zero timeout, which tells the browser to handle inputs and then return immediately to the script). This feels clunky compared to the high degree of control you get with a native client, which can explicitly select on file descriptors with precise timeouts and/or pause for graphics buffer swaps.
  • The programming language will be different from the server side, making it difficult to share libraries and move code back and forth. There are a couple of “language translators” that target JavaScript (Pyjamas for Python and GWT for Java) but this seems like a fragile approach.
  • Client-side storage options are limited. There are ways to store small stuff, but typically not hundreds of megabytes of 3D models and textures. I suppose you could do something crazy like install a small native web server on the client and access it via HTTP to localhost.

How to Decide

In the end, the decision of which platform to use will depend on a few factors:

  • How much does the application resemble typical desktop GUI software?
  • Are you continuously rendering a changing scene, or does the app sit and wait for input?
  • How much client-side storage is needed, and is client-side software installation going to be a problem?

Server Platforms

On the server side, not much has changed over the last few years. The first tool I’m going to reach for will still be Python, writing directly to the Linux POSIX APIs. Python has a rather nasty (though constant) performance penalty versus compiled code, but I can usually get around that by selectively translating critical sections into C.

(The only situation where this approach might fail would be a system that had to run arbitrary, hard-to-vectorize scripts very fast – something like a game engine simulating a large number of independent AI agents. In that case you might need to look into a more JIT-oriented platform like Java. Without going that far, though, I think one could get the performance penalty low enough that it’s reasonable just to throw more hardware at the problem.)

On Virtual Goods

Here is what I don’t get about companies that sell virtual goods: My “virtual credit balance” is just a number in your database. You want me to pay real money for you to change that number. It’s just a tiny piece of data that has no effect outside the virtual world. Why would I want to do that?

“But wait,” you say, “what about a bank account balance? Isn’t that just a tiny piece of data too? And people work very hard to change it!”

True, but that number represents the full faith and credit of the bank, from which I can withdraw real cash to exchange for real goods and services.

OK, decorating my virtual fishtank with a different background image is a kind of real service, in that it affects the pixels on my monitor (and yours, when you come visit on account of some made-up fishtank crisis). But I personally do not attach a very high value to this service.

Game analysis: A Tale In The Desert

I’ve been meaning to try A Tale In The Desert for a while now, since it’s one of the few fairly successful MMORPGs with a truly deep crafting system (the other being Star Wars: Galaxies, which won’t install for me anymore). I just downloaded and played the starting-island portion over about three hours. Unfortunately the UI is really annoying, so I won’t put more time into it.


  • The crafting system is indeed deep. Basically the entire game is about making stuff. You gather resources (modeled nicely and placed throughout the world) and build tools (also nicely modeled) to transform them into ever more complex products. The starting-island tutorial is basic but I read some fascinating things about advanced crafting on the wiki. For instance, you can grow grapes and make wine, but first you can try cross-breeding grape strains to get seeds with better growing properties, and during the growing season you take a series of actions to “tend” the field, which affect the final quality of the wine produced. (“Product quality” seems to be a good way to add depth to crafting – SW:G uses this too – but obviously it complicates the server since it can’t treat all instances of an item type as fungible).
  • No combat, although there is PvP in the form of competing for achievements. ATITD proves that there is actually a good middle ground between purely social “chat rooms” and purely combative PvE/PvP games.


  • It’s a grind.

The resource-gathering system is cumbersome. For example, early in the demo you have to gather slate to make stone blades. Slate is found dotted along shorelines, but there is no indication of exactly where a slate node is located, except for an icon that appears only when you are standing right on top of one. Gathering slate is a process of walking around randomly until the icon flashes up, then backtracking to find exactly where to gather it. There should be a visual display of some kind to show where the slate nodes are.

Another resource you need early on is grass. You can pick grass anywhere the ground is green, but each time you pick up a handful you have to wait a few seconds for the character to animate, then move aside and click again. The gathering animations should be interruptible. Or, if the goal is to limit the rate of gathering, then you should be able to “batch-gather” a large amount of the resource with a single mouse click, like WoW’s “make all” button. (edit: apparently there is a way to automate grass-gathering during times you are offline, but you first have to gather 2500 units by hand!)

Along the same lines, there is no automation for “processing” steps like cutting wood into boards. You sit there clicking, waiting for an animation, clicking, waiting for an animation, etc. Worse, the blade breaks every so often (for no apparent reason), sending you on another slate-finding quest. I can understand exposing players to these mechanics temporarily, but not on a long-term basis. You should reach a skill level that allows some automation or batch-processing, or at least invent a little point-and-click game (with decreasing difficulty as your skill increases) to control the frequency of success and failure.

  • ATITD uses its own widgets and controls for the GUI, but implements them poorly.

Here’s my take on implementing your own GUI widgets, as opposed to using the native widgets in Windows or OS X: it’s like asking for a bout with the heavyweight title-holder – the rewards are good if you succeed, but you’d better have the skills to back up your challenge.

If the native widgets aren’t ideal for your intended use, then go ahead and write your own custom widgets, but be careful and design them well (good examples are the custom GUIs in certain 3D apps like Lightwave and Houdini).

Just keep in mind it’s incredibly tough to correctly handle all of the corner cases and behaviors that users expect from modern widgets. Sure, your homemade button works when I click it, but what about keyboard focus? How does it move when I scale the containing window? Does it work well on a tablet PC with a touch screen? A PDA with a stylus? Does it handle Unicode text rendering? Right-to-left languages?

Widgets that get this stuff wrong feel unpolished. Like, for example, ATITD’s windows, which you can only resize by dragging a pixel-wide line somewhere in the interior, or the super-annoying pop-up notifications, which appear directly under the mouse cursor, preventing you from clicking nearby until you dismiss the pop-up.

It’s a shame the UI is so annoying; I was really looking forward to seeing some more advanced features. For instance, check out the ATITD wiki page about plant genomics!