Ivan Neulander



I am a Software Engineer at Google, where I have worked on a number of projects involving Computer Graphics, Computer Vision, Image Processing, and Deep Learning since 2013. A couple of them are described near the bottom of this page.

Prior to joining Google, I spent 15 years at Rhythm & Hues, specializing in efficient film-quality image synthesis through our proprietary photorealistic renderer. I (co-)authored most of the work on this page while at R&H.

This page presents some of my recent (and not-so-recent) work over the span of my career.

Professional Networking


LinkedIn Profile

Published/Presented Work (Peer-Reviewed)

This is material that I have either published or presented in a peer-reviewed forum.


SIGGRAPH 2013 Talk
Rendering Fur in "Life of Pi"

We explain how Rhythm & Hues's proprietary renderer was used to create a photorealistic, stereoscopic rendition of Richard Parker, the lifelike Bengal tiger featured in Ang Lee's Oscar-winning feature film,
Life of Pi.

recorded presentation

SIGGRAPH 2011 Talk
Adaptive Importance Sampling for Multi-Ray Gathering

We describe an unbiased noise reduction method for integrating incident radiance at a fixed position.

recorded presentation
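The benefit of importance-sampling a gather integral can be illustrated with a toy example. The sketch below is my own illustration, not the talk's adaptive method: it compares uniform and cosine-weighted hemisphere sampling of an assumed analytic radiance function. Both estimators are unbiased, but matching the sampling density to the cosine factor sharply reduces variance.

```python
import math, random

def radiance(theta, phi):
    """Toy incident radiance field (an assumption for illustration):
    a smooth sky that brightens toward the zenith."""
    return 1.0 + math.cos(theta)

def irradiance_uniform(n, rng):
    """Unbiased estimate of E = integral of L(w) cos(theta) dw over the
    hemisphere, using uniform solid-angle sampling (pdf = 1 / 2pi)."""
    samples = []
    for _ in range(n):
        cos_t = rng.random()                 # uniform in solid angle
        phi = 2.0 * math.pi * rng.random()
        samples.append(radiance(math.acos(cos_t), phi) * cos_t * 2.0 * math.pi)
    return samples

def irradiance_cosine(n, rng):
    """Same integral, importance-sampled with pdf = cos(theta) / pi,
    which cancels the cosine factor and lowers the variance."""
    samples = []
    for _ in range(n):
        cos_t = math.sqrt(rng.random())      # cosine-weighted direction
        phi = 2.0 * math.pi * rng.random()
        samples.append(radiance(math.acos(cos_t), phi) * math.pi)
    return samples

def mean_std(xs):
    m = sum(xs) / len(xs)
    return m, math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

rng = random.Random(0)
exact = 5.0 * math.pi / 3.0                  # analytic value for this L
mu_u, sd_u = mean_std(irradiance_uniform(50_000, rng))
mu_c, sd_c = mean_std(irradiance_cosine(50_000, rng))
print(f"exact={exact:.4f}  uniform={mu_u:.4f} (std {sd_u:.2f})  "
      f"cosine={mu_c:.4f} (std {sd_c:.2f})")
```

Adaptive schemes go further by steering samples toward where the measured radiance is actually high, rather than only toward the fixed cosine lobe.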

SIGGRAPH 2010 Talk
Fast Furry Ray Gathering

We present several techniques for efficiently gathering diffuse and specular reflection rays from strand-based fur geometry, recently put into production at Rhythm & Hues.
recorded presentation

SIGGRAPH 2009 Talk
Smoother Subsurface Scattering

We enhance the Langlands & Mertens smoothing technique for subsurface scattering to produce more accurate shadow boundaries and to allow continuous variation in the mean free path.

SIGGRAPH 2008 Talk
Pismo: Parallax-Interpolated Shadow Map Occlusion

Pismo is a technique for smoothly blending among multiple shadow maps created on the surface of an area light, in order to produce accurate soft shadows. The result rivals the precision of ray-traced shadows at a significantly lower cost.
one-page abstract
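For reference, the brute-force quantity that such a method approximates can be sketched in a few lines. The flatland example below is my own illustration and omits Pismo's shadow maps and parallax interpolation entirely: it estimates soft-shadow visibility by averaging occlusion tests against many samples on an area light, which is what a set of shadow maps distributed over the light surface stands in for.

```python
def seg_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (2D)."""
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    d1 = cross(q1, q2, p1); d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1); d4 = cross(p1, p2, q2)
    return d1*d2 < 0 and d3*d4 < 0

def soft_shadow(point, light_a, light_b, occ_a, occ_b, n=256):
    """Fraction of an area light (segment light_a-light_b) visible from
    `point` past a single occluder segment."""
    visible = 0
    for i in range(n):
        t = (i + 0.5) / n                    # stratified samples on the light
        s = (light_a[0]*(1-t) + light_b[0]*t,
             light_a[1]*(1-t) + light_b[1]*t)
        if not seg_intersect(point, s, occ_a, occ_b):
            visible += 1
    return visible / n

light = ((-1.0, 2.0), (1.0, 2.0))       # area light (a segment in 2D)
occ   = ((-0.5, 1.0), (0.5, 1.0))       # occluder between light and floor
for x in (0.0, 0.9, 2.0):               # umbra, penumbra, fully lit
    v = soft_shadow((x, 0.0), *light, *occ)
    print(f"receiver x={x:+.1f}  visibility={v:.2f}")
```

The receiver at x=0.0 sits in the umbra (visibility 0), x=2.0 is fully lit, and x=0.9 lands in the penumbra; Pismo's contribution is getting such smooth gradations from only a handful of shadow maps rather than hundreds of light samples.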

SIGGRAPH 2007 Sketch
Pixmotor: A Pixel Motion Integrator

Pixmotor is a 2D motion blur technique that incorporates several heuristics to fill holes created by hidden pixels that become visible during motion. It is used extensively at Rhythm & Hues as a much faster and more flexible alternative to standard 3D motion blur.

SIGGRAPH 2006 Course
The Chronicles of Narnia: The Lion, The Crowds and Rhythm & Hues

We discuss modelling, rigging, animation, dynamics, and rendering challenges encountered during the production of the feature film The Chronicles of Narnia: The Lion, the Witch and the Wardrobe.
course notes


SIGGRAPH 2006 Sketch
Markerless Facial Motion Capture using Texture Extraction and Nonlinear Optimization

We present a tracking technique that uses an occlusion-tested texture-space camera projection technique called primitex to generate an error metric, which drives rigging parameters. This method relies on the fact that a properly tracked model has a consistent texture projection over multiple frames of animation.
one-page abstract

SIGGRAPH 2005 Sketch
Image-Space Construction of Displaced Normal Maps

Here we present a purely image-based yet physically accurate method for constructing a detailed normal map of a displacement-mapped polygonal model. We accomplish this without tessellating the model or otherwise modifying the polygon mesh; the surface geometry is represented in texture space only.
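To give a flavor of working in image space, here is a minimal, generic sketch (my own illustration, not the method from the sketch above): it derives a normal map from a scalar displacement map by central differencing in texture space, with no mesh tessellation involved.

```python
import math

def normals_from_heightmap(h, scale=1.0):
    """Derive a per-texel normal map from a scalar displacement map `h`
    (a 2D list of heights), using central differences in texture space.
    `scale` converts height units to texel units."""
    rows, cols = len(h), len(h[0])
    normals = [[None] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            # Central differences, clamped at the texture borders.
            dx = (h[y][min(x+1, cols-1)] - h[y][max(x-1, 0)]) * 0.5 * scale
            dy = (h[min(y+1, rows-1)][x] - h[max(y-1, 0)][x]) * 0.5 * scale
            nx, ny, nz = -dx, -dy, 1.0       # gradient -> surface normal
            inv = 1.0 / math.sqrt(nx*nx + ny*ny + nz*nz)
            normals[y][x] = (nx*inv, ny*inv, nz*inv)
    return normals

# A tilted-plane displacement: height rises by 1 per texel in x,
# so interior normals should tilt away from +x by 45 degrees.
ramp = [[float(x) for x in range(4)] for _ in range(4)]
n = normals_from_heightmap(ramp)
print(n[1][1])
```

The actual method additionally accounts for the displaced positions themselves, which a plain height-gradient approximation like this one ignores.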


SIGGRAPH 2004 Sketch
Quick Image-Based Lighting of Hair

This sketch extends an inexpensive self-occlusion formula we developed earlier for point-lit fur, and uses it for approximating directional occlusion in the context of image-based lighting. The result is essentially real-time environment lighting of fur, including localized self-shadowing, without the need for any shadow maps.


SIGGRAPH 2003 Sketch
Image-Based Diffuse Lighting using Visibility Maps

This sketch introduces the visibility map: a 4-channel texture containing an average unoccluded surface direction ("bent normal") and an average visibility value over a hemisphere of directions around the surface normal ("ambient occlusion"). We use visibility maps for fast lookups into a cosine-filtered environment texture, in order to approximate environmental diffuse lighting.
one-page abstract
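The lookup this enables is cheap enough to fit in a few lines. The sketch below is a minimal illustration with an assumed analytic stand-in for the cosine-filtered environment texture (a real pipeline would sample a prefiltered map): one irradiance fetch along the bent normal, scaled by the ambient-occlusion value.

```python
def irradiance_env(direction):
    """Stand-in for a cosine-filtered (irradiance) environment texture:
    an analytic gradient sky, bright above and dark below. This function
    is an assumption for illustration only."""
    return 0.5 + 0.5 * direction[2]          # depends on the z (up) component

def diffuse_from_visibility_map(bent_normal, ambient_occlusion):
    """Approximate environmental diffuse lighting from a visibility map:
    a single irradiance lookup along the bent normal, scaled by the
    ambient-occlusion term -- no per-shade ray casting."""
    return ambient_occlusion * irradiance_env(bent_normal)

# An unoccluded texel facing straight up, versus a texel in a crevice
# whose average open direction leans sideways and sees 30% of the sky:
open_texel    = diffuse_from_visibility_map((0.0, 0.0, 1.0), 1.0)
crevice_texel = diffuse_from_visibility_map((0.8, 0.0, 0.6), 0.3)
print(open_texel, crevice_texel)
```

The crevice texel comes out much darker both because its bent normal points toward a dimmer part of the environment and because its occlusion term attenuates the result.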


SIGGRAPH 2001 Sketch
Grooming and Rendering Cats and Dogs

This was my first sketch at Rhythm & Hues. My colleagues discuss the hair modelling and animation techniques used in the movie Cats & Dogs, while I cover the rendering.

Graphics Interface 1998 Paper
Rendering Generalized Cylinders with Paintstrokes

This paper is based on my Master's thesis work on a rendering primitive for generalized cylinders, the paintstroke. It lays the foundation for my subsequent hair rendering work.

SIGGRAPH 1997 Sketch
Rendering with Paintstrokes

This is my first SIGGRAPH sketch, in which I discuss my Master's research on the paintstroke primitive for efficiently rendering generalized cylinders. It serves as the basis for my subsequent implementation of the hair rendering pipeline at Rhythm & Hues.

Master of Science Thesis, 1997
Rendering Generalized Cylinders using the A-Buffer

This is my Master of Science thesis from the University of Toronto. The subject is a primitive for efficiently rendering antialiased, semitransparent generalized cylinders using the A-Buffer. This primitive uses view-adaptive dynamic tessellation to minimize the polygon count for a given level of detail.

Published/Presented Work (Non-Peer-Reviewed)

This material was published or presented in non-peer-reviewed forums.


arXiv (2015)
DeepStereo: Learning to Predict New Views from the World's Imagery

A Deep Learning technique to synthesize novel camera views from a sparse set of inputs. This can be used to build stereoscopic views from mono imagery, and also to smooth out animations by interpolating frames.


Internal Presentation (2014)
Dynamic Painterly Droplets

Techniques for generating different styles of animated painterly renderings from still images. This incorporates a multi-scale approach for placing paint droplets onto a virtual canvas, and a way to animate the droplets time-coherently.