Friday, February 26, 2010

Walkthrough of the Widow Pipeline Part 1

I decided to write an example walkthrough of the Project Widow pipeline, for those interested in what we are doing. I contemplated a video entry, but alas the lack of video-capture hardware prevented that from happening, so I went ahead and wrote this entry in text.

The walkthrough uses objects and data from Widow, shown in screenshots, so you will get to see the latest work we have done. Just remember that it has taken quite a long time to get to this point, and it was not easy. Much of the work of getting the 'Widow' pipeline functioning was done by trial and error; in fact, much of our effort so far has gone into pushing the limits: learning what works and what does not, figuring out the best way to do something, and ensuring that it works more than once. We started the project at full speed, with models and textures completed by the end of the first month; the time since then has been largely R+D and setup for animation. Contrary to what people think, setting up a flexible, custom pipeline, even for a short film, is quite a task. Making sure that a single articulated model has the right texture, shader and movement from concept to finish is hard work to keep track of. Some of our models still do not have a final shader look, at least one model still needs to complete the modeling phase, layout for the various sets has only just begun, animation has yet to get into gear, and we are only half done. We still need to get the renderfarm situation solid. Despite all the work that has yet to be completed, however, the work we have done so far has been astounding! The development of the multilayer OpenEXR display driver was a huge deal for this project; in fact, everything is rendered in EXR format in addition to TIFF and the framebuffer. The amount of work Eric Back has put into Mosaic has been astounding, and as of right now all new development is being done for Blender 2.50.

Despite all the frustration that has slowed down production, we are still diligently working to finish this.

Base Tools

Our base tools consist of Python, Blender, Aqsis, Mosaic and OpenEXR. Python itself is much of the pipeline, as most of our tools use it for one reason or another. The shader editors we use are largely written in Python, or use Python; Blender and Mosaic run on Python, and there are even some tools written for Aqsis that use Python. Later on, when we put all the video and audio together in Cinelerra, that too uses Python. Even the SVN has a Python hook attached to it to email members when files are changed. So Python stretches our pipeline from one end to the other, and with good reason: it is a very powerful scripting language and can be used anywhere for anything. There is no compiling of the code, it just runs, and if you are adept enough you can modify it and run it again without effort.
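As a tiny illustration of that glue, here is a sketch of the kind of commit-notification hook just mentioned. The real hook is not reproduced here, so the function name and message format are invented; a post-commit hook would call something like this and hand the result to a mailer.

```python
# Sketch of the message-building half of an SVN post-commit email hook.
# (Hypothetical: the actual Widow hook script is not public, so the
# function and formatting here are invented for illustration.)

def format_commit_mail(author, revision, changed_paths):
    """Build the plain-text body of a commit notification email."""
    lines = ["Revision %d committed by %s" % (revision, author),
             "",
             "Changed files:"]
    for path in changed_paths:
        lines.append("  " + path)
    return "\n".join(lines)

if __name__ == "__main__":
    body = format_commit_mail("cedric", 142,
                              ["assets/spider/spider_rig.blend",
                               "shaders/web_surface.sl"])
    print(body)
```

In a real hook, `svnlook` would supply the author, revision and changed paths, and the body would be sent with `smtplib`.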

Python 2.5.4 -

Blender 2.49b -

Mosaic 0.4.9 -

Aqsis 1.7.0 -

OpenEXR 1.6.1 -

These are the tools everyone on the team has to have installed in order to correctly open, edit and render in our pipeline. Below is a list of tools that we have used or continue to use.

Cinepaint 0.22-1-

Gimp 2.4.7-

Shaderman 0.7-

Shaderman.NEXT -

SLer -

Shrimp -

DrQueue -

postmosaic -

In addition to the listed software we also have a whole host of development environments to compile code, libraries of all types, miscellaneous utilities (code editors, for instance), server daemons and other little pieces of code to support these. Some of these are custom compiles, since some tools do not have binaries available for download; and in the case of the shader editors, both Shaderman.NEXT and SLer run on Python. PostMosaic is a shell script written for Bash, so it is not usable on Windows; it was also designed for the latest stable version of DrQueue, so it is not known whether the same issue would persist on a Windows-based renderfarm (one would wonder why anyone would build such a thing, but that possibility cannot be ruled out, as ridiculous and expensive as it may be). The version of Cinepaint we use is only available for Linux, as there is no working Windows version, and we do need Cinepaint to load and save OpenEXR files. GIMP is pretty much standard on whatever platform you are on. Shaderman 0.7 is a Windows-only build, while Shaderman.NEXT runs on Python and is thus cross-platform. All kinds of tools for various functions, and all available for Linux at the very least.

Asset Management.

First and foremost is asset control. Be it a file server via FTP, NFS, or distributed peer-to-peer, something that everyone can use has to be in place, otherwise there is no consistent structure to what gets worked on. In the final version of the Widow pipeline we are using SVN, hosted privately, with access for all members working on the project. Previously we had relied heavily on Dropbox for our asset control, but because of the amount of space we would need, we started working on something better. Arachnid was a series of scripts written for Unison by Paul Gregory; it did work, but it proved a bit too buggy for our needs. When we started to use SVN there was some concern over corruption of binary files (.blend files are binary files, for instance).

SVN gui (Rapidsvn) with remote file list

The initial idea was sparked while viewing the Hand Turkey Studios webcast during the 48hr Film Contest: they were using SVN for their very busy 48-hour pipeline. I had also helped with a last-minute re-render of 6 frames for the Animux "Prince Charming" preview animation, which used SVN as well. It seemed stable enough and something we could work with, and we eventually ended up hosting it ourselves courtesy of NOYX Studio, my friend's small home-based recording and editing studio. This also freed up Animux's network strictly for the renderfarm later on down the road.

Had everyone been in a single location instead of spread out over the world, a lot of our assets would have been easier to manage. As it is, I would say the amount of data transferred over the course of the production could easily reach 2 TB.

The asset tools will keep being upgraded to improve communication and the detail of information attached to our data. The content and rendering tools work very well in this pipeline, and the one thing we lacked in the beginning is starting to shape up, with SVN storage and, recently, some talks with a company that specializes in asset management.

More can be read here :


The Widow assets are actually small in number compared to some short films, and much of the modeling has been complete for many months. For our purposes here we are going to use the main subject, the spider model. This model was the first to be taken completely through modeling, rigging and texturing.

Closeup of spider with wireframe over shaded, complete with hairs!

You will notice it was made using quad polygons rather than triangles. This was a design choice rather than a matter of looks: the REYES pipeline is far more efficient at dicing quad polygons into micropolygons, so building in quads will actually decrease render times with SubD and displacement shaders applied. What happens is that a quad polygon with a SubD modifier becomes a patch with a control mesh around it. When this patch goes through the REYES pipeline it is cut into sub-patches, cut again, diced into micropolygons at the pixel level, then displaced, then shaded and then lit.
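A rough way to get a feel for dicing (this is only back-of-the-envelope arithmetic, not Aqsis's actual algorithm): ShadingRate is, loosely, the screen area in pixels covered by one micropolygon, so a patch's micropolygon count is roughly its raster area divided by the shading rate.

```python
# Illustrative estimate only: REYES renderers dice a patch until each
# micropolygon covers about ShadingRate pixels of screen area.

def estimate_micropolygons(raster_area_px, shading_rate=1.0):
    """Rough count of micropolygons a patch dices into."""
    return int(raster_area_px / shading_rate)

# A patch covering 200x200 pixels at the default ShadingRate of 1:
print(estimate_micropolygons(200 * 200))        # 40000
# A coarser ShadingRate of 4 dices to a quarter as many:
print(estimate_micropolygons(200 * 200, 4.0))   # 10000
```

This is why quads help: a quad subdivides cleanly into a grid of micropolygons, while triangles force more irregular splitting for the same screen area.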

For those who want to read up on REYES :

Model of the spider

Since this is an 8-legged creature, a custom rig needed to be made. The rig was built by Cedric Palle and has controls that can move the entire rig through 3D space, move parts of the body, or move the body while the legs stay in place. A very nice, workable rig. One of the issues we had was scale: the spider model is huge compared to the environment, so for many of the scenes a smaller version of the spider had to be used. What was ultimately done was to use a scaled-down version of the spider without the hairs, since they did not work well at a scale of 0.01. Removing them was a terrible price to pay, but we still have the model with the hairs on it for some extreme close-up shots.

Rig of the spider

One of the things stressed from the beginning was to use Aqsis as the preview renderer when modeling, because there were many instances where we either found bugs in Aqsis itself, or found modeling methods in Blender that look fine in the internal renderer but look different in Aqsis. There are things Blender's internal renderer can overlook or get away with because it is designed for Blender; when the scene is translated into another language, things sometimes don't come out right, and there were instances where the two produced very different results, so using Aqsis as the preview renderer was needed. In most cases, though, there was little difference between the two.

More of this can be read here :

Texturing and Imaging.

This part of the pipeline was not used as much as originally intended, but did find use here and there when needed. Much of the surfacing is done with Renderman shaders, but there have been several uses for textures, such as ground planes and the spider design. When we made textures they were first worked on in GIMP and later in Cinepaint. All of our textures are in TIFF format, simply because this is the only format that Aqsis can process into a MIP map.

Textures above in Blender

Textures in Cinepaint

There are 4 texture maps on this spider model: color, specularity and 2 different levels of displacement maps, one for long-to-mid shots and another for close-up details. These were first created in GIMP and later cleaned up in Cinepaint.


This portion of the pipeline was one of the most difficult to tackle, because we were unsure exactly how to set up multiple scenes with animated objects without making each shot 100+ MB in size. Linking gives us the ability to make multiple sets in a small amount of time, add in the objects that need to be animated and, one of the most important reasons, keep shader and lighting settings consistent from set to set. Our main environment is built from 3 main sets: one is the complete set, the next is most of the set, and the third is a set with many of the objects removed. Set design in this project is tricky: depending on the camera view, it is far more efficient to include only objects that are going to be seen. If we made one set for everything, many of the objects would never be seen at all; leaving them out reduces export time as well as disk space.

In this example I am using one of the production scene files (scene 002, to be exact), since from this point in the pipeline these scene files are what every other process will be based on. Because we are linking in objects we have more consistent shader visual continuity: there won't be really obvious repeating patterns, and the varying amounts of turbulence, noise and fractal patterns won't change from shot to shot. If we did not link in objects, each shot would have to be manually edited, and doing that over and over is just not practical, so linking takes care of much of the grunt work.

Scene 002 set which is entirely linked from the main scene file

We can link, or in some cases append, anything into the layout files; however, to keep the workflow consistent, having a custom file per scene prior to the work allows anyone who starts to animate to do so without fear of their work being altered by others. We also changed the various screens of the interface to accommodate this, labeling them something like 'BLENDER_LAYOUT', 'BLENDER_ANIM01', 'RMAN_SHADER', so that different people can work on the same file without altering the settings others have made, depending on the circumstance of course.

At this point there are two pipelines going; one is the modeling, layout and animation pipeline, which for the most part is contained in Blender, with some Renderman data associated with it but nothing really shader-heavy. In the beginning of the modeling phase Daniel made a ton of models for the environment, which I later shaded with the custom shaders. At some point these shaders and Mosaic shader fragments will be appended into the scene file that the rest of the scene files are linked to. This will reduce the copying and editing needed to a maximum of 3 main Blender sets. The same will be necessary for lighting.

In retrospect it could have been done a lot better; the planning was not fully worked out, but considering how much time had passed, we decided to just go with what we had and keep patching things together. Linking and appending give us a way of making time-critical adjustments or, in some cases, rebuilds. In the future this will be fully planned beforehand, but for 'Widow' anything done from this point on would have to work.

Nathan Vegdahl also helped me out with this during a conversation on IRC one night. For all the things we knew how to do with Blender, something like actually bringing a linked object into a scene was unknown to us, which of course is laughable now, but at the time it was a moment similar to when the light of Marcellus Wallace's briefcase renders one speechless in wonder.

More information can be seen here :
and here :


Since we are just now touching on this part of the pipeline there is very little to tell, but as we build up each scene and shot file we are linking the subject models in as well. At first we were not clear on how to bring in external files and edit them (for animation, say) without appending the data itself. It just so happens that during that time the Durian team released a short video dealing with this very subject, so within minutes of watching it the whole animation portion of the pipeline had been figured out in my head, and within 30 minutes it was on paper and being implemented in the project folder.

Since my primary task on this project is shading, lighting and effects TD (I just happen to do other tasks as well), I am not too skilled at the modeling or animation end, so our models and rigs have been made by others. When building a layout scene I generally just place the object in the scene, in the approximate area it is to be in. Whoever animates it has ultimate control over that file until it is considered final, upon which the scene is copied and renamed for shading and lighting, usually with a version number and 'shading' added to the name.
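As a hypothetical illustration of that copy-and-rename step (the naming scheme below is invented for the example, not our actual convention):

```python
# Invented naming helper showing the hand-off from animation to shading:
# the finalled animation file is duplicated under a name that carries a
# version number and a 'shading' tag, so lighting work never touches the
# animators' file.

def shading_copy_name(scene, version):
    """e.g. scene 2, version 1 -> 'sc002_shading_v01.blend'."""
    return "sc%03d_shading_v%02d.blend" % (scene, version)

print(shading_copy_name(2, 1))   # sc002_shading_v01.blend
```

The point is only that the rename is mechanical, so it can be scripted rather than typed by hand for every shot.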

During this entire time, effects animation is being done as well to accompany the primary animation. This can include anything from spider web movement and cloth simulation to particle work and ambient animation of environment objects. One such instance is a series of cables with a Lattice deformer added, which can be animated either by hand or with dynamic animation scripts during scenes where the train is going through and creating vibrations; this really just adds atmosphere to the scene instead of static models everywhere, bringing life to the shot.


A lot of R+D has gone into this area to see if some of the things we wanted to do were possible. Spider webs, for instance, are very rigid structures in complete form, but a broken web is a very flexible strand of an extremely lightweight material. One of the problems we had from the start was how to accurately replicate spider webs without a high poly-count cost. One idea was to use curves in Blender, which are exportable and renderable; the problem comes in animating the control points - THAT part of the Python API is not accessible to Mosaic, which is a good example of some of the limits of the Blender 2.4x series. So more research went into various methods for webs: polygons with hooks on some key vertices, a polygon web with a cloth modifier, curves for non-animated webbing... all these methods could work. Even texture maps on a plane would work. It all depends on what is going on in the scene, how close it is to the camera, whether it is static or moving, and so on. In all there are up to 12 different ways we can do spider web strands, and it will most likely take all of them at one point or another.

Another research project was cloth itself. There is a chance that we will use cloth objects for blowing paper - lots and lots of little itty bitty pieces of paper. This demanded some testing, and despite the fact that the "paper" did not really act like paper, it did prove the technical possibility of such a task. When this does get added to scenes, it is pretty certain that the exported data will be numerous and large. One thing to remember is that in Mosaic it is possible to export only one RIB of an object; no matter where it is, as long as the vertices do not move, only that one RIB is called in the frame RIB file. With something like cloth, however, each frame exports a cloth RIB file, since the file itself is a large collection of where the geometry is located in 3D space. So if there are 100 tiny cloth objects all blowing around for 30 seconds, that is 90,000 individual RIB files for that sequence, for the cloth objects alone. Working out the effects for these kinds of shots will require some effort, but it is possible.
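The arithmetic above is worth making explicit: static objects export one RIB ever, while deforming cloth exports one RIB per object per frame.

```python
# Per-frame export cost of deforming geometry in Mosaic: every cloth
# object writes a fresh RIB each frame, unlike a static object which
# is exported once and referenced thereafter.

def cloth_rib_count(num_objects, seconds, fps=30):
    """RIB files produced by deforming cloth over a sequence."""
    return num_objects * seconds * fps

# 100 tiny cloth objects blowing around for 30 seconds at 30 fps:
print(cloth_rib_count(100, 30))  # 90000
```

That file count (and the disk space behind it) is why the blowing-paper idea has to be budgeted per shot rather than sprinkled everywhere.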

An early web test done last summer

In all there won't be a whole lot of effects that anyone would consider "effects"; they are more like supportive environment elements, since the whole short is an effect in itself. Anywho, the effects are among the last things to modify during the shading and lighting phase. The reason is that for the most part some of the effects can be cheated, simply because of factors like DOF, lighting setups, distance to the camera, movement and so on. Any of these factors are only really visible during the lighting process; there might be times when a web is in shadow, so if it is off a little bit then so be it, regardless of whether we remove or add objects. If we really worried about every single thing being perfect, this would never be finished. It has to look good enough, not perfect.

You can read more of this here :

Shading and Lighting

This portion of the pipeline is the very reason we are doing the short film in the first place: to showcase the power of Renderman. In reality this is an ongoing process from start to finish, as much of the initial shader work was done in the early months of production. All that is left is to add the AOV code, and they are ready for production use. The way we wrote the shaders is also important: since much of the work is going to be done in Blender, the shader parameters were designed with Blender in mind, as opposed to average shader code. Some of the shaders will never see the light of day, others are a wonder in appearance; some are actually being built to use Blender paint data to apply a separate shader to the object, and others will not be seen much but still look great.

All of the custom shaders made for 'Widow' are designed to be used within Mosaic. When writing a Renderman shader it is not uncommon to have numerous parameters that adjust the way the shader looks or functions, and in most cases this is perfectly fine if you use the shader in something like Maya, where anything can control them thanks to its open API. Since Mosaic's shader fragment system uses Blender material functions to change these parameters, however, you are limited to that area. Luckily the Blender material system is very robust and offers quite a selection; the hard part is remembering which function of the Blender material controls which part of the Renderman shader, so some planning is needed. It is very possible in Renderman to have multiple functions that control different parts of the shader code - you can have, say, 3 turbulence functions that each change values in various other places. The problem is that if you want to change these parameters with the fragment system, you are limited to what it can connect to: if you use a Turbulence function in the Blender materials, you can have only one, so you need to find something else for the other functions or not use Mosaic's fragment system. For the most part the shader functions are not too complex, and the ones that are usually have only a handful linked to the Blender material.

Custom shader development and fragment assignment in Mosaic

Custom shader with preview rendering using Pqsl

The other part of shading that usually needs to be completed first is the material assignment to polygons, in order to use different shaders on a single object. This does not apply multiple shaders to a single mesh when exported - that is not possible according to the RiSpec (unless, in the case of Aqsis, you count layered shaders, which require special shader and RIB programming). What Mosaic does instead is split the mesh into sub-meshes, each with its shader reference added in the RIB file. So an object that uses multiple shaders becomes separate RIB files when exported. This operation is not visible to the user, though, and unless you are aware of how it works you would never know it happens.
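A toy sketch of that split (Mosaic's real exporter is far more involved, and the data layout here is invented): faces are grouped by material index, and each group then becomes its own sub-mesh with its own shader reference.

```python
# Invented illustration of the sub-mesh split described above: group a
# mesh's faces by material index so each group can be exported as its
# own RIB with a single shader attached.

def split_by_material(faces):
    """faces: list of (material_index, face) -> {material_index: [faces]}."""
    submeshes = {}
    for mat_index, face in faces:
        submeshes.setdefault(mat_index, []).append(face)
    return submeshes

faces = [(0, "f0"), (1, "f1"), (0, "f2"), (1, "f3")]
print(split_by_material(faces))  # {0: ['f0', 'f2'], 1: ['f1', 'f3']}
```

Each dictionary entry corresponds to one of the per-shader RIB files the user never sees.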

In addition to the custom Renderman shaders we are also using a lot of Mosaic's shaders, since many situations do not require a complex shader. Various parts of the train, for example, use the Mosaic surface shader because they won't be seen much at all and thus require nothing more than a plastic-type shader. The Mosaic lighting shader is the primary light code we use, since it tends to be one of the most complete light shaders seen for Renderman, next to those written over 10 years ago. During the course of production Eric added volumetric shading, called when the "Halo" setting of a light is switched on - something the artistic members had desired.

Lighting is the most crucial step of the production, not only from the art aspect but from a technical view as well; there are many industry-proven tricks we will be employing to reduce render times as well as make it appealing to the eye. Since Aqsis is not ray-trace capable, we will have to use shadow maps, and environment maps in some cases. However, since it is Renderman, much of the lighting in the environment can be rendered once beforehand and later baked into the scene over a spread of frames, also reducing render times. Custom lighting setups will need to be made, of course, but much of the general lighting will already have been rendered and baked for later use.

I also managed to find a Python programmer who wrote a script that adds a spotlight pointing at an empty object by default. This handy tool is something I always wanted to have; it makes lighting a much easier task when you only have to move two objects to get the spotlight to point exactly where you want. Having a script that just adds this, without setting up the light rig by hand, is a blessing. At this time it is only a simple script, but I imagine it could include a GUI someday, and I hope to work on it myself later.


This is where we will be finishing up each shot as it comes out of the final stages of production. These exported RIB file structures (which can be quite large, in the range of 500+ MB) will be uploaded to the remote renderfarm we have reserved, and DrQueue will distribute the frames across the 20+ node renderfarm (see below). We will be using a newer development version of the recently released Aqsis 1.6.0, which is now technically 1.7.0. Prior to this we were testing the development alpha version 1.5.0. The reason we use development builds rather than "production stable" releases like 1.6.0 is simply that we are a test bed for the developers and provide some great cases when their code is used in a production environment. So even though building a custom Animux rendernode Live-CD later on will require compiling a stable development build, it will be the very same one we use for our own preview rendering, thus maintaining a consistent rendering environment regardless of where the rendering takes place.

Aqsis rendering of our train model with full shaders and DOF, featured in the new Piqsl framebuffer interface

Since we have been using Aqsis from the start, our assets are designed around it. Subdivision surfaces, for instance, tend to look a little different depending on the renderer used - in our case, between the Blender internal renderer and Aqsis. So Aqsis was used during modeling, as well as in any testing of whether we could do what we wanted. During the R+D process we used Aqsis for the sole reason of seeing whether what we were trying to do was possible at all: finding out that curves do not translate animation during export, or that we can make very thin polygon strands that look just as good as a curve; testing render times for full scenes; testing DOF and motion blur on both objects and cameras; testing instancing methods between dupliverts and Array modifiers; testing a way to paint on objects to change shader values; viewing texture mapping results - the list goes on and on. We are not using Renderman much for animation previews, simply because it is far faster and easier for the animators to make 3D-view preview videos themselves than to teach them Renderman preview rendering. The shading and lighting process and beyond will be entirely rendered with Aqsis.

All of our frames are rendered through the new multilayer OpenEXR display driver.

Of all the achievements made in the past year, it was the multilayer OpenEXR display driver and its Mosaic counterpart that made the biggest impact. Adding this to both Aqsis and Mosaic made AOV rendering a simple task - unless, of course, you are using custom shaders, which require that you add AOV code to them, something that can be a challenge unless you have programming experience. OpenEXR has become an industry standard now, so having the ability to use an HDR file format through much of the pipeline was desired. Up to the point of editing the final video, the EXR format has been and will be used; the only other image format used is TIFF, and that is only for textures, which then get processed into the custom MIP map format. The ability to put all AOV layers into a single file was itself a huge contribution, not only to Blender but to the rest of the community. Because the driver was designed for Blender, when the file is added to a node it automatically makes output points for each layer, thus not requiring a large number of nodes for each AOV and making the whole process much easier. The only problem is that Blender has a unique "feature": it writes EXR files upside down, and any file read into Blender will also be upside down, requiring a flip node for each layer. We are not sure if this was intentional on the part of the Blender developers, and hope the next version, umm, "corrects" this. Irritation aside, making a template composite blend file is an easy workaround, and the rest of the process can be devoted to working with a shot, not with setup.
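The flip itself is trivial in principle: a vertically flipped image is just its scanlines in reverse order, which is all the Flip node does to each layer. A minimal stand-in on a nested-list "image":

```python
# A vertical image flip is just scanline reversal - the same operation
# Blender's Flip node performs on each EXR layer read in upside down.

def flip_vertical(rows):
    """Reverse scanline order: the top row becomes the bottom row."""
    return rows[::-1]

image = [[1, 2],   # top scanline
         [3, 4],
         [5, 6]]   # bottom scanline
print(flip_vertical(image))  # [[5, 6], [3, 4], [1, 2]]
```

Cheap as it is, doing it once per layer per frame is exactly the kind of repeated setup the template composite file exists to avoid.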

Cinepaint with a single multilayer openexr file opened

'Widow' has been a great test bed for the Aqsis developers. During the summer we tried to do everything possible to test every feature and optimization, such as the improvements to depth of field and motion blur, or the new Piqsl GUI. In fact, since I do my own regular builds of Aqsis, I have been taking advantage of some of the newest toys, like point lights that use a single map as opposed to the old standard of 6, thus reducing the number of files per light, per shot. We did encounter errors - sometimes Aqsis failed horribly - but with constant bug feedback the Aqsis developers were quick with fixes, and in turn we were able to keep testing.

The renderfarm operated by Animux went through initial testing this past summer as well, using DrQueue to manage Aqsis render nodes. Problems existed in the first series of tests, until we found out that the way Mosaic writes the render script caused some errors. So one of the Animux devs wrote a small shell script that edits this file so that DrQueue will correctly assign frames across the network. This tool is called 'postmosaic', and on the Animux release 'Tremor' it is included by default; all one needs to do is run the command in the shell. It was also released to the public for anyone to add to their own system.
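postmosaic itself is a Bash script; the sketch below only illustrates, in Python, the kind of rewrite it performs - replacing a fixed frame token in the exported render script with a per-node frame variable so DrQueue can hand out frames. The tokens and variable name here are invented for illustration, not postmosaic's actual substitution.

```python
# Hypothetical illustration of a postmosaic-style edit: turn a render
# script that bakes in one frame number into one parameterized by the
# queue's frame variable (token names invented for this example).

def patch_render_script(lines):
    """Replace a fixed frame token with a per-frame queue variable."""
    return [line.replace("##FRAME##", "$DRQUEUE_FRAME") for line in lines]

script = ["aqsis world_##FRAME##.rib"]
print(patch_render_script(script))  # ['aqsis world_$DRQUEUE_FRAME.rib']
```

Once the script is parameterized this way, each render node can be told its frame range and invoke the same command for every frame it owns.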

The Animux renderfarm

There are going to be anywhere from 7,000 to 9,000 frames to render, so when we do start that process there is going to be 20 times that amount in RIB files, shaders, textures, shadow maps and other data. In all, this entire short film could occupy a 500 GB volume, between the production files, the RIB exports, the frames, video and sound. Up to 5 times that amount of data may be transferred over the internet, considering that we are located all over the world. Eventually this will be archived onto a stack of DVDs, and then an external drive will be purchased and backed up onto as well.

More about this can be read here :

Aqsis Pipeline :


Once each scene is rendered, the elements are composited together in Blender. Since the multilayer EXR format was designed for exactly this case, the image itself contains the AOV layers we specify, such as color, specularity, normals, UV, alpha and so on. Since it is one file, there is no need to render out each AOV file per frame; it is all self-contained. The only issue is that Blender seems to read and write EXR files upside down - it is not clear whether this was on purpose or an oversight - but since Aqsis writes EXR files the correct way, we need to flip the layers before they are run through filters.

The reason for AOV rendering is that if anything doesn't look quite right, it is far easier to correct that particular layer than to render the entire frame sequence again. This can't remove issues caused by modeling mistakes or rendering artifacts, but it can reveal artifacts not normally seen in a final render, such as tiny grid-cracking spots that can be very difficult to spot initially. This is also where you first notice when AOV code is not present, in which case the layers will have a very different look than expected.

Cinepaint with bad AOV layers composited (including the UV layer which normally would not be visible)

Once all the layers are satisfactory, they will be written to an OpenEXR image sequence and placed into the final frame area, where they will be brought back into Blender for the final video output and the final stage of production.

Blender composite nodes of a single OpenEXR file

There has been the thought that we could use Blender 2.51 for the compositing of 'Widow', since new composite nodes have been added. As this would be at the tail end of production, it would not be a bad idea to start upgrading the pipeline to accommodate the next versions of our software, since it is quite possible that all the tools needed will have been at least somewhat updated by then.

More can be read about in here :


NOYX Studio is taking care of the sound creation and editing for the short, and recently I had the chance to listen to it for the first time. Since for the most part the drama of the scene drives the animation, sound is really a post process; in this case he happens to have some of the sounds needed for the short and is adept with sound.

Even in the sound department open source is being used, in the form of Ardour, which in my opinion is one of the best audio tools out there. It is something I really wish had existed 10 years ago, but sadly at the time there was no such thing, or if there was I was certainly not aware of it.

Ryan has already built up quite a sound library, and being a sound artist himself he has taken samples that ended up sounding nothing like the original, while others are damn near the original recording (with some EQ applied, of course).

This is also where the video will be edited; I will be hiking over to his place and putting together the samples, soundtrack and shots to bring everything to the final product.

NOYX Studio consists of a small network run entirely on Linux (64Studio for audio production and Ubuntu for the file server), so Ryan has been very helpful in many areas of the production, not to mention being the glue that holds this whole pipeline together.


Project Widow is still chugging away, though little of it has been seen so far, and what has been "released" has appeared in various places: forums, postings and what not. Consider it guerrilla marketing, haha. At this time our work is mainly getting all the scenes set up for animation so we can fully enter that phase, now that our initial work on the first three scenes turned out well. People wanted to know more about what was going on, and I wanted to talk a little tech, so this walk through is the result. In Part 2 I will dig deeper into some parts of the pipeline with screencasts.

The pipeline can run on any OS that has Blender, Mosaic, Aqsis, Cinepaint, GIMP, SVN access, and a web browser. Workstation, server, laptop, whatever: this pipeline has been tested and developed on all of them. No, it is not one complete package from start to finish, but since the design is modeled on professional studio pipelines it seemed important to explore those methods beyond the simple necessity of having to. Yes, the software is already considered old, except for Aqsis, but that is because these versions are stable enough to handle the tasks we want; much of this past summer went into finding bugs and developing new functions and tools. We are also using professional production methods and tricks, and in some cases the same tools, as with OpenEXR and Cinepaint, both of which were developed by studio employees with the intention of being used in those same places. Researching how those studios built their pipelines was also a great starting point; the available information is limited by trade secrets and proprietary software unavailable to the public, but it plants the idea, and if you know talented programmers you can develop your own tools. In our project we happened to know several.

The development team all use Linux, while our artists have been known to use both Windows and Linux, so access to the SVN server had to work across both operating systems. While it is easier to install the latest stable release on Windows, it is far easier to build the tools on Linux when the source code has changed (as with Aqsis), not to mention that some tools, such as Cinepaint and Shrimp, are only actively developed for Linux. The further down the pipeline you go, the less we use Windows and the more we rely on Linux: modeling was done mainly on Windows, for instance, but the rendering will rely entirely on Linux.

After the film is finished there is talk of releasing the production files to the public, as a sort of open movie. This would be a great way for people to really learn how the process works, considering that much of the pipeline has been pieced together from open source tools, and a lot of documentation has been written as well. What will not be included are the exported scenes, only the 7000-9000 OpenEXR composited frames (maybe). Of course this would be released as a torrent, since the eventual 8 GB of data this whole thing could take up would be a huge load on the SVN server. Considering that the combined downloads of the BRAT toolkit exceeded 1000, it seems a worthy effort, along with a special gift inside. I will also upload the video to the Internet Archive.
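The size figure is easy to sanity-check. Assuming roughly 1 MB per compressed multichannel EXR frame (a guess for illustration; actual size depends on resolution and channel count):

```python
# Back-of-the-envelope check on the ~8 GB figure for 7000-9000 frames.
# mb_per_frame is an assumption, not a measured value.
frames_low, frames_high = 7000, 9000
mb_per_frame = 1.0
gb_low = frames_low * mb_per_frame / 1024
gb_high = frames_high * mb_per_frame / 1024
print(round(gb_low, 1), "to", round(gb_high, 1), "GB")  # prints 6.8 to 8.8 GB
```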

In the future, though, there is one part of the pipeline that still needs to be fully worked out: a solid asset server, along with a web based production tracking and collaboration system. We would also be using the next versions of Blender, Mosaic and most likely Aqsis. However, the tools we are using right now are battle tested, proven stable and able to reproduce the same results, so we are sticking with these versions for this project; the programmers spent a LONG time working day and night to provide some kick ass software for us to use, and it would be a waste to let all that work go in vain. There has already been some experimentation with exporting to RIB format in Blender 2.50, but this was considered a personal research project and not 'production stable' at all.

None of this would be possible without Linux, specifically Debian and its offshoots such as 64Studio, Ubuntu and Animux.

I hope this walk through was not a complete bore; I did skim over a lot of things in this first part of the series.


  1. Very nice, great post! Obviously I've been a part of the development process but it's really great to see all the pieces laid out all at once.

    There are a couple of things that caught my eye that I need to comment on...

    1) If you're not already, be sure to enable "gzip RIBs" in MOSAIC's export options to compress the output RIBs and cut exported file size down.

    2) You may want to add custom parameters to MOSAIC's preset binary strings to add things like texture compression and render status for Aqsis. Texture compression could also cut down export size.

    3) You can slightly speed up Blender's compositor for layer combining by flipping only the final combined image instead of flipping each layer before combining.

    4) Be sure to thoroughly test your shaders' shadow+diffuse layers, as I found these very difficult to combine with colored and projection lighting. The diffuse layer can be used as is, but the shadow layer has to be divided by the diffuse layer so it can be multiplied with the diffuse layer later by the compositor. The problem is you can't divide two colors directly or you'll lose information, so you have to separate the color channels, then divide with clamping and finally recombine :-/

    Here's a RSL example from MOSAICSurface that may make more sense...

    float Dr = comp(layer_diffuse, 0);
    float Dg = comp(layer_diffuse, 1);
    float Db = comp(layer_diffuse, 2);
    float Sr = (Dr != 0 ? comp(layer_shadow, 0)/Dr : 0);
    float Sg = (Dg != 0 ? comp(layer_shadow, 1)/Dg : 0);
    float Sb = (Db != 0 ? comp(layer_shadow, 2)/Db : 0);
    layer_shadow = color(Sr, Sg, Sb);

    Anyway email me if you've got any questions ;-)

    Eric Back (WHiTeRaBBiT)

  2. That's great work you are doing!!

    Keep it up!

    PS: An open source node compositor is slowly coming to light; maybe you can take a look:

  3. Amazing stuff... I am going to report this to BlenderNation. Keep up the super work. The tutorial is detailed and very helpful. The result is so amazing that I am eagerly waiting for the movie. We could have a community sprint too, just like the Durian project.


  4. Many institutions limit access to their online information. Making this information available will be an asset to all.