Sunday, December 26, 2010

Merry Christmas! A gift from us... to you.



Well, it is that time of year, and this year I decided to release the Widow Pipeline as a present to the Blender and Renderman communities. Yes, it is true you can get all of these tools on their own sites, however some of them are not so well known. So now everyone can use the same tools we are using for the short film "The 10:15" (aka Project Widow).

Widow Pipeline I for Linux tar archive

(of course this was not without its own technical problems... torrents did not work, so now it is just a single tar archive... sigh)

This release is built for Linux, since much of the team uses that operating system, and some of the tools are built for Linux and would require some work to build on Windows or Mac OS. Shrimp, for instance, is modified specifically for this short, and WidowShade is a heavily modified version of ShaderLink that took close to several months to complete before Shrimp was brought into the pipeline.

This release is similar in fashion to the BRAT release in 2009, however the difference is that where BRAT was meant for a more general installation on multiple operating systems, with a wide variety of tools and example files, this release is more specific and designed to fulfill a single role: to be the basis of an open source production pipeline. This release also uses older tools rather than bleeding edge, simply because of stability and the fact that at the time of this writing there is still much to develop before there is a stable pipeline using Blender 2.5x and RIBMosaic. This is the result of over a year of work developing a stable pipeline so that “Project Widow” could be completed, and the hope is that people will be able to use this as a starting point for their own projects, or use it to learn Renderman with Blender.

Artist Tools

Aqsis
Blender 2.49b (both 32 and 64 bit)
RIBMosaic
DJV
Ramen
Cinepaint
Shrimp (Widow build)
WidowShade

Blender Scripts and files

Conspot
RIB_Lib Database (blend file with entire RSL LIB for linking)

Shaders

Entire shader collection used for “Project Widow” (as of Sep 2010)
Surface, Displacement, Imager, Volume, Light and Diagnostic shaders


Pipeline Tools

Shotgun API 3
WVTU (Widow Version Thumbnail Uploader)
WidVerTU 0.3 (dev version of GUI WVTU)
Postmosaic
BlenderAid
DrQueue

Libraries

RSL Lib 0.1
OpenColorIO 0.6.1 (Sony Imageworks color management library)



There is also something extra to find in this archive. It's your present: a pretty complete collection of pre-production images, as well as R+D renders and test beauty shots. There are also some screw-up images, some test blend files, and older images from the past to show just how much has changed in the past 5 years.

There is also the BlenderCon 2010 technical paper that I wrote, considering that this very pipeline is the one described in the paper.

Now to get back to the Star Wars marathon on Spike TV.

Merry Christmas!

Wednesday, December 08, 2010

RIBMosaic : Now part of Aqsis



HUH!?!?!

I am sure some of you are thinking this. Yes, it is true. RIBMosaic is now a part of the Aqsis project, which is the current official "home" of the new add-on.

The back story goes like this. Eric Back emailed a certain select few last month (around the time of the last post here) and told us that he would no longer be developing the recoded Blender plugin. He was "orphaning" the code as it was on Sourceforge, and anyone could come and pick it up. Knowing that RIBMosaic was important for Aqsis, it was decided that the developers would adopt the code as it stood and continue development with the intention of bringing Aqsis closer to Blender.

From the developer mailing list :

Eric has given his agreement that this should be the 'home' of RIB Mosaic now, and understands that we will focus on the integration of Aqsis specifically into Blender, while endeavouring to ensure that nothing we do will intentionally preclude support for other compliant renderers. We as a team will probably not be able to focus any effort on supporting other compliant renderers, beyond possibly testing regularly to ensure that existing functionality still works. Of course, we will assist and encourage anyone who wants to work on support for other renderers should they wish to do so within its new project space.

Cheers

Paul


In a way this would be a "RIBMosaic for Aqsis", while other developers could make a RIBMosaic for say 3Delight, or AIR or even PRMan.

So RIBMosaic is now a part of Aqsis and will be packaged with the renderer from now on. A lot still has to be completed and some serious testing has to be done in order to accomplish this in a timely manner. In order for Project Widow to continue, the tools need to be upgraded as well; it was bound to happen and now is the time.

http://sourceforge.net/apps/mediawiki/ribmosaic

Thank you, Eric, for bringing this idea up to full steam; without your efforts and help it's hard to imagine being this far along by now.

Sunday, November 07, 2010

Pixar and Microsoft - Cloud rendering with Azure





 http://blog.seattlepi.com/microsoft/archives/226427.asp

This news is a little late, overshadowed by the Blender Conference this year, but it is interesting. Pixar and Microsoft have paired up, with Renderman being shown as a proof of concept for Microsoft's cloud computing service "Azure". This is the first time an RiSpec renderer has been used in such an environment, which is actually pretty tough to do, since there are usually quite a few assets and files required to render any given frame, as opposed to, say, Blender or Maya scenes, which are usually a single file with all the information needed to render.

For instance, BURP and Renderfarm.fi are possible because Blender animation projects can be packed into a single file; these files can be distributed across the internet to multiple render slaves, rendered, and the images sent back to a single folder of frames.

Renderman is a bit different: shaders, image maps, RIB archives, even header files and any other on-the-fly processed imagery, brick maps or point clouds can be scattered across multiple file paths, and even with relative paths things can be lost if you are not careful. The number of exported files is quite large, often reaching into the thousands, and the more data per frame, the more that number increases; shadow map passes alone can reach into the tens of thousands depending on the number of frames and lights. This makes it very difficult to do distributed rendering across the internet using Renderman.
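To give an idea of the scale of the problem, here is a small, purely hypothetical Python sketch that counts the external files a single frame RIB references. The extensions and the way assets are matched are assumptions for illustration, not how any particular pipeline actually does it.

    # Rough sketch: estimate how many external files one frame RIB pulls in.
    # Assumes assets appear as quoted paths in the RIB (ReadArchive calls,
    # texture and shadow map parameters, point clouds); the extension list
    # is a guess, adjust it for your own renderer and pipeline.
    import re
    import sys

    ASSET_EXT = (".rib", ".tex", ".tif", ".tiff", ".ptc", ".sm", ".z")

    def frame_dependencies(rib_path):
        deps = set()
        quoted = re.compile(r'"([^"]+)"')
        with open(rib_path) as rib:
            for line in rib:
                for token in quoted.findall(line):
                    if token.lower().endswith(ASSET_EXT):
                        deps.add(token)
        return deps

    if __name__ == "__main__":
        deps = frame_dependencies(sys.argv[1])   # e.g. shot01_f0001.rib
        print("%d external files referenced" % len(deps))

Multiply that count by the number of frames and passes and it is easy to see why shipping a Renderman job across the internet is so much more awkward than shipping a single packed .blend file.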

Another factor that makes distributed rendering unfavorable compared to a renderfarm is CPU architecture. The differences between processor types will alter the output of procedural texturing because of the way the shader is compiled: subtle changes show up in fractal pattern generation, noise, turbulence, anything that generates a procedural pattern. In other words, an image rendered with Aqsis on an AMD64 dual-core CPU will be slightly different from a render of the same file with Aqsis on a SPARC or MIPS processor. The difference will not really be noticeable on still frames; even side by side the images can appear to be the same. However, when the frames are playing at 30 fps, these differences will be seen and the patterns will appear to flicker over time. This is why renderfarms are usually composed of identical hardware, or at the very least the same type of CPU, which eliminates the worry of those artifacts.

However it is not impossible and if anyone can prove it, it's Pixar and they did.

Pixar took to the stage at the Professional Developers Conference 2010 to demonstrate this potentially powerful ability to reduce the overhead cost of hardware and energy by using remote rendering instead. Cloud computing is considered the future of computing; whether that happens or not is anyone's guess, but in many instances where number crunching at a massive scale is needed, cloud computing could be one of the best options available to smaller studios that can't afford the very large amount of money to build an effective renderfarm. Spending a fraction of that on cloud computing services is favorable in that sense, so it is a very welcome sight to see that it is possible with Renderman.




As you can see here, the payoff is incredible.




So this can open up new doors for some on what can be done with Renderman. In the past I myself have argued that cloud computing is not really practical when using something like Aqsis to render, whereas with Blender, Maya, 3DS Max and so on it actually is favorable if the ability is there. I admit that I had not researched the subject fully; now I have, and my opinion has changed.

Of course this is Pixar's work, so there is a large degree of engineering involved; this is not just a bunch of college kids doing it for a class project or in hopes of making it big. To do this yourself would require a fair bit of work, programming and patience, however this news of Renderman on Azure is a nice breath of fresh air and gives us hope and inspiration to replicate it ourselves. While it may require investing money into a service, be it Azure, Google or Amazon, in the end it may be more cost effective than building your own renderfarm. On-demand rendering services are on the rise.

It is funny, though, given that Pixar was once owned by Steve Jobs, that Microsoft was used to demonstrate this. It also proves to the visual effects world that Microsoft is not as useless in the number-crunching arena as it once was.

Monday, November 01, 2010

Aqsis new core prototype, interactive viewer!

So here it is - words cannot describe what you are about to see, you have to watch this for yourselves.



From the Aqsis development blog :

"This blog has been pretty quiet for a while, but aqsis development has been coming along behind the scenes. During the aqsis-1.6 development last year I focussed a lot on making aqsis faster. After working on this for a while it became obvious that some major changes were needed for the code to be really fast. In particular, the aqsis sampler code is geared toward dealing with single micropolygons at a time, but it seems better for the unit of allocation and sampling to be the micropolygon grid as a whole. This was just one of several far-reaching code changes and cleanups which seemed like a good idea, so we decided that the time was right for a rewrite of the renderer core. Broadly speaking, the goals are the following:

* Speed. Simple operations should be fast, while complex operations should be possible. The presence of advanced features shouldn't cause undue slowdowns when they are disabled.
* Quality. Speed is good, but not at the cost of quality. Any speed/quality trade offs should be under user control, and default settings should avoid damaging quality in typical use cases.
* Simplicity. This is about the code - the old code has a lot of accumulated wisdom, but in many places it's complex and hard to follow. Hopefully hindsight will lead us toward a simpler implementation.

Fast forward to the better part of a year later - development has been steady and we've finally got something we think is worth showing. With Leon heading off to the Blender conference, I thought an interactive demo might even be doable and as a result I'm proud to present the following screencast.



There's several important features that I've yet to implement, including such basic things as transparency, but as the TODO file in the git repository indicates, I'm getting there. The next feature on the list is to fix depth of field and motion blur sampling which were temporarily disabled when implementing bucket rendering.

Edit: I realized I should have acknowledged Sascha Fricke for his blender-2.5 to RenderMan exporter script which was used by Paul Gregory in exporting the last example from blender. Thanks guys!"
Posted by Chris Foster

"Just to clarify, this is not a demonstration of an interactive viewer for RIB editing. This is the newly under development main core. So, what you’re seeing there is the actual renderer, rendering microplygons (Reyes) at 40 fps. We’re just displaying it in an interactive framebuffer, rather than bucket at a time, to show how fast it really is. It’s not using GPU, purely CPU, exactly what you’ll get when you render with Aqsis.
I should also point out that it’s not complete yet, this is the first demonstrable stage of the core re-engineer, there’s more still to go in before it’s even up to current Aqsis feature levels, but rest assured, when it’s finished, this is going to be fast."

~ Paul Gregory

Wednesday, October 27, 2010

BlenderCon2010 "We have such sights to show you..."

All rights are reserved by Clive Barker and/or Bent Dress Productions.

The Aqsis team has been very hard at work giving the renderer a reboot of sorts, with the building of the new core and all, which has not really been seen in the public eye... until now. Well, almost.

Leon Atkinson (Renderguy) will be at BlenderCon this year and will be showing off some of the latest exciting developments. Yes, showing it off on screen for everyone to witness, because there is no way to fully explain the details in words. The Aqsis team reports that a demo is under development, being prepared specifically for BlenderCon to announce the new plans for Aqsis and to show how beneficial they could be to Blender users.

The very core of Aqsis is being re-engineered with a focus on speed; it is at the prototype stage now, but is functional enough to form the basis of the BCon demo. There will also be interface changes, such as the migration away from FLTK to Qt4, which is actually pretty neat since a lot of the pipeline tools are already in Qt4 as well, or in the process of switching. Other changes, like multithreading, are very recent additions to the new core.

They are preparing a more detailed press release for after the conference, so keep your eyes open for that. If anyone happens to be at BlenderCon and is able to record a video, please let us know so we can post it here.

This also coincides with a point release, Aqsis 1.6.1, which will mainly be a bug fix release.

One of the rumors going around the underworld is that Larry Gritz is trying to get Ptex implemented in OIIO (OpenImageIO), which Chris Foster has expressed great interest in using for Aqsis; thus Aqsis would get Ptex for free. That won't be until later though, possibly in version 1.8 or the fabled Aqsis 2.0.

Piece by piece the developers are building up towards a very powerful rendering application.

On the other end of the conference spectrum is the paper "Blender to Renderman: Building a Pipeline for Production Use", written by myself. I had originally been planned in for a speaker spot, however due to complications I had to back out and asked Ton if submitting a paper would be OK since the topic was pretty much the same (possibly even worded better on paper than in spoken word, haha). So... my first publication of sorts, and it is appearing on this year's BlenderCon page.

Strangely enough, this year's Halloween also falls during BlenderCon, so one can only imagine what will go on during the weekend. Wouldn't it be cool to see the BlenderCon attendees dressed up as zombies walking the streets of Amsterdam?


Thursday, September 30, 2010

Sintel now available



Congrats to the Blender Foundation! I have been awaiting this film for quite some time, and even had the chance to chat every now and then with some of the guys. During the course of their production, those of us working on "Project Widow" took note of some of the methods the BF used, in particular Blender-Aid, so in a way Sintel has been a good source of reference and information; during the modeling sprint, for instance, we used some of the models for testing with Renderman, and aside from the texture format change the export was pretty solid, even from 2.49. Nathan in particular has been a real joy to chat with, spending time in the Aqsis chat room, and in one case he showed Paul Gregory and me a video preview of his basic exporter using a rigged Sintel model. Of course there have been a lot of behind-the-scenes talks between developers about the RenderAPI, something that we have been encouraging for some time now.

I guess it is kind of strange that a few of us have had a small connection to this film, strange but cool at the same time.

Grats guys! I enjoyed it greatly!

http://www.blendernation.com/2010/09/30/sintel-now-available-for-download/

Monday, August 09, 2010

RIBMosaic GUI update

By popular demand, Eric Back (Whiterabbit) has posted a very detailed update about his development on the next generation of RIBMosaic.

What's done...
- all permanent panels in the UI are complete
- the pipeline manager and pipeline driven UI panels are working
- the pipeline link system is complete (these are like tokens in the old mosaic but can now connect almost anything and be used anywhere)
- the translating of slmeta shader files to pipeline panels is working (this converts K3D shader xml files to panels in Blender's interface)
- the core of the export manager is working and export of command scripts is just finished

A lot of this is pulled from the message that Eric posted.... 


Description from left to right:
- The first window is the render properties window showing the RenderMan Passes and Export Options panels. Of note is the partially visible link command in the Output property "Renders/@[EVA:.current_frame:####]@@[EVA:.pass_layer:]@.tif". These links pass information from the exporter directly into that string and can be used in any text field. They can also pass values from UI buttons directly into code in pipelines, allowing designers to build their own interfaces and pass the selections directly into RIB, RSL or embedded Python code. (A rough sketch of how such a link string might be expanded appears after this list.)

- The second window is the render properties window showing the Pipeline Manager panel and an example user pipeline panel called "AQSIS - Layered Display". The Pipeline Manager panel is available in all properties windows and allows you to load and remove pipeline files as well as manage their content. Pipelines are XML files containing descriptions of UI panels with RIB, RSL, Python or shell script associated with them. The idea is that a pipeline designer would build any panels they need along with the underlying RIB and shaders to provide a complete rendering pipeline for specific features (such as Aqsis's layered EXR display). The artists simply load the pipeline to add the functionality they want.
The example panel shown is a test using every control RIBMOSAIC supports and is completely designed and drawn from XML. All controls are connected to Blender properties so they can be animated. The buttons can trigger pipeline scripts and show message popups (letting you give the user feedback). There is also a file browser button for quickly putting file paths in text fields.

- The third window is the scene properties window showing the Export Options panel. These panels are also available in almost all properties windows and show options relevant to where they are. In this case the scene export options have low-level control over things like export path, search paths and global archive options. Note that the export path is also using links with Python eval expressions (to let the user do fancy things like make the export folder use the name of the blend or scene). Also worth noting is that the pipeline manager is showing different categories and the panels loaded at the bottom are using different icons in the left corner. Pipelines can create three types of panels, "Utility", "Shader" and "Command", each showing in a different category list in the pipeline manager and each with their own icon. Utility panels will be for adding RenderMan features to the UI and internally generate RIB code on the datablock they are on. Shader panels represent shaders, are created from slmeta files and also internally generate RIB code on the datablock they are on. Command panels represent shell scripts and contain code that's executed on the command line. This is an interesting feature as it puts complete control in the hands of the pipeline designer, allowing them to do cool things like call renderfarm managers through MOSAIC.

- The fourth window is the material properties window and shows another Export Options panel and a shader panel. You'll notice several of Blender's standard material controls have been placed in the export options. The only controls that are kept from Blender's render pipeline are ones that are directly exported, interact with Blender's 3D view or can be useful to pass to RenderMan in some way. The AQSIS - k3d_greenmarble panel was completely generated from a slmeta file (these will soon be created by MOSAIC but also by K3D and the Shrimp shader editor). Like everywhere else, all controls can be animated. The little buttons with the chain icon can be used to "link" the Blender data path from another control to this shader parameter. In Blender you can right click on any control and copy its data path, then paste that value as a link in MOSAIC. This lets you link the value from any control in the same window to that parameter, for instance linking Blender's lamp "Energy" to a light shader's intensity parameter. Data path links can also contain math or even eval expressions to manipulate the value under artist control (no need to code).

- The fifth window is the object properties window. The only thing to note here is that the export options have CSG controls. RIBMOSAIC's new exporter will support parent/child archive structures, so it will use this to handle CSG. For instance you may set a parent to "Union" and then all children will be joined together, then make some sub-children "Intersection" and cut those children out of the union, etc. Also, I've already set up motion blur to support sub-frames, although this hasn't been added to the API yet; hopefully it will be by the time I'm coding it.

- The sixth window is the object data properties window. In the export options notice a selector for primitive type. This will let the user manually specify a primitive type depending on the underlying data type. For instance, on a mesh a user can specify PointsPolygons, SubdivisionMesh, Curves or Points. If set to auto select, MOSAIC will attempt to guess (such as exporting a SubdivisionMesh if a Subd modifier is on the object, or Points if halos are enabled, etc.). The user will also have control over exported primitive variables, and I plan to make pipelines for users to add even more. Lastly, levels of detail will allow users to specify different datablocks for levels based on distance.
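(For the curious, here is a purely speculative Python sketch of how a link string like the one shown in the first window might get expanded at export time. The token grammar and the value names are my own guess for illustration, not RIBMosaic's actual code.)

    # Illustrative guess at expanding a pipeline "link" string such as
    #   "Renders/@[EVA:.current_frame:####]@@[EVA:.pass_layer:]@.tif"
    # The @[EVA:name:format]@ grammar and the value names are assumptions.
    import re

    LINK = re.compile(r"@\[EVA:([^:\]]+):([^\]]*)\]@")

    def expand_links(text, values):
        def repl(match):
            value = values[match.group(1)]
            fmt = match.group(2)
            if fmt and set(fmt) == {"#"}:        # "####" style zero padding
                return str(value).zfill(len(fmt))
            return str(value)
        return LINK.sub(repl, text)

    print(expand_links(
        "Renders/@[EVA:.current_frame:####]@@[EVA:.pass_layer:]@.tif",
        {".current_frame": 42, ".pass_layer": "_beauty"}))
    # -> Renders/0042_beauty.tif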


This last shot is showing a "hard coded" test RIB rendering (since it can't export scenes yet)...

In the top left background is the XML code for the loaded pipeline in Blender's text editor. The top right is showing the current console output. The bottom left is showing Aqsis's framebuffer output. The bottom right is showing Blender's render result automatically using the render from Aqsis.

This is exciting news, and a lot of people have been patiently waiting for word over the past 8 months. Plus there are even screenshots, so we can drool over our keyboards.

Well done Eric!

Thursday, August 05, 2010

Ptex support added to Blender!

This is a brand new commit to the Blender 2.5x code, actually, something that has not been formally announced nor fully worked out in functionality, and there are probably bugs and optimizations still needed...
This is ptex, of course. The implementation isn't complete, but here's what working for now:

  • Per-face ptex resolution. Each face gets a U resolution and a V resolution based on its area (relative to other faces) and how stretched it is (i.e. a thin tall face should have a lower U resolution and a higher V resolution.)
  • Automatic generation of ptexes. This step is somewhat analogous to unwrapping your mesh, except instead of choosing UV coordinates, it's setting the default ptex resolution for each face. There's a UI control for texel density.
  • "Vertex" painting. That's a bit of a misnomer now, of course, but you can paint more or less normally. (Naturally I've broken some vpaint features like Blur in the process, but it'll all be restored.)

Note: ptex is designed mainly to work on quads. Triangles and other faces are split up into quads in the same manner as Catmull-Clark. I've coded it so that both quads and tris work (although there are some mapping issues with vpaint still), however quads are the "fast case"; for this reason I've applied one level of subsurf to Suzanne in this example.

A partial TODO list:

  • Add UI for setting individual faces' resolutions
  • Integrate the open source ptex library for loading and saving ptex files
  • Add upsampling/downsampling so that changes aren't lost when changing ptex resolution
  • Change default ptex to a flat color. The random noise is just for testing, of course :)
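To make the per-face sizing idea at the top of that list a bit more concrete, here is a rough Python sketch of how U and V resolutions could be picked from a quad's edge lengths and a global texel density. This is my own guess for illustration, not code from the actual patch; the power-of-two snapping and the density control are assumptions.

    # Pick a per-face ptex resolution from edge lengths and a texel density.
    # A thin, tall face ends up with a low U resolution and a higher V one.
    import math

    def face_ptex_res(u_len, v_len, texels_per_unit=64, max_res=1024):
        def snap(edge):
            texels = max(1.0, edge * texels_per_unit)
            return min(max_res, 2 ** int(round(math.log(texels, 2))))
        return snap(u_len), snap(v_len)

    print(face_ptex_res(0.05, 0.8))   # -> (4, 64) for a thin, tall quad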
Very, very cool to hear! Congrats to the Blender devs for adding this; now all that is needed is for more render engines to support Ptex as well. I would imagine that within the year there will be a sweeping motion to implement this across the spectrum of graphics programming.

Wednesday, July 28, 2010

ILM and Sony Imageworks release Alembic OSS

"Alembic is an open computer graphics interchange framework. Alembic distills complex, animated, scenes into non-procedural, application-independent, baked geometric results. This distillation of scenes into baked geometry is exactly analogous to the distillation of lighting and rendering scenes into rendered image data.

Alembic is focused on efficiently storing the computed results of complex procedural geometric constructions. It is very specifically NOT concerned with storing the complex dependency graph of procedural tools used to create the computed results. For example, Alembic will efficiently store the animated vertex positions and animated transforms that result from an arbitrarily complex animation and simulation process, one which could involve enveloping, corrective shapes, volume-preserving simulations, cloth and flesh simulations, and so on. Alembic will not attempt to store a representation of the network of computations (rigs, basically) which were required to produce the final, animated vertex positions and animated transforms."
 



http://code.google.com/p/alembic/

- quoted from the Google code project page

ILM and Imageworks truly are our best friends in the professional visual effects industry. From what is posted, this is intended to bake data into a format that can be read later on down the pipeline, such as a cloth simulation baked into a single file rather than the thousands of small files of cloth sim data, as is the case with Blender; or baking an animated character that will later be used for cloth simulation. Using the same format, these scenes can then be baked to be used later for lighting and rendering. There are limits though: it cannot store network representations, such as bone rigs, and it is not meant to replace applications' native scene files. It is another bake format intended to smooth out pipeline issues, like file formats between applications, which have been an issue in the 3D industry for as long as it has been around. Trying to get Maya files into Blender is a pain and usually involves exporting an object into the .OBJ format, which cannot store animation data, so having the ability to bake animated scenes that can be read in another application is a HUGE leap! Imagine the ability to use Maya scenes in Blender and then render in 3Delight, or, for instance, modeling something in Blender, animating it in Maya, using RealFlow to make a fluid simulation, then rendering it in Pixie.

While it is too early to say where this will go, who will adopt it and how long it will take, I can say that the idea is great and I hope to see it evolve. Remember, in less than 5 years OpenEXR went from a small user base to being included by default in most Linux distros and 3D applications, both open source and commercial, as well as proprietary in-house tools.

Sunday, July 25, 2010

Aqsis Licensing Changes to BSD

Aqsis Licensing Changes

As of 25th July 2010, the Aqsis project intends to move towards re-licensing under the BSD license (http://www.opensource.org/licenses/bsd-license.php). All new code contributed by any authorised developer must, by agreement of the developer, be licensed under the BSD license. Existing code will remain under the current GPL/LGPL combination in the short term, while efforts are made to obtain agreement to re-license under the BSD license. Where agreement is not forthcoming, the code in question will be removed from the project, and be replaced with new implementation, written in isolation and licensed under the BSD license. The intention is to ultimately provide the whole of Aqsis under the BSD license.

As reproduced from the mailing list.

Tuesday, July 20, 2010

Blender 2.5x RenderMan Export News

It has been some time since anything was written here; sorry for the lack of activity. My personal life has taken some unexpected twists and turns, for the better of course.

Now for the news!!

Blender 2.5x now has the ability to export to RIB! Actually, Blender has been able to do this for some time, it just has not been mature enough to really say anything about it. However, this is not Eric Back's RIBMosaic; this seems to be a simpler script, similar in spirit to the old Blenderman script, just with a lot more functionality. I have not had a chance to test anything on Blender 2.5x, simply because the Project Widow pipeline is built for 2.49, so in order for us to continue to work on the film we need to keep the old production-stable version. Thus the only information I have is what can already be seen on the Blenderartists site here : http://blenderartists.org/forum/showthread.php?t=187969&highlight=renderman


"Frigge" as he has been known as in the forums, made this as an exercise into deeper technical stuff, as mentioned in the forums, so this script is not to be considered something for large productions. It does seem to be the perfect tool for someone who does not know much about Renderman but is curious to know how it works. That's how I started, my first taste of Renderman came from a Lightwave plugin and 3Delight back in 2003. So if you are not a shader and rendering wizard you might find this useful, at the very least informative and IT WORKS.

Back in February I had been talking a lot with Nathan of the Durian team, and at one point he had made a basic export script for Renderman as well. It is kind of funny that he did this, considering that about 7 years ago he ranted about how much he hated Renderman shaders; I did some digging and found this post from that time : http://www.blender.org/forum/viewtopic.php?t=1497&start=30. I just hope he does not kill me when he reads this, heh. I did have the source for it at one point but it got lost, so I am unable to provide anything in terms of how it functions or what it looks like, though at the time he did do a simple 60 frame animation of the main character model, using his script to export and Aqsis to render it out. Again, that too is lost in the abyss of the hard drive recycle bin (I was being stupid and did not realize that files and folders I deleted months ago were there). Nathan then became very busy with Sintel shortly afterwards, so I left him alone to complete the film. He did say that once Sintel is over he wants to become more involved in the Blender to Renderman project, so we are excited to have his presence here.

Matt Ebb also has been experimenting with a Blender 2.5 Renderman export script.


This script is also a sort of personal pet project for him, and no code seems to be available at the moment; however, from the screenshot he posted it looks good enough for people to play around with and learn more about what Renderman is and how it works, rather than being a full-on production-capable plugin like RIBMosaic, which of course can be daunting to grasp at first for someone who has never used a RiSpec renderer.


http://mke3.net/weblog/3delight-renderman-in-blender/

Finally, Eric has informed me that the GUI portion of the new RIBMosaic script is done; I cannot wait to see some screenshots!!

The point of all this is that Renderman usage with Blender is gathering steam; more people are interested in it and there is good effort coming from all over the Blender community, from the smaller basic scripts to the complex ones. This is a good thing: even in this little niche of the Blender community we have options, and that is the whole goal of this: OPTIONS. There was a recent thread on Blenderartists asking "Should the Blender Internal Engine be retired?" and in my honest opinion it should NOT. Why? Well, not only has Ton put a LOT of time and work into that renderer, but why should it be retired at all? Every 3D animation package has some sort of rendering engine, and while each may not have this or that rendering capability, you are still able to render out an animation sequence. Yes, Maya's internal engine cannot hold a candle to MentalRay or PRMan, but it is still functional and with enough effort can produce some good results. We started this site for a single purpose: to have the OPTION to use an RiSpec rendering engine, not to replace the internal engine. We know for a fact that not everyone is interested in rendering; they are called character animators, or modelers. While they may prefer this renderer over another for whatever reason, their desire is to create the model or to animate it. Why should they be forced to learn a new rendering system when they are comfortable with the one supplied? Of course not everyone is like this, I just used that as a single example, however the point is to not take out the internal renderer of ANY application that is intended to be used for animation. That would be software suicide.

Wednesday, April 21, 2010

Ptex and the Open Source Community




Figure 1: T. Rex with 2694 faces rendered with Ptex. (©Walt Disney Animation Studios)

I decided today to write an article on Ptex, the texture map library that Disney developed and recently released as open source. While this is not fresh news by any means, since its official release early this year there have been some developments in both the commercial and open source worlds.

One of the first applications to support Ptex is, naturally, Pixar's RenderMan 15.0. After the announcement by Disney in January, 3D-Coat was one of the first smaller 3D applications to implement it, within a week, which of course brings to mind that infamous set of images found in the forums.




One of the more interesting images was the texture map itself which looks unlike any map I have ever seen.

The original forum thread can be found here : http://www.3d-coat.com/forum/index.php?showtopic=4834&st=0

However, in the open source world it has not caught on quite as fast as expected; in fact, to date none of the software in the Blender to Renderman pipeline supports Ptex. This is not because of a lack of interest, it is more or less due to development targets. Simply put, both Blender and Aqsis are in the process of major rewrites, so implementing Ptex in them at this time would divert attention from the important targets in these applications.

Aqsis 1.8.0

Blender 2.5

Does this mean that Ptex will never see the light of day in the open source world? No. In fact, if you recall, it took some time before OpenEXR was added to Blender, Aqsis and Pixie, so while the technology is there now, it may be some time before it is added across the board. This is not due to developer laziness or, as mentioned before, lack of interest (because there is interest in it from the Aqsis team); it is simply that there are more important things to worry about in each application, such as functionality, stability and speed.

But all is not lost, and it seems that some people have been using workarounds to get Ptex working with applications that do not support it, such as Blender.

CG Society Article

Recently some useful code has popped up that will benefit not only Aqsis but Blender as well. For instance OpenImageIO (OIIO), which, if combined with Ptex and supported in Blender and Aqsis, would be something very useful. OpenImageIO was developed by none other than Larry Gritz, possibly one of the most important figures in the CG industry, from BMRT and Entropy to the work he did at Nvidia with Gelato. So not only is OpenImageIO a powerful imaging library, it is also one of Larry's greatest contributions to the open source community.

The /*Jupiter Jazz*/ group has also provided a useful file cache library called "JupiterFileCache", which at render time could improve file access speeds; the files can be texture maps, point clouds or other large files, reducing network traffic and disk use. Combined with the OpenImageIO and Ptex libraries, it could be a useful benefit for everyone.

The Blender to Renderman project is aware that these recent contributions could have a massive impact on the open source CG industry, so I have a proposal for ALL the developers in these areas: let's get together and at least agree on a common goal of implementing Ptex in these applications as soon as possible. Even if I have to personally spearhead the development and communication between all parties involved, this could very well be one of the most pivotal moments in development after the core rewrites to the applications themselves, in reference to Blender, Aqsis and Pixie of course.

In conclusion, it is nice to see more and more large-scale studios contributing to the open source community; while the tools or libraries they provide may be small and few, they are greatly welcomed by the community and we are encouraging more of it. One of the goals we are trying to promote is to "play nice", which means that open source software should be able to work well with commercial or proprietary software in terms of file formats, libraries and standards. One of those steps was the addition of OpenEXR (except for the minor issue of Blender reading and writing EXR images upside down); hopefully more of the same follows, and one of those steps can be the subject of this post: Ptex.

The actual details of Ptex can be found here : http://www.disneyanimation.com/library/ptex/

(© Pixar Animation Studios)
(© Walt Disney Animation Studios)

Thursday, April 01, 2010

April Fools! Pixar acquires the source to Aqsis and Pixie in stunning lawsuit


Update : Yes it was a joke ;)

It seems that history repeats itself, as we all remember the sad demise of BMRT and Entropy due to the lawsuit against Larry Gritz by Pixar over patent infringement, or something to that effect. Well, recently it has come to our attention that certain projects are being hit with the same lawsuits over graphics technology patents, and Aqsis and Pixie are now no more in the open source domain.
What this means is that the developers have to license the RiSpec from Pixar despite the fact that the code was written from scratch; the patented technology described in papers is so close to the Pixar source code that lawyers are afraid of investment loss. "Aqsis and Pixie, while being open source, have code that infringes on the technology Pixar invented; clearly the need to take steps to protect the name and reputation, as well as financial investments, is required," says a source that wishes not to be named for legal reasons.
This seems to be a common trend in the software industry, where one technology company sues another over patent infringement; it seems that the open source world is not immune to this legal battle either. This comes as a devastating blow to the community, since it means that development on these projects comes to a halt and will require the developers to license the patent and sell the software in order to recoup the license cost and make a profit to pay for the subsequent years thereafter.
Already the developers of 3Delight, AIR and RenderDotC have adjusted their pricing to cover the fees from the same legal action, though it is far easier for them since they are commercial products with established footprints in the industry. The open source community of Renderman users and artists was just starting to establish the valid reason for such tools, and on the brink of the dawn the rug was pulled from under us; now we either use the old versions, which will remain as is, or pay for the next-gen versions of our beloved rendering apps.
It is a sad time in our chapter as a whole and we wish the developers of Aqsis and Pixie well as they adjust to the dealings of commercial development. We are only waiting to see if our site gets hit with the same lawsuit over the name RenderMan itself, something also spoken of around the net here and there, so only time will tell if this site exists in the future at all.

Sunday, March 07, 2010

Video tutorial on Rendering Glass..



Hi. I am Mohan, from India. We are a small team of creative people making an animated series named KICHAVADI for a television channel here in our homeland, using open source software. Ted gave me the liberty to post here to give back my exploration with Blender & Renderman. Thanks to him. :)

I have done an audio-less video tutorial on rendering glass, using a fake method of achieving refraction and transparency with Aqsis. These are the same lamp and settings I am using in production. Thanks to Eric (the Mosaic developer) for the detailed guide.


Eric's advice on rendering a glass material in Aqsis - Blender:
Set up the material's IOR and Fresnel on the material's RayTransp tab (also RayMir and its Fresnel if you want reflections), but be sure to DISABLE both the "Ray Mirror" and "Ray Transp" toggles or MOSAIC will think it's supposed to export raytracing. One thing to keep in mind when doing this is that the environment map only sees what's "outside" the glass, so you will not be able to see what's "inside" in the refraction (like the flame and lamp base). You can get around this by turning down Alpha on the material, however this will also "fade out" the reflections and refractions. If you want the reflections to be solid and the fresnel to be "see through", you can do a trick by enabling "TraShadow" on the material, and MOSAIC's shaders will use the fresnel to adjust the output alpha.
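For anyone who prefers to script this rather than click through the UI, here is a rough Blender 2.49 Python sketch of the same settings. It assumes the standard 2.4x Material API names (Material.Get, IOR, fresnelTrans, Material.Modes); the material name and values are placeholders, so treat it as a starting point rather than a recipe.

    # Rough sketch of Eric's fake-glass setup via the Blender 2.4x Python API.
    import Blender
    from Blender import Material

    mat = Material.Get("GlassLamp")             # hypothetical material name
    mat.IOR = 1.45                              # refraction index on the RayTransp tab
    mat.fresnelTrans = 2.0                      # Fresnel falloff for the fake refraction
    mat.mode &= ~Material.Modes['RAYTRANSP']    # keep MOSAIC from exporting raytraced refraction
    mat.mode &= ~Material.Modes['RAYMIRROR']    # same for raytraced reflection
    mat.mode |= Material.Modes['TRANSPSHADOW']  # "TraShadow": let the shader drive output alpha
    Blender.Redraw()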

This is my link to video tutorial

-Mohan
rangakahale.creations@gmail.com

Friday, February 26, 2010

Walkthrough of the Widow Pipeline Part 1

I decided to make an example walkthrough of the workflow through the Project Widow pipeline, for those who are interested in what we are doing. I contemplated a video entry, but alas the lack of video-capture hardware prevented that from happening, so I went ahead and wrote this entry as text.






The walkthrough uses objects and data from Widow, in screenshot form, so you will get to see the latest work we have done. Just remember that it has taken quite a long time to get to this point and it was not easy. Much of the work involved in getting the 'Widow' pipeline functioning was done by trial and error; in fact much of our work so far has been pushing the limits, learning what can work and what does not, figuring out the best way to do something as well as ensuring that it can work more than once. We started out the project at full speed, with models and textures completed by the end of the first month; the time since then has been a lot of R+D and setup for animation. In contrast to what people think, setting up a flexible, custom pipeline even for a short film is quite a task. Making sure that a single articulated model has the right texture, shader and movement from concept to finish is hard work to keep track of. Some of our models still do not have a final shader look, at least one model is still needed to complete the modeling phase, layout for the various sets has only just begun, animation has yet to get into gear and we are only half done. We still need to get the renderfarm situation solid. However, despite all the work that has yet to be completed, the work we have done so far has been astounding! The work to develop the multilayer OpenEXR display driver has been a huge deal for this project; in fact everything is rendered in EXR format in addition to TIFF and the framebuffer. The amount of work done to Mosaic by Eric Back has been astounding, and as of right now any development is being done for Blender 2.50.

Despite all the frustration that has slowed down production we are still diligent in finishing this.

Base Tools

Our base tools consist of Python, Blender, Aqsis, Mosaic and OpenEXR. Python itself is much of the pipeline, as most of our tools use it for one reason or another. The shader editors we use run on Python for the most part, or use Python. Blender and Mosaic run on Python, and there are even some tools written for Aqsis that use Python. Later on, when we put all the video and audio together in Cinelerra, that too uses Python. Even the SVN has a Python tool attached to it to email members when files are changed. So Python stretches our pipeline from one end to the other, and with good reason too: it is a very powerful scripting language and can be used anywhere for anything. There is no compiling of the code; it just runs, and if you are adept enough you can modify it and run it again without effort.
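As an example of how small these Python glue tools tend to be, here is a minimal sketch of the kind of SVN notification hook mentioned above. The repository path, addresses and mail server are placeholders, and this is not the actual script we run, just an illustration of the idea.

    # post-commit hook sketch: mail the team whenever files change in SVN.
    import smtplib
    import subprocess
    import sys
    from email.mime.text import MIMEText

    def post_commit(repo, revision):
        def svnlook(cmd):
            return subprocess.Popen(["svnlook", cmd, repo, "-r", revision],
                                    stdout=subprocess.PIPE).communicate()[0]
        body = "Revision %s by %s\n\n%s\n%s" % (
            revision, svnlook("author").strip(), svnlook("log"), svnlook("changed"))
        msg = MIMEText(body)
        msg["Subject"] = "[widow-svn] r%s committed" % revision
        msg["From"] = "svn@example.org"
        msg["To"] = "team@example.org"
        smtplib.SMTP("localhost").sendmail(msg["From"], [msg["To"]], msg.as_string())

    if __name__ == "__main__":
        post_commit(sys.argv[1], sys.argv[2])   # SVN passes REPOS and REV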

Python 2.5.4 - http://www.python.org/download/releases/2.5.4/

Blender 2.49b - http://www.blender.org/download/get-blender/

Mosaic 0.4.9 - http://ribmosaic.cvs.sourceforge.net/viewvc/ribmosaic/mosaic/

Aqsis 1.7.0 - http://download.aqsis.org/builds/testing/

OpenEXR 1.6.1 - http://www.openexr.com/downloads.html

These tools are the ones that everyone in the team has to have installed in order for everyone to correctly open, edit and render in our pipeline. Below is a list of tools that we have used or continue to use.

Cinepaint 0.22-1- http://www.cinepaint.org./

Gimp 2.4.7- http://www.gimp.org/

Shaderman 0.7- http://www.dream.com.ua/theoldtool.html

Shaderman.NEXT - http://www.dream.com.ua/thetool.html

SLer - https://sourceforge.net/projects/ribkit/

Shrimp - https://sourceforge.net/projects/shrimp/

DrQueue - http://www.drqueue.org/cwebsite/

postmosaic - http://projectwidow.wikidot.com/forum/t-166757/functional-blender-mosaic-drqueue-renderfarm


In addition to the listed software we also have a whole ton of development environments to compile code, libraries of all types, misc utilities (code editors, for instance), server daemons and other little pieces of code to support these as well. Of course some of these have been custom compiles, as some tools do not have binaries available for download, or, in the case of the shader editors, both Shaderman.NEXT and SLer run off of Python. PostMosaic is a shell script written for BASH, so it is not usable on Windows, not to mention it was designed for the latest stable version of DrQueue, so it is not known whether the same issue would persist on a Windows based renderfarm (of course one would wonder why anyone would do such a thing, but regardless that possibility cannot be ruled out, as ridiculous and expensive as it may be). The version of Cinepaint we use is only available for Linux, as there is no working version for Windows, and we do need Cinepaint to load and save OpenEXR files. GIMP is pretty much standard on whatever platform you are on. Shaderman 0.7 is a Windows-only build, while Shaderman.NEXT runs off of Python and is thus cross-platform. All kinds of tools for various functions, and all available for Linux at the very least.


Asset Management.

First and foremost is asset control, be it a file server via FTP, NFS, or distributed peer-to-peer; something that everyone can use has to be in place, otherwise there is no consistent structure to what gets worked on. In the final version of the Widow pipeline we are using SVN, hosted privately, but all members who worked on the project have access to the data. Previously we had relied heavily on Dropbox for our asset control, however because of the limited space we started to work on developing something better. Arachnid was a series of scripts written for Unison by Paul Gregory, however it seemed a bit too buggy for our needs even though it did work. When we started to use SVN there was some concern over corruption of binary files (.blend files are binary, for instance).



SVN gui (Rapidsvn) with remote file list

The initial idea was sparked while watching the Hand Turkey Studios webcast during the 48hr Film Contest. They were using SVN for their very busy 48-hour pipeline. I had also participated in a last-minute re-render of 6 frames for the Animux "Prince Charming" preview animation, which used SVN as well. It seemed stable enough and something we could work with, and we eventually ended up hosting it ourselves courtesy of NOYX Studio, my friend's small home-based recording and editing studio. This also freed up Animux's network strictly for the renderfarm later on down the road.

Had everyone been in a single location instead of spread out over the world, a lot of our assets would have been easier to manage. As it is, I would say the amount of data that we will have transferred over the course of the production can easily reach 2 TB.

The asset tools will keep being upgraded to improve communication and provide more detailed information about the data. The content and rendering tools work very well in this pipeline, but the one thing we lacked in the beginning is starting to shape up, with SVN storage and, recently, some talks with a company that specializes in the asset management sector.

More can be read here :
http://projectwidow.wikidot.com/pipeline:network

Modeling.

The Widow assets are actually small in number compared to some short films; much of the modeling has been complete for many months. For our purposes here we are going to use the main subject, the spider model. This model was the first asset taken all the way through modeling, rigging and texturing.



Closeup of spider with wireframe over shaded, complete with hairs!

You will notice how it was made using quad polygons rather than triangles; this was a design choice rather than a matter of looks. The REYES pipeline is far more efficient at dicing quad polygons into micropolygons, so building in quads will actually decrease render times with SubD and displacement shaders applied. What happens is that a quad polygon with a SubD modifier creates a patch with a control mesh around it. When this patch goes through the REYES pipeline it gets cut up into subpatches, and then cut up again, then diced into micropolygons at the pixel level, then displaced, then shaded and then lit.
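A quick back-of-the-envelope sketch in Python of what that dicing means in practice: ShadingRate is roughly the screen area in pixels covered by each micropolygon, so a patch's micropolygon count is about its raster area divided by the shading rate. The numbers below are purely illustrative.

    # Estimate how many micropolygons a patch dices into under REYES.
    def micropolygon_estimate(raster_width_px, raster_height_px, shading_rate=1.0):
        area = raster_width_px * raster_height_px   # screen area the patch covers
        return int(area / shading_rate)

    # A patch covering 200 x 150 pixels at the default ShadingRate 1:
    print(micropolygon_estimate(200, 150))   # ~30000 micropolygons, about one per pixel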

For those who want to read up on REYES :

http://graphics.pixar.com/library/Reyes/paper.pdf



Model of the spider

Since this is an 8-legged creature, a custom rig needed to be made. The rig was built by Cedric Palle, and has controls that can either move the entire rig through 3D space or move parts of the body, or move the body with one part of the rig while the legs stay in place. A very nice, workable rig. One of the issues we had was scale: the spider model compared to the environment is huge, so for many of the scenes a smaller version of the spider had to be used. What was ultimately done was to use a scaled-down version of the spider without the hairs; they did not work so well at a scale of 0.01, so removing them was a terrible price to pay, though we still have the model with the hairs on it for some extreme close-up shots.



Rig of the spider

One of the things that was stressed in the beginning was to use Aqsis as the preview renderer when modeling, because there were many instances where we either found bugs in Aqsis itself, or found methods of modeling in Blender that look fine when using the internal renderer but look different in Aqsis. There are some things that Blender's internal renderer can overlook or get away with because the renderer is designed for Blender, however when a scene gets translated into another language some things don't always come out right, and in this case there were instances where the two rendered very different results, so using Aqsis as the preview renderer was necessary. In most cases, though, there was little difference between the two.

More of this can be read here :
http://projectwidow.wikidot.com/pipeline:modeling

Texturing and Imaging.

This part of the pipeline was not used as much as originally intended but did find use here and there when needed. Much of the surfacing is done with Renderman shaders, but there have been several uses for textures, such as ground planes and the spider design. When we made textures they were first worked on with GIMP and later with Cinepaint. All of our textures are in TIFF format, simply because this is the only format that Aqsis can process into a mip map.
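As a side note, the TIFFs still have to be turned into Aqsis' mip-mapped .tex files before rendering. A small batch helper along the following lines can do it; this is only a sketch, it assumes teqser accepts a plain "input output" invocation on the PATH, and the directory layout is made up for the example.

    # Batch-convert TIFF textures into Aqsis .tex mip maps with teqser.
    import glob
    import os
    import subprocess

    def make_mipmaps(texture_dir):
        for tif in glob.glob(os.path.join(texture_dir, "*.tif")):
            tex = os.path.splitext(tif)[0] + ".tex"
            subprocess.check_call(["teqser", tif, tex])
            print("mip-mapped %s -> %s" % (tif, tex))

    make_mipmaps("textures/spider")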



Textures above in Blender



Textures in Cinepaint

There are 4 texture maps on this spider model: color, specularity and 2 different levels of displacement maps, one for long to mid shots and another for close-up details. These were first created in GIMP and later cleaned up in Cinepaint.

http://projectwidow.wikidot.com/pipeline:texturing

http://projectwidow.wikidot.com/spider

Layout.

This portion of the pipeline was one of the most difficult to tackle because we were unsure exactly how to set up multiple scenes with animated objects without making each shot 100+ MB in size. Linking gives us the ability to make multiple sets in a small amount of time and add in the objects that need to be animated, and one of the most important reasons for it is consistent shader and lighting settings from set to set. Our main environment is built off of 3 main sets: one of them is the complete set, the next version is most of the set, and the third is a set with many of the objects removed. Set design in this project is tricky; depending on the camera view it is far more efficient to only include objects that are going to be seen. If we made one set for the entire thing then many of the objects would never be seen at all, so splitting it up reduces export time as well as disk space.

In this example I am using one of the production scene files (scene 002 to be exact), since from this point in the pipeline these scene files will be what every other process is based off of. Because we are linking in objects we have more consistent shader visual continuity; there won't be really obvious repeating patterns, and the varying amounts of turbulence, noise and fractal patterns won't change from shot to shot. If we did not link in objects, each shot would have to be manually edited, and doing this over and over is just not practical, so linking solves at least much of the grunt work.



Scene 002 set, which is entirely linked from the main scene file

We can link, or in some cases append, anything into the layout files; however, to keep the workflow consistent, having a custom file per scene prior to the work allows anyone who starts to animate to do so without the fear of their work being altered by others. We also changed the various screens of the interface to accommodate that factor, having them labeled something like 'BLENDER_LAYOUT', 'BLENDER_ANIM01', 'RMAN_SHADER' so that different people can work on the same file without altering the settings that others have made, depending on the circumstance of course.

At this point there are two pipelines going: one is the modeling, layout and animation pipeline, which for the most part is contained in Blender with some Renderman data associated with it but nothing really shader heavy. In the beginning of the modeling phase Daniel made a ton of models for the environment, which I later shaded with the custom shaders. At some later point these shaders and Mosaic shader fragments will be appended into the scene file that the rest of the scene files are linked to. This will reduce the copying and editing needed to a maximum of 3 main Blender sets. The same will also need to be done with lighting.

In retrospect it could have been done a lot better; the planning was not fully worked out, but considering a lot of time had passed we decided to just go with what we had and keep patching things together. Linking and appending offer us a way of making time-critical adjustments or, in some cases, rebuilding. In the future this will be fully planned beforehand, but for 'Widow' anything done from this point on would have to work.

Nathan Vegdahl also helped me out with this during a conversation on IRC one night. For all the things we knew how to do with Blender, something as simple as actually bringing a linked object into a scene was unknown to us, which of course is laughable now, but at the time it was a moment similar to when the light of Marcellus Wallace's briefcase renders one speechless in wonder.

More information can be seen here : http://projectwidow.wikidot.com/pipeline:layout
and here : http://projectwidow.wordpress.com/2010/02/04/library-linking-svn-and-a-new-tool/

Animation.

Since we are only now touching on this part of the pipeline there is very little to tell, but as we build up each scene and shot file we are linking in the subject models as well. At first we were not clear on how to bring in external files and edit them (for animation, say) without appending the data itself. It just so happens that around that time the Durian team released a short video dealing with this very subject, so within minutes of watching it the whole animation portion of the pipeline was figured out in my head, within 30 minutes it was on paper, and shortly after it was implemented in the project folder.

Since my primary task on this project is shading, lighting and effects TD (I just happen to do other tasks as well), I am not too skilled at the modeling or animation end, so our models and rigs have been made by others. When building a layout scene I generally just bring the object into the scene and place it in the approximate area it is supposed to be. Whoever animates a shot has ultimate control over that file until it is considered final, at which point the scene is copied and renamed for shading and lighting, usually with a version number and 'shading' added to the name.

During this entire time effects animation is being done as well to accompany the primary animation. This can include anything from spider web movement and cloth simulation to particle work and ambient animation of environment objects. One such instance is a series of cables with a Lattice deformer added to them, which can be animated either by hand or with dynamic animation scripts in scenes where the train passes through and creates some vibration. It really only adds atmosphere, so the shot is not full of static models, and brings a little life to it.
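A "dynamic animation script" for something like this can be as small as a frame-changed script link. The sketch below is only a rough illustration in the 2.4x Python API: the object name, frame range and amplitude are all invented, and the scripts actually used on the shots do more than this.

    # Minimal frame-changed script link: rattle the cable lattice while the
    # train passes. Object name, frame range and amplitude are hypothetical.
    import math
    import Blender
    from Blender import Object

    frame = Blender.Get('curframe')
    ob = Object.Get('Lattice_Cables')

    TRAIN_START, TRAIN_END = 100, 160
    if TRAIN_START <= frame <= TRAIN_END:
        t = frame - TRAIN_START
        # small decaying vibration as the train rumbles through
        ob.LocZ = 0.02 * math.sin(t * 1.7) * math.exp(-t / 30.0)
    else:
        ob.LocZ = 0.0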

Effects

A lot of R+D has gone into this area to see if some of the things we wanted to do were even possible. Spider webs, for instance, are very rigid structures when in complete form, but a broken web is a very flexible strand of an extremely lightweight material. One of the problems we had from the start was how to accurately replicate spider webs without a high poly count cost. One idea was to use Curves in Blender, which are exportable and renderable; the problem comes in animating the control points - that part of the Python API is not accessible to Mosaic, and it is a good example of some of the limits of the Blender 2.4x series. So more research went into various methods for webs: polygons with hooks on some key vertices, a polygon web with a cloth modifier, curves for non-animated webbing... all of these methods could work. Even texture maps on a plane would work; it all depends on what is going on in the scene, how close it is to the camera, whether it is static or moving, and so on. In all there are up to 12 different ways we can do spider web strands, and it will most likely take all of them at one point or another.

Another research project was cloth itself. There is a chance we will use cloth objects for blowing paper - lots and lots of little itty bitty pieces of paper. This demanded some testing, and despite the fact that the "paper" did not really act like paper, it did prove that the task is technically possible. When this does get added to scenes it is pretty certain that the exported data will be numerous and large. One thing to remember is that in Mosaic it is possible to export only one RIB of an object; then, no matter where it is, as long as the vertices do not move, only that one RIB is referenced from the frame RIB file. With something like cloth, however, each frame exports a cloth RIB as well, since that file is a large collection of where the geometry is located in 3D space. So if there are 100 tiny cloth objects all blowing around for 30 seconds, that is 90,000 individual RIB files for the cloth objects alone (100 objects times 900 frames, assuming 30 frames per second). Working out the effects for these kinds of shots will require some effort, but it is possible.



An early web test done last summer

In all there won't be a whole lot of effects that anyone would consider "effects"; they are more like supportive environment elements, since the whole short is an effect in itself. Anyway, the effects are among the last things to modify during the shading and lighting phase. The reason is that many of them can be cheated thanks to factors like DOF, lighting setups, distance to the camera, movement and so on, and those factors only really become visible during the lighting process. There might be times when a web is in shadow, so if it is off a little bit then so be it, regardless of whether we remove or add objects. If we really worried about every single thing being perfect then this would never be finished. It has to look good enough, not perfect.

You can read more of this here : http://projectwidow.wikidot.com/r-d

Shading and Lighting

This portion of the pipeline is the very reason we are doing the short film in the first place: to showcase the power of Renderman. In reality it is an ongoing process from start to finish, as much of the initial shader work was done in the early months of production. All that is left is to add the AOV code and the shaders are ready for production use. The way we wrote the shaders is also important: since much of the work is going to be done in Blender, the shader parameters were designed with Blender in mind rather than written like the average shader code. Some of the shaders will never see the light of day, others are a wonder in appearance, some are being built to use Blender paint data to apply a separate shader to the object, and others will not be seen as much but still look great.

All of the custom shaders made for 'Widow' are designed to be used within Mosaic. When writing a Renderman shader it is not uncommon to have numerous parameters that adjust the way the shader looks or functions, and in most cases this is perfectly fine if you are using the shader in something like Maya, where anything can control them thanks to its open API. Since Mosaic's shader fragment system uses Blender material settings to drive these parameters, however, you are limited to what that system exposes. Luckily the Blender material system is very robust and there is quite a selection to use; the hard part is remembering which Blender material setting controls which Renderman shader parameter, so some planning is needed. With Renderman it is entirely possible to have multiple functions that control different parts of the shader code - you could have three Turbulence functions, each changing the values of various other parts. The problem is that if you want to change those parameters through the fragment system you are limited to what it can connect to: if you use a Turbulence value from the Blender material you can only have one of them, so you need to find something else for the other functions or skip Mosaic's fragment system. For the most part the shader functions are not too complex, and the ones that are usually have only a handful of parameters linked to the Blender material.



Custom shader development and fragment assignment in Mosaic



Custom shader with preview rendering using Piqsl

The other part of shading that usually needs to be completed first is assigning materials to polygons in order to use different shaders on a single object. This does not apply multiple shaders to a single mesh on export - that is not possible according to the RiSpec (unless, in the case of Aqsis, you use layered shaders, which require special shader and RIB programming). Instead, Mosaic splits the mesh into sub-meshes and adds the shader reference for each in the RIB file. So when an object that uses multiple shaders is exported, the sub-meshes in turn become separate RIB files. This operation is not visible to the user, and unless you are aware of how it works you would never know it happens.
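To make the sub-mesh idea a little more concrete, here is a purely illustrative snippet that groups a mesh's faces by material index - the same kind of split Mosaic has to perform before each group can reference its own shader in the RIB. The object name is made up, and this is not Mosaic's actual code.

    # Illustration only: group faces by material slot, the way an exporter
    # must before each material can point at a different shader in the RIB.
    import Blender
    from Blender import Object

    ob = Object.Get('Tower')               # hypothetical multi-shader object
    me = ob.getData(mesh=True)

    faces_by_material = {}
    for f in me.faces:
        faces_by_material.setdefault(f.mat, []).append(f.index)

    for mat_index, face_list in faces_by_material.items():
        print "material slot %d: %d faces -> one sub-mesh archive" % \
              (mat_index, len(face_list))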

In addition to the custom Renderman shaders we are also using a lot of Mosaic's own shaders, since there are many situations that do not require a complex shader. Various parts of the train, for example, use the Mosaic surface shader because they won't be seen much at all and thus need nothing more than a plastic-type shader. The Mosaic light shader is the primary one we use, since it tends to be one of the most complete light shaders available for Renderman, next to uberlight.sl from over 10 years ago. During the course of production Eric also added volumetric shading that is called when the "Halo" setting of a light is switched on, something the artistic members had asked for.

Lighting is the most crucial step of the production, not only artistically but from a technical view as well; there are many industry-proven tricks we will be employing to reduce render times while keeping the image appealing to the eye. Since Aqsis is not ray-trace capable we will have to use shadow maps, as well as environment maps in some cases. However, since it is Renderman, much of the environment lighting can be rendered once beforehand and later baked into the scene over a spread of frames, further reducing render times. Custom lighting setups will of course still need to be made, but much of the general lighting will already have been rendered and baked in for later use.

I also managed to find a Python programmer who wrote a script that adds a spotlight pointing at an empty object by default. This handy tool was something I always wanted to have; it makes lighting so much easier when you only have to move two objects to get the spotlight pointing exactly where you want. Having a script that just adds this in, without setting up the light rig by hand, is a blessing. At this time it is only a simple script, but I imagine it could grow a GUI someday and I hope to work on it myself later.
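I won't paste the actual script here, but the heart of such a rig is only a handful of lines in the 2.49 Python API. The following is a rough sketch with made-up names; the production script does a bit more housekeeping (naming, layers, track axis and so on).

    # Rough sketch of a spotlight-plus-target rig (Blender 2.49 API).
    # Names are hypothetical; the real script does more setup than this.
    import Blender
    from Blender import Lamp, Scene, Constraint

    scn = Scene.GetCurrent()

    spot_data = Lamp.New('Spot', 'KeySpot')
    spot_ob = scn.objects.new(spot_data)

    target = scn.objects.new('Empty')
    target.setName('KeySpot_Target')
    target.setLocation(0.0, 0.0, 0.0)

    # A TrackTo constraint keeps the spot aimed at the empty, so lighting
    # becomes "move two objects" instead of hand-rotating lamps.
    # (In practice the constraint's track axis is also set so the lamp's
    # -Z axis points at the target.)
    con = spot_ob.constraints.append(Constraint.Type.TRACKTO)
    con[Constraint.Settings.TARGET] = target

    Blender.Redraw()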


Rendering.



This is where we will be finishing up each shot as it comes out of the final stages of production. The exported RIB file structures (which can be quite large, in the range of 500+ MB) will be uploaded to the remote render farm we have reserved, and DrQueue will distribute the frames across the 20+ node renderfarm (see below). We will be using a newer development version of the recently released Aqsis 1.6.0, which is now technically 1.7.0. Prior to this we were testing the development alpha, version 1.5.0. The reason we are using development builds rather than "production stable" releases like 1.6.0 is simply that we are a test bed for the developers and provide some great cases for them in a production environment. So even though building a custom Animux render-node Live-CD later on will require compiling a stable development build, it will be the very same build we use for our own preview rendering, thus maintaining a consistent rendering environment regardless of where the rendering takes place.


Aqsis rendering of our train model with full shaders and DOF, featured in the new Piqsl framebuffer interface

Since we have been using Aqsis from the start, our assets are designed around it. Subdivision surfaces, for instance, tend to look a little different depending on the renderer used, in our case between the Blender internal renderer and Aqsis. So Aqsis was used during modeling, as well as for any testing of whether we could do what we wanted. During the R+D process we use Aqsis exclusively, for the sole reason of seeing whether what we are trying to do is possible at all: finding out that curves do not carry animation through export, or that we can make very thin polygon strands that look just as good as a curve, testing render times for full scenes, testing DOF and motion blur on both objects and cameras, testing instancing methods (dupliverts versus Array Modifiers), testing a way to paint on objects to change shader values, viewing texture mapping results - the list goes on and on. We are not using Renderman much for animation previews, simply because it is far faster and easier for the animators to make 3D view preview videos themselves than to teach them Renderman preview rendering. The shading and lighting process and everything beyond it will be rendered entirely with Aqsis.

All of our frames are being rendered through the new Multilayer OpenEXR display driver.

Of all the achievements made in the past year, it was the Multilayer OpenEXR display driver and its Mosaic counterpart that made the biggest impact. Adding this to both Aqsis and Mosaic made AOV rendering a simple task, unless of course you are using custom shaders, which require AOV code to be added - something that can be a challenge unless you have programming experience. OpenEXR has become an industry standard, so having an HDR file format available through much of the pipeline was highly desirable. Right up to editing the final video, the EXR format has been and will be used; the only other image format is TIFF, used solely for textures, which then get processed into the custom MIP map format. The ability to put all AOV layers into a single file was itself a huge contribution, not only to Blender but to the rest of the community. Because the driver was designed with Blender in mind, when the file is added to a compositing node it automatically creates output sockets for each layer, so a large pile of nodes per AOV is not required and the whole process becomes much easier. The only problem is that Blender has a unique "feature": it writes EXR files upside down, and any file read into Blender is also treated as upside down, requiring a flip node for each layer. We are not sure if this was intentional on the part of the Blender developers and hope that the next version, um, "corrects" this. Irritation aside, making a template composite blend file is an easy workaround, and the rest of the process can be devoted to working on a shot rather than on setup.



Cinepaint with a single multilayer OpenEXR file opened

'Widow' was a great test bed for the Aqsis developers. During the summer we tried to test every feature and optimization possible, such as the improvements to depth of field and motion blur, or the new Piqsl GUI. In fact, since I do my own regular builds of Aqsis, I have been taking advantage of some of the newest toys, like point lights that use a single shadow map instead of the old standard of six, reducing the number of files per light, per shot. We did encounter some errors - sometimes Aqsis failed horribly - but with constant bug feedback the Aqsis developers were quick with fixes, and in turn we were able to keep testing.

The renderfarm operated by Animux went through initial testing this past summer as well, using DrQueue to manage the Aqsis render nodes. There were problems in the first series of tests, and we eventually found out that the way Mosaic writes the render script was causing errors. So one of the Animux devs wrote a small shell script that edits this file so that DrQueue will correctly assign frames across the network. The tool is called 'postmosaic', and on the Animux release 'Tremor' it is included by default - all one needs to do is run the command in the shell. It has also been released to the public for anyone to add to their own system.
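Conceptually, what each render node ends up doing per task is nothing more exotic than the sketch below: take its block of frame RIBs and feed them to the aqsis binary one at a time. This is only an illustration of the idea with made-up paths - it is not the postmosaic script itself.

    # Illustration of what a render-node task boils down to; the paths and
    # frame range are hypothetical and this is not the postmosaic tool.
    import glob
    import subprocess
    import sys

    def render_frames(rib_dir, start, end):
        ribs = sorted(glob.glob('%s/*.rib' % rib_dir))[start - 1:end]
        for rib in ribs:
            print 'rendering', rib
            if subprocess.call(['aqsis', rib]) != 0:
                sys.exit('aqsis failed on %s' % rib)

    if __name__ == '__main__':
        # e.g. this node was handed frames 1-50 of scene 002
        render_frames('/renders/scene002/ribs', 1, 50)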





The Animux renderfarm

There are going to be anywhere from 7,000 to 9,000 frames to render, so once we start that process there will be roughly 20 times that number of RIB files, shaders, textures, shadow maps and other data. In all, this entire short film could occupy a 500 GB volume between the production files, the RIB exports, the frames, video and sound. Up to five times that amount of data could end up being transferred over the internet, considering that we are located all over the world. Eventually everything will be archived onto a stack of DVDs, and an external drive will be purchased and backed up onto as well.

More about this can be read here :

http://projectwidow.wikidot.com/pipeline:rendering
http://projectwidow.wikidot.com/r-d#toc6

Aqsis Pipeline : http://wiki.aqsis.org/dev/pipeline

Composite

Once each scene is rendered, the elements are composited together in Blender. Since the Multilayer EXR format was designed for exactly this case, the image itself contains the AOV layers we specify, such as color, specularity, normals, UV, alpha and so on. Because it is one file, there is no need to render out a separate file per AOV per frame; it is all self-contained. The only issue is that Blender seems to read and write EXR files upside down. It is not clear whether this was done on purpose or is an oversight, but since Aqsis writes EXR files the correct way we need to flip the layers before they are run through filters.
The reason for the AOV rendering is that if anything doesn't look quite right, it is far easier to correct that particular layer than to render the entire frame sequence again. This can't fix issues caused by modeling mistakes or rendering artifacts, but it can reveal artifacts not normally seen in a final render, such as tiny grid-cracking spots that can be very difficult to spot initially. This is also where you first notice when AOV code is missing from a shader, in which case the layers will look very different than expected.
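A nice side effect of the single multilayer file is that you can check exactly which AOVs made it into a frame without even opening Blender. Here is a small sketch using the OpenEXR Python bindings, assuming they are installed; the file path is made up.

    # List the channels baked into a multilayer EXR frame.
    # Assumes the Python OpenEXR bindings are installed; path is hypothetical.
    import OpenEXR

    exr = OpenEXR.InputFile('/renders/scene002/comp/frame_0001.exr')
    header = exr.header()

    # Each AOV shows up as its own channel group (e.g. diffuse.R/G/B)
    # alongside the beauty R, G, B and A channels.
    for name in sorted(header['channels'].keys()):
        print name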



Cinepaint with bad AOV layers composited (including the UV layer which normally would not be visible)

Once all the layers are satisfactory they will be written out as an OpenEXR image sequence and placed into the final frames area, from which they will be brought back into Blender for the final video output in the last stage of production.



Blender composite nodes of a single OpenEXR file

There has been some thought of using Blender 2.51 for the compositing of 'Widow', since new composite nodes have been added. As this would be at the tail end of the production, it would not be a bad idea to start upgrading the pipeline to the next versions of the software, as it is quite possible that all the tools needed will have been updated at least somewhat by then.

More can be read about in here : http://projectwidow.wikidot.com/r-d#toc5

Sound

NOYX Studio is taking care of the sound creation and editing for the short, and recently I had a chance to listen to it for the first time. Since for the most part the drama of the scene drives the animation, sound is really a post process; in this case Ryan happens to have some of the sounds needed for the short and is adept with sound work.

Even in the sound department open source is being used, in the form of Ardour, which in my opinion is one of the best audio tools out there. It is something I really wish had existed 10 years ago, but sadly at the time there was no such thing - and if there was, I was certainly not aware of it.

Ryan has already built up quite a sound library, and being a sound artist himself he has taken samples that ended up sounding nothing like the originals, while others are damn near the original recording (with EQ applied, of course).

This is also where the video will be edited; I will be hiking over to his place and putting together the samples, soundtrack and shots to bring this to its final form.



NOYX Studio consists of a small network run entirely on Linux (64Studio for audio production and Ubuntu for file server duty), so Ryan has been very helpful in many areas of the production, not to mention being the glue that holds this whole pipeline together.

Conclusion

Project Widow is still chugging away, though little of it has been seen so far, and the material that has been "released" is scattered across various places - forums, postings and whatnot. Consider it guerrilla marketing, haha. At this time our work is mainly getting all the scenes set up for animation so we can fully enter that phase, now that our initial work on the first three scenes turned out nicely. People wanted to know more about what was going on, and I wanted to talk a little tech, so this walkthrough is the result. For Part 2 I am going to dig deeper into some parts of the pipeline with screencasts.

The pipeline can run on any OS that has Blender, Mosaic, Aqsis, Cinepaint, GIMP, SVN access and a web browser. Workstation, server, laptop - this pipeline has been tested and developed on all of them. Yes, nothing is one complete package from start to finish, but since this is designed around professional studio pipelines it seemed important to explore those methods beyond the simple necessity of having to. Yes, the software is already considered old, except for Aqsis, but that is only because it is stable enough to handle the tasks we want; much of this past summer went into finding bugs and developing new functions and tools. We are also using professional production methods and tricks, and in some cases the very same tools, as with OpenEXR and Cinepaint, since both were developed by studio employees with the intention of being used in those same places. Researching how those studios built their pipelines was also a great starting point - the information is limited, of course, because of trade secrets and proprietary software unavailable to the public, but it gives you the idea, and if you know talented programmers you can develop your own tools. On this project we just happened to know several.

The development team all use Linux, while our artists have been known to use both Windows and Linux, so access to the SVN server had to work across both operating systems. While it may seem easier to install the latest stable release on Windows, it is far easier to build the tools on Linux when the source code has changed (as is the case with Aqsis), not to mention that some tools, such as Cinepaint and Shrimp, are only actively developed for Linux. Obviously, the further down the pipeline we go, the less we use Windows and the more we rely on Linux for everything; modeling was done mainly on Windows, for instance, but the rendering will rely entirely on Linux.

After the film is final there is talk of releasing the production files to the public, as a sort of open movie. This would be a great way for people to really learn how the process works, considering that much of the pipeline has been pieced together entirely from open source tools, and a lot of documentation has been written as well. What will not be included are the exported scenes; the 7,000-9,000 composited OpenEXR frames might be (maybe). Of course this would be released as a torrent, since the 8 GB of data this whole thing could eventually occupy would be a huge load on the SVN server. Considering that the combined downloads of the BRAT toolkit exceeded 1,000, it seems a worthy effort, along with a special gift inside. I will also upload the video to the Internet Archive.

In the future, though, there is one part of the pipeline that will need to be fully worked out, and that is a solid asset server along with a web-based production tracking and collaboration system. We would also be using the next versions of Blender, Mosaic and most likely Aqsis. However, the tools we are using right now are battle tested, proven to be stable, and reproduce the same results, so those are the versions we are using for this project; the programmers spent a LONG time working day and night to provide some kick-ass software for us to use, and it would be a waste to let all that work go in vain. There has already been some experimentation with exporting to RIB format from Blender 2.50, but that was considered a personal research project and not 'production stable' at all.

None of this would be possible without Linux, specifically Debian and its offshoots such as 64Studio, Ubuntu and Animux.



I hope this walkthrough was not a complete bore; I did skim over a lot of things in this first part of the series.