Thursday, December 24, 2009

Happy Holidays!




Happy Holidays to all!

We want to wish everyone the best of times and safe travels! This post is a long time coming, considering that there has never been a holiday-related post here before, but this year is different. Why is it so different from any other year? Well, Blender to RenderMan has become a valid project this year, at least we like to think so. It has stood the test of time and criticism, it has pushed development in other areas, and more people are becoming aware of these tools and using them for their own needs. Considering that Aqsis is one of three external rendering examples featured on the Blender website, I think the efforts of everyone past and present have started to come to light. While this website is only one effort to bring a community together, the idea and practice of exporting Blender scenes to RenderMan RIB files has been around for at least 7 years. It is only in the past 2 years that things have really picked up, and only this past year that our efforts have been taken seriously rather than dismissed as "Yet Another Project".

Not to say that the previous year was bad, not at all; in fact it was a step in the right direction. However, even today this is still in its infancy, and we all have a ways to go before we see more start-ups using these tools to develop their own animations. As mentioned in the previous post, some already have, and we applaud and encourage them, cheering them on: every user of these tools who is not directly involved in the development of Mosaic, Blender, Aqsis or Pixie means that we as a community have at least helped steer others in that direction. The tools we use and develop have also been used for research, in the form of RenderAnts, a GPU-based RenderMan rendering system.

2009 was also the starting point of "Project Widow", something that we wish had been completed by now; due to the complexity, and the simple fact that this has not been done before using these tools, production is still in progress. This project helped drive development in Aqsis, with the multilayer OpenEXR display driver, as well as in RIB Mosaic, with the addition of the matching EXR driver presets. While the production is slow, things are starting to pick up and we are chugging away, hoping that by next year's SIGGRAPH we will have something to present. Regardless of when it gets done, it will get done, some way, somehow.

With Blender 2.50 on the horizon things are taking a turn. If you have downloaded the Alpha release you will notice that the interface has gone through a massive transformation. While an interface is only the presentation of the underlying code, this one is a real departure from the Blender we have all been used to over the past 10 years. What is more important, though, is the Render API, the one thing that many have been waiting years to see come to light. At this point it is not known exactly when this will be added, but development is supposedly in the works according to one of the Blender devs. This is one of the most important steps towards a more solid link between Blender and RenderMan; in fact it is better for ANY render engine, period! Be it Luxrender, MentalRay, VRay... whatever the case may be, this will offer the ability to choose which render engine one wants to use, with better access to the data that was previously blocked by the Python API.

RIB Mosaic will also undergo a massive change, mainly because Eric is forced to in order to continue development: Blender 2.50 uses Python 3.x while Mosaic was developed for Python 2.5.x, so there is a substantial break in the code. Also, according to Eric in his recent post, there will be a change not only in how Mosaic functions but in how we interface with RenderMan. The current Mosaic uses a single-pane system in the Python script window; the shader system alone is a massive series of RSL code that tries to replicate the Blender materials, and the options and settings try to cover every feature of every well-known RiSpec-compliant render engine available. This made development hard for Eric because it went beyond what was expected of the project. According to his latest post, RIB Mosaic will become more or less a framework for others to build upon, be it for Aqsis, Pixie, 3Delight or PRMan. This way he can concentrate on Mosaic itself rather than fixing the minute details for each renderer. "Pipelines" are the per-renderer pieces of this framework: a pipeline for Aqsis shadow maps, for example, will be different from a pipeline for Pixie point clouds (a hedged sketch of what such a pass boils down to follows below). This approach will also allow others to develop pipelines for each rendering engine and then share them. The idea is to change the approach to how Blender and RenderMan are used together: rather than replicating Blender's material and lighting system, it will encourage users to take advantage of the true power of RenderMan from the start. This is very similar to how Maya and Houdini approach the RenderMan interface: the 3D app is just the geometry and animation engine while RenderMan is the primary rendering system, and anything built in the app is designed to be rendered as such. Not only will this encourage new users to really learn RenderMan, it will also help more experienced users unlock its power, as has been done for many years by the likes of Pixar and Industrial Light and Magic (for instance).
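To make the "pipeline" idea concrete, here is a minimal sketch, in plain RIB, of what a renderer-specific shadow map pass boils down to. The file names and values are illustrative; this is not Mosaic's actual output:

    # Hypothetical shadow map pass: render a depth map from the light's
    # point of view, then convert it for use by shadow() lookups.
    FrameBegin 1
      Format 1024 1024 1
      Display "spot1.z" "zfile" "z"        # depth-only output
      Projection "perspective" "fov" [70]  # camera placed at the light
      WorldBegin
        # ... scene geometry as exported ...
      WorldEnd
    FrameEnd
    MakeShadow "spot1.z" "spot1.shad"      # standard RIB; a Pixie point cloud
                                           # pipeline would instead bake data
                                           # with bake3d() in a shader

The point is that every renderer wants a slightly different version of this boilerplate, which is exactly what a shared, per-renderer pipeline can encapsulate.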

So changes all around are happening this year and even more so next year. Each year that I have been involved, to some degree or another, I get more excited about it; not because I am involved per se, but because this has been a desire of mine to see happen at all. My first taste of RenderMan was in 2002, when I was using Lightwave and downloaded 3Delight and Aqsis to use with the Light-R export script. Since then I have not looked back, and I can tell you hands down that anything done now using these tools has surpassed the Lightwave work I played with years ago. That is why I am excited about this: the efforts of many people have not only made this possible, they work very well together and have reached all over the globe. We are not a massive corporation with millions in marketing funds; we are not even an animation studio... most of us are just your average geek programmer or artist with the desire to continue on.

So in 2010 I believe that our efforts will become more noticed, not only by the Blender community but by the rest of the CG community, as well as the industry as a whole.

The image at the top was modeled by Jeremy Birn (yes, of Pixar) and was the Lighting Challenge around this time last year. I never got to complete my render in time, until now. While it is not perfect (forgive me, it was done last minute), it is at least something festive! Not to mention the massive file size and poly count! The tree took up the majority of the render time, and rightly so: each one of those needles is a polygon! Post processing was done in GIMP.

See you next year!!

Thursday, December 17, 2009

KICHAVADI




Well, it looks like our influence and efforts are starting to pay off! Recently a small team of artists in India began developing an animated series using Blender and Aqsis, among other open source tools.

What a good start as well! They have been posting regular updates on their blog, mainly test renders, but they do look very nice! In the past few days alone there have been many posts just on tests of shadows and DOF, so they seem to be getting the hang of Blender and RenderMan quite well!

Hope to see more from these guys and best of luck gentlemen!

Tuesday, October 27, 2009

Update!! - RenderAnts - Interactive REYES Rendering on GPU

This has been floating around the net recently, and it is actually quite impressive, both for what it does and for what it could become. Interactive REYES rendering on a GPU is something a lot of people and studios have been looking for.




CG Society Link

BlenderArtists Link

Authors page:
http://www.kunzhou.net/

Paper:
http://www.kunzhou.net/2009/renderants.pdf

Video:
http://www.kunzhou.net/2009/renderants.wmv

Updated News!

According to a source in the comments of this post, the source code(?) for this is located here: http://research.microsoft.com/en-us/downloads/283bb827-8669-4a9f-9b8c-e5777f48f77b/default.aspx

There have been other similar systems, such as Pixar's LPics, which was presented after Cars was released, as well as Lightspeed, which ILM developed during the Transformers production. The difference is that those used GL shader equivalents of RSL shaders, so neither really did RenderMan-based rendering. Both were very impressive though.

Gelato was also designed for such a purpose but was discontinued after a few years; certain tools did have the ability to convert basic RSL shaders into its own shading language, so in a sense it was a start of what could be. Gelato was developed by Larry Gritz, the same person who developed BMRT, the first non-Pixar RenderMan-compliant renderer. Maybe the legal issues between Gritz and Pixar in previous years were another reason for Gelato being non-REYES based.

RenderAnts is a GPU-based REYES rendering system, taking RIB and RSL code and rendering the resulting image on the GPU rather than with the traditional CPU software we currently use. The ability to get fast rendering feedback is always a great thing; currently the only ways to get it are to render smaller images, turn down the REYES detail settings, or use crop rendering, which renders only a certain region (a minimal RIB sketch of this follows below). That does make an image render faster, but if you are checking details or lighting changes, having to render a new image just to see whether something works is a painfully slow task. This is why RenderAnts is a huge deal. It is not because Elephants Dream was used to showcase the speed difference between normal CPU-based rendering and the GPU, though that was pretty cool to see. Elephants Dream was used mainly because it is Open Content: fully animated scenes that can be used for any reason within legal bounds.
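For reference, here is what that kind of preview setup means at the RIB level; a minimal sketch with illustrative values:

    # Render only the centre quarter of the frame, at reduced quality,
    # for a quick preview:
    CropWindow 0.375 0.625 0.375 0.625   # xmin xmax ymin ymax, in [0,1]
    PixelSamples 2 2                     # fewer samples than a final frame
    ShadingRate 4                        # coarser micropolygon dicing

Even with tricks like these, every change still means a new render, which is exactly the iteration cost an interactive system removes.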

What makes it so interesting for us, the Blender to RenderMan users and developers, is that Mosaic was used to export these scenes. This is why open source Blender to RenderMan is important: it can be used for research, not only production. It is far easier and cheaper to showcase new 3D research with Blender, Mosaic and Aqsis or Pixie, where you have access to the source code, than with closed source commercial software. At best you could write a plugin for Maya if you were building, say, a new form of fluid simulation that used a custom RSL volume shader; and you would only be able to run it on one system, while with open source you can spread copies over a whole network, even at home.

This is the first time Mosaic has been officially used and cited in a published research paper.

If you watch the video, notice that this is NOT real time; it is fast, but it cannot render at even 1 fps. Most frames take at least a few seconds, and the ones that look fast are mostly camera placement or lighting changes; anything really drastic takes a bit longer to render. However, considering that PRMan would take a considerable amount of time to render the same frame from the same data, that says quite a bit. This also means RenderAnts is not going to replace current software for final frame rendering, at least not for a while. The best use for such a system is previewing during production, for the little changes that artists and TDs make. It would cut the time spent on tedious work like shader development dramatically; making 30 renders of minute shader changes is a very time-consuming task. It is not hard to imagine the big boys using this very soon, and it is only a matter of time before a commercial adaptation is released in the next few years.

We just have a nice warm feeling knowing that our work here has helped in this; we were used first. THAT is something.

Tuesday, October 20, 2009

Aqsis 1.6 and Project Widow

Ack! I have been very behind! I recently moved (again) and am also in the process of remodeling a house, so my time has been limited; obviously so, when BlenderNation reports the news before we do (not to mention links to this site as well). Anyway, off to the subject at hand.

Aqsis 1.6



Aqsis has undergone some serious changes since version 1.4, and a lot of them have been to improve its speed and stability. Copied directly from the press release:

General optimisation and performance has been the primary focus for this release, with improvements including:

  • Avoiding sample recomputation at overlapping bucket boundaries.
  • Refactored sampling and occlusion culling code.
  • Enabled "focusfactor" and "motionfactor" approximation by default, for depth-of-field and motion blur effects respectively.
  • Improved shadow map rendering speed.
  • Faster splitting code for large point clouds.

In addition, key feature enhancements have been made with improvements including:

  • Multi-layer support added to OpenEXR display driver.
  • Side Effects "Houdini" plugin.
  • New RIB parser, supporting inline comments and better error reporting.
  • Matte "alpha" support, for directly rendering shadows on otherwise transparent surfaces.
  • Refactored advanced framebuffer (Piqsl).
  • Texturing improvements.
  • Enabled "smooth" shading interpolation by default.
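Two of the knobs mentioned above, "focusfactor" and "motionfactor", hang off the standard GeometricApproximation attribute; a hedged RIB sketch, with illustrative values:

    # Coarsen shading for strongly defocused or fast-moving geometry,
    # trading invisible detail for speed; 1.0 is a typical default.
    GeometricApproximation "focusfactor" 1.0
    GeometricApproximation "motionfactor" 1.0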
Now to the point. One of the main additions to Aqsis, the multilayer OpenEXR display driver, came from a request by the team working on Project Widow. The reason, of course, is that Blender's Compositor can use such files directly, much as it can with its own EXR renders. This was to ease the workflow later on during the composite stage, rather than leaving us a mess of separate image sequences for each and every AOV render we wanted. Also, because of the talks between the Widow team and the Aqsis team, Mosaic was built to handle this very function: the latest CVS version of Mosaic has a much larger menu of display drivers than previous versions. So Blender, Aqsis and Mosaic now work hand in hand at various stages of the pipeline, rather than just at rendering. Since we used Aqsis for preview renders as well, speed and stability were important to us. The Piqsl framebuffer was also a request from those of us working on Widow: we wanted the ability to scroll through images using the arrow keys rather than clicking on each render, which saved us a lot of time when working on previews and rendering dozens of images. We also tested Aqsis quite a bit throughout the process, though now that it is fully released we can use the "production stable" version rather than the daily builds or sources.




Above is an example of the AOV multi-layer EXR renders
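At the RIB level, the display setup behind renders like the one above looks roughly like this; a hedged sketch, since the exact channel names and presets depend on the Mosaic and Aqsis versions:

    # Several AOVs routed into one multi-layer OpenEXR file; the "+"
    # prefix adds an extra display to the same output. The surface
    # shaders declare matching output variables for each AOV.
    Declare "aov_diffuse" "varying color"
    Declare "aov_specular" "varying color"
    Display "widow_shot.exr" "exr" "rgba"
    Display "+widow_shot.exr" "exr" "aov_diffuse"
    Display "+widow_shot.exr" "exr" "aov_specular"
    Display "+widow_shot.exr" "exr" "z"

Each declared output variable then lands in its own layer of the EXR, which Blender's Compositor can pick apart as render passes.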



Composite Nodes


During the months of pre-production on Widow, all of us would gather in an IRC chat room to discuss ideas that we wanted from Aqsis, and to get feedback on how to handle this or that on the rendering end. Planning for a renderfarm began and was tested over the summer, including building a new script tool so that DrQueue could use Mosaic batch output. We also had to design a lot of the assets with Aqsis in mind; by that I mean figuring out how to make Blender work with what we wanted. Some ideas were scrapped simply because of the limits currently imposed by the Python API.

So now that we have covered that...

Project Widow



This short has taken a LOT longer than planned; the idea was to get it done in 3 months starting in May of this year. It is now October. So yes, things are way behind, but that does NOT mean the project has stopped. At the moment it is at a standstill because there are so few of us working on it, and I have had a lot of real life situations that prevented me from devoting as much time to it as I want. There have also been quite a few technical issues. Our proposed "Arachnid" system was not stable enough to be considered workable; it was just not as solid as we had hoped. So we have decided to use SVN once again, and that is still being worked on (issues with speed, mainly); the other hosts I looked at did not offer nearly enough space for what we needed, so we will be using a private server located in Wisconsin belonging to a personal friend of mine.

One of the main issues we encountered was texture maps. When a map points to a file by a non-relative path, it can fail to be found and thus not render. This became frustrating to the point that we decided all surfaces aside from the spider model will be RenderMan shaders rather than collections of images. This also supports our cause, since Blender can do texture maps quite well on its own, but when it comes to displacements nothing beats RenderMan. As there was to be quite a bit of displacement in the short, it only made sense to showcase what RenderMan does well rather than just say "Hey, it can render!" So a lot of work has gone into designing shaders that take advantage of Mosaic's power, not just look good; for example, using the Blender material system to control shader parameters so that different models can share a shader yet each have its own look and feel (a hedged sketch of the idea follows below). The train above is such an example: the main body of the train uses one shader, but the color and subtle pattern differences are controlled by the underlying Blender material. The only parts that do not share this shader are the wheel assemblies, but even those are controlled in their own way by their base Blender material. In all, the entire short uses maybe 12 custom RenderMan shaders, including the displacement shaders; the rest is handled by Mosaic's generated materials.
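As an illustration of the idea (not the actual Widow shader; the parameter names are hypothetical), here is a minimal RSL sketch where baseColor and patternScale would be filled in per material by MOSAIC, so every object sharing the shader can still look different:

    /* shared_paint.sl -- illustrative sketch only */
    surface shared_paint(
        float Ka = 0.05, Kd = 0.8, Ks = 0.3, roughness = 0.1;
        color baseColor = color (0.5, 0.5, 0.5); /* driven by the Blender material's RGB */
        float patternScale = 8; )                /* driven by a Blender material setting */
    {
        normal Nf = faceforward(normalize(N), I);
        /* subtle procedural banding that follows the surface parameterization */
        float band = 0.9 + 0.1 * smoothstep(0.4, 0.6, mod(s * patternScale, 1));
        Ci = baseColor * band * (Ka * ambient() + Kd * diffuse(Nf))
             + Ks * specular(Nf, -normalize(I), roughness);
        Oi = Os;
        Ci *= Oi;
    }

One compiled shader, many looks: the exporter just writes different parameter values into each object's Surface call.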

Blender 2.50 and the future of Mosaic

This is something that needs to be addressed as the timeline to the next version of Blender gets shorter. Mosaic, in its current form, will not work with Blender 2.50. This is due to the reworked Blender's use of Python 3. However, all is not lost, since the Blender devs have started work on the much-requested Render API we have been waiting for. This means Mosaic will need to be rewritten from scratch all over again, something Eric is not too excited to undertake since he spent the past year putting so much effort into what it is now, though we know that when the time comes it will need to be done. This is good news, though, since it will allow users to render everything that can currently be rendered only in Blender (such as particles, animated curves and soft bodies). Currently Mosaic can output about 90% of what Blender can do natively, due to the limits of the data Mosaic can access through Python. This is not a Mosaic-only issue, of course; ALL render exporters in Blender have this limit (with the possible exception of Yafray). One of this site's goals was to prove to the Blender devs that external renderer support is a good thing: it offers users the choice to use something they know rather than only Blender's internal engine. Again, we do not want to say Blender's internal render engine is bad; it is quite an amazing piece of coding and one of the best open source renderers out there. The issue is mainly choice rather than function, and since most visual effects and animation studios use RenderMan for final frame rendering, it only makes sense for Blender to have that option, thus making it more appealing to the high-end market. This site itself has gotten the attention of many such studios, and in the process some have even started to use Blender to RenderMan for their own evaluation or even actual work.

So what does this mean for the future of this site? Well, that is something we have a year to figure out. I do know that things will change; ideas are already being drawn up for the site itself, though this blog will be used in some form or other. I think our goal of public awareness has been achieved; that is obvious when Pixar, LucasArts, Blizzard, Dreamworks and more have stopped in on more than a few occasions. BlenderNation, BlenderArtists, CGTalk and even Blender.org have directed traffic here every single day. This site has gotten Animux some attention too: people who come here have gone on to check out that Linux OS, and in some cases are now working with them on various projects, myself included.

We have come a long way, that is certain, but we also have a long way to go.

Friday, August 21, 2009

Fisheye lens distortion using MOSAIC and Aqsis

Greetings again Blender heads and RenderMan junkies :)

Well, I've been hard at work with MOSAIC and recently needed a good test case for the volume shader I'm currently working on. It just so happened that at that moment someone over at BlenderArtists inquired about lens distortion, so I decided to do a test project and developer blog post demonstrating lens distortion while also testing several areas of MOSAIC. I've been asked to copy that blog post to share with all the good people here also...


From the original developer blog here http://sourceforge.net/apps/wordpress/ribmosaic/

I've recently had someone at BlenderArtists ask whether MOSAIC could do lens distortions. Since that's not a standard feature of Blender I have no built-in solution for it, but knowing it could be done I figured I'd tell him that RenderMan can do it easily and MOSAIC can set it up. As I was writing the reply I realized that I hadn't actually tried this myself, and I'm currently needing a good project to test the new volume shader, so... thus was born this fisheye project :) This post is not intended as a tutorial but just an overview of how I achieved this effect; I'll also include the blend if anyone wants to play with it.

Well, the first thing I did of course was look around for examples of techniques, and as it so often turns out there are several different ways to do this. If you're after just mild image distortion/displacement, then the simplest solution is to just use an image shader to process the image (the same as a post-process filter in Blender's compositor). If you're after something more extreme, such as the 360 degree fisheye effect, then more radical steps are required. One approach is to use a warped plane in front of the camera and use raytracing with surface normal refraction to fake the lens effect. Another technique, which I prefer, is to render the camera as a cube map in separate passes and then combine the maps into an empty beauty pass with an image shader. This approach can see distortions all the way around the camera, has fine-tuned control over image quality, camera rotation and lens distortion, and can even be animated with the same maps as long as the camera translation doesn't change!

I found several example shaders but was surprised to find one written by my friend Chris Foster in the Aqsis example folder! I only made a few small changes to the shader for creative control:

  • I added a rather wide filter to the lens mask to blur its edge

  • I added the ability to flatten the distortion so the lensing effect can be pulled in and out of

  • I added the ability to rotate the forward vector in the cube lookup on the x, y and z axes


The idea behind this technique is simple: use a cube map pass from the camera's position to generate cube faces, then in the beauty pass look up into the cube faces with the image shader to project the warped perspective onto the frame. The first step is to build a standard scene; I decided to use checker displacements on a ground plane and columns around the camera to emphasize the effect. I also decided to test faked soft shadows in a larger, more complex space, and to include my partially rewritten volume shader to produce more diagonal light streaks for effect (the finished shader will be included in the next CVS update). Next I added the fisheye image shader and created a shader fragment.
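For anyone curious what such an image shader looks like inside, here is a minimal RSL sketch of the cube-face lookup idea. This is not Chris's shader from the Aqsis examples; the parameter names, face orientation math and lens mapping are illustrative and untested:

    /* fisheye_sketch.sl -- illustrative imager shader */
    imager fisheye_sketch(
        string pxmap = "px.tex", nxmap = "nx.tex";
        string pymap = "py.tex", nymap = "ny.tex";
        string pzmap = "pz.tex", nzmap = "nz.tex";
        float thetaMax = 360; )   /* total field of view in degrees */
    {
        uniform float fmt[3] = { 640, 480, 1 };
        option("Format", fmt);               /* query raster resolution */
        /* raster pixel position -> [-1,1] screen coords, y up */
        float sx = 2 * xcomp(P) / fmt[0] - 1;
        float sy = 1 - 2 * ycomp(P) / fmt[1];
        float r = sqrt(sx * sx + sy * sy);
        if (r <= 1) {
            /* angular fisheye: radius maps linearly to angle off the view axis */
            float theta = 0.5 * r * radians(thetaMax);
            float phi = atan(sy, sx);
            vector dir = vector(sin(theta) * cos(phi),
                                sin(theta) * sin(phi),
                                cos(theta));
            /* pick the dominant axis and look up that cube face; the exact
               (s,t) flips depend on how the faces were rendered */
            float ax = abs(xcomp(dir)), ay = abs(ycomp(dir)), az = abs(zcomp(dir));
            string face; float fs, ft;
            if (az >= ax && az >= ay) {
                face = (zcomp(dir) >= 0) ? pzmap : nzmap;
                fs = xcomp(dir) / az; ft = ycomp(dir) / az;
            } else if (ax >= ay) {
                face = (xcomp(dir) >= 0) ? pxmap : nxmap;
                fs = -zcomp(dir) / ax; ft = ycomp(dir) / ax;
            } else {
                face = (ycomp(dir) >= 0) ? pymap : nymap;
                fs = xcomp(dir) / ay; ft = -zcomp(dir) / ay;
            }
            Ci = color texture(face, 0.5 + 0.5 * fs, 0.5 - 0.5 * ft);
        } else {
            Ci = 0;   /* outside the lens circle */
        }
        Oi = 1;
    }

The real shader adds the edge filtering, distortion flattening and rotation controls listed above, but the core is just this: screen position to lens direction, lens direction to cube face.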

First off, there are several tricks I had to play on the beauty pass:

  • Enable an empty scene layer. This is so nothing is rendered except the image shader; otherwise you'll waste time rendering objects that are never seen.

  • Create and select a RIBset on the camera with the fisheye shader enabled. This is so the beauty pass uses the fisheye shader while we can force the default RIBset, with no fisheye effect, for the other passes.


Next I created a User Autopass to use as the cube renders with a few filter options applied:

  • Blank the "Layers Scene:" filter. This is so we can specify what layers to use in this pass, otherwise it will use the beauty's empty layer.

  • Set "RIBset Bypass:" to DEFAULT. This makes sure the camera used from the beauty pass is not using the fisheye lens.


An autopass is necessary instead of just a global env pass because better filtering can be achieved in Aqsis using textures than an env map. In this custom pass I made several custom scene RIBsets named as numbers from 0000 to 0005. Then I set this pass in the Project tab to use "RIBSETPASSES" so MOSAIC will export the numbered RIBsets as separate passes and ignore the DEFAULT RIBset. This is so I can use the same scene setup for each cube perspective. Then for each RIBset I used the "Show Autopass Settings" to set up the following:

  • For each RIBset I use one of the cube Camera: Perspectives (Object: nx, Object: px, Object: ny, Object: py, Object: nz, Object: pz). I use the Object instead of World perspectives so the cube faces are relative to the camera's orientation.

  • Set up each "file" display to point to ./Maps/ and use the name of each cube face, e.g. "ny.tif", "py.tif", etc.

  • Set up the "Texture" dialog to convert each file display tif into a mipmapped tex, e.g. "ny.tif" into "ny.tex", etc. (see the RIB sketch just after this list). This is not strictly necessary but produces much better results with the image shader ;)
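For anyone doing that conversion step by hand rather than through MOSAIC's dialog, it is the standard texture optimization call; a hedged RIB sketch, with illustrative wrap modes and filter widths:

    # Convert each rendered cube face into an optimized, mipmapped texture:
    MakeTexture "./Maps/ny.tif" "./Maps/ny.tex" "clamp" "clamp" "gaussian" 2 2
    MakeTexture "./Maps/py.tif" "./Maps/py.tex" "clamp" "clamp" "gaussian" 2 2
    # ... and likewise for nx, px, nz and pz

(Aqsis also ships the teqser command-line tool for the same job.)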


So at this point what's happening is: the user autopass exports each RIBset, using the active camera from the beauty scene with one of the cube face directions each, and optimizes the results into one of 6 images. Now all that has to be done is pull up the fisheye shader in the shader editor, put in each of the images from the cube pass, adjust "thetaMax" to the distortion angle and render the beauty pass.

However, since the cube faces can be reused in a simple rotational animation with minimal render time, I decided to take things further and do a 20 second animation. Also, since the cube faces are static, I decided to try really high quality occlusion maps, shadow maps with faked soft shadows, DOF and volume atmosphere shading. This is because the additional time needed to calculate these passes on the first frame is more than made up for by the really fast render times of the animation in the imager pass (it only has to grab the cube faces and calculate the lookup direction and lensing). I also thought it would be interesting to synchronize settings across the pipeline from Blender to MOSAIC by adding animated compositing effects and by using the same camera controls to drive multiple shader parameters in MOSAIC. In particular I'm animating the "lens" control on the camera and hooking that to the thetaMax shader control, but also grabbing the same lens data and modifying it in a python token to control the lens distortion parameter, and finally using the frame count in another python token to feed the y axis rotation in the cube lookup. As a finishing touch I've animated a spectral lens distortion effect in Blender's compositor; this could fairly easily be done in the image shader, but it gave me a chance to try synchronizing animation in Blender's compositor with RenderMan :) Anyway, here's the video, project file and a few frames of the animation from my gallery...

Here's the YouTube video...


Here's a direct download of the mp4...
http://www.dreamscapearts.com/Public/fisheye.mp4

Here's a frame at 100 degree lens at 0 degrees rotation...


Here's a frame at 200 degree lens at 90 degrees rotation...


Here's a frame at 360 degree lens at 180 degrees rotation...


And if anybody wants to play with the blend, here it is too.
NOTE: I embedded a modified version of MOSAIC that includes the Object:py-nz camera perspectives that are not in CVS yet, so you'll need to run MOSAIC from the text editor!!
http://www.dreamscapearts.com/Public/fisheye.blend

That's it, thanks for reading :)
Eric Back (WHiTeRaBBiT)

Tuesday, July 21, 2009

SIGGRAPH 2009




I decided that, with 2 weeks until this year's SIGGRAPH convention, I should make a post about some of the behind-the-scenes talks between myself and the Animux devs. Since a lot of Blender to RenderMan integration has been done on the Animux distro, Mark Puttnam (the founder of Animux) has asked for some screencasts and imagery that he can show during his presentation.

So I decided that instead of just showing normal screenshots of the default Blender startup screen, why not show something with some punch to it? Such as the one below, a simple test render done for Project Widow in early June.




So the idea is to set up and render at least static frames of Project Widow, if not 10 seconds or so of animation of the spider, the tunnel system and the train. This would not only be the first official test of the pipeline, it would also serve as a small technical demo for the SIGGRAPH presentation. Most likely this preview will not end up in the final short, much as Pixar's first Finding Nemo trailer never appeared in the film. What this will do is draw attention not only to Project Widow itself but to the whole Blender to RenderMan idea. SIGGRAPH has traditionally been the place to present "proof of concept" ideas and papers, showcasing new technologies and amazing artwork. So it only makes sense that after years of painstaking work getting to where we are now, we at least begin to present our efforts at the biggest CG event in the industry.

The downside is that I am not able to attend; I cannot afford it. I am not upset though, since what we are showcasing is more about how Blender to RenderMan works with one OS, Animux, than about how it can work for everyone. It just so happens that Animux is delivering us to SIGGRAPH, and we could not ask for more publicity than SIGGRAPH anyway. If anyone does manage to get to New Orleans this year, please stop by the Animux Birds of a Feather meeting, and take pictures too!

Monday, 3 August

Animux: Free Software for Animators
Animux is an absolutely FREE animation toolset that is used to handle all the tasks of pre-production, production, and post-production stages of a high-quality animation project.

Monday, 11 am - 1 pm
Ernest N. Morial Convention Center
Room 264
Mark Puttnam

Wednesday, June 03, 2009

The ball starts rolling

Since the last post there has been quite a bit of activity on the net in relation to the Blender to RenderMan projects; there is chatter on websites, and our group has seen a very large increase in posts. It seems that we are bigger than we thought, and that our collective efforts are starting to get noticed and pay off. I for one can say that I have underestimated the effect and reach this website has. So I am taking this time to inform people of some of the things that are going on around the world.








Animux
http://www.animux.org

This is a newer Linux distribution that I only recently found (why I only just found it I can't say), and from what I have seen on the website it is much like my idea of a Blender to RenderMan Linux distro. Not only is this available NOW, but they have been following this site. This is right up our alley! This is exactly what we've been trying to influence! The best part is that I don't have to do the hard part, as I have never even tried to make my own Linux distro and wouldn't know where to start. This is here now and only getting better.

So... I am making an unofficial statement that Animux is to be considered the Blender to RenderMan "primary" Linux distro. The reason is that they have done quite a bit to make it workable for artists and TDs alike, for all skill levels, not just the Linux gurus. That was one of the main issues I felt was important. They also have quite a roster of people developing it or advising, some of whom are very important to the success of the open source 3D movement.

One of the coolest things I have seen is their IKEA renderfarm how-to! This is something anyone could, and SHOULD, build.

http://www.animux.org/wiki/index.php?title=Animux_Caseless_%22Ikea%22_Rackmount_Renderfarm



One aspect of the Blender to RenderMan pipeline that got somewhat overlooked is Digital Asset Management, something which is VERY important for any kind of studio. They are working on what is known as ADAM, short for Animation Digital Asset Management. They seem to be well on their way to something very special for us 3D artists.

That is all I can say about that right now ;)

"Project Widow"

This is very much an early announcement (something I was going to hold back a little longer, but I got excited tonight over the Animux finding), though if anyone has been in the forum group section this really is not much of a surprise. This is a short 1-2 minute "test" that some of us have been pulling together, something that has taken off quite fast. It started as a BlenderArtists forum topic debating whether or not Aqsis was being used to render the new Blender Foundation open movie "Durian". Well, no is the answer. Either way, over the next week the subject kept coming up and somehow a test was thought up. But it is not like any normal test: it is a short story, very short, but still a story and not just some walk cycle. I felt it was more important to use an original story to make our statement heard, and that statement is that Blender to RenderMan WORKS.

Over the last couple of weeks a LOT of progress has been made. We have Paul Gregory and Chris Foster from the Aqsis team backing us up, and we are using Aqsis 1.5.0 development builds as the test bed for that version: they are helping us with the technical issues, and we are helping them find some obscure bugs. We also have the help of WhiteRabbit (Eric Back), who is furiously busy writing the next version of Mosaic; we are going to test that version out as well. We have a talented modeller, Daniel Wray, who has built the vast majority of our assets so far, including the main character model. We also have the help of Cedric, an animator, as well as a couple of concept artists and a sound FX engineer. We are still on the lookout for more animators and RenderMan users. This week I am busy with my local film friend punching out the final storyboard so we can start blocking out camera shots. My own tasks are quite varied, but mostly in the shading and TD area: writing the pipeline documentation and testing things in both Blender and Aqsis to make sure that when we get to that stage, it will work without too much trouble.

The short is being done to test the pipeline, to put it through a real-world situation involving a number of people working together on different parts of the "production". The nice part about this short is that we are doing it over the internet, which has also made it very challenging.

For more information and to watch our progress as we plug along on this here is the website.

http://projectwidow.wikidot.com/


Things seem to be starting here, and I have been wishing for this day for a very, very long time, so it is nice to see a little progress.

"It's a very exciting time!"
- a quote from one of my favorite movies.

Sunday, May 03, 2009

Blender to Renderman Artist Tools v 0.5 Released! (UPDATE!!!!)




(EDIT: UPDATE! See below the original post for more info!!!)

Finally this LONG awaited release has come! Sorry for such a long delay everyone; it has been a hectic year for me. Over the past 2 months I have been working hard on testing Blender 2.48 and the recent versions of Aqsis and Pixie to make sure the set can be considered "production stable". I have been on and off with making this ISO over the past year (to many people's dismay, sorry again), but with the latest developments this looks good. Most of the tests I have done have been successful, so I decided to finally put together this ISO for everyone to use; previous versions were not so bug-free. I am listing it as version 0.5 because this is the fifth time I have undertaken such a task: the first time was just for my own use, and each time since has marked another "version" of this toolkit.

Anywho...... on to the goods!!!!

In this release are:

Compiled and sources of Blender 2.48, Aqsis 1.4.2, Pixie 2.2.5
Mosaic 0.2 Beta
Python 2.5.4 (installation for Windows - Linux already has it)
CGKit 2.0
Shaderman 0.7 and ShadermanNEXT
SLer
Shader sources (hundreds of them!!)
Shaderman shader projects
OpenEXR 1.4.0 (Windows install, Windows and Linux source)
OpenEXR 1.4.0 sample images
DrQueue (compiled and source)
Cutter
GIMP
Cinepaint (Linux only!! There is no recent Windows build)
Documentation from Pixars Online Research library as well as SIGGRAPH papers
Crimson Editor (Windows code editor)
Dev C++ and MinGW (Windows only, Linux has gcc which is the basis for all Linux builds)
Python scripts for Blender (various useful ones not released with Blender)
Voodoo (for visual effects camera matchmoving)
Blender files (examples and test files)
Some useful textures


This ISO is available HERE, and we are also working on having it hosted elsewhere on a more permanent basis. This ISO is available free of charge; nobody is making money off of this at all, and we have included all the required licenses (such as GPL and BSD). You are free to host this ISO and redistribute it as we have done; we only require you to credit Blendertorenderman.org as the original source, since it has taken quite a number of man-hours to test the software and put the ISO together. Please note that Pixar and ILM have copyright information that MUST be included with the package; we do NOT want to upset these guys, and it is only respectful that we do this since they pretty much influenced the industry and provided us with the means to do all of this.

Please note that while this is a "production stable" release, there may be unforeseen bugs. Due to the nature of programming on different operating systems there is always the chance that something might be wrong. While this has been tested on Linux and Windows by myself personally, I have experienced some bugs here and there. That is the nature of the game; sometimes things get broken in the mix, and there is a lot of code among all of this software. For the most part, and I mean 95% of the time, this release is stable enough for production, be it for a small studio or an individual.

Also, I am posting my updated Blender to Renderman Pipeline document, originally posted in 2006. It has been revised to reflect current developments but is still being worked on, and should be complete by late May at the latest.

All the development software for Windows, like MinGW and Dev C++, is to be considered AS IS! Like I said, I am not a programmer, but I have included these in the ISO anyway; it is up to the end user to collect the necessary libraries and tools. I have not included any Linux libraries because it is easier to download them as needed from a repository, thus ensuring proper compiling of code. Any Linux programmer will be well versed in the trials of finding the right source if it is not found in the repositories.

Lastly I want to thank the following people for their work and contributions towards this (of course not listed in any order).

Temujin
Eric Back
Ildar Ahmetgaleev
Ton Roosendaal
Paul Gregory and the entire Aqsis development team
Okan Arikan
Alexei Puzikov (ShaderMan!)
Bobby Parker
Malcolm Kesson
Syoyo Fujita
Andrew Silke (for the original Generi Rig!!)
Larry Gritz
Saty Raghavachary
Tal L. Lancaster
Florian Kainz (for developing OpenEXR)
Robin Rowe

And everyone else who has joined our project, offered advice, testing and support!! Without everyone's encouragement none of this would be possible.

Oh yeah, and thank you Google for developing superb software to host our site!!

UPDATE

Due to an oversight an update was required. Sorry for that, and I promise not to make the same mistake again!

Changes :

Code::Blocks added to replace Dev C++
GIMP source added
Python source added
CGKit 2.0 source added
Blend files removed (plan to add them in the next update with BRAT-stable models; some required too much initial setup and didn't render right)

Hosting update

For now a torrent file has been created for this update but we are still hosting the original ISO as well since not everyone uses torrent programs.

BRAT 0.5rc1 (original host)

BRAT Release Update 0.5rc2



RenderMan is a registered trademark of Pixar.
All other products mentioned are registered trademarks of their respective holders.
© 2009 Lucas Digital Ltd. LLC. OpenEXR, Industrial Light & Magic and ILM are trademarks and service marks of Lucasfilm Ltd.; all associated intellectual property is protected by the laws of the United States and other countries. All rights reserved.