Thursday, December 24, 2009

Happy Holidays!




Happy Holidays to all!

We want to wish everyone the best of times and safe travels! This post is a long time coming, considering there has never been a holiday related post here before, but this year is different. Why is it so different from any other year? Well, Blender to Renderman has become a valid project this year, at least we like to think so. It has stood the test of time and criticism, has pushed development in other areas, and more people are becoming aware of these tools and using them for their own needs. Considering that Aqsis is one of three external rendering examples featured on the Blender website, I think the efforts of everyone past and present have started to come to light. While this website is only one effort to bring a community together, the idea and practice of exporting Blender scenes to Renderman RIB files has been around for at least 7 years. It is only in the past 2 years that things have really picked up, and only this past year that our efforts have been taken seriously rather than dismissed as "Yet Another Project".

Not to say that the previous year was bad, not at all; in fact it was a step in the right direction. Even today, though, this is still in its infancy, and we all have a ways to go before we see more start-ups using these tools to develop their own animations. As mentioned in the previous post, some already have. We applaud and encourage them, cheering them on, because every person who uses these tools without being directly involved in the development of Mosaic, Blender, Aqsis or Pixie means that we as a community have helped steer others in that direction. The tools we use and develop have also been used for research, in the form of RenderAnts, a GPU based Renderman visualization tool.

2009 was also the year in which "Project Widow" began, something we wish had been completed by now, but due to the complexity, and the simple fact that this has not been done before using these tools, production is still in progress. This project helped drive development in Aqsis, with the multi-layer OpenEXR display driver, as well as in RIB Mosaic, with the addition of EXR driver presets. While production is slow, things are starting to pick up and we are chugging away, hoping that by next year's SIGGRAPH we will have something to present. Regardless of when it gets done, it will get done, some way, somehow.

With Blender 2.50 on the horizon things are taking a turn. If you have downloaded the Alpha release you will notice that the interface has gone through a massive transformation. While an interface is only the presentation of the underlying code, this one has truly changed from the Blender we have all been used to over the past 10 years. More important, though, is the Render API, the one thing many have been waiting years to see come to light. At this point it is not known exactly when it will be added, but development is supposedly in the works according to one of the Blender devs. This is one of the most important steps towards a more solid link between Blender and Renderman; in fact it is better for ANY render engine, period! Be it Luxrender, MentalRay, VRay... whatever the case may be, this will offer the ability to choose which render engine one wants to use, with better access to the data that was previously blocked by the Python API.

RIB Mosaic will also undergo a massive change, mainly because Eric is forced to in order to continue development: Blender 2.50 uses Python 3.x while Mosaic was developed for the Python 2.5x series, a substantial break in code. Also, according to Eric in this recent post, there will be a change in not only how Mosaic functions but how we interface with Renderman. The current Mosaic uses a single pane system in the Python script window; the shader system alone is a massive series of RSL code that tries to replicate the Blender materials, and the options and settings try to cover every feature of every well known RiSpec render engine available. This made development hard for Eric because it went beyond what was expected of the project. According to his latest post, RIB Mosaic will become more or less a framework for others to build upon, be it for Aqsis, Pixie, 3Delight or PRMan. This way he can concentrate on Mosaic itself rather than fixing the minute details for each renderer. "Pipelines" are how these per-renderer functions are described: a pipeline for Aqsis shadow maps, for instance, will be different from a pipeline for Pixie point clouds. This approach will also allow others to develop pipelines for each rendering engine and share them. The idea is to change the approach to how Blender and Renderman are used together: rather than replicating Blender's material and lighting system, it will encourage users to take advantage of the true power of Renderman from the start. This is very similar to how Maya and Houdini approach their Renderman interfaces: the 3D app is just the geometry and animation engine while Renderman is the primary rendering system, and anything built in the app is designed to be rendered as such.
Not only will this encourage new users to really learn Renderman, it will also help more experienced users unlock its power, as the likes of Pixar and Industrial Light and Magic have done for many years.
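To make the "export Blender scenes to RIB" idea concrete, at its core an exporter walks the scene and writes a plain-text RIB stream that a renderer like Aqsis or Pixie then consumes. The sketch below is a minimal hand-rolled example of what such a stream looks like; it is NOT Mosaic's actual output, and the camera, light, and sphere values are purely illustrative.

```python
# Minimal sketch of the kind of plain-text RIB stream a Blender-to-Renderman
# exporter emits.  Not Mosaic's real output; all names and values are
# illustrative only.

def write_minimal_rib(image_name="frame0001.tif"):
    """Return a tiny RIB document that renders one plastic sphere."""
    lines = [
        'Display "%s" "file" "rgba"' % image_name,   # where the pixels go
        "Format 640 480 1",                          # resolution + pixel aspect
        'Projection "perspective" "fov" [45]',       # camera projection
        "Translate 0 0 5",                           # push the world away from camera
        "WorldBegin",
        '  LightSource "distantlight" 1',            # a simple key light
        "  AttributeBegin",
        '    Surface "plastic"',                     # standard RiSpec shader
        "    Sphere 1 -1 1 360",                     # radius, zmin, zmax, sweep
        "  AttributeEnd",
        "WorldEnd",
    ]
    return "\n".join(lines) + "\n"

print(write_minimal_rib())
```

A renderer would consume the saved file with something like `aqsis scene.rib`; Mosaic's real output is vastly richer (attribute state, motion blocks, shader instances), but the overall structure is the same.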

So changes all around are happening this year, and even more so next year. Each year that I have been involved, to some degree or another, I get more excited about it; not because I am involved per se, but because seeing this happen at all has been a desire of mine. My first taste of Renderman was in 2002, when I was using Lightwave and downloaded 3Delight and Aqsis to use with the Light-R export script. Since then I have not looked back, and I can tell you hands down that anything done now with these tools surpasses the Lightwave work I played with years ago. That is why I am excited: the efforts of many people have not only made this possible but made the tools work very well together, and their reach now extends all over the globe. We are not a massive corporation with millions in marketing funds; we are not even an animation studio.... most of us are just your average geek programmers or artists with the desire to keep going.

So in 2010 I believe our efforts will become noticed more, not only by the Blender community but by the rest of the CG community, and by the industry as a whole.

The image at the top was modeled by Jeremy Birn (yes, of Pixar) and was the Lighting Challenge around this time last year. I never got to complete my render in time, until now. While it is not perfect (forgive me, it was done last minute), at least it is something festive! Not to mention the massive file size and poly count! The tree took up the majority of the render time, and rightly so: every one of those needles is made of polygons! Post processing was done in GIMP.

See you next year!!

Thursday, December 17, 2009

KICHAVADI




Well it looks like our influence and efforts are starting to pay off! Recently a small team of artists in India has begun developing an animated series using Blender and Aqsis, among other open source tools.

What a good start as well! They have been posting regular updates on their blog, mainly a lot of test renders but they do look very nice! In the past few days alone there have been many posts just on tests of shadows and DOF, so they seem to be getting the hang of Blender and Renderman quite well!

Hope to see more from these guys and best of luck gentlemen!

Tuesday, October 27, 2009

Update!! - RenderAnts - Interactive REYES Rendering on GPU

This has been floating around the net recently, and it is quite impressive for what it does as well as for what it could become: interactive REYES rendering on a GPU, something a lot of people and studios have been looking for.




CG Society Link

BlenderArtist Link

Authors page:
http://www.kunzhou.net/

Paper:
http://www.kunzhou.net/2009/renderants.pdf

Video:
http://www.kunzhou.net/2009/renderants.wmv

Updated News!

According to a recent comment on this post, the source code(?) for this is located here: http://research.microsoft.com/en-us/downloads/283bb827-8669-4a9f-9b8c-e5777f48f77b/default.aspx

There have been other similar systems, such as Pixar's LPics, which was featured after Cars was released, and Lightspeed, which ILM developed during the Transformers production. The difference is that those used GL shader equivalents of RSL shaders, so neither really used Renderman based rendering. Both were very impressive though.

Gelato was also designed for such a purpose but was discontinued after a few years. Certain tools did have the ability to convert basic RSL shaders into its own shader language, so in a sense it was a start of what could be. Gelato was developed by Larry Gritz, the same person who developed the first non-Pixar REYES renderer, BMRT. So maybe the legal issues between Gritz and Pixar in previous years were another reason for Gelato being non REYES based.

RenderAnts is a GPU based REYES rendering system, using RIB and RSL code to render the resulting image on the GPU rather than with the traditional CPU software we currently use. Fast rendering feedback is always a great thing, and the only current ways to get it are to render smaller images, turn down the detail features of REYES, or use crop rendering, which renders only a certain region. These do make an image render faster, but if you are concerned about details or lighting changes, having to render out a new image just to see if something works is a painfully slow task. This is why RenderAnts is a huge deal. It is not because Elephants Dream was used to showcase the speed difference between normal CPU based rendering and the GPU, though that was pretty cool to see. Elephants Dream was used mainly because it is Open Content: fully animated scenes that can be used for any reason within legal bounds.
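The "turn down the detail" preview tricks mentioned above map directly onto a few RIB calls. Here is a hedged sketch of a preview prologue an exporter might emit; the specific numbers are arbitrary examples, not recommendations, and a real exporter would derive them per scene.

```python
# Sketch of the standard CPU-side tricks for faster preview renders that a
# system like RenderAnts makes largely unnecessary.  All values here are
# arbitrary examples.

def preview_prologue(scale=0.5, shading_rate=16.0, crop=None):
    """Emit RIB options that trade image quality for render speed.

    scale        -- resolution multiplier (0.5 = quarter the pixels)
    shading_rate -- REYES micropolygon area; higher = coarser shading (default 1.0)
    crop         -- optional (xmin, xmax, ymin, ymax) region in [0, 1]
    """
    width, height = int(640 * scale), int(480 * scale)
    lines = [
        "Format %d %d 1" % (width, height),   # shrink the frame
        "ShadingRate %.1f" % shading_rate,    # shade far fewer micropolygons
        "PixelSamples 1 1",                   # drop antialiasing for previews
    ]
    if crop is not None:
        # Render only a sub-window of the frame, e.g. the area being relit.
        lines.append("CropWindow %g %g %g %g" % crop)
    return "\n".join(lines) + "\n"

print(preview_prologue(crop=(0.25, 0.75, 0.25, 0.75)))
```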

What makes it so interesting for us, the Blender to Renderman users and developers, is that Mosaic was used to export these scenes. This is why open source Blender to Renderman is important: it can be used for research, not only production. It is far easier and cheaper to showcase new 3D research with Blender, Mosaic and Aqsis or Pixie, where you have access to the source code, than with closed source commercial software. With commercial software, the best you could do is write a plugin for Maya, say for a new form of fluid simulation that used a custom RSL volume shader, and you would only be able to run it on one licensed system, while with open source you can have copies spread out over a network, even at home.

This is the first time Mosaic has been officially used and cited in a published research paper.

If you watch the video, notice that this is NOT real time; it is fast, but it cannot render at even 1 fps. At best a frame takes a few seconds; the few that look faster are mostly camera placement or lighting changes, and anything really drastic does take a bit longer to render. Still, considering that the same frame with the same data would take PRMan a considerable amount of time, that says quite a bit. This also means it is not going to replace current software for final frame rendering, at least not for a while. The best use for such a system is previewing during production, for the little changes that artists and TDs make. For something as tedious as shader development it would cut iteration time enormously; making 30 renders of minute shader changes is a very time consuming task. It is not hard to imagine the big boys using this very soon, and it is only a matter of time before a commercial adaptation is released in the next few years.

We just have a nice warm feeling knowing that our work here has helped in this, we were used first. THAT is something.

Tuesday, October 20, 2009

Aqsis 1.6 and Project Widow

Ack! I have been very behind! I recently moved (again) and am also remodeling a house, so my time has been limited; obviously so, when BlenderNation reports news before we do (not to mention links to this site as well). Anyway, on to the subject at hand.

Aqsis 1.6



Aqsis has undergone some serious changes since version 1.4, and a lot of the work has gone into improving its speed and stability. Copied directly from the press release:

General optimisation and performance has been the primary focus for this release, with improvements including:

  • Avoiding sample recomputation at overlapping bucket boundaries.
  • Refactored sampling and occlusion culling code.
  • Enabled "focusfactor" and "motionfactor" approximation by default, for depth-of-field and motion blur effects respectively.
  • Improved shadow map rendering speed.
  • Faster splitting code for large point clouds.

In addition, key feature enhancements have been made with improvements including:

  • Multi-layer support added to OpenEXR display driver.
  • Side Effects "Houdini" plugin.
  • New RIB parser, supporting inline comments and better error reporting.
  • Matte "alpha" support, for directly rendering shadows on otherwise transparent surfaces.
  • Refactored advanced framebuffer (Piqsl).
  • Texturing improvements.
  • Enabled "smooth" shading interpolation by default.
Now to get to the point. One of the main additions to Aqsis, the multi-layer OpenEXR display driver, came at the request of the team working on Project Widow. The reason, of course, is that Blender's Compositor can use these files directly, much as it can with its own EXR renders. This facilitates an easier workflow during the composite stage, rather than a mess of multiple image sequences for each and every AOV render we wanted. Also, because of the talks between the Widow team and the Aqsis team, Mosaic was built to handle this very function: the latest CVS version of Mosaic has a much larger menu selection of display drivers than previous versions. So Blender, Aqsis and Mosaic now work hand in hand at various stages of the pipeline, rather than just at rendering. Since we used Aqsis for preview renders as well, its speed and stability were important to us. The Piqsl framebuffer was also a request from those of us working on Widow: we wanted the ability to scroll through images using the arrow keys rather than clicking on each render, which saved us a lot of time when working on previews and rendering dozens of images. We also tested Aqsis quite a bit throughout the process, though now that it is fully released we can use the "production stable" version rather than the daily builds or sources.




Above is an example of the AOV multi-layer EXR renders
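In RIB terms, an AOV setup like the one shown above boils down to a handful of Display lines: the first is the beauty pass, and each "+"-prefixed Display appends another layer, which a multi-layer driver can pack into a single EXR. The sketch below generates such lines; the channel names are hypothetical examples, not Widow's actual pass list, and a real Mosaic export also wires each AOV into its shaders.

```python
# Sketch of RIB Display lines for multi-layer EXR output.  The AOV/channel
# names are hypothetical examples; a real export declares and fills the AOVs
# its shaders actually compute.

def aov_displays(filename="beauty.exr", aovs=("diffuse", "specular", "shadow")):
    """Return Display lines: one beauty pass plus one appended layer per AOV."""
    lines = ['Display "%s" "exr" "rgba"' % filename]   # primary (beauty) layer
    for aov in aovs:
        # Each AOV must be declared so the RIB parser knows its type...
        lines.append('Declare "%s" "varying color"' % aov)
        # ...and a "+" prefix appends a layer instead of opening a new file.
        lines.append('Display "+%s" "exr" "%s"' % (filename, aov))
    return "\n".join(lines) + "\n"

print(aov_displays())
```

Blender's Compositor can then pull each layer out of the one EXR file, instead of juggling a separate image sequence per pass.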



Composite Nodes


During the months of pre-production on Widow, all of us would gather in an IRC chat room to discuss ideas we wanted from Aqsis and to get feedback on how to handle this or that on the rendering end. Planning for a renderfarm began and was tested over the summer, including building a new script tool so that DrQueue could use Mosaic batch output. We also had to design a lot of the assets with Aqsis in mind; by that I mean figuring out how to make Blender work with what we wanted. Some ideas were scrapped simply because of the limits currently imposed by the Python API.

So now that we have covered that...

Project Widow



This short has taken a LOT longer than planned; the idea was to finish in 3 months, starting in May of this year. It is now October. So yes, things are way behind, but that does NOT mean the project has stopped. At the moment it is at a standstill because there are so few of us working on it, and I have had a lot of real life situations preventing me from devoting as much time as I want to it. There have also been quite a few technical issues. Our proposed "Arachnid" system was not stable enough to be considered workable; it was just not as solid as we had hoped. So we have decided to use SVN once again, and that is still being worked on (issues with speed, mainly). The other hosts I looked at did not offer nearly enough space for what we needed, so we will be using a private server located in Wisconsin belonging to a personal friend of mine.

One of the main issues we encountered was texture maps. When a map points to a file by a non-relative path, it sometimes will not be found and thus not rendered. This became frustrating to the point that we decided all surfaces aside from the spider model would use Renderman shaders rather than a collection of images. This also supports our cause: Blender can do texture maps quite well on its own, but when it comes to displacements nothing beats Renderman. As there was to be quite a bit of displacement in the short, it only made sense to showcase what Renderman does well rather than just say "Hey, it can render!" So a lot of work has gone into designing shaders that take advantage of Mosaic's power, not just look good; for example, using the Blender material system to control shader parameters so that different models can share a shader yet each have its own look and feel. The train above is such an example: the main body of the train uses one shader, but the color and subtle pattern differences are controlled by the base Blender material. The only parts that do not share this shader are the wheel assemblies, and even those are controlled in their own way by their base Blender material. In all, the entire short uses maybe 12 custom Renderman shaders, including the displacement shaders; the rest is all Mosaic.
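The "one shader, many looks" approach described above comes down to instancing the same Surface shader with different parameter values on each object. A sketch of the RIB this produces; "trainpaint" and its parameters are made-up stand-ins for the short's actual shaders, and the values would really come from each Blender material.

```python
# Sketch of sharing one RenderMan surface shader across objects while each
# object's (Blender-material-driven) parameters give it its own look.
# "trainpaint", its parameters, and all values are made up for illustration.

def surface_call(shader, **params):
    """Build a RIB Surface call with inline parameter overrides."""
    parts = ['Surface "%s"' % shader]
    for name, value in sorted(params.items()):
        if isinstance(value, tuple):           # colors -> bracketed triples
            parts.append('"color %s" [%g %g %g]' % ((name,) + value))
        else:                                  # scalars -> bracketed floats
            parts.append('"float %s" [%g]' % (name, value))
    return " ".join(parts)

# Same shader, two looks: an exporter reads these values off each Blender
# material and writes a different shader instance per object.
body   = surface_call("trainpaint", basecolor=(0.55, 0.1, 0.1), wear=0.3)
wheels = surface_call("trainpaint", basecolor=(0.2, 0.2, 0.22), wear=0.8)
print(body)
print(wheels)
```

The payoff is that only one piece of RSL has to be written and maintained, while every object carrying it can still look distinct.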

Blender 2.50 and the future of Mosaic

This needs to be addressed as the timeline to the next version of Blender gets shorter. Mosaic, in its current form, will not work with Blender 2.50. This is due to the use of Python 3 in the reworked Blender. However, all is not lost, since the Blender devs have started work on the much requested Render API we have been waiting for. It means Mosaic will need to be rewritten from scratch, something Eric is not too excited to undertake after spending the past year putting so much effort into what it is now, though we know that when the time comes it will need to be done. The good news is that this will allow users to render everything that can currently be done only in Blender (such as particles, animated curves and soft bodies). Currently Mosaic can output about 90% of what Blender can do natively, due to the limits on the data Mosaic can access through Python. This is not a Mosaic-only issue; ALL render exporters have this limit in Blender (with the possible exception of Yafray). One of this site's goals was to prove to the Blender devs that external renderer support is a good thing, offering users the choice to use something they know rather than just Blender's internal engine. Again, we do not want to say Blender's internal render engine is bad; it is an amazing piece of coding and one of the best open source renderers out there. The issue is choice rather than function, and since most visual effects and animation studios use Renderman for final frame rendering, it only makes sense to have that option for Blender, making it more appealing to the high end market. This site itself has gotten the attention of many such studios, and in the process some have even started to use Blender to Renderman for their own evaluation or even actual work.

So what does this mean for the future of this site? That is something we have a year to figure out. I do know things will change; ideas are already being drawn up for the site itself, though this blog will be used in some form or other. I think our goal of public awareness has been achieved; that is obvious when Pixar, LucasArts, Blizzard, Dreamworks and more have stopped in on more than a few occasions, and BlenderNation, BlenderArtists, CGTalk and even Blender.org direct traffic here every single day. This site has gotten Animux some attention too: people who come here have gone on to check out that Linux OS, and in some cases are now working with them on various projects, myself included.

We have come a long way that is certain but we also have a long way to go.