Tuesday, October 27, 2009

Update!! - RenderAnts - Interactive REYES Rendering on GPU

This has been floating around the net recently, and it is quite impressive both for what it does and for what it could become: interactive REYES rendering on a GPU, something a lot of people and studios have been looking for.




CG Society Link

BlenderArtists Link

Author's page:
http://www.kunzhou.net/

Paper:
http://www.kunzhou.net/2009/renderants.pdf

Video:
http://www.kunzhou.net/2009/renderants.wmv

Updated News!

According to a recent comment on this post, the source code(?) for this is located here: http://research.microsoft.com/en-us/downloads/283bb827-8669-4a9f-9b8c-e5777f48f77b/default.aspx

There have been other similar systems, such as Pixar's LPics, which was shown after Cars was released, and Lightspeed, which ILM developed during the Transformers production. The difference is that these used GL shader equivalents of RSL shaders, so neither really used Renderman-based rendering. Both were very impressive though.

Gelato was also designed for such a purpose but was discontinued after a few years. Certain tools did have the ability to convert basic RSL shaders into its own shading language, so in a sense it was a start of what could be. Gelato was developed by Larry Gritz, the same person who developed the first non-Pixar REYES renderer, BMRT. Maybe that is another reason Gelato was not REYES based, considering the legal issues between Gritz and Pixar in previous years.

RenderAnts is a GPU based REYES rendering system, using RIB and RSL code to render the resulting image on the GPU rather than with the traditional CPU software we currently use. The ability to get fast rendering feedback is always a great thing. At the moment the only ways to get it are to render smaller images, turn down the detail settings of REYES, or use crop rendering, which only renders a selected region. These do make an image render faster, but if you are checking details or lighting changes, having to render out a whole new image just to see if something works is a painfully slow task. This is why RenderAnts is a huge deal. It is not because Elephants Dream was used to showcase the speed difference between normal CPU based rendering and the GPU, though that was pretty cool to see. Elephants Dream was used mainly because it is Open Content: fully animated scenes that can be used for any purpose within legal bounds.
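For anyone unfamiliar with those preview tricks, here is a small illustrative RIB fragment showing the kind of settings an artist normally dials down for a faster CPU test render. These are all standard RIB calls, but the file name and values are just examples, not anything taken from RenderAnts itself:

    # Hypothetical preview settings for a quicker test render.
    Display "preview" "framebuffer" "rgb"   # show on screen instead of writing a file
    Format 320 240 1                        # much smaller than the final resolution
    ShadingRate 16                          # far coarser than a final-quality 0.5-1.0
    PixelSamples 1 1                        # minimal antialiasing
    CropWindow 0.4 0.6 0.4 0.6              # only render the region being inspected

Every one of these trades image quality for speed, which is exactly the compromise RenderAnts is trying to make unnecessary.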

What makes it so interesting for us, the Blender to Renderman users and developers, is that Mosaic was used to export these scenes. This is why open source Blender to Renderman is important: it can be used for research, not only production. It is far easier and cheaper to showcase new 3D research with Blender, Mosaic and Aqsis or Pixie, where you have access to the source code, than it is with closed source commercial software. With the commercial route, the best you could do is write a plugin for Maya if you were to make something like, say, a new form of fluid simulation that used a custom RSL volume shader, and you would only be able to run it on one licensed system, while with open source you can have several copies spread out over a network, even at home.

This is the first time Mosaic has been officially used and cited in a published research paper.

If you watch the video, make sure to notice that this is NOT real time. It is fast, but it cannot render at even 1 fps; at best a frame takes a few seconds, and the changes that do look fast are mostly camera placement or lighting changes. Anything really drastic does seem to take a bit longer to render. Still, considering that the same frame with the same data would take PRMan a considerable amount of time, that says quite a bit. It also means this is not meant to replace current software for final frame rendering, at least not for a while. The best use for such a system is previewing during production: the little changes that artists and TDs make, for instance. It would cut the time spent on something as tedious as shader development in half, since making 30 renders of minute changes to a shader is a very time consuming task. It is not hard to imagine the big boys using this very soon, and it is only a matter of time before a commercial adaptation is released in the next few years.

We just have a nice warm feeling knowing that our work here has helped in this. We were used first; THAT is something.

Tuesday, October 20, 2009

Aqsis 1.6 and Project Widow

Ack! I have been very behind! I recently moved (again) and am also in the process of remodeling a house, so my time has been limited; obviously so, when BlenderNation reports the news before we do, not to mention links to this site as well. Anyway, on to the subject at hand.

Aqsis 1.6



Aqsis has undergone some serious changes since version 1.4, and a lot of the work has gone into improving its speed and stability. Copied directly from the press release:

General optimisation and performance has been the primary focus for this release, with improvements including:

  • Avoiding sample recomputation at overlapping bucket boundaries.
  • Refactored sampling and occlusion culling code.
  • Enabled "focusfactor" and "motionfactor" approximation by default, for depth-of-field and motion blur effects respectively.
  • Improved shadow map rendering speed.
  • Faster splitting code for large point clouds.

In addition, key feature enhancements have been made with improvements including:

  • Multi-layer support added to OpenEXR display driver.
  • Side Effects "Houdini" plugin.
  • New RIB parser, supporting inline comments and better error reporting.
  • Matte "alpha" support, for directly rendering shadows on otherwise transparent surfaces.
  • Refactored advanced framebuffer (Piqsl).
  • Texturing improvements.
  • Enabled "smooth" shading interpolation by default.
Now to get to the point. One of the main additions to Aqsis, the multi-layer OpenEXR display driver, came from a request by the team working on Project Widow. The reason, of course, is that Blender's Compositor can use these files directly, much like the way it can with its own EXR renders. This was to make for an easier workflow later on during the composite stage, rather than a mess of separate image sequences for every single AOV render we wanted. Also, because of the talks between the Widow team and the Aqsis team, Mosaic was built to handle this very function; in the latest CVS version of Mosaic there is a much larger menu selection of display drivers than in previous versions. So Blender, Aqsis and Mosaic now work hand in hand in various stages of the pipeline, rather than just rendering.

Since we used Aqsis for preview renders as well, speed and stability were important to us. The Piqsl framebuffer was also a request from those of us working on Widow: we wanted the ability to scroll through images using the arrow keys rather than clicking on each render, which saved us a lot of time when working on previews and rendering dozens of images. We also tested Aqsis quite a bit throughout the process, though now that it is fully released we can use the "production stable" version rather than the daily builds or sources.




Above is an example of the AOV multi-layer EXR renders.
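To give an idea of what this setup looks like on the RIB side, here is a hypothetical fragment; the file and channel names are invented, and the exact options Aqsis's multi-layer driver accepts may differ, but the "+" prefix on secondary Display calls is the standard way to request additional outputs:

    # Hypothetical AOV setup: extra Display calls prefixed with "+"
    # add outputs alongside the main one instead of replacing it.
    Declare "diffuse" "varying color"
    Declare "specular" "varying color"
    Display "widow_beauty.exr" "exr" "rgba"
    Display "+widow_beauty.exr" "exr" "diffuse"
    Display "+widow_beauty.exr" "exr" "specular"

With a multi-layer driver, all of those channels can land in one EXR file that Blender's Compositor reads as separate layers, instead of three separate image sequences.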



Composite Nodes


During the months of pre-production on Widow, all of us would gather in an IRC chat room and discuss features we wanted from Aqsis, and get feedback on how to handle this or that on the rendering end. Planning for a renderfarm had begun and was tested over the summer, including building a new script tool so that DrQueue could use Mosaic's batch output. We also had to design a lot of the assets with Aqsis in mind; by that I mean figuring out how to make Blender work with what we wanted. Some ideas were scrapped simply because of the limits currently imposed by the Python API.

So now that we have covered that...

Project Widow



This short has taken a LOT longer than planned; the idea was to get it done in 3 months, starting in May of this year. It is now October. So yes, things are way behind, but that does NOT mean the project has stopped. At the moment it is at a standstill because there are so few of us working on it, and I have had a lot of real life situations that prevented me from devoting as much time to it as I want. There have also been quite a few technical issues. Our proposed "Arachnid" system was not stable enough to be considered workable; it was just not as solid as we had hoped. So we have decided to use SVN once again, and that is still being worked on (issues with speed, mainly). The other hosts I looked at did not offer nearly enough space for what we needed, so we will be using a private server located in Wisconsin belonging to a personal friend of mine.

One of the main issues we encountered was texture maps. Sometimes when a map points to a file by a non-relative path, the file is not found and thus not rendered. This became frustrating to the point that we decided all surfaces aside from the spider model would use Renderman shaders rather than collections of images. This also supports our cause: Blender can do texture maps quite well on its own, but when it comes to displacement nothing beats Renderman, and since there was to be quite a bit of displacement in the short, it only made sense to showcase what Renderman does well rather than just say "Hey, it can render!"

So a lot of work has gone into designing shaders that take advantage of Mosaic's power, not just look good, such as using the Blender material system to control shader parameters so that different models can share a shader yet each have its own look and feel. The train above is such an example: the main body of the train uses one shader, but the color and subtle pattern differences are controlled by the base Blender material. The only shaders that do not share this are the wheel assemblies, but even those are controlled in their own way by their base Blender material. In all, the entire short uses maybe 12 custom Renderman shaders, including the displacement shaders; the rest is all Mosaic's power.
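As a rough sketch of that idea (the shader name, parameters and archive files here are invented for illustration, not taken from the actual Widow assets), the RIB that Mosaic exports might bind the same surface shader to two train cars with different parameter values pulled from each object's Blender material:

    # One shader, two looks: the parameter values come from
    # each object's Blender material (all names are hypothetical).
    AttributeBegin
        Surface "train_body" "color basecolor" [0.55 0.12 0.10]
                "float patternscale" [1.0]
        ReadArchive "car_01.rib"
    AttributeEnd
    AttributeBegin
        Surface "train_body" "color basecolor" [0.15 0.30 0.12]
                "float patternscale" [2.5]
        ReadArchive "car_02.rib"
    AttributeEnd

The win is maintenance: there is one shader to debug and optimize, while the per-object variation lives in Blender where the artists already work.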

Blender 2.50 and the future of Mosaic

This is something that needs to be addressed as the timeline to the next version of Blender gets shorter. Mosaic, in its current form, will not work with Blender 2.50, due to the reworked Blender's use of Python 3. However, all is not lost, since the Blender devs have started work on the much requested Render API that we have been waiting for. It does mean Mosaic will need to be rewritten from scratch, something Eric is not too excited to undertake after spending the past year putting so much effort into what it is now, though we know that when the time comes it will need to be done. This is good news, though, since it will allow exporters to render everything that can currently be done only in Blender (such as particles, animated curves and soft bodies). Currently Mosaic can output about 90% of what Blender can do natively, due to the limits of the data Mosaic can access through Python. This is not a Mosaic-only issue; ALL render exporters in Blender have this limit (with the possible exception of Yafray).

One of this site's goals was to prove to the Blender devs that external renderer support is a good thing, offering users the choice to use something they know rather than just Blender's internal engine. Again, we do not want to say Blender's internal render engine is bad; it is quite an amazing piece of coding and one of the best open source renderers out there. The issue is mainly choice rather than function, and since most visual effects and animation studios use Renderman for final frame rendering, it only makes sense to have that option for Blender, making it more appealing to the high end market. This site itself has gotten the attention of many such studios, and in the process some have even started to use Blender to Renderman for their own evaluation or even actual work.

So what does this mean for the future of this site? Well, that is something we have a year to figure out. I do know that things will change; ideas are already being drawn up for the site itself, though I know this blog will be used in some form or other. I think our goal of public awareness has been achieved; that is obvious when Pixar, LucasArts, Blizzard, Dreamworks and more have stopped in on more than a few occasions. BlenderNation, BlenderArtists, CGTalk and even Blender.org have directed traffic here every single day. This site has gotten Animux some attention too: people who come here have gone on to check out that Linux OS, and in some cases are now working with them on various projects, myself included.

We have come a long way, that is certain, but we also have a long way to go.