Monday, March 31, 2008

Blender and Aqsis - MOSAIC test

So I finally have a working Blender-to-RenderMan workstation and have been doing a few tests. I took an example scene that is included with the MOSAIC package and rendered it in both Aqsis and Blender's internal renderer, and while the results were about what I expected (honestly, I expected the Blender render to look nothing like the Aqsis one, not to mention take twice as long to finish), doing some composite work in Blender to see the two side by side was also quite interesting.

Aqsis:



Blender Internal:



And the screenshot:



There are uses for both of these renderers, and in some cases one of them is outright required. Not only was this a test for my own purposes, but it also proves that Blender can still be plenty useful once rendering is done and compositing begins.

It's a very exciting time to be working with all of this software, and without as many frustrating technical issues as there were two or three years ago.

4 comments:

  1. That's one of many things that's good about external renderer integration in Blender. For instance, you can bake object materials to a texture in Blender and use that texture back out through RenderMan in shaders, or the other way around (a rough sketch of the RIB side of that round trip follows below). You can also assemble the RenderMan frames back in Blender's video editor and apply composite/effects nodes to the final animation, exporting as an AVI, MPEG, etc...
    Hopefully when the Blender devs add an API for the exporters this will be even better :)
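
    (A minimal sketch of that round trip, written by hand rather than through MOSAIC: the file names, the teqser call, and the use of the standard paintedplastic shader are assumptions for illustration only.)

        # Minimal sketch: feed a texture baked out of Blender to a
        # RenderMan shader by hand. File names are made up; MOSAIC's
        # real output is far more involved.
        import subprocess

        baked = "baked_diffuse.tif"  # image baked from Blender's UV bake
        tex = "baked_diffuse.tex"    # mipmapped texture for the renderer

        # Aqsis ships a texture preprocessor called teqser; other
        # RenderMan implementations have their own equivalents.
        subprocess.call(["teqser", baked, tex])

        rib = open("frame.rib", "w")
        rib.write('Display "frame.tif" "file" "rgba"\n')
        rib.write('Projection "perspective" "fov" [35]\n')
        rib.write('WorldBegin\n')
        # paintedplastic is one of the standard shaders from the RI
        # spec; its "texturename" parameter modulates the surface color.
        rib.write('Surface "paintedplastic" "string texturename" ["%s"]\n' % tex)
        rib.write('Sphere 1 -1 1 360\n')
        rib.write('WorldEnd\n')
        rib.close()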

  2. You say the results are 'expected'; can you elaborate? I'm intrigued to see the disparity between Aqsis and the Blender internal renderer. In fact, I'd expect the internal to produce better results than that.

  3. By "expected" I mean that there wasn't anything "new" that we RMan users haven't rendered before, I guess. Both renders were done out of the box; I didn't tweak any settings other than the render engine. Hence the reason I tried this: to see what each produced.

  4. Hey Paul, nice to see you here :)

    One of the reasons that Blender and Aqsis are so different here is that this is a Very old example scene. The older system had very little integration with Blender's material and light system. With the work I'm doing now, most light and material settings match very closely. In this particular scene the only things that would not translate cleanly are the hair and particle widths, because Python does not have access to the strand settings in Blender, so I'm using halo settings instead :( There's a rough sketch of that workaround at the end of this comment.
    I'm getting very close to releasing the mapping utilities, at which point I'll begin updating the manual and example scenes :D

    PS: I know your team has been frustrated by the lack of mapping features in MOSAIC, but as of right now I have working:
    - shadow maps (cube point, ortho distant, spot, and area light arrays, deep shadows where supported, and dupli light support)
    - caustic maps
    - occlusion maps (both appended display style and light rig style)
    - IBL world environment maps that can be used with raytraced image-based occlusion and that color the lights in the shadow occlusion (when using a rig instead of an append)
    - cube and mirror environment maps (with fake fresnel reflection and refraction, and DOF and fog to simulate blurry reflect/refract)
    - SSS that works with both mapping and raytracing
    I'm even playing with an idea to use near-clipped blurry DOF cube environment maps applied to diffuse to simulate color bleeding (interesting results so far). All maps can use animation at any time, and all are set up as separate scenes instead of hidden, so there's full control of the pass setup (although some people may not like it this way). A minimal sketch of what one of the shadow-map passes boils down to follows below.
    Anyway, almost everything is working but is still Very rough around the edges :/
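
    (A minimal sketch of the halo-size workaround mentioned above, assuming the Blender 2.4x Python API; the material name, attribute access, curve points, and RIB output are made up for illustration.)

        # Minimal sketch: the 2.4x Python API exposes no per-strand
        # width, so read the material's halo size and use it as a
        # constant width on the exported curves.
        import Blender

        mat = Blender.Material.Get("HairMat")  # hypothetical material name
        width = mat.haloSize                   # halo size standing in for strand width

        rib = open("hair.rib", "w")
        # One linear curve with four control points, made up for the example.
        rib.write('Curves "linear" [4] "nonperiodic" '
                  '"P" [0 0 0  0.1 0.3 0  0.15 0.6 0  0.2 1 0] '
                  '"constantwidth" [%f]\n' % width)
        rib.close()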
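
    (And a minimal sketch of what one of the spot shadow-map passes boils down to in raw RIB, hand-written rather than MOSAIC's actual output: "shadowspot" is the shadow-aware spotlight that ships with Aqsis and most other RenderMan implementations, and all the file names and values here are assumptions.)

        # Minimal sketch of a two-pass spot shadow map, emitted as raw
        # RIB from Python. Pass 1 renders depth from the light's point
        # of view; pass 2 builds the shadow map and uses it to light
        # the beauty render.
        def depth_pass(path):
            f = open(path, "w")
            f.write('Display "light.z" "zfile" "z"\n')  # depth only
            f.write('Projection "perspective" "fov" [40]\n')
            # ... camera transform placed at the light goes here ...
            f.write('WorldBegin\n')
            f.write('Sphere 1 -1 1 360\n')
            f.write('WorldEnd\n')
            f.close()

        def beauty_pass(path):
            f = open(path, "w")
            # MakeShadow converts the z-file into a shadow map texture.
            f.write('MakeShadow "light.z" "light.shd"\n')
            f.write('Display "beauty.tif" "file" "rgba"\n')
            f.write('Projection "perspective" "fov" [35]\n')
            f.write('WorldBegin\n')
            f.write('LightSource "shadowspot" 1 "string shadowname" ["light.shd"]\n')
            f.write('Surface "matte"\n')
            f.write('Sphere 1 -1 1 360\n')
            f.write('WorldEnd\n')
            f.close()

        depth_pass("shadow_pass.rib")
        beauty_pass("beauty_pass.rib")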
