MR's rendering command is called ray; it's run as

  ray [MI file]

In Maya, you can export your scene as a MI file, and then render it outside using ray:

  1. Create a simple scene in Maya.

  2. Open Window > Settings/Preferences > Plug-in Manager and turn on Mayatomr.mll.

  3. Run File > Export All (options). Set File Type to MentalRay, File Format to ASCII, Tabulator Size to 2, and press Export All.

  4. Outside Maya, in a command prompt, cd to the directory where you stored the MI file and type ray [MI file name]. You might see a bunch of error messages; disregard them.


MI format

Here's a barebones MI scene:

 1  link "base.dll"
 2
 3  declare shader
 4    color "mib_illum_lambert" (
 5      color "ambience",
 6      color "ambient",
 7      color "diffuse",
 8      integer "mode",
 9      array light "lights"
10    )
11  end declare
12
13  options "myopt"
14    contrast 0.1 0.1 0.1 0.1
15    samples 0 2
16    object space
17  end options
18
19  camera "mycam"
20    output "rgb" "square.sgi"
21    resolution 400 300
22    aspect 1.333333
23    clip 1 20
24    aperture 1.0
25    focal 1.2
26  end camera
27
28  instance "caminstance" "mycam"
29    transform 1 0  0 0
30              0 1  0 0
31              0 0  1 0
32              0 0 -5 1
33  end instance
34
35  material "mymat"
36    "mib_illum_lambert" (
37      "ambience" 1 1 1,
38      "ambient" 1 0 0
39    )
40  end material
41
42  object "myobj"
43    visible on
44    group
45      # points
46      -1 -1  0
47       1 -1  0
48       1  1  0
49      -1  1  0
50      # vertices
51      v 0 v 1 v 2 v 3
52      # polygon
53      c "mymat" 0 1 2 3
54    end group
55  end object
56
57  instance "objinstance"
58    "myobj"
59  end instance
60
61  instgroup "everything"
62    "caminstance"
63    "objinstance"
64  end instgroup
65
66  render "everything" "caminstance" "myopt"

First of all, base.dll is a shader library. A shader is a little plugin program that can control material appearance, lighting characteristics, and a lot of other stuff in the rendering process. A shader library is a collection of one or more shaders. You're going to learn how to create such libraries later in the course. base.dll contains 53 shaders to help you create general shading effects like common lighting and texture-mapping.

The link command brings a library into the scene. You could type in the whole path of the library, but the common way to go about it is to write just the library file name, and specify the (semi-colon separated) search paths in an environment variable called MI_LIBRARY_PATH.
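For example, on Windows you might set the variable in the command prompt before running ray (the paths below are placeholders -- point it at wherever your Mental Ray libraries actually live):

  set MI_LIBRARY_PATH=C:\mentalray\lib;C:\myshaders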

Next, you have to declare (lines 3 - 11) the name and parameters of the shader that you're going to use in the scene. You could also put the declaration in an external MI file, and then use

  $include [external MI file]

to insert it into the current file. You can check the declarations of the base shaders in [Mental Ray installation directory]\include\base.mi. In fact we could $include this file into our little scene, but I want to show you what a declaration looks like. If you don't want to type out the full path in the $include command, you can use the MI_RAY_INCPATH environment variable to set up the search path.

Lines 13 through 17 are called an options block. This is an important structure in the MI scene; it lists rendering parameters and global switches such as what space to use for the objects. Here, we specify a maximum allowed difference of 0.1 in the values of the four image channels in adjacent samples; if that limit is exceeded more sub-samples will be taken. We also set the sampling rate to between 2^(2*0) = 1 and 2^(2*2) = 16 samples per pixel, and geometric co-ordinates to object space.
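To tie the statements to their effects, here is the same options block again with each line annotated (the comments are my own gloss, not part of the scene file):

  options "myopt"
    contrast 0.1 0.1 0.1 0.1   # resample wherever adjacent samples differ by more than 0.1 in R, G, B or A
    samples 0 2                # between 1 and 16 samples per pixel
    object space               # geometry is interpreted in object space
  end options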

Lines 19 through 26 form the camera block. It obviously lists the output and camera parameters; the parameters to output specify the image format ("rgb" is sgi format) and the output file name respectively; aspect is the image aspect ratio; clip specifies the clipping planes; aperture specifies the width of the film plane; and focal specifies the distance of the film plane from the camera origin.

The camera block only describes the parameters of the camera. The camera itself hasn't been created at this point. To create a camera, you instantiate it through the instance call. Here, "caminstance" is the name of the camera instance and "mycam" is of course the name of the camera block you defined earlier. You can put a bunch of parameters in the instance block to switch visibility/shadow/ray-tracing/motion-blur/etc on or off, but here we only use the transform parameter to apply a transform matrix to the instance. The matrix describes a world-to-object transformation, thus all the numbers are the reverse of their effects. For example, we want to move the camera along the positive z by 5 units, so we set the z translation (the 15th number in the matrix) to -5.

Next, we have the material block. This block packages surface-material, displacement, shadow, volume, etc shaders together, to be applied to objects later. Every material block must contain at least a surface-material shader, and that's what we have in this example, just a lambert shader with a constant red ambient color.

After the material block we set up an object block, which can be used to specify any renderable surface. The visible parameter defaults to off, so you must turn it on explicitly in order to see your object. The group section defines the geometric data of the surface. Here we're making a square polygon; the group section is made up of three sub-sections -- the first sub-section lists the point co-ordinates, the second sub-section lists the vertex references, and the third one builds the polygon faces (and applies a material to each face as well). Note that the space orientation is right-handed, thus a counter-clockwise ordering (from your viewpoint) of the vertices forms a polygon that faces you.

Like the camera block, the object block only defines the object geometry. You need to instantiate it in order to see it. And that's exactly what we do in lines 57-59.

Next, we put everything into a group called "everything". Groups can be nested, so you could, for example, put the "everything" group into another group. Every scene must have at least one global group called the root group. In this scene, "everything" is the root group.

Finally, we hand the root group, the camera instance and the option block (in exactly that order) to the render command.

Now try running ray on the file. You can use the command imf_disp to view the rendered image.
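Assuming you saved the scene above as square.mi (the name is up to you), the two commands would be:

  ray square.mi
  imf_disp square.sgi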

OK, let's try rotating the square by 40 degrees around the x axis; to do that we have to set up a transform matrix for objinstance:

  1  0        0        0
  0  cos(-40) sin(-40) 0
  0 -sin(-40) cos(-40) 0
  0  0        0        1

Remember the transformation numbers are in reverse to what we want to see. Incidentally, you can find out the world-to-object transformation matrix of any transform node in Maya by querying its worldInverseMatrix attribute.

Now insert the following line after line 58

  transform 1 0 0 0 0 0.766044 -0.642788 0 0 0.642788 0.766044 0 0 0 0 1

and render the file again.

By now you should have a feel for the MI file. If you know Renderman's RIB format, you should be able to draw the analogy -- MI is to MR as RIB is to RM (if only RIB were called IM, that statement would've qualified as geek verse) -- but RIB is organized more like a scene description while MI is more like a list of scene parameters.


Spline surface and lighting

Note: for the following exercises, make sure you have the environment variable MI_RAY_INCPATH set to a directory path which contains the file base.mi.

Check out patch.mi in the courseware. In this scene we replace the polygon with a simple(!) spline surface. Don't fret over the complicated syntax of the surface specification; it's unlikely that you would be called upon to write a surface like this by hand. But do take note of line 87, which specifies how the surface is going to be approximated by polygons during rendering. Mental Ray provides a rich set of parameters to control this approximation (or tessellation). The "regular parametric 20 20" method you see here uses 20 (in the u direction) x 20 (in the v direction) polygons to approximate the patch -- reduce those two numbers to 4 and see how it affects the patch's appearance.

Unlike Renderman's RIB, MI does not attach texture co-ordinates to patch vertices; instead it defines a 2D "texture surface" in the parametric space of the patch. The texture surface's bases (in both parametric directions), degrees and number of vertices can be different from those of the actual rendered surface. Texture co-ordinates at the surface point hit by a viewing ray are interpolated from the specified texture vertices, just as spatial co-ordinates are interpolated from the spatial vertices, so you can think of the entire mat of interpolated tex co-ords as a sort of spline surface.

Note that this time we assign a material to the surface instance rather than to the surface definition. Objects that take their materials in their instances should contain the tagged flag (line 63).

Lights are defined with light...end light blocks, and then instantiated as usual with instance...end instance blocks. Because of the way the standard illumination shaders are written, they all have a parameter called lights which takes an array of light instances; usually only the specified lights will be involved in the shading calculation.

lights.mi contains the three kinds of simple light that you can create in Mental Ray. When you define just an origin position in the light definition, you signal to Mental Ray that you want to create a point light; if you define only a direction vector, then you'll get a directional or infinite light; if you define both origin and direction, you'll get a spot light.
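As a minimal sketch (assuming base.mi has been $included so the light shaders are declared; the values are placeholders), a point light and its instance would look something like this -- remember to list the light instance in the material's lights array and to add it to the root group:

  light "mylight"
    "mib_light_point" (
      "color" 1 1 1
    )
    origin 2 3 2
  end light

  instance "lightinstance" "mylight"
  end instance

  material "mymat"
    "mib_illum_lambert" (
      "ambience" 0.2 0.2 0.2,
      "ambient"  0.2 0 0,
      "diffuse"  1 0 0,
      "mode" 0,                     # see base.mi for the meaning of mode
      "lights" ["lightinstance"]    # only this light illuminates the material
    )
  end material

  # ...and add "lightinstance" to the root instgroup alongside the other instances.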

Point and spot lights can be turned into area lights by including a shape in their definitions. We'll talk more about this at the end of the next section.


Shadow

Lights can cast simple depthmap shadow or raytraced shadow.

Depthmap shadow is enabled by the following steps.

Furthermore, there're a few other options that you'll most likely want to set:

For an example of the whole shenanigan at work, check out shadowmap.mi.
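Roughly, the setup boils down to statements along these lines (the values are placeholders and the statements are from memory of the manual; shadowmap.mi has the authoritative version):

  options "myopt"
    shadow on          # enable shadows at all
    shadowmap on       # enable depth-map shadows globally
    # (other options as usual)
  end options

  light "mylight"
    "mib_light_spot" (
      "color" 1 1 1,
      "shadow" on                # the light shader must be told to cast shadows
    )
    origin 0 4 4
    direction 0 -0.7 -0.7
    spread 0.8                   # cosine of the outer cone angle
    shadowmap on
    shadowmap resolution 512     # size of the depth map
    shadowmap softness 0.02      # blur at the shadow edge
    shadowmap samples 8          # samples used to produce the soft edge
  end light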

Note: the ray command has a "-shadowmap only" option which generates only shadow maps without rendering the color image.

Raytraced shadow is simpler to set up, and is the preferred way to make shadow in Mental Ray. All you need to do is

Snappy, isn't it? See the example in rtshadow.mi. BTW the light's factor parameter controls the brightness of the shadow.
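A sketch of the usual ingredients (again treat this as illustrative -- rtshadow.mi has the real thing): shadows and ray tracing enabled in the options, and the light shader's shadow parameter switched on.

  options "myopt"
    shadow on
    trace on
    # (other options as usual)
  end options

  light "mylight"
    "mib_light_point" (
      "color" 1 1 1,
      "shadow" on,
      "factor" 0.5      # roughly: 0 = pitch-black shadow, 1 = no visible shadow
    )
    origin 0 4 4
  end light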

For transparent objects, Mental Ray has an elaborate shadow-casting mechanism based on shadow shaders placed in material definitions. The file transhadow.mi contains an example of the setup. In this file, the surface illumination shader for the cylinder is separated out from the material definition (lines 48-55) and fed into a shader called "mib_opacity" (lines 58-61) which modulates the opacity on the surface. We also add a shadow shader called "mib_shadow_transparency" to the material definition -- this is the shader that controls the color of the transmitted shadow rays. The mode parameter, when set to 3, simply tells the shader to shut up and do its work. Seriously though, don't worry about this mode thing, especially since you're going to have the power to create your own shaders later; trust me -- setting it to 3 will take care of almost all your transparent shadowing needs.

If you make the shadow-casting light an area light, it'll cast blurred raytraced shadow. To impart area to a light, you simply insert one of

rectangle u_vector v_vector #_of_u_samples #_of_v_samples

disc normal_vector radius #_of_u_samples #_of_v_samples

sphere radius #_of_u_samples #_of_v_samples

cylinder axis_vector radius #_of_u_samples #_of_v_samples

object "instance" #_of_u_samples #_of_v_samples

into the light definition after the shader part. See blurshadow.mi for an example.
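For example, a rectangular area light spanning 2 units along each edge, sampled 3 x 3 times (the shape values here are just placeholders):

  light "arealight"
    "mib_light_point" (
      "color" 1 1 1,
      "shadow" on
    )
    origin 0 4 0
    rectangle 2 0 0  0 0 2  3 3    # u edge vector, v edge vector, u samples, v samples
  end light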


Texture mapping

In textured.mi, we assign a texture map to the diffuse parameter of the patch's material.

Mental Ray takes a piecemeal approach to constructing shaders, just like how you would construct shaders in Maya by stringing different nodes together in the Hypershade. First, we bring in a texture image and call it pic1 (line 52). Then we choose a set of texture co-ordinates to prepare for the mapping (line 54); setting "select" to 0 chooses the first -- and in this case the only -- texture surface in the patch. Next, we sample the texture image with the texture co-ordinates generated by mib_texture_vector (line 56). And finally we assign the sampled color to the diffuse color of the material.

You can set a shader parameter to the return value of another shader by writing

"parameter" "shader function" (function's parameters)

or if the shader function has a named instance,

"parameter" = "shader instance"


Displacement mapping

In a material definition, besides the mandatory surface shader specified at the beginning, you can also specify additional displacement, shadow, volume, environment (for environment mapping), contour (for outline rendering) and photon (for photon handling in global illumination rendering) shaders.

Check out displaced.mi to see how we apply a displacement map. We are still stringing together shaders to feed, finally, a function called mib_color_intensity to the displace slot of the material; this function takes a color input, averages the RGB channels to form a grayscale value, and then scales this value by a factor. Mental Ray expects a displacement shader to return a float value; if the shader returns a color, only the red channel will be taken. Each tessellated vertex will then be moved along the normal by this value -- that's just the renderer's default behaviour; in a custom-written displacement shader you can move the vertex anywhere, not just along the normal.
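Schematically, the tail end of the chain looks something like this ("displacetex" is the texture-lookup instance from the scene; the mib_color_intensity parameter names are assumed from the description above, so check base.mi for the exact declaration):

  shader "dispvalue" "mib_color_intensity" (
    "input" = "displacetex",    # color from the displacement texture lookup
    "factor" 0.3                # scales the resulting grayscale displacement
  )

  material "mymat"
    "mib_illum_lambert" (
      "diffuse" 0.8 0.8 0.8,
      "mode" 0,
      "lights" ["lightinstance"]
    )
    displace = "dispvalue"      # the displacement slot of the material
  end material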

At this point we should take note of a salient difference between Renderman's shaders and Mental Ray's shaders: the type of a Renderman shader (surface, displacement, etc) is set when you write the shader code, and afterwards you can only use a surface shader in a Surface statement, a displacement shader in a Displacement statement, etc. On the other hand, there're no different types of shaders (when you write them) in Mental Ray; all shaders are just functions that return some value and/or change some aspects of the scene state. These functions only become surface shaders, displacement shaders, or whatever by virtue of where you call them in the MI scene.

The patch here has an additional approximate statement at line 103 which controls the tessellation at displaced regions. Like the main approximation, this displacement approximation has many combinations of parameters to achieve very fine control over the tessellation. Here, we simply use a distance condition to specify that tessellation continues until the difference in the heights of adjacent sub-vertices is not more than 0.002. We also specify that each original triangle must be subdivided at least 0 times and at most 3 times.

Displaced objects must have a max displace [distance] parameter to add a spatial allowance for the displacement. At line 78, we give an allowance of 0.1 for the distance by which vertices will be displaced.
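In the object block, the two statements sit roughly like this (the surface name "surf1" and the placement are illustrative; displaced.mi shows the exact syntax):

  object "mypatch"
    visible on
    tagged on
    max displace 0.1                                     # room reserved for displaced vertices
    group
      # (control points, surface definition, texture surface...)
      approximate surface regular parametric 20 20 "surf1"
      approximate displace distance 0.002 0 3 "surf1"    # subdivide 0-3 times until height differences <= 0.002
    end group
  end object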


Phenomenon

You can package shaders together to form a composite shader called a phenomenon. For example, in the previous scene, we can package tcoord, colortex, displacetex and mymat into a material phenomenon and assign it to the patch. Let's see how it's done in the file phenomenon.mi.

We begin setting up a phenomenon by declaring (line 62)

declare phenomenon

followed by its return type, name, and parameters. Then we set up a bunch of co-operating shaders and a material as usual. And just before the end of the declaration, we choose the material "mymat" as the root of the phenomenon. A phenomenon's root is what it's all about, its meat, if you like. This is obvious from examining the body of the phenomenon declaration: everything in there leads to "mymat".

The declared phenomenon is just like any declared shader -- you can instantiate it with a shader statement and then put the instance at wherever it's applicable, as in the material call in the patch instance specification (line 128).

Note that we inform the phenomenon about the identities of the textures and lights by passing them in as parameters, and then within the phenomenon declaration we assign them to the local parameters of the sub-shaders with statements of the form

local_parameter = interface phenomenon_parameter

In the phenomenon body, you can put not only shader definitions, but also lights and instances, and even change certain scene options.
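Here is a skeleton of such a declaration, paraphrased from memory rather than copied from phenomenon.mi (names and parameters are illustrative):

  declare phenomenon
    material "mymaterial_phen" (
      color texture "colmap",
      array light "lights"
    )
    shader "tcoord" "mib_texture_vector" ("select" 0)
    shader "colortex" "mib_texture_lookup" (
      "tex" = interface "colmap",       # wired to the phenomenon's own parameter
      "coord" = "tcoord"
    )
    material "mymat"
      "mib_illum_lambert" (
        "diffuse" = "colortex",
        "mode" 0,
        "lights" = interface "lights"
      )
    end material
    root material "mymat"               # the root -- the material everything leads to
  end declare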


Animation

Rendering an animation sequence is simply a repeated process of changing a bit of the scene state and rendering a frame. The mechanism of changing just what needs to be changed and leaving the rest alone is called incremental change. The file animation.mi contains an example of incrementally changed animation. As you can see, for every frame you just add the keyword "incremental" in front of anything that needs to be animated, set the animated parameter to a new value, and render.
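A two-frame sketch of the idea, continuing the square scene from the beginning (the frame file name is made up):

  # frame 1
  render "everything" "caminstance" "myopt"

  # frame 2: move the square and write to a new file
  incremental camera "mycam"
    output "rgb" "square.2.sgi"
  end camera

  incremental instance "objinstance" "myobj"
    transform 1 0 0 0  0 1 0 0  0 0 1 0  -0.2 0 0 1   # world-to-object: moves the square +0.2 in x
  end instance

  render "everything" "caminstance" "myopt"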


Motion blur

You can cause the transformation of any object/light/camera instance to be motion-blurred by adding a shutter 1 parameter to the option block and appending a motion transform parameter to the instance's definition. For example, look at line 11 and lines 98-102 of xformblur.mi. At shutter open time, the first transform matrix will be registered, and at shutter close time, the motion transform matrix will be registered. A blurred trail will be rendered in between these two extreme positions.
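Stripped down, the instance part looks like this (values are placeholders; remember the matrices are world-to-object, so the negative x translation below moves the object toward positive x):

  # in the option block:  shutter 1
  instance "objinstance" "myobj"
    transform        1 0 0 0  0 1 0 0  0 0 1 0   0   0 0 1   # position at shutter open
    motion transform 1 0 0 0  0 1 0 0  0 0 1 0  -0.5 0 0 1   # position at shutter close
  end instance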

Like in any renderer, in order to create a good-looking blur, you must increase the image sampling rate. As you know, the basic sampling rate is controlled by the contrast values in the option block. But there's another contrast parameter called time contrast which controls the sampling in time (ie. the course of the motion blur). The default value of this parameter is 0.2, which is probably too high, yielding a blur that's usually too grainy. You may find that the samples values need to be increased as well. All this pushing up of the sampling will increase rendering time, so go easy on it.

For curved motion blur (for example, in a spinning propeller blade), you can specify up to 15 intermediate blur "steps" in the course of the motion to better approximate the shape of the blur. You do this via a motion steps [number of steps] parameter in the option block.
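So a blur-friendly option block might contain something like this (the numbers are illustrative):

  options "myopt"
    shutter 1
    contrast 0.05 0.05 0.05 0.05
    time contrast 0.05 0.05 0.05 0.05   # finer temporal sampling than the 0.2 default
    samples 0 2
    motion steps 5                      # intermediate steps for curved blur
    object space
  end options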

You can also motion-blur object shape change by attaching motion vectors to the object vertices. See movecblur.mi for an example: at line 85, two more vector co-ordinates are defined (make a mental image -- unintentional pun -- of their directions), and then in the following vertex list they are attached to vertices 11 (at the right side of the sphere) and 39 (at the left side) respectively with the "m" keyword. You can attach up to 15 motion vectors for every vertex to create a curved blur path.
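As a smaller illustration on our square (the motion vector value is arbitrary): the extra vector goes into the same list as the points, and the "m" keyword references it by its index in that list.

  group
    # points
    -1 -1  0
     1 -1  0
     1  1  0
    -1  1  0
    # a motion vector -- the 5th vector in the list, index 4
    0.5 0 0
    # vertices: two of them get the motion vector attached
    v 0
    v 1 m 4
    v 2 m 4
    v 3
    # polygon
    c "mymat" 0 1 2 3
  end group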


Depth of field and other lens effects

Mental Ray's approach to creating depth of field (dof) effect is more general than other renderers' approach: the effect is implemented as a lens shader, which is a shader attached to the camera to manipulate the directions and colors of the viewing rays. A standard shader called physical_lens_dof in the library physics.dll can help you create dof conveniently.

See dof.mi for example.

Note how the lens shader is attached to the camera (line 21) -- BTW you must remember to link the library physics.dll and include the declaration file physics.mi. The shader physical_lens_dof accepts two parameters; the first one sets the focus distance along the negative z axis in camera space; the second one sets the blur amount.
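A sketch of the camera side of things (the parameter values are placeholders; "plane" and "radius" are, as far as I recall, the two parameter names):

  link "physics.dll"
  $include "physics.mi"

  camera "mycam"
    output "rgb" "dof.sgi"
    resolution 400 300
    aspect 1.333333
    focal 1.2
    aperture 1.0
    lens "physical_lens_dof" (
      "plane" -5,       # focus plane at z = -5 in camera space
      "radius" 0.1      # larger radius = stronger blur
    )
  end camera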

Even more than with motion blur, you need to increase the image sampling rate to get a good-looking dof effect. Check out the contrast and samples values in the option block.

Side note: notice this time the four object instances are placed into an instance group and the material is applied to this group rather than to the individual group members. This demonstrates the mechanism of material inheritance, whereby a material gets passed down the instance hierarchy to every member. If the member defines its own material, it will overwrite the inherited material. Object flags and parameters such as visible and shadow (which controls shadow casting) can be inherited down the hierarchy as well.
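Schematically (instance and group names here are made up), the material rides on the instance of the group and trickles down to every member:

  instgroup "fourobjects"
    "objinst1"
    "objinst2"
    "objinst3"
    "objinst4"
  end instgroup

  instance "fourinstance" "fourobjects"
    material "mymat"        # inherited by every member that doesn't define its own material
  end instance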


Contour rendering

Open the file contour.mi. Note that you must link contour.dll and include contour.mi in order to use the standard contour shaders.

To render contour or line drawing, you must set up a contour store shader (line 11) and a contour contrast shader (line 12) in the option block, and also a contour shader in the material definition (line 44). You must also add a contour output shader to the camera (line 17).

The contour store shader stores the data (eg. color, depth, normal) necessary for the following contrast shader to decide where to put the lines. The contrast shader compares this data between pairs of image samples, and if a pair differs by more than a certain threshold, a contour line will be placed between them. The material's contour shader determines how thick the lines are and what color to draw them in. And finally the contour output shader draws the lines into the color frame buffer or creates a PostScript file.

For the contour contrast shader in this scene, we set the "zdelta" parameter to 0.1, meaning a line will be placed between any two samples that differ in their distances from the camera by more than 0.1; we also set "ndelta" to 8, meaning a line will be placed between surface samples whose normals differ by more than 8 degrees; the "contrast" parameter specifies whether to draw an outline around shadow regions; and "max_level" specifies the raytracing level at which contours should be calculated -- it defaults to 0, so you must change it in order to see any contour at all.

For the contour shader in the material, we set the line thickness to 0.2 units and the line color to black with a full alpha. The alpha must be set to 1 because the contour lines will be composited on to the background using this value. In addition to contour_shader_simple, Mental Ray provides 9 more contour shaders in the standard library to do special effects like variable line width and depth-fading.

Finally, we set up an output statement in the camera to store the computed lines into a special frame buffer. The storing is regulated by a shader called contour_only which draws the lines over a parameterized background color. The standard library provides another output shader called contour_composite which combines the line drawing with an image of the scene rendered in the normal way, and yet another shader called contour_ps which outputs a PostScript file. This output statement only stores the contour image in a frame buffer; we need to set up another output statement after it (line 18) to dump the frame buffer to an image file.


Caustics

Caustics are light patterns formed by the focusing of reflected or refracted light. Mental Ray models caustics through a mechanism called photon tracing, whereby light packets called photons are emitted from lights and then reflected and transmitted around the scene. These photons finally get deposited on diffuse surfaces (which can't focus the light anymore) to form caustic patterns.

The following picture was rendered from caustic.mi. In this scene we use shaders only from the physics.dll library. These shaders were designed with physical accuracy in mind, so you can get very realistic shading effects from them.

To render caustics, you must do the following steps:

Notice we didn't make the refractive material transparent. Mental Ray makes a distinction between transparent ray and refractive ray; the former goes straight through the objects without bending and does not require raytracing, while the latter deflects when it goes from one environment to another of a different density -- an effect that requires raytracing. If we had made the material transparent, both transparent and refractive effects would be visible. Try it out and see how strange it looks. For a similar reason we didn't make the shadow transparent -- the caustics already take care of making the shadows appear transparent.


Global illumination

Mental Ray can simulate light that bounces around a scene to illuminate objects indirectly, just like real light does. This kind of illumination is popularly known as global illumination (GI). The simulation is based on photons too.

See gi.mi for example.

To render global illumination, you must

GI gives lights a more palpable presence and subtly heightens realism. But these things come at a price -- a high-quality GI-enabled scene renders at least 5 times (and often more than 10 times) slower than usual. GI quality is proportional to the number of stored and sampled photons in the scene. Photon storage is controlled by the photon emission statement in the light, while photon sampling is controlled by the photon accuracy statement in the option block.

[Comparison renders (2 x 2 grid):
  top left:     globillum off
  top right:    light: globillum photons 100000   option: globillum accuracy 1000 0.7
  bottom left:  light: globillum photons 500000   option: globillum accuracy 5000 0.7
  bottom right: light: globillum photons 500000   option: globillum accuracy 5000 2]

Notice the indirect illumination looks blotchy when you don't have enough photons (top right picture, the green block), and spread-out when you increase the accuracy radius (bottom right picture, the feet of the yellow pipe). The rendering time increases dramatically from the top left picture to the bottom right picture.
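The two statements sit in the light and option blocks respectively, roughly like this (the energy value is a placeholder and depends entirely on the scene scale):

  light "gilight"
    "mib_light_point" ("color" 1 1 1)
    origin 0 4 0
    energy 500 500 500            # photon energy
    globillum photons 100000      # photons stored for global illumination
  end light

  options "myopt"
    globillum on
    globillum accuracy 1000 0.7   # photons sampled per shading point, and the sampling radius
    # (other options as before)
  end options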

There's a faster alternative to photon-based global illumination called final gathering (FG). Here's how it works: a bunch of rays are shot out from every shading point to sample the average color of nearby surfaces. This color is then added to that shading point's base color. If those nearby surfaces have stored photons, the colors of the photons are sampled; if they don't have stored photons, their surface colors are sampled directly.

The file fgwphoton.mi contains an example of final-gathered stored-photon global illumination. As you can see, this time we can get a decent-looking GI with a much lower number of stored photons (line 33). We also don't need a globillum accuracy statement because the finalgather accuracy statement will take care of the photon sampling (more about this in the next paragraph).

FG is turned on by putting a finalgather on statement in the option block (line 8) and turning on the trace flag in all the participating object instances. FG quality depends on the two numbers in the finalgather accuracy statement in the option block (line 9): the first number specifies the number of probe rays shot from each shading point; of course, the more rays you send out, the more accurate the result will be, but usually you don't have to go much further than 2000 rays. The second number specifies the maximum probe distance, which decides how spread-out the FG result is going to be.
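Condensed, the FG-related statements look like this (the numbers are illustrative):

  options "myopt"
    finalgather on
    finalgather accuracy 500 1.0   # probe rays per shading point, maximum probe distance
    # (contrast, samples, etc. as usual)
  end options

  instance "objinstance" "myobj"
    trace on                       # participating instances need the trace flag
  end instance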

Another scene, fg.mi, demonstrates that you don't even need photons in order to get the FG effect. As I mentioned earlier, without photons the FG computation simply samples the surface colors in the surroundings. This allows you to get a quick-and-dirty, close-range GI effect.