misner >> Poly Texturing Primer
1) Texture Projections and Clusters.
Creating a texture projection in XSI creates both a texture support object and a cluster.
The support is a 3D object describing your texture projection. The support is a child of the object it is projecting onto, but there's nothing stopping you from parenting it differently or constraining it to other objects.
Whenever you create a support, it also creates a sample cluster holding the projection's UV coordinates. This cluster is usually created in the object's Clusters folder, but that may not be the case if the geometry was imported. When you move your texture support around, it changes the UV coordinates in that cluster. The link between the projection and the cluster is the texture projection operator found in that cluster. If you select the Texture_Projection and hit Freeze, it will remove the link between the support and the cluster. XSI supports multiple UV coordinates (and therefore simultaneous projections), so you could have many clusters holding UV's.
2) Poly, Point, Sample Component Selections
The diagram below is an example of different component selections. In a component selection the object is yellow and the components (in this case polys, points, and samples) are highlighted in red. Component selections have brackets after them showing the component ids and must be treated differently in scripting than object (white lines) selections.
In XSI Polygons are the faces that make up Polygon meshes.
Points (or vertices) are the blue dots (red when tagged) where polygon edges meet.
Each sample point is the corner of a polygon where it meets a Point. Below: the point shown is the intersection of 4 different polygons, so it contains 4 sample points: sample[5,8,16,17].
Sample points are slightly confusing in version 1.5 of XSI because their graphic representation is very similar to points. Their representation in the Paint module of 3.X was a bit clearer:
The four green lines above represent the sample points on the poly, Sample[3-6].
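The point/sample relationship above can be sketched in plain Python (not XSI scripting; the 2x2 quad layout and ids are made up for illustration):

```python
# Conceptual sketch: each corner of each polygon is one sample point, so a
# vertex shared by 4 quads holds 4 distinct sample points.

# Four quads arranged in a 2x2 grid over a 3x3 vertex grid; vertex 4 is the center.
polygons = [
    [0, 1, 4, 3],
    [1, 2, 5, 4],
    [3, 4, 7, 6],
    [4, 5, 8, 7],
]

# Enumerate samples as (sample_id, polygon_index, vertex_index).
samples = []
for poly_index, poly in enumerate(polygons):
    for vertex_index in poly:
        samples.append((len(samples), poly_index, vertex_index))

# The center vertex (4) appears in all four polygons, so it owns 4 samples.
center_samples = [sid for sid, _, v in samples if v == 4]
print(center_samples)  # → [2, 7, 9, 12]: one sample per adjacent polygon
```

This is why moving a point affects all its samples, while each sample can carry its own texture data.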
3) Relationship between Samples and UV's.
Because XSI supports multiple sets of UV data, there's a need to distinguish between the position on the object and the data itself. In XSI:
sample points: describe a polygon corner on an object and its position (on the object and in space).
UV's: are the coordinates of a sample point on a texture.
Because you can have many sets of UV's, each sample point can hold many different UV coordinates.
UV's actually hold 3 coordinates: U, V and W. Although there is currently no UI (user interface) to manipulate texture coordinates in three dimensions, it is possible through scripting.
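One way to picture the sample/UV relationship is as separate tables keyed by sample id, one per UV set (a plain-Python sketch; the set names and values are invented, and XSI actually stores this in cluster properties):

```python
# Conceptual sketch: each UV set maps sample ids to (U, V, W) triples, so the
# same sample point holds different coordinates in each projection's set.
uv_sets = {
    "Front_Projection": {0: (0.25, 0.50, 0.0), 1: (0.75, 0.50, 0.0)},
    "Side_Projection":  {0: (0.10, 0.40, 0.0), 1: (0.90, 0.60, 0.0)},
}

# Sample 0 exists once on the object, but carries per-set coordinates.
front_u, front_v, front_w = uv_sets["Front_Projection"][0]
side_u, side_v, side_w = uv_sets["Side_Projection"][0]
print(front_u, side_u)  # → 0.25 0.1
```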
4) UV manipulation
Using the XSI Texture Editor is a big topic well documented in the manuals so I'm only going to touch on it briefly. Two key things to know are:
1) The vertex bleeding button determines whether you select sample points on the same vertex together (both when tagging and when selecting a sample point).
2) To switch between your different UV's for editing, use the UV's tab in the texture editor:
My main technique for moving points around in the UV editor is to turn vertex bleeding off and tag the groups of points I want to translate together (v hotkey). This way I don't end up inadvertently moving unwanted points through vertex bleeding.
5) Managing UV's
Here are a few tips for managing multiple UV's:
a) You can switch which UV is being shown in the 3D texture mode in 3 different ways:
a1) By switching the active UV tab in the Texture Editor (image above).
a2) By changing the UV tab in the PPG (property page) under each object's texture projection.
a3) By using the hot keys Ctrl ] and Ctrl [ to cycle between UV projections.
b) Moving points in the texture editor will begin to stack operators. You can freeze them either by the steps above in 1) or by running:
XSI Net > Texturing > UV Space/Texture Coordinates > Freeze Active UV.
c) When texturing you often want to ignore your object's shading and just see a constant version of your material. You can do this in the PPG by setting the Texture Display mode to Constant:
d) When you render, get rid of any UV clusters you don't need. They hold a lot of data and can affect rendering performance.
Merging Multiple Projections into one UV set:
0) Other Approaches.
This is one texturing approach that's evolved from a game texturing style. There is a great deal of debate about the best texturing workflow at the moment. For reference, some of the other main methods for texturing complex characters are:
a) General Projection/Unfolding and 3D Painting the result. Many people currently export their object back and forth through 3.X (with the .xsi format) to use a 3rd party tool like Deep Paint. Generally, this is the best approach if you want to paint your character like a clay figurine and aren't blending multiple photographs or scanned data.
b) Keeping Multiple Projections separate and blending them with transparency maps.
c) Making characters out of lots of separate objects, which simplifies the texturing to closer to one or two projections per object.
d) Using simple geometry templates to define the UV's. At the moment this is primarily being done by translating back to 3.X and using a 3rd party tool like YouMap.
Below are a few character textures I worked on that hold multiple projections on a single texture (other examples are Jaiqua and the Club_bot found in Netview > XSI Local):
In general this technique is best suited to rationalizing photographs and sketches into a single texture skin for a 3D character. The obvious difficulty with working this way is that it requires good Photoshop image blending / manipulation skills and some thought about how to unfold character topology onto a 2D plane.
There are also many advantages to texturing this way:
a) It's highly optimized. Having many sets of UV data slows down performance for games, rendering and big crowd scenes. It's not a huge cost, but many situations in 3D require tight optimization.
b) It makes 2D texture editing correspond closely to the 3D object. There are 2 ways of thinking about texture unfolding:
b1) Unfold First, Texture After. The priority is on the object, and you try to unfold it such that the map is created by the geometry, like:
The problem with this approach is that you need to heavily distort your 2D scanned images to match. For example, the eyes in the image above are horizontally elongated and don't look exactly like they will on your 3D object.
b2) Texture from Combining Images, Unfold After. You put the priority on scanned images or drawings and blend between them. With this approach, critical areas like eyes, mouth, ears, and fingernails look exactly the same in 2D as on your 3D object. If you're a good 2D texture artist, this kind of control is extremely useful for making something beautiful or that has very precise visual requirements.
c) It gives you extremely fine control over image blending. When blending an image of the front of a face with one of the side of the face using multiple projections and blending mattes, you are essentially doing a cross fade. In some cases this can lead to a blurry area between the two textures. On the side of a face, having a texture-based blend means you can have fine stubble texture across the entire blend.
d) It simplifies image/rendertree manipulation. If you want to color correct the skin of your character, you can do it in one shot. If you want to create a bumpmap for your face, you don't have to worry about the blending seam between different texture projections. It can greatly reduce the number of nodes you need in the rendertree to achieve visual tasks.
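The cross fade described in point c) above comes down to a per-pixel weighted average. Here's a minimal Python sketch (the texture rows and matte values are invented for illustration):

```python
# Conceptual sketch: blending two projections with a matte is a cross fade,
# result = front * matte + side * (1 - matte) at each pixel.

def cross_fade(front, side, matte):
    """Blend two texture rows with a per-pixel matte in [0, 1]."""
    return [f * m + s * (1.0 - m) for f, s, m in zip(front, side, matte)]

front_row = [0.8, 0.8, 0.8, 0.8]    # front-of-face texture values
side_row  = [0.2, 0.2, 0.2, 0.2]    # side-of-face texture values
matte_row = [1.0, 0.75, 0.25, 0.0]  # blend matte, front to side

blended = cross_fade(front_row, side_row, matte_row)
print(blended)
```

The middle of the ramp is where the soft, blurry transition lives; a texture-based blend in one UV set avoids averaging fine detail away there.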
At the moment the main tool I've made to try and facilitate this workflow is in:
XSI Net > Texturing > UV Space/Texture Coordinates > UVSwap2
UVSwap2 will work both with sample points and polygons. Here's a quick example of each:
a) Create a face texture (shown above).
b) Project it as a front and side texture projection:
c) Go to the side projection, select the sample points you want to receive the UV's from the front, and run UVSwap2.
Polygons (add an extra fish to Etho):
1) Load Etho from Netview > XSI Local > Library > Models.
2) Add a new XY texture projection and switch to it.
3) Line up the fish texture with her stomach.
4) Select the polygons you want to swap.
5) Switch to the UV projection you want to receive them on.
6) Run UVSwap2.
You can see how, once the texture is made, you can patch a 3D character's UV's together from projections fairly quickly.
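Conceptually, this kind of patching copies the coordinates of chosen samples from one UV set into another. A plain-Python sketch of the idea (not the actual UVSwap2 script; the set names, sample ids, and values are invented):

```python
# Conceptual sketch: patch a target UV set by copying chosen samples'
# coordinates from another projection's set.

uv_sets = {
    "Front": {0: (0.2, 0.8, 0.0), 1: (0.4, 0.8, 0.0), 2: (0.6, 0.8, 0.0)},
    "Side":  {0: (0.9, 0.1, 0.0), 1: (0.9, 0.3, 0.0), 2: (0.9, 0.5, 0.0)},
}

def swap_uvs(uv_sets, source, target, sample_ids):
    """Overwrite the target set's coordinates for the given samples."""
    for sid in sample_ids:
        uv_sets[target][sid] = uv_sets[source][sid]

# Patch samples 0 and 1 of the side projection with the front projection's UV's.
swap_uvs(uv_sets, source="Front", target="Side", sample_ids=[0, 1])
print(uv_sets["Side"])
```

Sample 2 keeps its original side-projection coordinates, which is what lets you patch a character region by region.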
NOTE: the current version of UVSwap2, like most of my texture tools, freezes the texture projection and will remove the link with the original projection. This freezing is a scripting workaround for a texture stack bug in 1.5.1, but you can still get the same effect as positioning your texture projection by translating those points in the Texture Editor (they will now be positioned aligned with the projection).
Switching UV's between Objects
There are a number of reasons you may want to switch UV's between objects in XSI 1.5.1. To help out these workflows, I made:
XSI Net > Texturing > UV Space/Texture Coordinates > UV Copy
a) Bypass XSI poly tools removing UV's. In 1.5.1 a small percentage of XSI poly tools will destroy UV data (merge, boolean, etc.). UV Copy can be used to lift the UV's off a version prior to the modeling operation.
b) Model in Parts: You can break off part of a character (a hand, for example) and focus on texturing only that part. Once you're done you can use UV Copy to transfer the UV's to the full character skin.
c) Clean Geometry: In rare cases geometry in XSI can become corrupt. UV Copy can be used by making a new version of the geometry (one way is to do a merge/boolean with a temporary cube you then delete) and then copying the UV's over.
d) Transfer with other 3D Packages: Many other 3D packages will destroy your indexing when geometry is converted over. UV Copy will allow you to bring it back and forth and still lift off the textures.
For details of use, look on XSI Web.
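The core assumption behind this kind of transfer is that two meshes with identical topology share the same sample ordering, so UV's move over as a straight index-for-index copy. A hedged Python sketch (the function and values are illustrative, not the UV Copy script itself):

```python
# Conceptual sketch: with matching topology, UV transfer is an index-for-index
# copy; a mismatched sample count means the topologies differ and the copy
# would scramble the mapping, so we refuse it.

def copy_uvs(source_uvs, target_sample_count):
    """Copy a UV list onto a target with the same sample count/order."""
    if len(source_uvs) != target_sample_count:
        raise ValueError("topologies differ: sample counts do not match")
    return list(source_uvs)

source = [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (0.5, 1.0, 0.0)]
target_uvs = copy_uvs(source, target_sample_count=3)
print(target_uvs)
```

This is why tools that change topology (merge, boolean) break the link: the sample ordering the copy relies on no longer matches.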
1) Self Symmetry: The best approach is to use:
XSI Net > Texturing > UV Space/Texture Coordinates > UV Mirror (this version is more frequently updated than the XSI Local version).
2) Copy Symmetry: Copy and mirror your source object (scale -1) to the same location as your target. Use UV Copy.