Maybe you’ve previously seen some cool Nvidia demos with awesome-looking hair and wondered why hair in games never matches this quality. To put it briefly, games have a whole lot more processing to do than just character hair. Tech demos typically focus on a single feature and throw a computer’s entire processing power at it.
Of course, we don’t have that luxury: hair is only a tiny part of what makes up our game. For performance reasons we can’t reach tech-demo quality, but we still want the results to look decent because hair is a key visual component of most characters.
The standard way to do hair in games is to use a number of polygonal strips and map them with hair textures. This puts much less of a strain on the game engine than letting it calculate thousands of individual hair strands.
For Masters, we have developed a procedural workflow that allows us to easily iterate on hair styles. We use SideFX Houdini to do hair modeling, processing and rendering because it gives us tremendous control and gets hair into Unity quickly.
We first model a high resolution hair style using Houdini’s standard hair tools. Afterwards, our custom asset generates around 400 polygon strips that roughly follow the hair and the shape of the head. A texture baking node then renders the high resolution hair onto the polygon patches.
We can output many different types of texture maps, such as color, normals, occlusion, displacement and so on. These texture maps are used to control shader parameters inside of Unity where we tweak the final look.
The remainder of this post gets quite technical as we take a more in-depth look at what we described above. If you want to skip this section, there’s a video at the end of the article!
Modeling the high resolution hair is straightforward. We generate around 2,000 guide hairs and shape them the way we want them with standard tools. The full hair style consists of 40,000 strands which are scattered around the guides. Both the guide hairs and the high resolution hairs serve as an input to our custom Houdini-to-Unity asset.
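Houdini's hair tools handle the guide-to-strand scattering internally, but as a rough illustration of what "scattering strands around the guides" can mean, here is a toy Python sketch. Every name in it is hypothetical; real child strands would also interpolate between neighboring guides and taper, which this deliberately skips.

```python
import random

def scatter_children(guide, count, radius, seed=0):
    """Toy version of guide-based scattering: each child strand copies
    its guide's shape, shifted sideways by a small random offset."""
    rng = random.Random(seed)
    children = []
    for _ in range(count):
        # constant per-child offset in the ground plane around the guide
        ox = rng.uniform(-radius, radius)
        oz = rng.uniform(-radius, radius)
        children.append([(x + ox, y, z + oz) for (x, y, z) in guide])
    return children
```

With roughly 2,000 guides and 40,000 final strands, this corresponds to about 20 children per guide.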
We take the guide hairs as a starting point for the realtime version, then strategically delete 80% of them so that we’re left with about 400 evenly distributed hair curves. From there, we calculate reasonable normals and upvectors in order to extrude the curves “sideways” and turn them into polygonal strips. We use the head mesh as a target for our upvectors, so that the polygon strips orient themselves towards the head.
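The "extrude sideways" step can be sketched in plain Python. This is a minimal stand-in for what the Houdini asset does, assuming a single up vector per strip (in the real setup the up vector points toward the head mesh and varies along the curve); all function names are our own.

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def add(a, b): return (a[0] + b[0], a[1] + b[1], a[2] + b[2])
def scale(v, s): return (v[0] * s, v[1] * s, v[2] * s)

def normalize(v):
    length = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2) or 1.0
    return scale(v, 1.0 / length)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def extrude_strip(points, up, width):
    """Extrude a hair curve sideways into two rows of strip vertices."""
    left, right = [], []
    for i, p in enumerate(points):
        # central-difference tangent along the strand
        fwd = normalize(sub(points[min(i + 1, len(points) - 1)],
                            points[max(i - 1, 0)]))
        # sideways direction: perpendicular to both tangent and up vector
        side = normalize(cross(fwd, up))
        left.append(add(p, scale(side, -width / 2.0)))
        right.append(add(p, scale(side, width / 2.0)))
    return left, right
```

Zipping the two rows together gives the quads of one polygon strip.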
In order not to waste any UV space later, we also calculate an individual width for each strip. We do this by first assigning each of the 40,000 high resolution hairs to its nearest polygon strip. Afterwards, we calculate the widest diameter of each hair bundle, and that diameter serves as a multiplier for the final strip’s width.
In the case of our villain character, the widest strip ended up being 4.5 centimeters (1.77″) while the thinnest strip is only 0.3 centimeters (0.12″) wide. UV space optimization succeeded! The UV layout itself is generated by two standard Houdini nodes that pack the map densely.
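The two passes above, assigning hairs to their nearest strip and measuring each bundle's widest spread, can be sketched as follows. This is an illustrative brute-force version (fine at ~400 strips and ~40,000 hairs, working from root positions only); the names are hypothetical.

```python
import math

def strip_widths(strip_roots, hair_roots):
    """Assign each hair root to its nearest strip root, then return the
    widest pairwise distance ("diameter") of each resulting bundle."""
    bundles = {i: [] for i in range(len(strip_roots))}
    for h in hair_roots:
        nearest = min(range(len(strip_roots)),
                      key=lambda i: math.dist(strip_roots[i], h))
        bundles[nearest].append(h)
    widths = {}
    for i, hairs in bundles.items():
        widths[i] = max((math.dist(a, b) for a in hairs for b in hairs),
                        default=0.0)
    return widths
```

Each returned diameter then multiplies the base strip width, so dense bundles get wide strips and stray wisps get narrow ones.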
At this point we’re nearly ready to render the textures. Unfortunately, with all polygon strips positioned on the head, there’s no reasonable way to render the entire texture map at once. Render rays would hit all sorts of overlapping details, and the results would be subpar.
Initially we solved the problem by rendering each polygon strip individually and merging them into a single map in post. Although we wrote scripts to facilitate this process, it proved to be too cumbersome and time consuming.
Instead, we came up with another solution. Our asset now repositions and rotates the polygon patches and their assigned high resolution hairs into a grid-like structure. This does not modify the UVs at all; it only transforms the geometry in world space. By orienting all strips in roughly the same direction, we avoid intersecting rays and can render the entire texture map in a single go.
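The core of that bake layout is just a per-strip world-space translation into a grid cell. A minimal sketch, assuming each strip is anchored at its first point and rotation is ignored (the real asset also reorients the strips):

```python
def grid_layout(strips, cols, spacing):
    """Move each strip so its anchor point lands on a grid cell.
    Only positions change; UVs are untouched."""
    out = []
    for idx, pts in enumerate(strips):
        row, col = divmod(idx, cols)
        # offset that carries this strip's first point to its cell
        ax, _, az = pts[0]
        dx = col * spacing - ax
        dz = row * spacing - az
        out.append([(x + dx, y, z + dz) for (x, y, z) in pts])
    return out
```

With every strip laid flat in its own cell, the baker's rays no longer hit overlapping neighbors.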
Together with the rest of our procedural setup, we can now modify the original hair style at any time without worrying about what happens downstream. Everything we explained is calculated automatically without any manual labor involved. When we’re ready to go back into Unity, we push a button to let Houdini recook all nodes and re-render the texture maps.
Once rendering is finished, we can immediately refresh Unity and take a look at the realtime version of the newly modified hair style. Of course it’s also easy to upres or downres the realtime hair style. Right now it clocks in at around 2,500 polygons total, and going higher or lower is simply a matter of changing a parameter on the asset.
Regarding the shader setup inside of Unity, we’re not doing anything fancy. We currently only use two different maps, a diffuse map and a flow map. The flow map describes in which direction the individual hair strands flow within UV space, which helps produce more realistic highlights on the shaded hair.
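Flow maps conventionally store a direction remapped from [-1, 1] into the [0, 1] texel range, and the shader decodes it back to a tangent before computing anisotropic highlights (e.g. in the Kajiya-Kay style). As a hedged illustration of just the decode step, here is the math in Python; the actual shader code and channel packing in our Unity setup may differ.

```python
def decode_flow(r, g):
    """Decode a flow-map texel: remap the RG channels from [0, 1]
    back to a [-1, 1] direction and normalize it."""
    x, y = r * 2.0 - 1.0, g * 2.0 - 1.0
    length = (x * x + y * y) ** 0.5 or 1.0  # guard the zero texel
    return x / length, y / length
```

The decoded 2D direction is lifted into the strip's tangent space, and that tangent drives where the specular streak falls across the strands.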
To wrap up this excessively long article, here’s a short video that showcases the realtime hair in motion and gives a quick summary of our setup process.