


Abstract
Realistic hair or fur is essential when creating a plausible virtual world. In feature animation, it often defines the signature look of a character; examples include Pixar’s Brave (Iben et al. 2013), and Walt Disney Animation Studios’ Tangled (Sadeghi et al. 2010; Ward et al. 2010) and Zootopia (Chiang et al. 2016).
Previously, Cycles’ hair shader (wiki page, sources) was an ad-hoc implementation based on the model of Marschner et al. (2003). The assumptions and simplifications it makes render it inaccurate for light-colored hair (d’Eon et al. 2011) as well as for most types of fur (Yan et al. 2015). Furthermore, d’Eon et al. (2011) and Khungurn and Marschner (2017) showed that it does not conserve energy.
This project upgraded Cycles’ hair shader to the Zootopia model by Chiang et al. (2016). In joint work between Lukas Stockner and Leonardo E. Segovia, we started by porting Pharr’s (2017) implementation, to which we added:
- Features from the original paper missing in Pharr’s implementation:
  - Primary Reflection Roughness modifier, renamed Coat
- Extra features:
  - Additional color parametrizations: Melanin concentration and Absorption coefficient (see the sketch after this list)
  - Linearization of the Melanin coefficients
  - Randomization of Roughness and Melanin concentration
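The color controls can be summarized with a short sketch. This is a minimal, illustrative version that assumes the Melanin slider is linearized with a logarithmic remap and that the randomization is a symmetric per-hair jitter; the eumelanin/pheomelanin absorption values are the commonly used ones (as quoted by Pharr 2017), but the function names and exact factors are not the actual Cycles code.

```cpp
#include <cmath>

struct float3 { float x, y, z; };  /* stand-in for Cycles' float3 */

static float3 operator*(float s, const float3 &v) { return {s * v.x, s * v.y, s * v.z}; }
static float3 operator+(const float3 &a, const float3 &b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

/* Absorption coefficient from eumelanin/pheomelanin concentrations,
 * using the pigment values quoted by Pharr (2017). */
static float3 sigma_a_from_concentration(float eumelanin, float pheomelanin)
{
  const float3 eumelanin_sigma_a = {0.419f, 0.697f, 1.37f};
  const float3 pheomelanin_sigma_a = {0.187f, 0.4f, 1.05f};
  return eumelanin * eumelanin_sigma_a + pheomelanin * pheomelanin_sigma_a;
}

/* Linearized Melanin control: a logarithmic remap so the slider behaves
 * roughly evenly from blonde (0) towards black (1). */
static void melanin_to_pigments(float melanin, float melanin_redness,
                                float *eumelanin, float *pheomelanin)
{
  const float concentration = -std::log(std::fmax(1.0f - melanin, 0.0001f));
  *eumelanin = concentration * (1.0f - melanin_redness);
  *pheomelanin = concentration * melanin_redness;
}

/* Per-hair randomization: jitter a value (Melanin or Roughness) by up to
 * +/- amount, driven by a per-curve random number r in [0, 1). */
static float randomize(float value, float amount, float r)
{
  return value * (1.0f + amount * (2.0f * r - 1.0f));
}
```

For example, feeding the result of `randomize` into `melanin_to_pigments` and then `sigma_a_from_concentration` gives each hair a slightly different shade, which is what the randomization of the Melanin concentration is meant to provide.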
This project is a constituent part of Segovia’s MSc thesis, “De Mr. Increíble a Judy Hopps: un estudio sobre modelado de cabello y pelaje en producciones de animación” (“From Mr. Incredible to Judy Hopps: a study of hair and fur modeling in animated productions”).
Future work
- Reduce noise in the Glossy Indirect light pass.
  - Currently, light reflected between hairs introduces too much noise.
  - This affects only the transmission lobe (TT); the primary specular (R), secondary specular (TRT) and residual (TRRT+) lobes are unaffected. See the lobe decomposition below.
- Add Pharr’s white furnace tests; a sketch of such a test follows the decomposition.
  - To the best of my knowledge, Blender does not yet separate Cycles’ light transport logic from the sampling functions themselves, which such a test would require.
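For reference, the lobes named above are the terms of the model’s lobe decomposition (Chiang et al. 2016; Pharr 2017). Schematically, with $M_p$ the longitudinal and $N_p$ the azimuthal scattering functions,

$$ S(\omega_i, \omega_o) \;=\; \sum_{p \ge 0} M_p(\theta_i, \theta_o)\, N_p(\phi), $$

where $p = 0$ is the primary specular lobe (R), $p = 1$ the transmission lobe (TT), $p = 2$ the secondary specular lobe (TRT), and all paths with $p \ge 3$ are lumped into a single residual lobe (TRRT+).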
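A white furnace test in the sense of Pharr (2017) checks energy conservation: with zero absorption, the lobes together should scatter all incoming energy, i.e.

$$ \int_{S^2} S(\omega_i, \omega_o)\, |\cos\theta_o|\, d\omega_o \;=\; 1 \quad \text{for every } \omega_i. $$

Below is a minimal Monte Carlo sketch of such a check. The `f_eval` callable is a stand-in for the hair BSDF evaluation, not the actual Cycles interface, and the cosine weight is taken against the shading frame’s z axis, which is an assumption of this sketch; the appropriate cosine convention depends on the fiber parametrization.

```cpp
#include <cmath>
#include <functional>
#include <random>

struct Vec3 { float x, y, z; };

/* Estimate the furnace integral with uniform sphere sampling (pdf = 1/(4*pi)).
 * For an energy-conserving, absorption-free fiber the result should be ~1. */
float white_furnace(const std::function<float(const Vec3 &, const Vec3 &)> &f_eval,
                    const Vec3 &wi, int n_samples)
{
  const float PI = 3.14159265358979f;
  std::mt19937 rng(42);
  std::uniform_real_distribution<float> u(0.0f, 1.0f);
  double sum = 0.0;
  for (int i = 0; i < n_samples; i++) {
    /* Uniform direction on the unit sphere. */
    const float z = 1.0f - 2.0f * u(rng);
    const float r = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
    const float phi = 2.0f * PI * u(rng);
    const Vec3 wo = {r * std::cos(phi), r * std::sin(phi), z};
    /* Accumulate S(wi, wo) * |cos(theta_o)| / pdf. */
    sum += f_eval(wi, wo) * std::fabs(wo.z) * (4.0f * PI);
  }
  return float(sum / n_samples);
}
```

Estimates that deviate noticeably from one would indicate the kind of energy-conservation violation that d’Eon et al. (2011) identified in the older Marschner-based shader.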
Deliverables
- Project reports are available in the Blender Wiki
- Source code for the Cycles renderer (commit, project branch, fixes)
- Manual pages (commit)
- Regression tests (commit)
Media coverage
On BlenderArtists:
References
- Chiang, Matt Jen-Yuan, Benedikt Bitterli, Chuck Tappan, and Brent Burley. 2016. “A Practical and Controllable Hair and Fur Model for Production Path Tracing.” Computer Graphics Forum 35 (2): 275–83. https://doi.org/10.1111/cgf.12830.
- Iben, Hayley, Mark Meyer, Lena Petrovic, Olivier Soares, John Anderson, and Andrew Witkin. 2013. “Artistic Simulation of Curly Hair.” In Proceedings of the 12th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 63–71. SCA ’13. New York, NY, USA: ACM. https://doi.org/10.1145/2485895.2485913.
- Khungurn, Pramook, and Steve Marschner. 2017. “Azimuthal Scattering from Elliptical Hair Fibers.” ACM Trans. Graph. 36 (2): 13:1–13:23. https://doi.org/10.1145/2998578.
- Marschner, Stephen R., Henrik Wann Jensen, Mike Cammarano, Steve Worley, and Pat Hanrahan. 2003. “Light Scattering from Human Hair Fibers.” In ACM SIGGRAPH 2003 Papers, 780–91. SIGGRAPH ’03. New York, NY, USA: ACM. https://doi.org/10.1145/1201775.882345.
- Pharr, Matt. 2017. “The Implementation of a Hair Scattering Model.” In Physically Based Rendering: From Theory to Implementation, 3rd ed. Boston, MA, USA: Morgan Kaufmann. http://www.pbrt.org/hair.pdf.
- Sadeghi, Iman, Heather Pritchett, Henrik Wann Jensen, and Rasmus Tamstorf. 2010. “An Artist Friendly Hair Shading System.” In ACM SIGGRAPH 2010 Papers, 56:1–56:10. SIGGRAPH ’10. New York, NY, USA: ACM. https://doi.org/10.1145/1833349.1778793.
- Ward, Kelly, Maryann Simmons, Andy Milne, Hidetaka Yosumi, and Xinmin Zhao. 2010. “Simulating Rapunzel’s Hair in Disney’s Tangled.” In ACM SIGGRAPH 2010 Talks, 22:1–22:1. SIGGRAPH ’10. New York, NY, USA: ACM. https://doi.org/10.1145/1837026.1837055.
- Yan, Ling-Qi, Chi-Wei Tseng, Henrik Wann Jensen, and Ravi Ramamoorthi. 2015. “Physically-Accurate Fur Reflectance: Modeling, Measurement and Rendering.” ACM Trans. Graph. 34 (6): 185:1–185:13. https://doi.org/10.1145/2816795.2818080.
- d’Eon, Eugene, Guillaume Francois, Martin Hill, Joe Letteri, and Jean-Marie Aubry. 2011. “An Energy-Conserving Hair Reflectance Model.” Computer Graphics Forum 30 (4): 1181–87. https://doi.org/10.1111/j.1467-8659.2011.01976.x.