Colour management is a key component of imaging applications. Generally speaking, it consists of tools to unambiguously reproduce and transform colour image data between input devices, storage facilities, and output devices. The data is usually (though not always) described as coordinates in a given colour space. ICC profiles specify how to transform them between a source colour space (for instance, one describing your camera’s colour gamut) and a common, special destination space called the profile connection space.
The talk I presented at the DiVOC event was about what may be the most special space of all: YCbCr. A staple of analog and digital broadcasting, YCbCr is defined in three distinct ITU Radiocommunication Sector (ITU-R) recommendations: BT.601-7, BT.709-6, and BT.2020-2. Colour management systems such as Little CMS have long supported it, yet it may well be the only colour space that cannot be tested properly. This is because there are no ICC profiles in the wild that target it, except for two copyrighted specimens, scraped long ago from Sun machines and since lost to the mists of time.
The talk covers: the essentials of colour management; the standards describing YCbCr colour space; and how to go from standards to an ICC profile implementing such a transformation.
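As a taste of what those recommendations specify, the core R’G’B’ → Y’CbCr transform differs between the three standards only in its luma coefficients. The sketch below (my own illustration, not material from the talk) computes full-range, normalized Y’CbCr:

```python
# Full-range R'G'B' -> Y'CbCr conversion, parameterized by the luma
# coefficients each ITU-R recommendation defines. Inputs are gamma-encoded
# R', G', B' in [0, 1]; Cb and Cr come out centred at 0.5.

LUMA_COEFFS = {
    "BT.601":  (0.299, 0.114),     # (Kr, Kb); Kg = 1 - Kr - Kb
    "BT.709":  (0.2126, 0.0722),
    "BT.2020": (0.2627, 0.0593),
}

def rgb_to_ycbcr(r, g, b, standard="BT.601"):
    kr, kb = LUMA_COEFFS[standard]
    y = kr * r + (1.0 - kr - kb) * g + kb * b
    # Chroma channels are scaled colour differences, offset to [0, 1].
    cb = (b - y) / (2.0 * (1.0 - kb)) + 0.5
    cr = (r - y) / (2.0 * (1.0 - kr)) + 0.5
    return y, cb, cr
```

Note that broadcast use typically adds limited-range ("studio swing") quantization on top of this; the sketch stays in normalized full range for clarity.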
Spatial dithering, also known as digital halftoning, is a technique used to simulate a higher color depth in images using a limited color palette. It approximates shades of colour not available in the palette using a spatial distribution of available colours, taking advantage of the human eye’s tendency to average colours in a neighbourhood. This technique has its origins in the printing industry: halftoning is the process of rendering an image as a pattern of coloured dots, exchanging color resolution or bit depth for spatial resolution.
Common issues addressed by dithering are colour shift and false contours. When an image is quantized, the luma and chroma of the area covered by each pixel are classified into one of the available colours in the device’s palette, resulting in a shift that may be visible to the naked eye. When there are not enough colours to represent an otherwise smooth transition, banding (the formation of false edges) may occur. Low-level graphics libraries, image processing toolkits, as well as graphics drivers and output peripherals themselves, use dithering as a computationally cheap way to work around these issues.
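As a minimal illustration of the idea (and only that; this is not the algorithm used by Krita or any particular library), ordered dithering compares each pixel against a position-dependent threshold drawn from a Bayer matrix:

```python
# Ordered (Bayer) dithering to 1 bit: a minimal sketch of spatial dithering.

BAYER_4X4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_to_black_and_white(gray):
    """Quantize a 2D grayscale image (values in [0, 1]) to {0, 1}.

    Each pixel is compared against a threshold that varies with its
    position, so a mid-grey area becomes a checkered mix of black and
    white that the eye averages back to grey.
    """
    out = []
    for y, row in enumerate(gray):
        out_row = []
        for x, v in enumerate(row):
            threshold = (BAYER_4X4[y % 4][x % 4] + 0.5) / 16.0
            out_row.append(1 if v > threshold else 0)
        out.append(out_row)
    return out
```

On a uniform 50%-grey tile, exactly half of the output pixels come out white, spatially interleaved with the black ones.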
Throughout its history, Krita rendered gradients exclusively at 8-bit depth in the sRGB color space, ignoring the color space and bit depth of the image. This resulted in very visible banding artifacts, a behaviour reported in bug 343864. In late 2020 and early 2021, we implemented high dynamic range gradients by rendering each gradient in the image’s color space at a minimum depth of 16 bits, and adding dithering in the gradient’s render step if the image’s depth is lower. The project was summarized in a talk at Libre Graphics Meeting 2021.
MSc thesis: "De Mr. Increíble a Judy Hopps: un estudio sobre modelado de cabello y pelaje en producciones de animación" ("From Mr. Incredible to Judy Hopps: a study on hair and fur modelling in animation productions")
Hair (or fur, for animal characters) is a crucial component of creature design and production, and one of the most noticeable features contributing to the authenticity and identity of a character. As such, not only must its simulation be aesthetically pleasing and physically plausible, it must also fit within the character’s universe.
For this thesis, we survey the available methods and techniques for hair styling, simulation, and rendering, both for offline and real-time use. We place special emphasis on models known to be used in feature animation, in particular in the films Tangled, Brave, and Zootopia. Additionally, we present a complete hair simulation and shading system, built according to the specifications of these models and implemented within the open source 3D creation suite, Blender.
Layers are one of the core concepts of digital painting. They allow artists to control different aspects of their artwork separately, for instance color, lighting, lineart, and texture. A key feature is that each layer can be resized, composited, renamed, grouped, or deleted independently of the rest of the document.
Patterns and textures are also essential components of an artist’s toolbox, allowing them to represent the intricacies of a physical material. They come in two forms: bitmap textures, which are images stored in files such as PNG or OpenEXR, and procedural textures, which are generated on the fly from a mathematical representation.
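To make the distinction concrete, here is a toy procedural texture: a pattern defined purely by a formula and evaluated on demand at any coordinate. The functions and parameters are illustrative only, and are not part of Krita or SeExpr:

```python
import math

# Toy procedural textures: no pixels are stored; each value is computed
# from (x, y) when asked for, at any resolution.

def checker(x, y, scale=8.0):
    """Return 1.0 or 0.0 in an alternating checkerboard pattern."""
    return float((math.floor(x * scale) + math.floor(y * scale)) % 2)

def rings(x, y, frequency=10.0):
    """Concentric rings around the origin, with values in [0, 1]."""
    r = math.hypot(x, y)
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * frequency * r)
```

Because the texture is a function rather than stored data, it can be re-evaluated at any zoom level without loss of detail, which is what a scriptable Fill Layer makes available to artists.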
KDE’s Krita painting suite supports patterns and textures through two types of layers: File Layers and Fill Layers. However, neither lets artists create dynamically generated content: File Layers are inherently static, and Fill Layers support only color fills (like Paint Layers) or basic pattern rendering.
The goal of this project is to let artists create dynamic content through a new, scriptable Fill Layer. To this effect, I integrated Disney Animation’s SeExpr expression language into Krita.
Krita is a professional, free and open source painting suite that allows concept artists, texture and matte painters, as well as illustrators to deploy their full creativity towards the production of high quality art pieces. High bit depth color spaces are essential to feature-level production pipelines. The recent introduction of HDR, supported by Intel, is a great stride towards this objective. However, a critical issue remains unfixed: support for color space operations across all numeric representations.
In T4488, Wolthera van Hövell detailed how Krita assumes that all floating-point color spaces use the range [0.0, 1.0]. This is correct in all cases except two: CIE 1976 L*a*b* and CMYK. The former is usually assumed to have the ranges L* = [0.0, 100.0] and a*, b* = [-128.0, 127.0]. This project proposes to unify all color space data and operations, while introducing support for custom value ranges, so as to properly support all color spaces through the same API. Additionally, we plan to allow unbounded operations in order to support HDR.
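One possible shape for such range-aware support, sketched here with hypothetical names (this is not Krita’s actual pigment API), is to attach per-channel (min, max) metadata and route conversions through a common normalized [0.0, 1.0] range:

```python
# Illustrative sketch: per-channel value ranges as metadata, so the same
# normalize/denormalize code path serves [0,1] RGB, L*a*b*, CMYK, etc.
# Names and structure are hypothetical, not Krita's API.

LAB_RANGES = [(0.0, 100.0), (-128.0, 127.0), (-128.0, 127.0)]  # L*, a*, b*

def normalize(values, ranges):
    """Map channel values into [0.0, 1.0] using each channel's own range."""
    return [(v - lo) / (hi - lo) for v, (lo, hi) in zip(values, ranges)]

def denormalize(values, ranges):
    """Map normalized [0.0, 1.0] values back to each channel's native range."""
    return [lo + v * (hi - lo) for v, (lo, hi) in zip(values, ranges)]
```

Unbounded (HDR) operation would then amount to simply not clamping after `denormalize`, rather than requiring a separate code path per color model.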
In 2016, amid controversy over the budget of Argentina’s National Scientific and Technical Research Council (CONICET), some publications authored by CONICET researchers were singled out on social media and branded as “unscientific” (Clarín, 2016). The ensuing reaction called into question the quality and relevance of CONICET’s research, as well as the sound administration of the budget assigned to the science and technology field.
The current administration has launched several initiatives purporting to promote the visibility of its activities. For example, the tools “Real Economy Dashboard” (Tablero de la Economía Real) and “Productive Simplification” (Simplificación productiva) publish statistics of the economy and the actions carried out by the Ministry of Production. However, there are no equivalent alternatives for the CONICET, apart from each researcher’s individual web page.
For these reasons, our project aims to explore different facets of our country’s scientific production, as well as to provide access to individual articles.
Among Blender’s proposed ideas for GSoC 2018 I found this gem, a request to port Zootopia’s shader to Cycles.
Realistic hair or fur is essential when creating a plausible virtual world. In feature animation, this is often used to define the signature look of characters; examples include Pixar’s Brave (Iben et al. 2013), and Walt Disney Animation Studios’ Tangled (Sadeghi et al. 2010; also Ward et al. 2010) and Zootopia (Chiang et al. 2016).
Currently, Cycles has a working hair shader (wiki page, sources) based on the model of Marschner et al. (2003). Its many assumptions and simplifications make it inaccurate for light-colored hair (d’Eon et al. 2011) as well as for most types of fur (Yan et al. 2015). Furthermore, d’Eon et al. (2011) and Khungurn and Marschner (2017) demonstrated that it is not energy-conserving.
This project intends to upgrade Cycles’ hair shader to the aforementioned Zootopia shader by Chiang et al. (2016), by porting the implementation of Pharr (2017). Lukas Stockner has made a WIP patch available, which may also serve as a basis for this work.
BSc thesis: "Técnicas de deformación para objetos virtuales. El impacto entre vehículos como caso de estudio" ("Deformation techniques for virtual objects: vehicle collisions as a case study")
In this work, we evaluate the suitability, performance, and precision of different deformation techniques for virtual objects. The test case is a car accident simulator, implemented in the Unity engine and running on Android.
This work was cited in the WICC 2017 workshop paper by Selzer et al., “Modelos de interacción y aplicaciones en realidad virtual mediante dispositivos móviles”.
“I’ll give a talk at $con!”
You start writing down the requirements for your slides, and you soon realise your tool of choice is not enough anymore. Do you cut your losses and move to a new tool, or do you monkey-patch the existing one?
This talk is about my experience looking for upgrades to my slide-making toolchain. I will show my usual requirements for slide-making and showcase some of the tools I tried, illustrating how each has its strengths and weaknesses depending on what you need.
Virtual Reality (VR) has once again become a popular and interesting topic, both in research and in the commercial arena.
This trend has its origin in the use of mobile devices as the computational core and display of VR systems. Such devices are not free of limitations, in both the software and the hardware they support. This line of research aims to analyse the impact of mobile devices on VR interaction models, and to develop new applications of this technology.