
Portfolio

Publications

XIX Workshop de Investigadores en Ciencias de la Computación, ITBA, Buenos Aires (2017)

Virtual Reality (VR) has once again become a popular and interesting topic, both in research and in the commercial arena.

This trend has its origin in the use of mobile devices as the computational core and as VR displays. Such devices are not free of limitations, in both the software and the hardware they support. This line of research aims to analyse the impact of mobile devices on VR interaction models, and to develop new applications of this technology.

Projects


Dithering and Colour and Noise, Oh My!

Spatial dithering, also known as digital halftoning, is a technique used to simulate a higher colour depth in images using a limited colour palette. It approximates shades of colour not available in the palette using a spatial distribution of available colours, taking advantage of the human eye's tendency to average colours in a neighbourhood. This technique has its origins in the printing industry: halftoning is the process of rendering an image as a pattern of coloured dots, exchanging colour resolution, or bit depth, for spatial resolution.

Common issues addressed by dithering are colour shift and false contours. When an image is quantized, the luma and chroma of the area covered by each pixel are classified into one of the available colours in the device's palette, resulting in a shift that may be visible to the naked eye. When there are insufficient colours to represent an otherwise smooth transition, banding, or the formation of false edges, may occur. Low-level graphics libraries, image processing toolkits, as well as graphics drivers and output peripherals themselves, use dithering as a computationally cheap way to work around these issues.
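To make the technique concrete, the sketch below shows one classic form of it, Floyd–Steinberg error diffusion, quantizing a greyscale gradient to a two-colour palette. It is only an illustration of the general idea; the function name and the NumPy dependency are my own choices, unrelated to the projects described here.

    import numpy as np

    def floyd_steinberg(gray: np.ndarray) -> np.ndarray:
        """Quantize a float image in [0, 1] to black and white, diffusing the error."""
        img = gray.astype(np.float64).copy()
        h, w = img.shape
        for y in range(h):
            for x in range(w):
                old = img[y, x]
                new = 1.0 if old >= 0.5 else 0.0   # nearest colour in the 1-bit palette
                img[y, x] = new
                err = old - new
                # Push the quantization error onto neighbours not yet visited.
                if x + 1 < w:
                    img[y, x + 1] += err * 7 / 16
                if y + 1 < h:
                    if x > 0:
                        img[y + 1, x - 1] += err * 3 / 16
                    img[y + 1, x] += err * 5 / 16
                    if x + 1 < w:
                        img[y + 1, x + 1] += err * 1 / 16
        return img

    # A horizontal gradient dithered this way averages back to the original shades.
    gradient = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
    dithered = floyd_steinberg(gradient)

Because the quantization error is pushed onto neighbouring pixels, the local average of the output stays close to the original shade, which is exactly the eye-averaging effect described above.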

Throughout its history, Krita used 8-bit resolution and the sRGB color space exclusively to render gradients, ignoring the color space and bit depth of the image. This resulted in very visible banding artifacts, a behaviour that was reported in bug 343864. During the last month of 2020 and the start of 2021, we implemented high dynamic range gradients by rendering the gradient in the image's color space, with 16-bit as a minimum depth, and adding dithering in the gradient's render step if the image's depth is lower. The project effort was summarized in a talk at Libre Graphics Meeting 2021.
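As a rough illustration of that render step (my own sketch, not Krita's actual code), one can render the gradient at high precision and add a small ordered-dither offset before rounding down to the target bit depth, so the rounding boundaries are broken up instead of forming bands:

    import numpy as np

    # 4x4 Bayer matrix, normalised to offsets in (-0.5, 0.5).
    BAYER_4 = (np.array([[ 0,  8,  2, 10],
                         [12,  4, 14,  6],
                         [ 3, 11,  1,  9],
                         [15,  7, 13,  5]]) + 0.5) / 16.0 - 0.5

    def quantize_with_dither(values: np.ndarray, levels: int = 256) -> np.ndarray:
        """Round a high-precision gradient to `levels` steps, breaking up banding."""
        h, w = values.shape
        threshold = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
        return np.clip(np.round(values * (levels - 1) + threshold), 0, levels - 1) / (levels - 1)

    # A smooth gradient rendered at float precision, then brought down to 8-bit steps.
    gradient = np.tile(np.linspace(0.0, 1.0, 1024), (16, 1))
    banded   = np.round(gradient * 255) / 255          # plain rounding: visible bands
    dithered = quantize_with_dither(gradient, 256)     # dithered: bands broken up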

MSc thesis: "De Mr. Increíble a Judy Hopps: un estudio sobre modelado de cabello y pelaje en producciones de animación"


Hair (or fur, for animal characters) is one of the crucial components of creature design and production. It is one of the most noticeable features contributing to the authenticity and identity of a character. As such, not only must its simulation be aesthetically pleasing and physically plausible, it must also fit within the character's universe.

For this thesis, we survey the available methods and techniques for hair styling, simulation, and rendering, both for offline and real-time use. We place special emphasis on those models known to be used in feature animation, in particular the films Tangled, Brave, and Zootopia. Additionally, we present a complete hair simulation and shading system, built according to the specifications of these models and implemented within the open source 3D creation suite, Blender.


GSoC 2020: Dynamic Fill Layers in Krita using SeExpr

Layers are one of the core concepts of digital painting. They allow artists to work on different parts of their artwork, for instance color, lighting, line art, and texture, separately. A key feature is that each layer can be resized, composited, renamed, grouped, or deleted independently of the rest of the document.

Patterns and textures are also essential components of an artist's toolbox, allowing them to represent the intricacies of a physical material. They come in two forms: bitmap textures, which are images stored in files such as PNG or OpenEXR, and procedural textures, which are generated on the fly from a mathematical description.
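As a toy illustration of that difference (my own example, unrelated to Krita's code), a procedural texture is just a function evaluated at each coordinate, so it can be generated at any resolution without loading pixel data from disk:

    import numpy as np

    def checkerboard(width: int, height: int, tile: int = 16) -> np.ndarray:
        """A procedural texture: the pattern is computed from coordinates on demand."""
        y, x = np.mgrid[0:height, 0:width]
        return ((x // tile + y // tile) % 2).astype(np.float32)

    # Evaluated on the fly at any resolution, no image file required.
    texture = checkerboard(256, 256)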

KDE's Krita painting suite supports patterns and textures through two types of layers: File Layers and Fill Layers. However, neither of them lets artists create dynamically generated content: File Layers are inherently static, and Fill Layers support only color fills (like Paint Layers) or basic pattern rendering.

The goal of this project is to let artists create dynamic content through a new, scriptable Fill Layer. To this end, I integrated Disney Animation's SeExpr expression language into Krita.
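Conceptually, such a layer evaluates a user-written expression at every pixel and uses the result as the layer's content. The sketch below illustrates only that concept; the names are hypothetical and it does not reflect Krita's or SeExpr's actual APIs.

    import numpy as np

    def render_fill_layer(expr, width: int, height: int) -> np.ndarray:
        """Evaluate a per-pixel expression over normalised (u, v) coordinates."""
        v, u = np.mgrid[0:height, 0:width].astype(np.float64)
        u, v = u / width, v / height
        return expr(u, v)

    # A hypothetical "script": concentric rings driven purely by the expression.
    rings = render_fill_layer(
        lambda u, v: 0.5 + 0.5 * np.sin(40.0 * np.hypot(u - 0.5, v - 0.5)),
        512, 512)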


Krita is a professional, free and open source painting suite that allows concept artists, texture and matte painters, and illustrators to deploy their full creativity towards the production of high quality art. High bit depth color spaces are essential to feature-level production pipelines, and the recent introduction of HDR support, backed by Intel, is a great stride towards this objective. However, there remains a critical, unfixed issue: support for color space operations across all numeric representations.

In T4488, Wolthera van Hövell detailed how Krita assumes that all floating-point color spaces use the range [0.0, 1.0]. This is correct in all cases except two: CIE's 1976 L*a*b*, and CMYK. The former is usually assumed to cover L = [0.0, 100.0] and a, b = [-128.0, 127.0]. This project proposes to unify all color space data and operations, while introducing support for custom value ranges, so as to properly support all color spaces through the same API. Additionally, we plan to allow unbounded operations so as to support HDR.
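A minimal sketch of the kind of unification described above (my own illustration, not Krita's API): record the nominal range of each channel and map values to and from a normalised [0, 1] representation, while letting values fall outside that range for unbounded/HDR work.

    from dataclasses import dataclass

    @dataclass
    class ChannelRange:
        """Nominal range of one channel, e.g. L in [0, 100] for CIE L*a*b*."""
        lo: float
        hi: float

        def to_unit(self, value: float) -> float:
            # Map the nominal range onto [0, 1]; HDR values may fall outside it.
            return (value - self.lo) / (self.hi - self.lo)

        def from_unit(self, value: float) -> float:
            return self.lo + value * (self.hi - self.lo)

    # Assumed ranges for floating-point CIE L*a*b*, as described above.
    LAB_RANGES = {"L": ChannelRange(0.0, 100.0),
                  "a": ChannelRange(-128.0, 127.0),
                  "b": ChannelRange(-128.0, 127.0)}

    half_grey_L = LAB_RANGES["L"].to_unit(50.0)   # 0.5 in the normalised representation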

Argentina in Science



In 2016, amid controversy regarding the budget of Argentina's National Scientific and Technical Research Council (CONICET), some publications authored by CONICET researchers were picked up on social media and dismissed as "unscientific" (Clarín, 2016). The ensuing reaction has called into question the quality and relevance of CONICET's research, as well as the sound administration of the budget assigned to the science and technology field.

The current administration has launched several initiatives aimed at promoting the visibility of its activities. For example, the "Real Economy Dashboard" (Tablero de la Economía Real) and "Productive Simplification" (Simplificación productiva) tools publish statistics about the economy and the actions carried out by the Ministry of Production. However, there is no equivalent for CONICET, apart from each researcher's individual web page.

For these reasons, our project aims to explore different aspects of our country's scientific production, as well as to provide access to the individual articles.


GSoC 2018: Implementing a Hair Shader for Cycles

Among Blender's proposed ideas for GSoC 2018 I found this gem, a request to port Zootopia's shader to Cycles.

Realistic hair or fur is essential when creating a plausible virtual world. In feature animation, this is often used to define the signature look of characters; examples include Pixar's Brave (Iben et al. 2013), and Walt Disney Animation Studios' Tangled (Sadeghi et al. 2010; also Ward et al. 2010) and Zootopia (Chiang et al. 2016).

Currently, Cycles has a working hair shader (wiki page, sources) based on the model of Marschner et al. (2003). Its several assumptions and simplifications make it inaccurate for light-colored hair (d'Eon et al. 2011) as well as for most types of fur (Yan et al. 2015). Furthermore, d'Eon et al. (2011) and Khungurn and Marschner (2017) showed that it is not energy conserving.
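For context, shaders in this family factor the hair scattering function into a longitudinal term and an azimuthal term for each scattering mode (R, TT, TRT, and so on). Schematically, following the notation of Marschner et al. (2003):

    % Sum over scattering modes p = 0 (R), 1 (TT), 2 (TRT), ...
    % M_p: longitudinal scattering, N_p: azimuthal scattering, theta_d: difference angle.
    S(\theta_i, \theta_o, \phi) \;=\; \sum_{p \ge 0} \frac{M_p(\theta_i, \theta_o)\, N_p(\phi)}{\cos^2 \theta_d}

Chiang et al. (2016) keep this separable structure but reparameterise it for artist control and account for the higher-order scattering modes, which is part of what this project aims to bring to Cycles.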

This project intends to upgrade Cycles' hair shader to the aforementioned Zootopia shader by Chiang et al. (2016), by porting Pharr (2017)'s implementation. Lukas Stockner has made available a WIP patch, which may also serve as a basis for this work.

BSc thesis: "Técnicas de deformación para objetos virtuales. El impacto entre vehículos como caso de estudio"



In this work we evaluate the suitability, performance, and precision of different deformation techniques for virtual objects. The test case is a car accident simulator, implemented with the Unity engine and running on Android.
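As a simple illustration of the family of techniques involved (my own sketch, not code from the thesis), a basic geometric approach displaces the vertices near a collision point along the impact direction, with a smooth falloff towards the edge of the affected region:

    import numpy as np

    def dent(vertices: np.ndarray, impact: np.ndarray, direction: np.ndarray,
             radius: float, strength: float) -> np.ndarray:
        """Push vertices near an impact point along the impact direction, with falloff."""
        dist = np.linalg.norm(vertices - impact, axis=1)
        falloff = np.clip(1.0 - dist / radius, 0.0, 1.0) ** 2   # quadratic falloff to zero
        return vertices + strength * falloff[:, None] * direction

    # A flat "panel" of vertices dented around a collision point.
    panel = np.stack(np.meshgrid(np.linspace(0, 1, 20),
                                 np.linspace(0, 1, 20), [0.0]), -1).reshape(-1, 3)
    dented = dent(panel, impact=np.array([0.5, 0.5, 0.0]),
                  direction=np.array([0.0, 0.0, -1.0]), radius=0.3, strength=0.05)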

This work was cited in the WICC 2017 workshop paper by Selzer et al., "Modelos de interacción y aplicaciones en realidad virtual mediante dispositivos móviles".