
Noam Aigerman

Associate Academic Member
Assistant Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
Deep Learning
Computer Vision

Biography

I am an assistant professor at the Université de Montréal. Previously, I was a research scientist at Adobe. I work on problems related to 3D geometry and learning. My research lies at the intersection of geometry processing, computer graphics, deep learning, and optimization.

Current Students

Master's Research - UdeM
Master's Research - UdeM

Publications

Temporal Residual Jacobians For Rig-free Motion Transfer
Sanjeev Muralikrishnan
Niladri Shekhar Dutt
Siddhartha Chaudhuri
Vladimir Kim
Matthew Fisher
Niloy J. Mitra
We introduce Temporal Residual Jacobians as a novel representation to enable data-driven motion transfer. Our approach does not assume access to any rigging or intermediate shape keyframes, produces geometrically and temporally consistent motions, and can be used to transfer long motion sequences. Central to our approach are two coupled neural networks that individually predict local geometric and temporal changes that are subsequently integrated, spatially and temporally, to produce the final animated meshes. The two networks are jointly trained, complement each other in producing spatial and temporal signals, and are supervised directly with 3D positional information. During inference, in the absence of keyframes, our method essentially solves a motion extrapolation problem. We test our setup on diverse meshes (synthetic and scanned shapes) to demonstrate its superiority in generating realistic and natural-looking animations on unseen body shapes against SoTA alternatives. Supplemental video and code are available at https://temporaljacobians.github.io/.
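As a rough illustration of the coupled-prediction idea described in this abstract, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation): two small networks predict per-face spatial and temporal residuals that are accumulated over time. All names, feature dimensions, and the final Poisson-solve step are assumptions made only for illustration.

# Illustrative sketch only -- not the authors' code.
# Two coupled networks predict local spatial and temporal changes,
# which are accumulated over time to produce per-frame geometry.
import torch
import torch.nn as nn

class ResidualPredictor(nn.Module):
    def __init__(self, feat_dim=32, out_dim=9):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 1, 64), nn.ReLU(),
            nn.Linear(64, out_dim),
        )

    def forward(self, features, t):
        # features: (num_faces, feat_dim); t: scalar time in [0, 1]
        t_col = torch.full((features.shape[0], 1), float(t))
        return self.net(torch.cat([features, t_col], dim=-1))

spatial_net = ResidualPredictor()   # predicts per-face Jacobian residuals
temporal_net = ResidualPredictor()  # predicts how those residuals evolve in time

features = torch.randn(100, 32)     # hypothetical per-face shape features
jacobians = torch.eye(3).flatten().repeat(100, 1)  # start from identity maps
for step in range(1, 25):
    t = step / 24.0
    residual = spatial_net(features, t) + temporal_net(features, t)
    jacobians = jacobians + residual  # temporal integration of residuals
# The accumulated per-face Jacobians would then be turned into vertex
# positions by a separate spatial integration step (e.g. a Poisson solve).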
TutteNet: Injective 3D Deformations by Composition of 2D Mesh Deformations
Bo Sun
Thibault Groueix
Chen Song
Qixing Huang
MagicClay: Sculpting Meshes With Generative Neural Fields
Amir Barda
Vladimir Kim
Amit H. Bermano
Thibault Groueix
The recent developments in neural fields have brought phenomenal capabilities to the field of shape generation, but they lack crucial properties, such as incremental control - a fundamental requirement for artistic work. Triangular meshes, on the other hand, are the representation of choice for most geometry-related tasks, offering efficiency and intuitive control, but do not lend themselves to neural optimization. To support downstream tasks, previous art typically proposes a two-step approach, where first a shape is generated using neural fields, and then a mesh is extracted for further processing. Instead, in this paper we introduce a hybrid approach that consistently maintains both a mesh and a Signed Distance Field (SDF) representation. Using this representation, we introduce MagicClay - an artist-friendly tool for sculpting regions of a mesh according to textual prompts while keeping other regions untouched. Our framework carefully and efficiently balances consistency between the representations and regularizations in every step of the shape optimization. Relying on the mesh representation, we show how to render the SDF at higher resolutions and faster. In addition, we employ recent work in differentiable mesh reconstruction to adaptively allocate triangles in the mesh where required, as indicated by the SDF. Using an implemented prototype, we demonstrate superior generated geometry compared to the state-of-the-art, and novel consistent control, allowing sequential prompt-based edits to the same mesh for the first time.
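The following is a toy sketch, not MagicClay itself, of how a mesh and an SDF network might be kept consistent during joint optimization; the MLP, the vertex tensor, and the stand-in sculpting term are illustrative assumptions.

# Illustrative sketch only -- a toy consistency loss between a mesh and an
# SDF network, loosely following the hybrid-representation idea above.
import torch
import torch.nn as nn

sdf = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))  # toy SDF MLP
vertices = torch.randn(500, 3, requires_grad=True)                   # toy mesh vertices

opt = torch.optim.Adam([{"params": sdf.parameters()}, {"params": [vertices]}], lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    # Consistency: mesh vertices should lie on the SDF zero level set.
    consistency = sdf(vertices).abs().mean()
    # Stand-in for the text-driven sculpting objective (e.g. a score-distillation loss).
    sculpting = torch.tensor(0.0)
    (consistency + sculpting).backward()
    opt.step()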
Neural Semantic Surface Maps
Luca Morreale
Vladimir Kim
Niloy J. Mitra
Explorable Mesh Deformation Subspaces from Unstructured 3D Generative Models
Arman Maesumi
Paul Guerrero
Vladimir Kim
Matthew Fisher
Siddhartha Chaudhuri
Daniel Ritchie
DA Wand: Distortion-Aware Selection Using Neural Mesh Parameterization
Richard Liu
Vladimir Kim
Rana Hanocka
We present a neural technique for learning to select a local sub-region around a point which can be used for mesh parameterization. The motivation for our framework is driven by interactive workflows used for decaling, texturing, or painting on surfaces. Our key idea is to incorporate segmentation probabilities as weights of a classical parameterization method, implemented as a novel differentiable parameterization layer within a neural network framework. We train a segmentation network to select 3D regions that are parameterized into 2D and penalized by the resulting distortion, giving rise to segmentations which are distortion-aware. Following training, a user can use our system to interactively select a point on the mesh and obtain a large, meaningful region around the selection which induces a low-distortion parameterization. Our code (https://github.com/threedle/DA-Wand) and project page (https://threedle.github.io/DA-Wand/) are publicly available.
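A hedged sketch of the distortion-aware selection idea, not the DA Wand code: soft segmentation probabilities from a toy network reweight a per-face distortion measure that would come from a differentiable parameterization. The feature dimensions and the stand-in distortion values are assumptions.

# Illustrative sketch only -- soft segmentation probabilities reweighting a
# per-face distortion term, in the spirit of distortion-aware selection.
import torch
import torch.nn as nn

seg_net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))  # toy selector

face_features = torch.randn(200, 16)   # hypothetical per-face features
per_face_distortion = torch.rand(200)  # placeholder for distortion from a differentiable parameterization

probs = torch.sigmoid(seg_net(face_features)).squeeze(-1)  # soft selection weights
# Penalize distortion where faces are selected, while encouraging larger selections.
loss = (probs * per_face_distortion).sum() / probs.sum() - 0.1 * probs.mean()
loss.backward()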
Isometric Energies for Recovering Injectivity in Constrained Mapping
Xingyi Du
Danny M. Kaufman
Qingnan Zhou
Shahar Kovalsky
Yajie Yan
Tao Ju