
Noam Aigerman

Associate Academic Member
Assistant Professor, Université de Montréal, Department of Computer Science and Operations Research
Research Topics
Computer Vision
Deep Learning

Biography

I am an assistant professor at Université de Montréal and was formerly a research scientist at Adobe. I work on problems related to 3D geometry and learning. My research lies at the intersection of geometry processing, computer graphics, deep learning and optimization.

Current Students

Master's Research - Université de Montréal
Master's Research - Université de Montréal

Publications

DECOLLAGE: 3D Detailization by Controllable, Localized, and Learned Geometry Enhancement
Qimin Chen
Zhiqin Chen
Vladimir Kim
Hao (Richard) Zhang
Siddhartha Chaudhuri
MeshUp: Multi-Target Mesh Deformation via Blended Score Distillation
Hyunwoo Kim
Itai Lang
Thibault Groueix
Vladimir Kim
Rana Hanocka
We propose MeshUp, a technique that deforms a 3D mesh towards multiple target concepts and intuitively controls the region where each concept is expressed. Conveniently, the concepts can be defined as either text queries, e.g., "a dog" and "a turtle," or inspirational images, and the local regions can be selected as any number of vertices on the mesh. We can effectively control the influence of the concepts and mix them together using a novel score distillation approach, referred to as Blended Score Distillation (BSD). BSD operates on each attention layer of the denoising U-Net of a diffusion model as it extracts and injects the per-objective activations into a unified denoising pipeline from which the deformation gradients are calculated. To localize the expression of these activations, we create a probabilistic Region of Interest (ROI) map on the surface of the mesh and turn it into 3D-consistent masks that control where each objective is expressed. We demonstrate the effectiveness of BSD empirically and show that it can deform various meshes towards multiple objectives. Our project page is at https://threedle.github.io/MeshUp.
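The blending step can be pictured with a minimal, hypothetical sketch: per-concept deformation gradients are mixed using user-chosen weights and per-vertex ROI masks. Function names, tensor shapes, and the normalization below are assumptions for illustration, not the MeshUp implementation.

```python
# Hypothetical sketch: mixing per-concept deformation gradients with
# per-vertex region-of-interest (ROI) masks, in the spirit of Blended
# Score Distillation. Shapes and names are illustrative assumptions.
import torch

def blend_concept_gradients(per_concept_grads, roi_masks, concept_weights):
    """
    per_concept_grads: (C, V, 3) deformation gradients, one per concept.
    roi_masks:         (C, V) per-vertex probabilities selecting where
                       each concept is expressed.
    concept_weights:   (C,) user-chosen mixing weights.
    Returns a single (V, 3) blended gradient used to update the mesh.
    """
    w = concept_weights.view(-1, 1, 1)           # (C, 1, 1)
    m = roi_masks.unsqueeze(-1)                  # (C, V, 1)
    blended = (w * m * per_concept_grads).sum(dim=0)
    # Normalize by total mask weight so overlapping ROIs do not overshoot.
    denom = (w * m).sum(dim=0).clamp_min(1e-8)
    return blended / denom

# Toy usage: two concepts on a 100-vertex mesh.
grads = torch.randn(2, 100, 3)
masks = torch.rand(2, 100)
weights = torch.tensor([0.7, 0.3])
update = blend_concept_gradients(grads, masks, weights)  # (100, 3)
```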
Temporal Residual Jacobians For Rig-free Motion Transfer
Sanjeev Muralikrishnan
Niladri Shekhar Dutt
Siddhartha Chaudhuri
Vladimir Kim
Matthew Fisher
Niloy J. Mitra
We introduce Temporal Residual Jacobians as a novel representation to enable data-driven motion transfer. Our approach does not assume access to any rigging or intermediate shape keyframes, produces geometrically and temporally consistent motions, and can be used to transfer long motion sequences. Central to our approach are two coupled neural networks that individually predict local geometric and temporal changes that are subsequently integrated, spatially and temporally, to produce the final animated meshes. The two networks are jointly trained, complement each other in producing spatial and temporal signals, and are supervised directly with 3D positional information. During inference, in the absence of keyframes, our method essentially solves a motion extrapolation problem. We test our setup on diverse meshes (synthetic and scanned shapes) to demonstrate its superiority in generating realistic and natural-looking animations on unseen body shapes against SoTA alternatives. Supplemental video and code are available at https://temporaljacobians.github.io/.
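As a rough illustration of two coupled residual predictors integrated over time, the hedged sketch below accumulates per-triangle residuals into per-frame Jacobians; the spatial integration from Jacobians to vertex positions is abstracted behind a caller-supplied `poisson_solve` placeholder. All module names and shapes are assumptions, not the paper's architecture.

```python
# Hypothetical sketch: two networks predict per-triangle residuals that
# are accumulated over time; a caller-supplied `poisson_solve` stands in
# for the spatial integration from Jacobians to vertex positions.
import torch
import torch.nn as nn

class ResidualJacobianNet(nn.Module):
    """Predicts a per-triangle 3x3 residual from a feature vector and time."""
    def __init__(self, feat_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, 64), nn.ReLU(), nn.Linear(64, 9)
        )

    def forward(self, feats, t):
        # feats: (F, feat_dim) per-triangle features; t: scalar frame time.
        t_col = torch.full((feats.shape[0], 1), float(t))
        return self.mlp(torch.cat([feats, t_col], dim=1)).view(-1, 3, 3)

def animate(feats, n_frames, spatial_net, temporal_net, poisson_solve):
    """Accumulate spatial and temporal residuals into per-frame Jacobians."""
    n_faces = feats.shape[0]
    J = torch.eye(3).expand(n_faces, 3, 3).clone()   # start from identity maps
    frames = []
    for t in range(n_frames):
        J = J + spatial_net(feats, t) + temporal_net(feats, t)
        frames.append(poisson_solve(J))              # Jacobians -> vertices
    return frames

# Toy usage with a stand-in solver that just returns the raw Jacobians.
net_s, net_t = ResidualJacobianNet(), ResidualJacobianNet()
frames = animate(torch.randn(50, 32), 4, net_s, net_t, lambda J: J)
```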
TutteNet: Injective 3D Deformations by Composition of 2D Mesh Deformations
Bo Sun
Thibault Groueix
Chen Song
Qixing Huang
MagicClay: Sculpting Meshes With Generative Neural Fields
Amir Barda
Vladimir Kim
Amit H. Bermano
Thibault Groueix
The recent developments in neural fields have brought phenomenal capabilities to the field of shape generation, but they lack crucial properties, such as incremental control - a fundamental requirement for artistic work. Triangular meshes, on the other hand, are the representation of choice for most geometry-related tasks, offering efficiency and intuitive control, but they do not lend themselves to neural optimization. To support downstream tasks, previous art typically proposes a two-step approach, where first a shape is generated using neural fields, and then a mesh is extracted for further processing. Instead, in this paper we introduce a hybrid approach that consistently maintains both a mesh and a Signed Distance Field (SDF) representation. Using this representation, we introduce MagicClay - an artist-friendly tool for sculpting regions of a mesh according to textual prompts while keeping other regions untouched. Our framework carefully and efficiently balances consistency between the representations and regularizations at every step of the shape optimization. Relying on the mesh representation, we show how to render the SDF at higher resolutions and faster. In addition, we employ recent work in differentiable mesh reconstruction to adaptively allocate triangles in the mesh where required, as indicated by the SDF. Using an implemented prototype, we demonstrate superior generated geometry compared to the state of the art, and novel consistent control, allowing sequential prompt-based edits to the same mesh for the first time.
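One simple way to picture a mesh/SDF consistency term is a penalty that drives the SDF toward zero at points sampled on the mesh surface, restricted to the editable region. The sketch below is an assumption-laden illustration of that idea, not the MagicClay implementation.

```python
# Hypothetical sketch: keep an SDF network consistent with a triangle
# mesh by penalizing |SDF| at surface samples inside the edited region.
import torch
import torch.nn as nn

sdf_net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))

def consistency_loss(verts, faces, edit_mask):
    """
    verts:     (V, 3) mesh vertices (optimizable).
    faces:     (F, 3) long tensor of triangle indices.
    edit_mask: (F,) 1.0 for faces inside the sculpted region, else 0.0.
    """
    tri = verts[faces]                              # (F, 3, 3)
    w = torch.rand(faces.shape[0], 3)
    w = w / w.sum(dim=1, keepdim=True)              # random barycentric weights
    pts = (w.unsqueeze(-1) * tri).sum(dim=1)        # (F, 3) surface samples
    # The SDF should vanish on the surface wherever the edit applies.
    return (edit_mask * sdf_net(pts).squeeze(-1).abs()).mean()

# Toy usage on a single editable triangle.
verts = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]], requires_grad=True)
faces = torch.tensor([[0, 1, 2]])
loss = consistency_loss(verts, faces, edit_mask=torch.ones(1))
loss.backward()   # gradients flow to both the mesh and the SDF network
```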
Neural Semantic Surface Maps
Luca Morreale
Vladimir Kim
Niloy J. Mitra
Explorable Mesh Deformation Subspaces from Unstructured 3D Generative Models
Arman Maesumi
Paul Guerrero
Vladimir Kim
Matthew Fisher
Siddhartha Chaudhuri
Daniel Ritchie
DA Wand: Distortion-Aware Selection Using Neural Mesh Parameterization
Richard Liu
Vladimir Kim
Rana Hanocka
We present a neural technique for learning to select a local sub-region around a point which can be used for mesh parameterization. Our framework is motivated by interactive workflows used for decaling, texturing, or painting on surfaces. Our key idea is to incorporate segmentation probabilities as weights of a classical parameterization method, implemented as a novel differentiable parameterization layer within a neural network framework. We train a segmentation network to select 3D regions that are parameterized into 2D and penalized by the resulting distortion, giving rise to segmentations which are distortion-aware. Following training, a user can use our system to interactively select a point on the mesh and obtain a large, meaningful region around the selection which induces a low-distortion parameterization. Our code (https://github.com/threedle/DA-Wand) and project page (https://threedle.github.io/DA-Wand/) are publicly available.
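The training signal can be pictured as a distortion measure of the induced parameterization, weighted by the soft selection probabilities, plus a term that rewards selecting a large region. The sketch below is a hedged illustration under assumed inputs (the differentiable parameterization layer that produces the per-triangle distortion is not shown); it is not the DA Wand loss.

```python
# Hypothetical sketch: per-triangle selection probabilities weight a
# parameterization distortion measure, so the segmentation network is
# rewarded for choosing large, low-distortion regions.
import torch

def distortion_aware_loss(probs, per_tri_distortion, area, alpha=0.1):
    """
    probs:              (F,) soft selection probability per triangle.
    per_tri_distortion: (F,) distortion of each triangle under the
                        candidate 2D parameterization.
    area:               (F,) triangle areas used for weighting.
    alpha:              trade-off encouraging larger selections.
    """
    w = probs * area
    distortion_term = (w * per_tri_distortion).sum() / w.sum().clamp_min(1e-8)
    coverage_term = -alpha * w.sum() / area.sum()
    return distortion_term + coverage_term

# Toy usage on a 200-triangle mesh.
loss = distortion_aware_loss(
    probs=torch.rand(200),
    per_tri_distortion=1.0 + torch.rand(200),
    area=torch.rand(200) + 0.1,
)
```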
Isometric Energies for Recovering Injectivity in Constrained Mapping
Xingyi Du
Danny M. Kaufman
Qingnan Zhou
Shahar Kovalsky
Yajie Yan
Tao Ju