Portrait of Étienne Laliberté

Étienne Laliberté

Associate Academic Member
Full Professor, Université de Montréal, Department of Biological Sciences
Research Topics
Computer Vision

Biography

Étienne Laliberté is a full professor in the Department of Biological Sciences at Université de Montréal, a member of the Institut de recherche en biologie végétale (IRBV), and the Canada Research Chair in Plant Functional Biodiversity. He also heads the Canadian Airborne Biodiversity Observatory (CABO).

Laliberté’s current research focuses on developing new approaches for vegetation monitoring (plant biodiversity and carbon) based on high-resolution remote sensing using drones and computer vision. He is particularly interested in applications of this technology that can help mitigate biodiversity loss and climate change, and that can have a rapid and widespread impact.

Current Students

Master's Research - Université de Montréal
Co-supervisor:
Independent visiting researcher - Université de Montréal
Independent visiting researcher - Université de Montréal
Master's Research - Université de Montréal
Postdoctorate - McGill University
Principal supervisor:

Publications

Uncertainty Assessment in Deep Learning-based Plant Trait Retrievals from Hyperspectral data
Teja Kattenborn
Luke A. Brown
Michael Ewald
Katja Berger
Phuong D. Dao
Tobias B. Hank
Bing Lu
Hannes Feilhauer
Large-scale mapping of plant biophysical and biochemical traits is essential for ecological and environmental applications. Given their finer spectral resolution and unprecedented data availability, hyperspectral data, in concert with machine and particularly deep learning models, have emerged as a promising, non-destructive tool for accurately retrieving these traits. However, when deploying these methods on a large scale, reliably quantifying the associated uncertainty remains a critical challenge, especially when models encounter out-of-domain (OOD) data, i.e., samples that differ substantially from those of the training data, such as unseen geographical regions, species, biomes, data acquisition modalities, or scene components (e.g., clouds and water bodies). Traditional uncertainty quantification methods for deep learning models, including deep ensembles (deterministic and probabilistic) and Monte Carlo dropout, rely on the variance of predictions but often fail to capture uncertainty in OOD scenarios, leading to overly optimistic and possibly misleading uncertainty estimates. To address this limitation, we propose a distance-based uncertainty estimation method (Dis_UN) that quantifies prediction uncertainty by measuring the dissimilarity in the predictor space (spectral inputs) and embedding space (features learned by the deep model) between the training and test data. Dis_UN leverages residuals as a proxy for uncertainty and employs dissimilarity indices in data manifolds to estimate worst-case errors via 95th-quantile regression. We evaluate Dis_UN using a pretrained deep learning model to predict multiple plant traits from hyperspectral images, analyzing its performance across OOD data, such as pixels containing spectral variations from urban surfaces, bare ground, water, clouds, or open surface waters. In this study, we target six leaf and canopy traits: leaf mass per area, chlorophylls, carotenoids, nitrogen content, equivalent water thickness, and leaf area index. Compared to scaled variance-based methods, Dis_UN provides (1) a superior estimation of uncertainty in OOD scenarios, achieving 36 % higher contrast (KS distances: 0.648 vs. 0.475) between non-vegetation pixels, particularly under mixed-pixel conditions at medium resolution (30 m); (2) uncertainty quantification without requiring normality or symmetry assumptions, accommodating asymmetric error patterns; (3) enhanced interpretability of uncertainty sources, as uncertainty is directly linked to sample dissimilarity from the training data; and (4) computational efficiency at inference (2.6–7.7× faster), requiring only a single forward pass compared to multiple passes for ensemble-based methods. Challenges remain for traits that are affected by spectral saturation. These findings highlight the advantages of distance-aware uncertainty quantification methods and underscore the necessity of diverse training datasets to minimize sampling biases and enhance model robustness. The proposed framework improves the reliability of uncertainty estimation in vegetation monitoring and offers a promising approach for broader applications.
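The distance-based idea can be illustrated outside the paper's exact implementation. Below is a minimal sketch assuming scikit-learn, synthetic arrays, and a simple k-nearest-neighbour dissimilarity; the function `knn_distance` and all array names are illustrative choices, not the Dis_UN code.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.ensemble import GradientBoostingRegressor

def knn_distance(train_feats, query_feats, k=5):
    """Mean distance of each query sample to its k nearest training samples."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_feats)
    dists, _ = nn.kneighbors(query_feats)
    return dists.mean(axis=1)

# Synthetic stand-ins: spectra X, deep-model embeddings Z, and absolute
# residuals of trait predictions on a held-out calibration set.
rng = np.random.default_rng(0)
X_train, Z_train = rng.normal(size=(500, 200)), rng.normal(size=(500, 64))
X_cal, Z_cal = rng.normal(size=(100, 200)), rng.normal(size=(100, 64))
resid_cal = np.abs(rng.normal(size=100))

# Dissimilarity to the training data in predictor (spectral) and embedding space.
d_cal = np.column_stack([knn_distance(X_train, X_cal),
                         knn_distance(Z_train, Z_cal)])

# 95th-quantile regression of residuals on dissimilarity: an estimate of the
# worst-case error expected at a given distance from the training manifold.
q95 = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(d_cal, resid_cal)

# At inference, a single forward pass yields embeddings; uncertainty is read off
# the quantile model from each sample's dissimilarity to the training data.
X_test, Z_test = rng.normal(size=(10, 200)), rng.normal(size=(10, 64))
d_test = np.column_stack([knn_distance(X_train, X_test),
                          knn_distance(Z_train, Z_test)])
uncertainty = q95.predict(d_test)
```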
Estimating Individual Tree Height and Species from UAV Imagery
Accurate estimation of forest biomass, a major carbon sink, relies heavily on tree-level traits such as height and species. Unoccupied Aerial Vehicles (UAVs) capturing high-resolution imagery from a single RGB camera offer a cost-effective and scalable approach for mapping and measuring individual trees. We introduce BIRCH-Trees, the first benchmark for individual tree height and species estimation from tree-centered UAV images, spanning three datasets: temperate forests, tropical forests, and boreal plantations. We also present DINOvTree, a unified approach using a Vision Foundation Model (VFM) backbone with task-specific heads for simultaneous height and species prediction. Through extensive evaluations on BIRCH-Trees, we compare DINOvTree against commonly used vision methods, including VFMs, as well as biological allometric equations. We find that DINOvTree achieves top overall results with accurate height predictions and competitive classification accuracy while using only 54% to 58% of the parameters of the second-best approach.
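A rough sketch of the backbone-plus-two-heads design is given below, assuming PyTorch and the public DINOv2 ViT-S/14 weights as a stand-in for the VFM; `TreeHeightSpeciesNet`, the head sizes, and the frozen-backbone choice are illustrative assumptions, not the authors' DINOvTree implementation.

```python
import torch
import torch.nn as nn

class TreeHeightSpeciesNet(nn.Module):
    """Frozen vision-foundation-model backbone with two task-specific heads:
    regression for tree height and classification for species."""
    def __init__(self, num_species: int, embed_dim: int = 384):
        super().__init__()
        # Public DINOv2 ViT-S/14 weights; any VFM exposing a global image embedding works.
        self.backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
        for p in self.backbone.parameters():
            p.requires_grad = False
        self.height_head = nn.Sequential(nn.Linear(embed_dim, 256), nn.GELU(),
                                         nn.Linear(256, 1))
        self.species_head = nn.Linear(embed_dim, num_species)

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)                    # (B, embed_dim) CLS features
        height = self.height_head(feats).squeeze(-1)     # predicted height (e.g. metres)
        return height, self.species_head(feats)          # height + species logits

model = TreeHeightSpeciesNet(num_species=20)
imgs = torch.randn(2, 3, 224, 224)                       # tree-centred UAV crops
height, logits = model(imgs)
loss = nn.functional.mse_loss(height, torch.rand(2)) + \
       nn.functional.cross_entropy(logits, torch.randint(0, 20, (2,)))
```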
PlantTraitNet: An Uncertainty-Aware Multimodal Framework for Global-Scale Plant Trait Inference from Citizen Science Data
Ayushi Sharma
Johanna Trost
Daniel Lusk
Johannes Dollinger
Julian Schrader
Christian Rossi
Javier Lopatin
Simon Haberstroh
Jana Eichel
Daniel Mederer
Jose Miguel Cerda-Paredes
Shyam S. Phartyal
Lisa-Maricia Schwarz
Anja Linstädter
Maria Conceição Caldeira
Teja Kattenborn
Global maps of plant traits, such as leaf nitrogen or plant height, are essential for understanding ecosystem processes, including the carbon and energy cycles of the Earth system. However, existing trait maps remain limited by the high cost and sparse geographic coverage of field-based measurements. Citizen science initiatives offer a largely untapped resource to overcome these limitations, with over 50 million geotagged plant photographs worldwide capturing valuable visual information on plant morphology and physiology. In this study, we introduce PlantTraitNet, a multi-modal, multi-task, uncertainty-aware deep learning framework that predicts four key plant traits (plant height, leaf area, specific leaf area, and nitrogen content) from citizen science photos using weak supervision. By aggregating individual trait predictions across space, we generate global maps of trait distributions. We validate these maps against independent vegetation survey data (sPlotOpen) and benchmark them against leading global trait products. Our results show that PlantTraitNet consistently outperforms existing trait maps across all evaluated traits, demonstrating that citizen science imagery, when integrated with computer vision and geospatial AI, enables not only scalable but also more accurate global trait mapping. This approach offers a powerful new pathway for ecological research and Earth system modeling.
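The uncertainty-aware, multi-task part can be sketched as a head that predicts a mean and a log-variance for each trait, trained with a heteroscedastic Gaussian loss under weak (e.g., species-level) labels. This is an assumption-laden illustration, not PlantTraitNet itself; `TraitRegressor`, the ResNet-50 backbone, and the label source are placeholders.

```python
import torch
import torch.nn as nn
import torchvision

TRAITS = ["plant_height", "leaf_area", "specific_leaf_area", "nitrogen_content"]

class TraitRegressor(nn.Module):
    """Multi-task head predicting a mean and a log-variance for each trait, so the
    network expresses per-image uncertainty under weak supervision."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet50(weights="IMAGENET1K_V2")
        backbone.fc = nn.Identity()
        self.backbone = backbone
        self.head = nn.Linear(2048, 2 * len(TRAITS))     # per-trait mean + log-variance

    def forward(self, images):
        out = self.head(self.backbone(images))
        mean, log_var = out.chunk(2, dim=-1)
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    # Heteroscedastic loss: large predicted variance down-weights noisy weak labels.
    return (0.5 * (torch.exp(-log_var) * (target - mean) ** 2 + log_var)).mean()

model = TraitRegressor()
imgs = torch.randn(4, 3, 224, 224)                       # citizen science photos
weak_labels = torch.rand(4, len(TRAITS))                 # e.g. species means from a trait database
mean, log_var = model(imgs)
loss = gaussian_nll(mean, log_var, weak_labels)
```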
Understanding Representation Gaps across Scales in Tropical Tree Species Classification from Drone Imagery
Sulagna Saha
Evan M. Gora
Adriane Esquivel Muelbert
Ian R. McGregor
Cesar Gutierrez
Vanessa E. Rubio
Accurate classification of tropical tree species from unoccupied aerial vehicle (UAV) imagery remains challenging due to high species diversity and strong visual similarity among species at typical image resolutions (centimeters per pixel). In contrast, models trained on close-up citizen science photographs captured with smartphones achieve strong plant species classification performance. Recent advances in UAV data acquisition now enable the collection of close-up images that are spatially registered with top-view aerial imagery and approach the level of visual detail found in smartphone photographs, with the trade-off that such high-resolution photos cannot be acquired for many trees. In this work, we evaluate the performance of existing methods using paired top-view and close-up UAV imagery collected in a species-rich tropical forest. Through fine-tuning experiments, we quantify the performance gap between vision foundation models and in-domain generalist plant recognition models across both image types (high-resolution close-up versus coarser-resolution top-view imagery). We show that classification performance is consistently higher on close-up images than on top-view aerial imagery, and that this performance gap widens for rare species. Finally, we propose that self-supervised representation alignment across these two spatial scales offers a promising approach for integrating fine-grained visual information into canopy-level species classification models based on top-view UAV imagery. Leveraging high-resolution close-up UAV imagery to enhance canopy-level species classification could substantially improve large-scale monitoring of tropical forest biodiversity.
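One way to realize the proposed cross-scale alignment is a symmetric contrastive (InfoNCE) objective between paired close-up and top-view embeddings. The sketch below assumes PyTorch and treats `cross_scale_alignment_loss`, the encoders, and the temperature as illustrative choices rather than the paper's method.

```python
import torch
import torch.nn.functional as F

def cross_scale_alignment_loss(z_topview: torch.Tensor,
                               z_closeup: torch.Tensor,
                               temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss pulling together embeddings of the same crown seen at
    two spatial scales (top-view vs. close-up) and pushing apart other crowns in
    the batch. Both inputs are (B, D) embeddings from two image encoders."""
    z_top = F.normalize(z_topview, dim=-1)
    z_close = F.normalize(z_closeup, dim=-1)
    logits = z_top @ z_close.t() / temperature           # (B, B) cosine-similarity matrix
    targets = torch.arange(z_top.size(0), device=z_top.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Paired embeddings for 8 crowns, 256-d each (from any top-view and close-up encoders).
loss = cross_scale_alignment_loss(torch.randn(8, 256), torch.randn(8, 256))
```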
Seeing the forest and the trees: a workflow for automatic acquisition of ultra-high resolution drone photos of tropical forest canopies to support botanical and ecological studies
Guillaume Tougas
Helene C. Muller-Landau
Gonzalo Rivas-Torres
Thomas R. Walla
Melvin Hernández
Adrian Buenaño
Anna Weber
Jeffrey Q. Chambers
Jomber Chota Inuma
Fernando Araúz
Jorge Valdes
Andrés Hernández
David Brassfield
P. Sérgio
Vicente Vasquez
Adriana Simonetti
Daniel Magnabosco Marra
Caroline de Moura Vasconcelos
Jarol Fernando Vaca
Geovanny Rivadeneyra
José Illanes
Luis A. Salagaje-Muela
Jefferson Gualinga
Tropical forest canopies contain many tree and liana species, and foliar and reproductive characteristics useful for taxonomic identification are often difficult to see from the forest floor. As such, taxonomic identification often becomes a bottleneck in tropical forest inventories. Here we present a drone-based workflow to automatically acquire large volumes of close-up, ultra-high resolution photos of selected tree crowns (or specific locations over the canopy) to support tropical botanical and ecological studies (https://youtu.be/80goMEifpc4). Our workflow is built around the small, easy-to-use DJI Mavic 3 Enterprise (M3E) drone, which is equipped with a wide-angle and a telephoto camera. On day one, the pilot maps a forest area of up to ∼200 ha with the wide-angle camera to generate a high-resolution digital surface model (DSM) and orthomosaic using structure-from-motion (SfM) photogrammetry. On subsequent days, the pilot acquires close-up photos with the telephoto camera from up to 300 selected canopy trees per day. These close-up photos are acquired from 6 m above the canopy and contain a high level of visual detail that allows botanists to reliably identify many tree and liana species. The photos are geolocated with survey-grade accuracy using RTK GNSS, thus facilitating spatial co-registration with other data sources, including the photogrammetry products. The primary operational challenge of our workflow is the need to maintain RTK corrections with the drone to ensure that close-up photos are acquired exactly at the predefined locations. The maximum operational range we achieved was 3 km, which would allow the pilot to reach any tree within a ∼2800 ha area from the take-off point. Although our workflow was developed to support taxonomic identification of tropical trees and lianas, it could be extended to any other forest or vegetation type to support botanical, phenological, and ecological studies. We provide harpia, an open-source Python library to program these automatic close-up photo missions with the M3E drone (https://github.com/traitlab/harpia). Drone imagery and labelled close-up photo data are not yet publicly available because they were acquired with the goal of publishing benchmark machine learning datasets and models for tree and liana species classification, and prior publication of the data would jeopardize this future publication.
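The mission-planning step (placing each close-up waypoint a fixed clearance above the canopy surface) can be illustrated generically. The sketch below assumes rasterio and a DSM GeoTIFF; it is not the harpia API, and the file name, coordinates, and waypoint format are made up for illustration.

```python
import rasterio

CLEARANCE_M = 6.0   # close-ups are acquired ~6 m above the canopy in this workflow

def waypoints_above_canopy(dsm_path, crown_xy, clearance=CLEARANCE_M):
    """For crown centres given in the DSM's coordinate system, return (x, y, z)
    waypoints where z is the canopy surface elevation plus a safety clearance."""
    with rasterio.open(dsm_path) as dsm:
        surface = [vals[0] for vals in dsm.sample(crown_xy)]   # surface elevation per crown
    return [(x, y, z + clearance) for (x, y), z in zip(crown_xy, surface)]

# Hypothetical DSM file and three crowns selected on the orthomosaic (projected coordinates).
crowns = [(625430.2, 1003211.7), (625512.9, 1003180.3), (625498.1, 1003250.6)]
print(waypoints_above_canopy("site_dsm.tif", crowns))
```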
Mapping canopy foliar functional traits in a mixed temperate forest using imaging spectroscopy
Alice Gravel
Margaret Kalacska
Juan Pablo Arroyo-Mora
Additional methodological and statistical details from Testing the scale dependence of plant community assembly processes using imaging spectroscopy
Anna L. Crofts
J. Pablo Arroyo-Mora
Margaret Kalacska
Mark Vellend
Additional methodological details (including Figures S1–S2) and detailed statistical results (Figures S3–S4 and Tables S1–S3).
SelvaBox: A high‑resolution dataset for tropical tree crown detection
Detecting individual tree crowns in tropical forests is essential to study these complex and crucial ecosystems impacted by human interventions and climate change. However, tropical crowns vary widely in size, structure, and pattern and are largely overlapping and intertwined, requiring advanced remote sensing methods applied to high-resolution imagery. Despite growing interest in tropical tree crown detection, annotated datasets remain scarce, hindering robust model development. We introduce SelvaBox, the largest open-access dataset for tropical tree crown detection in high-resolution drone imagery. It spans three countries and contains more than …
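As an illustration of how a crown-detection dataset like this is typically consumed, here is a hedged sketch of fine-tuning a standard torchvision detector on crown bounding boxes; the tile size, box format, and single "tree crown" class are assumptions for the example, not the SelvaBox release specification.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# One foreground class ("tree crown") plus background.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# One illustrative training step on a dummy tile; a real loader would read the
# dataset's drone tiles and crown bounding boxes (format assumed here).
images = [torch.rand(3, 512, 512)]
targets = [{"boxes": torch.tensor([[50.0, 60.0, 180.0, 210.0]]),
            "labels": torch.tensor([1])}]

model.train()
loss_dict = model(images, targets)             # classification + box-regression losses
total_loss = sum(loss_dict.values())
total_loss.backward()
```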
deadtrees.earth — An open-access and interactive database for centimeter-scale aerial imagery to uncover global tree mortality dynamics
Clemens Mosig
Janusch Vajna-Jehle
Miguel D. Mahecha
Yan Cheng
Henrik Hartmann
David Montero
Samuli Junttila
Stéphanie Horion
Mirela Beloiu Schwenke
Michael J. Koontz
Khairul Nizam Abdul Maulud
Stephen Adu-Bredu
Djamil Al-Halbouni
Muhammad Ali
Matthew Allen
Jan Altman
Lot Amorós
Claudia Angiolini
Rasmus Astrup
Hassan Awada
Caterina Barrasso
Harm Bartholomeus
Pieter S.A. Beck
Aurora Bozzini
Joshua Braun-Wimmer
Benjamin Brede
Fabio Marcelo Breunig
Stefano Brugnaro
Allan Buras
Vicente Burchard-Levine
Jesús Julio Camarero
Anna Candotti
Luka Capuder
Erik Carrieri
Mauro Centritto
Gherardo Chirici
Myriam Cloutier
Dhemerson Conciani
KC Cushman
James W. Dalling
Phuong D. Dao
Jan Dempewolf
Martin Denter
Marcel Dogotari
Ricardo Díaz-Delgado
Simon Ecke
Jana Eichel
Anette Eltner
André Fabbri
Maximilian Fabi
Fabian Fassnacht
Matheus Pinheiro Ferreira
Fabian Jörg Fischer
Julian Frey
Annett Frick
Jose Fuentes
Selina Ganz
Matteo Garbarino
Milton García
Matthias Gassilloud
Antonio Gazol
Guillermo Gea-Izquierdo
Kilian Gerberding
Marziye Ghasemi
Francesca Giannetti
Jeffrey Gillan
Roy Gonzalez
Carl Gosper
Terry Greene
Konrad Greinwald
Stuart Grieve
André Große-Stoltenberg
Jesus Aguirre Gutierrez
Anna Göritz
Peter Hajek
David Hedding
Jan Hempel
Stien Heremans
Melvin Hernández
Marco Heurich
Eija Honkavaara
Bernhard Höfle
Robert Jackisch
Tommaso Jucker
Jesse M. Kalwij
Sebastian Kepfer-Rojas
Pratima Khatri-Chhetri
Till Kleinebecker
Hans-Joachim Klemmt
Tomáš Klouček
Niko Koivumäki
Nagesh Kolagani
Jan Komárek
Kirill Korznikov
Bartłomiej Kraszewski
Stefan Kruse
Robert Krüger
Helga Kuechly
Ivan H.Y. Kwong
Bringing SAM to new heights: Leveraging elevation data for tree crown segmentation from drone imagery
Information on trees at the individual level is crucial for monitoring forest ecosystems and planning forest management. Current monitoring methods involve ground measurements, requiring extensive cost, time and labor. Advances in drone remote sensing and computer vision offer great potential for mapping individual trees from aerial imagery at broad scale. Large pre-trained vision models, such as the Segment Anything Model (SAM), represent a particularly compelling choice given limited labeled data. In this work, we compare methods leveraging SAM for the task of automatic tree crown instance segmentation in high-resolution drone imagery in three use cases: 1) boreal plantations, 2) temperate forests and 3) tropical forests. We also study the integration of elevation data into models, in the form of Digital Surface Model (DSM) information, which can readily be obtained at no additional cost from RGB drone imagery. We present BalSAM, a model leveraging SAM and DSM information, which shows potential over other methods, particularly in the context of plantations. We find that methods using SAM out-of-the-box do not outperform a custom Mask R-CNN, even with well-designed prompts. However, efficiently tuning SAM end-to-end and integrating DSM information are both promising avenues for tree crown instance segmentation models.
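One simple way to combine SAM with elevation data is to prompt it with treetop points derived from the canopy surface (local maxima of the DSM). The sketch below assumes the segment-anything package and uses placeholder arrays and checkpoint paths; it is not BalSAM or the paper's end-to-end tuning strategy.

```python
import numpy as np
from scipy.ndimage import maximum_filter
from segment_anything import sam_model_registry, SamPredictor

def treetop_prompts(chm: np.ndarray, window: int = 25, min_height: float = 3.0):
    """Local maxima of a canopy height model: candidate treetops used as SAM point prompts."""
    peaks = (chm == maximum_filter(chm, size=window)) & (chm > min_height)
    rows, cols = np.nonzero(peaks)
    return np.stack([cols, rows], axis=1)          # (N, 2) points in (x, y) pixel order

# Placeholders: a drone orthophoto tile, a matching canopy height model, and a SAM checkpoint.
rgb_tile = np.zeros((1024, 1024, 3), dtype=np.uint8)
chm_tile = np.zeros((1024, 1024), dtype=np.float32)

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)
predictor.set_image(rgb_tile)

crown_masks = []
for point in treetop_prompts(chm_tile):
    masks, scores, _ = predictor.predict(point_coords=point[None, :],
                                         point_labels=np.ones(1, dtype=np.int64),
                                         multimask_output=False)
    crown_masks.append(masks[0])                   # one crown mask per treetop prompt
```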
Assessing SAM for Tree Crown Instance Segmentation from Drone Imagery
Early Detection of an Invasive Alien Plant (Phragmites australis) Using Unoccupied Aerial Vehicles and Artificial Intelligence
The combination of unoccupied aerial vehicles (UAVs) and artificial intelligence to map vegetation represents a promising new approach to improve the detection of invasive alien plant species (IAPS). The high spatial resolution achievable with UAVs and recent innovations in computer vision, especially with convolutional neural networks, suggest that early detection of IAPS could be possible, thus facilitating their management. In this study, we evaluated the suitability of this approach for mapping the location of common reed (Phragmites australis subsp. australis) within a national park located in southern Quebec, Canada. We collected data on six distinct dates during the growing season, covering environments with different levels of reed invasion. Overall, model performance was high for the different dates and zones, especially for recall (mean of 0.89). The results showed an increase in performance, reaching a peak following the appearance of the inflorescence in September (highest F1-score at 0.98). Furthermore, a decrease in spatial resolution negatively affected recall (18% decrease between a spatial resolution of 0.15 cm pixel⁻¹ and 1.50 cm pixel⁻¹) but did not have a strong impact on precision (2% decrease). Despite challenges associated with common reed mapping in a post-treatment monitoring context, the use of UAVs and deep learning shows great potential for IAPS detection when supported by a suitable dataset. Our results show that, from an operational point of view, this approach could be an effective tool for speeding up the work of biologists in the field and ensuring better management of IAPS.
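The reported resolution effect can be mimicked with a toy experiment that degrades a binary reed mask to a coarser ground sampling distance and recomputes precision and recall. The masks, the degradation factor, and the helper names below are illustrative only, not the study's evaluation pipeline.

```python
import numpy as np
import torch
import torch.nn.functional as F

def precision_recall(pred: np.ndarray, truth: np.ndarray):
    """Pixel-wise precision and recall for binary masks."""
    tp = np.logical_and(pred, truth).sum()
    return tp / max(pred.sum(), 1), tp / max(truth.sum(), 1)

def degrade_resolution(mask: np.ndarray, factor: int) -> np.ndarray:
    """Simulate a coarser ground sampling distance by block-averaging a binary
    reed mask and re-thresholding (e.g. factor=10 turns 0.15 into 1.50 cm per pixel)."""
    t = torch.from_numpy(mask.astype(np.float32))[None, None]
    coarse = F.avg_pool2d(t, kernel_size=factor)
    restored = F.interpolate(coarse, scale_factor=factor, mode="nearest")
    return restored[0, 0].numpy() > 0.5

# Placeholder masks; in practice `truth` comes from field labels and `pred`
# from the segmentation model applied to resampled imagery.
truth = np.zeros((600, 600), dtype=bool)
truth[200:400, 150:450] = True
pred = degrade_resolution(truth, factor=10)
print(precision_recall(pred, truth))
```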