An effective framework for learning 3D representations for perception tasks is distilling rich self-supervised image features via contrastive learning. However, image-to-point representation learning for autonomous driving datasets faces two main challenges: 1) the abundance of self-similarity, which results in the contrastive losses pushing away semantically similar point and image regions and thus disturbing the local semantic structure of the learned representations, and 2) severe class imbalance, as pretraining gets dominated by over-represented classes. We propose to alleviate the self-similarity problem through a novel semantically tolerant image-to-point contrastive loss that takes into consideration the semantic distance between positive and negative image regions to minimize contrasting semantically similar point and image regions. Additionally, we address class imbalance by designing a class-agnostic balanced loss that approximates the degree of class imbalance through an aggregate sample-to-samples semantic similarity measure. We demonstrate that our semantically tolerant contrastive loss with class balancing improves state-of-the-art 2D-to-3D representation learning in all evaluation settings on 3D semantic segmentation. Our method consistently outperforms state-of-the-art 2D-to-3D representation learning frameworks across a wide range of 2D self-supervised pretrained models.
2023-06-17
2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (published)
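The abstract above names the two loss components but not their exact form. The sketch below shows one plausible reading in PyTorch, assuming paired point and image region features (e.g., obtained by superpoint/superpixel pooling). The function name semantically_tolerant_loss, the (1 - similarity) negative weighting, the crowding-based anchor weights, and the temperature value are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of an image-to-point contrastive loss in which negatives
# are down-weighted by their semantic similarity to the anchor's image region,
# and anchors are re-weighted by an aggregate sample-to-samples similarity
# used as a label-free proxy for class imbalance.
import torch
import torch.nn.functional as F

def semantically_tolerant_loss(point_feats, image_feats, temperature=0.07):
    """point_feats, image_feats: paired (N, D) features for matched point/pixel regions."""
    p = F.normalize(point_feats, dim=1)
    q = F.normalize(image_feats, dim=1)

    logits = p @ q.t() / temperature                      # (N, N) point-to-image similarities
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()  # numerical stability

    sem_sim = (q @ q.t()).clamp(min=0.0)                  # image-to-image semantic similarity
    eye = torch.eye(len(p), dtype=torch.bool, device=p.device)

    # Semantic tolerance: negatives whose image features are close to the
    # anchor's positive image region contribute less to the denominator,
    # so semantically similar regions are not pushed apart as hard.
    neg_weight = (1.0 - sem_sim).masked_fill(eye, 1.0)
    denom = (torch.exp(logits) * neg_weight).sum(dim=1)
    log_prob = logits.diagonal() - torch.log(denom)

    # Class-agnostic balancing: anchors that are similar to many other samples
    # (over-represented content) receive a smaller per-anchor weight.
    crowding = sem_sim.masked_fill(eye, 0.0).mean(dim=1)
    anchor_weight = 1.0 / (crowding + 1e-6)
    anchor_weight = anchor_weight / anchor_weight.sum()

    return -(anchor_weight * log_prob).sum()
```

Under these assumptions, the loss drops in where a standard InfoNCE-style distillation loss would be used; both the tolerance weights and the crowding weights are computed from the frozen 2D features, so no labels are required.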
Traceability is the ability to relate different artifacts during the development and operation of a system to each other. It enables program comprehension and change impact analysis, and facilitates the cooperation of engineers from different disciplines. The 10th International Workshop on Software and Systems Traceability (formerly the International Workshop on Traceability in Emerging Forms of Software Engineering, TEFSE) explored the role and impact of traceability in modern software and systems development. The event brought together researchers and practitioners to examine the challenges of recovering, maintaining, and utilizing traceability for the myriad forms of software and systems engineering artifacts. SST'19 was a highly interactive working event focused on discussing the main problems related to software traceability, particularly in the context of the opportunities and challenges posed by recent progress in Artificial Intelligence techniques, and on proposing possible solutions for such problems.