Contrastive Learning Strategies for Medical Imaging

Abstract:

Self-supervised learning (SSL) addresses a key limitation of most supervised methods: their dependence on large amounts of high-quality labelled data. Contrastive learning is a popular SSL technique in which a model learns to distinguish between similar and dissimilar examples, pulling similar data points closer together and pushing dissimilar ones farther apart in the feature space. A key element of contrastive learning is defining the positive and negative pairs, since the model learns by comparing these pairs to adjust its representations. Proper pair selection is crucial to the quality of the learned embeddings.
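To make the pair mechanism concrete, the sketch below shows a common baseline strategy (the SimCLR-style NT-Xent/InfoNCE objective): two augmented views of the same image form the positive pair, and all other images in the batch serve as negatives. This is a minimal numpy illustration of the general technique, not the project's actual implementation; the function name and hyperparameters are illustrative.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE (NT-Xent) loss for a batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two views (e.g. augmentations)
    of the same sample: the positive pair. Every other embedding in
    the batch acts as a negative.
    """
    # L2-normalise so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)        # (2n, d)
    sim = z @ z.T / temperature                 # pairwise similarities
    np.fill_diagonal(sim, -np.inf)              # exclude self-similarity
    # each sample's positive sits at index i <-> i + n
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all candidates
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is lower when the two views of each sample are close in embedding space and the remaining batch entries are far away; alternative pair-definition strategies (e.g. patient- or study-level positives in medical imaging) change which indices count as positives while keeping the same objective.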

This project aims to compare different strategies for defining pairs in contrastive learning for medical applications. The first part of the project will involve identifying and comparing state-of-the-art strategies on medical tasks. The second part will focus on evaluating and proposing new strategies.

Marta Hasny
Doctoral Researcher

My research interests include the application of foundation models and generative AI in cardiology.

Maxime Di Folco
Research Scientist

My research interest is the study of cardiac function via machine learning, in particular representation learning methods that aim to acquire low-dimensional representations of high-dimensional data. I have a strong interest in cardiac remodelling (the adaptation of the heart to its environment or to disease), notably the study of its deformation and shape aspects.