Mitigating Distributional Biases in Contrastive Learning

19 May 2023, 16:40
15m
Физтех.Арктика, Lecture Hall (MIPT)

Computer & Data Science

Speaker

Лидия Троешестова (Московский физико-технический институт)

Description

Recently, contrastive learning has regained popularity as a self-supervised representation learning technique: it compares positive (similar) and negative (dissimilar) pairs of samples to learn representations without labels. However, false-negative and false-positive errors in pair sampling bias the loss function. This paper analyzes ways to eliminate these biases. Starting from the fully supervised case, we develop debiased contrastive models that account for same-label data points without requiring knowledge of the true labels, and explore their properties. Using the debiased representations, we measure prediction accuracy on a downstream classification task. Experiments on the CIFAR-10 dataset demonstrate the applicability and robustness of the proposed method in scenarios where extensive labeling is expensive or infeasible.
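
To make the debiasing idea concrete, below is a minimal illustrative sketch of a debiased InfoNCE-style loss in PyTorch, in the spirit of the debiased contrastive learning literature (e.g., Chuang et al., 2020): the expected contribution of false negatives, governed by an assumed class prior tau_plus, is subtracted from the negative term. The function name, tensor shapes, and default hyperparameters (tau_plus=0.1, temperature t=0.5) are assumptions for illustration, not details taken from the talk.

```python
import math

import torch
import torch.nn.functional as F


def debiased_contrastive_loss(z_anchor, z_pos, z_neg, tau_plus=0.1, t=0.5):
    """Debiased InfoNCE-style loss (illustrative sketch).

    z_anchor, z_pos: (B, D) embeddings of anchors and their positives.
    z_neg:           (B, N, D) embeddings of N sampled "negatives" per anchor;
                     some may secretly share the anchor's label (false negatives).
    tau_plus:        assumed class prior -- the probability that a random
                     negative actually has the same label as the anchor.
    t:               softmax temperature.
    """
    z_anchor = F.normalize(z_anchor, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    z_neg = F.normalize(z_neg, dim=-1)

    # exp(cosine similarity / t) for the positive pair and for each negative
    pos = torch.exp((z_anchor * z_pos).sum(dim=-1) / t)              # (B,)
    neg = torch.exp(torch.einsum("bd,bnd->bn", z_anchor, z_neg) / t) # (B, N)

    n = z_neg.shape[1]
    # Debias the negative term: subtract the expected contribution of false
    # negatives (tau_plus * pos), rescale by 1 / (1 - tau_plus), and clamp
    # at the theoretical minimum e^{-1/t} so the estimator stays positive.
    ng = (neg.mean(dim=1) - tau_plus * pos) / (1.0 - tau_plus)
    ng = torch.clamp(ng, min=math.exp(-1.0 / t))

    # Standard contrastive objective, with the biased negative sum
    # replaced by the debiased estimate n * ng.
    return -torch.log(pos / (pos + n * ng)).mean()
```

Note that setting tau_plus=0 recovers the standard (biased) InfoNCE objective, which makes the debiasing correction easy to ablate against the baseline.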

Primary author

Лидия Троешестова (Московский физико-технический институт)

Co-author

Роман Исаченко (Московский физико-технический институт)
