Corruption Invariant Person Re-Identification

VALSE Webinar

By Minghui Chen

December 30, 2021

Abstract

A talk (in Chinese) about corruption invariant person re-identification and understanding model robustness against image corruptions.

Date

December 30, 2021

Time

12:00 AM

Location

Online

Event

VALSE Webinar

Abstract

When deploying person re-identification (ReID) models in safety-critical applications, it is pivotal to understand how robust the models are to a diverse array of image corruptions. However, current evaluations of person ReID consider only performance on clean datasets and ignore robustness under various corrupted scenarios.

In this work, we comprehensively establish five ReID benchmarks for learning corruption-invariant representations. To our knowledge, we are the first in the field of ReID to conduct an exhaustive study of corruption-invariant learning on single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01.
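For context, below is a minimal sketch of how a corrupted copy of a ReID test set can be generated, assuming the `imagecorruptions` package (ImageNet-C-style corruptions). The dataset paths and the per-image sampling of corruption type and severity are illustrative, not the benchmark's actual tooling.

```python
# Minimal sketch: build a corrupted copy of a ReID test set.
# Assumes `pip install imagecorruptions`; paths are hypothetical.
import os
import random

import numpy as np
from PIL import Image
from imagecorruptions import corrupt, get_corruption_names

SRC_DIR = "Market-1501/query"            # hypothetical clean test images
DST_DIR = "Market-1501/query_corrupted"  # hypothetical output directory
os.makedirs(DST_DIR, exist_ok=True)

for fname in os.listdir(SRC_DIR):
    if not fname.endswith(".jpg"):
        continue
    img = np.asarray(Image.open(os.path.join(SRC_DIR, fname)).convert("RGB"))
    # Sample one corruption type and one severity level (1..5) per image.
    name = random.choice(get_corruption_names())
    severity = random.randint(1, 5)
    corrupted = corrupt(img, corruption_name=name, severity=severity)
    Image.fromarray(np.uint8(corrupted)).save(os.path.join(DST_DIR, fname))
```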

After reproducing 21 recent ReID methods and examining their robustness, we make the following key observations:

  1. Transformer-based models are more robust to corrupted images than CNN-based models.
  2. Increasing the probability of random erasing (a commonly used augmentation method) hurts corruption robustness (see the sketch after this list).
  3. Cross-dataset generalization improves as corruption robustness increases.
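To illustrate observation 2, here is a minimal sketch of where the random-erasing probability enters a typical ReID training transform, using torchvision. The input size and transform values are common defaults, not the talk's exact recipe.

```python
# Minimal sketch: the random-erasing probability `p` in a typical ReID
# training pipeline. Per observation 2, a larger `p` tends to reduce
# corruption robustness, so `p` is worth tuning rather than maximizing.
import torchvision.transforms as T

def build_train_transform(erase_prob: float = 0.5) -> T.Compose:
    return T.Compose([
        T.Resize((256, 128)),                 # common ReID input size
        T.RandomHorizontalFlip(p=0.5),
        T.ToTensor(),                         # RandomErasing expects a tensor
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
        # Applied with probability `erase_prob`; increasing this value
        # was observed to hurt robustness against image corruptions.
        T.RandomErasing(p=erase_prob, scale=(0.02, 0.33), ratio=(0.3, 3.3)),
    ])
```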

Building on these observations, we propose a strong baseline for both single- and cross-modality ReID datasets that achieves improved robustness against diverse corruptions.
