Benchmarks for Corruption Invariant Person Re-Identification

By Minghui Chen, Zhiqiang Wang, and Feng Zheng

December 1, 2021

Authors: Minghui Chen (Equal contribution), Zhiqiang Wang (Equal contribution), Feng Zheng

Published in: Thirty-fifth Conference on Neural Information Processing Systems (NeurIPS 2021)

Abstract

When deploying person re-identification (ReID) models in safety-critical applications, it is pivotal to understand the robustness of the model against a diverse array of image corruptions. However, current evaluations of person ReID only consider performance on clean datasets and ignore images in various corrupted scenarios. In this work, we comprehensively establish five ReID benchmarks for learning corruption-invariant representations. In the field of ReID, we are the first to conduct an exhaustive study of corruption-invariant learning on single- and cross-modality datasets, including Market-1501, CUHK03, MSMT17, RegDB, and SYSU-MM01. After reproducing and examining the robustness performance of 21 recent ReID methods, we make the following observations: 1) transformer-based models are more robust to corrupted images than CNN-based models; 2) increasing the probability of random erasing (a commonly used augmentation method) hurts model corruption robustness; 3) cross-dataset generalization improves as corruption robustness increases. Building on these observations, we propose a strong baseline for both single- and cross-modality ReID datasets that achieves improved robustness against diverse corruptions. Our code is available on GitHub: https://github.com/MinghuiChen43/CIL-ReID.
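One of the observations above concerns random erasing, an augmentation that blanks out a random rectangle of the input image with some probability. As a point of reference, here is a minimal NumPy-only sketch of that augmentation; the function name, parameter names, and area-sampling range are illustrative choices, not the paper's implementation.

```python
import numpy as np


def random_erasing(img, p=0.5, area_frac=(0.02, 0.4), rng=None):
    """Minimal sketch of random erasing for an H x W x C float image.

    With probability `p`, replace a randomly placed rectangle
    (covering a fraction of the image sampled from `area_frac`)
    with random pixel values; otherwise return the image unchanged.
    Parameter defaults here are illustrative assumptions.
    """
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() > p:  # skip the augmentation with probability 1 - p
        return img
    h, w = img.shape[:2]
    # Sample the erased area as a fraction of the image, then derive
    # a roughly square rectangle from it.
    area = rng.uniform(*area_frac) * h * w
    eh = min(h, max(1, int(round(np.sqrt(area)))))
    ew = min(w, max(1, int(round(area / eh))))
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[top:top + eh, left:left + ew] = rng.random((eh, ew, img.shape[2]))
    return out
```

The paper's observation is about the probability `p`: raising it makes erased (synthetically occluded) training images more frequent, which the benchmarks show can trade off against robustness to natural corruptions.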

Categories:
NeurIPS
Tags:
person re-identification, robustness, corruption invariance, benchmark, computer vision