Mixing up contrastive learning: Self-supervised representation learning for time series
Permanent link: https://hdl.handle.net/10037/26414
Date: 2022-02-14
Type: Journal article
Peer reviewed
Abstract
The lack of labeled data is a key challenge for learning useful representations from time series data. Hence, an unsupervised representation learning framework capable of producing high-quality representations could be of great value. It is key to enabling transfer learning, which is especially beneficial for medical applications, where data are abundant but labeling is costly and time-consuming. We propose an unsupervised contrastive learning framework that is motivated from the perspective of label smoothing. The proposed approach uses a novel contrastive loss that naturally exploits a data augmentation scheme in which new samples are generated by mixing two data samples with a mixing component. The task in the proposed framework is to predict the mixing component, which is utilized as soft targets in the loss function. Experiments demonstrate the framework's superior performance compared to other representation learning approaches on both univariate and multivariate time series, and illustrate its benefits for transfer learning with clinical time series.
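To illustrate the idea of predicting the mixing component, the following is a minimal sketch of a mixup-style contrastive objective in PyTorch. The encoder architecture, the Beta parameter alpha, and the temperature below are illustrative assumptions, not the authors' exact implementation.

# Sketch of mixup-based contrastive learning for time series (assumptions:
# generic encoder, alpha and temperature values are placeholders).
import torch
import torch.nn.functional as F

def mixup_contrastive_loss(encoder, x1, x2, alpha=0.2, temperature=0.5):
    """Mix two batches of time series and train the encoder to predict
    the mixing component, used as soft targets in a contrastive loss."""
    # Sample a mixing component per pair and build the mixed samples.
    lam = torch.distributions.Beta(alpha, alpha).sample((x1.size(0),)).to(x1.device)
    lam_b = lam.view(-1, *([1] * (x1.dim() - 1)))   # broadcast over channel/time dims
    x_mix = lam_b * x1 + (1.0 - lam_b) * x2

    # Encode originals and mixtures; normalize for cosine similarity.
    h1, h2, h_mix = (F.normalize(encoder(x), dim=-1) for x in (x1, x2, x_mix))

    # Similarity of each mixed representation to its two source representations.
    sim1 = (h_mix * h1).sum(dim=-1) / temperature
    sim2 = (h_mix * h2).sum(dim=-1) / temperature
    log_p = F.log_softmax(torch.stack([sim1, sim2], dim=-1), dim=-1)

    # Soft targets given by the mixing component (the label-smoothing view).
    targets = torch.stack([lam, 1.0 - lam], dim=-1)
    return -(targets * log_p).sum(dim=-1).mean()

# Usage sketch: any encoder mapping (batch, channels, length) -> (batch, dim) works.
encoder = torch.nn.Sequential(
    torch.nn.Conv1d(1, 32, kernel_size=7, padding=3),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
)
x1, x2 = torch.randn(8, 1, 128), torch.randn(8, 1, 128)
loss = mixup_contrastive_loss(encoder, x1, x2)
loss.backward()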
Publisher: Elsevier
Citation: Wickstrøm, Kampffmeyer, Mikalsen, Jenssen. Mixing up contrastive learning: Self-supervised representation learning for time series. Pattern Recognition Letters. 2022;155:54-61.
Copyright 2022 The Author(s)