Auroral classification ergonomics and the implications for machine learning
Permanent link
https://hdl.handle.net/10037/19076
Date
2020-07-09
Type
Journal article
Peer reviewed
Abstract
The machine-learning research community has focused considerable attention on bias in algorithms and has identified several of its manifestations. Bias in training samples is recognised as a potential source of prejudice in machine learning, and it can be introduced by the human experts who define the training sets. As machine-learning techniques are applied to auroral classification, it is important to identify and address potential sources of expert-injected bias. In an ongoing study, 13 947 auroral images were manually classified, with significant differences between classifications. This large dataset allowed some of these biases to be identified, especially those originating from the ergonomics of the classification process. The findings are presented in this paper to serve as a checklist for improving training data integrity, not only for expert classifications but also for crowd-sourced, citizen-science projects. As the application of machine-learning techniques to auroral research is relatively new, it is important that biases are identified and addressed before they become endemic in the corpus of training data.
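The abstract does not specify how the differences between classifications were quantified. One common way to measure inter-expert disagreement of this kind is Cohen's kappa; the minimal Python sketch below is a hypothetical illustration only, with invented class names and label lists that are not the categories or data used in the study.

# Hypothetical sketch: quantifying disagreement between two experts who
# labelled the same auroral images. Class names and labels are invented
# for illustration and are not taken from the study described above.
from sklearn.metrics import cohen_kappa_score

CLASSES = ["arc", "diffuse", "discrete", "cloudy", "no_aurora"]  # assumed example classes

# Labels assigned to the same ten images by two fictional experts.
expert_a = ["arc", "arc", "diffuse", "discrete", "no_aurora",
            "cloudy", "diffuse", "arc", "discrete", "no_aurora"]
expert_b = ["arc", "diffuse", "diffuse", "discrete", "no_aurora",
            "cloudy", "arc", "arc", "diffuse", "no_aurora"]

# Kappa corrects raw percent agreement for agreement expected by chance:
# 1.0 means perfect agreement, 0.0 means chance-level agreement.
kappa = cohen_kappa_score(expert_a, expert_b, labels=CLASSES)
print(f"Cohen's kappa between the two experts: {kappa:.2f}")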
Is part of
Kvammen, A. (2021). Auroral Image Processing Techniques - Machine Learning Classification and Multi-Viewpoint Analysis. (Doctoral thesis). https://hdl.handle.net/10037/22584
Publisher
Copernicus Publications, European Geosciences Union
Citation
McKay D, Kvammen A. Auroral classification ergonomics and the implications for machine learning. Geoscientific Instrumentation, Methods and Data Systems. 2020;9(2):267-273
Copyright 2020 The Author(s)