An UltraMNIST classification benchmark to train CNNs for very large images
Permanent link
https://hdl.handle.net/10037/36529
Date
2024-07-12
Type
Journal article
Peer reviewed
Author
Gupta, Deepak Kumar; Agarwal, Rohit; Agarwal, Krishna; Prasad, Dilip Kumar; Bamba, Udbhav; Thakur, Abhishek; Gupta, Akash; Suraj, Sharan; Demir, Ertugul
Abstract
Current convolutional neural networks (CNNs) are not designed for large scientific images with rich
multi-scale features, such as those found in the satellite and microscopy domains. A new phase of
development of CNNs designed especially for large images is awaited. However, the application-independent,
high-quality, and challenging datasets needed for such development are still missing. We present the
‘UltraMNIST dataset’ and associated benchmarks for this new research problem of ‘training CNNs for large
images’. The dataset is simple, representative of wide-ranging challenges in scientific data, and easily
customizable for different levels of complexity, smallest and largest feature sizes, and image sizes.
Two variants of the problem are discussed: a standard version that facilitates the development of novel
CNN methods for effective use of the best available GPU resources, and a budget-aware version that
promotes the development of methods that work under constrained GPU memory. Several baselines
are presented and the effect of reduced resolution is studied. The presented benchmark dataset and
baselines will hopefully trigger the development of new CNN methods for large scientific images.
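The abstract describes images assembled from small digit patches placed at widely varying scales on a much larger canvas. The sketch below illustrates that assembly step only, using NumPy; the function name, canvas size, and scale range are hypothetical choices for illustration, not the authors' generation code, and real MNIST digits would be substituted for the patches passed in.

```python
import numpy as np

def make_multiscale_sample(patches, canvas_size=512, rng=None):
    """Overlay small grayscale patches (e.g. 28x28 MNIST digits) at widely
    varying scales on one large blank canvas, mimicking the multi-scale
    character of the UltraMNIST images described in the abstract.

    Illustrative sketch only: scale range and placement policy are assumptions.
    """
    rng = np.random.default_rng(rng)
    canvas = np.zeros((canvas_size, canvas_size), dtype=np.float32)
    for patch in patches:
        # Pick an integer upscaling factor so patch sizes span a wide range.
        scale = int(rng.integers(1, canvas_size // patch.shape[0]))
        # Nearest-neighbour upscaling by index repetition.
        big = np.repeat(np.repeat(patch, scale, axis=0), scale, axis=1)
        # Random top-left corner such that the patch fits on the canvas.
        y = int(rng.integers(0, canvas_size - big.shape[0] + 1))
        x = int(rng.integers(0, canvas_size - big.shape[1] + 1))
        region = canvas[y:y + big.shape[0], x:x + big.shape[1]]
        # Element-wise max keeps overlapping digits visible without clipping.
        np.maximum(region, big, out=region)
    return canvas
```

Scaling the canvas size up (the published dataset uses far larger images than this sketch's default) is what makes such samples challenging for standard CNN training pipelines under fixed GPU memory.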
Publisher
Springer Nature
Citation
Gupta, Agarwal, Agarwal, Prasad. An UltraMNIST classification benchmark to train CNNs for very large images. Scientific Data. 2024
Copyright 2024 The Author(s)