dc.contributor.author | Gupta, Deepak Kumar | |
dc.contributor.author | Agarwal, Rohit | |
dc.contributor.author | Agarwal, Krishna | |
dc.contributor.author | Prasad, Dilip Kumar | |
dc.contributor.author | Bamba, Udbhav | |
dc.contributor.author | Thakur, Abhishek | |
dc.contributor.author | Gupta, Akash | |
dc.contributor.author | Suraj, Sharan | |
dc.contributor.author | Demir, Ertugrul | |
dc.date.accessioned | 2025-02-19T12:57:39Z | |
dc.date.available | 2025-02-19T12:57:39Z | |
dc.date.issued | 2024-07-12 | |
dc.description.abstract | Current convolutional neural networks (CNNs) are not designed for large scientific images with rich
multi-scale features, such as those in the satellite and microscopy domains. A new phase of development of
CNNs specifically designed for large images is awaited. However, the application-independent, high-quality,
and challenging datasets needed for such development are still missing. We present the ‘UltraMNIST
dataset’ and associated benchmarks for this new research problem of ‘training CNNs for large
images’. The dataset is simple, representative of wide-ranging challenges in scientific data, and easily
customizable in terms of complexity, smallest and largest feature sizes, and image size.
Two variants of the problem are discussed: a standard version that facilitates the development of novel
CNN methods making effective use of the best available GPU resources, and a budget-aware version that
promotes the development of methods that work under constrained GPU memory. Several baselines
are presented, and the effect of reduced resolution is studied. The presented benchmark dataset and
baselines will hopefully trigger the development of new CNN methods for large scientific images. | en_US |
dc.identifier.citation | Gupta, Agarwal, Agarwal, Prasad. An UltraMNIST classification benchmark to train CNNs for very large images. Scientific Data. 2024 | en_US |
dc.identifier.cristinID | FRIDAID 2358241 | |
dc.identifier.doi | 10.1038/s41597-024-03587-4 | |
dc.identifier.issn | 2052-4463 | |
dc.identifier.uri | https://hdl.handle.net/10037/36529 | |
dc.language.iso | eng | en_US |
dc.publisher | Springer Nature | en_US |
dc.relation.journal | Scientific Data | |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/H2020/964800/EU/Technology for real-time visualizing and modelling of fundamental process in living organoids towards new insights into organ-specific health, disease, and recovery/OrganVision | en_US |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/ERC/101123485/EU/Sperm filtration for improved success rate of assisted reproduction technology/SpermoTile/ | en_US |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/?/894233?/?/?/?/ | en_US |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2024 The Author(s) | en_US |
dc.rights.uri | https://creativecommons.org/licenses/by/4.0 | en_US |
dc.rights | Attribution 4.0 International (CC BY 4.0) | en_US |
dc.title | An UltraMNIST classification benchmark to train CNNs for very large images | en_US |
dc.type.version | publishedVersion | en_US |
dc.type | Journal article | en_US |
dc.type | Tidsskriftartikkel | en_US |
dc.type | Peer reviewed | en_US |