dc.contributor.advisor | Bjørndalen, John Markus | |
dc.contributor.advisor | Anshus, Otto | |
dc.contributor.advisor | Horsch, Alexander | |
dc.contributor.author | Thom, Håvard | |
dc.date.accessioned | 2017-06-30T11:31:18Z | |
dc.date.available | 2017-06-30T11:31:18Z | |
dc.date.issued | 2017-06-01 | |
dc.description.abstract | A more efficient and effective approach for detecting animal species in digital images is required. Every winter, the Climate-ecological Observatory for Arctic Tundra (COAT) project deploys several dozen camera traps in eastern Finnmark, Norway. These cameras capture large volumes of images that are used to study and document the impact of climate change on animal populations. Currently, the images are examined and annotated manually by ecologists, hired technicians, or crowdsourced teams of volunteers. This process is expensive, time-consuming, and error-prone, acting as a bottleneck that hinders development in the COAT project.
This thesis describes and implements a unified detection system that can automatically localize and identify animal species in digital images from camera traps in the Arctic tundra. The system unifies three state-of-the-art object detection methods based on deep Convolutional Neural Networks (CNNs): Faster Region-based CNN, Single Shot MultiBox Detector, and You Only Look Once v2. With each object detection method, the system can train CNN models, evaluate their detection accuracy, and subsequently use them to detect objects in images.
Using data provided by COAT, we create an object detection dataset of 8000 images containing over 12000 animals of nine different species. We evaluate the performance of the system experimentally by comparing the detection accuracy and computational complexity of each object detection method. Through iterative experimentation, we derive and apply several training methods to improve animal detection in camera trap images, including custom anchor boxes, image preprocessing, and Online Hard Example Mining.
Results show that we can automatically detect animals in the Arctic tundra with 94.1% accuracy at 21 frames per second, exceeding the performance of related work. Moreover, we show that the training methods are effective, improving animal detection accuracy by 6.8%. | en_US |
dc.identifier.uri | https://hdl.handle.net/10037/11218 | |
dc.language.iso | eng | en_US |
dc.publisher | UiT Norges arktiske universitet | en_US |
dc.publisher | UiT The Arctic University of Norway | en_US |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2017 The Author(s) | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-sa/3.0 | en_US |
dc.rights | Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0) | en_US |
dc.subject.courseID | INF-3981 | |
dc.subject | VDP::Technology: 500::Information and communication technology: 550::Computer technology: 551 | en_US |
dc.subject | VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550::Datateknologi: 551 | en_US |
dc.title | Unified detection system for automatic, real-time, accurate animal detection in camera trap images from the Arctic tundra | en_US |
dc.type | Master thesis | en_US |
dc.type | Mastergradsoppgave | en_US |