Show simple item record

dc.contributor.advisor: Bjørndalen, John Markus
dc.contributor.advisor: Anshus, Otto
dc.contributor.advisor: Horsch, Alexander
dc.contributor.author: Thom, Håvard
dc.date.accessioned: 2017-06-30T11:31:18Z
dc.date.available: 2017-06-30T11:31:18Z
dc.date.issued: 2017-06-01
dc.description.abstract: A more efficient and effective approach for detecting animal species in digital images is required. Every winter, the Climate-ecological Observatory for Arctic Tundra (COAT) project deploys several dozen camera traps in eastern Finnmark, Norway. These cameras capture large volumes of images that are used to study and document the impact of climate change on animal populations. Currently, the images are examined and annotated manually by ecologists, hired technicians, or crowdsourced teams of volunteers. This process is expensive, time-consuming, and error-prone, acting as a bottleneck that hinders development in the COAT project. This thesis describes and implements a unified detection system that can automatically localize and identify animal species in digital images from camera traps in the Arctic tundra. The system unifies three state-of-the-art object detection methods based on deep Convolutional Neural Networks (CNNs): Faster Region-based CNN, Single Shot MultiBox Detector, and You Only Look Once v2. With each object detection method, the system can train CNN models, evaluate their detection accuracy, and subsequently use them to detect objects in images. Using data provided by COAT, we create an object detection dataset of 8000 images containing over 12000 animals of nine different species. We evaluate the performance of the system experimentally by comparing the detection accuracy and computational complexity of each object detection method. By experimenting in an iterative fashion, we derive and apply several training methods to improve animal detection in camera trap images. These training methods include custom anchor boxes, image preprocessing, and Online Hard Example Mining. Results show that we can automatically detect animals in the Arctic tundra with 94.1% accuracy at 21 frames per second, exceeding the performance of related work. Moreover, we show that the training methods are successful, improving animal detection accuracy by 6.8%.
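The custom anchor boxes mentioned in the abstract are, in YOLOv2's original formulation, derived by k-means clustering of ground-truth bounding-box shapes using 1 − IoU as the distance metric, so that cluster centroids match common object shapes in the dataset. The thesis itself does not reproduce code here; the sketch below illustrates the general technique, with all function and variable names being illustrative rather than taken from the thesis:

```python
def iou_wh(a, b):
    """IoU of two boxes given only (width, height), assuming shared centers."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    """Cluster (w, h) pairs using 1 - IoU as the distance (YOLOv2-style).

    boxes: list of (width, height) tuples from ground-truth annotations.
    Returns k anchor shapes as (width, height) tuples, sorted by width.
    """
    # Deterministic init: pick k boxes spread evenly through the list.
    centroids = [boxes[i * len(boxes) // k] for i in range(k)]
    for _ in range(iters):
        # Assign each box to the centroid with the highest IoU.
        clusters = [[] for _ in range(k)]
        for b in boxes:
            best = max(range(k), key=lambda j: iou_wh(b, centroids[j]))
            clusters[best].append(b)
        # Move each centroid to the mean shape of its cluster.
        new = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new == centroids:  # converged
            break
        centroids = new
    return sorted(centroids)
```

For example, clustering box shapes drawn from two distinct size groups (around 10×20 and 50×30 pixels) with k=2 recovers one anchor near each group, which the detector can then use as prior box shapes.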
dc.identifier.uri: https://hdl.handle.net/10037/11218
dc.language.iso: eng
dc.publisher: UiT Norges arktiske universitet
dc.publisher: UiT The Arctic University of Norway
dc.rights.accessRights: openAccess
dc.rights.holder: Copyright 2017 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by-nc-sa/3.0
dc.rights: Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0)
dc.subject.courseID: INF-3981
dc.subject: VDP::Technology: 500::Information and communication technology: 550::Computer technology: 551
dc.subject: VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550::Datateknologi: 551
dc.title: Unified detection system for automatic, real-time, accurate animal detection in camera trap images from the Arctic tundra
dc.type: Master thesis
dc.type: Mastergradsoppgave


