dc.contributor.advisor | Ha, Hoai Phuong | |
dc.contributor.author | Haugen, Eirik | |
dc.date.accessioned | 2021-08-31T08:46:54Z | |
dc.date.available | 2021-08-31T08:46:54Z | |
dc.date.issued | 2021-05-15 | |
dc.description.abstract | This thesis discusses the application of optimizations to machine learning algorithms. In particular, we look at implementing these algorithms on specialized hardware, i.e. a Graphcore Intelligence Processing Unit (IPU), while also applying software optimizations that have been shown to improve the performance of traditional workloads on general-purpose CPUs. We discuss the feasibility of using these techniques when performing Matrix Factorization using Stochastic Gradient Descent (SGD) on an IPU. We implement a program that does this, and show the results of changing different parameters during the running of SGD. We demonstrate that while machine learning is inherently approximate, this does not mean that all approximate-computation techniques are applicable; indeed, some of these techniques require a measurable level of approximation, which presupposes the existence of a correct answer, i.e. that the algorithm being approximated is not itself approximate from the start. We also show that other techniques can be applied to reduce the time it takes for SGD to converge. | en_US |
dc.identifier.uri | https://hdl.handle.net/10037/22313 | |
dc.language.iso | eng | en_US |
dc.publisher | UiT Norges arktiske universitet | en_US |
dc.publisher | UiT The Arctic University of Norway | en_US |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2021 The Author(s) | |
dc.rights.uri | https://creativecommons.org/licenses/by-nc-sa/4.0 | en_US |
dc.rights | Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) | en_US |
dc.subject.courseID | INF-3990 | |
dc.subject | VDP::Technology: 500::Information and communication technology: 550::Computer technology: 551 | en_US |
dc.subject | VDP::Teknologi: 500::Informasjons- og kommunikasjonsteknologi: 550::Datateknologi: 551 | en_US |
dc.title | Investigating the effects of dynamic approximation methods on machine learning (ML) algorithms running on ML-specialized platforms | en_US |
dc.type | Master thesis | en_US |
dc.type | Mastergradsoppgave | en_US |