Investigating the effects of dynamic approximation methods on machine learning (ML) algorithms running on ML-specialized platforms
Permanent link
https://hdl.handle.net/10037/22313

Date
2021-05-15

Type
Master thesis (Mastergradsoppgave)

Author
Haugen, Eirik

Abstract
This thesis discusses the application of optimizations to machine learning algorithms. In particular, we look at implementing these algorithms on specialized hardware, i.e. a Graphcore Intelligence Processing Unit (IPU), while also applying software optimizations that have been shown to improve the performance of traditional workloads on general-purpose CPUs. We discuss the feasibility of using these techniques when performing Matrix Factorization with Stochastic Gradient Descent (SGD) on an IPU. We implement a program that does this, and show the results of varying different parameters while SGD runs. We demonstrate that although machine learning is inherently approximate, this does not mean that every approximate computing technique is applicable: some of these techniques require a measurable level of approximation, which presupposes that a correct answer exists, i.e. that the algorithm being approximated is not itself approximate to begin with. We also show that other techniques can be applied to reduce the time it takes for SGD to converge.
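For readers unfamiliar with the technique named in the abstract, the following is a minimal sketch of matrix factorization trained with SGD, written in plain NumPy rather than for an IPU; the function name, hyperparameters, and data layout are illustrative assumptions, not taken from the thesis itself.

import numpy as np

def mf_sgd(ratings, n_users, n_items, k=16, lr=0.01, reg=0.05, epochs=20, seed=0):
    """Approximate a sparse rating matrix R as P @ Q.T using plain SGD.

    ratings: iterable of (user_index, item_index, rating) triples.
    Hyperparameters here are illustrative defaults, not the thesis's values.
    """
    rng = np.random.default_rng(seed)
    P = rng.normal(scale=0.1, size=(n_users, k))    # user factor matrix
    Q = rng.normal(scale=0.1, size=(n_items, k))    # item factor matrix
    for _ in range(epochs):
        for u, i, r in ratings:                     # one observed rating at a time
            err = r - P[u] @ Q[i]                   # prediction error for this entry
            pu = P[u].copy()                        # keep old user factors for Q's update
            P[u] += lr * (err * Q[i] - reg * P[u])  # gradient step with L2 regularization
            Q[i] += lr * (err * pu - reg * Q[i])
    return P, Q

# Example: three observed ratings in a 2x2 user-item matrix.
P, Q = mf_sgd([(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0)], n_users=2, n_items=2)
print(P @ Q.T)  # reconstructed rating estimates

Note that each update touches only one row of P and one row of Q, which is what makes parameters (learning rate, regularization, factor count) easy to change while SGD runs, as the abstract describes.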
Publisher
UiT The Arctic University of Norway (UiT Norges arktiske universitet)
Copyright 2021 The Author(s)