Show simple item record

dc.contributor.advisor: Hemmatpour, Masoud
dc.contributor.advisor: Ha, Phuong H.
dc.contributor.author: Onderwater, Jurian Jasper
dc.date.accessioned: 2025-07-01T15:31:10Z
dc.date.available: 2025-07-01T15:31:10Z
dc.date.issued: 2025
dc.description.abstract: Deploying machine learning on resource-constrained devices such as microcontrollers, especially in harsh environments (e.g., the Arctic), presents significant challenges. This thesis addresses these challenges within the framework of TinyMLOps, focusing on enabling live model updates and predicting inference latency on STM microcontrollers. A method for seamless runtime weight updates via direct memory modification was implemented, and an automated workflow using the STM32EdgeAI REST API was developed for benchmarking and code generation. Experiments confirmed successful live updates and achieved high accuracy in predicting latency from network architecture for single-layer networks (R² > 0.98) and for multi-layer networks of up to 3 layers (R² > 0.89). While latency prediction for networks with more than 3 layers requires further refinement, this work makes practical contributions to TinyMLOps, facilitating remote maintenance and laying a foundation for hardware-aware model optimisation on edge devices.
dc.identifier.uri: https://hdl.handle.net/10037/37365
dc.identifier: no.uit:wiseflow:7267694:61780022
dc.language.iso: eng
dc.publisher: UiT The Arctic University of Norway
dc.rights.holder: Copyright 2025 The Author(s)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0
dc.rights: Attribution 4.0 International (CC BY 4.0)
dc.title: Automating the TinyML Pipeline: From Model Compression to Edge Deployment
dc.type: Master thesis


File(s) in this item


This item appears in the following collection(s)


Attribution 4.0 International (CC BY 4.0)
Except where otherwise noted, this item's license is described as Attribution 4.0 International (CC BY 4.0)