Show simple item record

dc.contributor.advisor: Jenssen, Robert
dc.contributor.advisoremail: robert.jenssen@phys.uit.no
dc.contributor.author: Kvisle Storås, Ola
dc.date.accessioned: 2009-02-16T12:54:46Z
dc.date.available: 2009-02-16T12:54:46Z
dc.date.issued: 2007-12-17
dc.description.abstract: This thesis is a study of pattern classification based on information theoretic criteria. Information theoretic criteria are important measures based on entropy and divergence between data distributions. First, the basic concepts of pattern classification, with the well-known Bayes classification rule as a starting point, are discussed. We discuss how the Parzen window estimator may be used to find good density estimates. The Parzen window density estimator can be used to estimate cost functions based on information theoretic criteria. Furthermore, we explain a model of an information theoretic learning machine. With cost functions based on information theoretic criteria, we argue that a learning machine potentially captures much more information about a data set than the traditional mean squared error (MSE) cost function. We find that there is a geometric link between information theoretic cost functions estimated using Parzen windowing and mean vectors in a Mercer kernel feature space. This link is used to propose and implement different classifiers based on the integrated squared error (ISE) divergence measure, operating implicitly in a Mercer kernel feature space. We also apply spectral methods to implement the same ISE classifiers working in approximations of Mercer kernel feature spaces. We investigate the performance of the classifiers when we weight each data point with the inverse of the probability density function at that point. We find that the ISE classifiers working implicitly in the Mercer kernel feature space perform similarly to a Parzen window based Bayes classifier. Using a weighted inner-product definition gives slightly better results for some data sets, while for other data sets the classification rates are slightly worse.
When comparing the results of the implicit ISE classifier using unweighted data points with those of the Parzen window Bayes classifier, some of the results indicate that the ISE classifier favors the classes with the highest entropy.
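The Parzen window density estimate central to the abstract can be sketched as follows. This is a minimal 1-D illustration assuming a Gaussian kernel and a fixed bandwidth `sigma`; the kernel choice and bandwidth-selection scheme actually used in the thesis are not given in this record.

```python
import numpy as np

def parzen_window_estimate(x, samples, sigma=1.0):
    """Parzen window (kernel) density estimate at the points x,
    averaging a Gaussian kernel of width sigma over the samples.
    The Gaussian kernel and fixed sigma are illustrative assumptions."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    # Pairwise differences between evaluation points and samples.
    diffs = x[:, None] - samples[None, :]
    # Gaussian kernel response for every (point, sample) pair.
    kernels = np.exp(-diffs**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    # The density estimate is the mean kernel response over the samples.
    return kernels.mean(axis=1)

# Toy example: samples drawn from a standard normal distribution,
# so the estimated density should be larger near 0 than far out at 3.
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=500)
density = parzen_window_estimate([0.0, 3.0], samples, sigma=0.5)
```

Estimates of this form are what make the information theoretic cost functions in the thesis computable directly from data, since plugging the Parzen estimate into entropy- or divergence-based criteria yields sums of pairwise kernel evaluations.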
dc.format.extent: 1092544 bytes
dc.format.extent: 2070 bytes
dc.format.mimetype: application/pdf
dc.format.mimetype: text/plain
dc.identifier.uri: https://hdl.handle.net/10037/1773
dc.identifier.urn: URN:NBN:no-uit_munin_1538
dc.language.iso: eng
dc.publisher: Universitetet i Tromsø
dc.publisher: University of Tromsø
dc.rights.accessRights: openAccess
dc.rights.holder: Copyright 2007 The Author(s)
dc.subject.courseID: FYS-3921
dc.subject: VDP::Mathematics and natural science: 400::Information and communication science: 420::Simulation, visualization, signal processing, image processing: 429
dc.subject: Information theoretic learning
dc.subject: Pattern classification
dc.title: Information theoretic learning for pattern classification
dc.type: Master thesis
dc.type: Mastergradsoppgave


Associated file(s)


This item appears in the following collection(s)
