• Consensus Clustering Using kNN Mode Seeking 

      Myhre, Jonas Nordhaug; Mikalsen, Karl Øyvind; Løkse, Sigurd; Jenssen, Robert (Chapter, 2015-06-09)
      In this paper we present a novel clustering approach which combines two modern strategies: consensus clustering and two-stage clustering, as represented by the mean shift spectral clustering algorithm. We introduce the recent kNN mode seeking algorithm into the consensus clustering framework, and the information-theoretic kNN Cauchy-Schwarz divergence as the foundation for spectral clustering. In ...
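      The mode-seeking step referenced above can be sketched on its own, without the consensus framework: each point follows a pointer to the highest-density point among its k nearest neighbours until a mode (a self-pointing point) is reached. A minimal NumPy sketch, where the function name and the inverse k-th-neighbour-distance density estimate are illustrative choices, not the authors' exact formulation:

```python
import numpy as np

def knn_mode_seeking(X, k):
    """Cluster by following pointers to the highest-density k-nearest
    neighbour of each point until a mode (fixed point) is reached."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nn = np.argsort(D, axis=1)[:, :k]                     # k nearest, incl. self
    density = 1.0 / (D[np.arange(n), nn[:, -1]] + 1e-12)  # inverse k-th NN distance
    ptr = nn[np.arange(n), np.argmax(density[nn], axis=1)]
    for _ in range(n):                                    # pointer doubling to modes
        nxt = ptr[ptr]
        if np.array_equal(nxt, ptr):
            break
        ptr = nxt
    return np.unique(ptr, return_inverse=True)[1]         # mode index -> cluster label

# two well-separated blobs; points in different blobs should never share a mode
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels = knn_mode_seeking(X, k=8)
```

Because each pointer moves to a point of equal or higher density, every chain terminates at a local density maximum, so no explicit iteration count needs tuning.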
    • Deep divergence-based approach to clustering 

      Kampffmeyer, Michael C.; Løkse, Sigurd; Bianchi, Filippo Maria; Livi, Lorenzo; Salberg, Arnt Børre; Jenssen, Robert (Journal article; Peer reviewed, 2019-02-08)
      A promising direction in deep learning research consists in learning representations and simultaneously discovering cluster structure in unlabeled data by optimizing a discriminative loss function. As opposed to supervised deep learning, this line of research is in its infancy, and how to design and optimize suitable loss functions to train deep neural networks for clustering is still an open question. ...
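      As an illustration of the divergence-based ingredient in this line of work, the Cauchy-Schwarz divergence between two discrete distributions can be computed as below. This is a simplified, hypothetical form for intuition only; the paper's actual loss operates on soft cluster assignments produced by a deep network.

```python
import numpy as np

def cs_divergence(p, q, eps=1e-12):
    """Cauchy-Schwarz divergence between two discrete distributions:
    zero when p and q are proportional, large when supports barely overlap."""
    cos = np.dot(p, q) / (np.linalg.norm(p) * np.linalg.norm(q) + eps)
    return -np.log(cos + eps)

same = cs_divergence(np.array([0.5, 0.5]), np.array([0.5, 0.5]))   # ~0
apart = cs_divergence(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # large
```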
    • The deep kernelized autoencoder 

      Kampffmeyer, Michael C.; Løkse, Sigurd; Bianchi, Filippo Maria; Jenssen, Robert; Livi, Lorenzo (Journal article; Peer reviewed, 2018-07-18)
      Autoencoders learn data representations (codes) in such a way that the input is reproduced at the output of the network. However, it is not always clear what kind of properties of the input data need to be captured by the codes. Kernel machines have experienced great success by operating via inner-products in a theoretically well-defined reproducing kernel Hilbert space, hence capturing topological ...
    • Deep kernelized autoencoders 

      Kampffmeyer, Michael C.; Løkse, Sigurd; Bianchi, Filippo Maria; Jenssen, Robert; Livi, Lorenzo (Peer reviewed; Book chapter, 2017-05-19)
      In this paper we introduce the deep kernelized autoencoder, a neural network model that allows an explicit approximation of (i) the mapping from an input space to an arbitrary, user-specified kernel space and (ii) the back-projection from such a kernel space to input space. The proposed method is based on traditional autoencoders and is trained through a new unsupervised loss function. ...
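      The unsupervised objective described above combines reconstruction with alignment to a target kernel. A rough sketch of such a composite loss (the names `dkae_loss` and `lam` are mine, and the real model minimizes this by training an encoder/decoder, not by evaluating it once):

```python
import numpy as np

def rbf_kernel(X, sigma=1.0):
    """Gaussian (RBF) kernel matrix used as the alignment target."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-D2 / (2 * sigma ** 2))

def dkae_loss(X, X_rec, C, K, lam=0.5):
    """Weighted sum of reconstruction error and alignment between
    code inner products C C^T and the target kernel matrix K."""
    rec = np.mean((X - X_rec) ** 2)
    align = np.mean((C @ C.T - K) ** 2)
    return (1 - lam) * rec + lam * align

# sanity check: codes whose inner products reproduce K, plus perfect
# reconstruction, drive the loss to (numerically) zero
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 3))
K = rbf_kernel(X)
C = np.linalg.cholesky(K + 1e-9 * np.eye(8))  # C C^T ~= K
loss_perfect = dkae_loss(X, X, C, K)
loss_random = dkae_loss(X, X, rng.normal(size=(8, 8)), K)
```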
    • Ranking Using Transition Probabilities Learned from Multi-Attribute Data 

      Løkse, Sigurd; Jenssen, Robert (Journal article; Peer reviewed, 2018-09-13)
      In this paper, as a novel approach, we learn Markov chain transition probabilities for ranking multi-attribute data from the inherent structures in the data itself. The procedure is inspired by consensus clustering and exploits a suitable form of the PageRank algorithm. This is very much in the spirit of the original PageRank, which utilizes the hyperlink structure of the web to learn such probabilities. ...
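      Once transition probabilities are available, ranking reduces to computing the stationary distribution of a damped Markov chain. A minimal power-iteration sketch, assuming a given row-stochastic matrix `P`; learning `P` from multi-attribute data, which is the paper's contribution, is omitted here:

```python
import numpy as np

def pagerank(P, d=0.85, tol=1e-12):
    """Stationary distribution of a damped Markov chain with
    row-stochastic transition matrix P, via power iteration."""
    n = P.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(10_000):
        r_new = (1 - d) / n + d * (r @ P)
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r_new

# toy chain: items 1 and 2 transition entirely to item 0,
# so item 0 should receive the highest score
P = np.array([[0.0, 0.5, 0.5],
              [1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
scores = pagerank(P)
```

The damping term `(1 - d) / n` guarantees a unique stationary distribution even when the chain is not irreducible.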
    • Spectral clustering using PCKID – A probabilistic cluster kernel for incomplete data 

      Løkse, Sigurd; Bianchi, Filippo Maria; Salberg, Arnt-Børre; Jenssen, Robert (Journal article; Manuscript; Peer reviewed; Preprint, 2017-05-19)
      In this paper, we propose PCKID, a novel, robust kernel function for spectral clustering, specifically designed to handle incomplete data. By combining posterior distributions of Gaussian Mixture Models for incomplete data on different scales, we are able to learn a kernel for incomplete data that does not depend on any critical hyperparameters, unlike the commonly used RBF kernel. To evaluate ...
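      The kernel-combination step described above can be sketched by averaging inner products of posterior (responsibility) matrices from several fitted models. Fitting the GMMs to incomplete data is omitted, and the toy posteriors below are invented for illustration:

```python
import numpy as np

def posterior_kernel(posteriors):
    """Combine GMM posterior (responsibility) matrices from several
    models/scales into one kernel: the average of R_i R_i^T."""
    return sum(R @ R.T for R in posteriors) / len(posteriors)

# toy posteriors for 3 samples under two models with 2 components each:
# samples 0 and 1 share a component, sample 2 belongs to the other
R1 = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
R2 = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9]])
K = posterior_kernel([R1, R2])
```

Samples assigned to the same components across models end up with large kernel values, while samples in different components stay nearly orthogonal.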
    • Training Echo State Networks with Regularization Through Dimensionality Reduction 

      Løkse, Sigurd; Bianchi, Filippo Maria; Jenssen, Robert (Journal article; Peer reviewed, 2017)
      In this paper, we introduce a new framework to train a class of recurrent neural networks, called Echo State Networks, to predict real-valued time series and to provide a visualization of the modeled system dynamics. The method consists of projecting the output of the internal layer of the network onto a lower-dimensional space before training the output layer to learn the target task. Notably, we ...
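      The recipe above (collect reservoir states, project them onto a low-dimensional subspace, then train the readout) can be sketched in NumPy as follows; the hyperparameters and the PCA-via-SVD projection are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def esn_reduced_readout(u, y, n_res=100, n_dim=10, rho=0.9, reg=1e-6, seed=0):
    """Echo State Network sketch: drive a random reservoir with input u,
    project the collected states onto their top n_dim principal
    components, then fit a ridge-regression readout in that space."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-1, 1, n_res)
    W = rng.normal(size=(n_res, n_res))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # fix spectral radius
    x = np.zeros(n_res)
    states = []
    for ut in u:                                      # collect reservoir states
        x = np.tanh(W @ x + W_in * ut)
        states.append(x)
    S = np.array(states) - np.mean(states, axis=0)
    _, _, Vt = np.linalg.svd(S, full_matrices=False)
    Z = S @ Vt[:n_dim].T                              # PCA-reduced states
    W_out = np.linalg.solve(Z.T @ Z + reg * np.eye(n_dim), Z.T @ y)
    return Z @ W_out                                  # readout predictions

# toy task: predict a phase-shifted sine from the original sine
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t)
y = np.sin(t + 0.5)
pred = esn_reduced_readout(u, y)
```

Solving the ridge system in the reduced space acts as the regularizer: the readout sees only the dominant directions of the reservoir dynamics, and those projections can also be plotted directly to visualize the modeled system.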