Show simple item record

dc.contributor.author: Xiong, Jinhui (en_US)
dc.contributor.author: Richtarik, Peter (en_US)
dc.contributor.author: Heidrich, Wolfgang (en_US)
dc.contributor.editor: Schulz, Hans-Jörg and Teschner, Matthias and Wimmer, Michael (en_US)
dc.description.abstract: State-of-the-art methods for Convolutional Sparse Coding usually employ Fourier-domain solvers in order to speed up the convolution operators. However, this approach is not without shortcomings. For example, Fourier-domain representations implicitly assume circular boundary conditions and make it hard to fully exploit the sparsity of the problem as well as the small spatial support of the filters. In this work, we propose a novel stochastic spatial-domain solver, in which a randomized subsampling strategy is introduced while learning the sparse codes. We then combine the proposed strategy with online learning, scaling the CSC model up to very large sample sizes. In both cases, we show experimentally that the proposed subsampling strategy, with a reasonable choice of subsampling rate, outperforms the state-of-the-art frequency-domain solvers in terms of execution time without losing learning quality. Finally, we evaluate the effectiveness of the over-complete dictionary learned from large-scale datasets, demonstrating an improved sparse representation of natural images owing to the richer set of learned image features. (en_US)
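The core idea in the abstract, evaluating the data-fidelity term only on a random subset of samples during sparse-code updates, can be illustrated with a toy sketch. The code below is a minimal 1-D ISTA-style solver with a random sample mask; it is an illustrative stand-in, not the authors' actual algorithm, and the function names, step size, and masking scheme are assumptions for the example.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrink toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_csc_codes(x, filters, lam=0.1, step=0.05, rate=0.5,
                         iters=200, seed=0):
    """Sparse-code a 1-D signal x against fixed filters via ISTA.

    At each iteration the residual is masked so that only a random
    fraction `rate` of signal samples contributes to the gradient --
    a toy analogue of the subsampling strategy from the abstract.
    """
    rng = np.random.default_rng(seed)
    K, n = len(filters), len(x)
    z = np.zeros((K, n))  # one code map per filter
    for _ in range(iters):
        # Reconstruction: sum of per-filter convolutions.
        recon = sum(np.convolve(z[k], filters[k], mode="same")
                    for k in range(K))
        residual = recon - x
        # Randomly subsample the residual; divide by rate to keep
        # the masked gradient unbiased in expectation.
        mask = rng.random(n) < rate
        residual = residual * mask / rate
        for k in range(K):
            # Adjoint of 'same' convolution is 'same' correlation.
            grad = np.correlate(residual, filters[k], mode="same")
            z[k] = soft_threshold(z[k] - step * grad, step * lam)
    return z
```

With `rate=1.0` the mask is always on and the update reduces to plain ISTA; lowering `rate` trades per-iteration cost against gradient noise, which is the trade-off the abstract's "reasonable choice of subsampling rate" refers to.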
dc.publisher: The Eurographics Association (en_US)
dc.subject: Computing methodologies
dc.subject: Image representations
dc.subject: Theory of computation
dc.subject: Online learning algorithms
dc.title: Stochastic Convolutional Sparse Coding (en_US)
dc.description.seriesinformation: Vision, Modeling and Visualization
dc.description.sectionheaders: Machine Learning in Vision and Analysis

This item appears in the following Collection(s)

  • VMV19
    ISBN 978-3-03868-098-7
