Anisotropic Superpixel Generation Based on Mahalanobis Distance

dc.contributor.author: Cai, Yiqi
dc.contributor.author: Guo, Xiaohu
dc.contributor.editor: Eitan Grinspun, Bernd Bickel, and Yoshinori Dobashi
dc.date.accessioned: 2016-10-11T05:19:48Z
dc.date.available: 2016-10-11T05:19:48Z
dc.date.issued: 2016
dc.description.abstract: Superpixels have been widely used as a preprocessing step in various computer vision tasks. Spatial compactness and color homogeneity are the two key factors determining the quality of the superpixel representation. In this paper, these two objectives are considered separately and anisotropic superpixels are generated to better adapt to local image content. We develop a unimodular Gaussian generative model to guide the color homogeneity within a superpixel by learning local pixel color variations. It turns out that maximizing the log-likelihood of our generative model is equivalent to solving a Centroidal Voronoi Tessellation (CVT) problem. Moreover, we provide a theoretical guarantee that the CVT result is invariant to affine illumination changes, which makes our anisotropic superpixel generation algorithm well suited for image/video analysis in varying illumination environments. The effectiveness of our method in image/video superpixel generation is demonstrated through comparison with other state-of-the-art methods.
dc.description.number: 7
dc.description.sectionheaders: Image Processing
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 35
dc.identifier.doi: 10.1111/cgf.13017
dc.identifier.issn: 1467-8659
dc.identifier.pages: 199-207
dc.identifier.uri: https://doi.org/10.1111/cgf.13017
dc.identifier.uri: https://diglib.eg.org:443/handle/10.1111/cgf13017
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.title: Anisotropic Superpixel Generation Based on Mahalanobis Distance
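
The abstract describes assigning pixels to anisotropic superpixels with a Mahalanobis distance whose per-superpixel covariance is learned from local color variation, with the resulting objective solved as a CVT. The code below is a minimal illustrative sketch of that general idea, not the authors' implementation: the 5-D spatial-plus-color feature space, grid seeding, Lloyd-style iteration, small-cluster handling, and the unit-determinant ("unimodular") covariance normalization are all simplifying assumptions, and the function name anisotropic_superpixels is hypothetical.

    # Illustrative sketch (not the paper's code): Mahalanobis-distance
    # superpixel assignment inside a Lloyd/CVT-style loop.
    import numpy as np

    def anisotropic_superpixels(image, n_segments=100, n_iters=5, eps=1e-6):
        """image: HxWx3 float array in [0, 1]; returns an HxW label map."""
        h, w, _ = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # 5-D features: spatial coordinates plus color channels.
        feats = np.concatenate([ys[..., None], xs[..., None], image], axis=-1)
        feats = feats.reshape(-1, 5).astype(np.float64)

        # Seed cluster centers on a regular grid.
        step = max(1, int(np.sqrt(h * w / n_segments)))
        seed_ys = np.arange(step // 2, h, step)
        seed_xs = np.arange(step // 2, w, step)
        centers = np.array([feats[y * w + x] for y in seed_ys for x in seed_xs])
        k = len(centers)
        # Start from identity metrics, i.e. a plain Euclidean CVT.
        inv_covs = np.tile(np.eye(5), (k, 1, 1))

        for _ in range(n_iters):
            # Assignment step: Mahalanobis distance to every center.
            d = np.empty((k, feats.shape[0]))
            for i in range(k):
                diff = feats - centers[i]
                d[i] = np.einsum('nd,de,ne->n', diff, inv_covs[i], diff)
            labels = np.argmin(d, axis=0)

            # Update step: recompute each center and its covariance,
            # rescaled to unit determinant before inverting.
            for i in range(k):
                members = feats[labels == i]
                if len(members) < 6:
                    continue  # too few pixels for a stable 5x5 covariance
                centers[i] = members.mean(axis=0)
                cov = np.cov(members, rowvar=False) + eps * np.eye(5)
                cov /= np.linalg.det(cov) ** (1.0 / 5.0)
                inv_covs[i] = np.linalg.inv(cov)

        return labels.reshape(h, w)

The unit-determinant rescaling keeps every local metric at the same "volume", so clusters compete through the shape of their distance ellipsoids rather than by inflating them; this is only meant to echo, in a simplified way, the unimodular constraint mentioned in the abstract.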