Fast Human Classification of 3D Object Benchmarks

Abstract
Although a significant number of benchmark data sets for 3D object retrieval systems have been proposed over the last decade, their value depends on a robust classification of their content being available. Ideally, researchers would want hundreds of people to have classified thousands of parts, with the results recorded in a manner that explicitly shows how the similarity assessments vary with the precision used to make the judgement. This paper reports a study that investigated the proposition that Internet crowdsourcing could be used to provide benchmark classifications of 3D shapes quickly and cheaply. The collective judgements of the anonymous workers produce a classification of surprisingly fine granularity and precision. The paper reports the results of validating crowdsourced judgements of 3D similarity against Purdue's Engineering Shape Benchmark (ESB) and concludes with an estimate of the overall costs associated with large-scale classification tasks involving many tens of thousands of models.
@inproceedings{10.2312:3DOR/3DOR10/055-062,
  booktitle = {Eurographics Workshop on 3D Object Retrieval},
  editor    = {Mohamed Daoudi and Tobias Schreck},
  title     = {{Fast Human Classification of 3D Object Benchmarks}},
  author    = {Jagadeesan, A. P. and Wenzel, J. and Corney, Jonathan R. and Yan, X. and Sherlock, A. and Torres-Sanchez, C. and Regli, William},
  year      = {2010},
  publisher = {The Eurographics Association},
  ISSN      = {1997-0471},
  ISBN      = {978-3-905674-22-4},
  DOI       = {10.2312/3DOR/3DOR10/055-062}
}