Comparison of Visual Datasets for Machine Learning
IEEE Conference on Information Reuse and Integration 2017
  • Kent Gauen, Purdue University
  • Ryan Dailey, Purdue University
  • John Laiman, Purdue University
  • Yuxiang Zi, Purdue University
  • Nirmal Asokan, Purdue University
  • Yung-Hsiang Lu, Purdue University
  • George K. Thiruvathukal, Loyola University Chicago
  • Mei-Ling Shyu, University of Miami
  • Shu-Ching Chen, Florida International University
Document Type
Conference Proceeding
Publication Date
8-4-2017
Abstract
One of the most significant technological advances in recent years is the rapid progress of machine learning for processing visual data. Among all factors contributing to this development, labeled datasets play a crucial role. Several datasets are widely reused for investigating and comparing machine learning solutions. Many systems, such as autonomous vehicles, rely on machine learning components for recognizing objects. This paper compares different visual datasets and frameworks for machine learning. The comparison is both qualitative and quantitative; it investigates object detection labels with respect to size, location, and contextual information. This paper also presents a new approach to creating datasets from real-time, geo-tagged visual data, greatly improving the contextual information of the data. The data can be labeled automatically by cross-referencing information from other sources (such as weather).
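As a minimal sketch of the cross-referencing idea mentioned above: a geo-tagged image's coordinates and timestamp can be matched against an external weather record to attach a contextual label automatically. Everything here is hypothetical illustration (the function names, the stub weather table, and the record format are assumptions, not the paper's implementation); a real system would query a live weather service instead.

```python
from datetime import datetime

def lookup_weather(lat, lon, when):
    """Stub weather source keyed by rounded coordinates (assumption);
    a real pipeline would query a weather API for (lat, lon, time)."""
    records = {
        (40.4, -86.9): {"2017-08-04": "sunny"},  # hypothetical record
        (25.8, -80.2): {"2017-08-04": "rain"},   # hypothetical record
    }
    key = (round(lat, 1), round(lon, 1))
    return records.get(key, {}).get(when.strftime("%Y-%m-%d"), "unknown")

def auto_label(image_meta):
    """Attach a weather label to a geo-tagged image record by
    cross-referencing its location and timestamp."""
    when = datetime.fromisoformat(image_meta["timestamp"])
    label = lookup_weather(image_meta["lat"], image_meta["lon"], when)
    return {**image_meta, "weather": label}

sample = {"lat": 40.42, "lon": -86.91, "timestamp": "2017-08-04T14:00:00"}
labeled = auto_label(sample)
print(labeled["weather"])  # "sunny" for this stub record
```

The point of the sketch is only that the label requires no human annotation: it is derived entirely from metadata already attached to the image plus an independent data source.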
Creative Commons License
Creative Commons Attribution-Noncommercial-No Derivative Works 3.0
Citation Information
Kent Gauen, Ryan Dailey, John Laiman, Yuxiang Zi, Nirmal Asokan, Yung-Hsiang Lu, George K. Thiruvathukal, Mei-Ling Shyu, and Shu-Ching Chen, Comparison of Visual Datasets for Machine Learning, Proceedings of IEEE Conference on Information Reuse and Integration 2017.