TY - CPAPER
T1 - Comparison of Visual Datasets for Machine Learning
AU - Gauen, Kent
AU - Dailey, Ryan
AU - Laiman, John
AU - Zi, Yuxiang
AU - Asokan, Nirmal
AU - Lu, Yung-Hsiang
AU - Thiruvathukal, George K.
AU - Shyu, Mei-Ling
AU - Chen, Shu-Ching
N1 - Kent Gauen, Ryan Dailey, John Laiman, Yuxiang Zi, Nirmal Asokan, Yung-Hsiang Lu, George K. Thiruvathukal, Mei-Ling Shyu, and Shu-Ching Chen, Comparison of Visual Datasets for Machine Learning, Proceedings of the IEEE International Conference on Information Reuse and Integration (IRI), 2017.
PY - 2017/8/4
Y1 - 2017/8/4
AB - One of the greatest technological improvements in recent years is the rapid progress in using machine learning to process visual data. Among the factors that contribute to this development, labeled datasets play a crucial role. Several datasets are widely reused for investigating and analyzing different machine learning solutions. Many systems, such as autonomous vehicles, rely on machine learning components for recognizing objects. This paper compares different visual datasets and frameworks for machine learning. The comparison is both qualitative and quantitative and investigates object detection labels with respect to size, location, and contextual information. This paper also presents a new approach to creating datasets from real-time, geo-tagged visual data, greatly improving the contextual information of the data. The data can be automatically labeled by cross-referencing information from other sources (such as weather).
UR - https://ecommons.luc.edu/cs_facpubs/148
M3 - Conference Paper
JO - Computer Science: Faculty Publications and Other Works
JF - Computer Science: Faculty Publications and Other Works
ER -