{"id":595,"date":"2019-02-23T11:00:01","date_gmt":"2019-02-23T16:00:01","guid":{"rendered":"http:\/\/my.dev.vanderbilt.edu\/aivaslab\/?page_id=595"},"modified":"2021-06-16T14:41:03","modified_gmt":"2021-06-16T19:41:03","slug":"datasets","status":"publish","type":"page","link":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/datasets\/","title":{"rendered":"Datasets"},"content":{"rendered":"<hr \/>\n<p><strong><a href=\"https:\/\/aivaslab.github.io\/toybox\/\">Toybox<\/a><\/strong><\/p>\n<p>Toybox is designed to enable an improved understanding of small sample learning and hand-object-vision interactions.  The dataset contains video clips of structured, handheld transformations of 360 individual objects from 12 different categories (cups, mugs, spoons, balls, cars, trucks, airplanes, helicopters, horses, cats, ducks, and giraffes)&#8212;with over 2 million images in total.<\/p>\n<div style=\"width: 584px;\" class=\"wp-video\"><!--[if lt IE 9]><script>document.createElement('video');<\/script><![endif]-->\n<video class=\"wp-video-shortcode\" id=\"video-595-1\" width=\"584\" height=\"329\" preload=\"metadata\" controls=\"controls\"><source type=\"video\/mp4\" src=\"https:\/\/aivaslab.github.io\/toybox\/output_final.mp4?_=1\" \/><a href=\"https:\/\/aivaslab.github.io\/toybox\/output_final.mp4\">https:\/\/aivaslab.github.io\/toybox\/output_final.mp4<\/a><\/video><\/div>\n<\/p>\n<p><em>Selected publications<\/em><\/p>\n<ul>\n<li>Wang, X., Ma, T., Ainooson, J., Cha, S., Wang, X., Molla, A., Kunda, M. (2018). The Toybox dataset of egocentric visual object transformations.  <a href=\"https:\/\/arxiv.org\/abs\/1806.06034\">https:\/\/arxiv.org\/abs\/1806.06034<\/a><\/li>\n<li>Wang, X., Eliott, F., Ainooson, J., Palmer, J., and Kunda, M. (2017). An object is worth six thousand pictures: The egocentric, manual, multi-image (EMMI) dataset. 
In International Conference on Computer Vision Workshop on Egocentric Perception, Interaction, and Computing (EPIC@ICCV), Venice, Italy.<\/li>\n<\/ul>\n<p>A sampling of some of the Toybox objects:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2019\/02\/toybox_organized.png\" alt=\"toybox_organized\" width=\"1662\" height=\"945\" class=\"alignnone size-full wp-image-617\" srcset=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2019\/02\/toybox_organized.png 1662w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2019\/02\/toybox_organized-300x171.png 300w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2019\/02\/toybox_organized-768x437.png 768w, https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2019\/02\/toybox_organized-650x370.png 650w\" sizes=\"auto, (max-width: 1662px) 100vw, 1662px\" \/><\/p>\n<hr \/>\n","protected":false},"excerpt":{"rendered":"<p>Toybox Toybox is designed to enable an improved understanding of small-sample learning and hand-object-vision interactions. 
The dataset contains video clips of structured, handheld transformations of 360 individual objects from 12 different categories (cups, mugs, spoons, balls, cars, trucks, airplanes, &hellip; <a href=\"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/datasets\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":5139,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"sidebar-page.php","meta":{"footnotes":""},"class_list":["post-595","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/pages\/595","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/users\/5139"}],"replies":[{"embeddable":true,"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/comments?post=595"}],"version-history":[{"count":14,"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/pages\/595\/revisions"}],"predecessor-version":[{"id":684,"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/pages\/595\/revisions\/684"}],"wp:attachment":[{"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/media?parent=595"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}