{"id":2,"date":"2016-06-07T19:33:29","date_gmt":"2016-06-08T00:33:29","guid":{"rendered":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/?page_id=2"},"modified":"2022-03-29T07:13:30","modified_gmt":"2022-03-29T12:13:30","slug":"projects","status":"publish","type":"page","link":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/projects\/","title":{"rendered":"Projects"},"content":{"rendered":"<table style=\"border: none;border-collapse: collapse\" border=\"0\" cellspacing=\"0\" cellpadding=\"0\" width=\"100%\" align=\"left\">\n<col width=\"200\" \/>\n<tr>\n<td style=\"vertical-align:middle;border:0px\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2016\/06\/rpm.png\" alt=\"rpm\" width=\"152\" height=\"152\" class=\"alignright size-full wp-image-238\" \/><\/td>\n<td style=\"vertical-align: middle;border:0px\"><strong>Thinking in Pictures \/ Imagery-Based AI<\/strong><br \/>\nThis project was inspired by the book <a href=\"http:\/\/www.grandin.com\/inc\/visual.thinking.html\">Thinking in Pictures<\/a> by Temple Grandin, a professor of animal science who is also on the autism spectrum and who feels that she is a visual thinker. The goal of this project is to better understand how visual thinkers process information and experience the world around them. This project involves building and studying visual-imagery-based AI systems, and also developing new assessments to measure visual thinking in people.<\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"vertical-align:middle;border:0px\">\n<em>Selected publications<\/em><\/p>\n<ul>\n<li><span style=\"color:coral\">[PNAS]<\/span>  Kunda, M. (2020). AI, visual imagery, and a case study on the challenges posed by human intelligence tests. Proceedings of the National Academy of Sciences, 117 (47), 29390-29397. 
<a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2022\/02\/Kunda-2020-AI-visual-imagery-and-a-case-study-on-the-challenges-posed-by-human-intelligence-tests.pdf\">[pdf]<\/a><\/li>\n<li><span style=\"color:coral\">\u2605 Best Paper Award \u2605<\/span>  Yang, Y., McGreggor, K., and Kunda, M. (2020). Not quite any way you slice it: How different analogical constructions affect Raven&#8217;s Matrices performance.  Eighth Annual Conference on Advances in Cognitive Systems (ACS). <i>Winner of the inaugural ACS Patrick Henry Winston Award for Best Student Paper.<\/i>  <a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2021\/11\/Yang-McGreggor-Kunda-2020-Not-quite-any-way-you-slice-it-How-different-analogical-constructions-affect-Ravens-Matrices-performance.pdf\">[pdf]<\/a><\/li>\n<li><span style=\"color:coral\">[Cortex]<\/span>  Kunda, M. (2018). Visual mental imagery: A view from artificial intelligence.  Cortex, 105, 155-172.  <a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2022\/02\/Kunda-2018-Visual-mental-imagery-A-view-from-artificial-intelligence.pdf\">[pdf]<\/a><\/li>\n<li>Warford, N., and Kunda, M. (2018).  Measuring individual differences in visual and verbal thinking styles.  40th Annual Meeting of the Cognitive Science Society, Madison, WI.  <a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2022\/03\/Warford-Kunda-2018-Measuring-individual-differences-in-visual-and-verbal-thinking-styles.pdf\">[pdf]<\/a><\/li>\n<li><span style=\"color:coral\">[Intelligence]<\/span>  Kunda, M., Souli\u00e8res, I., Rozga, A., &amp; Goel, A. K. (2016). Error patterns on the Raven&#8217;s Standard Progressive Matrices Test. Intelligence, 59, 181-198. 
<a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2022\/02\/Kunda-et-al-2016-Error-patterns-on-the-Ravens-Standard-Progressive-Matrices-test.pdf\">[pdf]<\/a><\/li>\n<li><span style=\"color:coral\">[JADD]<\/span>  Kunda, M., and Goel, A. K. (2011). Thinking in Pictures as a cognitive account of autism. Journal of Autism and Developmental Disorders, 41 (9), pp. 1157-1177. <a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2021\/11\/Kunda-Goel-2011-Thinking-in-Pictures-as-a-cognitive-account-of-autism.pdf\">[pdf]<\/a><\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"vertical-align:middle;border:0px\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2022\/03\/cropped-theatre_and_time_machine_room_with_tom_dialogue_preview-1.jpg\" width=\"152\" height=\"152\" class=\"alignright\" \/><\/td>\n<td style=\"vertical-align: middle;border:0px\"><strong>Film Detective: A Game to Help Kids Learn Social and Theory of Mind Reasoning Skills<\/strong><br \/>\nIn this project, we are working with collaborators in Vanderbilt&#8217;s <a href=\"https:\/\/wp0.vanderbilt.edu\/oele\/\">Open-Ended Learning Environments<\/a> group and at the Vanderbilt Kennedy Center&#8217;s <a href=\"https:\/\/vkc.mc.vanderbilt.edu\/vkc\/triad\/home\">Treatment and Research Institute for Autism Spectrum Disorders (TRIAD)<\/a> to develop new, visually oriented, technology-based approaches for teaching theory of mind and social skills to adolescents on the autism spectrum. 
<a href=\"https:\/\/my.dev.vanderbilt.edu\/filmdetective\/\">See our Film Detective website here.<\/a><\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"vertical-align:middle;border:0px\">\n<em>Selected publications<\/em><\/p>\n<ul>\n<li><span style=\"color:coral\">[JADD]<\/span>  Rashedi, R., Bonnet, K., Schulte, R., Schlundt, D., Swanson, A., Kinsman, A., Bardett, N., Warren, Z., Juarez, P., Biswas, G., &amp; Kunda, M. (2021). Opportunities and challenges in developing technology-based social skills interventions for adolescents with autism spectrum disorder: A qualitative analysis of parent perspectives.  Journal of Autism and Developmental Disorders. <a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2022\/02\/Rashedi-et-al-2021-Opportunities-and-challenges-in-developing-technology%E2%80%90based-social-skills-interventions-for-adolescents-with-Autism-Spectrum-Disorder-A-qualitative-analysis-of-parent-perspectives.pdf\">[pdf]<\/a><\/li>\n<li>Chen Z., Li S., Rashedi R., Zi X., Elrod-Erickson M, Hollis B., Maliakal A., Shen X., Zhao S., &amp; Kunda M. (2020). Creating and characterizing datasets for social visual question answering. IEEE Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL\/EPIROB).  <a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2022\/02\/Chen-et-al-2020-Characterizing-datasets-for-social-visual-question-answering-and-the-new-tinysocial-dataset.pdf\">[pdf]<\/a><\/li>\n<li>Zi, X., Li, S., Rashedi, R., Rushdy, M., Lane, B., Mishra, S., Biswas, G., Swanson, A., Kinsman, A., Bardett, N., Warren, Z., Juarez, P., and Kunda, M. (2020).  Science learning and social reasoning in adolescents on the autism spectrum: An educational technology usability study.  Proceedings of the 42nd Annual Meeting of the Cognitive Science Society. 
<a href=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2022\/02\/Zi-et-al-2020-Adapting-educational-technologies-across-learner-populations-A-usability-study-with-adolescents-on-the-autism-spectrum.pdf\">[pdf]<\/a><\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"vertical-align:middle;border:0px\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2016\/06\/eyetracking.jpg\" alt=\"eyetracking\" width=\"154\" height=\"154\" class=\"alignright size-full wp-image-241\" \/><\/td>\n<td style=\"vertical-align: middle;border:0px\"><strong>Attention and Wearable Cameras<\/strong><br \/>\nVisual attention impacts virtually every aspect of intelligent behavior in humans, from perception and learning to communication and social interaction.  Recent advances in wearable technology now enable us to measure human visual attention in real-world settings. This project leverages wearable camera and eye-tracking technologies to support research into the relationships between visual attention, learning, and intelligent problem-solving.<\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"vertical-align:middle;border:0px\">\n<em>Selected publications<\/em><\/p>\n<ul>\n<li>Brown, E., Park, S., Warford, N., Seiffert, A., Kawamura, K., Lappin, J., and Kunda, M. (2018). An architecture for spatiotemporal template-based search. Advances in Cognitive Systems, 6, 101-118.<\/li>\n<li>Kunda, M., El-Banani, M., and Rehg, J. (2016). A computational exploration of problem-solving strategies and gaze behaviors on the Block Design task. In Proceedings of the 38th Annual Meeting of the Cognitive Science Society, Philadelphia, PA.<\/li>\n<li>Kunda, M., and Ting, J. (2016). Looking around the mind\u2019s eye: Attention-based access to visual search templates in working memory. 
Advances in Cognitive Systems, 4, 113\u2013129.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"vertical-align: middle;border:0px\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2016\/06\/plot.png\" alt=\"scatterplot\" width=\"153\" height=\"152\" class=\"alignright size-full wp-image-263\" \/><\/td>\n<td style=\"vertical-align: middle;border:0px\"><strong>Data Visualization<\/strong><br \/>\nThe goal of this project is to understand visual cognition in the context of human data visualization activities, including studying and modeling the roles of visual perception (what you see), semantic knowledge (what you know), and goals (what you are trying to do).  These models will help to identify factors that contribute to human performance on data visualization tasks and will also lay foundations for developing new intelligent data visualization technologies.<\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"vertical-align:middle;border:0px\">\n<em>Selected publications<\/em><\/p>\n<ul>\n<li>Eilbert, J., Peters, Z., Eliott, F., Stassun, K., and Kunda, M. (2018). Shapes in scatterplots: Comparing human visual impressions and computational metrics. 40th Annual Meeting of the Cognitive Science Society, Madison, WI.<\/li>\n<li>Eliott, F., Stassun, K., and Kunda, M. (2018). IACI: A human-inspired computational architecture to help us understand visual data exploration. Sixth Annual Conference on Advances in Cognitive Systems, Menlo Park, CA.<\/li>\n<li>Eliott, F. M., Stassun, K., and Kunda, M. (2017). Visual data exploration: How expert astronomers use flipbook-style visual approaches to understand new data. 
In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, London, UK.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"vertical-align:middle;border:0px\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2016\/06\/eft.png\" alt=\"eft\" width=\"154\" height=\"154\" class=\"alignright size-full wp-image-240\" \/><\/td>\n<td style=\"vertical-align:middle;border:0px\"><strong>AI in Cognitive Assessments<\/strong><br \/>\nThe goal of this project is to develop new AI tools that improve the usefulness of standardized cognitive assessments that are used in research and clinical practice.  We focus mostly on nonverbal cognitive assessments, such as Raven&#8217;s Progressive Matrices, Leiter, Embedded Figures, and Block Design, and we examine how AI models can be used to make more detailed inferences about human response patterns.<\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"vertical-align:middle;border:0px\">\n<em>Selected publications<\/em><\/p>\n<ul>\n<li>Palmer, J. H., and Kunda, M. (2018). Thinking in PolAR pictures: Using rotation-friendly mental images to solve Leiter-R Form Completion. AAAI National Conference.<\/li>\n<li>Ainooson, J., and Kunda, M. (2017). A computational model for reasoning about the Paper Folding task using visual mental images. In Proceedings of the 39th Annual Meeting of the Cognitive Science Society, London, UK.<\/li>\n<li>Kunda, M., Souli\u00e8res, I., Rozga, A., &amp; Goel, A. K. (2016). Error patterns on the Raven&#8217;s Standard Progressive Matrices Test. Intelligence, 59, 181-198.<\/li>\n<li>Kunda, M., McGreggor, K., and Goel, A. K. (2013). A computational model for solving problems from the Raven\u2019s Progressive Matrices intelligence test using iconic visual representations. Cognitive Systems Research, 22-23, pp. 
47-66.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<tr>\n<td style=\"vertical-align: middle;border:0px\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/cdn-dev.vanderbilt.edu\/t2-my-dev\/wp-content\/uploads\/sites\/2127\/2016\/06\/duck.png\" alt=\"duck\" width=\"150\" height=\"150\" class=\"alignright size-full wp-image-233\" \/><\/td>\n<td style=\"vertical-align: middle;border:0px\"><strong>Developmentally Inspired AI<\/strong><br \/>\nSome biologically inspired approaches to AI aim to emulate the neural structure of the brain. This project takes a parallel approach of looking at developmental aspects of human intelligence. In particular, we study how the physical environment, the maturation of motor and attentional skills, and interactions with social actors all play a role in defining what, and how, human infants learn about the world.<\/td>\n<\/tr>\n<tr>\n<td colspan=\"2\" style=\"vertical-align:middle;border:0px\">\n<em>Selected publications<\/em><\/p>\n<ul>\n<li>Wang, X., Wang, X., and Kunda, M. (2018). Ordering of training inputs for a neural network learner. Sixth Annual Conference on Advances in Cognitive Systems, Menlo Park, CA.<\/li>\n<li>Wang, X., Eliott, F., Ainooson, J., Palmer, J., and Kunda, M. (2017). An object is worth six thousand pictures: The egocentric, manual, multi-image (EMMI) dataset. In International Conference on Computer Vision Workshop on Egocentric Perception, Interaction, and Computing (EPIC@ICCV), Venice, Italy.<\/li>\n<\/ul>\n<\/td>\n<\/tr>\n<\/table>\n","protected":false},"excerpt":{"rendered":"<p>Thinking in Pictures \/ Imagery-Based AI This project was inspired by the book Thinking in Pictures by Temple Grandin, a professor of animal science who is also on the autism spectrum and who feels that she is a visual thinker. 
&hellip; <a href=\"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/projects\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":5139,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"sidebar-page.php","meta":{"footnotes":""},"class_list":["post-2","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/pages\/2","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/users\/5139"}],"replies":[{"embeddable":true,"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/comments?post=2"}],"version-history":[{"count":73,"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/pages\/2\/revisions"}],"predecessor-version":[{"id":981,"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/pages\/2\/revisions\/981"}],"wp:attachment":[{"href":"https:\/\/my.dev.vanderbilt.edu\/aivaslab\/wp-json\/wp\/v2\/media?parent=2"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}