Title: Probing Human Visual Strategies Using Interpretability Methods for Artificial Neural Networks
Author: Kashef Alghetaa, Yousif Khalid Faeq
Supervisor: Kar, Kohitij
Type: Electronic Thesis or Dissertation
Date issued: 2024-10-28
Copyright date: 2024-07-15
URI: https://hdl.handle.net/10315/42386
Subjects: Biology; Neurosciences; Artificial intelligence

Abstract: Unraveling human visual strategies during object recognition remains a challenge in vision science. Existing psychophysical methods for investigating these strategies are limited in how accurately they can interpret human decisions. Recently, artificial neural network (ANN) models, which show remarkable similarities to human vision, have provided a window into human visual strategies. However, inconsistencies among different techniques hinder the use of explainable AI (XAI) methods to interpret ANN decision-making. Here, we develop and validate a novel in silico surrogate method that addresses these challenges by applying behavioral probes to ANNs using explanation-masked images. By identifying the XAI method and ANN with the highest human alignment, we provide a working hypothesis and an effective approach for explaining human visual strategies during object recognition, a framework relevant to many other behaviors.

Rights: Author owns copyright, except where explicitly noted. Please contact the author directly with licensing requests.
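To make the abstract's core procedure concrete, below is a minimal Python/PyTorch sketch of an explanation-masked behavioral probe: compute an XAI explanation for an image, keep only the most salient pixels, and test whether the ANN's decision survives the masking. This is an illustration under stated assumptions, not the thesis's actual pipeline; the choice of gradient-based saliency, the gray-fill masking, the keep_frac parameter, and the helper names saliency_map, explanation_mask, and probe are all hypothetical.

    import torch
    import torchvision.models as models

    # Assumed stand-in ANN; the thesis may use different models.
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    model.eval()

    def saliency_map(model, x):
        # Vanilla-gradient saliency as a simple XAI explanation.
        # x: (1, 3, H, W) image tensor.
        x = x.clone().requires_grad_(True)
        logits = model(x)
        top_class = logits.argmax(dim=1).item()
        logits[0, top_class].backward()
        # Aggregate absolute gradients across color channels -> (1, H, W).
        return x.grad.abs().max(dim=1).values

    def explanation_mask(x, sal, keep_frac=0.2):
        # Keep the top keep_frac most salient pixels; gray out the rest.
        thresh = torch.quantile(sal.flatten(), 1.0 - keep_frac)
        mask = (sal >= thresh).float().unsqueeze(1)  # (1, 1, H, W)
        return x * mask + 0.5 * (1.0 - mask)

    def probe(model, x, keep_frac=0.2):
        # Behavioral probe: does the predicted class survive masking?
        with torch.no_grad():
            orig = model(x).argmax(dim=1)
        sal = saliency_map(model, x)
        x_masked = explanation_mask(x, sal, keep_frac)
        with torch.no_grad():
            masked = model(x_masked).argmax(dim=1)
        return (orig == masked).float().mean().item()

Run over an image set, an average probe score near 1.0 would indicate the explanation captures the pixels the ANN actually relies on; comparing such scores across XAI methods and ANNs, and against human behavior on the same masked images, is the kind of alignment comparison the abstract describes.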