Research Affiliations

University of Florida
Research at the University of Florida focuses on fusing symbolic reasoning with neural networks to improve the explainability of AI systems.

Research Papers

Abstracting General Syntax for XAI after Decomposing Explanation Sub-Components
This paper presents an overview of the Qi-Framework, a novel approach to defining and quantifying explainability in machine learning. It introduces a mathematically grounded syntax that abstracts and organizes the subcomponents of common eXplainable Artificial Intelligence (XAI) methods. The framework aims to provide a standardized language for describing explainability needs, evaluating an explanation's relevance to a specific use case, and guiding the selection of XAI methods. The paper explores how the Qi-Framework ranks methods by their utility, supports the discovery of new XAI techniques, and fosters collaborative advances in interpretable machine learning.
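
The abstract does not reproduce the framework's notation, so the sketch below is only a rough illustration of what ranking XAI methods by utility could look like. Every symbol in it (method m, use case u, subcomponents s_i, weights w_i, relevance score r) is an assumption made for illustration, not the Qi-Framework's actual formalism.

```latex
% Hypothetical utility score for ranking XAI methods; these symbols are
% illustrative assumptions, not the Qi-Framework's own definitions.
% A method m exposes explanation subcomponents s_1, ..., s_n, and a use
% case u assigns each subcomponent a weight w_i(u).
\[
  Q(m, u) = \sum_{i=1}^{n} w_i(u)\, r(s_i),
  \qquad \sum_{i=1}^{n} w_i(u) = 1,
\]
% where r(s_i) \in [0, 1] scores how well m realizes subcomponent s_i;
% candidate methods are then ranked by Q(m, u) for the use case at hand.
```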

MIRAGE: Multi-model Interface for Reviewing and Auditing Generative Text-to-Image AI
This paper introduces MIRAGE, a web-based tool that lets users audit and compare outputs from multiple text-to-image (T2I) models. By providing a structured platform for evaluating AI-generated images, MIRAGE empowers users to surface harmful biases and contribute to ongoing efforts to improve generative AI systems. A preliminary user study with five participants found that MIRAGE users could draw on their lived experiences and identities to identify subtle biases more effectively when reviewing multiple T2I models simultaneously than when evaluating a single model. The paper highlights MIRAGE's potential to foster more inclusive and trustworthy generative AI applications.
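
MIRAGE's implementation is not described in the abstract; the sketch below only illustrates the underlying review pattern it names (one prompt, several models, outputs laid out side by side for comparison). The `Backend` type, the `review_grid` helper, and the placeholder models are hypothetical stand-ins, not MIRAGE's actual API.

```python
"""Minimal sketch of a multi-model review pattern: send one prompt to
several text-to-image backends and lay the results out side by side.
All names here are illustrative assumptions, not MIRAGE's API."""

from typing import Callable, Dict

from PIL import Image

# Each backend maps a prompt to a generated image. In a real tool these
# would wrap model-specific APIs; here they are hypothetical stand-ins.
Backend = Callable[[str], Image.Image]


def review_grid(prompt: str, backends: Dict[str, Backend]) -> Image.Image:
    """Generate one image per model and paste them into a single row,
    so a reviewer can compare how each model renders the same prompt."""
    images = {name: generate(prompt) for name, generate in backends.items()}
    width = sum(img.width for img in images.values())
    height = max(img.height for img in images.values())
    grid = Image.new("RGB", (width, height), "white")
    x = 0
    for img in images.values():
        grid.paste(img, (x, 0))
        x += img.width
    return grid


if __name__ == "__main__":
    # Placeholder backends that return blank canvases; swap in real
    # model calls to audit actual outputs.
    backends = {
        "model_a": lambda p: Image.new("RGB", (256, 256), "lightgray"),
        "model_b": lambda p: Image.new("RGB", (256, 256), "gray"),
    }
    review_grid("a portrait of a scientist", backends).save("grid.png")
```

Treating each model as an interchangeable callable keeps the comparison logic independent of any particular T2I backend, which is what makes a side-by-side audit easy to extend to additional models.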

Optimizing Digital Learning Through Data Analytics and Natural Language Processing
This study examines student engagement and sentiment within CourseKata, a digital learning platform for statistics education. By analyzing student interactions and pulse checks (periodic assessments of sentiment and understanding), we identify key gaps in engagement strategies. Discrepancies between positive pulse-check responses and actual performance suggest a misalignment between perceived and actual comprehension. Using sentiment analysis with t-SNE and NLP clustering via a locally hosted Large Language Model (LLM), we uncover common challenges such as time management and concept complexity. Our findings highlight the need for refined pulse checks, varied question formats, and standardized content length to improve learning outcomes and engagement.
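
The study's pipeline relies on a locally hosted LLM; as a rough stand-in, the sketch below uses TF-IDF features (an assumption, not the study's method) to show the project-then-cluster shape of the analysis: vectorize free-text responses, reduce with t-SNE, then cluster to surface themes such as time management and concept difficulty. The sample responses are invented for illustration.

```python
"""Sketch of a project-then-cluster analysis of pulse-check responses,
using TF-IDF features as a stand-in for the locally hosted LLM
embeddings described above (an assumption, not the actual pipeline)."""

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.manifold import TSNE

# Hypothetical free-text pulse-check responses.
responses = [
    "I ran out of time before finishing the exercises",
    "Sampling distributions still confuse me",
    "The homework took much longer than expected",
    "I don't understand when to use a t-test",
    "Not enough hours in the week for this course",
    "Confidence intervals are a hard concept",
]

# Vectorize the responses, then project to 2-D with t-SNE so that
# similar complaints land near each other.
features = TfidfVectorizer().fit_transform(responses).toarray()
coords = TSNE(n_components=2, perplexity=2, random_state=0).fit_transform(features)

# Cluster in the projected space to surface recurring themes, e.g.
# time management vs. concept difficulty.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
for text, label in zip(responses, labels):
    print(f"cluster {label}: {text}")
```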