My Projects

SALIENCYSLIDER

Web application that lets users upload images and explore which regions of an image drive a CNN's classification, with a saliency slider controlling how much of the saliency overlay is visible.
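
A minimal sketch of the kind of gradient-based saliency computation such a slider could gate, assuming a TensorFlow Keras classifier; the function name, image handling, and threshold semantics are illustrative, not the app's actual code:

    import numpy as np
    import tensorflow as tf

    def saliency_mask(model, image, threshold):
        """Binary saliency mask: pixels whose normalized gradient
        magnitude meets the slider value `threshold` in [0, 1]."""
        x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)
            preds = model(x)
            score = tf.gather(preds[0], tf.argmax(preds[0]))  # top-class score
        grads = tape.gradient(score, x)                  # d(score)/d(pixel)
        sal = tf.reduce_max(tf.abs(grads), axis=-1)[0]   # collapse color channels
        sal = sal / (tf.reduce_max(sal) + 1e-8)          # normalize to [0, 1]
        return sal.numpy() >= threshold                  # slider-gated mask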

ASA DATAFEST

Data analysis presentation using CourseKata data for the annual ASA DataFest competition at the University of Florida; awarded Best Overall.

DINODETECT

A Discord bot that uses sentiment analysis to gauge the mood of server communications and flag cheating and toxicity.
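
A minimal sketch of how such a check might look with discord.py and NLTK's VADER analyzer; the threshold, reply text, and choice of VADER are assumptions, not the bot's actual pipeline:

    import discord
    from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download("vader_lexicon")

    analyzer = SentimentIntensityAnalyzer()
    intents = discord.Intents.default()
    intents.message_content = True
    client = discord.Client(intents=intents)

    @client.event
    async def on_message(message):
        if message.author == client.user:
            return
        # VADER's compound score lies in [-1, 1]; strongly negative
        # messages are flagged as potentially toxic.
        score = analyzer.polarity_scores(message.content)["compound"]
        if score < -0.6:  # hypothetical toxicity threshold
            await message.channel.send("Heads up: that message reads as hostile.")

    client.run("YOUR_BOT_TOKEN")  # placeholder token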

ADAPTREE

AI-powered educational platform that creates personalized learning pathways based on a user's background; finalist at the 2023 AI Days Hackathon.

REGRESSIVE REASONING

A machine learning project that classifies brand logos with 95% validation accuracy, using TensorFlow and EfficientNet on UF's HiPerGator supercomputer.
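
A minimal sketch of EfficientNet transfer learning in TensorFlow of the kind described; the class count, input size, and training settings are placeholders, not the project's actual configuration:

    import tensorflow as tf

    base = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=(224, 224, 3))
    base.trainable = False  # freeze the pretrained backbone first

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation="softmax"),  # hypothetical class count
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)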

MUS MUSCULUS NEURONAL ANALYSIS

This project involved leading gene-expression analysis of the house mouse (Mus musculus) in R, applying machine learning algorithms to decode neuronal network patterns in RNA-seq datasets.
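
The analysis itself was done in R; as an illustration in Python, here is a minimal sketch of an unsupervised pattern-decoding step of the kind described, using a synthetic stand-in for the RNA-seq matrix and an assumed cluster count:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    # Synthetic stand-in for a cells-by-genes RNA-seq count matrix.
    expr = np.log1p(np.random.poisson(2.0, size=(500, 2000)))

    pcs = PCA(n_components=20).fit_transform(expr)              # denoise before clustering
    labels = KMeans(n_clusters=8, n_init=10).fit_predict(pcs)   # putative neuronal subpopulations
    print(np.bincount(labels))                                  # cells per cluster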

BUDGET BUDDY

Flutter-based app developed to assist with financial planning, featuring dynamic graphs and sorting-algorithm analyses for efficient budget management.

JASON'S JOURNEY

A dungeon-crawler game written in Python with Pygame for MAD2502, inspired by Professor Jason Harrington's love of mathematics.
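
A minimal sketch of a Pygame movement loop in the spirit of the game; the window size, speed, and player rectangle are placeholders:

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    clock = pygame.time.Clock()
    player = pygame.Rect(320, 240, 16, 16)

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
        keys = pygame.key.get_pressed()
        dx = (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * 4
        dy = (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * 4
        player.move_ip(dx, dy)                       # move the player rect in place
        screen.fill((20, 20, 20))
        pygame.draw.rect(screen, (200, 180, 60), player)
        pygame.display.flip()
        clock.tick(60)                               # cap at 60 FPS
    pygame.quit()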

ROOM AUTOMATION

A software solution for optimizing room assignments at UF, using C++ and the Qualtrics API to streamline the process for over 200 Resident Assistants.

Research Papers

Abstracting General Syntax for XAI after Decomposing Explanation Sub-Components

This paper presents an overview of the Qi-Framework, a novel approach to defining and quantifying explainability in machine learning. It introduces a mathematically grounded syntax that abstracts and organizes the subcomponents of common eXplainable Artificial Intelligence (XAI) methods. The framework aims to provide a standardized language for describing explainability needs, evaluating the relevance of explanations to specific use cases, and guiding the selection of XAI methods. The paper explores how the Qi-Framework helps rank methods by their utility, supports the discovery of new XAI techniques, and fosters collaborative advancements in interpretable machine learning.

MIRAGE: Multi-model Interface for Reviewing and Auditing Generative Text-to-Image AI

This paper introduces MIRAGE, a web-based tool designed to enable users to audit and compare outputs from multiple AI text-to-image (T2I) models. By providing a structured platform for evaluating AI-generated images, MIRAGE empowers users to surface harmful biases and contribute to the growing effort to improve generative AI systems. A preliminary user study with five participants revealed that MIRAGE users could draw on their lived experiences and identities to identify subtle biases more effectively when reviewing multiple T2I models simultaneously than when evaluating a single model. The paper highlights MIRAGE's potential to foster more inclusive and trustworthy generative AI applications.