UX Research Projects
AI-Generated Pedagogical Agents in Videos
Ph.D. Dissertation | Cognitive Science in Education
@ Columbia University (2025)
Executive Summary:
A Pedagogical Agent (PA) is a lifelike digital persona designed to facilitate learning and engagement within digital environments. While traditional software relied on pre-recorded video or static avatars, generative AI now enables the creation of hyper-realistic instructors at scale. My doctoral research investigated a critical question at the intersection of HCI and cognitive science: does synthetic realism satisfy the requirements of Social Agency Theory and prime deeper cognitive processing? By evaluating eye gaze, gestures, and vocal prosody, I examined whether AI-generated agents can effectively "stand in" for human instructors to deliver meaningful, high-fidelity learning experiences that feel intuitive rather than "artificial."

Core Research Pillars
1. The "Utility Threshold" of GenAI
Using a 2x2 mixed factorial design, I evaluated the instructional effectiveness of AI-generated agents against human counterparts to establish empirical benchmarks for AI integration.

- Instructional Equivalency: Results revealed no significant difference in learning outcomes or cognitive load, suggesting modern GenAI has reached a "utility threshold" where it neither impedes nor enhances learning relative to human instructors.
- The Credibility Gap: Longitudinal comparison of pilot and final data documented a marked shift: as GenAI fidelity improved, the perceived credibility gap between AI and human agents narrowed significantly.
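To make the design concrete, the analysis behind these benchmarks can be sketched in a few lines of Python. The factor levels (agent type between subjects, test phase within subjects) and all score values below are hypothetical stand-ins, not the study's actual data or full mixed-ANOVA model; the sketch only computes the marginal-mean contrasts that the main effects summarize.

```python
from statistics import mean

# Hypothetical 2x2 mixed design: agent type (AI vs. human) varied
# between subjects; test phase (immediate vs. delayed) measured
# within subjects. Retention scores (0-100) are illustrative only.
scores = {
    ("ai", "immediate"):    [78, 82, 75, 80],
    ("ai", "delayed"):      [70, 74, 69, 72],
    ("human", "immediate"): [79, 81, 77, 83],
    ("human", "delayed"):   [71, 73, 70, 74],
}

def marginal_mean(level, position):
    """Mean across every cell where the given factor level appears."""
    vals = [v for key, cell in scores.items()
            if key[position] == level
            for v in cell]
    return mean(vals)

# Main effect of agent type: AI marginal mean minus human marginal mean.
agent_effect = marginal_mean("ai", 0) - marginal_mean("human", 0)
# Main effect of test phase: immediate minus delayed retention.
phase_effect = marginal_mean("immediate", 1) - marginal_mean("delayed", 1)

print(f"agent main effect: {agent_effect:+.2f}")   # small, near zero
print(f"phase main effect: {phase_effect:+.2f}")   # comparatively large
```

With these illustrative numbers, the agent-type contrast is near zero while the phase contrast is large, mirroring the equivalency finding above (a full analysis would add significance tests, e.g. a mixed ANOVA).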

2. Multimedia Learning & Embodiment
I applied Mayer's Principles of Multimedia Learning to test how digital "humanity" influences attention allocation and user trust.

- Eye-Tracking Analysis: Used empirical gaze data to move beyond subjective feedback, measuring how learners' attention shifted in response to the agent's gestures and facial expressions.
- Voice & Embodiment: Confirmed that when visual and auditory cues are aligned, AI agents successfully leverage the "Embodiment Principle" to promote deeper learning, provided the social cues are precise.
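A minimal sketch of how attention allocation is typically scored from gaze data: samples are classified into rectangular areas of interest (AOIs), and the share of samples per AOI serves as a proxy for dwell time. The AOI coordinates and gaze points below are hypothetical, not taken from the study.

```python
# Hypothetical AOIs as (x_min, y_min, x_max, y_max) screen rectangles:
# one over the agent's face, one over the slide content.
AOIS = {
    "agent_face": (800, 100, 1000, 300),
    "slide":      (50, 50, 700, 650),
}

def classify(x, y):
    """Return the name of the AOI containing a gaze sample, or None."""
    for name, (x0, y0, x1, y1) in AOIS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def dwell_proportions(samples):
    """Share of gaze samples falling in each AOI (dwell-time proxy)."""
    counts = {name: 0 for name in AOIS}
    for x, y in samples:
        aoi = classify(x, y)
        if aoi is not None:
            counts[aoi] += 1
    total = len(samples)
    return {name: n / total for name, n in counts.items()}

# Illustrative fixation samples alternating between agent and slide.
gaze = [(850, 200), (860, 210), (400, 300), (420, 310), (900, 150),
        (300, 400), (410, 320), (880, 180), (120, 500), (860, 140)]
props = dwell_proportions(gaze)
print(props)
```

Comparing these proportions across conditions (e.g., gesturing vs. static agent) is one standard way to quantify how social cues redirect visual attention.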

3. Cognitive Scaffolding & Sequence Logic
The research highlighted that while the agent's appearance matters, the instructional architecture remains the primary driver of success.

- Order Effects: Data showed that content sequencing (e.g., declarative-first) had a greater impact on retention than the agent's "humanity."
- Strategic Insight: This finding indicates that for AI products, pedagogical logic and system flow are just as vital as visual realism.

4. Inclusive Design: Neurodiversity & ASD
Drawing from my work in Mixed Reality, I explored how these findings extend to assistive technology and social-cognitive scaffolds.

- The Avatar Effect: Leveraged research showing that students with Autism Spectrum Disorder (ASD) are highly engaged by avatars to design AI tutors that reduce disruptive behaviors.
- Universal Design: Used GenAI to move from "one-size-fits-all" instruction to hyper-personalized, safe, and adaptive models for social skill practice.

Strategic Impact & Translation
How I bridge the gap between foundational cognitive research and actionable product roadmaps to architect psychologically aligned, agentic AI systems today.

- From Research to Product: I specialize in the "messy middle," the space where complex behavioral data meets a product roadmap, translating foundational cognitive insights into evaluation frameworks for agentic AI partners.
- Engineering Human-AI Alignment: I leverage mixed-methods UXR to provide empirical benchmarks for how models influence human trust and mental models.
- Architecting Agentic Systems: By applying findings on social cues, I partner with engineering teams to ensure AI behaviors are not only realistic but contextually grounded and psychologically aligned.