
UX Research Projects

AR Design Principles for Reading and Motivation

Doctoral Research @ Columbia University (2023 - 2024)

Published in the International Journal of Child-Computer Interaction

https://doi.org/10.1016/j.ijcci.2024.100701


Executive Summary:

I conducted an experimental research study investigating how Augmented Reality (AR) design principles can support foundational literacy development in early readers (grades 1–3). The project transformed recreational reading into an interactive, multimodal experience, bridging traditional print with digital pedagogical scaffolding to improve literacy skills and reading motivation.


To address the challenge of evaluating "non-deterministic" immersive experiences, I developed a rigorous empirical framework that decomposes complex AR interactions into measurable cognitive benchmarks. This research established how vision-based product features—such as image recognition and spatial tracking—directly influence human mental models and attentional control.

[Animated demo: Arpedia dinosaurs AR experience]

Core Research Pillars


1. Operationalizing "Good" via Cognitive Benchmarking

I translated abstract qualitative concepts like "user interest" and "engagement" into observable, data-driven success criteria.


  • Attentional Metrics: By isolating "Non-Distractive Design," I measured a 25.1% increase in attentional control, creating a benchmark for evaluating vision-based product UI.


  • Metric Decomposition: I decomposed multimodal interactions into constituent parts (e.g., visual-haptic feedback, spatial contiguity) to reason about how these parts influence the total user experience.

[Image: AR reading observation session]

2. Scalable Evaluation & "Golden Dataset" Curation

I designed a rigorous evaluation methodology that utilized a dataset of 10 distinct non-fiction AR books to test system reliability across diverse use cases.


  • Systematic Annotation: I led the curation and annotation of behavioral datasets, coding over 1,600 interaction instances to identify statistically significant patterns in model/system behavior.


  • Automated Logic Triggers: I defined the "instructions" for human raters and system evaluators by establishing a rubric of 32 AR design principles, ensuring reproducible testing of vision-based products.
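As a minimal sketch of how rubric-based coding can be checked for reproducibility, the snippet below computes Cohen's kappa — a standard chance-corrected agreement statistic — between two hypothetical raters applying rubric codes. The `P..` labels and the data are illustrative stand-ins, not the study's actual codes or principles.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical codes."""
    n = len(rater_a)
    # Observed proportion of instances where both raters chose the same code
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if raters coded independently at their own base rates
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(counts_a) | set(counts_b)
    expected = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical codes drawn from a 32-principle rubric (labels are illustrative)
rater_1 = ["P03", "P17", "P03", "P21", "P03", "P17"]
rater_2 = ["P03", "P17", "P03", "P03", "P03", "P17"]
kappa = cohens_kappa(rater_1, rater_2)  # ≈ 0.70, conventionally "substantial" agreement
```

Reporting kappa alongside rater instructions is one common way to evidence that a coding scheme of this size can be applied consistently across 1,600+ instances.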

[Image: AR design principles]

3. Evaluating Multimodal & Agentic Interactions

Focusing on "Agentic" behaviors, I evaluated how digital Pedagogical Agents (PAs) influence human trust and motivation in high-stakes learning environments.


  • Anthropomorphic Balance: I developed evaluations for AI-generated personas, measuring how scientific accuracy combined with "human-like" discussion affects user self-efficacy.


  • Failure-State Analysis: I analyzed how "technical errors" and inconsistent tracking disrupted user agency, providing actionable recommendations for improving system robustness and human-AI alignment.


4. Evidence-Based Motivation: Narrative & Gameful Learning

Applying my specialization in intelligent technologies, I integrated narrative and game-based elements to induce a state of "flow" during literacy practice.


  • The Fantastical-Real Balance: I designed a framework to balance engaging fantastical elements with scientific accuracy, which led to an 11.1% increase in vocabulary engagement.


  • Proprietary Interest Metrics: We tracked qualitative shifts in interest and self-efficacy, finding that narrative learning integration increased observed interest behavior by 34.4%.


Research Impact


  • Proven Engagement: Simple linear regression confirmed that motivation for reading significantly predicted a preference for AR-enhanced books over traditional print.


  • Pedagogical Scaffolding: Demonstrated that AR can provide "shared reading" benefits for non-confident readers by offering audio support and interactive depictions of complex scientific terms.


  • Foundational Framework: Established a validated roadmap for designers to utilize AR as a bridge for multiliteracy, moving from "learning to read" to "reading to learn" in informal contexts.
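The regression result above can be sketched in a few lines of ordinary least squares. The data below are invented purely for illustration — they are not the study's measurements — and the function names are my own:

```python
def simple_linear_regression(x, y):
    """Fit y = b0 + b1 * x by ordinary least squares; return (slope, intercept, r_squared)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return slope, intercept, 1 - ss_res / ss_tot

# Invented example scores: reading motivation (predictor) vs. AR-book preference (outcome)
motivation = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0]
preference = [1.8, 2.9, 3.2, 4.1, 4.4, 5.1]
slope, intercept, r2 = simple_linear_regression(motivation, preference)
# A positive slope with high r2 is the pattern behind "motivation predicts AR preference";
# the study itself would also report a p-value for the slope to establish significance.
```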


Strategic Impact & Translation

How I'm bridging the gap between foundational cognitive research and actionable product roadmaps to architect psychologically aligned, agentic AI systems today.

  • Operationalizing Success Criteria: I specialize in defining what "good" looks like for non-deterministic experiences by decomposing complex interactions into measurable cognitive benchmarks. For example, I translated abstract "attentional control" into a measurable 25.1% performance increase by isolating non-distractive design principles—a framework directly applicable to evaluating the intuitive nature of Vision products.

​

  • Scalable Evaluation Frameworks: I bridge the gap between human sciences and data science by architecting rigorous evaluation pipelines. Drawing on my current work at Samsung with state-based "Agentic AI" logic, I curate high-fidelity "golden datasets" and write precise rater instructions that hold up under the pressures of iterative product development and non-linear AI behaviors.

​

  • Engineering Human-AI Alignment: I leverage mixed-methods UXR to ensure agentic systems are contextually grounded and psychologically aligned. By applying my research on the "Persona Effect" and "Anthropomorphic Balance," I partner with engineering teams to optimize multi-modal interactions (voice, visual, and spatial), ensuring AI behaviors drive a 34.4% increase in user interest while maintaining scientific and educational integrity.
