Students Create AI Model That Boosts Speed, Cuts Bias in Parent-Child Analysis

Under the supervision of Dr. Amiya Waldman-Levi, a clinical associate professor of occupational therapy, Dengyi Liu, a student in the Ph.D. in Mathematics, and Chana Cunin, a student in the Occupational Therapy Doctorate, created the AI model to assess parent-child interaction.

By Dave DeFusco

At the Katz School's Graduate Symposium on Science, Technology and Health, a team of mathematics and occupational therapy students presented a project that could reshape how we understand, assess and support children's play. Their project, "AI-Powered Play Assessment Using Video Language Models," promises to automate one of the most nuanced tasks in pediatric care: evaluating joint play between children and their caregivers.

Dengyi Liu, a student in the Ph.D. in Mathematics, and Vanessa Murad and Chana Cunin, both students in the Occupational Therapy Doctorate, developed and tested an artificial intelligence model capable of analyzing parent-child interactions with unprecedented efficiency and precision.

"Observation-based assessments are powerful but time-consuming," said Liu. "We wanted to build a system that could take in a 10-minute video and identify, track and evaluate play behaviors using the same criteria clinicians use, only faster and without fatigue or inconsistency."

At the heart of this work is the Parent/Caregiver Support of Children's Playfulness (PC-SCP), a validated observational tool developed in 2023 by Dr. Amiya Waldman-Levi, a clinical associate professor of occupational therapy at the Katz School, and Dr. Anita Bundy, a professor in the College of Health and Human Sciences at Colorado State University. The PC-SCP evaluates the quality of joint play experiences, which are crucial to children's social, emotional and cognitive development.

"Even with trained raters, scoring the PC-SCP manually can take hours, and the results can vary depending on who's watching the video," said Dr. Waldman-Levi. "We needed a better way to scale our work while keeping the rigor."

That's where AI comes in. Liu and the team fine-tuned Qwen2.5-VL, a cutting-edge video language model designed to understand both visual and textual data. "The model's architecture allows it to process video frames and textual prompts together, much like how a human therapist watches, listens and interprets at once," said Liu.
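The input step Liu describes can be pictured with a small sketch. The function below uniformly samples frames from a long clip so a video language model can see the whole interaction at a manageable cost; the frame rate, clip length and frame budget here are illustrative assumptions, not the team's published settings.

```python
# Illustrative sketch: uniformly sample frames from a 10-minute clip so a
# video language model sees the whole interaction at a manageable cost.
# The numbers below are assumptions for illustration only.

def sample_frame_indices(total_frames: int, budget: int) -> list[int]:
    """Pick `budget` frame indices spread evenly across the clip."""
    if budget >= total_frames:
        return list(range(total_frames))
    step = total_frames / budget
    return [int(i * step) for i in range(budget)]

# A 10-minute video at 30 fps has 18,000 frames; a frame budget of 32
# keeps the model's input small enough for practical inference.
indices = sample_frame_indices(total_frames=18_000, budget=32)
```

The sampled frames, together with a text prompt describing the scoring criteria, would then be passed to the model in a single multimodal input.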

By feeding the model annotated video Q&A datasets, the team trained it to recognize key elements of joint play: cooperation, initiation, responsiveness and more. The study recruited 39 mother-child pairs, encompassing both neurotypical and neurodiverse children between ages 2 and 6. Most mothers were college-educated, English-speaking and married, with moderate to high household income. Importantly, both manual and AI assessments were conducted on the same 60 video clips. Manual PC-SCP scoring showed strong inter-rater reliability (75%–100%), which served as the gold standard for evaluating the AI's performance.
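The inter-rater reliability figure above can be read as percent agreement: the fraction of items on which two raters gave the same score. A minimal sketch, with made-up item scores (the real PC-SCP data is not reproduced here):

```python
# Illustrative sketch of percent agreement between two raters scoring the
# same items. The scores below are invented; the article reports 75%-100%
# agreement on the real data.

def percent_agreement(rater_a: list[int], rater_b: list[int]) -> float:
    """Fraction of items on which both raters gave the same score."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

a = [3, 2, 4, 4, 1, 3, 2, 4]
b = [3, 2, 4, 3, 1, 3, 2, 4]
agreement = percent_agreement(a, b)  # 7 of 8 items match -> 0.875
```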

"The AI model achieved a top-five accuracy of 61.3% and 40.7% precision on key scoring items," said Murad. "That's a solid result considering the complexity of what it's being asked to do, which is essentially to replicate a trained clinician's observational judgment."
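For readers unfamiliar with the metrics Murad cites, here is a minimal sketch of how top-k accuracy and precision are computed; the toy rankings and labels are invented, not the study's data.

```python
# Illustrative sketch of the two metrics quoted above, with invented toy
# data. Top-five accuracy counts a prediction as correct when the true
# score appears among the model's five highest-ranked candidates;
# precision is the fraction of positive predictions that are correct.

def top_k_accuracy(ranked_preds, truths, k):
    """Fraction of cases whose true label appears in the top-k ranking."""
    hits = sum(t in ranked[:k] for ranked, t in zip(ranked_preds, truths))
    return hits / len(truths)

def precision(preds, truths, positive):
    """Correct positive predictions divided by all positive predictions."""
    positives = [(p, t) for p, t in zip(preds, truths) if p == positive]
    if not positives:
        return 0.0
    return sum(p == t for p, t in positives) / len(positives)

# Each inner list ranks candidate scores from most to least likely.
ranked = [[2, 4, 1, 3, 5], [5, 1, 2, 3, 4], [1, 2, 3, 4, 5]]
truth = [3, 4, 2]
acc5 = top_k_accuracy(ranked, truth, k=5)  # every truth is in its top 5 -> 1.0
acc2 = top_k_accuracy(ranked, truth, k=2)  # only the third case hits -> 1/3
prec = precision([1, 1, 0, 1], [1, 0, 0, 1], positive=1)  # 2 of 3 -> 2/3
```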

Cunin emphasized the real-world impact. "This could change how occupational therapists engage with families," she said. "Instead of spending hours reviewing video and scoring interactions frame by frame, clinicians can rely on AI to do the heavy lifting, freeing their time for intervention and counseling."

Dr. Waldman-Levi, who served as a faculty advisor, noted the broader vision: "This is not about replacing therapists; it's about extending their reach," she said. "With automated scoring, we can conduct larger studies, reach more diverse populations and reduce human bias in assessment. That's an ethical win as much as a scientific one."

Dr. Honggang Wang, professor and chair of the Department of Graduate Computer Science and Engineering, said the project is the kind of interdisciplinary collaboration the Katz School aims to foster. "Combining deep learning with behavioral science opens entirely new frontiers in healthcare diagnostics," he said. "It's not just innovation; it's mission-driven innovation."

Still, the team is aware of the limitations. Variability in video quality and the diversity of interaction styles remain challenges. "Generalizing the model across different socioeconomic and cultural contexts will require more diverse training data," said Liu. "Our next step is to expand the dataset, refine the model's sensitivity and begin real-world testing in clinical settings."

The potential is enormous. Automated scoring can cut analysis time from hours to minutes and scale longitudinal studies on child development in ways previously unimaginable. In a healthcare system strained by labor shortages and rising costs, the model offers a tool that's both efficient and reliable.

"This is just the beginning," said Murad. "Our vision is to create AI tools that can assist in early diagnosis, track developmental progress and support interventions, all while honoring the human relationships at the heart of pediatric care."
