An AI-based Framework For Parent-child Interaction Analysis

dc.contributor.advisor: Moshirpour, Mohammad
dc.contributor.advisor: Duffett-Leger, Linda
dc.contributor.author: Nikbakhtbideh, Behnam
dc.contributor.committeemember: Far, Behrouz
dc.contributor.committeemember: Drew, Steve
dc.date: 2023-11
dc.date.accessioned: 2023-07-17T22:24:07Z
dc.date.available: 2023-07-17T22:24:07Z
dc.date.issued: 2023-07
dc.description.abstract: The quality of parent-child interactions is foundational to children's social-emotional and cognitive development, as well as their lifelong mental health. The Parent-Child Interaction Teaching Scale (PCITS) is a well-established and effective tool for measuring parent-child interaction quality. It is used in public health settings and in basic and applied research to identify problem areas within parent-child interactions. However, like other observational measures of parent-child interaction quality, the PCITS can be time-consuming to administer and score, which limits its wider implementation. Therefore, the main objective of this research is to develop a framework for recognizing behavioural symptoms of the child and parent during interventions. Based on the literature on interactive parent-child behaviour analysis, we categorized PCITS labels into three modalities: language, audio, and video. Some labels have dyadic actors, while others have a single actor (either the parent or the child). In addition, each modality carries its own technical issues, considerations, and limitations from an artificial-intelligence perspective. Hence, we divided the problem by modality, proposed models for each, and proposed a solution to combine them. First, we proposed a model for recognizing action-related (video) labels. These labels are interactive and involve two actors: the parent and the child. We applied a feature extraction algorithm to produce semantic features, which were then passed through a feature selection algorithm to retain the most meaningful ones. We chose this method for its lower data requirement compared to other modalities. Also, because 2D video files are used, the proposed feature extraction and selection algorithms are designed to handle occlusion and natural conditions such as camera movement. Second, we proposed a model for recognizing language- and audio-related labels. These labels represent a single-actor role for the parent, as children are not yet capable of producing meaningful text in the intervention videos. To develop this model, we conducted research on a similar dataset so that transfer learning could be applied between the two problems; the second part of this research therefore concerns this text dataset. Third, we focused on the multi-modal aspects of the work. We conducted experiments to determine how to integrate the prior work into our model. We also provided an ensemble model that combines the language and audio modalities based on the semantic and syntactic characteristics of the text. This ensemble model provides a baseline for developing further models with different aspects and modalities. Finally, we provided a roadmap to support additional labels that were not covered in this research because not enough samples were collected. Our proposed framework includes a labelling system, developed in the early stages of the research, to gather labelled data. This system is also designed to integrate with the AI modules to provide nurses with automatic recognition of behavioural labels in parent-child interaction videos.
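The abstract describes an ensemble that combines the language and audio modalities into one score per PCITS label. A minimal sketch of one common way to do this, weighted late fusion of per-modality probability outputs, is shown below; the function name, the fusion weight, and the example scores are all hypothetical illustrations, not the thesis's actual implementation.

```python
import numpy as np

def late_fusion(prob_language, prob_audio, weight_language=0.5):
    """Combine per-label probabilities from two modality-specific models.

    Hypothetical weighted-average (late) fusion; the thesis's actual
    ensemble may weight modalities differently or fuse earlier in the
    pipeline (e.g., at the feature level).
    """
    p_lang = np.asarray(prob_language, dtype=float)
    p_audio = np.asarray(prob_audio, dtype=float)
    fused = weight_language * p_lang + (1.0 - weight_language) * p_audio
    return fused / fused.sum()  # renormalize to a probability distribution

# Illustrative scores for three hypothetical PCITS labels
lang_scores = [0.7, 0.2, 0.1]   # language model output (made up)
audio_scores = [0.5, 0.3, 0.2]  # audio model output (made up)
print(late_fusion(lang_scores, audio_scores))  # most mass stays on label 0
```

Late fusion keeps each modality's model independent, which matches the abstract's staged approach of building per-modality models first and combining them afterward.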
dc.identifier.citation: Nikbakhtbideh, B. (2023). An AI-based framework for parent-child interaction analysis (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.
dc.identifier.uri: https://hdl.handle.net/1880/116757
dc.identifier.uri: https://dx.doi.org/10.11575/PRISM/41599
dc.language.iso: en
dc.publisher.faculty: Graduate Studies
dc.publisher.institution: University of Calgary
dc.rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
dc.subject: NLP
dc.subject: Machine learning
dc.subject: Deep learning
dc.subject: PCITS
dc.subject: Parent-child interaction teaching scale
dc.subject: Parent-child interactions
dc.subject.classification: Artificial Intelligence
dc.title: An AI-based Framework For Parent-child Interaction Analysis
dc.type: master thesis
thesis.degree.discipline: Engineering – Electrical & Computer
thesis.degree.grantor: University of Calgary
thesis.degree.name: Master of Science (MSc)
ucalgary.thesis.accesssetbystudent: I do not require a thesis withhold – my thesis will have open access and can be viewed and downloaded publicly as soon as possible.
Files

Original bundle
Name: ucalgary_2023_nikbakhtbideh_behnam.pdf
Size: 6.84 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.62 KB
Format: Item-specific license agreed upon to submission