
Generative Pre-trained Transformers for Coding Text Data? An Analysis with Classroom Orchestration Data

Abstract

Video content analysis is important for researchers in technology-enhanced learning. A common starting point is to transcribe video into textual transcripts so that a coding scheme can be applied to group the text into key themes. However, manual coding is demanding, requiring considerable time and effort from human annotators. This study therefore explores the use of Generative Pre-trained Transformer 3 (GPT-3) models to automate the coding of text data, comparing them against classical machine learning baselines on a dataset manually coded for the orchestration actions of six teachers in classroom collaborative learning sessions. Our findings show that a fine-tuned GPT-3 (curie) model outperformed the classical approaches (F1 score of 0.87) and reached a Cohen's kappa of 0.77, indicating substantial agreement between manual and machine coding. The study also brings out the limitations of text transcripts and highlights the importance of multimodal observations that capture the context of orchestration actions.
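The agreement statistic reported above, Cohen's kappa, corrects raw percent agreement for agreement expected by chance. A minimal sketch of how it is computed between a manual coder and a machine coder is shown below; the orchestration codes and label sequences here are illustrative placeholders, not data from the study:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two coders.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is chance agreement from each coder's label frequencies.
    """
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from the coders' marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical orchestration codes for 8 utterances (illustrative only).
manual  = ["explain", "monitor", "explain", "manage",
           "monitor", "explain", "manage", "monitor"]
machine = ["explain", "monitor", "explain", "manage",
           "explain", "explain", "manage", "monitor"]
print(round(cohens_kappa(manual, machine), 2))  # → 0.81
```

With one disagreement out of eight items, observed agreement is 0.875, but kappa is lower (0.81) because some of that agreement would occur by chance given the label distributions.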

