NATURAL LANGUAGE PROCESSING: WRITING ANALYTICS
This section of our text dives into the field of learning analytics as it pertains to writing analytics (WA) and natural language processing (NLP) technologies. It was interesting to hear a thread I have often encountered in early childhood literacy practicum: the difference between learning to read and reading to learn. In the practice of WA and NLP, however, the focus is more external: first the analysis of the linguistics of the writing itself, and then the analysis of how knowledge is demonstrated within that writing.
Linguistic Orientation
As I write this reflection, my writing is dynamically animated as my spellcheck extension makes suggestions to improve my writing, *remiinids* reminds me of *ervy* every letter I flip *bacwkards* backward. As mentioned in our reading, these tools draw on the field of linguistics, using language theories like Universal Grammar and Functional Grammar to assist me in my writing.
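To ground the idea for myself, here is a minimal sketch of the kind of linguistic-orientation tool at work in my editor: a toy spellchecker that flags words missing from a dictionary and suggests the closest entry by Levenshtein edit distance. The tiny word list and the edit-distance cutoff are my own illustrative assumptions, not anything described in the chapter.

```python
# Toy spellchecker: a linguistic-orientation tool in miniature.
# The tiny dictionary and the distance cutoff are illustrative assumptions.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,         # delete ca
                dp[j - 1] + 1,     # insert cb
                prev + (ca != cb)  # substitute (free if they match)
            )
    return dp[len(b)]

DICTIONARY = {"and", "reminds", "every", "backwards", "letter", "flip"}

def suggest(word: str) -> str | None:
    """Return the closest dictionary word within 3 edits (arbitrary cutoff)."""
    if word in DICTIONARY:
        return None  # already spelled correctly
    best = min(DICTIONARY, key=lambda w: edit_distance(word, w))
    return best if edit_distance(word, best) <= 3 else None

for typo in ["remiinids", "ervy", "bacwkards"]:
    print(typo, "->", suggest(typo))
```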
Domain Orientation
Similarly to my spellcheck, my Grammarly extension continually makes suggestions on *how to improve* improving my writing for clarity and delivery. Grammarly uses what is described in this chapter as natural language processing (NLP), which draws on models of billions of words to focus on “the purpose of the written text and its content.”
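By way of contrast with the spellchecker above, here is a minimal sketch of what a domain-oriented check might look like: rather than inspecting surface form, it asks whether a draft covers the content a prompt expects. The topic vocabulary and the scoring rule are invented for illustration; Grammarly's actual models are, of course, vastly larger and more sophisticated.

```python
# A domain-orientation sketch: instead of checking spelling or grammar,
# score how well a text covers the content a prompt asks for.
# TOPIC_TERMS and the coverage rule are illustrative assumptions.
import re

TOPIC_TERMS = {"assessment", "feedback", "analytics", "writing", "pedagogy"}

def content_coverage(text: str) -> float:
    """Fraction of expected topic terms that appear in the text."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(words & TOPIC_TERMS) / len(TOPIC_TERMS)

draft = "Good feedback is the heart of assessment and of writing pedagogy."
print(f"topic coverage: {content_coverage(draft):.0%}")  # 80%
```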
Descriptive Writing Analytics & Evaluative Writing Analytics
Section 3 breaks the two orientations down into how they are applied in WA: Descriptive WA and Evaluative WA. The critical difference between the two is their intended purpose. Descriptive WA focuses on describing and summarizing the structural components of the subject’s writing, whereas Evaluative WA is concerned with evaluating the quality of the writing: did the author convey their intended point with clarity and within the intended writing style? A sketch of the contrast appears below.
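To make the distinction concrete for myself, here is a small sketch that runs both orientations over the same text: a descriptive pass that only reports structural features, and an evaluative pass that turns those features into a toy quality judgment. The rubric threshold is an assumption of mine, not from the chapter.

```python
# Contrasting the two orientations on the same text.
# Descriptive WA reports structure; Evaluative WA judges quality.
# The 30-word threshold is invented for illustration.
import re

def describe(text: str) -> dict:
    """Descriptive WA: summarize structure without judging it."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    return {
        "sentences": len(sentences),
        "words": len(words),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

def evaluate(text: str) -> str:
    """Evaluative WA: apply a toy quality rubric to the description."""
    stats = describe(text)
    if stats["avg_sentence_length"] > 30:
        return "Consider shorter sentences for clarity."
    return "Sentence length supports clear delivery."

essay = "Writing analytics can describe text. It can also evaluate it."
print(describe(essay))
print(evaluate(essay))
```

The point of the split is that the descriptive pass makes no value judgment at all; only the evaluative pass compares the features to a standard.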
I was surprised to learn the extent of the effort made to distinguish between Linguistic and Domain orientation and its relevance to the science of learning analytics. As a former elementary educator, I know this distinction is at the crux of early education, so much so that much of a student’s learning day is built around the segmentation and intersection of both domains. I was also surprised to read about the limitations of NLP as they pertain to learning differences. It makes sense that if the system is learning from a homogenized set of data, the average will set the rule. Someone like my daughter, who is identified as having dyslexia and who struggles to read words in a string but scores high in phonemic awareness, will be an anomaly to a system using NLP, and the system may not be able to develop an accurate model of her as a reader or writer. Does this mean that literacy learning apps will be unable to help her grow as a reader if they do not allow her learning difference to be accounted for?
If you could see my copy of the reading, you would see the equivalent of fireworks in highlighter throughout Section 4, Pedagogy and Writing. In this section we read that “good pedagogy demands that writing analysis account for the quality of feedback.” I work on an assessment-based application and often team up with colleagues to deliver training on the product. We try to offer more than click-through training and weave in best practices for assessment and for responding to data. I always feel defeated if my audience does not share my enthusiasm for our data analysis interface, since it can be used for immediate feedback with students (both anonymous and identified). Without that piece, our product is just another tool to slap a grade in a gradebook, and that betrays our foundational ethics around the intention of assessment. I agree that our “success depends not only on the quality of technology, but also on the quality of the pedagogy.”
Reference
Winne, P. H. (2022). Learning analytics for self-regulated learning. The Handbook of Learning Analytics, 96–104. https://doi.org/10.18608/hla22.008