Reflection Topic: How comfortable are you being tracked by AI? 👀🧿

LTEC, 5703 | Respond to the following:

  • Given what you understand about how AI can be used to track your gaze, your posture, and other indicators of attention, how comfortable are you with giving power to managers/admins to use these video tools with you as an employee?

  • Given how you feel, how comfortable are you using these tools with your learners?

I think the way this week’s questions have been juxtaposed is very telling, and it mirrors many conversations I have had with educators over the past year. 

“Use it on me? Uh, no thanks. My sentiment is more than the data you collect could ever represent.”


“Use it on my students? Oh, yeah, that’s a great idea. I could do so much to help my students and better provide the supports they need.”

At the core of this conversation is the need for transparency. From the technology used to the subsequent action taken based on the data collected, there are existing models that can guide the implementation and use of AI for learning analytics. The first is the Technology Acceptance Model, introduced in 1986, whose iterations are now being adapted to frame AI technology usage [1]. In collecting learning analytics, we also need to adhere tightly to the widely accepted ethical principles for learning analytics: (1) privacy; (2) data ownership and control; (3) transparency; (4) consent; (5) anonymity; (6) non-maleficence and beneficence; (7) data management and security; and (8) access [2]. Together, these offer a researched framework for implementation and sustained use. My point is, we feel like these are completely uncharted waters, but the reality is that we have a place to start implementing AI responsibly and effectively.


But let’s talk practically… what does this really mean? Top-down mandates will leave organizations with unwilling participants who (whether knowingly or unknowingly) deliver skewed data. Whether from leadership to the employee or from the educator to the student, the purpose of data collection and its intended use have to be transparent, purposeful, and mutually agreeable. At the end of the day, it is about clearly establishing and communicating the why and how: why are we collecting this data, why is it important, how will leadership respond, and how will those being observed be expected to respond? Do we all understand this, and do we all agree on this? Simple, yet one of the most difficult things to successfully accomplish.


References:

  1. Baroni, I., Re Calegari, G., Scandolari, D., & Celino, I. (2022). AI-TAM: a model to investigate user acceptance and collaborative intention in human-in-the-loop AI applications. Human Computation, 9(1), 1-21. https://doi.org/10.15346/hc.v9i1.134

  2. Corrin, L., Kennedy, G., French, S., Buckingham Shum, S., Kitto, K., Pardo, A., West, D., Mirriahi, N., & Colvin, C. (2019). The ethics of learning analytics in Australian higher education: A discussion paper. https://melbournecshe.unimelb.edu.au/research/research-projects/edutech/the-ethical-use-of-learning-analytics
