AI for Diagnostics & Decision Making… a cautionary tale.

Respond to the following:

  • Given what you understand about how AI can be used for learning diagnostics and decision making in educational spaces, how do you think educators and administrative staff should employ it and why?

  • What are potential practical pitfalls with relying on AI for these tasks?

  • What are potential ethical challenges with relying on AI for these tasks?

Let’s talk about GRADE, UT Austin’s Graduate Admissions Evaluator. GRADE was developed as a decision-making algorithm for the University of Texas computer science program. On paper, it seems pretty genius. Let’s teach the machine to evaluate applicants the way we have in the past. It will look for the keywords and criteria we look for, trained on our historical data sets. It will automate the ranking of submissions so we can save time and only review the best fits as a committee. We can even call it equitable because it does not consider race or gender… can we insert the '90s NBC “The More You Know” jingle here?

What they built was a machine that served as an echo chamber for what “wins,” never evolving, never doing better. They claimed humans were in the loop because each submission was still evaluated by one human committee member, but it is naive to suggest that evaluation would not be influenced by the numbers the computer produced. Ignoring human traits like gender and race (and so many other points that define us) does not create an equitable playing field. Nor does it grow the computer science program, or the world those programmers will go on to build.

These programmers believed they could create a system of 0s and 1s that evaluated human potential without conscious or unconscious bias. This is how a lot of people think about artificial intelligence. We get so hyped up about the efficiencies and the promise of a system sans bias that we overlook the very obvious cracks in the wall. I would like to think I am better and do better every day, week, and year of my life. I hope I continue to look back on my past beliefs and disagree with them, even in some small, nuanced way. I hope that I am always at least a little bit better. And I would hope that the decision-makers for such a prestigious program, one that yields a workforce defining the world around us, are a little better every year, too. Fortunately, GRADE was discontinued in 2020. It is inevitable that tech tools will arrive wrapped in hopes and promises; what matters is that we continue to value the human experience more than a rote evaluation, and that we remain willing to admit we can do better and should.

Reference

Burke, L. (2020, December 14). U of Texas will stop using controversial algorithm to evaluate Ph.D. applicants. Inside Higher Ed. https://www.insidehighered.com/admissions/article/2020/12/14/u-texas-will-stop-using-controversial-algorithm-evaluate-phd
