Ed tech must reads: column #85

First published in Campus Morning Mail 6th June, 2023

Before I kick off the final instalment of this column (in this place), I’d like to quickly thank Stephen Matchett for his tireless work on CMM and acknowledge the significant contribution that he has made in informing and enlightening the HE community. I am very grateful to have had the opportunity to be part of it.

I will be carrying on the Ed/Tech must-reads column from next week as a free Substack newsletter, so please sign up for uninterrupted service. Now on with the show.

GPT detectors are biased against non-native English writers from arXiv

As HE leaders continue to search for the academic integrity silver bullet and vendors continue to promise the world, the news from the world of GenAI detection tools remains bleak. This study from five Stanford computing academics isn’t peer-reviewed, but it makes a strong case that detection tools consistently generate false positives when evaluating the work of non-native English speakers. In addition, the authors found that they were able to largely bypass detectors through iterative prompting, with requests such as “elevate the provided text by employing literary language”.

Student Perceptions of AI-Generated Avatars in Teaching Business Ethics: We Might not be Impressed from Postdigital Science and Education

Among the ‘fun’ advancements of our current age of GenAI has been the ability to generate video and audio of realistic human avatars from text. Vallis, Wilson, Gozman and Buchanan (USyd) explored student perceptions of the use of these avatars in a redesigned Business Ethics unit. They found that students were far more ambivalent than expected, though interested in the potential of customising their own digital lecturer. Some students weren’t aware that avatars had been used until it was pointed out, which itself sparked further thinking about ethics. The fact that the avatars were too ‘smooth’, lacking the usual fillers, stumbles and digressions, was noted as a downside.

Prototypes-in-progress for bi(nary)-curious university educators and researchers from Safe-to-fail AI

For those keen to get their hands (virtually) dirty, this site from Armin Alimardani (UoW) and Emma Jane (UNSW) offers some usable prototypes of GenAI tools built specifically for use in Australian higher education. These include student quiz feedback, a course outline FAQ, conversational AI and a speech recognition tool.

ASCII art by chatbot from AI weirdness

And finally, in reassuring news from the AI trenches, this collection of bizarre attempts at ASCII art (images made up of letters, numbers and other characters) from ChatGPT shows that some areas are still safe. A giraffe that looks more like an elongated human skull and a running uniform that looks like the outline of a heart are highlights for me.

And that’s it for me. I hope to see you next week on the Substack.