You’ve already heard them, the slightly panicked arguments for why we should be using generative AI and teaching our students to use it. The first iteration is this:
If you don't use AI, you will be replaced by AI.
I certainly agree that it’s important to know how to use new technologies, especially as they become increasingly integrated into workplaces across a range of industries and fields. But this argument never rang true to me as the right justification for using AI.
Then the idea was refined, and more recently I’ve been encountering this argument a lot:
If you don’t know how to use AI, you will be replaced by someone who knows how to use AI.
This is slightly more convincing. But only just. Still, it keeps getting repeated, and I am hearing it from more and more people. I use AI (albeit in limited ways), and I encourage my students to use it (also in limited ways). But I also believe strongly that we need to avoid farming out important cognitive tasks to AI. Many other writers have made the case that over-reliance on AI will have negative effects on cognitive development. (See Brian Klaas’s article for a particularly compelling argument.)
So I have been troubled by the frequency with which people repeat the idea that you have to use AI in order to avoid being replaced by it. For one thing, I don’t believe that humans are replaceable. More importantly, I don’t believe that this is the correct justification for using AI.
Lately I’ve been thinking that a better justification for using AI is this:
If you don’t know how to use AI, you won’t be able to make the case for when not to use it.
For all of you out there who, like me, have serious reservations about using AI, please consider this. Learn how to use it. Get to know its strengths and its shortcomings. And get comfortable talking about these things with your colleagues and your students.
If you don’t know how to use AI, you won’t be able to shape the conversation about how to use AI. And, crucially, you won’t be able to make the case for when not to use it.