What to Expect from AI in Class and Beyond
Co-Intelligence: Living and Working with AI
By Ethan Mollick
(Penguin/Random House, 2024)
Reviewed by Sarah Cooper
As a writer, eighth-grade history teacher, and school administrator, I’m as curious as anyone about what will happen when the robots eventually take over (more on that later). But after reading Ethan Mollick’s Co-Intelligence: Living and Working with AI, I’m newly optimistic about the possibilities of AI in education – and trying to live more like a cyborg.
Wharton School professor of management Ethan Mollick burst through the university walls into K-12 education shortly after ChatGPT launched in November 2022, with his One Useful Thing Substack and videos such as AI Required: Teaching in a New World.
I’ve been looking forward to this book because I knew it would be readable and immediately applicable to our work as educators.
Back to living like a cyborg
Mollick argues that we can supercharge our own thinking by taking advantage of large language models, such as ChatGPT and Claude, to “blend machine and person, integrating the two deeply.”
For instance, when Mollick encountered writer’s block while writing Co-Intelligence, he prompted the AI to give feedback in the voices of various readers, some more critical and others more supportive: “You are Ozymandias. You are going to help Ethan Mollick write a book chapter on using AI at work. You speak in a pompous, self-important voice but are very helpful and focused on simplifying things. Here is the chapter so far. Introduce yourself.”
The results are both humorous and helpful, enough that I want to get AI feedback on more of my own creative work – whether that’s creating a final portfolio project for my civics classes, implementing accreditation recommendations for our school, or penning this review.
Through such collaboration with AI, Mollick argues, we discover for ourselves the Jagged Frontier of this innovative technology: what it is best at helping us with, and what currently lies outside its capabilities.
Four Rules for Co-Intelligence
Part 1 of the book has three chapters: Creating Alien Minds, Aligning the Alien, and Four Rules for Co-Intelligence. The first two give an overview of how LLMs work and how to aim for “alignment” with human goals through technology policy and development (we can only hope).
The four rules merit being heavily quoted because they invite us to accept that “we live in a world with AIs, and that means we need to understand how to work with them.”
► Principle 1: “Always invite AI to the table.” By experimenting with AI on all kinds of tasks, you can motivate yourself to do hard things and explore where your own Jagged Frontier lies.
► Principle 2: “Be the human in the loop.” If there’s one theme that runs through this forward-thinking book, it’s that we will continue to need human insight, expertise and discernment as we interact with AI.
► Principle 3: “Treat AI like a person (but tell it what kind of person it is).” This book is replete with prompts like the one for Ozymandias above, and it also reminds us that AI, for whatever reason, can perform better with a little positive encouragement.
► Principle 4: “Assume this is the worst AI you will ever use.” One of my favorite analogies in the book was that all of us are “playing Pac-Man in a world that will soon have PlayStation 6s,” or far beyond that. Any flaws in our AIs today are likely to be fixed soon, and playing with these technologies is the best way to befriend them.
Personalizing AI for Creatives and Educators
Part 2 makes AI more personal, with appealing chapter titles:
- AI as a Person
- AI as a Creative
- AI as a Coworker
- AI as a Tutor
- AI as a Coach
- AI as Our Future
- Epilogue: AI as Us
With the chapter on AI as a Creative, I really started envisioning the possibilities and drawbacks for education. Here and throughout the book Mollick cites gripping studies, including one showing that professionals – marketers, data analysts, grant writers, and consultants – who use AI for creative work do it significantly faster and better, according to their human colleagues.
Where does this leave us as teachers? We must accept that our students will want to push The Button, as Mollick describes – the LLM click that creates an essay or solves a problem set. Or, for teachers, writes a letter of recommendation for a student. In fact, Mollick asks the open question of whether, if AI can write in our voice and be more persuasive, we are disadvantaging our students by not using AI for such letters.
Two chapters in the book, AI as a Tutor and AI as a Coach, directly address education, with a plethora of encouraging and cautionary tales.
You’re probably curious about plagiarism, and Mollick addresses the Homework Apocalypse question, as he calls it, head on. His response is similar to his cyborg proposal, in that he requires students in both his undergrad and MBA courses at the University of Pennsylvania to use AI, with different criteria for different projects. For one, students use AI with abandon but have to fact-check the results scrupulously; for another, students query AI about “10 ways your project could fail and a vision of success” for a seemingly impossible-to-complete business proposal.
One relief about the education chapters is that they dispelled easy myths that have arisen over the past year about what we should be teaching about AI – such as that all our students should become crackerjack prompt writers. Instead, “rather than distorting our education system around learning to work with AI via prompt engineering, we need to focus on teaching students to be the humans in the loop, bringing their own expertise to bear on problems.”
And Mollick, like so many educators, is a huge fan of AI as a tutor, especially if teachers can keep track of student learning online and thus differentiate based on what individuals need, as with Khan Academy’s Khanmigo.
He also puts forth an inspiring theoretical example of deliberate practice for two architecture students. One meets once a week with an experienced architect to go over plans. For the other, “Each time he creates a design, the AI provides instantaneous feedback,” acting as “an ever-present mentor” and allowing him to progress much faster.
Happily, Mollick also believes that a foundation in basic skills and knowledge is even more important than ever for our students and for professionals, because “the path to expertise requires a grounding in facts” to see patterns, errors and the big picture.
One question I wanted answered, which was probably beyond the scope of the book, is how we determine what “mastery” means at each grade or skill level for our students. When is it appropriate for students to write an essay entirely themselves, whether in class by hand or on a Google Doc with the writing history attached, and when can they move on to getting feedback from AI? I imagine Mollick would say immediately, at every level.
If that’s the case, then how do we make sure that students are thinking for themselves and not simply pressing The Button? Along with the excitement that we as educators generate by encouraging students to experiment with AI, having guidelines for ‘what comes when’ is crucial to make sure that students are not simply learning how to make better prompts that do all their work for them.
How About That Robot Apocalypse?
In the last chapter, Mollick lays out four possibilities for our continued coexistence with AI that are as clear as any I’ve seen, beginning with the observation that “This book may seem as if it is full of science fiction, but everything I am describing has already happened.” So what comes next?
► Scenario 1: As Good as It Gets. AI stagnates because of technological hurdles, government policy that restricts development, or other unforeseen issues. Mollick does not think this is likely.
► Scenario 2: Slow Growth. This future would give us time to respond to AI’s development, as we have with many other technologies in the past century-plus, allowing “a measured pace” with “largely positive” impacts.
► Scenario 3: Exponential Growth. Mollick points out that “Moore’s Law, which has seen the processing capability of computer chips double roughly every two years, has been true for fifty years.” AI could well morph at superspeed, leading to powerful bad actors, magnificent gains in human quality of life, or both.
► Scenario 4: The Machine God. Here comes the singularity, or the point at which computers hit artificial general intelligence (AGI) that outstrips human capabilities. “In the fourth scenario, human supremacy ends,” for better or for worse.
Phew! Just writing all this feels like baking a loaf of bread with way too much yeast. Will the bread rise only to fall? Will it explode, leaving a doughy residue on the oven walls? Will the oven cease to work entirely?
Ultimately, Co-Intelligence was equal parts compelling and frightening. I dashed through it in three days, wanting to know how to apply its insights to the work we all do. If you’ve been hesitating and want to read just one book to animate you about the possibilities of AI, this is it. If you’re already immersed in the new challenges of AI, this is also it. And I would read it soon, before one of Mollick’s later scenarios comes true!
Sarah Cooper teaches eighth-grade U.S. History and is Associate Head of School at Flintridge Prep in La Cañada, California, where she has also taught English Language Arts. Sarah is the author of Making History Mine (Stenhouse, 2009) and Creating Citizens: Teaching Civics and Current Events in the History Classroom (Routledge, 2017). She presents at conferences and writes for a variety of educational sites. You can find all of Sarah’s writing at sarahjcooper.com.