A Conversation with José Antonio Bowen, Educator, Musician, and Scholar, by James Scarborough
July 24, 2024
If you haven’t already tried it out, you’ve at least heard of the potential for AI to enhance our educational practices. It can provide students with personalized feedback. It can streamline our administrative tasks. And it can facilitate creative projects for both faculty and students.
At the same time, we need to examine its ethical and practical implications. Top of the list is academic integrity. As AI tools become more sophisticated - and students become more adept at their use - old-school ways of detecting plagiarism and assessing student work are becoming obsolete. It's time to develop new frameworks for evaluating our students' work that acknowledge AI-generated contributions, ensuring that assessments remain fair and rigorous.
It's not just about academic integrity. AI technologies can also perpetuate biases embedded in their training data, which can compound existing educational disparities. That's why we need to incorporate AI literacy into curricula, equipping students with the skills to critically evaluate AI outputs and understand the limitations and ethical considerations of these tools.
The big question is, HOW? For starters, consider the work of José Antonio Bowen. He is a renowned educator, scholar, and musician (of which more later). His work embodies the way AI can reshape teaching and learning. He shows what's possible.
Pedagogically he is peerless. He has taught at Stanford, Georgetown, and the University of Southampton (UK). He served as a dean at Miami University and SMU and as President of Goucher College (voted a Top 10 Most Innovative College under his leadership). His book Teaching Naked (2012) won the Ness Award for Best Book on Higher Education from the Association of American Colleges and Universities. Other books include Teaching Naked Techniques: A Practical Guide to Designing Better Classes, with C. Edward Watson (2017), and Teaching Change: How to Develop Independent Thinkers Using Relationships, Resilience, and Reflection (2021). His latest book, of great interest here, is Teaching with AI: A Practical Guide to a New Era of Human Learning (2024), co-authored with Watson, which explores the practical applications of AI in education.
Back to the HOW. Creating best practices for AI in teaching and learning requires lateral thinking. This is where Bowen's musical career comes in. He brings a deep background in music, both as a performer and a scholar, to the discussion. This amplifies his impact on the role of AI in education. He has performed alongside jazz legends like Stan Getz and Dizzy Gillespie. He was nominated for a Pulitzer Prize in Music. And, in addition to the books noted above, he edited The Cambridge Companion to Conducting (2003) and a 6-CD set, Jazz: The Smithsonian Anthology (2011).
Bowen's diverse experiences in music, academia, and scholarship underscore the importance of critical thinking and creativity in our evolving educational landscape. His TED talk on Beethoven as Bill Gates, for example, exemplifies the innovative thinking required to navigate AI's role in education. Bowen advocates for an interdisciplinary approach, integrating insights from chemistry, music, and the humanities to foster a holistic learning environment.
AI promises significant benefits for education, many of them not yet known. Its integration into teaching requires a careful, balanced, and broad-based approach. That's what follows below: a conversation with José Antonio Bowen that explores how his accomplishments as an educator, scholar, and, yes, musician, reveal the potential for AI to enrich educational experiences while emphasizing the need for thoughtful and ethical implementation.
JS: In what ways has your experience with performing, composing, and conducting shaped your approach to leveraging AI tools in academic settings?
JAB: I'm a performer and an improviser. I am always looking for connections and new ideas that could be creatively applied to what I am doing.
JS: Can you share any specific instances where your background in music and the arts informed your strategies for using AI to enhance creative projects in education?
JAB: Creatives wrestle daily with a number of issues.
First, we know that a lot has already been done: even facing a blank canvas, we know that what we produce is likely to be derivative. You are what you hear, read, or see. Creativity is largely about new combinations of previous ideas. In this sense, AI is very creative: it combines old ideas with ease. Indeed, hallucinations are generally a problem when using an AI, but a human who hallucinates (and imagines things that are not real) is an artist.
Second, humans are socially inhibited. That is not a bad thing; you don't really want unvarnished commentary on your clothes every day, but it also means we second-guess our ideas. Will others think this is obvious or stupid? AI is not limited by this.
Finally, creativity relies on quantity. We get quality from quantity, and again AI has a creative advantage here.
JS: What motivated you to explore the intersection of AI and education? How has your perspective evolved?
JAB: Put together, those three creative advantages mean AI can make all humans more creative. We all get stuck: I need a better assignment here or a different way to introduce this topic. I started asking AI to give me suggestions, lots of them, and found it helped. Not every idea is good, but I only need one.
JS: What are the most significant ethical considerations educators must address when integrating AI into their teaching practices?
JAB: There are plenty, but the first is probably access: all of your students need to have access to the best tools. Those who don't will have a less secure future, so that feels like an ethical obligation for all teachers. But we also need to teach them to use AI appropriately: When is it ok to use AI? Do you need to cite AI? How? And we need them to understand that, as the human, you are ultimately responsible.
JS: How can AI tools be utilized to enhance personalized learning experiences without compromising academic integrity?
JAB: I am not sure those things are opposed. AI can offer customized assignments and feedback. In and of itself, there is no threat to academic integrity there. Yes, students could use customized feedback to improve, but is that cheating? Does it matter if the source of improvement is the writing center or a dictionary (which is also the intellectual labor of others and is usually not cited)? There is certainly an issue of integrity when you pass off ANYTHING that you did not create as yours. AI has made that easier, but we should treat these as two separate issues. We can use AI to help students learn. We also need to consider how AI creates new temptations. A car is both useful and sometimes a temptation not to walk and exercise, but we don't typically think of a car in terms of this paradox.
JS: What strategies can educators employ to ensure that AI technologies do not worsen existing educational inequalities?
JAB: We need to ensure that everyone has access to tools, but also to instruction in how to use AI. At the moment, wealthier students are paying for better AI and are more likely to be using it.
JS: How should academic institutions approach the challenge of developing standardized guidelines for citing AI-generated content in scholarly work?
JAB: I am not sure you need standardized guidelines. Different disciplines, classes, situations, and assignments will each need careful thought as to whether this is a good place for AI.
JS: What role do you see AI playing in the future of creative disciplines, such as writing, design, and music within educational contexts?
JAB: We are already seeing rapid adoption of AI for co-creation. For a long while to come, the best humans will exceed AI in most things, but AI is already better than average in most things. Your C students are going to have a harder life unless you raise standards. Everyone is going to need to be an expert at something and be able to bring some special creativity, knowledge, or discernment to all work. I think that is true in all fields.
JS: How can educators balance the benefits of AI-assisted grading with the need to maintain human oversight and critical judgment?
JAB: It depends a lot on the situation. If you have 600 students, then you might be better off allowing AI to provide faster, better, and more consistent feedback to students, which might allow you more time for human relationships. In many situations, grading can be shared: allow AI to do the rote part, just as we allowed Scantron to grade the multiple choice while we graded the essays. Perhaps AI grades against a checklist (did this lab report have the following features?), and then I only need to look at a few sections.
The bigger problem is how we instill critical judgment in our students. How do we teach them to edit if they don't practice as much writing?
JS: What are the potential risks of over-reliance on AI in educational settings? How can these be mitigated?
JAB: In many ways faculty are the ideal users of AI: we are already experts and so we know if an AI response is bogus. We can spot the pitfalls. For students, an answer that looks good might be a trap. We have experienced this already with the internet: no shortage of garbage there either. We need to teach skepticism and digital literacy at every level.
(Other resources: CSU's AI site and José's AI tools and prompts page)