
A Conversation with Ryan Lufkin, Vice President of Global Academic Strategy at Instructure, by James Scarborough

Artificial intelligence is dramatically reshaping higher education. As institutions grapple with how to integrate AI tools effectively and ethically, many questions remain about the implications for teaching, learning, and academic integrity. To explore these critical issues, I spoke with Ryan Lufkin, Vice President of Global Academic Strategy at Instructure, the company behind the widely used Canvas learning management system.

With over two decades of experience in educational technology, Lufkin has a unique perspective on the evolving role of AI in higher education. He emphasizes the need for institutions to be proactive in developing AI policies and preparing both faculty and students to leverage AI tools responsibly. While acknowledging valid concerns around academic integrity, Lufkin sees tremendous potential for AI to enhance learning experiences and improve student outcomes when implemented thoughtfully.

Lufkin sees AI playing an expanding role across various aspects of higher education, from personalizing learning pathways to streamlining administrative tasks. He envisions AI-powered teaching assistants that can provide 24/7 tutoring support, adaptive assessments that pinpoint knowledge gaps, and intelligent systems that help instructors create more engaging course content. At the same time, he emphasizes that the goal should be enhancing, not replacing, the invaluable human connections at the heart of education.

JS: As AI capabilities expand, how can we ensure higher education maintains its core purpose of developing critical thinking, creativity, and uniquely human skills?

RL: As we look at ways to apply generative AI, we need to ensure our focus remains on solving real, human-driven problems. Saving teachers time and keeping students on track to achieve their academic goals are the highest priorities. We also need to ensure students leverage these innovative tools in ways that enhance their learning rather than avoid it. This will require evolution in the ways we measure and assess skill mastery, and we'll need to prioritize what have traditionally been considered soft skills: critical thinking, creativity, interpersonal communication, networking, and so on. By building these into the skills competency frameworks we measure against when defining student success, we'll enshrine them in the process and ensure they're not lost.

JS: How can institutions ensure AI systems used in education are free from bias and promote inclusive learning environments?

RL: At Instructure we’ve prioritized developing inclusive learning environments by ensuring any application of generative AI meets our three key principles: Intentional, Safe, and Equitable. That means solving those real, human-driven problems, meeting the security, privacy, and accessibility guidelines already in place, and developing tools that don’t require additional cost or unique hardware and remain available to all learners. This lays the foundation for inclusive access for all learners. Upon this foundation we can address concerns about bias, inaccurate results, and hallucinations with human-in-the-loop processes that continually check for these deviations. We can also limit the data sets the LLMs draw on when constructing their responses, which reduces bias and inaccuracies. But ultimately we need to train both educators and students in AI literacy so they understand what these tools are capable of and maintain healthy skepticism about their responses.

JS: As AI tools become more prevalent in higher education, how can institutions ensure they are used ethically and responsibly? What policies or guidelines should be put in place?

RL: It’s important that any exploration of generative AI starts with ensuring that the tools meet the security, privacy, and accessibility guidelines already in place. It’s key to understand which Large Language Models (LLMs) are being applied, where they’re hosted, what regions they’re available in, what data they’re ingesting, how student data and educator IP are being protected, and a host of additional factors.

One of the biggest issues playing out across education is the growing gap between educators and students in the adoption and mastery of AI itself. Prioritizing AI literacy programs for educators and students should be a top priority for education at all levels. That means understanding the basics of how LLMs develop their responses, how to write effective prompts to generate desired outputs, how to evaluate different AI tools to measure their quality and value, and, maybe most importantly, the ethics of how and when to properly leverage AI in course work and in life.

JS: Given Instructure’s position in the educational technology landscape, what role do you see companies like yours playing in shaping AI integration in higher education? How can ed-tech companies responsibly guide this transformation?

RL: I like to say right now we’re in the trust-building phase of AI. It’s incredibly important for software developers like Instructure to help establish that trust through transparency and communication. For example, for every AI-powered feature in Canvas we’ve developed a “nutrition facts” card, akin to what you’d find on a cereal box, to help developers and end users understand more about the LLMs being leveraged. This transparency helps build trust, and it’s something we’re asking of all of our partners developing AI tools that can be integrated into Canvas. All of these nutrition facts cards are accessible through the AI Hub in our Instructure Community.

JS: How might the integration of AI change the role of faculty in higher education? What new skills or approaches will instructors need to develop?

RL: Educators truly are the spark that drives academic success, and AI should be viewed as fuel to make that spark burn brighter: saving educators time by automating mundane administrative tasks, extending their reach outside the classroom on evenings and weekends, and scaling their outreach for very large courses. But to achieve these benefits, educators and administrators need to come together to ensure they have the resources and training to properly leverage AI.

JS: What are the most promising applications of AI in teaching and learning that you see emerging? How might these enhance the student experience?

RL: Ultimately I think AI holds the promise of providing personalized learning in ways we’re just starting to explore. By understanding an individual student’s needs and current level of mastery, AI can customize content and information to help ensure that student’s success. This is difficult for educators, especially in very large classrooms, but with AI we can scale to support this model for the first time. The educator is still the spark, but AI helps sustain that connection like never before.

JS: How can colleges and universities prepare students to leverage AI in their academic work while maintaining academic integrity?

RL: A focus on AI literacy for both educators and students is really the starting point. We make a lot of assumptions about what students do and don’t know about the ethical use of AI; for many young, tech-native students it’s simply another tool to help them study. We have to very clearly articulate our expectations around its acceptable use at the institutional, program, course, and even assignment level.

JS: What changes might be needed in curriculum design and assessment practices to account for students’ use of AI tools?

RL: The key is to develop new and novel assessment methods that can’t easily be gamed with generative AI. In some cases this could mean one-on-one reviews to ensure students understand what they submitted in written assignments; in others it means turning the whole process on its head and having AI generate the final paper for students to review, critique, and expand upon. But it’s key that both teachers and students understand what AI is capable of and have clear definitions of its acceptable use in the classroom, and even in individual assignments. That brings us back to the importance of AI literacy for all involved.

JS: What potential risks or downsides of increased AI usage in higher education should institutions be mindful of? How can these be mitigated?

RL: With every new technology, from the calculator to the internet, there have been concerns about the impact on critical thinking. In actuality, all of these tools have made us smarter and more efficient in a myriad of ways. AI is no different, and it’s here to stay. Ultimately we need to focus on students using AI to enhance learning, not avoid it, so that they leave their academic journey with the skills they need to be effective in the careers of their choice. Alignment with employers and their expectations around AI usage and skill mastery will be important from this point forward.

JS: Looking ahead 5 to 10 years, how do you see AI reshaping the higher education landscape? What major changes do you envision?

RL: I think in the next couple of years we’ll be talking about AI very differently. Much as we talked about HTML5 a few years ago, no one talks about HTML5 today; it simply powers more interactive online experiences in ways we take for granted. The same will be true for generative AI: it will fade into the background and make our lives easier in ways we don’t fully comprehend. What’s clear is that we’re just scratching the surface of what this innovative technology is capable of, and this “trust phase” we find ourselves in is just the first step of a very exciting future.