Raise your hand if you’ve ever wondered who’s supposed to wrangle the wild, mysterious force that is AI on today’s campuses. I remember the first time ChatGPT wrote better than I could, and my professor’s eyebrow nearly disappeared into her hairline. Enter the plot twist: Queen’s University just appointed its first Special Advisor on Generative AI—Eleftherios Soleas—a move that feels equal parts bold, overdue, and honestly, a little bit thrilling.
Meet the Navigator: Eleftherios Soleas and the Birth of a New Role
Let’s get one thing straight: when Queen’s University announced the appointment of its first Special Advisor on Generative AI, it wasn’t just about keeping up with the latest tech trend. It was about steering the entire campus through a digital revolution. And at the helm? Eleftherios Soleas—a name you’ll want to remember if you care about the future of AI in higher education.
Now, who exactly is Soleas? He’s not just a tech geek or some faceless administrator. With a PhD from the Queen’s Faculty of Education, Soleas has been shaping minds as an adjunct professor since 2015 and leading as Director of Continuing Professional Development in Health Sciences since 2018. That’s a decade of hands-on experience, blending academic insight with real-world application. His background spans education, policy, and practical leadership: exactly what you’d want in the person stepping into Queen’s first AI advisory role.
So, why create this AI advisory position now? Timing, as always, is everything. Generative AI tools like ChatGPT are popping up everywhere—classrooms, offices, you name it. Queen’s recognized that the wave was coming fast, and instead of waiting to be swept away, they decided to put someone in the captain’s chair. On May 21, 2025, Soleas officially stepped into this two-year role, right as the university is building its approach to AI integration and governance.
But here’s my hot take: Sometimes, it takes a maverick to drag academia into the future. Soleas isn’t just there to write policy; he’s charged with launching the AI Centre of Excellence at Queen’s University. This isn’t just another committee—it’s a cross-disciplinary hub meant to unite experts in pedagogy, law, ethics, and technology. The goal? To ensure that AI is used responsibly, creatively, and in line with the university’s core values of fairness, trust, and academic integrity.
“The human is in the driver seat.” – Eleftherios Soleas
Soleas’s philosophy is clear: AI should enhance, not replace, human judgment and creativity. He’s all about empowering students and staff to question, critique, and refine what AI produces, to look at an AI-generated response and say, “This isn’t accurate, and here’s why.” That’s the kind of critical thinking he wants baked into the AI Centre of Excellence, and it’s why this appointment is such a big deal.
AI in the Real World: What Queen’s Approach Tells Us About College, Classrooms, and Critical Thinking
Let’s be real—AI tools in university classrooms in 2025 are everywhere, but Queen’s University isn’t pretending there’s a one-size-fits-all solution. Instead, the campus is a patchwork of different approaches, with each department setting its own rules about how AI is used in coursework and grading. If you’re picturing a little academic drama, you’re not wrong. Some departments are all-in on innovation, while others are a bit more old school, clinging tightly to tradition and caution.
Take the Department of Political Studies, for example. They’ve drawn a hard line: generative AI like ChatGPT is strictly off-limits unless a course syllabus says otherwise. If a student goes rogue and uses AI without permission, it’s flagged as a potential academic integrity violation. Meanwhile, other departments are leaving the door ajar—maybe not flinging it wide open, but definitely letting in a breeze of experimentation. This variability is Queen’s in a nutshell: no single campus-wide AI policy, just a lot of conversations and, honestly, a bit of confusion as everyone figures things out on the fly.
That’s where Eleftherios Soleas, Queen’s first Special Advisor on Generative AI, comes in. Appointed in May 2025, Soleas is tasked with guiding the university through these messy waters. His job isn’t just setting rules; it’s fostering a culture where AI in higher education is used responsibly and thoughtfully. “The human is in the driver seat,” Soleas insists, and he’s adamant that AI should enhance, not replace, human judgment and creativity.
What does that mean for students and staff? It means being skeptical, not passive. Soleas puts it bluntly:
“We need students to be able to look at an AI-generated response and say, ‘This isn’t accurate—and here’s why.’”
So, critical thinking and academic integrity are at the heart of Queen’s evolving approach. Departments are encouraged to help students question, critique, and refine whatever AI spits out, rather than just accepting it at face value. It’s not about banning AI tools in university classrooms, but about making sure they’re used in ways that align with the university’s values—fairness, trust, and respect for the learning process.
At the end of the day, most students (and, let’s be honest, some staff) are still figuring it out as they go. The only thing that’s clear? The conversation around AI and critical thinking in higher education is just getting started—and it’s not always neat or predictable.
Beyond Hype: Why Responsible AI Isn’t Just ‘Nice to Have’—It’s a Survival Skill Now
Let’s be honest—AI isn’t just a buzzword anymore. At Queen’s, the responsible use of AI in education isn’t some distant ideal; it’s a daily reality that’s reshaping how we learn, teach, and run the university. With generative AI tools like ChatGPT now woven into classrooms and admin offices, the stakes are higher than ever. This isn’t about jumping on the latest tech trend. It’s about making sure that every decision, every grade, every automated process reflects the core values of academic integrity, fairness, transparency, and trust.
When I think about the ethical implications of generative AI, I can’t help but picture a student staring at an AI-generated answer that just doesn’t add up—literally. If an AI says 2+2=5, do we just shrug and move on? Or do we dig deeper? Increasingly, students and staff are becoming part-time detectives, learning to question, critique, and refine what AI produces. That’s not just a skill; it’s a survival tactic in today’s academic landscape.
This is where AI risk management comes into sharp focus. What happens when an AI system grades your paper or processes your admin forms? The margin for error—and the potential for bias or unfairness—means we can’t afford to be complacent. Transparency isn’t just a nice-to-have; it’s non-negotiable. Queen’s approach, led by Eleftherios Soleas, is all about aligning AI policy development in higher education with these non-negotiable values. As Soleas puts it,
“AI should enhance, not replace, human judgment and creativity.”
That’s why Queen’s has chosen a decentralized, consultative process for AI governance. Instead of a one-size-fits-all policy, faculties and departments have the flexibility to address AI in ways that make sense for their disciplines—always under the broader umbrella of university values. The Digital Planning Committee and the soon-to-launch AI Centre of Excellence are key players here, ensuring that every corner of campus is engaged in the conversation about AI ethics and professionalism.
In the end, building a responsible AI strategy is about more than compliance. It’s about creating a culture where everyone—students, staff, and faculty—feels empowered to question, challenge, and improve the technology shaping their world. At Queen’s, responsible AI isn’t just a checkbox. It’s the foundation for trust, innovation, and academic excellence in a rapidly changing digital age.
TL;DR: Queen’s University now has a Special Advisor on Generative AI—Eleftherios Soleas—to guide responsible, ethical, and effective use of AI across campus and launch an AI Centre of Excellence. It’s a human-first, critical-thinking-driven approach designed to prepare Queen’s for the evolving role of AI in higher education.