This year, I had the privilege of organizing Global Azure Quebec 2025 and it was without a doubt one of the most energizing, rewarding, and thought-provoking events I’ve ever been part of.

What started as a community gathering has grown into something truly special. We welcomed cloud engineers, architects, developers, students, and security professionals from all across Quebec (and beyond), all coming together to share knowledge, connect with peers, and dive deep into the future of Azure, AI, and cloud security.

A Community-Driven Event with Real Impact

Organizing this year’s event was no small feat—but every late-night planning call, every speaker coordination thread, every sponsorship pitch… it all paid off. Seeing a packed room full of curious minds, people asking the hard questions, and genuine hallway conversations made it worth every second.

Our sessions spanned everything from cloud-native app development to AI tooling, governance, platform engineering, and cybersecurity. The local talent we had on stage was simply incredible. I’m proud we could give them a platform—and equally proud of the strong turnout and engagement from the audience.

Alongside organizing, I also had the chance to present one of my current research interests: AI Red Teaming.

My session, titled « Security Risks for Generative AI », explored how we can build autonomous, LLM-powered agents to simulate adversarial behavior and proactively test the security of GenAI workloads.

In short, the AI Red Teaming Agent is designed to:

  • Simulate prompt injection and data leakage scenarios
  • Stress test model outputs for toxicity, hallucination, and jailbreaks
  • Integrate into security pipelines for continuous red teaming
  • Generate structured findings and map them to frameworks like MITRE ATLAS

The idea is simple but powerful: if AI is going to be used to build things, it should also be used to break them ethically, of course.

The feedback was amazing. Many attendees were intrigued (and maybe a little concerned) by the offensive potential of AI. But more importantly, there was a strong appetite for building defensible, auditable, and secure GenAI pipelines.

Chargeur En cours de chargement…
Logo EAD Cela prend trop de temps ?

Recharger Recharger le document
| Ouvert Ouvrir dans un nouvel onglet

Looking Ahead

Global Azure Quebec 2025 confirmed what I already knew: our community is ready for the next phase of cloud innovation—but it must be built with security in mind.

As we embrace AI, we also need to invest in the offensive side of security research to understand our weaknesses before attackers do. That’s where AI red teaming comes in. And that’s the conversation I’ll keep pushing forward.

To everyone who attended, supported, or helped behind the scenes—thank you. I can’t wait to see where we take this next.

Until then, stay curious, stay secure.

Maxime.