
Reflections from Web Summit: On Stage and Off


I've been processing my time at Web Summit in Lisbon this week, and what keeps coming back to me is more than the scale of the event or even the stages themselves. It's the conversations that happened in the margins, the questions that lingered after the sessions ended, and the realization that we're all grappling with the same fundamental tension: how do we build AI systems that serve humanity, not just markets?

I was fortunate to take the stage for a panel on "Demystifying Responsible AI: A Guide for Leaders," where things got real. Sharing the stage with brilliant thinkers, we moved past theory into the practical work of ethical, sustainable AI deployment.

The private roundtable I moderated during the EcoSystem Summit Offsite made this even more evident. Sitting with policymakers and tech executives, trying to map out how we build resilient, inclusive tech ecosystems worldwide, the conversation kept returning to one truth: regulations are coming. It's not a question of if, but when and how.

In the meantime, the work falls to us. The practical application of responsible AI relies on action long before regulation arrives. Incentives that encourage organizations to adopt these practices now help raise the baseline ahead of any mandated requirements. Another panelist, Angeli Patel, executive director of the Berkeley Center for Law and Business, highlighted some of the fascinating work UC Berkeley is doing in this area, including developing resources and frameworks that make responsible AI accessible rather than abstract. It's proof that the move from theory to practice is entirely achievable when the guidance is grounded in real use.

Across every discussion that week, I saw a change I’ve been hoping to see for a long time. The industry is starting to acknowledge that the technology matters far less than the orientation and values that guide its use. The focus on keeping people centered, on evaluating AI systems through the lens of human impact rather than capability alone, and on making that evaluation an integral part of deployment (rather than something applied afterward), created a sense of alignment that felt overdue and deeply encouraging.

Europe's approach, particularly around the AI Act and the broader regulatory conversation that's underway, offers a preview of what's coming globally. That direction reinforces a point that surfaced repeatedly throughout the week: the growing recognition that humanity and trust form the basis for innovation that endures. I left Lisbon with more questions than answers, which feels right. I also left with renewed conviction that this work, the complex task of building AI systems that reflect our values, isn't someone else's responsibility. It sits with all of us.

Author
Wendy Gonzalez
Director
