From Vibe-Coding to Production: What I Built at Google DevFest Toronto 2025

"Agents are in the driving seat now."
That line stuck with me from DevFest Toronto this past Saturday. But for me, it wasn't just a keynote insight; it was the foundation of everything I built in the hands-on workshops.
If you weren't at the event, here's what you need to know: the paradigm shift isn't coming. It's here. And I want to walk you through what I experienced, because it fundamentally changes how we think about building on cloud infrastructure.
The Lab: Building With AI the Right Way
I committed to Track 2 of DevFest, the hands-on workshop track: "Build with AI using Google Cloud." This wasn't theoretical. We weren't sitting in lecture halls. We were shipping real features. Working features.
The workshop split into two concurrent lab experiences, and I dove into both:
Lab 1: Vibe-Coding a Gmail Add-on with Gemini CLI
Here's where it gets interesting.
We built a fully functional Gmail add-on from scratch using Gemini CLI, Google's open-source AI agent that brings Gemini directly into your terminal. No boilerplate. No scaffolding code. Just describe what you want, and the AI builds it.
The workflow:
- Describe what you want (in plain English)
- Gemini CLI executes it (using MCP servers for tooling integration)
- You iterate (let Gemini fix errors as they surface)
- Deploy the result to Google Workspace
This is not how we've built software for the last 20 years.
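The loop above is simple enough to sketch. This is purely illustrative: none of these names come from Gemini CLI itself, and `run_agent` stands in for whatever the tool actually does under the hood.

```python
# Hypothetical sketch of the describe -> execute -> iterate workflow.
# `run_agent` is a stand-in for the AI agent, not a real Gemini CLI API.

def vibe_code(prompt, run_agent, max_iterations=5):
    """Feed a plain-English prompt to an agent, then let it fix its own errors."""
    result = run_agent(prompt)
    for _ in range(max_iterations):
        if not result.get("errors"):
            return result  # clean build: ready to deploy
        # Iterate: hand the errors back and let the agent repair its own output
        result = run_agent(f"Fix these errors: {result['errors']}")
    raise RuntimeError("Agent could not converge on a working build")
```

The human stays in the loop at the prompt level; the agent owns the edit-compile-fix cycle.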
What we shipped:
Step 1: A Gmail add-on that displays AI-generated images using the Cat-as-a-Service API. Simple enough.
Step 2: We advanced to custom image generation using Vertex AI's Gemini 2.5 Flash Image model. Not random cats: intelligent, contextual image generation.
Step 3: The image becomes dynamic. When you open an email, it generates a speech bubble reading "[sender name] rocks!", with the actual sender's name inserted. The personalization wasn't hardcoded. It was intelligent context extraction and generation.
Step 4: Extensibility. We added a dropdown menu letting users choose different animal types. Not a hacky modal. Not a series of buttons. Clean, intuitive UX, built by the agent, refined through iteration.
The kicker? I didn't write a single line of HTML/CSS/JavaScript manually. Gemini CLI handled all of it. I focused on what the user needed and why. The AI translated intent into implementation.
This is the "vibe-coding" revolution people have been talking about. Except it actually works.
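Step 3's personalization, for instance, boils down to extracting the sender from the email header and composing an image prompt around it. A minimal sketch (the actual add-on code was generated by Gemini CLI in Apps Script; this just illustrates the idea):

```python
import email.utils

def speech_bubble_prompt(from_header, animal="cat"):
    """Build an image-generation prompt from a raw 'From:' header.

    Sketch only: illustrates the context extraction behind Step 3,
    not the code the agent actually generated.
    """
    name, address = email.utils.parseaddr(from_header)
    sender = name or address.split("@")[0]  # fall back to the local part
    return f"A {animal} holding a speech bubble that says '{sender} rocks!'"
```

The `animal` parameter is where Step 4's dropdown would plug in.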
Lab 2: Enterprise-Scale AI with Gemini Enterprise
While Lab 1 showed me how to build agentic applications, Lab 2 showed me the full picture: how AI agents manage enterprise data at scale.
We deployed a Gemini Enterprise application and connected three data sources:
- Google Drive (documents, CSVs, images)
- Cloud Storage (unstructured data: PDFs, text files)
- Google Calendar (temporal context, scheduled meetings)
Then we queried it: "What meetings are on my agenda today? What do I need prepped for them?"
The agent didn't just return calendar entries. It:
- Scanned the calendar
- Retrieved context from connected documents
- Extracted relevant information (sales figures, customer feedback, project status)
- Generated actionable prep materials in seconds
This isn't automation. This is reasoning on enterprise data.
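The four steps above are a retrieve-then-reason pipeline. As a rough sketch (all names hypothetical; the real system does this through Gemini Enterprise's connectors and reasoning engine, not keyword matching):

```python
def prep_for_meetings(calendar, documents, summarize):
    """Illustrative sketch of the flow above: scan the calendar, pull
    connected documents relevant to each meeting's topic, and hand the
    context to a reasoning step. Keyword matching stands in for retrieval."""
    briefs = []
    for meeting in calendar:
        topic = meeting["topic"]
        # Retrieve context from connected sources (Drive, Cloud Storage, ...)
        context = [d for d in documents if topic.lower() in d.lower()]
        briefs.append({"meeting": meeting["title"],
                       "prep": summarize(topic, context)})
    return briefs
```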
We also tested the Deep Research agent, an agentic feature that can autonomously browse hundreds of websites, synthesize findings, and generate multi-page reports. In minutes.
And NotebookLM, document-grounded analysis where the AI only answers questions using content you provided. No hallucinations. No inventing data. Just deep thinking on your documents.
The Paradigm Shift: Why This Matters for Infrastructure
Three themes from the keynotes crystallized during the workshops:
1. "Harness, Don't Replace"
Gemini CLI taught me this viscerally. The agent doesn't replace you; it executes what you specify. You remain the architect. The agent is the builder.
The advice was simple but profound: "Crawl before you walk before you run."
- Start small. Get comfortable. Build confidence.
- Then iterate. Refine. Optimize.
- Finally, scale.
This reframes how we should onboard agentic systems into our infrastructure. It's not "deploy agents everywhere." It's "learn to think in agents, then build systematically."
2. "Backseat Drivers Are Out"
The best analogy of the day: LLMs are like backseat drivers. They talk. But AI Agents are in the driving seat. They execute.
This distinction is architectural, not philosophical.
Traditional LLMs (GPT-3 era thinking) were conversational. They discussed solutions. They were advisory. Useful, but passive.
AI Agents are different. They:
- Plan (reason about multiple steps)
- Execute (call tools, APIs, run code)
- Adapt (adjust based on feedback)
- Coordinate (work with other agents)
The infrastructure implication is massive: we're shifting from monolithic systems to specialized, headless agents built on microservices.
Each agent owns one responsibility. They coordinate through Agent-to-Agent (A2A) protocols. They integrate external tools and systems via Model Context Protocol (MCP) — a new open standard for agent tooling.
This is microservices architecture, evolved for intelligence.
3. "AI Is Now a Foundational Pace Layer"
AI isn't a feature. It's not a framework you bolt on. It's becoming foundational infrastructure, like APIs were 15 years ago, or databases before that.
Think about it: Canada announced a $2B Sovereign Compute Strategy specifically because of AI. Not because of any single company. Because of the foundational shift.
When infrastructure-level investments are being made at the government level, you know the pace layer has changed.
The Technical Stack I'm Excited About
If you're building cloud infrastructure or designing intelligent systems, here's the toolkit that actually matters right now:
- Gemini CLI: Open-source AI agent. Eliminates boilerplate. Makes vibe-coding real.
- MCP (Model Context Protocol): The standard for connecting agents to tools. Google Cloud Platform, databases, APIs, custom services, all become agent-accessible through MCP servers.
- Vertex AI: Production-grade model serving. Gemini 2.5 Flash for reasoning. Gemini 2.5 Flash Image for generation. Built for scale.
- Google Workspace Add-ons: Intelligent features embedded where people actually work. Gmail, Sheets, Docs, Calendar. Not external tools. Native integration.
- Gemini Enterprise: Multi-agent reasoning on enterprise data. Search, summarization, task automation, data analysis, reporting, all orchestrated through a single reasoning engine.
- NotebookLM: Document-grounded analysis. When you need the AI to think deeply but only using specific sources. No hallucinations.
What's revolutionary: These tools work together. You're not juggling 15 different platforms. It's orchestrated. Coherent. Built to compose.
For Cloud Solutions Architects & DevOps Engineers
If you're building infrastructure, this is directly relevant:
Pattern 1: Agents as Microservices
Traditional microservices are functions/tasks. Agentic services are reasoners. They take ambiguous input, determine strategy, execute, and adapt.
This changes how you design service contracts. From RPC-style calls to "here's my goal, figure out how to achieve it."
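The contract shift is easiest to see side by side. A hypothetical contrast (field names are mine, not any framework's):

```python
from dataclasses import dataclass, field

# Hypothetical contrast between an RPC-style contract and a goal-style one.

@dataclass
class RpcRequest:            # traditional microservice: caller picks the steps
    method: str              # e.g. "resize_instance"
    params: dict = field(default_factory=dict)

@dataclass
class GoalRequest:           # agentic service: caller states the outcome
    goal: str                # e.g. "keep p99 latency under 200ms this weekend"
    constraints: list = field(default_factory=list)
    budget: float = 0.0      # how much autonomy/spend the agent is allowed
```

The second contract forces you to design for constraints and budgets, because the agent, not the caller, decides the steps.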
Pattern 2: MCP as the New Integration Layer
For decades, we've used REST APIs as the contract between systems. MCP is becoming the contract between agents and tools.
Infrastructure teams need to think: "How do I expose my systems as MCP servers?"
This becomes your integration strategy.
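To make the idea concrete, here's the shape of exposing a system as agent-callable tools. This mimics the pattern only; it is not the real MCP SDK, which ships its own server libraries, decorators, and JSON-RPC transport.

```python
# Illustrative only: mimics the *shape* of an MCP-style tool server.
# Not the actual MCP SDK or wire protocol.

class ToolServer:
    def __init__(self, name):
        self.name = name
        self.tools = {}

    def tool(self, fn):
        """Register a function, exposing its name and docstring to agents."""
        self.tools[fn.__name__] = fn
        return fn

    def describe(self):
        # Agents discover capabilities before calling them
        return {n: (f.__doc__ or "").strip() for n, f in self.tools.items()}

    def call(self, name, **kwargs):
        return self.tools[name](**kwargs)

server = ToolServer("inventory")

@server.tool
def stock_level(sku: str) -> int:
    """Return current stock for a SKU (demo data)."""
    return {"A-100": 42}.get(sku, 0)
```

The key design point survives the simplification: tools are discoverable (via `describe`) before they are invoked, which is what lets an agent plan against your system.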
Pattern 3: Observability Transforms
When your system is agents reasoning about data, traditional metrics shift. Latency and throughput still matter. But so does reasoning quality and decision correctness.
You need to log agent decision chains. Understand why the agent took action X instead of Y. Validate that multi-step reasoning actually reached the right conclusion.
This is a new observability paradigm.
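What does logging a decision chain look like? A minimal sketch (names and schema are mine, assumed for illustration):

```python
import json, time

class DecisionTrace:
    """Sketch of agent observability: record each reasoning step so you can
    answer 'why did the agent take action X instead of Y?' after the fact."""

    def __init__(self, agent, goal):
        self.record = {"agent": agent, "goal": goal, "steps": []}

    def step(self, thought, action, outcome):
        self.record["steps"].append({
            "ts": time.time(),     # latency still matters...
            "thought": thought,    # ...but so does the reasoning itself
            "action": action,
            "outcome": outcome,
        })

    def to_json(self):
        return json.dumps(self.record)
```

Shipping traces like these to your existing log pipeline is one plausible starting point; validating that the chain reached the *right* conclusion is the harder, newer problem.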
Pattern 4: Multi-Cloud Gets Easier (or Harder)
Since agents abstract the underlying cloud, orchestrating Azure + AWS + GCP should become architectural practice, not engineering gymnastics.
But it also means vendor lock-in at the agent level. Be intentional about which agent framework you choose and why.
What I Took Away
The workshops proved something fundamental: You don't need to be a deep learning engineer to build intelligent systems anymore.
You need to understand:
- Architecture (how do components fit together?)
- Orchestration (how do agents coordinate?)
- Integration (how do agents access your data and tools?)
- Observability (how do you debug agent reasoning?)
The AI is the tool. Your job is designing how it reasons and acts on behalf of your organization.
That's a fundamental shift. And it's happening now.
What's Next?
I'm diving deeper into:
- Multi-agent orchestration: Building systems where agents coordinate to solve complex problems. Not single-agent, single-task. Actual agent teams.
- MCP server patterns: How to design and expose your infrastructure as MCP servers so agents can integrate seamlessly.
- Agentic observability: Tools and patterns for understanding agent decision-making in production.
- Multi-cloud agent orchestration: Deploying agent systems that work across AWS, Azure, GCP, and Oracle simultaneously.
- Agent-native application architecture: Moving beyond "LLMs in an app" to "entire app as coordinated agents."
The Conversation I Want to Start
If you're building cloud infrastructure, designing intelligent workflows, or exploring AI at scale, this is the moment to move beyond hype to execution.
Here's what I'm genuinely curious about:
- Are you experimenting with agents in your infrastructure? What's working? What's not?
- What's the biggest blocker you're facing moving from demos to production?
- How are you thinking about agent security and governance?
- Are you designing your microservices differently to account for agentic coordination?
Because these aren't theoretical questions anymore. They're architectural decisions we're making right now.
Final Thought
DevFest Toronto 2025 wasn't just a conference. It was a snapshot of where infrastructure is heading.
The paradigm shift isn't coming. It's here.
And the builders who understand that, the ones who can think in agents, understand MCP, and orchestrate agentic systems at scale, are the people shaping the next decade of cloud architecture.
If that resonates with you, let's talk. Drop a comment. I want to hear what you're building.
Want to discuss this further?
I'm always happy to chat about cloud architecture and share experiences.