Regnard Raquedan, AI Solutions Architect & Tech Speaker, photographed in Toronto

Regnard Raquedan

Shipping agentic AI on enterprise platforms by day. Building digital literacy in my own neighbourhood by night.

Family comes first for me. By day, I'm a Senior Solutions Architect at GitLab, where I lead presales for the Google Cloud technology partnership globally, and a Google Developer Expert. I help large enterprises ship secure, agentic AI on Google Cloud. Outside work, my wife Liza and I run TechGuides for families and seniors in our corner of Toronto. I also write about the patterns behind the AI tools I build, most recently a design pattern I call Precomputed AI, and ship open source tools like CloudEstimate, RightModel, and RunWhere as worked examples.

What I'm building now

After family and the day job, these are the projects and ideas I keep showing up for.

TechGuides — the work my wife Liza and I are doing in our own neighbourhood

TechGuides is the outside-of-work project closest to home for me. My wife Liza and I run it for families and seniors in Don Mills and Thorncliffe Park in Toronto. We relaunched it in April 2026 with fresh AI-focused content, a live site, and a learning tool we built ourselves.

The thinking is simple: the people with the most to gain from AI right now — families navigating a new country, seniors trying not to get left behind — are the people least likely to be served by the tools being built for them. TechGuides is our attempt to fix a small piece of that, in one neighbourhood, with our own hands. If it works here, it can work elsewhere.

Precomputed AI — the design pattern behind the tools I ship

Precomputed AI, or PAI, is a design pattern I've been refining across the tools I build: relocate LLM reasoning into artifacts produced ahead of time, and reserve live inference for opt-in escalation. Reason ahead of time, serve instantly.

The pattern started as a footnote in a post I wrote about token consumption anxiety. The more I shipped, the more I noticed the same posture underneath each tool — precompute the common case, escalate only when the artifact can't decide. CloudEstimate, RightModel, and RunWhere are the worked examples. PAI is the name for what they have in common, and the pattern I'll keep writing and building against.
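In code, the posture behind PAI can be sketched as a tiny function. The artifact shape and the `escalate_to_llm` helper here are invented for illustration; each tool's real artifact is richer, but the split is the same:

```python
def escalate_to_llm(request_key: str) -> str:
    # Stand-in for an opt-in live inference call; a real tool would only
    # invoke a model here, when the artifact cannot decide.
    return f"needs live analysis: {request_key}"

def serve(request_key: str, artifact: dict) -> str:
    """Serve the common case from a precomputed artifact; escalate otherwise.

    `artifact` is assumed to be generated ahead of time (e.g. on a schedule),
    so the common path spends zero tokens at request time.
    """
    answer = artifact.get(request_key)
    if answer is not None:
        return answer                    # precomputed: served instantly
    return escalate_to_llm(request_key)  # rare path: live inference
```

Everything upstream of `serve` runs before any user shows up; everything downstream is the exception, not the default.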

CloudEstimate — the tool I built for a problem I kept running into

CloudEstimate is an open source tool I built because sizing self-managed enterprise workloads across clouds is harder than it should be. Vendor docs give you reference architectures. Cloud calculators give you pricing. Very little bridges the two quickly when the real question is whether the same workload looks different on Google Cloud, AWS, and Azure.

I spend enough time in that gap to know how much friction it creates. So I built something intentionally narrow: a way to map published guidance to concrete instance shapes and pricing snapshots, so a first-pass estimate can stand up in a Slack thread, a procurement review, or an architecture conversation. It's also a worked example of Precomputed AI — scheduled regeneration of the pricing artifact, served instantly at request time.
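The core lookup is deliberately boring. A rough sketch, with made-up instance shapes and hourly rates standing in for the regenerated pricing snapshot:

```python
# Illustrative pricing snapshot; shapes and rates here are invented, and in
# the real tool this artifact is regenerated on a schedule, not hand-edited.
PRICING_SNAPSHOT = {
    ("google-cloud", "n2-standard-8"): {"vcpu": 8, "ram_gb": 32, "usd_hr": 0.39},
    ("aws", "m5.2xlarge"):             {"vcpu": 8, "ram_gb": 32, "usd_hr": 0.38},
    ("azure", "D8s_v5"):               {"vcpu": 8, "ram_gb": 32, "usd_hr": 0.38},
}

def first_pass_estimate(vcpu: int, ram_gb: int) -> dict:
    """Cheapest shape per cloud that meets the workload's requirements."""
    best = {}
    for (cloud, shape), spec in PRICING_SNAPSHOT.items():
        if spec["vcpu"] >= vcpu and spec["ram_gb"] >= ram_gb:
            if cloud not in best or spec["usd_hr"] < best[cloud][1]:
                best[cloud] = (shape, spec["usd_hr"])
    return best
```

Because the snapshot is precomputed, the estimate itself is a dictionary lookup that can survive a Slack thread.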

RightModel — the tool I built because choosing an AI model turned into guesswork

RightModel is a free, open source tool I built because the model-selection problem got noisy fast. Teams are being asked to choose between Anthropic, Google, OpenAI, and everything else before they even have a clear way to describe the work. Too often, that decision gets made with instinct, stale pricing tables, or whichever model someone happened to use last week.

So I built something intentionally simple: paste what you are about to build, choose whether you're constrained to a provider, and RightModel recommends a model in seconds with transparent reasoning, cost breakdowns, and a deeper path when the answer is less obvious. It's another worked example of Precomputed AI: a precomputed ruleset handles the common case in zero tokens, and a live LLM is there as opt-in escalation when the ruleset can't decide.
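The ruleset-first, escalate-second split looks roughly like this. The rules and model labels below are toy stand-ins, not RightModel's actual logic:

```python
# Toy ruleset for illustration; the real rules and model names differ.
RULES = [
    ({"refactor", "code"}, "fast code-focused model"),
    ({"summarize", "document"}, "large-context model"),
]

def recommend(task: str):
    """Precomputed ruleset first (zero tokens); escalate only if it can't decide."""
    words = set(task.lower().split())
    for keywords, model in RULES:
        if keywords <= words:            # rule matches: no live inference needed
            return model, "ruleset"
    return None, "escalate"              # opt-in path to a live LLM
```

The interesting design choice is the `None`: the ruleset is allowed to say "I don't know," which is what keeps the live model genuinely opt-in.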

RunWhere — the tool I built because the self-hosting question needed a default

RunWhere is a free, open source tool I built for the AI infrastructure question that tends to turn into a spreadsheet too early: should this workload stay on a hosted API, or is it worth running your own model? Its default answer is intentionally opinionated: stay on the API unless your workload is one of the exceptions.

The site asks four questions — API spend, hosted model, traffic shape, and hard constraints — then checks whether the workload belongs in the exception set. When it does, RunWhere compares the relevant paths: managed endpoints, serverless GPU, always-on GPU VMs, scheduled batch GPU, and owned hardware. It's the third worked example of Precomputed AI: bake the boundary ahead of time, serve the default instantly, and reserve live analysis for close calls.
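The baked-in boundary can be sketched as a small function. The inputs collapse the site's four questions into a subset, and the spend threshold is invented for illustration:

```python
def run_where(api_spend_usd_mo: float, steady_traffic: bool,
              hard_constraints: bool) -> str:
    """Opinionated default: stay on the hosted API unless the workload
    lands in the exception set. The $10k/month threshold is made up here;
    the real boundary is precomputed from current pricing."""
    if hard_constraints:                              # e.g. residency, air-gap
        return "compare self-hosted paths"
    if steady_traffic and api_spend_usd_mo > 10_000:  # sustained heavy spend
        return "compare self-hosted paths"
    return "stay on the hosted API"
```

Most workloads fall through to the last line instantly; only the exceptions earn the full comparison.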

What I believe

A few things I've come to believe, after twenty years of building in this industry:

The point of working in tech is to help people who weren't going to be helped otherwise. I came to Canada from the Philippines in 2011 looking for a better life for my family. I think the best version of a career in tech is one that quietly opens that same door for someone else — through teaching and building tools that reach the people the market forgets. That's the kind of work I care about.

You can't teach a domain you haven't built in. The AI conversation right now is full of people with strong opinions but no shipped systems. I've learned the hard way that the only opinions worth holding are the ones you earned by getting something into production, watching it break, and fixing it. And when it works, I'm very happy to share it with those who want to know.

I've failed at this more times than I've succeeded, and I keep going anyway. I've co-founded ventures that didn't survive. Every one of them taught me something I couldn't have learned any other way, and none of them stopped me from starting the next one. This kind of grit is needed in this industry, now more than ever. I hope kids pick this up, to be honest.

Speaking

I've been speaking at tech conferences for nearly twenty years, across four continents and audiences ranging from a few hundred to several thousand. The work has shifted as my career has — from early Philippines industry events, to global entrepreneurship stages, to enterprise platform conferences — but the loop has stayed the same: build something real, then teach what I learned from it.

Selected talks:

Google Cloud Next '26 — Las Vegas, April 2026

Secure and Fast Agentic AI Development with Gemini and GitLab. Lightning Talk, Session 3908496. A working pattern for shipping agentic AI inside regulated enterprises.

Google Cloud Next '24 and '25 — Las Vegas

Two consecutive years presenting on the GitLab × Google Cloud partnership and platform engineering for AI workloads.

KubeCon + CloudNativeCon — Europe and North America, 2023

Two appearances in the same year on cloud native platform engineering and DevSecOps at scale.

GitLab Summit 2023

Internal company stage, partnership and platform strategy.

Global Entrepreneurship Summit — Nairobi, Kenya, 2015

GIST Tech-I global startup competition. Presented to several hundred attendees from across the entrepreneurship and innovation community.

Y4IT Conference — Philippines, 2009

Among the largest IT conferences for youth in the country at the time. Audience of several thousand.

Earlier appearances include FSOSS Toronto (2013–2014), community events in Berlin, and SEMCON Philippines in 2007 — my first industry stage.

Writing

I write as a creative outlet. Sometimes for a national audience thinking about policy. Sometimes for enterprise platform engineers shipping production systems. And sometimes just for myself and the wider developer community, because there are problems worth thinking through in public.

Where I write:

Precomputed AI — a design pattern I'm developing

A pattern for relocating LLM reasoning off the user's request, with live inference reserved as opt-in escalation. The manifesto is the starting point; pattern write-ups will follow.

GitLab Blog — ongoing

A running series of technical posts on agentic AI, GitLab CI/CD, Google Cloud, and the working patterns that make them ship together inside real organizations.

dev.to — ongoing

My independent technical voice. Notes from the work I'm doing outside of any employer's roadmap — experiments, opinions, and the things I'm figuring out as I go.

"Why Canada needs to disrupt the child care industry" — Maclean's, 2015

An op-ed I wrote during my CubbySpot years, arguing for a structural rethink of how Canadian families access child care. Different decade, different industry, same instinct: the people the market is failing are usually the ones with the most to gain from a better system.

Get in touch

For speaking, advisory, or collaboration inquiries — or just to say hello. I read every message and reply personally, usually within a few days.

You can also find me on LinkedIn, GitHub, or email me directly at info@raquedan.com.