About Me

TL;DR: I’m a problem solver who uses technology as the vehicle. After 25 years, the domains have changed—applications, infrastructure, security systems, and now LLMs—but the drive is the same: understand the problem deeply, find the right tool, build the solution.

What Keeps Me Interested

  • Problems worth solving. What drives me isn’t any particular technology—it’s the process of understanding a problem deeply enough to build something that actually solves it. Early in my career that was application-centric: how do you build software that works for real users? Now it’s more often infrastructure and platform problems. The constant is the satisfaction of taking something broken or missing and making it work.

  • Systems that scale without spiraling operational complexity. Scaling compute is easy; scaling the team’s ability to operate and debug that compute is hard. I gravitate toward designs that stay manageable as they grow—which usually means investing in abstractions, automation, and observability upfront.

  • Security that ships. I’ve seen too many security initiatives that produce lengthy audit reports but little actual protection. I prefer guardrails that are hard to circumvent and visibility that surfaces problems early. The best security controls are the ones developers don’t have to think about because they’re built into the platform.

  • Understanding what LLMs can and can’t do. I’m actively exploring how large language models fit into real workflows—not as hype, but as another tool for solving problems. I’m interested in where they genuinely accelerate work, where they fail in non-obvious ways, and how to evaluate their output critically. This is the latest expression of the same drive: a rapidly evolving technology space with real problems to figure out.

How I Work

  • Capture intent in a simple user story before writing code. A user story is a single sentence: “As a [who], I want [what] so that [why].” For example: “As an on-call engineer, I want alerts grouped by service so that I can triage incidents faster.” It’s easier to rewrite a sentence than a thousand lines of code. I’ve watched teams build the wrong thing because nobody wrote down what problem they were solving—or wrote it down but buried it in a 40-page requirements doc nobody read. A good user story forces clarity and gives everyone a shared reference point.

  • Start with a concise architecture draft. I do this because expensive mistakes compound—catching a bad assumption in a sketch costs minutes; catching it in production costs weeks. The goal isn’t to predict everything, but to surface the decisions that matter early.

  • Validate assumptions quickly and iterate in small slices. I’ve learned that confidence in a design should come from evidence, not from the thoroughness of the planning document. Ship something small, measure whether it behaves as expected, adjust.

  • Emphasize instrumentation and feedback loops. Decisions are only as good as the information they’re based on. I instrument systems so that I can see when assumptions break—and so the team doesn’t have to guess what’s happening in production.

  • Prefer maintainable, boring solutions unless the payoff is real. “Boring” technology has predictable failure modes and a deep bench of people who know how to operate it. I reach for novel approaches when they solve a problem that boring technology can’t, but I’m skeptical of novelty for its own sake.

  • Communicate deliberately, especially across distance. I’ve been working remotely across geographically distributed teams since 2011, when I joined a Hong Kong-based team from the US. By 2014 I was managing direct reports there across a 12-hour time zone gap. Fifteen years of that has taught me that good remote collaboration isn’t about replicating an office over video calls. It’s about writing things down, making decisions visible, and being intentional about when synchronous time is actually needed.

Current Focus Areas

  • Platform engineering and Kubernetes operations. Building on-premise Kubernetes platforms that are secure, repeatable, and don’t require a dedicated team to babysit. The Talos Kubernetes project captures my current approach.

  • Security data pipelines and multi-tenant isolation. Designing log ingestion and SIEM architectures that can handle multiple tenants without data leakage and without operational overhead that scales linearly with tenant count. The Wazuh SIEM project is the most detailed example.

  • Observability that drives decisions, not just dashboards. I’m interested in observability as a tool for validating architectural assumptions—not just alerting when things break, but surfacing evidence about whether the system behaves as designed.

  • LLM capabilities and limitations. Exploring where large language models genuinely solve problems versus where they create false confidence. I’m treating this the way I’ve treated every new technology shift—hands-on experimentation, honest assessment of strengths and weaknesses, and a focus on practical application rather than hype. See the musings section for early thinking.