Most of the AI conversations happening inside Philippine enterprises right now are about the wrong thing. They are about which tool to buy, which vendor to shortlist, which license to renew, which platform to consolidate onto. These are real questions. They are not the important ones.

The organizations that are actually pulling ahead — the ones where AI has moved from pilot theater to operating reality — are not winning on procurement. They are winning on leadership. On conviction, cadence, and the willingness to measure what changed.

Tooling is a purchase order. Adoption is a leadership decision, made and remade every week until the organization is different.

— 01 · The Thesis
The tooling layer is not the bottleneck.

Commercially available AI capability has converged. The frontier models are within touching distance of each other on the dimensions most enterprises actually need. The interfaces have stabilized. The costs have collapsed. For the average Philippine enterprise, the gap between the best tool and the third-best tool is no longer the variable that determines outcomes.

The variable that determines outcomes is whether the organization uses what it has. And the evidence on that front is embarrassing. License utilization across most of the Philippine enterprises we see hovers between 8 and 22 percent. The remaining 78 to 92 percent is capacity the organization pays for and nobody touches.

This is not a tooling problem. It is a leadership problem dressed up in tooling clothes.

— 02 · Diagnosis
What organizations usually get wrong.

The standard failure mode is predictable. An executive reads something, attends a conference, returns with conviction. A procurement cycle begins. A vendor is chosen. A rollout plan is circulated. A training workshop is held. A Slack channel is created. A steering committee meets monthly.

Six months later, the executive asks how it’s going. The answer is always some variant of “great, we’re seeing real momentum.” Nobody can actually say what changed. Nobody has measured what changed. The tool is technically deployed. Adoption, in any meaningful sense, has not occurred.

The root failure is almost always the same: the executive treated this as a procurement event rather than an operational transformation. Those are different species of work. One ends when the contract is signed. The other ends — if it ever ends — when the organization has visibly changed how it works.

— 03 · What Works
The pattern in organizations that are actually changing.

We’ve seen enough of both outcomes now — in our own practice and in the enterprise work that preceded it — to be specific about what the winning pattern looks like. It is not glamorous. It is not proprietary. It is just consistently executed.

It starts with conviction, not strategy.

The executives who drive real adoption are not usually the ones with the most sophisticated AI strategy. They are the ones who have decided — in a gut, visible, durable way — that this is how the organization will operate going forward, and that they will keep saying so until the organization believes them.

Strategy follows conviction. It does not substitute for it.

Cadence matters more than content.

Monthly steering committees do not produce adoption. Weekly operating reviews do. The rhythm at which leadership returns to the question — what changed this week, and what is getting in the way — is the single most predictive variable we see.

Measurement discipline is the quiet differentiator.

Most organizations measure activity: how many users logged in, how many prompts were sent, how many training sessions were held. None of that is adoption. Adoption is measured in outcomes — cycle time reductions, quality improvements, work that used to take a week and now takes a day. The organizations getting it right are fanatical about this distinction.

— 04 · Four Moves
What we recommend to leadership teams.

When we work with executives on adoption, we consistently push toward four moves. None are sophisticated. All are difficult to execute.

  1. Declare the direction publicly and repeat it until it is tiresome. Ambiguity is the enemy of adoption.
  2. Build a weekly operating cadence — not a quarterly one — where AI outcomes are reviewed with the same seriousness as financial outcomes.
  3. Measure what changed, not what was deployed. Cycle times, quality metrics, error rates. Activity metrics are an indicator of adoption theater, not of adoption.
  4. Invest in capability, not licenses. Most organizations are over-provisioned on tools and under-invested in fluency. Rebalance.

Adoption is not the moment the tool is deployed. It is the moment the organization is different because of it.

— 05 · Closing
The question worth asking.

If you are a Philippine executive sponsoring an AI program right now, the most useful question you can ask your team this week is not about your vendor, your pilot, or your roadmap. It is this: what, specifically, does our organization now do differently than it did six months ago — and can we prove it?

If the answer is unclear, or if the evidence is thin, the problem is not your tools. It is likely sitting closer to the top of the organization than anyone is saying out loud. That is uncomfortable. It is also the only useful place to start.

— Author

J. Miranda

LAKAN Practice · AI & Adoption

Leads LAKAN's AI Training & Adoption practice. Background in enterprise-scale AI adoption programs across the Asia-Pacific region, with a focus on the operational conditions under which organizations actually change — rather than the ones under which they merely say they have.