Boards at Polish companies today compare generative AI vendors mainly through the lens of model benchmarks. From what I have observed, and what the product decisions of the major players over the past twelve-plus months confirm, this perspective steers Boards toward the less important question. The real divide in the market now runs elsewhere: around open standards, and around whether a vendor hands your organization control over infrastructure and data.
Two tracks, not one
For the first two years of the GenAI revolution, all the major vendors followed the same road: a closed platform, a model accessible through their API, integrations built around their ecosystem. In recent months the market has split. Some are holding on to the closed platform and betting on the depth of their own ecosystem. One of the major players, Anthropic, creator of the Claude family of models, has chosen the opposite path. It has opened its standards: the Model Context Protocol (MCP), which OpenAI and Microsoft have since adopted, and the "skills" format for packaging organizational knowledge. And in its desktop variant (Cowork on 3P) it lets the customer run the model in their own cloud, on Google Cloud, AWS, or Azure, in a region of their choice, without sending conversations to the application vendor.
This is not an architects' dispute; it is a Board question
One could dismiss this split as a technical discussion. From what I have observed, and from the logic of the contracts Polish companies sign with their own customers, it is strictly a governance question. Open standards mean that integrations built this year will survive a change of vendor. The option to run the model in your own cloud means that your customers' data does not leave infrastructure your organization has chosen and controls. In other words, one architectural direction reduces two risks that never appear on a vendor's slides: vendor lock-in and loss of control over data residency.
Not benchmarks, but boundaries
I am not writing this as a product recommendation; I do not sell other people's licenses. I am writing it as a structural observation. When the market splits, Boards have a year, maybe a year and a half, to make a decision that will define the next five years of their company. And that decision is not about which model scores three points higher on a coding benchmark. It is about whose standards you accept and whose infrastructure you let inside your organization.
My conclusion for Boards
Don't ask: "Which model wins today's tests?" Ask instead: "Which vendor hands us control over standards, architecture, and data, and which keeps that control for itself?" That question, asked once a year, will save you three years of uncontrolled technical debt. Because before you decide which model to deploy, you need to know whose world you are choosing.
Dear Reader, if you are facing this architectural decision and want to discuss, at Board level, what really distinguishes today's major GenAI vendors beyond benchmarks, get in touch.
Leszek Giza
