There is a trend that few are talking about: pressure on firms to use their clients' specialised AI solutions, often accompanied by downwards pressure on payment for services.

In many cases this looks to be backed by AI-firm incursion into the dominant companies in markets that set strategy for long supply chains. Let's use finance and pharma as examples.
Anthropic have just made a big finance play. OpenAI are in a lot of sectors, with education and government services as huge focus areas.
That incursion comes with help to drive adoption of solutions based on their models. All of these firms are placing billion-dollar consultancy bets to get their tech stacks into organisations.

What do suppliers do when they get this kind of message:
"We have assessed the potential uplift in efficiency achievable by using AI, and as a result we will discount payments by 30, 40, or 50% if you use AI to deliver."
Alternatively, or concurrently, this from client governance teams:
"You cannot use AI to process our data or other intelligence unless you provide precise details about your AI usage and we give explicit consent."
Further accompanied by:
"You must use our specialist AI to deliver your services if you want to leverage our data."
That latter pitch almost always means a foundation model chat interface, likely integrated with proprietary client information: data warehouse access, customer data, research, methods, policies, and processes.
Clients used to pay for outcomes; now they partner with AI firms to harvest end-to-end processes, knowledge bases, methods, and research.

This is an oligopolistic pincer movement serving a few purposes:
Undamming the data river
Training on the internet and published works is pretty much exhausted and sources are progressively poisoned by LLM output. One new frontier was always going to be domain specific data from organisations.
Paying specialists for boring piecework to review and refine prompts and responses (RLHF / tuning) was helping, but there's nothing like getting the top specialists in firms to train models for you, while they pay for the privilege.
Or they don't pay. Sometimes AI firms accept deep integration into a domain specialist's data and process ecosystems, with a loan of top AI talent, as a loss-leader partnership. Then the data-hungry AI firm looks around the ecosystem for local knowledge, ways to build dependence, and other opportunities to justify the investment.
Understanding lucrative supplier offerings
Huge pharma, finance, and other firms in turn harvest lucrative use cases from suppliers that could be partly or wholly brought in house (or rather, supplied via AI solution partners). The firm with market dominance and its AI partner can then ask suppliers to use their models, or take a pay cut if they use alternatives.
Verbose logging, chat metadata, user and usage metadata, chat content caches, and details of the information accessed and uploaded to achieve the outcome: that is a supplier laying its expertise entirely bare, at least the part an LLM can ingest.
It shows how they use their own research, deep specialist knowledge, schemas, templates, and analysis to achieve the outcome... step by step.
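To make that exposure concrete, here is a minimal, entirely hypothetical sketch in Python of what a single logged interaction on a client-hosted AI platform could capture. Every field name here is invented for illustration; no specific vendor's logging schema is implied.

```python
# Hypothetical record of one supplier interaction with a client's AI platform.
# All field names and values are invented for illustration.
chat_log_record = {
    "user": "analyst@supplier.example",           # who did the work
    "timestamp": "2025-01-15T09:42:00Z",
    "prompt": "Apply our risk-scoring template to the Q4 dataset",
    "response_cache_id": "resp-8f3a",             # full model output retained
    "files_uploaded": ["risk_template.xlsx", "methodology_notes.docx"],
    "data_sources_accessed": ["client_warehouse.sales_q4"],
    "tokens_in": 1840,
    "tokens_out": 912,
}

def fields_exposing_method(record):
    """Return the keys that reveal HOW the work was done,
    not merely that it was done."""
    method_keys = {"prompt", "response_cache_id",
                   "files_uploaded", "data_sources_accessed"}
    return sorted(method_keys & record.keys())

print(fields_exposing_method(chat_log_record))
```

Aggregated over months, records like this could reconstruct a supplier's step-by-step method: which templates are used, in what order, and how client data is turned into the deliverable.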
Realising promised savings
Job replacement through automation is a real effect. Maybe it is driven by AI FOMO that ignores the risk of deep local knowledge being lost. Maybe costs will outweigh savings, but for the first quarter or two the balance-sheet bump looks worth it. Maybe headcount shrinkage is genuinely justified by efficiencies.
However, due to process complexity and dependence on domain knowledge, it has generally not yet been the total disruption trumpeted from the rooftops.
Supporting dominant players in verticals to apply downwards price pressure on suppliers, while harvesting intelligence about service provision... that is smart and dangerous.
Who ends up with all that knowledge? What does it do to competition? What kind of dependence on the big AI firms does that create? What does it cost in terms of lost specialist knowledge with no easy path to replace it?
Please, while we work out what AI-enhanced human work is worth, think a bit harder about all of that.
That is the existential risk for specialist knowledge workers
Quoting from the Marketing AI Institute site:
But there’s a catch. Services aren’t nearly as scalable or profitable as software. You need human experts. You need infrastructure. And if OpenAI ever plans to go public, heavy service revenue could weigh on its valuation, just like it would have for HubSpot in its early days.
Still, Roetzer argues, the allure is obvious.
"This is an age-old issue where the creator of the product wants more control and believes they can drive greater performance, adoption, utilization, retention, and value creation if they're more involved," he says.
If OpenAI can prove its model with a few key clients—and all signs suggest it’s already doing so—it could rapidly expand into a consulting behemoth. That would place it squarely in the competitive crosshairs of not just Palantir and Accenture, but also emerging startups and cloud giants offering custom AI tools.
If you’re an enterprise trying to deploy AI, expect a new wave of competition: for talent, for vendor access, and for strategic advantage.
OpenAI’s consulting move signals a clear message: The future isn’t just about better AI models. It’s about who gets to shape them, who gets early access, and who owns the implementation.
And in that world, $10 million might be the price of admission.
Is the promise of AI democratising knowledge and creating space for anyone to compete with big firms ringing a bit hollow?
It should be.