proxem.ai · /trust-framework
The method

Five principles.
Each with a test.

The Trust Framework is how we decide whether to accept a mandate, how we run it once accepted, and how we refuse to deliver work that would fail by our own standards.

Published 2024. Revised 2026. Binding on every partner.
Principle 01

Independence.

We take no fees, commissions, referrals or revenue share from any technology provider, system integrator or capital allocator. Our sole revenue is the advisory retainer paid by the client.

In practice
  • Partner compensation is fixed salary plus firm profit share. No kickback scheme exists.
  • Our vendor assessments name vendors. We do not suppress criticism to protect a relationship.
  • If a client asks us to implement, we decline and refer to three independent integrators.
Test: Can you read our audited P&L and find a single euro from a vendor? No.
Principle 02

Evidence.

Every substantive claim we put to a client is annotated with its source: a document, a figure from a system of record, a named interview, or a regulator's text. Claims without sources are removed.

In practice
  • Reports ship with an annex of sources. Clients can trace any statement to its origin.
  • AI-generated prose is not used in client-facing writing. Every sentence is written by a human.
  • Where evidence is weak, we say so explicitly. “We do not know” is an acceptable finding.
Test: Pick any paragraph. Ask for the source. It exists, or the paragraph comes out.
Principle 03

Proportionality.

The simplest system that does the job. Heuristics before classical ML, classical ML before generative models, generative models before agents. Additional complexity is recommended only when the evidence of incremental value is plain.

In practice
  • We have talked six clients out of deploying LLMs where a rules engine would have sufficed.
  • Recommendations state the simpler alternative considered and why it was rejected.
  • Pilots measure against a baseline of “do nothing differently”, not against vendor marketing numbers.
Test: What is the cheapest version of this that works? If we didn't ask, we failed.
Principle 04

Accountability.

A named, identifiable human owns every consequential decision made by or with an AI system. Ownership is not delegated to a vendor, a committee, or to the model itself.

In practice
  • Each deployment has an accountable executive by name on an internal register.
  • Where human oversight is statutory, our controls test whether it is genuine, not theatrical.
  • Vendor indemnities are read by counsel. We have removed “model hallucination” exclusions from 14 contracts.
Test: If this system causes material harm on a Tuesday, who answers the regulator on Wednesday?
Principle 05

Reversibility.

Every system we recommend can be switched off within 30 days without structural damage to the business. Lock-in is a risk, not a strategy, and we cost it explicitly.

In practice
  • Architecture reviews include a documented exit plan. If none exists, the deployment is not approved.
  • Data and prompts are client-owned. Vendor termination does not orphan institutional knowledge.
  • Any process re-engineered around an AI system must retain a workable manual fallback for 12 months.
Test: Can the business run, materially unchanged, 30 days after this system is removed?
Regulatory alignment

What this maps to.

The Trust Framework predates most of the regulation that now applies to enterprise AI. It was written to produce compliant systems, and it does, across every jurisdiction our clients operate in.

Regulation · Jurisdiction · Coverage · Framework link

  • EU AI Act — Reg. (EU) 2024/1689
    European Union · High-risk systems, GPAI, transparency · Principles 01 · 02 · 04 · 05
  • GDPR — Reg. (EU) 2016/679
    European Union · Art. 22 automated decisions, lawful basis, DPIAs · Principles 02 · 04
  • DORA — Reg. (EU) 2022/2554
    European financial sector · ICT risk, third-party concentration · Principles 01 · 05
  • NIS2 — Dir. (EU) 2022/2555
    European Union · Cybersecurity, operational resilience · Principles 04 · 05
  • ISO/IEC 42001 — AI management systems
    International · Control framework, lifecycle, governance · Principles 02 · 03 · 04 · 05
  • MaRisk / BAIT — BaFin circulars
    Germany, banking · Model risk, outsourcing, IT controls · Principles 02 · 04
  • FSMA / NBB guidance — Belgian financial authorities
    Belgium, finance · AI in credit, markets, insurance · Principles 01 · 02 · 04
  • FINMA Circular 2023/01 — Operational risks
    Switzerland · AI governance for supervised institutions · Principles 02 · 04 · 05
/engage · response < 48h

Test the framework
against your programme.

In the briefing we walk the five principles against a real system of yours and tell you, on the record, where it passes and where it does not.