AI Gigafactories: Powering Europe's AI Future with Trust, Sovereignty, and the EUBW
- Carsten Stöcker
- Jun 26
- 39 min read
How Large Scale AI Infrastructure Meets European Digital Identity, Delegation, and Governance Standards
The article was originally published by Carsten Stöcker on Medium.
TL;DR: AI Gigafactories - Europe's Chance to Lead with Trust
The EU plans to invest €20 billion to build 4–5 AI Gigafactories under the InvestAI initiative. These facilities are meant to match global players like xAI or Meta in compute power, enabling Europe to train and operate sovereign AI models at scale.
But AI compute alone is not enough. Without a shared trust infrastructure, such AI hubs risk fragmentation, security incidents, and lack of adoption from regulated industries.
The article introduces a 3-layer model for future AI Gigafactories: (1) Compute Layer - GPUs, data centers, cloud orchestration; (2) Ecosystem & AI Workflow - business users, AI workflows, AI agents, and data providers; (3) Trust Layer - verifiable identities, delegation, provenance, compliance
This Trust Layer is critical. It ensures that every AI action is: tied to a verifiable business identity; authorized through digital Power of Attorney (PoA); auditable via AI Service Passports (AISP); aligned with eIDAS 2.0 standards for legal validity across the EU
The AI Service Passport (AISP) is a key trust component - like a Digital Product Passport (DPP), but for AI agents. It holds verifiable identity, provenance, delegation, and risk data. In the age of the rapidly emerging agent-based internet, its impact is even greater:
AISP = A Story for Every AI Agent.
Trust, provenance, and authorization credentials must be embedded into both the Agent Lifecycle Workflows (onboarding, delegation, revocation) and Agent-to-Agent (A2A) interactions - to ensure accountability and security in distributed AI environments.
The European Business Wallet (EUBW) plays a central role: it links organizations, agents, and credentials into a unified, machine-verifiable trust framework - enabling secure, compliant, and scalable AI operations.
When it comes to trust infrastructure for AI Gigafactories, the EUBW is the "only game in town" - enabling verifiable identity, delegation, and compliance across the EU.
The EU is the only region combining AI Gigafactories with a legally binding trust infrastructure (eIDAS 2.0 + EUBW), making trust Europe's strategic differentiator in the global AI race.
Policymakers must embed trust by design into AI Gigafactories - making the EUBW and eIDAS 2.0 the mandatory backbone for verifiable identity, delegation, and compliance. This is how we ensure:
"Made in Europe. AI made with Trust."
Our conclusion: AI Gigafactories will only succeed if Europe embeds trust by design, using EUBW and AISP as the foundational elements of a sovereign AI ecosystem. This is where Europe can lead - and not follow.
AI Gigafactories need more than compute - they need trust. This 3-layer model shows how secure, scalable AI infrastructures require a foundational "Trust Layer" based on eIDAS 2.0, the "European Business Wallet" (EUBW), Power of Attorney (PoA) Mechanisms, Authorization Workflows (embedded in "Agent-to-Agent Protocols") and the "AI Service Passport" (AISP) to enable identity, delegation, security, safety and compliance across AI workflows. Source: Spherity GmbH
1. Global AI Gigafactories and EU Initiatives
The race to build AI "gigafactories" - massive data centers dedicated to AI model training and execution - is well underway globally. In the United States, for example, Elon Musk's startup xAI is constructing what's billed as the world's largest AI supercomputer in Memphis, targeting 1 million GPUs. This "computing gigafactory" already hosts ~200,000 NVIDIA GPUs and plans to expand to 1 million, dwarfing current facilities. Such scale presents huge infrastructure challenges: xAI has requested 300 MW of power but local grids can only supply half, forcing on-site power generation to meet an estimated >1 GW electricity need. Similarly ambitious projects are emerging elsewhere - Meta, for instance, announced a 1.3 million GPU AI datacenter in Louisiana requiring 1.5 GW of power. China is also investing heavily in AI supercomputers, prompting Europe to worry about falling behind.
In response, Europe has launched "InvestAI," a €20 billion plan to build 4–5 AI gigafactories across EU member states by end of 2025. Each facility is envisioned to host on the order of 100,000 cutting-edge AI chips, making them four times larger than Europe's current top supercomputer (the Jupiter HPC in Germany). European Commission President Ursula von der Leyen unveiled this plan in Feb 2025 as part of a €200 billion AI Action Plan, calling the gigafactories a "CERN for AI" - large public-private hubs open to startups, industry and researchers. The goal is to ensure Europe can train advanced models domestically in line with EU rules (such as the upcoming AI Act) while reducing dependence on U.S. cloud giants. Von der Leyen noted the AI gigafactories will be public-private partnerships so that "all our scientists and companies - not just the biggest - [can] develop the most advanced models needed to make Europe an AI continent." In practice, the EU will mobilize funds via existing programs (Digital Europe, Horizon, etc.) plus member state contributions and EIB support, de-risking private investment in these facilities.
Challenges remain: Industry experts warn Europe's plan faces hurdles in sourcing chips and electricity at this scale. NVIDIA's high-end AI GPUs cost ~$40k each, implying several billion euros per factory for ~100k chips. Moreover, U.S. export controls have capped access to certain AI chips for Europe. Power is another constraint - data centers already consume ~3% of EU electricity, and AI will raise this sharply. High energy costs in Europe could strain expansion. Nonetheless, EU leaders see building AI infrastructure as vital for digital sovereignty. France's President Macron framed it as "our fight for sovereignty" and Germany's Chancellor Merz likewise emphasized the need for Europe's own AI infrastructure for its economic future. NVIDIA's CEO Jensen Huang has been touring Europe advocating "sovereign AI," highlighting that each region should develop AI grounded in its own language, knowledge and culture. His message - that Europe must invest in AI capacity or be left an "AI taker" - has resonated, spurring initiatives like InvestAI.
Several EU countries and companies are positioning to host these EU-backed AI data centers. For example, Spain has a consortium led by Telefónica proposing a gigafactory in partnership with other firms. In Germany, an initial alliance of Deutsche Telekom, SAP, IONOS and Schwarz Group formed to bid for an AI hub. However, that alliance fragmented: SAP pulled out, not wanting to be an operator/investor, and the German players ended up submitting separate proposals. The result is a split effort - Deutsche Telekom on its own, and cloud provider IONOS teaming with construction firm HOCHTIEF - rather than a unified German bid. HOCHTIEF and IONOS announced an Expression of Interest to build a gigafactory, leveraging IONOS's cloud expertise and HOCHTIEF's data center construction experience. Their plan would start with 50,000 GPUs, scalable to 100,000+, and target operations by 2027. This fragmentation has raised concerns that Germany could "risk the AI location factor of the future" by failing to present a cohesive vision. Still, the German government's coalition has pledged to secure at least one of the EU's AI centers on German soil, underlining the strategic importance attached to hosting such infrastructure.
InvestAI's timeline is aggressive: the European Commission launched the AI Action Plan and opened calls for proposals in April 2025, with a formal tender for the gigafactories expected by Q4 2025. Projects will be selected as public-private partnerships, co-funded by EU grants and industry investment. The aim is to break ground quickly so that Europe's AI capacity comes online by 2026–27, roughly matching the timeline of U.S. projects like xAI's Memphis center. By planting five AI hubs across Europe - potentially in countries like Germany, Spain, France, the Netherlands, etc. - the EU hopes to create an AI ecosystem that attracts talent and startups, while ensuring compliance with Europe's stricter data protection and AI safety standards. The big question, as posed by one Reuters piece: if Europe builds the gigafactories, will an AI industry come? In other words, can Europe nurture homegrown AI champions (like France's Mistral AI) to fully utilize these facilities? To do so, experts say Europe must also strengthen its cloud services, data availability, and governance frameworks around AI, not just pour concrete for data centers. This is where a robust trust infrastructure becomes critical, ensuring that AI development in these gigafactories is not only powerful but also secure, compliant, and widely accessible.
2. The Need for a Trust Infrastructure in AI Gigafactories
Launching AI gigafactories is not just a hardware challenge - it also requires a trusted operational framework so that various organizations and AI agents can collaborate safely on shared infrastructure. Key trust needs and concepts include:
Verified Identity & Authenticity: In a multi-tenant AI data center, every participating entity - from companies to individual AI agents - must have a secure, verifiable identity. This raises the question: who "owns" or represents each autonomous AI agent running in the facility? It could be a telco/cloud provider (as identity host), an industry partner, or the client company deploying the AI. Traditional identity methods (API keys, static certificates) are insufficient for free-roaming AI agents. Research underscores the need for "infrastructure-grade trust" to assign and verify identities for AI across distributed systems. One novel approach proposes leveraging telco cloud trust infrastructure - telcos could host secure hardware modules or provide secure Multi-party computation (sMPC) key management infrastructure to manage agent-specific cryptographic material, allowing AI agents to authenticate remotely with carrier-grade security. Regardless of the mechanism, the authenticity of each AI agent must be assured so only legitimate, authorized agents (e.g. tied to a known company or developer) can operate in the gigafactory. The European Business Wallet (EUBW, discussed later) can play a central role here by providing verifiable digital identities for organizations, sub-units, and even software agents.
Delegated Authority (Power of Attorney): With AI systems acting on behalf of humans or companies, we need a way to digitally delegate permissions - essentially an AI-specific Power of Attorney (PoA). This means giving an AI agent a cryptographic credential that defines what it is allowed to do (e.g. execute transactions up to a limit, make certain decisions) and on whose behalf. Explicit mandates are critical so that autonomous agents operate within defined bounds. For example, an enterprise could issue a PoA credential to an AI that lets it negotiate contracts under preset parameters, but not sign off beyond a certain budget or without human review. These delegation chains can be layered (an AI agent might delegate a sub-task to another agent, etc.), but each link must be traceable and governed by verifiable credentials. "AI agents must have verifiable identities, PoA credentials, and delegation chains to operate legally and securely." Without such controls, an AI could take actions without clear authorization - a major security and liability risk. In high-risk domains (finance, autonomous vehicles, etc.), incorporating PoA checks is vital. The trust layer should enforce that no AI action is taken unless a valid chain of human or organizational approval is in place, using technologies like verifiable credentials (VCs) and digital signatures to prove the delegation.
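To make this concrete, the snippet below sketches what such a PoA credential and its runtime check could look like. It is a minimal Python illustration: the field names, DIDs, and helper logic are assumptions made for this article, not a normative eIDAS or W3C profile.

```python
from datetime import datetime, timezone

# Hypothetical Power-of-Attorney credential, loosely modelled on a W3C
# Verifiable Credential. All identifiers and field names are illustrative.
poa_credential = {
    "type": ["VerifiableCredential", "PowerOfAttorney"],
    "issuer": "did:example:company-x",            # the delegating legal entity
    "credentialSubject": {
        "id": "did:example:agent-007",            # the AI agent receiving the mandate
        "allowedAction": "procure_cloud_resources",
        "spendingLimitEUR": 50_000,
        "onBehalfOf": "did:example:company-x",
    },
    "expirationDate": "2026-01-01T00:00:00Z",
    "proof": {"jws": "..."},                      # signature by the issuer's key
}

def poa_permits(credential: dict, action: str, amount_eur: int, now: datetime) -> bool:
    """Check a single delegation link before an action is executed."""
    subject = credential["credentialSubject"]
    expiry = datetime.fromisoformat(credential["expirationDate"].replace("Z", "+00:00"))
    return (
        now < expiry
        and subject["allowedAction"] == action
        and amount_eur <= subject["spendingLimitEUR"]
        # A real check would also verify the proof against the issuer's public
        # key and consult a revocation registry.
    )

print(poa_permits(poa_credential, "procure_cloud_resources",
                  30_000, datetime.now(timezone.utc)))          # True
```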
Provenance and Audit Trails (a.k.a. "A Story for Every AI Agent"): Establishing provenance chains for AI activities is essential to build trust and accountability. This means tracking which model or dataset an AI used, what actions it took, and under whose authority, in a tamper-proof log. Prior research has shown that "verifiable provenance graphs" can record the lineage of AI models and data through an AI supply chain, improving transparency and trust between parties. In an AI gigafactory context, every request, model training, or inference result could be signed and logged in the provenance graph. These audit trails ensure that later one can verify which AI did what, when, and trace outcomes back to inputs (critical for compliance with regulations and investigating incidents). The trust framework should thus include a "provenance graph" or similar mechanism (possibly integrated with the AI Service Passport, below) that records key events in a verifiable way. This enables not only troubleshooting and accountability, but also helps with regulatory audits - proving that sensitive data was used appropriately, or that an AI's decisions followed mandated guidelines.
Continuous Risk Monitoring (AI Risk Scoring): Even with identities and policies in place, AI processes can behave unexpectedly or drift over time. A risk scoring system provides dynamic, real-time assessments of an AI agent's trustworthiness or compliance level. Similar to credit scoring but for AI behavior, it would incorporate metrics like: has the agent triggered any security alerts, is it operating within normal parameters, how complex or opaque are its decisions, etc. This could be implemented via a combination of event monitoring, log analysis, and AI governance rules. For instance, if an AI model begins making decisions outside of approved policy (e.g. an autonomous supply chain AI choosing an unvetted supplier purely for cost), its risk score would increase, potentially pausing certain privileges. The AI Service Passport (below) can encapsulate some of this by including "risk assessments" and compliance checks as part of an agent's profile. In essence, the trust layer should continuously evaluate the "health" and policy compliance of AI agents, enabling a "trust but verify" approach where higher-risk agents face greater scrutiny or restrictions. Over time, such scoring could even be used to certify AI systems (e.g. an AI with consistently low risk profile might earn greater autonomy).
AI Service Passport (AISP): As a novel concept, an AI Service Passport would act as a machine-readable "trust dossier" for each AI agent - akin to a digital passport or resume that others can check before interacting with that AI. The passport would contain verifiable claims about the AI: who developed it, its ownership (linked to a business identity), what it's authorized to do, its technical characteristics (model version, training data provenance), and its compliance credentials (e.g. certified under EU AI Act, adhering to GDPR, etc.). According to one proposal, an AISP is essentially a verifiable registry entry tracking an AI's "capabilities, provenance, and delegation history", which ensures AI actions remain auditable and within approved limits. The AI Service Passport might include entries like Provenance Information (origin and training data of the AI model), Delegation History (the chain of PoA delegations and any revocations), and Risk/Compliance Assessments (audit results, compliance with eIDAS, AI Act, etc.). Before trusting an AI agent to, say, execute a transaction or access data, a company could query its Passport to verify it meets all required criteria. This concept extends ideas like digital product passports to AI services. By packaging trust evidence in a standardized, easily verifiable format, AISPs could greatly streamline due diligence in automated interactions - much like a human presenting an ID and certificates. It enables machine-to-machine trust, where an AI agent can present its passport (cryptographically signed) and have another system automatically validate its permissions and integrity.
Agent-to-Agent (A2A) Protocols - The Next Layer of Control: As AI agents begin to interact directly with each other - across companies, platforms, and jurisdictions - Agent-to-Agent (A2A) protocols define how these interactions are structured, authenticated, authorized, and governed. These protocols are the backbone of decentralized, autonomous AI ecosystems. Big tech players are already competing to control A2A communication standards, embedding their own identity models and permissions logic to gain platform dominance. If Europe wants to maintain digital sovereignty and interoperability, trust must be embedded directly into A2A protocols: every message, transaction, or negotiation between agents must carry verifiable credentials (identity, PoA, compliance claims). Without trust at the protocol level, A2A ecosystems become vulnerable to spoofing, large-scale surveillance, unauthorized actions, or systemic manipulation. Europe needs open, standards-based A2A protocols aligned with eIDAS 2.0 and the EUBW to ensure secure, rule-based collaboration between AI agents across sectors.
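As a rough illustration of what "trust at the protocol level" could mean, the following sketch shows an A2A message envelope that carries the sender's identity, its credentials, and a signature. The envelope fields and helper hooks are hypothetical, not an existing protocol specification.

```python
import hashlib
import json

# Hypothetical A2A message envelope: every request carries the sender's
# identity, its credentials (identity, PoA, compliance claims), and a
# signature, so the receiving agent can verify before it acts.
def build_a2a_message(sender_did: str, receiver_did: str, payload: dict,
                      credentials: list, sign) -> dict:
    body = {
        "sender": sender_did,
        "receiver": receiver_did,
        "payload": payload,                 # e.g. an order or a negotiation step
        "credentials": credentials,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "signature": sign(digest)}   # sign() wraps the sender's wallet key

def accept_a2a_message(message: dict, verify_signature, verify_credentials) -> bool:
    """Receiver-side gate: no verifiable identity and mandate, no processing."""
    return verify_signature(message) and verify_credentials(message["credentials"])
```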
As AI gigafactories bring together many actors and autonomous systems under one roof, a robust trust layer is not optional - it's a necessity. Identity, delegation, provenance, risk monitoring, and credentialing (passports) work in tandem to create a "zero-trust" environment where nothing and no one is taken at face value without verification. Just as humans in an organization undergo ID checks, role-based access control, and audits, AI agents need an equivalent trust framework to ensure they operate transparently, securely, and within mandate. The next sections discuss how such a framework can be implemented, particularly leveraging the EU's emerging digital identity infrastructure (eIDAS 2.0 and the European Business Wallet) as the foundation.
3. Roles and Stakeholders in the AI Gigafactory Ecosystem
Standing up an AI gigafactory and its trust infrastructure involves a complex ecosystem of stakeholders, each with distinct roles:
Infrastructure Operators: These are the entities that build and run the physical data center and cloud platform. In Europe's case, this could include telcos and cloud providers (e.g. Deutsche Telekom's T-Systems, IONOS cloud, etc.), specialized datacenter operators, or consortiums thereof. They provide the hardware (GPUs, networking, cooling) and baseline cloud services on which AI workloads run. Their role in trust: implement the base security (physical security, network segmentation) and possibly host trust services (for example, telcos could host the secure HSM modules or sMPC nodes for agent identities). They ensure the gigafactory meets uptime, scalability, and security requirements and that only accredited participants gain access.
Technology Partners (Chip & Hardware Suppliers): These include companies like NVIDIA (GPUs), specialized chip startups, server manufacturers, and power/cooling technology providers. They supply the critical components (advanced AI chips, high-performance systems) and often collaborate on site design (e.g. NVIDIA partnering to optimize supercomputer configurations). In terms of trust, chip providers may introduce features like hardware security modules, trusted execution environments, or chip-level attestation that can be leveraged to verify computations and identities. For instance, new GPUs might support cryptographic proofs of what code they ran, aiding provenance.
AI Software/Platform Providers: These are the AI framework and service providers that supply the software stack on top of the raw hardware. Companies like SAP (software), AI model developers, middleware providers fall here. They might offer AI training platforms, data management tools, or specialized industry AI solutions deployed in the gigafactory. Their role includes integrating identity and access management into their platforms - e.g. ensuring their AI tools can consume EUBW credentials to authenticate users/agents, logging model lineage for provenance, etc. Some may also be providers of the trust infrastructure itself (e.g. a company providing a credential wallet or audit trail system as a service).
Training Data Providers (Curated Data Suppliers): These actors supply high-quality, domain-specific datasets used for training AI models within the gigafactory. They may include open data consortia, companies with proprietary industrial, medical, or legal data, or public institutions. Their role is critical, as the performance and trustworthiness of AI systems depend heavily on the quality and provenance of the training data. From a trust infrastructure perspective, they must provide verifiable claims regarding data origin, licensing terms, and compliance (e.g. GDPR, copyright) - ideally via data source credentials that are referenced in the AI Service Passport of the resulting models. They may also collaborate with Trust Service Providers to issue digital provenance attestations.
AI Quality & Provenance Auditors (Independent Model & Data Validators): These stakeholders are responsible for independently verifying the quality, integrity, and compliance of AI models and training data before deployment. Their activities include bias detection, robustness testing, validation against reference data, GDPR audits, and ethical assessments. These "AI Quality Assurance Providers" can be accredited auditing bodies, certification authorities, or specialized trust-tech firms. Their output - such as bias audit certificates, model lineage reports, or training data provenance attestations - are recorded as verifiable credentials and become part of the AI Service Passport. They play a key role in enabling regulatory approval and building downstream trust in AI systems.
Business Users (Clients): These are the companies or research institutions that utilize the AI gigafactory's compute power. For example, pharmaceutical firms, automotive manufacturers, financial institutions, universities etc. will run AI workloads (like drug discovery models, autonomous driving simulations, large language model training) on the shared infrastructure. They are data controllers in their own right, bringing possibly sensitive data and proprietary models to the facility. From a trust perspective, business users need assurance of secure multi-tenancy (their models and data are isolated and safe from other users), and they need to abide by the trust framework: e.g. registering their AI agents' identities, obtaining necessary PoA for them, and consuming the audit reports. They will also be the ones accountable if their AI agent misbehaves, so the trust layer protects and benefits them by preventing unauthorized actions under their name.
Regulators and Policymakers: Given the strategic importance, government bodies (at both EU and national level) and regulators (for data protection, AI ethics, cybersecurity) are key stakeholders. The EU is not just funding InvestAI but will oversee compliance with the AI Act, GDPR, Cybersecurity Act, etc. Regulators will likely require reporting and audit access - hence their interest in mechanisms like provenance tracking and risk scoring. They may integrate the gigafactory into broader governance (for example, linking to the European AI Board or other oversight entities from the AI Act). Policymakers also set standards: e.g. defining what an acceptable AI Service Passport contains or ensuring interoperability of trust services across the EU. In essence, they provide the governance layer and legal framework in which the trust ecosystem operates.
Trust Service Providers: A new class of player emerges specifically to handle digital trust services in the ecosystem. Under eIDAS, these could be Qualified Trust Service Providers (QTSPs) who issue and verify identities, signatures, timestamps, etc. In the AI gigafactory context, think of providers who issue EUBW credentials (the European Business Wallet verifiable IDs), provide digital signatures and seals for AI transactions, and manage secure wallets for organizations. For example, a company like Spherity or other identity tech firms might serve as an EUBW issuer or wallet provider for businesses enrolling in the AI hub. They ensure that every organization and agent has the proper credentials and that those credentials are recognized EU-wide. They might run the trust registries or be the ones to perform identity verification (KYC/KYB) before a business can participate. In many ways, they are the backbone connecting the technical operations with the legal trust requirements (ensuring that, say, a PoA credential is digitally signed by a notary or that an AI model's audit trail is timestamped by a qualified service).
Others: What other roles exist? We could consider financial stakeholders - e.g. the European Investment Bank or venture consortia funding these projects - but their role is more funding than day-to-day operation. Another stakeholder is the research community and standardization bodies: organizations like EuroHPC Joint Undertaking (coordinating HPC resources), standards bodies like ISO/IEC (for AI standards) or W3C (for verifiable credentials standards). They contribute by aligning the gigafactory's practices with international standards and by ensuring interoperability (so that, for example, an AI passport issued in one facility is trusted in another). Security auditors or cyber insurance providers could also be stakeholders: independent entities that review the security of the facility and the integrity of the trust processes, providing certifications or insurance - their involvement would incentivize maintaining high trust standards (lower premiums for robust controls, etc.). Lastly, end consumers (indirectly) are stakeholders: if a pharma company's AI at the gigafactory develops a drug, patients need to trust the AI's output was safe and compliant. Though not directly interacting with the system, public trust in AI outcomes is shaped by how well this ecosystem manages trust internally.
A secure AI infrastructure requires that all stakeholder actions are traceable to a verifiable business identity.
All these stakeholders must also collaborate under a common trust framework, where responsibilities are clear. For instance, the infrastructure operator might ensure that hardware roots-of-trust and secure enclaves are available; the trust service provider issues the necessary digital identities; the business user assigns PoA to its AI agents; and regulators set the rules for audit logs and intervene if risk scores signal something awry. By mapping out roles, we ensure that trust is shared and enforced at all levels - technology, process, and governance.
4. eIDAS 2.0 and the European Business Wallet as the Trust Layer Foundation
The EU's updated digital identity regulation, eIDAS 2.0, provides a crucial foundation for the AI trust layer. At its core, eIDAS 2.0 establishes a framework for cross-border electronic identification, authentication, and trust services - including the introduction of the European Digital Identity Wallet (EUDIW) for citizens and a corresponding European Business Wallet (EUBW) for organizations. This regulatory upgrade, which came into force in May 2024, aims to enable "secure, dynamic, and ongoing governance" of digital identities across the EU. In other words, it's not just static ID cards, but a live system where identities (and associated attributes or credentials) can be issued, used, and verified in real-time with legal effect.
Under eIDAS 2.0, each member state will roll out standards-compliant digital wallets that individuals and companies can use to store verifiable credentials - from ID cards and diplomas (for people) to business registrations and mandates (for companies). These wallets use strong authentication and certificates, ensuring any credential presented is cryptographically signed by a trusted authority. Importantly for AI, eIDAS 2.0 and the forthcoming European Digital Identity Framework establish common standards for things like electronic signatures, seals, timestamps, and delegation attestation across the EU. This means if an AI agent is given a digital PoA signed according to eIDAS 2.0 specs, any service in the EU should be able to automatically validate that credential and trust its origin.
The European Business Wallet (EUBW) is envisioned as a specialized wallet for legal entities (companies, organizations) that extends these capabilities to the B2B and industrial context. It is a strategic pillar for Europe's digital industrial competitiveness, enabling verifiable business identities and transaction trust at scale. Unlike the citizen wallet, which faces adoption hurdles, businesses have clear incentives to adopt EUBW for compliance and efficiency. For the AI gigafactory, EUBW can serve as the trust anchor for all participants:
Verifiable Enterprise IDs: Every company (and potentially sub-division) that participates can have a verifiable Legal Entity credential in its EUBW, issued by a trusted authority (e.g. a national business register or a QTSP). This is more than a static number - it's a digital certificate that can be presented and instantly validated. Thus, when an organization's AI agent connects to the platform, the platform can check the agent's certificate to confirm, "Yes, this agent belongs to Company X, which is a legally registered entity in the EU (with VAT number, etc.)". The EUBW essentially guarantees "digital KYB" (Know Your Business) on the fly. This is the bedrock: no anonymous or pseudonymous agents should be running in a sensitive gigafactory context; they all tie back to a real enterprise or institution.
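A minimal sketch of such an on-the-fly "digital KYB" check might look as follows; the trusted-issuer list, DIDs, and helper functions are placeholders standing in for real eIDAS trust lists, DID resolution, and revocation services.

```python
# Minimal sketch of an on-the-fly "digital KYB" check when an agent connects.
# The issuer list and helper functions stand in for real eIDAS trust lists,
# DID resolution, and revocation services.
TRUSTED_ISSUERS = {"did:example:national-business-register", "did:example:qtsp-1"}

def verify_enterprise_identity(agent_credential: dict,
                               resolve_issuer_key, verify_proof) -> bool:
    issuer = agent_credential["issuer"]
    if issuer not in TRUSTED_ISSUERS:         # issuer must appear on an EU trust list
        return False
    issuer_key = resolve_issuer_key(issuer)   # e.g. via a DID document or trust registry
    if not verify_proof(agent_credential, issuer_key):
        return False
    subject = agent_credential["credentialSubject"]
    # The agent must be bound to a registered legal entity (VAT, LEI, register ID).
    return "legalEntityId" in subject and "agentDid" in subject
```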
Digital Power of Attorney Storage: EUBW allows companies to issue, store, and manage PoA credentials in a standardized way. For example, a company's authorized officer could use EUBW to issue a Power of Attorney credential to an AI agent (identified by a DID - Decentralized ID) that's valid for certain tasks. That credential is signed and stored in the wallet. When the AI agent attempts an action, it presents this PoA (via a verifiable presentation), and anyone can verify via eIDAS trust services that the PoA is authentic and unrevoked. EUBW thus acts as the repository of who within or on behalf of the company is allowed to do what. It also supports revocation - if an AI's mandate is revoked (or it's discovered to be compromised), the PoA can be canceled in the wallet and will no longer validate. This aligns with continuous governance: mandates aren't permanent papers but live credentials that can expire or be rescinded as needed.
Authenticating AI Agents: Through EUBW, not only human employees but also AI agents can be given identity and authority. The Medium article on EUBW notes that it "could facilitate issuance of verifiable credentials for AI agents, allowing them to securely exchange attestations within defined trust domains." In practice, an AI agent could have its own "organizational eID" in a sense - likely implemented as a DID linked to the company's wallet. The EUBW would contain credentials binding that DID to the company and listing its permissions (e.g. a credential might state "Agent 007 - employed by Company X's procurement department - delegated to approve orders up to €100k"). When that agent interacts, it uses cryptographic keys associated with its EUBW identity, ensuring non-repudiation (the actions can be attributed) and authentication. This leverages eIDAS trust services like electronic seals (for software agents signing documents) and the legal recognition they carry. Essentially, the EUBW provides the mechanism to clearly identify and authorize AI agents. Without such structure, AI agents would be amorphous code acting with no official identity; EUBW grounds them in the legal identity system.
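The snippet below illustrates the non-repudiation idea with a plain Ed25519 key pair; in a real deployment the private key would sit in an HSM or sMPC service bound to the company's wallet rather than in the agent process, and the names here are purely illustrative.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Illustrative only: in practice the private key would live in an HSM or an
# sMPC service linked to the company's wallet, not inside the agent process.
agent_key = Ed25519PrivateKey.generate()
agent_public_key = agent_key.public_key()        # published via the agent's credential

def seal_action(action: bytes) -> bytes:
    """The agent signs the action it is about to take (electronic-seal analogue)."""
    return agent_key.sign(action)

def attribute_action(action: bytes, signature: bytes) -> bool:
    """A verifier checks the signature against the key bound to the agent's
    identity credential, giving non-repudiation of the action."""
    try:
        agent_public_key.verify(signature, action)
        return True
    except InvalidSignature:
        return False

signature = seal_action(b"approve purchase order PO-1234")
print(attribute_action(b"approve purchase order PO-1234", signature))   # True
```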
Integrated Trust Services (not just a module): A key point is that this identity and trust layer should be an integral part of the shared infrastructure, not an afterthought or optional plugin. The EUBW and related eIDAS services must be embedded by design in the gigafactory's operations. For example, when setting up any new AI cluster or deploying a model, the workflow should include registering its credentials. Audit logs should automatically tie into eIDAS timestamping and sealing for legal validity. Viewing EUBW as "not just an individual module, but an integral component of common infrastructure" means all partners rely on it as the single source of truth for identity and delegation. This avoids fragmented trust solutions. As I emphasized, the EUBW is meant to be a "regulatory backbone" supporting EU frameworks (eIDAS, GDPR, etc.) and ensuring common standards for identity and access control across ecosystems. In the context of AI centers, that translates to a unified trust layer that every stakeholder uses.
By basing the trust layer on eIDAS 2.0 and EUBW, the gigafactory ecosystem benefits from EU-wide interoperability and legal assurance. A PoA or credential issued in Germany can be verified in Spain with the same confidence. Companies don't have to reinvent identity management; they plug into the European frameworks. This also aligns with "digital sovereignty" goals - Europe controls its identity and trust infrastructure (rather than relying on, say, a proprietary system from a foreign tech giant), which resonates with the sovereign AI narrative. Nvidia's Jensen Huang himself noted that Europe's move toward sovereign AI - which includes keeping critical infrastructure and data under EU governance - is gaining traction. Leveraging eIDAS 2.0 and EUBW is a concrete step in that direction, giving European AI endeavors an identity fabric that is secure, standardized, and sovereign-by-design.
5. Value Proposition of a Trust-Based AI Ecosystem
Implementing a comprehensive trust layer (as described above) for AI gigafactories yields significant benefits and addresses key business concerns:
Compliance and Governance: Perhaps the strongest driver is ensuring regulatory compliance and auditable governance of AI activities. With explicit delegation and identity chains in place, you get unbroken chains of delegation and a clear audit trail for every decision. This makes it far easier to demonstrate compliance with regulations like the AI Act, and to enforce accountability. If an AI signs a transaction, we know exactly which legal entity is responsible and which human gave it authority. This mitigates the legal and reputational risks of "rogue AI" behavior. It also simplifies external audits and certification - auditors can rely on cryptographically secure logs instead of messy paper trails. Overall, the trust infrastructure brings governance by design: policies (like who can do what) are baked into credentials and smart contracts, not just in a manual SOP document. The result is that compliance checking can be partly automated and continuous (e.g. a regulator could query an AI's Service Passport at any time to see its compliance status). In a world where AI systems might need to be paused or adapted if they become non-compliant, having this transparency and control is invaluable.
Security (Zero Trust): The framework significantly bolsters security by ensuring strong authentication and authorization for both humans and machines at every step. In effect, it adopts a Zero Trust Architecture: never trust, always verify. Every API call, every model deployment, every data access by an AI agent would be accompanied by an identity proof and authorization check. This dramatically reduces the attack surface. For example, it prevents a scenario where a malicious actor might smuggle an unauthorized AI process into the data center - without a valid credential, it won't get compute time or data access. It also helps mitigate insider threats or human error: even an employee can't accidentally run a powerful AI job without the proper delegation in place. Additionally, by authenticating devices and software modules, the system can defend against tampering (if someone tried to swap an AI model or inject code, it would lack the expected cryptographic signatures). The trust layer's emphasis on secure key management, verifiable credentials, and continuous verification means the entire operation moves closer to "secure-by-default." Notably, it's not just about keeping bad actors out, but also containing AI agents themselves - if an AI agent tries to step outside its lane, the authorization system will block it. This is crucial given fears that autonomous AI could be co-opted for wrongdoing (e.g. conducting cyberattacks) or simply err due to a bug; the trust layer acts as a safety net.
Interoperability and Ecosystem Collaboration: A standardized trust infrastructure (based on EU standards) creates interoperability benefits that facilitate collaboration. Partners can seamlessly connect their systems - "standardised interfaces between partners and EU systems" means, for instance, a company's procurement system can accept quotes from an AI agent of a supplier because both speak the same trust protocol (verifiable credentials via EUBW). This reduces integration friction. In the gigafactory context, companies from different countries or industries can share the same facility with confidence because they trust the common identity fabric. It's akin to how a common financial system (like SWIFT for banks) enables global transactions; here a common identity & trust system enables cross-organization AI workflows. Interoperability also future-proofs the investment - as new partners or regions join, they can plug into the trust framework rather than building one-off solutions. For example, if an AI gigafactory result needs to be fed into a national research cloud, using eIDAS/EUBW credentials ensures compatibility with government systems or other cloud trust schemes. Overall, it encourages an ecosystem effect: many stakeholders can contribute data, models, and services to a shared AI ecosystem because a baseline of trust is guaranteed. This broad participation can lead to a network effect where the value of the platform increases as more trusted data and AI services become available on it.
Efficiency and Scale (Trust at Scale): A shared trust layer can also unlock operational efficiencies and economies of scale. Instead of each company implementing its own identity management and logging, they rely on the common infrastructure, reducing duplicate effort. Smart, verifiable credentials can automate previously manual processes - for instance, onboarding a new partner to use the AI facility might be as simple as issuing them a credential, rather than a lengthy legal contracting process for every dataset exchange (the compliance requirements can be encoded in the credentials). Real-time risk scoring and automated enforcement can catch issues early, potentially avoiding costly incidents or downtime. Also, by establishing trust, companies may be willing to share resources or data that they otherwise would silo - e.g. two pharma companies might agree to a joint AI project in the gigafactory if they know a solid trust framework prevents data leakage and ensures IP ownership via cryptographic proofs. This kind of sharing can lead to innovation and network effects: more data can improve models, more collaboration can spawn new AI solutions, benefiting all participants. Essentially, the trust infrastructure provides a "security scale" - as more parties use it, the overall system becomes more secure (since everyone is contributing to and benefiting from the collective monitoring and standards). It's similar to how a "community immunity" works in cybersecurity: a common defense standard means if one detects a breach attempt, others are automatically protected by updated credential revocations or risk alerts.
Safety in Cyber-Physical AI Workflows: As AI systems increasingly control physical processes - from robotics in manufacturing to autonomous vehicles and critical infrastructure - the trust layer becomes a safety-critical component. In these cyber-physical environments, AI agents make decisions that affect human lives and real-world assets. Embedding verifiable identity, digital delegation (PoA), and continuous risk monitoring ensures that only authorized, policy-compliant agents can act - and that every decision is traceable and reversible if needed. For example, a robotics AI on a factory floor must prove its identity and mandate before altering machine parameters; if it deviates from approved logic, its actions can be blocked or paused in real time. The trust infrastructure acts as a "digital safety interlock", preventing unauthorized or unsafe behavior from propagating into the physical world. In regulated sectors like mobility, energy, and healthcare, this is not optional - it is essential for operational resilience, legal compliance, and public trust. AI safety is no longer just about algorithms - it is about the infrastructure of trust that surrounds them.
The value prop spans "hard" benefits (meeting compliance, preventing fraud, ensuring safety, avoiding fines or breaches) and "soft" benefits (building confidence, enabling partnerships, improving brand trust). It creates a competitive advantage: AI services delivered with trust and transparency are likely to be favored by customers and regulators. As one analysis put it, AI trust is a business necessity, not just a nicety. Those who invest in robust AI governance will be better positioned to leverage AI at scale without the frequent setbacks of security incidents or ethical controversies. Especially in Europe, with its stringent regulations and emphasis on ethical AI, a trust framework is the ticket to play in high-value AI applications.
6. Key Elements of an AI Trust Framework (Architecture Blueprint)
Drawing together the needs and value drivers, we can outline the technical architecture of a trust layer for AI gigafactories - essentially a blueprint comprising several key elements:
Enterprise Identity Management: At the foundation is Enterprise (Legal Entity) Identity. This is handled via the European Business Wallet (EUBW) as described. Each organization is represented by a legal digital identity (e.g. a DID with associated verifiable credential like a Legal Entity Identifier or national business ID). Within an organization, there can be finer identities - departments or units could have sub-credentials, and importantly each AI agent gets its own identity certificate linking it to the organization. The architecture includes an Identity Registry/Directory (which could be decentralized or centralized) where public keys and revocation lists for these identities are maintained. This allows any participant to resolve an identity and get the assurance it's valid and who it belongs to. For usability, this might integrate with existing IAM (Identity & Access Management) systems of companies - but enhanced with EU verifiable credentials so that trust extends beyond one organization. In summary, a trusted ID for every entity (human, machine, or AI) is the first pillar.
Delegation & Power of Attorney (PoA): The next layer is the delegation framework. This comprises the issuance and validation of PoA credentials and trust chains. Technically, this means a company officer uses an interface (the business wallet app) to create a Verifiable Credential (VC) that states "Agent X is authorized to perform Y on behalf of Company Z". This credential is signed with the company's private key (which itself is tied to the company's legal ID). The architecture needs a Credential Issuance Service and a Credential Verification Service. Issuance might be done by the company itself via their wallet; verification will be done by any counterparty's system by checking the credential signature and status (against revocation registries). The framework should support chaining - e.g., a human gives PoA to an AI, which in turn could delegate a sub-task to another AI. To manage this, the credentials can be chained or nested, and the system might use a graph of delegations to compute effective permissions. A policy engine might enforce that certain high-risk delegations cannot be passed on. Essentially, the architecture enforces that at runtime, every action an AI tries is checked against a valid PoA. If the chain is missing or broken, the action is denied. Smart contract technology or distributed ledger can be used to automate these checks in a trusted way (for instance, encoding access or delegation logic in a smart contract so that an AI's transaction will only execute if its credential is present and current).
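A delegation-chain check could be sketched roughly as follows: each credential must be issued by the subject of the previous one, must be unrevoked, and may only narrow (never widen) the scope it inherits. The field names and helper hooks are assumptions for illustration.

```python
# Hypothetical delegation-chain check: each credential must be issued by the
# subject of the previous one, be unrevoked, and may only narrow the scope it
# inherits (here reduced to a single spending-limit dimension for brevity).
def chain_is_valid(chain: list, is_revoked, verify_proof) -> bool:
    for i, cred in enumerate(chain):
        if not verify_proof(cred) or is_revoked(cred):
            return False
        if i > 0:
            parent = chain[i - 1]
            # The issuer of this link must be the delegate of the previous link.
            if cred["issuer"] != parent["credentialSubject"]["id"]:
                return False
            # A sub-delegation may only narrow the spending limit, never widen it.
            if (cred["credentialSubject"]["spendingLimitEUR"]
                    > parent["credentialSubject"]["spendingLimitEUR"]):
                return False
    return True
```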
Authentication & Authorization (Continuous): Coupled with identity and PoA is the AuthN/AuthZ system. This component ensures that at every interface in the data center (APIs, databases, model repositories, etc.), calls are authenticated (the caller presents a credential) and authorized (the system verifies the caller's PoA allows the requested operation). A unified access management layer would integrate with cloud orchestration - e.g., when an AI job is submitted to the GPU cluster, the orchestration service authenticates the job's owner via EUBW credential and checks an authorization policy (perhaps using something like Attribute-Based Access Control with attributes coming from the verifiable credentials). Technologies like OAuth/OIDC could be extended to use verifiable credentials instead of traditional tokens, or a custom solution using DIDs and VCs could be employed. The key is that this layer treats both human users and AI agents uniformly in the sense that both have identities and roles. For instance, if an AI agent tries to access a dataset labeled "confidential," the authorization service will look for a credential that agent has indicating clearance. No credential, no access - by design. Moreover, the architecture should enable continuous authentication: not just a one-time check when a session starts, but potentially continuous verification for long-running AI processes (aligning with Zero Trust principles where trust is continuously evaluated). This might tie into threshold security or policy engines that can revoke or adjust permissions on the fly if risk conditions change (e.g., if an agent's risk score goes high, temporarily reduce its access until it passes a review).
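As a toy example of such an attribute-based check, the policy table and decision function below grant access only when the caller's verified credential attributes match what the resource requires; the labels and attributes are invented for illustration.

```python
# Toy attribute-based access decision: the attributes come from verified
# wallet credentials rather than a local user database. Labels are invented.
POLICY = {
    # resource label -> attributes the caller's credentials must assert
    "dataset:confidential": {"clearance": "confidential", "purpose": "model_training"},
    "gpu:cluster-a": {"role": "workload_owner"},
}

def authorize(resource: str, credential_attributes: dict) -> bool:
    required = POLICY.get(resource)
    if required is None:
        return False                                  # default deny
    return all(credential_attributes.get(k) == v for k, v in required.items())

attrs = {"clearance": "confidential", "purpose": "model_training", "role": "workload_owner"}
print(authorize("dataset:confidential", attrs))       # True
print(authorize("dataset:export-restricted", attrs))  # False: unknown resource, denied
```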
AI Service Passport & Compliance Ledger: Implementing the AI Service Passport (AISP) concept involves a verifiable provenance graph or ledger that compiles key information about each AI agent. This could be a cryptographic data structure where for each AI agent's DID, there is an associated set of linked records that contain:
Provenance data: hashes or pointers to the AI model's origin, training datasets, perhaps an AI Bill of Materials (listing libraries, model parameters, etc.).
Delegation history: records of when/where the AI was activated, by whom, any changes in its PoA scope, and logs of major actions (this could be akin to a transaction history for the AI).
Risk & compliance (audit) assessments: results of any audits or compliance checks, possibly a risk score or rating. For example, after each task or on a schedule, the AI could be evaluated (maybe even by another AI) for adherence to policies, and an updated risk level is written to its passport record. If an AI violates a rule, that might reflect as a flag on its passport until resolved.
Credentials and certifications: the passport can also include references to any certifications the AI has (e.g. safety certification, bias audits passed, etc.).
This passport data store should be queryable by authorized parties. For instance, before two AI agents engage in an automated transaction, they can exchange and check each other's passports (likely via an automated protocol). The architecture might use Decentralized Identifiers (DID) and Wallets to store some info, and pair that with off-chain storage for larger data. Smart contracts can enforce that updates to a passport are only made by authorized sources (e.g., only the company that owns the AI can update its delegation entries, only an auditor can add a compliance certificate). Essentially, the AISP ledger acts as a trust database for all AI in the system. This is an advanced element, but it's extremely powerful for transparency: it's like having a constantly updated dossier on each AI that anyone (with permission) can review to determine trust. We can also integrate automated compliance checks here - e.g., if an AI's passport says it's using personal data, systems can automatically ensure GDPR provisions are applied. This element addresses the "Provenance Chains" and "AI Service Passport" points from earlier in a concrete way.
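To show one possible shape of such a record, the sketch below combines the four field groups listed above into a single passport object plus a simple pre-interaction check; the schema and identifiers are illustrative, not a proposed standard.

```python
# One possible shape for an AI Service Passport record, combining the four
# field groups above. The schema and identifiers are illustrative only.
ai_service_passport = {
    "agentDid": "did:example:agent-007",
    "owner": "did:example:company-x",
    "provenance": {
        "modelVersion": "procurement-model-2.3",
        "trainingDataRefs": ["sha256:ab12...", "sha256:cd34..."],
        "aiBillOfMaterials": "sha256:ef56...",
    },
    "delegationHistory": [
        {"poaId": "poa-001", "scope": "procure_cloud_resources",
         "issued": "2025-07-01", "revoked": None},
    ],
    "riskAssessment": {"score": 0.12, "lastAudit": "2025-06-15", "flags": []},
    "certifications": ["eu-ai-act-conformity", "bias-audit-2025"],
}

def passport_clears(passport: dict, max_risk: float, required_cert: str) -> bool:
    """A counterparty's pre-interaction check against the passport."""
    risk = passport["riskAssessment"]
    return (risk["score"] <= max_risk
            and not risk["flags"]
            and required_cert in passport["certifications"])

print(passport_clears(ai_service_passport, 0.3, "eu-ai-act-conformity"))   # True
```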
Risk Scoring and Monitoring Engine: The architecture should include a component (or a service) dedicated to monitoring AI agent behavior and computing risk metrics. This can be thought of as a Security Information and Event Management (SIEM) system adapted for AI operations, or an AI Governance Engine. It would ingest logs from various sources: the AI's decisions (from audit logs), system performance, external news (e.g., an AI gets flagged in another context), etc. Using rule engines and possibly ML anomaly detection, it would produce a risk score or alerts for each agent. If an AI starts doing unusually frequent transactions at odd hours, the engine might flag that as suspicious. If an AI's supplier selection pattern deviates from policy, that's noted. The risk engine can then trigger automated responses: e.g., if risk crosses a threshold, notify an admin or temporarily constrain the AI's permissions (this ties back to the authz system). Additionally, this engine contributes to the AI's Service Passport risk assessment field as discussed. In effect, it closes the loop for continuous assurance - not only do we set initial policies, but we actively watch and adjust as the AI operates. This is important because AI models can drift or unexpected scenarios can emerge, so a static approval isn't enough; ongoing governance is needed.
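A toy version of this feedback loop might look like the following: weight incoming events, keep a running score per agent, and constrain the agent once a threshold is crossed. The event types, weights, and threshold are made up for illustration.

```python
# Toy monitoring loop: weight incoming events, keep a running risk score per
# agent, and constrain the agent once a threshold is crossed.
EVENT_WEIGHTS = {
    "policy_deviation": 0.4,       # e.g. an unvetted supplier was chosen
    "security_alert": 0.5,
    "off_hours_activity": 0.1,
    "clean_audit": -0.3,           # good behaviour slowly lowers the score
}
RISK_THRESHOLD = 0.7

risk_scores = {}

def ingest_event(agent_did: str, event_type: str, restrict_agent) -> float:
    score = risk_scores.get(agent_did, 0.0) + EVENT_WEIGHTS.get(event_type, 0.0)
    score = min(max(score, 0.0), 1.0)
    risk_scores[agent_did] = score
    if score >= RISK_THRESHOLD:
        restrict_agent(agent_did)  # e.g. suspend the agent's PoA pending human review
    return score

ingest_event("did:example:agent-007", "off_hours_activity", print)
print(ingest_event("did:example:agent-007", "policy_deviation", print))   # 0.5, no action
```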
Verifiable Logging and Audit Trail (Provenance): All significant events should be logged in a tamper-evident way. This might involve a distributed log (or even Qualified Electronic Ledger in line with eIDAS 2.0 regulation) where entries are append-only and signed. Each log entry (e.g., "AI X executed transaction Y at time T") would carry digital signatures of the agent and perhaps the platform, plus a timestamp from a trusted timestamping service (another eIDAS service). As a result, the entire history of actions is provably intact. Auditors or regulators can be given read-access to these logs (with privacy controls as necessary). Using a DLT for this ensures no single party can alter history - which is critical in a multi-stakeholder environment to build mutual trust. If something goes wrong (say an AI made an improper trade), one can audit the chain to see who approved that AI, what data it used, what exactly it did, and who is accountable. This element underpins the non-repudiation principle: no one can later deny what happened, because the evidence is cryptographically locked. It complements the AI passport (which is more about current status) with a temporal sequence of events.
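A tamper-evident, append-only trail can be approximated with a simple hash chain, as sketched below; qualified timestamps, seals, and replication across nodes are omitted for brevity and would be layered on top in practice.

```python
import hashlib
import json
from datetime import datetime, timezone

# Tamper-evident audit trail as a simple hash chain: each entry commits to the
# previous one, so any later alteration breaks verification.
audit_log = []

def append_entry(agent_did: str, action: str) -> dict:
    prev_hash = audit_log[-1]["entryHash"] if audit_log else "genesis"
    entry = {
        "agent": agent_did,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prevHash": prev_hash,
    }
    entry["entryHash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    audit_log.append(entry)
    return entry

def chain_intact(log: list) -> bool:
    for i, entry in enumerate(log):
        expected_prev = log[i - 1]["entryHash"] if i else "genesis"
        body = {k: v for k, v in entry.items() if k != "entryHash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prevHash"] != expected_prev or entry["entryHash"] != recomputed:
            return False
    return True

append_entry("did:example:agent-007", "allocated 512 GPU-hours")
print(chain_intact(audit_log))   # True
```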
Architecturally, these elements work together as a Trust Layer "mesh" over the physical AI infrastructure. It might be visualized as multiple layers of defense and oversight:
At the base, hardware-enforced identity (e.g. HSM/MPC based IDs or TPM attestation for machines) feeds into the digital identity system.
In the middle, credential issuance and verification flows (using standards like W3C Verifiable Credentials, DIDComm for communication) enable agents to prove themselves to each other and to services.
Overarching everything, a governance network (a qualified electronic ledger and a network of trust service providers) coordinates trust lists, identity vetting, semantic models, conformance assessment, and audits.
One could call this a "Trust Layer Blueprint" for AI data centers. It ensures that every autonomous action is authenticated, every decision is authorized, every outcome is recorded, and every anomaly is detected. By design, it aligns with eIDAS 2.0 and leverages the EUBW as the user-friendly interface for companies to engage with these trust features.
To illustrate concretely: imagine an AI agent in the gigafactory wants to procure more cloud GPU hours to train a model. The steps might be:
The AI agent's controlling company issues it a PoA (via EUBW) to spend up to €50,000 on cloud resources.
The AI agent sends a request to the cloud management API with its EUBW authentication and attached PoA credential.
The cloud API's authorization service verifies the EUBW credential (ensuring the agent is legit and the PoA is valid for that action and amount).
The API logs this event in an audit log (the AI's ID, action details, timestamp).
The resource is allocated and the AI continues its task. The AI Service Passport for that agent is updated to record this delegation usage.
Meanwhile, the risk engine sees that this agent has spent €50k, which is within limits so no alert. But if it tried €500k, exceeding its mandate, the auth would reject it and flag the attempt to security officers.
Later, an auditor reviews the procurement - they query the audit trail and see a clear chain: Manager Alice -> PoA to AI Agent -> Agent initiated purchase, logged at 2025-07-01T10:00Z, fulfilled by CloudOps Service.
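Compressed into code, the flow above might look like the following sketch; the hooks passed in (identity check, PoA check, audit logger, risk reporter, passport updater) correspond loosely to the building blocks sketched in section 6, and all names are hypothetical.

```python
# The procurement flow above, compressed into one sketch. The hooks passed in
# correspond loosely to the building blocks sketched in section 6; all names
# are hypothetical.
def handle_procurement_request(agent_did: str, agent_credential: dict,
                               poa_credential: dict, amount_eur: int,
                               identity_ok, poa_ok, log_event, flag_risk,
                               update_passport) -> str:
    # Steps 2-3: authenticate the agent and validate its mandate for this action.
    if not identity_ok(agent_credential):
        return "rejected: unknown or unverified agent"
    if not poa_ok(poa_credential, "procure_cloud_resources", amount_eur):
        flag_risk(agent_did, "over_mandate_attempt")   # step 6: the EUR 500k case
        return "rejected: outside delegated mandate"
    # Steps 4-5: log the event, allocate the resource, update the passport.
    log_event(agent_did, f"procured cloud GPU hours for EUR {amount_eur}")
    update_passport(agent_did, poa_credential)
    return "approved"
```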
Through such scenarios, one can appreciate how friction is reduced (AI can act autonomously up to a point) while control is maintained (everything is within a sandbox of trust and oversight). This kind of architecture is what will make it feasible to run autonomous operations at scale in the AI gigafactory without constant human gatekeeping - because the trust gatekeeping is handled by the system.
7. Use Cases Demonstrating the Trust Framework
To make this more tangible, consider a few use cases in a gigafactory context and how the trust ecosystem enables them:
Pharmaceutical AI Gigafactory (Drug Discovery Collaboration): Suppose several pharma companies share an AI gigafactory to train models on clinical data (which is highly sensitive and regulated). Using the trust framework, each company's data is tagged with owner credentials and access policies. AI agents in the cluster might simulate drug molecule interactions using combined data, but an agent from Company A can only access Company B's data if a verifiable consent credential exists (perhaps issued under a data-sharing agreement). The EUBW would store such consents as VCs. All model training runs are logged with provenance - exactly which data was used and by which AI model version - to ensure traceability for FDA audits. If an AI model suggests a drug candidate, its AI Service Passport might include the provenance of the training data and the validation checks performed. Regulators could later review this passport to see if the model was trained on approved datasets and if the decision process was documented. Delegated authority is crucial here too: if an AI tries to, say, approve moving a drug into human trials, it would need a PoA from a human trial director. This ensures ethics and safety oversight remain in human hands where required. Essentially, the trust layer in this use case protects patient data, enforces collaboration terms, and leaves a compliance trail that all new pharma products derived from AI can be audited against. Without it, sharing such sensitive data in a joint AI facility would likely be a non-starter due to liability and privacy fears.
Autonomous Industrial Operations (Manufacturing "Lights-Out" Factory): Consider an automotive manufacturer running an autonomous factory floor optimized by AI agents (robotics control, supply chain ordering, etc.), possibly connected to the AI gigafactory for heavy compute analysis. Here, AI agents control physical equipment and make real-time decisions that affect safety. The trust framework ensures each robot or AI controller has a machine identity and AI PoA from the company certifying what tasks it can do. For instance, an AI is allowed to autonomously reorder parts when inventory is low, up to a cost limit, but not allowed to shut down safety systems - that is outside its mandate. If a robot AI deviates (say it tries to speed up a production line beyond safety limits), the risk monitoring system catches the anomaly and could even trigger an automatic safe-mode (perhaps via a smart contract that monitors telemetry and overrides commands if outside permitted range). All actions on the floor are logged; if an accident occurs, investigators can review the tamper-proof log to see if an AI malfunctioned or if it acted without proper authorization. The trust system also enables secure interactions with external systems: e.g., the factory's AI agent can negotiate with a supplier's AI (autonomously ordering parts) because they exchange trusted credentials first (each knows the other is a legit agent of a known company, not a hacker bot). This corresponds to a supply chain use case described in the strategic insights: "AI-powered supply chain agents must verify and approve shipments, invoices, and customs clearances at multiple checkpoints," which they do by using persistent encrypted communication and credentials to ensure they operate within boundaries and prevent tampering. The end result is a highly automated factory that maintains safety and accountability - every autonomous action is under a leash of trust. This gives manufacturers the confidence to run "lights-out" operations (with minimal human oversight) because the trust layer is the invisible supervisor.
Logistics and Supply Chain Orchestration: Expand the scenario to a global logistics network where AI agents schedule shipments, clear customs, and manage inventory across multiple companies. In a gigafactory context, perhaps a logistics AI hub coordinates between factories and warehouses. The trust framework allows different parties (manufacturers, shippers, customs authorities) to accept decisions from each other's AI because of standardized trust. For example, an AI from a shipping company presents an AI Service Passport when interfacing with a smart port's systems; the port's systems automatically verify that the AI has a valid credential from Shipping Co., is certified for customs declarations, and hasn't been revoked for any violations. Thus, the port lets the AI schedule unloading of cargo at a specific time slot. Smart contracts might automate some of this, only executing a cargo handover if all agents involved have the necessary credentials (this eliminates paperwork and manual checks). If an issue arises - say an AI approves a shipment that violates sanctions - the provenance logs quickly identify which AI did it, under whose authority, and what it knew at the time, enabling rapid accountability. One can even integrate IoT devices (trucks, containers) that also carry eIDAS-backed device identities, so the AI agents can trust the data (e.g., temperature sensors in a pharma shipment are authenticated, ensuring the AI reacts only to genuine alerts, not spoofed data). This use case highlights how the trust ecosystem not only secures actions within a single data center, but can link multiple organizations' processes into a trusted supply chain web. In fact, as noted in research, "PoA-linked credentials in EUBW Wallets and VCs create an audit trail of AI-driven decisions in supply chain and procurement, mitigating liability and strengthening compliance." This is exactly what our framework accomplishes for logistics AI.
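The port-side verification described above could, in simplified form, look like the following sketch. The AIServicePassport fields, the trusted-issuer list, and the revocation set are hypothetical stand-ins; in production these checks would draw on eIDAS trust lists and credential status services rather than in-memory sets.

```python
# Minimal sketch (illustrative only): a port system verifies an AI Service
# Passport (AISP) before granting an external agent an unloading time slot.
from dataclasses import dataclass

@dataclass
class AIServicePassport:            # hypothetical, simplified AISP record
    agent_did: str
    issuer: str                     # e.g. the shipping company's EUBW identity
    certifications: set[str]        # e.g. {"customs_declaration"}
    revoked: bool = False

TRUSTED_ISSUERS = {"did:example:shipping-co"}   # in practice: eIDAS trust lists
REVOKED_AGENTS: set[str] = set()                # in practice: credential status lookup

def verify_passport(aisp: AIServicePassport, required_cert: str) -> bool:
    """Accept the agent only if its passport is trusted, certified, and not revoked."""
    return (aisp.issuer in TRUSTED_ISSUERS
            and required_cert in aisp.certifications
            and not aisp.revoked
            and aisp.agent_did not in REVOKED_AGENTS)

aisp = AIServicePassport("did:example:ship-agent-7", "did:example:shipping-co",
                         {"customs_declaration"})
if verify_passport(aisp, required_cert="customs_declaration"):
    print("Unloading slot granted")             # in practice: logged for provenance
```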
These examples demonstrate that whether it's pharma, manufacturing, or logistics, a common trust layer enables autonomous processes to scale up safely. Participants can let AI agents negotiate and act on their behalf with much less fear of losing control or oversight. It's important to note that this doesn't remove humans from the loop entirely - rather, it puts humans in the right loop. Humans (and their organizational policies) set the boundaries via credentials and can always intervene by revoking or adjusting those credentials. But they don't have to micromanage every step; the trust framework enforces the rules consistently and transparently. This is critical for achieving the efficiency gains of AI without sacrificing the "trust and verify" ethos that regulators, partners, and the public demand.
Consider, for instance, an autonomous truck platooning scenario (logistics plus autonomous vehicles): multiple trucks from different companies form a convoy managed by AI. Each truck's AI has a passport stating that it passed safety checks and is delegated by a carrier company to join convoys. The trucks establish a DIDComm secure channel to coordinate driving speeds. If one truck's AI starts deviating (perhaps due to a sensor fault), the others detect the risk (via anomaly/risk scores shared in real time) and can vote to eject that truck from the convoy for safety - because the trust framework allows sharing of trusted status information. The malfunctioning AI might have its PoA auto-revoked by its company when the risk is flagged. This kind of real-time, cross-entity trust signaling is only possible with a robust underlying system like the one we described. It shows how trust infrastructure can directly affect physical-world outcomes (preventing accidents) in use cases that mix AI and IoT.
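As a simple illustration of this risk-score voting, the sketch below shows how peer trucks' shared risk estimates could drive an eject decision followed by a PoA revocation. The threshold, quorum, and identifiers are assumptions for illustration, not part of any platooning or DIDComm standard.

```python
# Minimal sketch (illustrative only): peer trucks vote on ejecting a suspect
# truck based on shared risk scores; the carrier then revokes its PoA.
def should_eject(observed_scores: dict[str, float],
                 threshold: float = 0.8, quorum: float = 0.5) -> bool:
    """observed_scores maps each peer truck to its risk estimate for the suspect."""
    votes = sum(1 for score in observed_scores.values() if score > threshold)
    return votes / max(len(observed_scores), 1) > quorum

# Two of three peers independently rate truck "did:example:truck-3" as high risk.
peer_view = {"did:example:truck-1": 0.91,
             "did:example:truck-2": 0.85,
             "did:example:truck-4": 0.40}

if should_eject(peer_view):
    revoked_poas = {"did:example:truck-3"}      # carrier-side auto-revocation of the PoA
    print("Truck ejected from convoy; PoA revoked:", revoked_poas)
```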
8. Recommendation: Building an EUBW-Based Trust Ecosystem for AI Gigafactories
To ensure the success of national and European AI gigafactories, it is highly recommended to implement a unified trust ecosystem grounded in eIDAS 2.0 and the European Business Wallet. The analysis above illustrates that simply deploying massive compute power is not enough - trust is the enabling layer that will allow Europe's AI infrastructure to be used to its full potential, by multiple stakeholders, in a secure and sovereign manner.
Key recommendations and concluding thoughts:
Embed the Trust Layer from Inception: Trust and compliance mechanisms should be architected into the gigafactory from day one, not bolted on later. This means allocating budget and effort to set up the EUBW infrastructure, credential management systems, and audit ledgers as part of the project deliverables. The gigafactory's technical blueprint should include the "Trust Stack" alongside compute, storage, and networking. Given that the EU is backing these projects with public funds, it can set conditions that recipients implement eIDAS-compliant identity and trust measures. This ensures consistency across all funded AI centers (so they can interoperate and trust each other's outputs). As Ursula von der Leyen's Competitiveness Compass indicated, the EUBW is to be a cornerstone of Europe's digital strategy; thus AI infrastructure projects should treat it as such.
Leverage European Standards and Providers: Utilize European trust service providers and frameworks to deploy the solution. For example, work with QTSPs to issue the necessary digital certificates, and use EU frameworks like Gaia-X (which focuses on federated cloud trust) to align the gigafactory with broader European cloud initiatives. Nvidia's outreach to Europe shows that even global tech players recognize Europe's desire for "sovereign AI". By collaborating with them under the EU's rules (e.g., Nvidia offering to allocate chip production to Europe and adapt to the EU's trust requirements), we can ensure the hardware and software both adhere to Europe's trust and transparency values. The recommendation is not to simply import a turnkey "AI cloud" from abroad, but to build on Europe's evolving trust architecture (EUBW, eIDAS, GDPR, AI Act) - this will yield an AI infrastructure that is globally distinctive for its trustworthiness.
Create Multi-Stakeholder Governance: Set up a governance board or consortium for each AI gigafactory's trust ecosystem, including representatives from the operators, user companies, government, and trust service providers. This body can oversee the policies (e.g. defining standard credential profiles, risk thresholds, response procedures for incidents). It also signals to all participants that the trust system is neutral and collaboratively managed, not owned by any single company. This is important for adoption: businesses will be more willing to rely on the shared trust layer if they have a say in its governance and can ensure their requirements are met. The EU could facilitate a Trust Framework Consortium that spans all gigafactories, ensuring alignment (similar to how the EUDI Wallet Consortium is piloting the wallet across states). Such coordination will accelerate learning (e.g., if one facility develops a great AI Passport schema for healthcare AI, it can be adopted by others).
Focus on High-Impact Use Cases to Demonstrate Value: Early on, pick a few flagship applications (like the pharma research collaboration or supply chain automation described above) and implement them with the full trust stack. Showcasing a successful cross-company AI project, enabled by EUBW-based trust, will build confidence and demand. It will answer the question "why do we need all this overhead?" by proving that certain things that were impossible before are now possible - e.g., competitors safely pooling data for AI training, or fully autonomous trade finance transactions executed with regulatory approval logged in real time. These success stories will drive wider adoption of the trust ecosystem (similar to how initial use cases of digital signatures in the EU drove broader uptake). They will also help refine the technology: real-world use will highlight any usability issues or gaps, allowing improvement of standards and tools.
Ensure Human Oversight and Ethical Alignment: A trust framework is not just about technology; it's about enabling controlled autonomy. We recommend that organizations establish internal processes to use the new tools responsibly. For example, a company should have an "AI Delegation Committee" that decides which tasks to delegate to AI and issues PoAs accordingly, under oversight. The trust layer will provide fine-grained control, but it must be used wisely. Regular audits (perhaps with regulators observing) should be conducted, which, thanks to the system, will be easier: one can audit AI decisions by checking logs and credentials rather than digging through unstructured records. Embedding ethical guidelines into credential issuance (such as requiring a human co-sign-off for any high-risk AI) can further align this ecosystem with the AI Act's forthcoming human-in-the-loop requirements for high-risk AI. Essentially, the recommendation is not to treat the trust technology as a license to let AI run wild, but as guardrails that augment human governance.
Network and Ecosystem Growth: As each gigafactory stands up its trust layer, connect the gigafactories into a federated European AI trust network. This could mean establishing mutual recognition of credentials (an AI agent certified in Gigafactory A could be trusted in Gigafactory B if needed, etc.). Perhaps an EU AI Trust Mark (I highly recommend this!) could be developed - akin to a safety certification - awarded to AI agents or services that operate under this rigorous trust framework. This could even extend to AI products delivered: e.g., if an AI model is developed in the gigafactory with full provenance tracking and validation, it could carry a "Trustworthy AI - EU Gigafactory" label when deployed to market, indicating it was developed under compliant conditions. This helps differentiate EU AI solutions globally as being trustworthy and compliant by design, which could be a competitive selling point.
Europe's push to build AI gigafactories must go hand-in-hand with building the trust scaffolding around them. Investing in GPUs and data centers will create raw capacity; investing in the EUBW-based trust ecosystem will ensure that capacity is used in a secure, collaborative, and value-generating way. Europe has an opportunity to lead not just in AI compute but in AI governance - creating a model for how to deploy powerful AI responsibly at scale. By implementing the recommendations above, the EU can turn its gigafactories into more than just server farms; they will be cradles of trustworthy AI innovation, where businesses dare to pursue transformative AI projects because the proper checks and balances are in place. In the long run, this trust framework will be the differentiating factor that makes Europe's AI ecosystem globally competitive. It's how we ensure that if we build the gigafactories, a vibrant, secure AI industry will indeed come - one that embodies European values of privacy, safety, and trust while driving economic growth.
As one expert insight put it, "AI trust is a business necessity." Embracing that philosophy, with the European Business Wallet as the linchpin, will be the key to turning Europe's ambitious AI infrastructure into a lasting success.
Learn more about Spherity's EUBW-ready solution tailored for Trusted AI.
9. Call to Action for EU Policymakers
Europe has a unique opportunity: not only to build powerful AI infrastructure, but to "build it with trust at its core". This is where Europe can lead - and not follow.
What truly sets Europe apart in the global AI race is not just compute - it's trust by design. With eIDAS 2.0 and the European Business Wallet (EUBW), the EU is the only region in the world creating a legally backed trust system for AI. This enables:
verifiable digital identities for companies and agents
secure delegation of tasks to AI systems (Power of Attorney)
full auditability of AI actions across borders
Combined with the upcoming AI Gigafactories, this gives Europe a strategic edge: an AI ecosystem that is not only powerful, but secure, transparent, and legally reliable. This is a unique selling point for Europe - and a chance to attract global investment into trustworthy, sovereign AI made in Europe.
What must policymakers do now?
Ensure that trust is embedded by design in all InvestAI-funded projects - make the Trust Layer a mandatory component, not an optional extra.
Support full rollout and adoption of the European Business Wallet (EUBW) across all sectors involved in AI development and use.
Fund pilots that combine AI Gigafactories and EUBW-based trust infrastructure, especially in regulated industries like health, mobility, and energy.
Promote European standards globally: position eIDAS 2.0 and the EUBW as the global benchmark for trusted AI.
Coordinate trust infrastructure governance via a dedicated EU-level body, involving Member States, trust service providers, and industry stakeholders.
Include the trust infrastructure (EUBW, eIDAS 2.0, AISP) as a core element in the EU's Digital Internationalisation Strategy, to support global adoption and trusted cross-border AI collaboration.
If Europe fails to lead on trust, others will dictate the rules. If Europe embeds trust into AI from the start, it can become the global home for safe and sovereign AI.
The time to act is now.