
Verifiable Credentials in ‘Private Individual’ vs ‘Employee’ Contexts (X2I vs X2E)

  • Writer: Carsten Stöcker
  • Jun 5

The article was originally published by Carsten Stöcker on Medium.


Introduction

In the W3C Verifiable Credentials (VC) community, a set of questions was raised by Manu Sporny regarding tracking and consent in the context of first responders using digital credential “badges.” This discussion highlighted a broader issue: VCs might be used very differently in a personal context versus an employment context.


In this article, we focus exclusively on Verifiable Credentials issued to natural persons — distinguishing between private individual (X2I) and employee (X2E) scenarios. It shall be noted that discussions about tracking, auditing, or “phoning home” related to legal person credentials — such as those tied to companies or organizations — are outside the scope of this analysis. Legal entities are often subject to mandatory transparency and control disclosure obligations via public registers, and tracking such entities or their representatives is frequently a regulatory requirement for compliance and accountability purposes. This article instead examines the privacy and technical implications when natural persons are involved, either in their private lives or as employees operating within institutional workflows.


For example, a VC used by a private individual to prove age or identity is governed by different expectations than a VC issued by an employer to track an employee’s activities. In this document, we analyze the use of VCs with a focus on these two paradigms — referred to as X2I (entity-to-Individual) and X2E (entity-to-Employee).


We define X2I (I = Individual) as a neutral, inclusive category for all natural persons regardless of citizenship, immigration status, or legal residence. X2I covers use cases in which a business, government, machine, or another individual interacts with an individual person in their private capacity — for example, to verify age, education, or vaccination status.


In contrast, X2E (E = Employee) use cases involve credentials issued to or used by a person in an employment, organisational or business context, including internal employees, external contractors, or affiliated agents acting on behalf of an organization.


We explore how privacy expectations, legal frameworks, and technical requirements differ between these scenarios, using the first responder digital badge use case as a running example. Additionally, we address the emerging question of distinguishing private individual wallets from employment-related wallets to enhance privacy and operational clarity. The goal is to provide a comprehensive, fact-based analysis to inform future W3C and industry practices on semantic VC data model requirements, consent management, tracking, and data protection.


Privacy Perspectives in Private Individual vs Employee Contexts

From a privacy standpoint, there is a clear contrast between individual-centric and employee-centric credential use. Individuals (in consumer or civic contexts) generally expect that their personal data will be used sparingly and with their consent, under strong data protection laws.


For instance, regulations like the GDPR in Europe treat individuals as data subjects with robust rights, and any credential use (e.g. a digital ID or license) must respect principles of data minimization and purpose limitation.


In contrast, employees may reasonably expect some monitoring of their activities on the job — employers have legitimate interests in supervising work for productivity, compliance, security, or safety. However, even in the workplace, privacy rights do exist: monitoring should be proportional, transparent, and limited to work-related contexts. Employees often must be informed about what data is collected and how it’s used, and excessive surveillance (beyond legitimate business needs) can violate labor regulations or civil liberties.


Figure 1: Employee tracking in office and industrial environments, aided by real-time location systems (blue markers). Many organizations use digital systems to monitor attendance, location, and safety compliance of their workforce.

One key difference is consent:


A private individual using a credential (say, to verify their age to buy alcohol) typically gives consent voluntarily at the point of use, and they can decline if they feel it invades their privacy.


An employee, on the other hand, might be required to use certain credentials or tracking systems as a condition of employment (for example, an ID badge to enter the workplace or a system that logs their truck’s location).


In legal terms, employee consent is tricky — due to the power imbalance, labor regulators often consider such consent not fully voluntary. Instead, employers must rely on other justifications (like legitimate interest or contractual necessity) and ensure safeguards are in place.


Thus, in an X2I scenario, the emphasis is on user control and minimal disclosure, whereas in an X2E scenario, there is more tolerance for data collection within bounds — with an expectation of accountability and transparency from the employer’s side.


Necessity for Distinct Private Individual and Employee Digital Identity Wallets

Given the differences in privacy expectations and consent management between individual-centric (X2I) and employee-centric (X2E) scenarios, there is a practical need to clearly separate private and employment-related digital identity wallets. As previously discussed, employee consent is often considered less voluntary due to inherent power dynamics, necessitating additional safeguards around transparency and accountability. A distinct wallet infrastructure for employment-related credentials can thus provide better clarity and more robust controls for managing consent and data privacy.


A pertinent example highlighting the need for this distinction is found in the European Union’s eIDAS 2.0 regulation and its implementation of the European Digital Identity Wallet (EUDI Wallet). Under eIDAS 2.0, EU citizens and residents are generally expected to derive their identity credentials, such as digital identity cards or mobile driver’s licenses, into a single wallet instance — typically on their private devices. This regulatory approach poses significant practical challenges when distinguishing between personal and professional digital identities.


Firstly, individuals may be reluctant to use their personal devices for work-related digital transactions due to privacy concerns and the desire to maintain a clear boundary between personal and professional spheres.


Secondly, organizations often enforce security policies that prohibit the use of personal devices for business purposes to mitigate risks associated with data breaches and ensure compliance with industry regulations.


These factors, in combination with business workflow requirements, contribute to the phenomenon known as the “wallet dance,” where employees are compelled to switch between personal and professional contexts and infrastructures. This not only leads to inefficiencies, additional data privacy concerns, and potential security vulnerabilities in employment-related processes, but also introduces significant process and user experience (UX) complexity — especially when business processes are forced to interact with personal wallets on private hardware, outside the organization’s managed infrastructure. This back-and-forth between private wallets, power of attorney (PoA) and authorisation credentials (e.g. a first responder digital badge), organisational identity, and enterprise infrastructure was closely examined in the EU EUDI Large Scale Pilot projects, where it was found to result in fragmented workflows, poor UX, and increased operational friction.


Case Study: Vienna Election Staff Rejecting Use of Private Phones for ID Verification

Context:

In preparation for the municipal elections in Vienna, Austria, the government explored the potential use of the Austrian digital identity card (eAusweis) to streamline the voter identification process. The eAusweis is a smartphone-based digital credential that could replace traditional ID cards during voting.


Challenge Identified:

The pilot project was ultimately abandoned due to one critical obstacle: election staff refused to use their private smartphones to scan and verify digital IDs. The rejection was grounded in concerns about privacy, technical complexity, and the security of mixing personal devices with official state processes.


Key Concerns Raised by Election Workers:

Privacy and Consent: Election workers expressed unease about using their own devices for processing sensitive data.


Security: There was no assurance that personal phones met the necessary cybersecurity standards for handling government ID verification tasks.


Organizational Policy: The government did not provide secured, dedicated devices for ID verification, making the use of private phones the only option — which proved unacceptable to the election staff.


Outcome:

Due to these concerns, the authorities reverted to a fully analog process. Voters were required to present physical ID cards, and no digital identity tools were deployed.


Implications:

This case reinforces a critical insight in the context of verifiable credentials and wallet architectures: relying on personal devices for employment-related digital tasks poses a significant barrier to adoption. It supports the argument for a dedicated employee wallet infrastructure — separated from private wallets — to address legal, privacy, and operational concerns in enterprise and public sector environments.


Figure 2: Illustrating the failure of Austria’s eAusweis during the 2025 Vienna municipal elections. A cheerful voter attempts to use their digital ID (eAusweis), but an election staff member refuses to use their personal phone for verification due to privacy and security concerns — forcing the voter to search for their physical ID card in frustration.

To address these issues, it is proposed to establish a dedicated business (or eGovernment) cloud wallet infrastructure that encompasses legal person and natural person identity, employee-specific credentials, and verification/issuance capabilities. This approach necessitates the integration of verified personal identity attributes into the employee wallet to ensure attributability, authenticity, and trustworthiness in professional transactions. Such a dual-wallet system would facilitate clearer consent management, allowing employees to have greater control over their data and providing organizations with a structured framework to enforce privacy policies and compliance requirements.


Implementing separate wallets for personal and professional use aligns with the principles of data minimization and purpose limitation, as exemplified by the EU General Data Protection Regulation (GDPR). It enables a more granular approach to data governance, ensuring that personal data is processed strictly within the context for which it was collected. Moreover, this separation supports the development of user-centric identity management systems, fostering trust and enhancing the overall security posture of digital identity frameworks, such as eIDAS 2.0 in the EU.


However, in less controlled environments, hybrid wallets combining both personal and employment-related credentials may also be feasible, provided that consent management and privacy protections are tailored appropriately to address the differing expectations of personal and professional contexts. It should be noted that when general-purpose hybrid wallets are used, the management of consent and privacy in employment-related use cases is typically less clearly defined and less customizable, unless major technology providers such as Apple or Google establish standardized offerings specifically designed for employment-related hybrid wallet scenarios.


The establishment of distinct private and employee digital identity wallets or appropriately managed hybrid solutions is essential to reconcile the objectives of user privacy, organizational security, and regulatory compliance.


Business Requirements for Employee Tracking

Organizations have a variety of business requirements that lead them to track employee credentials and activities. Unlike private individual use cases (which seldom require ongoing tracking), employee use cases often demand logging and oversight for operational reasons. Some common examples include:


Authentication and Access Logs: Companies routinely log employee sign-ins to IT systems and entry swipes to facilities. Every login with an employee credential or badge is recorded to monitor authorized access, detect intrusions, and meet compliance requirements. Best practices in cybersecurity and auditing call for capturing details of each authentication event (user, time, success/failure). These logs provide accountability — for example, knowing which employee accessed a sensitive database — and they are often required by standards (such as ISO 27001) or regulations in finance and healthcare.


Commercial Vehicle Telematics: In industries like transportation, employers track vehicles and driver activities through onboard devices. Truck drivers in the U.S., for instance, are required by law to use Electronic Logging Devices (ELDs) that automatically record driving hours. This ensures compliance with safety rules on rest times. Such systems may capture location and engine data continuously. Notably, privacy protections are sometimes built in — the U.S. ELD mandate does not require real-time GPS tracking by regulators, and it limits the granularity of location data shared during inspections to protect drivers’ privacy. Still, the employer typically has access to fairly detailed telemetry for dispatch and safety (with the caveat that it should not be used to harass drivers or violate other laws).


Safety Monitoring in Hazardous Environments: In manufacturing, construction, mining, and other high-risk fields, employers increasingly deploy wearable devices or digital badges to monitor workers’ well-being and location for safety purposes. For example, wearable sensors can report an employee’s heart rate, heat stress, or proximity to dangerous equipment in real time. This data helps prevent accidents (e.g., by detecting fatigue or an unsafe condition), but it means the employee is continuously tracked during work hours. Successful safety programs balance this monitoring with privacy — e.g., collecting only necessary data and assuring workers that health info won’t be misused. Constant monitoring raises concerns and must be clearly justified to workers. Indeed, companies adopting such wearables are advised to be transparent about what is collected and even consider making the program voluntary at first, highlighting the need for consent and trust even in employer-driven initiatives.


Physical Access Control Logging: Modern workplaces often use digital access badges or mobile credentials to secure premises. Each door entry is automatically logged, creating a timeline of who is in which area. These records serve security (detecting unauthorized access) and can be crucial in emergencies — for example, to ensure everyone is accounted for during an evacuation (often called a “mustering” report). Access control logs are typically considered a standard, even mundane, aspect of building management. Employees generally expect that badge swipes are recorded for later review. Furthermore, integration of access systems with other tools (like video surveillance and visitor management) means an employee’s entry might trigger other data captures. All of this data must be handled according to privacy laws and company policies, e.g., used only for security and not to micro-manage without cause.
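
To make the shape of such records concrete, here is a minimal TypeScript sketch of an access-event log and the mustering report described above. All field and function names are illustrative, not taken from any specific product or standard.

```typescript
// Minimal sketch of the kind of event record access and authentication
// systems keep. All names here are illustrative.
interface AccessEvent {
  subjectId: string;   // employee or badge identifier
  resource: string;    // door, area, or IT system accessed
  timestamp: Date;     // when the event occurred
  success: boolean;    // access granted or denied
  direction: "entry" | "exit";
}

// A "mustering" report: who is currently inside a given area, based on each
// person's most recent successful entry/exit event.
function musterReport(events: AccessEvent[], area: string): string[] {
  const last = new Map<string, AccessEvent>();
  for (const e of events) {
    if (e.resource !== area || !e.success) continue;
    const prev = last.get(e.subjectId);
    if (!prev || e.timestamp > prev.timestamp) last.set(e.subjectId, e);
  }
  return [...last.values()]
    .filter((e) => e.direction === "entry")
    .map((e) => e.subjectId);
}
```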


These examples demonstrate that X2E (to-Employee) credential use is often intertwined with operational tracking. Employers track logins, vehicle usage, safety compliance, and physical movement as part of legitimate business and safety requirements. By contrast, in X2I (to-Individual) scenarios, such pervasive logging would be seen as intrusive. (For instance, a government-issued digital driver’s license used by a citizen shouldn’t continuously report the citizen’s movements — that would violate personal privacy and likely face public outcry.) Thus, the design of credential systems must account for these different expectations, providing flexibility for business needs while safeguarding individual rights.


Analysis of the First Responder Use Case



The first responder digital badge scenario is a prime example of an X2E use case that sparked the recent discussion. First responders (such as paramedics, firefighters, and disaster relief volunteers) are typically employees or agents of an organization (government agencies, fire departments, hospitals, etc.), and they operate in environments where accountability and real-time coordination are critical.


Issuing them Verifiable Credentials as “digital badges” is intended to improve upon traditional methods of identification and status tracking at incident sites. Moreover, these credentials can significantly enhance the security of communications by enabling secure, authenticated exchanges exclusively between authorized first responder organizations and personnel. This reduces the risk of cyber-attacks by malicious actors attempting to disrupt or compromise critical access control and communications during emergencies.


Let’s break down why this is considered an X2E case and how it relates to business and safety requirements:


Employment Context: A first responder’s badge is issued by an authority (e.g. a fire department or emergency management agency) to an individual in their role as an employee (or authorized responder). It’s not a generic private individual or citizen ID — it’s tied to their organizational role, training, and permissions. Using this credential is part of doing their job (responding to an incident), not a personal consumer activity. In other words, it’s “Organization-to-Employee”: the organization needs to verify and track its personnel.


Operational Tracking Needs: During an incident, knowing “who is where and doing what” can save lives. Incident commanders must track which responders are present, what qualifications they have (medic, HAZMAT training, etc.), and when they check in or out of the site. Traditionally, this might be done with sign-in sheets, paper ID cards, or manual radio reports. A VC-based badge system can digitize and automate this. For example, when a responder arrives and presents their digital badge, the system logs their presence (fulfilling a clock-in function) and could instantly verify their certifications or licenses on the spot. When they leave, it clocks them out. This matches common workforce management tasks — essentially a specialized time-and-attendance system for emergency scenarios. Indeed, in the DHS pilot project, one of the goals is to enable incident management software to perform “clock in/clock out functionality and real-time skills/training auditing” using the credentials. (A minimal sketch of such a check-in flow follows this list.)


Safety and Accountability: In dangerous operations (burning buildings, disaster zones), it’s vital to account for everyone. A digital badge can help ensure no one is left unaccounted for. If each entry/exit is logged, commanders know who is inside a hazardous area at any given time. This is analogous to access control logs in a building, but in an ad hoc emergency scene. It also provides a record after the fact for debriefing and any investigations. The justification for tracking is very strong here — it’s directly tied to safety of the responders and effective management of the emergency.


Relevance to Existing Requirements: The first responder use case doesn’t exist in a vacuum; it builds on known requirements. For instance, emergency services in the U.S. have systems for identifying credentials across jurisdictions (there are initiatives for interoperable responder IDs) and for resource tracking under Incident Command System (ICS) protocols. The VC approach is meant to make this more efficient and trustworthy (via cryptographic verification). By aligning with global standards (W3C VCs, Decentralized Identifiers, and even ISO/IEC 23220 for badge credentials), the goal is to avoid proprietary solutions. In short, the use case is legitimate because it stems from real needs: ensuring only qualified personnel access a disaster scene, coordinating multi-agency response, and logging activities for post-incident analysis.
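
As flagged in the operational-tracking point above, the following hypothetical TypeScript sketch shows the core of such a check-in flow. The ResponderBadge shape, the type-string check, and the roster map are assumptions made for illustration; a real system would also verify the credential’s cryptographic proof and revocation status.

```typescript
// Hypothetical check-in flow: the incident system inspects a presented
// responder badge and records presence. All names are illustrative.
interface ResponderBadge {
  type: string[]; // e.g. ["VerifiableCredential", "EmergencyResponderBadge"]
  credentialSubject: { id: string; role: string; certifications: string[] };
}

interface CheckInRecord {
  responderDid: string;
  role: string;
  checkedInAt: Date;
  checkedOutAt?: Date;
}

const roster = new Map<string, CheckInRecord>(); // responder DID -> record

function checkIn(badge: ResponderBadge): CheckInRecord {
  // A real system would first verify the credential's proof and status.
  if (!badge.type.includes("EmergencyResponderBadge")) {
    throw new Error("Not a responder badge");
  }
  const record: CheckInRecord = {
    responderDid: badge.credentialSubject.id,
    role: badge.credentialSubject.role,
    checkedInAt: new Date(),
  };
  roster.set(record.responderDid, record); // commanders see who is on site
  return record;
}

function checkOut(responderDid: string): void {
  const record = roster.get(responderDid);
  if (record) record.checkedOutAt = new Date(); // clock-out timestamp
}
```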


Given these points, the first responder credential clearly fits the X2E pattern. The responders are acting as employees/agents, and the system is intentionally tracking certain data (identity, qualifications, timestamps of involvement) as a matter of organizational necessity. Importantly, this tracking is not an end in itself — it’s in service of public safety and responder safety.


In this context, the addition of privacy-enhancing technologies (PETs), such as zero-knowledge proofs or unlinkability mechanisms, offers limited advantage. X2E use cases — particularly in emergency response — are, by design, highly correlated: the employee’s identity, role, and organizational affiliation are expected to be known and verifiable by authorized systems and personnel. Attempting to obscure this data would undermine core operational goals, such as rapid trust establishment and role-based coordination in high-pressure environments. Moreover, the use of PETs in such contexts would increase system complexity, introduce potential usability challenges for frontline personnel, and may hinder interoperability between agencies that must collaborate quickly using shared data formats.


Therefore, we recommend focusing on simplicity and operational robustness. Rather than relying on advanced PETs, the emphasis should be on implementing clear data minimization practices within the credential’s data model — ensuring that only the necessary and proportionate data elements are included for the specific task or operational context. This approach better aligns with real-world needs of first responder workflows while still adhering to core data protection principles.


Nonetheless, it raises the critical question that Manu Sporny and others have posed: how do we implement such a system without undermining privacy and consent principles? Should a responder have to formally consent to this data logging? And how do we prevent misuse of the data (for example, using it to discipline responders in ways unrelated to the incident, or, worse, a government repurposing the system to monitor employees in everyday life)? These questions lead us to the broader evaluation below.


Evaluation of Key Questions

In the aforementioned W3C community discussion, several key questions were raised about the first responder badge scenario and, by extension, similar X2E cases. Here we evaluate each of those questions:


Legitimacy of the Use Case: Is it appropriate to use the VC technology for this kind of employee tracking scenario? Based on the analysis above, the use case is legitimate and compelling. First responders operating with digital credentials stand to benefit from faster authentication and interoperability across organizations. The fact that the U.S. DHS funded pilots for VC-based first responder badges underscores that this is a real-world need, not a contrived example. Moreover, such patterns already happen (albeit with less sophisticated tech): badges, RFID tags, and ICS check-in processes are common. Using VCs is an evolution to improve security (cryptographic verification of credentials) and efficiency (standard data formats) in a process that is already considered necessary. That said, legitimacy depends on proper use: the system should be used only for its intended purpose (incident response) and not as a general surveillance tool. Assuming it’s constrained to that scope, it falls well within accepted business requirements and public interest.


Need for Standardization: Do we need to standardize features or practices in VC protocols to accommodate this use case (and similar ones)? In a word, yes, there is a need for some standard guidance. Currently, as we will discuss, the VC Data Model provides generic mechanisms (like the ability to express terms of use) but does not specify how to handle scenarios of ongoing tracking or consent logging. The first responder project itself aims to produce a specification or profile for such badges that could go “standards-track”. This implies that without a standardized approach, each organization might implement tracking in an ad-hoc way, potentially leading to inconsistencies or privacy gaps. Additionally, from a policy perspective, we heard that no internationally recognized framework yet governs workplace monitoring tech in a consistent, rights-respecting way. Standards development (whether in W3C or elsewhere) can fill this void by defining how an “employee credential” should signal its terms and how software should handle them. Standardization could cover things like a common credential type for employment credentials, a consistent way to indicate that the issuer will log presentations, and best practices for verifiers. This would help vendors build interoperable solutions and set expectations for privacy. It would also dovetail with broader efforts (e.g., an updated code of practice from the International Labour Organization, if one were developed to address digital credentials).


Privacy-Preserving Alternatives: Are there ways to achieve the goals of the use case with better privacy, and have those been considered? This is a nuanced question. Some privacy-preserving techniques common in the VC world (such as zero-knowledge proofs or minimal disclosure) seem at odds with the requirement here (which is essentially to identify and log a particular person’s presence and qualifications). If the goal is to know Alice the EMT is on site from 2:00pm to 6:00pm, you cannot fully anonymize that — the whole point is to track Alice’s identity and status. However, privacy can still be protected by minimizing data and access. For example, the credential might include lots of personal information about Alice (home address, date of birth, etc.), but the verification at the incident should perhaps only pull the necessary data (name, role, certifications) — a VC could allow selective disclosure to achieve that. Another alternative is ensuring that location data is not over-collected: the system might log when a responder checks in, but not continuously track their GPS location throughout the operation (unless needed for safety). In essence, collect only what is required for the task. Additionally, any data that is collected can be handled in a privacy-preserving way: stored securely, retained only for a limited time, and not shared freely. We can also draw inspiration from the ELD example: the ELD devices record granular data for the trucking company, but when transmitting externally (to regulators), they redact or reduce location detail to protect privacy. Similarly, a first responder system could ensure that detailed logs stay within the incident management system and are not broadcast beyond authorized parties. Finally, one might ask if an opt-in model is possible: could responders choose not to be tracked? Realistically, in critical incidents, opt-out is not feasible — commanders need a complete picture. But obtaining prior consent (as part of the employment agreement or volunteer sign-up) with clear limitations on use is crucial. In summary, while you cannot eliminate tracking in this scenario, you can engineer the system to be as privacy-preserving as possible given the constraints. This means using VCs to limit unnecessary data sharing and implementing policy safeguards (discussed later) so that the tracking doesn’t extend beyond its justified scope.


Wallet and Verifier Guidance: What guidance should be provided to digital wallet providers and verifiers regarding such use cases? This is an important consideration because the software (wallets and verification platforms) mediates the user experience and data flow. For wallets (holder software), guidance could include:


Recognizing when a credential has certain terms of use or flags that indicate tracking. For instance, if an employer-issued credential includes a term like “Usage of this credential will be logged by [Issuer] for safety and compliance,” the wallet might visually flag this credential as “Work Credential” or notify the user upon first use.


During the issuance process, the wallet could display the terms of use and require the user (employee) to acknowledge them. This is analogous to how some identity wallets show you the attributes that will be shared and ask for consent. Here, it might show: “Issuer X is issuing you a First Responder Badge. By accepting, you agree that whenever you present this badge to authorized verifiers, your attendance and identity will be recorded for incident management purposes.” The user would then accept to proceed. Making this part of the UX ensures consent isn’t just a paper form lost in HR files, but a living part of the credential.


For verification interactions, wallet UI could remind the holder, in context, that “You are about to present your badge to [Verifier Y]. Per your badge’s terms, this action will be logged.” In many cases, the employee will expect this (it’s their job), and a simple notification is enough. The wallet might allow them to proceed by confirming, or even allow a “don’t show this again for this credential” if appropriate (to avoid notification fatigue when repeatedly using a work badge).
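
A minimal sketch of how a wallet might act on these signals, assuming (as a convention, not a standard) that work credentials are marked either by an EmployeeCredential type or by a termsOfUse entry of type EmployeeDataPolicy:

```typescript
// Sketch of wallet-side handling of terms-of-use signals. The
// "EmployeeCredential" type and "EmployeeDataPolicy" term type are
// illustrative conventions, not standardized values.
interface TermsOfUse {
  type: string;
  id?: string;
}

interface StoredCredential {
  type: string[];
  termsOfUse?: TermsOfUse[];
}

function isWorkCredential(vc: StoredCredential): boolean {
  return (
    vc.type.includes("EmployeeCredential") ||
    (vc.termsOfUse ?? []).some((t) => t.type === "EmployeeDataPolicy")
  );
}

// Called before a presentation: returns the reminder text the wallet should
// surface, or null if no notification is needed (e.g. a private credential).
function presentationNotice(vc: StoredCredential, verifierName: string): string | null {
  if (!isWorkCredential(vc)) return null;
  return (
    `You are about to present a work credential to ${verifierName}. ` +
    `Per its terms of use, this action will be logged.`
  );
}
```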


For verifiers, the guidance would focus on proper handling of credentials with special terms:


A verifier application should check the credential’s terms of use (if provided) before or during verification. If the verifier is not the intended type (for example, a credential might say “for official emergency use only”), then the verifier app should reject or warn. In our case, verifiers will mostly be in-network (incident command systems), but guidance helps ensure they honor any constraints. The VC Data Model allows holders to also send terms in a presentation — for instance, a holder could assert “I present this only for the purpose of emergency response.” A verifier should respect that and not repurpose the data.


In addition to purpose and context, the terms of use can also define how the data may be processed, for which operational goals, and under what legal or organizational framework. They may include requirements for data retention — e.g., that all logs or copies of credential data must be deleted after a specified number of days — and limit processing to only those functions directly necessary for the declared use case. This enables a privacy-aware, policy-based data lifecycle that supports accountability and minimizes long-term exposure of sensitive information.


Additionally, both wallet and verifier implementations should follow privacy best practices from the spec — e.g., avoid “phoning home” unnecessarily. The W3C spec explicitly warns against designs where every verification triggers a call to the issuer that could track the holder. In an employee scenario, the issuer wants the data, but the design should still avoid insecure patterns. Instead, the verifier can log events internally (and share with the issuer/employer through a secure channel), rather than the credential itself trying to contact the issuer each time (which would violate the spec’s anti-tracking principle).
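
To illustrate this verifier-side guidance, here is a sketch that checks a hypothetical “EmergencyUseOnly” term before accepting a presentation and logs the event locally rather than phoning home; the term type and log shape are assumptions:

```typescript
// Verifier-side sketch: honor a (hypothetical) "EmergencyUseOnly" term and
// log the verification event locally instead of phoning home to the issuer.
interface PresentedCredential {
  type: string[];
  termsOfUse?: { type: string; id?: string }[];
}

function mayAccept(vc: PresentedCredential, context: "emergency" | "other"): boolean {
  const emergencyOnly = (vc.termsOfUse ?? []).some((t) => t.type === "EmergencyUseOnly");
  // Reject (or warn) when a restricted credential is presented out of context.
  return !(emergencyOnly && context !== "emergency");
}

// Events stay in the verifier's own store; sharing with the issuer/employer
// happens later over a separate secure channel, per the spec's guidance.
const verificationLog: { subjectId: string; at: string }[] = [];

function logVerification(subjectId: string): void {
  verificationLog.push({ subjectId, at: new Date().toISOString() });
}
```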


In summary, clear guidance can ensure that holders (employees) are aware of and agree to the tracking aspects, and verifiers handle the data in a manner consistent with the credential’s stated terms. This avoids surprises and builds trust into the system’s UX.


Civil Liberties Risks: Even if we accept this use case as valid, does it introduce broader risks to civil liberties or set troubling precedents? This is a critical question. One risk is the potential spillover effect — if we enable and normalize tracking credentials for employees, what stops a government from attempting to use similar techniques on citizens or residents, outside of the narrow context? For example, in an authoritarian context, one could imagine a government-issued “citizen card” that must be presented for everyday activities and which logs citizens’ or residents’ movements or which excludes all non-citizens from public life. There is a fine line: the technology could be misused if the safeguards and context are not crystal clear. It’s important to stress the contextual limits of the first responder use case: it is for emergency management, under authority of incident commanders, and for the duration and purpose of the incident only. Building in technical and legal guardrails (like automatic data deletion after an incident, or ensuring the credential cannot be used to track outside of an incident) can help mitigate misuse.

Another civil liberties concern is employee rights. Even in democratic societies, there’s a balance between an employer’s interests and a worker’s privacy. If misuse occurs — say an employer starts using responder badge logs to reprimand someone for being “slow to arrive” at a scene, or to evaluate performance in a punitive way — it could chill the responders’ autonomy or willingness to do their job with discretion. We must consider the power dynamics: employees might feel they are under constant surveillance and could be judged by the data in ways they didn’t expect. To counter this, policies should be in place (perhaps as part of the terms of use or HR policy) that the data will only be used for safety, accountability, and resource coordination, not minor disciplinary actions. Additionally, data should be accessible to employees for transparency — e.g., a responder should have the right to see the records of their own deployments, which adds oversight and can correct errors. Civil liberties organizations like the ACLU advocate that workplace monitoring be narrowly tailored and not create an atmosphere of intimidation. Applied here, that means designing the system such that it is truly a safety tool, not a general surveillance network.

In the bigger picture, acknowledging these risks means W3C (and the VC community) should approach standardizing such use cases very carefully — with input from privacy advocates — to ensure that enabling the technology doesn’t inadvertently enable abuses. Clear separation between private individual credentials and employee credentials (a theme we’ll expand on next) is one way to signal that what’s acceptable in one sphere is not in another. If we articulate those differences in standards and guidance, we reduce the chance that a feature meant for employee scenarios gets quietly reused in private individual scenarios without due scrutiny.


Examination of the W3C VC Data Model

To address these issues at a technical level, it’s useful to look at what the W3C Verifiable Credentials Data Model (current version 2.0) offers in terms of relevant features. Two aspects are particularly pertinent: the ability to classify credential types and the inclusion of terms of use in credentials or presentations.


Credential Type: In the VC data model, every credential has a type property (or array) which designates what kind of credential it is. By default, a credential might be of type “VerifiableCredential” plus more specific types (for example, a university degree might have types [“VerifiableCredential”, “DegreeCredential”]). This typing system allows extensibility — communities of practice can define new credential types to fit their domain. In our context, one could define a type such as “FirstResponderBadgeCredential” or more generally “EmployeeIDCredential”. The type doesn’t by itself enforce behavior, but it signals to verifiers and holders what schema and expectations apply. For instance, an employer could issue a credential with type “EmployeeBadge”, and a verifier could recognize this and treat it differently than, say, a “CustomerLoyaltyCard”. The VC spec does not come with a predefined list of domain-specific types; it’s up to implementers or extensions to define them. In the first responder work, they have indeed created a vocabulary and presumably a type for responder badges. In practice, that might extend a more generic “organizational identity” credential concept. For example, a hypothetical credential could have: “type”: [“VerifiableCredential”, “OrganizationalID”, “EmergencyResponderBadge”]. The presence of “OrganizationalID” could indicate this is issued in an employment or membership context. The key point is that the data model is flexible enough to mark credentials as being of a certain nature, but it’s up to ecosystem participants to agree on and use those markers.
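
For example, such a type array could be consumed as follows; the “OrganizationalID” marker is a hypothetical convention, not part of the core data model:

```typescript
// The hypothetical "OrganizationalID" marker signals an employment context;
// it is not defined by the core VC Data Model.
const badgeTypes = ["VerifiableCredential", "OrganizationalID", "EmergencyResponderBadge"];

// Wallets and verifiers can branch on the marker:
const isEmploymentContext = badgeTypes.includes("OrganizationalID"); // true
```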


Terms of Use: The VC Data Model provides an explicit extension point for attaching terms of use to a credential (and similarly, a holder can attach terms to a presentation). This is highly relevant for expressing things like “you agree that we may log usage of this credential”. According to the specification, an issuer can place one or more terms of use in the credential, each term being a structured object with a type (indicating what kind of policy it is) and possibly an id or details. These could reference external documents or just be human-readable statements. The spec gives examples such as providing a URI to a public policy, or naming a standard trust framework that the credential follows. For instance, a term of use might be of type “TrustFrameworkPolicy” pointing to a URL that describes the governing framework under which this credential was issued and can be used. Another might be a simple text like “For official use by accredited emergency response agencies only.” The spec intentionally leaves the semantics of terms of use up to the specific type definition. But it does lay out the expectation that if a holder or verifier accepts a credential with certain terms, they are doing so willingly and might incur liability if they violate those terms. In other words, the terms of use are like a digital wrapper of the rules/conditions around the credential.


For the first responder credential, the issuer (say, a state Emergency Management Agency) could include terms such as: “This credential may be used to verify your identity and qualifications during authorized emergency response operations. Use of this credential will be logged for incident management and safety auditing purposes.” That could be represented either as a custom EmployeeUsagePolicy term or embedded in a trust framework document that covers all emergency workers. The wallet, upon receiving the credential, might display these terms to the user. And any verifier (e.g., an incident command system) could also access these terms via the credential’s data.
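
Sketched as a credential object, such terms might look as follows. The EmployeeUsagePolicy term type, the identifiers, and the URL are illustrative placeholders, not values from any actual pilot:

```typescript
// Hedged sketch of a responder badge carrying the terms above.
const responderBadge = {
  "@context": ["https://www.w3.org/ns/credentials/v2"],
  type: ["VerifiableCredential", "EmergencyResponderBadge"],
  issuer: "did:example:state-ema",
  credentialSubject: {
    id: "did:example:responder-alice",
    role: "EMT",
    certifications: ["HAZMAT", "CPR"],
  },
  termsOfUse: [
    {
      type: "EmployeeUsagePolicy",
      id: "https://ema.example.gov/policies/responder-badge-usage",
      // Human-readable summary; the id resolves to the full policy text.
      description:
        "Use of this credential will be logged for incident management and safety auditing purposes.",
    },
  ],
};
```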


It’s worth noting that the VC spec also includes privacy considerations which indirectly relate to this scenario. For instance, it prohibits certain patterns like a credential status check that would phone-home and notify an issuer of each verification. That is a privacy-preserving default rule — but in our scenario, the issuer wants to know usage (tracking). One must be careful to implement tracking without violating the spec’s intent. The recommended approach would be that the verifier logs the event and perhaps shares it with the issuer (or the issuer is itself the verifier in employer’s system), rather than the credential itself being designed to beacon out. In other words, the VC Data Model doesn’t stop an employer from tracking credential use, but it requires that it be done in a way that doesn’t undermine the broader privacy architecture of VCs (e.g., no hidden correlating identifiers that leak to unintended parties).


The W3C VC Data Model provides the building blocks needed: we can classify credentials by type (to distinguish private individual vs employee uses at least informally) and we can imbue credentials with explicit terms of use. What the core data model doesn’t do is dictate specific categories or specific terms for our scenario — that’s left to implementers or extensions. This is why proposals are emerging to define more concrete credential types and usage conventions for things like employee badges.


Proposal for Distinct Credential Types

Given the differences we’ve explored between private individual and employee use cases, one proposal is to establish distinct categories of credentials, sometimes informally dubbed “Private Individual Credentials” vs “Employee Credentials.” The idea is not to create an artificial binary for all situations, but to recognize that credentials used in an employment context might need to be treated differently (by software and policy) than those used in a personal context. Concretely, this could involve:


Defining a New Type or Schema: We could standardize (or at least commonly adopt) a type like EmployeeCredential (as an extension of VerifiableCredential). Any credential with this type would indicate it’s issued as part of an employment or organizational role relationship. Similarly, one might define PrivateIndividualCredential for things issued to individuals outside of an employment context (like a driver’s license, a student ID, etc.). The goal is that a wallet or verifier can immediately discern the nature of the credential. For example, a wallet might show a small briefcase icon on credentials that are marked as work-related. This distinction could trigger different default behaviors (such as more frequent reminders of terms of use for employee creds, or hiding certain attributes when sharing private individual creds). It also allows policy engines to apply separate rules (e.g., an employer’s system might reject any credential presented that is not an EmployeeCredential of a recognized type, to avoid someone trying to use a random private individual or citizen credential in its workflow).


Differentiation in Terms and Capabilities: Along with the type, the data model or profile for an Employee Credential could require including certain fields that private individual credentials wouldn’t. For instance, an employee credential might always include a termsOfUse section detailing the employment policy, a reference to an employee ID number or contract, and possibly an expiry aligned with employment tenure. A private individual credential, on the other hand, might emphasize user consent for each sharing (some private individual wallets already do this per credential presentation). By having distinct schemas or templates, issuers and developers are guided to include the right information. In the first responder case, the “Responder Badge” credential might include fields like agency, rank, skills etc., and be under the EmployeeCredential class, whereas a general-purpose ID would not carry those.


Policy and Legal Recognition: We could envision that trust frameworks or governance groups explicitly recognize these categories. For example, a digital identity trust framework might say: “If a credential is classified as an Employee Credential, then the issuer must have obtained acknowledgment from the holder of specific terms, and verifiers must agree to data handling rules (see Section X). Private individual Credentials are subject to different requirements (see Section Y) focusing on individual consent at use time, etc.” In other words, treat them as different trust contexts. This is analogous to how in some systems there are separate processes for personal ID vs professional ID (consider how a work ID card is handled differently by policy than a national ID card).


One potential parallel in the physical world is the distinction between a national ID and a work badge. A national ID (or driver’s license) is something you might show to anyone who needs to identify you, and there are laws limiting how that can be recorded or used (e.g., in some countries, stores are not allowed to photocopy your ID without cause). A work badge is issued by your company; you wear it at work, and the company can scan it whenever for internal purposes — but that badge isn’t usually accepted outside the workplace. If someone tried to use their corporate ID to prove their age at a bar, it would be odd and probably not accepted. Similarly, blurring the lines between these credential types in the digital sphere would be problematic. We wouldn’t want, for example, a police officer demanding to see a person’s employment badge usage logs as a proxy for tracking their whereabouts — that data is meant for the employer’s scope. By clearly separating credential types, we enforce contextual integrity: each credential is used in its proper context.


Implementing distinct types also offers a chance to incorporate consent and tracking flags at a structural level. For instance, an EmployeeCredential schema could have a boolean property like trackingConsentProvided or a reference to a terms document, which a wallet could check. Meanwhile, a PrivateIndividualCredential schema might forbid such a property or default it to false (meaning no tracking beyond the transaction itself).
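
A structural sketch of these two hypothetical profiles in TypeScript (neither interface is a standardized schema):

```typescript
// Structural sketch of the proposed distinction; names are hypothetical.
interface EmployeeCredential {
  type: ["VerifiableCredential", "EmployeeCredential", ...string[]];
  termsOfUse: { type: string; id: string }[]; // employment policy is mandatory
  trackingConsentProvided: boolean;           // holder acknowledged usage logging
}

interface PrivateIndividualCredential {
  type: ["VerifiableCredential", "PrivateIndividualCredential", ...string[]];
  // Tracking beyond the individual transaction is out of scope by default.
  trackingConsentProvided?: false;
}
```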


We conclude that establishing separate “lanes” for private individual vs employee credentials in the ecosystem could be very beneficial. It lets us tailor technological and policy solutions to each use case without one bleeding into the other. The first responder badge would firmly sit in the EmployeeCredential lane, with all the attendant rules (and protections) that this entails.


Enhanced Terms of Use and Consent Processes

Building on the ability to include terms of use in credentials, we propose enhancing how consent and terms are managed, especially for employee-tracking scenarios:


Terms of Reference for Employee Tracking: As noted, the VC could include a terms of use that points to a detailed policy (perhaps a URL to a “First Responder Credential Usage Policy” or an employee privacy notice). Rather than stuffing all obligations into a short text, this approach leverages a “terms of reference” — a pointer to an authoritative document or contract that the employee has agreed to. For example, the credential’s terms could be an object: { “type”: “EmployeeDataPolicy”, “id”: “https://agency.example.org/policies/credential-tracking-v1" }. This way, the actual content can be updated or elaborated at that URL, and it’s clear which policy version applies (versioning via the URL or an ID). This is analogous to how software licenses might be attached by reference. The wallet could even fetch and display this policy if the user wants (since phoning home to a static URL for policy might be acceptable in this context, or the policy text could be embedded at issuance). Importantly, using a URL avoids embedding contact-sensitive info that could create correlation risk (the spec suggests using public locations to avoid “phone home” issues).


Wallet User Notifications and UI: The wallet plays a crucial role in capturing and conveying consent. During issuance, when the credential is being issued to the holder’s wallet, the wallet should display any terms of use provided by the issuer. This is akin to a consent screen. The holder (employee) would need to accept before the credential is stored. This acceptance could even be cryptographically proven — for instance, the holder could digitally sign a statement that “I, holder DID X, acknowledge terms Y of credential Z”. That proof might be stored by the issuer or embedded in the credential as an attribute (there is an evidence property in VC that could potentially carry such an acknowledgment, or it could be a separate audit trail record). Going forward, whenever that credential is presented, the wallet might show a small reminder (depending on UX decisions). Since employees may use their credentials frequently, the UI should be careful to inform but not overly annoy. Perhaps a one-time tutorial or icon indicator is sufficient after the initial consent, unless the terms change.


Consent Integration During Issuance: It’s worth emphasizing how issuance protocols can integrate consent. If the credential is issued via a protocol like OpenID Connect for Verifiable Credentials (OIDC4VC), there might be a step where the issuer can present a consent page (OIDC has concepts of consent pages for scope approval, etc.). In a custom app, the issuer could simply require the employee to click “Accept” in the wallet app. By standardizing this step, we ensure no credential is issued under the radar with sneaky terms — the user must actively accept. Additionally, the issuer can log that acceptance (which protects both parties: the issuer can prove the employee knew, and the employee can’t later claim ignorance of tracking, but conversely the employee can point to exactly what they agreed to, which could limit misuse).


Ongoing Consent vs One-Time: In employment contexts, consent is often one-time (you sign an agreement when you start the job). Here, the credential issuance is analogous to that moment. We propose that after that, the user shouldn’t need to re-consent each time they use it (that would be impractical in emergencies). However, periodic re-confirmation might be wise, say if terms of use update or annually just to remind the user. The wallet could facilitate that by checking if the termsOfUse id has changed or is nearing expiry and prompting the user to fetch and accept a new policy credential if needed (like an updated version of the badge with new terms).
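
Bringing the issuance-time acknowledgment and the re-consent check together, here is a sketch under the assumption that the wallet can sign with a holder key; signWithHolderKey is a placeholder, not a real API:

```typescript
// Sketch of issuance-time consent capture and later re-consent checks.
interface TermsRef {
  type: string; // e.g. "EmployeeDataPolicy"
  id: string;   // URL or identifier of the policy version agreed to
}

async function acceptAtIssuance(
  credentialId: string,
  terms: TermsRef,
  signWithHolderKey: (message: string) => Promise<string>
) {
  const statement = `I acknowledge terms ${terms.id} of credential ${credentialId}`;
  const signature = await signWithHolderKey(statement);
  // Kept by the wallet and optionally shared with the issuer as an audit record.
  return { statement, signature, acceptedAt: new Date().toISOString() };
}

// Re-prompt the holder only when the referenced policy version has changed.
function needsReconsent(acceptedTermsId: string, currentTerms: TermsRef): boolean {
  return acceptedTermsId !== currentTerms.id;
}
```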


In essence, marrying the legal consent process with the technical credential issuance is a way to ensure alignment. When a first responder gets their digital badge, part of that onboarding is digitally agreeing, via the wallet, to how that badge will be used and tracked. This is far more transparent than burying it in an employment contract.


Finally, beyond the technology, organizations should still follow best practices: have an open dialogue with employees about these tools, allow feedback, and ensure they understand the benefits to them (safety, quicker verification) as well as the risks. This human process element was echoed in advice for introducing wearables — transparency and even voluntary adoption phases can build trust. While first responders likely won’t opt-out in emergencies, including them in the design of the terms can ease concerns. For example, maybe the policy will explicitly state that data will not be used to evaluate job performance or will not include any health metrics without separate consent — whatever assurances can be made, put them in writing.


Verifier Terms of Use Requirements


On the other side of the equation, we must consider the verifiers — typically, in X2E cases, the verifier is either the employer itself or a service acting on behalf of the employer (like an incident management platform in the first responder case). It’s crucial that verifiers also be bound by certain terms of use, especially when handling potentially sensitive personal data from credentials. Here are recommendations for verifier requirements:


Adherence to Issuer’s Terms: The VC spec implies that verifiers who accept a credential with terms must adhere to those terms or risk liability. In practice, this means if the issuer (employer) says “verifier must not store this data beyond 24 hours unless required for audit”, any verifier must follow that. In an internal company system, this is straightforward (the company sets the policy and its systems obey). In a broader ecosystem, say a mutual aid situation where a responder from one jurisdiction is verified by another jurisdiction’s system, the verifying party should respect the terms the credential carries. This could be enforced via contracts or agreements between agencies: e.g., an MOU that any data obtained by verifying another agency’s credentials will be kept confidential and erased after the incident. Technically, one could encode some of this; for instance, the credential’s terms might indicate data retention limits in a machine-readable way. The verifier’s software could automatically purge records accordingly or flag if it’s exceeding the allowed use. (A sketch of such automated purging follows this list.)


Responsible Data Handling: Verifiers should implement proper data protection measures. If an incident management system is collecting logs of responder check-ins, that data is personal (it shows where a person was and when). It should be stored securely, access-controlled, and encrypted where appropriate. Only authorized personnel (e.g., incident commanders, safety officers) should be able to view identifiable logs. If data is aggregated for reporting, identifiers could be removed for broader sharing. Essentially, verifiers should treat this data with the same care as other employee personal data or even higher, given it might involve emergency scenarios (which could be sensitive in their own right).


No Unintended Use or Sharing: The verifier must not repurpose the data for something else. For example, the vendor of the incident management software shouldn’t siphon off the data to analyze responder movements for product research without permission. Or a responding agency should not use another agency’s personnel data for recruiting efforts later, etc. This comes down to policy and contracts more than technology — but technology can assist by compartmentalizing data per incident and per agency, and providing audit trails of who accessed it.


Verifier Identity and Trust: Interestingly, in VC interactions, the holder can also require the verifier to present a credential (e.g., the verifier could present a “Verifier Credential” to prove it’s authorized). In our scenario, a responder’s wallet might want to know it’s presenting to an official incident system, not a malicious impostor. This could be solved by the verifier having its own VC (like a credential for the incident commander or agency running the incident). If we include that, then the wallet can log which verifier (by DID or name) it shared data with. This provides some accountability: if later an issue arises, the responder can say “I only shared my badge with authorized verifiers X, Y, Z; if data leaked, it’s on their side.”


Auditing and Oversight: Organizations deploying these systems should have audit logs and possibly external oversight to ensure policies are followed. For instance, a privacy officer or an independent auditor might periodically check that data logs from responder credentials were handled per policy (deleted on time, not accessed inappropriately). If verifiers know they are subject to audit, they will be more cautious to follow the rules. In high-stakes cases (like policing), one could even involve community oversight boards to build trust that the technology isn’t quietly enabling something it shouldn’t.
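
As flagged in the adherence point above, retention limits expressed in a credential’s terms can be enforced mechanically. A sketch, assuming a hypothetical per-record retention field derived from those terms:

```typescript
// Sketch of mechanical retention enforcement; the retentionDays field is
// a hypothetical carrier for a limit taken from the credential's terms.
interface LoggedPresentation {
  subjectId: string;
  loggedAt: Date;
  retentionDays: number;
}

function purgeExpired(logs: LoggedPresentation[], now: Date = new Date()): LoggedPresentation[] {
  const msPerDay = 24 * 60 * 60 * 1000;
  return logs.filter(
    (l) => now.getTime() - l.loggedAt.getTime() < l.retentionDays * msPerDay
  );
}
```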


In essence, verifiers have an ethical and often legal responsibility to handle credential data with care. We propose that any standard or best practice documentation around VCs in employment contexts include a section on verifier obligations. If the VC ecosystem evolves a notion of “trusted verifier” accreditation (similar to how some ID ecosystems have trusted issuers lists and relying party registration in eIDAS 1.0 and 2.0), then being a trusted verifier might require committing to these data handling rules.


To illustrate: The first responder VC pilot might produce a “Responder Credential Trust Framework” document. In it, it could state: Issuing Agency will include terms of use in each credential. Participating Verifiers (incident management systems) MUST comply with those terms. Specifically, they must not retain personal data longer than 30 days post-incident, must use it only for incident response, and must ensure it’s shared only with entities directly involved in that incident. Violations could be subject to discipline or removal from the trust network. This kind of governance element ensures that the technology serves its intended purpose and no more.
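
Such governance rules could also be rendered machine-readable so verifier software can enforce them directly; a sketch with invented field names and URL:

```typescript
// Illustrative machine-readable rendering of such a trust framework clause;
// all field names and the URL are invented for this sketch.
const responderTrustFrameworkPolicy = {
  id: "https://trust.example.org/responder-framework/v1",
  verifierObligations: {
    maxRetentionDays: 30,
    permittedPurposes: ["incident-response"],
    sharingRestrictedTo: "entities-directly-involved-in-incident",
  },
} as const;
```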


Conclusion

The exploration of X2I vs X2E use cases for Verifiable Credentials highlights an important insight: context matters enormously in digital identity and credentialing. A feature or practice that might be privacy-invasive in a private individual scenario could be acceptable, even necessary, in an employee scenario — but only with the right guardrails in place. Using the first responder digital badge example, we’ve seen that VCs can indeed support complex real-world needs like tracking personnel in emergencies, provided we address consent, transparency, and data protection from the start.


Going forward, the W3C VC community and industry stakeholders should consider formalizing the distinctions between credentials in different contexts. This might involve new standard credential types or profiles, richer use of the terms of use feature, and clear guidelines for implementers (wallets and verifiers) on how to handle credentials that come with tracking expectations. By incorporating privacy-preserving designs (like minimal disclosure and avoidance of unwarranted correlation) and by binding usage policies into the credentials themselves, we can leverage VCs in scenarios like X2E without betraying the privacy-first spirit that underpins the VC paradigm.


Ultimately, it’s a balancing act between utility and privacy. Employers and governments will justifiably use digital credentials to streamline operations and enhance safety. Our responsibility in the standards community is to ensure that such systems are built in a way that respects individual rights and maintains trust. That means making consent explicit and informed, keeping data use proportional and purposeful, and preventing scope creep. The discussion initiated by Manu Sporny and others is an important step in interrogating these issues. The recommendations in this document — from distinguishing wallet and credential types to strengthening terms of use and verifier accountability — aim to inform that dialogue with concrete ideas.


By implementing these measures, we can enable powerful X2E verifiable credential use cases (like the first responder badge, commercial driver logs, employee access passports, etc.) while upholding the civil liberties and privacy that must underpin any digital identity ecosystem. In doing so, we ensure VCs remain a tool for empowerment and trust in all contexts, rather than a vehicle for unwelcome surveillance. The path forward will likely involve continued collaboration between technologists, policymakers, employers, and privacy advocates to refine these approaches and incorporate them into the next generation of standards and products. The result, we hope, will be a robust framework where both private individuals and employees can benefit from verifiable credentials with confidence that their rights are protected.


