!!! Overview[1]

[{$pagename}] ([VoT], [RFC 8485]) defines a mechanism for measuring and signaling several aspects of [Digital Identity] and [authentication] transactions that are used to determine a level of [trust] in that transaction.

In the past, there have been two extremes of communicating [authentication] transaction information. At one extreme, all attributes can be communicated with full [provenance] and associated [trust] markings. This approach seeks to create a fully distributed attribute system to support functions such as [Attribute Based Access Control] ([ABAC]). These attributes can be used to describe the end user, the [Identity Provider (IDP)], the [Relying Party], or even the transaction itself. While the information that can be expressed in this model is incredibly detailed and robust, the complexity of such a system is often prohibitive to realize, especially across [Security Domains]. In particular, a large burden is placed on [relying parties|Relying Party], which need to process a sea of disparate attributes when making a security decision.

At the other extreme are systems that collapse all of the attributes and aspects into a single scalar value that communicates, in sum, how much a transaction can be trusted. Version 2 of the [NIST] special publication 800-63 ([NIST.SP.800-63]) defines a linear-scale [Level Of Assurance] ([LOA]) measure that combines multiple attributes about an identity transaction into such a single measure. While this definition was originally narrowly targeted at a specific set of government use cases, the [LOA] scale appeared to be applicable to a wide variety of [authentication] scenarios in different [domains]. This has led to a proliferation of incompatible interpretations of the same scale in different [contexts], preventing interoperability between the different [LOA] definitions in spite of their common measurement.

[LOA] is also artificially limited by the original goal of creating a single linear scale. Because [Identity Proofing] strength increases linearly along with [credential] strength in the [LOA] scale, the scale is too limited to describe many valid and useful forms of identity transaction that do not fit the government's original model. For example, an anonymously assigned hardware token can be used in cases where the real-world identity of the subject cannot be known for privacy reasons, yet the credential itself can be highly trusted. This is in contrast with a government employee accessing a government system, where the identity of the individual needs to be highly proofed and strongly credentialed at the same time.

The [{$pagename}] ([VoT]) effort seeks a balance between these two extremes by creating a data model that combines attributes of the user and aspects of the authentication context into several values that can be communicated separately but in parallel with each other. This approach is both coarser grained than the distributed-attributes model and finer grained than the single-scalar model, with the hope that it strikes a viable balance of expressibility and processability.

Importantly, these three levels of granularity can be mapped to each other: the information of several attributes can be folded into a vector component, while the vector itself can be folded into an assurance category. As such, [{$pagename}] seeks to complement, not replace, these other identity and trust mechanisms in the larger identity ecosystem while providing a single value for RPs to process.
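As a rough illustration of this folding, the sketch below (Python; not part of [RFC 8485] itself) parses a vector value such as P1.Cb.Cc.Ab into its component categories and then folds the parsed vector into a single coarse assurance bucket. The component categories (P, C, M, A) follow [RFC 8485], but the sample value and the folding thresholds are assumptions chosen for illustration only; a real deployment would take them from its trust framework or trustmark definition.

{{{
# Illustrative sketch only: the component categories (P, C, M, A) follow
# RFC 8485, but the example vector value and the folding rules below are
# hypothetical, not normative mappings from the specification.

from collections import defaultdict

def parse_vot(vot: str) -> dict[str, list[str]]:
    """Split a vector-of-trust value such as 'P1.Cb.Cc.Ab' into its
    components, grouped by category (a category such as C may repeat)."""
    components = defaultdict(list)
    for part in vot.split("."):
        category, value = part[0], part[1:]
        components[category].append(value)
    return dict(components)

def fold_to_assurance(components: dict[str, list[str]]) -> str:
    """Fold a parsed vector into a single coarse assurance bucket.
    The thresholds here are made up for illustration; a real Relying Party
    would derive them from its own trust framework / trustmark."""
    proofing = components.get("P", ["0"])[0]
    credentials = components.get("C", [])
    if proofing >= "2" and len(credentials) >= 2:
        return "high"
    if proofing >= "1" and credentials:
        return "medium"
    return "low"

parsed = parse_vot("P1.Cb.Cc.Ab")
print(parsed)                      # {'P': ['1'], 'C': ['b', 'c'], 'A': ['b']}
print(fold_to_assurance(parsed))   # 'medium'
}}}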
!! More Information
There might be more information for this subject on one of the following:
[{ReferringPagesPlugin before='*' after='\n' }]
----
* [#1] - [Vectors-of-trust-15|https://tools.ietf.org/html/draft-richer-vectors-of-trust-15|target='_blank'] - based on information obtained 2018-10-01