!!! Overview
[{$pagename}] is a [Data representation] of a [number] that stores a significand scaled by an exponent in a fixed base, allowing a trade-off between range and precision.

[{$pagename}] [{$applicationname}] refers you to Wikipedia: [Floating-point_arithmetic#Floating-point_numbers|Wikipedia:Floating-point_arithmetic#Floating-point_numbers|target='_blank']

[IEEE] standardized the computer representation for binary [{$pagename}] numbers in [IEEE 754] (a.k.a. IEC 60559) in [1985|Year 1985]; the standard was revised in [2008|Year 2008] and is used by almost all modern [computers]. [IBM] [mainframes] support IBM's own [hexadecimal] [{$pagename}] format and [IEEE 754]-2008 decimal [{$pagename}] in addition to the [IEEE 754] [binary] format. The Cray T90 series had an IEEE version, but the SV1 still uses the Cray [{$pagename}] format.
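
The [IEEE 754] binary32 layout packs a number into a sign bit, an 8-bit biased exponent, and a 23-bit fraction. As a minimal sketch using Python's standard struct module (the function name decode_binary32 is illustrative, not part of any standard API), the fields can be unpacked like this:
{{{
import struct

def decode_binary32(x):
    # Reinterpret the IEEE 754 binary32 bit pattern as an unsigned integer
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign     = bits >> 31            # 1 sign bit
    exponent = (bits >> 23) & 0xFF   # 8-bit biased exponent (bias 127)
    fraction = bits & 0x7FFFFF       # 23-bit fraction (24-bit significand with the implicit leading 1)
    return sign, exponent, fraction

sign, exp, frac = decode_binary32(-6.25)
print(sign, exp, frac)  # 1 129 4718592
# For normal numbers: value = (-1)**sign * 2**(exp - 127) * (1 + frac / 2**23)
print((-1)**sign * 2**(exp - 127) * (1 + frac / 2**23))  # -6.25
}}}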


Bfloat16 (Brain Floating Point) format is a computer number format occupying 16 bits in computer memory; it represents a wide dynamic range of numeric values by using a floating radix point. This format is a truncated (16-bit) version of the 32-bit [IEEE 754] single-precision [{$pagename}] format (binary32), with the intent of accelerating [Machine Learning] and near-sensor computing. Bfloat16 preserves the approximate dynamic range of 32-bit floating-point numbers by retaining 8 exponent bits, but supports only 8 bits of significand precision rather than the 24-bit significand of the binary32 format. Bfloat16 is even less suitable for integer calculations than single-precision 32-bit floating-point, but this is not its intended use.[1]
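
Because bfloat16 is the top half of the binary32 bit pattern, a conversion can be sketched by discarding the low 16 fraction bits (hardware converters typically round to nearest even rather than truncate; the function names below are illustrative assumptions, not a real library API):
{{{
import struct

def float_to_bfloat16_bits(x):
    # Keep the top 16 bits: sign (1), exponent (8), and the 7 highest fraction bits
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    return bits >> 16

def bfloat16_bits_to_float(b):
    # Widen back to binary32 by zero-filling the discarded low fraction bits
    return struct.unpack('>f', struct.pack('>I', b << 16))[0]

b = float_to_bfloat16_bits(3.14159265)
print(hex(b))                     # 0x4049
print(bfloat16_bits_to_float(b))  # 3.140625 -- only about 2-3 significant decimal digits survive
print(bfloat16_bits_to_float(float_to_bfloat16_bits(3.0e38)))  # ~2.99e38 -- the float32 range survives
}}}
The round trip shows the trade-off the paragraph above describes: the 8 exponent bits preserve the binary32 dynamic range, while the shortened significand gives up most of its precision.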

Bfloat16 was originally developed by [Google] and implemented in its third-generation [Tensor Processing Unit] ([TPU]).


!! More Information
There might be more information for this subject on one of the following:
[{ReferringPagesPlugin before='*' after='\n' }]
----
* [#1] - [Bfloat16_floating-point_format|Wikipedia:Bfloat16_floating-point_format|target='_blank'] - based on information obtained 2019-08-30