I had looked into unusual memory architectures to allow a computer to be designed which had single-precision floats that were 36 bits long, so that it would be possible more often to avoid recourse to double precision, and which had double-precision floats that were 60 bits long, also a multiple
of 12, because it wasn't necessary to have all the precision of 64-bit floats.
Also thrown in were 48-bit floats, which were designed to have 11-digit precision and a range just exceeding 10^-99 to 10^99, so as to be
comparable with what scientific calculators offer.
While it was interesting to examine the possible ways this could be
managed, all the possibilities involved awkwardness and complexity - as might be expected.
So how could I achieve my original goals while avoiding awkwardness?
Well, I came up with this:
Have floating-point formats that are either 36 bits long or 72 bits long.
That way, the 36-bit format is available, and longer formats, being twice
as long, are easy to fetch from memory.
One of the 72-bit formats has the same significand (or mantissa) length as the 48-bit floats in my idealized computer. But no bits are wasted;
instead, the exponent field is just enlarged.
It's still a conventional floating-point format, where the lengths of the exponent and significand are fixed, unlike John Gustafson's posits. But
this gives it the advantage that either a computation will fail, or the precision of all the intermediate results will be the same as that of the final result; no catastrophic loss of precision will pass by unnoticed.
The other 72-bit format has a significand
the same size as that of IEEE
754 64-bit floats. Offering lower precision, the same as that of a 60-bit float... would, no doubt, be too tough a sell. So the exponent field,
while not as large as that of the other format, would still be 8 bits
longer than usual, which, no doubt, would be helpful.
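The precision and range arithmetic behind these format choices is easy to sketch. The post doesn't state exact field widths, so the splits below are illustrative assumptions of mine, not the actual layouts:

```python
import math

def fp_stats(exp_bits, frac_bits):
    """Approximate decimal precision and range of a binary float format
    with a sign bit, exp_bits of exponent, and frac_bits of stored
    fraction (plus one hidden bit, IEEE-style)."""
    digits = (frac_bits + 1) * math.log10(2)   # significant decimal digits
    max_exp = 2 ** (exp_bits - 1) - 1          # largest unbiased exponent
    decades = max_exp * math.log10(2)          # decimal range is roughly 10^(+/-decades)
    return digits, decades

# IEEE 754 double for reference: 11-bit exponent, 52-bit fraction.
print(fp_stats(11, 52))   # about 15.95 digits, range about 10^(+/-308)

# One plausible split for a 48-bit float (sign + 10 + 37, my guess):
print(fp_stats(10, 37))   # about 11.4 digits, range about 10^(+/-154)
```

The 72-bit variants described above would keep one of these significand widths and simply spend the extra bits on the exponent field.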
John Savard
John Savard <quadibloc@invalid.invalid> posted:
I had looked into unusual memory architectures to allow a computer to
be designed which had single-precision floats that were 36 bits long,
so that it would be possible more often to avoid recourse to double
precision, and which had double-precision floats that were 60 bits
long, also a multiple of 12, because it wasn't necessary to have all
the precision of 64-bit floats.
Any machine on sale today (selling at least 100,000 machines/year)
provide 36-bit or 60-bit or 72-bit FP ?!?
If you want to build a 12-bit-based machine, go ahead--just don't expect
much takeup.
So how could I achieve my original goals while avoiding awkwardness?
Avoid non-8·2^n design points altogether.
Well, I came up with this:
Have floating-point formats that are either 36 bits long or 72 bits
long.
Ok, better than the above, 12·2^n -> {12, 24, 48, 96}
WOOPS no 36, 60 or 72 !!!
............................6·2^n -> {6, 12, 24, 48, 96} still does not
work.
One of the 72-bit formats has the same significand (or mantissa) length
as the 48-bit floats in my idealized computer. But no bits are wasted;
instead, the exponent field is just enlarged.
72-bit FP (à la IEEE 754 rules) is arguably better than Posits.
The other 72-bit format has a significand
?? fraction ??
On Thu, 05 Feb 2026 01:57:22 +0000, MitchAlsup wrote:
John Savard <quadibloc@invalid.invalid> posted:
I had looked into unusual memory architectures to allow a computer to
be designed which had single-precision floats that were 36 bits long,
so that it would be possible more often to avoid recourse to double
precision, and which had double-precision floats that were 60 bits
long, also a multiple of 12, because it wasn't necessary to have all
the precision of 64-bit floats.
Any machine on sale today (selling at least 100,000 machines/year)
provide 36-bit or 60-bit or 72-bit FP ?!?
Not that I know of. Of course, there's Univac, which still sells machines supporting their old 36-bit architecture.
If you want to build a 12-bit-based machine, go ahead--just don't expect
much takeup.
That's indeed the problem, so I tried to address the problem.
So how could I achieve my original goals while avoiding awkwardness?
Avoid non-8·2^n design points altogether.
That, unfortunately, couldn't achieve my original goals.
Well, I came up with this:
Have floating-point formats that are either 36 bits long or 72 bits
long.
Ok, better than the above, 12·2^n -> {12, 24, 48, 96}
WOOPS no 36, 60 or 72 !!!
............................6·2^n -> {6, 12, 24, 48, 96} still does not
work.
The idea is now there's a 9-bit byte, and everything is built around that 9-bit byte. Although 9 is not a power of two, all other lengths are 9
times a power of two, so binary addressing of these bytes and two-byte and four-byte and eight-byte quantities remains just as simple as on a pure
2^n machine.
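That addressing claim is easy to make concrete; a minimal sketch (the constants and names are mine):

```python
# Sketch: addressing on a 9-bit-byte machine. Byte addresses are
# ordinary binary integers, and scaling an index to a 2-, 4- or 8-byte
# element is still a shift, exactly as on an 8-bit-byte machine. The
# byte width only matters at the memory port, not in address arithmetic.

BYTE_BITS = 9  # the only non-power-of-two constant in the scheme

def element_bit_offset(index, log2_size_bytes):
    """Bit offset of element `index` in an array of elements that are
    2**log2_size_bytes bytes wide: a shift for the byte address, then
    one multiply by the byte width at the very end."""
    byte_address = index << log2_size_bytes   # same shift as on a 2^n machine
    return byte_address * BYTE_BITS

# A 72-bit (8-byte) float at index 3 starts at byte 24, bit 216.
print(element_bit_offset(3, 3))  # -> 216
```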
Since 2^n machines with *bit addressing* are just about as rare as 36-bit
and 60-bit machines... now my proposal is "just as good".
I _still_ don't _really_ expect much takeup, even though my floats have
sizes that seem to match the precisions those engaged in scientific
computing were fond of.
One of the 72-bit formats has the same significand (or mantissa) length
as the 48-bit floats in my idealized computer. But no bits are wasted;
instead, the exponent field is just enlarged.
72-bit FP (à la IEEE 754 rules) is arguably better than Posits.
At least one bit of positivity.
The other 72-bit format has a significand
?? fraction ??
A floating-point number usually has three parts: a sign, an exponent
(which includes its own sign) and...
a coefficient or mantissa or fraction... which is now referred to, in the IEEE standard, as a "significand", so I guess we have to get used to the
new official name for it.
John Savard
On Thu, 05 Feb 2026 01:57:22 +0000, MitchAlsup wrote:
John Savard <quadibloc@invalid.invalid> posted:
I had looked into unusual memory architectures to allow a computer to
be designed which had single-precision floats that were 36 bits long,
so that it would be possible more often to avoid recourse to double
precision, and which had double-precision floats that were 60 bits
long, also a multiple of 12, because it wasn't necessary to have all
the precision of 64-bit floats.
Any machine on sale today (selling at least 100,000 machines/year)
provide 36-bit or 60-bit or 72-bit FP ?!?
Not that I know of. Of course, there's Univac, which still sells machines supporting their old 36-bit architecture.
The idea is now there's a 9-bit byte, and everything is built around
that 9-bit byte. Although 9 is not a power of two, all other lengths are
9 times a power of two, so binary addressing of these bytes and two-byte
and four-byte and eight-byte quantities remains just as simple as on a
pure 2^n machine.
Since 2^n machines with *bit addressing* are just about as rare as
36-bit and 60-bit machines... now my proposal is "just as good".
Giving it additional numeric types which are stored normally in
registers, but which are stored in memory using only the least
significant eight bits of each nine-bit byte, would allow it to
exchange data with conventional machines based on the eight-bit
byte.
Isn't that going to create opcode space pressure?
How are you planning to handle UTF-8, UTF-16 and UTF-32 character data? Creating UTF-9, UTF-18 and UTF-36 seems like pointless complexity.
On Fri, 06 Feb 2026 16:37:00 +0000, John Dallman wrote:
How are you planning to handle UTF-8, UTF-16 and UTF-32 character data?
Creating UTF-9, UTF-18 and UTF-36 seems like pointless complexity.
I think UTF-9 was described in an April 1st RFC.
On Fri, 06 Feb 2026 16:37:00 +0000, John Dallman wrote:
Isn't that going to create opcode space pressure?
Well, that will be less of an issue in an architecture where the instructions are stored in wider memory.
How are you planning to handle UTF-8, UTF-16 and UTF-32 character data? Creating UTF-9, UTF-18 and UTF-36 seems like pointless complexity.
I think UTF-9 was described in an April 1st RFC. But I agree with that.
Essentially, I am now thinking that a CPU with this architecture might
have its primary application as a numerical co-processor for a
conventional CPU. This would provide the opportunity for carrying out computations with extra exponent range or higher precision without having
to switch to a much larger floating-point format, thus avoiding loss of speed.
One would need to create a new kind of RAM module to support a 144-bit
wide data bus, but it would be unrealistic to create new video cards and
so on.
So it would have its own FORTRAN compiler - that would be the highest priority in software development, after some kind of operating system for the compiler to run within. Well, maybe porting a C compiler would need to come first, to allow everything else to be ported.
John Savard
Have floating-point formats that are either 36 bits long or 72 bits
long.
That way, the 36-bit format is available, and longer formats, being
twice as long, are easy to fetch from memory.
...
The other 72-bit format has a significand the same size as that of
IEEE 754 64-bit floats. Offering lower precision, the same as that
of a 60-bit float... would, no doubt, be too tough a sell. So the
exponent field, while not as large as that of the other format,
would still be 8 bits longer than usual, which, no doubt would be
helpful.
quadi <quadibloc@ca.invalid> posted:
John Savard
Why Quadi ??
I have given more thought to interoperability with the 8-bit world.
Giving it additional numeric types which are stored normally in
registers,
but which are stored in memory using only the least significant eight
bits of each nine-bit byte, would allow it to exchange data with
conventional machines based on the eight-bit byte.
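A minimal sketch of that storage scheme, assuming the low 8 bits of each 9-bit byte carry the data and the top bit is stored as zero (the helper names are mine, not from the post):

```python
# Sketch of the interchange idea: a 32-bit quantity from an 8-bit-byte
# machine occupies four 9-bit bytes, using only the least significant
# 8 bits of each; bit 8 of every byte is left zero.

BYTE_BITS = 9
DATA_BITS = 8

def pack32(value):
    """Split a 32-bit value into four 9-bit bytes, 8 data bits each,
    most significant byte first."""
    assert 0 <= value < 1 << 32
    return [(value >> (DATA_BITS * i)) & 0xFF for i in (3, 2, 1, 0)]

def unpack32(bytes9):
    """Reassemble a 32-bit value, ignoring bit 8 of every byte."""
    value = 0
    for b in bytes9:
        value = (value << DATA_BITS) | (b & 0xFF)
    return value

v = 0xDEADBEEF
cells = pack32(v)     # [0xDE, 0xAD, 0xBE, 0xEF], each fits in a 9-bit byte
assert all(c < 1 << BYTE_BITS for c in cells)
assert unpack32(cells) == v
```

The same trick extends to 16- and 64-bit quantities; each 9-bit byte simply wastes one bit when holding foreign 8-bit data.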
On Thu, 05 Feb 2026 23:58:52 +0000, quadi wrote:
I have now added a new page to my site,
http://www.quadibloc.com/arch/per16.htm
where this is explained more completely with illustrations.
On Mon, 09 Feb 2026 22:46:44 +0000, quadi wrote:
On Thu, 05 Feb 2026 23:58:52 +0000, quadi wrote:
I have now added a new page to my site,
http://www.quadibloc.com/arch/per16.htm
where this is explained more completely with illustrations.
I have further updated that page to show how this principle can be
extended to connect the 36-bit word computer not only to a 32-bit word computer, but also to a 24-bit word computer, and I mention that integer formats as well as floating-point ones of this type are needed.
John Savard
Over the last couple of days, I have come to the conclusion that your
job, in the near future, is to sell the idea of a 72-bit computer architecture.
On Tue, 10 Feb 2026 19:10:45 +0000, MitchAlsup wrote:
Over the last couple of days, I have come to the conclusion that your
job, in the near future, is to sell the idea of a 72-bit computer architecture.
Here, you are raising the one question that I have been merrily avoiding
as irrelevant. Although it is anything but irrelevant in one way, as it directly deals with the value of all this in the real world.
Trying to persuade the world to switch from 32 bits to 36 bits? How could
I be anything other than an amusing crank if I did that?
I remember having read one article in a computer magazine where someone mentioned that an unfortunate result of the transition from the IBM 7090
to the IBM System/360 was that a lot of FORTRAN programs that were able to use ordinary real numbers had to be switched over to double precision to yield acceptable results.
And I noticed that a lot of mathematical tables from the old days went up
to 10 digit accuracy, and scientific calculators had 10 digit displays, calculating internally to a slightly higher precision.
And a passing remark in Petr Beckmann's "A History of Pi" about how even using pi to the accuracy of a computer double precision number was 'artificial' encouraged me to think of trimming down double precision a
bit - say by one digit, to match the precision of numbers in the Control Data 6600, with which scientists seemed to have been quite content in its day.
All this was a rather slim basis on which to conclude that our 32-bit and 64-bit floats ought to be replaced by 36-bit, 48-bit, and 60-bit floats.
And in the days that immediately followed the emergence of the IBM System/360, of course, transistors were still *expensive*. So it made sense to be concerned about optimizing floating-point formats, so that their precision was as much as necessary, but no more - so that a computer with as few transistors as possible could perform calculations as fast as possible to get the results needed.
But now? Powerful microprocessors are cheap. The cost of buying a custom specialized part would be so high as to completely eliminate the potential savings of using 36-bit floats instead of 64-bit floats when they might do.
So the only way a benefit would result... is if 36/72 bits became the ubiquitous new standard! I suppose that _could_ happen, if it were widely acknowledged that the requirements of scientific computing would be better met in that case.
So it seems as if it's impossible for the 36/72 bit transition to start on
a small scale, with something that fills a niche demand, because the lower production volumes would create higher costs that entirely negate the
value for the niche.
Except...
Speaking of niche products, there's the SX-Aurora TSUBASA from NEC... it looks like a video card, but it's actually the last surviving *vector* supercomputer in the Cray tradition!
As it happens, I encountered - in my years as a grad student - a computer add-on from Floating-Point Systems which, so that it could be attached to (then still existing) 36-bit computers or 18-bit minis in addition to the 32-bit and 16-bit ones... used 38-bit floating-point numbers internally.
And Cray style vector instructions are one thing I've been including in my various hypothetical architectures, on the grounds that they're about the
only architectural feature aimed at providing more power that (some) mainframes had that isn't routine in micros these days. Of course, though, you've noted that it can't really be effective without huge memory bandwidth, which is impractical to provide.
And the SX-Aurora TSUBASA has internal memory, which may even be HBM, so that removes the issue that standard memory modules are designed around
the 32/64/128/256-bit data bus width.
So vector modules are a potential niche that could run in 36 bits while connecting to a 32 bit world - and making 36 bits connect to 32 bits is,
of course, just what my latest brainstorm was dealing with.
John Savard
I remember having read one article in a computer magazine where someone mentioned that an unfortunate result of the transition from the IBM 7090
to the IBM System/360 was that a lot of FORTRAN programs that were able to use ordinary real numbers had to be switched over to double precision to
yield acceptable results.
This reminds me of when I took a numerical analysis course. (The many
ways that computer calculations can go wrong and how to deal with it.)
The professor said that the school's IBM (360 or 370, ca. 1980) was
perfect for the course because of the defects in its floating point
system. Guard digits and rounding sorts of things as near as I can recall.
According to David Schultz <david.schultz@earthlink.net>:
This reminds me of when I took a numerical analysis course. (The many
ways that computer calculations can go wrong and how to deal with it.)
The professor said that the school's IBM (360 or 370, ca. 1980) was
perfect for the course because of the defects in its floating point system. Guard digits and rounding sorts of things as near as I can recall.
The 360's floating point is a famous and somewhat puzzling failure, considering
how much else they got right.
It does hex normalization rather than binary. They assumed that
leading digits are evenly distributed so there'd be on average one
zero bit, but in fact they're geometrically distributed, so on average there's two. They got one bit back by making the exponent units of 16
rather than 2, but that's still one bit gone. They also truncated rather
than rounding results, another bit gone.
Originally there were no guard digits, which made the results comically
bad, but IBM retrofitted them at great cost to all the installed machines.
IEEE floating point can be seen as a reaction to that, how do you use
the same number of bits but get good results.
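The leading-digit argument can be checked with a short calculation. Under the usual log-uniform model of significands (a modeling assumption of mine, not something stated in the thread), the leading hex digit d occurs with probability log16((d+1)/d), and the expected number of leading zero bits wasted by hex normalization works out to exactly 1.5:

```python
import math

# Under a log-uniform model of significands, the leading hex digit d
# (1..15) occurs with probability log16((d+1)/d). A normalized hex
# fraction then wastes the leading zero bits of that digit: digits
# 8-15 waste 0 bits, 4-7 waste 1, 2-3 waste 2, and 1 wastes 3.

def leading_zero_bits(d):
    """Zero bits above the top set bit of a 4-bit hex digit."""
    return 4 - d.bit_length()

expected = sum(math.log((d + 1) / d, 16) * leading_zero_bits(d)
               for d in range(1, 16))
print(expected)   # -> 1.5 under this model
```

Each of the four groups of digits has probability exactly 1/4, since log16(2) = 1/4, which is where the clean 1.5 comes from; the figure one arrives at depends on the distribution assumed.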
John Levine <johnl@taugh.com> posted:
According to David Schultz <david.schultz@earthlink.net>:
This reminds me of when I took a numerical analysis course. (The many
ways that computer calculations can go wrong and how to deal with it.)
The professor said that the school's IBM (360 or 370, ca. 1980) was
perfect for the course because of the defects in its floating point
system. Guard digits and rounding sorts of things as near as I can recall.
The 360's floating point is a famous and somewhat puzzling failure, considering
how much else they got right.
It does hex normalization rather than binary. They assumed that
leading digits are evenly distributed so there'd be on average one
zero bit, but in fact they're geometrically distributed, so on average
there's two. They got one bit back by making the exponent units of 16
rather than 2, but that's still one bit gone. It truncated rather than
rounding, another bit gone. They also truncated rather than rounding
results.
Originally there were no guard digits which made the results comically
bad but IBM retrofitted them at great cost to all the installed machines.
IEEE floating point can be seen as a reaction to that, how do you use
the same number of bits but get good results.
VAX got this correct too (the VAX format not the one inherited from PDP-11/45; PDP-11/40* FP was worse). ...
And I noticed that a lot of mathematical tables from the old days went up
to 10 digit accuracy, and scientific calculators had 10 digit displays, calculating internally to a slightly higher precision.
On 2/11/26 5:04 PM, quadi wrote:
I remember having read one article in a computer magazine where someone
mentioned that an unfortunate result of the transition from the IBM
7090 to the IBM System/360 was that a lot of FORTRAN programs that were
able to use ordinary real numbers had to be switched over to double
precision to yield acceptable results.
This reminds me of when I took a numerical analysis course. (The many
ways that computer calculations can go wrong and how to deal with it.)
The professor said that the school's IBM (360 or 370, ca. 1980) was
perfect for the course because of the defects in its floating point
system. Guard digits and rounding sorts of things as near as I can
recall.
On 2/11/2026 3:04 PM, quadi wrote:
And I noticed that a lot of mathematical tables from the old days went
up to 10 digit accuracy, and scientific calculators had 10 digit
displays, calculating internally to a slightly higher precision.
The ten digit displays came from the design of the first electric calculators, made by such companies as Friden and Monroe in the 1940s
and 50s. They had ten rows of numeric keys (0-9), so that the
operator, who presumably had ten fingers (including thumbs) could
operate them quickly.
quadi <quadibloc@ca.invalid> posted:
All this was a rather slim basis on which to conclude that our 32-bit
and 64-bit floats ought to be replaced by 36-bit, 48-bit, and 60-bit
floats.
36/72-bit formats have the property that the square of any 32/64-bit value does not overflow !!
avoiding all sorts of IEEE_HYPOT() problems.
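The HYPOT problem alluded to here can be demonstrated with IEEE doubles standing in for the narrower format: squaring a large in-range value overflows, so the naive formula fails even though the true result is representable. A format whose exponent range covers the square of every input (as the 72-bit formats above would for 64-bit values) never hits this; `math.hypot` has to work around it in software instead:

```python
import math

# Naive Euclidean norm in IEEE double: x*x overflows to infinity long
# before the true result sqrt(x^2 + y^2) goes out of range.
x = y = 1e200                     # well inside double range (max ~1.8e308)

naive = math.sqrt(x * x + y * y)  # x*x is already inf, so this is inf
print(naive)                      # -> inf

# math.hypot rescales internally to avoid the intermediate overflow.
print(math.hypot(x, y))           # -> ~1.414e200, the correct answer
```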
So the only way a benefit would result... is if 36/72 bits became the
ubiquitous new standard! I suppose that _could_ happen, if it were
widely acknowledged that the requirements of scientific computing would
be better met in that case.
In this case YOU have to ask YOURSELF why you are providing any of those strange data sizes AT ALL ??? That is, who is Concertina for ???
{{One can STILL argue whether
deNormals were a plus or a minus in IEEE}}
John Levine <johnl@taugh.com> posted:
According to David Schultz <david.schultz@earthlink.net>:
This reminds me of when I took a numerical analysis course. (The
many ways that computer calculations can go wrong and how to deal
with it.) The professor said that the school's IBM (360 or 370, ca.
1980) was perfect for the course because of the defects in its
floating point system. Guard digits and rounding sorts of things
as near as I can recall.
The 360's floating point is a famous and somewhat puzzling failure, considering how much else they got right.
It does hex normalization rather than binary. They assumed that
leading digits are evenly distributed so there'd be on average one
zero bit, but in fact they're geometrically distributed, so on
average there's two. They got one bit back by making the exponent
units of 16 rather than 2, but that's still one bit gone. It
truncated rather than rounding, another bit gone. They also
truncated rather than rounding results.
Originally there were no guard digits which made the results
comically bad but IBM retrofitted them at great cost to all the
installed machines.
IEEE floating point can be seen as a reaction to that, how do you
use the same number of bits but get good results.
VAX got this correct too (the VAX format not the one inherited from PDP-11/45; PDP-11/40* FP was worse). VAX FP is arguably as good as
IEEE 754 with the exception that more IEEE numbers have reciprocals
due to the change in exponent bias by 1. {{One can STILL argue whether deNormals were a plus or a minus in IEEE}}
CMU had a PDP-11/40 with writable control store in 1974. I programmed it
to do PDP-11/45 FP instead of PDP-11/40 FP as a Jr. project.
On 2/11/26 5:04 PM, quadi wrote:
I remember having read one article in a computer magazine where
someone mentioned that an unfortunate result of the transition from
the IBM 7090 to the IBM System/360 was that a lot of FORTRAN
programs that were able to use ordinary real numbers had to be
switched over to double precision to yield acceptable results.
This reminds me of when I took a numerical analysis course. (The many
ways that computer calculations can go wrong and how to deal with
it.) The professor said that the school's IBM (360 or 370, ca. 1980)
was perfect for the course because of the defects in its floating
point system. Guard digits and rounding sorts of things as near as I
can recall.
On 2/11/2026 3:04 PM, quadi wrote:
snip
And I noticed that a lot of mathematical tables from the old days went up
to 10 digit accuracy, and scientific calculators had 10 digit displays,
calculating internally to a slightly higher precision.
The ten digit displays came from the design of the first electric calculators, made by such companies as Friden and Monroe in the 1940s
and 50s. They had ten rows of numeric keys (0-9), so that the
operator, who presumably had ten fingers (including thumbs) could
operate them quickly. So 10 digits sort of became standard. When
computers came along, and the designers wanted to use binary for them,
On Wed, 11 Feb 2026 19:50:00 -0800, Stephen Fuld wrote:
On 2/11/2026 3:04 PM, quadi wrote:
And I noticed that a lot of mathematical tables from the old days went
up to 10 digit accuracy, and scientific calculators had 10 digit
displays, calculating internally to a slightly higher precision.
The ten digit displays came from the design of the first electric
calculators, made by such companies as Friden and Monroe in the 1940s
and 50s. They had ten rows of numeric keys (0-9), so that the
operator, who presumably had ten fingers (including thumbs) could
operate them quickly.
So you're saying that the tendency of log tables and the like to go up to
a maximum of ten digits precision wasn't because ten digits were needed
for, say, celestial mechanics or something like that, so my premise that
ten significant digits was what scientific computation usually needs, as reflected in the design of calculators and math tables, is completely mistaken.
On 2/11/2026 9:55 PM, quadi wrote:
On Wed, 11 Feb 2026 19:50:00 -0800, Stephen Fuld wrote:
On 2/11/2026 3:04 PM, quadi wrote:
And I noticed that a lot of mathematical tables from the old days went up to 10 digit accuracy, and scientific calculators had 10 digit
displays, calculating internally to a slightly higher precision.
The ten digit displays came from the design of the first electric
calculators, made by such companies as Friden and Monroe in the 1940s
and 50s. They had ten rows of numeric keys (0-9), so that the
operator, who presumably had ten fingers (including thumbs) could
operate them quickly.
So you're saying that the tendency of log tables and the like to go up to
a maximum of ten digits precision wasn't because ten digits were needed
for, say, celestial mechanics or something like that, so my premise that
ten significant digits was what scientific computation usually needs, as
reflected in the design of calculators and math tables is completely
mistaken.
See
https://en.wikipedia.org/wiki/36-bit_computing#History
Stephen Fuld <sfuld@alumni.cmu.edu.invalid> writes:
On 2/11/2026 3:04 PM, quadi wrote:
snip
And I noticed that a lot of mathematical tables from the old days went up to 10 digit accuracy, and scientific calculators had 10 digit displays,
calculating internally to a slightly higher precision.
The ten digit displays came from the design of the first electric
calculators, made by such companies as Friden and Monroe in the 1940s
and 50s. They had ten rows of numeric keys (0-9), so that the
operator, who presumably had ten fingers (including thumbs) could
operate them quickly. So 10 digits sort of became standard. When
computers came along, and the designers wanted to use binary for them,
When computers came along, they used 40 bits to store 10 BCD digits
(e.g. the ElectroData 220 (44 bit) from the mid 50s and the successor Burroughs
machines (B300, B3500)). The B3500 extended the maximum operand size
to 100 BCD digits. 80's versions of the B3500 had a 40-bit memory
bus (operating on 10 digits at a time).
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
{{One can STILL argue whether
deNormals were a plus or a minus in IEEE}}
I am surprised to read that from you, who has always written that
denormals can be implemented cheaply and efficiently in hardware. The additional hardware cost (or the cost of trapping and software
emulation) has been the only argument against denormals that I ever encountered.
- anton
On Thu, 12 Feb 2026 02:04:58 GMT
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
John Levine <johnl@taugh.com> posted:
According to David Schultz <david.schultz@earthlink.net>:
This reminds me of when I took a numerical analysis course. (The
many ways that computer calculations can go wrong and how to deal
with it.) The professor said that the school's IBM (360 or 370, ca. 1980) was perfect for the course because of the defects in its
floating point system. Guard digits and rounding sorts of things
as near as I can recall.
The 360's floating point is a famous and somewhat puzzling failure, considering how much else they got right.
It does hex normalization rather than binary. They assumed that
leading digits are evenly distributed so there'd be on average one
zero bit, but in fact they're geometrically distributed, so on
average there's two. They got one bit back by making the exponent
units of 16 rather than 2, but that's still one bit gone. It
truncated rather than rounding, another bit gone. They also
truncated rather than rounding results.
Originally there were no guard digits which made the results
comically bad but IBM retrofitted them at great cost to all the
installed machines.
IEEE floating point can be seen as a reaction to that, how do you
use the same number of bits but get good results.
VAX got this correct too (the VAX format not the one inherited from PDP-11/45; PDP-11/40* FP was worse). VAX FP is arguably as good as
IEEE 754 with the exception that more IEEE numbers have reciprocals
due to the change in exponent bias by 1. {{One can STILL argue whether deNormals were a plus or a minus in IEEE}}
From the perspective of stability of convergence of a few common
algorithms, denormals are a significant plus.
From the perspective of minimizing surprises it is also a plus. On VAX
(a > b) does not necessarily guarantee (a-b > 0).
I wonder in which situation it can be seen as a minus?
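The point above, that (a > b) need not guarantee (a - b > 0) without denormals, rests on gradual underflow; a minimal sketch in IEEE doubles (which do have subnormals):

```python
# Gradual underflow in action: a and b are distinct small normal
# numbers whose difference is only representable as a subnormal.
# IEEE 754 keeps a - b > 0; a flush-to-zero design (as on VAX) would
# deliver 0 instead, breaking the (a > b) implies (a - b > 0) property.

a = 1.5 * 2.0 ** -1022    # a small normal number
b = 1.0 * 2.0 ** -1022    # the smallest normal double
d = a - b                 # = 2^-1023, below the normal range: a subnormal

assert a > b
assert d > 0              # holds only because subnormals exist
print(d)                  # -> about 1.11e-308
```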
There are several things that I don't like about IEEE-754 Standard, but
none of them related to format of binary numbers.
CMU had a PDP-11/40 with writable control store 1974. I programmed it
to do PDP-11/45 FP instead of PDP-11/40 FP as a Jr. project.
On 2/12/2026 7:54 AM, Scott Lurndal wrote:
When computers came along, they used 40 bits to store 10 BCD digits
(e.g. the electrodata 220 (44 bit) from the mid 50s and the successor
Burroughs machines (B300, B3500). The B3500 extended the maximum
operand size to 100 BCD digits. 80's versions of the B3500 had a
40-bit memory bus (operating on 10 digits at a time).
In the early days of computers, there was a distinction between
"business" computers and "scientific" computers. Many (most?) of the business computers were decimal (e.g. the ones you mentioned and some
IBM lines) and character oriented. Conversely, many of the scientific computers were binary and often used 36 bit words.
https://en.wikipedia.org/wiki/36-bit_computing#History
These often used 6 bit characters and conveniently used octal.
Was not quality of arithmetic of CDC machines of the 70s even worse than
that of IBM ?
On 2/11/2026 9:55 PM, quadi wrote:
So you're saying that the tendency of log tables and the like to go up
to a maximum of ten digits precision wasn't because ten digits were
needed for, say, celestial mechanics or something like that, so my
premise that ten significant digits was what scientific computation
usually needs, as reflected in the design of calculators and math
tables is completely mistaken.
See
https://en.wikipedia.org/wiki/36-bit_computing#History
On Thu, 12 Feb 2026 10:53:58 +0200, Michael S wrote:
Was not quality of arithmetic of CDC machines of the 70s even worse than that of IBM ?
I don't know about that. But I do know that despite having a power-of-two exponent, quality of arithmetic on the Cray I was pretty terrible.
John Savard
Michael S <already5chosen@yahoo.com> posted:
On Thu, 12 Feb 2026 02:04:58 GMT
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
John Levine <johnl@taugh.com> posted:
According to David Schultz <david.schultz@earthlink.net>:
This reminds me of when I took a numerical analysis course.
(The many ways that computer calculations can go wrong and how
to deal with it.) The professor said that the school's IBM (360
or 370, ca. 1980) was perfect for the course because of the
defects in its floating point system. Guard digits and
rounding sorts of things as near as I can recall.
The 360's floating point is a famous and somewhat puzzling
failure, considering how much else they got right.
It does hex normalization rather than binary. They assumed that
leading digits are evenly distributed so there'd be on average
one zero bit, but in fact they're geometrically distributed, so
on average there's two. They got one bit back by making the
exponent units of 16 rather than 2, but that's still one bit
gone. It truncated rather than rounding, another bit gone.
They also truncated rather than rounding results.
Originally there were no guard digits which made the results
comically bad but IBM retrofitted them at great cost to all the installed machines.
IEEE floating point can be seen as a reaction to that, how do
you use the same number of bits but get good results.
VAX got this correct too (the VAX format not the one inherited
from PDP-11/45; PDP-11/40* FP was worse). VAX FP is arguably as
good as IEEE 754 with the exception that more IEEE numbers have reciprocals due to the change in exponent bias by 1. {{One can
STILL argue whether deNormals were a plus or a minus in IEEE}}
From the perspective of stability of convergence of few common
algorithms denormals are significant plus.
From the perspective of minimizing surprises it is also plus. On
VAX (a > b) does not necessarily guarantee (a-b > 0).
I wonder in which situation it can be seen as a minus?
a-b underflows and takes a trap.
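The point above, that (a > b) need not guarantee (a - b > 0) on a machine without gradual underflow, can be illustrated in ordinary IEEE doubles. The `flush_to_zero` helper below is my own illustrative sketch of flush-to-zero behavior, not an accurate model of VAX semantics:

```python
# Simulating a machine without gradual underflow in IEEE doubles.
# flush_to_zero is an illustrative helper, not actual VAX semantics.

def flush_to_zero(x: float) -> float:
    # anything smaller in magnitude than the smallest *normal* double
    # is forced to zero, as on a system with no subnormals
    MIN_NORMAL = 2.0 ** -1022
    return 0.0 if 0.0 < abs(x) < MIN_NORMAL else x

a = 1.50 * 2.0 ** -1022   # two distinct normal numbers ...
b = 1.25 * 2.0 ** -1022   # ... whose difference is subnormal

print(a > b)                     # True
print((a - b) > 0)               # True: IEEE subnormals keep a - b nonzero
print(flush_to_zero(a - b) > 0)  # False: flushed, a > b but a - b == 0
```

The subtraction itself is exact (the operands are within a factor of two of each other), so the only question is whether the subnormal result survives.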
There are several things that I don't like about IEEE-754 Standard,
but none of them related to format of binary numbers.
CMU had a PDP-11/40 with writable control store in 1974. I
programmed it to do PDP-11/45 FP instead of PDP-11/40 FP as a Jr. project.
I remember having read one article in a computer magazine where someone mentioned that an unfortunate result of the transition from the IBM 7090
to the IBM System/360 was that a lot of FORTRAN programs that were able to
use ordinary real numbers had to be switched over to double precision to
yield acceptable results.
And I noticed that a lot of mathematical tables from the old days went up
to 10 digit accuracy, and scientific calculators had 10 digit displays, calculating internally to a slightly higher precision.
anton@mips.complang.tuwien.ac.at (Anton Ertl) posted:
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
{{One can STILL argue whether
deNormals were a plus or a minus in IEEE}}
I am surprised to read that from you, who has always written that
denormals can be implemented cheaply and efficiently in hardware. The
additional hardware cost (or the cost of trapping and software
emulation) has been the only argument against denormals that I ever
encountered.
It is only after IEEE 754-2008 came with FMAC that deNormals became
a low cost addition. {And that has been my point--you seem to have
forgotten the -2008 part or the argument}
quadi <quadibloc@ca.invalid> wrote:
I remember having read one article in a computer magazine where someone
mentioned that an unfortunate result of the transition from the IBM 7090
to the IBM System/360 was that a lot of FORTRAN programs that were able to
use ordinary real numbers had to be switched over to double precision to
yield acceptable results.
Note that IBM floating point format effectively lost about 3 bits of
accuracy compared to modern 32-bit format. I am not sure how much they
lost compared to IBM 7090 but it looks that it was at least 5 bits.
Assuming that accuracy requirements are uniformly distributed between
20 and say 60 bits, we can estimate that loss of 5 bits affected about
25% (or more) of applications that could run using 36-bits. That is
"a lot" of programs.
But it does not mean that 36 bits are somewhat magical. Simply, given a
36-bit machine, the original author had extra motivation to make sure that
the program ran in 36-bit floating point.
Note that IBM floating point format effectively lost about 3 bits of
accuracy compared to modern 32-bit format. I am not sure how much
they lost compared to IBM 7090 but it looks that it was at least 5 bits.
But it does not mean that 36 bits are somewhat magical.
Oh I forgot that using hex exponents meant there was no hidden bit, so
in practice it lost three bits of precision on every operation. There was
a great deal of grumbling that people with 709x Fortran codes had to
make everything double precision to keep getting reasonably good results.
According to Waldek Hebisch <antispam@fricas.org>:
quadi <quadibloc@ca.invalid> wrote:
I remember having read one article in a computer magazine where someone
mentioned that an unfortunate result of the transition from the IBM 7090
to the IBM System/360 was that a lot of FORTRAN programs that were able to
use ordinary real numbers had to be switched over to double precision to
yield acceptable results.
Note that IBM floating point format effectively lost about 3 bits of
accuracy compared to modern 32-bit format. I am not sure how much they
lost compared to IBM 7090 but it looks that it was at least 5 bits.
Assuming that accuracy requirements are uniformly distributed between
20 and say 60 bits, we can estimate that loss of 5 bits affected about
25% (or more) of applications that could run using 36-bits. That is
"a lot" of programs.
But it does not mean that 36 bits are somewhat magical. Simply, given a
36-bit machine, the original author had extra motivation to make sure that
the program ran in 36-bit floating point.
It's worse than that, because the 360's floating point had wobbling precision.
Depending on the number of leading zero bits in the fraction it could lose anywhere from 1 to 5 bits of precision compared to a rounded binary format. Hence the badness of the result depended more than usual on the input
data.
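The wobbling precision described above can be made concrete. The sketch below is my own illustration: it computes the ulp of the S/360 short format (base-16 exponent, 24-bit fraction normalized to [1/16, 1)) at a few points, ignoring the format's range limits, and shows the effective precision swinging across a single hexade:

```python
import math

# S/360 short float: base-16 exponent, 24-bit (6 hex digit) fraction
# normalized to [1/16, 1).  ulp = 16**e * 2**-24 where x = f * 16**e.
# Illustrative sketch; range limits of the real format are ignored.

def s360_short_ulp(x: float) -> float:
    e = math.floor(math.log(x, 16)) + 1   # pick e with x/16**e in [1/16, 1)
    while x >= 16.0 ** e:                 # guard against log() rounding
        e += 1
    while x < 16.0 ** e / 16.0:
        e -= 1
    return 16.0 ** e * 2.0 ** -24

for x in (1.0, 2.0, 8.0, 15.0):
    bits = math.log2(x / s360_short_ulp(x))
    print(f"x = {x:4}  effective bits = {bits:.1f}")
```

This prints effective precisions of 20.0, 21.0, 23.0, and 23.9 bits respectively, versus a steady ~24 bits for a rounded binary format with a hidden bit, which is exactly the 1-to-5-bit data-dependent loss discussed above.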
Well, IBM format had twice the range of IEEE format, so effectively one
bit moved from mantissa to exponent. Looking at representable values,
except at the low end of the range only normalized values matter. In
hex format 15/16 of values are normalized, ...
According to Waldek Hebisch <antispam@fricas.org>:
Well, IBM format had twice the range of IEEE format, so effectively one
bit moved from mantissa to exponent. Looking at representable values,
except at the low end of the range only normalized values matter. In
hex format 15/16 of values are normalized, ...
That's the same mistake IBM made when they designed the 360's FP.
Leading fraction digits are geometrically distributed, not linearly.
(Look at a slide rule to see what I mean.)
There are on average two leading zeros so only half of the values are normalized.
Quadi, have your computer architectures included IBM 360 floating point support? There is probably more demand for that than for 36-bit these
days.
According to Waldek Hebisch <antispam@fricas.org>:
Well, IBM format had twice the range of IEEE format, so effectively one
bit moved from mantissa to exponent. Looking at representable values,
except at the low end of the range only normalized values matter. In hex
format 15/16 of values are normalized, ...
That's the same mistake IBM made when they designed the 360's FP.
Leading fraction digits are geometrically distributed, not linearly.
(Look at a slide rule to see what I mean.)
There are on average two leading zeros so only half of the values are
normalized.
No. By _definition_ hex floating point number is normalized if and
only if its leading hex digit is different than zero.
According to Waldek Hebisch <antispam@fricas.org>:
There are on average two leading zeros so only half of the values are
normalized.
No. By _definition_ hex floating point number is normalized if and
only if its leading hex digit is different than zero.
I wrote sloppily. On average a normalized hex FP number has two leading zeros so you lose another bit compared to binary, in addition to what you lose by no hidden bit and no rounding.
On Sun, 15 Feb 2026 14:37:00 +0000, John Dallman wrote:
Quadi, have your computer architectures included IBM 360 floating point
support? There is probably more demand for that than for 36-bit these
days.
Yes, in fact they have. The goal there is to facilitate data interchange
and emulation, not to provide better quality floating-point arithmetic... since, of course, it provides rather the opposite, as has been discussed
in this thread.
The original CISC Concertina I architecture went further; it had the goal
of being able to natively emulate the floating-point of just about every computer ever made.
quadi <quadibloc@ca.invalid> wrote:
On Sun, 15 Feb 2026 14:37:00 +0000, John Dallman wrote:
Quadi, have your computer architectures included IBM 360 floating point
support? There is probably more demand for that than for 36-bit these
days.
Yes, in fact they have. The goal there is to facilitate data interchange and emulation, not to provide better quality floating-point arithmetic... since, of course, it provides rather the opposite, as has been discussed in this thread.
The original CISC Concertina I architecture went further; it had the goal of being able to natively emulate the floating-point of just about every computer ever made.
That was probably already written, but since you are revising your
design it may be worth stating some facts. If you have a 64-bit
machine with convenient access to 32-bit, 16-bit and 8-bit parts,
you can store any number of bits between 4 and 64 wasting at most
50% of storage and have simple access to each item. So in terms
of memory use you are trying to avoid this 50% loss. In practice
loss will be much smaller because:
- power of 2 quantities are quite popular
- when program needs large number of items of some other size
programmer is likely to use packing/unpacking routines, keeping
data in space-efficient packed format most of the time and unpacking
it for processing
- machine with fast bit-extract/bit-insert instructions can perform
most operations quite fast even on packed data
so possible gain in memory consumption is quite low. Given that
non-standard memory modules and support chips tend to be much more
expensive than standard ones, economically attempting such savings
makes no sense.
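The packing/unpacking routines described above can be sketched in software. The helper names below are hypothetical, and Python is used only for brevity; the shifts and masks are the same work a hardware bit-extract/bit-insert instruction would do in one step:

```python
# Tightly packing 36-bit unsigned items into bytes: software
# bit-insert/bit-extract.  Hypothetical helper names, illustrative only.

def pack36(values):
    buf, acc, nbits = bytearray(), 0, 0
    for v in values:
        assert 0 <= v < 1 << 36
        acc |= v << nbits          # bit-insert at the current position
        nbits += 36
        while nbits >= 8:          # spill completed bytes, little-endian
            buf.append(acc & 0xFF)
            acc >>= 8
            nbits -= 8
    if nbits:                      # flush any trailing partial byte
        buf.append(acc & 0xFF)
    return bytes(buf)

def unpack36(buf, count):
    acc = int.from_bytes(buf, "little")
    mask = (1 << 36) - 1
    return [(acc >> (36 * i)) & mask for i in range(count)]  # bit-extract

vals = [0, 1, (1 << 36) - 1, 0x123456789]
packed = pack36(vals)
print(len(packed))                 # 18 bytes: 4 * 36 = 144 bits, no waste
```

Four 36-bit items fit in exactly 18 bytes with zero wasted bits, versus 32 bytes if each were rounded up to a 64-bit slot.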
Of course, there is also the question of speed. The argument above shows
that loss of speed on access itself can be quite small. So what
remains is speed of processing data. As long as you do processing
on power-of-2 sized items (that is, unusual sizes are limited to
storage), loss of speed can be modest; basically a dedicated 36-bit
machine probably can do 2 times as many 36-bit float operations
as a standard machine can do 64-bit operations. Practically, this
loss will be smaller than the loss of storage, but still does not look
significant enough to warrant development of a special machine.
Things are somewhat different when you want bit-accurate result
using old formats. Here already ones'-complement arithmetic has
significant overhead on a two's-complement machine.
And emulating
old floating point formats is more expensive. OTOH, modern
machines are much faster than old ones. For example modern CPU
seem to be more than 1000 times faster than real CDC-6600, so
even slow emulation is likely to be faster than real machine,
which means that emulated machine can do the work of the original
one.
So to summarize: practical considerations leave rather small space
for machine using non-power-of-two formats, and it is rather
unlikely that any design can fit there.
Of course, there is a very good reason to explore non-mainstream
approaches, namely having fun. But once you realize that
mainstream designs make their choices for good reasons,
exploring alternatives gets less fun (at least for me).
On Tue, 17 Feb 2026 20:43:35 +0000, Waldek Hebisch wrote:
But once you realize that mainstream
designs make their choices for good reasons,
exploring alternatives gets less fun (at least for me).
At one time, back in the past, the mainstream computers had word lengths
such as 12 bits, 18 bits, 24 bits, 30 bits, 36 bits, 48 bits, 60 bits...
all multiples of 6 bits.
The reason for this was that computers needed a character set with
letters, numbers, and various special characters - and a six-bit
character, with 64 possibilities, was adequate for that.
As technology advanced, and computer power became cheaper, it became
possible to think of using computers for more applications. Using an
eight-bit character allowed the use of lower-case characters, getting rid
of a limitation of the older computers that could possibly become annoying
in the future. Of course, a 7-bit character would also be enough for that -
and at least one company, ASI, actually made computers with word lengths
that were multiples of 7 bits.
Even before System/360, IBM made a computer built around a 64-bit word,
the STRETCH. It was intended to be a very powerful scientific computer,
but it also had the very rare feature of bit addressing - which a
power-of-two word length made much more practical.
Hardly any architectures provide bit addressing these days, though.
Nonetheless, a character set that includes lower-case is a good reason.
And since a 36-bit word works well with addressable 9-bit characters
instead of 6-bit ones, nothing is really lost by going to 36 bits.
Of course, there's another good reason for sticking with 32-bit or 64-bit designs: because that's what everyone else is using, standard memory
modules have data buses corresponding to such widths, possibly with extra bits for ECC.
To me, those don't seem to be enough "good reasons" to absolutely preclude
different word lengths. But there would definitely have to be a real
benefit to justify the cost and effort of using a different length. It
seems to me there is a real benefit, in that the available data sizes in
the 32-bit world aren't optimized to the needs of scientific computation.
But it's quite correct to feel this real benefit isn't enough to make
machines oriented around the 36-bit word length likely.
John Savard
On 2/12/2026 11:09 AM, MitchAlsup wrote:
anton@mips.complang.tuwien.ac.at (Anton Ertl) posted:
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
{{One can STILL argue whether
deNormals were a plus or a minus in IEEE}}
I am surprised to read that from you, who has always written that
denormals can be implemented cheaply and efficiently in hardware. The
additional hardware cost (or the cost of trapping and software
emulation) has been the only argument against denormals that I ever
encountered.
It is only after IEEE 754-2008 came with FMAC that deNormals became
a low cost addition. {And that has been my point--you seem to have forgotten the -2008 part or the argument}
And, can note, this is assuming that one actually pays the cost of
native hardware FMAC.
Well, and the secondary irony that it is mainly cost-added for FMUL,
whereas FADD almost invariably has the necessary support hardware already.
But:
FMUL is expensive operation + cheap normalizer (if no denormals);
FADD is cheap operation with expensive normalizer.
FMAC then is gluing the costs of the two units together, but:
With roughly the latency of both;
The need to be significantly wider internally to deal with some cases.
So, FMAC is a single unit that costs more than both units taken
separately, and with a higher latency.
BGB <cr88192@gmail.com> posted:
On 2/12/2026 11:09 AM, MitchAlsup wrote:
anton@mips.complang.tuwien.ac.at (Anton Ertl) posted:
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
{{One can STILL argue whether
deNormals were a plus or a minus in IEEE}}
I am surprised to read that from you, who has always written that
denormals can be implemented cheaply and efficiently in
hardware. The additional hardware cost (or the cost of trapping
and software emulation) has been the only argument against
denormals that I ever encountered.
It is only after IEEE 754-2008 came with FMAC that deNormals
became a low cost addition. {And that has been my point--you seem
to have forgotten the -2008 part or the argument}
And, can note, this is assuming that one actually pays the cost of
native hardware FMAC.
It is exceedingly difficult to get an IEEE quality rounded result if
not done in HW.
Well, and the secondary irony that it is mainly cost-added for
FMUL, whereas FADD almost invariably has the necessary support
hardware already.
But:
FMUL is expensive operation + cheap normalizer (if no denormals);
FADD is cheap operation with expensive normalizer.
FMAC then is gluing the costs of the two units together, but:
With roughly the latency of both;
The need to be significantly wider internally to deal with some
cases.
The add stage after the multiplication tree is <essentially> 2x as
wide. FMUL needs a 108-bit 2-input adder;
FMAC needs a 160-bit 3-input adder and a 52-bit incrementor.
The multiplication tree is the same, normalizer is larger.
So, FMAC is a single unit that costs more than both units taken separately, and with a higher latency.
Prior RISC processors did FMUL in 3-4 cycles (mostly 4).
Later RISC processors and x86 did FMAC in 4-cycles (occasionally 5).
On Thu, 19 Feb 2026 17:30:50 GMT
MitchAlsup <user5857@newsgrouper.org.invalid> wrote:
BGB <cr88192@gmail.com> posted:
On 2/12/2026 11:09 AM, MitchAlsup wrote:
anton@mips.complang.tuwien.ac.at (Anton Ertl) posted:
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
{{One can STILL argue whether
deNormals were a plus or a minus in IEEE}}
I am surprised to read that from you, who has always written that
denormals can be implemented cheaply and efficiently in
hardware. The additional hardware cost (or the cost of trapping
and software emulation) has been the only argument against
denormals that I ever encountered.
It is only after IEEE 754-2008 came with FMAC that deNormals
became a low cost addition. {And that has been my point--you seem
to have forgotten the -2008 part or the argument}
And, can note, this is assuming that one actually pays the cost of
native hardware FMAC.
It is exceedingly difficult to get an IEEE quality rounded result if
not done in HW.
Well, and the secondary irony that it is mainly cost-added for
FMUL, whereas FADD almost invariably has the necessary support
hardware already.
But:
FMUL is expensive operation + cheap normalizer (if no denormals);
FADD is cheap operation with expensive normalizer.
FMAC then is gluing the costs of the two units together, but:
With roughly the latency of both;
The need to be significantly wider internally to deal with some
cases.
The add stage after the multiplication tree is <essentially> 2x as
wide. FMUL needs a 108-bit 2-input adder;
FMAC needs a 160-bit 3-input adder and a 52-bit incrementor.
The multiplication tree is the same, normalizer is larger.
So, FMAC is a single unit that costs more than both units taken separately, and with a higher latency.
Prior RISC processors did FMUL in 3-4 cycles (mostly 4).
Later RISC processors and x86 did FMAC in 4-cycles (occasionally 5).
Arm Inc. application processors cores have FMAC latency=4 for
multiplicands, but 2 for accumulator.
Maybe we should switch to 18-bit bytes to support UNICODE.
On 2/19/2026 11:30 AM, MitchAlsup wrote:
BGB <cr88192@gmail.com> posted:
On 2/12/2026 11:09 AM, MitchAlsup wrote:
anton@mips.complang.tuwien.ac.at (Anton Ertl) posted:
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
{{One can STILL argue whether
deNormals were a plus or a minus in IEEE}}
I am surprised to read that from you, who has always written that
denormals can be implemented cheaply and efficiently in hardware. The
additional hardware cost (or the cost of trapping and software
emulation) has been the only argument against denormals that I ever
encountered.
It is only after IEEE 754-2008 came with FMAC that deNormals became
a low cost addition. {And that has been my point--you seem to have
forgotten the -2008 part or the argument}
And, can note, this is assuming that one actually pays the cost of
native hardware FMAC.
It is exceedingly difficult to get an IEEE quality rounded result if
not done in HW.
Likely depends.
Can use the trick of bumping to the next size up and use that for
computation.
So, for Binary32 compute it as Binary64, and for Binary64 compute it as
Binary128.
BGB wrote:
On 2/19/2026 11:30 AM, MitchAlsup wrote:
BGB <cr88192@gmail.com> posted:
On 2/12/2026 11:09 AM, MitchAlsup wrote:
anton@mips.complang.tuwien.ac.at (Anton Ertl) posted:
MitchAlsup <user5857@newsgrouper.org.invalid> writes:
{{One can STILL argue whether
deNormals were a plus or a minus in IEEE}}
I am surprised to read that from you, who has always written that
denormals can be implemented cheaply and efficiently in hardware. The
additional hardware cost (or the cost of trapping and software
emulation) has been the only argument against denormals that I ever
encountered.
It is only after IEEE 754-2008 came with FMAC that deNormals became
a low cost addition. {And that has been my point--you seem to have
forgotten the -2008 part or the argument}
And, can note, this is assuming that one actually pays the cost of
native hardware FMAC.
It is exceedingly difficult to get an IEEE quality rounded result if
not done in HW.
Likely depends.
Can use the trick of bumping to the next size up and use that for
computation.
So, for Binary32 compute it as Binary64, and for Binary64 compute it
as Binary128.
Neither of those work!
I believed this to be true but I was shown the error of my thinking by
more knowledgeable people in the 754 working group. I.e. they had a very
simple/small example where doing the calculation in the next higher
precision would still cause double rounding errors.
Also note that Mitch has stated multiple times that you need ~160
mantissa bits during FMAC double calculations.
Terje
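A double-rounding failure of the kind described above is easy to reconstruct with exact rationals. The sketch below is my own illustration, not necessarily the working group's example: it rounds 1 + 2^-24 + 2^-54 once straight to a 24-bit significand, and once via an intermediate 53-bit (binary64-width) rounding:

```python
from fractions import Fraction

# Reconstructing a classic double-rounding case with exact rationals.
# Illustrative sketch; not the 754 working group's actual example.

def round_sig(x: Fraction, p: int) -> Fraction:
    # round-to-nearest, ties-to-even, to a p-bit significand; assumes 1 <= x < 2
    scaled = x * 2 ** (p - 1)
    n = scaled.numerator // scaled.denominator
    rem = scaled - n
    if rem > Fraction(1, 2) or (rem == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return Fraction(n, 2 ** (p - 1))

x = 1 + Fraction(1, 2 ** 24) + Fraction(1, 2 ** 54)  # just above a binary32 tie

direct = round_sig(x, 24)                  # straight to 24 bits: rounds up
double = round_sig(round_sig(x, 53), 24)   # via 53 bits first

print(direct == 1 + Fraction(1, 2 ** 23))  # True
print(double == 1)                         # True: the extra bit was lost
```

The 53-bit rounding throws away the 2^-54 tail, leaving an exact tie at 24 bits, which ties-to-even then resolves the other way: the two paths disagree in the last bit.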