Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Could the VAX have been designed as a
RISC architecture to begin with? Because not doing so meant that, just
over a decade later, RISC architectures took over the “real computer”
market and wiped the floor with DEC’s flagship architecture,
performance-wise.
The answer was no, the VAX could not have been done as a RISC
architecture. RISC wasn’t actually price-performance competitive until
the latter 1980s:
RISC didn’t cross over CISC until 1985. This occurred with the
availability of large SRAMs that could be used for caches.
Like other USA-based computer architects, Bell ignores ARM, which outperformed the VAX without using caches and was much easier to
design.
As for code size, we see significantly smaller code for RISC
instruction sets with 16/32-bit encodings such as ARM T32/A32 and
RV64GC than for all CISCs, including AMD64, i386, and S390x <2024Jan4.101941@mips.complang.tuwien.ac.at>. I doubt that VAX fares
so much better in this respect that its code is significantly smaller
than for these CPUs.
Bottom line: If you sent, e.g., me and the needed documents back in
time to the start of the VAX project, and gave me a magic wand that
would convince the DEC management and workforce that I know how to
design their next architecture, and how to write compilers for it, I
would give the implementation team RV32GC as the architecture to
implement, tell them that they should use pipelining for it, and of
course also give that architecture to the software people.
As a result, DEC would have had an architecture that would have given
them superior performance, they would not have suffered from the
infighting of VAX9000 vs. PRISM etc. (and not from the wrong decision
to actually build the VAX9000), and might still be going strong to
this day. They would have been able to extend RV32GC to RV64GC
without problems, and produce superscalar and OoO implementations.
OTOH, DEC had great success with the VAX for a while, and their demise
may have been unavoidable given their market position: Their customers (especially the business customers of VAXen) went to them instead of
IBM because they wanted something less costly, and they later moved on
to PCs running Linux once those provided something less costly.
So DEC would also have needed to outcompete Intel and the PC market to succeed (and IBM eventually got out of that market).
- anton
On Sat, 1 Mar 2025 11:58:17 +0000, Anton Ertl wrote:
Like other USA-based computer architects, Bell ignores ARM, which
outperformed the VAX without using caches and was much easier to
design.
Was ARM around when VAX was being designed (~1973) ??
Found this paper <https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm>
at Gordon Bell’s website. Talking about the VAX, which was designed as
the ultimate “kitchen-sink” architecture, with every conceivable
feature to make it easy for compilers (and humans) to generate code,
he explains:
The VAX was designed to run programs using the same amount of
memory as they occupied in a PDP-11. The VAX-11/780 memory range
was 256 Kbytes to 2 Mbytes. Thus, the pressure on the design was
to have very efficient encoding of programs. Very efficient
encoding of programs was achieved by having a large number of
instructions, including those for decimal arithmetic, string
handling, queue manipulation, and procedure calls. In essence, any
frequent operation, such as the instruction address calculations,
was put into the instruction-set. VAX became known as the
ultimate, Complex (Complete) Instruction Set Computer. The Intel
x86 architecture followed a similar evolution through various
address sizes and architectural fads.
The VAX project started roughly around the time the first RISC
concepts were being researched. Could the VAX have been designed as a
RISC architecture to begin with? Because not doing so meant that, just
over a decade later, RISC architectures took over the “real computer” market and wiped the floor with DEC’s flagship architecture, performance-wise.
The answer was no, the VAX could not have been done as a RISC
architecture. RISC wasn’t actually price-performance competitive until
the latter 1980s:
RISC didn’t cross over CISC until 1985. This occurred with the
availability of large SRAMs that could be used for caches. It
should be noted at the time the VAX-11/780 was introduced, DRAMs
were 4 Kbits and the 8 Kbyte cache used 1 Kbits SRAMs. Memory
sizes continued to improve following Moore’s Law, but it wasn’t
till 1985, that Reduced Instruction Set Computers could be built
in a cost-effective fashion using SRAM caches. In essence RISC
traded off cache memories built from SRAMs for the considerably
faster, and less expensive Read Only Memories that held the more
complex instructions of VAX (Bell, 1986).
The answer was no, the VAX could not have been done as a RISC
architecture. RISC wasn’t actually price-performance competitive until
the latter 1980s:
RISC didn’t cross over CISC until 1985. This occurred with the
availability of large SRAMs that could be used for caches.
Like other USA-based computer architects, Bell ignores ARM, which
outperformed the VAX without using caches and was much easier to
design.
If you look at the VAX 8800 or NVAX uArch you see that even in 1990 it
was still taking multiple clocks to serially decode each instruction and
that basically stalls away any benefits a pipeline might have given.
Lawrence D'Oliveiro wrote:
Found this paper
<https://gordonbell.azurewebsites.net/Digital/Bell_Retrospective_PDP11_paper_c1998.htm>
at Gordon Bell’s website. Talking about the VAX, which was designed as
the ultimate “kitchen-sink” architecture, with every conceivable
feature to make it easy for compilers (and humans) to generate code,
he explains:
The VAX was designed to run programs using the same amount of
memory as they occupied in a PDP-11. The VAX-11/780 memory range
was 256 Kbytes to 2 Mbytes. Thus, the pressure on the design was
to have very efficient encoding of programs. Very efficient
encoding of programs was achieved by having a large number of
instructions, including those for decimal arithmetic, string
handling, queue manipulation, and procedure calls. In essence, any
frequent operation, such as the instruction address calculations,
was put into the instruction-set. VAX became known as the
ultimate, Complex (Complete) Instruction Set Computer. The Intel
x86 architecture followed a similar evolution through various
address sizes and architectural fads.
The VAX project started roughly around the time the first RISC
concepts were being researched. Could the VAX have been designed as a
RISC architecture to begin with? Because not doing so meant that, just
over a decade later, RISC architectures took over the “real computer”
market and wiped the floor with DEC’s flagship architecture,
performance-wise.
The answer was no, the VAX could not have been done as a RISC
architecture. RISC wasn’t actually price-performance competitive until
the latter 1980s:
RISC didn’t cross over CISC until 1985. This occurred with the
availability of large SRAMs that could be used for caches. It
should be noted at the time the VAX-11/780 was introduced, DRAMs
were 4 Kbits and the 8 Kbyte cache used 1 Kbits SRAMs. Memory
sizes continued to improve following Moore’s Law, but it wasn’t
till 1985, that Reduced Instruction Set Computers could be built
in a cost-effective fashion using SRAM caches. In essence RISC
traded off cache memories built from SRAMs for the considerably
faster, and less expensive Read Only Memories that held the more
complex instructions of VAX (Bell, 1986).
If you look at the VAX 8800 or NVAX uArch you see that even in 1990 it
was still taking multiple clocks to serially decode each instruction and
that basically stalls away any benefits a pipeline might have given.
If they had only put in *the things they actually use*
(as shown by DEC's own instruction usage stats from 1982),
and left out all the things that they rarely or never use,
it would have had 50 or so opcodes instead of 305,
at most one operand that addressed memory on arithmetic and logic
opcodes, with 3 address modes (register, register address, register
offset address), instead of 0 to 5 variable-length operands with 13
address modes each (most combinations of which are either silly,
redundant, or illegal).
Then they would have been able to parse instructions in one clock,
which makes pipelining a possible consideration, and simplifies the
uArch so that it can all fit on one chip, which allows it to compete
with RISC.
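A toy sketch of the decode difference (not the actual VAX encoding, and
not anything DEC or the poster proposed; the field layouts and specifier
lengths below are made up): with variable-length operand specifiers the
position of operand n+1 is unknown until operand n has been parsed, so
decode is an inherently serial loop, while a fixed-field format can be
cracked in one step.

#include <stddef.h>
#include <stdint.h>

/* Caricature of VAX-style decode: each operand specifier is variable
 * length, so the start of specifier n+1 is unknown until specifier n
 * has been parsed -- an inherently serial loop. */
static size_t decode_variable(const uint8_t *p, int noperands)
{
    size_t len = 1;                          /* opcode byte */
    for (int i = 0; i < noperands; i++) {
        uint8_t mode = p[len] >> 4;          /* high nibble picks the mode */
        /* specifier length depends on the mode (made-up lengths) */
        len += (mode < 8) ? 1 : (mode < 12) ? 2 : 5;
    }
    return len;                              /* only now is the length known */
}

/* Caricature of a fixed 32-bit format: every field sits at a known bit
 * position, so all fields can be extracted in parallel in one step. */
static void decode_fixed(uint32_t insn, unsigned *op, unsigned *rd,
                         unsigned *rs1, unsigned *rs2)
{
    *op  = insn & 0x7f;
    *rd  = (insn >> 7)  & 0x1f;
    *rs1 = (insn >> 12) & 0x1f;
    *rs2 = (insn >> 17) & 0x1f;
}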
The reason it was designed the way it was is that DEC had
microcode and microprogramming on the brain.
In their 1975 paper, Bell and Strecker say it over and over and over.
They were looking at the CPU design as one large parsing machine
and not as a set of parallel hardware tasks.
This was their mental mindset just before they started the VAX design:
What Have We Learned From the PDP-11, Bell & Strecker, 1975:
https://gordonbell.azurewebsites.net/Digital/Bell_Strecker_What_we%20_learned_fm_PDP-11c%207511.pdf
According to Anton Ertl <anton@mips.complang.tuwien.ac.at>:
The answer was no, the VAX could not have been done as a RISC
architecture. RISC wasn’t actually price-performance competitive until
the latter 1980s:
RISC didn’t cross over CISC until 1985. This occurred with the
availability of large SRAMs that could be used for caches.
Like other USA-based computer architects, Bell ignores ARM, which
outperformed the VAX without using caches and was much easier to
design.
That's not a fair comparison. VAX design started in 1975 and shipped in
1978. The first ARM design started in 1983 with working silicon in 1985.
It was a decade later.
On 3/1/2025 5:58 AM, Anton Ertl wrote:
Lawrence D'Oliveiro <ldo@nz.invalid> writes:
Would likely need some new internal operators to deal with bit-array operations and similar, with bit-ranges allowed as a pseudo-value type
(may exist in constant expressions but will not necessarily exist as an actual value type at runtime).
Say:
val[63:32]
This treats (63:32) as a BitRange type, which then has special semantics
when used as an array index on an integer type, ...
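A minimal sketch of what such a bit-range access might lower to (the
helper name and the shift-and-mask lowering are my assumptions, not the
poster's actual implementation):

#include <stdint.h>

/* Hypothetical lowering of "val[hi:lo]" on a 64-bit integer: extract
 * bits hi..lo (inclusive) and return them right-justified. */
static inline uint64_t bitrange_extract(uint64_t val, unsigned hi, unsigned lo)
{
    unsigned width = hi - lo + 1;
    uint64_t mask  = (width >= 64) ? ~(uint64_t)0
                                   : (((uint64_t)1 << width) - 1);
    return (val >> lo) & mask;
}

/* e.g. val[63:32] would become bitrange_extract(val, 63, 32) */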
The previous idea for bitfield extract/insert had turned into a
composite BITMOV instruction that could potentially do both operations
in a single instruction (along with moving a bitfield directly between
two registers).
Idea here is that it may do essentially a combination of a shift and a masked bit-select, say:
Low 8 bits of immediate encode a shift in the usual format:
Signed 8-bit shift amount, negative is right shift.
High bits give a pair of bit-offsets used to compose a bit-mask.
These will MUX between the shifted value and another input value.
I am still not sure whether this would make sense in hardware, but it
is not entirely implausible to implement in Verilog.
Would likely be a 2 or 3 cycle operation, say:
  EX1: Do the shift and mask generation;
    may reuse the normal SHAD unit for the shift;
    Mask-Gen will be specialized logic;
  EX2: Do the MUX;
  EX3: Present the MUX result as output (passed over from EX2).
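A rough C model of the shift-plus-masked-select behaviour sketched
above (the function name, argument order, and hi/lo mask encoding are
illustrative assumptions, not a definitive spec of BITMOV):

#include <stdint.h>

/* Shift 'src' by a signed amount (negative = right shift), build a mask
 * covering bit positions lo..hi, then MUX bit-for-bit between the
 * shifted value and 'other'. */
static inline uint64_t bitmov_model(uint64_t src, uint64_t other,
                                    int shift, unsigned hi, unsigned lo)
{
    /* EX1: shift and mask generation */
    uint64_t shifted = (shift >= 0) ? (src << shift) : (src >> -shift);
    unsigned width   = hi - lo + 1;
    uint64_t mask    = ((width >= 64) ? ~(uint64_t)0
                                      : (((uint64_t)1 << width) - 1)) << lo;

    /* EX2: per-bit select between the shifted value and the other input */
    return (shifted & mask) | (other & ~mask);
}

/* e.g. inserting src[15:0] into bits 47:32 of dst:
 *   dst = bitmov_model(src, dst, 32, 47, 32);   (shift left 32, keep 47..32) */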
The other thing is that the VAX 11/780 (released 1977) had a 2KB cache,
so Bell's argument that caches were only available around 1985 does not
hold water on that end, either.
IBM tried to commercialize the 801 in the ROMP in the IBM RT PC; Wikipedia
says: "The architectural work on the ROMP began in late spring of
1977, as a spin-off of IBM Research's 801 RISC processor ... The first examples became available in 1981, and it was first used commercially
in the IBM RT PC announced in January 1986. ... The delay between the completion of the ROMP design, and introduction of the RT PC was
caused by overly ambitious software plans for the RT PC and its
operating system (OS)." And IBM then designed a new RISC, the
RS/6000, which was released in 1990.