> The hardware that ran the software described in this manual didn't have tagged memory or garbage collection support in hardware and the real instruction set was similar to any RISC CPU.
Then you should check the architecture of those machines some time: MIT CONS, MIT CADR, Symbolics LM-2 (a repackaged CADR), Symbolics 3600, LMI Lambda, ...
http://www.bitsavers.org/pdf/mit/cons/TheLispMachine_Nov74.p...
Page 5: 1 GC bit, 1 User bit, 2 cdr code bits, 5 bits data type, 23 bit pointer.
Looks to me like a tagged CPU architecture...
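A minimal sketch of what that word layout implies, with the field widths taken from the manual page cited above. The struct, the function names, and the bit ordering within the word are my own illustration, not the machine's actual layout:

```c
#include <stdint.h>

/* Hypothetical unpacking of a 32-bit CONS-machine word:
 * 1 GC bit, 1 user bit, 2 cdr-code bits, 5 data-type bits,
 * 23-bit pointer. Field order is an assumption for illustration. */
typedef struct {
    unsigned gc;       /* 1 bit  */
    unsigned user;     /* 1 bit  */
    unsigned cdr_code; /* 2 bits */
    unsigned dtype;    /* 5 bits */
    uint32_t pointer;  /* 23 bits */
} lm_word;

static lm_word unpack(uint32_t w) {
    lm_word r;
    r.gc       = (w >> 31) & 0x1;
    r.user     = (w >> 30) & 0x1;
    r.cdr_code = (w >> 28) & 0x3;
    r.dtype    = (w >> 23) & 0x1f;
    r.pointer  =  w        & 0x7fffff;
    return r;
}
```

The point is that every memory word carries its type with it, so the hardware can dispatch on `dtype` during an operation rather than trusting the compiler.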
The MIT CADR Lisp Machine was a stack architecture with 24-bit data and 8-bit tags: six bits for data-type encoding and two bits for compact lists. The CPU does type checks on operations, ...
It was nothing like a RISC machine. RISC machines were researched for Lisp (Symbolics, Xerox, SPUR, SPARC, ...) in the mid-to-late 80s, a full decade after the architecture of the MIT CONS Lisp Machine.
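The "two bits for compact lists" are the cdr code: instead of every list element being a full two-word cons cell, a 2-bit code says where the cdr lives. A hedged sketch of the idea follows; the enum names follow the classic scheme, but the values and the simplified in-cell `cdr_ptr` field are illustrative, not the CADR's actual encoding:

```c
#include <stddef.h>

/* Illustrative cdr-coding: each cell's 2-bit code describes its cdr. */
enum { CDR_NORMAL = 0,  /* cdr is an explicit pointer (full cell)     */
       CDR_NEXT   = 1,  /* cdr is the cell in the following word      */
       CDR_NIL    = 2 };/* cdr is NIL: end of list                    */

typedef struct { unsigned cdr_code; int car; size_t cdr_ptr; } cell;

/* Sum the cars of a cdr-coded list stored in memory[], starting at i. */
static int sum_list(const cell *memory, size_t i) {
    int total = 0;
    for (;;) {
        total += memory[i].car;
        if (memory[i].cdr_code == CDR_NIL)  break;
        if (memory[i].cdr_code == CDR_NEXT) i += 1;            /* compact  */
        else                                i = memory[i].cdr_ptr; /* full */
    }
    return total;
}
```

A run of `CDR_NEXT` cells stores a list in half the space of ordinary cons cells, which is why the tag field spends two bits on it.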
I am well aware of the architecture of the MIT CADR, LMI Lambda and TI Explorer; the Symbolics lispms less so, but they are not the subject of this thread. I have written CADR microcode recently.
None of the features you list are constrained by the architecture of the hardware; they are just conventions of the software VM running on it. Would you suggest that the x86 is a tagged CPU architecture just because SBCL or a JVM uses tags?
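The point about software-convention tagging can be sketched concretely. Systems like SBCL or a JVM keep the tag in the low bits of an ordinary machine word and manipulate it with ordinary AND/shift instructions; nothing about the CPU knows the bits are a tag. The tag values and widths below are illustrative, not SBCL's actual lowtags:

```c
#include <stdint.h>

/* Software tagging in the SBCL/JVM style, on a plain untagged CPU.
 * TAG_BITS and the tag values here are made up for illustration. */
#define TAG_BITS   3
#define TAG_MASK   ((1u << TAG_BITS) - 1)
#define TAG_FIXNUM 0u   /* low bits zero: fixnum add needs no untagging */
#define TAG_CONS   1u   /* a heap pointer would carry a nonzero tag     */

static uintptr_t box_fixnum(intptr_t n)    { return (uintptr_t)n << TAG_BITS; }
static intptr_t  unbox_fixnum(uintptr_t w) { return (intptr_t)w >> TAG_BITS; }
static unsigned  tag_of(uintptr_t w)       { return (unsigned)(w & TAG_MASK); }
```

Every one of these operations compiles to conventional x86 instructions, which is exactly the distinction being argued: the tagging is a compiler convention, not a property of the hardware.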
>Page 5: 1 GC bit, 1 User bit, 2 cdr code bits, 5 bits data type, 23 bit pointer.
This is not true for the software that matches this version of the manual: System 99 used 25-bit pointers, and there was no GC or user bit. The change from the earlier word format was possible precisely because the format was not fixed in hardware.
The CADR microinstruction set is load/store with regular opcode fields; it is very much like an early RISC.
If the x86 provided SBCL with such instructions and data formats, it would be a tagged architecture, but it doesn't: the SBCL compiler outputs conventional x86 instructions.
The Lisp Machine compiler, OTOH, generates instructions for a mostly stack-based machine, which runs on the CPU in microcode.
Please don't assume that the Lisp compiler on some Symbolics machines could not output microcode. It could, IIRC.
But that was not what a Lisp developer would normally do; he/she would use the compiler in such a way that it outputs the usual machine code, not microcode.
The line between microcode as hardware and as software is blurred. Remember, when microcode was introduced in the 1960s, it was used to implement in software the same things that other models of the same computer family implemented in hardware; with microcode, a vendor could offer machines at different price/performance points. Both a sequential circuit and microcode can implement an algorithm.
Computer architecture at the user level is defined by the data formats and instruction set the CPU offers; how it is implemented is another level. I don't know how some Intel i7 is implemented, but it probably has writable microcode and a very different architecture inside.
That Intel hides the microcode and the CADR didn't is just another detail.