Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

BigEd
Posts: 6261
Joined: Sun Jan 24, 2010 10:24 am
Location: West Country
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by BigEd »

(Branching off a bit from this RISCy discussion - thanks Paul for those two links, very interesting.)

Not to argue with the advantages and desirability of a compiler, and noting that the availability of a C compiler makes the Z80 more attractive to me than it would be otherwise, I do wonder if very many of the applications for Z80 and CP/M were in fact compiled code - I suspect not. That is to say, a C compiler today may be a great advantage, or even a necessity, for a micro to survive and prosper into the 21st century, but it might not have been so crucial back in the day - through to the mid-80s, say. I'm thinking that it's the applications which must have made CP/M successful.

(There are of course a couple of cases for microcomputers: one is that the user writes their own application, and the other is that they use third-party applications. Or indeed both, as in the case where the user needs to start by acquiring an interpreter, assembler, or compiler.)

On another tack, I'm thinking that the Z80, S-100, and CP/M were all part of each other's mutual success - until the rise of mass market micros on the one hand and microcontrollers on the other.
B3_B3_B3
Posts: 404
Joined: Sat Apr 08, 2017 10:42 pm
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by B3_B3_B3 »

This is very interesting, but on the 6502 and compilers: shouldn't a compiler be better able than a human to deal with the tedious register saves to the stack etc. (because there are so few registers), and to use the index register/zero-page addressing modes to make its own local data/parameter stacks in addition to the 6502's simple hardware stack?

Like in this series I have been reading
https://wilsonminesco.com/stacks/parampassing.html
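
(Just to illustrate the sort of thing I mean - a rough sketch only, with made-up names and locations, not taken from any real compiler: keep a software stack/frame pointer in zero page and reach parameters and locals with the (zp),Y mode, leaving the hardware stack for return addresses only.)

Code: Select all

; Hypothetical compiler-managed parameter/local stack, separate from the
; hardware stack.  'fp' is a 16-bit frame pointer in zero page (location
; chosen arbitrarily); the frame grows downwards in ordinary RAM.

fp = &70                 ; zero-page frame pointer, low byte at &70, high at &71

enter   SEC              ; open a frame: make room for, say, 4 bytes of locals
        LDA fp
        SBC #4
        STA fp
        LDA fp+1
        SBC #0
        STA fp+1
        RTS

rdloc   LDY #2           ; read the local/parameter at offset 2 into A
        LDA (fp),Y
        RTS

wrloc   LDY #3           ; write A to the local/parameter at offset 3
        STA (fp),Y
        RTS

leave   CLC              ; close the frame again on the way out
        LDA fp
        ADC #4
        STA fp
        LDA fp+1
        ADC #0
        STA fp+1
        RTS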

On RISC, I thought it was mainly the 6502's simple pipeline that led to the RISC suggestions - and it was originally meant to be used like a microcontroller. Did early Acorn ever consider the Z80, or have a deep philosophical reason for rejecting it? It doesn't sound like they pondered it for the Proton?
Coeus
Posts: 3557
Joined: Mon Jul 25, 2016 12:05 pm
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by Coeus »

BigEd wrote: Thu Jun 15, 2023 6:43 am ...I do wonder if very many of the applications for Z80 and CP/M were in fact compiled code - I suspect not....
You're probably right, but my point was that "just re-compile it for the 6502" wasn't a practical proposition at the time, either because the application was written directly in Z80/8080 assembler or, if it was in a high-level language, because a compiler wasn't available.

Interestingly, Apple have moved processor a few times on the Mac platform - from 68000 to PowerPC to x86 to ARM, IIRC - and, though for modern software re-compiling would be a practical proposition, and I am sure new versions of software were released as native code, Apple also, I believe, included a JIT translator each time to be able to run object code from the old processor on the new one.
BigEd wrote: Thu Jun 15, 2023 6:43 am On another tack, I'm thinking that the Z80, S-100, and CP/M were all part of each other's mutual success - until the rise of mass market micros on the one hand and microcontrollers on the other.
It certainly helps when there is competition. The cloning of the IBM PC and the competition between vendors after that is almost certainly a component of the subsequent success just as having a number of manufacturers of VHS machines helped that format prevail over technically better alternatives.

The point about microcontroller/general computing divergence is interesting. In a sense, the application of microprocessors to general computing was something the manufacturers hadn't planned. Going back to the point about languages, general computing, represented by mainframes, had a longer word length, more memory and compilers available for high level languages, and was therefore quite different. Yet, as soon as microprocessors got pressed into service as cheap general computing, it must have been plain where things were going. From the oral history Paul linked, IBM were already trying to shrink a 24-bit processor from minicomputing into a microprocessor and it might have been the processor in the IBM PC; the history doesn't explain why that didn't happen. Motorola's 68000 series was a jump straight from 8-bit to a 32-bit architecture, even if the first chip was 16-bit externally, and was thus clearly aimed at capturing business from the mini and maybe even mainframe market.
paulb
Posts: 1767
Joined: Mon Jan 20, 2014 9:02 pm
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by paulb »

Coeus wrote: Thu Jun 15, 2023 1:02 pm From the oral history Paul linked, IBM were already trying to shrink a 24-bit processor from minicomputing into a microprocessor and it might have been the processor in the IBM PC; the history doesn't explain why that didn't happen.
It seems that the interviewee (on page 12) indicated that the processor he worked on, which would have been the IBM ROMP, went into the IBM RT PC.
Coeus
Posts: 3557
Joined: Mon Jul 25, 2016 12:05 pm
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by Coeus »

B3_B3_B3 wrote: Thu Jun 15, 2023 11:43 am ...Like in this series I have been reading
https://wilsonminesco.com/stacks/parampassing.html
On immediate data after a subroutine call, that is certainly done by assembly language programmers, usually exactly as per the example - immediate strings for display. The 6502 is a bit clumsy at this compared to the Z80, as it is necessary to get the return address into zero page and then, if an index register is to be used, push its old value on the stack and do the reverse when about to return. The Z80 has the EX (SP),HL instruction which seems to be for exactly this: after executing it, HL contains what was the return address and the old value of HL is safely on the stack. The code can increment HL to process the data, then do another EX (SP),HL before returning, and it returns to the right address with the old value of HL back in place. Compare:

Code: Select all

prtext	TXA		; save the caller's X and Y below the return address
	PHA
	TYA
	PHA
	TSX
	LDA &103,X	; return address pushed by JSR (low byte) ...
	STA zp		; ... copied into a zero-page pointer
	LDA &104,X	; (high byte)
	STA zp+1
	LDY #1		; the string starts at return address + 1
loop	LDA (zp),Y
	BEQ done	; a zero byte terminates the string
	JSR charout
	INY
	BNE loop
done	TSX
	TYA		; new return address = zp + Y, i.e. the terminator
	CLC
	ADC zp
	STA &103,X	; patch it over the old one on the stack (low byte)
	LDA #0
	ADC zp+1
	STA &104,X	; (high byte)
	PLA		; restore the caller's Y and X
	TAY
	PLA
	TAX
	RTS		; RTS adds 1, so execution resumes after the terminator
with:

Code: Select all

prtext	EX   (SP),HL	; HL <-> return address, i.e. the start of the string
loop	LD   A,(HL)
	OR   A		; a zero byte terminates the string
	JR   Z,done
	CALL charout
	INC  HL
	JR   loop
done	EX   (SP),HL	; swap back: old HL restored, updated address on the stack
	RET		; returns to the zero byte, which executes as a NOP
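
(For what it's worth, the calling side is the same for both versions - the string just sits in line, immediately after the call, terminated with a zero byte. Roughly, using BBC BASIC assembler-style EQUS/EQUB directives:)

Code: Select all

	JSR prtext           ; CALL prtext in the Z80 version
	EQUS "Hello, world"  ; string data placed straight after the call
	EQUB 0               ; zero terminator
	LDA #0               ; ...and execution carries on here afterwards

(In the Z80 version the RET actually lands on the zero terminator itself, but since &00 is a NOP it simply falls through to whatever follows.)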
On a zero page stack, I think one of the FORTH languages for the BBC micro takes that approach for the main stack that FORTH is based around.
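
(From memory, the usual arrangement is something like this - a generic sketch, not lifted from any particular BBC FORTH: X serves as the data stack pointer into zero page, each cell is 16 bits, and the stack grows downwards.)

Code: Select all

; Zero-page data stack indexed by X: low byte of the top cell at 0,X,
; high byte at 1,X; the next cell down the stack is at 2,X and 3,X.

push16  DEX              ; make room for one 16-bit cell
        DEX
        STA 0,X          ; A holds the low byte of the value being pushed
        STY 1,X          ; Y holds the high byte
        RTS

plus    CLC              ; FORTH '+' : add the top two cells
        LDA 0,X
        ADC 2,X
        STA 2,X
        LDA 1,X
        ADC 3,X
        STA 3,X
        INX              ; drop the old top of stack
        INX
        RTS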
B3_B3_B3 wrote: Thu Jun 15, 2023 11:43 am On risc I thought it was mainly the 6502s simple pipeline that led to the risc suggestions....
It does do a couple of interesting things. A fixed instruction size is mentioned as one of the things that makes pipelining easier, and the 6502 doesn't have that, but it behaves in some ways as if it had a fixed 16-bit instruction size - 8 bits of opcode and 8 bits of operand - in that immediately upon fetching the instruction, and while decoding it, it fetches the following byte anyway. There are many instructions in which that is a useful thing to do and, if you look at 6502 code that uses zero page extensively, which is good to do when you can, the majority of instructions are indeed two bytes long. For the few instructions that don't have a second byte there isn't really a speed penalty, as that read from memory was covering the decode time. Then the last cycle of working out a result can overlap with the fetch of the next instruction.

I wonder if a deeper pipeline relies on an instruction cache to get the best performance. With a simple bus like the 6502's, if it started fetching ahead of where it is executing, that would sometimes be contending with fetching operands or storing results, though conceivably some of the dead cycles in some instructions could be used to fetch the next instruction.
scruss
Posts: 653
Joined: Sun Jul 01, 2018 4:12 pm
Location: Toronto
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by scruss »

BigEd wrote: Thu Jun 15, 2023 6:43 am I do wonder if very many of the applications for Z80 and CP/M were in fact compiled code - I suspect not.
Maybe not C, but Turbo Pascal was huge for CP/M.
BigEd
Posts: 6261
Joined: Sun Jan 24, 2010 10:24 am
Location: West Country
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by BigEd »

Interesting! Jerry Pournelle said that although Turbo Pascal was only $50, a run time license was an extra $100 - one wonders how many people cared, complied, or were put off by that. (I was a bit surprised that the era of Turbo Pascal overlaps with the popularity of CP/M, but apparently it does.)
arg
Posts: 1289
Joined: Tue Feb 16, 2021 2:07 pm
Location: Cambridge
Contact:

Re: Cpm65: cpm for 6502s

Post by arg »

paulb wrote: Wed Jun 14, 2023 11:45 pm Returning belatedly to the topic, your point about the suitability of the 6502 for higher-level systems, including its suitability for native compilers, is a crucial one. People were eager to use higher-level languages and systems, but the 6502 seems very much like a microcontroller-level product. Even augmented with additional logic to make more sophisticated systems possible, its instruction set architecture still seems to deter native code compiler development.
Yes. In my work writing the SJ Econet fileservers (Z80 based) in competition with the Acorn ones (6502 based), I always felt that I had a HUGE advantage in being able to use a Pascal compiler. Acorn's initial fileserver was an outstanding piece of work, but they didn't subsequently develop it much because working on that all-assembler codebase was just too difficult.

Admittedly, I ended up translating quite a lot of my code into Z80 assembler by hand (for performance/size reasons) as the compilers of the day weren't as good as modern ones, but I still had that structure of the high level language to hang things in, and the ability to do most of the initial feature development in high level language, only squeezing it down to assembler once it was proven (and where the gain was worth the effort).

On the Pascal vs C question mentioned elsewhere, given a suitable dialect of Pascal (one where you can trust the memory layout of records and with a calling convention into assembler for I/O functions) there really isn't much difference between the facilities offered by the two languages. As was said at the time "A real programmer can write FORTRAN programs in any language", you might say I was writing C programs in Pascal...
B3_B3_B3
Posts: 404
Joined: Sat Apr 08, 2017 10:42 pm
Contact:

Re: Cpm65: cpm for 6502s

Post by B3_B3_B3 »

arg wrote: Fri Jun 16, 2023 9:13 am ......

Admittedly, I ended up translating quite a lot of my code into Z80 assembler by hand (for performance/size reasons) as the compilers of the day weren't as good as modern ones, but I still had that structure of the high level language to hang things in, and the ability to do most of the initial feature development in high level language, only squeezing it down to assembler once it was proven (and where the gain was worth the effort).
....
Do you know if Acorn had deep philosophical reasons for preferring the 6502, or whether, by the time they were thinking of an Atom successor, they thought it was too 'tedious to repent' and go Z80, or thought it was too late anyway and that affordable, working 16-bit-or-more processors that they liked would arrive soon anyway (oops?)...?
arg
Posts: 1289
Joined: Tue Feb 16, 2021 2:07 pm
Location: Cambridge
Contact:

Re: Cpm65: cpm for 6502s

Post by arg »

B3_B3_B3 wrote: Fri Jun 16, 2023 12:56 pm Do you know if Acorn had deep philosophical reasons for preferring the 6502, or whether, by the time they were thinking of an Atom successor, they thought it was too 'tedious to repent' and go Z80, or thought it was too late anyway and that affordable, working 16-bit-or-more processors that they liked would arrive soon anyway (oops?)...?
I think the 6502 was very much preferred; there was no desire to switch to Z80 (and probably, to a lesser extent, no capability to do so).

The 6502 was perceived as the higher-performing device (at comparable clock speeds/memory subsystems) - and I think that was true for hand-optimised assembler. Indeed I've seen it said elsewhere that most of the Z80's enhancements (over the 8080) didn't actually improve performance compared to implementing the same functions in 8080 code - particularly the IX/IY prefix instructions, which are maybe convenient for a compiler implementing stack frames, but take lots of bytes of opcode and are consequently quite slow to execute.

It was also perceived as more suited to fancy tricks (like the two-phase CPU vs video RAM architecture of the BBC, or the 1MHz/2MHz clock switching), though that is maybe harder to substantiate (Sinclair machines did different interesting things with Z80).

But the Proton/BBC couldn't feasibly have been a Z80 machine, as it drew too heavily on existing stuff on the shelf from the Atom/System range. The two-phase video architecture was 'the' new idea that created the new machine; the rest of the hardware (as sold to the BBC) was a shopping list of interface designs and supporting software already in existence on the existing machines (Mode 7, Econet, 8271/DFS, the 6522 used for various things, the core of the OS, and BASIC - OK, Atom BASIC was quite different, but there were plans for enhancements to that BASIC, and those plans, fused with the BBC's wish-list, resulted in BBC BASIC). Switching to Z80 would have meant starting again, and (in the context of selling the machine to the BBC) would have left no reason why Acorn were in a better position to do the job than anybody else.

The only reason for wanting to use a Z80 (from my perception of Acorn's view) was to support CP/M. And Acorn didn't want to make a CP/M machine, because that left too little room for innovation. I don't think I ever saw it written down as such, but it was very clear that Acorn's strategy was all about producing machines that were 'different', with innovation in the design. If they produced a machine "same as everyone else is doing" then it wouldn't be any cheaper - Acorn's manufacturing was all outsourced, so no edge there, and they were carrying an R&D overhead that would make machines more expensive than a 'box shifting' type competitor. And they weren't yet a big brand, so again no leverage to sell machines that way.

The compiled languages issue was a blind spot - I'm not sure if the BASIC-plus-fast-bits-in-assembler alternative that developed (and carried through into RISC OS) was a deliberate choice or a consequence of having picked 6502 at the start. Part of the reason compiled languages were ignored was the timing in the evolution of RAM size - in 1981, 32K was considered "a lot of RAM", so compilers were rare on machines of the class that Acorn considered themselves to be competing in, the usefulness of the output of a compiler on such machines was limited, and the effort to fill your small amount of RAM with hand-crafted code wasn't so bad. By 1983 when I was enjoying use of Pascal, I had more than 64K of RAM to play with.

16-bit machines at that time weren't considered relevant - they were an entirely different price-class, and not seen as being 'just round the corner' - which indeed they really weren't: although IBM PCs were released at almost the same time as the BBC, and the Mac was along a few years later, they were vastly more expensive and not really seen as a competitor at all until the end of the decade.
paulb
Posts: 1767
Joined: Mon Jan 20, 2014 9:02 pm
Contact:

Re: Cpm65: cpm for 6502s

Post by paulb »

arg wrote: Fri Jun 16, 2023 2:40 pm The compiled languages issue was a blind spot - I'm not sure if the BASIC-plus-fast-bits-in-assembler alternative that developed (and carried through into RISC OS) was a deliberate choice or a consequence of having picked 6502 at the start.
I think that various people at Acorn were comfortable with writing programs in 6502 assembly language or BASIC, and this dictated the strategy over the longer term. There may have been other strategies playing out, but those seem to be associated with more immediate market needs, like meeting the expectations of different markets that Acorn was in or attempting to enter.

Although a few higher-level languages were brought out for the Beeb, most of these all had their own particular niche. Things like Logo were for the education market, and Acorn eventually went with Logotron's version in the Master Compact, anyway. The Lisp and Prolog implementations were probably also aimed at education, being necessarily limited on such a platform. Maybe ISO-Pascal got a few outings in products beyond education, but I can imagine that the awkwardness of deploying anything written in Pascal confined such products to complete hardware-plus-software solutions.

I know that BCPL was used to implement some actual Acorn and Acornsoft products but had similar deployment considerations to ISO-Pascal. The C language implementations came along quite late in the day, and again, there appear to have been deployment complications. One of the Beeb's C language implementations (from HCCS, I think) was actually written in Forth, which I suppose demonstrates something about the utility of that language on the platform, but the result was constrained by such underpinnings and criticised for being unconventional, seemingly consigning it also to the educational niche.

When the Archimedes came along, Acorn made sure to deliver language implementations at first. After all, if you have a machine that you are describing as being faster than various minicomputers, it helps to have languages that minicomputer users might be using. But it was clear that after a while, those language products were largely not being refreshed, and they were still rather pricey. Eventually, I think that the only language products that Acorn persisted with were its Desktop C and Assembler, and as other platforms saw their development tools evolve, with C++ also entering the picture, one got the impression that Acorn's decision makers were hardly bothered by the increasing convenience deficit.

People like Mark Colton and Charles Moir were quite vocal about such matters, Colton consistently so from an early stage, but I imagine that the culture at Acorn had settled on the myth that BBC BASIC and a dash of assembly language was all that people needed to write applications, being good enough for Wilson et al. Moir came to such views rather late in the day, leaving him with successful applications that were increasingly difficult to enhance and maintain. (Having seen the stability of early versions of Impression, one wonders whether Computer Concepts might also have tried to write a compiler as well as an operating system.)

Comparable delusions persisted about the technological adequacy of RISC OS. Had Acorn invested properly in development tools, they could have navigated that technological challenge, migrated high-end users to some form of Unix, shared some of the benefits of more convenient application development with RISC OS users, and eventually brought their entire user base on board one or more modern platforms. They might even have emerged as a credible player in the Unix or cross-platform development space.

Instead, they pursued markets where RISC OS could be squirreled away inside a sealed box and hoped that no-one would ever need to develop new applications for those appliances. Or, more likely, that anyone unfortunate enough to do so would tolerate the esoteric development culture.
arg
Posts: 1289
Joined: Tue Feb 16, 2021 2:07 pm
Location: Cambridge
Contact:

Re: Cpm65: cpm for 6502s

Post by arg »

paulb wrote: Fri Jun 16, 2023 3:32 pm Although a few higher-level languages were brought out for the Beeb, most of these all had their own particular niche. Things like Logo were for the education market, and Acorn eventually went with Logotron's version in the Master Compact, anyway. The Lisp and Prolog implementations were probably also aimed at education, being necessarily limited on such a platform. Maybe ISO-Pascal got a few outings in products beyond education, but I can imagine that the awkwardness of deploying anything written in Pascal confined such products to complete hardware-plus-software solutions.
Yes, none of them were really practical for developing most types of commercial software, and arguably this was because the 6502 was too hard to compile for - none of those compilers you cite would have been useful for the sort of work I was doing with a Z80-target compiler under CP/M. And the shortage of RAM on the BBC meant that such compilers as did exist were fairly tiresome to use even if you could accept their limitations (like having to have the runtime ROM in each machine running the resulting application).
B3_B3_B3
Posts: 404
Joined: Sat Apr 08, 2017 10:42 pm
Contact:

Re: Cpm65: cpm for 6502s

Post by B3_B3_B3 »

arg wrote: Fri Jun 16, 2023 2:40 pm ......

I think the 6502 was very much preferred; there was no desire to switch to Z80 (and probably, to a lesser extent, no capability to do so).

The 6502 was perceived as the higher-performing device (at comparable clock speeds/memory subsystems) - and I think that was true for hand-optimised assembler. ....

....
Thanks, very interesting - so the BBC/Proton continued the idea of BASIC and assembler mixed. (A pity the User Guide wasn't supplemented by an included, updated 'Atomic Theory and Practice', which showed examples, rather than just the BBC User Guide - but that is a subject for another book...)

Perhaps later a 680xx second processor (setting aside the slow 300+ cycle instructions), with the usual BBC BASIC and assembler but bundled with Pascal and a disc drive, would have been useful to Beeb owners holding off to see if the 8086 PC would get a civilised successor (the Archimedes was too late, IMO...).
B3_B3_B3
Posts: 404
Joined: Sat Apr 08, 2017 10:42 pm
Contact:

Re: Cpm65: cpm for 6502s

Post by B3_B3_B3 »

paulb wrote: Fri Jun 16, 2023 3:32 pm
arg wrote: Fri Jun 16, 2023 2:40 pm The compiled languages issue was a blind spot - I'm not sure if the BASIC-plus-fast-bits-in-assembler alternative that developed (and carried through into RISC OS) was a deliberate choice or a consequence of having picked 6502 at the start.
I think that various people at Acorn were comfortable with writing programs in 6502 assembly language or BASIC, and this dictated the strategy over the longer term. ....

Comparable delusions persisted about the technological adequacy of RISC OS. Had Acorn invested properly in development tools, they could have navigated that technological challenge, migrated high-end users to some form of Unix, shared some of the benefits of more convenient application development with RISC OS users, and eventually brought their entire user base on board one or more modern platforms. They might even have emerged as a credible player in the Unix or cross-platform development space. ....
This blogger dreams of Acorn replacing RISCOS with BeOS and going SMP...
https://liam-on-linux.livejournal.com/55562.html
paulb
Posts: 1767
Joined: Mon Jan 20, 2014 9:02 pm
Contact:

Acorn, multiprocessing and operating systems

Post by paulb »

B3_B3_B3 wrote: Wed Nov 01, 2023 9:07 pm This blogger dreams of Acorn replacing RISCOS with BeOS and going SMP...
https://liam-on-linux.livejournal.com/55562.html
Fairly far from the original topic by now, I would say, but I remember from the first half of the 1990s that it was becoming clear that Acorn's performance advantage was being eroded. When the Archimedes came out, it was largely competing with 8086- and 80286-based PCs at its price point and was substantially faster. Even the more expensive models were competitive with the 80386-based PCs costing a few thousand pounds. Over time, 386-based PCs got faster and cheaper, making the Archimedes less compelling, although its reputation for high performance did seem to persist.

The ARM3 put the range ahead of the 386, but by then the 486 was out, offering a similar jump in performance from the 386 to that seen with the ARM3 relative to the ARM2. However, it was apparent that Intel could scale the performance of the 486 whereas the ARM3 would only see a more modest evolution in clock frequency. (It seems that the Motorola 68040 also couldn't scale significantly.) Meanwhile, ARM wasn't delivering a competitive progression in performance: when the Risc PC came out, the ARM610 in it wasn't radically faster than the ARM3-based machines, although the general architecture of the Risc PC will have helped the overall system performance.

In the early 1990s, concurrency was a fairly hot topic in academia, driven by assumptions that sequential performance would increasingly impose a limitation on the overall performance of computer systems, that hardware designs would need to employ multiple processors, and that software technologies would need to adapt to take advantage of multiple processors. With that backdrop, those of us studying computing and related topics thought that with ARM not really keeping up, but with the stated low power advantages of ARM processors, Acorn might have sensibly pitched a multiple processor system to work around the performance deficit of individual ARM processors. That would have allowed them to keep up, at least theoretically, whilst adapting to the new technological landscape and offering products that higher education and, ultimately, the commercial realm might have wanted to buy.

I remember reviews noting that the A540 (and R260) could support multiple cards on the bus to add memory, with the processor also on its own card for a potential upgrade, and thinking that a multiple processor system would surely be the next step. However, when the Risc PC toured higher education in 1994, having had the suggestive codename of Medusa, it was disappointing that Acorn had instead decided to focus the multiple processor support on shoehorning an "alien" x86 processor onto the bus to let people run PC software, directing considerable effort towards merely continuing the company's questionable strategy of accommodating so-called "industry standard" software, running at less than competitive speeds, while failing to invest in its own platforms.

At the time, those of us who were Acorn enthusiasts really thought that a machine with, say, four or eight relatively inexpensive processors could have been attractive to customers wanting a ready-made product to explore the development of concurrent systems. Academic customers would have embraced the challenge of writing or porting software systems, and things like Helios and Chorus probably would have become available. Having experience of things like Unix, we also felt that Acorn should have been replacing RISC OS with something more substantial.

I thought that Acorn should have tried to bring various user interface technologies to Unix and the X Window System (reminiscent of what Torch and IXI did), making proper widget toolkits which could also have been rolled out on RISC OS to improve the developer situation and to help people migrate upwards over time. They might even have become a multi-architecture operation, adopting other processor families and positioning themselves to take advantage of commercial opportunities on the systems of other vendors.

Instead, Acorn doubled down on RISC OS, dabbled with video-on-demand (with no sensible route to mass-market commercialisation) and then network computing, were bailed out somewhat by StrongARM (which only increased the levels of delusion in the company and community), and offered the Risc PC 2 with far too little rather too late. With regard to whether BeOS would have helped, as the 1990s progressed, various free BSD Unix implementations as well as Linux would have been increasingly viable. Indeed, network computers moved to BSD and Linux due to the deficiencies of RISC OS, and Acorn could have taken advantage of the emerging demand for Linux-based appliances had the company not been driven onto a beach in the Cayman Islands, ceding that territory to relative minnows like Simtec.
arg
Posts: 1289
Joined: Tue Feb 16, 2021 2:07 pm
Location: Cambridge
Contact:

Re: Acorn, multiprocessing and operating systems

Post by arg »

paulb wrote: Thu Nov 02, 2023 3:27 pm In the early 1990s, concurrency was a fairly hot topic in academia, driven by assumptions that sequential performance would increasingly impose a limitation on the overall performance of computer systems, that hardware designs would need to employ multiple processors, and that software technologies would need to adapt to take advantage of multiple processors. With that backdrop, those of us studying computing and related topics thought that with ARM not really keeping up, but with the stated low power advantages of ARM processors, Acorn might have sensibly pitched a multiple processor system to work around the performance deficit of individual ARM processors. That would have allowed them to keep up, at least theoretically, whilst adapting to the new technological landscape and offering products that higher education and, ultimately, the commercial realm might have wanted to buy.
I think the trouble was that by this time the tail was wagging the dog - ARM weren't doing stuff for Acorn's benefit, they were pursuing their own goals and only occasionally (eg. StrongARM as you note) did this throw out stuff that suited Acorn's requirements.

Ultimately it comes down to money/market size: Acorn couldn't afford the investment needed to stay ahead of the game as a hardware company - neither the capital nor the returns they were likely to make on it in their established markets or plausibly accessible adjacent ones.

They could have thrown in the towel on having their own hardware platforms - RiscOS/386 perhaps? But that gets back into the problem that had dogged them for even longer: there was no point them doing "the same as everybody else", because everybody else was doing it on slim margins with much less R&D spend so could sell cheaper. That was what was wrong with the Business Systems products in the 8-bit era: it was considered that business machines needed CP/M, but if you wanted a CP/M machine why would you buy an Acorn one?

So then you say they could have become a software-only outfit, but hard to see a scenario where that works out well.
BigEd
Posts: 6261
Joined: Sun Jan 24, 2010 10:24 am
Location: West Country
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by BigEd »

Quick performance calibration question - did StrongARM put Acorn's products back into being relatively fast machines compared to the competition? Or were they still behind x86 even with StrongARM?
paulb
Posts: 1767
Joined: Mon Jan 20, 2014 9:02 pm
Contact:

Re: Acorn, multiprocessing and operating systems

Post by paulb »

arg wrote: Thu Nov 02, 2023 4:57 pm I think the trouble was that by this time the tail was wagging the dog - ARM weren't doing stuff for Acorn's benefit, they were pursuing their own goals and only occasionally (eg. StrongARM as you note) did this throw out stuff that suited Acorn's requirements.
Yes, this meant that they either needed to adapt within the given constraints - use more processors per board - or they needed to find other processor families. The latter would have obligated them to adopt a more portable software strategy.
arg wrote: Thu Nov 02, 2023 4:57 pm Ultimately it comes down to money/market size: Acorn couldn't afford the investment needed to stay ahead of the game as a hardware company - neither the capital nor the returns they were likely to make on it in their established markets or plausibly accessible adjacent ones.
I think there could have been a niche doing multiprocessing. But certainly, the strategy could not have involved going head-to-head with high-volume personal computer manufacturers. For a lot of the 1990s, there were small companies doing interesting things. One example that I looked into fairly recently was DeskStation Technology: they did workstations, but the technologies involved were a hybrid of traditional workstation technologies and more mainstream elements like ISA, VESA and PCI. They may have benefited from some kind of working relationship with DEC, although that hardly guaranteed them a fortune as a consequence, given DEC's strategic incoherency particularly towards the end.
arg wrote: Thu Nov 02, 2023 4:57 pm They could have thrown in the towel on having their own hardware platforms - RiscOS/386 perhaps?
There wouldn't have been any point having a portable software strategy based on RISC OS.
arg wrote: Thu Nov 02, 2023 4:57 pm But that gets back into the problem that had dogged them for even longer: there was no point them doing "the same as everybody else", because everybody else was doing it on slim margins with much less R&D spend so could sell cheaper. That was what was wrong with the Business Systems products in the 8-bit era: it was considered that business machines needed CP/M, but if you wanted a CP/M machine why would you buy an Acorn one?
Well, Acorn could have just sold plain CP/M systems and plain DOS, Windows, OS/2, Unix-on-Intel systems, and so on. They would have needed to position themselves as a solutions company, which may have been a consideration for something like Xemplar, had that not been a vehicle for Apple to try and improve its relatively feeble position in the UK education market at the time. Having such systems in the portfolio would have prevented the need for compromise solutions like various PC cards and emulators, but it would also have needed more investment and more confidence in Acorn's own platforms to be able to keep selling products based on those and not just becoming a box shifter.
arg wrote: Thu Nov 02, 2023 4:57 pm So then you say they could have become a software-only outfit, but hard to see a scenario where that works out well.
I think there were plenty of opportunities for a company with strong software offerings. In the 1990s, there was a considerable appetite for better development environments and desktop environments despite the increasing Wintel dominance in the mainstream part of the industry. Sadly, Acorn didn't measure up very well in these areas of opportunity, although efforts like Galileo suggested that the company needed to do something about the situation.
paulb
Posts: 1767
Joined: Mon Jan 20, 2014 9:02 pm
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by paulb »

BigEd wrote: Thu Nov 02, 2023 5:30 pm Quick performance calibration question - did StrongARM put Acorn's products back into being relatively fast machines compared to the competition? Or were they still behind x86 even with StrongARM?
So, "Acorn's Latest Arrival Packs a Punch" indicates a Dhrystone score of 290000 for the 228MHz StrongARM (around 166 VAX MIPS), up considerably from the 52000 of the 40MHz ARM710 (around 30 VAX MIPS), and a bit like going from ARM2 to ARM3 in terms of the speed-up.

The benchmarks on here suggest something like a score of 420000 (around 239 VAX MIPS) for a similarly clocked StrongARM in a Risc PC, so maybe the earlier revisions of the processor were slower than they ought to have been. Comparing this to other architectures is awkward because those usually have SPECint benchmark results stated and not Dhrystone results.
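
(For calibration: assuming the usual convention of dividing the Dhrystones-per-second figure by the 1757 of the VAX-11/780, taken as the 1 MIPS reference machine, the arithmetic is simply as below - any small differences from the figures quoted above are presumably just rounding.)

Code: Select all

290000 / 1757 ≈ 165 VAX MIPS   (228MHz StrongARM)
 52000 / 1757 ≈  30 VAX MIPS   (40MHz ARM710)
420000 / 1757 ≈ 239 VAX MIPS   (similarly clocked StrongARM in a Risc PC)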
paulb
Posts: 1767
Joined: Mon Jan 20, 2014 9:02 pm
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by paulb »

paulb wrote: Thu Nov 02, 2023 6:00 pm Comparing this to other architectures is awkward because those usually have SPECint benchmark results stated and not Dhrystone results.
Another quick note on this. One catalogue of Dhrystone benchmark results puts the Risc PC with StrongARM amongst machines with Alpha 21064, MIPS R10000 and Pentium 2 (P6) processors, although the former two competitors are 64-bit machines. It is possible that the general integer arithmetic throughput of the StrongARM is comparable in the context of something like Dhrystone, but then one has to wonder how applicable that benchmark was at that point in time and how well it describes general performance.

The P6 microarchitecture has speculative execution and various pipeline efficiency enhancements. The 21064 and R10000 can execute multiple instructions at once, and P6 is also supposed to be superscalar, in contrast to the StrongARM. So, it is probable that other benchmarks and "real world" use would demonstrate more noticeable differences in performance between StrongARM-based machines and machines based on these other processors.
B3_B3_B3
Posts: 404
Joined: Sat Apr 08, 2017 10:42 pm
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by B3_B3_B3 »

It seems the Nascom 2's descendant, the Gemini 80-Bus CP/M system, had a system monitor called RP/M which formed a ROM/cassette subset of CP/M:
https://glasstty.com/the-gemini-80-bus-saga-part-1/ (Scroll down to the 'MBasic (Basic-80)' paragraph for a description.)
julie_m
Posts: 587
Joined: Wed Jul 24, 2019 9:53 pm
Location: Derby, UK
Contact:

Re: Why no discless cpm machines with cpm loaded from a rom abstracted as a rom disk?

Post by julie_m »

I'm sure it was as simple as: in the days of CP/M, ROM - especially EPROM - was expensive. Mask-programmed ROM would have been cheaper per unit, but impossible to update if a problem was discovered.