paulb wrote: ↑Tue Apr 23, 2024 11:13 pmThat makes me feel old: I was dabbling in ARM2 code before university, but our assembly language module at university involved the 68000 on a bunch of Atari STs, probably retained purely for that purpose. Contrary to the impression people like to communicate now, at the time (more like thirty years ago) I imagine that ARM was something that no-one was sure would stick around. It could easily have slotted into computing history alongside the Am29000 or i960 or maybe some of the microcontrollers that persist but no-one outside certain sectors would even be able to name.
If I do the actual arithmetic, I finished that degree 21 years ago but, yeah, even by then ARM was on the roster only because it's one of the few relatively pure RISCs that had made a commercial impact* and by then had just about found a second life in mobile phones.
* by which I overtly mean: in contrast to PowerPC, probably the more successful RISC of the era.
paulb wrote: ↑Tue Apr 23, 2024 11:13 pmBack on topic, I did manage to compile Clock Signal on Debian, needing an adjustment to the CMakeLists.txt to add the following:
set(THREADS_PREFER_PTHREAD_FLAG ON)
find_package(Threads REQUIRED)
target_link_libraries(clksignal PRIVATE Threads::Threads)
I'd consider packaging this for Debian if anyone showed any enthusiasm for my existing packaging efforts. Otherwise, I should explore the configuration issues such as changing the keyboard layout. It reminds me of the good old days with ElectrEm!
Oh, for the SDL target, presumably? Qt is also an option, essentially only for X11, because I found Qt's native keyboard API to be insufficient and decided to ignore it wherever possible. So it can build with pure Qt but probably doesn't work that well. Actually, the whole OpenGL side of things needs a substantial refresh — the Apple-specific Metal side of things is essentially version 2 of that subsystem and there are a whole bunch of improvements I could roll back. I keep telling myself I'll just get one more big improvement that I want to make to the composite side of things on Metal, then worry about the more arduous task of an OpenGL implementation, and I'm sure that one day I will.
Anyway, quick hacking notes:
This was my [re]learn C++ project when it first began, so code quality still varies greatly. That accepted, the main thrust of it is a bunch of freestanding machines which implement relevant `MachineTypes` interfaces, usually at a minimum `MachineTypes::TimedMachine` and `MachineTypes::ScanProducer`.
The former just means you need to keep calling it to run the machine for whatever quantity of time is appropriate for you.
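To make the calling pattern concrete, here's a minimal sketch of that idea. The names and signatures below are my own illustrations, not the emulator's actual declarations:

```cpp
#include <cstdint>

// Hypothetical sketch of the TimedMachine idea; names and signatures are
// illustrative, not copied from the real codebase.
struct TimedMachine {
	virtual ~TimedMachine() = default;
	// Advance the emulated machine by the given number of microseconds.
	virtual void run_for_microseconds(std::int64_t us) = 0;
};

// A toy machine that just accumulates emulated time, to show the calling
// pattern: the host picks the cadence and simply keeps calling in.
struct CountingMachine: public TimedMachine {
	std::int64_t total_us = 0;
	void run_for_microseconds(std::int64_t us) override { total_us += us; }
};
```

A host targeting, say, 60Hz would just call in once per frame with roughly 16,667 microseconds each time.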
The latter means that if you supply a host-appropriate subclass of `Outputs::Display::ScanTarget` then it'll serialise video output to there, which means it'll receive a bunch of rasters — a pair of (x, y) coordinates indicating start and end, plus a PCM sampling of the data to fill that space with, in one of currently eight pixel formats. As practical concerns, it'll also get notified of detected horizontal and vertical syncs, and get the user's selection for display type and, if that's composite, which colour space and colour subcarrier apply. It is also involved in allocation of the storage for that PCM data, so that it can be in shared GPU/CPU space if available.
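For orientation, something in roughly this shape is what a scan target receives. This is a deliberately minimal toy; the real `Outputs::Display::ScanTarget` is much richer (eight pixel formats, sync notifications, shared CPU/GPU allocation) and none of the names below are taken from it:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Illustrative only: a minimal shape for the raster idea described above.
struct Scan {
	// Start and end coordinates of the raster on the emulated display.
	struct Endpoint { std::uint16_t x, y; } start, end;
	std::vector<std::uint8_t> pixels;	// PCM-sampled data for the span.
};

// A toy target that just queues scans as they arrive.
struct ToyScanTarget {
	std::vector<Scan> queued;
	void submit(Scan scan) { queued.push_back(std::move(scan)); }
};
```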
There are a couple of ScanTargets implemented, one for OpenGL and one for Apple's Metal. Both just calmly enqueue data as they receive it and need separate calls actually to paint it outward; both permit those things to happen simultaneously. So hosts can run the machine on one thread and video output on another, and normally they just do video output whenever makes sense. The metaphor is supposed to be that the host machine has a camera pointed at the screen of the client, so you can snapshot it whenever you like. There is some feedback that allows mild warping of time to try to pull host and client retraces into sync where that's sensible, but that's not the default. The idea is that if you've spent your money on a 144Hz monitor or whatever because you care about latency then the emulator will give you 144 independent frames per second with minimised latency. Whatever works.
The Archimedes also implements `MachineTypes::AudioProducer`, which allows the host to provide a callback and preferred buffer size, and negotiate a good output sample rate for audio. Then `MachineTypes::MouseMachine` is how the machine canvasses for mouse input, and `MachineTypes::MappedKeyboardMachine` is probably the one that's interesting if you want to look at keyboard mappings.
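On the negotiation point only, something like the following captures the spirit: the host states a preference and gets back the closest thing the producer can supply. The function name and rate list here are entirely made up, not the emulator's API:

```cpp
#include <algorithm>
#include <array>
#include <cstdlib>

// A toy of the sample-rate negotiation idea only; names and rates invented.
int negotiate_rate(int preferred) {
	constexpr std::array<int, 3> supported{22050, 44100, 48000};
	// Pick the supported rate closest to the host's stated preference.
	return *std::min_element(supported.begin(), supported.end(),
		[&](int a, int b) { return std::abs(a - preferred) < std::abs(b - preferred); });
}
```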
There are supposed to be two things going on there:
- the machine provides the stuff in `MachineTypes::KeyActions` by which a host that knows the specific emulated machine is supposed to be able just directly to set specific keys up and down; and
- it also provides a `MachineTypes::MappedKeyboardMachine::KeyboardMapper` which will map from a normative PC/Mac 102-key layout to the particular keys recognised by `MachineTypes::KeyActions`. So a host that doesn't want to invest in per-machine mapping doesn't have to; it can just work with 102-key presses, and map them through that.
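The two layers above can be sketched roughly like this. Every name and key code below is a stand-in I've invented for illustration; none comes from the real source:

```cpp
#include <cstdint>
#include <optional>

// A generic 102-key-style identifier on the host side.
enum class HostKey: std::uint8_t { A, S, Enter };

// The mapping layer: host keys in, machine-specific codes out.
struct KeyboardMapper {
	virtual ~KeyboardMapper() = default;
	// Return the machine-side code for a host key, if the machine has one.
	virtual std::optional<std::uint8_t> mapped_key_for(HostKey key) const = 0;
};

struct ToyMapper: public KeyboardMapper {
	std::optional<std::uint8_t> mapped_key_for(HostKey key) const override {
		switch(key) {
			case HostKey::A:		return 0x3c;	// Made-up machine codes.
			case HostKey::Enter:	return 0x47;
			default:				return std::nullopt;
		}
	}
};
```

A host with no per-machine knowledge just feeds its 102-key presses through the mapper and passes whatever comes out to the `KeyActions` side.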
In practice I've not yet pulled out any standard defines for the Archimedes so there's kind of an assumption that you're just going to use the default mapping. But check out the Archimedes' KeyboardMapper.h — the Arc-side key mappings are just row and column combined into a byte, so at least provisionally you can work from there if you want to skip the mapper.
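If it helps, a row-and-column-in-one-byte scheme typically looks like the below. The exact nibble layout is a guess for illustration; check the real KeyboardMapper.h for the actual packing:

```cpp
#include <cstdint>

// Assumed packing: high nibble = row, low nibble = column.
constexpr std::uint8_t key_code(std::uint8_t row, std::uint8_t column) {
	return std::uint8_t((row << 4) | (column & 0x0f));
}
constexpr std::uint8_t row_of(std::uint8_t code)	{ return code >> 4; }
constexpr std::uint8_t column_of(std::uint8_t code)	{ return code & 0x0f; }
```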
And, to mention it, the other big tool for hosts that want to support the emulator's entire array of machines is the static analyser, which takes some media and spits out a list of potential targets for it (almost always only one), then can also provide an instance of a machine for any given target. Probably not worth getting too invested in at first, just be aware that's how all three of the current OS bindings for the emulator — SDL, Qt and macOS — are concluding that they need to create an Archimedes if asked to open a relevant ADF, HFE or similar.
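The media-in, targets-out flow can be caricatured like so. All names below are illustrative, and a real analyser inspects the media contents rather than just the extension:

```cpp
#include <string>
#include <vector>

// Hedged sketch of the static-analyser flow described above.
struct Target { std::string machine_name; };

// Media goes in; a list of candidate targets comes out. The host can then
// ask for a machine instance built to a chosen target.
std::vector<Target> analyse(const std::string &file_name) {
	// Toy heuristic only: key off the file extension.
	if(file_name.size() >= 4 && file_name.compare(file_name.size() - 4, 4, ".adf") == 0)
		return { Target{"Archimedes"} };
	return {};
}
```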
As per above, the ARM is currently implemented only as a high-level emulation; it doesn't yet even attempt bus accuracy, and that flows into the current Archimedes chipset decisions. Video, timers and audio run at the proper precision, as if they were bus accurate, but they get all their DMA instantly and without cost. There is work to do here.
Oh, and the emulated monitor attached to the emulated Arc isn't yet multisync so expect synchronisation to be lost if you pick a VGA mode. I'm going to fix that. Click to witness: