IanJeffray wrote: ↑Sat Dec 30, 2023 2:54 pm
Wouldn't it be cute to have the PiBridge able to execute (arbitrary/curated) linux…

james wrote: ↑Sat Dec 30, 2023 3:00 pm
…to put it to bed.

cr12925 wrote: ↑Sat Dec 30, 2023 3:11 pm
Watch this space…

Need to be slightly careful here; some functionality (eg chroot, OS shutdown) requires privileges. Some of these might be usable with capabilities, but NFS doesn't support them, so people with diskless Pis (eg me) can't use "setcap". So we might need a special-purpose setuid "helper" app which has a highly constrained privileged path and drops privileges as soon as possible.

Code: Select all
*HALTPI
PiBridge 2.1-dev pushed to github
Re: PiBridge 2.1-dev pushed to github
Rgds
Stephen
Re: PiBridge 2.1-dev pushed to github
sweh wrote: ↑Sat Dec 30, 2023 6:02 pm
Need to be slightly careful here; some functionality (eg chroot, OS shutdown) requires privileges. Some of these might be usable with capabilities, but NFS doesn't support them so people with diskless PIs (eg me) can't use "setcap". So we might need a special purpose setuid "helper" app which has a highly constrained privileged path and drops privs ASAP.

Have you developed some form of telepathy?
I also think this is better done as a specific bridge protocol as opposed to an FS builtin command…
C
2 x Master 128, BBC B+IntegraB, Viglen floppy drives, A3000 ZIDEFS+Econet, RISC PC StrongArm Mk3+Econet ModulePidule, 3 x Econets, 5 x Pi Econet bridges, organist, former purveyor of BBS software...
- IanJeffray
Re: PiBridge 2.1-dev pushed to github
I was thinking more generic.

Code: Select all
*PICMD halt
Where 'halt' is just the arbitrary Linux command being executed.
You may choose to have a curated safe/allowed list, via the config file, rather than any truly arbitrary command.
I can see how aliases/named commands could be cute too though, so
Code: Select all
Command Alias FINISH halt
Then I could envisage, say, execution of a shell script which would buzz through all ,ff9 (sprite) files in the folder and run sprite2png on them if there was no %basename%.png file, etc. Go mad...
Re: PiBridge 2.1-dev pushed to github
Using the 2.1-dev version, I experienced an interesting issue today. I am not sure if this happens with 2.0, but thought I would post it anyway.
When trying to create a new user when the directory already exists, you get an error stating "Unable to create home directory". However, if you try to login with that user, you get "User not Known", but if you try to create the user again, it gives the error "User exists".
You then have to *REMUSER, delete the existing directory, and then create the user. I haven't experienced that before; is it expected?
BBC Master, BBC Model B, Electron, A5000, A4000, RISC PC, PIBridge, Piconet, and too many Raspberry PI's and now an A4
Re: PiBridge 2.1-dev pushed to github
Highly likely that exists in v2.0 as well - I’ll have a look!
Thanks
C
Re: PiBridge 2.1-dev pushed to github
Yes, it's also in previous versions.
-Mark
2 x BBC, 1 Viglen BBC, M128, M512, M128+copro, 1 Master ET, BBC AIV Domesday System, E01S, E01, E20 Filestore, 3 x A4000, RISC PC 600,700, StrongArm. Probably more I've missed and all sorts of bits and pieces.
Re: PiBridge 2.1-dev pushed to github
fs.c:9080 sets the user priv but, if the URD can't be created, the live userbase will have an active user that isn't saved to Passwords. I would guess that if you create the same scenario as above but, instead of trying to create the user again, you restart the bridge, you won't get the "User exists" error (because it hasn't saved).
The fix, I think, is to set that to 0 and only set it to FS_PRIV_USER just before the save in the subsequent else {} braces.
I’ll experiment one night this week.
C
Re: PiBridge 2.1-dev pushed to github
I note the following statement in the ReadMe:

ReadMe wrote:
- Where the L3 server will not permit *DIR ^ within your home directory or $, this server *will*. From home it has the expected behaviour. From $ (or if $ is home) it will effectively be ignored.

Has this behaviour changed at all between V2.0 and V2.1?

The reason I ask is that I can't get Ozmoo games to run on V2.1. It fails when building a path to the user's home directory:
Code: Select all
[+ 1919.277954] tid 6634 FS : from 0.191 Get Object Info INSV relative to 05, command 2
[+ 1919.291016] tid 6634 FS : from 0.191 RUN INSV
[+ 1919.293945] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/INSV, mode 1, userid 2
[+ 1919.293945] tid 6634 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/INSV
[+ 1919.295044] tid 6634 FS : Interlock close internal handle 2, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/INSV
[+ 1919.295044] tid 6634 FS : Interlock closing internal handle 2 in operating system
[+ 1919.324951] tid 6634 FS : from 0.191 Get Object Info FASTSCR relative to 05, command 2
[+ 1919.338013] tid 6634 FS : from 0.191 RUN FASTSCR
[+ 1919.340942] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/FASTSCR, mode 1, userid 2
[+ 1919.342041] tid 6634 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/FASTSCR
[+ 1919.342041] tid 6634 FS : Interlock close internal handle 2, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/FASTSCR
[+ 1919.342041] tid 6634 FS : Interlock closing internal handle 2 in operating system
[+ 1919.439941] tid 6634 FS : from 0.191 Get Object Info relative to 05, command 6
[+ 1919.487061] tid 6634 FS : from 0.191 DIR ^
[+ 1919.489014] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01/Games/Ozmoo, mode 1, userid 2
[+ 1919.489014] tid 6634 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo
[+ 1919.489014] tid 6634 FS : from 0.191 User handle 2 allocated for internal handle 2
[+ 1919.489990] tid 6634 FS : Interlock close internal handle 3, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1
[+ 1919.489990] tid 6634 FS : Interlock closing internal handle 3 in operating system
[+ 1919.503052] tid 6634 FS : from 0.191 Get Object Info relative to 02, command 6
[+ 1919.538940] tid 6634 FS : from 0.191 DIR ^
[+ 1919.541016] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01/Games, mode 1, userid 2
[+ 1919.541992] tid 6634 FS : Interlock opened internal handle 3, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games
[+ 1919.541992] tid 6634 FS : from 0.191 User handle 4 allocated for internal handle 3
[+ 1919.541992] tid 6634 FS : Interlock close internal handle 2, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo
[+ 1919.542969] tid 6634 FS : Interlock closing internal handle 2 in operating system
[+ 1919.556030] tid 6634 FS : from 0.191 Get Object Info relative to 04, command 6
[+ 1919.590942] tid 6634 FS : from 0.191 DIR ^
[+ 1919.592041] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01, mode 1, userid 2
[+ 1919.593018] tid 6634 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.593018] tid 6634 FS : from 0.191 User handle 2 allocated for internal handle 2
[+ 1919.593018] tid 6634 FS : Interlock close internal handle 3, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games
[+ 1919.593018] tid 6634 FS : Interlock closing internal handle 3 in operating system
[+ 1919.605957] tid 6634 FS : from 0.191 Get Object Info relative to 02, command 6
[+ 1919.640991] tid 6634 FS : from 0.191 DIR ^
[+ 1919.642944] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01, mode 1, userid 2
[+ 1919.642944] tid 6634 FS : Interlock opened internal dup handle 2, mode 1. Readers = 2, Writers = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.644043] tid 6634 FS : from 0.191 User handle 4 allocated for internal handle 2
[+ 1919.644043] tid 6634 FS : Interlock close internal handle 2, mode 1. Readers now = 1, Writers now = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.656982] tid 6634 FS : from 0.191 Get Object Info relative to 04, command 6
[+ 1919.692017] tid 6634 FS : from 0.191 DIR ^
[+ 1919.692993] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01, mode 1, userid 2
[+ 1919.692993] tid 6634 FS : Interlock opened internal dup handle 2, mode 1. Readers = 2, Writers = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.693970] tid 6634 FS : from 0.191 User handle 2 allocated for internal handle 2
[+ 1919.693970] tid 6634 FS : Interlock close internal handle 2, mode 1. Readers now = 1, Writers now = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.708008] tid 6634 FS : from 0.191 Get Object Info relative to 02, command 6
[+ 1919.741943] tid 6634 FS : from 0.191 DIR ^
[+ 1919.744019] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01, mode 1, userid 2
[+ 1919.744019] tid 6634 FS : Interlock opened internal dup handle 2, mode 1. Readers = 2, Writers = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.744995] tid 6634 FS : from 0.191 User handle 4 allocated for internal handle 2
[+ 1919.744995] tid 6634 FS : Interlock close internal handle 2, mode 1. Readers now = 1, Writers now = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.758057] tid 6634 FS : from 0.191 Get Object Info relative to 04, command 6
[+ 1919.793945] tid 6634 FS : from 0.191 DIR ^
[+ 1919.795044] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01, mode 1, userid 2
[+ 1919.795044] tid 6634 FS : Interlock opened internal dup handle 2, mode 1. Readers = 2, Writers = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.796021] tid 6634 FS : from 0.191 User handle 2 allocated for internal handle 2
[+ 1919.796997] tid 6634 FS : Interlock close internal handle 2, mode 1. Readers now = 1, Writers now = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.810059] tid 6634 FS : from 0.191 Get Object Info relative to 02, command 6
[+ 1919.845947] tid 6634 FS : from 0.191 DIR ^
[+ 1919.848022] tid 6634 FS : Interlock on server 0 attempting to open path /home/pi/econetfs/0Skidog-01, mode 1, userid 2
[+ 1919.848999] tid 6634 FS : Interlock opened internal dup handle 2, mode 1. Readers = 2, Writers = 0, path /home/pi/econetfs/0Skidog-01
[+ 1919.848999] tid 6634 FS : from 0.191 User handle 4 allocated for internal handle 2
[+ 1919.849976] tid 6634 FS : Interlock close internal handle 2, mode 1. Readers now = 1, Writers now = 0, path /home/pi/econetfs/0Skidog-01
This continues until the Ozmoo loader errors out due to the string that's being built becoming too large.
It works on V2.0. This is how it looks there:
Code: Select all
[+ 100.610001] tid 8905 FS : from 0.191 Get Object Info INSV relative to 05, command 2
[+ 100.623001] tid 8905 FS : from 0.191 RUN INSV
[+ 100.625000] tid 8905 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/INSV
[+ 100.625999] tid 8905 FS : Interlock close internal handle 2, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/INSV
[+ 100.625999] tid 8905 FS : Interlock closing internal handle 2 in operating system
[+ 100.646004] tid 8905 FS : from 0.191 Get Object Info FASTSCR relative to 05, command 2
[+ 100.657997] tid 8905 FS : from 0.191 RUN FASTSCR
[+ 100.661003] tid 8905 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/FASTSCR
[+ 100.661003] tid 8905 FS : Interlock close internal handle 2, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/FASTSCR
[+ 100.662003] tid 8905 FS : Interlock closing internal handle 2 in operating system
[+ 100.758003] tid 8905 FS : from 0.191 Get Object Info relative to 05, command 6
[+ 100.825996] tid 8905 FS : from 0.191 DIR ^
[+ 100.828003] tid 8905 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo
[+ 100.828003] tid 8905 FS : from 0.191 User handle 2 allocated for internal handle 2
[+ 100.828003] tid 8905 FS : Interlock close internal handle 3, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1
[+ 100.828003] tid 8905 FS : Interlock closing internal handle 3 in operating system
[+ 100.890999] tid 8905 FS : from 0.191 Get Object Info relative to 02, command 6
[+ 100.948997] tid 8905 FS : from 0.191 DIR ^
[+ 100.949997] tid 8905 FS : Interlock opened internal handle 3, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games
[+ 100.950996] tid 8905 FS : from 0.191 User handle 4 allocated for internal handle 3
[+ 100.950996] tid 8905 FS : Interlock close internal handle 2, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo
[+ 100.950996] tid 8905 FS : Interlock closing internal handle 2 in operating system
[+ 100.962997] tid 8905 FS : from 0.191 Get Object Info relative to 04, command 6
[+ 101.019997] tid 8905 FS : from 0.191 DIR ^
[+ 101.021004] tid 8905 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01
[+ 101.021004] tid 8905 FS : from 0.191 User handle 2 allocated for internal handle 2
[+ 101.021004] tid 8905 FS : Interlock close internal handle 3, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games
[+ 101.021004] tid 8905 FS : Interlock closing internal handle 3 in operating system
[+ 101.033997] tid 8905 FS : from 0.191 Get Object Info relative to 02, command 6
[+ 101.134003] tid 8905 FS : from 0.191 DIR $.Games.Ozmoo.Zork1
[+ 101.136002] tid 8905 FS : Interlock opened internal handle 3, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1
[+ 101.137001] tid 8905 FS : from 0.191 User handle 4 allocated for internal handle 3
[+ 101.137001] tid 8905 FS : Interlock close internal handle 2, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01
[+ 101.137001] tid 8905 FS : Interlock closing internal handle 2 in operating system
[+ 101.157997] tid 8905 FS : from 0.191 DIR
[+ 101.158997] tid 8905 FS : Interlock opened internal dup handle 0, mode 1. Readers = 2, Writers = 0, path /home/pi/econetfs/0Skidog-01/ZORK
[+ 101.158997] tid 8905 FS : from 0.191 User handle 2 allocated for internal handle 0
[+ 101.160004] tid 8905 FS : Interlock close internal handle 3, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1
[+ 101.160004] tid 8905 FS : Interlock closing internal handle 3 in operating system
[+ 101.181000] tid 8905 FS : from 0.191 DIR SAVES
[+ 101.182999] tid 8905 FS : Interlock opened internal handle 2, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/ZORK/SAVES
[+ 101.182999] tid 8905 FS : from 0.191 User handle 4 allocated for internal handle 2
[+ 101.183998] tid 8905 FS : Interlock close internal handle 0, mode 1. Readers now = 1, Writers now = 0, path /home/pi/econetfs/0Skidog-01/ZORK
[+ 101.211998] tid 8905 FS : from 0.191 Get Object Info $.Games.Ozmoo.Zork1.OZMOOSH relative to 04, command 2
[+ 101.226997] tid 8905 FS : from 0.191 RUN $.Games.Ozmoo.Zork1.OZMOOSH
[+ 101.230003] tid 8905 FS : Interlock opened internal handle 3, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/OZMOOSH
[+ 101.232002] tid 8905 FS : Interlock close internal handle 3, mode 1. Readers now = 0, Writers now = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/OZMOOSH
[+ 101.232002] tid 8905 FS : Interlock closing internal handle 3 in operating system
[+ 102.211998] tid 8905 FS : from 0.191 Open $.Games.Ozmoo.Zork1.DATA readonly yes, must exist? yes
[+ 102.214996] tid 8905 FS : Interlock opened internal handle 3, mode 1. Readers = 1, Writers = 0, path /home/pi/econetfs/0Skidog-01/Games/Ozmoo/Zork1/DATA
[+ 102.214996] tid 8905 FS : from 0.191 User handle 2 allocated for internal handle 3
[+ 102.216003] tid 8905 FS : from 0.191 Opened handle 2 (:Skidog-01.$.Games.Ozmoo.Zork1.DATA)
[+ 102.222000] tid 8905 FS : from 0.191 Get Object Info $.Games.Ozmoo.Zork1.DATA relative to 04, command 4
[+ 102.233002] tid 8905 FS : from 0.191 fs_getbytes() 0200 from offset 0000 (being used) by user 0002 on handle 02, ctrl seq is OK (stored: 02, received: 81)
[+ 102.233002] tid 8905 FS : from 0.191 fs_getbytes() offset 000000, file length 019B30, beyond EOF No
[+ 102.234001] tid 8905 FS : from 0.191 fs_getbytes() bulk transfer: bytes required 000200, bytes already sent 000000, buffer size 1000, ftell() = 000200, bytes to read 000200, bytes actually read 000200
[+ 102.234001] tid 8905 FS : from 0.191 fs_getbytes() Acknowledging 0200 tx bytes, cursor now 000200
[+ 102.264000] tid 8905 FS : from 0.191 Get random access info on handle 02, function 00 - cursor 000200
[+ 102.272003] tid 8905 FS : from 0.191 fs_getbytes() 0200 from offset 0200 (being used) by user 0002 on handle 02, ctrl seq is OK (stored: 01, received: 80)
[+ 102.272003] tid 8905 FS : from 0.191 fs_getbytes() offset 000200, file length 019B30, beyond EOF No
[+ 102.273003] tid 8905 FS : from 0.191 fs_getbytes() bulk transfer: bytes required 000200, bytes already sent 000000, buffer size 1000, ftell() = 000400, bytes to read 000200, bytes actually read 000200
[+ 102.273003] tid 8905 FS : from 0.191 fs_getbytes() Acknowledging 0200 tx bytes, cursor now 000400
[+ 102.304001] tid 8905 FS : from 0.191 Get random access info on handle 02, function 00 - cursor 000400
[+ 102.313004] tid 8905 FS : from 0.191 fs_getbytes() 0200 from offset 0400 (being used) by user 0002 on handle 02, ctrl seq is OK (stored: 00, received: 81)
[+ 102.313004] tid 8905 FS : from 0.191 fs_getbytes() offset 000400, file length 019B30, beyond EOF No
The game then continues to load normally. The difference is that V2.0 eventually issues *DIR $.Games.Ozmoo.Zork1 rather than looping on *DIR ^.
Any ideas?
Re: PiBridge 2.1-dev pushed to github
Not as far as I know. What has changed is that the user file handle allocation has changed so that it will only permit 8 handles by default (each having a different bit set) because that's what L3FS does, and NFS3 doesn't like anything else. (M128 onwards do.)
As far as I can see, in the v2.1 system, the code just keeps doing '*DIR ^' over and over and doesn't seem to notice that it's already in the root directory.
What does L3FS do if you try to do '*DIR ^' in (i) your home directory and (ii) $ ?
I may be able to make the fs do the same thing if that will help.
C.
Re: PiBridge 2.1-dev pushed to github
On a separate note, v2.1-dev is starting to feel quite stable now. I would be interested in feedback from those who wish to try it!
C.
Re: PiBridge 2.1-dev pushed to github
cr12925 wrote: ↑Sat Mar 02, 2024 8:29 pm
Not as far as I know. What has changed is that the user file handle allocation has changed so that it will only permit 8 handles by default (each having a different bit set) because that's what L3FS does, and NFS3 doesn't like anything else. (M128 onwards do.)
As far as I can see, in the v2.1 system, the code just keeps doing '*DIR ^' over and over and doesn't seem to notice that it's already in the root directory.

Yes, that's exactly what seems to be happening. But the burning question is... why on v2.1 but not on v2.0?
This is what it does on L3FS-v1.25:
So, it allows me to navigate above my home directory, but creates a 'Not found' error if I try to go above $.
Edit: to complicate matters further, L3FS-v0.92 behaves differently. It doesn't allow me to navigate above my home directory with *DIR ^. Instead it generates the 'Not found' error. However, it does allow me to switch to the root with *DIR $.
There was a bit of discussion about this when Ozmoo was getting developed:
viewtopic.php?p=286124#p286124
Re: PiBridge 2.1-dev pushed to github
Latest 2.1-dev push fixes this. The MDFS spec v1.00 appears not to specify accurately what goes in the packet for FSOp 18 arg 6. Byte 5 is in fact a directory length. And it would appear the return data only copes with 10 character names, and the v2.1 FS copes with longer names - where they are supported by the protocol. Put a 0x0A in byte 5 and it's happy again...
(And, by the by, MDFS *PATHNAME works again...)
C.
Re: PiBridge 2.1-dev pushed to github
That's fixed it, thank you. I was slowly getting there too. I was looking at OSGBPB function code 6 (read currently selected directory name into data block) for differences between v2.0 & v2.1, but you beat me to it!
Re: PiBridge 2.1-dev pushed to github
Hi Chris
I thought I would try and play with the new functionality, specifically around *FAST. I followed the instructions, and ran:

Code: Select all
make setuid
./econet-hpbridge --enable-syst-fast

The *BRIDGEUSER command works, but if I try *FAST I get a Bad Command. I have tried as the user I have given the permissions to, and as SYST. What am I missing? No doubt I have missed something, but I cannot figure it out.
Re: PiBridge 2.1-dev pushed to github
Ha. I had a similar (email) query yesterday:

cr12925 wrote:
It's an MDFS library utility...

I've not tried to find it yet.
Re: PiBridge 2.1-dev pushed to github
Ahhhhh... That makes sense! I have had a quick look (in the usual places) but can't find it.
Thanks Ken! Thought I was being dumb
Re: PiBridge 2.1-dev pushed to github
I think it is available as part of NetLibB.zip on mdfs.net
Re: PiBridge 2.1-dev pushed to github
If you need *FAST, I can supply an image and I believe I have the source.
However, I'm intrigued as to what you want to use it for - normally it's for access to SJ fileservers, and if so you've presumably got a copy of it on said fileserver.
Re: PiBridge 2.1-dev pushed to github
v2.1-dev has a responder to *FAST which allows some control over the bridge machine (if installed setuid root - which it only grabs just before the key moment - it can do a full shutdown of the Pi) and the fileserver on the station you connect to. Its aim, ultimately, is to allow control over the bridge for people who don't use Linux except to install the bridge - ie a headless box in the corner.
The priv required to use *FAST *might* be extended in the future to allow editing of the bridge config file.
I might expand that in future into being able to forward a connection onto a distant TCP port. I disassembled *FAST to see how it worked, but the source would be very handy for figuring out (for example) how to quit out of it!
Best
C
Re: PiBridge 2.1-dev pushed to github
Enclosed some source - four version-numbered iterations of the BBC ROM, and a subdirectory containing the Archimedes code.
All of it is in the form of BASIC files containing assembly source.
While the *FAST protocol works well for its original purpose, it has the disadvantage of using RPC at the Econet layer instead of normal packets. That would potentially have been useful on the BBC to allow background operation of a byte stream protocol (to intercept the MOS's serial port support, perhaps), but was a nuisance in trying to implement it on later systems that didn't support RPC, and wasn't strictly necessary for the original use with a dedicated terminal program.
I later implemented a similar protocol ("BBCterm") that just used ordinary packets, allowing BBCs to be shell terminals to the R140. That could probably be ported to the Pi, but I don't know what your preferred API is for local Econet applications - I don't think you've implemented the Ioctl() interface to /dev/ecoXX provided on the R140 (and which I also supported with the PC Econet card under FreeBSD)?
Attachment: fast_source.zip
Re: PiBridge 2.1-dev pushed to github
Thank you! Much appreciated @arg
Will have a look.
Best
C
Re: PiBridge 2.1-dev pushed to github
Thought I'd make a quick post here to share details of some recent awesome updates that @cr12925 has made for me.
I have been trying to get as many games as possible to run from the Econet PiFS without having to patch the living daylights out of each game. In particular, I've been looking at a number of disc-based games (both DFS & ADFS) that access the disc during game play, to see if these will also work from Econet. Along the way, there have been a few challenges:
Root Directory
Many games expect to run directly off the root directory of the disc. This is fine when each game has its own disc, but it can become a problem when sharing the same PiFS disc. At a very minimum, the root directory can become very large and unwieldy, but probably more of an issue is that different games may use the same filenames, causing a conflict when both need to be saved to the root.
The Level 9 graphical adventure games are a great example of this, where each different game uses exactly the same filenames. We have overcome this problem by creating a new, per-user option to remap the root of the disc as the user's home directory. So, for example, with the Level 9 adventure Lancelot, you can create a directory called $.Games.Level9.Lancelot on your PiFS and then copy all the game files into that directory. Then create a new user called Lancelot (*NEWUSER Lancelot), and use *SETHOME to point it to $.Games.Level9.Lancelot. The final (new) bit is to set the user's home directory as the root. This is done with the new '*PRIV Lancelot C' command. Then, when you log in as Lancelot, you will be taken to the home directory, which will now also appear to be the root of the disc. The true root of the PiFS disc will not be visible to Lancelot.
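Putting the steps above together, the setup might look like the following sequence of fileserver commands. Note the *SETHOME argument order shown here is an assumption based on the description, not confirmed syntax:

Code: Select all
*NEWUSER Lancelot
*SETHOME Lancelot $.Games.Level9.Lancelot
*PRIV Lancelot C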
Disc Number vs Disc Name
When using a full path name to access a DFS or ADFS file, you would typically enter something like this (illustrative filename):

Code: Select all
*LOAD :0.$.TITLE

Here, the ':0' defines the drive number. However, this doesn't work with PiFS (or, I suspect, L3FS). Instead, the Econet fileserver expects the DISC NAME to be used instead of the DISC NUMBER. So, for example, PiFS would expect the format to be something like this:

Code: Select all
*LOAD :Econet.$.TITLE

PiFS has now been updated to translate the DISC NUMBER to DISC NAME so that when trying to access a file using the DFS / ADFS pathnames it will still work on the PiFS. So, if you set up your virtual drive as 0Econet, any time a game tries to access disc :0, PiFS will change this on the fly to :Econet.
ANFS (Master) missing colon
For some odd reason, when trying to access drives with a single character name (eg when DISC NAME = '0'), ANFS will strip off the leading ':', so instead of sending the following command to PiFS:

Code: Select all
*LOAD :0.P.Title

which is trying to load the file 'Title' from directory 'P' of Disc '0', ANFS instead sends:

Code: Select all
*LOAD 0.P.Title

and PiFS tries to load the file 'Title' from directory '0.P' of the currently selected Disc. That is totally wrong!
So, another configuration option has been added to PiFS, where PiFS will try to work out if the ':' has been stripped by ANFS, and if so, add it back in again. This is again a per-user option, and is set with the '*PRIV <user> A' command. It would normally be used in conjunction with the '*PRIV <user> C' command, and must be used with care: it is possible that the colon has not been stripped by ANFS, and that '*LOAD 0.P.Title' is the correct command.
Note that there is a little wrinkle in the way the command is currently working, which means PiFS will only add the colon back in again if the start of the path is specifically '<SINGLE CHARCTER DRIVE NAME>.$'. Checking for '$' is too restrictive, and will hopefully be getting removed in the next update.
Edit: A fix for this has now been pushed.
PAGE & Page Zero
The above PiFS changes make it much easier to get games running from Econet, but there are still a couple of things to be mindful of. Firstly, on the beeb, NFS sets PAGE to &1200, which is slightly higher than the safe minimum PAGE for DFS, which is &1100. Therefore games that use memory from PAGE &1100 may not work on NFS; particularly if NFS disc access is required during game play. This is less of an issue on the Master, where PAGE is &E00 for both DFS and NFS.
Games that run from ADFS on the beeb should be less of a problem, because PAGE for ADFS is &1D00, which is higher than the PAGE for NFS.
One further issue is that Page Zero addresses &90..&9F are set aside for Econet workspace. Games that are designed to run from DFS or ADFS may choose to use these addresses for their own purpose, corrupting the Econet workspace. If games are using any of the Page Zero addresses, then the game may need to be patched to use other addresses. I have had to do this with the Level 9 graphical adventure games, where the game loader was using address &97. This address was being overwritten by NFS, causing the loader to fail. I patched the loader to use address &66 instead.
Working / Not working Games
This is still very much a WIP, but here's the current status...
Working:
Level 9 adventures (requires patch): Lancelot, Knights Orc, Ingrids Back, TimeAndMagik, Gnome Ranger & Scape Ghost
Ozmoo based adventure games (like Zork, HHGTTG, Hollywood Hijinx etc)
Elite for Econet
White Light (for the Master)
Repton 3
Ravenskull
Battle Zone
Chuckie Egg
Manic Miner
Jet Set Willie
Ladybug
Phoenix
Citadel
Wordle
Not working:
Exile
Cholo
White Light (for the beeb)
I have been trying to get as many games as possible to run from the econet PiFS without having to patch the living daylights out of each game. In particular, I've been looking at a number of disc based games (both DFS & ADFS) that access the disc during game play to see if these will also work from econet. Along the way, there have been a few challenges:
Root Directory
Many games expect to run directly off the root directory of the disc. This is fine when each game has its own disc, but can become a problem when sharing the same PiFS disc. As a very minimum, the root directory can become very large and unwieldy, but probably more of an issue is that different games may use the same filename causing a conflict when both need to be saved to the root.
The Level 9 graphical adventure games are a great example of this, where each different game uses exactly the same filenames. We have overcome this problem by creating a new, per user, option to remap the root of the disk as being the User Home Directory. So, for example, with the Level 9 adventure game Lancelot, you can create a directory called $.Games.Level9.Lancelot on your PiFS and then copy all the games into that directory. Then create a new user called Lancelot (*NEWUSER Lancelot), and use *SETHOME to point to the $.Games.Level9.Lancelot. The final (new) bit is to set the User Home Directory as the root. This is done with the new '*PRIV Lancelot C' command. Then, when you log in as Lancelot, you will be taken to the home directory, which will now also appear to be the root of the disc. The true root of the PiFS disc will not be visible to Lancelot.
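So the full setup for that example looks something like this (I'm writing the *SETHOME line from memory, so check the exact argument order against the PiFS docs):

Code: Select all

*NEWUSER Lancelot
*SETHOME Lancelot $.Games.Level9.Lancelot
*PRIV Lancelot C

After that, logging in with *I AM Lancelot lands you in $.Games.Level9.Lancelot, presented as $.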
Disc Number vs Disc Name
When using a full path name to access a DFS or ADFS file, you would typically enter something as follows:
Code: Select all

*LOAD :0.P.Title

Here, the ':0' defines the drive number. However, this doesn't work with PiFS (or, I suspect, L3FS). Instead, the Econet fileserver expects the DISC NAME to be used rather than the DISC NUMBER. So, for example, PiFS would expect the format to be something like this:

Code: Select all

*LOAD :Econet.P.Title

PiFS has now been updated to translate the DISC NUMBER to the DISC NAME, so that accessing a file using DFS/ADFS pathnames will still work on PiFS. So, if you set up your virtual drive as 0Econet, any time a game tries to access disc :0, PiFS will change this on the fly to :Econet.
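For anyone curious, the rewrite amounts to something like this. This is just an illustrative Python sketch of the logic (the real PiFS code is C on the bridge, and the drive map here is made up):

```python
# Sketch of the on-the-fly disc number -> disc name rewrite.
# DRIVE_MAP is hypothetical; PiFS builds its own mapping from its config.
DRIVE_MAP = {"0": "Econet", "1": "Library"}

def translate_path(path: str) -> str:
    """Rewrite ':<number>.' at the start of a path to ':<disc name>.'."""
    if path.startswith(":"):
        drive, sep, rest = path[1:].partition(".")
        if sep and drive in DRIVE_MAP:
            return ":" + DRIVE_MAP[drive] + "." + rest
    return path

print(translate_path(":0.P.Title"))       # :Econet.P.Title
print(translate_path(":Econet.P.Title"))  # already a name, left alone
```

Paths that already use a disc name fall through unchanged, so nothing breaks for users typing the "correct" form.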
ANFS (Master) missing colon
For some odd reason, when trying to access drives with a single character name (eg when DISC NAME = '0'), ANFS will strip off leading ':', so instead of sending the following command to PiFS:
Code: Select all

*LOAD :0.P.Title

which is trying to load the file 'Title' from directory 'P' of disc '0', ANFS instead sends:

Code: Select all

*LOAD 0.P.Title

and PiFS tries to load the file 'Title' from directory '0.P' of the currently selected disc. That is totally wrong!
So, another configuration option has been added to PiFS: PiFS will try to work out if the ':' has been stripped by ANFS and, if so, add it back in again. This is again a per-user option, set with the '*PRIV <user> A' command. It would normally be used in conjunction with the '*PRIV <user> C' command, and must be used with care: it is possible that the colon has not been stripped by ANFS, and that '*LOAD 0.P.Title' is the correct command.
Note that there is a little wrinkle in the way the command currently works: PiFS will only add the colon back in if the start of the path is specifically '<SINGLE CHARACTER DRIVE NAME>.$'. Checking for '$' is too restrictive, and will hopefully be removed in the next update.
Edit: A fix for this has now been pushed.
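In pseudo-Python, the fixup amounts to something like this. It's an illustrative sketch only: the set of single-character drive names is hypothetical (PiFS knows its own configured disc names), and it ignores the '$' wrinkle mentioned above:

```python
# Sketch of the ANFS missing-colon fixup (the per-user '*PRIV <user> A' option).
# SINGLE_CHAR_DRIVES is hypothetical; PiFS would use its configured disc names.
SINGLE_CHAR_DRIVES = {"0", "1", "2", "3"}

def fix_missing_colon(path: str) -> str:
    """If the path starts '<single-char drive>.', assume ANFS stripped the ':'."""
    if len(path) >= 2 and path[0] in SINGLE_CHAR_DRIVES and path[1] == ".":
        return ":" + path
    return path

print(fix_missing_colon("0.P.Title"))   # :0.P.Title
print(fix_missing_colon(":0.P.Title"))  # already has the colon, left alone
```

The false-positive risk the post mentions is visible here: a genuine directory called '0' at the top of a path would get a colon it shouldn't have, which is why the option has to be enabled per user and used with care.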
PAGE & Page Zero
The above PiFS changes make it much easier to get games running from Econet, but there are still a couple of things to be mindful of. Firstly, on the beeb, NFS sets PAGE to &1200, slightly higher than the safe minimum PAGE for DFS, which is &1100. Therefore games that use memory from &1100 upwards may not work under NFS, particularly if NFS disc access is required during game play. This is less of an issue on the Master, where PAGE is &E00 for both DFS and NFS.
Games that run from ADFS on the beeb should be less of a problem, because PAGE for ADFS is &1D00, which is higher than the PAGE for NFS.
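If you're unsure where PAGE sits on a given machine and filing system combination, you can check from BASIC:

Code: Select all

PRINT ~PAGE

which prints the current value of PAGE in hex (the ~ modifier gives hex output).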
One further issue is that Page Zero addresses &90..&9F are set aside for Econet workspace. Games designed to run from DFS or ADFS may use these addresses for their own purposes, corrupting the Econet workspace. If a game uses any of these Page Zero addresses, it may need to be patched to use others. I have had to do this with the Level 9 graphical adventure games, where the game loader was using address &97. This address was being overwritten by NFS, causing the loader to fail. I patched the loader to use address &66 instead.
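For anyone attempting a similar patch, one crude approach is to rewrite the operand of zero-page instructions that reference the offending address. This is a hypothetical sketch, not my actual patch: it blindly rewrites the byte after known zero-page opcodes, which is only safe if you've disassembled the loader first and confirmed &97 isn't also used as data or as an operand of some other instruction:

```python
# Hypothetical sketch: retarget zero-page address &97 to &66 in a 6502 loader.
# 6502 zero-page opcodes: A5 = LDA zp, 85 = STA zp, E6 = INC zp, C6 = DEC zp.
# Add further opcodes as your disassembly requires.
ZP_OPCODES = {0xA5, 0x85, 0xE6, 0xC6}

def patch_zero_page(code: bytes, old: int = 0x97, new: int = 0x66) -> bytes:
    out = bytearray(code)
    i = 0
    while i < len(out) - 1:
        # Naive scan: does not track real instruction boundaries, so verify
        # against a disassembly before trusting the result.
        if out[i] in ZP_OPCODES and out[i + 1] == old:
            out[i + 1] = new
            i += 2  # skip the operand we just rewrote
        else:
            i += 1
    return bytes(out)

# LDA &97 / STA &97 becomes LDA &66 / STA &66
print(patch_zero_page(bytes([0xA5, 0x97, 0x85, 0x97])).hex())  # a5668566
```

In practice I'd always diff the patched binary against the disassembly rather than trust a byte-level scan like this.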
Working / Not working Games
This is still very much a WIP, but here's the current status...
Working:
Level 9 adventures (require a patch): Lancelot, Knight Orc, Ingrid's Back, Time and Magik, Gnome Ranger & Scapeghost
Ozmoo based adventure games (like Zork, HHGTTG, Hollywood Hijinx etc)
Elite for Econet
White Light (for the Master)
Repton 3
Ravenskull
Battle Zone
Chuckie Egg
Manic Miner
Jet Set Willy
Ladybug
Phoenix
Citadel
Wordle
Not working:
Exile
Cholo
White Light (for the beeb)
Re: PiBridge 2.1-dev pushed to github
I keep getting this error a lot with 2.1-dev: "Too many open directories". I have also tried reverting back to 2.0 but still get the same error. It happens any time I try to copy lots of files from the PiBridge, or set access on a lot of directories, etc.; basically anything with a lot of file operations. Have I managed to corrupt some of the data? The only way I can get things working again is to do a *bye and log in again. I also tried *treecopy on my Master, but after a while that just hangs (no errors).
As long as I am not doing bulk filer operations everything continues to work fine, and I can copy files to the PIBridge without any issues.
Here is an econet trace if it helps.
https://www.dropbox.com/scl/fi/m8p19z2z ... 2l0kw&dl=0
Not sure what I can do to fix it?
BBC Master, BBC Model B, Electron, A5000, A4000, RISC PC, PIBridge, Piconet, and too many Raspberry PI's and now an A4
Re: PiBridge 2.1-dev pushed to github
Have you turned on the ManyHandles feature? It will help with this issue, but will likely break compatibility with BBC Bs.davehill wrote: ↑Mon Mar 18, 2024 10:19 am I keep getting this error a lot with 2.1-dev. "Too many open directories". I have also tried reverting back to 2.0 but still get same error. Happens anytime I try to copy lots of files "from" the PIbridge, or set access to a lot of directories, etc, basically anything with a lot of file operations. Have I managed to corrupt some of the data? The only way I can get things working again is to do a *bye and login again. I also tried *treecopy on my Master but after a while that just hangs (no errors).
Screenshot 2024-03-18 at 9.58.21 AM.png
Screenshot 2024-03-18 at 10.03.40 AM.png
C
Re: PiBridge 2.1-dev pushed to github
Genius!! That has fixed it!! I rarely use my BBC B, so I guess I can always turn it off when I need to! Thanks Chris! (Maybe I should read the user manual more too!!)
Re: PiBridge 2.1-dev pushed to github
Have you had any luck finding the utility? I tried MDFS net in NetLibB, and I also tried compiling the source attached in this thread, but I have had no joy. It will be a mixture of my lack of knowledge of compiling on Beebs and my poor attempts at googling for the actual utility. Hoping I can find it somewhere!
- BeebMaster
Re: PiBridge 2.1-dev pushed to github
*FAST is on the SJ MDFS Master floppy disc in $.Library. There's a logical copy here:
https://mdfs.net/Mirror/Archive/SJ/MDFS/MASTER.zip
Re: PiBridge 2.1-dev pushed to github
There it is!!! Thanks @BeebMaster! I said my googling skills were failing me, I have been looking for days. Appreciate it!BeebMaster wrote: ↑Wed Mar 20, 2024 6:28 pm *FAST is on the SJ MDFS Master floppy disc in $.Library. There's a logical copy here:
https://mdfs.net/Mirror/Archive/SJ/MDFS/MASTER.zip
Re: PiBridge 2.1-dev pushed to github
OOOO pretty flashing colours on RISC OS trying *FAST. I guess Arculator doesn't like it... might have to venture to shedquarters and try a physical machineBeebMaster wrote: ↑Wed Mar 20, 2024 6:28 pm *FAST is on the SJ MDFS Master floppy disc in $.Library. There's a logical copy here:
https://mdfs.net/Mirror/Archive/SJ/MDFS/MASTER.zip
Re: PiBridge 2.1-dev pushed to github
Nice job @cr12925! Works brilliantly!!