Jun. 24th, 2025 12:00 am

Assembling the PC-6001

[syndicated profile] leadedsolder_feed

When you’ve got a computer as underpowered as 1981’s NEC PC-6001, you need to squeeze every ounce of performance you can out of it. BASIC just didn’t cut it. For many enthusiasts, the only game in town was machine language. It’s well past time for me to apply my Z80 assembly knowledge to the little white wedge.

Background

As regular readers have probably inferred by now, I really like the NEC PC-6001. It’s a low-spec Z80 system from the very early 80s. Programming games for it is a challenge, not least because of the small amount of RAM, but also because of the mysterious Motorola 6847-like video chip inside.

My very first NEC PC-6001 at the auction

Later generations of the PC-6001 changed out the 6847 for NEC’s own semi-compatible video chip, but whatever lessons I learn here will carry on to the more advanced machines. Lots of commercially-released games targeted the PC-6001mkII instead of the original-recipe model, so things can only get better from here.

Although I have workmanlike Z80 assembly skills, I have no idea how to program graphics for the PC-6001, so I decided that I would try and figure it out.

Environment configuration

As with previous Z80 experiments, my assembler of choice here is zasm. Zasm works well and has a reasonable (if occasionally cryptic) macro facility, which helps make boring chunks of code easier to write.

I reused a Makefile from a previous project, which essentially boils down to this handful of rules:

MAME = /opt/mame0253-arm64/mame
mame_dir = $(dir $(MAME))
local_path = $(dir $(abspath $(lastword $(MAKEFILE_LIST))))
mame_args = -skip_gameinfo -window
asm_args = -w
asm = zasm

pc6001.bin: pc6001.asm
        $(asm) $(asm_args) pc6001.asm -o pc6001.bin

all: pc6001.bin

clean:
        rm -f pc6001.bin pc6001.lst

run: pc6001.bin
        cd $(mame_dir) && $(MAME) pc6001 $(mame_args) -cart1 $(local_path)pc6001.bin

debug: pc6001.bin
        cd $(mame_dir) && $(MAME) pc6001 $(mame_args) -debug -cart1 $(local_path)pc6001.bin

I’m sure that anyone who uses make more than I do could make this shorter and snappier, but computers are fast. It’ll be fine.

For debugging, I use the MAME debugger. I suspect at least one of the PC-6001 emulators can import the LST file generated by the assembler in order to get symbols, but as far as I know, this is not possible in MAME. The most I’ve been able to do so far is add comments using a debugger startup script (-debugscript.) Thankfully, the programs I wrote in this article were so short that I could keep track of my symbols the hard way.
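For reference, a -debugscript file is just a newline-separated list of debugger commands. Mine looked roughly like this (the addresses and labels here are illustrative, not from the actual project), using the MAME debugger’s comadd command to attach comments to the disassembly:

```
comadd 4000,cartridge entry point
comadd 400c,print the greeting
```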

A great set of tips on how to use and extend the MAME debugger using Lua can be found on Matt Greer’s website; I wish that I had found these when I was first starting out.

Hello World

I was lucky enough to find a complete PC-6001 “Hello World” example at mm’s website. Not only does it explain how to make a viable cartridge image and print text to the screen, but it also gives the magic addresses for a couple other system calls such as locate and putchar.

My modified program is as follows:

; cartridge hello world demo
; from http://p6ers.net/mm/pc-6001/dev/4keprom/index.html

putchar: .equ $1075
cls:     .equ $1dfb
locate:  .equ $116d
putstr:  .equ $30cf

; cartridges start at $4000
.org $4000
; PC-6001 original cartridges are identified by this magic string...
.db "AB"
; ...followed by the address to jump to
.dw main

main:
    call cls
    ld hl, $0a08
    ; set position of the text we're about to print (roughly centred)
    ; H = column, L = row, both 1-indexed
    ; $0101 - top left, $0201 - first row, one column to the right?
    call locate
    ; print hello world to the screen
    ld hl, msg_hello
    call putstr

loop:
    jr loop

msg_hello:
    .db "Hello PC-6001!", $00 ; 14 characters long

Does it work?

A screen from the MAME PC-6001 emulator, showing "Hello PC-6001!" roughly in the middle of the screen. A function key bar is shown at the bottom of the screen.

Nice.

I fought for a while longer and then found out that you can set the CONSOLE3 global ($fda6) and then call the internal CNSMAIN implementation of the BASIC command CONSOLE (at $1d52), and it will hide the function-key bar for you too. Sweet.

The same screen as before, but now the function key bar is gone.

Simple Graphics

To figure out how graphics worked, I took a look at the P6 Tech page on the subject. On the PC-6001, you tell it where in RAM to pull the video buffer from by setting the value of a magic write-only I/O port, $b0. Then in your newly-dibs’d VRAM, you write a bunch of “attribute” bytes that each control 16 lines of video – palette and resolution. The first 512 bytes of VRAM are dedicated to these attributes (keener readers than me will realize this number lines up with the 32x16 text mode exactly.)

It took me a bit of spinning in circles to understand exactly what was going on here, as the naturally funky level of free-association poetry that Google Translate sometimes produces from Japanese technical documentation was on full display here. Ultimately, it took some of Inufuto’s code (thank you!) and a bit of pen-and-paper math to figure out how the layout works.

In the mode I’m using, mode 3¹, you have 128 x 192 pixels to play with and 2-bit colour (chosen from one of two palettes – I use green, yellow, blue, red.) That means that each line is 128 / 4 = 32 bytes long, with each byte containing the values for four pixels.
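To make that layout concrete, here’s the packing in Python – a hypothetical helper for illustration, not code from the project; I’m assuming the leftmost pixel lands in the two most significant bits, which matches the test pattern below:

```python
def pack_row(pixels):
    """Pack 2-bit colour indices four to a byte, leftmost pixel in
    the two most significant bits."""
    assert len(pixels) % 4 == 0
    packed = []
    for i in range(0, len(pixels), 4):
        p0, p1, p2, p3 = (p & 0b11 for p in pixels[i:i + 4])
        packed.append((p0 << 6) | (p1 << 4) | (p2 << 2) | p3)
    return packed

# one 128-pixel line of the 01-10-10-01 test pattern packs to 32 bytes
line = pack_row([0b01, 0b10, 0b10, 0b01] * 32)
assert len(line) == 32 and line[0] == 0b01101001
```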

For instance, here is the pattern 0b01101001, or blue-yellow-yellow-blue, applied to every line of the video, all 6144 bytes of the buffer:

A bunch of vertical lines of blue, yellow, yellow, blue pixels are on top of a searing green background.

The program isn’t too complex, although I’m sure it could have been done shorter. I really need to figure out how to write a 16-bit loop macro in zasm:

    ; now try to get into graphics mode
    ld a, (port_b0)
    and $f9 ; clear the vram address bits 1, 2 -
    ; that tells the 6847 VRAM starts at $c000
    out ($b0), a

    ; write attribute bytes
    ld hl, $c000
    ld bc, 512 ; 512 attribute bytes (for 512 characters in text mode)
set_attributes:
    ld (hl), $8c ; green yellow blue red (mode 3)
    inc hl
    dec bc
    ld a, c
    or b
    jr nz, set_attributes

    ; wipe the display portion of VRAM, which starts
    ; after the attributes ($c000 + $200 = $c200)
    ld hl, $c200
    ld bc, VRAM_ROW_SIZE * VRAM_HEIGHT ; or 128 x 192 divided by 4 = 6144
erase_vram:
    ld (hl), 0b01101001 ; blue yellow yellow blue
    inc hl
    dec bc
    ld a, c
    or b
    jr nz, erase_vram

Pretty cool stuff. Now I had to figure out how to draw something actually useful. I had some Python code lying around from earlier when messing with dithering, so I wanted to find an image that had a lot of blues, yellows, greens and red and see if I could get it to look good on the PC-6001.

Monet’s Meules is as good a choice as any. It’s got all four colours!

A haystack is painted as standing strongly against the sunset.

Someone paid $110 million for this painting, but we can draw it on the PC-6001 with just a little bit of effort and some very nasty dithering code…

# PC-6001 mode 3 is pretty brutal: green, yellow, blue, red
MODE_3_PALETTE = [
        0, 255, 0,
        255, 255, 0,
        0, 0, 255,
        255, 0, 0
        ]

from PIL import Image, ImagePalette

palette = Image.new('P', (4,1))
palette.putpalette(MODE_3_PALETTE)

oldimage = Image.open('meules.png').convert('RGB')
newimage = oldimage.quantize(4, palette=palette)
newimage.save('meules-4.png')

This Python script produces the following quantized image. PIL used the default dithering routine (Floyd-Steinberg.) I’m sure there are better choices for this kind of extremely-low-colour palette², but it looks surprisingly good anyway… despite having obliterated the trees.

The quantized version of the previous image shows a very grainy-looking haystack. The trees in the background are now almost impossible to see as they get lost in a buzzy haze of yellow sky.

After adjusting the dithering script so that it also resizes the image to the appropriate width for the PC-6001, we end up with a lot of pixel data ready to be blasted to the screen.
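The resize itself is a one-liner with PIL, but for the curious, nearest-neighbour resampling – one plausible choice for this kind of chunky pixel art – boils down to something like this (a hypothetical pure-Python stand-in, not the actual script):

```python
def resize_nearest(pixels, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbour resample of a row-major pixel list."""
    out = []
    for y in range(dst_h):
        sy = y * src_h // dst_h  # nearest source row
        for x in range(dst_w):
            sx = x * src_w // dst_w  # nearest source column
            out.append(pixels[sy * src_w + sx])
    return out

# downsample a 4x4 grid of values 0..15 to 2x2
small = resize_nearest(list(range(16)), 4, 4, 2, 2)
assert small == [0, 2, 8, 10]
```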

Because this mode of the 6847 has a unique portrait aspect ratio of 128x192, I first thought it would be a clever idea to rotate the image 90 degrees before display. This way, it can fill the entire field without wasting any pixels. You just have to tilt your head… or use tate mode.

I wrote some more quick Python code to convert the whole mess into a packed .db array for the assembler. After one embarrassing screw-up where I realized that I had accidentally given it too long of a length value, overflowing the PC-6001’s memory (and somehow reading part of MAME’s internal state into the PC-6001’s video RAM in the process), I was able to get some nice-looking Meules:
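The emitter half of that conversion is nothing fancy; a trimmed-down sketch of the idea (the function name is invented for illustration, and I’m assuming the pixel data has already been packed four-per-byte):

```python
def to_db_lines(packed, per_line=32):
    """Render packed VRAM bytes as zasm .db statements, one 32-byte
    video line per statement."""
    lines = []
    for i in range(0, len(packed), per_line):
        chunk = packed[i:i + per_line]
        lines.append(".db " + ", ".join(f"${b:02x}" for b in chunk))
    return "\n".join(lines)

# two video lines of the blue/yellow test byte
print(to_db_lines([0x69] * 64))
```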

The image is now displayed within a green border, running in the MAME emulator. It looks surprisingly close to the original image despite only using four garish colours.

Okay, that’s pretty wild. It’s amazing how even sloppy dithering fools our primitive monkey eyes.

I also tried a horizontal version, but the width wasted a lot of space. You can see the remains of the blue/yellow test pattern behind this image, which is a mere 72 pixels tall out of 192 pixels of room…

The image is now horizontal, but most of the visual field is the old background.

I tried scaling it disproportionately and I think it just doesn’t… look like the original picture. It’s like when your weird uncle shows off his collection of full-screen DVDs.

The image fills the whole screen, but it's strangely tall and feels uncomfortable.

As a quick hack, I decided to double up the rows, drawing each pixel twice vertically. I thought it looked pretty okay, but the stretching in the grain is very noticeable.

The image is less sharp than before, but is stretched vertically by doubling up each line. It is much more pleasant this way.

“Full-screen” it is. You’re welcome, Unc. Let’s take a look at it on real hardware.

Hardware configuration

To test on real hardware, I decided it would be best to use my own cartridge. Even though I have a PC-6006 with ROM sockets, ROMs in a compatible pinout (µPD2716, µPD2732, µPD2364) are not easy to come by. These earlier ROMs don’t follow the same JEDEC pinout as the 27c64, 27c128, 27c256, 27c512, etc. that I’m used to using (and have on hand.)

Although I can use an adapter, the PC-6006 cartridge can only address 8K of each inserted ROM. That’s not a particular obstacle here, as the Monet-infused ROM is under 6K even without compression, but the combination of these two issues bugs me enough that I wanted to use my own cartridge.

Because the program doesn’t need expanded RAM, I was able to grab my bodged-up Pico cartridge from the previous entry and throw it right on there. How convenient!

Actually, it’s not convenient at all. Putting a new ROM on the Pico cartridge consists of:

  1. Assembling the ROM;
  2. Padding the ROM out to 8K or 16K;
  3. Dumping the ROM out to C array format and putting it in the header;
  4. Recompiling and re-flashing the Pico firmware.
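Steps 2 and 3 are easy to script; something along these lines would do it (the padding value, array name, and layout are illustrative, not the actual firmware header format):

```python
def rom_to_c_array(data, size=8 * 1024, name="rom_image"):
    """Pad a ROM image to the cartridge size with $ff and render it
    as a C array for pasting into the Pico firmware."""
    if len(data) > size:
        raise ValueError("ROM larger than target size")
    padded = data + b"\xff" * (size - len(data))
    body = ", ".join(f"0x{b:02x}" for b in padded)
    return f"const unsigned char {name}[{size}] = {{ {body} }};"

# a tiny fake ROM: the "AB" magic string plus a jump target
print(rom_to_c_array(b"AB\x00\x40", size=16))
```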

Still, “popping it in” takes less than 10 minutes, which is faster than waiting for a UV eraser to erase a ROM so I can burn it again. Don’t ask how long it takes if I misplace my USB-C cable.

We’ll have to figure out a quicker method for this develop/assemble/flash/test cycle in the future – even figuring out how to keep the A8PicoCart’s drag-and-drop flash functionality would be a step up from this. Still, it’s good enough for now.

The finished Pico cartridge, with a red PCB and several bodges, sitting on top of the PC-6001 logo plate.

My somewhat-untrusty Samsung 910MP LCD was on the bench, so I opted to use it for this test. And it looked okay!

The 910MP LCD shows a very pixellated, somewhat unpleasant stretched version of the previous images. Colour is pretty consistent though, with a pleasant green background.

But: I know what people really want here is weird CRTs. Here’s the same demo on a dinged-and-scratched Magnavox RD0510 I found at a flea market. It’s a 5” portable colour CRT with a composite input.

The haystacks are now in a hellish red and have turned purple in protest at being shoved into this tiny CRT. Also the lens is pretty scratched.

Oh no! Those haystacks look much less appealing on this set. I have noticed that the 6847’s method of generating composite colour makes some television sets really mad, although all the detail is clear. This set works fine with other systems, although it does have a troubling pink hue for a little while until it warms up. Likely a future project. Oh well, it’s still neat to have it on a tiny little CRT.

GitHub repository

Because this project was built on the hard work of so many others, it would be silly not to distribute my own code as well. If you’re interested in how this was programmed, come check out the GitHub repository for pc6001-assembly-programming. Be forewarned, it’s pretty rough in there, and has a lot of other code from previous PC-6001 projects intermingled with the files you want to look at.

The Future of Touching Bits

Now that I have done the absolute bare minimum to figure out how to program the PC-6001’s graphics system, I’m ready to move on to something more exciting. Making a small, original arcade-like game for my favourite little computer has been on my “dream project” list for quite some time, so maybe I should sit down and do it.

A port of the Sharp MZ game Numbertron could also be really fun, and wouldn’t need anything more than the PC-6001’s tile modes. It would be a good excuse to see if the PC-6001 port of the z88dk tools is up to snuff for programming the system in C.

Obviously, displaying a static image is a lot different from anything with motion in it, so learning all the performance tricks will be essential for a smooth arcade game without flicker. There appear to be lots of cool strategies for quickly displaying sprites in order to overcome the PC-6001 graphics system’s glacial speed. For instance, the newly developed PC-6001 game XeGrader appears to use self-modifying code. That’s a technique I haven’t used much, but could be a mind-bending amount of fun.

Where would you like to see this mess of Z80 assembly go next?

  1. In the TRS-80 CoCo, I think that this would be referred to as the “CG6” mode, as that’s what Motorola called it. 

  2. PC-98 games often appear to have used an ordered-dither technique, such as Bayer dithering, in order to stretch the limited palette for jaw-dropping effects. That would probably work well here too. 

[personal profile] mjg59
Single signon is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from it being integrated with a more general app service platform (eg, Microsoft or Google) or a third party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring users present whatever set of authentication mechanisms you've configured.

This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing submitting the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.

But it works poorly for CLI-based setups. There are basically two options - you can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same) and as a result end up with a straightforward mechanism to have your users socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago), or you reduce that risk somewhat by spawning a local server and POSTing the token back to it - which works locally but doesn't work well if you're dealing with trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.

There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.

Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.

I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.

There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.

Someone, please, write a spec for this. Please don't make it be me.
Jun. 20th, 2025 01:11 am

My a11y journey

[personal profile] mjg59
23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.

The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.

This was, uh, something of an on the job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.

But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.

I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.

But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.

That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.

When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.
[personal profile] mjg59
I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, and nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.

What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world routeable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice to have rather than a requirement.

By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:

[Interface]
PrivateKey = privkeyhere
ListenPort = 51820
Address = localaddr/32

[Peer]
Endpoint = VPS:51820
PublicKey = pubkeyhere
AllowedIPs = VPS/0


And on your VPS, something like:

[Interface]
Address = vpswgaddr/32
SaveConfig = true
ListenPort = 51820
PrivateKey = privkeyhere

[Peer]
PublicKey = pubkeyhere
AllowedIPs = localaddr/32


The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.

Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:

iptables -t nat -A PREROUTING -p tcp -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005

Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.

What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer, let's update AllowedIPs to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:

PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0


That's half the battle. The problem is that they're going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard and nothing is going to work. Thanks, Linux. Thinux.

But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:

1 wireguard


where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:

PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard

and now your local system is effectively on the internet.

You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.