(16 replies, posted in Sinclair)

Thanks for the info. Seems that either way the Next will be totally overpowered in terms of sound capabilities yikes It's basically a chip orchestra-in-a-box at this point. Especially pleased to hear that SAA support is on the list. Now we just need a decent tracker for all of this...


(16 replies, posted in Sinclair)


Did you have this discussion before they upgraded to the SLX16? Maybe at least OPL2 will be possible after all, who knows wink On the other hand, I'll probably stick to the beeper anyways. I need those limitations, with FM I'd just be stuck tweaking sounds all day.

Made by one of the most talented chip artists on the planet wink

Ha, neat. They even got a Sharp Pocket PC!

The Commodorians are actually blessed as far as live music coding goes, since they've got defMON wink


(10 replies, posted in Sinclair)

Not sure if Shiru has developed this idea any further yet. He's done a few ports of ZX beeper engines though, as you've probably seen. I've mostly been busy working on MDAL myself, though I've also done a Z80 emulation core in the meantime (not for AVR though, just a C++ experiment to see if I could pull it off in general). I'm sure we'll come back to this at a later point though.


(16 replies, posted in Sinclair)

Yeah, programming natively on the thing is what I'm most looking forward to. As far as sound goes, I think the current specs are great already. And there's always the chance for an updated firmware later on wink

Also what? The Lyndon Sharp is active? I've been trying to get in touch with the man for ages, to no avail. He briefly popped up on WorldofSpectrum a while ago, but nobody gave a damn (unsurprisingly, as many people with a serious interest in Speccy things have left that place after the new management took over), so he disappeared from there again. If you could manage to reach out to him and lure him here, that'd be awesome. Not sure if he even knows that there are people actively using his beeper sound routine nowadays...


(16 replies, posted in Sinclair)

Holy moly, over 720,000 GBP! Don't think anybody saw that coming. Well deserved though, imo this really is the Speccy of the future.

Haven't tried tasm_on_calc, but judging from the readme, it might not be very useful. It only implements a small subset of commands, and a lot of important stuff (logic, 16-bit arithmetic, relative jumps, data directives) isn't implemented at all. That said, the 84+ can be programmed natively in machine code. So you can always write your asm code on paper and translate it by hand wink Quite tedious, but it's doable for small projects.
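For illustration, hand-assembly really is just opcode lookup with a mnemonics table next to you. A toy Python sketch of the process (the two-entry opcode table is obviously just for show; 0x3E and 0xD3 are the standard Z80 opcodes for LD A,n and OUT (n),A):

```python
# Hand-assembling a tiny Z80 fragment by opcode lookup,
# the way you'd do it on paper with a mnemonics table:
#   ld a,0x10      -> 3E 10   (LD A,n)
#   out (0xfe),a   -> D3 FE   (OUT (n),A)

def assemble(program):
    """Translate (mnemonic, operand) pairs into machine code bytes."""
    opcodes = {"ld a,n": 0x3E, "out (n),a": 0xD3}
    code = []
    for mnemonic, operand in program:
        code.append(opcodes[mnemonic])
        code.append(operand)
    return bytes(code)

print(assemble([("ld a,n", 0x10), ("out (n),a", 0xFE)]).hex())
# -> 3e10d3fe
```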

Houstontracker takes up most of the available RAM. However, on the 84+, you can always stuff things into the Flash archive to make room in the RAM. So it's certainly possible to use the same calc for music tracking and asm experiments.

Z80 is sometimes used in the Sega Genesis/Megadrive as a sound co-processor. It's also used for sound generation in a number of other home computers without dedicated sound hardware.

You don't really need an in-depth understanding of the ULA. I suppose my wording was a bit confusing: As far as sound goes, the ULA only converts the digital output from the Z80 into an analogue signal. The actual sound synthesis is done by the Z80.

Basically, you mostly interact with the ULA in one of two ways: by writing to VRAM (0x4000 - 0x5bff) to change graphics, or by writing to port 0xfe to change the border color and the state of the 1-bit audio output. Bit 4 is the relevant bit for controlling the beeper state.
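As a quick illustration, here's how a value written to port 0xfe breaks down, sketched in Python (the helper name is made up; the bit layout is the standard one: bits 0-2 border color, bit 3 MIC, bit 4 beeper):

```python
# Composing a value for a write to port 0xfe on the Spectrum:
# bits 0-2 select the border color, bit 3 drives the MIC output,
# and bit 4 drives the beeper.

def ula_port_value(border, mic, beeper):
    return (border & 0x07) | (mic << 3) | (beeper << 4)

# black border, MIC low, beeper high:
print(hex(ula_port_value(0, 0, 1)))  # -> 0x10
```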

The tricky part of running graphics and audio at the same time is this: the Z80 shares the data bus with the ULA in the lower 16 KB of the Spectrum's RAM (0x4000 - 0x7fff), and the ULA gets precedence over the CPU. So when the CPU executes a read or write operation that concerns the first 16K of RAM, it may have to wait a variable number of cycles. This is called "memory contention". Now here's the catch: when synthesizing 1-bit music, you usually need cycle-exact timing. However, the delay caused by memory contention is very difficult to predict, so cycle-exact code becomes nearly impossible when generating graphics at the same time. There are some tricks to make it appear as if graphics and music are running at the same time, but actual synchronous gfx+music generation is considered very challenging (though not impossible).
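To make the effect tangible, here's a toy Python model (not an emulation; the numbers are made up) of a loop that is supposed to toggle the beeper at a fixed interval, with contention adding a variable delay to each iteration:

```python
import random

# Toy model of why memory contention breaks cycle-exact sound:
# the loop is meant to toggle the beeper every PERIOD cycles, but
# each pass through contended memory adds 0..max_wait extra cycles.

PERIOD = 224      # intended cycles between toggles (made-up figure)
MAX_WAIT = 7      # assumed worst-case contention delay per iteration

def toggle_times(n, max_wait, seed=1):
    rng = random.Random(seed)
    t, times = 0, []
    for _ in range(n):
        t += PERIOD + rng.randint(0, max_wait)  # contention adds jitter
        times.append(t)
    return times

uncontended = toggle_times(100, 0)
contended = toggle_times(100, MAX_WAIT)
periods = [b - a for a, b in zip(contended, contended[1:])]
print(min(periods), max(periods))  # the period is no longer constant
```

Since the pitch of the output depends directly on the toggle period, that jitter is audible as detuning and noise.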

Well, that certainly means taking the hard route wink On the other hand, I can completely understand your desire to stay away from "big" computers. In this case you may want to familiarize yourself with MRS (the Memory Resident System). It's one of the most advanced assemblers that run natively on the 48K, and it should work just fine in conjunction with the divSD.

Regarding the 48K sound, it actually is generated by the ULA chip, which also does the graphics. The ULA sends the signal to both the on-board speaker and the MIC output (and since the latter isn't fully separated from the EAR port it's audible there as well, at a lower volume). Pretty much all the recordings available online are taken from the MIC port.

I like what I've seen so far of your work on vimeo, nice and dark. Will have to check out more of that wink

Cheers! Another thing you'll want to do is set up a toolchain. Books usually don't explain how to do this in a modern way. There are many different ways of going about it, ultimately it all depends on your personal preferences. Are you a bit familiar with using the shell on your Mac and compiling simple C/C++ programs?

Well, I'd be more than happy to aid you in your quest to pick up assembly. We definitely need more people writing their own sound routines.

Btw check out these:
Mastering Machine Code on Your ZX81, by Toni Baker - There's also a ZX Spectrum version of this book, but this one comes as a hyperlinked html version. Very handy, especially chapter 8.
How To Write ZX Spectrum Games, by Jonathan Cauldwell - Very useful once you've learned some basic assembly theory and are wondering how to put it into practice.

You'll also want a mnemonics table for a quick overview of all the available commands. I mostly use this one. It's missing a few undocumented opcodes, though. The definitive list of all Z80 opcodes is here.

Edit: Oh, and thanks for your contribution on bandcamp btw smile Glad to hear there's at least one person who actually likes my beeper poetry big_smile

Hey Mark, welcome aboard! I realize we don't actually have a sticky about getting started on Speccy 1-bit music yikes And we're always happy to help, so please feel free to ask any beginner's questions you may have.

Man, sucks to hear your stuff got stolen. But great to hear that doesn't discourage you!

Hello HT2 friends, I've uploaded a new beta version (2.24) to the github repo. Fixed a major bug in the keyhandler which was probably responsible for a whole range of stability issues, and added a "loop pattern" playback function (accessible with [ALPHA][(-)]). Provided there are no bugs, this will become the next stable release. Please do test and report any bugs you find.

Also note that the html documentation is no longer updated. For HT2.3, there'll be a new fancy pdf manual. The current draft is attached below.


(4 replies, posted in Sinclair)

ZXBaremulator uses the core from JSpeccy, which iirc has the same problems with beeper sound. Sound from the Pi itself should be fine.


(29 replies, posted in Sinclair)

Hehe, better late than never eh? big_smile
Dunno why, but 0.23 also overall feels faster/less jerky for me. And still super happy about the added sequence navigation.


(4 replies, posted in Sinclair)

Nice project, bummer about the sound emulation. Might be worth trying to port it to the C.H.I.P, though the timing issues on that machine will probably ruin the emulation.

Thanks! I had initially planned on implementing this, but ultimately abandoned the idea, mostly because of my lack of experience in coding such a system. At the current stage, it would be quite tricky to add, because there's just a few hundred bytes of memory left. And there's actually a few other features that I still want to implement (advanced live mode, better copy/paste/clone support), so I'd rather spend the remaining bytes on this. I might consider it again at some point, but at the moment I'd rather try to convince the TiLP guys to try and add Android support, which would probably solve the problem for many people as well.

Btw, a call for action to everyone: Please do test the latest beta versions from github. There are a couple of nasty issues which I need to sort out before I can progress further. So far I've had no luck with these, so any input on them is highly appreciated.

Talkin' bout da tracker at Revision last weekend: https://www.youtube.com/watch?v=7UgSDAkXAJw
Rather technical, with a lot of uhhm and ehhm, and a failed demonstration of the QED68 sound routine at the end.


(60 replies, posted in Sinclair)

The first example can be done faster with 12-bit counters, at the cost of losing variable duty:

   add hl,de      ;11 - advance the phase accumulator (DE = increment)
   ld a,h         ;4  - bit 4 of H will drive the beeper
   out (#fe),a    ;11
   out (#fe),a    ;11
   out (#fe),a    ;11
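
In Python terms, the fragment amounts to this (a model of the idea only, not of exact Z80 timing; since bit 4 of H is bit 12 of HL, the low 12 bits act as the counter and the output is a fixed 50% duty square wave):

```python
# Python model of the phase-accumulator fragment above:
# HL is a 16-bit accumulator, DE the per-loop increment.
# After "ld a,h / out (#fe),a", bit 4 of H (i.e. bit 12 of HL)
# ends up on the beeper.

def square_wave(increment, iterations):
    hl, out = 0, []
    for _ in range(iterations):
        hl = (hl + increment) & 0xFFFF   # add hl,de
        out.append((hl >> 12) & 1)       # bit 4 of H -> beeper
    return out

wave = square_wave(0x0100, 64)  # output flips every 16 iterations
```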

Second example is very powerful. One can create all sorts of waveforms with this, especially if you also play with the distances between the OUT commands. If I understood Fourier transforms, I'd have some great fun with this...

My idea was to use a free-running counter. The original idea of using an 8-bit counter didn't work, so here is a pretty clumsy version with a 16-bit counter:

    ld de,#80
    xor a
    ld h,a
    ld l,a
    ld b,a
    ld c,a
loop:
    out (#fe),a    ;11__36
    add hl,de      ;11
    inc bc         ;6
    ld a,h         ;4
    nop            ;4
    out (#fe),a    ;11__36
    add a,b        ;4
    ld r,a         ;9
    jr loop        ;12

Surprisingly it actually works, but dedicating a whole 16-bit reg to this seems rather wasteful. Though on the other hand, one free-running counter can serve multiple channels, so maybe it's not so bad after all.

Edit: On second thought, that register is maybe not wasted at all... because it might be possible to use the frame length counter for this :D
Edit2: Trying out the idea in BeepModular. The effect is less pronounced here because of the volume difference (48 vs 64t), but generally abusing the timer for this seems like a good idea.

Well, generating a huge number of pin pulse channels is not a problem at least, thanks to Jan Deak's buffer method. I wonder though if it's possible to come up with an elegant algorithm that uses only one oscillator and derives the detuned copies from that, like the original SuperSaw supposedly does. For example, a simple 8-bit counter that increments once per loop iteration could already be enough to generate two derivatives (one added to main osc, one subtracted).
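The derivative-oscillator idea could be sketched like this in Python (a model of the concept only; the names and the 3-voice mix are made up):

```python
# Sketch of the single-oscillator supersaw idea described above:
# one main 16-bit phase accumulator, plus an 8-bit drift counter
# that increments once per loop iteration. The two detuned copies
# are derived as (main + drift) and (main - drift), so their
# effective phase increments differ slightly from the main osc's.

def supersaw_step(state, increment):
    state["main"] = (state["main"] + increment) & 0xFFFF
    state["drift"] = (state["drift"] + 1) & 0xFF   # once per loop
    main, drift = state["main"], state["drift"]
    copies = [main, (main + drift) & 0xFFFF, (main - drift) & 0xFFFF]
    # mix: sum the oscillators' output bits (bit 12, as in a
    # 12-bit counter engine) for a crude multi-level output
    return sum((c >> 12) & 1 for c in copies)

state = {"main": 0, "drift": 0}
samples = [supersaw_step(state, 0x0123) for _ in range(256)]
```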

Interesting technique! How does it work exactly? Do you think it would be viable for a native implementation on Spectrum etc?


(60 replies, posted in Sinclair)

Well, trying to imitate the waveforms of the allophones is basically already a sort of proto-formant synthesis wink

Hybrid Qchan would be doable. Generally, almost any beeper engine can be combined with AY without too much trouble. The main problem is that adding AY would significantly increase row transition noise on most engines, because (re)loading the AY registers takes a lot of time. The other problem is volume balance. For example the classic Qchan is not a good candidate for combining with AY buzzer, because the buzzer is much, much louder. This is why I chose Squeeker-type synthesis for my hybrid experiment, because this type of synthesis produces very loud sound, and it also cloaks row transition clicks pretty well.

Well, as I said earlier, I've retired from writing beeper engines for the time being. So don't pin your hopes on me. I do hope though that someone else will pick up these ideas, and I'm of course more than willing to help should somebody take up the challenge.


(60 replies, posted in Sinclair)

As far as singing speech synths go, there's of course tavzx: https://www.youtube.com/watch?v=KkZKDJwwb2o
I agree, it would be interesting to have something like this in an actual music engine. However, I'd be more interested in implementing real formant synthesis. I did a sample-based speech synth a couple of years ago (also with variable pitch, btw), and it's pretty boring work imo.
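For the record, classic source-filter formant synthesis boils down to an impulse train at the voice pitch pushed through a few resonators tuned to the formant frequencies. A minimal floating-point Python sketch (nothing like what would run natively on a Z80; the formant/bandwidth values are rough textbook figures for an "a" vowel):

```python
import math

# Minimal source-filter formant synthesis sketch: an impulse train
# at the voice pitch is passed through a cascade of two-pole
# resonators tuned to the formant frequencies F1..F3.

RATE = 44100

def resonator(signal, freq, bandwidth):
    r = math.exp(-math.pi * bandwidth / RATE)        # pole radius
    c = 2 * r * math.cos(2 * math.pi * freq / RATE)  # pole angle term
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + c * y1 - r * r * y2
        out.append(y)
        y1, y2 = y, y1
    return out

def vowel_a(pitch=110, length=2048):
    period = RATE // pitch
    # source: one impulse per pitch period
    source = [1.0 if n % period == 0 else 0.0 for n in range(length)]
    out = source
    for freq, bw in [(730, 90), (1090, 110), (2440, 120)]:  # F1..F3
        out = resonator(out, freq, bw)
    return out

samples = vowel_a()
```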