I want to share with you all an idea I had about replaying sound samples on an AY chip.
I really don't know what it's worth, but the idea appeals to me.
If you're not familiar with AY sound chip, you can leave now because the rest of the post is going to be somewhat hardcore.
If you know about the AY chip, please find the courage to read and tell me what you think because I myself don't know what to think.
Playing sampled sounds on the Oric relies on the fact that, when the tone generator of the AY chip produces no tone, the digital-to-analog converter of the envelope/amplitude controller acts as a simple voltage DAC whose output is connected to the speaker.
(I know this is oversimplifying .. ISS is more accurate here.)
PCM sound rendering is then possible by making that voltage follow the waveform of the sound we want to replay.
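To make the idea concrete, here is a minimal sketch of that replay loop. Everything hardware-related is a stand-in: `write_ay_register` is a hypothetical helper (not from the post) that records writes instead of touching real hardware, and register 8 is channel A's amplitude register on the AY-3-8910. Tone and noise are assumed disabled so the amplitude DAC drives the speaker directly.

```python
# Stand-in for the real hardware access; on the Oric this would be
# an actual write to the AY chip through the VIA.
written = []

def write_ay_register(reg, value):
    written.append((reg, value))

def replay(samples_4bit):
    """Trace the waveform by writing 4-bit amplitude values at a fixed rate."""
    for v in samples_4bit:
        write_ay_register(8, v & 0x0F)  # amplitude register, values 0..15
        # on real hardware: busy-wait here until the next sample period

replay([0, 7, 15, 7])
```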
The basic thing to be careful about is that the DAC of the envelope/amplitude controller is logarithmically scaled rather than linear.
This is compensated either by using compensation tables on the fly (Oric Tech Demo),
or by incorporating the correction right into the sample itself (as done in NY2020demo by preprocessing samples with SampleTweaker.exe from the OSDK).
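Such a compensation table can be sketched as follows. The exponential model of the DAC is my assumption, not from the post: a commonly used approximation is that each amplitude step is about 1.5 dB, i.e. the output voltage roughly follows V(n) = 2^((n-15)/2) for n > 0, with V(0) = 0.

```python
def ay_voltage(n):
    """Approximate normalized output voltage for amplitude register value n (0..15).
    Assumed model: ~1.5 dB per step, V(n) = 2**((n-15)/2)."""
    if n == 0:
        return 0.0
    return 2.0 ** ((n - 15) / 2.0)

def build_table():
    """For each linear 8-bit sample value, pick the register value
    whose (modelled) output voltage is closest to the linear target."""
    table = []
    for s in range(256):
        target = s / 255.0
        best = min(range(16), key=lambda n: abs(ay_voltage(n) - target))
        table.append(best)
    return table

TABLE = build_table()
```

Applying this table on the fly (or baking it into the sample, as SampleTweaker does) is what the blue dots in the figure below represent.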
And I'm wondering whether we could get better sound quality by taking into account the fact that the accuracy of the linearisation is not the same over the whole range of values, and by incorporating that reality into the sample preprocessing.
Please fasten your seat belt .. I'm going to get you high in a few seconds.
The figure below shows what we do when we linearise the DAC's logarithmic output:
- X-axis is the value we put in the amplitude register.
- Y-axis is the voltage out of the channel's envelope/amplitude output.
- the yellow line shows what perfect linearisation would look like.
- the green dots show the uncompensated voltages.
- the blue dots are the compensated values.
But if we take a closer look at the error we still make in spite of the linearisation, something immediately catches the attention.
This is shown in the figure below, obtained by plotting the difference between the blue dots and the yellow line: the errors are bigger on high values than on low values.
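This is exactly what an exponential DAC model predicts: the gap between consecutive levels grows with the level, so the worst-case rounding error (half the gap) is largest near the top of the range. A quick check, under the same assumed model V(n) = 2^((n-15)/2) as above:

```python
def ay_voltage(n):
    """Assumed exponential DAC model, ~1.5 dB per step."""
    return 0.0 if n == 0 else 2.0 ** ((n - 15) / 2.0)

# Gap between consecutive DAC levels; half of each gap is the
# worst-case error for a target value falling between those levels.
gaps = [ay_voltage(n + 1) - ay_voltage(n) for n in range(15)]
```

Under this model the top gap (between levels 14 and 15) is roughly 0.29 of full scale, while the gaps at the bottom of the range are below 0.01, so low sample values really are reproduced much more faithfully.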
So the question is: "For a given sound sample, could we get a waveform in which small values are more numerous than big values?"
The idea behind this is that, statistically speaking, the more small values the sample waveform contains, the more accurately we can replay it.
But of course we don't want to destroy the sound; we just want the values that compose the waveform to be as low as possible.
In order to experiment, I took a basic square signal like the one below. (I chose a square wave because it is full of harmonics and easy to decompose, for the sake of demonstration.)
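For reference, this is the decomposition I mean: a square wave is the sum of its odd harmonics, square(t) = (4/π) Σ sin((2k+1)t)/(2k+1). A small sketch rebuilding the square from its harmonics:

```python
import math

def square_from_harmonics(t, n_harmonics):
    """Partial Fourier sum of a unit square wave:
    (4/pi) * sum over k of sin((2k+1)*t) / (2k+1)."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * t) / (2 * k + 1) for k in range(n_harmonics)
    )
```

With enough harmonics the partial sum approaches +1 or -1 away from the transitions, which is what makes the square wave a convenient test signal here.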