iss wrote: ↑Tue Feb 05, 2019 11:27 am
@NekoNoNiaow: Well, we are talking about two different kinds of Makefiles.
1. Makefiles to compile the OSDK sources from DF SVN repository to native executable for Linux.
I'm using Jylam's Makefiles found in the OSDK sub-directories (they are good), but I've created my own version of 'rules.mak' to suit some special needs of mine: it lets me easily patch some sources and add extra tools that I want to the toolchain.
2. Makefiles to compile Oric binaries (i.e. TAP and DSK) from sources using the above native Linux compiled OSDK.
These are essentially the rules for calling the assembler, compiler, linker and so on, i.e. the same thing the BAT files do, but in make syntax. That's it.
Okies, these indeed seem specific to your needs. I thought they were related to including cc65 as part of the OSDK.
iss wrote: ↑Tue Feb 05, 2019 11:27 am
A long time ago I made both publicly available, but I removed them for lack of interest. Now I check out the DF SVN from time to time, and if there are updates I recompile the toolchain (I do the same with cc65). I'm happy with that solution: I know every detail and can easily fix and patch things when needed. But I really doubt I'll make it public again; I simply don't have the time to maintain, support, explain and document it for more than one user.
Reading the post you linked, it seems DBug was waiting for you to merge your changes back into SVN. The discussion seems to have stalled at that stage.
That said, there is no reason why you could not bring your changes back into the OSDK even now.
We can merge them progressively and fix any incompatibilities one after the other in a separate branch until everything can be smoothly merged into main/master/whatever-the-name-is.
Having your sources on a public repository of some kind would certainly help. Is there one available?
In any case, I think you should not abandon the idea of bringing your changes back to the community. If they are helpful to you, they should be helpful to many of us too; maybe not most, but at least this would avoid duplicated effort (and time/effort is precious in our community!).
Chema wrote: ↑Tue Feb 05, 2019 2:59 pm
Let's face it, guys... a C compiler will never generate good code for a 6502 architecture. C simply was not designed for such limited 8-bit processors.
I disagree, for many reasons, but first I want to state that I am not looking for the perfect optimizing C compiler for the Oric, just one that is moderately good and lets me test algorithms before I convert them to assembly.
Currently, lcc65 (without peephole optimizations) is simply too poor for that purpose, hence my search for a better one.
It should also be noted that I am currently working on making opt65 (lcc65's peephole optimizer) work again so it can be used in the OSDK to produce better code than it does now. When I am done, we will have a better C compiler, even if it is not perfect.
Now on to the low level details...
Chema wrote: ↑Tue Feb 05, 2019 2:59 pm
1/ Lack of 16-bit registers and operations (arithmetic operations suffer, pointer management...). Remember that the base type in C is the 16-bit int, and everything is implicitly cast to int on many occasions.
C does support 8-bit data types without issue (char and unsigned char).
Modern compilers are actually *really* good at register allocation, even with very few registers, because they use robust and well-tested allocation algorithms. GCC and LLVM should have no trouble juggling the 6502 register set.
Chema wrote: ↑Tue Feb 05, 2019 2:59 pm
2/ Lack of a real stack. It is tied to page 1, is only 256 bytes, and there is no user stack... this means creating a "custom" C stack and managing it. C makes extensive use of its stack, so there is a problem. But even if using the processor stack were possible, only P and A can be pushed to it, which leaves A (the only register that can perform arithmetic operations and, remember, 8-bit only) as the sole option for storing things on the stack.
C uses only as much stack as the programmer asks for, and that is a secondary problem for my purpose, since I am testing algorithms' inner loops, not deeply recursive functions. So I am fine even if a software stack is slow.
GCC and LLVM can be taught to use the hardware stack only for return addresses anyway, and to use other mechanisms for parameter passing and/or temporary variables. (That is what cc65 does, by the way.)
Chema wrote: ↑Tue Feb 05, 2019 2:59 pm
There are possibly other things that make life complicated, but you get the idea. A compiler with more aggressive optimizations and clever tricks may generate code that is less stupid than another's, but nothing that would generate really much faster code, in my opinion. And as for size, you can make the code small by turning everything into subroutines, but that will make everything much slower!
Once again, I am not looking for a compiler that can fully replace assembly language; I have said that many times.
I just want to test algorithms at a higher level than assembly and not have to deal with register allocation.
Even if the compiler reaches only half the speed of a human it does not matter to me, since that is good enough to evaluate the soundness of my algorithm before I switch to assembly. Even between humans coding in assembly, a factor-of-two difference in speed is easy to find, so that would already be good. Hell, even one quarter or one eighth of the speed would be fine.
That said, figures speak louder than speculation: a brave soul has actually adapted GCC to the 6502 for use with 8-bit Atari machines.
https://atariage.com/forums/topic/27614 ... 2-vs-cc65/
To illustrate the gains from an industrial-grade compiler, here is how it performs against cc65 on the classic prime-number benchmark, the Sieve of Eratosthenes:
cc65: 301 ticks (6.02s)
gcc: 97 ticks (1.94s)
-> roughly three times faster. Not bad, eh?
Executables are larger, but in some cases that may be worth it (and in my case, testing just a single function, it definitely is).