Assuming you mean static code size, my Mu project currently supports an interpreted Lisp (macros, fewer parens, infix) implemented in a compiled memory-safe low-level language without _any_ C in 450KB. The "runtime library" (mostly in the safe low-level language) requires 120KB. Without unit tests, the binary sizes drop down to 200KB and 30KB respectively, which gives you some sense of the space devoted to tests in each case.
As the tests hint, Mu doesn't do anything to try to reduce code size. It's small simply because it focuses on the essentials. What's left is written for ease of comprehension.
(An example of "essentials": Mu still cannot free memory. I run my programs in Qemu with 2GB which seems to suffice.)
Author here. I assume you're referring to https://github.com/akkartik/mu/blob/main/shell/evaluate.mu? It's hard for me to tell, since there are 819k lines of code in Mu according to find . -type f | grep -v '\.git' | xargs wc -l. Does it have lambda and support metacircular evaluation? If not, then why do you think it's LISP-like? If you do nothing to reduce code size, then why do you call it Mu? (I assume by "Mu" you're referring to what UNICODE calls the MICRO SIGN.) 30kb is two orders of magnitude larger. I don't see how it's relevant to SectorLISP.
Sorry to trample over your thread! Mu isn't too related to SectorLISP; I was just responding to the side-discussion about what we can build in under 1MB.
To answer your question, evaluate.mu does support lambda (I call it `fn`). My estimates of size were based on `ls -l a.bin` after `./translate shell/*.mu`.
In that case it'd be really cool if you implemented John McCarthy's metacircular evaluator in Mu LISP syntax to demonstrate it's a LISP. Sort of like how, when John McCarthy first shared the idea of LISP, one of the first things he had to do was show that it could do the same things as a Turing machine.
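For readers who haven't seen one, the core of a McCarthy-style evaluator is tiny: a recursive `eval` that handles a few special forms (quote, if, lambda) and otherwise applies a function to evaluated arguments. Here's a hypothetical sketch in Python (not Mu code), with nested lists standing in for S-expressions:

```python
# Minimal McCarthy-style Lisp evaluator, sketched in Python.
# Strings are symbols, lists are S-expressions, other atoms self-evaluate.

def evaluate(expr, env):
    if isinstance(expr, str):              # symbol: look it up
        return env[expr]
    if not isinstance(expr, list):         # number etc.: self-evaluating
        return expr
    op = expr[0]
    if op == "quote":                      # ("quote", x) -> x unevaluated
        return expr[1]
    if op == "if":                         # ("if", test, conseq, alt)
        _, test, conseq, alt = expr
        return evaluate(conseq if evaluate(test, env) else alt, env)
    if op == "lambda":                     # ("lambda", params, body) -> closure
        _, params, body = expr
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    fn = evaluate(op, env)                 # otherwise: apply fn to evaluated args
    args = [evaluate(a, env) for a in expr[1:]]
    return fn(*args)

# Usage: ((lambda (x) (+ x 1)) 41)
env = {"+": lambda a, b: a + b, "t": True}
result = evaluate([["lambda", ["x"], ["+", "x", 1]], 41], env)
print(result)  # 42
```

The "metacircular" part is then writing this same `evaluate` in the Lisp it interprets, which is exactly the demonstration being asked for here.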
(I haven't done this so far because I find metacircularity to not be very interesting. It was interesting when JMC proved it could be done. Mu's whole reason for existence is linear rather than circular bootstrapping. We can disagree over whether I get to call it a Lisp or not, but if it has the full power of macros I'm happy.)
And Justine, if you want to join our ever continuing argument about bootstrapping processes and trust in programming language toolchains, Kartik is the right person to talk to :p
major points for having a thorough test suite, it has saved my a** on my own projects countless times, and IMHO I still too often encounter projects that have barely any test suite, if one at all
Keep in mind that the IBM PC/XT had only 640 kB, yet compilers and interpreters for practically every language were available for it.
Moreover, before the IBM PC, a CP/M computer with a Zilog Z80 or Intel 8080 usually had only 32 kB or 48 kB, but you could use BASIC interpreters and Pascal, Fortran, Cobol and PL/M compilers, among many others, without problems.
However, in order to fit in 32 kB, the compilers themselves were typically written using a macro-assembler, and not in a high-level language. The C language became popular for such tasks somewhat later.
And that's the maximum (without bank switching schemes like EMS). The first version had only 64 KB. There were even plans for a 16 KB version with no disk drive, but I don't think it was ever released.
I love Red and want to use it more. Sadly my professional life keeps giving me hardware that I can't run it on, so Red is getting marginalized. I wish I had more skill and time to help them out with the 32 to 64-bit transition. [1]
1MB is really a lot if you know what you're doing. What language actually can't fit into 1MB of RAM due to inherent specifics of the language itself, rather than because the interpreter/compiler doesn't optimize for such cases?
Most then-existing HLLs were very usable in 640Kb. Going down to 8-bit machines, translators for many of these did exist, but their utility diminished rapidly. You'd essentially have the system's native version of BASIC plus an assembler for anything practical. The rest were more parlor tricks.
This was back when 1 MB was considered an amazing amount of memory. It took two years to write this: about 50K lines of high-level code and another 10K or so of assembly. Today you would approach such a project in a completely different way; you'd spend much more time making it look pretty, and just that part would probably be much larger than the whole package that I wrote.
The funny thing is that even so many years later the original software is still in use in some places and there is a whole company centered around that core that has been re-written a couple of times to keep it up to date and to expand its functionality.
I highly doubt the present-day version would fit in something that small. What's interesting to me is that that old stuff tended to be super productive to work with: zero distractions, just some clearly defined task in a clearly defined environment, and if it worked, it was bulletproof. No hackers, no SaaS, no million connectivity options, and no eye candy. Just that one job to be done, and done as well as the hardware would allow you.
But that's exactly my point. I think we forgot to aim at super productive, zero distraction, lightweight. And it would be of high educative value to see this sort of work again. Both on the small scale details and the pragmatic value.
The sad part to me is that most web apps reenact the same functions but in a css-transition-capable DOM. But functionally I'm not sure you get more.
I wonder how high you can go with only 1MB say.