>> Some people seem to think that C is a real programming language, but they are sadly mistaken. It really is about writing almost-portable assembly language
And that's why comparing C to other, more modern languages is kind of useless - it all comes down to what kind of programming you're doing. Even though more expressive languages have been created and more efficient ways to build programs exist, C strikes a very nice balance between high-level syntax and low-level actions.
Agreed. When you are bit-banging and other low-level stuff, you want a language like C.
It's such a silly argument. Good programmers pick the language that matches the problem they are trying to solve. Anyone who considers themselves a hacker isn't a square-peg-in-a-round-hole kind of person. They have several tools in their bag and will pick out the most appropriate one for the job.
That's all that comes to mind these days when I see these arguments: people need to open up their field of view and see what other great languages exist for their craft. Did Leonardo only use paintbrushes to write in his notebooks?
You can do bitwise operations in VisualWorks Smalltalk quite efficiently, particularly for encryption applications. Of course, it helps that the encryption library author and the VM engineer at the time collaborated closely. (Hence, the existence of classes like ThirtyTwoBitArray.)
I seem to remember a study someone did with CISC machines like the VAX. On average, a C statement compiled to about 2.5 machine-language instructions.
It isn't the ability to do efficient bit operations that is a strength of C - it is the ability to easily and succinctly do whatever you want to the bits in memory. Arbitrary pointer/memory operations help greatly in writing things like memory managers, device drivers, and all of the other wonderful parts that make up the guts of an OS.
edit: Added the text "easily and succinctly" as a qualifier on doing things to bits in response to stcredzero's comment. It isn't that C is the only language to allow it; it's that C lets you do it without obscuring your intent. C# handles this quite well with unsafe blocks, Java not so much.
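To make "easily and succinctly" concrete, here's a toy sketch of my own (not from any real codebase) of the kind of direct byte-and-bit manipulation C makes this terse:

```c
#include <stddef.h>
#include <stdint.h>

/* Flip one bit at an arbitrary bit offset inside a raw buffer.
   Any region of memory can be treated as a flat array of bytes. */
static void toggle_bit(void *buf, size_t bit)
{
    uint8_t *bytes = buf;                    /* reinterpret as raw bytes */
    bytes[bit / 8] ^= (uint8_t)(1u << (bit % 8));
}

/* Read a 32-bit little-endian value at an arbitrary (possibly unaligned)
   byte offset, assembled byte by byte so it's safe on strict-alignment
   hardware too. */
static uint32_t read_le32(const void *buf, size_t off)
{
    const uint8_t *p = (const uint8_t *)buf + off;
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```

Neither function hides what's happening to memory - the intent is right there in the arithmetic, which is the point being made above.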
I want to go against the misconception that high level languages mean that you have to give up control over bits. You can control particular bits that matter, even if your language runs on a VM.
I don't see why arbitrary pointer operations are necessary anymore for memory management and the other things. I think this is something left over from the days when we had to squeeze every iota of performance out of machines. I suspect that entire systems could be built with references and almost no pointer arithmetic, and that we're simply stuck in the way we've always done things.
So what if we give up 20 or 30% of the CPU to do this? Pretty soon, we'll be trying to figure out how to keep 8 cores busy on a typical desktop. How about we use some of this to allow greater abstraction in the OS kernel and device drivers?
>> I don't see why arbitrary pointer operations are necessary anymore for memory management and the other things. I think this is something left over from the days when we had to squeeze every iota of performance out of machines.
Squeezing out performance is pretty much the job description of system programmers. Can you give a compelling reason to change that?
>> Pretty soon, we'll be trying to figure out how to keep 8 cores busy on a typical desktop.
No, we won't. Software becomes twice as slow every two years. Things will stay pretty much the same as they've always been.
Achieving a higher degree of abstraction, more straightforward availability of system resources to programmers, higher levels of code reuse.
Present-day system programmers seem to be able to build decent enough operating systems and the like at the level of abstraction C provides, and I suspect that most would consider straightforward management of system resources to be higher priority than straightforward availability of them.
I will concede the possibility that some unambiguously-better system could be written in a higher-level language. However, until evidence demonstrates that it actually can be done, that remains just a possibility, and one with very real and significant costs. Those costs are not, in the view of many system programmers, worth it.
>> I will concede the possibility that some unambiguously-better system could be written in a higher-level language. However, until evidence demonstrates that it actually can be done, that remains just a possibility
Already happened: Symbolics Lisp Machines. (Modulo a few decades of programming practice not available to inform the Lisp Machines.)
I suspect that Go could get us there. Interfaces would allow for a higher degree of abstraction. The language would offer much cleaner concurrency. All the memory management cruft would be gone. In such an environment, many things that are now esoteric would become the realm of ordinary programming. I think there is much to be gained.
Yeah, because correctness is better than speed. I am willing to wait 2 milliseconds longer for an image to show up if it means I can open untrusted images without having my machine taken over. (See recent libpng and vorbis buffer overflows in Firefox.)
Remember, higher-level languages are almost as fast as C. For some languages, that "almost" is less than 2x slower.
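The libpng/vorbis class of bug has a very recognizable shape. This is a hypothetical simplification of my own, not the actual code from either library, but it shows the pattern: a length field read from untrusted input, used to copy into a fixed-size buffer.

```c
#include <stddef.h>
#include <string.h>

#define CHUNK_MAX 64    /* fixed-size destination buffer capacity */

/* Copy a chunk whose length came from untrusted input.
   Returns 0 on success, -1 if the declared length would overflow dst.
   The bounds check below is exactly what the buggy decoders lacked:
   without it, a hostile file declaring a huge length smashes the stack
   or heap past dst, and an image viewer becomes an exploit vector. */
static int copy_chunk(unsigned char *dst, const unsigned char *src,
                      size_t declared_len)
{
    if (declared_len > CHUNK_MAX)
        return -1;
    memcpy(dst, src, declared_len);
    return 0;
}
```

In a memory-safe language the oversized copy would have raised an error instead of silently corrupting memory, which is the correctness-over-speed point being made here.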
>> Software becomes twice as slow every two years.
And it becomes 10x less correct. Less speed for fewer bugs is a reasonable tradeoff, considering computers double in speed every few years. You can buy speed at Newegg for $100, but you can't buy correctness anywhere. No program I use today is particularly slow, but many are horribly incorrect.
So it seems like optimizing for speed is a bad idea. Programs that get me the wrong answer quickly are worthless.
Oh, but you don't write incorrect or unsafe C code... only other people do... I get it...
The performance argument falls down when you consider the embedded domain, where there is a long way to go before all embedded devices are multi-core with lots of RAM and huge caches.
There always are situations where you need to squeeze out performance and there always will be.
A. As mentioned above, there are devices with very limited performance.
B. There is always software, or parts of software, where every last drop of performance needs to be squeezed out, regardless of how fast your computer is. For example, the scheduler for your OS, or certain graphics calculations. If you have to run the same code for every pixel across a million pixels, you do not want to be running a VM.
So this argument is a dead end. There will always be certain types of tasks where it pays to be as fast as possible. Even as computers get more powerful, software gets more complex and more abstracted, which means there is always something at the bottom of all the abstractions that needs to run super fast; if it doesn't, it will slow everything else to a crawl.
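As a minimal sketch of the per-pixel case (my own toy example, assuming an 8-bit grayscale buffer): the loop body below runs once per pixel, so on a 1000x1000 image it executes a million times, and any per-iteration overhead from interpretation or dynamic dispatch gets multiplied by the same factor.

```c
#include <stddef.h>
#include <stdint.h>

/* Saturating brightness increase over an 8-bit grayscale buffer.
   A tight inner loop like this is exactly where per-iteration
   overhead dominates: it compiles to a few instructions per pixel. */
static void brighten(uint8_t *pixels, size_t n, uint8_t delta)
{
    for (size_t i = 0; i < n; i++) {
        unsigned v = pixels[i] + delta;       /* widen to avoid wraparound */
        pixels[i] = (uint8_t)(v > 255 ? 255 : v);
    }
}
```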
>> There always are situations where you need to squeeze out performance and there always will be... So this argument is a dead end.
What argument do you think is going on here? My position is that not all system programming has to be in C. It's not "C is dead" or any such nonsense.
The "argument" is a dead end because it exists only in the imaginations of other posters. My position isn't such a straw man!
In the end you have to manually manage memory and compile to assembly code. C is one of the best languages we have for this and it has a lot of history behind it. This is why it is useful.
No. You cannot rely on a dynamic memory manager inside sections of the kernel. The kernel provides the memory architecture to the operating system and therefore must implement the memory manager.
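To illustrate what "the kernel must implement the memory manager" means at its most basic: below is a minimal bump allocator over a fixed region, a toy sketch of my own with hypothetical names. Real kernels layer page allocators, free lists, and slab caches on top of ideas like this, but the point is that nothing underneath provides malloc - the kernel hands out its own memory.

```c
#include <stddef.h>
#include <stdint.h>

/* A fixed region the "kernel" carves allocations out of.
   _Alignas keeps every returned pointer 8-byte aligned. */
static _Alignas(8) uint8_t kheap[4096];
static size_t kheap_used;

/* Hand out the next chunk of the region; no free() at all in this
   toy version. Returns NULL when the region is exhausted. */
static void *kalloc(size_t size)
{
    size = (size + 7) & ~(size_t)7;          /* round up to 8 bytes */
    if (size > sizeof kheap - kheap_used)
        return NULL;                          /* out of memory */
    void *p = kheap + kheap_used;
    kheap_used += size;
    return p;
}
```

Even this trivial version depends on arbitrary pointer arithmetic over a raw byte region, which is exactly the capability being discussed upthread.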
The C runtime (along with the language and compiler) was a very accurate abstraction of the underlying hardware, and later the hardware makers designed their hardware to be compatible with that ecosystem.
That is why low-level parts of systems continue to be written in C. Only recently, with the advent of multi-core processors, have we started discovering the deficiencies of the almost-40-year-old language. The biggest is the lack of a memory model, which makes it impossible to write correct and portable multi-threaded code, something that is unavoidable at the system level.
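The classic illustration of the problem is a flag-based handoff between two threads. With plain variables this is a data race, and the compiler or CPU is free to reorder the two stores, so the consumer can see the flag before the payload. C11's `<stdatomic.h>` later supplied the memory model that makes the hedged sketch below well-defined (toy example of my own, not from any particular system):

```c
#include <stdatomic.h>
#include <stdbool.h>

static int payload;            /* the data being handed off */
static atomic_bool ready;      /* the flag publishing it */

/* Producer thread: write the payload, then publish it.
   The release store forbids reordering the payload store past it. */
static void producer(void)
{
    payload = 42;
    atomic_store_explicit(&ready, true, memory_order_release);
}

/* Consumer thread: if the acquire load sees the flag, the payload
   store is guaranteed visible too. Returns true once data arrived. */
static bool consumer(int *out)
{
    if (atomic_load_explicit(&ready, memory_order_acquire)) {
        *out = payload;
        return true;
    }
    return false;
}
```

Before C11 there was simply no portable way to express the ordering constraint; you reached for compiler-specific barriers or inline assembly, which is the deficiency the comment above is pointing at.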
Legacy code really is aptly named. There are COBOL and assembly routines in production that have outlived their programmers. Even if a product (for example, Oracle AS being replaced by BEA WLS) is desupported or killed, it still lives on if it's still in use.
There are plenty of software jobs out there that require writing new components in C; I'm not sure that the same is true for COBOL. Sure, there's a ton of legacy C code, but there's also a great deal of new C code being developed, which makes it hard for me to see C as being "dead."
C's not dead, but I agree with the article's author: its use will slowly erode but never reach zero. It'll bottom out at a fairly steady, low rate, rather like COBOL.