Hacker News

That's a good rule of thumb, but you should keep in mind that it's impossible to fundamentally understand most of today's challenges. I'd argue the rule has somewhat lost its meaning as we kept adding more and more leaky abstractions over the decades, making a fundamental understanding of the whole tech stack, and of what each layer does, impossible.

And without these abstractions, you'd struggle to even do a fraction of what today's platforms provide.

But yes, the closer you can get to understanding the challenge you're solving, the better you'll likely be able to solve it. With the caveat that this understanding doesn't necessarily let you truly innovate either. It's just a rule of thumb from a time when computers were much less versatile and ubiquitous.

Totally off topic from the context of the article though



But these abstractions let the developer skip solving computer problems, like how to arrange and shift bits in memory, and focus on the actual problem instead.

Only a subset of problems requires maximum performance.

But I would also argue there are problems that require more than a few characters in a text editor, even when you understand them.


What are "most of today's challenges"?

When I look back over my career, I mostly see increasing levels of abstraction for basically the same set of tasks: here's a database, make a UI to interface with it — sometimes the "database" has been a custom file format or a REST API, but it's still a database.

That's not to say there are no other categories of work besides CRUD apps; of course there are. Games and document editors are mostly about coming up with interesting rules for transforming a state that's fairly easy to display or sonify, and the challenge was one of "make it faster", which often meant throwing away every abstraction above the hardware itself. But even then, before starting my degree I managed to put together a decent raster painting app with just Visual Basic, and I know that code was bad even by the standards I had at the end of the degree.

Knowing MVC and MVVM makes it easier to work with system libraries based on those architectures. But why did we ever get things like VIPER? All I had when working with that was stress — what should have taken one person a few days took a team several weeks. When I've been given free rein to choose the right solution, going with the old one has more than once allowed me, alone, to keep up with an entire team doing the same thing some other way on a different platform.

Even "just" reactive UI is an abstraction that most of us don't really need — which is why there's even a push for "vanilla" JavaScript.

So, what are today's challenges to which the current abstractions, the ones we need to learn in order to work with each other, are the solution?

Or did you mean abstractions such as "I can pretend my keyboard doesn't bounce when considering keypresses" and "I don't need to care about OSI layers 1-6 and barely need to think about layer 7"?
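For what it's worth, the keyboard-bounce abstraction above is small enough to sketch directly. This is a toy software debouncer, not any real controller's logic, and the 20 ms window is a made-up figure:

```python
DEBOUNCE_S = 0.02  # hypothetical 20 ms window, not a real hardware value

def debounce(events, window=DEBOUNCE_S):
    """Drop key events arriving within `window` seconds of the last accepted one.

    `events` is a list of (timestamp_seconds, key) tuples, sorted by time.
    """
    accepted = []
    last = None
    for t, key in events:
        if last is None or t - last >= window:
            accepted.append((t, key))
            last = t
    return accepted

# A bouncing switch reports several electrical edges for one physical press:
raw = [(0.000, "a"), (0.003, "a"), (0.007, "a"), (0.500, "b")]
print(debounce(raw))  # → [(0.0, 'a'), (0.5, 'b')]
```

Everything above the keyboard driver gets to pretend the three raw edges were a single keypress.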


The quote I was responding to contextualized it to being able to write the few lines necessary to solve the problem.

From this perspective, the fundamental understanding needed for a CRUD application is quite challenging, as you'd need to understand:

* block storage

* the way the database handles the data (storage, access, permissions, etc.)

* the way your application interfaces with the database, likely over the wire, adding

* the entire TCP/IP stack

* the VM/runtime you're using to create your application, along with every library used

* everything those libraries use, adding the whole OS to the stack

* if it's a web application, the browser along with its JS ecosystem

You can keep going. Having a fundamental understanding of the problem to a degree that you'd be able to easily break it down into working code in an editor implies being able to directly understand all the error scenarios that every abstraction layer added, and this I find impossible.
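To make the point concrete, here is roughly how few lines the "working code in an editor" can be, using Python's stdlib sqlite3 module and a made-up table for illustration:

```python
import sqlite3

# Three lines of "CRUD". Behind them sit the SQLite bytecode VM, its pager
# and B-tree storage layer, the OS file system and block storage, and the
# CPython runtime -- none of which this code has to mention, and whose
# failure modes it cannot claim to fully understand.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
rows = conn.execute("SELECT name FROM users").fetchall()
print(rows)  # → [('alice',)]
```

The few visible lines are the tip of the stack listed above; the rest is abstraction.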

The only project I'm aware of even attempting something like that is SQLite's proprietary test suite, and they're at what, 900+ times as much test code as application code right now?

If you instead think about the tech stack of the '80s, there was a lot less to keep in mind, as the scope ended at a pretty rudimentary layer: the terminal was the main UI, and there was likely nothing between your code and any persistence you might use.

But I wasn't a programmer in the '80s; I've formed this opinion by reading about the tools programmers used at that time. Maybe there were a lot more layers to software development back then too? I can't say one way or the other with any confidence, as I didn't live through that period as a working adult.


On the other hand, understanding what part of what you are doing is due to a leak from the lower layer helps you isolate it in case you should shift what you are building on. And now we have another abstraction!

It is rare that you need to understand more than a few layers down at any given time. As I type on this keyboard, I have no care what the neutrons, protons, and electrons in the keys are doing. From time to time, maybe if I get a nasty static electric shock, I want to stop.



