I'm not sure what motivates you to write a comment like this, but maybe you should reflect on it.
The person you are replying to is consciously trying to make the world a better place, and probably succeeding in a small way. Are they perfect? No. But they are literally sacrificing something for the good of someone (or something) else. This is the definition of altruism.
For some reason, you felt the need to criticize them for not being more altruistic?
Finally, if you really want to live cruelty-free and 100% sustainably, the only option is to throw yourself off a bridge, because any time you interact with modern society you produce CO2 indirectly and potentially harm animals, no matter how careful you are.
I can't speak for proof of identity in the US, but please understand that digital privacy is a slippery slope we're already sliding down. It is not unreasonable to be critical of any privacy-violating initiative, because privacy is never given back, only taken away.
Do you have a source for that? Does your source imply that this is desired by the population?
My question is mostly rhetorical: it is obvious that government & safety institutions are themselves fanning the flames of this ridiculous movement away from privacy and towards a surveillance state of over-protectionism. The world has not significantly changed in 50 years in terms of terrorist threats (except, ironically, for threats to your identity online), yet now that we can track people online, we suddenly must do so to combat this unchanging threat? It's all security theater.
All intelligence agencies benefit from more data, and will happily use lack of data as a scapegoat for their own incompetence. They instill fear to justify their existence, unlawful behavior, and lack of results.
This data is indeed not irregularly distributed; in fact, the fun thing about geospatial data is that you always know its maximum extent.
About your binary tree comment: yes, this is absolutely valid, but consider that binary trees are also a bad fit for distributed computing, where data is often partitioned at the top level (making it no longer a binary tree but a set of binary trees) and cross-node joins are expensive.
I think the key is in the distributed nature: H3 is effectively a grid, so it can easily be distributed over nodes. A recursive system is much harder to handle that way. R-trees are great if you are OK with indexing all data on one node, which I think is a no-go for a global system.
This is all speculation, but intuitively your criticism makes sense.
Also, mapping 147k cities to countries should not take 16 workers and 1 TB of memory; I think the example in the article is not a realistic workload.
Although I understand your frustration (and have certainly been on the other side of this as well!), I think it's very valuable to always verbalize your intuition about the scope of work and be critical if your intuition is in conflict with reality.
It's the best way to find out if there's a mismatch between value and effort, and it's the best way to learn about and discuss the fundamental nature of complexity.
Similar to your argument, I can name countless situations where developers adamantly insisted that something was very hard to do, only for another developer to say "no, you can actually do that like this" and fix it in hours instead of weeks.
Yes, making a TUI from scratch is hard; no, that should not affect Claude Code, because they aren't actually making the TUI library (I hope). Most of the complexity should be in the model, with the client just providing a text-based interface.
There seems to be a mismatch between what you describe as the issues (for instance, the quality of the agent) and what people describe as the actual issues (terminal commands don't work, or input is lost arbitrarily).
That's why verbalizing is important: you are thinking about different complexities than the people you reply to.
As another example, `opencode`[0] has an issue count on the same order of magnitude, with similar problems.
> There seems to be a mismatch between what you describe as the issues (for instance, the quality of the agent) and what people describe as the actual issues (terminal commands don't work, or input is lost arbitrarily).
I just named a couple of examples I've seen in the issue tracker, and on a quick skim `opencode` has many similar issues about input and rendering in terminals too.
> Similar to your argument, I can name countless situations where developers adamantly insisted that something was very hard to do, only for another developer to say "no, you can actually do that like this" and fix it in hours instead of weeks.
Good example, as I have seen this too. But for this case, let's first see an `opencode`/`claude` equivalent that was written in "two weeks", has no issues (or fixes its issues so fast they don't accumulate into the thousands), and supports any user on any platform. People building stuff only for themselves (N=1) and claiming the problem is simple do not count.
---------
Like the guy two days ago claiming that "the most basic feature"[1] of an IDE is a _terminal_. But then we see threads popping up on HN about Ghostty or Kitty or whatever, and how those terminals are a godsend and everything else is crap. They may be right, but that software took years (and probably tens of man-years) to write.
What I am saying is that just throwing out phrases that something is "simple" or "basic" needs proof, but at the time of writing I don't see examples.
> What I am saying is that just throwing out phrases that something is "simple" or "basic" needs proof, but at the time of writing I don't see examples.
Besides the other comments already here about code gen & contracts, the bigger reason for me to step away from JSON/XML is binary serialization.
It sounds weird, and it's totally dependent on your use case, but binary serialization can make a giant difference.
For me, I work with 3D data which is primarily (but not only) tightly packed arrays of floats & ints. I have a bunch of options available:
1. JSON/XML: readable, easy to work with, relatively bulky (though not as bad as people think if you compress), no random access, slow floating-point parsing, great extensibility.
2. JSON/XML + base64: OK to work with, quite bulky, no random access, faster parsing, but no structure; extensible.
3. Manual binary serialization: hard to work with, OK size (esp compressed), random access if you put in the effort, optimal parsing, not extensible unless you put in a lot of effort.
4. Flatbuffers/Protobuf/Cap'n Proto/etc.: easy to work with, great size (esp compressed), random access, close-to-optimal parsing, extensible.
Basically, if you care about performance you would really like to have control over the binary layout of your data, but you generally don't want to design extensibility and random access yourself, so you end up sacrificing explicit layout (and thus some performance) by choosing a convenient lib.
We are a very regular-sized company, but our 3D data spans hundreds of terabytes.
(also, no, there is no general-purpose 3D format available to do this work; glTF and friends are great but cover only a small range of use cases)
This was the norm many years ago. I worked on simulation software which existed long before Protobuf was even an apple in its author's eye. The whole thing was a server architecture with a Java (later ported to Qt) GUI and a C++ core. The solver periodically sent data in a custom binary format over TCP for vector fields and the like.
You're making assumptions about what kind of software people write. For a Hacker News degenerate, everything in the world revolves around bean-counting B2B SaaS CRUD crap, but that doesn't mean it's all there is to the world, right? You would be shocked how much networked software (not everything is a website) exists that is NOT a CRUD "app."
Statistically, a lot of people who post on HN and cling to new or advanced tech *do* just write CRUD apps with a little special sauce; it’s part of what makes vibe coding and many of the frameworks we use so appealing.
I’m not ignoring that other things exist and are even very common; I was agreeing with the person that that’s a useful case.
I’ve also worked for various companies where protobuf has been suggested as a way to solve a political/organizational issue, not a code or platform issue.
> someone has to make a native cross-platform desktop UI framework that doesn't suck
That's the browser. Native UI development failed because nobody wanted to lose money on cross-platform compatibility, security, or the user onboarding experience.
The web is fast enough for 99% of UIs. The story is not about using the web; it is about using the web poorly. old.reddit is not Qt.
Sure, but from the perspective of the code that calls move(), it's good to assume the value is moved at that call, which I guess was the intention behind picking the name.
Usually yes. However, because that does not hold for some resource types, it can lead to less-than-ideal behaviour. E.g., if your RAII resource is something which gets corrupted when there are two handles to it (some sort of odd hardware resource), you std::move() the object into a callee, assume it has been moved and released, and acquire a new resource; then it turns out the callee did not move it, and now you have two of them.
std::move tells the devs and the compiler that you _intend_ the value to be moved. Sadly, that isn't reflected well in its implementation, as it will "silently" degrade even if it isn't a "move"(1).
A `std::move` which fails to compile if it can't be a move(1) would not have this issue.
But it has other issues, mainly w.r.t. library design, especially related to templates/generics, which probably(?) needs a `std::move` that works like the current one. I think someone else in this comment section already argued that one issue with modern C++ is too much focus on the complicated template/const library design by experts compared to the day-to-day usage by non-experts.
(1): There is a bit of a gray area around what in Rust would be Copy types; for simplicity we can ignore them in this hypothetical argument about an alternative std::move design.
I'm sorry what?
This is not even close to consensus, as you present it.
Also, a thought exercise, just for you:
1. Should stabbing people be illegal?
2. Should we make it impossible to stab people?
Think about those two questions, and how they relate to each other. What would the consequences be of #2?