Yeah, it's hard to strike the right balance of nuance here, and you're spot on. What I meant to get at is the difference in default modes: gVisor's systrap intercepts syscalls via seccomp traps and handles them entirely in a user-space Go kernel, so there's no hardware isolation boundary in the memory/execution sense. A microVM puts the guest in a VT-x/EPT-isolated address space, which is a qualitative difference in what enforces the boundary (perhaps?)
Whereas, yeah, you can run gVisor in KVM mode, where it does use hardware virtualization, and at that point the isolation boundary is much closer to a microVM's. I believe the real difference then becomes what's on either side of that boundary: gVisor gives you a memory-safe Go kernel making ~70 host syscalls, while a microVM gives you a full guest Linux kernel behind a minimal VMM. So at least in my mind it comes down to different trust chains, not necessarily one strictly stronger than the other.
Getting the same thing: "Failed to verify your browser. Code 11". Some noise about WebGL in the browser console, getExtension() invoked on a null reference. LibreWolf on Linux with resist fingerprinting enabled.
Maybe opting for a better-written WAF could boost the reach?
> I'm experimenting with implementing such a sandbox that works cross-system (so no kernel-level namespace primitives) and the amount necessary for late-bound policy injection, if you want user comfort, on top of policy design and synthetic environment presented to the program is hair-pulling.
Curious: if this is cross-platform, is your design based on overriding libc functions, or otherwise injecting libraries into the process?
I'm not interposing libc or injecting libraries. Guests run as WASM modules, so the execution substrate is constrained. The host mediates and logs effects. Changes only propagate via an explicit, policy-validated promotion step.
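To make the promotion model concrete, here's a toy sketch, with plain Python standing in for the WASM host boundary. All names here (`EffectLog`, `promote`, the policy callable) are illustrative, not from the actual project:

```python
# Toy sketch: guests can only *request* effects; the host records them,
# and nothing touches the real environment until an explicit,
# policy-validated promotion step. The policy is supplied late, at
# promotion time, not when the guest code ran.

class EffectLog:
    def __init__(self):
        self.pending = []    # effects requested by the guest, not yet applied
        self.committed = []  # effects promoted past the policy check

    def request(self, effect):
        """Called on the guest's behalf; side-effect-free for the host."""
        self.pending.append(effect)

    def promote(self, policy):
        """Late-bound policy injection: validate each pending effect now."""
        allowed = [e for e in self.pending if policy(e)]
        self.committed.extend(allowed)
        self.pending.clear()
        return allowed

log = EffectLog()
log.request(("write", "/tmp/out.txt"))
log.request(("write", "/etc/passwd"))

# Only effects the policy accepts make it through promotion.
promoted = log.promote(lambda e: e[1].startswith("/tmp/"))
```

The real system mediates at the WASM import boundary rather than in-process, but the shape is the same: the guest's view is synthetic, and the policy decides what escapes it.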
> not much to do with "App Sandboxes" which is a distinct macOS feature
The App Sandbox is literally Seatbelt + Cocoa "containers". secinitd translates App Sandbox entitlements into a Seatbelt profile, which is then transferred back to your process via XPC and applied by a libsystem_secinit initializer early in process initialization, shortly before main(). This is why App Sandboxed programs crash with `forbidden-sandbox-reinit` in libsystem_secinit if you run them under sandbox-exec. macOS does no OS-level virtualization.
It is a little more direct than that even. The application's entitlements are passed into the interpretation of the sandbox profile. It is the sandbox profile itself that determines which policies should be applied in the resulting compiled sandbox policy based on entitlements and other factors.
An example from /System/Library/Sandbox/Profiles/application.sb, the profile that is used for App Sandboxed applications, on my system:
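(Paraphrased rather than copied verbatim, since the exact contents of the profile vary by macOS version, but the shape is roughly this: the SBPL profile itself branches on the process's entitlements.)

```scheme
;; Illustrative fragment in SBPL (the Scheme-based Seatbelt profile
;; language); not a verbatim excerpt from application.sb.
(if (entitlement "com.apple.security.network.client")
    (allow network-outbound))
(if (entitlement "com.apple.security.files.user-selected.read-only")
    (allow file-read* (extension "com.apple.app-sandbox.read")))
```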
Most of this mythical "taste", at least as hinted by the article, can be acquired rather easily: by looking into what's already out there before jumping in to create.
Is there nothing? Great, go ahead and fill the void.
Is there so much that it becomes overwhelming to even look? If so, ask yourself: does your thing have any significant differentiators? Are you willing to maintain it? Do you want the people who come after you to see one more option in the sea, or an existing project made better thanks to your changes?
It's about respecting one another's time. If I'm looking for a to-do app, I'm looking for a good one, at least in the ways that matter to me, not for thousands of applications with the same exact issues. And so are you. Nobody needs a million options that suck. We all want a handful, or ideally one, that does the job.
Instead of using third-party apps for a to-do list, I recently wrote myself a utility: a background process to reschedule iOS Reminders I don't get to, to make sure every reminder I create actually gets a scheduled date/time, and to deconflict reminders from calendar entries when I get an overlap.
It took less than 90 minutes with Claude Code; I have a TestFlight build I've shared with friends for feedback, and I'll probably put it out there for a dollar once I add a couple more settings.
The built-in UIs, syncing, and integrations are really good. It took me a while to realize I didn't need another to-do list app, just to tweak the built-ins.
It's a fairly radical idea that AI can (and should!) be doing things invisibly with existing platforms and avoid the whole nightmare of UI development.
> does your thing have any significant differentiators?
When I see a Show HN around a very popular product concept (like a habit tracker), the first thing I search for is a FAQ or comparison table against other similar apps.
> Most of this mythical "taste", at least as hinted by the article, can be acquired rather easily: by looking into what's already out there before jumping in to create.
Yes, you should do discovery, but that alone is not sufficient to develop taste. Being an also-ran is low taste even if you religiously meet market expectations by following a pattern. Just like in fashion, you need to understand the rules to know when it's okay to break them and appear fashion-forward; that, too, is a form of taste.
Of course they are. Taste is a social conversation that aligns people, for a window of time, on a set of guidelines. Taste is a social construct, and being a social construct (or "made up") does not make it any less real or valuable.
I disagree. Taste is a very real thing, and there are multiple levels to it, from shallow and easily changed to deep and relatively constant.
Shallow taste is stuff like popular trends that come and go, and hating the taste of beer until you’ve had it a few times (not saying everyone has to like beer, that’s not the point).
Deeper taste is more like your deeply held cognitive biases. Like a current of a river or the valleys cut into a mountain. It’s the shape of your cognition that determines how information flows through your brain.
Deeper taste is heavily connected to you and your identity. It’s part of who you are. I think most people would agree that parts of themselves change very slowly, and some not at all.
I know there are parts of me that feel the same as when I was a child. To deny the existence of taste is to deny the existence of a “you” that is different from others.
The problem is that people are often delusional, and AI feeds these delusions. You have to switch to objective measures to gain skill and taste. This is true for art: ask "where is the focal point?" instead of "is this good or necessary?"
There are long lists of successful programs that market themselves as little more than "like program X, but faster/distributed/higher-resolution/bigger-map".
I used to find this and the whole idea of "Web3" ridiculous, but with the recent saturation of low-quality slop and disinformation, perhaps it's time to reconsider.
I enjoy reading thorough publications written by actual humans who have something to say. Part of why I'm here. And I'd take micropayments over subscriptions anytime.
There's just one catch nobody seems to be eager to talk about. While I'm willing to pay that 1¢, if it's 1¢ + any identifying information, I'm out.
While I also share the sentiment with the author, I can't help but notice that the article is picturing things as more dramatic than they really are.
What's been happening to software development from the 2010s onwards is closer to what happened to manual craftsmanship as the industrial revolution took off than to the effects high-level programming languages and abstractions had on the field. Many attempts have been made to turn software development teams into assembly lines: beyond ultra-high-level frameworks and AI, we've had all those "new-new" formal methodologies and extreme offshoring, for example. Another factor that contributed to the status quo is that programming has become well-paid, which inevitably attracted people who are in it for the money and made it an attractive target for "cost optimization".
Not all hope is lost, however. There are two significant differences that set programming apart from traditional crafts: performance and security. There's no universal recipe for either; LLMs and large, bloated orgs suck equally at both. Smaller players can still largely outperform behemoths if they have the right idea, similar to what WhatsApp did to Microsoft's Skype or to what Anthropic is now doing to OpenAI, Google and Microsoft. And as for security, just look at Apple's and Google's bug bounties.
At its core, software development is still a meritocracy. This hasn't changed despite the trillions of dollars that have been poured into making it a quantifiable problem. Organizations that refuse to accept this have their projects fail. As for the influx of money-oriented programmers, it might have skewed the proportions, but it definitely did not drive out all of the passionate ones. Keep your head up.
Also, I must say I like the irony of this post making it to the front page of a website usually full of headlines like "How I used Claude to code a revolutionary JavaScript framework running 100% on Amazon Lambda" :)
Having worked on kernel and hypervisor code, I really don't see much of a difference in terms of isolation. Could you elaborate on this?