Bun's experimental Rust rewrite hits 99.8% test compatibility on Linux x64 glibc (twitter.com/jarredsumner)
679 points by heldrida 1 day ago | 645 comments
https://xunroll.com/thread/2053047748191232310

Recent and related: Zig → Rust porting guide - https://news.ycombinator.com/item?id=48016880 - May 2026 (540 comments)

From 4 days ago: https://news.ycombinator.com/item?id=48019226

  > I work on Bun and this is my branch
  >
  > This whole thread is an overreaction. 302 comments about code that does not work. We haven’t committed to rewriting. There’s a very high chance all this code gets thrown out completely.
  >
  > I’m curious to see what a working version of this looks like, what it feels like, how it performs and if/how hard it’d be to get it to pass Bun’s test suite and be maintainable. I’d like to be able to compare a viable Rust version and a Zig version side by side.

cargo check reported over 16,000 compiler errors when I wrote that message. It could not print a version number or run JavaScript. I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive. There’ll be a blog post with more details.

If this experiment ends up resulting in a real migration path, I think that would be completely awesome. Maybe it means we have a chance to revive older projects such as ngspice [0], but with modern affordances and better safety properties.

From your post, though, it sounds like Bun may have been a pretty direct rewrite, without too many hard choices along the way. Is that fair?

[0] https://ngspice.sourceforge.io/


I hear your suggestion without feeling the need to make the far too common Linux/developer response of “but if you just do all this other stuff and run it this special way and install 15 dependencies and compile XYZ lib from source then clearly it works fine and you’re mistaken”.

That’s exactly the type of thing that’s needed: optimizing projects for modern compatibility, portability and safety when other modernization efforts or forks don’t exist.

That said, I suspect this rewrite went so quickly and so optimally because it had the benefit of (effectively) 100% test coverage already in place in a really well defined system. Most open source projects spawn from the efforts of a single developer who frequently never wastes time writing tests for a little side project. Later, as it grows, they rarely stop and go back to implement testing. So if you’re truly working with an old dead project, there is a really good chance there are zero tests to be found. It is far more difficult to reach the same completeness there, unless the goal is simply to port all of those same problems to a new language and hope type safety fixes them.

(Not specific to ngspice, just mean generally.)


You can instruct an LLM to improve the test coverage.

You are absolutely correct!

[flagged]


As an amateur in the space: I download on Mac, run `ngspice`, "Error: Can't open display: :0". I look in the code - hardcoded X11-era assumptions. Not exactly modern affordances...

Then I try to understand and extract the actual formulas, and there isn't a clean formula layer anywhere. It's all procedural, e.g. in `b4v6temp.c` formulas are tangled with branching, caching, and model-state mutation. Extracting the computation, embedding it cleanly, and exposing it through a sane API feels hair-pulling.

So yeah, maintained, but not as in the 'modern, embeddable, understandable software component' I'd be looking forward to in a rewrite. Maybe don't even touch the simulation core; just rewriting the embedding/API layer and the UX would already be a big deal.


This explains a lot. But you merely need to look at the family of spice forks, and the way they're strangely limited to certain operating systems and embedded inside certain proprietary IDEs, to realise that there's something very wrong with the code architecture.

So, that would be an awesome project!


> As an amateur in the space

Why are you not using this through KiCad? That's what I would expect an amateur to do; especially since they handle the UX that you are complaining about.

And you are complaining about tangled code but that code is almost certainly hyper-optimized since performance actually mattered a LOT to people running spice simulations. ng-spice (and Spice3 and Spice2) were not written for programming ease; they were written to get a real job worth real money done.

In addition, any change you make to that code needs to be run back through numerical regression tests to make sure you didn't break things since this is software that people expect to get correct answers.

However, if the legacy seems to bother you so much, perhaps you should look at Xyce from Sandia?


> And you are complaining about tangled code but that code is almost certainly hyper-optimized since performance actually mattered a LOT to people running spice simulations.

I can 100% guarantee you that those two are not mutually exclusive at all.


> Why are you not using this through KiCad? That's what I would expect an amateur to do; especially since they handle the UX that you are complaining about.

They sound like an amateur at circuit design, not software engineering (which is how I'd describe myself too).


KiCad is still the preferred interface.

The original point stands. Ngspice shows its heritage from the days of Fortran far more than a modern code base would or should. Its sole great virtue (from my point of view) is that it integrates with KiCad and only falls over for no reason about 5% of the time.

I would suspect that some of the simulation systems coming out of the Julia community or Xyce would be a better base.


I see "sourceforge" and immediately I think "this project is way behind time and is going to pose a lot of issues to new users, if it's still active".

I could have linked Github repo which has been abandoned for 11 years and ranks higher on Google than the sourceforge page, but that would have maybe been disingenuous. (https://github.com/ngspice/ngspice)

I moved to Codeberg and Google still insists on linking SOLELY the old archived project on GitHub. And of course Snyk and similarly awful scanners mark the project as abandoned because they don't know Codeberg exists.

I think this is highlighting the problem the poster you're responding to laments!

+1, a project presenting at FOSDEM certainly does not need a "revive".

The spice core that ngspice is built off is terrible code. It has a long history going back to 1970s-era Fortran. Starting fresh is probably preferable.

That's not a revive though, revive (at least to me) implies it's dead.

> The spice core that ngspice is built off is terrible code. It has a long history going back to 1970s-era Fortran. Starting fresh is probably preferable.

That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

However, circuit simulation is remarkably difficult to get right (stiff systems with multiple time constants are not uncommon) and generally resistant to parallelization (each device can have its own model, which is a unique set of linear differential equations).

If, however, the legacy of ngspice bugs you that much, go look at Xyce and see if that is more to your taste.


and correctness too - I guess there aren't that many hardcore electrical engineers/physicists/mathematicians who can make sure the results it produces are correct and sound, and debug weird issues coming from numerical stability.

The sort of people who can do this are very rare, and it's not likely they will just randomly decide to donate their time to rewrite the codebase.


> and generally resistant to parallelization (each device can have its own model, which is a unique set of linear differential equations).

Solving sets of differential equations is something that's parallelizable though

See for example how there are physics engines running on GPUs. That's mechanics and not electric circuits, but it's differential equations all the same.


Which differential equations are you talking about? Linear ones have standard solutions and are definitely parallelisable (though you can basically just write the solution down by hand). Non-linear ones range from those that can basically be approximated by a linear solution with corrections, to those needing relaxation methods (which are obviously not parallelisable).

Mechanics is generally linear, and for game physics engines fast is more valuable than correct (fast inverse square root being the obvious poster child). Add viscosity and you're in for a bad time.


To be specific, a linear solver can be (as in I have done) written in a week.

A serious non-linear solver that handles legacy Spice models is another beast entirely. And if you want to integrate modern advances in algebraic-differential systems you take that to a higher level.

These are not partial differential equations such as you find in Navier-Stokes. These are sparse non-linear differential equations that do not parallelize nearly as simply.

Another example of a related problem that parallelizes poorly even though it is linear is the FDTD formulation of Maxwell's equations. These are relatively simple systems, but the bottleneck is almost always memory bandwidth because it is so hard to parallelize.


The type of people who need SPICE are dead serious about accuracy; sometimes even a 1 ppm error is not tolerable. So an optimization from a game engine is definitely not suitable for engineering simulation.

Dude, these are incredibly oversimplified models of real components. How are you getting 1 ppm when basic shit like tempco and self-heating are missing from pretty much every vendor-provided spice model?

> That code is also hyper-optimized for performance. I sincerely doubt you are going to match the performance easily with any random rewrite.

Hyper-optimized for '70s-era Fortran is not gonna be all that optimized on modern CPUs.

I bet that just the compiler optimizations LLVM could do with clean code are gonna make it faster.


> Now, if you had a very clear idea of why the code was making assumptions from the 1990s that are no longer valid, then you might stand a chance of producing something that would outperform it. Or, perhaps, if you had particular knowledge of modern high-performance numerical libraries that you could apply to the problem, then you might be able to beat it.

But that's exactly the sort of exotic domain knowledge that AI models have that I don't.


That code was optimized for performance for 1980s hardware. It’s very far from optimized for modern CPUs.

UPDATE: This would make for an excellent case study if you don’t mind sharing the details. I am very curious about the number of agents, hours it took, and models used (did you use Mythos?).

This would not have been possible 5 years ago. LLMs are going to push us into the space age. Both Anthropic and OpenAI have committed to spending tens of billions of dollars on training alone this year. I am equally excited and terrified by the pace of progress!


Rust is really fun to work with and the compiler is great, just make sure the rewrite takes compile times into account since larger projects often have to be organized in a way that makes compilation reasonably fast.

In my experience Bun in Zig compiles more slowly than Deno in Rust.

Single compiles, for sure. Where Zig optimizes compilation is in the incremental compiler, which I've seen recompile the compiler itself in an instant after a single-line change. Of course, that kind of speed is probably not interesting to some people if the AI is writing tons of lines of code before they go to the compilation step.

I found making single line changes in Bun’s zig code led to very long compiles compared to doing the same in Rust code. It was a while ago though and maybe I was doing something wrong.

Probably a very long time ago then. Try again with Zig 0.16. It's amazing how fast recompiles can be.

They can't, because Bun is tied to a fork of Zig 0.14 which is not compatible with the regular Zig compiler.

Bun’s patched Zig is on Zig 0.15.1


  how long does it take to compile?

  @jarredsumner: It's basically the same as in zig using our faster zig compiler. If we were using the upstream zig compiler, rust port would compile faster.
https://x.com/jarredsumner/status/2053050239423312035

This is at least partially disingenuous. Zig is working on, and has already shipped for some situations, a faster compiler. Bun runs on an outdated version of Zig that doesn't include it.

What coding model are you using for the rewrite? Opus for everything? A prerelease model like Mythos?

Just an aside: is there any way to know how many of those 16,000 compiler errors are independent? I mean, could it be that just by changing, say, 500 lines of code all those errors disappear?

Perhaps 16,000 mostly measures cascade breakage; for example, one lifetime mismatch can cause errors in every function that tries to use that reference.

Rust reference lifetime bookkeeping is a difficult task for LLMs. The LLM has to maintain, across multiple functions and structs, which references outlive which. Furthermore, compiler messages are highly contextual, and lifetime patterns are sparse in the training set.
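
To make the cascade concrete, here's the shape of the problem (an illustrative sketch, not code from the Bun port): get one signature like this wrong, say by returning a longer lifetime than the inputs support, and every call site that holds onto the result reports its own diagnostic.

    // Illustrative only: one lifetime decision, many dependent call sites.
    // If this function were mistakenly written to return `&'static str`,
    // every caller passing short-lived strings would report an error, so
    // a single real mistake can show up as dozens of diagnostics.
    fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
        if x.len() > y.len() { x } else { y }
    }

    fn main() {
        let a = String::from("hello");
        let b = String::from("world!");
        println!("{}", longest(&a, &b)); // prints "world!"
    }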


This does not surprise me in the least. Several Claudes are very good at splitting up and working through them all.

That's a post I am eagerly waiting to read.

Basically we are now seeing an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected even when you take this law into account.

I am a Rust developer myself but I really love Zig and Bun. I am just overly curious about all this.


> Basically we are now seeing an "inverse Hofstadter's Law", where doing something with an LLM takes less time than expected even when you take this law into account.

Even LLMs themselves can't accurately estimate this (though this may be out of distribution stuff)


LLMs have no conception of time, unless you explicitly feed in timestamps to the context

I think given the current mood of things, it would be prudent to not make such strong assertions on anything. Trust is in increasingly short supply these days.

Nothing Jarred said is an assertion other than "There’ll be a blog post with more details."

"I didn’t expect it to work this quickly and I also didn’t expect the performance to be as competitive."

These are two assertions. There could have been a prior secret rewrite that took much longer than six days and this is a marketing stunt for Anthropic. In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.


Those are not assertions of anything meaningful. We have no idea what his expectations were. Maybe he expected it to be absolute crap, and it was only kind of crap. None of it means that it's actually viable. My fat uncle trying to beat Bolt's time could exceed my expectations by improving from 30s to 20s; that doesn't mean it's ever going to be a reality.

> In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic.

In case people still don't get it, Jarred works for Anthropic and Bun belongs to Anthropic. This means that people who have an ax to grind against Anthropic (admittedly a reasonable position) will take the most antagonistic position they possibly can because of personal bias.


I disagree. This is the same sort of marketing strategy as Mythos: wow, it outperformed so much, but we'll have to tell you about it in the future. If he weren't financially aligned with the outcome I'd agree, but he is.

So do you picture them locking up the Rust port behind closed doors as well, or what's the game gonna be? Cause it reads like it's kinda all public already.

Absolutely not, I think they prioritize it because it's internal. I do expect to see a stronger marketing push on its ability to do language translations, because there is honestly value in that. The question is when they have the compute, but it's less crisis marketing than their security stuff, so I'd see it at a lower priority. I just don't think it's as honest as the parent post posits.

The Mythos-truther community is absolutely batshit, sorry. You wrote fanfic and now you're writing more fanfic. The company is faking for marketing so therefore they're faking for marketing. The only things in common between the two situations are you and the word Anthropic, the rest of us are just confused and worried. I'm worried, that's why I'm speaking to you plainly.

> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

haven't used zig...(only used rust)

but zig doesn't solve those problems, does it?


Zig is a middle ground. It solves some of the common foot-guns in C, without the costs of the affine substructural typing that offers Rust its super powers.

I am of the opinion that it is horses for courses and not a universally better proposition.

Because my needs don’t fit in with Rust’s decisions very well I will use zig for personal projects when needed. I just need linked lists, graphs etc…

While hopefully someone can provide a more comprehensive explanation, here are the two huge wins for my use case:

1) In Zig, accessing an array or slice out of bounds is considered detectable illegal behavior.

2) defer[0] allows you to colocate the freeing of resources with code.

That at least ‘feels’ safer to me than a bunch of ‘unsafe’ rust that is required for my very specific use case.

I was working on some eBPF code in C and did really miss zig.

For me it fits the Pareto principle but zig is also just a sometimes food for me, so take that for what it is worth.

[0] https://zig.guide/language-basics/defer/


Fwiw you don't need unsafe for graphs or linked lists in Rust. At least not directly - these things can be abstracted. The petgraph crate is the most popular for graphs. I'm not sure about linked lists because linked lists are the wrong choice 99.9% of the time.

I've written hundreds of thousands of lines of Rust and outside of FFI, I've written I think one line of unsafe Rust.


[flagged]


Not really though. That's like saying that no language is "safe" because the compiler could have a bug.

It's true that safe wrappers around unsafe code sometimes have bugs in them, but it's orders of magnitude easier to get the abstraction right once than to use unsafe correctly in many places sprawled across a large codebase.
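
To illustrate "get the abstraction right once" in miniature (my own sketch, not from the thread): the unsafe block lives next to the check that justifies it, and every caller only ever sees a safe API.

    // A toy version of the pattern: the `unsafe` block sits next to the
    // invariant that justifies it, audited in exactly one place.
    fn first_or<'a, T>(xs: &'a [T], default: &'a T) -> &'a T {
        if xs.is_empty() {
            default
        } else {
            // SAFETY: we just checked that `xs` is non-empty, so index 0 exists.
            unsafe { xs.get_unchecked(0) }
        }
    }

    fn main() {
        let v = vec![10, 20, 30];
        assert_eq!(*first_or(&v, &0), 10);
        assert_eq!(*first_or::<i32>(&[], &0), 0);
    }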


It's not as simple as that. All software is abstraction and with any software if you go deep enough you'll find unsafe code.

E.g. look at a Python list. Is it safe? In Python sure, but that's abstracting a C implementation which definitely isn't safe.

If you look at Rust's std::Vec you'll find a very similar story - safe interface over an unsafe implementation.

It isn't as binary as you think.


If you don’t see any difference between those two, I’m really not sure what to say.

Show code

I think he meant "show me a true linked list / node graph in rust that isn't unsafe". The reason being it's not possible using C-style pointer following (or without just putting everything in auto-pointers). What you've shown is exactly the trade-off they were referring to. In rust, the answer is: make sure the lifetime of all memory is explicitly managed, then use integers for the 'links' between nodes.

His point was that for his programming, he wants to be able to make real pointers and real linked lists, memory-unsafety and all, which Rust makes difficult or opaque. For example, with a linked list you could simulate one (to avoid unsafe) by either boxing everything (so all refs are actually smart pointers), or by using a container with a scoped memory lifetime and having integers in an array act as the "next" pointers. In addition to the extra complexity, "integers as edges" doesn't actually remove the complexity; it just means you can't get a bad memory error (you can still have 'pointers' that point to the wrong index if you're rolling your own).

Same with your graph code. Using a COO representation for a graph does in theory make it "memory safe" (albeit clumsier to use if you are doing pointer-following logic), but it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100 but those nodes were actually removed, so now you're pointing at the wrong node).

I think the point (which I agree with for things like linked lists, graphs, compilers) is that depending on your use case, the "safety" guarantees of rust just make it harder to write the simplest, most understandable code. Now instead of `Node* next` I have lifetimes, integer references, two collections (nodes and edges) to keep in sync, smart pointers, etc. Previously my complexity was making sure `next != null`; now it's a ton of boilerplate and abstractions, performance hits, or more subtle bugs (like 'next' indices getting out of sync with the array of 'nodes').

If there were a way to explicitly track the lifetime of an arbitrary graph/tree of pointers at compile time, we wouldn't need garbage collection -- it's not solvable at compile time, and the complexity has to live somewhere.
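
For what it's worth, the "integers as edges" pattern looks roughly like this in safe Rust (a sketch of the trade-off, not a recommendation):

    // Nodes live in one Vec and `next` is an index. No unsafe and no Box
    // per node, but a stale index is now a logic bug the type system
    // won't catch, exactly the trade-off described above.
    struct Node<T> {
        value: T,
        next: Option<usize>, // index into `nodes`, standing in for `Node*`
    }

    struct List<T> {
        nodes: Vec<Node<T>>,
        head: Option<usize>,
    }

    impl<T> List<T> {
        fn new() -> Self {
            List { nodes: Vec::new(), head: None }
        }

        fn push_front(&mut self, value: T) {
            self.nodes.push(Node { value, next: self.head });
            self.head = Some(self.nodes.len() - 1);
        }

        fn iter(&self) -> impl Iterator<Item = &T> {
            let mut cur = self.head;
            std::iter::from_fn(move || {
                let i = cur?;
                cur = self.nodes[i].next;
                Some(&self.nodes[i].value)
            })
        }
    }

    fn main() {
        let mut list = List::new();
        list.push_front(3);
        list.push_front(2);
        list.push_front(1);
        assert_eq!(list.iter().copied().collect::<Vec<_>>(), vec![1, 2, 3]);
    }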


> it also introduces other subtle bugs if your logic is wrong (e.g. you have edge 100 but those nodes were actually removed, so now you're pointing at the wrong node)

This is not actually a different kind of bug; it's just use-after-free, which you can of course get when using pointers instead of indices.

Actually it's slightly safer than pointer use-after-free because it is type safe and there's no UB.

Also some of the Rust arenas give you keys (equivalent to pointers) which can check for this. There's a good list here (see "ABA mitigation"):

https://donsz.nl/blog/arenas/
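
The mitigation in miniature (my own sketch; production crates like slotmap handle slot reuse and counter overflow with far more care):

    // Generational keys: each slot carries a generation that is bumped on
    // removal, so a stale key is detected instead of silently resolving
    // to whatever value reused the slot.
    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    struct Key {
        index: usize,
        generation: u32,
    }

    struct Slot<T> {
        generation: u32,
        value: Option<T>,
    }

    struct Arena<T> {
        slots: Vec<Slot<T>>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { slots: Vec::new() }
        }

        fn insert(&mut self, value: T) -> Key {
            // A fuller version would reuse freed slots; this sketch appends.
            self.slots.push(Slot { generation: 0, value: Some(value) });
            Key { index: self.slots.len() - 1, generation: 0 }
        }

        fn remove(&mut self, key: Key) -> Option<T> {
            let slot = self.slots.get_mut(key.index)?;
            if slot.generation != key.generation {
                return None; // stale key: the slot was already recycled
            }
            slot.generation += 1; // invalidates every outstanding key
            slot.value.take()
        }

        fn get(&self, key: Key) -> Option<&T> {
            let slot = self.slots.get(key.index)?;
            if slot.generation == key.generation { slot.value.as_ref() } else { None }
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let k = arena.insert("hello");
        assert_eq!(arena.get(k), Some(&"hello"));
        arena.remove(k);
        assert_eq!(arena.get(k), None); // stale key detected, not misresolved
    }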


Err https://github.com/petgraph/petgraph

What are you asking for exactly?
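
If it's example usage, typical petgraph code looks roughly like this (a sketch assuming the petgraph crate as a dependency), with no unsafe at the call site:

    use petgraph::algo::dijkstra;
    use petgraph::graph::Graph;
    use petgraph::visit::EdgeRef;

    fn main() {
        // A directed graph with &str node weights and u32 edge weights.
        let mut g: Graph<&str, u32> = Graph::new();
        let a = g.add_node("a");
        let b = g.add_node("b");
        let c = g.add_node("c");
        g.add_edge(a, b, 1);
        g.add_edge(b, c, 2);

        // Shortest-path distances from `a`; no `unsafe` anywhere in user code.
        let dist = dijkstra(&g, a, None, |e| *e.weight());
        assert_eq!(dist[&c], 3);
    }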


Forgive me if I've misunderstood this thread, but there are unsafe declarations in that crate. Is there really any difference between using unsafe in your own code versus wrapping it inside some crate?

I guess you are making the point that the user does not have to concern themselves with the unsafe declarations?


I would say yes, there’s a difference, in general. I would much rather leave the unsafe code to crates used and tested by many other applications, than have them in the application code itself.

> Is there really any difference between using unsafe in your own code, versus wrapping it inside some crate?

Yes, in the same way that there's a difference between using `std::Vec` (which uses `unsafe`), and writing an unsafe Vec class yourself.

Or even the difference between using Python (which wraps an unsafe CPython implementation), and doing everything in unsafe Python code.

The difference is that widely used code like CPython and `std::Vec` are much much better tested and audited than anything I would write myself, because so many people use them. This is a continuum so something like petgraph is going to be not as well tested as std::Vec but still way better tested than anything I've written.


I don't think it's unreasonable, even though I am getting marked down for daring to ask, to expect people who are making assertions (even ones well understood *within their own community*, i.e. not necessarily universally known) to show examples of what they are talking about.

You're correcting someone, so it's clear that your understanding isn't universal, and example code is the absolute minimum.


It doesn't seem clear what code you're asking for.

zig is unmanaged memory. But rust also allows memory leaks, and they're not uncommon in large, complex programs. So this rewrite will not necessarily control for that.

What language doesn't allow memory leaks?

There are two kinds of memory leaks: forgotten manual freeing (all references are gone, but the allocation is not) and forgetting to get rid of references that keep an allocation alive. Both are a kind of logical error, but the first is mostly possible only in languages with manual memory management. The second one is a universal logical error (only the programmer knows which live references are really needed).
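
The second kind is easy to produce even in a memory-safe language; in safe Rust, for example, a reference-counting cycle leaks (a minimal sketch):

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        // Option so the cycle can be closed after both nodes exist.
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
        *a.next.borrow_mut() = Some(Rc::clone(&b));
        // `a` and `b` keep each other's strong count above zero, so when
        // both handles go out of scope neither allocation is freed: a leak
        // with no `unsafe` in sight. (Weak references are the usual fix.)
    }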

In the Haskell community I’ve seen the second kind called “space leaks.” I don’t see it used much outside that community but I like the term and use it when talking about other languages as well.

Rust allows reference-counting cycles, right?

Zig doesn't even have RAII...

which is a good thing. C++'s RAII is magic sauce that does a lot for you behind your back, where in zig you can simply use `defer`. A constructor is just a function call. A destructor is just a function call.

And a function call is just a fancy JMP, still it's generally acknowledged to be better to have all the bookkeeping automated.
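
What the automated bookkeeping buys, in Rust terms (an illustrative sketch with a hypothetical TempFile type): cleanup runs on every exit path without being restated at each one.

    use std::path::PathBuf;

    struct TempFile {
        path: PathBuf,
    }

    impl Drop for TempFile {
        fn drop(&mut self) {
            // Runs on every exit path: early return, `?`, panic unwind, or
            // normal fall-through. With defer, each scope has to remember to
            // say so; with RAII, the type carries its own cleanup.
            let _ = std::fs::remove_file(&self.path);
        }
    }

    fn work() -> std::io::Result<()> {
        let tmp = TempFile { path: PathBuf::from("/tmp/scratch.dat") };
        std::fs::write(&tmp.path, b"intermediate state")?;
        // Any `?` above or below still removes the file on the way out.
        Ok(())
    }

    fn main() {
        let _ = work();
    }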

Does defer in zig track the object's lifetime directly, or is it like the various other 'context' features in other languages, where it only really works for lifetimes of function-local variables and leaves you on your own when things get more complicated? (Which, IMO, is precisely when RAII becomes most useful. It does seem like most of these languages only consider the 'forgetting to clean up on an early return from a function' case.)

Constructors and destructors are also just function calls in C++

And you can't forget to type defer


How is defer not magic sauce?

Whether you consider it magic is up to you, but, unlike a destructor in RAII, there is nothing automatic going on. If you don't explicitly invoke a destructor, you won't get a destructor.

The fact that you can explicitly invoke the destructor to happen later is simply syntactic sugar, just like if/else/while, or any other control construct more powerful than a conditional jump instruction.


And more importantly, you can choose which destructor to call. This is perhaps what's most underrated about defer: it can select among many different possible destructors, at multiple different levels (group free with arenas, individual free, etc.).

Or even whether you need a destructor, or something simpler, like nulling out a pointer or two to break a reference loop.

defer is a perfectly general structured flow concept; it only cares about when you do something, and is completely orthogonal to what you need to accomplish.


I'm not sure the folks responding can tell the difference.

> If you don't explicitly invoke a destructor, you won't get a destructor.

When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)

>The fact that you can explicitly invoke the destructor to happen later

You don't specify where the `defer`-red "destructor" will be invoked.


> When you explicitly invoke a "destructor", you do it on many code paths (and miss one or two)

Unless, of course, you do it inside a defer block.

> You don't specify where the `defer`-red "destructor" will be invoked.

Yes, actually, you do. It is patently obvious, by code inspection, where the destructor, or anything else specified in a deferred block, will be invoked. defer is a perfectly cromulent part of structured control flow, allowing for easy reasoning about when things occur without having to calculate an insane number of permutations of conditional branch instructions.


Nope! Zig is like C in this regard. There’s no borrow checker. Managing memory is your responsibility.

It gives you a few more tools than C - like a debug allocator, bounds checked array slices and so on. But it’s not a memory safe language like rust.


It's not... but I'm pretty sure it could be. You could probably even take this (WIP) idea and bolt on a formal verifier pretty easily.

https://github.com/ityonemo/clr


It'd take more than that to match rust's borrow checker. Rust's borrow checker tracks lifetimes, and sometimes needs annotations in code to help it understand what you're actually trying to do. I suppose you could work around that by adding lifetime annotations in zig comments. Then you'd have a language that's a lot like rust, but without an ecosystem of borrowck-safe libraries. And with worse ergonomics (rust knows when it can Drop). And rust can put noalias everywhere in emitted code. And you'd probably have worse error messages than the rust compiler emits.

It's an interesting idea. But if you want static memory safety in a low-level systems language, it's probably much easier to just use rust.


> I suppose you could work around that by adding lifetime annotations in zig comments.

you can make a no-op function that gets compiled out but survives AIR

> rust knows when it can Drop.

and it's possible to cause problems if you aren't aware where rust picks to drop.

> And rust can put noalias everywhere in emitted code.

zig has noalias and it should be possible to do alias tracking as a refinement.

> But if you want static memory safety in a low level systems language, its probably much easier to just use rust.

don't use that attitude to suck oxygen out of the air. rust comes with its own baggage, so "just using rust because it's the only choice" keeps you in a local minimum.


> and it's possible to cause problems if you aren't aware where rust picks to drop.

Can you give some examples? I've never run into problems due to this.

> don't use that attitude to suck oxygen out of the air. rust comes with its own baggage

Yeah, that's a totally fair argument. One nice aspect of the approach you're proposing is it'd give you the opportunity to explore more of the borrow checker design space. I'm convinced there's a giant forest of different ways we could do compile time memory safety. Rust has gone down one particular road in that forest. But there's probably loads of other options that nobody has tried yet. Some of them will probably be better than rust - but nobody has thought them through yet.

I wish you luck in your project! If you land somewhere interesting, I hope you write it up.


> Can you give some examples? I've never run into problems due to this.

If it's doing a drop in the hot loop, that may be an unexpected performance regression that could be carefully lifted.

Thank you. Unfortunately in the last few weeks I've been too busy with my startup to put as much work into it. We'll see =D


> If it's doing a drop in the hot loop, that may be an unexpected performance regression that could be carefully lifted.

Yeah, I've heard of people making massive collections of Box'ed entries, then getting surprised that it takes a long time to Drop the whole thing. But this would be the same in C or Zig too. Malloc and free are really complex functions. Reducing heap allocations is an essential tool for optimisation.

The solution to this "unexpected performance regression" in rust is the same as it is in C, C++ and Zig: Stop heap allocating so much. Use primitive types, SSO types (SmartString and friends in rust) or memory arenas. Drop isn't the problem.
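
As a concrete illustration of "stop heap allocating so much" (my own example): hoist the allocation out of the loop and reuse it.

    // Allocation per iteration vs. one reused buffer: both are safe Rust,
    // but the second keeps malloc/free (and the eventual Drop) out of the
    // hot loop entirely.
    fn shouty_lengths(lines: &[&str]) -> usize {
        let mut buf = String::new(); // allocated once, reused each iteration
        let mut total = 0;
        for line in lines {
            buf.clear(); // keeps capacity; frees nothing
            buf.push_str(line);
            buf.make_ascii_uppercase();
            total += buf.len();
        }
        total
    }

    fn main() {
        assert_eq!(shouty_lengths(&["hey", "there"]), 8);
    }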


In zig the solution is to use an arena allocator. That’s about as easy as it gets. Maybe Rust also allows doing that, I don’t know.

You can use arenas in Rust, it's just not as trivial to swap allocators generally. But there are plenty of crates for it.
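
For instance, with the bumpalo crate (one of the many arena crates; a minimal sketch):

    use bumpalo::Bump;

    fn main() {
        let arena = Bump::new();

        // Allocations are pointer bumps; there is no per-object free.
        let nums: &mut [u64] = arena.alloc_slice_fill_copy(1024, 0u64);
        nums[0] = 42;

        let greeting: &str = arena.alloc_str("hello from the arena");
        assert!(greeting.starts_with("hello"));

        // Dropping `arena` releases everything it handed out in one shot,
        // much like deinit-ing an arena allocator in Zig.
    }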

It is quite obvious that Zig is pre-1.0, with thousands of stranded unsolved issues (per their GitHub repo). A review of the Zig hype gives the strong impression that it was created by the language being relentlessly and suspiciously pushed on HN, beyond logic or its language rankings (per TIOBE or GitHub stats), so that many were under the illusion that the language was something more or other than what it really is.

Zig is still under development and in beta. Stability issues, crashes, and leaks should not be surprising, and are even to be expected. To stick with a beta language, companies and developers usually have to be philosophically and/or financially aligned with it. An example is JangaFX and Odin, where they have not only committed to using the language (despite it being beta) in their products, but have directly hired GingerBill.

Team Bun appears to have "alignment and relationship issues" with Zig, to the point that they have decided to extensively explore their options. Now Bun is rewritten in Rust. They are seeing if Rust meets their requirements. As with any relationship, if one ignores or takes a partner for granted, don't be surprised if they want a divorce or jump to someone else.


You might want to check their Codeberg then, because they've moved all their development over there...

Zig very much could have moved all of their GitHub issues over to Codeberg, to be resolved, but chose not to do so. Thus they left thousands of issues unsolved and stranded.

This maneuver was arguably obfuscated by the anti-LLM stance and the finger-pointing at Microsoft, but nevertheless, many still noticed. Zig had, for a long time, been falling behind and doing poorly on its open-to-close ratio for resolving issues. It should be embarrassing to leave so many issues open.

Even if they are not accepting new GitHub issues, they have demonstrated an inability to resolve existing issues, except at an extremely slow pace. Considering there are just about no new issues on their GitHub repo, it is understandable if some find the pace of closing and the number of open issues unacceptable or questionable, in addition to the clearly bad open-to-close ratio.


Did you read their migration post? They are thinking about it as COW, so they're using both issue trackers right now, but as soon as they update an issue it jumps straight to the Codeberg issue tracker. It's an unconventional way of doing it, but it's no conspiracy.

Peter Naur: Programming as Theory Building

Bun: Hold my beer


Looks like he did the maintainability, performance, and test suite checks and made his decision :)

Honestly, I fully support the rewrite to Rust, but he should have just owned this from the start. I'm sure he knew in the back of his mind how dedicated he was to that branch as he had already spent the equivalent of thousands of dollars in tokens by that point.

Bun was VC funded and acquired by Anthropic. He's spending company money, not his own money.

That's why I said "the equivalent of". Additionally, time and cognitive effort are not free. The work spent on this branch was work that was not spent on other branches. Does that make sense?

6 days is also nothing when you're doing R&D on your company's dime. He could have spent a month trying a dozen different things and thrown away all the code at the end. As long as he ends that month with a clear picture of where to steer the company over the next 5 years, it's time well spent.

Had my former employers been so lenient with how I spend company time, I might still be an office worker instead of self-employed!

Not even the company is spending money. It's their employee working on a rework of code owned by the company, which owns the infrastructure on which the rework is done. And that company has still yet to turn a profit. This work is subsidised by everyone who pays for Claude.

Announcing the decision a week earlier wouldn't help anyone. Maybe he expected it to work (though he didn't say that), but there's no reason to make a final call before seeing that it did work.

Fair enough. I didn't say anything about a "final call". It just feels like there is a middle ground between that and telling people they are overreacting.

Yeah but with no guarantee that it was going to work, why should he have?

Yeah, but he obviously had enough confidence in this project to keep the agents working at it, didn't he? Given infinite time and money, if you prompt an LLM about something enough times, it will eventually work.

Insert something about monkeys, typewriters, and Shakespeare here.


He was 2 days into a project that ended up taking 6. You're being extremely unreasonable.

But you didn’t have to sit and type. Assuming that you look at what it did, why not?

But he was just working along and someone else outed his branch, right? Dude doesn't owe you any sort of explanation.

Yeah, that means it's an extremely successful experiment so far.

"No one has the intention of building a wall" - Walter Ulbricht, chairman of the central committee, a couple of months before the Berlin Wall was built.

The AI companies and their associates are beginning to surpass that level of denials and lies.


It’s disrespectful, and poor netiquette, to immediately jump to adversarial conclusions from a simple desire to refactor.

The right to be suspicious of the motives of powerful people is infinitely more important than protecting their feelings from being hurt by suspicion.

Powerful people figured out how to make suspicion work for them long ago. You have every right to be unconditionally suspicious, but it’s not a good way of accomplishing any change. Also their feelings are not hurt by what you or I think, they don’t care.

> Also their feelings are not hurt by what you or I think, they don’t care.

Definitely not true, they tend to care more than most.


> Also their feelings are not hurt by what you or I think, they don’t care.

I would have agreed with this like 15 years ago, but the very existence of Twitter (and the acquisition saga) proves this to not be true.


> Powerful people figured out how to make suspicion work for them long ago. You have every right to be unconditionally suspicious, but it’s not a good way of accomplishing any change.

How does one accomplish change? Even being a martyr doesn't get traction. As far as I can tell, you need to already be powerful. Nobody lets you into that group if you're not aligned with said group.

Protests (at least in their current form) don't work. Trying to assassinate someone doesn't move the needle (also not the play, I don't support murder), vocal grassroots leaders are no longer relevant at all, if they ever were.

How does one accomplish any change?


Become mighty rich too first, and then accomplish change.

Not by trading the same suspicions on the internet with fellow true believers over and over again; I think the past 10 years have proven that pretty conclusively. Maybe people should try some of the things previous social movements did, which seemed to work pretty well even against a much more uniform media environment and a stronger hostile social consensus.

Protests don’t immediately solve everything, but I think looking at 2026 and concluding they don’t move the needle at all is a weird take. There are recent examples of protest movements (especially long-term ones) working all over the world.


I said assassinations don’t move the needle. Protests just give people the warm fuzzies. They don’t change anything. Tell me what has changed with all of the no kings protests? How well did it go for Iran when they protested?

The onus is on you to prove your point, not me to disprove it.


I’m not asking you to prove my point, but I think you’re being myopic.

* Viktor Orban was just ousted by a popular protest movement. This took years due to structural electoral issues, but it did work eventually. And it wouldn’t have if the people opposing him had given up because they didn’t change anything right away.

* The US ICE protests (and the federal government’s insane overreaction to them) led to the head of DHS being fired and a quantifiable drop in ICE activity (e.g. arrests and the number of people currently detained).

* Nepal’s protests last year led to the resignation of the prime minister and a resounding electoral victory this year for their opponents.

Protests aren’t magic win buttons, especially because even the people protesting don’t fully agree on exactly what “winning” looks like. But they accomplish more than acting out your emotions on the internet.


This isn’t about rights. It’s about not being a jerk. Assume positive intent unless you have direct evidence to the contrary.

Protecting software creators, engineers, builders, and their work, regardless of their tools, is infinitely more important. Full stop.

Four days ago there was no intention to rewrite, now it's a simple desire to refactor. It's not adversarial conclusion, it's pointing out the clear hypocrisy.

Running an experiment, the experiment being more successful than you thought, and then deciding to put more effort into a bigger experiment is not hypocrisy. It’s engineering. If you think some of the objective facts they’re putting out (like test coverage and performance) are lies, go and prove it instead of appealing to emotion.

Especially if given near unlimited tokens to burn through, because any level of success fuels the LLM hype machine, which brings ROI.

> It’s engineering.

Significantly, but not totally. The marketing value can't be ignored.


What do you think one would have to pay to have flesh-and-blood engineers get a cross-language port of a codebase of over half a million lines with a broad test suite to over 99% conformance? I think it would be astronomically high, especially given that for this specific project your hiring pool is going to be limited to people who can get up to speed with Zig and JavaScriptCore right away (or you’re going to have to pay them for low output for a while as you train them). Also it would be literally impossible to do in 6 days no matter how much money you paid, so unless they’re lying about that it’s still something that couldn’t have been done prior for any price.

More handwaving about the LLM hype machine is incredibly boring and enough of it is spewed everywhere that whatever social good it was going to accomplish must have already happened by now. If you want to inject reality into the situation, talk about reality (like Anthropic is at least pretending to).


The hype machine is real and we will talk about it as long as it pleases us. It took decades to get rid of smoking in public places and restaurants, and the clankers will eventually fall, too.

So cash out before that.


Did I say it wasn’t real? Or tell you that you couldn’t talk about it? No, I just pointed out that it’s all anybody talks about and it’s boring and doesn’t engage with anything specific about this stunt/project. And I can make melodramatic analogies too — like to the panic about global overpopulation that led to mass sterilizations in The Emergency. Panic is not an unalloyed good, and if you want to fight “the clankers” you should understand what they are and are not capable of.

Also I already cashed out, joke's on you.


Anti-AI cope is unreal; the comparisons to smoking won't stop lol. The mental model of such people (like you) will be studied. LLMs won't go anywhere, keep dreaming.

Studied by whom? Your virtual AI concubine who has you under her thumb? I thought human thinking is obsolete, as can be seen by your comments.

Sure. Let’s see whether AI sticks or not. Till then, whine about it on the internet. Maybe a few people will care.

This attempt is like shooting for the stars. Most of us software developers are plumbers and we just need to reach to the moon.

Running an experiment and deciding based on the results is not hypocrisy, it's engineering, 100%.

Saying you have no intention of doing something and then doing it is not engineering, it's being dishonest. He could have said "we'll decide when we see the results"; why didn't he?


If he wasn’t willing to change his mind after he saw the results, then why would he do it at all? Can you explain the false motivation that you think he communicated in the original kerfuffle about this?

Maybe he didn't think it would work. Maybe even if it does "work" they'll keep the zig version anyway. Maybe further study is needed beyond existing compiling/test-suite. Intentions and perspectives change over time, even only a few days, without dishonesty.

I'm guessing that if I said it ... that we have no intention of re-writing in rust ... that what I mean is "we have no intention of spending the extreme cost it would take to rewrite". When I discover the cost model is completely different that changes things.


Giving an opinion and making a commitment are different things, wording is important.

If you mean "we have no intention of spending the extreme cost it would take to rewrite" then say that, and it would be fine. If you instead say "we have no intention of re-writing in rust" you've said something very different, using a different set of words, which changes the meaning. Especially if you say it directly in response to someone asking whether you're going to rewrite or not, as was the case here, and say that there's a high chance you'll just be throwing it away, to get the other person off your back. If you then go ahead and do it, expect them to call you out for it.

This is a very simple concept that can generally be understood by children around age 4. Trying to cover it with vague terms and using the defence of "well, I said I had no intention, and I probably won't do it, but you see, I saw the results so I changed my mind; the chance was small but not zero" is what a slightly older kid will try, to see if they can get away with it, and as any kid discovers, that doesn't fly.


Being able to change your mind is an excellent exercise in free will.

Totally.

Saying you don't intend to do something and then doing it is free will.

It's also lying. They are not mutually exclusive.


"People cannot change their mind!

One must stick to old assertions forever!

Giant foot is gonna squish us!"

...this forum is as bad as a single backwater sub Reddit.

I am so sick of emotionally frail software engineers. I don't know why I keep bothering floating back here every once in a while to see what is up.

Same old rustled jimmies over technology evolution like back during the emacs and vi! tabs vs spaces! Sysv init vs systemd!

Super hero power scaling message boards are more engaging than this site.

AI save us from these needlessly economically empowered labor exploiting non-contributor script kiddies. Such an unserious community.


Okay, that's such a shallow take I'm going to try and explain it to you like you're 5 years old:

Changing your mind is okay, for example if someone said it was impossible to do the migration with current LLMs and it turns out they did it in four days, that person can and should admit they were wrong. That's not what he did though. What he did is say he had no intention of doing it, and then did it. That is lying. If he was testing and he didn't know if the change was going to be worth it, he could have said for example:

"This branch is a test, it's not a given it will work so until we see the results we won't decide if we'll be migrating or not."

He didn't say anything like that though, he basically said:

"We have no intention to migrate."

Why did he say the latter and not the former? Because he wasn't being honest; he was just trying to get people off his back, so he didn't say what he was doing, which served his own interest best. We have a saying in my country: "it's easier to catch a liar than someone who's lame".

Also, before you come and say that he said he had no "intention", not that he wasn't gonna do it: a five year old might think that's a valid argument, but this person is an adult and we're all adults here, so it's not; it's equivocation and a logical fallacy.

> I am so sick of emotionally frail software engineers.

Then don't look in the mirror, you're probably being the biggest crybaby in this thread so far.


If developers experienced in open source and corporate politics were to bet on Polymarket on whether the rewrite is ultimately going to be merged, which side would you bet on?

What would the emerging odds be? My guess is 19/20 in favor of ditching Zig.

I have followed many initial denials on a wide range of topics, not only rewrites, over the years. Like clockwork, most of them were lies.


I don't think there's much chance it gets merged.

Even if it passed the full test suite there are a ton of software qualities that are not captured by tests and I think it's unlikely the AI made the right trade-off in every such case.

* We haven't seen the benchmarks yet.

* It hasn't seen wide usage. Zig Bun has had tons of bugs ironed out, Rust Bun has a different set of bugs to iron out.

* The developers know the zig codebase well, they don't know the rust code base.


I don't think most serious developers have time to watch prediction markets.

Not to mention that invoking a major historical event is an appeal-to-emotion move.

You know this whole exercise is both a marketing exercise and a way to make noise.

would the world come to a standstill tomorrow if every Bun instance out there ran on Node.js ?

They know their AI can't sell without the noise that it's now on the edge of the frontier. This is hype.

zig adopting a strict 'no LLM' policy affects the LLM vendors.


A good point. The business and marketing aspect of this situation can not be overlooked. The rewrite in Rust was a clear marketing opportunity, to maintain the LLM hype, that team Bun warmly embraced.

At this point one should just say "the Anthropic team". I can't think of it as a Bun team since Anthropic bought Bun.

Jarred the hacker has been replaced by Jarred the millionaire, soon to be billionaire as Anthropic's valuation keeps going up.


Exactly. Always ask “who benefits from this?”. The answer in this case is: AI vendors, not us.

It’s also just a useful exercise in general, especially for getting feedback for models and harnesses.

I’ve been thinking about setting up a non trivial project to use as a benchmark for any plugins and/or harness changes I make.

Having a prebuilt verification suite is great. You can use it to assess things like token usage and time across different harnesses, models, and plugins.


I don’t think the Zig project adopting a strict ‘no LLM’ policy affects the LLM vendors at all. How many developers are working on the Zig project itself that will (maybe) now not buy a Claude subscription? I can buy that this is a marketing stunt, but nobody at the top cares if a relatively small open source project doesn’t allow AI contributions.

I don't know about that. Zig's bdfl got significant mainstream press attention for his anti-LLM stance. Definitely enough attention for various LLM vendors to notice.

Based on their actions, I don’t think the LLM vendors take anti-AI sentiment very seriously. If anything they court it, though I think it’s more likely they’re just high on their own supply. I doubt the Zig statement had any effect on the thoughts of the people who actually sign contracts with Anthropic, who are mostly not engineers.

The marketing opportunity here is in promoting Claude Code, not giving a smackdown to Andrew Kelley (who vanishingly few people who throw around millions of dollars on AI contracts have heard of).


If you think Claude needs manufactured hype at this point to sell it you're delusional.

Anthropic literally has an astroturfing program:

https://news.ycombinator.com/item?id=47945021


I would expect from 'astroturfing' that they were in some way paying people to recommend it. This just seems to be advice on how to recommend it for people who already want to recommend it.

Manufactured hype is just marketing. And companies losing money and looking to get listed very soon absolutely need it.

That’s how marketing works.

If you think they can survive without hype, you are the naive one

Also a few days before that:

> I expect OSS to go the opposite direction: no human contribution allowed. Slop will be a nostalgic relic of 2025 & 2026.

We should have seen this coming after they got acquired by Anthropic, but it's still disappointing. I'm not against large language models as a technology, just thoroughly disgusted by how these "AI" companies rose to power, eating the software industry and the rest of society. It's creating a very unhealthy dependency.

Think a few steps ahead and start preparing a slop-free software stack and community. That includes Zig and its ecosystem. Even if we (and future generations) don't manage to live entirely without slop, it's more important than ever to ensure a sustainable computing culture, free as in freedom.


Software companies have been about automating human labor since the invention of computers. It's the whole damn point. Why do you think finance used to be (sometimes still is) the head of the IT dept? Because we automated accounting away. Then typists. Then secretaries. Then drafting. Etc etc.

> It's the whole damn point.

Believe it or not, for some of us it’s not “the whole damn point”.


The purpose of a system is what it does. If people constantly use your device to turn kittens into pulp, you have built a kitten grinder, even if the label you slapped on the side says "coffee beans only".

Whether or not you want to admit that is up to you. If you're selling automation or efficiency gains, you're removing human labor.

My first "job" in computing, where someone else paid me for code, was in a research context where we were modeling radio propagation. Nothing about that was removing human labor. It in face eventually called for a bunch of humans to interact with each other. See: https://www.hamsci.org/basic-project/2017-total-solar-eclips...

I don't think it is fair to claim computers are about putting people out of jobs.


I think it is. Before computers you would have had to write all that down on paper logs. By using code, you saved yourself time. If it wasn't less labor, you wouldn't have done it that way.

Before it was less labor, they might not have done it at all. Computers let you do things quicker. So you do more things.

Ok, then go work on homelessness or political corruption. It's not like we have a dearth of problems. Coding is solved.

> Before computers

Computer used to mean "human who does math". Before machine computers, we had human computers. Machine computers replaced all of these human computers.


People *did* write down these logs, manually, and submit them.

And without software, what then? Make a bunch of books and mail them to all these people? On this site of all sites, it's blowing my mind that this kind of thing isn't obvious to everyone. I guess maybe it isn't if you were born before the internet, but man, I'm really surprised by some of these comments.

Human labor could do the math by hand

And that, in fact, was how it was done.

Why else would one create software, if not to do something that a human does/did?

To do things that a human could have done in theory, but did not do because it would have been too expensive.

A few off the top of my head:

- Video games

- Medical device firmware

- Synthesizers

- Detailed universe-scale physics simulations

- Mars rover control software

- The Linux kernel


- Video games - only feasible because of computers.

- Medical device firmware - hardware control layer for medical devices, which are used to aid in medical procedures.

- Synthesizers - help to make music.

- Detailed universe-scale physics simulations - help to make certain physics problems more tractable.

- Mars rover control software - helps to remote control rovers.

- The Linux kernel - control layer that sits between firmware and actual applications, pretty much just a common shared library so apps don't have to each ship with a full stack.

I don't really see your point here. None of these examples counter the argument that software is created to automate human labour as much as is practical.

Video games are an interesting category since they're entirely enabled by software: I can't imagine anyone driving a video game manually (note I don't consider things like Chess, etc software to be video games in this context; more things like FPS, racing, etc). I do remember as a kid I thought that there were actually little people doing the stuff in video games though.


> I don't really see your point here.

The parent comment said "to do something that a human does/did", so I tried to come up with a diverse list of software that performs functions humans hadn't done or couldn't have done.

> software is created to automate human labour as much as is practical

That's certainly a reason software is created, but not the only reason.

> Medical device firmware - hardware control layer for medical devices, which are used to aid in medical procedures.

I should've been more specific, maybe "MRI scanner firmware". Lots of medical devices could not exist without software.

> Synthesizers - help to make music.

Yes, they "help to make music", but synthesizers can produce sounds that humans cannot produce by themselves. If the upthread comment were about technology broadly rather than software specifically, I could've written "saxophones" here.

> Detailed universe-scale physics simulations - help to make certain physics problems more tractable.

"More tractable", or "tractable at all"? Simulations that would take 100 human lifetimes to compute on paper weren't even attempted before.

> Mars rover control software - helps to remote control rovers.

This clearly wasn't ever done without software, so I don't think I understand your response. I can't even imagine how it could have been done without software (my first ridiculous thought is very long cables going from Earth to Mars mechanically controlling a rover, but even if we had a magical material that'd enable that, the cables would get tangled up as the planets move).

> The Linux kernel - control layer that sits between firmware and actual applications, pretty much just a common shared library so apps don't have to each ship with a full stack.

I thought the pushback on this would be "this is just an implementation detail to let us run other software, so it shouldn't count". I don't think I understand your response here either.

---

I guess my general reaction is: sure, if you broaden the criteria enough then you can interpret most anything as "something that a human does/did". Like: humans "have fun" and therefore video games don't count, or humans can jump therefore they "travel through the air" therefore airplanes are just "doing something that humans do". But I don't think this reading of the upthread comment leads to interesting discussion.


Ah I think I see where things went off the rails. I should've explicitly added "would have to do" to purposes for creating software; it was just on my mind, and left as an implicit.

I don't think there's anything out there that a computer can do but humans can't do per se. Whether it's manually doing what an MRI does, or sending people with the Mars rover. It would be anything from tedious/inefficient through crazy difficult/dangerous to totally impossible at this time (at some point in time it would at least be possible). Though that's just being pedantic, especially re video games.

> "this is just an implementation detail to let us run other software, so it shouldn't count"

That's essentially what I said, but in different words.

The main point in my original reply was to question the point of software creation, if not to stand in for human capability, wholly or partially. I don't see people creating software explicitly to just let it gather dust for example, even though that happens very often.


> I don't think there's anything out there that a computer can do but humans can't do per se.

The first thing that comes to mind is complex calculations that need to happen within a certain time budget to be useful. Like, sure, I could "play GTA 5" by sending each of my inputs to a room full of mathematicians frantically doing calculations who then instruct artists how to paint the next frame to send back to me[0], but even if you could somehow get that to run at 1 frame per day, I'd argue that's not really "playing GTA 5" anymore (a core aspect of the game is reacting to things in real time). For a more tangible scenario, imagine trying to pilot a quadcopter by manually controlling each actuator individually (there's no way you could do that quickly/accurately enough to avoid crashing).

[0]: Also this is arguably still "a computer", just one with an unconventional architecture.


> The main point in my original reply was to question the point of software creation, if not to stand in for human capability, wholly or partially. I don't see people creating software explicitly to just let it gather dust for example, even though that happens very often.

Are you referring to the developer's/organization's motivations? Maybe this is a proximate-vs-ultimate-cause sort of thing, but people are also motivated to create software by a desire to express themselves, to win competitions, to stave off boredom, to commit crimes, to prove theorems, to earn money, to show off, to learn things, and so on.

I write software to automate away plenty of my own activities (and occasionally others' too), but even when counting things like test suites, build scripts, etc, I'd estimate that less than a third of the code I've written was because I sat down at the keyboard thinking "I want to replace a human capability".


This list is funny.

All of these things existed in pre computer form.

A scheduler used to be a person putting punch cards into a machine.


My reply in a sibling thread[0] is applicable here too. I'm not sure if you have the same things in mind as skeledrew, but at least this seems probably relevant:

> If you broaden the criteria enough then you can interpret most anything as "something that a human does/did". Like: humans "have fun" and therefore video games don't count, or humans can jump therefore they "travel through the air" therefore airplanes are just "doing something that humans do". But I don't think this reading of the upthread comment leads to interesting discussion.

I'd be happy to discuss specific examples of the "pre computer forms", if you provide some.

[0]: https://news.ycombinator.com/item?id=48083805


What's the human form of a video game?

Board games? All sorts of toys?

Well not really, since the board game itself doesn't need a paid human to work. It's been crafted by a human, but video games are also crafted by (arguably many more) humans. The closest would be escape games, or larger scale games maybe

To do new things no number of humans can do

No one is taking away programming as a hobby from you :)

There are software components out there that are the backbone of our industry, and they are not governed by multibillion dollar companies. Linux, postgres, HTTP, TCP/IP, qemu,…

It’s not that anthropic/google/openai/etc are unavoidable


> they are not governed by multibillion dollar companies

Every tech you mentioned is absolutely governed by multibillion dollar companies. Something like 75-85% of OSS code is contributed by employees doing their day job. Most Linux and Postgres contributions come from those same employees. HTTP and TCP/IP are managed by standard bodies and industry working groups that, you guessed it, are governed by multibillion dollar companies. Red Hat and IBM are responsible for 40-60% of contributions to Qemu.


The way I understood op is that we don't necessarily have to pay to use linux or postgres (when self hosting, for example). But we have to pay to use claude code... which sucks big time (also, open source models are behind private models)

The usual model for OSS projects is that initially they are written for free. Then an inner circle forms and exploits the second generation of idealists who write entire large features without ever getting the same rights.

Some of the inner circle move to corporations to increase their power and are joined by corporate developers (sometimes their bosses) to take over the project.

A lot of corporate OSS development is entirely unnecessary rewrites or simple things like release management. So I'd put the amount of useful code by employees much lower.

But governed, hell yeah, I agree. The corporations crack the whip and oppress real contributors.


[flagged]


Don't make accounts just to add comments for a specific thread, you will get flagged.

"ok guys, that's enough progress since now it's my job at stake, we can stop."

So you argue we discriminate based on who/what wrote the code, instead of what's in it?

Let's take this to a different domain: self-driving cars. Would you equally argue for human driving? I'm pretty sure over time it will become clear to everyone that machines will be able to outperform humans consistently at this task, to the degree that human driving will become illegal. But for now the press likes to focus on any failure of machine driving, while taking for granted that human drivers are among the largest causes of premature death in many countries.

Coding (in many ways, but not all) is a more open ended and versatile task than driving, so it's natural that current iterations seem untrustworthy, but ignoring the trajectory is erring on conservatism, and doesn't seem to me to be grounded in any sound reasoning.


How could it possibly be open source if it requires proprietary models developed by a few companies to write the code?

Seems like that would make open source entirely controlled by open ai, anthropic et al.


Open source and open weight models are already really good. I don't think anyone really depends on the big AI companies anymore; if they go away, the open source models seem to be already sufficiently good to take the torch, and they will continue to improve thanks to research. They may require money to train, but the cost of that is already covered quite well, and if these models became the mainstream way to use AI, more money from governments and research institutions would be poured into them.

That is actually a very plausible scenario!


It isn’t really slop anymore and it will keep improving.

He works at Anthropic; he has unlimited tokens. He can do anything; he is using Mythos.

I think such re-implementations will be a huge asset to the process of software development in the future.

What's your point?

To demonstrate that engineers may not be as skilled and knowledgeable as they appear. Making such a comment and then turning around and making an announcement days later indicates that the engineers are not skilled in the tools they're using, or possibly even in the domain they're working in.

The quote doesn’t provide warrant for this claim. The developer did a great job investigating the applicability of a new tool and it appears the investigation yielded fruit.

Your kind of negativity is pathological.


[flagged]


What are you even talking about?

I totally disagree with this! I think it's very important for experts to be able to adapt their opinions based on evidence.

Sure, but if you're an expert you're probably finishing your project and collecting results, not sprinting to an online thread to evangelize for LLMs with partial results. That sounds amateur to me.

He's tweeting his experiences. Calling this "sprinting" and "evangelizing" is just rhetoric. Posting about a project you're working on is hardly amateurish.

Ugh, I really find this sort of thing frustrating. I like people developing, and testing, and ideating, and exploring in public!

This is one of my problems with academia: people only sharing results when they're positive and complete. I want to hear about what people tried that didn't work, and see the string of failures. People are already inclined to avoid sharing their work out of concern that they'll be judged--let's not encourage that behavior, please.


Being an expert software developer - which Jarred Sumner indisputably is, having created Bun - doesn't automatically make you an expert on predicting the improvements in software development performance that LLMs enable. All of us - experts and amateurs alike - are in the process of figuring that out, in real time, around the world, right now.

Underestimating how quickly a non-trivial project will come together is an almost unheard of phenomenon. It used to invariably be the other way around, to the point that there are laws about it, like Hofstadter's Law, which says that projects always take longer than anticipated, even when accounting for the law itself. Or Fred Brooks' work, which puts limits on how much the development of software projects can be sped up.

The sane takeaway here is that if what's being reported is true (keeping in mind it's coming from a newly minted Anthropic employee), it implies an astonishing, unheard of improvement in software development speed, at least for certain kinds of tasks, enabled by LLMs.

To somehow twist that into "experts may not be as skilled and knowledgeable as they appear" or "not skilled in the tools they’re using" makes me think of the Charles Babbage quote, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such [an opinion]."


Very impressive that they could do this so quickly, because I have been on a similar project (porting TypeScript to Rust) for 5 months. But I guess I don't have access to Mythos and unlimited tokens. I'm also close to a 100% pass rate: 99.6% at the time of writing.

https://tsz.dev

Rust is perfect for writing all of your code using an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.

Also want to note that writing the code using an LLM doesn't remove the need to have a vision for the design and the tradeoffs you make as you build a project. So Jarred and his team are the right kind of people to be able to leverage LLMs to write huge amounts of code.
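A minimal illustration of the feedback loop (a hypothetical snippet, not from the Bun codebase): Rust refuses to compile the kind of silent type mix-up a dynamic language would happily run, so the model gets an immediate, machine-readable error to iterate on instead of a latent runtime bug.

    // Returning `timeout_secs * 1000` directly is rejected with
    // error[E0308]: mismatched types (u32 vs u64), so the agent gets an
    // immediate, precise error instead of a latent runtime bug.
    fn total_ms(timeout_secs: u32) -> u64 {
        u64::from(timeout_secs) * 1000
    }

    fn main() {
        assert_eq!(total_ms(3), 3_000);
    }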


> Rust is perfect for writing all of your code using an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.

I question this. Yes, strong enforcement of invariants at compile time helps the LLM generate functional code, since it gets rapid feedback and can retrace, as opposed to generating buggy code that fails at runtime in edge cases.

On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code. If the initial architecture is bad or lacking, growing the code base incrementally as LLMs typically do will tend towards spaghettification. So I fear a program that compiles and even runs ok, but no longer human readable or maintainable.


> Rust is a complex language prone to refactoring avalanches

This may be so, but LLMs are great at slogging through such tedious repercussions.

I would say if the language prevents sloppy intermediate states, that actually makes it more amenable to AI; if you just half-ass a refactor into a conceptually inconsistent state, it’s possible for bad tests to fail to catch it in Python, say. But if many such incomplete states are just forbidden, then the compiler errors provide a clean objective function that the LLM can keep iterating on.


This is true in my experience as well. I'd even say it's the most common failure mode of current AI! It "fixes" some problem locally and declares victory, but it doesn't fully address the consequences of the change everywhere, and then the codebase is inconsistent.

I've seen Claude address the consequences of a change in a way that honestly was more comprehensive than I would be capable of. But I still agree that sometimes it misses the mark. I think that may be due to "adaptive effort", which Claude now uses by default.

> On the other hand, Rust is a complex language prone to refactoring avalanches, where a small change in a component forces refactoring distant code.

Are you saying this out of personal experience or just hypothesizing? I am working on a large, complex rust project with Claude Code and do not experience this at all.


It can happen like this:

- write sleek operator-overloading-based code for simple mathematical operations on your custom pet algebra

- decide that you want to turn it into an autograd library [0]

- realise that you now need either `RefCell` for interior mutability, or arenas to save the computation graph and local gradients

- realise that `RefCell` puts borrow checks on the runtime path and can panic if you get aliasing wrong

- realise that plain arenas cannot use your sleek operator-overloaded expressions, since `a + b` has no access to the arena, so you need to rewrite them as `tape.sum(node_a, node_b)`

- cry

This was my introduction to why you kinda need to know what you will end up building with Rust, or suffer the cascade refactors. In Python, for example, this issue mostly wouldn't happen, since objects are already reference-like, so the tape/graph can stay implicit and you just chug along.

I still prefer Rust; it's just that these refactor cascades will happen. But they are mechanically doable, because you just need to 'break' one type and let an LLM correct the fallout errors surfaced by the compiler until you reach a consistent new ownership model. I suppose this is common enough that LLMs have seen it done hundreds of times, haha.
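A minimal sketch of that last step (names like `Tape` and `NodeId` are mine, not from any particular library): the operator-overloaded style has nowhere to thread the arena through, so the API has to change shape.

    // Before: sleek operator overloading, but `a + b` can't reach a tape.
    // After: every operation goes through the arena that owns the graph.
    #[derive(Clone, Copy)]
    struct NodeId(usize);

    struct Node { value: f64, parents: Vec<(NodeId, f64)> } // (parent, local grad)

    struct Tape { nodes: Vec<Node> }

    impl Tape {
        fn leaf(&mut self, value: f64) -> NodeId {
            self.nodes.push(Node { value, parents: vec![] });
            NodeId(self.nodes.len() - 1)
        }
        fn sum(&mut self, a: NodeId, b: NodeId) -> NodeId {
            let value = self.nodes[a.0].value + self.nodes[b.0].value;
            self.nodes.push(Node { value, parents: vec![(a, 1.0), (b, 1.0)] });
            NodeId(self.nodes.len() - 1)
        }
    }

    fn main() {
        let mut tape = Tape { nodes: vec![] };
        let a = tape.leaf(2.0);
        let b = tape.leaf(3.0);
        let c = tape.sum(a, b); // was once just `a + b`
        assert_eq!(tape.nodes[c.0].value, 5.0);
    }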

[0] https://github.com/karpathy/micrograd


Yeah, totally unrelated to LLMs, this has definitely happened to me when changing core types or interfaces in our large rust codebase.

However, a) I think the compiler telling me everywhere I need to fix it is great, and b) even before LLMs, using compile mode with emacs and setting up a macro to jump to next clippy warning, jump to code, fix, and then repeating in batches of like 20-50 could often make it go quite fast.


You can still use the fancy operators for readability, just use a macro to translate them into the actual code. Very common pattern in non-trivial Rust libraries.

This post has some good examples of this sort of problem: https://loglog.games/blog/leaving-rust-gamedev/

That link reads like an autobiography about his love affair with Rust and the subsequent breakup after pushing the relationship a step too far: into gaming. He has been using Rust much, much longer than me, but I reckon I've already hit most of the pain points he mentions. (And I notice he left some things out, like async.)

I've come away feeling that most of it looks fixable, but it won't be fixed in Rust. Some of the language choices (like favouring monomorphization to the point of making DLLs near impossible) can't realistically be undone now, and in other cases where a fix is conceivable (like async) it won't happen because the community is too invested in the current solution.

So we are stuck with the Rust we have, warts and all. That blog post convinced me those warts mean the language should be avoided for game development. Similarly, the SQLite developers convinced me that the current state of Rust tooling made it a poor fit for their style of high-reliability coding, so they are sticking with C. Which is a downright perverse outcome.

But for most of us C programmers who aren't willing to put in the huge effort Sqlite does to get the reliability up, Rust is the only game in town right now. It's the first and currently only language to implement a usable formal proof checker that eliminates most of the serious footguns in C and C++. But I am now hoping it becomes a victim of the old engineering adage: plan to throw the first one away, because you will anyway.


I also work on a large complex rust project (>1M LOC) with extensive use of Claude Code. It is very consistent with my experience. Claude frequently subverts the obvious intent of the system - whether that's expressed in comments or types - in the pursuit of "making the build green", as it so often puts it. It, like many junior engineers, has completely failed to internalize the lesson that type errors are useful information and not a bad thing to make go away as soon as possible. It is remarkably capable, but you cannot trust it to have good taste.

It's very easy to just instruct the LLM to build using isolated crates, to maintain boundaries, focus on "ports and adapters", etc, and not run into this - in my experience.

I haven't had any issues with this getting out of hand on >10KLOC vibed rust codebases.


Of the languages I know, Rust is the only one where I can look at multi-threaded code and understand it. This stuff being checked by the compiler is a huge advantage.

I've only used Rust for fun maths projects crunching billions of numbers (otherwise Python is easier for me), but I have to say rayon is the most amazing parallel-programming experience I've ever had!
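For anyone who hasn't tried it, the whole pitch fits in a few lines (assuming rayon as a dependency): swap the serial iterator for a parallel one and rayon handles the work-stealing thread pool.

    use rayon::prelude::*;

    fn main() {
        // The only change from serial code is `into_par_iter`; rayon
        // spreads the work across a work-stealing thread pool.
        let sum: u64 = (0..1_000_000_000u64)
            .into_par_iter()
            .map(|n| n % 97) // stand-in for real per-element work
            .sum();
        println!("{sum}");
    }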

> I haven't had any issues with this getting out of hand on >10KLOC vibed rust codebases.

This rewrite is >750k lines of Rust


I don't see any reason why the approach wouldn't hold just fine, if not better, as the codebase scaled. Indeed this appears to be exactly what the author has done, they mention that they made heavy use of crates.

When Microsoft rewrote the TypeScript compiler in Go, there was a comment from one of the leads that they chose it over Rust because of the similarity in paradigms (garbage collection, etc.), and that using Rust would've been more difficult, requiring a lot of "hoop jumping". Now that you've done it... thoughts?

Yes indeed. More than 1 million lines of code (including tests) is jumping lots of hoops but with LLMs it's not as painful so you can just ask it to do the hard things.

Example of a Claude Code session after 2 hours of "Crunching" that came out without results: https://github.com/mohsen1/tsz/pull/4868 (Edit: I force-pushed to the PR to solve the problem; you can see the initial refusal message in the first version of the PR description.)

Funny thing is, the last percent of the tests has been so hard to work on that Opus 4.7 routinely bails and says "it's too involved or complicated", so I had to add prompts specifically asking it not to bail.


You should try GPT, I’d be really interested to hear if it works better. (Exclusively using GPT for systems work at $DAYJOB, but compare with opus every couple weeks and GPT consistently gives me better results)

I've been comparing Claude against Codex (using GPT), and Claude is consistently better than GPT at reasoning, at writing code, and at using tools appropriately.

GPT for instance had a lot of issues using git worktrees, and didn't understand how to correctly use it to then merge stuff back into a main branch, vs Claude which seems to do this much more naturally.

GPT also left me with broken tests/code that I had to iterate on manually, Claude is much better about reasoning through code. Primarily Python.


> GPT for instance had a lot of issues using git worktrees, and didn't understand how to correctly use it to then merge stuff back into a main branch, vs Claude which seems to do this much more naturally.

I wonder how much of that is due to the model being somehow better, or the harness having built-in instructions on how to use them.

I've used worktrees with Codex just fine, but I instructed it to use my scripts for setting it up and tearing it down. The scripts also reflinked existing compilation artifacts to speed up compiling and allocated a fresh db instance for it, but then also applied a simple protocol for locking the master repository during merges, so multiple agents wouldn't try to merge at the same time. It has been following those instructions quite well.


OpenAI gave me that 10x boost and I've already used it all up for this week. I'm guessing the last 50 tests are only doable by GPT 5.5 xhigh.

Do you have any write ups on your workflow with Claude and github dev?

That might be Opus 4.7 behaviour, because I've also been getting that all the time in the past few weeks. Also a complex code base, but likely an order of magnitude simpler than yours.

They mentioned that they wanted to port their compiler over to retain existing behavior (vs a re-write) and Rust has a hard time with their cyclic data structures.

Is GC useful for a static type checker? Or did they make a new runtime?

The point is that having a GC will affect your data structure and algorithm design, so it’s easier to automatically transform JS or TS to Go than to rust because you’re mostly reducing things down to one problem (translation) rather than multiple intertwined problems.

The TypeScript compiler is a CLI tool and runs for short periods of time. GC pauses and memory leaks should be the least of the issues to look out for.

Same, but for multi-threaded Postgres[0]. 96% of the pg regression tests pass after 1 month and 823K LOC. 8 Codex accounts at $200/mo is what I could use up with no Mythos.

I've also seen the benefits of Rust for this. And I'm making the bet that my pg experience will help me make good design choices around many of the things people have had trouble with in pg for a long time[1]. Excited to see AI make it more possible to improve complex pieces of software than has historically been practical.

[0] https://github.com/malisper/pgrust [1] https://malisper.me/the-four-horsemen-behind-thousands-of-po...


Very cool! If you have extra tokens lying around, ask the agent to try to break things and open GitHub issues. This is what I do for tsz, and beyond the conformance tests I can see it finding very good bugs.

$1600/mo; there is now a token-rich class.

> PostgreSQL, rewritten from scratch in Rust.

You use the test suite and LLMs are trained on Postgres.

Are you at Freshpaint? A company that "helps healthcare marketing teams grow in a world where privacy is the baseline, but performance is the goal."

Nice promises! Surely the marketing teams will respect privacy!


96% tests passing sounds impressive, but I remember that C compiler that had similar (or better) stats yet was still hilariously broken because the test suite didn't cover many "obvious" things that a human wouldn't get wrong even without the tests.

There are a few big differences between the Anthropic C compiler and pgrust. The C compiler was built mostly autonomously, as a clean-room implementation. OTOH I'm steering Codex and using the Postgres source code as a reference. That's leading to the implementation being based more on how pg does things than anything else. I compiled it to wasm so you can try it out here[0]. You'll see it's much more faithful to Postgres than a C compiler that doesn't handle type checks.

[0] https://pgrust.com/


wow!

curious about your workflow for running all these accounts. different harnesses in parallel? manually switching in codex? 5.5pro only?

what works for you?


I wrote up a bit about my workflow here[0][1]. I'm using conductor.build to manage multiple codex sessions at once. When I hit the rate limit, I'm using codex-auth[2] to switch codex accounts.

[0] https://malisper.me/pgrust-rebuilding-postgres-in-rust-with-... [1] https://malisper.me/pgrust-update-at-67-postgres-compatibili... [2] https://github.com/loongphy/codex-auth


Rust is amazing, but the way I want to build Rust software breaks down on large projects with LLMs. Maintaining clean boundaries or even just establishing them stops being a flow state and turns into painful reviews that push me into procrastination mode.

I’ve struggled to get Opus to not write the weirdest possible Rust, ignoring all idioms and so on. Any tips?

Be absolutely ruthless with technical debt. Opus is perfectly capable of producing idiomatic code in any mainstream language you please, but will seize on any opportunity to justify writing basically-python instead because that's "consistent" with the "convention". Deprive it of that excuse.

Give it coding guidelines. It'll largely try to do what you ask.

Left to itself, it often follows human developers who conceive of their goal as "get the program working, the end justifies the means." Which makes sense because there are a lot of systems like that in the training corpus.
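Concretely, a few lines of guidelines go a long way. A hypothetical excerpt (my wording, not any canonical template):

    ## Rust style
    - Prefer iterators and pattern matching over index loops and flags.
    - No `unwrap()`/`expect()` outside tests; propagate errors with `?`.
    - Don't silence the compiler: fix the type error rather than
      `clone()`-ing or `#[allow(...)]`-ing your way around it.
    - Run `cargo clippy -- -D warnings` before calling a task done.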


Wow, amazing work.

Pretty impressive that it is faster than the Go version already.


Thank you!

It's much faster in single file benchmarks (3 to 5x)

https://tsz.dev/benchmarks/micro

I have optimizations planned for large projects that I'm still fleshing out.


Regarding the architecture documentation you have up on tsz.dev, one thing that jumped out at me was the use of per-node typed side pools. A semi-recent talk[0] benchmarked this and found it to be a deoptimisation: the speaker couldn't explain it, but an audience member suggested it is likely because an AST is not generally very type-homogeneous in its visit order. After a CallExpr node, the next node to visit is probably not a CallExpr but more likely an Identifier etc., so storing the node "extra data" in separate pools makes them more likely to be cold in cache rather than hot.

In the Nova JavaScript engine[1] I've done exactly as you've done and split objects into typed side pools (I call them "(typed) heap vectors"), but in a JavaScript engine my _hypothesis_ is that the visitation patterns are much more amenable to this: an Array, Set, or Map is more likely to be homogeneous than heterogeneous, and therefore a loop over the contents is likely going to hit the same side pool for each entry.
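For concreteness, the layout we're both describing is roughly this shape (illustrative names, not from either codebase):

    // Instead of one Vec of fat enum nodes, each node kind gets its own
    // dense pool, and nodes point at each other with typed ids.
    struct CallExpr { callee: NodeId, args: Vec<NodeId> }
    struct Ident { name: String }

    #[derive(Clone, Copy)]
    enum NodeId { Call(u32), Ident(u32) }

    #[derive(Default)]
    struct Ast {
        calls: Vec<CallExpr>, // side pool: all CallExprs, densely packed
        idents: Vec<Ident>,   // side pool: all Idents
    }

    impl Ast {
        fn ident(&mut self, name: &str) -> NodeId {
            self.idents.push(Ident { name: name.to_string() });
            NodeId::Ident(self.idents.len() as u32 - 1)
        }
        fn call(&mut self, callee: NodeId) -> NodeId {
            self.calls.push(CallExpr { callee, args: vec![] });
            NodeId::Call(self.calls.len() as u32 - 1)
        }
    }

    fn main() {
        let mut ast = Ast::default();
        let f = ast.ident("f");
        let _expr = ast.call(f); // a walk now alternates between pools
    }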

[0]: https://www.youtube.com/watch?v=s_1OG9GwyOw [1]: https://trynova.dev/


That talk is very interesting. Thanks for sharing. I'll watch it later.

Now that most of the implementation is nearly complete, I'm building a lot of instrumentation to get better visibility into those things, so I can give concrete answers to that sort of question. I'm seeing huge RSS right now that could be exactly what you're pointing to.


Zig is much more type-aligned to Bun than TypeScript is. And there's a common interface of C FFI, so you could imagine porting it modularly and keeping the test suite in Zig.

>Rust is perfect for writing all of your code using an LLM. Its strict type system makes it less likely to make very dumb mistakes that other languages might allow.

100%. I've been telling everyone who will listen this for 2 years. LLMs are infinitely more productive with Swift code like

    let engineCycleCount: Int = 5

vs

    let eC = 5

They still make mistakes, but forcing _explicit_ typing in a strongly typed language makes them make far fewer mistakes, plus the compiler catches >90% of what you try to catch with a billion RSpec tests in trash languages like Ruby.


Shouldn't typed code that uses a functional style be kind of the perfect endgame for LLMs? You can parallelize generation at any granularity, easily ring-fence changes, and reproduce everything, and the types give clues to the LLM.

>Rust is perfect for writing all of your code using an LLM.

Rust is a terrible language for using LLMs to write code if Rust's low latency isn't needed, because of its extreme compile times. LLMs code faster than humans so a far bigger fraction of the time is spent waiting for the compiler, and a reasonably sized project will take literally 10x longer to compile in Rust than in e.g. Zig or Go.


[flagged]


> How do we know it is true?

The branch is open.

You can check it out and run the tests if you don’t believe it.


Zig isn’t so much on the blacklist because of the culture it carries from its maintainers, but because the ecosystem is no longer easily composed with other GitHub projects/GitHub Actions.

> We are dealing with a company of habitual liars and promoters.

Any sources to back this up?


I just want to comment that I think it's a good change if we look past the AI involvement.

Bun has had an extremely high amount of crashes/memory bugs due to them using Zig, unlike Deno which is Rust.

Of course, if Bun's Rust port has tons of `unsafe`, it won't magically solve them all, but it'll still get better


> Bun has had an extremely high amount of crashes/memory bugs

Any stats/source? Not that I think it's false

> and the ugly parts look uglier (unsafe) which encourages refactoring.

Looks like Bun owes that to itself to some extent, not solely because of the language



You want a better source than the actual author of Bun?

Authors can't exaggerate? Maybe some actual numbers can convince people.

Here: https://github.com/oven-sh/bun/issues?q=is%3Aissue%20%22Segm...

Around 2500 issues with segmentation fault.


As compared with 41 for deno

https://github.com/denoland/deno/issues?q=is%3Aissue%20%22Se...

With the total number of issues being 16,458 for bun and 14,259 for deno.


The cool thing is the author doesn't actually have to convince anyone

I believe the author is the creator of Bun.

Is he working for Anthropic now?

Anthropic bought Bun recently.

That's exactly my point. The fact they created Bun does not mean they will do what's better for Bun and the people using it.

FTA:

> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

Not a hard number obviously but a clear indication those issues exist.


I don’t understand: just use an agent to find all memory leaks and segfaults. I don’t get the argument if you are gonna vibe code anyway.

With unlimited tokens make it a lint rule or auto formatter.


LLMs are a force multiplier, not magic. They benefit from good tooling.

If you look at the percentage of segfault errors in each repo, Bun had a much larger share. Although don't quote me on that.
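Using the issue counts quoted upthread:

    Bun:  2,500 / 16,458 ≈ 15% of issues mention "segmentation fault"
    Deno:    41 / 14,259 ≈ 0.3%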

> Bun has had an extremely high amount of crashes/memory bugs due to them using Zig

This just sounds like they are not good at using Zig. I have been daily-driving ghostty on Linux for a fairly long time now and I have never seen these kinds of issues. I have also used ghostty on macOS for a bit and didn't have any problems there either.

Zig is really good for writing stable and reliable code. There is also a database written in Zig that seems to be fairly successful [0].

I also wrote Zig for some time, and the compiler/toolchain was really pleasant to use. I caused more segfaults in Rust FFI code than I did in Zig in total, across all the time I was writing Zig.

[0] https://tigerbeetle.com/


Ah, yes, the "you're holding it wrong" defense. If one tool has a higher safety rating than another, significantly so, preventing entire classes of mistakes from happening that the other does not, in a kind of superset manner - even the most skilled craftsman will inevitably make mistakes that would have been prevented by the safer tool.

It's very possible to write dogshit rust code.

Yes, but you really have to go out of your way to segfault or run afoul of memory-safety issues, which are a pain in the ass to debug.

I think the main problem with Bun is that they are trying to move very quickly.

TigerBeetle devs spend 90% of their time working on stability, safety, tests, and so on. They don't need new features; they need reliable software. Their database is pretty simple in terms of features, and their goal was always stability and speed. Bun devs spend the majority of their time adding new features.


Maybe they should put more consideration towards quality if they have a ton of memory issues.

That's what they're doing.

> This just sounds like they are not good at using Zig.

That's odd, because given the visibility of team Bun using the language, one would think they could get whatever help and guidance they asked for. It seems weird for team Bun to complain about crashes, leaks, and bugs if they could have what they are doing wrong explained to them, or their issues fixed in a timely manner.


Not sure if ghostty is the best example https://mitchellh.com/writing/ghostty-memory-leak-fix

Last time I checked their issue tracker (in 2025), the main source of problems was the engine, not their Zig code. A lot of core dumps were happening inside and around JSC.

I remember back in the day we used to blame the user and not the tool, but I guess we changed that notion when it comes to tool vs tool comparisons LOL

> Of course, if Bun's Rust port has tons of `unsafe`, it won't magically solve them all, but it'll still get better

You get very few of the Rust guarantees when you litter your code with unsafe to get around the safety checks (which is what they're doing here). I would not recommend running this in production.


Yes, liberal unsafe code arguably makes Rust worse than writing in a presumed-unsafe language.

From what I understand, rust "unsafe" is actually pretty damn safe compared to an actually memory unsafe language.

Not really. In fact unsafe Rust is widely considered to be significantly harder to get right than C or Zig.

E.g. see https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/ or https://lucumr.pocoo.org/2022/1/30/unsafe-rust/

The saving grace is that in idiomatic Rust code you have very little unsafe code - usually none. But yeah there's no way I would trust AI to get unsafe Rust right.


And they're clearly marked as `unsafe`, so easy to find, which gives them a nice list of issues to address.

Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool? There is a lot of quality stuff made with C/C++, so what is Zig doing wrong?

> Is your claim that using Zig ends in an "extremely high amount of crashes/memory bugs?" Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?

What caused you to hallucinate such a broad blanket statement? The point is the memory unsafety issues they ran into would be categorically impossible in safe Rust, which is why they're doing this in the first place.


It's not hallucination, it's a basic extrapolation. "Bun has had an extremely high amount of crashes/memory bugs due to them using Zig" is the same statement as "using Zig resulted in Bun having an extremely high amount of crashes/memory bugs". It is then natural to ask whether their position is "using Zig results in an extremely high amount of crashes/bugs" in general.

That's a hell of a lot more than "basic extrapolation." You're misrepresenting the original claim to fight against one that's trivially easy to dispute. "Bun has had an extremely high amount of crashes/memory bugs due to them using Zig" (which unlike Rust, doesn't prevent you from writing them) is a completely different statement than your "using Zig results in an extremely high amount of crashes/bugs." Implying that such a generalization was even on the table is insulting.

Yes, obviously you can write high-quality software in Zig. But does Zig categorically reject the kind of bugs Bun was suffering from? Rust does.


The point is that the "extremely high amount of crashes/bugs" is maybe not the fault of Zig after all, as was implied.

How software behaves is very obviously downstream of the tools (in this case programming language) used to build it.

"Downstream of" is doing a lot of work in that sentence. Language has an effect on, but in no way determines, the reliability of software written in it.

Downstream doesn't imply determinism.

The original claim is one of determinism. Your use of the term "downstream" is hiding the distinction; it can be read in either way, so it bridges the gap between the position you want to defend ("using Zig causes a higher probability of memory bugs") and the position you're forced to defend ("using Zig results in extremely many memory bugs").

In short, I'm accusing you of doing a motte-and-bailey.


It's generalizing from Bun (which might be especially tricky code) to other software that might not have the similar issues. There are lots of different kinds of software.

Even assuming that's a correct interpretation, is "using C/C++ results in having an extremely high amount of crashes/memory bugs" not true?

No, that's provably false by a fairly simple existence proof. If it was true that using C results in an "extremely high amount of crashes/memory bugs", we would expect to not find any substantial pieces of software written in C without an "extremely high amount of crashes/memory bugs". Now where exactly you draw that line is necessarily going to be somewhat arbitrary, but by any definition, I think we can all agree that SQLite does not fit that description. Yet SQLite is written in C. Therefore, we conclude that the statement must be false. QED.

Now C does have some aspects which make it more prone to crashes and memory bugs. The less strong statement of "using C results in a higher propensity for crashes/memory bugs than Rust" is absolutely true, I would argue. And both C++ and Rust inherit some (but not all, and not the same) of the aspects which make C prone to memory bugs. (So does Go, I would argue, but less than C++ and Zig.)


Bah waking up today to notice a typo, after the edit window. "And both C++ and Rust inherit some ... aspects" was of course meant to be "And both C++ and Zig inherit some ... aspects".

>I think we can all agree that SQLite does not fit that description

One of the reasons WebSQL died was due to how many memory bug related vulnerabilities SQLite had.


You know, I try to ask questions rather than making assertions in order to better my chances at provoking useful thought and conversation.

It is basically Modula-2 / Object Pascal with C like syntax.

While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.

C++ already prevents many of those scenarios, at least for those folks who don't use it as a plain Better C, and who actually make use of the standard library in hardened mode. When not, it is naturally as bad as C.

Also of note: the tools that Zig offers to prevent that are also available in C and C++, but people have to actually use them; e.g. I was using Purify back in the 2000s.

Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.


> Then there is the whole point that Zig is not yet 1.0, and who knows what will still change until then.

Seems like their luck finally ran out. For the longest time, they were getting all kinds of passes, as if it were a post-1.0 language, that other languages don't get. Ten years is quite a long time not to hit 1.0, or to still be making breaking beta changes. Though I think that luck was significantly aided by their perpetual and odd HN boosting.

> While bounds checking, improved argument passing, typed pointers, proper strings and arrays are an improvement over C, it still suffers from use after free cases.

While Zig is a somewhat safer and more modern C alternative, safety was arguably never its main selling point. Plenty of other C-alternative languages are equally safe or safer. Dlang and Vlang, both now having optional GCs and ownership, are examples.


Yeah, pity that D somehow lost its adoption opportunity.

Now you can get most of it via C# AOT or Swift, with much better ecosystem.

Still, it is part of the official GCC and LLVM frontends, so there is that.


Thank you for actually making the effort to respond to the curiosity in my question.

You would like the T3X language; as an exercise, try porting stuff from Free Pascal to it. In the near future I plan to port two libre text adventures with it, Beyond the Titanic and Supernova. If it fits under T3X, it might run on 'high end' CP/M systems out there.

https://t3x.org/t3x/0/index.html

https://t3x.org/t3x/0/t3xref.html

Beyond these simple Curses games, there's a 6502 assembler and disassembler, along with a KIM-1 simulator, micro Common Lisps, and whatnot.


Have to look into it, thanks.

Nice. A tip: there are 'modules' which are just helpers (strings, io) over the main functions.

Kinda like write vs printf in C, but easier to grasp. The cheatsheet will help you a lot.

Another thing: setting up the compiler might be cumbersome; I might post a guide soon. I am not the author, but making it compile well on some arches (OpenBSD/amd64) vs native code (FreeBSD, 32-bit Linux) can be odd... nothing complex once you've done the setup once.

My T3XDIR in the makefile and bin/ scripts is set to $HOME/t3x0/lib, and the bin PATH is set to $HOME/T3XDIR/bin in both the Unix env vars and the scripts. It's a 10-minute setup, but after that you will just run

        tx0 -c -s file 

        
(file actually being file.t) and get a binary. Cross-compiling for DOS or CP/M involves similar flags. And it's cool as hell: I translated Ladder into Spanish for some Spanish OpenBSD pubnix... and the same port will work in DOS too.

On Titanic/Supernova: well, it was a former TP game ported to FPC, it's not very complex, and tons of stuff could map 1:1 to T3X. The game might be too big for CP/M, but for DOS it would be ideal (even by using the T3X 'big' libraries).

The bundled cheatsheet (make will generate a cheatsheet.pdf file if you have groff) might help you. For instance, gotoxy can be written in T3X as con.move(x,y). You need to import the console library as:

         use console: con;
Also, the WYOP book from the same page comes with a good chunk of examples to play with in a ZIP file.

Have fun.


Eh, I made a typo. The PATH for tcvm and tx0 should be $HOME/t3x0/bin.

It is much harder to write quality stuff in C/C++ that doesn't have memory bugs (use after free, out of bounds access, use of uninitialized memory, double free, memory races, etc.). I wouldn't say it isn't feasible to build high quality software in those languages, but even the highest quality software written in those languages has these types of bugs. Zig is better than C, and maybe a little bit better than C++, especially with respect to spatial memory bugs, but it doesn't provide the same guarantees as Rust.

I use Clang, LLMs, the Zig compiler, Brave, Firefox, KDE, Linux, Steam, PC games, Neovim, ghostty, and more software written in C/C++/Zig, and I can't remember the last time I had a crash caused by memory issues.

KDE also includes many other programs inside it like music player, document reader etc. that I never had any issues with.


Based on what? I am not familiar with this language called "c/c++", but if you are writing Modern C++, you shouldn't be creating problems like "double free." It's really not that hard to avoid at all. This reminds me of how all the people carried on as if they were making the kernel so much safer, not realizing they needed to use unsafe Rust. I think so many people call themselves programmers now, but so few know very much about computing beyond whatever the latest fad web framework is up to.

This kind of argument is why security folks look down on C and C++ developers.

Because instead of discussing serious matters, they missed English grammar class on the use of / and then get up in arms about the use of "and, or".

Additionally, even code bases from companies that sit at WG21 lack the use of so-called Modern C++, without any language features or header files inherited from C.

Better C with some niceties keeps being the prevalent approach, unfortunately.

C strings, C arrays, pointer math, the printf family, C-style casts, macros instead of templates, no STL, and if not hardened ...


Sure if you restrict yourself to a subset of c++ that avoids the more unsafe features, you can avoid some of those problems, but not all of them. And IME, a lot of c++ in the wild still uses those unsafe features, especially when interfacing with c libraries. And even if you always use smart pointers and make sure you always initialize your variables there are still plenty of ways you can get undefined behavior in c++.

> This reminds me of how all the people carried on as if they were making the kernel so much safer not realizing they needed to use unsafe rust.

Those are not contradictory. Confining unsafe code to a few unsafe blocks makes it easier to identify areas that need closer scrutiny. Just because there are unsafe blocks doesn't mean that using rust in the kernel isn't making it safer.


The answer is that C (and by extension Zig, C++) code goes through a hardening process. New code in these languages tends to be unsafe. But bugs and vulnerabilities get squashed over time. Bun gets updated fast and so has a lot of new unsafe code.

The statement “there exists a project where zig led to an extremely high amount of crashes/memory bugs” does not imply “all zig projects have an extremely high amount of crashes/memory bugs”.

This is a classic logic problem - eg “there is an orange cat” doesn’t imply “all cats are orange”.


> There is a lot of quality stuff made with C/C++

There’s a lot of leaky crap written in those languages too. One of the core promises of Rust is that the compiler will catch memory issues other languages won’t experience until runtime. If Zig doesn’t offer something similar it’ll make Rust very compelling.


Zig is a love letter to C. It does not do much of anything to address memory management. Doesn't even have any concept of ownership like C++ does (ergo, no equivalent of unique_ptr / shared_ptr). All you get over C is the addition of defer, and even that isn't really that different if you're using GCC or Clang and thus have __attribute__((cleanup)).

This is a hot take, but programming languages haven't progressed since the 90's. We've been conditioned to believe that if you want to be a serious programmer, you have to either use C++-style RAII (which includes Rust), or garbage collection, and there's no in-between, and C programmers are dinosaurs who can be ignored.

Arena allocators are a great way to automatically manage memory allocations. You malloc a whole bunch of memory and release it all with a single free, which makes it much easier to reason about your program's memory safety.

Casey Muratori has a good video talking about this. https://www.youtube.com/watch?v=xt1KNDmOYqA
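A toy version of the idea in Rust terms (real arenas such as bumpalo bump-allocate out of large chunks; this index-based sketch just shows the "free everything at once" shape):

    // Toy index-based arena: everything is allocated into one backing
    // Vec and released in a single shot when the arena is dropped.
    struct Arena<T> { items: Vec<T> }

    impl<T> Arena<T> {
        fn new() -> Self { Arena { items: Vec::new() } }
        fn alloc(&mut self, value: T) -> usize {
            self.items.push(value);
            self.items.len() - 1 // handles are indices, not pointers
        }
        fn get(&self, id: usize) -> &T { &self.items[id] }
    }

    fn main() {
        let mut arena = Arena::new();
        let a = arena.alloc("hello".to_string());
        let b = arena.alloc("world".to_string());
        println!("{} {}", arena.get(a), arena.get(b));
    } // arena drops here: every allocation is freed at once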

And about Zig, you have an arena allocator out of the box: https://zig.guide/standard-library/allocators/ . And it's not just limited to that; you have debug allocators that detect memory leaks and give you stack traces where they occurred.

This isn't to say that Zig is great at everything. I think Rust is great for things like kernels, high-frequency trading systems, and authentication servers where memory safety and performance is paramount. But for things like video games, memory leaks and buffer overflows aren't that big of a deal, and Zig's "Good Enough" approach is great for those types of applications.


Arena allocators are not some grand new concept. They're already commonly used in C++ in the places it makes sense to use them. Which is really not that many places, it's a fast but rather niche optimization. There's not a whole lot of scenarios where lots of temporary memory is needed for one well defined scope.

Video games are large and have lots of state and lots of threads. Zig's lack of ownership here with fully manual memory management is overall a poor fit.


I disagree with a lot of what you said, but I don't feel authoritative enough to say you're wrong.

> Which is really not that many places, it's a fast but rather niche optimization. There's not a whole lot of scenarios where lots of temporary memory is needed for one well defined scope.

Arena allocators are not niche optimizations, nor something picked first for optimization. Contrary to what you said, arenas are useful for temporary allocations with a poorly defined intermediate scope or lifetime (think functions directly or indirectly called by the arena owner). If the scope is local and well-defined, a regular allocator or even a fixed buffer would do just fine.

> Zig's lack of ownership

Zig doesn't have explicit annotations for it, but the concept of ownership and lifetime doesn't go away. It's not enforced by the compiler, which is an intentional tradeoff to let the programmer have more control and freedom. When you use languages with manual memory management, it's expected that you are capable of designing sensible programs in such a way that ownership and lifetimes are tractable and are part of the program design, rather than something to workaround to please the compiler.


> Zig doesn't have explicit annotations for it, but the concept of ownership and lifetime doesn't go away. It's not enforced by the compiler, which is an intentional tradeoff to let the programmer have more control and freedom.

Right, it's exactly like C, and we kinda all know how that worked out in practice already...

Hence why I called Zig a "love letter to C". If all you want is C with a dash of zest, that's Zig. If you want a modern language that has learned from the many hard lessons the industry has dealt with over the years... well, Zig ain't it. Which is a perfectly fine thing for Zig to be, it doesn't have to be a good general purpose language. We have plenty of those already from Rust to Go to Java/C#/Kotlin to etc...

> arenas are useful for temporary allocations with poorly defined intermediate scope or lifetime (think functions directly or indirectly called by the arena owner).

Arenas are not good for that because the arena as a whole has to outlive all of those poorly defined scopes & lifetimes, which is hard to do. Especially if you later go add on something like an retry-with-backoff or asynchronous metrics/tracing or caching or whatever. Then suddenly you're either fighting use-after-frees or doing deep-copying of data.


Zig does in fact do some stuff to address memory management like making allocations more explicit using allocators and shipping with arenas.

C also has only explicit memory allocators...

rust does not promise leak safety.

True. But Rust does make it a lot harder to leak memory by accident. Rust values are automatically freed when they go out of scope. Ownership semantics mean the compiler knows when to free almost everything.
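The main accidental leak left is a reference cycle, which needs no unsafe at all. A minimal example:

    use std::cell::RefCell;
    use std::rc::Rc;

    struct Node {
        next: RefCell<Option<Rc<Node>>>,
    }

    fn main() {
        let a = Rc::new(Node { next: RefCell::new(None) });
        let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
        *a.next.borrow_mut() = Some(Rc::clone(&b));
        // a <-> b now form a cycle. When the locals drop, each strong
        // count falls to 1, never 0, so neither Node is freed: safe
        // code, real leak. (`Weak` exists precisely to break such cycles.)
    }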

> But rust does make it a lot harder to leak memory by accident. Rust variables are automatically freed when they go out of scope.

RAII has entered the chat.


> Wouldn't that mean that it isn't even feasible to make high-quality software with such a tool?

Plenty of other companies/entities are making high quality software in Zig: TigerBeetle, and Zig itself, for example.

Bun's entire history has been a kind of haphazard move-as-fast-as-you-can story, so...


It's feasible to write good software, but anything on the scale of millions of lines of code will have memory and pointer issues. I've worked in large C++ code bases with people much more experienced and skilled than I was, and every single one of them would tell you that at that scale, no matter how economical and simple your program is, you will produce memory bugs; the smartest person in the world makes errors holding that much stuff in their head.

They're difficult to find, difficult to reason about in big software and you'll always create some. Languages that rule that out are a huge improvement in terms of correctness.


This is correct, but people with too big of an ego, or affected too much by Dunning-Kruger, will try to say otherwise even when presented with ample evidence. Instead of a valid response you'll get "skill issue" from people who produce segfaulting code on a regular basis.

Can you or someone shed some light on how much compute it took to do this?

It took 6 days of work to do this. Even if it doesn't end up becoming meaningful, it shows just how tokens and work done will be linked now and in the future.

It's going to be hard to compete with someone or a company that has more compute. They will just be able to do things you can't.


Translating a project that includes a good test suite from one language to another is known to be a great case where LLMs work well.

When you’re starting with a complete codebase to use as an example and a test suite to check everything it’s much easier to iterate toward the desired goal. The LLM can already see what the goals are and how they’ve been implemented once already, which is a much easier problem than starting from a spec.


Sure, but given that, does it not seem like the conclusion is: if you have something that could in principle be reverse engineered by a competitor with more compute, they can and will steal it, because the only constraint is ROI?

It's a great case where Rust works well too. I won't cite every famous lib that got rewritten in Rust, but it wasn't all done with LLMs.

I fail to think of a successful Rust rewrite. So far, what I've seen is programmers who aren't sufficiently experienced deciding to pick Rust, rewriting something in it, and then (this is the bad part) claiming it's better for that reason alone. It never is. It's always worse, because rewrites fundamentally end up with a worse product at first.


It's not hard to imagine a future where the only things committed to git repos are tests and specs.

And maybe not even the tests. Just a specification for the tests.

I can see open source projects as just prompts as well.

The goal posts are always moving. This would have been an unthinkable task a couple years ago.

Even last year at this time people wouldn't believe it.

Unclear. Very good products tend to be about doing one or a few things very well, not about doing tons of stuff. So far, all I see is "Man, I'm a 10x engineer now!": shipping more code but without clear direction and taste. At this point, most LLM-based work is just noise.

It's a new era of capital, literally, in software development. Ownership of the means of production is now concentrated.

You could have said the same thing about steam power or electricity. And it’s not just an analogy: The magic of these things is in being universal information engines. You spend capital to build them, using well-understood, scalable techniques, plug them into electricity, and out comes value.

My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.


Electricity might be a good analogy - but for the other side of this argument.

In the US, (nearly) full electrification wasn't achieved until the late 1940s/early 1950s, a process of nearly a century. (A moment of personal trivia: my great-grandfather worked on crews electrifying rural areas of the Midwest.)


We already have SOTA local inference devices in everyone’s pocket, which also provide high bandwidth access to SOTA data center inference at what is rapidly becoming commodity pricing.

What comparable gap is there to bridge?


>My point is, there’s no chance of a “haves and have nots” emerging, any more than electricity turned out that way in the modern world.

Energy costs vary widely across the world, and that has enormous consequences for the economies of different countries and their industrial capacity.


https://worldpopulationreview.com/country-rankings/cost-of-e...

Electricity looks pretty even. Higher in Europe but they can afford that.


Due to purchasing power parity, it is actually much higher in poorer countries, in that they are absolutely still the have-nots.

When something is expensive specifically because a country is poor and everything is harder to buy, that expense isn't making inequality worse.

I am talking about have-nots at a nation scale here, at the level of the British Empire.

I'm not sure what that means. Every country has electricity and any country can get GPUs if it wants them.

(And the profit from selling GPUs isn't haves versus have nots, it's a couple companies versus the entire world.)


Nah. These agents are getting easier and easier to run locally. Have you tried Qwen 3.6 27b? It's insane what it can do compared to its size. Like, you can 100% vibe-code small projects if you manage context properly.

These models are a race to the bottom just like compute.


I don't think it matters. Local models becoming better has not stopped demand for SOTA models.

There are not many companies that live off taking a full test suite built over decades and just generating code from it.

I can't help but wonder what this cost in USD assuming you paid standard rates from Anthropic. Can someone even ballpark the price?

Much less than what it'd cost for a team of Rust engineers.

This is both amazing and scary; has been for a while now.


It costs several times what it would cost a small team of engineers, even assuming you gave the engineers more time to do it. I'm guessing (wildly) this was around 0.5M USD in compute time. You do get the result quicker, though.

> I'm guessing (wildly) this was around 0.5M USD in compute time.

That seems like an especially wild guess. If you take e.g. Opus 4.7 prices, and make the assumption that you are consuming roughly $30 for every million tokens of output (this comes from just summing the $25 per million tokens of output and $5 per million tokens of input and assuming that caching basically makes all that work out), and assume an output rate of 80 tokens per second (which seems like a high estimate based on online searching), it would take you about 2411 days of non-stop Opus 4.7 usage to hit 500k in API spend.

The only way you could possibly run that amount of usage in 6 days is if you were running ~400 instances in parallel. From personal experience, that seems crazy high for this project.

I think you are off by at least an order of magnitude (potentially even two, depending on how the person is managing agents; I could see something like dozens of agents running 24/7, so I'm less confident in two, but I think it's still more likely to be closer to $10-20k in API spend).
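
For anyone who wants to redo this arithmetic with their own numbers, the back-of-envelope boils down to the sketch below; every constant is one of the assumptions stated above (blended $/token, sustained output rate), not a measured figure.

  fn main() {
      let usd_per_mtok = 30.0; // assumed blended $ per 1M tokens (input + output + caching)
      let tokens_per_sec = 80.0; // assumed sustained output rate of a single agent
      let usd_per_agent_day = tokens_per_sec * 86_400.0 / 1_000_000.0 * usd_per_mtok;
      let agent_days_to_500k = 500_000.0 / usd_per_agent_day; // ~2411 agent-days
      let agents_for_6_days = agent_days_to_500k / 6.0; // ~402 agents in parallel
      println!("${usd_per_agent_day:.2}/agent-day, {agent_days_to_500k:.0} agent-days, {agents_for_6_days:.0} agents");
  }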


Half a million is pretty damn cheap for a full rewrite of a million-line codebase into Rust.

But usually companies are much more careful before even spending that half a million. (And most companies don't have that money sitting around.) They would do small PoCs, do comprehensive benchmarks and evaluations of those PoCs, and decide whether to actually go ahead, and, more importantly, stick to it.

Being able to afford half a million doesn't mean you do it on a whim, or just throw all of that away if things don't go well.

But what do I know. I am nothing compared to our AI overlords like Anthropic.


> They would do small PoCs, do comprehensive benchmarks and evaluations of those PoCs, and decide whether to actually go ahead

Perfect, $1mil in salaries to spare the company $500k in spend :)


10k lines ~$250 in OpenAI API calls (no plan)

45 million lines would get to ~$1.125 mil for the linux kernel.

950k lines for Bun would get to $23,750

use whatever math you like ofc.

Does an Anthropic employee pay that? No. Even if it's at a loss in terms of company revenue, it's worth burning the private capital for all kinds of other reasons.
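
The same extrapolation in code, for what it's worth; it assumes cost scales linearly from the single $250-per-10k-lines data point above, which is a big assumption.

  fn main() {
      let usd_per_line = 250.0 / 10_000.0; // one observed data point: $0.025 per line
      for (project, lines) in [("linux kernel", 45_000_000.0), ("bun", 950_000.0)] {
          // prints 1125000 and 23750, matching the figures above
          println!("{project}: ${}", lines * usd_per_line);
      }
  }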


With fewer employees...

Isn't it just one guy?

Exactly

This is exactly how Anthropic will market this rewrite towards companies thinking about doing more layoffs.

1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.


> 1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.

The entire bun team was only about a dozen people and they wrote it from scratch.

It would not take hundreds of engineers to port the existing codebase to another language.

I think this is a cool experiment, but some of these claims are getting absurd.


The saving grace here is a rewrite of a project with a good test suite is the sweet spot: LLMs are great at translation and do great with verifiable goals.

I agree it’s still mind blowing compared to before times, though.


> would have taken hundreds of engineers more than a year

This is estimating what, 10 lines per day each? No way translating code is anywhere near that slow.


It probably wouldn't take a single person who knew what they were doing more than a year to re-implement Bun in basically anything, by hand and from scratch, i.e. not even looking at source. Writing the code for something you already understand and have built before is incredibly fast.

I'm sure they'll market what you said, but it's so ridiculous that I would hope people would see through this stuff.


And he has zero idea how it works. His capacity for understanding it is tied to his wallet now.

It's actually tied to his employment at Anthropic.

> 1 person did a rust rewrite that took 6 days that would have taken hundreds of engineers more than a year to do.

Even cheaper would just be to not do it in the first place. Was there a pressing need to rewrite it?


The majority of Bun was written by one guy in less than a year. In what world would a rewrite take hundreds of engineers more than a year to do? The hyperbole is getting ridiculous.

Completely unbased, but I don't want to have anything to do with bun anymore. It's just a gut feeling, but I don't trust them or support them.

They forked Zig to utilize LLM rewrites and to build something the Zig team clearly rejected (non-deterministic compiling).

And now, like a whiny baby, they LLM-rewrite to Rust. There is a very real chance that Zig's design philosophy got them to the point where they are now by forcing them to make the tough but precise decisions, and that the Rust rewrite is the start of the downfall.

It's purely politics-based, not technical, but it seems like bun is fully pampered by Claude. So much so that I wouldn't be surprised if Anthropic's next marketing piece is: "Claude Mythos rewrote a leading 950k LOC JS runtime in Rust."


Who's the whiny baby? The developer writing some code in their own repo, or the guy complaining about it on Hacker News?

Yeah, I also noticed this irony: they accuse the rewrite of being political and not technical, while their whole comment is political, not technical.

I meant my comment not the rewrite

Ah, fair enough then; you may want to clarify that a bit, as it can be interpreted both ways. And the whiny-baby part seems a bit uncalled for and distracting from the point you're trying to make.

Don't give them too much credit: in other comments they were clearly referring to the developer's comments on Twitter about his technical motivations. He's just backtracking now due to your comment.

I meant the developer's motivation with "whiny baby", and I take the point that this was over the top and I could've found better words.

But I meant that my comment is "politics-based and not technical", because the gut feeling is more based on my reading of soft factors than it is from in-depth technical analysis of everything involved.


How can you be so blind? This is all a marketing campaign by Anthropic, no more, no less. The developers doing the rewrite have no voice at all in this game.

People never want to admit when they're used, especially when they're only used to make someone slightly more rich.

> You're posting valid criticism, therefore you're a crying baby

Yawn.


To me it reads much more as political polemic/signaling than any sort of valid critique

I'm team Zig in most cases but I genuinely think they are better off with Rust. They have had a lot of buffer overruns and segfaults as a result of undisciplined Zig code. I think Rust actually is a better technical choice for them.

[flagged]


I don't think that's going to save them. There are big problems and little problems. RAII plus ownership/borrowing solves some memory and file-handle issues. But the big problem, and this happened before the rewrite, is that they have ceded the system level, which locks the project into a local minimum.

It's not a "you're holding it wrong" problem; it's a "you fundamentally have no idea how your own program works past 1 or 2 levels of indentation in most places" problem. If the LLM says that something isn't possible, you just have to take it at its word.


> And now, like a whiny baby, they LLM-rewrite to Rust.

I didn't see any whining from Jarred; this seems like misplaced sentiment.

> It’s purely politics-based

The linked twitter thread gives clear technical justifications


Jarred's Twitter is a Claude Code billboard.

Whether incidentally or intentionally, that rings true.

> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues

There are legit reasons to rewrite a program in a better-fitting language, but for a runtime, being "tired of worrying about & spending lots of time fixing memory leaks and crashes and stability" is really borderline to me.

Also, there is way more to it than just compile times and tests: you reset the mental model and will lose contributors. There are philosophy, developer skill and more attached to a language.

In this case both compile via LLVM just the same, and there is no performance benefit given code written exactly the same way, so it's developer preference, where the current head seems to prioritize his own DX over everyone else's.

But again, this is mainly my gut feeling. I'm not the first dev that doesn't like the way bun is changing: https://news.ycombinator.com/item?id=48011184


"They" likely refers to Anthropic in this case rather than being an indeterminate singular pronoun:

https://bun.com/blog/bun-joins-anthropic

I'm not sure if the 50% of people defending the whole rewrite live under a rock with regard to the acquisition, have never worked at a US company, or are deliberately naive. Companies give instructions. None of this is accidental or prompted by curiosity.


It looks more political than technical. Also, criticizing the Zig team for not making any AI contributions before this gives a hint.

I agree. From the get-go, Bun's design philosophy was apparent: we do everything you'd ever want (runtime, bundler, test suite, package manager), all in a new breaking patch each week, with each and every one blowing the established competition away, better, faster and stronger. But it was glaringly obvious that they'd do anything but Keep It Simple, Stupid. It was obvious that the only production environments it would see the light of day in anytime soon would be YC startups burning, one after another, at the speed of an accelerant. Now, they're past the point of no return.

I consider the zig approach the "whiny baby" one, to be honest.

Skill issue.

> It’s purely politics-based not technical

Jarred mentioned having to work on fixing memory leaks as the main motivation to try this.

https://xcancel.com/jarredsumner/status/2053058171338682875#...

I was never fully comfortable with Zig given it's much less mature than Rust. Maybe this will be for the better.


I expressed a similar sentiment 4 days ago in the original discussion about this project, and HN for some reason did not like that I noted Rust has been used in production longer and far more widely than Zig, including in Firefox, CloudFlare's own reverse proxy, Discord, and many other massive-effort projects that affect millions if not billions of people.

People are seriously naive about corporate incentives. You think he'll go "Yeah, it being in Zig has put a wrench in our AI usage and that's not a good look now that we're with Anthropic"? No, he'll confirm everyone's biases instead - and it's working as well as expected on this crowd.

He is a puppet. Anthropic is making him a billionaire. No surprise no one here can notice the difference

I don't have the personal investment that you appear to have with Bun, but why does this matter? Do you scrutinize the rest of your dependencies this way?

Much of working in the JS / NPM ecosystem is already pure faith in unvetted dependencies, and this appears no different pre- or post-LLM rewrite. If it satisfies the intended goal and API contract it originally did, is there any difference? Were you carefully reading the original source code before?


> Do you scrutinize the rest of your dependencies this way?

You don't?


No, and I don’t really believe that you do to this degree either.

Enough to make judgement calls on them based on the individual Twitter posts of each of their developers? Absolutely not!

If I go beyond the initial vetting, that's a minimum of 30+ projects multiplied by however many contributors each. Without even mentioning all of their sub dependencies. It's a pipe dream to think you can ever have a complete picture of the motivations and political machinations of your entire dependency tree.


I have definitely dropped dependencies from production codebases in the past because "lead developer is widely known to be a clown". You don't need to catch everything but it's generally a good idea to have a picture of, like, the twenty most important dependencies in your codebase and the 90th percentile most notorious clowns in the community.

What is your definition of "known to be a clown"? I'm not sure how one would even begin to evaluate that at scale. Or what practical impact that would actually have for anything but the most critical of dependencies that might be too difficult to swap at will.

Hyperbole, yeah, but top 10% undesirable leads is literally thousands of people?

I couldn't imagine following the communities of even the top ten dependencies of one of our (many) projects very deeply. Every single one of them is having divisive conversations in threads like this all the time that never really lead anywhere or sum up to anything meaningful.


React router duo would be a great example of known clowns. I totally get what the other person means.

Most of us want to avoid the circus.


Sure, but you need to consider that in this case we are talking about the language runtime. It isn't just some other library dep; it's basically the base layer of the stack. It has a huge blast radius. It is, imo, a nontrivial decision to swap runtimes. If problems emerge you can't easily plug in some other runtime; that's a major technical decision and should be treated as such.

In the past at least you could assume the maintainers of the runtime had some kind of mental model of how it worked. In my view, with the way this rewrite has been approached, you can't assume that at all. It's good the test suite passes, but who knows how this will affect the evolution of the codebase? Do we even know if the code is good? How much is just slop? Tests do not test architecture. Is this new rewrite even going to be maintainable? How is the team going to get up to speed on a new codebase in a new language that the main author presumably doesn't even fully understand?

There are many reasons to be concerned. Treating this as no big deal would make me question one's ability to make assessments of technology. There's a world of difference between relying on gen AI heavily in products and leaf nodes of the stack, using it in a purely assistive way, and using it to drive a massive-scale rewrite of a base component in a language the maintaining team has an unproven amount of experience with.

From a reliability standpoint, the way this project was executed is completely preposterous, and it's very clearly a marketing stunt more than a sound technical decision on how to drive a project. It's not about the use of LLMs; it's about the stupid and blatantly obvious generation of cognitive debt, all to help sell Claude. I'd have way fewer qualms if they used LLMs to do a rewrite in a way that retained developer understanding (i.e. not driven by one person, and not in such a short timespan that having a robust mental model, even for that person, is highly unlikely).


You're implying that reckless rewrites within the JS ecosystem are a novel event, or more specifically that surprise language changes over a short period of time are. And yet... I can think of at least six times in which exactly this has happened and little fuss was made because the polarizing element of "AI" was not involved. Not just JS to Typescript, but to Dart, Go, C, Rust, Zig, Nim etc.

From any reasonable perspective, this is business as usual in the house of cards we all operate in. Perhaps the sensationalization would be justified if the lang migration wasn't one of less correct -> enforced correctness by default?

To your point in general about maintainers holding a mental model of the runtime: I would challenge that to say that it is very likely that there is no developer who holds a complete mental model of an entire runtime at any given point. As with anything of this scale you understand individual parts in their entirety and have general assertions about the rest until specifically revisited, even if you are the sole developer. In this case specifically, Bun has been largely AI driven for quite a while anyway so it is even more unlikely that the developers ever had a complete picture in the first place. If you trusted them before, then nothing has changed.

It's not lost on me that code logic can be subtly incorrect even as tests are passing either, but there isn't exactly a lot of grey area in this particular context. Does your code compile or not? If it builds as expected, then your own unit tests will highlight the difference.



Bun is effectively dead.

Anthropic bought it in a somewhat dumb attempt to solve their "performance" issues (not realizing their horrible code was the issue in the first place).

It probably helped them, simply because they brought in some actually competent developers.

But in doing so, Bun went from being a public project to more of an internal tool for Anthropic, spoiled for now with AI money and losing quite a bit of focus.

Let's hope that when the bubble pops, some of the Bun effort can at least be salvaged. I don't see Anthropic maintaining it long term; they are simply not in the business of selling support for a runtime, nor do they have the (Google) scale justifying maintaining one on the side.


Yep, the Anthropic acquisition, this petulant Rust rewrite, and bun's increasingly buggy releases (slop) have caused me to migrate my projects (personal and work) to nodejs+pnpm.

The risks of using bun are no longer just those concerns around a newer tech and "drop-in" replacement for nodejs. Now you have to marry Anthropic, Rust, and a founder with conflicting priorities.


Having read the comments from the actual engineer doing this rewrite, the only petulance I have seen is from those reacting so strongly to it.

just wait a year or two.

How exactly will waiting a year or two make this effort appear “characterized by impatience and grumpy annoyance”, as opposed to the people right now who are loudly bemoaning an engineer trying something out as an experiment?

I think a lot of people are taking this at face value; a lot of this was possible because of the beyond-standard, extensive and comprehensive test suite previously built.

It's still an impressive achievement that would have taken even the most competent engineers an exponentially longer time to accomplish.

I just hope it's noted when this is eventually marketed how much human effort went into designing and curating the test suite that even enabled this speed in the first place.

A test suite functions pretty much exactly like the ideal scenario for current-gen LLMs. A comprehensive enough test suite essentially forms the spec for agents to implement however they see fit, in this case Rust.

You could probably throw away the entire actual source code in certain cases and reimplement the whole thing from scratch just by giving an agent access to the tests, when they're as well crafted as in a project like bun.


If this is a "beyond standard" test suite, (so much so that it _uniquely_ makes this work possible compared to other projects,) then how is Bun also uniquely unstable compared to other Zig programs (and so deserving of rewrite?) If the blame lies partially with the test suite, what does this imply (if anything) about the Rust port?

Because tests validate behavior, not undefined behavior.

The thesis is that Rust makes undefined behavior less likely.
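
A toy illustration of the distinction (hypothetical code, not Bun's): the commented-out unsafe variant below reads one past the end of the buffer, which is undefined behavior that might still happen to return the right value and let a test pass, while the safe variant is either correct or panics deterministically.

  fn last_byte(buf: &[u8]) -> u8 {
      // unsafe { *buf.get_unchecked(buf.len()) } // off-by-one: UB, yet on a
      //                                          // lucky heap layout a test
      //                                          // could still pass
      buf[buf.len() - 1] // safe indexing: the same off-by-one would panic loudly
  }

  fn main() {
      assert_eq!(last_byte(b"bun"), b'n'); // the kind of check a test suite makes
  }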


Look what it can do in 6 days!

Ignore the hundreds of thousands of hours put into the original architecture and test suite that made it possible in the first place.


This is such a bad faith argument. How long would it take a dev or a team of devs to do this with the same architecture and test suite? A hell of a lot longer than 6 days..

But what is the purpose? When you rewrite a project in another language, it's so that engineers can maintain and further develop the project better on some metrics, thanks to advantages of the language. That doesn't hold when an LLM does the rewrite, since no one understands the code afterwards.

It's a good demonstration of capabilities, sure, but the result itself makes no sense. We'll have to figure out where these capabilities can bring real advantage.


> When you rewrite a project in another language, it's for engineers to be able to maintain and further develop the project better

I don't think that is the case here. Bun is pretty much using AI to write all of its code, with a human reviewing it. Zig exists as a language to provide a nicer DX than C or Rust, not to be memory safe. If you are using an LLM to generate the code, the DX benefits are removed, so why would you ever choose Zig over Rust?


This is such an insightful comment. It also underscores why these AI companies' marketing efforts are promoting rewrites.

I agree that the comment is insightful, but I don’t think AI companies are particularly promoting rewrites, other than that it’s a task LLMs are good at as “the code is the spec”.

The industry as a whole is still realizing that any LLM usage where it writes all the code for you causes cognitive debt, and that we're even slowly losing the skills of the craft.

I’m trying my best to navigate this myself, but no matter what we do, using LLMs is both a blessing and a curse.


why do you think no one understands the code after the LLM rewrites it?

Because no one has written it. You can't ask the guy who wrote it, not because he has left, but because he does not exist. Also, it often reads weirdly.

I disagree with calling this bad faith. For instance:

* I can give you one quarter of amazing profits, if you let me dismantle and sell all the assets of a company.

* I can give you a few years of incredible food production, if you let me strip a rainforest and plant commercial crops.

* I can give you incredibly cheap energy, if you let me mine non renewing fossil fuels from the earth.

The context of why something is possible matters. In this case, it was possible because a very large and comprehensive test suite was seen as a necessity to specify a successful project (managed by humans). I do not believe an LLM-coded project could ever have made such a test suite. In this case, the LLM is consuming the result of expensive human labor (the test suite) to make what ultimately is a minor variation on it (the implementation language).


> This is such a bad faith argument. How long would it take a dev or a team of devs to do this with the same architecture and test suite? A hell of a lot longer than 6 days..

A pocket calculator can also multiply numbers much faster than an engineer; that doesn't make it an engineer itself.


You missed the point.

People want to use stuff like this as evidence that AI can somehow write entire software systems in a few days. We saw the same shit with the "compiler" they made with a bunch of agents. Literally the only reason it's possible is the hundreds of thousands of man-hours and God knows how much money that were poured into the reference projects before the AI got anywhere near them.

To replicate this kind of thing with a greenfield project would take an absolute ton of spec work and requirements derivation, which would substantially eat into any savings from having AI generate it.

The accomplishment itself is interesting, and unlocks opportunities to do work no one would have bothered with before, but it doesn't represent what a lot of people desperately want it to.


Exactly this.

I am not sure why people sound so astounded, to be honest. This has been my frank experience of the agentic tools, both Codex and Claude, since about December.

When given the right constraints this kind of thing is entirely conceivable.

However the important question not being answered here is: does anybody working on it have a full understanding of what has been built?

My experience having constructed similar types of projects using these tools is: yes, you could do this in a week or two, but then you'll have a month or two of digging through what it made, understanding what was built, and undoing critical yolo leaps of faith it made that you didn't want.


Not to mention that to even attempt something like this from scratch would take hundreds of hours of spec work. I see it all day, every day in the aerospace sector. Software engineers have absolutely no idea what deriving a design document and all its associated artifacts actually looks like, and they're in for a rude surprise if the industry really does shift hard in that direction.

What a time to be alive.

So much of the fundamental dynamics of the industry and the job have changed in so little time. Basically overnight.

Some days I am so excited at how much I can do now. You can build anything you want, in basically no time! 100% of my software dreams can be a reality.

Some days I am terrified at what's going to happen to the job market.

Suddenly you can get so much with so little. The world only needs so much software.

Is every company that sells software as their core business model going to go out of business?

What will happen if only certain companies or governments get access to the best models?


> Is every company that sells software as their core business model going to go out of business?

Probably not, for a number of reasons:

* Some software suites are (probably still, for a few years) too big to regenerate through a coding LLM.

* There's quite a lot of proprietary knowledge not just in the code itself, but in the requirements, industry knowledge etc. For example if you want to write a hospital management system, you need to know a lot about how hospital works, how they are billing their services in different legislatures, data protection rules etc.

* For some pieces of software (like computer-aided engineering), validation of the software is just as important as the software itself.

* Liability: suppose you build bridges, and you're on the hook if it fails too early. Do you really want to vibe-code your own software that validates the bridge's design? Will any insurance company cover that? Probably not in the near future...

* Currently, security and safety of LLM-generated code is still a pretty big concern. I guess this will get better as the LLM-Coding industry matures.


> The world only needs so much software.

Around the time of the dot com crash, there was a decent amount of rhetoric advising students and job seekers against getting into the software industry, because it was getting "too saturated." The thinking was there's just not that much work to go around, especially for the number of people flocking to the field. And the crash just reinforced that narrative.

But even as a student back then, I could tell that there was unlimited scope for software. Pretty much any cognitive thing we do manually could be done in software. I once idly tried to enumerate those and quickly realized there was soooo much to do. Plus, I also understood that the more you do things a new way, a lot more things pop up that we haven't even imagined yet. The possibilities were countless. It was clear that the "saturation" narrative stemmed from a lack of people's imagination and understanding of what software really was.

I just knew that this field would never get saturated because it was impossible to run out of things to write software for.

But these days...

I mean, I know we will always have new software to build as things evolve, which they will do faster than ever with AI. But these days, I wonder if it's now possible to write software faster than we can imagine new things to do.


> Pretty much any cognitive thing we do manually could be done in software.

Yes, although I suggest being careful with that kind of thinking.

https://www.orwell.ru/library/novels/The_Road_to_Wigan_Pier/...


Ooh, I hadn't read that one; I've put it on my list. I couldn't read the page properly because ads kept popping up and making the page jump around... but it seems the linked section was about displacement of workers? If so, that's always been true of all technology, but that's less a problem with technology and more with the social system it is applied in. I just posted this comment elsewhere that may be relevant: https://news.ycombinator.com/item?id=48078930

It's not about the displacement of workers. It talks about a fundamental principles-level objection to unbounded "progress". It's not an absolute argument and Orwell himself says so, but it is worth keeping in mind.

Try reading it here: https://www.george-orwell.org/The_Road_to_Wigan_Pier/11.html


Let's take a SW business like a ticketing system.

Do you think 100 enterprises with 1 bln of tokens each are going to make a better product than a specialized vendor with 100 bln of tokens?

For sure, SW vendors and SaaS like "logo creator" are already dead, but unless the next generation of LLMs is going to ship with an embedded ticketing system, the ticketing-system vendor will be fine (maybe with less headcount, but not sure).


> Do you think 100 enterprises with 1 bln of tokens are going to make a better product than specialized vendor with 100bln of tokens?

I'm not sure if this is sound reasoning, because "better product" is very context-dependent.

My current employer has migrated from RT to OTRS as its ticket system, and is now moving to servicenow.

The RT instance was heavily patched/customized.

The OTRS instance was heavily patched/customized.

We try not to customize servicenow quite as much, but the less we customize it, the more we have to change the workflows in our company. And humans are slow to adapt.

With this experience in mind, the question is more: do we want to spend lots of money on a vendor-supplied ticket system, and then spend lots more LLM tokens to customize it, or do we LLM-build it from the ground-up?

If we started a new ticket system migration project today, maybe the best answer would be to start with an easily-customizable Open Source ticket system, and then throw LLM-power at customizing it.


> or do we LLM-build it from the ground-up?

But in this case you don't spend tokens only on your workflows: you have to patch it constantly, perform vulnerability scans, check and adapt for law changes (e.g. if you're in Europe: GDPR or DORA), create and maintain (again, security) integrations with other systems, and so on.

And, most importantly, you as a corporation need an internal team to do the work, and that means it's a liability to you as a corporation... and we all know it's better to have someone else to blame.

Just imagine the CTO or CISO explaining to the CEO that the data breach they had last week, which cost them millions, was due to some customization they did on top of an open source ticketing system.


Certainly companies and governments will have access to better models than the public (in fact, that's already the case with Mythos). The public will still be able to help themselves with models that are behind the frontier.

Maybe, or they use the same smartphones as everybody else. The mass market also wants the best model and will pay accordingly.

It’s pure marketing. Don’t be naive

I think the industry is moving to English as the programming language, and specifications-context-tdd as the framework for building software.

Many find it distasteful, and many find it liberating. I think it broadly correlates with how they feel about expressing themselves in English vs., say, C++.

As a side question, is there anyone who's using LLMs primarily in non-English mode to program? I suspect there are quite a few people using Mandarin; can someone share a first-hand account?


I’m Korean, and I’ve used GitHub Copilot, Claude Code, and Codex. At first, I prompted them in English, but over time I came to the conclusion that using Korean works better for me. It may consume more tokens, but reducing the time spent understanding and correcting the plan is more valuable. That said, when the context gets close to its limit, the responses sometimes include Korean words that do not actually exist.

As an aside, I don’t think the benefits LLMs bring to non-English users are widely understood. I studied linguistics and Russian, and I’m capable of professional interpretation in English and Russian. Even so, I can read technical documents, understand them, and communicate about them much faster and with far less effort in my native language, Korean. These days, I read most English documentation and HN posts through Chrome’s automatic translation. Sometimes the translation is ambiguous, but in those cases I can immediately refer back to the original English. This has been a major help to me and to other Korean developers I work with.


I'm using it 50% English (personal projects) / 50% Polish (workplace; the reasons being agents.md and a team that is not that English-proficient), and honestly I haven't seen much difference in the output/ambiguity.

Polish prompts tend to be shorter due to the language having a lot of verb forms/conjugations. The only "bad" thing for me is that when it's saying "it broke" it tends to use uncanny/blunt words that sometimes make me laugh.


Same here, been using 50% English and 50% Spanish for months, no particular reason, just whatever feels easier at the moment. Sometimes I even switch languages in the middle of a session. I have not noticed a difference in the quality of the output.

Interesting. Some questions: Would you say Polish is more dense or less dense than English? It's interesting to hear that code quality is not suffering but the response text is sillier or blunter. Any other discrepancies compared to English?

I would say it certainly can be more dense, but even when it is, the tokenizers count it as more. Last time I checked the OpenAI tokenizer on my agents.md, it ate ~30-40% more tokens than the English version at roughly 1:1 meaning.

I think it will eventually be its own dialect of English. Telling LLMs what to do works better in not-quite-normal English, and I think this will continue until it isn't recognizable as natural English anymore but is a new fuzzy programming language (probably more than one).

I believe new (programming) languages will emerge, both for LLMs to parse and take instructions from and for them to generate code in. The former is because English is a nuanced language evolved for human usage, which LLMs don't quite need; its only advantage is a metric ton of training material. The same goes for Rust, Go and other languages LLMs currently code well in, which all have concepts geared towards human convenience.

>Telling LLMs what to do is better using not quite normal English

What are your prompts like?


I wonder how well Mandarin works for LLM-based programming. On one hand, it's very token efficient as Mandarin script is very dense in meaning. On the other, I suppose this can increase ambiguity.

Character density and token efficiency are different things. The latter is data-specific and, therefore, tokenizer-specific: e.g., take GPT-5's tokenizer, o200k_base, and run Mandarin text and its translation through it. Some of the time en will beat zh. I just tested with news articles and Wikipedia.

After all `def func():` is only 3 tokens on o200k_base.
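
If you want to check density claims like this yourself, something like the sketch below should do it. Hedged: it assumes the tiktoken-rs crate and its o200k_base() helper, whose exact API I'm writing from memory.

  // assumes Cargo.toml contains: tiktoken-rs = "0.6"
  use tiktoken_rs::o200k_base;

  fn main() {
      let bpe = o200k_base().expect("load the o200k_base vocabulary");
      for text in ["def func():", "write a small parser", "写一个小解析器"] {
          // count tokens for roughly the same meaning in different scripts
          let n = bpe.encode_with_special_tokens(text).len();
          println!("{n:>3} tokens: {text}");
      }
  }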


I can speak, read, and write Taiwanese Mandarin (which is likely relatively underrepresented in the training sets, and which is, in my practical experience, materially different in its usage).

The authoritative answer for this question would best come from the millions (or tens of millions) of Chinese-speakers who are currently using LLMs to write software.

However, it is my suspicion that you would see no advantages using any language other than English. While there is a certain token-level density to written texts, it seems the benefits of this (and the more recent discussion around “caveman talk”) are quite limited.

Furthermore, consider that the vast majority of textbooks, technical documentation, blog posts, StackOverflow answers, &c. are originally in English. Historically, where these have been translated to Chinese, the translations have often been of very poor quality (and the terminology and phraseology is often incomprehensible unless you also understand some English.) I would suspect that this makes up the overwhelming majority of the training sets for these models.

That said, my experience using the most recent models, is that they are surprisingly language-agnostic in a way that surpasses readily-available human capability. For example, I can prompt the LLM to translate English into something that uses German grammar, Chinese vocabulary, and Japanese characters, and I'll get an output that is worse than what a human expert could do… but where am I going to find a multilingual expert?

(Of course, I have so far only ever been impressed that a model could generate an output but never impressed with the output it did generate. Everything—translations, prose, code—seems universally sloppy and bland and muddy.)

So the biggest benefit I would anticipate for a Chinese speaker today… is that if they are uninterested in working internationally, they have significantly less dependency on learning English.


I'm using it in english / albanian. Not much difference really. Impressive.

I use French nearly all the time, it works well. Not that I can't write English prompts, but I find it easier to use my native language.

I'm teaching my kids to be fluent in tokenese

Natural language doesn’t have the precision required for building systems. We already have languages for specifying systems precisely. It’s called “code”…

Well, what we've been seeing the past few months is that natural language does, at least enough to build code and tests.

I agree, and those still too focused on code generation for specific languages are fighting the last war.

It is the revenge of UML modeling.

Eventually it will get good enough that what comes out of agent work is a matter of formal specification.

Assuming that code is actually needed at all and the result cannot be achieved as pure agent-orchestration workflows.


You really think that's what the positions on either side boil down to: how they feel about expressing themselves in English vs. C++? No, that's ridiculous. That's such a wildly reductionist simplification.

Just a cautionary tale about porting to Rust using AI:

https://blog.katanaquant.com/p/your-llm-doesnt-write-correct...


Also, passing tests doesn't mean something works.

The Claude Code C compiler passed 100% of GCC's tests and couldn't even run a hello world...


It couldn't run "hello, world" on systems where the include files were not located in the directory that it expected -- producing instead diagnostics saying, quite clearly, that the header files were not found. On systems where they were, it built versions of postgresql, redis, and several other things which passed their test suites completely.

If you've heard this problem described as a fundamental limitation of the compiler, and not the kind of packaging glitch that's routine to find in pre-alpha software of all descriptions, whoever described it to you that way is not serving their readers well.

I'm not saying CCC was production-ready, or close -- the total lack of an optimizer would be a killer in any real use, and I assume that there were problems with the diagnostics at least as bad as problems with performance and the include files, for similar reasons -- the LLMs hadn't been asked to optimize for that stuff yet, just test suite correctness. But it did achieve that, and the amount of cope I've seen on social media claiming otherwise is more than a bit disturbing.


I have a colleague who has multiple times committed code that doesn't work, like, at all. Why? His code is only exercised by tests, not by the actual application. And apparently he never even bothered to click through things once, let alone review the code.

If it doesn't work, it doesn't. You can find all these excuses. But at the end of the day, there is a difference between an end user being able to get something out of your code or not.


The C compiler written by Claude a few months ago was able to compile a hello world.

The main problem, I think, was that it was extremely slow.


i think there's a different lesson to be taken from those cases: the LLM will build to whatever you give it a feedback loop for.

if you give it just the logical tests, it won't consider the speed at all. if you include tests that measure the speed and ask the llm to match the performance, it'll do that too.

it's the same class of error as everything else with llms. it has no common-sense context for things people consider important. if you don't enforce the boundaries, it will ignore them
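
e.g. a crude sketch of putting speed into the feedback loop: a test the agent can only satisfy by caring about performance. the 50ms budget is made up, and wall-clock asserts are flaky in CI, but it shows the shape.

  #[test]
  fn sum_is_fast_enough() {
      let data: Vec<u64> = (0..1_000_000).collect();
      let start = std::time::Instant::now();
      let total: u64 = data.iter().sum(); // stand-in for the real workload
      assert_eq!(total, 1_000_000 * (1_000_000 - 1) / 2); // correctness gate
      assert!(start.elapsed().as_millis() < 50); // performance gate
  }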


The question is: are our optimization functions specified well enough? (No.)

How important is a well-specified optimization function? No one knows. We will find out.


Discussed here if anyone's interested:

LLMs work best when the user defines their acceptance criteria first - https://news.ycombinator.com/item?id=47283337 - March 2026 (422 comments)


I'm a full time Zig developer, and I see this as an absolute win. I know Jarred has said in the past he feels Zig makes him more productive, but I also think it's fair to say Bun was programmed in a way that's quite cavalier towards buffer overruns. I think Jarred and the Oven team will have significantly better luck with Rust.

Some commenters have remarked they only heard of Zig because of Bun, therefore this is bad for Zig. Not so. In my opinion, there has always been a mismatch. I say with no ill will that a divorce is likely better for both parties. I genuinely believe Bun will be better software once fully converted to Rust.


I remember looking into the nodejs alternatives some years ago; one way to compare them is to look at the open issues. bun had so many hits for 'segfault' and deno had basically none.

Even now:

bun (zig) [1] 119 open / 885 closed

deno (rust) [2] 0 open / 1 closed

I don't think this has that much to do with Zig's anti-AI stance. More about using the right tool for the job.

[1] https://github.com/oven-sh/bun/issues?q=is%3Aissue%20state%3...

[2] https://github.com/denoland/deno/issues?q=is%3Aissue%20state...


You misspelled segfault as segfaut on your Deno search:

https://github.com/denoland/deno/issues?q=is%3Aissue%20state...

There are 10 open and 40 closed on Deno.


oh, good catch! Point stands

Not sure why you're getting downvoted; I think you're close to right. They were successful with one technology and had a great exit. They may also be successful with another technology post-acquisition.

Let's see the fruits of their decision.


Presumably the biggest loser in all this is Zig; I only know of the language because of Bun.

But the timescale still gives me pause… just because AI lets us convert a codebase in 6 days doesn't mean it's wise. There are surely a lot of downstream implications! It's always felt a little like Bun is making up a plan as it goes along (and maybe that's unfair); this seems to underline the point.


Zig is a great low-level language. It's much better than C, while not being nearly as large a language as, e.g., Rust or C++. AFAICT Zig does well in embedded development, and should continue to do so. Note that Zig is not even 1.0 yet.

Yeah, but now they have the reputation of the language that fumbled the ball because of an overly onerous anti-AI stance.

They haven't fumbled anything. One person has used AI to vibe-code a rewrite of a Zig program in another language. Zig didn't gain popularity due to Bun; last I checked, Bun doesn't even mention on its homepage that it is written in Zig. Zig is appreciated for major improvements over C, while being simple and concise.

In addition, a core Zig developer has explained why the PR was rejected: it would have introduced non-deterministic bugs into the compiler, just to achieve a speedup Zig is already gaining thanks to recent work on the self-hosted backend and incremental compilation, which are far more general as well.


It's been repeated many times that the rejection of the Bun PR was unrelated to their AI-policy. It's also not clear they've "fumbled the ball" given how many projects are complaining about slop PRs.

I think it would help if Zig put out a statement on their actual AI policy, regardless of whether they’d be repeating something that should already be known.

As often happens, the online discourse has, for some reason, decided that this was an anti-AI stance, while, as far as I understand, the problem was simply that the PR had problems, which led to Bun forking Zig.



Lol. What a goofy take.

For most use cases I can’t imagine why you’d make the effort to move off C and not just go all the way to Rust.

Rust is too mainstream now. Every Zig fanboy clutches to their heart the confidence that they are still a little bit "alt" in some way.

These tools let you get a massive codebase functional in 6 days. But, presumably, there's no better language to target than Rust (in terms of safety/performance), and therefore the rest of the time can be spent making the birthed-in-6-days codebase better.

But the author said "the code truly works, passing the test suite on Linux and soon other platforms" which just sounds really wise.

> 99.8% of bun’s pre-existing test suite passes on Linux x64 glibc in the rust rewrite

OK, they've got a working prototype, congrats! Now it needs to be put into shape so that all the unsafe blocks are eliminated (maybe with a few tiny exceptions), and the code is turned into maintainable, readable, reasonably idiomatic Rust.

I wonder how long is it going to take.


About 2 months, or 60 days, if we go by the old 90/10 rule.

Not sure that rule is even applicable anymore, but I don't have a better heuristic to make guesses by either.


maybe it's tokens instead of time now? bun has access to an unlimited amount of them

This is the kind of program that would need to have a lot of unsafe even if it had been written in Rust from the very beginning. For comparison, there are about 2600 unsafe blocks in Deno, not counting dependencies.

> Now it needs to be put into shape so that all the unsafe blocks are eliminated

All the unsafe seems to be FFI?

https://github.com/search?q=repo%3Aoven-sh%2Fbun+unsafe+lang...

> and the code is turned into maintainable, readable, reasonably idiomatic Rust. I wonder how long is it going to take.

This isn't a c2rust rewrite?


That GitHub search only covers the main branch, not the not-yet-merged Rust rewrite; the only Rust code in there is tests for Rust FFI (so that people can write native extension modules for Bun in Rust if they want to).

The rewrite's in https://github.com/oven-sh/bun/tree/claude/phase-a-port. By running the following command on it, I count about 14,000 unsafe blocks:

  # matches `unsafe {` blocks, `unsafe impl`s, and #[unsafe(...)] attributes
  rg --stats -g '*.rs' 'unsafe \{|unsafe impl|#!?\[unsafe\('

I have not had time to look at the code myself, but from when this was initially posted to Reddit, IIRC it had around a thousand global mutable variables, which are unsafe to access.

I am very curious what the numbers are once the test suite passes and after a few passes of reducing the amount of unsafe.
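
For the curious, a "global mutable variable" here means a `static mut`, where every access needs an unsafe block; the usual idiomatic cleanup is a lock or a once-initialized cell. A minimal before/after sketch (hypothetical, not Bun's actual globals):

  use std::sync::Mutex;

  static mut COUNTER: u64 = 0; // the ported style: every access is unsafe
  static SAFE_COUNTER: Mutex<u64> = Mutex::new(0); // an idiomatic replacement

  fn main() {
      unsafe { COUNTER += 1 } // a data race if any other thread does the same
      *SAFE_COUNTER.lock().unwrap() += 1; // same effect, checked by the compiler
      println!("{}", *SAFE_COUNTER.lock().unwrap());
  }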


At the very least, it's interesting to be a bystander observing as efforts like this progress. The first thing it makes me wonder is how comprehensive/high-quality the test suite is to begin with. Not to cast aspersions, but even at 100% on all platforms I wonder how confident the Bun team would be in migrating.

I harbor some hope that the (sad) fall of human SWEs will at least be accompanied by language defragmentation. We don't need 38 systems languages once human taste is mostly out of the picture.

Since the LLM craze started I have always assumed it would end up in a place where programming languages are dead and LLMs generate something more low level.

Programming languages were always designed as an abstraction to allow humans to more easily instruct a computer than by writing binary or assembly. If humans write natural language and don't check the generated code, there's no reason to take the hit of generating C, JS, etc that still has to be compiled and/or interpreted.


If anything LLMs should use something higher level because it compresses the context and makes programming closer to natural language they are trained on.

Forcing LLMs to do a shitty job of what a compiler can do deterministically is not a good approach IMO.


Bun is going this route because their proposed fix wasn’t great. https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

I cannot imagine this agent rewrite had anyone review any of the code (you can't at that speed).

I’m positive this will go extremely well :p


Fwiw, that's not the stated motivation for the rewrite experiment. In fact, the Rust rewrite is slower to compile than the zig code when compiled with their internal fork of zig (tho it is faster when OG zig is used).

I don't want to infringe upon your right to speculate. I just want to point out that your statement is at best a speculation.


No doubt on my side porting was "easy". What I’d find interesting is the ability to maintain and properly care for the code over time for the next iterations. Do we eventually end up with a codebase that nobody truly understands in depth anymore, where everything is generated and modified through GenAI?

Thanks for sharing.


Yeah, that's my issue with llm code. If we imagine a future without human programmers, sure, go ahead; we are not there yet, but maybe it's possible.

But if you want it to coexist with humans, it doesn't seem to work well. It gets in the way of human learning and human communication, essentially making professionals and teams weaker.


And here I am trying to get an LLM to add types to a 100k line Ruby repository for 2 days, and it's not going so hot...

I have some experience in this. Reach out (email in my bio); I would love to chat.

An SMT solver may work better.

Will that work if my codebase is filled with nils it shouldn't be filled with, and HashMaps instead of structs with a loosely defined schema, and tuples masquerading as arrays?

No, since you just use an LLM to code, so it'll probably fix all that for you. Your pronouns are probably they/them. Lol!

Obviously LLMs would identify as binary.

(But also, low effort meanness is bad, HN strives for better in both dimensions)


Guys, calm down; this is just marketing from Anthropic, the same as the browser and the C compiler.

I still don't understand how people consider bun a viable runtime when it's owned by an evil corp trying to use it to capture the tooling layer, and it's the most insecure runtime on top of that. Meanwhile deno is performant and has dramatically improved node compatibility while exposing a proper permission-broker API.

This is remarkable. Man, there are all those ancient things that "we've lost the source code for". One time, at a past job 10 years ago, we were reimplementing something that was lost to the sands of time, using the out-of-date spec it had been built against. It was such a tedious job with verification, but we got there. Amazing how easy that would be today.

Are you sure you will be able to spend time playing around with that kind of stuff when anthropic/openai/google/etc make you jobless? (Well, perhaps not YOU precisely, but 90% of devs, so there's a high chance.)

We always think it’s not gonna hit us… we may be wrong


I don't think this kind of thing works nearly so well without a comprehensive test suite or the ability to easily use the reference version as a test harness. The typical enterprise relic for which no specification or source remains almost surely lacks the former and probably isn't very amenable to the latter.

The Ubuntu coreutils thing last week really soured me on 99.8%-test-compatibility Rust rewrites :|. I clicked through to the tweet linked here and it was kind of like, shudder. I feel quite the opposite now when I see this kind of thing. I'm like *looking for exit*

There is no way a port this massive will have human code reviews.

If this succeeds, there is no stopping AI, given it will have crossed the Rubicon of human bottlenecks.


What does this mean for Zig?

Few big popular projects use Zig; if they start to move away from it, what will Zig's future look like?


I think the issue is that Zig lost its biggest project, which was a poster-child project for real uses of Zig. Worse, that project felt Zig wasn't meeting its needs, to the point of abandoning Zig and rewriting the entire project in a different language. That is a really bad signal for anyone thinking of using Zig for a big project. Zig is still in beta, but has there ever been a situation like this, where an upcoming programming language was abandoned by its biggest external project and was still considered a successful language after that?

Well they haven't lost anything yet. Somebody is vibe coding a rewrite in another language and we don't know much else. The author said he will write a blog post about it soon. So far all we know is it is passing most of the test suite.

But Bun has open issues and bugs. The test suite doesn't tell us whether it has introduced many new bugs, solved existing ones the test suite doesn't catch, or anything else. Not to mention, the rewrite is 960K lines that nobody understands. How long will it take for the Rust version to be better, and be understood as well as its current maintainers understand the Zig version?

Having a project consider a rewrite isn't so big a deal. Zig has been designed from the ground up with a vision, and isn't worried about taking a while to create a stable API to achieve that vision. The self-hosted backend shows how incredibly fast incremental compilation is when the language is built for it ground-up. Compared to other languages that implement weaker forms of incremental compilation it isn't even close.

I don't think the Zig team is concerned at all.


>Having a project consider a rewrite isn't so big a deal.

I don't agree that them actually doing an entire draft rewrite can just be characterized as them considering a rewrite.

>I don't think the Zig team is concerned at all.

I wonder if that's the mentality that got them in this situation in the first place.


>I don't agree that them actually doing an entire draft rewrite can just be characterized as them considering a rewrite.

You're right, a rewrite is in existence, and whether it is good enough to be used or expanded upon is what is being considered. I don't think that changes the fact that languages don't live or die by whether or not 1 large project using them continues using them. Especially a language like Zig, which has taken plenty of time making breaking changes. They know this is par for the course.

>I wonder if that's the mentality that got them in this situation in the first place.

I highly doubt it. To my knowledge, the only "why" Jarred has given is frustration with memory issues. Speculated reasons I see are:

* Anthropic wants a rewrite to a language with a more favorable AI contribution policy, to avoid bad press from acquiring a framework written in a language that is skeptical of AI code quality.

* Rust is more stable and a better target for AI-assisted programming or outright vibe coding.

* Bun is upset that Zig does not want to merge their fork into main.

Focusing on the issue Jarred gave as to why he started the rewrite, I don't see how Zig got themselves into the situation at all. Zig was always upfront that it aimed to be a modern C: simple language, powerful modern features, and excellent compatibility with all things C. While it certainly has much better behavior concerning memory safety and undefined behavior, it has never aimed for Rust- or GC-level memory safety.

It's not like Jarred has been begging the Zig devs to implement language changes to make Bun development easier. Zig was always upfront that you will have to manage memory manually, and that allows for operator error. I think Jarred is in this situation because he wants to be, simply. He works for Anthropic, probably has no limit on how many tokens he spends, and may have access to their most powerful internal models like Mythos. I would guess he pointed agents at this problem and let them go, because why not? He likely has no opportunity cost.


>It's not like Jarred has been begging the Zig devs to implement language changes to make Bun development easier. Zig was always upfront that you will have to manage memory manually, and that allows for operator error.

Huh? The patch that Bun submitted to Zig was about compilation times, making Zig's type resolution faster. I am sure the Bun developer who is submitting patches to Zig is well aware that manual memory management is a core tenet of Zig. The issue is that Zig has to "pick a struggle": when using C, you manage all of the memory yourself, and get blazing fast compile and run times. Rust uses the borrow checker to reduce the number of memory safety bugs, but Rust can also have slower compile times. Why would it make sense to use a language that is not helping you manage memory and is slow to compile, especially if you submit a patch to the language to address the issue and get rebuffed?


Side note, but Rust's long compile times aren't due to lifetime analysis; they're more about type resolution, monomorphization, and LLVM codegen. Zig has fast compile times because they've been willing to remove features that don't play well with compiler parallelization (like removing usingnamespace).
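
To make that concrete, here's a toy example (mine, not from any real codebase): each distinct T below makes the compiler generate, optimize, and codegen a separate copy of the function, which is where a lot of the build time goes.

  // A hypothetical illustration of monomorphization: the compiler emits
  // one specialized copy of `largest` per concrete type it is used with,
  // and each copy goes through LLVM codegen independently.
  fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
      let mut max = items[0];
      for &item in &items[1..] {
          if item > max {
              max = item;
          }
      }
      max
  }

  fn main() {
      let ints = vec![1, 5, 3];
      let floats = vec![1.0, 5.0, 3.0];
      // Two instantiations -> two monomorphized copies:
      // largest::<i32> and largest::<f64>.
      println!("{}", largest(&ints));
      println!("{}", largest(&floats));
  }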

> I think the issue is that Zig lost their biggest project, which was a posterboy project for real uses of Zig.

Bun, Ghostty, and TigerBeetle are 3 popular projects that I have heard about using Zig.


So I have barely heard about Ghostty, and not at all about TigerBeetle. Honestly, even Bun is a 2nd/3rd tier JS runtime. Regardless, I think that if Bun goes forward with the port, the biggest issue is that a language so failed to live up to expectations that someone was willing to rewrite a ~1M LOC project in an entirely new language; that's insane. The more common response when deficiencies in a language stall development is to rewrite/refactor parts of the codebase in another language that seems better suited, not to completely rewrite everything.

Is it lost already? Did Anthropic already say the new LLM-generated thing is the way to go for the future?

Nobody knows. Here's my two cents.

Zig is a very interesting LOW level language, but honestly I think it should be considered for what it is: a better C. I don't think it fits for anything that someone would have written in C++, Java, Haskell or C#. Instead, Rust is competitive with all of these languages when it comes to safety, abstractions and speed. And also with C and Zig itself.

Zig has a couple very interesting ideas that make it stand out: comptime and the zig build system.

Alas, Zig is still far from being stable. Rust came out to the public in 2010 and became stable (1.0) in 2015. Zig came out to the public in 2016, and it's 10 years now and some say it's still years away from 1.0.

So, while Rust took about five years of public development to become stable, Zig is taking 10-15. I love the language, but TBH I don't see a great future ahead, especially with LLM advancements that can use safer languages to do the same work. There's no point in risking more memory bugs when the effort of writing code is the same.


Honestly I think that, at least to the Zig community, Bun isn't the biggest name we'd think of. There's been some philosophical friction between the Zig project and Bun (Zig is pretty anti-AI and favors methodically thinking through problems, while Bun is more move fast and break things). I think TigerBeetle is a better representation of what Zig can do. TigerBeetle is fuzzed within an inch of its life, and is absolutely rock solid. The people who work on it are brilliant programmers who care a lot about correctness. They find that Zig lets them express their ideas succinctly, while still giving them the needed power.

When I read about Bun, I get the sense that Jarred has different priorities, mainly moving quickly. Bun also implements a lot of userspace APIs, since the core engine is JavaScriptCore, which is written in C++. I think Rust really shines in applications programming, so I guess it makes sense that Rust has lined up with Jarred's needs. I'd be interested to see what JavaScriptCore would look like in Zig versus Rust; I think Zig might have an edge in the core interpreter and JIT.


This is like when Reddit was ported over from Lisp to Python

meaning it doesn't matter except for online discourse about X being bad for 2 days


It means nothing for Zig. Zig isn't even out of beta yet.

Jarred has already said on Twitter that this was only an experiment for comparisons and very, very unlikely that they'd switch to Rust.


STOP Analyzing.. Now rewrite the Linux kernel in rust. DO NOT MAKE MISTAKES, then post it on Hacker News.

---


The Pareto principle is in play here. It might take years to get that last fraction of a percent.


Good enough for a side project, not good enough for migrating a banking system off COBOL

That is actually what companies like IBM and Unisys are already doing today: LLM-assisted porting.

https://research.ibm.com/publications/enterprise-scale-cobol...


Why not? I think we are perfectly capable of generating a test and validation environment that we can use for correctness. Most likely LLMs can do this better than engineers with zero to no domain and language knowledge can these days. From that point on, rewrites become feasible (not easy, feasible).

I suspect that the test suite isn't that great though. Bun has so many different behaviors compared to other JS engines, sometimes just plain wrong or contradicting the spec. The test suite didn't catch those... Not sure how much I trust the rewrite :)

Notably, Bun is not a JS engine. JavaScriptCore is the JS engine. Bun is just a complicated wrapper around it.

If this goes through, it feels like it will stoke rust on zig violence

I just wish the camps would stop being as tribalistic. I see a broad spectrum of fights between any "better C" language and Rust enthusiasts. There is room for both of these things. Just use what works for you. Rust is a bit more like Ada in spirit, it introduces a lot of friction compared to "C like" things which gladly accept you blowing your leg off. Each tool has unique benefits, and is uniquely suited to different problems.

If I'm building a simple GUI app, I'm not sure the friction from Rust is all that worthwhile. If I'm sending someone to space, I think I'd rather have the safeties of a Rust or an Ada, or MISRA C.


I really don't think so. Bun was using their own language, forked from Zig 0.14. It's not like the communities interacted much. All of Bun's code was their own internal code; it was not part of the Zig ecosystem. I don't see how this could have any impact on the Zig community at all.

Sadly, yes. I feel too much "violence" on both parts.

Honestly, the Zig community seems the most bitter, for whatever reason, while on the Rust side it seems to me that they simply overstate how great the language is and are pushy in trying to convince others of their ideas.

If this goes through, we can all take SWE lessons from it, but I think the communities will suffer.


Xunroll's "view on X" truncates the first character of the username.

Responses seem to be very "either or" as usual on such topics.

I think it should be possible to appreciate how impressive this is on one hand, while also discussing the limitations of the approach.

Everyone can probably agree that getting this far without LLMs would have taken substantially longer and required a huge amount of work.

But what is then the end result?

Personally, for me it would still be a hard pass on using a 1M LoC LLM-migrated language runtime - I have seen CC do enough crazy things to still be wary of any code without a human in the loop. It simply plays too fundamental a role in the tech stack. Others might feel differently, and time will tell how things play out.

Even if this does play out as optimistically as one can imagine, would it then mean I can go and migrate some of my enterprise codebases the same way? I doubt it.

Bun has the nice feature that it has an extensive set of black box / E2E tests that don't themselves need migrating. Most projects in the wild seem to be much more reliant on unit and integration tests that are part of the codebase itself, and would therefore also need to be migrated and be subject to mistakes in the migration process.

It also seems fairly rare that test suites are good enough to guarantee that the program will work as expected in all cases. I have yet to come across a larger enterprise codebase where the tests were good enough to make human review and even manual testing fully redundant. To be honest I doubt that is the case for Bun either, but I don't know enough about Bun to conclude that.


Obviously there is a huge trend of "rewrite X in Rust". I understand why, Rust is a huge improvement in safety and speed.

My question is, to people even older than me (and I'm certainly not young), does anyone remember this much enthusiasm about people rewriting C code into (C++/Java/Whatever was new and hot)? Because I don't, but maybe I missed it.


I recall C++ OOP being the new hotness when I started out and C was always contrasted as the old & busted example. Kind of the "Everything-as-an-object will simplify everything" phase. Windows MFC was the new way, then STL.

Java WORA write once, run anywhere was definitely a thing when it came out. Java Applets came out of the woodwork and were the WASM of their day. Even Cisco ran Java for their router UI for a while, which was painful.

More recently, HN went through a period about 10 years ago where every other article ended in " ... written in Go".

The mantra may not have rhymed with "rewrite X in Y" but the spirit was there.


> every other article ended in " ... written in Go"

What happened to that: is Go no longer considered great / popular?


In the circles where I hang out I think community opinion is that go is _fine_, but python has faster iteration speed for experiments, and rust has better correctness and performance for production, so there's less excitement around it

Kind of the opposite: I was deep in the R world a decade ago and there was a huge trend of replacing Java dependencies with C/C++ ones because the JVM was such a pain to manage. The community eagerly adopted the replacements about as soon as they existed.

There were no good options previously. It was either C or C++. Most of the other languages were either fringe or had a GC, or had a pseudo runtime GC (Swift). The culture of Java and C# and Go didn't really support the type of low level optimizations needed, even though you could technically do systems programming if you restricted yourself to a specific subset of the language and cut yourself off from most of the standard library and ecosystem. Nim was unstable. OCaml had the same issues as Go and Java and C#. You simply did not have any options until Rust came along. Oberon was an academic trinket. The less said about the various lisps and forths the better.

OS and embedded programming require bare metal support and data structures that can run standalone in the absence of an OS and standard library, and the ecosystem must exist to support such a style of programming.

Currently Rust has over 10,000 crates that would theoretically work just fine in a kernel environment.

https://crates.io/categories/no-std
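
As a minimal sketch of mine (not from any particular crate) of what no_std means in practice: the crate opts out of the OS-backed standard library and keeps only core, which is what lets it run inside a kernel or on bare metal.

  // lib.rs of a hypothetical no_std library crate.
  // No OS assumed: no heap, no files, no threads -- only `core`.
  #![no_std]

  // core's slices and iterators work fine without an OS underneath.
  pub fn checksum(data: &[u8]) -> u32 {
      data.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
  }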


>this is a 960,000 LOC rewrite, the code truly works, passing the test suite on Linux and soon other platforms

I wonder how much of this is original size vs rust requiring verbosity vs the LLM being verbose in general.

Not a criticism; I do believe language translation is the one field where AI is mature enough to near one-shot projects.


I love Bun & Zig and this feels a bit like my parents are getting a divorce. I thought it was a bit strange that Bun did not sponsor the Zig foundation while other, much smaller companies have.

Are you kidding? IIRC Oven gave $5k/month to Zig for years. And btw that was before they got acquired for billions, when they had no income at all.

Yeah, that tracks according to the numbers.

https://ziglang.org/news/300k-from-mitchellh/

https://ziglang.org/news/2024-financials/#income

https://ziglang.org/news/2025-financials/#income

I had a bit of trouble finding it myself but Claude proved a better Googler than I


Alright, my bad, I did not find any info about this. But still, they are no longer mentioned as a sponsor.

Interesting that ports can be written so quickly with AI. But that aside, I have to ask... why? If you want a super performant bundler/runtime/package manager written in Rust with TS support, Deno has this already.

I'm looking forward to the race to the bottom on tokens-for-work-done.

So let me get this straight:

Developers use LLMs to migrate a million-line codebase to a language that they have much less experience with, in such a short amount of time that they likely do not have a good mental model of the migrated code.

At least the tests pass.

Only one person drove the migration, so the number of people that understand the new code is ~0.5, under the assumption that there's no way the sole dev could build a mental model of 1M lines of fresh code in 6 days.

This is code for a language runtime.

It's great that the tests pass, but it's really hard for me to interpret this as anything other than horrible mismanagement of a promising project. When you sit this low in the stack this is grossly irresponsible, and I have no idea why anyone would use Bun after this. You'd be literally adopting a runtime the devs presumably don't understand; keep in mind they now somehow need to evolve and maintain this in the future.

Hopefully this remains an experiment, or Bun has some plan for re-upping dev knowledge of the codebase. Sorry but a component with massive blast radius like a runtime isn't really a good candidate for vibe coding, no matter how good the AI is. I'd like the maintainers to actually understand their runtime, thanks.


Thank you for putting into words the gut feeling I had in my top comment here. I didn't have the full explanation ready for why this threw me off.

They won't; they will continue to vibe code it until it collapses under them and the project fades into obscurity. Which it will regardless, since it was acquired by Anthropic.

Node beat Deno and Bun. Pretty impressive.


It's going to be interesting to see how this holds up in production after a release or two.

Obviously bun having been acquired by Anthropic changes the arithmetic a bit, but I'd love to see the token cost/consumption of this initiative.

That's amazing. Over time I got a few memory-related crashes with Bun, but I have deep respect for the performance work put in. Hopefully Rust's compiler will help even more.

Off-topic: I'm wondering, now that more JS finds a place on our machines and bundle size is a second-order concern for most, whether a revival of Prepack or projects in the same vein would be worth it, especially with agents.


Has anybody thought through the legal aspects of this, regarding code ownership?

As far as I understand the situation in the US (sorry, no idea where he is located), output from LLMs, once published, is essentially in the Public Domain, since there isn't any human who owns it.

However, in some sense, this is also a machine-assisted translation from one computer language into another, so one could argue that the ownership of the original code base still applies to the new one.

Which one is it? Is there any way to find out before a similar case goes to court?


> output from LLMs, once published, is essentially in the Public Domain, since there isn't any human who owns it

That’s not what the court case in question was about: https://www.morganlewis.com/pubs/2026/03/us-supreme-court-de...

If I ask an LLM to come up with an entirely new story on its own, the output is not copyrightable.

But if I feed an LLM a Tom Clancy novel and ask it to regurgitate that same novel, I cannot legally then put the output on a website for anyone to download.


Have an AI rewrite one of the Microsoft source code leaks to Rust and publish it as open source on GitHub. We will soon find out what the answer is.

When is someone going to do a Linux Rust rewrite?

It’s relatively easy to get a basic Unix-like kernel together. Hardware compatibility (and the associated testing) is where it gets hard.

Now in Zig, Julia, Nim, Crystal. I just love programming languages.

But in all honesty, I don't understand the extremism of Rust engineers who reject any other language.


I think it was Rich Hickey who said "Programmers understand the benefits of everything, and the costs of nothing."

I'm also reminded of video game forums where everyone argued whether the Xbox or Playstation is better, not because they're genuinely interested in the pros and cons of each system, but because they only have an allowance to buy one of them, so they're trying to gaslight everyone and themselves into believing the one they picked is better. In the case of programming languages, there's only so much time in the day, so the people who post on this site go all-in on the programming language they picked, and will rationalize any reason they can think of to believe the language they picked is better.


Were there perhaps licensing issues with the original? https://www.phoronix.com/news/Chardet-LLM-Rewrite-Relicense

Serious question… Who’s going to want to run a vibe coded runtime in production?

I don’t see how this is a good look for Bun?


One should care about the tests more than about how the code was written.

If I had a codebase with lots of tests and asked someone else to rewrite it to another language passing the same test suite, I honestly wouldn't expect a great quality job.

I say this because it happened 3 times in the company I work for: we conducted experiments by tasking different companies with rewriting the same code in another language. All of them passed most of the tests, but code quality was low. If the job is a black box, rely on the I/O to determine quality, not the inner workings.


I care that runtime developers know and understand their codebase deeply. 1M LOC written by 1 dev in a short time does not inspire confidence in such an important dependency.

There's no way this code is understood fully by the original author, let alone anyone else. I wouldn't accept this from an intern, let alone in code that's fundamental to my business.


I have seen, many times, code that has lots of tests but doesn't work.

Why?

Some of the patterns that I saw:

* The code is only called from tests but never called in production

* Tests are not testing the actual application logic, or the logic that matters. In some cases, the tests have nothing to do with the application code at all, because they do not even run any application code.

* Tests repeat the same logic as the application (tautology). All the time. (Sketched below.)

* Application code is actually incorrect. But tests just end up using the wrong expected value to make tests pass, disregarding what should happen.

That's using the latest models.

To make things worse, apparently people never bothered to go through the manual workflow at least once to verify the behavior.

Good luck just relying on tests.
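
Here's a contrived Rust sketch (mine, not from any codebase discussed here) of that tautology pattern: the test re-derives the expected value with the same logic as the code under test, so both are wrong together and the test still passes.

  // Buggy: should be price * (1.0 - rate).
  fn discounted_price(price: f64, rate: f64) -> f64 {
      price * rate
  }

  #[cfg(test)]
  mod tests {
      use super::*;

      #[test]
      fn discount_is_applied() {
          // Repeats the buggy formula instead of asserting the
          // independently known answer (80.0), so the bug sails through.
          assert_eq!(discounted_price(100.0, 0.2), 100.0 * 0.2);
      }
  }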


I think you and I don't share the same definition of a "test". Are you thinking about unit tests? I'm thinking about unit tests, smoke tests, integration tests, e2e tests, functional tests, manual QA tests and probably even "the-product-works-as-expected-as-I-can-see-from-the-amazon-reviews-of-our-clients" tests.

I agree with your point of view in general, but "having tests" doesn't mean "having great tests". If I rewrite my code and give the binary to our clients and they don't see any difference or bug, well, that means the rewrite passed the ultimate test. In fact, the percentage of our clients that care about implementation details (such as PL) is precisely 0%.


I just see a ton of reflexive AI hate here. I don't care if it was vibe coded, if it passes the entire test suite and was vibe coded by the original authors, I trust it as much as the original Bun. These are Jarred's words about it:

> it’s basically the same codebase except now we can have the compiler enforce the lifetimes of types and we get destructors when we want them. and the ugly parts look uglier (unsafe) which encourages refactoring.

> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

This makes me trust it more, not less.
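
To illustrate the two things the quote points at, a minimal sketch (mine, obviously not Bun's code): cleanup is tied to scope via Drop, and anything memory-unsafe has to be spelled unsafe, so the ugly parts are easy to spot.

  struct Buffer {
      data: Vec<u8>,
  }

  impl Drop for Buffer {
      // Runs automatically when a Buffer goes out of scope -- the
      // "destructors when we want them" part; there's no free() to forget.
      fn drop(&mut self) {
          println!("releasing {} bytes", self.data.len());
      }
  }

  fn main() {
      let buf = Buffer { data: vec![0u8; 1024] };
      println!("using {} bytes", buf.data.len());
      // buf is dropped here. Leaking it would take an explicit
      // std::mem::forget(buf), which stands out in review in a way
      // a missed manual free does not.
  }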


How many tokens did this port consume?

Bun is owned by Anthropic and so has access to Mythos & unlimited tokens.

The answer is... more than any of us could likely afford.


would be fun to do zig -> rust -> zig and to measure the delta

(in a VAE-ish way, kl div on the embeddings?)


also feels like a good posttraining task
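
roughly what I have in mind, as a toy sketch with made-up numbers: embed the original Zig source and the zig -> rust -> zig result, normalize both embeddings into distributions, and measure the divergence. Zero would mean the round trip preserved the "meaning" as the embedder sees it.

  // KL(P || Q) over two hypothetical, already-normalized embeddings.
  fn kl_divergence(p: &[f64], q: &[f64]) -> f64 {
      p.iter()
          .zip(q.iter())
          .filter(|(&pi, &qi)| pi > 0.0 && qi > 0.0)
          .map(|(&pi, &qi)| pi * (pi / qi).ln())
          .sum()
  }

  fn main() {
      let original = [0.5, 0.3, 0.2]; // made-up embedding of the Zig source
      let round_trip = [0.45, 0.35, 0.2]; // made-up, after zig -> rust -> zig
      println!("KL = {:.4}", kl_divergence(&original, &round_trip));
  }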

> and crashes and stability issues

inb4 .unwrap() / slice / etc hell + livelocks & deadlocks + resource leaks & toctou bugs + larger exposure to supply chain attacks

Still, ~1M LOC ported in a work week (400 LOC/min over a 40-hour week, wtf?) and almost all of it working is pretty wild. I hope the guy managed to maintain normal function, 'cause I've found that getting into flow with AI is even more self-consuming and intoxicating than without it, and that was already potentially rather rough.


At 100 agents in parallel that's 4 LOC/min per agent, and 100 agents is a lower bound on what they had access to.

It's not so much the agents' throughput I'd be worried about; I meant more that at such speed, large parts of this are pretty much guaranteed to go completely unsupervised / unchecked. Like literal "LGTM + god bless + fuck it we ball" tier.

Interesting! I wonder how the performance is compared to the Zig version

What license is this? Let me guess, it is not GPL...

Unlike the GNU coreutils rewrite in Rust, the Bun rewrite in Rust is being undertaken by the owners of the project.

That said, yes, you’re correct that Bun isn’t GPL: https://github.com/oven-sh/bun?tab=License-1-ov-file


Hmm, that's unfortunate - why does so much Rust stuff seem to default to MIT/BSD? Just because Mozilla used that for most of the Rust stuff?

Do developers using Rust even know the difference? Like how anyone can basically take all your work & base a proprietary fork on it with maybe saying "thanks" (attribution) if they feel like it? :P


> Like how anyone can basically take all your work & base a proprietary fork on it with maybe saying "thanks" (attribution) if they feel like it? :P

I'd assume the Bun people got a bit more than a thanks when Anthropic acquired them. :)

You also can't take your GPL code (unless you do CLAs with all contributors), convert it to closed source yourself and make a massive VC funded startup around it. Which is about the only other way anyone makes better money from open source than by just working for a big tech company.


I'm very aware when I pick Apache-2. I want attribution for my work, but I don't care about open source purity. I respect closed source software and I put my open source code up for free because I don't care to profit off of my hobbies.

for the same reason most ruby and javascript/typescript stuff is. Heck, even most python.

Most of them never got into the GPL in the first place.


Your guess is correct! Congrats. Bun itself is not GPL either, by the way. Oh, and the Rust compiler isn't GPL either.

They could also do a rewrite of CC itself to Rust.

The flagship product is both the cash cow (subsidizes rewrite) AND the labor (amortizes? rewrite).

Curious how the test suite was applied. Was it ported from Zig to Rust beforehand?

Almost all of Bun's tests are written in JavaScript and run in Bun itself.

Deleted

@simonw explains how hilariously misguided that paper is in one of the top comments, and how it doesn't apply remotely to a real agent harness. Plus it's not even clearly relevant here, because the model isn't trying to regurgitate the original document but to generate a new one, and there are guardrails to put it back on track in the form of a compiler and tests. Also, the test suite is very thorough and pre-existing, and the vast majority passes already. This is skepticism for the sake of it.

Perhaps you can elaborate on how your comment is relevant to Bun's experiment here.

3 years from now: Linux ported to Rust in 6 days.

And on the seventh day Claude ended His work which He had done, and He rested on the seventh day from all His work which He had done


That's a fun point. I honestly don't think it will happen in 3 years, but I think it will surely be doable in 10.

More interestingly: will we need to care about the code at all, at that point?


Do scala.js next

Bunner

The fastest large-scale rewrite in the history of software engineering, likely

will this mean opencode is finally portable?

There is some really cool work to port opencode's underlying opentui to Node.js, including some new FFI work in Node itself that got merged (called... drum roll please... node:Ffi!). Really cool stuff: https://github.com/anomalyco/opentui/pull/939 https://github.com/nodejs/node/pull/62762

Also worth noting that opentui is... Zig!

Very unclear what it's going to take to get this reviewed and shipped, but it has some very high potential. I've seen some other changes going by in opencode for Node.js compatibility; I'm not sure what besides the tui has Ffi needs that might be gating; maybe nothing!


Merge with Deno

This is a good reminder that tooling choices compound over time. The short-term speedup matters less than whether the next maintainer can still reason about the system.

Kinda crazy to use AI to switch from zig to rust in a tool that runs js. Bin bun and use a real lang to begin with. No reason to have that extra layer anymore.

Lol, I had a similar thought as well, but more along the lines of "We're coming for you next, JavaScript!"

But the effort is certainly an exquisite rearrangement of the deck chairs, no?


Bun runs TypeScript directly without external tooling.

bun script.ts just works.

Otherwise I bet it wouldn't even be a blip on our radar.


Being an Anthropic-acquired project, does he have access to Mythos, or is it the normal Claude we plebs have access to?

This is entirely possible with Claude as it existed even last year.

The LLMs are quite good at re-writes and even better when provided an 'oracle' like a well rounded test suite or existing implementation to work against.

It's part of the reason we keep seeing "I rewrote <library> in <language>" posts on Hacker News, and when you look at the repo it's more like "I prompted Claude to rewrite this repo in Rust" or whatever.


As an Anthropic acquihire, not only does he have access to every model and service but he probably has infinite tokens available.

Bun powers Claude.


Also, isn't it a great ad for Anthropic itself? One wonders

Indeed, knowing the amount of tokens spent would be very interesting.

best way to kill an open source project in 2026 - use AI to port it to Rust.

Bun alert!

Explain it for dummies. Isn't Zig a programming language? Why are they rewriting a programming language in another programming language?

They're not rewriting Zig. They're rewriting Bun, which is currently written in Zig.

Jarred's post is singlehandedly shitting on Zig's reputation. Not good juju for him to post like that.

"I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues"

Bun was Zig's poster child. If it moves away, Zig becomes yet another random language like Nim or Crystal.


I'd feel better having that kind of person out of my community.

First of all, did he not pick the language for Bun himself? Then he introduced a bunch of memory bugs; sounds like a skill-issue cascade.

I remember a podcast from some years ago where he touted how amazing Zig was for letting them be so performant, which was Bun's claim to fame; now he turns around and shits on the thing. Interesting persona.


[flagged]


> absolute position of hating something such as AI and progress

Most takes I've seen are far more nuanced.

Key is that 'progress' has a positive connotation. It is different from change. Mere change - such as new inventions - may not necessarily be aligned with progress in a field, society, etc.

Change may be inevitable, but it's up to us humans to sculpt it into progress.


But I am talking about Zig and others who have the same stance. Zig has a very strict no-LLM / no-AI contribution policy, and it likely got in the way of the Bun maintainers at Anthropic. From [0]:

>> No LLMs for issues.

>> No LLMs for patches / pull requests.

>> No LLMs for comments on the bug tracker, including translation.

[0] https://codeberg.org/ziglang/zig#strict-no-llm-no-ai-policy


They don't hate it. There's no antagonism that I know of there. I believe they want it to be fully human-authored and want low-hanging fruit items to be good onboarding for developers, not targeted by AI contributions. Simon Willison wrote a good blog post on it: https://simonwillison.net/2026/Apr/30/zig-anti-ai/

The Bun pull request was refused for additional reasons: 'AI is entirely beside the point here...': https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

None of this is, in the original comment's text, "hating... AI".


That's true, but the author might have decided on his own. Not everything is a marketing plan.

Meh. I prefer Java, all hours of the day, every day of the week.


> why: I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues. it would be so nice if the language provided more powerful tools for preventing these things.

As expected. Modula-2 / Object Pascal-like safety was great during the last century, before automatic resource management and improved type systems became common in this century.

Naturally, I also have to note: wasn't this supposed to be only an experiment, nothing serious?


An update on Bun’s experimental migration from Zig to Rust:

The Rust rewrite now passes 99.8% of Bun’s pre-existing Linux x64 glibc test suite.



