Hacker News | alxhill's comments

The commenters below are confusing two things - Rust binaries can be dynamically linked, but because Rust doesn’t have a stable ABI you can’t do this across compiler versions the way you would with C. So in practice, everything is statically linked.


Rust's stable ABI is the C ABI. So you absolutely can dynamically link a Rust-written binary and/or a Rust-written shared library, but the interface has to be pure C. (This also gives you free FFI to most other programming languages.) You can use lightweight statically-linked wrappers to convert between Rust and C interfaces on either side and preserve some practical safety.
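A minimal sketch of what that looks like on the Rust side (function name is illustrative):

```rust
// `extern "C"` fixes the calling convention and `#[no_mangle]` keeps the
// symbol name stable, so a C caller — or any language with C FFI — can link
// against this whether the crate is built as a `cdylib` or a `staticlib`.
#[no_mangle]
pub extern "C" fn rust_add(a: i32, b: i32) -> i32 {
    a.wrapping_add(b)
}

fn main() {
    // The same function is callable from Rust; a C caller would declare it as
    //   int32_t rust_add(int32_t a, int32_t b);
    assert_eq!(rust_add(2, 3), 5);
    println!("ok");
}
```

For a shared library you'd set `crate-type = ["cdylib"]` in Cargo.toml; the exported surface stays pure C either way.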


> but the interface has to be pure C. (This also gives you free FFI to most other programming languages.)

Easy, not free. In many languages, extra work is needed to provide a C interface: strings may have to be converted to zero-terminated byte arrays, memory that can be garbage-collected may have to be pinned, structs may have to be converted to C struct layout, etc.
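In Rust, for instance, the string conversion is explicit. A small sketch using the standard library's `CString`:

```rust
use std::ffi::CString;

fn main() {
    // A Rust &str carries its length separately and is not NUL-terminated,
    // so passing it to a C `const char *` parameter means copying it into a
    // zero-terminated buffer first (and rejecting interior NUL bytes).
    let c_string = CString::new("hello").expect("no interior NUL bytes");
    assert_eq!(c_string.as_bytes_with_nul(), &b"hello\0"[..]);
    println!("ok");
}
```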


Specifically, the rust dependencies are statically linked. It's extremely easy to dynamically link anything that has a C ABI from rust.


A culture issue: in the C++ world of the Apple and Microsoft ecosystems, shipping binary C++ libraries is a common business, even if it is compiler-version dependent.

This is why Apple made such a big point of having a better ABI approach on Swift, after their experience with C++ and Objective-C.

While on the Microsoft side, you will notice that all of Victor Ciura's talks at Rust conferences name ABI as one of the key issues Microsoft is dealing with in the context of Rust adoption.


C++ binaries should be doing the same. Externally, speak C ABI. Internally, statically link Rust stdlib or C++ stdlib.


Exporting a C API from a C++ project to consume in another C++ project is really painful. This is how you get COM.

(which actually slightly pre-dates C++, I think?)


> This is how you get COM. (which actually slightly pre-dates C++, I think?)

No. C++ is from 1985 (https://en.wikipedia.org/wiki/C%2B%2B), COM from 1993 (https://en.wikipedia.org/wiki/Component_Object_Model)


COM is actually good though. Or if you want another object system, you can go with GObject, which works fine with Rust, C++, Python, JavaScript, and tons of other things.


OWL, MFC, Qt, VCL, FireMonkey, AppFramework, PowerPlant...

Plenty do not, especially on Apple and Microsoft platforms, because they always favoured other approaches over bare-bones UNIX support in their dynamic linkers and C++ compilers.


Static linking also produces smaller binaries and lets you do link-time-optimisation.


Static linking doesn't produce smaller binaries. You are literally copying a library's code into your executable rather than simply referencing its symbols and letting the dynamic linker figure out how to map them at runtime.

The sum size of a dynamic binary plus its dynamic libraries may be larger than one statically linked binary, but whether that holds for more static binaries (2, 3, or 100s) depends on how much of each library's surface area your application uses. It's relatively common to see large libraries dynamically linked only, with the build going to great lengths to produce them as shared objects and have the executables link them via a location-relative RPATH (using the $ORIGIN feature), to avoid the binary-size bloat across large sets of binaries.


Static linking does produce smaller binaries when you bundle dependencies. You're conflating two things - static vs dynamic linking, and bundled vs shared dependencies.

They are often conflated because you can't have shared dependencies with static linking, and bundling dynamically linked libraries is uncommon in FOSS Linux software. It's very common on Windows or with commercial software on Linux though.


You know how the page cache works? Static linking makes it not work. So 3000 processes won't share the same pages for the libc but will have to load it 3000 times.


You can still statically link all your own code but dynamically link libc/other system dependencies.
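That is in fact Rust's default on glibc Linux targets: all crate dependencies are linked statically while libc stays dynamic. A rough sketch of the knobs involved (`-C target-feature=+crt-static` is the real rustc flag; the target triples are the usual Linux ones):

```shell
# Default on x86_64-unknown-linux-gnu: all Rust crates statically linked,
# libc (and friends like libm) dynamically linked.
cargo build --release

# Fully static binary instead, usually done against musl
# (musl targets default to +crt-static; the flag makes it explicit):
RUSTFLAGS="-C target-feature=+crt-static" \
    cargo build --release --target x86_64-unknown-linux-musl
```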


Not with rust…


I wonder what happens in the minds of people who just flatly contradict reality. Are they expecting others to go "OK, I guess you must be correct and the universe is wrong"? Are they just trying to devalue the entire concept of truth?

[In case anybody is confused by your utterance, yes of course this works in Rust]


Can you run ldd on any binary you currently have on your machine that is written in rust?

I eagerly await the results!


I mean, sure, but what's your point?

Here's nu, a shell in Rust:

    $ ldd ~/.cargo/bin/nu
        linux-vdso.so.1 (0x00007f473ba46000)
        libssl.so.3 => /lib/x86_64-linux-gnu/libssl.so.3 (0x00007f47398f2000)
        libcrypto.so.3 => /lib/x86_64-linux-gnu/libcrypto.so.3 (0x00007f4739200000)
        libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f473b9cd000)
        libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f4739110000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f4738f1a000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f473ba48000)
        libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f473b9ab000)
        libzstd.so.1 => /lib/x86_64-linux-gnu/libzstd.so.1 (0x00007f4738e50000)
And here's the Debian variant of ash, a shell in C:

    $ ldd /bin/sh     
        linux-vdso.so.1 (0x00007f88ae6b0000)
        libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f88ae44b000)
        /lib64/ld-linux-x86-64.so.2 (0x00007f88ae6b2000)


Well, it seems I was wrong about linking C libraries from Rust.

The problems of increased RAM requirements and constant rebuilds are still very real, if somewhat smaller thanks to dynamically linked C libraries.


That would have been a good post if you'd stopped at the first paragraph.

Your second paragraph is either a meaningless observation on the difference between static and dynamic linking or also incorrect. Not sure what your intent was.


Why do facts offend you?


I’m genuinely curious now, what made you so convinced that it would be completely statically linked?


I think people often talk about Rust only supporting static linking so he probably inferred that it couldn't dynamically link with anything.

Also Go does produce fully static binaries on Linux and so it's at least reasonable to incorrectly guess that Rust does the same.

Definitely shouldn't be so confident though!


Go may or may not do that on Linux depending what you import. If you call things from `os/user` for example, you'll get a dynamically linked binary unless you build with `-tags osusergo`. A similar case exists for `net`.
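A sketch of those escape hatches (the build tags and `CGO_ENABLED` are real Go toolchain knobs; whether the default build links libc depends on your platform and imports):

```shell
# With cgo available, importing os/user or net pulls in the libc resolvers:
go build ./...

# Force the pure-Go implementations so the binary stays fully static:
go build -tags 'osusergo netgo' ./...

# Or disable cgo wholesale:
CGO_ENABLED=0 go build ./...
```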


go by default links libc


It doesn't. See the sibling comment.


Kind of off-topic. But yeah it's a good idea for operating systems to guarantee the provision of very commonly used libraries (libc for example) so that they can be shared.

Mac does this, and Windows pretty much does it too. There was an attempt to do this on Linux with the Linux Standard Base, but it never really worked and they gave up years ago. So on Linux if you want a truly portable application you can pretty much only rely on the system providing very old versions of glibc.


The standard library is the whole distro :)

It's hardly a fair comparison with old Linux distros when macOS certainly will not run anything old… remember they dropped Rosetta, Rosetta 2, 32-bit support, OpenGL… (the list continues).

And I don't think you can expect windows xp to run binaries for windows 11 either.

So I don't understand why you think this is perfectly reasonable to expect on linux, when no other OS has ever supported it.

Care to explain?


Static linking produces huge binaries. It lets you do LTO, but the amount of optimisation you can actually do is limited by your RAM. Static linking also causes the entire archive to need constant rebuilds.


You don't need LTO to trim static binaries (though LTO will do it): `-ffunction-sections -fdata-sections` in your compiler flags combined with `--gc-sections` (or equivalent) in your linker flags will do it.
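Spelled out as a sketch with the GNU toolchain (Clang and LLD accept the same spellings; `main.c` is a placeholder):

```shell
# Give every function and data item its own section:
gcc -Os -ffunction-sections -fdata-sections -c main.c -o main.o

# Then let the linker discard the sections nothing references:
gcc main.o -Wl,--gc-sections -o main
```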

This way you can get small binaries with readable assembly.


> Static linking also causes the entire archive to need constant rebuilds.

Only relinking, which you can make cheap for your non-release builds.

Dynamic linking needs relinking every time you run the program!


zsh has been the default on macOS for a while now so it's pretty commonly used


Sure, but the comment was

> But for server side we as usual stick to default shipment, vim, zsh etc

And macOS isn't a server OS.


For what it's worth, a friend of mine is a lawyer in a well-known hedge fund and he gets access to their funds too (funds that would not otherwise be accessible without making a substantially larger investment I believe).


Sure, but most modern OSes would inform the user when the WiFi connection dies - so there's something happening on the CPU too.


On a state change, sure. But you're not frequently gaining & losing WiFi access.


This is false and represents a poor understanding of how these models work - they do abstract concepts and no you can't trivially get training images out.


it's exactly the argument that the court cases against training on copyrighted works without permission are using

if the courts agree and this is ruled to be infringement then we'll be left with products of the level of quality we see here

meaning the technology will be an economic dead end

here's to hoping


Copyright is about copying. It is not about observing. Reading a copyrighted book isn't infringement. Writing out copies of it is, even if you don't use a computer or anything.


>they do abstract concepts

I don't think, given that we don't even have a particularly full understanding of how human conceptual logic functions, that we can claim that even AIs are using it as well. It's only "abstracting" in the sense that it has labeled one million objects in its training set with the word "tree," and fuses many of those images together to form a general picture dependent on specific parameters made to limit its set (oak tree, winter tree, etc.)

But that is different from me or you using the word tree, which is just a signifier among signifiers, it stands for nothing but a negation of the very thing it points to in a certain set of symbolic relations. Humans communicate in the order of symbolic structures, our minds function much more like LLMs, creating multitudinous pattern relations. What you call "abstract concepts" are of a secondary order imposed to create rigorous exactitude overtop the riddled mess that we call the human psyche.


On the other hand, Facebook went from about 4,000 employees to over 70,000 in the same time period, so given how much Apple's business has grown in the decade since they released the iPhone 5, it seems pretty impressive that they grew only ~2x.


How is that different with LLMs versus badly-written human generated content? Most clickbait/SEO articles are as poorly researched as they come, and shouldn't be assumed to be accurate anyway.


> Anybody who lost their sense of taste or smell took months to recover that. Loss of smell or taste is almost 50% for Covid prior to Omicron. Even with Omicron, it's about 20%.

And yet, in more anecdata, everyone I know who had taste issues recovered them in much less than a month with no long-term impacts.


I think the complexity arrives when the 14-year-old asks the follow up question "why?"


Though couldn’t a teenager (older than 14, but still) understand that deferring payment for something is useful in the context of a part-time job? I would only get paid on certain days, and knowing roughly what I’d get at the end of the month after tips and hours, I could budget for that number rather than what I had in the moment.


Then the teenager might ask, sure, that's credit like a bar tab or whatever and credit is convenient for all kinds of reasons, it's just, why do you need a whole other company to get involved and issue a specialized card to do that? Why doesn't the bank just say "we'll let you overdraft up to $4000 without any fuss but you'll have to pay interest on the negative balance"?


Because if you get scammed it's easier dealing with a credit card company than your bank.


Why? (5 “why”s in total)


Because when your credit card is scammed, it's the credit card company's money that has been stolen, so they want to get it back. When it's your debit card being scammed, it's your money being stolen, and any help getting it back is just a cost center for the bank, so they don't really invest in this side of their business the way credit card companies do.


So there's a bunch of existing intermediaries, which is bad, and to fix that we're going to add the exact same set of intermediaries, but on top of Bitcoin, which means it's good?


Margins are insane and can definitely come down.


Boiling the oceans is incredible value add.

