Debian has apparently shipped Chromium on many architectures for a long time now. I've never tried it outside of x86_64, so I can't say how usable it is. What am I missing? Is this about the V8 JIT and Widevine? Although those must already be supported on Chromebooks, so I don't know.
List of architectures on oldstable (bookworm): amd64, arm64, armhf, i386, ppc64el
From where I stand it seems they enabled a build architecture for Chrome, but I don't think this required a lot of porting effort. Kudos for the official support though.
I'd guess that the issue is running the `%install` and `%check` stages of the .spec file. The Python library rpy (to pull a random example from Marcin's PRs) runs its pytest test suite and had to be modified to avoid running the vector tests on RISC-V.
Splitting build and test is obviously a solvable problem, but perhaps the time savings aren't worth the complexity.
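The kind of change described above is usually just an architecture-conditional skip in the test suite. A minimal sketch of the pattern, using stdlib unittest rather than rpy's actual pytest suite (the test name and skip reason are made up for illustration):

```python
import platform
import unittest

class VectorTests(unittest.TestCase):
    # Hypothetical stand-in for a test exercising vector extensions;
    # it is skipped when the build/test machine reports riscv64.
    @unittest.skipIf(platform.machine() == "riscv64",
                     "vector extensions unavailable on RISC-V builders")
    def test_vector_double(self):
        self.assertEqual([x * 2 for x in [1, 2, 3]], [2, 4, 6])

# Run the suite programmatically; on non-RISC-V machines the test executes,
# on riscv64 it is reported as skipped.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(VectorTests))
```

With pytest the equivalent is a `pytest.mark.skipif` decorator keyed on the same `platform.machine()` check.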
Maybe the tests could be run under user-mode QEMU instead of running the whole build under QEMU or on RISC-V hardware. With binfmt_misc set up on the builders, that could be more or less seamless.
Near as I know, Fedora prefers native compilation for the builds.
Your question made me look up Arm's history in Fedora, and I came upon this 2012 LWN thread[1]. There was already some discussion against cross-compilation back then.
Yocto, which we use at work, manages just fine to build a whole embedded Linux distro this way. So I don't see why Fedora couldn't make it work if they wanted to. You could even scp the test suites over to run them on native systems if you wanted.
Yocto manages it thanks to the tireless effort of a community of people maintaining patches and unholy hacks for a ton of software to make it cross compilable. And they have nowhere near the amount of recipes that Fedora has.
This is true, but as I understand it the hacks are mostly in the C and C++ recipes. Something like Rust, or especially Go or Zig, is far easier to cross-compile.
I've personally found cross-compiling Rust easy, as long as you don't have C dependencies. If you have C dependencies it becomes much harder.
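For the pure-Rust case, pointing Cargo at a cross toolchain is only a couple of config lines; the pain with C dependencies is mostly getting a matching cross compiler and sysroot installed. An illustrative fragment (the target triple and linker name here are just examples, not from the thread):

```toml
# .cargo/config.toml — example cross setup for an aarch64 glibc target
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
```

Crates that compile C via the `cc` crate additionally need the cross C compiler discoverable, e.g. through a target-specific `CC_*` environment variable, which is where the extra difficulty tends to creep in.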
This suggests that spending time to upstream cross compilation fixes would be worth it for everyone, and probably even in the C world, 20% of the packages need 80% of the effort.
libstdc++'s <print> is very heavy, reflection or not. AFAIK there is no inherent reason for it to be that heavy, fmtlib compiles faster.
<meta> is another question, it depends on string_view, vector, and possibly other parts. Maybe it's possible to make it leaner with more selective internal deps.
std::print author here. Indeed, std::print shouldn't be expensive to compile, it's just a thin wrapper around a single type-erased function. The only reason why it is expensive in libstdc++ is that the type-erased function is inlined which goes against the proposed design but unfortunately can't be enforced via the standard wording and remains a Quality of Implementation (QoI) issue.
I don't know the exact details, but I have heard (on C++ Weekly, I believe) that it offers some advantages when linking code compiled with different compiler versions. That said, I normally avoid it and use fmtlib to avoid the extra compile time, so it isn't clear to me that it's a win. Header-only libraries are great in small projects, but on large codebases with thousands of files they really hit you.
It also bloats binary size if you statically link libc++, because of localization, regardless of whether you use it. This wasn't true for fmtlib because it doesn't support localization. stringstream has the same problem, and it's one of many reasons embedded has stuck with printf.
The binary bloat is also caused by unnecessary inlining and the linker eliminates most of it (but it's still annoying e.g. for godbolt). {fmt} supports a superset of std::format and std::print features including localization. stringstream's bloat is unrelated and mostly caused by large per-call binary code from concatenation-based API.
Fun. I took out a tape measure to see how accurate it was. It wasn't very accurate. Also, the scale on the left scrolls faster than my finger. Fennec (Firefox) on Android.
The scale on the left was also very stuttery. Even when scrolling slowly I could see the distance at the bottom updating at a very high frame rate, while the scale on the left only moved occasionally, which felt awful.
The insanity is that both NTP and Unix timestamps need to be wound back during a leap second, as do computer hardware clocks. If we just had monotonic clocks everywhere and adjusted for leap seconds in presentation, we wouldn't have many of the associated problems.
It's not like we wind clocks back by 24 hours on leap days; that would be insanity. So in addition to leap seconds being a rare problem, they're also handled in a uniquely bad way.
This makes sense for timestamps in traditional logs. You don't have to second guess the order of things, especially across multiple systems or services.
I meant Unix and NTP times, which are supposedly just monotonic numbers marching forward (except for leap seconds), not the UTC representation of abstract time.
I know, in UTC we just get a 60th second in the minute. But what Unix and NTP timestamps do (or originally did) is repeat a second. Then we got other hacks to keep monotonicity, like smearing, not without tradeoffs.
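To make the "repeating a second" concrete, here's a small Python illustration: Unix time is defined by plain calendar arithmetic (86400 seconds per day), so the 2016 leap second gets the same timestamp as the midnight that follows it.

```python
import calendar

# calendar.timegm converts a UTC time tuple to a Unix timestamp using plain
# arithmetic, which is exactly why Unix time ignores leap seconds:
# 2016-12-31T23:59:60 UTC (the leap second) and 2017-01-01T00:00:00 UTC
# land on the same number, so the timestamp effectively repeats a second.
leap = calendar.timegm((2016, 12, 31, 23, 59, 60, 0, 0, 0))
midnight = calendar.timegm((2017, 1, 1, 0, 0, 0, 0, 0, 0))
print(leap, midnight)  # 1483228800 1483228800
```

A monotonic clock with leap-second handling pushed into presentation, as suggested above, would never produce the same count twice.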
11th problem here:
https://ortvay.elte.hu/2009/E09.pdf
I was the only one who handed in a solution for that particular problem; it was scored 70 out of 100. I no longer have my solution, but I doubt that it was very accurate, and I didn't have time for experiments.