That's exactly how it's supposed to work: Arch expects you to always check the notes in their news section before you update. The NVIDIA driver issue and its solution were posted on Dec 20th.
I'm not saying I read these regularly, just that yes, that's the expected way.
So it wouldn't be incorrect to describe Arch and Arch-based distros as 'well, if you want to have fun with a broken system, otherwise avoid', just so they can be summed up succinctly when talking about which distros one could try.
No, I don't think that's a fair way to put it. People regularly report having quite old Arch installs without stability issues. And people also regularly advise Linux newcomers not to pick Arch.
If you check their news section, it's a reasonable number of notes, 13 for last year. I think it's fair to say it seems to work well if you are willing to follow their procedure and already know what you're doing.
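For what it's worth, the "check the news first" habit can be made cheap. Here's a minimal sketch (just an illustration, not an official tool) that pulls the public Arch news RSS feed and prints the latest headlines, so a quick skim before `pacman -Syu` takes a few seconds:

    # Minimal sketch (stdlib only): print recent Arch news headlines
    # before running an upgrade. Assumes the public Arch Linux news
    # RSS feed at https://archlinux.org/feeds/news/.
    import itertools
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED_URL = "https://archlinux.org/feeds/news/"

    def recent_arch_news(limit: int = 5) -> None:
        with urllib.request.urlopen(FEED_URL, timeout=10) as resp:
            root = ET.fromstring(resp.read())
        # RSS items live under channel/item; each has a title and pubDate.
        for item in itertools.islice(root.iter("item"), limit):
            title = item.findtext("title", default="")
            date = item.findtext("pubDate", default="")
            print(f"{date}  {title}")

    if __name__ == "__main__":
        recent_arch_news()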
I feel there's also a fundamental difference between tinkering to get something working and tinkering to remove user-hostile features. In the first case the goals of the OS and the user are aligned; in the second they aren't.
It's the same for me. I understand that people do want to use them without plugging in, but I would imagine at least most developers prefer external screens, right?
For me the battery is good enough when it can last two back-to-back meetings without me getting worried, so about 2.5 hours. Otherwise it stays plugged into USB-C.
The monitor gets both its power and the video signal over one USB cable. My MacBook Pro can run 5-6 hours while powering the monitor. I couldn't do that if the laptop by itself only lasted 3 hours.
Every now and then I use my iPad as a third monitor.
Australia indeed has a great coffee culture; I had no idea before visiting. On my holiday there I never had bad or mediocre coffee, which was a bit wild. Not once.
You can get great coffee in Europe of course, but you'll specifically have to go to a cafe that knows what they're doing, or the likelihood of a mediocre brew is high. Australians, I bet most of you don't realize how good you have it there.
When I was in Queensland I was a bit flabbergasted that every roadside sandwich shoppe, no matter how small or God-forsaken, served excellent coffee. Thankfully the world is healing and American coffee culture is improving. Starbucks has as much difficulty getting a toehold in New Orleans as it did in Brisbane, and for the same reason: there are so many places you can get better coffee that no one wants to go to Starbucks.
I used to play Aardwolf too, great memories. I think I got to 6x remort before going on hiatus. I'd be tempted to return, except I decided a couple of decades ago that no more games that don't have an end.
My first MUD was Phoenix. At some point they did a big update that forced all players to restart from level 1, and subsequently lost their player base and died. I'm sure I learned something from that debacle.
And as far as I can see, it's a total waste of silicon. Anything running on it will be so underpowered anyway that it doesn't matter. It'd be better to dedicate the transistors to the GPU.
The latest Ryzen mobile CPU line didn't improve performance compared to its predecessor (the integrated GPU is actually worse), and I think the NPU is to blame.
If you ask NVIDIA, inference should always run on the GPU. If you ask anybody else designing chips for consumer devices, they say there's a benefit to having a low-power NPU that's separate from the GPU.
Okay, yeah, and those manufacturers' opinions are both obvious reflections of their market positions, independent of the merits. So what do people who actually run inference say?
(Also, the NPUs usually aren't any more separate from the GPU than tensor cores are separate from an Nvidia GPU, they are integrated with the CPU and iGPU.)
If you're running an LLM there's a benefit in shifting prompt pre-processing to the NPU. More generally, anything that's memory-throughput limited should stay on the GPU, while the NPU can aid compute-limited tasks to at least some extent.
The general problem with NPUs for memory-limited tasks is either that the memory throughput available to them is too low to begin with, or that they're usually constrained to formats that require wasteful padding/dequantizing when read (at least for newer models), whereas a GPU just does that in local registers.
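To put rough numbers on the memory- vs. compute-bound split, here's a back-of-the-envelope sketch; the model size, precision, and prompt length are illustrative assumptions, not measurements of any real chip:

    # Back-of-the-envelope arithmetic intensity: why per-token decode is
    # memory-bandwidth bound while prompt prefill is compute bound.
    # All numbers are illustrative assumptions (a hypothetical 7e9-parameter
    # model with fp16 weights), not measurements.
    PARAMS = 7e9                   # model parameters (assumed)
    BYTES_PER_PARAM = 2            # fp16 weights (assumed)
    FLOPS_PER_PARAM_PER_TOKEN = 2  # one multiply-accumulate per weight per token

    def arithmetic_intensity(tokens_per_weight_pass: int) -> float:
        """FLOPs per byte of weight traffic when this many tokens share one pass over the weights."""
        flops = FLOPS_PER_PARAM_PER_TOKEN * PARAMS * tokens_per_weight_pass
        bytes_moved = PARAMS * BYTES_PER_PARAM  # weights are read once for the whole pass
        return flops / bytes_moved

    print(arithmetic_intensity(1))     # decode, one token at a time: ~1 FLOP/byte -> bandwidth-bound
    print(arithmetic_intensity(2048))  # prefill of a 2k-token prompt: ~2048 FLOP/byte -> compute-bound

That's the intuition behind shifting prefill to whatever has spare compute (the NPU) while leaving token-by-token generation wherever the memory bandwidth is.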
Another upside of LIDAR is that it isn't a camera. The robot sees a one-pixel-high 360° scan, which is quite enough for navigation but doesn't have the privacy implications that come with an IoT camera device. I would not take a camera-equipped vacuum even for free, and I think I'm not the only one.
I have one, and specifically got one without a camera because I don't want that driving around my house. The first time it went through I made sure to stow cables and such, and I do a quick walk-through to make sure that none of the cats have barfed and that there are no obvious obstacles before I release the hypnodrone.
It still saves me time, which was the reason that I bought it in the first place.
Sadly most new models have both a camera and LIDAR. I can see the use of a camera for avoiding things like cables and pet poo, but I don't think it's worth it, especially since all these robots are controlled via the cloud.
I do the same, except I track even a bit less. I track monthly all account balances, salary/income, paid taxes, transfers to investment accounts, rent/mortgage, and health insurance. This is enough to map investment performance, financial position, and necessary living costs. Pretty much everything else is "a life lived" expense and I don't see a need for any finer tracking there.
If you have plenty of income, tracking grocery expenses might be a waste of time, like tracking how much air you breathe.
If your income is barely meeting your expenses (or worse), the grocery budget is one of the few places you can make meaningful decisions that will lower your costs.
And there's a tricky middle ground where it feels like you have enough income, but if you don't pay attention, you'll die by a thousand cuts to small expenses that feel insignificant by themselves.
Yes, very true. It might be useful to start with overtracking to see where the money actually goes.
I didn't track anything until about 5 years ago as my mental model was "track all purchases", and I wasn't willing to do that. Someone had to point out that higher level tracking can be quite useful too, and this is what I found to work well for me. That's why I bring this up in related topics: it's not an all-or-nothing choice.
I don't track purchases at all; I track savings. I find that much more sustainable. I have a savings rate goal (50%) and automatic withdrawals to meet that goal. If I can't pay my bills without dipping into savings, only then will I reevaluate my spending. As long as my savings rate is what it needs to be, I don't worry about spending money.
I'm a freelancer with variable income so I can't do a fixed investment rate. For me it's useful to see what the actual rate was vs. what I'd like it to be, and the high-level view is quite enough to track that.
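Concretely, the high-level version can be as small as this sketch; the months, amounts, and the 50% target are made-up illustrations, not my actual numbers:

    # Minimal sketch of high-level tracking: monthly income and transfers to
    # investment accounts in, actual savings rate vs. a target out.
    # All figures (and the 50% target) are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Month:
        label: str
        income: float    # net income for the month (variable for a freelancer)
        invested: float  # transfers to investment/savings accounts

    TARGET_RATE = 0.50

    months = [
        Month("2024-01", income=6200, invested=2800),
        Month("2024-02", income=3900, invested=1500),
        Month("2024-03", income=7400, invested=4100),
    ]

    for m in months:
        rate = m.invested / m.income if m.income else 0.0
        print(f"{m.label}: saved {rate:.0%} (target {TARGET_RATE:.0%}, gap {rate - TARGET_RATE:+.0%})")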
I used to track every single payment, thinking I was doing excellent budgeting. But in reality it was mostly bikeshedding. The current approach works well for me, but I can totally see a use for tracking more in some cases. If I were a smoker or a heavy drinker I'd probably track those, for instance.