They already refuse to comply with CPRA, instead electing to replace your username with a random 6(?) character string, prefixed with `_`, if I remember correctly.
I know, because I've been here since maybe 2015 or so, but this account was created in 2019.
So any PII you have mentioned in your comments is permanent on Hacker News.
I would appreciate it if they gave users the ability to remove all of their personal data, but in correspondence and in writing here on Hacker News itself, Dan has suggested that they value the posterity of conversations over the law.
One difference people don't seem to mention enough, in my view, is that neither macOS nor Windows has ANY feature remotely close to the magic SysRq key.[1][2]
Not even Control-Alt-Delete is remotely the same.
ALT-SysRq-f, which will "call the oom killer to kill a memory hog process, but ... not panic if nothing can be killed," should truly be available on every modern operating system, but nope, only Linux has it.
This is the first I've heard about this! I have always slightly missed the authority Control-Alt-Delete seemed to have on Windows, and this does seem to be a good (maybe better) alternative.
I still have a slightly older laptop that has one, proving that it is totally possible to fit the key within the size constraints. That is one of THE features that has kept me from buying a new laptop even though I kind of need to; modern ones waste the space with filler between keys, bigger keys, or just empty sides.
That was fascinating, I had no idea. And, surprisingly, I don't think I knew of this channel, which seems to have a ton of interesting videos. Thanks for that.
I’ve always been curious what the ramifications are for security researchers who openly have accounts on such forums in order to track the dissemination of leaks.
Because everyone and their mother sends spam. It doesn't take a genius to figure out why. No, I don't want emails about the new product you're releasing. No, I don't want your newsletter.
I have to actively fight to keep my inbox empty and free of crap. And while I'm ranting, notifications I actually want on my phone are co-opted by advertisements, which Apple and Google should actively prevent, but they won't, because they use push notifications to commingle advertising with important, sometimes time-sensitive notifications.
Everyone thinks they have something worth sharing. Over three nines of the time, people don't. I will happily be a part of the 0.1% who send your crap straight to the spam filter for Gmail to train on.
It's spam. It's almost all spam. Even the transactional stuff for logging in, I don't want. Just use an email, password, and TOTP. Stop sending me emails.
Sharing my 5 cents on the matter: in another world, gaming, where embedding scripting languages is done for modding, I hope to see WASM take off as a way for modern modders to get into game development.
I've seen smaller developers experimenting with this, but I haven't heard of larger orgs doing it, possibly because UGC took the place of modding as well. I come from an older world: 20 years ago, developers like me would have had their hands on an actual SDK that wasn't part of a long microtransaction pipeline.
In my org's case, where we built an entire game engine off Lua, and previously had done Lua integration in the Source Engine, I would have loved to have had sandboxing from the start rather than trying to think about security after the fact.
To the article's point: even if you were to add sandboxing today in those environments, I suspect you'd still be faster than some of the fastest embedded scripting languages, because they're just that slow.
I've been looking into this. There seems to be some mostly-repeating 2D pattern in the LSB of the generated images. The magnitude of the noise seems to be larger in the pure black image vs pure white image. My main goal is to doctor a real image to flag as positive for SynthID, but I imagine if you smoothed out the LSB, you might be able to make images (especially very bright images) no longer flag as SynthID? Of course, it's possible there's also noise in here from the image-generation process...
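For anyone who wants to poke at this themselves, here's a minimal numpy sketch of the kind of LSB inspection I mean (the function names are mine, and loading the image into a uint8 array, e.g. with Pillow, is left out):

```python
import numpy as np

def lsb_plane(img: np.ndarray) -> np.ndarray:
    """Least-significant-bit plane of an HxWx3 uint8 image (values 0/1)."""
    return img & 1

def lsb_fill_rate(img: np.ndarray) -> float:
    """Fraction of LSBs that are set: ~0.5 looks like noise; values far
    from 0.5 suggest flat regions -- or a structured, embedded pattern."""
    return float(lsb_plane(img).mean())

def smooth_lsb(img: np.ndarray) -> np.ndarray:
    """Zero every LSB. If SynthID really lives (partly) in the LSB, a
    detector might stop flagging the smoothed copy."""
    return img & 0xFE
```

Comparing `lsb_fill_rate` on the pure-black vs. pure-white generations would at least make the magnitude difference quantifiable.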
Gemini really doesn't like generating pure-white images but you can ask it to generate a "photograph of a pure-white image with a black border" and then crop it. So far I've just been looking at pure images and gradients, it's possible that more complex images have SynthID embedded in a more complicated way (e.g. a specific pattern in an embedding space).
I just tried this idea, and it looks like it isn't that simple.
> "Generate a pure white image."
It refused no matter how I phrased it ¯\_(ツ)_/¯
> "Generate a pure black image."
It did give me one. In a new chat, I asked Gemini to detect SynthID with "@synthid". It responded with:
> The image contains too little information to make a diagnosis regarding whether it was created with Google AI. It is primarily a solid black field, and such content typically lacks the necessary data for SynthID to provide a definitive result.
Further research: Does a gradient trigger SynthID? IDK, I have to get back to work.
I got downvoted heavily about a year ago for saying we need to abandon Android and that the industry needs to pivot back to just putting GNU/Linux on a phone already.
Of course, now Google is doing what Google was always going to do.
It seems like it's the cheapest way to access Claude Sonnet 4.5, but the model distribution is clearly throttled compared to Claude Sonnet 4.5 on claude.ai.
That being said, I don't know why anyone would want to pay for LLM access anywhere else.
ChatGPT and claude.ai (free) and GitHub Copilot Pro ($100/yr) seem to be the best combination to me at the moment.
There's a lot of hate in this thread, but there are plenty of engineers champing at the bit for autonomous workflows, because browser-use isn't there yet, and cloud expenses from the major providers are unappealing given how much relatively powerful local compute is available.
It’d be fine if they included a big disclaimer at the top that this is beta software and they’re not liable for blah blah blah, but without such a disclaimer it’s reasonable to assume the software is ready for production. I think much of the hate is coming from GH misrepresenting its software and people being surprised by the many minor bugs.