
I swapped to Bazzite on my gaming rig (5800X3D, 64GB DDR4, 4080 Super 16GB) and it's been fantastic. I tried Omarchy for a bit to have that machine do double duty as a dev/gaming machine, but the gaming experience on Omarchy feels like a second-class citizen compared to what Bazzite is optimizing for, and I realized that the Hyprland setup and tiling window manager add a lot more friction for my normal gaming needs. (I just want a few Path of Exile 2 windows open to tab between while gaming, and the tiling setup in Omarchy had me hitting more hiccups between fullscreen and windowed mode than I care to troubleshoot on my gaming rig.)

Immutability in OS updates is also something I didn't know I needed until I experienced it on Bazzite; pretty advantageous as a gamer using Linux with nVidia hardware these days.

This is my second go-around on Bazzite. YMMV, but I opted for GNOME over KDE this time and have had zero issues running the games I'm into (WoW, PoE2), and none of the funky window-management issues I ran into with KDE.

I'm considering a move to a Framework machine in the very near future, and still need to settle on a distro for dev; most of that is done on an M3 Max MacBook these days.


I was an avid user of Pipes and blogged a bit about the experience of using it to build an aggregated set of feeds from various employee blogs for our company site back in 2009 [0]. It holds a special place in my memory alongside early internet greats like del.icio.us [1].

- [0] https://blog.davemo.com/posts/2009-04-06-yahoo-pipes-at-vend...

- [1] https://en.wikipedia.org/wiki/Delicious_(website)


Man I miss delicious

I can appreciate the effort put into the optimization described in the post, even if I disagree with the conclusions. All of that effort would be much better directed at a manual (or LLM-assisted) audit of the E2E tests, choosing what to prune to reduce CI runtime.

DHH recently described[0] the approach they've taken at Basecamp, reducing ~180 comprehensive-yet-brittle system tests down to 10 good-enough smoke tests, and it feels much more in the spirit of where I would recommend folks invest effort: teams have way more tests than they need for an adequate level of confidence. Code and tests are a liability, and, to paraphrase Kent Beck[1], we should strive to write the minimum amount of tests and code to gain the maximum amount of confidence.

The other wrinkle here is that we're often paying through the nose (in complexity, and in actual dollars spent on CI services) by choosing to run all the tests all the time. Figuring out how not to do that is a noble and worthy goal, _but_ I think the conclusion shouldn't be to throw more $$$ into that money pit. Instead, use all the power we already have in our local dev workstations, plus trust, to verify something is in a shippable state, another idea DHH covers[2] in the Rails World 2025 keynote; the whole thing is worth watching IMO.

[0] - https://youtu.be/gcwzWzC7gUA?si=buSEYBvxcxNkY6I6&t=1752

[1] - https://stackoverflow.com/questions/153234/how-deep-are-your...

[2] - https://youtu.be/gcwzWzC7gUA?si=9zL-xWG4FUxYZMC5&t=1977


Agreed. When you have multiple developers working on the same code, you end up with overlapping test coverage as time goes on. You also end up with coverage that was written with good intentions, but you'll later find that some of it just isn't necessary for confidence, or isn't even testing what you think it is.

Teams need to periodically audit their tests, figure out what covers what, figure out what coverage is actually useful, and prune stuff that is duplicative and/or not useful.

OP says that ultimately their costs went down: even though using Claude to make these determinations is not cheap, they're saving more than they're paying Claude by running fewer tests (they run tests on a mobile-device test farm, and I expect that can get pricey). But they might be able to save even more money by ditching Claude and deleting tests, or by modifying tests to reduce their scope and runtime.

And given the current sophistication of LLMs, I would feel safer not having an LLM decide which tests actually need to run to ensure a PR is safe to merge. I know OP says that so far they believe it's doing the right thing, but a) they mention their methodology for verifying this in a comment here[0], and I don't agree that it's a sound methodology[1], and b) LLMs are not deterministic or repeatable, so it could choose two very different sets of tests if run twice against the exact same PR. The risk of that happening may be acceptable, though; that's for each individual to decide.

[0] https://news.ycombinator.com/item?id=45152504

[1] https://news.ycombinator.com/item?id=45152668


Veni, Vidi, Vici


Nice


I submitted an application for W24 that fits in the "Developer tools inspired by existing internal tools" category but wasn't accepted. I suspect my pitch needed work; I also hadn't started building anything yet and applied as a solo founder, which it seems has a lower chance of being accepted.

Here's the pitch and some details, in case anyone else is interested in the idea:

> Supportal uses AI to generate internal tooling for startups that enables founders to scale customer-support without having to rely on engineering resources.

> Given some simple input context like tech-stack and a database schema, Supportal uses AI to auto-generate internal tools which allow customer-support to easily answer questions about and take action on customer-data without needing help from an engineer.

> Supportal offers founders a fully-featured self or cloud-hosted web UI.

Retool (https://retool.com), Zapier (https://zapier.com), Airtable (https://www.airtable.com), Superblocks (https://www.superblocks.com), and Google AppSheet (https://about.appsheet.com) would likely be primary competitors, although their products require heavy user interaction to build internal tools either through composition in a WYSIWYG editor, low/no-code solutions, or integrations expertise using a full programming language.

Although I'm no longer there, we actually evaluated and/or used all of these tools at Pulley, so I've had first-hand experience with their friction points and the gaps Supportal would fill.

These tools are also all targeted at integration experts who have the technical knowledge to write code and the time to spend building the tool they want.

Supportal aims to intelligently generate the tooling you need via AI introspection, getting you up and running out of the box with useful command and query tools that help your customer-support team take action and gain insights without help from engineering.

My most recent experience comes from building internal tools for Pulley; I built the initial version in ~3 weeks and added features over ~2 years, roughly 2-3 months of full-time work spread over that period.

Features were added as we identified gaps in our support agents' ability to answer questions and take action, gaps that often required dedicated engineering resources and led to a productivity loss for both groups.

That said, I haven't actually built out anything that would _generate_ tools like this yet, but I've done enough adjacent work in the codegen/AI space in the last couple years that I feel confident I could put the pieces together.


Merry Christmas to all my fellow hackers! :]


This may be slightly tangential, but I recently discovered ncc[1] from Vercel, which can compile a single Node project and all of its dependencies into a single file.

As an added benefit, it also collapses all of the bundled dependencies' license files into a single licenses.txt!

- [1] https://github.com/vercel/ncc
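
From memory (so treat this as a rough sketch and double-check the flags against the ncc README), usage looks something like:

  npx @vercel/ncc build src/index.js -o dist --license licenses.txt

which should produce a single dist/index.js with all dependencies inlined, plus the combined dist/licenses.txt.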


I have a friend who is a mail carrier (which I think is the most accurate gender-neutral term) for Canada Post. She often refers to herself as a "postie" as well, and to her co-workers as "posties". :]


Ah nice, a quick search revealed it's used in Australia, NZ and Canada too :D


It comes in waves. I'm in my 22nd year of developing software and I can reflect on times when I felt incredibly confident and didn't shy away from sharing my opinions, and other times when I felt like it was better to remain silent because I had things to learn from others.

There are times for both speaking and listening in our careers; as I've progressed further in my career I have felt it is often more valuable to exercise active listening.


I’m a lifelong musician who recently started diving into the world of audio production. I like most music genres and try to experiment with producing a little from each.

Here are a few of my recent tracks that I think turned out ok:

- Rush-inspired synthwave: https://soundcloud.com/dmosher/rushing-in

- Lo-fi jazz/hip-hop: https://soundcloud.com/dmosher/the-soggy-sunday-swing

I’ve got some more experimental stuff up on Bandcamp as well: https://davemo.bandcamp.com/


Reminds me of some of the older Grafton Primary stuff. Nice work.

