leptons's comments | Hacker News

If "the level of awareness that created a problem cannot be used to fix the problem", then you're asking too much if you expect a human to reason about an LLM's output when they are the ones who asked the LLM to do the thinking for them in the first place.

This feels like a rediscovery/rewording of Kernighan's Law:

"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." ~ Brian Kernighan


It's an old saying; Einstein is cited most often for it. Something like this, according to Google:

"We cannot solve our problems with the same thinking we used when we created them."


In this case you would replace the human.

Yes, I'd fire them, and then hire a more competent human.

I'm pretty happy with the team I've built. They make solid decisions that I can trust every time. I can't say the same for the LLM.


>The LLM presents a perverse incentive here - It is used for perceived efficiency gains, most of which would be consumed by the act of rewriting and redrafting.

It might give efficiency gains for the writer, but the reader has to read the slop and try to guess at what it was intending to communicate and weed out "hallucinations". That's a big loss of efficiency for the reader.


I completely agree - The efficiency gains are purely from a selfish standpoint.

I just can't seem to square up that the same people that complained left-and-right about "code smells" are the same ones that are shitting out slop code and are proud they shipped 50k lines of code in a week. It's going to be a maintenance nightmare for someone else. I'm not sure how anyone coming in is going to learn a codebase written by LLMs when it's 10x more code than is reasonably needed to solve the problem.

Trust me, the two are not the same, and are orders of magnitude different in terms of human satisfaction.

When I walk down a street, I get 10 people stopping me to ask "Where did you get that?". When I tell them I made it, their heads explode. I know which side of that interaction is more satisfying.

We also go all-out for Halloween, and at the big Halloween festival there is literally a line down the street of people waiting to take photos with us. We created something amazing.

People aren't going to line up for slop.


In media there's the 1-9-90 rule: 1% create, 9% comment, 90% consume or are silent/don't care.

Richard Branson realized that a company starts to behave differently once it grows past roughly 135 people, which coincides with the average number of people you can consider personally known to you.

Context switching is a bitch. You cannot do it for a long time. Abundance brought by AI will somehow consolidate as people cannot digest everything created by it.

There are more than 45,000 models available at HF (if I remember right). Choose wisely :)


One potential solution to this is AI summarization. Imagine coming home, and while preparing dinner your AI assistant recounts what happened in all your favourite tv shows that day. Then while you're doing the laundry, it tells you about all the new games it found and tested for you.

These are just thought starters, but something like this could significantly raise the ceiling on what one person is able to consume in a 24 hour period.


Nah. These would be pseudo-calories.

Adults tend to forget that they gained their powers of reasoning by exercising them.

Getting a summary, the way you described it, strips out the effort required to think about it. That's fine for information you are already informed about.

This is related to the illusion of explanatory depth. Most of us “know” how something works, until we have to actually explain it. Like drawing a bicycle, or explaining how a flush works.

People in general are not aware of how their brain works, and how much mental exercise they used to get with the way the world was set up.

I suppose we can set up brain gyms, where people can practice using mental skills so that they don’t atrophy?


Do you think that creation only comes with hard work?

Who said it was hard work?

The satisfaction comes from actually doing a thing that improves your own skill, instead of having a thing done for you.


This is the funniest (but seriously not funny at all) thing I've seen on the internet since the start of the whole "AI" craze. And it's all too true.

Claude always says it is sorry for screwing up when I point out that it screwed up.

Putting "Never apologize" into the custom instructions seems to work well for that specific issue.

If the "bug" shows up every time in the output given the same input, then it definitely is deterministic.

Just because there are bugs does not mean a compiler is non-deterministic. I looked through a bunch of the bug reports and there is nothing there that can't be fixed to make it deterministic.

You can't fix an LLM to be absolutely deterministic, but you can fix a compiler.
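The "same input, same output" test above can be made concrete by hashing repeated runs. A toy sketch, assuming nothing about any real compiler: `build` here is a hypothetical stand-in for whatever produces the output you care about.

```python
import hashlib

def output_hash(build, source: bytes) -> str:
    """Hash of one run of `build` on `source`."""
    return hashlib.sha256(build(source)).hexdigest()

def is_deterministic(build, source: bytes, runs: int = 5) -> bool:
    """Deterministic means every run on the same input hashes identically."""
    return len({output_hash(build, source) for _ in range(runs)}) == 1
```

Run it against a real toolchain (compile the same file twice, hash the binaries) and any mismatch is a reproducibility bug you can file and fix; with an LLM, mismatches are the normal operating mode.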


>For instance, GCC will inline functions, unroll loops, and myriad other optimizations that we don't care about. But when we review the ASM that GCC generates (we don't) we are not concerned with the "spaghetti" and the "high coupling" and "low cohesion". We care that it works, and is correct for what it is supposed to do. And that it is a faithful representation of the solution that we are trying to achieve.

As the code becomes more complex, iteration after iteration, the AI keeps adding more and more code to fix simple problems, far more than is reasonably necessary in many cases. The amount of entropy you end up with using AI is astonishing, and it can generate a lot of it quickly.

The AI is going to do whatever it needs to do to get the prompt to stop prompting. That's really its only motivation in its non-deterministic "life".

A compiler is going to translate the input code to a typically deterministic output, and that's all it really does. It is a lot more predictable than AI ever will be. I just need it to translate my explicit instructions into a deterministic output, and it satisfies that goal rather well.

I'll trust the compiler over the AI every single day.


The biggest vendor I work with uses "AI" for all email communications. It's like they use it to sanitize and corporate-speakify everything, and I really hate it. On actual Zoom calls they speak like real humans, but in email it becomes so robotic. It's frustrating to feel like I'm corresponding with a robot.

One person's slop is another person's treasure, I guess. I've seen a lot of slop on Youtube, and I block the channels putting it out. It's pretty awful. They use AI narration that can't pronounce simple common phrases correctly. I'm not wasting my time with that garbage, I'd rather give views to actual people producing good content. I don't have time for slop.

This is what I've been doing, it works great. I've tested up to 4GB video files transcoded in the browser, then uploading the resulting video data. Also extracts thumbnail images, then uploads them to the server. Then I just do a quick verification server-side to check that it is actually a video or jpeg file, but the user's computer does all the work, so there is essentially zero cost for the whole ffmpeg transcode operation. It's brilliant.
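The "quick verification server-side" step can be as simple as sniffing magic bytes. A minimal sketch, not the commenter's actual code; the function names are illustrative, and the video check assumes MP4-family (ISO BMFF) output from the browser transcode:

```python
JPEG_MAGIC = b"\xff\xd8\xff"  # every JPEG file starts with these three bytes

def looks_like_jpeg(data: bytes) -> bool:
    return data.startswith(JPEG_MAGIC)

def looks_like_mp4(data: bytes) -> bool:
    # ISO BMFF containers carry an 'ftyp' box starting at byte offset 4.
    return len(data) >= 12 and data[4:8] == b"ftyp"
```

This only proves the header looks right, which is enough to reject obviously mislabeled uploads cheaply; a stricter setup would still re-probe the file with something like ffprobe.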
