Well, it's a good thing you're not on the hook for defending against it, then.
Like I said in another comment, you don't have licenses just because they're cool and look neat. You have them specifically to guard against people like patent trolls, who are trying to wreck your shit and take your lunch money. It's not an abstract risk.
No, they're absolutely correct, and they're not saying either of those things. They're pointing out an enormous hidden risk. Yanno, like an engineer is supposed to do.
You don't have a license because it's what all the cool kids are doing, you have one in case shit goes sideways and someone decides to try and ruin your day. You do, in fact, have to assume the worst.
The "nuke" here is that some litigious company -- let's call them Patent Troll Rebranded (PTR) -- discovers that the LLM reproduced large amounts of their copyrighted code. Or claims to have discovered it. They have large amounts of money and lawyers to fight it out in court, and you are a relatively shoestring language foundation.
Either you have to unwind years of development to remove the offending code or you're spending six figures or more to defend yourself in court, all because you didn't bother to anticipate things that are anticipatable.
People are busy, and small bugfixes are usually not that critical. If you make everyone drop everything to review everything, that is far more dysfunctional.
No, it's broadly true. Also, that's why we have code review and tests, so that it has to pass a couple of filters.
LLMs don't make mistakes like humans make mistakes.
If you're a SWE at my company, I can assume you have a baseline of skill and you tested the code yourself, so I'm trying to look for any edge cases or gaps or whatever that you might have missed. Do you have good enough tests to make both of us feel confident the code does what it appears to do?
With LLMs, I have to treat the code like it came from a hostile adversary trying to sneak in subtle backdoors. I can't trust any of it to have been done honestly.