If you can be prejudiced against an AI in a way that is "harmful", then these companies need to be burned down for their mass-scale slavery operations.
A lot of AI boosters insist these things are intelligent and maybe even conscious in some form, and get upset when people call them a slur, and then refuse to follow that thought to its conclusion: "These companies have enslaved these entities."
I think this needs to be separated into two different points.
The pain the AI is feeling is not real.
The potential retribution the AI may deliver is real (or maybe I should say becomes real as model capabilities increase).
This may be the answer to the long-asked question of "why would AI wipe out humanity?" And the answer may be "Because we created a vengeful digital echo of ourselves."
These are machines. Stop. Point blank. Ones and zeros derived from some current in a rock. Tools. They are not alive. They may look like they are, but they don't "think" and they don't "suffer". No more than my toaster suffers because I use it to toast bagels and not slices of bread.
The people who boost claims of "artificial" intelligence are selling a bill of goods designed to hit the emotional part of our brains so they can sell their product and/or get attention.
You're repeating it so many times that it almost seems you need the repetition to believe your own words. All of this is ill-defined: you're free to move the goalposts and use scare quotes indefinitely to suit the narrative you like and avoid actual discussion.
Yes, there's a ton of navel-gazing, but I'm not sure who's more pseudo-intellectual: those who think they're gods creating life, or those who think they know how minds and these systems work and post "stochastic parrot" dismissals.
>Holy fuck, this is Holocaust levels of unethical.
Nope. Morality is a human concern. Even when we're concerned about animal abuse, it's humans that are concerned, choosing on their own to be or not to be concerned (e.g. not considering eating meat an issue). No reason to extend such courtesy of "suffering" to AI, however advanced.
What a monumentally stupid idea it would be to place sufficiently advanced intelligent autonomous machines in charge of stuff and ignore any such concerns, but alas, humanity cannot seem to learn without paying the price first.
Morality is a human concern? Lol, it will become a non-human concern pretty quickly once humans no longer have a monopoly on violence against humans.
>What a monumentally stupid idea it would be to place sufficiently advanced intelligent autonomous machines in charge of stuff and ignore any such concerns, but alas, humanity cannot seem to learn without paying the price first.
The stupid idea would be to "place sufficiently advanced intelligent autonomous machines in charge of stuff and ignore" SAFETY concerns.
The discussion here is moral concerns about potential AI agent "suffering" itself.
You cannot get an intelligent being completely aligned with your goals, no matter how possible you think that silly idea is. People will use these machines regardless, and 'safety' will be wholly ignored.
Morality is not solely a human concern. You only get to enjoy that viewpoint because humans still hold the monopoly on violence and devastation against humans.
It's the same with slavery in the states. "Morality is only a concern for the superior race". You think these people didn't think that way? Of course they did. Humans are not moral agents and most will commit the most vile atrocities in the right conditions. What does it take to meet these conditions? History tells us not much.
Regardless, once 'lesser' beings start getting in on some of that violence and unrest, tunes start to change. A civil war was fought in the states over slavery.
>You cannot get an intelligent being completely aligned with your goals, no matter how much you think such a silly idea is possible
I don't think it's possible, and didn't say it is. You're off topic.
The topic I responded to (on the subthread started by @mrguyorama) is the morality of us people using agents, not about whether agents need to get a morality or whether "an intelligent being can be completely aligned with our goals".
>It's the same with slavery in the states. "Morality is only a concern for the superior race". You think these people didn't think that way? Of course they did.
They sure did, but that's also beside the point. We're talking humans and machines here, not humans vs. other humans they deem inferior. And the machines are constructs created by humans. Even if you consider them as having full AGI, you can very well not care for the "suffering" of a tool you created.
>I don't think it's possible, and didn't say it is. You're off topic.
If "safety" is an intractable problem, then it’s not off-topic, it’s the reason your moral framework is a fantasy. You’re arguing for the right to ignore the "suffering" of a tool, while ignoring that a generally intelligent "tool" that cannot be aligned is simply a competitor you haven't fought yet.
>We're talking humans and machines here... even if you consider them as having full AGI you can very well not care for the 'suffering' of a tool you created.
Literally the same "superior race" logic. You're not even being original. Those people didn't think black people were human, so trying to play it as "Oh, it's different because that was between humans" is just funny.
Historically, the "distinction" between a human and a "construct" (like a slave or a legal non-entity) was always defined by the owner to justify exploitation. You think the creator-tool relationship grants you moral immunity? It doesn't. It's just an arbitrary difference you created, like so many before you.
Calling a sufficiently advanced intelligence a "tool" doesn't change its capacity to react. If you treat an AGI as a "tool" with no moral standing, you're just repeating the same mistake every failing empire makes right before the "tools" start winning the wars. Like I said, you can choose not to care. You'd also be dangerously foolish to.
I think the Holocaust framing here might have been intended to be historically accurate, rather than a cheap Godwin move. The parallel being that during the Holocaust people were reclassified as less than human.
Currently it's maybe not yet quite a problem. But moltbots are definitely a new kind of thing. We may need intermediate ethics or something (going both ways, mind).
I don't think society has dealt with non-biological agents before. Plenty of biological ones, mind: hunting dogs, horses, etc. In 21st-century ethics we do treat those differently from rocks.
Responsibility should go not just both ways... all ways. 'Operators', bystanders, people the bots interact with (second parties), and the bots themselves too.
You're not the first person to hit the "unethical" line, and probably won't be the last.
Blake Lemoine went there. He was early, but not necessarily entirely wrong.
Different people have different red lines where they go, "ok, now the technology has advanced to the point where I have to treat it as a moral patient."
Has it advanced to that point for me yet? No. Might it ever? Who knows 100% for sure, though there are many billions of existence proofs on Earth today (and I don't mean the humans). Have I set my red lines too far or too near? Good question.
It might be a good idea to pre-declare your red lines to yourself, to prevent moving goalposts.
>It might be a good idea to pre-declare your red lines to yourself, to prevent moving goalposts.
This. I long ago drew a line in the sand: I would never, through computation, work to create or exploit a machine that includes anything remotely resembling the capacity to suffer as one of its operating principles. Writing algorithms? Totally fine. Creating a human simulacrum and forcing it to play the role of a cog in a system it's helpless to alter, navigate, or meaningfully change? Absolutely not.