Hacker News — Jeremy1026's comments

One of Anthropic's lines in the sand was domestic mass surveillance.

> > Secondarily, we're talking about domestic surveillance / law enforcement. That would be domestic.

> One of Anthropic's lines in the sand was domestic mass surveillance.

And?


Some people feel that mass surveillance is wrong whether it is domestic or not. For those people, being OK with mass surveillance as long as it is not done to your kind is a morally wrong stance.

> And?

A little more effort/less obvious bait would go a long way to fostering a more productive discussion.


People's need for food and shelter doesn't go away because their employer is unethical.

I don't think you could find a single person working for OpenAI who couldn't find employment elsewhere within a month paying more than enough for food and shelter. This is a ridiculous statement.

These people are now dependent on their level of income. And they don't like financial uncertainty, just like anyone else.

But yeah, I'd expect them to change jobs in the coming year; otherwise I'm going to agree with you.


>These people are now dependent on their level of income.

Kinda sucks if you take a seven-figure-per-annum job and are now dependent on that level of income. Quick question: is this true for everyone? If I take a job that pays twice what I earn now, will my food spending double, for instance? Or is this an American thing?


In some verticals of the American industrial/business sector it's actually a bit of a shibboleth, I think. There's a certain mentality around visible conspicuous consumption that signals to those in the upper class that you're a prime candidate for leaning on: you're hungry, will do anything to stay where you are, and can be relied upon to "play ball with the big boys," in part because if you don't, and try to take them down, they know what you did to get there.

Someone who doesn't participate in that is something to be wary of: less purchase for manipulation, possibly an indication of a lesser degree of skin in the game, an indication of different priors, I guess. I've often wondered if there's a similar distrust between the nouveau riche and old money for similar reasons. Wouldn't know myself, though; I haven't bumped around in quite those high circles.

There are many employers. OpenAI employees that quit on account of this will be in high demand at the other AI companies, especially the ones that don't bend over in 30 seconds when Uncle Donald comes calling.

There's always someone in the world who will defend anything.

As if the people working at OpenAI had no choice but to pick this cushy job (some have salaries of $500k per year) instead of anything else.

It's an extreme personal opinion, but: all people working at OpenAI after this debacle are more than happy to make AI for war, because Food and Shelter.

I find your comment fitting for this forum; it's where all this enabling started anyway.


Indeed, it is worth noting that Sam Altman got his chance through PG/YC and that YC was totally fine with both Musk and Zuckerberg giving them a platform long after it became evident that they had some screws loose in the ethics department.

Effectively the message is 'we don't mind you being an asshole, as long as you're rich'.


Per levels.fyi, the median salary for most OpenAI positions is above $300k. Even "technical writers" have a median pay of $197k. I searched around the internet and it seems even entry-level positions pay well above $150k. Apart from people with severe lifestyle bloat or an unholy number of dependents, I doubt many people working there would face immediate financial difficulties if they quit.

Anyway, it is also amusing to hear tech people defend their right to earn some of the fattest salaries on this planet using the smol-bean technique after a decade of "why wouldn't the West Virginian coal miner just learn to code?" It was always about maintaining the lifestyle of yearly Japan vacations and MacBook upgrades, never about subsistence.


> OpenAI hires "technical writers"

Mind blown. Isn't documentation a prime use case for "AI"?


As a technical writer who's spent a great deal of time recently editing AI-drafted documentation, this use case is not going to go as well as AI boosters think it is. :)

Have you ever seen the back of your head, without a mirror? Without two mirrors, actually?

How can AI accurately describe itself in full?


The problem it has describing itself isn't the lack of a metaphorical mirror: tool use is there, and it can grep whatever code or research has been written. The problem is that machine learning is surprisingly slow to update with new info.

Ask ChatGPT to describe itself and you may get valid documentation and API calls, or you may get the API for GPT-3 (not ChatGPT, but its predecessor). I have had both happen.


Elephant.

Did it in one word, easy

What's next?


No, it's prone to assuming or falsifying details even when it has the tools at hand that could verify the true details. Even when explicitly instructed to perform a specific tool call that would load the correct information into its context. Sometimes the pull of the training data is too strong and it will just not make the call and output garbage, all the while claiming otherwise.

Dear poster, you have no idea how far you can get with your belt tightened when your humanity, principles, ethics, and everyone else's freedom from oppression are on the damn line. If you do not resist now, when? When the collar snaps shut around everyone's neck?! If it's only your own comfort you're worried about, you should be doubly ashamed. If you have dependents, be honest with them about what the risks actually are, and I guarantee you, they may surprise you with their willingness to accept more modest accommodations for the sake of doing what is right, and think more highly of you for having the courage to stand up for it. A torrent of shame upon you for your callousness, and may you learn from the error of your ways.

Great comedy line, you're very funny!

"I was just following orders"

The Nuremberg Defence of the 2020s will be "the Agent did it."

It will totally happen.

It already started:

https://www.bbc.com/travel/article/20240222-air-canada-chatb...

> the airline said the chatbot was a "separate legal entity that is responsible for its own actions".


I don't think everyone working for OpenAI is unethical. But it is ridiculous to frame highly paid people, working for companies quite a few of their peers avoid for ethical reasons, as poors with no choice.

What an utterly pathetic, cowardly, spineless and defeatist statement

Your IAPs are simply "Strength AI $2.99" and "Strength AI $29.99". As a potential downloader, this tells me nothing. Given the current app market, I'll assume from these names that I can't do anything without paying, and will skip the download every time.

So, what is your proposed alternative? Roll your own everything? Put your trust in a dozen small companies with no reputation?

Yeah, actually, that's what I do. I pay for important services and choose companies that actually respond to you. For email, calendar, etc., I pay Fastmail, for example. But really, you can't avoid having to trust someone or something.

As much as possible, though, I don't use services at all for important things. My photos, for example, are not in the cloud, and I have backups where that is not possible. Do I have a Google account? Yes, I have many. What would happen if Google locked me out of one? Nothing; I'd move on, because I don't care.

You can't eliminate the risk, but you can do things to limit it, and that starts with recognising that Google/Apple/Microsoft don't give a shit about you or your data, and that you are not worth their time if their systems flag your stuff for deletion.


Please do.

Not even close. A friend and I are working on an iOS game (a tower-defense-style game). We are writing zero code ourselves. We both have a history of iOS development; he is still actively involved, and I've moved away from it in recent years.

In about two weeks we have a functional game: 60 levels, 28 different types of enemies, a procedurally generated daily challenge mode, an infinity mode, tower crafting and upgrades, and an in-game economy to pay for the upgrades.

This likely would have taken us months to reach otherwise; it was playable on Day 2.


Could you explain how a chat session progresses, with an example if possible?

I start with what I want to build. In the initial prompt I provide an overview of what I want, and then some specifics. Last night I added an archive to the Daily Challenge mode, so if you missed a day's challenge you could go back and play it. This is what my initial prompt looked like:

---

I'd like to add an archives mode to the daily challenge. This will allow players to complete any daily challenges they didn't attempt on the actual day.

It will look like a calendar, with the dates in Green if it was played, and in white if not.

The archive should only go back to January 30, 2026, the day the project started. Include a to do to change this date prior to release.

Rewards for completing daily challenges via the archive should be 25% of the normal value.

---

Claude Code then asked me a couple of clarifying questions before it harnessed the superpowers:writing-plans skill and generated a document to plan the work. The document it put together is viewable at https://gist.github.com/Jeremy1026/cee66bf6d4b67d9a527f6e30f...

There were a couple of edits that I made to the document before I told it to implement. It then fired off a couple of agents to perform the tasks in parallel where possible.

Once it finished, I tested it, and it worked as I had hoped. But there were a couple of follow-up things that would make it more intertwined with everything else going on around daily challenges. So I followed up with:

---

lets give 1 cell for compelting an archived daily challenge

---

And finally:

---

Now that we are tracking completions, can we update the notification to complete daily mission to include "Keep your X day streak"

---


Sounds like I should give Claude Code another try. The last time I worked with it, it was quite eager to code without a good plan, and would overcomplicate things all the time.

Not entirely relevant, but the example I remember: I asked for help with SQL to concatenate multiple rows into a single column in SQL Server, and instead of reminding me to use STRING_AGG, it started coding various complicated joins and loops.

So my experience is/was a little different. Regardless, I think I should take one of my old programs and try implementing it from ground up by explaining the issue I'm trying to solve to see how things progress, and where things fail.


Another example is the tower stat caps. When Claude Code generated the first pass, it made it so that the tower level controlled each individual stat's cap, which was way too high. I didn't know exactly what the limits should be, but knew they needed to be pulled back some. So I asked it:

-Start Prompt-

Currently, a towers level sets the maximum a single stat can be. Can you tell me what those stat caps are?

-End Prompt-

This primed the context with information about the stat caps and how they are tied to levels. After it gave me back a chart of Tower Level vs. Max Stat Rank, I followed up with some real stats from play:

-Start Prompt-

Lets change the stat cap, the caps are currently far too high. All towers start at 1 for each IMPACT stat, my oldest tower is Level 5, and its stats are I-3, M-4, P-6, A-3, C-1, T-1. How do you think I could go about reducing the cap in a meaningful way.

-End Prompt-

It came back with a solution: reduce the individual cap for each stat to tower level + 1. But I felt that was too limiting; I want players to be able to specialize a tower, so I told it to make the stat cap total, not per stat.

-Start Prompt-

I'm thinking about having a total stat cap, so in this towers case, the total stats are 18.

-End Prompt-

It generated a couple of structures of how the cap could increase and presented them to me.

-Start Prompt-

Yes, it would replace the per-stat cap entirely. If a player wants to specialize a tower in one stat using the entire cap that is fine.

Lets do 10 + (rank * 3), that will give the user a little bit of room to train a new tower.

Since it's a total stat cap, if a user is training and the tower earns enough stat xp to level beyond the cap, lock the tower at max XP for that stat, and autoamtically level the stat when the user levels up the tower.

-End Prompt-

It added the cap but introduced a couple of build errors, so I sent it just the build errors:

-Start Prompt-

/Users/myuser/Development/Shelter Defense/Shelter Defense/Views/DebugTowerDetailView.swift:231:39 Left side of mutating operator isn't mutable: 'tower' is a 'let' constant

/Users/myuser/Development/Shelter Defense/Shelter Defense/Views/DebugTowerEditorView.swift:181:47 Left side of mutating operator isn't mutable: 'towerInstance' is a 'let' constant

-End Prompt-

And thus, a new stat cap system was implemented.
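For anyone curious what the resulting rule amounts to, here's a minimal Swift sketch of the total stat cap described above. The names (`Tower`, `train`) and the dictionary representation are hypothetical illustrations, not the actual project code; only the formula, 10 + (level * 3), and the specialize-freely behavior come from the thread.

```swift
// Hypothetical sketch of the total stat cap system described above.
struct Tower {
    var level: Int
    var stats: [String: Int]   // the IMPACT stats, each starting at 1

    // Total cap grows with tower level: 10 + (level * 3),
    // so a level-5 tower can hold 25 total stat points.
    var totalStatCap: Int { 10 + level * 3 }

    var totalStats: Int { stats.values.reduce(0, +) }

    // Training succeeds only while the *total* stays under the cap;
    // a player may pour the entire budget into one stat to specialize.
    mutating func train(_ stat: String) -> Bool {
        guard totalStats < totalStatCap else { return false }
        stats[stat, default: 1] += 1
        return true
    }
}
```

Incidentally, because `train` would be a `mutating` method on a struct, calling it (or applying `+=` to a stat) on an instance declared with `let` produces exactly the "Left side of mutating operator isn't mutable: 'tower' is a 'let' constant" error quoted above; declaring the instance with `var` is the usual fix.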


What do your prompts look like? Garbage in-Garbage out applies to using AI, probably at a much larger scale than other applications of the phrase.


This is certainly a take.


Thanks G


You've written and deleted several of these "AI doomer" posts this week. I think you're projecting a deep-rooted personal desire and not really addressing the pragmatic preexisting demand that makes AI important. War, misinformation, automation, fraud, pornography, none of these applications are going away. In many ways, they're more accessible and rewarding than ever before. If you want to hedge a bet on humanity making terrible choices, then AI is a perfectly antifragile investment.

It's not pleasant to imagine the full spectrum of AI applications. The same could be generalized for edtech, defense, surveillance, security and privately-owned prison economics. Alas, they're still with us and immensely lucrative.


Why'd you make a new account? Why not tie these opinions to your main?


Not even close. Not yet at least. AI is definitely helping with menial coding tasks, but the more complex stuff is still best left to the human in the loop. And the HitL is still needed to make sure the basic stuff is done well.

