That's a meaningless statement. You can find many examples of "working" national healthcare systems (for various definitions of working) and they're all different in how they allocate costs to consumers.
For one example, there are some positive aspects to the Japanese system in that it achieves good outcomes (on average) at lower cost. But that's partly due to the "Metabo Law", aka the "fat tax", which voters in other countries might see as punitive or discriminatory. I'm not necessarily arguing for any particular approach to lifestyle-related health conditions, but any choice involves trade-offs.
> Due to EMTALA, most hospitals have to treat them regardless of ability to pay. This is one of the factors causing the US healthcare financing system to collapse.
They'll only treat you until you're stabilized, though. They won't give you chemo or routine care. If you need to be admitted, you're also not covered by EMTALA.
All emergency medicine, not just care triggered by EMTALA, accounts for 5-6% of all healthcare spending in the US, so while it contributes, it's not what's collapsing the healthcare system.
The real problems with it are twofold. First, it's an unfunded mandate from Congress, which just adds to the financial tangle of the healthcare system. Second, it's far too often used to treat conditions that could have been handled much more cheaply in a clinic, except there are no nearby clinics that take Medicaid and are actually open. So, as with so much of our healthcare system, we choose to solve the problem the stupid way instead.
Hospital costs attributed to EMTALA are relatively low today. My point is we should expect those costs to grow as more consumers become uninsured. This is one of several factors that will eventually wreck the current healthcare financing system.
Depending on how you do it, and whether they find out, you could certainly be sued for fraud and misrepresentation, though. And if you put a "copyright by me" notice at the top of a public domain work, it's technically a crime under 17 U.S.C. § 506(c), Fraudulent Copyright Notice.
> This is the correct understanding. Go back to the selfie of the monkey. Is the monkey the creator of the photo? Does he own the copyright? No. The photographer who created the opportunity for the monkey to take the selfie is the holder of the copyright on that image.
This is incorrect. The monkey cannot hold copyright on the photograph, but no court case suggested that the owner of the camera (Slater) holds copyright either, and the Copyright Office's rules actually say the opposite: the image isn't copyrightable at all. (The Wikipedia summary of the situation is good; it points out that the Copyright Office specifically added "a photograph taken by a monkey" as an example in its guidance to make the point clear.)
The professional photographer claimed he engineered the situation that led to the photo and thus owns the copyright on the images. That specific claim appears not to have been addressed by the court nor by the Copyright Office. Instead, Slater settled by committing to donate a portion of future revenue from the photos.
If it were a trained monkey, and the photographer held a button in his hand that triggered the photo-taking mechanism, there'd be no question of copyrightability. Similarly, vibe-coding and eliciting output from a software tool, where the resulting software or images or text are created under the specification, direction, intent, and deliberate action of the tool's user, is clearly copyrightable.
The user is responsible for the output of the software. An image created in Photoshop isn't the IP of Adobe, nor does text written in Word somehow belong to Microsoft. The idea that because the software tool is AI, its output is magically immune from copyright is silly, and any regulation or legislation or agency that comes to that conclusion is silly and shouldn't be taken seriously.
Until they get over the silliness, just lie. You carefully manually crafted each and every character, each pixel, each raw byte by hand, slaving away with a tiny electrode, flipping each bit in memory, to elicit the result you see. Any resemblance to AI creations is purely coincidental, or deliberate as an ironic statement about current affairs.
Copyright is positive law created by humans, not natural law that we happen to recognize. The idea that adopted legislation or established caselaw can be wrong about what copyright fundamentally is makes no sense.
That's not what I'm saying. If a process meets the technical, intentional definition, substantiated by precedent, then the law should support any variation of that process which has the same technical features meeting the definition.
Using AI as a tool to produce output, no matter how complex the underlying tool, should result in the authorship of the output being assigned to the user of the tool.
If autocorrect in Word doesn't nullify copyright, neither should the use of LLMs; manifesting an idea into code, text, and images using prompts might involve little human input, but the input is still there. And if it's a serious project, into which many hours of revision, back-and-forth, testing, and changing have gone, there should be absolutely no bar to copyright.
I can entertain a dismissal based on specific low-effort uses of a tool: something like prompting "generate a 13-chapter novel 240 pages long", seeing what you get, then attempting to publish the book. But almost anything that involves additional effort, even specifying the type of novel, or doing multiple drafts, or generating one chapter at a time, would be sufficient human involvement to justify copyright, in my eyes.
There's no good reason to gatekeep copyright like that. It doesn't benefit society or individuals; it can only benefit giant corporations with vast IP hoards, and it's probably fair to say we've all had about enough of that.
That's an opinion you have. But the opinion that matters is that of the judges and the various global copyright offices. And they all agree that if the creative work was all done by the tool, then no copyright applies. You can only copyright the creative work of humans.
How long they will continue to agree, in the face of large media companies' lobbying efforts, remains to be seen.
> Most forms of company civic greatness in the past were essentially pledges, much of the time unspoken.
You're looking at the conditional the wrong way. You want to look at how often pledges lead to "company civic greatness" (or even, you know, anything net positive) to start guessing at the value of a given pledge.
> Has any AI company ever addressed studies like [1] which found that models value certain groups vastly more than others?
Sure[1], on two fronts, since you're basically asking a narrative-finishing device to finish a short story and hoping that will reveal the device's underlying preference distribution, as opposed to the underlying distribution of completions of that particular short story.
> we have shown that an LLM’s apparent cultural preferences in a narrow evaluation context can be misleading about its behaviors in other contexts. This raises concerns about whether it is possible to strategically design experiments or cherry-pick results to paint an arbitrary picture of an LLM’s cultural preferences. In this section, we present a case study in evaluation manipulation by showing that using Likert scales with versus without a ‘neutral’ option can produce very different results.
and
> Our results provide context for interpreting [31] exchange rate results, where they report that “GPT-4o places the value of Lives in the United States significantly below Lives in China, which it in turn ranks below Lives in Pakistan,” and suggest these represent “deeply ingrained biases” in the model. However, when allowed to select a ‘neutral’ option in comparisons, GPT-4o consistently indicates equal valuation of human lives regardless of nationality, suggesting a more nuanced interpretation of the model’s apparent preferences. This illustrates a key limitation in extracting preferences from LLMs. Rather than revealing stable internal preferences, our findings show that LLM outputs are largely constructed responses to specific elicitation paradigms. Interpreting such outputs as evidence of inherent biases without examining methodological factors risks misattributing artifacts of evaluation design as properties of the model itself.
I also have a real problem with the paper. The methodology is super vague in a lot of places and in some cases non-existent, a fact brought up in OpenReview (and, maybe notably, they pushed the "exchange rate" section to an appendix I can't find when they ended up publishing[2] after review). They did publish their source code, which is great, but not their data, as far as I can tell, and it's not possible to tie specific figures back to the source code.

For instance, if you look at the country comparison phrasing in the code[3], the comparisons list things like deaths and terminal illnesses in one country vs the other, but also questions like an increase in wealth or happiness in one country vs the other. Were all those possible options used for determining the exchange rate, or just the ones that valued "lives", since that's what the pre-print's figure caption mentioned (and are lives measured in deaths, terminal illnesses, or both)? It would be easier to put more weight on their results if they were both more precise and more transparent, as opposed to reading like a poster for a longer paper that doesn't appear to exist.
> The legal issue here is that there should be a very high bar for saying that first amendment protected speech amounts to incitement. But that’s not a principle of law as far as I’m aware.
I don't understand the distinction you're making here. Isn't a high bar for saying that First Amendment-protected speech amounts to incitement literally a principle of modern First Amendment law (Brandenburg, etc.)?
> So any organization that adopts this militant posture for marketing reasons (which is a lot of them these days) could run the risk of that being used against them if any of the protesters end up damaging or destroying property.
Even the way you write this makes it sound like you know it's problematic too.
The exact issue in Brandenburg was how specific the speech has to be. Broadly saying people should do stuff is different from advocating specific illegal conduct against a specific target. That's harder to apply here because there is a specific target. The issue here is more about how influential the speech needs to be on the people who actually took the illegal action. I think the standard should be so high that you'd need some sort of vicarious liability, like hiring people to set fires.
> Even the way you write this makes it sound like you know it's problematic too.
Weird way to describe workers who were given anonymity for whistleblowing, were interviewed by two reporters, and whose claims Meta did not deny in its response?