Hacker News | astrange's comments

> That's why humans know that glue isn't a pizza topping and AI doesn't.

It's the opposite. That came from a Google AI summary which was forced to quote a reddit post, which was written by a human.


This argument won't get you anywhere because "imitating artists" and "outselling artists" aren't actually the same thing.

i.e. complaining about training on copyrighted material and getting it banned is not sufficient to prevent someone from creating a model whose music outsells yours. Training isn't about copying the training material; it's just a way to find the Platonic latent space of music, and you can get there by other routes.

https://en.wikipedia.org/wiki/Law_of_large_numbers

https://phillipi.github.io/prh/
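A toy illustration of the law-of-large-numbers point (my own sketch, not anything from the linked pages): two completely disjoint sample sets drawn from the same distribution converge to the same statistics, which is the sense in which two different training corpora can end up describing the same underlying space.

```python
import random

random.seed(0)

# Two independent "corpora" drawn from the same underlying distribution
# (standing in for two disjoint training sets with no shared samples).
n = 100_000
a = [random.gauss(5.0, 2.0) for _ in range(n)]
b = [random.gauss(5.0, 2.0) for _ in range(n)]

mean_a = sum(a) / n
mean_b = sum(b) / n

# Both empirical means land near the true mean (5.0), even though
# no individual sample is shared between the two sets.
print(round(mean_a, 1), round(mean_b, 1))
```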


I know a few EDM producers and the culture seems to consist of doing the most drugs of anyone you've ever met. Which is quite risky, true.

> they're trained on a fixed set [and] can only reproduce noise from here and there

This anti-AI argument doesn't make sense; it's like saying it's impossible to reinvent multiplication after reading a times table. You can create new things via generalization or via in-context learning (references).
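The times-table point can be sketched concretely (a hypothetical stdlib-only example, not anyone's actual system): fit a linear model in log space to the ordinary 1-9 times table and it recovers the rule log(a*b) = log(a) + log(b), which extrapolates to products that were never in the "training data".

```python
import math

# Training data: the ordinary 1..9 times table, nothing larger.
data = [(a, b, a * b) for a in range(1, 10) for b in range(1, 10)]

# In log space multiplication becomes addition, so fit
# y = w1*x1 + w2*x2 with x1 = log(a), x2 = log(b), y = log(a*b),
# by solving the 2x2 least-squares normal equations by hand.
s11 = s12 = s22 = s1y = s2y = 0.0
for a, b, prod in data:
    x1, x2, y = math.log(a), math.log(b), math.log(prod)
    s11 += x1 * x1; s12 += x1 * x2; s22 += x2 * x2
    s1y += x1 * y;  s2y += x2 * y

det = s11 * s22 - s12 * s12
w1 = (s22 * s1y - s12 * s2y) / det
w2 = (s11 * s2y - s12 * s1y) / det

def predict(a, b):
    """Multiply using only the rule learned from the fixed table."""
    return math.exp(w1 * math.log(a) + w2 * math.log(b))

# The fitted rule extrapolates far outside the training table:
print(round(predict(13, 17)))  # 221
```

Since the table exactly satisfies the log-linear rule, the fit recovers w1 = w2 = 1, i.e. the model has "reinvented multiplication" rather than memorized 81 entries.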

In practice many image generation models aren't that powerful, but Gemini's is.

If someone created one that output multi-layer images/PSDs, which is certainly doable, it could be much more usable.


If image generation is anything like code generation, then AI is not good at copying the layout / art style of the coder / artist.

Using Visual Studio, all the AI code generation applies Microsoft's syntax style and not my syntax style. The returned code may be correct, but the layout / art / syntax is completely off. This is with a solution that currently has a little less than one million lines of code, which the AI can work off of.

Art is not constant. The artist has a flow and may have an idea, but the art changes form with each stroke, even removing strokes that don't fit. As I see it, AI-generated content lacks the artist's emotion.


Image generation is nothing like AI code generation in this regard. Copying an artist's style is one of the things that is explicitly quite easy for open-weight models: go to civitai and there are a million LoRAs trained specifically on recreating artist styles.

Earlier on, in the Stable Diffusion days, it even got fairly mean-spirited. Someone would make a LoRA for an artist (or there would be enough in the training data for the base model to not need one), the artist would complain about people using it to copy their style, and then there would be an influx of people making more and better LoRAs for that artist. Sam Yang put out what was initially a relatively tame tweet complaining about it, and people instantly started training LoRAs just to replicate his style even more closely.

Note, the original artist whose style Stable Diffusion was supposedly copying (Greg someone, a "concept art matte painting" artist) was in fact never in the training data.

Style is in the eye of the beholder and it seems that the text encoder just interpreted his name closely enough for it to seemingly work.


Greg Rutkowski

Early Stable Diffusion prompting was a lot of cargo-cult copy-pasting of random crap into every prompt.


You would want [4] not [3], with the last one being padding. Of course, you can't always afford that.

Probably that it's a soft food and so is easier to eat when you have no appetite. Protein isn't that relevant.

Why is fruit the example of a healthy snack? Fruits are full of fructose, which is enemy #1 for weight loss.

Not so much when you consume it along with the fibre, which is typically also included in the fruit.

Everything in moderation. An underrated benefit of fruit is its prebiotic nature, which promotes a healthy gut. A lot of healthy-eating advice is settling down towards one idea: eat a wide range of raw and fermented plant foods.

All the side effects I've seen of GLP-1s are positive, and we've had diabetes patients taking them for much longer than that.

Anyway, it's fairly obvious that discipline is not a solution to weight loss, because weight gains (a) happened in lab and pet animals on the same timescales as in humans, and (b) are reversed by moving to higher altitudes.

So to be productive, you should be telling people to move to Colorado.


> Quick question on your run: did you see the signal amplification/instability I saw (values growing during the forward pass)? or was it stable for you, just neutral on loss?

I think your brain may have been taken over by ChatGPT.


The answer is yes, LLMs have different behavior and factual retrieval in different languages.

I had some papers about this open earlier today but closed them so now I can't link them ;(

