cross-posted from: https://hexbear.net/post/3613920
Get fuuuuuuuuuuuuuucked
“This isn’t going to stop,” Allen told the New York Times. “Art is dead, dude. It’s over. A.I. won. Humans lost.”
“But I still want to get paid for it.”
It’s deterministic. I can exactly duplicate your “art” by typing in the same sentence. You’re not creative, you’re just playing with toys.
That’s actually fundamentally untrue, like independent of your opinion. I promise that when people generate an image from a phrase, the results will be different each time; it is not deterministic (not in the way you mean).
You and I cannot type the same prompt into the same generative AI model and receive the same result; no system works with that level of specificity, by design.
They pretty much all use some form of entropy / noise.
It’s literally as true as it can possibly be. Given the same inputs (including the same seed), a diffusion model will produce exactly the same output every time. It’s deterministic in the most fundamental meaning of the word. That’s why people on CivitAI appreciate it when you share your input parameters along with an image: so they can duplicate it exactly. I have recreated the exact same images using models from there.
Humans are not deterministic (at least as far as we know). If I give two people exactly the same prompt, and exactly the same “training data” (show them the same references, I guess), they will never produce the same output. Even if I give the same person the same prompt, they won’t be able to reproduce the same image again.
I do actually believe that everything, including human behavior, is deterministic. I also believe there is nothing special about human consciousness or creation tbh
This can actually be true, depending on how the system is configured.
For instance, if you and someone else use the same locally-hosted Stable Diffusion UI, both put the exact same prompt, and are using the same seed, # of steps, and dimensions, you’ll get an identical result.
The only reason outputs differ between generations is the noise derived from the seed, which is normally randomized each time. Set it to the same value as someone else’s generation and you’ll get an identical result, as long as the prompt and other settings are unchanged.
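The mechanism being described can be sketched in a few lines. This is a toy stand-in, not a real diffusion model: NumPy’s seeded RNG plays the role of the noise source, and the hypothetical `generate` function stands in for the denoising loop. The only point is that fixing the seed fixes the “random” starting noise, so the whole pipeline becomes repeatable.

```python
import numpy as np

def generate(prompt: str, seed: int, steps: int = 20, size: int = 64) -> np.ndarray:
    """Toy stand-in for a diffusion sampler: the seed fully determines
    the starting noise, and every later step is pure deterministic math."""
    rng = np.random.default_rng(seed)           # seeded noise source
    latent = rng.standard_normal((size, size))  # "random" starting noise
    cond = sum(ord(c) for c in prompt)          # fake prompt conditioning
    for _ in range(steps):                      # fake denoising loop
        latent = np.tanh(latent + cond * 1e-4)  # deterministic update
    return latent

# Same prompt + same seed -> bit-identical output
a = generate("a cat in a hat", seed=42)
b = generate("a cat in a hat", seed=42)
print(np.array_equal(a, b))   # True

# Same prompt, different seed -> different output
c = generate("a cat in a hat", seed=7)
print(np.array_equal(a, c))   # False
```

Real UIs like the Stable Diffusion ones mentioned above work the same way in principle: the seed box is the only intentional source of randomness, which is why copying someone’s full parameter set reproduces their image.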
Try it out and show us the result.
Ok, here’s an image I generated with a random seed:
Here’s the UI showing it as a result:
Then I reused the exact same input parameters. Here you can see it in the middle of generating the image:
Then it finished, and you can see it generated the exact same image:
Here’s the second image, so you can see for yourself compared to the first:
You can download Flux Dev, the model I used for this image, and input the exact same parameters yourself, and you’ll get the same image.
But you’re using the same seed. Isn’t the default behaviour to use a random seed?
And obviously, you’re using the same model for each of these, while these people would probably have a custom trained model that they use which you have no access to.
That’s not really proof that you can replicate their art by typing the same sentence like you claimed.
If you didn’t understand that I clearly meant with the same model and seed from the context of talking about it being deterministic, that’s a you problem.
Bro, it’s you who said “type the same sentence.” Why are you saying the wrong thing and then trying to change your claims later?
The problem is that you couldn’t be bothered to try and say the correct thing, and then have the gall to blame other people for your own mistake.
And in what kind of context does using the same seed even make sense? Do people determine the seed first before creating their prompt? This is a genuine question, btw. I’ve always thought that people generally use a random seed when generating images until they find one they like, then keep that seed and modify the prompt to fine-tune it.
In the context that I’m explaining that the thing is deterministic. Do you disagree? Because that was my point. Diffusion models are deterministic.
That’s as much deterministic as tracing someone’s artwork, really.
If you have to use a different creation process than the one someone would normally use to create the artwork, whether traditional or AI, then it’s not really a criticism of that method in the first place.
I was seriously thinking you had found a way to get similar enough results to another person’s AI output just from knowing the prompt. That would actually prove that AI artwork requires zero effort to reproduce.
Edit: To expand on that 1st paragraph: yes, AI is deterministic in the same way a drawing tablet and app are deterministic. That is, if you copy exactly what another person does with the tool, it will produce the same result.
You might be able to copy one stroke of a pen exactly, but the thousands or tens of thousands of strokes it takes to paint a painting? Like, yeah, you can copy a painting “close enough”, but it’s not exactly the same, because paint isn’t deterministic.
As far as making a “close enough” copy that isn’t exactly the same with AI, you can just use any image as the input image and set the denoising strength to like .1. Then you’ll get basically the same image but it’ll have a different checksum. So if you wanna steal art, AI makes it way easier.
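A crude numerical intuition for why low denoising strength yields a near-copy (this is not the real img2img algorithm, just a blend that mimics its effect; `img2img_toy` and its parameters are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
original = rng.random((64, 64))  # stand-in for the input image

def img2img_toy(image: np.ndarray, strength: float, seed: int) -> np.ndarray:
    """Toy version of img2img: blend the image with seeded noise in
    proportion to `strength`. A real pipeline noises the image and then
    runs a learned denoiser over only the last `strength` fraction of
    the schedule, but the takeaway is the same: low strength means the
    output stays dominated by the input image."""
    noise = np.random.default_rng(seed).standard_normal(image.shape)
    return (1 - strength) * image + strength * noise

near_copy = img2img_toy(original, strength=0.1, seed=123)
heavy_redo = img2img_toy(original, strength=0.9, seed=123)

# Low strength stays much closer to the original than high strength.
print(np.abs(near_copy - original).mean() < np.abs(heavy_redo - original).mean())  # True
```

At strength 0.1 the output is visually almost identical to the input but is a different file with a different checksum, which is the point being made above.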
There’s not really any human creativity in this process, or even using your own prompts, which is the whole point behind the copyright office denying this guy’s copyright claim. Maybe you could copyright your prompt, if it’s detailed enough.