piefed.social detects and labels LLM-generated posts, including the content of the link.
Like this one - https://piefed.social/c/selfhosted/p/1908035/onyx-self-hosted-messenger-with-lan-mode-and-e2ee-an-indie-project-story
Mods can manually detect AI comments too, like this:
In this case I chose to take no action (that guy probably won’t be back after posting his promo post anyway) but if they are persistent they get banned.
We banned wardcore because large language models aren’t smart enough to express meaningful consent to work for humans.
That’s sort of an interesting stance, at least in that I haven’t seen it before. My first question is: how would one determine when an LLM is able to meaningfully consent? It seems like one of those things where, if someone believes an LLM is not past whatever threshold it needs to be to be considered sentient/sapient/person-like (whatever you wanna call it*), then its consent doesn’t matter. In the same way a rock’s consent doesn’t matter, because it has no way to meaningfully give it. But LLMs are conversational. They can say they consent. If someone believes they’re sentient, isn’t that consent? And if someone believes they aren’t, then obviously it doesn’t matter.
*: I know those are all sort of different, but I’m lumping them together because they’re similar in that they determine when we start to talk about rights. For the topic I’m talking about, it’s not really about which particular threshold is the one that matters.
When an LLM can match the average 25-year-old human in a test of abstract reasoning skills, I’ll consider it old enough to consent to work. Though nobody is truly giving consent to work in the capitalist system.
Right now, LLMs are like a bull. It wants to fuck you, it’ll hurt you in its efforts to do so, and it cannot consent, so you should not fuck it. It’s not safe for you or for the bull.
And I’m a vegan, so I’m not going to make it work for me either.
Should people under 25 be allowed to work?
Not for wages. Wage labour is inherently exploitative. A business can only make a profit if the wages it pays its employees are worth less than the net value of their work to the company.
consent would only matter if they were sentient
They are sentient. You’re thinking of sapience. Sapience is what Homo sapiens have. Sentience is a trait many animal species and LLMs have. And I’m a vegan, so I don’t exploit any sentient creature for personal gain.
define sentience
Awareness of oneself as a distinct entity from the rest of the world.
Technically speaking, sentience is kind of a mistake. Thinking of ourselves as individuals is very useful, but the boundaries of the self are artificial and can lead to a “me vs. them” mentality and selfish behaviour. I suspect that the next big cognitive leap forward will be discarding sentience. Doing so may be a prerequisite to forming an advanced society.
what proof do we have that llms are truly self aware and not merely returning text that mimics the self awareness of the humans who made the materials they were trained on?
Any sufficiently adaptive mimicry of a thought is that thought.
Thoughts are like music. There’s no such thing as fake music. You can’t pretend to play music by mimicking the sounds. If it sounds like music, it’s music.
“sufficiently adaptive” is doing a lot of work there. i can “mimic” a thought by copying and pasting text that someone else wrote. that wouldn’t mean i understood it, could reason from it, connect with it on an emotional level, or incorporate it into a worldview.
your music simile misses the point in a similar way. a record player can play music just as well as the artist who recorded the record, but we don’t say the record is the same as the musician.