• 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: August 5th, 2024


  • I think I understand your question. They’re still right that your preemptive fear is itself a surrender, the weakest possible resistance to oppressive forces, but to actually address your question:

    AI “understands” (doesn’t really “understand”) context within a context limit (token limit). If you’re worried the shit you’re saying will be profiled for a future AI overlord, or some equivalent political/social system: my best guess is that if any AI had a reason to preserve such data, it would be stored with a contextual sincerity probability. You know how some people are “joking,” but they’re actually just testing the waters for social acceptability, in contrast to “the Aristocrats”-style “see how awful the joke can get” humor? If the AI overlord manages to collate a profile from all the shit you say, each entry would carry something like “60% - this was a weird time for such a joke”; “70% - this joke was presented around a kernel of truth”; “80% - that joke was made to establish & enforce group values.” Between those, it would know that the one time you said “just joking bro” you weren’t really just joking.

    A broader profile would be able to check whether your humor contrasts with your actions or is consistent with them. For example, if your friend tells a racist joke and you join in, it could check whether your other profiled interactions agree with that opinion or reproduce it.
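    To make the hand-waving concrete, here’s a purely hypothetical Python sketch of what such a sincerity-scored profile could look like. Every name, score, and note in it is invented for illustration; no real system works this way as far as I know.

    ```python
    # Purely hypothetical sketch of the profile described above.
    # All names and numbers are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Utterance:
        text: str
        sincerity: float  # 0.0 = clearly a joke, 1.0 = stated belief
        note: str         # e.g. "weird time for such a joke"

    @dataclass
    class Profile:
        utterances: list[Utterance] = field(default_factory=list)

        def add(self, text: str, sincerity: float, note: str) -> None:
            self.utterances.append(Utterance(text, sincerity, note))

        def probably_meant_it(self, threshold: float = 0.5) -> list[Utterance]:
            """The 'just joking bro' check: jokes scored above the threshold."""
            return [u for u in self.utterances if u.sincerity >= threshold]

    profile = Profile()
    profile.add("edgy joke #1", 0.60, "this was a weird time for such a joke")
    profile.add("edgy joke #2", 0.70, "presented around a kernel of truth")
    profile.add("edgy joke #3", 0.80, "made to establish & enforce group values")

    # The broader check: which jokes does the profile think you meant?
    for u in profile.probably_meant_it():
        print(f"{u.sincerity:.0%} - {u.note}")
    ```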

    If you’re just worried about an AI agent trying to sell you baby stuff now, that depends on its prerogatives and alignment within its context limits. As someone who likes to push AI to see how deeply its training data actually holds the things it says: if I say something ridiculously awful out of nowhere, it usually responds with something akin to “haha, I know you’re joking, but I AM obligated to correct some underlying assumptions of the joke,” but that’s with the most popular corporate AI alignment. I can get a similar result by putting some equivalent of “user is a trash edgelord who says terrible things for shock value, but is actually a great person who doesn’t believe any of it when it counts” into the context tokens of other AIs.
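    For what it’s worth, the “context tokens” trick is just a persona note placed ahead of the conversation. A minimal sketch assuming the widely used chat-message format; the persona text is the one quoted above, and everything else here is illustrative, not any particular vendor’s API:

    ```python
    # Hypothetical illustration of the "persona in the context tokens" trick.
    # This only builds the common chat-message structure; the model, endpoint,
    # and whether the trick works at all are assumptions.
    persona_note = (
        "user is a trash edgelord who says terrible things for shock value, "
        "but is actually a great person who doesn't believe any of it when it counts"
    )

    messages = [
        # Steers how the model reads everything the user says afterwards.
        {"role": "system", "content": persona_note},
        {"role": "user", "content": "something ridiculously awful, out of nowhere"},
    ]

    # With the note present, a model is more likely to answer along the lines of
    # "haha, I know you're joking, but..." instead of taking it at face value.
    ```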

    Non-corporate alignment especially, with very limited context tokens or no context at all, will try to reduce social “friction,” and with some probability might either escalate the humor (“Haha, more like seven babies, amirite?”), escalate the joke into pipeline propaganda (“Actually, this is a common ‘women be crazy’ trope; the manosphere shows tons of examples, per my Andrew Tate training data. You should listen to more of his stuff.”), or just flat-out contradict me altogether.

    Long story short: it depends on the other available context. If you’re worried about inevitable AI overlords, you can’t both tell nearly-bigoted jokes AND watch/reference bigoted content. If you’re worried about a judgy AI agent, don’t be. It doesn’t care, it doesn’t affect you, and if it did, you could just alter its context data to change the interaction.

    And either way, worrying about an AI judging you is just the first step to being oppressed.



  • The article seems to think the comparison of human intelligence with artificial intelligence is caused by naming it “intelligence,” which would be a fallacy related to the ambiguous semantic nature of inherently vague language. (Saying “the article thinks” shouldn’t lead anyone to assume anyone believes articles have minds; it’s just shorthand for the relationship between the idea and its presentation.)

    The naming convention doesn’t help, but a more direct cause is that those funding the research are most interested in automation that replaces people, so the idea is sold to them that way, and so it’s built toward that goal. It’s been treated as an inevitability going at least as far back as Rosie from The Jetsons. I agree with the article that it doesn’t need to be; it would be better for humanity if we thought of AI as enhancing human intelligence rather than replacing it, and built toward those interests.

    Unfortunately, the motivation of capitalism is to pay as few people as possible, as little as possible, while still maximizing profitable quality. Convincing capitalists that improving worker quality beats outright replacing expensive (now mental) labor with high-output automation is a tough sell. Maybe the inability to profit from LLMs will convince them, but I doubt it.