I’ve read a few people mention it being an issue on here, but now I’m starting to see it myself: blatant bots posting really crappy AI images. I don’t want this to turn into Facebook with shrimp Jesus, so I’m just wondering what can be done to prevent bots from polluting the airwaves here. Any ideas, or is work being done on this front?

  • Ada@lemmy.blahaj.zone · 23 hours ago

    Make sign-ups require approval and create a “trusted user” permission level that lets designated trusted users on the instance see and process pending sign-up requests, and suspend/delete brand-new spam accounts (say, under 24 hours old) that slip through the cracks. You can have dozens of people across all timezones approving requests as they are made, and shutting down the bots that slip through.

    Boom, bot problem solved
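
    As a rough sketch (hypothetical names and rules, not Lemmy’s actual API), the permission level might look something like this:

    ```rust
    use std::time::{Duration, SystemTime};

    // Hypothetical permission levels; "Trusted" sits between regular users and admins.
    enum Role {
        Regular,
        Trusted,
        Admin,
    }

    struct Account {
        role: Role,
        created: SystemTime,
    }

    // Trusted users and admins can see and process the pending sign-up queue.
    fn can_review_applications(reviewer: &Account) -> bool {
        matches!(reviewer.role, Role::Trusted | Role::Admin)
    }

    // Trusted users can only suspend brand-new accounts (under 24 hours old),
    // which limits the damage if the permission is ever misused.
    fn can_suspend(reviewer: &Account, target: &Account) -> bool {
        let age = SystemTime::now()
            .duration_since(target.created)
            .unwrap_or(Duration::ZERO);
        match reviewer.role {
            Role::Admin => true,
            Role::Trusted => age < Duration::from_secs(24 * 60 * 60),
            Role::Regular => false,
        }
    }
    ```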

    • fizzle@quokk.au · 16 hours ago

      If only there was a way users could alert mods and admins about suspicious accounts.

      • Ada@lemmy.blahaj.zone · 6 hours ago

        Yeah, but that’s after the fact, and after their content has federated to other instances.

        It doesn’t solve the bot problem, it just plays whack-a-mole with them, whilst creating an ever larger amount of moderation work, because the content federates to multiple instances.

        Solving the bot problem means stopping the content from federating, which either means stopping the bot accounts from registering, or stopping them from federating until they’re known to be legit.

        • FaceDeer@fedia.io · 19 hours ago

          If this is something that individual instances can opt out of then it doesn’t solve the “bot problem.”

          • SorteKanin@feddit.dk · 6 hours ago

            It definitely does. You just defederate from the instances that don’t do anything to prevent bots.
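
            In code terms, defederation is little more than a blocklist check on incoming activities. A minimal sketch (simplified; not how Lemmy actually implements it):

            ```rust
            use std::collections::HashSet;

            struct FederationPolicy {
                blocked_instances: HashSet<String>,
            }

            impl FederationPolicy {
                // Drop every activity (posts, comments, votes) arriving from
                // a blocked instance before it reaches local users.
                fn accepts(&self, sender_domain: &str) -> bool {
                    !self.blocked_instances.contains(sender_domain)
                }
            }

            fn main() {
                let policy = FederationPolicy {
                    blocked_instances: ["botfarm.example".to_string()].into_iter().collect(),
                };
                assert!(!policy.accepts("botfarm.example")); // defederated
                assert!(policy.accepts("feddit.dk"));        // still federated
            }
            ```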

            • FaceDeer@fedia.io · 6 hours ago

              That stops bots for a particular instance, assuming they guessed right about which accounts were bots. It doesn’t stop bots on the Fediverse.

              • SorteKanin@feddit.dk · 5 hours ago

                You only need to stop it on your own instance. You can’t do anything else anyway. Users will go to the instances that aren’t flooded with bots.

                • FaceDeer@fedia.io · 5 hours ago

                  > You can’t do anything else anyway.

                  Yes, this is my fundamental point. The Fediverse doesn’t have tools for Fediverse-wide censorship, nor should it.

        • FaceDeer@fedia.io · 17 hours ago

          How else would this “trusted” status be applied without some kind of central authority or authentication? If one instance declares “this guy’s a bot” and another one says “nah, he’s fine”, how is that resolved? If there’s no global resolution, then there isn’t any difference between this and the existing methods of banning accounts.

          • Ada@lemmy.blahaj.zone · 6 hours ago

            I mean, for approving users, you just let your regular, established users approve instance applications. All they need to do is stop the egregious bots from getting through. And if there are enough of them, applications will be processed really quickly. If there is any doubt about an application, let it through, because the account can still be caught afterwards. And historical applications are already visible, so they’re easily checked if someone has a complaint.

            And if you don’t like the idea of trusted users being able to moderate new accounts, you can tinker with the idea. Let accounts start posting before their application has been approved, but stop their content from federating outwards until an instance staff member approves them. It would let people post right away without requiring approval, and still get some interaction, but it would mitigate the damage that bots can do by containing them to a single instance.
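
            A minimal sketch of that gate (hypothetical names, not Lemmy’s actual schema):

            ```rust
            struct Post {
                author_approved: bool,
            }

            // Local users always see the post right away; remote inboxes only
            // receive it once instance staff approve the author's application.
            fn remote_delivery_targets(post: &Post, remote_inboxes: Vec<String>) -> Vec<String> {
                if post.author_approved {
                    remote_inboxes
                } else {
                    Vec::new() // contained to the home instance until approved
                }
            }
            ```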

            My point is, there are options that could be implemented. The status quo of open sign-ups, with a growing number of bots, doesn’t have to be the unquestioned approach going forward.

            • FaceDeer@fedia.io · 6 hours ago

              This is just regular moderation, though. This is how the Fediverse already works. And it doesn’t resolve the question I raised about what happens when two instances disagree about whether an account is a bot.

              • Ada@lemmy.blahaj.zone · 6 hours ago

                > This is just regular moderation, though.

                It’s using the existing tools, but making a small portion of them (approving applications) available to a much larger pool of people.

                > it doesn’t resolve the question I raised about what happens when two instances disagree about whether an account is a bot.

                If the instance that hosts it doesn’t think it’s a bot, then it stays, but is blocked by the instance that does think it’s a bot.

                And if the instance that thinks it’s a bot also hosts it, it gets shut down.

                That is regular Fediverse moderation.
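
                Or, as a tiny decision table (hypothetical types, just to make the rule explicit):

                ```rust
                enum Outcome {
                    RemovedEverywhere, // the home instance agrees it's a bot
                    BlockedRemotely,   // only the other instance thinks it's a bot
                    LeftAlone,
                }

                fn resolve(home_thinks_bot: bool, other_thinks_bot: bool) -> Outcome {
                    match (home_thinks_bot, other_thinks_bot) {
                        (true, _) => Outcome::RemovedEverywhere,
                        (false, true) => Outcome::BlockedRemotely,
                        (false, false) => Outcome::LeftAlone,
                    }
                }
                ```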