Emotional_Series7814

  • 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 25th, 2023



  • “We believe that users should have a say in how their attention is directed, and developers should be free to experiment with new ways of presenting information,” Bluesky’s chief executive, Jay Graber, told me in an email message.

    Of course, there are also challenges to algorithmic choice. When the Stanford political science professor Francis Fukuyama led a working group that in 2020 proposed outside entities offer algorithmic choice, critics chimed in with many concerns.

    Robert Faris and Joan Donovan, then of Harvard’s Shorenstein Center, wrote that they were worried that Fukuyama’s proposal could let platforms off the hook for their failures to remove harmful content. Nathalie Maréchal, Ramesh Srinivasan and Dipayan Ghosh argued that his approach would do nothing to change the tech platforms’ underlying business model, which incentivizes the creation of toxic and manipulative content.

    Mr. Fukuyama agreed that his solution might not help reduce toxic content and polarization. “I deplore the toxicity of political discourse in the United States and other democracies today, but I am not willing to try solving the problem by discarding the right to free expression,” he wrote in response to the critics.

    When she ran the ethics team at Twitter, Rumman Chowdhury developed prototypes for offering users algorithmic choice. But her research revealed that many users found it difficult to envision having control of their feed. “The paradigm of social media that we have is not one in which people understand having agency,” said Ms. Chowdhury, whose Twitter team was let go when Mr. Musk took over. She went on to found the nonprofit Humane Intelligence.

    But just because people don’t know they want it doesn’t mean that algorithmic choice is not important. I didn’t know I wanted an iPhone until I saw one.

    And with another national election looming and disinformation circulating wildly, I believe that asking people to actively choose disinformation, rather than accept it passively, would make a difference. If users had to pick an antivaccine news feed, and could see that there were other feeds to choose from, the existence of that choice would itself be educational.

    Algorithms make our choices invisible. Making those choices visible is an important step in building a healthy information ecosystem.


  • Here’s the text!

    Social media can feel like a giant newsstand, with more choices than any newsstand ever. It contains news not only from journalism outlets, but also from your grandma, your friends, celebrities and people in countries you have never visited. It is a bountiful feast.

    But so often you don’t get to pick from the buffet. On most social media platforms, algorithms use your behavior to narrow in on the posts you are shown. If you send a celebrity’s post to a friend but breeze past your grandma’s, it may display more posts like the celebrity’s in your feed. Even when you choose which accounts to follow, the algorithm still decides which posts to show you and which to bury.
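    The behavior-driven ranking described above can be sketched in a few lines. This is a hypothetical illustration: the signals and weights are invented, not any platform’s actual formula.

```python
# Hypothetical sketch of engagement-based feed ranking.
# Signals and weights are invented for illustration; this is not
# any platform's real formula.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int = 0
    shares: int = 0
    skips: int = 0  # times users scrolled past without interacting

def engagement_score(post: Post) -> float:
    # Shares are weighted most heavily; being skipped pushes a post down.
    return 2.0 * post.shares + 1.0 * post.likes - 0.5 * post.skips

def rank_feed(posts: list[Post]) -> list[Post]:
    # The platform, not the user, picks this ordering function.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("grandma", "Made soup today", likes=3, skips=10),
    Post("celebrity", "Big announcement!", likes=500, shares=120),
])
print([p.author for p in feed])  # ['celebrity', 'grandma']
```

    The point of the sketch is that the ordering function belongs to the platform: the user never sees the weights, let alone edits them.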

    There are a lot of problems with this model. There is the possibility of being trapped in filter bubbles, where we see only news that confirms our pre-existing beliefs. There are rabbit holes, where algorithms can push people toward more extreme content. And there are engagement-driven algorithms that often reward content that is outrageous or horrifying.

    Yet not one of those problems is as damaging as the problem of who controls the algorithms. Never has the power to control public discourse been so completely in the hands of a few profit-seeking corporations with no requirements to serve the public good.

    Elon Musk’s takeover of Twitter, which he renamed X, has shown what can happen when an individual pushes a political agenda by controlling a social media company.

    Since Mr. Musk bought the platform, he has repeatedly declared that he wants to defeat the “woke mind virus” — which he has struggled to define, but that largely seems to mean Democratic and progressive policies. He has reinstated accounts that were banned because of the white supremacist and antisemitic views they espoused. He has banned journalists and activists. He has promoted far-right figures such as Tucker Carlson and Andrew Tate, who were kicked off other platforms. He has changed the rules so that users can pay to have some posts boosted by the algorithm, and has purportedly changed the algorithm to boost his own posts. The result, as Charlie Warzel said in The Atlantic, is that the platform is now a “far-right social network” that “advances the interests, prejudices and conspiracy theories of the right wing of American politics.”

    The Twitter takeover has been a public reckoning with algorithmic control, but any tech company could do something similar. To prevent those who would hijack algorithms for power, we need a pro-choice movement for algorithms. We, the users, should be able to decide what we read at the newsstand.

    In my ideal world, I would like to be able to choose my feed from a list of providers. I would love to have a feed put together by librarians, who are already expert at curating information, or from my favorite news outlet. And I’d like to be able to compare what a feed curated by the American Civil Liberties Union looks like compared with one curated by the Heritage Foundation. Or maybe I just want to use my friend Susie’s curation, because she has great taste.

    There is a growing worldwide movement to provide us with some algorithmic choice — from a Belgrade group demanding that recommender algorithms should be a “public good” to European regulators who are demanding that platforms give users at least one algorithm option that is not based on tracking user behavior.

    One of the first places to start making this vision a reality is a social network called Bluesky, which recently opened up its data to allow developers to build custom algorithms. The company, which is financially supported by the Twitter founder Jack Dorsey, said that 20 percent of its 265,000 users are using custom feeds.

    On Bluesky, I often toggle between feeds called Tech News, Cute Animal Pics, PositiviFeed and my favorite, Home+, which includes “interesting content from your extended social circles.” Some of them were built by Bluesky developers, and others were created by outside developers. All I have to do is go to My Feeds and select from a wide menu of choices, ranging from MLB+, a feed about baseball, to #Disability, which picks up keywords related to disability, to UA fundraising, a feed of Ukrainian fund-raising posts.

    Choosing from this wide selection of feeds frees me from having to decide whom to follow. Switching social networks is less exhausting — I don’t have to rebuild my Twitter network. Instead, I can just dip my toes into already curated feeds that introduce me to new people and topics.
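    Under the hood, a Bluesky custom feed is a “feed generator”: a service that, when asked, returns an ordered list of post URIs in the shape defined by the AT Protocol’s app.bsky.feed.getFeedSkeleton response. Here is a minimal sketch of that core computation; the keyword-matching rule (in the spirit of the #Disability feed mentioned above) and the sample URIs are invented for illustration.

```python
# Sketch of the core computation of a Bluesky feed generator.
# The return shape mirrors the app.bsky.feed.getFeedSkeleton response
# ({"feed": [{"post": <at-uri>}, ...]}); the selection rule is invented.

def feed_skeleton(posts: list[tuple[str, str]], keyword: str) -> dict:
    """posts: (at_uri, text) pairs; a real generator would draw these
    from the network firehose rather than a list."""
    matches = [uri for uri, text in posts if keyword in text.lower()]
    return {"feed": [{"post": uri} for uri in matches]}

sample = [
    ("at://did:plc:abc/app.bsky.feed.post/1", "Cute animal pics thread"),
    ("at://did:plc:def/app.bsky.feed.post/2", "Tech news roundup"),
]
print(feed_skeleton(sample, "animal"))
# {'feed': [{'post': 'at://did:plc:abc/app.bsky.feed.post/1'}]}
```

    In the real protocol the generator runs as an HTTP service, but the division of labor is the same: the feed logic lives outside the platform’s core ranking system, which is what makes third-party feeds possible.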



  • I like this idea, would probably do well if proposed on the kbin codeberg as well.

    I really hope we don’t force users who sign up to pick from a small preselected list of communities: no Skip button, no option to search for other communities, you must select some from this short list in order to move on. I’ve seen the same pattern in habit tracker apps with preselected habits instead of communities, and likely in other contexts that I’m forgetting right now. Walking the user through a tutorial to get them up and running as soon as possible is fine, but giving no option to skip it or customize anything, if you’re tech-savvy, don’t like the default options and don’t need to be handheld through it, was always incredibly annoying.



  • Honestly, I think what’s going on here is “new platform good, nonadopters bad” and people wanting to believe that this Twitter user is actually confused by us, not making a joke, because we’re special smart kids for adopting the Fediverse and anyone who does not is a dummy-dumb-dumb-doodoo head. Of course, not in such crass words, or so obviously laid out, otherwise we’d all catch that kind of thinking for what it is immediately. This team good, that team bad. Fun in sporting matches, not so great when we actively want people to come here and ditch Twitter, and when we get condescending towards other human beings.

    That is my honest personal interpretation but I could be wrong :P I’m also affected by my own biases and maybe I am seeing this pattern where it doesn’t actually exist.


    “Direct Message” and “Private Message” do indeed mean different things. In practice, because both involve messaging one individual user, many people (including myself) still expect them to be functionally the same. Part of the functionality we expect is an attempt to make these messages less visible and less easy to access than the reply I just sent to you right now. This expectation is validated on Twitter:

    Direct Messages are the private side of Twitter. You can use Direct Messages to have private conversations with people about Tweets and other content.

    on Instagram:

    Instagram DMs are an in-app messaging feature that allow you to share and privately exchange text, photos, Reels, and posts with one or more people.

    by Cambridge Dictionary:

    a private message sent on a social media website, that only the person it is sent to can see

    and by the fact that if you go on anyone’s profile, you can see post history, comment history and boosts, but not a list of whom they sent individual messages to or what those messages were. I believe that more technical people could retrieve such messages, and that the messages are not totally secure, but to my layman eyes, I do still expect that there was at least an attempt to make these messages private.