• 21 Posts
  • 674 Comments
Joined 1 year ago
Cake day: June 7th, 2023



  • Believe me, Google does enough A/B testing, and has enough experience in psychological manipulation, to know where “the line” is for most people.

    Sure, some will never use their product(s) again when pushed too far, but they don’t really need everyone to be using their products.

    Only the users they can profit from the most are of value. If a terrible UI, awful UX, or even a paid subscription doesn’t scare them away from using a Google product, then each of those users becomes a cash cow.


  • When did it get this bad?

    Bad? Google just proved that they can get you to stay on their search page (or come back to get a different answer) for far longer than you need to… this is a WIN for them.

    The enshittification of the internet + the greed inherent in these megacorps have caused websites to be designed to steal your attention for as long as possible. The longer they can keep you on their site, the more money/data/attention/time they can extract from you.

    If search were designed to benefit the user, a typical visit would take maybe 5-10 seconds of someone’s time to enter a search and click on the relevant result. You proved that when you compared it to DDG 😀


  • The majority of the internet is porn.

    Again, I’ll separate entertainment from informational, since entertainment can be garbage, and still be consumed.

    Bad information doesn’t help anyone.

    It’s not like LLMs you can chat with are completely useless.

    The problem is, you wouldn’t know the information is bad unless you already knew the topic.

    A legitimate website with human writers, editors, and fact-checkers at least has credibility and a reputation to uphold.

    Far too many randomly generated websites have a lot of information, but without any guardrails. If you know enough about a topic, you’ll realise that the information on these AI sites is pretty much useless. That is, you couldn’t use them as a source, because enough of the info is bad/incorrect/incoherent that it’s like asking a toddler who may or may not give you a valid answer.

    I’ve contacted a manufacturer of bike stuff, and their support is handled by AI. While the answers sound like they could be right, it’s like getting an answer from someone who heard something about something from a friend. When you actually ask for a human, the answer is often different (and correct).

    There is no accountability, credibility, responsibility, or integrity with AI. It has no reputation to lose when the information it provides is bad.

    I know that AI isn’t going away. I’d personally be OK with some human verification system for websites, and would be more than willing to use a filtered version of the internet that blocks AI-generated content. Call it curated or whitelisted, but I want my information to come from a human being.


  • But you know they are spam, so you can avoid them. But what if the majority (over 80%) of the calls you receive can’t be identified as spam? At some point, you may be wasting far more time than it’s worth to keep using a phone without some major whitelist/blacklist system.

    Also, what happens when the outbound calls you make are answered by AI, and you don’t know? If this AI is giving you replies that are word salad, how long are you willing to tolerate it?

    I’ve been getting text messages from companies that I actually do business with, but they’re spam, and calls from companies that I have accounts with that are scams. At some point, SMS and phone calls will be more trouble than they’re worth.

    And having to go without it, the pain of replacing it, or the frustration of being strung along in a scam are not thoughts I want to have.


  • There will always be a large number of sites that are not capitalist hellholes existing only to steal users’ data, scam users, or do other malicious things. That may be down to things like credit unions, federated social media, and non-profits that exist to make the world better, but there will always be something out there that keeps the internet from being useless.

    No doubt that there will be people who still have morals and will run sites and services that don’t completely screw people.

    But at some point, you won’t be able to tell which are legit and which aren’t. AI-generated websites can make any scam site look completely legitimate, fake thousands of testimonials, and have bots post about it on every major website (Reddit, YouTube, etc.) without being caught.

    The currency of the internet is no longer about what’s valuable to users, but what’s valuable to bad actors, data thieves, and marketers.

    There will be a tipping point when the bad far, far outweighs the good, and I’m curious to know when society decides that the internet isn’t worth using anymore.


  • Let me ask you this: assuming you use the internet for information rather than entertainment, would the internet be useful if the majority of content ends up being AI-generated (not fact-checked, not accurate, and not original)?

    What if the overwhelming majority of content you come across couldn’t be verified as true, and most of the comments (including here on Lemmy) were written by bots? Would you still use it?

    For me, it would stop being useful. Almost like trying to research a topic in a library that only carries fiction.

    For entertainment, sure, it’ll be great for sucking up people’s attention without anyone having to invest in the skill to be good at something. Hell, if you currently find YouTube Shorts and TikTok to be “good content”, then it’ll be around forever. Corporations and advertisers love this technology.