My short response: Yes.

  • HiddenLayer555@lemmy.ml · 58 minutes ago (edited)

    Short answer: No one today can know with any degree of certainty, because we’re nowhere close to developing anything resembling the “AI” of the movies. Today’s generative AI is so far from artificial general intelligence that asking this is like asking someone from the Middle Ages, when the only forms of remote communication were letters and messengers, whether social media will ruin society.

    Long answer:

    First we have to define what “AI” is. The current zeitgeist meaning of “AI” refers to LLMs, image generators, and other generative AI, which is nowhere close to anything resembling real consciousness and therefore can be neither evil nor good. It can certainly do evil things, but only at the direction of evil humans, who are the conscious beings in control. Same as any other tool we’ve invented.

    However, generative AI is just one class of neural network, and neural networks as a whole were the colloquial definition of “AI” before ChatGPT. There have been simpler, single-purpose neural networks before it, and there will certainly be even more complex neural networks after it. Neural networks are modeled after animal brains: nodes are analogous to neurons, which either fire fully or don’t fire at all depending on input from the neurons they’re connected to; connections between nodes are analogous to connections between axons and dendrites; and neurons can up- or down-regulate input from other neurons, similar to the weights applied to a neural network’s connections. Obviously, real nerve cells are much more complex than these simple mathematical representations, but neural networks do show traits similar to networks of neurons in a brain, so it’s not inconceivable that we could one day build a neural network as complex as, or more complex than, a human brain, at which point it could start exhibiting traits suggestive of consciousness.
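
    To make the analogy concrete, here is a minimal sketch (my own illustration, not from the original comment) of a single artificial neuron: it fires only when the weighted sum of its inputs crosses a threshold, and the weights play the up/down-regulation role described above. The function and parameter names are made up for the example.

    ```python
    def neuron(inputs, weights, threshold=1.0):
        """Fire (1) or stay silent (0) based on the weighted sum of inputs,
        loosely mirroring a biological neuron's all-or-nothing firing."""
        total = sum(x * w for x, w in zip(inputs, weights))
        return 1 if total >= threshold else 0

    # Positive weights "up-regulate" an input, negative weights "down-regulate" it,
    # like the synaptic regulation described above.
    print(neuron([1, 1, 0], [0.6, 0.7, -0.5]))  # 1: weighted sum 1.3 crosses the threshold
    print(neuron([1, 0, 1], [0.6, 0.7, -0.5]))  # 0: weighted sum 0.1 stays below it
    ```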

    This brings us to the movie definition of “AI,” which is generally “conscious” AI as intelligent as or more intelligent than a human: a being with an internal worldview, independent thoughts and opinions, and an awareness of itself in relation to the world. Those are currently traits only brains are capable of, and the point where concepts like “good” or “evil” might start to be applicable. Again, just because neural networks are modeled after animal brains doesn’t prove they can emulate a brain as complex as a human’s, but we also can’t prove they definitely won’t be able to with enough technical advancement. So the most we can say right now is that it’s not inconceivable, and if we ever do develop consciousness in our AI, we might not even know until much later, because consciousness is difficult to assess.

    The scary part about a hypothetical artificial general intelligence is that once it exists, it can gain intelligence at a rate orders of magnitude faster than the evolution of intelligence in animals. Once it starts doing its own AI research and creating the next generation of AI, it will become uncontrollable by humanity. What happens after that, or whether we’ll even get close to this, is impossible to know.

  • Tenderizer78@lemmy.ml · 2 hours ago

    Not unless our elected officials have a deluded belief in the competence of AI and assign it to tasks it should never be used for.

  • Gates9@sh.itjust.works · 4 hours ago

    First it’s gonna crash the economy because it doesn’t work, then it’s gonna crash the economy because it does.

  • tyo_ukko@sopuli.xyz · 9 hours ago

    No. The movies get it all wrong. There won’t be terminators and rogue AIs.

    What there will be is AI slop everywhere. AI news sites already produce hallucinated articles, which other AIs refer to and use as training data. Soon you won’t be able to believe anything you read online, and fact-checking will be basically impossible.
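
    A toy sketch of that feedback loop (my own illustration with made-up numbers, not a measurement): if each model generation trains partly on the previous generation’s hallucinated output, the share of errors compounds.

    ```python
    # Toy numbers, assumed purely for illustration.
    accuracy = 0.95        # share of accurate statements in the current training data
    amplification = 1.3    # each recycling of generated text multiplies the error share

    for generation in range(1, 6):
        error = (1 - accuracy) * amplification
        accuracy = max(0.0, 1 - error)
        print(f"generation {generation}: ~{accuracy:.0%} of statements still accurate")
    ```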

    • pilferjinx@piefed.social · 8 hours ago

      Unless we have a bot dedicated to tracing the origin of online information that can roughly evaluate its accuracy against real events.

    • Lunatique@lemmy.ml (OP) · 8 hours ago

      I agree with the slop part, but you can’t say the movies get it all wrong when it hasn’t yet gotten to the point where it can be proven or disproven.

      • BlueSquid0741@lemmy.sdf.org · 8 hours ago

        The movies depict actual AI. That is, machines/software that are sentient and can think and act for themselves.

        The future is going to be more of the shit we have now- LLMs / “guessing software”.

        But also, why ask the question if you think the answer can’t be given yet?

  • ℕ𝕖𝕞𝕠@slrpnk.net · 4 hours ago

    We’ve had AI in our everyday life for well over two decades now. What kind of AI specifically are you worried about?

  • collapse_already@lemmy.ml · 7 hours ago

    It will be worse than the movies because they don’t portray how every mundane thing will somehow be worse. Tech support? Worse. Customer service? Worse. Education? Worse. Insurance? Worse. Software? Worse. Health care? Worse. Mental health? Worse. Misinformation? Pervasive. Gaslighting? Pervasive.

  • DomeGuy@lemmy.world · 8 hours ago

    AI will likely be similar to Asimov’s robot series, but just a bit grittier.

    • Useful almost-human thing we don’t know if it’s a person or not
    • Ubiquitous and relatively harmless
    • Winds up killing millions if we put it in charge.
    • Lunatique@lemmy.ml (OP) · 8 hours ago

      You probably just understand very little about how ASI works. These mad scientists actually do want to create a machine God. The AI data center that Meta, OpenAI, and the US government are building is 81% the size of Manhattan, and its name is Hyperion (the God above/before).