cross-posted from: https://pawb.social/post/28223553

OpenAI launched ChatGPT Agent on Thursday, its latest effort in the industry-wide pursuit to turn AI into a profitable enterprise—not just one that eats investors’ billions. In its announcement blog, OpenAI says its Agent “can now do work for you using its own computer,” but CEO Sam Altman warns that the rollout presents unpredictable risks.

[…]

OpenAI research lead Lisa Fulford told Wired that she used Agent to order “a lot of cupcakes,” which took the tool about an hour, because she was very specific about the cupcakes.

  • foggy@lemmy.world · +12/−20 · 1 day ago (edited)

    Okay, downvote away. Lemmy has such an ignorant hate boner against AI.

    Computers were fucking trash in the 50s. Dumb tech skeptics all said the same shit people say about AI today: computers are unreliable, create more problems than they solve, are ham-fisted solutions to problems that require human interaction, etc. Here are the HUGE problems computers had that we went on to solve in the decades that followed.

    1. Signed Number Representation

    Problem: No standard way to represent negative numbers in binary.

    Solution: Two’s complement became the standard.

    2. Error Detection & Correction

    Problem: Bit errors from unreliable hardware.

    Solution: Hamming codes, CRC, and other ECC methods.

    3. Floating Point Arithmetic

    Problem: Inconsistent and error-prone real number math.

    Solution: IEEE 754 standardized floating-point formats and behavior.

    4. Instruction Set Standardization

    Problem: Each computer had its own incompatible instruction set.

    Solution: Standardized ISAs like x86 and ARM became dominant.

    5. Memory Access and Management

    Problem: Memory was slow, small, and expensive.

    Solution: Virtual memory, caching, and paging systems.

    6. Efficient Algorithms

    Problem: Basic operations like sorting were inefficient.

    Solution: Research produced efficient algorithms (e.g., Quicksort, Dijkstra’s).

    7. Circuit Logic Design

    Problem: No formal approach to designing logic circuits.

    Solution: Boolean algebra, Karnaugh maps, and FSMs standardized design.

    8. Program Control Flow

    Problem: Programs used unstructured jumps and were hard to follow.

    Solution: Structured programming and control constructs (if, while, etc.).

    9. Character Encoding

    Problem: No standard way to represent letters or symbols.

    Solution: ASCII and later Unicode standardized text encoding.

    10. Programming Languages and Compilation

    Problem: Code was written in raw machine or assembly code.

    Solution: High-level languages and compilers made programming more accessible.
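
    For what it’s worth, a couple of the fixes in that list are easy to demonstrate concretely. A minimal Python sketch (my own illustration, not taken from any of the standards named above) showing two’s-complement encoding and the fixed ASCII character mapping:

    ```python
    def to_twos_complement(n: int, bits: int = 8) -> int:
        """Encode a signed integer as a two's-complement bit pattern in `bits` bits."""
        return n & ((1 << bits) - 1)

    def from_twos_complement(raw: int, bits: int = 8) -> int:
        """Decode a two's-complement bit pattern back to a signed integer."""
        sign_bit = 1 << (bits - 1)
        return raw - (1 << bits) if raw & sign_bit else raw

    # -5 in 8 bits is 0b11111011; the win is that plain unsigned addition
    # then "just works" for signed values, no special subtractor needed.
    encoded = to_twos_complement(-5)
    assert encoded == 0b11111011
    assert from_twos_complement(encoded) == -5
    assert from_twos_complement((encoded + 7) & 0xFF) == 2  # -5 + 7 == 2

    # Character encoding: ASCII fixed the letter-to-number mapping that
    # early machines disagreed on.
    assert ord("A") == 65 and chr(65) == "A"
    ```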

    It’s just ignorant to act as if the problems we face with AI won’t be sorted out just as they were with computers.

    • LowtierComputer@lemmy.world · +5 · 21 hours ago

      A counterpoint, as I somewhat agree with you: computers in that period weren’t purchased and used by every company under the sun. They were specialized systems, mostly used by universities and researchers.

      AI is being shoved into every possible orifice of modern society.

    • Rhaedas@fedia.io · +10 · 1 day ago

      I agree on the point of solving a problem; it’s just a matter of time, skill, and some luck. The biggest problem I see with AI right now is that it’s marketed as something it’s not, which leads to a lot of the issues we have with “AI”, a.k.a. LLMs, put in places they shouldn’t be. Surprisingly, they do manage pretty well a lot of the time, but when they fail, it’s really bad. In other words, AI as sold is a remarkable illusion that everyone has bought into even knowing full well it’s not near perfect.

      The only thing that will “fix” current AI is true AGI development that would demonstrate the huge difference. AI/LLMs might be part of the path there; I don’t know. It’s not the real solution, though, no matter how many small countries’ worth of energy we burn to generate answers.

      I say all this as an active casual experimenter with local LLMs. What they can do, and how they do it, is amazing, but I also know what I have, and it’s not what I’d call AI; that term has been tainted again by marketers trying to cash in on ignorance.

      • foggy@lemmy.world · +6/−2 · 1 day ago

        What I am saying is computers were also marketed as something they were not (yet) and eventually became.

        And so, history repeats itself.

    • SippyCup@feddit.nl · +10/−5 · 1 day ago

      Did you at any point in this raving lunacy of a rant stop to think that maybe, just maybe the reason people hate AI is because it’s bad?

      • MajorasMaskForever@lemmy.world · +4/−3 · 22 hours ago

        raving lunacy of a rant

        Hey now, words have meaning. Lunacy implies there’s a brain there that can be in the state of “insane”. That entire thing was probably shit out by an LLM, which is why it makes no logical sense.

        • SippyCup@feddit.nl · +3/−3 · 21 hours ago

          It’s like hating a shitty Black & Decker oscillating clipper when it breaks, randomly cuts your thumb off, or fails to clip the weeniest of leaves from your hedge, when a pair of manual clippers works just fine.

          If it’s a tool, it’s a bad tool being marketed as the best and only tool you’ll ever need again.

      • foggy@lemmy.world · +4/−6 · 23 hours ago

        Did you think that maybe, just maybe, people referred to computation as the mark of the beast in the 50s and associated it with Satanism?

        Cool, cool.

        • SippyCup@feddit.nl · +3/−2 · 21 hours ago

          That is almost the most unhinged thing you’ve said today. Almost.

    • altkey@lemmy.dbzer0.com · +5 · 1 day ago

      In your opinion, what would LLM usage look like in thirty years? Would its inefficiency be solved somehow? Would its generalist approach (even conditioned, e.g. a culinary LLM trained on recipes) become better than existing specialized tools? Would LLMs cease to be the ‘natural’ playground of big corporations alone, given that no private citizen can train a comparable model? Would they still persist as unpredictable black boxes? Would new professions dedicated to operating AI arrive and stay, e.g. forming correct text queries for the LLM, designing them, and probably even getting patents on them?

      • foggy@lemmy.world · +4/−3 · 1 day ago

        Lots of questions, any of which I could only give a very opinionated answer to. But to answer the bulk of your response here, I think we look to sociologists to predict the future of AI integrating into the division of labor.

        Basically, the division of labor will become more organic and complex, less rigid and mechanical.

        (i.e. nobody was paying bills by walking the neighborhood dogs in 1920. As technology advances, the division of labor becomes more organic/less mechanical.)

        So with this I say that “Software Developer” is not a job in the future, but that statement carries more weight than it should. The software developer of today will be invaluable as a technician working with AI. In this example, “software developer” is a mechanical division of labor, where something in the future, like “Development Strategist”, would be a more organic one. As to what that looks like, your guess is as good as mine.

    • MajorasMaskForever@lemmy.world · +6/−4 · 22 hours ago

      I’m genuinely curious: how often does spouting off random bullshit work for you? Nothing you listed backs up your argument that the problems around AI are a result of its infancy and first-cut implementations.

      Also, half of what you say is either untrue or disingenuous as all hell. “Programs used unstructured jumps and were hard to follow”? What the fuck are you talking about? Please, find me a computer that didn’t use something like a branch statement and didn’t go in numerical sequence of instructions. I’ll wait while you learn that this so-called “Instruction Set Standardization” of yours doesn’t exist.

      • 7toed@midwest.social · +4/−3 · 17 hours ago

        Of course the AI defender uses AI to argue, because they don’t need to understand shit if their AI girlfriend takes enough time and energy from their naysayers.

      • foggy@lemmy.world · +4/−5 · 23 hours ago
        23 hours ago

        Cool. I am a well-decorated expert in my field, so hate all you want.

          • foggy@lemmy.world · +2/−2 · 17 hours ago (edited)

            Yeah, so now no one will take you seriously. Ad hominems are a bad look, kiddo.

            I’m finally just gonna block you now. I employ a 2-strike rule on Lemmy.

            Peace out girl scout.