• tal@olio.cafe · 30 points · 1 day ago

    The very first artificial general intelligence humanity created was born with an extensive understanding of breast jiggle.

      • tal@olio.cafe · 2 points · edited 22 hours ago

        In the broad sense that LLMs' understanding of spatial relationships and objects is just limited in general, sure, that's the nature of the system.

        If you mean that models simply don’t have a training corpus that incorporates adequate erotic literature, I suppose that depends on what one is after and the bar one sets. No generative AI in 2025 is going to match a human author.

        If you’re running locally, where many people use a relatively short context size on systems with limited VRAM, I’d suggest a long context length for generating erotic literature involving bondage implements like chastity cages. Otherwise, once information about the “on/off” status of the implement passes out of the context window, the LLM loses track of that state and can generate text incompatible with it. If you can’t afford the VRAM for that, you might alter the story so that a character using such an item never changes its state over the lifetime of the story, if that works for you. Or, whenever the item’s status changes at an appropriate point in the story, manually update it in the system prompt/character info/world info/lorebook/whatever your frontend calls its system for injecting static text into the context at each prompt.
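        A minimal sketch of that manual-injection approach, assuming a frontend where you assemble the prompt text yourself; the names here (StateNote, renderStateNotes, buildPrompt) are illustrative, not any particular frontend's API:

        ```typescript
        // Hand-maintained world-state note, prepended to every prompt so the
        // item's current status never scrolls out of a short context window.
        interface StateNote {
          item: string;    // e.g. "cage"
          status: string;  // e.g. "locked" or "unlocked"
        }

        function renderStateNotes(notes: StateNote[]): string {
          // Rendered like a compact lorebook/world-info entry.
          return notes.map((n) => `[World state] ${n.item}: ${n.status}`).join("\n");
        }

        function buildPrompt(notes: StateNote[], recentTurns: string[]): string {
          // Static state text goes first, followed by however much recent chat
          // history the (possibly short) context window can hold.
          return `${renderStateNotes(notes)}\n\n${recentTurns.join("\n")}`;
        }

        // Whenever the status changes in the story, update the note by hand.
        const notes: StateNote[] = [{ item: "cage", status: "locked" }];
        console.log(buildPrompt(notes, ["User: ...", "Narrator: ..."]));
        ```

        The point is just that the state note is re-sent with every prompt, so it can never fall out of the window the way ordinary chat history does.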

        My own feeling is that, relative to current systems, there’s probably room for considerably more sophisticated frontend handling of objects: storing their state and injecting it efficiently into the system prompt. The text of a story is not an efficient representation of world state. Maybe use an LLM itself to summarize world state and then inject that summary into the context. Or, for games written specifically to run atop an LLM, have some sort of JavaScript module that runs in a sandbox, executes on each prompt and response to update its world state, and dynamically generates text to insert into the context.
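        A rough sketch of what such a module might look like, assuming the frontend exposes hooks around each prompt and response; the hook names (onResponse, contextText) are hypothetical, and the regex is just a placeholder for whatever actually detects state changes:

        ```typescript
        // Sandboxed state tracker the frontend calls on every exchange: it scans
        // each response for state changes, updates its record, and returns text
        // to inject into the next prompt's context.
        class StateTracker {
          private state = new Map<string, string>();

          // Called by the frontend after each model response.
          onResponse(responseText: string): void {
            const m = responseText.match(/the (\w+) is now (locked|unlocked|open|closed)/i);
            if (m) {
              this.state.set(m[1].toLowerCase(), m[2].toLowerCase());
            }
          }

          // Called before each prompt; the returned text is injected into the
          // context alongside the system prompt.
          contextText(): string {
            return [...this.state.entries()]
              .map(([item, status]) => `[World state] ${item}: ${status}`)
              .join("\n");
          }
        }

        const tracker = new StateTracker();
        tracker.onResponse("With a click, the cage is now unlocked.");
        console.log(tracker.contextText()); // "[World state] cage: unlocked"
        ```

        In practice the regex would be replaced by something smarter, possibly a second LLM pass that extracts state changes, which is the summarization idea above.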

        I expect that game developers will sort a lot of this out and develop conventions, and my guess is that the LLM itself probably isn’t the limiting factor on this today, but rather how well we generate context text for it.

      • tal@olio.cafe · 12 points · edited 1 day ago

        I think that the broader concept of instilling desired ethics into an AI is part of the friendly AI problem, which is very real and serious — and possibly not reasonably solvable. So while I don’t really think that Cortana 2045 running around raping humans or something like that is very high on my likely risk list, I think that the broader problem that contains that particular issue probably is something that we’ll need to deal with.

    • I Cast Fist@programming.dev · 1 point · 1 day ago

      This is important research, one cannot correctly infer the jiggle movement and bounce without an ample and wide sample size!

      • tal@olio.cafe · 1 point · 23 hours ago

        Even if they were wearing a mask, new, more-capable biometric analysis could often identify humans.