Movies like Terminator have “AGI”, or artificial general intelligence. We had to come up with a new term for it after LLM companies kept claiming they had “AI”. Technically speaking, large language models fall under machine learning, but they are limited to predicting language and text, and will never be able to “think” with concepts or adapt in real time to new situations.
Take chess, for example. We have Stockfish (and other engines) that far outperform any human. Can these chess engines “think”? Can they reason? Adapt to new situations? Clearly not. For example, adding a new piece with different rules would require Stockfish to be retrained from scratch, whereas humans can take their existing knowledge and adapt it to the new situation. Also look at LLMs attempting to play chess. They can “predict the next token” as they were designed to, but nothing more. They have been trained on enough chess notation that the output usually looks like valid notation, but they have no concept of what chess even is, so they will spit out nearly random moves, often ones that break the rules (see the sketch below).
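You can see this for yourself with a few lines of code. A minimal sketch, assuming the python-chess library and a hypothetical `ask_llm_for_move` stand-in for whatever model API you'd actually call: the model just emits text, and a separate rules engine has to decide whether that text is even a legal move.

```python
# Sketch: check whether an LLM's "predicted" chess move is legal at all.
# Assumes the python-chess library; ask_llm_for_move is a hypothetical
# placeholder for a real model call.
import chess

def ask_llm_for_move(fen: str) -> str:
    """Hypothetical stand-in: in practice you'd prompt a language model with
    the position (as FEN) and take back whatever move text it predicts."""
    return "e7e5"  # placeholder "prediction"; a real model may return anything

board = chess.Board()                      # standard starting position
suggestion = ask_llm_for_move(board.fen())

try:
    move = chess.Move.from_uci(suggestion)
    legal = move in board.legal_moves      # the engine, not the model, knows the rules
except ValueError:
    legal = False                          # model didn't even produce well-formed notation

print(f"Model suggested {suggestion!r}; legal in this position: {legal}")
```

The point of the sketch is that all the actual chess knowledge lives in the rules library; the model only produces plausible-looking notation.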
LLMs are effectively the same concept as chess engines. We just put googly eyes on the software, and now tons of people are worried about AI taking over the world. While current LLMs and generative AI do pose real risks (overwhelming amounts of slop and misinformation that could affect human cultural development, or a human deciding to give an LLM control over something in the real world, which could have major consequences), it’s nowhere near Terminator-style AGI. For that to happen, humans would have to figure out a fundamentally new approach to machine learning, and it would take several orders of magnitude more computing resources.
Since any legal classification of “AI” will probably cover “AGI” as well, there will (hopefully) be legal barriers in place by the time anyone develops actual AGI. The computing-resources problem also works in our favor: in the real world, an AGI does not simply “transfer itself onto a smartphone” (or an airplane, a car, you name it). It will exist in a massive datacenter, and its power can be shut off. If AGI does get created and causes a serious incident, it will likely happen during this stage, which would force whatever real-world entity created it to realize safeguards are needed.
So to answer your question: No, the movies did not “get it right”. They are exaggerated fantasies of what someone thinks could happen if you change some rules of our current reality. Art like that can pose interesting questions, but when it tries to “predict the future”, it tends to get key details wrong, and those details change the answers to any questions asked about the future it depicts.