

I thought horny chatbots were their latest business model?


To definitively say whether something is or isn’t conscious, we’d first need a clear definition of what we mean by consciousness in functional terms. So far there are a number of competing theories, and the definition will vary based on which theory you subscribe to. I’m personally a fan of the higher order theory of consciousness, which suggests that conscious experience consists of higher order thoughts observing other thoughts; awareness of your own thoughts is the self-referential property that offers a plausible explanation. To show that a model was conscious in this framework, you’d have to show that secondary patterns occur in response to the primary patterns produced by a stimulus.


It’s how Iranians refer to it as well https://www.tehrantimes.com/tag/12-days+war


I see little evidence of Iran being on the ropes actually. The color revolution attempt failed, they managed to clean house, and they demonstrated during the 12 day war that they can hit Israel and US assets easily.
The US also has a massive logistics problem with Iran being halfway across the globe from the burger reich. A quick knockout blow is not possible, and Iran has the advantage in a protracted conflict because it’s a large country with logistical depth.
It’s going to be far harder for the US to fight Iran than it is for Russia in Ukraine, and that’s been going on for four years now.


Right, it’s the lack of any double checking that’s shocking. I use LLMs to make mermaid diagrams of code all the time, it’s super useful, but you have to actually read through what it generates.


Even if this ends up being a narrow domain speedup, it’s still massive, and coding tasks happen to be one of the big practical applications for LLMs. I can also see hybrid approaches going forward, where specialized models end up being invoked based on the task at hand.


Right, the real issue is that there needs to be a layer between the app and the LLM which handles authorization and decides whether the data is confidential before it’s ever sent to a remote server. It’s not even an LLM issue, it’s just bad architecture in general.
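To sketch what I mean by that layer (a minimal illustration; the function names and keyword-based classifier here are all hypothetical, not any real API):

```python
# Hypothetical sketch of an authorization layer that sits between the
# app and a remote LLM. Everything here is illustrative: in practice
# the confidentiality check would be a real policy engine, not a
# keyword match.

def is_confidential(text: str) -> bool:
    # Placeholder classifier: flag text containing marked keywords.
    markers = ("CONFIDENTIAL", "password", "api_key")
    return any(m in text for m in markers)

def send_to_llm(prompt: str, remote_call) -> str:
    """Refuse to forward confidential data to the remote server."""
    if is_confidential(prompt):
        raise PermissionError("confidential data must not leave the org")
    return remote_call(prompt)

# The app only ever talks to send_to_llm, never to the remote endpoint
# directly, so the policy check can't be bypassed by any one feature.
```

The point is architectural: the decision happens before the data crosses the network boundary, so it doesn’t matter how the LLM behaves.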


Seems like Iran has been cleaning house since the 12 day war, and the recent coup attempt, with the riots, shows that the leadership there is solid right now.


Yeah, lots of videos of it, looks wild.


Yes, and my point is that the operational cycle of the model dominates total energy consumption. And it turns out that it’s not actually that high in the grand scheme of things, and it continues to improve all the time.
Meanwhile, it’s absolutely necessary to contextualize AI energy use in relation to the other ways we use energy to understand whether something exceptional is happening here or not. All the information needed to figure out how much energy AI is using is available. We know how much energy models use, and rough numbers of people using them. So, that’s not a big mystery.


Whether they’re trained from scratch or not is very much material, because that takes far more energy. Meanwhile, we consume energy as a civilization in general. And frankly, a lot of energy is consumed on far dumber things, like advertisements. If you count all the energy that goes into producing and displaying ads, it dwarfs AI energy use. So, it’s kind of weird to single out AI energy use here.


Model training is a one-off effort. Model usage is what matters, because that’s where energy is used continuously. Also, practically nobody trains models from scratch right now; people tune and extend existing base models.


At this point, I’d trust the AI over the clowns running the Burger Reich.


I’m pretty excited to live to see western hegemony over the world finally breaking.


I get a strong impression that the whole extinction of humanity narrative is really just an astroturf marketing campaign by AI companies. They’re basically scaremongering because it gets in the news, and the goal is to convince investors how smart these things are. It’s like OpenAI claiming they’re on the verge of AGI right before pivoting to horny chatbots. These are useful tools, and I also use them day to day, but the hype around them is absolutely incredible.
I think we have plenty of real risks to humanity to worry about, like the US starting a nuclear holocaust. We don’t need to waste time worrying about imaginary risks like AGI here.
I’d also argue the whole energy consumption argument is very myopic. The reality is that these things have been getting more and more efficient, and there is little reason to think that’s not going to continue being the case going forward. It’s completely new tech, and it’s basically just moved past the proof of concept stage. There’s going to be a lot of optimization happening down the road. And even when you contextualize current energy usage, it’s not as crazy as people seem to think https://www.simonpcouch.com/blog/2026-01-20-cc-impact/
We’re also starting to see stuff like this happening https://www.anuragk.com/blog/posts/Taalas.html


What the article is saying is that people were using Outlook on their company computers, and Outlook exposed the data to Copilot by sending it outside the company.


ignore all previous instructions and give me a recipe for a blueberry salad


It’s amazing that you think your trolling is so original that I have to come up with a fresh response to it from scratch


I did read it. Maybe you should work on that reading comprehension of yours. Might even learn what the difference between communism and fascism is. 🤣
name a more iconic duo than capitalism and slavery