☆ Yσɠƚԋσʂ ☆

  • 6.47K Posts
  • 7.43K Comments
Joined 6 years ago
Cake day: January 18th, 2020

  • To definitively say whether something is or isn’t conscious, we’d first need a clear definition of what we mean by consciousness in functional terms. So far, there are a number of competing theories, and the definition varies based on which theory you subscribe to. I’m personally a fan of the higher order theory of consciousness, which suggests that conscious experience consists of higher order thoughts that observe other thoughts; awareness of your own thoughts is the self referential property that would be a plausible explanation. To show that a model was conscious in this framework, you’d have to show that there are secondary patterns occurring in response to the primary patterns that result from a stimulus.

  • I see little evidence of Iran being on the ropes actually. The color revolution attempt failed, they managed to clean house, and they demonstrated during the 12 day war that they can hit Israel and US assets easily.

    The US also has a massive logistics problem, with Iran being halfway across the globe from the burger reich. A quick knockout blow is not possible, and Iran has the advantage in a protracted conflict because it’s a large country with logistical depth.

    It’s going to be far harder for the US to fight Iran than it is for Russia to fight in Ukraine. And that war has been going on for 4 years now.

  • I get a strong impression that the whole extinction-of-humanity narrative is really just an astroturf marketing campaign by AI companies. They’re basically scaremongering because it gets in the news, and the goal is to convince investors how smart these things are. It’s like OpenAI claiming they’re on the verge of AGI right before pivoting to doing horny chatbots. These are useful tools, and I also use them day to day, but the hype around them is absolutely incredible.

    I think we have plenty of real risks to humanity to worry about, like the US starting a nuclear holocaust. We don’t need to waste time worrying about imaginary risks like AGI here.

    I’d also argue the whole energy consumption argument is very myopic. The reality is that these things have been getting more and more efficient, and there is little reason to think that’s not going to continue being the case going forward. It’s completely new tech, and it’s basically just moved past the proof of concept stage. There’s going to be a lot of optimization happening down the road. And even when you contextualize current energy usage, it’s not as crazy as people seem to think https://www.simonpcouch.com/blog/2026-01-20-cc-impact/

    We’re also starting to see stuff like this happening https://www.anuragk.com/blog/posts/Taalas.html