• 1 Post
  • 166 Comments
Joined 2 years ago
Cake day: June 19th, 2023





  • Well if you really don’t have a preference for one or the other, it might be worth keeping an eye on the future.

    People’s jobs, especially expensive jobs, are going to be replaced by software.

    So ask yourself:

    • What does an accountant do that wouldn’t be possible to automate in software?

    • What does a lawyer do that wouldn’t be possible to automate in software?

    • What does a doctor do that wouldn’t be possible to automate in software?

    From where I’m sitting, medicine seems the safest bet.



  • They were doing so to find out which country you lived in, since you neglected to provide that information yourself.

    I’m British, I charge my car at home, and on the few occasions I use public chargers, I interface with and pay for them through apps.

    Knowing that you are from the US, though, means that YMMV. Your home electricity supply runs at a significantly lower voltage than here in Europe, so home charging might be a less viable option.

    They weren’t being creepy, they were trying to give you a helpful answer.


  • Yes. Kind of. Probably.

    What we have is an issue with terminology. The thing is, “white” only makes sense when specifically referring to human vision.

    Our eyes have cells (cone cells) that are tuned to specific wavelengths in the EM spectrum. There are three types: one set of cone cells peaks at around 560nm, which we see as red; one at around 530nm, which we see as green; and one at around 420nm, which we see as blue.

    “White” is just our interpretation of a strong signal across all three of these wavelength bands at once.

    If, all else being equal, our cone cells responded to longer wavelengths that our eyes currently can’t see, then our “white” might easily be what we now see as “red”, because we’d also be seeing the infrared that we currently can’t.
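The trichromatic picture above can be sketched numerically. This is a toy model, not colour science: it assumes Gaussian cone sensitivity curves peaked at the wavelengths mentioned (560, 530, 420 nm) with a made-up 50 nm width, just to show why a broad spectrum reads as "white" while a narrow band does not.

```python
import math

# Illustrative only: model each cone type's sensitivity as a Gaussian
# peaked at the wavelengths quoted above. The 50 nm width is a round
# number chosen for the sketch, not a measured value.
CONE_PEAKS_NM = {"L (red)": 560.0, "M (green)": 530.0, "S (blue)": 420.0}
WIDTH_NM = 50.0

def cone_response(peak_nm: float, wavelength_nm: float) -> float:
    """Relative sensitivity of one cone type to a single wavelength."""
    return math.exp(-((wavelength_nm - peak_nm) ** 2) / (2 * WIDTH_NM ** 2))

def responses(spectrum_nm: list[float]) -> dict[str, float]:
    """Sum each cone type's response over every wavelength in a light source."""
    return {
        name: sum(cone_response(peak, wl) for wl in spectrum_nm)
        for name, peak in CONE_PEAKS_NM.items()
    }

# A broad, flat spectrum stimulates all three cone types strongly,
# which the brain interprets as "white".
print(responses(list(range(400, 701, 10))))

# A narrow band around 560 nm barely touches the S cones at all,
# so it is not seen as white.
print(responses(list(range(550, 571, 10))))
```

Shifting every peak towards longer wavelengths in this model would shift which spectra produce the balanced all-three response, which is the point made above about a hypothetical infrared-sensitive "white".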







  • At the end of the day, isn’t that just how we work, though? We tokenise information, make connections between these tokens and regurgitate them in ways that we’ve been trained to do.

    Even our “novel” ideas are always derivative of something we’ve encountered. They have to be, otherwise they wouldn’t make any sense to us.

    Describing current AI models as “fancy auto-complete” feels like describing electric cars as “fancy Scalextric”. Neither description is completely wrong, but both are massively over-reductive.
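The “fancy auto-complete” framing can be made concrete with a toy bigram model. This is a deliberate oversimplification: it shares only the tokenise-then-predict-the-next-token loop with real models, which differ enormously in scale and mechanism.

```python
from collections import Counter, defaultdict

def tokenise(text: str) -> list[str]:
    """Crude whitespace tokeniser; real tokenisers use subword units."""
    return text.lower().split()

def train(corpus: str) -> dict[str, Counter]:
    """Count, for each token, which tokens follow it in the corpus."""
    follows: dict[str, Counter] = defaultdict(Counter)
    tokens = tokenise(corpus)
    for cur, nxt in zip(tokens, tokens[1:]):
        follows[cur][nxt] += 1
    return follows

def predict(follows: dict[str, Counter], token: str):
    """Return the most likely next token, or None if the token is unseen."""
    counts = follows.get(token.lower())
    if not counts:
        return None
    return counts.most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict(model, "the"))  # "cat": the most common follower of "the"
```

Even this trivial model only “knows” connections it was trained on, which is the parallel being drawn above about our own ideas being derivative of what we’ve encountered.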