“you thought you did something there, didn’t you?”

  • 0 Posts
  • 33 Comments
Joined 1 year ago
Cake day: September 25th, 2023


  • Right, I thought that might be what you were referring to. This is where we get into the weeds technically:

    Those regulations apply to active jamming, which is the use of an electronic device to emit signals that interfere with lawfully approved channels. It is important to note that this has no practical bearing on structures, since a structure by definition cannot engage in active jamming, only passive blocking or coincidental interference.

    What’s being experienced with Walmart’s lack of 5G is likely due to the fact that 5G does not penetrate walls very well. Combine that with hundreds of devices in the same enclosed space trying to talk to the same tower some miles away on the lower-band 5G channel that can penetrate walls, and you can see how 5G access is effectively being “denied” simply by the nature of the business. Walmart could implement an on-premises 5G relay to solve the issue, but why would they want to take on that tech debt? All they are required to do by law is make sure E911 is not impeded by the building or the operations of the business. They don’t owe you access to other radio waves while you’re on their premises.
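
    To put rough numbers on the penetration problem, here’s a back-of-envelope sketch. The wall-loss figures are ballpark assumptions I’m plugging in for illustration, not measurements of any actual building:

    ```python
    # Back-of-envelope: free-space path loss plus a flat per-wall penalty.
    # Wall-loss values below are rough assumed ballparks, not measured data.
    import math

    def fspl_db(distance_km: float, freq_mhz: float) -> float:
        """Standard free-space path loss formula, in dB."""
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    # Hypothetical scenario: tower ~2 km away, one big-box exterior wall in the way.
    for label, f_mhz, wall_db in [
        ("low-band 5G (~700 MHz)", 700, 10),
        ("mid-band 5G (~3.5 GHz)", 3500, 25),
        ("mmWave 5G (~28 GHz)", 28000, 40),
    ]:
        total = fspl_db(2.0, f_mhz) + wall_db
        print(f"{label}: ~{total:.0f} dB total loss")
    ```

    Higher bands lose more both in free space and going through the wall, which is why everyone inside ends up crowding onto the one low band that does get through.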

    If this regulation were somehow applied to passive blocking like what I’ve described, then Faraday cages would be illegal-- which they aren’t, again, as long as E911 is not impeded. It would also make high-security bank vaults illegal due to their thick wall construction.




  • You’re remembering correctly: every other logic gate can be built from NAND gates, which is the foundation of this sort of minimal-instruction-set exercise. Beyond that, you need to be able to move data and change your program counter (jump, often conditionally). Then, if you want parity with modern instruction sets beyond just being Turing complete, you need return and interrupt for control flow.
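
    For anyone who wants to see it concretely, here’s the classic construction in a few lines of Python (just a sanity-check sketch):

    ```python
    # Every other gate built from NAND, the universal gate.
    def NAND(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    def NOT(a):    return NAND(a, a)
    def AND(a, b): return NOT(NAND(a, b))
    def OR(a, b):  return NAND(NOT(a), NOT(b))

    def XOR(a, b):             # the standard 4-NAND XOR
        n = NAND(a, b)
        return NAND(NAND(a, n), NAND(b, n))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", AND(a, b), OR(a, b), XOR(a, b))
    ```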



  • Bookmarking your comment so I can come back to it in a couple hours-- hopefully I remember to.

    But yes, almost. I don’t think the interrupt is necessary, and the return isn’t under certain architectures. I have a doc on my computer somewhere from when I was investigating the absolute minimum needed to make a Turing complete machine and, to my recollection, only 4-6 instructions were absolutely necessary. The ones I remember off the top of my head are NAND, MOV, and JUMPIF, and then I believe I included NOP in accordance with some principle. RET and INT were convenience features in this design.
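
    A toy sketch of what such a machine could look like-- this is my own strawman encoding for illustration, not the design from that doc:

    ```python
    # Toy NAND/MOV/JUMPIF/NOP machine; program is a list of (op, *args) tuples.
    def run(program, regs=None, max_steps=10_000):
        regs = dict(regs or {})          # register file: name -> 8-bit int
        pc, steps = 0, 0
        while pc < len(program) and steps < max_steps:
            op, *args = program[pc]
            steps += 1
            if op == "NAND":             # dst = NOT (a AND b), the universal gate
                dst, a, b = args
                regs[dst] = ~(regs[a] & regs[b]) & 0xFF
            elif op == "MOV":            # dst = register contents or literal
                dst, src = args
                regs[dst] = regs[src] if isinstance(src, str) else src
            elif op == "JUMPIF":         # pc = target if cond register is nonzero
                cond, target = args
                if regs[cond]:
                    pc = target
                    continue
            elif op == "NOP":            # included on principle; does nothing
                pass
            pc += 1
        return regs

    # NOT via NAND(x, x): NAND(0xFF, 0xFF) == 0x00
    print(run([("MOV", "r0", 0xFF), ("NAND", "r1", "r0", "r0"), ("NOP",)]))
    ```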







  • If I were to fully elaborate, I’d be typing for hours, so I’ll sum up:

    • pip - default behavior is to install to the system-wide site-packages, and unless you specify --require-virtualenv it will happily upgrade/uninstall system packages without notice/consent. Multiple things can fuck up your env so that the python binaries point at the system-wide install while your terminal still shows you as in a venv (a quick way to check for this is sketched after this list). Also, why TF would package metadata files need to be executable? Bad practice, -1/10
    • nix - they acknowledged years ago that they should probably have some kind of package signing and perhaps an SBOM or similar mechanism, but then did nothing to implement it and just said “oh well, guess we’re vulnerable to supply chain attacks, best not to think about it”
    • brew - installing packages parallel to your system package manager, without containers. My chief complaint here is that brew is a secondary package manager that people might treat as “set and forget” for some packages, rarely updating them. So what happens when a standard library used by a brew package is vulnerable? A naive Linux user might update their system packages but totally forget to update brew. And when updating brew, you can easily hit max_open_file_descriptors because kitchen sink
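
    On the pip point above, here’s the check I mean for “terminal says venv, interpreter says otherwise.” The VIRTUAL_ENV variable and the sys.prefix comparison are standard Python; the rest is just a sketch:

    ```python
    # Verify the venv your prompt claims matches the interpreter you actually get.
    import os
    import sys

    def venv_status():
        prompt_venv = os.environ.get("VIRTUAL_ENV")      # what the shell prompt believes
        really_in_venv = sys.prefix != sys.base_prefix   # what the interpreter actually is
        return {
            "VIRTUAL_ENV": prompt_venv,
            "sys.prefix": sys.prefix,
            "sys.base_prefix": sys.base_prefix,
            "interpreter_is_venv": really_in_venv,
            "prompt_and_reality_agree": bool(prompt_venv) == really_in_venv,
        }

    if __name__ == "__main__":
        for key, value in venv_status().items():
            print(f"{key}: {value}")
    ```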

    From there, it’s all extremely nit-picky and paranoia-fueled-- basically, none of the package managers I mentioned are conducive, in my eyes at least, to a secure and intuitive compute environment.

    Unfortunately, there’s not much I can do about it except bang pots and pans and throw maintainers under buses when an issue that has been present for years rears its ugly head. They are the only ones who can change this, and pressure is the only thing that might motivate them to.





  • yeah I was about to point out that corpos certainly did not just invent the word “bedrot” for their own benefit. This has been a thing for a while. Nurses often have to walk patients who are admitted for several days to prevent “bed rot” symptoms.

    If you’re staying in bed all day, every day, yeah, you’re gonna get some severe health issues pretty damn quick. But if you’re getting up and moving around regularly, you shouldn’t worry... though in that case it would make more sense to, idk, buy a desk, sit at a table, or use the couch. A laptop in bed is not practical and certainly not comfortable with the heat it generates. Quite frankly, I don’t understand why anyone would ever choose to use a laptop in bed if they have other options available.



  • I’m not a lawyer, but under the definition of “Infrastructure” on page 5, they state that they will construe WhatsApp Infrastructure and Partner Infrastructure accordingly, which to my untrained eye is prima facie evidence of their acknowledgement that these are separate systems, at least one of which (the Partner’s) is not under their custodianship and not named as the subject of the first stipulation you quoted. In other words: “do not make it so WhatsApp’s own infrastructure would run GPL material” and potentially “do not send GPL material through our systems.”

    The second one I interpret to mean “nothing with a license stipulating that runtime operation is copyleft.”


  • This is not the direct result of a knowledge cutoff date, but it could be the result of mis-prompting, or of fine-tuning that enforces cutoff dates to discourage hallucinations about future events.

    But Gemini/Bard has access to a massive index built from Google’s web crawling-- if it shows up in a Google search, Gemini/Bard can see it. So unless the model weights contain no features correlating Gaza with a geographic location, there should be no technical reason it’s unable to retrieve this information.

    My speculation is that Google has set up “misinformation guardrails” that instruct the model not to present retrieved information that is deemed “dubious”-- it may decide, for instance, that information from an AP article is more reputable than sparse, potentially conflicting references to numbers given by the Gaza Health Ministry, since the Ministry is run by the Palestinian Authority. I haven’t read far enough into Gemini’s docs to know everything Google says they’ve done for misinformation guardrailing, but I expect they don’t tell us much beyond the obvious: they see a need for it since misinformation is a thing, LLMs are gullible and prone to hallucinations, and their model has access to literally all the information, disinformation, and misinformation on the surface web and then some.
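
    To be clear about the kind of mechanism I’m picturing, something like this-- entirely speculative, and every name and score below is invented for illustration:

    ```python
    # Speculative sketch of a source-reputability filter on retrieved snippets.
    # Nothing here reflects Google's actual implementation.
    from dataclasses import dataclass

    @dataclass
    class Snippet:
        text: str
        source: str
        reputability: float  # hypothetical 0-1 score assigned upstream

    def guardrail(snippets, threshold=0.8):
        """Drop retrieved snippets below the reputability threshold.
        The lazy failure mode: if everything gets filtered out, the model
        'can't find' information that is in fact all over the index."""
        return [s for s in snippets if s.reputability >= threshold]

    results = [
        Snippet("Casualty figures per Gaza Health Ministry...", "gaza-moh", 0.55),
        Snippet("AP wire summary of the same figures...", "apnews.com", 0.90),
    ]
    print([s.source for s in guardrail(results)])  # ['apnews.com']
    ```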

    TL;DR someone on the Ethics team is being lazy as usual and taking the simplest route to misinformation guardrailing because “move fast”. The guardrailing is necessary, but it fucks up quite easily (e.g. the accidentally racist image generator incident)