He / They

  • 5 Posts
  • 629 Comments
Joined 1 year ago
Cake day: June 16th, 2023

  • I never said afford to protect it, just afford to comply with the requirements for doing the checks and storing the data. Passing SOC 2 or PCI DSS (if you’re doing verification via payment card) doesn’t make you more secure in reality, but if you can’t afford those attestations in the first place, you’re out of the game.

    This is just another way to ban “harmful” content.

    That is true, but it’s not the whole picture. KOSA applies a Duty of Care requirement to all sites, whether they intend to host adult (or “harmful”) content or not.

    So your local daycare’s website that has a comment section could be taken to court (under the Senate version, which has no business size limits) if someone posts something “harmful”. That’s not something they or other small sites can afford, so those sites will either remove all UGC or shutter, rather than face that legal liability.

    The real goal of KOSA (and the reason it’s being backed by Xitter, Snap, and Microsoft) is to kill off smaller platforms entirely and force everyone into their ecosystems. And they’re willing to go along with the right-wing censorship nuts to do it. This is a move by big tech in partnership with the Right, because totalitarianism is a political monopoly, and companies love monopolies.



  • This is a tough and complex issue, because tech companies using algorithmic curation and control mechanisms to influence kids and adults is a real, truly dangerous problem. But the issue is being torn at from all sides by groups trying to force their own agendas.

    Allowing large corporations to control and influence our social interactions is a hugely dangerous precedent. Apple and Google and huge telcos may be involved in delivering your text messages, but they don’t curate or moderate them, nor do they send you texts from other people based on how they want you to feel about an issue, or to sell you products. On social media, companies do.

    But you’ve got right-wingers clamoring to strip companies of liability protections for user-generated content, which does not address the issue and is really about letting the government dictate what content is politically acceptable (because LGBTQ+ content is harmful /s and they want companies to censor it).

    And you’ve got neolibs and some extremely misguided progressives pushing for sites that allow UGC (which is, by definition, all social media) to verify the ages of their users via ID checks (which, of course, also treats any adult without an accepted form of ID as a child). That massively benefits large companies who can afford the security infrastructure to do those checks and store that data, kills small and medium platforms, creates name-and-face tracking of people’s online activities, and legally mandates that we turn over even more personal data to corporations…

    …and still doesn’t address the issue of corporations exerting influence algorithmically.

    tl;dr the US is a corporatist hellscape where 90% of politicians either serve corporations willfully or are trivially manipulated into doing so.

    PS: KOSA just advanced out of committee.





  • Speaking as an infosec professional, security monitoring software should be targeted at threats, not at the user. We want to know the state of the laptop as it relates to the safety of the data on that machine. We don’t, and in healthy workplaces can’t, scrutinize what an employee is doing unless it behaviorally conforms to a threat.

    Yes, if a user repeatedly gets virus detections around 9pm, we can infer what’s going on, but we aren’t tracking the websites they visit, because the AUP is structured around impacts/outcomes, not actions alone.

    As an example, we don’t care if you run a python exploit, we care if you run it against a machine you aren’t authorized to access (i.e. violating the CFAA). So we don’t scan your files against exploitdb; we watch for unusual network traffic that matches known exploit patterns, and capture that request information (roughly the approach sketched at the end of this comment).

    So if you try to pentest pornhub, we’ll know. But if you just visit it in Firefox, we won’t.

    We’re not prison guards, like these schools apparently think they are; we’re town guards.
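
    For anyone curious what “watch traffic for known exploit patterns, don’t scan the user’s files” looks like in practice, here is a minimal, hypothetical sketch in Python using scapy. The patterns, interface name, and alert handling are placeholder assumptions for illustration, not any real product’s ruleset; actual tooling (IDS rules, EDR, etc.) is far more sophisticated.

    ```python
    # Minimal sketch: signature-based network monitoring instead of file scanning.
    # Assumes Python with scapy installed; sniffing requires root privileges.
    from scapy.all import sniff, Raw, IP, TCP

    # Hypothetical byte patterns lifted from known exploit traffic
    KNOWN_EXPLOIT_PATTERNS = [
        b"/etc/passwd",    # path-traversal attempts
        b"() { :;};",      # Shellshock-style payloads
    ]

    def inspect(pkt):
        """Flag packets whose payload matches a known exploit pattern."""
        if pkt.haslayer(Raw) and pkt.haslayer(IP) and pkt.haslayer(TCP):
            payload = bytes(pkt[Raw].load)
            for pattern in KNOWN_EXPLOIT_PATTERNS:
                if pattern in payload:
                    # Capture the request context, not the user's browsing history
                    print(f"ALERT {pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport} "
                          f"matched {pattern!r}")

    # Watch web traffic on an (assumed) interface name
    sniff(iface="eth0", filter="tcp port 80", prn=inspect, store=False)
    ```

    Note that nothing here cares which sites the user visits; an alert only fires when the payload itself looks like an attack.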







  • the purpose of my car is to get me from place to place

    No, that was the purpose for you, the thing that made you choose to buy it. Someone else could have chosen to buy a car to live in, for example. The purpose of a tool is just to be a tool. A hammer’s purpose isn’t just to hit nails; it’s to be a heavy thing you can use as needed. You could hit a person with it, straighten out dents in a metal sheet, or destroy a hard drive. I think you’re conflating the intended use of something with its purpose for existing, and it’s leading you to assert that the purpose of LLMs is one specific use only.

    An LLM is never going to be a fact-retrieval engine, but it has plenty of legitimate uses: generating creative text is very useful. Just because OpenAI is selling their creative-text engine under false pretenses doesn’t invalidate the technology itself.

    I think we can all agree that it did a thing they didn’t want it to do, and that an LLM by itself may not be the correct tool for the job.

    Sure, 100% they are using/selling the wrong tool for the job, but the tool is not malfunctioning.




  • Except Lvxferre is actually correct; LLMs are not capable of determining what is or isn’t useful, nor can they ever be, as a fundamental consequence of how their models work: they are simply strings of weighted tokens/numbers. The LLM does not “know” anything; it is approximating text similar to what it was trained on.

    It would be like training a parrot and then being upset that it doesn’t understand what the words mean when you ask it questions and it just gives you back words it was trained on.

    The only way to ensure they produce only useful output is to screen their answers against a known-good database of information (a toy sketch of this is at the end of this comment), at which point you don’t need the AI model anyway.

    A software bug is not about what was intended at the design level; it’s about what was intended at the developer level. If the program doesn’t do what the developer intended when they wrote the code, that’s a bug. If the developer coded the program to do something different than the manager requested, that’s not a bug in the software, that’s a management issue.

    Right now LLMs are doing exactly what they’re being coded to do. The disconnect is that the companies are selling them to customers as something other than what they’re coding them to do. And they’re doing it because the company heads don’t want to admit what their actual limitations are.
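
    To illustrate the screening point above, here is a toy, hypothetical sketch in Python. The reference data, the matching logic, and the names are made-up placeholders; real grounding/validation pipelines are much more involved, but the punchline is the same: once you have the known-good database, it can answer the question by itself.

    ```python
    # Toy sketch: screen a model's answer against a known-good database.
    # Everything here (data, matching, names) is a placeholder for illustration.
    KNOWN_GOOD_FACTS = {
        "chemical symbol for gold": "Au",
        "boiling point of water at sea level": "100 °C",
    }

    def screened_answer(question: str, llm_answer: str) -> str:
        """Return the model's answer only if it agrees with the reference data."""
        reference = KNOWN_GOOD_FACTS.get(question.strip().lower())
        if reference is None:
            return "No verified answer available."   # can't vouch for the model
        if reference.lower() in llm_answer.lower():
            return llm_answer                        # consistent with known-good data
        return reference                             # model contradicted it; fall back

    # Example: note the reference dict alone could have answered this.
    print(screened_answer("Chemical symbol for gold", "The symbol for gold is Au."))
    ```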