• 277 Posts
  • 898 Comments
Joined 3 years ago
Cake day: June 24th, 2023




  • The harm this law aims to address is grave and real. For the 99% of the population who aren’t compiling their own kernels, the ability to “age-lock” a child account to prevent young children from accessing doomscroll brainrot on Instagram is an amazing and valuable feature.

    I disagree even with this premise. I reject the idea that it’s legitimate to want to keep young people from seeing, watching, or reading things that they actively want to, simply because we have a vague idea that “it’s not good for them”.

    Unfortunately, my parents agreed with your idea too. I remember being a (teenaged) minor, worried that my parents might find out too much about what I’d been reading and doing on the Internet and punish me for it. I don’t wish that on anyone who happened to be born after me, and I hereby resolve that if I ever have children, they will not have to worry about this. I think it is a very good thing that modern technology makes it somewhat harder for parents to oppress their children in this manner.

    But there’s nothing inherently wrong with OS developers implementing such a feature if that is what their customers want. There’s a lot wrong with the government mandating it.

    The principled “linux source code is free-speech, and no government mandates can compel changes” stance is quite divorced from reality.

    No, it’s an exactly correct analysis; at least morally, and it should be legally as well.

    Are crypto-exchange founders likewise free to implement whatever fraudulent schemes they like, as their source code is their speech to freely dictate?

    I’m not sure what scenario you have in mind. Distributing software (even software that can be used for illegal activities) is free speech. Running and using software isn’t (automatically) speech; it’s an action that can be declared criminal. Anyone can use Thunderbird to send phishing emails, but it would be absurd to prosecute the developers of Thunderbird for that.

    I agree with the idea that a user account with an age field is less bad than actual (biometric or ID-based) age verification.

    The rest of your post is so full of meaningless buzzwords that it’s impossible to write anything coherent about it.



  • With chat control we actually have to distinguish two different things that people sometimes confuse:

    • voluntary chat control (“chat control 1.0”), which is currently already the law in the EU
    • mandatory chat control (“chat control 2.0”), proposed in 2022

    Voluntary chat control is about letting operators of communication services voluntarily scan messages for certain illegal activity (without this constituting a violation of data protection laws). This doesn’t break encryption and isn’t a part of a war on general purpose computing. While there are many good arguments against it, it’s not especially catastrophic. It’s a detail of business regulation.

    Mandatory chat control is about forcing them to do so, which must necessarily break encryption and impose limits on software freedom. This is what is most important to oppose.

    The most recent win ended up rejecting even (most) voluntary chat control, which is a good sign that mandatory chat control won’t get a majority either.







  • whoever employs LLM

    incumbent upon the handler to assume liability

    I agree. If you make any kind of real-world decision based on the output of AI, you should be liable for it as if you’d made that decision yourself.

    But I remember reading some news stories about cases where people (often minors) chatted with chatbots and managed to get those chatbots into states where the chatbots encouraged the users to harm themselves (in some cases even to commit suicide?). As tragic as that is, I don’t see how it’s morally right to hold the AI companies responsible for it unless it can be shown they did this on purpose. All the AI did in such cases was what it was advertised and understood to do: generate plausible-sounding text based on user input. Those are the cases I’m talking about.


  • I don’t, not in general.

    There are good and bad uses of AI. For example, I used AI to generate my profile picture here on Lemmy (would you have noticed?). In general, the creation of art is one of the best uses of AI I can think of; it doesn’t have serious consequences if it goes wrong, and a human can easily review whether the result looks as it should.

    But using AI to make actually meaningful business decisions without any human review at all? Using AI for customer service? Any company that does that deserves VERY negative consequences.

    I don’t agree with talking points like “AI companies should be required to pay copyright holders of their training data” or “AI is bad because of the environmental impact” or “AI is bad because of RAM prices” or “AI companies should be legally responsible for any mistakes the AI makes (such as libel or encouraging users’ suicide)” or such things; I think all of these are nonsense.

    I believe in general that AI gets too much attention in the media. It’s really not that impactful.






  • Yes. There are several sections on gnu.org that talk about this, these are the ones I was able to quickly find.

    https://www.gnu.org/philosophy/free-sw.html

    You should also have the freedom to make modifications and use them privately in your own work or play, without even mentioning that they exist. If you do publish your changes, you should not be required to notify anyone in particular, or in any particular way.

    https://www.gnu.org/philosophy/free-software-even-more-important.html

    The freedom to make and distribute exact copies when you wish. (It is not an obligation; doing this is your choice. If the program is free, that doesn’t mean someone has an obligation to offer you a copy, or that you have an obligation to offer him a copy. Distributing a program to users without freedom mistreats them; however, choosing not to distribute the program—using it privately—does not mistreat anyone.)

    https://www.gnu.org/philosophy/categories.en.html

    Free software is a matter of freedom, not access. In general we do not believe it is wrong to develop a program and not release it. There are occasions when a program is so important that one might argue that withholding it from the public is doing wrong to humanity. However, such cases are rare. Most programs are not that important, and declining to release them is not particularly wrong. Thus, there is no conflict between the development of private or custom software and the principles of the free software movement.

    Nearly all employment for programmers is in development of custom software; therefore most programming jobs are, or could be, done in a way compatible with the free software movement.



  • No. I’m staunchly anti-communist and also a staunch supporter of free software. It’s also possible to have another combination of beliefs on these things, but these are mine.

    I suggest reading the section “Why Don’t You Move to Russia?” of this: https://www.gnu.org/philosophy/shouldbefree.en.html

    By contrast, I am working to build a system where people are free to decide their own actions; in particular, free to help their neighbors, and free to alter and improve the tools which they use in their daily lives. A system based on voluntary cooperation and on decentralization.

    Thus, if we are to judge views by their resemblance to Russian Communism, it is the software owners who are the Communists.

    I agree with that. Free software is about building a society more strongly based on individual rights. Marxism-Leninism certainly isn’t about that, though anarchism arguably is, to some extent.