I tried finding a FOSS app that would let me use my phone as a mouse (all the options for this seem really shady IMO), and Google’s “AI Overview” recommended the app “Bluetooth Keyboard & Mouse”, seemingly because it has a (mostly empty) Git repo with the typical folder structure. I tried entering the same search term into Copilot on my work computer and got the same result; Copilot even gave a little speech on why the app was FOSS. Only when told there was no source code in the repo did Copilot backtrack and admit it wasn’t open source (and it later listed the app as not recommended because of its unknown origin [even though it’s in the Google Play Store…]).
As a user (not a contributor) of FOSS, I found this an interesting revelation. Is this an intentional catfishing strategy to get apps promoted by LLMs, either as a semi-illegitimate growth hack for a legit-ish app or for entirely illegitimate purposes? Or just a serendipitous LLM hallucination?
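For what it’s worth, the check Copilot skipped is trivial to automate. Here’s a minimal sketch, assuming the repo lives on GitHub and its default branch is called main; the owner/repo names in the usage comment are placeholders, since I’m not linking the actual app’s repo:

```python
# Minimal sketch: walk a GitHub repo's file tree and check whether it
# contains any actual source code, or just empty "typical folders".
import json
import urllib.request

SOURCE_EXTENSIONS = (".c", ".cpp", ".java", ".kt", ".py", ".rs", ".go", ".swift")

def repo_has_source(owner: str, repo: str, branch: str = "main") -> bool:
    """List every file in the repo via GitHub's public trees API.

    `branch` defaults to "main"; adjust if the repo uses another default.
    """
    url = (f"https://api.github.com/repos/{owner}/{repo}"
           f"/git/trees/{branch}?recursive=1")
    with urllib.request.urlopen(url) as resp:
        tree = json.load(resp)["tree"]
    # "blob" entries are files; empty folders show up as "tree" entries only.
    files = [entry["path"] for entry in tree if entry["type"] == "blob"]
    return any(path.endswith(SOURCE_EXTENSIONS) for path in files)

# Usage (placeholder names, not the real app's repo):
# print(repo_has_source("example-owner", "example-repo"))
```

If that returns False for a repo whose top level looks plausible, the folder structure is window dressing.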


AI is just shit and doesn’t know what it’s talking about 90% of the time.
Strictly speaking, it doesn’t know anything. But regardless of the merits of LLMs, the AI box is the top result on the biggest search engine in the world by default, and what I’m wondering is whether this is an example of intentional manipulation of the search results to give people who don’t know better the false impression that apps are open source.
I know it’s not intelligent or thinking; you know what I mean. It could be manipulation, but we’d really have no way of knowing. With how much it makes shit up, though, I lean towards hallucination rather than conspiracy.
I understand you better now. Yes, that’s certainly possible, plausible even. But it would seem useful for someone who wants to boost the credibility of their (hypothetical) keylogger app with exotic permissions to have a way of getting it independently recommended as a good open-source option by two leading models in mainstream use.
I could absolutely see a situation like that, where someone tries to use it to intentionally push misinformation. In my experience with AI, it’s really difficult to get it to produce consistent results because, as established, it really likes to hallucinate stuff. Using AI to push propaganda, at least at this stage of its development, seems like an exercise in futility. How many times has Elon Musk gone back and fiddled with Grok because it’s not behaving the way he wants?