From Newsgroup: uk.comp.sys.mac
How online search and AI can install malware
Be sure to read Howard Oakley's blog post of today's date!
https://eclecticlight.co/2025/12/11/how-online-search-and-ai-can-install-malware/
//As Ashenbrenner and Semon point out, this marks a new and deeply
disturbing change, one that we're going to see much more of. We have learned
to trust many of the steps that here turn out to lead us into trouble,
and there's precious little that macOS can do to protect us. This
exploit relies almost entirely on our human weakness to put trust in
what's inherently dangerous.
First, distrust everything you see in search engines. Assess what they
return critically, particularly anything that's promoted. It's promoted
for a reason, and that reason is money, so before you click on any link ask
how it's trying to make money from you. If it's associated with AI, then
be even more suspicious, and disbelieve everything it tells you or
offers. Assume that it's a hallucination (more bluntly, a lie), or has
been manipulated to trap you.
Next, check the provenance and authenticity of where that click takes
you. In this case, it was to a ChatGPT conversation that had been
poisoned to trick you. When you're looking for advice, look for a URL
that's part of a site you recognise as a reputable Mac specialist. Never
follow a shortened link without unshortening it first, using a utility like
Link Unshortener from the App Store rather than one of the potentially
malicious sites that claim to perform that service.
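If you'd rather check a shortened link from Terminal, one cautious approach is to request only the HTTP headers, so the redirect target appears in the Location header without your ever visiting the page. A minimal sketch, using a made-up shortened URL:

    # Fetch headers only (HEAD request); -s keeps curl quiet, and no redirect is followed
    curl -sI 'https://bit.ly/made-up-example' | grep -i '^location:'

The line printed by grep is where the link would have taken you; judge that destination before clicking anything.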
When you think you've found a solution, don't follow it blindly; be
critical. Never run any command in Terminal unless it comes from a
reputable source that explains it fully, and you have satisfied yourself
that you understand exactly what it does. In this case the command
provided was obfuscated to hide its true action, and should have rung
alarm bells as soon as you saw it. If you were to spare a few moments to
read what it contains, you would have seen the command curl, which is
commonly used by malware to fetch payloads without any quarantine
xattr being attached to them. Even though the rest of the script had
been concealed by base-64 encoding, that stands out.
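If you're curious about what such an encoded command conceals, decode it to plain text and read it; never pipe the decoded output into a shell. A hedged sketch, with a harmless string standing in for the attacker's real payload:

    # Decode the base-64 blob so you can read it; do NOT pipe the result to sh or bash
    echo 'ZWNobyB0ZXN0' | base64 --decode
    # prints: echo test

In the real attack, the decoded text would have revealed the curl command and the address it fetches from.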
If you did get as far as running the malicious script, then there was
another clear clue that it was up to no good: it prompted you
for a System Password:. The correct prompt should just be Password:, and immediately following that should be a distinctive key character that's generated by macOS for this purpose. Then as you typed your password in,
no characters should appear, whereas this malware showed them in plain
text as you entered them, because it was actually running a script to
steal your password.
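For comparison, this is roughly what a genuine privilege prompt looks like in Terminal; treat the exact key glyph as version-dependent, but the behaviour is the same in that nothing echoes as you type:

    % sudo -v
    Password:
    (typed characters do not appear; recent macOS adds a key symbol after the prompt)

The sudo -v command simply asks for your password to refresh sudo's cached credentials, so it's a harmless way to see what the real prompt looks like.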
Why can't macOS protect you from this? Because at each step you have
been tricked into bypassing its protections. Terminal isn't intended to
be a place for the innocent to paste obfuscated commands inviting you to surrender your password and download executable code to exploit your
Mac. curl isn't intended to allow malware to arrive without being put
into quarantine. And ad hoc signatures aren't intended to allow that
malicious code to be executed.
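Two quick Terminal checks can confirm those last two points on anything you've downloaded; the file path here is just a placeholder:

    # Show the quarantine extended attribute; a browser download should carry one
    xattr -p com.apple.quarantine ~/Downloads/suspect_tool
    # Inspect the code signature; 'Signature=adhoc' in the output marks an ad hoc signature
    codesign -dv ~/Downloads/suspect_tool

If xattr reports that no such attribute exists, the file arrived without quarantine, which is exactly what curl-fetched malware relies on.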
As I was preparing this article, Google search ceased offering the
malicious sponsored links, but I expect they'll be back another time.
AI is certainly transforming our Macs, in this case by luring us to give
away our most precious secrets. This isn't a one-off, and we should
expect to see more, and more sophisticated, attacks in the future. Now
is the time to replace trust with suspicion, and be determined not to
fall victim.//