We all know AI is becoming a useful, daily tool, helping with everything from searches (at its most basic) to organizing work emails and interpreting complicated documents. It’s also a threat. Google’s Threat Intelligence Group says it has “observed threat actors using AI to gather information, create super-realistic phishing scams and develop malware.” In one common phishing attack, models create “believable conversations with victims to build trust.” The report adds that both the Iranian and North Korean governments have used generative AI models, including Google’s own Gemini, to trick victims. North Korea, for instance, has targeted defense systems and has impersonated corporate recruiters. Google says it will continue to build safe and responsible AI that both minimizes and identifies threats. A recent Pew Research Center report notes that 73% of adults say they’ve experienced an online scam or attack.
