Report Claims Google Gemini Is Now Used to Commit Crimes
Let’s be honest—AI is super cool. Sure, there are real concerns, but it’s impossible not to feel a little excited about how futuristic it all seems.
Chatbots like Google Gemini and ChatGPT can do everything from solving tricky math problems to fixing your grammar. But just like any powerful tool, AI isn’t always used for good.
Google’s Threat Intelligence Group released a report on January 29 revealing how state-backed threat actors are abusing Gemini’s AI features.
Groups linked to China, Russia, North Korea, and Iran have been using it to gather information on targets, write and debug code, and avoid detection. The tech itself isn’t good or bad—it’s all about who’s using it and how.
Bad actors have always taken advantage of new technology, and AI is no different. The days of obvious scam emails full of typos are fading. Now, AI is fueling more advanced scams with deepfake videos, voice cloning, fake reviews, and bogus job listings.
Staying ahead means learning to spot AI-generated content, recognizing shady job offers, and being more careful online.
Meanwhile, Google keeps rolling out updates. The latest, Gemini 2.0 Flash, is now available for everyone, including free users. It promises better results for writing, learning, and brainstorming. AI isn’t going anywhere, so rather than fighting it, learning how to use it wisely is the best move.
Have something to add? Let us know in the comments below!