AI Chatbots Used in Seven-Year Cyberstalking Saga to Deceive and Harass Professor
A Massachusetts man named James Florence has agreed to plead guilty after orchestrating a seven-year campaign of cyberstalking.
The campaign involved a sinister use of AI chatbots to impersonate a university professor, directing multiple strangers to her residence under sexual pretenses.
This alarming tactic signifies a new frontier in cyberstalking, leveraging technology to amplify the impact of harassment.
Florence, 36, used platforms such as Crushon.ai and JanitorAI, which allow users to create custom chatbots and direct their conversations.
He programmed these bots to mimic the professor, complete with her personal and professional details, such as her home address and intimate apparel preferences.
This manipulation led unsuspecting online users directly to her doorstep, expecting sexual encounters.
The misuse of AI didn’t stop there. Florence also crafted a public chatbot on JanitorAI that displayed provocative prompts involving the professor, misleading users further by offering her actual home address during interactions.
These actions formed part of a broader pattern of harassment that included the creation of false social media profiles and the distribution of explicit, doctored images of the victim across various online platforms.
This extensive harassment, detailed in court documents, caused the professor and her husband considerable distress, leaving them in a state of constant vigilance. The couple installed surveillance cameras and resorted to carrying personal defense items out of fear for their safety.
The case, which spanned from 2017 to 2024, also involved Florence targeting several other women and a minor, altering their images to depict them inappropriately and impersonating them online.
This case highlights not only the personal toll of such crimes but also the broader implications of AI in facilitating sexual harassment and exploitation.
As the legal proceedings continue, this case serves as a stark reminder of the potential misuse of AI technology in criminal activities, prompting calls for stricter regulations and more robust safeguards against the digital exploitation of individuals.