09 Nov Network Security
A recent prediction for 2019:
"AI-driven chatbots will go rogue. In 2019, cyber criminals and black hat hackers will create malicious chatbots that try to socially engineer victims into clicking links, downloading files or sharing private information. A hijacked chatbot could misdirect victims to nefarious links rather than legitimate ones. Attackers could also leverage web application flaws in legitimate websites to insert a malicious chatbot into a site that doesn't have one. In short, next year attackers will start to experiment with malicious chatbots to socially engineer victims. They will start with basic text-based bots, but in the future, they could use human speech bots to socially engineer victims over the phone or other voice connections." —Corey Nachreiner, CTO, WatchGuard Technologies
With this in mind, do you see this happening? How can AI affect the organization, both for good and for bad? What can be done to prevent malicious AI chatbots from disrupting the organization? Include any security policies, training, and other measures that will help prevent social engineering schemes that could negatively affect the organization.
