Team creates software to block AI phishing scams
Researchers at the University of Texas at Arlington have developed software to combat AI-based phishing scams. As cybercriminals increasingly use AI chatbots such as ChatGPT to create scam websites, the team's software aims to detect and prevent this malicious activity. Led by Dr. Shirin Nilizadeh, the team identified the instruction prompts used to generate phishing websites and trained the software to recognize and block them. Their work was published at the prestigious IEEE Symposium on Security and Privacy, where it received the Distinguished Paper Award. This was reported by SSPDaily.
Currently, AI chatbots have limited built-in ability to detect malicious requests, leaving them open to exploitation. Cyber attackers take advantage of these gaps to create scams that traditional security measures fail to catch. By addressing these vulnerabilities, the research team hopes to raise awareness of the risks associated with AI technology and drive the adoption of stronger security practices.
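To illustrate the general idea of screening chatbot prompts, the sketch below shows a minimal keyword-based filter. This is an assumption for illustration only: the article does not describe the team's actual classifier, and the cue list and function name are hypothetical.

```python
# Illustrative sketch only: the team's real detection method is not
# published in this article. This assumes a simple keyword filter that
# flags instruction prompts resembling those used to generate phishing
# websites. The cue phrases below are hypothetical examples.

PHISHING_PROMPT_CUES = [
    "clone the login page",
    "look identical to",
    "capture the password",
    "send credentials to",
    "fake payment form",
]

def looks_like_phishing_prompt(prompt: str) -> bool:
    """Return True if the prompt contains any known phishing cue."""
    text = prompt.lower()
    return any(cue in text for cue in PHISHING_PROMPT_CUES)

# A prompt asking for a credential-stealing page would be flagged;
# a benign request would pass through.
print(looks_like_phishing_prompt(
    "Write HTML to clone the login page of a bank and capture the password"
))  # True
print(looks_like_phishing_prompt("Write HTML for a cooking blog"))  # False
```

A production system would of course need far more than keyword matching (for example, a trained text classifier), but the sketch shows where a prompt-level check would sit: before the chatbot acts on the request.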
The team is actively engaging with major tech companies such as Google and OpenAI to incorporate its software into broader AI security strategies. The ultimate goal is to ensure that AI chatbots are equipped with robust defenses against phishing scams. The researchers emphasize the importance of collaboration within the cybersecurity community to safeguard users from emerging threats.
The positive response to their work has motivated the team to continue their research and explore further opportunities to enhance cybersecurity. They are eager to share their findings with colleagues in the field and contribute to ongoing efforts to create a safer digital environment.