An unrelenting, ravenous appetite for more and more data may be artificial intelligence’s fatal flaw.

Or, at least, the fastest way for ‘poison’ to seep in.

Cyber attackers sneak small doses of ‘poisoned data,’ in the form of false or misleading information, into all-important AI training sets. The mission: sabotage once-reliable models so their behavior skews in a completely different direction.
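To make the idea concrete, here is a toy sketch (not from the article) of one simple poisoning tactic: an attacker injects a handful of mislabeled points into a training set, and a naive nearest-centroid classifier trained on the tainted data starts misclassifying inputs it previously got right. Everything here, from the dataset to the classifier, is an illustrative assumption, not the researchers' setup.

```python
def centroid(points):
    """Coordinate-wise mean of a list of feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(data):
    """data: list of (features, label) pairs; returns one centroid per class."""
    classes = {}
    for x, y in data:
        classes.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in classes.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest (squared distance)."""
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda y: dist(model[y], x))

# Clean training set: class 0 clusters near (0, 0), class 1 near (5, 5).
clean = [([0.0, 0.1], 0), ([0.2, 0.0], 0), ([0.1, 0.2], 0),
         ([5.0, 5.1], 1), ([5.2, 5.0], 1), ([5.1, 4.9], 1)]

# Poisoned copy: a few injected points near the origin, mislabeled as class 1,
# drag the class-1 centroid toward class-0 territory.
poisoned = clean + [([0.0, 0.0], 1)] * 4

print(predict(train(clean), [1.5, 1.5]))     # → 0 (correct)
print(predict(train(poisoned), [1.5, 1.5]))  # → 1 (the poison flips it)
```

A few bad points among many can be enough, which is why tainted training data is so hard to spot after the fact.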

The majority of AI systems we encounter today — from ChatGPT to Netflix’s personalized recommendations — are only “intelligent” enough to pull off such impressive feats because of the extensive amounts of text, imagery, speech and other data they are trained on. If this rich treasure trove gets tainted, the model’s behavior can become erratic.

To defend against the threat of various data poisoning attacks, a team of FIU cybersecurity researchers combined two emerging technologies — federated learning and blockchain — to more securely train AI. According to a study in IEEE Transactions on Artificial Intelligence, the team’s innovative approach successfully detected and removed dishonest data before it could compromise training datasets.
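The article does not spell out the team's mechanism, so the following is only a generic sketch of the federated-learning side of such a defense: clients train locally and send model updates, and the aggregator screens those updates before averaging, discarding outliers that look like poisoning attempts. The screening rule here (distance to the coordinate-wise median) is an illustrative choice, not the FIU team's published method, and the blockchain layer is omitted entirely.

```python
def median(values):
    """Median of a list of numbers."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def screen_and_average(updates, threshold=1.0):
    """updates: list of client parameter vectors (lists of floats).
    Keep only updates close to the coordinate-wise median, then average
    the survivors, a simple stand-in for robust federated aggregation."""
    dim = len(updates[0])
    med = [median([u[i] for u in updates]) for i in range(dim)]

    def dist(u):
        return sum((ui - mi) ** 2 for ui, mi in zip(u, med)) ** 0.5

    kept = [u for u in updates if dist(u) <= threshold]
    return [sum(u[i] for u in kept) / len(kept) for i in range(dim)]

# Three honest clients report similar updates; one dishonest client
# reports a wildly different update meant to skew the global model.
updates = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [10.0, -10.0]]

print(screen_and_average(updates))  # ≈ [1.0, 1.0]; the outlier is discarded
```

A plain average of all four updates would land far from the honest consensus; screening first keeps the dishonest contribution out of the global model, which is the behavior the study reports achieving.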

Read more at FIU News.