reported that “In its current format, AI is really ‘statistics at scale’. In other words, large data sets are analyzed, patterns identified, and used to create various models (which in cybersecurity often equate to identifying outliers from a model built around a network, or using prior collected data).” The January 28, 2022 report entitled “Why we can’t put all our trust into AI” included these comments:
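To make the “statistics at scale” idea concrete, here is a minimal sketch of the kind of outlier detection the report describes: build a statistical baseline from a network’s own traffic, then flag hosts that deviate far from it. The data, the metric (MB/hour per host), and the z-score threshold are all illustrative assumptions, not anything specified in the report.

```python
import statistics

def find_outliers(samples, threshold=2.5):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the sample set (a simple z-score test)."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# Hypothetical baseline traffic (MB/hour per host) with one anomalous host.
traffic = [98, 102, 101, 99, 100, 97, 103, 100, 5000]
print(find_outliers(traffic))  # flags the last host
```

Note that this toy example already shows the report’s concern: the model only knows what the collected baseline contains, so an attack whose traffic resembles “normal” would never be flagged.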

An additional problem with AI in both cyber and its wider application is that any biases in the design will come through.

In the simplest version: hacks continue to occur right now, so if we base an algorithm on what we believe equates to “good security”, will it actually provide that?

Or will it just provide an analysis of what we think is good, even though in the larger context that doesn’t work (since all those hacks keep occurring)?

What do you think?
