Rogue Robots and Human Flaws: AI in Cybersecurity


When most people hear the term Artificial Intelligence (AI), they think of science fiction movies in which robots go rogue and try to annihilate humans. The truth is that AI is much more than that. The combination of machine learning in security, neural networks, and decision making can take, and has taken, the technology to new heights.

The earliest implementations of computer AI can be traced back to 1951, and there have been major advancements as computational power has increased (Buchanan, AI Magazine, 2005). Many believe AI will play a large part in the future of cybersecurity, and for good reason: AI and neural networks can do more work than humans in less time. They also do not need vacations and do not get sick. Once programmed, they have a very small error rate, and when an error does occur, it is typically in the programming, which is a human error.

The real problem is that AI will always react to a cyber incident by following a specific algorithm, based on decision tree analysis and the logic it was given. This is a fundamental flaw of all computer systems: if a program can be programmed, it can be unprogrammed, and weaknesses can be found and exploited. If attackers know the algorithm, they can work around the safeguards that trigger a cyber response, bypassing or delaying the system's reaction. The ideal solution is to have humans and machines working together. It has been documented that humans have a profound fear of, or hesitation about, working with computers when they do not know how the computer is expected to react (Rouse, Human Factors, 1988).

I believe this research is outdated and that individuals are becoming more accustomed to working with and relying on computers. Even so, many cybersecurity experts may not trust the algorithm behind an AI's decisions, so their individual experience will play a large part in how they use the computer's output. I would suggest not putting the computer in the position of ultimate responder to a cyber incident, but instead allowing it to suggest responses that can be carried out by the human partner in the relationship. The AI could be programmed to analyze large amounts of data for anomalies, or for specific patterns already known to indicate a cyber issue, and then report that information to its human counterpart. If the human counterpart feels comfortable with the data the AI is producing, they would then have the option of allowing the AI to respond to incidents automatically. This trusted relationship will take time to form, but once it has, cyber incidents will be handled more efficiently.
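The workflow described above can be sketched in a few lines of code. This is a minimal illustration, not a production system: the alert fields, the threshold, and every function name here are hypothetical, and a simple failed-login count stands in for whatever anomaly detection the AI would actually perform.

```python
# Human-in-the-loop incident response sketch (illustrative names throughout):
# the AI flags anomalies and *suggests* a response; whether that response is
# executed automatically depends on the trust the human has granted.

from dataclasses import dataclass

@dataclass
class Event:
    source_ip: str
    failed_logins: int

def detect_anomalies(events, threshold=5):
    """Flag any source with more failed logins than the threshold."""
    return [e for e in events if e.failed_logins > threshold]

def suggest_response(event):
    """The AI proposes an action; it does not carry it out."""
    return f"block {event.source_ip}"

def handle_incidents(events, auto_respond=False):
    """Return (suggestion, executed) pairs.

    auto_respond models the trusted relationship: once the human partner
    is comfortable with the AI's output, responses may run automatically;
    until then they remain suggestions for the human to act on.
    """
    return [(suggest_response(e), auto_respond)
            for e in detect_anomalies(events)]

events = [Event("10.0.0.5", 12), Event("10.0.0.9", 1)]
print(handle_incidents(events))                     # suggestions only
print(handle_incidents(events, auto_respond=True))  # trusted: auto-executed
```

The design choice worth noting is that the detector and the responder are separate functions, so the human approval step sits between them rather than inside either one.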

AI is used in many areas of the cybersecurity industry. Military scenarios call for AI robotics that can make decisions in dangerous situations; putting an AI robot in harm's way is a better alternative than risking human lives (Yeh, P. Z., & Crawford, J., AI Magazine, 2017). Using AI robots to inspect manufacturing is one way to automate a human's role in production (Norman, D., Research-Technology Management, 2017).

Using AI in manufacturing means we won't need as many line engineers doing product inspections. Computers and cameras would do the initial checks, and a human could follow up if issues arose. The role of the line engineer must therefore evolve into someone who can read and understand the output of the computer system. The same can be said for a network defense or infrastructure hardening engineer or analyst: let the AI do the first pass and put the foundations in place, then have the human check that the work meets standards. Let neural networks do the data analysis and report on outlying data. This will speed up daily tasks while integrating computers, AI, cybersecurity, and humans. The overall goal of integrating artificial intelligence into security systems should be to become more secure.
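The "first pass by the machine, follow-up by the human" idea above can be illustrated with a toy outlier report. A simple z-score test stands in for the neural network here; the data, field meaning, and threshold are all assumed for the sake of the example.

```python
# Sketch of an automated first pass that reports outlying data points for a
# human to review. A z-score check is a statistical stand-in for whatever
# model the real system would use; the threshold of 2.0 is illustrative.

import statistics

def report_outliers(measurements, z_threshold=2.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mean = statistics.mean(measurements)
    stdev = statistics.pstdev(measurements)
    if stdev == 0:
        return []  # all values identical; nothing to flag
    return [(i, x) for i, x in enumerate(measurements)
            if abs(x - mean) / stdev > z_threshold]

# e.g. daily request counts per server; the human follows up on the spike
counts = [100, 102, 98, 400, 101, 99]
print(report_outliers(counts))  # flags the value 400 at index 3
```

The machine narrows thousands of data points down to a short report; deciding whether a flagged point is a real problem remains the human's job.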

To rely fully on AI to handle all tasks would be a foolish endeavor, and it may very well lead to the horrors of science fiction stories. Using AI for tasks that speed up human production and output is an idea that can be unsettling at times, but it should be embraced as technology evolves.

Emily Young
Emily Young is the Social Media and Communities Manager for Excelsior College.