AI is the hottest topic right now, used across industries from content production to industrial applications. One of the most widely discussed use cases is cybersecurity, where AI can be used both to defend and to attack. But what does artificial intelligence actually do, and what are the most important things we should know about it?
Artificial Intelligence (AI) is the simulation of human intelligence in machines programmed to think, learn, and solve problems like humans. Put simply, AI can be described as a machine capable of imitating intelligent human behavior.
"Currently, AI is quite novel, with perhaps the brainpower of an 11-year-old child. A recent study testing 1,000 individuals found that respondents could identify AI-generated content only about 55 percent of the time. As we produce and feed more data into it, it will get smarter and smarter and more capable of mimicking human capabilities," explains Tor Indstøy, VP of Risk Management and Threat Intelligence at Telenor.
Indstøy passionately discusses AI's possibilities, regardless of whether the outcomes are classified as good or bad. He approaches AI from a fundamental point of view: not all AI is the same, and it is important to know the distinctions between its different branches.
“First of all, there is Classical AI, which refers to the early era of artificial intelligence research and development, where the focus was on creating explicit, rule-based systems that manipulate symbols to perform tasks. Nowadays, when we are talking about AI, we mean a more advanced version of it, such as generative AI or LLM," Indstøy continues.
Generative AI and LLMs (Large Language Models) are both subsets of artificial intelligence, but they serve different purposes and have distinct characteristics. Generative AI refers to AI models designed to generate new content – such as text, images, or video – resembling the data they were trained on. LLMs, in turn, are a specific type of generative AI model specializing in understanding and generating human language.
AI is missing collaboration
Indstøy sees AI as a transformative force – today's industrial revolution. According to him, we are living in the era of AI. We have more data than ever before, and we are connected to our digital devices in multiple ways, creating even more of it. Smartwatches track everything from our steps to our sleep and connect to phone apps. This generates billions of data points for developing AI, and Indstøy has a vision:
"We could train AI with all the data and create an AI that knows everything. AI can only know and learn things that we feed it. The more we give it, the more we can get from it."
But should we give all our information to AI? This is a complex and nuanced question, with both potential benefits and challenges. More data allows AI systems to learn from diverse examples, leading to better accuracy and generalization. At the same time, using vast amounts of data raises privacy and ethical concerns, especially when dealing with sensitive personal information or trade secrets.
Indstøy places heavy emphasis on human-centered risk understanding: a risk management approach built on understanding and addressing the role of human behavior, psychology, and decision-making in the context of different risks. This approach recognizes that humans are a critical factor in many types of risk, cybersecurity included. As humans, we assess risks all day in the most everyday situations – and we should bring that mindset with us when dealing with AI.
“Corporates are trying to figure out the best way to utilize AI but are also concerned with the possible risks. We face complex challenges that require a deep understanding of AI technology, and we do not have all the answers yet. However, we miss one thing in the field of AI, and that is collaboration. If we could connect all the cybersecurity expertise and knowledge regarding AI in companies, between companies and the public sector, we would have more knowledge and utilize AI better," he explains.
AI will trick all of us
So, what risks does AI create in cybersecurity? The short answer is that attacks are getting better and more efficient.
"With AI, cybercriminals can create spear-phishing campaigns that target their victims precisely, for example by gathering information from social media – phishing is no longer easily identifiable, badly written emails from CEOs. Deepfakes, too, are threatening our perception of reality. AI can easily generate a video where anyone can be made to say anything," Indstøy describes.
I, the writer of this article, got to experience the power of AI deepfakes while interviewing Indstøy. Within a minute, and with my permission, he had created a deepfake of my voice. The fake voice was convincing enough to fool my friends – although maybe not my mother. How can we defend against a force like this?
"Everybody will be tricked at some point; there is no shame in that. The first step of our defense is to acknowledge the presence and possibilities of AI. We need to communicate more and help others learn from our mistakes. Also, remember that if something is too good to be true, it probably is."
Filtering threats is one of an operator's most important tasks – Telenor has blocked more than 250 million scam calls globally.