Software security tops ENISA’s list of cybersecurity threats for 2030

The European Union Agency for Cybersecurity (ENISA) has published a report on potential cybersecurity threats for 2030, trying to anticipate future security risks based on current trends and expert opinions. While some of the less likely predictions may touch on science fiction, the top two anticipated threats are already with us today: software supply chain compromises and AI-enhanced disinformation campaigns.

Planning future cybersecurity measures always requires at least some predictions. While there is no shortage of those (especially at year’s end), it’s hard enough trying to predict the year ahead – so how about the next decade? In March 2023, the European Union Agency for Cybersecurity (ENISA) published a report exploring potential cybersecurity threats for 2030. While the stated goal is to anticipate threats that could affect the “ability to keep European society and citizens digitally secure,” the findings are applicable on a global scale.

Combining input from expert workshops with formal threat forecasting methods, the report both indicates which existing threats are most likely to stay with us and makes a foray into more speculative predictions, with “science fiction prototyping” named as one of the methods used, no less. Here’s a brief overview of the main findings (spoiler alert – application security is way ahead of the robots taking over).

First and foremost, the report lists the ten cyber threat categories we are most likely to see in 2030, considering current and emerging trends. The list is ordered primarily by impact and likelihood, with the top four threats all receiving the maximum likelihood score – not surprisingly, since these are already present and well-known today.

#1: Supply chain compromise of software dependencies

As applications and IT infrastructures grow more complex and reliant on external components, the associated risks can only grow. With some of the biggest cybersecurity crises of the past few years (notably SolarWinds and Log4Shell) already being related to the software supply chain, it is only to be expected that similar attacks and vulnerabilities related to software and hardware components will be the #1 threat for 2030. Whatever security measures are followed, the report anticipates that the sheer complexity of future systems will keep risk high and testing difficult: “While some of these components will be regularly scanned for vulnerabilities, the combination of software, hardware, and component-based code will create unmonitored interactions and interfaces.”
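
To make the dependency scanning part a little more concrete, here is a minimal sketch (not from the report) that asks the public OSV.dev vulnerability database whether a single package version has known advisories. Real software composition analysis tools do the same across the entire dependency tree, including the transitive and hardware-adjacent components the report warns about; the package and version below are just an illustrative example.

```python
# Minimal sketch: query the public OSV.dev database for known advisories
# affecting one dependency version. Real supply chain monitoring would cover
# the full dependency tree, not a single hand-picked package.
import json
import urllib.request


def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    """Return OSV advisory IDs recorded for one package version."""
    query = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=query,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return [vuln["id"] for vuln in result.get("vulns", [])]


if __name__ == "__main__":
    # Example: a Log4Shell-era artifact from the Maven ecosystem.
    print(known_vulnerabilities("org.apache.logging.log4j:log4j-core", "2.14.1", "Maven"))
```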

#2: Advanced disinformation and influence operations campaigns

In the security industry, we tend to focus on the technical and business risks rather than on societal impact, but ENISA takes a wider view and thus sees disinformation as a major security risk to societies and economies. The early 2020s saw the rise of disinformation campaigns (whether suspected or confirmed) involving everything from public health and corporate takeovers to national politics and military operations. The report indicates that with the rapid growth of AI-powered tools, the technical capabilities for mining and manipulating data sources will continue to open new avenues for influencing public opinion and national or even global events. Researchers single out deepfake videos of prominent individuals as a particular danger, alongside the growing potential of using bots to fake digital identities or maliciously influence public opinion by building an increasingly convincing online presence and following.

#3: Rise of digital surveillance authoritarianism and loss of privacy

Closely related is another risk arising from advances in physical and digital surveillance technology combined with the widespread use of digital identities. Already today, it is often possible to track individuals across the physical and online realms. With continuous improvements to technologies such as facial recognition and location tracking, the types and amounts of individually identifiable data will likely continue to grow, posing major challenges both for personal privacy and data security. Even storing all this information and using it for legitimate purposes poses serious technical and legal challenges – but these data stores may also be abused or directly targeted by malicious actors, putting the privacy and physical safety of individuals at risk.

#4: Human error and exploited legacy systems within cyber-physical ecosystems

To start with a quick translation, this threat is all about insecure critical infrastructure and Internet of Things (IoT) systems. The assumption is that by 2030, smart (aka connected) devices will be so ubiquitous that administering and securing them all becomes practically unmanageable. IoT devices are notoriously insecure, and this is not expected to improve much in the coming decade. As they not only proliferate in personal use but also permeate building management, industrial systems, transport, energy grids, water supplies, and other critical infrastructure, they may be used for direct and indirect attacks against such physical systems. One example given in the report is the threat of compromised personal smart devices being used as jumping-off points for attacking and infiltrating nearby networks and infrastructures.

#5: Targeted attacks enhanced by smart device data

Taking the threat posed by omnipresent connected devices from the level of infrastructure down to the level of personal risk, ENISA expects more numerous and more precisely targeted attacks against individual users. Malicious actors may harvest and analyze data from personal and home smart devices to build highly accurate identity data sets and behavioral profiles. These victim profiles could be used for direct attacks (for example, to access financial or physical assets), more indirectly as an aid to social engineering or identity theft, or as standalone assets to be sold on the black market. Combined with other technological advances such as AI, these highly personalized attacks could be extremely convincing and hard to defend against.

#6: Lack of analysis and control of space-based infrastructure and objects

The advent of private space enterprises combined with widespread reliance on space-based infrastructure like GPS and communications satellites is greatly expanding the potential for related cyberattacks. Recent years have demonstrated the importance of space-based assets for both civilian and military uses, but the complex and non-transparent mix of public and private space infrastructure expected in 2030 will make it extremely difficult to identify threats and establish defense mechanisms. The report singles out base stations as potential weak points that can be targeted for denial-of-service attacks to disrupt civilian infrastructure or military operations. Even in non-conflict scenarios, the race to innovate faster and at a lower cost than rivals may lead to security gaps that could open up a whole new field for cyberattacks.

#7: Rise of advanced hybrid threats

In this report, hybrid threats mean anything that crosses over from the digital to the physical security realm. While gathering data online to support physical operations is nothing new, the “advanced” part suggests that attackers may be able to find and correlate a wealth of data in real time using AI and related technologies to coordinate attacks spanning multiple vectors in parallel. For example, a hybrid cyberattack could combine social engineering enabled by smart device compromise with a physical security breach, a social media disinformation campaign, and more conventional cyberattacks. In a way, this category covers known threats but combined in unexpected ways or with unexpected efficiency.

#8: Skill shortages

To start with a direct quote from the report: “In 2022, the skill shortage contributes to most security breaches, severely impacting businesses, governments, and citizens. By 2030, the skill shortage problem will not have been solved.” Again, this is not limited strictly to skills in the cybersecurity industry but also touches on a more significant generational gap. Even as new technologies continue to attract interest and investment, the digital world of 2030 will still largely rely on legacy technologies and systems for which the new workforce is not trained. On top of that, the growing complexity of interconnected systems and devices of all vintages will require cybersecurity talent that will be increasingly hard to come by. And as the shortage really starts to bite, cybercriminals may resort to systematically analyzing job postings to identify security weak spots in an organization.

#9: Cross-border ICT service providers as a single point of failure

This threat is all about service providers becoming the most vulnerable link in an interconnected world, with “cross-border” referring primarily to “the physical-cyber border.” Modern countries and societies already rely heavily on internet access and internal networking to operate, and by 2030, this dependency will extend to a lot more physical infrastructure in the smart cities of the future. Communications service providers could thus become single points of failure for entire cities or regions, making them attractive targets for a variety of actors, whether state-sponsored or otherwise. The report bluntly states that “ICT infrastructure is likely to be weaponized during a future conflict” as a crucial component of hybrid warfare that combines military action with cyberattacks to cripple communications and connected city infrastructure.

#10: Abuse of AI

By 2030, AI technologies will have been improved way beyond the level of ChatGPT and will be embedded (directly or not) in many decision-making processes. By this time, attacks to intentionally manipulate AI algorithms and training data may exist and be used to sow disinformation or force incorrect decisions in high-risk sectors. As AI-based consumer applications gain popularity, some may deliberately be trained to be biased, dysfunctional, or downright harmful. Apart from slightly more conventional risks like advanced user profiling, fake content generation, or hidden political biases, the societal impact of a viral new app that can subtly influence and shape the behaviors and opinions of millions of users could be dramatic.
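
To illustrate what manipulating training data might look like in its crudest form, here is a small, hypothetical sketch (not from the report) using scikit-learn: flipping a fraction of training labels, a classic data poisoning technique, progressively degrades a simple classifier. Attacks on production AI systems would be far more subtle, but the underlying idea of corrupting what a model learns from is the same.

```python
# Minimal illustration (not from the report): label-flipping "poisoning" of
# training data degrades a simple classifier's accuracy on clean test data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)


def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Train on data where a fraction of labels has been maliciously flipped."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the chosen labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)  # evaluate on clean test data


for fraction in (0.0, 0.1, 0.3, 0.45):
    print(f"{int(fraction * 100):>2}% of labels flipped -> accuracy {accuracy_with_poisoning(fraction):.2f}")
```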

Serious fun with futurology

The full report runs to over 60 pages and is well worth even a cursory read. Apart from another ten future threats that didn’t make the top ten list and a detailed analysis of the trends that led there, it also presents five potential scenarios for global development, including one not far removed from Gotham City. All the same, this is a serious report exploring some very serious issues that could affect us all in the not-so-distant future. And if you think it’s all a bit too science-fiction for your liking, remember that we live in a world where more than a few crazy SF ideas from the 1950s and 60s have come true – just a thought.

About the Author

Zbigniew Banach - Technical Content Lead & Managing Editor

Cybersecurity writer and blog managing editor at Invicti Security. Drawing on years of experience with security, software development, content creation, journalism, and technical translation, he does his best to bring web application security and cybersecurity in general to a wider audience.