For decades, the battle between cybercriminals and security developers has been marked by growing sophistication. As bad actors find more advanced ways to attack, security experts devise more innovative ways to counter those attacks and to predict what the next breed of attacks will look like. Today, the prediction is that cybercriminals are developing self-learning malware, and experts believe it will be deployable in the not-too-distant future. Here, we’ll take a closer look at self-learning malware and how it could be deployed.
Today, there is little evidence that self-learning malware is in the wild. The majority of threats are operated by cybergangs, and the actions they take during an attack are often based on the information their malware feeds back to them. The nearest thing bad actors currently have in their arsenal is autonomous malware: software that, although it doesn’t use machine learning, is programmed to infiltrate and operate in the environments it infects without being remotely controlled.
Where self-learning malware definitely does exist is in the research lab. Both cybercriminals and defence teams are working on its development, but for opposite purposes. In security labs, it is being developed to find vulnerabilities in existing, AI-based defence environments, so that security technology can be improved to counter the next wave of threats. Those self-learning threats, experts believe, will be available to cybercriminals within the next few years and will be able to beat the best security systems currently in place.
The reason this is plausible is that the technology needed to create this type of malware is increasingly easy for criminals to obtain. Open-source datasets and machine learning and AI tools are readily accessible and becoming easier to use, while low-cost cloud hosting makes development environments cheap to set up.
What will these threats look like?
According to next-gen firewall developer Fortinet, cybercriminals will begin to use ‘hivenets’: intelligent clusters of compromised devices. These devices will be able to communicate with each other and undertake activity based on shared intelligence gleaned from machines swallowed up by the network. From this, Fortinet believes, they will use self-learning to attack vulnerable systems on a scale previously unimaginable.
Individual compromised machines within the hivenet will be made ‘smart’ by the self-learning malware, enabling them to carry out commands without the need for a human hacker. This will allow a hivenet to expand into a swarm of machines capable of multiple, simultaneous attacks while, at the same time, being able to counter the victim’s security system’s response.
Before using this form of attack, cybercriminals will also make use of ‘swarm-bots’. Like hivenets, these are clusters of compromised devices, but they lack self-learning capability. What they can do, however, is prepare the ground for the hivenet attack. This is achieved by using the swarms to simultaneously identify and target different attack vectors, something they can do on a huge scale and at lightning speed. This will allow cybercriminals to find and target vulnerable systems much more quickly before the hivenet takes over to launch multiple attacks.
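The coordination pattern described above — many independent agents each probing a different part of an environment and pooling what they find — can be illustrated with a harmless toy simulation. This is a minimal sketch only; the names (Agent, SharedIntel, the host list) and the “weakness” data are entirely invented for illustration, and there is no networking or attack capability of any kind.

```python
from dataclasses import dataclass, field

@dataclass
class SharedIntel:
    """Knowledge base that every agent in the cluster can read and write."""
    findings: dict = field(default_factory=dict)

    def report(self, target: str, weakness: str) -> None:
        self.findings[target] = weakness

@dataclass
class Agent:
    name: str
    intel: SharedIntel

    def probe(self, target: str, environment: dict) -> None:
        # Each agent inspects only its own assigned target...
        if target in environment:
            # ...but shares anything it finds with the whole cluster.
            self.intel.report(target, environment[target])

# Hypothetical environment: which hosts exhibit which (made-up) weaknesses.
environment = {"host-b": "outdated-service", "host-d": "default-password"}

intel = SharedIntel()
agents = [Agent(f"agent-{i}", intel) for i in range(4)]
targets = ["host-a", "host-b", "host-c", "host-d"]

# Conceptually the agents work in parallel; a loop suffices for the sketch.
for agent, target in zip(agents, targets):
    agent.probe(target, environment)

# Every agent now "knows" everything any one agent discovered.
print(intel.findings)
# → {'host-b': 'outdated-service', 'host-d': 'default-password'}
```

The point of the sketch is the shared knowledge base: no single agent does much on its own, but the cluster as a whole accumulates a complete picture far faster than any one probe could.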
When it comes to cyber defence, existing AI-based cybersecurity tools will need to improve in both speed and sophistication. While humans will certainly be needed in their development, the cyberwar of the not-too-distant future will be one of machine versus machine. Here, self-learning will be a critical component of security.
How big a threat will self-learning malware be?
While self-learning malware is destined to become a powerful tool for cybercriminals, its use will be limited to some of the more aggressive and sophisticated gangs, including state-sponsored attackers. It certainly won’t be the cheapest tool available, and, for many cybergangs, it won’t be needed: too many systems are so weakly defended that hackers won’t require such advanced means to break in.
With so many existing systems vulnerable, many malware developers will concentrate on creating software aimed at low-hanging fruit, which is more likely to provide them with a bigger return on their investment. Similarly, cybercriminals will find it easier to use phishing and social engineering to obtain login credentials, and less expensive to install cheaply available malware.
However, as organisations become more secure and today’s easy attack routes lose their value, criminals will need to adopt more advanced means. By that time, of course, self-learning malware, like ransomware today, may be sold as a service to criminal outfits. This will give them the means to attack without buying the tool, only renting it; nor will they need the expertise to carry out an attack, as that will be provided by the malware vendor.
Indeed, AI and self-learning are even being developed for use with phishing and social engineering. These tools will scrape information about targets, such as stolen data sold on the dark web, social media posts and publicly available information, and use it to generate messages convincing enough to get victims to disclose login credentials or click on malware-installing links.
Cyberattacks are one of the biggest threats facing organisations. They take down systems, disrupt operations, steal personal data and business intelligence, and ransom companies for significant sums. On top of that, there’s the risk of heavy fines for non-compliance and data breaches, and of reputational damage caused by news of an attack. As technology develops, the methods used to carry out an attack grow in sophistication. For all organisations, it is essential that their systems are as secure as possible.
If you are looking for secure hosting, visit our homepage to see our range of hosting solutions and security tools.