
The new ways we could get hacked (and defended) in 2019

Experts from the NSA and Darktrace discuss AI, invisible security, and why you really need to change your passwords.

[Animation: FindingFootage/Pixabay; ElenVD/iStock]

Cybercrime is in many ways the perfect crime: low risk, scalable, and highly profitable. As more of our lives migrate online, attacks on our cybersecurity by the agile, globalized, and outsourced cybercrime industry show no signs of slowing down.


Billions of people were affected by data breaches and cyberattacks in 2018, including up to 500 million Marriott customers. Incidents of cryptojacking (hijacking servers to mine cryptocurrency) experienced a meteoric rise, but those attacks dropped off towards the end of the year in line with cryptocurrency prices. In contrast, banking Trojans like Emotet and Trickbot, which steal banking credentials, experienced a resurgence. North Korea, Iran, Russia, and China continued to be the main actors in nation-state attacks, such as the fake think-tank and Senate sites created by a Russian-linked hacking group ahead of the U.S. midterm elections.

So what’s in store for cybersecurity in 2019? If 2018 is any indication, threats are becoming more sophisticated, harder to detect, and potentially more dangerous, but the cybersecurity technology and talent arrayed against them are evolving too.

AI-powered malware

Max Heinemeyer is the director of threat hunting at Darktrace, a company that uses AI to identify and combat cyberattacks. Human attackers using malware try to mimic normal behavior in a particular network in order to spread to more machines while avoiding detection.

“Narrow artificial intelligence is going to supercharge malware in the next couple of years,” says Heinemeyer. “It always takes a human, in manual intrusions, to take a look at an environment and see what constitutes normal behavior. But once they use AI to do this, they can do it at machine speed, localized to every environment. What if ransomware worms or other attacks can intelligently choose, tailored to the environment, which way to move around is best?”

Traditionally, attackers maintain communications with compromised systems using command-and-control servers (also called C2). If malware can use AI to autonomously determine how to mimic normal behavior while moving around, e.g., by detecting and using local credentials, attackers no longer need C2 and the malware becomes harder to detect.

Darktrace has also seen early examples of malware selecting different payloads (the code which actually performs a malicious action like encrypting or stealing data) depending on the context. Trickbot, for instance, can now steal banking details or lock machines for ransom. Malware will become more profitable if it can act in a way that will maximize the income from a particular environment.


Smart phishing

AI could also supercharge phishing, an attack whereby an email or other message from an apparently legitimate institution is used to lure the receiver into providing sensitive data. In a global survey by CyberArk, 56 percent of 1,300 IT security decision makers said that targeted phishing attacks were the top security threat they face.

“If we think back to the Emotet Trojan, they are scraping email data,” says Heinemeyer. “They could take all that email data, use artificial intelligence to auto-create messages that understand the context of the emails and insert themselves into legitimate email conversations. Then we don’t even talk about phishing anymore, we talk about emails that look legitimate, that have absolutely contextualized content, that go into existing email conversations, and it will be almost impossible to distinguish them from genuine emails.”

AI-powered defenses

In the cybersecurity arms race, AI techniques like machine learning are becoming increasingly important. Sixteen percent of companies already use AI-powered security solutions, according to a survey by Spiceworks, with an additional 19 percent expecting to adopt them in the next year or two.

But machine learning models can also be manipulated, according to Josiah Dykstra, the deputy technical director for cybersecurity operations at the National Security Agency. His group secures U.S. government systems that handle classified information and that are used for military purposes.

“Everything is a double-edged sword in cyber,” he says. “The same things that we can use for defense can be used against us. I see a lot of academic work on adversarial machine learning. Can an attacker manipulate the model, so that they can hide from a machine learning algorithm? There’s terrific work going on in terms of showing how models can be resilient to those kinds of attacks.”
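The idea behind such evasion attacks can be sketched with a toy model (all the weights, features, and the step size below are invented for illustration). A linear “detector” scores feature vectors, and the attacker nudges each feature against the sign of the corresponding weight, changing the sample only slightly but pushing its score below the alarm threshold:

```python
# Toy evasion attack on a linear "malware detector" (all numbers invented).
# A positive score means the sample is flagged as malicious.

def score(weights, x, bias=0.0):
    """Linear model score: dot(weights, x) + bias."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def evade(weights, x, step=1.0):
    """FGSM-style move: shift each feature opposite the sign of its weight,
    lowering the score while perturbing the sample as little as possible."""
    return [xi - step * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

weights = [0.8, -0.2, 0.5]   # hypothetical detector weights
sample = [1.0, 0.3, 0.9]     # hypothetical features of a malicious file

flagged_score = score(weights, sample)                 # > 0: flagged
evaded_score = score(weights, evade(weights, sample))  # < 0: slips past
```

The defensive research Dykstra mentions works against exactly this: for example, adversarial training retrains the model on such perturbed samples so the evasion no longer works.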




Vulnerable critical systems

Critical national infrastructure consists of systems which are so essential that their operation is required to ensure the security of a nation, its economy, and the safety of its population. In areas like energy and manufacturing, critical infrastructure is often managed by industrial control systems. Thirty-one percent of professionals with responsibility for those systems experienced a security incident in the past year, according to Kaspersky Lab’s State of Industrial Cybersecurity 2018 study.

“We’ve seen attacks against nation-state infrastructure in Darktrace environments, as we cover a lot of government entities and critical systems,” says Heinemeyer. “Some of the industrial control systems are constantly being scanned from the internet. Sometimes they get infected by what looks like commodity malware. Opportunistic malware doesn’t care if it’s ransoming a water treatment plant or a government network or if it’s hitting an office network.”

Dykstra is also concerned about the ability to effectively combat threats to critical infrastructure. “It’s such a complex ecosystem,” he says. “There’s so many players, so many different authorities. Who’s responsible for what? How do we help people understand the threat before it turns into a disaster? That is something that really needs to be on people’s minds in 2019.”

Open source attacks

The risk associated with supply chain attacks, where a target is attacked via a partner or supplier, has been rising steadily. Fifty-nine percent of companies say they have experienced a data breach caused by one of their vendors or third parties. Heinemeyer highlights a less well-known type of supply chain attack as one to watch in 2019: those on open source software.

“It’s sometimes easy to get contributor access to open source projects,” he says. “Because there’s no standard procedure to vet these people to make sure they’re trustworthy, malicious actors can get into the software supply chain, inject back doors into very well-known open source code, and then get access to many, many environments.”

Earlier this year an open source JavaScript library with 2 million downloads was distributed with a bitcoin-stealing backdoor. The library’s developer no longer had time to provide updates, so he accepted the help of an unknown developer, who introduced the backdoor.
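There is no single fix for trust in contributors, but one mitigation is to pin dependencies to exact versions and verify their cryptographic hashes at install time, so a tampered release fails loudly rather than installing silently. For JavaScript projects like the one in this incident, npm already supports this via its lockfile (the digest below is a placeholder, not from the real incident):

```shell
# npm records an integrity hash for every dependency in package-lock.json, e.g.:
#
#   "integrity": "sha512-<digest-of-the-published-tarball>"
#
# "npm ci" installs exactly the locked dependency tree and fails the build
# if any downloaded package no longer matches its recorded hash:
npm ci
```

Hash pinning doesn’t stop a malicious version from being locked in the first place, but it does stop it from spreading to every environment that rebuilds from the lockfile unchanged.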


Trust attacks

Heinemeyer is also worried about an emerging form of cyber threat he calls a trust attack. “What could be more devastating than just stealing or destroying data?” he asks. “Undermining the public’s trust in data. What if data was changed really subtly?”

He gives the example of altering blood sample data used by the U.K.’s National Health Service. “If an attacker were to intrude into the database of blood samples and labeling, all kept digitally these days, and to change that data without anybody noticing, it could result in people dying without anyone knowing the cause. Public trust could be undermined in these kinds of national institutions.”

Similarly, a nation-state attacker could cast doubt on the results of an election, not by trying to influence voters as in Russia’s cyber operations leading up to the 2016 U.S. presidential election, but by changing some votes, and making the interference public later.

“This could have disastrous effects on democracy,” says Heinemeyer. “It takes the discussion about the cyber realm these days in geopolitics to a whole new level.”

Helping defenders deal

One of Dykstra’s research interests is the human factor in cybersecurity. “Cybersecurity has historically been very technology-focused,” he says. “It is one aspect of cyber security, but alone it’s insufficient. I have done a bunch of studies looking at how do we help the human defenders be more resilient and robust and more effective at their jobs?”

Cybersecurity professionals work long hours in stressful environments, and mental health has not traditionally been a priority for the industry. Dykstra and Celeste Lyn Paul, senior researcher and technical adviser at NSA Research, developed the Cyber Operations Stress Survey (COSS) to help gauge the stress levels of security personnel in high-risk environments. COSS measures factors like fatigue, frustration, and cognitive workload during real-time tactical cyber operations like those carried out by the NSA.


“Those papers suggest ways not only to measure, but when to check in on people,” says Dykstra. “Different training is required to make people more resilient, but I think that’s still an emerging area.”

A wider range of cybersecurity talent

Both Heinemeyer and Dykstra argue that the cybersecurity industry needs to draw on a wider pool of talent in 2019. “Don’t just look at the Computer Science department of the top five universities in the world,” says Dykstra. “I have certainly found that diverse teams of psychology and economics and political science and language analysts, those skills make the STEM fields even better. I actually wish I had lots more classes in my computer science curriculum about human psychology. That would have helped me be a more effective cybersecurity person.”

Dykstra adds that the cybersecurity field needs people who didn’t go to college. “There are amazing people in cybersecurity who, for whatever reason, decided not to go, or school wasn’t exactly their thing, but they bring a lot of talent to this problem,” he says. “Lots of the skills the NSA needs aren’t taught in colleges.”

Heinemeyer mentors a team of 30 junior threat hunters (analysts who proactively search for malware or attackers that are lurking in a network) in the U.K., with 80 more across the globe. “Most of our junior analysts don’t have any IT background or any cybersecurity background,” he says. “They are data scientists, they have PhDs in chemistry, in astrophysics. Some of our best analysts come from linguistics or social science.” Roughly half of Darktrace’s U.K. threat hunters are female.




Invisible security

Dykstra also suggests that users need to take more responsibility for securing their own data in 2019. “If your account gets compromised, it is your individual responsibility to go change that password and to understand that if you reuse passwords, very basic cybersecurity can break down,” he says. “Picking strong passwords or having a password manager would get us so far in achieving the prevention of data breaches.”
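As a small illustration of the “strong password” advice, a password drawn from a cryptographically secure random source is essentially what a password manager generates for you. A minimal sketch using Python’s `secrets` module (the length and alphabet here are arbitrary choices, not a standard):

```python
import secrets
import string

def generate_password(length=16):
    """Build a password by drawing characters from a large alphabet
    using a cryptographically secure random number generator."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()  # a fresh, unguessable 16-character password
```

The point is not to hand-roll your own tool but that, unlike a human picking memorable words, a CSPRNG produces a different, unguessable password for every account, which is precisely what breaks the password-reuse failure mode Dykstra describes.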


Some users are more at risk than others because of their security behaviors or lack thereof. Understanding how personal traits influence these behaviors can help security professionals to determine which users need additional defenses or targeted training. Research from the University of Maryland (in which Dykstra participated), for example, found that women were less likely to use strong passwords and update them regularly, while introverts were less careful about locking their devices than extroverts.

Despite the best efforts of the security community, users often resist doing the work required to implement even basic security, so Dykstra is also an advocate of what he calls invisible security.

“When your browser gets automatic updates all the time, people actually are more safe and secure,” he says. “Where else can we do that invisible security to help make people safer? How can we just bake it in so that you don’t have to think about how to do strong encryption or how to do safe software development practices?”


About the author

Lapsed software developer, tech journalist, wannabe data scientist. Ciara has a B.Sc. in Computer Science and an M.Sc. in Artificial Intelligence.
