A couple of weeks ago, Emotet sprang back to life. The first new spam messages started flowing after a five-month hiatus.
Emotet has a long history of wreaking havoc across public and private sector networks. The U.S. Cybersecurity and Infrastructure Security Agency (CISA) estimates that public sector cleanup costs can reach $1,000,000 per incident. While Emotet is more than spam alone, as the spam messages pick back up, so too do the victim tallies.
Several groups track Emotet activity, observing campaign after campaign. Cryptolaemus, a collective of information security professionals, might be the best known among these. Like others, Cryptolaemus publishes the new victim-facing controllers as soon as their sandboxes or processing systems see them. This helps defenders prevent countless more victims and losses. This is amazing work and a tremendous effort! This same sort of cross-company and cross-functional collaboration is the basis for a self-governing Internet. I hope they all know their efforts are very helpful and have certainly prevented more victims.
But the malicious actors continue more or less unabated.
Do we have the capabilities to stop Emotet completely? Can defenders hope to shut them down for good? What role would or should law enforcement play?
This is not the first cyber malevolence to catch the attention of defenders across many companies and countries. Nor is it the first to raise questions about the ethics of collaborative response efforts.
Bringing some of the issues below to a wider audience seems to me a good thing to do. Behind the scenes, the fate of global coordinated action often rests in the hands of a select few. Good-natured and generally altruistic in character, these folks decide the direction of the global response.
When you have information at your fingertips that will impede actors, how do you use it? How do you weigh the ability to stop ongoing victimization, for a time at least, against exposing what you know? Or, an even weightier piece of information: how you know what you know?
I have firsthand experience working with some groups of defenders. In my time, I have observed a handful of similar scenarios play out. A common set of ethical factors seems to come up in many threat remediation working groups. I'll do my best to provide a broad overview of those issues as I see them.
Tipping the researchers’ hands. Many times, successful defense takes specialized knowledge. Sometimes this knowledge is only obtainable via special processes or access. Researchers want to protect those processes and that access for several reasons. This concern can block information sharing in ways that allow actors to continue their operations.
Helping threat actors. Making threat-relevant information freely available helps the most defenders. At the same time, it puts that information in the hands of the threat actors. This can and does cause threat actors to change tactics or adapt to defensive knowledge and techniques. This concern, too, can block the sharing of information with defenders.
Pending law enforcement action or investigations. Several threat actor groups and activities draw the attention of law enforcement around the world. Many defender groups understand that stopping a threat wins the battle, but handcuffs can win the war. Yet actors victimize governments, companies, and people much faster than law enforcement moves. This means the victim counter keeps on ticking while law enforcement prepares its case. (To be clear, Team Cymru supports law enforcement action.)
Pending civil legal action or investigations. When companies experience a security incident, they have several things to balance. Many times, the desire to control messaging means their staff cannot share details of the attacker techniques right away. This can lead to a lack of defense knowledge sharing. Once again, this benefits the attackers.
Lack of developed, coordinated technical response systems. To be disrupted effectively at global scale, some threats require well-developed response systems. Much of the time, such systems and methods are not in place ahead of the threat. This means that actors can continue unabated until such systems exist.
Lack of developed, coordinated technical response norms. Laws, corporate legal opinions, corporate policies, and liability concerns all constrain response options. For each major threat, getting everyone onto the same playing field takes time. And once they are there, you will often find that each player is bound by a separate and distinct set of policies. While consensus on these issues takes time, victims keep piling up.
Competition among threat intelligence companies. Don’t get me wrong, I am a huge fan of capitalism and competition! However, there are times when competition itself leads to a lack of sharing of defense-relevant information. In many situations, this gives the threat actors a longer runway.
Restrictive distribution terms (TLP:RED). Data owners always have the ability to control how others distribute their data. But sometimes the terms used are so restrictive that others cannot defend their customers with the information. Many times, this is due to information getting tagged TLP:RED when TLP:AMBER would suffice. When information is too restricted, the attackers have a larger set of victims to prey upon. (For more information on the Traffic Light Protocol (TLP), please see US CERT’s TLP page.)
Restrictive distribution terms (no commercial use). Again, data owners can place restrictions on how data gets distributed. It is very common to see ‘no commercial use’ terms placed on threat intelligence information. This may increase sharing among competitors, but it prevents distributing the data to any paying clients. In the end, threat intelligence vendors are not able to protect their customers. This adds to the victim pool and keeps that counter ticking away.
Privacy. This is a very serious concern among defenders, for both legal and ethical reasons. The landscape here is complex, but there is no doubt that privacy concerns complicate information sharing. The good folks protect the privacy interests of individuals. As a result, those individuals may become easy targets for malicious actors seeking to do them harm. Make no mistake – malicious actors care very little for the privacy interests of victims. Once again, the malicious actors benefit to the detriment of their victims.
Towards a Victim-Centric Approach
All the above concerns are valid and have well-reasoned justifications behind them. I cannot – nor do I wish to – refute the validity of any of them. What I want to do, though, is present an alternate viewpoint for you to consider.
As I mentioned before, many of the people joining the groups that make these decisions have altruistic intentions. Altruism is a selfless regard for the good of others. The others, here, are the victims of the malicious actors.
A victim-centric approach involves asking the question: “How can I best enable the defense of the entire set of victims from these malicious actions?” With that framing, how might our approach to sharing defender-relevant information change?
I will leave the exercise of re-examining these issues in light of that question up to you, dear reader. I can wax philosophic with ease, I assure you. But if you’ve made it this far, I’ll spare you any more of my proclivity.