Hackers are using advertising trickery to seed falsehoods for political gain
How did we get to this level of disinformation? How did we, the citizens of the United States of America, become so intensely divided?
It’s tempting to place the lion’s share of the blame on feckless political leaders and facile news media outlets. However, that’s just the surface manifestation of what’s going on.
Another behind-the-scenes component — one that is not getting the mainstream attention it deserves — has been cyber warfare. Russian hacking groups have set out to systematically erode Western democratic institutions — and they’ve been quite successful at it. There’s plenty of evidence illustrating how Russia has methodically stepped up cyber attacks aimed at achieving strategic geopolitical advantage over rivals in North America and Europe.
I’m not often surprised by cybersecurity news developments these days. Yet, one recent disclosure floored me. A popular meme site, called iFunny, has emerged as a haven for disaffected teenage boys who are enthralled with white supremacy. iFunny is a Russian company; it was launched in 2011 and has been downloaded to iOS and Android phones an estimated 10 million times.
In the weeks leading up to the 2020 U.S. presidential election, investigators at Pixalate, a Palo Alto, Calif.-based supplier of fraud management technology, documented how iFunny distributed data-stealing malware and, in doing so, actually targeted smartphone users in the key swing states of Pennsylvania, Michigan and Wisconsin. The public is unlikely to ever learn who ordered this campaign, and what they did — or intend to do, going forward — with this particular trove of stolen data.
Even so, this shared intelligence from Pixalate is instructive. It vividly illustrates how threat actors have gravitated to hacking vulnerable mobile apps. The state of mobile app security is poor. Insecure mobile apps represent a huge and growing attack vector. Mobile apps are being pushed out of development more rapidly than ever, with best security practices often a fleeting afterthought. Apps with gaping security holes are on the phones and at the fingertips of every person glued to his or her smartphone. These security weaknesses happen to align seamlessly with the spreading of disinformation.
The purveyors of disinformation know this, of course. And so they have taken to spreading data-stealing malware via vulnerable mobile apps. They’ve discovered this to be an easy way to harvest behavioral data, information which they then use to profile targeted individuals and learn everything knowable about their preferences, online behaviors and circle of family, friends and co-workers.
In doing this, the attackers are simply replicating what legitimate advertisers have always done. Online advertisers, in particular, have long conducted this sort of user behavior profiling based on monitoring our digital footprints; their goal is to identify and target individuals who appear to share certain characteristics and then direct content at them designed to influence their behaviors. This is exactly what propagandists seek to do; so it’s logical that they would take the tools and techniques designed to sell air fryers and reverse mortgages and apply them to demonizing minority groups, denying climate change and undermining the integrity of elections.
Cyber warfare operatives have, in essence, discovered how to leverage the Internet to dispense psychological trickery perfected by the advertising industry. And they’re taking full advantage of the wide-open, decentralized mobile app advertising infrastructure to assemble targeted groups of like-minded individuals and, very effectively, steer their behavior.
iFunny flew under the radar until August 2019 when the FBI arrested an 18-year-old Ohio man for making threats to shoot federal law enforcement officers, and then later that same month arrested a 19-year-old Chicago man for threatening to kill people at a women’s reproductive health clinic; both threats were made in posts on iFunny.
At a surface level, iFunny is similar to other mainstream meme venues, like Reddit and 9GAG. However, BuzzFeed news reporter Ryan Broderick took a closer look and discovered a teeming hub for white nationalism, haunted mainly by young, disaffected males. A source guided Broderick to a couple of very active iFunny message boards, one with 6,000 subscribers and another with 9,000 subscribers, both brimming with memes that spread hardcore neo-Nazi propaganda and celebrate gun violence and mass shooters — the stuff of radicalization.
Earlier this year, Pixalate’s security analysts spotted a powerful piece of malware circulating amidst normal-looking mobile ads being automatically distributed to folks with iFunny installed on their iOS and Android smartphones. They dubbed this malware “Matryoshka,” a reference to Russian nesting dolls. The tainted ad would arrive over the normal ad distribution infrastructure. The malware carried a two-part payload. First, it would silently start executing faked ad views on the victim’s phone, generating ad payments to the attackers, and then it would commence exhaustively extracting user identification and profiling data from each phone.
The attackers succeeded in installing Matryoshka on at least two million iOS and Android handsets, earning a very nice profit: they got paid some $10 million from advertisers for faked ad views, according to Pixalate. What’s more worrisome is that they also harvested detailed personal information and user behavior data from each infected phone. They can now do anything they want with this user profile information, use it, share it or sell it.
Pixalate security analysts also documented how, in the weeks before the presidential election, the attackers turned their attention to distributing Matryoshka infections disproportionately to iFunny app users in Pennsylvania, Wisconsin and Michigan. We’ll likely never know whether the personal and behavioral data they stole from iFunny users in these swing states came into play in the recent U.S. election. One thing seems certain: there’s little stopping them from leveraging this stolen user data to do anything they desire in the future, including launching more disinformation campaigns.
With so much going on, the advancing state of digitally-distributed propaganda isn’t top of mind with our political leaders. In fact, the iFunny hack is just one example of an untold number of cyber attacks that could, and probably should, be classified as asymmetrical cyber warfare strikes. Some thought leaders, like retired Admiral Michael Rogers, former head of the NSA who served as a top White House cybersecurity advisor under both Presidents Obama and Trump, suggest cyber warfare needs to be defined more precisely to take into account different types of tangible societal damage.
The past two U.S. presidential elections provide many supporting proof points for Rogers’ argument. Spending on digital ads by political candidates shattered records, topping $7 billion for the 2019-2020 election cycle, according to Advertising Analytics. That translated into 150% growth in the amount of third-party code connecting with users and digitally intersecting with their online activities.
Each one of these new user connections made with a mobile app represents an opportunity to siphon information from the user and, conversely, spread falsehoods to the user. There is plenty of this activity going on, much more than the average citizen realizes.
With Covid-19 continuing to spread and the global economy reeling, a full discussion about how disinformation campaigns are becoming more potent — and more tightly embedded into mobile apps — has been pushed to a side burner.
Clearly Big Tech and Big Telecom — Apple and Google, in particular — should be moving mountains to help resolve this, and there are signs they’re moving in that direction. Apple, for instance, recently announced new, more detailed disclosure rules for developers of iOS apps. With little fanfare, Apple has declared that app developers must provide details of how their apps collect data, as well as explanations of how they expect any harvested data to be used, in order to get their apps distributed through the official Apple App Store. These new rules take effect December 8.
Google is known to be working on new tools and protocols to help Android app developers do something similar: keep closer track of how sensitive data gets collected, channeled and shared in new apps. But Google has said very little publicly about whether it is considering imposing rules of its own, as Apple has done.
Apple and Google certainly should be tackling this head on. Should disinformation worsen to the point of causing our democratic institutions to utterly collapse, the consumer technology market would be profoundly altered, and there’s no telling how a dictator-controlled economy would shake out for the tech and telecom giants.
Meanwhile, each one of us, as private citizens, has an important role to play, as well. It behooves us to stay informed, do what we can to preserve our online privacy and make our voices heard when the opportunities arise.
Individual citizens must realize that they are being targeted by malicious third parties almost everywhere they go online. Consequently, they can’t assume that everything they see from a trusted source is trustworthy, because that source does not control everything that renders on its app or domain.
It’s a particularly hazardous time – our physical and mental health is intertwined with our digital health. At the very least, please take whatever steps you can to keep safe on all fronts.