Understanding the different types of phishing attacks and the triggers cybercriminals use to trick users into falling for them.
Phishing is the most prolific category of cyber scam. Others include romance scams, ‘mugged in London’ scams, advance-fee frauds and many more. Most scams seek to part you from your money – phishing is unique in seeking your personal data, usually in the form of passwords and bank details.
This is the defining difference we will use here. If the scam is after money, it is a scam. If it is after credentials, it is phishing. Whether it’s vishing (by phone), smishing (by SMS or text message), whaling (after the big fish), or spear-phishing (targeted phishing), it’s all basically phishing for personal data.
There are technology aids to prevent phishing – but given that the FBI’s latest report notes 114,702 phishing complaints in 2019, ultimately leading to losses of nearly $58 million, it is fair to say that they are not foolproof. The best way to protect yourself against phishing attacks is to recognize them. The best way to recognize them is through knowledge of their anatomy – their structure and how they work.
The two primary categories of phishing are standard and spear-phishing. The former is untargeted, large-scale phishing usually delivered in spam campaigns, often known as ‘spray and pray’ phishing. The latter, spear-phishing, is where an individual, or a small group of related people, is precisely targeted.
We have probably all been subject to – and have recognized – a spray and pray campaign. A quick look in our spam folder will probably show dozens, all filtered out by our email service provider. They are relatively easy to recognize: they often include typos and grammatical errors, and have a generally unprofessional appearance. According to the 2020 Data Breach Investigations Report, click rates on phishing emails are at an all-time low of just 3.4%.
This makes spray and pray a numbers game. Any given phishing message has a 96.6% chance of being ignored by its recipient, even if it makes it through the email service’s automated filters. However, this means that for every hundred messages successfully delivered in a spray-and-pray campaign, we can expect three or four people to fall victim. If a campaign manages to distribute 100,000 fraudulent messages, this gives the attackers around 3,400 compromised victims.
Spear phishing is more sophisticated, often more technical and is potentially more damaging than spray-and-pray phishing. Spear-phishers will research their target or targets using different forms of OSINT (open source intelligence, such as social media), gathering information on the target’s habits, which services they use, their contacts and more. With this information built into the email, the message can appear to come from a trusted source, and will often be convincing and compelling.
It is worth mentioning that since spray and pray phishing is easy to detect, many people believe they cannot be tricked by any phishing. This is dangerous. There is probably no one in the world who could not be spear-phished. Consider the case of the Telegraph journalist who effectively challenged a white hat hacker to break into her computer. He did just that by spear-phishing her with the opportunity for a compelling news story combined with a sense of urgency.
Victims succumb to phishing attacks because of the combination of the structure of the email and the ‘lure’ it contains, and the inclusion of various emotional triggers that elicit the response required by the attacker. We’ll discuss these next as the ‘anatomy of a phish’ and the ‘emotional triggers’. If we understand how we are phished, we will be better equipped to recognize and ignore (or report) a phishing attack when it happens.
We’ll focus on email phishing, as this is both the most common vector for phishing attacks and allows us to analyze the complete anatomy of phishing messages.
The visible email header is the part of the email we can see before we open it. It is the attacker’s first opportunity to catch our attention, but is also the point at which many spray and pray campaigns immediately fail. The intent is to present an interesting subject from a trusted source. A spray and pray example from my own spam folder is from ‘𝐇𝐢𝐠𝐡_𝐏𝐫𝐢𝐨𝐫𝐢𝐭𝐲 nooreply[@]l1i.affpartners.com’. ‘High Priority’ is not a person, but is intended to add a sense of urgency to the email (see ‘emotional triggers’ below), encouraging potential victims to open it before stopping to think. A moment’s closer inspection reveals more holes: ‘l1i.affpartners.com’ doesn’t look like a legitimate domain, and ‘nooreply’ is probably a typo for the more usual ‘noreply’ or ‘no-reply’.
Spear-phishing attacks can be more subtle. If you have a friend or colleague with the email address John.Smith[@]company.com, attackers may message you as John.Smith[@]gmail.com, using a free email service and hoping you don’t notice the different domain.
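This kind of domain substitution can also be caught mechanically. As a minimal sketch (the trusted-domain list and the two-edit threshold are illustrative assumptions, not a production rule), we can compare a sender’s domain against domains we genuinely correspond with, treating a near-miss as more suspicious than a complete stranger:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                # delete ca
                curr[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),   # substitute ca -> cb
            ))
        prev = curr
    return prev[-1]

# Hypothetical allow-list of domains we genuinely correspond with.
TRUSTED_DOMAINS = {"company.com"}

def sender_verdict(address: str) -> str:
    """Classify a sender address as trusted, look-alike, or unknown."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return "trusted"
    # A near-miss (one or two edits from a trusted domain) suggests
    # deliberate imitation rather than a random stranger.
    if any(edit_distance(domain, t) <= 2 for t in TRUSTED_DOMAINS):
        return "look-alike: treat as phishing"
    return "unknown"
```

On this sketch, ‘John.Smith@cornpany.com’ comes back as a look-alike, while a wholly unrelated domain is merely unknown – exactly the distinction a human reviewer would want flagged.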
The subject line of the header is key to a good phish – it is the primary factor in what makes us decide whether to read the email or not. It must be short enough to be easily processed by the target, but needs to contain strong emotional triggers to make itself hard to ignore. Effective phishing subjects will leverage either a sense of threat, urgency, or the prospect of gain for the user. According to research by KnowBe4, the most clicked phishing subject is “Change of Password Required Immediately,” with similar lines accounting for another three of the top 10 most clicked phishing subjects. International disasters also offer compelling subjects, with curiosity, fear and compassion being typical emotional triggers – and the COVID-19 pandemic is a clear illustration.
With all forms of email scam, the body, or content, contains the ‘lure’. Loosely, the lure can be seen as the bait that is used to tempt the phished to take the hook (the payload, see below).
By this stage, the attacker has successfully convinced the target to open and read the email. Now the lure in the email body must convince the victim to click a link or respond. The lure is typically full of emotional triggers designed to engage the victim favorably. Consider this example of a spray-and-pray lure:
Leaving aside that I never take online surveys, the grammatical and spelling errors and odd syntax typical of spray-and-pray attacks make it obvious that this is a phish. The two most obvious emotional triggers are greed (who wouldn’t want a free iPhone?) and urgency (the target is given just 24 hours before this ‘opportunity’ is taken away). Other common phishing lures range from commonplace and plausible (but fraudulent) invoices and delivery notifications to more grandiose legal threats, or simply masquerading as charitable or governmental organizations.
The payload is the crux of the phishing email. The most common payload is a malicious link; this can lead to a malicious or compromised website where a falsified login screen harvests credentials and sends them to the attacker. The link can be in the body of the email or found in an attachment.
Any payload link is usually disguised. In the above example, the ‘click here’ button is the payload. If you hover the mouse cursor over a link, most browsers will display the URL in the bottom left of the screen, allowing you to see the actual destination. To counter this, many phishers use URL shorteners like Bitly to hide the real destination – our example shows as https://t.co/YwTb24fxMI?amp=1. URL obfuscation is always suspicious, so any email containing a shortened or obfuscated URL should probably be treated as phishing until proven otherwise.
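A simple automated version of this check is to flag any link whose host belongs to a known URL-shortening service. A minimal sketch (the list of shortener hosts here is illustrative and far from exhaustive):

```python
from urllib.parse import urlparse

# Illustrative, non-exhaustive list of URL-shortener hosts.
SHORTENER_HOSTS = {"bit.ly", "t.co", "tinyurl.com", "goo.gl", "ow.ly"}

def is_obfuscated(url: str) -> bool:
    """True if the link hides its destination behind a known shortener."""
    host = (urlparse(url).hostname or "").lower()
    return host in SHORTENER_HOSTS

print(is_obfuscated("https://t.co/YwTb24fxMI?amp=1"))  # True: treat as suspect
print(is_obfuscated("https://www.example.com/login"))  # False
```

A real mail filter would go further – following redirects to the final destination, for instance – but even this crude test would have flagged the example above.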
The header, the subject, the lure and the payload form the fundamental structure of a phish, but success or failure rests on the emotional triggers contained within the message. Emotional triggers are there to prompt an immediate, unthinking, knee-jerk reaction, because the more we can think, analyze and consider a phishing message, the less likely it is to succeed. Consequently, it is important to give ourselves time. Even if an email appears both urgent and legitimate, setting a rule to never respond to any email without a small thinking period can go a long way to helping us keep a cool head and see through the emotional manipulation.
The main emotional triggers are listed below, but it is important to remember that any combination of them can be included in a phishing email, and the more sophisticated attacks will make use of them in very subtle ways.
Greed may be the earliest emotional trigger that phishing scams have tried to exploit. The greed trigger goes even further back than the infamous Nigerian Prince scams, but there are still many attempts to capitalize on greed to this day. In late 2019, Microsoft employees were targeted with a phishing campaign purporting to disclose upcoming salary increases. The payload was a link to a fraudulent login screen designed to harvest Microsoft Office login credentials. Two axioms are always important to bear in mind when opening any email: nothing in life is free; and if something seems too good to be true, it probably is.
Urgency is commonly present in phishing. Rather than being an effective emotional trigger by itself, urgency must work in conjunction with other emotional triggers: You have a free gift but must claim it within 24 hours. We have embarrassing information to share with all your contacts if you don’t respond by midnight. If the scammer can remove the victim’s thinking time or push them into a panic state, it drastically increases the chance of successfully tricking the target.
Good mental defenses against urgency are difficult to cultivate because urgency is specifically designed to disrupt those mental defenses. However, it is useful to remember that if you receive an email or a message that fills you with dread, it’s likely that whoever sent it wants you to panic. Counterintuitively, taking feelings of urgency as a sign that it’s time to stop and think carefully can help foil even the most effective phishing campaigns.
Fear can apply in a lot of different situations and contexts. There is often a strong interplay with urgency, especially when it comes to fear of bad consequences if we don’t respond immediately. Fear plus urgency often equals panic, and can be used in legal threats or threats to release intimate personal information.
Fear can also be less visceral – fear of missing an opportunity and fear of being uninformed can be equally compelling emotional triggers. Phishing scams often prey on both fear of danger, and fear of missing out (FOMO).
The COVID-19 pandemic has given rise to a spate of phishing scams that target both of these aspects of fear: the deadly and contagious infection has many of us fearing for our safety, and there is a strong desire for us all to stay informed and up to date on the latest developments – or to obtain one of a very limited supply of non-existent vaccines.
While most emotional triggers exploit our base instincts, our better nature can also be used against us. Any crisis, emergency or disaster tends to cause a spike in phishing emails attempting to capitalize on it. Phishing emails can impersonate a charitable organization or a person in need. The spate of tropical storms over 2018 and 2019 prompted a variety of organizations (including the FCC) to issue official warnings about post-disaster scams, urging users to verify any charity’s credentials carefully. These so-called charities aren’t after your donation, they’re after your bank details.
Curiosity can be a particularly dangerous trigger, as we often don’t consider that ‘just having a look’ can put us in danger. The same principles that apply to clickbait articles online can be used in phishing; any sufficiently sensationalist subject line could be irresistible to our curiosity: ‘you just won’t believe entry number 7!’ In the wake of Kobe Bryant’s death earlier this year, clickbait phishing with subjects like ‘amazing’, ‘shocking’, or ‘never before seen!’ became so widespread that it prompted the Better Business Bureau to issue an official warning to consumers.
Businesses and cybersecurity organizations are always looking for ways to prevent phishing with technology. As much as these technological solutions may help to mitigate phishing and reduce the exposure of users to phishing scams, the continuing prevalence of phishing and the damage it causes mean we have to consider them a failure so far. According to the 2020 DBIR, 22% of all breaches over the past year have involved phishing, while around 80% of all ‘social’ type attacks are phishing messages of some kind.
Browser developers maintain blacklists of known phishing websites which can help prevent users visiting malicious URLs. This is only partially effective, since the average phishing campaign lasts for just 12 minutes – the malicious website changes before it can be added to the blacklist.
Artificial intelligence is also used to scan emails to detect phishing. This can be effective with spray-and-pray phishing, but AI products entail high expenses, and tend to be useful only to companies rather than consumers.
DMARC – short for Domain-based Message Authentication, Reporting & Conformance – is a technology that unequivocally works against phishing, some of the time. We won’t go into the details of the technology (DMARC is built on the top of other technologies, notably SPF and DKIM, that were also designed to counteract phishing), but just explain why it works when it works, and why it doesn’t ultimately help the end-user.
DMARC works against what is known as ‘exact domain phishing’. This is where the phishing email appears to come from the exact, expected, and correct domain (as shown in the ‘From’ line of the header). If DMARC is fully installed by a service provider, the technology confirms that the message has genuinely come from that domain. If not, it assumes phishing and blocks it.
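For illustration, a DMARC policy is simply a DNS TXT record published at a special subdomain. The domain, addresses and policy below are hypothetical, but the record’s shape is standard:

```
; Hypothetical DMARC policy, published as a DNS TXT record at
; the special subdomain _dmarc.example.com:
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
```

Here ‘p=reject’ tells receiving mail servers to block messages that fail the check outright; the weaker ‘p=quarantine’ and monitoring-only ‘p=none’ options are also available, and the ‘rua’ address receives aggregate reports on failures.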
To demonstrate how well this works, consider the UK’s tax authority, HMRC. In 2016, its domain was the 16th most phished in the world. HMRC then implemented DMARC, and has since dropped to number 126. Over this period, DMARC stopped 300 million attempted phishing emails.
However, DMARC can do nothing to stop non-exact domain phishing. This is where the ‘from’ domain in the email header is a look-alike rather than the genuine exact domain. So, for example, I could register the domain ‘hnnrc[.]co.uk’ (or hnnrc[.]uk, hnnrc[.]org.uk or hnnrc[.]me.uk – all of which were available at the time of writing) in the genuine expectation that a percentage of recipients would not notice that hnnrc is not hmrc.
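The hnnrc/hmrc trick relies on visually confusable character sequences. One defensive sketch – loosely inspired by the idea of reducing strings to a ‘visual skeleton’ before comparing them – is shown below; the substitution table is illustrative, not exhaustive:

```python
# Illustrative table of visually confusable sequences (not exhaustive).
CONFUSABLES = [("rn", "m"), ("nn", "m"), ("vv", "w"), ("0", "o"), ("1", "l")]

def skeleton(domain: str) -> str:
    """Reduce a domain to a 'visual skeleton' for comparison."""
    d = domain.lower()
    for lookalike, genuine in CONFUSABLES:
        d = d.replace(lookalike, genuine)
    return d

def looks_like(candidate: str, genuine: str) -> bool:
    """True if candidate visually imitates genuine without matching it."""
    return candidate != genuine and skeleton(candidate) == skeleton(genuine)

print(looks_like("hnnrc.co.uk", "hmrc.co.uk"))  # True: a disguised look-alike
```

Real-world tooling uses far larger confusable tables (including Unicode homoglyphs), but even this toy version catches the exact substitution described above.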
The second weakness in DMARC is that only a tiny percentage of firms have implemented it. But the real weakness is that the end-user has no way of knowing whether a received email has been DMARC-checked or not. Consequently, all received emails need to be considered suspect, whether or not DMARC is involved.
There is an attempt to solve this last problem with the introduction of yet another technology: BIMI, or Brand Indicators for Message Identification. BIMI only works where DMARC has been fully and correctly implemented. If the email service provider knows through DMARC that the email is genuine, and if the sending domain has implemented BIMI, the email service will display the domain’s logo alongside the message in your inbox. So, if you see the expected sender logo next to a message, you can be confident it is genuine and not a phishing email.
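Like DMARC, BIMI is published as a DNS TXT record, this time pointing at the brand’s logo. Again, the domain and logo URL below are hypothetical; only the record’s shape is standard:

```
; Hypothetical BIMI record: a DNS TXT record at default._bimi.example.com
; pointing at the brand's logo in SVG format. Receivers only honour it
; when the message also passes DMARC.
default._bimi.example.com.  IN  TXT  "v=BIMI1; l=https://example.com/logo.svg"
```

The dependency is explicit in the design: without a passing DMARC policy on the sending domain, receivers simply ignore the BIMI record.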
But again, only a tiny percentage of organizations have fully implemented DMARC, and only a tiny percentage of those have implemented BIMI. It is not likely to happen, but exact domain phishing could be eliminated if DMARC and BIMI were required email standards. In the meantime, from the end-user’s perspective, DMARC does nothing to solve the phishing problem even though it works.
We are much more likely to be fooled by phishing than we think. For example, in a survey conducted by PhishMe last year, only 10.4% of respondents believed that fear was an effective motivating factor in opening an email. However, a simulated phishing email threatening the recipient with a legal complaint was opened by 44% of participants. Just 7.8% believed they would be taken in by the prospect of an opportunity, but a spoof phish purporting that the targets were eligible for medical insurance was clicked by 39.2% of participants.
It is often said that the weakest link in security is the user, but with enough awareness and understanding, this doesn’t need to be the case. Technology does not yet have a reliable solution to phishing, and these attacks cannot be dealt with by anti-malware or general cybersecurity tools because of their psychological nature. If we can recognize phishing attacks by understanding the anatomy of a phish, stay calm and vigilant, and not allow our emotional buttons to be pushed, we users can become the strongest link in security.