John Phillips | January 19, 2023 | Digital Trust

Smishing with a fake org ID – a risk to customers, organisations, and their directors

Banks and other financial institutions use a number of channels to communicate with their customers, including post, email, phone (voice) and text (short message service, or SMS). Each of these has different qualities of security and vulnerability and hence trustworthiness. In some instances the channel may be used to provide an alert, notification, offer, or a second factor of authentication for confirmation of a high value transaction.

Well-funded and with access to highly skilled resources, a global criminal industry is dedicated to probing and exploiting cyber weaknesses at the organisational and individual level, on every channel and attack surface.

In response, organisations are having to defend themselves against constant technical and social cyber attack.

They also have a duty to protect their customers.

In recent times, data breaches and ransomware attacks have been getting bigger and more frequent, and governments are becoming increasingly punitive in their response to what is too often seen as corporate negligence – reflecting the general public’s view and frustration.

This article focuses on one example of a weak exploitable link with particularly dangerous qualities: an SMS phishing (“smishing”) attack that injects a fake message into a stream of SMS messages from the bank to the customer, appearing to all intents and purposes as if it came from the bank.

The risk this represents is greater than that of “the usual” email attacks, where the originator’s email address can be seen not to belong to the organisation it claims to come from. In this case, the recipient cannot tell that their bank did not send them the message.

The case study below highlights the risk that this presents to customers, organisations, and the executives who work for them. It also offers a way in which we might prevent this type of problem with existing, open-source, open-standards-based technology.

A smishing case study

The following example describes a real SMS message received by a customer of one of the major banks in Australia. The customer was using a recent model Android phone, updated with all available software patches, and had a contract with one of the major telecommunication providers in the country [the person who received the SMS is also one of the authors of this article].

On the left of the picture below is a screenshot of messages received on their mobile phone from March to May 2022. Annotation has been added around the screenshot to explain the content. The name of the bank and other identifying attributes have been obfuscated.

The phone presents these messages as a single stream from the same originating source, the bank, because the same SMS short code is used as the sender of each message (SMS short codes are short, easy-to-remember numbers that organisations use to send messages to their customers). Clicking on the link in the fake message would have taken the customer into the clutches of the fake organisation, and because the link is an https “secure” link, the phone’s built-in protections may not come into play.

Other than the suspicious wording of the message and the URL (a close approximation of the bank’s domain), there is no way for the recipient to tell that the originator of the fake message is not their bank. All of these messages look like they are from the bank.

It is easy to see how a customer might be deceived into thinking that the message came from their bank, particularly if they are time poor and not paying close attention – which describes most people most of the time. Imagine someone “injecting” fake letters into your post-box using the letterhead and address details of the organisation you bank with, plus anything else they can find out, or guess, about you. Heck, they don’t even need to be that smart: they can just smash out 1,000 messages and expect that a few will reach customers who are in exactly the situation the message describes.

With many people’s mobile phone numbers and, in some cases, bank details compromised by recent Australian data breaches, it seems reasonable to assume that this sort of attack goes from a possible risk to a near certainty. So we should ask ourselves two questions:

  1. Who will the regulator blame for the spike in consumer fraud that this creates? The bank? The telco? Both?
  2. Who within their own teams will the organisations hold accountable?

We should point out that the bank is aware of the risk of phishing over SMS channels, and even offers examples on its website of the type of message that might be received, showing how such a message can appear to the customer to come from the bank.

The image below comes directly from the bank’s own web pages, under the title of “Fake [BANK] SMS messages”:

Knowing that a criminal party can inject messages that appear to come from the bank into the stream of SMS messages received by their customers raises several questions, including:

  1. How can customers really know that the communication they receive originates from their bank? [this could be asked of ANY channel, of course]
  2. Why is the bank using an insecure channel to communicate sensitive information and initiate banking interactions?
  3. Is the warning sufficient to protect customers, reduce liability for the organisation, and deflect government and media criticism?

Our guess on point 3 is “unlikely”.

Regaining Trust

To reduce this risk, we need a way for the organisation to prove that it is the issuer of any communication sent to the customer. This also balances the trust relationship: we are asked to authenticate ourselves to organisations, so we should ask the same of them.

Our proposal is that, in order to meet their duty to protect their customers from fraud, organisations must authenticate themselves to their customers.

This demands more than colourful logos, animations, and assertions; it demands cryptographic proofs that are robust, easy to use, and accessible to both the institution and its customers.

Thankfully, there are ways in which this can happen now, using open-source software and open standards to authenticate and verify the issuers of communications.
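As an illustration of what this could look like (a minimal sketch, not a description of any bank’s actual implementation), assume the bank publishes an Ed25519 public key somewhere the customer’s app can resolve and trust (for example, in a DID document or at a well-known URL) and signs every outbound message. The customer’s app would then only present a message as coming from the bank if the signature verifies against that published key. The sketch below uses the open-source Python cryptography library; the function name verify_bank_message and the message content are illustrative.

```python
# Minimal sketch: signing and verifying a bank message with Ed25519.
# Assumes the bank distributes its public key out-of-band (e.g. via a DID
# document or a well-known URL) and the customer's app has already resolved it.
# Uses the open-source "cryptography" package (pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)
from cryptography.exceptions import InvalidSignature

# --- Bank side (illustrative): sign the message before sending ---
bank_private_key = Ed25519PrivateKey.generate()   # held securely by the bank
bank_public_key = bank_private_key.public_key()   # published for customers to resolve

message = b"Your one-time code is 123456. It expires in 5 minutes."
signature = bank_private_key.sign(message)        # sent alongside the message

# --- Customer side (illustrative): verify before trusting the message ---
def verify_bank_message(public_key: Ed25519PublicKey,
                        message: bytes,
                        signature: bytes) -> bool:
    """Return True only if the signature was produced with the bank's private key."""
    try:
        public_key.verify(signature, message)
        return True
    except InvalidSignature:
        return False

# A genuine message verifies; an injected message does not.
assert verify_bank_message(bank_public_key, message, signature)
assert not verify_bank_message(bank_public_key, b"Click https://evil.example", signature)
```

The verification itself is straightforward; the practical work lies in publishing and governing the organisation’s keys (for example, via decentralised identifiers and verifiable credentials) so that customers’ devices can resolve and trust them.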

Two final questions then:

  1. Do organisations care enough about their customers to actively offer them better protection rather than just “customer beware” advice?
  2. Do executives working for those organisations care enough about the risks to their company and themselves if they don’t?

About the Author
John Phillips
John believes that there are better models for digital trust for people, organisations, and things on a global scale. He sees verifiable credentials, trustworthy communication, and trustworthy identifiers as a disruptive force for change for good, and wants to be a catalyst for that change, helping people and organisations navigate their way to a better future.
