Tackling Fake Traffic from Artificial Engagement

Advances in technology have empowered marketers to perform far more targeted activity and in-depth analysis than ever before.

However, artificial engagement is increasingly becoming the elephant in the room when planning digital media spend and reporting on its impact. Brands are at real risk of having their reputations damaged by being associated with - or even accidentally funding - the organised crime that is digital ad fraud.

In an ever-changing industry, it is our responsibility to identify risks and opportunities and to provide direction and guidance, and we believe we should now start to address this issue. It may never be practical or possible to reduce non-human, or bot, traffic on digital media campaigns to zero, but by taking sensible steps to protect our digital marketing budgets we can all play a part in reducing fraudulent activity.

Determined to understand the scale of the problem, and to explore how marketers can future-proof against it whilst also protecting themselves and their customers, the DMA North Council have spent the past 18 months tackling the question: how do we get the benefits of the good automated activity and mitigate against the bad?

Online budgets for digital advertising that attract non-human activity are predicted to exceed £250 billion by 2020, with more than 21 trillion ads paid for but never seen by humans each year, and digital marketing campaigns subject to fraud rates of over 50%.

With Google and Facebook forecast to generate £186.4bn and £56.7bn respectively in 2020, together taking 71% of the £16bn UK online market, a non-human click percentage of 50% represents a huge lost budget. The market size tempts fraudulent organisations into creating technologies to make money from advertisers, fuelling a growing problem within the digital advertising industry.

Many digital transactions are not carried out by humans but by automated software tools (“bots”) masquerading as humans for a variety of reasons, some legitimate and some not. This non-human digital activity drives a global problem of online advertising fraud; for marketers, it continues to have a huge negative impact on the accurate planning, undertaking, and reporting of digital activity.

As reported in Search Engine Watch in February 2019, “$7.2 billion was lost to click fraud between 2016 and 2018”. Cybersecurity company Cheq were quoted in The Drum in June 2019 as expecting “the direct cost [of click fraud] to advertisers to hit $26bn in 2020, $29bn in 2021, and $32bn in 2022”, noting that “this burden lands disproportionately on the shoulders of small firms that don’t have as much money as established brands to avoid ad fraud”. A round-up blog post by DMA North Council member Stewart Boutcher, in March 2020, notes that “a study by Forrester found 69% of brands spending $1 million per month reported that at least 20% of their budgets were being lost to digital ad fraud”.

However, all is not lost: technology solutions are available to help reduce this negative impact, identify the areas where it is most prevalent, and mitigate losses in digital marketing spend.

Any organisation carrying out digital marketing in any form should be aware of the impact of artificial engagement and the potential mitigations against it. If you are carrying out any paid media activity, such as the following, you should be especially aware:

  • PPC (ads on Google, Bing)
  • Social media promoted posts
  • Programmatic advertising
  • Other paid-for media, such as display banner ads
  • Influencer marketing

How does it all work?

Imagine you are a business looking to improve your website traffic. The easiest way to do that is to place ads for your business on an ad network. The online ads promote your business and allow click-throughs directly to your website, so you pay both for the adverts seen and for the clicks through to your site.

So now you are paying for traffic from the ad network and getting many more visitors. But what if we told you that 30% of the traffic you have paid for isn't from real users? That would mean almost a third of your ad spend budget is being wasted.
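
To make that arithmetic concrete, here is a minimal sketch; the spend and bot-share figures are hypothetical, purely for illustration:

```python
# Illustrative arithmetic only: how much budget is wasted if a share of
# paid clicks comes from bots rather than real users.

def wasted_spend(total_spend: float, bot_share: float) -> float:
    """Return the portion of ad spend consumed by non-human traffic."""
    return total_spend * bot_share

# Hypothetical figures: £10,000 of monthly ad spend, 30% bot traffic.
spend = 10_000.00
bot_share = 0.30

print(f"Wasted: £{wasted_spend(spend, bot_share):,.2f} of £{spend:,.2f}")
# Wasted: £3,000.00 of £10,000.00
```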

The question you are hopefully asking yourself is: why would anybody bother creating a bot to follow an ad to my website? A single bot makes no sense. But imagine a thousand bots, all accessing different websites. Put the actions of all these bots together and they start to look like real human beings surfing the web.

Now the bot owner has many bots that look like real people, which can then be directed to a fake website. This fake website now looks as if it is generating a lot of traffic from real people when in fact it is all fake. And this is where the money is generated.

Advertisers place ads on the fake website based on the traffic, generated by bots, that it appears to be receiving. The ad network that serves the ads pays the bot owner for the ads displayed.

This example covers one website. With cloud computing, it is very easy to scale this to tens of thousands of fake websites.
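
As a rough illustration of why the scheme scales, the sketch below multiplies out some entirely hypothetical figures for sites, bot visits, and ad payouts:

```python
# A rough sketch of why this scales: fraud revenue grows with the number
# of fake sites and the bot traffic each one receives. All figures below
# are hypothetical, purely to illustrate the economics described above.

fake_sites = 10_000                # fraudster-controlled websites
visits_per_site_per_day = 5_000    # bot "visitors" per site per day
ads_per_visit = 3                  # ad impressions served per bot visit
cpm = 2.00                         # assumed payout per 1,000 impressions (£)

daily_impressions = fake_sites * visits_per_site_per_day * ads_per_visit
daily_revenue = daily_impressions / 1_000 * cpm

print(f"{daily_impressions:,} fraudulent impressions per day")
print(f"about £{daily_revenue:,.0f} paid to the bot owner per day")
```

Even with modest per-site assumptions, the totals mount quickly, which is exactly what makes cloud-scale fraud attractive.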

We are all responsible for:

  1. Holding ourselves accountable for the truthful representation of data
  2. Avoiding wasting money on artificial engagement - awareness of the problem allows for accountability and better auditing of results, in turn reducing budget waste
  3. Stopping the funding of fraudulent activity

Who is perpetrating this and what can be done about it?

Much non-human activity is fraudulent, run by illegal organisations who funnel the proceeds into organised crime, including money laundering, human trafficking, and drugs. Reducing the income available to these organisations by substantially cutting down malicious non-human transactions will have clear positive financial and societal impacts.

Recent research, “Internet advertising: Reliability, Dilemmas, and Possible Directions”, by Huddersfield University, UK, in conjunction with Bin Faisal University, UAE, and technology services organisation Beaconsoft Ltd, looks at the taxonomy of click fraud attacks, the main click fraud perpetrators, contemporary countermeasure techniques, and the viability of data mining and machine learning approaches for detecting click fraud:

“Several methods could be employed for detecting ad bots. Most current ad botnet detection techniques typically depend on Deep Packet Inspection (DPI) by analyzing the packet payload (contents)...However, this technique might not deliver an in-depth analysis of all acquired traffic...mitigating ad bots is still an issue of on-going attention for academic research as well as the professional community.”

This makes the point that the detection of bot traffic is very complex, and that many players are involved in trying to solve it.
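
For a sense of what even the crudest detection involves, here is a deliberately simplistic sketch. The user-agent markers and the rate threshold are assumptions for illustration only:

```python
# A deliberately simplistic heuristic for flagging suspect traffic in a
# web server log. Real bot detection (as the research above notes) is far
# more complex; this only illustrates the kind of signals involved.

from collections import Counter

# Assumed markers; real fraud bots routinely spoof their user agents.
KNOWN_BOT_MARKERS = ("bot", "crawler", "spider", "headless")

def looks_like_bot(user_agent: str, requests_per_minute: int) -> bool:
    """Flag a visitor on two crude signals: a bot-like user-agent
    string, or an inhuman request rate (arbitrary threshold)."""
    ua = user_agent.lower()
    if any(marker in ua for marker in KNOWN_BOT_MARKERS):
        return True
    return requests_per_minute > 60

# Hypothetical log entries: (user_agent, requests_per_minute)
visitors = [
    ("Mozilla/5.0 (Windows NT 10.0; Win64; x64)", 4),
    ("HeadlessChrome/120.0", 2),
    ("Mozilla/5.0 (compatible; AhrefsBot/7.0)", 15),
    ("Mozilla/5.0 (iPhone; CPU iPhone OS 17_0)", 120),
]

verdicts = Counter(looks_like_bot(ua, rpm) for ua, rpm in visitors)
print(f"suspect: {verdicts[True]}, probably human: {verdicts[False]}")
# suspect: 3, probably human: 1
```

In practice, sophisticated fraud bots spoof user agents and pace their requests to look human, so heuristics like this catch only the crudest traffic - hence the layered, specialist approaches the research community describes.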

What can you do about this issue?

Ask yourself these questions:

  1. What percentage of traffic on the Internet do you think is fake?

A conservative estimate is that 40% of all internet traffic originates from non-human sources. This is nothing new either. Fake traffic has been a problem for the last decade.

  2. What percentage of traffic on your digital media campaigns do you think might be fake?

If the answer is below 15%, then you need to be asking how this level has been achieved and what measures are in place to validate it. For example, P&G reduced their media waste by 20% with a complete rework of their practices - see “5 Actions P&G is taking to improve the media supply chain” (pg.co.uk). Is the same true for you?

  3. Who is responsible for monitoring and preventing fake traffic in your organisation?

Fake or fraudulent traffic is the problem that nobody likes to talk about. If there is nothing in place to monitor for and mitigate against bot traffic, then a large percentage of your traffic could be fraudulent, and your budget wasted on traffic that could never achieve your commercial aims.

In the past, Google has denied all responsibility for fake traffic, saying it is simply a middleman connecting online marketers with websites through its ad-brokering system. Facebook now “no longer accepts 3rd party metrics” as proof of invalid traffic, relying solely on its own numbers despite many documented issues with invalid accounts and bot traffic engaging with Facebook ads.

Whilst you are never going to solve the problem of artificial engagement on your own, there are steps you can take by being aware of the issue and making sure that your internal team and your digital marketing agency, if you use one, are also aware.

Understand that you are not going to be able to completely remove all bots from your campaign traffic, but you and your agency and team can take steps to reduce the impact on your budget.

Questions to ask your agency or internal team before starting a paid digital campaign:

  1. Do you currently monitor digital media campaigns for fraudulent traffic - which is to say engagement with promoted posts, shares and visits to your client’s website by bots?
  2. Do you know the average percentage of bot engagement for a given channel and campaign demographic?
  3. What do you use to monitor for invalid traffic and how do you attempt to prevent bots?
  4. What differences do you see between the different channels in terms of valid vs invalid traffic? (A minimal reporting sketch follows this list.)
  5. Do you report on the volume of suspected fraudulent traffic in your reporting to clients?
  6. Are your reports audited; what tools do you use for this?
  7. What steps do you currently take, and what plans do you have to prevent bot engagement with paid media (by preventing them from seeing & clicking on promoted posts & paid ads)?
  8. If a client identifies possible fraudulent traffic, do you have procedures in place to recover the costs?
  9. What pricing models does your agency offer to reduce paying for fake traffic, e.g. cost per lead or per order?
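
As an illustration of the per-channel transparency questions 4 and 5 are asking for, here is a minimal sketch; the channel names and figures are hypothetical:

```python
# A minimal per-channel invalid-traffic report of the kind an agency
# could include in client reporting. All figures are hypothetical.

campaign_traffic = {
    # channel: (total clicks paid for, clicks flagged as invalid)
    "search_ppc":      (12_400, 1_900),
    "social_promoted": (8_750, 2_600),
    "display_banners": (15_300, 7_200),
}

print(f"{'channel':<18}{'clicks':>8}{'invalid':>9}{'invalid %':>11}")
for channel, (clicks, invalid) in campaign_traffic.items():
    pct = invalid / clicks * 100
    print(f"{channel:<18}{clicks:>8,}{invalid:>9,}{pct:>10.1f}%")
```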

If you work in a digital agency:

As an external agency, you need to be aware of the issue of bot traffic and do all you can to reduce the impact on your clients' marketing campaigns, results, and spend. This includes monitoring the amount of invalid traffic you see across different channels for client campaigns, understanding which channels and which content appear to attract more bots, taking steps to reduce bot engagement, and being transparent in your reporting at every level.

Who is responsible for dealing with this within your organisation?

Assign a stakeholder inside your business to determine what impact artificial engagement has on your business and to take control of mitigating it as much as possible. Speak internally across departments - marketing, IT, finance, and the C-suite - to get buy-in on the scale of the problem and on how to address it.

About the DMA North Council

The Data & Marketing Association (DMA) comprises the DMA, Institute of Data & Marketing (IDM) and DMA Talent. The DMA champions the way things should be done, through a rich fusion of technology, diverse talent, creativity, insight - underpinned by our customer-focused principles. By working responsibly, sustainably, and creatively, together we will drive the data and marketing industry forward to meet the needs of people today and tomorrow.

The DMA North champions the interests of the data & marketing community throughout the North. With a particular focus on community growth and engagement, the aim of the Council is to connect marketers with guidance and best practice and educate through networking and knowledge sharing. The council maintains strong links with key business influencers and other local trade bodies and associations to promote and support competitive advantages for DMA members in the North.

The primary authors of this document are:

  • Sara Watts, Chair of the DMA North Council and Founding Director of Sheffield-based DRMi, one of the UK's largest providers of direct marketing solutions.
  • Simon Hill, Co-Chair of the DMA North Council and Founding Director of Extravision, a technical, digital agency based in MediaCityUK, Manchester.
  • Stewart Boutcher, Member of the DMA North Council, Chair of the DMA Leeds Hub and Founding Director and CTO of Beaconsoft Ltd, a UK-based company specialising in digital campaign intelligence.
  • Jeff McCarthy, Member of the DMA North Council and Senior Lecturer at Manchester Metropolitan University.

Acknowledgements

With thanks to everyone who has contributed to this project and attended the round tables and Zoom meetings, especially:

  • Adam Lee, User Conversion
  • Mark Greenwood, Netacea
  • Sara Simone, Digital Oracles
  • Jon Davies, Chatter
  • Steve Kuncewicz, BLM Law
  • Lucy Nolan, DotDigital
  • Sophie Palmer, Co-operative Bank
  • Julian Tait, Open Data Manchester
  • Rich Ashby, Dotkumo
  • Mike Townend, Beacon

How to get involved

Following the publication of this document, the DMA North Council will continue to raise awareness of the issues detailed herein by speaking at webinars and seminars, and from there develop engagement with a wider audience to explore next steps for this project.

If this is a project you would like to stay in touch with, or you can offer us the opportunity to promote this work to a wider audience, please engage with us on the DMA North LinkedIn page or contact the DMA North Community Manager, Anna Lancashire, on LinkedIn or at anna.lancashire@dma.org.uk

Previous articles on this topic:

Future Marketing in a Fake World

Is Artificial Engagement Impacting Your Marketing?

Bots: Do You Know Why They’re Important?
