Distil’s 6th Bad Bot Report Reveals More Sophisticated Threats

For the last six years, the bot detection company Distil has been releasing an annual bad bot report. The Distil Research Lab analyzed thousands of website domains across 2018, tracking hundreds of billions of bad bot requests. Distil’s report is unique within the security industry for its focus on bad bot activity at the application layer (layer 7 of the OSI model), as opposed to volumetric DDoS attacks, which typically manipulate lower-level network protocols.

What are Bots?

Bad bots scrape data from websites without their owners’ permission in order to reuse it and gain a competitive edge. The most serious uses of bad bots have a criminal element: hackers, fraudsters and rival companies deploy them to perform many types of attack, from account takeovers to competitive data mining to digital ad fraud.

Good bots, meanwhile, help guarantee that prospective customers can find digital businesses and gain access to their products. Through the indexing of search engine crawlers like GoogleBot and Bingbot, for instance, people can see their queries matched with relevant sets of websites.

All kinds of bots interact with applications in much the same way as a real user, making them challenging to identify and prevent. Being able to distinguish bot traffic (and good from bad bots within it) from real traffic is crucial for businesses to make informed decisions.

The overall findings of the 2019 Distil Bad Bot Report reveal that bot attacks are becoming ever more sophisticated, as attackers modify their techniques to evade detection and defeat current defense tactics.

Bot Traffic by Percentage

The Distil report contains a number of alarming statistics. Human traffic increased by 7.5% to 62.1% across 2018. Nonetheless, bad bots account for a surprising one in five website requests (20.4% of all traffic), while good bots make up 17.5%. Even good bots can be bad news for advertisers: they generate ad impressions, but the resulting clicks never convert in the sales funnel, dragging down campaign performance.

Almost three quarters (73.6%) of bad bots are classified as Advanced Persistent Bots (APBs); these are the hardest to detect due to their impressive ability to mimic human behavior. APBs cycle through random IP addresses, enter via anonymous proxies and vary their behavior mid-attack. They are known as “low and slow” because they carry out significant assaults using relatively few requests, even delaying those requests to stay below request-rate limits. This reduces the “noise” generated by typical bad bot campaigns.
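To see why “low and slow” traffic slips past request-rate defenses, consider a minimal sketch of a sliding-window rate limiter (the threshold and window values here are invented for illustration, not taken from the report): a bot that spaces its requests out stays under the limit and is indistinguishable, by rate alone, from a human.

```python
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Block a client that exceeds `max_requests` within `window` seconds."""
    def __init__(self, max_requests=100, window=60.0):
        self.max_requests = max_requests
        self.window = window
        self.history = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id, now):
        q = self.history[client_id]
        # Discard timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_requests:
            return False  # rate limit exceeded
        q.append(now)
        return True

limiter = SlidingWindowLimiter(max_requests=100, window=60.0)

# A "noisy" bot fires 200 requests within one second: half get blocked.
noisy_blocked = sum(not limiter.allow("noisy-bot", t * 0.005) for t in range(200))

# A low-and-slow APB issues one request per second -- far under the
# limit -- and is never blocked despite making 500 requests in total.
slow_blocked = sum(not limiter.allow("slow-bot", float(t)) for t in range(500))

print(noisy_blocked)  # 100 -- the noisy bot's excess requests are rejected
print(slow_blocked)   # 0 -- the slow bot sails through
```

The limiter is doing its job; the evasion works because rate alone is the wrong signal for APBs, which is why Distil emphasizes behavioral detection.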

The top five industries affected by bad bot traffic are:

  1. Financial – 42.2%
  2. Education – 39.7%
  3. Ticketing – 39.3%
  4. IT and Services – 34.4%
  5. Marketing & Advertising – 33.3%

Nearly half (49.9%) of bad bots masquerade as Chrome, the most popular browser for masking an attacker’s identity; Firefox, Safari and Internet Explorer are also impersonated. 73.6% of bad bots hide in data centers, a figure down from 82.7% the previous year, although still very high. 18.0% of bad bots used the Amazon ISP, making it the largest single source of bad bot traffic in 2018. Last year’s top ISP, OVH Hosting, fell to fourth place with 3.1% of bad bot traffic in 2018, in contrast to 11.8% the previous year. Digital Ocean and Comcast Cable rounded out the largest sources of bad bot traffic.
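The Chrome figure reflects nothing more than a spoofed User-Agent header, which any HTTP client can set freely. A minimal sketch (the header string and allow-list below are hypothetical examples, not from the report) of why filtering on the User-Agent alone is ineffective:

```python
import urllib.request

# Any client can claim to be Chrome simply by setting the header.
CHROME_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/71.0.3578.98 Safari/537.36")

# A bot's request, dressed up as an ordinary Chrome browser.
req = urllib.request.Request("https://example.com/",
                             headers={"User-Agent": CHROME_UA})

def looks_like_browser(user_agent):
    """A naive server-side allow-list check that a spoofing bot passes."""
    return any(name in user_agent for name in ("Chrome", "Firefox", "Safari"))

print(looks_like_browser(req.get_header("User-agent")))  # True -- the bot passes
```

Because the header is attacker-controlled, it can support only the weakest tier of bot filtering.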

And bad bots are a worldwide problem. Over half (53.4%) the global bot traffic originates from the United States (Distil calls it “the bad bot superpower”). However, Russia and Ukraine make up almost half (48.2%) of country-specific IP block requests with the U.S. trailing in this arena at just 6.6%.

Money is the Key Motivating Factor

Despite news reports of bots using social media to influence the outcome of elections, the primary motivation behind most bad bot attacks is the pursuit of money; perhaps it is not surprising, then, that the financial sector tops the list of most-attacked industries. Financial services tend to suffer most from bad bots attempting to access user accounts through credential stuffing attacks. The scale of this problem is clear from Distil’s estimate that hedge funds could pay up to $2B across 2020 to gather and store data scraped from websites.
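Credential stuffing replays breached username/password pairs against login endpoints at scale. One common, though incomplete, server-side signal is a single source attempting logins for many distinct accounts; the sketch below uses an invented threshold and invented sample data purely for illustration:

```python
from collections import defaultdict

def flag_stuffing_sources(login_attempts, max_distinct_users=5):
    """Flag source IPs that attempt logins for an unusually large number
    of distinct accounts -- a rough credential-stuffing heuristic.
    `login_attempts` is an iterable of (source_ip, username) pairs."""
    users_per_ip = defaultdict(set)
    for ip, user in login_attempts:
        users_per_ip[ip].add(user)
    return {ip for ip, users in users_per_ip.items()
            if len(users) > max_distinct_users}

# A stuffing bot cycles through 50 breached usernames from one address,
# while a legitimate user retries a forgotten password a few times.
attempts = [("10.0.0.1", f"user{i}") for i in range(50)]
attempts += [("192.168.1.7", "alice")] * 3

print(flag_stuffing_sources(attempts))  # {'10.0.0.1'}
```

Real defenses combine many such signals, since APBs distribute attempts across the rotating IP pools described earlier.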

Another sector highly vulnerable to bots is ticketing. Event ticketing sees bad bots used to check ticket availability and buy up seats for resale on secondary markets at higher prices. Once again, criminals also try to access user accounts to steal valuable tickets and credit card information.

Education entered the study for the first time this year; academic institutions are vulnerable to bots that steal research papers, scrape class availability and attempt to access user accounts.

Beyond the sectors mentioned above, airlines and eCommerce are also particularly vulnerable.

Airlines are vulnerable to bots scraping content such as flight information, pricing and seat availability, while criminals also target user accounts to glean credit card information and steal valuable personal details. Meanwhile, the eCommerce sector sees bad bots aggressively scraping pricing and inventory information, stealing gift card balances, and accessing user accounts and credit card information.

Over 14 billion credentials have been stolen industry-wide since 2013. Each new breach increases the pool of stolen credentials available to attackers, and with it the volume of bad bot traffic.

Conclusion

In Distil’s press release announcing the new report, CEO Tiffany Olson Kleemann summed up the bad bot problem: “Bot operators and bot defenders are playing an incessant game of cat and mouse, and techniques used today, such as mimicking mouse movements, are more human-like than ever before.”

Olson added, “As sophistication strengthens, so too does the breadth of industries impacted by bad bots. While bot activity on industries like airlines and ticketing are well-documented, no organization – large or small, public or private – is immune. When critical online activity, like voter registration, can be compromised as a result of bad bot activity, it no longer becomes a challenge to tackle tomorrow. Now is the time to understand what bots are capable of and now is the time to act.”
