
Ex-Sneaker Botter Turns Cybersecurity Expert To Protect E-Tailers


A former “sneaker botter” from Australia who for years programmed bots to exploit e-commerce platforms now uses that experience as a data scientist and cyberthreat analyst at Arkose Labs, where he combats bot attacks that raid merchants’ websites and works to prevent account takeover (ATO) attacks.

The term sneaker botter originated with the practice of using sophisticated software to quickly buy up limited-edition inventories of major brands like Nike and Adidas online for resale at a higher price. The practice later expanded into snatching up concert tickets and other high-demand products sold on e-commerce platforms.

Mitch Davie is now a recognized global leader in bot management and account security. A friend introduced him to bot programming about eight years ago. That group was among the first in Australia to employ code automation techniques on e-commerce sites.

However, he never crossed over the line into fraudulently using stolen credentials to make purchases. Essentially, if the bot user commits no fraud, using bots is not illegal, he offered.

“We were not using other people’s stolen credit card details. We used our own money and had the products shipped to our own addresses. We were just making the purchases a lot quicker than other shoppers could,” Davie told the E-Commerce Times.

A few years ago, Davie decided to use his programming skills to improve cybersecurity outcomes and protect e-commerce platforms. That came as he changed his focus to raising a family and working in a career that helped many more people.

“Instead of just attacking a couple of websites, now I am protecting sort of 50-plus websites. So that is a good feeling,” he said.

Botters Attack Various Industries

The concept of automating online purchases has not gone away, according to Ashish Jain, CPO/CTO at Arkose Labs. Although automating bulk purchases using bots is not illegal [in certain jurisdictions], some attackers use them to obtain consumers’ credentials to carry out fraudulent purchases.

Bot attackers can also take over consumer accounts on e-commerce sites and create false accounts to send purchases to their own addresses. Jain is familiar with such practices from his time working at eBay validating user identity and handling risk and trust assessments for that commerce platform.

“If you look across the traffic on the internet, there are multiple reports and sites, including our own data, that 40% of the traffic you can see on the website would essentially be bots,” Jain told the E-Commerce Times.

The proportion of bot traffic depends on the specific vertical, and the use cases differ across e-commerce, banking, and the tech industry, he added.

“There is this fine line in between. At what point do you abuse the system? At what point do you completely become a fraud? I think this again depends on a case-by-case basis,” Jain questioned.

It is very easy to cross the line. If the terms of service state that scraping user information is not allowed, then using a bot to scrape it is considered illegal, he offered.

Legal vs. Illegal Bot Practices

Other situations exist that rely on bot automation to abuse the e-commerce system. One is making returns for profit. If you buy an item intending to keep it, a return is legitimate.

If you do that repeatedly and make a practice of it, it becomes abuse. Your intent, essentially, is to defraud the company, Jain explained.

Another form of illegal bot use involves payment fraud. Attackers might use bots to obtain a list of stolen credit cards or other financial data, he continued. They then use that stolen information to buy and ship items. That is certainly illegal. When a bad actor operates a bot for the sole purpose of doing financial damage to an entity, the activity falls into an unlawful category.

The key difference in determining bot usage lies in whether the activity constitutes fraudulent behavior or legitimate stockpiling, he explained. It’s crucial to assess whether the bot is simply automating tasks or being used for fraud. Additionally, an agreement between the entity using the bot and the website owner from which the data is being gathered is a significant factor in this evaluation.

An example would be an agreement between Reddit and Google to let Google use the gathered data to build large language models (LLMs) to train Google AI. According to Jain, that is considered a good bot. However, he pointed to bot activity originating in China as an example of bad bot usage.

“We have found multiple entities within China trying to do the exact same thing. Let’s just say on OpenAI, where they are trying to scrape the system or use the APIs to get more data without having any agreement or payment terms with OpenAI,” he clarified.

Staying Ahead of Bot Threats

According to Davie, cybersecurity firms like Arkose Labs specialize in advanced defensive measures that protect e-commerce sites from bot activity, relying on highly advanced detection technology that is constantly updated.

“We basically monitor everything the attackers do. We are able to understand how they attack and why. That allows us to improve our detection methods, improve our captures, and stay on top of the attacks,” he said.

Bot attacks are an ever-evolving threat that spans many different industries. When Arkose mitigates an attack scenario in one sector, attackers hop to a different industry or platform.

“It flows throughout as a cat-and-mouse game. Currently, the attacks are the highest they have ever been, but they are also the most well mitigated,” Davie revealed.

Always Looking for Attack Signals

Jain, of course, could not divulge the company’s defensive secret sauce. However, he identified it as leveraging the different signals observable on the e-commerce servers. These signals fall into two categories: active and passive.

Active signals have an impact on the end user; passive signals run behind the scenes.

“A very common example of when you can detect a bot or a volumetric activity is when you look into the passive signals, such as the Internet Protocol addresses, or IPs, device fingerprinting, where they are coming from, or the behavioral biometrics,” he said.

For instance, look for behavioral information. If you see someone trying to log in on an app but notice no mouse movements, it indicates that the user on the other side of the login screen is likely a bot or a script.

Additionally, IT teams should check lists of known bad IP addresses. Or, if they notice a high volume of requests, such as a million requests within 30 minutes from an IP address associated with a data center, it’s a strong indicator of bot activity.

“That does not seem like a normal behavior where people like you and me are trying to log in two times in an hour from a home IP address,” explained Jain.
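The following is a minimal sketch of how such passive-signal checks might be combined in code. The record fields, IP list, and thresholds here are hypothetical illustrations, not Arkose Labs’ actual detection logic:

```python
# Illustrative passive-signal bot scoring; all names and thresholds are
# hypothetical examples, not a vendor's real detection rules.
from dataclasses import dataclass

@dataclass
class LoginAttempt:
    ip: str                  # client IP address
    from_data_center: bool   # IP resolves to a hosting/data-center range
    mouse_events: int        # mouse movements observed before submit
    requests_last_30m: int   # requests seen from this IP in 30 minutes

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.22"}  # example reputation list

def looks_like_bot(attempt: LoginAttempt) -> bool:
    # No mouse movement before a login submit suggests a script, not a person.
    if attempt.mouse_events == 0:
        return True
    # IPs already associated with abuse are an immediate red flag.
    if attempt.ip in KNOWN_BAD_IPS:
        return True
    # Very high request volume from a data-center IP is volumetric bot behavior.
    if attempt.from_data_center and attempt.requests_last_30m > 10_000:
        return True
    return False

print(looks_like_bot(LoginAttempt("203.0.113.7", True, 0, 250_000)))  # True
print(looks_like_bot(LoginAttempt("192.0.2.10", False, 42, 2)))       # False
```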

A third common example is putting velocity checks in place. These monitor how many times a specific transaction data element occurs within a certain interval, looking for anomalies or similarities to known fraud behavior.
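As a rough illustration of that velocity-check idea, the sketch below counts how often a transaction data element (here, a hypothetical hashed card number) appears within a sliding time window and flags counts that exceed a limit. The window size and threshold are assumptions chosen purely for illustration:

```python
# Illustrative velocity check: count occurrences of a transaction data
# element (e.g., a hashed card number) within a sliding time window.
# Field names, window size, and threshold are assumptions, not real rules.
from collections import defaultdict, deque
import time

class VelocityChecker:
    def __init__(self, window_seconds: int = 1800, max_count: int = 5):
        self.window = window_seconds          # e.g., a 30-minute window
        self.max_count = max_count            # allowed uses inside the window
        self.events = defaultdict(deque)      # element -> recent timestamps

    def record_and_check(self, element: str, now: float | None = None) -> bool:
        """Record one use of `element`; return True if it exceeds the limit."""
        now = time.time() if now is None else now
        timestamps = self.events[element]
        timestamps.append(now)
        # Drop timestamps that have fallen out of the window.
        while timestamps and now - timestamps[0] > self.window:
            timestamps.popleft()
        return len(timestamps) > self.max_count

checker = VelocityChecker()
for i in range(7):
    flagged = checker.record_and_check("card_hash_abc123", now=1000.0 + i)
print(flagged)  # True: the same card appeared 7 times within the window
```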

Jack M. Germain

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open-source technologies. He is an esteemed reviewer of Linux distros and other open-source software. In addition, Jack extensively covers business technology and privacy issues, as well as developments in e-commerce and consumer electronics.
