
How Fake Accounts and Sneaker-Bots Took Over the Internet


If you’ve used any form of social media, you have almost certainly run into fake accounts with less-than-noble intentions. These accounts typically rely on social engineering to trick individuals into giving up personal or financial information, which can then be leveraged in more damaging attacks.

Perpetrators often pose as long-lost acquaintances, representatives from legitimate businesses, or even as administrators or representatives of the platforms themselves. Twitter, Facebook, and LinkedIn are just some examples of platforms where this has become commonplace.

However, in recent years, fake accounts have been used at scale not only to scam individuals but also for more sinister purposes. Nation-states, such as Russia and China, have used this technique to sow disinformation intended to influence, divide, and distract large swathes of populations who are untrained in spotting and responding to this threat.

 

LIFARS’ interactive training modules deliver stimulating and engaging learning experiences, equipping your employees with the tools and resources they need to be successful, active participants in your cybersecurity process.

 

While previous incidents involved alleged interference with US elections, more recently this tactic has been used in relation to the COVID-19 pandemic. Between October and December of last year alone, Facebook took down over 1.3 billion fake accounts, along with 12 million pieces of content about COVID-19 and vaccines that experts deemed misinformation.

While not exactly new, this is clearly an ever-present and growing online threat to navigate.

How do fake accounts threaten businesses?

Fake accounts are not only an issue on social media but on any platform or system that utilizes accounts to manage identity. As you might know, this includes any type of digital/online platform or portal we use today.

If you operate a consumer-facing business, for example, you manage customer, employee, and admin accounts that have access to various degrees of functionality. While accounts may have access to various levels of security clearance and serve distinct functions on different platforms, they all have one crucial thing in common: managing identity.

This means they can be used to deceive others through social engineering. For example, an account that appears to carry the authority of an administrator or superior can easily pry sensitive or compromising information out of other users. This can result in massive fraud, or provide the perfect entry point for cybercriminals to launch further attacks.

Sneaker (or scalping) bots are harder to classify as either “good” or “bad.” For one, they still pay for the goods they buy, and “botters” are quick to point out that they are only a natural extension of the free capitalist market.

However, they put other consumers at an extreme disadvantage when it comes to fair access to goods. One common tactic is to use automated bots to buy up all available stock, only to resell it to the public at hugely inflated prices. This practice has plagued the launch of the latest generation of gaming consoles, such as the PS5 and Xbox Series X.
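
To illustrate the mechanics, here is a deliberately simplified sketch of the kind of polling loop a scalping bot might run. The endpoint, product ID, and JSON shape are hypothetical placeholders rather than any real retailer’s API, and the checkout step is reduced to a print statement.

```python
import time
import requests

# Hypothetical endpoint and product ID, used purely for illustration.
STOCK_URL = "https://shop.example.com/api/stock"
PRODUCT_ID = "console-123"

def in_stock(session: requests.Session) -> bool:
    """Poll the (hypothetical) stock endpoint and report availability."""
    resp = session.get(STOCK_URL, params={"product": PRODUCT_ID}, timeout=5)
    resp.raise_for_status()
    return resp.json().get("available", False)

def run_bot() -> None:
    session = requests.Session()
    while True:
        if in_stock(session):
            # A real bot would now fire a pre-filled checkout request,
            # often from many accounts and IP addresses at once.
            print("Stock detected, attempting checkout")
            break
        time.sleep(1)  # poll far faster than any human could refresh

if __name__ == "__main__":
    run_bot()
```

The point of the sketch is the asymmetry: a machine can check and act on availability every second, around the clock, which no ordinary shopper can match.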

How can you limit the threat of fake accounts and bots?

Fake accounts can be an exceptionally insidious attack vector, one that is hard for software and humans alike to identify. However, there are several steps you can take to make exploiting this vector more difficult and to limit the damage it can cause:

  1. Education and training: Fake accounts most often target people, exploiting the human element to infiltrate and wreak havoc on internal systems. Your first line of defense is to educate and train employees to identify fake accounts, know which steps to take when they encounter one, and know which channels to use to report it. NIST’s Phish Scale is one example of this type of effort, aimed at empowering stakeholders to deal with these threats.
  2. Strong log-in and identity verification practices: The easier it is to create an account, the more likely it is to be exploited on a massive scale. Identity verification methods such as two-factor authentication (2FA) can block automated fake-account creation, and ID verification can prevent fabricated identities on more sensitive systems (see the TOTP sketch after this list).
  3. Identity and access management (IAM): IAM is a framework of policies and technologies for ensuring that the right users have the appropriate access to technology resources. Today, it is considered a crucial part of any organization’s security posture. Numerous services and out-of-the-box solutions are available to help manage this in enterprise environments as part of a zero-trust model (a minimal role-check sketch appears below).
  4. Automated detection: Automated detection tools exist to help root out fake accounts and mass botting attempts. These vary in efficacy, and “botters” are constantly adapting their techniques to stay one step ahead. However, such tools may be the only practical option in situations where botting is particularly common (see the rate-based detection sketch below).
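
To make the second point more concrete, below is a minimal sketch of server-side TOTP (time-based one-time password) verification, a common form of 2FA, using the open-source pyotp library. The in-memory user store and function names are hypothetical; a production system would also encrypt stored secrets, rate-limit attempts, and issue recovery codes.

```python
import pyotp

# Hypothetical in-memory "user store"; real systems would persist the
# secret encrypted, alongside the user's other credentials.
users = {}

def enroll_user(username: str) -> str:
    """Generate and store a TOTP secret, returning a provisioning URI
    the user can load into an authenticator app via QR code."""
    secret = pyotp.random_base32()
    users[username] = secret
    return pyotp.TOTP(secret).provisioning_uri(
        name=username, issuer_name="ExampleCorp"
    )

def verify_second_factor(username: str, code: str) -> bool:
    """Check the 6-digit code after the password check has already passed."""
    secret = users.get(username)
    if secret is None:
        return False
    # valid_window=1 tolerates one 30-second step of clock drift.
    return pyotp.TOTP(secret).verify(code, valid_window=1)
```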
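
For the IAM point, the core idea can be reduced to mapping identities to roles and roles to permissions, then checking that mapping on every request and denying by default. The roles and permissions below are invented for illustration; in practice you would rely on an established IAM service rather than hand-rolled checks like these.

```python
# Minimal role-based access control (RBAC) sketch; roles and permissions
# here are illustrative only.
ROLE_PERMISSIONS = {
    "admin": {"read", "write", "manage_users"},
    "support": {"read", "write"},
    "customer": {"read"},
}

USER_ROLES = {
    "alice": "admin",
    "bob": "customer",
}

def is_allowed(username: str, permission: str) -> bool:
    """Deny by default: unknown users or roles get no access."""
    role = USER_ROLES.get(username)
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("alice", "manage_users")
assert not is_allowed("bob", "write")
assert not is_allowed("mallory", "read")  # unknown identity is rejected
```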
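
And for automated detection, many tools start from simple behavioral signals such as request rate per client. The sliding window and threshold below are arbitrary illustrative values; real products combine many more signals, such as device fingerprints, IP reputation, and behavioral patterns.

```python
import time
from collections import defaultdict, deque

# Arbitrary, illustrative thresholds: flag any client that sends more
# than 20 requests inside a 10-second sliding window.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 20

# client_id -> timestamps of that client's recent requests
request_log = defaultdict(deque)

def looks_like_bot(client_id: str) -> bool:
    """Record one request and report whether the client exceeds the limit."""
    now = time.time()
    timestamps = request_log[client_id]
    timestamps.append(now)
    # Drop requests that have fallen out of the sliding window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    return len(timestamps) > MAX_REQUESTS_PER_WINDOW
```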