
Author Topic: Your Facebook friends may be evil bots  (Read 1958 times)



  • SCF VIP Member
  • *****
  • Posts: 725
  • KARMA: 116
  • Gender: Male
  • Pez
Your Facebook friends may be evil bots
« on: 18. April 2013., 09:29:38 »

Computer scientists have unleashed hordes of humanlike social bots to infiltrate Facebook -- and they're awfully effective


How safe is your online social network? Not very, as it turns out. Your friends may not even be human, but rather bots siphoning off your data and influencing your decisions with convincing yet programmed points of view.
A team of computer researchers at the Department of Electrical and Computer Engineering at the University of British Columbia has found that hordes of social bots could not only spell disaster for large online destinations like Facebook and Twitter but also threaten the very fabric of the Web and even have implications for our broader economy and society.

Four UBC scientists designed a "social botnet" -- an army of automatic "friends." A botmaster herds its troop of social bots, each of which mimics a person like you and me. The researchers then unleashed the social botnet on an unsuspecting Facebook and its billion-plus profiles.
These social bots masquerade as online users, adding posts that seem like they came from real people. But they secretly promote products or viewpoints, and once you friend them, some use the new connection to siphon off your private information. When coordinated by a botmaster, these social bots can wreak havoc and steal information at massive scale.
Traditional botnets don't pose a threat to social networks such as Facebook, where users can easily discriminate between artificial and real people. But the social bots tested at UBC imitated people well enough to infiltrate social networks.
That's not such a big issue with only one fake profile, but if the programmer can control hundreds or thousands of them, then it becomes possible to saturate large parts of the system, to gain access to massive amounts of private data, and to wreck the security model that makes online social networks safe.
Furthermore, because so many services build on top of social networks, the risk runs deeper. Many technologies, including data sharing and backups, integrate with sites like Facebook. Their authentication schemes rely on the implicit trust network that social bots are designed to break into.
The UBC researchers came up with a program that creates Facebook profiles and friends regular users. With the right techniques, it's easy for a program to add people on Facebook as friends. The results surprised the UBC team: "We saw that the success rate can be up to 80 percent. That was quite impressive," says researcher Kosta Beznosov.
Amazingly, some of the bots even received unsolicited messages and friend requests from people. Perhaps unsurprisingly, female social bots drew 20 to 30 times as many friend requests from people as male social bots did: 300 requests versus 10 to 15 on average.
How to fake a person on a social network
To infiltrate a network, the bots follow a sophisticated set of behavioral guidelines that place them in positions from which they can access and disseminate information, adapt their actions to large scale, and evade host defenses.

To imitate people, social bots create profiles that they decorate, then develop connections while posting interesting material from the Web. In theory, they could also use chat software or intercept human conversations to enhance their believability. The individual bots can make their own decisions as well as receive commands from the central botmaster.
The bots operate in phases. The first step is to establish a believable network to disguise their artificial nature. Profiles that people consider "attractive," meaning likable, have an average number of friends. To get near this "attractive" network size, social bots start by befriending each other.
Next, the social bots solicit human users. As the bots and humans become friends, the bots drop their original connections with each other, eliminating traces of artificiality.
Finally, the bots explore their newfound social network, progressively extending their tentacles through friends of friends. As the social bots infiltrate the targets, they harvest all available private data.
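The three phases described above can be condensed into a toy simulation. Everything here (the function name, the dict-of-sets friend graph, the assumption that every friend request is accepted) is illustrative only, not the UBC team's actual code:

```python
from itertools import combinations

def run_botnet_phases(bots, humans, friends):
    """Toy walk-through of the three infiltration phases.
    `friends` maps each account name to the set of its friends."""
    # Phase 1: bots befriend each other to reach a believable friend count.
    for a, b in combinations(bots, 2):
        friends[a].add(b)
        friends[b].add(a)

    # Phase 2: bots solicit human users (here, naively assume all accept)...
    for bot in bots:
        for human in humans:
            friends[bot].add(human)
            friends[human].add(bot)
    # ...then drop their bot-to-bot edges to erase traces of artificiality.
    for bot in bots:
        friends[bot] -= set(bots)

    # Phase 3: harvest data from direct friends and friends of friends.
    harvested = set()
    for bot in bots:
        for friend in friends[bot]:
            harvested.add(friend)
            harvested |= friends[friend]
    return harvested - set(bots)
```

Even in this crude form, the key property shows up: after phase 2 the bots' friend lists contain only humans, so nothing about a bot's own neighborhood advertises its origin.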
UBC researcher Beznosov recalls, "We were inspired by the paper where they befriend your friends, but on a different social network. For example, they know who your Facebook friends are. They can take this information and take a public picture of you, then create a profile on a completely different social network," such as LinkedIn. "At that point, the question we had was whether it's possible to do a targeted type of befriending -- where you want to know information about a specific user -- through an algorithmic way to befriend several accounts on the social network, eventually to become friends with that particular target account that you're interested in."
That targeting of specific users didn't work, so the researchers decided to test how many people they could befriend, with the penetration expanding over waves of friendship circles. The research exploits a principle called "triadic closure," first discovered in traditional sociology a century ago, where two parties connected by a mutual acquaintance will likely connect directly to each other. "We implemented automation on top of that."
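Triadic closure lends itself to automation: rank the accounts you have not yet friended by how many mutual friends you share, and send requests to the top of the list first, since those are the likeliest to accept. A minimal sketch (the dict-of-sets graph format is an assumption, not the paper's representation):

```python
def closure_candidates(graph, account):
    """Rank non-friends of `account` by mutual-friend count,
    exploiting triadic closure. `graph` maps accounts to friend sets."""
    my_friends = graph[account]
    scores = {}
    for friend in my_friends:
        for fof in graph[friend]:          # friends of friends
            if fof != account and fof not in my_friends:
                scores[fof] = scores.get(fof, 0) + 1
    # Most shared acquaintances first: these requests succeed most often.
    return sorted(scores, key=scores.get, reverse=True)
```

Each accepted request enlarges `my_friends`, which in turn produces new high-scoring candidates — the "waves of friendship circles" the researchers describe.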
There are plenty of tools for creating social botnets
Researcher Ildar Muslukhov notes that the UBC team had to solve many CAPTCHAs, those alphanumeric visual tests of humanness. Optical character recognition products failed frequently, getting the bot accounts blocked, so the researchers turned to human-powered services. "You can buy 1,000 CAPTCHAs for $1. It's people who are working in very poor countries, and they're making $1 a day." CAPTCHA companies coordinate the human responders and automate the service.
"We were amazed by the quality of APIs they provide you. They provide you with libraries for any possible language, like C++, C#, .Net, Java, whatever," Muslukhov says. "You just import their library and you call the function with an image inside, and they return you within five seconds a string with the CAPTCHA." Accuracy is claimed to be 87 percent, but the researchers did the CAPTCHA-solving manually in their testing to optimize the outcomes.
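The call-a-function, get-a-string-back workflow Muslukhov describes can be wrapped behind a small submit-and-poll interface. The sketch below is entirely hypothetical — the `solver` object, its `submit`/`result` methods, and the timing are stand-ins for whichever commercial service a botmaster might use, not a real vendor's API:

```python
import time

def solve_captcha(image_bytes, solver, timeout=5.0, poll=0.5):
    """Send a CAPTCHA image to a (hypothetical) human-powered solving
    service and wait for the answer string, mimicking the five-second
    round trip the researchers describe."""
    job_id = solver.submit(image_bytes)      # hand the image to the service
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        answer = solver.result(job_id)       # None until a worker responds
        if answer is not None:
            return answer
        time.sleep(poll)
    raise TimeoutError(f"CAPTCHA not solved within {timeout:.1f}s")
```

The polling loop matters: a human worker is in the loop, so the client must tolerate a variable delay rather than expect an instant answer.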
The basic infrastructure costs around $30 per bot. Ready-made networks with tens of thousands of connections can provide an instant "army of bots," as Muslukhov puts it. "We chatted with one of the guys online. He responded to us with some features -- they had this already made."
The market in malware has become standardized. Just as you can go to an email service provider to get an email account, you can go to a bot service provider to get a bot account.

It's not easy to stop the social bots
The complexity of social botnets makes it difficult to craft an effective security policy against them, the UBC researchers say. Widespread access to online services, including features such as crawling social networks and ease of participation, introduces conflicts between security and usability.
Security online relies on several assumptions. One key assumption is that fake accounts have a hard time making friends -- in other words, that you can easily tell a real account from a fake one by looking at its friendship circle. The UBC experiment proves social bots can be human enough to trump this assumption.
When the fakes ingrain themselves so well in the network that they are indistinguishable from the authentic accounts, you face a more fundamental concern: How do you rely on data in your social network? After all, many technological, economic, social, and political activities depend on that info.
For example, Facebook lets users interact automatically with the site, so outside service providers can integrate their offerings. This makes it as easy for social bots to use Facebook as it is for people. Facebook also lets users browse through extensive data sets, to make the site more convenient and useful. Social bots can take advantage of this laxity to harvest massive amounts of private data.
The UBC researchers divide the available defensive strategies into prevention and limitation. Prevention requires changing the prospects facing a potential social botnet operator. In other words, that means putting up more barriers for automated access, because such automation favors computer-driven invaders. That of course risks turning away human users who don't want to jump through the hurdles either.
Limitation means accepting that infiltrations will occur and focuses on capping the damage. Today, social networks rely on limitation to respond to adversaries: They observe differences in the structure and actions of social botnets compared to human networks, then use that detection to close down artificial accounts. But as social botnets gradually extend their tentacles into human networks, acquiring in the process a similar social structure, this limitation defense becomes less effective.
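One structural difference a limitation defense can look for: a real person's friends tend to know each other, while a bot that friended strangers at random has a neighborhood with few mutual ties. The toy detector below flags accounts with an abnormally low local clustering coefficient — the signal and the threshold are illustrative guesses, not the detection Facebook actually runs, and (as the paragraph above notes) bots that inherit a human-like structure would evade it:

```python
from itertools import combinations

def clustering(graph, account):
    """Fraction of an account's friend pairs that are themselves friends."""
    friends = list(graph[account])
    if len(friends) < 2:
        return 0.0
    pairs = list(combinations(friends, 2))
    closed = sum(1 for a, b in pairs if b in graph[a])
    return closed / len(pairs)

def flag_suspects(graph, threshold=0.1):
    """Flag accounts (with 2+ friends) whose neighborhoods look
    unnaturally unclustered -- a crude bot heuristic."""
    return {acct for acct in graph
            if len(graph[acct]) >= 2 and clustering(graph, acct) < threshold}
```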
The social botnet business model
The economics also favor the botnet operators. Many cyber thieves use "zombie" PCs, systems infected with malware that turns them into free processors for the botnets; key loggers and data stealers are common uses of such "zombie" PCs today. Botnet operators could use them for powering the social bots and the botmasters, so the only significant costs are in creating the social bots in the first place.
Of course, botnet operators need enough reach to pay back their investments and make the efforts worth their while. And the cost of massively scaling the botnet -- the programming is much more sophisticated, and the costs of avoiding detection grow as well -- means there's a natural limit to how wide such infiltrations may go. The UBC researchers calculate a social botnet needs just 1,000 or so human friends to be profitable, if data theft is the business model.
That limit could be extended if botnet operators could get each social bot to befriend far more people than ordinarily possible, such as by cycling through friends as it harvests private data, maintaining an ideal-size roster of the average number of friends at any one point but changing the group over time (unlike human networks, which tend to keep the same people for years). Think of it as social climbing for social bots.
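The break-even arithmetic behind that 1,000-friend figure is simple ceiling division. The $30-per-bot cost comes from the article; the per-friend data value below is a made-up placeholder chosen only so the default reproduces the researchers' number:

```python
def breakeven_friends(cost_per_bot_cents=3000, value_per_friend_cents=3):
    """Human friends one bot must accumulate before harvested data repays
    its setup cost. $30/bot is from the article; the 3-cents-per-friend
    data value is a hypothetical placeholder. Integer cents avoid float
    rounding; -(-a // b) is ceiling division."""
    return -(-cost_per_bot_cents // value_per_friend_cents)
```

At the placeholder rate, `breakeven_friends()` returns 1,000 — and the friend-cycling trick described above effectively removes the cap on how many friends a single bot can monetize over its lifetime.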
Selling Facebook friends would pull in a heftier take than data theft, the researchers found, offering another revenue stream -- or even business model.

Facebook has acknowledged that its service has tens of millions of fake accounts. Other services such as Twitter and comment sections of websites also have hefty numbers of fake accounts used by spammers and phishers. Just imagine how those numbers could grow once social bots become more than a university experiment -- and how much more effective they could be at fooling us all.

Original article: By Eagle Gamma | InfoWorld
There are two easy ways to configure a system:
everything open and everything closed.
Everything else is more or less complex.




  • SCF Advanced Member
  • ***
  • Posts: 337
  • KARMA: 41
  • Gender: Male
Re: Your Facebook friends may be evil bots
« Reply #1 on: 18. April 2013., 19:02:16 »
Fortunately, I don't belong to Facebook! In my opinion it's a completely futile place! ;D
I'm an old man but still alive as well :)


  • SCF VIP Member
  • *****
  • Posts: 3512
  • KARMA: 152
  • Gender: Female
Re: Your Facebook friends may be evil bots
« Reply #2 on: 19. April 2013., 05:40:21 »
and slowly the future is getting real... Next: Turing* ;p

Luckily, most people consider me to be an evil bot too so I'm actually glad to get some new friends :>

GREAT article!



~~~ ~~~

Try to look unimportant; perhaps they are short of missiles.
All spelling mistakes are my own and may only be distributed under the GNU General Public License! – (© 95-1 by Coredump; 2-013 by DevNullius)

More information about bitcoin, altcoin & crypto in general? GO TO

Any man can err, but only a fool persists in error... So why not get the real SCForum employees to help YOUR troubled computer!!! SCF Remote PC Assist


  • SCF Administrator
  • *****
  • Posts: 7359
  • KARMA: 308
  • Gender: Male
  • Whatever doesn't kill us makes us stronger.
    • - Samker's Computer Forum
Re: Your Facebook friends may be evil bots
« Reply #3 on: 21. April 2013., 18:49:38 »
Fortunately, I don't belong to Facebook! In my opinion it's a completely futile place! ;D

Strongly agree...  :thumbsup:


Luckily, most people consider me to be an evil bot too so I'm actually glad to get some new friends :>


