The Journal of Popular and American Culture

Liking the Lies: An Analysis of Hoaxes on Facebook and What They Mean for the Contextual Framework of Viral Message Spread

There is nothing necessarily modern or new about hoaxes presented on mass-media platforms. People have been trying to trick others into believing or sharing something for as long as there have been people. Historical analysis of hoaxes in newspapers of the 1700s and 1800s indicates that they shared the front page right along with hard news.1 Chain letters were passed from household to household through the national postal system, containing some kind of prayer request or claiming that the recipient was only a few vital pieces of personal information away from winning a jackpot.2 Over time, the chain letters evolved into systems with complex, targeted language. Fake endorsements, usually in the form of someone claiming good luck after passing a letter on or sending in a crisp new $1 bill, became a form of currency within the chain letters.3 However, the personal computer, the Internet, and online social networking platforms have introduced a new and puzzling chapter in the story of hoaxes and their role in society. As newspapers shifted from general entertainment to hard news, the hoax no longer had a home on the front page. The chain e-mail took the place of the chain letter. And now Facebook is the platform on which hoaxes spread “virally,” no longer in a matter of days, weeks, or months, but in a matter of seconds, minutes, and hours.4 This is driven by Facebook’s scale: as of May 2016, the site had more than 1 billion accounts, 1.19 billion monthly active users, and 1.01 billion daily active users. Those numbers dwarf the circulation of even the largest global newspaper.

One clear example of the power of Facebook as a hoax platform occurred on November 30, 2012, when a man named Nolan Daniels posted a photo of himself on Facebook, smiling ear to ear and holding up a Powerball ticket that matched the numbers of the winning draw for $293 million. In the caption, Daniels stated that he would randomly pick one person who had clicked “share” on the photograph and happily give them $1 million of his recent bounty. By late afternoon of the same day, the photo had been “shared” on Facebook 200,000 times.5 That evening, it was up to 450,000 “shares.”6 By the end of the day, it was up to 500,000, and it did not stop there.7 By 10 a.m. the next day, December 1, the number of “shares” had shot up to 1.7 million. Daniels had managed to get his image “shared” almost 2 million times within a span of 24 hours. On Facebook, “sharing” is a powerful tool: when a user clicks “share” on something, that content appears in the feeds of all of the user’s Facebook “friends.” It was not simply two million people looking at Daniels’ image; it was two million people showing the image to everyone they were Facebook “friends” with. The actual number of people who saw the image within 24 hours is unknown, but probably astounding.
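A rough back-of-envelope sketch makes the point. The figures below are illustrative assumptions, not data from this study: neither the average audience per sharer nor the overlap between audiences is known.

```python
# Illustrative only: the friend count and overlap figures are assumptions,
# not data reported in this study or by Facebook.
shares = 1_700_000        # reported "shares" by 10 a.m. on December 1
friends_per_sharer = 200  # assumed average audience reached per share
novelty = 0.5             # assumed fraction of that audience not already reached

potential_impressions = int(shares * friends_per_sharer * novelty)
print(f"{potential_impressions:,} potential impressions")  # 170,000,000
```

Even under these deliberately conservative assumptions, the image could plausibly have reached well over a hundred million feeds in a single day.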

The only problem for those hoping for a handout from Daniels was that the image was not real. Daniels had used Photoshop to add the winning numbers to a template lottery ticket. There were clues that the image was fake – the pattern and color of the stock the ticket was printed on did not match what Powerball uses, and the numbers themselves were out of order – yet that did not stop people from “sharing” the image. By the end of the ordeal, the image was the most-shared piece of content in the history of Facebook, surpassing the image of President Barack Obama and Michelle Obama after their 2012 election victory, which had held the record with only about 500,000 shares. Media outlets interviewed Daniels in the days following the post. He claimed he doctored the photograph and posted it as a form of social experiment.8 He also said he wanted to see just how far the image could go.9

However, despite Facebook boasting 1.01 billion daily active users, its growth as a paradigm in global communications, and clear examples of the platform being used to spread a hoax message to millions and millions of people, little has been done in mass communication research to analyze Facebook hoaxes. This study seeks to act as a launching point into the presence of hoaxes on Facebook by analyzing the conceptual themes, content, and structure of the hoaxes themselves. It is based on both a historical understanding of the motivations behind hoaxes and a modern theoretical basis for why viral content works the way it does, with the internal assumption that a hoax is a piece of viral content. On the viral side, Berger and Milkman sought to find out which emotions tended to characterize the New York Times stories that went “viral.”10 The question was whether there were clear and present patterns in the emotional framework of stories that traveled farther than others – clicked more often, linked more often, and so on. This study took those emotionality triggers identified in news articles and transposed them onto Facebook hoaxes.

I examined the content of 249 Facebook hoaxes, looking for the emotionality triggers that Berger and Milkman claim lead to messages going viral, as well as the playful nature of hoaxes proposed by Fedler. This was conducted as a qualitative content analysis in which both the content of the hoax itself and the explanation of the hoax were examined.

Review of Literature

Historical Background

Fedler’s book on media hoaxes provides the historical context for work on hoaxes. Its most important contribution to this study is the idea that hoaxes, and their place within the confines of mass media, are nothing new. From the earliest days of print publication for mass distribution, writers and editors were inventing stories, drawings, and etchings for the exact purpose of having the content “go viral,” or spread from person to person by word of mouth. The hoax themes were often spectacular – aliens seen on the face of the moon, giants discovered in the heart of Africa, or new medicines that could cause people to hop high in the air. The topics were rarely malicious; they tended to be playful. The playfulness was a product of the state of the nation after the U.S. Civil War, when the audience wanted positivity, combined with the confusing new industrialized world in which society found itself. An inventor creating a robot butler that wheeled around and made drinks seemed just plausible enough in a world where coal-powered engines were producing many people’s daily goods for the first time. At the same time, the robot butler story was bright and positive, with rich descriptions of how great the future would be. The playfulness of newspaper hoaxes continued into the Internet age, with some very early Web-based hoaxes concerning such things as hippopotamuses eating dwarfs and kittens being grown in jars.11

Fedler notes that it can be easy to see the hoaxes distributed through mass media as cruel and insensitive, fooling unsuspecting readers and betraying their trust. To the contrary, the hoaxes were rarely malicious in nature, at least when they were originally created. They were an outlet of creativity for the creator – a way reporters could make jokes, play around, and relieve some of the stress that came along with being a newspaper reporter in the late 1800s and early 1900s. That timeframe is particularly important in drawing comparisons to Facebook for this study: during the golden age of media hoaxes, reporters were often “uneducated” compared to today’s working reporters. They rarely had college degrees, and most came from blue-collar neighborhoods, as reporting was itself a blue-collar occupation at the time. Even hoaxes that caused widespread panic or negative effects on society rarely started off that way, the main example being Orson Welles’ infamous radio play “The War of the Worlds,” adapted from the novel by H.G. Wells. Welles simply intended to make a radio play in the style of a news report, with warnings at the beginning explaining that it was just a play; however, people turning on their radios in the middle of the broadcast missed the warnings and thought that aliens might indeed be invading Earth. From there, the hoax spread by word of mouth, with people who had listened to the broadcast seeking out their friends, family, and neighbors to tell them about the report. This is what Fedler refers to as an unintended hoax – one that was never meant to be a hoax to begin with but, given the lack of context for many consuming the material, became one.

Fedler notes that the only time historical hoaxes veered into malice was when they were targeted at competing mass-media producers. For example, in the 1890s, the Associated Press pushed a story about an uprising in India led by a fearsome Rajah named “Siht El Otspueht.” The rival wire service, United Press, quickly picked up the story and, with a few quick revisions and alterations to phrasing, pushed it out to its own subscribers without any extra reporting. The trap was sprung at that point: the UP, as well as its subscribing publications, realized that the fearsome Rajah’s name, spelled backwards, was “The UP Stole This.” Although the hoax was created maliciously, the intended target was not the individual reader but the competing news source.

Rumor Theory

An important portion of the theoretical background for this approach is Buckner’s work on the nature of rumors.12 A rumor is simply defined as a piece of information that cannot be verified at the moment of consumption; it is not necessarily incorrect information. Buckner found that the number of people a rumor reaches depends partly on the “structure” the rumor passes through. There are two main structures. The first is a serial chain, a straight line like the childhood game “telephone.” The second is a network, where the rumor is spread to more than one person at once, causing a cascade as each individual shares it with multiple people. Rumors in the network structure often become uncontrollable. Buckner notes that once a rumor is sent into a network, it can make its way back around to the originator, though by that point the original rumor may be so distorted that it is not recognized.

Buckner also says the person receiving the information can face a rumor with either an uncritical or a critical stance. A critical stance means the person hearing the rumor challenges it, or thinks critically about it, usually on the basis of previous knowledge of the topic. The critical stance has a dampening effect on rumor spread; the uncritical stance has an enhancing effect. When someone receives a rumor and approaches it uncritically, they are less likely to challenge the rumor and therefore more likely to pass it along to other people. Some of what determines a critical versus uncritical stance is personal, such as internal levels of skepticism. However, context is often the leading factor, as some situations allow the uncritical stance to flourish: a very low risk associated with passing along the rumor, a very high perceived risk associated with not passing it along, a lack of background information on the subject, and simply wanting the subject of the rumor to be true.
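To make the contrast between the two structures concrete, here is a minimal simulation sketch. It is not Buckner’s own model, and every parameter value is an invented assumption: the pass-along probability stands in for the critical/uncritical stance, and overlap between audiences in the network is ignored.

```python
import random

def serial_chain(max_length, p_pass):
    """Serial chain: the rumor moves one person at a time and dies at the
    first receiver who takes a critical stance (fails the pass-along roll)."""
    reached = 0
    for _ in range(max_length):
        if random.random() > p_pass:  # a critical receiver drops the rumor
            break
        reached += 1
    return reached

def network_cascade(avg_friends, p_pass, waves):
    """Network: every sharer exposes all their contacts at once, and each
    exposed person re-shares with probability p_pass. Overlap between
    audiences is ignored, so late waves are overstated."""
    sharers, exposed_total = 1, 0
    for _ in range(waves):
        exposed = sharers * avg_friends
        exposed_total += exposed
        sharers = int(exposed * p_pass)  # the next wave of re-sharers
        if sharers == 0:
            break
    return exposed_total

random.seed(42)
# A chain dies within a handful of hops even when 80% of receivers pass it on...
print(serial_chain(1000, p_pass=0.8))
# ...while a network explodes even if only 2% of those exposed re-share.
print(network_cascade(avg_friends=200, p_pass=0.02, waves=5))
```

Raising the pass-along probability – the effect of an uncritical audience – barely changes the chain’s reach but multiplies every wave of the cascade, which is exactly the dynamic the next paragraph applies to Facebook.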

Buckner’s work, despite its age, establishes the rumor potential created by hoaxes on online social networking sites such as Facebook. First, the structure: Facebook is a network of interconnected nodes and edges, not a straight line. Rumors, as the output of a hoax, sent into the Facebook network will therefore become more distorted, lacking the ability to be corrected; networked rumors also tend to travel faster. Next, when looking at the triggers for an uncritical stance, it is clear that some, if not most, are in play on Facebook. The clearest is the lack of repercussion: if someone decides to click “share” on an image of a winning lotto ticket, there is little personal risk involved. Finally, there is the often uncontrolled nature of computer-mediated communication, which follows language norms unlike those of the physical world.13

Modern Examples

Although the introduction of this paper looks at some of the more lighthearted examples of Facebook and social media-based hoaxes – such as the fake winning Powerball ticket – the practical implications of widespread online hoaxes took a very dark turn in 2013. In the wake of the Boston Marathon bombings, several large online communities began attempting to find suspects based on released security footage and photos from the scene. Reddit became one of the biggest hubs, with a subsection of the site called “/r/findbostonbombers” quickly gaining users and attention.14 The subsection grew quickly, with many people pointing out individuals in the crowd they believed looked suspicious – which meant everything from what appeared to be a bag drooping too far to, unfortunately, simply not being white. Soon, more mischievous sites such as 4chan.org began posting falsified photos and information to /r/findbostonbombers, and that misinformation made it to other social media platforms like Facebook.15 The misinformation turned out to be both dangerous and hurtful, as infographics and images blaming a Saudi national and a missing Brown University student went viral on both Reddit and Facebook. As it turned out, the Saudi national was a runner in the race who was injured by the explosions, and the missing Brown student was found dead, having committed suicide before the Boston Marathon had even occurred.16

Viral Content

For this study, successful hoaxes are understood as a form of viral content, and as such, I can borrow from previous studies examining how viral content becomes viral. Berger and Milkman’s work on the virality of media content sets the theoretical foundation for this study. Their work examined viral content in a multi-layered content analysis to see if there were common themes. They used a sample of the top shared and linked stories from the New York Times over a three-month period and coded them for emotionality triggers, calling it the intersection of emotion and social transmission. They measured how viral content was by how often visitors to the New York Times website used the buttons provided on the site to hyperlink a story, send it to someone else, or post it to social media. They divided their coding mechanisms into two groups: primary predictors and controls, with the primary predictors being positivity, awe, anger, anxiety, and sadness, and the controls being utility, interest, and surprise.

Ultimately, Berger and Milkman found a positive relationship between positive content and stories going viral, with stories coded for the “awe” emotionality trigger showing the highest level of viral spread. Conversely, they found that deactivating emotions – ones that tend to suppress further action, such as sadness – did not have the same connection to viral spread. This study applies the same coding of emotionality triggers that Berger and Milkman used on New York Times articles to hoaxes on Facebook.

The literature guided the research questions for this study in two directions. The first direction, dealing largely with Berger and Milkman’s framework of viral content, asked whether Facebook hoaxes carry the same emotionality triggers present in New York Times articles that went viral. Thus, the first research question was:

R1) Does the content of popular Facebook hoaxes fit within the established framework of viral news links?

The second direction dealt with Fedler’s work on media hoaxes. It asked whether widespread Facebook hoaxes are playful in nature like historical media hoaxes, which rarely intended harm. The question becomes whether the Facebook hoaxes that gain widespread visibility are the kind designed to inspire imagination and born of a sense of creativity, like their historical cousins. Thus, the second research question was:

R2) Does the content of popular Facebook hoaxes fit within the playful, established framework of historical hoaxes?

Methodology

This research was conducted as a qualitative content analysis with some quantitative elements. The only quantitative method was a simple counting system for the framework borrowed from viral-content research. Primarily, the wording of the hoaxes themselves, as well as the explanations of the hoaxes, was read and annotated qualitatively to establish working contextualized frames within the content. The Australia-based Hoax-Slayer.com was selected as the database for sampling and coding hoaxes. Although other sites covering popular hoaxes, such as Snopes.com, tend to attract more attention and traffic, they are not as robust: at the time the study was conducted, Snopes listed only 24 hoaxes in the “Facebook” section of its website. Hoax-Slayer, a site maintained by Australian Brett Christensen, contains a much broader array of saved hoaxes, totaling 249 at the completion of coding. Although an academically maintained database would be preferable for a content analysis, no academic database of Facebook hoaxes currently exists. And, due to the often organic and sporadic nature of Facebook hoaxes, it would be nearly impossible to collect and observe the hoaxes retrospectively from Facebook itself. Any given user of Facebook may only access information from the users with whom they are connected, so attempts to automatically collect hoaxes en masse from Facebook would be vastly limited in perspective. Therefore, the study had to rely on a third-party, non-academic collection site.

The coding mechanism for the study was based on the emotionality triggers from the Berger and Milkman study. However, Berger and Milkman used an intricate system of dual coding on a Likert scale with defined predictors and control variables, because their study aimed at prediction. As this was a qualitative content analysis with no intention to predict, each variable was coded simply as either appearing within the overall context of the hoax or not, and recorded as 1 or 0. The variables were: anger, anxiety, awe, sadness, surprise, practical utility, and interest.

I adapted definitions for each emotionality trigger from Berger and Milkman’s article to apply specifically to hoaxes. Anger was noted if the hoax had language designed to make the reader direct frustration at a person or subject. Anxiety was noted if the hoax had information designed to make the reader nervous, afraid, or uneasy. Awe was noted if the hoax had information designed to inspire or uplift, or, as the previous study defined it: “the emotion of self-transcendence, a feeling of admiration and elevation in the face of something greater than the self.” Sadness was noted if the hoax had information designed to make the reader feel hurt or depressed about a situation or person. Surprise was noted if the hoax had information presented in an unexpected way. Practical utility was noted if the hoax had information that attempted to showcase helpful hints for day-to-day life. Interest was noted if the hoax had exciting or enthralling information.

None of the variables were mutually exclusive; in theory, any of the examined hoaxes could have satisfied every category. Some other available informational items were also collected and coded, including the date a hoax was posted to the archive and the date it was originally located “in the wild,” when the database recorded that information. Finally, the “origin material” of the hoax was recorded when available, with options for native to Facebook, native to another part of the Web, from an e-mail chain, or off-Internet.
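As a minimal sketch of this coding scheme – with invented example records, since the actual coded data are not reproduced here – each hoax can be represented as a set of binary flags plus an origin field, and the tallies reported in Table 1 are then simple column sums:

```python
from collections import Counter

# One record per hoax: binary emotionality flags (1 = present, 0 = absent)
# plus the origin field. Field names follow the study's variables; the
# two records below are invented for illustration.
hoaxes = [
    {"anger": 0, "anxiety": 1, "awe": 0, "sadness": 1, "surprise": 0,
     "utility": 0, "interest": 0, "origin": "Facebook"},
    {"anger": 1, "anxiety": 1, "awe": 0, "sadness": 0, "surprise": 1,
     "utility": 1, "interest": 0, "origin": "Email"},
]

TRIGGERS = ["anger", "anxiety", "awe", "sadness", "surprise",
            "utility", "interest"]

# Variables are not mutually exclusive, so each column is summed on its
# own, and one hoax can raise several counts (cf. Table 1).
trigger_totals = {t: sum(h[t] for h in hoaxes) for t in TRIGGERS}
origin_totals = Counter(h["origin"] for h in hoaxes)

print(trigger_totals)
print(origin_totals)
```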

Results

A total of 249 hoaxes were analyzed. To begin, the presence of each coded emotionality trigger was counted and totaled, as shown in Table 1 below.

Table 1: Presence of frames often found in viral content, present in Facebook hoaxes, and source of original hoax content.

Emotionality Trigger     Number of Hoaxes in which the trigger was present
Anger                    54
Anxiety                  91
Awe                      10
Sadness                  53
Surprise                 42
Utility                  51
Interest                 53

Originated on            Number of Hoaxes
Facebook                 206
Other online location    49
Email                    16
Off-web/Real             24
Unknown                  1

The overwhelming emotionality trigger present in the Facebook hoaxes was anxiety. Even the presentation of the message itself was designed to carry a semiotic sense of urgency, as many hoaxes were warnings of some kind, written in all-caps, often with many exclamation points. It appears the anxiety context was often used to ensure the fast distribution of the material by presenting the information as something that could harm others if it was not spread. This suggests a powerful manipulative construct within the boundaries of Facebook, particularly if Facebook is a means of keeping in touch with family and close friends. Anxiety was also the most ubiquitous of the emotionality triggers, spanning all the narrative frames recorded in the hoaxes. Interestingly, anger and sadness had almost the same count. Berger and Milkman’s work suggests that sadness is not a good predictor of viral content, yet narrative frames and language evoking sadness were used just as much as some of the more active-style emotionality triggers. Sadness was also the trigger most connected to photos: usually within the “Prayer Request” narrative frame, images showed a sick or dying child in a hospital bed, clearly indicating the intention to cause a feeling of sadness. Also interesting is the very low count for the awe emotionality trigger. Only 10 hoaxes had some kind of trigger of awe. Those were spread across the different narrative frames; however, the few purely traditional hoaxes, such as a Photoshopped image of a tiger with a black coat, tended to contain awe.

Some of the hoaxes were based on a nugget of reality. They described true events at one point in time, but circulated with materials that were out of date. Those were often in the “Prayer Request” group, discussed later in this section. Often, they were based on journalistic news stories that recorded actual events – for example, doctors working to save the life of an infant found abandoned in a garbage dump. However, the message itself was circulating months, sometimes years, after the event took place. In the case of the abandoned infant, the wording of the hoax made it sound as if the child was found in 2011, but the original news story and images of the child were from 2007. The phrasing of such hoaxes uses vague terms like “recently” to extend the longevity of the message. Facebook hoaxes once based in reality, but mutated over time, do have the potential to be harmful, as they can trick people into thinking that a one-time event is an ongoing phenomenon.

A clear majority of the hoaxes, about 139, concerned some aspect of technology: something to do with Facebook itself, a false virus scare, or a scam to collect personal data under the guise of giving away a cutting-edge piece of technology. Given Facebook’s overall demographics, perhaps this makes sense – anyone who can access Facebook is already a technology user and may be more likely to take an interest in technology topics.

With the emotionality triggers – the emotions that might encourage people to spread a hoax – counted and analyzed, several other important frames were discovered, primarily through the qualitative process of combining themes in a grounded approach. The narrative frames were commonalities in the language used in the body of the hoax, the topics discussed, and the way the reader was meant to interact with the hoax. The consistent narrative frames that appeared in the language and presentation of the Facebook hoaxes are described in the sub-sections below.

Fake-Virus

One very clear framework that appeared in the language of the hoaxes was the “Fake Virus,” with 31 hoaxes falling into that category. The narrative for those hoaxes often described a fast-spreading Trojan horse, worm, or piece of malware that was infecting the Facebook accounts and computers of users who performed some kind of activity. That activity ranged from accepting a “friend” request from an unknown user, to joining a Facebook group, to loading a legitimate piece of software, such as the Kik Messenger application. The full list of things warned about as viruses within Facebook is too long to reproduce within the constraints of a single study.

Rarely were the claimed origins of the virus actually true, and often the activity being warned about was not only mundane and harmless – such as accepting a friend request from a particular person or clicking “like” on the wrong type of image – but also technically impossible to use for spreading a virus. Occasionally the narrative of the hoax was built around a real-life phenomenon, such as Facebook’s problem of “friend cloning,” in which someone maliciously duplicates an account by using the same profile image, likes, and “About me” information and then befriends the unsuspecting victim; the scam is completed when the impostor asks people for money while posing as the “friend.” The emotionality triggers most associated with the “Fake Virus” were surprise, anxiety, and anger, with utility often present, depending on how the narrative was framed. In some instances, the narrative was less a frantic warning about the implications of clicking “Like” on the wrong thing and more a set of steps to protect against privacy violations – albeit privacy violations that were not real. Those narratives were often point-and-click instructions describing which settings to turn off and which to turn on to prevent Facebook from taking your photos and using them for promotional materials. Missing from those narratives was the knowledge that Facebook’s own Terms of Service allows for that use of content, and no specific setting turns it off.

The consequence of the “Fake Virus” theme is a muddying of the waters. Facebook itself is not an effective technical conduit for spreading viruses and malware, as it is difficult to send attachments and nearly impossible to embed anything malicious within Facebook itself. The wider Internet, however, is full of virus and malware traps, and taking preventative steps – such as not clicking a link that takes you away from Facebook if you do not know its source – is important. What the “Fake Virus” hoaxes do is confuse people into thinking that almost any action on Facebook is potentially dangerous. The owner of the database used by this study has described on the site the implications of the “Fake Virus” framework:

Ironically, because the warning is spreading so rapidly, it is apt to ultimately cause more problems than the supposed threat that it purports to describe. Such warnings clutter Facebook pages with misinformation, cause unnecessary alarm, and waste the time of those who read and repost them. They can also desensitize users to warnings about real threats.17

Prayer Requests

Possibly the framework most native to the web in its modern form, yet most reminiscent of past cultural ritual, is the “Prayer Request.” A total of 25 hoaxes fit the “Prayer Request” narrative frame. Health anomalies were used so often here that they could almost be considered a sub-narrative: for example, an image of a baby with a rare blood disorder would be pictured, and the text would say that only prayer could save the child, that doctors did not know what to do. Health-anomaly hoaxes accounted for about 39 in total, some overlapping with the specific “Prayer Request” narrative. The hoaxes within this frame tended to prey on the emotionality triggers of sadness and anxiety, as in this example:

SHARE THIS PIC IF YOU WANT TO HELP THIS BABY. IF YOU HAVE HEART, THEN YOU WILL SURELY SHARE OTHERWISE ITS UPTO YOU. THIS BABY IS SUFFERING FROM BRAIN DISORDER AND NEED PRAYERS. WE NEED ABOUT. 1,00,000 SHARES. PLZ HELP THIS BABY. SHARE AS MUCH AS POSSIBLE. AND COLLECT SO MANY PRAYERS FOR THIS BABY.18

The language in the hoax is so intentionally vague that it is worthless to someone who cares about the fate of the child. The child is not named; it is not known where the child is from; and the specifics of the brain disorder are never mentioned. It is also never explained how collecting “1,00,000 SHARES” will save the child. The image was of a child with facial tumors, and it was at least eight years old by the time the reader saw the hoax in 2013, as the first found use of the picture on the Internet was from 2005. A potentially harmful side effect of the prayer request frame is the longevity of painful images and emotional hurt. The images of sick and dying children are often taken without permission from medical websites and from the personal blogs families use to cope. A parent who has lost a child to a rare disease could see a photo of that child attached to a Facebook hoax years later, with the only apparent endgame being more people clicking “Share” to make the originator of the hoax feel accomplished.

Pedophilia Scares

The most intense theme in the sampled hoaxes was the pedophilia scare. Out of the sample, 23 were considered pedophilia scare hoaxes. They were often written in all-caps, making them appear important and timely, yet almost none of them were rooted in any original truth. The emotionality triggers within the pedophilia scare hoaxes were almost all anger, anxiety, and surprise. All had a call-to-action at the end of the stated narrative.

The narratives tended to take two distinct forms. The first would often describe a specific user on Facebook, such as “Harry Graham,” “Thierry Mairot,” or “Thomas Cowling.” The text would claim the named user was a known pedophile who might attempt to befriend the reader’s child on Facebook, with a warning that the police should be called if the reader was contacted by the accused. One example reads:

URGENT URGENT ……… To all the parents whose children have a profile on facebook! There is a man who tries to make contact with the children to talk about sex. His name is Thierry Mairot. please, copy and paste on your wall! Thank you for protecting your children! PLEASE share as an emergency. He poses as Justin Bieber! His profile appears as Justin Bieber! PLEASE share NFSE.19

This narrative feeds off the emotionality triggers of anger and surprise, mimicking the real-world concern of pedophiles using the Internet to contact children. However, in every case listed in the hoax database, the name identified as the pedophile either did not exist on Facebook, or belonged to so many people that it would be impossible to know which “Harry Graham” was the supposed pedophile. The narrative has a very strong call to action, begging people not only to click “Share” but also to copy and paste the hoax onto their own walls. This narrative represents a very dangerous potential for abuse: given the rapid spread of such hoaxes, it would be extraordinarily easy to drop the name of anyone a hoaxer disliked into the template pedophilia scare narrative and falsely brand that individual a pedophile.

The second style of pedophilia scare dealt with Facebook pages and internal aspects of Facebook itself. The hoax would tell people not to join certain Facebook groups, such as one titled “Becoming a father or a mother was the greatest gift of my life,” with the hoax author claiming the group stole pictures of children posted voluntarily by parents for pedophilic uses. However, in almost every case, the group named in the hoax was created after the hoax was written and distributed, making the group itself a malicious follow-up to the hoax. It is unknown who made the group, whether it was the creator of the original hoax or someone else. Because the group was made as a reaction to the hoax, it contained no pictures of children – rather, just people attacking the group’s creators.

Phishing and Marketing

Within the sample, 58 of the hoaxes fit the theme of a phishing or marketing scam. Unlike the other narrative themes, the “Phishing and Marketing” theme tended to create direct victims. Whereas the “Prayer Request” or “Fake Virus” themes could create potential victims through distress, the victims of the “Phishing and Marketing” hoaxes stand to lose money.

The phishing and marketing scams fit within three distinct narrative types. The first was phrased similarly to the “Fake Virus” theme: users were warned about some kind of change to their account, supposedly caused by hackers or a piece of malware. The warning was often distributed on Facebook itself as a direct message, and the user was then instructed to log in at a fake log-in site, handing the account credentials to the hoax’s creators. The second type was the “Bait-and-Switch.” Users were shown a sample of an enticing video clip – examples include a news clip stating pop star Rihanna had died, or that Justin Bieber had checked into rehab – and then told to click a link to find out more. Victims who clicked were asked to fill out a survey about their purchasing habits, often including their email address and cell phone number. In the end, the links promising the whole story about Rihanna or Justin Bieber led nowhere but to another survey. This type of marketing hoax appears to be largely created and launched by unethical marketing firms seeking to collect email addresses and phone numbers.

The third type was the “Free Give-Away.” This was a popular one, making up most of the hoaxes within the “Phishing and Marketing” narrative frame, and it closely mimics previously popular chain-mail giveaways. The items being “given away” ranged from gift cards for popular restaurants to cruises and computers. One example reads:

We have an extra 200 boxes of Dell computers that can’t be sold off because they have been unsealed. Therefore we are giving them away. Want one of them? Just SHARE this photo & LIKE our page. We will choose 200 people completely at random on July 30 and winners will be notified via inbox message. LAST STEP: Confirm Your Entry at [Link Removed].20

This example is typical of the malicious phishing-type hoaxes. The reader is lured in with the promise of a free item and is asked to perpetuate the hoax by “Sharing” and “Liking” in order to be eligible. The reader must then click a link to “confirm” the entry – a link that leads to a site asking for the “winner’s” address, phone number, and place of employment.

About Facebook

Fifty-four of the sampled hoaxes were specifically about Facebook. These tended to fit into two contexts, both concerning Facebook making changes to its website: charging money for a previously free service, or shutting a service down. The first, about charging money, was the more popular. One example was actually rooted in a small semblance of truth. It read:

NEWS FLASH….ON LASTS NEWS IT STATED THAT FACEBOOK IS GOING TO CHARGE PEOPLE TO USE MESSAGES ON HERE..HOW MUCH THEY DIDNT SAY…WELL I THINK ENOUGH IS ENOUGH.21

The misspelled hoax warns that Facebook will charge money for its popular Messenger system, which Facebook did indeed start doing – but only for sending messages to people the sender is not already “Friends” with on the site. And even then, the charge is minimal, about $0.75.

This style of hoax has already proven detrimental to social networking websites of the past. Boyd and Ellison suggest that one of the catalysts for Friendster.com, one of the original social networking websites, losing ground to the upstart MySpace.com was a series of hoaxes posted on the site claiming that Friendster was going to start charging people to have an account and that MySpace was a free alternative.22 The claim was completely false – Friendster was, until its recent closure and reboot, entirely free – but Boyd and Ellison hold the rumor responsible for the beginning of the end of the site’s popularity. Social networking websites were less well understood at the time, so the idea that a site might start charging money perhaps did not seem as absurd.

Call-to-Action

The most consistent item in the text of the Facebook hoaxes was the “call-to-action,” which may also be the reason the hoaxes became prevalent. Within the sample, 160 of the Facebook hoaxes contained a blatant “call-to-action” statement, normally language asking the reader to click “Share” or “Like.” The statement was almost always at the end of the post and was at times combined with emotional enticement. For example, when paired with the “Prayer Request” narrative frame and the sadness emotionality trigger, the “call-to-action” usually falsely claimed that for every “Like” and “Share” on Facebook, a set amount of money would be donated to the family of a sick child. Of course, no money was being donated, but the small amount of money listed may be part of why the device works. The “call-to-action” seems to rely on low-stakes outcomes: there is little perceived downside to spreading the hoax, but there is potential upside if, by some random chance, someone really is donating money based on social media interaction. It is low risk, high potential reward. When paired with another narrative framework, such as the “Fake Virus” or the “Pedophilia Scare,” and with the anger and anxiety emotionality triggers, the “call-to-action” takes the form of a claim that one must “Share” the post to keep children safe or prevent identity theft. The “call-to-action” was prevalent in almost every narrative frame, was paired with almost every emotionality trigger, and presents, potentially, the most interesting finding of the study.
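The “low risk, high potential reward” logic can be put in rough expected-value terms. This is an interpretive sketch, not a model from the study, and every number in it is invented:

```python
# Interpretive sketch of the "call-to-action" logic; all values are invented.
cost_of_sharing = 0.01       # a click costs a few seconds, near zero
claimed_payoff = 1_000_000   # e.g., Daniels' promised $1 million
believed_chance = 0.00001    # even a tiny perceived chance of truth

expected_value = believed_chance * claimed_payoff - cost_of_sharing
print(expected_value)        # 9.99: positive, so sharing "feels" rational
```

As long as the perceived cost of clicking “Share” stays near zero, even an absurdly small belief that the claim might be true tips the calculation toward sharing – the asymmetry the call-to-action exploits.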

Discussion

This study has several clear limitations. One of the most prominent is that the “reach” of each hoax could not be quantifiably measured: it is unknown how many “Shares” and “Likes” each analyzed hoax truly received before it was debunked and posted to the database.

Future study into Facebook hoaxes would benefit most from being conducted either experimentally or with the methodologies of phenomenology. Hoaxes should be studied in a grounded manner that allows a holistic examination of their themes and actions, from the people who make them in the first place, to the people who propagate them using the built-in tools of social networks, to the very fabric of the language used to make them shocking or appealing. Facebook hoaxes contain a set of themes and contextual frameworks that intentionally spark emotion – usually anxiety, anger, and surprise. Moreover, the “call-to-action” that is all too often present in the narrative themes seems to have major potential for perpetuating the hoax.

Conclusion

The first research question asked about the nature of Facebook hoaxes and overlapping emotionality triggers found in viral content. The study progressed on the assumption that a hoax is a piece of viral content, as it fits the working definition used by Berger and Milkman. The emotionality categories originally used in the Berger and Milkman study were present in the Facebook hoaxes, with anxiety and anger the two most identified. This could be explained by the idea that active-type emotionality triggers – ones that tend to cause emotional arousal – cause viral content to spread more. That is, the authors of the Facebook hoaxes worked active-type emotionality triggers into their phrasing, intentionally or subliminally, to increase the sharing potential. Non-active, or deactivating, emotionality triggers, such as sadness, were used less often. Although sadness did appear in almost the same number of cases as anger, those cases tended to carry the same “Prayer Request” theme – often showing a picture of a sick or injured child and asking the consumer of the hoax to pass it along via the “Share” and “Like” mechanics on Facebook.

The emotionality triggers of anger, anxiety, and surprise were not only the most prevalent but also the most diverse in their themes. Anger was connected both to the previously mentioned “Prayer Request” themes, when packaged with a picture of a sick child and a narrative about how a parent had caused the afflictions, and to posts specifically about Facebook within the “Fake Virus” theme. Anxiety was also well represented in the “Fake Virus” theme, as well as in several posts specifically about Facebook changing policies, such as charging for access or erasing content.

The second research question, based on the playful nature of historical media hoaxes, does not have quite as clear an answer. Although a few hoaxes had narrative frames that seemed to fit the playful nature of the days of old, many dealt with dark and difficult topics, ranging from protecting children from pedophiles to children with debilitating medical deformities.

One example of a purely playful-style hoax was the Photoshopped image. Although there were not enough Photoshopped images in the sample to constitute a stand-alone narrative theme, they did exist. One example was an image of an African lion Photoshopped to look as if it had black fur. The language used in the hoax was simple and playful, merely noting that the lion looked interesting. There was no call-to-action asking others to share the image; it simply showed the picture, and users did the work of distributing it entirely under their own power. The lion photo was similar in nature to the hoaxes Fedler discusses in his book. Nobody risks emotional distress or stolen credit card information from looking at a picture of a fake lion.

The “Phishing and Marketing” narrative theme especially acts as an example of modern Facebook hoaxes not fitting the historical framework of hoaxes as playful outlets for blowing off steam and exercising creativity on the part of the author. Instead, they were intentionally manipulative, built strictly for economic gain through trickery. One might argue that the grand hoaxes of the late 1800s served the same purpose – back then, the intention was to get someone to purchase a newspaper under the guise that the information inside was correct. And there were petty hoaxes, such as a reporter hiding fake sources in an article to trick the competing newspaper. However, the unwitting victim was only out the cost of a newspaper at the end of the transaction and, at the very worst, might bring up some incorrect nugget of information with a friend at the pub. The victims of Facebook phishing scams risk having all sorts of intimate information stolen, potentially even items leading to identity theft and banking fraud.

The grand hoaxes of the 1800s were intended to be playful, while the Facebook hoaxes of today, particularly those under the “Phishing and Marketing” and “Prayer Request” narrative themes, appear intended only to harm – the latter clearly exploiting both emotionality triggers and narrative themes to perpetuate itself. The answer suggests that modern Facebook hoaxes do not fit the narrative frame of playfulness on which past media hoaxes were founded.

This study shows that Facebook is a platform on which hoaxes can become widespread, potentially because of the language used in writing them. The people making the hoaxes use inciting language, prey on readers’ feelings through emotionality triggers such as anger and surprise, and plead with people to pass the message along. We face a world in which people increasingly rely on social media for daily news and information. It does not seem that harmful for someone to pass along a picture of a sick child asking for prayers, or a message saying that Facebook will start charging money for accounts. But what happens when the message contains some false or dangerous remedy for the Zika virus? Or when the “Pedophilia Scare” narrative frame leads to someone being shot and killed as a form of vengeance? The conclusions of this work, describing the emotionality triggers and narrative frames used in Facebook hoaxes, should be carried forward to help find a way to curb hoaxes on Facebook before they become dangerous.


  1. Fedler, Fred. Media Hoaxes. Ames: Iowa State University Press, 1989. ↩︎

  2. VanArsdale, D. W. “Chain Letter Evolution.” http://www.silcom.com/~barnowl/chain...↩︎

  3. Bennett, Charles H., Ming Li, and Bin Ma. “Chain Letters and Evolutionary Histories.” Scientific American 288, no. 6 (2003): 76-81. ↩︎

  4. Heath, Chip, Chris Bell, and Emily Sternberg. “Emotional Selection in Memes: The Case of Urban Legends.” Journal of Personality and Social Psychology 81, no. 6 (2001): 1028-041. ↩︎

  5. Chen, A. “Reddit Apologizes for Leading Boston Bomber ‘Witch Hunt’.” Gawker. April 22, 2013. http://gawker.com/reddit-apologizes-...↩︎

  6. Jauregui, A. “Nolan Daniels Powerball Hoax: Man Posts Fake Lottery Ticket to Facebook.” Huffington Post. November 30, 2012. http://www.huffingtonpost.com/2012/1...↩︎

  7. Taylor, Jeremy. “Photo of Winning Lotto Ticket That Went Viral Is Fake.” The FW. December 3, 2013. http://thefw.com/winning-lotto-ticke...↩︎

  8. Kendall, Jason. “Nolan Daniels Speaks: I want to make Powerball hoax a positive thing.” Savannah Now. December 4, 2013. http://savannahnow.com/latest-news/2...↩︎

  9. Carlson, D. “Facebook Powerball Hoax: Nolan Daniels Tells Us the Full Story.” Social News Daily. January 23, 2013. http://socialnewsdaily.com/7046/our-...↩︎

  10. Berger, J., and K. L. Milkman. “What Makes Online Content Viral?” Journal of Marketing Research 49, no. 2 (2012): 192-205. ↩︎

  11. Boese, Alex. Hippo Eats Dwarf: A Field Guide to Hoaxes and Other B.S. Orlando: Harcourt, 2006. ↩︎

  12. Buckner, H. T. “A Theory of Rumor Transmission.” Public Opinion Quarterly 29, no. 1 (1965): 54-70. ↩︎

  13. Bordia, Prashant. “Studying Verbal Interaction on the Internet: The Case of Rumor Transmission Research.” Behavior Research Methods, Instruments, & Computers 28, no. 2 (1996): 149-51. ↩︎

  14. Shea, Matt. “People on Reddit Are Trying to Catch the Boston Marathon Bomber.” Vice. April 18, 2013. http://www.vice.com/en_se/read/reddi...↩︎

  15. Pikert, K. “Inside Reddit’s Hunt for the Boston Bombers.” Time. April 23, 2013. http://nation.time.com/2013/04/23/in...↩︎

  16. Abad-Santos, A. “Reddit’s ‘Find Boston Bombers’ Founder Says It Was a Disaster, but ‘Incredible’.” The Wire. April 22, 2013. http://www.thewire.com/national/2013...↩︎

  17. Christensen, Brett. “New Gifts For You Facebook Virus Warning.” Hoax-Slayer. August 24, 2010. http://www.hoax-slayer.com/new-gifts...↩︎

  18. Christensen, Brett. “Another Facebook Nonsense Post - Share to Help Baby With Brain Disorder or Brain Tumor.” Hoax-Slayer. January 24, 2012. http://www.hoax-slayer.com/baby-brai...↩︎

  19. Christensen, Brett. “Thierry Mairot Wants to Talk to Children About Sex.” Hoax-Slayer. September 23, 2010. http://www.hoax-slayer.com/thierry-m...↩︎

  20. Christensen, Brett. “Dell Computer Giveaway Survey and Like Farming Scam.” Hoax-Slayer. July 24, 2013. http://www.hoax-slayer.com/dell-comp...↩︎

  21. Christensen, Brett. “Optional Facebook Trial Allows Users to Pay to Send Messages to Non-Friend Inboxes.” Hoax-Slayer. April 9, 2013. http://www.hoax-slayer.com/facebook-...↩︎

  22. Boyd, Danah M., and Nicole B. Ellison. “Social Network Sites: Definition, History, and Scholarship.” Journal of Computer-Mediated Communication 13, no. 1 (2007): 210-30. ↩︎

About the Author: 

Dr. Jeffrey K. Riley is a journalism professor at Florida Gulf Coast University, where he specializes in teaching news literacy. His two research interests are the development of insular ideologies in online forums, and the history of press coverage of 1960s and 1970s NASA. He is a former newspaper reporter from the central Florida area.

Volume 1, Issue 2
