Repelling Fake News
Data Scientist Jon Christiansen Explains How to Spot and Deter Misinformation
Dr. Jon Christiansen is a data scientist trained in Computational Analytics at Carnegie Mellon. He holds a Bachelor’s and Master’s in Economics, as well as a PhD in Higher Education. He currently teaches Data Mining at Charleston Southern University and serves as Sparks Research’s Chief Intelligence Officer.
We had the opportunity to sit down with him to talk about the rise of deepfake technology and how it relates to the spread of fake news, how common signs can be spotted, and what companies can do to ensure that they’re responsibly spreading information.
You’ve stated that one factor that leads to the spread of fake news is confirmation bias. Can you explain this?
Confirmation bias is our tendency to interpret new information as confirmation of beliefs we already hold. It basically lets us borrow from what we already know or believe, and then add to it. The more ambiguous something is, the easier it is for that bias to take hold. When people are given too much room to speculate about a concept they know very little about, it can be really easy for them to get carried away, and difficult for them to come back from that.
Have you noticed that this phenomenon affects some demographics more than others?
There have been a lot of studies on that, and most conclude that older demographics, people who are 65 and up, are more susceptible to fake news. Facebook has historically been a major channel for spreading fake news, and with younger demographics turning to other platforms like Snapchat and Instagram, that older age bracket tends to interact with it more frequently.
People on either the far left or the far right of the political spectrum also tend to be more at risk, and the numbers rise especially with how conservative someone is. That said, we have to ask whether they share more fake news because of those prior beliefs, or whether more fake news targets these groups and is simply more accessible to them. It really could go either way.
What can the everyday person do to combat the spread of fake news in their personal networks? What major flags should they be aware of?
First and foremost is critical thinking. You want to start with a simple question: “What does this want me to do?” A fake news creator has one of three goals: to make you click, share, or feel something. The source gets money from clicks and reach from shares, and if it can make you feel something or change a belief, that adds more fire to the message and prompts you to take some form of action.
Before spam filters were so widely used, we as a society got really good at sniffing spam out for ourselves. Many of the same rules we used to spot spam apply to vetting fake news: excessive exclamation marks, all caps, bad grammar, and stolen or photoshopped images are a few of the red flags.
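As a rough illustration of how a few of those surface-level red flags could be scored programmatically, here is a minimal Python sketch. The thresholds and clickbait phrases are hypothetical, chosen only to make the idea concrete; real filters learn these patterns from data rather than hand-tuned rules.

```python
import re

def red_flag_score(text: str) -> int:
    """Count crude, surface-level spam-style signals in a piece of text."""
    score = 0
    if text.count("!") >= 3:
        score += 1  # excessive exclamation marks
    words = re.findall(r"[A-Za-z]{3,}", text)
    if words and sum(w.isupper() for w in words) / len(words) > 0.25:
        score += 1  # heavy use of ALL CAPS
    if re.search(r"you won't believe|doctors hate|share before", text, re.I):
        score += 1  # stock clickbait phrasing (hypothetical phrase list)
    return score

print(red_flag_score("SHOCKING!!! You won't believe what THEY found!"))  # -> 3
```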
Deep image analysis tools can help determine whether an image is fake. When we take a photo, metadata is saved with it. You can go to a tool called FotoForensics, drop in an image URL or upload an image file, and it analyzes the image in a matter of seconds, returning anywhere from 50 to 300 data points about it. If it was copyrighted by an artist or photographer, you know it has a certain level of credibility. You can see whether there are any hidden pixels, what processing software was used, and the time of day it was created.
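For the metadata side of that, here is a minimal sketch of pulling EXIF data out of an image with Python’s Pillow library. The filename is a placeholder, and a service like FotoForensics goes far beyond this, but it shows the kind of fields a photo can carry:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def print_exif(path):
    """Print whatever EXIF metadata survives in the file:
    camera model, capture time, editing software, and so on."""
    exif = Image.open(path).getexif()
    if not exif:
        # Stripped metadata is itself a signal: re-saves and screenshots lose EXIF.
        print("No EXIF metadata found.")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

print_exif("suspect_photo.jpg")  # hypothetical filename
```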
With more anti-spam software integrated into our technology, would you say that people are less aware of how to deal with it?
I would say so, yes. The beautiful thing about AI is that it seems to make our lives a lot easier. It takes a lot of noise out of our lives so that we can dedicate more time to the things that actually matter. We’re wired for the law of least effort, so when a new phenomenon like the spread of fake news suddenly appears, we have a tendency to believe it would already have been filtered out if it were problematic. Because of this, we’re more likely to just assume it’s real. We’re entering a new stage of spam, but it’s at a much more technologically advanced level and right in our faces, since we’re constantly engaging with it. And when someone in our personal social network shares fake news, another layer of credibility is added to it.
How do you think the rise of deepfake technology will affect the consumption of fake news moving forward?
It certainly adds another layer, doesn’t it? Although the movements in a lot of deepfake videos seem off when you pay attention, our brains are also wired to fill in the gaps we see. We process images a lot faster than text, and repeated impressions can lend them more credibility. If we’re constantly scrolling through our Facebook or Instagram feeds and see a post multiple times, that repetition can have a powerful effect and lead us to assume it’s real when we come across it again.
That said, we can also recognize someone we haven’t seen in 10 years when we pass them at a grocery store. We can point out what’s different about that person pretty quickly, and run through little cognitive tests that help us determine whether it’s who we think it is. When we talk about celebrities or major political figures like the US president, we naturally run through those same tests, and because of this, it’s usually not too hard to tell if something is off. When an image is superimposed, there tend to be slight irregularities that are pretty obvious when you look for them. There might be a small disconnect at the neck, or at the lips when they move.
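Those splicing irregularities are also what error level analysis (ELA), the technique behind tools like FotoForensics, makes visible. Below is a minimal sketch with Pillow, assuming a JPEG input and a hypothetical filename; real forensic tools layer far more sophisticated checks on top of this:

```python
from PIL import Image, ImageChops, ImageEnhance

def error_level_analysis(path, quality=90, out_path="ela.png"):
    """Re-save the image as JPEG at a known quality, then diff against
    the original. Spliced or superimposed regions often recompress
    differently, so they stand out as brighter patches in the result."""
    original = Image.open(path).convert("RGB")
    original.save("_resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("_resaved.jpg")
    diff = ImageChops.difference(original, resaved)
    # Amplify the faint residual so differences are visible to the eye.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    ImageEnhance.Brightness(diff).enhance(255.0 / max_diff).save(out_path)

error_level_analysis("suspect_photo.jpg")  # hypothetical filename
```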
With so much misinformation in circulation, how can legitimate publications help to instill more trust in their readers?
From a technology standpoint, you have to fight algorithms with other algorithms. These can be embedded within individual websites, and used to weed out fake imagery and illegitimate sources.
Of course, it’s important to cross-check sources yourself as well. It’s easy to create and share fake news, so you need to make sure that you not only have the technology in place to filter out the common patterns, but also a team that knows how to think critically, fact-check, and understand the position they’re in to responsibly drive information.
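As a toy illustration of “fighting algorithms with other algorithms,” here is a minimal text-classification sketch in Python using scikit-learn. The headlines and labels are invented purely for demonstration; a production filter would train on a large labeled corpus and combine many more signals with human fact-checkers.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy headlines, for illustration only.
headlines = [
    "SHOCKING!!! You won't believe this miracle cure",     # flag
    "City council approves budget for new transit line",   # pass
    "Scientists STUNNED by this one weird trick!!!",        # flag
    "Quarterly earnings report shows modest growth",        # pass
]
labels = [1, 0, 1, 0]  # 1 = send to human review, 0 = let through

# TF-IDF over unigrams and bigrams feeds a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["You won't believe what THEY found next!!!"]))  # -> [1]
```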
References: sparksresearch