Should Psychographic Profiling Be Banned?

Psychographic profiling is a moral gray area, but can authorities begin policing it?

1 Aug

Data use has a bad reputation. From public disasters like the hacks that exposed millions of people’s details to the secretive and unsettling potential uses by companies like Palantir and Google, the public has a major trust issue.

Although the vast majority of organizations use and store data responsibly, it is unfortunately the well-publicized failures that grab the headlines and the imagination. Perhaps the most disturbing elements of data use, however, are the gray areas, where the morality of how data is used is questionable and the practice seems legal only because regulators have yet to catch up.

One of the most widely publicized examples of this is the use of psychographic profiling data.

Psychographic profiling is the practice of directing messaging at a group of people based on their actions, likes, and personality, rather than the demographic data that has traditionally been used. The practice has become widespread among companies and political campaigns over the past few years, driven primarily by the huge amount of psychographic data generated on social media. In the past, collecting and collating this kind of data required companies to make leaps of inference or to target based on broad-brush assumptions. A good example is the targeted advert for a pair of shoes that appears after you have spent 10 minutes browsing a shoe website; strictly speaking this is a recommendation engine, and only a very basic form of the idea.

However, as the amount of data available about everybody through their social media presence has grown, so has the power of psychographic profiling. From the thousands of likes, comments, and status updates that people post every year, it is now possible to build profiles that reveal personality traits to companies and campaigns that the targeted person may not even recognize in themselves.
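To make the mechanics concrete, below is a minimal, hypothetical sketch of how such a profile might be built: a seed group of users takes a personality survey, a model learns the relationship between their likes and their scores, and the model then scores everyone else from their likes alone. The page names, data, and labels are invented purely for illustration; real systems train on millions of users and far richer features.

```python
# Minimal sketch of trait inference from social media likes.
# All data here is made up; real profilers work with thousands of
# pages and traits, but the basic mechanics are similar.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows are users, columns are pages a user has liked (1 = liked).
PAGES = ["gun_rights", "meditation", "military_history", "modern_art"]
likes = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 1, 1],
])
# 1 = scored high on an authoritarianism questionnaire, 0 = low.
# In practice these labels come from a seed group who took a survey.
high_authoritarian = np.array([1, 1, 0, 0, 1, 0])

model = LogisticRegression().fit(likes, high_authoritarian)

# Score a new user who never took the survey: their likes alone
# yield a probability estimate for the trait.
new_user = np.array([[1, 0, 1, 0]])
print(model.predict_proba(new_user)[0, 1])  # P(high authoritarian)
```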

A prime example is a study by the Online Privacy Foundation, which ran a test on people with high authoritarian tendencies (who generally lean further right) and people with low authoritarian tendencies (who tend to lean further left), with the groups themselves sorted using publicly available data. Both groups were asked whether they agreed with the statement ‘with regards to internet privacy: if you’ve done nothing wrong, you have nothing to fear’; 25% of the low-authoritarian group agreed, compared with 61% of the high-authoritarian group.

Both groups were then shown differently worded adverts about online privacy: the high-authoritarian group saw the anti-surveillance advert ‘They fought for your freedom. Don’t give it away!’, while the low-authoritarian group saw the pro-surveillance message ‘Crime doesn’t stop where the internet starts: say YES to state surveillance.’ Both tailored adverts performed significantly better with, and were more likely to be shared by, their target group.
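Once a trait score exists, the delivery step is almost trivial: route each user to the message variant matched to their predicted psychology. Here is a hedged sketch of that routing, reusing the trait probability from the sketch above and the ad copy and group assignments as described in the study; the threshold and function name are invented for illustration.

```python
# Hypothetical sketch of the targeting step: given a trait score,
# picking which advert a user sees is simple routing. The ad copy
# mirrors the Online Privacy Foundation study described above.
ADS = {
    "high_authoritarian": "They fought for your freedom. "
                          "Don't give it away!",
    "low_authoritarian":  "Crime doesn't stop where the internet "
                          "starts: say YES to state surveillance.",
}

def pick_advert(p_high_authoritarian: float, threshold: float = 0.5) -> str:
    """Return the ad variant matched to the user's predicted trait."""
    if p_high_authoritarian >= threshold:
        return ADS["high_authoritarian"]
    return ADS["low_authoritarian"]

print(pick_advert(0.82))  # a likely high-authoritarian user
print(pick_advert(0.17))  # a likely low-authoritarian user
```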

The phenomenon has become increasingly dangerous, though, especially as it is something Facebook actively promotes to political campaigns. The use of this data by analytics companies, including the controversial Cambridge Analytica, has been widely credited with pushing Donald Trump’s narrow electoral college victory over the line. So the question is: should this remain legal, or should it be banned?

In the 1960s and 1970s companies began advertising using subliminal messaging, the most famous case being Hūsker Dū?, a board game advertised in 1974, which prompted the UK, Canada, Australia, and Europe to ban subliminal adverts. Regulators recognized that it was an attempt to emotionally manipulate people into buying products using fairly basic visual or audio prompts. The use of psychographic data takes the same broad concept and creates the same issues: it makes people act in a certain way without their realizing it. As we saw with the Online Privacy Foundation’s experiment, effective use of this data can almost completely change the way people view a particular subject. Given that kind of power, it is no surprise that it is having a huge impact on the world.

In fact, the two most surprising and unexpected political results of 2016 have been largely put down to its use, with Cambridge Analytica paid millions by the Trump and Leave campaigns in the US and UK to apply these methods to the voting public. It should come as no shock that, now these messages are no longer being shown, Donald Trump’s approval rating sits at 38%, while his disapproval rating has climbed to 56% from 41% at his inauguration. Similarly, in the UK, 45% of voters now believe that voting to leave the EU was a mistake, compared with 43% who believe it was the right decision.

There is a case for controlling the messaging used to target specific people in the same way as TV ads, but the reality of the delivery systems for these adverts makes that almost impossible at present. Facebook ads, for instance, are known as ‘dark ads’ because they are visible only to the narrow group they target and are often unreported by campaigns, making them incredibly difficult to police.

For the campaigns, though, adopting these techniques was a smart and ultimately unsurprising move given the increasingly powerful role of data in politics; regardless of whether it is morally right or wrong, it worked. It is also a complicated subject: political campaigns have targeted specific voters ever since the party political system was established, so how do you draw the line between smart profiling and psychometric manipulation? How do you define who can and cannot use it? Companies have been using these methods for years, and to some extent almost every advert you see online today is tailored to an action you have taken, so can you simply block political parties from using it? And what happens with super PACs, which are not part of a specific party but support particular candidates?

Psychographic profiling is certainly a moral gray area, and the reality is that attempts to police it will be even more complex. But as we have seen, somebody needs to start, or we could end up in a very dark situation that simply throws more fuel on the fire of public distrust around data.
