
Crobox Blog

A peek into the world of psychology through the lens of eCommerce professionals.

Online Marketing Psychology: How to truly understand your customer

Jelle Fastenau | May 30, 2018
[Banner image by Antonino Visalli]

For psychologists, it’s common knowledge that a large chunk of our behavior can be attributed to the subconscious mind. In fact, it’s estimated that 95% of our brain activity is driven by hidden motivations we aren’t even aware of.

Of course, marketers have been familiar with this concept for quite some time as well. Equipped with detailed knowledge of what marketing tactics tap into human drives, we can make smarter decisions in our communications and design.

But here’s the crux: how can you know what really makes your customers tick when they aren’t even aware of their true motivations themselves?

 

People are inherently bad at explaining behavior

Just because most decisions are made on autopilot doesn’t mean we can’t try to understand them. However, simply asking someone to delve into their own brain to provide you with an explanation for what they just did will most likely result in the wrong answer.

Which brings us to the big, fat elephant in the room: humans are fundamentally bad at explaining why they do what they do. We don’t even consciously pay attention to the vast majority of our actions. And when pressed for an answer, our experiences are perceived through our own personal lens, making it hard for us to give an objective account of events.

Let’s be clear, there’s rarely only one factor contributing to a decision. However, bias and personal preference generally make us only see the things we want to see.

To illustrate that thought, let’s consider the following two examples that cloud our self-evaluation (just two for now, but we could go on).

 

Confirmation bias

While we’re not always exactly sure why we do the things we do, we like to at least think we’re doing it for the right reasons. This desire is so strong that confirmation bias drives us to look for evidence that supports our past decisions while blocking out contradicting information that is less in favor of our choices.

Using this type of selective reasoning, you can easily justify your decision to buy a brand new smartphone (“It takes such great pictures!”), while ignoring the fact that there’s nothing wrong with your current model, or that you could have spent the money better elsewhere.


 

Attribution bias 

To get an even clearer picture of how subjective our observations are, the attribution bias can’t be excluded from the conversation. Imagine yourself walking down the street and coming across a stranger shouting on his phone. An easy conclusion would be that this person must have a bad temper and likely often acts this way.

Conversely, when it’s our own behavior that’s being called into question, we’re probably not so quick to single out an action as an accurate reflection of our complete personality. What if we just received some bad news, like losing a job or a relationship ending? The fact that we would not give a stranger that same benefit of the doubt is called the fundamental attribution error.

Moreover, the closely related self-serving bias dictates that we use those situational explanations only when it suits us. Positive events are always down to internal factors, whereas negative events are blamed on external ones.

Won a football match? Congratulations, you must have been the better team. Did you lose? Obviously, the referee was against you the whole time - how on earth did he not see that was a penalty?

 


 

Measure behavior, not attitudes

With so many biases playing tricks on our minds, it’s hard for marketers to know if any of the data gathered from user surveys can be trusted. This question has long been up for debate in the field of behavioral science as well, with two general schools of thought.

 

Surveys have an attitude problem

In 1980, Ajzen & Fishbein developed an attitudinal questionnaire framework built for measuring people’s thoughts and feelings about various topics to predict future behavior. While their tool was widely adopted by behavioral scientists, the attitudinal approach has not been without criticism. If you’ve been paying attention so far, you can probably guess why.

 


 

For starters, one of the major downsides of using intangible concepts like attitudes, intentions, and beliefs is that they’re highly subjective forms of measurement. What one person experiences as an 8 out of 10 level of excitement could be just a 5 or a 6 for the next. There’s really no telling which number is right, nor is there any way to verify whether a person is accurately describing how they feel.

And it gets worse. As it turns out, attitudinal questionnaires generally have little predictive ability. Attitudes explain only 40% to 50% of people’s intentions, and a mere 19% to 38% of their behavior.

This incongruence is often blamed on social desirability: the urge respondents feel to select the answers they think researchers are hoping to hear, or to provide only the information that makes them look good - even in completely anonymous settings.

But let’s assume for the moment that respondents aren’t actively trying to sabotage your experiment - most of the time, they aren’t. Even in real life, intentions won’t always match behavioral outcomes.

Just consider how many new year’s resolutions are broken within just a few weeks. No matter how genuine our intentions may be, they’re never a guarantee of future behavior.

 

The behavioral alternative

So, instead of focusing on what people say, why not focus on what they do? A fierce advocate of this viewpoint was Patricia J. Labaw, who described attitudes as “mere surface manifestations of larger structural movements, beyond the control or even the consciousness of individuals.”

 


 

Enter the behavioral framework (1980). By throwing all subjective measurements out and focusing instead on objective indicators such as past behavior, environmental factors, and knowledge, Labaw’s alternative was designed to produce more trustworthy data.

Furthermore, behavioral questionnaires turned out to be easier to implement as well, solving not only past issues with social desirability, but also other common survey problems such as respondent fatigue and decreased question comprehension.

 

Psychographic data-analysis

While this shift already marked a significant step in the right direction, it still leaves us with one big question: now that we have the right data, how can we extract meaning from it?

Consumer Psychoanalysis

It’s no secret that good market researchers also need to be able to think like a psychologist. At one point, one Harvard Business School professor started advocating for extensive one-on-one interviews with individuals to pry the hidden meanings from the metaphors they used to describe their feelings.

Sure, this near-Freudian approach might sound a little extreme - not to mention impractical in the internet age. But he wasn’t that far off from the average job description of a modern consumer psychologist.

With plenty of technological tools at our disposal, shoppers no longer have to lay down on a sofa in a dusty psychiatrist’s office in order to be understood. Instead, the psychoanalytical part relocated to the digital realm of numbers and statistics.

 

Behavioral insights for eCommerce

A/B testing

As we are all well aware by now, more and more consumers are doing their shopping online, which calls for different research methods. One of the most widely used practices for measuring consumer behavior on webshops is called A/B testing.

Common A/B experiments work by running two versions of a website simultaneously: usually a regular one (the control condition) and a slightly altered version. For example, imagine two very similar versions of a product detail page, differing only in the message displayed underneath the product image.

During the experiment, all visitors are randomly divided between the two versions. This experimental design allows us to make simple, testable hypotheses about which persuasive message will work best. Whichever version has generated the most conversions, sales, or click-throughs at the end shows whether we were right.
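To make the “whether we were right” part concrete, here is a minimal sketch of how such a result is typically evaluated: a two-proportion z-test comparing conversion rates. All numbers are invented for illustration; this is one common evaluation method, not necessarily the one any particular tool uses.

```python
import math

def ab_test_z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 120/2000 conversions for control, 160/2000 for variant
z = ab_test_z_score(120, 2000, 160, 2000)
print(round(z, 2))  # → 2.48; |z| > 1.96 means significant at the 5% level
```

Here the variant’s lift clears the conventional 1.96 threshold, so we would conclude the altered message genuinely performed better rather than winning by chance.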

Naturally, A/B testing offers several advantages, such as:

  • The ability to test in a natural environment, which largely removes social desirability effects.
  • Measuring actual behavior, making the results as objective as possible.

However, that doesn’t mean that A/B testing is perfect. As the name implies, it only allows you to test out two alternatives at a time. If you’re absolutely keen on testing all the variations you can think of, it will take you a very long time.

 

Introducing AI-powered smart notifications

 


 

Luckily, there are ways to make your customer research more effective with AI. In A/B testing, you simply choose beforehand which variations you want to test. But what if AI could make those decisions for you in real time?

Whereas A/B testing is a static approach, the possibilities with AI are endless. Not only does it allow you to simultaneously test as many different copy variations as you want, it also factors in a range of multivariate conditions contributing to a decision.

So, when combining pre-existing knowledge of (subconscious) psychology with the power of machine learning, a website can find the best psychological tactics to use, the optimal copy to express them, and the best-performing format.
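Crobox’s actual models aren’t public, but the idea of testing many variations while learning in real time is well captured by a classic multi-armed bandit strategy. Below is a minimal epsilon-greedy sketch with invented variant names and tallies, as an illustration of the general technique rather than the product’s implementation.

```python
import random

def epsilon_greedy(stats, epsilon=0.1):
    """Pick a message variant: usually the best performer so far,
    occasionally a random one to keep exploring."""
    if random.random() < epsilon:
        return random.choice(list(stats))                 # explore
    # exploit: highest observed conversion rate (guard against zero shows)
    return max(stats, key=lambda v: stats[v]["conv"] / max(stats[v]["shown"], 1))

# Hypothetical running tallies for three copy variants
stats = {
    "social_proof": {"shown": 500, "conv": 40},
    "scarcity":     {"shown": 480, "conv": 30},
    "novelty":      {"shown": 510, "conv": 25},
}
variant = epsilon_greedy(stats, epsilon=0.0)  # pure exploitation for the demo
print(variant)  # → social_proof, the best observed rate so far
```

Unlike a fixed A/B split, the tallies update after every impression, so traffic automatically drifts toward the winning message while a small exploration rate keeps testing the alternatives.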

 

Smart Notifications Applied

[Images: Relevance Prediction and Promotion Prediction]

Crobox uses this approach to optimize webshops’ performance. Our AI first uses relevance prediction to choose the products that meet certain specifications (e.g., the most popular products use social proof messaging). After this, it goes through a process of promotion prediction, testing multiple copy variants to find the optimal combination.

The great thing about using AI is that it automatically chooses the best message/product/format combination based on the shopper’s location, device usage, or on-site behavior, greatly reducing the cumbersome task of A/B testing every variation.

Sound cool? Check out our recent interview to hear more about the technology behind it.

 

Applying Customer Intelligence

So, what does all this data ultimately tell you about your consumer? This is where psychology comes back into play.

For instance, if you notice a big increase in conversion rates for social proof messages, it’s likely that your audience values the opinion of others. For them, knowing what’s popular will be a key motivator for buying behavior. Corresponding conclusions can be drawn from shoppers that respond to messages using scarcity, novelty, authority, etc.

From there, you’re able to work towards a more accurate understanding of your audience. So, instead of just saying “We know that our users favor version A of our website,” you can conclude that shopper x prefers message y, in situation z.
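As a rough illustration of that interpretive step, here is a small sketch that tallies conversion rates per message tactic from a hypothetical impression log, to see which psychological motivator resonates most with an audience (all event data invented):

```python
from collections import defaultdict

# Hypothetical impression log: (message_tactic, converted?)
events = [
    ("social_proof", True), ("social_proof", False), ("social_proof", True),
    ("scarcity", False), ("scarcity", True),
    ("authority", False), ("authority", False),
]

totals = defaultdict(lambda: [0, 0])      # tactic -> [impressions, conversions]
for tactic, converted in events:
    totals[tactic][0] += 1
    totals[tactic][1] += int(converted)

rates = {t: conv / shown for t, (shown, conv) in totals.items()}
best = max(rates, key=rates.get)
print(best, round(rates[best], 2))  # → social_proof 0.67
```

A high rate for social proof messages would support the conclusion that this audience is strongly motivated by what others think; the same logic applies to scarcity, novelty, or authority.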

 

Future implications

In the long run, smart notifications should prove to be helpful to both consumers and organizations. By making the process of customer research easier and more accurate, webshops can gain a better understanding of their audience and will be better at providing them with the information they’re looking for.

Moving forward, keep the following in mind:

1. Transparency

There has been much debate about user privacy, with the new GDPR legislation as the most recent culmination.

Luckily, behavioral customer research generally requires none of your users’ personal information. As with most effective data-analysis practices, small, unobtrusive measurements of on-site behavior are enough to make precise predictions.

Yet, no matter how small the amount of data you gather, people still generally don’t like the idea of being watched or manipulated. So, for the most ethical application of this strategy, be transparent about what you do. Provide a notification to inform your visitors you’re running a test to measure the site’s performance.

2. Honesty

And more importantly, make sure your notifications are truthful! Ensure that the items displayed as most popular actually are the most popular - and so on. The key to proper consumer research is discovering the information your customers are most interested in, not deceiving them with false communication.

 

Key takeaways

And there you have it! To summarize our main points:

  • People are generally bad at understanding and explaining their own motivations. For more reliable consumer research, choose behavioral measurements over attitudinal ones.
  • Technology can help you make more accurate predictions of consumer behavior, while psychological insight helps you interpret the data.
  • Conduct your research in an ethical way: be transparent about the data you collect and truthful in the claims you make.

 

Curious to find out how Crobox can use smart notifications to help you better understand your customers? Download our whitepaper to learn more!


 
