A look at the flaws in this innocent way to experiment with your pricing.
Pricing is all about experimentation. Some of the best pricing strategies in the world started from well-thought-out experiments. When you take a look at how Facebook charges for its advertisements or how Amazon charges for its web services, you get a feeling that the pricing is unique and makes a lot of sense. No way these guys came up with that pricing model on day 1!
The main idea behind the experimentation is to find out how your customers react to different prices in different situations.
Unfortunately, the first thing that comes to mind when you think of experiments is the much loved or much ridiculed A/B tests. Now, pricing is a sensitive topic, both for the business owner and the customers. Be it an online store selling vegetables, toys or photographs, pricing can be the difference between “Oh, I should get that” and “Whaaat, that’s way too much!”. Needless to say, you need to be very careful if you are considering some form of price experiments.
There has been a lot of chatter about it on Quora with mixed opinions, and there have been a few success stories too. But every business is different, and you will never know all the caveats hiding in someone else's story.
There are a lot of website A/B testing tools too, like VWO and Optimizely, that provide easy, out-of-the-box solutions. These solutions are the best out there for experimenting with the marketing and visual aspects of your website. But when you have a go at pricing, things start to change. What could go wrong, you ask?
I agree, it's always best to begin with the easiest version (the A/B test) when starting an experiment, but let's dig a little deeper and understand what it really entails.
What it really is…
A/B tests were inherently designed to test which of two (or more) comparable items in a scenario, with all other variables kept exactly the same, is better for a desired result.
The key parts in the definition above are comparable items, all other variables are kept exactly the same, and for a desired result. Shop owners or business managers often forget at least one of these key phrases while designing tests. This brings us to the major flaws A/B testing has when prices are what's being tested.
Are you sure about what you are measuring?
Usually, A/B testing is used to test marketing communications, landing page banners or even the colour of a button on your landing page! The options being tested bear similar costs to the tester, so the tester is indifferent to which option wins. That is not the case in pricing. Different prices give different rewards (read revenue or conversions or margins). So, even if $5 is outperforming $50 by five times, it may not necessarily be the right choice.
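A quick back-of-the-envelope calculation shows why. The conversion rates below are hypothetical numbers chosen just to illustrate the point:

```python
# Hypothetical scenario: the $5 price converts five times better than $50,
# yet earns less per visitor. These rates are made up for illustration.
conv_at_5 = 0.05    # 5% of visitors buy at $5
conv_at_50 = 0.01   # 1% of visitors buy at $50

revenue_per_visitor_at_5 = 5 * conv_at_5     # $0.25 per visitor
revenue_per_visitor_at_50 = 50 * conv_at_50  # $0.50 per visitor

print(revenue_per_visitor_at_5, revenue_per_visitor_at_50)
```

The "losing" price by conversions earns twice as much per visitor, which is exactly why the success metric matters.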
What you need to do is have a fixed success metric in mind before starting the experiment. That could be an increase in revenue or conversions or just an average increase in margins and profits. If you do change this metric midway, you will need to start again.
There’s this other thing too. Let’s say the test is done and you stick to the winning price, you might soon realise that the situation has changed as Christmas is here and people are on a spending spree. So, what the ever-changing markets are doing is changing the variables of your experiment, thus screwing it up.
The important piece of the puzzle here is to embrace the changing variables in the test and plan the tests such that you include the effects of those variables (read market changes) when you are analysing the results. This essentially means that you are running many A/B tests simultaneously and using the results when you encounter similar market conditions in the future. So you are ready with your findings when the next Christmas season or the next weekend arrives.
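In practice, this just means tagging every impression with the market condition it happened under and picking a winner per segment rather than overall. A minimal sketch, with entirely hypothetical counts:

```python
# Hypothetical A/B results, segmented by market condition.
# Key: (segment, price) -> (conversions, impressions). All numbers invented.
summary = {
    ("regular", 10): (2, 10),
    ("regular", 12): (1, 10),
    ("holiday", 10): (3, 10),
    ("holiday", 12): (4, 10),
}

# Keep the best (price, conversion_rate) seen so far for each segment.
winners = {}
for (segment, price), (conversions, impressions) in summary.items():
    rate = conversions / impressions
    if segment not in winners or rate > winners[segment][1]:
        winners[segment] = (price, rate)

print(winners)
```

With these made-up numbers, $10 wins in regular weeks while $12 wins during the holidays, which is the whole point: one global "winner" would have hidden that.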
Do you really want to lose money while experimenting?
Secondly, A/B testing, by its nature, takes a long time to reach any kind of statistical significance. It requires a lot of runs to determine which of the test items is doing better. This means you would need more and more users to actually make a purchase before reaching a sample size that is even slightly satisfactory. But with online conversion rates hovering in the low single digits, it's a big ask to get a conclusive answer in a month or two.
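You can see the scale of the problem with a standard two-proportion sample-size formula. The baseline and target conversion rates below are assumptions picked to match the "low single digits" reality:

```python
import math

def sample_size_per_arm(p1, p2, z_alpha=1.959964, z_beta=0.841621):
    """Approximate visitors needed per price arm to detect a shift in
    conversion rate from p1 to p2 (two-sided alpha=0.05, power=0.8),
    using the classic two-proportion z-test formula."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Assumed scenario: 2% baseline conversion, hoping to detect a lift to 2.5%.
n = sample_size_per_arm(0.02, 0.025)
print(n)  # on the order of ~14,000 visitors *per price*
```

At a 2% conversion rate, that is roughly fourteen thousand visitors per price just to tell 2% apart from 2.5%, and smaller effects blow the number up further.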
Now imagine you are testing four prices: $1, $2, $3, $4. Let's say after 4 months you find out that $3 is converting the most users. Here is the pitfall: you were showing sub-optimal prices to your users 75% of the time, thereby losing out on revenue for the whole time you were "testing". Plus, you will never get a buzzer that says the test is complete, so you won't even have a way to know for sure.
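This is the classic setting where a multi-armed bandit beats a fixed-split A/B test: it shifts traffic towards the better-performing price while the experiment is still running, instead of splitting evenly for months. A minimal epsilon-greedy sketch, with invented conversion rates for the four prices:

```python
import random

random.seed(42)

PRICES = [1, 2, 3, 4]
TRUE_RATES = [0.010, 0.015, 0.030, 0.012]  # hypothetical true conversion rates
EPSILON = 0.1      # fraction of traffic reserved for exploration
ROUNDS = 50_000

pulls = [0] * len(PRICES)  # how often each price was shown
wins = [0] * len(PRICES)   # how often it converted

def choose_arm():
    # Explore a random price with probability EPSILON,
    # otherwise exploit the price with the best observed rate.
    if random.random() < EPSILON:
        return random.randrange(len(PRICES))
    rates = [wins[i] / pulls[i] if pulls[i] else 0.0 for i in range(len(PRICES))]
    return rates.index(max(rates))

for _ in range(ROUNDS):
    arm = choose_arm()
    pulls[arm] += 1
    if random.random() < TRUE_RATES[arm]:  # simulate the visitor's decision
        wins[arm] += 1

best = pulls.index(max(pulls))
print(f"${PRICES[best]} received the most traffic:", pulls)
```

Rather than wasting 75% of impressions on losing prices for the full duration, the bandit funnels most traffic to $3 once the evidence points that way, while still exploring enough to notice if the market shifts.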
To summarise, A/B testing is better suited for places that generate a lot of traffic, but don’t ask for large commitments (think landing pages and SEO).
Therefore, using standard A/B testing may not be the best idea for pricing experiments. I'd say a more scientific, smarter approach would do the trick. It might involve constantly monitoring the experiment and optimising the prices according to the changing variables.
In the end, whatever happens, you should never stop experimenting with prices, because the one thing we know for sure is that the market trends, they are a-changin'. Here's a little quote to keep you motivated.
“Take the time to run the right ongoing experiments, even at the risk of a little pricing plan nuttiness in the present.” — Jason Lemkin (SaaStr)