Ever wondered whether that uplift in conversion did anything to your bottom line? Introducing Revenue Impact


Have you ever wondered whether optimizing for conversion rate is actually driving revenue? Does the constant push for conversions really mean you are getting the best average order value (AOV) and the best margins you can?

Or have you ever been told by a vendor that their product or test idea delivered a 40% uplift in conversions, only to be disappointed when you cross-referenced that claim against your bottom-line revenue?

Here at Qubit, we understand that conversion rates are not the full story. That's why our developers have been building a brand new way of measuring the true impact of your tests in our Deliver platform. To determine the success of a test, we will now provide revenue uplift reporting, uplift probability, and confidence intervals as new data points in the test results interface.

Revenue uplift is a complicated statistical measurement. Many of the algorithms commonly used in this market are not statistically rigorous enough for us, so we have taken the time to figure out the most robust way of determining revenue impact. (Our data scientist Adam Davison will be writing a follow-up post on our blog to explain this in more detail.)

How does it work?

In the user interface, it will now be possible to select two new goals: Revenue Per Customer (RPC) and Revenue Per Visitor (RPV).

RPC is calculated from the number of orders and the order value for each purchasing visitor. Looking at RPC is recommended where the objective of the experiment is to maximise order value or lifetime value.

RPV takes all visitors into account, so it effectively factors in the conversion rate as well. RPV is recommended where the objective of the experiment is to maximise revenue across your total user population.
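The difference between the two goals can be sketched in a few lines. This is an illustrative calculation only, assuming a simple list of per-visitor order totals where 0.0 means the visitor did not purchase; it is not Qubit's implementation.

```python
def revenue_per_customer(order_totals):
    """RPC: total revenue divided by the number of purchasing visitors."""
    purchasers = [v for v in order_totals if v > 0]
    return sum(purchasers) / len(purchasers) if purchasers else 0.0

def revenue_per_visitor(order_totals):
    """RPV: total revenue divided by all visitors, purchasers or not."""
    return sum(order_totals) / len(order_totals) if order_totals else 0.0

# Hypothetical data: 5 visitors, 2 of whom purchased.
visitors = [0.0, 0.0, 50.0, 0.0, 30.0]
print(revenue_per_customer(visitors))  # 40.0
print(revenue_per_visitor(visitors))   # 16.0
```

Note how the same traffic gives an RPC of $40 but an RPV of $16: RPV is dragged down by non-purchasers, which is exactly why it captures conversion rate as well as order value.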

Getting rigorous with statistics: how do we measure this?

Measuring revenue from a statistical standpoint is very complicated. Take an extreme example:

Variation A: 1 x $10,000 sale

Variation B: 10,000 x $1 sales

Which one wins?

While a conversion is a binary event (yes or no), revenue depends on the underlying distribution of order amounts per user, so the statistics applied are fundamentally different and must be treated as such. We therefore use a historic view of revenue per customer in the control to determine a "usual" distribution, and test whether the average of that distribution has moved significantly under the variation.
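One simple way to compare two revenue distributions without assuming they are normal is bootstrap resampling. The sketch below is purely illustrative of that general idea, with made-up data; it is not Qubit's production algorithm.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def bootstrap_uplift_probability(control, variation, iters=2000):
    """Estimate the probability that the variation's mean revenue per
    customer exceeds the control's, by resampling each group with
    replacement and counting how often the variation's mean wins."""
    wins = 0
    for _ in range(iters):
        c = [random.choice(control) for _ in control]
        v = [random.choice(variation) for _ in variation]
        if sum(v) / len(v) > sum(c) / len(c):
            wins += 1
    return wins / iters

# Hypothetical per-customer revenue samples for each arm of a test.
control = [10.0] * 50 + [20.0] * 50
variation = [15.0] * 50 + [25.0] * 50
print(bootstrap_uplift_probability(control, variation))
```

Because resampling works directly on the observed order amounts, skewed or heavy-tailed revenue distributions are handled without a normality assumption, which is the core difficulty the paragraph above describes.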

Unlike some simple revenue testing models, ours can also account for extreme values such as frequent purchasers or high-value purchases. Instead of merely removing these high-value customers from the data, we incorporate them into the analysis.
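A quick illustration of why simply dropping outliers is dangerous, using made-up numbers: a single large purchase can carry a substantial share of the revenue, and trimming it away discards real money from the measurement.

```python
import statistics

# Hypothetical order data: 99 typical $20 orders plus one $5,000 order.
orders = [20.0] * 99 + [5000.0]

mean_all = statistics.mean(orders)                    # 69.8
mean_trimmed = statistics.mean(sorted(orders)[:-1])   # 20.0

print(mean_all, mean_trimmed)
```

Trimming the one outlier drops the measured revenue per order from $69.80 to $20.00, erasing most of the revenue from the analysis. A model that accounts for heavy-tailed distributions keeps that revenue in the picture instead.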

So, what are the benefits for you?

You can understand the true impact of the A/B tests and experiments

The feature helps marketers and e-commerce managers understand the true impact of an experiment. Conversion rate is good for understanding how many users get through the funnel, but it doesn't tell you much about average order value, retention, or whether a conversion uplift was influenced by other factors (such as a sale or voucher codes).

You can use more efficient metrics

Revenue per visitor and revenue per customer both give the marketer a clearer view of how an A/B test or other experiment affects the site and hits the bottom line.

Going global is simple

Deliver will automatically convert transactions into your preferred currency to ensure consistency of results across regions.

To learn more about this feature, why not check out our FAQs? Or see how Topshop, by measuring revenue impact, was better able to evaluate its tests.
