At LinkedIn, we constantly evaluate the value our products and services deliver so that we can provide the best possible experiences for our members and customers. This includes understanding how product changes impact key metrics related to those experiences. However, simply looking at associations between product changes and key metrics can be misleading. As we know, correlation does not always imply causation. When making decisions about the path forward for a product or feature, we need to know the causal impact of that change on our key metrics.
The ideal way to establish causality is through A/B testing, where we randomly split a test population into two or more groups and provide them with different variants of the product (which we call “treatments”). Because of the randomized assignment, the groups are statistically equivalent except for the treatment they received, so any difference in metrics between the groups can be attributed solely to the treatment. Our T-REX experimentation platform allows us to run A/B tests at scale, adding 2,000 new experiments on a weekly basis and serving a user population of more than 850 million members.
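The logic above can be illustrated with a small simulation. This is a hypothetical sketch, not how T-REX is implemented: we invent a member population with an assumed baseline click-through rate, randomly assign each member to control or treatment, apply an assumed uplift only to the treatment group, and then read off the difference in group means. Because assignment is random, that difference estimates the causal effect of the treatment.

```python
import random
from statistics import mean

random.seed(7)

# Hypothetical parameters for illustration only.
N = 100_000              # simulated member population
BASELINE_CTR = 0.10      # assumed baseline click-through rate
TREATMENT_UPLIFT = 0.01  # assumed true causal effect of the variant

control, treatment = [], []
for _ in range(N):
    # Randomized assignment: each member is equally likely to land in
    # either group, so the groups differ only by the treatment received.
    if random.random() < 0.5:
        control.append(1 if random.random() < BASELINE_CTR else 0)
    else:
        p = BASELINE_CTR + TREATMENT_UPLIFT
        treatment.append(1 if random.random() < p else 0)

# With random assignment, the difference in means is an unbiased
# estimate of the treatment's causal effect on the metric.
lift = mean(treatment) - mean(control)
print(f"observed lift: {lift:.4f}")
```

Running this recovers an estimate close to the assumed +0.01 effect; in a real experiment, a significance test (e.g. a two-proportion z-test) would then tell us whether the observed lift is distinguishable from noise.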