Testing Mind Map Series: How to Think Like a CRO Pro (Part 37)
Interview with Lucrezia Platé
With not one but two Master’s degrees, in Law and Finance, Lucrezia’s background might not seem like an obvious route into optimization. But as it turns out, her passion for numbers, copywriting, and design eventually led her down the path of testing and experimentation.
What’s most intriguing about Lucrezia’s approach to optimization is her understanding of the breadth of competencies required. Optimization demands a versatile mind, capable of traversing through statistics and data, user psychology, design, product, and marketing. It’s not just about crunching numbers; it’s about painting a complete picture of user behavior and using this to drive business growth efficiently.
She stresses the need to understand the “why” and “why not” behind observations to ensure that results are rooted in common sense, not just the numbers game.
We can all learn a thing or two from Lucrezia’s expertise, so let’s dive into the interview and see what optimization gems she has to share…
Lucrezia, tell us about yourself. What inspired you to get into testing & optimization?
After earning two Master’s Degrees, one in Law and one in Finance, and gaining some experience in the PE/VC industry, I fell in love with Product Management and landed a job as a Product Manager at a Fintech Group called ION.
My passion for statistics and econometrics, along with a deep interest in copywriting and design, led me to further develop my skills in testing and experimentation.
Currently, I work at Bending Spoons, one of the biggest app developers in the world, as a Growth Product Manager. My role involves driving user acquisition and monetization by leveraging data, user psychology, and continuous experimentation through A/B testing.
What I find most inspiring about optimization is the wide variety of competencies involved. To really excel, you need to have knowledge of Statistics and Data, as well as User Psychology, Design, Product, and Marketing. The bar is set extremely high, and a daily commitment to learning is required.
Another aspect I really appreciate is the importance of numbers and hard facts when evaluating performance. If you put in the required effort, it will show.
How many years have you been optimizing for?
My interest in testing goes back six years to my second degree, when I first encountered statistics and began applying Hypothesis Testing and Optimization to my quantitative trading in financial markets.
After completing my Master’s in Finance, I transitioned to Tech Product Management (at the end of 2021), where I had the opportunity to transfer my experience to completely new fields, Product and Growth.
What’s the one resource you recommend to aspiring testers & optimizers?
It’s difficult to choose just one, but I would probably recommend the book “Trustworthy Online Controlled Experiments” by Kohavi, Tang, and Xu. It provides a primer on the most important concepts you need in order to build statistically robust A/B tests.
Answer in 5 words or less: What is the discipline of optimization to you?
Efficient use of experimentation tools
What are the top 3 things people MUST understand before they start optimizing?
- It’s not just about data and statistics. Sure, these elements are necessary foundations, but it doesn’t end there. If you are serious about becoming a great optimizer, you must develop a versatile mind and a wide array of competencies, including User Psychology, UX design, and data analysis (but not limited to it).
- It’s not just about A/B testing. People often try to reduce Optimization to A/B testing, but I think this is a limiting perspective, leading professionals to think that whenever optimization is required, A/B testing should be involved. This is not always the case. Optimization shouldn’t mean just maximizing KPIs, but doing so with the most efficient use of resources, which might or might not include A/B testing (sometimes an A test is more than enough, for example).
- A/B testing is expensive. Each experiment should be planned and prioritized carefully, taking into account the knowledge gathered from past tests, UX research, and market/competitor analysis.
How do you treat qualitative & quantitative data to remove bias?
I strongly feel that quantitative and qualitative data should be given equal importance in the optimization process. While quantitative data tells you the “what”, qualitative data is essential to uncover the “why” and “why not” behind your observations, also informing your experiment backlog.
Every time I brainstorm ideas for future tests, I make a habit of asking myself three questions:
- Why should it work?
- Why could it not work?
- What evidence, qualitative and quantitative, do I have to prove or disprove my assumptions?
Doing so helps me see the full picture and avoid getting lost in the numbers or chasing a statistically significant result that isn’t backed by common sense.
What is the most annoying optimization myth you wish would go away?
One myth that I find particularly frustrating is the idea that you can start an experiment without having a clear idea of how long it will run, and decide to stop it when the results become statistically significant. This behavior can lead to monitoring test results too often and drawing conclusions too soon.
When the number of observations is low, positive or negative lifts are likely to be observed by chance, because the conversion rate is averaged over a small sample. As the test collects more data points, the conversion rates converge toward their true long-term values.
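To see why stopping at the first significant result is dangerous, here is a minimal simulation sketch (not from the interview; the parameters are illustrative assumptions). It runs A/A tests, where there is no true difference between variants, and "peeks" at a two-proportion z-test every 500 users. Even though the nominal significance level is 5%, repeated peeking "finds" a significant lift far more often than that:

```python
import math
import random

def z_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0  # no conversions yet; nothing to test
    z = (conv_b / n_b - conv_a / n_a) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_aa_test(p=0.05, n_max=10_000, peek_every=500, alpha=0.05, rng=None):
    """Simulate one A/A test (both arms convert at rate p).

    Returns True if peeking ever stops the test on a (false) positive.
    """
    rng = rng or random.Random()
    conv_a = conv_b = 0
    for i in range(1, n_max + 1):
        conv_a += rng.random() < p  # one user per arm per step
        conv_b += rng.random() < p
        if i % peek_every == 0 and z_test_pvalue(conv_a, i, conv_b, i) < alpha:
            return True  # stopped early on a chance "significant" lift
    return False

rng = random.Random(42)
n_sims = 500
false_positives = sum(run_aa_test(rng=rng) for _ in range(n_sims))
print(f"False positive rate with peeking: {false_positives / n_sims:.1%}")
```

Running this, the observed false positive rate comes out well above the nominal 5%, which is exactly why a stopping rule should be fixed before the experiment starts.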
I always suggest calculating the required sample size for the effect size of interest before starting the experiment (and running a cost analysis beforehand). Then it’s best to wait until that sample size is reached before stopping the test. I also recommend tracking the p-value from the beginning of the experiment and observing how it moves over the entire period. The more erratic its behavior, with p-values dipping below and above the significance threshold in a random-like fashion, the less reliable the results.
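The up-front sample size calculation Lucrezia describes can be sketched with the standard power formula for a two-proportion z-test. This is a generic illustration, not her specific tooling; the function name and the example rates are assumptions:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(baseline, mde, alpha=0.05, power=0.80):
    """Approximate sample size per variant for a two-proportion z-test.

    baseline: control conversion rate (e.g. 0.05 for 5%)
    mde: minimum detectable effect, absolute (e.g. 0.01 for +1 pp)
    alpha: two-sided significance level
    power: desired probability of detecting an effect of size mde
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / mde ** 2
    return math.ceil(n)

# Example: users needed per arm to detect a lift from 5% to 6%
# at 5% significance and 80% power.
n_needed = sample_size_per_arm(baseline=0.05, mde=0.01)
print(f"Required sample size per arm: {n_needed}")
```

Multiplying the per-arm figure by the number of variants (and dividing by expected daily traffic) also gives the cost side of the analysis: how long the test will occupy your traffic before a decision can be made.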
Download the infographic above and add it to your swipe file for a little inspiration when you’re feeling stuck!
Thanks for tuning in to this exclusive interview with Lucrezia! We hope you’ve gained some valuable insights from her experiences and advice, and we strongly encourage you to put them into action in your own optimization efforts.
Check back twice a month for upcoming interviews! And if you haven’t already, check out our past interviews with CRO legends Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, and our latest with LeAnn Reyes.