Testing Mind Map Series: How to Think Like a CRO Pro (Part 72)

Interview with Matt Tilling
Every couple of weeks, we get up close and personal with some of the brightest minds in the CRO and experimentation community.
We’re on a mission to discover what lies behind their success: get real answers to your toughest questions, share hidden gems and unique insights you won’t find in the books, and condense years of real-world experience into actionable tactics and strategies.
This week, we’re chatting with Matt Tilling, Optimisation Consultant at Conversion, the world’s leading CRO agency.
Matt, tell us about yourself. What inspired you to get into testing & optimisation?
I’ve always been fascinated by human behaviour. Even back in university when I was studying marketing, I was drawn to anything that involved understanding what makes people tick. The idea that tiny tweaks could shift decisions always intrigued me. Honestly, at times I felt like I was picking up the skills to be a great con artist, but luckily I found a much better use for them in optimisation!
How many years have you been testing for?
I’ve been testing for around 10 years.
What’s the one resource you recommend to aspiring testers & optimisers?
There are loads of great resources out there—testing calculators, books on statistics, behavioural economics—but if I had to pick just one, I’d say LinkedIn.
I know that sounds a bit cliché, but if you follow the right people, you get a front-row seat to industry debates, emerging trends, and new ways of thinking. So many people assume A/B testing is the whole game, but when you see thought leaders challenging that idea in real time, you start to appreciate how much bigger the field really is. My advice? Follow the experts, listen in on the debates, and see how different perspectives shape your own thinking.
Answer in 5 words or less: What is the discipline of optimisation to you?
“Understanding and testing human behaviour.”
OR
“The great debate settler.”
What are the top 3 things people MUST understand before they start optimising?
- Know Your Numbers – If there’s one thing I’ve learned, it’s that knowing your stats is non-negotiable. So many tests out there are underpowered or don’t meet basic statistical requirements, which means people end up making decisions based on dodgy data. It’s not just about hitting statistical significance; it’s about knowing when the numbers actually mean something. If you can’t spot the red flags (bad sample sizes, flawed methodologies, or just plain noise), you’re not optimising; you’re just gambling. The key is balancing statistical rigour with real-world pragmatism (see the sketch after this list).
- User behaviour is messy and unpredictable – What works for one audience might completely bomb for another. People aren’t robots; they’re influenced by psychology, external factors, and even their mood at the time. A test that wins today might flop six months down the line. That’s why you can’t rely on quantitative data alone; you need qualitative insights from user research, session recordings, and surveys to get the full picture.
- Optimisation is way bigger than A/B testing – Testing is a tool, not the end goal. The real game is about constantly refining the experience, questioning assumptions, and making small, meaningful changes that stack up over time. Some of the biggest wins don’t come from A/B tests at all; they come from understanding what actually motivates your users and solving their real problems.
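
To make the first point concrete, here’s a minimal sketch of a pre-test power calculation for a simple two-proportion A/B test, one standard way to catch an underpowered test before it runs. The baseline rate and lift below are illustrative numbers, not figures from the interview.

```python
# Minimal power calculation for a two-proportion A/B test.
# Illustrative numbers only; scipy is the sole dependency.
from scipy.stats import norm

def sample_size_per_arm(p_control: float, p_variant: float,
                        alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in EACH arm to detect a shift from p_control to
    p_variant with a two-sided z-test at the given alpha and power."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for significance
    z_beta = norm.ppf(power)           # extra margin that buys the power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = abs(p_variant - p_control)
    return int(((z_alpha + z_beta) ** 2 * variance) / effect ** 2) + 1

# Hypothetical example: detecting a lift from a 4.0% to a 4.5% conversion
# rate needs roughly 25,500 visitors per arm. Call a winner earlier than
# that and, as Matt puts it, you're not optimising, you're gambling.
print(sample_size_per_arm(0.040, 0.045))
```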
How do you treat qualitative & quantitative data to minimise bias?
Bias creeps into data so easily, and if you’re not careful, you can end up drawing the wrong conclusions. That’s why I always try to take a mixed-method research approach, blending both qualitative and quantitative data. Numbers can tell you what’s happening, but they don’t explain why. On the flip side, qualitative insights (like session recordings, surveys, and interviews) give you that why, but they’re often based on small sample sizes, which can be misleading on their own. The sweet spot is combining the two, so you can cross-check findings and get a more complete, reliable picture of user behaviour.
I also make a conscious effort to avoid confirmation bias. It’s so easy to cherry-pick data that supports what you want to believe. That’s why I always form a hypothesis before I start digging into numbers, and I actively look for evidence that contradicts my assumptions. The best way to minimise bias is to accept that no dataset is perfect—so you need to question everything.
How (to you) is experimentation different from CRO?
Experimentation and CRO get lumped together a lot, but they’re not the same thing. CRO is all about driving measurable improvements—conversions, revenue, engagement—stuff you can easily track. Experimentation, though, is way broader. It’s about using a scientific approach to reduce uncertainty and answer big questions, even when the outcome isn’t tied to an obvious KPI.
Experimentation can help tackle strategic questions like ‘What builds long-term customer trust?’ or ‘How does brand perception influence behaviour?’ These insights might not lead to an instant conversion boost, but they can fundamentally change how a business operates. CRO is a powerful part of experimentation, but experimentation itself is much bigger: it’s about shaping the future, not just optimising the present.
Talk to us about some of the unique experiments you’ve run over the years.
Some of the most interesting experiments I’ve run have been the ones that go against best practices. One example was testing scarcity messaging for a travel brand. Instead of just running a basic A/B test with urgency-driven messages like ‘Only 2 seats left!’, we tested different levels of urgency to see if it actually made a difference. Turns out, overly aggressive scarcity tactics did more harm than good—while some urgency boosted bookings, too much eroded trust, especially among repeat visitors who saw the same ‘low stock’ warnings every time. That led the brand to rethink its entire urgency strategy and focus on more authentic scarcity signals.
Another fun one was around pricing transparency for a subscription business. The common assumption was that showing pricing too early would scare people off, so we tested different levels of disclosure: some users saw the full price upfront, while others got a softer introduction to the features before seeing the cost. The results? Full transparency actually increased conversions, particularly among high-LTV customers, because it built trust and removed hesitation. Sometimes reducing friction means being more upfront, not less.
The experiments I’d love to run but rarely get the chance are the ones that go completely against the grain—things like raising prices instead of discounting, removing flexibility instead of adding it, or stripping out features that everyone assumes are essential. These are the types of tests that never make it past a boardroom discussion but could unlock the most interesting insights. Pushing boundaries and challenging conventions—that’s where the real fun is.
Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often; we have fresh interviews dropping twice a month.
And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, Tom van den Berg, Tyler Hudson, Oliver West, Brian Poe, Carlos Trujillo, and Eddie Aguilar.
Written By
Matt Tilling
