Testing Mind Map Series: How to Think Like a CRO Pro (Part 54)
Interview with Georgiana Hunter-Cozens
Every couple of weeks, we get up close and personal with some of the brightest minds in the CRO and experimentation community.
We’re on a mission to discover what lies behind their success: to get real answers to your toughest questions, share hidden gems and unique insights you won’t find in the books, and condense years of real-world experience into actionable tactics and strategies.
This week, we’re chatting with Georgiana Hunter-Cozens, Senior Strategy Consultant at Creative CX, a specialist experimentation consultancy.
Georgiana, tell us about yourself. What inspired you to get into testing & optimization?
Hi, I’m Georgiana – a Senior Strategy Consultant at Creative CX (a specialist experimentation consultancy). I began studying Maths at university before switching to Social Sciences, as I realized I was more interested in analyzing people and their behaviors than in the abstract side of theoretical maths.
I always knew I wanted some sort of job in digital, and during my final year of university I worked at a small EduTech startup. It was there that I started using customer feedback to suggest improvements to the website, which I credit with sparking my interest in optimization.
From there, I started working as a CRO specialist at a small PPC agency, where I began my formal education in testing and analytics. I loved the intersection of qualitative and quantitative analysis, and the need to think both creatively (in test ideation) and logically (in test analysis).
How many years have you been testing for?
My first CRO role was in 2018, so about 6 years.
What’s the one resource you recommend to aspiring testers & optimizers?
A lot of my foundational knowledge of experimentation came from CXL. They have a huge range of courses I really recommend for aspiring testers & optimizers, from the basic theories and concepts through to some really interesting material on psychological principles.
Answer in 5 words or less: What is the discipline of optimization to you?
Consistent and measurable iterative improvements
What are the top 3 things people MUST understand before they start optimizing?
- The power of data
When you’re just starting out in optimization, there can be a temptation to ideate from gut feeling: coming up with ideas that you (or other stakeholders) think might work, or simply want to try. However, the more data you use, the better you will understand the problem, and the more successful your solutions will be.
It’s also really important to do your analysis first and form hypotheses from it, rather than hunting for data to suit your existing ideas, which leads to selective analysis and bias. There is also a huge range of data types out there, so if you can combine qualitative and quantitative sources, or even just a few varieties of one, you’ll get a much fuller picture.
- It’s ok to fail!
When you start experimenting, you will quickly realize that not every test is a winner – quite often you’ll get a losing test or a flat one. Rather than viewing these as failures, which will get you down pretty quickly, treat them as opportunities to learn. Why was your hypothesis incorrect? What were users doing instead? Why didn’t they like the new variation? You can also view a losing test as a harmful rollout avoided: rather than spending a lot of time and resource developing and releasing a damaging design, you’ve proven it would be a bad idea and saved that resource instead!
- Why experimenting is so valuable
Not every business is bought into experimentation, and you will often come up against someone who sees experimentation as a waste of time. It’s important you understand how valuable experimentation is before you start, so you feel empowered!
Experimentation is a low-risk, low-cost way to prove or disprove new ideas without spending lots of time or resource fully developing and releasing them. A great program provides a sandbox for innovative ideas, which often deliver a large impact when successful – and even without those big swings, it unlocks ongoing, iterative improvements.
How do you treat qualitative & quantitative data to minimize bias?
I think it’s really important not to go in with a fixed hypothesis before collecting or analyzing your data. I have a question in mind when executing my analysis, but I like to keep it broad (‘How are users currently using this page?’, ‘Is there any indication of why this page has such a high exit rate?’) rather than looking for data to support my own theory. For qualitative data, I similarly think it’s best to keep questions broad and let users behave naturally, with as little prompting as possible.
I naturally have less bias when analyzing my clients’ data, as I’m removed from any internal politics around design or features. Others I know have limited bias by having one team perform the analysis and a different team ideate on it.
When it comes to analyzing a test, a lot of it comes down to planning thoroughly beforehand. I keep the initial hypothesis in mind and judge success by whether that hypothesis is proven or disproven. We agree on primary and secondary metrics before running the test, so we’ve already established which behaviors we’re looking for – and we understand that if the data doesn’t support the hypothesis, it’s a losing or inconclusive test… in which case I start looking for something to learn!
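To make the pre-agreed analysis step concrete, here is a minimal sketch of how a pre-registered primary metric like conversion rate could be checked once a test finishes, using a two-proportion z-test. This is an illustration rather than Creative CX’s actual workflow, and every number in it is hypothetical:

```python
# Minimal sketch: evaluating a pre-registered primary metric (conversion rate)
# with a two-proportion z-test. All figures are hypothetical.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))            # two-sided
    return z, p_value

# Hypothetical results for a pre-agreed sample size, with alpha = 0.05 fixed up front.
z, p = two_proportion_z_test(conv_a=480, n_a=12_000, conv_b=552, n_b=12_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 supports the hypothesis; otherwise it's a loss/inconclusive
```

The point mirrors the planning advice above: the metric, sample size, and significance threshold are fixed before the test runs, so the analysis is a pre-committed check rather than a post-hoc search for a win.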
How (to you) is experimentation different from CRO?
To me, CRO is very focused on improving conversion rates rather than the user experience, and on ‘optimizing’ the site – tweaking what is already there. ‘Experimentation’ is much broader: it covers testing new and innovative features and designs alongside optimizing the existing experience. Success is not always measured by conversion rate, which opens the door to a wider variety of tests than the typical A/B/n tests associated with CRO. Psychologically, I think the term ‘experimentation’ evokes scientific rigor and well-designed processes – and hopefully more respect.
Talk to us about some of the unique experiments you’ve run over the years.
I like to think every experiment is unique, as I have worked with a lot of different clients across my career, each with different goals and opportunities. If I had to pick a few unusual ones:
- Improving the visibility of a coupon code within a checkout funnel, which worked too well, as too many customers used it. It led to a significant increase in conversion rate (CVR) but a decrease in revenue per visitor (RPV), which sparked a really interesting discussion about how we should measure success (there’s a quick numerical sketch of this trade-off after the list). I count it as a win, as the hypothesis was that it would increase sales, although for the client it was a loss, as their revenue took a hit.
- Similarly, testing the visibility of an embedded Google Maps feature and balancing the test’s positive impact against the cost of the feature, to establish whether it was beneficial enough.
- Testing AR and VR features, and AI-created product images, for the same client.
- Adding a link to the homepage to explain a client’s logo. The test was inconclusive, as it turns out no one really cared (or at least, not enough for it to affect their willingness to convert).
- Not unique, but it shows the power a small change can have: we changed the copy of a button leading to a lead-gen form from “Let’s talk” to “Get started”, which nearly tripled clicks and nearly doubled leads.
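That coupon-code trade-off is easy to see with numbers. RPV is simply conversion rate multiplied by average order value (AOV), so a discount that lifts CVR can still drag revenue down. A minimal sketch – every figure below is invented for illustration:

```python
# Illustrative only: how a coupon can raise CVR while lowering RPV.
def rpv(cvr: float, aov: float) -> float:
    """Revenue per visitor = conversion rate * average order value."""
    return cvr * aov

control = rpv(cvr=0.030, aov=100.00)  # 3.0% convert at $100 -> $3.00 per visitor
variant = rpv(cvr=0.036, aov=80.00)   # 3.6% convert, but the coupon cuts AOV to $80 -> $2.88
print(f"control RPV ${control:.2f} vs variant RPV ${variant:.2f}")
# CVR is up 20% yet RPV is down 4%: whether that's a win depends on the agreed success metric.
```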
Our thanks go out to Georgiana for taking part in this interview! To our lovely readers, we hope you found the insights useful and encourage you to apply them in your own optimization efforts.
Don’t forget to check back twice a month for more enlightening interviews! And if you haven’t already, check out our past interviews with CRO pros Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, and our latest with Sergio Simarro Villalba.