Testing Mind Map Series: How to Think Like a CRO Pro (Part 67)

By Tyler Hudson · October 7, 2024

Interview with Tyler Hudson

Every couple of weeks, we get up close and personal with some of the brightest minds in the CRO and experimentation community.

We’re on a mission to discover what lies behind their success. Get real answers to your toughest questions. Share hidden gems and unique insights you won’t find in the books. Condense years of real-world experience into actionable tactics and strategies.

This week, we’re chatting with Tyler Hudson, Director of Analytics & Insights at Roboboogie, a digital optimization consultancy that partners with digital experts to significantly improve website performance. Roboboogie is also a valued Convert-certified partner.

Tyler, tell us about yourself. What inspired you to get into testing & optimization?

I lead the experimentation practice for Roboboogie, a CRO consultancy out in Portland, Oregon. My background leans heavier on data and statistics, but I also have a passion for design. This crossroads of interests eventually led me to experimentation.

How many years have you been testing for?

I’ve been in the A/B testing and experimentation space for just over 7 years now.

What’s the one resource you recommend to aspiring testers & optimizers?

Tricky question; in my mind, it depends on where you are in the process. To start, I always recommend a refresh on statistics, especially if that word makes you shift in your seat. It’s frequently overlooked by people new to the field, but it’s essential to understand the components of test analysis and structure before launching your first A/B test.

While I can’t point to one resource, the main thing is to be critical about what you do and don’t know. It’ll set you up for a career in testing.

Answer in 5 words or less: What is the discipline of optimization to you?

Proving your ideas wrong, frequently.

What are the top 3 things people MUST understand before they start optimizing?

  1. Pre-Test Planning – I mentioned it above, but a solid understanding of statistical components in CRO is critical before stepping into experimentation. With all the factors that can impact “conversion rate,” pre-test planning is imperative to a successful program (see the sample-size sketch after this list).
  2. Win Rates – You’ll be wrong way more frequently than you are right. If you were right all the time, why even bother with testing? A more positive spin is to reframe it as always learning. Don’t get discouraged by losing tests; leverage the insights collected to make future iterations more impactful.
  3. Data Integrity – Audit your data sources and integrations frequently. The only thing worse than no data is inaccurate data. Carve out time in your schedule for quarterly (at least) audits, and always, always, always run an A/A test before stepping into your A/B testing backlog to ensure your program has measured baseline variability (see the A/A simulation after this list).
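
To make the pre-test planning point concrete, here’s a minimal sketch of a sample-size calculation using statsmodels. The 4% baseline conversion rate and 20% minimum detectable effect are hypothetical placeholders, not recommendations; swap in your own numbers before committing to a test duration.

```python
# Pre-test planning sketch: how many visitors each variation needs before
# a two-proportion z-test can reliably detect the lift you care about.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline_rate = 0.04                 # hypothetical baseline conversion rate
relative_mde = 0.20                  # hypothetical: detect a 20% relative lift
target_rate = baseline_rate * (1 + relative_mde)

# Cohen's h effect size for the two proportions
effect_size = proportion_effectsize(target_rate, baseline_rate)

n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,                      # false-positive tolerance
    power=0.80,                      # 1 - false-negative tolerance
    alternative="two-sided",
)
print(f"Visitors needed per variation: {n_per_variant:,.0f}")
```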
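
And for the A/A point, here’s a quick simulation of what that baseline check looks like: split identical traffic into two buckets and confirm the measured difference reads as noise. The visitor count and conversion rate below are hypothetical.

```python
# A/A sanity check, simulated: both buckets share the same true rate by
# design, so the test should (usually) fail to find a difference.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(7)
visitors = 20_000                    # hypothetical visitors per bucket
true_rate = 0.04                     # identical in both buckets by design

conversions = rng.binomial(visitors, true_rate, size=2)
stat, p_value = proportions_ztest(conversions, nobs=[visitors, visitors])

print(f"Bucket conversions: {conversions}, p-value: {p_value:.3f}")
# In a real A/A, a p-value that is routinely small signals a bucketing or
# tracking problem, not a real difference.
```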

How do you treat qualitative & quantitative data to minimize bias?

Qualitative data should be a core component of every testing program. However, it’s often overlooked or siloed in a different department.

I find it imperative to utilize both types of data to better understand the why behind the performance shifts analyzed during an A/B test. Quantitative tells you the what in the data; qualitative highlights the why. In the absence of qualitative, we often see programs implement wins without fully understanding the impact on the most important part of experimentation: the customer.

Also, don’t overlook the importance of qualitative data in identifying what should be tested. I recommend using a mix of user testing, surveys, and session recordings to bring clarity to any friction points identified in your analysis. Ask your customers what they need, or watch them navigate through your journey, and you’ll have a better understanding of the gaps, leading to stronger hypotheses and a more well-rounded customer experience.

How (to you) is experimentation different from CRO?

From my work, I see CRO as a methodology that fits into the larger umbrella of experimentation. CRO is often focused on output, whereas experimentation is a framework to make better decisions across every aspect of a company.

Our CRO engagements are always more results-driven, whereas experimentation programs focus on structure, learnings, and outcomes.

That being said, experimentation programs rarely exist without a CRO component, but where some CRO programs fall short is in their inability to factor in customer experience.

Talk to us about some of the unique experiments you’ve run over the years.

I’ve been running a lot of “feature” tests focused on increasing average order value for a few eCommerce accounts. I like the ability to think outside of a template and introduce custom features that meet customers where they are in the buying process, while providing interactive elements that let customers engage without distracting from the primary conversion path. It works for SaaS accounts too!

These tests have included product configurators, dynamic upsells based on product engagement, one-click add-ons placed in high-attention areas (I call it the Amazon effect, though I’m not sure how I feel about that term), product recommendations, and others.

If you find your development team shudders at the thought of a new feature, consider running a “fake door” test with a subset of customers to measure engagement, then iterate from there.
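
For illustration, here’s a minimal server-side sketch of fake-door bucketing, assuming you hash a user ID to pick a small, deterministic subset. The 5% exposure and the logging call are hypothetical stand-ins for your own feature-flag and analytics setup.

```python
# "Fake door" sketch: a small, deterministic subset of users sees a teaser
# for the unbuilt feature, and their clicks are logged to gauge demand.
import hashlib

EXPOSURE = 0.05  # hypothetical: show the fake door to 5% of users

def sees_fake_door(user_id: str) -> bool:
    """Deterministically bucket a user into the fake-door subset."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 10_000
    return bucket < EXPOSURE * 10_000

def on_fake_door_click(user_id: str) -> None:
    """Record interest; the real feature doesn't exist yet."""
    print(f"analytics: fake_door_click user={user_id}")  # stand-in call
```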


Cheers for reading! If you’ve caught the CRO bug… you’re in good company here. Be sure to check back often; we have fresh interviews dropping twice a month.

And if you’re in the mood for a binge read, have a gander at our earlier interviews with Gursimran Gujral, Haley Carpenter, Rishi Rawat, Sina Fak, Eden Bidani, Jakub Linowski, Shiva Manjunath, Deborah O’Malley, Andra Baragan, Rich Page, Ruben de Boer, Abi Hough, Alex Birkett, John Ostrowski, Ryan Levander, Ryan Thomas, Bhavik Patel, Siobhan Solberg, Tim Mehta, Rommil Santiago, Steph Le Prevost, Nils Koppelmann, Danielle Schwolow, Kevin Szpak, Marianne Stjernvall, Christoph Böcker, Max Bradley, Samuel Hess, Riccardo Vandra, Lukas Petrauskas, Gabriela Florea, Sean Clanchy, Ryan Webb, Tracy Laranjo, Lucia van den Brink, LeAnn Reyes, Lucrezia Platé, Daniel Jones, May Chin, Kyle Hearnshaw, Gerda Vogt-Thomas, Melanie Kyrklund, Sahil Patel, Lucas Vos, David Sanchez del Real, Oliver Kenyon, David Stepien, Maria Luiza de Lange, Callum Dreniw, Shirley Lee, Rúben Marinheiro, Lorik Mullaademi, Sergio Simarro Villalba, Georgiana Hunter-Cozens, Asmir Muminovic, Edd Saunders, Marc Uitterhoeve, Zander Aycock, Eduardo Marconi Pinheiro Lima, Linda Bustos, Marouscha Dorenbos, Cristina Molina, Tim Donets, Jarrah Hemmant, Cristina Giorgetti, and Tom van den Berg.

Originally published October 07, 2024 - Updated October 09, 2024
Written By
Tyler Hudson, Director of Analytics & Insights @ Roboboogie
Edited By
Carmen Apostu, Head of Content at Convert
