A Shopify merchant’s guide to A/B testing on Pack: clean data, no flicker
Learn how to run flicker-free A/B tests on your Shopify Hydrogen storefront with Pack's server-side testing tools. Get results you can trust and boost CVR up to 2x, like Manly Bands did. A step-by-step guide.

Pack makes it easy to run server-side A/B tests on landing pages or your Hydrogen storefront. Instead of replacing your data and analytics platforms, Pack works alongside your existing tech to give you results you can trust.
Let’s explore what you can do with Pack’s A/B testing feature, how you can set it up, and some best practices to make your tests as successful as possible.
Step 1: Understand what you can test with Pack
Pack makes it easy to test:
- Content changes: ex. custom layouts, imagery, text, etc. (anything you can control in the Pack Customizer)
- Third-party apps or built-in site features: ex. what’s the impact of providing product recommendations on our product page?
- Cart elements: ex. what’s the impact of having a package protection widget at checkout?
- Code-based changes: ex. webhooks and packages so you can create your own unique tests
Pack’s biggest strengths:
Out of the box, Pack’s optimized for:
- Super-fast page loads during tests
- Dynamic content delivery
- No test variant flickering
- Minimized privacy-regulation risk
That means your tests will give you cleaner results than most, since the shopper experience stays top-notch. Pack can deliver this smooth experience because it fully compiles a test before it ever hits your shopper’s device, thanks to server-side rendering, the rendering model that powers Shopify Hydrogen.
Pack’s testing is particularly powerful when you’re experimenting with personalization. Soon you’ll be able to use data from your existing CDP, or anywhere else you have access to shopper information and user behavior (ex. location, page visits, segment IDs, etc.), to change content dynamically on the page without requiring the shopper to refresh to see the personalized content. And because all the data is compiled on the server, this approach also sidesteps some client-side data privacy challenges.
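To make that concrete, here is a minimal sketch of what server-side personalization can look like in a Hydrogen (Remix) route loader. This is illustrative only: the getSegment helper, cookie name, and content map are hypothetical stand-ins, not Pack’s actual API.

```typescript
// app/routes/_index.tsx (illustrative; not Pack's actual API)
import {json, type LoaderFunctionArgs} from '@shopify/remix-oxygen';

// Hypothetical CDP lookup: resolve a shopper to a segment on the server.
async function getSegment(visitorId: string): Promise<'new' | 'returning' | 'vip'> {
  // In practice this would call your CDP; hard-coded for the sketch.
  return 'returning';
}

const HERO_BY_SEGMENT = {
  new: {heading: 'Welcome! Take 10% off your first order'},
  returning: {heading: 'Back for more? Your favorites are restocked'},
  vip: {heading: 'Early access: shop the new drop first'},
} as const;

export async function loader({request}: LoaderFunctionArgs) {
  const visitorId =
    request.headers.get('Cookie')?.match(/visitor=([^;]+)/)?.[1] ?? 'anonymous';
  const segment = await getSegment(visitorId);

  // The chosen content ships in the first HTML response: no client-side
  // swap after load, so no flicker, and no shopper data leaked to the browser.
  return json({hero: HERO_BY_SEGMENT[segment]});
}
```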
When Pack won’t work for you:
Pack can’t run tests on data that isn’t stored within Pack (ex. product pricing), so you’ll want to integrate other tools for more complex tests (ex. multi-step content activation).
Step 2: Get a handle on your data
How Pack works with your data
Pack’s A/B testing is meant to work alongside your existing data infrastructure (such as GA4 or a more sophisticated provider). In other words, Pack provides A/B testing on the front end, then surfaces test event data that your data platforms can consume.
While Pack does not replace your data and analytics tools, Pack’s dashboard can help you visualize your test’s baseline insights and see the metrics for your control and variant groups, such as:
- Total users, and the control/variant users that viewed the test
- Total users that engaged with the test (view, click, ATC, etc.)
- CVR, total purchases, AOV, and revenue
To visualize your test metrics, today you can connect Google Analytics and/or BigQuery, and Pack uses Bayesian statistics to calculate a winner. If you want more control over your data (ex. changing confidence levels or acceptable risk thresholds, or running Frequentist experimentation instead of Bayesian), or if you want to analyze complex event funnels within the A/B test, you’ll want to export your results and pull in additional tooling (ex. a data warehouse, modeling and visualization tools, etc.) to analyze your data more deeply. We are exploring closer partnerships with newer tools in the space!
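For intuition on how a Bayesian winner calculation works, here is a generic sketch (not Pack’s internal implementation): model each group’s CVR as a Beta posterior, then estimate the probability that the variant beats the control by sampling.

```typescript
// Generic Bayesian A/B comparison (not Pack's internal math).
// With a Beta(1, 1) prior, each group's CVR posterior is
// Beta(conversions + 1, visitors - conversions + 1). We sample both
// posteriors and count how often the variant beats the control.

// Gamma(shape, 1) sample for integer shapes, as a sum of exponentials.
// Valid here because both Beta parameters come from event counts.
function sampleGamma(shape: number): number {
  let sum = 0;
  for (let i = 0; i < shape; i++) sum -= Math.log(Math.random());
  return sum;
}

// Beta(a, b) sample via X / (X + Y) with X ~ Gamma(a), Y ~ Gamma(b).
function sampleBeta(a: number, b: number): number {
  const x = sampleGamma(a);
  return x / (x + sampleGamma(b));
}

type Arm = {visitors: number; conversions: number};

function probabilityVariantWins(control: Arm, variant: Arm, draws = 10_000): number {
  let wins = 0;
  for (let i = 0; i < draws; i++) {
    const pC = sampleBeta(control.conversions + 1, control.visitors - control.conversions + 1);
    const pV = sampleBeta(variant.conversions + 1, variant.visitors - variant.conversions + 1);
    if (pV > pC) wins++;
  }
  return wins / draws;
}

// Ex. 2,000 visitors per arm: control converts at 3.0%, variant at 3.6%.
console.log(
  probabilityVariantWins({visitors: 2000, conversions: 60}, {visitors: 2000, conversions: 72}),
);
```

A probability above roughly 95% is a common (if arbitrary) bar for declaring a winner; tools differ in the threshold and prior they use, which is exactly the kind of control you gain by exporting your data.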
To ensure your tests are set up for success, here’s what we recommend you consider before you run your first test:
Event data collection methods + data flow
First, you’ll need a good handle on your existing data and attribution. We typically see brands working with data platforms such as Elevar, Northbeam, Fueled, Blotout, or Littledata, a combination of tools, or a custom solution. It’s important to confidently answer two questions: How does your event data flow through those systems? What’s your source of truth?
Once you’ve gathered a list of your data platforms, we recommend mapping out your data flows so you have a clear picture of where data is going, then adding Pack’s A/B testing events to that flow.
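As an illustration of what adding Pack’s test events to your flow might look like, here is a sketch that forwards an exposure event to GA4 via gtag.js. The event shape below is hypothetical; Pack’s actual payload may differ.

```typescript
// Hypothetical shape of a test exposure event (Pack's real payload may differ).
interface AbTestExposureEvent {
  testId: string;    // which experiment
  variantId: string; // ex. 'control' or 'variant-a'
  visitorId: string; // stable anonymous ID used for bucketing
}

// gtag.js is assumed to be installed on the storefront already.
declare function gtag(...args: unknown[]): void;

// Forward the exposure so test dimensions can be joined against
// conversions downstream (GA4, BigQuery export, etc.).
function trackExposure(event: AbTestExposureEvent): void {
  gtag('event', 'ab_test_exposure', {
    test_id: event.testId,
    variant_id: event.variantId,
  });
}
```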
Event data accuracy + server-side event tracking
If you don’t have a good sense of how accurate your data is, your test results won’t mean much.
Having a robust Conversions API (CAPI) solution in place, like Fueled.io, Blotout, or Littledata, that handles server-side event tracking and matches it with user activity (deduping, syncing with Meta, and consistent, accurate attribution) is a great first step.
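The core mechanism these tools rely on is worth understanding: the browser pixel and the server-side event carry the same event ID, so the destination can discard the duplicate. A minimal sketch of the idea (not any vendor’s actual code):

```typescript
// Deduplication sketch: one event ID shared by the browser pixel and the
// server-side Conversions API call, so Meta keeps only one copy.
import {randomUUID} from 'node:crypto';

// Generated once per purchase and attached to BOTH calls.
const eventId = randomUUID();

// Client side (conceptually):
//   fbq('track', 'Purchase', {value: 49.0, currency: 'USD'}, {eventID: eventId});

// Server side: the same ID rides along in the CAPI payload.
const capiPayload = {
  event_name: 'Purchase',
  event_id: eventId, // matches the pixel's eventID, enabling dedup
  event_time: Math.floor(Date.now() / 1000),
  action_source: 'website',
};
```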
If your answer on data accuracy is “not yet” or “I’m not sure,” feel free to book some time with Pack’s team; we can walk you through some helpful strategies and find the right tools for your team.
Start with a strong baseline for shopper behavior—or create a new one
Having some historical benchmarks for your data can help you tease out external factors that may be impacting your test results, like seasonality or baseline page performance.
If you don’t have access to historical data, we recommend running an A/A test, where you test a variant of the page with no changes, to establish a testing baseline that helps you interpret your results.
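One simple way to read an A/A test is a two-proportion z-test: both arms serve identical pages, so a statistically significant difference (|z| above roughly 1.96 at the 95% level) points to a bucketing or tracking problem rather than a real effect. A rough sketch, assuming you’ve exported visitor and conversion counts per arm:

```typescript
// Two-proportion z-test for an A/A sanity check.
type Arm = {visitors: number; conversions: number};

function twoProportionZ(a: Arm, b: Arm): number {
  const p1 = a.conversions / a.visitors;
  const p2 = b.conversions / b.visitors;
  // Pooled rate under the null hypothesis that both arms are identical.
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
  return (p1 - p2) / se;
}

// Ex. an A/A test with 5,000 visitors per arm: |z| is about 0.58,
// comfortably below 1.96, which is what a healthy setup should show.
console.log(
  twoProportionZ({visitors: 5000, conversions: 150}, {visitors: 5000, conversions: 160}),
);
```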
Step 3: Establish your testing strategy
Before you run any tests, you’ll want to clearly define your:
- Process or framework for ideating and prioritizing high-impact optimizations (ex. scoring ideas based on reach, impact, confidence, and effort (RICE); see the sketch after this list)
- Business objective (ex. boost AOV)
- Hypothesis (ex. by adding a product recommendations slider to the product page, we can boost AOV by x%)
- Key metrics (ex. AOV, with a do-no-harm threshold for conversion rate)
- Plan for data analysis and interpretation (ex. Fueled for data collection, Blotout for analysis; covered in Step 2)
- Process for documenting and sharing learnings
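Here is a minimal RICE-scoring sketch with made-up numbers, to show how the framework forces a ranking across competing test ideas:

```typescript
// RICE prioritization: score = (reach * impact * confidence) / effort.
interface TestIdea {
  name: string;
  reach: number;      // shoppers affected per month
  impact: number;     // expected effect: 0.25 minimal .. 3 massive
  confidence: number; // 0..1
  effort: number;     // person-weeks
}

const rice = (i: TestIdea) => (i.reach * i.impact * i.confidence) / i.effort;

const ideas: TestIdea[] = [
  {name: 'PDP recommendations slider', reach: 20_000, impact: 1, confidence: 0.8, effort: 2},
  {name: 'Full homepage redesign', reach: 50_000, impact: 2, confidence: 0.5, effort: 8},
];

// Highest score first: the slider (8,000) edges out the redesign (6,250).
for (const idea of [...ideas].sort((a, b) => rice(b) - rice(a))) {
  console.log(`${idea.name}: ${rice(idea).toFixed(0)}`);
}
```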
With a documented testing strategy, you’re much more likely to end up with test results that are both accurate and actionable.
Testing big swings vs. incremental iteration: tips from growing brands
Generally, we’ve found that brands will test bigger swings (ex. a completely new page design and imagery) to capture bigger wins, then make finer incremental improvements to their winning pages based on the results, changing one element at a time to zero in on CVR impact. If you’re looking to fine-tune your CVR or AOV, making a single change to an existing page is the better approach: it’s lower risk and offers a clearer result.
Step 4: Set up your first test
To start your first A/B test, you’ll want to pull in a developer or Pack’s support team to help you install some packages on your storefront (see the A/B testing developer documentation).
Once you’ve run through setup, it’s easy to create your test in Pack’s visual editor and analyze the results.
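For a feel of what happens under the hood once the packages are installed (Pack’s packages handle this wiring for you; the route, helper, and cookie below are hypothetical), here is a sketch of a Hydrogen loader that buckets a visitor into a variant and keeps the assignment stable across visits:

```typescript
// app/routes/pages.$handle.tsx (illustrative; not Pack's actual API)
import {json, type LoaderFunctionArgs} from '@shopify/remix-oxygen';

// Deterministic bucketing: hash the visitor ID so the same shopper
// always lands in the same variant.
function assignVariant(visitorId: string, variants: string[]): string {
  let hash = 0;
  for (const char of visitorId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return variants[hash % variants.length];
}

export async function loader({request}: LoaderFunctionArgs) {
  // Read (or create) a visitor ID cookie so bucketing is stable.
  const cookies = request.headers.get('Cookie') ?? '';
  const visitorId =
    cookies.match(/ab_visitor=([^;]+)/)?.[1] ?? crypto.randomUUID();

  const variant = assignVariant(visitorId, ['control', 'variant-a']);

  // The variant's content is rendered into the initial HTML response,
  // so there is no client-side swap and no flicker.
  return json(
    {variant},
    {
      headers: {
        'Set-Cookie': `ab_visitor=${visitorId}; Path=/; Max-Age=31536000`,
      },
    },
  );
}
```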
Step 5: Iterate + apply your learnings
Wins: For winning tests, you may want to run the same test across multiple pages of your website. With Pack, you can create a template to add or modify sections that appear on multiple pages at once, and assign them to specific pages with tags.
In-between: For tests that aren’t a clear win or loss, it might be worth an iteration if you have a strong hypothesis for why the test didn’t have the impact you predicted; qualitative user research can be a powerful tool for digging into the “why” behind a result. You may also decide to scrap the test in favor of bigger needle-movers.
Losses: You’ll want to check in on your tests frequently to monitor for big drops in core metrics. Once you’ve identified a test as a loss, you can turn it off with a single click.
Common Pitfalls to Avoid
1. *Rushing Into Testing*
Without proper data infrastructure, you'll waste time on unreliable results.
2. *Not Connecting All Data Sources*
Make sure Pack's test data flows properly to all your analysis tools.
3. *Skipping Strategy*
Testing random elements without clear objectives wastes resources.
Ready to Start Testing?
Before launching your first test with Pack, ensure you have:
1. Clean, reliable event tracking
2. Server-side tracking solution
3. Connected analytics platforms
4. Clear testing strategy
5. Analysis plan
Need help evaluating your testing readiness? Book some time with our team.
Remember: The key to successful testing isn't just having the right tools—it's having the right foundation and strategy in place first.