You've run an A/B test on ad copy for a week. Variant A got 12 conversions from 84 clicks. Variant B got 17 from 98. One looks better, but how do you know it isn't just random luck? Guessing wastes ad budget. You need a clear, statistical winner – not a gut feeling.
Quick Scan
- Best for: PPC specialists, paid search managers, digital marketers running ad copy tests.
- Output: A report with winner declaration (confidence level), effect size, and copy insights.
- Inputs: Impressions, clicks, conversions for two ad variants.
- Saves you from: Manual spreadsheet analysis and second-guessing.
- Caveat: Requires at least 5 conversions per variant for reliable results.
What It Does
The Ad Copy Split Tester skill reads your A/B test data and applies a two-tailed z-test for proportions. It calculates conversion rates, statistical significance, and a p-value. If the difference is significant (default 95% confidence), it declares a winner. Then it goes beyond numbers: it examines the winning copy and identifies what likely made it work (urgency, specific numbers, benefit framing) and gives you concrete suggestions for your next test.
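The skill's internal implementation isn't published, but the statistic it describes is a standard two-proportion z-test. A minimal sketch of that test (function name and return shape are mine, not the skill's):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, clicks_a, conv_b, clicks_b):
    """Two-tailed z-test for a difference in per-click conversion rates."""
    p_a = conv_a / clicks_a
    p_b = conv_b / clicks_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value from the standard normal distribution
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# The numbers from the opening scenario: A 12/84, B 17/98
p_a, p_b, z, p = two_proportion_z_test(12, 84, 17, 98)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

Run on the opening scenario, this gives z ≈ 0.56 and p ≈ 0.57: B merely looking better is nowhere near significance, which is exactly the false lead this kind of test guards against.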
When to Use (and When Not to)
Use it when:
- You have an active A/B test with at least a few hundred impressions and some conversions.
- You need an objective winner to roll out across campaigns.
- You want to understand why a variant won, to feed your copy playbook.
Skip it when:
- You have fewer than 50 conversions total – statistical tests become unreliable.
- You're testing more than two variants (this skill handles pairwise comparison).
- You need to test landing pages or image ads (focus is ad copy).
How to Install and Use
- Download the ZIP from this page.
- Extract it into your AI workflow skills directory.
- Start a new conversation with Claude and activate the skill by typing: "Analyze my ad copy test results and tell me the winner."
- Provide your data in this format:
Variant A (Original): Impressions=1200, Clicks=84, Conversions=12
Variant B (New headline+desc): Impressions=1150, Clicks=98, Conversions=17
- The skill will return a structured report within the conversation.
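To see how that input format maps to a verdict, here is a rough sketch, assuming a simple regex parse and the same two-proportion z-test; the parsing logic and field names are my own illustration, not the skill's actual code:

```python
import re
from math import sqrt
from statistics import NormalDist

LINE = re.compile(
    r"Variant\s+(\w+)\s*\(([^)]*)\):\s*"
    r"Impressions=(\d+),\s*Clicks=(\d+),\s*Conversions=(\d+)"
)

def parse(text):
    """Parse the two-line input format into per-variant dicts."""
    variants = {}
    for label, name, imp, clk, conv in LINE.findall(text):
        variants[label] = {"name": name, "impressions": int(imp),
                           "clicks": int(clk), "conversions": int(conv)}
    return variants

def report(text, confidence=0.95):
    """Return rates, p-value, and a winner/no-winner verdict."""
    v = parse(text)
    a, b = v["A"], v["B"]
    p_a = a["conversions"] / a["clicks"]
    p_b = b["conversions"] / b["clicks"]
    pool = (a["conversions"] + b["conversions"]) / (a["clicks"] + b["clicks"])
    se = sqrt(pool * (1 - pool) * (1 / a["clicks"] + 1 / b["clicks"]))
    p_value = 2 * (1 - NormalDist().cdf(abs((p_b - p_a) / se)))
    if p_value < 1 - confidence:
        verdict = "winner: " + ("B" if p_b > p_a else "A")
    else:
        verdict = "no significant winner yet"
    return {"rate_a": p_a, "rate_b": p_b, "p_value": p_value, "verdict": verdict}

data = """\
Variant A (Original): Impressions=1200, Clicks=84, Conversions=12
Variant B (New headline+desc): Impressions=1150, Clicks=98, Conversions=17
"""
print(report(data))
```

For the sample input above, the difference (14.3% vs 17.3% per click) is not significant at 95%, so the verdict is "no significant winner yet".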
A Concrete Example
Input: A PPC manager for a mid-size ecommerce brand tests two versions of a search ad headline for a summer sale campaign.
A: "Summer Sale – Up to 50% Off" → Impressions=3500, Clicks=140, Conversions=18
B: "Save 50% on Summer Favorites Today" → Impressions=3420, Clicks=165, Conversions=27
Output (summary):
- Conversion rate per click: A 12.9% (18/140), B 16.4% (27/165), a relative lift of +27% for B.
- Verdict: no winner yet. The two-tailed z-test gives z ≈ 0.86, p ≈ 0.39 (about 61% confidence), well below the 95% threshold.
- Copy insight (provisional): "Save 50%" plus "Today" pairs a concrete benefit with urgency, which may explain B's higher click-through rate (4.8% vs 4.0%).
- Recommendation: keep the test running until each variant approaches 50 conversions, then re-run the analysis before rolling anything out.
Limitations
- Data requirements – Statistically valid only with at least 5 conversions per variant (preferably 50+). Less data means lower reliability.
- Two variants only – If you test more than two copies, run separate pairwise comparisons.
- No causal analysis – The skill points out copy patterns but can't prove causation. Confounding factors (day of week, audience differences) are for you to control.
- Doesn't replace good test design – Randomization and timing are still your responsibility.
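The skill analyzes a finished test but doesn't plan one. A standard power-analysis sketch can estimate how many clicks per variant you need before the z-test has a realistic chance of detecting a lift (this helper is my own, not part of the skill):

```python
from math import ceil
from statistics import NormalDist

def clicks_needed(base_rate, lift, alpha=0.05, power=0.8):
    """Approximate clicks per variant to detect a relative lift in
    per-click conversion rate with a two-sided test."""
    p1 = base_rate
    p2 = base_rate * (1 + lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    n = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2
    return ceil(n)

# e.g. detecting a +30% relative lift on a 10% baseline conversion rate
print(clicks_needed(0.10, 0.30))
```

On those assumptions the answer is roughly 1,800 clicks per variant, which is why tests with a handful of conversions rarely produce a trustworthy winner.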
Get the Skill
Bring statistical clarity to every ad copy test. Download the Ad Copy Split Tester below and stop guessing.
---
Related skills: None yet – this is a standalone tool for ad optimization.
Changelog
- May 12, 2026: Initial release.
Install
Start with the ZIP package, then choose the AI tool workflow that fits your setup.
- Download ad-copy-split-tester.zip (6 KB).
- Open the package and read SKILL.md, plus any included references, templates, or scripts.
- Use the instructions directly in any AI tool that supports reusable instructions, project knowledge, custom agents, or uploaded reference files.
For Claude Code, unzip into ~/.claude/skills/ so the folder lands at ~/.claude/skills/ad-copy-split-tester/, then reload Claude Code. For Claude.ai, upload the same ZIP from Customize → Skills.
Use with other AI tools
This package is not locked to one vendor. If your AI tool does not support Claude-style skills, copy the core instructions from SKILL.md into the tool's custom instructions, project prompt, agent setup, or reusable prompt library.
- Upload or paste any included reference files as project knowledge where your tool supports it.
- Keep the output format from SKILL.md intact so results stay predictable.
- Run a small test with your own data before using the workflow in production.
Compatibility depends on the features your AI tool provides. Treat scripts as optional local helpers unless your environment can run them safely.
Claude installation reference
The ZIP also follows Claude's Agent Skills structure: a folder with a required SKILL.md file plus optional scripts, references, templates, and resources.
- Agent Skills for Claude Code: official guide for creating, installing, testing, and debugging skills.
- Creating custom skills for Claude.ai: official guide for packaging a skill as a ZIP and uploading a skill.
- Using Skills in Claude: plan availability, enabling Skills, and how Claude invokes them.