# Testing by label

Multi ad group testing with labels lets you split-test **sets of ads** across multiple ad groups or campaigns, using labels to include or exclude ads as needed.

This approach can be **useful for low-volume accounts** that don’t accrue enough data to reach statistical significance with [single ad group testing](https://docs.adalysis.com/tools/ad-testing/single-ad-group-testing).

### **How to run multi ad group tests with labels**

#### Manual tests

<figure><img src="https://3203600314-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FGlXujWJyreRidNjZDCw2%2Fuploads%2FIt5i7kx9qbkCmOE2u9Kg%2Ftesting%20by%20label.png?alt=media&#x26;token=97ac3781-7a8b-4ba0-aa27-d2048faea773" alt="" width="273"><figcaption></figcaption></figure>

You can run multi ad group tests as often as you need. Go to **Ad testing** > **Multi ad group** > **Test by labels**:

1. Select your campaigns and the date range for this test. If you choose a common date range, Adalysis calculates the period during which all ads in the test were active.
2. Specify the ad labels you want to use for comparing your ad performance. You can also exclude ads in the **Label contains none** column.
3. Optional: Select the banner sizes you want to compare.
4. Click **Find current winners**.
5. If you want Adalysis to run this test automatically, click **Save & run this test daily**.

<div align="center"><figure><img src="https://s3.amazonaws.com/cdn.freshdesk.com/data/helpdesk/attachments/production/1152927401/original/QkMeoo6vQLOOC7epYpP7lyKs3j58w_mBAg.png?1735293020" alt="" width="375"><figcaption></figcaption></figure></div>

{% hint style="warning" %}
Please be careful when using a **common test date range**. If even one matching ad out of hundreds was enabled only a few days ago, the whole test will run on only those last few days of data.
{% endhint %}
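The common date range is effectively the intersection of every ad's active period, so a single recently enabled ad shrinks the whole test window. A minimal sketch of that intersection logic (illustrative only, not Adalysis code):

```python
from datetime import date

def common_date_range(active_ranges):
    """Return the (start, end) window during which every ad was active,
    or None if the ranges don't all overlap."""
    start = max(r[0] for r in active_ranges)  # latest start date wins
    end = min(r[1] for r in active_ranges)    # earliest end date wins
    return (start, end) if start <= end else None

# Two ads running since June, one enabled just days before the test:
ranges = [
    (date(2024, 6, 1), date(2024, 9, 30)),
    (date(2024, 6, 1), date(2024, 9, 30)),
    (date(2024, 9, 27), date(2024, 9, 30)),  # recently enabled ad
]
print(common_date_range(ranges))  # the shared window shrinks to 4 days
```

This is why one newly enabled ad can silently discard months of data for every other ad in the test.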

#### Daily test runs

Unlike single ad group testing, you must define a multi ad group test before Adalysis can automate it. Save a manual test by clicking **Save & run this test daily**, or choose **Define an automated test**.

#### Understanding the test result data

Alongside the aggregate performance of each label over the chosen date range, you'll see the algorithm's confidence in each metric, plus projected performance boosts from pausing all ads with the losing label(s).

Review the details of each label you included or excluded: how many ad groups and ads it applied to, and the individual ads under each label. (Included labels are shown in blue, excluded labels in red.)

<figure><img src="https://s3.amazonaws.com/cdn.freshdesk.com/data/helpdesk/attachments/production/1132949079/original/oB7pgr8crZ-8UIOWSx9kNBi8HChYOu_5CQ.png?1686981948" alt=""><figcaption></figcaption></figure>

<figure><img src="https://s3.amazonaws.com/cdn.freshdesk.com/data/helpdesk/attachments/production/1132949083/original/sQsrPoGbW2dnGIzMdrTFbYmzTZrzAJ-NDg.png?1686981985" alt=""><figcaption></figcaption></figure>

#### **Bulk actions**

You can pause all ads with the losing label(s) for a specific metric. Once you pause the ads, the test is archived and remains available under the **Archived test results** tab.

<div align="left"><figure><img src="https://3203600314-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FGlXujWJyreRidNjZDCw2%2Fuploads%2F80FxXPAhoaFZezBrMRT1%2Fpause.png?alt=media&#x26;token=8d32ee1d-3c07-4657-8654-579e7332ba24" alt=""><figcaption></figcaption></figure></div>


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.adalysis.com/tools/ad-testing/multi-ad-group-testing/testing-by-label.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
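For example, a question can be URL-encoded into the `ask` parameter before issuing the GET request. A minimal Python sketch (the question text is illustrative):

```python
import urllib.parse

# Current page URL, as given above:
BASE = "https://docs.adalysis.com/tools/ad-testing/multi-ad-group-testing/testing-by-label.md"

def build_ask_url(question: str) -> str:
    """Build the ?ask= query URL; the question must be URL-encoded."""
    return f"{BASE}?ask={urllib.parse.quote(question)}"

url = build_ask_url("How do I exclude ads from a label-based test?")
print(url)
# Issue an HTTP GET on `url` (e.g. with urllib.request or requests)
# to receive the answer and relevant documentation excerpts.
```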
