
End-to-End Testing for Send Time Optimization


The AI (artificial intelligence, data science & machine learning) team here at Iterable strives to deliver high-quality, up-to-date machine learning models to our customers. But even when we’ve built a great model, how can we determine before deployment that this model is good enough to deploy? How can we avoid deploying “bad” models (models which don’t reflect our customers’ data well, or which have errors or biases in their design)? And how can we automate such testing to fit within our CI/CD framework?

In this article, I’ll outline the types of models used by our team and the challenges in testing them, followed by an overview of our end-to-end “synthetic data” testing solution for Send Time Optimization, one of our team’s products.

Unsupervised Machine Learning

Why Use Unsupervised ML?

The types of machine learning models we use are entirely motivated by the types of data available to us. The vast majority of Iterable’s data doesn’t come with a “ground truth” label. For example, is there a “ground truth” best time to send an email to a given customer, and even if there is, how could we identify that “ground truth” and validate it for accuracy? This means that supervised ML methods can’t be applied here, as we can’t define a loss function to optimize. Instead, we must use unsupervised ML algorithms to extract insights from our vast quantities of data.

Measuring Performance of Unsupervised ML Models

The AI team monitors the performance of our production models “in the wild” through key performance indicators (KPIs) such as open rate lift (please download “The Growth Marketer’s Guide to Email Metrics” for more details about these metrics!). Although KPIs are a great tool, the goal of this project is to identify problems before the model makes it into production, not afterwards. Therefore, I’ll focus on tests I can perform immediately after training, not tests in production.

One of the challenges of working with unsupervised models is evaluating their performance. Because you don’t have a “ground truth” for comparison, it’s not always obvious how to judge your model’s predictions or results. For example, one of the classic data science projects for exploring NLP (natural language processing) is topic modeling on the New York Times articles dataset.

Most data scientists can easily produce a model which outputs lists of words describing various “topics” within the dataset. It’s far more challenging, however, to determine whether those topics reflect the data well; often the scientist will simply read the topics and decide whether they match some internal expectations about what the “correct” topics should be, which is not a very reproducible or quantifiable test! In addition, the “human eyeballs” test is extremely susceptible to bias: the scientist might overlook clusters they can’t interpret clearly, or ignore or discredit clusters from topics outside their expertise.

For the AI team, however, such “eyeball” tests aren’t really possible. I can’t feasibly review millions of email opens on my own and personally determine whether our models match my expectations.

It’s fairly easy to rule out the “eyeball test,” but what options do we have left? Let’s consider clustering algorithms as an example. We can evaluate the within-cluster sum-of-squares (WCSS) to determine the “spread,” or inertia, of our clusters; however, many clustering algorithms minimize this value by design, and our only baseline for comparison would be the WCSS from previous models (not great if both models share the same biases!).
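As a concrete illustration, here is a minimal sketch of computing WCSS with scikit-learn’s KMeans; the toy data and the choice of k are purely illustrative.

```python
# Minimal sketch: computing WCSS ("inertia") for a KMeans clustering.
# The toy data and the choice of k=2 are purely illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=0)
X = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(500, 2)),  # blob centered at (0, 0)
    rng.normal(loc=5.0, scale=1.0, size=(500, 2)),  # blob centered at (5, 5)
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
# Lower inertia means tighter clusters, but k-means minimizes this by design,
# so it's a weak signal of model quality on its own.
print(f"WCSS (inertia): {model.inertia_:.1f}")
```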

Since we don’t have any “ground truth” labels, we can’t use many of the traditional measures of cluster quality, such as homogeneity (whether all data points in a cluster share the same “ground truth” label) or completeness (whether all data points with the same “ground truth” label belong to the same cluster).
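For reference, both metrics are available in scikit-learn, but each takes the ground-truth labels as its first argument, which is exactly what our production data lacks. The tiny example below is illustrative only.

```python
# Homogeneity and completeness both require ground-truth labels as input --
# precisely what's missing from our unlabeled production data.
from sklearn.metrics import homogeneity_score, completeness_score

true_labels = [0, 0, 1, 1]          # "ground truth" (unavailable in practice)
predicted_clusters = [1, 1, 0, 0]   # cluster assignments from a model

print(homogeneity_score(true_labels, predicted_clusters))   # 1.0
print(completeness_score(true_labels, predicted_clusters))  # 1.0
```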

Synthesizing Data With Labels

Bootstrapping in ML

Our approach to handling this problem is to synthesize labeled data via bootstrapping. Bootstrapping is a catch-all term for tests or metrics which use random sampling with replacement. A popular ML use case for bootstrapping is “bagging” (i.e., “bootstrap aggregating”), a meta-algorithm most commonly used with decision forest models to improve their performance. Bagging creates a unique sample (a “bootstrap sample”) for each decision tree in the forest by randomly sampling the original dataset with replacement. Because a bagged random forest averages the results of multiple classifiers, this results in lower variance.
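The core idea behind a bootstrap sample is simply resampling with replacement; here is a minimal, illustrative sketch in numpy.

```python
# Minimal sketch of a bootstrap sample: draw n points from the original
# dataset uniformly at random, with replacement.
import numpy as np

rng = np.random.default_rng(seed=42)
original = np.array([2.3, 4.1, 5.0, 3.7, 4.8, 2.9])

bootstrap_sample = rng.choice(original, size=len(original), replace=True)
print(bootstrap_sample)  # some values repeat, others may be missing entirely
```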

For our purposes, however, we’ll use bootstrapping to draw data from known distributions (or distributions designed by hand), labeling the points based on their source distribution.

What is Send Time Optimization?

In this example, I’ll focus on testing our Send Time Optimization (STO) product. For email blast campaigns, STO experiments attempt to maximize the email open rate by reviewing recipients’ historical engagement behavior (email opens). The message is sent to each recipient at the hour they’re most likely to open it (based on their previous opens). Therefore, the “ground truth” that we’re creating in our synthetic data is each recipient’s preferred time to open emails.

Bootstrapping for Sample Synthesis

Let’s say we want to create a sample of synthetic email open data where all customers are either “morning people” (i.e., their favorite time to open emails follows a normal distribution centered around 9 a.m.) or “night people” (the same thing, but centered around 9 p.m.).

  • To create synthetic data for a single customer, we first randomly choose this user to be either a morning person or a night person. We also choose n_user_emails, the number of unique email opens for that user (drawn from a normal distribution centered on a reasonable value).
  • To synthesize a user’s email opens, we randomly sample from their “source distribution” (either the morning-centered or night-centered normal distribution), repeating this sampling n_user_emails times to generate n_user_emails opens.
  • Repeat this procedure for as many users as desired and save the resulting table of synthetic customer email opens, with day/night labels, to a Delta table (a code sketch follows below).

Data is synthesized for each user based on their chosen label and n_user_emails.
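Here is a minimal sketch of that synthesis loop using numpy and pandas. The means (9 a.m. and 9 p.m.), spreads, user count, and column names are all illustrative, and the final write to a Delta table is omitted.

```python
# Illustrative sketch of the synthesis procedure described above.
# All parameter values and column names are made up for this example.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=0)
N_USERS = 1_000

rows = []
for user_id in range(N_USERS):
    # Randomly assign this synthetic user a "ground truth" label.
    label = rng.choice(["morning", "night"])
    preferred_hour = 9 if label == "morning" else 21

    # Number of opens for this user, drawn from a normal distribution
    # centered on a reasonable value.
    n_user_emails = max(1, int(rng.normal(loc=30, scale=10)))

    # Sample open hours from the user's source distribution.
    open_hours = rng.normal(loc=preferred_hour, scale=1.5, size=n_user_emails) % 24
    for hour in open_hours:
        rows.append({"user_id": user_id, "open_hour": hour, "label": label})

synthetic_opens = pd.DataFrame(rows)
# In our pipeline, this table would then be written to a Delta table.
```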

At this point, we’ve officially created our labeled data!

Evaluating Performance Using Synthetic Samples

Now let’s say that someone on the AI team has made changes to the STO model and would like to confirm that the updated model performs well as part of CI/CD testing.

  • When the model change is ready to merge and deploy, our CI/CD tools train an STO model reflecting the change using a recent set of synthetic data.
  • Next, send a set of synthetic data (either the same set used for training, or a separate testing set) through the model API.
  • The model assigns each user to a “result distribution” which should reflect that user’s open behavior, and the API samples that distribution to return the “best” open time for that particular user.
  • Repeat this sampling several times for each user, then perform a goodness-of-fit test to determine the likelihood that these samples came from a distribution matching the user’s “source distribution” (or a very similar-looking distribution).
  • If the result and source distributions have a similar shape, then the samples from the API should pass a goodness-of-fit test (such as an Anderson-Darling test) when compared to the source distribution.
  • On the other hand, we might find that the results from the API were unlikely to have come from a distribution matching the source (low goodness-of-fit); this could indicate that there’s a problem with our model or with the API.

To evaluate performance, we compare the samples coming from the model API to the source distribution chosen during synthesis and check the goodness-of-fit (see the sketch below).
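Below is a sketch of this check for a single synthetic user, using SciPy’s k-sample Anderson-Darling test to compare the API’s samples against samples drawn from the source distribution. The get_best_send_time() helper is a hypothetical stand-in for our model API, and the significance threshold is illustrative.

```python
# Sketch of the per-user goodness-of-fit check. get_best_send_time() is a
# hypothetical stand-in for the model API; the threshold is illustrative.
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(seed=0)

def get_best_send_time(user_id: int) -> float:
    """Hypothetical stand-in: pretend the model learned a 9 a.m. preference."""
    return rng.normal(loc=9.0, scale=1.5) % 24

user_id = 42
source_mean = 9.0  # this user was labeled "morning" during synthesis

# Repeat the API sampling several times for this user.
api_samples = np.array([get_best_send_time(user_id) for _ in range(200)])

# Samples drawn directly from the user's source distribution.
source_samples = rng.normal(loc=source_mean, scale=1.5, size=200) % 24

# Do the two samples plausibly come from the same distribution?
result = anderson_ksamp([api_samples, source_samples])
if result.significance_level < 0.05:
    print("Low goodness-of-fit: possible problem with the model or the API.")
else:
    print("API samples are consistent with the source distribution.")
```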

If the model’s performance meets an acceptable threshold that we define, the developer can choose to go ahead with deploying the model to customers. Otherwise, this suggests that there are errors in the model, and the developer should try to track them down and correct them.

Although I described a very simple testing case here (morning vs. night), this framework can be expanded to include more complex test scenarios.

Next Steps

If you’ve made it this far, thanks for reading! In this post, I summarized the reasons for synthesizing data to test unsupervised ML models. I also gave a simple example of how you could use this kind of test to gauge model performance in a CI/CD system.

This “end-to-end” test is an example of model testing: testing which confirms that the model follows certain expected behaviors. Our end-to-end test is one of several types of testing used by the AI team to ensure a high-quality product. Please look for future posts from AI on ML testing and technical debt reduction; this is an active area of development for our team, and we look forward to sharing our progress with you!
