So this is fundamentally that situation where you find an insight. It might come from keyword research. It might come from a technical audit of the site, whatever it might be. You have a theory, a hypothesis about something that is going to benefit your website. You implement the change as a result, and you fall flat on your face. You fail spectacularly, and your test result data looks a little bit like this.
Now, this is actually quite an exaggerated case. A lot of the failures that we see are -2%, -3%, or just a flat line, and those -2% and -3% type ones can be really hard to pick up without scientifically controlled testing, which is what we focus a lot of our time on, on really big websites. They can really add up. If you are continuously rolling out those little negative changes through the course of the year, it can really be a drag on your SEO program as a whole.
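To see how those small losses add up, here is a minimal sketch, assuming a hypothetical scenario of one undetected -2% change shipped each month for a year:

```python
# Hypothetical illustration: small, unnoticed regressions compound.
# Assume one -2% change ships each month for twelve months.
traffic = 1.0  # start at 100% of baseline organic traffic
for _ in range(12):
    traffic *= 0.98  # each change quietly removes 2%

print(f"Remaining traffic: {traffic:.1%}")  # roughly 78.5% of baseline
```

Under those assumptions, twelve individually invisible -2% changes leave you with only about 78.5% of your baseline traffic, a cumulative drag of more than 20%.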
You roll out that change, and it can get lost in the noise: the seasonality, other sitewide changes, Google algorithm updates, things your competitors get up to. That's what we're trying to spot and avoid. So what can we learn from losing tests, and when can they benefit us as a business? Well, one of the perhaps counterintuitive benefits is the drop in effort that you might be asking of your engineering team.