When we started CookiesHQ, we always had the idea of becoming an open, constantly improving company.
We’ve refined our processes, found ways to become a remote-first company, introduced and played around with our Cookies Lab idea, and overall made sure that we weren’t turning into an incredibly dull machine.
But sometimes I feel that we could do a bit more, or a bit better.
There are some core elements that feel very ‘status quo’ at the moment (and I’ve always hated the status quo).
This is why, starting next year, we’ve decided that CookiesHQ will run A/B tests on itself.
Those experiments will be structured around the idea of making the company a better place to work for and with.
In keeping with our initial openness idea, we will share all these experiments, ideas and results on the blog.
What can we learn from A/B testing a website and how can we apply this to a team?
When growing your application, you should always be A/B testing.
To get meaningful data from your app A/B tests, you first need to define a conversion that you want to increase. This could be increasing specific page views, increasing sign-ups, increasing clicks, etc. The list goes on.
Then you isolate the relevant conversion triggers (button, sentence, colour, form, image, etc.) and compare how one version performs against another, usually of the same kind.
To make sure that your data is correct, you will probably run only a small number of experiments concurrently, and ideally on separate triggers, to ensure that experiments and conversion results don’t collide.
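To make the website analogy concrete, here is a minimal sketch of how a variant comparison is usually scored: compute each variant’s conversion rate, then a two-proportion z-test to judge whether the difference is likely real. The numbers and function names are purely illustrative assumptions, not anything CookiesHQ actually runs.

```python
import math

def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the goal (e.g. signed up)."""
    return conversions / visitors

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test statistic comparing variant B against variant A."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the assumption that both variants convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical example: a new button colour (B) vs the control (A)
z = z_score(conv_a=120, n_a=2400, conv_b=151, n_b=2400)
# As a rule of thumb, |z| > 1.96 corresponds to ~95% confidence
print(round(z, 2))
```

The same shape of comparison (baseline vs experiment, with a threshold for "this difference probably isn’t noise") is what we’ll be trying to approximate with our monthly company experiments.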
I’m sure we can A/B test a company
If we think about it, we can certainly apply the same idea to a team (at least to a small team).
As a director, I want to improve specific metrics in CookiesHQ
- People happiness (employees and clients)
- Knowledge (always do things better / share more about what we know)
- Employee health
Those are the primary metrics of any company (or should be).
If those are the metrics we want to improve, we need to find what triggers can improve them.
If we take productivity, for instance, we may want to test the impact of capping our consulting time at 6 hours per day.
For people happiness, Gemma suggested testing the impact of an office pet.
Employee health might call for everyone to take a walk at lunchtime.
The catch is, apart from January, we haven’t actually fixed these triggers yet. We have a Trello board full of ideas discussed internally, but if you have any ideas, please feel free to leave a comment!
In order to ensure that we don’t obtain biased results, we will try to only run one experiment at a time.
We’ve decided that since the year is divided into 12 months, we will probably run each experiment for one calendar month at a time.
At the end of each month, we will review the experiment to see whether we want to continue with it, move back to the old way, or create a mixture of old and new.
Calculate the conversion rate
Let’s face it, this one is going to be tricky. When there is a metric that can be calculated (be it time, number of exceptions on live code or steps per day) then deciding if the experiment had a positive or negative impact will be straightforward.
When it’s based on something less tangible (wellbeing / happiness / general productivity), then we will have to rely on our old, untrustworthy brains and emotions.
In those scenarios, maybe a vote amongst the team with discussions and comments will be the best option.
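For the intangible metrics, the end-of-month review could be as simple as tallying the team’s votes across the three outcomes described above (continue, revert, or mix). This is a purely illustrative sketch; the vote values are made up.

```python
from collections import Counter

# Hypothetical end-of-month votes from the team on an experiment
votes = ["continue", "continue", "mix", "revert", "continue"]

tally = Counter(votes)
decision, count = tally.most_common(1)[0]  # most popular outcome wins
print(f"{decision} wins with {count} of {len(votes)} votes")
```

In practice the discussion and comments around the vote will matter more than the raw count, but the count at least gives each month’s review a recorded outcome.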
To wrap up
I don’t know yet what the outcome of A/B testing CookiesHQ will be, but I find it extremely exciting.
There is something fascinating about injecting new habits and stacking small improvements over time.
If your team has run the same kind of tests on themselves, or if you would like to suggest possible experiments, please feel free to submit them in the comments or contact me at email@example.com.