Pie Insurance is a company transforming small business insurance. Pie has seasoned technology and insurance experts on a mission to make insurance less expensive, simpler, and more transparent for small business owners.
Tight timelines - the biggest issue
You have all of these different actors in your organisation: engineers, product, multiple projects, all juggling different timelines. Sometimes you just don't have as much time as you'd like for your research, and some research is better than none, so sometimes you have to get creative. These are some approaches I've used over time that have helped me cut down the time I spend on research.
Adapting to timelines
There is no one-size-fits-all solution for this.
Sometimes when somebody asks you to do research in an amount of time that is impossible, you will have to say no. When the timelines are tight but feasible, though, start with an alignment meeting. That might seem like adding more time, but I can tell you from personal experience that 30 minutes of an alignment meeting can save you 30 hours or more later on.
You want to bring in a representative from each stakeholder group who will be viewing or is interested in your research. The goal of an alignment meeting is to set expectations.
From an integrative thinking perspective, what that really means is you are, for example, bringing in sales and seeing what they really want to get out of this. You're bringing in customer service, you're seeing what they really want to get out of this. You're thinking about your research from the customer and seeing what they really want to get out of this. And when you go through that exercise, you find these really nice synergies and you're really trying to maximise the value of the piece of research you're doing across all those different stakeholders.
You're also setting expectations with all the various stakeholders who will be interested in this piece of research. You can even set the expectation that folks are not going to get a beautifully designed PowerPoint presentation with lots of videos spliced together. It can be a verbal readout if you need to move faster, or just bullet points. As long as your group knows that's coming, that's often quite acceptable. You can waste a lot of time documenting your research instead of moving towards the business outcomes you're actually trying to reach.
Finally, it's helpful to share the goal of the research. You can also send a written summary of the alignment meeting afterwards, to make sure everyone remembers what was agreed to once you get to the end.
In this alignment meeting, you will probably have a lot of folks adding different things they want, and if you have a short amount of time, you probably can't do it all. In those cases, you can just say: yep, that's a great idea, let's add it to our research backlog for the next iteration, because on a short timeline you simply can't add everything in. You don't necessarily want to tell people no, because you may be able to do it in the future. You want to tuck it away as something you'll look at later, assess the value of, and see if you can prioritise.
You're not going to be able to please everyone all of the time, but if you go through that exercise, you often get out in front of what each person wants from the research, instead of hearing it halfway through or at the end when you're presenting.
Start with existing data
Always start with the data you already have. Get creative about the data sources you can consider that might answer your research question.
It's surprising how many folks really don't know what data is available in other departments of the organisation. It's worth asking about things like recorded calls and notes in customer relationship management systems like Salesforce. Is there an inbox with customer emails? One of the really nice things about these sources is that when you set up, say, a user interview, you're artificially creating that situation: you're coming up with questions and trying to get people to imagine what they were doing. Sources like recorded calls, by contrast, are in-the-moment pieces of data, and I have found you can get some really great insights from them.

If you're looking for opinions, you can use tools like Social Searcher or UVRX to see what's being said about your company, or your company plus keywords if you're interested in perspectives on a certain thing your company does. You can also look at session recordings. If you have tools set up like Mouseflow or Datadog (there's a whole host of them), you can record what users do online, and you may just want to filter those recordings down to the key areas you're interested in.
You might not need to run usability tests at all; you might be able to observe those recordings and see where the problem area is. Review websites are good as well. If you're doing a competitive analysis and want to see not just what your competitor has but what people think about it, you can go to review websites. The information is either there or it's not, but you can also be proactive and set yourself up for success in the future. If you have a research archive, use it: look at it, search through it. If you don't have one, set one up. I've found Dovetail to be a great tool for this. It really is amazing how much research gets redone because people weren't able to find a previous test, or it got lost in some set of folders somewhere.
You might just want to go to your web analytics tool and look for drop-off in your funnel or experience flow. Take a shopping cart as an example: maybe you find that the majority of the drop-off happens on one particular page. Then you're not researching everything; you may be able to do a much more focused piece of research. You're always trying to scope the research down to just enough to make the decision.
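Finding that worst page is simple arithmetic on the step counts your analytics tool exports. A minimal Python sketch, using made-up step names and view counts purely for illustration:

```python
# Hypothetical page-view counts per step of a checkout funnel,
# as you might export them from a web analytics tool.
funnel = [
    ("cart", 1000),
    ("shipping", 820),
    ("payment", 400),
    ("confirmation", 360),
]

def step_dropoff(funnel):
    """Return (step, % of users lost relative to the previous step)."""
    results = []
    for (_, prev_n), (name, n) in zip(funnel, funnel[1:]):
        lost_pct = 100 * (prev_n - n) / prev_n
        results.append((name, round(lost_pct, 1)))
    return results

for name, pct in step_dropoff(funnel):
    print(f"{name}: {pct}% drop-off")
```

With these example numbers, the payment step loses roughly half the users who reach it, so that single page is where a focused piece of research would go.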
Unmoderated usability tests can be very quick. You typically buy a platform like usertesting.com (there are tons of them out there), which often has a pool of participants. You write the test script in advance, set up the test, and a participant picks it up and goes through it themselves. There are a number of advantages to this. With moderated tests you have to talk the participant through everything and figure out all the calendar scheduling; with remote unmoderated testing, that all happens on its own once you set it up. Tests can also run in parallel: with a very generic audience, you could easily have multiple tests going on at the same time, launched in a matter of minutes. That said, for most tests with a general audience, in my experience you want to give it a day or two.
One of my favourite usability tests is to start people on Google or on a blank page. A blank page is better because you're not leading folks towards a particular search method or starting point. You just ask them: go ahead and do X. One of the benefits of an unmoderated remote usability test is that you can't interfere with it. You can't ask questions, as you could in a moderated session, that might prompt different behaviours than if people just went and did it themselves.
Another really great thing about this type of test is that people often focus on the experience within the context of just their own product; they'll start people in their own quote flow, for example. But some of the best information I've found comes from starting people earlier and seeing where they shop and who they look at before they ever get to your experience. What are they thinking about in the broader context before they even reach you? It's a super simple usability test, and it's super valuable.
One thing I'll also say: if you are exploring different partners for recruitment, a really good question to ask is whether they have recruited this type of user before. It is not ideal to enter a relationship with a recruitment provider where it's their first time figuring out how to reach these people, as that can of course extend your timeline.
I'd also say, if you're under time pressure, over-recruit by two users in these tests. There are a lot of professional testers out there at the moment, people who are just faking the tests to get paid. If you're really under time pressure, you want to account for that up front rather than getting to the end and having to relaunch the test. So if you really care about speed, you might want to over-recruit a little.
You don't need to have everything built out to the nth degree of fidelity. Sometimes you can get really good feedback from a low-fidelity prototype.
Another thing you can do, though it requires being a little more proactive, is having a design system. You might not have to design everything in high fidelity because you already have the components of your website. With a design system, either coded or even just a set of templates in Figma or whatever tool you're using, you can copy and paste those bits and pieces and edit them as you go. That can save you a lot of time. You have to build it up first, of course, but it's much better than designing from scratch.
Sometimes you'll be in a meeting trying to gain consensus. Instead of asking a stakeholder "when can you get me this piece of information I need for my research, or make this decision?", ask "how soon can you?". That puts the emphasis on: this is actually urgent, and I need the earliest possible time I can get it.
A phrase I've found super helpful is: "Based on what we have observed [state those observations], I hypothesise we should trial X [your suggestion for moving forward], unless there are any violent objections." There will always be little things that could be done better, but you're saying: hey, group, this is what we're going with unless there are any big objections. That's often a good way to move the group forward.
Another one is making sure you set those expectations and timeframes: "As a next step, we'll meet with whoever and whomever by Y date to determine Z." Make sure you're setting that expectation for the next step. Too often people leave meetings and only later realise they should have agreed on the next step while still in the room. By always setting the next deadline, you keep up a nice cadence and keep the project moving forward.
The phrase works because you're starting with what we have observed, so you're not necessarily introducing your own opinion. By saying "I hypothesise", you're not claiming this is definitely the best way to go; it's just a hypothesis to test. When you're very definitive, people can get defensive, and that can kick off a whole debate. Framing it as a hypothesis lets others chime in, so it can be a good way to uncover and resolve objections quickly.
Tight timelines aren’t going anywhere. Running alignment sessions helps to remove ambiguity and misunderstanding that can often derail research timelines. After that, using unmoderated tools can deliver very quick turnaround times, particularly paired with low fidelity prototypes. Finally, it is really important to put timelines on actions so that the insights of the research get put into action.
My key takeaway is that it is possible to do high-quality research quickly; it just involves identifying what takes the most time in your current processes and implementing approaches to minimise it.