
Continuous User Involvement

When we say we are a digital agency specialised in data-driven product development, it means exactly that: data-driven. We help our clients execute on their business goals by understanding their audience and making sure the product resonates with them. Data is a crucial factor in this process.

Authors
Virginia Rispoli
Human Centred Designer
Joy Jansen
Human Centred Designer

Introduction

The involvement of people through user research results in key insights that you would probably not be able to predict or hypothesise. By continuously involving them, you are on the path to making the right product right. 

For us at fresk.digital it is second nature to involve people in our work process; we use it for understanding and validating. Our clients don’t question this way of working, because they have seen first hand what its value is. But what we do see is that not all our clients (or peers) know how to move towards continuous user involvement.

That is why one of fresk.digital’s talented Human Centred Designers, Joy Jansen, dived deeper into the practical side of continuous user involvement. She delved into various challenges that we often observe at companies during the planning and execution of user research. One of these challenges is the blank page syndrome: a concrete goal is missing from the plan, making it difficult to set a sharp scope. This is actually one of the main pitfalls and barriers when carrying out user research, but it is not an insurmountable challenge.

In this report we have combined four articles written by Joy Jansen and Virginia Rispoli (including new input) that help tackle this challenge. It is a four-phase approach to counter the blank page syndrome and gain valuable insights:

  • Phase 1: How can you best define scope?
  • Phase 2: What research methodologies should you use?
  • Phase 3: How to gather the right participants for your validation?
  • Phase 4: Which tools should you use to support your research?

Before we start, it’s good to remember that user research is an evolving process, and it's not uncommon to face challenges along the way. The key is to stay adaptable and be open to trying different approaches until you find what works best for your specific project and team. This report contains our approach at this moment.


Phase 1: How can you best define scope?


The first step in starting your user research project is to define the scope. Doing so helps you maintain a grip on your results, keep focus and know what you are actually researching. Unfortunately, we have seen that this is a challenge for many people and can even turn into a barrier instead of a guiding tool. Because it can become a barrier, the risk is that user research is either postponed entirely or that the scoping step is skipped in order to gather insights as quickly as possible, without a real plan.


Both risks are a shame: skipping user research altogether means you are not validating with your audience, and starting without a plan means you probably don’t even really know what you are validating with your audience. Or it simply becomes a messy process in which you end up with a long list of findings, without knowing exactly which question to answer or which next step to take. Usually this means findings remain unresolved and no further action is taken. In addition, testing without a scope can easily cause you to fall back on the most obvious research method as a default. A pity, because it means you ignore other, more fitting methods that could give you better and even newer types of insights, and that can actually be more economical than the standard you are used to.

So, if you don’t know where to start, let us walk you through the steps that help you define the scope for your user research:

  1. Leverage existing knowledge and identify gaps
    Efficiency and thoroughness are essential in user research. That is why we advise you to start by identifying what’s already known and what gaps exist; this way you ensure that the research efforts are both informed and targeted. This can be anything from access to research tools that the client is already using (Google Analytics, Hotjar, previous research results, you name it), to planning initial chats with key stakeholders. This sets the basis for the next steps, but it can definitely be done simultaneously with the other steps.
  2. Define clear goals and priorities 
    To choose a scope, it is necessary to set goals and priorities. To do this, it often helps to think about the process "in reverse" and ask yourself first what you think you need to learn in order to really improve a product or service. 
    Drawing from a case like the NS-TIER integration, where a goal-setting session (as shown in the picture below) with the involved stakeholders from the client side was instrumental in building consensus, this step sets the tone for the entire research process. By aligning stakeholders and team members around specific goals, you pave the way for focused and effective research outcomes.

    How to get ready for such a session? This session can be done online or in person (preferred) and can last from one to a maximum of two and a half hours, depending on the size of the project and the number of people involved. To select the best activities for it, we use a list of questions to prepare for the meeting. Our advice is to try to identify what is not yet known, what the possible uncertainties or risks within or for the project could be, and what really needs to be figured out. In this way, gaps, usage problems and limitations can be identified and tackled early.

    The questions that can help with preparing and running the session:
    - In general: What do we want to find out? What topics or questions do we want to explore? What questions do we already have? What do we not yet know about the project, product or service?
    - Target audience: Who will soon be using this product or service? And in what context? How well do we know this person?
    - Challenges: Are there potential challenges that we already foresee? What could possibly pose a problem for success?
    - Business value: What role will this product soon take within the company and what goal does it help achieve? What decision will this research enable? And what are the questions from our stakeholders? Is any particular decision dependent on this research? If so, how will stakeholders/interested parties make a decision for the product/service based on this research?

    In the answers to these questions, themes will emerge that will give you a better idea of possible research questions, challenges, topics or research directions that are important to explore further. Together with the team and stakeholders, you can analyse these topics and discuss which topic or question is your main priority. In particular, consider the phase of the project and the impact the question has on the next phase or on the success of the product. Through this consideration, you can prioritise within the topics and choose a scope or direction for the upcoming research. 

Goal-setting session with the NS team during the NS-TIER integration project.

  3. Engage stakeholders and align with business objectives
    Define what business objectives this product or service will support. By knowing the business objectives, you get a better idea of what the use of the product is and how you can investigate whether the product or service works and supports this in the right way. Engaging stakeholders and linking research goals to broader business objectives is crucial for ensuring the relevance and impact of your findings. This ensures that the user research track is not only methodologically sound but also directly contributes to overarching business goals.

    During a project with IPE (part of FDMG), we started our user research process with a kick-off event focused on acquiring a deeper understanding of the company and its products’ objectives. Through collaborative efforts with key stakeholders, a comprehensive understanding of the essential features, key service channels, and business objectives was established. These insights were acquired through a series of workshops, enabling us to determine the focal points that the user validation should endorse and the significance of the products in their overall business operations. Together with the stakeholders, we analysed the outcomes of the sessions. This collaborative effort helped us in formulating research questions and creating a scope that enabled us to delve into the company’s assumptions and evaluate the significance of their service. By doing this research, we were able to pinpoint potential areas where enhancements could be made to optimise their products in reaching their business goals.

Mapping the customer journey, business objectives and channels of IPE during a workshop with their stakeholders.

  4. Understand the moment in the project
    Know where you are in the product development process. Understanding the current phase of the project or service is pivotal in tailoring your research questions and choosing the right research methods. Each phase presents unique challenges and opportunities, and asking the right questions at the right time can uncover invaluable insights. This step ensures that your research efforts are finely tuned to address the immediate needs of the project.

    In fact, the research goals of a project will look different depending on the stage the product or service is in. When we look at our way of working, for example, we see three phases where a different purpose and form of research is needed to reach the right insights:

    Discover & strategise:
    During this phase, we investigate what is most important to the ultimate user, stakeholders and business. We examine the context, the concept, the end users and the ecosystem of the business to get a better understanding of what really matters and what we are trying to solve. The goals of this phase are mainly:
    - Gaining an understanding of the situation, gaining insights into the context and understanding the problem/opportunity;
    - Specifying a concept/context of a possible solution (service/product) to the problem.

    Create & Launch:
    During this phase, we work towards a Minimal Meaningful Product (MMP) and test each iteration with users to make sure we are creating the most meaningful product. The goals of this phase are mainly:
    - Validating the design and design choices (the user interface); 
    - Validating the technology (performance).

    Optimise & Grow:
    During this phase, we go live with the validated MMP and aim to keep learning and iterating to continuously improve and develop the product. The purpose of this phase is mainly to:
    - Continue to listen and analyse feedback and user behaviour (through analytics & e.g. Hotjar) in order to evaluate possible improvements.

  5. Define audiences with a scenario-based approach
    Identify the key user groups and usage scenarios most relevant to the project. Whether it’s through demographic characteristics or usage patterns, this step lays the foundation for a focused and representative research effort. Working scenario-based means analysing and understanding the context in which people will use the products or services. This makes it possible to focus on the goals and needs of the users rather than describing the people who are using the products.
    Scenario-based research seeks to describe systems in terms of the actions that users will try to perform when they use those systems, ensuring that the focus remains on the goals of the users. By defining scenarios you define different types of user groups, or as we call them, audiences, that might use your service.
    The definition of these audiences was crucial during the Optimise & Grow phase we did for Ravlling, where we took the time to see how the platform was performing after a couple of months of being live. We started the project by defining three different audiences, each defined by a goal for which they would use the platform. Whether it was related to getting inspiration, comparing options, or actually booking a trip, all these scenarios were analysed through the eyes of a specific audience. This led us to see that there were behaviours which would not fit into any of the previously defined audiences, and got us digging deeper into the goal of this new audience we had discovered. By understanding how the platform was used, we could then make new informed decisions for future improvements.
Framework for audience definition done with Ravlling.

With these five steps, you have the tools to successfully define the scope and target audience for your research, setting the foundation for your user research. Nice!

Phase 2: What research methodology should you use?


Now that you have defined the scope, it’s time to select the right validation methods and to create a test plan. 

Continuous user involvement literally means continuous: recurring involvement of people, and therefore continuously validating and understanding whether you are making the right product right. The method that best supports this iterative and ongoing testing is hypothesis-driven research. This method helps keep validation rounds small and focused and allows for multiple validations in smaller iterations rather than one large-scale validation at once. Using hypotheses helps us test the product with clear focus and lets the results support or contradict each other.

What is hypothesis-driven research?

Hypothesis-driven research and design serve as the foundation for our user-testing approach, where we are able to understand users extensively through both qualitative and quantitative data. We formulate clear hypotheses based on existing user insights (if available), or previous knowledge and anticipated behaviours if it’s our first round of research, and we structure our research and design process around specific assumptions to validate or refute. 

This method guides us in crafting tests and experiments that incorporate qualitative methodologies such as interviews, contextual inquiries, and usability testing alongside quantitative measures like surveys, analytics, and A/B testing. By integrating these approaches, we not only uncover nuanced user behaviours, needs, and motivations but also quantify and validate these findings, ensuring a holistic view of user experiences. This blend of qualitative depth and quantitative facts allows us to iteratively refine our products, ensuring they authentically address user needs while meeting measurable success criteria.

How can you identify your research hypotheses?

Drawing from the hypothesis-driven research approach, it's essential to articulate clear and testable hypotheses. The hypothesis-driven approach helps in structuring research activities, interpreting results, and drawing meaningful conclusions that directly address the defined goals and priorities. In fact, these hypotheses serve as a thread to follow towards the outcome of the research and provide a focused direction for data collection and analysis, ensuring that the research efforts are purpose-driven and results-oriented.


Defining the best hypotheses is not a solo activity. It is important that core stakeholders are involved as well. Their perspective helps give a very tangible vision of what we are going to test and which assumptions are already there. As we work for clients, we involve them early on in the research process, usually in the form of a hypothesis-writing session, which can for example take place in the kickoff meeting we mentioned in phase 1.

Hypothesis writing session with the NS team

Writing effective hypotheses for user tests involves a mix of understanding user needs, using available data, and framing statements that can be tested and validated throughout the testing process. We use the following 5 steps to make the hypothesis writing session a success:

  1. Identify assumptions
    Start by pinpointing what you believe to be true or what you're uncertain about regarding user behaviour, preferences, or the product's usability.
  2. Use data and insights
    Base your hypotheses on existing user data, research insights, or observations. This helps in creating informed assumptions that are more likely to yield valuable results. And in the case of a new project or greenfield situation, there’s always a possibility to use desk research in selecting assumptions that you might want to validate.
  3. Formulate clear statements
    Write clear and specific hypotheses that articulate what you expect to discover or confirm. A good hypothesis typically follows this structure: "If [this change or action], then [this specific outcome] will happen because [reasoning based on user insights or data]."
    What also helps is to have this structure visible on a screen and/or printed on cards, so all stakeholders are reminded of the parts that need to be filled in. Limit each hypothesis to one specific aspect or assumption at a time. This keeps your testing focused, helps maintain clarity and avoids confusion when interpreting the results. (A short sketch of how such hypotheses can be captured in a simple structure follows after the examples at the end of this list.)
The template we normally use to help people write hypotheses

  4. Ensure testability
    Make sure your hypotheses are testable. This means they should be specific enough that you can measure and observe results that support or contradict the hypothesis.
  5. Have some examples handy
    This approach is probably new for most participants involved, so make sure to clarify what you want to achieve by bringing in some examples of well-written hypotheses. We’ll help you out with these two:
    - "We believe that if we simplify the checkout process by reducing the number of steps, then we expect to see an increase in completed purchases because our user interviews highlighted frustration with the current lengthy process."
    - “We believe that by adding a FAQ call to action at the bottom of all the content pages we will reduce the number of customer calls to the service desk. This will be validated by both more clicks on the FAQ call to action and fewer calls to the service desk.”
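
A short aside: to keep track of hypotheses like these across iterations, it can help to capture them in a small, structured form. Below is a minimal sketch in Python of what such a structure could look like; the field names and example values are ours and purely illustrative, not a prescribed format or tool.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Hypothesis:
    """One testable assumption, following the 'if / then / because' structure."""
    change: str             # "If [this change or action] ..."
    expected_outcome: str   # "... then [this specific outcome] will happen ..."
    reasoning: str          # "... because [reasoning based on insights or data]."
    validation_signals: List[str] = field(default_factory=list)  # how we will measure it
    status: str = "untested"  # untested / supported / contradicted

    def statement(self) -> str:
        return f"If {self.change}, then {self.expected_outcome} because {self.reasoning}."


# Illustrative example, based on the checkout hypothesis above.
checkout = Hypothesis(
    change="we simplify the checkout process by reducing the number of steps",
    expected_outcome="we expect to see an increase in completed purchases",
    reasoning="our user interviews highlighted frustration with the current lengthy process",
    validation_signals=["checkout completion rate", "usability test observations"],
)
print(checkout.statement())
```

Keeping hypotheses in one place like this also makes it easier to reuse or adapt them in later validation rounds, which is exactly the point made below.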


By anchoring your hypothesis to a well-defined scope, you ensure that your research efforts are targeted, leading to insights that are immediately applicable for enhancing products or services. In this way, your hypotheses serve as a guideline throughout the research phase of your project. They can be scalable and reusable, as they serve as adaptable guides throughout a project’s lifecycle. While they are created to test specific assumptions or ideas at a given point, they can be reused or adapted for ongoing research. Think of them as foundational building blocks: their core elements might remain consistent, guiding the interpretation of results over time, yet they can evolve or expand as the project progresses or new insights emerge. By maintaining a hypothesis-driven approach, you can flexibly iterate, refine and build upon previous hypotheses, ensuring continuous alignment with the project goals and possible changes in users’ needs.

Phase 3: How to gather the right participants for your validation?


You have defined the research scope and the hypotheses that help you in creating the structure for the analysis of the insights. Now that this is done, it’s time to think about the recruitment of participants for the research and the right validation methods to use. These two things go hand-in-hand; however, we often notice that the recruitment of participants can take a lot of time and might bring many challenges along the way. Therefore, we strongly recommend starting to think about this early in the process.

In this section, we would like to share our insights and best practices to give you some inspiration for recruiting participants for your next project. 

Defining your needs

Before you can recruit participants, it is important to get a clear picture of your needs and the type of participants you are looking for, in order to select the best recruitment solution. Here are the steps that might help you in defining the profile of your ideal participants:

  • Start from your research goals or questions
    As we mentioned in earlier phases of this report, your research goal is key to the success of your analysis. We often already describe the type of users in this goal: do we want to attract new users? Do we want dormant users to become more interactive with our application? Based on these goals, the first outlines of your participant definition can be scribbled down.
  • Existing users (known) or external users (new) or both
    Your research goal determines whether it makes sense to recruit existing users, if you have to look further for a representative audience or if it is wise to validate this with both groups. 
    1. It makes sense to talk to existing users when you’re updating a product that already exists, your research participants require extensive experience with the product/brand or if you want to test for usability with experts (and not really with beginners). 
    2. It makes sense to talk to new users when you're developing a new product or brand, you need to test usability with potential new users, you want to test with potential new customer segments or you want to understand the users of competitor products/brands.

It’s often useful to combine the two groups. Be careful, however, to clearly separate the insights for each group and to analyse them with the knowledge that they come from a different type of participant.

  • Type of users/demographics:
    It is important to gain a clear view on the type of users you need for the research. For qualitative research, you need to be more specific in describing the ideal participant for your research. Aside from using demographics, start clarifying this by asking yourself who can answer your research question the best.
    There are many persona formats that could help you in creating this overview by filling in, for example, behavioural assumptions. Another interesting approach is to think about what scenarios (what we call scenario-based design) the users will encounter and, based on that, define what kind of participants you are looking for to test your assumptions or validate your ideas. However: you do need to update those personas after your research with actual insights!
    Do keep inclusiveness in mind when setting up profiles like this: are you planning to host a usability test? Then perhaps you also want to run some accessibility checks with someone who is visually impaired. Most often you’ll find new insights, and when making the product right for someone who is visually impaired, you are often also improving it for other users. To design a product that will work for all types of users, it’s important to see how a diverse group interacts with their environment and technology. Observe differently-abled people across age groups, activity levels, and familiarity with technology. Younger and older users may use products with varying degrees of proficiency.

  • Define the number of participants
    For every research methodology, there’s a rule of thumb on how many participants are needed for the research to be significant. For interviews, you might think about 3 to 10 users, but for surveys you’ll have to think about at least 100 participants (see the short sample-size sketch after this list). In the case of a qualitative analysis, we always advise having one extra participant available in case of unfortunate events or the absence of a participant. We still want to keep the validation flowing!

  • Qualitative versus quantitative insights?
    As mentioned in the introduction, choosing the methods and recruiting the participants go hand in hand. Therefore the choice of methodology also influences how you recruit your participants. For qualitative analysis, it is important to look for participants that exactly meet the chosen demographic and behavioural criteria relevant to your study. You have to be a bit picky here and search for the right fit according to these descriptions! Therefore this type of research involves non-random sampling, screening, and a lot of communication.
    For quantitative research it’s more a matter of the numbers game. For data analysis to be meaningful and statistically significant, you need a lot of data, which means doing more extensive research with many people; there is often less room for a long and specific checklist on who’s in and who’s out. This makes the list of checkboxes shorter and more generic. When deciding who to recruit for quantitative research, you first have to define the population you want to study. A population can be broad (“developers,” “males between the ages of 30-45”) or slightly more specific (“musicians in Amsterdam,” “unmarried Dutch women between the ages of 40-50”). From there, you’ll be able to start recruiting a big group of participants.

  • Define the context
    Think about the contextual situation of your ideal users: where are they going to use this product? At home or at work? For the research, you want the situation to be as realistic as possible and therefore to match their context. For recruitment’s sake, it’s necessary to know whether participants need to be in a specific situation or at a specific location when participating in your research. Will they be able to come to this location, or does their context match your description?
  • Define and agree upon your budget
    Before you can actually recruit, it is good to have a clear view of the available budget for recruitment costs and, when validating face to face, for the rental of a test location, travel expenses and the incentives you might want to give to your participants. This will often influence the size and possibilities of your validation, or at least the way you are going to recruit your participants.
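
To make the "numbers game" for quantitative sample sizes a bit more tangible, here is a minimal back-of-the-envelope sketch in Python using the standard margin-of-error formula for a proportion. It is a simplification, not a substitute for a proper power analysis, and the numbers plugged in are only examples.

```python
import math
from typing import Optional


def survey_sample_size(confidence_z: float = 1.96,     # z-score for 95% confidence
                       margin_of_error: float = 0.05,  # +/- 5 percentage points
                       proportion: float = 0.5,        # 0.5 is the most conservative guess
                       population: Optional[int] = None) -> int:
    """Estimate how many respondents a survey needs for a single proportion."""
    # Cochran's formula for a large ("infinite") population.
    n = (confidence_z ** 2) * proportion * (1 - proportion) / margin_of_error ** 2
    # Optional finite-population correction for a small, known audience.
    if population is not None:
        n = n / (1 + (n - 1) / population)
    return math.ceil(n)


print(survey_sample_size())                 # ~385 respondents for a large population
print(survey_sample_size(population=2000))  # noticeably fewer if the whole audience is 2,000 users
```

With the default values this lands around 385 respondents, which is roughly where common guidelines such as "at least a few hundred participants" come from; for interviews and other qualitative methods the small rules of thumb above apply instead.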


Ways to recruit participants

You can find participants for your study in many different ways. Each method has its strengths and weaknesses, and, depending on your research goals, you may want to recruit via multiple sources. This overview, based on research by NNgroup, gives you a quick glance at the pros and cons:

Our experiences with these types of methods: 

  • Outsourced to a recruitment agency
    Qualitative research


    Pros: 
    - Big database available to the agency: a lot of participants to pick from, so a guarantee that enough participants will be found.
    - Quick process: no dreadful and long-lasting process to find a handful of participants.
    - When someone didn’t show up to the session, the recruitment agency was able to arrange other participants as a replacement.

    Cons:
    - Finding the right match: we didn’t always have the flexibility to choose participants ourselves and to do the screening to find the right match.
    - Communication: we did not communicate directly with the participants prior to the validation, but had to do this via the agency. This sometimes reduces the flexibility of sending content beforehand. 

Qualitative research via Google Meet

  • Outsourced to an automatic recruitment platform
    Quantitative


    Pros: 
    - Big groups were quickly recruited: bigger sample sizes were easily found and targeted automatically by their system. 
    - Help with the administrative tasks: the recruitment agencies have a lot of experience with quantitative research and were therefore a great help in formatting our questions in their tooling, adding assessment questions to validate whether participants were filling it in correctly, and making the validation statistically significant.
    - Delivery of the data: the delivery of the outcomes was done in a proper XML format, helping us draw conclusions and create simple data visualisations. In addition, the recruitment agency helped us highlight the outstanding outcomes and segment interesting insights, which wouldn’t have been so easy when using, for example, Google Forms.

    Cons:
    - Flexibility: there’s no flexibility in changing or adjusting questionnaires once the validation has started.
    - No follow up: in our case, there was no possibility to do a follow up with a participant when we thought their answers were interesting. Therefore we couldn’t invite them for next rounds of validation.
  • (Internal) panel of existing users
    The test panel we created with BNR, FDMG: ‘De kritische vrienden van BNR Nieuwsradio’

    Pros: 
    - Dedicated group: when being part of an internal test group, the users most of the time have an intrinsic motivation to be part of this group and are willing/dedicated to help in improving a service. This makes it easier to recruit a group of participants for your validation.
    - Expert group: this group of participants knows your service and consists, most of the time, of expert users. Therefore they are a great target group for testing product updates.
    - Follow up: since this group is part of your test panel, you’ll have easy access to their contact information and will be able to reconnect with them for follow up sessions and other validation rounds if they are relevant to them. 

    Cons:
    - Effort: it takes more effort to keep the test panel connected and in the loop when you create it yourself. It is important to keep them connected with the brand, so you can ask them to join validations.
    - GDPR regulations: when hosting a test panel yourself, you need to keep in mind that there are regulations on collecting data. You’ll have to think of a process to handle this correctly, store the data and, of course, delete the data when it’s no longer necessary.
    - Existing users/fans: this group of participants only contains existing users and people who know your brand, or sometimes even people who can be called “fans”. Therefore you have to be mindful of the research questions you ask them and whether the participants are not already biased. In many scenarios, it is still good to also look for a pair of fresh eyes and to validate with unbiased and potentially new users.

"De kritische vrienden" of BNR

  • Online platform
    Forums/groups, e.g. the Shared Mobility research through various digital channels


    Pros: 
    - User reach and diversity: online platforms and social media provide access to a diverse pool of participants from various demographics and locations, whom you might not reach with just in-person interviews.
    - User anonymity: participants might feel more comfortable sharing feedback due to the nature of these tools and methods, as they provide complete anonymity, often leading to more honest and direct feedback.

    Cons:
    - Follow up: while anonymity is definitely a pro, it can make it very difficult to follow up on interesting insights, since we don’t have personal information to contact the participants again.
    - GDPR regulations: when collecting data online, you also need to take care of the privacy regulations here. Again, you’ll have to think of a process to handle this correctly, store the data and, of course, delete the data when it’s no longer necessary.

Invitation to take part in an online questionnaire

  • Intercept studies
    Guerrilla research at stations

    Pros:
    - Contextual insights: observing users in their natural environment provides rich contextual insights into their behaviours, preferences and needs. This real-time observation can uncover nuances that might not emerge in remote studies.
    - Immediate feedback: these methods help you gather immediate feedback as you can interact with users that are using the products or services during your test (like we have done for the Shared Mobility research where we interviewed people next to parking spots of bike sharing or moped sharing brands). This allows for on-the-spot discussions which will provide fresh and unfiltered insights. 

    Cons:
    - Limited control: we mentioned as a ‘pro’ that these types of studies give you contextual insights, and so the most spontaneous behaviour you can expect to collect. At the same time, it can also mean that you have less control over the environment, leading to potential distractions or external factors influencing user behaviour, which sometimes makes it challenging to isolate interesting insights. In some cases this leads to great insights; in other cases it is the opposite.
    - Sample bias: Intercept studies might attract certain types of users more than others, leading to sample bias. Not everyone may be willing to participate, potentially skewing the data towards a specific demographic or mindset.
    - Logistical challenges: conducting intercept studies requires logistical planning and coordination, often involving permissions, access to locations, and managing unpredictable scenarios, which can be time consuming.

Asking people on the street to participate in a quick research for NS/TIER

You’ve recruited your group of participants. What’s next?

You might want to start validating as soon as possible, but there are still a few steps to take in order for things to run smoothly. 

  1. Screening/call
    In case of a qualitative validation, it might be good to run some screenings prior to starting the research. If the method allows it, we plan a call with the participants to briefly meet and check whether they are really representative for this research. If you’re collaborating with a professional recruitment agency, they might do this for you.
  2. GDPR and the project needs:
    With the GDPR in place, it’s important for us as researchers to take the privacy of participants seriously and to create a clear plan for what we will do with the data we collect. Early in the process, we need to explain our goals and plans to the participants so we know that they are fine with, for instance, recording the conversation and noting down our insights. Make sure to collect a signature/agreement on this and to keep the GDPR rules in mind: try to anonymise data as much as possible and don’t store it when it’s no longer necessary (a small pseudonymisation sketch follows after this list).
  3. Keep the participants in the loop:
    Especially when you’ve planned this validation a few weeks prior to the actual date, it might be good to remind the participant once more when the date is coming. You don’t want them to forget about your appointment! 
  4. Be on time:
    Make sure to invite the participants in time and to clearly communicate where and when you expect them to be present. Try to fit the sessions into their own schedule and make sure that the duration of the validation is clearly communicated, so the participant is aware of it and doesn’t have to leave early. It’s always good to communicate that it might take a little longer. In case of a digital validation, try to explain all the steps of joining the validation clearly (for instance by sharing screenshots and circling where they have to click). Not everyone is used to Zoom, Teams or Google Meet. You can always ask them to call in 10 minutes prior to the start of the validation, so you can still help them out if something isn’t working. And it goes without saying: always be on time yourself, in case participants show up earlier than planned.
  5. Think about an incentive:
    Make sure to always have a gift or incentive ready to give to the participant when they attend your validation session. You’ll have to find the sweet spot, however, in communicating about this gift; sometimes we notice that people only attend validation sessions for the vouchers they get and are not always the right match for a specific validation. Screening helps in preventing these kinds of situations, but be aware of ‘coupon hoarders’.
  6. Share the outcomes:
    If possible, try to share the outcome of your research or next steps with the participants when finishing up your validation. Especially when these are existing users that already feel connected with your brand: most often, they are really interested in the next steps, feel proud of their contribution and might want to help you out again in future scenarios when they feel like it was meaningful. 
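
To make the GDPR point from step 2 a bit more concrete, here is a minimal sketch of how participant records could be pseudonymised before they end up next to your research notes. It only illustrates the principle (keep directly identifying details out of the analysis data, and keep the key that links back to people separate and deletable); it is not legal advice, and the field names are made up for the example.

```python
import hashlib
import secrets

# Store the salt separately from the research data and delete it, together with the
# consent records, once the study is finished and follow-ups are no longer needed.
SALT = secrets.token_hex(16)


def pseudonymise(participant: dict) -> dict:
    """Replace directly identifying fields with a stable, non-reversible ID."""
    pid = hashlib.sha256((SALT + participant["email"]).encode()).hexdigest()[:12]
    return {
        "participant_id": pid,                           # stable ID for linking sessions
        "age_group": participant["age_group"],           # keep only what the analysis needs
        "consent_recorded": participant["consent_recorded"],
    }


# Illustrative record; the name and e-mail address never end up in the analysis file.
raw = {"name": "J. Janssen", "email": "j.janssen@example.com",
       "age_group": "30-45", "consent_recorded": True}
print(pseudonymise(raw))
```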

Phase 4: Which tools should you use to support your research?


With the scope being ready, hypotheses being defined and a group of participants that can't wait to share their experiences with you, it's time to think about the tools that will support you in your validation to avoid pitfalls and barriers when carrying out user research.

To better explain how to select the best research tools for your project, hypothesis and research scope, we want to walk you through one of the projects we did for our client Nederlandse Spoorwegen, where we integrated TIER as a shared mobility option in their offering. 

A picture taken during the shadowing session done for the TIER integration in the NS app

In user research you want to test your hypotheses, and you usually use a selection of different tools to validate your question. The actual set of tools depends on the scope of your project, your hypotheses, the goals you’ve set and the participants that have been selected.
In our process of selecting tools, we take our time to consciously make choices, because tools are not just instruments, but have to be seen as strategic enablers that need to be aligned with the chosen methodology to streamline data collection, analysis, and interpretation. By carefully choosing the right tools, we are able to collect the contextual data that will allow us to draw meaningful conclusions.

To be able to select the best fitting tools for our research projects, we follow two main steps: 

  1. Selecting the research tools;
  2. Planning a pilot to test the tools.

Step 1. Selecting the research tools

When it comes to selecting research tools, our process begins with a thorough examination of our defined hypotheses, goals, scope, and participant profiles. This initial step is crucial for ensuring that the tools we choose are meaningful and aligned with our research objectives. For instance, we take a moment to reflect on our research goals: are we aiming to uncover insights into user behaviours, preferences, pain points, or perhaps evaluate the usability of our product? By clarifying our objectives, we can better guide our selection of an appropriate set of tools. Moreover, because the selection of tools and the recruitment of participants are intertwined, we also consider our target audience or our potential participants. Understanding their characteristics, preferences, demographics, technological literacy, contextual factors, and cultural backgrounds is important. This comprehension enables us to tailor our research approach to effectively engage with and capture insights from our selected participants. For example, if our target group is less tech-savvy, we may opt for face-to-face research methods instead of digital ones.

Once we have a clear understanding of these key areas, we can then delve into choosing the specific tools to employ. We familiarise ourselves with a range of research tools commonly used in our field, including qualitative methods such as interviews, focus groups, and ethnographic studies, as well as quantitative methods like surveys, experiments, and analytics analysis. Each tool comes with its own set of strengths and limitations, so it's essential to select one that aligns closely with our research goals and the type of insights we aim to uncover. 

When we’ve selected the appropriate tools to validate our hypotheses with, we further explore software solutions that support us in their application.



Qualitative-driven insights research tools:

Qualitative user research involves rich, detailed insights that go beyond numbers, focusing on understanding user behaviours, motivations, and experiences. Here is a selection of the ones we have previously used in projects. 

  • User interviews
    Conducting one-on-one or group interviews to gain qualitative insights, understanding user behaviours, motivations, and pain points.

    Scope:
    5-10 participants

    Phase:
    could be valuable at multiple stages in a project to validate new concepts, existing services or potential new features.

    Type of participants:
    existing users or potential users. It is important to talk with participants that are vocally strong so they can answer questions and you’ll have the ability to ask further and dive deeper when necessary.

    Software:
    for the process of interviewing, we prefer to speak face to face with our participants in their own context, because this gives a better understanding of their context and way of working. It gives them the possibility to show their process not only digitally but also physically (by for example being able to grab something from a drawer of their desk). When this is not an option, we revert to virtual options like Google Meet or another preferred video conferencing software. In both options we recommend recording the conversations for reference purposes. Do make sure you collect the consent of the participant and process the data according to the GDPR standards. We are currently also experimenting with a tool called Tellet. Tellet is an AI interview tool that can be used to conduct, summarise, and analyse hundreds of interviews in a very short time. This helps with a lot of administrative work in our user research process. 

  • Focus groups
    This research tool is often used with people from the target group of the product or service that is being developed. It’s a methodology that can be used to gain contextual information about an existing product, or to test different concepts in a wider group environment. It’s beneficial when you don’t have the resources to plan multiple user interviews, since you can speak to multiple users at once. It is also a clever tool to use when you’re creating a new concept or functionality and are seeking feedback; collecting data by facilitating group interaction often sparks new insights because of the discussions that lead to candour.

    Scope:
    8-10 participants per session

    Phase:
    as with interviews, focus groups could be valuable at multiple stages in a project to validate new concepts, existing services or potential new features.

    Type of participants:
    existing users or potential users. Since this is a group activity, it is important to recruit a group of participants that is as diverse as possible, so you can gather the most interesting insights they are willing to share with you.

    Software:
    we recommend hosting this as a face to face discussion. We prefer it over virtual options, because it limits the risk of losing the participants’ attention when hosting group discussions online, and it makes it easier for everyone to articulate their opinion evenly. When face to face isn’t an option, we recommend using Google Meet or any other preferred video conferencing tool. As support we like using card sorting or post-its; for digital meetings we recommend Miro or FigJam as a collaborative tool that stimulates the same interaction and discussion.
  • Usability testing
    This research tool is meant to observe users as they interact with prototypes or the actual product to identify usability issues and gather feedback on the user experience.

    Scope:
    5-7 participants

    Phase:
    it could be valuable at multiple stages in a project, however the availability of a prototype or a live site or page is necessary for this type of test to be valuable.

    Type of participants:
    existing users or potential users. It is important to talk with participants that are vocally strong so they can answer questions and you’ll have the ability to ask further and dive deeper when necessary. They need to be able to express their questions, frustrations and thoughts while using the product. 

    Software:
    we recommend hosting this as a face to face discussion. We prefer it over virtual options, because it’s often easier to see or notice the emotions of users while walking through a certain prototype or to see what they’re trying to achieve. However, when this is not possible, our choice of preferred tool is dependent on the type of device we would like to use for the test. If the user needs to share their screen on a desktop device, we prefer video conferencing with for example Google Meet. When we want to test the interaction on mobile devices, we preferably use Lookback which helps gain the same insights on other devices as well. 
  • A day in the life/shadowing session
    This is a research tool that involves researchers spending an entire day with selected users, closely observing their daily routines, behaviours, and interactions in their natural environments. By shadowing users throughout their typical day, you can gain deep insights into their habits, pain points, and needs as they engage with products or services. We often apply this tool when redesigning or developing solutions where we have to gain a better understanding of the current processes and how they take place or are affected by the user’s context and/or the scenarios they face within that context.

    Scope:
    3-5 sessions

    Phase:
    it is interesting for shadowing participants that are potential users of a service (to gain a better understanding of their needs prior to designing/building a certain product) or for shadowing participants when they are using a first version (MMP) of a service (for example a pilot).

    Type of participants:
    existing users or specific potential users.

    Software:
    you can use anything that helps you log and take notes. We also like to use the camera on our phones to document the shadowing sessions (in video or photos). Please make sure you always have consent from the participants.
  • Diary studies
    This research tool involves participants keeping a record of their experiences over time. The goal is to understand long-term behaviours, preferences, and pain points.

    Scope:
    2-5 participants

    Phase:
    the focus of a diary study can range from very broad to extremely targeted, depending on the topic being studied. It can be used for understanding scenarios of use and the testing of a specific feature of a product over time.

    Type of participants:
    this research tool requires more involvement over a longer period of time, which makes it important to inform the participants about what you expect of them and over which period of time. Also make sure to be extra careful in the recruiting process, to understand the level of commitment you will most likely get from your participants during the study.

    Software:
    we’ve approached diary studies with multiple solutions: from physical diaries with questions to digital variants. In recent projects, we preferred to use forms.app for our data collection. We found this app to be flexible and useful when trying to set up different types of scenarios to gain insights on. 

  • Guerrilla testing
    This is an agile and informal user testing tool that is conducted in “real-world” settings, such as coffee shops, public spaces, or workplaces, where you can approach individuals randomly to gather quick and candid feedback on a product or prototype.

    Scope:
    5-15 participants

    Phase:
    Guerrilla testing is interesting for very early phases, when you can approach participants with one or two questions to better define your research scope. It is also interesting in concepting phases, or when you need direct feedback on the designs you have created.

    Type of participants:
    the interesting side of guerrilla testing is that you have no control over the participants. So there’s no right recipe for the right participants; being as diverse as possible is the best approach.
An example of the tool Hotjar



Quantitative-driven insights research tools


Quantitative user research involves collecting and analysing objective, numerical data from various types of user testing. With this type of research, we use large sample sizes to produce bias-free and measurable data about a service. Unlike qualitative user research, there is no simple rule of thumb for the number of participants you should recruit. The number of participants needed for statistical significance depends on several factors, including the goals of your analysis, the complexity of user behaviour and the level of confidence you require in your findings. This is why the ‘scope’ is less prominent in the definitions below: it is not as important as in the qualitative methodologies, where you look more at the characteristics of the participants than at the numbers.

Several tools cater specifically to the needs of collecting and analysing quantitative data in user research; here is a selection of the ones we have used previously in projects.

  • Surveys
    Gathering quantitative data from a large number of users to understand trends, preferences, and demographics.

    Scope:
    The number of participants in a survey, often referred to as the sample size, depends on various factors, including the population size, desired level of precision and the diversity of the population. As a rule of thumb, a sample size of at least 100-200 participants is often considered as a starting point for basic surveys.

    Phase:
    surveys can be used throughout and across project phases and continuously, but have a different purpose per phase. They can be valuable for validating new concepts, existing services and potential new features, and for getting a better understanding of the wishes and needs of a certain target group.

    Type of participants:
    This really depends on your research objective, but since it is usually an anonymous and individual way of providing data, it could be done by almost anyone who matches the set demographics/user profile. A few groups to consider: your target audience, new potential target groups, users of competitive services, former users or expert users.

    Software:
    we recommend using either Google Forms or forms.app here. Both are services for creating online forms and surveys that users can fill in individually. They are easy to use for creating forms, offer customisation and make it possible to collaborate with colleagues and/or stakeholders. In our experience, forms.app gave us more flexibility in creating branches or decision-tree structures, which we weren’t able to do as smoothly in Google Forms.
The forms.app for surveys

  • A/B testing
    Comparing two or more versions of a design to determine which performs better in achieving specific goals, often used for websites or apps.

    Scope:
    The appropriate number of participants depends on several factors, such as the effect size, the variability in the data and the desired level of statistical confidence. However, a general guideline is a sample size of a few hundred participants per variant as a starting point (see the significance-check sketch after this list of tools).

    Phase:
    A/B testing is most often used and the most valuable when applied in the optimise and grow phase.

    Type of participants:
    Any user of the product or service. However, based on your objectives, you could define specific target groups and funnels to filter a specific target group within the participants and measure their interactions and the impact of the proposed variations.

    Software:
    Google Analytics. We’re currently also using LaunchDarkly to roll out specific features to specific customers; at this moment we have not yet used it for a scaled A/B test. It does provide A/B testing capabilities as part of its feature management platform, enabling teams to efficiently and safely experiment with different variations of features, gather insights and make informed decisions to optimise user experiences.

  • Heatmaps and analytics
    Analysing user behaviour through tools like heatmaps and analytics to understand where users click, scroll, or spend the most time.

    Phase:
    Heatmaps and analytics are most often used and the most valuable when applied in the optimise and grow phase to gain insights to propose potential improvements and changes.

    Type of participants:
    Any user of the product or service. However, based on your objectives, you could define specific target groups and funnels to filter a specific target group within the participants and measure their interactions and the impact of the proposed variations.

    Software:
    Google Analytics and Hotjar are our main recommendations for analysing user behaviour. Hotjar is very convenient for analysing user behaviour from both a quantitative and a qualitative perspective. It helps in understanding how users interact with digital products and provides a range of features for collecting data, analysing user behaviour and gathering feedback. From our perspective, the heatmaps, funnels and recordings are really valuable for gathering insights on a bigger scale than user testing alone.
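
As a small illustration of how A/B test results can be checked for statistical significance, the sketch below runs a standard two-proportion z-test on invented conversion numbers. In practice you would feed in the actual counts from your analytics or experimentation tool; this is not the output format of Google Analytics or LaunchDarkly, just the underlying arithmetic.

```python
import math


def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare the conversion rates of variants A and B with a two-sided z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value


# Invented example: 300 vs 345 conversions out of 5,000 visitors per variant.
p_a, p_b, z, p = two_proportion_z_test(300, 5000, 345, 5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

A p-value above your chosen threshold (commonly 0.05) suggests the observed difference could easily be chance, which is usually a sign to keep the test running or to revisit the sample size per variant mentioned above.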

Research tools connected to methodologies

Choosing the right research tool is not necessarily a matter of picking a single one: you can often combine different tools and gather both qualitative and quantitative insights, providing a comprehensive understanding of user needs and behaviours. Flexibility and adaptation are key, allowing the research approach to evolve as insights are gathered throughout the design process.


Continuous user validation

Something crucial to integrate into your selection process is not just the mindset of validating or invalidating hypotheses with specific tools, but also actively taking action based on the insights gained. In our approach, we advocate for continuous user validation and iterative research. This means embracing unexpected discoveries and using them as opportunities to delve deeper into the hypotheses. By planning iterative validation sessions with, for example, a variety of tools, we ensure that we have the necessary resources available to act upon the insights we gained.

By adopting this mindset and approach, we can leverage various research tools to test hypotheses effectively, leading to more informed design decisions. For example, combining qualitative and quantitative insights provides a comprehensive understanding of user needs and behaviours. 


We applied this way of working within the NS-TIER project: during a session with our stakeholders, we defined the best tools to use based on the hypotheses and chosen participants. It was clear that we needed a twofold approach:

  1. We started with a quantitative approach:
    Through an online questionnaire we were able to reach a bigger number of participants and start drawing the first patterns from the insights. We knew that all participants would have access to a mobile phone while using the NS-TIER integration, so we created the online questionnaire in forms.app. It was easily fillable from a phone while on the go, and we kept it open for 3 weeks so people could fill it in as soon as they had used a TIER bike from the NS app. This led to a total of 60+ completed questionnaires, which we analysed and which helped us create a list of patterns and deep-dive questions for further research.
Impression of some of the questions from the forms.app questionnaire that we used

  2. Next was a qualitative approach:
    This was necessary to validate the insights we got from the quantitative round. For this second phase, we decided to use a series of shadowing sessions as a research tool. Because adding TIER as a shared mobility option in the NS app was a completely new feature, we wanted to observe all possible issues and inconveniences up close. Shadowing sessions are a great tool to spot issues for first time users, but at the same time dig deeper and help them where necessary.


A shadowing session done with one of the participants

This twofold approach and the tools selected helped us collect contextual and workable findings to be solved before going live with the NS-TIER integration.

Step 2. Plan a pilot-test to test your research tools

Planning a pilot-test to check if your testing tools are the right ones for your scope is like a practice round before the big game. It's really valuable, since it enables us to try out an approach in a smaller, controlled setting. This way, you can spot any issues, tweak how you're running the test, and figure out what works best. It's like a test drive, and it helps you in fine-tuning everything before you go full steam ahead. This practice round helps to learn, adapt, and make sure that when you roll out your tools on a larger scale, they're totally ready to give you top-notch insights.

For the NS-TIER project we had planned 2 days of shadowing sessions, with 8 participants per day. It was a tight schedule, and to make sure the sessions would go as smoothly as possible, we made sure to have everything ready 2 days before the actual tests and tried the shadowing technique ourselves. 

While testing the shadowing approach, we realised we needed a bit more time to introduce the purpose of the test to the participants. It gave us great insights about planning, what could possibly go wrong and how to be ready for that. At the start of the tests, we were well prepared to catch possible issues and we had a plan B (and a plan C as well) to refer to if necessary.

The whole preparation cost us only 2 hours, but it definitely resulted in a smoother and streamlined process during the actual shadowing sessions, leading to no delays, no hiccups and a very nice and calm atmosphere in general. 


Conclusion


We have covered a variety of different topics. We’ve shared our insights and approaches on how we do research at fresk.digital: our view on how to set the research scope, how to define your research hypotheses, how to gather the best fitting participants and how to match the best research methodologies to the best research tools. These practical tools will help you in your user involvement process.

Most importantly, though, this process brings you value while you are creating digital solutions. By making sure you understand the ecosystem you are working in and by validating your assumptions, you will be able to really make the right decisions while you are designing and creating for your end users.

From experience we’ve learned that applying this research process helps in gathering meaningful insights that aid us in better understanding the users of the digital products and services we create. By delving into their processes and flows, identifying pain points and needs, we are able to gather insights that enable the construction of an insightful customer journey that visualises the user experience, highlighting moments of potential friction and opportunities for enhancement. Through opportunity mapping, we pinpoint these areas and chances for improvement. 

Armed with these findings and opportunities, we craft an idealised customer journey, integrating functionalities or services that enhance the overall user experience. Ultimately, this will grow into an actionable digital roadmap that outlines concrete steps towards improvement.

Of course, we’re aware that there’s no one-size-fits-all solution in the world of research. However, experience has shown us that this approach will provide you with a framework to help you plan and execute a meaningful validation process. You are not limiting yourself to only the obvious set of research tools, but starting from the core and purpose of your validation project in order to select tools that will support and amplify your process in fulfilling that purpose.

We recommend you try and experiment with a variety of available tools. Get experienced in applying them and see what works for you in which contexts. In the meantime, we’ll do the same thing: we stay committed to exploring new methods of validation and embracing emerging tools; recently, for example, we started experimenting with Tellet.

As we encounter various contexts within our projects, it remains intriguing to assess how we can apply the most optimal research methods in each situation. These steps enable us to stay critical and make informed decisions, acknowledging that we continuously validate and iteratively improve our very own validation methods as well ;-). In the end, tackling the blank page syndrome isn't about following a strict formula: it's about embracing creativity, curiosity, and collaboration.

So, let's keep the conversation going, share our stories, and together, let's keep pushing the boundaries of what's possible in digital design research.


Blank page syndrome

The blank page syndrome, also known as writer's block, is a common phenomenon experienced by writers and other creative people. It refers to the state of being unable to produce new content or ideas, often feeling stuck or uninspired when facing a blank sheet of paper or a computer screen. A form of blank page syndrome can occur in the context of user research in digital product development. In this context, it might manifest as a challenge in identifying the right scope, creating the right questions to ask, difficulty in formulating effective surveys or interview scripts, or struggling to come up with fitting research methods.