Uncovering What Members Truly Value

Often, member surveys include a method known as “stated importance” to gauge how respondents feel about various topics or benefits. For example, a member survey question may ask members to rate a list of benefits (or attributes) based on their importance using a 10-point scale, in which the low end of the scale (1) represents ‘not at all important’ and the high end (10) is labeled ‘extremely important’ or ‘very important.’

Example of a Stated Importance Survey Question:

Please rate each of the following member benefits based on their importance to you, with “1” being not at all important and “10” being extremely important:

  • Free admission

  • Invitations to members-only events

  • Free parking

  • Member previews

  • Supporting the museum’s mission

  • Evening lectures

Two other techniques for measuring stated importance include “rank order,” in which respondents are asked to rank the list of benefits in order from most important to least important, and “constant sum,” in which respondents are asked to divide 100 total points among all possible benefits so that the most important benefits receive the greatest number of points.

On the face of it, this type of approach appears to provide a simple way for an organization to assess how a member values a specific benefit. However, the stated importance method often leads to inflated importance scores and does not allow the organization to see how respondents may be considering the relationship between benefits and the decision to join. For example, asking for stated importance based on a rating scale tends to result in respondents rating many (or most) of the benefits listed as “very important.” Thus, the organization is unable to discern what really matters to audiences when it comes to making a choice to join or renew their membership. If everything is important, then nothing is.

Further, such techniques do not provide respondents with any context about how to evaluate each benefit. A member may be asking themselves, “How important is free parking compared to what?” To get to the real, actionable information that the organization is trying to understand, we need context. Unfortunately, it is not as simple as rephrasing the question to provide context (e.g., “Please rate each of the following member benefits based on their importance to you when considering the decision to renew your membership, with ‘1’ being not at all important and ‘10’ being extremely important.”). Why not? Rephrasing the question in this way only gives the illusion of meaningful context. Now, to answer the question faithfully, a respondent must first determine whether the benefit is important and then evaluate its importance in the context of the decision to renew their membership. Besides being a mentally taxing exercise, such an approach requires a respondent to make an unrealistic leap in evaluating the importance of each benefit. That is, the member is being asked to make a hypothetical judgment about what is most important when considering renewing their membership, which can lead to an artificial response.

So, what is the best way to determine which attributes or benefits members actually value? Here are three alternative research methods that yield more valid and actionable insights into what members truly value:

Conjoint Analysis

Designed to uncover perceived value by simulating the trade-off decisions customers actually make in the real world, conjoint analysis is a statistical research technique that can aid in membership pricing and program design strategy. In a conjoint analysis survey, respondents are presented with different product features (think, membership benefits) or “attributes” and asked to choose which option they would be most likely to buy.

The simulated trade-off process in a conjoint study reveals which benefits are most valued and can also be used to understand willingness to pay (the maximum amount a customer will pay for a product or service). Using conjoint analysis, the value of each attribute can be estimated. By including price as an attribute, conjoint analysis can be used to translate the value into a dollar amount to get at willingness to pay.

Importantly, this research method does not directly ask what a customer would be willing to pay for a certain benefit or a membership overall. Instead, this information is uncovered through the respondent’s choices. Thus, conjoint analysis allows us to assess how a prospective member would trade-off the value of membership benefits with the price of joining.
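To make the mechanics concrete, here is a minimal sketch of part-worth estimation in Python. It uses a simplified ratings-based design estimated with ordinary least squares; real conjoint studies typically use choice-based designs estimated with a multinomial logit model, and all profiles and ratings below are invented purely for illustration.

```python
import numpy as np

# Hypothetical conjoint data: each row is a membership "profile" one
# respondent rated on a 1-10 scale. Columns are dummy-coded benefits
# (free_parking, member_previews) plus annual price in dollars.
# All numbers are made up for illustration.
profiles = np.array([
    # parking, previews, price
    [1, 0,  50],
    [1, 0, 100],
    [0, 1,  50],
    [0, 1, 100],
    [1, 1,  50],
    [1, 1, 100],
    [0, 0,  50],
    [0, 0, 100],
], dtype=float)
ratings = np.array([7.0, 5.0, 6.0, 4.0, 9.0, 7.0, 4.0, 2.0])

# Add an intercept column and estimate part-worth utilities by
# least squares (a stand-in for the logit estimation used in
# choice-based conjoint).
design = np.column_stack([np.ones(len(profiles)), profiles])
coefs, *_ = np.linalg.lstsq(design, ratings, rcond=None)
intercept, u_parking, u_previews, u_price = coefs

# Because price is included as an attribute, each benefit's utility
# can be translated into dollars: willingness to pay is the benefit's
# part-worth divided by the magnitude of the (negative) price utility.
wtp_parking = u_parking / abs(u_price)
wtp_previews = u_previews / abs(u_price)

print(f"part-worth, free parking:  {u_parking:.2f}")
print(f"part-worth, previews:      {u_previews:.2f}")
print(f"implied WTP, free parking: ${wtp_parking:.2f}")
print(f"implied WTP, previews:     ${wtp_previews:.2f}")
```

In this fabricated data the price utility is negative (higher-priced profiles earn lower ratings, all else equal), so dividing each benefit's part-worth by the price utility expresses its value in dollars.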

Empathic Research

Simply, empathy is the ability to put yourself in another person’s shoes. In practice, empathy allows an organization to gain a deeper appreciation and understanding of its audiences’ emotional and physical needs, and to learn how individuals see and interact with the world around them. By focusing on understanding members’ perspectives, feelings, and goals, empathic research can help museums design products and services to address latent needs. Importantly, empathic research emphasizes observation and co-creation activities to gather information before making assumptions about how to solve a particular problem.

Empathic research takes many forms, from user studies and in-depth interviews to journaling and card sorting exercises. Regardless of the method, the goal of empathic research is to reveal hidden motivations and identify barriers to participation. For example, we might ask participants to co-create an advertisement for membership, complete with imagery, tagline, and hashtag. This type of exercise can provide insight into barriers to adoption such as negative perceptions as well as underlying motivations such as pride or a sense of community.

By providing a window into the decision-making process, empathic research allows us to make sense of what is not being said or what is being hinted at (i.e., the motivations or barriers that lie beneath the words, behaviors, and body language). Leveraging an empathetic lens, we can use abductive reasoning to make inferential leaps and answer questions for which we don’t yet know the answer. For instance, we might ask ourselves, “Why do some visitors who attend frequently never join as a member?” and then answer this question with abduction based on the insights gleaned from empathic research: “Because they don’t think they will visit as often as they actually do.” Through abductive reasoning, we can look for new data points, challenge accepted norms, and infer possible new innovations for membership.

Once we’ve gained an understanding of our audiences’ perceptions, feelings, and needs, we can begin to formalize a series of insight statements that can become the basis for a new membership product or service. In answering the question, “Why do some visitors who attend frequently never join as a member?” we can craft an initial insight statement: “Visitors don’t see the value of membership because they don’t think they will ‘get their money’s worth,’ but they are often wrong because they end up visiting much more frequently than they thought they would.” From here we can connect the dots and begin to explore solutions aimed at addressing this previously hidden challenge by asking “what if?” What if we offered all visitors a risk-free trial membership? What if we let visitors convert their third visit into an annual membership? What if we eliminated the barrier of price for people who aren’t ready to commit to a standard membership? What if we invited frequent non-member visitors to join a new membership category that looked more like a season pass?

Experiments

Observing audiences interacting with a product, experience, or marketing in real time can uncover important insights about how people think and feel about something. While most organizations are familiar with the concept of conducting “offer testing” in direct mail, digital advertising, and email campaigns, few are leveraging experimentation to design new membership products and services.

One of the most underutilized techniques in museum membership is the use of experiments to test new ideas. Too often, museums make wholesale changes to their membership program without the evidence to prove out their assumptions. This can be a costly mistake with long-term financial implications. Further, without applying the scientific method to validate a specific change, museum leaders will be unable to discern which aspects of the new membership program are working and which are not.

Consider the following example: A museum makes major changes to its membership program, including (1) adding two new benefits to each category, (2) increasing the price of all categories, and (3) eliminating one level of membership. A year from now, how will this museum be able to identify what caused a 20 percent drop in membership revenue? Moreover, what if the cost of delivery for these new benefits was significant, but the museum later learned that the new benefits were inconsequential in the decision to join? Knowing the answers to these questions before rolling out a new membership structure is imperative. Developing an experiment to test each aspect of the new program with real customers allows the organization to make a data-informed decision about how such program changes will impact member acquisition, renewal, and participation.

Carefully designed experiments can help museum leaders avoid disastrous outcomes by ensuring that they make evidence-based changes to their membership program. To arrive at a valid conclusion, it’s imperative to establish a clear hypothesis and the ability to measure the outcome. A good hypothesis includes a controlled independent variable and a measurable dependent variable. That is, a well-designed experiment clearly shows what will be tested and what the expected effect of the change will be.

The most basic form of a randomized controlled experiment is an A/B test, in which a control (A) is tested against a single variation (B). For example, website users can be shown (at random) two different versions of the membership landing page, and results can be analyzed to determine how a specific change on the landing page influences a metric like conversion rate. To be valid, an A/B test must include enough participants to reach statistical significance, and randomization must be used to minimize the chance that other factors affect the results.
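The basic readout of an A/B test like this can be sketched in a few lines of Python. The example below uses a standard two-proportion z-test (normal approximation) to check whether the difference in conversion rates is statistically significant; the visitor and join counts are invented for illustration.

```python
import math

# Hypothetical A/B test: two versions of a membership landing page,
# each shown to a random half of visitors. All counts are made up.
visitors_a, joins_a = 5000, 150   # A: control page
visitors_b, joins_b = 5000, 205   # B: variation

rate_a = joins_a / visitors_a
rate_b = joins_b / visitors_b

# Two-proportion z-test for the difference in conversion rates,
# using the pooled conversion rate under the null hypothesis that
# the two pages convert equally well.
pooled = (joins_a + joins_b) / (visitors_a + visitors_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
z = (rate_b - rate_a) / se

# Two-sided p-value from the standard normal CDF (via math.erf).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"conversion A: {rate_a:.1%}, conversion B: {rate_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```

With these invented counts the p-value falls below the conventional 0.05 threshold, so the variation would be judged significantly better; with smaller samples the same difference in rates could easily fail to reach significance, which is why sample size matters.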

By applying A/B testing, we can reveal which benefits, messaging, imagery, and value propositions are most compelling. One big caveat for A/B and multivariate testing is that these techniques should not be used for pricing strategy. Why? First, testing pricing in a live environment risks anchoring customers to a particular price. Second, this type of testing is not designed to elicit customer valuations or to understand willingness to pay. Other methods such as conjoint analysis are better suited for determining an optimal pricing strategy.

Ultimately, most researchers agree that no one method is adequate on its own. In an ideal scenario, a mix of quantitative and qualitative research methods would be used during the membership product development process.

Share your ideas, comments, and questions with fellow choice architects!

Has your museum used conjoint analysis, empathic research methods, or experiments to design new membership products or services? What question would you love to get an answer to when it comes to members’ motivations for joining? Do you have a new membership idea you’d like to explore?
