Help Users: Explain Think Tank Branding & Jargon

About a year ago, Forbes declared Apple to be the most valuable brand ever, reporting its value at over $200 billion.

But up until 2007, Apple’s official name was still “Apple Computer.” Apple kept telling the world it was a computer company, even though it had produced iconic products like the Apple II, the Mac, and the iMac.

Apple’s “Think Different” campaign decorated every computer lab and library I ever entered as a kid. We all knew Apple, and the name was synonymous with computers.

Still, for 31 years Apple continued to include “computer” in the brand name, reinforcing what they did and what they sold.

When they changed their name, it might have been because they were entering new markets (though everything they do still involves computers) or because they had settled legal matters with The Beatles.

But they had the option to become a mononymous brand (like Prince) because their brand was already so strong. They even have their own Unicode character: the Apple logo, U+F8FF.

Regardless of the reason, the most valuable brand in the world reminded consumers they were a computer company for three decades. Yet think tanks, the clients we work with, assume that the branded name of their newsletter, blog, or programs requires no further explanation.

Cato

For example, Cato doesn’t explain what “Cato Unbound” or “Downsizing the Federal Government” are when it displays them as search filters on its search pages:

Manhattan

Manhattan Institute uses “On the Ground,” a label that communicates little to the user, when it could use “Pilot Programs,” the two words it hides behind a hover effect on desktop:

Heritage

Why would a user of Heritage’s search function filter results for “Heritage Impact” or “Heritage Explains” when they aren’t familiar with those formats?

Learn from E-Commerce

E-commerce sites use subtitles and tooltips to explain the differences between products or product specs.

Nike provides helpful subtitle explanations for people who aren’t sneakerheads:

Build.com provides tooltip explanations on hover for what capacity appliance might make sense for a user:

Both sites are anticipating user questions and answering them then and there.

Provide Context & Explanation

Just as Apple reminded consumers for over three decades that they were a computer company, telling users that “Cato Unbound” is a “Monthly Debate Essay Series” with a quick subtitle would give users a mental handle to grab onto. Without this context, users won’t use filters or explore categories that have no meaning to them.

Wherever you are using sub-branded products, non-descriptive project names, or industry jargon, you should unpack that by including a few simple words of explanation. Subtitles, tooltips, or even a sentence-long description can take something from mystifying to comprehensible.
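As a concrete illustration, here is a minimal TypeScript sketch of this idea, assuming a hypothetical “.filter-label” element for each branded filter name; the explanation text is illustrative, not anyone’s real copy:

```typescript
// Plain-language explanations for internal brand names.
// Names and selectors here are hypothetical — swap in your own.
const explanations: Record<string, string> = {
  "Cato Unbound": "Monthly debate essay series",
  "On the Ground": "Pilot programs",
  "Heritage Explains": "Short explainer series",
};

// Attach a subtitle and a native tooltip to each branded filter label.
document.querySelectorAll<HTMLElement>(".filter-label").forEach((label) => {
  const explanation = explanations[label.textContent?.trim() ?? ""];
  if (!explanation) return;

  label.title = explanation; // native browser tooltip on hover

  const subtitle = document.createElement("small");
  subtitle.className = "filter-subtitle";
  subtitle.textContent = explanation;
  label.appendChild(subtitle);
});
```

The “title” attribute gives desktop users a hover tooltip, while the visible subtitle covers touch devices, where hover doesn’t exist.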

The audiences think tanks are trying to reach—journalists, policy makers, and academics—will be better able to understand the content and format of a given publication if this little bit of extra information is provided.

Think Tanks Should Avoid This Usability Hurdle

What is a “center” at a think tank?

  • Is it an administrative distinction?
  • Is it a way of packaging research for fundraising?
  • Is it a way of highlighting joint research efforts?

The answer to any of these is unclear and think tank websites aren’t making it any clearer.

But the most important question to ask is:

  • Are website users helped or hurt by highlighting centers?

For the average user, centers are likely to create confusion when offered as a means of sifting through content or filtering search results.

The Urban Institute

For example, The Urban Institute offers both “Research Areas” and “Policy Centers & Initiatives” as potential search filters.

If a user were looking for how Covid-19 is affecting education policy, would they be best off filtering their “Covid-19” research results by using “education” under “Research Areas” or would “Center on Data and Policy” be a better choice?

The only way for the user to know is to guess and check, which is laborious and discouraging.

This is why libraries use the Dewey Decimal System, rather than offering the Huey, Dewey, and Louie decimal systems.

Parallel systems of organization are confusing to users. One system, even an imperfect one, is better than several competing and semi-overlapping systems.

The Hudson Institute

Similarly, The Hudson Institute offers users two ways to drill into their content offerings: “Topics” and “Policy Centers.”

Here, the Policy Centers at least seem narrower in their focus than the Topics, but again, the average user who isn’t familiar with Hudson’s work and internal organization could be left wondering where to start.

For example, a journalist looking for information about the “Strait of Hormuz” could plausibly look under any of these:

  • National Security
  • International Relations
  • Center for American Seapower

Where to start? Again, it’s time to guess and check.

Brookings

Larger groups with more content, the groups who would most benefit from good sorting mechanisms, seem hellbent on confusing users the most.

Brookings offers dozens of topics listed alphabetically (without making allowances for the word “the”) alongside “Research Programs” arranged in a three-level hierarchy that takes six flicks of the scroll wheel to get through.

There is no guidance offered to users explaining the difference between these options for narrowing results.

You guessed it! It’s guess and check time.

Pew Research Center

Useful guidance doesn’t have to mean written descriptions of what a research program or center does. Instead, guidance can be offered visually through priority and emphasis, as Pew Research Center does with their search results.

Though Pew does offer users the ability to filter by program, they emphasize “Filter by Topics” by placing it high in the list of available filters and offering checkboxes as the selection mechanism, which makes it stand out as a primary filter.

These ordering and user interface (UI) choices tell users that “Filter by Date” is the most useful filter and that “Filter by Topic” is probably the next most useful option.

By placing “Filter by Programs” at the bottom of the list of options, Pew’s search UI communicates that programs are not a common option while still making them available for users familiar with Pew’s work.

Rules for Improving Categories

Think tanks who want to keep their category and filtering options understandable to users can follow a few simple guidelines.

Nielsen Norman Group, a globally-recognized leader in user-experience research, says that any set of website categories should be:

  • Appropriate: Address the aspects of the content that users find most important, like date and topic.
  • Predictable: The categorization offered should be familiar to users, like “education choice” as opposed to “Center for Educational Dynamism and Alternative School Governance.”
  • Jargon-Free: This means avoiding the initialisms, acronyms, and high-octane wonk terms that think tankers tend to love.
  • Prioritized: The broadest and most commonly used filters should be shown to users first. This might mean nesting subcategories within larger categories or initially displaying only the top ten filters (sketched below).
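Here is a rough TypeScript sketch of that last guideline, showing only the first ten filter options and letting users reveal the rest. The “.filter-option” selector is hypothetical, and the cutoff of ten would ideally come from your own usage data:

```typescript
// Show only the ten most common filters up front; reveal the rest on demand.
// ".filter-option" is a hypothetical selector for a single filter row.
const VISIBLE_COUNT = 10;
const options = Array.from(
  document.querySelectorAll<HTMLElement>(".filter-option")
);

// Hide everything past the cutoff.
options.slice(VISIBLE_COUNT).forEach((opt) => {
  opt.hidden = true;
});

// Add a toggle that reveals the hidden filters.
if (options.length > VISIBLE_COUNT) {
  const toggle = document.createElement("button");
  toggle.textContent = `Show all ${options.length} filters`;
  toggle.addEventListener("click", () => {
    options.forEach((opt) => {
      opt.hidden = false;
    });
    toggle.remove();
  });
  options[options.length - 1].parentElement?.appendChild(toggle);
}
```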

To follow these guidelines, Nielsen Norman Group recommends that website owners ask themselves the following questions:

  • Which characteristics are most influential to users in making their choice?
  • What words do users use to describe these characteristics?
  • Do users understand our labels, or do they look like jargon to them?
  • Which filter values are the most popular or most commonly used?

The best way to answer these questions is, of course, to talk to actual users! They need to be the ultimate arbiters of how your think tank website looks and how it works.

Unfortunately for many think tanks, department heads and research staff are setting web priorities while actual users are left out of the conversation entirely.

That’s why filters like “The Center for the Analysis of Governance Efficacy and Efficiency” persist when “Accountability” might serve users better.

Quick and Dirty Feedback

A quick method for getting feedback without interviewing users is to set up an experiment using Google Optimize.

You can offer users different versions of your website categorization, search filters, or other elements, and see how different options perform.

For example, offer 50% of users search results with “Topics” as a fully expanded and visible filter list while “Programs” are collapsed by default. Create another search results template that does the reverse for the other 50% of users.

In other words, create a simple A/B test.

Run the experiment for 30 days and see which search results version produces better results in terms of filter use, time on site, dwell time, etc.
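If you’re curious about the mechanics, here is a hand-rolled TypeScript sketch of that 50/50 assignment. Google Optimize handles this for you, so treat this purely as illustration; the variant names and selectors are hypothetical:

```typescript
// Assign each visitor to one of two variants and remember the choice,
// so returning visitors see a consistent version of the page.
type Variant = "topics-expanded" | "programs-expanded";

function getVariant(): Variant {
  const stored = localStorage.getItem("searchFilterVariant") as Variant | null;
  if (stored) return stored;

  const assigned: Variant =
    Math.random() < 0.5 ? "topics-expanded" : "programs-expanded";
  localStorage.setItem("searchFilterVariant", assigned);
  return assigned;
}

// Collapse whichever filter group this visitor's variant de-emphasizes.
// ".topics-filter" and ".programs-filter" are hypothetical selectors.
const variant = getVariant();
const collapsedSelector =
  variant === "topics-expanded" ? ".programs-filter" : ".topics-filter";
document.querySelector(collapsedSelector)?.classList.add("collapsed");
```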

We Can Help

If you want to learn more about avoiding this and other usability pitfalls for your think tank’s website, you can call us for a free 20-minute consultation. We can help you identify problem areas and prioritize the fixes that will have the biggest impact for your mission.

Don’t worry, this phone call won’t be “salesy.” Our call will be focused on learning about your goals and the problems you’re facing. That way, we can determine if our approach would be a good fit for your needs.

 

Book a free 20-minute consultation

Think Tanks Need to Invest in Better Search

One of our missions is to demonstrate that success on the web does not require having the largest budget. Instead, it requires slowing down, being thoughtful, and working with people who know how to get results out of the web.

Too many think tanks seem to be working with designers who care more about making things pretty, or developers who care about making things technically efficient, rather than working with usability experts who care about making a website into a tool that a human being can actually use.

This is why even groups with incredible, gargantuan, aircraft-carrier-group-sized budgets have internal search that is an absolute embarrassment.

Let’s start with the hugest of the huge. The World Economic Forum’s most recent public disclosure places their annual expenditures at nearly $500 million. Yet this is how their site search looks:

What’s wrong with these search results?

  • No autocomplete. There are no suggestions like “Covid-19 model” or “Covid-19 WHO,” and no correction of misspellings, which are common, especially for words like “hydroxychloroquine.” (A sketch of basic autocomplete follows this list.)
  • Results cannot be filtered. There’s no way to see only items published in the last week, or only event videos, or only reports. There’s no way to single out a particular author’s work. Filters would be handy considering my search for “Covid-19” produced 153,000 results. That list needs winnowing.
  • Sorting is broken. The only sorting options offered are “Relevance” and “Date,” and half the time, selecting “Date” resulted in an API error.
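Autocomplete, for the record, is not a heavy lift. Here is a minimal debounced sketch in TypeScript, assuming a hypothetical “/api/suggest” endpoint that returns a JSON array of suggestion strings, plus “#site-search” and “#suggestions” elements on the page:

```typescript
// Debounced autocomplete: wait for a pause in typing, then fetch suggestions.
const input = document.querySelector<HTMLInputElement>("#site-search")!;
const list = document.querySelector<HTMLUListElement>("#suggestions")!;

let timer: number | undefined;

input.addEventListener("input", () => {
  window.clearTimeout(timer);
  // Wait 250 ms after the last keystroke before hitting the server.
  timer = window.setTimeout(async () => {
    const query = input.value.trim();
    if (query.length < 3) {
      list.replaceChildren(); // too short — clear any old suggestions
      return;
    }
    const response = await fetch(`/api/suggest?q=${encodeURIComponent(query)}`);
    const suggestions: string[] = await response.json();
    list.replaceChildren(
      ...suggestions.map((text) => {
        const item = document.createElement("li");
        item.textContent = text;
        return item;
      })
    );
  }, 250);
});
```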

The problem here is not that the World Economic Forum can’t afford a decent search experience, it’s that they don’t care to provide one. The attitude seems to be: throw up a quick-and-dirty implementation of Google site search and let users struggle.

Users Want Well-Implemented Search

Some of this is a self-fulfilling prophecy. We hear this all too often when talking to our think tank clients:

Users don’t really use our search, so we don’t invest in it.

We understand this thinking. Think tanks have to invest their dollars strategically, but this analysis gets the causality backwards.

If you don’t invest in search, it will work poorly, so users won’t use it.

I know the causality works this way because it’s borne out by the research. When sites have simple, visible search functionality, users buy more products—or in the case of think tanks, download more research. A Baymard Institute study of e-commerce search found that sites with better search delivered better results to users (as in, they closed more sales), and the study made this important observation:

As the poor overall state of search is present within all industries, most sites will have an opportunity to create a true competitive advantage by offering a vastly superior search experience compared to that of their competitors.

This is 100% true when it comes to think tanks. To prove this, let’s look at some well-known groups and how they perform in search.

Think Tanks with Search Problems

Let’s first look at the problem children. Cato, Hudson, and Heritage are all highly respected groups with great research that, were it findable, would serve to make the world a better place. Yet each is failing at search in pretty significant ways:

The Cato Institute

Cato offers no filtering by relevant topics, no filtering by author, no filtering by date, and only shows users 10 results at a time. Content categories like “Cato Unbound” or “Downsizing the Federal Government” don’t matter to most journalists or policymakers. Internally relevant content distinctions should be replaced by commonly recognized policy topics.

The Hudson Institute

The Hudson Institute offers three results at a time, no filtering, and no sorting options.

The Heritage Foundation

A journalist might be looking for information on how Covid-19 affects defense policy, public schools, or state unemployment programs, but The Heritage Foundation offers no filters for the nearly 50 policy areas they cover. Instead, users are given filters of report formats that they probably don’t understand. Does the average journalist or Hill staffer know the difference between “Heritage Impact,” “Heritage Explains,” and “Report”? Not likely. If formats like these are offered, they need to be shown with explanations. On desktop, this should be done with tooltips that show up when users hover over a format.

Think Tanks with Great Search

Now let’s look at two examples of think tanks that get search right:

The Reason Foundation

What is Reason doing right in this example?

  • Showing the number of results. This not only shows users that you have a lot of content to begin with, it helps them understand if they should keep filtering. In this case, I got down to 3 results after filtering by date and topic, leading me directly to the content that’s most relevant.
  • Offering all content attributes as filters. If you associate a piece of content with a topic, author, or publication type, make those search filters. Notice how Reason puts Publication Types last on their list of filters, recognizing that format usually doesn’t matter as much to search users.
  • Using “load more” instead of pagination. Before I filtered down my list, Reason showed the number of results I had and offered an option to “Load More” at the bottom of the list. This is better than pagination as it invites users to commit only to expanding the current page, rather than beginning a journey into deeper and deeper pages. This may seem like a distinction without a difference, but loading another page is more of a commitment in the minds of users who are looking to get results fast, rather than get lost down blind alleys.
  • Filters work as checkboxes. This allows users to check multiple filters and easily turn filters on and off. Offering the “Clear” function is also key here, as it allows users to restart the filtering process without scrolling up and down the list of filters.
  • Attributes are distinct. I can easily scan by title, date, or author, the attributes that most users care about. It’s crucial to make all attributes visually distinct through differences in font size, color, typeface, etc.

Reason could improve their page by loading more results at a time and by making filtering persistent, so that once a user clicks an individual result and then clicks the back button, the same filtered list is presented.
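One lightweight way to get that persistence—a sketch of the general technique, not a description of Reason’s actual code—is to mirror the checked filters into the URL’s query string, so the back button restores the same state. The “.filter-checkbox” selector is hypothetical:

```typescript
// Write the active filters into the query string without reloading the page.
function syncFiltersToUrl(): void {
  const params = new URLSearchParams();
  document
    .querySelectorAll<HTMLInputElement>(".filter-checkbox:checked")
    .forEach((box) => params.append("filter", box.value));
  history.replaceState(null, "", `${location.pathname}?${params}`);
}

// Re-check the filters named in the URL, e.g. after a back-button visit.
function restoreFiltersFromUrl(): void {
  const active = new Set(new URLSearchParams(location.search).getAll("filter"));
  document
    .querySelectorAll<HTMLInputElement>(".filter-checkbox")
    .forEach((box) => {
      box.checked = active.has(box.value);
    });
}

restoreFiltersFromUrl();
document
  .querySelectorAll<HTMLInputElement>(".filter-checkbox")
  .forEach((box) => box.addEventListener("change", syncFiltersToUrl));
```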

Pew Research Center

The Pew Research Center also nails search with a good-looking, thoughtfully designed search filtering page:

Like Reason, Pew shows the number of results prominently, offers attributes as filters, uses checkboxes for most filters, and presents results in an easily scannable list.

Pew offers pagination, rather than “Load More,” which we think is a mistake, but it does keep results persistent, so back button functionality works when dipping in and out of search results.

Pew also offers two really great features that Reason does not:

  1. Visual filter management. By stacking up filters at the top of the search results, Pew uses conventions users understand from e-commerce and makes removing filters intuitive.
  2. Previewing result counts. This guides users to filters that will help to narrow their results quickly. Pew also grays out some filter options, indicating when a filter has eliminated some content from the results. (The counting logic behind this is sketched below.)
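Computing those preview counts is straightforward once each result is tagged with its topics. Here is an illustrative TypeScript sketch—the data, types, and selectors are made up, not Pew’s:

```typescript
// For each topic, count how many of the current results it would retain.
interface Result {
  title: string;
  topics: string[];
}

function previewCounts(
  results: Result[],
  topics: string[]
): Map<string, number> {
  const counts = new Map<string, number>(
    topics.map((t): [string, number] => [t, 0])
  );
  for (const result of results) {
    for (const topic of result.topics) {
      if (counts.has(topic)) counts.set(topic, counts.get(topic)! + 1);
    }
  }
  return counts;
}

// Hypothetical sample data.
const currentResults: Result[] = [
  { title: "Covid-19 and Schools", topics: ["Education", "Health"] },
  { title: "Telehealth Rules", topics: ["Health"] },
];
const counts = previewCounts(currentResults, ["Education", "Health", "Trade"]);

// Show "(n)" next to each filter label and gray out empty ones.
document.querySelectorAll<HTMLElement>(".topic-filter").forEach((label) => {
  const n = counts.get(label.dataset.topic ?? "") ?? 0;
  label.append(` (${n})`);
  label.classList.toggle("disabled", n === 0);
});
```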

Bigger Picture: Embracing Inbound

Larger groups may not value search as much because they believe their content gets seen anyway. They reach journalists through their large communication teams, reach Hill staffers through their government outreach teams, and place research directly in the hands of other researchers, often mailing it out as physical publications.

Social media too is a way of extending that old-school, “we’ve got a great list” mentality. Just as groups have thrived based on their direct-mail lists or rolodexes filled with friendly journalists, their social media followers become another means of broadcasting messages out. Same model, different mechanism.

It’s all part of the megaphone, outbound approach. But your website opens a new front, a new way of doing things. It’s not a megaphone, it’s a magnet.

By embracing search that works well—one part of making a website more usable—think tanks can embrace the inbound “magnet” model of marketing in addition to their well-established outbound marketing efforts.

In this mode of thinking, research is still used as fodder for direct outreach to journalists, policymakers, and fellow scholars, but it’s given further purpose by populating search engine results and being permanently available (and hopefully discoverable) on your website.

I suppose this stuff is obvious—of course we understand we’re no longer in the days of relying solely on media outreach and pushing our content to our desired audiences.

But if that’s true, then why are so many think tank websites difficult to navigate, impossible to search, and poorly ranked on search engines?

The answer: it’s easier to keep doing the same thing than to change, especially if you’re still getting good-enough results.

Start Measuring Inbound

In order to take advantage of what the web really has to offer, think tanks need to start measuring their website performance and making it clear to their donors that this stuff matters.

So instead of measuring only outbound success indicators like:

  • Op-Eds
  • Media Citations
  • Television/Radio Appearances
  • Social Media Followers

Think tanks need to also measure inbound indicators like:

  • Search Engine Ranking
  • Organic Search Traffic
  • Time on Site
  • Pages per Visit
  • Dwell Time
  • Newsletter Sign-Ups
  • Contacts Generated from Web Forms

Think tanks can take this even further by using a service like CallRail to track how many phone calls were the result of visits to their website.
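Much of this measurement can start with the analytics tools most sites already run. For example, assuming the standard Google Analytics gtag.js snippet is on the page, newsletter sign-ups and form contacts can be reported as events—the form IDs here are hypothetical:

```typescript
// Tell TypeScript about the global gtag() function that the standard
// Google Analytics snippet defines.
declare function gtag(
  command: "event",
  eventName: string,
  params?: object
): void;

// Report a newsletter sign-up ("sign_up" is a GA4 recommended event name).
document
  .querySelector<HTMLFormElement>("#newsletter-form")
  ?.addEventListener("submit", () => {
    gtag("event", "sign_up", { method: "newsletter" });
  });

// Report a contact generated from a web form.
document
  .querySelector<HTMLFormElement>("#contact-form")
  ?.addEventListener("submit", () => {
    gtag("event", "generate_lead", { form: "contact" });
  });
```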

Only by measuring inbound marketing performance will think tanks start to invest in their websites as much as they invest in outbound marketing methods.

We Can Help

If you want to learn more about optimizing search or any other part of your think tank website, you can call us for a free 20-minute consultation. We can help you identify problem areas and what should be addressed first to create the biggest return on your investment of time and money.

Don’t worry, this phone call won’t be “salesy.” Our call will be focused on learning about your goals and the problems you’re facing. That way, we can determine if our approach would be a good fit for your needs.

 

Book a free 20-minute consultation

Making the Most of Your Quarantine Time

During times of crisis, sound public policy is sorely needed and often in short supply.

The think tank industry can prevent very bad choices from being made if its store of ideas and its experts are able to reach policymakers.

But social distancing means many of the tools think tanks rely on are off the table:

  • Meeting with Capitol Hill staffers
  • Testimony before committees
  • Panel discussions
  • Public talks

Think tank offices are closed. The battle of ideas has moved onto our laptops and into our makeshift home offices.

But there’s a lot we can do to improve the chances of sound policies being adopted even under these new constraints.

Reinvest in the Web

Just as think tankers are working from home, journalists, academics, and even policymakers are working remotely.

Social distancing means that both the production and the consumption of public policy research will be mediated online.

Given this sudden new reality, now is a good time to take stock of the state of your website and make changes that will build your citations, increase your subscribers, and encourage crucial donations.

Focus on Usability

When we talk to think tank communications teams we hear the same problems over and over again:

  • “Our scholars can put hundreds of hours of work into a single research paper that quickly gets buried in our website after it’s released.”
  • “Reporters call me for basic information, things they should see on our website. I worry that other reporters get discouraged and don’t even bother to call.”
  • “Search on our site is so broken, our own scholars use Google to find things they’ve written for us.”
  • “Other think tanks are growing their online donor base, but we’re still focused on direct mail because our website doesn’t convert.”

All of these problems have one thing in common: usability.

That term may seem like woo-woo tech speak, but usability addresses a concrete, fundamental question:

Can users complete the tasks necessary on your website to reach their goals?

It’s Science!

Thankfully, the question of whether or not users can complete tasks can be answered through a reliable method:

  • Hypothesize
  • Experiment
  • Analyze
  • Repeat

This should sound familiar because it’s the scientific method! It’s the same thing that scientists (even social scientists like economists) use to arrive at their conclusions.

And just like any other scientific discipline, usability observations turn into laws and theories, called “guidelines.”

These general rules allow web designers to build on best practices arrived at over decades of observing users interact with technology.

Room for Improvement

Basic usability—task completion—is still in need of improvement.

A Nielsen Norman Group web study in 2016 found that only 82% of assigned tasks could be completed in user tests.

That means that roughly 1 in 5 web users give up on basic tasks, like searching or browsing for something, subscribing to something, contacting someone, or making a donation.

That’s pretty abysmal.

Outshine the Competition

A think tank that embraces usability could easily outshine its competition online if only because the competition is so very bad.

Usability is simply overlooked by most think tanks.

Managers seem satisfied with websites that don’t crash and look good, and while those things are necessary, policy scholars and communications professionals understand they aren’t sufficient.

Too many think tank websites simply don’t allow users to perform these basic tasks:

  • Find the most relevant research on an issue or proposed legislation/regulation
  • Contact a scholar or communications team member to schedule an interview
  • Subscribe to an email newsletter or podcast
  • Follow the group on social media
  • Make a one-time or recurring donation

By first applying general usability guidelines consistently across their website and then engaging in regular user testing, even smaller, scrappier think tanks could leap ahead of the competition.

Getting Started

To improve your usability, we recommend starting with a UX audit.

For us this means:

  1. Listing Goals: We mean big goals. How do the things in your board report relate to your website? Let’s focus on making the big performance indicators move.
  2. Creating User Personas: Who visits your website? What do they want? How does that relate to your goals? What are these folks trying to avoid?
  3. Mapping User Journeys: If you could lead your users by the hand, where would you take them? Do they ideally contact you, subscribe to your newsletter, or become donors?
  4. Analytics: How are your goals being served now? Where can we see that users are getting stuck or aren’t getting where you’d like them to go?

Once these things are known, each page and element of your website undergoes detailed scrutiny.

Our audits range from 150 UX elements to over 300 UX elements and address things like the structure of your categories, the functionality of search, mobile menus, breadcrumbs, publication filtering, and donation flow.

We’ve combined studies of non-profits, leading e-commerce sites, and corporate PR pages to arrive at a set of guidelines that address the unique tasks think tank website users are trying to complete.

If you want to conduct an audit like this internally, you can obtain up-to-date research from sources like Steve Krug, Rosenfeld Media, Nielsen Norman Group, or the Baymard Institute.

When to Hire a Pro

The biggest value a professional UX audit brings is speed. UX audits are time-consuming, and most think tanks don’t have the bandwidth.

It also helps to have experience across several websites. At Tallest Tree, we’re in the unique position of having over a decade of experience working with dozens of think tanks, so we know how usability issues can affect citations, subscribers, and donations.

Hiring a third party also provides objectivity. A professional UX auditor won’t be emotionally invested in your design or the content decisions you’ve made. A consultant can look at your site with new eyes, something your team cannot do.

Get in Touch

If you’d like to use your work-from-home time to dig into website usability, or to address another problem you’ve identified with your think tank’s website, please get in touch. Visit our Contact Page and we can book a call.

We find that we have a little more free time these days, so we’ll probably get back to you the same day.