Scanning and Selecting Enterprise Search Results: Not as Easy as It Looks
Scanning and selecting enterprise search results sounds easy. Users first face the challenge of constructing a query and adjusting filters and facets to reduce the results to a manageable set; they must then scan that list in detail and select the items most relevant to their information need. It seems simple, when in fact this is where the difficult (and time-consuming) work begins.
The Challenge of Search Result Scanning
We often talk glibly about scanning a list of search results without considering what the action involves. The speed at which results can be scanned and their potential relevance assessed varies from searcher to searcher. The concept of perceptual speed (the cognitive ability that determines how quickly someone can compare or find figures or symbols, or carry out other tasks involving visual perception) is usually ignored.
Perceptual speed isn't the same as readability. Perceptual speed concerns how quickly the searcher can make out words and other information elements; readability concerns how well those elements can be comprehended in the process of extracting information and knowledge.
Perceptual speed is not easy to measure, but its impact on the search user can be quite dramatic, especially for users with any degree of dyslexia. One outcome is that scanning is slow enough that the user gives up after a few pages of results, and so never finds all the relevant items.
Related Article: Diagnosing Enterprise Search Failures
Reviewing Results: Simple, Right?
The next step is to look at each result and decide whether it is relevant enough to click on and view the associated content. Simple! Or is it? As with many aspects of enterprise search, there seems to be no research on how snippet length and design support informed decisions on relevance.
There is some (but arguably not enough) research on snippets for web search queries, but in general those snippets link to a web page that can be scanned and assessed reasonably quickly. In enterprise search the content item could be several hundred pages long, and it may be far from obvious where the information the ranking algorithm judged relevant actually sits.
There are three fundamental ways of generating a snippet.
- Present the query term within a text sequence that provides enough context to assess relevance (a keyword-in-context approach, sketched below).
- Create a computer-generated summary of the content item.
- Reproduce the first few lines of an abstract (see Google Scholar for examples).
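To make the first approach concrete, here is a minimal keyword-in-context sketch in Python. It illustrates the general technique, not any vendor's implementation; the window size and the <em> highlighting markup are arbitrary assumptions.

```python
import re

def kwic_snippet(text: str, query: str, window: int = 60) -> str:
    """Return a keyword-in-context snippet: the first occurrence of the
    query term plus `window` characters of context on each side. Falls
    back to the start of the document if the term is not found."""
    match = re.search(re.escape(query), text, re.IGNORECASE)
    if match is None:
        return text[: 2 * window].strip() + "..."
    start = max(0, match.start() - window)
    end = min(len(text), match.end() + window)
    snippet = text[start:end].strip()
    # Mark the matched term so the SERP can highlight it.
    snippet = re.sub(re.escape(query), lambda m: f"<em>{m.group(0)}</em>",
                     snippet, count=1, flags=re.IGNORECASE)
    prefix = "..." if start > 0 else ""
    suffix = "..." if end < len(text) else ""
    return f"{prefix}{snippet}{suffix}"

print(kwic_snippet("The quarterly procurement policy was revised in March.",
                   "procurement"))
```

Even this toy version exposes the design decisions a real engine must make: which occurrence to centre on, how wide the context window should be, and how to mark the match.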
Some search application vendors provide a thumbnail of the page that contains the query term, but this ignores the accessibility problems of inspecting a small image that can only be viewed through very precise mouse control.
Related Article: We Need to Build Accessibility Into Our Digital Workplaces
Metadata Matters
Metadata is, of course, also a consideration here. In the context of snippets I am not considering the intrinsic value of metadata, only the potential benefits of presenting metadata values within the snippet. One might argue on principle that displaying metadata adds value, but it also complicates the array of information presented, lengthens the snippet (perhaps to the point where 10 results no longer fit on a single SERP page) and slows the scanning of the SERP (as discussed above in relation to perceptual speed).
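To make the length trade-off visible, here is a hypothetical result template that optionally appends metadata fields to a snippet; the field names (author, modified, source) are illustrative assumptions, not a standard schema.

```python
def render_result(title: str, snippet: str, metadata: dict[str, str],
                  show_fields: tuple[str, ...] = ()) -> str:
    """Render a single SERP entry. `show_fields` controls which metadata
    values are appended; an empty tuple yields the bare title and snippet."""
    lines = [title, snippet]
    for field in show_fields:
        if field in metadata:  # Tolerate sources that lack a field.
            lines.append(f"{field}: {metadata[field]}")
    return "\n".join(lines)

meta = {"author": "J. Smith", "modified": "2021-03-14", "source": "Contracts"}
snip = "...the <em>procurement</em> policy was revised in March..."
# Bare entry (2 lines) vs. metadata-enriched entry (4 lines): every field
# added lengthens each result and reduces how many fit on one screen.
print(render_result("Procurement policy", snip, meta))
print(render_result("Procurement policy", snip, meta,
                    show_fields=("author", "modified")))
```

Letting the user choose which fields appear is one way to resolve the tension between context and scanning speed.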
Another issue to consider is federated search, where the construction and display of snippets may vary considerably across the wide range of applications being searched. This is primarily a problem with query-time federation, but it can also arise in index-time federation.
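One common mitigation is to normalize each source's results into a shared schema before display, so that every snippet on the SERP has the same shape. The sketch below assumes two invented source payload formats; real connectors would follow each application's actual API.

```python
from dataclasses import dataclass

@dataclass
class UnifiedResult:
    title: str
    snippet: str
    source: str

def normalize(raw: dict, source: str) -> UnifiedResult:
    """Map a source-specific payload onto the shared result schema.
    Both payload layouts here are hypothetical examples."""
    if source == "intranet_cms":
        # This source supplies a ready-made highlighted summary.
        return UnifiedResult(raw["Title"], raw["Summary"], source)
    if source == "docstore":
        # This source only has an abstract, so truncate it to snippet length.
        return UnifiedResult(raw["name"], raw.get("abstract", "")[:160], source)
    raise ValueError(f"No connector for source: {source}")

print(normalize({"Title": "Travel policy", "Summary": "...travel costs..."},
                "intranet_cms"))
```

Even with such a layer, truncated abstracts and engine-generated highlights remain visibly different in quality, which is part of the disparity the user has to contend with.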
Related Article: Enterprise Search Development: Start With the User Interface
Relevance and Ranking Results
We know from recent research that users with different levels of expertise may make different decisions about which results initially appear relevant. Moreover, most search metrics are built around the notional relevance of the results presented in response to a query. If true relevance cannot be reliably judged from the snippet, any metrics associated with query performance (especially precision) are called into question.
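Precision at a cutoff k is conventionally the fraction of the top k results judged relevant. The toy calculation below (with invented judgement data) shows how the metric shifts when relevance is judged from snippets rather than from the full documents:

```python
def precision_at_k(judgements: list[bool], k: int) -> float:
    """Fraction of the top-k results judged relevant."""
    top = judgements[:k]
    return sum(top) / len(top)

# Judged from snippets alone: results 1, 2 and 4 looked relevant.
snippet_judgements = [True, True, False, True, False]
# Judged from the full documents: result 2 turned out to be irrelevant.
document_judgements = [True, False, False, True, False]

print(precision_at_k(snippet_judgements, 5))   # 0.6
print(precision_at_k(document_judgements, 5))  # 0.4
```

If snippet-based and document-based judgements diverge this much, precision figures derived from snippet-level assessments say little about true query performance.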
The User Experience
There are no easy solutions to the issues raised in this column. In the quest for an acceptable user experience, the points to consider are:
- Are the techniques used by the search application to create snippets appropriate to the types of content being searched?
- Can the format of snippets be customized by the user?
- How easy is it to scan and assess results from a federated search?
In the final analysis, it does not matter how sophisticated the search technology is (in terms of semantic analysis, etc.). What matters is whether the user can make an informed judgement about which piece of content in the results serves their information requirement, as this reinforces their trust in the application and maintains the highest possible level of overall search satisfaction.
About the Author
Martin White is Managing Director of Intranet Focus, Ltd. and is based in Horsham, UK. An information scientist by profession, he has been involved in information retrieval and search for nearly four decades as a consultant, author and columnist.