It’s no secret that metrics and analytics are crucial for defining and measuring the success of a site or web application. However, many organizations track metrics that reveal little about their actual objectives, and higher education is no exception to the plague of inefficient UX metric tracking. In this post, I want to explore the why and how behind effective user metrics in higher education.
Why You Should Measure Higher Ed Pages
First, the why. Essentially all universities want to know everything they can about prospective and current students in order to deliver an experience that aligns with university culture and values. With this in mind, universities operate much like businesses competing for customers (in this case, students). Tracking the right metrics to measure these users’ experiences, and acting on those findings, can help any business (including a university) engage its ideal audience.
So what are the right metrics? Among the many aspects of student interaction that universities and colleges would like to measure, one of the most important is the effectiveness of the institution’s site. This raises a two-part question: What is the goal of the site, and how do we know when the site is achieving that goal?
One of the main purposes of a university site is to provide information to prospective students (about institutional culture, values, campus life, etc.) and make sure that information is actually being viewed and processed by the target audience. Measure these factors, and you’ll know in detail how effective the website is.
There are a number of metrics (as part of a greater measurement strategy) that can reveal details of how effective the website is at providing information to prospective students. The first step needed before deciding on specific metrics is identifying what information these students want/need. This is usually identified through discovery research and stakeholder interviews. Assuming these needs have already been identified, the next step is to create content that meets the students’ informational needs and then test how well this content works.
How to Measure Effectiveness of Higher Ed Pages
Now for the how: Testing and measuring your content. What we truly want to measure is if the students are reaching these content pages on the website and if those pages are efficiently delivering that information (content quality). In this case, no single metric will answer both. The first part (Are students reaching the page?) can be answered by tracking page views. The second part can be trickier since there are multiple metrics that can indicate efficiency but can also suggest contradictory conclusions.
One of these metrics can be the exit rate of a page. A page’s exit rate tracks the percentage of visitors that ended their session on that page. In this particular case, a high exit rate can indicate both efficiency and inefficiency. We can assume one of two things about a user who exited a page: they either found what they were looking for and then left the site, or they couldn’t find what they were looking for and left the site. Quite the paradox.
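To make the definition concrete, here is a minimal sketch of how an exit rate can be computed from raw session data: the share of a page’s views where that page was the last one in the visit. The `exit_rates` helper and the sample paths are illustrative assumptions, not output from any real analytics platform.

```python
from collections import Counter

def exit_rates(sessions):
    """Return {page: exit_rate}, where exit_rate = exits / pageviews.

    Each session is a list of page paths in visit order; the last
    entry is the page where the session ended.
    """
    pageviews = Counter()
    exits = Counter()
    for pages in sessions:
        pageviews.update(pages)
        if pages:
            exits[pages[-1]] += 1
    return {page: exits[page] / views for page, views in pageviews.items()}

# Hypothetical sessions on a university site
sessions = [
    ["/home", "/admissions", "/apply"],
    ["/home", "/campus-life"],
    ["/campus-life"],
    ["/home", "/admissions"],
]
rates = exit_rates(sessions)
print(rates["/campus-life"])  # 2 exits / 2 views = 1.0
print(rates["/home"])         # 0 exits / 3 views = 0.0
```

Note that a rate of 1.0 here says nothing by itself about whether those visitors left satisfied or frustrated, which is exactly the ambiguity described above.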
Because it’s hard to know which of these two assumptions actually caused the exit, we can make a better guess by looking at secondary metrics. One of these is time on page. If the average time on a page matches what we would predict, we can assume the content was easily digestible. For example, if we estimate that a piece of content takes about two minutes to consume, we expect the average time on page to be around two minutes.
However, this does not account for users who walked away from their computer while the page was open, racking up time and skewing the average. Depending on the platform you use for metrics, you may be able to remove these outliers so they do not influence your average time on page. Keep in mind that an average is only as stable as the sample behind it (the number of pageviews, in this case): the larger the sample, the less a few outliers can move it. Be wary of low visitor numbers coupled with an unexpected average.
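As a rough illustration, here is one way to trim those walked-away outliers before averaging. The 30-minute cutoff and the sample times are assumptions made for this sketch; real analytics platforms apply their own session-timeout rules.

```python
import statistics

def average_time_on_page(seconds, cutoff=1800):
    """Mean time on page after dropping samples above `cutoff` seconds.

    The 1800-second (30-minute) default is an illustrative assumption,
    not an industry standard.
    """
    kept = [s for s in seconds if s <= cutoff]
    return statistics.mean(kept) if kept else 0.0

# Hypothetical samples: one user left the tab open for two hours
times = [110, 95, 130, 120, 7200, 105]
print(round(average_time_on_page(times)))  # 112 seconds
print(round(statistics.mean(times)))       # 1293 seconds: raw mean is skewed
```

With the outlier removed, the average lands near the two-minute read time we predicted; the raw mean would have suggested something very different.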
By combining these two metrics, you can get a better picture of what’s going on in a user’s mind. Too much time spent on the page before exiting (“too much” will depend on the length and depth of the content) can indicate that your content is complex and difficult to understand. If this is the case, review your content and determine whether it should be simplified or broken up into multiple pages. Too little time spent on the page can indicate that a user did not find the content they were looking for and left the site. If this is the case, you may have a bigger problem, as it can point to a navigational issue that is not as easy to fix as editing content.
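The two-metric reasoning above can be sketched as a simple triage heuristic. The thresholds here (a 50% exit rate, and 2x or 0.25x the expected read time) are illustrative assumptions that would need tuning for any real site:

```python
def diagnose(exit_rate, avg_seconds, expected_seconds):
    """Classify a page using the exit-rate + time-on-page heuristic.

    All thresholds are illustrative assumptions, not benchmarks.
    """
    if exit_rate < 0.5:
        return "ok: most visitors continue onward"
    if avg_seconds > 2 * expected_seconds:
        return "review content: possibly too complex or too long"
    if avg_seconds < 0.25 * expected_seconds:
        return "check navigation: visitors may not find what they expect"
    return "likely fine: visitors read, then leave satisfied"

# A page we expect to take about two minutes (120 s) to read
print(diagnose(exit_rate=0.8, avg_seconds=300, expected_seconds=120))
print(diagnose(exit_rate=0.8, avg_seconds=15, expected_seconds=120))
```

The point is not the specific numbers but the structure: neither metric alone resolves the exit-rate paradox, while the pair narrows down which assumption is more plausible.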
Measure Online Applications in Higher Ed
Another objective that universities should track (if they don’t already) is anything to do with online applications. This is broad, so let’s zero in on the most important pieces. For higher ed institutions, applications are a direct path to tuition revenue, and as such, this path (navigation) should be optimized to convert as many users as possible.
To begin assessing the performance of an application’s online presence, start by looking at conversion rates: the percentage of users who complete the application out of the total who opened it (completed / total opens). An unexpectedly low conversion rate can reflect poor application design. It is also important to track how users reach your application, as this can reveal opportunities for path optimization. For example, if your application only lives on your homepage but a significant number of users travel back to the homepage from a campus culture page to find it, it makes sense to add a link to your application on that campus page.
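The conversion-rate calculation itself is simple division; a minimal sketch, with hypothetical event counts:

```python
def conversion_rate(completed, opens):
    """Fraction of users who finished the application after opening it."""
    return completed / opens if opens else 0.0

# Hypothetical counts: 84 completed applications out of 600 opens
print(conversion_rate(completed=84, opens=600))  # 0.14, i.e. 14%
```

What counts as "unexpectedly low" depends on your own baseline, so compare against your historical rate rather than an absolute number.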
Another metric to track when looking at your application efficiency is click-through rate. This will let you know the percentage of users that clicked to the application from pages that contain the option. This will give you an understanding of which pages are more suitable to house the application.
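Breaking click-through rate out by the page that hosts the application link makes the comparison concrete. In this sketch the page names and counts are invented for illustration:

```python
def click_through_rates(stats):
    """Compute CTR per hosting page.

    `stats` maps page -> (clicks_to_application, pageviews).
    """
    return {page: clicks / views for page, (clicks, views) in stats.items()}

# Hypothetical per-page counts of application-link clicks vs. pageviews
stats = {
    "/home": (120, 4000),
    "/admissions": (90, 1000),
    "/campus-life": (15, 1500),
}
for page, ctr in click_through_rates(stats).items():
    print(f"{page}: {ctr:.1%}")
```

In this invented data, /admissions converts viewers to clickers far better than the homepage despite fewer raw clicks, which is exactly the kind of placement insight the metric is for.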
Measurement for Illumination, Not Support
“He uses statistics as a drunken man uses lamp-posts… for support rather than illumination.” (commonly attributed to Andrew Lang)
Digital measurement of student engagement can shed light on all kinds of opportunities to improve higher ed sites and web applications. However, recommendations to change a website (or any other digital asset a university owns) should never be based solely on metrics. Numbers only give you one side of the equation (no pun intended); the other side is subjective understanding, which requires expertise in the industry and a feel for higher ed users.
So as you apply an appropriate measurement strategy, always keep your university’s business objectives in mind. It is also important to cross-reference metrics whenever possible to avoid relying too heavily on one figure. Ultimately, you want to rely on digital measurement to inform your site strategy, then verify your new ideas through testing rather than rushing a big change.
To read firsthand how to combine UX research and content strategy best practices for a higher ed client, take a look at our case study for Campbell University.