Getting Started With Assessment Analytics
Assessment Analytics highlights key performance metrics and benchmarks for each assessment, and is designed to help you optimize your technical hiring process. This deep dive view is available for every assessment, whether that's a custom assessment or a Certified Evaluation backed by a Skills Evaluation Framework.
Here, you'll see data about your test-takers, including their progression through the assessment funnel, average scores, skill areas, feedback, and language preference. These are quantitative metrics that describe how each assessment is performing, and help drive discussions with stakeholders and with your Customer Success Manager about process improvements that can help you acquire and retain quality test-takers.
We'll go into each section in detail, so follow along to see how Assessment Analytics can enhance your understanding of your assessments' impact. Please note that Assessment Analytics is only available to users with certain account permissions, so contact your company admin if you have questions about your access.
Where to find Assessment Analytics
There are two main ways to access Assessment Analytics. For both methods, first select Assessments in the top toolbar, then click on the Assessments tab.
From here, you can select Analytics on the right side of any assessment:
Or click into any assessment and select All Analytics in the analytics bar:
Remember that neither of these options will be available if your user account does not have the appropriate permissions.
Navigating Assessment Analytics
Assessment Analytics is made up of the following sections:
- Assessment Funnel Breakdown
- Scoring Overview
- Skill Area Distribution Overview and Skill Area Analysis*
- Test-taker Feedback Results
- Submission Attempts Per Language
*Please note that some metrics, including the Skill Area Distribution Overview and Skill Area Analysis modules, will not be populated for custom assessments. These metrics rely on the comparability and validation of Skills Evaluation Frameworks, and are only available for Certified Evaluations.
We calculate these metrics for the individual assessment you've selected. Use the date picker in the top right of the screen to change the period for which the analytics are measured.
Selecting Custom in the date picker lets you choose any date range between when you first started using CodeSignal and today. All of the metrics in Assessment Analytics will automatically update to reflect the time period you choose, and your selection will be saved for you to come back to, for that individual assessment only.
Assessment Funnel Breakdown
Are you retaining test-takers effectively throughout the assessment process? The Assessment Funnel Breakdown is a great place to start thinking about how efficient your funnel is. These metrics show you how many test-takers have been invited, how many have responded, and of those, how many have made an attempt. Statistics like these highlight where engagement bottlenecks, or potential drop-off points, occur in your funnel.
The first section of the Assessment Funnel Breakdown shows how many private invitations have been sent for that individual assessment. These are invitations tied to a specific email, for a specific test-taker. Note that we are not able to track invites for shared links, as these links are publicly available rather than tied to a single test-taker.
Next, we see how many responses this assessment has received. A response means the test-taker opened the assessment, rather than ignoring or actively declining the invitation. Here, we measure the response rate for private invitations: of the test-takers who received a private invitation, how many actually clicked into it and opened the assessment. We also track how many test-takers open the assessment through a shared link, and that number is included as well.
For Certified Evaluations backed by a Skills Evaluation Framework, we can also compare your test-takers' response rate with the response rate we see across every assessment backed by that same Skills Evaluation Framework. If your assessment is performing under the benchmark, it may be time to look at how you select which test-takers receive invitations, the messaging associated with those invitations, or how long test-takers have to complete the assessment. Since custom assessments are not associated with a Skills Evaluation Framework, you won't see this comparison metric for them.
Finally, the Assessment Funnel Breakdown shows your test-takers' attempt rate for this assessment. Of all the test-takers who open the assessment, regardless of how they got there, this is the percentage who actually attempt it. As with Private Invitation Responses, we can provide a comparison against the CodeSignal benchmark for Certified Evaluations. If your attempt rate is low, that might mean test-takers decide the assessment isn't worth attempting. Perhaps it's too long or too complicated for the test-taker pool seeing it, or perhaps it isn't aligned with what test-takers expect to be asked given the position they're applying for.
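To make these two rates concrete, here's a minimal sketch of the arithmetic described above, using hypothetical counts rather than anything pulled from the platform (the dashboard performs these calculations for you):

```python
# Illustrative only: hypothetical counts, not CodeSignal's actual data model.
private_invitations_sent = 200   # invitations tied to a specific test-taker's email
private_invitation_opens = 120   # test-takers who opened via a private invitation
shared_link_opens = 40           # test-takers who opened via a shared link
attempts = 112                   # test-takers who actually attempted the assessment

# Response rate: opens from private invitations / private invitations sent
response_rate = private_invitation_opens / private_invitations_sent

# Attempt rate: attempts / all opens, regardless of how test-takers arrived
attempt_rate = attempts / (private_invitation_opens + shared_link_opens)

print(f"Response rate: {response_rate:.0%}")  # Response rate: 60%
print(f"Attempt rate: {attempt_rate:.0%}")    # Attempt rate: 70%
```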
Scoring Overview
Is your assessment attracting quality test-takers? Scoring Overview helps you recognize when your assessments might be targeting the wrong test-taker pool. For Certified Evaluations, we can also compare your score distribution with a benchmark, helping you see whether there might be untapped potential in the market. The scoring threshold tool gives you insights to help manage your interview volume by selecting an appropriate threshold for advancing test-takers to the next stage.
The Scoring Overview graph shows the distribution of scores for your assessment. Because we can directly compare scores for Certified Evaluations backed by the same Skills Evaluation Framework, those assessments have benchmark comparison bars. This information is not available for custom assessments.
Another key feature in the Scoring Overview module is the scoring threshold tool. If you use scoring thresholds to progress test-takers to the next phase, you can now see how changing the threshold for any given assessment will affect your volume at the next stage. Click the pencil icon to modify the score threshold, and we'll automatically update the percentage of test-takers who score at or above that threshold. For Certified Evaluations, we'll also calculate a benchmark across all test-takers who complete this evaluation on CodeSignal, so you can see how your pool compares.
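The percentage shown by the threshold tool is simply the share of scores at or above your cutoff. The sketch below illustrates the idea with hypothetical scores; the dashboard does this calculation for you automatically:

```python
# Illustrative only: hypothetical scores, not real test-taker data.
scores = [312, 455, 498, 510, 547, 601, 632, 688, 705, 760]

def share_at_or_above(scores, threshold):
    """Fraction of test-takers scoring at or above the threshold."""
    return sum(score >= threshold for score in scores) / len(scores)

print(f"{share_at_or_above(scores, 500):.0%} of test-takers clear a threshold of 500")  # 70%
```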
Skill Area Distribution Overview and Skill Area Analysis
What are your test-takers' strengths, and where might you need development programs for this role? The Skill Area Distribution Overview and Skill Area Analysis modules offer a well-rounded view of your test-takers' skill proficiency. With two ways of visualizing skill area proficiency, this section dives into deeper detail than test-taker scores alone.
The Skill Area Distribution Overview and Skill Area Analysis metrics rely on skill area information from validated Skills Evaluation Frameworks. Because this data is not available for custom assessments, these modules will appear blurred out when you visit Assessment Analytics for a custom assessment.
For Certified Evaluations, these modules display test-taker proficiency for each skill area measured by the assessment. These visuals give you a sense of where your test-takers' strengths and areas for improvement lie. Test-takers are ranked from Developing to Expert depending on their proficiency. Each Skills Evaluation Framework covers different skill areas, and some cover a larger number of skill areas than others. Where available, click View All below the Skill Area Distribution Overview graph to see the full analysis.
Test-taker Feedback Results
Are test-takers who take your assessment satisfied with their experience? We call out three specific feedback metrics in the Test-taker Feedback Results module, placing test-taker experience at the forefront of discussions about assessment positioning. Monitoring these metrics can give you a sense of where test-takers may be encountering hidden friction.
The three Test-taker Feedback Results metrics are user experience, fairness, and relevance. All test-takers, whether they complete a custom assessment or a Certified Evaluation, are invited to rate their experience after finishing an assessment. Test-takers evaluate three statements to generate these feedback metrics. The statement test-takers rate for user experience is: "The platform provided a good user experience." For fairness: "Given the purpose of this evaluation, the questions seemed fair." And for relevance: "The questions represented what I would expect to do on the job."
For Certified Evaluations, we can also compare these scores with average scores across all test-takers taking an assessment backed by that same Skills Evaluation Framework. Comparing your test-takers' feedback with the benchmark is a good starting point for evaluating your recruitment messaging and test-taker targeting strategies.
Submission Attempts Per Language
Are the programming languages your test-takers are most comfortable with aligned with your team's work? Submission Attempts Per Language shows the coding languages that your test-takers prefer to use for that assessment. This information can indicate whether new hires are likely to need extra support when settling into the team.
The Submission Attempts Per Language graph draws its data from test-taker code executions, so the languages here represent what test-takers are actually using, not just what they say they prefer. Although most assessments tend to see four or five common preferred languages, you can click View All to see a full breakdown across all available languages for that assessment. Note that some Certified Evaluations are backed by frameworks that only accept certain languages, so some assessments may naturally have a less diverse language preference graph.
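As a rough illustration of what this graph is counting, the sketch below tallies a hypothetical list of code executions by language; in practice, the platform gathers this data for you:

```python
from collections import Counter

# Illustrative only: a hypothetical list of languages recorded from code executions.
execution_languages = ["Python", "Python", "JavaScript", "Java", "Python", "C++", "JavaScript"]

# Tally submission attempts per language, the same kind of count the graph shows.
attempts_per_language = Counter(execution_languages)
for language, count in attempts_per_language.most_common():
    share = count / len(execution_languages)
    print(f"{language}: {count} attempts ({share:.0%})")
```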
Submission Attempts Per Language statistics are calculated across both custom assessments and Certified Evaluations. However, users looking at Certified Evaluations will also see a benchmark comparison in this module, giving you a sense of where the broader market is trending.