Getting Started With Executive Analytics
Executive Analytics is a robust, data-packed dashboard that connects the performance of your hiring process to bottom-line concerns like cost savings, time reduction, and test-taker quality metrics.
Here, we also collate snapshots of your test-taker pools, showing you where test-takers are coming from, what kinds of experiences they're having, and what their preferences and skill areas are. This can inform strategies to improve test-taker response rates, experience metrics, and retention through the process. Whether you're glancing at trends or unpacking deep insights, Executive Analytics will help bring out the information that matters to your business.
As you explore Executive Analytics, you can use this article as a tour guide. Please note that account permissions dictate who has access to certain features within Executive Analytics, so contact your company admin if a feature you expect to see is missing.
Where to find Executive Analytics
If Executive Analytics is enabled for your account, this view will become your new CodeSignal home screen. You can also navigate to it from other sections by clicking Analytics in the top navigation bar in CodeSignal, to the left of Assessments.
Use the date picker tabs in the top right of the screen to change the period for which Executive Analytics are measured, such as the past 12 months or the year to date. Selecting Custom in this date picker will give you the option to select any date range between when you first started using CodeSignal and today. All of the metrics in Executive Analytics will automatically update to reflect the time period you choose.
The five main Executive Analytics sections are:
- Estimated Value
- Test-taker Performance
- Talent Map
- Popular Languages
- My Test-takers' Experience Ratings
All of the metrics in these modules are based on your company's Assessments usage.
Estimated Value
The Estimated Value section provides at-a-glance metrics that help you quantify the time and money you're saving in your hiring process. All four metrics are calculated by comparing your current CodeSignal usage against estimates of your hiring process pre-CodeSignal. Check out this article for a deeper dive into how we calculate each metric.
Some of the variables used in these calculations are based on estimates from industry research. Users with the appropriate permissions will see a "Customize" button at the bottom right of the panel. The customization screen lets you override the default research-based values if you have more specific data.
Estimated Value is how much money we estimate you've saved so far using CodeSignal. Total Engineering Hours Saved shows how much less time your engineers are spending on the dreaded administrative work of screening applicants, time they can now spend on the tasks that directly impact your product or service.
Average Reduction in Cost per Hire zooms in to show your average savings per offer, accounting for the cost of screening all of the other test-takers in that pool as well. And Average Reduction in Time to Hire estimates your overall time savings for each new hire, and how effectively you're minimizing the "limbo period" for test-takers.
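To make the relationships between these metrics concrete, here's a simplified sketch in Python. All variable names and numbers below are purely illustrative; the actual formulas and research-based defaults are covered in the calculation article linked above.

```python
# Illustrative sketch only -- the real formulas live in the linked article.
invites_sent = 500            # hypothetical invitations in the selected period
hires_made = 10               # hypothetical offers accepted in the same period
manual_screen_hours = 1.5     # assumed engineer hours per manual screen (pre-CodeSignal)
engineer_hourly_cost = 90.0   # assumed fully loaded hourly cost, in USD

# Total Engineering Hours Saved: screens engineers no longer run by hand.
hours_saved = invites_sent * manual_screen_hours

# Estimated Value: those hours priced at the engineer's hourly cost.
estimated_value = hours_saved * engineer_hourly_cost

# Average Reduction in Cost per Hire: total savings spread across hires, so the
# per-offer figure reflects the cost of screening the whole test-taker pool.
cost_reduction_per_hire = estimated_value / hires_made

print(f"Hours saved: {hours_saved:,.0f}")
print(f"Estimated value: ${estimated_value:,.0f}")
print(f"Avg. cost-per-hire reduction: ${cost_reduction_per_hire:,.0f}")
```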
Test-taker Performance
Test-taker Performance is a per-assessment analysis of your test-takers' skills. Because scores for Certified Evaluations backed by the same Skills Evaluation Framework are directly comparable, we can compare your test-takers' scores with test-taker scores across CodeSignal for that Skills Evaluation Framework (the "Benchmark"). We do this both with score distributions, showing how many test-takers scored in each range, and with a comparison of your test-takers' average scores versus the benchmark. Each quarter, we update these benchmark averages for the Score Distribution Overview to show you a timely comparison of your test-takers versus the general pool.
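Conceptually, the panel draws two comparisons: binned score distributions and a head-to-head of averages. The sketch below illustrates the idea with invented scores and bucket widths; it is not CodeSignal's actual methodology.

```python
# Hypothetical illustration of the two comparisons; scores and bucket
# widths are invented, not CodeSignal's actual implementation.
from collections import Counter

your_scores = [612, 745, 801, 688, 590, 834, 710]       # your test-takers
benchmark_scores = [650, 700, 720, 560, 810, 775, 640]  # platform-wide sample

def distribution(scores, width=100):
    """Bin scores into ranges (e.g. 600-699) to build a distribution."""
    return Counter((s // width) * width for s in scores)

print("Your distribution:     ", sorted(distribution(your_scores).items()))
print("Benchmark distribution:", sorted(distribution(benchmark_scores).items()))

# Average-vs-benchmark comparison for the same Skills Evaluation Framework.
your_avg = sum(your_scores) / len(your_scores)
bench_avg = sum(benchmark_scores) / len(benchmark_scores)
print(f"Your average {your_avg:.0f} vs. benchmark {bench_avg:.0f}")
```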
In addition to the score distribution, you'll also see a Skill Area Distribution Overview and a Skill Area Analysis for each Certified Evaluation your organization uses. These visuals give you a sense of where your test-takers' strengths and areas for improvement lie. Test-takers are ranked from Developing to Expert depending on their proficiency. Each Skills Evaluation Framework covers different skill areas, and some cover more skill areas than others. Click View All below the Skill Area Distribution Overview graph to see the full analysis.
Talent Map
The Talent Map shows where your test-takers are coming from, along with their invitation response rates. These insights can tell you if test-takers from certain regions are interested in the invitations you're sending, or if you might need to reevaluate your sourcing strategy.
We measure response rate by looking at IP addresses and seeing how many test-takers from any given country begin an assessment after receiving an invite. Both Certified Evaluations and custom assessments are included in this data. Please note that since reliable IP address data is not always available, there may occasionally be a small discrepancy in total responses.
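Conceptually, this works out to a simple ratio per country: assessments started divided by invitations sent. Here's a minimal illustration, assuming each invite record carries an IP-derived country and a started flag; all data and field names are hypothetical.

```python
# Minimal sketch of a per-country response rate; data and field names
# are hypothetical, not CodeSignal's actual data model.
from collections import defaultdict

invites = [
    {"country": "US", "started": True},
    {"country": "US", "started": False},
    {"country": "IN", "started": True},
    {"country": "IN", "started": True},
    {"country": "DE", "started": False},
]

sent = defaultdict(int)
started = defaultdict(int)
for invite in invites:
    sent[invite["country"]] += 1
    if invite["started"]:
        started[invite["country"]] += 1

for country in sorted(sent):
    rate = started[country] / sent[country]
    print(f"{country}: {rate:.0%} response rate ({started[country]}/{sent[country]})")
```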
You can also compare your company's country-level response rates with aggregated response rates across all CodeSignal customers. This comparison can inform sourcing by showing you countries where there might be untapped potential based on other customers' Assessments usage. For busier regions where you want a closer look, zoom in using the buttons on the bottom right of the map.
Popular Languages
Popular Languages is a graph that shows the coding languages your test-takers prefer in Assessments. This information can be useful in aggregate to see if test-takers roughly align with your engineering team's preferences. If your engineering team prefers PHP and 60% of your test-takers are using Python or JavaScript in their assessments, you might expect new hires to need some extra support ramping up on PHP as they settle into the team. You can also see a language preference distribution across all test-takers running code in assessments, giving you a sense of where the broader market is trending.
Each data point in this graph represents one code execution from one test-taker, so the languages here represent what test-takers are actually using, not just what they say they prefer. However, note that some Certified Evaluations are backed by frameworks that only accept certain languages. For example, if a large share of the invites you send are for Certified Evaluations leveraging the General Coding Framework, you'll likely see a wider language distribution than if all of your Certified Evaluations use the Machine Learning Core Framework. Popular Languages is calculated across both custom assessments and Certified Evaluations.
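Since the unit of measurement is a single code execution, the distribution is essentially a tally. A minimal sketch, with invented data:

```python
# Tally of code executions by language; the data is invented for illustration.
from collections import Counter

executions = ["Python", "JavaScript", "Python", "Java", "Python", "C++"]

counts = Counter(executions)
total = sum(counts.values())
for language, n in counts.most_common():
    print(f"{language}: {n / total:.0%} of executions")
```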
My Test-takers' Experience Ratings
My Test-takers' Experience Ratings tracks how your test-takers rate their user experience, plus the fairness and perceived relevance of the assessment they've just completed. All test-takers completing assessments, whether those are custom or Certified Evaluations, are invited to rate their experience. The three questions ask test-takers about UX, fairness, and relevance, and we aggregate their responses into this timeline graph, showing how test-taker sentiments change over time.
The statement test-takers rate for UX is: "The platform provided a good user experience." For fairness: "Given the purpose of this evaluation, the questions seemed fair." And for relevance: "The questions represented what I would expect to do on the job."
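Conceptually, the timeline averages each of the three ratings over a time bucket such as a month. A rough sketch, with hypothetical response data and field names:

```python
# Hypothetical sketch of the timeline aggregation: average each rating
# dimension per month. Data and field names are invented.
from collections import defaultdict
from statistics import mean

responses = [
    {"month": "2024-01", "ux": 5, "fairness": 4, "relevance": 4},
    {"month": "2024-01", "ux": 4, "fairness": 5, "relevance": 3},
    {"month": "2024-02", "ux": 5, "fairness": 5, "relevance": 5},
]

by_month = defaultdict(list)
for r in responses:
    by_month[r["month"]].append(r)

for month in sorted(by_month):
    batch = by_month[month]
    print(month,
          f"UX {mean(r['ux'] for r in batch):.1f}",
          f"Fairness {mean(r['fairness'] for r in batch):.1f}",
          f"Relevance {mean(r['relevance'] for r in batch):.1f}")
```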
These insights on test-taker experience can point out places where test-takers might be struggling. Monitoring these metrics can also give you a sense of how changes in your hiring practices have affected test-taker experiences. Test-takers might not be comfortable giving feedback about pain points in person, and anonymized data like My Test-takers' Experience Ratings can be more representative of hidden friction.