CodeSignal's writing question type (formerly known as the free-text question type) is a flexible way to evaluate a variety of skills in areas like customer or stakeholder interactions, internal emails and comms, and documentation.
In these questions, test-takers are presented with a prompt and asked to provide a written response; no coding is required.
Note that Writing questions are replacing free-text questions. Your existing free-text questions will not change, but you won't be able to duplicate or create new free-text questions.
In this article, we will review the question type's functionality, how to create a writing question, and best practices.
Functionality
The writing question type is a tool you can use within our Assessment product. It is not currently available within our Interview product.
Writing questions do not include an IDE; instead, candidates answer using free text.
These questions must be graded manually unless AI grading is enabled for automatic scoring. Please read below to learn more about this feature.
Create a Writing question
There are two ways to create a new writing question:
Duplicate an existing writing question
Once duplicated, you can make desired changes to the question prompt and title and then save the new question. Learn how to duplicate an existing question here.
Create a brand new writing question
Here are the steps to create a writing question from scratch:
1. From your CodeSignal account, click on the Question Library icon on the upper right side of the screen.
2. Click + Create Question.
3. You will be taken to a page where you can choose the type of question you want to create. Select the Writing question type and click CREATE.
4. Enter a question name and a question description. The Description field supports both free text and Markdown, and you can hyperlink to other webpages or upload images.
5. [Optional] Enable AI grading
Note: AI grading must be enabled for your organization by CodeSignal. If you do not have access to this feature and would like it enabled, please contact your customer success manager or the CodeSignal Support team.
AI grading allows our platform to automatically grade responses based on rich text analysis, making it easier to gather insights and compare responses. For example, if you wish to assess grammar, you can configure the question to measure the grammatical correctness of a candidate's response.
Create an Evaluation Rubric
To begin, you must create an Evaluation Rubric that outlines the attributes that are important to you. There is no limit to the number of attributes you can assess in one question, but be sure to define each attribute's criteria carefully to minimize variability in grading. Here is a sample rubric structure:
### Attribute 1
**Criteria List**
- Criteria 1
- Criteria 2
- Criteria 3
- Criteria X
**Very Unsatisfactory**
- Met 0 of the given criteria (this can be more generic if there is no criteria list)
**Unsatisfactory**
- Met only 1 of the given criteria with some inaccuracies (this can be more generic if there is no criteria list)
**Satisfactory**
- Met 2 of the given criteria with minor inaccuracies (this can be more generic if there is no criteria list)
**Very Satisfactory**
- Met 3+ criteria with good technical accuracy (this can be more generic if there is no criteria list)
---
### Attribute 2
[...]
Best Practices
Keep the following best practices in mind when building your rubric:
- Use Markdown for clear formatting. Read more about formatting your questions here.
- Define an attribute, such as "AI Understanding."
- Be specific when describing criteria, such as "Demonstrates understanding of Machine Learning fundamentals."
- Prepare a rating scale on a spectrum (e.g., Unsatisfactory to Satisfactory). Please note that numeric values are not supported.
Here is an example of how to define an attribute and list criteria:
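Putting these practices together, an attribute definition might look like the following sketch. The attribute name and criteria shown here are illustrative examples only, not required or built-in values:

```markdown
### AI Understanding
**Criteria List**
- Demonstrates understanding of Machine Learning fundamentals
- Explains the difference between supervised and unsupervised learning
- Uses correct terminology throughout the response
**Very Unsatisfactory**
- Met 0 of the given criteria
**Unsatisfactory**
- Met only 1 of the given criteria with some inaccuracies
**Satisfactory**
- Met 2 of the given criteria with minor inaccuracies
**Very Satisfactory**
- Met all 3 criteria with good technical accuracy
```

Each rating level describes how many criteria a response met, which keeps grading consistent across reviewers and responses.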
6. Add a question label as needed.
7. Once you’re done creating the question, you can click PREVIEW to see how your question will appear to test-takers before publishing.
8. Finally, click SAVE. Your question will now be available for use within Assessments.