
Self Reflection Editor

One of the two assessment types that uses natural language processing in real time is "Self Reflection". A unique feature of the Self Reflection assessment type is that it can be inserted anywhere in any AutoTutor Lite module (e.g., Tutoring, Multiple Choice, Fill in the Blank).

The self-reflection assessment type is best thought of as a typical "short answer" question. Ideally, your students/users should be able to answer a single question in two or three sentences. For questions that require lengthier answers (two or more paragraphs), the "Tutoring" assessment type is a better fit.

The three major components of a self-reflection assessment are the Seed Question, the Semantic Answer, and the Feedback.

You can insert a self-reflection slide almost anywhere in your information delivery. To insert a self-reflection into your information delivery, click on the "L" button under the Information Delivery tab.

You can rearrange the order of your slides and your self-reflections by clicking on the slide you want to move, and then clicking on the "A" or "V" buttons.


After you have created a new self-reflection, you can edit the title and author notes by clicking on the "Author Notes" tab. For more information about author notes, click here.

Seed Question: Your "Seed Question" is the overarching question that you want your users to answer. The self-reflection assessment type is better suited to questions that require shorter answers (about three to four sentences). The "Reflection Question Title" box is what will be displayed to the user as the question. To edit the spoken text of the question, place what you would like your Avatar to say about the question in the "Spoken" box under "Reflection Question Title".




Semantic Answer: Your "Semantic Answer" is what AutoTutor Lite will compare your users' input to. Generally speaking, the semantic answer does not need to include articles or other common words (e.g., the, and, a). The types of words you include in your semantic answer should be selected based on how you configure your semantic engine.
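The details of that comparison depend on how the semantic engine is configured, but the sketch below illustrates the general idea under a simplifying assumption: the user's input is scored by how many of the semantic answer's words it contains. The function names and the word-overlap scoring are illustrative only, not AutoTutor Lite's actual matching algorithm.

    # Illustrative sketch only -- not AutoTutor Lite code. It shows why short,
    # content-word-only semantic answers work well with overlap-style matching.
    import re

    def words(text):
        """Lowercase a string and pull out its words."""
        return set(re.findall(r"[a-z']+", text.lower()))

    def coverage(semantic_answer, user_input):
        """Fraction of the semantic answer's words found in the user's input."""
        answer = words(semantic_answer)
        return len(answer & words(user_input)) / len(answer) if answer else 0.0

    # Articles and other common words would dilute the score, which is why they
    # are usually left out of the semantic answer.
    print(coverage("mitochondria produce energy cell",
                   "The mitochondria produce energy for the cell."))  # -> 1.0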

Feedback: Configuring your self-reflection Tutor feedback can be a difficult task. It is important to test your self-reflection several times to determine whether or not the feedback provided by the tutor is helpful and relevant. Here are some important things to keep in mind when you are configuring your self-reflection feedback:

  • How many "turns" do you want to give the user to fully answer the question? Each "turn" equates to one sentence entered by the user. For AutoTutor Lite to detect the end of a turn, the user must end each input with a period (see the sketch after this list).
  • A set of feedback triggers is needed for each turn your user will take. One way to think about this is: imagine you have a classroom full of students all trying to answer this self-reflection question. Some students will provide very detailed and complete answers on the first turn; you will need to provide feedback letting these students know they fully answered the question and can move forward. Other students will provide little to no relevant information on the first turn; you will need to provide feedback to these students in the form of hints that guide them toward the correct answer over their next turns.
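As a rough illustration of the period rule mentioned above, the sketch below splits a block of input into turns at each period. This is a simplified assumption about how turn detection might behave, not AutoTutor Lite's implementation.

    # Minimal sketch, assuming a turn ends at each period the user types.
    def split_turns(user_input):
        """Split free-text input into turns; a turn ends at a period."""
        return [s.strip() for s in user_input.split(".") if s.strip()]

    print(split_turns("Plants use sunlight. They also need water."))
    # ['Plants use sunlight', 'They also need water']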

To insert a feedback trigger, click on the "add rule" button located at the bottom of the "Configure Feedback" screen.

The combination of these rules functions as a micro-model of student knowledge. 


RN: Relevant New -- Relevant & New indicates how much relevant, new information the student provided on a given turn. In the above example, the student provided no relevant new information on turn 3.
IN: Irrelevant New -- Irrelevant & New indicates new information the student provided on a turn that is irrelevant to the target answer (the semantic answer).
RO: Relevant Old -- Relevant & Old indicates relevant information the student is repeating. For example, on turn two the student provided relevant information, but part of that answer had already been stated.
CO: Total Coverage -- Total Coverage indicates the total percentage of the semantic answer the student has covered so far.
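The exact formulas behind these four measures are internal to AutoTutor Lite's semantic engine, but the sketch below shows one plausible way they could be derived from word overlap with the semantic answer, turn by turn. The function names, the word-overlap scoring, and the raw counts (rather than normalized values) are all assumptions for illustration.

    # Illustrative sketch only: derive RN, IN, RO, and CO from word overlap.
    import re

    def words(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    def score_turns(semantic_answer, turns):
        answer = words(semantic_answer)
        seen = set()      # everything the student has said so far
        covered = set()   # answer words the student has hit so far
        results = []
        for turn in turns:
            turn_words = words(turn)
            new = turn_words - seen
            rn = len(new & answer)                 # relevant & new
            irrelevant_new = len(new - answer)     # irrelevant & new
            ro = len((turn_words - new) & answer)  # relevant & old (repeated)
            covered |= turn_words & answer
            seen |= turn_words
            co = len(covered) / len(answer) if answer else 0.0  # total coverage
            results.append({"RN": rn, "IN": irrelevant_new, "RO": ro, "CO": round(co, 2)})
        return results

    # Turn 2 repeats relevant words and adds only irrelevant ones, so RO and IN
    # rise while RN is zero and CO stays flat.
    print(score_turns("plants use sunlight water grow",
                      ["Plants use sunlight.", "Plants use sunlight and soil."]))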

Let's walk through a few examples of how to create feedback triggers for turn 1.
  1. Click on the "CO" button next to "Add Rule for:" 
  2. Click on the number under "Turn" and select "1" to set this feedback to trigger on turn 1.
  3. Select the "trigger value" by clicking on the number next to the "relation" column. For total coverage (CO), this value corresponds to the percentage of the semantic answer that has been covered. In the above example, the student reached about .5 CO on turn 1, .75 CO on turn 2, and .75 CO on turn 3. Let's set this value to .3 for this trigger.
  4. Select the relation to the value you just set. Let's select "near" for this trigger.
  5. Now we can edit the actual feedback that will be provided. This trigger will provide feedback to a user who provided information covering about 30% of the semantic answer on the first turn. I would say such a user is on the right track and just needs to elaborate on their previous statement. So let's set the feedback to say "Good. You're on the right track. Please try to elaborate on your answer."
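Taken together, the rule configured in these steps amounts to a small record: a measure (CO), a turn number, a relation ("near"), a trigger value (.3), and the feedback text. The sketch below is a hypothetical way to represent and evaluate such a rule; the field names, the extra "above"/"below" relations, and the tolerance used for "near" are assumptions, not AutoTutor Lite's internal format.

    # Hypothetical representation of the trigger configured above.
    RULE = {
        "measure": "CO",     # total coverage
        "turn": 1,
        "relation": "near",
        "value": 0.3,
        "feedback": "Good. You're on the right track. Please try to elaborate on your answer.",
    }

    def rule_fires(rule, turn, measures, tolerance=0.15):
        """Return True if the rule's trigger condition is met on this turn."""
        if turn != rule["turn"]:
            return False
        observed = measures[rule["measure"]]
        if rule["relation"] == "near":
            return abs(observed - rule["value"]) <= tolerance
        if rule["relation"] == "above":
            return observed > rule["value"]
        if rule["relation"] == "below":
            return observed < rule["value"]
        return False

    # A student who covers roughly 30% of the semantic answer on turn 1 gets the hint.
    if rule_fires(RULE, turn=1, measures={"CO": 0.32}):
        print(RULE["feedback"])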

