Granular Diagnostic Feedback

Resources, ideas and prompts for discussion related to Feedback.

A health warning first…

Diagnostic Feedback relates to the question level analysis that can (*possibly*) be gleaned following an exam series, mock exam or major assessment. It is as much about feedback for the teacher, informing planning and curriculum decisions, as anything else. The word ‘possibly’ was added tentatively to that opening sentence because we have to be incredibly careful about the inferences we make using this approach; they come with a health warning.

Essentially, your question level analysis for an exam season might have been provided by an exam board, breaking down how students from your school did on various question types, topics and assessment objectives compared to others nationally. From that, it is a question of whether you can infer that a cohort could or could not do something (did they have a knowledge gap?). This is where you have to take care, as the answer could be “yes”, “no”, or “yes and no”!

You might come away from the exam board’s charts, graphs and topic analysis thinking that, in Biology, genetic cross diagrams are simply not understood by learners at your school and that something has gone wrong. There may be some truth in that they struggle, or struggled, with this (the data is usually for past cohorts), but it could equally be that the particular question threw them in terms of structure, presentation or sheer difficulty. If so, it may be exam technique causing the issue rather than the materials or delivery methods around the subject knowledge itself. Another issue is that you can come away worrying about big knowledge gaps without noticing that the topic in question contributed only a very small proportion of the marks on the exam. That does not mean it is unimportant, but you could inadvertently find yourself adding a ‘crisis topic list‘ to a Department meeting agenda over a matter worth 1 or 2 marks!

There is a flip-side to this with any mode of testing and assessment, and the flaw comes from inherent design limitations of the tests themselves. Imagine a student is asked a question about the formation of sedimentary rocks. They answer it very well, achieving full marks. Should we infer that they have mastered that topic? Or even mastered the wider topic of rocks and rock cycles? Or have they simply been able to respond appropriately and accurately to that particular question, and might they falter on another that is also about sedimentary rocks? Adam Boxer expresses all of these ideas incredibly well on his website (achemicalorthodoxy.wordpress.com) in the blog post *What to do after a mock? Assessment, sampling, inferences and more*.

What is it and how does it work?

Given all that has been covered above (and Adam’s thoughts), we may wonder if there is any point in continuing with this at all! However, question level analysis can provide some useful information for teachers, provided we are mindful of the inferences we should or should not make.

Besides the exam board versions, you can produce your own for internal mock exams or end of topic assessments. They require the assessment to be split into distinct questions, with the knowledge domain or topic that each question relates to clearly identified in the set-up. This form of analysis works much better with longer tests, or tests that assess substantial portions of knowledge from a given topic or range of topics. As with the exam board data discussed above, if there are many disparate 1 or 2 mark questions with no common topic theme then the data you generate won’t be particularly helpful at all. Of all the techniques mentioned in this section, this one also runs the greatest risk of taking up a good deal of time, as you have to type marks into a spreadsheet per student, per question.
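To make that set-up concrete, here is a minimal sketch (in Python/pandas rather than the Excel tool described next, with entirely invented questions, topics, names and marks) of the two tables this kind of analysis needs: which topic and maximum mark each question carries, and the per-student, per-question marks grid.

```python
import pandas as pd

# Hypothetical exam set-up: each question mapped to a topic and its maximum marks
# (questions, topics and mark tariffs are invented for illustration).
setup = pd.DataFrame({
    "question":  ["Q1", "Q2", "Q3", "Q4"],
    "topic":     ["Cell biology", "Osmosis", "Genetic crosses", "Genetic crosses"],
    "max_marks": [3, 4, 6, 2],
})

# Marks grid: one row per student, one column per question, exactly as you
# would type them into a spreadsheet (names and scores are also invented).
marks = pd.DataFrame({
    "student": ["Student A", "Student B", "Student C"],
    "class":   ["11X1", "11X1", "11X2"],
    "Q1": [3, 2, 1],
    "Q2": [4, 1, 3],
    "Q3": [2, 5, 0],
    "Q4": [2, 2, 1],
})
```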

A blank copy of the Excel file explained here is available below. For it to work you must enable macros when using it (the sheets are password protected and cannot be modified beyond adding data). It was sourced from Peter Atherton (dataeducator.wordpress.com).

Blank copy of the spreadsheet along with Peter’s user guide: click here and select Exam Feedback Tool

How to use:

  1. First complete the ‘Exam Setup’ tab (purple button). You add your assessment name/title, any grade boundaries, and enter details about the questions on the paper.
  2. Complete the ‘Input Students & Marks’ tab (orange button). This is where you enter names, assign the class group and type in the marks scored for each question.
  3. Now you can view the ‘Question Analysis by Student’ tab (red button). The Excel file will have built a single page view for each student showing their (suggested) strengths and weaknesses and how they did in relation to all others who sat the test.
  4. Finally, use the ‘Paper Analysis by Class’ tab (green button) for an overall topic performance overview, showing where marks were gained and lost by the class. If you used the file to collate test marks for more than one class (for example, a whole cohort of Y11s from a mock) you can filter this page by class or look at whole year group ‘issues’. (A rough code equivalent of the analysis in steps 3 and 4 is sketched after this list.)
  5. There is also a ‘Print All Students’ button (blue button) on the ‘Question Analysis by Student’ tab which prints a single page summary for each student to keep. However, in most school settings with printing restrictions and policies this can be tricky to get working effectively (as the macro that controls it gets blocked).
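For anyone who would rather build something similar themselves, the sketch below continues the invented `setup` and `marks` tables from earlier and roughly mirrors the analysis in steps 3 and 4: a ‘Paper Analysis by Class’-style view of the percentage of marks gained per question, and a ‘Question Analysis by Student’-style view of each student’s percentage by topic against the all-student average. It is an approximation of the idea, not the formulas Peter’s spreadsheet actually uses.

```python
# Reshape the marks grid into long format (one row per student per question),
# then attach each question's topic and maximum marks from the set-up table.
long_marks = (marks.melt(id_vars=["student", "class"],
                         var_name="question", value_name="mark")
                   .merge(setup, on="question"))

# 'Paper Analysis by Class'-style view: % of available marks gained on each
# question across the group (filter long_marks by class first for one class).
question_totals = long_marks.groupby("question")[["mark", "max_marks"]].sum()
percent_by_question = (100 * question_totals["mark"]
                       / question_totals["max_marks"]).round(1)

# 'Question Analysis by Student'-style view: each student's % by topic,
# plus the average % across all students for context.
topic_totals = long_marks.groupby(["student", "topic"])[["mark", "max_marks"]].sum()
percent_by_student_topic = (100 * topic_totals["mark"]
                            / topic_totals["max_marks"]).unstack()
class_average = percent_by_student_topic.mean()

print(percent_by_question)
print(percent_by_student_topic.round(1))
print(class_average.round(1))
```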

Below is an example of the ‘Paper Analysis by Class’ tab generated from class results entered following a Year 11 Biology mock paper:

As you can see, there were immediate limitations in using diagnostic question level analysis for this assessment, because questions on the paper had sub-questions [part a), b) and so on…] that dealt with slightly different topics. For example, Q5 touched upon plants and their deficiencies and then also had a part question about monoclonal antibodies. Marks were not entered for every sub-question separately because that would have tipped the balance of workload against output impact unfavourably.

So what could be done with this? Well, it suggested three questions or areas of the paper that the class struggled with more: Q4, Q5 and Q6, with Q5 creating the most diversity in success (shown by the proportion of students with full, partial or no marks). It is often not practical to review whole exam papers, but this could be somewhere to start with a class during a feedback lesson based on modelling some of the answers. We could consider the exam paper itself too: was Q7 really that bad, or was it linked to time management because it was toward the end of the mock? This is still a potentially useful conversation to have with a class.
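One way to put a number on that ‘diversity in success’ is to work out, per question, what proportion of students scored full, partial or no marks. The sketch below (continuing the invented `long_marks` table from the earlier sketches) is one rough way of doing that.

```python
# Classify each student's attempt at each question as full, partial or no marks.
def outcome(row):
    if row["mark"] == row["max_marks"]:
        return "full"
    return "none" if row["mark"] == 0 else "partial"

long_marks["outcome"] = long_marks.apply(outcome, axis=1)

# Proportion of students in each outcome category, per question.
success_profile = (long_marks.groupby("question")["outcome"]
                             .value_counts(normalize=True)
                             .unstack(fill_value=0)
                             .round(2))

# Questions with a large 'partial' share (lots of diversity in success) or a
# large 'none' share are candidates for modelling answers in a feedback lesson.
print(success_profile)
```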

At a student-by-student level, the following snippet of the ‘Question Analysis by Student’ tab illustrates what they could be shown or given. Perhaps most powerful here is for students to understand their performance in the context of the class (comparing the % of marks they achieved with the average % of all students), without invading privacy by seeing other names or marks on papers directly. It could reassure them, or point out where they could seek support from a peer as well as the teacher.
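As a rough code parallel (again continuing the invented tables from the sketches above), a per-student summary like the one below puts each student’s percentage by topic next to the all-student average, which is exactly the comparison described here, without exposing anyone else’s name or marks.

```python
import pandas as pd

# Build a one-student summary: their % by topic alongside the all-student average.
def student_summary(name):
    summary = pd.DataFrame({
        "your %": percent_by_student_topic.loc[name].round(1),
        "average % (all students)": percent_by_student_topic.mean().round(1),
    })
    # Flag topics where the student sits below the all-student average.
    summary["below average?"] = summary["your %"] < summary["average % (all students)"]
    return summary

print(student_summary("Student A"))   # 'Student A' is one of the invented names above
```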

Associated further reading and references:

