This post is in two parts, which are simultaneously in the right order and back-to-front. Let me explain…
Gap Task 1
My previous post described how day one of the ITP course ended with us being set a ‘gap task’. This involved choosing at least one of the teaching and learning strategies that we identified during the day, and committing to use it with at least one class before day two. The feedback on this gap task was actually given at the end of day two of the ITP course, but it makes more sense to write about it at the start of this post, since it follows on more neatly from the previous one.
In the lesson in question, I actually used a mix of strategies. The initial task was a card sort on the theme of the effects of desertification in the Sahel, with three categories being given to students. I then followed this up with spy time, in which one student in each group remained with their cards while the other students circulated the room to give verbal feedback to members of other groups. In some cases (and with my encouragement), students were able to explain to one another why some cards were in the wrong category – or, at least, that they should be in between two categories rather than squarely in one or the other. One group had opted to use five categories of their own, instead of the three that I gave them, so the thinking behind this became a focus of the debrief.
I then asked students, in their groups, to pick two cards from different categories and to explain the connections between these cards to the rest of their group. This done, the students then turned the cards over and picked two at random, the challenge being to then explain the connections between these cards to the rest of their group too.
The lesson concluded with a written task consisting of three questions. By this stage, the students were so confident in their understanding of the case study material that neither I nor they said another word for the next ten minutes; they were completely absorbed in completing the task.
ITP Day Two
As I mentioned above, this feedback was given at the end of day two. The day itself continued the themes of ‘challenge’ and ‘engagement’ from day one, with an added focus on starters. In keeping with the course aim to model practice, the day began with a series of group tasks related to these themes, followed by some quite in-depth discussion of the relative merits of different teaching and learning strategies. This discussion was punctuated by a ‘ward round’ that took us to three classrooms for five minutes each, looking for examples of best practice regarding challenge and engagement, and timed so that the first class we visited was seen during its starter activity.
As with day one, we were encouraged to identify ‘golden nuggets’ of teaching and learning to incorporate into our own practice, which are the focus of the rest of this post.
Our initial group task was a series of Odd One Out activities, using words, numbers and pictures, but with a twist – the options were deliberately chosen so that more than one of them could be the odd one out. The idea is to promote challenge and engagement by being open-ended and allowing students to think more creatively, rather than playing “Guess what’s in the teacher’s head”. Of course, there is no reason why this task couldn’t also be used as a plenary activity, either mid-lesson or at the end, the aim being for students to draw on that lesson’s learning to come up with the most original reasons for their choices.
This naturally led into a second task, in which we were asked to draw a house; however, part way through, some of us were given a set of criteria by which the drawings would be marked. Half of the group then had to give comment-only feedback, while the other half used the criteria to give an overall numerical score. Those of us who had not been given the criteria had, of course, been playing “Guess what’s in the teacher’s head” in deciding how to draw our house.

The discussion led on to how best to encourage students to generate their own success criteria for a task, the best way being the use of model answers and exemplar work – success criteria and modelling being the two elements that, for Tom Boulter, make the difference between ‘pale’ and ‘pure’ AfL. Students could start the lesson by putting a series of examples into rank order from best to worst, with the discussion focussing either on what a good one looks like (WAGOLL) or on what a bad one looks like (WABOLL). In the former case, students would then go on to try to draft a ‘perfect’ piece of work, with other students using the success criteria to give feedback through peer assessment, which the first student then uses to redraft and improve their work. In the latter case, students start by consciously producing a lower-level piece of work and then use the success criteria and self-/peer-assessment to move through successive iterations, arriving at a higher-level piece of work by the end.
The discussion also led on to the idea of comment-only marking. Dylan Wiliam’s research is very clear in this area:
‘In 1998, when Paul Black and I published “Inside the Black Box,” we recommended that feedback during learning be in the form of comments rather than grades, and many teachers took this to heart. Unfortunately, in many cases, the feedback was not particularly helpful. Typically, the feedback would focus on what was deficient about the work submitted, which the students were not able to resubmit, rather than on what to do to improve their future learning. In such situations, feedback is rather like the scene in the rearview mirror rather than through the windshield. Or as Douglas Reeves once memorably observed, it’s like the difference between having a medical and a postmortem.’ 
The only thing I can add to this quote is to point out that Wiliam advocates comments ‘rather than,’ not ‘as well as,’ grades.
Another of Wiliam’s recommended techniques, for ensuring maximum student engagement and thereby closing the achievement gap, is the ‘Pose-Pause-Pounce-Bounce’ model of questioning. In Wiliam’s words, the teacher ‘poses the question, pauses for at least five seconds (sometimes, to help her measure the time, she mutters, under her breath, “One, two, three, four, got to wait a little more”), pounces on one student at random for the answer, and then bounces that student’s answer to another student, again at random, saying, “What do you think of that answer?”’ On the ward round, I heard one question that would make a good alternative ‘bounce’ – it was, simply, “How did you remember that?”
One further lesson activity that stood out was an interesting starter based on key words – students were given an image of ten bowling pins, set out in the usual triangular rack.
On to the top pin, they wrote a key word given by the teacher; then, on the two pins below, they wrote two subject-specific terms that directly linked to the first word. On the three pins below those, they wrote three more subject-specific terms that directly linked to the ones above, and so on, in a kind of ‘cascade’ of technical vocabulary. This was used as a revision activity; assuming sufficient prior knowledge, it could also be used as an alternative way of generating success criteria for a lesson focussing on using technical vocabulary to improve the quality of a written piece of work (perhaps combined with some mid-lesson ‘spy time’, with a specific focus on seeking out and suggesting further key terms).
Finally, a favourite plenary of the course facilitators is ‘Question/Feeling/Favourite’ (the latter sometimes replaced by ‘Learning’). This involves a small amount of preparation – specifically, these three words are displayed on separate posters around the classroom and, at the appropriate time in the lesson, students are asked to go and stand at one of these stations. The teacher can then find out what questions students have about the lesson so far, how they are feeling about the lesson so far, and what their favourite part of the lesson has been so far (or what they have learned in the lesson so far), as a way of gauging progress up to that point and, if necessary, deciding whether or not to alter the course of the rest of the lesson.
So – how did I remember all of that?
Maybe next time…
T. Boulter (2012), ‘AFL – from Pale to Pure’, http://thinkingonlearning.blogspot.co.uk/2012/07/afl-from-pale-to-pure.html (post dated 14 July 2012; accessed 5 June 2013)
D. Wiliam (2011), Embedded Formative Assessment (Bloomington, IN: Solution Tree Press), p. 120
D. Wiliam, ibid., p. 82