My third grade class finished a cumulative assessment last week. This particular assignment was completed independently and covered skills from January through March. It spanned the last two units of study and reviewed the topics of factors, multiples, composite/prime numbers, area, fractions, decimals, measurement conversions, using standard algorithms, and angles. That's a hefty amount of content packed into a fairly large assignment. It took around two classes to complete the task.

It’s my personal belief that an assessment should be worthwhile to the student and the teacher. Why take the time to give the assessment in the first place? Well … don’t answer that – especially when state standardized testing is right around the corner. : ) There are some assessments that teachers are required to give and others that are more optional.

My assessment for learning belief stems from past experiences that weren’t so thrilling. I remember being given a graded test and then immediately moving on to the next topic of study. There wasn’t a review of the test or even feedback. A large letter grade (usually in a big red marker) was on the front and that was that. This left me salty. All teachers were students at some point and this memory has stuck with me.

I like to have students review their results and take a deeper look into what they understand. In reality the assessment should be formative, and the experience is one stop along their math journey. It should be a worthwhile event. It can either be a wasted opportunity or a time slot where students analyze their results, use feedback, and make the experience more meaningful.

So back on track … These third graders took the cumulative assessment last week. I graded the tests around mid-week and started to notice a few trends. Certain problems were generally answered correctly, while others were very troublesome for students. Take a look at my chicken-scratch below.

As you can tell, problems 2, 4, 8, 11 and 22 didn’t fare well, while problems 3, 17, 18, and 21 didn’t have too many issues. My first thought was that I might not have reviewed those concepts as much as I should have. There are so many variables at play here that I can’t pin the poor performance on a particular question down to one reason. That doesn’t mean I can’t play detective, though. My second thought was that directions might have been skimmed over, or students weren’t quite sure what was being asked. So I took a closer look at the more problematic questions. I dug into my highlighter stash and took out a yellow and a pink. Pink went to the problems students struggled with; yellow went to the problems that were mostly correct.

The next day I was able to review the assessment results with the class. I gave the tests back to the students and reviewed my pink-and-yellow teacher copy with the class. Using the document camera, I made a pitstop at each pink and yellow highlight and asked students what types of misconceptions could possibly exist when answering that particular question. I was then able to offer feedback to the class. For example, one of the directions asked students to record two multiplicative comparison statements. Many students created number models (e.g., 24 = 4 x 6), but didn’t write statements (e.g., “24 is 4 times as many as 6”).

Students also mixed up factors and multiples.
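If it helps to see the distinction side by side, here’s a quick sketch (in Python, using 12 as a made-up example number, not one from the test): factors divide a number evenly, while multiples are what you get by skip counting.

```python
# Factors of n divide n evenly; multiples of n are n, 2n, 3n, ...
def factors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def multiples(n, count):
    return [n * k for k in range(1, count + 1)]

print(factors(12))       # [1, 2, 3, 4, 6, 12]
print(multiples(12, 4))  # [12, 24, 36, 48]
```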

Many students left 81 out of its factor pair, thinking they didn’t have to include it since it appeared in the directions. Hmmm … not sure about that one.
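For what it’s worth, 81 has exactly three factor pairs, and (1, 81) is one of them. A small sketch that lists them, just to double-check myself:

```python
# Every factor pair (a, b) with a * b == n, checking a up to sqrt(n).
def factor_pairs(n):
    pairs = []
    for a in range(1, int(n ** 0.5) + 1):
        if n % a == 0:
            pairs.append((a, n // a))
    return pairs

print(factor_pairs(81))  # [(1, 81), (3, 27), (9, 9)]
```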

Some of the problems required reteaching. That was a great opportunity to readdress a specific skill, and I could tell the errors were more than just silly mistakes. I think the default for students is to say that 1) they were rushing or 2) it was a silly mistake. Sometimes it’s neither. I taught a mini lesson on measurement conversions.
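The conversions themselves boil down to multiplying by how many small units fit in one big unit. A quick sketch of that idea (the unit pairs below are my own examples, not necessarily the ones on the test):

```python
# Converting a larger unit to a smaller one: multiply by the number
# of smaller units in one larger unit.
UNITS_PER = {
    ("meters", "centimeters"): 100,
    ("feet", "inches"): 12,
    ("kilograms", "grams"): 1000,
}

def convert(amount, big, small):
    return amount * UNITS_PER[(big, small)]

print(convert(3, "meters", "centimeters"))  # 300
print(convert(2, "feet", "inches"))         # 24
```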

I also reviewed how to use the standard algorithm to add and subtract larger numbers. Some students had trouble lining up the numbers or forgot to regroup as needed.
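Lining up place values and regrouping is exactly what the standard algorithm keeps track of. Here’s a sketch that mimics column addition digit by digit, carrying (regrouping) whenever a column reaches ten; the numbers are my own examples:

```python
# Column addition, right to left, regrouping (carrying) when a
# column's sum reaches 10 -- the same steps as pencil and paper.
def column_add(a, b):
    digits_a = [int(d) for d in str(a)][::-1]
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        total = da + db + carry
        result.append(total % 10)  # digit that stays in this column
        carry = total // 10        # digit regrouped into the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in result[::-1]))

print(column_add(1587, 2946))  # 4533
```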

I offered graph paper to students who needed to keep their work organized.

After the review, which took about 10–15 minutes, I gave students a second opportunity to retake the problems they answered incorrectly the first time around. I graded the second attempts and was excited to see students make a decent amount of progress. The majority of pink-highlighted problems from earlier were correct on the second attempt. #Eduwin! The feedback and error-analysis time seemed to help clarify the directions and ended up being a valuable use of time. I’m considering using something similar for the next cumulative assessment, which will most likely occur around May.

Now, I don’t use this method for all assessments. My third grade class has eight unit assessments a year. After each assessment I tend to have students analyze their test performance in relation to the math standards that are expected. Students reflect, observe which particular math skills need bolstering, and set goals based on those results. There’s a progress-monitoring piece involved as students refer back to these goals during their next unit.

Side note: I had trouble finding a title for this post. I was debating between misconception analysis and assessment analysis. Both seemed decent, but neither really reflected the post. So I tried something different – I wrote the post and then created the title. I feel like error analysis fits a bit better, as the errors that were made weren’t necessarily misconceptions. Also, this post has me thinking about problematic test questions. That could be an entirely different post.