Math Error Analysis

[screenshot]

My third grade class finished up a cumulative assessment last week.  This particular assignment was completed independently and covered skills from January through March.  It spanned the last two units of study and reviewed topics of factors, multiples, composite/prime numbers, area, fractions, decimals, measurement conversions, standard algorithms, and angles.  There was a hefty amount of content in a fairly large assignment.  It took around two classes to complete.

It’s my personal belief that an assessment should be worthwhile to the student and the teacher.  Why take the time to give the assessment in the first place?  Well … don’t answer that – especially when state standardized testing is right around the corner.  : )  There are some assessments that teachers are required to give and others that are more optional.

My assessment-for-learning belief stems from past experiences that weren’t so thrilling.  I remember being handed a graded test and then immediately moving on to the next topic of study.  There wasn’t a review of the test or even feedback.  A large letter grade (usually in big red marker) was on the front and that was that.  This left me salty.  All teachers were students at some point, and this memory has stuck with me.

I like to have students review their results and take a deeper look into what they understand.  In reality the assessment should be formative, and the experience is one stop along their math journey.  It should be a worthwhile event.  It’s either a wasted opportunity or a time slot where students can analyze their results, use feedback, and make it a more meaningful experience.

So back on track … These third graders took the cumulative assessment last week.  I graded them around mid-week and started to notice a few trends.  Certain problems were generally correct, while others were very troublesome for students.  Take a look at my chicken-scratch below.

[screenshot]

As you can tell, problems 2, 4, 8, 11 and 22 didn’t fare well.  Problems 3, 17, 18, and 21 didn’t seem to have too many issues.  My first thought was that I might not have reviewed those concepts as much as I should have.  There are so many variables at play that I can’t pin the poor performance on a particular question down to one reason.  That doesn’t mean I can’t play detective, though.  My second thought was that directions might have been skimmed over, or students weren’t quite sure what was being asked.  So I took a closer look at the questions that were more problematic.  I pulled a yellow and a pink highlighter out of my stash.  I highlighted the more problematic problems pink; yellow went to the problems that were mostly answered correctly.

[screenshot]

The next day I was able to review the assessment results with the class.  I gave the tests back to the students and reviewed my pink-and-yellow teacher copy with the class.  I used the document camera, made a pitstop at each pink and yellow highlight, and asked students what types of misconceptions could possibly exist when answering that particular question.  I was then able to offer feedback to the class.  For example, one of the directions asked students to record two multiplicative comparison statements – a statement like “24 is 4 times as many as 6,” rather than just the number model 24 = 4 × 6.  Many students created number models, but didn’t use statements.

[screenshot]

Students also mixed up factors and multiples.

[screenshot]

Many students forgot to include 81 in the factor pair and thought they didn’t have to include it since it was in the directions.  Hmmm … not sure about that one.
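For the record, the distinction is easy to pin down: factors divide a number evenly (and a number is always a factor of itself, so 81 belongs in its own factor list), while multiples come from skip-counting.  A quick sketch in Python, purely for illustration:

```python
def factors(n):
    """Return every whole number that divides n evenly."""
    return [d for d in range(1, n + 1) if n % d == 0]

def first_multiples(n, count):
    """Return the first `count` multiples of n (skip-counting by n)."""
    return [n * k for k in range(1, count + 1)]

# Factors are a finite list (and include the number itself);
# multiples go on forever.
print(factors(81))            # [1, 3, 9, 27, 81]
print(first_multiples(81, 4)) # [81, 162, 243, 324]
```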

Some of the problems required reteaching.  I thought that was a great opportunity to readdress a specific skill, since I could tell it was more than just a silly mistake.  I think the default for students is to say that 1.) they were rushing or 2.) it was a silly mistake.  Sometimes it’s neither.  I had a mini lesson on measurement conversions.
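The core idea of the mini lesson – converting a larger unit to a smaller unit means multiplying by the conversion factor – can be sketched like this (the units here are my own examples, not the actual test items):

```python
# Larger unit -> smaller unit: multiply by the conversion factor.
# These conversions are illustrative, not the actual assessment items.
def feet_to_inches(feet):
    return feet * 12    # 12 inches per foot

def meters_to_centimeters(meters):
    return meters * 100  # 100 centimeters per meter

print(feet_to_inches(4))         # 48
print(meters_to_centimeters(3))  # 300
```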

[screenshot]

I also reviewed how to use the standard algorithm to add and subtract larger numbers.  Some students had trouble lining up the numbers or forgot to regroup as needed.

[screenshot]

I offered up some graph paper to students who needed help keeping their work organized.
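Regrouping is the step that tends to get dropped.  Here’s a small sketch of what the standard addition algorithm is doing column by column (my own illustration for this post, not a class activity):

```python
def add_with_regrouping(a, b):
    """Add two whole numbers column by column, the way the standard
    algorithm does, regrouping (carrying) into the next place as needed."""
    digits_a = [int(d) for d in str(a)][::-1]  # ones place first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        total = da + db + carry
        result.append(total % 10)  # digit that stays in this column
        carry = total // 10        # regroup into the next column
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in result[::-1]))

# 8 + 6 = 14: write the 4, regroup the 1 into the tens column, and so on.
print(add_with_regrouping(478, 256))  # 734
```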

After the review, which took about 10-15 minutes, I gave students a second opportunity to retake the problems that were incorrect the first time around.  I ended up grading the second attempts and was excited, as students made a decent amount of progress.  The majority of the pink-highlighted problems from earlier were correct on the second attempt.  #Eduwin!  The feedback and error-analysis time seemed to help clarify the directions and ended up being a valuable use of time.  I’m considering using something similar for the next cumulative assessment, which will most likely occur around May.

Now, I don’t use this method for every assessment.  My third grade class has eight unit assessments a year.  After each assessment I tend to have students analyze their test performance in relation to the math standard that’s expected.  Students reflect, observe which particular math skills need bolstering, and set goals based on those results.  There’s a progress-monitoring piece involved, as students refer back to these goals during their next unit.



Side note: I had trouble finding a title for this post.  I was debating between misconception analysis and assessment analysis.  Both seemed decent, but didn’t really reflect the post.  So I tried something different – I wrote the post and then created the title.  I feel like error analysis fits a bit more as the errors that were made weren’t necessarily misconceptions.  Also, this post has me thinking of problematic test questions.  That could be an entirely different post.



Student Reflections and Assessments

Reflecting

This past week my third grade class took their third unit assessment.  This particular unit focused on computation of single-digit numbers, data analysis, and order-of-operations procedures.  While grading the assessments I started to identify a few patterns in the student responses.  Specific problems were missed more often than others.  That isn’t an anomaly on assessments, but these particular problems stood out.  One skill area that jumped out dealt with identifying the median, mode, and range of a set of data.  These skills were introduced during the first few weeks of school, and the class hasn’t revisited them in some time.  I also found that students were having trouble identifying the differences between factors and multiples.  Some of the student responses mixed up the terms, while others seemed like guesses.  Both of these skills are necessary moving forward, as the third grade class explores prime and composite numbers next.  A colleague and I came up with a limited list of reasons why we thought the problems were missed.

1.) Students aren’t yet able to apply their understanding of the skill

2.) The question on the test was confusing

3.) Students made a simple mistake

Optimistically, I’d like to say that most of the mistakes fall into category two or three.  I don’t think that was the case with this particular assessment.  After looking over the class results I concluded that most students who missed skill-associated problems fell into category one.  In addition to not grasping a full understanding, I felt like students were not given enough time to practice the newly learned concepts.
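The median/mode/range mix-up is understandable, since the three are usually taught together.  A quick sketch of the distinction (the data set is made up for illustration):

```python
from statistics import median, mode

def data_summary(data):
    """Median, mode, and range of a data set - three skills that are easy to mix up."""
    return {
        "median": median(data),          # middle value once the data is ordered
        "mode": mode(data),              # most frequent value
        "range": max(data) - min(data),  # spread from smallest to largest
    }

print(data_summary([1, 1, 4, 6, 9]))  # {'median': 4, 'mode': 1, 'range': 8}
```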

I believe students should be given additional opportunities to show understanding.  With that in mind, I decided to have students reflect on their assessment results in their math journals.  I’ve done this in the past, but I wanted to add a new piece to the process.

[screenshot]

After completing the page above students reflected on their performance in relation to the expectation. Students were then given a list of four problems.  The problems were similar to the most missed skills on the assessment.  Students were asked to pick three of the problems to complete.  Students were encouraged to pick skills that were missed or topics that they felt needed strengthening.

After both sheets were completed, students brought their math journals up to me and we had a brief 1:1 conference. This time is so valuable. The student and I identified skill areas that showcased strengths and areas that needed strengthening.  We then reviewed the responses to the questions on the reflection sheet.  I spent around 2-3 minutes with each student.

Students were then asked to work independently on another assignment that I had planned for the day.  Overall, I think this reflection process has helped students become self-assessors.  Students have a better understanding of their own skill level in relation to the expectation.  I plan on using this strategy a bit more as the year progresses.

Creating Common Assessments

[image: Focusing in on Common Assessments]

Yesterday was a teacher institute day. Along with middle and high school teachers, I took part in a session dedicated to discussing common assessments. The session covered what role common assessments play and why they should be given. We discussed what qualifies as a common assessment and the need for teachers to be involved in the creation process. As we delved deeper into conversations, I found that many of my middle and high school colleagues create their own assessments, since there isn’t really a textbook that covers all the standards they teach.

This often isn’t the case at the elementary level. In math, I find that the content publisher creates assessments and teachers rely on giving that piece to students in the form of quizzes/tests. Although the publisher-created content is decent, it can sometimes provide little value to the teacher beyond writing a score in the grade book. In addition, the teacher may have been required to give the assessment per district protocol.  In many cases teachers might not have any type of ownership of the pre-created content.

Later in the session the participants were given the opportunity to create their own common assessment. During the process we filled out a common assessment mapping tool. The mapping tool included fields for the learning objective, item number and type (e.g., multiple choice, short answer), item domain (e.g., informational or skill item), item depth (e.g., recall, understanding, strategic thinking, evaluation/creating), and point value.

While filling out the map we had to keep in mind what type of question was being asked. We eliminated some of the multiple-choice questions and decided to add questions that give students opportunities to show their mathematical thinking. After picking the questions we looked at the item depth, which reflects the depth of understanding that the teacher is seeking. At the end of each question the team decided on a point value for that assessment item. Near the end of the session our mapping tool looked something like this:

[screenshot: mapping tool]
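In a spreadsheet-free form, one row of the mapping tool might be structured like this. The field names and values are my own illustration of the fields described above, not the actual form from the session:

```python
# One illustrative row of a common-assessment mapping tool.
assessment_map = [
    {
        "objective": "Identify factors of a whole number",  # hypothetical objective
        "item_number": 1,
        "item_type": "short answer",    # vs. multiple choice
        "item_domain": "skill",         # vs. informational
        "item_depth": "understanding",  # recall / understanding / strategic thinking / evaluation
        "point_value": 2,
    },
]

# One payoff of mapping items this way: totals and depth breakdowns fall out easily.
total_points = sum(item["point_value"] for item in assessment_map)
print(total_points)  # 2
```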

I could see my team using this mapping tool for additional common assessments. Not only does it give our team more information that can be used to adjust/inform our instruction, it’s also valuable to the student. After completing the common assessment, students can reflect and set goals.  Creating common assessments may also provide opportunities for teachers to take more of an ownership role, because they helped in the creation process.

Assessments and Growth Mindset

[image]

School has been in session for over a month, and many of my classes had a unit assessment last week.  The district-adopted math program has 10-12 unit checkpoints (depending on the grade level) for the school year, and each assessment covers specified math strands.  These assessments are designed to assess understanding and include an open response that emphasizes students’ conceptual understanding and math communication skills.  The entire unit assessment takes about 50+ minutes to complete.

I usually try to administer and grade all the tests on the same day.  This doesn’t always happen.  Before passing the tests back to the students the class generally has a discussion about certain problems that were missed more than others.

[screenshot: What’s up with problem eight?]

We also have celebrations as a class.  During the class discussion we don’t blame, but reflect on what the numbers might mean.  This idea has taken time to cement and required a bit of modeling.  Based on the results I might even teach a brief mini lesson to help address and reduce misconceptions.  This is also an opportunity for students to analyze their own test and look for correlations.  Afterwards, students are given a sheet to reflect on their own analysis. Students are asked to review their assessment and give feedback on their own performance.

[screenshot – click for file]

After the students fill out the above sheet, they visit the teacher for a brief conference.  These last a quick 2-3 minutes and are a time to check in with the student.  We have a conversation about the student’s reflection and look for opportunities to improve in the future.  This is also a time to set some possible goals.  The sheet is glued into the student’s math journal and can be a document that the student looks back on as the year progresses.

I feel like the process of analyzing, reflecting, and setting goals is important.  I believe it reinforces a growth mindset, but it also has me wondering about the role of different assessments in the learning process.  I’d say about 95% of what is used at the elementary level is formative.  I could see how that changes as students progress through middle and high school.  Feedback and the chance to make positive strides toward improvement can be part of most assessments, regardless of whether you label them formative or summative.  If a school truly embraces a growth mindset model, what role do summative assessments play?  I believe that summative assessments have a role.  I’m just thinking that they may be perceived a bit differently if a school emphasizes a growth mindset model.


image credit: Woodley Wonderworks 

Low-Risk Formative Assessments – Kahoot

[image: Using Kahoot as a Formative Assessment Tool]

Over the past few weeks I’ve focused in on using low-risk formative assessments in the classroom.  I continue to find that these types of assessments bring out the best in students. I want my students to feel comfortable enough in class to take an educated guess without negative judgement.  Moreover, I want my students to be able to use the formative assessment and teacher feedback to improve their mathematical understanding.

In the past I would give my students a paper exit card.  A typical exit card would have a few questions on a half-sheet of paper.  The questions would relate to the concepts covered in class.  I’d gather up the sheets and write feedback on the pieces for students to read during the next class. I also found that some students weren’t willing to take a risk to showcase their skills.  They might leave a question blank or put a question mark in the blank space.  I wanted to find a way to increase the willingness of the students to take a risk.

I came across the website Kahoot.it after following a Tweet by Matt.  I explored it a bit further and found it to be very similar to Socrative.  I enjoyed using Socrative with my classes and thought that Kahoot had some potential to be used for formative assessment purposes.

After creating a teacher account I decided to browse lessons on the site. I was surprised to find over 160,000 quizzes in the lesson bank. Many of the lessons were shorter quizzes, but I found some to use with my math classes. The students used the iPads in the class to go to www.kahoot.it and enter the PIN. Most of the students had no problem with this.  As long as their device had an Internet browser, students could use a tablet, computer, or phone to access the quiz.  Once the students had all joined the quiz, I started it from my computer.  The questions popped up on the whiteboard for students to see.  You can add your own pictures to the quiz.  I found this helpful, as I took pictures of the classroom and imported them into the quiz.

[screenshot: As seen on the whiteboard]

Students are able to see the whiteboard and read the question.  Students answer questions on their device. Their device looks like the image below.

[image: From student device]

Students receive a certain number of “Kahoots” for answering the questions within a certain time period.  I’m a fan of rewarding quality over speed in math, so I give students the maximum time allotted.  This can be changed when creating questions.  Students pick an answer, and at the end of the countdown the correct answer is revealed.  During this time I can stop the class to check the answer choices that were made.

[image: Reviewing student choices]

This can be a great time to clear up student misconceptions, as you can see all the responses without names.  I’ve had lengthy math discussions after completing this activity with students. I felt the conversations were rich and gave insight into student understanding.  When finished, I opted to download a report for later perusal.  The report gives all the student responses and how long each student took to answer each question.  Both of these are valuable to me, as I can use the student responses to group students and differentiate instruction going forward.


Note:  I’ll still be using general exit cards in class, but I’m finding a variety of tools useful in collecting data and providing feedback to students. I’m finding that diversifying formative assessment measures has its benefits.  It also gives students a variety of options to showcase mathematical understanding.


Web-Based Math Differentiation in Elementary Schools

[image: Different Learning Paths for Different Students]

It’s apparent that student achievement data, in many different forms (formative, standardized, norm-referenced, common assessments, etc.), is becoming increasingly valued by administrators and teachers alike.   Teacher PLC teams analyze this data to become more aware of strengths/concerns and differentiate their instruction accordingly.   Instead of whole-group instruction, teachers are beginning to use, or already use, guided groups to meet the diverse academic needs of their classrooms.

Once needs are identified, teachers put together plans to address them in the classroom. Generally, teachers utilize guided reading/math groups, small groups, and resource specialists to meet the needs of individual students, whether those needs are remedial or for enrichment purposes.  One of the goals is to meet the need with some type of teacher support or intervention, although this is not always possible with time constraints and limited staffing.  Having time to individualize instruction is vital for any teacher, yet time and staffing often limit the amount of differentiation that can occur.  Teachers continue to look for ways to supplement their instruction for differentiation in and outside of the classroom.  Which leads me to this question …

What free online tools can be used to supplement math differentiation in and outside of the classroom?

Note:  All of the tools below are aligned to the Common Core Standards and can be accessed at school or home.  I’m not suggesting that these tools replace school interventions, but they may be helpful if used appropriately.  Click the pictures to enlarge.

MobyMax 

MobyMax is an adaptive online curriculum provider that creates individual education plans for students.  Students take a pre-assessment that seems to be fairly accurate (at least in my opinion).  The pre-assessment determines where to start instruction and helps students practice skills that they haven’t yet mastered.  Student data is collected on every lesson and problem that is completed, so progress monitoring is quite painless.  MobyMax also has an app for easy access.

[screenshot: MobyMax – you can assign specific concepts for differentiation]

Scootpad

Scootpad is another adaptive curriculum provider that enables teachers to assign specific CCSS concepts to individual students.  Teachers determine the mastery level and they are able to keep track of individual student progress.  As of right now, there aren’t any lessons associated with the questions.

[screenshot: Scootpad – you can analyze performance on specific concepts]

TenMarks

TenMarks is used to introduce or reinforce teaching in the classroom.  Students are able to review online lessons and are asked questions related to the topic.  Teachers are able to track student progress over time with TenMarks.

[screenshot: TenMarks – lessons are interactive with feedback]

XtraMath

XtraMath is designed to help students improve their math computation fluency.  This isn’t a program for everyone.  I’ve found that students who need practice with multiplication/division tables benefit from this web-based intervention.  The program is very user-friendly and has a progress monitoring component, which seems beneficial.

[screenshot: XtraMath – students practice math computation facts]

Photo credit: weesen via photopin cc


What tools do you use to differentiate instruction in/out of the classroom?

Standardize This

[image: Bubble Test?]

Education reform continues to make headlines as US student achievement is compared to that of other countries.  An increasing focus on standardized assessments has been at the forefront of many of these reform discussions.  Teachers and school districts often get caught in the middle of these discussions.  From what I’ve observed, what seems to agitate some educators is the notion that one high-stakes standardized assessment can validate or invalidate the success of a school year.  Even though educators have been critical of this notion, federal, state, and local school boards continue to look at standardized assessments as the go-to for quality control/accountability purposes.  I truly feel as though these boards have good intentions, but I would like to encourage them to look at alternative ways to measure school achievement.

I don’t know a teacher who doesn’t believe in accountability.  Teachers inherently feel a sense of accountability for their students.  The way that accountability is being measured, and the consequences that occur if growth isn’t met, is what’s causing concern.  Critics emphasize that focusing only on standardized test scores encourages teaching to the test, massive amounts of test prep, and, unfortunately, cheating.  I’m not downgrading the value of standardized assessments, as I believe a limited amount are beneficial in providing valuable feedback that can inform instructional decisions.  Appropriately utilizing student assessment results may prove beneficial for a teacher or school, but using that data outside of its context to level accusations can cause problems.

Proactive Steps …

By now most educators have realized that student achievement data is starting to make up an increasing portion (20%+) of one’s evaluation.  In some cases one VAM assessment could be used to measure student growth and impact employment decisions.  Instead of using one standardized assessment to determine teacher effectiveness, administrators should enable teachers to show student learning through a variety of means. This is a difficult task to tackle, as administrators are also being assessed on standardized assessment results.  While one assessment shows a singular brush stroke of learning, the picture becomes much clearer when multiple data points are used.  Even NWEA, the makers of the MAP assessment, encourage school leaders to use multiple data points (not just MAP) to measure student growth.  Regardless, some districts are already using singular assessments for evaluation/employment purposes.  I’m advocating that principals take a closer look at multiple student achievement data points instead of relying on one growth indicator.

How …

Formative assessments, student projects, presentations, and PBL activities can show learning at varying levels.  This collection of student data can not only help inform instructional decisions, but also show evidence of student learning.  Digital portfolios are making a splash in education, and I’m hoping that more districts start using them in conjunction with standardized assessments to provide evidence of student learning.  Showcasing student learning through a variety of formative assessment tools gives more meaning to the learning that’s happening.  If communicated appropriately, state and local school boards will take notice and become more interested in multiple data points to determine effectiveness, rather than a singular one.

photo credit: CliffMuller via photopin cc