Chimacum Middle School Eagles!


Assessment for Learning (AfL) Strategies

AfL || Sharing Learning Expectations || Questioning || Feedback || Self-Assessment & Peer Assessment

Sharing Learning Expectations

The best way for teachers to share learning expectations is, well, to know them. Start with your state standards (which may mean the Common Core and the Next Generation Science Standards). In WA state we have pretty well organized Science standards, and I also make use of the Benchmarks for Science Literacy, the National Science Education Standards, and now the Next Gen standards as well. I recommend choosing power standards to work on with students, because one mistake I see too many teachers make is trying to cover too much in one year. I’d much rather hit a few standards or learning targets well than rush through many and have my students learn none (or few, anyway). Identifying power standards is something we all worked on this year in my district. We used Power Standards: Identifying the Standards that Matter the Most and Unwrapping the Standards: A Simple Process to Make Standards Manageable by Larry Ainsworth. What I liked about WA state’s Science standards is that they are already written in understandable, well-chunked language. Once you determine which standards you will focus on, you need to rewrite them into a usable format, and Unwrapping the Standards helps you do just that.

Once you have your standards figured out you need to back up. In the power standards books you learn how to write essential questions that, once answered, will show whether or not your students understand the standard. But first you need a plan to help students learn what is needed to understand the standard. That plan is called a Learning Progression (I won’t use an acronym, thank goodness, right?). “A Learning Progression is a sequenced set of subskills and enabling knowledge that, it is believed, students must master en route to mastering a more remote curricular aim.” (Popham 2008) Learning progressions are a great way to plan what you are going to do with students, and the best part is that in a learning progression you actually plan out what formative assessments you will use, and when and where! So the idea is that you communicate the standard to the students as well as the learning targets they will need to reach along the way to learn the standard. To learn more about learning progressions read Transformative Assessment by W. James Popham.

This is a sample Science learning progression (science_learning_progression.png) for a unit on insects. If you click on the image you can view it larger, or you can download a Word document of the learning progression here so that you can actually read it. The big, red oval is the standard students will be learning about. The blue rectangular boxes are the learning targets that students need to learn, in the order students could learn them, to understand what they need to know for the standard. The green rectangular boxes are the formative assessments given along the way to see if students are learning the targets. By giving formative assessments between learning targets, teachers can decide whether to move on or review before going on to the next learning target activity.

Here is another graphic organizer for creating learning progressions, and here is a Learning Progression Tool document to help you make your own learning progressions for any subject. The tool is helpful and goes well with the graphic organizer. I was able to create a learning progression for an Earth Science unit I was teaching with another teacher who was teaching the same topic, and here’s what we came up with. A warning we got in our PD: “Keep a learning progression sufficiently lean so that it is likely to be used. The only building blocks to include are those for which you plan to collect assessment evidence.” –Popham

To make learning targets it helps to know the difference between enabling knowledge and subskills. Enabling knowledge is what students will need to know to achieve the learning target, and subskills are what students will need to do to achieve it. Knowing the difference helps when choosing an assessment. We were given examples of learning targets that were too big, too small, and just right. Here are some Science examples:

Too BIG – “Students know that: Earth is a system that contains a fixed amount of each stable chemical element existing in different chemical forms. Each element on Earth moves among reservoirs in the solid Earth, ocean, and atmosphere as part of biogeochemical cycles driven by energy from the Earth’s interior and from the Sun.”

Yeah! No duh, right? That should be the standard.

Too small – “Students know that energy can be transferred from one place to another.”

What makes this too small for a learning target is that it’s too low on Bloom’s taxonomy for students merely to know energy can be transferred. A better target would be for students to demonstrate how energy can be transferred.

Just right – “Students are expected to sort plants and animals according to their structures (e.g. presence of hair, feathers, or scales on their skin) and behaviors (e.g. grazing, hunting or diving for food).”

This works as a learning target that can be assessed before moving on to the next target. Assessing before moving on to the next learning target is called assessing at critical junctures (the green rectangular boxes on your learning progression). For a critical juncture assessment to be effective it needs to be “diagnostic of student understanding,” “quick,” and it needs to “inform the next step.” Informing the next step seems crucial to me because that is where we move from hit-and-miss educating to deliberate educating.
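The structure described above (a standard, an ordered set of learning targets, and a quick check at each critical juncture) can be sketched as data. Here is a minimal sketch in Python; the class names, example targets, and decision rule are my own illustration, not an actual tool from the trainings:

```python
from dataclasses import dataclass, field

@dataclass
class LearningTarget:
    description: str       # the blue box: what students need to learn
    formative_check: str   # the green box: the quick check at the juncture

@dataclass
class LearningProgression:
    standard: str          # the big red oval
    targets: list = field(default_factory=list)

    def next_step(self, results):
        """Decide whether to review or move on, given pass/fail
        results (booleans) for the formative checks completed so far."""
        for i, passed in enumerate(results):
            if not passed:
                return f"Review: {self.targets[i].description}"
        if len(results) < len(self.targets):
            return f"Teach next: {self.targets[len(results)].description}"
        return "Standard reached: assess summatively"

# Hypothetical insect-unit progression, loosely echoing the sample above.
insects = LearningProgression(
    standard="Understand insect structures and life cycles",
    targets=[
        LearningTarget("Identify insect body parts", "labeling exit pass"),
        LearningTarget("Sequence metamorphosis stages", "card sort"),
    ],
)
```

Calling `insects.next_step([True])` points you to the second target, while `insects.next_step([False])` tells you to review the first. Popham’s warning about leanness applies here too: only include targets you actually plan to collect evidence on.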

Feel free to leave me a comment :o)

Back to the TOP

Eliciting Evidence (formerly Questioning)

At our OMSP AfL training we were shown four ways to collect evidence from students:

•Personal Communication (responding to individual journals or logs, 1:1 interview, asking questions in class)

•Performance Assessment (teacher observation of presentation or lab)

•Extended Written Response (writing that shows higher level thinking)

•Selected Responses (forced choice where students choose from a list then explain their thinking)

For eliciting student thinking we want to have our “Students choose from a likely set of student responses which should be developed to reveal their level of thinking. The options should include best answer and other answers representing incorrect or incomplete student understanding.” And unless you are using a tool like a chat room or Twitter to have a back-channel class discussion, direct questioning of students isn’t the most effective approach because, “Questioning typically involves a three-turn exchange in which the teacher asks a question, a student answers and a teacher evaluates the answer. In too many classrooms, teachers try to get students to accept the ‘right’ answer, instead of engaging them in a conversation that elicits their ideas and uses those ideas as a starting point.”

Before OMSP I was part of another Science partnership, the North Cascades and Olympic Science Partnership (NCOSP, from which OMSP started), and one of the best lessons about questioning I learned was in one of our content courses. In those courses the instructors tried something that drove me crazy: they worked their darndest to NOT answer any of our questions. Think about that for a moment. Here you are, a teacher, a professional, and you ask a question only to get another question in return. They treated us like students, not like teachers learning pedagogy, because that’s what we were: students learning Science. From that simple experience I learned that answering a question stops the curiosity; the mind actually stops thinking about the topic because it’s satisfied. It was quite shocking. To think that by answering my students’ questions in class I was curtailing their curiosity! I learned then to practice the skill of responding to my students’ questions either by just plain NOT answering, or by using guiding questions. In Science it’s more beneficial for my students if I ask them questions and have them figure things out for themselves. It’s all about thinking. With guiding questions I can lead them in the right direction without answering their questions for them and without doing the thinking for them. Ever since then I have practiced not answering questions in class.

When questioning students the questions should

-       cause thinking, and

-       provide data that informs teaching.

We were given the following examples to improve teacher questioning:

–generating questions with colleagues

–closed vs open

–low-order vs high-order

–appropriate wait-time

–basketball rather than serial table-tennis (I like this one, easy to visualize)

–‘No hands up’ (except to ask a question)

–class polls to review current attitudes towards an issue

–‘Hot Seat’ questioning

The part that I also find very important is to allow all students the opportunity to think about the questions and to be able to answer. To get all students to participate, teachers can use ABCD cards that all students have so they can hold up a card to show their understanding, mini whiteboards, exit passes, Qwizdom, chat rooms, or Twitter. These are preferable to the usual class discussions where only the outgoing students participate. Even if you draw names out of a bag, only one student is engaged at any time, which makes it easy for the others to tune out and miss a chance to show you whether they are learning or not. This is where hinge questions come in: hinge questions must “make students choose from a likely set of student responses which should be developed to reveal their level of thinking. The options should include best answer and other answers representing incorrect or incomplete student understanding.”

Here is an example of a Science hinge question where the incorrect responses show typical student misconceptions:

The ball sitting on the table is not moving. It is not moving because:

A. no forces are pushing or pulling on the ball.

B. gravity is pulling down, but the table is in the way.

C. the table pushes up with the same force that gravity pulls down.

D. gravity is holding it onto the table.

E. there is a force inside the ball keeping it from rolling off the table.

Students can choose the best answer using tech like Qwizdom or Google Forms, or with ABCDE cards. That way all students can answer quickly, the teacher can quickly see who got it and who didn’t, and the misconceptions of those who didn’t quite get it will be revealed.
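Tallying those responses is simple enough to script. Here is a hypothetical sketch of what a clicker system or a Google Forms summary does behind the scenes; the misconception labels are my own illustrative guesses, not from the training materials:

```python
from collections import Counter

# Map each distractor to the misconception it is designed to reveal.
# "C" is the best answer (the table's normal force balances gravity).
MISCONCEPTIONS = {
    "A": "thinks balanced forces means no forces at all",
    "B": "thinks the table merely blocks gravity instead of pushing back",
    "C": None,
    "D": "thinks gravity alone keeps the ball in place",
    "E": "imagines a force inside the ball itself",
}

def summarize(responses):
    """Count the answers, compute the fraction correct, and list
    which misconceptions actually showed up in the class."""
    counts = Counter(responses)
    fraction_correct = counts.get("C", 0) / len(responses)
    revealed = sorted(m for a, m in MISCONCEPTIONS.items()
                      if m and counts.get(a, 0) > 0)
    return counts, fraction_correct, revealed

# Eight students hold up their ABCDE cards:
counts, fraction_correct, revealed = summarize(list("CCABCCDC"))
```

With that, the “quick” and “inform the next step” criteria are both met: you see at a glance how many students chose the best answer and exactly which misconceptions you need to address before moving on.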

So in your learning progression you have the standard you are addressing and your learning targets to help students achieve the standard. In order to assess for learning you will begin writing essential questions and hinge questions that are diagnostic and quick to help you determine the next steps to take for your students.

Feel free to leave me a comment :o)

Back to the TOP

Feedback


Now that you have a learning progression and you’ve planned regular, diagnostic, quick assessments for learning, it’s time to use the data gathered from students to inform your next steps. Besides informing your next steps, you also need to give your students feedback: feedback to help them determine what their next steps should be in their learning. We need to give feedback that will empower our students.

Before we create a culture of feedback in our classroom, it’s helpful to surface our own preconceptions about how people learn. Do you believe that we have a set amount of intelligence and that we can’t change it? Do you believe we each have our own talents and that we can’t change those? Check out this video on the difference between fixed vs growth mindsets:

Here’s a website on Fixed vs Growth Mindsets from Ramp up for Readiness with materials to see what your students believe. We need to be careful what we say to students whose views of self are based on being intelligent or smart. Those are the students who struggled the most with not getting graded in my Science classes. They rely on getting their A’s and B’s, and without those carrots they are at a loss as to how to proceed. They’ve lost some curiosity and plain love of learning.

To watch what you say, you have to read Carol S. Dweck’s The Perils and Promises of Praise. In short, we do students a disservice when we praise their intelligence, or seem to praise their intelligence by the way we praise them. Praising intelligence may give students a short burst of pride but in the long run leads to negative consequences.


This graphic from OMSP shows how having a fixed mindset leads to fear of failure, which happens when we praise students for their intelligence. The goal is to praise students for their effort. Effort sends the message that achievement comes from hard work and not from having or lacking smarts. It sends the message that anyone and everyone can succeed if they put forth the effort. I like that.

A good read for learning about giving feedback is the chapter “Feedback as Part of Formative Assessment” from How to Give Effective Feedback to Your Students by Susan M. Brookhart; together with How to Grade for Learning, K-12 by Ken O’Connor it will help you figure out ways to assess students without giving grades. Even if you grade assignments, you really need to think about what it means to grade formative assessments. How can you punish a student with a low grade for not quite knowing something yet? Failure is important to learning, and we don’t want to make our students fear failure! And if you give A’s to all who complete a formative assessment, what’s the use of the A? Feedback makes more sense than letter or number marks or grades, and way more sense than a percentage.

Feedback is more complicated than I ever thought. Turns out you can have seven content attributes and four strategies for giving feedback. Who knew?

Seven content attributes

Focus (avoid personal comments, focus on learning targets, focus on learner)

Comparison (used when comparing work to criteria or rubrics)

Function (point out strengths & weaknesses in work without evaluating or judging)

Valence (being positive & constructive while giving suggestions for improvement)

Clarity (be clear to make sure student understands your feedback)

Specificity (be descriptive but don’t do the work for the student)

Tone & Word Choice (to communicate respect for the student as an active learner)

Four strategies

Timing (don’t take too long to give feedback)

Amount (don’t correct everything or write too much, focus on learning targets)

Mode (choose appropriate feedback: written, oral, demo)

Audience (individual vs whole group or small group)

This document, Assessing_Feedback_Strategies_Content.doc, is fantastic as it lays out all seven content attributes and all four strategies and explains them all with examples of good and bad feedback!

The best part of the session on feedback was that as we practiced writing feedback for sample student responses we were then supposed to read each other’s work and give each other feedback on our feedback! The more we can peer review each other’s work the better we get! Deprivatization is the way to go.

Here’s a sample question with a rubric that we used. They gave us three sample student responses: one student achieved the learning target, one surpassed it, and one fell short (achieved the simpler content). I purposely removed the number column because the written feedback is more powerful than numbers anyway, and if we include the numbers with the written feedback students will focus on the numbers and not on the information. One important point I remembered is to give feedback to students who achieve the learning target too, not just the ones who haven’t achieved it yet. This stuck out to me from Brookhart’s How to Give Effective Feedback to Your Students: “It is not fair to students to present them with feedback and no opportunities to use it. It is not fair to students to present them with what seems like constructive criticism and then use it against them in a grade or final evaluation.”

We also used some reproducibles from Marzano’s site, from his Formative Assessment & Standards-Based Grading book, to practice writing learning goals. We were learning how to determine an appropriate learning goal along with a more complex goal and a simpler goal to go with it. Our discussion got me thinking about the way I assessed my students this past spring. I think Marzano makes a good point about assessing students’ achievement based on what they know and what they’ve learned. Things like effort, organization, and behavior should be separate from that, but they are also important. I created a behavior/work ethic standard so that parents can get information on how well their children are working and behaving in Science. It would be great if more colleges and universities would accept information about students’ content achievements and skills as well as their work skills. Grades alone from typical high school transcripts don’t tell the whole story. Which would colleges prefer to remediate: the content or the work skills? If you have the work skills, wouldn’t content be all that much easier to make up?

Feel free to leave me a comment :o)

Back to the TOP

Self-Assessment and Peer Assessment

The last two AfL strategies are important because students are too dependent on adult feedback. Last year I had 150 students, and I told them that waiting for me to give them all feedback would take too long; they needed to help each other out. This was especially true when they blogged. I used to read over and approve each blog post one at a time. If it got published it had my stamp of approval, and if it didn’t get published they had to check it for my feedback to read what was needed to get their post published. With 150 students that wasn’t going to work, so I took a risk. I told students that I’d publish all their blog posts as soon as they came to my inbox and that it was up to them to read each other’s work and give each other feedback. I was reading a blog recommended by someone on my PLN (can’t remember whose blog, sorry), and I liked the way this teacher had his students write feedback comments on blogs, so I’m going to try it:

1. Quote something that stands out from the blog.

2. Say why this stands out or make a personal connection to the post from your own experience.

3. End with a compliment and be nice.

If somebody knows where I got that from please tell me!

Some techniques we were shown for student self-assessment and peer assessment are:

Students assessing their own/peers’ work

—with rubrics

—with exemplars

—“two stars and a wish”

Training students to pose questions/identifying group weaknesses

Self-assessment of understanding

—Traffic lights

—Red/green discs

End-of-lesson students’ review

If you have any more ideas for this section, please share them with me and I’ll add them here, or leave a comment on the AfL post. Thank you.

Feel free to leave me a comment :o)

Back to the TOP