Published: March 13, 2015

Professor Derek Briggs offered his expert opinion on the expectations, design, and controversy surrounding the PARCC assessments being administered for the first time in schools across the state.


Want to hear more from Professor Briggs on the current state of standardized testing, especially for students with special needs? Join us for the last forum in our Education Series at the Chautauqua Community House on Wednesday, March 18th, 7:00-8:15pm: "Standardized Testing and Special Needs." 

Certificates for 1.5 hours toward recertification for educators will be available at the event. Educators and CU faculty and students can contact kristen.davidson@colorado.edu for information on discounted tickets.


Put to the test: CU Boulder’s Derek Briggs on PARCC expectations

Posted By Eric Gorski on March 12, 2015

We’ve conducted many interviews over the past few weeks about Colorado’s next-generation student assessments in math and English, which debuted last week in classrooms, computer labs and even cafeterias across the state.

As is nearly always the case, what you see in print and online is just a snippet of what are often long, interesting conversations.

One of the beauties of the Internet is the lack of space restrictions. Here, we start a series of posts with extended excerpts from our interviews with a variety of sources about the Colorado Measures of Academic Success, which include math and English language arts tests developed by the Partnership for Assessment of Readiness for College and Careers (PARCC [1]).

First up is Derek Briggs, a professor of quantitative methods and policy analysis in the School of Education at the University of Colorado Boulder [2] who sits on the technical advisory committees of four large testing consortia, including PARCC.

Briggs got the parting shot in our Sunday story that sought to provide an overview of the tests and what they may or may not accomplish [3].

Briggs sees a lot of good in PARCC, but also shortcomings. He believes PARCC takes great strides forward in being more transparent about what kids are expected to know. He also describes being “bored to tears” by a practice PARCC 11th grade reading passage from a 1928 novel [4]. He thought the passage had “a lot of gratuitous vocabulary.” He said he got the answer wrong.

A condensed and lightly edited transcript of our conversation follows.


On how the new tests seek to measure critical thinking:

If you really want to look at critical reasoning in the way people write, you have to give them something rich to write about and the time to compose it. If you take that seriously, it is going to change the length of the test. And you can certainly see that in the PARCC ELA (English language arts) exam. On math, similarly, you see not quite an essay but more multi-step questions, where there are a lot of different pieces people go through to find a solution, and make their thinking and reasoning more transparent.

On concerns or criticism that some math questions also ask students to write:

It’s a classic problem. There is communication inherent to everything we do. Traditionally, in the way tests are created, mathematics is mathematics and reading and writing are reading and writing. To the extent you have this word problem, are you conflating the ability to read and communicate with mathematics? But if you look at mathematics in the Common Core, it’s so critical to mathematics to explain your answer. I would argue the definition of the term “mathematical construction” is changing in the Common Core, and that you would expect more of an aspect of communication being a part of it. (Briggs also noted that students aren’t being asked to write long paragraphs in the math responses.)

On the expectations for PARCC:

I try to think of PARCC as a long-term enterprise. This is sort of a first draft they are putting forward. What could they be if given the time and possibility to grow and evolve? To learn from the first iteration of these assessments and what they get back from it? There is a lot of possibility in the sense we are trying to learn more, in more depth, about what students know and can do and what it really means to be ready for college and career. This whole enterprise has been to try to figure that out. And we don’t really know till we have a series of cohorts taking these. We’ll have to wait until some go off to college, and learn whether students who perform well on these new tasks, to what extent it is predictive? Right now, I can’t say. How has PARCC been successful? I think they clearly have taken up the challenge to be more transparent in terms of what kinds of claims they are trying to make in the assessments — in what are they claiming a high-scoring student should know. Whether the scores will support that claim, we don’t know. From the aspect of design, they have been very thoughtful about it and have a good framework to make it better. Does that mean I think it’s a great, perfect test right off the bat? I don’t know. I am sure we will find areas to improve. I do worry that we have this expectation in the public that we’ve had four years, so why isn’t it perfect off the bat?

On the test’s reliability in truly gauging where students stand:

This is going to be a very hard test. There will be a lot of students who struggle on this test. You will see more of a floor effect – more students scoring at the same level as if they had guessed at random. For those students, can you get a good measure of what they know and can do? Probably not. So for those students, the test may not be reliable.

On his takeaway from that sleep-inducing passage he read:

I had a hard time getting through it. The question is, is that the right passage? How do you decide what is the right passage? What is the target we are trying to get at for kids? Is it that, at times in life, you have to focus in on something you are not that interested in?

On parents who are holding their children out of tests by opting out or refusing [5]:

On the one hand, I totally appreciate that. My son has autism and takes an alternative assessment. When kids now are taking PARCC tests, my kid does not get to be in the main classroom working on reading. I feel it, being taken away from instruction. I am sympathetic to that. But somehow, with education, people are seeing tests as separate from what goes on in instruction. But they need to be integrated. What is on the test should be related directly to instruction. Where the deeper problem lies is that I think the districts are testing so much already, and PARCC comes along and it is trying to get at things at a deeper way. It is more testing time, and it riles people up. It’s sad they are kind of reacting against this test that, on the whole, the quality is probably better than a lot of existing tests at the district level. I wish the objection were based on it being a low-quality test. Instead, it’s a knee-jerk reaction that it’s more time toward testing.

On the challenges to designing a good test:

I just want to point out how hard it is to do this. Put yourself in the shoes of a good test designer, who asks, ‘What do I want to get from my students?’ With reading, what is the right passage, the right level of complexity? All these things are really tricky. Then you need permission to use (a passage). I am sure in this first year, they’ll have gotten things wrong. There will be passages where people are going to get together and say, “You know, we shouldn’t have picked that passage.” Does that passage stand out as one that students perform poorly on? That is where we have feedback and can make the test better over time.


Article printed from Colorado Classroom: 


URLs in this post:

[1] PARCC: 

[2] University of Colorado Boulder: 

[3] Sunday story that sought to provide an overview of the tests and what they may or may not accomplish: 

[4] practice PARCC 11th grade reading passage from a 1928 novel.: 

[5] holding their children out of tests by opting out or refusing: 

Related Faculty: Derek Briggs