
## Tuesday, June 20, 2017

### The Cubic Regression Rules!

The actual June 2017 New York Algebra 1 (Common Core) exam has not been released, and teachers are strictly forbidden from discussing the exam online until after June 30, so as I begin my yearly review (as a NY citizen who happens to be a retired teacher), all I can comment on is the conversion chart. I have plotted it here and included (as a dotted line, if you look closely) the cubic regression curve based on the data contained in the conversion chart itself.
I have always looked at level 1 as the failing range, and level 2 as the "safety net" where Special Ed students pass but non-Special Ed students do not. That may have changed by now. Needless to say, a regular-ed student has to reach level 3 to be considered "passing."

I noticed that, in comparison to last June's conversion chart, every raw score from 4 to 21 received a scaled score 1 or 2 points higher this year. No raw score of 24 or higher got any boost this year, and a few lost a point on the conversion side.

Here are June 2016 and June 2017 plotted together:

Needless to say, the cubic regression is pretty much the same. I do wonder why NYSED prefers it so much.
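For readers curious how such a curve is produced, here is a minimal sketch of fitting a cubic to conversion-chart data with NumPy. The (raw, scaled) pairs below are hypothetical placeholders shaped roughly like the charts described here, not the actual NYSED data.

```python
import numpy as np

# Hypothetical (raw, scaled) pairs -- placeholders, NOT the real chart.
raw = np.array([0, 10, 20, 30, 40, 50, 60, 70, 86])
scaled = np.array([0, 26, 45, 58, 67, 74, 80, 87, 100])

# Least-squares cubic fit: coefficients [a, b, c, d] of
# a*x**3 + b*x**2 + c*x + d, highest power first.
coeffs = np.polyfit(raw, scaled, deg=3)
fit = np.poly1d(coeffs)

# Compare the fitted curve to the "straight-line" score, 100 * raw / 86.
for r in (20, 40, 60):
    print(r, round(fit(r), 1), round(100 * r / 86, 1))
```

With data shaped like the actual charts, the fitted cubic sits well above the straight-line score through most of the middle of the range, which is exactly the pattern visible in the plots.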

Here I have added (in green) the conversion chart for Algebra II from 2016:

Ever since NYSED started using the raw-score to converted-score approach to scoring these exams, there has been a battle between those who liked the old "straight-line" approach and NYSED itself. In case you are wondering, the straight-line approach is based on the concept that if you answer 50% of the test correctly, your score is 50; answer 75% correctly, get a 75; and so on. If I added the graph of the straight-line approach, it would be a straight line from the bottom left (0, 0) to the upper right (86, 100).
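The straight-line approach is simple enough to write in one line. Here is a sketch, assuming an 86-point maximum raw score as on the Algebra I exam:

```python
def straight_line_score(raw, max_raw=86):
    """Straight-line conversion: the score is just the percent correct."""
    return round(100 * raw / max_raw)

print(straight_line_score(43))  # half the raw points -> 50
print(straight_line_score(86))  # all the raw points -> 100
```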
What the heck- here it is:
It is easy to see that almost everywhere the scaled score exceeds the "straight-line" score, with a minimal gain at the top end, and increasingly large gains as one heads from the bottom towards that magical mystical number known as "65".

I am partially from the "old school". When I took the Regents Exam in Algebra 1 (I believe it was properly called "9th Year Mathematics" back then), the exam consisted of 30 short-answer questions (2 points each) and 7 Part 2 ten-point questions, of which you had to answer 4. There was a bit of a top-end hammer, since a student capable of answering all 7 Part 2 questions could get credit for no more than 4 of them. (If a student answered more than 4, only the first 4 answered would count.)

Thinking back to those tests, the typical 9th Year Mathematics exam contained a dozen multiple-choice questions, each with 4 choices. The random guesser would, on average, get 3 correct out of 12, contributing 6 points towards passing. That student would then have to earn 59 of the remaining 76 points. On last year's Algebra I exam there were 24 multiple-choice questions, with 4 choices each, so the random guesser would average out at 6 correct, for 12 raw score points. To get to the minimum passing score, that student would need 15 out of the 36 remaining possible points. Which is more difficult, 59 out of 76 or 15 out of 36? Hard to tell, recognizing that a student who has to guess on all of them probably doesn't know much of the course.
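The guessing arithmetic above is easy to sanity-check. This sketch assumes, as stated, a 1-in-4 chance per guess and 2 points per multiple-choice question on both exams:

```python
# Old "9th Year Mathematics" exam: 12 multiple-choice questions, 2 points each.
old_expected = 12 * (1 / 4) * 2    # expected points from pure guessing
old_needed = 65 - old_expected     # points still needed to reach 65

# Last year's Algebra I exam: 24 multiple-choice questions, 2 raw points each.
new_expected = 24 * (1 / 4) * 2    # expected raw points from pure guessing

print(old_expected, old_needed)    # 6.0 59.0
print(new_expected)                # 12.0

# As fractions of the remaining available points:
print(round(59 / 76, 3), round(15 / 36, 3))  # 0.776 0.417
```

As a fraction of the remaining points, the old exam demanded far more of the guesser, though as noted, a student guessing on everything is unlikely to earn much elsewhere either.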

The other end does intrigue me: the student who gets all the multiple choice correct, be it by guessing or by knowledge or some combination of the two.