Teachers get it handy!

Started by wherefromreferee?, June 20, 2008, 08:49:07 AM

CK_Redhand

Without knowing the exact details of this case, I'll say that the drop from A* to A or B could be explained by either the teacher's predicted grade (unlikely, though, as the teacher would almost certainly predict an A*) or the class ranking.

If the model sees two individuals with the same AS result and same predicted grade but different rankings, then the one with the lower class ranking will get a lower predicted output.  This is quite cruel on the pupil because I would argue the class ranking assigned by the teacher is a poor quality variable.  Correlations between the input variables could cause them to have unreasonably small or large effects.

If the pupil has the best AS result, the best predicted grade and the best class ranking, then it should be impossible for them not to get the best predicted output.

Milltown Row2

But it has produced multiple decisions that go against strong previous results.

While results across the board have risen, the effect on those who haven't achieved what would have been predicted could be telling.

I know of lads that have benefited from getting places in colleges that they'd never have got into based on their actual results.

The modelling has left some teachers I know at the point where, if it ever happened again, they wouldn't bother doing it, as they said it made no difference, and now parents will be aggrieved that their kids' teachers were not listened to.
None of us are getting out of here alive, so please stop treating yourself like an after thought. Ea

CK_Redhand

Those decisions must then have been made on class rank, assuming AS result and teacher predicted grade are equal.

If there is an example of pupils having the same values for AS result, teacher predicted grade and class rank in the same subject but getting different model predictions yesterday, then there is definitely something wrong with the model.

Also, if there was an accurate dataset with these variables, one could reverse engineer the algorithm to get the actual numbers for the model in a particular subject.
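
Just to illustrate what I mean by reverse engineering: if the model really is a simple weighted sum of the three variables (that's my assumption, CCEA haven't confirmed any such structure), then recovering the weights is an ordinary least-squares problem. A toy sketch in Python with completely made-up numbers:

```python
import numpy as np

# Hypothetical dataset: one row per pupil in a single subject, with
# AS result, teacher predicted grade and class rank already converted
# to numeric scores.  Every number here is invented for illustration.
X = np.array([
    [0.7, 0.8, 0.95],
    [0.7, 0.8, 0.90],
    [0.5, 0.6, 0.60],
    [0.9, 0.9, 1.00],
    [0.4, 0.6, 0.40],
])

# The model score each pupil actually received (also invented; these
# happen to equal 0.3*AS + 0.3*predicted + 0.4*rank).
y = np.array([0.83, 0.81, 0.57, 0.94, 0.46])

# If score = w1*AS + w2*predicted + w3*rank + c, least squares
# recovers the weights exactly given enough clean rows.
A = np.column_stack([X, np.ones(len(X))])  # append intercept column
weights, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated weights (AS, predicted, rank, intercept):", weights)
```

With enough accurate rows per subject that would pin the weights down exactly; noisy inputs or a non-linear model would obviously make it harder.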

David McKeown

Quote from: CK_Redhand on August 13, 2020, 11:09:30 PM
Without knowing the exact details of this case, I'll say that the drop from A* to A or B could be explained by either the teacher's predicted grade (unlikely, though, as the teacher would almost certainly predict an A*) or the class ranking.

If the model sees two individuals with the same AS result and same predicted grade but different rankings, then the one with the lower class ranking will get a lower predicted output.  This is quite cruel on the pupil because I would argue the class ranking assigned by the teacher is a poor quality variable.  Correlations between the input variables could cause them to have unreasonably small or large effects.

If the pupil has the best AS result, the best predicted grade and the best class ranking, then it should be impossible for them not to get the best predicted output.

That also assumes that the predicted grade/ranking was a variable. Some of the teachers I've spoken to (at multiple schools) have been told that it was not, but that it was requested as a way of confirming the accuracy of the system. I haven't been able to verify that, nor have I been able to confirm the opposite, because of the lack of information.

In addition, I'm aware of one teacher whose three-year average for an A* or A is 97%, and whose lowest average figure over the last 11 years for a top-two grade is 93%. They predicted 95% A and A* and were awarded 80%. The AS and GCSE results for them did not support that. A model that can't allow for that teacher's considerably above-national-average performance, and which punishes two children as a result, is for me not fit for purpose.
2022 Allianz League Prediction Competition Winner

CK_Redhand

From what I understand from yesterday's interview, teacher predicted grade is used as a predictor/input variable, but we would need clarification on that.

The argument against using past school or teacher performance would be that it "punishes" pupils from weaker schools.  Really there is no perfect solution and it opens the wider debate as to what the purpose of the examination is and the whole idea of "fairness" in academic selection.

Milltown Row2

Quote from: CK_Redhand on August 14, 2020, 09:55:26 AM
From what I understand from yesterday's interview, teacher predicted grade is used as a predictor/input variable, but we would need clarification on that.

The argument against using past school or teacher performance would be that it "punishes" pupils from weaker schools.  Really there is no perfect solution and it opens the wider debate as to what the purpose of the examination is and the whole idea of "fairness" in academic selection.

If a child from a weaker school is performing really well, that will be reflected in the results he/she has produced over the years at GCSE/AS level. Then the A level result will be based on his previous performance and the predicted grade that the teacher has given.

So why would the likes of the grades given by my wife to her students at AS and A level all match what CCEA gave?

Have they used a different model?
None of us are getting out of here alive, so please stop treating yourself like an after thought. Ea

johnnycool

Quote from: CK_Redhand on August 14, 2020, 09:55:26 AM
From what I understand from yesterday's interview, teacher predicted grade is used as a predictor/input variable, but we would need clarification on that.

The argument against using past school or teacher performance would be that it "punishes" pupils from weaker schools.  Really there is no perfect solution and it opens the wider debate as to what the purpose of the examination is and the whole idea of "fairness" in academic selection.

Not an expert on modelling or education, but no process was ever going to be perfect, and there does seem to be considerable weight placed on the class ranking and the school's historical data. That obviously causes issues with outliers at both ends of the scale: a really bright class this year may see some pupils kept in the A bracket, and a genuine A student may lose out if there are x number of kids ahead of them in the class ranking; similarly, if there's a weak class, some may be bumped up due to the historical record of the school. In the world of mathematical modelling that's OK, as everything averages out, but we're talking about kids here, who are a bit more than pieces of statistical data.

What now needs to happen is a robust appeals process and the fees mentioned really need to be dropped to allow all aggrieved students a fair crack of the whip.

CK_Redhand

Every pupil is scored using the same model. The three variables I've mentioned will be used for all predictions.  Maybe an example with completely made up numbers will help explain how it would work.

Pupil X got a B at AS level, teacher predicted grade of A and class rank of 5th out of 20.
Pupil Y got a B at AS level, teacher predicted grade of A and class rank of 6th out of 20.

Pupil X model score comes from the three components so let's say
B at AS level = 0.2
Teacher predicted grade of A = 0.3
Class rank of 5th = 0.3
Total score = 0.2 + 0.3 + 0.3
= 0.8
They are in the 80th percentile of all pupils so get given an A by the model.

Pupil Y model score will be made up of the same components
B at AS level = 0.2
Teacher predicted grade of A = 0.3
Class rank of 6th = 0.25
Total score = 0.2 + 0.3 + 0.25
= 0.75
They are in the 75th percentile of all pupils so get given a B by the model.

So in this case pupil X gets the same grade their teacher predicted but pupil Y gets lower.

Now the exact numbers and weights of each component of the models are unknown to us.  We could reverse engineer them given a large enough dataset with accurate inputs. If CCEA were to release the technical details of the models we could scrutinise more closely their methodology but I doubt that will be available any time soon.  Hope this helps.
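
For anyone who wants to play with the arithmetic, here is the same made-up example as a few lines of Python. The component scores and the grade cut-offs are invented for illustration; they are not CCEA's real numbers.

```python
# Made-up component scores matching the worked example above.
AS_SCORE = {"A": 0.3, "B": 0.2, "C": 0.1}
PREDICTED_SCORE = {"A*": 0.35, "A": 0.3, "B": 0.2}
RANK_SCORE = {5: 0.3, 6: 0.25}  # only the class ranks used in the example

def model_score(as_grade, predicted_grade, class_rank):
    """Sum the three made-up components, as in the worked example."""
    return (AS_SCORE[as_grade]
            + PREDICTED_SCORE[predicted_grade]
            + RANK_SCORE[class_rank])

def awarded_grade(score):
    """Invented percentile cut-offs: 0.80 and up is an A, 0.70 a B."""
    if score >= 0.80:
        return "A"
    if score >= 0.70:
        return "B"
    return "C"

for name, rank in [("Pupil X", 5), ("Pupil Y", 6)]:
    s = model_score("B", "A", rank)
    print(name, s, awarded_grade(s))  # X: 0.8 -> A, Y: 0.75 -> B
```

One place lower in the class ranking and pupil Y drops a whole grade, exactly as in the example.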

CK_Redhand

Quote from: johnnycool on August 14, 2020, 11:06:09 AM
Quote from: CK_Redhand on August 14, 2020, 09:55:26 AM
From what I understand from yesterday's interview, teacher predicted grade is used as a predictor/input variable, but we would need clarification on that.

The argument against using past school or teacher performance would be that it "punishes" pupils from weaker schools.  Really there is no perfect solution and it opens the wider debate as to what the purpose of the examination is and the whole idea of "fairness" in academic selection.

Not an expert on modelling or education, but no process was ever going to be perfect, and there does seem to be considerable weight placed on the class ranking and the school's historical data. That obviously causes issues with outliers at both ends of the scale: a really bright class this year may see some pupils kept in the A bracket, and a genuine A student may lose out if there are x number of kids ahead of them in the class ranking; similarly, if there's a weak class, some may be bumped up due to the historical record of the school. In the world of mathematical modelling that's OK, as everything averages out, but we're talking about kids here, who are a bit more than pieces of statistical data.

What now needs to happen is a robust appeals process and the fees mentioned really need to be dropped to allow all aggrieved students a fair crack of the whip.

The school's historical data has not been used in the model, according to the CCEA rep yesterday.  I agree with the rest of your post.

macdanger2

No matter what model / system is used, somebody is going to feel (rightly or wrongly) aggrieved, because the results are out of the hands of the students. I think they should have gone ahead with exams rather than using this type of system.

johnnycool

Quote from: CK_Redhand on August 14, 2020, 11:15:35 AM
Every pupil is scored using the same model. The three variables I've mentioned will be used for all predictions.  Maybe an example with completely made up numbers will help explain how it would work.

Pupil X got a B at AS level, teacher predicted grade of A and class rank of 5th out of 20.
Pupil Y got a B at AS level, teacher predicted grade of A and class rank of 6th out of 20.

Pupil X model score comes from the three components so let's say
B at AS level = 0.2
Teacher predicted grade of A = 0.3
Class rank of 5th = 0.3
Total score = 0.2 + 0.3 + 0.3
= 0.8
They are in the 80th percentile of all pupils so get given an A by the model.

Pupil Y model score will be made up of the same components
B at AS level = 0.2
Teacher predicted grade of A = 0.3
Class rank of 6th = 0.25
Total score = 0.2 + 0.3 + 0.25
= 0.75
They are in the 75th percentile of all pupils so get given a B by the model.

So in this case pupil X gets the same grade their teacher predicted but pupil Y gets lower.

Now the exact numbers and weights of each component of the models are unknown to us.  We could reverse engineer them given a large enough dataset with accurate inputs. If CCEA were to release the technical details of the models we could scrutinise more closely their methodology but I doubt that will be available any time soon.  Hope this helps.

That makes sense on how the CAI aspect is worked out within the school itself; it's the "standardising" part carried out by the CCEA that refers to using the last three years' data per school or college, for GCSE anyway, but they kinda skirt over it for A levels.

https://ccea.org.uk/summer-awarding


IMO they looked at how many grades they'd awarded over the last three years and then modified the model to suit. If you look at the video on that link, there almost certainly is a school's historical data being used in conjunction with the class rank.
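
Purely as an illustration of what that kind of standardising could look like (my guess at the mechanism, not anything from CCEA's documentation): take the school's class rank order this year and force the awarded grades to fit the school's average grade distribution over the last three years.

```python
# Illustrative sketch only: fit this year's rank order to a school's
# historical grade distribution.  All names and numbers are invented.
def standardise(ranked_pupils, historical_share):
    """ranked_pupils: best pupil first.  historical_share: grade ->
    fraction of the school's entries awarded that grade over the last
    three years, listed best grade first."""
    n = len(ranked_pupils)
    awarded, start = {}, 0
    for grade, share in historical_share.items():
        count = round(share * n)
        for pupil in ranked_pupils[start:start + count]:
            awarded[pupil] = grade
        start += count
    for pupil in ranked_pupils[start:]:  # rounding leftovers get the bottom grade
        awarded[pupil] = list(historical_share)[-1]
    return awarded

pupils = [f"pupil_{i}" for i in range(1, 11)]  # rank 1 is the best
shares = {"A": 0.3, "B": 0.4, "C": 0.3}        # made-up three-year average
print(standardise(pupils, shares))
```

Note how an unusually strong class would still be capped at three A grades here, which is exactly the outlier problem described above.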

CK_Redhand

Ah OK, cheers, I hadn't seen that page or video.  It's still unclear exactly how the school's past performance data is used, but it's possibly similar to previous years' standardisation process.

I'd like to know more about who built the models and the peer review process, but it seems like they used UK-wide data. I also learned they incorporate resit data somehow, in addition to the variables we knew about.

FermGael

So next week's GCSE results are going to be a bigger mess-up.
Using teachers' predicted grades for quality assurance.
Not using them at all for the predictions.
This is just unbelievable.

Wanted.  Forwards to take frees.
Not fussy.  Any sort of ability will be considered

David McKeown

Quote from: CK_Redhand on August 14, 2020, 11:15:35 AM
Every pupil is scored using the same model. The three variables I've mentioned will be used for all predictions.  Maybe an example with completely made up numbers will help explain how it would work.

Pupil X got a B at AS level, teacher predicted grade of A and class rank of 5th out of 20.
Pupil Y got a B at AS level, teacher predicted grade of A and class rank of 6th out of 20.

Pupil X model score comes from the three components so let's say
B at AS level = 0.2
Teacher predicted grade of A = 0.3
Class rank of 5th = 0.3
Total score = 0.2 + 0.3 + 0.3
= 0.8
They are in the 80th percentile of all pupils so get given an A by the model.

Pupil Y model score will be made up of the same components
B at AS level = 0.2
Teacher predicted grade of A = 0.3
Class rank of 6th = 0.25
Total score = 0.2 + 0.3 + 0.25
= 0.75
They are in the 75th percentile of all pupils so get given a B by the model.

So in this case pupil X gets the same grade their teacher predicted but pupil Y gets lower.

Now the exact numbers and weights of each component of the models are unknown to us.  We could reverse engineer them given a large enough dataset with accurate inputs. If CCEA were to release the technical details of the models we could scrutinise more closely their methodology but I doubt that will be available any time soon.  Hope this helps.

See, the problem I have with that is that class ranking means nothing when compared to other schools. Take my year for example. I finished 3rd in the UK in computing science in my year. I finished second in my class. I would have ranked first at every other school in the UK except one. In a similar scenario now, why would my 2nd-place rank be worth considerably less than nearly everyone else's first-place rank? Which, if class ranking is factored in and previous school performance is not, is exactly what will have happened.

Similarly my year had two of the top 10 pupils in Accounting and not for the first time. Why is that not relevant?
2022 Allianz League Prediction Competition Winner

CK_Redhand

Really specific scenarios like that are impossible to account for in modelling, as there wouldn't be enough data to support them.  Again, as I said in an earlier post, models do well at predictions across large populations but can be very bad at the individual level.
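
A quick made-up illustration of that last point: a model can reproduce the overall grade distribution almost perfectly while still misgrading a large share of individual pupils.

```python
import random

random.seed(1)
GRADES = ["A", "B", "C", "D"]

# Invented "true" grades, and a model that keeps the true grade 70% of
# the time and guesses at random otherwise.
true = [random.choice(GRADES) for _ in range(10_000)]
pred = [g if random.random() < 0.7 else random.choice(GRADES) for g in true]

def dist(grades):
    return {g: round(grades.count(g) / len(grades), 3) for g in GRADES}

print("true distribution: ", dist(true))   # roughly 25% per grade
print("model distribution:", dist(pred))   # also roughly 25% per grade
wrong = sum(t != p for t, p in zip(true, pred))
print(f"pupils misgraded: {wrong} of {len(true)}")  # around 22-23%
```

At the population level the two distributions are nearly identical, but more than a fifth of the individual pupils got the wrong grade.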