Assessment and Evaluation

Provincial Results – National and International Assessments

Manitoba participates in two large‑scale assessment programs that provide insight into the performance of Manitoba students and into the context in which education takes place. They are the Pan‑Canadian Assessment Program (PCAP) and the Programme for International Student Assessment (PISA).

Each program is described below. Selected results are provided that have been extracted from public reports, to which links can be found on the Council of Ministers of Education, Canada (CMEC) website. Detailed analyses may be found in reports available on the web pages referenced.


Considerations when Interpreting Results

Participation and Exemption Rates

When analyzing trends and making comparisons, particularly when anomalies are observed, it is important to check whether there are also anomalies in the participation or exemption/exclusion rates of the selected schools and students. For example, an increase in mean score accompanied by a drop in participation rate must be interpreted with greater caution, as there is a higher risk that the participating schools and students are less representative of the jurisdiction. Likewise, a notably different exemption rate can affect scores, with implications for interpretation. (The decision to exempt or exclude a student, for example on the basis of an intellectual disability, is made at the school level according to common criteria provided to schools.) Since jurisdictional mean scores are often quite close together, even a small effect on scores related to jurisdictional differences in these rates can have a notable effect on rankings, to an extent that cannot be estimated. The published reports contain participation rate and exemption rate information.

Contextual Analyses and Reports

Both PISA and PCAP collect contextual data through questionnaires. These data provide insight into the contexts in which education takes place within and across jurisdictions, and how these contextual factors relate to PISA and PCAP scores. When interpreting these analyses, it is important to be mindful that these studies are better viewed as surveys, and not as experiments where students are randomly assigned to undergo certain interventions to assess the impact of these interventions on student performance. As such, interpretations of ‘cause and effect’ should be done with great care, if done at all.

Scoring Scales

Both PCAP and PISA use a performance scale with an arbitrary mean of 500 and a standard deviation (a measure of the spread of scores) of 100, established the first time a domain (e.g., reading) is assessed as the major domain. Though both assessments use the same type of scale, there is no connection or equivalence between them.

Reported mean scores are accompanied by a range called the ‘confidence interval’, in numerical form in tables, and illustrated using ‘whiskers’ for PCAP, and rectangles for PISA. The ranges indicate where the true mean score for the jurisdiction is most likely to be. These are provided because mean scores are based on samples of students, not on all students in the jurisdiction, resulting in a margin of uncertainty around each score. When comparing mean scores of two jurisdictions, a substantial overlap of the confidence intervals means there is not conclusive evidence of a performance difference between the jurisdictions. (There are statistical tests to confirm such conclusions, but visual analysis is sufficient in many cases.)
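The visual overlap check described above can be sketched in a few lines of Python. The means and standard errors below are illustrative only, not values from the published reports, and the 1.96 multiplier assumes a conventional 95% confidence level:

```python
def confidence_interval(mean, se, z=1.96):
    """95% confidence interval for a reported mean score (mean +/- z * SE)."""
    return (mean - z * se, mean + z * se)

def intervals_overlap(a, b):
    """True if two (low, high) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Hypothetical jurisdictions; standard errors are illustrative.
ci_a = confidence_interval(500, 3.0)  # about (494.1, 505.9)
ci_b = confidence_interval(465, 4.0)  # about (457.2, 472.8)

# No overlap: the data provide evidence of a real performance difference.
print(intervals_overlap(ci_a, ci_b))
```

As the text notes, substantial overlap would mean the evidence of a difference is inconclusive; formal statistical tests exist for borderline cases.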

Where a difference in mean score is clear (e.g., no overlap of the confidence intervals), the question remains as to how large the difference is in practical terms; that is, in terms of demonstrated degree of knowledge and skills. A sense of this is possible by using the standard deviations of the scales. The scales, as noted earlier, have a standard deviation of 100 points. For comparison, a typical standard deviation of a set of final course grades or of a major exam, on a percentage scale, may be around 20 percentage points. Therefore, as an estimate, a 100-point difference on the PCAP or PISA scale is about the same as a 20-point difference on a percentage scale. In other words, dividing a PISA or PCAP score difference by 5 approximates that difference on the familiar percentage scale.
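The divide-by-5 heuristic above can be expressed directly. This is a rough rule of thumb based on the assumed standard deviations (100 scale points versus roughly 20 percentage points), not an exact equivalence:

```python
def score_diff_to_percent(diff):
    """Approximate a PCAP/PISA score difference on a percentage-grade scale.

    The assessment scales have a standard deviation of ~100 points; a typical
    set of percentage grades has a standard deviation of ~20 points, so the
    conversion divides by 5 (100 / 20). A heuristic, not an exact mapping.
    """
    return diff / 5

print(score_diff_to_percent(35))  # 7.0 - e.g., a 35-point gap on the scale
print(score_diff_to_percent(100))  # 20.0 - one full standard deviation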

The Pan-Canadian Assessment Program (PCAP)

At the national level, the Pan-Canadian Assessment Program (PCAP), developed by CMEC and administered in collaboration with provincial and territorial ministries of education and schools, focuses on achievement in reading, mathematics, and science. It was first administered in the spring of 2007 and is scheduled to be administered once every three years. At each administration, one of the domains is the "major" domain (assessed more thoroughly, with a larger sample of students), while the other two are minor domains (less coverage of the domain, fewer students sampled, less analysis).

Most recent release:

Focus: Science
Target Group: Grade 8

Provincial Performance by Performance Level – Science (2013) 1


Interpretation: Most students in all provinces performed at Level 2 – the expected level for Grade 8 – or above.

Mean Score by Jurisdiction – Science (2013) 2


Interpretation:
The difference of 35 points between Manitoba’s score (465 points) and Canada’s score (500 points) is roughly equivalent to 7 percentage points (35 divided by 5). This is notable, but not large in practical terms. The notation ‘denotes significant difference’ indicates that the difference would most likely be observed again if the study were repeated with a different random sample of schools and students; it does not mean the difference is large in practical terms.

The Programme for International Student Assessment (PISA)

The Programme for International Student Assessment (PISA) is an initiative of the Organisation for Economic Co-operation and Development (OECD), of which Canada is a member. PISA focuses on the same domains as PCAP (see above), but assesses students who are 15 years old on December 31 of the prior year (largely Grade 10). PISA began in 2000 and is administered every three years.

Most recent release:

Focus: Science
Target Group: 15 years old on December 31, 2014

Provincial Performance by Performance Level – Science (2015) 3


Interpretation: Most students in all provinces performed at Level 2 – the expected level for 15-year-olds – or above. (Level 2 is considered the baseline level of science proficiency required to participate fully in modern society.) Manitoba’s performance is on par with the OECD average, though with a smaller percentage of students performing at or below Level 2.

Mean Score by Jurisdiction – Science (2015) 4


Interpretation: The difference between Manitoba’s PISA 2015 score (499 points) and Canada’s score (528 points) is 29 points, roughly equivalent to 6 points (29 divided by 5) on a percentage scale. This is notable, but not large in practical terms.

Manitoba’s average score (499) is on par with the average score of OECD countries (493 points). The score has been stable since 2009, when it declined by 17 points – equivalent to approximately 3 percentage points – compared to 2006.
