OFSTED, moderation of inspection grades, and what we can learn from universities

I’ll begin this post by writing about how assessment works in higher education, and then explain how this shows us potential shortcomings in the OFSTED system of grading schools, with reference to one Lead Inspector and a sample of 50 inspections.

In universities, we do a lot of high-stakes marking, so we have to be very careful about our processes to make sure they are transparent and defensible at all times. There are different ways of approaching this, for example second marking (when a colleague marks papers to check the original marks make sense), blind second marking (where said colleague has no idea what the original marks might have been) and even third marking (where the first two colleagues are in dispute, and another view is felt to be appropriate). These approaches generally apply to essay-based answers, projects, dissertations and reports. Marking is double-checked by an external examiner, to ensure conformity with national standards and with comparable courses, and any anomalies are investigated.

For certain subjects with more scientific or mathematical answers involving technical or numerical content, techniques such as scaling are typically used. This means that if a cohort of, say, 200 students all score uncharacteristically badly on a question, the marks for that question can be scaled up or down if it is felt that the original question was pitched wrongly in the context of the overall examination, as well as what is usually expected at a particular level. That way students get a fair and reasonable result, and standards do not fluctuate wildly if there happens to have been a change in the assessment team, for example. Again, marking is checked by an external examiner, and any scaling has to be justified to him/her in the context of national standards.
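
To make scaling concrete, here is a minimal sketch of one simple linear approach, assuming we rescale raw marks so the cohort mean matches what is normally expected at that level. The raw scores and target mean are invented for illustration; real examination boards each have their own rules, agreed with the external examiner.

```python
def scale_marks(raw_marks, target_mean, max_mark=100):
    """Linearly rescale marks so the cohort mean hits target_mean."""
    actual_mean = sum(raw_marks) / len(raw_marks)
    factor = target_mean / actual_mean
    # Cap at the maximum available mark so nobody exceeds full marks.
    return [min(round(m * factor), max_mark) for m in raw_marks]

# A question pitched too hard: raw scores well below the expected mean of 55.
raw = [32, 41, 38, 55, 29, 47]
print(scale_marks(raw, target_mean=55))  # every mark rises proportionately
```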

If an examination board really wants to probe assessment standards, it is possible to track how individual colleagues or groups of colleagues assess over time, in terms of a particular set of criteria, or statistical norm, depending on the subject under consideration and the group size. This then feeds into ongoing staff training. Academics tend to see assessment standards as an ongoing work in progress, routinely checked and altered, and underpinned by principles of fairness and parity. People take it very, very seriously indeed. This is why I am not being my usual jokey self in this blog post.

With this in mind, I recently spent some time looking at how OFSTED inspection grades vary amongst inspectors, and which factors influence this. To that end, I would like to present a case study of an individual inspector to give an example of quite how variable grades can be in comparison with a national norm. I am not saying this is the case for everyone, or making a pronouncement about OFSTED in general. I am just saying that, in this instance, there is a case for OFSTED/Serco/Tribal to moderate grades internally, as part of ensuring professional standards are met. Then, and only then, can the public have real confidence in inspection findings.

My approach

  1. I carried out an internet search to locate all OFSTED officially published inspection reports with the same Lead Inspector (n=50), who has inspected primary schools for two different subcontracting agencies, but who does not appear to have been an HMI (a centrally employed inspector).
  2. I have gone to considerable lengths to find as complete a data set as possible, by digitally searching published OFSTED reports, with triangulation against relevant newspaper articles reporting school inspections. I was also allowed access to the Watchstead website to check the data, which was very useful (thank you, Watchstead).
  3. I logged the overall inspection grade given by this Lead Inspector in each case.
  4. I calculated the overall proportion of inspection grades in each category given by this Lead Inspector, in percentage terms.
  5. I compared this percentage to the officially published OFSTED average grades overall for all inspectors in each category, which were available on the OFSTED website. (Steps 3-5 are sketched in code after this list.)
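
For anyone who wants to replicate steps 3 to 5, this is roughly the whole calculation. The grade list below is reconstructed from the percentages reported in the findings, so treat it as illustrative rather than as the raw dataset.

```python
from collections import Counter

# Grades (1-4) reconstructed from the proportions reported below:
# 10% Level 1, 30% Level 2, 40% Level 3, 20% Level 4 across 50 inspections.
inspector_grades = [1] * 5 + [2] * 15 + [3] * 20 + [4] * 10

# Officially published OFSTED averages for the comparison period (percentages).
ofsted_avg = {1: 13, 2: 50, 3: 34, 4: 7}

counts = Counter(inspector_grades)
n = len(inspector_grades)
for level in sorted(ofsted_avg):
    pct = 100 * counts[level] / n
    print(f"Level {level}: inspector {pct:.0f}% vs OFSTED average {ofsted_avg[level]}%")
```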

Findings

  1. Some caution is required in interpreting the data, as a sample of 50 inspections means that one or two unusual instances may skew the findings more than they should, more so than if we had a sample of, say, 100 inspections.
  2. The table below lists the 50 inspections carried out by the Lead Inspector over the last 9 years, and the overall inspection grades awarded in each case. I have removed the names of the schools, as they identify the Lead Inspector concerned very easily, and that is not the point of the exercise here.
  3. The figure below demonstrates the pattern of the Lead Inspector’s inspection grades over time. It tells us that in recent years, the inspector has become considerably more likely to award Level 4 grades to schools. This corresponds to a decreased frequency of inspections carried out by this Lead Inspector during the period 2010-2014, when the new regulations applied.
  4. As stated above, the inspection regulations changed after 2010, but the overall OFSTED proportions of schools getting Level 2 or Level 4 have stayed roughly the same during that period.
  5. In the case of this Lead Inspector, the table below shows the proportion of grades given during the period 2005-2013. The second column shows the OFSTED average for the same period. I have listed the 2010-2014 OFSTED averages in column 3, but I have not done so for the Lead Inspector, as we only have data for fifty school inspections, so that seems unhelpful. Note: columns will not add up to 100% exactly, due to rounding in the source figures.

Grade     Inspector   OFSTED 05-14   OFSTED 10-14
Level 1   10%         13%            10%
Level 2   30%         50%            50%
Level 3   40%         34%            36%
Level 4   20%         7%             6%

  6. It would be fair, therefore, to conclude, with the caveat that this is a relatively small sample of 50 inspections, that on the basis of the publicly available data this Lead Inspector appears to be around three times more likely to give a Level 4 grade to a school than the overall OFSTED average.
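
As a rough sanity check on that conclusion, we can ask how likely 10 or more Level 4 grades in 50 inspections would be if this inspector’s true Level 4 rate matched the national figure of about 7%. This is a back-of-envelope binomial calculation of my own, and it assumes inspections are independent draws from a national pool of schools, which is exactly the assumption questioned in the points below.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 10 Level 4 grades out of 50 inspections, against a national rate of ~7%.
print(f"P(>=10 of 50 at p=0.07) = {binom_tail(50, 10, 0.07):.4f}")
```

On these figures the tail probability comes out well below 1%, so if the assumptions held, chance alone would be an unlikely explanation; the caveats below are about whether the assumptions hold.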

The problem with this is that we don’t know:

1. If this inspector is being specifically sent to schools in trouble, hence the lower grades. However, it is usually directly employed HMI who are sent to schools in trouble, as I understand it, rather than a sub-contracted inspector, as in this case (I am sure someone will correct me if I am wrong).

2. If this inspector has become more or less reliable in terms of judgements over time, compared to the OFSTED guidelines and the opinions of inspection peers (I found many instances where this inspector was working alone, in small primary schools).

3. How inspection grades are defended internally by inspectors to one another. And if we don’t know this, then we have no idea how accountable inspectors are for their decisions.

This is why OFSTED needs to tell us more about how its moderation processes work, or if it has none, then simply to implement some as soon as possible. Otherwise wild vacillations and inconsistencies will continue to make parents, teachers and pupils very nervous indeed. If surgeons can publish their personal outcomes, then surely so can inspectors?

How *not* to approach your doctoral viva exam. Or indeed your research.

If there were a league table of cool places to do doctoral degrees, it would have to be topped by Finland. It’s the only place I know where you are given a top hat and a sword on graduation (see picture), which is rather like being transformed into an elegant academic Ninja. Anyway, today I thought it might be amusing to upload a spoof viva script I use when teaching on doctoral courses. For those who haven’t been subjected to this rite of passage, a viva is an oral exam where you have to spend a couple of hours (at least) being interrogated fiercely about your thesis by two or more seriously clever people. This script shows how to get almost everything wrong in the exam, and describes what might be the world’s worst doctoral project by the world’s most unaware student. Do not try this research project at home.

  1. How did you choose this topic of study?

I thought it would be easy to get a tobacco company to sponsor it. Plus I have just given up smoking myself.

  2. Tell us a bit about your study, could you explain the title in more detail?

I can’t remember the title, it’s so long. Hang on, I have got it written on a bit of paper here. “Puff the Magic Dragon: Adolescence and Smoking Culture in the Secondary School Environment”

  3. What is the gist of your study? What is your ‘thesis’?

That there is still a very entrenched smoking culture in secondary schools, despite recent legislation. I have developed an idea based on the work of Pierre Bourdieu, that in addition to social, cultural and intellectual capital, there is also a kind of ‘smoking’ capital that kids develop over time. They form a social identity as smokers, and base this on the smoking culture they see around them. If you like, this is almost a kind of smoking habitus, which is an all-encompassing identity as a smoker. I have linked this to Foucault’s work on the design of prisons and the relationship with control. I argue that schools are generally not very well designed, because pupils still find lots of places to hang out and smoke, which means the surveillance aspect of things isn’t working sufficiently well.

  4. What would you say is the main contribution from your study?

I have completely rethought Bourdieu’s theory on capital and shown that there are four kinds rather than three. I think this is so important internationally that I have started writing a journal paper for the Harvard Review of Education on this.

  5. What is NEW or novel about what you are saying?

Nobody has identified the existence of smoking capital or a smoker’s habitus before. If you are going to stop children smoking, then you have to nip this habitus formation in the bud, frankly, otherwise all the nicotine patches in the world won’t work.

  6. What methodologies did you follow and how did you prepare for doing fieldwork?

I have a friend who works in a secondary school in Harlow, so I carried my fieldwork out down there. I thought it was very important to do this discreetly, otherwise the school would try to hide evidence of pupils smoking, because of school policy. So I managed to get in one weekend to install Axia webcams in the key areas that my friend had previously identified as likely smoking venues. Then I monitored them remotely over the internet to track what sort of pupils used the smoking venues and how often.

  7. How did you think through or prepare for any ethical concerns you might have had about how you conducted the study? Did you not think of doing participatory research?

I did wonder about the ethical dimension of what I was doing, but I knew that an American researcher called Humphreys published a study entitled “Tearoom Trade” in 1970, which discussed his observation of homosexual men carrying out sex acts in public toilets. He managed to acquire really good data about sexually transmitted diseases which he was able to transfer to a later study, so it seemed to make sense to me to use a more modern version of the same methodology, and it seemed legitimate to base the fieldwork in school toilets and so on. I was also aware that Petticrew et al published a journal article in 2007 in BMC Public Health, about the smoking ban in Scotland, entitled “Covert observation in practice: lessons from the evaluation of the prohibition of smoking in public places in Scotland”. In the article they describe how complicated it actually is to remain incognito whilst carrying out covert research. That’s why I decided the webcams were vital for this project. Clearly if all these people are doing covert research, then it’s acceptable to do it if your reasons are genuine.

  8. What did you find out through your study?

I discovered that school toilets are only used for smoking during certain key periods of the day, such as during lessons, and that kids throw wet wodges of toilet paper at smoke detectors to make sure they won’t react to the smoke. During breaktimes and after school, pupils tend to smoke behind the school kitchens, and not the bike sheds as people popularly believe. Regarding the smoking habitus, it is more likely that girls take up smoking than boys, and this is generally because they are worried about putting on weight, and they think smoking will suppress their appetites. It is part of the habitus of being a modern young woman. Boys tend to do it to look cool and be part of the gang, on the other hand. I could hear all this over the webcams.

  9. Talk to me about the sorts of literature sources you read that led you or inspired you to study this topic.

Well, the main inspiration was probably the Humphreys ‘Tearoom Trade’ study. There was also an article in the American Journal of Public Health called “The power of policy: the relationship of smoking policy to adolescent smoking” by Pentz et al in 1989, that looked at school policy towards smoking and whether it was likely to decrease smoking amongst adolescents. There was also a really great article in Social Science and Medicine in 2004 by Aveyard et al, that explored the influence of school culture on smoking amongst pupils. These three articles were the main inspiration for me. I read some books as well, but there aren’t so many of those. I am hoping to write the first.

  10. How did you collect, organise and manage your data? Did you follow any documentation/data set management procedures? What is your evidence base?

I observed pupils over the Axia webcams whenever I had time, by logging in remotely. I made notes in a fieldwork diary while I was watching, and referred to these notes when I was writing up. Really I was looking for good examples of my theories about design and control, and habitus.

  11. How did you analyse your data?

As I said, I selected the most interesting examples that supported my theories, and then made sure they were prominent when I wrote up the dissertation. I was a bit disappointed because most of the time, pupils weren’t actually smoking in the toilets, so I had to make the most of the few times that they were.

  12. How did you get access to your subjects of study?

I was lucky to have a friend doing his teaching practice in the school, and he organized things so I could set the webcams up. He cares about pupil health as much as I do, and I was very grateful for his help.

  13. What would you do differently if you had to do this again?

I think it was a bit limiting doing it in only one school, so I would be inclined to find another school and use that as a kind of control group to show that what was happening in the first school wasn’t unusual in any way. It’s important to be scientific, even when you’re just looking at education.

  14. Where do you think your research leads? i.e., what next for you or your research agenda?

I want to disseminate the research as widely as possible, so I am planning to write a book about smoking in schools, and put some of the film clips up on my website. This is so school managers have an idea what is going on in schools today. Obviously I will anonymise the name of the school because it’s important to be ethical.

  15. What sort of supervisory support or guidance did you have – because frankly, it’s not looking too good for you just now…. :-)

I had a supervisor for the first month, but after that I found it very difficult to get on with him because he just wanted me to spend the whole time in the library. So I didn’t bother going to supervisions any more. He wanted to read the final dissertation but I didn’t see the point, because I don’t think he really understood habitus in the same way as me, which would make him biased when he was reading it. So I can proudly say this is all my own work.

Do sponsored primary academies improve faster than local authority primaries?

I remember the first time I heard the strongly asserted DfE claim that primary schools that become sponsored academies improve a lot faster than other types of school, such as local authority schools, and that this leads to impressive educational results. Wonderful! I thought. Just what we need! Magic formula! We will soon be a country of brain surgeons, engineers, philosophers and international diplomats! Our literary canon will swell, our educated voices will rise in song, and our British hearts will beat with pride at having discovered such a simple answer to all our educational woes. I therefore decided to carry out a very small investigation into whether this stands up as a claim. Here is how I went about it.

  1. I have taken three South Cambridgeshire primary schools with broadly similar intakes and structures. These are Fawcett Primary School[1], William Westley School and Stapleford Community School. These represent schools starting from a high base in terms of their improvement.
  2. I have taken the three worst performing schools in the UK according to a 2009 BBC article based on official data, but which have stayed in local authority control rather than becoming academies. These schools are Willows Primary School, Bankwood Community Primary, and Bysing Wood Community Primary. These represent schools starting from a very low base in terms of their improvement.
  3. I have taken four examples of primary schools that have become sponsored academies. They have been chosen to give a spread of dates in terms of when they became academies, with two changing status in 2012, when there was a large increase in the number of schools being required to do this. These schools are Usher Junior School (now Priory Witham Academy, since 2008), Ashburton (now Oasis Shirley Park, since 2009), Downhills (now Philip Harris Lane, since 2012) and Nightingale School, Wood Green (now Trinity Primary, since 2012). These schools represent a mix of both high and low bases in terms of their improvement. In other words, some schools were already improving quite rapidly before they became academies, whereas others weren’t.

Next I tracked each school’s KS2 SATS results in English and Maths[2] from 2008-2013[3]. I derived the data from both primary and secondary sources, as the official reporting method has changed twice during this period and it is extremely time-consuming to derive this information direct from the raw data. I am perfectly happy to be corrected on any of the raw data, by the way – just point out any errors via the comments below and I will upload a correction. However, I am reasonably confident the data are indicative of the overall picture in each case. Finally, I plotted the annual results for each school against the average for England overall. My findings were very interesting.

  1. National KS2 English and Maths results in England were relatively static between 2008 and 2013.
  2. Schools that underperform but which remain under local authority control tend to make rapid improvements once any underperformance is identified in the official data.
  3. Schools that underperform but which become sponsored primary academies tend to experience slower improvement once they become academies.
  4. Schools that become sponsored primary academies tend to experience a drop in results for 2-4 years before getting back to the original performance levels. The exception to this is Oasis Shirley Park, but this school experienced a particularly marked results ‘bounce’ after an initial steep drop from 2009-2012.

Therefore from this admittedly small, but carefully constructed sample, it looks as though underperforming schools remaining in local authority control are likely to experience faster improvement in KS2 results in English and Maths for many pupils, and avoid a 2-4 year dip in performance. Oh dear. Perhaps someone might like to check this against a larger dataset, just in case the sample really was too small to be helpful? After all, it would be very disappointing to find out all that public money was spent for no purpose at all.

Look below for my workbook, if you would like to see the data I was using or the chart I created to track improvement. As I say, it’s useful to know if there are any arithmetical corrections needed, so feel free to comment.
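
If anyone does want to rebuild the chart from the workbook, the whole exercise is only a few lines of pandas. The file name and column layout here are hypothetical stand-ins for however you export the data, and the 2010 carry-forward from footnote [3] is applied automatically.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical layout: one row per school per year, with a combined KS2
# English & Maths measure; the England average is included as a 'school'.
df = pd.read_csv("ks2_results_2008_2013.csv")  # columns: school, year, result

# Footnote [3]: results are blank for 2010 (NAHT boycott), so carry the
# previous year's figure forward within each school.
df = df.sort_values(["school", "year"])
df["result"] = df.groupby("school")["result"].ffill()

for school, grp in df.groupby("school"):
    style = "k--" if school == "England average" else "-"
    plt.plot(grp["year"], grp["result"], style, label=school)

plt.ylabel("KS2 English & Maths result")
plt.legend(fontsize=7)
plt.show()
```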

Footnotes

[1] In the Fawcett Primary School data, the 2011 data are anomalous due to an unusual cohort of children (small class, extremely high levels of ESOL, most children in cohort there for <18 months).

[2] KS2 English and Maths results represent one very crude measure of how a school is doing, which emphasises summative test performance, and the measure may be influenced by parents paying for private tutoring, over which schools have no control.

[3] In 2010 many schools did not submit results due to NAHT industrial action, so I have carried forward the previous year’s data in each case in order to populate the chart, as our main concern is with change over time.

 

Dear OFSTED, now about this Data Dashboard malarky …

Hello OFSTED,

Perhaps you can help me with a knotty little problem I am having this morning? I am trying to look at the comparative results of some local primary schools, over time. First of all, I wanted to see how they have been doing over the last ten years or so. For the period 2004-2010, this has been comparatively straightforward, as I can look up the English, Maths, and Science Key Stage 2 SATS results in each case, where they exist. If I ferret about on the BBC News website, I can easily find simple and easily readable data, derived from OFSTED’s own data, to tell me what sorts of pupils attend these schools. It’s then a simple matter to plot this onto a graph so I can map trends over time. (That’s not to say that I think SATS have ever been anything other than a blunt instrument in terms of assessing learning, but for the purposes of statistical comparisons, we have a fairly straightforward methodology there.)

It is the period 2011-2014 that has started causing me all the problems. We appear to have had what sociologists may provocatively call a ‘rupture’, as though the data’s entrails have emerged in a disorderly fashion. I am sitting here looking at the Data Dashboard, and I am seeing the national picture, but not the regional one, which is the first step in the data being decontextualised for me. I can get the regional data, but I have to know where to dig for it. The Data Dashboard tells me that the school is ‘compared to the national picture’ on the basis of Grammar/Punctuation/Spelling, Reading, Writing and Mathematics. My eye is then drawn to the series of little boxes labelled ‘quintiles’. These are particularly baffling, as the quintiles are based on what are called ‘similar schools’. If I click on the list of schools associated with one of these primaries, in each case there is a massive list of institutions that vary significantly according to characteristics like the following, all of which may impact on school processes and outcomes, especially when they are combined:

  • School size (the larger the school, the more valid the sample)
  • Number of pupils eligible for Free School Meals/Pupil Premium/Ever 6 (social deprivation is linked to pupil under-attainment in certain circumstances, making it difficult to measure the exact impact of an individual school, especially when deprivation is linked to pupil mobility, as it often is)
  • Pupil mobility (unless we know how long a pupil has been in a school, we cannot assess the impact of the school – we may be assessing the impact of the previous school or even an education system in a different country altogether)
  • English as an Additional Language (once again, unless we have an idea about children’s prior level of English, as well as how long they have been in a particular school, we cannot usefully determine the impact a particular school has had on their reading, writing, spelling, punctuation and grammar in English).
  • Attainment of pupils prior to joining the school (see my points about deprivation and pupil mobility, above)
  • Number of pupils with special educational needs (in the present system, this is often defined in terms of chronology, i.e. development being behind peers, so once again we need a nuanced method to establish school impact, otherwise smaller schools with many children who have developmental delays will look as though they are underperforming compared to larger schools with few children who happen to be developmentally delayed).

Having established that a different definition of ‘context’ is being used to determine ‘similar schools’, I looked up OFSTED’s official documentation in order to establish exactly how these similarities were calculated. I looked here:

School data dashboard guidance http://dashboard.ofsted.gov.uk/sdd_guidance.pdf

and I also looked at the technical guidance: http://dashboard.ofsted.gov.uk/news.php

However I have some questions about the way you are calculating these ‘similarities’.

1. When you say ‘average’, do you mean the mode (the most common outcome), the median (the midpoint of the pupil results range) or the arithmetical mean (adding up all the results and dividing them by the number of results)? These may tell us starkly different things about the way the Reading, Writing, Maths and Spelling/Punctuation/Grammar tests are formulated, and what type of results pupils in particular schools attain. To be honest, OFSTED, I am not even sure if you take your calculations down to pupil level or whether you just take the arithmetical mean for the whole cohort and compare it to a crude arithmetical mean for the whole country (which would be fairly meaningless statistically and educationally, so I hope that’s not your approach). Which brings me to my next point.

2. Do you remove outliers from your calculations? Clearly the results of smaller schools are likely to vary more from year to year, and pupil mobility and the development of local housing will be significant factors here. If you don’t remove isolated results at the extremities, you are not really getting a true picture of the impact of a school’s teaching on a cohort.
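
To make questions 1 and 2 concrete, here is a toy cohort of eleven invented scores with one anomalous result. The three kinds of ‘average’ already disagree, and removing the single outlier moves the mean markedly while leaving the median and mode almost untouched.

```python
from statistics import mean, median, mode

# Eleven invented pupil scores; the 4 is a single anomalous result.
cohort = [28, 29, 29, 30, 30, 30, 31, 31, 32, 33, 4]
print(mode(cohort), median(cohort), round(mean(cohort), 1))     # 30, 30, 27.9

# Drop the outlier: the mean jumps, the median and mode barely move.
trimmed = [score for score in cohort if score != 4]
print(mode(trimmed), median(trimmed), round(mean(trimmed), 1))  # 30, 30, 30.3
```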

3. Finally, is it true that what you are doing here is taking past performance based on some sort of average, and then projecting it forwards on the assumption that this is a stable measure (sometimes called a ‘predict and control’ model)? And then linking up the ‘averages’ (however these are calculated, see questions 1 and 2 above) to create these groups you term ‘similar schools’? This is certainly what you seem to be saying in your guidance. If so, that makes me a bit worried.
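
For what it’s worth, here is what your guidance seems to describe, as far as I can reconstruct it: take one ‘average’ figure per school, rank the schools on it, and cut the ranking into five equal groups. The school names and scores below are invented, and this is my reading of the documents rather than your published algorithm.

```python
import pandas as pd

# Invented 'average' scores for ten schools; this deliberately ignores all
# the contextual variables listed earlier (size, FSM, mobility, EAL, SEN...).
schools = pd.DataFrame({
    "school": [f"School {c}" for c in "ABCDEFGHIJ"],
    "avg_score": [78, 81, 64, 90, 72, 85, 69, 74, 88, 60],
})

# Rank on the single figure and cut into quintiles (quintile 1 = top fifth).
schools["quintile"] = pd.qcut(schools["avg_score"], 5, labels=[5, 4, 3, 2, 1])
print(schools.sort_values("avg_score", ascending=False))
```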

If you look in my book ‘Teachers Under Siege’, on pages 77-78, you will see why blindly modelling forwards like this is a bad idea. I give the example of UK birth rates between 1951 and 2001. If we plotted these on a graph, we would see an overall downward trend over this period. However, if we went back in time to 1963/64, we would see an ongoing and quite dramatic increase that we might assume would continue indefinitely, possibly eventually resulting in 1 million births a year. With the benefit of hindsight we know that the birth rate actually started to fall as dramatically as it initially rose, resulting in falling school rolls and school closures later on.

Another example people often use of why blind modelling on limited variables fails is the oil crisis of the 1970s. Many oil companies assumed continued exponential growth and ordered new tankers and plant accordingly. Shell, on the other hand, asked itself the question, “What do we do if it doesn’t continue to grow?” and positioned itself more intelligently within the market. It got to eat all the proverbial pies while other companies were left with oil tankers and plant they couldn’t use.

Now clearly school and pupil attainment are a different kettle of fish. First of all, it is very difficult to quantify the impact of schooling precisely, particularly amongst 11 year olds of varied backgrounds. This is why I said earlier that SATS were something of a blunt instrument. We are not counting the numbers of births or barrels of oil here. Also, drops in birth rate are not an indication of failure amongst the childbearing population, just as politically-driven drops in oil production and distribution in 1973 did not automatically mean that oil executives had failed. However, even if we take test results at face value, the modelling you are using still looks odd. Why would it be helpful to group schools together on the basis of their ‘average’ results without taking into account any other variables? If you really think this is worthwhile, OFSTED, then you need to make your methodology and justification a lot clearer than they are here, I would suggest. Otherwise it is difficult for us to have confidence in your processes and outcomes.
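
The birth-rate trap is easy to demonstrate in a few lines of code. The series below is invented, shaped only loosely like the 1951-1971 pattern described above (a steady rise to 1963/64, then a fall); the point is what happens when you fit a trend to the rising years and project it forward.

```python
import numpy as np

# Invented annual figures (thousands) with a turning point; not real data.
years = np.arange(1951, 1972)
actual = np.concatenate([np.linspace(800, 1000, 14),   # rising to 1963/64
                         np.linspace(1000, 850, 7)])   # then falling

# 'Predict and control': fit a line to the rising years, project it onward.
slope, intercept = np.polyfit(years[:14], actual[:14], 1)
projected = slope * years + intercept

for y, a, p in zip(years[14:], actual[14:], projected[14:]):
    print(f"{y}: actual {a:.0f}k vs projected {p:.0f}k")
```

The projection sails confidently upwards while the actual series turns down, which is exactly the failure mode described above.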

Now OFSTED, I want you to feel free to comment below on this. Many of us are genuinely perplexed by the Data Dashboard and would welcome clarification.

With best wishes,

Dr Leaton Gray

[Image Courtesy of Stuart Miles, Free Digital Photos]

Excellent or going wrong?

sandraleatongray:

You make some valid and important points, particularly about the extra-curricular aspects of a good school-based education.

Originally posted on Mr Brown says:

I have been looking at two posts on the characteristics of an excellent school, or a school that is going wrong. It is not a definitive checklist for good or bad practice, but it is pretty hard to argue with the characteristic placements.

I did think there were a couple of notable exceptions. For example, nothing was made of intervention. A good programme of targeted intervention can promote rapid progress in certain students. One pupil I teach would be dallying through the curriculum, missing lessons and falling behind, were it not for the excellent intervention sessions set up. However, some intervention sessions can just rob Peter to pay Paul. A pupil needs catch-up reading during maths lessons, which causes him to require numeracy catch-up later in the term, so he is removed from PE (his favourite subject). The pupil gets annoyed because he misses PE, his self-esteem is lowered because people keep…


When education is going wrong

Bentham’s Panopticon – prisoners can be watched from the watchtower at any time of the day or night, thanks to permanent illumination.

The flip side to my previous post about what excellent education looks like is what poor education looks like. Again, this is all open to debate, but my list is aimed at making a start. Even some schools generally regarded as successful may want to think about their professional practice if they see any of these negative characteristics on their own turf.

  • It is hard for others to work out exactly what is going on in a teacher’s classroom.
  • There are high rates of casual and long term teacher absence, and high use of supply teachers and trainee teachers.
  • There is a blame culture within the school, with teachers excusing poor attainment on the grounds of children’s social disadvantage, management shortcomings, or children’s resistance to schooling.
  • The same children tend to answer questions and participate in lessons, while others remain quiet and sometimes disengaged.
  • Children in the top ability groups are regularly used to coach other children.
  • Children in the bottom ability groups are confined to separate tables and effectively taught by (usually unqualified) Teaching Assistants.
  • Teachers review children’s progress only in response to Government policy.
  • Teachers insist on high levels of pupil conformity in terms of humour, interests, and attitudes towards society.
  • Staff morale is low and teachers do not socialize with each other outside school.
  • Teachers and children feel they need to behave in a physically rigid, controlled manner in school, for example always facing forwards and looking attentive.
  • Teachers are not sure what children know, and where there are gaps.
  • Access to books and educational resources is controlled and rationed.
  • Children and teachers spend a lot of time discussing discipline.
  • The methods used when teaching do not fit the task, for example using direct instruction at the wrong time, or too much unfocused small group discussion that is off task.
  • Lessons drift away at the end, without any summary.
  • Learning is focused on achieving Government targets.
  • Higher status is theoretically given to mathematics, the sciences and English, but children are not taught by teachers properly trained in these subject areas.
  • In school, the emphasis is on getting through the curriculum rather than developing knowledge and expertise, and developing an intellectual life.
  • Children and parents are reluctant to come into school, speak to teachers, and support school events.
  • When parents come into school, they are spoken to as pupils and required to sit on small chairs.
  • Discipline is obvious, sometimes loud and varies in its application.
  • Former pupils do not remain involved with the school.

What does excellent education look like?

In a school governors’ meeting recently, I was speaking my regular motherhood-and-apple-pie piece about the need for excellent education, and about nothing being too good for the kids in our care, and quite rightly one of the other governors asked me the $64,000 question that I had repeatedly sidestepped in previous meetings.

“Sandy, can you tell us exactly what excellent education is? And how will we know it when we see it?”

That put me on the spot. I am aware that very many people have tried to define excellent education, and that there is great variation in the priorities different people have in seeking to ensure it happens, which makes describing it rather daunting. However, here I have decided to lay out what I think the process looks like when it is happening, and what prospective parents and teachers might want to look for if they visit a school. Feel free to comment if you want to debate it; I am open to persuasion and argument.

  • Teaching is well organized and teachers have well-established, consistent routines easily understood by other teachers, children and parents.
  • Teacher absence rates are low, and there is little use of supply teachers.
  • Teaching is personalized and properly differentiated. Teachers are aware of what their pupils know, and don’t know.
  • All children are routinely encouraged to answer questions and participate in lessons.
  • Children in the top ability groups are given enrichment tasks and further study opportunities when they have finished their work, rather than being used to coach other children.
  • Children in the bottom ability groups have plenty of contact with the most experienced teachers in the school, and are not confined to separate tables and effectively taught by Teaching Assistants.
  • Teachers review children’s progress frequently, and communicate this to children, teachers and parents.
  • Teachers understand the context of their children’s lives outside school.
  • People associated with the school like each other and are happy working together.
  • Teachers and children feel they can express their own personalities at school.
  • Children have no gaps in their knowledge. If a child misses something at school because of illness or other absence, the teacher advises the parents and helps the child fill the gap.
  • Children have access to good books and educational resources, and willingly take advantage of what is on offer.
  • Children and teachers spend a lot of time discussing teaching, learning and knowledge, to mutual advantage.
  • Learning involves a mix of methods, appropriate to particular tasks. These can include direct instruction, small group discussion and collaboration, self-study, and plenary sessions.
  • Lessons are summarized at the end, usually through group discussion.
  • Learning is linked with the local area and the outside world.
  • Equal status is given to mathematics, the sciences, the arts and the humanities and children are taught by teachers properly trained in these subject areas.
  • Children have opportunities to extend their personal knowledge and interests through independent or guided study.
  • Children and parents enjoy coming into the school, speaking to teachers, and supporting school events.
  • Discipline is quiet and consistent.
  • Former pupils are happy to return to the school and support it.