Education evolving

Jeremy Chen

I recently read an article titled "Why We Must Recreate the Wheel" that was very appropriately filed under the category of Teacher Leadership. It advocates taking time to review and re-engineer teaching material, so as to revisit the intent behind the original design and breathe new life into educational content. It is an excellent read and I strongly recommend it.

The purpose of this article is to recommend that this activity be carried out at a national scale, so that education in Singapore may evolve to greater heights.

The upshot is clear: as the state of the art in science and industry improves, and as it becomes appropriate to bring those advances into the classroom, such a nationwide curriculum re-engineering framework would see teachers automatically making efforts to incorporate the new knowledge.

These teachers will, most likely, have varying fields of expertise and varying teaching duties, so multiple perspectives are likely to be tried automatically. (Furthermore, no one-off, limited-scope efforts to assess the value of such knowledge would have to be made.) This means that, as with evolution, there will be a measure of robustness and tactical multiplicity in how such knowledge is incorporated into the curriculum.

The proposal

What I advocate is that, every year, each teacher under the Ministry of Education be required to re-engineer or create at least one piece of coursework, including notes on the underlying pedagogical intent. Underlying this is the simple belief that teachers should be capable of producing teaching material. To make time for this, slight reductions in non-educational duties will have to be made; this will have to be a priority. Naturally, quality control must be in place, so such work should be certified for quality by the author's head of department (HODs should be allowed to self-certify).

To impress on teachers the importance of this exercise, it should be a major factor in the variable component of salary, perhaps enabling teachers recognized for strong contributions to innovation to double their annual salary. While not excessive, this is sufficiently substantial to give teachers an incentive to take the exercise considerably more seriously than NSFs take the "mandatory" annual "three suggestions" exercise.

One implementation might be as follows. Teaching innovations (proposed over one's career) that have been well used over the past few years (say, two) would count towards the variable salary component. How much they contribute depends, of course, on the quality of the innovation, as assessed first by blind peer review (by fellow teachers randomly selected from other schools) and later by usage. After blind peer review, coursework proposals would be released in a knowledge repository that all teachers can peruse. (Blind peer review refers to peer review in which the name of the contributor of a coursework proposal is masked from evaluators.)

Testing things out

To enable teachers to evaluate new teaching methods, they should be given flexibility in how they teach their classes. One way is to allow them to vary what is taught in each class up to a limit (say, up to 10% of teaching time). Another way, which I favor, is to trim the syllabus by 10% and offer that 10% of time to teachers for such experimentation.

Coursework proposals evaluated by usage should be rated on multiple dimensions of effectiveness. I will not propose a set of these here because I am not knowledgeable in pedagogy or instructional design. But I will add that proposals should also be rateable on the basis of "perceived quality" by those who have not tested them, or do not intend to test them, themselves.

Sifting through the mass of proposals

In order to sift through the huge mass of contributed coursework proposals for potential things to try out, an information system should back this up. Besides the ratings implied by the above discussion, another important tool will be textual search backed by a system of tagging. Authors, evaluators and readers/users should be able to tag proposals both with an appropriately designed taxonomy and with free-form tags. Readers should be able to vote tags up or down to indicate their appropriateness and enhance searchability. This enhancement would enable the creation of an effective "newsfeed" that pushes appropriate new releases to each teacher.
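To make the tagging and voting mechanism concrete, here is a minimal sketch, in Python, of how tag votes might drive search ranking. The class, function and tag names (Proposal, add_tag, vote, search, "primary-math") are illustrative assumptions rather than a reference design; a real system would also need persistence, access control and full-text search.

    # A minimal sketch of tag voting and tag-based search ranking.
    # All names here are illustrative, not a reference design.
    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class Proposal:
        title: str
        # Net votes per tag; taxonomy tags and free-form tags are treated alike.
        tag_votes: dict = field(default_factory=lambda: defaultdict(int))

        def add_tag(self, tag: str) -> None:
            # Registering a tag starts it at a neutral score of zero.
            self.tag_votes.setdefault(tag, 0)

        def vote(self, tag: str, up: bool) -> None:
            # Readers vote a tag up or down to signal how well it fits.
            self.tag_votes[tag] += 1 if up else -1

    def search(proposals, query_tag):
        # Return proposals carrying the tag, best-fitting tag first.
        hits = [p for p in proposals if query_tag in p.tag_votes]
        return sorted(hits, key=lambda p: p.tag_votes[query_tag], reverse=True)

    # Example: two proposals tagged identically, with different vote outcomes.
    p1 = Proposal("Fractions via paper folding")
    p1.add_tag("primary-math")
    p1.vote("primary-math", up=True)

    p2 = Proposal("Algebra tiles")
    p2.add_tag("primary-math")
    p2.vote("primary-math", up=False)  # a reader finds the tag inappropriate

    for p in search([p1, p2], "primary-math"):
        print(p.title, p.tag_votes["primary-math"])

The same net-vote scores could feed the "newsfeed": a new release would be pushed to a teacher when its well-voted tags overlap with tags that teacher has previously searched for or rated highly.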

Returning to the implied ratings, it is clear that there are intricacies in making sense of the evaluation results. Firstly, the effectiveness of a proposal will be, as previously mentioned, multi-faceted. Secondly, there is the question of whether a proposal with a "poor average" from few samples is better than one with a "so-so average" from many samples. The former may be a good proposal that got poor results through simple bad luck, while for the latter we can be much more confident that it is merely mediocre. Knowing that position in search results affects how likely a proposal is to be tried, how do we determine which proposals to place higher?

(This paragraph alludes to somewhat technical content that is important for implementation but may be skipped by the lay reader.) On the second issue, machine learning provides a way forward; the interested reader is directed to a well-known paper on how to simultaneously "learn and earn" by balancing "exploration" and "exploitation" (http://link.springer.com/article/10.1023/A:1013689704352), specifically the material on "upper confidence bounds". (This broad area of online learning is actually a personal research interest of mine.) The first issue may be dealt with in various ways that are well accepted in practice but somewhat unsatisfactory theoretically. One that I am less uncomfortable with is to elicit weights for the various dimensions of evaluation and then combine the dimensions in a weighted average. Yet it is unclear what exactly we are measuring if, even after solving the second problem, we cannot rigorously articulate what the numbers we average represent. On the other hand, one might argue that we should not get too caught up in tangential technical matters and lose sight of the big picture: we want to support the evolution of teaching methodology, and we should go with something that works, regardless of mathematical prettiness.
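To make the upper-confidence-bound idea concrete, here is a minimal sketch in the spirit of the UCB1 algorithm from the paper linked above, combined with the weighted-average treatment of the multiple rating dimensions. The dimension names, weights and ratings are hypothetical, and ratings are assumed to lie in [0, 1]; this is a sketch of the technique, not a proposed production ranking formula.

    # UCB1-style ranking (after Auer, Cesa-Bianchi & Fischer, 2002),
    # adapted to ordering coursework proposals in search results.
    # Dimension names, weights and ratings below are hypothetical.
    import math

    def weighted_mean(ratings, weights):
        # Collapse one multi-dimensional rating into a scalar in [0, 1]
        # using elicited weights (the "first issue" above).
        total = sum(weights.values())
        return sum(weights[d] * r for d, r in ratings.items()) / total

    def ucb1_score(scalar_ratings, total_trials):
        # Mean observed effectiveness plus an exploration bonus (the
        # "second issue"): a proposal tried only a few times gets a large
        # bonus, so a poor average from few samples can still outrank a
        # so-so average from many samples until more evidence accumulates.
        n = len(scalar_ratings)
        if n == 0:
            return float("inf")  # never-tried proposals are surfaced first
        mean = sum(scalar_ratings) / n
        return mean + math.sqrt(2.0 * math.log(total_trials) / n)

    # Example: proposal A was tried once, proposal B forty times.
    weights = {"engagement": 2.0, "mastery": 3.0}
    ratings_a = [weighted_mean({"engagement": 0.4, "mastery": 0.5}, weights)]
    ratings_b = [weighted_mean({"engagement": 0.6, "mastery": 0.55}, weights)] * 40

    total = len(ratings_a) + len(ratings_b)
    print("A:", ucb1_score(ratings_a, total))  # large bonus: worth another look
    print("B:", ucb1_score(ratings_b, total))  # small bonus: already well-estimated

Under this scoring, A (a 0.46 average from one trial) temporarily outranks B (a 0.57 average from forty trials); as A accumulates trials its bonus shrinks, and the ranking converges to the true averages.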

Conclusion

It is natural that the above provides machinery for determining which proposals might be tried out in small-scale trials (possibly leading to national implementation), but I take that as a given. I am personally more interested in the prospect of waves of innovation that emerge organically. These might arise when some teachers get excited about an area and their work excites others, who also start working on the area in earnest. Ideas are challenged and refined, and so the curriculum evolves as multiple lines of inquiry are pursued simultaneously.

In closing, let me reiterate that the body of teachers should play a major role in curriculum development. I believe that the above proposals, or similar ideas, can provide the needed impetus. Just as the world changes rapidly, so too should the curriculum evolve. If we are to maintain and extend our crucial human capital advantage (our most important economic advantage), then our education system must advance more effectively than those of other nations. Effective and organic curricular evolution is one way to support this.

Jeremy Chen is pursuing his PhD in Decision Science at the NUS and is a member of the SDP’s housing policy panel.
