One of the main activities of The SEACEN Centre is capacity building, so understanding the effectiveness of our training courses is of vital importance. We want to know what works and what does not, and how we can improve our training material and teaching modalities to best serve the needs of our member central banks. The Macroeconomic and Monetary Policy Management (MMPM) group at The SEACEN Centre started trialling an Entry-Exit Knowledge Exercise (EKE) in 2020 to enhance our understanding of the effectiveness of our teaching. More specifically, we used the EKE in four different courses, ranging from forecasting to capital flows management. At the beginning of each course, participants were given a set of around 15 questions related to the issues and topics that would be covered in the course. The questions are either multiple choice with five answers, one of them being ‘I don’t know’, or simple True/False questions. In many instances, they are quite specific and technical in nature. The participants were then given the same questions at the end of the course. We collected the data from these four courses and analysed them: we matched the answers of individual participants (without identifying them) and kept only the results of individuals who completed both the entry and the exit knowledge tests.
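To make this matching step concrete, here is a minimal sketch in Python of how pre/post scores can be paired up; the data layout and the column names (participant_id, stage, score) are hypothetical rather than the actual EKE records, and the values loosely echo points A and B discussed below.

```python
import pandas as pd

# Hypothetical long-format data: one row per participant per test.
# Column names and values are illustrative only.
df = pd.DataFrame({
    "participant_id": ["p01", "p01", "p02", "p03", "p03"],
    "stage": ["entry", "exit", "entry", "entry", "exit"],
    "score": [35.0, 35.0, 40.0, 6.3, 50.0],
})

# Pivot to one row per participant with 'entry' and 'exit' columns,
# then keep only those who completed both tests.
wide = df.pivot(index="participant_id", columns="stage", values="score")
matched = wide.dropna(subset=["entry", "exit"])

print(matched)
# p02 is dropped: there is no exit-test record to match against.
```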
Before we get to the matched results, it is worth noting that each test is informative on its own, and this is particularly true of the entry knowledge test. Its results allow us to identify challenging issues early on during the training event. Instructors can then devote more time to the areas with weak scores and shift attention to more advanced topics in the areas that scored well.
In Figure 1 below we plot the pre- and post-course evaluation scores of all the matched individuals who participated in both tests. Each dot on the figure represents one individual, and the 45-degree line (in yellow) is added to ease interpretation. An individual who scored the same percentage of correct answers in both tests would sit on the 45-degree line. Point A represents such an individual: she scored 35 per cent in both the pre- and post-course tests. Any observation above the 45-degree line indicates an improvement from the pre- to the post-course test. Point B provides such an example: the person represented by point B scored 6.3 per cent in the pre-course test but improved to 50 per cent by the end of the course. Most of the observations are above the 45-degree line, showing an improvement for a large majority of the participants.
Nevertheless, there are some individuals, although their share is small, whose scores went down (the observations below the 45-degree line). This could happen for several reasons, the most important being that the delivery of the training material, or perhaps the material itself, did not hit its mark. Another is that the question itself was badly phrased. Although the share of such individuals is small, we take these observations very seriously: after the course, the instructor can revisit the delivery or the course material and make amendments. We note that individual comments on lectures, which we also collect, often provide useful pointers on what can be improved. The overall results are very positive, though. The red line shows a simple estimated regression of the post-course score on the pre-course score. Its slope is positive, meaning participants who entered with higher scores also tended to finish with higher scores; and since most observations lie above the 45-degree line, post-course scores are higher than pre-course scores on average.
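A figure like this is straightforward to reproduce. The sketch below continues from the hypothetical matched table above, plotting the scores against the 45-degree line and adding a simple least-squares fit via numpy.polyfit; the actual estimation behind Figure 1 may of course differ.

```python
import numpy as np
import matplotlib.pyplot as plt

pre = matched["entry"].to_numpy()
post = matched["exit"].to_numpy()

fig, ax = plt.subplots()
ax.scatter(pre, post, label="Participants")

# 45-degree line: a point on it scored the same in both tests.
ax.plot([0, 100], [0, 100], color="gold", label="45-degree line")

# Simple least-squares fit of post-course on pre-course scores.
slope, intercept = np.polyfit(pre, post, deg=1)
xs = np.linspace(pre.min(), pre.max(), 50)
ax.plot(xs, intercept + slope * xs, color="red", label="Fitted line")

ax.set_xlabel("Pre-course score (%)")
ax.set_ylabel("Post-course score (%)")
ax.legend()
plt.show()
```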
Figure 2 shows the same observations in a different way: the y-axis is the improvement in an individual’s score from the pre-course test to the post-course test, and the x-axis is the same individual’s pre-course score. The negative estimated trend line (in red) suggests that individuals with the lowest pre-course scores recorded, on average, the largest improvements.
Point A, for example, shows an individual who scored 31.3 per cent in the pre-course test. This person improved her score by 44 percentage points, to 75.3 per cent, in the post-course test. People who already scored high in the pre-course test naturally have much less room to improve further. The extreme example is point B: an individual who scored 100 per cent in the pre-course test and obviously could not improve at all (thus recording 0 on the y-axis). Overall, this graph shows that the course added the most value for participants who started with the least knowledge. That does not mean, however, that people who came in knowing a lot did not improve at all.
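The transformation behind Figure 2 is only a change of axes. Continuing the sketch above (again an illustration, not the actual analysis):

```python
# Improvement is the difference between the two matched scores.
improvement = post - pre

# A negative slope here corresponds to the red trend line in
# Figure 2: lower pre-course scores go with larger gains.
slope, intercept = np.polyfit(pre, improvement, deg=1)
print(f"fitted slope: {slope:.2f}")

# Note the mechanical ceiling: a participant at 100 per cent in the
# pre-course test (point B) can only record an improvement of zero.
```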
In Figures 3 and 4, we show, for the test questions, the same relationships that Figures 1 and 2 showed for individual respondents. In Figure 3, we plot the proportion of correct answers to each question in the pre- and post-course tests. The idea here is to see whether the questions (and, implicitly, the topics) became clearer during the course, so that the share of correct answers to the same question increased. If performance on a particular question remained the same, its data point would sit on the 45-degree (yellow) line. But we observe that most of the observations (54 out of 61) lie above, often well above, the 45-degree line, indicating a marked increase in knowledge.
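At the question level, the computation reduces to the share of correct answers per question before and after the course. A minimal sketch with made-up answer data (the question, stage, and correct columns are hypothetical):

```python
# Hypothetical long-format answers: one row per participant,
# question and stage, with correctness coded as 0/1.
answers = pd.DataFrame({
    "question": ["q1", "q1", "q1", "q1", "q2", "q2", "q2", "q2"],
    "stage":    ["entry", "entry", "exit", "exit"] * 2,
    "correct":  [0, 1, 1, 1, 0, 0, 1, 0],
})

# Share of correct answers per question, before and after the course.
by_question = (answers
               .groupby(["question", "stage"])["correct"]
               .mean()
               .unstack("stage"))

# Questions above the 45-degree line: answered better after the course.
improved = (by_question["exit"] > by_question["entry"]).sum()
print(f"{improved} of {len(by_question)} questions improved")
```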
Figure 4 shows that the questions or topics that proved most difficult, as measured by low pre-course scores, saw the largest improvement by the end of the course. As in Figure 2, we can put some numbers on this relationship: a question that is 1 percentage point more difficult at entry (that is, scores 1 point lower in the pre-course test) ends the course with an average score only about 0.25 percentage points lower. Put differently, roughly three-quarters of an initial knowledge gap is closed by the end of the course.
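To illustrate what a magnitude of this size implies, here is a back-of-the-envelope reading (an illustration of the stated 0.25 figure, not actual regression output):

```python
# Two hypothetical questions, one 20 points harder at entry.
pre_easy, pre_hard = 70.0, 50.0

# If each extra point of entry difficulty costs about 0.25 points
# at exit, the 20-point entry gap shrinks to roughly 5 points:
exit_gap = 0.25 * (pre_easy - pre_hard)
closed = (pre_easy - pre_hard) - exit_gap
print(exit_gap, closed)  # 5.0 15.0 -> about three-quarters closed
```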
In summary, at The SEACEN Centre we take the evaluation of our courses seriously. We are happy with, and encouraged by, the positive results so far, although there is always room for improvement. For the time being, the results presented in this blog come from a small sample, but we will report back from time to time as the sample grows. Nevertheless, we believe that the information summarised here is useful not only for us but also for our stakeholders. As the EKE becomes part of our regular analysis going forward, we will accumulate more information that can be analysed in depth to improve our training courses.