Programming: Teachers and Pedagogical Content Knowledge in the Netherlands



Introduction
The goal of this study is to measure Dutch teachers' Pedagogical Content Knowledge (PCK) in Informatics education using an instrument developed for this purpose. The reason to undertake such a study is the concern about the quality of Informatics education in secondary schools (Van Diepen et al., 2011). In order to assess the current Dutch situation, we choose to analyse Dutch teachers' PCK. PCK is the expertise that allows teachers to present the subject to their students in an effective way (Shulman, 1986). It can be seen as a special combination of content knowledge (CK) and pedagogical knowledge (PK), which grows with the years of teaching experience. The instrument used, the OTPA (Online Teachers' PCK Analyser) reported in Appendix A, is an adaptation of the research instrument CoRe, used to portray the PCK of chemistry education (Loughran et al., 2001) and programming education (Saeli et al., 2011a).
The teachers' answers to the online questionnaire are then compared with the standard PCK of programming portrayed in a previous study (Table 1, left column), which is fully available in the technical report (Saeli et al., 2010) and briefly represented in Appendix B. Furthermore, possible relations between teachers' PCK and the textbooks they use are studied by comparing teachers' results with those of another study (Table 1, right column), in which aspects of PCK in Dutch textbooks were analysed (Saeli et al., 2011b). Table 1 provides a scheme representing the relations between the CoRe instrument, the definition of PCK and the construction of the OTPA, as well as the relation between the results of this study and those of two previous ones; the terms used are CoRe (Content Representation), OTPA (Online Teachers' PCK Analyser), PCK (Pedagogical Content Knowledge) and PTA (PCK Textbook Analyser). These two studies will later be referred to as the 'international study' (Saeli et al., 2011a) and the 'textbook analysis study' (Saeli et al., 2011b). The final goal of the study described in this paper is to understand whether teachers' disciplinary backgrounds (e.g., literature, mathematics, arts, etc.) are related to teachers' PCK.

PCK Development
Having a good PCK means that teachers have several representations of the most commonly taught topics within a certain subject. The more representations teachers have at their disposal and the better they recognise learning difficulties, the more effectively they can deploy their PCK (Van Driel et al., 1998). This implies that PCK is knowledge that grows with years of teaching experience and can be nearly absent at the beginning of the teaching career. In fact, research shows how novice teachers' PCK was inadequate to support teaching in field experiences (Rovegno, 1992). This does not mean that novice teachers cannot teach, but that they might not have an 'armamentarium of representations' at their disposal (Shulman, 2012). However, teacher training provides a framework on which novice teachers can build their PCK (Grossman, 1990).
A different scenario is presented when teachers with several years of teaching experience are teaching a subject outside of their certification. One study (Sanders, 1993) shows that teachers, when teaching a topic outside their science specialty, sometimes acted like novices (e.g., difficulties in answering students' questions; determining the depth and the extent of the topic to present to the students) and sometimes acted as experts. The conclusion is that PCK is knowledge that is transferable, although not completely. It seems that experienced teachers with strong PCK can reuse their knowledge to teach subjects outside of their certification. Their PCK helps them to recognise the need to transform their knowledge for the students, even though there might be difficulty in determining how much to present at a given time and how to sequence their presentations. Through their PCK they can recognise the need to deal with students' input and try to determine students' background knowledge.

Measuring PCK
Educators and researchers have developed several techniques and methods to study PCK (An et al., 2004; Carpenter et al., 1988; Kromrey and Renfrow, 1991; Carlson, 1990; Rohaan et al., 2009). Baxter and Lederman (1999) give a description of the most general techniques that are used and their criticisms. They organise the different methods into three groups: convergent and inferential techniques; concept mapping, card sorts and pictorial representations; and multi-method evaluation. Convergent and inferential techniques include Likert self-report scales, multiple-choice items and short answer formats. These techniques seem to be an economical means of improving general teacher tests, but it is unclear whether these tests are actually tapping into new domains of knowledge. The assessment and measurement of PCK concerns the study of a teacher's ability to deal with the unusual, non-generalisable aspects of teaching. Accordingly, these techniques seem to be inadequate because they are too restrictive. Concept mapping, card sorts and pictorial representations are tools that have been largely used to study teachers' knowledge and beliefs, and to measure short-term changes. These tools are not suitable to study the persistence of changes and (therefore) they have little value in understanding the development and change of a teacher's PCK (where PCK involves changes that take place throughout the years). Multi-method evaluations include a variety of techniques to collect data, such as interviews, concept maps, and video-prompted recall (Magnusson et al., 1999). Studies conducted with multi-method evaluations are effective in assessing PCK, but they are time and energy consuming. For certain studies (Hashweh, 1987) difficulties exist concerning the feasibility of replicating the measurements. In some cases there is the need to make difficult decisions as to which data sources are needed to build a global profile of PCK. The description of the multi-method evaluations suggests that the assessment of PCK is neither simple nor obvious.
Methods and instruments to measure and assess PCK are being studied and experimented with. The most common trend is to rely on qualitative approaches. However, these produce results that do not allow one to generalise concepts about teaching and PCK, because they often consist of case studies. Quantitative approaches seem to have been rarely adopted, and their results give a partial view of a teacher's PCK. Both methods are effort and time consuming. The qualitative methodology requires time for the data analysis (e.g., interview transcripts), whereas the quantitative method requires time for the development of the research instruments (e.g., the design of adequate multiple-choice items).
In this study we use an instrument that has both qualitative and quantitative aspects because we think that a combination of the two techniques will lead to a stronger measurement.

PCK of Programming
The results of the first effort to portray the PCK of programming are reported in the international study (Saeli et al., 2010, 2011a). These results were obtained by using the research instrument CoRe (Content Representation), which has already been successfully used by Australian researchers in chemistry education (Loughran et al., 2001). PCK has been defined as "the armamentarium of representations that teachers need to have at their disposal when teaching a certain subject" (Shulman, 1987). To initialise the process of creating such a set, a total of 31 experienced teachers and teacher trainers were asked to take part in semi-structured group interviews, organised in the format of workshops. These interviews lasted roughly two hours, involving around five teachers at a time. Each interview was divided into two parts. In the first part, teachers had to individually list what, in their opinion, the "Big Ideas" of programming are (the CK component). Big Ideas are those concepts within a subject which, according to well-educated and experienced teachers, are at the heart of the learning process for that specific subject. The Big Ideas within the context of learning to program that have been named by more than two groups are reported in Table 2. At this point it is important to understand that, although Informatics experts could come up with other Big Ideas (e.g., pointers), PCK refers to well-educated and experienced teachers' knowledge, so it is not possible to add extra (though relevant) Big Ideas to the list. Also, following the previous study, problem solving skills have been counted as a distinct answer (named by only one group). We keep this distinction to maintain consistency between the two studies. In the second part of the workshops teachers chose, based on their interests, one or two of the Big Ideas and answered, for each Big Idea, the eight questions listed in Table 3. These results comprise the standard against which the PCK of Dutch teachers will be measured.
The data have been collected in four countries (Italy, Belgium, Lithuania and the Netherlands) and constitute the first contribution to the efforts to portray the PCK of programming for secondary education. Having the possibility to freely choose from the eleven Big Ideas, the teachers chose seven topics according to their interest, namely: control structures, with a focus on loops; data structures; arrays; problem solving skills (named by only one group); decomposition; parameters; and algorithms. The PCK of these topics is now available integrally in the technical report (Saeli et al., 2010); in Appendix B only the known PCK of the Big Idea algorithms is reported.

Table 3 The eight questions of the CoRe instrument
1. What do you intend the students to learn about this Big Idea?
2. Why is it important for the students to know this Big Idea?
3. What else do you know about this Big Idea (and you do not intend students to know yet)?
4. What are the difficulties/limitations connected with the teaching of this Big Idea?
5. What do you think students need to know in order for them to learn this Big Idea?
6. Which factors influence your teaching of this Big Idea?
7. What are your teaching methods (any particular reasons for using these to engage with this Big Idea)?
8. What are your specific ways of ascertaining students' understanding or confusion around this Big Idea?

The Dutch Situation
In the Netherlands, Informatics at secondary schools first appeared in 1998, when the Dutch Ministry of Education had the content and quality of all existing courses reviewed and new ones introduced. Informatics was one of these new courses. Concerning the content of Informatics courses, it should be noted that in the Netherlands, lower grade students (up to 14 years old) are all expected to become IT literate (Hulsen et al., 2005), that is, to achieve a minimal level of familiarity with technological tools like word processors, e-mail, and Web browsers (On Information Technology Literacy, 1999). This means that by attending secondary Informatics education, students should achieve more than this minimal level. Recently (fall 2007), secondary education underwent modifications, which for Informatics implied a simplification of the curriculum. It became less detailed and schools were granted more autonomy and choice in the way they organise education. At the moment, Informatics is an elective subject for students and for schools as well.
Presently, out of a total of roughly 550 secondary schools in the Netherlands, there are about 350 Informatics teachers (Schmidt, 2007b). Most of these teachers did not receive formal teacher training in Informatics but were offered an in-service training spread over two school years known as the CODI course (Consortium Omscholing Docenten Informatica, Consortium for the schooling of Informatics teachers). The content of this course covers about half of the first year of a Bachelor of Science in Informatics and the pedagogical aspects of teaching Informatics in the last two years of secondary education (ages 16 to 18). A common scenario is that there is only one Informatics teacher per school, if a school offers Informatics at all. Considering the number of Dutch schools, this means that only around 65% of the schools could adequately provide an Informatics curriculum. Furthermore, the majority (67%) of these teachers were 50 years old or over in 2007, which means they are towards the end of their careers. It seems that Informatics in the Netherlands is at a crossroads, where there is even a risk that the teaching of Informatics will be withdrawn from schools (Van Diepen et al., 2011). The curriculum is divided into four domains, namely Informatics in perspective (possible uses and scope of Informatics), basic concepts and skills, systems and their structures, and applications in connection. These four domains are divided into 18 sub-domains (Schmidt, 2007a), but the level of understanding and depth students are expected to achieve are not specified.
Three textbooks are available in the Netherlands: Fundamentals of Computer Science (original title: Fundament Informatica), Enigma, and Active Computer Science (original title: Informatica Actief). The authors of the three Dutch books claim that, in general, the content of the exam programme has been inspirational for the writing of the textbooks (Schmidt, 2007b). The exam programme is divided into several subdomains, and programming is mentioned in the subdomain Software (domain B, basic knowledge and skills, retrieved from the website www.examenblad.nl, 2010). The results of the textbook analysis study (Saeli et al., 2011b) suggest that these books sufficiently support teachers' PCK on the content component (CK), but fail to support the pedagogical component (PK).

Research Questions
The aim of this study is to measure Dutch Informatics teachers' PCK. To do so, we develop a research instrument, called the OTPA. Moreover, we are interested in exploring the relation between teachers' PCK on the one hand and their background studies and the textbooks they use on the other. Therefore, the research questions are:
- Is it possible to assess teachers' PCK with the use of the OTPA?
- What is Dutch teachers' PCK of programming for secondary school?
- To what extent is teachers' PCK related to the textbook they use?
- To what extent is teachers' PCK related to their disciplinary background?

Table 4 The scheme summarising the different stages to answer the first two research questions. The term used is CoRe (Content Representation). The details of the method used follow in the next sections

Methods
In this section we describe the methods used to answer the four research questions. The methods for the first two questions are summarised in Table 4: the evaluation of the OTPA instrument (leftmost column) and the assessment of Dutch teachers' PCK (centre and rightmost columns). Regarding the third research question, finding a relation between teachers' PCK and the textbooks they use, we will use the results of the textbook analysis study to find any trend that suggests whether a textbook with a high PCK could support teachers' PCK. The fourth question is answered by grouping teachers according to their disciplinary backgrounds.

Participants
The OTPA was available online between January 2011 and April 2011, and was filled in by 92 teachers, but only 69 of those reached the end of the questionnaire. The other 23 teachers abandoned the process, though they were given the opportunity to complete the questionnaire at a later stage by anonymously signing in again. We report the results of those teachers who reached the end of the questionnaire. The majority of the participants are either at the beginning of their teaching career (Table 5), with less than 10 years of experience, or quite experienced, with between 20 and 40 years of teaching experience. As for the experience of teaching Informatics, the majority of teachers have less than 10 years of teaching experience (Table 6). Two of these teachers might have taught Informatics in another country or in higher education, because they state that they had more than 20 years of Informatics teaching experience. Of these teachers, 37 also teach another discipline, and the majority of those teach a scientifically oriented subject (e.g., Mathematics, Physics) (Table 7). In the Netherlands, subjects are categorised as follows: alpha are subjects such as Dutch and English; beta are subjects such as Mathematics and Physics; gamma are subjects such as economics and geography; and delta is the category for Informatics (Mulder, 2002). This categorisation sees Informatics as the only delta discipline because this subject is unique in combining aspects of the other disciplines in its nature.

OTPA (Online Teacher's PCK Analyser) Evaluation
In order to measure Dutch teachers' PCK, we developed an instrument by adapting the CoRe, an instrument used to portray the PCK of programming (Saeli et al., 2011a). The adaptation is made in two directions: the content and the form. Originally, the CoRe instrument was designed to portray the PCK of chemistry education (Loughran et al., 2001), whereas this study deals with Informatics education, with a focus on programming. As for the form, the CoRe was initially designed to be used in the context of semi-structured group interviews, while in this study it is adapted to be used in the format of an online questionnaire. Also, questions about teachers' disciplinary backgrounds and teaching experience are added.
In order to answer the first research question, concerning the quality of the OTPA instrument, we used Nieveen's quality assessment (Nieveen, 1999), which was primarily designed for educational products. Its applicability in various domains of educational product development, such as learning material and computer support systems, has also been proven (Nieveen, 1999). Its use was further extended to evaluate a research instrument to analyse textbooks (Saeli et al., 2011b). We extended the list of possible target products by using Nieveen's quality assessment on the OTPA. Nieveen's framework for product quality consists of checklists on the following three criteria (Table 4, leftmost column): validity, which refers to its content and its construct; practicality, focusing on the ease of use of the instrument; and effectiveness, referring to the time required for the use of the instrument.
In order to evaluate the different aspects of the OTPA, the following steps are covered. For the content validity, the theoretical framework of the OTPA is compared with the theoretical framework of PCK to verify whether the instrument assesses all aspects of PCK, while for the construct validity it is verified whether the components of the instrument are consistently linked to each other. Regarding the practicality and the effectiveness of the OTPA, an independent researcher is asked to assess the ease of use and the time consumption of the instrument, in terms of low, medium or optimal.
A further step regards the evaluation of the reliability of the instrument. To evaluate the instrument's reliability, another independent researcher is asked to use the OTPA to analyse three randomly chosen teachers' responses to the online questionnaire. The two researchers' results are compared with each other and the percentage of agreement is calculated (Table 4, lower part of the leftmost column).

Quality Evaluation
The goal of this phase is to answer the research question: Is it possible to assess teachers' PCK with the use of the OTPA?
The CoRe instrument, which was adapted to build the OTPA, has been used successfully in different subjects (Loughran et al., 2001; Saeli et al., 2011a) and has been assessed positively on how well the eight questions actually cover the different aspects of PCK (Saeli et al., 2011b). There is an almost one-to-one correspondence between the OTPA and the results obtained with the CoRe.
The instrument analyses both the pedagogy (PK) and content (CK) aspects, two of the three main components of PCK. The data obtained with this instrument are then compared with the known PCK. In order to measure the practicality and effectiveness, a second researcher participated in this study. The second researcher commented: "Controlling the urge to be too interpretive and give meaning where none was obvious was not easy. I do not think this is a reflection of the instrument. I believe this is probably a common factor in any qualitative research where respondents' words represent the data. When the respondents' comments were of good quality and quantity, the scoring was easy." The second researcher's comment reveals a difficulty of analysing qualitative data that is not specific to this study. Therefore, her positive score for the practicality of the OTPA is at a medium level in terms of ease of use. When asked to score its effectiveness, the second researcher rated the instrument as high in terms of time consumption, commenting that "It is operational and perfectly usable for scoring the respondents' answers".
Moreover, regarding the reliability of the instrument, a pilot round was run. Two researchers (the first author of this article and a second, external researcher) independently analysed two teachers' answers to the open-ended questions (CK3 and PK2 to PK8, as reported in Appendix A). The answers to be compared were randomly and blindly chosen by the second researcher. The scores were compared, and this resulted in a percentage of agreement (POA) of .81. After discussing the answers that produced different scores, full agreement on the method was reached.
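The paper does not spell out the POA formula, but the reported value of .81 is consistent with the simple fraction of item-level scores on which the two raters agree. The following Python sketch, with a hypothetical `percentage_of_agreement` helper, illustrates this reading:

```python
def percentage_of_agreement(scores_a, scores_b):
    """Fraction of items on which two raters assigned identical scores.

    A sketch of the POA computation; the exact formula used in the study
    is not given in the text, so this is one plausible interpretation.
    """
    assert len(scores_a) == len(scores_b), "both raters must score every item"
    agreed = sum(a == b for a, b in zip(scores_a, scores_b))
    return agreed / len(scores_a)

# Hypothetical example: two raters agree on 3 of 4 item scores
poa = percentage_of_agreement([1, 2, 3, 1], [1, 2, 0, 1])  # 0.75
```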

Teachers' PCK Analysis
The second research question, assessing Dutch teachers' PCK, is tackled by using our research instrument. The data obtained are compared with the standard of PCK for programming, obtained through workshops, fully available in the technical report (Saeli et al., 2010) and partially reported in Appendix B. Similarly to the CoRe, the OTPA focuses on Big Ideas (core concepts of a subject), considered as the CK (content knowledge) aspect of PCK. For each of these Big Ideas, an analysis of its PK (pedagogical knowledge) aspect is conducted through the eight questions introduced earlier (Table 3). Below, the details of the CK and PK components of the OTPA (Table 4, centre and rightmost columns) follow.

Content Knowledge Component
Assessing the CK component is done in two phases (Table 4). In the first phase we analyse the answers to two multiple-choice questions (see Appendix A), which are labelled as CK1 and CK2. Question CK1 was constructed using the aforementioned Big Ideas of programming, obtained from the international study (Saeli et al., 2011a), with the addition of concepts that were not mentioned by well-educated and experienced teachers, and which were therefore considered as incorrect answers; CK2 was constructed using the experts' opinions regarding which terms are related to the teaching of control structures, obtained from the textbook analysis study (Saeli et al., 2011b), with the addition of incorrect terms. In the list of choices for these two questions, there are both correct and incorrect answers, and teachers therefore have to recognise and mark the correct choices. For each correct choice they get 1 point, up to 11 for CK1 and up to 8 for CK2. In order to be consistent with the scales of questions CK3 and PK2 to PK8, the scores of questions CK1 and CK2 are then labelled as 'low', 'medium' and 'high'. To produce such labels, CK1 and CK2 scores are divided into thirds (e.g., for CK2: a score of 0 to 2 is labelled as 'low', 3 to 5 as 'medium', and 6 to 8 as 'high').
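The thirds-based labelling can be sketched as follows. The function name `label_score` and the exact boundary handling are our assumptions, chosen so that the CK2 example from the text (0-2 'low', 3-5 'medium', 6-8 'high') is reproduced:

```python
def label_score(score, max_score):
    """Map a raw multiple-choice score onto the 'low'/'medium'/'high' scale
    by splitting the 0..max_score range into three equal bands.

    Hypothetical helper, not the authors' code; with max_score=8 (CK2) the
    bands are 0-2, 3-5, 6-8, matching the example given in the paper.
    """
    third = (max_score + 1) / 3  # band width, e.g. 3.0 for CK2, 4.0 for CK1
    if score < third:
        return "low"
    elif score < 2 * third:
        return "medium"
    return "high"
```

Under the same assumption, CK1 (maximum 11 points) would split into 0-3 'low', 4-7 'medium' and 8-11 'high'.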
In the second phase, the answer to the question "1. What do you intend for the students to learn about this Big Idea?" (Table 3) is examined. This is the first question of the PK part of the questionnaire, but because it refers to content knowledge it is labelled as CK3. The answer is then compared with experienced and well-educated teachers' PCK, which is considered as the standard and is reported in (Saeli et al., 2011a). It is evaluated as: blank (0) if no answer is given; low (1) if only 1/3 of the standard is listed by the participant; sufficient (2) if 2/3 of the standard is named by the participant; and high (3) if the full standard is listed by the participant. Because of the qualitative nature of this analysis, one does not find exactly matching answers, but similar concepts are also scored positively.
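Under the stated rubric, the CK3 scoring can be sketched as below. `score_ck3` is a hypothetical helper; the text only names the three anchor fractions, so where exactly the 'low'/'sufficient' boundary falls between 1/3 and 2/3 is our assumption:

```python
def score_ck3(matched_fraction):
    """Map the fraction of the standard PCK elements named by a teacher
    onto the rubric scale: blank (0), low (1), sufficient (2), high (3).

    A sketch: the rubric anchors 1/3 -> low, 2/3 -> sufficient and a full
    match -> high; the treatment of in-between fractions is an assumption.
    """
    if matched_fraction is None:
        return 0          # blank: no answer given
    if matched_fraction >= 1.0:
        return 3          # high: the full standard is listed
    if matched_fraction >= 2 / 3:
        return 2          # sufficient: about two thirds of the standard
    return 1              # low: about one third (or less) of the standard
```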
An example of the CK3 scoring could be comparing a teacher's answer on the topic 'algorithm' with the standard (Appendix B and Table 8). Teacher: "'Algorithm' = a plan of steps/recipe. Writing an algorithm = solving a problem. You can write simple algorithms, using parameters, iterations, conditions and simple data structures (variables/arrays)." This teacher's answer is scored as 'medium' because it indeed lists topics such as a sequence of actions (plan of steps) and a combination of structures to reach the solution of a problem (parameters, iterations, etc.); however, it fails to name the representation, realisation and necessity of algorithms.

Pedagogical Knowledge Component
As for the PK (Pedagogical Knowledge) aspect of teachers' PCK, the standard of the PCK regarding seven topics in the context of programming is used as a term of comparison, namely: loops, data structures, arrays, problem solving skills, decomposition, parameters and algorithms. In the context of the OTPA, we offered teachers a choice of one of the seven topics listed above. They were then prompted with the eight questions introduced in Table 3 and had the possibility to skip them or leave them blank. As question 1 is used to analyse the CK component, we only use questions 2 to 8 for the PK component. Each answer is then assessed by comparison with the results of the international study (Table 4, right column) as: blank (0), low (1), sufficient (2), or high (3), in relation to its quality and quantity. Similarly to CK3, teachers' answers are compared and scored (see the example in the previous paragraph). Because of the qualitative nature of this analysis, it is not always possible to find exactly matching answers, but more often than not similar concepts can be found. When teachers' answers are different from the standard but still seem to be of good quality, an expert has been asked to assess these questions.
In the course of the international study (Saeli et al., 2010), the PCK of some Big Ideas was collected using a set of questions different from that in Table 3. For the Big Idea 'problem solving skills', question 5 was conceived as follows: Which knowledge about your students' thinking influences your teaching of this Big Idea? Consequently, for those teachers answering that question of the OTPA on the concept of problem solving skills, an expert was asked to assess the teachers' answers.

Relation with Textbook
In order to explore whether there is any relation between teachers' PCK and the PCK of the textbook they use, the results of this study are compared with those of the textbook analysis study, in which the PCK of textbooks was measured (Saeli et al., 2011b). The resulting histogram graphs are studied in search of relations and trends (see Tables 16 and 17).

Relation with Background
In order to test whether there is any relation between teachers' PCK and their backgrounds, the results of teachers' PCK are sorted by teachers' disciplinary backgrounds, using the alpha, beta, gamma, delta categorisation introduced earlier.
Similarly to the previous section, the resulting histogram graphs are studied in search of relations and trends (see Tables 20 and 21).

Results
This section is divided into three parts: the results relative to the analysis of teachers' PCK; the results on a possible relation between teachers' PCK and the textbooks they use; and finally the results on a possible relation between teachers' PCK and their background (alpha, beta, gamma and delta).

PCK Assessment
In this section the results relative to the CK and PK component are reported.

Content Knowledge Component
Regarding the CK1 and CK2 components, the majority of teachers scored medium values on both multiple-choice questions (Tables 9 and 10). The method used to obtain these scores is reported in section 2.4.1. In the CK3 component there is not much variation between low and medium scores, though a considerable number of teachers skipped this question (Table 11).

Pedagogical Knowledge Component
In this section, the results of the PK component of teachers' PCK are reported. Teachers had the opportunity to freely choose one topic from the seven available concepts. The majority chose to answer the questions for Problem solving skills and Algorithms (Table 12), while only 2 teachers chose to discuss Decomposition. The scores relative to the answers to the seven questions (PK2 to PK8) are reported in Table 13. Here, the scores reported are the sum of each answer (scoring between 0, blank, and 3, high) over the 69 teachers, which means that the maximum score can be 207. The question with the highest score is PK2 (93 out of 207), concerning the reasons to teach a certain concept. The questions with the lowest scores are PK3, concerning extracurricular knowledge about a concept, and PK6, concerning other factors influencing the teaching of a concept, scoring 51 and 55 out of 207 respectively. Table 14 provides an overview of the blank answers, given either because the questions were not answered or because the answer was out of context. Teachers who chose to answer the questions relative to the Big Idea "data structures" actually understood the term data structure to mean "databases"; therefore these 6 teachers have been given the score '0'.
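The arithmetic behind the Table 13 totals can be checked with a few lines; `question_total` is an illustrative helper, not the authors' code:

```python
def question_total(per_teacher_scores):
    """Sum one PK question's scores over all respondents.

    Illustrative sketch: each answer scores 0 (blank) to 3 (high), so with
    69 respondents the ceiling for any question's total is 69 * 3 = 207.
    """
    assert all(0 <= s <= 3 for s in per_teacher_scores), "scores are 0..3"
    return sum(per_teacher_scores)

MAX_TOTAL = 69 * 3  # = 207, the maximum possible total per question
```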

Relation with Textbooks
Three textbooks are used in this study: Instruct (29 teachers), Informatica-actief (20 teachers) and Enigma (only 6 teachers). Because a considerable number of teachers (13) report not using a textbook, their results are also reported. However, because we do not have details about the content of these teachers' own material, it is not possible to infer or speculate from their results. Also, the research question leading the analysis of these data concerns only the possible relation between teachers' PCK and the textbooks they use. The scores regarding the CK (Table 16) and PK (Table 17) components are reported below. As for the use of textbooks, the majority of the respondents, 29 teachers, use the Instruct textbook (Table 15). This book is designed to be used with one or more extra modules. In Table 15 the averages of the scores of teachers using these textbooks are also reported, relative only to the PK component, which will be analysed in detail later in this section.
The following results have been scaled up to 100%, though it should be considered that the sample groups for the different categories (textbook or disciplinary background) have very different sizes, as reported in the graphs' captions. This choice was necessary in order to be able to compare the different groups. Also, for clarity of presentation, teachers who did not answer the questions are represented in the graphs by the gap between the top of the bars and 100%.
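The scaling to 100% amounts to expressing each group's score counts as percentages of the whole group. `to_percentages` is a hypothetical helper illustrating this, under the assumption that blank answers are simply not plotted, leaving the gap below 100%:

```python
def to_percentages(counts, group_size):
    """Convert per-label counts ('low'/'medium'/'high') into percentages of
    the whole group, so that groups of different sizes (e.g. Instruct N=29
    versus Enigma N=6) become comparable.

    Sketch of the normalisation described in the text: blanks are omitted
    from the counts, so the plotted bars sum to less than 100% whenever
    some teachers in the group skipped the question.
    """
    return {label: 100 * n / group_size for label, n in counts.items()}

# Hypothetical group of 8 teachers: 3 low, 2 medium, 1 high, 2 blank
shares = to_percentages({"low": 3, "medium": 2, "high": 1}, group_size=8)
```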
Regarding CK1 and CK2 (Table 16), there is little variation among the teachers using different textbooks, where the most frequent score is 'medium'. An exception is found in CK2 for teachers using the Enigma textbook, because they are the only ones without any 'low' scores. As for CK3, where teachers were given the opportunity to leave blank answers, the scenario is different. There is no consistency between the different groups. The only teachers who score 'high', in a very small portion, are those using Instruct and Informatica-actief. Enigma teachers mostly score 'low', whereas teachers who do not use textbooks are almost equally spread between 'low' and 'medium'.
Regarding the questions on the PK component (Table 17), teachers were given the opportunity to leave answers blank. On average 40% of the teachers skipped these questions (not counting teachers using 'other' kinds of teaching material), and the questions skipped most often were PK3, PK6 and PK8. Those who answered are distributed almost equally between 'low' and 'medium' (see Table 15). The exception is the teachers using Informatica-actief, who mostly score 'medium' (35%). The biggest variation of scores appears in questions PK2 and PK3 (respectively, the reasons to teach a concept and extracurricular knowledge), where 'high' scores were also found for all the textbooks, but not for teachers who do not use textbooks. On these two questions, teachers using Informatica-actief have the smallest percentage of 'low' scores. For the other questions there does not seem to be any textbook with which teachers score remarkably higher than with the others, though some high scores can be noticed (PK4, PK7 and PK8). The answers to question PK4 (difficulties connected with the teaching) present fewer difficulties for teachers using the book Informatica-actief. Question PK5 (students' prior knowledge needed) results in a similar distribution of scores irrespective of the textbook, with most teachers scoring 'medium' and a smaller percentage scoring 'low'. For PK6 (factors influencing the teaching of a concept), teachers using the Enigma book score the highest. The answers to question PK7 (teaching methods) are in general 'low' for Instruct and Enigma teachers, while mostly 'medium' for teachers using Informatica-actief and teachers not using textbooks. Lastly, regarding question PK8 (ways of ascertaining students' understanding), there are more 'low' scores, although Instruct teachers and teachers not using textbooks score slightly better than the others.

Table 16  The results for teachers using the same textbook or no textbook, divided per question (CK1, CK2 and CK3) and shown as percentages. Instruct N = 29, Enigma N = 6, Informatica-actief N = 20, No textbooks N = 13. The gaps between the top of the bars and 100% represent the teachers who left blank answers.
Table 17  The results for teachers using the same textbook or no textbook, divided per question (PK2 to PK8) and shown as percentages. Instruct N = 29, Enigma N = 6, Informatica-actief N = 20, No textbooks N = 13. The gaps between the top of the bars and 100% represent the teachers who left blank answers.

Relation with Disciplinary Background
The teachers participating in this study have different disciplinary backgrounds: alpha (16 teachers), beta (26 teachers), gamma (only 6 teachers) and delta (15 teachers). The remaining 6 teachers did not specify their background studies and will therefore not be shown in the results. The scores on the CK (Table 20) and PK (Tables 18 and 21) components are reported below.
In Table 18 we can see that most teachers have a beta background (26 teachers), while only 6 teachers have a gamma background. Roughly half of the teachers have attended the CODI course (Table 19), the in-service training that gives teachers with another disciplinary background the basic knowledge needed for teaching Informatics.
Regarding the CK component (Table 20), teachers from different disciplines score quite similarly ('medium') on the multiple-choice questions, except for gamma teachers on question CK2. A completely different scenario appears for the open-ended question, where there is a definite distinction between the delta teachers (followed by the gamma teachers) and the others.
Regarding the PK component (see Table 18), teachers with an alpha background score the worst of all the teachers answering the questions. The group of delta teachers performs slightly better than the others, while teachers with a beta background are spread evenly between 'low' and 'medium'. As for the number of blank answers, the teachers who answered most of the questions (Table 18) are those with a beta background, while those answering the fewest have a gamma or an alpha background.
Table 20  The results for teachers having the same background, divided per question (CK1 to CK3) and shown as percentages. Alpha N = 16, Beta N = 26, Gamma N = 6, Delta N = 15. The gaps between the top of the bars and 100% represent the teachers who left blank answers.

Further (Table 21), in questions PK2 (the reasons to teach a concept) and PK3 (extracurricular knowledge) there is more variation between 'low', 'medium' and 'high' scores among teachers of different backgrounds. On the same two questions, teachers with a delta background also have the largest percentage of 'high' scores.
With respect to question PK4, about the difficulties connected with the teaching of a concept, gamma and delta teachers generally score 'medium', while alpha and beta teachers consistently score 'low'. A similar scenario appears for question PK5 (students' prior knowledge needed), except for alpha teachers (mostly scoring 'low'). For PK6 (factors influencing the teaching of a concept), scores lie between 'low' and 'medium', except for gamma teachers (mostly scoring 'medium'). For question PK7, on teaching methods, alpha and gamma teachers mostly score 'low', while beta and delta teachers have a larger proportion of 'medium' scores. Lastly, regarding question PK8 (ways of ascertaining students' understanding), gamma teachers perform distinctly better than the others.

Conclusions and Discussion
In this paper we discussed the use of PCK as a framework to assess secondary school teachers. The four research questions guiding this study concern the use of the OTPA, the assessment of teachers' PCK, the possible relation between teachers' PCK and the textbooks they use, and the possible relation between teachers' PCK and their disciplinary background.
Table 21  The results for teachers having the same background, divided per question (PK2 to PK8) and shown as percentages. Alpha N = 16, Beta N = 26, Gamma N = 6, Delta N = 15. The gaps between the top of the bars and 100% represent the teachers who left blank answers.

About the Instrument
Regarding the first research question, "Is it possible to assess teachers' PCK with the use of the OTPA?", the results suggest that it is in general indeed possible to use PCK as a framework for this kind of assessment. A second researcher used the instrument and assessed it positively in terms of ease of use and practicability. For the qualitative aspect of the instrument, however, the interpretation of teachers' answers is difficult at times. This is a common difficulty of the interpretative processes specific to qualitative methods.

Dutch Teachers' PCK
The goal of the second research question of this study is to assess Dutch teachers' PCK from a general perspective. The results show that Dutch teachers have generally scored 'medium' on the content knowledge component when given multiple-choice questions.
In the context of open-ended questions, teachers perform less well and the scores are distributed almost equally between 'low' and 'medium'. These results might indicate that Dutch teachers do not have enough disciplinary background to answer open-ended questions, which involve knowledge production. However, when they are in a condition of recognising knowledge (multiple-choice questions), their performance improves. A possible explanation of this phenomenon might be found in the difference between knowledge recognition and knowledge production (synthesis). According to the hierarchical classification of cognitive processes (Krathwohl, 2002), knowledge recognition is considered less difficult than knowledge (re)production. Teachers answering multiple-choice questions are actually recognising knowledge, because they need to choose between correct and incorrect answers. However, when teachers answer open-ended questions, they are in the process of producing knowledge. One might argue that clicking on a multiple-choice question is faster than answering an open-ended question, influencing teachers' performance (better with multiple-choice questions and worse with open-ended questions). However, if the results of the other open-ended questions are considered (Table 13), it is noticeable that scores vary between 'low' and 'medium', suggesting that some aspects of Dutch teachers' PCK are stronger than others and sufficient for answering open-ended questions, in line with the hierarchical classification described above.
As for the pedagogical component, Dutch teachers have scored 'medium' on the questions concerning the reasons to teach a certain Big Idea, the difficulties connected with the teaching of a certain Big Idea, and the students' prior knowledge required to learn a certain Big Idea. At the same time, they score poorly on questions relating to extracurricular knowledge around the Big Idea of their choice, factors influencing their teaching of that Big Idea, teaching methods, and ways to ascertain students' understanding. The 'low' scores in these domains are also influenced by the considerable number of teachers skipping these questions. This is especially the case for extracurricular knowledge, where more than half of the teachers skipped the question on this topic. This result provides evidence that Dutch teachers may lack a solid disciplinary knowledge of programming ('low' CK).
Summarising, the answer to the second research question is that Dutch teachers perform sufficiently on the content knowledge component, especially when in a condition of recognising knowledge, as with multiple-choice questions. One might have expected Dutch teachers to score poorly on the CK component, because most of them have a disciplinary background different from Informatics. However, Dutch textbooks (Saeli et al., 2011b) have been positively assessed in terms of the CK component and could have compensated for Dutch teachers' weak disciplinary background. We could speculate that these teachers would benefit from well-designed teaching material to further improve their performance. These outcomes are confirmed by Dutch teachers' results on the pedagogical knowledge component, especially for the question on extracurricular knowledge, where scores were poor for both teachers and textbooks. Also for the teaching methods, teachers seem to need more support. This result confirms the need for more teaching materials and teaching examples, which is also underlined by participants of the international study (Saeli et al., 2011a). It is reassuring, on the other hand, to see that Dutch teachers score 'medium' on topics such as reasons to teach, students' prior knowledge and difficulties concerning the teaching of a topic. The results of this study could help Dutch scholars (Van Diepen et al., 2011) in the process of revising the whole subject. From these results we can conclude that Dutch teachers are strong on neither the CK nor the PK component, though they perform much better when knowledge is made explicit (e.g., multiple-choice questionnaires). Effort should therefore be made to produce more teaching material and guidebooks for teachers.

PCK and Textbooks
To what extent is there a relation between teachers' PCK and the textbooks they use? From the results of this study (see Table 16) there seems to be no relation regarding the CK component, in the sense that teachers using the same textbook do not score better than teachers using another textbook; they generally score 'medium' on the multiple-choice questions, and between 'low' and 'medium' (except teachers using the book Informatica-actief) on the open-ended questions. A reason for this homogeneity can be attributed to the CK component of Dutch textbooks, which was positively assessed in the textbook analysis study.
Regarding the PK component, the first sign of a relation between textbooks and teachers' PCK is the choice of Big Ideas to discuss. Teachers mainly chose those concepts that are also present in the textbooks. Of the few participants who decided to discuss the 'data structures' concept, all actually answered as if the Big Idea referred to 'databases'. The latter is a concept included in the Dutch curriculum (Schmidt, 2007a), while 'data structures' is the only concept that is not explicitly addressed in any of the Dutch textbooks. It should also be noted that none of these teachers had an Informatics background. Interestingly, most of the teachers (33 out of 69) decided to discuss the concept 'problem solving skills', which is considered to be at the heart of teaching programming (Saeli et al., 2011c). Eleven teachers decided to discuss Loops and another eleven chose Algorithms, both concepts largely covered in the textbooks.
When considering the different parts of the PK component, it is possible to see from the graphs (see Table 17) that there is no real difference between teachers using different textbooks, except for those using Informatica-actief on questions PK2 and PK3 (the reasons to teach a concept and extracurricular knowledge). Dutch textbooks were assessed in a previous study as weak on the PK component. When comparing the results of the two studies, Dutch teachers obtained better scores than the textbooks they use. One exception is the question regarding teaching methods, where more consistent 'low' and 'medium' scores were found for Dutch teachers, while their textbooks generally scored 'medium'. Although on some aspects of PCK (reasons to teach a certain Big Idea and extracurricular knowledge) there is more variation in teachers' answers, showing also some 'high' results, we cannot identify any relation with the textbooks they use, which all had poor scores on the same questions. Remarkably, teachers scored mostly 'medium', and sometimes even 'high', on those questions to which no answers were found in the textbook analysis. These questions concern the difficulties connected with the teaching/learning of a concept and the factors influencing the teaching of a concept. The answers to such questions can usually be found in a teacher's guide to the textbook, which is not available for any of the Dutch textbooks. As for the question concerning extracurricular knowledge, teachers mostly scored 'low', except for teachers using Informatica-actief. This is the only relation between teachers' low performance and the low PCK assessed in the textbook study.
Summarising, the answer to the third research question is that, in the Dutch scenario, there seems to be no strong relation between teachers' PCK and the textbooks they use, though for some aspects teachers' performance might be linked to the quality of the textbooks (e.g., the CK component).

PCK and Disciplinary Background
The goal of the fourth research question is to understand whether there is a relation between teachers' PCK and their disciplinary backgrounds. Disciplinary backgrounds are divided according to the Dutch categorisation: alpha (e.g., Dutch, English), beta (e.g., Mathematics, Physics), gamma (e.g., Geography, Economics) and delta (Informatics). From the results there does not seem to be any relation regarding the content knowledge component when teachers answer multiple-choice questions. When answering open-ended questions, delta teachers scored remarkably better than the other teachers, while alpha teachers scored the worst. This confirms that teachers with a solid disciplinary background have better knowledge of the subject and manage to reproduce their knowledge in the context of open-ended questions.
Regarding the pedagogical knowledge component, one might expect teachers who teach another discipline to actually be supported in some aspects of their PCK by their teaching experience, as research has shown (Sanders, 1993). Indeed, the results of this study show that teachers with disciplinary backgrounds other than Informatics do at times score 'medium', for example on questions regarding the reasons to teach a certain concept, students' prior knowledge needed, and factors influencing the teaching. Regarding the question about teaching methods, the teachers who seem to score better are those with a beta or a delta disciplinary background. As for the question regarding the methods to ascertain students' understanding, all disciplines show results distributed equally between 'low' and 'medium', with the exception of gamma teachers (mostly scoring 'medium'). Only teachers with an alpha or gamma background show a noticeable tendency to skip open-ended questions, and unfortunately the reasons for this are unclear.
Summarising, the answer to the fourth and last research question is that there is no strong relation between teachers' PCK and their disciplinary backgrounds, except in the context of content knowledge reproduction (CK3), where Informatics teachers clearly scored better than the other teachers. Also, teachers with an alpha or gamma background are those who skipped the most questions, though we cannot infer the reasons for this choice. Quite surprisingly, teachers with an Informatics background scored quite similarly to teachers with a non-Informatics background on the content knowledge component in the context of multiple-choice questions. This might be because textbooks have been positively assessed on their CK component and might support teachers with a weak disciplinary background. As for the pedagogical knowledge aspect, it is not possible to single out one disciplinary background that scores better than the others. However, teachers with an alpha background scored the worst on average and skipped the most questions. Beta and delta teachers are those who, on average, scored better. One might have expected teachers of non-Informatics disciplines to score better on the PK because they are supported, in some cases, by the PCK developed through their teaching experience.

Limitations of the Study
The OTPA, intended as an instrument to measure teachers' PCK, has the limitation that it does not give teachers the chance to discuss their own knowledge. Though teachers were given the opportunity to write their answers to open-ended questions, they might not have had the chance to fully express their knowledge. Another limitation is that several teachers skipped the open-ended questions. The reasons may vary: for example, time pressure, lack of knowledge, or lack of interest. Also, all teachers answering the questions about the teaching of data structures actually answered as if the questions referred to databases.
One might argue that the PCK assessed in this study does not reflect teachers' actual PCK, because teachers had the chance to choose the topic to discuss. Probably, teachers chose the topics in which they felt most confident. A consequence could be that even if a teacher's PCK on her/his topic of preference is good, this does not automatically imply that her/his PCK of programming is good. However, the goal of this study is not to measure a single teacher's PCK, but to assess the Dutch scenario using PCK as a framework. In order to be able to generalise our results beyond single teachers' PCK, a large sample size was chosen.
One final observation should be made regarding the quality of the sample of teachers. Participants of this study were invited to fill in the questionnaire anonymously, either through e-mail, an advertisement on the portal for Dutch-speaking Informatics teachers (www.informaticavo.nl), or through mailing lists. Though the participants of this study make up almost one fifth of the Dutch Informatics teachers, we suspect a bias in our results due to the possibility that only teachers who feel confident in programming completed the questionnaire.

CK1 Component
Please tick the concepts that you find are at the heart of learning programming for students at the VO level. If you need to add more, please use the space reserved below the list.

- Algorithmic thinking is something new that students need to understand.
They will also find it in other subjects.
- The student's autonomous ability to analyse a problem.
-The need to realise the job into a working experience.
- Often students do not understand the need to develop an algorithm until they have to solve complex problems. At that point it is too late to start clarifying their ideas.
- To spot the elementary actions of the executor.

5. What do you think students need to know in order for them to learn this idea?

- Knowledge of mathematical logic.
- It limits the curriculum, because time needs to be spent on helping students understand difficult topics.
- Students aim to obtain results immediately.
- Students think they are unimportant. There is a need to show them the actual utility of algorithms.
-Choice of concrete cases to propose.

6. Which factors influence your teaching of this idea?
-Motivation, Success, Feedback.
-The use of formalism to present an algorithm.
- The possibility to compare with other areas of study.
-Logical skills, knowledge of specific students' reasons.
- The interactive environments of programming languages give the opportunity to obtain results even without proper preparatory development. In the past, the slowness of the machine forced programmers to plan their actions, because they had "only one chance per day". Nowadays students have the possibility to try out their code as many times as they want.
7. What are your teaching methods (any particular reasons for using these to engage with this idea)?

- Give problem-solving tasks to good students, more examples, read algorithmic structures.
-Analysing examples with the students and suggestion of implementation.
- Development of well-known subjects to demonstrate the functionality of the method used to represent the algorithm.
- The idea would be to start from algorithms as such, and not only in relation to Informatics. First of all, students should have an idea of what automatic functioning is: that, in a prefixed way, an input leads to an output. For example, students are shown some data and the output of "an operation". Then students are asked to guess how the results were obtained. This could be a way to let students discover that there is an internal logic, and that they have to understand how it works.
- Identifying algorithms in everyday life before working with algorithms to calculate something more abstract.
- Exercises in full autonomy.
-From small particulars and by asking specific questions.
- Formalise algorithms from different domains, for example cooking recipes.
- Propose other examples in which students have to identify algorithms autonomously.
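The "guess the operation" exercise described in the answers above, where students see input/output pairs and must discover the hidden rule, can be sketched as a short program. This is a hypothetical illustration, not taken from the questionnaire or the textbooks; the `mystery` function and its digit-sum rule are our own example choices.

```python
# Hypothetical classroom sketch of the "guess the operation" exercise:
# students see only the printed input/output pairs and must discover
# the internal rule before the code is revealed.

def mystery(n):
    # The hidden rule (kept secret from students): the sum of the digits of n.
    return sum(int(digit) for digit in str(n))

# Show a few input/output pairs for students to reason about.
for value in [7, 25, 304, 1999]:
    print(f"input: {value:4d}  ->  output: {mystery(value)}")
```

Once students have guessed the rule, revealing the code links their informal description to a formal algorithm, in line with the suggestion above of formalising algorithms from everyday domains.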
M. Saeli received her bachelor's degree in computer science from the University of Ferrara (Italy) and her master's degree in mathematics and science education from the University of Amsterdam (The Netherlands). In 2012 she completed her PhD at the Eindhoven School of Education (Eindhoven University of Technology) on the teaching of programming in secondary school.

J. Perrenet participated in various mathematics education research projects and was involved in the development and innovation of higher technological education for many years. Nowadays he is associated with the Eindhoven School of Education, for teacher training in science education and communication, for coaching PhD students, and for research into mathematics and informatics education. He is also associated with the mathematics and computer science programmes of the TU/e for developmental advice and participates in the project Academic Competencies and Quality Assurance, which measures the academic profile of programmes at the TU/e and at other technical universities.

W. Jochems received his master's degree in educational psychology and methodology from Utrecht University. He did his PhD in technical sciences at Delft University of Technology (TU Delft). In 1989 he became full professor in educational development and educational technology at TU Delft. From 1993 until 1998 he was dean of the Faculty of Humanities at TU Delft. In 1998 he became dean of the Educational Technology Expertise Centre at the Open University of the Netherlands (OUNL) and full professor in educational technology. From 2006 onwards Prof. Jochems was dean of the Eindhoven School of Education and full professor in educational innovation at Eindhoven University of Technology. Recently, he became the founding dean of the Teacher Institute at the Open University of the Netherlands.

B. Zwaneveld is emeritus professor in mathematics education and informatics education at the Ruud de Moor Centrum of the Open University of the Netherlands, an expertise centre for the professional development of teachers. His main fields of interest are the training of prospective informatics teachers, the teaching of mathematical modelling and the teaching of mathematics in primary education. He started his career as a mathematics teacher in secondary education. Afterwards he was a course developer in the Faculty of Computer Science of the Open University. He has a PhD in didactics of mathematics.

Table 2  A list of the core topics within programming at the heart of its learning

Table 8
An extract of the CoRe about algorithms fully reported in Appendix A.

Table 9
Scores of the CK1, the first multiple-choice question

Table 18
Disciplinary backgrounds of 69 teachers and their average scores with respect to the PK component

8. What are your specific ways of ascertaining students' understanding or confusion around this idea?

- Reactions; asking students to write what happens in a given algorithm.
- TWO DIFFERENT APPROACHES: Flowcharts are not suitable for a better understanding of algorithms. Flowcharts used next to the code help students to visualize the flow of the program.