Computational Thinking: A Disciplinary Perspective

Over its short disciplinary history, computing has seen a stunning number of descriptions of the field's characteristic ways of thinking and practicing, under a large number of different labels. One of the more recent variants, notably in the context of K-12 education, is "computational thinking", which became popular in the early 2000s and which has given rise to many competing views of the essential character of CT. This article analyzes CT from the perspective of computing's disciplinary ways of thinking and practicing, as expressed in writings of computing's pioneers. The article describes six windows into CT from a computing perspective: its intellectual origins and justification, its aims, and the central concepts, techniques, and ways of thinking in CT that arise from those different origins. The article also presents a way of analyzing CT over different dimensions, such as breadth vs. depth, specialization vs. generalization, and skill progression from beginner to expert. Those different views have different aims, theoretical references, conceptual frameworks, and origin stories, and they justify their intellectual essence in different ways.


Introduction
We are in a computer revolution. Nearly every device now has computers in it: phones, tablets, desktops, watches, navigators, thermometers, medical devices, clocks, televisions, DVD players, and even toothbrushes. Nearly every service is powered by software: bookstores, retail stores, banks, transportation, hotel reservations, filmmaking, entertainment, data storage, online courses, and even daily fitness. These changes have brought enormous benefits as well as worrying concerns. It looks like everything that can be digitized is being digitized, and computers are everywhere storing and transforming that information.1 This presents an enormous challenge for educators (Guzdial, 2015). What do we need to understand about computers? What must we do to put a computer to work for us? How do computers shape the way we see the world? What are computers not good for? What should be taught to learners at different stages of their education?
Computational thinking (here abbreviated CT) has offered educators some answers to these questions (National Research Council, 2010, 2011). From the days of Charles Babbage, we have wanted computers to do for us jobs that we cannot do ourselves. Much of CT is specifically aimed at figuring out how to get a computer to do those jobs for us, and algorithms are the procedures that specify how the computer should do them. Computers are much better at carrying out algorithms than humans are: modern computers can do a trillion steps in the time it takes a human to do one step. A major task for educators is to teach children how to think so that they can design algorithms and machines that reliably and safely do jobs no human can do.
But there is more. Information and computational processes have become a way of understanding natural and social phenomena (Kari and Rozenberg, 2008). Much CT today is oriented toward learning how the world works. Physical scientists, life scientists, social scientists, engineers, humanists, artists, and many others are looking at their subject matter through a computational lens (Rosenbloom, 2013; Meyer and Schroeder, 2015). Computer simulation enables previously impossible virtual experiments. The information interpretation of the world offers conceptual and empirical tools that no other approach does (Kari and Rozenberg, 2008; Fredkin, 2003). Another major task for educators is to teach children how to bring an information interpretation to the natural and virtual worlds without sacrificing wisdom in the process (Weintrop et al., 2016).
Despite the enthusiasm for the power of computers, most jobs cannot be done by computers in any reasonable amount of time, if at all. Students who understand the limits of computing can avoid the trap of thinking that all problems are ultimately solvable by computers.
Most of the education discussion of CT has been formulated for K-12 schools (García-Peñalvo et al., 2016; Guzdial, 2015; Lockwood and Mooney, 2017). It is oriented to helping beginners learn to think about computers. Some of the definitions are shallow and have sown confusion among teachers who do not understand how to teach basic computing and assess student progress (Mannila, 2014; Lockwood and Mooney, 2017). Some definitions lead to conclusions that seem to defy students' common sense about computers, and teachers are asking for clarifications.
1 Shoshana Zuboff's Second Law, "Everything that can be informated will be informated," describes a techno-deterministic development of society and technology. Zuboff traced her three "laws" to the 1980s in Be the friction - Our Response to the New Lords of the Ring (Frankfurter Allgemeine, June 25, 2013); they became popular through her mid-1990s lectures and were passed around through word of mouth. The original sources, a formulation of her laws, and a yet-unpublished book-length analysis of them were lost in a tragic accident (Zuboff, December 8, 2018, personal communication).
Our objective in this essay is to describe computational thinking from the viewpoint of computing as a discipline. We will examine CT in three dimensions. First, we will show that CT has a long and distinguished genealogy that began over 4,000 years ago. Many of the concepts of modern CT existed well before digital computers were invented, and many key concepts of CT were painstakingly developed by large numbers of people through the formative years of computing as a discipline. Understanding the evolution of CT engenders a deep respect for CT as well as a wisdom about its applications based on human experience that came before.
Second, the essay will demonstrate that the practices of CT fall on a spectrum from beginner to professional. Much of the CT literature has focused on CT for beginners, a natural consequence of the desire to bring CT into K-12 schools. But there is also a considerable literature on advanced CT as used by professional designers, engineers, and scientists. Beginner CT, aimed at inspiring students' interest in computing, has little to say about a profession that relies on advanced CT. Because of all it leaves out, beginner CT does not describe the ways of thinking and practicing of professional computer scientists, either. At the very least, the discussion of a spectrum can assist teachers in showing their students the path they must follow to become professionals in computing.
Third, the essay will show that much of what is today labeled as CT grew out of the computational science movement of the 1980s. That movement emphasized computing as a new way of doing science, capable of cracking the visionary "grand challenge" problems. Its wide acceptance in science and engineering created a background of listening that left us open to the resurgence of CT in the 2000s. Scientists who found computing to be materially different from the traditional ways of theory and experiment used the term "computational thinking" to describe the mental disciplines needed for this new kind of science. They also discovered that many natural phenomena can be understood by modeling them as information processes and using computing to simulate them. Thus, CT in the sciences has had a tremendous shaping effect on CT in computing.
This essay is aimed at computing educators interested in situating CT ideas in the broader picture of computing as a discipline and in the sciences in general. It describes computational thinking as an extension of several centuries-long traditions in science, mathematics, and engineering, and it describes how many ideas today labeled "CT" have been presented, in different forms, in many other sciences well before the birth of the modern computer. Science and engineering through the ages are replete with basic "computational thinking" ideas like abstraction, modeling, generalization, data representation, and logic (see, e.g., Grover and Pea, 2018; Barr and Stephenson, 2011; Bundy, 2007; Yadav et al., 2014; Hemmendinger, 2010). The essay describes how the most common descriptions of CT, "CT for beginners," are just a small subset of computing's disciplinary ways of thinking and practicing. Finally, the essay also explains the ways in which computing really is a new and unique way of looking at problem-solving and automation, as well as of interpreting phenomena in the world.

Defining Computational Thinking
Computational thinking (Papert, 1980) has become a buzzword with a multitude of definitions. Much has been written and said about it since 2006 (Wing, 2006; Saqr et al., 2021). Numerous books, journal articles, blog posts, and large educational initiatives have contributed to the development of the concept. High-level workshops have been organized to discuss and define it (National Research Council, 2010, 2011). Countless years of labor have been invested into descriptions of what exactly CT is, how it is different from other kinds of thinking, and, perhaps most visibly, how to teach it to schoolchildren (Lockwood and Mooney, 2017; Guzdial, 2015; García-Peñalvo et al., 2016; Shute et al., 2017; Grover and Pea, 2013; Mannila et al., 2014; Apiola, 2019; Larsson et al., 2019). The public face of CT is that of beginner, or basic, CT: the kind of computational insights and ways of thinking and practicing that can be taught to children in K-9 or K-12 education. That is a laudable goal and a noble continuation of the "computing for everyone" efforts that span over half a century (Guzdial, 2015).
There is a certain degree of consensus on CT basics. The most commonly mentioned skills and concepts include decomposition, abstraction, debugging, iteration, generalization, and algorithms and their design (Shute et al., 2017). Other recommendations include representing, collecting, and analyzing data; automation; parallelization; problem decomposition; and simulation (Barr and Stephenson, 2011). Although touted as the foundations of computing, none of the basic CT descriptions shows students the path to becoming a computing professional. Other descriptions have aimed to show such a path, including computational design (Denning, 2017), computational participation (Kafai, 2016), computational making (Tenenberg, 2018), computational doing (Barr, 2016; Hemmendinger, 2010), computationalist thinking (Isbell et al., 2010), computational literacy (diSessa, 2000), computational fluency (Resnick, 2017), and computational practices (Lye and Koh, 2014), to mention a few. These names do not capture the full gamut of names used for CT: in previous generations, CT has been known as algorithmizing, procedural thinking, algorithmic thinking, procedural literacy, IT literacy, fluency with ICT, and proceduracy (Tedre and Denning, 2016).
Despite the success of CT in convincing many decision-makers, teachers, and curriculum designers to include and integrate computing in K-12 educational systems, much literature in CT is critical of aspects of the current wave of CT. Concerns have been raised about narrow views of CT, such as undue focus on programming, or even coding, at the cost of high-level CT strategies (Armoni, 2016; Mannila et al., 2014). Concerns have been raised about attempts to separate computing from computers (cf. Connor et al., 2017; Nardelli, 2019; Armoni, 2016; Kafai, 2016; Lye and Koh, 2014; Lu and Fletcher, 2009; Shute et al., 2017; Bers et al., 2014; Repenning et al., 2010). Concerns have been raised about the uniqueness claims for basic CT: there are many similarities between CT and other kinds of thinking in STEM fields (cf. Grover and Pea, 2018; Barr and Stephenson, 2011; Bundy, 2007; Yadav et al., 2014; Hemmendinger, 2010; Pears, 2019; Sengupta et al., 2013; Werner et al., 2012). Concerns have been raised about the lack of clear demarcation between CT and computer science (Nardelli, 2019; Armoni, 2016; Barr and Stephenson, 2011). Concerns have been raised about the ahistoricity of the CT story: CT is often presented as a new phenomenon without any consideration of its co-evolution with science and mathematics, which has led to further critiques that CT ignores past lessons from computing education (cf. Guzdial, 2015; Voogt et al., 2015; Denning, 2017; Tedre and Denning, 2016). One analyst of CT wrote that "flow optimisation in a cafeteria, the classic example offered by Wing, is a clear example of the application of techniques first used in time and motion studies for process optimisation" (Pears, 2019), and another argued that "considering CT as something new and different is misleading: in the long run it will do more harm than benefit" (Nardelli, 2019). And while there is little discord over the importance of CT to the sciences, the relationship between CT and other fields is complicated (cf. Grover and Pea, 2018; Barr and Stephenson, 2011; Bundy, 2007; Yadav et al., 2014; Tedre and Denning, 2017; Hemmendinger, 2010; Hambrusch et al., 2009).
From our study of the genealogy, the science, and the beginner-professional continuum, we have distilled the spirit of the multitude into a definition used throughout this essay: Computational thinking is the mental skills and practices for designing computations that get computers to do jobs for us, and for explaining and interpreting the world in terms of information processes.
The design aspect reflects the engineering tradition in computing, in which people build methods and machines to help other people. The explanation aspect reflects the science tradition in computing, in which people seek to understand how computation works and how it shows up in the world. In principle, it is possible to design computations without explaining them, or to explain computations without designing them. In practice, the two aspects go hand in hand.
Computations and jobs for computers are not the same. Computations are complex series of numerical calculations and symbol manipulations. Jobs are tasks that someone considers valuable. Today many people seek automation of jobs that previously have not been done by a machine. Computers are now getting good enough at some routine jobs that loss of employment to automation has become an important social concern. We do not equate "doing a job" with automation. Well-defined, routine jobs can be automated, but ill-defined jobs such as "meeting a concern" or "negotiating an agreement" cannot.

Algorithmic Genealogy of Computational Thinking
One of the more common descriptions of computing's disciplinary work is that it thinks in terms of algorithms, procedures, or well-defined processes (e.g. Knuth, 1981; Dijkstra, 1974; Harel, 1987). That perspective, which is central to today's descriptions of CT, is also one of the oldest characterizations of computer science. Long before computer science existed, Ada Lovelace, who with Charles Babbage designed the first programs for a programmable computer in the 1840s, described computing as a new "science of operations" (Menabrea, 1842; Priestley, 2011). In the late 1950s, when computing was starting to emerge as a new field, Alan Perlis argued that algorithmizing would eventually become necessary for everyone (Katz, 1960). Algorithmizing was his name for computing's unique kind of reasoning for designing solutions to problems. The path from Lovelace to Perlis was created from a number of separate historical milestones in the first half of the 1900s.
This section has three parts: The first traces the historical roots of algorithm-oriented CT concepts, the second presents some central insights from theoretical computer science, and the third examines contemporary views of those concepts.The CT concepts that were born in the algorithmic tradition of computing range from beginner concepts, such as unambiguous computational steps, to advanced concepts, such as regular expressions and computational complexity.

Algorithmic Genealogy
The algorithmic view of CT has roots in the computational methods of applied mathematics. Algorithm-like procedures have been found on ancient Babylonian clay tablets (Knuth, 1972), and the term "algorithm" comes from the ninth-century Persian mathematician Muhammad ibn Mūsā al-Khwārizmī, whose procedures preceded the modern notion of the algorithm (Knuth, 1981). In the history of mathematics, computational methods helped traders, builders, and scientists to reliably perform important calculations (Grier, 2005; Cortada, 1993; Westfall, 1980). Famous examples abound: Euclid's method found the greatest common divisor of two numbers, the Sieve of Eratosthenes found prime numbers, and Gaussian elimination found solutions to systems of linear equations (Chabert, 1999). These methods were motivated by the pragmatic goal of enabling laypeople to perform mathematical procedures without deep knowledge of mathematics.
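These ancient procedures translate directly into modern notation. As an illustrative sketch (ours, not drawn from the historical sources), here is Euclid's method for the greatest common divisor in Python:

```python
def euclid_gcd(a: int, b: int) -> int:
    """Euclid's method: repeatedly replace the pair (a, b) by
    (b, a mod b) until the remainder is zero; the last nonzero
    value is the greatest common divisor."""
    while b != 0:
        a, b = b, a % b
    return a
```

For example, `euclid_gcd(48, 36)` evaluates to 12. The procedure needs no insight into number theory from its user, which is precisely the pragmatic point made above.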
Over the centuries, algorithmists gradually developed a complex set of ideas for making algorithms effective. These included representing numbers and other data, specifying unambiguous steps, establishing a rigorous logical framework for a procedure, and dealing with round-off errors that result when continuous quantities are represented with finite numbers of bits. In the next paragraphs, we will comment on each of these elements of the algorithmic tradition of computing.
Start with representations. Through the long history of algorithms, every algorithm designer has had to think about how to represent numbers and other symbols (Grier, 2005; Cortada, 1993). The numbers computed by algorithms are actually codes standing for numbers, using a finite set of symbols. Binary coding systems, which use just two symbols, can be found as far back as Babbage's Analytical Engine, and before. The Hollerith machines built for the 1890 US Census used punched cards with patterns of holes representing a person's age, education, and marital status: again just two symbols, a hole or non-hole at a location on the card. Since the 1940s, digital computers have used just two symbols, 1 and 0, represented as high or low voltages in the circuits (De Mol et al., 2018). Today the process of encoding information into a binary representation is called "digitization." Designing good representations is a fundamental issue in CT.
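As a small illustration of representation design (our example, not the essay's), the repeated-division procedure below encodes a nonnegative integer into the two-symbol code that digital computers use:

```python
def to_binary(n: int) -> str:
    """Encode a nonnegative integer as a string over the two symbols
    '0' and '1' by repeatedly dividing by 2 and collecting remainders."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # least significant bit comes out first
        n //= 2
    return "".join(reversed(bits))
```

For instance, `to_binary(13)` returns `"1101"`. The choice of code matters: the same quantity can be represented many ways, and each representation makes some operations easy and others hard.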
Next is the issue of specifying the computational steps of algorithms. The birth of calculus in the mid-seventeenth century gave scientists a much more reliable way to deal with problems requiring copious calculations of functions (Westfall, 1980; Grier, 2005). But that posed a problem: the procedures had to be composed of unambiguous operations, for otherwise they might not give the same results in the hands of different persons. Each step of a computation had to be so precisely defined that there would be no need for human interpretation, intuition, or judgment - or error (Grier, 2005). The use of procedures built from unambiguous steps has become a cornerstone of CT.
Next is the issue of devising a logical plan for computing a function. A procedure specifies individual operations, such as addition and subtraction, and also choices between different sets of operations. Mathematicians turned to the formalization of logic to do this precisely and unambiguously. The usual story of the influence of logic on computing starts with the philosophers René Descartes and Gottfried Leibniz, who sought to formalize how humans reason (Dasgupta, 2014; Davis, 2012; Tedre, 2014). George Boole made a breakthrough when he presented an algebra of logic that represented logical formulas with expressions composed from the connectives and, or, and not (Boole, 1854; Davis, 2012). In 1937 Claude Shannon showed how to represent the switching circuits of telephone systems and computers with Boolean formulas (Shannon, 1937). But Boole's work did not include formal means to deal with making choices and repeating operations. The basis for that was provided in 1879 by Gottlob Frege (1879). The combination of Boole's and Frege's insights became the basis for many programming languages and an important element of CT.
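Shannon's insight, that a switching circuit can be written as a Boolean formula, can be sketched with a hypothetical example of ours: a two-way staircase light controlled by two switches, expressed using only Boole's connectives and, or, and not:

```python
def staircase_light(up: bool, down: bool) -> bool:
    """A two-way staircase circuit: the light is on when exactly one
    switch is flipped. Built only from the connectives and, or, not."""
    return (up and not down) or (not up and down)

# The formula's truth table enumerates the circuit's behavior.
for up in (False, True):
    for down in (False, True):
        print(up, down, staircase_light(up, down))
```

Flipping either switch toggles the light, exactly as the physical wiring does; the formula and the circuit are two notations for the same function.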
It is an irony that despite the great lengths to which the designers of algorithms and machines went to avoid error, errors have been a plague for programmers in all ages. One study of 40 volumes of old mathematical tables found 3,700 errors in them, and another found 40 errors on just one page (Williams, 1997). To reduce execution errors, many algorithms include elaborate acceptance checking to verify that the computation is producing results that meet specifications. Sometimes different algorithms designed by different teams are run in parallel. And great effort has been made to apply formal logic to prove that programs meet their specifications. A similar effort is made by hardware designers to increase trust that the machine implements the basic operations correctly. Minimizing or removing errors will continue to be an important element of CT from the beginning: CS Unplugged, for example, uses a fun and engaging "magic trick" for teaching children error checking.
Next is the problem of designing algorithms to cope with the limited precision of finite-string representations. For example, numbers are represented in many computers as 32-bit quantities, which are incapable of representing all possible numbers. Algorithms can be designed to ensure that round-off errors do not accumulate over long calculations. Mathematical pioneers such as Euler, Lagrange, and Jacobi worked out methods to minimize round-off errors in algorithms long before there were computing machines (Grier, 2005; Bullynck, 2016; Goldstine, 1977).
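The concern is easy to demonstrate. In the sketch below (our illustration, using compensated summation as a modern descendant of such error-control methods), the decimal value 0.1, which has no exact binary representation, is added one million times: naive accumulation lets round-off drift, while the Kahan variant carries the lost low-order bits along so the error does not accumulate:

```python
def naive_sum(xs):
    """Accumulate in the obvious way; each addition may lose low-order bits."""
    total = 0.0
    for x in xs:
        total += x
    return total

def kahan_sum(xs):
    """Compensated (Kahan) summation: keep the bits lost at each
    addition in a correction term c, so round-off does not accumulate."""
    total = 0.0
    c = 0.0
    for x in xs:
        y = x - c            # apply the correction from the previous step
        t = total + y        # big + small: low-order bits of y may be lost
        c = (t - total) - y  # recover exactly what was lost
        total = t
    return total
```

Running both on `[0.1] * 1_000_000` shows the compensated result staying far closer to 100000 than the naive one.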
Finally, there is one more important aspect of the algorithmic tradition: characterizing the limits of computation. The late 1800s to early 1900s were a heyday of formalism in that quest: not only did Frege's predicate logic fill in gaps where Boole's logic could not reach, but mathematics and logic merged in Principia Mathematica, the magnum opus of Russell and Whitehead. Logical empiricism came to rule the sciences. Mathematicians fervently believed that logic would finally allow them to realize Descartes's and Leibniz's dream of formalizing human thought. They sought an ultimate algorithm that could definitively solve the Decision Problem: deciding whether any given statement in predicate logic is valid. The quest to find an algorithm for the Decision Problem was posed in 1928 as one of the major challenges in mathematics (Hilbert and Ackermann, 1928). That problem was resolved in the 1930s simultaneously by several people, among them a young Cambridge mathematics student named Alan Turing. Turing developed a mathematical model of a computing machine capable of hosting any algorithm for the Decision Problem (Hodges, 1983). He called his machines a-machines, with "a" for automatic (Turing, 1936). Turing's conclusion was negative: an algorithm for the Decision Problem is impossible.
Alonzo Church labeled Turing's mathematical model the "Turing machine". Turing presented a universal machine that could simulate any other machine, leading to a universal way of representing all computable activities (Cooper and van Leeuwen, 2013). Turing then showed that an algorithm for the Decision Problem was logically impossible on any machine. Turing's machine model of computing was a signal achievement in mathematical logic. It soon became a cornerstone of the theory of computing and a rallying point for a new kind of computational thinking (Daylight, 2014, 2016). It led to the theory of noncomputable functions and to algorithmic complexity theory.
Nearly all the CT concepts underlying algorithms existed before the dawn of the Information Age and were used in many fields, including mathematics, logic, science, and engineering. The contribution of computer science was to unify them into a framework for getting electronic computers to reliably use algorithms to solve problems.

Efficiency of Automation
Turing's model came to symbolize the question of what can be automated, later dubbed one of the most inspiring philosophical questions of contemporary civilization (Forsythe, 1968; Arden, 1980). But another question vexed those who worked with human computing projects: how to minimize the hand-computing effort and keep the time spent on computing within bearable limits. In the 1800s, even before fully programmable computing machinery had been built, Babbage foresaw the question of how results can "be arrived at by the machine in the shortest time" (Babbage, 1864, p. 137). After the birth of the programmable, fully electronic, digital computer, early programmers grappled with matters of efficiency, and with the issue that some problems were inherently more open to efficient algorithms than others. For the nascent computer science community, a formalization of that phenomenon was provided in 1965 (Hartmanis and Stearns, 1965), and the concept of computational complexity quickly became a central feature of computational thinking.
As another example of the diverse origins of central CT ideas, an early discussion of an "NP-complete" problem and its consequences was started by Gödel, on a question concerning linear vs. quadratic time for proofs in first-order logic (Fortnow and Homer, 2003). In 1971 Steve Cook gave a formal definition of a set of "NP-complete" problems: their known algorithms took impractically long to find solutions, but any proposed solution could be rapidly validated (Cook, 1971).
This idea forever changed computational thinking: tens of thousands of optimization problems, from flight scheduling to protein folding, were shown to be NP-complete (Vardi, 2013). This was gloomy news for those searching for fast algorithms for those hard problems; Moshe Vardi commented, "first-order logic is undecidable, the decidable fragments are either too weak or too intractable, even Boolean logic is intractable" (Vardi, 2013). Over time, algorithm experts found approximations, probabilistic methods, and other heuristics that do surprisingly well for problems in the harder complexity classes (cf. Fortnow and Homer, 2003; Vardi, 2013). Understanding the framework of computational complexity, its foundations, limitations, and its theoretical vs. practical consequences has become essential for intermediate to advanced CT.
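The asymmetry at the heart of NP-completeness, slow search but fast checking, can be sketched with subset sum, a standard NP-complete problem (the code is our illustration):

```python
from itertools import combinations

def verify_subset_sum(nums, target, candidate):
    """Checking a proposed solution is quick: confirm that the candidate
    really is a sub-collection of nums and that it adds up to target."""
    pool = list(nums)
    for x in candidate:
        if x in pool:
            pool.remove(x)
        else:
            return False
    return sum(candidate) == target

def search_subset_sum(nums, target):
    """Finding a solution by brute force may examine all 2^n subsets --
    the exponential blow-up that makes NP-complete problems impractical."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None
```

Verification runs in time proportional to the candidate's length, while the exhaustive search grows exponentially with the number of items; no essentially faster general search method is known.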

Is the Algorithm The Spirit of Computing?
Today's notions of algorithms are rooted in the mathematical definitions of computability that emerged around the 1930s from the pioneering work of Church, Gödel, Kleene, Post, and Turing (Chabert, 1999, p. 457). But with the birth of the digital computer, the concept of algorithm developed along a different, much less mathematical path (Bullynck, 2016; Chabert, 1999), shaped by the pragmatics of getting software to run reliably on real computers (Daylight, 2016). These pragmatics drove a consensus on the main features of algorithms: they are finite sequences of definite, stepwise, machine-realizable operations that manipulate symbols; they may have inputs; they always have outputs; and they finish in a finite length of time (Knuth, 1997). Notice how much this definition of algorithm is tied to computers. Knuth saw algorithms as different in kind from nearly all types of human-executable plans: "An algorithm must be specified to a degree that even a computer can follow the directions" (Knuth, 1997, p. 6).
In the 1950s, programming was regarded as a process to specify algorithms in a formal language that instructed a machine to carry out its steps.Originally, the formal language was assembly language, which was simply the instruction set of the machine.
Programming in assembly language was tedious and error-prone. In the mid-1950s, higher-level languages began to appear. These languages provided single statements corresponding to sequences of many machine instructions. They simplified the expression of algorithms and came with compilers that translated them to machine code (Mahoney, 2011). They included Fortran, Algol, Cobol, and Lisp. The quest for efficient compilers was a strong driver of the research on automata.
By the 1960s, algorithms, programs, and compilers were seen as the heart of computing. A number of prominent people even proposed that the field be renamed "algorithmics" (Traub, 1964; Knuth, 1981; Harel, 1987). In a series of papers, Knuth described how algorithmic thinking differed from classical mathematical thinking (Knuth, 1974a, 1981, 1985). He concluded that the main differences are the design of complex algorithms by composition of simpler algorithmic pieces, the emphasis on information structures, the attention to how actions alter the states of data, the use of symbolic representations of reality, and the skill of inventing new notations to expedite problem-solving.
Others joined Knuth in clarifying how computing differs from mathematics. One author highlighted computing's use of procedural (action-oriented) knowledge instead of mathematics' declarative knowledge (Tseytin, 1981). Another insisted that while mathematicians might be interested in syntactical relations between symbols and their semantics, computing is inherently pragmatic because it aims for software that works (Gorn, 1963). Another argued that the concerns of mathematicians and computing people are fundamentally different (Forsythe, 1968). Another wrote that computer scientists differ from mathematicians by their ability to express algorithms in both natural and formal languages, to devise their own notations to simplify computations, to master complexity and agilely switch between abstraction levels, and to design their own concepts, objects, theories, and notations when necessary (Dijkstra, 1974).
The pure algorithmic view of computing began to be challenged in the late 1960s from a new direction: a larger view of computing that included many people sharing information and machine resources via operating systems and networks (Denning, 2016). Operating systems had a pragmatic origin in the late 1950s. Computers were scarce and were housed in computing centers where engineers could keep them running. Computing centers had to process many programs submitted by many independent users. Their personnel queued jobs for execution, loaded them into the machine, allocated machine resources such as memory and input-output, and delivered results back to their users. Computing center engineers invented the first operating systems to automate this work. However, users detested those early operating systems for their long turnaround times, often 24 hours. In 1960, researchers began to experiment with time-sharing to eliminate turnaround times by enabling interactive programming. Time-sharing operating systems were much more complex. By 1970, a set of operating systems principles had emerged to deal with the complexity: concurrent processes, virtual memory, locality of reference, global naming of digital objects, protection, sharing, levels of abstraction, virtual machines, and system programming (Denning, 2016). An operating system was seen as a "society of cooperating processes" rather than a set of algorithms. Control of concurrency, to avoid nondeterministic behavior and to coordinate signalling across networks, became a central concern. The algorithmic view was insufficient to capture everything people wanted to do in their shared systems.
Probably because of its relative simplicity, the algorithms viewpoint has dominated the K-12 CT movement since 2006. A common description in the K-12 curriculum recommendations is that CT comprises the habits of mind involved in formulating problems in a way that allows them to be solved by computational steps and algorithms (Aho, 2011). These habits include designing abstractions that hide details behind simple interfaces; dissecting solutions into discrete, elementary computational steps; representing data with symbols; and knowing a library of common useful algorithms (Shute et al., 2017). Little or nothing is said about operating systems, networks, concurrency, memory management, information sharing, and information protection: concepts often seen as more advanced forms of CT for which beginners are not ready.

CT: Automation and Machine Control
Turn now to automation - how to get computing machines to do jobs for us (Forsythe, 1969; Arden, 1980; Denning and Tedre, 2019). Automation is a bigger issue than finding an algorithm that will solve a problem. Autopilots, for example, fly planes as well as pilots do. They are complex mechanisms with gyroscopes, GPS sensors, algorithms, and feedback loops. In computing we have looked to automation to enable tasks that humans might be able to do at small scale but cannot do at large scale. An example is finding out whether a particular person is in a video of a moving crowd. Humans can do this reliably only for small crowds. By combining neural networks that can recognize faces with algorithms that search images, we can now automate this task for large crowds. Many forms of computer automation aim to extend small human tasks to large scales (Connor et al., 2017). Another example is drawing the next frame of a video on a computer screen; it would take a human calculating every second of every day for a year to do this job, but a graphics system can do it in 10 milliseconds.
Some proponents of CT have ignored the distinction between doing small and large versions of a task. They argue that algorithms are executed by "information agents" and that humans are information agents. 5 This misleading claim is embedded into some K-12 definitions of CT. While this claim applies to small tasks that can be completed in a short time, it does not apply to large tasks. A machine "agent" can, in a short time, complete large tasks that are completely beyond human capabilities. 6 Today's CT discussions about the role of machinery in thinking about computing are strikingly similar to debates in the 1960s over the status of computing machinery in the nascent computing discipline (Tedre, 2014). At the time, many scientists in other fields claimed that computer science could not be a science because it is about human-made computing machines, not processes of nature. The presence of machines meant that computer science was not really a science. The modern equivalent of this is the idea that an algorithm cannot be an algorithm if it depends on a machine. Even the most ardent supporters of the algorithmic view of computing do not endorse this view. They emphasize that algorithmic processes must be machine realizable. Donald Knuth, for example, in his monumental work The Art of Computer Programming (Knuth, 1997), uses a machine language, MMIX, that closely resembles a von Neumann machine instruction set (Knuth, 1997, p. ix). Turing Award winner Richard Hamming quipped that without the computer almost everything in computing would be idle speculation, similar to medieval scholasticism (Hamming, 1969).

The Machine Counts
In practice, algorithms and computing machines are strongly intertwined. On the one hand, they seem separable because the history of science knew algorithms for centuries, if not millennia, with only a scattering of machines to implement them, such as Pascal's arithmetic calculator (ca. 1650) and slide rules inspired by Napier's logarithms (ca. 1620). Early algorithms helped people undertake complex computations. On the other hand, they seem inseparable today: even the most shining examples of theoretical computer science are often investigated with machine-like terminology, such as the Turing Machine and Knuth's The Art of Computer Programming. Turing himself argued that manipulating symbols mechanically was essential for computing numbers.
In his 1936 paper, Alan Turing defined computability using an automaton that imitates a mathematician carrying out a proof. As observers, we would see the mathematician writing symbols on paper, then moving to adjacent symbols and possibly modifying them, all the while mentally keeping track of some sort of state. He modelled this behavior with an infinite tape and a finite-state control unit. His simple machine, which he called an a-machine (a for automatic), was soon called a Turing Machine. Its moves were of the form "possibly change the current symbol, move a square right (left), and enter a new state." From this he proved the existence of a universal machine (one that can simulate any other) and his remarkable result that the Decision Problem could not be solved by any machine.
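To make the a-machine concrete, the behavior Turing described - read a symbol, possibly change it, move one square, enter a new state - can be sketched in a few lines of Python. The rule table below is a hypothetical illustration (a machine that flips a string of bits and halts on a blank), not any machine from Turing's paper.

```python
# Minimal sketch of a one-tape Turing machine (illustrative, not Turing's own).
# Each rule maps (state, symbol) -> (new symbol, move, new state).

def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Simulate a Turing machine; tape is a dict, so it acts as a sparse infinite tape."""
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")            # blank squares read as "_"
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol                 # possibly change the current symbol
        head += 1 if move == "R" else -1        # move one square right or left
    return tape, state

# Hypothetical example machine: flip every bit, halt at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
tape = {i: s for i, s in enumerate("1011")}
tape, state = run_turing_machine(rules, tape)
result = "".join(tape[i] for i in range(4))     # -> "0100"
```

The sparse-dictionary tape stands in for Turing's unbounded tape: any square never written reads as blank.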
The Turing model of computation won out over competing models because its mechanical, machine-like form was the most intuitive (Kleene, 1981; Church, 1937). The modern definition of algorithm depends on the machine realizability of individual instructions, as in the instruction cycle of a von Neumann CPU. Machines and algorithms intertwine.
Today's debates frame algorithms and automation as points of view that can be compared and contrasted. The algorithmic view sees computations as abstract entities one can reason about (cf. Smith, 1998, pp. 29-32). The automation view sees computations as the operations of physical machines that realize algorithmic tasks on physical media (Smith, 1998). Whereas the algorithmic view sees computations as abstract, the automation view sees them as physical processes (cf. Smith, 1998; Tseytin, 1981).
When historians analyze the progress of computing, they invariably cite progress with machines over progress with algorithms (Williams, 1997; Campbell-Kelly and Aspray, 2004). Many stories of computing reach back to roots in the Jacquard loom, which demonstrated that weaving machines could switch to new patterns by changing the cards (a sort of "program" for weaving). Electromechanical tabulating machinery enabled the 1890 US Census and eventually took over the jobs of thousands of people (Williams, 1997; Cortada, 1993). Analog computers such as mechanical integrators could trace the values of complex functions and solve differential equations (Williams, 1997; Tedre, 2014). Today's competition for world leadership in computing is measured by the speed of supercomputers or banks of Graphics Processing Units, not just the algorithms they run.
In the middle 1980s, John Rice, a pioneer of mathematical software, tried to achieve a more balanced view. He said that the mathematical software of the day had improved by a factor of 10¹², of which 10⁶ was attributable to improvements in hardware and 10⁶ to improvements in the design of algorithms. This is still true today. For example, machines to recognize faces were very slow and error prone in the 1980s, whereas today they use the advanced algorithms of deep neural networks combined with the superior speed of GPU chips to do the job. Despite the desires of some advocates to simplify CT by ridding it of machines, it cannot be done: machines will continue to command our attention in computing.
Some proponents of basic CT mistakenly conflate the stored-program idea with Turing's universal computer idea. Historians have shown that the two developed along parallel historical trajectories (Haigh, 2013). They are separate ideas.
The occasional attempts to separate algorithms from computers foundered in the past and will continue to founder (MacKenzie, 2001). Even the staunchest advocates of the separation did not model the distinction in their own work. For example, Dijkstra showed great prowess writing efficient compilers and operating systems, and yet said "the computing scientist could not care less about the specific technology that might be used to realize machines, be it electronics, optics, pneumatics, or magic" (Dijkstra, 1986). This was not a passing statement. He repeatedly said that "computer science is not about machines, in the same way that astronomy is not about telescopes" (Fellows, 1993; Dijkstra, 2000; Daylight, 2012).
In the end, the marriage of the algorithm and automation views drove CT into central questions that shifted as new algorithms and machines were developed. For example, Babbage's idea that computers could eliminate human error was displaced a century later by the realization that machines were so complex that no one could be sure they did what the algorithms told them to do - and even less sure whether they met their designers' intentions (Smith, 1985, 1998). The early question of measuring the "cost" of an algorithm as the CPU time it consumed gave way to network performance measures such as throughput and response time.

Is Programming Essential in CT?
The invention of the computer created a new concept - the program - that came to symbolize the computer age. Useful computer programs written in high-level languages can be transferred to different computers, where they can be compiled to different machine code and executed. Program libraries such as mathematical software became standard features of computing systems. Downloadable software "apps" are a standard feature of today's portable devices. Software libraries are universally available. CT aims to elucidate the thought processes behind the designs of all these programs (Bell and Roberts, 2016; Wing, 2006).
The focus of public discourse on CT is not the abstract algorithm but the executable computer program. Code.org, the International Society for Technology in Education (ISTE), the Computer Science Teachers Association (CSTA), and Google for Education all refer heavily to programming skills and concepts in their CT material, typically using Python (Google), Blockly (Code.org), or Scratch (CSTA). 7 The skilled practice of programming is widely seen as central to CT. Yet many programming language concepts that are today regarded as self-evident - such as while-loops, data structures, and recursion - were not initially apparent, but were the result of much work by brilliant people over many years (Knuth and Trabb Pardo, 1980). A significant body of software was written before crucial programming language concepts started to emerge (Glass, 2005). At least one computing pioneer wondered aloud how all those non-trivial programs were made to work by people who had only "primitive" mental tools for programming (Dijkstra, 1980). Programming methodology, developed since 1970, aimed at improving the dependability, reliability, usability, security, safety, and even elegance of programs - not always compatible goals (Daylight, 2012). Evolving programming methodology brought new programming language constructs and programming techniques such as structured programming and object-oriented programming (Liskov, 1996). Much CT terminology and many CT concepts originate directly from developments in programming methodology and software engineering.

Origins of Tools for Computational Problem Solving
The five key programming aspects of basic CT - modularity, data structures, encapsulation, control structures, and recursion - are often held to be unique to computing. But this is not strictly true: each has deep roots in many fields. This is good for CT, because these ideas have stood the test of time.
Because programming is such a central aspect of computing, much effort has gone into the design of programming languages, starting in the 1950s. High-level languages simplified the programming job and reduced errors in programs. They came in many flavors - such as procedural programming, functional programming, symbolic programming, script programming, artificial-intelligence programming, object-oriented programming, and dataflow programming - each attuned to a particular style of problem-solving (Knuth and Trabb Pardo, 1980; Sammet, 1972; Wexelblat, 1981). The number of programming languages multiplied over the 1950s and 1960s (Sammet, 1972). Let us take a closer look at the origins of the five basic ideas of CT.
Modularization. The very first programming textbook, from 1951 (Wilkes et al., 1951), noted the need to divide programs into smaller, manageable pieces. Reducing complex systems to structures of many simple components is an old engineering practice. Modularization of programs was achieved by subroutines, functions, procedures, and classes in programming languages. The Atlas machine at the University of Manchester introduced hardware support for subroutine calls. The major languages of the late 1950s - Fortran, Algol, Lisp, and Cobol - all included subroutines. Structuring programs as callable modules fascinated educators, who called it procedural thinking (e.g., Solomon, 1976; Abelson et al., 1976). The early development of software engineering after 1968 used metaphors from industrial and mechanical engineering (Mahoney, 2011, pp. 93-104), emphasizing parallels with the automobile industry, interchangeable parts, machine tools, and industrial mass production (Naur, 1969; Mahoney, 2011; Randell, 1979).

Data Structures and Encapsulation. In the early 1960s, experienced programmers advised their students to start the design of a program with the organization of the data. They had found that choosing the right data structure for the job at hand was key to finding a simple algorithm. By the late 1960s, this practice was called "data abstraction". It specified that a data structure would be hidden behind an interface of operations presented to users; users could not access the data directly. This approach allowed improvements to be made to a module without requiring changes to other modules that used it. For instance, Simula 67, a simulation language, incorporated this idea into the structure of the language (Holmevik, 1994). The idea evolved into object-oriented programming by the early 1970s (Krajewski, 2011; Liskov, 1996; Hoare, 1972; Sammet, 1981).
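The data abstraction practice described above can be conveyed with a short sketch in Python (the Stack class is our own illustrative example, not drawn from the sources): the list holding the data is hidden behind an interface of operations, so the representation could later change without affecting client code.

```python
# Illustrative sketch of data abstraction / encapsulation.

class Stack:
    """Clients use only push, pop, and is_empty; the storage is hidden."""

    def __init__(self):
        self._items = []            # internal representation, not part of the interface

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1)
s.push(2)
top = s.pop()                       # -> 2
```

If the list were replaced by, say, a linked structure, every caller of push and pop would continue to work unchanged - exactly the improvement-without-ripple-effects property the paragraph describes.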
Control Structures. The early ideas of the von Neumann architecture conceived of instructions as "orders" that the machine obeyed. Programming was seen as a way to control machines, so a lot of attention was paid to the organization of control. In 1966, Böhm and Jacopini published a theorem stating that three control structures (sequence, iteration, and selection) are sufficient for any program (Böhm and Jacopini, 1966). In 1968, Dijkstra introduced structured programming, which had specific statements for sequencing, iterating, and selecting; he emphasized that these are the three ways that we organize our proofs that a program works correctly. (Although structured programming was fundamentally about good abstraction practices, many well-known expert programmers did not endorse it (Hoare, 1996). Unfortunately, the debate derailed into one about whether it was wise to allow the go to statement (e.g., Dijkstra, 1968; Knuth, 1974b).) The Böhm-Jacopini minimalistic insight was taken as a loose programming analog of the basic logic gates of computer circuits. But it is clear from the notes of Babbage and Lovelace that they used the same programming and machine structures without giving them explicit names, and the same concepts have arisen in different contexts throughout history (Rapaport, 2018).
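As a minimal illustration of the Böhm-Jacopini result, a Euclid-style greatest-common-divisor function (our own example) can be written using only the three structures - sequence, selection, and iteration - with no go to statement.

```python
def gcd(a, b):
    """Greatest common divisor using only sequence, selection, and iteration."""
    if a < 0:                # selection: normalize the sign of a
        a = -a
    if b < 0:                # selection: normalize the sign of b
        b = -b
    while b != 0:            # iteration
        a, b = b, a % b      # sequence: one step of Euclid's algorithm
    return a

gcd(48, -18)                 # -> 6
```

Any structured program can in principle be rewritten this way, which is exactly what the theorem guarantees.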
In the 1960s, the idea of control structures blossomed into many new ways to specify the order of operations in programs. They included new ways to control instruction flow between blocks of statements, such as repeat-until, do-while, if-then-else, and case statements. They also included ideas to allow concurrent operations within a program, controlled by fork and join operations and synchronized with semaphore operations (Hoare, 1996; Knuth and Trabb Pardo, 1980; Glass, 2005).
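The fork, join, and semaphore operations mentioned above survive in today's languages. A small sketch using Python's threading module (the shared-counter workload is a hypothetical example) shows a binary semaphore protecting a shared variable updated by concurrently started ("forked") threads that are then joined.

```python
# Illustrative sketch of fork/join and semaphore-style synchronization.
import threading

counter = 0
sem = threading.Semaphore(1)        # binary semaphore guarding the shared counter

def worker(n):
    global counter
    for _ in range(n):
        with sem:                   # P (wait) ... V (signal) around the critical section
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()                       # "fork": start the concurrent activities
for t in threads:
    t.join()                        # "join": wait for all of them to finish
# counter is now 4000, with no lost updates
```

Without the semaphore, the read-modify-write on the counter could interleave nondeterministically - the very hazard the operating-systems pioneers set out to control.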
Recursion. The technique of recursion was known to mathematicians in the 1800s as "definition by induction". It entered computing as a theoretical construct from mathematical logic in the 1930s (Soare, 1996). It was an integral part of Gödel's and Kleene's models of computation. It entered as a practical means of programming through the languages Algol and Lisp (Daylight, 2012, Ch. 3). It entered as a means to specify elegant algorithms, such as Hoare's 1961 Quicksort. In the 1960s, the Burroughs Corporation built the B5000 and B6700 machines to provide highly efficient stack-oriented execution environments for recursive programs. These machines removed any doubts that recursive program execution could be efficient. Implementations of stacks in hardware and operating system software became permanent fixtures in computing.
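Hoare's Quicksort remains the canonical example of an elegantly recursive algorithm. The version below is a compact functional sketch in Python; it favors clarity over the in-place partitioning of Hoare's original.

```python
def quicksort(xs):
    """Recursive Quicksort sketch: partition around a pivot, then recurse."""
    if len(xs) <= 1:                          # base case ends the recursion
        return xs
    pivot, rest = xs[0], xs[1:]
    left = [x for x in rest if x < pivot]     # elements below the pivot
    right = [x for x in rest if x >= pivot]   # elements at or above the pivot
    return quicksort(left) + [pivot] + quicksort(right)

quicksort([3, 1, 4, 1, 5, 9, 2, 6])           # -> [1, 1, 2, 3, 4, 5, 6, 9]
```

Each recursive call operates on a strictly smaller list, so the recursion terminates - the "definition by induction" that mathematicians knew a century before computers.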
These five ideas, appropriated from early computer science and other fields of engineering, science, and mathematics, formed the core of a new way of solving problems (Forsythe, 1959; Katz, 1960). Hundreds of articles and books described computational methods and CT concepts as tools for problem-solving in different programming languages. These ideas have become the core of the modern movement for beginner CT (Aho, 2011; Wing, 2006). 8

Software Development and Design
Programming methodology promoted best practices for designing and writing programs. It helped programming evolve from a "black art of obscure codes" into a rigorous discipline (Wirth, 2008; Backus, 1980; Dijkstra, 1980). It provided the mental tools for analyzing problems in a way that permitted computational solutions. But by the late 1960s, the software industry and its customers were painfully aware of how inadequate their programming methods were for large software systems, and of just how difficult it is to write reliable program code for large systems (Ensmenger, 2010). Developers of large software systems faced chronic problems with missed deadlines, overrun budgets, poor reliability and usability, unmet specifications, difficulties in managing software projects, and safety (Mahoney, 2011; Ensmenger, 2010; Friedman and Cornford, 1989). None of those problems could be addressed with improvements in programming methodology alone. In 1968, a NATO conference acknowledged the software crisis and agreed to launch a new field, software engineering, to do something about it (Friedman and Cornford, 1989). As software engineering gradually became a respected profession (Ensmenger, 2001), its new ideas gradually entered advanced computational thinking.
Software engineering had broad appeal. It suggested that many traditional ideas from engineering could be brought to the development of large software systems. Soon the term software engineering turned into an umbrella term for a variety of practices for bringing large, complex, safety-critical software systems into production (Ensmenger, 2010; Tedre, 2014). There was soon a debate among educators about whether software engineering is a branch of computer science or of engineering. Many doubted whether the mathematical mindset of computer science departments would be amenable to an engineering mindset for software. Many aspects of software engineering, such as design strategies, management of software projects, customer service issues, and safety issues, did not seem to fit in computer science departments (Naur and Randell, 1969).
The terms programming in the small and programming in the large were used to distinguish between the design of single procedures, algorithms, or programs and the design of large systems possibly consisting of many interacting programs. Computing pioneer David Parnas summed up programming in the large as managing "multi-person development of multi-version programs" (Parnas, 2011). He cited the issues of communicating with the intended users and elucidating their requirements, managing large teams of programmers, coordinating software development projects, dealing with complexities that arise from millions of lines of code and increasingly complex hardware, maintaining and improving software after its release, and training programmers to think like engineers (Ensmenger, 2010; Parnas, 2011; Mahoney, 2011). All the efforts for large systems opened a whole world of advanced CT concepts, practices, and professional skills.

Software Systems Thinking for Professionals
Systems engineering emerged when new sociotechnical systems grew so complex that single individuals could no longer design them. Grace Hopper pointed out the turning point in computing: "Life was simple before World War II. After that, we had systems" (Schieber, 1987).
Operating systems were among the first large, complex software systems. There have been a few instances in history where an entire operating system was designed and implemented by one or two persons - for example, the THE multiprogramming system around 1968 (Dijkstra, 1968), the UNIX system around 1972 (Ritchie and Thompson, 1974), and the XINU system around 1980 (Comer, 2012). When large systems have been put together by large teams, they have become too large for any one person to understand (Brooks, 1975).
As in other engineering fields, when software systems grew too large for any single person to develop and maintain, new ways of planning, designing, and developing systems were needed. The emergence of software engineering was a systems-thinking-based response that superseded the older programming-in-the-small practices (Brooks, 1975, 1987). The systems responses typically arose to meet problems encountered in production. One computing pioneer reminisced, "I have never seen an engineer build a bridge of unprecedented span, with brand new materials, for a kind of traffic never seen before - but that's exactly what has happened on OS/360 and TSS/360" (Randell, 1979).
Software modules assumed a strong place in software engineering. Like the bolt-on hardware modules familiar to engineers, software modules are system components that can be developed and maintained independently. Most modules are designed as black boxes that internally hold a hidden data structure, with an external interface that specifies all the functions that can be performed on the internal data. Modules developed for one system can be reused in another. Modularization facilitates the decomposition of a large problem into small subproblems whose modules are easier to design. Modularization is the pragmatic approach of software developers to putting principles of abstraction to work.
For a while, development engineers believed that modules were the key to large systems, where the programming and testing had to be distributed among many programmers. Each programmer was given a detailed specification of the interface and asked to prove and validate through testing that the interface worked as intended and that all the internal data were completely hidden. Yet when independently developed modules were brought together in a system, they often failed. The failures arose from subtle differences in the ways that the development teams interpreted the interface operations. Somehow the overarching principles of the system had to be communicated to and understood by all the module development teams (Brooks, 1975).

Portability was an important side benefit of modularity. It meant that a module developed on one system could be transported into another system with possibly different operating systems and hardware. One approach was to gather a set of related modules into a library, such as mathematical software or Java language add-ons, and provide the library to users on many machines so they could link modules as needed. Another approach was to design the modules in high-level languages and use compilers to translate them into machine code for the specific machine. Still another was to design a family of machines (such as IBM OS/360) with the same instruction set, which allowed modules to be reused on other members of the family without recompilation (Brooks, 1975). And finally there is the approach of the Java language, which defined a middle-level virtual machine that can be implemented on each host machine. The compilers of the modules translate module operations to the virtual machine interface, which the virtual machine in turn translates into machine code.
In the 1990s, expert designers concluded that exchanging models is not necessarily the best way to share design expertise. Inspired by the work of Christopher Alexander, a famous architect (Alexander, 1979), they specified a number of important thought patterns that appear in software systems (Gamma et al., 1994). They identified design patterns for a large number of common programming situations, such as where a program needs only one instance of a class, or where the program needs to sequentially access the elements of a set. Another approach to sharing experts' computational thinking was design principles (e.g., Saltzer and Schroeder, 1975), which are holistic ways of thinking about rigorous designs for systems consisting of numerous interacting components. Yet another approach was design hints, which acknowledged the difference between designing algorithms and designing systems. Design hints were an attempt to crystallize the design choices and judgments skilled systems designers had learned to make: they included maxims like "separate normal and worst cases," "make actions atomic," and "keep interface stable" (Lampson, 1983). Design patterns, principles, and hints are advice from experts to other experts and would probably make little sense to novice programmers.
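The first situation mentioned above - a program needing only one instance of a class - is the Singleton pattern of Gamma et al. A minimal Python sketch (the Config class is a hypothetical example of ours) shows the idea: every attempt to construct the class yields the same shared object.

```python
class Config:
    """Singleton sketch: construction always returns the one shared instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:                 # create the single instance lazily
            cls._instance = super().__new__(cls)
        return cls._instance

a, b = Config(), Config()
same = a is b          # -> True: both names refer to the one instance
```

Patterns like this are exactly the kind of distilled expert judgment the paragraph describes: a named, reusable answer to a recurring design situation.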
All these aspects of systems thinking for professionals are examples of advanced concepts of computational thinking. Whereas basic CT is typically more generic, more widely applicable, and less unique to computing as a discipline, advanced CT is typically more specialized, born and honed through experience in the design, implementation, and maintenance of large computer and information systems.

Computational Thinking and Science
There is wide appreciation that computing has transformed science and engineering in fundamental ways. This appreciation is one of the most important reasons for the attractiveness of CT. Computing fundamentally improved the collection and analysis of data, the design of simulations and models, and the ability to model information processes found in nature. Computing has been called the "third pillar" of science (Oberkampf and Roy, 2010), the "fourth great scientific domain" (Rosenbloom, 2013), and the "most disruptive paradigm shift in the sciences since quantum mechanics" (Chazelle, 2006). Along with this shift, computing professionals deemphasized the idea that computing is a science of automation and embraced the idea that it is a science of natural and artificial information processes (Denning, 2007). Throughout computational science, computing does not just "enable" better research but often drives productive new kinds of research (Meyer and Schroeder, 2015, p. 207) - although many "new" ideas in computational science have clear counterparts in pre-computer science, too (Agar, 2006).
Despite early claims that basic computing ideas are easily transferred across domains, STEM educators have concluded that CT is not domain-independent; it looks different in different disciplines (Weintrop et al., 2016; Yadav et al., 2017; Barr and Stephenson, 2011). Critics have called the over-zealous push of a standardized notion of CT into other domains of science "arrogant", "imperialistic", and "chauvinistic", or just plain "ill considered" (Hemmendinger, 2010; Denning et al., 2017). What is more, the info-computational (Dodig-Crnkovic and Müller, 2011) or algorithmic revolution in science has not been a monolithic single revolution that overthrows an old regime (Tedre and Denning, 2017). The transformation has been gradual. Four distinctions emerged that were emblematic of computational thinking in science. They are discussed next.

1. A new instrument of science.
Massive increases in computer speed and memory allowed scientists to run simulations and evaluate mathematical models that were previously untouchable (Grier, 2005). For example, scientists in computational fluid dynamics knew how to build models for complete aircraft simulation, but did not have access to supercomputers capable of running them until the late 1980s. Experimental scientists embraced data science as a new set of analytic methods for very large data sets. Theoretical scientists got tools for numerically solving equations that had no closed-form solutions (Tedre and Denning, 2016).
These tools allowed complex models of dynamic systems to be evaluated in near real time. Models for weather forecasting (Grier, 2005, pp. 142-144, 169) and nuclear reactions (Haigh et al., 2016, p. 5) have pushed the state of the art since the 1940s. In the 1980s, scientists from all fields compiled a list of "grand challenge" problems that would be solved with sufficient computing power and, with help from Moore's law, predicted when these solutions would be feasible (Executive Office of the President: Office of Science and Technology Policy, 1987). These problems included fusion energy, design of hypersonic aircraft, full simulation of aircraft in flight, cosmology, and natural language understanding.
2. New scientific methods. Since 1950, scientists have been able to bring to their investigations new methods enabled by the electronic digital computer. Early computers during World War II allowed rapid calculation of ballistic trajectories of new ordnance and cracked the German Enigma cipher. Monte Carlo simulation became fashionable in the mid-1940s for finding probabilistic approximations for thermonuclear and fission devices, cosmic rays, high-temperature plasma, and many other phenomena (Eckhardt, 1987). In the 1980s, supercomputers led to a rapid proliferation of simulations in the sciences, leading to discoveries that earned Nobel Prizes on topics such as phase transitions in materials and interactions between tumor viruses and cells (Tedre and Denning, 2017).
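The flavor of Monte Carlo approximation can be conveyed with a classic toy example (ours, not one from Eckhardt's account): estimating pi by sampling random points in the unit square and counting how many land inside the quarter circle.

```python
# Illustrative Monte Carlo sketch: estimate pi from random samples.
import random

def monte_carlo_pi(n, seed=0):
    """Estimate pi; the fixed seed makes the sketch reproducible."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:    # point falls inside the quarter circle
            hits += 1
    return 4 * hits / n             # quarter-circle area ratio, scaled to pi

estimate = monte_carlo_pi(100_000)  # close to 3.14159 for large n
```

The estimate improves only as the square root of the sample count, which is why such methods became practical only once machines could draw millions of samples quickly.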
Computer modeling and simulation evolved into a new way of doing science: investigation using the computer as the instrument and experimental apparatus. Physicists studied phase changes of materials, chemists the design of new molecules, economists simulations of national and world economies, cosmologists the evolution of the universe, biologists the structures of DNA, and much more. None of these investigations could be done with the traditional methods of experimental or theoretical science. Computation was seen as a new way of doing science. All the fields using computational methods established new branches of computational science. The term "computational thinking" came into vogue to characterize the new kind of thinking required for this new way of science. The computational sciences movement received political support in the US with the passage of the High Performance Computing and Communications Act (1991), which opened new streams of funding for computational science research and development. Simulation has become so important that it is today inconceivable that major infrastructure could be built without exhaustive simulations in advance.
3. New lens for interpreting results. Simulations enabled "virtual experiments" in which natural processes could be modeled as information processes. The good agreement between these models and the real processes led many scientists to change their views and interpret their fields as studies of "natural information processes". Biology was the first field to fully embrace this in its study of DNA sequencing and genome editing; in 2001, Nobel laureate David Baltimore claimed that "Biology is an information science" (Baltimore, 2002). Leonard Adleman declared himself to be a scientist studying information processes in DNA transcription and cell metabolism (Adleman, 1998). Many other fields soon followed. For example, cognitive science said it studies natural information processes in the brain, physics said that quantum processes are fundamentally information processes and can be used to power quantum computers, and economics said it is an information science. In short, computing changed the epistemology of science.
Since the computational science revolutions of the 1980s, many scientific fields have established computational branches that interpret natural processes as information processes and study them with computers (Kari and Rozenberg, 2008). The term "natural computing" is often used for them. It is now common in many fields to explore natural phenomena with computational models such as cellular automata, neural networks, and quantum computing.
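As a small taste of such models, an elementary cellular automaton takes only a few lines of Python. Rule 90 below (a standard example from Wolfram's numbering scheme) computes each cell as the XOR of its neighbors, growing a Sierpinski-like pattern from a single live cell; the seven-cell wrap-around row is our own illustrative setup.

```python
# Illustrative sketch of an elementary cellular automaton (Wolfram's Rule 90).

def step(cells, rule=90):
    """Advance one generation; cells is a tuple of 0/1 with wrap-around edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right   # 3-cell neighborhood as 0..7
        out.append((rule >> index) & 1)               # look up the rule's output bit
    return tuple(out)

row = (0, 0, 0, 1, 0, 0, 0)
row = step(row)    # -> (0, 0, 1, 0, 1, 0, 0): the single cell splits into two
```

Despite the trivial local rule, iterating such automata produces the kind of complex global behavior that makes them attractive as models of natural processes.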
The computational methods of science are subject to the same limitations as any computations. Computational methods do not help with problems for which computational solutions are intractable. Computing's major open question, "P = NP?", has become a fundamental question in science, too. As science becomes more computational, the limits of computation draw new boundaries for knowledge.
An interesting side effect of this transformation of science is that the early controversy about whether computer science is science has disappeared.
4. New speculations on the structure of the world. The enthusiasm for natural information processes has led some prominent scientists to claim that the universe itself is an information process. For example, some theoretical physicists believe that the quantum wave functions that govern all the basic particles are information processes; since all matter is built from quantum particles, they speculate that the whole world is an information processing system (Dodig-Crnkovic and Müller, 2011; Dodig-Crnkovic, 2013; Fredkin, 2003). Others argue that the unreasonable effectiveness of computational models in the sciences demonstrates that everything in nature computes. Molecules compute their bonds and interactions (Hillis, 1998, pp. 72-73), living organisms compute life (Mitchell, 2011), the universe computes its own time-evolution (Chaitin, 2006), the universe is a cellular automaton (Zuse, 1970; Wolfram, 2002), the universe is a quantum computer (Lloyd, 2007), and everything physical is information-theoretic by nature (Wheeler, 1990; Davies, 2010). In the "it from bit" interpretation (Wheeler, 1990), information in the form of bits (or, more recently, qubits) is the fundamental building block of the world. There are many forms of computational accounts of the world (Piccinini, 2017). But many of those views are controversial and are not widely accepted.
All these information views of the universe suggest that CT could become a fundamental thinking tool for understanding the mechanisms of the universe.

Reflections
We have taken a deep dive into several fundamental aspects of computational thinking: its definition, its genealogy, its continuum from beginner to professional, and its inheritance from computational science. Our findings are based on extensive literature on computing's disciplinary history and computational thinking. Now the question is: what is important for us educators to focus on as we continue our journey with computational thinking? Here are our reflections on this.

The Importance of Our History
A number of analysts have warned about the tendency to present CT devoid of its historical context (e.g., Nardelli, 2019; Guzdial, 2015; Voogt et al., 2015; Denning, 2017). Why is it important to know the history of these ideas? There are many reasons. None of the ideas about CT formed in the emptiness of a new mind-space opened in the early 2000s. The disciplinary history of computing includes many attempts to describe the unique intellectual core of computing (Tedre, 2014). There is no shortage of literature to support investigations of the origins of ideas. These histories enable us to trace CT concepts back through the beginnings of computer science in the 1950s, and in some cases much further, hundreds or even thousands of years. In short, the ideas we work with today are distillations of the work of many people before us. Our predecessors have sharpened and honed them, adopting ideas that work and steering clear of ideas that do not. For example, our idea of information-hiding software structures originated in the engineering idea of modules: components with a definite interface whose inner workings are hidden. Software modules can be treated like hardware modules; they can be replaced by new versions without disrupting the rest of the system. When we make ourselves familiar with the history of our ideas, we become wiser and can tell our students why things have evolved as they have.
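The module idea can be shown with a minimal sketch. The class and method names below are our own, chosen for illustration: two implementations sit behind the same interface and are interchangeable without disturbing any caller.

```python
class Stack:
    """A module with a definite interface: push, pop, is_empty.

    Callers depend only on this interface, never on the Python list
    hidden inside.
    """
    def __init__(self):
        self._items = []            # hidden representation

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items


class LinkedStack:
    """A drop-in replacement: same interface, different hidden internals.

    Swapping Stack for LinkedStack does not disrupt the rest of the
    system -- exactly the property the module idea promises.
    """
    def __init__(self):
        self._head = None           # hidden representation: linked pairs

    def push(self, item):
        self._head = (item, self._head)

    def pop(self):
        item, self._head = self._head
        return item

    def is_empty(self):
        return self._head is None
```

Any code written against the interface runs unchanged with either class, just as a hardware module can be replaced by a pin-compatible successor.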
Awareness of history can also reveal blind spots in our current thinking. Consider the Turing machine model of computation. When he introduced his model in 1936, Turing entered a competition to provide an answer to a problem in mathematical logic. Other proposals to represent computing included the string substitution systems of Post, the lambda calculus of Church, and the recursive functions of Gödel. Within a few years, it was established that each model could be simulated by any of the others, showing that they are all equivalent in their power to represent computations. The Turing model won out as the standard because its mechanical, machine-like form was the most intuitive (Kleene, 1981; Church, 1937). Later, pioneers of computing learned much about what real computers could do (or not do) by studying the capabilities and limits of Turing machines. In our thinking today, we have inherited the Turing machine notion that algorithms are step-by-step procedures carried out by machines. Few of us think of computations as string substitutions, function evaluations, or recursive functions. The Turing model may be too narrow to allow us to understand new forms of computation such as deep learning networks and quantum computers.
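The equivalence of those models can be glimpsed even in a toy computation. The sketch below, our own illustration rather than anything from the sources cited, expresses n! three ways: as a machine-like step-by-step procedure, as a recursion, and as an anonymous function value in the lambda-calculus style.

```python
def factorial_steps(n):
    """Machine-like: an explicit step-by-step procedure over mutable state."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_rec(n):
    """Recursive: a base case plus a recurrence, in Godel's style."""
    return 1 if n == 0 else n * factorial_rec(n - 1)

# Lambda-calculus style: an anonymous function value built with a
# fixed-point combinator (the strict-evaluation variant of Y).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
factorial_lam = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
```

All three compute the same function, as the equivalence theorems guarantee, yet only the first reads like the "step-by-step procedure carried out by a machine" that dominates our inherited intuition.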

Programming and Machines Are Essential
Some CT proponents have tried to distance CT from the computer, and a few also from programming (see discussions in, e.g., Connor et al., 2017; Nardelli, 2019; Armoni, 2016; Lye and Koh, 2014; Lu and Fletcher, 2009; Shute et al., 2017). We have argued that it is difficult to understand many CT concepts without understanding the machine in the background. We repeat our earlier warning that attempting to define and study algorithms without reference to a computing machine creates an unrealistic image of algorithms, disconnected from how algorithms are understood in today's broader scientific and engineering discourse. Without a machine to execute it, an algorithm is an abstract mathematical construct that cannot produce real results in the world. If there were no computers, programming would be limited to narrow theoretical uses. In today's computing, an algorithm is the connection from our mental idea of what we want done to a machine that carries out our intent. Since the birth of the field, computing as a discipline has been driven by the union of algorithms and machinery.
Another casualty of treating algorithms independently of machines is an understanding of the differences between what a machine can do and what a human can do. A machine can carry out an enormous number of calculations in the time a human takes to do just one. Human agents are limited by their biology: they can carry out small algorithms that complete in a few minutes or hours. It is utterly impossible for a human agent to carry out the operations of most software programs. For example, a single frame refresh for a graphics display would take a human years to calculate. Conversely, many things that are routine for humans turn out to be computationally intractable for machines. The way humans reason about problems is fundamentally different from the way computers calculate solutions to problems.
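The frame-refresh claim can be checked with rough arithmetic. The resolution, per-pixel operation count, and human calculation rate below are illustrative assumptions of ours, not figures from the text.

```python
# Back-of-envelope estimate: one 1920x1080 frame at roughly 100
# arithmetic operations per pixel, performed by a human at one
# operation every 5 seconds, working 8 hours a day.
pixels = 1920 * 1080
ops = pixels * 100                      # ~2.1e8 operations for one frame
human_seconds = ops * 5                 # one operation per 5 seconds
work_seconds_per_day = 8 * 3600
days = human_seconds / work_seconds_per_day
years = days / 365                      # roughly a century for one frame
```

Under these assumptions a single frame costs a human working lifetime, while a modest GPU produces sixty such frames every second.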
We are able to get machines to go fast because the individual calculations are completely independent of context. They are executed by circuits that respond to their inputs by well-defined local rules. Humans bring great wisdom and understanding to their jobs and decisions because, through their biology, they can sense the context. Humans and machines are not equivalent. Computational thinking is, for all intents and purposes, not about how to design and reason about algorithms in the abstract, but about how to make machines do algorithmic tasks for people. Without the machine there would be no computational thinking today. It is important to keep the machine in view, even if from a distance.

Basic CT Is Not Unique to Computing
We have expressed our concern that beginner CT, which is the public face of CT, leaves out many aspects of computing's rich body of knowledge. We have stressed the importance of recognizing that there is a range of CT skills from the beginner to the professional. The beginner skills emphasize basic programming and algorithm design. It is entirely appropriate for K-12 curriculum recommendations to emphasize beginner skills, because the students are beginners.
However, therein lies a dilemma: basic "computational" thinking for beginners consists of skills and concepts that are not unique to computing; most of its central concepts are found in many disciplines. The ideas that make computing unique are found much further along the spectrum from basic to advanced CT. The advanced skills of professionals include designing and building large, reliable, and safe software, simulations, and artificial intelligence, as well as performance evaluation of systems, distributed networks and operating systems, and interfaces for complex systems. Our teachers need an appreciation for what professionals do, because many students will ask what comes next.

Domain Knowledge is Essential to Computational Thinking
One of the conceits of CT has been the claim that CT enriches the mind and enables problem-solving in many domains. This notion, which dates back to the 1950s (Forsythe, 1959; Katz, 1960), appears to have been reinforced by the 1980s computational science movement, when scientists from many fields claimed that computing is a new way to do science. It seemed that every field of science defined a computational branch to apply computing. This gave the appearance that computing concepts entered science and transformed how science is done. But all experience in the computational sciences tells us that the participating scientists need deep knowledge of the domain in addition to their computing knowledge. Take aircraft design as an example. In the 1980s, the largest aircraft were too big for any wind tunnel facility. In collaboration with the aircraft industry, NASA embarked on "aerodynamic simulation", meaning the simulation of air flows around the wings and bodies of planes. The objective was a full simulation of an aircraft in flight, designed by supercomputer without wind tunnel testing. This required supercomputers computing advanced fluid-flow equations around the aircraft. The scientists who programmed the simulations needed deep knowledge of fluid dynamics to understand how to design grids and prevent round-off errors from accumulating. No computer science curriculum teaches computational fluid dynamics.
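The round-off concern is easy to demonstrate in miniature. The generic example below (ours, not a fluid-dynamics code) shows error accumulating in a naive running sum and being suppressed by Kahan's compensated summation, the kind of numerical care those simulation scientists had to exercise.

```python
def naive_sum(values):
    """Plain running sum: round-off error accumulates with each addition."""
    total = 0.0
    for v in values:
        total += v
    return total

def kahan_sum(values):
    """Kahan compensated summation: carry the lost low-order bits forward."""
    total, carry = 0.0, 0.0
    for v in values:
        y = v - carry
        t = total + y
        carry = (t - total) - y     # recover what the addition dropped
        total = t
    return total

values = [0.1] * 10**6              # true sum: 100000.0
err_naive = abs(naive_sum(values) - 100000.0)
err_kahan = abs(kahan_sum(values) - 100000.0)
```

Even in this tiny case the compensated sum is orders of magnitude closer to the true value; over billions of grid-point updates, the difference decides whether a simulation is trustworthy.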
This is true of every domain using computers. Algorithms and systems are designed with deep knowledge of the domain. They are not simply straightforward applications of computing techniques.

Basic CT is not Computer Science
It is important to avoid the trap of equating CT with the academic discipline of computing. Basic CT does not teach how professional computer scientists see the world; it consists of a set of basic ideas that are the foundation for learning many skills and concepts central to computing (and other fields). For example, basic CT does not discuss operating systems. Operating systems, which have contributed a number of fundamental ideas to computing such as autonomous processes, concurrency control, and virtual memory, are a core course in a CS curriculum, yet they are absent from basic CT. The basic CT skills come nowhere near describing what an Apple Genius knows.
A less obvious but more important reason is that basic CT is one practice of the computing discipline, alongside advanced CT practices such as large-scale programming, design, and modeling. The discipline of computing includes all these practices and has become one of their best teachers. Basic CT is not aimed at teaching the advanced practices.

Conclusion
The latest CT wave has done a remarkable job of bringing the need for K-12 computing education into the global limelight. The arguments for integrating CT into the classroom have persuaded national decision-makers, and resources are flowing in. The concerted effort of educators in schools has resulted in impressive advances in methods for teaching computing in schools, both with computers and without. But the CT community continues to struggle with what seems an impenetrable fog of interrelated concepts. We have argued that much of the fog would disperse if we broadened CT's perspective to include advanced (professional) CT. Some of this broader perspective can be integrated into the upper ends of a K-12 curriculum. We have also argued that a historically grounded view of computing practices increases understanding of what works and what does not, and reveals why certain ideas have stood the test of time. With these expansions, many will come to see the full richness of the computing field.