Georgetown University
Graduate School of Arts and Sciences
Communication, Culture & Technology Program

Professor Martin Irvine
CCTP 711: Computing and the Meaning of Code
Fall 2022

This course introduces the key concepts for understanding everything we call “code” (i.e., interpretable symbolic systems) in ways that apply directly to any professional career. Computing and data resources are now essential in every professional field, but few people have the opportunity to learn the key design principles of computing systems, where the ideas for computation come from, and why everything we do in computing is connected to a much longer history of human symbolic thought and to the symbolic systems we use in all forms of communication and representation. With the methods and concepts in this course, you will be able to open up a big “black box” -- not only of computing and programming “languages,” but of all our “systems of meaning,” from language, mathematics, and images to the binary encoding systems for all types of digital data.

The course is designed especially for students from non-technical backgrounds. But if you have done some computer coding already, you will understand more clearly how and why programming languages and digital media are designed the way they are, and how and why we can use “code” in one form (in the “digital language” of computers) for representing, processing, interpreting, and transmitting all forms of the “primary code” of human thought and expression (in words, images, numbers, graphics, sounds, and video).

In order to “crack the code” for understanding how all symbolic systems work, you will learn methods and concepts from several disciplines, including: design thinking, systems thinking, semiotics (the study of symbol systems), cognitive science and philosophy, information theory, and computer science. In this course, you will learn about computing and symbolic systems in two parallel paths: by learning the ideas that made our digital computing systems possible, and by actually seeing how it all works in “hands on” practice with software, programming code, and Internet apps. By focusing on the essential background for answering the “why” and “how” questions, you will also gain a new motivation for applying this understanding to the “how to” side of programming (if you want to learn how to code or work with others in designing applications).

Course Objectives and Outcomes

By the end of the course, you will be able to:

(1) Understand how the coding and logic of computer systems and digital information are based on our core human symbolic capabilities, and how and why the design principles for computer systems and digital media connect us to a longer continuum of symbolic thought, expression, and communication in human cultures;

(2) Use the foundational knowledge of this course to go on to learning programming in a specific programming language and in a specific domain of application, if you want to develop these skills;

(3) Apply the knowledge and concepts of this course to developing a leadership-level career in any kind of organization where you will be a knowledgeable “translator” of computational concepts: you will be able to help those without technical backgrounds to understand how computing is used in your field, and be a communicator with people responsible for computing and information systems (“IT”) who need to understand the needs and roles of others in your organization. This “translator” role is in big demand, and one in which many CCT students have gone on to develop great careers.

View and download the Syllabus Document in pdf for Georgetown Policies and Student Resources.

Course Format and Syllabus Design

The course will be conducted as a seminar and requires each student’s direct participation in the learning objectives in each week’s class discussions. The course has a dedicated website designed by the professor. The web syllabus provides a detailed week-by-week "learning map" with links to weekly readings (in a shared Google Drive folder). Each syllabus unit is designed as a building block in the interdisciplinary learning path of the seminar.

To facilitate learning, students will write short essays each week based on the readings and topics for that week. Your short essay must demonstrate that you've done the readings, and can comment on and pose questions about what you find to be the main points. At first, you will have many questions as everyone in the class begins learning new concepts to work with and working toward a better technical understanding of how the concepts in the course apply to computer systems, code, and digital media. Try to apply some of the main concepts and approaches in each week’s unit to examples and cases that you can interpret in a new way. Students will also work in groups for in-class exercises and for collaborative presentations.

Students will participate in the course by using a suite of Web-based online learning platforms and e-text resources:

(1) A custom-designed Website created by the professor for the syllabus, links to readings, and weekly assignments.
(2) An e-text course library and access to shared Google Docs: most readings (and research resources) will be available in pdf format in a shared Google Drive folder prepared by the professor. Students will also create and contribute to shared, annotatable Google Docs for certain assignments and dialogue (both during synchronous online class-time, and working on group projects outside of class-times).
(3) Zoom video conferencing for synchronous class meetings and group discussion (if we are in online mode), and virtual office hours (both for campus-based and online mode).
See: Students Guide for Using Zoom.
(4) Additional learning tools in Canvas, Georgetown’s course management system. To learn more about Canvas, see the Canvas Guide for Students.


Grades will be based on:

(1) Class Participation (50% of grade in two components): Weekly short writing assignments (in the course Canvas Discussion module) and participation in class discussions (25%). Collaborative group projects on topics in the syllabus (to be assigned) to be posted in the Canvas Discussion module and presented for discussion in class (25%).

Important: Weekly short writing assignments must be posted at least 6 hours before each class day. Everyone must commit to reading each other's writing before class to enable us to have a better-informed discussion in class.

(2) A final research project written as a rich media essay or a creative application of concepts developed in the course (50% of grade). Due date: 7 days after the last day of class.

Final projects will be posted on the course Canvas Discussion module, but you can also develop your project on another platform (Google Docs, your own website, etc.) and link to it in a Canvas discussion post for Final Projects. Your research essay can be used as part of your "digital portfolio" for your use in resumes, job applications, or further graduate research.

Professor's Virtual Office Hours (via Zoom)
Before and after class, and by appointment. If needed, I will modify the virtual office hours schedule based on students’ schedules and needs.

Professor's Contact Email:


  • Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.


  • Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.
  • James Gleick, The Information: A History, a Theory, a Flood. New York, NY: Pantheon, 2011.
  • Janet H. Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

Links to Online E-Text Library and University Resources

A Note on Readings in the Course

  • I have written several introductions to the course units so that students will have a framework for understanding the interdisciplinary sources of ideas and methods for each topic. These introductions are drafts of chapters of a book that I am writing for students, and are based on my 25 years of research and teaching. Please give me your honest feedback on what works and what doesn't, what needs more clarification or more examples. There are no textbooks for the "big picture" interdisciplinary approach that we do in CCT, so we have to make our own.

Professor Irvine's Introductory Video Series: Key Concepts in Technology

  • I produced these videos for an earlier course, CCTP-798, Key Concepts in Technology. The basic background in these videos is relevant for topics in this course, and for your general learning in CCT. (They are short, mostly ~ 6-10 mins each. Note: the Week numbers don't correspond to the weeks in this course.)
  • Key Concepts YouTube Playlist (I will link some in specific weeks of the syllabus).

Learning Objectives:

  • Foundational background for the course. Introduction to key concepts, methods, approaches.
  • Introducing the multi-/ inter-disciplinary knowledge domains, research, and theory.

A framework of research questions for guiding our inquiry and knowledge-building:

  • What underlies what we call "code," and how / why is "code" based on the human symbolic capacity in all of its implementations in symbol systems from language, mathematics, and art works to the design principles of computer systems, digital information, and software programming?
  • How do we connect signs and symbols that represent meanings (data) with signs and symbols for actions (operations) on the meaning-representing signs/symbols to generate unlimited new representations?
  • How/why can we use "code" in one set of symbol representations to represent, process, interpret, and transform all our other symbol systems (language, sounds, graphics, images, film/video, multimedia)?

Course Introduction: Requirements and Expectations

  • View and download the Syllabus document in pdf:
    Course description, complete information on requirements, participation, grading, learning objectives, Georgetown Policies, and student resources.
  • We will discuss further in class, and you can ask questions about anything.

Personal Introduction:

Using Research Tools for this Course (and beyond)

Required Weekly Writing for Class: Canvas Discussion module for course

  • Read the Instructions for your weekly writing assignment (read carefully before posting!)

First Day: Introductory Lecture and Presentation (Prof. Irvine) (Google Slides)

In-class discussion of basic concepts, examples, and case studies:
sign systems, symbolic-cognitive artefacts and computational devices

Learning Objectives:

"Booting up" our thinking with key concepts from the fields that we will be drawing from for our interdisciplinary research model. In this orientation to the course, students will gain a background in using key terms and concepts from semiotics, systems thinking, linguistics, and computer science for understanding the many kinds of "code," symbols, representations, and interpretation processes that we use every day.

We will be studying the ideas and background from two directions: (1) learning how our contemporary computing systems and digital data structures are designed and how to describe these designs accurately, and (2) learning the history of ideas and principles of symbolic systems that make modern computing and all our digital media possible. To open up the "black boxes" of both our "symbolic minds" and how code is designed to work in computers, we will go back into the deeper foundations of the human "symbolic capacity" that have defined "being human" for at least 100,000 years. And, yes, it all connects and combines in what we do with computer systems!


Introduction to Using "Key Words and Concepts"

Discussion and "workshop" in class:

  • We will have an open seminar style discussion in class this week: survey the background texts for key terms and concepts, and we will work through the contexts and backgrounds in class, and apply the ideas to examples and cases that interest you.

Writing assignment (Canvas Discussion module)

  • Even though many of these concepts and approaches are new for you, discuss one or two examples of how you can apply the concepts, or, ask questions about how we can apply them. Propose an example of a type of human expression or a feature in computing that you would like to analyze in detail in our class discussion.
  • Read the general "Weekly Writing Instructions" first. We are using the discussion module in Canvas for your discussion of main points and questions; do not use "blog" style writing.

Learning Objectives and Main Topics:

Learning the major implications of the research findings on the evolution of human "symbolic cognition" (thinking, reasoning, and communicating with language and symbolic artefacts), and how modern computing systems and programming code build on longer and deeper human capabilities.

Key Questions:
What are the distinctive features of our shared human "symbolic capacity"? Why do humans have: (1) the "language faculty" (the ability to acquire natural language and immediately become members of a social group capable of unlimited expression and forming new concepts in the language); (2) the "number sense" or "numeracy" (ability to think in quantities, patterns, and high-level abstractions, and ability to learn how to represent abstract values and operations in mathematical symbols), and (3) the capacity for learning and using many cultural symbolic systems (writing systems, image genres, music, art forms, architecture)? How and why are modern computers and the symbolic forms that we represent ("encode") in digital information an extension of a longer continuum of human symbolic thought -- and why does knowing this matter? We won't pretend to answer these questions in one week (!), but learning how and why to pose the questions is the essential first step in building your own knowledge.

The Continuum of the Human Symbolic Capacity


Readings & Video

  • Prof. Irvine, "Symbolic Cognition and Cognitive Technologies" (Video, from Key Concepts in Technology)
  • Prof. Irvine, "Introduction to the Human Symbolic Capacity, Symbol Systems, and Technologies." [Read first for the conceptual framework for this week; print out for reference.]
  • Thinking in Symbols (Video Documentary, American Museum of Natural History)
    • See also the Archaeology documentary video (by the TRACSYMBOLS archaeology project) on findings in South Africa, which allow us to date human abstract symbolic thought to at least 100,000 years ago.
  • Kate Wong, “The Morning of the Modern Mind: Symbolic Culture.” Scientific American 292, no. 6 (June 2005): 86-95.
    • A short accessible article on the recent state of research on the origins of human symbolic culture and the relation between symbolic thought, tool making, and technologies. Archaeological findings in the video documentary above are discussed in this article.
  • Michael Cole, "On Cultural Artifacts," From Cultural Psychology. Cambridge, MA: Harvard University Press, 1996. Short excerpts.
    • Background: A good summary of cognitive psychology research on cultural artefacts (human-designed and made "technologies" that support communication, cultural meaning, and symbolic thought). Embracing views also shared in anthropology, Cole provides a descriptive model of the human artefact that opens up an understanding of a long continuum of cognitive artefacts in human social history. This view allows us to see the implications of our longer history of using culturally adopted kinds of writing surfaces (cave walls, clay, wood, parchment, paper, pixel screens), together with technologies developed for inscribing writing and imposing images, and the more recent history of our technical media for representing, storing, and transmitting a symbolic system. (Note: these cultural artefacts unite European and Asian cultural history in a common human capability.) Further, while tools are also artefacts (and only humans make tools to make other tools), we have a class of artefacts that are not simply instrumental (that is, used as tools to do something), but are designed to support human cognition (thought, conceptualization, symbolic expression) and to mediate (provide a material medium for) representing and transmitting cultural meanings in physical forms. This school of thought provided important concepts for Human-Computer Interaction (HCI) design theory in the 1960s-2000s. Computer interfaces are designs for using cognitive-symbolic artefacts in a specific technical design.
  • Chauvet Cave Paintings | 3D Virtual Tour
    • This documentary site is for fun, and a great "wow" effect in seeing how computing applications and HD photography can be used for the study of ancient symbols, codes, and interfaces. The paintings and patterns of signs from over 30,000 years ago reveal that our ancient ancestors were capable of complex symbolic representations. We don't have their "code" to interpret what the paintings and signs mean, but our shared sense of representational forms allows us to know that they meant something important in the culture. The symbols and images on the walls are interfaces to a system of cultural meaning for the communities who understood the codes.
    • Werner Herzog directed a fascinating documentary on the cave paintings, The Cave of Forgotten Dreams. Excerpt on Vimeo.

Prof. Irvine, (Slides) "Introduction: The Human Symbolic Capacity, Language, Artefacts, Symbols" (Part 1) (for discussion in class and study on your own)

Writing assignment (Canvas Discussion module)
Choose one topic to focus your thoughts and questions about the readings for this week:

  • Discuss one of the main hypotheses developed from the research findings in the literature and video documentaries cited in the readings above for explaining how humans are the "symbolic species."
  • Discuss one or two new discoveries that you made when thinking about the findings in the readings. Explain in your words how we can understand our modern "symbolic technologies" in computing and digital media as part of a longer continuum of symbolic thought with signs and symbols that we all belong to.

Learning Objectives and Topics:
Semiotic Theory and Models for all Human Symbolic Systems

Developing an understanding of human sign systems (from language and writing to multimedia and computer code) by using the terms and concepts first developed by C. S. Peirce, and now being applied and expanded in many fields.


Presentation: "Intro to Symbol Systems, Semiotics, and Computing: Peirce 1.0"

Writing assignment (Canvas Discussion module): Semiotic Exercise

  • This is a fun exercise for becoming aware of a “semiotic process” and the way we use tokens and types of symbolic forms. Go very slowly with your actions and describe as many steps as you can by using the terms and concepts in the readings.
  • Peirce stated that the meaning of any set of signs is the signs it can be “translated” into. Translation is an obvious form of interpretation (in English and other languages, a “translator” is also called an “interpreter”). The symbolic transformation in translation (as we do translations) is possible by, first, correlating one set of sign tokens to their types (the lexical forms in a language) and grammatical patterns; second, interpreting ("processing") the symbolic forms with Interpretants (correspondence codes); which, third, generate a second set of sign tokens as output. The "output" tokens represent the interpretation in additional sign tokens (which, of course, can be reinterpreted in other signs).
  • First, go to the Google Translate page. The screen interface displays text box windows for the “source” language text and the “target” language text. Choose the source and target languages. Note that by doing this you have signaled the Google server to provide the corresponding code layers for representing the tokens in each window. The interface calls on a human agent to insert character and word tokens into the “source” text box. Type in or copy and paste at least three sentences in the “source” window (text box). The translation will appear in displayed tokens in the “target” language text window. (Google's "translation" service represents one way of assigning or delegating a very complex semiotic process to approximations of the human process in computation.)
  • What are we doing; what is happening? We will let the Google Translate service remain a black box of computational functions for now (mappings from source tokens to language types, mappings to Interpretant processes in translation routines, which combine linguistic analysis and probabilistic mappings). “Machine translation” (so-called) has many layers of processes and code correlations that perform Interpretant functions, “that by means of which an interpretation is made.” Without knowing what is -- or needs to be -- happening in the blackbox of Google’s cloud server processing, what is happening semiotically that we can observe? Think through what needs to happen to display the tokens of the “target” language translation. (Input tokens -- passed on to black box of Interpretant processes and code correlators -- receive representations of output tokens.)
  • Next, copy the text in both the source and target windows, and “paste” the text tokens into your discussion post, pasting 3 times for each set. Use the style features in the edit window and change the font style and/or size of the text characters in 2 of the sets of texts. What have you just done? What is happening when we “retokenize” tokens from one digital instance to another? How do we recognize the characters and words no matter how many times we do this? Haven't you just proved the type/token principle?
  • Google has designed the "target" window with layers of interpretive features that again call on a human semiotic agent. Mouse over sections of text. Can they be "better" translated as you understand the language codes from source to target?
  • Translation is one of the most computationally difficult processes to automate. Describing and revealing the steps in semiotic terms exposes the challenges for designing such a system that seeks to "mirror" human semiotic actions.
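The type/token principle in the exercise above can be sketched in a few lines of Python (a toy illustration only, not part of the assignment; the sample sentence and helper function are invented for demonstration):

```python
# A minimal sketch of the type/token principle.
# Each pasted copy of a text is a new token; the character types it
# instantiates stay the same no matter how it is styled or re-encoded.

source = "The cat sat."

# Three "pastes" of the same sentence: three distinct token instances.
tokens = [source, source, source]

# Styling a token (here simulated by changing case, as a stand-in for
# font or size changes) produces a physically different token...
styled = source.upper()

def to_types(text):
    """Map a token string to its sequence of character types (case-folded)."""
    return [ch.casefold() for ch in text]

# ...but normalizing back to types shows the same type sequence.
assert to_types(styled) == to_types(source)

# Likewise, re-encoding (a new digital instance, i.e., retokenization)
# preserves the types: decode(encode(x)) recovers the same sequence.
retokenized = source.encode("utf-8").decode("utf-8")
assert retokenized == source
print("Every token instance maps to the same types:", to_types(source)[:7])
```

However many times the text is copied, restyled, or re-encoded, the type sequence it instantiates is invariant, which is what lets Google Translate (and human readers) recognize "the same" words across tokens.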

Learning Objectives and Main Topics

In this unit, students will learn the basic concepts and terminology developed in contemporary linguistics as essential foundations for understanding natural language, and other "language-like" systems. Why learn this here? The terms and concepts established in linguistics are now the common terms used in computer science, cognitive science, and many other fields. We can extend the knowledge-base provided by linguistics research for understanding the similarities and differences in other human symbolic systems. Students will learn why we need important distinctions between natural language and "formal" symbolic systems also called "languages" (e.g., metalanguages, mathematical and scientific notation, and computer programming "languages" or "code").

Linguistics also includes the specialized fields of computational linguistics and natural language processing (NLP), which are now major fields in computing and information science: data analytics, AI, and Machine Learning all depend on concepts formalized (given precise meanings and systems of notation) in linguistics.

With this background, you will be prepared to understand how to answer questions like: "What makes a human natural language a language?" "Are visual image genres a language?" "What do we mean by a 'computer programming language,' code, and 'language processing'?" "Is music a language?" "Are the genres of film and video 'languages,' or like language?" "What does it mean for other symbolic systems to be described as 'like language'?"

Key terms and concepts to learn: 

  • Phonology: the system of spoken sounds of a language.
  • Syntax: the underlying rule patterns for combining words ("parts of speech") to form grammatical structures; more generally called grammar.
  • Semantics: the basic meanings understood by speakers of a language community; often assumed as a "dictionary" level of meanings in contexts of use.
  • Lexicon: the inventory of words that form the "dictionary" or vocabulary in a language; all words are coded as belonging to a class ("part of speech").
  • Pragmatics: the assumed contexts and situations of language use in which meanings, intentions, and purposes are understood; includes shared assumptions, knowledge backgrounds, speech and discourse genres, and kinds of speech acts.
  • Generative Grammar: the unconsciously used production rules and constraints for speaking and understanding an unlimited set of new grammatically well-formed phrases and sentences in one's language.
  • Metalanguage: a special set of terms used to describe the features of language: "language [at another level] about language"; the specialized terminology used in linguistics, philosophy, and computer science for defining uses of language and other language-like symbolic systems (including writing, systems of mathematical symbols, and computer code).
  • Natural Language and Formal (Artificial) Languages
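The idea of a generative grammar -- a small set of rules and a finite lexicon producing many well-formed sentences -- can be made concrete with a toy sketch in Python (the rules S -> NP VP, NP -> Det N, VP -> V NP and the six-word lexicon are invented for illustration, not a real grammar of English):

```python
import itertools

# A toy generative grammar with a tiny lexicon.
# Rules: S -> NP VP;  NP -> Det N;  VP -> V NP
lexicon = {
    "Det": ["the", "a"],
    "N":   ["linguist", "symbol"],
    "V":   ["studies", "interprets"],
}

def generate_sentences():
    """Enumerate every sentence licensed by the toy grammar."""
    for det1, n1, v, det2, n2 in itertools.product(
            lexicon["Det"], lexicon["N"], lexicon["V"],
            lexicon["Det"], lexicon["N"]):
        yield f"{det1} {n1} {v} {det2} {n2}"

sentences = list(generate_sentences())
print(len(sentences), "grammatical sentences from a 6-word lexicon")  # 32
print(sentences[0])
```

Even this trivial grammar shows the combinatorial point: a handful of rules over word classes (types), not individual words (tokens), licenses many sentences -- and a real grammar with recursion licenses an unlimited number.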



  • Steven Pinker, "Language and the Human Mind" [Video: 50 mins.][start here]
    • A well-produced video introduction to the current state of knowledge on language and cognitive science from a leading scientist in the field.
  • Martin Irvine, "Introduction to Language, Linguistics, and Symbolic Thought: Key Concepts" (intro essay; read first).
  • Steven Pinker, Words and Rules: The Ingredients of Language. New York, Basic Books, 1999. Excerpt, Chapter 1.
  • Andrew Radford, et al. Linguistics: An Introduction. 2nd ed. Cambridge, UK: Cambridge University Press, 2009. Excerpts. Use for reference.
    • This is an excellent text for an introduction and reference. Review the Table of Contents so that you can see the topics of a standard course Introduction to Linguistics.
    • Read and scan enough in each section to gain familiarity with the main concepts. You don't have to read the whole selection of excerpts. Focus on the Introduction to Linguistics as a field, and the sections on Words (lexicon) and Sentences (grammatical functions and syntax).

Assignment: Experiment with Visualizing Syntax Structures

  • In the readings above, you were introduced to the way that we can use mathematical models (tree graphs, "icons" in Peirce's terms) for mapping out the syntax of statements in a language. Take the next step to see how this linguistics background applies directly to Natural Language Processing (NLP) in computing (and programming). The following will help prepare you for the writing assignment (below):
  • First, view this short video: Crash Course Computer Science: Natural Language Processing (PBS).
    • You will see right away that the fundamentals of linguistics are assumed and applied almost everywhere in computing today, especially in any type of language processing.
    • [For more background on computing that you can study throughout the course, see the whole series of excellently produced Crash Course: Computing tutorials.
      There is also a Crash Course tutorial series on Linguistics, but that's too much for this week.]
  • Note: the first step in any kind of NLP is representing the word tokens of a sentence, and then mapping the tokens to the types of words (the word classes or "parts of speech") that can be combined to form grammatical (syntactic rule-governed) phrases and sentences.
    • The design of digital systems always makes semiotic structures explicit for understanding: in the background, word and letter character tokens are represented internally in a computer system in bytecode units, and rendered (interpreted) through software/hardware processes in a program window on a screen.
  • Experiment with XLE-Web: This site, provided by a linguistics research group, aggregates useful computational analysis tools for studying syntax.
  • In the "Grammar" pull-down menu, you will see the languages that can be "parsed" (syntax-mapped) in the version of the software and language database. Note the many languages available for auto NLP analysis, and on the "Treebanks" page, you will find examples in Chinese and many other languages. (The "Treebanks" are data sets of example sentences already parsed and mapped. Note that you have to click on "accept terms of use" for this section of the database. We will explore more in class.)
  • Assignment: Choose "English" from the XLE-Web menu, insert a short sentence in the text box, and click on "Parse sentence." The software will give you a very complex map for the sentence (including options for which syntax "path" is to be used), using two forms of formal linguistic notation: a constituent (c-) tree structure and a functional (f-) bracketed notation structure. We will focus on the "c-" (constituent) structure, so you can click off the "f-" structure after viewing the notation. This will be new for you, so don't worry about all the complexities and unfamiliar notation. Do your best to follow what is being presented in the visualization. You can experiment with the settings, and also mouse over and experiment with choosing different ways of mapping the tree (almost all sentences longer than three words will have out-of-context ambiguities, and we know how to choose the right syntax path). You can also get a "map" of the word tokens in your sample sentence (click on "Tokens"). Take notes on what you find. Backgrounds:
    • The term "parse" comes from traditional grammar, which means decomposing sentences into their word classes or "parts of speech" (from Latin, pars = "part"; thus, partes orationis, "parts of a sentence, parts of speech") (see Wikipedia: Parsing).
    • The software working in the background on this site is designed to generate thorough abstract maps of sentence structure from your input tokens, including "placeholder" elements that may not appear in your example sentence.
    • A "tree diagram" is a type of mathematical graph, and other structure notation systems used in linguistics are also derived from mathematics (especially set theory). The tree-graph visualization reveals that natural language syntax is formed with "rule-governed" relation-positions. Each position or "slot" (like a variable) is defined for grammatical function in a relation pattern with other word-types for forming "grammatical" phrases and sentences. Anyone in any language community has this syntax pattern competency, and we use it unconsciously.
    • Other take-aways from the exercise: syntax visualization tools provide a very useful window into how natural language gives us abstract patterning competencies that we use on other levels to formalize patterns for regularities in the symbol systems defined in mathematics, logic, and programming "languages". Even an introductory view of how syntax (combinatorial rules) works for defining natural language will also help you understand (1) what makes any natural language a language, and (2) how we can distinguish natural language from other human symbolic systems and other forms that we call a "language" (programming languages). (All other symbolic systems, like image genres and music, have a syntax or system of underlying combinatorial rules and patterns. Some overlap with those of language, but most are specific to a symbol system.)
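The constituent (c-) structures that XLE-Web draws can be modeled in miniature as nested structures in Python (a hand-built toy tree for an invented example sentence, not actual XLE output):

```python
# A constituent (c-structure) tree for "the cat chased a bird",
# built as nested tuples: (node_label, child, child, ...).
tree = ("S",
        ("NP", ("Det", "the"), ("N", "cat")),
        ("VP", ("V", "chased"),
               ("NP", ("Det", "a"), ("N", "bird"))))

def leaves(node):
    """Recover the word tokens (leaves) from a constituent tree."""
    if isinstance(node, str):
        return [node]
    _label, *children = node
    return [word for child in children for word in leaves(child)]

def show(node, depth=0):
    """Print the tree with indentation showing hierarchical structure."""
    if isinstance(node, str):
        print("  " * depth + node)
        return
    label, *children = node
    print("  " * depth + label)
    for child in children:
        show(child, depth + 1)

assert leaves(tree) == ["the", "cat", "chased", "a", "bird"]
show(tree)
```

The take-away mirrors the exercise: the linear string of word tokens is only the surface; the grammatical structure is hierarchical, with each "slot" defined by its position in the tree, which is exactly what a parser must reconstruct from the tokens alone.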

In-Class Exercises:

Writing assignment (Canvas Discussion module)

  • Consider one or two of these questions to write about: From your example sentences in the XLE-Web parser, what did you learn about the (unconscious) rules for combining words in patterns (syntax) that function like a hierarchical tree structure (top-level nodes with many levels of grammatical form connected to them)? Did you discover new insights for yourself about how language is a symbolic system based on abstract patterns that we continuously "fill in" with new combinations of words and new patterns of meanings? Do you understand more clearly the token -> type relation? Could you explain the basic features of "natural language" and what makes language different from other symbolic systems and "formal" systems like mathematics and programming languages?
  • What questions do you have, and what should we discuss more in class?

Background, Main Topics, and Learning Objectives

Your main learning goal for the next few weeks is to discover for yourself a clear conceptual understanding of the technical concepts of information, data, and the semiotic design principles of computing systems. And further, to discover why learning this basic knowledge can empower anyone – especially people who don’t think they are “techies” – to understand why and how all our computing and digital systems are designed the way they are, rather than some other way. You will then be on your way to claim ownership over these technologies as being part of our human birthright as symbolic thinkers and communicators, who always use technically designed physical media for expression, representation, communication, and community identity. Hang on, work as hard as you can on reading and understanding, ask lots of questions, and I will help you discover why learning this is worth the effort and comes with lots of fun "aha" moments!

This week, you will learn the key terms, concepts, and design principles for “information” as defined in digital electronic communications and computation, and why we need to distinguish the technical concept of “information” from uses of the term in ordinary discourse and other contexts. You will learn the reasons why we use the binary system (from the human symbolic systems of mathematics and logic) for structuring and designing electronic information. You will learn why and how we use this designed system to map units of other symbolic systems (what we call "digital data") into arrays of structures of controlled states of electricity (patterns of on/off placeholders) in a second designed layer.

With the clarifying concepts from Peirce's definitions for the physical/material structures of tokens and representations required in every symbolic system, you will understand how digital, binary information is necessarily designed as a semiotic subsystem, a structured substrate, for holding and reproducing patterns of all our digitized symbolic systems. And not only structures for representations (strings or clusters of tokens), but also in the subsystem for encoding the kinds of interpretation and calculation that "go with" each data type as a system. This is the full "inside" view of "encoding" and "computation" with digital electronic systems. Deblackboxing computing and information for a true view of the designs for semiotic subsystems is the master key for understanding "code."

Next week you will learn the technical definition of "data" as structures of units of “information” that are encoded, in precise ways, in the design of what we call "digital architecture." This architecture means the whole master design for a system with three connected physical structures: (1) for representing tokenized units of human symbolic systems (data representations), (2) for using clusters of binary logic processes for interpreting, calculating, and transforming input data representations into further output representations, and (3) for the reliable "packaging" of data structures for sending and receiving across networks (Internet protocols).

Key Terms and Concepts:

  • Information defined as quantifiable units of energy + time, also involving probability and differentiation (differentiability) from other possible states.
  • The Transmission Model of Communication and Information: the model from electrical engineering and telecommunications: what it is, and is not, about.
  • The Binary number and Boolean logic systems: for logic, computation in the base 2 number system, and encoding longer units of representations (bytes). Why do we use the binary system for logic and data representations?
  • The bit (binary unit) as the minimal encoding unit with arrays of two-state electronics (on/off). We can map human symbolic abstractions for two-value systems onto the electronic on or off states: yes/no, true/false, presence/absence; 1,0 in the base 2 number system. When we "read" the value of the two possible states in an information context, we say we get 1 bit of information.
  • Discrete (= digital/binary) vs. Continuous (= analog) signals.
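As a small hands-on sketch of these key terms (not part of the assigned readings; Python is used here because it is the course's teaching language), you can see how one byte-sized pattern of two-state "slots" can be assigned different symbolic values by context:

```python
# A sketch: mapping symbolic values to base-2 representations.
# The "1"s and "0"s here are human symbols; in hardware they correspond
# to on/off electronic states, not literal numerals.

n = 65                      # the number 65 in our base-10 symbol system
bits = format(n, "08b")     # the same value written in base 2, padded to one byte
print(bits)                 # -> 01000001

# Reading 8 two-state "slots" yields 8 bits of information.
# The same pattern can be assigned different meanings by context:
print(int(bits, 2))         # interpreted as a number -> 65
print(chr(int(bits, 2)))    # interpreted as a character (ASCII/Unicode) -> A
```

Notice that nothing in the bit pattern itself "knows" whether it is a number or a letter; the interpretation is assigned from outside, exactly as described in the note on the introductory videos below.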


  • Introductory videos:
    • How Computers Work series (short videos): watch Lesson 3: Binary and Data, and Lesson 4: Circuits and Logic (whole series list)
    • Crash Course Computer Science: Electronic Computing (background on the electronics for digital information)
    • Note: these are good quick intros, but they have to skim over some important facts about digital system design. There are no "1s" and "0s" in the physical components of digital information and computing systems or in binary code at the electronic level. "1" and "0" have meanings in human symbol systems, and, by using semiotic design principles, we map (correlate) human meanings and values represented in symbols into a system of on-or-off electronic states. These on/off states are meaningless until assigned a symbolic value from "outside" the physical system.
  • Martin Irvine, "Introduction to the Technical Theory of 'Information' (Information Theory + Semiotics)"
  • Daniel Hillis, The Pattern on the Stone: The Simple Ideas That Make Computers Work (New York: Basic Books, 1998; rev. 2015) (excerpts).
    • For this week, read the Preface and Chaps. 1-2 (to p.37). Hillis provides good explanations for how we use binary representation and binary logic to impose patterns on states of electricity (which can only be on/off). The key is understanding how we can use one set of representations in binary encoding (on/off, yes/no states) for representing other patterns (all our symbolic systems). Binary encoded "information" (in the digital engineering sense) can be assigned to "mean" something else when interpreted as corresponding to elements of our symbolic systems (e.g., logical values, numerals, written characters, arrays of color values for an image). Obviously, bits registered in electronic states can't "mean" anything as physical states themselves. How do we get them to "mean"?
  • Denning and Martell. Great Principles of Computing. Chap. 3, "Information," 35-57.

Optional and for Your Own Study:

  • James Gleick, The Information: A History, a Theory, a Flood. (New York, NY: Pantheon, 2011). Excerpts from Introduction and Chapters 6 and 7.
    • Readable background on the history of information theory. I recommend buying this book and reading it throughout the semester, together with Denning and Martell.
  • Luciano Floridi, Information, Chapters 1-4. PDF of excerpts.
    • For background on the main traditions of information theory. Alas, all is not well-explained, and he is wrong about "semantic information" (we will discuss why in class).

In-Class: Demonstration of Telegraph Signals and Code in a working telegraph system!

Writing assignment (Canvas Discussion module)
Choose at least one topic to focus your thoughts and questions about the readings for this week:

  • From what you've learned about symbol structures so far, can you describe how the physical/perceptible components of symbol systems (text, image, sounds) are abstractable into a different kind of physical signal unit (electronic/digital) for transmission and recomposition in another place/time? (Hint: as you've learned from Peirce and semiotic theory, meanings aren't properties of signals or sign vehicles but are relational structures in the whole meaning-making process understood by senders/receivers in a meaning community.)
  • Consider specific cases for doing "de-blackboxed" descriptions: can you explain how the information structures (as patterns for replicable signals and tokens) must work to enable us to use digital electronics for creating and sending a text or email message, or making a digital photo and saving and transmitting it? Why is the meaning of any encoded "message" not part of the engineering design solution for the digital electronic system? Can you use semiotic descriptions and concepts to explain how "digital information" is (and must be) designed as a semiotic subsystem in the whole design of digital computing?

Learning Objectives and Main Topics:

After learning some basic background on “data” as understood in computing, we will focus on the encoding methods for the data types that we use every day: text and images. We send and receive more text today than ever before in human history (text messaging, email, blog posts, etc.). All this digital text is possible through the adoption of an international standard for encoding the characters of all languages -- Unicode.

Similarly, we routinely make photo images, send and receive images, and view digital images in many software and digital device contexts. This is all possible, too, by standards for image formats and ways to define images as data types.

When we open up the black boxes of all that happens behind the scenes to create the representations of text and images on our screens (our main symbol systems and data types), we can discover why and how digital data needs to be designed in subsystem levels that correspond to the affordances of digital electronics and binary logic. Following the requirements for every instance (token) of a symbolic representation to be interpretable as a pattern (type) in physical-perceptible structures, we find that the whole binary system must be designed to maintain structure-preserving structures across all instances. This provides us with consistent ways to assign kinds of meaning and functions to "data types" as patterns of structure (bytes). At the "information" level, the system guarantees bit-structure replication of the data units so that whatever was input (wherever, and whenever) can be continually interpreted in a digital system and rendered on screens (wherever, whenever). Why? So that the symbolic and meaningful patterns are preserved for recognition and interpretation by human symbolic agents. Here's where the "information theoretic" principles (digital bit/byte representation preservation) underlie the formatting and management of the "data types" actually used in our computing system's memory and software designed for types and formats of data.
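A quick sketch can make the point above concrete (using Python's standard `struct` module; the byte values are chosen arbitrarily for illustration). The same physical byte pattern "means" different things depending on the data type assigned to it; the bytes themselves carry no meaning:

```python
# Sketch: one fixed pattern of four bytes, assigned three different
# "data type" interpretations. The meaning comes from the assigned type,
# not from the bytes.

import struct

raw = bytes([72, 105, 33, 0])             # one fixed pattern of four bytes

print(raw[:3].decode("ascii"))            # as text -> Hi!
print(struct.unpack("<I", raw)[0])        # as a little-endian 32-bit unsigned
                                          # integer -> 2189640
print(list(raw))                          # as a list of numeric byte values
```

This is the "data type assignment" idea in miniature: software supplies the context of interpretation for the same underlying information structure.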

More about the big "reveal" when we make the implicit design principles explicit: the more we open up digital design principles, the more we see how the designs exist for one main reason: creating semiotic subsystems for all our digitized symbol types and for the correlated patterns of interpretation that go with each symbolic type. This is why the deep "infrastructure" of digital code is designed the way it is, and why all these unobservable semiotic design principles explain (and are demonstrated in) everything that we do observe and understand in digital data.


Case Studies: Common Examples of Data Types -- Text and Images

Text as Data Type: Character Set Encoding
How do we get all kinds of computer devices to display written characters in any language? What are the principles for encoding written characters in byte code that can be interpreted in any computing system (PC or smart phone) and rendered with software in the pixels of anyone's screen? Answer: all makers of devices and software adopt an International Standard: Unicode.

  • Unicode and Character Sets (see background in "Introduction to Data Concepts" above first)
    • Digital Character Sets: ASCII to Unicode (video lesson, Computer Science)
    • Wikipedia overview of Unicode and character encoding is useful.
    • The Unicode Consortium official site (everything is open-source and international standards-based)
    • The current Unicode Standard, 14.0.0 (Sept. 2021) [use for reference]
      • Code Charts for All Languages (experiment with different languages)
      • DIY example: This HTML Web "page" (our course syllabus) is encoded as UTF-8 (Unicode Transformation Format, 8-bit units), the most commonly used Unicode encoding in almost all US and European text encoding software. If you are using the Chrome browser for this course page, do "Ctrl-U" (PC) or "Command-U" (Mac) and you will see this line of code a few lines down:
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">.
      • This line of HTML "meta" code "tells" all Web software (including mobile apps) what kind of text is encoded in this file, so that the decoding module in the program uses the correct "context of interpretation" for the Unicode representations. We never see the "raw" Unicode bytecode strings: all our software is designed to use the standard bytecode (including numbers), but produce the screen representations that we understand.
    • Unicode Emoji (pictographic symbols) | Emoji Charts
      • [Yes! All emoji must have Unicode byte definitions or they wouldn't work consistently for all devices, software, and graphics renderings. Emoji are not sent and received as images but as bytecode definitions to be interpreted in a software context. Again, code data and device-software contexts and rendering methods are separate levels in the system design.]
      • Current Unicode Emoji Chart (with current skin tone modifications)
      • Unicode test file of all currently defined "emojis" [2020] (to test how they display in software)
    • David C. Zentgraf, “What Every Programmer Absolutely, Positively Needs to Know About Encodings and Character Sets to Work With Text.” Kunststube, April 27, 2015.
      • This is a useful explanation from a programmer's perspective; read through the section on UTF-8 and ASCII.
    • "Han Ideographs in the Unicode Standard," Yajing Hu (CCT student, final project essay)
      • This is a good essay that looks at the background of Unicode standards for Han characters and other Asian language families. The Unicode Consortium had to consider the same issues for encoding Arabic and other languages with "non-Roman" character sets and marks.

Digital Images as Data: Digitizing Light, Pixels, Image Formats, and Software
How are images encoded as digital data? What are the basics that everyone can understand?
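One way to answer these questions is to build a tiny image by hand. The sketch below (an illustration, not an assignment) uses the plain-text PPM format because it can be written with nothing but Python's standard library; common formats like JPEG and PNG add compression and metadata on top of the same pixel-array idea:

```python
# Sketch: a digital image is an array of pixels, each pixel a tuple of
# numeric color values (here RGB, 0-255). Writing a 2x2 image in the
# simple PPM (P3) text format makes the encoding visible.

width, height = 2, 2
pixels = [
    (255, 0, 0), (0, 255, 0),      # top row: red, green
    (0, 0, 255), (255, 255, 255),  # bottom row: blue, white
]

with open("tiny.ppm", "w") as f:
    f.write(f"P3\n{width} {height}\n255\n")   # header: format, size, max value
    for r, g, b in pixels:
        f.write(f"{r} {g} {b}\n")             # each pixel as three numbers
```

Open `tiny.ppm` in a text editor and in an image viewer that supports PPM: the same file is "text data" in one context of interpretation and "image data" in another.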

Writing assignment (Canvas Discussion module)

  • Create an example (Instance) of a digital "data type" (a text string in any language or a photo image in a standard format), and write a detailed "biography" of your "instance tokens" from input encoding to the stages of memory instances to output display. Use a PC with the corresponding data type software, not a mobile phone. If you want to trace the steps in the digital representation of a photo image, you can take a photo from your phone, but "send" it to yourself (email, or Cloud storage that you can access from a computer) so that you can use PC/Mac software to "view" it. (The software that we use -- for text characters or images/graphics -- is designed to maintain and transform tokens of specific data type instances by having access to our system's active memory and processing units.) Insert your data example in this week's post.
  • Try thinking through these steps in your data instance's "biography." Hint: when we "copy" or "move" data items we are communicating intentions, through software routines (the subprocesses in any program), for ongoing retokenizing of the underlying physical bit/byte-level "information" in other physical instances in other digital memory locations, which our operating systems and other memory components "index" as data types.
    • (1) Describe (and ask questions) about the encoding/decoding processes of the data type instance as data. What is the relationship between software specifically designed for creating (inputting) and displaying (outputting) a data type and the way its instances are rendered in representations on our pixel-based displays?
    • (2) Describe (with as much detail as you understand so far) the E-information and Data levels of your example. Our "local" PC (or Web-enabled app on a mobile device) and the "remote" Canvas server are designed to facilitate "copies" of your data instances (re-tokenizations), and return "copies" (token instances) to be output through the memory, software, and screens on our devices.
    • "E-information" explains how the systems (our devices, the Internet, Web servers) are designed to manage copying/replicating tokens of bits/bytes reliably in the background (regardless of the data type). But we deal with bytes at another level up in the system design as data types interpreted in software. Our interfaces give us a Data view for "uploading" to the Canvas "data server" (and the "Library" of media types), and then for how a data type is "re-instanced" on any of our screens via the software and graphics-screen hardware on our individual devices.
  • Can you understand how the Data Type assignment level is applied to the E-information level? Is it clearer how digital design is about layering structures (imposing patterns to be interpreted as kinds or types of digital representations)? These layers are designed as "abstraction levels" (with the complexity of the details "abstracted" out of view) that are almost entirely blackboxed from "users." You will learn more about the software code layers that are designed precisely to maintain these data instances of our symbolic systems.

Learning Objectives and Main Topics

You will learn the main design principles for modern digital electronic computer systems, and how this design exists to facilitate symbol processing in the levels of code, encoding, and the physical architecture of systems. You will learn some of the key concepts of digital computing systems, and why the whole model for computing is based on, and is designed to serve, our symbolic capabilities (in thinking, expression, communication, and creativity).

The video lessons and readings will provide important background on digital computer system design, but mostly from the "how" and "how to" perspective. With this background, your main learning goal (beginning this week) is to begin applying our key concepts for understanding why our systems are designed the way that they are, rather than some other way, for understanding what a computer system is.

Building on your learning "building blocks" for Information and Data concepts for digital, binary systems, you will learn the reasons why computing systems are (must be) designed as semiotic systems. You'll see that to get computer systems to do what we want them to do (perform delegated and automated symbolic tasks at electronic speed), we have to decompose the larger "how to" problem into several levels of subsystems. The subsystems are designed to inter-operate as part of a whole system "architecture" (the master design), the whole combination that we "orchestrate" through different levels of "code" representations (categories and levels of symbols for meaning and for performing operations and interpretations).

So, the whole computer system = (the specifically designed system of subsystems for symbolic structures and processes for them) + (human agents/interpreters who provide symbolic inputs, direct program processes, and interpret results). This whole combined system is what makes a digital computer a computer. A "computer" is not the blackbox of hardware that we get as a commercial product.

By applying the true ideas implicit in the design principles for computer systems, you will discover the answers to the big questions about why computer systems seem like black boxes:

  • How can we make human abstractions and concepts (which we symbolize in the sign systems used in mathematics and logic, and combine in programming languages) perform actions in the physical electronic components of computers?
  • What were the design principles that enabled "computers" to develop from giant calculating machines ("number crunchers") into the multi-data-type, symbolic and multimedia systems that we use today?
  • How and why can we use numbers (an abstract symbol system) as a substrate or subsystem for all our "data types," which we "code" for representing the symbolic systems that we use computers for?

Readings and Video Lessons:

For Reference: Background on the Technical Design of Today's Computers

  • These texts are not for reading straight through, but are good reference sources for understanding computer system design and physical components. How Computers Work is just that: there is no why or explanation of design concepts, but it's a good, well-illustrated "inside the blackbox" view of standard hardware (which also applies to smartphones and mobile devices).
  • David A. Patterson, and John L. Hennessy. Computer Organization and Design: The Hardware/Software Interface. 5th ed. Oxford, UK; Waltham, MA: Morgan Kaufmann, 2013. Excerpts from Chapter 1.
    • Excellent overview of important concepts for system architecture from PCs to tablets. For beginning computer engineering students, but accessible.
  • Ron White, How Computers Work. 9th ed. Indianapolis, IN: Que Publishing, 2007. Excerpts.

Presentation: Computational Thinking: Implementing Symbolic Processes

  • For discussion in class and individual self-paced study.

Writing assignment (Canvas Discussion module)

  • Expanding on your data "case study" from last week (or choose a different example), think through how you can now add more of the details from the computer system design levels for your descriptions and explanations. Take the next step to describe and explain the "whys" (reasons for) the "how to" or "facts" of the physical components of the binary electronic system.
  • Why does this design for the physical structures in a binary system (memory + processors) enable us to create the representations and interpretations that we see and "interact" with at the input/output levels? Although most of the processes are unobservable as encoded binary electronic structures, can you see how they are really not "black boxes" (in the sense of being beyond human understanding) or just facts about machine parts?

Learning Objectives and Main Topics:

In the course units for Weeks 9-10, students will continue learning about computer system design and data as elements of symbolic systems, and learn how we communicate with the components in the designed architecture of computer systems through the levels of symbols in programming "code."

Students will also learn about computational thinking -- a universal form of thinking and reasoning that calls on our cognitive-symbolic abilities for abstraction, planning step-by-step procedures, and modeling the kinds of interpretations and operations that we use for our symbolic systems (language, math, images). Computational thinking -- upon which all computing systems depend -- is a specialized application of our symbolic-cognitive capabilities. This form of applied thinking underlies the design of programming languages and computer code. Students will learn how programming makes this way of thinking explicit so that we can develop formal (or "artificial") languages for assigning representations (elements of symbols) and actions (processes) to computing systems.

"Computational Thinking" is NOT learning to think like a computer (whatever notion of "computer" you may have). Rather, it's exposing common logical and conceptual thought patterns that everyone has, can develop, and can learn to apply in programming and digital media design.

Learning by doing: you will see firsthand how computing code is a way of implementing levels and classes of symbols:

  • signs/symbols for representing "data types" that correspond to our symbolic systems ("symbols that mean" = represent values)
  • signs/symbols for defining the relations and actions (processes, procedures, interpretations) that a computer system can enact for each type of data representation ("symbols that do" = "meta" symbols taking the other orders of symbols as their "content").

The video lessons help you visualize how a programming language (and thus a software program or app) is designed to specify symbols that mean things (represent values and conceptual meaning, mainly through variables for data types) and symbols that do things (symbols that are interpreted in the computer system to perform actions and/or operations on other symbols = signs/symbols for syntax and operations).
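The distinction between "symbols that mean" and "symbols that do" can be sketched in a few lines of Python (an illustrative example with made-up names, not from the video lessons):

```python
# Sketch: "symbols that mean" vs. "symbols that do" in a tiny program.

price = 5              # 'price' is a symbol that means: a variable standing
quantity = 3           # in for a data value (here, integer data types)

def total_cost(p, q):  # 'def', 'total_cost', and 'return' are symbols that do:
    return p * q       # '*' is an operation symbol the system interprets
                       # as an action performed on the value-representing symbols

print(total_cost(price, quantity))   # -> 15
```

The variables are "place-holders" filled in with data representations; the function and operator symbols direct the operations performed on them when the program runs.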

Summing up our learning building blocks so far, you'll see that:

  • Programming languages are, and must be, "formal languages" (metalanguages) with strictly defined symbols for syntax and semantics (what the signs/symbols must mean -- stand for -- in the internal logic of the programming language design), as compared with natural languages (Week 5).
  • The strict formalism of programming languages is based on logic and mathematics (human symbol systems with signs for representing values + signs for operations/interpretations on symbols). Only by using precisely defined formal signs and symbols of a programming "code" is it possible for us to map (assign, impose a corresponding structure for) the formal necessity (represented in logically precise symbols) onto the causal necessity in the corresponding design of a binary digital computer system. The mapping of abstract human symbols to physical actions in components happens when the symbols that we understand in computing code are "translated" into binary code, which is the form that can be mapped to binary electronics. The translated binary encoded representations can thus be assigned to physical structures in components for both memory (holding, moving, and storing binary representations of data) and actions (processes, interpretations, and rules for transforming data representations) in the binary computing circuits of processors.
  • You can see how "E-Information" and "Data" representations (Weeks 6-8) become assigned to different levels in the architecture of a computing system, and how programming code puts them into action.
  • Computation in action (as "running" software) is a way of defining transitions in information representations that return further interpretable symbol sequences in chains of "states" that combine meanings and actions. Stages in the results of the processes are programmed to be "output" in our screens and audio devices, and we can continue directing and re-directing the processes by ongoing dialogic input in interactive software and GUI interfaces (more to come in Week 12).
    • This is what the software layers running on your device right now are doing to render the interpretable text, graphics, images, and window formatting from the digital data sources combined in a Web "page," image or video file, and many other behind-the-scenes sources (Weeks 6-8).

Introductions and Video Lessons:

  • Video: Prof. Irvine, Introduction to Computational Thinking and Software (From "Key Concepts in Technology" course)
  • Jeannette Wing, "Computational Thinking." Communications of the ACM 49, no. 3 (March 2006): 33–35. [Short essay on the topic. Wing launched a wide discussion in CS circles and education for this approach to introducing computing principles. These principles become embodied in the design of programming languages and coding principles.]
  • Video: Computational Thinking: What Is It? How Is It Used? (Computer Science Intro)
    • Main "Computational Thinking" strategies:
      • Decomposition (of a complex problem into manageable units that go together),
      • Pattern Recognition (discovering patterns in examples of the problem for making generalizations that hold over any example or instance),
      • Abstraction (focusing on one level of a problem at a time, bracketing off the complexity of dealing with other levels), and
      • Algorithm Design (designing the steps for a general procedure that can be coded in a program).
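The four strategies can be seen working together in a small worked example (a sketch with an illustrative problem of my own choosing, counting word frequencies in a text):

```python
# Sketch: the four computational-thinking strategies applied to a small
# problem -- counting word frequencies in a text.

# Decomposition: split the problem into steps (normalize, split, count).
# Abstraction: each step is a function whose internal details are hidden.
def normalize(text):
    return text.lower()

def tokenize(text):
    return text.split()

# Pattern Recognition: "count occurrences of each item" is a general
# pattern that works for any sequence of items, not just words.
def count_items(items):
    counts = {}
    for item in items:
        counts[item] = counts.get(item, 0) + 1
    return counts

# Algorithm Design: the ordered steps, composed into one procedure.
def word_frequencies(text):
    return count_items(tokenize(normalize(text)))

print(word_frequencies("the code and the meaning of the code"))
```

Each function is one "manageable unit" of the decomposition, and the final procedure is the algorithm that a program makes explicit.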

Crash Course Computer Science: Video Introductions to Programming

  • In the in-Learning Lesson below, Python is used as a teaching language for introducing programming fundamentals. With the background so far, you should also be able to understand the more universal programming principles that every programming language must include.
  • Continue with the Crash Course Computer Science Lessons: 9 (Instructions and Programs), 11 (Early Programming); 12 (The First Programming Languages); and 13 (Programming Basics: Statements and Functions).

Main Reading for Introduction to Coding Tutorial:

Main Assignment: Video Lessons for Hands-On Learning

  • in-Learning: Programming Foundations: Fundamentals
    • Sign in to this online course with your GU ID.
    • Short video lessons that introduce programming concepts with Python as the learning language, mainly using coding programs and interfaces for the Mac platform.
    • Study Units 1-3 for this week. You can follow the basic concepts in units 1-3 without installing your own IDE program ("Integrated Development Environment," a program to write programs) and the Python Interpreter for your platform (OS).
    • To go further in trying out your own code, install the Python Interpreter on your own PC (instructions for Mac and Windows platforms in video), and an IDE for writing code and then "running" it on your PC. The video will explain how Python uses an "interpreter" program to send "runnable" (executable) binary code to your system.
    • Take notes on what you learned and questions you have about programming concepts and how our "code" gets communicated and interpreted in a computer system.
  • Option: Some students may have had a general introduction to programming with Python as the teaching language. If so, you can choose to study and try out the code for Web pages and apps (HTML, CSS, and JavaScript) that we will study in Week 11 (On the Internet and World Wide Web). Go to the lessons in Week 11.

Writing assignment (Canvas Discussion module)

  • Describe what you learned from working through the in-Learning video lessons. Were you able to make connections to the computing principles and concepts for "code" that we've studied? Were any key concepts clearer? What questions would you like explained further in class?

Learning Objectives and Main Topics:

Main goal: By continuing your video lessons and background readings, think for yourself about what the rule-governed procedures mean (the step by step methods for using different kinds of signs and symbols).

By seeing the visual representations of programming code signs and symbols, and then what happens in the results from computational processes and actions in the "output" representations, can you understand more clearly how programming and software are about combining:

  • "symbols that mean" ("coded" by using a set of symbols for variables as "place-holders" to be filled-in by data-representing symbols when the program is "run"), and
  • "symbols that do" (the signs and symbols that create operations, actions, interpretations, and processes on or for the "meaning representing" symbols.

When we pause to observe how we use the whole computer system to encode symbolic representations (interpreted in binary representations) and cause symbolic actions, with and for those representations, can you catch a glimpse of what it means both to "code" and to "run" programs? Can you explain, on a conceptual level, what it is we are doing:

  • (1) when we program with a specific programming language for creating a "source code" file (that is, when writing code for software programs -- including "importing" reusable already-written code from code libraries), and using a source code file as "input" for interpreters or compilers that translate our text code file into binary "executable" files; and
  • (2) when we "run" software (from binary executable files in any computing device) for different kinds of data (e.g., text, images, graphics, audio/video), and "interact" with the program dynamically (in "real time") for directing actions and interpreting new/additional data.

Key Concepts

  • Source Code
  • Executable Code
  • Programs/software: how the symbol systems are designed to work, how a program file is allocated to (or assigned) memory locations, and how the design of the computing system (binary code representations in memory + processors taking inputs and generating outputs + cycles of time) directs access and memory for outputs.
  • The combined systems design for programming and computation.
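Python's built-in tools can make the source-code/executable-code distinction concrete (a minimal sketch using the standard `compile()` and `exec()` functions; the example itself is hypothetical, not from the course lessons): `compile()` translates a text "source code" string into a code object whose instructions are stored as binary bytes, and `exec()` "runs" that executable form.

```python
# Source code: a human-readable text representation of a procedure.
source = "x = 2 + 3"

# Compile the text into an executable code object.
code_obj = compile(source, "<example>", "exec")

# The compiled instructions are binary data (a sequence of bytes),
# the form the interpreter's virtual machine actually processes.
print(type(code_obj.co_code))   # <class 'bytes'>

# "Running" the executable form performs the computation and
# stores the result at a memory location bound to the name 'x'.
namespace = {}
exec(code_obj, namespace)
print(namespace["x"])           # 5
```

The same two-stage pattern (translate source text into a binary executable form, then run it) holds, with many differences of detail, for compiled languages like C and Java.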

Readings for Programming Fundamentals:

  • David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines.
    • For this week, read chapters 3-6 (Programming; Problems and Procedures; Data; Machines). [Download the book: you can always return to other chapters for reference and self-study.]
    • These chapters introduce the more technical and logical-mathematical aspects of the "how" and "why" in programming language design. In our context, the point is learning about the reasons for the design of a special code language (C, Python, Java, etc.), a symbolic code that allows us to communicate with the structures of digital computer systems.
    • A programming language must be designed to implement step-by-step procedures that can be represented formally in special symbols (the "code" vocabulary) that allow us to (1) assign human logical actions to perform computations (in the processors) on (2) physical representations of symbol tokens for data (in "memory" components). These two dimensions of programming are based on the principle of "one-to-one correspondence" mappings to (3) the combined binary architecture of digital computer systems.
  • Denning and Martell, Great Principles of Computing. Chapters 5, 6, 10 (Programming; Computation; Design). [These chapters will fill in your background for how programming and code are implemented in computer systems.]

Crash Course Computer Science Lessons

Main Assignment: Continuing Lessons for Hands-On Learning

  • LinkedIn Learning: Programming Foundations: Fundamentals
    • Study Units 4 - Conclusion for this week. Again, you can follow the basic concepts and procedures presented in the video lessons without installing the Python Interpreter and your own IDE program, but you will get more into "hands on" coding if you have the software tools on your own system.
    • Continue to take notes about what you are doing and learning, as well as questions about the programming principles.
    • If you have chosen the Option to study the coding methods for the Internet and Web (in next week's units), continue in the same way as in the point above, and try to reflect on what you are doing with the code and questions you have about the coding process.

Writing assignment (Canvas Discussion module)

  • First, capture your main learning steps and questions from the readings and video lessons for this week. Do you have further "aha!" connections from your studies from the past two weeks to this week, and new questions that emerge?
  • Next, refer to the learning goals for this week in the "Learning Objectives and Topics" above, and explain, as far as you can, how or whether these foundational principles of programming and software are more understandable, and ask questions that you have from our two-week unit on coding and the fundamentals of programming.

Learning Objectives and Main Topics:

This unit has two main objectives: learning the basic design principles for the Internet and Web as systems for semiotic systems, and learning some of the basic features of the code languages for the Web: HTML (Hypertext Markup Language), CSS (Cascading Style Sheets), and JavaScript (a scripting form of code for interaction and encoding digital media).

Learning the basics of the "HTML code suite" is a great way to begin learning and doing code. Since we "write" the suite of HTML code families in a text file, we get a first-level visualization of the relation between the metasymbolic level (the signs/symbols of the code as a metalanguage) and the symbolic forms (in data types) that we use for meaningful representations in a symbolic system (text, graphics, images, etc.). The "meta" code level is designed to define, describe, and prescribe the functions of all the digitally encoded representable forms packaged in an HTML file, but the "meta" code does not get "displayed" in the screen output. You can see right in your HTML code window how we use and distinguish between "symbols that mean" and "symbols that do" in "coding" for computer systems.
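A minimal, hypothetical HTML fragment makes the distinction visible: the tags are the "meta" code that defines and prescribes but never appears on screen, while the text between the tags is the symbolic content that does get displayed.

```html
<!-- The tags are metalanguage: they describe and direct, but are not displayed. -->
<!DOCTYPE html>
<html>
  <head>
    <title>A First HTML Page</title>   <!-- metadata about the document -->
  </head>
  <body>
    <h1>Symbols that mean</h1>         <!-- only the text inside the tags is shown -->
    <p>This sentence is the displayed content; the tags around it are the code.</p>
  </body>
</html>
```

Save a fragment like this as a `.html` file and open it in any browser: the "meta" symbols do their formatting work invisibly, and only the content symbols appear on the "page."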

These basic first coding steps will open up the design principles that enable us to send, retrieve, and format data from Internet/Web servers so that the data can be presented in GUIs (interactive Graphical User Interfaces). You will get a first look at the code that makes everything in our Web browsers and mobile apps work as dialogic interactive systems. You will discover how many of our core human symbolic capabilities can be assigned, delegated, and “outsourced” to large-scale, interconnected networked systems that store and analyze data, which can then be retrieved through Internet/Web connections and be interpreted in software layers for the final "output" presented on our Web “pages” and app “screens”.

With this first view of one level of code (used for fetching, describing, and formatting what we see on our screens) you can go further into the "black box" to understand the operations that we can’t see that are initiated in our interactive commands and choices communicated to networked computer systems. And here we meet all kinds of software programmed in several "languages."

Key Terms and Concepts Learned:

  • Levels and layers of computing systems and code.
  • Metadata and data.
  • Basic concepts for coding for data types and interactive commands for networked systems (Internet/Web).
  • Code used for what we see in all Web “pages” and mobile app screens (HTML, CSS, JavaScript).

Readings and Video Lessons:

HTML and Web Coding Lessons

For Inserting Your HTML Test Code in a Shared Google Doc

In Class: Follow the Code

  • JavaScript discovery html file.
  • Examination of the "code source" of Web pages.
  • Experimentation and practice with HTML basic code. Group project on HTML file.

Writing assignment (Canvas Discussion module)

  • This week's assignment has two parts:
    (1) With the background on the Internet and Web, and from your learning about the HTML code suite in the lessons, discuss some main points that you learned about the Internet/Web and coding for the Web. Can you describe some features of the HTML code suite and Web "metamedia" interfaces that subsume and combine many of the principles that we have studied for semiotic systems and subsystems, data types, and digital media? What are the main design ideas behind "hyperlinking" and multimedia display interfaces (hint: realizing some of Engelbart's and Kay's ideas)?
    (2) From what you learned in the HTML Web coding lessons, write some HTML markup and code for data that you would like to try out and see "run" from a web server and your own browser. Copy and paste this into the shared Google doc.

Learning Objective and Main Topics:

  • Learning the background history for the models of computation that led to the development of interfaces for human symbolic interaction with programmable processes.
  • Understanding the design steps and technical means (in the 1960s-1980s) that enabled computer systems to become general symbol processors and not simply calculating machines.
  • Learning the conceptual and semiotic foundations for the development of "graphical interfaces" for multiple symbol systems (data types). This development gave rise to "human computer interaction" (HCI) as a design discipline.
  • Learning the design concepts behind the technical architectures in all our devices that support user interfaces to computer systems (small or large) so that they perform as interfaces for semiotic and cognitive systems.

Readings & Video Introductions:
Interactive Design Principles and Metamedia for Semiotic Processes

Semiotic Foundations of Interaction Design Principles

  • Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012. Excerpts from Introduction and Chapter 2.
    • Read the introduction for this week. This book is an excellent recent statement of the contemporary design principles developed in the cognitive design tradition, which assumes that computer interfaces are designs for semiotic systems.

Optional: Background in the Technical History of Interaction Design

Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.

  • This excellent synthesis of the history was written in 1998, and we continue to use the interface design principles summed up here. Think about how the different design concept "leaps" (with supporting technologies as they became available) were motivated by semiotic-cognitive uses and by finding ways to bring more cognitive agency to using computer systems and digitized symbolic media types.

Supplementary Sources for Further Research: Historical Background

  • Collection of Original Primary Source Documents on Interface Design (in pdf).
  • I'm providing this group of readings because graduate students should have access to the primary texts of their field in their original form.
  • You do not need to read these texts fully, but review them for their historical significance and as they are referenced in the readings. These sources will also be important if you want to do a final project related to the design principles for semiotic structures in interfaces and interactions. We can discuss the readings and documentary videos further next week.
    • Contents of the Collection of Documents (descriptions are in the pdf file):
    • Vannevar Bush, "As We May Think," Atlantic, July, 1945.
    • Ivan Sutherland, "Sketchpad: A Man-Machine Graphical Communication System" (1963).
    • Douglas Engelbart, "Augmenting Human Intellect" (project from 1960s-1970s).
    • Alan Kay, Xerox PARC, and the Dynabook/ Metamedium Concept for a "Personal Computer" (1970s-80s)
  • History and Theory Background:
    • Lev Manovich, From Software Takes Command (2012): excerpts on the background ideas for Alan Kay's "Dynabook" Metamedium design concept, and "hypertext" (Ted Nelson), both of which extended what Kay learned in Engelbart's lab.
    • Alan Kay, "Programming Your Own Computer," World Book Encyclopedia, 1979. (Think about what PCs would be like if Kay's view had been adopted in the PC consumer industry!)
  • Documentary videos on the history of interface and interaction designs:

Writing assignment (Canvas Discussion module)
Reflecting on your learning over the past few weeks and this week, develop your own description of an interactive feature:

  • Using the concepts and methods from the readings (and any connections with prior weeks), describe some of the concepts that enabled computing systems to be designed as general symbol processors (not just calculating machines). How was this major "conceptual leap" connected with ideas for user interfaces that enable communicating with a computer system and directing the input and processing of symbolic representations, actions, and intentions?
  • Use an example of a software feature that requires our current interface designs (PC or mobile app), and that illustrates how these symbolic-cognitive functions are now always assumed and built-in to the technical components (e.g., pixel-mapped screens, inputs-outputs, data types translated into pixels and/or audio sounds).

Learning Objective:

Discussion of main learning achievements, and further thoughts about how to apply and extend the concepts and methods of the course to any aspect of computing, code, digital media, and symbolic systems.

Learning basic research methods for your final Capstone Project.

In class:
Discussion of your main learning discoveries and "take-aways" from the course

Instructions and How to Prepare for Your Final "Capstone" Project (pdf).
(Save and print out)

Readings for Synthesizing Thoughts and Learning

  • Mahoney, Michael S. "The Histories of Computing(s)." Interdisciplinary Science Reviews 30, no. 2 (June 2005): 119–35.
    • This is a rich and well-informed essay, but skim the first pages and begin close reading at the bottom of p. 128, continuing through p. 134. Though the examples are from earlier stages of computing, the main points about multiple communities of "computing(s)" and designs for symbolic processing will always be true.
  • Denning and Martell, Great Principles of Computing. Read Chap. 10 (Design), Chap. 12 (Afterword), and "Summary of the Book", pp. 241-255.
    • For your own further reading and research, be sure to consult the notes and bibliography. For Final Projects on any topic covered in this book, you will do well to begin with the references cited.

Examples of Published Articles (study for the structure of the article and uses of references)

  • Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.
    • This article is written for a broad computer science and HCI design readership; it is a hybrid of magazine style and research article. It is a good example of the "historical synthesis" method for describing and interpreting major ideas over several decades. Notice the extensive list of references and how the author summarizes the work.
  • Jiajie Zhang and Vimla L. Patel. "Distributed Cognition, Representation, and Affordance." Pragmatics & Cognition 14, no. 2 (July 2006): 333-341.
    • This article is a good example of an interdisciplinary view of these design topics. Pay attention to the structure of the article and uses of references.

Planning for Writing Your Final Capstone Project: The Structure of a Good Essay

  • As you plan your research and writing, consult my Writing to be Read (also a pdf version to print out): a guide for the structure and logic of research papers.
    • This guide, developed from many years of teaching writing, takes you through the process of developing a thesis (your main point), which is also called the research question, the leading hypothesis (or hypotheses) of an argument, or simply the main hypothesis to be supported and justified by your research.
    • This is the method for interpreting your research and organizing your thoughts in the way we present them in the structure of a research paper, article, academic thesis, or feature news article. Use it, and you will succeed in being read, because this is the form everyone expects.

Writing assignment (Canvas Discussion module)

  • As you reflect on what we've studied and what you have learned, what stands out for you in what you have learned and discovered? Which earlier questions have been answered, and what new questions do you want to follow up on?
  • Consider, too, how the methods, key concepts, and approaches that we have studied will apply to other topics or courses that you want to study in CCT.
  • Looking toward your final "capstone" project, was there a topic or approach that you would like to learn more about, and develop further on your own?

In Class: Open Discussion and Presentation of Final Projects

  • We will have a roundtable discussion of the current state of your thinking and research, and a chance to get feedback and suggestions from the class.

Resources for your Research:

Examples of Student Final Projects from Prior Classes of 711:

  • Fall 2020 [Other years of courses do not seem to be archived on Wordpress]

Final Projects

  • Follow the Instructions for your Final "Capstone" Project (pdf).
  • Use Zotero for organizing and formatting your references. All references in your essay must conform to a professional style for citing and formatting references (choose one in Zotero). Professional practices matter!
  • Final projects are due to be posted in the Canvas Discussion space 7 days after the last day of class. Insert (paste) your written work as a Discussion post in the topic "Final Projects."
  • If you have a document with images and formatting that you want to preserve, you can insert a link to a document (shared Google doc or pdf), and copy your abstract in the Canvas discussion post under your link.