
Georgetown University
Graduate School of Arts and Sciences
Communication, Culture & Technology Program

Professor Martin Irvine
CCTP 711: Computing and the Meaning of Code
Fall 2022

This course introduces the key concepts for understanding everything we call “code” (i.e., interpretable symbolic systems) in ways that apply to any professional career. Computing, software, and data resources are now essential in every professional field, but few people have the opportunity to learn the key design principles of computing systems, where the ideas for computation come from, why everything we do in computing is connected to a much deeper and longer history of human symbolic thought, and why it matters to understand this fact. Everything in modern computing and digital information has developed from human symbolic capabilities (language, writing, math, images), and these forms of human thought and expression are openly observable in all forms of communication and representation that go back long before modern technology.

With the methods and concepts in this course, you will be able to open up a big “black box” that is ordinarily closed off from view and understanding: the "why" of how modern computing is designed, and the meaning of everything that we do with computing and information. You will learn the "whys" and "hows" of computing systems and "code" -- not only of computing and programming “languages,” but of all our “systems of meaning,” from language, mathematics, and images to the binary encoding systems for all types of digital data.

The course is designed especially for students from non-technical backgrounds. But if you have done some computer coding already, you will understand more clearly how and why programming languages and digital media are designed the way they are, and how and why we can use “code” in one form (in the “digital language” of computers) for representing, processing, interpreting, and transmitting all forms of the “primary code” of human thought and expression (in words, images, numbers, graphics, sounds, and video).

In order to “crack the code” for understanding how all symbolic systems work, you will learn methods and concepts from several disciplines, including: design thinking, systems thinking, semiotics (the study of symbol systems), cognitive science and philosophy, information theory, and computer science. In this course, you will learn about computing and symbolic systems in two parallel paths: by learning the ideas that made our digital computing systems possible, and by actually seeing how it all works in “hands on” practice with software, programming code, and Internet apps. By focusing on the essential background for answering the “why” and “how” questions, you will also gain a new motivation for applying this understanding to the “how to” side of programming (if you want to learn how to code or work with others in designing applications).

Course Objectives and Outcomes

By the end of the course, you will be able to:

(1) Understand how the coding and logic of computer systems and digital information are based on our core human symbolic capabilities, and how and why the design principles for computer systems and digital media connect us to a longer continuum of symbolic thought, expression, and communication in human cultures;

(2) Use the foundational knowledge of this course to go on to learn programming in a specific programming language and a specific domain of application, if you want to develop these skills;

(3) Apply the knowledge and concepts of this course to developing a leadership-level career in any kind of organization where you will be a knowledgeable “translator” of computational concepts: you will be able to help those without technical backgrounds to understand how computing is used in your field, and be a communicator with people responsible for computing and information systems (“IT”) who need to understand the needs and roles of others in your organization. This “translator” role is in big demand, and it is one in which many CCT students have gone on to develop great careers.

View and download the Syllabus Document in pdf for Georgetown Policies and Student Resources.

Course Format and Syllabus Design

The course will be conducted as a seminar and requires each student’s direct participation in the learning objectives in each week’s class discussions. The course has a dedicated website designed by the professor. The web syllabus provides a detailed week-by-week "learning map" with links to weekly readings (in a shared Google Drive folder). Each syllabus unit is designed as a building block in the interdisciplinary learning path of the seminar.

To facilitate learning, students will write short essays each week based on the readings and topics for that week. Your short essay must demonstrate that you've done the readings, and can comment on and pose questions about what you find to be the main points. At first, you will have many questions as everyone in the class begins learning new concepts to work with and working toward better technical knowledge of how the concepts in the course apply to computer systems, code, and digital media. Try to apply some of the main concepts and approaches in each week’s unit to examples and cases that you can interpret in a new way. Students will also work in groups for in-class exercises and for collaborative presentations.

Students will participate in the course both in classroom discussion and with a suite of Web-based online learning platforms and e-text resources:

(1) A custom-designed Website created by the professor for the syllabus, links to readings, and weekly assignments: https://irvine.georgetown.domains/711/

(2) An e-text course library and access to shared Google Docs: most readings (and research resources) will be available in pdf format in a shared Google Drive folder prepared by the professor. Students may also create and contribute to shared, annotatable Google Docs for certain assignments and dialogue (both during synchronous online class-time, and working on group projects outside of class-times).

(3) Zoom video conferencing and virtual office hours. See: Students Guide for Using Zoom.

Grades:

Grades will be based on:

(1) Class Participation (50% of grade in two components): Weekly short writing assignments (in the course Canvas Discussion module) and participation in class discussions (25%). Collaborative group projects on topics in the syllabus (to be assigned) to be posted in the Canvas Discussion module and presented for discussion in class (25%).

Important: Weekly short writing assignments must be posted at least 6 hours before each class day. Everyone must commit to reading each other's writing before class to enable us to have a better-informed discussion in class.

(2) A final "Capstone" research project written as a rich media essay or a creative application of concepts developed in the course (50% of grade). Due date: 7 days after the last day of class.

Final projects will be posted on the course Canvas Discussion module, but you can also develop your project on another platform (Google Docs, your own website, etc.) and link to it in a Canvas discussion post for Final Projects. Your research essay can be used as part of your "digital portfolio" for use in resumes, job applications, or further graduate research.

Professor's Office Hours and Virtual Meetings
Before and after class, and by appointment. I will announce a virtual office hours schedule in the second week of classes.

Professor's Contact Email: Martin.Irvine@georgetown.edu

Required:

  • Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.

Recommended:

  • Luciano Floridi, Information: A Very Short Introduction. Oxford, UK: Oxford University Press, 2010.
  • James Gleick, The Information: A History, a Theory, a Flood. New York, NY: Pantheon, 2011.
  • Janet H. Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.

Links to Online E-Text Library and University Resources

A Note on Readings in the Course

  • I have written several introductions to the course units so that students will have a framework for understanding the interdisciplinary sources of ideas and methods for each topic. These introductions are drafts of chapters of a book that I am writing for students, and are based on my 25 years of research and teaching. Please give me your honest feedback on what works and what doesn't, what needs more clarification or more examples. There are no textbooks for the "big picture" interdisciplinary approach that we take in CCT, so we have to make our own.

Professor Irvine's Introductory Video Series: Key Concepts in Technology

  • I produced these videos for an earlier course, CCTP-798, Key Concepts in Technology. The basic background in these videos is relevant for topics in this course, and for your general learning in CCT. (They are short, mostly ~ 6-10 mins each. Note: the Week numbers don't correspond to the weeks in this course.)
  • Key Concepts YouTube Playlist (I will link some in specific weeks of the syllabus).

Learning Objectives:

  • Foundational background for the course. Introduction to key concepts, methods, approaches.
  • Introducing the multi-/ inter-disciplinary knowledge domains, research, and theory.

A framework of research questions for guiding our inquiry and knowledge-building:

  • What underlies what we call "code," and how / why is "code" based on the human symbolic capacity in all of its implementations in symbol systems from language, mathematics, and art works to the design principles of computer systems, digital information, and software programming?
  • How do we connect signs and symbols that represent meanings (data) with signs and symbols for actions (operations) on the meaning-representing signs/symbols to generate unlimited new representations?
  • How/why can we use "code" in one set of symbol representations to represent, process, interpret, and transform all our other symbol systems (language, sounds, graphics, images, film/video, multimedia)?

Course Introduction: Requirements and Expectations

  • View and download the Syllabus document in pdf:
    Course description, complete information on requirements, participation, grading, learning objectives, Georgetown Policies, and student resources.
  • We will discuss further in class, and you can ask questions about anything.

Personal Introduction: My Background (pdf)

Using Research Tools for this Course (and beyond)

Required Weekly Writing for Class: Canvas Discussion module for course

First Day: Introductory Presentation and Discussion (Prof. Irvine)

Learning Objectives:

"Booting up" our thinking with key concepts from the fields that we will be drawing from for our interdisciplinary research model. This week provides an introductory background to some of the key terms and concepts from computer science, semiotics, systems thinking, and design thinking for understanding the many kinds of "code," symbols, representations, and interpretation processes that we use in computing every day.

We will be studying the ideas and background from two directions: (1) learning how our contemporary computing systems and digital data structures are designed and how to describe these designs accurately, and (2) learning the history of ideas and principles of symbolic systems that make modern computing and all our digital media possible. To open up the "black boxes" of both our "symbolic minds" and how code is designed to work in computers, we will go back into the deeper foundations of the human "symbolic capacity" that have defined "being human" for at least 100,000 years. And, yes, it all connects and combines in what we do with computer systems!

Readings (read in this order):

Discussion and "workshop" in class this week:

  • We will have an open seminar style discussion in class this week, and also a "workshop" where you will be able to practice some coding concepts. Read the background texts for the key terms and concepts, and make notes on questions and any "aha" moments in your thinking.

Writing assignment (Canvas Discussion module)

  • Read the general "Weekly Writing Instructions" first. Important: We are using the discussion module in Canvas for your discussion of main points and questions in each week's readings and topics. You're not doing a "blog" post.
  • Even though many of the concepts and approaches in the readings are probably new for you, discuss two of the key terms and concepts, and give examples of how you can apply them to understanding or explaining what we do in computing and code. Ask questions that you would like to have discussed in class. We will work through the backgrounds in class, and apply the ideas to examples and cases that interest you.

Learning Objectives and Main Topics:

Learning the key multidisciplinary research findings about the "human symbolic capacity," the development of symbolic thought and symbol systems, and the close relationship between this human capability and the development of symbolic artefacts and technologies.

Key Questions:
What are the distinctive features of our shared human "symbolic capacity"? Why do humans have: (1) the "language faculty" (the ability to acquire natural language and immediately become members of a social group capable of unlimited expression and forming new concepts in the language); (2) the "number sense" or "numeracy" (the ability to think in quantities, patterns, and high-level abstractions, and to learn how to represent abstract values and operations in mathematical symbols); and (3) the capacity for learning and using many cultural symbolic systems (writing systems, image genres, music, art forms, architecture)? How and why are modern computers and the symbolic forms that we represent ("encode") in digital information an extension of a longer continuum of human symbolic thought -- and why does knowing this matter? How can we use this interdisciplinary background to help "deblackbox" what is closed off from understanding in the design and manufacturing of computing technologies?

Readings & Video

  • Prof. Irvine, "Symbolic Cognition and Cognitive Technologies" (Video, from Key Concepts in Technology) [This is a video that I produced for an earlier course; it introduces the "big picture" view of symbolic thought and computing that we are studying this week.]
  • Prof. Irvine, "Introduction to the Human Symbolic Capacity, Symbol Systems, and Technologies." [Read first for the conceptual framework for this week; print out for reference.]
  • Thinking in Symbols (Video Documentary, American Museum of Natural History)
    • See also the Archaeology documentary video on the findings in South Africa, which allow us to date human abstract symbolic thought to at least 100,000 years ago.
    • Note: From the earliest surviving use of symbolic artefacts to the bits and bytes and screens in today's computing technology, (1) symbols require physical, perceptible form, (2) symbols come in systems with rules and conventions for interpretation understood by a community, through which meanings, intentions, and values are encoded and decoded, and (3) symbols are based on replicable patterns.
  • Kate Wong, “The Morning of the Modern Mind: Symbolic Culture.” Scientific American 292, no. 6 (June 2005): 86-95.
    • A short accessible article on the recent state of research on the origins of human symbolic culture and the relation between symbolic thought, tool making, and technologies. Archaeological findings in the video documentary above are discussed in this article.
  • Michael Cole, "On Cultural Artifacts," From Cultural Psychology. Cambridge, MA: Harvard University Press, 1996. Short excerpts.
    • Background: A good summary of cognitive psychology research on cultural artefacts (human-designed and made "technologies" that support communication, cultural meaning, and symbolic thought). Embracing views also shared in anthropology, Cole provides a descriptive model of the human artefact that opens up an understanding of a long continuum of cognitive artefacts in human social history. This view allows us to see the implications of our longer history of using culturally adopted kinds of writing surfaces (cave walls, clay, wood, parchment, paper, pixel screens), with technologies developed for inscribing writing and images, and the more recent history of our technical media for representing, storing, and transmitting a symbolic system. (Note: these cultural facts unite European and Asian cultural history in a common human capability). Further, while tools are also artefacts (and only humans make tools to make other tools), we have a class of artefacts that are not simply instrumental (that is, used as tools to do something), but are designed to support human cognition (thought, conceptualization, symbolic expression) and to mediate (provide a material medium for) representing and transmitting cultural meanings in physical forms. This school of thought provided important concepts for Human-Computer Interaction (HCI) design theory in the 1960s-2000s. Computer interfaces are designs for using cognitive-symbolic artefacts in a specific technical design.
  • Video: A Brief History of Number Systems (TED-ed, Math in Real Life series)
    • This short video provides the background on our decimal (base 10) numerals (number symbols).

Prof. Irvine, (Slides): "Introduction: The Human Symbolic Capacity, Language, Symbols, Artefacts, Technologies" (Part 1) (for discussion in class and study on your own)

Writing assignment (Canvas Discussion module)

  • Discuss one or two discoveries that you made when you thought about the research and interdisciplinary background in this week's readings and video lessons. Did you have any "aha!" moments, when some connections became clearer? Were you able to understand how and why our modern "symbolic technologies" (computer systems, digital data, networks) are part of a longer continuum of human symbolic thought and the way we use, think in, and communicate with signs and symbol systems? This background will be completely new for you, so you will have many questions: what would you like to discuss further in class?

Learning Objectives and Topics:
Understanding the Basic Concepts of Semiotic Theory

Developing an understanding of human sign systems (from language and writing to multimedia and computer code) by using the terms and concepts first developed by C. S. Peirce, and now being applied and expanded in many fields, including computing systems, programming, and information.

Readings

  • Prof. Irvine, "Introduction to Peirce's Semiotic Theory for Studying Computing Systems as Semiotic Systems."
    • We will work through the main concepts outlined here over several weeks. I don't expect the concepts to be understandable at first; it's hard work, but working out the ideas for yourself really pays off. Don't worry; I will explain things step by step.
  • Winfried Nöth, “Human Communication from the Semiotic Perspective” (excerpt). From Theories of Information, Communication and Knowledge: A Multidisciplinary Approach, ed. Fidelia Ibekwe-SanJuan and Thomas M. Dousa. Dordrecht; New York: Springer, 2013: 97–119.
    • This is an approachable overview of how Peirce's key concepts apply to explaining communication as a semiotic process. We will go further and show how semiotic theory explains all forms of interpretation and computation.

Presentation:
Prof. Irvine, Intro Peirce's Semeiotic and Computing: Peirce 1.0

  • We will go over these concepts in class; you can also study the presentation on your own.

Writing assignment (Canvas Discussion module):

  • This is a fun exercise for becoming aware of a “semiotic process” and the way we use tokens and types of symbolic forms in an important kind of interpretation process -- translation. Go very slowly with your actions in the instructions below, and describe as many steps as you can by using the terms and concepts in the readings. (Next week, you will learn about language structures with an introduction to Natural Language Processing [NLP] in computing.)
  • Background for the assignment: Peirce observed that the "meaning" of any set of signs is represented in the further signs it can be "translated into." Updating his terms a little for today, we can say that the interpreted meanings of {symbol set1} take the form of "outputs" from a semiotic interpretation/translation process by a semiotic agent into {symbol set2}.
    • The word "translation" in English and other languages comes from the Latin word translatio, which, in its basic sense, means "transfer," "bring across," hence "interpret" as "transferring" the meaning of one set of expressions into another. Language-to-language translation is thus an obvious form of interpretation. (In English and other languages, a “translator” is also called an “interpreter.”)
  • The symbolic transformation that we do in language translations is possible by, first, correlating one set of sign tokens to their types (the lexical forms in a language) with their grammatical patterns, and then, second, interpreting ("processing") the symbolic forms with interpretants (correspondence codes), which, third, generate a second set of sign tokens as output. The "output" tokens represent the interpretation in additional sign tokens (which, of course, can be further reinterpreted in other signs in an ongoing process). (A short toy code sketch after this assignment illustrates this token-to-token chain.)
    • Translation is one of the most computationally difficult processes to automate. Merging semiotics and linguistics, we say that word and sentence meanings are understood with "semantic maps" and the possible "semantic fields" for word-meanings shared by speakers in a language community. As we know, there is never a direct mapping from one language to another, but we have ways for managing differences and ambiguities. (We do this "disambiguating," as it's termed, by understanding contexts outside single sentences, by sharing background knowledge assumed by speakers/writers and receivers/interpreters of sentences, and by understanding kinds of meanings that belong to specific situations and social-cultural uses.) But computational processes require strict "specification" (well, as strict as we can get) in algorithms and sample data sets. The terms and concepts developed in semiotics allow us to describe and expose the required steps for translation as a process, and further expose the challenges for designing a computational system that seeks to "simulate" human semiotic actions.
  • Assignment: First, go to the Google Translate page. The screen interface presents text box windows for the “source” language text and the “target” language text. Choose the source and target languages from the menu. Note that by doing this you have signaled the Google server to provide (in an interactive "real time" "Web service" layer) the corresponding code layers for representing the written language tokens in each window. The interface calls on a human agent to insert character and word tokens into the “source” text box. Type in or copy and paste at least three sentences in the “source” (text box) window. The translation will appear in the displayed tokens in the “target” language text window.
    • Note: Google's "translation service," a black box with many layers of Cloud-server-side processes, represents one application of assigning or delegating a very complex semiotic process -- translation -- to computational processes. Everything "computational" involves physical token representations (as data) and complex software interpretations (processes) that output further tokens. But we can't automate the language-to-language mappings that we "human computers" do, so we rely on a complex system design that can produce approximations of the human process. How? This approximation is computationally possible by using pattern recognition algorithms and fast statistical analysis over huge data set samples of language "tokens" and "strings" (chunks of text tokens). (We will study this topic further in the weeks on information and data.)
  • What are we doing, what is happening, in Google's two text windows? We will let the Google Translate service remain a black box of computational functions for now, and focus on the observable inputs and outputs. “Machine translation” (an unfortunate term of the trade) has many layers of processes and code correlations that perform Interpretant functions, “that by means of which an interpretation is made,” which are projected into symbol-token outputs. Without knowing what is happening in the black box of Google’s Translation Cloud of data and machine-learning software, what is happening semiotically that we can observe -- even though the semiotic process can only be done as approximations? Think through what needs to happen (without knowing anything about what's inside the black box) for (1) taking in your "input" tokens (registered in your computer's memory and displayed in the tokens in your screen window), and (2) displaying the tokens produced server-side and sent to the “target” language window.
    • Hints about the process: Input tokens as data --> passed on via an Internet data connection to a black box of Interpretant processes and code correlators into output tokens via an Internet data connection --> receive output tokens as data interpreted through the software and hardware of our systems. (Each step involves a physical process of tokenization, i.e., creating token instances.)
    • Google has also designed the "target" window with layers of interactive interpretive features that again call on a human semiotic agent. Mouse over sections of text. (Sometimes the text generated comes with embedded alternative translations.) Can they be "better" translated as you understand the language codes from source to target? What was missed as you understand the languages?
  • Next, copy the text in both source and target windows, and “paste” the text tokens into your discussion post, pasting 3 times for each set.
    • Note: when we use a software routine to "copy" anything, it stores another token instance of what we "copy" as byte-tokens in a temporary memory called a "buffer"; when we "paste" or "insert" the data, the byte-tokens are re-"copied" -- tokenized -- in different memory locations for the file being "edited," then tokenized again through the software and graphics processor to the physical pixel-mapped locations in our screens (our necessary perceptible token instances).
  • Next, use the style features in the Canvas edit window, and change the font style and/or color or size of the text characters in 2 of the sets of your text tokens. What have you just done? What is happening when we “retokenize” tokens from one digital instance to another? How do we recognize the characters and words no matter how many times we do this? Haven't you just proved the type/token principle?
  • Harder question to think through: how do we "know" what the text tokens "mean," no matter how they are morphed and retokenized, and can we design software to "know" what we know?
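To make the token-to-type-to-interpretant chain concrete before class, here is a deliberately toy sketch in Python (my illustration for discussion, with a made-up three-word lexicon; real machine translation works nothing like this internally):

    # Toy sketch of the chain described in the assignment: input tokens ->
    # recognized types -> interpretant (correspondence code) -> output tokens.
    # The lexicon below is a hypothetical stand-in for Google's black box.
    lexicon = {"the": "le", "cat": "chat", "sleeps": "dort"}

    def translate(source_text):
        # 1. Tokenize: segment the physical character string into word tokens.
        tokens = source_text.lower().split()
        # 2. Recognize each token as an instance of a type in the lexicon, and
        # 3. apply the correspondence code (an Interpretant function) to
        #    generate output types (unknown tokens are flagged, not translated).
        output_types = [lexicon.get(tok, "<" + tok + "?>") for tok in tokens]
        # 4. Emit a new string of tokens: a further sign representing the
        #    interpretation of the first (itself open to reinterpretation).
        return " ".join(output_types)

    print(translate("The cat sleeps"))  # -> le chat dort

Notice that every step produces new token instances; the "meaning" lives in the correlations, not in any of the physical tokens themselves.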

Learning Objectives and Main Topics

In this unit, students will learn the key terms and concepts developed in contemporary linguistics for understanding the nature and structure of human natural language and writing, and for distinguishing natural language from formal and artificial symbol systems also called "languages" (e.g., mathematical and scientific notation, the formal notation "metalanguage" used in linguistics, and computer programming "languages" or "code"). The terms and concepts established in linguistics are now the common terms used in computer science, cognitive science, semiotics, and many other fields.

Further, linguistics now also includes the specialized field of computational linguistics and natural language processing (NLP), which is an important field in computing and information science. Data analytics, AI, and Machine Learning depend on concepts formalized in linguistics (that is, given precise meanings and systems of notation used in programming and algorithms).

With this background, you will be prepared to answer important questions like:

  • what do we mean by natural language, and what are the distinctive features that make a human natural language a language in the precise terms of linguistics?
  • can we describe other symbolic systems like image genres and music as being 'a language' or 'like language,' and can we be more precise in our terminology?
  • what do we mean by a 'computer programming language', code, and 'language processing'?

Readings:

  • Steven Pinker, "Language and the Human Mind" [Video: 50 mins.][start here]
    • A well-produced video introduction to the current state of knowledge on language and cognitive science from a leading scientist in the field.
  • Martin Irvine, "Introduction to Key Concepts in Linguistics." (Intro essay; read first).
  • Steven Pinker, Words and Rules: The Ingredients of Language. New York, Basic Books, 1999. Excerpt, Chapter 1.
  • Andrew Radford, et al. Linguistics: An Introduction. 2nd ed. Cambridge, UK: Cambridge University Press, 2009. Excerpts. Use as a reference for the major topics of linguistics.
    • Review the Table of Contents so that you can see the topics of a standard course Introduction to Linguistics. You don't have to read the whole selection of excerpts. Focus on the Introduction to Linguistics as a field, and the sections on Words (lexicon) and Sentences (grammatical functions and syntax).

Video Lessons: Crash Course: Linguistics

  • Good basic short lessons. For this week, view Lessons 1-4 ("What is Linguistics" to "Syntax Trees") and Lesson 16 (Writing Systems).

Background for this Week's Assignment: Visualizing Syntax Structures

  • In the readings and video lessons above, you were introduced to the way that we use mathematical models (tree graphs) for mapping the syntactic structure of a sentence. Understanding syntactic patterns is also important for understanding how programming languages must be designed, and how we can encode digital data. For this assignment, you will use software developed for computational linguistics and Natural Language Processing (NLP) for visualizing the syntax structures of sample sentences in a "parse tree." (A small code sketch at the end of this section shows the same kind of tree structure in NLP software.)
    • The term "parse" comes from traditional grammar, which means decomposing sentences into their word classes or "parts of speech," like noun, verb, preposition (from Latin, pars = "part"; as in classical Latin grammar, partes orationis, "parts of a sentence, parts of speech"). See Wikipedia: Parsing.
    • Note: Most NLP begins with sorting word tokens and mapping them into a parse tree, or parsing them with metadata labels for each word.
  • Experiment with the XLE-Web: This site, provided by a linguistics research group, aggregates useful computational analysis tools for studying syntax.
    • In the "Grammar" pull-down menu, you will see the languages that can be "parsed" (syntax-mapped) in the online version of the software and database. For trying it out, choose any language you know. You type in or paste sentences in the text box.
    • Note that many languages have been analyzed by this research group, and are listed on the "Treebanks" page (though not available yet for the Web auto parser interface). You will find examples in Chinese and many other languages. (The "Tree Banks" are data sets of example sentences already parsed and mapped. You may have to click on "accept terms of use" for this section of the database. We will explore more in class).
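If you want to see how such tree structures are represented in code, here is a minimal sketch using the open-source NLTK library (my choice for illustration; it is unrelated to the XLE-Web software above):

    # A constituent (c-) structure held and displayed in code, using the
    # open-source NLTK library (install with: pip install nltk).
    from nltk import Tree

    # A hand-written parse, in the bracketed notation that parsers output,
    # of a short sentence containing a relative clause.
    tree = Tree.fromstring(
        "(S (NP (Det the) (N cat) (CP (C that) (VP (V slept))))"
        " (VP (V purred)))"
    )

    tree.pretty_print()   # draws the tree graph in plain text
    print(tree.leaves())  # the word tokens: ['the', 'cat', 'that', 'slept', 'purred']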

In-Class Exercises: Experimenting further with NLP "Sentence Parsers"

Writing assignment (Canvas Discussion module)

  • From the XLE-Web grammar menu, choose "English," and insert a sentence in the text box. Use a short sentence, but one that has a relative clause (a "that," "who," or "which" clause). Then click on "Parse sentence." (You can also choose a second language that you know for visualizing syntax in the tree graph, but we will use English as a common reference.)
    • The software will give you a very complex graph for the sentence (including options for which syntax "path" seems most likely), using two forms of formal linguistic notation: a constituent (c-) tree structure and functional (f-) bracketed notation structure.
    • We will focus on the "c-" (constituent) structure, so click off the "f-" structure and "discriminants" boxes after viewing the notation. You can also get a "map" of the word tokens in your sample sentence (click on "Tokens"). Take notes on what you discover in the syntax tree and token list. Next, uncheck all the options except "c- structure." This will generate a compact tree without all the syntax details. Use this compact tree for discussion.
  • This will be new for you, so don't worry about all the complexities and unfamiliar notation. Do your best to follow what is being presented in the visualization for the "c-" structure. You can experiment with the settings, and also mouse over and experiment with choosing different ways of mapping the tree.
    • Note: The software working in the background on this site is designed to generate thorough abstract maps of sentence structure from your input tokens, including "placeholder" elements that belong with the full syntactic structure but may not appear in your example sentence.
  • In your post, describe your experience using the syntax tools, what you learned about syntax and mapping word tokens with all the information in the detailed parse tree, and whatever was unclear about what the syntax visualization means. Insert your sample sentence, and, if possible, an image from a screen shot of the compact syntax tree (with only the "c-structure" checked). I'm sure you will have many questions, so include questions that we can discuss in class.

Background, Main Topics, and Learning Objectives

Your main learning goal for the next few weeks is to discover for yourself a clear conceptual understanding of the technical concepts of information, data, and the semiotic design principles of computing systems. And further, to discover why learning this basic knowledge can empower anyone – especially people who don’t think they are “techies” – to understand why and how all our computing and digital systems are designed the way they are, rather than some other way. You will then be on your way to claim ownership over these technologies as being part of our human birthright as symbolic thinkers and communicators, who always use technically designed physical media for expression, representation, communication, and community identity. Hang on, work as hard as you can on reading and understanding, ask lots of questions, and I will help you discover why learning this is worth the effort, and comes with lots of fun "aha" moments!

This week, you will learn the key terms, concepts, and design principles for “information” as defined in digital electronic communications and computation, and why we need to distinguish the technical concept of “information” from uses of the term in ordinary discourse and other contexts. You will learn the reasons why we use the binary system (from the human symbolic systems of mathematics and logic) for structuring and designing electronic information. You will learn why and how we use this designed system to map units of other symbolic systems (what we call "digital data") into arrays of structures of controlled states of electricity (patterns of on/off placeholders) in a second designed layer.

With the clarifying concepts from Peirce's definitions for the physical/material structures of tokens and representations required in every symbolic system, you will understand how digital, binary information is necessarily designed as a semiotic subsystem, a structured substrate, for holding and reproducing patterns of all our digitized symbolic systems. And not only structures for representations (strings or clusters of tokens), but also in the subsystem for encoding the kinds of interpretation and calculation that "go with" each data type as a system. This is the full "inside" view of "encoding" and "computation" with digital electronic systems. Deblackboxing computing and information for a true view of the designs for semiotic subsystems is the master key for understanding "code."

Next week you will learn the technical definition of "data" as structures of units of “information” that are encoded, in precise ways, in the design of what we call "digital architecture." This architecture means the whole master design for a system with three connected physical structures: (1) for representing tokenized units of human symbolic systems (data representations), (2) for using clusters of binary logic processes for interpreting, calculating, and transforming input data representations into further output representations, and (3) for reliable "packaging" of data structures for sending and receiving across networks (Internet protocols).

Key Terms and Concepts:

  • Information defined as quantifiable units of energy + time, also involving probability and differentiation (differentiability) from other possible states.
  • The Transmission Model of Communication and Information: the model from electrical engineering and telecommunications: what it is, and is not, about.
  • The Binary number and Boolean logic systems: for logic, computation in the base 2 number system, and encoding longer units of representations (bytes). Why do we use the binary system for logic and data representations?
  • The bit (binary unit) as the minimal encoding unit with arrays of two-state electronics (on/off). We can map human symbolic abstractions for two-value systems onto the electronic on or off states: yes/no, true/false, presence/absence; 1/0 in the base 2 number system. When we "read" the value of the two possible states in an information context, we say we get 1 bit of information. (See the short code sketch after this list.)
  • Discrete (= digital/binary) vs. Continuous (= analog) signals.
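As a concrete illustration of these key terms, here is a minimal Python sketch (standard library only; the semiotic framing in the comments is ours, not Python's):

    # A written character (a human symbol) is assigned a number, and that
    # number is encoded as a pattern of 8 bits (one byte).
    ch = "A"
    code_point = ord(ch)              # 65 in the base 10 number system
    bits = format(code_point, "08b")  # '01000001' in the base 2 system
    print(ch, code_point, bits)

    # The "1"s and "0"s exist only in our symbolic notation: in hardware, the
    # same pattern is held as on/off electronic states, and reading one
    # two-state placeholder yields 1 bit of information. The mapping is
    # reversible, which is what makes it an encoding:
    print(int(bits, 2) == code_point)  # True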

Readings

  • Introductory videos:
    • Code.org, How Computers Work series (short videos): watch Lesson 3: Binary and Data, and Lesson 4: Circuits and Logic (whole series list)
    • Crash Course Computer Science: Electronic Computing (background on the electronics for digital information)
    • Note: these are good quick intros, but they have to skim over some important facts about digital system design. There are no "1s" and "0s" in the physical components of digital information and computing systems or in binary code at the electronic level. "1" and "0" have meanings in human symbol systems, and, by using semiotic design principles, we map (correlate) human meanings and values represented in symbols into a system of on-or-off electronic states. These on/off states are meaningless until assigned a symbolic value from "outside" the physical system.
  • Martin Irvine, "Introduction to the Technical Theory of 'Information' (Information Theory + Semiotics)"
  • Daniel Hillis, The Pattern on the Stone: The Simple Ideas That Make Computers Work (New York, Basic Books: 1998; rev. 2015) (excerpts).
    • For this week, read only the Preface and Chaps. 1-2 (to p.37). Hillis provides good explanations for how we use binary representation and binary logic to impose patterns on states of electricity (which can only be on/off). The key is understanding how we can use one set of representations in binary encoding (on/off, yes/no states) for representing other patterns (all our symbolic systems). Binary encoded "information" (in the digital engineering sense) can be assigned to "mean" something else when interpreted as corresponding to elements of our symbolic systems (e.g., logical values, numerals, written characters, arrays of color values for an image). Obviously, bits registered in electronic states can't "mean" anything as physical states themselves. How do we get them to "mean"?
  • Denning and Martell. Great Principles of Computing. Chap. 3, "Information," 35-57.

Optional and for Your Own Study:

  • James Gleick, The Information: A History, a Theory, a Flood. (New York, NY: Pantheon, 2011). Excerpts from Introduction and Chapters 6 and 7.
    • Readable background on the history of information theory. I recommend buying this book and reading it throughout the semester, together with Denning and Martell.

In-Class: Demonstration of Telegraph Signals and Code in a working telegraph system!

Writing assignment (Canvas Discussion module)
Choose at least one topic to focus your thoughts and questions about the readings for this week:

  • From what you've learned about symbol structures so far, can you describe how the physical/perceptible components of symbol systems (text, image, sounds) are abstractable into a different kind of physical signal unit (electronic/digital) for transmission and recomposition in another place/time? (Hint: as you've learned from Peirce and semiotic theory, meanings aren't properties of signals or sign vehicles but are relational structures in the whole meaning-making process understood by senders/receivers in a meaning community.)
  • Consider specific cases for doing "de-blackboxed" descriptions: can you explain how the information structures (as patterns for replicable signals and tokens) must work to enable us to use digital electronics for creating and sending a text or email message, or making a digital photo and saving and transmitting it? Why is the meaning of any encoded "message" not part of the engineering design solution for the digital electronic system? Can you use semiotic descriptions and concepts to explain how "digital information" is (and must be) designed as a semiotic subsystem in the whole design of digital computing?

Learning Objectives and Main Topics:

After learning some basic background on “data” as understood in computing, we will focus on the encoding methods for the data types that we use every day: text and images. We send and receive more text today than ever before in human history (text messaging, email, blog posts, etc.). All this digital text is possible through the adoption of an international standard for encoding the characters of all languages -- Unicode.

Similarly, we routinely make photo images, send and receive images, and view digital images in many software and digital device contexts. This is all possible, too, by standards for image formats and ways to define images as data types.

When we open up the black boxes of all that happens behind the scenes to create the representations of text and images on our screens (our main symbol systems and data types), we can discover why and how digital data needs to be designed in subsystem levels that correspond to the affordances of digital electronics and binary logic. Following the requirements for every instance (token) of a symbolic representation to be interpretable as a pattern (type) in physical-perceptible structures, we find that the whole binary system must be designed to maintain structure-preserving structures across all instances. This provides us with consistent ways to assign kinds of meaning and functions to "data types" as patterns of structure (bytes). At the "information" level, the system guarantees bit-structure replication of the data units so that whatever was input (wherever, and whenever) can be continually interpreted in a digital system and rendered on screens (wherever, whenever). Why? So that the symbolic and meaningful patterns are preserved for recognition and interpretation by human symbolic agents. Here's where the "information theoretic" principles (digital bit/byte representation preservation) underlie the formatting and management of the "data types" actually used in our computing system's memory and software designed for types and formats of data.

More about the big "reveal" when we make the implicit design principles explicit: the more we open up digital design principles, the more we see how the designs exist for one main reason: creating semiotic subsystems for all our digitized symbol types and for the correlated patterns of interpretation that go with each symbolic type. This is why the deep "infrastructure" of digital code is designed the way it is, and why all these unobservable semiotic design principles explain (and are demonstrated in) everything that we do observe and understand in digital data.

Readings:

Case Studies: Common Examples of Data Types -- Text and Images

Text as Data Type: Character Set Encoding
How do we get all kinds of computer devices to display written characters in any language? What are the principles for encoding written characters in byte code that can be interpreted in any computing system (PC or smart phone) and rendered with software in the pixels of anyone's screen? Answer: all makers of devices and software adopt an International Standard: Unicode.
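Here is a minimal hands-on preview (a Python sketch using only the standard library) of what the readings below explain: Unicode assigns each character a numeric code point, and UTF-8 encodes that code point as one or more bytes:

    # Each character has a Unicode code point; UTF-8 encodes it as bytes.
    for ch in "Aé中":
        print(ch, hex(ord(ch)), list(ch.encode("utf-8")))

    # Output:
    # A 0x41 [65]                -- ASCII-range characters need 1 byte
    # é 0xe9 [195, 169]          -- accented Latin characters need 2 bytes
    # 中 0x4e2d [228, 184, 173]  -- Han characters need 3 bytes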

  • Unicode and Character Sets (see background in "Introduction to Data Concepts" above first)
    • Digital Character Sets: ASCII to Unicode (video lesson, Computer Science)
    • Wikipedia overview of Unicode and character encoding is useful.
    • The Unicode Consortium official site (everything is open-source and international standards-based)
    • The current Unicode Standard, 14.0.0 (Sept. 2021) [use for reference]
      • Code Charts for All Languages (experiment with different languages)
      • DIY example: This HTML Web "page" (our course syllabus) is encoded as UTF-8 (Unicode Transformation Format, 8-bit units), the most commonly used Unicode standard in almost all US and European text-encoding software. If you are using the Chrome browser for this course page, do "Ctrl-U" (PC) or "Command-U" (Mac) and you will see this line of code a few lines down:
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">.
      • This line of HTML "meta" code "tells" all Web software (including mobile apps) what kind of text is encoded in this file, so that the decoding module in the program uses the correct "context of interpretation" for the Unicode representations. We never see the "raw" Unicode bytecode strings: all our software is designed to use the standard bytecode (including numbers), but produce the screen representations that we understand.
    • Unicode Emoji (pictographic symbols) | Emoji Charts
      • [Yes! All emoji must have Unicode byte definitions or they wouldn't work consistently for all devices, software, and graphics renderings. Emoji are not sent and received as images but as bytecode definitions to be interpreted in a software context. Again, code data and device-software contexts and rendering methods are separate levels in the system design.]
      • Current Unicode Emoji Chart (with current skin tone modifications)
      • Unicode test file of all currently defined "emojis" [2020] (to test how they display in software)
    • David C. Zentgraf, “What Every Programmer Absolutely, Positively Needs to Know About Encodings and Character Sets to Work With Text.” Kunststube, April 27, 2015.
      • This is a useful explanation from a programmer's perspective; read through the section on UTF-8 and ASCII.
    • "Han Ideographs in the Unicode Standard," Yajing Hu (CCT student, final project essay)
      • This is a good essay that looks at the background of Unicode standards for Han characters and other Asian language families. The Unicode consortium had to consider the same issues for encoding Arabic and other languages with "non-Roman" character sets and marks.

Digital Images as Data: Digitizing Light, Pixels, Image Formats, and Software
How are images encoded as digital data? What are the basics that everyone can understand?
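One basic answer, in a deliberately tiny sketch (a toy Python illustration, not a real image format): an image is an array of pixels, each pixel a set of numeric color values, stored as bytes together with a header that declares width, height, and color model:

    # A 2x2 "image": each pixel is an (R, G, B) triple, one byte per value
    # (0-255). Real formats (JPEG, PNG) add headers and compression, but
    # the underlying principle -- colors as numbers as bytes -- is the same.
    image = [
        [(255, 0, 0), (0, 255, 0)],      # row 0: red pixel, green pixel
        [(0, 0, 255), (255, 255, 255)],  # row 1: blue pixel, white pixel
    ]

    # Flatten to the byte sequence that would be stored or transmitted:
    raw = bytes(v for row in image for pixel in row for v in pixel)
    print(len(raw), list(raw[:6]))  # 12 bytes; first six: [255, 0, 0, 0, 255, 0]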

Writing assignment (Canvas Discussion module)

  • Create an example (Instance) of a digital "data type" (a text string in any language or a photo image in a standard format), and write a detailed "biography" of your "instance tokens" from input encoding to the stages of memory instances to output display. Use a PC with the corresponding data type software, not a mobile phone. If you want to trace the steps in the digital representation of a photo image, you can take a photo from your phone, but "send" it to yourself (email, or Cloud storage that you can access from a computer) so that you can use PC/Mac software to "view" it. (The software that we use -- for text characters or images/graphics -- is designed to maintain and transform tokens of specific data type instances by having access to our system's active memory and processing units.) Insert your data example in this week's post.
  • Try thinking through these steps in your data instance's "biography" (see the short code sketch after this assignment). Hint: when we "copy" or "move" data items we are communicating intentions, through software routines (the subprocesses in any program), for ongoing retokenizing of the underlying physical bit/byte-level "information" in other physical instances in other digital memory locations, which our operating systems and other memory components "index" as data types.
    • (1) Describe (and ask questions about) the encoding/decoding processes of the data type instance as data. What is the relationship between the software specifically designed for creating (inputting) and displaying (outputting) a data type and how the instances are rendered in representations on our pixel-based displays?
    • (2) Describe (with as much detail as you understand so far) the E-information and Data levels of your example. Our "local" PC (or Web-enabled app on a mobile device) and the "remote" Canvas server are designed to facilitate "copies" of your data instances (re-tokenizations), and return "copies" (token instances) to be output through the memory, software, and screens on our devices.
    • "E-information" explains how the systems (our devices, the Internet, Web servers) are designed to manage copying/replicating tokens of bits/bytes reliably in the background (regardless of the data type). But we deal with bytes at another level up in the system design as data types interpreted in software. Our interfaces give us a Data view for "uploading" to the Canvas "data server" (and the "Library" of media types), and then for how a data type is "re-instanced" on any of our screens via the software and graphics-screen hardware on our individual devices.
  • Can you understand how the Data Type assignment level is applied to the E-information level? Is it clearer how digital design is about layering structures (imposing patterns to be interpreted as kinds or types of digital representations)? These layers are designed as "abstraction levels" (with the complexity of the details "abstracted" out of view) that are almost entirely blackboxed from "users." You will learn more about the software code layers that are designed precisely to maintain these data instances of our symbolic systems.
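To see "retokenization" in miniature, here is a short Python sketch (standard library only; "copy" and "paste" here are simple stand-ins for what editing software and memory buffers do):

    # Every "copy" is a new physical token instance (a new memory location)
    # of the same byte pattern (the type).
    original = "hello".encode("utf-8")  # one token instance of the bytes
    buffer = bytearray(original)        # "copy" into a buffer: a second instance
    pasted = bytes(buffer)              # "paste": a third instance

    print(original == pasted)           # True: same type (identical byte pattern)
    print(id(original) != id(pasted))   # True: distinct token instances in memory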

Learning Objectives and Main Topics

Learning the main design principles for modern digital electronic computer systems, and how this design exists to facilitate symbol processing in the levels of code, encoding, and the physical architecture of systems. You will learn some of the key concepts of digital computing systems, and why the whole model for computing is based on, and is designed to serve, our symbolic capabilities (in thinking, expression, communication, and creativity).

The video lessons and readings will provide important background on digital computer system design, but mostly from the "how" and "how to" perspective. With this background, your main learning goal (beginning this week) is to begin applying our key concepts for understanding why our systems are designed the way that they are, rather than some other way, for understanding what a computer system is.

Building on your learning "building blocks" for Information and Data concepts for digital, binary systems, you will learn the reasons why computing systems are (must be) designed as semiotic systems. You'll see that to get computer systems to do what we want them to do (perform delegated and automated symbolic tasks at electronic speed), we have to decompose the larger "how to" problem into several levels of subsystems. The subsystems are designed to inter-operate as part of a whole system "architecture" (the master design), the whole combination that we "orchestrate" through different levels of "code" representations (categories and levels of symbols for meaning and for performing operations and interpretations).

So, the whole computer system = (the specifically designed system of subsystems for symbolic structures and processes for them) + (human agents/interpreters who provide symbolic inputs, direct program processes, and interpret results). This whole combined system is what makes a digital computer a computer. A "computer" is not the blackbox of hardware that we get as a commercial product.

By applying the ideas implicit in the design principles for computer systems, you will discover the answers to the big questions about why computer systems seem like black boxes:

  • How can we make human abstractions and concepts (which we symbolize in the sign systems used in mathematics and logic, and combine in programming languages) perform actions in the physical electronic components of computers?
  • What were the design principles that enabled "computers" to develop from giant calculating machines ("number crunchers") into the multi-data-type, symbolic and multimedia systems that we use today?
  • How and why can we use numbers (an abstract symbol system) as a substrate or subsystem for all our "data types," which we "code" for representing the symbolic systems that we use computers for?

Readings and Video Lessons:

For Reference: Background on the Technical Design of Today's Computers

  • These texts are not for reading straight through, but are good reference sources for understanding computer system design and physical components. How Computers Work is just that: there is no why or explanation of design concepts, but it's a good, well-illustrated "inside the blackbox" view of standard hardware (which also applies to smartphones and mobile devices).
  • David A. Patterson, and John L. Hennessy. Computer Organization and Design: The Hardware/Software Interface. 5th ed. Oxford, UK; Waltham, MA: Morgan Kaufmann, 2013. Excerpts from Chapter 1.
    • Excellent overview of important concepts for system architecture from PCs to tablets. For beginning computer engineering students, but accessible.
  • Ron White, How Computers Work. 9th ed. Indianapolis, IN: Que Publishing, 2007. Excerpts.

Presentation: Computational Thinking: Implementing Symbolic Processes

  • For discussion in class and individual self-paced study.

Writing assignment (Canvas Discussion module)

  • Expanding on your data "case study" from last week (or choose a different example), think through how you can now add more of the details from the computer system design levels to your descriptions and explanations. Take the next step to describe and explain the "whys" (reasons for) the "how to" or "facts" of the physical components of the binary electronic system.
  • Why does this design for the physical structures in a binary system (memory + processors) enable us to create the representations and interpretations that we see and "interact" with at the input/output levels? Although most of the processes are unobservable as encoded binary electronic structures, can you see how they are really not "black boxes" (in the sense of being beyond human understanding) or just facts about machine parts?

Learning Objectives and Main Topics:

In the course units for Weeks 9-10, students will continue learning about computer system design and data as elements of symbolic systems, and learn how we communicate with the components in the designed architecture of computer systems through the levels of symbols in programming "code."

Students will also learn about computational thinking -- a universal form of thinking and reasoning that calls on our cognitive-symbolic abilities for abstraction, planning step-by-step procedures, and modeling the kinds of interpretations and operations that we use for our symbolic systems (language, math, images). Computational thinking -- upon which all computing systems depend -- is a specialized application of our symbolic-cognitive capabilities. This form of applied thinking underlies the design of programming languages and computer code. Students will learn how programming makes this way of thinking explicit so that we can develop formal (or "artificial") languages for assigning representations (elements of symbols) and actions (processes) to computing systems.

"Computational Thinking" is NOT learning to think like a computer (whatever notion of "computer" you may have). Rather, it's exposing common logical and conceptual thought patterns that everyone has, can develop, and can learn to apply in programming and digital media design.

Students will also be learning by doing, and seeing firsthand how computing code is a way of implementing levels and classes of symbols:

  • signs/symbols for representing "data types" that correspond to our symbolic systems ("symbols that mean" = represent values)
  • signs/symbols for defining the relations and actions (processes, procedures, interpretations) that a computer system can enact for each type of data representation ("symbols that do" = "meta" symbols taking the other orders of symbols as their "content").

The video lessons help you visualize how a programming language (and thus a software program or app) is designed to specify symbols that mean things (represent values and conceptual meaning, mainly through variables for data types) and symbols that do things (symbols that are interpreted in the computer system to perform actions and/or operations on other symbols = signs/symbols for syntax and operations).
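
To make this distinction concrete, here is a minimal Python sketch (Python is the teaching language in this unit's video lessons); the names and values are my illustrations, not taken from the lessons:

    # "Symbols that mean": variables stand for (represent) values of a data type.
    greeting = "Hello"    # a string: a coded representation of text
    count = 3             # an integer: a coded representation of a number

    # "Symbols that do": operators and function names direct the system to
    # perform actions on the representing symbols.
    message = greeting + "!"   # "+" is interpreted here as string concatenation
    print(message * count)     # "*" repeats the string; print() sends it to output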

Summing up our learning building blocks so far, you'll see that:

  • Programming languages are, and must be, "formal languages" (metalanguages) with strictly defined symbols for syntax and semantics (what the signs/symbols must mean -- stand for -- in the internal logic of the programming language design), as compared with natural languages (Week 5).
  • The strict formalism of programming languages is based on logic and mathematics (human symbol systems with signs for representing values + signs for operations/interpretations on symbols). Only by using precisely defined formal signs and symbols of a programming "code" is it possible for us to map (assign, impose a corresponding structure for) the formal necessity (represented in logically precise symbols) onto the causal necessity in the corresponding design of a binary digital computer system. The mapping of abstract human symbols to physical actions in components happens when the symbols that we understand in computing code are "translated" into binary code, the form that can be mapped to binary electronics (see the sketch after this list). The translated binary-encoded representations can thus be assigned to physical structures in components for both memory (holding, moving, and storing binary representations of data) and actions (processes, interpretations, and rules for transforming data representations) in the binary computing circuits of processors.
  • You can see how "E-Information" and "Data" representations (Weeks 6-8) become assigned to different levels in the architecture of a computing system, and how programming code puts them into action.
  • Computation in action (as "running" software) is a way of defining transitions in information representations that return further interpretable symbol sequences in chains of "states" that combine meanings and actions. Stages in the results of the processes are programmed to be "output" on our screens and audio devices, and we can continue directing and re-directing the processes through ongoing dialogic input in interactive software and GUI interfaces (more to come in Week 12).
    • This is what the software layers running on your device right now are doing to render the interpretable text, graphics, images, and window formatting from the digital data sources combined in a Web "page," image or video file, and many other behind-the-scenes sources (Weeks 6-8).
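
To glimpse this "translation" for yourself, you can use Python's built-in dis module, which displays the instruction-level code that the interpreter produces from our source symbols (a sketch of the principle only; the binary electronic form lies further translations down):

    import dis

    def add_one(x):
        return x + 1

    # Each line of source symbols becomes explicit "load", "add", and "return"
    # instructions addressed to the interpreter's virtual machine.
    dis.dis(add_one)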

Introductions and Video Lessons:

  • Video: Prof. Irvine, Introduction to Computational Thinking and Software (From "Key Concepts in Technology" course)
  • Jeannette Wing, "Computational Thinking." Communications of the ACM 49, no. 3 (March 2006): 33–35. [Short essay on the topic. Wing launched a wide discussion in CS circles and education for this approach to introducing computing principles. These principles become embodied in the design of programming languages and coding principles.]
  • Video: Computational Thinking: What Is It? How Is It Used? (Computer Science Intro)
    • Main "Computational Thinking" strategies:
      Decomposition (of a complex problem into manageable units that go together),
      Pattern Recognition (discovering patterns in examples of the problem for making generalizations that hold over any example or instance),
      Abstraction (focusing on one level of a problem at at time, bracketing off the complexity of dealing with other levels), and
      Algorithm Design (designing the steps for a general procedure that can be coded in a program).
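
As one hedged illustration (my own example, not taken from the video), the four strategies might look like this in Python when applied to a small problem:

    # Problem: report the average word length in a text.
    # Decomposition: break the task into steps (split the text, measure the
    #   words, combine the measurements).
    # Pattern Recognition: every word is handled by the same rule, len(word),
    #   which generalizes over any instance.
    # Abstraction: the function hides these details behind one name.
    # Algorithm Design: the steps form a general procedure for any text.

    def average_word_length(text):
        words = text.split()                     # decompose into words
        lengths = [len(word) for word in words]  # apply the general rule
        return sum(lengths) / len(lengths)       # combine the results

    print(average_word_length("computational thinking is symbolic thinking"))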

Crash Course Computer Science: Video Introductions to Programming

  • In the in-Learning Lesson below, Python is used as a teaching language for introducing programming fundamentals. With the background so far, you should also be able to understand the more universal programming principles that every programming language must include.
  • Continue with the Crash Course Computer Science Lessons: 9 (Instructions and Programs); 11 (Early Programming); 12 (The First Programming Languages); and 13 (Programming Basics: Statements and Functions).

Main Reading for Introduction to Coding Tutorial:

Main Assignment: Video Lessons for Hands-On Learning

  • in-Learning: Programming Foundations: Fundamentals
    • Sign in to this online course with your GU ID.
    • Short video lessons that introduce programming concepts with Python as the learning language, mainly using coding programs and interfaces for the Mac platform.
    • Study Units 1-3 for this week. You can follow the basic concepts in Units 1-3 without installing your own IDE ("Integrated Development Environment," a program for writing programs) and the Python Interpreter for your platform (OS).
    • To go further in trying out your own code, install the Python Interpreter on your own PC (instructions for Mac and Windows platforms in the video), and an IDE for writing code and then "running" it on your PC (a first test file is sketched after this list). The video will explain how Python uses an "interpreter" program to send "runnable" (executable) binary code to your system.
    • Take notes on what you learned and questions you have about programming concepts and how our "code" gets communicated and interpreted in a computer system.
  • Option: Some students may have already had a general introduction to programming with Python as the teaching language. If so, you can choose to study and try out the code for Web pages and apps (HTML, CSS, and JavaScript) that we will study in Week 11 (on the Internet and World Wide Web). Go to the lessons in Week 11.
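
If you do install the Python Interpreter and an IDE, a first test file might look something like this (a minimal sketch; the file name hello.py and its contents are my illustration, not from the lessons):

    # hello.py -- save this file, then run it from a terminal with: python3 hello.py
    # The interpreter translates these source-code symbols into executable
    # instructions for your system.

    name = input("What is your name? ")   # input() waits for your typed symbols
    print("Hello,", name)                 # print() directs symbols to the output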

Writing assignment (Canvas Discussion module)

  • Describe what you learned from working through the in-Learning video lessons. Were you able to make connections to the computing principles and concepts for "code" that we've studied? Were any key concepts clearer? What questions would you like explained further in class?

Learning Objectives and Main Topics:

Main goal: By continuing your video lessons and background readings, think for yourself about what the rule-governed procedures mean (the step-by-step methods for using different kinds of signs and symbols).

By seeing the visual representations of programming code signs and symbols, and then what happens in the results of computational processes and actions in the "output" representations, can you understand more clearly how programming and software are about combining:

  • "symbols that mean" ("coded" by using a set of symbols for variables as "place-holders" to be filled-in by data-representing symbols when the program is "run"), and
  • "symbols that do" (the signs and symbols that create operations, actions, interpretations, and processes on or for the "meaning representing" symbols.

When we pause to observe how we use the whole computer system to encode symbolic representations (interpreted in binary representations) and cause symbolic actions with and for those representations, can you catch a glimpse of what it means both to "code" and to "run" programs? Can you explain, on a conceptual level, what we are doing:

  • (1) when we program with a specific programming language to create a "source code" file (that is, when writing code for software programs -- including "importing" reusable, already-written code from code libraries), and use the source code file as "input" for interpreters or compilers that translate our text code file into binary "executable" files (a small sketch follows this list); and
  • (2) when we "run" software (from binary executable files in any computing device) for different kinds of data (e.g., text, images, graphics, audio/video), and "interact" with the program dynamically (in "real time") for directing actions and interpreting new/additional data.

Key Concepts

  • Source Code
  • Executable Code
  • Programs/software: how the symbol systems are designed to work, how a program file is allocated to (or assigned) memory locations, and how the design of the computing system (binary code representations in memory + processors taking inputs and generating outputs over cycles of time) directs memory access for outputs (see the sketch after this list).
  • The combined systems design for programming and computation.
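
For a small, rough trace of the memory side of this design, you can ask Python about its object representations (a sketch assuming the standard CPython interpreter, where id() corresponds to a memory address):

    import sys

    data = "Hello, world"        # a data representation held in memory
    print(id(data))              # an identifier tied to its location in memory (CPython)
    print(sys.getsizeof(data))   # the number of bytes this representation occupies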

Readings for Programming Fundamentals:

  • David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines.
    • For this week, read chapters 3-6 (Programming; Problems and Procedures; Data; Machines). [Download the book: you can always return to other chapters for reference and self-study.]
    • These chapters introduce the more technical and logical-mathematical aspects of the "how" and "why" in programming language design. In our context, the point is learning about the reasons for the design of a special code language (C, Python, Java, etc.), a symbolic code that allows us to communicate with the structures of digital computer systems.
  • A programming language must be designed to implement step-by-step procedures that can be represented formally in special symbols (the "code" vocabulary) that allow us to (1) assign human logical actions to perform computations (in the processors) on (2) physical representations of symbol tokens for data (in "memory" components). These two dimensions of programming are based on the principle of "one-to-one correspondence" mappings to (3) the combined binary architecture of digital computer systems (a small example of such a procedure follows this list).
  • Denning and Martell, Great Principles of Computing. Chapters 5, 6, 10 (Programming; Computation; Design). [These chapters will fill in your background for how programming and code are implemented in computer systems.]
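
As a hedged example of what such a formally specified, step-by-step procedure looks like (my illustration, not an excerpt from Evans), here is Euclid's algorithm in Python:

    def gcd(a, b):
        """Euclid's algorithm: a rule-governed procedure in which every symbol
        has one strictly defined role -- no ambiguity is permitted."""
        while b != 0:          # a precisely testable condition
            a, b = b, a % b    # one formally defined transformation per step
        return a

    print(gcd(48, 36))  # 12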

Crash Course Computer Science Lessons

Main Assignment: Continuing Lessons for Hands-On Learning

  • in-Learning: Programming Foundations: Fundamentals
    • Study Units 4 - Conclusion for this week. Again, you can follow the basic concepts and procedures presented in the video lessons without installing the Python Interpreter and your own IDE program, but you will get more into "hands on" coding if you have the software tools on your own system.
    • Continue to take notes about what you are doing and learning, as well as questions about the programming principles.
    • If you have chosen the Option to study the coding methods for the Internet and Web (in next week's units), continue in the same way as in the point above, and reflect both on what you are doing with the code and on the questions you have about the coding process.

Writing assignment (Canvas Discussion module)

  • First, capture your main learning steps and questions from the readings and video lessons for this week. Do you have further "aha!" connections between your studies from the past two weeks and this week, and new questions that emerge?
  • Next, refer to the learning goals for this week in the "Learning Objectives and Topics" above and explain, as far as you can, how or whether these foundational principles of programming and software are more understandable, and ask any questions that you have from our two-week unit on coding and the fundamentals of programming.

Learning Objectives and Main Topics:

This unit has two main objectives: learning the basic design principles of the Internet and Web as semiotic systems, and learning some of the basic features of the code languages for the Web: HTML (Hypertext Markup Language), CSS (Cascading Style Sheets), and JavaScript (a scripting language for interaction and for encoding digital media).

Learning the basics of the "HTML code suite" is a great way to begin learning and doing code. Since we "write" the suite of HTML code families in a text file, we have a first-level visualization of the relation between metasymbolic symbols (the signs/symbols of the code as a metalanguage) and the symbolic forms (in data types) that we use for meaningful representations of a symbolic system (text, graphics, images, etc.). The "meta" code level is designed to define, describe, and prescribe the functions of all the digitally encoded representable forms packaged in an HTML file, but the "meta" code does not get "displayed" in the screen output. You can see right in your HTML code window how we use and distinguish between "symbols that mean" and "symbols that do" in "coding" for computer systems (see the sketch below).
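As a minimal sketch of what you will see in your own code window (the page content here is my illustration), notice how the tags are the "meta" symbols that do not display, while the text between them is the content that appears on screen:

    <!DOCTYPE html>
    <html>
      <head>
        <!-- "meta" code: describes the page, never displayed in the page body -->
        <title>My First Page</title>
      </head>
      <body>
        <h1>Hello, Web</h1>  <!-- the tag prescribes "heading"; only the text displays -->
        <p>This sentence is the <em>displayed</em> symbolic content.</p>
      </body>
    </html>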

These basic first coding steps will open up the design principles that enable us to send, retrieve, and format data from Internet/Web servers so that the data can be presented in GUIs (interactive Graphical User Interfaces). You will get a first look at the code that makes everything in our Web browsers and mobile apps work as dialogic interactive systems. You will discover how many of our core human symbolic capabilities can be assigned, delegated, and “outsourced” to large-scale, interconnected networked systems that store and analyze data, which can then be retrieved through Internet/Web connections and be interpreted in software layers for the final "output" presented on our Web “pages” and app “screens”.

With this first view of one level of code (used for fetching, describing, and formatting what we see on our screens), you can go further into the "black box" to understand the operations that we can't see, which are initiated by our interactive commands and choices communicated to networked computer systems. And here we meet all kinds of software programmed in several "languages."

Key Terms and Concepts Learned:

  • Levels and layers of computing systems and code.
  • Metadata and data.
  • Basic concepts for coding for data types and interactive commands for networked systems (Internet/Web).
  • Code used for what we see in all Web "pages" and mobile app screens (HTML, CSS, JavaScript).

Readings and Video Lessons:

HTML and Web Coding Lessons

For Inserting Your HTML Test Code in a Shared Google Doc

In Class: Follow the Code

  • JavaScript discovery html file.
  • Examination of the "code source" of Web pages.
  • Experimentation and practice with HTML basic code. Group project on HTML file.

Writing assignment (Canvas Discussion module)

  • This week's assignment has two parts:
    (1) With the background on the Internet and Web, and from your learning about the HTML code suite in the lessons, discuss some main points that you learned about the Internet/Web and coding for the Web. Can you describe some features of the HTML code suite and Web "metamedia" interfaces that subsume and combine many of the principles that we have studied for semiotic systems and subsystems, data types, and digital media? What are the main design ideas behind "hyperlinking" and multimedia display interfaces (hint: they realize some of Engelbart's and Kay's ideas)?
    (2) From what you learned in the HTML Web coding lessons, write some HTML markup and code for data that you would like to try out and see "run" from a web server and your own browser. Copy and paste this into the shared Google doc.

Learning Objective and Main Topics:

  • Learning the background history for the models of computation that led to the development of interfaces for human symbolic interaction with programmable processes.
  • Understanding the design steps and technical means (in the 1960s-1980s) that enabled computer systems to become general symbol processors and not simply calculating machines.
  • Learning the conceptual and semiotic foundations for the development of "graphical interfaces" for multiple symbol systems (data types). This development gave rise to "human computer interaction" (HCI) as a design discipline.
  • Learning the design concepts behind the technical architectures in all our devices that support user interfaces to computer systems (small or large) so that they perform as interfaces for semiotic and cognitive systems.

Readings & Video Introductions:
Interactive Design Principles and Metamedia for Semiotic Processes

Semiotic Foundations of Interaction Design Principles

  • Janet Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012. Excerpts from Introduction and Chapter 2.
    • Read the introduction for this week. This book is an excellent recent statement of the contemporary design principles developed in the cognitive design tradition, which assumes that computer interfaces are designs for semiotic systems.

Optional: Background in the Technical History of Interaction Design

Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.

  • This excellent synthesis of the history was written more than two decades ago, and we continue to use the interface design principles summed up here. Think about how the different design-concept "leaps" (with supporting technologies as they became available) were motivated by semiotic-cognitive uses and by finding ways to bring more cognitive agency to using computer systems and digitized symbolic media types.

Supplementary Sources for Further Research: Historical Background

  • Collection of Original Primary Source Documents on Interface Design (in pdf).
  • I'm providing this group of readings because graduate students should have access to the primary texts of their field in their original form.
  • You do not need to read these texts fully, but review them for their historical significance and as they are referenced in the readings. These sources will also be important if you want to do a final project related to the design principles for semiotic structures in interfaces and interactions. We can discuss the readings and documentary videos further next week.
    • Contents of the Collection of Documents (descriptions are in the pdf file):
    • Vannevar Bush, "As We May Think," Atlantic, July, 1945.
    • Ivan Sutherland, "Sketchpad: A Man-Machine Graphical Communication System" (1963).
    • Douglas Engelbart, "Augmenting Human Intellect" (project from 1960s-1970s).
    • Alan Kay, Xerox PARC, and the Dynabook/Metamedium Concept for a "Personal Computer" (1970s-80s)
  • History and Theory Background:
    • Lev Manovich, From Software Takes Command (2012): excerpts on the background ideas for Alan Kay's "Dynabook" Metamedium design concept and "hypertext" (Ted Nelson), both of which extended what Kay learned in Engelbart's lab.
    • Alan Kay, "Programming Your Own Computer," World Book Encyclopedia, 1979. (Think about what PCs would be like if Kay's view had been adopted in the PC consumer industry!)
  • Documentary videos on the history of interface and interaction designs:

Writing assignment (Canvas Discussion module)
Reflecting on your learning over the past few weeks and this week, develop your own description of an interactive feature (a specific feature, not a whole mobile app):

  • Using the concepts and methods from the readings (and any connections with prior weeks), describe some of the concepts that enabled computing systems to be designed as general symbol processors (not just calculating machines). How was this major "conceptual leap" connected with ideas for user interfaces that enable communicating with a computer system and directing the input and processing of symbolic representations, actions, and intentions?
  • Use an example of a software feature that requires our current interface designs (PC or mobile app), and that illustrates how these symbolic-cognitive functions are now always assumed and built into the technical components (e.g., pixel-mapped screens, inputs/outputs, data types translated into pixels and/or audio sounds).

Learning Objective:

Discussion of main learning achievements, and further thoughts about how to apply and extend the concepts and methods of the course to any aspect of computing, code, digital media, and symbolic systems.

Learning basic research methods for your final Capstone Project.

In class:
Discussion of your main learning discoveries and "take-aways" from the course

Instructions and How to Prepare for Your Final "Capstone" Project (pdf).
(Save and print out)

Readings for Synthesizing Thoughts and Learning

  • Mahoney, Michael S. "The Histories of Computing(s)." Interdisciplinary Science Reviews 30, no. 2 (June 2005): 119–35.
    • This is a rich and well-informed essay, but skim the first pages and begin close reading at the bottom of p. 128, continuing through p. 134. Though the examples are from earlier stages of computing, the main points about multiple communities of "computing" and designs for symbolic processing will always be true.
  • Denning and Martell, Great Principles of Computing. Read Chap. 10 (Design), Chap. 12 (Afterword), and "Summary of the Book", pp. 241-255.
    • For your own further reading and research, be sure to consult the notes and bibliography. For Final Projects on any topic covered in this book, you will do well to begin with the references cited.

Examples of Published Articles (study for the structure of the article and uses of references)

  • Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.
    • This article is written for a broad computer science and HCI design readership; it is a hybrid of magazine style and research article. It is a good example of the "historical synthesis" method for describing and interpreting major ideas over several decades. Notice the extensive list of references and how the author summarizes the work.
  • Jiajie Zhang and Vimla L. Patel. "Distributed Cognition, Representation, and Affordance." Pragmatics & Cognition 14, no. 2 (July 2006): 333-341.
    • This article is a good example of an interdisciplinary view of these design topics. Pay attention to the structure of the article and uses of references.

Planning for Writing Your Final Capstone Project: The Structure of a Good Essay

  • As you plan your research and writing, consult my Writing to be Read (also a pdf version to print out): a guide for the structure and logic of research papers.
    • This guide, developed from many years of teaching writing, takes you through the process of developing a thesis (your main point), which is also called the research question, the leading hypothesis (or hypotheses) of an argument, or simply the main hypothesis to be supported and justified by your research.
    • This is the method for interpreting your research and organizing your thoughts in the way that we present them in the structure of a research paper, article, academic thesis, or feature news article. Use it, and you will succeed in being read, because this is the form everyone expects.

Writing assignment (Canvas Discussion module)

  • As you reflect on what we've studied and what you have learned, what stands out for you? Which earlier questions were answered, and what new questions do you want to follow up on?
  • Consider, too, how the methods, key concepts, and approaches that we have studied will apply to other topics or courses that you want to study in CCT.
  • Looking toward your final "capstone" project, was there a topic or approach that you would like to learn more about, and develop further on your own?

In Class: Open Discussion and Presentation of Final Projects

  • We will have a roundtable discussion of your current state of thinking and research, and a chance to get feedback and suggestions from the class.

Resources for your Research:

Examples of Student Final Projects from Prior Classes of 711:

  • Fall 2020 [Other years of the course do not seem to be archived on WordPress]

Final Projects

  • Follow the Instructions for your Final "Capstone" Project (pdf).
  • Use Zotero for organizing and formatting your references. All references in your essay must conform to a professional style for citing and formatting references (choose one in Zotero). Professional practices matter!
  • Final projects are due to be posted in the Canvas Discussion space 7 days after the last day of class. Insert (paste) your written work as a Discussion post in the topic "Final Projects."
  • If you have a document with images and formatting that you want to preserve, you can insert a link to the document (shared Google doc or pdf) and copy your abstract into the Canvas discussion post under your link.