Georgetown University
Graduate School of Arts and Sciences
Communication, Culture & Technology Program
Professor Martin Irvine
CCTP 5021:
Computing and the Meaning of Code
Spring 2024
This course introduces the key concepts for understanding everything we call “code” (i.e., how we use symbolic systems) in ways that apply directly to any professional career where knowledge of the key concepts in computing, information, data, and software is important for leadership roles. Although many kinds of computing devices, applications, and data resources are essential in everyday practice in every field, few people have the opportunity to learn the "why" and "how" beyond just knowing "how to" as a user of various hardware and software systems. This course opens up the key concepts that make computing possible and explainable. You will learn why everything we do in computing is connected to a much longer history of human symbolic thought and all the forms of human communication and representation. This includes how we use our symbolic capacity for abstraction (thinking in different levels) in order to design technologies that can represent, "encode," and interpret (process, compute) human symbolic processes physically. With the methods and concepts in this course, you will be able to open up a big “black box” we call "code" – not only for computing and programming “languages,” but for all our “systems of meaning,” from language, mathematics, and images to the binary encoding systems for all types of digital data and software itself.
The course is designed especially for students from non-technical backgrounds. But if you have done some computer coding already, you will understand more clearly how and why programming languages and digital media are designed the way they are, and how and why we can use “code” in one form (in the “digital language” of computers) for representing, processing, interpreting, and transmitting all forms of the “primary code” of human thought and expression (in words, images, numbers, graphics, sounds, and video).
We will follow a CCT interdisciplinary approach that allows us to “crack the code” for how everything in computing is based on our shared human symbolic capacity, and how and why computer systems and digital data are designed to serve one or more of our symbolic forms of expression and representation (language, text, numbers, graphics, images, sounds). How is it possible to design systems that make "encoded" representations of all these forms of human expression computable? What does it mean for something to be computable? By the end of the course, you will be able to meaningfully answer these questions (and more). To get there, we will draw from key concepts and methods developed in disciplines devoted to the study of human symbolic thought and the kinds of "code" understood in all branches of computing: philosophy and logic, information theory, computer science, linguistics, design thinking, systems thinking, semiotics (the study of symbol systems), and cognitive science.
In this course, you will also learn about computing and symbolic systems in two parallel paths: by learning the ideas that made our digital computing systems possible, and by actually seeing how it all works in “hands on” practice with devices, software, programming code, and Internet apps. By focusing on the essential background for answering the “why” and “how” questions, you will also gain a new motivation for applying this understanding to the “how to” side of programming (if you want to learn how to code or work with others in designing applications).
Course Objectives and Outcomes
By the end of the course, you will be able to:
(1) Understand how the coding and logic of computer systems and digital information are based on our core human symbolic capabilities, and how and why the design principles for computer systems and digital media connect us to a longer continuum of symbolic thought, expression, and communication in human cultures;
(2) Use the foundational knowledge of this course to go on to learning programming in a specific programming language and in a specific domain of application, if you want to develop these skills;
(3) Apply the knowledge and concepts of this course to developing a leadership-level career in any kind of organization where you will be a knowledgeable “translator” of computational concepts: you will be able to help those without technical backgrounds to understand how computing is used in your field, and be a communicator with people responsible for computing and information systems (“IT”) who need to understand the needs and roles of others in your organization. This “translator” role is in high demand, and one in which many CCT students have gone on to build successful careers.
View and download the Syllabus Document in pdf for Georgetown Policies and Student Resources.
Course Format and Syllabus Design
The course will be conducted as a seminar and requires each student’s direct participation in the learning objectives in each week’s class discussions. The course has a dedicated website designed by the professor. The web syllabus provides a detailed week-by-week "learning map" with links to weekly readings (in a shared Google Drive folder). Each syllabus unit is designed as a building block in the interdisciplinary learning path of the seminar.
To facilitate learning, students will write short essays each week based on the readings and topics for that week. Your short essay must demonstrate that you've done the readings and can comment on and pose questions about what you find to be the main points. At first, you will have many questions as everyone in the class begins learning new concepts to work with and developing better technical knowledge of how the concepts in the course apply to computer systems, code, and digital media. Try to apply some of the main concepts and approaches in each week’s unit to examples and cases that you can interpret in a new way. Students will also work in groups for in-class exercises and for collaborative presentations.
Students will participate in the course both in classroom discussion and with a suite of Web-based online learning platforms and e-text resources:
(1) A custom-designed Website created by the professor for the syllabus, links to readings, and weekly assignments: https://irvine.georgetown.domains/5021/
(2) An e-text course library and access to shared Google Docs: most readings (and research resources) will be available in pdf format in a shared Google Drive folder prepared by the professor. Students may also create and contribute to shared, annotatable Google Docs for certain assignments and dialogue (both during synchronous online class-time, and working on group projects outside of class-times).
(3) Zoom video conferencing and virtual office hours. See: Students Guide for Using Zoom.
Grades:
Grades will be based on:
(1) Class Participation (50% of grade in two components): Weekly short writing assignments (in the course Canvas Discussion module) and participation in class discussions (25%). Collaborative group projects on topics in the syllabus (to be assigned) to be posted in the Canvas Discussion module and presented for discussion in class (25%).
Important: Weekly short writing assignments must be posted at least 6 hours before each class day. Everyone must commit to reading each other's writing before class to enable us to have a better-informed discussion in class.
(2) A final "Capstone" research project written as a rich media essay or a creative application of concepts developed in the course (50% of grade). Due date: 7 days after the last day of class.
Final projects will be posted on the course Canvas Discussion module, but you can also develop your project on another platform (Google Docs, your own website, etc.) and link to it in a Canvas discussion post for Final Projects. Your research essay can be used as part of your "digital portfolio" for your use in resumes, job applications, or further graduate research.
Professor's Office Hours and Virtual Meetings
Before and after class, and by appointment. I will announce a virtual office hours "drop-in" schedule in the second week of classes.
Professor's Contact Email: Martin.Irvine@georgetown.edu
Required:
- Peter J. Denning and Craig H. Martell. Great Principles of Computing. Cambridge, MA: The MIT Press, 2015.
- Janet H. Murray, Inventing the Medium: Principles of Interaction Design as a Cultural Practice. Cambridge, MA: MIT Press, 2012.
Links to Online E-Text Library and University Resources
- Link to shared Google Drive folder for weekly readings (Georgetown students access only)
- Directory of Drive folders for all topics in the Etext Library (for your own reading and research)
- You can do a keyword search of all documents in the Etext Library ("Search in Drive").
- Georgetown Library Main Page (click on "Journals" tab to search for Journals)
A Note on Readings in the Course
- I have written several introductions to the course units so that students will have a framework for understanding the interdisciplinary sources of ideas and methods for each topic. These introductions are drafts of chapters of a book that I am writing for students, and are based on my 25 years of research and teaching. Please give me your honest feedback on what works and what doesn't, what needs more clarification or more examples. There are no textbooks for the "big picture" interdisciplinary approach that we take in CCT, so we have to make our own.
Professor Irvine's Introductory Video Series: Key Concepts in Technology
- I produced these videos for an earlier course, CCTP-798, Key Concepts in Technology. The basic background in these videos is relevant for topics in this course, and for your general learning in CCT. (They are short, mostly ~ 6-10 mins each. Note: the Week numbers don't correspond to the weeks in this course.)
- Key Concepts YouTube Playlist (I will link some in specific weeks of the syllabus).
Learning Objectives:
- Learning about the foundational background for the course. Introduction to key concepts, methods, approaches.
- Introducing the multi-/ inter-disciplinary knowledge domains, research, and theory.
A framework of research questions for guiding our inquiry and knowledge-building:
- What underlies what we call "code," and how / why is "code" based on the human symbolic capacity in all of its implementations in symbol systems from language, mathematics, and art works to the design principles of computer systems, digital information, and software programming?
- How do we connect signs and symbols that represent meanings (data) with signs and symbols for actions (operations) on those meaning-representing signs/symbols to generate unlimited new representations?
- How/why can we use "code" in one set of symbol representations to represent, process, interpret, and transform all our other symbol systems (language, sounds, graphics, images, film/video, multimedia)?
Course Introduction: Requirements and Expectations
- View and download the Syllabus document in pdf:
Course description, complete information on requirements, participation, grading, learning objectives, Georgetown Policies, and student resources.
- We will discuss further in class, and you can ask questions about anything.
Introductions
- Professor's Personal Introduction (Where is this guy coming from?)
- Student Introductions: who are we? what would you like to learn from this course?
Summary of Class Requirements [as above]
Required Weekly Writing for Class: Canvas Discussion module for the course
- Read the Instructions for your weekly writing assignment (read before posting!)
Using Research Tools for this Course (and beyond)
- Required: Learn how to use Zotero for managing bibliography and data for references and footnotes.
- Directions and link to app, Georgetown Library (click open the "Zotero" tab).
- You can export and cut and paste your references into writing assignments for this course, and all your courses and professional writing.
- Required: Learn how to use Georgetown Library Research Resources
- Georgetown Library Main Search Page (all books, periodicals, media, databases)
- Click on the "Journals" tab to search for journals by title.
- Prof. Irvine's e-Text Library for Students (Shared Google Drive; GU login required):
- Browse Prof. Irvine's Student Library (shared Drive folder) for readings and other research materials you can use in this course (and beyond). Folders by Topic:
First Day: Introductory Presentation and Discussion (Prof. Irvine)
- Introductory Presentation (Google slides)
- Open Discussion, Q&A about the course.
Learning Objectives:
"Booting up" our thinking with key concepts from the fields that we will be drawing from for our interdisciplinary research model. This week provides an introductory background to some of the key terms and concepts from computer science, semiotics, systems thinking, and design thinking for understanding the many kinds of "code," symbols, representations, and interpretation processes that we use in computing every day.
We will be studying the ideas and background from two directions: (1) learning how our contemporary computing systems and digital data structures are designed and how to describe these designs accurately, and (2) learning the history of ideas and principles of symbolic systems that make modern computing and all our digital media possible. To open up the "black boxes" of both our "symbolic minds" and how code is designed to work in computers, we will go back into the deeper foundations of the human "symbolic capacity" that have defined "being human" for at least 100,000 years. And, yes, it all connects and combines in what we do with computer systems!
Readings (read in this order:)
- Prof. Irvine, Course Introduction: Our Interdisciplinary Framework
[Print out. Keep for further reference.]
- Don't worry if many of the concepts and topics are new to you and not yet understandable. We will discuss everything in class, and in a few weeks you will be learning how all the ideas connect, step by step.
- Prof. Irvine, Introducing Key Terms and Concepts: Code, Signs, Symbols.
[Print out. Read for discussion in class and keep for reference.]
- Peter Denning and Craig Martell, Great Principles of Computing (pdf). Read the Preface and Introduction (pp. xii-18).
- Although this introduction focuses on the development of computer science as a discipline and on the industries and sciences that have formed around computing, Denning is very much a "history of ideas" thinker and is also widely known for his writings on design and computational thinking. This is a great book for CCT students; read it throughout the course.
- A First Look at Recent Definitions and Conceptions of Computing (pdf).
- Short text excerpts by leading thinkers in computer science on the history of ideas about symbolic systems and computing.
- Think over these brief definitions, and reflect on these texts with the other background readings for this week.
Discussion and "workshop" in class this week:
- We will have an open seminar style discussion in class this week, and also a "workshop" where you will be able to practice some coding concepts. Read the background texts for the key terms and concepts, and make notes on questions and any "aha" moments in your thinking.
Writing assignment (Canvas Discussion module)
- Read the general "Weekly Writing Instructions" first.
- Even though many of the concepts and approaches in the readings are probably new for you, try working with some new ideas. Discuss two of the key terms and concepts in the readings, and give examples of any "aha!" moments in your thinking. Even from this brief orientation to our course methods and topics, were you able to apply some of the concepts to your understanding of an aspect of computing and code? What questions arose in your thinking? Ask questions that you would like to have discussed in class. We will work through the backgrounds in class, and apply the ideas to examples and cases that help explain things further.
Learning Objectives and Main Topics:
Learning the key multidisciplinary research findings about the "human symbolic capacity," the development of symbolic thought and symbol systems, and the close relationship between this human capability and the development of symbolic artefacts and technologies.
Key Questions:
What are the distinctive features of our shared human "symbolic capacity"? Why do humans have:
(1) the "language faculty" (the ability to acquire natural language and immediately become members of a social group capable of unlimited expression and forming new concepts in the language);
(2) the "number sense" or "numeracy" (ability to think in quantities, patterns, and high-level abstractions, and ability to learn how to represent abstract values and operations in mathematical symbols), and
(3) the capacity for learning and using many cultural symbolic systems (writing systems, image genres, music, art forms, architecture)?
(4) What kind of technology is a computer system?
How and why are all the design principles for computer systems based on, and intended to serve, our human symbolic capacity and uses of specific symbol systems (writing, numbers, images, etc.)? How and why are modern computers and the symbolic forms that we represent ("encode") in digital information an extension of a longer continuum of human symbolic thought -- and why does knowing this matter?
(5) How can we use the related multidisciplinary knowledge and methods for "deblackboxing" what is (accidentally) closed off from understanding in the design and manufacturing of computing and digital data technologies?
Readings & Video Lessons
- Prof. Irvine, "Symbolic Cognition and Cognitive Technologies" (Video, from Key Concepts in Technology) [This is a video that I produced for an earlier course, and it introduces the "big picture" view of symbolic thought and computing that we are studying this week.]
- Prof. Irvine, "Introduction to the Human Symbolic Capacity, Symbol Systems, and Technologies." [Read first; print out for reference.]
- Thinking in Symbols (Video Documentary, American Museum of Natural History)
- See also the Archaeology documentary video on the findings in South Africa, which allow us to date human abstract symbolic thought to at least 100,000 years ago.
- Note: From the earliest surviving use of symbolic artefacts to the bits and bytes and screens in today's computing technology, (1) symbols require physical, perceptible form, (2) symbols come in systems with rules and conventions for interpretation understood by a community, through which meanings, intentions, and values are encoded and decoded, and (3) symbols are based on replicable patterns.
- Kate Wong, “The Morning of the Modern Mind: Symbolic Culture.” Scientific American 292, no. 6 (June 2005): 86-95.
- A short accessible article on the recent state of research on the origins of human symbolic culture and the relation between symbolic thought, tool making, and technologies. Archaeological findings in the video documentary above are discussed in this article.
- Michael Cole, "On Cultural Artifacts," From Cultural Psychology. Cambridge, MA: Harvard University Press, 1996. Short excerpts.
- Background: A good summary of cognitive psychology research on cultural artefacts (human-designed and made "technologies" that support communication, cultural meaning, and symbolic thought). This school of thought provided important concepts for Human-Computer Interaction (HCI) design theory in the 1960s-2000s. Computer interfaces are designs for using cognitive-symbolic artefacts in a specific technical design. Cole combines approaches from cognitive psychology and anthropology for a useful descriptive model of the human artefact, which opens up understanding the function of cognitive artefacts in human social history from the earliest artefacts for inscription, memory, and transmission to everything designed for our digital and computational world today.
- Numbers and Mathematics as Symbol Systems:
- A Brief History of Number Systems (TED-ed Video, Math in Real Life series)
- George Ifrah, The Universal History of Computing; skim Part 1, pp. 3-96.
- Part 1 is a summary of number systems and notation (symbol representation) throughout human civilization in many historical periods. See pp. 57-60 for background on our modern positional place-value decimal number system. This is background to the history of computing in this masterful book (download and keep for reference).
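For students who want to see the positional principle in compact form, here is a minimal Python sketch (my own illustration, not part of the readings) of how a numeral's value is computed from its digit symbols and the base:

```python
# Positional (place-value) notation: a numeral is a sequence of digit
# symbols whose value depends on each digit's position (its power of the base).

def positional_value(digits, base):
    """Interpret a list of digit values in a given base, most significant first."""
    value = 0
    for d in digits:
        value = value * base + d  # shift left one place, then add the new digit
    return value

# The same digit symbols mean different quantities in different number systems:
print(positional_value([1, 0, 1, 1], 10))  # decimal "1011" -> 1011
print(positional_value([1, 0, 1, 1], 2))   # binary  "1011" -> 11
```

The same four symbols "1011" represent two very different quantities; only the system's rules of interpretation (the base) tell us which.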
Prof. Irvine, "The Human Symbolic Capacity, Language, Symbols, Artefacts, Technologies" (Slides): for discussion in class and study on your own
Writing assignment (Canvas Discussion module)
- Discuss one or two discoveries that you made when you thought about the research and interdisciplinary background in this week's readings and video lessons. Did you have any "aha!" moments, when some connections became clearer? Were you able to understand how and why our modern "symbolic technologies" (computer systems, digital data, networks) are part of a longer continuum of human symbolic thought and the way we use, think in, and communicate with signs and symbol systems? This background will be largely new for you, so you will have many questions: what would you like to discuss further in class?
Learning Objectives and Topics:
Understanding the Basic Concepts of Semiotic Theory
Developing an understanding of human sign and symbol systems (from language and writing to multimedia and computer code) by learning the key terms and concepts first developed by C. S. Peirce (1839-1914), the American "polymath" scientist, mathematician, and philosopher, who was way ahead of his time. He also developed many of the concepts in logic and binary computation that we use in computing today (before computers!). Peirce's concepts for describing symbol systems and symbolic thought are very useful for explaining our contemporary uses of "code" and "information," and you will find his ideas applied and expanded in many fields, including linguistics, philosophy of mind, cognitive science, computing system design, and programming languages.
The learning goal for this week is to become familiar with the main terms and concepts (as defined in our context), and to begin thinking with them for understanding and explaining aspects of computing and code that you didn't see or understand before.
Readings
- Prof. Irvine, "Introduction to Peirce's Semiotic Theory for Studying Computing Systems as Semiotic Systems."
- We will work through the main concepts outlined here over several weeks. I don't expect the concepts to be understandable at first; it's hard work, but working out the ideas for yourself really pays off. Don't worry; I will explain things step by step.
- Prof. Irvine, "Semiotics in Computing and Information Systems." Chapter in Bloomsbury Semiotics, ed. Jamin Pelkey (London: Bloomsbury, 2022), vol. 2, 203-237. Read pp. 204-212 for this week. Download and print for reference.
- This is a more formal presentation of the main concepts in Peirce's theory for computing. I also explain the interdisciplinary framework of systems and design theory that we are also using in this course.
Presentation:
Prof. Irvine,
Intro Peirce's Semeiotic and Computing: Peirce 1.0
- We will go over these concepts in class; you can also study the presentation on your own.
Writing assignment (Canvas Discussion module)
- Read the Background for the assignment: This is a fun exercise for becoming aware of a “semiotic process” (using and interpreting symbols in a symbol system) by making explicit how we use tokens (individual material instances) and types (the patterns or forms instantiated) in an important kind of semiotic process -- translation. We will use Google Translate.
- After following the steps in the assignment instructions, copy the text in both the source and target windows of your Google Translate example, and “paste” the text tokens into your discussion post, pasting 3 times for each set.
- Next, use the style features in the Canvas edit window, and change the font style and/or color or size of the text characters in 2 of the sets of your text tokens. What have you just done? What is happening when we “retokenize” tokens from one digital instance to another? How do we recognize the characters and words no matter how many times we do this? Haven't you just proved and demonstrated the type/token principle, and the principle of translation as interpretation from one set of symbol representations to another?
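If you are curious about how this looks computationally, here is a small Python illustration (my own sketch, not part of the assignment) of the type/token principle: two physically distinct string tokens count as "the same text" because they instantiate the same type, i.e., the same replicable pattern of character codes:

```python
# Two tokens, one type: each string below is a separate material instance
# (a distinct object in memory), yet both instantiate the same type --
# the same sequence of Unicode character codes -- which is why we
# recognize them as "the same text" no matter how they are styled.

token_a = "code"           # one token...
token_b = "".join("code")  # ...and a second, physically distinct token

print(token_a is token_b)         # distinct objects (usually False in CPython)
print(token_a == token_b)         # same type: identical character sequence
print([ord(c) for c in token_a])  # the replicable pattern: [99, 111, 100, 101]
```

Changing the font, color, or size of displayed text changes only the physical rendering of the tokens; the underlying character codes (the type) stay the same, which is exactly what makes "retokenizing" across apps and platforms possible.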
Learning Objectives and Main Topics
In this unit, students will learn the key terms and concepts developed in contemporary linguistics for understanding the nature and structure of human natural language and writing systems for languages, and for distinguishing natural language from formal and artificial symbol systems also called "languages" (e.g., mathematical and scientific notation, the special terms and notation, the "metalanguage," used in linguistics, and computer programming "languages" or "code").
The terms and concepts established in modern linguistics are now the common terms used in computer science, cognitive science, semiotics, Machine Learning and AI, and many other fields. Our concepts for syntax (rule-governed sequences and combinations) and word units have been transposed into the design principles for formal languages (math, logic, scientific notation, programming languages). So, what we "know" unconsciously from competence in a natural language (combined with the number sense) is applied consciously, deliberately, and at a different level of abstraction in the designs for our formal languages.
Further, linguistics now also includes the specialized field of computational linguistics and natural language processing (NLP), which is an important field in computing and information science. Data analytics, AI, and Machine Learning depend on concepts formalized in linguistics (that is, given precise meanings and systems of notation used in programming and algorithms), and then defined for the designed syntax and symbols of formal "languages."
With this background, you will be prepared to answer important questions:
- what do we mean by natural language, and what are the distinctive features that make a natural human language a language in the precise terms of linguistics?
- what do we mean by a formal language, like a "computer programming language," which is intentionally designed with specially defined symbols and syntax for "code"?
- can we describe other symbolic systems like image genres, art forms, and music as being "a language" or "like language"? how are they different from natural language? how can we be more precise in our terminology for making distinctions between different kinds of symbol systems?
Readings and Background:
- Steven Pinker, "Language and the Human Mind" [Video: 50 mins.][start here]
- A well-produced video introduction to the current state of knowledge on language and cognitive science from a leading scientist in the field. It's a bit long, but you can watch at 1.5x playback speed if you'd like to.
- Martin Irvine, "Introduction to Key Concepts in Linguistics." (Intro essay; read first).
- Andrew Radford, et al. Linguistics: An Introduction. 2nd ed. Cambridge, UK: Cambridge University Press, 2009. Excerpts. Use for a reference to the major topics of linguistics.
- Review the Table of Contents so that you can see the topics of a standard course Introduction to Linguistics. Don't read the whole selection of excerpts. Focus on the Introduction to Linguistics as a field, and the sections on Words (lexicon) and Sentences (grammatical functions and syntax).
Video Lessons: Crash Course: Linguistics
- Good basic, short lessons. For this week, view Lessons 1-4 ("What is Linguistics" to "Syntax Trees") and Lesson 16 (Writing Systems).
Background for this Week's Assignment: Visualizing Syntax Structures
- In the readings and video lessons above, you were introduced to the way that we use mathematical models (tree graphs) for mapping the syntactic structure of a sentence in a natural language. Understanding syntactic patterns is also important for understanding how programming languages must be designed, and how we can encode digital data. For this assignment, you will use software developed for computational linguistics and Natural Language Processing (NLP) for visualizing the syntax structures of sample sentences in a "parse tree."
- The term "parse" comes from traditional grammar, which means decomposing sentences into their word classes or "parts of speech," like noun, verb, preposition (from Latin, pars = "part"; as in classical Latin grammar, partes orationis, "parts of a sentence, parts of speech"). See Wikipedia: Parsing.
- Note: Many NLP systems begin with sorting word tokens and mapping them into a parse tree or parsed with metadata labels for each word.
- Experiment with the XLE-Web syntax parser: This site, provided by a linguistics research group in Norway, aggregates useful computational analysis tools for studying syntax.
- Directions: In the "Grammar" pull-down menu, you will see the languages that can be "parsed" (syntax-mapped) in this web-based system. This NLP system is designed with linguistics models for mapping syntax in "Constituent Structure" (c-structure, the nested-level tree structure) and "Functional Structure" (f-structure, a set of logical labels for the syntactic function of the words in the sentence).
- Try it out: choose any language you know, and copy and paste a sentence in the text box (include a period at the end), and click on "Parse sentence". This will generate a very complex tree-structure and table with functional labels. The NLP analysis will also suggest possible alternative syntax maps for how the word units in the sentence can be read.
- Next, uncheck all the boxes except "c-structure," and run the Parse again. This will generate a simplified version of the c-structure tree with the labels for the syntax nodes filled in by the tokens in your example sentence.
- Last, Tokens and Tokenization. Uncheck all the boxes, and click on "Tokens" for the sentence in the input menu. This will generate a list of the word unit instances and punctuation that the system detected for analysis.
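To make the ideas of tokenization and constituent (c-) structure concrete, here is a toy Python sketch (my own illustration; XLE-Web's real tokenizer and parser are far more sophisticated) that splits a sentence into word and punctuation tokens and prints a hand-built c-structure tree by nesting level:

```python
import re

# A toy version of the "Tokens" step: split a sentence string into
# word tokens and punctuation tokens with a simple regular expression.
def tokenize(sentence):
    return re.findall(r"\w+|[^\w\s]", sentence)

print(tokenize("The student parsed the sentence."))
# ['The', 'student', 'parsed', 'the', 'sentence', '.']

# A c-structure can be represented as nested tuples: (label, children...).
# This hand-built tree is only an illustration of nested constituents.
tree = ("S",
        ("NP", ("D", "The"), ("N", "student")),
        ("VP", ("V", "parsed"),
               ("NP", ("D", "the"), ("N", "sentence"))))

def show(node, depth=0):
    """Print the constituent tree, indenting once per nesting level."""
    if isinstance(node, str):
        print("  " * depth + node)   # a leaf: the word token itself
    else:
        print("  " * depth + node[0])  # a node label (S, NP, VP, ...)
        for child in node[1:]:
            show(child, depth + 1)

show(tree)
```

The indented printout is a one-dimensional view of the same nested-level structure that XLE-Web draws as a branching tree: constituents inside constituents, down to the word tokens at the leaves.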
Writing assignment (Canvas Discussion module)
- Preparation: From the XLE-Web language menu, choose "English," and insert a sentence in the text box. Use a sentence of around 20 words, one that has a relative clause (a "that," "who," or "which" clause). Then click on "Parse sentence." (You can also choose a second language that you know for visualizing syntax in the tree graph, but we will use English as a common reference.)
- As in your first experiment above, the software will give you a very complex graph for the sentence (including options for which syntax "path" seems most likely), using two forms of formal linguistic notation: a constituent (c-) tree structure and functional (f-) bracketed notation structure.
- Next: We will focus on the "c-" (constituent) structure, so uncheck all the boxes except "c-structure" after viewing the full complex tree and notation. Click on "Parse" again, and this will generate a compact tree without all the syntax details. Use this compact tree for discussion. Take notes on what you discover in the syntax trees.
- Note: This will be new for you, so don't worry about all the complexities and unfamiliar terms and notation. Do your best to follow what is being presented in the visualization for the "c-" structure. You can experiment with the settings, and also mouse over and experiment with choosing different ways of mapping the tree by clicking on the branching nodes.
- Note: The software working in the background on this site is designed to generate thorough abstract maps of sentence structure from your input tokens, including "placeholder" elements that belong with the full syntactic structure but may not appear in your example sentence.
- For your discussion post: Insert your sample sentence in your post, and, if possible, a screen shot image of the compact syntax tree (with only the "c-structure"). Describe your experience using the syntax tool, and what you learned about syntax and mapping word tokens to the structures in the parse tree. I'm sure you will have many questions, so include questions that we can discuss in class.
Background, Main Topics, and Learning Objectives
Your main learning goal for this week and next is to discover for yourself a clear conceptual understanding of the technical concepts of information and data, and how they belong to the semiotic design principles of computing systems. And further, to discover why learning this basic knowledge can empower anyone – especially people who don’t think they are “techies” – to understand why and how all our computing and digital systems are designed the way they are, rather than some other way. You will then be on your way to claiming ownership over these technologies as part of our human birthright as symbolic thinkers and communicators, who always use technically designed physical media for expression, representation, communication, and community identity. Hang on, work as hard as you can on reading and understanding, ask lots of questions, and I will help you discover why learning this is worth the effort, and comes with lots of great "aha" moments!
This week, you will learn the key terms, concepts, and design principles for “information” as defined in digital electronic communications and computation, and why we need to distinguish the technical concept of “information” from uses of the term in ordinary discourse and other contexts. You will learn the reasons why we use the binary system (from the human symbolic systems of mathematics and logic) for structuring and designing electronic information. You will learn why and how we use this designed system to map units of other symbolic systems (what we call "digital data") into arrays of structures of controlled states of electricity (patterns of on/off placeholders) in a second designed layer.
You will also have a "hands-on" use of actual electrical telegraph equipment to see how Morse Code is designed for transmitting minimal units of electricity using electromagnetic switches wired together. (Telegraph code was translated into binary code, which then became the basis for all electronic character encoding all the way to our present-day Unicode.)
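To see the design idea in miniature, here is a short Python sketch (a hypothetical illustration, not the full International Morse table) of how Morse Code maps characters onto patterns of short and long electrical pulses – the same symbol-to-signal mapping principle that later binary character encodings generalized:

```python
# A partial Morse table: each character maps to a pattern of
# short (".") and long ("-") electrical pulses.
MORSE = {
    "S": "...",   # three short pulses
    "O": "---",   # three long pulses
    "E": ".",
    "T": "-",
}

def encode(text):
    """Translate a string into Morse, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in text.upper())

print(encode("SOS"))  # → ... --- ...
```

The dictionary is the "codebook": the electrical pulses mean nothing by themselves; the mapping we design assigns them symbolic values.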
With the clarifying concepts from Peirce's definitions for the physical/material structures of tokens and representations required in every symbolic system, you will understand how digital, binary information is necessarily designed as a semiotic subsystem, a structured substrate, for holding and reproducing patterns of all our digitized symbolic systems. And not only structures for representations (strings or clusters of tokens), but also in the subsystem for encoding the kinds of interpretation and calculation that "go with" each data type as a system. This is the full "inside" view of "encoding" and "computation" with digital electronic systems. Deblackboxing computing and information for a true view of the designs for semiotic subsystems is the master key for understanding "code."
Next week you will learn the technical definition of "data" as structures of units of “information” that are encoded, in precise ways, in the design of what we call "digital architecture." This architecture means the whole master design for a system with three connected physical structures: (1) for representing tokenized units of human symbolic systems (data representations), (2) for using clusters of binary logic processes for interpreting, calculating, and transforming input data representations into further output representations, and (3) for reliable, "packaging" of data structures for sending and receiving across networks (Internet protocols).
Key Terms and Concepts:
- Information defined as quantifiable units of {energy + time + transmissibility in a physical medium}.
- The bit (binary unit) as the minimal encoding unit with arrays of two-state electronics (on/off). Defined groups of bits are termed bytes (the minimal units of data).
- The Transmission Model of Communication and Information: the model from electrical engineering and telecommunications: what it is, and is not, about.
- The Binary number and Boolean logic systems: for logic, computation in the base 2 number system, and encoding longer units of representations (bytes).
- Discrete (= digital/binary) vs. Continuous (= analog) signals.
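As a quick hands-on illustration of these key terms (a sketch for your own experimenting, not part of the readings), Python can show how base-2 numerals, bytes, and Boolean logic relate:

```python
# Binary (base-2) numerals: "1011" is a human symbol; int(..., 2)
# interprets it as a value in the base-2 number system.
n = int("1011", 2)
print(n)                  # → 11
print(bin(11))            # → 0b1011 (Python's binary notation)

# One byte = 8 bits: format a value as an 8-bit on/off pattern.
print(format(65, "08b"))  # → 01000001

# Boolean logic on single truth values (the behavior of logic gates):
print(True and False)     # → False (AND)
print(True or False)      # → True  (OR)
```

Note that the "1"s and "0"s here are symbols we read; in the electronics they correspond to designed two-state conditions, not printed digits.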
Video Lesson Introductions:
- Code.org, How Computers Work Series (whole series list)
- Why do Computers Use 1s and 0s: Binary and Transistors Explained (Basics Explained)
- Crash Course Computer Science:
- Electronic Computing (background on the electronics for digital information)
- Boolean Logic and Logic Gates
- Note: these are good quick intros, but they have to skim over some important facts about digital system design. There are no "1s" and "0s" in the physical components of digital information and computing systems or in binary code at the electronic level. "1" and "0" have meanings in human symbol systems, and, by using semiotic design principles, we map (correlate) human meanings and values represented in symbols into a system of binary electronic states (on-or-off in logic circuits, voltage present or absent in memory cells). These physical states in tiny circuits and wires are meaningless until assigned a symbolic value from outside the physical system; that is, in a design from us human symbol users.
Readings
- Read first after the video lessons: Prof. Irvine, "Introduction to the Technical Theory of 'Information' (Information as a Semiotic Subsystem)"
- Daniel Hillis, The Pattern on the Stone: The Simple Ideas That Make Computers Work (New York, Basic Books: 1998; rev. 2015) (excerpts).
- For this week, read only the Preface and Chaps. 1-2 (to p.37). Hillis provides good explanations for how we use binary representation and binary logic to impose patterns on states of electricity (which can only be on/off). The key is understanding how we can use one set of representations in binary encoding (on/off, yes/no states) for representing other patterns (all our symbolic systems). Binary encoded "information" (in the digital engineering sense) can be assigned to "mean" something else when interpreted as corresponding to elements of our symbolic systems (e.g., logical values, numerals, written characters, arrays of color values for an image). Obviously, bits registered in electronic states can't "mean" anything as physical states themselves. How do we get them to "mean"?
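Hillis's key point – that one binary pattern can be assigned different meanings – can be seen in one short Python sketch (an illustration, not an assignment):

```python
# One 8-bit pattern, several interpretations: the bits themselves
# don't "mean" anything until a data type assigns them a symbolic value.
pattern = 0b01000001          # eight binary states

print(pattern)                # interpreted as a number → 65
print(chr(pattern))           # interpreted as a character → A
print(bool(pattern))          # interpreted as a logical value → True
```

The same physical states "become" a numeral, a letter, or a truth value only through a designed interpretation imposed from outside the physical system.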
Optional: For Reference and for Your Own Study:
- Claude Shannon's paper that defined "Information Theory" for electronic communications and computing: "A Mathematical Theory of Communication" (1948).
- Denning and Martell. Great Principles of Computing. Chap. 3, "Information," 35-57.
- Excellent background on technical details from a leader in computer science.
In-Class: Demonstration of Telegraph Signals and Code in a working telegraph system!
In-class: Experiment with Morse code in an actual telegraph circuit with vintage equipment.
- Read first: Introduction to Morse and Code (background and dossier of sources) (Irvine)
- Prof. Irvine, Information Theory, Digital Electronics, and Semiotic System Design
(in-class presentation and discussion)
Writing assignment (Canvas Discussion module)
- First, describe what you learned so far about the technical meaning of information in the context of binary electronics and computing systems. Does it help to use the term "E-Information" to distinguish the electrical engineering concept from ordinary discourse uses of the word? Next, think through the following questions as prompts to help you clarify the concepts in your own thinking. Then choose two, and explain, as best as you can so far, how the answers help "de-blackbox" the reasons for binary electronic design principles, and how they support the main purpose of computing systems as symbol processors:
- From our viewpoint of designed systems, can you see how "information theory + semiotics = the whole story" of modern computer system design? In other words, how digital electronic information theory provides an elegant engineering solution to a core semiotic problem (representing token structures of human symbolic forms in binary electronics)?
- Why and how is "digital information" a logical-symbolic structure that we impose -- by design -- on structures of electrical energy that "know" nothing about human symbol systems? How do the binary structures become interpretable?
- How does "information theory" in engineering provide the techniques for creating the essential subsystem or substrate for using binary electronics as a physical medium for representing token instances of human symbol systems in structures designed for this kind of representation?
Learning Objectives and Main Topics:
This week students will learn the basic background about “data” as understood in computing, programming, and digital media.
We will focus on two case studies of data types that we use every day: digital text and images. We send and receive more text today than ever before in human history (text messaging, email, blog posts, etc.). All this digital text is made possible by the adoption of an international standard for binary encoding the characters of all languages -- Unicode.
Similarly, we routinely make digital photo images, send and receive images, and view digital images and graphics files in many software and digital device contexts. This, too, is made possible by defining images as types of data and by standards for digital image formats that are interpretable in corresponding kinds of software.
Students will also be introduced to programming, code, and data as elements of the design of computers as semiotic systems with multiple levels of subsystems that combine to create what we use and experience. This is an important step in deblackboxing computing and digital data systems, and leads to further understanding of the meaning of code.
Introductory Reading
- [Read first:] Prof. Irvine, "Introduction to Data Concepts: From Bits to Bytes and Data."
(1) Text as a Data Type: Character Encoding
Video Lessons:
- Representing Numbers and Letters as Binary Data (Crash Course: Computer Science)
- Why Do Computers Use 1s and 0s: Binary Transistors Explained (Basics Explained)
- Digital Character Sets: ASCII to Unicode (Computer Science)
- Unicode Consortium: Introduction to Unicode and Text Encoding
Background for Unicode Case Study:
- Wikipedia overview of Unicode and character encoding is useful.
- For Reference: The Unicode Consortium official site
See: Unicode Glossary of Terms | Unicode Technical Site | Unicode History
- Review the following references:
- The Current Unicode Standard, 15.1 (Sept. 2023) [main reference page]
- See the Code Charts (Tables) for All Languages (Unicode 15.1) (experiment with viewing the code chart for different languages). You find the Unicode "code point" for each character by reading across the row and then down the column.
- About Unicode Emoji | Full Emoji List (15.1) [large file]
- Yes! All emoji "characters" are encoded as Unicode bytecode numbers, or they wouldn't work consistently for all devices, software, and graphics renderings. Emoji are not sent and received as images but as bytecode definitions to be interpreted in a software context. Again, coded data and device-software contexts and rendering methods are separate levels in the system design.
- Unicode "test file" of all currently defined "emojis" [15.1]
- This is a text file with Emoji symbols encoded to test how they are interpreted and displayed with the software and graphics rendering of various systems. You may find some don't display in your Web browser. The Emojis often look different from device to device. Why?
- Current Unicode Emoji Chart with modifiers (for skin tone options)
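You can inspect Unicode code points yourself with a short Python sketch (an illustration for experimenting on your own): every character, including every emoji, is a code point (a number), which is then serialized to bytes (here in UTF-8) for storage and transmission:

```python
# Each character is a Unicode code point (a number), serialized to
# bytes for storage/transmission. UTF-8 uses 1-4 bytes per character.
for ch in ["A", "é", "中", "😀"]:
    print(ch, hex(ord(ch)), ch.encode("utf-8"))

# "A"  → 0x41    (1 byte in UTF-8)
# "😀" → 0x1f600 (4 bytes in UTF-8): sent as bytecode, rendered
#        as an image only by the receiving device's software/fonts.
```

This is why the same emoji code point can look different from device to device: the bytecode is standardized; the rendering is a separate software layer.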
(2) Digital Images as Data: Digitizing Light, Pixels, Image Formats, and Software
- Video Lessons:
- Images, Pixels and RGB (Code.org, by co-founder of Instagram)
- How do Smart Phone Cameras Work? (video lesson, Branch Education)
- Includes excellent background on light and human vision [ignore branding information]
- Background on Digital Photography:
- Ron White and Timothy Downs, How Digital Photography Works. 2nd ed. (2007). Excerpts.
- Well-illustrated presentation of the basics of the digital camera and the digital image: study especially the pages on "how light becomes data," and "how images become data."
- The principles for photography and digital images are the same for the miniature automated cameras in smart phones, which are combined with sensors, data formatting, and software in the device.
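The "image as data" idea can be sketched in a few lines of Python (a hypothetical toy example, not a real image file format): a digital image is an array of pixels, each pixel a triple of red/green/blue intensity values (0-255), i.e., three bytes per pixel before compression:

```python
# A hypothetical 2x2 image as an array of (R, G, B) pixel values:
# red, green, blue, and white pixels.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

width = len(image[0])
height = len(image)
bytes_uncompressed = width * height * 3  # 3 bytes (R, G, B) per pixel
print(bytes_uncompressed)  # → 12
```

Standard formats (JPEG, PNG, etc.) add compression and metadata around this same underlying array-of-numbers structure, which viewing software then interprets for your display.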
- Optional: Slightly More Advanced Video Lessons (from Computerphile, if you want to go further in the technical design)
Writing assignment (Canvas Discussion module)
- For this week's discussion post, connect what you have learned this week with the concepts from the past three weeks. Provide references to the readings and videos that have helped clarify the concepts and explanations for you.
- Assignment Project: Create an example (a token instance) of a digital data type that we are studying this week: either a text string (two sentences) in any language (Unicode encoding is behind what we see), or a digital photo image in a standard format. Copy your data example into your post (using the edit feature in the Canvas input window). Leave it there, and then write a "biography" of your data instance from the moment of "creating" it, and through all of its encodings in different memory components and inputs/outputs in the devices that you use, and on through the re-tokenization in the Canvas platform. Apply as many of the concepts that we've studied this week and over the past weeks. Insert your data "biography" below your data example.
- Hints: For text strings: describe what is happening in your first input encoding with the Unicode language bytecode used by your software (like Word or Google Docs). Describe how Unicode bytecode ("code points") for your token instances are held physically in memory in your PC, and are interpreted by software for physical representation in your output display. What Unicode code group(s) (language family code points) is/are behind what we see on our screens? How does the retokenization from device memory to device memory work, across networks, in Canvas servers, back to our screens interpreted in Web software?
- Hints: For a digital photo image: you can create a data token instance with your mobile phone's camera; the information defining the photo will be "written" (stored) in memory as a file. You can view the digital photo as a tokenization on your screen (interpreted through software), but then "send" it to yourself (email, or Cloud storage that you can access from a computer) so that you can use PC/Mac software to "view" it. If you have a digital camera, you can also think through the steps from "writing" (storing) the photo image data on a memory chip, and through the stages of "copying" to your PC (directly or via the Internet). Think through the transitions in tokenization, re-tokenization, and software interpretation. In semiotic terms, the sum total of mathematical formulas for the arrays of pixels that define each digital image file is the abstract type of the image as a symbolic form, and when we view it on screens, we are viewing one physical-perceptible token instance, a tokenization produced by a software interpretation projected through the graphics components of our PCs and devices.
- For either data type example: think through the stages in your data instance's physical "biography" for your discussion post. Hint: when we "copy" or "move" data items we are communicating intentions, through software routines (the subprocesses in any program) for ongoing retokenizing of the underlying physical bit/byte-level "information" into other physical instances in other digital memory locations. (Indexing and interpreting systems in our devices keep track of the data types and whole files stored in memory locations.)
- Describe (and ask questions about) the encoding/decoding processes of the data type instance as data. What is the relationship between software specifically designed for creating (inputting) and displaying (outputting) a data type and the way its instances are rendered in representations on our pixel-based displays? Our "local" PC (or Web-enabled app on a mobile device) and the "remote" Canvas server are designed to facilitate "copies" of your data instances (re-tokenizations), and return "copies" (token instances) to be output through the memory, software, and screens on our devices.
- Can you understand more clearly how the levels of E-Information and Data are designed in our systems?
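To make the "copy = re-tokenization" idea concrete, here is a short Python sketch (an illustration, not part of the assignment): copying data produces a new physical token that preserves the same byte pattern (the type), which we can verify by comparing the patterns:

```python
import hashlib

# One physical byte sequence (a token of a text string):
original = "Hello 😀".encode("utf-8")

# "Copying" re-tokenizes the same pattern in a new memory location
# (bytearray forces a genuinely new object here):
copy = bytes(bytearray(original))

print(copy is original)   # → False (two distinct physical tokens)
print(copy == original)   # → True  (identical bit pattern / same type)
print(hashlib.sha256(copy).hexdigest() ==
      hashlib.sha256(original).hexdigest())  # → True
```

Every "send," "save," or "upload" in your data biography is this same operation at larger scale: new physical tokens, same designed pattern.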
Learning Objectives and Main Topics
This week, students will learn the main top-level design principles for modern digital computer systems, and why and how this design was developed only for implementing computation as a semiotic system in the physical structures of binary electronic components. That is, the designs for all digital electronic computing devices are based on, and designed to serve, our shared human symbolic capabilities and systems of expression and representation (from language and writing to multimedia).
This background is important preparation for understanding what we are doing in programming and coding (in the following weeks of the course).
Readings and Video Lessons:
- Prof. Irvine, Introduction to Computer System Design [read first]
- Prof. Irvine, Computer System Design Principles (What is a Computer? What is Computation?). Video. 20 mins. You can also view this video directly in your browser [link] (not on YouTube) [switch to 1080p HD quality].
- I made this video for CCT 505, but with a view that applies to other courses.
Note: I don't use much of the semiotic terminology that you're learning in this course (introducing that would have taken a longer presentation), but you will see how the concepts in this course are assumed in the system designs.
- Here are the slides (as Google Slides), so that you can also study the presentation more slowly on your own, if you'd like to.
- Crash Course Computer Science: Video Lessons
- For this week, complete Lessons 3-8.
[You can also watch the whole series at your own pace whenever you want to.]
- We will complete the descriptions in the video lessons by answering the "why" questions, the reasons for the designs as they are physically implemented.
- Martin Irvine, "Semiotic Foundations of Computing and Information Systems." Chapter in Bloomsbury Semiotics (London: Bloomsbury, 2022), vol. 2. Review pp. 203-212. Download this file for easier reading, and print it out if you can.
- This chapter explains the "why" and "reasons for" the design of computer systems and data (covered in the video lessons). Focus on the exposition of the framework for understanding computer systems as designs for semiotic systems (pp. 205-207).
Optional and for Reference: Background on the Technical Design of Computers
- Ron White, How Computers Work. 9th ed. (Indianapolis, IN: Que Publishing, 2007). Excerpts.
This is a well-illustrated reference guide to the design of PCs and other devices.
- Part One: Boot Up Process (Basic History and Design Architectures)
- Software Applications
- Denning and Martell, Great Principles of Computing (selections). [Here is the whole e-Book in the shared folder.] Read as far as you can in these sections for technical introductions to key concepts: From Chap. 4 (read 59-70, top paragraph); from Chap. 5 (read 83-88); from Chap. 6 (read 99-105 top).
Writing assignment (Canvas Discussion module)
- Discuss one or two main discoveries that you learned about computer system design from our perspective of symbolic systems. Think through the following system outline, and ask questions about what you don’t understand yet:
- Was it clear how digital computer system design is based on a model for implementing symbolic processes physically in two main connected subsystems -- a subsystem for (1) “symbols that mean” (physical representations for “data” as encoded from our symbol systems as tokens in binary bytes) and (2) “symbols that do” (physical representations at the level of binary representations in programming code that are interpreted for performing actions on representations in subsystem (1))? Was it clear how the system design interprets and combines the two subsystems for performing what we call “computations”? Describe as many of the details as you understand, and ask questions that we can go over in class.
- Note: Although computer system processes are unobservable as encoded binary electronic structures, can you see how computers (large or small) are not really "black boxes" (in the sense of being beyond human understanding -- because we designed them with and for our symbolic capabilities)?
- Looking ahead to the next two weeks: Having a basic understanding of the concepts behind computer system design will really help you understand why a programming language’s “code” is designed the way it is, and what we are doing when writing programs and running software.
Learning Objectives and Main Topics:
In Weeks 9-10, students will learn how we communicate with the components in the designed architecture of computer systems (studied last week) through the levels of symbols in programming "code" and "data." This week, students will also begin "learning by doing" with a hands-on tutorial lesson in the fundamentals of programming (continued next week).
For understanding the ideas and methods behind the "why" and "how" of programming, students will also learn about computational thinking -- a universal form of thinking and reasoning that calls on our cognitive-symbolic abilities for abstraction, pattern recognition, planning step-by-step procedures, and modeling the forms of interpretation that we use for our symbolic systems (e.g., language, math, images).
- "Computational Thinking" is simply a specialized application of human symbolic capabilities that we have developed in logic, mathematics, and design. This form of thinking underlies the design of programing languages and computer code as symbolic "languages" for "communicating" with the physical components of computer systems (as we studied last week).
- "Computational Thinking" is NOT learning to think like a computer (whatever notion of "computer" you may have). Rather, it's exposing common logical and conceptual thought patterns that everyone has, can develop, and can learn to apply in programming and digital media design.
The video lessons will help you visualize how a programming language (and thus a software program or app) is designed to specify symbols that mean things (represent values and conceptual meaning, mainly through variables for data types) and symbols that do things (symbols that are interpreted in the computer system to perform actions (interpretations and operations) on other symbols = signs/symbols for syntax and operations).
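The "symbols that mean" / "symbols that do" distinction can be seen directly in a few lines of Python (a hypothetical sketch with made-up variable names, not from the video lessons):

```python
# "Symbols that mean": variables as placeholders for data-type values.
price = 19.99          # a float (a represented value)
quantity = 3           # an integer (a represented value)

# "Symbols that do": operators and function names that direct the
# system to perform operations on the meaning-bearing symbols.
total = price * quantity        # "*" and "=" are action symbols
print(round(total, 2))          # → 59.97
```

A program is a designed combination of the two kinds of symbols: representations of values, and instructions for interpreting and transforming them.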
Introductions and Video Lessons:
- Video Lesson: Computational Thinking: What Is It? How Is It Used? (Computer Science Intro)
- Main "Computational Thinking" methods:
Decomposition: Breaking down a complex problem into manageable units that go together; a development of "systems thinking" = managing a complex system through subsystems that provide specific functions that can be combined in an overall system design.
Pattern Recognition: Discovering patterns in examples of similar tasks or problems so that we can make generalizations that hold over any new example or instance.
Abstraction: Focusing on one level of a problem at a time, and bracketing off the complexity of dealing with the design requirements of other levels.
Algorithm Design: Designing the logical steps for a general procedure that can be coded in a specific programming language as part of a whole program (= "runnable software").
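All four methods can be seen together in one toy Python sketch (a hypothetical example: "find the average word length in a sentence"), with each method noted in the comments:

```python
# Decomposition: break the task into small, combinable steps.
def tokenize(sentence):
    return sentence.split()

# Pattern Recognition: every word follows the same "length" rule,
# so one generalization (len) covers all instances.
def word_lengths(words):
    return [len(w) for w in words]

# Abstraction: the caller works at the level of "average length"
# and brackets off how tokenizing and measuring are done.
def average_word_length(sentence):
    # Algorithm Design: explicit, ordered steps to the result.
    lengths = word_lengths(tokenize(sentence))
    return sum(lengths) / len(lengths)

print(average_word_length("code is a symbolic system"))  # → 4.2
```

Notice that nothing here requires "thinking like a computer": these are ordinary reasoning moves made explicit enough to be coded.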
Video Lessons: Crash Course Computer Science
- Continue with these Crash Course Computer Science Lessons for backgrounds on programming:
9 (Instructions and Programs); 11 (Early Programming); 12 (The First Programming Languages); 13 (Programming Basics: Statements and Functions).
- In the in-Learning Lesson below, Python is used as a teaching language for introducing programming fundamentals. With the background so far, you can also begin to understand the universal programming principles that every programming language must include.
Main Assignment: Video Lessons for Hands-On Learning
- in-Learning: Programming Foundations: Fundamentals
- Sign in to this online course with your GU Net ID. You may need to log in via the GU Library link (once you are authorized, the system will remember your ID).
- Short video lessons that introduce programming concepts with Python as the learning language. The teacher uses code editing programs and interfaces for the Mac platform, but you can do the same with software tools for Windows PCs.
- Study the Introduction and Units 1-2 for this week, as far as you can. You can follow the basic concepts in Units 1-2 without installing your own IDE program ("Integrated Development Environment," a program for writing programs) and the Python Interpreter for your platform (OS).
- To go further in trying out your own code for next week, install the Python Interpreter on your own PC (instructions for Mac and Windows platforms in video), and an IDE for writing code and then "running" it on your PC. The video will explain how Python uses an "interpreter" program to send "runnable" (executable) binary code to your system. (The course teacher uses the "Visual Studio" program -- a widely used IDE -- for writing and demonstrating the Python "Source Code".)
- Take notes on what you learned and questions you have about programming concepts and how our "code" gets communicated and interpreted in a computer system.
Writing assignment (Canvas Discussion module)
- Describe what you learned about programming and code from the background video lessons and from working through the first parts of the in-Learning course on Programming Foundations. Were you able to make connections to the computing principles and concepts for code that we've studied? Were any key concepts clearer? What questions would you like explained further in class?
Learning Objectives and Main Topics:
Main goal: By continuing your video lessons and background readings, think for yourself about what the rule-governed "statements and operations" mean in the context of the programming language. Do you understand how Python (and any programming language) is designed for providing the "code" (in step-by-step procedures) for using different kinds of signs and symbols for operations (= processes, functions, interpretations) and for kinds of data representations (defined in variables and data types)? Is it clear how the code syntax + the specially defined signs (the "reserved terms" of the Python language) provide the "operations" that "go with" specific kinds of "typed" data representations?
By seeing the visual representations of programming code signs and symbols, and then what happens in the results from computational processes and actions in the "output" representations, can you understand more clearly how programming and software is about combining:
- "symbols that mean" ("coded" by using a set of symbols for variables as "place-holders" to be filled-in by data-representing symbols when the program is "run"), and
- "symbols that do" (the signs and symbols that create operations, actions, interpretations, and processes on or for the "meaning representing" symbols.
When we pause to observe how we use the whole computer system to encode symbolic representations (interpreted in binary representations) and cause symbolic actions, with and for those representations, can you catch a glimpse of what it means both to "code" and "run" programs? Can you explain, on a conceptual level, what it is we are doing:
- (1) when we program with a specific programming language for creating a "source code" file (that is, when writing code for software programs -- including "importing" reusable already-written code from code libraries), and using a source code file as "input" for interpreters or compilers that translate our text code file into binary "executable" files; and
- (2) when we "run" software (from binary executable files in any computing device) for different kinds of data (e.g., text, images, graphics, audio/video), and "interact" with the program dynamically (in "real time") for directing actions and interpreting new/additional data.
Key Concepts
- Source Code
- Executable Code
- Programs/software: how the symbol systems are designed to work, how a program file is allocated to (or assigned) memory locations, and how the design of the computing system (binary code representations in memory + processors taking inputs and generating outputs + cycles of time) directs access and memory for outputs.
- The combined systems design for programming and computation.
Building on our learning so far, you'll learn that:
- Programming languages are, and must be, "formal languages" (metalanguages) with strictly defined symbols for syntax and semantics (what the signs/symbols must mean -- stand for -- in the internal logic of the programming language design), as compared with natural languages (Week 5).
- The strict formalism of programming languages is based on logic and mathematics (human symbol systems with signs for representing values + signs for operations/interpretations on symbols). Only by using the precisely defined formal signs and symbols of a programming "code" is it possible for us to map (assign, impose a corresponding structure for) the formal necessity (represented in logically precise symbols) onto the physical causality in the corresponding components of a digital electronic computer system.
- The mapping of abstract human-symbol-to-physical actions-in-components happens when the symbols that we understand in computing code are "translated" into binary code, which is the form that can be mapped to binary electronics. The translated binary encoded representations can thus be assigned to physical structures in components for both memory (holding, moving, and storing binary representations of data) and actions (processes, interpretations, and rules for transforming data representations) in the binary computing circuits of processors.
- You can see how "E-Information" and "Data" representations (Weeks 6-8) become assigned to different levels in the architecture of a computing system, and how programming code puts them into action.
- Computation in action (as "running" software) is a way of defining transitions in information representations that return further interpretable symbol sequences in chains of "states" that combine meanings and actions. Stages in the results of the processes are programmed to be "output" in our screens and audio devices, and we can continue directing and re-directing the processes by ongoing dialogic input in interactive software and GUI interfaces (more to come in Week 12).
- This is what the software layers running on your device right now are doing to render the interpretable text, graphics, images, and window formatting from the digital data sources combined in a Web "page," image or video file, and many other behind-the-scenes sources (Weeks 6-8).
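You can peek at the "translated" level yourself with Python's dis module (a sketch: CPython's bytecode is a virtual-machine instruction set, not hardware machine code, but it illustrates the translation from symbolic source to executable instructions):

```python
import dis

def add(a, b):
    return a + b

# dis shows the lower-level instruction stream that the Python
# interpreter actually runs for this function: loads of the
# operands, a binary operation, and a return instruction.
dis.dis(add)
```

Each named instruction in the output corresponds, by design, to a physical action in the machinery of the interpreter and, ultimately, the processor.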
Readings for Programming Fundamentals:
- David Evans, Introduction to Computing: Explorations in Language, Logic, and Machines. 2011 edition. (The Professor's Open Access site for updates: https://computingbook.org.)
- Chapters 1 and 9 are good accessible introductions to programming concepts.
- Denning and Martell, Great Principles of Computing. Chapters 5, 6, 10 (Programming; Computation; Design). [These chapters will fill in your background for how programming and code are implemented in computer systems.]
Crash Course Computer Science Lessons (Continued)
- Programming Basics: 13: Introduction to Algorithms | 14: Introduction to Data Structures
- How Programs and Data Are Organized: 18: Operating Systems | 19: Memory and Storage | 20: Files and File Systems
Main Assignment:
Continuing the in-Learning Course: Programming Foundations: Fundamentals
- Study Units 3-5 and the Conclusion for this week. Again, you can follow the basic concepts and procedures presented in the video lessons without installing the Python Interpreter and your own IDE program, but you will get more "hands-on" coding experience if you install the software tools on your own system.
- Continue to take notes about what you are doing and learning, as well as questions about the programming principles.
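If you do install Python, a few lines like the following cover the kinds of constructs introduced in introductory lessons of this sort (variables, a function, a conditional, a loop); the example itself is ours, not taken from the in-Learning course.

```python
# A few core programming constructs in Python:
# variables, a function, a conditional, and a loop.

def describe(n: int) -> str:
    """Label a number as even or odd (a conditional inside a function)."""
    if n % 2 == 0:
        return f"{n} is even"
    return f"{n} is odd"

numbers = [1, 2, 3]   # a variable holding a data structure (a list)
for n in numbers:     # a loop iterating over the data
    print(describe(n))
```

Typing small variations of a sketch like this in an interpreter is a good way to generate notes and questions for the discussion post.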
Writing assignment (Canvas Discussion module)
- In your discussion post, capture your main learning steps and questions from the readings, video lessons, and in-Learning lessons for this week. Think through what you've studied in Weeks 8-9 and this week: have you made further "aha!" connections, and have new questions emerged at this point in the course?
- As you think over your learning, refer also to the learning goals for this week in the first paragraph of the "Learning Objectives and Topics" above. Are these foundational principles of programming, software, and computer systems more understandable? What questions or topics would you like to go over more in class?
Learning Goals and Main Topics:
This unit has two main goals: (1) students will learn the basic design principles for the Internet and Web as subsystems for semiotic systems (the architecture of the Internet), and (2) students will learn the basic features of the code languages designed for "activating" the Internet system as a "metamedia" platform for the Web and mobile apps:
- HTML ("Hypertext Markup Language"),
- CSS ("Cascading Style Sheets"), and
- JavaScript (a "script" coding language designed for creating interactions, for formatting, and for including digital media in HTML files interpreted in Web browsers and mobile apps).
Learning the basics of the "HTML code suite" is a great way to learn about -- and do -- code. Since we "write" the suite of HTML code families in a text file, we have a first-level visualization of the relation between metasymbolic symbols (the signs/symbols of the code as a metalanguage) and the symbolic forms (the data and media types) that we use for meaningful representations (text, graphics, images, audio/video, etc.). The "meta" code level is designed to define, describe, and prescribe the functions of all the digitally-encoded representable forms packaged in an HTML file, but the "meta" code does not get "displayed" in the screen output. You can see right in your HTML code window how we use and distinguish between "symbols that mean" and "symbols that do" in "coding" for computer systems that use Internet/Web data.
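One way to see this "meta" distinction concretely: Python's standard-library HTML parser separates the markup tags (the "meta" code that is never displayed) from the character data the tags describe. The tiny HTML string here is our own example, not from the lessons.

```python
# Separate HTML "meta" symbols (tags) from displayed symbols (text)
# using Python's standard-library HTML parser.
from html.parser import HTMLParser

class MetaVsData(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags, self.text = [], []

    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)        # "symbols that do" (not displayed)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data)   # "symbols that mean" (displayed)

parser = MetaVsData()
parser.feed("<p>Hello, <b>Web</b>!</p>")
print(parser.tags)  # ['p', 'b']
print(parser.text)  # ['Hello, ', 'Web', '!']
```

A browser does the same separation at scale: the tags direct formatting and behavior, while only the character data (and referenced media) reach the screen.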
These basic coding steps will open up the design principles that enable us to send, retrieve, and format data from Internet/Web servers so that the data can be presented in GUIs (interactive Graphical User Interfaces). You will get a first look at the code that makes everything in our Web browsers and mobile apps work as dialogic interactive systems. You will discover how many of our core human symbolic capabilities can be assigned, delegated, and “outsourced” to large-scale, interconnected networked systems that store and analyze data, which can then be retrieved through Internet/Web connections and be interpreted in software layers for the final "output" presented on our Web “pages” and app “screens”.
With this view of one level of code (used for fetching, describing, and formatting what we see on our screens), you can go further into the "black box" to understand the operations we can't see, which are initiated by our interactive commands and choices communicated to networked computer systems. And here we meet all kinds of software programmed in several "languages."
Key Terms and Concepts Learned:
- Levels and layers of computing systems and code.
- Metadata and data.
- Basic concepts for coding for data types and interactive commands for networked systems (Internet/Web).
- Code used for what we see in all Web “pages” and mobile app screens (HTML, CSS, JavaScript).
Readings and Video Lessons:
- Prof. Irvine, "Introduction to the Internet and Web as Semiotic Systems" [new: read first]
- Video Lessons: Background on the Internet and Web Technologies:
- Crash Course, Computer Science Lessons:
- Computer Networks: Introduction (Lesson 28)
- The Internet (Lesson 29)
- The World Wide Web (Lesson 30)
- Code.org Video Lessons: "How the Internet Works": Watch lessons 1-5.
- Denning and Martell, Great Principles of Computing, Chap. 11, "Networking."
Video Lessons for Introductions to HTML and Web Coding
- in-Learning has a large set of Video Courses on Web Development (if you'd like to see what they offer and what you could study on your own later.)
- For this week, study these introductory lessons, as far as you can:
- in-Learning: HTML Essentials [Log in via the Library (link) with GU ID for free access]
- Study the basics of "What is HTML?", Formatting Text, Links, and Images.
- Optional: the next steps in somewhat more advanced lessons (if you have time and can follow them):
- After the background in basic coding for HTML documents, try out the code with the Tutorials at W3Schools.com:
- Note: you can always return to these lessons after this week to follow up and learn more.
In Class: Follow the Code
- JavaScript discovery html file. [We will go through the code and actions in class.]
- Examining the "code source" of Web pages. With the Chrome browser, type "ctrl-u" on any displayed page, and you will see the "source file" as it comes to your browser. (Other browsers have a similar feature for "revealing" the source file as "raw text.")
Writing assignment (Canvas Discussion module)
- This week's assignment has two parts:
(1) With the background on the design of the Internet and Web, and from your learning about the HTML code suite in the lessons, discuss some main points that you learned about the Internet/Web and coding for the Web. Can you describe some features of the HTML code suite and Web "metamedia" interfaces that subsume and combine many of the principles that we have studied for semiotic systems and subsystems, data types, and digital media? What are the main design ideas behind "hyperlinking" and multimedia display interfaces?
(2) From what you learned in the HTML Web coding lessons, write some HTML markup and code for data that you would like to try out and see "run" from a web server. Try out some basic code for formatting, font definitions, and image inclusion. Copy and paste your test code and content into the shared Google doc. I will then copy your coded text into an .html file and upload it to my web server, and you will see how it works (and whether you need to correct syntax or code terms in the markup).
- Link to the shared Google doc for inserting your HTML test code to see how it works as a file with a URL on a web server.
- Link to student html demo page.
Learning Objectives and Main Topics:
- Learning the background history for the models of computation that led to the development of interfaces for human symbolic interaction with programmable processes.
- Understanding the design steps and technical means (in the 1960s-1980s) that enabled computer systems to become general symbol processors and not simply calculating machines.
- Learning the conceptual and semiotic foundations for the development of "graphical interfaces" for multiple symbol systems (data types). This development gave rise to "human computer interaction" (HCI) as a design discipline.
- Learning the design concepts behind the technical architectures in all our devices that support user interfaces to computer systems (small or large) so that they perform as interfaces for semiotic and cognitive systems.
Readings & Video Introductions
- Martin Irvine, The Semiotic Design Principles of Interfaces and Interactive Systems (read first).
- I've synthesized a lot of background history from many sources for opening up the design concepts that enabled the interfaces we use today. Includes a research bibliography if you want to follow up on any of these topics.
- Martin Irvine, Interfaces for Interaction with Symbolic Systems: GUIs to Touch Screens
- Background on the design principles for symbolic actions in Interactive Systems with graphical interfaces, from GUI design to touch screens.
- Ben Shneiderman, Encounters with HCI Pioneers: A Personal History (Morgan & Claypool, 2019): excerpts from Part 1. Read pp. 1-23, and note the concepts for "direct manipulation". There is an excellent bibliography at the end (useful for your own research).
- Shneiderman is one of the founders of Human Computer Interface Design (HCI) as a field of practice and theory. He founded the Human-Computer Interaction Lab at the University of Maryland (1983) as a Professor of Computer Science (and has been a friend of CCT since the beginning). His approach to HCI has always assumed the concepts that we have been studying. His "personal history" is a great brief introduction to the history of HCI through the ideas of the main pioneers.
- Brad A. Myers, “A Brief History of Human-Computer Interaction Technology,” Interactions 5, no. 2 (March 1998): 44-54.
- Though this excellent synthesis of the history was written a while ago, we continue to use the interface design principles summed up here. Think about how the different "conceptual leaps" in interaction design (with supporting technologies as they became available) were motivated by semiotic-cognitive needs. How were the technical means motivated by finding ways to design more direct human agency into computer systems through interface designs and supporting hardware and software?
Video Lessons: Crash Course Computer Science
- Lesson 22: Keyboards and Command Line Interfaces
- Lesson 23: Screens and 2D Graphics
- Lesson 24: The Development of the "Personal Computer"
- Lesson 26: The Development of Graphical User Interfaces
Documentary videos on the history of interface and interaction designs:
- Alan Kay on the history of graphical interfaces: Youtube | Internet Archive
- Demo of Ivan Sutherland's Sketchpad, Lincoln Labs, MIT (c.1963).
- See first: the short video of Alan Kay's commentary on Sutherland's Sketchpad graphical system.
- Doug Engelbart's "Mother of All Demos" (1968). Film documentary of the live event: the Human Augmentation Lab demonstration of the multi-user graphical "Online System," San Francisco, Oct. 1968.
- Highlights (5 mins), SRI || Playlist of Highlights [remastered]
Writing assignment (Canvas Discussion module)
Reflecting on your learning over the past few weeks and this week, develop your own description of an interactive feature (not a mobile app):
- From your study of the concepts and technical design steps in the readings and video lessons, describe some of the key developments that enabled computing systems to be designed as general symbol processors (not just calculating machines) with "interfaces" that non-specialists could use. What were the major "conceptual leaps" that enabled a new "paradigm" (model, design standard) for what a computer system could be? How was this new paradigm made "operational" by new designs for interfaces, software, and digital data types -- all being motivated by enabling people (now understood as co-agents with a computer system) to communicate with computer systems and direct the input and processing of symbolic representations, actions, and intentions?
- In your discussion of some of the key concepts, use examples of software and interface features now "built in" to our current interface designs and ways of directing and interacting with digital data and media. Do you see any ideas and technical possibilities for interactive multimedia systems that the early thinkers and designers hoped for, but that have not yet been realized in the current products of the computer industry?
Learning Objectives:
- Synthesizing what you've learned (discovering major connections in concepts and topics) and being able to discuss your main learning achievements. Discussing further thoughts about how to apply and extend the concepts and methods of the course to any aspect of computing, code, digital media, and symbolic systems.
- Learning basic research methods and focusing on a topic for your final Capstone Project.
In class:
- The Interactive Computing Paradigm: Summing Up "The Meaning of Code"
- Further Discussion of the semiotic foundations of interactive computing systems and graphical "Two-Way" interfaces in pixel-based screens.
- Discussion of your main learning discoveries and "take-aways" from the course
Readings for Synthesizing Thoughts and Learning for Your Post and Class Discussion
- Michael S. Mahoney, "The Histories of Computing(s)." Interdisciplinary Science Reviews 30, no. 2 (June 2005): 119–35.
- This is a richly detailed and well-informed essay by a major historian of computing that synthesizes technical and humanities perspectives. Focus on a close reading beginning at the bottom of p.128 to the end (p.134). Mahoney's main points are now well-accepted: (1) there can be no single, linear history of computing; (2) multiple "communities of practice" shaped the kinds of computing and applications; and (3) design and design concepts are central. Though the examples are from stages of computing up to 2005, his main points and concluding focus on designs for symbolic processing remain valid.
- Re-read: Irvine, "Semiotic Foundations of Computing and Information Systems." Chapter in Bloomsbury Semiotics (London: Bloomsbury, 2022), pp. 203-212, 217-225.
- Are the main concepts and technical descriptions more understandable for you now? What topics or questions would you like to discuss or research further? (Download this file for easier reading.)
- Consult for background on technical principles:
- Denning and Martell, Great Principles of Computing. [Download the book for reference.]
- For your own further reading and research, be sure to consult the notes and bibliography. For Final Capstone Projects on any topic covered in this book, you will do well to begin with it and with the references cited.
Planning for Your Final Capstone Project
- Final Capstone Project Instructions (including how to use Zotero for references and bibliography).
Writing assignment (Canvas Discussion module)
- As you reflect over what we've studied, what stands out for you in what you have learned and discovered? What are your main knowledge "takeaways" and "aha" discoveries about "Computing and the Meaning of Code"? Were earlier questions answered? What topics (conceptual or technical) don't you fully understand yet, and would like explained further?
- Consider, too, how the methods, key concepts, approaches that we have studied will apply to other topics or courses that you want to study in CCT.
- Looking toward your final "capstone" project, was there a topic or approach that you would like to learn more about, and develop further on your own?
In Class:
Open Discussion and Presentation of Final Projects (Canvas Discussion module)
- We will have a roundtable discussion of the current state of your research for your topic, the concepts and methods you are working with, and references for your bibliography. This is a good opportunity to organize your thoughts, and get feedback and suggestions from the class.
- Post an outline or summary of your ideas so far in Week 14 of Canvas Discussions (bullet point headings are ok). If you haven't gotten far enough to post ideas or an outline, you can discuss your topic and get feedback in class.
Resources for your Research (curated books and articles in our Google Drive):
- Start here for background and research sources:
- All Folders (by Main Topic)
- Computing
- Design and HCI
- Semiotics, Computation, Digital Media
- Semiotics (main sources and applications in technology)
Instructions for Completing and Submitting Your Final Project
- Follow the Final Capstone Project Instructions (including how to use Zotero for references and bibliography).
- Instructions include how to output your essay in pdf and post it in Canvas in the Final Projects week.
- Due Date: 7 days after the last day of class. Link your pdf document (with title and abstract under the link) in the "Final Projects" Discussion topic.