
INFO 1260 / CS 1340: Choices and Consequences in Computing
Jon Kleinberg and Karen Levy
Spring 2024, Mon-Wed-Fri 11:15am-12:05pm, Bailey Hall

Course description

Computing requires difficult choices that can have serious implications for real people. This course covers a range of ethical, societal, and policy implications of computing and information. It draws on recent developments in digital technology and their impact on society, situating these in the context of fundamental principles from computing, policy, ethics, and the social sciences. Particular emphasis will be placed on four broad areas in which advances in computing have consistently raised societal challenges: privacy of individual data; fairness in algorithmic decision-making; dissemination of online content; and accountability in the design of computing systems. As this is an area in which the pace of technological development raises new challenges on a regular basis, the broader goal of the course is to enable students to develop their own analyses of new situations as they emerge at the interface of computing and societal interests.

A more extensive summary of the material can be found in the overview of course topics at the end of this page.

Course staff

  • Instructors:
    • Jon Kleinberg jmk6
    • Karen Levy kl838
  • TA staff:
    • Abby Langer arl239
    • Aidan O'Connor aco54
    • Aimee Eicher ame225
    • Alice Hryhorovych ash255
    • Amber Arquilevich ada58
    • Anya Gert aag238
    • Baihe Peng bp352
    • Caleb Chin ctc92
    • Charlie Mollin cdm225
    • Ciara Malamug cm973
    • Daniel Bateyko drb348
    • Daniel Mikhail dcm289
    • Eirian Huang ehh56
    • Elisabeth Pan ep438
    • Eliza Salamon ecs287
    • Genie Enders ebe32
    • George Lee jl3697
    • Haley Qin hq35
    • Hayley Ai Ni Lim al2347
    • Jonathan Moon hm447
    • Katherine Chang kjc249
    • Katherine Miller km842
    • Linda Lee Zhang lz324
    • Lucy Barsanti leb242
    • Madeline Yeh mgy6
    • Melanie Gao zg66
    • Michela Meister mcm377
    • Obioha Chijioke olc22
    • Rachel Wang jw879
    • Rohan Shah rs2589
    • Ruth Martinez-Yepes rdm268
    • Ruth Rajcoomar rr672
    • Sahithi Jammulamadaka sj549
    • Shengqi Zhu sz595
    • Shreya Ponugoti sp843
    • Sophie Liu rl585
    • Tairan Zhang tz352
    • Tasmin Sangha tks39
    • Teresa Tian st678
    • Thiago Hammes tmh236
    • Una Wu yw523
    • Waki Kamino wk265
    • Zayana Khan zk44

Prerequisites

There are no formal prerequisites for this course. It is open to students of all majors.

For Information Science majors, the course may substitute for INFO 1200 to fulfill major requirements. Students may receive credit for both INFO 1200 and INFO 1260, as the scopes of the two courses are distinct.


Coursework and grading

  • Homework: 6 assignments, each worth 12.5% of the course grade. Homework must be submitted via the class Canvas page by the start of class on the day it is due. Each assignment will consist of a variety of question types, including questions that draw on mathematical models and quantitative arguments using basic probability concepts, and questions that draw on social science, ethics, and policy perspectives.

    The planned due dates for the homework assignments are as follows: HW 1 (due 2/8), HW 2 (due 2/22), HW 3 (due 3/7), HW 4 (due 3/21), HW 5 (due 4/18), HW 6 (due 5/2).

  • Final Exam: take-home, worth 25% of the course grade. The final exam for the course will be a take-home exam that you will have several days to complete. It will be structured like a homework assignment, but will be cumulative in its coverage of the material. The due date for the take-home final is determined by the university, and this semester it will be due at noon on Thursday 5/16.

Academic Integrity

You are expected to observe Cornell’s Code of Academic Integrity in all aspects of this course.

You are allowed to collaborate on the homework and on the take-home final exam to the extent of formulating ideas as a group. However, you must write up the solutions to each assignment completely on your own, and understand what you are writing. You must also list the names of everyone with whom you discussed the assignment.

You are welcome to use generative AI tools like ChatGPT for research, much as you might use a search engine to learn more about a topic. But you may not submit output from one of these tools either verbatim or in closely paraphrased form as an answer to any homework or exam question; doing so is a violation of the academic integrity policy for the course. All homework and exam responses must be your own work, in your own words, reflecting your own understanding of the topic.

Among other duties, academic integrity requires that you properly cite any idea or work product that is not your own, including the work of your classmates or of any written source. If in any doubt at all, cite! If you have any questions about this policy, please ask a member of the course staff.


Overview of Topics

(Note on the readings: The readings listed in the outline are also available on the class Canvas page, and for students enrolled in the class, this is the most direct way to get them. The links below are to lists of publicly available versions, generally through Google Scholar.)

  • Course introduction. We begin by discussing some of the broad forces that laid the foundations for this course, particularly the ways in which applications of computing developed in the online domain have come to impact societal institutions more generally, and the ways in which principles from the social sciences, law, and policy can be used to understand and potentially to shape this impact.
    • Course mechanics
    • Overview of course themes (1/22-26)
      • The relationship of computational models to the world
      • The online world changes the frictions that determine what’s easy and what’s hard to do
      • The contrast between policy challenges and implementation challenges
      • The contrast between “big-P Policy” and “little-p policy”
      • The non-neutrality of technical choices
      • The challenge of anticipating the consequences of technical developments
      • The layered design of computing systems
      • Digital platforms can create diffuse senses of responsibility and culpability
      • Computing as synecdoche: the problem in computing acts as a mirror for the broader societal problem
      • Issues with significant implications for people’s everyday lives
  • Content creation and platform policies. One of the most visible developments in computing over the past two decades has been the growth of enormous social platforms on the Internet through which people connect with each other and share information. We look at some of the profound challenges these platforms face as they set policies to regulate these behaviors, and how those decisions relate to longstanding debates about the values of speech.
  • Data collection, data aggregation, and the problem of privacy. Computing platforms are capable of collecting vast amounts of data about their users, and can analyze those data to make inferences about users' characteristics and behaviors. Data collection and analysis have become central to platforms' business models, but also present fundamental challenges to users' privacy expectations. Here, we describe the difficult choices that platforms must make about how they gather, store, combine, and analyze users' information, and what social and political impacts those practices can have.
    • The role of cryptography and security
    • Privacy from whom?
  • Data-driven decision-making. Algorithms trained using machine learning are increasingly being deployed as part of decision-making processes in a wide range of applications. We discuss how this development is the most recent in a long history of data-driven decision methodologies that companies, governments, and organizations have deployed. When these methods are used to evaluate people, in settings that include employment, education, credit, healthcare, and the legal system, there is the danger that the resulting algorithms may incorporate biases that are present in the human decisions they're trained on. And when the methods are evaluated using experimental interventions, it is important to understand how to apply principles for the ethical conduct of experiments with human participants.