Syllabus Fall 2018

The course is a hands-on, research-level introduction to the areas of computer science that have direct relevance to journalism, and to the broader project of producing an informed and engaged public. We study two big ideas: the application of computation to produce journalism (such as data science for investigative reporting), and journalism about areas that involve computation (such as the analysis of credit scoring algorithms).

Along the way we will touch on many topics: information recommendation systems but also filter bubbles, principles of statistical analysis but also the human processes that generate data, network analysis and its role in investigative journalism, and visualization techniques and the cognitive effects involved in viewing a visualization.

Assignments will require programming in Python, but the emphasis will be on clearly articulating the connection between the algorithmic and the editorial. Research-level computer science material will be discussed in class, but the emphasis will be on understanding the capabilities and limitations of this technology.

Format of the class, grading and assignments.
This is a fourteen-week, six-point course for CS & journalism dual degree students. (It is a three-point course for cross-listed students, who also do not have to complete the final project.) The class is conducted in a seminar format. Assigned readings and computational techniques will form the basis of class discussion. The course will be graded as follows:

  • Assignments: 40%. There will be five homework assignments.
  • Final project: 40%. Dual degree students will complete a medium-ish final project (others will get this 40% from assignments).
  • Class participation: 20%

Assignments will involve experimentation with fundamental computational techniques. Some assignments will require intermediate-level coding in Python, but the emphasis will be on thoughtful and critical analysis. As this is a journalism course, you will be expected to write clearly. The final project can be either a piece of software (especially a plugin or extension to an existing tool), a data-driven story, or a research paper on a relevant technique.

The class is conducted on a pass/fail basis for journalism students, in line with the journalism school’s grading system. Students from other departments will receive a letter grade.

Week 1: High-dimensional data – 9/12
CS techniques can help journalism in two main ways: using computation to do journalism, and doing journalism about computation. Either way, we’ll be working a lot with the abstraction of high-dimensional vectors. We’ll start with an overview of interpreting high-dimensional data, then jump right into clustering and the document vector space model, which we’ll need to study natural language processing and recommendation engines.
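To make the vector space model concrete, here is a minimal sketch (not course code) that turns a few invented documents into TF-IDF vectors and clusters them with k-means in scikit-learn; the documents and the choice of two clusters are assumptions for illustration.

```python
# Sketch: documents as TF-IDF vectors, clustered with k-means.
# The toy documents and n_clusters=2 are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "the senate passed the budget bill",
    "the house debated the budget",
    "the team won the championship game",
    "fans celebrated the championship win",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)  # each row is a high-dimensional document vector

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # e.g. [0 0 1 1]: budget stories vs. sports stories
```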

Slides.

References

Viewed in class

Week 2: Text analysis – 9/19
We’ll start by picking up the story of text analysis in journalism, including the development of the Overview document mining system. Then probabilistic topic modeling (à la LDA), matrix factorization, more general plate-notation graphical models, and word embedding approaches based on deep learning. Then on to fundamental recommendation approaches such as collaborative filtering. Bringing it to practice, we will look at Columbia Newsblaster (a precursor to Google News) and the New York Times recommendation engine.
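As a preview of the topic modeling portion, here is a hedged sketch of LDA using scikit-learn rather than any specific tool from the readings; the toy corpus and two-topic setting are invented for illustration.

```python
# Sketch: probabilistic topic modeling with LDA in scikit-learn.
# The corpus and n_components=2 are invented, not course data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "oil prices rose as markets opened",
    "stocks fell on weak earnings reports",
    "the vaccine trial showed strong results",
    "hospitals reported fewer flu cases this week",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)  # LDA works on raw term counts

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-4:][::-1]]  # highest-weight words
    print(f"topic {i}: {top}")
```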

Slides.

Required

References

Discussed in class

Assignment: LDA analysis of State of the Union speeches.

Week 3: Filter Design
We’ve studied filtering algorithms, but how are they used in practice — and how should they be? We will study the details of several algorithmic filtering approaches used by social networks, and effects such as polarization and filter bubbles.
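For a sense of what designing a filter means in code, here is a toy ranking sketch: items are scored by a weighted mix of recency decay and engagement. The fields, weights, and half-life are hypothetical, not any platform’s actual formula.

```python
# Toy filter: score items by recency decay plus engagement, then rank.
# All weights and fields are hypothetical illustrations.
import math
import time

def score(item, now, half_life_hours=24, w_engagement=1.0, w_recency=2.0):
    age_hours = (now - item["posted_at"]) / 3600
    recency = math.exp(-age_hours * math.log(2) / half_life_hours)  # halves each day
    engagement = math.log1p(item["likes"] + 2 * item["shares"])  # diminishing returns
    return w_recency * recency + w_engagement * engagement

now = time.time()
items = [
    {"id": "a", "posted_at": now - 3600, "likes": 10, "shares": 1},
    {"id": "b", "posted_at": now - 86400, "likes": 500, "shares": 80},
]
feed = sorted(items, key=lambda it: score(it, now), reverse=True)
print([it["id"] for it in feed])
```

Every constant here is an editorial decision in disguise, which is exactly the point of the assignment below.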

Slides.

Readings

References

Viewed in class

Assignment 2: Design a filtering algorithm for an information source of your choosing

Week 4: Quantification and Statistical Inference 
We’ll begin with the most neglected topic in statistics: measurement. We’ll take a detailed look at the question of what to count, and how to “interview the data” to check for data quality. Then we’ll move on to risk ratios, one of the simplest statistical models and a key idea in accountability journalism. We’ll continue with a look at the uses of multi-variable regression in journalism, and study graphical causal models to help untangle correlation and causation.
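Risk ratios are simple enough to show in a few lines. A minimal worked example, with made-up 2×2 counts:

```python
# Worked example of a risk ratio; all counts are invented.
exposed_events, exposed_total = 30, 100      # e.g. loan denials among group A applicants
unexposed_events, unexposed_total = 10, 100  # denials among group B applicants

risk_exposed = exposed_events / exposed_total        # 0.30
risk_unexposed = unexposed_events / unexposed_total  # 0.10
risk_ratio = risk_exposed / risk_unexposed           # 3.0: group A denied 3x as often
print(risk_ratio)
```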

Slides.

Required:

Recommended

Viewed in class

Week 5: Algorithmic Accountability and Discrimination 
Algorithmic accountability is the study of the algorithms that regulate society, from high frequency trading to predictive policing. We’re at their mercy, unless we learn how to investigate them. We’ll review previous work in this area, then start our study of algorithmic discrimination. Analyzing discrimination data is more subtle and complex than it might seem.
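One example of that subtlety is Simpson’s paradox: aggregate rates can suggest discrimination that vanishes (or reverses) once the data is disaggregated. A sketch with invented numbers:

```python
# Simpson's-paradox-style illustration; all numbers are invented.
# Within each department the two groups are admitted at equal rates,
# but group B applied more often to the more selective department,
# so its overall rate looks much worse.
data = {
    # dept: {group: (admitted, applied)}
    "easy": {"A": (80, 100), "B": (16, 20)},
    "hard": {"A": (10, 50), "B": (40, 200)},
}

for group in ("A", "B"):
    admitted = sum(data[d][group][0] for d in data)
    applied = sum(data[d][group][1] for d in data)
    print(group, "overall rate:", admitted / applied)
    for d in data:
        a, n = data[d][group]
        print("  ", d, "rate:", a / n)
```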

Slides.

Required

References

Viewed in class

Week 6: Quantitative Fairness

Most algorithmic accountability and AI fairness work so far has been concerned with “bias,” but what is that? The answer is more complex than it might seem. In this class we’ll discuss the many definitions of fairness and show that they mostly boil down to three different formulations. We’ll also discuss everything around the algorithm, including how the results are used and what the training data means.
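As a preview, here is a hedged sketch that computes three commonly cited formulations per group from a classifier’s outputs: demographic parity, error-rate balance, and a calibration-style precision check. Whether these match the exact formulations discussed in class is an assumption, and the records are invented.

```python
# Three fairness formulations computed per group; data is invented.
from collections import defaultdict

# (group, predicted_positive, actual_positive)
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 1),
]

by_group = defaultdict(list)
for g, pred, actual in records:
    by_group[g].append((pred, actual))

for g, rows in by_group.items():
    positive_rate = sum(p for p, _ in rows) / len(rows)  # demographic parity
    negatives = [(p, a) for p, a in rows if a == 0]
    fpr = sum(p for p, _ in negatives) / len(negatives)  # error-rate balance
    flagged = [(p, a) for p, a in rows if p == 1]
    precision = (sum(a for _, a in flagged) / len(flagged)
                 if flagged else float("nan"))            # calibration-style check
    print(g, positive_rate, fpr, precision)
```

Even on this tiny example the groups differ on some metrics and not others, which is why the choice of definition matters.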

Slides.

Required:

References

Week 7: Randomness and Significance
The notion of randomness is crucial to the idea of statistical significance. We’ll talk about determining causality, p-hacking and reproducibility, and the more qualitative, closer-to-real-world method of triangulation.
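A permutation test is one place where the role of randomness is easy to see: shuffle the group labels many times and count how often chance alone produces a difference as large as the observed one. A minimal sketch with invented samples:

```python
# Permutation test for a difference in means; samples are invented.
import random

group_a = [12, 15, 14, 11, 16, 13]
group_b = [9, 10, 8, 11, 10, 9]
observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

pooled = group_a + group_b
count = 0
trials = 10000
for _ in range(trials):
    random.shuffle(pooled)  # break any real group/outcome association
    a, b = pooled[:len(group_a)], pooled[len(group_a):]
    if sum(a) / len(a) - sum(b) / len(b) >= observed:
        count += 1
print("one-sided p ~", count / trials)
```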

Slides.

Required

Recommended

Viewed in class

Week 8: Visualization, Network Analysis 
Visualization helps people interpret information. We’ll look at design principles drawn from user experience considerations, graphic design, and the study of the human visual system. Network analysis (aka social network analysis, link analysis) is a promising and popular technique for uncovering relationships between diverse individuals and organizations. It is widely used in intelligence and law enforcement, and increasingly in journalism.

Slides.

Readings

  • Visualization, Tamara Munzner
  • Network Analysis in Journalism: Practices and Possibilities, Stray

References

Examples:

Assignment: Compare different centrality metrics in Gephi.
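The assignment itself uses Gephi’s GUI; as a hedged companion, the same kinds of centrality metrics can be computed in Python with networkx on a toy graph (the edge list below is invented):

```python
# Comparing centrality metrics with networkx; the graph is invented.
import networkx as nx

G = nx.Graph([
    ("reporter", "source1"), ("reporter", "source2"),
    ("source1", "official"), ("source2", "official"),
    ("official", "lobbyist"), ("lobbyist", "donor"),
])

for name, metric in [
    ("degree", nx.degree_centrality(G)),
    ("betweenness", nx.betweenness_centrality(G)),
    ("eigenvector", nx.eigenvector_centrality(G)),
]:
    top = max(metric, key=metric.get)
    print(f"{name:12s} top node: {top}")
```

Different metrics can crown different nodes as "most central," which is the point of the comparison.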

Week 9: Knowledge representation
How can journalism benefit from encoding knowledge in some formal system? Is journalism in the media business or the data business? And could we use knowledge bases and inferential engines to do journalism better? This gets us deep into the issue of how knowledge is represented in a computer. We’ll look at traditional databases vs. linked data and graph databases, entity and relation detection from unstructured text, and both probabilistic and propositional formalisms. Plus: NLP in investigative journalism, automated fact checking, and more.

Slides.

Readings

References

Viewed in class

Assignment: Text enrichment experiments using OpenCalais entity extraction.
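OpenCalais requires an API key, so as a stand-in illustration of the same idea (entity extraction from unstructured text) this sketch uses spaCy’s pretrained NER model instead; spaCy is an assumption here, not the assignment’s tool.

```python
# Entity extraction with spaCy as a stand-in for OpenCalais.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Senator Jane Smith met with Acme Corp lobbyists in Albany on Tuesday.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. Jane Smith PERSON, Acme Corp ORG, Albany GPE
```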

Week 10: Truth and Trust 

Credibility indicators and schema. Information operations. Fake news detection and automated fact checking. Tracking information flows.

Slides.

Readings

References

Week 11: Privacy, Security, and Censorship
Who is watching our online activities? Who gets access to all of this mass intelligence, and what does the ability to surveil everything all the time mean, both practically and ethically, for journalism? In this lecture we cover both the basics of digital security, and methods to deal with specific journalistic situations — anonymous sources, handling leaks, border crossings, and so on.

Slides.

Readings

  • Digital Security for Journalists, Part 1 and Part 2, Stray

References

Viewed in Class

Week 12: Final Project Presentations 
