Fall 2017 CS6741: Structured Prediction for NLP

Time: Monday and Wednesday, 10:10-11:25am
Room: Gates 416 / TBA
Instructor: Yoav Artzi, yoav@cs (office hours by appointment)
Class listing: CS6741
CMS, Piazza, Zoom (lectures), CMT (paper reviewing), AOI (topic voting -- cs6741fa17)

All students are asked to bring their laptops to class. We will use laptops to broadcast content. If you cannot bring a laptop, you will need to share one with another student (this works well in pairs). This applies to students on both campuses.

In this course, we will study various topics in NLP. We will focus on research results and switch topics every few meetings. In general, most topic discussions will include a technical overview, data analysis, a classical result, and recent results. Results will be discussed through research papers, with class discussion and paper reviews. We will use CMT to read and review papers. Each meeting will start with a 10-15 minute presentation followed by a discussion. In addition, we will dedicate part of the semester to a deep dive into a single focus topic; this semester, the focus topic is reinforcement learning for NLP. The focused part of the semester will include, in addition to paper reviews and discussions, implementation and analysis of core algorithms.

We will use All Our Ideas to vote on topics. Topics will be selected based on votes and the general agenda of the course. There is no guarantee that the top-voted topic will be discussed next, although it is very likely. Please vote often: the more you vote, the more reliable the ranking. The password for the website will be given during the first lecture.

We will continuously update this page.

Possible Topics (not exhaustive)

Schedule

Date Topic Readings Presenter Data Optional Readings and Others
Aug 23 Introduction Intro Questionnaire

Procedurals

Project: There are two project options: research and survey. If you choose the research option, you will do a research project (it can be your own research if relevant -- it probably is!). The survey option requires writing a survey paper on a selected area in NLP. Both options include: (a) a proposal presentation, (b) a final presentation, and (c) a final report submission.

Auditing: Auditing is allowed and encouraged with instructor permission. It requires attending all classes, submitting reviews, presenting papers, and participating in the discussion. Auditing does not require completing the project or giving any of the project-related presentations. The goal is to allow interested students to join while maintaining a lively and productive discussion group. If you want to audit, please email the instructor as soon as possible.

Grading: The grade will include paper reviews, participation, project, and an intro questionnaire.

Participation of non-PhD students: If you are a master's student or an advanced undergraduate student and you wish to participate in the class, please email the instructor. Cornell Tech master's students, please follow the application instructions emailed to you.

Policies are subject to change. If something is not clear, please contact the course staff.

Paper Reviewing Guidelines

Each paper review will require a short summary of the paper and the actual review. Here are some questions you may use to guide your review (many others are valid too):

Since this is not a real conference review, please also write what you learned from this paper and why, in your opinion, it was a good choice for reading (or why it was a bad choice). Reviews are due at 5pm the day before class.

Paper Presentation and Discussion Guidelines

Each meeting, if readings are discussed, one student will present the papers for 10-15 minutes. The presentation may use slides or be purely verbal. If data is available for the topic, use data examples to illustrate your points. We will then go around the room, and each student will contribute to the discussion.

Data Analysis Guidelines

Pick at least 2-3 examples to discuss during your presentation in class. Prepare the examples for display on screen; we will share your screen as necessary. Choose examples that illustrate various aspects of the paper and task. The questions you should think about include (but are not limited to):

References

Related Readings

Short and Incomplete List of NLP Pointers

NLP Conferences and Journals

The main publication venues are ACL, NAACL, EMNLP, TACL, EACL, CoNLL, and CL. All papers from these venues can be found in the ACL Anthology. In addition, NLP publications often appear in ML and AI conferences, including ICML, NIPS, ICLR, AAAI, and IJCAI. A calendar of NLP events is available here, and ACL-sponsored events are listed here.

Corpora and Other Data

Tagging

Part-of-speech Tags

Both parsing corpora below (PTB and UD) contain POS tags. Each parse tree contains POS tags for all leaf nodes. You can view a sample of the PTB in NLTK:

>>> import nltk
>>> print(' '.join('/'.join(pair) for pair in nltk.corpus.treebank.tagged_sents()[0]))
Pierre/NNP Vinken/NNP ,/, 61/CD years/NNS old/JJ ,/, will/MD join/VB the/DT board/NN as/IN a/DT nonexecutive/JJ director/NN Nov./NNP 29/CD ./.
>>> print(' '.join('/'.join(pair) for pair in nltk.corpus.treebank.tagged_sents(tagset='universal')[0]))
Pierre/NOUN Vinken/NOUN ,/. 61/NUM years/NOUN old/ADJ ,/. will/VERB join/VERB the/DET board/NOUN as/ADP a/DET nonexecutive/ADJ director/NOUN Nov./NOUN 29/NUM ./.

The universal tag set is described here. The PTB tag set is described here.
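
To ground the tagging discussion, here is a minimal sketch of the classic most-frequent-tag baseline, which tags each word with the tag it most often received in training. The training data below is a made-up toy set, not drawn from the PTB:

```python
from collections import Counter, defaultdict

# Toy training data (hypothetical, not from the PTB).
train = [[('the', 'DT'), ('board', 'NN')],
         [('will', 'MD'), ('join', 'VB')],
         [('will', 'MD'), ('join', 'VB')],
         [('the', 'DT'), ('will', 'NN')]]

# Count how often each tag appears for each word.
counts = defaultdict(Counter)
for sent in train:
    for word, tag in sent:
        counts[word][tag] += 1

def tag_words(words, default='NN'):
    """Tag each word with its most frequent training tag."""
    return [(w, counts[w].most_common(1)[0][0]) if w in counts
            else (w, default) for w in words]

print(tag_words(['the', 'will', 'join', 'now']))
# [('the', 'DT'), ('will', 'MD'), ('join', 'VB'), ('now', 'NN')]
```

Despite its simplicity, this baseline is a standard point of comparison for structured taggers: ambiguous words like "will" are resolved by frequency alone, with no context.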

Named Entity Recognition Data

The CoNLL 2002 shared task is available in NLTK:

>>> import nltk
>>> len(nltk.corpus.conll2002.iob_sents())
35651
>>> len(nltk.corpus.conll2002.iob_words())
678377
>>> print(' '.join(word + '/' + tag for word, _, tag in nltk.corpus.conll2002.iob_sents()[0]))
Sao/B-LOC Paulo/I-LOC (/O Brasil/B-LOC )/O ,/O 23/O may/O (/O EFECOM/B-ORG )/O ./O

CoNLL 2002 is annotated with the IOB annotation scheme and multiple entity types.
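
Token-level IOB tags can be decoded into typed entity spans. A minimal sketch (a hypothetical helper, not part of NLTK), run on the example sentence above:

```python
def iob_to_spans(tags):
    """Return (start, end, type) entity spans from a list of IOB tags."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag == 'O' or tag.startswith('B-'):
            if start is not None:  # close the current span
                spans.append((start, i, etype))
                start, etype = None, None
            if tag.startswith('B-'):
                start, etype = i, tag[2:]
        elif tag.startswith('I-') and start is None:
            # Tolerate I- without a preceding B- (treat as a span start).
            start, etype = i, tag[2:]
    if start is not None:
        spans.append((start, len(tags), etype))
    return spans

tags = ['B-LOC', 'I-LOC', 'O', 'B-LOC', 'O', 'O',
        'O', 'O', 'O', 'B-ORG', 'O', 'O']
print(iob_to_spans(tags))
# [(0, 2, 'LOC'), (3, 4, 'LOC'), (9, 10, 'ORG')]
```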

NYT Recipe Data

This is another example of tagging. The task is explained here, and the data release is described here.

Dependency Parsing

The Universal Dependencies (UD) project is publicly available online. The website includes statistics for all annotated languages. You can easily download v1.3 from here. UD files follow the simple CoNLL-U format.
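
As a quick illustration of the CoNLL-U format, here is a minimal sketch of reading its 10 tab-separated columns; the two-token example sentence below is made up, not taken from UD:

```python
# The 10 CoNLL-U columns, in order.
FIELDS = ['id', 'form', 'lemma', 'upos', 'xpos',
          'feats', 'head', 'deprel', 'deps', 'misc']

def parse_conllu(block):
    """Parse one CoNLL-U sentence block into a list of token dicts."""
    tokens = []
    for line in block.strip().split('\n'):
        if line.startswith('#'):  # skip sentence-level comments
            continue
        tokens.append(dict(zip(FIELDS, line.split('\t'))))
    return tokens

sent = ('1\tDogs\tdog\tNOUN\tNNS\tNumber=Plur\t2\tnsubj\t_\t_\n'
        '2\tbark\tbark\tVERB\tVBP\t_\t0\troot\t_\t_')
tokens = parse_conllu(sent)
print([(t['form'], t['head'], t['deprel']) for t in tokens])
# [('Dogs', '2', 'nsubj'), ('bark', '0', 'root')]
```

Note that heads are 1-indexed token ids, with 0 marking the root.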

Constituency Parsing

The Penn Treebank is available from the LDC. You will find tgrep useful for quickly searching the corpus for patterns. NLTK can also be used to load parse trees. A few more browsers are available online.
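
For a sense of how bracketed PTB-style trees are structured, here is a minimal sketch (pure Python, no NLTK) of parsing a bracketed tree string into nested (label, children) tuples; the input tree is a hand-made fragment, not an actual PTB parse:

```python
def parse_tree(s):
    """Parse a PTB-style bracketed string into (label, children) tuples."""
    tokens = s.replace('(', ' ( ').replace(')', ' ) ').split()
    pos = 0

    def parse():
        nonlocal pos
        assert tokens[pos] == '('
        pos += 1
        label = tokens[pos]
        pos += 1
        children = []
        while tokens[pos] != ')':
            if tokens[pos] == '(':
                children.append(parse())  # nested constituent
            else:
                children.append(tokens[pos])  # leaf word
                pos += 1
        pos += 1  # consume ')'
        return (label, children)

    return parse()

tree = parse_tree('(S (NP (NNP Pierre) (NNP Vinken)) (VP (MD will) (VB join)))')
print(tree[0])
# S
```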

Machine Translation

The WMT shared task from 2016 is a good source for newswire bi-text.

Textual Entailment

TE has been studied extensively for more than a decade now. Recently, SNLI has been receiving significant attention.

Reading Comprehension

Semantic Parsing

We will look at three data sets commonly used for semantic parsing:

  1. GeoQuery: A natural language interface to a small US geography database. The original data is available here, and the original query language is described here. The data with lambda calculus logical forms is available here.
  2. ATIS: A natural language interface for a flights database. The data is available from the LDC.
  3. Navi: Instructional language for robot navigation. The original data is described here, but we recommend using the data here.
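
To make the task concrete, here is a toy sketch of executing a lambda-calculus-style logical form against a database, in the spirit of GeoQuery; the database, the tuple encoding of logical forms, and the predicate names below are all made up for illustration:

```python
# Toy database (hypothetical, not the real GeoQuery data).
STATES = {'texas', 'oklahoma', 'new_mexico'}
NEXT_TO = {('oklahoma', 'texas'), ('new_mexico', 'texas')}

def evaluate(form, x):
    """Evaluate a predicate tuple against candidate entity x."""
    op = form[0]
    if op == 'and':
        return all(evaluate(f, x) for f in form[1:])
    if op == 'state':
        return x in STATES
    if op == 'next_to':
        return (x, form[1]) in NEXT_TO
    raise ValueError('unknown predicate: %s' % op)

# "what states border texas":
# (lambda x (and (state x) (next_to x texas)))
lf = ('and', ('state',), ('next_to', 'texas'))
print(sorted(e for e in STATES if evaluate(lf, e)))
# ['new_mexico', 'oklahoma']
```

A semantic parser maps the sentence to the logical form; the denotation is then obtained by executing the form against the database, as above.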

Word Analogy

Question Answering

Vision and Language

Online Demos, Systems, and Tools

If you encounter an interesting demo or system not listed here, please email the course instructor.

Deep Learning frameworks and tools: