opinion mining with deep recurrent nets
Paper: Opinion Mining with Deep Recurrent Neural Networks
O. Irsoy and C. Cardie. EMNLP 2014, Doha, Qatar.

Abstract: Recurrent neural networks (RNNs) are connectionist models of sequential data that are naturally applicable to the analysis of natural language. Recently, "depth in space" (as an orthogonal notion to "depth in time") in RNNs has been investigated by stacking multiple layers of RNNs and shown empirically to bring a temporal hierarchy to the architecture. In this work we apply these deep RNNs to the task of opinion expression extraction formulated as a token-level sequence-labeling task. Experimental results show that deep, narrow RNNs outperform traditional shallow, wide RNNs with the same number of parameters. Furthermore, our approach outperforms previous CRF-based baselines, including the state-of-the-art semi-Markov CRF model, and does so without access to the powerful opinion lexicons and syntactic features relied upon by the semi-CRF, and without the standard layer-by-layer pre-training typically required of RNN architectures.
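As a rough illustration of the architecture the abstract describes: a deep ("stacked") RNN feeds each layer the hidden states of the layer below at the same time step, while each layer's own recurrence carries information through time. The C++ sketch below shows this forward pass for a unidirectional network with tanh units and a per-token softmax over labels; the weight names (W, V, U, b) and the omission of the bidirectional pass are simplifications of mine, not the interface of the code linked below.

#include <algorithm>
#include <cmath>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// y = M x, with M stored row-major (M.size() rows, x.size() columns).
Vec matvec(const Mat& M, const Vec& x) {
    Vec y(M.size(), 0.0);
    for (size_t i = 0; i < M.size(); ++i)
        for (size_t j = 0; j < x.size(); ++j)
            y[i] += M[i][j] * x[j];
    return y;
}

// One stacked layer: W reads the layer below ("depth in space"),
// V reads this layer at the previous time step ("depth in time").
struct Layer {
    Mat W;
    Mat V;
    Vec b;
};

// Forward pass over one sentence: x[t] is the word vector of token t.
// Returns a probability distribution over the k labels for every token
// (e.g. BIO tags marking opinion expressions).
std::vector<Vec> forward(const std::vector<Vec>& x,
                         const std::vector<Layer>& layers,
                         const Mat& U) {  // top-hidden-to-label weights
    const size_t L = layers.size(), T = x.size();
    std::vector<std::vector<Vec>> h(L);  // h[l][t]: hidden state of layer l at time t
    std::vector<Vec> probs(T);
    for (size_t t = 0; t < T; ++t) {
        for (size_t l = 0; l < L; ++l) {
            const Vec& below = (l == 0) ? x[t] : h[l - 1][t];
            Vec a = matvec(layers[l].W, below);
            if (t > 0) {  // recurrent input from the previous time step
                Vec rec = matvec(layers[l].V, h[l][t - 1]);
                for (size_t i = 0; i < a.size(); ++i) a[i] += rec[i];
            }
            for (size_t i = 0; i < a.size(); ++i)
                a[i] = std::tanh(a[i] + layers[l].b[i]);
            h[l].push_back(a);
        }
        // Label distribution is read off the topmost hidden layer only.
        Vec z = matvec(U, h[L - 1][t]);
        double mx = *std::max_element(z.begin(), z.end()), sum = 0.0;
        for (double& v : z) { v = std::exp(v - mx); sum += v; }
        for (double& v : z) v /= sum;
        probs[t] = z;
    }
    return probs;
}

Depth here is simply the number of stacked layers: the paper's central comparison is between increasing that number while narrowing each layer versus a single wide layer with the same total parameter count.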

Slides: The oral presentation slides are here.

More slides: Part of this talk, which I gave at the Cornell AI seminar, was based on this work.

Code: My C++ code is here. Please cite the paper if you use it.

Data: A preprocessed version of the dataset is here; you can use it to replicate the results. The original MPQA corpus can be found here. Please cite the appropriate paper by Wiebe et al. (2005) if you use the data.



Bibtex:
@InProceedings{irsoy-drnt,
  author = {{\.I}rsoy, Ozan and Cardie, Claire},
  title = {Opinion Mining with Deep Recurrent Neural Networks},
  booktitle = {Proceedings of the Conference on Empirical Methods in Natural Language Processing},
  pages = {720--728},
  year = {2014},
  address = {Doha, Qatar},
  url = {http://aclweb.org/anthology/D14-1080}
}