NAACL 2018


We are a bit more than a month away from NAACL 2018, so I thought I'd write a little about the work I am directly or indirectly involved in and what I'm most excited about this year, and hopefully whet your appetite for what is to come 🙂

Before the main conference, Jose Camacho-Collados, Mohammad Taher Pilehvar and I will give a tutorial on the Interplay between Lexical Resources and NLP. We will give an overview of the main avenues where corpus-based approaches to NLP interact with lexical resources, and vice versa, i.e., how NLP techniques can be leveraged to automatically improve the quality of existing resources (or to create them from scratch). We will cover topics in computational lexicography, as well as knowledge-based embeddings (and their applications) and informing neural networks with expert knowledge. We will make the outline of the tutorial available very soon!

In the main conference I will present a short paper on identifying definition sentences in corpora. With state-of-the-art results, our model can be used to ease the glossary/dictionary writing process, or as the first step in an ontology learning pipeline. The code is available here.

Finally, I am particularly excited about two SemEval tasks I have had the honor to co-organize: Task 2 on Multilingual Emoji Prediction, and Task 9 on Hypernym Discovery. I will write about each of them in more detail in a few days (why we did it, motivations, challenges, etc.), but let me leave a couple of ideas out there.

  • The best system on hypernym discovery includes a Hearst pattern matching module as one of its components (a minimal sketch of this kind of matching follows this list).
  • The best system on multilingual emoji prediction uses an n-gram-based SVM classifier, not a neural network (a sketch of such a classifier is also shown below). These results are very interesting and we are looking forward to discussing them at the workshop!
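
To give a flavor of what Hearst pattern matching looks like, here is a minimal sketch that extracts (hyponym, hypernym) candidate pairs from raw text with regular expressions. This is my own illustration, not the winning system's code: real systems typically match over POS-tagged or chunked text, and the pattern set and helper names here are assumptions for the example.

```python
import re

# A few classic Hearst (1992) patterns, expressed as plain regexes.
# Each pattern comes with a flag saying which capture group is the hypernym.
HEARST_PATTERNS = [
    # "X such as Y"      -> Y is a hyponym of X
    (re.compile(r"(\w[\w ]*?) such as (\w[\w ]*)"), "first_is_hypernym"),
    # "Y and other X"    -> Y is a hyponym of X
    (re.compile(r"(\w[\w ]*?) and other (\w[\w ]*)"), "second_is_hypernym"),
    # "X, including Y"   -> Y is a hyponym of X
    (re.compile(r"(\w[\w ]*?),? including (\w[\w ]*)"), "first_is_hypernym"),
]

def extract_hypernym_pairs(sentence):
    """Return (hyponym, hypernym) candidate pairs found in a sentence."""
    pairs = []
    for pattern, order in HEARST_PATTERNS:
        for match in pattern.finditer(sentence):
            a, b = match.group(1).strip(), match.group(2).strip()
            if order == "first_is_hypernym":
                pairs.append((b, a))  # b is the hyponym, a the hypernym
            else:
                pairs.append((a, b))
    return pairs

print(extract_hypernym_pairs("animals such as dogs"))
# [('dogs', 'animals')]
```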
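
Likewise, as a rough illustration of the emoji prediction setup (again, not the winning team's actual pipeline), an n-gram SVM baseline can be put together in a few lines of scikit-learn. The training data below is a toy placeholder; the actual SemEval-2018 Task 2 data maps tweets to one of 20 emoji labels.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy placeholder data: tweets paired with the emoji they originally contained.
tweets = ["good morning sunshine", "I love pizza night", "at the beach all day"]
labels = ["sun", "red_heart", "beach"]

# Character n-grams fed to a linear SVM -- exactly the kind of
# "old-fashioned" setup discussed below.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LinearSVC(),
)
model.fit(tweets, labels)

print(model.predict(["pizza with friends tonight"]))
```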

It seems, then, that “old-fashioned” methods (linear models, pattern matching, etc.) are still quintessential for good performance in a number of NLP tasks. Personally, I think this is good: it highlights the fact that language cannot be modeled simply by throwing in a lot of data without considering the nuances of the linguistic problem we aim to model.
