I am currently a Senior Lecturer at the School of Computer Science and Informatics at Cardiff University. I have also been a UKRI Future Leaders Fellow since February 2021, and I lead the Cardiff NLP group.
Before that, I was a postdoc for a year on the FLEXILOG ERC project with Steven Schockaert. Previously, I was a Google Doctoral Fellow and PhD student
at the Linguistic Computing Laboratory (LCL) of Sapienza University of Rome.
My educational background includes an Erasmus Mundus Master's in Natural Language Processing and Human Language Technology and a five-year BSc in Mathematics.
I also worked for a year as a research engineer at ATILF-CNRS in Nancy (France).
Research. I work on various topics in Natural Language Processing (NLP), mainly in the areas of lexical and distributional semantics. In this area, I recently wrote a book (titled Embeddings in NLP) with Taher Pilehvar that gives an overview of recent trends in distributional semantics and NLP. In recent years I have been particularly interested in how relational knowledge is captured in current NLP models (embeddings/language models - check out our latest RelBERT model!), and how this plays a role in applications. During my PhD I also worked on integrating explicit knowledge (mainly from lexical resources) into downstream NLP applications, with a special focus on multilinguality and ambiguity. To this end, I have been collaborating on the BabelNet project and developing knowledge-based sense vector representations (e.g. NASARI and SW2V) to serve as a bridge between lexical resources and text-based applications.
Open data. I strongly believe that well-curated datasets and resources, as well as shared tasks, are key to advancing science. In 2019 we organized the WiC challenge on evaluating context-sensitive representations. This competition was part of a shared task in the IJCAI workshop SemDeep, and is featured in the SuperGLUE language understanding benchmark. We recently extended this effort with the WiC-TSV benchmark. In the past I also helped organize several SemEval tasks on Word Similarity, Hypernym Discovery and Emoji Prediction. Check them out: all datasets are openly available and cover various languages!
Finally, I have also recently been working on social media applications, for which we have released open datasets (TweetEval), time-specific models (TimeLMs), multilingual language models (XLM-T) and cross-lingual word embeddings!
Other. NLP aside, I love travelling and sports. I was raised in Granada, a wonderful city in the south of Spain where I spent the first 20 years of my life. Since then, I have lived in large European cities such as Paris, Barcelona and Rome, and spent extended periods in Seoul. I have also lived in other smaller (but equally charming) cities: Nancy and Besançon (France) and Wolverhampton (UK). I enjoy practising all kinds of sports: football, swimming, tennis, padel, ping pong... and chess (yes, it is also a sport!). I hold the International Master chess title and am currently the Welsh chess champion!
Teaching and Supervision: You can find more about my current teaching and PhD supervision here.
Note: If you are interested in doing a PhD with me, please read this note to prospective PhD students.
|Mark Anderson and Jose Camacho-Collados.
Assessing the Limits of the Distributional Hypothesis in Semantic Spaces: Trait-based Relational Knowledge and the Impact of Co-occurrences. [paper] [data]
*SEM 2022, Seattle (USA).
|Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke and Jose Camacho-Collados.
TimeLMs: Diachronic Language Models from Twitter. [paper] [data&code]
ACL 2022 (Demo), Dublin (Ireland).
|Asahi Ushio, Jose Camacho-Collados and Steven Schockaert.
Distilling Relation Embeddings from Pretrained Language Models. [paper] [data&code]
|Asahi Ushio, Luis Espinosa-Anke, Steven Schockaert and Jose Camacho-Collados.
BERT is to NLP what AlexNet is to CV: Can Pre-Trained Language Models Identify Analogies? [paper] [data&code]
|Daniel Loureiro*, Kiamehr Rezaee*, Mohammad Taher Pilehvar and Jose Camacho-Collados.
Language Models and Word Sense Disambiguation: An Overview and Analysis. [paper] [data&code]
Computational Linguistics (2021).
|Francesco Barbieri, Jose Camacho-Collados, Leonardo Neves and Luis Espinosa-Anke.
TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification. [paper] [data]
Findings of EMNLP 2020.
March 2022. We are organising the first EvoNLP workshop (Workshop on Ever Evolving NLP), co-located with EMNLP. EvoNLP also features a meaning shift detection shared task framed as Word-in-Context; trial data is available!
April 2022. We are organising a two-day NLP workshop on June 30 and July 1!
February 2022. We have launched TimeLMs, with a commitment to release a new language model every three months!
August 2021. Taher Pilehvar and I taught a week-long course on "Embeddings in NLP" at the ESSLLI summer school.
December 2020. Our "Embeddings in NLP" book has been published! All information, including a short video tutorial, here.
November 2020. Hiring! We are looking for a three-year postdoc starting early/mid 2021. All the details on how to apply here.
October 2020. Excited and honoured to be awarded a UKRI Future Leaders Fellowship!
September 2020. We recently received a research grant from Snap Research (w/ Luis Espinosa-Anke and Daniel Loureiro) and started a collaboration investigating meaning shift in social media.