Over the past 10 years, I have been involved in a number of amazing projects and career challenges. I am proud to share some (or most) of them below. Note that this is not (and I do not want it to be) a complete list.
[2019-NOW] About Your Health, Sleep, and Wearables
Most of my latest work has been on applying artificial intelligence techniques to understand the link between lifestyle and clinical outcomes, and there are many unique opportunities for using AI in the health domain.
For example, the research on lifestyle and sleep is still in its infancy, and many of the current procedures could benefit from AI. One instance of this problem that caught my attention is the set of methods used to measure sleep quality. Traditionally, the most accurate procedure to assess sleep is polysomnography (PSG). However, it is as cumbersome and expensive as it is precise, which drastically limits its use in longitudinal research. An alternative to PSG is the use of wearables. The most common wearable device in clinical use is the actigraph, an old cousin of modern smartwatches. Actigraphy devices use an accelerometer to estimate sleep quality, mapping the amount of movement performed during sleep to sleep-wake stages and other measures such as sleep efficiency. Since the 1980s, many different formulas have been devised, usually on small cohorts of a dozen people, to map the activity captured by the accelerometer to what PSG would have measured (sleep-wake stages).
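To give a concrete feel for what these formulas look like, here is a minimal sketch of the general recipe they follow: a weighted moving window over per-epoch activity counts, thresholded into sleep or wake. The weights and threshold below are illustrative placeholders, not the coefficients of any published formula.

```python
import numpy as np

def score_sleep_wake(activity_counts, weights=None, threshold=1.0):
    """Toy actigraphy scorer: weighted sum of the activity counts around
    each epoch, thresholded into wake (1) or sleep (0).

    The weights and threshold are illustrative placeholders, not the
    values of Cole-Kripke, Sadeh, or any other published formula.
    """
    if weights is None:
        # Centered window: a few epochs before, the current epoch, a few after.
        weights = np.array([0.04, 0.04, 0.20, 1.00, 0.20, 0.04, 0.04])
    half = len(weights) // 2
    padded = np.pad(np.asarray(activity_counts, dtype=float), half)
    # Reverse the kernel so convolution acts as a plain weighted window.
    scores = np.convolve(padded, weights[::-1], mode="valid")
    return (scores >= threshold).astype(int)  # 1 = wake, 0 = sleep

# Example: one activity count per 30-second epoch.
epochs = [0, 0, 2, 35, 80, 5, 0, 0, 0, 1]
print(score_sleep_wake(epochs))
```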
We proposed the adoption of a publicly available large dataset (MESA Sleep), which is at least one order of magnitude larger than any other dataset, to systematically compare existing methods for the detection of sleep-wake stages. We also implemented and compared state-of-the-art methods to score sleep-wake stages, ranging from the widely used traditional formulas to the most recent machine learning approaches. This work was published in the Nature Digital Medicine journal, and its code is freely available on GitHub.
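The comparison itself then boils down to scoring every epoch with each method and computing per-subject agreement against the PSG labels. Below is a minimal, hedged sketch of that evaluation loop; the MESA file parsing and feature engineering are omitted, and the logistic regression is just one illustrative stand-in for the machine learning side.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score

def windowed_features(counts, half_window=5):
    """Use the activity counts of the surrounding epochs as features."""
    padded = np.pad(np.asarray(counts, dtype=float), half_window)
    return np.stack([padded[i:i + 2 * half_window + 1]
                     for i in range(len(counts))])

def make_ml_scorer(train_counts, train_labels):
    """A simple ML baseline: logistic regression over windowed counts."""
    model = LogisticRegression(max_iter=1000)
    model.fit(windowed_features(train_counts), train_labels)
    return lambda counts: model.predict(windowed_features(counts))

def evaluate(scorer, subjects):
    """Per-subject agreement between any scorer (a function mapping
    activity counts to 0/1 sleep-wake predictions, be it a traditional
    formula or a trained model) and the PSG-derived labels.
    `subjects` is a list of (activity_counts, psg_labels) pairs."""
    results = []
    for counts, labels in subjects:
        pred = scorer(counts)
        results.append({"accuracy": accuracy_score(labels, pred),
                        "kappa": cohen_kappa_score(labels, pred)})
    return results
```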
[2019-NOW] AI for Good: Using Facebook Ads for a Good Cause
At QCRI, I proudly worked with Ingmar Weber in the Social Computing group on methods to use the infamous Facebook Ads platform for good causes. In particular, we proposed to use Facebook's advertising platform as an additional data source for monitoring the Venezuelan crisis.
We estimated and validated national and sub-national numbers of refugees and migrants and broke down their socio-economic profiles to further understand the complexity of the crisis. Although limitations exist, we believe that our methodology can be of value for real-time assessment of refugee and migrant crises worldwide.
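At its core, the estimation step is simple arithmetic on the audience sizes that Facebook's advertising platform reports for different targeting criteria. Here is a toy sketch of that step with made-up numbers; the actual work pulls these counts from the Marketing API and applies corrections for Facebook penetration and bias that are omitted here.

```python
# Toy illustration of the estimation step: given Facebook's reported
# audience sizes (made-up numbers below), derive the share of Venezuelan
# migrants per region and scale it by the region's census population.
audience = {
    # region: (monthly active users, users targeted as Venezuelan expats)
    "Region A": (6_500_000, 310_000),
    "Region B": (4_200_000, 95_000),
    "Region C": (900_000, 120_000),
}
census_population = {
    "Region A": 7_400_000,
    "Region B": 6_600_000,
    "Region C": 1_400_000,
}

for region, (total_users, venezuelan_users) in audience.items():
    share = venezuelan_users / total_users
    estimate = share * census_population[region]
    print(f"{region}: {share:.1%} of users -> ~{estimate:,.0f} migrants")
```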
This work resulted in a paper published in the PLOS ONE journal, which you can check out here.
[2014-2019] Python TRECTOOLs
I have been working on this set of tools for information retrieval evaluation in Python 3. The goal of this software is to provide an interface to common and often repetitive tasks, such as analyzing runs, running an IR framework like Indri or Terrier with different baselines, evaluating runs, fusing ranked lists to create a more robust run, and, finally, analyzing the results. We also support collection creators by providing the groundwork for tasks such as document pool creation with different strategies (as explored by Lipani et al. here and here).
The project is under constant development, and we recently published a SIGIR demo paper about it. If you are interested, please install its latest version with pip or check out the code on GitHub.
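For a taste of the workflow, here is a rough sketch written from memory; the exact class and method names may have drifted over time, so treat the README on GitHub as the source of truth.

```python
# Rough sketch of a typical trectools session (API names from memory;
# check the GitHub README for the current interface).
from trectools import TrecRun, TrecQrel, TrecEval, fusion

run_a = TrecRun("runs/bm25.txt")             # a run file in TREC format
run_b = TrecRun("runs/language_model.txt")
qrels = TrecQrel("qrels/qrels.txt")          # relevance judgments

# Evaluate a single run.
print(TrecEval(run_a, qrels).get_map())

# Fuse two ranked lists into a (hopefully) more robust run.
fused = fusion.reciprocal_rank_fusion([run_a, run_b])
print(TrecEval(fused, qrels).get_map())
```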
[2017] Teaching at Carnegie Mellon University Qatar
I was delighted by the challenge of teaching at CMU-Qatar.
In the Spring semester of 2017, I taught Information Retrieval there (course 67-300 – Search Engines). Slides are publicly available on GitHub. Overall an amazing experience!
[2014-2017] CLEF eHealth
Together with Dr. Guido Zuccon, I led the Information Retrieval task at CLEF eHealth from 2014 to 2017.
In 2014, we ran an ad-hoc information retrieval task focused on supporting laypeople in searching for and understanding their health information. The challenge mimicked patients querying for some key disease that appeared in their discharge summary. The document collection was provided by the Khresmoi project and contained more than one million Web pages covering a broad range of health topics, targeted at both the general public and healthcare professionals.
In 2015, we changed the task to focus on symptoms rather than on diseases. We mimicked queries of laypeople attempting to find out more about signs, symptoms, or conditions they may be experiencing. Related research has indicated that current web search engines fail to effectively support these queries (Zuccon et al., Staton et al.). Another innovation introduced in the 2015 task was document understandability: we asked the recruited medical assessors to judge, apart from the document's relevance, whether they would recommend it to their patient, taking into consideration how difficult that document is to read. To the best of our knowledge, this was the first time that document readability was assessed in an IR task.
In 2016 and 2017, we moved the challenge a step forward with a much larger and messier collection, ClueWeb12 B13, which is more representative of the current state of health information online. Topic stories and associated queries were created by mining health Web forums, such as Reddit's AskDocs, to identify real consumer information needs.
See the latest developments on the CLEF main Website and on our CLEF eHealth GitHub page.
[2014-2015] TREC
In the TREC Clinical Decision Support track of 2014/2015 (TREC-CDS), I participated representing the Vienna University of Technology (TU Wien). The focus of this task was on providing material to support physicians when making decisions regarding diagnosis, medical tests, and treatments. The document collection was made of full-text documents from PubMed Central.
Similarly to the IR task of CLEF eHealth, the main subject of TREC-CDS was also medical information retrieval. However, there were many significant differences between TREC-CDS and CLEF eHealth. For example, readability/understandability was not a concern in the context of TREC-CDS, as IR systems were devised to be used by medical experts instead of patients or the general public. Nevertheless, all domain-specific medical resources, such as MetaMap annotations, ICD-9/10, MeSH, or UMLS, could (and should) be used in this task as well.
In 2015, our query expansion method took second place (out of 30). Please check it out here.
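To illustrate the general idea (and not the exact method from our paper), query expansion with a medical thesaurus can be as simple as appending the synonyms that a resource such as MeSH or UMLS lists for each concept found in the query. The tiny synonym table below is hard-coded purely for illustration.

```python
# Toy query expansion with a medical synonym table. A real system would
# look these up in MeSH/UMLS (e.g., via MetaMap) instead of hard-coding.
SYNONYMS = {
    "heart attack": ["myocardial infarction", "mi"],
    "high blood pressure": ["hypertension"],
    "kidney stones": ["renal calculi", "nephrolithiasis"],
}

def expand_query(query: str) -> str:
    expanded = [query]
    lowered = query.lower()
    for term, synonyms in SYNONYMS.items():
        if term in lowered:
            expanded.extend(synonyms)
    return " ".join(expanded)

print(expand_query("58-year-old male with heart attack symptoms"))
# -> "58-year-old male with heart attack symptoms myocardial infarction mi"
```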
[2014-2015] MediaEval
In 2014 and 2015, I participated in the MediaEval benchmarking challenge on retrieving diverse social images. The task consisted of re-ranking an initial Flickr result list to increase the diversity among its top results. In this context, a diverse list of images is one that shows different perspectives of a specific point of interest, for example, the Notre Dame cathedral from the inside, from the outside, from far away, and so on. We proposed an ensemble of clustering approaches that worked relatively well (3rd place).
In 2015, we got first place by incorporating more features and methods, including a deep learning approach. We also explored additional ways to combine visual and textual features.
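To make the cluster-based re-ranking concrete, here is a minimal sketch of one common recipe: cluster the image feature vectors and then pick results round-robin from the clusters, so that the top of the list covers different "views" of the point of interest. Our actual submissions combined several such clusterings with extra visual and textual features, which this sketch leaves out.

```python
import numpy as np
from sklearn.cluster import KMeans

def diversify(feature_vectors, initial_ranking, n_clusters=5):
    """Re-rank an initial result list by clustering the images and
    interleaving the clusters round-robin, so that different views of
    the point of interest surface early in the list."""
    features = np.asarray([feature_vectors[i] for i in initial_ranking])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)

    # Within each cluster, keep the original (relevance) order.
    buckets = {}
    for rank, image_id in enumerate(initial_ranking):
        buckets.setdefault(labels[rank], []).append(image_id)

    # Round-robin over the clusters until every bucket is empty.
    reranked = []
    while any(buckets.values()):
        for cluster in sorted(buckets):
            if buckets[cluster]:
                reranked.append(buckets[cluster].pop(0))
    return reranked
```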
[2014] Readability
One of the subjects that caught my interest and became part of my PhD is the readability/understandability of textual documents. The challenge here is matching the best possible material to a person depending on their reading skill. Historically, many traditional metrics have been used to measure how hard a text might be to read. These approaches are mostly based on surface features, such as the length of words and sentences. I implemented these readability formulas in an open-source Python package called ReadabilityCalculator. We also showed in this paper some caveats of and guidelines for using readability formulas on Web pages.
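For example, the classic Flesch Reading Ease score is computed from nothing more than the average sentence length and the average number of syllables per word. The syllable counter below is a rough heuristic, but it is enough to show the shape of these formulas.

```python
import re

def count_syllables(word: str) -> int:
    """Very rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores mean easier text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(flesch_reading_ease("The cat sat on the mat. It was happy."))
```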
If you are interested in learning more about readability/understandability, I highly recommend this link, which covers much of the literature before deep learning.
Check out the readability-calculator source code.
[2013-2014] Kaggle
In 2013-2014 I had a great time with Kaggle. As soon as I got to know the website, I became addicted to it and started to participate in as many competitions as my schedule allowed. However, as everybody knows, time is a limited resource… Thus, I am much less active now. Nevertheless, if there is an interesting competition going on and you want to team up with me, please let me know! 🙂
See more: my Kaggle profile