Where are we going? ML hype in BioMedical research

I haven’t written in a little while and was planning to write about something else. But the recent bombardment of news, interviews, and blog posts about the future of ML and AI and their role in biomedical research led to a growing sense of discomfort, which in turn led to this blog post. I definitely have much to learn myself about all of this – developing ML algorithms for biomedical problems is a continuous and humbling learning process. Still, based on my experience I felt the following points should be voiced and might spur a discussion.

I’ll try to keep it brief, so here goes:

  1. I don’t think there is a question about the huge untapped potential in ML/AI in biomedical research. As the old saying goes, we have only seen the tip of the iceberg.
  2. There is huge hype around ML and specifically deep learning (DL). The ML community, like any (scientific) community, gets excited about new things, whether out of fashion or because of genuine promise for progress on difficult problems on which it was stuck.
  3. The boom in deep learning can be attributed to some algorithmic progress (even more so now, with all the interest it has gained), but mostly to the combination of big data availability (to train on) and computing power (harnessing GPUs etc.).
  4. In the field of bio-medical research, a similar explosive growth is already underway for similar reasons: bio-medical records across the globe are becoming connected and amenable to searches and to training models, while personalized genomic/genetic data is becoming cheaper. There are many dangers/issues involved (privacy, data accessibility, common ontologies etc.), but there is clearly great promise. Which brings me to:
  5. We don’t really know where the ML/DL boom will end, in terms of new capabilities, or when. This makes it all the more exciting. Suddenly, it has become fashionable (or maybe “legitimate”) for serious ML researchers to discuss at length the essence of intelligence, creativity etc. However,
  6. Prominent ML researchers have already pointed out some of the deficiencies or limitations of current DL technology. These include an extremely slow learning rate, which requires vast amounts of data and/or computing power to store/process/analyze [1]. Think how many games of Go the computer had to play, or how many millions of images a DL algorithm needs for training to identify a single concept (e.g. cats). Another issue is that advances were mostly achieved on problems we already understand relatively well, where that understanding is translated into a representation of basic “features” which are then fed into the DL algorithm. In genomics, prominent groups used DL to achieve new heights in tasks such as predicting hypersensitive sites or transcription factor binding sites. These are good examples of where we stand: in these works researchers used convolutional networks which, at the end of the day, are like scanning PSSM motifs across a large set of labeled sequences (see the sketch after this list). But now they can scan X motifs in parallel, each spanning Y positions, with many smart tweaks to the learning process such as ReLU units, dropout etc. X and Y in this example are hyperparameters which, like the entire set of PSSMs, are blasted through GPUs to optimize performance. The end result is improvement on those prediction tasks, but I would say these results (a) are built from the same building blocks as before and (b) are not a “revolution” by themselves. The domain plays a role here: if you can predict 2% better which ad to display, that’s already huge money for Google. In genomics, pushing up the ROC of a PSSM-based predictor is great, but if we still cannot make it accurate enough for medical applications or translate it into novel biology, then the impact is limited.
  7. In general, we as CS people tend to have a “problem solver” attitude. Think about it: it’s very common to get from a CS person some version of “Tell me what your problem is and I’ll tell you how to solve it”. It may be phrased differently, but it will boil down to that. Which is great: I think it reflects the usefulness of CS and our pragmatic, interdisciplinary approach (borrowing from engineering, math, physics – anything to get the job done). It also contributes to the relevance of CS to much of modern life and research. However, this attitude can also come across as arrogant, ignorant, or naive. I’ll give three examples:
    1. I attended a talk by a CS guy at our medical school who advocates adopting functional programming (Scala) and big data tools from industry (e.g. Spark, Kafka) for genomics. While his intentions were good, he showed a promotional video (probably meant to recruit CS students) that came across as “come cure cancer with us”. Some people got really upset by this naive representation of the complex problems involved. Some just left. I think the issues with this example are pretty clear, so I’ll move on to the next.
    2. I recently read an interview with a prominent ML researcher. The researcher explained that humans are pretty bad at understanding bio-medical/genomics problems and therefore we need “super human intelligence”. So where is the problem with this? I am the first to agree humans are not good at looking at many numbers and identifying patterns – that’s why we use ML. But “super human intelligence” is a very murky term (for example, see the excellent discussion in Neil Lawrence’s recent post [2] and also this post by Luciano Floridi about the future of AI [3]). Especially with respect to DL, this approach takes away from us, humans, both the role of defining the input features and that of understanding the output. This issue becomes evident in the following example.
    3. I was talking recently with a smart, capable, young researcher who visited Penn. He told me how he saw “great promise” in the recent DL work in genomics, since it may be possible to just “put it all into the network and not worry about the feature definition”. Unfortunately for us in CS/ML, ML expert knowledge ≠ domain expert knowledge. We need the latter to guide the former, and in many cases we also want the ML (results, model) to expand/deepen our expert understanding. Well, at least until we get some real “super human intelligence” around here…. 😉 More seriously though, to really make a dent in biomedical research one needs to invest in gaining expert domain knowledge and work closely with domain experts. For that you need good collaborators/mentors/trainees, a good environment, and a good mindset (maybe more on that in a future post).
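
To make the PSSM analogy from point 6 concrete, here is a minimal sketch in plain numpy. It is a toy illustration, not code from any of the published models – the motif, the scores, and the sequence are all made up – but it shows why sliding a PSSM across a one-hot encoded DNA sequence is exactly what a single 1D convolutional filter computes; a DL model simply learns many such filters in parallel.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (4, L) one-hot matrix (rows follow BASES order)."""
    idx = {b: i for i, b in enumerate(BASES)}
    x = np.zeros((4, len(seq)))
    for j, b in enumerate(seq):
        x[idx[b], j] = 1.0
    return x

def scan(x, pssm):
    """Slide a (4, W) PSSM (or conv filter) across a (4, L) one-hot sequence.
    Returns one score per window -- exactly what a 'valid' 1D convolution computes."""
    W = pssm.shape[1]
    L = x.shape[1]
    return np.array([np.sum(x[:, j:j + W] * pssm) for j in range(L - W + 1)])

# Toy PSSM for the motif "TATA" (made-up log-odds-like scores, purely illustrative).
pssm = np.array([
    #  pos1  pos2  pos3  pos4
    [-1.0,  2.0, -1.0,  2.0],   # A
    [-1.0, -1.0, -1.0, -1.0],   # C
    [-1.0, -1.0, -1.0, -1.0],   # G
    [ 2.0, -1.0,  2.0, -1.0],   # T
])

seq = "GGCTATAAGC"
scores = scan(one_hot(seq), pssm)
print(scores.argmax(), scores.max())  # best window starts at position 3, where "TATA" sits

# A convolutional layer in the genomics models mentioned above performs this same scan
# with X learned filters of width Y (the hyperparameters from point 6), followed by a
# nonlinearity such as ReLU: np.maximum(scores, 0).
```

The point is not that these models are trivial, but that the basic building block is the familiar motif scan, now learned from data and parallelized at scale.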

In summary, I think statements like the ones illustrated above may cause all of us, researchers engaged in ML for bio-medical research, more harm (in how we and our field are perceived by other researchers) than good (getting mentioned in the media or maybe even landing a grant). The field is exciting and promising as it is, so let’s just be a bit more humble and focus on getting a few more things done….

 

[1] http://inverseprobability.com/2016/03/04/deep-learning-and-uncertainty

[2] http://inverseprobability.com/2016/05/09/machine-learning-futures-6

[3] https://aeon.co/essays/true-ai-is-both-logically-possible-and-utterly-implausible

3 thoughts on “Where are we going? ML hype in BioMedical research”

  1. Excellent write-up! As we have started to dip our toes into machine learning in our lab, I feel like there is indeed a lot of promise – and extreme levels of hype. It is, I think, hard to separate algorithmic progress from the general increase in data and computational power, but without the former, I think it will be hard to achieve the “superintelligence” required for something truly transformative to appear. That said, for the practical matter of image analysis and segmentation, I think that machine learning is really going to change things for us in the next year or two…


    • You know, it’s interesting, counting and localizing the RNA FISH spots themselves is a (relatively) easy problem to solve because the spots are so stereotypical. The real challenge for us is image segmentation, which is a major bottleneck for our analyses, especially as we expand to large numbers of cells.

