Notes on Eric Jang's "How to Understand ML Papers Quickly"

I recently came across “How to Understand ML Papers Quickly” by Eric Jang (VP @ 1X, ex-Google) while learning about generative AI. Since the post was written in 2021, I wonder which of these points would need updating given the current state of the field.

Here is my summary of the already-pretty-concise original post:

  • Determine the inputs & outputs of the ML problem & whether these inputs can even produce these outputs
  • “ML models are formed from combining biases and data,” so figure out which is being added or subtracted
  • Determine to what extent the model can generalize & whether it requires learning (as opposed to hard-coding) to get there
  • Make sure the claims can be falsified (is it science?)
Thomas Lodato @deptofthomas