Fascinating new paper from Jin and Rinard at MIT, presenting evidence that models may develop semantic understanding despite being trained purely to predict the next token: “We present evidence that language models can learn meaning despite being trained only to perform next token prediction on text, specifically a corpus of programs.”
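For readers unfamiliar with the objective the quote refers to, here's a minimal sketch of next-token prediction over a tiny "corpus of programs". Everything here is an illustrative assumption, not the paper's actual setup: the corpus string, the character-level tokenization, and the toy embedding-plus-linear model (the authors train a real Transformer) are all stand-ins meant only to show the shape of the training signal.

```python
# Illustrative only: a toy next-token-prediction setup, NOT the paper's model or data.
import torch
import torch.nn as nn

# Hypothetical miniature "corpus of programs" (stand-in for the paper's dataset).
corpus = "move(); turnLeft(); if (frontIsClear()) { move(); }"

# Character-level vocabulary, for simplicity.
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
ids = torch.tensor([stoi[c] for c in corpus])

# Inputs are tokens 0..n-2; targets are the same sequence shifted by one.
# The model is trained on nothing but "predict the next token".
x, y = ids[:-1], ids[1:]

# A tiny embedding + linear head stands in for the Transformer
# (it only sees the current token, so it's effectively a bigram model).
vocab = len(chars)
model = nn.Sequential(nn.Embedding(vocab, 32), nn.Linear(32, vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(x)          # (seq_len, vocab) scores for each next token
    loss = loss_fn(logits, y)  # cross-entropy against the shifted targets
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the paper is that this objective never mentions what the programs *do*, yet the trained model's internal states can still end up tracking their semantics.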