That article gave me whiplash. First part: pretty cool. Second part: deeply questionable.
For example, these two paragraphs from the sections "problem with code" and "magic of data":
"Modular and interpretable code" sounds great until you are staring at 100 modules with 100,000 lines of code each and someone is asking you to interpret it.
Regardless of how complicated your programās behavior is, if you write it as a neural network, the program remains interpretable. To know what your neural network actually does, just read the dataset
Well, "just read the dataset, bro" sounds great until you are staring at a dataset with 100,000 examples and someone is asking you to interpret it.
Is that a screenshot from Command & Conquer 4?