breaking models

Models break when the distribution of their inputs changes. This happens when the environment a model is deployed into differs so much from the data it was trained on that its assumptions no longer hold and it doesn't know what to do. In that sense, it might be fair to call a model nothing more than a set of statistical assumptions about the world for a particular task.
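
To make this concrete, here is a minimal sketch (using NumPy and scikit-learn; the sin-curve relationship and the toy data are invented for illustration) of a model whose assumption holds under the training distribution but breaks as soon as the inputs drift to a region it never saw:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# The true relationship is y = sin(x) everywhere.
def true_fn(x):
    return np.sin(x)

# Training inputs cluster near 0, where sin(x) is roughly linear,
# so a linear model fits well -- but only under that input distribution.
X_train = rng.uniform(-1.0, 1.0, size=(500, 1))
y_train = true_fn(X_train).ravel()

model = LinearRegression().fit(X_train, y_train)

# In-distribution inputs vs. deployment inputs drawn from a new region.
X_in = rng.uniform(-1.0, 1.0, size=(500, 1))
X_out = rng.uniform(3.0, 5.0, size=(500, 1))

def mse(X):
    return np.mean((model.predict(X) - true_fn(X).ravel()) ** 2)

print(f"in-distribution MSE: {mse(X_in):.4f}")   # small
print(f"shifted MSE:         {mse(X_out):.4f}")  # large
```

Nothing about the model changed between the two evaluations; only the input distribution moved, and the assumption baked in at training time (local linearity) stopped describing the world.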

One possible solution is to build into such a fragile model a causal understanding of the world and a more flexible learning timeline. However, the way we currently teach a model to do anything is very simple and supervised: we show the model one labeled observation at a time, as sketched below.
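
For a sense of what that training regime looks like, here is a toy supervised loop (pure NumPy; the linear dataset and learning rate are made up for illustration) that learns its parameters entirely by seeing one labeled observation at a time:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: inputs x, labels y = 2x + 1 plus a little noise.
X = rng.uniform(-1.0, 1.0, size=1000)
Y = 2.0 * X + 1.0 + rng.normal(scale=0.1, size=1000)

# Two parameters of a linear model, learned purely from (x, y) pairs.
w, b = 0.0, 0.0
lr = 0.1

# The whole of "teaching": show one labeled observation, nudge the
# parameters to reduce the error on that observation, repeat.
for x, y in zip(X, Y):
    pred = w * x + b
    err = pred - y
    w -= lr * err * x   # gradient of squared error w.r.t. w
    b -= lr * err       # gradient of squared error w.r.t. b

print(f"learned w={w:.2f}, b={b:.2f}")  # close to 2 and 1
```

Everything the model "knows" is whatever regularities survive this loop; there is no notion of cause and effect, and no mechanism for updating once the stream of observations stops.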
