Double Descent

Introduction

A recent blog article introduced me to the idea of double descent. This is the phenomenon by which the test-set error of a (presumably gradient descent-based) machine-learning method first decreases as the number of parameters increases, then rises again (the familiar overfitting regime), but then decreases once more as the number of parameters grows still further. I do not want to comment on the specifics of that article as they pertain to machine learning, but rather on one sub-aspect of it.
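The phenomenon can be reproduced in a toy setting. The following is a minimal sketch (not the setup from the blog article, which is not specified here): a minimum-norm least-squares fit with random Fourier features, where the number of features plays the role of the parameter count. Plotting the recorded test errors against the width typically shows the error falling, spiking near the interpolation threshold (where the number of parameters equals the number of training points), and falling again beyond it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: noisy samples of a smooth target function.
n_train, n_test = 20, 200
x_train = rng.uniform(-1, 1, n_train)
x_test = rng.uniform(-1, 1, n_test)

def target(x):
    return np.sin(2 * np.pi * x)

y_train = target(x_train) + 0.1 * rng.standard_normal(n_train)
y_test = target(x_test)

# A fixed pool of random frequencies; using the first p of them
# gives a model with p parameters.
freqs = rng.uniform(0, 10, 100)

def features(x, p):
    # Random Fourier feature map with p columns.
    return np.cos(np.outer(x, freqs[:p]))

widths = range(1, 101)
test_errors = []
for p in widths:
    Phi_train = features(x_train, p)
    Phi_test = features(x_test, p)
    # pinv gives the minimum-norm least-squares solution; once
    # p > n_train this interpolates the training data exactly.
    w = np.linalg.pinv(Phi_train) @ y_train
    test_errors.append(np.mean((Phi_test @ w - y_test) ** 2))
```

The choice of the minimum-norm solution matters: among the many interpolating fits available in the overparameterized regime, it is the implicit bias toward small norm that produces the second descent.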

Understanding P-Values

I am currently reading the book Statistics Done Wrong: The Woefully Complete Guide by Alex Reinhart (No Starch Press, 2015). It contains a passage that goes like this:

"A 2002 study found that an overwhelming majority of statistics students—and instructors—failed a simple quiz about p values. Try the quiz (slightly adapted for this book) for yourself to see how well you understand what p really means. Suppose you’re testing two medications, Fixitol and Solvix."
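What a p value actually measures is easy to state operationally: the probability, assuming the null hypothesis is true, of seeing data at least as extreme as what was observed. A permutation test makes this definition concrete. The sketch below uses made-up recovery times for the two hypothetical drugs (the numbers are mine, not from the book or the quiz):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recovery times in days; illustrative numbers only.
fixitol = np.array([8.2, 7.9, 9.1, 8.5, 7.6, 8.8])
solvix = np.array([7.1, 7.8, 6.9, 7.4, 8.0, 7.2])

observed = fixitol.mean() - solvix.mean()

# Under the null hypothesis the drug labels are interchangeable,
# so every relabeling of the pooled data is equally likely.
pooled = np.concatenate([fixitol, solvix])
n = len(fixitol)
n_perm = 10_000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:n].mean() - perm[n:].mean()
    if abs(diff) >= abs(observed):
        count += 1

# Fraction of relabelings at least as extreme as the observed split:
# an estimate of P(data this extreme | null), which is all p tells you.
p_value = count / n_perm
```

Note what the code does not compute: the probability that the null hypothesis is true given the data. Conflating those two conditional probabilities is exactly the error the quiz is designed to expose.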