The bias-variance trade-off is the notion that, for a given learning procedure, the stronger its initial assumptions (i.e., its inductive bias) about the patterns to be learned, the less data it needs to learn them.
As a corollary, a learning procedure with weak inductive bias can master a wider range of patterns (greater variance), but will generally be less sample-efficient.
#wip Neural networks are extremely low-bias learning systems, according to Botvinick et al. (2019).
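To make the trade-off concrete, here is a minimal, self-contained sketch (NumPy only; the data, model choices, and names are illustrative assumptions, not drawn from Botvinick et al. 2019). A high-bias learner that assumes the target is linear and a low-bias degree-9 polynomial are both fit by least squares on the same small samples of a genuinely linear target; the high-bias learner reaches low test error with far less data, while the low-bias learner needs many more samples before it stops fitting noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    """Noisy samples from a target that really is linear: y = 2x + 1."""
    x = rng.uniform(-1, 1, n)
    y = 2 * x + 1 + rng.normal(0, 0.1, n)
    return x, y

def fit_poly(x, y, degree):
    """Least-squares polynomial fit; returns a prediction function."""
    coeffs = np.polyfit(x, y, degree)
    return lambda x_new: np.polyval(coeffs, x_new)

# Held-out data for measuring generalization error.
x_test, y_test = make_data(1000)

for n_train in (10, 100, 1000):
    x_train, y_train = make_data(n_train)
    for degree, label in ((1, "high-bias (linear)"), (9, "low-bias (degree 9)")):
        predict = fit_poly(x_train, y_train, degree)
        mse = np.mean((predict(x_test) - y_test) ** 2)
        print(f"n={n_train:4d}  {label:22s}  test MSE = {mse:.3f}")
```

The linear model's assumption happens to match the target, so it generalizes well even from 10 samples; the degree-9 polynomial could also represent many non-linear targets (its variance), but it pays for that flexibility by overfitting the small samples and only catching up as the training set grows.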