Orthogonalization in Machine Learning

Original Source: https://www.coursera.org/specializations/deep-learning

In his Coursera course 'Deep Learning', Andrew Ng says it is important to orthogonalize machine learning systems.

Orthogonalization is a system design property which ensures that modifying an instruction or a component of an algorithm does not create or propagate side effects to other components of the system.

Orthogonalization makes it easier to verify the algorithms independently of one another, which reduces testing and development time.

Orthogonalization in Deep Learning

There are three consecutive objectives to accomplish when building a machine learning system.

First, we should fit the training set well. If our model does not fit the training set, we should train a bigger neural network or switch to a better optimization algorithm, as sketched below.
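As a minimal sketch of those two knobs, assuming PyTorch (the lecture does not prescribe a framework) and hypothetical layer sizes, model capacity and the choice of optimizer can be turned independently:

```python
# A minimal sketch, assuming PyTorch; the sizes below are hypothetical.
import torch
import torch.nn as nn

def build_model(in_dim=784, hidden=256, depth=3, out_dim=10):
    """More width/depth gives more capacity to fit the training set."""
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

# Knob 1: a bigger network (more hidden units, more layers).
model = build_model(hidden=512, depth=4)

# Knob 2: a better optimization algorithm (e.g., plain SGD -> Adam).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
```

Each knob targets training-set error without touching the knobs used for the later objectives, which is the point of orthogonalization.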

Second, we should fit the development set well. If our model fits the training set but not the development set, we should apply stronger regularization or gather more training examples (see the sketch below). We should not use early stopping to reduce development set error: early stopping also affects how well we fit the training set, so a single knob controls two objectives at once and violates orthogonalization.
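As a minimal sketch, again assuming PyTorch with hypothetical rates, two common regularization knobs are L2 weight decay and dropout:

```python
# A minimal sketch, assuming PyTorch; the rates below are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 512), nn.ReLU(),
    nn.Dropout(p=0.5),   # dropout: randomly zeroes activations during training
    nn.Linear(512, 512), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(512, 10),
)

# weight_decay adds an L2 penalty on the weights; stronger regularization
# shrinks the gap between training error and development error.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```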

Third, we should decrease the test set and real-world application error. To do that, we should use a bigger development set and check that the distribution of the development set matches the test set and real-world cases.
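One way to guarantee that the development and test sets share a distribution is to draw both from the same shuffled pool. A minimal sketch, assuming scikit-learn; the arrays and split ratios are placeholders:

```python
# A minimal sketch, assuming scikit-learn; data and ratios are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))      # placeholder features
y = rng.integers(0, 2, size=10_000)    # placeholder labels

# Carve off the training set first, then split the remainder in half so the
# development and test sets are sampled from the same distribution.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.2, random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)
```

Enlarging the pool held out as X_rest is the "bigger development set" knob; keeping dev and test as halves of the same shuffle keeps their distributions identical.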
