https://www.edx.org/course/machine-learning-with-python-from-linear-models-to
Lecturers: Regina Barzilay, Tommi Jaakkola, Karene Chu
Disclaimer: The following notes are a mix of my own notes, selected transcripts, some useful forum threads and various course material. I do not claim any authorship of these notes, but at the same time any error could well arise from my own interpretation of the material.
Contributions are very welcome. If you spot an error, want to phrase something better (English is not my first language), want to add material, or just have comments, you can clone the repository, make your edits and open a pull request (preferred), or simply open an issue.
(PDF versions may be slightly outdated)
For an implementation of the algorithms in Julia (a relatively recent language that combines the best features of R, Python and Matlab with the efficiency of compiled languages like C or Fortran), see the companion repository, the Beta Machine Learning Toolkit (BetaML). And if you are looking for an introductory book on Julia, have a look at mine.
BetaML currently implements:
- Linear, average and kernel Perceptron (units 1 and 2)
- Feed-forward Neural Networks (unit 3)
- Clustering (k-means, k-medoids and the EM algorithm), recommendation system based on EM (unit 4)
- Decision Trees / Random Forests (mentioned in unit 2)
- Many utility functions to help prepare the data for analysis, sample the data or evaluate the algorithms
NEW 2022: You may be interested in a whole new MOOC on Scientific Programming and Machine Learning with Julia that covers most (but not yet all) of the topics in MITx_6.86x, with a somewhat different approach, prioritising intuition and code implementation.
By unit: