'We are drowning in information and starving for knowledge.' – John Naisbitt
Machine learning / data science is a subfield of artificial intelligence (AI) that enables systems to learn and improve from experience automatically, without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves. A machine learning algorithm learns by building a mathematical / statistical model from the data; this model can then be used for inference and decision making. Machine learning has become an integral part of many modern applications. It is a cross-disciplinary field which combines computer science and math / statistics with domain and business knowledge.
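To make the 'learn from data, then infer' idea concrete, here is a minimal sketch (not part of the lecture material) that builds a model from a small, made-up data set and uses it for prediction; it assumes scikit-learn is installed:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical toy data: living area in square meters -> price in 1000 EUR
X = np.array([[30], [45], [60], [80], [100]])  # feature matrix (one feature)
y = np.array([120, 180, 230, 310, 390])        # target values

# Learning: build a statistical model from the data
model = LinearRegression().fit(X, y)

# Inference: apply the learned model to unseen data
print(model.predict(np.array([[70]])))  # estimated price for a 70 m^2 flat
```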
The lecture 'Applied Machine Learning Fundamentals' gives a general introduction to state-of-the-art machine learning algorithms and their applications. This Readme file provides you with all the necessary information. It is structured as follows:
- Lecture contents
- Exercises
- Exam
- Python code
- Literature and recommended reading
- Bugs and errors
The following topics / algorithms will be covered in the lecture:
- Introduction to machine learning (click here)
  - Motivation and applications
  - Terminology
  - Key challenges in ML: Generalization, feature engineering, model selection, ...
- Mathematical foundations (click here)
  - Refresher: Linear algebra
  - Refresher: Statistics
  - Optimization techniques
    - Unconstrained and constrained optimization
    - Gradient descent
    - Newton's method
- Bayesian decision theory (click here)
  - Bayes optimal classifiers
  - Error minimization vs. risk minimization
  - Multinomial and Gaussian naive Bayes
  - Probability density estimation and maximum likelihood estimation
- Supervised learning
  - Regression (click here)
    - Linear regression
    - Probabilistic regression
    - Basis function regression and regularization
  - More classification algorithms
    - Logistic regression (click here)
    - Deep learning (click here)
      - Perceptrons
      - Multi-layer perceptrons and back-propagation
      - Further network architectures (CNNs, RNNs)
    - k-nearest neighbors (click here)
    - Decision trees and ensemble methods (click here)
- Evaluation of ML models (click here)
  - Out-of-sample testing and cross-validation
  - Confusion matrices
  - Evaluation metrics: Precision, recall, F1 score, ROC, accuracy, RMSE, MAE
  - Model selection: Grid search, random search, early stopping
  - Bias-variance decomposition
- Unsupervised learning
  - Clustering (click here)
    - k-means
    - Hierarchical clustering
    - DBSCAN
  - Principal component analysis (click here)
- Lecture summary and exam preparation
An exercise sheet is provided for each lecture unit. Most of the time, the exercises are a compilation of old exam questions. However, they also include programming tasks and questions that would not be suitable for an exam (due to time constraints).
The solutions will be provided via the Moodle forum after two weeks. It is highly recommended that you solve the exercises on your own; do not wait for the solutions to be uploaded.
Number | Title | Link |
---|---|---|
Sheet 1 | Numeric Optimization Techniques | Download |
Sheet 2 | Decision Theory and Probability Density Estimation | Download |
Sheet 3 | Linear Regression | Download |
Sheet 4 | Logistic Regression | Download |
Sheet 5 | Neural Networks / Deep Learning | Download |
Sheet 6 | k-nearest Neighbors | Download |
Sheet 7 | Decision Trees and Ensemble Methods | Download |
Sheet 8 | Evaluation of Machine Learning Models | Download |
Sheet 9 | Clustering | Download |
Sheet 10 | Principal Component Analysis | Download |
The exam is going to take 60 minutes. The maximum attainable score will be 60 points, so you have one minute per point. Important: Keep your answers short and simple in order not to lose too much valuable time.
The exam questions will be given in German, but you may answer them in either English or German (mixing the languages is also allowed). Please do not translate domain-specific technical terms, in order to avoid confusion. Please answer all questions on the task sheets (you may also write on the blank back sides).
Exam preparation:
- You will not be asked for any derivations; rather, I want to test whether you understand the general concepts.
- Any content not discussed in the lecture will not be part of the exam.
- The exam will contain a mix of multiple-choice questions, short-answer questions, and calculations.
- Make sure you can answer the self-test questions provided for each topic. There won't be sample solutions for those questions!
- Solve the exercises and work through the solutions if necessary!
- Some of the slides give you important hints (upper left corner):
- A slide marked with symbol (1) provides in-depth information which you do not have to know by heart (think of it as additional material for the sake of completeness).
- Symbol (2) indicates very important content. Make sure you understand it!
- Have a look at the old exams to familiarize yourself with the exam format.
- The last lecture slot is reserved for exam preparation and additional questions.
Auxiliary material for the exam:
- Non-programmable pocket calculator
- A two-sided hand-written cheat sheet (you may note down whatever you want). Hand-written means pen and paper (not written on a tablet!)
Exam grading:
Since the lecture 'Applied Machine Learning Fundamentals' is part of a bigger module (Machine Learning Fundamentals, W3WI_DS304), it is not graded individually. Instead, the score you achieve in the exam (at most 60 points) will be added to the points you receive in the second element of the module, the Data Exploration Project in the 4th semester, which is also worth at most 60 points. Your combined performance in both elements determines your final grade.
Please refer to the official DHBW data science module catalogue for further details.
Machine learning algorithms are easier to understand if you see them implemented. Please find Python implementations of some of the algorithms in this repository.
Play around with the hyper-parameters of the algorithms and try different data sets in order to get a better feeling for how the algorithms work. Also, step through the code line by line with a debugger and check what each line does. Please find further instructions in the Readme file there. A small example of this kind of experimentation is sketched below.
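The following sketch (not taken from the repository) illustrates what such experimentation can look like: it varies the hyper-parameter k of a k-nearest-neighbors classifier on a standard toy data set and prints the validation accuracy for each setting. It assumes scikit-learn is installed:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small toy data set and create a train / validation split
X, y = load_iris(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Vary the hyper-parameter k and observe how the validation accuracy changes
for k in (1, 3, 5, 15, 50):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"k = {k:2d} -> validation accuracy: {knn.score(X_val, y_val):.3f}")
```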
You do not need to buy any books for the lecture; most resources are available online.
Please find a curated list below:
Title | Author(s) | View online |
---|---|---|
Deep Learning | Goodfellow et al. (2016) | click here |
The Elements of Statistical Learning | Hastie et al. (2008) | click here |
Gaussian Processes for Machine Learning | Rasmussen/Williams (2006) | click here |
Machine Learning | Mitchell (1997) | click here |
Machine Learning - A Probabilistic Perspective | Murphy (2012) | click here |
Mathematics for Machine Learning | Deisenroth et al. (2019) | click here |
Pattern Recognition and Machine Learning | Bishop (2006) | click here |
Probabilistic Graphical Models | Koller/Friedman (2009) | click here |
Reinforcement Learning - An Introduction | Sutton/Barto (2014) | click here |
Speech and Language Processing | Jurafsky/Martin (2006) | click here |
The Matrix Cookbook | Petersen/Pedersen (2012) | click here |
YouTube resources:
- Machine learning lecture by Andrew Ng, Stanford University
- Support vector machines by Patrick Winston, MIT
- Linear algebra by Gilbert Strang, MIT
- Matrix methods in data analysis, signal processing, and machine learning by Gilbert Strang, MIT
- Gradient descent, how neural networks learn (3Blue1Brown)
- Hidden Markov models
- Viterbi algorithm
Interesting papers:
- Playing Atari with deep reinforcement learning (Mnih et al., 2013)
- Efficient estimation of word representations in vector space (Mikolov et al., 2013)
Help me improve the lecture: please feel free to file an issue if you spot any errors. Thank you very much in advance! Please do not open issues for questions concerning the content; use the Moodle forum or send me an e-mail instead ([email protected]).
Β© 2024 Daniel Wehner, M.Sc.