
FreeComputerBooks.com
Links to Free Computer, Mathematics, Technical Books all over the World
- Title: Foundations of Machine Learning
- Author(s): Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar
- Publisher: The MIT Press; 2nd edition (December 25, 2018); eBook (MIT Open Access Edition)
- Hardcover: 504 pages
- eBook: HTML and PDF (505 pages)
- Language: English
- ISBN-10: 0262039400
- ISBN-13: 978-0262039406
This book is a general introduction to machine learning that can serve as a textbook for graduate students and a reference for researchers. It covers fundamental modern topics in machine learning while providing the theoretical basis and conceptual tools needed for the discussion and justification of algorithms. It also describes several key aspects of the application of these algorithms. The authors aim to present novel theoretical tools and concepts while giving concise proofs even for relatively advanced topics.
It is unique in its focus on the analysis and theory of algorithms. The first four chapters lay the theoretical foundation for what follows; subsequent chapters are mostly self-contained. Topics covered include the Probably Approximately Correct (PAC) learning framework; generalization bounds based on Rademacher complexity and VC-dimension; Support Vector Machines (SVMs); kernel methods; boosting; on-line learning; multi-class classification; ranking; regression; algorithmic stability; dimensionality reduction; learning automata and languages; and reinforcement learning. Each chapter ends with a set of exercises. Appendixes provide additional material including concise probability review.
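To give a flavor of the results developed in those foundational chapters, the standard Rademacher-complexity generalization bound is sketched below. This is an illustrative statement rather than a quotation from the book; the notation (true error R(h), empirical error on the sample, Rademacher complexity of the hypothesis set H, sample size m, confidence parameter δ) follows common usage, and the loss is assumed to take values in [0, 1].

```latex
% Illustrative sketch of a Rademacher-complexity generalization bound
% (assumes amsmath/amssymb; not quoted from the book).
% R(h)            : true (expected) error of hypothesis h
% \widehat{R}_S(h): empirical error of h on a sample S of size m
% \mathfrak{R}_m(H): Rademacher complexity of the hypothesis set H
\[
  R(h) \;\le\; \widehat{R}_S(h) \;+\; 2\,\mathfrak{R}_m(H)
        \;+\; \sqrt{\frac{\log(1/\delta)}{2m}},
  \qquad \text{with probability at least } 1 - \delta \text{ over the draw of } S .
\]
```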
This second edition offers three new chapters, on model selection, maximum entropy models, and conditional entropy models. New material in the appendixes includes a major section on Fenchel duality, expanded coverage of concentration inequalities, and an entirely new entry on information theory. More than half of the exercises are new to this edition.
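As an example of the concentration inequalities treated in the expanded appendix material, Hoeffding's inequality is sketched below. This is the standard statement for independent bounded random variables, given for illustration rather than quoted from the book.

```latex
% Hoeffding's inequality (standard statement, for illustration; assumes amsmath).
% X_1, ..., X_m independent with X_i \in [a_i, b_i], and S_m = X_1 + ... + X_m.
\[
  \Pr\bigl[\, S_m - \mathbb{E}[S_m] \ge t \,\bigr]
  \;\le\; \exp\!\left( \frac{-2t^2}{\sum_{i=1}^{m} (b_i - a_i)^2} \right),
  \qquad t > 0 .
\]
```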
About the Authors - Mehryar Mohri is a Professor of Computer Science at the Courant Institute of Mathematical Sciences, New York University, and a researcher at Google Research. Afshin Rostamizadeh is a Research Scientist at Google Research. Ameet Talwalkar is an Assistant Professor in the Machine Learning Department at Carnegie Mellon University.
- Machine Learning
- Neural Networks and Deep Learning
- Artificial Intelligence
- Data Analysis and Data Mining

- Foundations of Machine Learning (Mehryar Mohri, et al.)
- The Mirror Site (1) - PDF
- The Mirror Site (2) - PDF
- The Hundred-Page Machine Learning Book (Andriy Burkov)
Everything you really need to know in Machine Learning in a hundred pages! This book provides a practical guide for getting started with and executing on ML within a few days, without necessarily knowing much about ML a priori.
- Dive into Deep Learning (Aston Zhang, et al.)
This is an open-source, interactive book provided in a unique form factor that integrates text, mathematics, and code. Drafted entirely in Jupyter notebooks, it now supports the TensorFlow, PyTorch, and Apache MXNet frameworks.
- Understanding Machine Learning: From Theory to Algorithms (Shai Shalev-Shwartz, et al.)
Explains the principles behind the automated learning approach and the considerations underlying its usage. Provides an extensive theoretical account of the fundamental ideas underlying machine learning and the mathematical derivations that transform these principles into practical algorithms.
- Machine Learning Yearning (Andrew Ng)
You will learn how to align on ML strategies in a team setting, as well as how to set up development (dev) sets and test sets. After finishing this book, you will have a deep understanding of how to set technical direction for a machine learning project.
- Reinforcement Learning: An Introduction, Second Edition (Richard S. Sutton, et al.)
It provides a clear and simple account of the key ideas and algorithms of reinforcement learning that is accessible to readers in all the related disciplines. Focuses on core online learning algorithms, with the more mathematical material set off in shaded boxes.
- Probabilistic Machine Learning: An Introduction (Kevin Murphy)
This book is a comprehensive introduction to machine learning that uses probabilistic models and inference as a unifying approach. It is written in an informal, accessible style, complete with pseudo-code for the most important algorithms.