FreeComputerBooks.com
Links to Free Computer, Mathematics, Technical Books all over the World
- Title: Foundation Models for Natural Language Processing: Pre-trained Language Models Integrating Media
- Author(s): Gerhard Paaß, Sven Giesselbach
- Publisher: Springer, 1st edition (May 24, 2023); eBook (Creative Commons Licensed)
- License(s): Creative Commons License (CC)
- Paperback: 454 pages
- eBook: PDF and ePub
- Language: English
- ISBN-10: 3031231899
- ISBN-13: 978-3031231896
This open access book provides a comprehensive overview of the state of the art in research and applications of Foundation Models and is intended for readers familiar with basic Natural Language Processing (NLP) concepts.
About the Authors:
- Dr. Gerhard Paaß is a Lead Scientist at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS). With a background in mathematics, he is a recognized expert in Artificial Intelligence, particularly in the area of Natural Language Processing.
- Natural Language Processing (NLP)
- Computational Linguistics
- Machine Learning
- Information Retrieval (IR) and Search Engines Design/Implementation
- Foundation Models for Natural Language Processing: Pre-trained Language Models Integrating Media
- The Mirror Site (1) - PDF
- Quick Start Guide to Large Language Models: Early Release
  The practical, step-by-step guide to using Large Language Models (LLMs) at scale in projects and products. It clears away common obstacles and provides a guide to working with, integrating, and deploying LLMs to solve practical problems.
- Foundations of Large Language Models (Tong Xiao, et al.)
  This is a book about Large Language Models (LLMs). It focuses on foundational concepts (pre-training, generative models, prompting techniques, and alignment methods) rather than comprehensive coverage of all cutting-edge technologies.
- Large Language Models in Cybersecurity: Threats and Mitigation
  This open access book provides cybersecurity practitioners with the knowledge needed to understand the risks posed by the increased availability of powerful Large Language Models (LLMs) and how those risks can be mitigated.
- Understanding Large Language Models (Jenny Kunz)
  This practical book offers clear, example-rich explanations of how Large Language Models (LLMs) work, how you can interact with them, and how to integrate LLMs into your own applications.
- Speech and Language Processing (Dan Jurafsky, et al.)
  This text takes an empirical approach to the subject, based on applying statistical and other machine-learning algorithms to large corpora. It describes a unified vision of speech and language processing, with emphasis on practical and scientific applications.
- How to Scale Your Model: A Systems View of LLMs on TPUs
  This book takes a practical, systems-oriented approach to the performance side of LLMs: how Tensor Processing Units (TPUs) (and GPUs) work under the hood, how these devices communicate, and how LLMs actually run on real hardware.
- Representation Learning for Natural Language Processing
  This book provides an overview of recent advances in representation learning theory, algorithms, and applications for Natural Language Processing (NLP), ranging from word embeddings to pre-trained language models.
- Linguistics for the Age of AI (Marjorie McShane, et al.)
  This book summarizes an approach to knowledge-rich natural language understanding using language-endowed AI agents. Anyone interested in building cognitive systems that use language should read this book.
- Computational Linguistics: Models, Resources, Applications
  This comprehensive reference work provides an overview of the concepts, methodologies, and applications in computational linguistics and Natural Language Processing (NLP). It will be of interest and practical use to a wide range of linguists.






