Detailed Description
Author: Lin Dagui (林大貴)
List price: 99
Publisher: Tsinghua University Press
Publication date: January 1, 2018
Pages: 519
Binding: Paperback
ISBN: 9787302490739
●Chapter 1  Python Spark Machine Learning and Hadoop Big Data
●1.1  Introduction to Machine Learning
●1.2  Introduction to Spark
●1.3  Spark Data Processing: RDD, DataFrame, and Spark SQL
●1.4  Developing Spark Machine Learning and Big Data Applications with Python
●1.5  Python Spark Machine Learning
●1.6  Introduction to the Spark ML Pipeline Machine Learning Workflow
●1.7  Introduction to Spark 2.0
●1.8  Defining Big Data
●1.9  A Brief Introduction to Hadoop
●1.10  The Hadoop HDFS Distributed File System
●1.11  Introduction to Hadoop MapReduce
●1.12  Conclusion
●Chapter 2  Installing the VirtualBox Virtualization Software
●2.1  Downloading and Installing VirtualBox
●2.2  Configuring the VirtualBox Storage Folder
●2.3  Creating a Virtual Machine in VirtualBox
●2.4  Conclusion
●Chapter 3  Installing the Ubuntu Linux Operating System
●3.1  Installing the Ubuntu Linux Operating System
●Partial table of contents
About the Book
Starting from an accessible explanation of the principles of big data and machine learning, this book covers the fundamental concepts of both fields, including classification, analysis, training, modeling, prediction, machine learning for recommendation engines, binary classification, multiclass classification, regression analysis, and data visualization. It not only incorporates recent big data technologies but also expands the machine learning coverage. To lower the barrier to learning big data technology, the book provides extensive hands-on exercises and detailed walkthroughs of example programs, showing how to install multiple Linux virtual machines on a single Windows computer with VirtualBox, build a Hadoop cluster, and then set up a Spark development environment. The hands-on practice platform described in the book is not limited to a single physical computer: companies and schools with sufficient hardware can follow the same setup procedure to build their platform across multiple physical machines, bringing it closer to a real-world big data and machine learning environment. The book is well suited for beginners learning the fundamentals of big data, and even more so as hands-on teaching material for readers studying big data theory and technology.

About the author: Lin Dagui (林大貴) has worked in the IT industry for many years and has extensive hands-on experience in system design, web development, digital marketing, business intelligence, big data, and machine learning.
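To give a flavor of the kind of Spark-on-Hadoop environment described above, here is a minimal, hedged smoke test in PySpark. The HDFS hostname, port, and file path are illustrative placeholders and are not taken from the book.

```python
# Minimal smoke test for a Spark development environment backed by HDFS.
# The HDFS URL and file path below are illustrative placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("EnvironmentSmokeTest")
         .getOrCreate())

# Read a small text file from HDFS and count its lines; any file will do.
lines = spark.read.text("hdfs://master:9000/user/hduser/test/README.txt")
print("line count:", lines.count())

spark.stop()
```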
Python, Spark, and Hadoop: Unleashing the Power of Big Data and Machine Learning for Real-World Applications

In today's data-driven world, the ability to process, analyze, and extract meaningful insights from vast datasets is no longer a niche skill but a fundamental requirement for success across numerous industries. As the volume and complexity of data continue to explode, traditional computing methods falter, demanding the adoption of robust, scalable, and efficient big data technologies. This is where the combined power of Python, Apache Spark, and Apache Hadoop truly shines. This comprehensive guide delves deep into the practical application of these foundational technologies, empowering you to build and deploy sophisticated machine learning models and big data solutions that tackle real-world challenges.

The journey begins with a solid understanding of the big data ecosystem and the pivotal roles played by Hadoop and Spark. We will demystify the core concepts of distributed computing, explaining how Hadoop's MapReduce framework laid the groundwork for processing massive datasets across clusters of commodity hardware. You'll gain a clear grasp of the Hadoop Distributed File System (HDFS) and its vital function in storing and managing petabytes of data reliably and efficiently. Furthermore, we'll explore the evolution from MapReduce to Spark, highlighting Spark's dramatic performance improvements through its in-memory processing capabilities and its versatile API that supports various workloads, including batch processing, real-time streaming, SQL queries, and graph processing.

Python, with its elegant syntax, extensive libraries, and thriving community, has become the de facto programming language for data science and machine learning. This guide will equip you with the essential Python skills needed to interact seamlessly with Spark and Hadoop. We will cover fundamental Python concepts, data manipulation with libraries like Pandas, and the crucial data structures and algorithms that underpin effective data analysis. You'll learn how to leverage Python's rich ecosystem of machine learning libraries, such as Scikit-learn, TensorFlow, and PyTorch, and understand how to integrate these powerful tools with your big data pipelines.

The heart of this guide lies in its practical, hands-on approach to building and deploying real-world applications. We will guide you through the process of setting up and configuring a Hadoop and Spark environment, whether it's a local development setup or a cluster deployment on cloud platforms like AWS, Azure, or Google Cloud. You'll gain proficiency in writing Spark applications using PySpark, the Python API for Spark, enabling you to harness Spark's distributed processing power for data transformation, feature engineering, and model training on large datasets.

Machine learning is a cornerstone of extracting value from big data. This guide provides a comprehensive exploration of machine learning algorithms, from classic techniques like linear regression, logistic regression, decision trees, and support vector machines to more advanced methods like ensemble learning (random forests, gradient boosting) and deep learning architectures (convolutional and recurrent neural networks). Crucially, we will focus on how to apply these algorithms within a distributed computing framework.
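As a hedged illustration of what applying a classic algorithm in a distributed framework can look like, the sketch below fits a logistic regression classifier with Spark MLlib's DataFrame-based API. The input path, the feature column names (f1, f2, f3), and the label column are assumptions made for the example, not details from the book.

```python
# A sketch of binary classification with the DataFrame-based spark.ml API.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("MLlibSketch").getOrCreate()

# Assume a CSV with numeric feature columns f1..f3 and a 0/1 "label" column.
df = spark.read.csv("hdfs:///data/train.csv", header=True, inferSchema=True)
train, test = df.randomSplit([0.8, 0.2], seed=42)

# Assemble raw columns into a feature vector, then fit the classifier.
assembler = VectorAssembler(inputCols=["f1", "f2", "f3"], outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = Pipeline(stages=[assembler, lr]).fit(train)

# Evaluate on the held-out split (default metric: area under ROC).
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
print("AUC:", auc)

spark.stop()
```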
You'll learn how to train models on distributed datasets using Spark MLlib, Spark's native machine learning library, and how to optimize model performance for large-scale scenarios. This includes understanding concepts like distributed training, hyperparameter tuning in a distributed environment, and model deployment strategies for big data applications.

Beyond individual machine learning algorithms, the guide emphasizes the entire machine learning lifecycle within the context of big data. This encompasses data preprocessing techniques tailored for large datasets, such as handling missing values, feature scaling, encoding categorical variables, and dimensionality reduction. You'll learn how to perform effective feature engineering to create informative features that drive model accuracy, and how to do this efficiently on distributed data. Model evaluation and selection are covered in detail, focusing on metrics relevant to big data problems and strategies for robust model validation. We also address the critical aspects of model deployment, including how to integrate trained models into real-time data processing pipelines and how to monitor their performance in production.

The capabilities of Spark extend far beyond batch processing. This guide will introduce you to Spark Streaming, enabling you to build real-time data processing applications. You'll learn how to ingest data from sources such as Kafka or Kinesis, perform transformations and aggregations on streaming data, and even train and deploy machine learning models that make predictions on incoming data streams. This opens up possibilities for applications like real-time fraud detection systems, dynamic recommendation engines, and live anomaly detection.

Graph processing is another powerful facet of Spark. We will delve into GraphX, Spark's API for graph computation. You'll learn how to represent graph data, run fundamental graph algorithms like PageRank and connected components, and apply these techniques to problems such as social network analysis, recommendation systems, and fraud detection in network structures.

Real-world applications are the ultimate test of these technologies. Throughout the guide, you will encounter case studies and practical examples that demonstrate how Python, Spark, and Hadoop are used to solve pressing business problems across diverse domains, including:
●E-commerce and retail: building personalized recommendation systems, predicting customer churn, optimizing pricing strategies, and analyzing customer behavior.
●Finance: detecting fraudulent transactions in real time, assessing credit risk, algorithmic trading, and analyzing market trends.
●Healthcare: analyzing medical records for disease prediction, identifying patterns in patient data, and developing personalized treatment plans.
●IoT and sensor data: processing and analyzing data from connected devices for predictive maintenance, anomaly detection, and performance monitoring.
●Natural language processing (NLP): sentiment analysis, topic modeling, text summarization, and building intelligent chatbots on large text corpora.

We will also touch upon important considerations for working with big data, such as data governance, security, and performance optimization. You'll learn techniques for tuning Spark jobs, optimizing data storage, and ensuring the scalability and reliability of your big data solutions.
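As a brief, hedged illustration of a few of the tuning levers just mentioned (partitioning, caching, and columnar storage), consider the sketch below. The paths, column names, and partition count are illustrative assumptions, not values from the book.

```python
# A small illustration of common Spark tuning levers:
# explicit partitioning, caching a reused DataFrame, and Parquet output.
# Paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("TuningSketch").getOrCreate()

events = spark.read.json("hdfs:///logs/events.json")

# Repartition by a frequently used key to spread the shuffle more evenly,
# then cache because the DataFrame feeds several downstream queries.
events = events.repartition(200, "user_id").cache()

# Aggregate and write the result as partitioned, columnar Parquet files.
daily = events.groupBy("event_date").count()
daily.write.mode("overwrite").partitionBy("event_date").parquet("hdfs:///out/daily_counts")

spark.stop()
```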
Understanding the nuances of distributed systems, including data partitioning, shuffling, and fault tolerance, is integral to building robust and efficient applications.

This guide is designed for individuals who are passionate about leveraging the power of data to drive innovation. Whether you are a data scientist, a machine learning engineer, a software developer looking to expand your skillset, or a business analyst eager to harness the potential of big data, this book will provide you with the knowledge and practical experience needed to excel in this rapidly evolving field. By mastering the synergy of Python, Spark, and Hadoop, you will be well-equipped to tackle complex data challenges, build intelligent applications, and unlock unprecedented insights from the vast ocean of data that surrounds us.