FOR EVERY NEED THERE IS ALWAYS A BOOK


Practical Big Data Analytics : Hands-on techniques to implement enterprise analytics and machine learning using Hadoop, Spark, NoSQL and R / Nataraj Dasgupta.

Material type: Text
Language: English
Publisher: Birmingham, UK : Packt Publishing
Copyright date: ©2018
Edition: 1st edition
Description: vii, 396 pages : illustrations, graphs ; 23.5 x 19 cm
Content type:
  • text
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9781783554393
DDC classification:
  • 004.6782 DAS
LoC classification:
  • QA76 .9 .B45 D229 2018
Contents:
Too Big or Not Too Big -- What is big data? -- A brief history of data -- Dawn of the information age -- Dr. Alan Turing and modern computing -- The advent of the stored-program computer -- From magnetic devices to SSDs -- Why we are talking about big data now if data has always existed -- Definition of big data -- Building blocks of big data analytics -- Types of Big Data -- Structured -- Unstructured -- Semi-structured -- Sources of big data -- The 4Vs of big data -- When do you know you have a big data problem and where do you start your search for the big data solution? -- Summary -- Big Data Mining for the Masses -- What is big data mining? -- Big data mining in the enterprise -- Building the case for a Big Data strategy -- Implementation life cycle -- Stakeholders of the solution -- Implementing the solution -- Technical elements of the big data platform -- Selection of the hardware stack -- Selection of the software stack -- Summary -- The Analytics Toolkit -- Components of the Analytics Toolkit -- System recommendations -- Installing on a laptop or workstation -- Installing on the cloud -- Installing Hadoop -- Installing Oracle VirtualBox -- Installing CDH in other environments -- Installing Packt Data Science Box -- Installing Spark -- Installing R -- Steps for downloading and installing Microsoft R Open -- Installing RStudio -- Installing Python -- Summary -- Big Data With Hadoop -- The fundamentals of Hadoop -- The fundamental premise of Hadoop -- The core modules of Hadoop -- Hadoop Distributed File System - HDFS -- Data storage process in HDFS -- Hadoop MapReduce -- An intuitive introduction to MapReduce -- A technical understanding of MapReduce -- Block size and number of mappers and reducers -- Hadoop YARN -- Job scheduling in YARN -- Other topics in Hadoop -- Encryption -- User authentication -- Hadoop data storage formats -- New features expected in Hadoop 3 -- The Hadoop ecosystem -- Hands-on with CDH -- WordCount using Hadoop MapReduce -- 
Analyzing oil import prices with Hive -- Joining tables in Hive -- Summary -- Big Data Mining with NoSQL -- Why NoSQL? -- The ACID, BASE, and CAP properties -- ACID and SQL -- The BASE property of NoSQL -- The CAP theorem -- The need for NoSQL technologies -- Google Bigtable -- Amazon Dynamo -- NoSQL databases -- In-memory databases -- Columnar databases -- Document-oriented databases -- Key-value databases -- Graph databases -- Other NoSQL types and summary of other types of databases  -- Analyzing Nobel Laureates data with MongoDB -- JSON format -- Installing and using MongoDB -- Tracking physician payments with real-world data -- Installing kdb+, R, and RStudio -- Installing kdb+ -- Installing R -- Installing RStudio -- The CMS Open Payments Portal -- Downloading the CMS Open Payments data -- Creating the Q application -- Loading the data -- The backend code -- Creating the frontend web portal -- R Shiny platform for developers -- Putting it all together - The CMS Open Payments application -- Applications -- Summary -- Spark for Big Data Analytics -- The advent of Spark -- Limitations of Hadoop -- Overcoming the limitations of Hadoop -- Theoretical concepts in Spark -- Resilient distributed datasets -- Directed acyclic graphs -- SparkContext -- Spark DataFrames -- Actions and transformations -- Spark deployment options -- Spark APIs -- Core components in Spark -- Spark Core -- Spark SQL -- Spark Streaming -- GraphX -- MLlib -- The architecture of Spark -- Spark solutions -- Spark practicals -- Signing up for Databricks Community Edition -- Spark exercise - hands-on with Spark (Databricks) -- Summary -- An Introduction to Machine Learning Concepts -- What is machine learning? 
-- The evolution of machine learning -- Factors that led to the success of machine learning -- Machine learning, statistics, and AI -- Categories of machine learning -- Supervised and unsupervised machine learning -- Supervised machine learning -- Vehicle Mileage, Number Recognition and other examples -- Unsupervised machine learning -- Subdividing supervised machine learning -- Common terminologies in machine learning -- The core concepts in machine learning -- Data management steps in machine learning -- Pre-processing and feature selection techniques -- Centering and scaling -- The near-zero variance function -- Removing correlated variables -- Other common data transformations -- Data sampling -- Data imputation -- The importance of variables -- The train, test splits, and cross-validation concepts -- Splitting the data into train and test sets -- The cross-validation parameter -- Creating the model -- Leveraging multicore processing in the model -- Summary -- Machine Learning Deep Dive -- The bias, variance, and regularization properties -- The gradient descent and VC Dimension theories -- Popular machine learning algorithms -- Regression models -- Association rules -- Confidence -- Support -- Lift -- Decision trees -- The Random forest extension -- Boosting algorithms -- Support vector machines -- The K-Means machine learning technique -- The neural networks related algorithms -- Tutorial - associative rules mining with CMS data -- Downloading the data -- Writing the R code for Apriori -- Shiny (R Code) -- Using custom CSS and fonts for the application -- Running the application -- Summary -- Enterprise Data Science -- Enterprise data science overview -- A roadmap to enterprise analytics success -- Data science solutions in the enterprise -- Enterprise data warehouse and data mining -- Traditional data warehouse systems -- Oracle Exadata, Exalytics, and TimesTen -- HP Vertica -- Teradata -- IBM data warehouse systems (formerly Netezza appliances) -- 
PostgreSQL -- Greenplum -- SAP Hana -- Enterprise and open source NoSQL Databases -- Kdb+ -- MongoDB -- Cassandra -- Neo4j -- Cloud databases -- Amazon Redshift, Redshift Spectrum, and Athena databases -- Google BigQuery and other cloud services -- Azure CosmosDB -- GPU databases -- Brytlyt -- MapD -- Other common databases -- Enterprise data science – machine learning and AI -- The R programming language -- Python -- OpenCV, Caffe, and others -- Spark -- Deep learning -- H2O and Driverless AI -- Datarobot -- Command-line tools -- Apache MADlib -- Machine learning as a service -- Enterprise infrastructure solutions -- Cloud computing -- Virtualization -- Containers – Docker, Kubernetes, and Mesos -- On-premises hardware -- Enterprise Big Data -- Tutorial – using RStudio in the cloud -- Summary -- Closing Thoughts on Big Data -- Corporate big data and data science strategy -- Ethical considerations -- Silicon Valley and data science -- The human factor -- Characteristics of successful projects --
Summary: Big Data analytics refers to the strategies organizations use to collect, organize, and analyze large amounts of data to uncover valuable business insights that cannot be obtained through traditional systems. Crafting an enterprise-scale, cost-efficient Big Data and machine learning solution to uncover insights and value from your organization's data is a challenge. Today, with hundreds of new Big Data systems, machine learning packages, and BI tools, selecting the right combination of technologies is an even greater challenge. This book will help you do that. With the help of this guide, you will be able to bridge the gap between the theoretical world of technology and the practical reality of building corporate Big Data and data science platforms. You will get hands-on exposure to Hadoop and Spark, build machine learning dashboards using R and R Shiny, create web-based apps using NoSQL databases such as MongoDB, and even learn how to write R code for neural networks. By the end of the book, you will have a clear, concrete understanding of what Big Data analytics means, how it drives revenue for organizations, and how you can develop your own Big Data analytics solution using the tools and methods articulated in this book.
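As a brief annotation to this record: the contents note lists the association-rule metrics confidence, support, and lift among the book's "Machine Learning Deep Dive" topics. The following is a minimal illustrative sketch of what those three metrics measure, written in Python for this annotation (the book's own tutorial uses R and the Apriori algorithm); the helper functions and toy baskets are hypothetical, not code from the book.

```python
# Toy illustration of the three association-rule metrics named in the
# contents note: support, confidence, and lift.

def support(transactions, itemset):
    """Fraction of transactions that contain every item in `itemset`."""
    itemset = set(itemset)
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Estimated P(consequent | antecedent): support(A and C) / support(A)."""
    joint = set(antecedent) | set(consequent)
    return support(transactions, joint) / support(transactions, antecedent)

def lift(transactions, antecedent, consequent):
    """Confidence divided by the consequent's baseline support;
    values above 1 mean the antecedent raises the consequent's odds."""
    return (confidence(transactions, antecedent, consequent)
            / support(transactions, consequent))

# Four hypothetical shopping baskets.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

# Rule {bread} -> {milk}: bread and milk co-occur in 2 of 4 baskets.
print(support(transactions, {"bread", "milk"}))       # 0.5
print(confidence(transactions, {"bread"}, {"milk"}))  # 2/3
print(lift(transactions, {"bread"}, {"milk"}))        # (2/3)/(3/4) = 8/9
```

A lift below 1, as here, indicates bread buyers are slightly *less* likely than average to buy milk in this toy data.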
Holdings
Item type: Books for in-library consultation
Current library: Biblioteca Antonio Enriquez Savignac
Home library: Biblioteca Antonio Enriquez Savignac
Collection: COLECCIÓN RESERVA
Call number: QA76 .9 .B45 D229 2018
Copy number: Copy 1
Status: Not for loan (internal loan only)
Notes: Ingeniería Logística
Barcode: 043184
Total holds: 0

Includes index.


  • Universidad del Caribe
  • Powered by Koha