diff --git a/pyspark/Colab and PySpark.ipynb b/pyspark/Colab and PySpark.ipynb
index d74f96a..1dd52b4 100644
--- a/pyspark/Colab and PySpark.ipynb
+++ b/pyspark/Colab and PySpark.ipynb
@@ -1,4366 +1,4295 @@
{
- "nbformat": 4,
- "nbformat_minor": 0,
- "metadata": {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "iox_ufgbqDXa"
+ },
+ "source": [
+ "
Introduction to Google Colab and PySpark "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1qV6Grv7qIa9"
+ },
+ "source": [
+ "## Table Of Contents:\n",
+ "\n",
+ "Objective \n",
+ "Prerequisite \n",
+ "Notes from the Author \n",
+ "Big data, PySpark and Colaboratory \n",
+ " \n",
+ " Big data \n",
+ " PySpark \n",
+ " Colaboratory \n",
+ " \n",
+ " \n",
+ "Jupyter Notebook Basics \n",
+ " \n",
+ " Code cells \n",
+ " Text cells \n",
+ " Access to the shell \n",
+ " Installing Spark \n",
+ " \n",
+ " \n",
+ "Exploring the Dataset \n",
+ " \n",
+ " Loading the Dataset \n",
+ " Viewing the Dataframe \n",
+ " Viewing Dataframe Columns \n",
+ " Dataframe Schema \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ "DataFrame Operations on Columns \n",
+ " \n",
+ " Selecting Columns \n",
+ " Selecting Multiple Columns \n",
+ " Adding New Columns \n",
+ " Renaming Columns \n",
+ " Grouping By Columns \n",
+ " Removing Columns \n",
+ " \n",
+ " \n",
+ "DataFrame Operations on Rows \n",
+ " \n",
+ " Filtering Rows \n",
+ " Get Distinct Rows \n",
+ " Sorting Rows \n",
+ " Union Dataframes \n",
+ " \n",
+ " \n",
+ "Common Data Manipulation Functions \n",
+ " \n",
+ " String Functions \n",
+ " Numeric Functions \n",
+ " Operations on Date \n",
+ " \n",
+ " \n",
+ "Joins in PySpark \n",
+ "Spark SQL \n",
+ "RDD \n",
+ "User-Defined Functions (UDF) \n",
+ "Common Questions \n",
+ " \n",
+ " Recommended IDE \n",
+ " Submitting a Spark Job \n",
+ " Creating Dataframes \n",
+ " Drop Duplicates \n",
+ " Fine Tuning a PySpark Job \n",
+ " \n",
+ " \n",
+ " \n",
+ " \n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "FFnYZltvqLgt"
+ },
+ "source": [
+ " \n",
+ "## Objective\n",
+ "The objective of this notebook is to:\n",
+ ">Give a proper understanding about the different PySpark functions available. \n",
+ ">A short introduction to Google Colab, as that is the platform on which this notebook is written on. \n",
+ "\n",
+ "Once you complete this notebook, you should be able to write pyspark programs in an efficent way. The ideal way to use this is by going through the examples given and then trying them on Colab. At the end there are a few hands on questions which you can use to evaluate yourself."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YR1CO3FTqO5h"
+ },
+ "source": [
+ " \n",
+ "## Prerequisite\n",
+ ">Although some theory about pyspark and big data will be given in this notebook, I recommend everyone to read more about it and have a deeper understanding on how the functions get executed and the relevance of big data in the current scenario.\n",
+ "> A good understanding on python will be an added bonus."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bGbIBPLHqVXc"
+ },
+ "source": [
+ " \n",
+ "## Notes from the Author\n",
+ "\n",
+ "This tutorial was made using Google Colab so the code you see here is meant to run on a colab notebook. \n",
+ "It goes through basic [PySpark Functions](https://spark.apache.org/docs/latest/api/python/index.html) and a short introduction on how to use [Colab](https://colab.research.google.com/notebooks/basic_features_overview.ipynb). \n",
+ "If you want to view my colab notebook for this particular tutorial, you can view it [here](https://colab.research.google.com/drive/1G894WS7ltIUTusWWmsCnF_zQhQqZCDOc). The viewing experience and readability is much better there. \n",
+ "If you want to try out things with this notebook as a base, feel free to download it from my repo [here](https://github.com/jacobceles/knowledge-repo/blob/master/pyspark/Colab%20and%20PySpark.ipynb) and then use it with jupyter notebook."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "K_q3Yzc9qYc0"
+ },
+ "source": [
+ " \n",
+ "## Big data, PySpark and Colaboratory"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "5gs9JXCWqb9s"
+ },
+ "source": [
+ " \n",
+ "### Big data"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "q3UkLT6yqebl"
+ },
+ "source": [
+ "Big data usually means data of such huge volume that normal data storage solutions cannot efficently store and process it. In this era, data is being generated at an absurd rate. Data is collected for each movement a person makes. The bulk of big data comes from three primary sources: \n",
+ "\n",
+ " Social data \n",
+ " Machine data \n",
+ " Transactional data \n",
+ " \n",
+ "\n",
+ "Some common examples for the sources of such data include internet searches, facebook posts, doorbell cams, smartwatches, online shopping history etc. Every action creates data, it is just a matter of of there is a way to collect them or not. But what's interesting is that out of all this data collected, not even 5% of it is being used fully. There is a huge demand for big data professionals in the industry. Even though the number of graduates with a specialization in big data are rising, the problem is that they don't have the practical knowledge about big data scenarios, which leads to bad architecutres and inefficent methods of processing data.\n",
+ "\n",
+ ">If you are interested to know more about the landscape and technologies involved, here is [an article](https://hostingtribunal.com/blog/big-data-stats/) which I found really interesting!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NhM3wLG2qhlN"
+ },
+ "source": [
+ " \n",
+ "### PySpark"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VBfC_kvjqjPj"
+ },
+ "source": [
+ "If you are working in the field of big data, you must have definelty heard of spark. If you look at the [Apache Spark](https://spark.apache.org/) website, you will see that it is said to be a `Lightning-fast unified analytics engine`. PySpark is a flavour of Spark used for processing and analysing massive volumes of data. If you are familiar with python and have tried it for huge datasets, you should know that the execution time can get ridiculous. Enter PySpark!\n",
+ "\n",
+ "Imagine your data resides in a distributed manner at different places. If you try brining your data to one point and executing your code there, not only would that be inefficent, but also cause memory issues. Now let's say your code goes to the data rather than the data coming to where your code. This will help avoid unneccesary data movement which will thereby decrease the running time. \n",
+ "\n",
+ "PySpark is the Python API of Spark; which means it can do almost all the things python can. Machine learning(ML) pipelines, exploratory data analysis (at scale), ETLs for data platform, and much more! And all of them in a distributed manner. One of the best parts of pyspark is that if you are already familiar with python, it's really easy to learn.\n",
+ "\n",
+ "Apart from PySpark, there is another language called Scala used for big data processing. Scala is frequently over 10 times faster than *Python*, as it is native for Hadoop as its based on JVM. But PySpark is getting adopted at a fast rate because of the ease of use, easier learning curve and ML capabilities.\n",
+ "\n",
+ "I will briefly explain how a PySpark job works, but I strongly recommend you read more about the [architecture](https://data-flair.training/blogs/how-apache-spark-works/) and how everything works. Now, before I get into it, let me talk about some basic jargons first:\n",
+ "\n",
+ "Cluster is a set of loosely or tightly connected computers that work together so that they can be viewed as a single system.\n",
+ "\n",
+ "Hadoop is an open source, scalable, and fault tolerant framework written in Java. It efficiently processes large volumes of data on a cluster of commodity hardware. Hadoop is not only a storage system but is a platform for large data storage as well as processing.\n",
+ "\n",
+ "HDFS (Hadoop distributed file system). It is one of the world's most reliable storage system. HDFS is a Filesystem of Hadoop designed for storing very large files running on a cluster of commodity hardware.\n",
+ "\n",
+ "MapReduce is a data Processing framework, which has 2 phases - Mapper and Reducer. The map procedure performs filtering and sorting, and the reduce method performs a summary operation. It usually runs on a hadoop cluster.\n",
+ "\n",
+ "Transformation refers to the operations applied on a dataset to create a new dataset. Filter, groupBy and map are the examples of transformations.\n",
+ "\n",
+ "Actions Actions refer to an operation which instructs Spark to perform computation and send the result back to driver. This is an example of action.\n",
+ "\n",
+ "Alright! Now that that's out of the way, let me explain how a spark job runs. In simple terma, each time you submit a pyspark job, the code gets internally converted into a MapReduce program and gets executed in the Java virtual machine. Now one of the thoughts that might be popping in your mind will probably be: `So the code gets converted into a MapReduce program. Wouldn't that mean MapReduce is faster than pySpark?` Well, the answer is a big NO. This is what makes spark jobs special. Spark is capable of handling a massive amount of data at a time, in it's distributed environment. It does this through in-memory processing , which is what makes it almost 100 times faster than Hadoop. Another factor which amkes it fast is Lazy Evaluation . Spark delays its evaluation as much as it can. Each time you submit a job, spark creates an action plan for how to execute the code, and then does nothing. Finally, when you ask for the result(i.e, calls an action), it executes the plan, which is basically all the transofrmations you have mentioned in your code. That's basically the gist of it. \n",
+ "\n",
+ "Now lastly, I want to talk about on more thing. Spark mainly consists of 4 modules:\n",
+ "\n",
+ "\n",
+ " Spark SQL - helps to write spark programs using SQL like queries. \n",
+ " Spark Streaming - is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. used heavily in processing of social media data. \n",
+ " Spark MLLib - is the machine learning component of SPark. It helps train ML models on massive datasets with very high efficeny. \n",
+ " Spark GraphX - is the visualization component of Spark. It enables users to view data both as graphs and as collections without data movement or duplication. \n",
+ " \n",
+ "\n",
+ "Hopefully this image gives a better idea of what I am talking about:\n",
+ " \n",
+ "Source: Datanami \n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1NJMWs4NqnlF"
+ },
+ "source": [
+ " \n",
+ "### Colaboratory"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ynKFk6b7qoWr"
+ },
+ "source": [
+ "In the words of Google: \n",
+ "`Colaboratory, or “Colab” for short, is a product from Google Research. Colab allows anybody to write and execute arbitrary python code through the browser, and is especially well suited to machine learning, data analysis and education. More technically, Colab is a hosted Jupyter notebook service that requires no setup to use, while providing free access to computing resources including GPUs.`\n",
+ "\n",
+ "The reason why I used colab is because of its shareability and free GPU and TPU. Yeah you read that right, FREE GPU AND TPU! For using TPU, your program needs to be optimized for the same. Additionally, it helps use different Google services conveniently. It saves to Google Drive and all the services are very closely related. I recommend you go through the offical [overview documentation](https://colab.research.google.com/notebooks/basic_features_overview.ipynb) if you want to know more about it.\n",
+ "If you have more questions about colab, please [refer this link](https://research.google.com/colaboratory/faq.html).\n",
+ "\n",
+ ">While using a colab notebook, you will need an active internet connection to keep a session alive. If you lose the connection you will have to download the datasets again."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_N5-lspH_N8B"
+ },
+ "source": [
+ " \n",
+ "## Jupyter notebook basics"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6Ul54hAYyHyd"
+ },
+ "source": [
+ " \n",
+ "### Code cells"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
"colab": {
- "name": "Colab and PySpark.ipynb",
- "provenance": [],
- "collapsed_sections": [
- "YR1CO3FTqO5h",
- "5gs9JXCWqb9s",
- "NhM3wLG2qhlN",
- "1NJMWs4NqnlF",
- "6Ul54hAYyHyd",
- "VOqLNkRKyUIS",
- "X6zdrH15_CCW",
- "Dd6t0uFzuR4X",
- "hmIqq6xPK7m7",
- "HgwoX-pfNqQI",
- "_QwZtWxZRCBn",
- "eFoagdqARKb8",
- "rsD48rckdHPe",
- "ikGR5pDICTu7",
- "85Lv3zSXCcOY",
- "QlMf04i2CjDC",
- "4CDifVC2Cnml",
- "CbpEj9fECrW3",
- "WbKK5iHwmIoV",
- "9bKlvX-SH-Wy",
- "zLU-a4auIEvh",
- "-069UYUwIIYI",
- "aN0-A_JsIX-X",
- "xOQPOt19q_he",
- "aHjILb1DriuX",
- "PIKigra7A34e",
- "ldtA0wk9BMkT",
- "KQ6Ul9HGCwC3",
- "sY6PstyLDp6P",
- "7OZElEvcGOD1",
- "EEEB2TVqL4Ie",
- "HNPhsx8P2tUH",
- "x62BiCgBMOtq",
- "3wn2zXe7TbI3",
- "6z9gkoE2R1m1",
- "VZ1bYvF8R8Dc",
- "oVwGYAZZiyGV",
- "f3crkAQVlxKp",
- "vIbXZT29JxmG",
- "snACMwZug5Yn",
- "dABRu9eokZxw",
- "xrgbAiZHnq_U",
- "244El832wT8f",
- "H5cFCvbczHz_",
- "rdGARl-D3n-l",
- "aaxfGqYZ6Iqz",
- "BG09dDdL6Tvt",
- "CJlmPbLYKKFA",
- "r-R5ijHrKSg0",
- "UOvEVcieVn7e",
- "2KY1mxZVfNsl",
- "qwEBu3T3EbfD",
- "-HxUYv77EdSv",
- "NZ5xsdWmNOVz"
- ]
- },
- "kernelspec": {
- "display_name": "Python 3",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.6.5"
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "j38beRUTCI5c",
+ "outputId": "5a5d549b-afa7-4742-84ce-f9bec67dd9a9"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "6"
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
}
+ ],
+ "source": [
+ "2*3"
+ ]
},
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "iox_ufgbqDXa",
- "colab_type": "text"
- },
- "source": [
- "Introduction to Google Colab and PySpark "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "1qV6Grv7qIa9",
- "colab_type": "text"
- },
- "source": [
- "## Table Of Contents:\n",
- "\n",
- "Objective \n",
- "Prerequisite \n",
- "Notes from the author \n",
- "Big data, PySpark and Colaboratory \n",
- " \n",
- " Big data \n",
- " PySpark \n",
- " Colaboratory \n",
- " \n",
- " \n",
- "Jupyter notebook basics \n",
- " \n",
- " Code cells \n",
- " Text cells \n",
- " Access to the shell \n",
- " Install Spark \n",
- " \n",
- " \n",
- "Loading Dataset \n",
- "Working with the DataFrame API \n",
- " \n",
- " Viewing Dataframe \n",
- " Schema of a DataFrame \n",
- " Working with Columns \n",
- " \n",
- " \n",
- " Working with Rows \n",
- " \n",
- " \n",
- " \n",
- " \n",
- "Hands-on Questions \n",
- "Functions \n",
- " \n",
- " String Functions \n",
- " Numeric functions \n",
- " Date \n",
- " \n",
- " \n",
- "Working with Dates \n",
- "Working with joins \n",
- "Hands-on again! \n",
- "Spark SQL \n",
- "RDD \n",
- "User-Defined Functions (UDF) \n",
- "Common Questions \n",
- " \n",
- " Recommended IDE \n",
- " Submitting a Spark Job \n",
- " Creating Dataframes \n",
- " Drop Duplicates \n",
- " Fine Tuning a PySpark Job \n",
- " \n",
- " \n",
- " \n",
- " \n",
- " "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "FFnYZltvqLgt",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Objective\n",
- "The objective of this notebook is to:\n",
- ">Give a proper understanding about the different PySpark functions available. \n",
- ">A short introduction to Google Colab, as that is the platform on which this notebook is written on. \n",
- "\n",
- "Once you complete this notebook, you should be able to write pyspark programs in an efficent way. The ideal way to use this is by going through the examples given and then trying them on Colab. At the end there are a few hands on questions which you can use to evaluate yourself."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "YR1CO3FTqO5h",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Prerequisite\n",
- ">Although some theory about pyspark and big data will be given in this notebook, I recommend everyone to read more about it and have a deeper understanding on how the functions get executed and the relevance of big data in the current scenario.\n",
- "> A good understanding on python will be an added bonus."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "bGbIBPLHqVXc",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "### Notes from the author\n",
- "\n",
- "This tutorial was made using Google Colab so the code you see here is meant to run on a colab notebook. \n",
- "It goes through basic [PySpark Functions](https://spark.apache.org/docs/latest/api/python/index.html) and a short introduction on how to use [Colab](https://colab.research.google.com/notebooks/basic_features_overview.ipynb). \n",
- "If you want to view my colab notebook for this particular tutorial, you can view it [here](https://colab.research.google.com/drive/1G894WS7ltIUTusWWmsCnF_zQhQqZCDOc). The viewing experience and readability is much better there. \n",
- "If you want to try out things with this notebook as a base, feel free to download it from my repo [here](https://github.com/jacobceles/knowledge-repo/blob/master/pyspark/Colab%20and%20PySpark.ipynb) and then use it with jupyter notebook."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "K_q3Yzc9qYc0",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Big data, PySpark and Colaboratory"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "5gs9JXCWqb9s",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "### Big data"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "q3UkLT6yqebl",
- "colab_type": "text"
- },
- "source": [
- "Big data usually means data of such huge volume that normal data storage solutions cannot efficently store and process it. In this era, data is being generated at an absurd rate. Data is collected for each movement a person makes. The bulk of big data comes from three primary sources: \n",
- "\n",
- " Social data \n",
- " Machine data \n",
- " Transactional data \n",
- " \n",
- "\n",
- "Some common examples for the sources of such data include internet searches, facebook posts, doorbell cams, smartwatches, online shopping history etc. Every action creates data, it is just a matter of of there is a way to collect them or not. But what's interesting is that out of all this data collected, not even 5% of it is being used fully. There is a huge demand for big data professionals in the industry. Even though the number of graduates with a specialization in big data are rising, the problem is that they don't have the practical knowledge about big data scenarios, which leads to bad architecutres and inefficent methods of processing data.\n",
- "\n",
- ">If you are interested to know more about the landscape and technologies involved, here is [an article](https://hostingtribunal.com/blog/big-data-stats/) which I found really interesting!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "NhM3wLG2qhlN",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "### PySpark"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "VBfC_kvjqjPj",
- "colab_type": "text"
- },
- "source": [
- "If you are working in the field of big data, you must have definelty heard of spark. If you look at the [Apache Spark](https://spark.apache.org/) website, you will see that it is said to be a `Lightning-fast unified analytics engine`. PySPark is a flavour of Spark used for processing and analysing massive volumes of data. If you are familiar with python and have tried it for huge datasets, you should know that the execution time can get ridiculous. Enter PySpark!\n",
- "\n",
- "Imagine your data resides in a distributed manner at different places. If you try brining your data to one point and executing your code there, not only would that be inefficent, but also cause memory issues. Now let's say your code goes to the data rather than the data coming to where your code. This will help avoid unneccesary data movement which will thereby decrease the running time. \n",
- "\n",
- "PySpark is the Python API of Spark; which means it can do almost all the things python can. Machine learning(ML) pipelines, exploratory data analysis (at scale), ETLs for data platform, and much more! And all of them in a distributed manner. One of the best parts of pyspark is that if you are already familiar with python, it's really easy to learn.\n",
- "\n",
- "Apart from PySpark, there is another language called Scala used for big data processing. Scala is frequently over 10 times faster than Python is native for Hadoop as its based on JVM. But PySpark is getting adopted at a fast rate because of the ease of use, easier learning curve and ML capabilities.\n",
- "\n",
- "I will briefly explain how a PySpark job works, but I strongly recommend you read more about the [architecture](https://data-flair.training/blogs/how-apache-spark-works/) and how everything works. Now, before I get into it, let me talk about some basic jargons first:\n",
- "\n",
- "Cluster is a set of loosely or tightly connected computers that work together so that they can be viewed as a single system.\n",
- "\n",
- "Hadoop is an open source, scalable, and fault tolerant framework written in Java. It efficiently processes large volumes of data on a cluster of commodity hardware. Hadoop is not only a storage system but is a platform for large data storage as well as processing.\n",
- "\n",
- "HDFS (Hadoop distributed file system). It is one of the world's most reliable storage system. HDFS is a Filesystem of Hadoop designed for storing very large files running on a cluster of commodity hardware.\n",
- "\n",
- "MapReduce is a data Processing framework, which has 2 phases - Mapper and Reducer. The map procedure performs filtering and sorting, and the reduce method performs a summary operation. It usually runs on a hadoop cluster.\n",
- "\n",
- "Transformation refers to the operations applied on a dataset to create a new dataset. Filter, groupBy and map are the examples of transformations.\n",
- "\n",
- "Actions Actions refer to an operation which instructs Spark to perform computation and send the result back to driver. This is an example of action.\n",
- "\n",
- "Alright! Now that that's out of the way, let me explain how a spark job runs. In simple terma, each time you submit a pyspark job, the code gets internally converted into a MapReduce program and gets executed in the Java virtual machine. Now one of the thoughts that might be popping in your mind will probably be: `So the code gets converted into a MapReduce program. Wouldn't that mean MapReduce is faster than pySpark?` Well, the answer is a big NO. This is what makes spark jobs special. Spark is capable of handling a massive amount of data at a time, in it's distributed environment. It does this through in-memory processing , which is what makes it almost 100 times faster than Hadoop. Another factor which amkes it fast is Lazy Evaluation . Spark delays its evaluation as much as it can. Each time you submit a job, spark creates an action plan for how to execute the code, and then does nothing. Finally, when you ask for the result(i.e, calls an action), it executes the plan, which is basically all the transofrmations you have mentioned in your code. That's basically the gist of it. \n",
- "\n",
- "Now lastly, I want to talk about on more thing. Spark mainly consists of 4 modules:\n",
- "\n",
- "\n",
- " Spark SQL - helps to write spark programs using SQL like queries. \n",
- " Spark Streaming - is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. used heavily in processing of social media data. \n",
- " Spark MLLib - is the machine learning component of SPark. It helps train ML models on massive datasets with very high efficeny. \n",
- " Spark GraphX - is the visualization component of Spark. It enables users to view data both as graphs and as collections without data movement or duplication. \n",
- " \n",
- "\n",
- "Hopefully this image gives a better idea of what I am talking about:\n",
- " \n",
- "Source: Datanami \n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "1NJMWs4NqnlF",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "### Colaboratory"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ynKFk6b7qoWr",
- "colab_type": "text"
- },
- "source": [
- "In the words of Google: \n",
- "`Colaboratory, or “Colab” for short, is a product from Google Research. Colab allows anybody to write and execute arbitrary python code through the browser, and is especially well suited to machine learning, data analysis and education. More technically, Colab is a hosted Jupyter notebook service that requires no setup to use, while providing free access to computing resources including GPUs.`\n",
- "\n",
- "The reason why I used colab is because of its shareability and free GPU and TPU. Yeah you read that right, FREE GPU AND TPU! For using TPU, your program needs to be optimized for the same. Additionally, it helps use different Google services conveniently. It saves to Google Drive and all the services are very closely related. I recommend you go through the offical [overview documentation](https://colab.research.google.com/notebooks/basic_features_overview.ipynb) if you want to know more about it.\n",
- "If you have more questions about colab, please [refer this link](https://research.google.com/colaboratory/faq.html).\n",
- "\n",
- ">While using a colab notebook, you will need an active internet connection to keep a session alive. If you lose the connection you will have to download the datasets again."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "colab_type": "text",
- "id": "_N5-lspH_N8B"
- },
- "source": [
- " \n",
- "## Jupyter notebook basics"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "colab_type": "text",
- "id": "6Ul54hAYyHyd"
- },
- "source": [
- " \n",
- "### Code cells"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "j38beRUTCI5c",
- "outputId": "18933a7a-e345-4ad9-f54f-62cef10b999e",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "2*3"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "6"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 1
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "_Jewe_e9CIYa",
- "colab": {}
- },
- "source": [
- "import pandas as pd"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "g8Y7w6_CCIIT",
- "outputId": "eb01c5ba-4ec1-4b6a-c988-852774adc0f8",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "print(\"Hello!\")"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "Hello!\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "colab_type": "text",
- "id": "VOqLNkRKyUIS"
- },
- "source": [
- " \n",
- "### Text cells"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "IfbaUe-oq7DK",
- "colab_type": "text"
- },
- "source": [
- "Hello world!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "colab_type": "text",
- "id": "X6zdrH15_CCW"
- },
- "source": [
- " \n",
- "## Access to the shell"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "zdO9sjSdEVnr",
- "outputId": "0fa0cc8d-9130-4003-9858-cc6785719e22",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "ls"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "\u001b[0m\u001b[01;34msample_data\u001b[0m/\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "QF9e3lDDEX3I",
- "outputId": "dae9fdc8-9495-4baf-f136-27cd3d182885",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "pwd"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "'/content'"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 5
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "colab_type": "text",
- "id": "Dd6t0uFzuR4X"
- },
- "source": [
- " \n",
- "## Install Spark"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "tt7ZS1_wGgjn",
- "outputId": "6c729948-f507-40af-e29d-2cec61f03ee2",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 476
- }
- },
- "source": [
- "!apt-get update\n",
- "!apt-get install openjdk-8-jdk-headless -qq > /dev/null\n",
- "!wget -q http://archive.apache.org/dist/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz\n",
- "!tar xf spark-2.4.5-bin-hadoop2.7.tgz\n",
- "!pip install -q findspark"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "\r0% [Working]\r \rIgn:1 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 InRelease\n",
- "\r0% [Connecting to archive.ubuntu.com (91.189.88.152)] [Connecting to security.u\r \rIgn:2 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 InRelease\n",
- "\r0% [Connecting to archive.ubuntu.com (91.189.88.152)] [Waiting for headers] [Co\r \rHit:3 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 Release\n",
- "\r0% [Connecting to archive.ubuntu.com (91.189.88.152)] [Waiting for headers] [Co\r0% [Release.gpg gpgv 564 B] [Waiting for headers] [Waiting for headers] [Connec\r \rHit:4 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64 Release\n",
- "Get:5 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]\n",
- "Hit:7 http://archive.ubuntu.com/ubuntu bionic InRelease\n",
- "Get:8 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic InRelease [21.3 kB]\n",
- "Get:10 http://archive.ubuntu.com/ubuntu bionic-updates InRelease [88.7 kB]\n",
- "Get:11 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic InRelease [15.4 kB]\n",
- "Get:12 http://security.ubuntu.com/ubuntu bionic-security/main amd64 Packages [908 kB]\n",
- "Get:13 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ InRelease [3,626 B]\n",
- "Get:14 http://archive.ubuntu.com/ubuntu bionic-backports InRelease [74.6 kB]\n",
- "Get:15 http://ppa.launchpad.net/graphics-drivers/ppa/ubuntu bionic/main amd64 Packages [37.4 kB]\n",
- "Get:16 https://cloud.r-project.org/bin/linux/ubuntu bionic-cran35/ Packages [91.7 kB]\n",
- "Get:17 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 Packages [1,205 kB]\n",
- "Get:18 http://security.ubuntu.com/ubuntu bionic-security/restricted amd64 Packages [52.4 kB]\n",
- "Get:19 http://security.ubuntu.com/ubuntu bionic-security/multiverse amd64 Packages [8,505 B]\n",
- "Get:20 http://security.ubuntu.com/ubuntu bionic-security/universe amd64 Packages [844 kB]\n",
- "Get:21 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic/main Sources [1,813 kB]\n",
- "Get:22 http://archive.ubuntu.com/ubuntu bionic-updates/restricted amd64 Packages [66.7 kB]\n",
- "Get:23 http://archive.ubuntu.com/ubuntu bionic-updates/universe amd64 Packages [1,376 kB]\n",
- "Get:24 http://archive.ubuntu.com/ubuntu bionic-updates/multiverse amd64 Packages [19.8 kB]\n",
- "Get:25 http://archive.ubuntu.com/ubuntu bionic-backports/main amd64 Packages [8,286 B]\n",
- "Get:26 http://archive.ubuntu.com/ubuntu bionic-backports/universe amd64 Packages [8,158 B]\n",
- "Get:27 http://ppa.launchpad.net/marutter/c2d4u3.5/ubuntu bionic/main amd64 Packages [875 kB]\n",
- "Fetched 7,606 kB in 8s (1,014 kB/s)\n",
- "Reading package lists... Done\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "sdOOq4twHN1K",
- "colab": {}
- },
- "source": [
- "import os\n",
- "os.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\"\n",
- "os.environ[\"SPARK_HOME\"] = \"/content/spark-2.4.5-bin-hadoop2.7\""
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "3ACYMwhgHTYz",
- "outputId": "ff3bc469-5ee7-4204-ee8d-dc667ff56244",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "!ls"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "sample_data spark-2.4.5-bin-hadoop2.7\tspark-2.4.5-bin-hadoop2.7.tgz\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "KR1zLBk1998Z",
- "colab_type": "code",
- "outputId": "3fd931c5-4576-4fa4-dd2e-dc4331ca58da",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 193
- }
- },
- "source": [
- "import findspark\n",
- "findspark.init()\n",
- "from pyspark import SparkContext\n",
- "\n",
- "sc = SparkContext.getOrCreate()\n",
- "sc"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/html": [
- "\n",
- " \n",
- "
SparkContext
\n",
- "\n",
- "
Spark UI
\n",
- "\n",
- "
\n",
- " Version \n",
- " v2.4.5
\n",
- " Master \n",
- " local[*]
\n",
- " AppName \n",
- " pyspark-shell
\n",
- " \n",
- "
\n",
- " "
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 61
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "colab_type": "code",
- "id": "Gs7fzvxcHfvw",
- "outputId": "886ca9fc-908e-4f24-ea46-97e569394275",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 216
- }
- },
- "source": [
- "import pyspark\n",
- "from pyspark.sql import SparkSession\n",
- "\n",
- "spark = SparkSession.builder.getOrCreate() \n",
- "spark"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/html": [
- "\n",
- " \n",
- "
SparkSession - in-memory
\n",
- " \n",
- "
\n",
- "
SparkContext
\n",
- "\n",
- "
Spark UI
\n",
- "\n",
- "
\n",
- " Version \n",
- " v2.4.5
\n",
- " Master \n",
- " local[*]
\n",
- " AppName \n",
- " pyspark-shell
\n",
- " \n",
- "
\n",
- " \n",
- "
\n",
- " "
- ],
- "text/plain": [
- ""
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 63
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "hmIqq6xPK7m7",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Loading Dataset"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "hQ3zmGACLKlN",
- "colab_type": "code",
- "outputId": "a7c382c6-baff-434d-f204-c05cc6ade4f1",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 204
- }
- },
- "source": [
- "# Downloading and preprocessing Chicago's Reported Crime Data\n",
- "!wget https://data.cityofchicago.org/api/views/w98m-zvie/rows.csv?accessType=DOWNLOAD"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "--2020-05-02 15:12:16-- https://data.cityofchicago.org/api/views/w98m-zvie/rows.csv?accessType=DOWNLOAD\n",
- "Resolving data.cityofchicago.org (data.cityofchicago.org)... 52.206.68.26, 52.206.140.199, 52.206.140.205\n",
- "Connecting to data.cityofchicago.org (data.cityofchicago.org)|52.206.68.26|:443... connected.\n",
- "HTTP request sent, awaiting response... 200 OK\n",
- "Length: unspecified [text/csv]\n",
- "Saving to: ‘rows.csv?accessType=DOWNLOAD’\n",
- "\n",
- "rows.csv?accessType [ <=> ] 58.90M 3.21MB/s in 19s \n",
- "\n",
- "2020-05-02 15:12:41 (3.14 MB/s) - ‘rows.csv?accessType=DOWNLOAD’ saved [61759120]\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Wpq2jYvIMOJy",
- "colab_type": "code",
- "outputId": "74d00380-469b-46da-86be-434b64e6de3e",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 51
- }
- },
- "source": [
- "!ls"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "'rows.csv?accessType=DOWNLOAD'\t spark-2.4.5-bin-hadoop2.7\n",
- " sample_data\t\t\t spark-2.4.5-bin-hadoop2.7.tgz\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "uCJX5cwdMS9q",
- "colab_type": "code",
- "colab": {}
- },
- "source": [
- "#Renaming the downloaded file\n",
- "!mv rows.csv?accessType=DOWNLOAD reported-crimes.csv"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "hz6ALr5mMqZt",
- "colab_type": "code",
- "outputId": "ce1ae7ef-8a2c-431c-e88a-329ac9060bbd",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 224
- }
- },
- "source": [
- "df = spark.read.csv('reported-crimes.csv',header=True)\n",
- "spark.conf.set(\"spark.sql.repl.eagerEval.enabled\", True)\n",
- "df.show(5)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+--------------------+--------------------+----+------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+\n",
- "| ID|Case Number| Date| Block|IUCR| Primary Type| Description|Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year| Updated On| Latitude| Longitude| Location|\n",
- "+--------+-----------+--------------------+--------------------+----+------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+\n",
- "|12040452| JD220630|11/21/2019 12:00:...| 004XX N WABASH AVE|1153|DECEPTIVE PRACTICE|FINANCIAL IDENTIT...| RESIDENCE| false| false|1834| 018| 42| 8| 11| null| null|2019|05/01/2020 03:49:...| null| null| null|\n",
- "|12039199| JD219081|04/01/2019 08:00:...| 021XX S INDIANA AVE|1153|DECEPTIVE PRACTICE|FINANCIAL IDENTIT...| RESIDENCE| false| false|0132| 001| 3| 33| 11| 1177977| 1890072|2019|05/01/2020 03:48:...|41.853676611|-87.622239838|(41.853676611, -8...|\n",
- "|12040466| JD220650|11/07/2019 12:00:...|026XX N NEW ENGLA...|1152|DECEPTIVE PRACTICE|ILLEGAL USE CASH ...| RESIDENCE| false| false|2512| 025| 36| 18| 11| null| null|2019|05/01/2020 03:49:...| null| null| null|\n",
- "|11938499| JD100565|12/31/2019 11:30:...|070XX S COTTAGE G...|0460| BATTERY| SIMPLE| OTHER (SPECIFY)| true| false|0321| 003| 6| 42| 08B| 1182773| 1858454|2019|05/01/2020 03:48:...| 41.76680404|-87.605619683|(41.76680404, -87...|\n",
- "|11864640| JC477069|10/18/2019 09:56:...|002XX S LAVERGNE AVE|041A| BATTERY|AGGRAVATED - HANDGUN| SIDEWALK| false| false|1533| 015| 28| 25| 04B| 1143293| 1898684|2019|05/01/2020 03:48:...|41.878026693|-87.749328671|(41.878026693, -8...|\n",
- "+--------+-----------+--------------------+--------------------+----+------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+\n",
- "only showing top 5 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "HgwoX-pfNqQI",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Working with the Dataframe API"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "_QwZtWxZRCBn",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Viewing Dataframe"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "50LZ3S8_PMg_",
- "colab_type": "text"
- },
- "source": [
- "In Spark, you have a couple of options to view the DataFrame(DF).\n",
- "\n",
- "\n",
- "1. `take(3)` will return a list of three row objects. \n",
- "2. `df.collect()` will get all of the data from the entire DataFrame . Be careful when using it, because if you have a large data set when you run collect, you can easily crash the driver node. \n",
- "3. If you want Spark to print out your DataFrame in a nice format, then try `df.show()` with the number of rows as paramter. You might notice that the show functions truncates the data as inthe example above. You can use the parameter `truncate=False` to show the entire data. For example, you can use `df.show(5, False)` or `df.show(truncate=False)` to disable truncation of data.\n",
- "\n",
- "> The limit function **returns a new DataFrame** by taking the first n rows.\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "eFoagdqARKb8",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Schema of a DataFrame"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "w6qwTjGsNxrw",
- "colab_type": "code",
- "outputId": "66270202-5824-4a6b-c028-528af2026bff",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 391
- }
- },
- "source": [
- "df.dtypes"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "[('ID', 'string'),\n",
- " ('Case Number', 'string'),\n",
- " ('Date', 'string'),\n",
- " ('Block', 'string'),\n",
- " ('IUCR', 'string'),\n",
- " ('Primary Type', 'string'),\n",
- " ('Description', 'string'),\n",
- " ('Location Description', 'string'),\n",
- " ('Arrest', 'string'),\n",
- " ('Domestic', 'string'),\n",
- " ('Beat', 'string'),\n",
- " ('District', 'string'),\n",
- " ('Ward', 'string'),\n",
- " ('Community Area', 'string'),\n",
- " ('FBI Code', 'string'),\n",
- " ('X Coordinate', 'string'),\n",
- " ('Y Coordinate', 'string'),\n",
- " ('Year', 'string'),\n",
- " ('Updated On', 'string'),\n",
- " ('Latitude', 'string'),\n",
- " ('Longitude', 'string'),\n",
- " ('Location', 'string')]"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 11
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "CCGTFlCWRPw4",
- "colab_type": "code",
- "outputId": "009696e4-0f08-4630-b9d6-d8561d5d95bc",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 425
- }
- },
- "source": [
- "df.printSchema()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "root\n",
- " |-- ID: string (nullable = true)\n",
- " |-- Case Number: string (nullable = true)\n",
- " |-- Date: string (nullable = true)\n",
- " |-- Block: string (nullable = true)\n",
- " |-- IUCR: string (nullable = true)\n",
- " |-- Primary Type: string (nullable = true)\n",
- " |-- Description: string (nullable = true)\n",
- " |-- Location Description: string (nullable = true)\n",
- " |-- Arrest: string (nullable = true)\n",
- " |-- Domestic: string (nullable = true)\n",
- " |-- Beat: string (nullable = true)\n",
- " |-- District: string (nullable = true)\n",
- " |-- Ward: string (nullable = true)\n",
- " |-- Community Area: string (nullable = true)\n",
- " |-- FBI Code: string (nullable = true)\n",
- " |-- X Coordinate: string (nullable = true)\n",
- " |-- Y Coordinate: string (nullable = true)\n",
- " |-- Year: string (nullable = true)\n",
- " |-- Updated On: string (nullable = true)\n",
- " |-- Latitude: string (nullable = true)\n",
- " |-- Longitude: string (nullable = true)\n",
- " |-- Location: string (nullable = true)\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "xpsaQ4JMRUiS",
- "colab_type": "code",
- "outputId": "72504489-13b6-4e02-91d4-8dd8d2826954",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 391
- }
- },
- "source": [
- "# Defining a schema\n",
- "from pyspark.sql.types import StructType, StructField, StringType, TimestampType, BooleanType, DoubleType, IntegerType\n",
- "df.columns"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "['ID',\n",
- " 'Case Number',\n",
- " 'Date',\n",
- " 'Block',\n",
- " 'IUCR',\n",
- " 'Primary Type',\n",
- " 'Description',\n",
- " 'Location Description',\n",
- " 'Arrest',\n",
- " 'Domestic',\n",
- " 'Beat',\n",
- " 'District',\n",
- " 'Ward',\n",
- " 'Community Area',\n",
- " 'FBI Code',\n",
- " 'X Coordinate',\n",
- " 'Y Coordinate',\n",
- " 'Year',\n",
- " 'Updated On',\n",
- " 'Latitude',\n",
- " 'Longitude',\n",
- " 'Location']"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 13
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "ik62VX34SlFh",
- "colab_type": "code",
- "colab": {}
- },
- "source": [
- "labels = [\n",
- " ('ID',StringType()),\n",
- " ('Case Number',StringType()),\n",
- " ('Date',TimestampType()),\n",
- " ('Block',StringType()),\n",
- " ('IUCR',StringType()),\n",
- " ('Primary Type',StringType()),\n",
- " ('Description',StringType()),\n",
- " ('Location Description',StringType()),\n",
- " ('Arrest',StringType()),\n",
- " ('Domestic',BooleanType()),\n",
- " ('Beat',StringType()),\n",
- " ('District',StringType()),\n",
- " ('Ward',StringType()),\n",
- " ('Community Area',StringType()),\n",
- " ('FBI Code',StringType()),\n",
- " ('X Coordinate',StringType()),\n",
- " ('Y Coordinate',StringType()),\n",
- " ('Year',IntegerType()),\n",
- " ('Updated On',StringType()),\n",
- " ('Latitude',DoubleType()),\n",
- " ('Longitude',DoubleType()),\n",
- " ('Location',StringType()),\n",
- " ('Historical Wards 2003-2015',StringType()),\n",
- " ('Zip Codes',StringType()),\n",
- " ('Community Areas',StringType()),\n",
- " ('Census Tracts',StringType()),\n",
- " ('Wards',StringType()),\n",
- " ('Boundaries - ZIP Codes',StringType()),\n",
- " ('Police Districts',StringType()),\n",
- " ('Police Beats',StringType())\n",
- "]"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "T-Fp5y_oU9SF",
- "colab_type": "code",
- "outputId": "25f313fe-7ed2-416e-ea63-8c2a7deca4fc",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 54
- }
- },
- "source": [
- "schema = StructType([StructField (x[0], x[1], True) for x in labels])\n",
- "schema"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "StructType(List(StructField(ID,StringType,true),StructField(Case Number,StringType,true),StructField(Date,TimestampType,true),StructField(Block,StringType,true),StructField(IUCR,StringType,true),StructField(Primary Type,StringType,true),StructField(Description,StringType,true),StructField(Location Description,StringType,true),StructField(Arrest,StringType,true),StructField(Domestic,BooleanType,true),StructField(Beat,StringType,true),StructField(District,StringType,true),StructField(Ward,StringType,true),StructField(Community Area,StringType,true),StructField(FBI Code,StringType,true),StructField(X Coordinate,StringType,true),StructField(Y Coordinate,StringType,true),StructField(Year,IntegerType,true),StructField(Updated On,StringType,true),StructField(Latitude,DoubleType,true),StructField(Longitude,DoubleType,true),StructField(Location,StringType,true),StructField(Historical Wards 2003-2015,StringType,true),StructField(Zip Codes,StringType,true),StructField(Community Areas,StringType,true),StructField(Census Tracts,StringType,true),StructField(Wards,StringType,true),StructField(Boundaries - ZIP Codes,StringType,true),StructField(Police Districts,StringType,true),StructField(Police Beats,StringType,true)))"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 15
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "sgC7gtL5VTls",
- "colab_type": "code",
- "outputId": "dcf5a8ba-0dc8-41c1-fd86-0fb42e272277",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 561
- }
- },
- "source": [
- "df = spark.read.csv('reported-crimes.csv',schema=schema)\n",
- "df.printSchema()\n",
- "# The schema comes as we gave!"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "root\n",
- " |-- ID: string (nullable = true)\n",
- " |-- Case Number: string (nullable = true)\n",
- " |-- Date: timestamp (nullable = true)\n",
- " |-- Block: string (nullable = true)\n",
- " |-- IUCR: string (nullable = true)\n",
- " |-- Primary Type: string (nullable = true)\n",
- " |-- Description: string (nullable = true)\n",
- " |-- Location Description: string (nullable = true)\n",
- " |-- Arrest: string (nullable = true)\n",
- " |-- Domestic: boolean (nullable = true)\n",
- " |-- Beat: string (nullable = true)\n",
- " |-- District: string (nullable = true)\n",
- " |-- Ward: string (nullable = true)\n",
- " |-- Community Area: string (nullable = true)\n",
- " |-- FBI Code: string (nullable = true)\n",
- " |-- X Coordinate: string (nullable = true)\n",
- " |-- Y Coordinate: string (nullable = true)\n",
- " |-- Year: integer (nullable = true)\n",
- " |-- Updated On: string (nullable = true)\n",
- " |-- Latitude: double (nullable = true)\n",
- " |-- Longitude: double (nullable = true)\n",
- " |-- Location: string (nullable = true)\n",
- " |-- Historical Wards 2003-2015: string (nullable = true)\n",
- " |-- Zip Codes: string (nullable = true)\n",
- " |-- Community Areas: string (nullable = true)\n",
- " |-- Census Tracts: string (nullable = true)\n",
- " |-- Wards: string (nullable = true)\n",
- " |-- Boundaries - ZIP Codes: string (nullable = true)\n",
- " |-- Police Districts: string (nullable = true)\n",
- " |-- Police Beats: string (nullable = true)\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Dn2EAhesVmx0",
- "colab_type": "code",
- "outputId": "2628d775-1ddb-4c16-b684-885b84e20840",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 479
- }
- },
- "source": [
- "df.show()\n",
- "# This comes as null which means the datatypes we gave were wrong."
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {
+ "id": "_Jewe_e9CIYa"
+ },
+ "outputs": [],
+ "source": [
+ "from collections import Counter"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "g8Y7w6_CCIIT",
+ "outputId": "ee375530-ba9b-4b26-ac91-4c3aedb8a11e"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "This is a tutorial!\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(\"This is a tutorial!\")"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VOqLNkRKyUIS"
+ },
+ "source": [
+ " \n",
+ "### Text cells"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "IfbaUe-oq7DK"
+ },
+ "source": [
+ "Hello world!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "X6zdrH15_CCW"
+ },
+ "source": [
+ " \n",
+ "### Access to the shell"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "zdO9sjSdEVnr",
+ "outputId": "727c5cf0-3286-462b-a259-a550011b8460"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\u001b[0m\u001b[01;34msample_data\u001b[0m/\n"
+ ]
+ }
+ ],
+ "source": [
+ "ls"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 35
+ },
+ "id": "QF9e3lDDEX3I",
+ "outputId": "7ddef68f-aef1-4f46-f20a-d28ead31be97"
+ },
+ "outputs": [
+ {
+ "data": {
+ "application/vnd.google.colaboratory.intrinsic+json": {
+ "type": "string"
+ },
+ "text/plain": [
+ "'/content'"
+ ]
+ },
+ "execution_count": 5,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "pwd"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Dd6t0uFzuR4X"
+ },
+ "source": [
+ " \n",
+ "### Installing Spark"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6apGVff5h4ca"
+ },
+ "source": [
+ "Install Dependencies:\n",
+ "\n",
+ "\n",
+ "1. Java 8\n",
+ "2. Apache Spark with hadoop and\n",
+ "3. Findspark (used to locate the spark in the system)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {
+ "id": "tt7ZS1_wGgjn"
+ },
+ "outputs": [],
+ "source": [
+ "!apt-get install openjdk-8-jdk-headless -qq > /dev/null\n",
+ "!wget -q http://archive.apache.org/dist/spark/spark-3.1.1/spark-3.1.1-bin-hadoop3.2.tgz\n",
+ "!tar xf spark-3.1.1-bin-hadoop3.2.tgz\n",
+ "!pip install -q findspark"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "C3x0ZRLxjMVr"
+ },
+ "source": [
+ "Set Environment Variables:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {
+ "id": "sdOOq4twHN1K"
+ },
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "os.environ[\"JAVA_HOME\"] = \"/usr/lib/jvm/java-8-openjdk-amd64\"\n",
+ "os.environ[\"SPARK_HOME\"] = \"/content/spark-3.1.1-bin-hadoop3.2\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "3ACYMwhgHTYz",
+ "outputId": "9346c054-ab0c-4559-b4b8-5867f1c8e071"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "sample_data spark-3.1.1-bin-hadoop3.2\tspark-3.1.1-bin-hadoop3.2.tgz\n"
+ ]
+ }
+ ],
+ "source": [
+ "!ls"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 217
+ },
+ "id": "KR1zLBk1998Z",
+ "outputId": "79e2264b-0fd0-4238-c66c-7efea69888f9"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ " \n",
+ "
SparkSession - in-memory
\n",
+ " \n",
+ "
\n",
+ "
SparkContext
\n",
+ "\n",
+ "
Spark UI
\n",
+ "\n",
+ "
\n",
+ " Version \n",
+ " v3.1.1
\n",
+ " Master \n",
+ " local[*]
\n",
+ " AppName \n",
+ " pyspark-shell
\n",
+ " \n",
+ "
\n",
+ " \n",
+ "
\n",
+ " "
],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+----+-----------+----+-----+----+------------+-----------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------+--------+---------+--------+--------------------------+---------+---------------+-------------+-----+----------------------+----------------+------------+\n",
- "| ID|Case Number|Date|Block|IUCR|Primary Type|Description|Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year|Updated On|Latitude|Longitude|Location|Historical Wards 2003-2015|Zip Codes|Community Areas|Census Tracts|Wards|Boundaries - ZIP Codes|Police Districts|Police Beats|\n",
- "+----+-----------+----+-----+----+------------+-----------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------+--------+---------+--------+--------------------------+---------+---------------+-------------+-----+----------------------+----------------+------------+\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|null| null|null| null|null| null| null| null| null| null|null| null|null| null| null| null| null|null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "+----+-----------+----+-----+----+------------+-----------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------+--------+---------+--------+--------------------------+---------+---------------+-------------+-----+----------------------+----------------+------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "BrkVrQUsWFzk",
- "colab_type": "code",
- "outputId": "8f3e831a-5c33-4cc4-bc7f-e82f59b3835c",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 224
- }
- },
- "source": [
- "# So let's just stick with infered schema for now, casting date to date type on the way\n",
- "from pyspark.sql.functions import col, to_timestamp\n",
- "df = spark.read.csv('reported-crimes.csv',header=True).withColumn('Date',to_timestamp(col('Date'),'MM/dd/yyyy hh:mm:ss a'))\n",
- "df.show(5, False)"
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 9,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import findspark\n",
+ "findspark.init()\n",
+ "from pyspark.sql import SparkSession\n",
+ "spark = SparkSession.builder.master(\"local[*]\").getOrCreate()\n",
+ "spark.conf.set(\"spark.sql.repl.eagerEval.enabled\", True) # Property used to format output tables better\n",
+ "spark"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "hmIqq6xPK7m7"
+ },
+ "source": [
+ " \n",
+ "## Exploring the Dataset"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VZwsr57lwPgq"
+ },
+ "source": [
+ " \n",
+ "### Loading the Dataset"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "hQ3zmGACLKlN",
+ "outputId": "1c07ff65-0536-4dac-cf14-dc910c4ebee3"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "--2021-03-30 18:26:09-- https://jacobceles.github.io/knowledge_repo/colab_and_pyspark/cars.csv\n",
+ "Resolving jacobceles.github.io (jacobceles.github.io)... 185.199.108.153, 185.199.109.153, 185.199.110.153, ...\n",
+ "Connecting to jacobceles.github.io (jacobceles.github.io)|185.199.108.153|:443... connected.\n",
+ "HTTP request sent, awaiting response... 301 Moved Permanently\n",
+ "Location: https://jacobcelestine.com/knowledge_repo/colab_and_pyspark/cars.csv [following]\n",
+ "--2021-03-30 18:26:10-- https://jacobcelestine.com/knowledge_repo/colab_and_pyspark/cars.csv\n",
+ "Resolving jacobcelestine.com (jacobcelestine.com)... 185.199.108.153, 185.199.109.153, 185.199.110.153, ...\n",
+ "Connecting to jacobcelestine.com (jacobcelestine.com)|185.199.108.153|:443... connected.\n",
+ "HTTP request sent, awaiting response... 200 OK\n",
+ "Length: 22608 (22K) [text/csv]\n",
+ "Saving to: ‘cars.csv’\n",
+ "\n",
+ "cars.csv 100%[===================>] 22.08K --.-KB/s in 0.003s \n",
+ "\n",
+ "2021-03-30 18:26:10 (7.41 MB/s) - ‘cars.csv’ saved [22608/22608]\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Downloading and preprocessing Cars Data downloaded origianlly from https://perso.telecom-paristech.fr/eagan/class/igr204/datasets\n",
+ "!wget https://jacobceles.github.io/knowledge_repo/colab_and_pyspark/cars.csv"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Wpq2jYvIMOJy",
+ "outputId": "a85b230e-13e6-4d29-8521-de5c99a689b9"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "cars.csv sample_data spark-3.1.1-bin-hadoop3.2 spark-3.1.1-bin-hadoop3.2.tgz\n"
+ ]
+ }
+ ],
+ "source": [
+ "!ls"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "hz6ALr5mMqZt",
+ "outputId": "5a09439c-a96c-4ab5-d236-7890d65d791f"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "| Car| MPG|Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|\n",
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "|Chevrolet Chevell...|18.0| 8| 307.0| 130.0| 3504.| 12.0| 70| US|\n",
+ "| Buick Skylark 320|15.0| 8| 350.0| 165.0| 3693.| 11.5| 70| US|\n",
+ "| Plymouth Satellite|18.0| 8| 318.0| 150.0| 3436.| 11.0| 70| US|\n",
+ "| AMC Rebel SST|16.0| 8| 304.0| 150.0| 3433.| 12.0| 70| US|\n",
+ "| Ford Torino|17.0| 8| 302.0| 140.0| 3449.| 10.5| 70| US|\n",
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Load data from csv to a dataframe. \n",
+ "# header=True means the first row is a header \n",
+ "# sep=';' means the column are seperated using ''\n",
+ "df = spark.read.csv('cars.csv', header=True, sep=\";\")\n",
+ "df.show(5)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "N0lCS2LNwnoy"
+ },
+ "source": [
+ "The above command loads our data from into a dataframe (DF). A dataframe is a 2-dimensional labeled data structure with columns of potentially different types."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "_QwZtWxZRCBn"
+ },
+ "source": [
+ " \n",
+ "### Viewing the Dataframe"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "50LZ3S8_PMg_"
+ },
+ "source": [
+ "There are a couple of ways to view your dataframe(DF) in PySpark:\n",
+ "\n",
+ "1. `df.take(5)` will return a list of five Row objects. \n",
+ "2. `df.collect()` will get all of the data from the entire DataFrame. Be really careful when using it, because if you have a large data set, you can easily crash the driver node. \n",
+ "3. `df.show()` is the most commonly used method to view a dataframe. There are a few parameters we can pass to this method, like the number of rows and truncaiton. For example, `df.show(5, False)` or ` df.show(5, truncate=False)` will show the entire data wihtout any truncation.\n",
+ "4. `df.limit(5)` will **return a new DataFrame** by taking the first n rows. As spark is distributed in nature, there is no guarantee that `df.limit()` will give you the same results each time.\n",
+ "\n",
+ "Let us see some of them in action below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "I1qqkqcfxM0v",
+ "outputId": "10a5e152-8c75-4624-acfd-c86048d5de4f"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "|Chevrolet Chevelle Malibu|18.0|8 |307.0 |130.0 |3504. |12.0 |70 |US |\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693. |11.5 |70 |US |\n",
+ "|Plymouth Satellite |18.0|8 |318.0 |150.0 |3436. |11.0 |70 |US |\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433. |12.0 |70 |US |\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449. |10.5 |70 |US |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "df.show(5, truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/",
+ "height": 154
+ },
+ "id": "R9zwzswIxXF9",
+ "outputId": "033ab9bd-4064-4067-ec32-abe5dac4cc0a"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "\n",
+ "Car MPG Cylinders Displacement Horsepower Weight Acceleration Model Origin \n",
+ "Chevrolet Chevell... 18.0 8 307.0 130.0 3504. 12.0 70 US \n",
+ "Buick Skylark 320 15.0 8 350.0 165.0 3693. 11.5 70 US \n",
+ "Plymouth Satellite 18.0 8 318.0 150.0 3436. 11.0 70 US \n",
+ "AMC Rebel SST 16.0 8 304.0 150.0 3433. 12.0 70 US \n",
+ "Ford Torino 17.0 8 302.0 140.0 3449. 10.5 70 US \n",
+ "
\n"
],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|ID |Case Number|Date |Block |IUCR|Primary Type |Description |Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year|Updated On |Latitude |Longitude |Location |\n",
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|12040452|JD220630 |2019-11-21 12:00:00|004XX N WABASH AVE |1153|DECEPTIVE PRACTICE|FINANCIAL IDENTITY THEFT OVER $ 300|RESIDENCE |false |false |1834|018 |42 |8 |11 |null |null |2019|05/01/2020 03:49:55 PM|null |null |null |\n",
- "|12039199|JD219081 |2019-04-01 08:00:00|021XX S INDIANA AVE |1153|DECEPTIVE PRACTICE|FINANCIAL IDENTITY THEFT OVER $ 300|RESIDENCE |false |false |0132|001 |3 |33 |11 |1177977 |1890072 |2019|05/01/2020 03:48:05 PM|41.853676611|-87.622239838|(41.853676611, -87.622239838)|\n",
- "|12040466|JD220650 |2019-11-07 12:00:00|026XX N NEW ENGLAND AVE |1152|DECEPTIVE PRACTICE|ILLEGAL USE CASH CARD |RESIDENCE |false |false |2512|025 |36 |18 |11 |null |null |2019|05/01/2020 03:49:55 PM|null |null |null |\n",
- "|11938499|JD100565 |2019-12-31 11:30:00|070XX S COTTAGE GROVE AVE|0460|BATTERY |SIMPLE |OTHER (SPECIFY) |true |false |0321|003 |6 |42 |08B |1182773 |1858454 |2019|05/01/2020 03:48:05 PM|41.76680404 |-87.605619683|(41.76680404, -87.605619683) |\n",
- "|11864640|JC477069 |2019-10-18 09:56:00|002XX S LAVERGNE AVE |041A|BATTERY |AGGRAVATED - HANDGUN |SIDEWALK |false |false |1533|015 |28 |25 |04B |1143293 |1898684 |2019|05/01/2020 03:48:05 PM|41.878026693|-87.749328671|(41.878026693, -87.749328671)|\n",
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "only showing top 5 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "rsD48rckdHPe",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Working with Columns"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ikGR5pDICTu7",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Selecting Columns"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "ge9-_ygideWk",
- "colab_type": "code",
- "outputId": "757652b2-7344-4a5b-a7ff-8cb367d13cbc",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "df.Block"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "Column"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 19
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "md5zaET8dsr4",
- "colab_type": "code",
- "outputId": "7d27dba2-a2ae-44e3-a38c-d2072b4a7cbd",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "df['Block']"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "Column"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 79
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "YxP1su8veNde",
- "colab_type": "text"
- },
- "source": [
- "**NOTE:**\n",
- "\n",
- "> **We can't always use the dot notation because this will break when the column names have reserved names or attributes to the data frame class.**\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "6Gkf14sHec9a",
- "colab_type": "code",
- "outputId": "6aecb1a5-e85c-4322-c2da-1852397c5c6f",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 459
- }
- },
- "source": [
- "df.select(col('Block')).show(truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+-------------------------+\n",
- "|Block |\n",
- "+-------------------------+\n",
- "|004XX N WABASH AVE |\n",
- "|021XX S INDIANA AVE |\n",
- "|026XX N NEW ENGLAND AVE |\n",
- "|070XX S COTTAGE GROVE AVE|\n",
- "|002XX S LAVERGNE AVE |\n",
- "|024XX W FOSTER AVE |\n",
- "|048XX W AUGUSTA BLVD |\n",
- "|062XX S EMERALD DR |\n",
- "|062XX S FRANCISCO AVE |\n",
- "|120XX S LAFLIN ST |\n",
- "|049XX N ALBANY AVE |\n",
- "|021XX S HARDING AVE |\n",
- "|026XX W BALMORAL AVE |\n",
- "|071XX S CYRIL AVE |\n",
- "|028XX S CHRISTIANA AVE |\n",
- "|052XX S ARCHER AVE |\n",
- "|057XX W ROOSEVELT RD |\n",
- "|003XX S SPRINGFIELD AVE |\n",
- "|002XX N MENARD AVE |\n",
- "|003XX N MENARD AVE |\n",
- "+-------------------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "05YQ7WXhiFcm",
- "colab_type": "code",
- "outputId": "2df46414-e44a-45a5-b625-82f24828c9ab",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 459
- }
- },
- "source": [
- "df.select(df.Block).show(truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+-------------------------+\n",
- "|Block |\n",
- "+-------------------------+\n",
- "|004XX N WABASH AVE |\n",
- "|021XX S INDIANA AVE |\n",
- "|026XX N NEW ENGLAND AVE |\n",
- "|070XX S COTTAGE GROVE AVE|\n",
- "|002XX S LAVERGNE AVE |\n",
- "|024XX W FOSTER AVE |\n",
- "|048XX W AUGUSTA BLVD |\n",
- "|062XX S EMERALD DR |\n",
- "|062XX S FRANCISCO AVE |\n",
- "|120XX S LAFLIN ST |\n",
- "|049XX N ALBANY AVE |\n",
- "|021XX S HARDING AVE |\n",
- "|026XX W BALMORAL AVE |\n",
- "|071XX S CYRIL AVE |\n",
- "|028XX S CHRISTIANA AVE |\n",
- "|052XX S ARCHER AVE |\n",
- "|057XX W ROOSEVELT RD |\n",
- "|003XX S SPRINGFIELD AVE |\n",
- "|002XX N MENARD AVE |\n",
- "|003XX N MENARD AVE |\n",
- "+-------------------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "NNuJ3RIqe8yY",
- "colab_type": "code",
- "outputId": "dbbda528-1898-44eb-b3da-744b9b0d7021",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 459
- }
- },
- "source": [
- "df.select('Block','Description').show(truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+-------------------------+-------------------------------------------------+\n",
- "|Block |Description |\n",
- "+-------------------------+-------------------------------------------------+\n",
- "|004XX N WABASH AVE |FINANCIAL IDENTITY THEFT OVER $ 300 |\n",
- "|021XX S INDIANA AVE |FINANCIAL IDENTITY THEFT OVER $ 300 |\n",
- "|026XX N NEW ENGLAND AVE |ILLEGAL USE CASH CARD |\n",
- "|070XX S COTTAGE GROVE AVE|SIMPLE |\n",
- "|002XX S LAVERGNE AVE |AGGRAVATED - HANDGUN |\n",
- "|024XX W FOSTER AVE |FORCIBLE ENTRY |\n",
- "|048XX W AUGUSTA BLVD |AGGRAVATED - HANDGUN |\n",
- "|062XX S EMERALD DR |CHILD ABUSE |\n",
- "|062XX S FRANCISCO AVE |AGGRAVATED - HANDGUN |\n",
- "|120XX S LAFLIN ST |HARASSMENT BY ELECTRONIC MEANS |\n",
- "|049XX N ALBANY AVE |FROM BUILDING |\n",
- "|021XX S HARDING AVE |AGGRAVATED CRIMINAL SEXUAL ABUSE BY FAMILY MEMBER|\n",
- "|026XX W BALMORAL AVE |FINANCIAL IDENTITY THEFT OVER $ 300 |\n",
- "|071XX S CYRIL AVE |FROM BUILDING |\n",
- "|028XX S CHRISTIANA AVE |OTHER VEHICLE OFFENSE |\n",
- "|052XX S ARCHER AVE |AUTOMOBILE |\n",
- "|057XX W ROOSEVELT RD |AGGRAVATED - OTHER |\n",
- "|003XX S SPRINGFIELD AVE |FINANCIAL IDENTITY THEFT $300 AND UNDER |\n",
- "|002XX N MENARD AVE |FINANCIAL IDENTITY THEFT $300 AND UNDER |\n",
- "|003XX N MENARD AVE |FINANCIAL IDENTITY THEFT OVER $ 300 |\n",
- "+-------------------------+-------------------------------------------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "85Lv3zSXCcOY",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Adding a New Column"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "oFHUmRKZeCEV",
- "colab_type": "code",
- "outputId": "458c639b-f091-4fed-ce0a-466fe15a98c8",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 224
- }
- },
- "source": [
- "#Adding a column in PySpark\n",
- "# We are adding a column called 'One' at the end\n",
- "from pyspark.sql.functions import lit\n",
- "df = df.withColumn('One',lit(1))\n",
- "df.show(5,truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+---+\n",
- "|ID |Case Number|Date |Block |IUCR|Primary Type |Description |Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year|Updated On |Latitude |Longitude |Location |One|\n",
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+---+\n",
- "|12040452|JD220630 |2019-11-21 12:00:00|004XX N WABASH AVE |1153|DECEPTIVE PRACTICE|FINANCIAL IDENTITY THEFT OVER $ 300|RESIDENCE |false |false |1834|018 |42 |8 |11 |null |null |2019|05/01/2020 03:49:55 PM|null |null |null |1 |\n",
- "|12039199|JD219081 |2019-04-01 08:00:00|021XX S INDIANA AVE |1153|DECEPTIVE PRACTICE|FINANCIAL IDENTITY THEFT OVER $ 300|RESIDENCE |false |false |0132|001 |3 |33 |11 |1177977 |1890072 |2019|05/01/2020 03:48:05 PM|41.853676611|-87.622239838|(41.853676611, -87.622239838)|1 |\n",
- "|12040466|JD220650 |2019-11-07 12:00:00|026XX N NEW ENGLAND AVE |1152|DECEPTIVE PRACTICE|ILLEGAL USE CASH CARD |RESIDENCE |false |false |2512|025 |36 |18 |11 |null |null |2019|05/01/2020 03:49:55 PM|null |null |null |1 |\n",
- "|11938499|JD100565 |2019-12-31 11:30:00|070XX S COTTAGE GROVE AVE|0460|BATTERY |SIMPLE |OTHER (SPECIFY) |true |false |0321|003 |6 |42 |08B |1182773 |1858454 |2019|05/01/2020 03:48:05 PM|41.76680404 |-87.605619683|(41.76680404, -87.605619683) |1 |\n",
- "|11864640|JC477069 |2019-10-18 09:56:00|002XX S LAVERGNE AVE |041A|BATTERY |AGGRAVATED - HANDGUN |SIDEWALK |false |false |1533|015 |28 |25 |04B |1143293 |1898684 |2019|05/01/2020 03:48:05 PM|41.878026693|-87.749328671|(41.878026693, -87.749328671)|1 |\n",
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+---+\n",
- "only showing top 5 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "QlMf04i2CjDC",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Renaming a Column"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "QJqgy6lKfk2o",
- "colab_type": "code",
- "outputId": "a6b547cb-c643-4620-bb14-902a465bcb49",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 479
- }
- },
- "source": [
- "#Renaming a column in PySpark\n",
- "df = df.withColumnRenamed('One', 'Test')\n",
- "df.show(truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+-------------------------+----+--------------------------+-------------------------------------------------+--------------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+----+\n",
- "|ID |Case Number|Date |Block |IUCR|Primary Type |Description |Location Description |Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year|Updated On |Latitude |Longitude |Location |Test|\n",
- "+--------+-----------+-------------------+-------------------------+----+--------------------------+-------------------------------------------------+--------------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+----+\n",
- "|12040452|JD220630 |2019-11-21 12:00:00|004XX N WABASH AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |RESIDENCE |false |false |1834|018 |42 |8 |11 |null |null |2019|05/01/2020 03:49:55 PM|null |null |null |1 |\n",
- "|12039199|JD219081 |2019-04-01 08:00:00|021XX S INDIANA AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |RESIDENCE |false |false |0132|001 |3 |33 |11 |1177977 |1890072 |2019|05/01/2020 03:48:05 PM|41.853676611|-87.622239838|(41.853676611, -87.622239838)|1 |\n",
- "|12040466|JD220650 |2019-11-07 12:00:00|026XX N NEW ENGLAND AVE |1152|DECEPTIVE PRACTICE |ILLEGAL USE CASH CARD |RESIDENCE |false |false |2512|025 |36 |18 |11 |null |null |2019|05/01/2020 03:49:55 PM|null |null |null |1 |\n",
- "|11938499|JD100565 |2019-12-31 11:30:00|070XX S COTTAGE GROVE AVE|0460|BATTERY |SIMPLE |OTHER (SPECIFY) |true |false |0321|003 |6 |42 |08B |1182773 |1858454 |2019|05/01/2020 03:48:05 PM|41.76680404 |-87.605619683|(41.76680404, -87.605619683) |1 |\n",
- "|11864640|JC477069 |2019-10-18 09:56:00|002XX S LAVERGNE AVE |041A|BATTERY |AGGRAVATED - HANDGUN |SIDEWALK |false |false |1533|015 |28 |25 |04B |1143293 |1898684 |2019|05/01/2020 03:48:05 PM|41.878026693|-87.749328671|(41.878026693, -87.749328671)|1 |\n",
- "|11766889|JC359918 |2019-07-17 15:07:00|024XX W FOSTER AVE |0610|BURGLARY |FORCIBLE ENTRY |RESIDENCE - PORCH / HALLWAY |true |false |2011|020 |40 |4 |05 |1158877 |1934454 |2019|05/01/2020 03:48:05 PM|41.975876871|-87.691123958|(41.975876871, -87.691123958)|1 |\n",
- "|11740807|JC328262 |2019-06-30 05:04:00|048XX W AUGUSTA BLVD |041A|BATTERY |AGGRAVATED - HANDGUN |SIDEWALK |false |false |1531|015 |37 |25 |04B |1143848 |1906208 |2019|05/01/2020 03:48:05 PM|41.898663056|-87.74710208 |(41.898663056, -87.74710208) |1 |\n",
- "|11718084|JC300874 |2019-05-13 13:00:00|062XX S EMERALD DR |1750|OFFENSE INVOLVING CHILDREN|CHILD ABUSE |GOVERNMENT BUILDING / PROPERTY |false |true |0711|007 |16 |68 |08B |1172179 |1863746 |2019|05/01/2020 03:48:05 PM|41.781565307|-87.644295268|(41.781565307, -87.644295268)|1 |\n",
- "|11647696|JC215453 |2019-04-07 17:56:00|062XX S FRANCISCO AVE |041A|BATTERY |AGGRAVATED - HANDGUN |ALLEY |false |false |0823|008 |16 |66 |04B |1158121 |1863074 |2019|05/01/2020 03:48:05 PM|41.780018807|-87.69585372 |(41.780018807, -87.69585372) |1 |\n",
- "|12039931|JD220113 |2019-10-18 08:00:00|120XX S LAFLIN ST |2826|OTHER OFFENSE |HARASSMENT BY ELECTRONIC MEANS |RESIDENCE |false |false |0524|005 |34 |53 |26 |null |null |2019|04/30/2020 03:50:06 PM|null |null |null |1 |\n",
- "|12039974|JD220130 |2019-12-24 23:00:00|049XX N ALBANY AVE |0890|THEFT |FROM BUILDING |RESIDENCE |false |false |1713|017 |33 |14 |06 |null |null |2019|04/30/2020 03:50:06 PM|null |null |null |1 |\n",
- "|12039885|JD219909 |2019-10-01 02:00:00|021XX S HARDING AVE |1752|OFFENSE INVOLVING CHILDREN|AGGRAVATED CRIMINAL SEXUAL ABUSE BY FAMILY MEMBER|RESIDENCE |false |true |1014|010 |24 |29 |17 |null |null |2019|04/30/2020 03:50:06 PM|null |null |null |1 |\n",
- "|12039884|JD220045 |2019-07-01 12:00:00|026XX W BALMORAL AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |RESIDENCE |false |false |2011|020 |40 |4 |11 |null |null |2019|04/30/2020 03:50:06 PM|null |null |null |1 |\n",
- "|12038367|JD218291 |2019-04-21 12:00:00|071XX S CYRIL AVE |0890|THEFT |FROM BUILDING |APARTMENT |false |false |0333|003 |5 |43 |06 |1190602 |1857828 |2019|04/30/2020 03:47:59 PM|41.764900898|-87.576943938|(41.764900898, -87.576943938)|1 |\n",
- "|12038544|JD218327 |2019-12-22 13:00:00|028XX S CHRISTIANA AVE |5002|OTHER OFFENSE |OTHER VEHICLE OFFENSE |STREET |false |false |1032|010 |22 |30 |26 |1154495 |1884891 |2019|04/30/2020 03:47:59 PM|41.83996062 |-87.708565824|(41.83996062, -87.708565824) |1 |\n",
- "|12038308|JD218196 |2019-07-22 08:00:00|052XX S ARCHER AVE |0910|MOTOR VEHICLE THEFT |AUTOMOBILE |PARKING LOT / GARAGE (NON RESIDENTIAL)|false |false |0815|008 |23 |57 |07 |1146707 |1870103 |2019|04/30/2020 03:47:59 PM|41.799532089|-87.737521099|(41.799532089, -87.737521099)|1 |\n",
- "|12040067|JD220150 |2019-12-15 18:00:00|057XX W ROOSEVELT RD |0265|CRIMINAL SEXUAL ASSAULT |AGGRAVATED - OTHER |HOSPITAL BUILDING / GROUNDS |false |false |1513|015 |29 |25 |02 |null |null |2019|04/30/2020 03:50:06 PM|null |null |null |1 |\n",
- "|12039858|JD219964 |2019-09-03 09:00:00|003XX S SPRINGFIELD AVE |1154|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT $300 AND UNDER |APARTMENT |false |false |1133|011 |28 |26 |11 |null |null |2019|04/30/2020 03:50:06 PM|null |null |null |1 |\n",
- "|12038342|JD218257 |2019-05-01 00:00:00|002XX N MENARD AVE |1154|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT $300 AND UNDER |RESIDENCE |false |false |1512|015 |29 |25 |11 |1137672 |1901055 |2019|04/30/2020 03:47:59 PM|41.884636133|-87.769910804|(41.884636133, -87.769910804)|1 |\n",
- "|12038644|JD218194 |2019-06-01 13:45:00|003XX N MENARD AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |null |false |false |1512|015 |29 |25 |11 |1137654 |1901595 |2019|04/30/2020 03:47:59 PM|41.886118287|-87.769963886|(41.886118287, -87.769963886)|1 |\n",
- "+--------+-----------+-------------------+-------------------------+----+--------------------------+-------------------------------------------------+--------------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+----+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "4CDifVC2Cnml",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Grouping By Column"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "M1ek2opVfqea",
- "colab_type": "code",
- "outputId": "b0715ee8-79df-4559-8d4f-be796bb8ddaf",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 119
- }
- },
- "source": [
- "#Group By a column in PySpark\n",
- "df.groupBy('Year').count().show(5)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+----+------+\n",
- "|Year| count|\n",
- "+----+------+\n",
- "|2019|259013|\n",
- "+----+------+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "CbpEj9fECrW3",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Removing a Column"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "xsb9PXxpfnmh",
- "colab_type": "code",
- "outputId": "d9601326-b08f-4367-cf61-5b5da0d5c9e4",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 224
- }
- },
- "source": [
- "#Remove columns in PySpark\n",
- "df = df.drop('Test')\n",
- "df.show(5,truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|ID |Case Number|Date |Block |IUCR|Primary Type |Description |Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year|Updated On |Latitude |Longitude |Location |\n",
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|12040452|JD220630 |2019-11-21 12:00:00|004XX N WABASH AVE |1153|DECEPTIVE PRACTICE|FINANCIAL IDENTITY THEFT OVER $ 300|RESIDENCE |false |false |1834|018 |42 |8 |11 |null |null |2019|05/01/2020 03:49:55 PM|null |null |null |\n",
- "|12039199|JD219081 |2019-04-01 08:00:00|021XX S INDIANA AVE |1153|DECEPTIVE PRACTICE|FINANCIAL IDENTITY THEFT OVER $ 300|RESIDENCE |false |false |0132|001 |3 |33 |11 |1177977 |1890072 |2019|05/01/2020 03:48:05 PM|41.853676611|-87.622239838|(41.853676611, -87.622239838)|\n",
- "|12040466|JD220650 |2019-11-07 12:00:00|026XX N NEW ENGLAND AVE |1152|DECEPTIVE PRACTICE|ILLEGAL USE CASH CARD |RESIDENCE |false |false |2512|025 |36 |18 |11 |null |null |2019|05/01/2020 03:49:55 PM|null |null |null |\n",
- "|11938499|JD100565 |2019-12-31 11:30:00|070XX S COTTAGE GROVE AVE|0460|BATTERY |SIMPLE |OTHER (SPECIFY) |true |false |0321|003 |6 |42 |08B |1182773 |1858454 |2019|05/01/2020 03:48:05 PM|41.76680404 |-87.605619683|(41.76680404, -87.605619683) |\n",
- "|11864640|JC477069 |2019-10-18 09:56:00|002XX S LAVERGNE AVE |041A|BATTERY |AGGRAVATED - HANDGUN |SIDEWALK |false |false |1533|015 |28 |25 |04B |1143293 |1898684 |2019|05/01/2020 03:48:05 PM|41.878026693|-87.749328671|(41.878026693, -87.749328671)|\n",
- "+--------+-----------+-------------------+-------------------------+----+------------------+-----------------------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "only showing top 5 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "WbKK5iHwmIoV",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Working with Rows"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "9bKlvX-SH-Wy",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Filtering Rows"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "YNfcjOIknA3n",
- "colab_type": "code",
- "outputId": "52be7cc5-4aa2-4264-f9d2-e0aa9df1d162",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 479
- }
- },
- "source": [
- "# Filtering rows in PySpark\n",
- "df.filter(col('Date')<'2019-06-01').show(truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+----------------------+----+--------------------------+-------------------------------------------------------+------------------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|ID |Case Number|Date |Block |IUCR|Primary Type |Description |Location Description |Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year|Updated On |Latitude |Longitude |Location |\n",
- "+--------+-----------+-------------------+----------------------+----+--------------------------+-------------------------------------------------------+------------------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|12039199|JD219081 |2019-04-01 08:00:00|021XX S INDIANA AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |RESIDENCE |false |false |0132|001 |3 |33 |11 |1177977 |1890072 |2019|05/01/2020 03:48:05 PM|41.853676611|-87.622239838|(41.853676611, -87.622239838)|\n",
- "|11718084|JC300874 |2019-05-13 13:00:00|062XX S EMERALD DR |1750|OFFENSE INVOLVING CHILDREN|CHILD ABUSE |GOVERNMENT BUILDING / PROPERTY |false |true |0711|007 |16 |68 |08B |1172179 |1863746 |2019|05/01/2020 03:48:05 PM|41.781565307|-87.644295268|(41.781565307, -87.644295268)|\n",
- "|11647696|JC215453 |2019-04-07 17:56:00|062XX S FRANCISCO AVE |041A|BATTERY |AGGRAVATED - HANDGUN |ALLEY |false |false |0823|008 |16 |66 |04B |1158121 |1863074 |2019|05/01/2020 03:48:05 PM|41.780018807|-87.69585372 |(41.780018807, -87.69585372) |\n",
- "|12038367|JD218291 |2019-04-21 12:00:00|071XX S CYRIL AVE |0890|THEFT |FROM BUILDING |APARTMENT |false |false |0333|003 |5 |43 |06 |1190602 |1857828 |2019|04/30/2020 03:47:59 PM|41.764900898|-87.576943938|(41.764900898, -87.576943938)|\n",
- "|12038342|JD218257 |2019-05-01 00:00:00|002XX N MENARD AVE |1154|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT $300 AND UNDER |RESIDENCE |false |false |1512|015 |29 |25 |11 |1137672 |1901055 |2019|04/30/2020 03:47:59 PM|41.884636133|-87.769910804|(41.884636133, -87.769910804)|\n",
- "|11994131|JD167522 |2019-01-01 09:00:00|013XX W 110TH PL |1195|DECEPTIVE PRACTICE |FINANCIAL EXPLOITATION OF AN ELDERLY OR DISABLED PERSON|APARTMENT |false |false |2234|022 |34 |75 |11 |1169349 |1831489 |2019|04/30/2020 03:47:59 PM|41.693109292|-87.655602002|(41.693109292, -87.655602002)|\n",
- "|11696379|JC273783 |2019-05-22 14:40:00|029XX S FORT DEARBORN |041A|BATTERY |AGGRAVATED - HANDGUN |PARKING LOT / GARAGE (NON RESIDENTIAL) |false |false |0133|001 |4 |35 |04B |1181530 |1885736 |2019|04/30/2020 03:47:59 PM|41.841696878|-87.609333378|(41.841696878, -87.609333378)|\n",
- "|11625478|JC188359 |2019-03-16 20:39:00|068XX N OVERHILL AVE |0610|BURGLARY |FORCIBLE ENTRY |RESIDENCE |true |false |1612|016 |41 |9 |05 |1123909 |1944430 |2019|04/30/2020 03:47:59 PM|42.003899378|-87.819496765|(42.003899378, -87.819496765)|\n",
- "|12036474|JD216160 |2019-04-05 10:00:00|003XX W HUBBARD ST |0810|THEFT |OVER $500 |APARTMENT |false |false |1831|018 |42 |8 |06 |1173879 |1903256 |2019|04/29/2020 03:53:17 PM|41.889946527|-87.63688824 |(41.889946527, -87.63688824) |\n",
- "|11566441|JC116418 |2019-01-14 03:00:00|036XX N WHIPPLE ST |0265|CRIMINAL SEXUAL ASSAULT |AGGRAVATED - OTHER |APARTMENT |true |false |1733|017 |33 |16 |02 |1155406 |1924019 |2019|04/29/2020 03:53:17 PM|41.947313291|-87.704169938|(41.947313291, -87.704169938)|\n",
- "|11640117|JC205456 |2019-03-27 09:00:00|082XX S LAFLIN ST |0281|CRIMINAL SEXUAL ASSAULT |NON-AGGRAVATED |RESIDENCE |false |true |0614|006 |21 |71 |02 |1167770 |1850079 |2019|04/28/2020 03:45:44 PM|41.744157062|-87.660851418|(41.744157062, -87.660851418)|\n",
- "|11825583|JC430037 |2019-04-02 06:00:00|0000X S CONCOURSE A ST|0810|THEFT |OVER $500 |AIRPORT TERMINAL UPPER LEVEL - SECURE AREA|true |false |0813|008 |23 |56 |06 |1145563 |1865318 |2019|04/26/2020 03:45:32 PM|41.786422961|-87.741837247|(41.786422961, -87.741837247)|\n",
- "|11699523|JC278066 |2019-05-25 12:49:00|028XX W 51ST ST |031A|ROBBERY |ARMED - HANDGUN |ALLEY |true |false |0923|009 |14 |63 |03 |1158131 |1870706 |2019|04/26/2020 03:45:32 PM|41.800961869|-87.695609509|(41.800961869, -87.695609509)|\n",
- "|11677547|JC250749 |2019-05-05 13:36:00|008XX E 75TH ST |051A|ASSAULT |AGGRAVATED - HANDGUN |SIDEWALK |false |false |0323|003 |8 |69 |04A |1182958 |1855461 |2019|04/26/2020 03:45:32 PM|41.758586652|-87.605034476|(41.758586652, -87.605034476)|\n",
- "|12035312|JD214438 |2019-03-29 08:00:00|049XX W ARTHINGTON ST |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |null |false |false |1533|015 |29 |25 |11 |1143756 |1895516 |2019|04/25/2020 03:46:57 PM|41.869324647|-87.747707988|(41.869324647, -87.747707988)|\n",
- "|12035047|JD214561 |2019-05-01 00:00:00|002XX W 24TH PL |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |RESIDENCE |false |false |0914|009 |25 |34 |11 |1175108 |1888066 |2019|04/25/2020 03:46:57 PM|41.848236712|-87.632830074|(41.848236712, -87.632830074)|\n",
- "|11618719|JC180138 |2019-03-10 10:54:00|010XX S CLARK ST |0479|BATTERY |AGGRAVATED - HANDS, FISTS, FEET, SERIOUS INJURY |RESIDENCE |true |true |0123|001 |4 |32 |04B |1175680 |1895895 |2019|04/25/2020 03:46:57 PM|41.869707206|-87.630495617|(41.869707206, -87.630495617)|\n",
- "|12034436|JD211780 |2019-02-19 09:00:00|055XX W WILSON AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |APARTMENT |false |false |1623|016 |45 |15 |11 |1138670 |1930193 |2019|04/24/2020 03:45:07 PM|41.96457596 |-87.765537442|(41.96457596, -87.765537442) |\n",
- "|11700425|JC279076 |2019-05-26 06:09:00|056XX W NORTH AVE |041A|BATTERY |AGGRAVATED - HANDGUN |RESTAURANT |false |false |2531|025 |29 |25 |04B |1138594 |1910075 |2019|04/24/2020 03:45:07 PM|41.909371462|-87.766306125|(41.909371462, -87.766306125)|\n",
- "|11586737|JC140804 |2019-02-04 23:00:00|002XX N WABASH AVE |0281|CRIMINAL SEXUAL ASSAULT |NON-AGGRAVATED |HOTEL / MOTEL |false |false |0111|001 |42 |32 |02 |1176777 |1901808 |2019|04/24/2020 03:45:07 PM|41.885908101|-87.626289429|(41.885908101, -87.626289429)|\n",
- "+--------+-----------+-------------------+----------------------+----+--------------------------+-------------------------------------------------------+------------------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "zLU-a4auIEvh",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Get Distinct Rows"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "B1RKg1UrmBQz",
- "colab_type": "code",
- "outputId": "083fce9b-7a63-4d69-e8d1-3acca65d6433",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 119
- }
- },
- "source": [
- "#Get Unique Rows in PySpark\n",
- "df.select('Year').distinct().show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+----+\n",
- "|Year|\n",
- "+----+\n",
- "|2019|\n",
- "+----+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "-069UYUwIIYI",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Sorting Rows"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "4ZpeJvz0nkBI",
- "colab_type": "code",
- "outputId": "846c33eb-a7a4-4a57-a67d-7bf787a06bab",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 479
- }
- },
- "source": [
- "# Sort Rows in PySpark\n",
- "# By default the data will be sorted in ascending order\n",
- "df.orderBy('Date').show(truncate=False) "
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+----------------------+----+--------------------------+-----------------------------------+------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|ID |Case Number|Date |Block |IUCR|Primary Type |Description |Location Description |Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year|Updated On |Latitude |Longitude |Location |\n",
- "+--------+-----------+-------------------+----------------------+----+--------------------------+-----------------------------------+------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|11552674|JC100085 |2019-01-01 00:00:00|092XX S NORMAL AVE |0910|MOTOR VEHICLE THEFT |AUTOMOBILE |STREET |false |false |2223|022 |21 |73 |07 |1174586 |1843723 |2019|01/10/2019 03:16:50 PM|41.726566477|-87.636065622|(41.726566477, -87.636065622)|\n",
- "|11560011|JC108820 |2019-01-01 00:00:00|068XX S OGLESBY AVE |0820|THEFT |$500 AND UNDER |STREET |false |false |0331|003 |5 |43 |06 |1192946 |1860098 |2019|01/10/2019 03:16:50 PM|41.771073064|-87.568278663|(41.771073064, -87.568278663)|\n",
- "|11730841|JC315506 |2019-01-01 00:00:00|007XX E 95TH ST |1563|SEX OFFENSE |CRIMINAL SEXUAL ABUSE |DAY CARE CENTER |false |false |0633|006 |9 |49 |17 |1182808 |1842138 |2019|06/30/2019 03:56:27 PM|41.722030342|-87.605996889|(41.722030342, -87.605996889)|\n",
- "|11718021|JC300596 |2019-01-01 00:00:00|102XX W ZEMKE RD |0910|MOTOR VEHICLE THEFT |AUTOMOBILE |PARKING LOT/GARAGE(NON.RESID.)|false |false |1654|016 |41 |76 |07 |1106955 |1941060 |2019|06/30/2019 03:56:27 PM|41.994913946|-87.881937669|(41.994913946, -87.881937669)|\n",
- "|11714518|JC294379 |2019-01-01 00:00:00|0000X W 115TH ST |1210|DECEPTIVE PRACTICE |THEFT OF LABOR/SERVICES |APARTMENT |false |false |0522|005 |34 |53 |11 |1178223 |1828722 |2019|06/30/2019 03:56:27 PM|41.685320047|-87.623196127|(41.685320047, -87.623196127)|\n",
- "|11682859|JC256913 |2019-01-01 00:00:00|074XX S HARVARD AVE |1753|OFFENSE INVOLVING CHILDREN|SEX ASSLT OF CHILD BY FAM MBR |RESIDENCE |false |true |0731|007 |6 |69 |02 |1175242 |1855669 |2019|06/12/2019 04:05:16 PM|41.759333204|-87.633306659|(41.759333204, -87.633306659)|\n",
- "|11552667|JC100123 |2019-01-01 00:00:00|004XX N STATE ST |0890|THEFT |FROM BUILDING |RESTAURANT |false |false |1831|018 |42 |8 |06 |1176302 |1903096 |2019|01/10/2019 03:16:50 PM|41.889453169|-87.627994833|(41.889453169, -87.627994833)|\n",
- "|11723841|JC300604 |2019-01-01 00:00:00|102XX W ZEMKE RD |0910|MOTOR VEHICLE THEFT |AUTOMOBILE |PARKING LOT/GARAGE(NON.RESID.)|false |false |1654|016 |41 |76 |07 |1106955 |1941060 |2019|06/30/2019 03:56:27 PM|41.994913946|-87.881937669|(41.994913946, -87.881937669)|\n",
- "|11706371|JC286255 |2019-01-01 00:00:00|116XX S ADA ST |1544|SEX OFFENSE |SEXUAL EXPLOITATION OF A CHILD |RESIDENCE |false |false |0524|005 |34 |53 |17 |1169420 |1827617 |2019|06/30/2019 03:56:27 PM|41.68248235 |-87.655453609|(41.68248235, -87.655453609) |\n",
- "|11552709|JC100020 |2019-01-01 00:00:00|044XX S WASHTENAW AVE |0486|BATTERY |DOMESTIC BATTERY SIMPLE |APARTMENT |false |true |0922|009 |15 |58 |08B |1159112 |1875020 |2019|01/10/2019 03:16:50 PM|41.812780011|-87.691893746|(41.812780011, -87.691893746)|\n",
- "|11552758|JC100058 |2019-01-01 00:00:00|063XX S MARSHFIELD AVE|1310|CRIMINAL DAMAGE |TO PROPERTY |APARTMENT |false |false |0725|007 |16 |67 |14 |1166414 |1862607 |2019|01/10/2019 03:16:50 PM|41.77856457 |-87.665463557|(41.77856457, -87.665463557) |\n",
- "|11558163|JC106702 |2019-01-01 00:00:00|002XX W ONTARIO ST |0890|THEFT |FROM BUILDING |BAR OR TAVERN |false |false |1831|018 |42 |8 |06 |1174469 |1904439 |2019|01/10/2019 03:16:50 PM|41.893179585|-87.634686145|(41.893179585, -87.634686145)|\n",
- "|11553168|JC100745 |2019-01-01 00:00:00|008XX N MICHIGAN AVE |0890|THEFT |FROM BUILDING |RESTAURANT |false |false |1833|018 |2 |8 |06 |1177330 |1906499 |2019|01/10/2019 03:16:50 PM|41.898767916|-87.624116333|(41.898767916, -87.624116333)|\n",
- "|11553381|JC100934 |2019-01-01 00:00:00|022XX N LEAMINGTON AVE|1320|CRIMINAL DAMAGE |TO VEHICLE |RESIDENCE-GARAGE |false |false |2522|025 |36 |19 |14 |1141689 |1914382 |2019|01/10/2019 03:16:50 PM|41.921133632|-87.75482958 |(41.921133632, -87.75482958) |\n",
- "|11553495|JC101115 |2019-01-01 00:00:00|047XX N RACINE AVE |0281|CRIM SEXUAL ASSAULT |NON-AGGRAVATED |OTHER |false |false |1913|019 |46 |3 |02 |1167451 |1931818 |2019|01/10/2019 03:16:50 PM|41.968462892|-87.659670442|(41.968462892, -87.659670442)|\n",
- "|11553914|JC101459 |2019-01-01 00:00:00|062XX W BELMONT AVE |1310|CRIMINAL DAMAGE |TO PROPERTY |OTHER |false |false |2511|025 |36 |19 |14 |1134192 |1920614 |2019|01/10/2019 03:16:50 PM|41.938370433|-87.782228587|(41.938370433, -87.782228587)|\n",
- "|11556297|JC104365 |2019-01-01 00:00:00|083XX S YATES BLVD |2826|OTHER OFFENSE |HARASSMENT BY ELECTRONIC MEANS |RESIDENCE |false |false |0412|004 |7 |46 |26 |1193625 |1850166 |2019|01/10/2019 03:16:50 PM|41.74380228 |-87.566114429|(41.74380228, -87.566114429) |\n",
- "|11563351|JC112682 |2019-01-01 00:00:00|045XX S EVANS AVE |2825|OTHER OFFENSE |HARASSMENT BY TELEPHONE |APARTMENT |false |true |0221|002 |4 |38 |26 |1181911 |1875136 |2019|01/18/2019 09:37:14 AM|41.812600904|-87.608263541|(41.812600904, -87.608263541)|\n",
- "|11566200|JC116315 |2019-01-01 00:00:00|072XX S EAST END AVE |0320|ROBBERY |STRONGARM - NO WEAPON |APARTMENT |false |true |0324|003 |7 |43 |03 |1188788 |1857344 |2019|01/18/2019 09:37:14 AM|41.763616341|-87.58360812 |(41.763616341, -87.58360812) |\n",
- "|11571975|JC123039 |2019-01-01 00:00:00|0000X N LOCKWOOD AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300|RESIDENCE |false |false |1522|015 |28 |25 |11 |1141053 |1899725 |2019|01/23/2019 04:13:37 PM|41.880924861|-87.757527885|(41.880924861, -87.757527885)|\n",
- "+--------+-----------+-------------------+----------------------+----+--------------------------+-----------------------------------+------------------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "v1CEwofMJV-D",
- "colab_type": "code",
- "outputId": "91c8d3c2-d1c6-4195-abbc-c540519c23c1",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 479
- }
- },
- "source": [
- "# To change the sorting order, you can use the asceding parameter\n",
- "df.orderBy('Date', ascending=False).show(truncate=False) "
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+----------------------+----+-----------------+--------------------------------------------------+----------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|ID |Case Number|Date |Block |IUCR|Primary Type |Description |Location Description |Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year|Updated On |Latitude |Longitude |Location |\n",
- "+--------+-----------+-------------------+----------------------+----+-----------------+--------------------------------------------------+----------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "|11938228|JD100017 |2019-12-31 23:55:00|0000X W 69TH ST |143A|WEAPONS VIOLATION|UNLAWFUL POSS OF HANDGUN |STREET |true |false |0731|007 |6 |69 |15 |1176896 |1859260 |2019|01/07/2020 03:52:21 PM|41.769150218|-87.627136786|(41.769150218, -87.627136786)|\n",
- "|11940078|JD100016 |2019-12-31 23:54:00|063XX S MAY ST |0420|BATTERY |AGGRAVATED:KNIFE/CUTTING INSTR |SIDEWALK |false |false |0724|007 |16 |68 |04B |1169736 |1862855 |2019|01/08/2020 03:47:27 PM|41.779173667|-87.653277703|(41.779173667, -87.653277703)|\n",
- "|11938857|JD100599 |2019-12-31 23:50:00|004XX N Ashland ave |0820|THEFT |$500 AND UNDER |BAR OR TAVERN |false |false |1215|012 |27 |24 |06 |null |null |2019|01/07/2020 03:52:21 PM|null |null |null |\n",
- "|11938240|JD100002 |2019-12-31 23:48:00|004XX S CICERO AVE |143A|WEAPONS VIOLATION|UNLAWFUL POSS OF HANDGUN |VEHICLE NON-COMMERCIAL|true |false |1533|015 |29 |25 |15 |1144466 |1897452 |2019|01/07/2020 03:52:21 PM|41.874623951|-87.745052647|(41.874623951, -87.745052647)|\n",
- "|11937967|JC567053 |2019-12-31 23:46:00|034XX W JACKSON BLVD |143A|WEAPONS VIOLATION|UNLAWFUL POSS OF HANDGUN |STREET |false |false |1133|011 |28 |27 |15 |1153587 |1898480 |2019|01/07/2020 03:52:21 PM|41.877268465|-87.711536692|(41.877268465, -87.711536692)|\n",
- "|11938124|JD100001 |2019-12-31 23:37:00|012XX W 99TH ST |5112|OTHER OFFENSE |GUN OFFENDER: DUTY TO REPORT CHANGE OF INFORMATION|STREET |true |false |2232|022 |34 |73 |26 |1170052 |1839142 |2019|01/07/2020 03:52:21 PM|41.714095115|-87.652806763|(41.714095115, -87.652806763)|\n",
- "|11937990|JC567039 |2019-12-31 23:31:00|006XX W 47TH ST |0460|BATTERY |SIMPLE |APARTMENT |false |false |0925|009 |11 |61 |08B |1172817 |1873724 |2019|01/07/2020 03:52:21 PM|41.808931947|-87.641661883|(41.808931947, -87.641661883)|\n",
- "|11937983|JC567043 |2019-12-31 23:30:00|049XX N CHRISTIANA AVE|051A|ASSAULT |AGGRAVATED: HANDGUN |STREET |false |false |1713|017 |33 |14 |04A |1153158 |1932519 |2019|01/07/2020 03:52:21 PM|41.970682839|-87.712206567|(41.970682839, -87.712206567)|\n",
- "|11937963|JC567049 |2019-12-31 23:21:00|101XX S NORMAL AVE |5111|OTHER OFFENSE |GUN OFFENDER: ANNUAL REGISTRATION |STREET |true |false |2232|022 |9 |73 |26 |1174759 |1837501 |2019|01/07/2020 03:52:21 PM|41.709488593|-87.635616613|(41.709488593, -87.635616613)|\n",
- "|11939218|JD100095 |2019-12-31 23:15:00|001XX N CLARK ST |0810|THEFT |OVER $500 |STREET |false |false |0111|001 |42 |32 |06 |1175507 |1901535 |2019|01/07/2020 03:52:21 PM|41.885187593|-87.630961294|(41.885187593, -87.630961294)|\n",
- "|11938244|JC567030 |2019-12-31 23:15:00|070XX S CHAPPEL AVE |0486|BATTERY |DOMESTIC BATTERY SIMPLE |APARTMENT |false |true |0331|003 |5 |43 |08B |1191096 |1858692 |2019|01/07/2020 03:52:21 PM|41.767259848|-87.575105403|(41.767259848, -87.575105403)|\n",
- "|11938338|JC567037 |2019-12-31 23:15:00|001XX W 68TH ST |0486|BATTERY |DOMESTIC BATTERY SIMPLE |APARTMENT |false |true |0722|007 |6 |69 |08B |1176673 |1859896 |2019|01/07/2020 03:52:21 PM|41.770900494|-87.627935073|(41.770900494, -87.627935073)|\n",
- "|11937952|JC567038 |2019-12-31 23:14:00|007XX N MICHIGAN AVE |0486|BATTERY |DOMESTIC BATTERY SIMPLE |HOTEL/MOTEL |false |true |1834|018 |42 |8 |08B |1177303 |1905177 |2019|01/07/2020 03:52:21 PM|41.895140898|-87.624255632|(41.895140898, -87.624255632)|\n",
- "|11938039|JC567055 |2019-12-31 23:14:00|046XX S DAMEN AVE |143A|WEAPONS VIOLATION|UNLAWFUL POSS OF HANDGUN |STREET |true |false |0924|009 |12 |61 |15 |1163735 |1873901 |2019|01/07/2020 03:52:21 PM|41.809613401|-87.674967882|(41.809613401, -87.674967882)|\n",
- "|11938007|JC567034 |2019-12-31 23:12:00|030XX W 21ST PL |0486|BATTERY |DOMESTIC BATTERY SIMPLE |RESIDENCE |true |true |1022|010 |24 |30 |08B |1156456 |1889560 |2019|01/07/2020 03:52:21 PM|41.852733523|-87.701243634|(41.852733523, -87.701243634)|\n",
- "|11937955|JC567056 |2019-12-31 23:12:00|080XX S KINGSTON AVE |0486|BATTERY |DOMESTIC BATTERY SIMPLE |APARTMENT |false |true |0422|004 |7 |46 |08B |1194583 |1851989 |2019|01/07/2020 03:52:21 PM|41.748781248|-87.562544473|(41.748781248, -87.562544473)|\n",
- "|11938003|JC567031 |2019-12-31 23:10:00|100XX S LAFAYETTE AVE |0486|BATTERY |DOMESTIC BATTERY SIMPLE |RESIDENCE |false |true |0511|005 |9 |49 |08B |1177696 |1838416 |2019|01/07/2020 03:52:21 PM|41.711933671|-87.624833412|(41.711933671, -87.624833412)|\n",
- "|11938034|JC567024 |2019-12-31 23:06:00|036XX S STATE ST |143A|WEAPONS VIOLATION|UNLAWFUL POSS OF HANDGUN |STREET |true |false |0213|002 |3 |35 |15 |1176895 |1880755 |2019|01/07/2020 03:52:21 PM|41.828134581|-87.62649256 |(41.828134581, -87.62649256) |\n",
- "|11937958|JC567042 |2019-12-31 23:02:00|014XX E 76TH ST |0460|BATTERY |SIMPLE |RESIDENCE |true |false |0411|004 |5 |43 |08B |1187313 |1854891 |2019|01/07/2020 03:52:21 PM|41.756920236|-87.589092011|(41.756920236, -87.589092011)|\n",
- "|11943133|JD105884 |2019-12-31 23:00:00|007XX N CENTRAL AVE |1320|CRIMINAL DAMAGE |TO VEHICLE |VACANT LOT/LAND |false |false |1511|015 |29 |25 |14 |1138889 |1904432 |2019|01/09/2020 03:55:23 PM|41.893881034|-87.765359656|(41.893881034, -87.765359656)|\n",
- "+--------+-----------+-------------------+----------------------+----+-----------------+--------------------------------------------------+----------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+----------------------+------------+-------------+-----------------------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "aN0-A_JsIX-X",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "#### Union Dataframes"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "VH0KOaBrJt6v",
- "colab_type": "text"
- },
- "source": [
- "You will see three main methods for performing union of dataframes. It is important to know the difference between them and which one is preferred:\n",
- "\n",
- "* `union()` – It is used to merge two DataFrames of the same structure/schema. If schemas are not the same, it returns an error\n",
- "* `unionAll()` – This function is deprecated since Spark 2.0.0, and replaced with union()\n",
- "* `unionByName()` - This fucntion is used to merge two dataframes based on column name.\n",
- "\n",
- "> Since `unionAll()` is deprecated, **`union()` is the preferred method for merging dataframes.**\n",
- " \n",
- "> The difference between `unionByName()` and `union()` is that `unionByName()` resolves columns by name, not by position.\n",
- "\n",
- "In other SQLs, Union eliminates the duplicates but UnionAll merges two datasets, thereby including duplicate records. But, in PySpark, both behave the same and includes duplicate records. The recommendation is to use `distinct()` or `dropDuplicates()` to remove duplicate records."
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "bCZIzfYmnx--",
- "colab_type": "code",
- "outputId": "5a2ddbe0-e70a-413d-c118-d2d26f6815f7",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "# Append rows in PySpark: build a dataframe for a single day, then count how many rows df already has for that date.\n",
- "one_day = spark.read.csv('reported-crimes.csv',header=True).withColumn('Date',to_timestamp(col('Date'),'MM/dd/yyyy hh:mm:ss a')).filter(col('Date')==lit('2019-07-30'))\n",
- "df.filter(col('Date')==lit('2019-07-30')).count()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "10"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 31
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "k9CZlZ4Got_e",
- "colab_type": "code",
- "outputId": "85cd7f36-7915-4f42-9e67-d1f90c642223",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "df.union(one_day).filter(col('Date')==lit('2019-07-30')).count()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "20"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 32
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "1pfPzVOFqC_8",
- "colab_type": "text"
- },
- "source": [
- "**Result:**\n",
- "\n",
- "> As you can see here, there were 10 crimes committed on 2019-07-30, and after the union there are 20 records.\n",
- "\n"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "CjWjzWBoMxx0",
- "colab_type": "code",
- "outputId": "60f1d420-c647-4de3-ddba-92dbed2dd75d",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 136
- }
- },
- "source": [
- "# Creating two dataframes with jumbled columns\n",
- "df1 = spark.createDataFrame([[1, 2, 3]], [\"col0\", \"col1\", \"col2\"])\n",
- "df2 = spark.createDataFrame([[4, 5, 6]], [\"col1\", \"col2\", \"col0\"])\n",
- "df1.unionByName(df2).show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+----+----+----+\n",
- "|col0|col1|col2|\n",
- "+----+----+----+\n",
- "| 1| 2| 3|\n",
- "| 6| 4| 5|\n",
- "+----+----+----+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "EX33t8e3PyGy",
- "colab_type": "text"
- },
- "source": [
- "**Result:**\n",
- "\n",
- "> As you can see here, the two dataframes have been successfully merged based on their column names.\n",
- "\n"
- ]
- },
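- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "For contrast, here is a minimal sketch (not executed here) of what the positional `union()` would do with the same jumbled columns. The values from `df2` land under the wrong headers because columns are matched by position, not by name:\n",
- "\n",
- "```python\n",
- "# df1 has columns [col0, col1, col2]; df2 has [col1, col2, col0]\n",
- "df1.union(df2).show()\n",
- "# +----+----+----+\n",
- "# |col0|col1|col2|\n",
- "# +----+----+----+\n",
- "# |   1|   2|   3|\n",
- "# |   4|   5|   6|   <- df2's values, no longer aligned with the header names\n",
- "# +----+----+----+\n",
- "```"
- ]
- },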
- {
- "cell_type": "code",
- "metadata": {
- "id": "6tVJeAY_plob",
- "colab_type": "code",
- "outputId": "bad14931-64f4-472e-bc4c-1144ad0b4a08",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 289
- }
- },
- "source": [
- "# Top 10 reported crime counts by Primary Type, in descending order of occurrence\n",
- "df.groupBy(\"Primary Type\").count().orderBy('count', ascending=False).show(10)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+-------------------+-----+\n",
- "| Primary Type|count|\n",
- "+-------------------+-----+\n",
- "| THEFT|62362|\n",
- "| BATTERY|49483|\n",
- "| CRIMINAL DAMAGE|26666|\n",
- "| ASSAULT|20605|\n",
- "| DECEPTIVE PRACTICE|18233|\n",
- "| OTHER OFFENSE|16682|\n",
- "| NARCOTICS|14210|\n",
- "| BURGLARY| 9624|\n",
- "|MOTOR VEHICLE THEFT| 8979|\n",
- "| ROBBERY| 7987|\n",
- "+-------------------+-----+\n",
- "only showing top 10 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "xOQPOt19q_he",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Hands-on Questions 🤚 !"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "uk7x_PCarIff",
- "colab_type": "text"
- },
- "source": [
- "**What percentage of reported crimes resulted in an arrest?**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "zZkkAz-srcMf",
- "colab_type": "code",
- "colab": {}
- },
- "source": [
- "# Answer"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "9Qzp-Vb-ra6w",
- "colab_type": "text"
- },
- "source": [
- "**What are the top 3 locations for reported crimes?**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "qI6srWS4rYwv",
- "colab_type": "code",
- "colab": {}
- },
- "source": [
- "# Answer"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "aHjILb1DriuX",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Functions"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "x3vlC7ZerlKb",
- "colab_type": "code",
- "outputId": "e7188bca-b33e-44f5-e3a1-9b1699915173",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 54
- }
- },
- "source": [
- "# Functions available in PySpark\n",
- "from pyspark.sql import functions\n",
- "print(dir(functions))"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "['Column', 'DataFrame', 'DataType', 'PandasUDFType', 'PythonEvalType', 'SparkContext', 'StringType', 'UserDefinedFunction', '__all__', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_binary_mathfunctions', '_collect_list_doc', '_collect_set_doc', '_create_binary_mathfunction', '_create_column_from_literal', '_create_function', '_create_udf', '_create_window_function', '_functions', '_functions_1_4', '_functions_1_6', '_functions_2_1', '_functions_2_4', '_functions_deprecated', '_lit_doc', '_message', '_string_functions', '_test', '_to_java_column', '_to_seq', '_window_functions', '_wrap_deprecated_function', 'abs', 'acos', 'add_months', 'approxCountDistinct', 'approx_count_distinct', 'array', 'array_contains', 'array_distinct', 'array_except', 'array_intersect', 'array_join', 'array_max', 'array_min', 'array_position', 'array_remove', 'array_repeat', 'array_sort', 'array_union', 'arrays_overlap', 'arrays_zip', 'asc', 'asc_nulls_first', 'asc_nulls_last', 'ascii', 'asin', 'atan', 'atan2', 'avg', 'base64', 'basestring', 'bin', 'bitwiseNOT', 'blacklist', 'broadcast', 'bround', 'cbrt', 'ceil', 'coalesce', 'col', 'collect_list', 'collect_set', 'column', 'concat', 'concat_ws', 'conv', 'corr', 'cos', 'cosh', 'count', 'countDistinct', 'covar_pop', 'covar_samp', 'crc32', 'create_map', 'cume_dist', 'current_date', 'current_timestamp', 'date_add', 'date_format', 'date_sub', 'date_trunc', 'datediff', 'dayofmonth', 'dayofweek', 'dayofyear', 'decode', 'degrees', 'dense_rank', 'desc', 'desc_nulls_first', 'desc_nulls_last', 'element_at', 'encode', 'exp', 'explode', 'explode_outer', 'expm1', 'expr', 'factorial', 'first', 'flatten', 'floor', 'format_number', 'format_string', 'from_json', 'from_unixtime', 'from_utc_timestamp', 'functools', 'get_json_object', 'greatest', 'grouping', 'grouping_id', 'hash', 'hex', 'hour', 'hypot', 'ignore_unicode_prefix', 'initcap', 'input_file_name', 'instr', 'isnan', 'isnull', 'json_tuple', 'kurtosis', 'lag', 'last', 'last_day', 'lead', 'least', 'length', 'levenshtein', 'lit', 'locate', 'log', 'log10', 'log1p', 'log2', 'lower', 'lpad', 'ltrim', 'map_concat', 'map_from_arrays', 'map_from_entries', 'map_keys', 'map_values', 'max', 'md5', 'mean', 'min', 'minute', 'monotonically_increasing_id', 'month', 'months_between', 'nanvl', 'next_day', 'ntile', 'pandas_udf', 'percent_rank', 'posexplode', 'posexplode_outer', 'pow', 'quarter', 'radians', 'rand', 'randn', 'rank', 'regexp_extract', 'regexp_replace', 'repeat', 'reverse', 'rint', 'round', 'row_number', 'rpad', 'rtrim', 'schema_of_json', 'second', 'sequence', 'sha1', 'sha2', 'shiftLeft', 'shiftRight', 'shiftRightUnsigned', 'shuffle', 'signum', 'sin', 'since', 'sinh', 'size', 'skewness', 'slice', 'sort_array', 'soundex', 'spark_partition_id', 'split', 'sqrt', 'stddev', 'stddev_pop', 'stddev_samp', 'struct', 'substring', 'substring_index', 'sum', 'sumDistinct', 'sys', 'tan', 'tanh', 'toDegrees', 'toRadians', 'to_date', 'to_json', 'to_timestamp', 'to_utc_timestamp', 'translate', 'trim', 'trunc', 'udf', 'unbase64', 'unhex', 'unix_timestamp', 'upper', 'var_pop', 'var_samp', 'variance', 'warnings', 'weekofyear', 'when', 'window', 'year']\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "PIKigra7A34e",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## String Functions"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "63QDccSjBqC4",
- "colab_type": "code",
- "colab": {}
- },
- "source": [
- "# Loading the data\n",
- "from pyspark.sql.functions import col, to_timestamp\n",
- "df = spark.read.csv('reported-crimes.csv',header=True).withColumn('Date',to_timestamp(col('Date'),'MM/dd/yyyy hh:mm:ss a'))"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "LiXWN8DUA9x6",
- "colab_type": "text"
- },
- "source": [
- "**Display the Primary Type column in lowercase and uppercase, along with the first 4 characters of the column**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "52Gh9c99BZFr",
- "colab_type": "code",
- "outputId": "0746757f-4b2c-4396-c65c-07e42b17e947",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 459
- }
- },
- "source": [
- "from pyspark.sql.functions import col,lower, upper, substring\n",
- "help(substring)\n",
- "df.select(lower(col('Primary Type')),upper(col('Primary Type')),substring(col('Primary Type'),1,4)).show(5, False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "Help on function substring in module pyspark.sql.functions:\n",
- "\n",
- "substring(str, pos, len)\n",
- " Substring starts at `pos` and is of length `len` when str is String type or\n",
- " returns the slice of byte array that starts at `pos` in byte and is of length `len`\n",
- " when str is Binary type.\n",
- " \n",
- " .. note:: The position is not zero based, but 1 based index.\n",
- " \n",
- " >>> df = spark.createDataFrame([('abcd',)], ['s',])\n",
- " >>> df.select(substring(df.s, 1, 2).alias('s')).collect()\n",
- " [Row(s='ab')]\n",
- " \n",
- " .. versionadded:: 1.5\n",
- "\n",
- "+-------------------+-------------------+-----------------------------+\n",
- "|lower(Primary Type)|upper(Primary Type)|substring(Primary Type, 1, 4)|\n",
- "+-------------------+-------------------+-----------------------------+\n",
- "|deceptive practice |DECEPTIVE PRACTICE |DECE |\n",
- "|deceptive practice |DECEPTIVE PRACTICE |DECE |\n",
- "|deceptive practice |DECEPTIVE PRACTICE |DECE |\n",
- "|battery |BATTERY |BATT |\n",
- "|battery |BATTERY |BATT |\n",
- "+-------------------+-------------------+-----------------------------+\n",
- "only showing top 5 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ldtA0wk9BMkT",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Numeric functions"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "kmz4G5LVBOs6",
- "colab_type": "text"
- },
- "source": [
- "**Show the oldest date and the most recent date**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "wBDDH-YpBbdk",
- "colab_type": "code",
- "outputId": "e6389273-b5ec-4f47-893a-544ac45abdad",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 119
- }
- },
- "source": [
- "from pyspark.sql.functions import min, max\n",
- "df.select(min(col('Date')), max(col('Date'))).show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+-------------------+-------------------+\n",
- "| min(Date)| max(Date)|\n",
- "+-------------------+-------------------+\n",
- "|2019-01-01 00:00:00|2019-12-31 23:55:00|\n",
- "+-------------------+-------------------+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "KQ6Ul9HGCwC3",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Date"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "9WFrADdkBPsX",
- "colab_type": "text"
- },
- "source": [
- "**What is 3 days earlier than the oldest date and 3 days later than the most recent date?**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "YrNmD5umAx__",
- "colab_type": "code",
- "outputId": "59a608be-c9e0-44cc-e2d3-d6046861059a",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 119
- }
- },
- "source": [
- "from pyspark.sql.functions import date_add, date_sub\n",
- "df.select(date_add(max(col('Date')),3), date_sub(min(col('Date')),3)).show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+----------------------+----------------------+\n",
- "|date_add(max(Date), 3)|date_sub(min(Date), 3)|\n",
- "+----------------------+----------------------+\n",
- "| 2020-01-03| 2018-12-29|\n",
- "+----------------------+----------------------+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "sY6PstyLDp6P",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Working with Dates"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "s1jmBN2qFHyk",
- "colab_type": "text"
- },
- "source": [
- "> [PySpark follows Java's SimpleDateFormat pattern table. Click here to view the docs.](https://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html)"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "sCTeI_JvDCsH",
- "colab_type": "code",
- "outputId": "825a09aa-425c-4843-ad5c-af0dd4d0fdef",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 119
- }
- },
- "source": [
- "from pyspark.sql.functions import to_date, to_timestamp, lit\n",
- "df = spark.createDataFrame([('2019-12-25 13:30:00',)], ['Christmas'])\n",
- "df.show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+-------------------+\n",
- "| Christmas|\n",
- "+-------------------+\n",
- "|2019-12-25 13:30:00|\n",
- "+-------------------+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "ZH8ja1eHEW8x",
- "colab_type": "code",
- "outputId": "bf8216eb-a6d9-4d1d-8c03-bce2e5d57de5",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 119
- }
- },
- "source": [
- "df.select(to_date(col('Christmas'),'yyyy-MM-dd HH:mm:ss'), to_timestamp(col('Christmas'),'yyyy-MM-dd HH:mm:ss')).show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+-------------------------------------------+------------------------------------------------+\n",
- "|to_date(`Christmas`, 'yyyy-MM-dd HH:mm:ss')|to_timestamp(`Christmas`, 'yyyy-MM-dd HH:mm:ss')|\n",
- "+-------------------------------------------+------------------------------------------------+\n",
- "| 2019-12-25| 2019-12-25 13:30:00|\n",
- "+-------------------------------------------+------------------------------------------------+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "7g9m_8PPErI1",
- "colab_type": "code",
- "outputId": "6a2be393-b89b-4d17-a16f-e9b390097851",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 119
- }
- },
- "source": [
- "df = spark.createDataFrame([('25/Dec/2019 13:30:00',)], ['Christmas'])\n",
- "df.select(to_date(col('Christmas'),'dd/MMM/yyyy HH:mm:ss'), to_timestamp(col('Christmas'),'dd/MMM/yyyy HH:mm:ss')).show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------------------------------------------+-------------------------------------------------+\n",
- "|to_date(`Christmas`, 'dd/MMM/yyyy HH:mm:ss')|to_timestamp(`Christmas`, 'dd/MMM/yyyy HH:mm:ss')|\n",
- "+--------------------------------------------+-------------------------------------------------+\n",
- "| 2019-12-25| 2019-12-25 13:30:00|\n",
- "+--------------------------------------------+-------------------------------------------------+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "x26Ls5SzE9qJ",
- "colab_type": "code",
- "outputId": "40665c93-601b-4491-b80c-b7f7527da654",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 323
- }
- },
- "source": [
- "df = spark.createDataFrame([('12/25/2019 01:30:00 PM',)], ['Christmas'])\n",
- "df.show(1)\n",
- "df.show(1, truncate = False)\n",
- "df.select(to_date(col('Christmas'),'MM/dd/yyyy hh:mm:ss aa'), to_timestamp(col('Christmas'),'MM/dd/yyyy hh:mm:ss aa')).show(truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------------------+\n",
- "| Christmas|\n",
- "+--------------------+\n",
- "|12/25/2019 01:30:...|\n",
- "+--------------------+\n",
- "\n",
- "+----------------------+\n",
- "|Christmas |\n",
- "+----------------------+\n",
- "|12/25/2019 01:30:00 PM|\n",
- "+----------------------+\n",
- "\n",
- "+----------------------------------------------+---------------------------------------------------+\n",
- "|to_date(`Christmas`, 'MM/dd/yyyy hh:mm:ss aa')|to_timestamp(`Christmas`, 'MM/dd/yyyy hh:mm:ss aa')|\n",
- "+----------------------------------------------+---------------------------------------------------+\n",
- "|2019-12-25 |2019-12-25 13:30:00 |\n",
- "+----------------------------------------------+---------------------------------------------------+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "7OZElEvcGOD1",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Working with Joins"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "UJBC7r3JFyCL",
- "colab_type": "code",
- "colab": {}
- },
- "source": [
- "# Loading the data\n",
- "from pyspark.sql.functions import col, to_timestamp\n",
- "df = spark.read.csv('reported-crimes.csv',header=True).withColumn('Date',to_timestamp(col('Date'),'MM/dd/yyyy hh:mm:ss a'))"
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "-_aNpIqwIERe",
- "colab_type": "code",
- "outputId": "fc9c92ba-04e0-47e9-bf9b-1708e707334a",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 204
- }
- },
- "source": [
- "# Downloading police station data\n",
- "!wget -O police-station.csv https://data.cityofchicago.org/api/views/z8bn-74gv/rows.csv?accessType=DOWNLOAD"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "--2020-05-02 15:16:24-- https://data.cityofchicago.org/api/views/z8bn-74gv/rows.csv?accessType=DOWNLOAD\n",
- "Resolving data.cityofchicago.org (data.cityofchicago.org)... 52.206.140.205, 52.206.140.199, 52.206.68.26\n",
- "Connecting to data.cityofchicago.org (data.cityofchicago.org)|52.206.140.205|:443... connected.\n",
- "HTTP request sent, awaiting response... 200 OK\n",
- "Length: unspecified [text/csv]\n",
- "Saving to: ‘police-station.csv’\n",
- "\n",
- "police-station.csv [ <=> ] 5.57K --.-KB/s in 0s \n",
- "\n",
- "2020-05-02 15:16:28 (587 MB/s) - ‘police-station.csv’ saved [5699]\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "HOQjNSrdIO1n",
- "colab_type": "code",
- "outputId": "d0bee83f-a994-4e86-f934-88f5364a3b8d",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 479
- }
- },
- "source": [
- "ps = spark.read.csv(\"police-station.csv\", header=True)\n",
- "ps.show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------------------+-----------------+--------------------+-------+-----+-----+--------------------+------------+------------+------------+------------+------------+-----------+------------+--------------------+\n",
- "| DISTRICT| DISTRICT NAME| ADDRESS| CITY|STATE| ZIP| WEBSITE| PHONE| FAX| TTY|X COORDINATE|Y COORDINATE| LATITUDE| LONGITUDE| LOCATION|\n",
- "+--------------------+-----------------+--------------------+-------+-----+-----+--------------------+------------+------------+------------+------------+------------+-----------+------------+--------------------+\n",
- "| 1| Central| 1718 S State St|Chicago| IL|60616|http://home.chica...|312-745-4290|312-745-3694|312-745-3693| 1176569.052| 1891771.704|41.85837259|-87.62735617|(41.8583725929, -...|\n",
- "| 6| Gresham| 7808 S Halsted St|Chicago| IL|60620|http://home.chica...|312-745-3617|312-745-3649|312-745-3639| 1172283.013| 1853022.646|41.75213684|-87.64422891|(41.7521368378, -...|\n",
- "| 11| Harrison| 3151 W Harrison St|Chicago| IL|60612|http://home.chica...|312-746-8386|312-746-4281|312-746-5151| 1155244.069| 1897148.755|41.87358229|-87.70548813|(41.8735822883, -...|\n",
- "| 16| Jefferson Park|5151 N Milwaukee Ave|Chicago| IL|60630|http://home.chica...|312-742-4480|312-742-4421|312-742-4423| 1138480.758| 1933660.473|41.97409445|-87.76614884|(41.9740944511, -...|\n",
- "| Headquarters| Headquarters| 3510 S Michigan Ave|Chicago| IL|60653|http://home.chica...| null| null| null| 1177731.401| 1881697.404|41.83070169|-87.62339535|(41.8307016873, -...|\n",
- "| 24| Rogers Park| 6464 N Clark St|Chicago| IL|60626|http://home.chica...|312-744-5907|312-744-6928|312-744-7603| 1164193.588| 1943199.401|41.99976348|-87.67132429|(41.9997634842, -...|\n",
- "| 2| Wentworth|5101 S Wentworth Ave|Chicago| IL|60609|http://home.chica...|312-747-8366|312-747-5396|312-747-6656| 1175864.837| 1871153.753|41.80181109|-87.63056018|(41.8018110912, -...|\n",
- "| 7| Englewood| 1438 W 63rd St|Chicago| IL|60636|http://home.chica...|312-747-8223|312-747-6558|312-747-6652| 1167659.235| 1863005.522|41.77963154|-87.66088702|(41.7796315359, -...|\n",
- "| 25| Grand Central| 5555 W Grand Ave|Chicago| IL|60639|http://home.chica...|312-746-8605|312-746-4353|312-746-8383| 1138770.871| 1913442.439|41.91860889|-87.76557448|(41.9186088912, -...|\n",
- "| 10| Ogden| 3315 W Ogden Ave|Chicago| IL|60623|http://home.chica...|312-747-7511|312-747-7429|312-747-7471| 1154500.753| 1890985.501|41.85668453|-87.70838196|(41.8566845327, -...|\n",
- "| 15| Austin| 5701 W Madison St|Chicago| IL|60644|http://home.chica...|312-743-1440|312-743-1366|312-743-1485| 1138148.815| 1899399.078|41.88008346|-87.76819989|(41.8800834614, -...|\n",
- "| 3| Grand Crossing|7040 S Cottage Gr...|Chicago| IL|60637|http://home.chica...|312-747-8201|312-747-5479|312-747-9168| 1182739.183| 1858317.732|41.76643089|-87.60574786|(41.7664308925, -...|\n",
- "| 14| Shakespeare|2150 N California...|Chicago| IL|60647|http://home.chica...|312-744-8250|312-744-2422|312-744-8260| 1157304.426| 1914481.521|41.92110332|-87.69745182|(41.9211033246, -...|\n",
- "| 8| Chicago Lawn| 3420 W 63rd St|Chicago| IL|60629|http://home.chica...|312-747-8730|312-747-8545|312-747-8116| 1154575.242| 1862672.049|41.77898719|-87.70886382|(41.778987189, -8...|\n",
- "| 4| South Chicago| 2255 E 103rd St|Chicago| IL|60617|http://home.chica...|312-747-7581|312-747-5276|312-747-9169| 1193131.299| 1837090.265|41.70793329|-87.56834912|(41.7079332906, -...|\n",
- "| 20| Lincoln| 5400 N Lincoln Ave|Chicago| IL|60625|http://home.chica...|312-742-8714|312-742-8803|312-742-8841| 1158399.146| 1935788.826|41.97954951|-87.69284451|(41.9795495131, -...|\n",
- "| 18| Near North| 1160 N Larrabee St|Chicago| IL|60610|http://home.chica...|312-742-5870|312-742-5771|312-742-5773| 1172080.029| 1908086.527|41.90324165|-87.64335214|(41.9032416531, -...|\n",
- "| 12| Near West|1412 S Blue Islan...| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "|\",Chicago,IL,6060...| -87.6569725149)\"| null| null| null| null| null| null| null| null| null| null| null| null| null|\n",
- "| 9| Deering| 3120 S Halsted St|Chicago| IL|60608|http://home.chica...|312-747-8227|312-747-5329|312-747-9172| 1171440.24| 1884085.224|41.83739443|-87.64640771|(41.8373944311, -...|\n",
- "+--------------------+-----------------+--------------------+-------+-----+-----+--------------------+------------+------------+------------+------------+------------+-----------+------------+--------------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "hY05yWIEJIdb",
- "colab_type": "text"
- },
- "source": [
- "**The reported crimes dataset has only the district number. Add the district name by joining with the police station dataset.**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "QePrc33RI25U",
- "colab_type": "code",
- "outputId": "41dd8527-bcdd-4a12-faa2-8cce3844b207",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "# Cache the crimes dataset to speed things up; since Spark evaluates lazily, run an action (count) so the cache is actually materialized.\n",
- "df.cache()\n",
- "df.count()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "259013"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 47
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "lSsxX7qoJinQ",
- "colab_type": "code",
- "outputId": "0858b3ba-c6aa-4293-e807-66a789cc5eb3",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 901
- }
- },
- "source": [
- "ps.select(col('DISTRICT')).distinct().show()\n",
- "df.select(col('District')).distinct().show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+------------+\n",
- "| DISTRICT|\n",
- "+------------+\n",
- "| 7|\n",
- "| 15|\n",
- "| 11|\n",
- "| 3|\n",
- "| 8|\n",
- "| 22|\n",
- "| 16|\n",
- "| 5|\n",
- "| 18|\n",
- "| 17|\n",
- "| 6|\n",
- "| 19|\n",
- "| 25|\n",
- "|Headquarters|\n",
- "| 24|\n",
- "| 9|\n",
- "| 1|\n",
- "| 20|\n",
- "| 10|\n",
- "| 4|\n",
- "+------------+\n",
- "only showing top 20 rows\n",
- "\n",
- "+--------+\n",
- "|District|\n",
- "+--------+\n",
- "| 009|\n",
- "| 012|\n",
- "| 024|\n",
- "| 031|\n",
- "| 015|\n",
- "| 006|\n",
- "| 019|\n",
- "| 020|\n",
- "| 011|\n",
- "| 025|\n",
- "| 003|\n",
- "| 005|\n",
- "| 016|\n",
- "| 018|\n",
- "| 008|\n",
- "| 022|\n",
- "| 001|\n",
- "| 014|\n",
- "| 010|\n",
- "| 004|\n",
- "+--------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "7keEhyBMJsFT",
- "colab_type": "code",
- "outputId": "bf93a549-5b5e-45d2-e939-3509a44573a8",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 459
- }
- },
- "source": [
- "# Left-pad the police station DISTRICT with zeros (e.g. '6' -> '006') so it matches the District format in the crime data\n",
- "from pyspark.sql.functions import lpad\n",
- "ps = ps.withColumn('Format_district',lpad(col('DISTRICT'),3,'0'))\n",
- "ps.select(col('Format_district')).show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+---------------+\n",
- "|Format_district|\n",
- "+---------------+\n",
- "| 001|\n",
- "| 006|\n",
- "| 011|\n",
- "| 016|\n",
- "| Hea|\n",
- "| 024|\n",
- "| 002|\n",
- "| 007|\n",
- "| 025|\n",
- "| 010|\n",
- "| 015|\n",
- "| 003|\n",
- "| 014|\n",
- "| 008|\n",
- "| 004|\n",
- "| 020|\n",
- "| 018|\n",
- "| 012|\n",
- "| \",C|\n",
- "| 009|\n",
- "+---------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
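- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Note that `lpad` truncates values longer than 3 characters, so the non-numeric rows above (such as 'Hea') can never match a crime District and simply contribute nothing to the left join below. A hypothetical clean-up sketch, if you wanted to drop them explicitly:\n",
- "\n",
- "```python\n",
- "# Keep only numeric district codes before padding/joining (hypothetical ps_clean dataframe)\n",
- "ps_clean = ps.filter(col('DISTRICT').rlike('^[0-9]+$'))\n",
- "```"
- ]
- },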
- {
- "cell_type": "code",
- "metadata": {
- "id": "U7Py4EYyKJTN",
- "colab_type": "code",
- "outputId": "c5da5c45-e024-46b9-a148-db4a7ce7da6a",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 479
- }
- },
- "source": [
- "# Execute the join and drop some columns so the output stays readable\n",
- "df.join(ps, df.District == ps.Format_district, 'left_outer').drop(\n",
- " 'ADDRESS',\n",
- " 'CITY',\n",
- " 'STATE',\n",
- " 'ZIP',\n",
- " 'WEBSITE',\n",
- " 'PHONE',\n",
- " 'FAX',\n",
- " 'TTY',\n",
- " 'X COORDINATE',\n",
- " 'Y COORDINATE',\n",
- " 'LATITUDE',\n",
- " 'LONGITUDE',\n",
- " 'LOCATION').show(truncate=False)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+-------------------------+----+--------------------------+-------------------------------------------------+--------------------------------------+------+--------+----+--------+----+--------------+--------+----+----------------------+--------+--------------+---------------+\n",
- "|ID |Case Number|Date |Block |IUCR|Primary Type |Description |Location Description |Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|Year|Updated On |DISTRICT|DISTRICT NAME |Format_district|\n",
- "+--------+-----------+-------------------+-------------------------+----+--------------------------+-------------------------------------------------+--------------------------------------+------+--------+----+--------+----+--------------+--------+----+----------------------+--------+--------------+---------------+\n",
- "|12040452|JD220630 |2019-11-21 12:00:00|004XX N WABASH AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |RESIDENCE |false |false |1834|018 |42 |8 |11 |2019|05/01/2020 03:49:55 PM|18 |Near North |018 |\n",
- "|12039199|JD219081 |2019-04-01 08:00:00|021XX S INDIANA AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |RESIDENCE |false |false |0132|001 |3 |33 |11 |2019|05/01/2020 03:48:05 PM|1 |Central |001 |\n",
- "|12040466|JD220650 |2019-11-07 12:00:00|026XX N NEW ENGLAND AVE |1152|DECEPTIVE PRACTICE |ILLEGAL USE CASH CARD |RESIDENCE |false |false |2512|025 |36 |18 |11 |2019|05/01/2020 03:49:55 PM|25 |Grand Central |025 |\n",
- "|11938499|JD100565 |2019-12-31 11:30:00|070XX S COTTAGE GROVE AVE|0460|BATTERY |SIMPLE |OTHER (SPECIFY) |true |false |0321|003 |6 |42 |08B |2019|05/01/2020 03:48:05 PM|3 |Grand Crossing|003 |\n",
- "|11864640|JC477069 |2019-10-18 09:56:00|002XX S LAVERGNE AVE |041A|BATTERY |AGGRAVATED - HANDGUN |SIDEWALK |false |false |1533|015 |28 |25 |04B |2019|05/01/2020 03:48:05 PM|15 |Austin |015 |\n",
- "|11766889|JC359918 |2019-07-17 15:07:00|024XX W FOSTER AVE |0610|BURGLARY |FORCIBLE ENTRY |RESIDENCE - PORCH / HALLWAY |true |false |2011|020 |40 |4 |05 |2019|05/01/2020 03:48:05 PM|20 |Lincoln |020 |\n",
- "|11740807|JC328262 |2019-06-30 05:04:00|048XX W AUGUSTA BLVD |041A|BATTERY |AGGRAVATED - HANDGUN |SIDEWALK |false |false |1531|015 |37 |25 |04B |2019|05/01/2020 03:48:05 PM|15 |Austin |015 |\n",
- "|11718084|JC300874 |2019-05-13 13:00:00|062XX S EMERALD DR |1750|OFFENSE INVOLVING CHILDREN|CHILD ABUSE |GOVERNMENT BUILDING / PROPERTY |false |true |0711|007 |16 |68 |08B |2019|05/01/2020 03:48:05 PM|7 |Englewood |007 |\n",
- "|11647696|JC215453 |2019-04-07 17:56:00|062XX S FRANCISCO AVE |041A|BATTERY |AGGRAVATED - HANDGUN |ALLEY |false |false |0823|008 |16 |66 |04B |2019|05/01/2020 03:48:05 PM|8 |Chicago Lawn |008 |\n",
- "|12039931|JD220113 |2019-10-18 08:00:00|120XX S LAFLIN ST |2826|OTHER OFFENSE |HARASSMENT BY ELECTRONIC MEANS |RESIDENCE |false |false |0524|005 |34 |53 |26 |2019|04/30/2020 03:50:06 PM|5 |Calumet |005 |\n",
- "|12039974|JD220130 |2019-12-24 23:00:00|049XX N ALBANY AVE |0890|THEFT |FROM BUILDING |RESIDENCE |false |false |1713|017 |33 |14 |06 |2019|04/30/2020 03:50:06 PM|17 |Albany Park |017 |\n",
- "|12039885|JD219909 |2019-10-01 02:00:00|021XX S HARDING AVE |1752|OFFENSE INVOLVING CHILDREN|AGGRAVATED CRIMINAL SEXUAL ABUSE BY FAMILY MEMBER|RESIDENCE |false |true |1014|010 |24 |29 |17 |2019|04/30/2020 03:50:06 PM|10 |Ogden |010 |\n",
- "|12039884|JD220045 |2019-07-01 12:00:00|026XX W BALMORAL AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |RESIDENCE |false |false |2011|020 |40 |4 |11 |2019|04/30/2020 03:50:06 PM|20 |Lincoln |020 |\n",
- "|12038367|JD218291 |2019-04-21 12:00:00|071XX S CYRIL AVE |0890|THEFT |FROM BUILDING |APARTMENT |false |false |0333|003 |5 |43 |06 |2019|04/30/2020 03:47:59 PM|3 |Grand Crossing|003 |\n",
- "|12038544|JD218327 |2019-12-22 13:00:00|028XX S CHRISTIANA AVE |5002|OTHER OFFENSE |OTHER VEHICLE OFFENSE |STREET |false |false |1032|010 |22 |30 |26 |2019|04/30/2020 03:47:59 PM|10 |Ogden |010 |\n",
- "|12038308|JD218196 |2019-07-22 08:00:00|052XX S ARCHER AVE |0910|MOTOR VEHICLE THEFT |AUTOMOBILE |PARKING LOT / GARAGE (NON RESIDENTIAL)|false |false |0815|008 |23 |57 |07 |2019|04/30/2020 03:47:59 PM|8 |Chicago Lawn |008 |\n",
- "|12040067|JD220150 |2019-12-15 18:00:00|057XX W ROOSEVELT RD |0265|CRIMINAL SEXUAL ASSAULT |AGGRAVATED - OTHER |HOSPITAL BUILDING / GROUNDS |false |false |1513|015 |29 |25 |02 |2019|04/30/2020 03:50:06 PM|15 |Austin |015 |\n",
- "|12039858|JD219964 |2019-09-03 09:00:00|003XX S SPRINGFIELD AVE |1154|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT $300 AND UNDER |APARTMENT |false |false |1133|011 |28 |26 |11 |2019|04/30/2020 03:50:06 PM|11 |Harrison |011 |\n",
- "|12038342|JD218257 |2019-05-01 00:00:00|002XX N MENARD AVE |1154|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT $300 AND UNDER |RESIDENCE |false |false |1512|015 |29 |25 |11 |2019|04/30/2020 03:47:59 PM|15 |Austin |015 |\n",
- "|12038644|JD218194 |2019-06-01 13:45:00|003XX N MENARD AVE |1153|DECEPTIVE PRACTICE |FINANCIAL IDENTITY THEFT OVER $ 300 |null |false |false |1512|015 |29 |25 |11 |2019|04/30/2020 03:47:59 PM|15 |Austin |015 |\n",
- "+--------+-----------+-------------------+-------------------------+----+--------------------------+-------------------------------------------------+--------------------------------------+------+--------+----+--------+----+--------------+--------+----+----------------------+--------+--------------+---------------+\n",
- "only showing top 20 rows\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "vj0mPaHU5i5n",
- "colab_type": "text"
- },
- "source": [
- "As you can see, we have done a left outer join between the two dataframes. PySpark supports the following join types (a short sketch of passing the join type follows the list):\n",
- "1. inner (default)\n",
- "2. cross\n",
- "3. outer\n",
- "4. full\n",
- "5. full_outer\n",
- "6. left\n",
- "7. left_outer\n",
- "8. right\n",
- "9. right_outer\n",
- "10. left_semi\n",
- "11. left_anti"
- ]
- },
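- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The join type is passed as the third argument of `join()`. A minimal sketch, reusing the `df` and `ps` dataframes from above:\n",
- "\n",
- "```python\n",
- "# Inner join: keep only crimes whose district has a matching police station row\n",
- "inner_df = df.join(ps, df.District == ps.Format_district, 'inner')\n",
- "\n",
- "# Left anti join: keep only crimes with no matching police station row\n",
- "anti_df = df.join(ps, df.District == ps.Format_district, 'left_anti')\n",
- "```"
- ]
- },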
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "EEEB2TVqL4Ie",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Hands-on again!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "50gdkX-_MEMv",
- "colab_type": "text"
- },
- "source": [
- "**What is the most frequently reported non-criminal activity?**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "pbjuihaOL-fM",
- "colab_type": "code",
- "colab": {}
- },
- "source": [
- ""
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "UdMXUsjKME2y",
- "colab_type": "text"
- },
- "source": [
- "**Which day of the week has the most reported crimes?**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "7Sdq1ZwRMN0p",
- "colab_type": "code",
- "colab": {}
- },
- "source": [
- ""
- ],
- "execution_count": 0,
- "outputs": []
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "REpHzg_nNctg",
- "colab_type": "text"
- },
- "source": [
- "**Using a bar chart, plot which day of the week has the most reported crimes.**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "j5olv2qkNfPa",
- "colab_type": "code",
- "outputId": "60bc50d8-9213-425d-da1b-5b73b2aa839a",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 51
- }
- },
- "source": [
- "from pyspark.sql.functions import date_format, col\n",
- "# Collect the per-day counts once, then split them into two lists\n",
- "rows = df.groupBy(date_format(col('Date'),'E')).count().collect()\n",
- "dow = [x[0] for x in rows]\n",
- "print(dow)\n",
- "cnt = [x[1] for x in rows]\n",
- "print(cnt)"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "['Sun', 'Mon', 'Thu', 'Sat', 'Wed', 'Tue', 'Fri']\n",
- "[36026, 36930, 36235, 37978, 36095, 36855, 38894]\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "gzOCDOqFOOc6",
- "colab_type": "code",
- "outputId": "8cb2e216-8272-4036-dd7b-ddf3b4abcdc8",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 204
- }
- },
- "source": [
- "import pandas as pd\n",
- "import matplotlib.pyplot as plt\n",
- "\n",
- "cp = pd.DataFrame({'Day_of_week':dow, 'Count':cnt})\n",
- "cp.head()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- " Day_of_week Count\n",
- "0 Sun 36026\n",
- "1 Mon 36930\n",
- "2 Thu 36235\n",
- "3 Sat 37978\n",
- "4 Wed 36095"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 52
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "J4pxrZeSOvIS",
- "colab_type": "code",
- "outputId": "bd39d130-024a-4fa5-d013-aefdade2e701",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 324
- }
- },
- "source": [
- "cp.sort_values('Count', ascending=False).plot(kind='bar', color= 'red', x='Day_of_week', y='Count')\n",
- "plt.xlabel(\"Day of week\")\n",
- "plt.ylabel(\"Number of reported crimes\")\n",
- "plt.title(\"No.of reported crimes per day\")"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "Text(0.5, 1.0, 'No.of reported crimes per day')"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 53
- },
- {
- "output_type": "display_data",
- "data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAZEAAAEiCAYAAAA4f++MAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO3deZyVZf3/8ddbQCFRcSFSUEHFyuUr6rhkZmWpWCpmVtoimYmWln5LU9sorczKJVNTXBJbRH8uSaYRmbgvDIoLLl8nRRlyQVABTQz8/P64rpHjOHPmnsOcOXOY9/PxOI9z7uvePucwnM+5r+u+rksRgZmZWSVWqXUAZmZWv5xEzMysYk4iZmZWMScRMzOrmJOImZlVzEnEzMwq5iRiPY6kD0p6QtJiSfvXOp6uJulHkv7QRce6UdLYrjhWvZA0W9LHax2HJU4iVkj+j/uCpNVLyr4qaVoVTncycE5EDIyIP1fh+CukJ32JRcTeETGx1nFY7+UkYp3RBzimG86zMTCryIaS+lY5lpqcqyNKVqr/vz3p87XiVqo/Qqu6XwLHSRrU1kpJu0iaLumV/LxLeweSdLikJkkLJE2WtEEu/xewCfCXXJ21Whv7zpZ0gqQHgVcl9ZW0s6Q7Jb0s6QFJHynZfpqkUyXdK2mhpOskrVOyfj9Js/K+0yS9v8y5Lgc2KonvO3m7cucfIekWSYskTQXWK/chSxojaWaO9V+SRpe8j59KugN4Ddgkl301r/+ypDsknZnjeDL/m3xZ0px8JTm25DyrSfqVpGckPS/pfEkD8rr1JF2fj7NA0m3tJS1JIemb+XwvSvpl6baSviLpUUkvSZoiaeNW+x4l6QngiXaO/yVJT0uaL+l7rdbtKOmuHOezks6RtGped66k01ttP1nS/5b7/K2TIsIPPzp8ALOBjwPXAD/JZV8FpuXX6wAvAV8C+gIH5+V12zjW7sCLwHbAasBvgFtbn6uDWGYCGwIDgKHAfOATpB9Ge+TlwXn7acBcYCtgdeBq4A953ebAq3mffsB3gCZg1bbO1VZ8Bc5/F3BGfq+7AYtazt/Ge9sReCUfY5V87PeVvI9ngC3zZ9wvl301r/8ysBQ4lHTV+JO8/bn53Hvmcw/M258JTM7/dmsAfwFOzetOBc7P5+gHfAhQOzEHcHM+zkbA/5XENCZ/nu/PMX8fuLPVvlPzvgPaOPYWwOL8ua2WP8elLZ8/sD2wcz72cOBR4NiSz/LfwCp5eT1S8h1S6/9PK9Oj5gH4UR8PlieRrfKX3GDenkS+BNzbap+7gC+3cayLgV+ULA8E/gsMLz1XB7F8pWT5BOD3rbaZAozNr6cBPy9ZtwXwRv6i/QFwZcm6VUgJ5yNtnaut+MqdP3+pLgVWL1n3J9pPIhcAZ7azbhpwchtlpUnkiZJ1W+cv6SElZfOBUYBIyXPTknUfAJ7Kr08GrgM2K/C3EcDokuWvAzfl1zcCh7X6fF8DNi7Zd/cyx/4hMKlkefX8b9fm3wdwLHBtyfKjwB759dHADbX+v7SyPVydZZ0SEQ8D1wMntlq1AfB0q7KnSb+kW3vbthGxmPTl1ta27ZlT8npj4DO5SuNlSS8DuwLrt7P906Rf1+u1Ecubeduh7ezblnLn3wB4KSJebXX+9mwI/KvM+o5ieb7k9X8AIqJ12UDSj4B3ATNKYv5bLodUddkE/D1XU7X+9y4X19Ok9w3ps/l1yTkWkBJY0c93g9L1+XOc37IsafNc7facpIXAz3h7deFE4Iv59ReB33fwPqyTnESsEuOBw3n7F8G/SV8YpTYi/apv7W3bKt3xtW4727andPjpOaQrgUElj9Uj4ucl22zYKq7/kqrUWseivG1pLK2Hum69XO78zwJrq+Sutnz+9swBNi2zvquG3X6RlFC2LIl5rYgYCBARiyLi2xGxCbAf8C1JHytzvNaf77/z6znAEa0+mwERcWfB9/Rs6bElvYv0t9Lit8BjwMiIWBP4LilJtfgDMEbSNqQqtR53t1+9cxKxTouIJuAK4JslxTcAm0v6fG7o/hyp2uj6Ng5xOXCopFG54fxnwD0RMbvCkP4A7CtpL0l9JPWX9BFJw0q2+aKkLfKX0MnAVRGxDLgS+KSkj0nqB3wbWALc+Y6zLPc8qfG/w/NHxNNAI/BjSatK2hXYt8yxLyZ9Nh+TtIqkoZLe19kPpCP5iutC4ExJ7wbI59orv95H0mY5qb4CLAPeLHPI4yWtLWlD0h18V+Ty84GTJG2Zj7uWpM90ItSrgH0k7ZobzE/m7d9bawALgcX5c/paq/fZDEwnXYFcHRH/6cS5rQAnEavUyaT6aQAiYj6wD+lLeD6pgXqfiHgRQOnupy/kbf9Baou4mvRLc1PgoEoDiYg5pAbc7wLzSL9+j+ftf9+/By4FngP6kxNgRDxOqub4DenX+b7AvhHxRplTngp8P1fRHFfg/J8HdiJV5YwHLivzXu4lNYyfSfryvoV3XuF1lRNIVVZ356qgfwDvzetG5uXFpLat8yLi5jLHug6YQboJ4a+kZEhEXAucBkzK53gY2LtogBExCziK1I70LOlmjeaSTY4jfb6LSEnxitbHIFVpbY2rsqpCEZ6UylZuSh0i/xARF9U6lpWRpCBVJzXVOpa2SNqNdLW4cfgLr8v5SsTMVlq5ivIY4CInkOpwEjGzlZJSp9GXSXfJnVXjcFZars4yM7OK+UrEzMwq5iRiZmYV63WjZq633noxfPjwWodhZlZXZsyY8WJEDG5dXvUkIqkPqbPV3IjYR9IIYBKp1+kM4EsR8UbudHYZaUC1+cDnWjqfSToJOIzU4embETEll48Gfk0aA+miVj2U2zR8+HAaGxu7+F2ama3cJLU5XE93VGcdQxoErcVppAHmNiN1HDoslx9GGmNoM1JHq9MAJG1B6oi2JTAaOC/3Cu5DGp10b1LP6IPztmZm1k2qmkTysBOfBC7KyyINA35V3mQi0DL96Zi8TF7/sbz9GNIonksi4ilSD9sd86MpIp7MvYsn5W3NzKybVPtK5CzS8BctY+6sC7wcEUvzcjPLB/EbSh6tM69/JW//VnmrfdorfwdJ4yQ1SmqcN2/eir4nMzPLqtYmImkf4IWImKGSWd5qISImABMAGhoa3DHGzNr03//+l+bmZl5//fVah1Iz/fv3Z9iwYfTr16/Q9tVsWP8gsJ+kT5AGvFuT1Ag+SFLffLUxjOVDbs8lDfncrDTX8lqkBvaW8hal+7RXbmbWac3NzayxxhoMHz6cVJveu0QE8+fPp7m5mREjRhTap2rVWRFxUkQMi4jhpIbxf0bEF0jTaB6YNxtLGv0T0jSdLfM/H5i3j1x+kNJ80CNIo4veSxreeaTS/NWr5nNMrtb7MbOV3+uvv866667bKxMIgCTWXXfdTl2J1aKfyAmkYaF/AtxPHjI6P/9eUhNpyOyDIA0FLelK4BHSNKNH5XkgkHQ0aRrSPsAledhoM7OK9dYE0qKz779bkkhETCPNBU1EPEm6s6r1Nq8DbU5WExE
/BX7aRvkNpMmQzMxWCs899xzHHnss06dPZ9CgQQwZMoSzzjqLzTffvEuOP23aNFZddVV22WWXLjler+ux3mnV/lXiATDNeq6u/v/fwf/3iOBTn/oUY8eOZdKkSQA88MADPP/8812aRAYOHNhlScRjZ5mZ9RA333wz/fr148gjj3yrbJtttmHXXXfl+OOPZ6uttmLrrbfmiivSBI7Tpk1jn332eWvbo48+mksvvRRIo3OMHz+e7bbbjq233prHHnuM2bNnc/7553PmmWcyatQobrvtthWO2VciZmY9xMMPP8z222//jvJrrrmGmTNn8sADD/Diiy+yww47sNtuu3V4vPXWW4/77ruP8847j1/96ldcdNFFHHnkkQwcOJDjjjuuS2L2lYiZWQ93++23c/DBB9OnTx+GDBnChz/8YaZPn97hfgcccAAA22+/PbNnz65KbE4iZmY9xJZbbsmMGTMKb9+3b1/efPPNt5Zb35q72mqrAdCnTx+WLl1KNTiJrMyk6j7MrEvtvvvuLFmyhAkTJrxV9uCDDzJo0CCuuOIKli1bxrx587j11lvZcccd2XjjjXnkkUdYsmQJL7/8MjfddFOH51hjjTVYtGhRl8XsNhEzsx5CEtdeey3HHnssp512Gv3792f48OGcddZZLF68mG222QZJ/OIXv+A973kPAJ/97GfZaqutGDFiBNtuu22H59h333058MADue666/jNb37Dhz70oRWLubfNsd7Q0BCdmk+knm/xrefYzWrg0Ucf5f3vf3+tw6i5tj4HSTMioqH1tq7OMjOzijmJmJlZxdwmYj2Xq+PMejwnEbNqcRKsSxHRqwdh7Gw7uauzzMyy/v37M3/+/E5/ka4sWuYT6d+/f+F9fCViZpYNGzaM5uZmevM02i0zGxblJGJmbeuF1XH9+vUrPKOfJU4iZrZyqvckWCfxu03EzMwqVrUkIqm/pHslPSBplqQf5/JLJT0laWZ+jMrlknS2pCZJD0raruRYYyU9kR9jS8q3l/RQ3uds9eZbKszMaqCa1VlLgN0jYrGkfsDtkm7M646PiKtabb83MDI/dgJ+C+wkaR1gPNAABDBD0uSIeClvczhwD2ma3NHAjZiZWbeo2pVIJIvzYr/8KFcJNwa4LO93NzBI0vrAXsDUiFiQE8dUYHRet2ZE3B3pfrzLgP2r9X7MzOydqtomIqmPpJnAC6REcE9e9dNcZXWmpNVy2VBgTsnuzbmsXHlzG+VmZtZNqppEImJZRIwChgE7StoKOAl4H7ADsA5wQjVjAJA0TlKjpMbefP+3mVlX65a7syLiZeBmYHREPJurrJYAvwN2zJvNBTYs2W1YLitXPqyN8rbOPyEiGiKiYfDgwV3xlszMjOrenTVY0qD8egCwB/BYbssg30m1P/Bw3mUycEi+S2tn4JWIeBaYAuwpaW1JawN7AlPyuoWSds7HOgS4rlrvx8zM3qmad2etD0yU1IeUrK6MiOsl/VPSYEDATODIvP0NwCeAJuA14FCAiFgg6RSgZVb6kyNiQX79deBSYADprizfmWVm1o08s2FH6qTXaJvqOXZw/B1x/OU5/vI6Gb9nNjQzsy7nJGJmZhVzEjEzs4o5iZiZWcWcRMzMrGJOImZmVjEnETMzq1iHSUTSZyStkV9/X9I1pXN9mJlZ71XkSuQHEbFI0q7Ax4GLSfN4mJlZL1ckiSzLz58EJkTEX4FVqxeSmZnViyJJZK6kC4DPATfk+T/clmJmZoWSwWdJI+nulYd0Xwc4vqpRmZlZXegwiUTEa6SZCXfNRUuBJ6oZlJmZ1Ycid2eNJ80+eFIu6gf8oZpBmZlZfShSnfUpYD/gVYCI+DewRjWDMjOz+lAkibwRadKRAJC0enVDMjOzelEkiVyZ784aJOlw4B/AhdUNy8zM6kGH0+NGxK8k7QEsBN4L/DAiplY9MjMz6/EK9ffISeMU4GfADEnrdLSPpP6S7pX0gKRZkn6cy0dIukdSk6QrJK2ay1fLy015/fCSY52Uyx+XtFdJ+ehc1iTpxE69czMzW2FF7s46QtJzwINAIzAjP3dkCbB7RGwDjAJGS9oZOA04MyI2A14CDsvbHwa8lMvPzNshaQvgIGBLYDRwnqQ+kvoA5wJ7A1sAB+dtzcysmxS5EjkO2CoihkfEJhExIiI26WinSBbnxX75EcDuwFW5fCKwf349Ji+T139MknL5pIhYEhFPAU3AjvnRFBFPRsQbwKS8rZmZdZMiSeRfwGuVHDxfMcwkdVacmo/1ckQszZs0A0Pz66HAHIC8/hVg3dLyVvu0V95WHOMkNUpqnDdvXiVvxczM2tBhwzqpk+Gdku4hVVEBEBHf7GjHiFgGjJI0CLgWeF+lga6IiJgATABoaGiIWsRgZrYyKpJELgD+CTwEvFnJSSLiZUk3Ax8g3SrcN19tDAPm5s3mAhsCzZL6AmsB80vKW5Tu0165mZl1gyJJpF9EfKuzB5Y0GPhvTiADgD1IjeU3AweS2jDGAtflXSbn5bvy+n9GREiaDPxJ0hnABsBI4F5AwEhJI0jJ4yDg852N08zMKlckidwoaRzwF95enbWgg/3WBybmu6hWAa6MiOslPQJMkvQT4H7SJFfk599LagIWkJICETFL0pXAI6TBH4/K1WRIOpo0wnAf4JKImFXkTZuZWddQGtGkzAbSU20UR5E7tHqihoaGaGwscodyJlUvGIAOPv8VUs+xg+PviOMvz/GX18n4Jc2IiIbW5UV6rI/o1JnMzKzXaDeJSNo9Iv4p6YC21kfENdULy8zM6kG5K5EPk+7K2reNdQE4iZiZ9XLtJpGIGC9pFeDGiLiyG2MyM7M6UbbHekS8CXynm2IxM7M6U2TYk39IOk7ShpLWaXlUPTIzM+vxivQT+Vx+PqqkLIC6vMXXzMy6jm/xNTOzihWZT+SoPIBiy/Lakr5e3bDMzKweFGkTOTwiXm5ZiIiXgMOrF5KZmdWLIkmkT54cCkhzhACrVi8kMzOrF0Ua1v8GXCHpgrx8RC4zM7NerkgSOQEYB3wtL08FLqpaRGZmVjeK3J31JnB+fpiZmb2lSJuImZlZm5xEzMysYk4iZmZWsXLzifyFNLxJmyJiv6pEZGZmdaPclcivgNOBp4D/ABfmx2LgXx0dOA/YeLOkRyTNknRMLv+RpLmSZubHJ0r2OUlSk6THJe1VUj46lzVJOrGkfISke3L5FZLcf8XMrBuVm0/kFgBJp7eaV/cvkopMUr4U+HZE3CdpDWCGpKl53ZkR8avSjSVtARwEbAlsQBo9ePO8+lxgD6AZmC5pckQ8ApyWjzVJ0vnAYcBvC8RmZmZdoEibyOqS3hqxV9IIYPWOdoqIZyPivvx6EfAoMLTMLmOASRGxJCKeApqAHfOjKSKejIg3gEnAmNyLfnfgqrz/RGD/Au/HzMy6SJEk8r/ANEnTJN0C3Awc25mTSBoObAvck4uOlvSgpEskrZ3LhgJzSnZrzmXtla8LvBwRS1uVt3X+cZIaJTXOmzevM6GbmVkZHSaRiPgbMBI4Bvgm8N6ImFL0BJIGAlcDx0
bEQlJ106bAKOBZUrtLVUXEhIhoiIiGwYMHV/t0Zma9RpGh4N8FHA8cHREPABtJ2qfIwSX1IyWQP0bENQAR8XxELMs94S8kVVcBzAU2LNl9WC5rr3w+MEhS31blZmbWTYpUZ/0OeAP4QF6eC/yko51ym8XFwKMRcUZJ+folm30KeDi/ngwcJGm13O4yErgXmA6MzHdirUpqfJ8cEUGqWjsw7z8WuK7A+zEzsy5SZADGTSPic5IOBoiI10qHhi/jg8CXgIckzcxl3wUOljSK1AdlNmlUYCJilqQrgUdId3YdFRHLACQdDUwB+gCXRMSsfLwTgEmSfgLcT0paZmbWTYokkTckDSB3PJS0KbCko50i4nagrWRzQ5l9fgr8tI3yG9raLyKeZHl1mJmZdbMiSeRHpPlDNpT0R9IVxqHVDMrMzOpDkaHg/y5pBrAz6crimIh4seqRmZlZj1fk7qybImJ+RPw1Iq6PiBcl3dQdwZmZWc9WbgDG/sC7gPVyh8CW9o01Kd/z3MzMeoly1VlHkHqmbwDMYHkSWQicU+W4zMysDpQbgPHXks4BvhsRp3RjTGZmVifKtonkfhoHdFMsZmZWZ4r0WL9J0qcLdjA0M7NepEgSOQL4f6ROhwslLZK0sMpxmZlZHSjST2SN7gjEzMzqT5Ee60jaD9gtL06LiOurF5KZmdWLIp0Nf06aS+SR/DhG0qnVDszMzHq+IlcinwBG5fk/kDSRNGLuSdUMzMzMer4iDesAg0per1WNQMzMrP4UuRI5Fbhf0s2kXuu7ASdWNSozM6sLRe7OulzSNGAH0pwiJ0TEc9UOzMzMer5Cd2eRpsbdlZRE+gLXVi0iMzOrG0XuzjoPOBJ4iDQf+hGSzi2w34aSbpb0iKRZko7J5etImirpify8di6XpLMlNUl6UNJ2Jccam7d/QtLYkvLtJT2U9znbverNzLpXkYb13YG9IuJ3EfE70t1auxfYbynw7YjYgjSh1VGStiC1p9wUESOBm1jevrI3MDI/xgG/hZR0gPHATqSpcMe3JJ68zeEl+40uEJeZmXWRIkmkCdioZHnDXFZWRDwbEffl14uAR0nzkIwBJubNJgL759djgMsiuRsYJGl9YC9gakQsiIiXgKnA6LxuzYi4OyICuKzkWGZm1g2KtImsATwq6V5Sm8iOQKOkyQARsV9HB5A0HNgWuAcYEhHP5lXPAUPy66HAnJLdmnNZufLmNsrNzKybFEkiP1yRE0gaCFwNHBsRC0ubLSIiJMWKHL9gDONIVWRstNFGHWxtZmZFdVidFRG3ALOBfvn1vcB9EXFLXm6XpH6kBPLHiLgmFz+fq6LIzy/k8rmkqrIWw3JZufJhbZS39R4mRERDRDQMHjy4g3dsZmZFFbk763DgKuCCXDQM+HOB/QRcDDwaEWeUrJoMtNxhNRa4rqT8kHyX1s7AK7naawqwp6S1c4P6nsCUvG6hpJ3zuQ4pOZaZmXWDItVZR5HaQe4BiIgnJL27wH4fBL4EPCRpZi77LvBz4EpJhwFPA5/N624g3fnVBLwGHJrPt0DSKcD0vN3JEbEgv/46cCkwALgxP8zMrJsUSSJLIuKNlrYMSX1JDexlRcTtpGFS2vKxNrYPUsJq61iXAJe0Ud4IbNVRLGZmVh1FbvG9RdJ3gQGS9iDNcviX6oZlZmb1oEgSOQGYR+qxfgSp2un71QzKzMzqQ9nqLEl9gFkR8T7gwu4JyczM6kXZK5GIWAY8LsmdK8zM7B2KNKyvDczKPdZfbSks0lPdzMxWbkWSyA+qHoWZmdWlIpNSle2VbmZmvVfROdbNzMzewUnEzMwq1m4SkXRTfj6t+8IxM7N6Uq5NZH1JuwD7SZpEqyFMWiacMjOz3qtcEvkh6c6sYcAZrdYFxabINTOzlVi7SSQirgKukvSDiDilG2MyM7M6UeQW31Mk7QfsloumRcT11Q3LzMzqQZFJqU4FjgEeyY9jJP2s2oGZmVnPV6TH+ieBURHxJoCkicD9pAmmzMysFyvaT2RQyeu1qhGImZnVnyJXIqcC90u6mXSb727AiVWNyszM6kKHVyIRcTmwM3ANcDXwgYi4oqP9JF0i6QVJD5eU/UjSXEkz8+MTJetOktQk6XFJe5WUj85lTZJOLCkfIemeXH6FpFWLv20zM+sKhaqzIuLZiJicH88VPPalwOg2ys+MiFH5cQOApC2Ag4At8z7nSeqTJ8U6F9gb2AI4OG8LcFo+1mbAS8BhBeMyM7MuUrWxsyLiVmBBwc3HAJMiYklEPAU0ATvmR1NEPBkRbwCTgDGSROrseFXefyKwf5e+ATMz61AtBmA8WtKDubpr7Vw2FJhTsk1zLmuvfF3g5YhY2qq8TZLGSWqU1Dhv3ryueh9mZr1e2SSSq5Qe68Lz/RbYFBgFPAuc3oXHbldETIiIhohoGDx4cHec0sysV+jWOdYj4vmIWJb7nFxIqq4CmAtsWLLpsFzWXvl8YJCkvq3KzcysGxWpzmqZY/0mSZNbHpWcTNL6JYufAlru3JoMHCRpNUkjgJHAvcB0YGS+E2tVUuP75IgI4GbgwLz/WOC6SmIyM7PKVW2OdUmXAx8B1pPUDIwHPiJpFGkU4NnAEQARMUvSlaRhVZYCR+WrICQdDUwB+gCXRMSsfIoTgEmSfkLqQX9xJXGamVnllH7Ud7CRtDEwMiL+IeldQJ+IWFT16KqgoaEhGhsbi+8gdbzNiijw+VesnmMHx98Rx1+e4y+vk/FLmhERDa3LiwzAeDjpVtoLctFQ4M+dOruZma2UirSJHAV8EFgIEBFPAO+uZlBmZlYfiiSRJbmjHwD5jqgqX8eZmVk9KJJEbpH0XWCApD2A/wf8pbphmZlZPSiSRE4E5gEPke6mugH4fjWDMjOz+lBketw380RU95CqsR6PIrd0mZnZSq/DJCLpk8D5wL9I84mMkHRERNxY7eDMzKxnK9LZ8HTgoxHRBCBpU+CvgJOImVkvV6RNZFFLAsmeBOqyo6GZmXWtdq9EJB2QXzZKugG4ktQm8hnSmFZmZtbLlavO2rfk9fPAh/PrecCAqkVkZmZ1o90kEhGHdmcgZmZWf4rcnTUC+AYwvHT7iNivemGZmVk9KHJ31p9Jw6z/BXizuuGYmVk9KZJEXo+Is6seiZmZ1Z0iSeTXksYDfweWtBRGxH1Vi8rMzOpCkSSyNfAlYHeWV2dFXjYzs16sSGfDzwCbRMSHI+Kj+dFhApF0iaQXJD1cUraOpKmSnsjPa+dySTpbUpOkByVtV7LP2Lz9E5LGlpRvL+mhvM/ZUrWnATMzs9aKJJGHgUEVHPtSYHSrshOBmyJiJHBTXgbYGxiZH+OA30JKOqS52XcCdgTGtySevM3hJfu1PpeZmVVZkSQyCHhM0hRJk1seHe0UEbcCC1oVjwEm5tcTgf1Lyi+L5G5gkKT1gb2AqRGxICJeAqYCo/O6NSPi7jyi8GUlxzIzs25SpE1kfBeeb0hEPJtfPwcMya+HAnNKtmvOZeXKm9soNzOzblRkPpFbqnHiiAhJ3TIviaRxpGoyNtpoo+44pZlZr
9BhdZakRZIW5sfrkpZJWljh+Z7PVVHk5xdy+Vxgw5LthuWycuXD2ihvU0RMiIiGiGgYPHhwhaGbmVlrHSaRiFgjItaMiDVJAy9+GjivwvNNBlrusBoLXFdSfki+S2tn4JVc7TUF2FPS2rlBfU9gSl63UNLO+a6sQ0qOZWZm3aRIw/pbcsP3n0kN3mVJuhy4C3ivpGZJhwE/B/aQ9ATw8bwMad72J4Em4ELg6/l8C4BTSEPPTwdOzmXkbS7K+/wLT5JlZtbt1NF06SXzikBKOg3AhyPiA9UMrFoaGhqisbGx+A7V7n5Szenq6zl2cPwdcfzlOf7yOhm/pBkR0dC6vMjdWaXziiwFZpNuyTUzs16uyN1ZnlfEzMzaVG563B+W2S8i4pQqxGNmZnWk3JXIq22UrQ4cBqxLavA2M7NerNz0uKe3vJa0BnAMcCgwCTi9vf3MzKz3KNsmkgdA/BbwBfXqaBAAAAzgSURBVNJYV9vlMazMzMzKton8EjgAmABsHRGLuy0qMzOrC+U6G34b2AD4PvDvkqFPFq3AsCdmZrYSKdcm0qne7GZm1vs4UZiZWcWcRMzMrGJOImZmVjEnETMzq5iTiJmZVcxJxMzMKuYkYmZmFXMSMTOzijmJmJlZxWqSRCTNlvSQpJmSGnPZOpKmSnoiP6+dyyXpbElNkh6UtF3Jccbm7Z+QNLYW78XMrDer5ZXIRyNiVMmcvScCN0XESOCmvAywNzAyP8YBv4W3RhgeD+wE7AiMb0k8ZmbWPXpSddYY0nDz5Of9S8ovi+RuYJCk9YG9gKkRsSAPTz8VGN3dQZuZ9Wa1SiIB/F3SDEnjctmQiHg2v34OGJJfDwXmlOzbnMvaK38HSeMkNUpqnDdvXle9BzOzXq/spFRVtGtEzJX0bmCqpMdKV0ZESIquOllETCDNi0JDQ0OXHdfMrLeryZVIRMzNzy8A15LaNJ7P1VTk5xfy5nOBDUt2H5bL2is3M7Nu0u1JRNLqec52JK0O7Ak8DEwGWu6wGgtcl19PBg7Jd2ntDLySq72mAHtKWjs3qO+Zy8zMrJvUojprCHCtpJbz/yki/iZpOnClpMOAp4HP5u1vAD4BNAGvAYcCRMQCSacA0/N2J0fEgu57G2Zmpoje1UTQ0NAQjY2NxXdIya56qvn513Ps4Pg74vjLc/zldTJ+STNKumS8pSfd4mtmZnXGScTMzCrmJGJmZhVzEjEzs4o5iZiZWcWcRMzMrGJOImZmVjEnETMzq5iTiJmZVcxJxMzMKuYkYmZmFXMSMTOzijmJmJlZxZxEzMysYk4iZmZWMScRMzOrmJOImZlVzEnEzMwqVvdJRNJoSY9LapJ0Yq3jMTPrTeo6iUjqA5wL7A1sARwsaYvaRmVm1nvUdRIBdgSaIuLJiHgDmASMqXFMZma9Rt9aB7CChgJzSpabgZ1abyRpHDAuLy6W9HgVY1oPeLHw1lL1Ium8eo4dHH+tOf7aqnb8G7dVWO9JpJCImABM6I5zSWqMiIbuOFdXq+fYwfHXmuOvrVrFX+/VWXOBDUuWh+UyMzPrBvWeRKYDIyWNkLQqcBAwucYxmZn1GnVdnRURSyUdDUwB+gCXRMSsGofVLdVmVVLPsYPjrzXHX1s1iV8RUYvzmpnZSqDeq7PMzKyGnETMzKxiTiJmZlaxum5YtxUnabWIWNJRmVWHpF2BkRHxO0mDgYER8VSt4ypC0lPAOxpVI2KTGoRjNeIksgIkrRkRCyWt09b6iFjQ3TFV4C5guwJlPVYeQ20IJX/PEfFM7SIqRtJ4oAF4L/A7oB/wB+CDtYyrE0o7tvUHPgO0+X+hJ5H0EG0kvxYR8T/dGM4K6Ql/+04iK+ZPwD7ADNIfZek4AgH02F9kkt5DGjZmgKRtWR77msC7ahZYJ0n6BjAeeB54MxcHUA9fBJ8CtgXuA4iIf0tao7YhFRcR81sVnSVpBvDDWsTTCfvk56Py8+/z8xdqEEvFesrfvpPICoiIfSQJ+HA9/PJtZS/gy6Re/meUlC8CvluLgCp0DPDeNr7Q6sEbERGSAkDS6rUOqDMklV6trkK6Munx3ykR8TSApD0iYtuSVSdKug+olyklesTffo//B+/p8pfAX4Gtax1LZ0TERGCipE9HxNW1jmcFzAFeqXUQFbpS0gXAIEmHA18BLqxxTJ1xesnrpcBs4LO1CaUikvTBiLgjL+xCfd1s1CP+9t3ZsAtImgicExHTax1LJSR9EtiSVK8NQEScXLuIipN0MalN4a/AWzcDRMQZ7e7Ug0jaA9iTVJ04JSKm1jikXkPS9sAlwFq56GXgKxFxX+2iKq6n/O37SqRr7AR8UdJs4FXSF0LUQwOdpPNJbSAfBS4CDgTurWlQnfNMfqyaH3UlJ426TBySVgM+DQzn7Q27dfEDJCJmANtIWisv1/xXfSf1iL99X4msAEkbRcQzktocZ7+l7rUnk/RgRPxPyfNA4MaI+FCtY+uMHDcRsbjWsRQlaRHL7xJalXR31qsRsWbtoipO0t9I1SkzgGUt5RFxers79SCShgA/AzaIiL3zrKgfiIiLaxxaXfGVyIr5M7BdRDwt6eqI+HStA6rAf/Lza5I2ABYA69cwnk6RtBXp7pp18vKLwCE9YCDODkXEW3di5Rs0xgA71y6iThsWEaNrHcQKuJR0a/X38vL/AVcAdZFEJN1M2/10du/OOJxEVkzpLb099nbeDlwvaRDwC9IvSkjVWvViAvCtiLgZQNJHSI3Tu9QyqHIk9Y2IpaVlkaoE/pz7jtTL3UF3Sto6Ih6qdSAVWi8irpR0Erw1KviyjnbqQY4red2fVLW4tJ1tq8ZJZMVEO697PEk7AHMi4pS8PBB4CHgMOLOWsXXS6i0JBCAiptXBrbL3AttJOqCkrOUW2ddrE1Jxkh4m9UvoCxwq6UlSw27dtAVmr0pal/x/V9LO9IC7nYrKbTql7pDU7e2ZTiIrZhtJC0n/eQbk17D8P1NPrtu+APg4gKTdgJ8D3wBGkX7dH1i70DrlSUk/YHmHsS8CT9Ywns7Yl+U/Plpukd2vZtEUN5T0d1KXJB0L3Al8B7gO2ETSHcBgUq/7utBqpIyWHyFrtbN59eJww3rvJOmBiNgmvz4XmBcRP8rLMyOiLr4kJK0N/BjYNRfdBvwoIl6qXVTlSWomdfBUq1UBPf/2ZEn3RUTdDIvTmqRfkao730e68p4L3ApcHhEv1jK2zmg1dlnLj5CTI+L27ozDVyK9V5+SuvmPAeNK1tXN30VOFt+sdRyd1AcYyDuTSL14t6RvtbeypyfBiDgOIE+p3UBKKB8BTpL0ckRsUcPwOlRSFT0iL48ltYfMBh7p7njq5svCutzlwC35bqb/kH7BI2kz6qBeWNLkcusjoidXCz1bL30p2lHvSbDFANJYcWvlx79J7YI9Xeuq6FOpYVW0q7N6sdyQuD7w94h4NZdtThqOvEf32pU0jzTsw+XAPbT6QouIW2oRVxGS7m81ZlNdWQmqsyaQRmhYRPrbuRu4uydXgZbqaVXRvhLpxSLi7jbK/q8WsVTgPcAewMHA50lD
P1xeD/1DSNWH9azer0A2AlYDniC1hzSThjypFz2qKtpXIlb38vAbBwO/BH4cEefUOKSVmqR16mSunHblzp1bktpDdgG2InW0vSsixtcyto5I+h7wCeBFUkLcLg8EuxkwMSK6dT4aJxGrWzl5fJKUQIYDk4FLImJuLeOy+iFpGGkSsF1I84ysGxGDahtVx3pSVbSTiNUlSZeRfj3eAEyKiIdrHJLVCUnfZPkVyH9JfUZaHg9FxJtldrdWnESsLkl6kzRiMrx9tIB66OhpNSTpDOAO4M6IeLbW8dQ7JxEzM6tYPc3iZWZmPYyTiJmZVcxJxKwVScskzZQ0S9IDkr4tqar/VyT9Mp/vl1U+z/A8Cq9Zl3BnQ7N3+k9Lr19J7wb+RBoeo5r9B8YB60REPc1nYeYrEbNyIuIF0hf80UqGS7pN0n35sQukW44l7d+yn6Q/ShpTeqy8/y8lPSzpIUmfy+WTSWNRzWgpK9nnIUmD8r7zJR1Scr49JPXJx5wu6UFJR5Tse3xJ+Y9bvzdJm0i6Pw/oZ1YRX4mYdSAinpTUB3g38AKwR0S8LmkkaeyuBtKUqv9Lmp1wLVIfhLGtDnUAaZC8bYD1gOmSbo2I/SQtbmfMoztIneGeJs2T8iHgMuADwNeAw4BXImKH3PnyDkl/B0bmx46k254n58H6ngGQ9F5gEvDliHhgxT8l662cRMw6px9wjqRRwDJgc0gDPko6T9Jg0rDcV7eeApc058nlucrqeUm3ADuQetq35zZgN1IS+S0wTtJQ4KWIeFXSnsD/SGoZuXUtUvLYMz/uz+UDc/kzpMmXrgMOiIhuHzrcVi5OImYdkLQJKWG8QGoXeZ50NbEKb5/O9jLSzIoHAYd20elvBY4ijZH0PeBTpKG+b2sJD/hGRExpFfNewKkRcUGr8uGkof6fISU1JxFbIW4TMSsjX1mcD5wTqWfuWqT5QN4EvkSaW6PFpcCxAO38wr8N+FxuxxhMusIoOyd2RMwhVX2NjIgngduB40jJBWAK8DVJ/XK8myvNMT8F+Iqkgbl8aL5JAOANUjI6RNLnC38YZm3wlYjZOw2QNJNUdbWUNH97y2x95wFX5wbuv7F86BUi4nlJjwJ/bue415LaMh4gDdXynYh4rkA897A8Wd1GmoSoZQrUi0iDT96XR6adB+wfEX+X9H7grlTMYtJV0rIc66uS9gGm5vaYspN8mbXHw56YdRFJ7yLNjLddRPT42SHNuoKrs8y6gKSPA48Cv3ECsd7EVyJmZlYxX4mYmVnFnETMzKxiTiJmZlYxJxEzM6uYk4iZmVXMScTMzCr2/wEdC5DNY5uasgAAAABJRU5ErkJggg==\n",
- "text/plain": [
- ""
- ]
- },
- "metadata": {
- "tags": [],
- "needs_background": "light"
- }
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "HNPhsx8P2tUH",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Spark SQL"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "rHMvBBAh23cw",
- "colab_type": "text"
- },
- "source": [
- "SQL has been around since the 1970s, so one can imagine the number of people who made it their bread and butter. As big data grew in popularity, professionals with the technical knowledge to handle it were in short supply. This led to the creation of Spark SQL. To quote the docs: \n",
- ">Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL provide Spark with more information about the structure of both the data and the computation being performed. Internally, Spark SQL uses this extra information to perform extra optimizations.\n",
- "\n",
- "Basically, what you need to know is that Spark SQL is used to execute SQL queries on big data. Spark SQL can also be used to read data from Hive tables and views. Let me explain Spark SQL with an example.\n"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "g2DaK9-D7QkX",
- "colab_type": "code",
- "outputId": "9899d009-fa16-4fe8-9e94-69fe84c04f5c",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 394
- }
- },
- "source": [
- "# Load data\n",
- "df = spark.read.csv('reported-crimes.csv',header=True).withColumn('Date',to_timestamp(col('Date'),'MM/dd/yyyy hh:mm:ss a')).filter(col('Date')==lit('2019-07-30'))\n",
- "# Register Temporary Table\n",
- "df.createOrReplaceTempView(\"temp\")\n",
- "# Select all data from temp table\n",
- "spark.sql(\"select * from temp\").show()\n",
- "# Select count of data in table\n",
- "spark.sql(\"select count(*) from temp\").show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+--------+-----------+-------------------+--------------------+----+-------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+\n",
- "| ID|Case Number| Date| Block|IUCR| Primary Type| Description|Location Description|Arrest|Domestic|Beat|District|Ward|Community Area|FBI Code|X Coordinate|Y Coordinate|Year| Updated On| Latitude| Longitude| Location|\n",
- "+--------+-----------+-------------------+--------------------+----+-------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+\n",
- "|11895315| JC515309|2019-07-30 00:00:00|097XX S WENTWORTH...|1156| DECEPTIVE PRACTICE|ATTEMPT - FINANCI...| OTHER| false| false|0511| 005| 21| 49| 11| 1176661| 1840424|2019|11/21/2019 03:52:...|41.717467205|-87.628563649|(41.717467205, -8...|\n",
- "|11805495| JC406040|2019-07-30 00:00:00|0000X S MICHIGAN AVE|1156| DECEPTIVE PRACTICE|ATTEMPT - FINANCI...| OTHER| false| false|0112| 001| 42| 32| 11| 1177316| 1900297|2019|08/28/2019 04:08:...|41.881749633|-87.624355971|(41.881749633, -8...|\n",
- "|11805303| JC405788|2019-07-30 00:00:00|067XX S BLACKSTON...|0890| THEFT| FROM BUILDING| APARTMENT| false| false|0332| 003| 5| 43| 06| 1187366| 1860406|2019|08/28/2019 04:08:...|41.772052629|-87.588722816|(41.772052629, -8...|\n",
- "|11798686| JC397805|2019-07-30 00:00:00| 010XX W MADISON ST|1153| DECEPTIVE PRACTICE|FINANCIAL IDENTIT...| null| false| false|1232| 012| 25| 28| 11| 1169700| 1900215|2019|08/21/2019 04:18:...|41.881693864|-87.652323905|(41.881693864, -8...|\n",
- "|11787134| JC383936|2019-07-30 00:00:00|110XX S MICHIGAN AVE|1130| DECEPTIVE PRACTICE|FRAUD OR CONFIDEN...| CURRENCY EXCHANGE| false| false|0513| 005| 9| 49| 11| 1178795| 1831638|2019|08/11/2019 04:00:...|41.693309015|-87.621013915|(41.693309015, -8...|\n",
- "|11781214| JC376972|2019-07-30 00:00:00| 015XX N LEAVITT ST|0820| THEFT| $500 AND UNDER| STREET| false| false|1424| 014| 1| 24| 06| 1161451| 1910391|2019|08/07/2019 03:58:...|41.909793259|-87.682330427|(41.909793259, -8...|\n",
- "|11775928| JC370624|2019-07-30 00:00:00|079XX S EBERHART AVE|0820| THEFT| $500 AND UNDER|VEHICLE NON-COMME...| false| false|0624| 006| 6| 44| 06| 1180949| 1852510|2019|08/06/2019 04:17:...|41.750535224|-87.612487858|(41.750535224, -8...|\n",
- "|11776857| JC371434|2019-07-30 00:00:00|070XX S EBERHART AVE|5002| OTHER OFFENSE|OTHER VEHICLE OFF...| STREET| false| false|0322| 003| 6| 69| 26| 1180795| 1858429|2019|08/06/2019 04:17:...|41.766781131|-87.612870529|(41.766781131, -8...|\n",
- "|11776516| JC371429|2019-07-30 00:00:00| 057XX N CLARK ST|0281|CRIM SEXUAL ASSAULT| NON-AGGRAVATED| ALLEY| false| false|2013| 020| 48| 77| 02| 1164693| 1938570|2019|08/06/2019 04:17:...|41.987049672|-87.669619077|(41.987049672, -8...|\n",
- "|11776159| JC370564|2019-07-30 00:00:00| 066XX N ROCKWELL ST|0620| BURGLARY| UNLAWFUL ENTRY| APARTMENT| false| false|2412| 024| 50| 2| 05| 1157787| 1943931|2019|08/06/2019 04:17:...|42.001904519|-87.694872497|(42.001904519, -8...|\n",
- "+--------+-----------+-------------------+--------------------+----+-------------------+--------------------+--------------------+------+--------+----+--------+----+--------------+--------+------------+------------+----+--------------------+------------+-------------+--------------------+\n",
- "\n",
- "+--------+\n",
- "|count(1)|\n",
- "+--------+\n",
- "| 10|\n",
- "+--------+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "6i32WE8j_ec8",
- "colab_type": "text"
- },
- "source": [
- "As you can see, we registered the dataframe as a temporary table and then ran basic SQL queries on it. How amazing is that?! \n",
- "If you are a person who is more comfortable with SQL, then this feature is truly a blessing for you! But this raises a question: \n",
- "> *Should I just keep using Spark SQL all the time?*\n",
- "\n",
- "And the answer is, _**it depends**_. \n",
- "So basically, the different APIs act in different ways, and depending on the type of action you are trying to do, the speed at which it completes execution also differs. But as time progresses, this feature keeps getting better, so hopefully the difference should shrink to a small margin. There have been plenty of analyses done on this, but nothing has a definite answer yet. You can read this [comparative study done by Hortonworks](https://community.cloudera.com/t5/Community-Articles/Spark-RDDs-vs-DataFrames-vs-SparkSQL/ta-p/246547) or the answer to this [Stack Overflow question](https://stackoverflow.com/questions/45430816/writing-sql-vs-using-dataframe-apis-in-spark-sql) if you are still curious about it."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "x62BiCgBMOtq",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# RDD"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "VGXK6uEuUKRh",
- "colab_type": "text"
- },
- "source": [
- "> With map, you define a function and then apply it record by record. flatMap returns a new RDD by first applying a function to all of the elements in the RDD and then flattening the result. Filter returns a new RDD containing only the elements that satisfy a condition. With reduce, we take neighboring elements and produce a single combined result.\n",
- "For example, let's say you have a set of numbers. You can reduce this to its sum by providing a function that takes as input two values and reduces them to one. \n",
- "\n",
- "Some of the reasons you would use a dataframe over RDD are:\n",
- "1. Its ability to represent data as rows and columns. But this also means it can only hold structured and semi-structured data.\n",
- "2. It allows processing data in different formats (AVRO, CSV, JSON, and storage system HDFS, HIVE tables, MySQL).\n",
- "3. Its superior job optimization capability.\n",
- "4. DataFrame API is very easy to use.\n",
- "\n",
- "\n",
- "\n"
- ]
- },
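- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To make the map, flatMap, filter and reduce description above concrete, here is a minimal sketch (assuming the `sc` SparkContext used in the following cells is available):\n",
- "```\n",
- "nums = sc.parallelize([1, 2, 3, 4])\n",
- "nums.map(lambda x: x * 2).collect()            # [2, 4, 6, 8]\n",
- "nums.flatMap(lambda x: [x, x * 10]).collect()  # [1, 10, 2, 20, 3, 30, 4, 40]\n",
- "nums.filter(lambda x: x % 2 == 0).collect()    # [2, 4]\n",
- "nums.reduce(lambda a, b: a + b)                # 10\n",
- "```"
- ]
- },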
- {
- "cell_type": "code",
- "metadata": {
- "id": "0_WvAgyvR7m6",
- "colab_type": "code",
- "outputId": "73daad9c-5f3a-4529-dd17-11f36eff6689",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 71
- }
- },
- "source": [
- "psrdd = sc.textFile('police-station.csv')\n",
- "print(psrdd.first())\n",
- "ps_header = psrdd.first()\n",
- "ps_rest = psrdd.filter(lambda line: line!=ps_header)\n",
- "print(ps_rest.first())"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "DISTRICT,DISTRICT NAME,ADDRESS,CITY,STATE,ZIP,WEBSITE,PHONE,FAX,TTY,X COORDINATE,Y COORDINATE,LATITUDE,LONGITUDE,LOCATION\n",
- "1,Central,1718 S State St,Chicago,IL,60616,http://home.chicagopolice.org/community/districts/1st-district-central/,312-745-4290,312-745-3694,312-745-3693,1176569.052,1891771.704,41.85837259,-87.62735617,\"(41.8583725929, -87.627356171)\"\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "P65eAFO3Mkdd",
- "colab_type": "text"
- },
- "source": [
- "**How many police stations are there?**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "Vi03EU0CMSmO",
- "colab_type": "code",
- "outputId": "3f5f62da-b73d-44e7-f6fc-0b7e493635e2",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "ps_rest.map(lambda line: line.split(\",\")).count()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "24"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 65
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "3c4bci70MnlQ",
- "colab_type": "text"
- },
- "source": [
- "**Display the District ID, District name, Address and Zip for the police station with District ID 7**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "fWFpo_WxMnvm",
- "colab_type": "code",
- "outputId": "9ff7afd8-c0aa-4e84-ba7d-ac07962e3c1b",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 34
- }
- },
- "source": [
- "# District is column 0\n",
- "(ps_rest.filter(lambda line: line.split(\",\")[0]=='7').\n",
- " map(lambda line: (line.split(\",\")[0],\n",
- " line.split(\",\")[1],\n",
- " line.split(\",\")[2],\n",
- " line.split(\",\")[5])).collect())"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "[('7', 'Englewood', '1438 W 63rd St', '60636')]"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 66
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ZYmb5FscMph3",
- "colab_type": "text"
- },
- "source": [
- "**Police stations 10 and 11 are geographically close to each other. Display the District ID, District name, address and zip code**"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "6ZcRIX3mMquF",
- "colab_type": "code",
- "outputId": "c0575516-f533-4c74-b14e-73b903577fd3",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 51
- }
- },
- "source": [
- "# District is column 0\n",
- "(ps_rest.filter(lambda line: line.split(\",\")[0] in ['10', '11']).\n",
- " map(lambda line: (line.split(\",\")[0],\n",
- " line.split(\",\")[1],\n",
- " line.split(\",\")[2],\n",
- " line.split(\",\")[5])).collect())"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "execute_result",
- "data": {
- "text/plain": [
- "[('11', 'Harrison', '3151 W Harrison St', '60612'),\n",
- " ('10', 'Ogden', '3315 W Ogden Ave', '60623')]"
- ]
- },
- "metadata": {
- "tags": []
- },
- "execution_count": 67
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "3wn2zXe7TbI3",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# User-Defined Functions (UDF)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "w0YWspcTRrin",
- "colab_type": "text"
- },
- "source": [
- "PySpark User-Defined Functions (UDFs) help you convert your Python code into a scalable version of itself. They come in handy more often than you can imagine, but beware: their performance is lower than that of the built-in PySpark functions. You can view examples of how UDFs work [here](https://docs.databricks.com/spark/latest/spark-sql/udf-python.html). What I will give in this section is some theory on how they work, and why they are slower.\n",
- "\n",
- "When you run a UDF in PySpark, each executor spawns a Python process. Data is serialised and deserialised between every executor and its Python process. This adds a lot of overhead to Spark jobs, making UDFs less efficient than native dataframe operations. Apart from this, you might sometimes hit memory issues while using UDFs: the Python worker consumes a large amount of off-heap memory, which often leads to memoryOverhead errors that fail your job. Keeping these in mind, I wouldn't recommend using them, but at the end of the day, it's your choice."
- ]
- },
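- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As a rough illustration (the dataframe and column names here are hypothetical, not from this notebook), a trivial UDF next to the equivalent built-in function looks like this; the built-in version stays inside the JVM and is the faster option:\n",
- "```\n",
- "from pyspark.sql.functions import udf, upper, col\n",
- "from pyspark.sql.types import StringType\n",
- "\n",
- "# df is assumed to be any dataframe with a string column called 'name'\n",
- "to_upper = udf(lambda s: s.upper() if s is not None else None, StringType())\n",
- "df.withColumn('name_upper_udf', to_upper(col('name'))).show()  # Python UDF: rows are shipped to a Python worker\n",
- "df.withColumn('name_upper', upper(col('name'))).show()         # built-in function: no Python round trip\n",
- "```"
- ]
- },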
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "yv7ODDTQRwVt",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "# Common Questions"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "6z9gkoE2R1m1",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Recommended IDE"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "BpMTbRggR5Z8",
- "colab_type": "text"
- },
- "source": [
- "I personally prefer [PyCharm](https://www.jetbrains.com/pycharm/) while coding in Python/PySpark. It's based on IntelliJ IDEA so it has a lot of features! And the main advantage I have felt is the ease of installing PySpark and other packages. You can customize it with themes and plugins, and it lets you enhance productivity while coding by providing some features like suggestions, local VCS etc."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "VZ1bYvF8R8Dc",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Submitting a Spark Job"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "3EQWnq23SCbE",
- "colab_type": "text"
- },
- "source": [
- "The Python syntax for running a script is: `python <filename>.py <arguments>`\n",
- " But when you submit a spark job you have to use spark-submit to run the application.\n",
- "\n",
- "Here is a simple example of a spark-submit command:\n",
- "`spark-submit filename.py --named_argument 'argument value'` \n",
- "Here, named_argument is an argument that you are reading from inside your script.\n",
- "\n",
- "There are other options you can pass in the command, like: \n",
- "`--py-files` which helps you pass additional python files that your script needs, \n",
- "`--files` which helps pass other files like txt or config files, \n",
- "`--deploy-mode` which tells whether to deploy your driver on the cluster (cluster mode) or locally (client mode) \n",
- "`--conf` which helps pass different configurations, like memoryOverhead, dynamicAllocation etc.\n",
- "\n",
- "There is an [entire page](https://spark.apache.org/docs/latest/submitting-applications.html) in spark documentation dedicated to this. I highly recommend you go through it once."
- ]
- },
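- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Putting those options together, a fuller command might look like the sketch below (the file names and values are placeholders, not from a real project):\n",
- "```\n",
- "spark-submit \\\n",
- "  --deploy-mode cluster \\\n",
- "  --py-files helpers.py \\\n",
- "  --files app.conf \\\n",
- "  --conf spark.executor.memoryOverhead=2048 \\\n",
- "  filename.py --named_argument 'argument value'\n",
- "```"
- ]
- },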
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "oVwGYAZZiyGV",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Creating Dataframes"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "TvndhPjoi0er",
- "colab_type": "text"
- },
- "source": [
- "When getting started with dataframes, the most common question is: *'How do I create a dataframe?'* \n",
- "Below, you can see how to create three kinds of dataframes:"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "QXmRD3hHlM-f",
- "colab_type": "text"
- },
- "source": [
- "### Create a totally empty dataframe"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "ktkb6s-kjtgG",
- "colab_type": "code",
- "outputId": "0a81f022-0d22-4fe9-d34e-1663467591f3",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 102
- }
- },
- "source": [
- "from pyspark.sql.types import StructType\n",
- "#Create empty df\n",
- "schema = StructType([])\n",
- "empty = spark.createDataFrame(sc.emptyRDD(), schema)\n",
- "empty.show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "++\n",
- "||\n",
- "++\n",
- "++\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "mg5K3nz_lSDe",
- "colab_type": "text"
- },
- "source": [
- "### Create an empty dataframe with header"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "9raf4CkRjuTr",
- "colab_type": "code",
- "outputId": "6109efd0-c4ca-49c4-b724-1f6066eed7e2",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 102
- }
- },
- "source": [
- "from pyspark.sql.types import StructType, StructField, StringType\n",
- "#Create empty df with header\n",
- "schema_header = StructType([StructField(\"name\", StringType(), True)])\n",
- "empty_with_header = spark.createDataFrame(sc.emptyRDD(), schema_header)\n",
- "empty_with_header.show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+----+\n",
- "|name|\n",
- "+----+\n",
- "+----+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Y1ZNOx7ilUnd",
- "colab_type": "text"
- },
- "source": [
- "### Create a dataframe with header and data"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "TvzyL46QkJBl",
- "colab_type": "code",
- "outputId": "952991b0-fca9-4533-ed29-0fb42cdcd8dd",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 153
- }
- },
- "source": [
- "from pyspark.sql import Row\n",
- "mylist = [\n",
- " {\"name\":'Alice',\"age\":13},\n",
- " {\"name\":'Jacob',\"age\":24},\n",
- " {\"name\":'Betty',\"age\":135},\n",
- "]\n",
- "spark.createDataFrame(Row(**x) for x in mylist).show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+---+-----+\n",
- "|age| name|\n",
- "+---+-----+\n",
- "| 13|Alice|\n",
- "| 24|Jacob|\n",
- "|135|Betty|\n",
- "+---+-----+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "VnRMckA5nLoJ",
- "colab_type": "code",
- "outputId": "35cdaa1b-e984-496d-b4e7-80b5cfdc766b",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 153
- }
- },
- "source": [
- "# You can achieve the same using this - note that we are using spark context here, not a spark session\n",
- "from pyspark.sql import Row\n",
- "df = sc.parallelize([\n",
- " Row(name='Alice', age=13),\n",
- " Row(name='Jacob', age=24),\n",
- " Row(name='Betty', age=135)]).toDF()\n",
- "df.show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+---+-----+\n",
- "|age| name|\n",
- "+---+-----+\n",
- "| 13|Alice|\n",
- "| 24|Jacob|\n",
- "|135|Betty|\n",
- "+---+-----+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "f3crkAQVlxKp",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Drop Duplicates"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "4IHrYEwHmBcc",
- "colab_type": "text"
- },
- "source": [
- "As mentioned earlier, there are two ways to remove duplicates from a dataframe. We have already seen the usage of distinct under the Get Distinct Rows section. \n",
- "I will explain how to use the `dropDuplicates()` function to achieve the same. \n",
- "\n",
- "> `drop_duplicates()` is an alias for `dropDuplicates()`"
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "wOuHRAPJmWen",
- "colab_type": "code",
- "outputId": "a2b86418-8454-4c74-99bc-0836a8ae4b3e",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 136
- }
- },
- "source": [
- "from pyspark.sql import Row\n",
- "mylist = [\n",
- " {\"name\":'Alice',\"age\":5,\"height\":80},\n",
- " {\"name\":'Jacob',\"age\":24,\"height\":80},\n",
- " {\"name\":'Alice',\"age\":5,\"height\":80}\n",
- "]\n",
- "df = spark.createDataFrame(Row(**x) for x in mylist)\n",
- "df.dropDuplicates().show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+---+------+-----+\n",
- "|age|height| name|\n",
- "+---+------+-----+\n",
- "| 5| 80|Alice|\n",
- "| 24| 80|Jacob|\n",
- "+---+------+-----+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "zMv7A-2Hnmjh",
- "colab_type": "text"
- },
- "source": [
- "`dropDuplicates()` can also take an optional parameter called *subset*, which specifies the columns on which the duplicate check needs to be done."
- ]
- },
- {
- "cell_type": "code",
- "metadata": {
- "id": "SHnFylV1n8to",
- "colab_type": "code",
- "outputId": "a2a37b6b-fd35-47ed-a55a-6fb48032ab87",
- "colab": {
- "base_uri": "https://localhost:8080/",
- "height": 119
- }
- },
- "source": [
- "df.dropDuplicates(subset=['height']).show()"
- ],
- "execution_count": 0,
- "outputs": [
- {
- "output_type": "stream",
- "text": [
- "+---+------+-----+\n",
- "|age|height| name|\n",
- "+---+------+-----+\n",
- "| 5| 80|Alice|\n",
- "+---+------+-----+\n",
- "\n"
- ],
- "name": "stdout"
- }
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "bAS4DKxjqI7H",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "## Fine Tuning a Spark Job"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "2d9pLlDl76dM",
- "colab_type": "text"
- },
- "source": [
- "Before we begin, please note that this entire section is written purely based on experience. It might differ with use cases, but it will help you get a better understanding of what you should be looking for, or act as guidance to achieve your aim.\n",
- "\n",
- ">Spark Performance Tuning refers to the process of adjusting settings to record for memory, cores, and instances used by the system. This process guarantees that the Spark has a flawless performance and also prevents bottlenecking of resources in Spark.\n",
- "\n",
- "Considering you are using Amazon EMR to execute your spark jobs, there are three aspects you need to take care of:\n",
- "1. EMR Sizing\n",
- "2. Spark Configurations\n",
- "3. Job Tuning\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "vIbXZT29JxmG",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "### EMR Sizing"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "2Rv2SM_-KA8W",
- "colab_type": "text"
- },
- "source": [
- "Sizing your EMR is extremely important, as it affects the efficiency of your spark jobs. Apart from the cost factor, the maximum number of nodes and the memory your job can use will be decided by this. If you spin up an EMR with high specifications, that obviously means you are paying more for it, so we should ideally utilize it to the max. These are the guidelines that I follow to make sure the EMR is rightly sized:\n",
- "\n",
- "1. Size of the input data (include all the input data) on the disk.\n",
- "2. Whether the jobs have transformations or are just a straight pass-through. Assess the joins, especially the complex ones, involved.\n",
- "3. Size of the output data on the disk.\n",
- "\n",
- "Look at the above criteria against the memory you need to process and the disk space you would need. Start with a small configuration, and keep adding nodes to arrive at an optimal configuration. In case you are wondering about the *Execution time vs EMR configuration* factor, please understand that it is okay for a job to run longer rather than adding more resources to the cluster. For example, it is okay to run a job for 40 minutes on a 5-node cluster, rather than running it in 10 minutes on a 15-node cluster.\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "pmeFEgv6QTST",
- "colab_type": "text"
- },
- "source": [
- "Another thing you need to know about EMRs is the different kinds of EC2 instance types provided by Amazon. I will briefly talk about them, but I strongly recommend you read more in the [official documentation](https://aws.amazon.com/ec2/instance-types/). There are 5 instance classes. Based on the job you want to run, you can decide which one to use:\n",
- "\n",
- ">Instance Class | Description\n",
- ">--- | ---\n",
- ">General purpose | Balance of compute, memory and networking resources\n",
- ">Compute optimized | Ideal for compute bound applications that benefit from high performance processors\n",
- ">Memory optimized | Designed to deliver fast performance for workloads that process large data sets in memory\n",
- ">Storage optimized | For workloads that require high, sequential read and write access to very large data sets on local storage\n",
- ">GPU instances | Use hardware accelerators, or co-processors, to perform high demanding functions, more efficiently than is possible in software running on CPUs\n",
- "\n",
- "The configuration (memory, storage, cpu, network performance) will differ based on the instance class you choose. \n",
- "To help make life easier, here is what I do when I get into a predicament about which one to go with: \n",
- " 1. Visit [ec2instances](https://www.ec2instances.info/)\n",
- " 2. Choose the EC2 instances in question \n",
- " 3. Click on compare selected\n",
- "\n",
- "This will easily help you understand what you are getting into, and thereby help you make the best choice! The site was built by [Garret Heaton](https://github.com/powdahound) (founder of Swoot), and has helped me countless times to make an informed decision."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "snACMwZug5Yn",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "### Spark Configurations"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "uFJNpK06hnpo",
- "colab_type": "text"
- },
- "source": [
- "There are a ton of [configurations](https://spark.apache.org/docs/latest/configuration.html) that you can tweak when it comes to Spark. Here, I will be noting down some of the configurations which I use, which have worked well for me. Alright! let's get into it!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "dABRu9eokZxw",
- "colab_type": "text"
- },
- "source": [
- "#### Job Scheduling"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "8fRFs6atkdxS",
- "colab_type": "text"
- },
- "source": [
- "When you submit your job in a cluster, it will be given to Spark Schedulers, which is responsible for materializing a logical plan for your job. There are two types of [job scheduling](https://spark.apache.org/docs/latest/job-scheduling.html): \n",
- "1. FIFO \n",
- "By default, Spark’s scheduler runs jobs in FIFO fashion. Each job is divided into stages (e.g. map and reduce phases), and the first job gets priority on all available resources while its stages have tasks to launch, then the second job gets priority, etc. If the jobs at the head of the queue don’t need to use the whole cluster, later jobs can start to run right away, but if the jobs at the head of the queue are large, then later jobs may be delayed significantly. \n",
- "2. FAIR \n",
- "The fair scheduler supports grouping jobs into pools and setting different scheduling options (e.g. weight) for each pool. This can be useful to create a high-priority pool for more important jobs, for example, or to group the jobs of each user together and give users equal shares regardless of how many concurrent jobs they have instead of giving jobs equal shares. This approach is modeled after the Hadoop Fair Scheduler.\n",
- "\n",
- "> I personally prefer using the FAIR mode, and this can be set by adding `.config(\"spark.scheduler.mode\", \"FAIR\")` when you create your SparkSession.\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "xrgbAiZHnq_U",
- "colab_type": "text"
- },
- "source": [
- "#### Serializer"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "N7ZYGGcYsRzB",
- "colab_type": "text"
- },
- "source": [
- "We have two types of [serializers](https://spark.apache.org/docs/latest/tuning.html#data-serialization) available: \n",
- "1. Java serialization \n",
- "2. Kryo serialization\n",
- "\n",
- "Kryo is significantly faster and more compact than Java serialization (often as much as 10x), but does not support all Serializable types and requires you to register the classes you’ll use in the program in advance for best performance.\n",
- "\n",
- "Java serialization is used by default because, if you have a custom class that extends Serializable, it can be used easily. You can also control the performance of your serialization more closely by extending java.io.Externalizable.\n",
- "\n",
- "> The general recommendation is to use Kryo as the serializer whenever possible, as it leads to much smaller sizes than Java serialization. It can be enabled by adding `.config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")` when you create your SparkSession.\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "244El832wT8f",
- "colab_type": "text"
- },
- "source": [
- "#### Shuffle Behaviour"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "XWsBFnA8xpeF",
- "colab_type": "text"
- },
- "source": [
- "It is generally a good idea to compress the output file after the map phase. The `spark.shuffle.compress` property decides whether to do the compression or not. The codec used is the one specified by `spark.io.compression.codec`.\n",
- "\n",
- "> The property can be added by using `.config(\"spark.shuffle.compress\", \"true\")` when you create your SparkSession."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "H5cFCvbczHz_",
- "colab_type": "text"
- },
- "source": [
- "#### Compression and Serialization"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "LHU5lfyFzKw9",
- "colab_type": "text"
- },
- "source": [
- "There are 4 default codecs Spark provides to compress internal data such as RDD partitions, event logs, broadcast variables and shuffle outputs. They are: \n",
- "\n",
- "1. lz4\n",
- "2. lzf\n",
- "3. snappy\n",
- "4. zstd\n",
- "\n",
- "> The decision on which to use rests upon the use case. I generally use `snappy` compression. Google created Snappy because they needed something that offered very fast compression at the expense of final size. Snappy is fast, stable and free, but it increases the size more than the other codecs. At the same time, since compute costs will be lower, it seems like a balanced trade-off. The property can be added by using `.config(\"spark.io.compression.codec\", \"snappy\")` when you create your SparkSession.\n",
- "\n",
- "This [session](https://databricks.com/session/best-practice-of-compression-decompression-codes-in-apache-spark) explains the best practice of compression/decompression codes in Apache Spark. I recommend you to take a look at it before taking a decision."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "rdGARl-D3n-l",
- "colab_type": "text"
- },
- "source": [
- "#### Scheduling"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Cm0ExAoS4RU6",
- "colab_type": "text"
- },
- "source": [
- "The property `spark.speculation` enables speculative execution of tasks. This means that if one or more tasks are running slowly in a stage, they will be re-launched. Speculative execution does not stop the slow-running task; it launches a new copy of the task in parallel.\n",
- "\n",
- "> I usually disable this option by adding `.config(\"spark.speculation\", \"false\") ` when I create the SparkSession. "
- ]
- },
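- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Pulling the properties discussed so far into one place, a SparkSession built with these settings could look like the following sketch (the app name is a placeholder and the values reflect my own preferences above, not universal defaults):\n",
- "```\n",
- "from pyspark.sql import SparkSession\n",
- "\n",
- "spark = (SparkSession.builder\n",
- "         .appName('tuned-job')  # placeholder name\n",
- "         .config('spark.scheduler.mode', 'FAIR')\n",
- "         .config('spark.serializer', 'org.apache.spark.serializer.KryoSerializer')\n",
- "         .config('spark.shuffle.compress', 'true')\n",
- "         .config('spark.io.compression.codec', 'snappy')\n",
- "         .config('spark.speculation', 'false')\n",
- "         .getOrCreate())\n",
- "```"
- ]
- },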
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "aaxfGqYZ6Iqz",
- "colab_type": "text"
- },
- "source": [
- "#### Application Properties"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ERovEKOU6TNE",
- "colab_type": "text"
- },
- "source": [
- "There are mainly two application properties that you should know about:\n",
- "\n",
- "1. spark.driver.memoryOverhead - The amount of off-heap memory to be allocated per driver in cluster mode, in MiB unless otherwise specified. This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the container size (typically 6-10%). This option is currently supported on YARN and Kubernetes.\n",
- "\n",
- "2. spark.executor.memoryOverhead - The amount of off-heap memory to be allocated per executor, in MiB unless otherwise specified. This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%). This option is currently supported on YARN and Kubernetes.\n",
- "\n",
- "> If you ever face an issue like `Container killed by YARN for exceeding memory limits`, know that it is because you have not specified enough memoryOverhead for your job to execute successfully. The default value for the overhead is 10% of the available memory (driver and executor computed separately), with a minimum of 384 MiB.\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "BG09dDdL6Tvt",
- "colab_type": "text"
- },
- "source": [
- "#### Dynamic Allocation"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "m_lb-JI78CVT",
- "colab_type": "text"
- },
- "source": [
- "Lastly, I want to talk about Dynamic Allocation. This is a feature I constantly use while executing my jobs. This property is set to false by default. As the name suggests, it sets whether to use dynamic resource allocation, which scales the number of executors registered with the application up and down based on the workload. Truly a wonderful feature, and the greatest benefit of using it is that it will help make the best use of all the resources you have! The disadvantage is that it does not shine when you have to execute jobs in parallel: since most of the resources will be used by the first job, the second one will have to wait till some resources get released. At the same time, if both get submitted at the exact same time, the resources will be shared between them, although not equally. Also, it is not guaranteed to *always* use the most optimal configuration. But in all my tests, the results have been great! \n",
- "\n",
- "> If you are planning on using this feature, you can pass the configurations as required through the spark-submit command. The four configurations which you will have to keep in mind are: \n",
- "```\n",
- "--conf spark.dynamicAllocation.enabled=true\n",
- "--conf spark.dynamicAllocation.initialExecutors\n",
- "--conf spark.dynamicAllocation.minExecutors\n",
- "--conf spark.dynamicAllocation.maxExecutors\n",
- "```\n",
- "\n",
- "You can read more about this feature [here](https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation) and [here](https://stackoverflow.com/questions/40200389/how-to-execute-spark-programs-with-dynamic-resource-allocation).\n",
- "\n",
- "\n",
- "\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "CJlmPbLYKKFA",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "### Job Tuning"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "vccmILvVWewW",
- "colab_type": "text"
- },
- "source": [
- "Apart from EMR and Spark tuning, there is another way to approach optimizations, and that is by tuning your job itself to produce results efficiently. I will be going over some such techniques which will help you achieve this. The [Spark Programming Guide](https://spark.apache.org/docs/2.1.1/programming-guide.html) talks about these concepts in more detail. If you prefer watching a video over reading, I highly recommend [A Deep Dive into Proper Optimization for Spark Jobs](https://youtu.be/daXEp4HmS-E) by Daniel Tomes from Databricks, which I found really useful and informative!"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "r-R5ijHrKSg0",
- "colab_type": "text"
- },
- "source": [
- "#### Broadcast Joins (Broadcast Hash Join)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "dvO0z5EpM5U8",
- "colab_type": "text"
- },
- "source": [
- "For some jobs, efficiency can be increased by caching data in memory. Broadcast Hash Join (BHJ) is one such technique, and it helps you optimize join queries when one side of the join is small.\n",
- ">Broadcast joins are the fastest, but the drawback is that they consume more memory on both the executor and the driver.\n",
- "\n",
- "The following steps give a sneak peek into how it works, which will help you understand the use cases where it can be applied: \n",
- "1. Input file(smaller of the two tables) to be broadcasted is read by the executors in parallel into its working memory.\n",
- "2. All the data from the executors is collected into driver (Hence, the need for higher memory at driver).\n",
- "3. The driver then broadcasts the combined dataset (full copy) into each executor.\n",
- "4. The size of the broadcasted dataset could be several (10-20+) times bigger than the input in memory due to factors like deserialization.\n",
- "5. Executors will end up storing the parts it read first, and also the full copy, thereby leading to a high memory requirement.\n",
- "\n",
- "Some things to keep in mind about BHJ:\n",
- "1. It is advisable to use broadcast joins on small datasets only (a dimension table, for example).\n",
- "2. Spark does not guarantee BHJ is always chosen, since not all cases (e.g. full outer join) support BHJ.\n",
- "3. You could notice skews in tasks due to uneven partition sizes; especially during aggregations, joins etc. This can be evened out by introducing Salt value (random value). *Suggested formula for salt value:* random(0 – (shuffle partition count – 1))\n"
- ]
- },
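- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "In code, hinting a broadcast join is a one-liner. A minimal sketch, assuming `facts` is a large dataframe and `dims` is a small dimension dataframe sharing a `key` column (both hypothetical):\n",
- "```\n",
- "from pyspark.sql.functions import broadcast\n",
- "\n",
- "# The small side is broadcast to every executor, avoiding a shuffle of the large side\n",
- "joined = facts.join(broadcast(dims), on='key', how='inner')\n",
- "```"
- ]
- },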
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "UOvEVcieVn7e",
- "colab_type": "text"
- },
- "source": [
- "#### Spark Partitions"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "ocpQiqOtVqPz",
- "colab_type": "text"
- },
- "source": [
- "A partition in spark is an atomic chunk of data (logical division of data) stored on a node in the cluster. Partitions are the basic units of parallelism in Spark. Having too large a number of partitions or too few is not an ideal solution. The number of partitions in spark should be decided based on the cluster configuration and requirements of the application. Increasing the number of partitions will make each partition have less data or no data at all. Generally, spark partitioning can be broken down in three ways:\n",
- "1. Input\n",
- "2. Shuffle\n",
- "3. Output\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "2KY1mxZVfNsl",
- "colab_type": "text"
- },
- "source": [
- "##### Input"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "dKw48h9eEbPa",
- "colab_type": "text"
- },
- "source": [
- "Spark usually does a good job of figuring out the ideal configuration for this one, except in very particular cases. It is advisable to use the Spark default unless:\n",
- "1. You need to increase parallelism\n",
- "2. You have heavily nested data\n",
- "3. You are generating data (explode)\n",
- "4. The source is not optimal\n",
- "5. You are using UDFs\n",
- "\n",
- "`spark.sql.files.maxPartitionBytes`: This property indicates the maximum number of bytes to pack into a single partition when reading files (default 128 MB). Use this to increase the parallelism when reading input data. For example, if you have more cores, you can increase the number of parallel tasks, which ensures usage of all the cores of the cluster and increases the speed of the task."
- ]
- },
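- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "If you do need to override it, the property can be set like any other configuration; the 64 MB value below is only an illustration, not a recommendation:\n",
- "```\n",
- "# Smaller input partitions -> more read tasks -> more parallelism while reading\n",
- "spark.conf.set('spark.sql.files.maxPartitionBytes', 64 * 1024 * 1024)\n",
- "```"
- ]
- },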
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "qwEBu3T3EbfD",
- "colab_type": "text"
- },
- "source": [
- "##### Shuffle"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "Wx0iQUpFEbus",
- "colab_type": "text"
- },
- "source": [
- "One of the major reasons most jobs lag in performance is that, more often than not, they get the shuffle partition count wrong. By default, the value is set to 200. In almost all situations, this is not ideal. If you are dealing with a shuffle stage of less than 20 GB, 200 is fine, but otherwise this needs to be changed. For most cases, you can use the following equation to find the right value:\n",
- ">`Partition Count = Stage Input Data / Target Size` where \n",
- "`Largest Shuffle Stage (Target Size) < 200MB/partition` in most cases. \n",
- "`spark.sql.shuffle.partitions` property is used to set the ideal partition count value.\n",
- "\n",
- "If you ever notice that the target size is in the range of TBs, something is terribly wrong, and you might want to change it back to 200, or recalculate it. Shuffle partitions can be configured for every action (not transformation) in the spark script.\n",
- "\n",
- "Let us use an example to explain this scenario: \n",
- "Assume shuffle stage input = 210 GB. \n",
- "Partition Count = Stage Input Data / Target Size = 210000 MB/200 MB = 1050. \n",
- "As you can see, my shuffle partitions should be 1050, not 200.\n",
- "\n",
- "But, if your cluster has 2000 cores, then set your shuffle partitions to 2000.\n",
- ">In a large cluster dealing with a large data job, never set your shuffle partitions less than your total core count. \n",
- "\n",
- "\n",
- "\n",
- "Shuffle stages almost always precede the write stages, and a high shuffle partition count creates small files in the output. To address this, use localCheckpoint just before the write and then do a coalesce call. localCheckpoint writes the shuffle partitions to the executors' local disks and the coalesce then lowers the partition count, which improves the overall performance of both the shuffle stage and the write stage."
- ]
- },
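- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The 210 GB example above translates into code as follows (the numbers are the ones from the example, not defaults):\n",
- "```\n",
- "stage_input_mb = 210 * 1000   # shuffle stage input from the example, in MB\n",
- "target_mb = 200               # target size per shuffle partition\n",
- "spark.conf.set('spark.sql.shuffle.partitions', stage_input_mb // target_mb)  # 1050\n",
- "```"
- ]
- },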
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "-HxUYv77EdSv",
- "colab_type": "text"
- },
- "source": [
- "##### Output"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "CPA6YRYrEdgG",
- "colab_type": "text"
- },
- "source": [
- "There are different methods to write the data. You can control the size, composition, number of files in the output and even the number of records in each file while writing the data. While writing the data, you can increase the parallelism, thereby ensuring you use all the resources that you have. But this approach would lead to a larger number of smaller files. Usually, this isn't a problem, but if you want bigger files, you will have to use one of the compaction techniques, preferably on a cluster with a smaller configuration. There are multiple ways to change the composition of the output. Keep these two in mind about composition:\n",
- "1. Coalesce: Use this to reduce the number of partitions.\n",
- "2. Repartition: Use this very rarely, and never to reduce the number of partitions \n",
- " a. Range Partitioner - It partitions the data either based on some sorted order OR a set of sorted ranges of keys. \n",
- " b. Hash Partitioner - It spreads the data across partitions based on the key value. Hash partitioning can make the distributed data skewed."
- ]
- },
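- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As a sketch of using coalesce just before the write (together with the localCheckpoint trick mentioned in the Shuffle subsection; the output path and partition count are placeholders):\n",
- "```\n",
- "out = df.localCheckpoint()  # materialise the shuffle output on the executors' local disks\n",
- "out.coalesce(50).write.mode('overwrite').parquet('s3://bucket/path/')  # fewer, larger output files\n",
- "```"
- ]
- },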
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "NZ5xsdWmNOVz",
- "colab_type": "text"
- },
- "source": [
- " \n",
- "### Best Practices"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {
- "id": "NUKLZ8G8NVuR",
- "colab_type": "text"
- },
- "source": [
- "Try to incorporate these into your coding habits for better performance:\n",
- "1. Do not use NOT IN; use NOT EXISTS instead.\n",
- "2. Remove counts and distinct counts where possible (use approx_count_distinct).\n",
- "3. Drop duplicates early.\n",
- "4. Always prefer SQL functions over PandasUDF.\n",
- "5. Use Hive partitions effectively.\n",
- "6. Leverage the Spark UI effectively. Avoid shuffle spills.\n",
- "7. Aim for a target cluster utilization of at least 70%.\n",
- "\n"
- ]
+ "text/plain": [
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "| Car| MPG|Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|\n",
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "|Chevrolet Chevell...|18.0| 8| 307.0| 130.0| 3504.| 12.0| 70| US|\n",
+ "| Buick Skylark 320|15.0| 8| 350.0| 165.0| 3693.| 11.5| 70| US|\n",
+ "| Plymouth Satellite|18.0| 8| 318.0| 150.0| 3436.| 11.0| 70| US|\n",
+ "| AMC Rebel SST|16.0| 8| 304.0| 150.0| 3433.| 12.0| 70| US|\n",
+ "| Ford Torino|17.0| 8| 302.0| 140.0| 3449.| 10.5| 70| US|\n",
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+"
+ ]
+ },
+ "execution_count": 14,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "df.limit(5)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "eUazdCEmu_sp"
+ },
+ "source": [
+ " \n",
+ "### Viewing Dataframe Columns"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "9o7jsazcu-13",
+ "outputId": "a2ce058a-2266-4333-93f3-1b071904cc0f"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "['Car',\n",
+ " 'MPG',\n",
+ " 'Cylinders',\n",
+ " 'Displacement',\n",
+ " 'Horsepower',\n",
+ " 'Weight',\n",
+ " 'Acceleration',\n",
+ " 'Model',\n",
+ " 'Origin']"
+ ]
+ },
+ "execution_count": 15,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "df.columns"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3lfS2DhHuhPl"
+ },
+ "source": [
+ " \n",
+ "### Dataframe Schema"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-xX7hRoW_cXY"
+ },
+ "source": [
+ "There are two methods commonly used to view the data types of a dataframe:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "w6qwTjGsNxrw",
+ "outputId": "60faa488-aa99-4d84-d644-7e6e406e389a"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[('Car', 'string'),\n",
+ " ('MPG', 'string'),\n",
+ " ('Cylinders', 'string'),\n",
+ " ('Displacement', 'string'),\n",
+ " ('Horsepower', 'string'),\n",
+ " ('Weight', 'string'),\n",
+ " ('Acceleration', 'string'),\n",
+ " ('Model', 'string'),\n",
+ " ('Origin', 'string')]"
+ ]
+ },
+ "execution_count": 16,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "df.dtypes"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "CCGTFlCWRPw4",
+ "outputId": "17bcb250-22c4-4d6a-f2a7-b67e99f080e7"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "root\n",
+ " |-- Car: string (nullable = true)\n",
+ " |-- MPG: string (nullable = true)\n",
+ " |-- Cylinders: string (nullable = true)\n",
+ " |-- Displacement: string (nullable = true)\n",
+ " |-- Horsepower: string (nullable = true)\n",
+ " |-- Weight: string (nullable = true)\n",
+ " |-- Acceleration: string (nullable = true)\n",
+ " |-- Model: string (nullable = true)\n",
+ " |-- Origin: string (nullable = true)\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "df.printSchema()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "RXx5ATpZ9oor"
+ },
+ "source": [
+ " \n",
+ "#### Inferring Schema Implicitly"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7TeflTUp8l29"
+ },
+ "source": [
+ "We can use the parameter `inferSchema=True` to infer the input schema automatically while loading the data. An example is shown below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 18,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Qym5MjCi894N",
+ "outputId": "f92d912b-621b-4381-cd6c-2eafc89c72f7"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "root\n",
+ " |-- Car: string (nullable = true)\n",
+ " |-- MPG: double (nullable = true)\n",
+ " |-- Cylinders: integer (nullable = true)\n",
+ " |-- Displacement: double (nullable = true)\n",
+ " |-- Horsepower: double (nullable = true)\n",
+ " |-- Weight: decimal(4,0) (nullable = true)\n",
+ " |-- Acceleration: double (nullable = true)\n",
+ " |-- Model: integer (nullable = true)\n",
+ " |-- Origin: string (nullable = true)\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "df = spark.read.csv('cars.csv', header=True, sep=\";\", inferSchema=True)\n",
+ "df.printSchema()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "G6jTedYd-Dhb"
+ },
+ "source": [
+ "As you can see, the data types have been inferred automatically by Spark, even with the correct precision for the decimal type. A problem that might arise is that sometimes, when you have to read multiple files with different schemas, implicit inferring can lead to null values in some columns. Therefore, let us also see how to define schemas explicitly."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "yTVjYqeRuxWn"
+ },
+ "source": [
+ " \n",
+ "#### Defining Schema Explicitly"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 19,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "xpsaQ4JMRUiS",
+ "outputId": "cbfbeed2-7aa1-43de-c8a6-5d505c164062"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "['Car',\n",
+ " 'MPG',\n",
+ " 'Cylinders',\n",
+ " 'Displacement',\n",
+ " 'Horsepower',\n",
+ " 'Weight',\n",
+ " 'Acceleration',\n",
+ " 'Model',\n",
+ " 'Origin']"
+ ]
+ },
+ "execution_count": 19,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from pyspark.sql.types import *\n",
+ "df.columns"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 20,
+ "metadata": {
+ "id": "ik62VX34SlFh"
+ },
+ "outputs": [],
+ "source": [
+ "# Creating a list of the schema in the format column_name, data_type\n",
+ "labels = [\n",
+ " ('Car',StringType()),\n",
+ " ('MPG',DoubleType()),\n",
+ " ('Cylinders',IntegerType()),\n",
+ " ('Displacement',DoubleType()),\n",
+ " ('Horsepower',DoubleType()),\n",
+ " ('Weight',DoubleType()),\n",
+ " ('Acceleration',DoubleType()),\n",
+ " ('Model',IntegerType()),\n",
+ " ('Origin',StringType())\n",
+ "]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "T-Fp5y_oU9SF",
+ "outputId": "a0a84e9b-3cca-431f-ba19-d3c6c183c797"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "StructType(List(StructField(Car,StringType,true),StructField(MPG,DoubleType,true),StructField(Cylinders,IntegerType,true),StructField(Displacement,DoubleType,true),StructField(Horsepower,DoubleType,true),StructField(Weight,DoubleType,true),StructField(Acceleration,DoubleType,true),StructField(Model,IntegerType,true),StructField(Origin,StringType,true)))"
+ ]
+ },
+ "execution_count": 21,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Creating the schema that will be passed when reading the csv\n",
+ "schema = StructType([StructField(x[0], x[1], True) for x in labels])\n",
+ "schema"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "sgC7gtL5VTls",
+ "outputId": "1a84e4b5-0c32-4d8f-f399-602a09156346"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "root\n",
+ " |-- Car: string (nullable = true)\n",
+ " |-- MPG: double (nullable = true)\n",
+ " |-- Cylinders: integer (nullable = true)\n",
+ " |-- Displacement: double (nullable = true)\n",
+ " |-- Horsepower: double (nullable = true)\n",
+ " |-- Weight: double (nullable = true)\n",
+ " |-- Acceleration: double (nullable = true)\n",
+ " |-- Model: integer (nullable = true)\n",
+ " |-- Origin: string (nullable = true)\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "df = spark.read.csv('cars.csv', header=True, sep=\";\", schema=schema)\n",
+ "df.printSchema()\n",
+ "# The schema comes as we gave!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Dn2EAhesVmx0",
+ "outputId": "dc6c0aed-1731-4677-9311-708de99f3561"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+--------------------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|\n",
+ "+--------------------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "|Chevrolet Chevelle Malibu |18.0|8 |307.0 |130.0 |3504.0|12.0 |70 |US |\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693.0|11.5 |70 |US |\n",
+ "|Plymouth Satellite |18.0|8 |318.0 |150.0 |3436.0|11.0 |70 |US |\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433.0|12.0 |70 |US |\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449.0|10.5 |70 |US |\n",
+ "|Ford Galaxie 500 |15.0|8 |429.0 |198.0 |4341.0|10.0 |70 |US |\n",
+ "|Chevrolet Impala |14.0|8 |454.0 |220.0 |4354.0|9.0 |70 |US |\n",
+ "|Plymouth Fury iii |14.0|8 |440.0 |215.0 |4312.0|8.5 |70 |US |\n",
+ "|Pontiac Catalina |14.0|8 |455.0 |225.0 |4425.0|10.0 |70 |US |\n",
+ "|AMC Ambassador DPL |15.0|8 |390.0 |190.0 |3850.0|8.5 |70 |US |\n",
+ "|Citroen DS-21 Pallas |0.0 |4 |133.0 |115.0 |3090.0|17.5 |70 |Europe|\n",
+ "|Chevrolet Chevelle Concours (sw)|0.0 |8 |350.0 |165.0 |4142.0|11.5 |70 |US |\n",
+ "|Ford Torino (sw) |0.0 |8 |351.0 |153.0 |4034.0|11.0 |70 |US |\n",
+ "|Plymouth Satellite (sw) |0.0 |8 |383.0 |175.0 |4166.0|10.5 |70 |US |\n",
+ "|AMC Rebel SST (sw) |0.0 |8 |360.0 |175.0 |3850.0|11.0 |70 |US |\n",
+ "|Dodge Challenger SE |15.0|8 |383.0 |170.0 |3563.0|10.0 |70 |US |\n",
+ "|Plymouth 'Cuda 340 |14.0|8 |340.0 |160.0 |3609.0|8.0 |70 |US |\n",
+ "|Ford Mustang Boss 302 |0.0 |8 |302.0 |140.0 |3353.0|8.0 |70 |US |\n",
+ "|Chevrolet Monte Carlo |15.0|8 |400.0 |150.0 |3761.0|9.5 |70 |US |\n",
+ "|Buick Estate Wagon (sw) |14.0|8 |455.0 |225.0 |3086.0|10.0 |70 |US |\n",
+ "+--------------------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "df.show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "MDCO3TEe95OY"
+ },
+ "source": [
+ "As we can see here, the data has been successfully loaded with the specified datatypes."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "rsD48rckdHPe"
+ },
+ "source": [
+ " \n",
+ "## DataFrame Operations on Columns"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "cMlxdWfSY8ks"
+ },
+ "source": [
+ "We will go over the following in this section:\n",
+ "\n",
+ "1. Selecting Columns\n",
+ "2. Selecting Multiple Columns\n",
+ "3. Adding New Columns\n",
+ "4. Renaming Columns\n",
+ "5. Grouping By Columns\n",
+ "6. Removing Columns\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ikGR5pDICTu7"
+ },
+ "source": [
+ " \n",
+ "### Selecting Columns"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-VMwIwi2rj_o"
+ },
+ "source": [
+ "There are multiple ways to do a select in PySpark. You can find how they differ and how to use each one below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "ge9-_ygideWk",
+ "outputId": "37bd7e10-1169-4243-fb5e-798cd980c423"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Column<'Car'>\n",
+ "********************\n",
+ "+--------------------------------+\n",
+ "|Car |\n",
+ "+--------------------------------+\n",
+ "|Chevrolet Chevelle Malibu |\n",
+ "|Buick Skylark 320 |\n",
+ "|Plymouth Satellite |\n",
+ "|AMC Rebel SST |\n",
+ "|Ford Torino |\n",
+ "|Ford Galaxie 500 |\n",
+ "|Chevrolet Impala |\n",
+ "|Plymouth Fury iii |\n",
+ "|Pontiac Catalina |\n",
+ "|AMC Ambassador DPL |\n",
+ "|Citroen DS-21 Pallas |\n",
+ "|Chevrolet Chevelle Concours (sw)|\n",
+ "|Ford Torino (sw) |\n",
+ "|Plymouth Satellite (sw) |\n",
+ "|AMC Rebel SST (sw) |\n",
+ "|Dodge Challenger SE |\n",
+ "|Plymouth 'Cuda 340 |\n",
+ "|Ford Mustang Boss 302 |\n",
+ "|Chevrolet Monte Carlo |\n",
+ "|Buick Estate Wagon (sw) |\n",
+ "+--------------------------------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# 1st method\n",
+ "# Column name is case sensitive in this usage\n",
+ "print(df.Car)\n",
+ "print(\"*\"*20)\n",
+ "df.select(df.Car).show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "YxP1su8veNde"
+ },
+ "source": [
+ "**NOTE:**\n",
+ "\n",
+ "> **We can't always use the dot notation because it will break when a column name clashes with a reserved word or an existing attribute of the DataFrame class. Additionally, the column names are case sensitive with this notation, so we need to make sure the column names have been converted to a particular case before using it.**\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "md5zaET8dsr4",
+ "outputId": "534f2e91-1c8a-48c9-c666-8af90443e896"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Column<'car'>\n",
+ "********************\n",
+ "+--------------------------------+\n",
+ "|car |\n",
+ "+--------------------------------+\n",
+ "|Chevrolet Chevelle Malibu |\n",
+ "|Buick Skylark 320 |\n",
+ "|Plymouth Satellite |\n",
+ "|AMC Rebel SST |\n",
+ "|Ford Torino |\n",
+ "|Ford Galaxie 500 |\n",
+ "|Chevrolet Impala |\n",
+ "|Plymouth Fury iii |\n",
+ "|Pontiac Catalina |\n",
+ "|AMC Ambassador DPL |\n",
+ "|Citroen DS-21 Pallas |\n",
+ "|Chevrolet Chevelle Concours (sw)|\n",
+ "|Ford Torino (sw) |\n",
+ "|Plymouth Satellite (sw) |\n",
+ "|AMC Rebel SST (sw) |\n",
+ "|Dodge Challenger SE |\n",
+ "|Plymouth 'Cuda 340 |\n",
+ "|Ford Mustang Boss 302 |\n",
+ "|Chevrolet Monte Carlo |\n",
+ "|Buick Estate Wagon (sw) |\n",
+ "+--------------------------------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# 2nd method\n",
+ "# Column name is case insensitive here\n",
+ "print(df['car'])\n",
+ "print(\"*\"*20)\n",
+ "df.select(df['car']).show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "6Gkf14sHec9a",
+ "outputId": "a965002e-eb65-462d-fa1f-4959251216cd"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+--------------------------------+\n",
+ "|car |\n",
+ "+--------------------------------+\n",
+ "|Chevrolet Chevelle Malibu |\n",
+ "|Buick Skylark 320 |\n",
+ "|Plymouth Satellite |\n",
+ "|AMC Rebel SST |\n",
+ "|Ford Torino |\n",
+ "|Ford Galaxie 500 |\n",
+ "|Chevrolet Impala |\n",
+ "|Plymouth Fury iii |\n",
+ "|Pontiac Catalina |\n",
+ "|AMC Ambassador DPL |\n",
+ "|Citroen DS-21 Pallas |\n",
+ "|Chevrolet Chevelle Concours (sw)|\n",
+ "|Ford Torino (sw) |\n",
+ "|Plymouth Satellite (sw) |\n",
+ "|AMC Rebel SST (sw) |\n",
+ "|Dodge Challenger SE |\n",
+ "|Plymouth 'Cuda 340 |\n",
+ "|Ford Mustang Boss 302 |\n",
+ "|Chevrolet Monte Carlo |\n",
+ "|Buick Estate Wagon (sw) |\n",
+ "+--------------------------------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# 3rd method\n",
+ "# Column name is case insensitive here\n",
+ "from pyspark.sql.functions import col\n",
+ "df.select(col('car')).show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "z6QsMfnNt3qF"
+ },
+ "source": [
+ " \n",
+ "### Selecting Multiple Columns"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "bPjLMhZ6uAQR",
+ "outputId": "920f8155-50f5-43ae-eb3d-2b9f6be7756e"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Column<'Car'> Column<'Cylinders'>\n",
+ "****************************************\n",
+ "+--------------------------------+---------+\n",
+ "|Car |Cylinders|\n",
+ "+--------------------------------+---------+\n",
+ "|Chevrolet Chevelle Malibu |8 |\n",
+ "|Buick Skylark 320 |8 |\n",
+ "|Plymouth Satellite |8 |\n",
+ "|AMC Rebel SST |8 |\n",
+ "|Ford Torino |8 |\n",
+ "|Ford Galaxie 500 |8 |\n",
+ "|Chevrolet Impala |8 |\n",
+ "|Plymouth Fury iii |8 |\n",
+ "|Pontiac Catalina |8 |\n",
+ "|AMC Ambassador DPL |8 |\n",
+ "|Citroen DS-21 Pallas |4 |\n",
+ "|Chevrolet Chevelle Concours (sw)|8 |\n",
+ "|Ford Torino (sw) |8 |\n",
+ "|Plymouth Satellite (sw) |8 |\n",
+ "|AMC Rebel SST (sw) |8 |\n",
+ "|Dodge Challenger SE |8 |\n",
+ "|Plymouth 'Cuda 340 |8 |\n",
+ "|Ford Mustang Boss 302 |8 |\n",
+ "|Chevrolet Monte Carlo |8 |\n",
+ "|Buick Estate Wagon (sw) |8 |\n",
+ "+--------------------------------+---------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# 1st method\n",
+ "# Column name is case sensitive in this usage\n",
+ "print(df.Car, df.Cylinders)\n",
+ "print(\"*\"*40)\n",
+ "df.select(df.Car, df.Cylinders).show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "DMRMUrv7uHWa",
+ "outputId": "711ecef5-9f20-4106-b935-fb722ec14f72"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Column<'car'> Column<'cylinders'>\n",
+ "****************************************\n",
+ "+--------------------------------+---------+\n",
+ "|car |cylinders|\n",
+ "+--------------------------------+---------+\n",
+ "|Chevrolet Chevelle Malibu |8 |\n",
+ "|Buick Skylark 320 |8 |\n",
+ "|Plymouth Satellite |8 |\n",
+ "|AMC Rebel SST |8 |\n",
+ "|Ford Torino |8 |\n",
+ "|Ford Galaxie 500 |8 |\n",
+ "|Chevrolet Impala |8 |\n",
+ "|Plymouth Fury iii |8 |\n",
+ "|Pontiac Catalina |8 |\n",
+ "|AMC Ambassador DPL |8 |\n",
+ "|Citroen DS-21 Pallas |4 |\n",
+ "|Chevrolet Chevelle Concours (sw)|8 |\n",
+ "|Ford Torino (sw) |8 |\n",
+ "|Plymouth Satellite (sw) |8 |\n",
+ "|AMC Rebel SST (sw) |8 |\n",
+ "|Dodge Challenger SE |8 |\n",
+ "|Plymouth 'Cuda 340 |8 |\n",
+ "|Ford Mustang Boss 302 |8 |\n",
+ "|Chevrolet Monte Carlo |8 |\n",
+ "|Buick Estate Wagon (sw) |8 |\n",
+ "+--------------------------------+---------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# 2nd method\n",
+ "# Column name is case insensitive in this usage\n",
+ "print(df['car'],df['cylinders'])\n",
+ "print(\"*\"*40)\n",
+ "df.select(df['car'],df['cylinders']).show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "RgQ20-4GugjR",
+ "outputId": "91f3f5e6-b90d-413b-ab7f-10efc81b77aa"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+--------------------------------+---------+\n",
+ "|car |cylinders|\n",
+ "+--------------------------------+---------+\n",
+ "|Chevrolet Chevelle Malibu |8 |\n",
+ "|Buick Skylark 320 |8 |\n",
+ "|Plymouth Satellite |8 |\n",
+ "|AMC Rebel SST |8 |\n",
+ "|Ford Torino |8 |\n",
+ "|Ford Galaxie 500 |8 |\n",
+ "|Chevrolet Impala |8 |\n",
+ "|Plymouth Fury iii |8 |\n",
+ "|Pontiac Catalina |8 |\n",
+ "|AMC Ambassador DPL |8 |\n",
+ "|Citroen DS-21 Pallas |4 |\n",
+ "|Chevrolet Chevelle Concours (sw)|8 |\n",
+ "|Ford Torino (sw) |8 |\n",
+ "|Plymouth Satellite (sw) |8 |\n",
+ "|AMC Rebel SST (sw) |8 |\n",
+ "|Dodge Challenger SE |8 |\n",
+ "|Plymouth 'Cuda 340 |8 |\n",
+ "|Ford Mustang Boss 302 |8 |\n",
+ "|Chevrolet Monte Carlo |8 |\n",
+ "|Buick Estate Wagon (sw) |8 |\n",
+ "+--------------------------------+---------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# 3rd method\n",
+ "# Column name is case insensitive in this usage\n",
+ "from pyspark.sql.functions import col\n",
+ "df.select(col('car'),col('cylinders')).show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "85Lv3zSXCcOY"
+ },
+ "source": [
+ " \n",
+ "### Adding New Columns"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "d_Y7dcAHu-Uz"
+ },
+ "source": [
+ "We will take a look at three cases here:\n",
+ "\n",
+ "1. Adding a new column\n",
+ "2. Adding multiple columns\n",
+ "3. Deriving a new column from an exisitng one"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "oFHUmRKZeCEV",
+ "outputId": "4ed01c11-52d6-4ab3-a1ff-0423ef192293"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|first_column|\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+\n",
+ "|Chevrolet Chevelle Malibu|18.0|8 |307.0 |130.0 |3504.0|12.0 |70 |US |1 |\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693.0|11.5 |70 |US |1 |\n",
+ "|Plymouth Satellite |18.0|8 |318.0 |150.0 |3436.0|11.0 |70 |US |1 |\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433.0|12.0 |70 |US |1 |\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449.0|10.5 |70 |US |1 |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# CASE 1: Adding a new column\n",
+ "# We will add a new column called 'first_column' at the end\n",
+ "from pyspark.sql.functions import lit\n",
+ "df = df.withColumn('first_column',lit(1)) \n",
+ "# lit means literal. It populates the row with the literal value given.\n",
+ "# When adding static data / constant values, it is a good practice to use it.\n",
+ "df.show(5,truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "U9772_mHwAqL",
+ "outputId": "fe35b15f-cedb-4bda-eaf8-f817c55162e0"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+-------------+------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|first_column|second_column|third_column|\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+-------------+------------+\n",
+ "|Chevrolet Chevelle Malibu|18.0|8 |307.0 |130.0 |3504.0|12.0 |70 |US |1 |2 |Third Column|\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693.0|11.5 |70 |US |1 |2 |Third Column|\n",
+ "|Plymouth Satellite |18.0|8 |318.0 |150.0 |3436.0|11.0 |70 |US |1 |2 |Third Column|\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433.0|12.0 |70 |US |1 |2 |Third Column|\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449.0|10.5 |70 |US |1 |2 |Third Column|\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+-------------+------------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# CASE 2: Adding multiple columns\n",
+ "# We will add two new columns called 'second_column' and 'third_column' at the end\n",
+ "df = df.withColumn('second_column', lit(2)) \\\n",
+ " .withColumn('third_column', lit('Third Column')) \n",
+ "# lit means literal. It populates the row with the literal value given.\n",
+ "# When adding static data / constant values, it is a good practice to use it.\n",
+ "df.show(5,truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 32,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "dGaQS_pOwx_b",
+ "outputId": "f21c6b33-4e2a-41bd-cbbc-6b8712de1cbd"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+-------------+------------+----------------------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|first_column|second_column|third_column|car_model |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+-------------+------------+----------------------------+\n",
+ "|Chevrolet Chevelle Malibu|18.0|8 |307.0 |130.0 |3504.0|12.0 |70 |US |1 |2 |Third Column|Chevrolet Chevelle Malibu 70|\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693.0|11.5 |70 |US |1 |2 |Third Column|Buick Skylark 320 70 |\n",
+ "|Plymouth Satellite |18.0|8 |318.0 |150.0 |3436.0|11.0 |70 |US |1 |2 |Third Column|Plymouth Satellite 70 |\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433.0|12.0 |70 |US |1 |2 |Third Column|AMC Rebel SST 70 |\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449.0|10.5 |70 |US |1 |2 |Third Column|Ford Torino 70 |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+------------+-------------+------------+----------------------------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# CASE 3: Deriving a new column from an exisitng one\n",
+ "# We will add a new column called 'car_model' which has the value of car and model appended together with a space in between \n",
+ "from pyspark.sql.functions import concat\n",
+ "df = df.withColumn('car_model', concat(col(\"Car\"), lit(\" \"), col(\"model\")))\n",
+ "# lit means literal. It populates the row with the literal value given.\n",
+ "# When adding static data / constant values, it is a good practice to use it.\n",
+ "df.show(5,truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xeHExg-zxf5r"
+ },
+ "source": [
+ "As we can see, the new column car model has been created from existing columns. Since our aim was to create a column which has the value of car and model appended together with a space in between we have used the `concat` operator."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "QlMf04i2CjDC"
+ },
+ "source": [
+ " \n",
+ "### Renaming Columns"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CwGKbSHvxxxG"
+ },
+ "source": [
+ "We use the `withColumnRenamed` function to rename a columm in PySpark. Let us see it in action below:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 33,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "QJqgy6lKfk2o",
+ "outputId": "5abdd234-5373-4cbd-89e0-04312ac6123d"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+--------------------------------+----+---------+------------+----------+------+------------+-----+------+--------------+--------------+----------------+-----------------------------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|new_column_one|new_column_two|new_column_three|car_model |\n",
+ "+--------------------------------+----+---------+------------+----------+------+------------+-----+------+--------------+--------------+----------------+-----------------------------------+\n",
+ "|Chevrolet Chevelle Malibu |18.0|8 |307.0 |130.0 |3504.0|12.0 |70 |US |1 |2 |Third Column |Chevrolet Chevelle Malibu 70 |\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693.0|11.5 |70 |US |1 |2 |Third Column |Buick Skylark 320 70 |\n",
+ "|Plymouth Satellite |18.0|8 |318.0 |150.0 |3436.0|11.0 |70 |US |1 |2 |Third Column |Plymouth Satellite 70 |\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433.0|12.0 |70 |US |1 |2 |Third Column |AMC Rebel SST 70 |\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449.0|10.5 |70 |US |1 |2 |Third Column |Ford Torino 70 |\n",
+ "|Ford Galaxie 500 |15.0|8 |429.0 |198.0 |4341.0|10.0 |70 |US |1 |2 |Third Column |Ford Galaxie 500 70 |\n",
+ "|Chevrolet Impala |14.0|8 |454.0 |220.0 |4354.0|9.0 |70 |US |1 |2 |Third Column |Chevrolet Impala 70 |\n",
+ "|Plymouth Fury iii |14.0|8 |440.0 |215.0 |4312.0|8.5 |70 |US |1 |2 |Third Column |Plymouth Fury iii 70 |\n",
+ "|Pontiac Catalina |14.0|8 |455.0 |225.0 |4425.0|10.0 |70 |US |1 |2 |Third Column |Pontiac Catalina 70 |\n",
+ "|AMC Ambassador DPL |15.0|8 |390.0 |190.0 |3850.0|8.5 |70 |US |1 |2 |Third Column |AMC Ambassador DPL 70 |\n",
+ "|Citroen DS-21 Pallas |0.0 |4 |133.0 |115.0 |3090.0|17.5 |70 |Europe|1 |2 |Third Column |Citroen DS-21 Pallas 70 |\n",
+ "|Chevrolet Chevelle Concours (sw)|0.0 |8 |350.0 |165.0 |4142.0|11.5 |70 |US |1 |2 |Third Column |Chevrolet Chevelle Concours (sw) 70|\n",
+ "|Ford Torino (sw) |0.0 |8 |351.0 |153.0 |4034.0|11.0 |70 |US |1 |2 |Third Column |Ford Torino (sw) 70 |\n",
+ "|Plymouth Satellite (sw) |0.0 |8 |383.0 |175.0 |4166.0|10.5 |70 |US |1 |2 |Third Column |Plymouth Satellite (sw) 70 |\n",
+ "|AMC Rebel SST (sw) |0.0 |8 |360.0 |175.0 |3850.0|11.0 |70 |US |1 |2 |Third Column |AMC Rebel SST (sw) 70 |\n",
+ "|Dodge Challenger SE |15.0|8 |383.0 |170.0 |3563.0|10.0 |70 |US |1 |2 |Third Column |Dodge Challenger SE 70 |\n",
+ "|Plymouth 'Cuda 340 |14.0|8 |340.0 |160.0 |3609.0|8.0 |70 |US |1 |2 |Third Column |Plymouth 'Cuda 340 70 |\n",
+ "|Ford Mustang Boss 302 |0.0 |8 |302.0 |140.0 |3353.0|8.0 |70 |US |1 |2 |Third Column |Ford Mustang Boss 302 70 |\n",
+ "|Chevrolet Monte Carlo |15.0|8 |400.0 |150.0 |3761.0|9.5 |70 |US |1 |2 |Third Column |Chevrolet Monte Carlo 70 |\n",
+ "|Buick Estate Wagon (sw) |14.0|8 |455.0 |225.0 |3086.0|10.0 |70 |US |1 |2 |Third Column |Buick Estate Wagon (sw) 70 |\n",
+ "+--------------------------------+----+---------+------------+----------+------+------------+-----+------+--------------+--------------+----------------+-----------------------------------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "#Renaming a column in PySpark\n",
+ "df = df.withColumnRenamed('first_column', 'new_column_one') \\\n",
+ " .withColumnRenamed('second_column', 'new_column_two') \\\n",
+ " .withColumnRenamed('third_column', 'new_column_three')\n",
+ "df.show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4CDifVC2Cnml"
+ },
+ "source": [
+ " \n",
+ "### Grouping By Columns"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9wlB76FdyS0W"
+ },
+ "source": [
+ "Here, we see the Dataframe API way of grouping values. We will discuss how to:\n",
+ "\n",
+ "\n",
+ "1. Group By a single column\n",
+ "2. Group By multiple columns"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 34,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "M1ek2opVfqea",
+ "outputId": "3eab0db5-13ab-4868-871b-da4fb16ab768"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+------+-----+\n",
+ "|Origin|count|\n",
+ "+------+-----+\n",
+ "|Europe| 73|\n",
+ "| US| 254|\n",
+ "| Japan| 79|\n",
+ "+------+-----+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Group By a column in PySpark\n",
+ "df.groupBy('Origin').count().show(5)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 35,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "hUh_TWcOysoL",
+ "outputId": "dad5b95e-4949-4259-cbd4-b0103d6daec9"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+------+-----+-----+\n",
+ "|Origin|Model|count|\n",
+ "+------+-----+-----+\n",
+ "|Europe| 71| 5|\n",
+ "|Europe| 80| 9|\n",
+ "|Europe| 79| 4|\n",
+ "| Japan| 75| 4|\n",
+ "| US| 72| 18|\n",
+ "+------+-----+-----+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Group By multiple columns in PySpark\n",
+ "df.groupBy('Origin', 'Model').count().show(5)"
+ ]
+ },
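+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "`groupBy` is not limited to `count()`. Below is a minimal sketch (not executed here) of combining `groupBy` with `agg` to compute other aggregates per group; it assumes the same cars dataframe `df` used above, and the alias names are just illustrative."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: aggregates other than count, grouped by Origin (assumes `df` from above)\n",
+ "from pyspark.sql.functions import avg, max\n",
+ "df.groupBy('Origin').agg(avg('MPG').alias('avg_mpg'), max('Weight').alias('max_weight')).show()"
+ ]
+ },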
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CbpEj9fECrW3"
+ },
+ "source": [
+ " \n",
+ "### Removing Columns"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 36,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "xsb9PXxpfnmh",
+ "outputId": "601341ee-c131-46e3-9331-2e93664f9bb2"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+--------------+----------------+----------------------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|new_column_two|new_column_three|car_model |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+--------------+----------------+----------------------------+\n",
+ "|Chevrolet Chevelle Malibu|18.0|8 |307.0 |130.0 |3504.0|12.0 |70 |US |2 |Third Column |Chevrolet Chevelle Malibu 70|\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693.0|11.5 |70 |US |2 |Third Column |Buick Skylark 320 70 |\n",
+ "|Plymouth Satellite |18.0|8 |318.0 |150.0 |3436.0|11.0 |70 |US |2 |Third Column |Plymouth Satellite 70 |\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433.0|12.0 |70 |US |2 |Third Column |AMC Rebel SST 70 |\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449.0|10.5 |70 |US |2 |Third Column |Ford Torino 70 |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+--------------+----------------+----------------------------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "#Remove columns in PySpark\n",
+ "df = df.drop('new_column_one')\n",
+ "df.show(5,truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 37,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "EKOXrXtvzK_0",
+ "outputId": "329e9696-7a60-4fcf-dc56-4d6cbcd5dc7a"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+----------------------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|car_model |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+----------------------------+\n",
+ "|Chevrolet Chevelle Malibu|18.0|8 |307.0 |130.0 |3504.0|12.0 |70 |US |Chevrolet Chevelle Malibu 70|\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693.0|11.5 |70 |US |Buick Skylark 320 70 |\n",
+ "|Plymouth Satellite |18.0|8 |318.0 |150.0 |3436.0|11.0 |70 |US |Plymouth Satellite 70 |\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433.0|12.0 |70 |US |AMC Rebel SST 70 |\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449.0|10.5 |70 |US |Ford Torino 70 |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+----------------------------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "#Remove multiple columnss in one go\n",
+ "df = df.drop('new_column_two') \\\n",
+ " .drop('new_column_three')\n",
+ "df.show(5,truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "WbKK5iHwmIoV"
+ },
+ "source": [
+ " \n",
+ "## DataFrame Operations on Rows"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Quwx3KlLzeq9"
+ },
+ "source": [
+ "We will discuss the follwoing in this section:\n",
+ "\n",
+ "1. Filtering Rows\n",
+ "2. \t Get Distinct Rows\n",
+ "3. Sorting Rows\n",
+ "4. Union Dataframes\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "9bKlvX-SH-Wy"
+ },
+ "source": [
+ " \n",
+ "### Filtering Rows"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 38,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "YNfcjOIknA3n",
+ "outputId": "7319fc9b-76cb-428f-dbb6-99f25b80b79c"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "TOTAL RECORD COUNT: 406\n",
+ "EUROPE FILTERED RECORD COUNT: 73\n",
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|car_model |\n",
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "|Citroen DS-21 Pallas |0.0 |4 |133.0 |115.0 |3090.0|17.5 |70 |Europe|Citroen DS-21 Pallas 70 |\n",
+ "|Volkswagen 1131 Deluxe Sedan|26.0|4 |97.0 |46.0 |1835.0|20.5 |70 |Europe|Volkswagen 1131 Deluxe Sedan 70|\n",
+ "|Peugeot 504 |25.0|4 |110.0 |87.0 |2672.0|17.5 |70 |Europe|Peugeot 504 70 |\n",
+ "|Audi 100 LS |24.0|4 |107.0 |90.0 |2430.0|14.5 |70 |Europe|Audi 100 LS 70 |\n",
+ "|Saab 99e |25.0|4 |104.0 |95.0 |2375.0|17.5 |70 |Europe|Saab 99e 70 |\n",
+ "|BMW 2002 |26.0|4 |121.0 |113.0 |2234.0|12.5 |70 |Europe|BMW 2002 70 |\n",
+ "|Volkswagen Super Beetle 117 |0.0 |4 |97.0 |48.0 |1978.0|20.0 |71 |Europe|Volkswagen Super Beetle 117 71 |\n",
+ "|Opel 1900 |28.0|4 |116.0 |90.0 |2123.0|14.0 |71 |Europe|Opel 1900 71 |\n",
+ "|Peugeot 304 |30.0|4 |79.0 |70.0 |2074.0|19.5 |71 |Europe|Peugeot 304 71 |\n",
+ "|Fiat 124B |30.0|4 |88.0 |76.0 |2065.0|14.5 |71 |Europe|Fiat 124B 71 |\n",
+ "|Volkswagen Model 111 |27.0|4 |97.0 |60.0 |1834.0|19.0 |71 |Europe|Volkswagen Model 111 71 |\n",
+ "|Volkswagen Type 3 |23.0|4 |97.0 |54.0 |2254.0|23.5 |72 |Europe|Volkswagen Type 3 72 |\n",
+ "|Volvo 145e (sw) |18.0|4 |121.0 |112.0 |2933.0|14.5 |72 |Europe|Volvo 145e (sw) 72 |\n",
+ "|Volkswagen 411 (sw) |22.0|4 |121.0 |76.0 |2511.0|18.0 |72 |Europe|Volkswagen 411 (sw) 72 |\n",
+ "|Peugeot 504 (sw) |21.0|4 |120.0 |87.0 |2979.0|19.5 |72 |Europe|Peugeot 504 (sw) 72 |\n",
+ "|Renault 12 (sw) |26.0|4 |96.0 |69.0 |2189.0|18.0 |72 |Europe|Renault 12 (sw) 72 |\n",
+ "|Volkswagen Super Beetle |26.0|4 |97.0 |46.0 |1950.0|21.0 |73 |Europe|Volkswagen Super Beetle 73 |\n",
+ "|Fiat 124 Sport Coupe |26.0|4 |98.0 |90.0 |2265.0|15.5 |73 |Europe|Fiat 124 Sport Coupe 73 |\n",
+ "|Fiat 128 |29.0|4 |68.0 |49.0 |1867.0|19.5 |73 |Europe|Fiat 128 73 |\n",
+ "|Opel Manta |24.0|4 |116.0 |75.0 |2158.0|15.5 |73 |Europe|Opel Manta 73 |\n",
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Filtering rows in PySpark\n",
+ "total_count = df.count()\n",
+ "print(\"TOTAL RECORD COUNT: \" + str(total_count)) \n",
+ "europe_filtered_count = df.filter(col('Origin')=='Europe').count()\n",
+ "print(\"EUROPE FILTERED RECORD COUNT: \" + str(europe_filtered_count))\n",
+ "df.filter(col('Origin')=='Europe').show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 39,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "MXJxRwBQ1lyd",
+ "outputId": "06fcc49c-c21e-47c5-ee14-324ae724e274"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "TOTAL RECORD COUNT: 406\n",
+ "EUROPE FILTERED RECORD COUNT: 66\n",
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|car_model |\n",
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "|Citroen DS-21 Pallas |0.0 |4 |133.0 |115.0 |3090.0|17.5 |70 |Europe|Citroen DS-21 Pallas 70 |\n",
+ "|Volkswagen 1131 Deluxe Sedan|26.0|4 |97.0 |46.0 |1835.0|20.5 |70 |Europe|Volkswagen 1131 Deluxe Sedan 70|\n",
+ "|Peugeot 504 |25.0|4 |110.0 |87.0 |2672.0|17.5 |70 |Europe|Peugeot 504 70 |\n",
+ "|Audi 100 LS |24.0|4 |107.0 |90.0 |2430.0|14.5 |70 |Europe|Audi 100 LS 70 |\n",
+ "|Saab 99e |25.0|4 |104.0 |95.0 |2375.0|17.5 |70 |Europe|Saab 99e 70 |\n",
+ "|BMW 2002 |26.0|4 |121.0 |113.0 |2234.0|12.5 |70 |Europe|BMW 2002 70 |\n",
+ "|Volkswagen Super Beetle 117 |0.0 |4 |97.0 |48.0 |1978.0|20.0 |71 |Europe|Volkswagen Super Beetle 117 71 |\n",
+ "|Opel 1900 |28.0|4 |116.0 |90.0 |2123.0|14.0 |71 |Europe|Opel 1900 71 |\n",
+ "|Peugeot 304 |30.0|4 |79.0 |70.0 |2074.0|19.5 |71 |Europe|Peugeot 304 71 |\n",
+ "|Fiat 124B |30.0|4 |88.0 |76.0 |2065.0|14.5 |71 |Europe|Fiat 124B 71 |\n",
+ "|Volkswagen Model 111 |27.0|4 |97.0 |60.0 |1834.0|19.0 |71 |Europe|Volkswagen Model 111 71 |\n",
+ "|Volkswagen Type 3 |23.0|4 |97.0 |54.0 |2254.0|23.5 |72 |Europe|Volkswagen Type 3 72 |\n",
+ "|Volvo 145e (sw) |18.0|4 |121.0 |112.0 |2933.0|14.5 |72 |Europe|Volvo 145e (sw) 72 |\n",
+ "|Volkswagen 411 (sw) |22.0|4 |121.0 |76.0 |2511.0|18.0 |72 |Europe|Volkswagen 411 (sw) 72 |\n",
+ "|Peugeot 504 (sw) |21.0|4 |120.0 |87.0 |2979.0|19.5 |72 |Europe|Peugeot 504 (sw) 72 |\n",
+ "|Renault 12 (sw) |26.0|4 |96.0 |69.0 |2189.0|18.0 |72 |Europe|Renault 12 (sw) 72 |\n",
+ "|Volkswagen Super Beetle |26.0|4 |97.0 |46.0 |1950.0|21.0 |73 |Europe|Volkswagen Super Beetle 73 |\n",
+ "|Fiat 124 Sport Coupe |26.0|4 |98.0 |90.0 |2265.0|15.5 |73 |Europe|Fiat 124 Sport Coupe 73 |\n",
+ "|Fiat 128 |29.0|4 |68.0 |49.0 |1867.0|19.5 |73 |Europe|Fiat 128 73 |\n",
+ "|Opel Manta |24.0|4 |116.0 |75.0 |2158.0|15.5 |73 |Europe|Opel Manta 73 |\n",
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Filtering rows in PySpark based on Multiple conditions\n",
+ "total_count = df.count()\n",
+ "print(\"TOTAL RECORD COUNT: \" + str(total_count)) \n",
+ "europe_filtered_count = df.filter((col('Origin')=='Europe') & \n",
+ " (col('Cylinders')==4)).count() # Two conditions added here\n",
+ "print(\"EUROPE FILTERED RECORD COUNT: \" + str(europe_filtered_count))\n",
+ "df.filter(col('Origin')=='Europe').show(truncate=False)"
+ ]
+ },
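+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Besides comparing columns with `col()`, `filter` also accepts SQL-style expression strings, and `isin()` matches a column against a list of values. The cell below is a minimal sketch (not executed here), assuming the same dataframe `df`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: alternative filter styles (assumes `df` from above)\n",
+ "# 1) SQL-style expression string\n",
+ "print(df.filter(\"Cylinders = 4 and Origin = 'Europe'\").count())\n",
+ "# 2) isin() matches a column against a list of values\n",
+ "df.filter(col('Origin').isin('Europe', 'Japan')).select('Car', 'Origin').show(5, truncate=False)"
+ ]
+ },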
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zLU-a4auIEvh"
+ },
+ "source": [
+ " \n",
+ "### Get Distinct Rows"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 40,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "B1RKg1UrmBQz",
+ "outputId": "d2713c65-a75b-4b64-b6e1-181b96259649"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+------+\n",
+ "|Origin|\n",
+ "+------+\n",
+ "|Europe|\n",
+ "| US|\n",
+ "| Japan|\n",
+ "+------+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "#Get Unique Rows in PySpark\n",
+ "df.select('Origin').distinct().show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 41,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "_LQWXPXt0g0N",
+ "outputId": "6098ccd8-ae6d-4772-b18a-aad49f8d58c0"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+------+-----+\n",
+ "|Origin|model|\n",
+ "+------+-----+\n",
+ "|Europe| 71|\n",
+ "|Europe| 80|\n",
+ "|Europe| 79|\n",
+ "| Japan| 75|\n",
+ "| US| 72|\n",
+ "| US| 80|\n",
+ "|Europe| 74|\n",
+ "| Japan| 79|\n",
+ "|Europe| 76|\n",
+ "| US| 75|\n",
+ "| Japan| 77|\n",
+ "| US| 82|\n",
+ "| Japan| 80|\n",
+ "| Japan| 78|\n",
+ "| US| 78|\n",
+ "|Europe| 75|\n",
+ "| US| 71|\n",
+ "| US| 77|\n",
+ "| Japan| 70|\n",
+ "| Japan| 71|\n",
+ "+------+-----+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "#Get Unique Rows in PySpark based on mutliple columns\n",
+ "df.select('Origin','model').distinct().show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-069UYUwIIYI"
+ },
+ "source": [
+ " \n",
+ "### Sorting Rows"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 42,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "4ZpeJvz0nkBI",
+ "outputId": "fa965b71-521e-48b7-b440-0033a4ae6599"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|car_model |\n",
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "|Mazda RX-4 |21.5|3 |80.0 |110.0 |2720.0|13.5 |77 |Japan |Mazda RX-4 77 |\n",
+ "|Mazda RX-7 GS |23.7|3 |70.0 |100.0 |2420.0|12.5 |80 |Japan |Mazda RX-7 GS 80 |\n",
+ "|Mazda RX2 Coupe |19.0|3 |70.0 |97.0 |2330.0|13.5 |72 |Japan |Mazda RX2 Coupe 72 |\n",
+ "|Mazda RX3 |18.0|3 |70.0 |90.0 |2124.0|13.5 |73 |Japan |Mazda RX3 73 |\n",
+ "|Datsun 510 (sw) |28.0|4 |97.0 |92.0 |2288.0|17.0 |72 |Japan |Datsun 510 (sw) 72 |\n",
+ "|Opel 1900 |28.0|4 |116.0 |90.0 |2123.0|14.0 |71 |Europe|Opel 1900 71 |\n",
+ "|Mercury Capri 2000 |23.0|4 |122.0 |86.0 |2220.0|14.0 |71 |US |Mercury Capri 2000 71 |\n",
+ "|Volkswagen 1131 Deluxe Sedan|26.0|4 |97.0 |46.0 |1835.0|20.5 |70 |Europe|Volkswagen 1131 Deluxe Sedan 70|\n",
+ "|Peugeot 304 |30.0|4 |79.0 |70.0 |2074.0|19.5 |71 |Europe|Peugeot 304 71 |\n",
+ "|Fiat 124B |30.0|4 |88.0 |76.0 |2065.0|14.5 |71 |Europe|Fiat 124B 71 |\n",
+ "|Chevrolet Vega (sw) |22.0|4 |140.0 |72.0 |2408.0|19.0 |71 |US |Chevrolet Vega (sw) 71 |\n",
+ "|Datsun 1200 |35.0|4 |72.0 |69.0 |1613.0|18.0 |71 |Japan |Datsun 1200 71 |\n",
+ "|Volkswagen Model 111 |27.0|4 |97.0 |60.0 |1834.0|19.0 |71 |Europe|Volkswagen Model 111 71 |\n",
+ "|Volkswagen Type 3 |23.0|4 |97.0 |54.0 |2254.0|23.5 |72 |Europe|Volkswagen Type 3 72 |\n",
+ "|Audi 100 LS |24.0|4 |107.0 |90.0 |2430.0|14.5 |70 |Europe|Audi 100 LS 70 |\n",
+ "|BMW 2002 |26.0|4 |121.0 |113.0 |2234.0|12.5 |70 |Europe|BMW 2002 70 |\n",
+ "|Toyota Corolla 1200 |31.0|4 |71.0 |65.0 |1773.0|19.0 |71 |Japan |Toyota Corolla 1200 71 |\n",
+ "|Chevrolet Vega 2300 |28.0|4 |140.0 |90.0 |2264.0|15.5 |71 |US |Chevrolet Vega 2300 71 |\n",
+ "|Ford Pinto |25.0|4 |98.0 |0.0 |2046.0|19.0 |71 |US |Ford Pinto 71 |\n",
+ "|Dodge Colt Hardtop |25.0|4 |97.5 |80.0 |2126.0|17.0 |72 |US |Dodge Colt Hardtop 72 |\n",
+ "+----------------------------+----+---------+------------+----------+------+------------+-----+------+-------------------------------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Sort Rows in PySpark\n",
+ "# By default the data will be sorted in ascending order\n",
+ "df.orderBy('Cylinders').show(truncate=False) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 43,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "v1CEwofMJV-D",
+ "outputId": "46809931-e309-4482-adf1-2bd7cfdac811"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+----------------------------+\n",
+ "|Car |MPG |Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|car_model |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+----------------------------+\n",
+ "|Plymouth 'Cuda 340 |14.0|8 |340.0 |160.0 |3609.0|8.0 |70 |US |Plymouth 'Cuda 340 70 |\n",
+ "|Pontiac Safari (sw) |13.0|8 |400.0 |175.0 |5140.0|12.0 |71 |US |Pontiac Safari (sw) 71 |\n",
+ "|Ford Mustang Boss 302 |0.0 |8 |302.0 |140.0 |3353.0|8.0 |70 |US |Ford Mustang Boss 302 70 |\n",
+ "|Buick Skylark 320 |15.0|8 |350.0 |165.0 |3693.0|11.5 |70 |US |Buick Skylark 320 70 |\n",
+ "|Chevrolet Monte Carlo |15.0|8 |400.0 |150.0 |3761.0|9.5 |70 |US |Chevrolet Monte Carlo 70 |\n",
+ "|AMC Rebel SST |16.0|8 |304.0 |150.0 |3433.0|12.0 |70 |US |AMC Rebel SST 70 |\n",
+ "|Buick Estate Wagon (sw) |14.0|8 |455.0 |225.0 |3086.0|10.0 |70 |US |Buick Estate Wagon (sw) 70 |\n",
+ "|Ford Galaxie 500 |15.0|8 |429.0 |198.0 |4341.0|10.0 |70 |US |Ford Galaxie 500 70 |\n",
+ "|Ford F250 |10.0|8 |360.0 |215.0 |4615.0|14.0 |70 |US |Ford F250 70 |\n",
+ "|Plymouth Fury iii |14.0|8 |440.0 |215.0 |4312.0|8.5 |70 |US |Plymouth Fury iii 70 |\n",
+ "|Chevy C20 |10.0|8 |307.0 |200.0 |4376.0|15.0 |70 |US |Chevy C20 70 |\n",
+ "|AMC Ambassador DPL |15.0|8 |390.0 |190.0 |3850.0|8.5 |70 |US |AMC Ambassador DPL 70 |\n",
+ "|Dodge D200 |11.0|8 |318.0 |210.0 |4382.0|13.5 |70 |US |Dodge D200 70 |\n",
+ "|Ford Torino (sw) |0.0 |8 |351.0 |153.0 |4034.0|11.0 |70 |US |Ford Torino (sw) 70 |\n",
+ "|Hi 1200D |9.0 |8 |304.0 |193.0 |4732.0|18.5 |70 |US |Hi 1200D 70 |\n",
+ "|AMC Rebel SST (sw) |0.0 |8 |360.0 |175.0 |3850.0|11.0 |70 |US |AMC Rebel SST (sw) 70 |\n",
+ "|Chevrolet Impala |14.0|8 |350.0 |165.0 |4209.0|12.0 |71 |US |Chevrolet Impala 71 |\n",
+ "|Chevrolet Chevelle Malibu|18.0|8 |307.0 |130.0 |3504.0|12.0 |70 |US |Chevrolet Chevelle Malibu 70|\n",
+ "|Pontiac Catalina Brougham|14.0|8 |400.0 |175.0 |4464.0|11.5 |71 |US |Pontiac Catalina Brougham 71|\n",
+ "|Ford Torino |17.0|8 |302.0 |140.0 |3449.0|10.5 |70 |US |Ford Torino 70 |\n",
+ "+-------------------------+----+---------+------------+----------+------+------------+-----+------+----------------------------+\n",
+ "only showing top 20 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# To change the sorting order, you can use the ascending parameter\n",
+ "df.orderBy('Cylinders', ascending=False).show(truncate=False) "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 44,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Zx3W4aeL5A4O",
+ "outputId": "6671ffa3-9024-4a5f-9342-2cba4483e387"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+------+-----+\n",
+ "|Origin|count|\n",
+ "+------+-----+\n",
+ "| US| 254|\n",
+ "| Japan| 79|\n",
+ "|Europe| 73|\n",
+ "+------+-----+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Using groupBy aand orderBy together\n",
+ "df.groupBy(\"Origin\").count().orderBy('count', ascending=False).show(10)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "aN0-A_JsIX-X"
+ },
+ "source": [
+ " \n",
+ "### Union Dataframes"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VH0KOaBrJt6v"
+ },
+ "source": [
+ "You will see three main methods for performing union of dataframes. It is important to know the difference between them and which one is preferred:\n",
+ "\n",
+ "* `union()` – It is used to merge two DataFrames of the same structure/schema. If schemas are not the same, it returns an error\n",
+ "* `unionAll()` – This function is deprecated since Spark 2.0.0, and replaced with union()\n",
+ "* `unionByName()` - This function is used to merge two dataframes based on column name.\n",
+ "\n",
+ "> Since `unionAll()` is deprecated, **`union()` is the preferred method for merging dataframes.**\n",
+ " \n",
+ "> The difference between `unionByName()` and `union()` is that `unionByName()` resolves columns by name, not by position.\n",
+ "\n",
+ "In other SQLs, Union eliminates the duplicates but UnionAll merges two datasets, thereby including duplicate records. But, in PySpark, both behave the same and includes duplicate records. The recommendation is to use `distinct()` or `dropDuplicates()` to remove duplicate records."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 45,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "bCZIzfYmnx--",
+ "outputId": "cd0de201-5f82-4ce3-e15c-35b7665ad079"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "EUROPE CARS: 3\n",
+ "JAPAN CARS: 4\n",
+ "AFTER UNION: 7\n"
+ ]
+ }
+ ],
+ "source": [
+ "# CASE 1: Union When columns are in order\n",
+ "df = spark.read.csv('cars.csv', header=True, sep=\";\", inferSchema=True)\n",
+ "europe_cars = df.filter((col('Origin')=='Europe') & (col('Cylinders')==5))\n",
+ "japan_cars = df.filter((col('Origin')=='Japan') & (col('Cylinders')==3))\n",
+ "print(\"EUROPE CARS: \"+str(europe_cars.count()))\n",
+ "print(\"JAPAN CARS: \"+str(japan_cars.count()))\n",
+ "print(\"AFTER UNION: \"+str(europe_cars.union(japan_cars).count()))"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "1pfPzVOFqC_8"
+ },
+ "source": [
+ "**Result:**\n",
+ "\n",
+ "> As you can see here, there were 3 cars from Europe with 5 Cylinders, and 4 cars from Japan with 3 Cylinders. After union, there are 7 cars in total.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 46,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "CjWjzWBoMxx0",
+ "outputId": "11237d45-cacc-403c-e971-6e2717a32135"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+----+----+----+\n",
+ "|col0|col1|col2|\n",
+ "+----+----+----+\n",
+ "| 1| 2| 3|\n",
+ "| 6| 4| 5|\n",
+ "+----+----+----+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# CASE 1: Union When columns are not in order\n",
+ "# Creating two dataframes with jumbled columns\n",
+ "df1 = spark.createDataFrame([[1, 2, 3]], [\"col0\", \"col1\", \"col2\"])\n",
+ "df2 = spark.createDataFrame([[4, 5, 6]], [\"col1\", \"col2\", \"col0\"])\n",
+ "df1.unionByName(df2).show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "EX33t8e3PyGy"
+ },
+ "source": [
+ "**Result:**\n",
+ "\n",
+ "> As you can see here, the two dataframes have been successfully merged based on their column names.\n",
+ "\n"
+ ]
+ },
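+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As noted earlier, `union()` keeps duplicate records. Below is a minimal sketch (not executed here) of removing them with `dropDuplicates()`; the two small dataframes are throwaway examples created just for illustration."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: union keeps duplicates, dropDuplicates() removes them\n",
+ "df_a = spark.createDataFrame([[1, 'Car A'], [2, 'Car B']], ['id', 'car_name'])\n",
+ "df_b = spark.createDataFrame([[2, 'Car B'], [3, 'Car C']], ['id', 'car_name'])\n",
+ "unioned = df_a.union(df_b)\n",
+ "print(unioned.count())                   # 4 rows - the 'Car B' row appears twice\n",
+ "print(unioned.dropDuplicates().count())  # 3 rows after removing the duplicate"
+ ]
+ },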
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "aHjILb1DriuX"
+ },
+ "source": [
+ " \n",
+ "## Common Data Manipulation Functions"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 47,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "x3vlC7ZerlKb",
+ "outputId": "89d386c3-e9c2-4532-c086-f8b00cc38881"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "['Column', 'DataFrame', 'DataType', 'PandasUDFType', 'PythonEvalType', 'SparkContext', 'StringType', 'UserDefinedFunction', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_create_column_from_literal', '_create_lambda', '_create_udf', '_get_get_jvm_function', '_get_lambda_parameters', '_invoke_binary_math_function', '_invoke_function', '_invoke_function_over_column', '_invoke_higher_order_function', '_options_to_str', '_test', '_to_java_column', '_to_seq', '_unresolved_named_lambda_variable', 'abs', 'acos', 'acosh', 'add_months', 'aggregate', 'approxCountDistinct', 'approx_count_distinct', 'array', 'array_contains', 'array_distinct', 'array_except', 'array_intersect', 'array_join', 'array_max', 'array_min', 'array_position', 'array_remove', 'array_repeat', 'array_sort', 'array_union', 'arrays_overlap', 'arrays_zip', 'asc', 'asc_nulls_first', 'asc_nulls_last', 'ascii', 'asin', 'asinh', 'assert_true', 'atan', 'atan2', 'atanh', 'avg', 'base64', 'bin', 'bitwiseNOT', 'broadcast', 'bround', 'bucket', 'cbrt', 'ceil', 'coalesce', 'col', 'collect_list', 'collect_set', 'column', 'concat', 'concat_ws', 'conv', 'corr', 'cos', 'cosh', 'count', 'countDistinct', 'covar_pop', 'covar_samp', 'crc32', 'create_map', 'cume_dist', 'current_date', 'current_timestamp', 'date_add', 'date_format', 'date_sub', 'date_trunc', 'datediff', 'dayofmonth', 'dayofweek', 'dayofyear', 'days', 'decode', 'degrees', 'dense_rank', 'desc', 'desc_nulls_first', 'desc_nulls_last', 'element_at', 'encode', 'exists', 'exp', 'explode', 'explode_outer', 'expm1', 'expr', 'factorial', 'filter', 'first', 'flatten', 'floor', 'forall', 'format_number', 'format_string', 'from_csv', 'from_json', 'from_unixtime', 'from_utc_timestamp', 'functools', 'get_json_object', 'greatest', 'grouping', 'grouping_id', 'hash', 'hex', 'hour', 'hours', 'hypot', 'initcap', 'input_file_name', 'instr', 'isnan', 'isnull', 'json_tuple', 'kurtosis', 'lag', 'last', 'last_day', 'lead', 'least', 'length', 'levenshtein', 'lit', 'locate', 'log', 'log10', 'log1p', 'log2', 'lower', 'lpad', 'ltrim', 'map_concat', 'map_entries', 'map_filter', 'map_from_arrays', 'map_from_entries', 'map_keys', 'map_values', 'map_zip_with', 'max', 'md5', 'mean', 'min', 'minute', 'monotonically_increasing_id', 'month', 'months', 'months_between', 'nanvl', 'next_day', 'nth_value', 'ntile', 'overlay', 'pandas_udf', 'percent_rank', 'percentile_approx', 'posexplode', 'posexplode_outer', 'pow', 'quarter', 'radians', 'raise_error', 'rand', 'randn', 'rank', 'regexp_extract', 'regexp_replace', 'repeat', 'reverse', 'rint', 'round', 'row_number', 'rpad', 'rtrim', 'schema_of_csv', 'schema_of_json', 'second', 'sequence', 'sha1', 'sha2', 'shiftLeft', 'shiftRight', 'shiftRightUnsigned', 'shuffle', 'signum', 'sin', 'since', 'sinh', 'size', 'skewness', 'slice', 'sort_array', 'soundex', 'spark_partition_id', 'split', 'sqrt', 'stddev', 'stddev_pop', 'stddev_samp', 'struct', 'substring', 'substring_index', 'sum', 'sumDistinct', 'sys', 'tan', 'tanh', 'timestamp_seconds', 'toDegrees', 'toRadians', 'to_csv', 'to_date', 'to_json', 'to_str', 'to_timestamp', 'to_utc_timestamp', 'transform', 'transform_keys', 'transform_values', 'translate', 'trim', 'trunc', 'udf', 'unbase64', 'unhex', 'unix_timestamp', 'upper', 'var_pop', 'var_samp', 'variance', 'warnings', 'weekofyear', 'when', 'window', 'xxhash64', 'year', 'years', 'zip_with']\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Functions available in PySpark\n",
+ "from pyspark.sql import functions\n",
+ "# Similar to python, we can use the dir function to view the avaiable functions\n",
+ "print(dir(functions)) "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "PIKigra7A34e"
+ },
+ "source": [
+ " \n",
+ "### String Functions"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 48,
+ "metadata": {
+ "id": "63QDccSjBqC4"
+ },
+ "outputs": [],
+ "source": [
+ "# Loading the data\n",
+ "from pyspark.sql.functions import col\n",
+ "df = spark.read.csv('cars.csv', header=True, sep=\";\", inferSchema=True)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "LiXWN8DUA9x6"
+ },
+ "source": [
+ "**Display the Car column in exisitng, lower and upper characters, and the first 4 characters of the column**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 49,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "52Gh9c99BZFr",
+ "outputId": "51c7cfa5-7363-4387-cb35-57ee79e22055"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Help on function substring in module pyspark.sql.functions:\n",
+ "\n",
+ "substring(str, pos, len)\n",
+ " Substring starts at `pos` and is of length `len` when str is String type or\n",
+ " returns the slice of byte array that starts at `pos` in byte and is of length `len`\n",
+ " when str is Binary type.\n",
+ " \n",
+ " .. versionadded:: 1.5.0\n",
+ " \n",
+ " Notes\n",
+ " -----\n",
+ " The position is not zero based, but 1 based index.\n",
+ " \n",
+ " Examples\n",
+ " --------\n",
+ " >>> df = spark.createDataFrame([('abcd',)], ['s',])\n",
+ " >>> df.select(substring(df.s, 1, 2).alias('s')).collect()\n",
+ " [Row(s='ab')]\n",
+ "\n",
+ "+-------------------------+-------------------------+-------------------------+------------------+\n",
+ "|Car |lower(Car) |upper(Car) |concatenated value|\n",
+ "+-------------------------+-------------------------+-------------------------+------------------+\n",
+ "|Chevrolet Chevelle Malibu|chevrolet chevelle malibu|CHEVROLET CHEVELLE MALIBU|Chev |\n",
+ "|Buick Skylark 320 |buick skylark 320 |BUICK SKYLARK 320 |Buic |\n",
+ "|Plymouth Satellite |plymouth satellite |PLYMOUTH SATELLITE |Plym |\n",
+ "|AMC Rebel SST |amc rebel sst |AMC REBEL SST |AMC |\n",
+ "|Ford Torino |ford torino |FORD TORINO |Ford |\n",
+ "+-------------------------+-------------------------+-------------------------+------------------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql.functions import col,lower, upper, substring\n",
+ "# Prints out the details of a function\n",
+ "help(substring)\n",
+ "# alias is used to rename the column name in the output\n",
+ "df.select(col('Car'),lower(col('Car')),upper(col('Car')),substring(col('Car'),1,4).alias(\"concatenated value\")).show(5, False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "GZJFdTbk6rBt"
+ },
+ "source": [
+ "**Concatenate the Car column and Model column and add a space between them.**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 50,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "8Lo951Cg6phi",
+ "outputId": "e3921f2d-9a96-4a33-f397-c458ec391bf5"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------------+-----+----------------------------+\n",
+ "|Car |model|concat(Car, , model) |\n",
+ "+-------------------------+-----+----------------------------+\n",
+ "|Chevrolet Chevelle Malibu|70 |Chevrolet Chevelle Malibu 70|\n",
+ "|Buick Skylark 320 |70 |Buick Skylark 320 70 |\n",
+ "|Plymouth Satellite |70 |Plymouth Satellite 70 |\n",
+ "|AMC Rebel SST |70 |AMC Rebel SST 70 |\n",
+ "|Ford Torino |70 |Ford Torino 70 |\n",
+ "+-------------------------+-----+----------------------------+\n",
+ "only showing top 5 rows\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql.functions import concat\n",
+ "df.select(col(\"Car\"),col(\"model\"),concat(col(\"Car\"), lit(\" \"), col(\"model\"))).show(5, False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ldtA0wk9BMkT"
+ },
+ "source": [
+ " \n",
+ "### Numeric functions"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "kmz4G5LVBOs6"
+ },
+ "source": [
+ "**Show the oldest date and the most recent date**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "wBDDH-YpBbdk",
+ "outputId": "71d15146-4d47-45d4-af92-f536684fc7b3"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-----------+-----------+\n",
+ "|min(Weight)|max(Weight)|\n",
+ "+-----------+-----------+\n",
+ "| 1613| 5140|\n",
+ "+-----------+-----------+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql.functions import min, max\n",
+ "df.select(min(col('Weight')), max(col('Weight'))).show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "MTg-Royz7Nvi"
+ },
+ "source": [
+ "**Add 10 to the minimum and maximum weight**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 52,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "YeiemMsI7Vm2",
+ "outputId": "21d6646c-8e6e-47af-bef9-b92ee07c7df7"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+------------------+------------------+\n",
+ "|(min(Weight) + 10)|max((Weight + 10))|\n",
+ "+------------------+------------------+\n",
+ "| 1623| 5150|\n",
+ "+------------------+------------------+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql.functions import min, max, lit\n",
+ "df.select(min(col('Weight'))+lit(10), max(col('Weight')+lit(10))).show()"
+ ]
+ },
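+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Other numeric helpers such as `round`, `floor` and `ceil` are also available in `pyspark.sql.functions`. Below is a minimal sketch (not executed here), assuming the cars dataframe `df` is still loaded; the alias names are just illustrative."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: rounding helpers applied to the Acceleration column (assumes the cars `df`)\n",
+ "from pyspark.sql.functions import round, floor, ceil\n",
+ "df.select(col('Acceleration'),\n",
+ "          round(col('Acceleration')).alias('rounded'),\n",
+ "          floor(col('Acceleration')).alias('floored'),\n",
+ "          ceil(col('Acceleration')).alias('ceiled')).show(5)"
+ ]
+ },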
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "KQ6Ul9HGCwC3"
+ },
+ "source": [
+ " \n",
+ "### Operations on Date"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "s1jmBN2qFHyk"
+ },
+ "source": [
+ "> [PySpark follows SimpleDateFormat table of Java. Click here to view the docs.](https://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 53,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "sCTeI_JvDCsH",
+ "outputId": "e3f4a392-00d7-422d-945c-8038b3804876"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-------------------+\n",
+ "| DOB|\n",
+ "+-------------------+\n",
+ "|2019-12-25 13:30:00|\n",
+ "+-------------------+\n",
+ "\n",
+ "root\n",
+ " |-- DOB: string (nullable = true)\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql.functions import to_date, to_timestamp, lit\n",
+ "df = spark.createDataFrame([('2019-12-25 13:30:00',)], ['DOB'])\n",
+ "df.show()\n",
+ "df.printSchema()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 54,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "ZH8ja1eHEW8x",
+ "outputId": "2cef04c2-6c6f-494d-db9a-a4c17599b65c"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+---------------------------------+--------------------------------------+\n",
+ "|to_date(DOB, yyyy-MM-dd HH:mm:ss)|to_timestamp(DOB, yyyy-MM-dd HH:mm:ss)|\n",
+ "+---------------------------------+--------------------------------------+\n",
+ "| 2019-12-25| 2019-12-25 13:30:00|\n",
+ "+---------------------------------+--------------------------------------+\n",
+ "\n",
+ "root\n",
+ " |-- to_date(DOB, yyyy-MM-dd HH:mm:ss): date (nullable = true)\n",
+ " |-- to_timestamp(DOB, yyyy-MM-dd HH:mm:ss): timestamp (nullable = true)\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "df = spark.createDataFrame([('2019-12-25 13:30:00',)], ['DOB'])\n",
+ "df = df.select(to_date(col('DOB'),'yyyy-MM-dd HH:mm:ss'), to_timestamp(col('DOB'),'yyyy-MM-dd HH:mm:ss'))\n",
+ "df.show()\n",
+ "df.printSchema()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 55,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "7g9m_8PPErI1",
+ "outputId": "854b3a71-bf83-4be8-c282-d94cc914c91f"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+----------------------------------+---------------------------------------+\n",
+ "|to_date(DOB, dd/MMM/yyyy HH:mm:ss)|to_timestamp(DOB, dd/MMM/yyyy HH:mm:ss)|\n",
+ "+----------------------------------+---------------------------------------+\n",
+ "| 2019-12-25| 2019-12-25 13:30:00|\n",
+ "+----------------------------------+---------------------------------------+\n",
+ "\n",
+ "root\n",
+ " |-- to_date(DOB, dd/MMM/yyyy HH:mm:ss): date (nullable = true)\n",
+ " |-- to_timestamp(DOB, dd/MMM/yyyy HH:mm:ss): timestamp (nullable = true)\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "df = spark.createDataFrame([('25/Dec/2019 13:30:00',)], ['DOB'])\n",
+ "df = df.select(to_date(col('DOB'),'dd/MMM/yyyy HH:mm:ss'), to_timestamp(col('DOB'),'dd/MMM/yyyy HH:mm:ss'))\n",
+ "df.show()\n",
+ "df.printSchema()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dIPQyQV7-Hz5"
+ },
+ "source": [
+ "**What is 3 days earlier that the oldest date and 3 days later than the most recent date?**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "PUCEwQkZ-I7h",
+ "outputId": "82621ccc-0113-4f4d-fb22-1551ab137893"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+----------------------+----------------------+\n",
+ "|date_add(max(Date), 3)|date_sub(min(Date), 3)|\n",
+ "+----------------------+----------------------+\n",
+ "| 2021-04-02| 1989-12-29|\n",
+ "+----------------------+----------------------+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql.functions import date_add, date_sub\n",
+ "# create a dummy dataframe\n",
+ "df = spark.createDataFrame([('1990-01-01',),('1995-01-03',),('2021-03-30',)], ['Date'])\n",
+ "# find out the required dates\n",
+ "df.select(date_add(max(col('Date')),3), date_sub(min(col('Date')),3)).show()"
+ ]
+ },
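+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Individual parts of a date can be extracted with functions such as `year`, `month` and `dayofmonth`. Below is a minimal sketch (not executed here) that reuses the dummy dataframe with the `Date` column from the cell above."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: extracting parts of a date (reuses the dummy `df` with the 'Date' column)\n",
+ "from pyspark.sql.functions import to_date, year, month, dayofmonth\n",
+ "df.select(col('Date'),\n",
+ "          year(to_date(col('Date'))).alias('year'),\n",
+ "          month(to_date(col('Date'))).alias('month'),\n",
+ "          dayofmonth(to_date(col('Date'))).alias('day')).show()"
+ ]
+ },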
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "7OZElEvcGOD1"
+ },
+ "source": [
+ " \n",
+ "## Joins in PySpark"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "UJBC7r3JFyCL",
+ "outputId": "65bafd3a-cc37-4a3a-e7ff-9578dd78dd4f"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+---+--------+\n",
+ "| id|car_name|\n",
+ "+---+--------+\n",
+ "| 1| Car A|\n",
+ "| 2| Car B|\n",
+ "| 3| Car C|\n",
+ "+---+--------+\n",
+ "\n",
+ "+---+---------+\n",
+ "| id|car_price|\n",
+ "+---+---------+\n",
+ "| 1| 1000|\n",
+ "| 2| 2000|\n",
+ "| 3| 3000|\n",
+ "+---+---------+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Create two dataframes\n",
+ "cars_df = spark.createDataFrame([[1, 'Car A'],[2, 'Car B'],[3, 'Car C']], [\"id\", \"car_name\"])\n",
+ "car_price_df = spark.createDataFrame([[1, 1000],[2, 2000],[3, 3000]], [\"id\", \"car_price\"])\n",
+ "cars_df.show()\n",
+ "car_price_df.show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 58,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "U7Py4EYyKJTN",
+ "outputId": "6bcd2bed-a2a1-4ef7-dbb5-20d27170ce05"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+---+--------+---------+\n",
+ "|id |car_name|car_price|\n",
+ "+---+--------+---------+\n",
+ "|1 |Car A |1000 |\n",
+ "|3 |Car C |3000 |\n",
+ "|2 |Car B |2000 |\n",
+ "+---+--------+---------+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Executing an inner join so we can see the id, name and price of each car in one row\n",
+ "cars_df.join(car_price_df, cars_df.id == car_price_df.id, 'inner').select(cars_df['id'],cars_df['car_name'],car_price_df['car_price']).show(truncate=False)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vj0mPaHU5i5n"
+ },
+ "source": [
+ "As you can see, we have done an inner join between two dataframes. The following joins are supported by PySpark:\n",
+ "1. inner (default)\n",
+ "2. cross\n",
+ "3. outer\n",
+ "4. full\n",
+ "5. full_outer\n",
+ "6. left\n",
+ "7. left_outer\n",
+ "8. right\n",
+ "9. right_outer\n",
+ "10. left_semi\n",
+ "11. left_anti"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "HNPhsx8P2tUH"
+ },
+ "source": [
+ " \n",
+ "## Spark SQL"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "rHMvBBAh23cw"
+ },
+ "source": [
+ "SQL has been around since the 1970s, and so one can imagine the number of people who made it their bread and butter. As big data came into popularity, the number of professionals with the technical knowledge to deal with it was in shortage. This led to the creation of Spark SQL. To quote the docs: \n",
+ ">Spark SQL is a Spark module for structured data processing. Unlike the basic Spark RDD API, the interfaces provided by Spark SQL provide Spark with more information about the structure of both the data and the computation being performed. Internally, Spark SQL uses this extra information to perform extra optimizations.\n",
+ "\n",
+ "Basically, what you need to know is that Spark SQL is used to execute SQL queries on big data. Spark SQL can also be used to read data from Hive tables and views. Let me explain Spark SQL with an example.\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 59,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "g2DaK9-D7QkX",
+ "outputId": "2f3156cd-7629-4021-fe8f-1f8b7980e650"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "| Car| MPG|Cylinders|Displacement|Horsepower|Weight|Acceleration|Model|Origin|\n",
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "|Chevrolet Chevell...|18.0| 8| 307.0| 130.0| 3504.| 12.0| 70| US|\n",
+ "| Buick Skylark 320|15.0| 8| 350.0| 165.0| 3693.| 11.5| 70| US|\n",
+ "| Plymouth Satellite|18.0| 8| 318.0| 150.0| 3436.| 11.0| 70| US|\n",
+ "| AMC Rebel SST|16.0| 8| 304.0| 150.0| 3433.| 12.0| 70| US|\n",
+ "| Ford Torino|17.0| 8| 302.0| 140.0| 3449.| 10.5| 70| US|\n",
+ "+--------------------+----+---------+------------+----------+------+------------+-----+------+\n",
+ "\n",
+ "+-----------+\n",
+ "|total_count|\n",
+ "+-----------+\n",
+ "| 406|\n",
+ "+-----------+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Load data\n",
+ "df = spark.read.csv('cars.csv', header=True, sep=\";\")\n",
+ "# Register Temporary Table\n",
+ "df.createOrReplaceTempView(\"temp\")\n",
+ "# Select all data from temp table\n",
+ "spark.sql(\"select * from temp limit 5\").show()\n",
+ "# Select count of data in table\n",
+ "spark.sql(\"select count(*) as total_count from temp\").show()"
+ ]
+ },
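+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Any SQL that Spark SQL supports can be run against the registered view. Below is a minimal sketch (not executed here) with a WHERE clause, a GROUP BY and an ORDER BY on the same `temp` view."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Sketch: filtering, grouping and ordering through Spark SQL on the 'temp' view\n",
+ "spark.sql(\"\"\"\n",
+ "    select Origin, count(*) as car_count\n",
+ "    from temp\n",
+ "    where Origin in ('Europe', 'Japan')\n",
+ "    group by Origin\n",
+ "    order by car_count desc\n",
+ "\"\"\").show()"
+ ]
+ },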
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6i32WE8j_ec8"
+ },
+ "source": [
+ "As you can see, we registered the dataframe as temporary table and then ran basic SQL queries on it. How amazing is that?! \n",
+ "If you are a person who is more comfortable with SQL, then this feature is truly a blessing for you! But this raises a question: \n",
+ "> *Should I just keep using Spark SQL all the time?*\n",
+ "\n",
+ "And the answer is, _**it depends**_. \n",
+ "So basically, the different functions acts in differnet ways, and depending upon the type of action you are trying to do, the speed at which it completes execution also differs. But as time progress, this feature is getting better and better, so hopefully the difference should be a small margin. There are plenty of analysis done on this, but nothing has a definite answer yet. You can read this [comparative study done by horton works](https://community.cloudera.com/t5/Community-Articles/Spark-RDDs-vs-DataFrames-vs-SparkSQL/ta-p/246547) or the answer to this [stackoverflow question](https://stackoverflow.com/questions/45430816/writing-sql-vs-using-dataframe-apis-in-spark-sql) if you are still curious about it."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "x62BiCgBMOtq"
+ },
+ "source": [
+ " \n",
+ "## RDD"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VGXK6uEuUKRh"
+ },
+ "source": [
+ "> With map, you define a function and then apply it record by record. Flatmap returns a new RDD by first applying a function to all of the elements in RDDs and then flattening the result. Filter, returns a new RDD. Meaning only the elements that satisfy a condition. With reduce, we are taking neighboring elements and producing a single combined result.\n",
+ "For example, let's say you have a set of numbers. You can reduce this to its sum by providing a function that takes as input two values and reduces them to one. \n",
+ "\n",
+ "Some of the reasons you would use a dataframe over RDD are:\n",
+ "1. It's ability to represnt data as rows and columns. But this also means it can only hold structred and semi-structured data.\n",
+ "2. It allows processing data in different formats (AVRO, CSV, JSON, and storage system HDFS, HIVE tables, MySQL).\n",
+ "3. It's superior job Optimization capability.\n",
+ "4. DataFrame API is very easy to use.\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
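+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To make the description above concrete, here is a minimal sketch of these four operations on a plain list of numbers (the variable names are only for illustration):\n",
+ "```\n",
+ "nums = spark.sparkContext.parallelize([1, 2, 3, 4, 5])\n",
+ "# map: apply a function record by record\n",
+ "squares = nums.map(lambda x: x * x)          # 1, 4, 9, 16, 25\n",
+ "# flatMap: apply a function, then flatten the result\n",
+ "pairs = nums.flatMap(lambda x: [x, x * 10])  # 1, 10, 2, 20, ...\n",
+ "# filter: keep only the elements that satisfy a condition\n",
+ "evens = nums.filter(lambda x: x % 2 == 0)    # 2, 4\n",
+ "# reduce: combine elements pairwise into a single result\n",
+ "total = nums.reduce(lambda a, b: a + b)      # 15\n",
+ "```"
+ ]
+ },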
+ {
+ "cell_type": "code",
+ "execution_count": 60,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "0_WvAgyvR7m6",
+ "outputId": "aca36808-b04a-4a45-d628-aeb6b5bab927"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Car;MPG;Cylinders;Displacement;Horsepower;Weight;Acceleration;Model;Origin\n",
+ "Chevrolet Chevelle Malibu;18.0;8;307.0;130.0;3504.;12.0;70;US\n"
+ ]
+ }
+ ],
+ "source": [
+ "cars = spark.sparkContext.textFile('cars.csv')\n",
+ "print(cars.first())\n",
+ "cars_header = cars.first()\n",
+ "cars_rest = cars.filter(lambda line: line!=cars_header)\n",
+ "print(cars_rest.first())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "P65eAFO3Mkdd"
+ },
+ "source": [
+ "**How many cars are there in our csv data?**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 61,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Vi03EU0CMSmO",
+ "outputId": "2215311f-b291-4659-b2ac-4f3d4491531f"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "406"
+ ]
+ },
+ "execution_count": 61,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "cars_rest.map(lambda line: line.split(\";\")).count()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3c4bci70MnlQ"
+ },
+ "source": [
+ "**Display the Car name, MPG, Cylinders, Weight and Origin for the cars Originating in Europe**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 62,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "fWFpo_WxMnvm",
+ "outputId": "db9ccb02-d380-4b62-8ff8-679ed5e06e11"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[('Citroen DS-21 Pallas', '0', '4', '3090.', 'Europe'),\n",
+ " ('Volkswagen 1131 Deluxe Sedan', '26.0', '4', '1835.', 'Europe'),\n",
+ " ('Peugeot 504', '25.0', '4', '2672.', 'Europe'),\n",
+ " ('Audi 100 LS', '24.0', '4', '2430.', 'Europe'),\n",
+ " ('Saab 99e', '25.0', '4', '2375.', 'Europe'),\n",
+ " ('BMW 2002', '26.0', '4', '2234.', 'Europe'),\n",
+ " ('Volkswagen Super Beetle 117', '0', '4', '1978.', 'Europe'),\n",
+ " ('Opel 1900', '28.0', '4', '2123.', 'Europe'),\n",
+ " ('Peugeot 304', '30.0', '4', '2074.', 'Europe'),\n",
+ " ('Fiat 124B', '30.0', '4', '2065.', 'Europe'),\n",
+ " ('Volkswagen Model 111', '27.0', '4', '1834.', 'Europe'),\n",
+ " ('Volkswagen Type 3', '23.0', '4', '2254.', 'Europe'),\n",
+ " ('Volvo 145e (sw)', '18.0', '4', '2933.', 'Europe'),\n",
+ " ('Volkswagen 411 (sw)', '22.0', '4', '2511.', 'Europe'),\n",
+ " ('Peugeot 504 (sw)', '21.0', '4', '2979.', 'Europe'),\n",
+ " ('Renault 12 (sw)', '26.0', '4', '2189.', 'Europe'),\n",
+ " ('Volkswagen Super Beetle', '26.0', '4', '1950.', 'Europe'),\n",
+ " ('Fiat 124 Sport Coupe', '26.0', '4', '2265.', 'Europe'),\n",
+ " ('Fiat 128', '29.0', '4', '1867.', 'Europe'),\n",
+ " ('Opel Manta', '24.0', '4', '2158.', 'Europe'),\n",
+ " ('Audi 100LS', '20.0', '4', '2582.', 'Europe'),\n",
+ " ('Volvo 144ea', '19.0', '4', '2868.', 'Europe'),\n",
+ " ('Saab 99le', '24.0', '4', '2660.', 'Europe'),\n",
+ " ('Audi Fox', '29.0', '4', '2219.', 'Europe'),\n",
+ " ('Volkswagen Dasher', '26.0', '4', '1963.', 'Europe'),\n",
+ " ('Opel Manta', '26.0', '4', '2300.', 'Europe'),\n",
+ " ('Fiat 128', '24.0', '4', '2108.', 'Europe'),\n",
+ " ('Fiat 124 TC', '26.0', '4', '2246.', 'Europe'),\n",
+ " ('Fiat x1.9', '31.0', '4', '2000.', 'Europe'),\n",
+ " ('Volkswagen Dasher', '25.0', '4', '2223.', 'Europe'),\n",
+ " ('Volkswagen Rabbit', '29.0', '4', '1937.', 'Europe'),\n",
+ " ('Audi 100LS', '23.0', '4', '2694.', 'Europe'),\n",
+ " ('Peugeot 504', '23.0', '4', '2957.', 'Europe'),\n",
+ " ('Volvo 244DL', '22.0', '4', '2945.', 'Europe'),\n",
+ " ('Saab 99LE', '25.0', '4', '2671.', 'Europe'),\n",
+ " ('Fiat 131', '28.0', '4', '2464.', 'Europe'),\n",
+ " ('Opel 1900', '25.0', '4', '2220.', 'Europe'),\n",
+ " ('Renault 12tl', '27.0', '4', '2202.', 'Europe'),\n",
+ " ('Volkswagen Rabbit', '29.0', '4', '1937.', 'Europe'),\n",
+ " ('Volkswagen Rabbit', '29.5', '4', '1825.', 'Europe'),\n",
+ " ('Volvo 245', '20.0', '4', '3150.', 'Europe'),\n",
+ " ('Peugeot 504', '19.0', '4', '3270.', 'Europe'),\n",
+ " ('Mercedes-Benz 280s', '16.5', '6', '3820.', 'Europe'),\n",
+ " ('Renault 5 GTL', '36.0', '4', '1825.', 'Europe'),\n",
+ " ('Volkswagen Rabbit Custom', '29.0', '4', '1940.', 'Europe'),\n",
+ " ('Volkswagen Dasher', '30.5', '4', '2190.', 'Europe'),\n",
+ " ('BMW 320i', '21.5', '4', '2600.', 'Europe'),\n",
+ " ('Volkswagen Rabbit Custom Diesel', '43.1', '4', '1985.', 'Europe'),\n",
+ " ('Audi 5000', '20.3', '5', '2830.', 'Europe'),\n",
+ " ('Volvo 264gl', '17.0', '6', '3140.', 'Europe'),\n",
+ " ('Saab 99gle', '21.6', '4', '2795.', 'Europe'),\n",
+ " ('Peugeot 604sl', '16.2', '6', '3410.', 'Europe'),\n",
+ " ('Volkswagen Scirocco', '31.5', '4', '1990.', 'Europe'),\n",
+ " ('Volkswagen Rabbit Custom', '31.9', '4', '1925.', 'Europe'),\n",
+ " ('Mercedes Benz 300d', '25.4', '5', '3530.', 'Europe'),\n",
+ " ('Peugeot 504', '27.2', '4', '3190.', 'Europe'),\n",
+ " ('Fiat Strada Custom', '37.3', '4', '2130.', 'Europe'),\n",
+ " ('Volkswagen Rabbit', '41.5', '4', '2144.', 'Europe'),\n",
+ " ('Audi 4000', '34.3', '4', '2188.', 'Europe'),\n",
+ " ('Volkswagen Rabbit C (Diesel)', '44.3', '4', '2085.', 'Europe'),\n",
+ " ('Volkswagen Dasher (diesel)', '43.4', '4', '2335.', 'Europe'),\n",
+ " ('Audi 5000s (diesel)', '36.4', '5', '2950.', 'Europe'),\n",
+ " ('Mercedes-Benz 240d', '30.0', '4', '3250.', 'Europe'),\n",
+ " ('Renault Lecar Deluxe', '40.9', '4', '1835.', 'Europe'),\n",
+ " ('Volkswagen Rabbit', '29.8', '4', '1845.', 'Europe'),\n",
+ " ('Triumph TR7 Coupe', '35.0', '4', '2500.', 'Europe'),\n",
+ " ('Volkswagen Jetta', '33.0', '4', '2190.', 'Europe'),\n",
+ " ('Renault 18i', '34.5', '4', '2320.', 'Europe'),\n",
+ " ('Peugeot 505s Turbo Diesel', '28.1', '4', '3230.', 'Europe'),\n",
+ " ('Saab 900s', '0', '4', '2800.', 'Europe'),\n",
+ " ('Volvo Diesel', '30.7', '6', '3160.', 'Europe'),\n",
+ " ('Volkswagen Rabbit l', '36.0', '4', '1980.', 'Europe'),\n",
+ " ('Volkswagen Pickup', '44.0', '4', '2130.', 'Europe')]"
+ ]
+ },
+ "execution_count": 62,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Car name is column 0\n",
+ "(cars_rest.filter(lambda line: line.split(\";\")[8]=='Europe').\n",
+ " map(lambda line: (line.split(\";\")[0],\n",
+ " line.split(\";\")[1],\n",
+ " line.split(\";\")[2],\n",
+ " line.split(\";\")[5],\n",
+ " line.split(\";\")[8])).collect())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ZYmb5FscMph3"
+ },
+ "source": [
+ "**Display the Car name, MPG, Cylinders, Weight and Origin for the cars Originating in either Europe or Japan**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 63,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "6ZcRIX3mMquF",
+ "outputId": "8d803b6d-02f3-49dd-ef89-6178292d0217"
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "[('Citroen DS-21 Pallas', '0', '4', '3090.', 'Europe'),\n",
+ " ('Toyota Corolla Mark ii', '24.0', '4', '2372.', 'Japan'),\n",
+ " ('Datsun PL510', '27.0', '4', '2130.', 'Japan'),\n",
+ " ('Volkswagen 1131 Deluxe Sedan', '26.0', '4', '1835.', 'Europe'),\n",
+ " ('Peugeot 504', '25.0', '4', '2672.', 'Europe'),\n",
+ " ('Audi 100 LS', '24.0', '4', '2430.', 'Europe'),\n",
+ " ('Saab 99e', '25.0', '4', '2375.', 'Europe'),\n",
+ " ('BMW 2002', '26.0', '4', '2234.', 'Europe'),\n",
+ " ('Datsun PL510', '27.0', '4', '2130.', 'Japan'),\n",
+ " ('Toyota Corolla', '25.0', '4', '2228.', 'Japan'),\n",
+ " ('Volkswagen Super Beetle 117', '0', '4', '1978.', 'Europe'),\n",
+ " ('Opel 1900', '28.0', '4', '2123.', 'Europe'),\n",
+ " ('Peugeot 304', '30.0', '4', '2074.', 'Europe'),\n",
+ " ('Fiat 124B', '30.0', '4', '2065.', 'Europe'),\n",
+ " ('Toyota Corolla 1200', '31.0', '4', '1773.', 'Japan'),\n",
+ " ('Datsun 1200', '35.0', '4', '1613.', 'Japan'),\n",
+ " ('Volkswagen Model 111', '27.0', '4', '1834.', 'Europe'),\n",
+ " ('Toyota Corolla Hardtop', '24.0', '4', '2278.', 'Japan'),\n",
+ " ('Volkswagen Type 3', '23.0', '4', '2254.', 'Europe'),\n",
+ " ('Mazda RX2 Coupe', '19.0', '3', '2330.', 'Japan'),\n",
+ " ('Volvo 145e (sw)', '18.0', '4', '2933.', 'Europe'),\n",
+ " ('Volkswagen 411 (sw)', '22.0', '4', '2511.', 'Europe'),\n",
+ " ('Peugeot 504 (sw)', '21.0', '4', '2979.', 'Europe'),\n",
+ " ('Renault 12 (sw)', '26.0', '4', '2189.', 'Europe'),\n",
+ " ('Datsun 510 (sw)', '28.0', '4', '2288.', 'Japan'),\n",
+ " ('Toyota Corolla Mark II (sw)', '23.0', '4', '2506.', 'Japan'),\n",
+ " ('Toyota Corolla 1600 (sw)', '27.0', '4', '2100.', 'Japan'),\n",
+ " ('Volkswagen Super Beetle', '26.0', '4', '1950.', 'Europe'),\n",
+ " ('Toyota Camry', '20.0', '4', '2279.', 'Japan'),\n",
+ " ('Datsun 610', '22.0', '4', '2379.', 'Japan'),\n",
+ " ('Mazda RX3', '18.0', '3', '2124.', 'Japan'),\n",
+ " ('Fiat 124 Sport Coupe', '26.0', '4', '2265.', 'Europe'),\n",
+ " ('Fiat 128', '29.0', '4', '1867.', 'Europe'),\n",
+ " ('Opel Manta', '24.0', '4', '2158.', 'Europe'),\n",
+ " ('Audi 100LS', '20.0', '4', '2582.', 'Europe'),\n",
+ " ('Volvo 144ea', '19.0', '4', '2868.', 'Europe'),\n",
+ " ('Saab 99le', '24.0', '4', '2660.', 'Europe'),\n",
+ " ('Toyota Mark II', '20.0', '6', '2807.', 'Japan'),\n",
+ " ('Datsun B210', '31.0', '4', '1950.', 'Japan'),\n",
+ " ('Toyota Corolla 1200', '32.0', '4', '1836.', 'Japan'),\n",
+ " ('Audi Fox', '29.0', '4', '2219.', 'Europe'),\n",
+ " ('Volkswagen Dasher', '26.0', '4', '1963.', 'Europe'),\n",
+ " ('Opel Manta', '26.0', '4', '2300.', 'Europe'),\n",
+ " ('Toyota Corolla', '31.0', '4', '1649.', 'Japan'),\n",
+ " ('Datsun 710', '32.0', '4', '2003.', 'Japan'),\n",
+ " ('Fiat 128', '24.0', '4', '2108.', 'Europe'),\n",
+ " ('Fiat 124 TC', '26.0', '4', '2246.', 'Europe'),\n",
+ " ('Honda Civic', '24.0', '4', '2489.', 'Japan'),\n",
+ " ('Subaru', '26.0', '4', '2391.', 'Japan'),\n",
+ " ('Fiat x1.9', '31.0', '4', '2000.', 'Europe'),\n",
+ " ('Toyota Corolla', '29.0', '4', '2171.', 'Japan'),\n",
+ " ('Toyota Corolla', '24.0', '4', '2702.', 'Japan'),\n",
+ " ('Volkswagen Dasher', '25.0', '4', '2223.', 'Europe'),\n",
+ " ('Datsun 710', '24.0', '4', '2545.', 'Japan'),\n",
+ " ('Volkswagen Rabbit', '29.0', '4', '1937.', 'Europe'),\n",
+ " ('Audi 100LS', '23.0', '4', '2694.', 'Europe'),\n",
+ " ('Peugeot 504', '23.0', '4', '2957.', 'Europe'),\n",
+ " ('Volvo 244DL', '22.0', '4', '2945.', 'Europe'),\n",
+ " ('Saab 99LE', '25.0', '4', '2671.', 'Europe'),\n",
+ " ('Honda Civic CVCC', '33.0', '4', '1795.', 'Japan'),\n",
+ " ('Fiat 131', '28.0', '4', '2464.', 'Europe'),\n",
+ " ('Opel 1900', '25.0', '4', '2220.', 'Europe'),\n",
+ " ('Renault 12tl', '27.0', '4', '2202.', 'Europe'),\n",
+ " ('Volkswagen Rabbit', '29.0', '4', '1937.', 'Europe'),\n",
+ " ('Honda Civic', '33.0', '4', '1795.', 'Japan'),\n",
+ " ('Volkswagen Rabbit', '29.5', '4', '1825.', 'Europe'),\n",
+ " ('Datsun B-210', '32.0', '4', '1990.', 'Japan'),\n",
+ " ('Toyota Corolla', '28.0', '4', '2155.', 'Japan'),\n",
+ " ('Volvo 245', '20.0', '4', '3150.', 'Europe'),\n",
+ " ('Peugeot 504', '19.0', '4', '3270.', 'Europe'),\n",
+ " ('Toyota Mark II', '19.0', '6', '2930.', 'Japan'),\n",
+ " ('Mercedes-Benz 280s', '16.5', '6', '3820.', 'Europe'),\n",
+ " ('Honda Accord CVCC', '31.5', '4', '2045.', 'Japan'),\n",
+ " ('Renault 5 GTL', '36.0', '4', '1825.', 'Europe'),\n",
+ " ('Datsun F-10 Hatchback', '33.5', '4', '1945.', 'Japan'),\n",
+ " ('Volkswagen Rabbit Custom', '29.0', '4', '1940.', 'Europe'),\n",
+ " ('Toyota Corolla Liftback', '26.0', '4', '2265.', 'Japan'),\n",
+ " ('Subaru DL', '30.0', '4', '1985.', 'Japan'),\n",
+ " ('Volkswagen Dasher', '30.5', '4', '2190.', 'Europe'),\n",
+ " ('Datsun 810', '22.0', '6', '2815.', 'Japan'),\n",
+ " ('BMW 320i', '21.5', '4', '2600.', 'Europe'),\n",
+ " ('Mazda RX-4', '21.5', '3', '2720.', 'Japan'),\n",
+ " ('Volkswagen Rabbit Custom Diesel', '43.1', '4', '1985.', 'Europe'),\n",
+ " ('Mazda GLC Deluxe', '32.8', '4', '1985.', 'Japan'),\n",
+ " ('Datsun B210 GX', '39.4', '4', '2070.', 'Japan'),\n",
+ " ('Honda Civic CVCC', '36.1', '4', '1800.', 'Japan'),\n",
+ " ('Toyota Corolla', '27.5', '4', '2560.', 'Japan'),\n",
+ " ('Datsun 510', '27.2', '4', '2300.', 'Japan'),\n",
+ " ('Toyota Celica GT Liftback', '21.1', '4', '2515.', 'Japan'),\n",
+ " ('Datsun 200-SX', '23.9', '4', '2405.', 'Japan'),\n",
+ " ('Audi 5000', '20.3', '5', '2830.', 'Europe'),\n",
+ " ('Volvo 264gl', '17.0', '6', '3140.', 'Europe'),\n",
+ " ('Saab 99gle', '21.6', '4', '2795.', 'Europe'),\n",
+ " ('Peugeot 604sl', '16.2', '6', '3410.', 'Europe'),\n",
+ " ('Volkswagen Scirocco', '31.5', '4', '1990.', 'Europe'),\n",
+ " ('Honda Accord LX', '29.5', '4', '2135.', 'Japan'),\n",
+ " ('Volkswagen Rabbit Custom', '31.9', '4', '1925.', 'Europe'),\n",
+ " ('Mazda GLC Deluxe', '34.1', '4', '1975.', 'Japan'),\n",
+ " ('Mercedes Benz 300d', '25.4', '5', '3530.', 'Europe'),\n",
+ " ('Peugeot 504', '27.2', '4', '3190.', 'Europe'),\n",
+ " ('Datsun 210', '31.8', '4', '2020.', 'Japan'),\n",
+ " ('Fiat Strada Custom', '37.3', '4', '2130.', 'Europe'),\n",
+ " ('Volkswagen Rabbit', '41.5', '4', '2144.', 'Europe'),\n",
+ " ('Toyota Corolla Tercel', '38.1', '4', '1968.', 'Japan'),\n",
+ " ('Datsun 310', '37.2', '4', '2019.', 'Japan'),\n",
+ " ('Audi 4000', '34.3', '4', '2188.', 'Europe'),\n",
+ " ('Toyota Corolla Liftback', '29.8', '4', '2711.', 'Japan'),\n",
+ " ('Mazda 626', '31.3', '4', '2542.', 'Japan'),\n",
+ " ('Datsun 510 Hatchback', '37.0', '4', '2434.', 'Japan'),\n",
+ " ('Toyota Corolla', '32.2', '4', '2265.', 'Japan'),\n",
+ " ('Mazda GLC', '46.6', '4', '2110.', 'Japan'),\n",
+ " ('Datsun 210', '40.8', '4', '2110.', 'Japan'),\n",
+ " ('Volkswagen Rabbit C (Diesel)', '44.3', '4', '2085.', 'Europe'),\n",
+ " ('Volkswagen Dasher (diesel)', '43.4', '4', '2335.', 'Europe'),\n",
+ " ('Audi 5000s (diesel)', '36.4', '5', '2950.', 'Europe'),\n",
+ " ('Mercedes-Benz 240d', '30.0', '4', '3250.', 'Europe'),\n",
+ " ('Honda Civic 1500 gl', '44.6', '4', '1850.', 'Japan'),\n",
+ " ('Renault Lecar Deluxe', '40.9', '4', '1835.', 'Europe'),\n",
+ " ('Subaru DL', '33.8', '4', '2145.', 'Japan'),\n",
+ " ('Volkswagen Rabbit', '29.8', '4', '1845.', 'Europe'),\n",
+ " ('Datsun 280-ZX', '32.7', '6', '2910.', 'Japan'),\n",
+ " ('Mazda RX-7 GS', '23.7', '3', '2420.', 'Japan'),\n",
+ " ('Triumph TR7 Coupe', '35.0', '4', '2500.', 'Europe'),\n",
+ " ('Honda Accord', '32.4', '4', '2290.', 'Japan'),\n",
+ " ('Toyota Starlet', '39.1', '4', '1755.', 'Japan'),\n",
+ " ('Honda Civic 1300', '35.1', '4', '1760.', 'Japan'),\n",
+ " ('Subaru', '32.3', '4', '2065.', 'Japan'),\n",
+ " ('Datsun 210 MPG', '37.0', '4', '1975.', 'Japan'),\n",
+ " ('Toyota Tercel', '37.7', '4', '2050.', 'Japan'),\n",
+ " ('Mazda GLC 4', '34.1', '4', '1985.', 'Japan'),\n",
+ " ('Volkswagen Jetta', '33.0', '4', '2190.', 'Europe'),\n",
+ " ('Renault 18i', '34.5', '4', '2320.', 'Europe'),\n",
+ " ('Honda Prelude', '33.7', '4', '2210.', 'Japan'),\n",
+ " ('Toyota Corolla', '32.4', '4', '2350.', 'Japan'),\n",
+ " ('Datsun 200SX', '32.9', '4', '2615.', 'Japan'),\n",
+ " ('Mazda 626', '31.6', '4', '2635.', 'Japan'),\n",
+ " ('Peugeot 505s Turbo Diesel', '28.1', '4', '3230.', 'Europe'),\n",
+ " ('Saab 900s', '0', '4', '2800.', 'Europe'),\n",
+ " ('Volvo Diesel', '30.7', '6', '3160.', 'Europe'),\n",
+ " ('Toyota Cressida', '25.4', '6', '2900.', 'Japan'),\n",
+ " ('Datsun 810 Maxima', '24.2', '6', '2930.', 'Japan'),\n",
+ " ('Volkswagen Rabbit l', '36.0', '4', '1980.', 'Europe'),\n",
+ " ('Mazda GLC Custom l', '37.0', '4', '2025.', 'Japan'),\n",
+ " ('Mazda GLC Custom', '31.0', '4', '1970.', 'Japan'),\n",
+ " ('Nissan Stanza XE', '36.0', '4', '2160.', 'Japan'),\n",
+ " ('Honda Accord', '36.0', '4', '2205.', 'Japan'),\n",
+ " ('Toyota Corolla', '34.0', '4', '2245', 'Japan'),\n",
+ " ('Honda Civic', '38.0', '4', '1965.', 'Japan'),\n",
+ " ('Honda Civic (auto)', '32.0', '4', '1965.', 'Japan'),\n",
+ " ('Datsun 310 GX', '38.0', '4', '1995.', 'Japan'),\n",
+ " ('Toyota Celica GT', '32.0', '4', '2665.', 'Japan'),\n",
+ " ('Volkswagen Pickup', '44.0', '4', '2130.', 'Europe')]"
+ ]
+ },
+ "execution_count": 63,
+ "metadata": {
+ "tags": []
+ },
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Car name is column 0\n",
+ "(cars_rest.filter(lambda line: line.split(\";\")[8] in ['Europe','Japan']).\n",
+ " map(lambda line: (line.split(\";\")[0],\n",
+ " line.split(\";\")[1],\n",
+ " line.split(\";\")[2],\n",
+ " line.split(\";\")[5],\n",
+ " line.split(\";\")[8])).collect())"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3wn2zXe7TbI3"
+ },
+ "source": [
+ " \n",
+ "## User-Defined Functions (UDF)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "w0YWspcTRrin"
+ },
+ "source": [
+ "PySpark User-Defined Functions (UDFs) help you convert your python code into a scalable version of itself. It comes in handy more than you can imagine, but beware, as the performance is less when you compare it with pyspark functions. You can view examples of how UDF works [here](https://docs.databricks.com/spark/latest/spark-sql/udf-python.html). What I will give in this section is some theory on how it works, and why it is slower.\n",
+ "\n",
+ "When you try to run a UDF in PySpark, each executor creates a python process. Data will be serialised and deserialised between each executor and python. This leads to lots of performance impact and overhead on spark jobs, making it less efficent than using spark dataframes. Apart from this, sometimes you might have memory issues while using UDFs. The Python worker consumes huge off-heap memory and so it often leads to memoryOverhead, thereby failing your job. Keeping these in mind, I wouldn't recommend using them, but at the end of the day, your choice."
+ ]
+ },
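+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "As a quick illustration of the syntax (not taken from the page linked above; the function and column names are just placeholders, assuming the cars dataframe loaded earlier is still in `df`), a UDF can be defined and applied like this:\n",
+ "```\n",
+ "from pyspark.sql.functions import udf\n",
+ "from pyspark.sql.types import StringType\n",
+ "\n",
+ "# a plain python function wrapped as a UDF\n",
+ "def to_upper(s):\n",
+ "    return s.upper() if s is not None else None\n",
+ "\n",
+ "to_upper_udf = udf(to_upper, StringType())\n",
+ "\n",
+ "# apply it to a column of the cars dataframe\n",
+ "df.withColumn('Car_upper', to_upper_udf(df['Car'])).select('Car', 'Car_upper').show(5)\n",
+ "```"
+ ]
+ },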
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "yv7ODDTQRwVt"
+ },
+ "source": [
+ " \n",
+ "# Common Questions"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "6z9gkoE2R1m1"
+ },
+ "source": [
+ " \n",
+ "## Recommended IDE"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BpMTbRggR5Z8"
+ },
+ "source": [
+ "I personally prefer [PyCharm](https://www.jetbrains.com/pycharm/) while coding in Python/PySpark. It's based on IntelliJ IDEA so it has a lot of features! And the main advantage I have felt is the ease of installing PySpark and other packages. You can customize it with themes and plugins, and it lets you enhance productivity while coding by providing some features like suggestions, local VCS etc."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "VZ1bYvF8R8Dc"
+ },
+ "source": [
+ " \n",
+ "## Submitting a Spark Job"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "3EQWnq23SCbE"
+ },
+ "source": [
+ "The python syntax for running jobs is: `python .py ...`\n",
+ " But when you submit a spark job you have to use spark-submit to run the application.\n",
+ "\n",
+ "Here is a simple example of a spark-submit command:\n",
+ "`spark-submit filename.py --named_argument 'arguemnt value'` \n",
+ "Here, named_argument is an argument that you are reading from inside your script.\n",
+ "\n",
+ "There are other options you can pass in the command, like: \n",
+ "`--py-files` which helps you pass a python file to read in your file, \n",
+ "`--files` which helps pass other files like txt or config, \n",
+ "`--deploy-mode` which tells wether to deploy your worker node on cluster or locally \n",
+ "`--conf` which helps pass different configurations, like memoryOverhead, dynamicAllocation etc.\n",
+ "\n",
+ "There is an [entire page](https://spark.apache.org/docs/latest/submitting-applications.html) in spark documentation dedicated to this. I highly recommend you go through it once."
+ ]
+ },
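+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Putting those options together, a full command might look something like the sketch below; the file names and values here are only placeholders:\n",
+ "```\n",
+ "spark-submit \\\n",
+ "  --deploy-mode cluster \\\n",
+ "  --py-files helpers.py \\\n",
+ "  --files app.conf \\\n",
+ "  --conf spark.executor.memoryOverhead=2048 \\\n",
+ "  filename.py --named_argument 'argument value'\n",
+ "```"
+ ]
+ },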
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "oVwGYAZZiyGV"
+ },
+ "source": [
+ " \n",
+ "## Creating Dataframes"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "TvndhPjoi0er"
+ },
+ "source": [
+ "When getting started with dataframes, the most common question is: *'How do I create a dataframe?'* \n",
+ "Below, you can see how to create three kinds of dataframes:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "QXmRD3hHlM-f"
+ },
+ "source": [
+ "### Create a totally empty dataframe"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 66,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "ktkb6s-kjtgG",
+ "outputId": "08e2c46d-2a17-44c8-e954-37704b287f48"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "++\n",
+ "||\n",
+ "++\n",
+ "++\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql.types import StructType\n",
+ "sc = spark.sparkContext\n",
+ "#Create empty df\n",
+ "schema = StructType([])\n",
+ "empty = spark.createDataFrame(sc.emptyRDD(), schema)\n",
+ "empty.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "mg5K3nz_lSDe"
+ },
+ "source": [
+ "### Create an empty dataframe with header"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 67,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "9raf4CkRjuTr",
+ "outputId": "afcce8fb-9136-4ec5-91b6-b6924dcea0e1"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+----+\n",
+ "|name|\n",
+ "+----+\n",
+ "+----+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql.types import StructType, StructField\n",
+ "#Create empty df with header\n",
+ "schema_header = StructType([StructField(\"name\", StringType(), True)])\n",
+ "empty_with_header = spark.createDataFrame(sc.emptyRDD(), schema_header)\n",
+ "empty_with_header.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Y1ZNOx7ilUnd"
+ },
+ "source": [
+ "### Create a dataframe with header and data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 68,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "TvzyL46QkJBl",
+ "outputId": "80bd5c29-cd11-4226-be60-a0db525651f6"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-----+---+\n",
+ "| name|age|\n",
+ "+-----+---+\n",
+ "|Alice| 13|\n",
+ "|Jacob| 24|\n",
+ "|Betty|135|\n",
+ "+-----+---+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql import Row\n",
+ "mylist = [\n",
+ " {\"name\":'Alice',\"age\":13},\n",
+ " {\"name\":'Jacob',\"age\":24},\n",
+ " {\"name\":'Betty',\"age\":135},\n",
+ "]\n",
+ "spark.createDataFrame(Row(**x) for x in mylist).show()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 69,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "VnRMckA5nLoJ",
+ "outputId": "f362d23c-5343-4601-d669-0e26940c043a"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-----+---+\n",
+ "| name|age|\n",
+ "+-----+---+\n",
+ "|Alice| 13|\n",
+ "|Jacob| 24|\n",
+ "|Betty|135|\n",
+ "+-----+---+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "# You can achieve the same using this - note that we are using spark context here, not a spark session\n",
+ "from pyspark.sql import Row\n",
+ "df = sc.parallelize([\n",
+ " Row(name='Alice', age=13),\n",
+ " Row(name='Jacob', age=24),\n",
+ " Row(name='Betty', age=135)]).toDF()\n",
+ "df.show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "f3crkAQVlxKp"
+ },
+ "source": [
+ " \n",
+ "## Drop Duplicates"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "4IHrYEwHmBcc"
+ },
+ "source": [
+ "As mentioned earlier, there are two easy to remove duplicates from a dataframe. We have already seen the usage of distinct under Get Distinct Rows section. \n",
+ "I will expalin how to use the `dropDuplicates()` function to achieve the same. \n",
+ "\n",
+ "> `drop_duplicates()` is an alias for `dropDuplicates()`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 70,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "wOuHRAPJmWen",
+ "outputId": "08dcf7e7-91b3-4e42-b418-e29cafd1d1f6"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-----+---+------+\n",
+ "| name|age|height|\n",
+ "+-----+---+------+\n",
+ "|Alice| 5| 80|\n",
+ "|Jacob| 24| 80|\n",
+ "+-----+---+------+\n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "from pyspark.sql import Row\n",
+ "from pyspark.sql import Row\n",
+ "mylist = [\n",
+ " {\"name\":'Alice',\"age\":5,\"height\":80},\n",
+ " {\"name\":'Jacob',\"age\":24,\"height\":80},\n",
+ " {\"name\":'Alice',\"age\":5,\"height\":80}\n",
+ "]\n",
+ "df = spark.createDataFrame(Row(**x) for x in mylist)\n",
+ "df.dropDuplicates().show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "zMv7A-2Hnmjh"
+ },
+ "source": [
+ "`dropDuplicates()` can also take in an optional parameter called *subset* which helps specify the columns on which the duplicate check needs to be done on."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 71,
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "SHnFylV1n8to",
+ "outputId": "04d2b333-5770-4287-bb48-775bbc08aad2"
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "+-----+---+------+\n",
+ "| name|age|height|\n",
+ "+-----+---+------+\n",
+ "|Alice| 5| 80|\n",
+ "+-----+---+------+\n",
+ "\n"
+ ]
}
- ]
-}
\ No newline at end of file
+ ],
+ "source": [
+ "df.dropDuplicates(subset=['height']).show()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "bAS4DKxjqI7H"
+ },
+ "source": [
+ " \n",
+ "## Fine Tuning a Spark Job"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2d9pLlDl76dM"
+ },
+ "source": [
+ "Before we begin, please note that this entire section is written purely based on experience. It might differ with use cases, but it will help you get a better understanding of what you should be looking for, or act as a guidance to achieve your aim.\n",
+ "\n",
+ ">Spark Performance Tuning refers to the process of adjusting settings to record for memory, cores, and instances used by the system. This process guarantees that the Spark has a flawless performance and also prevents bottlenecking of resources in Spark.\n",
+ "\n",
+ "Considering you are using Amazon EMR to execute your spark jobs, there are three aspects you need to take care of:\n",
+ "1. EMR Sizing\n",
+ "2. Spark Configurations\n",
+ "3. Job Tuning\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vIbXZT29JxmG"
+ },
+ "source": [
+ " \n",
+ "### EMR Sizing"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2Rv2SM_-KA8W"
+ },
+ "source": [
+ "Sizing your EMR is extremely important, as this affects the efficency of your spark jobs. Apart from the cost factor, the maximum number of nodes and memory your job can use will be decided by this. If you spin up a EMR with high specifications, that obviously means you are paying more for it, so we should ideally utilize it to the max. These are the guidelines that I follow to make sure the EMR is rightly sized:\n",
+ "\n",
+ "1. Size of the input data (include all the input data) on the disk.\n",
+ "2. Whether the jobs have transformations or just a straight pass through. Assess the joins and the complex joins involved.\n",
+ "3. Size of the output data on the disk.\n",
+ "\n",
+ "Look at the above criteria against the memory you need to process, and the disk space you would need. Start with a small configuration, and keep adding nodes to arrive at an optimal configuration. In case you are wondering about the *Execution time vs EMR configuration* factor, please understand that it is okay for a job to run longer, rather than adding more resources to the cluster. For example, it is okay to run a job for 40 mins job on a 5 node cluster, rather than running a job in 10 mins on a 15 node cluster.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "pmeFEgv6QTST"
+ },
+ "source": [
+ "Another thing you need to know about EMRs, are the different kinds of EC2 instance types provided by Amazon. I will briefly talk about them, but I strongly recommend you to read more about it from the [official documentation](https://aws.amazon.com/ec2/instance-types/). There are 5 types of instance classes. Based on the job you want to run, you can decide which one to use:\n",
+ "\n",
+ ">Instance Class | Description\n",
+ ">--- | ---\n",
+ ">General purpose | Balance of compute, memory and networking resources\n",
+ ">Compute optimized | Ideal for compute bound applications that benefit from high performance processors\n",
+ ">Memory optimized | Designed to deliver fast performance for workloads that process large data sets in memory\n",
+ ">Storage optimized | For workloads that require high, sequential read and write access to very large data sets on local storage\n",
+ ">GPU instances | Use hardware accelerators, or co-processors, to perform high demanding functions, more efficiently than is possible in software running on CPUs\n",
+ "\n",
+ "The configuration (memory, storage, cpu, network performance) will differ based on the instance class you choose. \n",
+ "To help make life easier, here is what I do when I get into a predicament about which one to go with: \n",
+ " 1. Visit [ec2instances](https://www.ec2instances.info/)\n",
+ " 2. Choose the EC2 instances in question \n",
+ " 3. Click on compare selected\n",
+ "\n",
+ "This will easily help you undesrstand what you are getting into, and thereby help you make the best choice! The site was built by [Garret Heaton](https://github.com/powdahound)(founder of Swoot), and has helped me countless number of times to make an informed decision."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "snACMwZug5Yn"
+ },
+ "source": [
+ " \n",
+ "### Spark Configurations"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "uFJNpK06hnpo"
+ },
+ "source": [
+ "There are a ton of [configurations](https://spark.apache.org/docs/latest/configuration.html) that you can tweak when it comes to Spark. Here, I will be noting down some of the configurations which I use, which have worked well for me. Alright! let's get into it!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dABRu9eokZxw"
+ },
+ "source": [
+ "#### Job Scheduling"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "8fRFs6atkdxS"
+ },
+ "source": [
+ "When you submit your job in a cluster, it will be given to Spark Schedulers, which is responsible for materializing a logical plan for your job. There are two types of [job scheduling](https://spark.apache.org/docs/latest/job-scheduling.html): \n",
+ "1. FIFO \n",
+ "By default, Spark’s scheduler runs jobs in FIFO fashion. Each job is divided into stages (e.g. map and reduce phases), and the first job gets priority on all available resources while its stages have tasks to launch, then the second job gets priority, etc. If the jobs at the head of the queue don’t need to use the whole cluster, later jobs can start to run right away, but if the jobs at the head of the queue are large, then later jobs may be delayed significantly. \n",
+ "2. FAIR \n",
+ "The fair scheduler supports grouping jobs into pools and setting different scheduling options (e.g. weight) for each pool. This can be useful to create a high-priority pool for more important jobs, for example, or to group the jobs of each user together and give users equal shares regardless of how many concurrent jobs they have instead of giving jobs equal shares. This approach is modeled after the Hadoop Fair Scheduler.\n",
+ "\n",
+ "> I personally prefer using the FAIR mode, and this can be set by adding `.config(\"spark.scheduler.mode\", \"FAIR\")` when you create your SparkSession.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "xrgbAiZHnq_U"
+ },
+ "source": [
+ "#### Serializer"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "N7ZYGGcYsRzB"
+ },
+ "source": [
+ "We have two types of [serializers](https://spark.apache.org/docs/latest/tuning.html#data-serialization) available: \n",
+ "1. Java serialization \n",
+ "2. Kryo serialization\n",
+ "\n",
+ "Kryo is significantly faster and more compact than Java serialization (often as much as 10x), but does not support all Serializable types and requires you to register the classes you’ll use in the program in advance for best performance.\n",
+ "\n",
+ "Java serialization is used by default because if you have custom class that extends Serializable it can be easily used. You can also control the performance of your serialization more closely by extending java.io.Externalizable\n",
+ "\n",
+ "> The general recommendation is to use Kyro as the serializer whenver possible, as it leads to much smaller sizes than Java serialization. It can be added by using `.config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")` when you create your SparkSession.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "244El832wT8f"
+ },
+ "source": [
+ "#### Shuffle Behaviour"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "XWsBFnA8xpeF"
+ },
+ "source": [
+ "It is generally a good idea to compress the output file after the map phase. The `spark.shuffle.compress` property decides whether to do the compression or not. The compression used is `spark.io.compression.codec`.\n",
+ "\n",
+ "> The property can be added by using `.config(\"spark.shuffle.compress\", \"true\")` when you create your SparkSession."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "H5cFCvbczHz_"
+ },
+ "source": [
+ "#### Compression and Serialization"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "LHU5lfyFzKw9"
+ },
+ "source": [
+ "There are 4 defaiult codecs spark provides to compress internal data such as RDD partitions, event log, broadcast variables and shuffle outputs. They are: \n",
+ "\n",
+ "1. lz4\n",
+ "2. lzf\n",
+ "3. snappy\n",
+ "4. zstd\n",
+ "\n",
+ "> The decision on which to use rests upon the use case. I generally use the `snappy` compression. Google created Snappy because they needed something that offered very fast compression at the expense of final size. Snappy is fast, stable and free, but it increases the size more than the other codecs. At the same time, since compute costs will be less, it seems like balanced trade off. The property can be added by using `.config(\"spark.io.compression.codec\", \"snappy\")` when you create your SparkSession.\n",
+ "\n",
+ "This [session](https://databricks.com/session/best-practice-of-compression-decompression-codes-in-apache-spark) explains the best practice of compression/decompression codes in Apache Spark. I recommend you to take a look at it before taking a decision."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "rdGARl-D3n-l"
+ },
+ "source": [
+ "#### Scheduling"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Cm0ExAoS4RU6"
+ },
+ "source": [
+ "The property `spark.speculation` performs speculative execution of tasks. This means if one or more tasks are running slowly in a stage, they will be re-launched. Speculative execution will not stop the slow running task but it launches the new task in parallel.\n",
+ "\n",
+ "> I usually disable this option by adding `.config(\"spark.speculation\", \"false\") ` when I create the SparkSession. "
+ ]
+ },
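+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Pulling the settings from the last few subsections together, a SparkSession built with these preferences might look like the sketch below. Treat it as a starting point under the assumptions discussed above, not a prescription; the app name is a placeholder:\n",
+ "```\n",
+ "from pyspark.sql import SparkSession\n",
+ "\n",
+ "spark = (SparkSession.builder\n",
+ "         .appName('tuned-job')\n",
+ "         .config('spark.scheduler.mode', 'FAIR')\n",
+ "         .config('spark.serializer', 'org.apache.spark.serializer.KryoSerializer')\n",
+ "         .config('spark.shuffle.compress', 'true')\n",
+ "         .config('spark.io.compression.codec', 'snappy')\n",
+ "         .config('spark.speculation', 'false')\n",
+ "         .getOrCreate())\n",
+ "```"
+ ]
+ },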
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "aaxfGqYZ6Iqz"
+ },
+ "source": [
+ "#### Application Properties"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ERovEKOU6TNE"
+ },
+ "source": [
+ "There are mainly two application properties that you should know about:\n",
+ "\n",
+ "1. spark.driver.memoryOverhead - The amount of off-heap memory to be allocated per driver in cluster mode, in MiB unless otherwise specified. This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the container size (typically 6-10%). This option is currently supported on YARN and Kubernetes.\n",
+ "\n",
+ "2. spark.executor.memoryOverhead - The amount of off-heap memory to be allocated per executor, in MiB unless otherwise specified. This is memory that accounts for things like VM overheads, interned strings, other native overheads, etc. This tends to grow with the executor size (typically 6-10%). This option is currently supported on YARN and Kubernetes.\n",
+ "\n",
+ "> If you ever face an issue like `Container killed by YARN for exceeding memory limits`, know that it is because you have not specified enough memory Overhead for your job to successfully execute. The default value for Overhead is 10% of avaialbe memory (driver/executor sepearte), with minimum of 384.\n",
+ "\n"
+ ]
+ },
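+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "If you run into that error, one way to raise the overhead is through spark-submit; the values below are only illustrative and should be sized to your job:\n",
+ "```\n",
+ "--conf spark.driver.memoryOverhead=1024\n",
+ "--conf spark.executor.memoryOverhead=2048\n",
+ "```"
+ ]
+ },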
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "BG09dDdL6Tvt"
+ },
+ "source": [
+ "#### Dynamic Allocation"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "m_lb-JI78CVT"
+ },
+ "source": [
+ "Lastly, I want to talk about Dynamic Allocation. This is a feature I constantly use while executing my jobs. This property is by defualt set to False. As the name suggests, it sets whether to use dynamic resource allocation, which scales the number of executors registered with this application up and down based on the workload. Truly a wonderful feature, and the greatest benefit of using it is that it will help make the best use of all the resources you have! The disadvantage of this feature is that it does not shine well when you have to execute tasks in parallel. Since most of the resources will be used by the first task, the second one will have to wait till some resource gets released. At the same time, if both get submitted at the exact same time, the resources will be shared between them, although not equally. Also, it is not guaranteed to *always* use the most optimal configurations. But in all my tests, the results have been great! \n",
+ "\n",
+ "> If you are planning on using this feature, you can pass the configurations as required through the spark-submit command. The four configurations which you will have to keep in mind are: \n",
+ "```\n",
+ "--conf spark.dynamicAllocation.enabled=true\n",
+ "--conf spark.dynamicAllocation.initialExecutors\n",
+ "--conf spark.dynamicAllocation.minExecutors\n",
+ "--conf spark.dynamicAllocation.maxExecutors\n",
+ "```\n",
+ "\n",
+ "You can read more about this feature [here](https://spark.apache.org/docs/latest/configuration.html#dynamic-allocation) and [here](https://stackoverflow.com/questions/40200389/how-to-execute-spark-programs-with-dynamic-resource-allocation).\n",
+ "\n",
+ "\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CJlmPbLYKKFA"
+ },
+ "source": [
+ " \n",
+ "### Job Tuning"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "vccmILvVWewW"
+ },
+ "source": [
+ "Apart from EMR and Spark tuning, there is another way to approach opttimizations, and that is by tuning your job itself to produce results efficently. I will be going over some such techniques which will help you achieve this. The [Spark Programming Guide](https://spark.apache.org/docs/2.1.1/programming-guide.html) talks more about these concepts in detail. If you guys prefer watching a video over reading, I highly recommend [A Deep Dive into Proper Optimization for Spark Jobs](https://youtu.be/daXEp4HmS-E) by Daniel Tomes from Databricks, which I found really useful and informative!"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "r-R5ijHrKSg0"
+ },
+ "source": [
+ "#### Broadcast Joins (Broadcast Hash Join)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dvO0z5EpM5U8"
+ },
+ "source": [
+ "For some jobs, the efficenecy can be increased by caching them in memory. Broadcast Hash Join(BHJ) is such a technique which will help you optimize join queries when the size of one side of the data is low.\n",
+ ">BroadCast joins are the fastest but the drawaback is that it will consume more memory on both the executor and driver.\n",
+ "\n",
+ "This following steps give a sneak peek into how it works, which will help you understand the use cases where it can be used: \n",
+ "1. Input file(smaller of the two tables) to be broadcasted is read by the executors in parallel into its working memory.\n",
+ "2. All the data from the executors is collected into driver (Hence, the need for higher memory at driver).\n",
+ "3. The driver then broadcasts the combined dataset (full copy) into each executor.\n",
+ "4. The size of the broadcasted dataset could be several (10-20+) times bigger the input in memory due to factors like deserialization.\n",
+ "5. Executors will end up storing the parts it read first, and also the full copy, thereby leading to a high memory requirement.\n",
+ "\n",
+ "Some things to keep in mind about BHJ:\n",
+ "1. It is advisable to use broadcast joins on small datasets only (dimesnion table, for example).\n",
+ "2. Spark does not guarantee BHJ is always chosen, since not all cases (e.g. full outer join) support BHJ.\n",
+ "3. You could notice skews in tasks due to uneven partition sizes; especially during aggregations, joins etc. This can be evened out by introducing Salt value (random value). *Suggested formula for salt value:* random(0 – (shuffle partition count – 1))\n"
+ ]
+ },
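+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "In the DataFrame API you can also hint a broadcast join explicitly. Here is a minimal sketch, where `large_df`, `small_df` and the join key are placeholders:\n",
+ "```\n",
+ "from pyspark.sql.functions import broadcast\n",
+ "\n",
+ "# small_df is the small (dimension) side, large_df the large side\n",
+ "joined = large_df.join(broadcast(small_df), on='key', how='inner')\n",
+ "```"
+ ]
+ },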
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "UOvEVcieVn7e"
+ },
+ "source": [
+ "#### Spark Partitions"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "ocpQiqOtVqPz"
+ },
+ "source": [
+ "A partition in spark is an atomic chunk of data (logical division of data) stored on a node in the cluster. Partitions are the basic units of parallelism in Spark. Having too large a number of partitions or too few is not an ideal solution. The number of partitions in spark should be decided based on the cluster configuration and requirements of the application. Increasing the number of partitions will make each partition have less data or no data at all. Generally, spark partitioning can be broken down in three ways:\n",
+ "1. Input\n",
+ "2. Shuffle\n",
+ "3. Output\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "2KY1mxZVfNsl"
+ },
+ "source": [
+ "##### Input"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "dKw48h9eEbPa"
+ },
+ "source": [
+ "Spark usually does a good job of figuring the ideal configuration for this one, except in very particular cases. It is advisable to use the spark default unless:\n",
+ "1. Increase parallelism\n",
+ "2. Heavily nested data\n",
+ "3. Generating data (explode)\n",
+ "4. Source is not optimal\n",
+ "5. You are using UDFs\n",
+ "\n",
+ "`spark.sql.files.maxpartitionBytes`: This property indicates the maximum number of bytes to pack into a single partition when reading files (Default 128 MB) . Use this to increase the parallelism in reading input data. For example, if you have more cores, then you can increase the number of parallel tasks which will ensure usage of the all the cores of the cluster, and increase the speed of the task."
+ ]
+ },
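+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "For instance, to read the input with smaller (and therefore more numerous) partitions, you could lower the value before reading; the 64MB figure here is only illustrative:\n",
+ "```\n",
+ "# smaller partitions -> more read tasks -> more parallelism\n",
+ "spark.conf.set('spark.sql.files.maxPartitionBytes', '64MB')\n",
+ "df = spark.read.csv('cars.csv', header=True, sep=';')\n",
+ "```"
+ ]
+ },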
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "qwEBu3T3EbfD"
+ },
+ "source": [
+ "##### Shuffle"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "Wx0iQUpFEbus"
+ },
+ "source": [
+ "One of the major reason why most jobs lags in performance is, for the majority of the time, because they get the shuffle partitions count wrong. By default, the value is set to 200. In almost all situations, this is not ideal. If you are dealing with shuffle satge of less than 20 GB, 200 is fine, but otherwise this needs to be changed. For most cases, you can use the following equation to find the right value:\n",
+ ">`Partition Count = Stage Input Data / Target Size` where \n",
+ "`Largest Shuffle Stage (Target Size) < 200MB/partition` in most cases. \n",
+ "`spark.sql.shuffle.partitions` property is used to set the ideal partition count value.\n",
+ "\n",
+ "If you ever notice that target size at the range of TBs, there is something terribly wrong, and you might want to change it back to 200, or recalculate it. Shuffle partitions can be configured for every action (not transformation) in the spark script.\n",
+ "\n",
+ "Let us use an example to explain this scenario: \n",
+ "Assume shuffle stage input = 210 GB. \n",
+ "Partition Count = Stage Input Data / Target Size = 210000 MB/200 MB = 1050. \n",
+ "As you can see, my shuffle partitions should be 1050, not 200.\n",
+ "\n",
+ "But, if your cluster has 2000 cores, then set your shuffle partitions to 2000.\n",
+ ">In a large cluster dealing with a large data job, never set your shuffle partitions less than your total core count. \n",
+ "\n",
+ "\n",
+ "\n",
+ "Shuffle stages almost always precede the write stages and having high shuffle partition count creates small files in the output. To address this, use localCheckPoint just before write & do a coalesce call. This localCheckPoint writes the Shuffle Partition to executor local disk and then coalesces into lower partition count and hence improves the overall performance of both shuffle stage and write stage."
+ ]
+ },
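+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "A rough sketch of both ideas, reusing the 1050-partition example from above; the aggregation, coalesce count and output path are placeholders:\n",
+ "```\n",
+ "# set the shuffle partition count before the wide (shuffle) stage\n",
+ "spark.conf.set('spark.sql.shuffle.partitions', 1050)\n",
+ "\n",
+ "aggregated = df.groupBy('Origin').count()\n",
+ "\n",
+ "# materialise the shuffle output locally, then coalesce before writing\n",
+ "out = aggregated.localCheckpoint().coalesce(10)\n",
+ "out.write.mode('overwrite').parquet('output_path')\n",
+ "```"
+ ]
+ },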
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "-HxUYv77EdSv"
+ },
+ "source": [
+ "##### Output"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "CPA6YRYrEdgG"
+ },
+ "source": [
+ "There are different methods to write the data. You can control the size, composition, number of files in the output and even the number of records in each file while writing the data. While writing the data, you can increase the parallelism, thereby ensuring you use all the resources that you have. But this approach would lead to a larger number of smaller files. Usually, this isn't a problem, but if you want bigger files, you will have to use one of the compaction techniques, preferably in a cluster with lesser configuration. There are multiple ways to change the composition of the output. Keep these two in mind about composition:\n",
+ "1. Coalesce: Use this to reduce the number of partitions.\n",
+ "2. Repartition: Use this very rarely, and never to reduce the number of partitions \n",
+ " a. Range Paritioner - It partitions the data either based on some sorted order OR set of sorted ranges of keys. \n",
+ " b. Hash Partioner - It spreads around the data in the partitioning based upon the key value. Hash partitioning can make distributed data skewed."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NZ5xsdWmNOVz"
+ },
+ "source": [
+ " \n",
+ "### Best Practices"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "NUKLZ8G8NVuR"
+ },
+ "source": [
+ "Try to incorporate these to your coding habits for better performance:\n",
+ "1. Do not use NOT IN use NOT EXISTS.\n",
+ "2. Remove Counts, Distinct Counts (use approxCountDIstinct).\n",
+ "3. Drop Duplicates early.\n",
+ "4. Always prefer SQL functions over PandasUDF.\n",
+ "5. Use Hive partitions effectively.\n",
+ "6. Leverage Spark UI effectively. \n",
+ "7. Avoid Shuffle Spills.\n",
+ "8. Aim for target cluster utilization of atleast 70%.\n",
+ "\n"
+ ]
+ }
+ ],
+ "metadata": {
+ "colab": {
+ "collapsed_sections": [],
+ "name": "Colab and PySpark.ipynb",
+ "provenance": [],
+ "toc_visible": true
+ },
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 1
+}