Dropping Rows with Null Values in PySpark

Working with large datasets often involves cleaning and preprocessing the data to ensure it is of high quality for analysis or machine learning tasks. One common step in this process is dealing with null values, which can represent missing, corrupt, or irrelevant data points. Apache Spark is a powerful tool for big data processing, and its Python API, PySpark, brings the capabilities of Spark to Python users. In PySpark, dropping rows with null values is an essential skill for maintaining the integrity of your dataset. In this article, we will explore several methods to remove rows with null values in PySpark DataFrames.

Understanding DataFrames in PySpark

Before diving into null value handling, it’s important to understand what a DataFrame is in the context of PySpark. A DataFrame is a distributed collection of data organized into named columns, similar to a table in a relational database or a data frame in R or Python (pandas). DataFrames are designed to process large amounts of data and offer a variety of functions for complex operations such as filtering, grouping, and aggregation.
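For instance, a minimal sketch of such operations (assuming a DataFrame `df` with `name` and `age` columns, like the one we create below) might look like this:

# Hypothetical examples of common DataFrame operations
adults = df.filter(df.age >= 18)         # filtering rows by a condition
counts = df.groupBy('age').count()       # grouping rows by a column
avg_age = df.agg({'age': 'avg'})         # aggregating over the whole DataFrame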

Creating a PySpark Session

To work with DataFrames in PySpark, you need to initiate a SparkSession first. This is the entry point for programming Spark with the Dataset and DataFrame API.


from pyspark.sql import SparkSession

# Initialize a SparkSession
spark = SparkSession.builder \
    .appName("Dropping Null Values Example") \
    .getOrCreate()

With the SparkSession (here named ‘spark’), you can proceed to create DataFrames and manipulate your data.

Creating a DataFrame with Null Values

Let’s create a sample DataFrame that includes some null (None in Python) values:


from pyspark.sql import Row

# Sample data with null values
data = [Row(name='Alice', age=25, height=165),
        Row(name=None, age=29, height=175),
        Row(name='Bob', age=None, height=None),
        Row(name='Carol', age=32, height=162),
        Row(name='Dan', age=28, height=None)]

# Creating DataFrame
df = spark.createDataFrame(data)

# Show the DataFrame
df.show()

This would output:


+-----+----+------+
| name| age|height|
+-----+----+------+
|Alice|  25|   165|
| null|  29|   175|
|  Bob|null|  null|
|Carol|  32|   162|
|  Dan|  28|  null|
+-----+----+------+

Basic Row Dropping

To drop rows with any null values, you can use the `na.drop()` method (also available as `dropna()`). Note that `df.drop()` by itself removes columns, not rows. Without any arguments, `na.drop()` will remove rows that have a null value in any column.


# Drop any rows with null values
df_clean = df.na.drop()

# Show the resulting DataFrame
df_clean.show()

The output would be:


+-----+---+------+
| name|age|height|
+-----+---+------+
|Alice| 25|   165|
|Carol| 32|   162|
+-----+---+------+
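Note that `df.na.drop()` can also be written using the DataFrame method `dropna()`, which is an alias for the same operation:

# Equivalent shorthand using dropna()
df_clean = df.dropna()
df_clean.show()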

Specifying Columns for Dropping Rows

You might want to drop rows based on null values in specific columns. You can do this by passing the column names to the `subset` argument of `na.drop()`.

Dropping Rows with Nulls in Any Given Column

If you want to remove rows that have null values in a particular set of columns, pass those column names via `subset`:


# Drop rows with nulls in 'name' or 'age' columns
df_clean_columns = df.na.drop(subset=['name', 'age'])

# Show the resulting DataFrame
df_clean_columns.show()

The output now will be:


+-----+---+------+
| name|age|height|
+-----+---+------+
|Alice| 25|   165|
|Carol| 32|   162|
|  Dan| 28|  null|
+-----+---+------+
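Since `na.drop()` defaults to `how='any'`, the call above is equivalent to spelling out the mode explicitly:

# Explicit form: drop a row if ANY of the subset columns is null
df_clean_columns = df.na.drop(how='any', subset=['name', 'age'])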

Dropping Rows with Nulls in All Given Columns

If you require all specified column values to be null for a row to be dropped, you can set the `how` argument to ‘all’:


# Drop rows where all specified columns are null
df_clean_all_columns = df.na.drop(how='all', subset=['age', 'height'])

# Show the resulting DataFrame
df_clean_all_columns.show()

The output this time would be, with only Bob’s row removed since both its ‘age’ and ‘height’ are null:


+-----+---+------+
| name|age|height|
+-----+---+------+
|Alice| 25|   165|
| null| 29|   175|
|Carol| 32|   162|
|  Dan| 28|  null|
+-----+---+------+
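Note that without a `subset`, `how='all'` drops only rows in which every column is null. No row in our sample DataFrame is entirely null, so the following call would leave all five rows in place:

# No row is null in ALL columns, so nothing is dropped here
df_unchanged = df.na.drop(how='all')
df_unchanged.show()  # displays all five original rows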

Using a Threshold to Drop Rows

Sometimes you might want to keep rows that have a certain number of non-null values. PySpark provides the `thresh` argument in the `na.drop()` method, which specifies the minimum number of non-null values a row must have in order to be retained.


# Drop rows with fewer than 2 non-null values
df_clean_thresh = df.na.drop(thresh=2)

# Show the resulting DataFrame
df_clean_thresh.show()

The resulting DataFrame after applying the threshold will look like this:


+-----+---+------+
| name|age|height|
+-----+---+------+
|Alice| 25|   165|
| null| 29|   175|
|Carol| 32|   162|
|  Dan| 28|  null|
+-----+---+------+
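By default the threshold is counted across all columns, but you can combine `thresh` with `subset` to count non-null values only in specific columns (when `thresh` is given, it takes precedence over `how`). A short sketch:

# Keep rows with at least 2 non-null values among 'age' and 'height'
df_thresh_subset = df.na.drop(thresh=2, subset=['age', 'height'])
df_thresh_subset.show()
# Only the rows for Alice, the unnamed 29-year-old, and Carol have
# both 'age' and 'height' populated, so Bob's and Dan's rows are dropped.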

Conclusion

Dropping rows with null values is a frequently required data cleaning task when working with big data in Spark. PySpark provides a straightforward and flexible API for handling such scenarios, with options for targeting specific columns, requiring a minimum number of non-null values, and choosing between ‘any’ and ‘all’ matching. By understanding and utilizing these methods, you can efficiently prepare your dataset for deeper analysis and modeling.

Remember, after cleaning data with PySpark, always review your data to ensure it still meets the needs of your analysis. Properly handling null values can greatly improve the quality of your insights and the performance of your machine learning models.
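One simple way to perform such a review is to count the nulls remaining in each column, for example:

from pyspark.sql import functions as F

# Count the remaining nulls in each column of the cleaned DataFrame
null_counts = df_clean.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df_clean.columns]
)
null_counts.show()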

By using the techniques above, you should be well-equipped to deal with null values in your PySpark DataFrames.
