
How to find duplicate values in a SQL table?

Identifying and handling duplicate records is a critical part of data cleansing and quality control in SQL. Whether you’re auditing inconsistent data, enforcing business rules, or debugging a production environment, finding duplicate values in your table is the first step. In this comprehensive guide, we’ll explore various ways to detect duplicates, the pros and cons of each approach, and some best practices to keep your data tidy and accurate.

1. Using GROUP BY and HAVING

The most common way to detect duplicates is by grouping the table on the columns you suspect have duplicate values and then filtering with a HAVING clause.

SELECT ColumnA, ColumnB, COUNT(*) AS duplicate_count
FROM YourTable
GROUP BY ColumnA, ColumnB
HAVING COUNT(*) > 1;

How It Works

  1. GROUP BY: Groups rows by ColumnA and ColumnB.
  2. COUNT(*): Counts how many rows share the same values in those columns.
  3. HAVING COUNT(*) > 1: Retrieves only those groups with more than one row (a worked example follows).
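
For instance, here is a minimal sketch using a hypothetical Customers table (the table name, column names, and data are invented purely for illustration, and the multi-row INSERT syntax may vary slightly between databases):

-- Hypothetical sample data for illustration
CREATE TABLE Customers (
    CustomerID INT PRIMARY KEY,
    FirstName  VARCHAR(50),
    Email      VARCHAR(100)
);

INSERT INTO Customers (CustomerID, FirstName, Email) VALUES
    (1, 'Alice', 'alice@example.com'),
    (2, 'Bob',   'bob@example.com'),
    (3, 'Alice', 'alice@example.com');  -- same name and email as row 1

-- Same pattern as the query above, applied to FirstName and Email
SELECT FirstName, Email, COUNT(*) AS duplicate_count
FROM Customers
GROUP BY FirstName, Email
HAVING COUNT(*) > 1;

-- Expected result: one row ('Alice', 'alice@example.com', 2)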

When to Use It

  • Best if you need a quick summary of how many duplicates you have and which values are duplicated.
  • Great for identifying patterns in data that might indicate potential data entry errors.

2. Using SELF-JOIN

A SELF-JOIN compares a table against itself, matching rows on the columns you suspect contain duplicates. This approach gives you row-level details about each duplicate.

SELECT t1.*
FROM YourTable t1
JOIN YourTable t2
    ON t1.ColumnA = t2.ColumnA
   AND t1.ColumnB = t2.ColumnB
   AND t1.PrimaryKeyColumn <> t2.PrimaryKeyColumn;

How It Works

  • t1 and t2 are aliases of the same table.
  • JOIN condition checks for matching values in ColumnA and ColumnB but ensures the primary keys differ, indicating distinct rows.

When to Use It

  • Best if you need to see each duplicate row in detail (e.g., the entire record).
  • Keep in mind that it is more verbose and can be slower on large tables than the GROUP BY approach, and that each duplicate row appears once for every other row it matches (see the note below).
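
Because every row in a duplicate group joins to every other row in that group, the output can list the same row several times. One small refinement, shown here as a sketch with the same placeholder names as above, is to add DISTINCT so each duplicate row is returned only once:

SELECT DISTINCT t1.*
FROM YourTable t1
JOIN YourTable t2
    ON t1.ColumnA = t2.ColumnA
   AND t1.ColumnB = t2.ColumnB
   AND t1.PrimaryKeyColumn <> t2.PrimaryKeyColumn;

Since t1.* includes the primary key, DISTINCT only collapses repeated copies of the same output row; every distinct duplicate record is still returned.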

3. Using Window Functions (ROW_NUMBER)

Window functions like ROW_NUMBER() offer a powerful alternative for identifying duplicates. They are especially useful if you need fine-grained control or if you plan on deleting duplicates right after you find them.

WITH DuplicateCTE AS (
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY ColumnA, ColumnB
               ORDER BY PrimaryKeyColumn
           ) AS row_num
    FROM YourTable
)
SELECT *
FROM DuplicateCTE
WHERE row_num > 1;

How It Works

  1. PARTITION BY: Groups the rows by the values in ColumnA, ColumnB.
  2. ROW_NUMBER(): Assigns a sequential number to each row within the partition based on some ordering (e.g., PrimaryKeyColumn).
  3. WHERE row_num > 1: Returns every row after the first in each partition, i.e., the duplicate copies.

When to Use It

  • Perfect if you need to pick out or remove extra rows while keeping the first occurrence of each unique set of column values (see the deletion sketch below).
  • Offers a more flexible approach to rank duplicates and decide which ones to keep or discard.
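
As a rough sketch of that cleanup step, using the same placeholder table and column names as above: in SQL Server you can typically delete through the CTE directly, while in databases such as PostgreSQL you would usually delete by primary key instead. Always verify the rows with a SELECT before running a DELETE.

-- SQL Server style: delete the extra rows through the CTE
WITH DuplicateCTE AS (
    SELECT *,
           ROW_NUMBER() OVER (
               PARTITION BY ColumnA, ColumnB
               ORDER BY PrimaryKeyColumn
           ) AS row_num
    FROM YourTable
)
DELETE FROM DuplicateCTE
WHERE row_num > 1;

-- More portable sketch: delete by primary key (works in PostgreSQL, for example)
DELETE FROM YourTable
WHERE PrimaryKeyColumn IN (
    SELECT PrimaryKeyColumn
    FROM (
        SELECT PrimaryKeyColumn,
               ROW_NUMBER() OVER (
                   PARTITION BY ColumnA, ColumnB
                   ORDER BY PrimaryKeyColumn
               ) AS row_num
        FROM YourTable
    ) AS ranked
    WHERE row_num > 1
);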

4. Best Practices for Handling Duplicates

  • Identify the Root Cause: Duplicates often indicate a data entry or system integration issue. Find out why they’re happening in the first place.
  • Use Constraints or Keys: Prevent duplicates at the source by enforcing UNIQUE constraints or proper PRIMARY KEY definitions (a brief sketch follows this list).
  • Clean Up Proactively: If you find duplicates regularly, consider routine cleanup tasks or triggers to keep your data consistent.
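
For example, here is a minimal sketch of the constraint approach, again using the placeholder names from earlier (the constraint name is invented). Note that adding the constraint will fail if duplicates already exist, so clean them up first:

-- Prevent future duplicates on the (ColumnA, ColumnB) pair
ALTER TABLE YourTable
ADD CONSTRAINT uq_yourtable_columna_columnb UNIQUE (ColumnA, ColumnB);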

5. Explore More SQL Magic

If you’re preparing for interviews or want to sharpen your SQL skills for real-world data scenarios, the resources below are a good place to start.

6. Ready for Real-World Practice?

  • Mock Interviews: Hone your interview skills with personalized mock interviews led by ex-FAANG engineers.
  • Interview BootCamp: Need a structured timeline? Check out the Interview BootCamp for a guided, multi-week approach covering coding, system design, and behavioral prep.
  • Watch and Learn: Visit the DesignGurus YouTube Channel for free tutorials and practical insights on data, system design, and coding topics.

Conclusion

Finding duplicates in SQL is a fundamental skill every developer or data engineer should master. By leveraging queries with GROUP BY, SELF-JOIN, or ROW_NUMBER() window functions, you can quickly isolate problem rows and take corrective action. Best practices like using unique constraints, routinely cleaning your data, and investigating root causes can help prevent duplicates in the future.

For more in-depth learning and hands-on guidance, explore the courses on DesignGurus.io. Armed with these skills, you’ll be well-equipped to keep your databases clean, consistent, and high-performing—even in the most demanding enterprise environments. Good luck with your SQL journey!

CONTRIBUTOR
TechGrind