
Remove Duplicates from Sorted Array

Last Updated on June 8, 2023 by Mayank Dham

In this blog, we will discuss various methods to remove duplicates from a sorted array. Before moving on to the approaches, let's first look at how questions like this can help you land your ideal job.

If you want to succeed in the hiring process of a product-based company such as Google, Amazon, or Microsoft, you need to be familiar with the kinds of questions asked in technical interviews. As the race for engineering talent heats up every year, these companies rely on their interview process to select candidates with a deep understanding of computer science and engineering fundamentals. To do this, they use a range of assessments that test both a candidate's grasp of the basics and their ability to handle challenging problems. In this post, we will look at one such problem and present solutions along with the corresponding code.

How to Remove Duplicates from Sorted Array?

Given an integer array sorted in non-decreasing order, remove the duplicates in place so that each unique element appears only once, and return the number of unique elements.

For example :

Array = {2, 3, 4, 4, 5, 6, 6, 6, 8, 8}
Output = 6 (the unique elements are {2, 3, 4, 5, 6, 8})

Solution Approaches to Remove Duplicates from Sorted Array

There are three ways to approach this problem. Below is a detailed explanation of each, along with the corresponding source code:

Method 1: Brute force approach using extra memory

We can create a new array of the same size as the original, walk through the original array, and compare each value with its next neighbour. Whenever an element differs from the element that follows it, we copy it into the auxiliary array; duplicates are simply skipped.

While doing this, we also keep count of how many unique values we have copied, so that we can return the total number of distinct values once the process is finished. Finally, we copy the contents of the auxiliary array back into the original array and return the count.

Algorithm

  • Create an auxiliary array of the same size as the original array
  • Compare each element of the original array with its neighbouring element
  • If the elements do not match, copy the element into the auxiliary array, maintaining a count variable
  • After the traversal, copy the elements of the auxiliary array back into the original array
  • Return the new size of the array
Code Implementation in C++

#include <iostream>
using namespace std;

int removeDuplicates(int arr[], int n)
{
    if (n == 0 || n == 1)
        return n;

    // Auxiliary array to hold the unique elements
    int* temp = new int[n];

    // Copy an element only when it differs from its next neighbour
    int j = 0;
    for (int i = 0; i < n - 1; i++)
        if (arr[i] != arr[i + 1])
            temp[j++] = arr[i];

    // The last element is always unique
    temp[j++] = arr[n - 1];

    // Copy the unique elements back into the original array
    for (int i = 0; i < j; i++)
        arr[i] = temp[i];

    delete[] temp;
    return j;
}

int main()
{
    int arr[] = { 1, 2, 2, 3, 4, 4, 4, 5, 5, 6 };
    int n = sizeof(arr) / sizeof(arr[0]);

    n = removeDuplicates(arr, n);

    for (int i = 0; i < n; i++)
        cout << arr[i] << " ";

    return 0;
}

Code Implementation in Java

class Main {
    static int removeDuplicates(int arr[], int n)
    {
        if (n == 0 || n == 1)
            return n;

        int[] temp = new int[n];

        int j = 0;
        for (int i = 0; i < n - 1; i++)
            if (arr[i] != arr[i + 1])
                temp[j++] = arr[i];

        temp[j++] = arr[n - 1];

        for (int i = 0; i < j; i++)
            arr[i] = temp[i];

        return j;
    }

    public static void main(String[] args)
    {
        int arr[] = { 1, 2, 2, 3, 4, 4, 4, 5, 5, 6, 7 };
        int n = arr.length;

        n = removeDuplicates(arr, n);

        for (int i = 0; i < n; i++)
            System.out.print(arr[i] + " ");
    }
}

Code Implementation in Python

def removeDuplicates(arr, n):

    if n == 0 or n == 1:
        return n

    # Auxiliary list to hold the unique elements
    temp = [0] * n

    j = 0
    for i in range(0, n-1):

        if arr[i] != arr[i+1]:
            temp[j] = arr[i]
            j += 1

    temp[j] = arr[n-1]
    j += 1
    
    for i in range(0, j):
        arr[i] = temp[i]

    return j

arr = [1, 2, 2, 3, 4, 4, 4, 5, 5, 6, 6, 7]
n = len(arr)

n = removeDuplicates(arr, n)

for i in range(n):
    print(arr[i], end=" ")

Time Complexity
The given code has an O(N) time complexity, where N is the array’s size. Additionally, because we are building an auxiliary array that is the same size as the original array, the space complexity of the solution is O(N).

Method 2: Optimal approach using index pointer

This method uses the same idea as the one above, but the key difference is that we perform the copy in place instead of using an auxiliary array. We maintain an index variable that counts the unique entries found so far; the same variable also tells us the position at which to place the next unique element within the array.

Algorithm

  • Create an index variable that points to the position of the next unique element.
  • Compare each element with its neighbouring element.
  • If the elements do not match, copy the current element to the position given by the index variable and advance the index.
  • The last element is always unique, so copy it as well.
  • As a result, the front of the array holds only the unique elements; return the index as the new size of the array.
Code Implementation in C++

#include <iostream>
using namespace std;


int removeDuplicates(int arr[], int n)
{
    if (n==0 || n==1)
        return n;

    int j = 0;

    for (int i=0; i < n-1; i++)
        if (arr[i] != arr[i+1])
            arr[j++] = arr[i];

    arr[j++] = arr[n-1];

    return j;
}

int main()
{
    int arr[] = {1, 2, 2, 3, 4, 4, 4, 5, 5, 8};
    int n = sizeof(arr) / sizeof(arr[0]);

    
    n = removeDuplicates(arr, n);

    for (int i=0; i<n; i++)
        cout << arr[i] << " ";

    return 0;
}
Code Implementation in Java

class Main
{
    static int removeDuplicates(int arr[], int n)
    {
        if (n == 0 || n == 1)
            return n;
    
        int j = 0;
    
        for (int i = 0; i < n-1; i++)
            if (arr[i] != arr[i+1])
                arr[j++] = arr[i];
    
        arr[j++] = arr[n-1];
    
        return j;
    }
    
    public static void main (String[] args)
    {
        int arr[] = {1, 2, 2, 3, 4, 4, 4, 5, 5, 9, 9};
        int n = arr.length;
        
        n = removeDuplicates(arr, n);

        for (int i=0; i<n; i++)
        System.out.print(arr[i]+" ");
    }
}
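
The article shows this approach in C++ and Java; for readers following along in Python, here is a minimal sketch of the same in-place logic (an addition for completeness, not code from the original post):

def remove_duplicates(arr):
    # In-place removal of duplicates from a sorted list.
    # Returns the count of unique elements; the first `count`
    # positions of arr hold the unique values afterwards.
    n = len(arr)
    if n <= 1:
        return n

    j = 0  # position where the next unique element is written
    for i in range(n - 1):
        if arr[i] != arr[i + 1]:
            arr[j] = arr[i]
            j += 1

    # The last element is always unique
    arr[j] = arr[n - 1]
    j += 1

    return j

arr = [1, 2, 2, 3, 4, 4, 4, 5, 5, 8]
k = remove_duplicates(arr)
print(*arr[:k])  # prints: 1 2 3 4 5 8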

Time Complexity
The preceding approach has an O(N) time complexity, where N is the array’s element count. The reason is that in order to compare the items and identify the unique elements, we traverse the complete array only once. Additionally, because we do not need any additional space to eliminate duplicates from the sorted array, the space complexity of the preceding approach is O(1).

Method 3: Two-pointer approach

Here, the fact that the array is sorted works to our benefit. We can use two pointers, i and j: j keeps advancing as long as nums[i] == nums[j], and whenever nums[j] differs from nums[i], the new value is moved forward next to the unique elements found so far.

Algorithm:

  • Return if nums size <= 1
  • Set i = 0
  • Loop for j = 1; j < nums.size(); j++
    • If nums[j] != nums[i]
      • i++
      • nums[i] = nums[j]
  • Return i + 1

Let's dry-run our algorithm to see how the solution works.

nums = [0,0,1,1,1,2,2,3,3,4]

Step 1: length = nums.size()
        = 10

Step 2: length <= 1
        10 <= 1
        false

Step 3: i = 0

Step 4: Loop for j = 1; 1 < 10
        nums[i] != nums[j]
        nums[0] != nums[1]
        0 != 0
        false
        j++
        j = 2

Step 5: Loop for j = 2; 2 < 10
        nums[i] != nums[j]
        nums[0] != nums[2]
        0 != 1
        true
        i++
        i = 1
        nums[i] = nums[j]
        nums[1] = nums[2]
        nums[1] = 1
        j++
        j = 3

Step 6: Loop for j = 3; 3 < 10
        nums[i] != nums[j]
        nums[1] != nums[3]
        1 != 1
        false
        j++
        j = 4

Step 7: Loop for j = 4; 4 < 10
        nums[i] != nums[j]
        nums[1] != nums[4]
        1 != 1
        false
        j++
        j = 5

Step 8: Loop for j = 5; 5 < 10
        nums[i] != nums[j]
        nums[1] != nums[5]
        1 != 2
        true
        i++
        i = 2
        nums[i] = nums[j]
        nums[2] = nums[5]
        nums[2] = 2
        j++
        j = 6

Step 9: Loop for j = 6; 6 < 10
        nums[i] != nums[j]
        nums[2] != nums[6]
        2 != 2
        false
        j++
        j = 7

Step 10: Loop for j = 7; 7 < 10
         nums[i] != nums[j]
         nums[2] != nums[7]
         2 != 3
         true
         i++
         i = 3
         nums[i] = nums[j]
         nums[3] = nums[7]
         nums[3] = 3
         j++
         j = 8

Step 11: Loop for j = 8; 8 < 10
         nums[i] != nums[j]
         nums[3] != nums[8]
         3 != 3
         false
         j++
         j = 9

Step 12: Loop for j = 9; 9 < 10
         nums[i] != nums[j]
         nums[3] != nums[9]
         3 != 4
         true
         i++
         i = 4
         nums[i] = nums[j]
         nums[4] = nums[9]
         nums[4] = 4
         j++
         j = 10

Step 13: Loop for j = 10; 10 < 10
         false

Step 14: return i + 1
         return 4 + 1 = 5
Code Implementation in C++

#include <iostream>
using namespace std;

int removeDuplicates(int nums[], int n)
{
    if (n <= 1) {
        return n;
    }

    // i marks the position of the last unique element written so far
    int i = 0;

    // j scans ahead; whenever it finds a new value, move it next to i
    for (int j = 1; j < n; j++) {
        if (nums[j] != nums[i]) {
            i++;
            nums[i] = nums[j];
        }
    }

    return i + 1;
}
    
int main()
{
    int arr[] = {1, 2, 2, 3, 4, 4, 4, 5, 5, 8};
    int n = sizeof(arr) / sizeof(arr[0]);
 
 
    n = removeDuplicates(arr, n);
 
    for (int i=0; i<n; i++)
        cout << arr[i] << " ";
 
    return 0;
}
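
For consistency with the earlier methods, here is a minimal Python sketch of the same two-pointer technique (an addition for completeness, not code from the original post); it follows the dry-run above step for step:

def remove_duplicates(nums):
    # Two-pointer, in-place removal of duplicates from a sorted list.
    # i points to the last unique element written; j scans ahead.
    if len(nums) <= 1:
        return len(nums)

    i = 0
    for j in range(1, len(nums)):
        if nums[j] != nums[i]:
            i += 1
            nums[i] = nums[j]

    return i + 1

nums = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4]
k = remove_duplicates(nums)
print(k, nums[:k])  # prints: 5 [0, 1, 2, 3, 4]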

The above method's time complexity is O(N) and its space complexity is O(1).

Conclusion
Removing duplicates from a sorted array is a common problem in programming, and it can be solved efficiently using a two-pointer approach. By maintaining two pointers, one for iterating through the array and another for placing the unique elements, we can modify the array in place to remove duplicates. This approach has a time complexity of O(n) and a space complexity of O(1), where n is the length of the array.

Frequently Asked Questions (FAQs):

Q1: Why do we need to remove duplicates from a sorted array?
Removing duplicates from a sorted array is often required to ensure data integrity and improve efficiency in various programming tasks. For example, in algorithms that rely on unique elements or when preparing data for further processing, removing duplicates helps in reducing complexity and avoiding unnecessary computations.

Q2: How does the two-pointer approach work?
The two-pointer approach involves maintaining two pointers, usually referred to as "slow" and "fast." The "slow" pointer keeps track of the position where the next unique element will be placed, and the "fast" pointer iterates through the array to find new elements. When a new element is encountered, it is placed at the position indicated by the "slow" pointer, and both pointers are advanced. If a duplicate element is found, only the "fast" pointer continues moving until a new element is encountered.

Q3: What is the advantage of the two-pointer approach?
The two-pointer approach is advantageous because it allows for modifying the array in place, without requiring extra space for a new array. By keeping track of the unique elements and their positions, duplicates can be efficiently skipped and new elements can be inserted without shifting the entire array.

Q4: What is the time and space complexity of removing duplicates from a sorted array using the two-pointer approach?
The two-pointer approach has a time complexity of O(n), where n is the length of the array. This is because we iterate through the array only once. The space complexity is O(1) because the algorithm modifies the input array in-place without using any additional data structures.

Q5: Can the two-pointer approach be used for unsorted arrays?
The two-pointer approach is specifically designed for sorted arrays, where duplicates are adjacent to each other. If the array is unsorted, a different approach, such as using a hash set or sorting the array first, would be more appropriate for removing duplicates efficiently.
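
As a quick illustration of the hash-set alternative mentioned above (a sketch added here, not part of the original article), duplicates in an unsorted list can be filtered in a single pass at the cost of O(n) extra space:

def remove_duplicates_unsorted(nums):
    # Keep the first occurrence of each value, preserving input order.
    # A set gives O(1) average-time membership checks.
    seen = set()
    result = []
    for x in nums:
        if x not in seen:
            seen.add(x)
            result.append(x)
    return result

print(remove_duplicates_unsorted([3, 1, 3, 2, 1]))  # prints: [3, 1, 2]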
