
Uniq Command in Linux with Examples

Last Updated on November 20, 2023 by Abhishek Sharma


In the vast world of Linux command-line utilities, the uniq command stands as a powerful tool for text processing and data manipulation. Whether you’re a seasoned sysadmin, a developer, or just a Linux enthusiast, understanding how to use uniq effectively can make your work more efficient and your scripts more powerful.

This article will provide you with a comprehensive guide to the uniq command in Linux. We’ll delve into its various options, discuss its practical applications, and present you with real-world examples to demonstrate its usage. Whether you’re looking to eliminate duplicate lines from a file, count the occurrences of repeated lines, or merge sorted files, uniq has you covered.

Let’s embark on this journey to unlock the full potential of the uniq command.

What is the uniq Command?

The uniq command, short for "unique," is a text-processing utility available in most Linux distributions. It’s primarily used for identifying, filtering, and manipulating unique lines in a text file. Its primary function is to eliminate duplicate lines, but it can also count occurrences of repeated lines, which makes it a versatile tool for various text-processing tasks.

Basic Usage

The basic syntax of the uniq command is as follows:

uniq [OPTION]... [INPUT [OUTPUT]]

[OPTION]: Specifies the options you want to use with uniq.
INPUT: The input file from which you want to remove duplicates or count occurrences.
OUTPUT: The output file where the results are written. If not specified, the output is displayed on the terminal.
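
For example, assuming a hypothetical sorted file named fruits.txt, the following writes the de-duplicated result to a second file instead of the terminal:

$ uniq fruits.txt fruits_unique.txt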

Removing Duplicate Lines
One of the most common use cases for the uniq command is removing duplicate lines from a text file. To do this, ensure that the input file is sorted, as uniq compares adjacent lines to detect duplicates.

$ uniq input.txt

In this example, the uniq command takes input.txt as the input file and displays the unique lines in the terminal. Duplicate lines are removed, leaving only one instance of each unique line.
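
As a quick illustration, suppose input.txt is a hypothetical sorted file with the following contents:

apple
apple
banana
cherry
cherry

Running uniq on it collapses the adjacent duplicates:

$ uniq input.txt
apple
banana
cherry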

Counting Occurrences
To count the occurrences of repeated lines in a file, use the -c option:

$ uniq -c input.txt

This command will display the unique lines along with the count of how many times each line appears in the input file.
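
Using the same hypothetical input.txt from above, the output would look like this, with the count preceding each line:

$ uniq -c input.txt
      2 apple
      1 banana
      2 cherry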

Merging Sorted Files
You can also merge two or more sorted files while eliminating duplicate lines from the result. This is helpful when you have multiple files with overlapping content that you want to combine into one without redundancy. Note that uniq itself reads only a single input file (a second filename is treated as the output file), so the merge is usually done with sort, piping the result through uniq:

$ sort -m file1.txt file2.txt | uniq > merged.txt

Here, sort -m merges the already-sorted file1.txt and file2.txt, and uniq removes the duplicate lines, ensuring that merged.txt contains only unique lines. The one-step equivalent, sort -u file1.txt file2.txt > merged.txt, produces the same result.
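
For example, with two hypothetical sorted files, animals1.txt containing cat and dog, and animals2.txt containing dog and fox, the pipeline above would produce a merged.txt containing:

cat
dog
fox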

Advanced Options
The uniq command offers several options for fine-tuning its behavior:

-d: Display only duplicate lines.
-u: Display only unique lines.
-i: Perform a case-insensitive comparison.
-s N: Skip the first N characters when comparing lines.
-f N: Skip the first N fields when comparing lines (fields are separated by whitespace).
-w N: Compare no more than the first N characters of each line (GNU uniq).

These advanced options allow you to customize the behavior of the uniq command to suit your specific needs.
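
Continuing with the hypothetical input.txt used earlier (apple, apple, banana, cherry, cherry), here is how a couple of these options behave:

$ uniq -d input.txt
apple
cherry

$ uniq -u input.txt
banana

The -d output lists each duplicated line once, while -u lists only the lines that appear exactly once.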

Conclusion
The uniq command is a versatile and valuable tool in a Linux user’s or administrator’s arsenal. Its ability to filter and manipulate text data, remove duplicates, and count occurrences makes it a handy resource for various tasks, from data processing to scripting.

Throughout this article, we’ve explored the essential uniq options and showcased real-world examples of how it can be employed effectively. By mastering this command, you can improve your text processing skills and streamline your workflow in Linux.

As you continue to experiment and work with the uniq command, you’ll discover even more creative ways to employ it in your daily tasks. Whether you’re maintaining data, parsing logs, or writing shell scripts, uniq will prove to be a valuable companion, simplifying your operations and saving you precious time.

So, embrace the power of uniq and let it help you conquer the world of text manipulation in Linux!

FAQ Related to the uniq Command in Linux with Examples

Here are some FAQs related to the uniq command in Linux with examples.

1. What is the purpose of the uniq command in Linux?
The uniq command is used to filter, manipulate, and identify unique lines within text files. It primarily helps in removing duplicate lines and can also count the occurrences of repeated lines.

2. How do I use the uniq command to remove duplicate lines from a file?
Run uniq on a sorted file, for example: uniq input.txt. Adjacent duplicate lines are collapsed to a single copy. Note that the -d option does not remove duplicates; it prints only the lines that are duplicated.

3. Can the uniq command be used to count the occurrences of lines?
Yes, the uniq command can count the occurrences of lines in a file. Use the -c option, such as uniq -c input.txt, to display the count alongside the unique lines.

4. How can I merge sorted files using the uniq command?
uniq itself reads only one input file, so merging is done together with sort, for example: sort -m file1.txt file2.txt | uniq > merged.txt (or simply sort -u file1.txt file2.txt > merged.txt). The -u option of uniq does something different: it prints only the lines that are not repeated.

5. Are there any advanced options for the uniq command?
Yes, uniq offers various options to handle different scenarios, such as skipping a specific number of characters at the beginning of each line using the -s option, skipping leading fields with the -f option, limiting the comparison to the first N characters with the -w option (GNU uniq), and more. You can explore these options in the uniq manual or by running man uniq.

6. Can I use the uniq command with unsorted files?
The uniq command only compares adjacent lines, so it is designed to work with sorted input. If you run it on unsorted data, only consecutive duplicates are removed and non-adjacent duplicates remain. Sort your data first using the sort command and then pass it to uniq for the expected results.
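
For example, to de-duplicate a hypothetical unsorted file names.txt, you could pipe sort into uniq:

$ sort names.txt | uniq

For plain de-duplication, sort -u names.txt is an equivalent shortcut.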
