1. Overview

When it comes to Linux command-line utilities, grep stands out as a powerful tool for searching through text files. However, mastering grep goes beyond simple searches: it can also read patterns from files or input streams and print the matching lines, optionally with surrounding context.

One use case is to pipe a search term into grep to filter an input stream. This enables searching for a pattern in the output of another command or in the contents of a file.

In this tutorial, we’ll see how to use grep with standard input for efficient file searching. Specifically, we’ll see how pipes enable passing search terms from many sources:

  • dynamically generated lists
  • user input
  • files
  • commands
  • command-line arguments
  • file descriptors

Each case has its own specifics.

2. Searching for Patterns From a Dynamically Generated List

grep can search for patterns from a dynamically generated list via a pipe. For instance, it can filter a log file that is continuously updated as new events occur.

Let’s see some examples.

2.1. Extracting Patterns From Log Files

We can use awk and grep to dynamically extract specific patterns from log files:

$ awk '{print $3}' /var/log/vbox-setup.log | grep -f - data.txt

Let’s break down the above command:

  • awk '{print $3}' /var/log/vbox-setup.log: extracts the third field from the vbox-setup.log file (assuming fields are separated by spaces)
  • grep -f - data.txt: uses the extracted values as patterns to filter the data.txt file

Markedly, the dash (-) after the -f option tells grep to read its pattern list from stdin, which in this case is the piped awk output.
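
To see the idea in isolation, here’s a minimal, self-contained sketch; the sample.log and data.txt contents below are hypothetical and only for illustration:

$ printf 'INFO start ok\nWARN disk low\n' > sample.log
$ printf 'service ok\nmemory high\ndisk low\n' > data.txt
$ awk '{print $3}' sample.log | grep -f - data.txt
service ok
disk low

Here, awk emits ok and low as patterns, so grep keeps only the lines of data.txt that contain either word.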

2.2. Searching find Command Output

The find command recursively searches directories and generates filenames or paths that match the specified criteria.

For example, we can use the -type f option to search for regular files:

$ find . -type f -print

The above command outputs all regular files under the current directory recursively.

We can pipe this dynamically generated output to grep to search for patterns:

$ find . -type f -print | grep README

Let’s break down the above command:

  • find .: recursively searches the current directory
  • -type f: restricts the results to regular files
  • -print: prints each matching path
  • grep README: filters the paths, keeping only those that contain README

The above command prints only filenames containing README from the dynamically generated list.
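
If filenames may contain unusual characters such as newlines, a more robust sketch (assuming GNU grep and coreutils) pairs find -print0 with grep -z, which treat the paths as NUL-separated records:

$ find . -type f -print0 | grep -z README | tr '\0' '\n'

Here, tr converts the NUL separators back to newlines so the matches display one per line.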

2.3. Searching ps Command Output

The ps command shows currently running processes and their details, such as the PID, owner, and command, for example:

$ ps aux

The above command dynamically lists details of all running processes.

We can pipe this to grep to search the process list:

$ ps aux | grep '[f]irefox' 
sadaan 15862 63.7 1.6 11837220 409572 ? Sl 15:19 0:11 /snap/firefox/3836/usr/lib/firefox/firefox

Let’s see the meaning of each part of the above command:

  • ps aux: dynamically outputs the details of all running processes
  • grep '[f]irefox': searches the process details for the pattern [f]irefox; the brackets still match firefox, but they keep the grep command itself, whose argument contains the literal string [f]irefox, out of the results

Thus, we see only the process details for Firefox from the dynamic ps output.
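
As a possible follow-up, and assuming the default ps aux column layout where the PID is the second field, we can extract just the PID of the matching process:

$ ps aux | grep '[f]irefox' | awk '{print $2}'
15862

With the sample output above, this prints only the Firefox PID, which is handy for feeding into other commands.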

2.4. Searching curl Command Output

The curl command can fetch content from the web and output it, for example:

$ curl -s https://tecofers.com
<!DOCTYPE html><html lang="en-US"><head><title>...

Thus, it downloads the HTML content from tecofers.com and prints it.

We can pipe this dynamic output to grep:

$ curl -s https://tecofers.com | grep 'Copyright'
Copyright © Tecofers 2024. All rights reserved.<span class="sep"> | </span>

Let’s see what this command does:

  • curl -s: silently fetches web page content
  • grep: searches the HTML for the pattern Copyright

As a result, grep searches the downloaded web page for the text Copyright.
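
Similarly, grep’s -o option prints only the matching parts of the fetched page. For instance, here’s a sketch that lists the first few absolute links (the exact output depends on the page content):

$ curl -s https://tecofers.com | grep -oE 'https?://[^"]+' | head -n 5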

2.5. Searching MySQL Query Output

MySQL can dynamically generate results from queries. For instance, let’s display the user_login column of the wp_users table in the wpdb database:

$ sudo mysql -u root -p -e "SELECT user_login FROM wp_users;" wpdb
Enter password: 
+------------+
| user_login |
+------------+
| Baeldung   |
+------------+

Further, we can pipe the above result to search with grep:

$ sudo mysql -u root -p -e "SELECT user_login FROM wp_users;" wpdb | grep Baeldung
Enter password: 
Baeldung

Let’s break down the options in the above command:

  • sudo mysql -u root -p: logs in to MySQL as the root user, prompting for the password
  • SELECT user_login FROM wp_users: selects the user_login field from the wp_users table
  • grep Baeldung: searches the query output for the pattern Baeldung

Consequently, it prints only names containing Baeldung from the query results.
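
To keep the output pipe-friendly, mysql’s -N (skip column names) and -B (batch, tab-separated output) options produce plain values without headers or borders; here’s a sketch against the same wpdb database:

$ sudo mysql -u root -p -N -B -e "SELECT user_login FROM wp_users;" wpdb | grep Baeldung
Baeldung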

3. Searching for Patterns From User Input

grep can also search patterns from user input via pipes:

$ read -p "Enter pattern: " pattern; echo "$pattern" | grep -f - apps_list.txt
Enter pattern: x
xdg-user-dirs
xfsprogs
xkb-data
xxd
xz-utils

As a result, the above command searches for the pattern x in the file apps_list.txt.

Let’s see the meaning of each component in the above code:

  • read -p "Enter pattern: " pattern: prompts the user for input and stores it in the variable pattern
  • echo "$pattern": writes the value of $pattern to the pipe
  • grep -f - apps_list.txt: reads the pattern from stdin and searches for it in the file apps_list.txt

Again, the grep command takes its pattern from stdin. Thus, we can make the search interactive.
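
Markedly, patterns read this way are interpreted as regular expressions. If the user input should be matched literally, adding the -F option is a small variation on the same apps_list.txt example:

$ read -p "Enter pattern: " pattern; echo "$pattern" | grep -F -f - apps_list.txt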

4. Searching for Patterns Stored in a File

Let’s suppose we’ve got a file containing a list of patterns, one per line. Further, we want to pick the third pattern from the list and pass it to grep as standard input.

To do this, we combine the head, tail, and grep commands:

$ head -n 3 pattern_list.txt | tail -n 1 | grep -f - filename

Let’s break down what each part of the command does:

  • head -n 3 pattern_list.txt: extracts the first three lines (patterns) from the file pattern_list.txt
  • tail -n 1: selects the last line of head’s output, which is the third pattern
  • grep -f - filename: reads that pattern from standard input and then searches for it in the file specified by filename
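
Alternatively, sed can pick the third line directly, which keeps the pipeline slightly shorter; this sketch is equivalent to the head and tail combination above:

$ sed -n '3p' pattern_list.txt | grep -f - filename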

Next, we’ll see how to split the output of a command into separate streams based on a specific pattern.

5. Splitting Files Into Streams

Sometimes, we may encounter scenarios where we need to split the output of a command into separate streams based on a specific pattern or separator. This can be useful for processing different parts of the data independently.

Markedly, we can use grep to split the output into two files based on a pattern match.

Let’s see how we can achieve this using grep in combination with other commands:

$ command | tee >(grep 'pattern1' > file1.txt) >(grep 'pattern2' > file2.txt) > /dev/null

In the above command:

  • command: represents the command whose output we want to split
  • tee: copies the output to multiple files and standard output
  • >(…): process substitution to redirect output to a command

In this case, the first grep command searches for pattern1. It then writes the matching lines to file1.txt.

The second grep command searches for pattern2. Thereafter, it writes the matching lines to file2.txt. Finally, /dev/null discards the combined output.

Let’s take an example here. Suppose we have the data.txt file with some content:

$ cat data.txt
apple
banana
grape
orange

Subsequently, we want to split the lines containing apple and grape into separate files:

$ cat data.txt | tee >(grep 'apple' > apple.txt) >(grep 'grape' > grape.txt) > /dev/null

Thus, after running the above command, apple.txt has the text apple. Similarly, grape.txt contains grape.
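
In the same way, grep -v can capture the non-matching lines, so the stream splits into matches and everything else; here’s a sketch based on the same data.txt:

$ cat data.txt | tee >(grep 'apple' > apple.txt) >(grep -v 'apple' > others.txt) > /dev/null

Here, apple.txt again contains apple, while others.txt collects banana, grape, and orange.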

6. Using File Descriptors With grep

In Bash, file descriptors 3, 4, 5, and so on represent additional streams apart from standard input (0), standard output (1), and standard error (2). In redirections, we refer to them with the &N syntax, for example <&3.

We can use these descriptors to pass data from different sources to commands like grep.

For instance, we can read the file data.txt with a script:

$ cat myscript.sh
#!/bin/bash
# Read the data.txt file using file descriptor 3
exec 3< data.txt
while IFS= read -r line <&3; do
  echo "Found at: $line" | grep 'orange'
done
# Closing the file descriptor 3
exec 3<&-

Let’s see how the script works step by step:

  • exec 3< data.txt: opens data.txt for reading and assigns it to file descriptor 3
  • while IFS= read -r line <&3; do: starts a loop that reads each line from data.txt and stores it in the variable line
  • echo "Found: $line" | grep 'orange': prints each line read from data.txt with a prefix and then pipes it to grep
  • exec 3<&-: closes file descriptor 3, releasing the associated file

Let’s make the script executable and then run it:

$ chmod +x myscript.sh && ./myscript.sh
Found: orange

Thus, it searches for the word orange in each line.

The loop continues until all lines from data.txt have been processed.
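
If per-line processing isn’t needed, grep can also read the whole descriptor at once by redirecting it to standard input. Here’s a minimal sketch of the same idea (the script name fd_grep.sh is arbitrary):

$ cat fd_grep.sh
#!/bin/bash
# Open data.txt on file descriptor 3
exec 3< data.txt
# Duplicate descriptor 3 onto grep's stdin and search the whole stream at once
grep 'orange' <&3
# Close file descriptor 3
exec 3<&-

$ chmod +x fd_grep.sh && ./fd_grep.sh
orange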

7. Conclusion

In this article, we’ve seen various ways to pipe a search term to the grep command.

We covered different sources for search terms. First, we used dynamically generated lists with various use cases. Then, we searched for patterns based on user input. Next, we used files as a source of patterns for grep. Further, we saw how to split an input into two output streams. Finally, we used a non-standard file descriptor to search for a pattern from a file.

In summary, pipes let us connect grep to other commands and data sources, thereby providing flexibility in composing search queries.

Furthermore, this enables better use of grep’s extensive text processing capabilities.