1. Introduction

Similar to utilities like Apport and systemd-coredump, the older Automatic Bug Reporting Tool (ABRT) is a set of utilities for handling core dumps. The latter are generated when irrecoverable errors like segmentation faults occur.

In this tutorial, we explore the abrtd daemon and ways to prevent it from filling the system with data. First, we get an overview of ABRT. After that, we go through its deployment steps. Next, we check out a basic usage scenario. Finally, we discuss ABRT storage maintenance options.

Notably, ABRT is specific to older RPM-based Linux distributions and is now largely replaced by systemd-coredump.

We tested the code in this tutorial on CentOS 7 with GNU Bash 4.2.46. It should work in most POSIX-compliant environments unless otherwise specified.

2. Automatic Bug Reporting Tool (ABRT)

Like its modern counterparts, the Automatic Bug Reporting Tool (ABRT) is a service and toolkit that provides a way to collect and organize data for bug reports.

As such, it has fairly common features:

  • employs its own hooks to intercept crashes from different languages (C and C++, Python, Ruby, Java)
  • searches system logs for strings that hint at crashes in the hardware, kernel, or display server
  • detects core dumps from other services

In particular, if any issues are detected, ABRT places relevant data into files within a subdirectory in one of two main directories:

  • /var/spool/abrt
  • /var/tmp/abrt

Let’s see some example files we might find under a new incident subdirectory:

+-------------------+-----------------------------------------+
| Name              | Function                                |
+-------------------+-----------------------------------------+
| backtrace         | active function calls                   |
| coredump          | core dump from compiled programs        |
| count             | number of occurrences                   |
| dso_list          | dynamic libraries loaded                |
| executable        | absolute path of problematic executable |
| package           | RPM package of executable               |
| time              | UNIX timestamp of first occurrence      |
| var_log_messages  | related system log lines                |
+-------------------+-----------------------------------------+

Separately, ABRT has its own command line interface (CLI) and graphical user interface (GUI) tools for viewing and sending reports.

3. Deploy ABRT

Although some Linux installations contain ABRT, others may not include it by default. So, let’s install and configure the service and toolkit.

3.1. Installation

Two main packages provide the ABRT bundle:

  • abrt-cli: command line tools
  • abrt-desktop: graphical desktop integration

So, let’s install abrt-cli via yum:

$ yum install abrt-cli

At this point, we should have the basic utilities.
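To confirm what actually got installed, we can query the RPM database; the exact package list depends on what yum pulled in as dependencies:

$ rpm -qa 'abrt*'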

Let’s start the service. Since our CentOS 7 test system runs systemd, we use systemctl:

$ systemctl start abrtd

On older init systems without systemd, service abrtd start achieves the same.
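
To have ABRT come back after a reboot, we can also enable the units; the exact set of helper services (such as abrt-ccpp for compiled programs and abrt-oops for kernel oopses) varies by version:

$ systemctl enable abrtd abrt-ccpp abrt-oops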

Notably, once the relevant ABRT services are running, a new pattern is automatically set in /proc/sys/kernel/core_pattern:

$ cat /proc/sys/kernel/core_pattern
|/usr/libexec/abrt-hook-ccpp %s %c %p %u %g %t e %P %I %h

Thus, new core dumps are redirected to an appropriate ABRT hook.

3.2. Enable Core Dump Generation

However, to ensure core dumps actually do get generated, we might need to edit the /etc/security/limits.conf file:

$ cat /etc/security/limits.conf
[...]
*  soft  core  unlimited
[...]

In particular, we modify the commented line that sets the soft limit for the maximum core file size. After uncommenting the line, we change the value from the default of 0 to unlimited.
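
Since limits.conf changes usually only apply to new login sessions, we can also verify and, if needed, raise the limit for the current shell via ulimit:

$ ulimit -c
0
$ ulimit -c unlimited
$ ulimit -c
unlimited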

3.3. Process Custom Executables

Critically, if we want to enable core dump generation for executables that aren’t part of packages, we often need to change another file as well:

$ cat /etc/abrt/abrt-action-save-package-data.conf
[...]
# Process crashes in executables which do not belong to any package?
#
ProcessUnpackaged = yes
[...]

Here, we edit /etc/abrt/abrt-action-save-package-data.conf to change ProcessUnpackaged to yes.
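
As a minimal sketch, assuming the stock file still contains the default ProcessUnpackaged = no line, we can apply the change non-interactively and restart the daemon for good measure:

$ sed -i 's/^ProcessUnpackaged = no$/ProcessUnpackaged = yes/' /etc/abrt/abrt-action-save-package-data.conf
$ systemctl restart abrtd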

3.4. General Configuration

In case we need to set other ABRT options, we can edit files in one of three main configuration locations:

  • /etc/abrt/
  • /etc/libreport/
  • $HOME/.config/abrt/

For instance, let’s check part of the /etc/abrt/abrt.conf file:

$ cat /etc/abrt/abrt.conf
# Enable this if you want abrtd to auto-unpack crashdump tarballs which appear
# in this directory (for example, uploaded via ftp, scp etc).
# Note: you must ensure that whatever directory you specify here exists
# and is writable for abrtd. abrtd will not create it automatically.
#
#WatchCrashdumpArchiveDir = /var/spool/abrt-upload

# Max size for crash storage [MiB] or 0 for unlimited
#
MaxCrashReportsSize = 5000

# Specify where you want to store coredumps and all files which are needed for
# reporting. (default:/var/spool/abrt)
#
# Changing dump location could cause problems with SELinux. See man abrt_selinux(8).
#
#DumpLocation = /var/spool/abrt
[...]

Here, we see options such as the directory watched for uploaded crash dump archives (for online reporting), the maximum size of crash report storage, and the dump location for new reports.
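
In addition, to quickly see only the settings that are actually in effect in any of these files, we can filter out comments and blank lines:

$ grep -Ev '^#|^$' /etc/abrt/abrt.conf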

4. Using ABRT

Having installed and configured the package, let’s understand how we can employ ABRT in a typical scenario.

4.1. Generate Core Dump
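
For testing, we need an executable that crashes. Assuming gcc is available, we can build a minimal binary that dereferences a NULL pointer (the file names here are arbitrary):

$ printf 'int main(void) { int *p = 0; return *p; }\n' > segfault.c
$ gcc -o segfault.bin segfault.c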

To begin with, we run a problematic executable and check the result:

$ ./segfault.bin
Segmentation fault (core dumped)

Now, we can check the contents of the /var/spool/abrt directory via ls:

$ ls -1 /var/spool/abrt/
ccpp-2024-01-10-10:00:01-666
last-ccpp

Thus, we can see the new ccpp-2024-01-10-10:00:01-666 directory: it starts with the ccpp prefix (for C and C++ crashes), continues with the 2024-01-10-10:00:01 date and time, and ends with 666, the process ID (PID) of the offending process.

4.2. ABRT Report Files

In it, we find a number of files, some of which we already know the function of:

$  ls /var/spool/abrt/ccpp-2024-01-10-10\:00\:01-666/
abrt_version  coredump    exploitable      limits     os_release       runlevel  uuid
analyzer      count       global_pid       machineid  pid              time      var_log_messages
architecture  environ     hostname         maps       proc_pid_status  type
cgroup        event_log   kernel           open_fds   pwd              uid
cmdline       executable  last_occurrence  os_info    reason           username
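
For a quick look at some of the smaller metadata files, we can print several of them at once; head labels each file, and the exact contents vary per crash:

$ cd /var/spool/abrt/ccpp-2024-01-10-10\:00\:01-666/
$ head count executable reason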

Further, let’s show the sizes of all files and a summary:

$ du --total --bytes * | sort --numeric-sort
0       event_log
1       count
1       uid
4       analyzer
4       global_pid
4       hostname
4       pid
4       runlevel
4       type
5       pwd
5       username
6       abrt_version
6       architecture
10      last_occurrence
10      time
14      cmdline
18      executable
28      kernel
30      reason
36      os_release
40      uuid
125     exploitable
135     machineid
138     open_fds
187     cgroup
337     maps
393     os_info
1209    proc_pid_status
1323    limits
2016    environ
2354    var_log_messages
155648  coredump
164099  total

In this case, we use du to show the actual size in --bytes (-b) of each file, along with the --total. After that, we pipe the result to sort, so we can conveniently see where the bulk of the information is: in the coredump.

In practice, the total size is around 165KB, with the coredump accounting for about 155KB, or roughly 95% of it.
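
Similarly, to check how much space all reports take together, we can summarize the entire spool directory:

$ du -sh /var/spool/abrt/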

4.3. Crash Analysis

Although we wouldn’t be able to employ them in our scenario, there are several ABRT CLI tools for analyzing core dumps:

  • abrt-action-analyze-backtrace
  • abrt-action-analyze-python
  • abrt-action-analyze-c
  • abrt-action-analyze-vmcore
  • abrt-action-analyze-ccpp-local
  • abrt-action-analyze-vulnerability
  • abrt-action-analyze-core
  • abrt-action-analyze-xorg
  • abrt-action-analyze-oops

These require a package-related executable for the best results. Which one we use depends on the crash and executable type.

For example, for package-related ccpp report directories, we can use abrt-action-analyze-ccpp-local. Each command generates everything needed for a bug report, including a backtrace.
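
Separately, to enumerate all recorded problems before deciding what to analyze or report, we can use the abrt-cli tool:

$ abrt-cli list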

5. ABRT Size Controls

Depending on our use cases, the total volume of dumps can grow quite large.

Let’s see how we can manage their size.

5.1. Report Sizes

As we saw, a simple non-package executable generated more than 160KB of data. Thus, if we do regular development work or run a server for testing, the /var/spool/abrt/ or /var/tmp/abrt/ directories can grow a lot.

To limit this, we can use the MaxCrashReportsSize option in the /etc/abrt/abrt.conf file:

$ cat /etc/abrt/abrt.conf
[...]
# Max size for crash storage [MiB] or 0 for unlimited
#
MaxCrashReportsSize = 50

In this case, we limit the total crash report storage to 50MB. The issue is that once this limit is reached, the system automatically deletes the oldest report to make space for a newer one.

So, a single huge report may cause all older data in the directory to be deleted.

5.2. Deletion by Date

To exert finer control over the ABRT report removal, we can use a custom executable script:

$ cat abrt-delete.sh
#!/usr/bin/env bash
set -e
function startabrt()
{
  systemctl start abrtd
  systemctl start abrt-oops
}

trap startabrt EXIT

systemctl stop abrtd
systemctl stop abrt-oops
find /var/spool/abrt/ -mindepth 1 -maxdepth 1 -type d -ctime +1 -exec abrt-cli rm {} \;
startabrt
$ chmod +x abrt-delete.sh

Let’s break down this script:

  • set -e forces the script to exit upon any error
  • function startabrt() starts the abrtd and abrt-oops (kernel oops detection) services
  • trap startabrt EXIT ensures we always leave the services started, regardless of the EXIT reason
  • systemctl controls the status of the relevant services

On the other hand, the find command looks for -type [d]irectory entries directly under /var/spool/abrt (-mindepth 1 -maxdepth 1) whose status change time (-ctime) is more than one (+1) day old. For any that are found, the rm subcommand of the abrt-cli tool deletes them.
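
To preview which report directories the script would remove, we can run the same find expression without the -exec action:

$ find /var/spool/abrt/ -mindepth 1 -maxdepth 1 -type d -ctime +1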

Of course, we can use a scheduler like cron to automate how often this process runs.

For example, we can use the /etc/cron.daily/ directory:

$ cp abrt-delete.sh /etc/cron.daily/

Now, abrt-delete.sh checks for reports to delete and removes any matching ones every day.
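
Alternatively, assuming we place the script somewhere like /usr/local/bin/, we can add an explicit entry to the root crontab (via crontab -e) instead; the time of day here is arbitrary:

30 3 * * * /usr/local/bin/abrt-delete.sh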

6. Summary

In this article, we delved into the Automatic Bug Reporting Tool (ABRT).

In conclusion, due to the quantity of information produced by core dumps and ABRT reports, it’s good practice to know how the service works, so we can control its data storage.