[Banner: Written by a Human, Not by AI]

Debugging Core Dumps on Linux

A short guide on how to deal with (i.e., debug) core dumps on Linux.

Core Dumps - What are They?

A core dump is a snapshot of a process’s memory and other state at the point when it crashed or terminated abnormally (think of the usual “segmentation fault” case).

The term ‘dump’ is used across many subdomains of IT to denote the output of a large amount of data (usually in raw format), typically produced when an anomaly occurs, for the purpose of further detailed examination.

Enable/Configure Core Dumps on Linux

Core dumps are usually disabled by default (come to think of it, it probably has to do with their size and the frequency at which they might occur). To check whether your system is configured to allow core dumps:

ulimit -a

You may get an output resembling this:

real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) 0
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) unlimited
pending signals                     (-i) 63171
max locked memory           (kbytes, -l) 2031556
max memory size             (kbytes, -m) unlimited
open files                          (-n) 1024
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 0
stack size                  (kbytes, -s) 8192
cpu time                   (seconds, -t) unlimited
max user processes                  (-u) 63171
virtual memory              (kbytes, -v) unlimited
file locks                          (-x) unlimited

The line we care about is: core file size (blocks, -c) 0 which can be interpreted as: “maximum allowed size of core files is 0 blocks”. We have to change that in order to get core dumps.

To allow unlimited core dump sizes for the current session only, do:

ulimit -c unlimited
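The limit is per-shell and inherited by child processes, so a quick way to verify the change from the same shell is to query just the core file limit again:

```shell
ulimit -c unlimited   # raise the soft limit for this shell session
ulimit -c             # query only the core file size limit; should now print "unlimited"
```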

And to make this permanent (something I usually do NOT recommend doing - see the note below), you have to change/add these entries in /etc/security/limits.conf:

*               soft    core            unlimited
*               hard    core            unlimited

Note: there is a good reason why core dump limits are NOT set to unlimited by default. Core dumps may include sensitive information (passwords, keys, or user data that happened to be in memory) and may introduce security risks. They are also very large, so they can rapidly fill up your storage.

Set/Change Core Dumps Location on Linux

By default core dumps are written to the current working directory. I usually put dumps (dumps in general, including core dumps) in /var/dumps/. This way I can use a cleanup script that automatically deletes dumps older than a custom-defined age (say, 3 days). To set/change the core dumps location on your system:

sysctl -w kernel.core_pattern="/var/dumps/core.%e.%p.%t"

sysctl is used to modify kernel parameters at runtime. To set parameters permanently, you should write them in /etc/sysctl.conf. In our case, if we want to permanently set a custom directory we do:

kernel.core_pattern=/var/dumps/core.%e.%p.%t
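After editing the file, run sysctl -p as root to re-read /etc/sysctl.conf without a reboot. You can confirm the pattern the kernel is actually using at any time (no privileges needed) straight from procfs:

```shell
# the live value of kernel.core_pattern, as the kernel sees it
cat /proc/sys/kernel/core_pattern
```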

Note: in a production setting, make sure that this is a protected directory.
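One sketch of such a setup (run as root; /var/dumps matches the pattern above): the directory must be writable by the user of any crashing process, but a sticky, non-listable mode keeps users from browsing or deleting each other's dumps:

```shell
# create the directory the kernel will write core files into
mkdir -p /var/dumps
# rwx for root, write+traverse (but no listing) for everyone else,
# plus the sticky bit so users cannot delete each other's files
chmod 1773 /var/dumps
```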

Core-Dumps-Triggering Examples

Segmentation Fault (Invalid Pointer Assignment)

int main() {
  int *ptr = nullptr;
  *ptr = 50; /* triggers a segfault */
  return 0;
}

Then compile using (assuming you named the file segfault.cpp):

g++ segfault.cpp -o segfault

You should see Segmentation fault (core dumped) and you should find the dumped core at /var/dumps/ (or whatever custom directory you have set).

In case you only see Segmentation fault (without (core dumped)), then most probably something went wrong and the OS was not able (or not allowed) to write the core dump.

On my system, this trivial program by itself generated a dump file of 304KB.

Generating a Core Dump File from a Running Process

To generate a core dump from an already running process:

gcore <PID>
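For example (gcore ships with gdb; the sleep process below is a hypothetical stand-in for your target, and -o sets the output file prefix):

```shell
sleep 300 &                  # hypothetical long-running target process
pid=$!
gcore -o /tmp/livecore $pid  # attach, dump, detach; writes /tmp/livecore.<pid>
ls -lh /tmp/livecore.$pid    # the target keeps running afterwards
kill $pid
```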

Analyzing Core Dumps

I usually just use gdb (the GNU debugger) to analyze core dumps - it usually goes straight to the point by providing the location at which, for instance, a segmentation fault was generated:

gdb ./segfault /var/dumps/core.segfault.152400.1762994873

You may get an output as follows:

Core was generated by `./segfault'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00005c3e912f113d in ?? ()

Well, that sucks. That is not very helpful. If you want to make your life easier, generate debugging symbols when compiling the segfault program (or any program you want to debug):

g++ -g segfault.cpp -o segfault

And with that, we get a much more human-readable indication of what caused the segmentation fault:

Core was generated by `./segfault'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00005994fbf5813d in main () at segfault.cpp:5
5         *ptr = 50; /* triggers a segfault */
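Once symbols are available, you can also script this from the shell instead of driving gdb interactively; a sketch using gdb's batch mode (the binary and core paths are the ones from earlier in this guide):

```shell
# print a full backtrace (with local variables) from the core file, then exit
gdb -batch -ex "bt full" ./segfault /var/dumps/core.segfault.152400.1762994873
```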

But, to be fair, that was a trivial example. Let's see whether the above helps in a more complex program: I am going to figure out why a segfault happened in my custom-made Vulkan engine (still not industry-standard software - that would probably require some advanced multi-threaded analysis).

For that simple engine, the core dump had a size of more than 220M! Later on we will see a couple of automation scripts that you can add to your system to watch and clean core dumps.

I got the following output for my segfault’ed Vulkan engine:

#0  0x00007113de70125a in ?? () from /usr/lib/x86_64-linux-gnu/libvulkan_lvp.so

The segmentation fault actually happened inside the shared library libvulkan_lvp.so (Mesa's Lavapipe software Vulkan implementation), which means that my code did not cause the segfault directly but rather through the misuse of some Vulkan API call(s).

System-Wide Core Dumps Management

If you have the systemd-coredump utility installed, you can get a list of processes that have generated a core dump using the following (note that systemd-coredump works by pointing kernel.core_pattern at its own handler, so installing it overrides the custom pattern we set earlier):

coredumpctl list

Which might output something like this:

TIME                           PID  UID  GID SIG     COREFILE EXE                                                                SI>
Thu 2025-11-13 05:53:10 CET 183994 1000 1000 SIGSEGV present  /home/walcht/walcht/_gists/debugging_core_dumps_on_linux/segfault 17.

From which you pick the PID of the process you want to debug and run:

coredumpctl gdb <PID>
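Two other subcommands worth knowing: coredumpctl info prints the dump's metadata (signal, timestamp, a backtrace summary), and coredumpctl dump extracts the stored core file to disk. The PID below is a placeholder taken from the coredumpctl list output:

```shell
pid=1234   # placeholder: PID column from 'coredumpctl list'
coredumpctl info "$pid"                        # metadata and backtrace summary
coredumpctl dump "$pid" --output=/tmp/my.core  # extract the raw core file
```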

Automate Core Dumps Removal

You can automate core dumps removal by running the following script:

#!/bin/bash

CORE_DUMPS_DIR="/var/dumps"
OLDER_THAN_DAYS=1

find "$CORE_DUMPS_DIR" -name "core.*" -type f -mtime +"$OLDER_THAN_DAYS" -exec rm -f {} \;
echo "removed core dumps older than $OLDER_THAN_DAYS days"

TODO: add description on how to execute a script periodically using systemd
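In the meantime, here is a sketch of that systemd approach: a oneshot service plus a timer that triggers it daily, assuming the cleanup script above is installed at /usr/local/bin/clean-core-dumps.sh (a path chosen here purely for illustration):

```ini
# /etc/systemd/system/clean-core-dumps.service
[Unit]
Description=Remove old core dumps

[Service]
Type=oneshot
ExecStart=/usr/local/bin/clean-core-dumps.sh

# /etc/systemd/system/clean-core-dumps.timer
[Unit]
Description=Daily core dump cleanup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

Then activate it with systemctl daemon-reload followed by systemctl enable --now clean-core-dumps.timer.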