How to Fix MongoDB Segmentation Fault on Ubuntu 22.04
Encountering a “Segmentation Fault” in a critical service like MongoDB is a frustrating experience for any DevOps engineer. On Ubuntu 22.04, this error often points to system-level resource limits rather than a bug in MongoDB itself. This guide walks you through diagnosing and resolving the most common causes.
1. The Root Cause: Insufficient System Resource Limits (Ulimits)
MongoDB is a resource-intensive application, especially when running a production workload. It requires a significant number of file descriptors (for data files, journal files, network connections, etc.) and processes/threads to operate efficiently.
The most common reason for a Segmentation Fault on Ubuntu 22.04 related to MongoDB is insufficient system resource limits (ulimits), specifically for:
- `nofile` (number of open files): MongoDB needs to open many files concurrently. If this limit is too low, the database may fail to open new data files, index files, or even network sockets, leading to an unexpected state and potentially a segmentation fault.
- `nproc` (number of processes/threads): MongoDB uses multiple threads for various operations (e.g., handling client connections, background tasks, replication). If the allowed number of processes or threads for the `mongodb` user is too low, the database engine can hit this ceiling and crash.
When MongoDB attempts to acquire resources beyond these limits, the underlying system calls fail in ways the process is not designed to handle gracefully under default configurations. The result can be an illegal memory access or an unexpected internal state, which manifests as a segmentation fault.
Additionally, while less likely to cause a direct segmentation fault on startup, an improperly configured vm.max_map_count (which affects memory mapping for the WiredTiger storage engine) can contribute to instability, memory-related issues, or slow performance that could exacerbate underlying ulimit problems.
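As a quick first diagnostic, you can compare your session's effective limits against the minimums used throughout this guide. This is a minimal sketch; the `64000` threshold is the commonly recommended value referenced later, not a hard requirement:

```sh
# Compare the session's effective soft limits against MongoDB's recommended minimums.
soft_nofile=$(ulimit -Sn)
soft_nproc=$(ulimit -Su)
echo "nofile: current=$soft_nofile recommended>=64000"
echo "nproc:  current=$soft_nproc recommended>=64000"
# "unlimited" is fine; only warn on a numeric value below the recommendation.
if [ "$soft_nofile" != "unlimited" ] && [ "$soft_nofile" -lt 64000 ]; then
  echo "WARNING: nofile is low enough to destabilize MongoDB"
fi
```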
2. Quick Fix (CLI)
Before making permanent changes, you can attempt to increase the ulimits for your current shell session and restart MongoDB. This helps confirm if resource limits are indeed the problem.
1. Check Current Ulimits: First, verify the current effective ulimits in your session (these will likely be too low for MongoDB):

   ```sh
   ulimit -n   # max number of open file descriptors
   ulimit -u   # max number of user processes
   ```

   You might see values like `1024` for `nofile` and `8192` for `nproc`.

2. Increase Ulimits Temporarily: Increase the limits for your current session. Remember, these changes are not persistent and only apply to processes started from this shell.

   ```sh
   sudo bash -c "ulimit -n 64000; ulimit -u 64000; exec systemctl restart mongod"
   ```

   We use `sudo bash -c` so that the `ulimit` calls and `systemctl restart` run in the same root shell. `64000` is a commonly recommended value for MongoDB; adjust it to your workload. Note that systemd starts `mongod` itself and applies the limits defined in its service unit, so treat this step as a way to confirm the diagnosis; the permanent fix is in section 3.

3. Verify MongoDB Status:

   ```sh
   sudo systemctl status mongod
   ```

   If MongoDB now starts successfully, it strongly indicates that resource limits were the cause of the segmentation fault.
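To see which limits the `mongod` process actually inherited (rather than what your shell session has), you can read the kernel's per-process view under `/proc`. A small sketch; it falls back to the current shell's PID when `mongod` is not running, so it always prints something:

```sh
# Read the limits the kernel enforces on a specific process.
# Falls back to the current shell when mongod is not running.
pid=$(pgrep -x mongod 2>/dev/null || echo $$)
grep -E 'Max (open files|processes)' "/proc/$pid/limits"
```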
3. Configuration Check: Making Changes Permanent
To ensure MongoDB runs stably across reboots and avoids future segmentation faults, you need to configure persistent ulimits and other kernel parameters.
A. Configure System-Wide Ulimits for the mongodb User
Edit the /etc/security/limits.conf file to set persistent ulimits for the mongodb user.
1. Open `limits.conf`:

   ```sh
   sudo nano /etc/security/limits.conf
   ```

2. Add/Modify Entries: Add the following lines at the end of the file. Ensure you set both `soft` and `hard` limits:

   ```
   # MongoDB ulimits
   mongodb soft nofile 64000
   mongodb hard nofile 64000
   mongodb soft nproc 64000
   mongodb hard nproc 64000
   ```

   - `soft` limit: the currently enforced limit for a user; processes can raise it up to the hard limit.
   - `hard` limit: the maximum value a soft limit can take; only root can increase a hard limit.
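To confirm that `pam_limits` will actually apply the new values, you can open a session as the `mongodb` user and print its limits. A sketch, assuming the `mongodb` user created by the Ubuntu package exists; it falls back to the current user otherwise:

```sh
# Print the limits a fresh shell for the mongodb user would receive.
# (pam_limits, loaded by sudo's PAM stack on Ubuntu, applies limits.conf.)
if id mongodb >/dev/null 2>&1; then
  limits=$(sudo -u mongodb bash -c 'echo "nofile=$(ulimit -n) nproc=$(ulimit -u)"')
else
  # Fallback: report the current user's limits instead.
  limits="nofile=$(ulimit -n) nproc=$(ulimit -u)"
fi
echo "$limits"
```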
B. Configure systemd Service Ulimits (Crucial for Ubuntu 22.04)
systemd often overrides /etc/security/limits.conf for services it manages. You must explicitly set ulimits within the mongod.service unit file.
1. Open `mongod.service`: It's best practice to create an override file rather than editing the main service file directly, but for a direct fix you can edit the primary file:

   ```sh
   sudo nano /lib/systemd/system/mongod.service
   ```

   (Alternatively, for an override: `sudo systemctl edit mongod.service`.)

2. Add/Modify `LimitNOFILE` and `LimitNPROC`: Locate the `[Service]` section and add or modify the following lines:

   ```ini
   [Service]
   # ... existing settings ...
   LimitNOFILE=64000
   LimitNPROC=64000
   # ... other settings ...
   ```

3. Reload the `systemd` Daemon: After modifying the service file, you must reload the `systemd` daemon to apply the changes:

   ```sh
   sudo systemctl daemon-reload
   ```
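If you take the override route instead, `sudo systemctl edit mongod.service` opens a drop-in file that systemd merges over the shipped unit and that survives package upgrades. A sketch of what that drop-in would contain:

```ini
# /etc/systemd/system/mongod.service.d/override.conf
# (created by `sudo systemctl edit mongod.service`; survives package upgrades)
[Service]
LimitNOFILE=64000
LimitNPROC=64000
```

Recent systemd versions reload the daemon automatically after `systemctl edit` exits, but running `sudo systemctl daemon-reload` explicitly does no harm.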
C. Configure vm.max_map_count (WiredTiger Requirement)
The WiredTiger storage engine used by MongoDB requires a higher vm.max_map_count than the default.
1. Edit `sysctl.conf`:

   ```sh
   sudo nano /etc/sysctl.conf
   ```

2. Add/Modify Entry: Add the following line to the end of the file:

   ```
   vm.max_map_count=262144
   ```

3. Apply Changes:

   ```sh
   sudo sysctl -p
   ```
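To confirm the kernel accepted the new value without a reboot, read it back from `/proc`. A minimal sketch that reports a warning rather than failing when the value is still at the default:

```sh
# Read the live value back from the kernel and compare it to the recommendation.
val=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count=$val"
if [ "$val" -ge 262144 ]; then
  echo "OK for WiredTiger"
else
  echo "still below the recommended 262144"
fi
```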
D. Disable Transparent Huge Pages (THP)
While not a direct cause of segmentation faults, THP can negatively impact MongoDB performance and stability. It’s recommended to disable it.
1. Create a `systemd` service for THP disabling:

   ```sh
   sudo nano /etc/systemd/system/disable-thp.service
   ```

2. Add the following content:

   ```ini
   [Unit]
   Description=Disable Transparent Huge Pages (THP) for MongoDB
   Before=mongod.service

   [Service]
   Type=oneshot
   ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"

   [Install]
   WantedBy=multi-user.target
   ```

3. Reload the `systemd` daemon and enable the service:

   ```sh
   sudo systemctl daemon-reload
   sudo systemctl enable disable-thp.service
   ```
E. Restart MongoDB
After all configuration changes, restart the MongoDB service to ensure they are applied.
```sh
sudo systemctl restart mongod
```
4. Verification
After applying the configuration changes and restarting MongoDB, it’s crucial to verify that the new limits are active and recognized by MongoDB.
1. Check MongoDB Service Status:

   ```sh
   sudo systemctl status mongod
   ```

   Ensure it shows `active (running)`.

2. Connect to the MongoDB Shell to confirm the server accepts connections:

   ```sh
   mongosh   # use the legacy `mongo` shell on MongoDB 4.x and earlier
   ```

3. Verify the Limits Applied to `mongod`: The kernel's per-process view is the authoritative record of the limits the daemon actually inherited:

   ```sh
   cat /proc/$(pidof mongod)/limits
   ```

   The `Max open files` and `Max processes` rows should reflect the `64000` values you set. (`mongod` also logs a warning at startup when its soft rlimits are too low.)

4. Verify `vm.max_map_count`:

   ```sh
   cat /proc/sys/vm/max_map_count
   ```

   This should output `262144`.

5. Verify THP Status:

   ```sh
   cat /sys/kernel/mm/transparent_hugepage/enabled
   cat /sys/kernel/mm/transparent_hugepage/defrag
   ```

   Both should show `[never]` as the bracketed (active) option, e.g. `always madvise [never]`.
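The individual checks above can be collected into a single post-fix health check. This is a minimal sketch that degrades gracefully when `mongod` is not running or THP is not exposed by the kernel:

```sh
#!/bin/sh
# Post-fix health check: kernel parameters, THP mode, and mongod's live limits.
echo "vm.max_map_count: $(cat /proc/sys/vm/max_map_count)"
for f in enabled defrag; do
  p="/sys/kernel/mm/transparent_hugepage/$f"
  [ -r "$p" ] && echo "THP $f: $(cat "$p")" || echo "THP $f: not exposed on this kernel"
done
pid=$(pgrep -x mongod 2>/dev/null)
if [ -n "$pid" ]; then
  grep -E 'Max (open files|processes)' "/proc/$pid/limits"
else
  echo "mongod not running; start it and re-run this check"
fi
```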
If all checks pass and MongoDB remains stable, you’ve successfully resolved the segmentation fault due to resource limitations. Always monitor your MongoDB instance for continued stability and performance.