
Enterprise Server Stalls Due To Too Many Open Files

Applies to:
  • CrashPlan PROe

Overview

In certain circumstances, heavy user activity may cause Linux and OS X enterprise servers to slow or pause backup and sync activity and log the error message "too many open files." This occurs due to open file limits imposed by the Linux and OS X operating systems. This article explains how to correct this issue by increasing the open file limits for Linux and OS X.

Windows is not affected by this issue due to the way it handles open files in memory.

In rare cases, individual Linux and OS X user devices with very large backup file selections may also exceed the open files limit. For information about increasing the open files limit on user devices, see Backups Stall Due To Too Many Open Files.

Affects

Enterprise servers for Linux and OS X

Under the hood

Linux and OS X impose a limit on the number of files a process can have open at any one time. More accurately, the operating system imposes a limit on the number of file descriptors a process can have open at any one time, but for the purposes of this article, the difference isn't significant.
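
For example, on a Linux enterprise server you can see how many file descriptors a process currently has open by counting the entries under /proc/[PID]/fd (a quick illustration; substitute the actual process ID):
sudo ls /proc/[PID]/fd | wc -l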

The enterprise server may reach this limit if there is too much backup and/or file sync activity happening at one time. It typically manifests itself by preventing users from backing up to, or syncing with, the destination. Archive maintenance jobs may also stall.

Linux inotify limits
This issue is not related to the limits on inotify watches that occasionally arise on Linux.
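
If you need to rule out the inotify limit specifically, you can inspect it separately (shown for reference only; it is not the limit addressed in this article):
cat /proc/sys/fs/inotify/max_user_watches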

Diagnosing

If the enterprise server reaches the operating system's open file limit, logs from the destination include an error in com_backup42_app.log similar to the one below:

Caused by: java.io.FileNotFoundException: /2tb/backups/358017395638843928/cpbf0000000000000035259/cpbdf (Too many open files)

It's possible for this issue to manifest itself in different error messages, but the messages always contain the string "Too many open files." See Sending Logs To Enterprise Support for more information about working with log files.
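
To check whether this error has already been logged, you can search the log file directly (assuming the default Linux log location under /opt/proserver/log; adjust the path to match your installation):
grep "Too many open files" /opt/proserver/log/com_backup42_app.log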

Recommended solution

Linux

The sections below cover how to check and change the open files limit.

Step 1: Check the Enterprise Server open files limit

To check the open files limit in /proc/[PID]/limits, use the process ID of the enterprise server service.

  1. Use ps to find the process ID:
    ps aux | grep proserver
  2. Use cat to view the limit for the process ID:
    sudo cat /proc/[PID]/limits

In the following example, the enterprise server service has a PID of 4758 and a "Max open files" limit of 1024.

code42@ubuntu:~$ ps aux | grep proserver
root      4758 67.0 18.2 1554672 187280 pts/2  Sl   09:29   0:03 /opt/proserver/jre/bin/java -Dapp=CPServer -server -Dnetworkaddress.cache.ttl=300 -Djava.net.preferIPv4Stack=true -Ddrools.compiler=JANINO -Dfile.encoding=UTF-8 -Dc42.native.md5.enabled=false -XX:+DisableExplicitGC -XX:+UseAdaptiveGCBoundary -XX:PermSize=256m -XX:MaxPermSize=256m -Xss256k -Xms256m -Xmx1024m -jar /opt/proserver/lib/com.backup42.app.jar -prop conf/conf_proe.properties -config conf/conf_proe.groovy
erik      4771  0.0  0.0   7636   920 pts/2    S+   09:29   0:00 grep --color=auto proserver

code42@ubuntu:~$ sudo cat /proc/4758/limits
Limit                     Soft Limit           Hard Limit           Units    
Max cpu time              unlimited            unlimited            seconds  
Max file size             unlimited            unlimited            bytes    
Max data size             unlimited            unlimited            bytes    
Max stack size            8388608              unlimited            bytes    
Max core file size        0                    unlimited            bytes    
Max resident set          unlimited            unlimited            bytes    
Max processes             unlimited            unlimited            processes
Max open files            1024                 1024                 files    
Max locked memory         65536                65536                bytes    
Max address space         unlimited            unlimited            bytes    
Max file locks            unlimited            unlimited            locks    
Max pending signals       16382                16382                signals  
Max msgqueue size         819200               819200               bytes    
Max nice priority         20                   20                  
Max realtime priority     0                    0                   
Max realtime timeout      unlimited            unlimited            us     

Step 2: Increase the Enterprise Server open files limit

  1. Stop the enterprise server service by running the following command:
    sudo /opt/proserver/bin/proserver stop
  2. Open the /opt/proserver/.proserverrc file in a plain text editor.
    If this file does not already exist, create it.
  3. Add the following lines to the file:
    # Increase open files limit
    ulimit -n 409600
    
  4. Save the file.
  5. Run the following command to start the enterprise server service.
    sudo /opt/proserver/bin/proserver start
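
After the service restarts, you can confirm that the new limit took effect by repeating the check from Step 1 with the new process ID:
ps aux | grep proserver
sudo cat /proc/[PID]/limits | grep "Max open files"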
    

OS X

On OS X, the open file limits are governed by launchd and sysctl values.

  • launchd: Processes are started by launchd, which imposes resource constraints on any process it launches. These limits can be retrieved and set using the launchctl command (the default soft and hard values are 256 and unlimited, respectively). For OS X 10.7 and later, even though the default hard limit is "unlimited", you can't set the hard or soft limit to "unlimited" yourself.
  • sysctl: Operating system open files limits are set with sysctl. These limits can also impact running processes, so the launchd and sysctl open file limits should be set to the same values.

The sections below cover how to check and change these limits.

Step 1: Check the open files limits

Check the launchd and sysctl open files limits before you adjust them.

  1. Open the Terminal application.
  2. Check the launchd open files limit by running the following command:
    sudo launchctl limit maxfiles
    
    This command returns two values for each resource, a "soft" and a "hard" limit (example displayed below). The soft limit is the value currently enforced; once the process reaches it, attempts to open additional files fail. The hard limit is the ceiling to which the soft limit can be raised.
    maxfiles    256            unlimited 
    
  3. Check the sysctl open file limits by running the following command:
    sudo sysctl -a | grep files

    This command returns the kern.maxfiles and kern.maxfilesperproc limits (example displayed below).
    kern.maxfiles: 12288
    kern.maxfilesperproc: 10240
    kern.num_files: 1521
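
You can also see approximately how many files the enterprise server process currently has open, and compare that number to kern.maxfilesperproc (an optional check; substitute the service's actual process ID):
sudo lsof -p [PID] | wc -l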
    

Step 2: Increase the launchd open files limit

Depending on the version of OS X, the method for increasing the open file limits is different. Use the instructions below for your version of OS X.

OS X 10.9 and earlier

Set the launchd soft and hard limits to 409600.

  1. Open the Terminal application.
  2. Run the following command to set the soft and hard limits to 409600:
    sudo launchctl limit maxfiles 409600 409600
  3. To make the new limits persist through system restarts, create or edit the /etc/launchd.conf file in a plain text editor and add the following line:
    limit maxfiles 409600 409600
    
  4. Make sure the permissions and file/group ownership on this file are similar to those around it. You can also set these values on a per-user basis by editing or creating a file named $HOME/.launchd.conf. This can be useful if the enterprise server is installed "as user".
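
You can confirm that the new values took effect by re-running the check from Step 1:
sudo launchctl limit maxfiles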

OS X 10.10 and later

Set the launchd soft and hard limits to 409600.

  1. Open the Terminal application.
  2. Run the following command to set the soft and hard limits to 409600:
    sudo launchctl limit maxfiles 409600 409600
  3. To make the new limits persist through system restarts, create two configuration files:
    1. Create a /Library/LaunchDaemons/limit.maxfiles.plist file in a plain text editor and add the following lines:
      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
          <dict>
            <key>Label</key>
              <string>limit.maxfiles</string>
            <key>ProgramArguments</key>
              <array>
                <string>launchctl</string>
                <string>limit</string>
                <string>maxfiles</string>
                <string>409600</string>
                <string>409600</string>
              </array>
            <key>RunAtLoad</key>
              <true/>
            <key>ServiceIPC</key>
              <false/>
          </dict>
        </plist>
      
    2. Create a /Library/LaunchDaemons/limit.maxproc.plist file in a plain text editor and add the following lines:
      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
          <dict>
            <key>Label</key>
              <string>limit.maxproc</string>
            <key>ProgramArguments</key>
              <array>
                <string>launchctl</string>
                <string>limit</string>
                <string>maxproc</string>
                <string>2048</string>
                <string>2048</string>
              </array>
            <key>RunAtLoad</key>
              <true/>
            <key>ServiceIPC</key>
              <false/>
          </dict>
        </plist>
      
  4. Make sure the permissions and file/group ownership on these files are similar to those of the other files in /Library/LaunchDaemons. Both plist files must be owned by root with -rw-r--r-- (644) permissions. You can set the correct permissions by running the following commands:
    sudo chmod 644 /Library/LaunchDaemons/limit.maxfiles.plist
    sudo chmod 644 /Library/LaunchDaemons/limit.maxproc.plist
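
If the ownership also needs correcting, it can be set with chown (a supplementary example; files in /Library/LaunchDaemons are normally owned by root:wheel):
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chown root:wheel /Library/LaunchDaemons/limit.maxproc.plist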
    

Step 3: Increase the sysctl open files limits

On OS X, the launchd open file limit cannot exceed the sysctl open file limits. Set both sysctl open file limits to 409600.

  1. Open the Terminal application.
  2. Run the following commands to set the sysctl open files limits to 409600:
    sudo sysctl -w kern.maxfiles=409600
    sudo sysctl -w kern.maxfilesperproc=409600
  3. To make the new limits persist through system restarts, open /etc/sysctl.conf in a plain text editor and add the following lines:
    kern.maxfiles=409600
    kern.maxfilesperproc=409600
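
To confirm that the new values are in effect, you can query them directly (an optional check):
sudo sysctl kern.maxfiles kern.maxfilesperproc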

Step 4: Configure JVM open files limit

On OS X versions 10.9.5 and later, the Java virtual machine maintains its own open file limit. Add a Java option to the enterprise server's .plist file to set the Java VM's open file limit to match the enterprise server's open file limit.

  1. In a plain text editor, open the .plist file: /Library/LaunchDaemons/com.crashplan.proserver.plist
  2. Locate the following line:
    <string>-XX:+DisableExplicitGC</string>
    
  3. After that line, add the following line:
    <string>-XX:-MaxFDLimit</string>
    
  4. Save the .plist file.
  5. Restart the enterprise server.
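
After the restart, you can confirm that the new Java option is active by checking the running Java process's arguments (a quick, optional check):
ps aux | grep MaxFDLimit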