As we have already discussed, when it is first created, an Apache child process usually has a large fraction of its memory shared with its parent. During the child process's life some of its data structures are modified and a part of its memory becomes unshared (pages become "dirty"), leading to an increase in memory consumption. You will remember that the MaxRequestsPerChild directive allows you to specify the number of requests a child process should serve before it is killed. One way to limit the memory consumption of a process is to kill it and let Apache replace it with a newly started process, which again will have most of its memory shared with the Apache parent. The new child process will then serve requests, and eventually the cycle will be repeated.

This is a fairly crude means of limiting unshared memory, and you will probably need to tune MaxRequestsPerChild, eventually finding an optimum value. If, as is likely, your service is undergoing constant changes, this is an inconvenient solution. You'll have to retune this number again and again to adapt to the ever-changing code base.
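For reference, the directive goes in httpd.conf; the value 10000 below is only an illustrative starting point, not a recommendation:

```apache
# httpd.conf -- recycle each child after it has served this many requests.
# 10000 is an illustrative value; it must be tuned for your code base.
MaxRequestsPerChild 10000
```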

You really want a guardian that watches the shared memory size and kills the process if that size drops below some limit. This way, processes will not be killed unnecessarily.

To set a shared memory lower limit of 4 MB using Apache::GTopLimit, add the following code to your startup file:

use Apache::GTopLimit;
$Apache::GTopLimit::MIN_PROCESS_SHARED_SIZE = 4096;

and add this line to httpd.conf:

PerlFixupHandler Apache::GTopLimit

Don't forget to restart the server for the changes to take effect.

With these lines in place, as soon as a child process shares less than 4 MB of memory (meaning that it is occupying a lot of memory with its unique, unshared pages), it will be killed after completing its current request, and a new child will take its place.

If you use Apache::SizeLimit, you can accomplish the same thing by adding this to your startup file:

use Apache::SizeLimit;
$Apache::SizeLimit::MIN_SHARE_SIZE = 4096;

and this to httpd.conf:

PerlFixupHandler Apache::SizeLimit

If you want to set this limit for only some requests (presumably the ones you think are likely to cause memory to become unshared), you can register a post-processing check using the set_min_shared_size() function. For example:

use Apache::GTopLimit;
if ($need_to_limit) {
    # make sure that at least 4MB are shared
    Apache::GTopLimit->set_min_shared_size(4096);
}

or for Apache::SizeLimit:

use Apache::SizeLimit;
if ($need_to_limit) {
    # make sure that at least 4MB are shared
    Apache::SizeLimit->setmin(4096);
}

Since accessing the process information adds a little overhead, you may want to check the process size only once every N requests. In this case, set the $Apache::GTopLimit::CHECK_EVERY_N_REQUESTS variable. For example, to test the size on every second request, put the following in your startup file:

$Apache::GTopLimit::CHECK_EVERY_N_REQUESTS = 2;

or, for Apache::SizeLimit:

$Apache::SizeLimit::CHECK_EVERY_N_REQUESTS = 2;

You can run the Apache::GTopLimit module in debug mode by setting:

PerlSetVar Apache::GTopLimit::DEBUG 1

in httpd.conf. It's important that this setting appears before the Apache::GTopLimit module is loaded.

When debug mode is turned on, the module logs the memory usage of the current process to the error_log file, and also reports when it detects that one of the thresholds has been crossed and the process is about to be killed.
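Since the module watches more than one threshold, a startup-file fragment that sets both a total-size ceiling and a shared-size floor might look like the following sketch (the 10 MB ceiling here is purely illustrative):

```perl
use Apache::GTopLimit;
# kill the process if its total size grows beyond 10MB
# (10240 KB is an illustrative value, not a recommendation)
$Apache::GTopLimit::MAX_PROCESS_SIZE        = 10240;
# ... or if its shared size drops below 4MB
$Apache::GTopLimit::MIN_PROCESS_SHARED_SIZE = 4096;
```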

Apache::SizeLimit controls the debug level via the $Apache::SizeLimit::DEBUG variable:

$Apache::SizeLimit::DEBUG = 1;

which can be modified any time, even after the module has been loaded.