I recently bought a small virtual server (OpenVZ, 128 MB guaranteed memory, 256 MB burstable) for web hosting. I knew that working within such limited memory would be a challenge, especially since OpenVZ doesn't provide swap the way Xen does. For starters, I went with a 32-bit minimal Debian install, because 64-bit pointers use twice as much memory as 32-bit ones. Apache was out of the picture, but Lighttpd easily filled the role, and MySQL was heavily tweaked to limit its resource usage. All in all, I had a happy little system that used 60-100 MB of memory.
This morning, I discovered that there is more memory to be saved… in the form of the thread stack. See, each thread that is spawned is given a chunk of memory to use as its stack. On my laptop, the default stack size is 8 MB; on my VPS, it's 10 MB. Under OpenVZ, that reservation counts against the container's memory allowance even if most of it is never touched, which is why trimming it pays off. Now, I believe a well-designed program should come nowhere near using 8 MB of stack, let alone 10 MB. So I used the nifty ulimit bash builtin to set the default stack size to 256 KB instead. To make the setting global, I added the following line near the top of my /etc/init.d/rc file, so the limit is set early in the boot process:
ulimit -s 256
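You can try the same thing interactively before committing it to the boot scripts. A quick sketch in a shell:

```shell
# Show the current soft stack limit (reported in kilobytes)
ulimit -s

# Lower it for this shell and everything it spawns; this is the same
# command the rc file would run system-wide at boot
ulimit -s 256
ulimit -s    # now reports 256
```

Note that once a non-root shell lowers its limit, it can't raise it back, so experiment in a throwaway shell session.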
After a reboot, I found that not only did memory usage drop to 30-60 MB, but the system remained fully functional. 256 KB of stack should be plenty for most threads, and certain apps could make do with even less. For example, Debian Lenny runs rsyslogd by default, a pretty beefy process that could make do with just 128 KB of stack.
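If you want to give just one daemon a tighter limit than the global default, you can wrap its start command in a subshell that sets its own ulimit. A sketch (the rsyslogd line is illustrative, not taken from a real Debian init script):

```shell
# A limit set inside a subshell applies to that process tree only
# and does not leak back into the parent shell:
sh -c 'ulimit -s 128; ulimit -s'   # prints 128
ulimit -s                          # parent limit is unchanged

# The same idea in an init script: start just this daemon
# with a 128 KB stack
# sh -c 'ulimit -s 128; exec /usr/sbin/rsyslogd'
```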
While 256 KB should be a safe option for most systems, I was able to go down to 128 KB without losing functionality, but also without reclaiming much memory; thus, I went back up to 256 KB. Would the system refuse to boot if the stack size were set too low? I'll leave this experiment to the more adventurous souls out there…
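A less adventurous way to probe the floor is to run a test workload under progressively smaller limits in subshells, so a failure can't take the whole system down. A rough sketch; `/bin/true` is just a placeholder, substitute the program you actually care about:

```shell
# Probe how low a stack limit a command tolerates, one subshell at a
# time, without changing the limit of the current shell
for kb in 1024 512 256 128 64 32; do
    if sh -c "ulimit -s $kb; /bin/true" >/dev/null 2>&1; then
        echo "OK at ${kb} KB"
    else
        echo "FAILED at ${kb} KB"
    fi
done
```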
If you perform this tweak successfully, don't just let the extra memory sit there; after all, free memory is wasted memory. In my case, I was able to run a couple more PHP FastCGI processes to improve concurrency.
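With Lighttpd, the number of PHP backends is controlled by mod_fastcgi. A sketch of the relevant lighttpd.conf fragment, assuming php-cgi lives at /usr/bin/php-cgi (adjust paths and counts to your system and memory budget):

```
fastcgi.server = ( ".php" => ((
    "bin-path"        => "/usr/bin/php-cgi",
    "socket"          => "/tmp/php.socket",
    "max-procs"       => 2,
    "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4" )
)))
```

Here max-procs sets how many spawner processes Lighttpd starts, and PHP_FCGI_CHILDREN sets how many workers each one forks, so the two multiply; bump them gradually and watch memory usage.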