eAccelerator optimization

In one of our previous articles here, we wrote about eAccelerator and mentioned that it should significantly reduce server load while increasing speed from 1 to 10 times. But while evaluating eAccelerator under 1000+ HTTP req/sec, we noticed that Apache repeatedly launched child processes until it hit the MaxClients server limit, producing abnormal load averages that brought multi-core, multi-CPU servers to their knees within seconds. When we debugged the issue, we found that the complaints in the gdb backtraces were somehow about nothing but Zend.

We know eAccelerator has been reported to work with Zend many times, but we found that the two do not actually get along well once requests climb past a mere 50-60 per second. Just imagine the picture when you get thousands of them. In our earlier article, while discussing the initial configuration of eAccelerator, we said you could load it either as a Zend extension or, alternatively, as a PHP extension in your php.ini file. We now explicitly recommend that you do not load it as a Zend extension. Instead, prefer the straight PHP extension way with:
extension="eaccelerator.so"
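
In other words, avoid the zend_extension style of loading, which would look roughly like the line below; the path is only an example and depends on where the module was installed:

zend_extension="/usr/local/lib/php/extensions/eaccelerator.so"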

Do not install Zend Optimizer alongside eAccelerator unless you use scripts encoded with Zend Encoder. If it is already installed, also disable Zend optimization right after the eAccelerator configuration lines with:
zend_optimizer.optimization_level=0
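
For illustration, a php.ini fragment along these lines keeps eAccelerator ahead of any Zend Optimizer lines, as explained below; the shm size, cache directory and Zend path are placeholders to adapt to your own installation:

extension="eaccelerator.so"
eaccelerator.shm_size="64"
eaccelerator.cache_dir="/tmp/eaccelerator"
eaccelerator.enable="1"
eaccelerator.optimizer="1"

; only if you really need Zend Optimizer for encoded scripts,
; and always after the eAccelerator lines above
zend_optimizer.optimization_level=0
zend_extension="/usr/local/Zend/ZendOptimizer.so"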

Just in case, put all of your eAccelerator configuration lines at the top of your php.ini file, since Zend Optimizer must be loaded after eAccelerator inside php.ini. Incidentally, for those who run an older version of eAccelerator, it is vital to upgrade to the latest release, as old versions have a spinlock bug that leads to deadlocks under heavy load. Then restart Apache and observe the new behaviour. Although you will gain some increase in the number of requests that get answered without a fatal crash, there are still two important things to take care of: stat() system calls and compression.

stat() is a Unix system call that returns the size and other attributes associated with a file. On every single hit, eAccelerator checks the modification time of a script to see whether it has changed and needs to be recompiled. Each time, this is done through stat() calls, which consume time and add serious overhead that gets in the way of answering mass requests. Skip these expensive calls, which are enabled by default, with:
eaccelerator.check_mtime = "0"

When you disable this check, remember that you have to manually clean the eAccelerator cache when you update a file.
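
One way to do that, assuming the default cache directory of /tmp/eaccelerator (check your eaccelerator.cache_dir setting), is to wipe the cached files and restart Apache:

# wipe the on-disk cache; the path assumes the default eaccelerator.cache_dir
rm -rf /tmp/eaccelerator/*
# a restart also drops the shared-memory part of the cache
apachectl restart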

Compression also requires additional CPU activity. While it is essential on the HTTP side, it is not meaningful for the code that is going to be cached. And since we are counting every cycle here, disable this feature as well:
eaccelerator.compress = "0"

According to our observations, after all these configuration changes the system can handle 100 HTTP req/sec while cutting the overall load by nearly 80%. But if your request rate grows beyond that, these brilliant savings will probably turn into a crash. In our opinion, eAccelerator is very well suited to web servers that serve at most 100 HTTP req/sec, which corresponds roughly to ~1000 concurrent users (depending on the type of your web application), but not to more.

Those who take a chance on XCache as an alternative to eAccelerator will not gain a victory above 100 requests per second either: most probably the web server becomes unresponsive, with a lot of processes stuck in the lockf state. We have also tested XCache under the same pressures.
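
On FreeBSD you can check for this with ps; here httpd is assumed to be the Apache process name, and processes blocked on a file lock typically show lockf as their wait channel:

ps -axo pid,stat,wchan,command | grep httpd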

But this is not a this-cache or that-cache situation; it clearly seems to be a matter of the locking mechanism used. My impression is that under certain loads we need to enable semaphore locks instead of the default fcntl locking. As FreeBSD does not support pthread mutex locking, our next resort, if semaphores do not really help, will be a marginal one: spinlocks, which are still considered experimental within APC (Alternative PHP Cache). Install APC from ports with IPC SHM (shared memory) and spinlocking enabled, try the configuration below and see what happens if locking is an issue for you, too. But before that, you need to make sure that your kern.ipc.shmmax value has been set large enough in /etc/sysctl.conf to handle the shm_size below.
extension=apc.so
apc.enabled=1
apc.shm_segments=1
apc.shm_size=128
apc.file_update_protection=0
apc.stat=0
apc.ttl=0
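
As an illustration, for the 128 MB apc.shm_size above, entries along these lines in /etc/sysctl.conf should be enough; the values are examples, with shmmax in bytes and shmall in 4 KB pages:

kern.ipc.shmmax=134217728
kern.ipc.shmall=32768

They can usually be applied on the fly with sysctl(8); otherwise a reboot picks them up.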
