Why would apc_store() return false?

The documentation on php.net is very spotty about the causes of failure for APC writes. What kinds of scenarios would cause a call to apc_store() to fail?
There's plenty of disk space available, and the failures are intermittent: sometimes the store operation succeeds and sometimes it fails.

For PHP CLI, APC needs to be enabled with an additional option:
apc.enable_cli=On
In my situation it was working when running from a web browser, but not when executing the same code with PHP CLI.
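A quick way to verify (a minimal sketch; the key and value are arbitrary):
php -d apc.enable_cli=1 -r 'var_dump(apc_store("k", "v"), apc_fetch("k"));'
With apc.enable_cli off, the same one-liner has apc_store() return false.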

I had exactly the same situation.
I migrated my code from cron jobs to Gearman workers managed through supervisord.
Everything broke. I could never get caches to work through APC, and had to revert to file-based caching.
Eventually I figured out that when I was using cron jobs, I would load each page via wget rather than the command line. This difference meant that supervisord, which loads my PHP scripts via the command line, wouldn't work, because by default APC does not work on the command line.
The fix:
apc.enable_cli=On

Out of memory (the memory allocated to APC, that is). When APC's shared memory segment fills up, writes can start failing intermittently.
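If you suspect this, apc_sma_info() reports the state of the shared memory segment; a minimal sketch:
// How much of APC's shared memory is still free?
$sma = apc_sma_info();
$total = $sma['num_seg'] * $sma['seg_size'];
printf("APC memory: %d of %d bytes free\n", $sma['avail_mem'], $total);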

This asinine (and closed, for some reason) bug was my problem:
http://pecl.php.net/bugs/bug.php?id=16814
I had to roll back to APC version 3.1.2 to get APC to work; no fiddling with APC settings in php.ini helped (I'm on Mac OS 10.5, using Apache 2, PHP 5.3).
For me, this test script showed three trues on 3.1.2 and true/false/true on 3.1.3p1:
var_dump( apc_store('test', 'one') );
var_dump( apc_store('test', 'two') );
var_dump( apc_store('diff', 'thr') );

http://php.net/manual/en/apc.configuration.php
Check the apc.ttl and apc.user_ttl settings in php.ini:
"Leaving this at zero means that APC's cache could potentially fill up with stale entries while newer entries won't be cached."
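For example, non-zero TTLs in php.ini let APC evict expired entries when the cache fills up (a sketch; the values are illustrative, not recommendations):
apc.ttl=7200
apc.user_ttl=7200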

Out of disk space or permission denied to the storage directory?

In addition to what Greg said, I'd add that a configuration error could cause this.

There's a bug in the version installed with Ubuntu 10.04 and Debian stable. If you replace the package with this version: http://packages.debian.org/sid/php-apc (3.1.7), it works as it should.

apc_store() will fail if that specific key already exists and you are trying to write to it again before its TTL expires. In that case you can pretty much ignore the false return value: the write really did fail, but the cached value is still there. If you want to get around this, use apc_add() instead: http://php.net/manual/en/function.apc-add.php
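For contrast, apc_add() only writes when the key is absent; a minimal sketch:
var_dump( apc_add('test', 'one') );  // true if 'test' did not exist yet
var_dump( apc_add('test', 'two') );  // false: key already present, value unchanged
var_dump( apc_fetch('test') );       // string(3) "one"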


My long running laravel 4 command keeps being killed

I have a Laravel 4 web project that implements a Laravel command.
When running in the development Homestead VM, it runs to completion (about 40 seconds total time).
However, when running it on the production server, it quits with a 'killed' output on the command line.
At first I thought it was max_execution_time in the CLI php.ini, so I set it to 0 (for unlimited time).
How can I find out what is killing my command?
I run it in an SSH terminal using the standard artisan invocation:
php artisan commandarea:commandname
Does Laravel 4 have a command time limit somewhere?
The VPS is an Ubuntu 4.10 machine with MySQL, nginx and PHP-FPM.
So, firstly, thank you to everyone who pointed me in the right direction regarding PHP and Laravel memory usage tracking.
I have answered my own question hoping that it will benefit Laravel devs in the future, as my solution was hard to find.
After typing 'dmesg' to show system messages, I found that the PHP script was being killed by Linux.
So I added memory-logging calls into my script before and after each of its key areas:
Log::info('Memory now at: ' . memory_get_peak_usage());
Then I ran the script while watching the log output and also the output of the 'top' command.
I found that even though my methods were ending and the variables were going out of scope, the memory was not being freed.
Things that I tried that did NOT make any difference in my case:
unset($varname) on variables after I had finished with them, hoping to get GC to kick in
adding gc_enable() at the beginning of the script and then calling gc_collect_cycles() after a significant number of vars are unset
disabling MySQL transactions, thinking maybe they were memory-intensive; they weren't
Now, the odd thing was that none of the above made any difference: my script was still using 150MB of RAM by the time it was killed!
The solution that actually worked:
This is definitely a Laravel-specific solution, but my script's purpose is basically to parse a large XML feed and then insert thousands of rows into MySQL using the Eloquent ORM.
It turns out that Laravel keeps logging information and objects to help you inspect query performance.
By turning this off with the following 'magic' call, I got my script down from 150MB to around 20MB!
This is the 'magic' call:
DB::connection()->disableQueryLog();
I can tell you, by the time I found this call, I was grasping at straws ;-(
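For context, a minimal sketch of the resulting pattern (the feed parsing is omitted; Item is a hypothetical Eloquent model and $feedRows a hypothetical array of parsed rows):
// Stop Laravel 4 from retaining every executed query in memory
DB::connection()->disableQueryLog();
foreach ($feedRows as $row) {
    Item::create($row);  // one insert per parsed feed row
    Log::info('Memory now at: ' . memory_get_peak_usage());
}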
A process may be killed for several reasons:
Out of memory
There are two ways to trigger this error: exceed the amount of memory allocated to the PHP script in php.ini, or exceed the available system memory. Check the PHP error log and php.ini file to rule out the first possibility, and use the dmesg output to check for the second (see the sketch after this list).
Exceeded the execution time-out limit
In your post you indicate that you disabled the timeout via the max_execution_time setting, but I have included it here for completeness. Be sure that the setting in php.ini is correct and (for those using a web server instead of a CLI script) restart the web server to ensure that the new configuration is active.
An error in the stack
If your script is error-free and not encountering either of the above errors, ensure that your system is running as expected. When using a web server, restart the web server software. Check the error logs for unexpected output, and stop or upgrade related daemons as needed.
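To rule the first two causes in or out quickly (a sketch; the grep pattern matches the kernel's usual OOM-killer message):
// From inside the script: compare PHP's own ceiling with actual peak usage
echo 'memory_limit: ', ini_get('memory_limit'), PHP_EOL;
echo 'peak usage:   ', memory_get_peak_usage(true), ' bytes', PHP_EOL;
And from the shell, check whether the kernel killed the process:
dmesg | grep -i 'killed process'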
Had this issue on a Laravel/Spark project; just wanted to share in case others run into it.
Try a refresh/restart of your dev server (if running Vagrant or Ubuntu) before more aggressive approaches.
I had accidentally run an install of dependency packages on a Vagrant server, and I also removed and replaced a mirrored folder repeatedly during install errors. My error was on Laravel/Spark 4.~. I was able to run migrations on other projects, but on one particular project I was getting 'killed' very quickly, within a 300ms timeframe, for nearly all commands. Reading other users' reports, I was dreading hunting down the issue or corruption. In my case, a quick Vagrant reload did the trick, and the 'killed' issue was resolved.

phpunit phar has long delay when executing

I have the latest PHPUnit as a phar, placed in /usr/local/bin/phpunit (4.1.3). When I execute this file on my Vagrant host (Ubuntu 12.04, PHP 5.3.10), it takes what seems to be 30 to 60 seconds before it actually starts performing the unit tests. I cannot figure out why.
Any ideas?
I ran into a similar problem when running phars in production (AWS's phar). Unfortunately, PHP doesn't do any caching around phar archives, even when using APC as an opcode cache, so on each request PHP unarchives and parses the entire phar. My workaround has been to avoid phars in production unless the archive is small.
If you have the option to upgrade to PHP 5.5 with opcode caching, you shouldn't have this problem.
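For reference, the relevant php.ini settings on PHP 5.5+ would be something like this (a sketch; whether OPcache fully covers your phar depends on the setup):
opcache.enable=1
opcache.enable_cli=1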
I am going to close this for now, as I believe the primary issue is that I am working with a shared folder, and I cannot update to 5.5 like monte suggested. I appreciate your response! I will just have to deal with it for now. If I get a moment, I will try a Vagrant box with 5.5 and opcode caching like you suggested, and just run phpunit to see if it is faster, even in a shared folder. If so, I will change my accepted answer.

PHP include path order and status cache

I am building a framework where product instances use the main framework files until there is a copy of a product's own version of a file. To achieve this I have done the following:
set_include_path(MY_PRODUCT_ROOT.'/' . PATH_SEPARATOR . MY_FRAMEWORK_ROOT.'/');
So if I call include('view-users.php');, it will first look for MY_PRODUCT_ROOT/view-users.php, and if that's not found, it will then look for MY_FRAMEWORK_ROOT/view-users.php.
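stream_resolve_include_path() shows which copy include() would actually pick up; a minimal sketch assuming the same constants as above:
set_include_path(MY_PRODUCT_ROOT.'/' . PATH_SEPARATOR . MY_FRAMEWORK_ROOT.'/');
// Prints the product copy's absolute path if it exists, otherwise the framework copy's
var_dump( stream_resolve_include_path('view-users.php') );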
This procedure works very nicely until I add files to the product root. I know that PHP/Apache caches the include lookups, and one would think to run clearstatcache(true); to clear any stat caching. PHP presumably uses something like file_exists() inside its include() and thinks the new file still does not exist. I have tried restarting Apache with no effect.
Unfortunately, running clearstatcache(true); does not help either. Only once I have deleted the MY_FRAMEWORK_ROOT file does it clear the cache and try again, thus finding the MY_PRODUCT_ROOT file.
I'm a little stumped. I know we need to refresh PHP/Apache's understanding of whether the file(s) exist or not, but clearstatcache(true); is not helping.
Any ideas?
UPDATE: Correction, restarting Apache does seem to help now. I reiterate that this only occurs when trying to ADD a file to MY_PRODUCT_ROOT, to overlap an existing MY_FRAMEWORK_ROOT file for customization.
UPDATE: The development environment is Zend Server CE with PHP 5.3.14 on Windows; the production environment is CentOS Linux with httpd and PHP 5.3+. The fact that Zend Optimizer is enabled on my dev environment could have an effect. I'm not using APC or any other caching scripts.
Zend Optimizer+ speeds up PHP execution through opcode caching and optimization. It stores precompiled script bytecode in shared memory, which eliminates the stages of reading code from disk and compiling it on future accesses. For further performance improvement, the stored bytecode is optimized for faster execution.
It was caching the file contents found by the includes, which is why clearstatcache() did not work. I have disabled my Zend Optimizer and it works now.

Are Memcache and APC good together?

I have installed PHP5, PHP5-MEMCACHE and PHP-APC.
Can they work together like that? Will loading be faster with these modules?
I tried using them, and I don't "see" any particular difference; maybe the CPU is used less with these modules. My website doesn't have high traffic, but if I can save resources, all the better!
Thank you
APC keeps a cache of PHP bytecode; Memcache keeps a cache of the variables you set.
So the answer is yes, they can: they're made for different things.
They work together very well, you just need to use them properly:
Memcached is a distributed cache system. In a nutshell, that means that if you have a cluster of servers, all of them can access the same cache pool.
APC is an opcode cache and local cache system. As an opcode cache, it optimizes PHP scripts so that fewer operations are performed at compile time and the code executes much faster. The other use of APC is as a local cache: you can store values in the cache and access them from the machine running the code.
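A minimal sketch of using them side by side (the server address, key names and loader functions are hypothetical; this uses the Memcache class matching the php5-memcache package):
// Local, per-machine value cache via APC:
if (false === ($config = apc_fetch('app_config'))) {
    $config = load_config();               // hypothetical loader
    apc_store('app_config', $config, 60);  // cache for 60 seconds
}
// Shared, cluster-wide cache via Memcache:
$mc = new Memcache();
$mc->addServer('127.0.0.1', 11211);
if (false === ($profile = $mc->get('user:42'))) {
    $profile = load_profile(42);           // hypothetical loader
    $mc->set('user:42', $profile, 0, 300); // no flags, 300-second expiry
}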
Yes, they can work together. Whether they will on a production system is another story...
Personally, I had to give up trying to get the following to work for any extended period of time:
Ubuntu 10.04
NGINX 0.7.65
PHP 5.3.2
php-apc
php5-memcache
It will run for a while, but after stress testing, PHP errors out. I can restart php-fastcgi via /etc/init.d/php-fastcgi and things will roll along for some time more, but it always crashes again sooner or later.
I can run either one without issue, but the two together won't cooperate for me. FYI, I tried using binaries (apt-get packages), installing as PECL extensions, and downloading source, but all roads led me to the same sad fate. I also tried running the memcached daemon both locally and remotely on my web host, but same outcome.
I'm working on an MMO game based on JavaScript and PHP, and we are using both of them. I can't tell you more, because I am only a frontend developer; however, I think that if APC and Memcache were bad, we would not be using them.

PHP APC in CLI mode

Does the APC module in PHP support code optimization when running in CLI mode? For example, when I run a file with php -f <file>, will the file be optimized by APC before executing or not (presuming APC is set to load in the config file)? Also, will scripts included with require_once be optimized?
I know optimization works fine when running in FastCGI mode, but I'm wondering if it also works in CLI.
The apc_* functions work, but I'm wondering about the code optimization, which is the main thing I'm after here.
Happy day,
Matic
The documentation of apc.enable_cli, which controls whether APC should be activated in CLI mode, says (quoting):
"Mostly for testing and debugging. Setting this enables APC for the CLI version of PHP. Under normal circumstances, it is not ideal to create, populate and destroy the APC cache on every CLI request, but for various test scenarios it is useful to be able to enable APC for the CLI version of PHP easily."
Maybe APC will store the opcodes in memory, but as the PHP executable dies at the end of the script, that memory will be lost: it will not persist between executions of the script.
So the opcode cache in APC is useless in CLI mode: it will not optimize anything, as PHP will still have to re-compile the source to opcodes each time the PHP executable is launched.
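You can see the non-persistence directly; a minimal sketch, run twice with apc.enable_cli=1:
// cache-test.php: the fetch prints bool(false) on every run,
// because the previous run's cache died with its process.
var_dump( apc_fetch('persisted') );
var_dump( apc_store('persisted', 'value') );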
Actually, APC doesn't "optimize". The standard way of executing a PHP script is:
read the file, and compile it into opcodes
execute the opcodes
What APC does is store the opcodes in memory, so the execution of a PHP script becomes:
read the opcodes from memory (much faster than compiling the source code)
execute the opcodes
But this means you must have some place in memory to store the opcodes. When running PHP as an Apache module, Apache is responsible for the persistence of that memory segment. When PHP is run from the CLI, there is nothing to keep the memory segment around, so it is destroyed at the end of PHP's execution.
(I don't know exactly how it works, but it's something like that, at least in principle, even if my words are not very "technical" ^^)
Or, by "optimization" do you mean something other than the opcode cache, like the configuration directive apc.optimization? If so, that one was removed in APC 3.0.13.
If you have CLI code that generates any configuration based on the environment, then the CLI code will think that APC isn't enabled. For example, when generating Symfony's DI container through the CLI, it will tell Doctrine not to use APC (details).
Also, I have not tested it but there's a chance APC may improve the speed of scripts for files included after a pcntl_fork(). Edit: I've asked the question about APC & pcntl_fork() here.
For completeness, to enable APC on the CLI (in Ubuntu):
echo 'apc.enable_cli = 1' > /etc/php5/cli/conf.d/enable-apc-cli.ini
Well, there's a good reason for APC in CLI mode:
Unit testing: I want to run my unit tests in an environment as close to the later production environment as possible. Zend Framework has an internal caching solution which may use APC's variable cache as a storage backend, and I want to use this.
There is another reason to use it in CLI mode: some scripts are able to use it as a cache.
