I have the latest PHPUnit (4.1.3) as a phar, placed in /usr/local/bin/phpunit. When I execute this file on my Vagrant host (Ubuntu 12.04, PHP 5.3.10), it takes somewhere between 30 and 60 seconds before it actually starts performing the unit tests. I cannot figure out why.
Any ideas?
I ran into a similar problem when running phars in production (AWS's phar, specifically). Unfortunately, PHP doesn't do any caching around phar archives, even when using APC as an opcode cache. So, on each request, PHP unarchives and parses the entire phar. My workaround has been to avoid phars in production unless the archive is small.
If you have the option to upgrade to PHP 5.5 with opcode caching, you shouldn't have this problem.
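If upgrading isn't an option, one workaround sketch in that spirit (the paths are examples, and it assumes the archive keeps a .phar extension) is to extract the archive once, so each run reads plain files instead of re-parsing the phar:

<?php
// extract-phar.php - one-off sketch: unpack the phar so PHP no longer
// unarchives and parses it on every run (paths are examples).
$phar = new Phar('/usr/local/bin/phpunit.phar');
$phar->extractTo('/opt/phpunit-extracted', null, true); // overwrite any old copy
echo "Extracted to /opt/phpunit-extracted\n";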
I am going to close this for now, as I believe the primary issue is that I am working with a shared folder, and I cannot upgrade to 5.5 like monte suggested. I appreciate your response! I will just have to deal with it for now. If I get a moment I will try a Vagrant box with 5.5 and an opcode cache like you suggested, and just run phpunit to see if it is faster, even in a shared folder. If so, I will change my accepted answer.
Related
I've been setting up PHP deployments with Capistrano on CentOS 6 and have run into an interesting issue. Capistrano sets up folders like this:
/var/www/myapp.com/
    current (symlink to the latest release in releases/)
    shared/
    releases/
        20130826172737/
        20130826172114/
When I look at the "current" symlink, it points to the most recent release. At first, when opening my web app, everything worked fine. After deploying a new release, the current symlink correctly points to the new release, but the web application tries to load files from the old release (which has been deleted by Capistrano's cleanup process). The virtual host is configured to point at /var/www/myapp.com/current/Public.
Are symlinks cached in any way?
The specific PHP code that fails (which initializes my framework) is this:
require_once dirname(dirname(__FILE__)) . '/App/App.php';
App\App::run();
That is in index.php currently located at /var/www/app.com/current/Public/index.php.
My Apache error logs show:
PHP Fatal error: require_once(): Failed opening required '/var/www/myapp.com/releases/20130826172237/App/App.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/myapp.com/releases/20130826172237/Public/index.php
And the current symlink shows:
current -> /var/www/zverse/releases/20130826172641
Obviously 20130826172641 != 20130826172237, the latter being the previous release.
Any ideas or areas I can look at?
I can't verify this, but it seems that there is some unpredictable behaviour with Apache following / caching the old location of symlinks:
Is there a way to mimic symlink behavior with an apache configuration?
Case Against Using Symlinks For Code Promotion
The only thing that would absolutely clear up this issue was to cycle Apache, which we would prefer not to do on every deployment. -- Mike Brittain
He suggests moving the whole directory, instead of updating the symlink.
Have you checked the realpath_cache_size and realpath_cache_ttl directives? By default, PHP > 5.1 caches the real paths of symlinked files for 120 seconds. This will cause problems with Capistrano deployments.

The main problems are the caching itself (even after you clear your own caches, the old PHP files will continue to be served for two minutes, repopulating them with stale data) and the interaction between PHP and static files. Static files are served directly by Apache, so they are updated immediately, while the PHP code still comes from the previous release for two minutes after deploying, so it will be expecting the old versions of any changed static files. That's especially a problem if you use a cache-busting scheme that changes the names of those files; in that case the PHP code won't be able to find the files it's expecting at all.
Anyway, there are two solutions. The first is to set realpath_cache_size to 0 in php.ini. (Note: setting realpath_cache_ttl to 0 does not disable the cache.)

Or, if you want to keep the cache enabled, you should be able to use the clearstatcache function to clear the realpath cache immediately after deploying your symlink, using a Capistrano hook. Be aware, though, that if you're using mod_php, the PHP CLI and Apache runtimes are separate, so you would need to call that function from a PHP script invoked through Apache, similarly to what's done for clearing the APC cache here. I haven't tested that, though, as I didn't notice a significant performance impact from simply disabling the cache.
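A minimal sketch of that second approach (the filename is hypothetical; clearstatcache() itself is standard PHP):

<?php
// clear_realpath_cache.php - request this over HTTP right after the
// "current" symlink is switched; mod_php keeps its own cache, so a
// CLI invocation would not reach it.
clearstatcache(true); // passing true also empties the realpath cache
echo 'realpath cache cleared';

A Capistrano hook can then fetch that URL as the final step of the deploy.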
I followed this tutorial for installing HHVM:
https://github.com/facebook/hiphop-php/wiki/Building-and-installing-HHVM-on-Ubuntu-13.04
But I can't figure out how to run it. I've gone into hphp/hhvm and run ls:
root@hhvm-ubuntu:~/dev/hiphop-php/hphp/hhvm# ls
CMakeFiles CMakeLists.txt hhvm main.cpp process_init.cpp
cmake_install.cmake global_variables.cpp link_hphp.sh Makefile process_init.h
The problem is that each time I run it, the server crashes. The server is also slow with the HHVM install; it's a 1 GB instance on Rackspace. But how am I supposed to run HipHop after compiling from source?
You just run hphp/hhvm/hhvm some_file.php if you want it on the command line, or hphp/hhvm/hhvm -m server /some/document_root/ for a server. Look on the wiki for more config information.
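For example, with a trivial test file (the file itself is just an illustration):

<?php
// hello.php - confirms the freshly built binary can execute a script
echo "Running under " . php_sapi_name() . "\n";

Then hphp/hhvm/hhvm hello.php from the build directory should print a line and exit.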
I don't have the link handy, but 1 GB is not enough to run HHVM. The process will easily chew up that much RAM on its own, and once it needs more RAM than you have, it slows to a crawl and eventually crashes.
Try a 4 GB instance. You may have better luck.
Take a look at this article for some more info on configuring HHVM.
I am building a framework where product instances use the main framework files unless a product has its own copy of a given file. To achieve this I have done the following:
set_include_path(MY_PRODUCT_ROOT.'/' . PATH_SEPARATOR . MY_FRAMEWORK_ROOT.'/');
So if I call include('view-users.php');, it will first look for MY_PRODUCT_ROOT/view-users.php, and if that's not found, it will fall back to MY_FRAMEWORK_ROOT/view-users.php.
This procedure works very nicely until I add files to the product root. I know that PHP/Apache caches the include lookups, and one would think running clearstatcache(true); would clear any stat caching; PHP likely calls file_exists inside its include() and still thinks the new file does not exist. I have tried restarting Apache with no effect.
Unfortunately, running clearstatcache(true); does not help either. Only once I have deleted the MY_FRAMEWORK_ROOT copy of the file does it clear the cache and look again, thus finding the MY_PRODUCT_ROOT copy.
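For reference, a quick diagnostic sketch (reusing the filename from above) to see which candidate PHP would resolve right now; both functions are standard PHP 5.3+:

<?php
clearstatcache(true); // flush the stat and realpath caches first
// stream_resolve_include_path() searches the include path the same way
// include() does and returns the winning absolute path, or false.
var_dump(stream_resolve_include_path('view-users.php'));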
I'm a little stumped. I know we need to refresh PHP/Apache's understanding of whether the file(s) exist or not, but clearstatcache(true); is not helping.
Any ideas?
UPDATE: Correction, restarting Apache does seem to help now. I reiterate that this only occurs when trying to ADD a file to MY_PRODUCT_ROOT that shadows an existing MY_FRAMEWORK_ROOT file, for customization.
UPDATE: The development environment is Zend Server CE with PHP 5.3.14 on Windows; the production environment is CentOS Linux with httpd and PHP 5.3+. The fact that Zend Optimizer is enabled on my dev environment could have an effect. I am also not using APC or any other caching extensions.
Zend Optimizer+ speeds up PHP execution by opcode caching and optimization. It stores precompiled script bytecode in shared memory. This eliminates the stages of reading code from the disk and compiling it on future access. For further performance improvements, the stored bytecode is optimized for faster execution.
It was caching the contents of the included files, which is why clearstatcache did not help. I have disabled my Zend Optimizer and it works now.
I have installed PHP5, PHP5-MEMCACHE, and PHP-APC.
Can they work together like that? Will loading be fast with these modules?
I tried using them and I don't "see" any particular difference; maybe the CPU is used less with these modules. My website doesn't have high traffic, but if I can save resources, all the better!
Thank you
APC caches PHP bytecode; Memcache caches the variables that you set.
So the answer is yes, they can. They're made for different things.
They work together very well, you just need to use them properly:
Memcached is a distributed cache system. In a nutshell, that means that if you have a cluster of servers, all of them can access the same cache pool.
APC is an opcode cache and local cache system. It optimizes PHP scripts so that fewer operations are performed at compile time and the code executes considerably faster. The other use of APC is as a local cache, meaning you can store values and access them from the machine running the code, as in the sketch below.
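A minimal sketch of that split (the host, port, key names, and TTLs are examples):

<?php
// Cluster-wide data goes to memcached, reachable from every web node.
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);
$mc->set('user:42:profile', array('name' => 'Ada'), 0, 300); // flags = 0, 5-minute TTL

// Machine-local values go to APC's user cache on this server only.
apc_store('local:settings', array('debug' => false), 300);
$settings = apc_fetch('local:settings');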
Yes, they can work together. Whether they will on a production system is another story...
Personally, I had to give up trying to get the following to work for any extended period of time:
Ubuntu 10.04
NGINX 0.7.65
PHP 5.3.2
php-apc
php5-memcache
It will run for a while, but after stress testing, PHP errors out. I can restart php-fastcgi via /etc/init.d/php-fastcgi and things will roll along for some time more, but it always crashes again sooner or later.
I can run either one without issue, but the two together won't cooperate for me. FYI, I tried using binaries (apt-get packages), installing as PECL extensions, and building from source, but all roads led me to the same sad fate. I also tried running the memcached daemon both locally and remotely on my web host, with the same outcome.
I'm working on an MMO game based on JavaScript and PHP, and we are using both of them. I can't tell you more because I am only a frontend developer, but I think that if APC and Memcache were bad, we would not be using them.
The documentation on php.net is very spotty about causes of failure for APC writes. What kind of scenarios would cause a call to apc_store() to fail?
There's plenty of disk space available, and the failures are intermittent: sometimes the store operation succeeds and sometimes it fails.
For the PHP CLI it needs to be enabled with another option:
apc.enable_cli=On
In my situation it was working when run from a web browser, but not when executing the same code with the PHP CLI.
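A quick sketch to confirm what the CLI runtime actually sees (run it with the same CLI binary):

<?php
// check_apc_cli.php - verify APC is loaded and enabled for this CLI
var_dump(function_exists('apc_store')); // extension loaded at all?
var_dump(ini_get('apc.enable_cli'));    // should be "1" once enabled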
I had exactly the same situation.
I migrated my code from cron jobs to Gearman workers, which were managed through supervisord.
Everything broke. I couldn't get caches to work through APC at all, and had to revert to file-based caching.
Eventually I figured out that when I was using cron jobs, I would load each page via wget rather than the command line. That difference meant that supervisord, which loaded my PHP scripts via the command line, wouldn't work, because by default APC does not work on the command line.
The fix....
apc.enable_cli=On
Out of memory (the memory allocated to APC, that is).
This asinine (and closed, for some reason) bug was my problem:
http://pecl.php.net/bugs/bug.php?id=16814
I had to roll back to APC version 3.1.2 to get APC to work. No fiddling with the APC settings in php.ini helped (I'm on Mac OS X 10.5, using Apache 2 and PHP 5.3).
For me, this test script showed three trues on 3.1.2 and true/false/true on 3.1.3p1:
var_dump( apc_store('test', 'one') ); // true on both versions
var_dump( apc_store('test', 'two') ); // true on 3.1.2, false on 3.1.3p1: rewriting a live key fails
var_dump( apc_store('diff', 'thr') ); // true on both versions
http://php.net/manual/en/apc.configuration.php
The apc.ttl and apc.user_ttl settings in php.ini:
Leaving this at zero means that APC's cache could potentially fill up with stale entries while newer entries won't be cached.
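For example, in php.ini (the values are illustrative; any nonzero TTL lets APC evict stale entries when the cache fills up):

apc.ttl=7200
apc.user_ttl=7200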
Out of disk space or permission denied to the storage directory?
In addition to what Greg said, I'd add that a configuration error could cause this.
There's a bug in the version installed with Ubuntu 10.04 and Debian stable. If you replace the package with this version: http://packages.debian.org/sid/php-apc (3.1.7), it works as it should.
apc_store will fail if that specific key already exists and you try to write to it again before its TTL expires. You can therefore pretty much ignore the false return value: the write really did fail, but the cached entry is still there. If you want to get around this, use apc_add instead: http://php.net/manual/en/function.apc-add.php
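A small sketch of the suggested switch (the key and TTL are examples): apc_add() only writes when the key is absent, so a false return simply means an unexpired entry already exists.

<?php
if (apc_add('report', 'fresh value', 60) === false) {
    // a live entry already exists - treat as "still cached", not an error
}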