I've been setting up PHP deployments with Capistrano on CentOS 6 and have run into an interesting issue. The way Capistrano works, it sets up a directory structure like this:
/var/www/myapp.com/
    current (symlink to the latest release in releases/)
    shared/
    releases/
        20130826172737/
        20130826172114/
When I look at the "current" symlink, it points to the most recent release. At first, when opening my web app, everything worked fine. After deploying a new release, current correctly points to the new release, but the web application tries to load files from the old release (which has already been deleted by Capistrano's cleanup task). Also, the virtual host is configured to point at /var/www/myapp.com/current/Public.
Are symlinks cached in any way?
The specific PHP code that fails (which initializes my framework) is this:
require_once dirname(dirname(__FILE__)) . '/App/App.php';
App\App::run();
That code is in index.php, currently located at /var/www/myapp.com/current/Public/index.php.
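Worth noting for anyone debugging this: __FILE__ always contains the path with symlinks resolved, which is why the log below shows a releases/... path even though the vhost points at current/. A quick sketch to see both paths (the filename is hypothetical; drop it into current/Public/ and request it through Apache):
<?php
// where-am-i.php -- hypothetical probe script.
// __FILE__ has symlinks resolved by PHP; SCRIPT_FILENAME is the path
// as Apache mapped it, which typically still goes through current/.
echo __FILE__, "\n";                    // e.g. .../releases/<timestamp>/Public/where-am-i.php
echo $_SERVER['SCRIPT_FILENAME'], "\n"; // e.g. .../current/Public/where-am-i.php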
My Apache error logs show:
PHP Fatal error: require_once(): Failed opening required '/var/www/myapp.com/releases/20130826172237/App/App.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/myapp.com/releases/20130826172237/Public/index.php
And the current symlink shows:
current -> /var/www/myapp.com/releases/20130826172641
Obviously 20130826172641 != 20130826172237, the latter being the previous release.
Any ideas or areas I can look at?
I can't verify this, but it seems that there is some unpredictable behaviour with Apache following / caching the old location of symlinks:
Is there a way to mimic symlink behavior with an apache configuration?
Case Against Using Symlinks For Code Promotion
The only thing that would absolutely clear up this issue was to cycle Apache, which we would prefer not to do on every deployment. -- Mike Brittain
He suggests moving the whole directory, instead of updating the symlink.
Have you checked the realpath_cache_size and realpath_cache_ttl directives? By default, PHP 5.1+ caches the real paths of symlinked files for 120 seconds. This will cause problems with Capistrano deployments.
The first problem is caching: even if you clear your application cache, your old PHP files will continue to be served for two minutes, repopulating it with stale data. The second is the interaction between PHP and static files: static files are served directly by Apache, so they are updated immediately, but the PHP code will still be from the previous release for two minutes after deploying, so it will expect the old versions of any changed static files. That's especially a problem if you use a cache-busting scheme that changes the names of those files; in that case the PHP code won't be able to find the files it's expecting at all.
Anyway, there are two solutions. The first is to set realpath_cache_size to 0 in php.ini. (Note: setting realpath_cache_ttl to 0 does not disable the cache.) Or, if you want to keep the cache enabled, you should be able to use the clearstatcache function to clear the realpath cache immediately after deploying your symlink, using a Capistrano hook. Be aware, though, that if you're using mod_php, the PHP CLI and Apache runtimes are separate, so you would need to call that function from a PHP script invoked through Apache, similar to the approach used for clearing the APC cache. I haven't tested that, though, as I didn't notice a significant performance impact from simply disabling the cache.
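For the second option, a minimal sketch of such a web-invoked reset script (the filename is hypothetical; you'd request it with curl from a Capistrano hook right after the symlink switch):
<?php
// clear_realpath_cache.php -- hypothetical helper living under the web
// root, requested once per deploy. Because it runs inside the
// Apache/mod_php process, it clears that runtime's caches, which a
// CLI invocation cannot do.
clearstatcache(true); // passing true also clears the realpath cache
echo "caches cleared\n";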
Related
Not sure if this question belongs here, but it is kind of a programming question.
I'm running xampp 3.2.2 (just Apache and MySQL) with php 7.1.14 on Windows 7 to work on several Symfony (2.8) applications.
Initially I had them hosted with vhosts using different ports and accessed them via http://my-ip:port, but that was a PITA because 1) the browser mixes up the cookies, forcing you to continually log in when switching apps, and 2) some PHP cache (I think; it's been a while) mixes up User classes, forcing you to restart Apache.
I then switched to name-based vhosts (nothing fancy, local name resolution using C:\Windows\system32\drivers\etc\hosts) which got rid of the issues.
Not sure if it is related to running several apps, but now I have a new problem: everything started out fine, but then Symfony sporadically threw this error in my face:
Warning: class_implements(): Class  does not exist and could not be loaded
Most of the time there is no class name (note the two spaces), sometimes it says some gibberish like \.php$. I typed this one from memory, but I think it was a less well-formed regex. I also get other stuff that's definitely not a class name.
Over time (a few weeks), "sporadically" changed to "every few minutes/requests", even when using just one app. What helps is restarting Apache; sometimes I also need to delete the Symfony cache.
What is going on here and how can I fix it?
This works for me: in the php.ini file, change this line:
;realpath_cache_size = 4096k
to
realpath_cache_size = 4096k
and restart Apache or XAMPP.
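If you want to see what the cache is actually holding, PHP ships realpath_cache_size() and realpath_cache_get() (PHP 5.3.2+). A small diagnostic sketch; request it through Apache, since the CLI keeps its own separate cache:
<?php
// realpath-cache-info.php -- diagnostic sketch.
echo 'configured size: ', ini_get('realpath_cache_size'), "\n";
echo 'ttl: ', ini_get('realpath_cache_ttl'), " seconds\n";
echo 'memory in use: ', realpath_cache_size(), " bytes\n";

// Each entry maps a path to its resolved realpath plus an expiry time.
print_r(realpath_cache_get());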
This error was occurring on my local server; I just restarted Apache and things were fine again. So you could give that a try.
I've successfully deployed the Laravel application to Heroku.
It works online.
But when I try to run "heroku local" I get:
vendor/bin/heroku-php-apache2: No such file or directory
Which makes sense, since looking into "vendor/bin", the only thing listed is:
psysh -> ../psy/psysh/bin/psysh
So, where's my heroku-php-apache2, or how do I fix this?
You should have these lines in your composer.json:
"require-dev": {
"heroku/heroku-buildpack-php": "*"
}
Be sure to run composer update after you add them.
After extensive research, trial and error, and talking to the Heroku support team, I found out that, although Slow Loris's answer was part of the process, the following answer was given to me by Heroku's support:
To cut a long story short, heroku local is not officially supported for PHP applications. The reason is that, unlike all the other languages we support on the platform, PHP has no web servers written in userland. Instead, we use PHP-FPM together with Apache or Nginx, and the boot scripts (vendor/bin/heroku-(php|hhvm)-(apache2|nginx)) dynamically inject the correct configuration for port binding and the FastCGI comms sockets.
This works with vanilla PHP and Apache builds, provided that:
1) the current user has all the correct permissions (in your case, /var/log/apache2/ isn't writable);
2) the correct proxy modules are loaded in the main httpd.conf;
3) the main httpd.conf doesn't bind to a port at all, or at least not to one under 1024 (which are reserved for superusers).
The main config also needs to be handled by each user on their own, because sometimes the modules to be loaded are in libexec/, sometimes in lib/apache2/modules/, and so forth. Just too many variations; otherwise, we could ship a full Apache config to users and the experience would be much better.
But the problems don't end there. FPM does not work at all on Windows, and on most Linux systems, httpd is not a command that works; instead, apache2ctl handles starting and stopping, and thus, running a server in the foreground is not possible. In the end, there are simply too many possible permutations in system configs that make it impossible to ensure every user has a great experience.
It's simply the current reality in PHP land. Ruby, Python, Node, Java all have web servers that are written in each respective language, and you don't need external servers. Which also makes it possible to stream file uploads, handle web socket upgrades, and so forth. Maybe with PHP 7 we'll see something like that emerge soon (in PHP 5 it's simply not feasible at all, because a fatal error kills the engine, so your web server would be gone too).
I know this question is a little dated, but I recently deployed a Heroku app for the first time and was unable to get heroku local to work. I'm on the current branch of Laravel, which is 5.8, on Windows 10 using VS Code. I searched all over trying to rectify this issue and could not get it to work no matter what.
I did come up with a solution for working on this locally with only a few lines in the terminal.
In VS Code I used the Git Bash terminal. Once in your Heroku project folder, run composer require laravel/homestead --dev. Once that completes, install Homestead with vendor/bin/homestead make, and then simply run vagrant up; your app will be accessible through localhost:8000.
Docs - https://laravel.com/docs/5.8/homestead
Hope this helps someone!
We have a web app for which we have modified a number of the default php.ini values: short_open_tag = Off, expose_php = Off, memory_limit = 128M, etc. Our current deployment strategy when we need to scale and bring another app server online involves cloning our app onto a 'new' server that has the latest version of PHP, along with the distribution-specific (in our case, Debian) php.ini file.
We are currently storing our customized php.ini file in our repo and deploying that when we clone, but ran into a problem recently relating to deprecated config values when a new cloned app server fired up with PHP 5.4+ on it. This resulted in us having a broken config file, and got me thinking about how to best handle this. We'd like to use the default latest php.ini that would contain potentially new directives, and would have deprecated ones removed, and then be able to 'locally' override the settings we need.
Solutions we've considered include using .htaccess files and ini_set(), but there are three drawbacks: some settings can only be adjusted in php.ini; .htaccess is not used by our CLI scripts; and every request served by Apache has to process .htaccess or make calls to ini_set(), adding unneeded overhead. We've also looked at freezing the version of PHP we use so that there are no updates and changes to php.ini once deployed, but I am not sure this strategy works best, given we would miss out on minor updates that could be security-related.
Have we missed an option as it relates to portably deploying PHP engine settings?
Per the direction provided by @PeeHaa above, we've decided to lock our application to a specific PHP version (via Composer), taken that PHP version's default php.ini file for both Apache2 and the CLI, and added in our settings. This has then been pushed to our repo, and is copied into place on deployment and on any git changes to the files.
FYI, in our Debian environment, we've followed the strategy outlined here in terms of installing a specific version of PHP other than latest.
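For illustration, the override fragment we keep in the repo looks roughly like this, using only the values mentioned above (a sketch, not our exact file):
; custom php.ini fragment, merged into the stock distribution php.ini
; on deployment -- everything not listed here stays at the default.
short_open_tag = Off
expose_php = Off
memory_limit = 128M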
I am building a framework where product instances use the main framework's files unless a product has its own copy of a given file. To achieve this I have done the following:
set_include_path(MY_PRODUCT_ROOT.'/' . PATH_SEPARATOR . MY_FRAMEWORK_ROOT.'/');
So if I call include('view-users.php'); it will first look for MY_PRODUCT_ROOT/view-users.php and, if that's not found, fall back to MY_FRAMEWORK_ROOT/view-users.php.
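That resolution order can be checked directly with stream_resolve_include_path() (PHP 5.3.2+), which walks the include path the same way include does. A small sketch with hypothetical roots standing in for the real constants; note its result is subject to the same stat/realpath caching discussed below:
<?php
// Hypothetical paths for illustration only.
define('MY_PRODUCT_ROOT', '/var/www/product');
define('MY_FRAMEWORK_ROOT', '/var/www/framework');

set_include_path(MY_PRODUCT_ROOT . '/' . PATH_SEPARATOR . MY_FRAMEWORK_ROOT . '/');

// Prints the file include('view-users.php') would actually load: the
// product copy if it exists, otherwise the framework copy (false if neither).
var_dump(stream_resolve_include_path('view-users.php'));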
This procedure works very nicely until I add files to the product root. I know that PHP/Apache caches the includes, and one would think to run clearstatcache(true); to clear any stat caching. PHP likely uses file_exists() inside its include() and thinks the new file still does not exist. I have tried restarting Apache with no effect.
Unfortunately, running clearstatcache(true); does not help either. Only once I have deleted MY_FRAMEWORK_ROOT/file does it think to clear the cache and try again, thus finding MY_PRODUCT_ROOT/file.
I'm a little stumped. I know we need to refresh PHP/Apache's understanding of whether the file(s) exist or not, but clearstatcache(true); is not helping...
Any ideas?
UPDATE: Correction: restarting Apache does seem to help now. I reiterate that this only occurs when trying to ADD a file to MY_PRODUCT_ROOT that overrides an existing MY_FRAMEWORK_ROOT file, for customization.
UPDATE: The development environment is Zend Server CE with PHP 5.3.14 on Windows; the production environment is CentOS Linux with httpd and PHP 5.3+. The fact that Zend Optimizer+ is enabled on my dev environment could have an effect. I'm also not using APC or any other caching scripts.
Zend Optimizer+ speeds up PHP execution by opcode caching and optimization. It stores precompiled script bytecode in shared memory. This eliminates the stages of reading code from the disk and compiling it on future access. For further performance improvements, the stored bytecode is optimized for faster execution.
This caches the compiled contents of the included files, which is why clearstatcache() has no effect. I have disabled my Zend Optimizer+ and it works now.
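If you'd rather keep the opcode cache enabled, the engine can be told to re-validate on every request. I can't vouch for the exact directive names on that Zend Server version (Zend Optimizer+ was later open-sourced as OPcache, where the knobs are spelled opcache.*; on Zend Server the prefix differs), but a sketch in php.ini terms, assuming an OPcache-era build:
; Option A: disable the opcode cache outright in development.
;opcache.enable = 0

; Option B: keep it enabled but re-check file timestamps on every request,
opcache.enable = 1
opcache.validate_timestamps = 1
opcache.revalidate_freq = 0
; and re-resolve include-path lookups instead of trusting a cached
; resolution -- relevant here, because a new file can appear earlier
; in the include path.
opcache.revalidate_path = 1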
I run some PHP websites on a FreeBSD server which was recently updated to PHP 5.2.17, after which exec("something") stopped working, and I was required to write exec("/full/path/something").
Since the scripts run on different machines where executables are in different places writing full paths is not acceptable.
Running passthru("set") from PHP reveals the PATH variable (for user "www") to be:
PATH=/sbin:/bin:/usr/sbin:/usr/bin
I need PATH to point to the PHP safe_mode_exec_dir directory:
PATH=/usr/phpsafe_bin
Running putenv("PATH=/usr/phpsafe_bin") in PHP resolves the problem, but I need a solution that fixes the problem on a global level for all PHP scripts running on this machine, in other words changing php.ini, Apache settings, or other system settings.
Hope someone can provide a good solution to this, maybe even an explanation why this changed in the PHP update. There seems to be no PHP documentation on how the search path for exec() and friends is determined.
It's not a pleasant solution, but it's all I could think of. Create a script file that makes the change you've suggested, and then use the auto_prepend_file directive in php.ini or .htaccess to include that script. In effect, every PHP script that runs will then execute this file first, so your PATH gets set.
Caution: You need to be very careful using this, since any errors, extra whitespace, etc. in the prepend script can break whole pages or existing features such as download scripts, or cause any number of unknown effects.
Read More: http://php.net/manual/en/ini.core.php
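A minimal sketch of that setup (the prepend file's name and location are hypothetical):
; in php.ini
auto_prepend_file = /usr/local/etc/php/prepend.php
<?php
// prepend.php -- executed before every PHP script via auto_prepend_file.
// Point PATH at the safe_mode_exec_dir so exec("something") finds its
// binaries without needing a full path.
putenv('PATH=/usr/phpsafe_bin');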