SilverStripe TEMP_FOLDER differs between php-fpm and CLI

We have a lot of SilverStripe installations, each on its own vServer.
Deployment is done by a deployment service.
Each instance is powered by nginx and php5.6-fpm.
When a deployment is running, the typical build/flush actions are executed by the deployment service as ssh commands.
The CLI tasks run as the same user that php5.6-fpm runs as.
But the PHP versions of FPM and CLI are not identical.
This results in two different cache directories:
/tmp/silverstripe-cache-php5.6.23... (fpm)
and
/tmp/silverstripe-cache-php5.6.29... (cli)
This is really bad. Example:
There is a new static config variable that is stored in the ConfigManifest.
But it is only stored in the manifest of the cache directory that matches the CLI version.
The worst case: when browsing the website (php5.6-fpm), this config variable is not known. This can lead to server errors (500), because the FPM manifest does not know about the new config variable.
Any idea how to fix this?
Kind regards, Robert

The only way to mix slightly different PHP versions is to use a temp folder in the root of the project:
Create a silverstripe-cache folder in the project root folder.
Add putenv('APACHE_RUN_USER=php-fpm'); to your _ss_environment.php file to force the name of the cache folder to be 'php-fpm'.
It is then up to system configuration to ensure that both php-fpm and the CLI have write access to the 'silverstripe-cache/php-fpm' folder.
See framework/core/TempPath.php for the logic.
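For illustration, a minimal _ss_environment.php sketch of that setup (the 'php-fpm' name is just the example from above; use whatever name suits your FPM pool and deploy user):
<?php
// _ss_environment.php (project root) -- sketch of the steps above.
// With a silverstripe-cache folder in the project root, TempPath.php uses it
// instead of /tmp, and APACHE_RUN_USER controls the per-user subfolder name,
// so FPM and CLI both end up in silverstripe-cache/php-fpm regardless of
// their exact PHP versions.
putenv('APACHE_RUN_USER=php-fpm');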

Related

Clearing cache manifest via CLI

I've automated the deployment of my site, and I have a script that runs framework/sake /dev/build "flush=1". This works, however it clears the cache directory of the user who runs it, which is different from the apache user (which I can't run it as).
I've read a few bug reports and people talking about it on the SS forum, however either there is no answer or it doesn't work, for example:
define('MANIFEST_FILE', TEMP_FOLDER . "/manifest-main");
I thought about just deleting the cache directory, however its name contains a randomised string, so it's not easy to script.
What's the best way to clear the cache via the command line?
To get this to work, you first need to move the cache from the default directory to within the web directory by creating a folder named silverstripe-cache at the web root. Also make sure the path is read/write for the web server user (SilverStripe's default config blocks it from being readable by the public).
Then you can script:
sudo -u apache /path/to/web/root/framework/sake dev/build "flush=1"
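As a quick sanity check (a sketch; the path is a placeholder for your own web root), you can run a tiny script both as the deploy user and via sudo -u apache to confirm that both can write to the same folder:
<?php
// check-cache.php -- hypothetical helper; run it as both users:
//   php check-cache.php
//   sudo -u apache php check-cache.php
$cache = '/path/to/web/root/silverstripe-cache';   // placeholder path from the answer above
printf(
    "user=%s dir=%s writable=%s\n",
    trim(shell_exec('whoami')),
    $cache,
    is_writable($cache) ? 'yes' : 'no'
);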

Magento Cronjob cache configuration

Has anyone experienced changing a configuration value from the Magento admin
and clearing all caches (including removing the cache folder),
but a cron job still loading the old configuration value? (If using a browser or a cURL call, it loads the correct configuration.)
Note:
I'm using nginx + Ubuntu + PHP-FPM on an AWS EC2 instance; MySQL is on RDS.
I'm using cron.sh as the Magento cron.
I'm using a custom Magento module whose cron job loads the old configuration (even core Magento configuration values).
I'm not sure whether the cron job itself is cached; restarting cron may help (not yet tested), but I still don't know the root cause.
Any idea?
---- just tested ----
restarting the cron service did not work
restarting the nginx service did not work
restarting php-fpm did not work
rebooting the machine worked
You can try to use this method. It should work for your configuration.
http://www.emiprotechnologies.com/blog/magento-technical-notes-60/post/automatically-refresh-cache-using-cron-in-magento-307
I finally found that, since the cron job runs as the ubuntu user, the Magento logic is:
if the var folder (owned by www-data) is not writable by the current user (ubuntu),
then write to /tmp/magento/var/ instead -- see Mage_Core_Model_Config_Options.
Thus all the old cache was stored in /tmp/magento/var/; even when I cleared the cache from the Magento backend, it did not clear /tmp/magento/var.
The above issue can be resolved by changing var/ to 777, or by manually deleting /tmp/magento/var/cache (a crude workaround).
However, the 777 approach causes another problem:
if the cron user creates a log file that is shared with www-data, the file will not be writable by www-data (644 by default).
Another solution is to change the cron user to www-data.
However,
Magento is quite special: cron.sh calls cron.php, and cron.php then calls cron.sh again via /bin/sh.
Since www-data has no access rights to run /bin/sh, I can't use it to run cron.
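What that behaviour boils down to, as a rough sketch (this is an illustration of the fallback described above, not the exact Magento source -- see Mage_Core_Model_Config_Options for the real implementation):
<?php
// Sketch: why the cron run ended up caching under /tmp/magento/var.
// If the configured var/ directory is not writable by the current user,
// Magento falls back to a directory under the system temp dir, so the
// cron user (ubuntu) and the web user (www-data) end up with separate caches.
$varDir = '/path/to/magento/var';   // owned by www-data in this setup
if (!is_dir($varDir) || !is_writable($varDir)) {
    $varDir = rtrim(sys_get_temp_dir(), '/') . '/magento/var';   // e.g. /tmp/magento/var
}
echo "var dir in use: $varDir\n";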

Executing suPHP from memory

I manage a Linux server that runs Apache with PHP and suPHP. Like most setups, all program files are currently stored on disk.
I want to run suPHP from RAM. I copied everything in the suphp folder (config files, folders, and the program, totaling about 2.8 MB) to RAM. Then, on disk, I renamed the folder so the old version doesn't get accessed. I used the -a switch with cp to preserve permissions and so on.
I then made two attempts, both of which failed.
First, I made a link (using ln -s) named suphp that points to the suphp folder in RAM. When I browsed the file/folder structure, everything looked identical, as if the setup was ready to work.
Since that didn't work, I made another attempt by removing the suphp symbolic link, then creating an empty suphp folder and mount-binding it (using mount --bind) to the suphp folder in RAM. That did not work either.
I then looked at my Apache error_log; during the time I tried these steps, until I restored the original setup, I received various error messages that all contained text similar to this:
"(2)No such file or directory: couldn't create child process: (suphp folder location)/sbin/suphp for (full path to php file on website)".
What baffles me is: why would it report 'no such file or directory' instead of some other error?
suPHP will be loaded into RAM by Apache since it is an Apache module (mod_suPHP). There is no reason to move the actual files/executables to the system's "ram" directory and set up symlinks as you have done.
If your intent is to keep PHP scripts in memory, consider using an opcode cache like the one built into PHP 5.5 (OPcache) or APC; see How to use PHP OPCache?
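For reference, a minimal check (assuming PHP >= 5.5 with the bundled OPcache extension enabled in php.ini) to confirm that scripts are actually being cached in memory:
<?php
// opcache-check.php -- hypothetical helper to verify the opcode cache is active.
$status = function_exists('opcache_get_status') ? opcache_get_status(false) : false;
if ($status !== false) {
    printf(
        "OPcache enabled: %s, cached scripts: %d\n",
        $status['opcache_enabled'] ? 'yes' : 'no',
        $status['opcache_statistics']['num_cached_scripts']
    );
} else {
    echo "OPcache not active (load the extension and set opcache.enable=1 in php.ini)\n";
}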

Laravel 4 - cloning the local project on the VPS

I use Laravel 4 to develop my projects.
I develop on my Mac, commit to Git, then clone it on the server (linode 1G VPS).
Since "vendor" folder is by default "GIT-ignored", I usually do "composer install" after cloning the project on the server.
After that, any other packages I install locally, I do "composer update" on the server.
Yesterday, I reported this problem - PHP Composer update "cannot allocate memory" error (using Laravel 4)
So far, I have not found a solution. I even tried a "fresh" clone and "composer install"; it still gives me the memory error. This is extremely frustrating.
My question then is: is it OK to just upload my entire project to the server? Since the "vendor" folder is the only thing that is git-ignored, if I just copy everything there, would it work? (I haven't tried it since my server is live at the moment and I don't want to damage anything.)
What is the actual role of the "compiled.php" file? Is it platform dependent? Can I copy that file too?
I've seen this memory issue quite a few times now and have read other people reporting similar issues. I hope I can just upload the entire project folder and cross my fingers that it will work.
Thanks for your help!
I do not have a VPS, or even shell access to my custom/shared hosting from my provider, but I can run git and composer commands without that.
Use sshfs: http://osxfuse.github.io/
sshfs makes an SFTP connection to your server and mounts the server into a local directory.
This way, you can run git and composer commands locally. You do not depend on your VPS/hosting server; sshfs sends the files to the remote server in the background.
To mount the VPS to a local dir, run this:
sshfs user@serverip:. /path/to/existing/local/dir   # to mount the root dir
cd !$   # to get into the mounted dir
# or
sshfs user@serverip:foldername /path/to/existing/local/dir   # to mount a specific dir
cd !$   # to get into the mounted dir
Now you can do whatever you want.
A good thing for you to know: it is possible to set up Laravel config in such a way that the same app (the very same copy of the code) can act differently on different servers (environments).
I am writing that because if you sync your remote server with a local copy of the code, sooner or later you will stumble upon issues like having to change the DB credentials or app setup after every sync, which of course doesn't make sense :)
Check out the Laravel 4 docs on environment configuration to read more about that, or follow this tutorial by Andrew Elkins: How to set Laravel 4 Environments.
The environment is based on URL matches.
You'll find that configuration in /bootstrap/start.php:
$env = $app->detectEnvironment(array(
    'local' => array('your-machine-name'),
));
Now say you are developing locally and use the prefix/postfix local, e.g. my-new-site.local or local.my-new-site:
$env = $app->detectEnvironment(array(
    'local' => array('local.*', '*.local'),
));
That sets the environment; now to use it, you'll need to create a local folder in /app/config/:
mkdir app/config/local
And say you want to have a different database configuration for local: just copy the database config file into the local directory and modify it:
cp app/config/database.php app/config/local/database.php
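For example, a stripped-down app/config/local/database.php might look like this (the credentials are placeholders; the environment file is merged over the base config, so only the keys that differ need to be listed):
<?php
// app/config/local/database.php -- local-only override with placeholder values.
return array(
    'connections' => array(
        'mysql' => array(
            'host'     => 'localhost',
            'database' => 'my_local_db',
            'username' => 'root',
            'password' => 'secret',
        ),
    ),
);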
To sum up and answer your questions:
1) I guess it's OK to copy the whole project dir to the remote server (although, if you're copying vendor, it might take a lot of time - it usually contains a large number of files).
2) If you do so, remember to have the updated composer.json on the remote server (to reflect all the necessary requirements).
3) If you are using different database servers locally and remotely, you obviously have to run migrations and seeders on the remote server (this also concerns package migrations/seeds).
4) After you migrate all your files, run
composer dump-autoload
php artisan optimize --force --env=YourProductionEnvironmentName
which should rebuild the bootstrap/autoloader files.
5) If you are using the Laravel environments setup mentioned above, remember to have your remote server seen as production (if your local is testing/staging).
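Regarding point 5), a sketch of what the detectEnvironment array from above might look like once the remote server is added (the 'your-remote-hostname' entry is a placeholder to adapt to your own setup):
$env = $app->detectEnvironment(array(
    'local'      => array('local.*', '*.local'),
    'production' => array('your-remote-hostname'),   // placeholder for the remote server
));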

PHP + nginx + Vagrant - PHP fails to write

I have a Vagrant box set up running my dev code, which is an nginx/PHP setup.
(Quick info on Vagrant - it's a VirtualBox wrapper: http://www.vagrantup.com/.)
In the Vagrant/VirtualBox setup, the Linux guest additions are used to mount a shared folder from my host computer (Mac OS X).
Linux guest path: /var/www/local
OS X host path: ~/src/
On multiple occasions, I find that PHP can't write anything through any function (file_put_contents, fwrite, etc.) to any location on the mounted shared folder. However, it is able to write outside of /var/www/local (for example /var/www/not-mounted/..).
I find this very difficult to work with, as I am using a cache system and it keeps failing to write any of the cache JavaScript/CSS files to /var/www/local/public/root/cache/, which needs to be in the root folder of my website (/var/www/local/public/root/index.php).
I have done a lot of research on this topic:
It seems the folder mount has the right permissions:
When I type the mount command in the Linux guest, I get this:
/var/www/local on /var/www/local/ type vboxsf (uid=1000,gid=1000,rw)
To clarify:
This happens all the time; it is a known problem that I try to work around.
From cat /etc/passwd:
vagrant:x:1000:1000:vagrant,,,:/home/vagrant:/bin/bash
Can anyone help me on this?
I have figured out the problem.
I had forgotten to give PHP the correct user privileges and permissions to write to the folder. Basically, my PHP user/group was www-data/www-data; however, Vagrant has its own user/group (vagrant/vagrant), which is what the /local/ folder is mounted as.
Since I did not want to mess with my Vagrant mounting behaviour, I simply changed my PHP config to start PHP with the user/group vagrant/vagrant.
This fixed the issue for me.
Thanks for the help!
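For reference, a quick way to double-check which user the FPM worker actually runs as, and whether it can write to the shared folder (a sketch; the relevant setting is the user/group in your php-fpm pool config, whose location varies by distro):
<?php
// whoami.php -- drop this into the web root and request it through nginx/php-fpm.
// Assumes the posix extension; otherwise use shell_exec('whoami') instead.
$user     = posix_getpwuid(posix_geteuid());
$cacheDir = '/var/www/local/public/root/cache';   // path from the question above
$test     = @file_put_contents($cacheDir . '/write-test.tmp', 'ok');
printf(
    "fpm user=%s, can write to %s: %s\n",
    $user['name'],
    $cacheDir,
    $test !== false ? 'yes' : 'no'
);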
