Has anyone experienced this: change a configuration value from the Magento admin,
clear all caches (including removing the cache folder),
but the cron job still loads the old configuration value? (With a browser or a cURL call, it loads the correct configuration.)
Notes:
I'm using nginx + Ubuntu + PHP-FPM on an AWS EC2 instance; MySQL is on RDS
I'm using cron.sh as the Magento cron
I'm using a custom Magento module whose cron job loads the old configuration (even core Magento configuration values)
I'm not sure whether the cron job itself gets cached; restarting cron may help (not tested yet), but I still don't know the root cause.
Any ideas?
---- just tested -----
restarting the cron service: doesn't work
restarting the nginx service: doesn't work
restarting php-fpm: doesn't work
rebooting the machine: works
You can try this method. It should work for your configuration.
http://www.emiprotechnologies.com/blog/magento-technical-notes-60/post/automatically-refresh-cache-using-cron-in-magento-307
Finally found it. Since the cron job runs as the ubuntu user, Magento's logic is:
if the var folder (owned by www-data) is not writable by the current user (ubuntu),
then it writes to /tmp/magento/var/ instead (see Mage_Core_Model_Config_Options).
Thus all the old cache is stored under /tmp/magento/var/, and even when I clear the cache from the Magento backend, it does not clear /tmp/magento/var/.
The issue above can be resolved by changing var/ to 777, or by crudely deleting /tmp/magento/var/cache by hand.
However, the 777 approach causes another problem:
if the cron user creates a log file that is shared with www-data, that file will not be writable by www-data (644 by default).
Another solution is to change the cron user to www-data.
However,
Magento is a special case here: cron.sh calls cron.php, and cron.php then calls cron.sh again via /bin/sh.
Since www-data has no right to run /bin/sh, I can't use it to run cron.
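One possible workaround (my assumption, not something tested in this thread): bypass cron.sh entirely and have the system crontab run cron.php directly as www-data, so everything under var/ keeps a single owner. The su -s flag overrides www-data's login shell, which sidesteps the /bin/sh problem above. A sketch for /etc/crontab (the Magento path is hypothetical):

```
# /etc/crontab entry (system crontab format: the 6th field is the user)
*/5 * * * * root su -s /bin/sh www-data -c "php /var/www/magento/cron.php"
```

This only runs the dispatcher; whether it covers everything cron.sh normally does depends on your Magento version.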
Related
I have a folder above the webroot that is used to temporarily store user files generated by a PHP web application. The files may, for example, be PDFs that are going to be attached to emails.
The folder permissions are set to rwxr-xr-x (0755). When executing a procedure from the web application, the files get written to this folder without any issues.
I have now also set up a cron job that calls the php script to execute that exact same procedure as above. However, the PDF cannot be saved into the above folder due to failed permissions - the cron job reports back a permission denied error.
I have tried setting the folder permissions to 0775 and still get a permission denied. However, when the permissions are 0777, then the cron job then works fine.
This seems very strange to me - why does the cron job get permission denied at 0755 but work fine through the web app?
The probable answer is that the cron job executes under your user - and the directory is owned by apache (or www-data or nobody or whatever user your web server runs as).
To get it to work, you could set up the cron job to run as the web server user.
Something like this:
su -l www-data -c 'crontab -e'
Alternatively, you could change the permissions to 775 (read-write-execute for the owner and group, and read-execute for others) and set the group ownership of the folder to the user running the cron job.
However, bear in mind that if you're deleting something or descending into a folder created by apache, you could still run into problems (apache would create a file which it itself owns, and your user then cannot delete it, regardless of the directory permissions).
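A sketch of the group-ownership approach (paths are throwaway; in a real setup you would also run sudo chgrp www-data on the folder, group name assumed). The setgid bit (the leading 2) makes new files inherit the directory's group, which helps avoid the "apache created it, my user can't touch it" trap described above:

```shell
# Group-writable shared folder with the setgid bit set.
dir=/tmp/shared-upload-demo
mkdir -p "$dir"
chmod 2775 "$dir"            # rwxrwsr-x
stat -c '%a' "$dir"          # prints 2775 (GNU stat)
```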
You could also look at something like suPHP (or whatever is current), where the web server processes are run under your username, depending on your system architecture.
It depends on which user the cronjob is defined for.
If it's root (not recommended) it should work. If it's the web user (e.g. www-data on Ubuntu) it should work as well.
sudo su - www-data
crontab -e
Permissions are granted to user, group, and everybody; that's what the three digits denote.
Your PHP script runs as a different user and group than the cron job, so they observe different permissions.
Check chown and chgrp, or try to run the cron job as the same user.
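A quick way to see the mismatch (paths here are placeholders): compare the folder's owner with the user the cron entry runs as. If they differ and the mode is 0755, only the owner can write, which explains the error:

```shell
# Who owns the target folder, and who am I?
dir=/tmp/ownership-demo
mkdir -p "$dir"
chmod 755 "$dir"
echo "owner: $(stat -c '%U' "$dir"), current user: $(id -un)"
```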
If you are using cPanel to run a PHP script, you can try something like this:
"php /home/algo/public_html/testcron.php" ...
i.e. just write: php (the path of the script)/yourscript.php
I am trying to get a PHP / Drupal based website running. The site also uses Solr, something I've never used before. I inherited this site, and the documentation I was left says that I may need to restart Solr, which can be done by running:
sudo /etc/init.d/tomcat6 restart
I can see from my Drupal admin that Solr isn't running, so I tried running the command. Unexpectedly, I got a message saying sudo: tomcat6: command not found. However, when I list the directory, tomcat6 is clearly there.
These are the permissions:
-rwxr-xr-x 1 root root 7929 Mar 16 2012 tomcat6
Does anyone know what the problem with this is and how I can resolve it?
This apparently was always working and I haven't installed anything since I started with this linux machine (VM).
You should run it as ./tomcat6 start (from within its directory); if you want to run it directly as tomcat6 start, you need to add the script's directory to your PATH environment variable.
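The "command not found" despite the file being right there comes down to $PATH; a small self-contained demonstration with a throwaway script (sudo keeps its own PATH, which is why the init.d script was not found):

```shell
# A script that exists but is not on $PATH must be invoked by path.
mkdir -p /tmp/demo-initd
printf '#!/bin/sh\necho started\n' > /tmp/demo-initd/myservice
chmod +x /tmp/demo-initd/myservice
/tmp/demo-initd/myservice                  # explicit path: prints "started"
PATH="$PATH:/tmp/demo-initd" myservice     # on PATH: also prints "started"
```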
I've automated the deployment of my site, and I have a script that runs framework/sake /dev/build "flush=1". This works, but it clears the cache directory of the user who runs it, which is different from the apache user (which I can't run it as).
I've read a few bug reports and threads about this on the SS forum, but either there is no answer or the suggestion doesn't work, for example
define('MANIFEST_FILE', TEMP_FOLDER . "/manifest-main");
I thought about just deleting the cache directory however it's a randomised string so not easy to script.
What's the best way to clear the cache via the command line?
To get this to work, you first need to move the cache from the default directory into the web directory by creating a silverstripe-cache folder at the web root. Also make sure the path is read/write (SilverStripe's default config blocks it from being publicly readable).
Then you can script:
sudo -u apache /path/to/web/root/framework/sake dev/build "flush=1"
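If moving the cache into the web root isn't an option, the randomised temp directory can still be scripted around with a name pattern instead of an exact path (the silverstripe-cache* pattern and /tmp location are my assumptions; check what ls /tmp actually shows on your box):

```shell
# Simulate a randomised cache dir, then sweep anything matching the
# pattern so the script never needs to know the random suffix.
mkdir -p /tmp/silverstripe-cache-abc123
find /tmp -maxdepth 1 -type d -name 'silverstripe-cache*' -exec rm -rf {} +
ls -d /tmp/silverstripe-cache* 2>/dev/null || echo "cache cleared"
```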
I'm using Laravel, and whenever the logs or the cache are written to the storage folder, files are created with 755 permissions and owned by daemon. I have run sudo chown -R username:username app/storage and sudo chmod -R 775 app/storage numerous times. I have even added username to the daemon group and daemon to the username group.
But it still writes files as daemon, with 755 permissions, meaning that username can't write to them.
What am I doing wrong?
This one has also been bugging me for a while but I was too busy to hunt down a solution. Your question got me motivated to fix it. I found the answer on Stack Overflow.
In short, the solution is to change the umask of the Apache process. The link above mentions two possible places to add umask 002:
/etc/init.d/apache2, or
/etc/apache2/envvars (Debian/Ubuntu) or /etc/sysconfig/httpd (CentOS/Red Hat)
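For context, here's what umask 002 actually changes for newly created files (GNU coreutils stat; file names are throwaway). With the default 022, new files come out 644 and the group can't write; with 002 they come out 664:

```shell
# 666 & ~022 = 644 (group read-only); 666 & ~002 = 664 (group-writable)
rm -f /tmp/umask-demo-022 /tmp/umask-demo-002
( umask 022; touch /tmp/umask-demo-022 )
( umask 002; touch /tmp/umask-demo-002 )
stat -c '%a %n' /tmp/umask-demo-022 /tmp/umask-demo-002
```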
Edit
I recently upgraded from Ubuntu 12.04 32-bit to 14.04 64-bit and, to my great irritation, I could not get this to work. It worked for some PHP scripts but not others - specifically, a short test script I wrote worked fine, but the Laravel caching code did not. A co-worker put me on to another solution: bindfs.
By mounting my project directory (/var/www/project) in my home directory (~/project) with the appropriate user mapping, all my problems were solved. Here's my fstab entry:
/var/www/project /home/username/project fuse.bindfs map=www-data/username:#www-data/#usergroup
Now I work in ~/project - everything looks like it's owned by username:usergroup and all filesystem changes work as if I own the files. But if I ls -la /var/www/project/, everything is actually owned by www-data:www-data.
Perhaps this is an overly-complicated solution, but if you have trouble getting the umask solution to work, this is another approach.
In this instance Apache isn't doing anything wrong. Apache reads and writes files based on the User and Group settings in its configuration file. The configuration file in question is typically something like /etc/httpd/conf/httpd.conf, but the location and even the name differ depending on the system you're using.
It's also worth noting that if you're running PHP via something such as FastCGI, it will use the user that FastCGI is set to run as, since that is the part that creates and modifies files, not Apache.
I'm setting up a new server and of course I didn't document every change I did to the last one but I'm getting there.
I have a weird issue, I'm trying to do a simple call in php:
exec('service httpd reload');
And it's not doing anything. I can execute other commands such as tar. I did check php.ini for disable_functions and it's empty. The username PHP uses for creating files/folders is "apache" as well.
Does anyone know any other areas I can check? This is a fresh install of PHP 5.2.x, so I'm sure there is a security setting in Apache or something blocking this.
Well, your Apache is most probably running under a normal user account (www-data or apache, depending on your distribution), but to restart Apache (or any other service) you have to be root.
You could use sudo to elevate your privileges.
You can't restart Apache as a normal user, but you should never leave your root password written in a file. If you really have to run that command from php, there's an alternative method.
You can allow certain commands to be run as root by a certain user without specifying a password. To do this you must edit the /etc/sudoers file with visudo and add the tag NOPASSWD to the command you want to run. Here is the example from the man page:
ray rushmore = NOPASSWD: /bin/kill, /bin/ls, /usr/bin/lprm
This would allow the user ray to run /bin/kill, /bin/ls, and /usr/bin/lprm as root on the machine rushmore without authenticating himself.
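Applied to the httpd reload case above, a sketch (the apache username and the service path are assumptions; always edit sudoers with visudo, never directly):

```
# /etc/sudoers (or a file under /etc/sudoers.d/, edited via visudo)
apache ALL = NOPASSWD: /sbin/service httpd reload
```

Then in PHP you would call exec('sudo /sbin/service httpd reload'). Note that the command must match the sudoers entry exactly, arguments included, or sudo will still prompt for a password.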