I have a PHP console script that is called via cron which itself among other things creates a tar file of a directory.
When calling the PHP script via cron, the tar file is not created correctly. The following error is given when inspecting the tar file:
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
When calling the PHP script manually from the console, the tar file is created correctly. The cron log output shows no errors.
Here is the tar call from the PHP script:
exec("cd $this->backupTempFolderName/$id; tar -czf ../../$this->backupFolderName/$tarFileName $dbDumpFileName documents");
Does anybody have an idea why the tar is created correctly when called manually but fails when called via cron?
Update: The error given while creating the tar file via cron is:
tar: ../../backup/20150819-060003.tar.gz: Wrote only 4096 of 10240 bytes
tar: Error is not recoverable: exiting now
Sometimes the error is:
tar: ../../backup/20150819-054002.tar.gz: Cannot write: Broken pipe
tar: Error is not recoverable: exiting now
As said before, when executed via cron the tar file is created, but it is always about 50% of the correct size (compared to executing the script manually):
-rw-r--r-- 1 gtz gtz 1596099468 Aug 19 06:25 20150819-042330.tar.gz <- Manually called script, working tar
-rw-r--r-- 1 gtz gtz 858570752 Aug 19 07:21 20150819-052002.tar.gz <- Script called via cron, broken tar
Update 2
After doing some further research based on the input given here, I should add that the cron-called script is running on a virtual private server. I suspect that some limitations exist for cron jobs that are not documented by the hoster (only a limit on the minimum repetition time is given in the docs).
That error usually comes from a lack of disk space.
I would do some more research on this by adding some logging before and after the tar execution.
Also check which user the cron job that runs the backup is configured to use. There could be a quota limit on that user as well, one that doesn't apply when you run the script from the console outside cron.
Ask your provider about quota limits on the VPS for users and for processes... that is what rings a bell here.
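For instance, a minimal sketch of such logging around the tar call (the log path is a placeholder; the && and 2>&1 are added so a failed cd is caught and tar's own messages are captured):
// Capture tar's messages and exit status instead of discarding them.
$cmd = "cd $this->backupTempFolderName/$id && tar -czf ../../$this->backupFolderName/$tarFileName $dbDumpFileName documents 2>&1";
exec($cmd, $output, $returnCode);
file_put_contents('/path/to/application/app/logs/backup-output.log',   // placeholder path
    date('c') . " tar exited with $returnCode\n" . implode("\n", $output) . "\n",
    FILE_APPEND);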
I guess that you have a resource limitation.
As M. Ivanov has said, add this command in your PHP script:
shell_exec("php -info");
and check this parameter both when you execute your script from the command line and from the cron job:
memory_limit => ???
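A quick way to compare the two environments from inside the script itself is to log the effective values; a minimal sketch (the log path is a placeholder):
// Log the limits this run actually sees, so the cron and console runs can be compared.
file_put_contents('/path/to/application/app/logs/php-env.log',   // placeholder path
    sprintf("%s sapi=%s memory_limit=%s max_execution_time=%s\n",
        date('c'), php_sapi_name(), ini_get('memory_limit'), ini_get('max_execution_time')),
    FILE_APPEND);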
You can also try running your cron command with the memory limit raised to 1600M:
php -d memory_limit=1600M scriptCompressor.php
Hope that helps :)
You may want to try creating the tarball directly from PHP to avoid the exec call. See this answer: https://stackoverflow.com/a/20062628/5260945.
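A hedged sketch of that approach with PharData (paths and property names are taken from the question and may need adjusting):
// Build the archive in PHP instead of shelling out to tar.
$tarPath  = "$this->backupFolderName/$tarFileName";            // ends in .tar.gz
$plainTar = preg_replace('/\.gz$/', '', $tarPath);             // PharData wants the plain .tar name first
$phar = new PharData($plainTar);
$phar->buildFromDirectory("$this->backupTempFolderName/$id");  // picks up the dump and the documents folder
$phar->compress(Phar::GZ);                                     // writes $plainTar.gz next to it
unset($phar);
unlink($plainTar);                                             // drop the uncompressed intermediate
Note that this does the compression inside the PHP process, so PHP's own memory and time limits apply, unlike the external tar call.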
Also, looking at your cron entry, there is no leading slash on your example. I know this could just be a typo for the comment, but make sure you have an absolute path for the cd command. The default environment for a cron job is not the same as for your login shell.
Cron jobs by themselves usually don't have limits. If you are using shared hosting, the provider may have installed some enforcement scripts, but I suspect those would also break your console backups.
If you are running cron jobs from inside some container, e.g. Drupal, those have their own limits.
Also check bash limits with:
ulimit -a
Report disk space before the backup starts and afterwards, just in case. It's usually quite small on a VPS.
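A hedged sketch of such a report from inside the PHP script (the log path is a placeholder; ulimit is a shell builtin, so it runs through the shell):
// Record free disk space and the shell limits right before the tar call,
// and the disk space again right after it, for comparison with a manual run.
$log = '/path/to/application/app/logs/backup-debug.log'; // placeholder path
file_put_contents($log, "before:\n" . shell_exec('df -h .; ulimit -a'), FILE_APPEND);
// ... existing exec() call that builds the tar file goes here ...
file_put_contents($log, "after:\n" . shell_exec('df -h .'), FILE_APPEND);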
I am sure it's a memory or execution time problem.
Do one thing: run the same script on a directory which contains only a single test file and check the output. If your script works in this scenario, then it is almost certainly a memory problem.
Try tweaking the memory parameter and executing your script.
I hope this helps you.
Thanks
Looking at the following error:
tar: ../../backup/20150819-054002.tar.gz: Cannot write: Broken pipe
tar: Error is not recoverable: exiting now
It looks as if the exec call in the PHP script is not blocking, or is failing prematurely, so the PHP process that the cron job starts exits before the command finishes. This is just a guess, but you can try sending the command to the background when you run it from cron:
exec("cd $this->backupTempFolderName/$id; tar -czf ../../$this->backupFolderName/$tarFileName $dbDumpFileName documents &");
exec() should normally be blocking, so this is just a shot in the dark.
http://php.net/manual/en/function.exec.php
The EOF error occurs when you try to execute the PHP file. It means you should check the code of your cron-called PHP file; it may be that you forgot to close the brackets of a condition or a class, etc.
Good luck ['}
To write the errors to the log when executed via cron, replace >/path/to/application/app/logs/backup-output.log in the cron line with >/path/to/application/app/logs/backup-output.log 2>&1 (the 2>&1 has to come after the file redirection so that STDERR follows STDOUT into the file).
Also check the path in the cron line... maybe the change-dir is not working as you think. Try printing getcwd() to a log or something when running the PHP script from cron.
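For example, a minimal sketch (the log path is a placeholder):
// Append the working directory the script actually starts in under cron.
file_put_contents('/path/to/application/app/logs/cwd.log', getcwd() . PHP_EOL, FILE_APPEND);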
Edit: I wonder why this was voted not useful. The questioner mentioned that no errors are printed to the log when cron executes the script. That's not hard to imagine, as > only redirects STDOUT, not STDERR (on which PHP errors would get printed), to the log. So adding 2>&1 might reveal some new information.
Related
$output = shell_exec('echo "php '.$realFile.'" | at '.$targTime.' '.$targDate.' 2>&1');
print $output;
Can someone please help me figure out why the above line isn't doing what it's supposed to be doing? The idea is for it to create an at job that will execute a PHP script. If I switch to the user apache (which will ideally control the at function once the PHP file is complete) I can run
echo "php $realFile.php" | at 00:00 05/30/17
and it'll do EXACTLY what I want. The problem is that in the above snippet from my PHP file it will not create the at job correctly. When I do an at -c job# on both of them, the job made from my file is about a third of the length, missing the user info and everything. It basically starts at PATH= and goes down; it doesn't include HOSTNAME=, SHELL=, SSH_CLIENT=, SSH_TTY=, USER=. I assume it needs most of this info to run correctly. The end output (below) is always the same though, it just doesn't have any of the top part for some reason. Let me know if you need more info. I didn't want to paste all of my code here as it contains job-specific information.
${SHELL:-/bin/sh} << 'marcinDELIMITER0e4bb3e8'
php "$realFile".php
marcinDELIMITER0e4bb3e8
It doesn't seem to be a permission issue, because I can su to apache and run the exact command needed. The folder the files are located in is also owned by apache. I've also resorted to giving each file I try to run 777 or 755 permissions through chmod, so I don't think that's the issue.
I figured out a couple of ways around it a while back. The way I'm using right now is an ssh2 connection to my own server as root, creating the job that way. Not a good compromise, as you have to enter the password manually each time. Really bad workaround. The main issue is that apache doesn't have the correct permissions to do everything needed for the at job, so someone figuring that out would be awesome. Another option I found on a random webpage would be to use sudo through the PHP script, but that's basically the same minus having to reconnect to your own server. Any other options would be appreciated.
Reading the manual and logs would be a good place to start. In particular:
The value of the SHELL environment variable at the time of at invocation will determine which shell is used to execute the at job commands. If SHELL is unset when at is invoked, the user’s login shell will be used; otherwise, if SHELL is set when at is invoked, it must contain the path of a shell interpreter executable that will be used to run the commands at the specified time.
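Based on that, one thing worth trying from the PHP side is to set SHELL explicitly before invoking at; a hedged sketch built on the snippet from the question:
// The apache environment usually has no SHELL set, so force one for at to use.
putenv('SHELL=/bin/bash'); // assumption: bash lives at this path on the server
$output = shell_exec('echo "php '.$realFile.'" | at '.$targTime.' '.$targDate.' 2>&1');
print $output;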
Other things to check are that the user is included in at.allow, SELinux is disabled, and the web server is not running chroot.
I've created a service with reactphp which runs and does some stuff. It is started as a daemon so all output should be logged in a file.
This log file should be named 'foo-log-file-$(date "+%F")'. I want to have a separate log file for each day.
Problem:
As mentioned, the script runs as a service, without stopping. The starting call for the script is therefore only made once:
php my_script.php >> /var/log/bar/log-file--$(date "+%F") 2>&1
So everything this script prints to the console is saved into the file, but the file is created with the date string from the moment the script was started and is never updated with a new date.
Question:
Is it possible to solve this without writing the log logic into the PHP script? Can I handle this requirement with bash?
FYI
The answer from @fedorqui was a good approach. I solved it with a cron job which copies the file to a dated one and empties the original.
You cannot use move (mv), because the running service keeps the file open the whole time, and you get the error:
cannot move 'foo.log' to 'bar.log': Text file busy
So I cp it and clear the old one with:
cp foo.log foo.log.$(date +"%F");
cp /dev/null foo.log;
I have 2 .php files in my application: book.php and weather.php. I created a file named "runscript" in /.openshift/cron/minutely. Its contents are:
#!/bin/bash
php -f $OPENSHIFT_REPO_DIR/weather.php
This script sends a message to my phone every minute; that works fine.
Then I replace it with:
php -f $OPENSHIFT_REPO_DIR/book.php
This script MUST send me a message too, but nothing is happening. But if I just run the script from my web browser (going to http://xxx-xxxxxxx.rhcloud.com/book.php), I get my message. How is that possible? Magic?
Did you miss the #!/bin/bash part? That's needed to run the shell script.
For why your cron job is not executing, check the cron logs on OpenShift. You can find them at ~/app-root/logs/cron_*.log when you SSH into your gear.
Make sure your cron job is executable with chmod and has the shebang line as @gnaanaa says. Also check whether you have one of the .openshift/cron/minutely/jobs.{allow,deny} files, as they may cause cron to skip your job. (See the cron README for more information.)
And after your cron job is working, you can get rid of the wrapper script runscript and have cron call book.php directly. To do so, place book.php directly into .openshift/cron/minutely, make it executable, and add this shebang to it:
#!/usr/bin/env php
Hope this helps.
I use OpenShift as well and execute a PHP file from cron too.
#!/bin/bash
php ${OPENSHIFT_REPO_DIR}index.php
At first sight this executes the script normally. However, no output was produced. The problem was that the required PHP files couldn't be loaded, because the working directory is not the same as it would be when the script is loaded by the web server. Setting the working directory in the PHP script itself prevents this error and makes the script run fine from cron.
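For example, a minimal sketch at the top of the script (assuming the includes are relative to the repository directory):
// Switch to the repo directory so relative require/include paths resolve
// the same way under cron as under the web server.
chdir(getenv('OPENSHIFT_REPO_DIR') ?: __DIR__);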
This should help some people to get their script running.
I basically have a cron job calling one script every minute. The script stops immediately if the previous run is still going (it checks the previous run's activity time).
So I introduced a bug, and the script went into an infinite loop (I know it was called by cron at least one time). I created a fix and uploaded it to the server, but I'm still wondering:
How long will the bugged script run?
How can I know if it is still running?
What does terminate a script and why?
The script just echoes out the same text over and over again.
P.S. PHP's max execution time within the script is set to 0 (infinite) and I don't have direct access to the server, only FTP.
How can I know if it is still running?
Just set up a new cron job, but have the cron command be something that helps you debug.
A useful one would be:
ps -af | grep php > /some/path/to/mylogfile.txt
The ps command lists info on running processes. With those flags, part of the output will be the original Linux command that started the process, so we can grep the output and look for php, because the original command was probably something like:
php myscript.php
The output is redirected to mylogfile.txt for you to read manually after the cron job runs.
The process ID should be part of that output. You can then use the kill command on that process ID, again by just entering the command as a one-off cron job.
Until the script runs into a timeout (max_execution_time defined in the php.ini file, or set via the set_time_limit() function).
Have a look at the running processes.
Send a kill command to the process, or wait till a timeout occurs.
PS: you have two php.ini files, one for the command line and one for Apache. Be sure to change the max_execution_time in the command-line ini file.
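For reference, the limit can also be inspected or set from inside the script itself; a minimal sketch:
// 0 means unlimited; the CLI and Apache SAPIs can use different ini files and values.
echo ini_get('max_execution_time'), "\n";
set_time_limit(300); // example: force the script to stop after 5 minutes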
I know there have been similar questions, but they don't solve my problem...
After checking out the folders from the repo (which works fine), a method is called from jQuery to execute the following in PHP:
exec ('svn cleanup '.$checkout_dir);
session_write_close(); //Some suggestion that was supposed to help but doesn't
exec ('svn commit -m "SAVE DITAMAP" '.$file);
These would output the following:
svn cleanup USER_WORKSPACE/0A8288
svn commit -m "SAVE DITAMAP" USER_WORKSPACE/0A8288/map.ditamap
1) The first line, exec('svn cleanup ...'), executes fine.
2) As soon as I call svn commit, my server hangs and everything goes to hell.
The apache error logs show this error:
[notice] Child 3424: Waiting 240 more seconds for 4 worker threads to finish.
I'm not using the php_svn module because I couldn't get it to compile on Windows.
Does anyone know what is going on here? I can execute the exact same command from the terminal window and it runs just fine.
Since I cannot find any documentation on a jQuery exec(), I assume this is calling PHP's exec(). I copied this from the documentation page:
When calling exec() from within an apache php script, make sure to take care of stdout, stderr and stdin (as in the example below). If you forget this and your shell command produces output, the sh and apache daemons may never return (they will normally time out after a few minutes). From the calling web page the script may seem to not return any data.
If you want to start a php process that continues to run independently from apache (with a different parent pid), use nohup. Example:
exec('nohup php process.php > process.out 2> process.err < /dev/null &');
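Following that advice, a hedged sketch of the commit call that redirects stderr and captures the exit code, so nothing is left dangling on the apache side:
// Merge stderr into stdout, capture everything, and log the result.
exec('svn commit -m "SAVE DITAMAP" ' . escapeshellarg($file) . ' 2>&1', $output, $returnCode);
error_log('svn commit exited with ' . $returnCode . ': ' . implode("\n", $output));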
hope it helps
Okay, I've found the problem.
It actually didn't have anything to do with running the exec in the background, especially because a one-file commit doesn't take a lot of time.
The problem was that the commit was expecting a --username and --password that were never supplied, which just caused apache to hang.
To solve this, I edited the svnserve.conf in the folder where I installed svn to allow non-authenticated users write access.
I don't think you'd normally want to do this, but my site already authenticates the user name and pass upon logging in.
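If you would rather keep authentication enabled, a hedged alternative is to pass the credentials explicitly and disable interactive prompting (the credential variables are placeholders; the flags are standard svn client options):
// Supply credentials on the command line and never prompt, so the commit
// cannot hang waiting for input under Apache.
$cmd = 'svn commit -m "SAVE DITAMAP" --non-interactive'
     . ' --username ' . escapeshellarg($svnUser)   // placeholder
     . ' --password ' . escapeshellarg($svnPass)   // placeholder
     . ' ' . escapeshellarg($file) . ' 2>&1';
exec($cmd, $output, $returnCode);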
Alternatively you could