I am trying to trigger an update of AWStats from inside a PHP script.
I currently use a cron job to trigger the update, and simply copied the command line into an exec function within the script.
if(exec("/path/to/awstats.pl -config=domain.com -update")) {
echo 'Logs processed';
}
However, this returns a false positive. Although the "Logs processed" line is displayed, AWStats has not processed the stats information.
AWStats works perfectly when visited directly and when the update runs via the cron job; it just doesn't work from this PHP script. I have checked the error logs; there is no problem with my script, and AWStats is not timing out.
Am I missing something?
For the record, this script is designed to purge the old data, update a blacklist of referrers to block spam, and then recompile the stats data from the log files. Yes, I am aware of the performance issues of using the SkipReferrerBlackList directive.
It seems from your code that you think exec returns a boolean indicating success or failure. It doesn't; it just returns a string (the last line of output from the command). And strings (except "0" and the empty string) always evaluate to true.
To debug the problem you should print the output of the command:
exec("/path/to/awstats.pl -config=domain.com -update", $output);
echo join(PHP_EOL, $output);
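For a reliable success check, use exec()'s third parameter, which receives the command's exit status (0 conventionally means success). A minimal sketch:
exec("/path/to/awstats.pl -config=domain.com -update", $output, $status);
if ($status === 0) {
    echo 'Logs processed';
} else {
    // Show what the command actually printed, for debugging.
    echo 'Update failed:' . PHP_EOL . join(PHP_EOL, $output);
}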
There must be a reason for this, but I can't find it!
I have a script that can take around 10 minutes to execute. It does a lot of communicating with an API on a service that we use; it pulls a fingerprint of everything every 24 hours, so what it's doing is beside the point. The problem I'm finding is that the script stops executing somewhat randomly!
I can't find any errors that would cause my script to stop executing, even with
//for debugging
error_reporting(E_ALL);
ini_set('display_errors', '1');
on for debugging, it's all clean. I've also used
set_time_limit(0);
so that it shouldn't ever time out.
With that said, I'm not sure how to get any more debug info to figure out why it's stopping. I can say that the script should NOT be hitting any memory limits or anything like that; that would throw an error, and I've gone through and cleaned this script up as much as I can see to clean it up.
So my Question is: What are common causes for a cron ending when it shouldn't? How can I debug this more effectively?
You could try using register_shutdown_function() to define a code block that will execute when the script shuts down. Then update a variable at the main execution points in the cron with details of what is going on. In the shutdown function, write this variable into a log and check the log to see what state the program was in when it stopped. Of course, this is based on the assumption that your code is not totally erroring out.
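A minimal sketch of that pattern (the step variable and log path are placeholders; the closure requires PHP 5.3+):
$currentStep = 'starting';
register_shutdown_function(function () use (&$currentStep) {
    // Runs on normal completion, fatal error, or timeout.
    file_put_contents('/path/to/shutdown.log',
        date('c') . " script stopped at step: $currentStep\n", FILE_APPEND);
});
$currentStep = 'calling API';
// ... do the work ...
$currentStep = 'done';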
You could also redirect the standard echo statements and logs into a log file by using
/path/to/cron.php > /path/to/log.txt 2>&1
2>&1 indicates that standard error (2>) is redirected to the same file descriptor that is pointed to by standard output (&1). So both standard output and standard error will be redirected to /path/to/log.txt.
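For example, a crontab entry along these lines (the schedule and paths are illustrative):
*/10 * * * * /usr/bin/php /path/to/cron.php > /path/to/log.txt 2>&1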
UPDATE:
Below is a function/flow that I usually use in my crons:
function addLog($msg)
{
    if (empty($msg)) return;
    // Append the message plus a line break to the log file.
    $handle = fopen('log.txt', 'a');
    fwrite($handle, $msg . "\r\n");
    fclose($handle);
}
Then I use it like so:
addLog("Initializing...");
init();
addLog("Finished initializing...");
addLog("Calling blah-blah API...");
$result = callBlahBlah();
addLog("blah-blah API returned value". $result);
It is more tedious to have all these logs, but when cron messes up, it really helps!
For example, when you look at your log.txt and you see something like:
Initializing...
Finished initializing...
Calling blah-blah API...
And there is no entry which says blah-blah API returned value, then you know that the function call to blah-blah messed up.
What are common causes for a cron ending when it shouldn't?
The most common in my experience is that the cron user has different permissions or different environment variables than the way that you're executing it from the command line.
Make your cronned program dump its environment to a temporary file and see if it's what you expect.
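For example, a one-liner you could drop into the cronned script (the output path is a placeholder):
// Dump the user and environment the cron job actually runs with.
file_put_contents('/tmp/cron_env.txt', shell_exec('whoami') . shell_exec('env'));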
I've been completely unsuccessful finding an answer to this question. Hopefully someone here can help.
I have a PHP script (a WordPress template, to be specific) that automatically imports and processes images when a user hits it. The problem is that the image processing takes up a lot of memory, particularly if multiple users are accessing the template at the same time and initiating the image processing. My server crashed multiple times because of this.
My solution to this was to not execute the image-processing function if it was already running. Before the function started running, I would check a database entry named image_import_running to see if it was set to false. If it was, the function then ran. The very first thing the function did was set image_import_running to true. Then, after it was all finished, I set it back to false.
It worked great -- in theory. The site hasn't crashed since, I can tell you that. But there are two major problems with it:
If the user closes the page while it's loading, the script never finishes processing the images and therefore never sets image_import_running back to false. The template will never process images again until it's manually set to false.
If the script times out while it's processing images -- and that's a strong possibility if there are many images in the queue -- you have essentially the same problem as No. 1: the script never gets to the point where it sets image_import_running back to false.
To handle No. 1 (the first of the two problems I noticed), I added ignore_user_abort(true) to the script. Did it work? I don't know, because No. 2 is still an issue. That's where I'm stumped.
If I could ask the server whether the script was running or not, I could do something like this:
if($import_running && $script_not_running) {
$import_running = false;
}
But how do I set that $script_not_running variable? Beats me.
I've shared this entire story with you just in case you have some other brilliant solution.
Try using ignore_user_abort(true); the script will continue to run even if the person leaves the page and closes the browser.
You might also want to store a number instead of true/false in the DB record and set a maximum number of processes that can run together, along the lines of the sketch below.
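A rough sketch of that idea using WordPress options (process_images() stands in for the existing routine; note the read-then-write is not atomic, so this reduces, rather than eliminates, the chance of extra concurrent runs):
$max_processes = 3; // maximum concurrent imports allowed
$running = (int) get_option('image_import_running', 0);
if ($running < $max_processes) {
    update_option('image_import_running', $running + 1);
    process_images(); // hypothetical: the existing image-processing routine
    // Decrement the counter when done.
    update_option('image_import_running', max(0, (int) get_option('image_import_running', 1) - 1));
}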
As others have suggested, it would be best to move the image processing out of the request itself.
As an interim "fix", store a timestamp alongside image_import_running when a processing job begins (e.g., image_import_commenced). This is a very crude mechanism, but if you know the maximum time that a job can run before timing out, the script can check whether that period of time has elapsed.
e.g., if image_import_running is still true but the current time is more than 10 minutes since image_import_commenced, run the processing anyway.
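A sketch of that interim fix, assuming both flags are stored as WordPress options and image_import_commenced holds a Unix timestamp (the 10-minute window and process_images() are illustrative):
$running   = get_option('image_import_running', false);
$commenced = (int) get_option('image_import_commenced', 0);
// Treat a lock older than 10 minutes as abandoned and run anyway.
if (!$running || (time() - $commenced) > 600) {
    update_option('image_import_running', true);
    update_option('image_import_commenced', time());
    process_images(); // hypothetical: the existing processing routine
    update_option('image_import_running', false);
}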
What about setting a transient with an expiry time that would throttle the operation?
if(!get_transient( 'import_running' )) {
set_transient( 'import_running', true, 30 ); // set a 30 second transient on the import.
run_the_import_function();
}
I would rather store the job in a database table, flag it as pending, and set up a cron job to execute the processing one job at a time.
For me, I just use this simple idea with a text file, for example a run.txt file.
At the top of the script, use:
if (file_get_contents('run.txt') != 'run') { // the script will do its work here
    $file = fopen('run.txt', 'w+');
    fwrite($file, 'run');
    fclose($file); // note: fclose() takes the file handle, not the filename
} else {
    exit(); // if it finds 'run' in run.txt, the script will stop
}
And add this at the end of your script file:
$file = fopen('run.txt', 'w+');
fwrite($file, ''); // clears the 'run' marker for the next try ;)
fclose($file);
That will check whether the script is already running by checking the contents of run.txt: if the word 'run' is in run.txt, the script will not run.
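A more robust variant of the same idea uses flock(), which the operating system releases automatically if the script dies, so a crash cannot leave a stale lock behind (a sketch, with a placeholder lock path):
$fp = fopen('/tmp/myscript.lock', 'c'); // 'c' creates the file if missing without truncating it
if (!flock($fp, LOCK_EX | LOCK_NB)) {
    exit('Already running' . PHP_EOL); // another instance holds the lock
}
// ... do the work ...
flock($fp, LOCK_UN); // released automatically on exit in any case
fclose($fp);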
Running a cron would definitely be a better solution, and the idea of storing the URL in a table is a good one.
To answer the original question, you can run a ps auxwww command with exec (see this page: How to get list of running php scripts using PHP exec()?) and move your function into a separate PHP file.
exec("ps auxwww|grep myfunction.php|grep -v grep", $output);
Just add the following at the top of your script.
<?php
// Ensures a single instance of the script runs at a time.
$fileName = basename(__FILE__);
// Count processes whose command line contains this script's name.
// The count includes this process itself and the shell spawned by
// shell_exec, hence the threshold of 2.
$output = (int) shell_exec("ps -ef | grep -v grep | grep $fileName | wc -l");
if ($output > 2)
{
    echo "Already running - $fileName\n";
    exit;
}
// Your PHP script code.
?>
I have a problem with the PHP filemtime function. In my webapp I use the Smarty template engine with the caching option enabled. I can do some actions that generate the error, but let's focus on only one. When I click a link on the page, some content is updated. I can click a few times and everything is OK, but about one request in 10 fails with the following error:
filemtime() [function.filemtime]: stat failed for
and the line that causes the problem:
return ($_template->getCachedFilepath() && file_exists($_template->getCachedFilepath())) ? filemtime($_template->getCachedFilepath()) : false ;
As you can see, the file exists, because that is checked first.
Problematic line of code is included in smarty_internal_cacheresource_file.php (part of Smarty lib v3.0.6)
App is run on UNIX system, external hosting.
Any ideas? Should I post more details?
file_exists internally uses the access system call which checks permissions as the real user, whereas filemtime uses stat, which performs the check as the effective user. Therefore, the problem may be rooted in the assumption of effective user == real user, which does not hold. Another explanation would be that the file gets deleted between the two calls.
Since both the result of $_template->getCachedFilepath() and the existence of the file can change between system calls, why call file_exists at all? Instead, I'd suggest just:
return @filemtime($_template->getCachedFilepath()); // @ suppresses the warning if the file has vanished
If $_template->getCachedFilepath() can be set to a dummy value such as false, use the following:
$path = $_template->getCachedFilepath();
if (!$path) return false;
return @filemtime($path);
Use:
Smarty::muteExpectedErrors();
Read this and this
I used filemtime successfully without checking "file_exists" for years. The way I have always interpreted the documentation is that FALSE should be returned from "filemtime" upon any error. Then a few days ago something very weird occurred. If the file did not exist, my Cron job terminated with a result. The result was not in the program output but rather in the Cron output. The message was "file length exceeded". I knew the Cron job ended on the filemtime statement because I sent myself an email before and after that statement. The "after" email never arrived.
I inserted a file_exists check on the file to fix the Cron job. However, that should not have been necessary. I still do not know what was changed on the hosting server I use. Several other Cron jobs started failing on the same day. I do not know yet whether they have anything to do with filemtime.
Short story
I've got a PHP script filtering incoming mail via a .qmail file. The script works perfectly well and logs all activity, but the last .qmail line is still executed even though my script returns exit code 99, which, as far as I know, should stop the processing of further .qmail lines.
Long story:
I'm using a Parallels Plesk Panel version 9.3.0 under Linux 2.6.18-4-686.
My PHP CLI version is 5.2.0-8+etch16 (cli) (built: Nov 24 2009 11:14:47).
Not satisfied with Spamassassin, Dr. Web and zen.spamhaus.org and their results, I decided to create my own PHP script for filtering all incoming mail.
(An aside to some of you who might think "this guy is reinventing the wheel": I know my customers personally and their specific needs so, after thousands of tests, this turned out to be the best option because it avoids black box models and lets me control the process in a comprehensive way, also freeing server resources and opening doors to other cool functionality).
However I'm having a hard time installing the script at the server.
qmailfilter is my script and you can see it at http://titanpad.com/1IFDj1jvB0
I edited an existing .qmail file in /var/qmail/mailnames/customerdomain.com/username/.qmail to be:
|/var/my/qmailfilter/qmailfilter
|/usr/bin/deliverquota ./Maildir
The qmailfilter PHP script executes and logs perfectly when I send a message to this user account, and returns the exit code (99 to discard the message, 0 to proceed to the next .qmail line and deliver the message).
It turns out that it delivers the message irrespective of the many exit codes I've already tried.
The script (see line 174) outputs a text exit code without any whitespace before or after. I tried exit($code), print $code, echo($code) and even file_put_contents("php://stdout", $code), and also exit(chr($code)).
dot-qmail codes are:
0 - Success (go to next .qmail line)
99 - Success and abort (do not execute next lines)
100 - permanent error (bounce)
111 - soft error (retry later)
Source: The Big Qmail Picture.
Other attempts/experiments:
Removed the shebang line (#!/usr/bin/php) and changed the first .qmail line to |php -q /var/my/qmailfilter/qmailfilter
Checked the last line of the script for whitespacing
Read dot-qmail man file but nothing conclusive was found
Joined .qmail lines:
|/var/my/qmailfilter/qmailfilter |/usr/bin/deliverquota ./Maildir
In this case I got a message having only the proper return code without any header, subject or message body.
Commented out (#) the second .qmail line, but stopped receiving any kind of messages.
Edited /var/qmail/control/defaultdelivery to add a first line:
|php /var/my/qmailfilter/qmailfilter
|/usr/bin/deliverquota ./Maildir
and renamed user .qmail file to _qmail. Same results.
Should I deliver the message via PHP script and forget exit codes?
If so, is it enough to save the message to the user Maildir/new?
If so, is the message filename important?
Any idea will be appreciated. Thanks very much!
UPDATE: For those of you who need it, I published the final script at icebex.com slash qmailfilter
I only took a quick look at the code, but it looked like you were using string values. exit('99') and exit(99) are not the same. Make sure you use integers and not strings.
exit('99') will print 99 and return 0.
exit(99) will return 99.
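So a filter skeleton along these lines should produce the codes qmail expects (a sketch; looks_like_spam() is a placeholder for the actual filtering logic):
#!/usr/bin/php
<?php
// qmail pipes the full message to the script on STDIN.
$message = file_get_contents('php://stdin');
if (looks_like_spam($message)) { // hypothetical check
    exit(99); // integer: success, stop processing further .qmail lines
}
exit(0); // integer: success, continue to the next .qmail line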
I have a PHP script that creates other PHP files based on user input. Basically, there are files containing language specific constants (define) that can be translated by the user. In order to avoid runtime errors, I want to test newly written files for parse errors (due to "unusual" character sequences). I have read several posts here on SO (like PHP include files with parse errors) and tried a function that uses
$output = exec("php -l $filename");
to determine whether a file parses correctly. This works perfectly on my local machine, but on the provider's machine, the output of calls to exec("php ...") seems to be always empty. I tried a call to ls and it gives me output, leading me to the assumption that PHP is somehow configured not to react to command-line invocations or some such. Does anyone know a way around this?
EDIT: I forgot to mention, I had already tried shell_exec and it gives no result, either. In response to sganesh's answer: I had tried that too, sorry I forgot to mention. However, the output (second argument) will always be an empty array, and the return value will always be 127, no matter if the PHP file to test has syntax errors or not.
I had the same problem. The solution that worked for me was found in running-at-from-php-gives-no-output. I needed to add output redirection.
$output = exec("php -l $filename 2>&1");
You can try exec()'s second and third arguments.
The second argument will receive the output of the command.
The third argument will receive the return value.
exec() itself returns only the last line of the command's output.
$filename = "a.php";
$output = exec("php -l $filename",$op,$ret_val);
print $output."\n";
print $ret_val."\n";
var_dump($op);
By executing shell_exec(), you can see the output as if you had run the command on the command line, and check it for an error right there. Note that php -l reports success with the phrase "No syntax errors detected", so it is more robust to test for the absence of that phrase than for an error string (whose wording and capitalization vary):
<?php
// php -l prints "No syntax errors detected in <file>" on success.
if (strpos(shell_exec('php -l file.php 2>&1'), 'No syntax errors') === false) {
    die('An error!');
}
There may also be a possibility that shell_exec() or exec() has been disabled by your host.
Nice idea to check the file validity :-)!
Now, from the PHP manual for exec():
Note: When safe mode is enabled, you can only execute files within the safe_mode_exec_dir. For practical reasons, it is currently not allowed to have components in the path to the executable.
Can you check if this is not the case for you?
Also, can you check by providing the full path of the PHP interpreter in the exec() instead of only php? (Your return value of 127 is the shell's "command not found" status, which would fit.) Let me know how you fare.
Pinaki
The correct way is to add 2>&1, as tested on a Windows system using ImageMagick!
I worked around my original problem by using a different method. Here is what I do now:
Write a temporary file with contents <?php include "< File to test >"; echo "OK"; ?>
Generate the correct URL for the temporary file
Perform HTTP request with this URL
Check if result equals "OK". If yes, the file to test parses without errors.
Delete temporary file
Maybe this could be done without the temporary file by issuing an HTTP request to the file to test directly. However, if there is a parse error and errors are suppressed, the output will be empty and indistinguishable from the output of a file that has no parse errors. This method is risky because the file is actually executed instead of just checked. In my case, only a limited number of users have access to this functionality in the first place. Still, I'm naturally not entirely happy with it.
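A sketch of that workaround (the paths and URL are placeholders, and fetching the URL with file_get_contents assumes allow_url_fopen is enabled; note again that this executes the file rather than merely linting it):
// Hypothetical locations, for illustration only.
$probe = '/var/www/html/lint_probe.php';
file_put_contents($probe, '<?php include "/path/to/file_to_test.php"; echo "OK"; ?>');
$result = file_get_contents('http://example.com/lint_probe.php');
$parsesCleanly = ($result === 'OK'); // on a parse error, the output won't equal "OK"
unlink($probe); // clean up the temporary file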
Why the exec() approach did not work, I still do not know exactly. Pinaki might be right in suggesting that I provide the full path to the PHP executable, but I cannot find out the full path.
Thank you everyone for answering, I upvoted you all. However, I cannot accept any of your answers as none of your suggestions really solved my problem.