I've been searching and can't find a clear answer, at least not one that's clear to me.
I've made a newsletter and sent it with a mail script.
Since it had a lot of emails, I set ignore_user_abort() and let it keep sending emails even if I closed the browser.
Now I'd like to check the progress, I mean, whether it has finished sending all the emails.
I understand that there are functions like posix_kill and getpid, but I don't know if they are the right ones to use.
getpid gives me the ID, but how can I know the name of the script that is running? Maybe another script is running that I don't know about.
Thanks a lot.
Try the following:
Before the script starts to send emails, create a file, for example emails.log.
When the script is done, remove it.
When the script starts again, check whether the file exists, and if it does, abort the script.
The only issue I can see is that if the script dies unexpectedly, or the page times out, the file will never be removed.
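A minimal PHP sketch of that idea (the file name and loop variables here are just examples); writing the PID and a counter into the file also lets you check progress from outside, which was the original question:
<?php
$lock = '/tmp/newsletter.lock';   // hypothetical lock file

if (file_exists($lock)) {
    // A previous run is still going (or died without cleaning up).
    exit('Newsletter is already being sent.');
}

ignore_user_abort(true);          // keep going if the browser is closed
file_put_contents($lock, getmypid());

// $emails, $subject and $body are placeholders for your own data.
foreach ($emails as $i => $address) {
    mail($address, $subject, $body);
    // Record progress so another script can read it at any time.
    file_put_contents($lock, getmypid() . ': ' . ($i + 1) . ' of ' . count($emails));
}

unlink($lock);                    // mark the run as finished
A separate status script can then read that file, and posix_kill($pid, 0) (signal 0) can be used to test whether the recorded PID is still alive.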
To avoid the stale file problem you can create a cronjob on your server and configure it to run daily, weekly or whatever you want. The shell script will look something like:
newsletter.sh
#!/bin/bash
# Run the newsletter only if no other run is already in progress.
if [ ! -f /tmp/newsletter.log ]
then
    touch /tmp/newsletter.log            # mark the run as started
    php /var/www/newsletter.php > /dev/null
    rm /tmp/newsletter.log               # mark the run as finished
fi
Good luck!
Two-part question:
I am running a script that executes a second script.
I have it set up this way because I read that if I put a script in the /etc/init.d directory it will run at startup. (True or false?)
I have tried adding >> LoopTriggerLogging.log at the end of each line, but nothing comes out in the log file.
So I have a first script as follows
#!/bin/bash
/var/www/Dev/LoopTrigger.sh
exit
This triggers the following script to run
#!/bin/bash
while true; do
    # do some work
    php /var/www/Dev/FindNearestDriverAndSendPush.php
    php /var/www/Dev/ProcessCustomerPayment.php
    php /var/www/Dev/ProcessDriversPayment.php
    # write to LoopTriggerLogging.log
    sleep 2 # sleep and repeat
done
What I would like is to have the commands logged along with any errors. I have tried to read a little on this but got lost in the answers and what they were trying to tell the user. I am still new at this and learning, so kindly give a definition for any commands or options you use. I am open to a best-practice scenario.
Also, will putting the script in the /etc/init.d directory tell it to run at startup?
Is there a way to run this script without it taking over the command line, since it is an endless script?
My ultimate goal is to get the 3 PHP files to execute every 2 seconds, with some sort of logging.
I did some reading on cron, but it seems it is not meant for this type of use case.
I've also seen this:
exec > logfile 2>&1 (don't know what this does)
set -x makes bash print every command before executing it
FOO=BAR (don't know what this means)
echo $FOO (don't know what this means)
if I put a script in the /etc/init.d directory it will run it at start up. (true or false)
True. If you put a script in /etc/init.d it will run at every startup (on most distributions it also needs to be executable and registered, e.g. with update-rc.d on Debian-based systems).
My ultimate goal is to get the 3 php files to execute every 2 seconds
You are using the correct way of running it: approximately every 2 seconds, depending on the time your PHP scripts take to run. Crontab runs at a minimum interval of one minute, so that would not be an option.
I have tried adding >> LoopTriggerLogging.log at the end of each line but nothing comes out in the log file
You can use /var/www/Dev/LoopTrigger.sh >> LoopTriggerLogging.log 2>&1 in your first script so that whenever it runs it will
Create the file the first time and append the content from then on.
Push all the output of the second script into the file (the 2>&1 part merges error output into the log as well).
Note: as logs keep appending to a single file, it will become very large at some point, so make sure you handle that well (for example by rotating it).
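Since everything here runs PHP anyway, the loop itself could also be moved into a small PHP runner that timestamps and logs each command's output. A sketch (the script paths are taken from the question, everything else is illustrative):
<?php
// Sketch of a PHP runner replacing the bash while-loop.
$scripts = [
    '/var/www/Dev/FindNearestDriverAndSendPush.php',
    '/var/www/Dev/ProcessCustomerPayment.php',
    '/var/www/Dev/ProcessDriversPayment.php',
];
$log = '/var/www/Dev/LoopTriggerLogging.log';

while (true) {
    foreach ($scripts as $script) {
        // 2>&1 merges errors into the captured output.
        $output = shell_exec('php ' . escapeshellarg($script) . ' 2>&1');
        file_put_contents($log, date('c') . ' ' . $script . "\n" . $output, FILE_APPEND);
    }
    sleep(2); // wait 2 seconds and repeat
}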
$output = shell_exec('echo "php '.$realFile.'" | at '.$targTime.' '.$targDate.' 2>&1');
print $output;
Can someone please help me figure out why the above line isn't doing what it's supposed to do? The idea is for it to create an at job that will execute a PHP script. If I switch to the user apache (which will ideally control the at function when the PHP file is complete) I can run
echo "php $realFile.php" | at 00:00 05/30/17
and it'll do EXACTLY what I want. The problem is that the above snippet from my PHP file will not create the at job correctly. When I do an at -c job# on both of them, the job made from my file is about a third of the length, missing the user info and everything. It basically starts at PATH= and goes down; it doesn't include HOSTNAME=, SHELL=, SSH_CLIENT=, SSH_TTY=, or USER=. I assume it needs most of this info to run correctly. The end output (below) is always the same, though; it just doesn't have any of the top part for some reason. Let me know if you need more info. I didn't want to paste all of my code here as it contains job-specific information.
${SHELL:-/bin/sh} << 'marcinDELIMITER0e4bb3e8'
php "$realFile".php
marcinDELIMITER0e4bb3e8
It doesn't seem to be a permission issue, because I can su to apache and run the exact command needed. The folder the files are located in is also owned by apache. I've also resorted to giving each file I try to run 777 or 755 permissions through chmod, so I don't think that's the issue.
I figured out a couple of ways around it a while back. The way I'm using right now is an ssh2 connection to my own server as root, creating the job that way. No compromise, as you have to enter the password manually each time. A really bad workaround. The main issue is that apache doesn't have the correct permissions to do everything needed for the at job, so someone figuring that out would be awesome. Another option I found on a random webpage would be to use sudo through the PHP script, but that's basically the same, minus having to reconnect to your own server. Any other options would be appreciated.
Reading the manual and logs would be a good place to start. In particular:
The value of the SHELL environment variable at the time of at invocation will determine which shell is used to execute the at job commands. If SHELL is unset when at is invoked, the user’s login shell will be used; otherwise, if SHELL is set when at is invoked, it must contain the path of a shell interpreter executable that will be used to run the commands at the specified time.
Other things to check are that the user is included in at.allow, that SELinux is disabled, and that the webserver is not running chroot.
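Following that manual passage, one thing to try (an untested sketch, reusing $realFile, $targTime and $targDate from the snippet above) is to set SHELL explicitly when invoking at, so the job does not depend on apache's stripped-down environment:
<?php
// Untested sketch: give the at invocation an explicit SHELL so the job
// commands are run by a known shell regardless of apache's environment.
$cmd = 'echo ' . escapeshellarg('php ' . $realFile)
     . ' | SHELL=/bin/bash at ' . $targTime . ' ' . $targDate . ' 2>&1';
$output = shell_exec($cmd);
print $output;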
I made a script that shouldn't return anything to the browser (no echo, print, or interruptions of the code with blank space, like ?> <?), and that uses ignore_user_abort(true); so that, once the browser window is closed, the process doesn't stop.
Thus once the script is launched, it should run to the end.
The script is designed for a newsletter, and it sends one email every 5 seconds, to respect the spam policies of my provider, through mail();.
That said, what's happening is that after about 20 minutes of running (the total number of emails is 1002), the script "collapses", with no error returned.
Hence my question: is there a lifetime limit for scripts running with ignore_user_abort(true);?
EDIT
Following the suggestion of Hanky (here below) I added the line:
set_time_limit(0);
But the issue persists
So whilst ignore_user_abort(true); will prevent the script from stopping after a visitor browses away from the page, it is set_time_limit(0); that removes the time limit. You can also change the PHP memory_limit in your php.ini, or by setting something like php_value memory_limit 2048M in your .htaccess file.
In order to list the default max_execution_time you can run echo ini_get('max_execution_time'); (seconds) or echo ini_get('memory_limit'); (megabytes).
This being said, it sounds like your PHP script is better suited to being run from the CLI. Using the command line you can run PHP scripts directly; this sounds better suited to your usage, since from what you have described the script doesn't really need to serve anything to the web browser. This method is better for PHP scripts that run as a background process rather than returning a front-end to the user.
You can run a file from the command line simply by running php script.php or php -f script.php.
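Putting both settings together, the top of the newsletter script would look something like this (a sketch; the memory value is just an example):
<?php
// Keep running after the visitor closes the browser window.
ignore_user_abort(true);

// 0 removes PHP's max_execution_time limit entirely.
set_time_limit(0);

// Optional: raise the memory limit for a long run (value is an example).
ini_set('memory_limit', '256M');

// ... send the newsletter here, e.g. one mail() call every 5 seconds ...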
Initially there was no way to solve the issue, and the provider is still investigating.
Meanwhile, following your suggestions, I was able to get it running. I created a TEST file and fired it off to verify:
exec("/php5.5/bin/php -f /web/htdocs/www.mydomain.tld/home/test.php > /dev/null 2>&1 &");
It worked. I set up a sleep(600); and sent 6 emails, plus one that informs me when the process is really finished.
It runs in a transparent way till the end.
Thank you so much for your support.
I have a file, let's say file1.php, that within the script executes a file using exec("php-cli -f _DAEMON.php"). After executing the exec() command it needs to run more code, but the problem is that _DAEMON.php, as its name says, is a daemon, and it will never stop running, so it freezes file1.php without allowing the rest of the code to run.
Is there a way to allow the code to continue executing even if exec("php-cli -f _DAEMON.php") has not finished? Or to detect if the code stalls for more than x seconds/milliseconds and continue?
Thanks.
Maybe try using a socket (curl might work with a low timeout; not sure offhand whether it'll kill the script though). Not ideal, and it will add some overhead.
http://phplens.com/phpeverywhere/?q=node/view/254
Also, doriana_gd was probably referring to something like node.js, server-side JavaScript.
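Another option, the same backgrounding trick used elsewhere in these threads (a sketch, not tested against this particular setup): redirect the daemon's output and append & so that exec() returns immediately instead of waiting:
<?php
// file1.php (sketch): start the daemon in the background.
// Redirecting all output is what lets exec() return right away;
// without it, PHP waits for the command to finish.
exec('php-cli -f _DAEMON.php > /dev/null 2>&1 &');

// ... the rest of file1.php keeps executing here ...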
My project calls for 3 PHP scripts that are run with if-else conditions. The first script is loaded on the index page of the site to check if a condition is set, and if it is, it calls the second script. The second script checks to see if other conditions are set, and it finally calls the last script if everything is good.
Now, I could do this by just including the scripts in the if statement, but since the final result is a resource-hogging MySQL dump, I need it to run independently of the original trigger page.
Also, those scripts should continue doing their thing once triggered, regardless of the user's actions on the index page.
One last thing: it should be able to run on both Windows and *nix.
How would you do this?
Does the following code make any sense?
if ($blah != $blah_size) {
    shell_exec('php first-script.php > /dev/null 2>/dev/null &');
} else {
    // If the size matches, die.
}
Thanks a million in advance.
UPDATE: just in case someone else is going through the same deal.
There seems to be a bug in PHP when running scripts as CGI, but calling the command-line binary works with all the versions I've tested under Apache.
See the bug https://bugs.php.net/bug.php?id=11430
So instead I call the script like this:
exec("php-cli mybigfile.php > /dev/null 2>/dev/null &");
Or you could call it via shell. It works on *nix systems, but my local Windows is hopeless, so if anyone runs it on Windows and it works, please update this.
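For the Windows side (untested here, offered only as a sketch): the usual trick is start /B via popen(), since Unix-style & backgrounding doesn't apply:
<?php
// Sketch: fire-and-forget on Windows. "start /B" launches the command
// without opening a new window; pclose(popen(...)) returns immediately.
pclose(popen('start /B php mybigfile.php', 'r'));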
I would not do this via shell_exec, because you'd have no control over how many of these resource-hogging processes would be running at any one time. A user could go click-click-click-click and essentially halt your machine.
Instead, I'd build a work queue. Rather than running the dump directly, the script would submit a record to some sort of FIFO queue (it could be a database table or a text file in a directory somewhere) and then return immediately. Next you'd have a cron script that runs at regular intervals and checks the queue to see if there's any work to do. If so, it picks the oldest item and runs it. This way, you're assured that only one dump is ever running at a time.
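A minimal sketch of that queue idea, using a directory of job files (the paths and names here are made up for illustration):
<?php
// In the trigger script: enqueue a job and return immediately.
$queueDir = '/var/spool/dump-queue';            // hypothetical directory
file_put_contents($queueDir . '/' . time() . '-' . uniqid() . '.job', 'dump');

// In the cron worker (run at regular intervals): process the oldest job only.
$jobs = glob($queueDir . '/*.job');
sort($jobs);                                    // timestamped names sort oldest first
if (!empty($jobs)) {
    $job = $jobs[0];
    // ... run the resource-hogging MySQL dump here ...
    unlink($job);                               // remove the job when finished
}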
The easiest way I can think of is to do
exec("screen -d -m php long-running-script.php");
and then it will return immediately and run in the background. screen will allow you to reattach to it later (screen -r) and see what's happening.
You can also do what you're doing with nohup php long-running-script.php, or by writing a simple C app that daemonizes and then execs your script.