I use Windows Task Scheduler to start a PHP script that works perfectly fine. Basically: C:\php.exe -f C:\myscript.php
In my script some work happens that sometimes makes me want to run the task script again in 5 minutes.
I tried to implement this by changing the task's settings to restart every 5 minutes if the task fails, and having my PHP code exit(1). Task Scheduler seems to know that I exited with an error code of 1, but it does not run the script again.
Does anyone know what I can do to make Task Scheduler try again in 5 minutes if I signal it from my code somehow?
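One workaround, independent of Task Scheduler's own retry setting, is to point the task at a small wrapper that retries the script itself whenever it exits non-zero. This is only a sketch: it is written as a POSIX shell script to show the logic, and on Windows the same idea would go into a batch or PowerShell wrapper. The retry count and 5-minute delay are illustrative values.

```shell
#!/bin/sh
# Retry wrapper: run the given command; on a non-zero exit, wait and retry.
# Usage (illustrative): retry.sh php -f /path/to/myscript.php
MAX_TRIES=${MAX_TRIES:-3}     # give up after this many failed attempts
DELAY=${DELAY:-300}           # seconds between attempts (5 minutes)
tries=0
until "$@"; do
    tries=$((tries + 1))
    if [ "$tries" -ge "$MAX_TRIES" ]; then
        echo "giving up after $tries failed attempts" >&2
        exit 1
    fi
    sleep "$DELAY"
done
```

The scheduled task then runs the wrapper instead of php.exe directly, and the script's exit(1) drives the retry rather than Task Scheduler's failure handling.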
Not an answer to the question as phrased, but might serve as a fallback if you can't get it working: make your job run every 5 minutes, regardless, and then track "last success"/"last failure" yourself, in a database or file.
Before doing anything else, the script can check the logged status: if there was a failure last time, try again (up to a limited number of tries, presumably); if there was a success last time, exit immediately, unless it's time for the next job anyway (e.g. if the original schedule was daily, check whether $last_success is more than 24 hours ago).
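The answer above suggests doing this tracking inside the PHP script, against a database or file. As a rough sketch of the same idea, here it is as a shell script using a status file (filenames and the daily interval are illustrative, and the retry limit mentioned above is omitted for brevity):

```shell
#!/bin/sh
# Fallback sketch: schedule this every 5 minutes, but only do the real work
# when the last run failed or the original (e.g. daily) interval has elapsed.
# STATUS_FILE holds "<epoch-seconds> <ok|fail>" from the previous run.
STATUS_FILE=${STATUS_FILE:-/var/tmp/job.status}
INTERVAL=${INTERVAL:-86400}     # original schedule: once a day

now=$(date +%s)
last=0; result=fail
[ -f "$STATUS_FILE" ] && read -r last result < "$STATUS_FILE"

# Last run succeeded recently: nothing to do yet.
if [ "$result" = ok ] && [ $((now - last)) -lt "$INTERVAL" ]; then
    exit 0
fi

# Run the real job (passed as arguments) and record the outcome.
if "$@"; then
    echo "$now ok" > "$STATUS_FILE"
else
    echo "$now fail" > "$STATUS_FILE"
fi
```

Invoked every 5 minutes as, say, `job.sh php -f /path/to/myscript.php`, it retries after a failure on the next 5-minute tick but otherwise only does real work once per interval.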
I have created a task to open a website every x minutes.
This is what I have.
program: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe"
argument: https://phpfile on my server
start in: C:\Program Files (x86)\Google\Chrome\Application\
It starts manually but never repeats automatically.
It shows the repeat time correctly but never repeats; the repeat time just keeps updating.
I basically want to run a PHP script on my website every few minutes.
Please help.
I solved my problem in Windows 10 by setting multiple triggers on my task in Task Scheduler.
After that I rebooted my computer and it's finally working fine.
This is the link I referred to: https://superuser.com/questions/865067/task-scheduler-repeat-task-not-triggering.
Below is the screenshot of my task's properties in Task Scheduler.
You can also create a scheduled task and set the trigger to a time after the present (not in the past, as it won't start then), say 10 minutes from now. Then don't run the task manually; wait 10 minutes and Windows will start scheduling the task, say, every hour or whatever you set. Basically, Windows Task Scheduler needs at least one automatic trigger, rather than a manual Run by a user, to start running the task automatically.
I had the same issue and solved it by removing spaces from the task name and from the executed path. Task Scheduler doesn't like spaces.
To check for online users who don't log out properly but just close the browser, I want to run a cron job every 2-3 seconds. (I am already updating the database every 10 seconds while a user is logged in.)
Will it be harmful for the server?
Cron's lowest possible frequency is 1 minute, so you cannot fire anything more often with it. As for overkill: it may or may not be, but you need to review the code and the load it produces yourself.
You can't run cron jobs more frequently than once a minute, so this isn't possible from cron anyway. You'd have to have a process running in a loop with sleep to achieve this, but the idea is overkill anyway; once a minute is fine.
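The loop-with-sleep approach mentioned above could be sketched like this (the 2-second interval and the PHP script path are illustrative):

```shell
#!/bin/sh
# Sub-minute "cron": run the given command in a loop with a fixed pause.
# Cron itself cannot fire more often than once a minute.
INTERVAL=${INTERVAL:-2}   # seconds between runs, as in the question
while true; do
    "$@"                  # e.g. php /path/to/check_online_users.php
    sleep "$INTERVAL"
done
```

You would start this once (for example from an @reboot cron entry or a service) rather than scheduling it, and let it run indefinitely.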
I have several cron jobs executing php scripts. The php scripts sometimes do some heavy jobs like updating hundreds of records at a time in a mysql table.
The problem is that the job should run every minute. However, it randomly misses runs and as a result does not execute every minute. Sometimes it executes every 4-6 minutes, then goes back to every minute, misses 2-3 more times, and then is normal again.
I am on centos 6.5
Please note that the PHP runs correctly and there is no problem whatsoever with the PHP scripts themselves, since whenever the job does run I get the expected results. About 10 other similar scripts run at the same time (every minute, or every 5 minutes for the other scripts).
Job:
/usr/bin/php "/var/www/html/phpScriptToExecute.php" >> /var/www/html/log/phpScriptLog.log 2>&1
My take is that it may be a problem with too many scripts running concurrently and accessing the database at the same time.
Last information: No error in the /var/log/cron file or in the phpScriptLog.log file.
The reason could be that your cron job takes more than 1 minute to execute; print out the start time and end time at the end of the script to verify it.
If the cron job is still running, Linux won't execute it again.
My guess is it's caused by a PHP fatal error, but your PHP probably isn't configured to send error messages to stderr, which is why your phpScriptLog.log is empty. You could check your php.ini (or just use ini_set()) for the following:
display_errors: set it to on/true if you want errors to show on stderr
log_errors: set it to true if you want to send the error messages to a file
error_log: point it to a file you want the errors to be stored in
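As a sketch, the three settings above could look like this in php.ini (the log path is illustrative; the same values can be set at runtime with ini_set()):

```
; php.ini - error reporting for CLI/cron scripts
display_errors = On            ; on the CLI, errors are written to stderr
log_errors = On                ; also send error messages to a file
error_log = /var/log/php_cron_errors.log
```

With log_errors enabled, a fatal error that kills the script mid-run would at least leave a trace in the error_log file even when nothing reaches phpScriptLog.log.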
Or, if you want a solution to avoid overlapping cron jobs, there are plenty here in SO.
I am running a PHP file via a cron job every 5 hours, as it takes a long time (approx. 4 hours sometimes). But I am noticing that somehow it restarts again and again during this 5-hour gap. (I can see it in my log file.)
It gets a connection timeout when run manually via wget. Maybe this is the reason the cron job stops and runs several times.
Can anyone suggest an idea to overcome this situation?
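If the job is being triggered through wget, the restarts may simply be wget's own retry behaviour: by default it retries a failed or timed-out fetch (up to 20 times), which would match the "restarting again and again" symptom. Two things worth trying, sketched as crontab entries with illustrative paths and URL:

```
# run the script directly with the PHP CLI - no HTTP timeout is involved,
# and the CLI has no max_execution_time limit by default
0 */5 * * * /usr/bin/php /path/to/longjob.php >> /var/log/longjob.log 2>&1

# or, if it must go over HTTP, stop wget from retrying after a timeout
0 */5 * * * wget -q -O /dev/null --tries=1 --timeout=60 https://example.com/longjob.php
```

The first form is generally the safer one for a 4-hour job, since the web server's own timeouts no longer apply either.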
From further investigation I have found there is an issue with exactly how long certain instances of the task run. Most successfully run in a few seconds, yet a few consecutive tasks run for up to an hour (and are then stopped by the manager). At this point, any information about the Windows scheduled task life cycle would be appreciated.
I have a php file which needs to be called every 10 minutes.
The php file deals with new database entries that are updated on the call.
The file is executed via a scheduled task that calls php.exe with the -f argument (followed by said file).
The task is performed as expected, and runs without fail.
However, I have noticed two problems since its initial run.
1st, I had a file_put_contents call that appended a 'Schedule Task()' line plus the current time to a log file every time the task ran. For the first few days this worked fine, but then one evening the command failed to execute because the log file had filled up (a .txt at just under 1 GB).
I checked, and the logfile read as:
Schedule Task() 2014-10-10 18:00:00.000
Schedule Task() 2014-10-10 18:00:00.000
Schedule Task() 2014-10-10 18:00:00.000
(repeated for 1000s of lines, but still with milliseconds time incrementing every 500 lines or so)
Where I'd expect it to display as:
Scheduled Task () 2014-10-10 18:00:00.000
Scheduled Task () 2014-10-10 18:10:00.000
Scheduled Task () 2014-10-10 18:20:00.000
This suggests that the process got stuck on that line, as the milliseconds for 500 lines were equal.
To resolve this, I moved the file_put_contents line after an IF statement to ensure the line would only be written to the log file when new records were in the database.
The problem never happened again.
2nd, however, 3 weeks after the first scheduled task (today), I noticed that the scheduled task's history is filled with errors, at 10-minute increments.
The error code given is 'Duplicate instances running'. (Which is correct, as I have set the option not to run a new instance if one is already in progress.)
This suggests that the Task Scheduler is executing the action multiple times (multiple processes).
At this stage, I'm not sure whether it is a fault of the scheduled task, or a code issue where I am not correctly ending the script. Has anyone had a similar issue?
Update
From looking into the Scheduled Task manager logs, I can see that there is an issue with the task running longer than 1 hour (see image).
The call runs every 10 minutes; however, the task run (red line) is not exiting correctly.