How to set up a cron job command to execute a URL?
/usr/bin/wget -q http://www.domain.com/cron_jobs/job1.php >/dev/null 2>&1
Why can't I make this work!? I have tried everything. The PHP script should send an email and create some files, but neither happens.
The command returns this:
Output from command /usr/bin/wget -q http://www.domain.com/cron_jobs/job1.php ..
No output generated
... but it still creates an empty file in /root on each run!? Why?
Use curl like this:
/usr/bin/curl http://domain.com/page.php
Don't worry about the output, it will be ignored
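For example, a full crontab entry might look something like this (the schedule and URL are placeholders; the -s flag and the redirection just keep curl and cron quiet):
0 * * * * /usr/bin/curl -s http://domain.com/page.php >/dev/null 2>&1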
I had the same problem. The solution is understanding that wget outputs two things: the results of the URL request AND activity messages about what it's doing.
By default, if you do not specify an output file, it will create one, seemingly named after the file in your URL, in the current folder where wget is run.
If you want to specify a different output file:
-O outputfile.txt
will output the URL results to outputfile.txt, overwriting what's there.
If you wish to append to that file, write to stdout and then append to the file from there. And here's the trick: to write to stdout, use:
-O-
The second dash is in lieu of a filename and tells wget to write the URL results to stdout.
Then use the append syntax, >>, to append to a file of your choice:
wget -O- http://www.invisibility.com >>/var/log/invisibility.log
The lowercase o specifies the location of the activity log, so if you wish to log activity for the URL request, you can:
wget -o /var/log/activity.log http://someurl.com
-q suppresses the output of activity messages, so
wget -q -o /var/log/activity.log http://someurl.com
will not log any activity to the specified file, and I think that is the crux where people get confused.
Remember:
-O is shorthand for --output-document
-o is shorthand for --output-file, which is the activity log.
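Putting the two together, a sketch of a cron-friendly call that appends the page output to one file and keeps the activity log in another (both paths are just examples):
wget -O- -o /var/log/wget-activity.log http://someurl.com >> /var/log/wget-results.log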
Took me hours to get it working. Thank you to the people who wrote down solutions.
One also needs to check whether single or double quotes are needed, otherwise the URL will be parsed wrong, leading to error messages:
This worked (using single quotes):
/usr/bin/wget -O - -q 'http://domain.com/cron-file.php'
This gave errors (using double quotes):
/usr/bin/wget -O -q "http://domain.com/cron-file.php"
I don't know whether the /usr/bin/ prefix is needed. I have read different accounts of how to order the -O and -q options; it is hard to find a reliable, definitive source on the web for this subject, since there are so many different examples.
An online wget manual covering the available options can be found here (but check with the Linux distro you are using for an up-to-date version):
http://unixhelp.ed.ac.uk/CGI/man-cgi?wget
To use wget to display the HTML:
wget -qO- http://www.example.com
Related
I have the cron command below, which is sending me status output from wget. I really don't want this output, I just want the code to run.
wget http://www.domain.com/cron/dailyEmail 2>&1;
How can I turn off the output?
Send the output to the null device.
wget http://www.domain.com/cron/dailyEmail >/dev/null 2>&1
Or, send it to a temporary file like this:
wget http://www.domain.com/cron/dailyEmail >/tmp/my_wget.out 2>&1
That way, you can see the output if you need to but it doesn't otherwise bother you.
If you want to keep older copies of the output around rather than over-writing them on each run, you can use something like:
wget http://www.domain.com/cron/dailyEmail >/tmp/my_wget_$(date +%Y_%m_%d).out 2>&1
which will give you a filename containing the date (and time if you change the arguments to the date command) but then you'll probably want an automated process to clean up older log files.
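As a rough sketch of such a cleanup, a second cron entry could delete logs older than, say, seven days (the path, pattern and age here are only examples):
find /tmp -name 'my_wget_*.out' -mtime +7 -delete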
I am trying to automate the download of a file using wget, calling the PHP script from cron. The filename always consists of a base name and a date; however, the date changes depending on when the file is uploaded. The trouble is there is no certainty about when the file is updated, so the final name can never really be known until the directory is checked.
An example filename is file20100818.tbz
I tried using wildcards within wget, but they failed, both with * and %.
Thanks in advance,
Greg
Assuming the file type is constant, then from the wget man page:
You want to download all the GIFs from a directory on an HTTP server. You tried wget http://www.server.com/dir/*.gif, but that didn't work because HTTP retrieval does not support globbing. In that case, use:
wget -r -l1 --no-parent -A.gif http://www.server.com/dir/
So, you want to use the -A flag, something like:
wget -r -l1 --no-parent -A.tbz http://www.mysite.com/path/to/files/
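If you also want to avoid wget recreating the server's directory structure locally, it should be possible to add -nd (--no-directories) as well, something like:
wget -r -l1 --no-parent -nd -A.tbz http://www.mysite.com/path/to/files/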
For the sake of clarity, because this thread shows up in Google when searching for "wget and wildcards", because the answers above don't offer a sensible solution, and because there doesn't seem to be anything else on SO answering this:
According to the wget manual, you can use wildcards when using FTP with the option -g on (--glob=on); however, wget will return an error unless you also use the -r -np -nd options. Thanks to Wiseman20#ubuntuforums for showing us the way.
Sample code:
wget -r -np -nd --glob=on ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.*.tar.gz
You can for loop each date like this:
<?php
// Try today's date and each of the next 29 days until a download succeeds.
for ($i = 0; $i < 30; $i++) {
    $filename = "file" . date("Ymd", time() + 86400 * $i) . ".tbz";
    // Try the file download; if successful, break out of the loop
    // (the base URL is a placeholder; copy() over HTTP needs allow_url_fopen enabled).
    if (@copy("http://www.mysite.com/path/to/files/" . $filename, $filename)) {
        break;
    }
}
?>
You can increase the number of tries in the for loop.
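If you would rather stay in the shell, a rough sketch of the same idea is a loop over candidate dates that stops at the first successful download (this assumes GNU date and reuses the placeholder URL from above):
for i in $(seq 0 29); do
  wget -q "http://www.mysite.com/path/to/files/file$(date -d "+$i day" +%Y%m%d).tbz" && break
done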
I'm trying to get a cron job to run every 5 min on my localhost. Using the Cronnix app I entered the following command
0,5 * * * * root curl http://localhost:8888/site/ > /dev/null
The script runs fine when I visit http://localhost:8888/site/ in my browser. I've read some stuff about getting CI to run on cron, using wget and various other options, but none of it makes a lot of sense.
In another SO post I found the following command
wget -O - -q -t 1 http://www.example.com/cron/run
What is the "-O - -q -t 1" syntax exactly?
Are there other options?
-O - means the output (the document) goes to stdout; -O /dev/null would discard it entirely. -q means be quiet: don't print any progress bars, which would otherwise mess up the look of any log files. -t 1 means try only once; if the connection fails or times out, it will not try again.
See http://linux.die.net/man/1/wget for a full manual on the wget command.
Edit: I just realised you're redirecting all this to /dev/null anyway, so you may as well either omit the -O parameter or point it at /dev/null and drop the final redirection.
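So, assuming a user crontab (no user field) and that */5 matches the intended every-five-minutes schedule, the whole line could probably be reduced to something like:
*/5 * * * * wget -q -t 1 -O /dev/null http://localhost:8888/site/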
What I always do is use PHP in CLI mode. It seems more efficient to me.
First, set up a cron entry like:
*/5 * * * * /usr/bin/php /var/www/html/cronnedscript.php
cronnedscript.php should be placed in your root www folder.
then edit cronnedscript.php with:
<?php
$_GET["/mycontroller/index"] = null;
require "index.php";
?>
where mycontroller is the CI controller you want to fire.
If you want the controller to be run only by crond, as opposed to through public www requests, add the following line to the controller and to cronnedscript.php:
if (isset($_SERVER['REMOTE_ADDR'])) die('Permission denied');
I realize that this is a reference to Drupal; however, they do a very nice job of explaining what each and every parameter in the wget call does.
Drupal Cron Explanation
If you want the more generic explanation, you can find it here.
Try this: save it as a script with a .bat extension in a folder on the C drive.
Then give the path of this script to Task Scheduler.
Then run it.
C:\xampp\php\php-win.exe -f "C:\xampp\htdocs\folder name\index.php" controllername functionname
I have a PHP script I want to run every minute to see if there are draft news posts that need to be posted. I was using "wget" for the cron command in cPanel, but I noticed (after a couple of days) that this was creating a blank file in the main directory every single time it ran. Is there something I need to do to stop that from happening?
Thanks.
When wget runs, by default it generates an output file, from what I remember.
You probably need to use a wget option to specify which file it should write its output to, and use /dev/null as the destination file (it's a "special file" that will "eat" everything written to it).
Judging from man wget, the -O or --output-document option would be a good candidate:
-O file
--output-document=file
The documents will not be written to the appropriate files, but all will be concatenated together and written to file.
So, you might need to use something like this:
wget -O /dev/null http://www.example.com/your-script.php
And, by the way, the output of scripts run from the crontab is often redirected to a logfile; it can always help. Something like this might do for that:
wget -O /dev/null http://www.example.com/your-script.php >> /YOUR_PATH_logfile.log
And you might also want to redirect the error output to another file (it can be useful for debugging the day something goes wrong):
wget -O /dev/null http://www.example.com/your-script.php >>/YOUR_PATH/log-output.log 2>>/YOUR_PATH/log-errors.log
I'm trying to grab all the data (text) coming from a URL which is constantly sending text. I tried using PHP, but that would mean having the script running the whole time, which it isn't really made for (I think). So I ended up using a Bash script.
At the moment I use wget (I couldn't get curl to output the text to a file):
wget --tries=0 --retry-connrefused http://URL/ --output-document=./output.txt
So wget seems to be working pretty well, apart from one thing: every time I restart the script, wget clears the output.txt file and starts filling it again, which isn't what I want. Is there a way to tell wget to append to the txt file?
Also, is this the best way to capture the live stream of data?
Should I use a different language like Python or …?
You can do wget --tries=0 --retry-connrefused $URL -O - >> output.txt.
Explanation: the parameter -O is short for --output-document, and a dash - means standard output.
The line command > file means "write the output of command to file", and command >> file means "append the output of command to file", which is what you want.
Curl doesn't follow redirects by default and outputs nothing if there is a redirect. I always specify the --location option just in case. If you want to use curl, try:
curl http://example.com --location --silent >> output.txt
The --silent option turns off the progress indicator.
You could try this:
while true
do
  wget -q -O - http://example.com >> filename   # -O - writes the page to stdout, which >> appends to filename
  sleep 2                                       # wait 2 seconds between requests
done
curl http://URL/ >> output.txt
The >> redirects the output from curl to output.txt, appending to any data already there. (If it was just > output.txt, that would overwrite the contents of output.txt each time you ran it.)
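Putting the two ideas together, a rough sketch of a loop that keeps appending with curl and pauses between attempts (the URL, file name and interval are placeholders):
while true
do
  curl --silent --location http://URL/ >> output.txt  # append each response to the file
  sleep 2                                             # wait a bit before reconnecting
done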