Recently my site was moved to a different server due to maintenance at the host. Ever since, I can't get this script to run as a cron job anymore: http://www.filmhuisalkmaar.nl/wp-content/themes/filmhuis-alkmaar/cron/load-shows.php
I tried running it using PHP with the following cron job:
php /home/provadja/domains/filmhuisalkmaar.nl/public_html/wp-content/themes/filmhuis-alkmaar/cron/load-productions.php
But I kept getting the following error:
PHP Warning: require_once(../inc/api.php): failed to open stream: No such file or directory in /home/provadja/domains/filmhuisalkmaar.nl/public_html/wp-content/themes/filmhuis-alkmaar/cron/load-productions.php on line 3
PHP Fatal error: require_once(): Failed opening required '../inc/api.php' (include_path='.:/usr/local/lib/php') in /home/provadja/domains/filmhuisalkmaar.nl/public_html/wp-content/themes/filmhuis-alkmaar/cron/load-productions.php on line 3
I checked whether the files reported as missing were still in place, and they were. I also checked the file permissions; they're set to 755, which should be more than fine. Right?
Then I tried wget with the following cron job:
/usr/bin/wget -O https://www.filmhuisalkmaar.nl/wp-content/themes/filmhuis-alkmaar/cron/load-shows.php
But then I kept getting the following error:
wget: missing URL
Usage: wget [OPTION]... [URL]...
Try ‘wget --help’ for more options.
I'm really at a loss here, especially because it used to work fine in the past. It's very frustrating, because these scripts are essential for keeping my site updated.
Any help would really be appreciated. Thank you.
Try to run it like this:
cd /home/provadja/domains/filmhuisalkmaar.nl/public_html/wp-content/themes/filmhuis-alkmaar/cron/ && php load-productions.php
Note the cd command at the start: it changes the current working directory to the cron/ directory and then runs the script load-productions.php.
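In a crontab, the full entry might look like this (the schedule here is illustrative, not from the question):

```
# m h dom mon dow  command
0 * * * * cd /home/provadja/domains/filmhuisalkmaar.nl/public_html/wp-content/themes/filmhuis-alkmaar/cron/ && php load-productions.php
```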
For cron tasks I prefer to use the full path to included and required scripts. So, instead of:
require_once("../inc/api.php");
I generally do:
$base = dirname(dirname(__FILE__)); // two levels up from this file: the theme root
require_once($base . "/inc/api.php");
This way the server knows exactly where to look, and the path doesn't depend on the current working directory.
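The pitfall can be reproduced with a plain shell sketch (hypothetical /tmp paths; cat stands in for require_once):

```shell
# Build a layout like theme/cron/ and theme/inc/
mkdir -p /tmp/pathdemo/cron /tmp/pathdemo/inc
echo "api code" > /tmp/pathdemo/inc/api.txt
printf '%s\n' '#!/bin/sh' 'cat ../inc/api.txt' > /tmp/pathdemo/cron/job.sh
chmod +x /tmp/pathdemo/cron/job.sh

# Run from an unrelated directory, as cron does: the relative path breaks
( cd / && /tmp/pathdemo/cron/job.sh 2>/dev/null ) || echo "fails when run from /"

# Run from the script's own directory: the relative path resolves
( cd /tmp/pathdemo/cron && ./job.sh )
```

The relative path is resolved against the caller's working directory, not the script's location, which is exactly why the same script works in the browser but fails under cron.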
Side note: I also like to do /path/to/php -q /path/to/script.php too. : )
I will quote fvu's comment to my question, which I have tried and can confirm now as fully working:
1) does /home/provadja/domains/filmhuisalkmaar.nl/public_html/wp-content/themes/filmhuis-alkmaar/inc/api.php exist? 2) obvious error in wget usage (-O needs the name of the file in which to save the script output), try wget -O /dev/null https://www.filmhuisalkmaar.nl/wp-content/themes/filmhuis-alkmaar/cron/load-shows.php
Thanks a lot everyone, for your help!
Related
I am trying to setup a cron job for my WP All Import plugin. I have tried setting up cron jobs via Bluehost cpanel with the following 4 options:
php /home2/slotenis/public_html/wp-cron.php?import_key=*****&import_id=9&action=trigger
GET http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger
/usr/bin/GET http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger
curl http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger
None of them works.
I have setup an email confirmation every time a cron job is run and I receive the following email:
cp: cannot stat `exim.pl': No such file or directory
cp: not writing through dangling symlink `/var/fake/slotenis/etc/./exim.pl.local'
Can anyone provide me the exact command line to make it work?
Try using wget.
wget -O /dev/null -o /dev/null "https://www.domain.com/wp-cron.php?import_key=*****&import_id=9&action=trigger"
It's what I use on my sites.
For troubleshooting, try visiting the URL yourself. If that doesn't work, there's a problem with the plugin, WordPress, or Bluehost.
Important to know: the "cp: cannot stat `exim.pl'" error you are seeing is produced before your command actually runs, and it does not stop your command from working. (This is an issue on Bluehost's side; they recently added broken symlinks in /etc/exim.pl and /etc/exim.pl.local.)
About the actual cron command: if the URL contains special characters like "?" and "&", you need to escape them, e.g. by enclosing the whole URL in double quotes. Running a PHP script directly from the command line also works, but you can't pass query parameters with the "?" syntax; see PHP, pass parameters from command line to a PHP script.
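A minimal sketch of the quoting, with echo standing in for curl so it's safe to run anywhere (the masked key stays masked):

```shell
# Double quotes keep the shell from treating "&" as a control operator
# and "?"/"*" as globs; the whole URL is passed as one argument.
echo "http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger"
```

Left unquoted in a crontab line, the "&" would end the command and background it, and everything after it would be parsed as a separate command.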
With curl it should work:
curl "http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger"
I'm currently running a cron job that loads a php script.
I keep getting an error: sh: 1: /usr/bin/php: not found.
I tried it two other ways but to no avail.
In a Perl script, I tried:
my $x = qx('/usr/bin/php /home/script.here');
This doesn't generate anything and sends an error message to my mail.
But if I run the line
/usr/bin/php /home/script.here
on my shell, it works.
I also created a script, 1.sh, containing just:
#!/usr/bin/php -v
I run the script as ./1.sh and it shows the result. But as soon as I call it via cron or /bin/sh 1.sh, it fails and can't find the PHP path, even though it was explicitly stated.
Am I missing anything?
I also tried this with php5, but got the same error.
The problem is the single quotes inside the qx() operator. Remove them:
my $x = qx(/usr/bin/php /home/script.here);
As long as they are there, the shell tries to find a command "script.here" in the directory "/usr/bin/php /home" (yes, with the space in the directory name).
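The same mistake is easy to reproduce in a shell directly: quoting a whole command line as one word turns it into a single (nonexistent) command name. A sketch:

```shell
# One quoted word: the shell looks for a command literally named "/bin/echo hello"
'/bin/echo hello' 2>/dev/null || echo "not found as a single word"

# Unquoted: /bin/echo is the command and hello is its argument
/bin/echo hello
```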
Totally forgot about this question.
Found a solution.
I just added SHELL=/bin/bash to my crontab and the scripts worked.
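For reference, the top of the crontab might then look like this (the PATH value and schedule are illustrative; adjust PATH to where php actually lives on your system):

```
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
0 * * * * /usr/bin/php /home/script.here
```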
I have an email parsing script on a Codeigniter site that I want to trigger each day with a cronjob. I don't have much experience with cronjobs, or command line on remote servers.
I have a cronJobs controller at mysite.com/public_html/application/controllers/cronJobs. In it is a parseMail method. I'm also using mod_rewrite to get rid of index.php from URLs.
The parseMail method does work when I hit the controller "normally" through my browser at MYSITE.com/cronJobs/parseMail. There is a DB insert that goes off.
But to trigger it with a cron job I have tried:
wget http://MYSITE.com/cronJobs/parseMail
And I do get a notification email, but I'm not sure how to interpret it. Is it finding the script? Is there no error? Regardless, parseMail doesn't fire.
--2013-01-19 12:00:02-- http://MYSITE.com/cronJobs/parseMail
Resolving MYSITE.com... xx.xx.xx.xxx
Connecting to MYSITE.com|xx.xx.xx.xxx|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 0 [text/html]
Saving to: `parseMail.203'
0K 0.00 =0s
2013-01-19 12:00:03 (0.00 B/s) - `parseMail.203' saved [0/0]
I also tried
-q wget http://MYSITE.com/cronJobs/parseMail
And received "/bin/sh: get: command not found"
Then I have also tried variations on this:
/usr/local/bin/php -q /public_html/index.php cronJobs parseMail
/usr/local/bin/php -q /public_html/ cronJobs parseMail
/usr/local/bin/php -q home/myusername/public_html/index.php cronJobs parseMail
/usr/local/bin/php -q home/myusername/public_html/ cronJobs parseMail
With these methods I can't even seem to hit the controller. My email notification just says "Could not open input file".
I'm just not really familiar with any of these errors, so I don't know how to home in on a solution.
Can anyone give me any tips how to move forward?
Solved
After much Googling I found the solution that worked for me. Here is what I used. (I was so close in one of my first attempts; I just didn't use an absolute path. And when I started using some of the suggestions from the comments below, the PHP path was different and I didn't notice.)
Here is the correct version.
/usr/local/bin/php /home/myusername/public_html/index.php cronJobs parseMail
None of your paths is a real path. All are either invalid (the /public_html ones) or relative (the home/myusername ones). The following should do:
/usr/bin/env php /home/myusername/public_html/index.php cronJobs parseMail
You might need to change directory to the document root first. In that case, use this:
(cd /home/myusername/public_html/ && /usr/bin/env php index.php cronJobs parseMail)
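If you're not sure where (or whether) php is visible to the cron environment, command -v is a portable way to check (a quick sketch):

```shell
# Prints the resolved path of the php binary, or a note if it's not on PATH
command -v php || echo "php not on PATH"
```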
I'm working on a server where users should be able to run protein sequences against a database; it uses an executable called blastall. The server generates a script which it should then run as a batch job. However, it doesn't appear to be running. Here is an example of a script it generates (cmd.sh):
#!/usr/bin/env sh
cd /var/www/dbCAN
php -q /var/www/dbCAN/tools/blast.php -e -w /var/www/dbCAN/data/blast/20121019135548
The crazy number at the end is an auto-generated job ID based on when the job was submitted. There are two issues, and I'm trying to solve one at a time. The first is that when I execute it manually (by simply running ./cmd.sh), I get the following errors:
sh: 1: /var/www/dbCAN/tools/blast/bin/blastall: not found
sh: 1: /var/www/dbCAN/tools/blast/bin/blastall: not found
sh: 1: -t: not found
But this doesn't really make sense to me, as the specified directory does in fact contain blastall. It has full rwx permissions, and every directory along the way has appropriate permissions.
The blast.php file in tools looks like this:
try {
    do_blast($opts["w"]);
    $info['status'] = 'done';
    $fp = fopen($opts['w'] . "/info.yaml", "w");
    fwrite($fp, Spyc::YAMLDump($info));
    fclose($fp);
}
With, of course, variable declarations above it. The do_blast function looks like this (again with variables declared above it, and a cd so the directories work out):
function do_blast($workdir)
{
    system("/var/www/dbCAN/tools/blast/bin/blastall -d data/blast/all.seq.fa -m 9 -p blastp -i $workdir/input.faa -o $workdir/output.txt");
    system("/var/www/dbCAN/tools/blast/bin/blastall -d data/blast/all.seq.fa -p blastp -i $workdir/input.faa -o $workdir/output2.txt");
}
Any idea what may be causing this issue? I thought it might be because I'm running it while it was created by Apache, but rwx is allowed for all users. I can include more information if needed, but I chose not to at this point because the person who originally wrote the PHP split everything up into tons of little files, so it's difficult to pinpoint exactly where the problem is. Any ideas (if not complete solutions) are very much appreciated.
EDIT: Solution found. As it turns out, the blastall executable had been compiled on a different Linux system. I switched to a different executable and it ran flawlessly.
Could it be an issue with relative paths in your script? See my answer here, maybe it helps:
finding a file in php that is 4 directories up
The solution was to recompile the blastall executable. It had been compiled for Red Hat and I am using Ubuntu. Unfortunately, I assumed the executable I was given was built for my system, not the previous one.
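As a quick check before assuming a binary matches your system, the file command reports what a binary was built for; shown here on /bin/sh only as a stand-in, since the blastall binary isn't available:

```shell
# Reports the binary format and target platform,
# e.g. "ELF 64-bit LSB executable, x86-64, ..." on a typical Linux box
file /bin/sh
```

A "not found" error for a file that clearly exists is often the kernel failing to find the binary's dynamic loader, which is exactly what a wrong-platform build produces.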
I am attempting to create a PHP script that connects through SSH to my QNAP TS-219 server and runs a command on it.
My script so far connects fine to the server, but when I run the command I get an error message I can't figure out.
exec.sh
#!/bin/bash
cp /share/MD0_DATA/Qdownload/rapidshare/admin/script.txt /share/MD0_DATA/Qdownload/rapidshare/admin/script.sh
chmod 755 /share/MD0_DATA/Qdownload/rapidshare/admin/script.sh
nohup sh /share/MD0_DATA/Qdownload/rapidshare/admin/script.sh &
exit 0
script.sh
#!/bin/bash
/opt/bin/plowdown -o /share/MD0_DATA/Qdownload/rapidshare /share/MD0_DATA/Qdownload/rapidshare/admin/down.txt 2>/share/MD0_DATA/Qdownload/rapidshare/admin/output.txt
The command that I am currently running through SSH after I submit the form:
echo $ssh->exec('sh /share/MD0_DATA/Qdownload/rapidshare/admin/exec.sh');
Right now it generates the output below, but only after I kill two bash processes (the page keeps loading indefinitely and processor activity stays at 100% until I kill them):
/share/MD0_DATA/.qpkg/Optware/share/plowshare/lib.sh: line 261: getopt: command not found
start download (rapidshare): http://rapidshare.com/files/312885386/Free_Stuff-Your_Internet_eBay_Business_Free_Startup_Resources.rar
/share/MD0_DATA/.qpkg/Optware/share/plowshare/lib.sh: line 261: getopt: command not found
/share/MD0_DATA/.qpkg/Optware/share/plowshare/lib.sh: line 46: --insecure: command not found
Error: failed inside rapidshare_download()
This script will be used on my local network with no outside access, so I'm not worried about security. I know the code looks basic and primitive, but I have no experience with PHP or shell scripting, so if someone can make sense of this and help me out, it would be greatly appreciated.
Edit 1: I also tried shell_exec, still no joy; and if I run the script through PuTTY, it works beautifully.
Edit 2: I think we are on to something.
I added the code you suggested and I got the following message:
sh: /share/MD0_DATA/.qpkg/Optware/share/plowshare: is a directory
/usr/bin:/bin:/usr/sbin:/sbin
I think at the moment the PATH is /usr/bin:/bin:/usr/sbin:/sbin, and I think it should include /opt/bin and /opt/sbin, because that's where the executables are. Any ideas?
Thanks,
Chris.
Run this:
echo $ssh->exec('pwd');
Does it list your path correctly? If so, then your problem is not PHP. If it doesn't list it, or still gives an error, then PHP is your problem and we can continue from there.
From the error you've listed, my first guess would be that PATH isn't set, so lib.sh can't find what it's looking for.
Remember you're logging in with a custom shell (PHP ssh); quite often things aren't set as they should be, so your scripts might not find requirements like paths and variables.
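You can mimic that stripped-down environment locally with env -i, which clears all environment variables before running the command (a sketch):

```shell
# With a cleared environment, PATH falls back to the shell's built-in default,
# usually much shorter than in an interactive login shell
env -i /bin/sh -c 'echo "PATH=$PATH"'
```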
Edit:
Since it's giving /root, we at least know it's going through; why not also set the PATH, e.g.:
echo $ssh->exec('PATH=$PATH;/share/MD0_DATA/.qpkg/Optware/share/plowshare; sh /share/MD0_DATA/Qdownload/rapidshare/admin/exec.sh');
Remember you can also use this to see what is and isn't being set.
echo $ssh->exec('echo $PATH');
I think I got it:
Following viper_sb's logic, I changed the code to:
echo $ssh->exec('PATH=$PATH:/share/MD0_DATA/.qpkg/Optware/bin; sh /share/MD0_DATA/Qdownload/rapidshare/admin/exec.sh');
echo $ssh->exec('echo $PATH');
and magic, it worked. I'll test it further when I get home, but I think it worked; a file was downloaded into the /Qdownload/rapidshare folder. Hooray!