There are several other posts about this problem. The difference here is that I am willing to give out the troublesome URL.
This works:
curl https://pas-gdl.overdrive.com/advanced-search
This does not work:
$pagesource = shell_exec("curl https://pas-gdl.overdrive.com/advanced-search");
I get the dreaded 51 error: "curl: (51) SSL: no alternative certificate subject name matches target host name 'pas-gdl.lib.overdrive.com'"
There is a wildcard SSL certificate involved. I have attempted to figure out what the command-line curl does by default, in case that explained the difference. However, since I am executing the same command via shell_exec(), there should be zero difference.
The command-line invocation produces the advanced-search HTML and the shell_exec() call does not. Any information as to why would be greatly appreciated.
Use PHP's cURL extension instead. Also look into how to make HTTPS requests with cURL.
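For example, a minimal sketch using PHP's cURL extension (the URL is taken from the question; the options are illustrative and will not by themselves fix the certificate mismatch):
$ch = curl_init('https://pas-gdl.overdrive.com/advanced-search');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); // return the body as a string instead of printing it
$pagesource = curl_exec($ch);
if ($pagesource === false) {
    echo 'cURL error: ' . curl_error($ch); // surfaces SSL details such as the (51) mismatch
}
curl_close($ch);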
I am trying to download the contents of an HTML file to my Ubuntu Linux 16.04 computer using PHP's file_get_contents() function. However, when I do this, I get this Warning: "failed to open stream: Redirection limit reached, aborting"
Yet when I use wget on the terminal command line, it quickly downloads the file contents.
So why does file_get_contents not work for this? Here is my php code, which produces the Warning:
$testDownload = file_get_contents("https://ebird.org/region/US-AL-001?yr=all");
echo $testDownload;
On my Ubuntu terminal command line, here is my bash code, which works quickly and flawlessly:
wget https://ebird.org/region/US-AL-001?yr=all
I want to use php because I want to automate the downloading of a number of files and need a fair bit of code to do it, and I feel much more comfortable using php than bash.
P.S. I tried various "context" solutions for the file_get_contents function that were suggested on Stack Overflow, but they did not solve the problem.
P.P.S. I earlier tried cURL and got the same redirects Warning, though I admit to not knowing much about cURL.
I found a solution: PHP's shell_exec() function lets me run the wget command-line program from within my PHP script. I tried it and it worked. (I will have to change the ownership of the downloaded files to get access to them.) Here is the code that worked:
$output = shell_exec('wget https://ebird.org/region/US-AL-005?yr=all');
I still don't understand why wget can get the file contents but file_get_contents cannot. But with shell_exec() I have found a php solution to complete my task, so I am happy.
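For automating many downloads, a slightly more defensive variant of the same approach may help (the output path is just an example): escapeshellarg() guards against shell metacharacters such as "?" and "&" in the URL, and wget's -O flag chooses where the file is written.
$url = 'https://ebird.org/region/US-AL-005?yr=all';
$dest = '/tmp/US-AL-005.html'; // hypothetical output path
$output = shell_exec('wget -O ' . escapeshellarg($dest) . ' ' . escapeshellarg($url) . ' 2>&1');
echo $output; // wget logs progress to stderr, so 2>&1 captures it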
I am trying to set up a cron job for my WP All Import plugin. I have tried setting up cron jobs via the Bluehost cPanel with the following four options:
php /home2/slotenis/public_html/wp-cron.php?import_key=*****&import_id=9&action=trigger
GET http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger
/usr/bin/GET http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger
curl http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger
NONE of them is working.
I have setup an email confirmation every time a cron job is run and I receive the following email:
cp: cannot stat `exim.pl': No such file or directory
cp: not writing through dangling symlink `/var/fake/slotenis/etc/./exim.pl.local'
Can anyone provide the exact command line to make it work?
Try using wget.
wget -O /dev/null -o /dev/null "https://www.domain.com/wp-cron.php?import_key=*****&import_id=9&action=trigger"
It's what I use on my sites.
For troubleshooting try visiting the URL yourself. If that doesn't work there's either a problem with the plugin, WordPress or Bluehost.
Important to know: the error you are seeing about "cp: cannot stat `exim.pl'" is produced before your command actually runs, and it does not stop the command from working. (This is an issue on Bluehost's side; they recently added broken symlinks at /etc/exim.pl and /etc/exim.pl.local.)
About the actual cron command: if the URL contains special characters like "?" and "&", you need to escape them, e.g. by enclosing the whole URL in double quotes. Running a PHP script directly also works, but query parameters cannot be passed with the "?" syntax; see PHP, pass parameters from command line to a PHP script, and the sketch below.
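As a hedged illustration of that difference: a PHP script invoked from the command line receives its arguments in $argv, not in $_GET. The file name args.php is hypothetical; the parameter names are the ones from the question.
// args.php - invoked as: php args.php import_key=***** import_id=9 action=trigger
parse_str(implode('&', array_slice($argv, 1)), $params);
var_dump($params); // array with import_key, import_id and action keys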
With curl it should work:
curl "http://www.slotenis.si/wp-cron.php?import_key=*****&import_id=9&action=trigger"
Given a PHP source code with curl handle, how do I get the command line version of that curl request?
Look at the cURL documentation here:
http://curl.haxx.se/docs/manpage.html
or, go to your terminal and type:
man curl
After reading the documentation, you will have to work out manually which command-line options map to which PHP functions. There is unlikely to be an easier way to do this.
A lot of it will just be looking at which of PHP's curl_setopt() parameters map to the matching command-line options; a few common mappings are sketched below.
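As a hedged illustration (the URL and values are placeholders):
$ch = curl_init('https://example.com/api'); // the URL becomes curl's positional argument
curl_setopt($ch, CURLOPT_USERAGENT, 'MyAgent/1.0'); // maps to -A / --user-agent
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept: application/json')); // maps to -H / --header
curl_setopt($ch, CURLOPT_POSTFIELDS, 'a=1&b=2'); // maps to -d / --data (implies POST)
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); // maps to -L / --location
// Equivalent command line:
// curl -A 'MyAgent/1.0' -H 'Accept: application/json' -d 'a=1&b=2' -L 'https://example.com/api'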
You could just create a PHP file with the cURL code in it, and then run that script from the command line.
hostname$ php curldownload.php
Or, you could have a look here for examples: http://www.thegeekstuff.com/2012/04/curl-examples/
I wrote a script to parse some data from a website using cURL, and it works fine when I run it in my browser. However, when I run it from the command line I get the error "call to undefined function curl_init()". Do PHP scripts run under different settings when invoked from the command line?
This is happening because you are trying to call a PHP function from bash. If you have curl installed in your Linux environment, the command should simply be curl [-options] [url]. The simplest form is something like:
$ curl http://someurl.com/path/to/xmlfile.xml
You can test for this from the command line by typing "$ which curl" (without the quotes, of course). That will give you the path to where it is stored, in case you have to use the full path (e.g. /usr/bin/curl [-options] [url]).
EDIT:
After re-reading your question, I realized that I dumbly missed the fact that you said you were trying to run the PHP script from the command line, not curl itself. Now I, too, am stumped by your problem. Sorry!
I am trying to track down an issue with a cURL call in PHP. It works fine in our test environment, but not in our production environment. When I try to execute the cURL function, it just hangs and never ever responds. I have tried making a cURL connection from the command line and the same thing happens.
I'm wondering if cURL logs what is happening somewhere, because I can't figure out what is going on while the command churns and churns. Does anyone know if there is a log that tracks what is happening there?
I think it is connectivity issues, but our IT guy insists I should be able to access it without a problem. Any ideas? I'm running CentOS and PHP 5.1.
Updates: Using verbose mode, I've gotten an error 28 "Connect() Timed Out". I tried extending the timeout to 100 seconds, and limiting the max-redirs to 5, no change. I tried pinging the box, and also got a timeout. So I'm going to present this back to IT and see if they will look at it again. Thanks for all the help, hopefully I'll be back in a half-hour with news that it was their problem.
Update 2: Turns out my box was resolving the server name with the external IP address. When IT gave me the internal IP address and I replaced it in the cURL call, everything worked great. Thanks for all the help everybody.
In your PHP, you can set the CURLOPT_VERBOSE option:
curl_setopt($curl, CURLOPT_VERBOSE, TRUE);
This then logs to STDERR, or to the file specified using CURLOPT_STDERR (which takes a file pointer):
curl_setopt($curl, CURLOPT_STDERR, $fp);
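Put together, a minimal sketch (the URL and log path are just examples):
$curl = curl_init('https://example.com/'); // placeholder URL
$fp = fopen('/tmp/curl-debug.log', 'w'); // hypothetical log file
curl_setopt($curl, CURLOPT_VERBOSE, TRUE); // log connection and transfer details...
curl_setopt($curl, CURLOPT_STDERR, $fp); // ...into the file instead of STDERR
curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE); // capture the response body
$result = curl_exec($curl);
curl_close($curl);
fclose($fp);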
From the command line, you can use the following switches:
--verbose to report more info to the command line
--trace <file> or --trace-ascii <file> to trace to a file
You can use --trace-time to prepend time stamps to verbose/file outputs
You can also use curl_getinfo() to get information about your specific transfer.
http://in.php.net/manual/en/function.curl-getinfo.php
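For example, a quick hedged sketch of pulling timing details after a transfer, which can help pin down where the hang occurs:
$info = curl_getinfo($curl); // call after curl_exec()
echo 'HTTP code: ' . $info['http_code'] . "\n";
echo 'Connect time: ' . $info['connect_time'] . "\n"; // seconds spent establishing the connection
echo 'Total time: ' . $info['total_time'] . "\n";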
Have you tried setting CURLOPT_MAXREDIRS? I've found that some websites produce an 'infinite' redirect loop that a normal browser user doesn't see.
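If redirects are the suspect, a minimal sketch of capping them:
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, TRUE); // follow redirects...
curl_setopt($curl, CURLOPT_MAXREDIRS, 5); // ...but fail after 5 hops instead of looping forever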
If at all possible, try running curl via sudo as the user PHP runs under (possibly the one Apache runs under).
The curl problem could have various causes that require user input, for example an untrusted certificate that is stored in the trusted-certificate cache of the root user but not in that of the PHP user. In that case, the command would wait for input that never arrives.
Update: this applies only if you run curl externally using exec(), so it may not apply here.