I'm trying to run a cURL script in WordPress, but I'm having a problem.
When I test it, I get a 500 internal error because WP changes the URL.
The script is at www.site.com/curl_script.php. When I test that (navigate to www.site.com/curl_script.php), I end up at www.site.com/curl_script.php/wp-admin/install.php, which returns a 500 internal error.
After playing around with the script, I've noticed the problem. It seems to be a function I'm running (the cURL function) that's causing WordPress to take me to that URL.
I've had similar issues before and managed to fix them by simply renaming the functions, but that doesn't seem to work anymore.
The function:
function verify_user($ref, $username, $uu_name){
    // $server_root and $fields_string are assumed to be defined elsewhere in the script
    $ch = curl_init($server_root);
    curl_setopt($ch, CURLOPT_URL, "http://site.com/con1.php");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $fields_string);
    curl_setopt($ch, CURLOPT_POST, 1);
    $result = curl_exec($ch);
    $data = json_decode($result);

    global $ref_;
    $ref_ = $data->ref_id;

    //fetch some more info
    $chh = curl_init($server_root);
    curl_setopt($chh, CURLOPT_URL, "http://site.com/con2.php");
    curl_setopt($chh, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($chh, CURLOPT_POST, 1);
    $resultt_2 = curl_exec($chh);
    $data_custt = json_decode($resultt_2);
    $cust_st = $data_custt->user_status; // was $data__, which is never defined

    if ($cust_st == "FAILED"){
        echo "this is bad";
    }
    elseif ($cust_st == "PASSED") {
        echo "this is good";
    }
}
Now when I call this function:
verify_user($ref, $username, $uu_name);
WordPress plays up...
But when I leave out the function call, everything works fine.
It seems that WP is assuming the user is attempting to run the installation, when that's not the case.
Any ideas on how to fix this dynamically, as others will use this script too?
It sounds like you are getting redirected somehow, even though you shouldn't be if CURLOPT_FOLLOWLOCATION is not set. Try using the curl_getinfo() function to debug the URL that is actually being accessed.
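For example, a minimal debugging sketch (the handle name $ch and the echo output are just for illustration, not part of the original question) could look like this:
// Run the request, then ask cURL what it actually did.
$result = curl_exec($ch);
if ($result === false) {
    // curl_error() describes why the transfer failed.
    echo 'cURL error: ' . curl_error($ch);
}
// CURLINFO_EFFECTIVE_URL is the last URL cURL actually requested,
// so it will expose any unexpected redirect.
echo 'Effective URL: ' . curl_getinfo($ch, CURLINFO_EFFECTIVE_URL) . "\n";
echo 'HTTP status: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";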
I have a limit of 25 requests/min from PUBG's official API. For some reason, instead of making two requests per search, it's using up 4 requests, and I can't figure out why. I have checked that the code isn't running twice; it only runs once, but it's still making 4 requests.
UPDATE:
I tried making a separate page, and apparently there is a bug somewhere that calls my function twice. I still don't know why, but I'm now 99% sure it's not the function itself.
Code For My Request
function getProfile($profileName, $region, $seasonDate){
    $data = new stdClass(); // initialise the result object before assigning properties to it
    // Just check if there is an actual user
    if($profileName === null){
        $data->error = "Player Not Found";
        $data->noUser = true;
        return $data;
    }else{
        $season = "division.bro.official.".$seasonDate;
        /*
        Get The UserID
        */
        $ch = curl_init("https://api.pubg.com/shards/$region/players?filter[playerNames]=$profileName");
        curl_setopt($ch, CURLOPT_HTTPHEADER, array(
            'Authorization: Bearer APIKEY',
            'Accept: application/vnd.api+json'));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $rawData = json_decode(curl_exec($ch), true);
        $data->playerId = $rawData["data"][0]["id"];
        curl_close($ch);
        // Testing if user exists
        if($rawData["errors"][0]["title"] === "Not Found"){
            $data->noUser = true;
            $data->error = "Player Not Found";
            return $data;
        }else{
            /*
            Get The actual stats
            */
            $ch = curl_init("https://api.pubg.com/shards/$region/players/$data->playerId/seasons/$season");
            curl_setopt($ch, CURLOPT_HTTPHEADER, array(
                'Authorization: Bearer APIKEY',
                'Accept: application/vnd.api+json'));
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            $data->playerDataJSON = curl_exec($ch);
            $data->playerData = json_decode($data->playerDataJSON, true);
            curl_close($ch);
            return $data;
        }
    }
}
This is how it's getting called
if (isset($_POST['search-username'])) {
    $username = $_POST['search-username'];
    header("Location: /profile/$username/pc-na/2018-01/overall/tpp");
    die();
}
In the actual profile PHP file:
$data = getProfile($page_parts[1], $page_parts[2], $page_parts[3]);
Are you absolutely sure it's only called once? Set a lock on it. Change it to:
function getProfile($profileName, $region, $seasonDate){
    static $once = true;
    if($once !== true){
        throw new \LogicException("tried to run getProfile() twice!");
    }
    $once = false;
    // ... rest of getProfile() continues unchanged
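A static local variable keeps its value between calls within the same request, so if getProfile() really is being invoked twice during one page load, the second call will throw immediately and the exception's stack trace will show exactly where the extra call comes from.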
Shortly after figuring out it wasn't the function, I realized the culprit was an empty script I was calling. I knew this script produced an error, but I didn't really care about it since the script was empty and I had no idea why it was erroring. For some obscure reason, that script was causing the problem. I'll make a lesson out of it: always fix even the smallest errors.
I've also been seeing this behavior - a script with a single curl_exec() request gets called twice.
The strange thing is that this was only happening when running on localhost (under a WAMP installation); when run from any other web server it was fine and was called just once.
I never managed to debug it completely, but it seems to be an issue with the local server, so test elsewhere if you are seeing this.
I've programmed a very basic web-scraping tool in PHP using cURL and DOM. I'm running it locally on a Windows 10 box using XAMPP (Apache & MySQL). It scrapes approximately 5 values on 400 pages (~2,000 values in total) on one specific website. The job typically completes in < 120 seconds, but intermittently (about once every 5 runs) it'll stop around the 60 second mark with the following error:
Recv failure: Connection was reset
Probably irrelevant, but all of my scraped data is being thrown into a MySQL table, and a separate .php file is styling the data and presenting it. This part is working fine. The error is being thrown by cURL. Here's my (very trimmed) code:
$html = file_get_html('http://IPAddressOfSiteIAmScraping/subpage/listofitems.html'); // file_get_html() comes from the simple_html_dom library.
//Some code that creates my SQL table.
//Finds all subpages on the site - this part works like a charm.
foreach($html->find('a[href^=/subpage/]') as $uniqueItems){
//3 array variables defined here, which I didn't include in this example.
$path = $uniqueItems->href;
$url = 'http://IPAddressOfSiteIAmScraping' . $path;
//Here's the cURL part - I suspect this is the problem. I am an amateur!
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_URL, trim($url));
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0); //An attempt to fix it - didn't work.
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 0); //An attempt to fix it - didn't work.
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($curl, CURLOPT_TIMEOUT, 1200); //Amount of time I let cURL execute for.
$page = curl_exec($curl);
//This is the part that throws up the connection reset error.
if(curl_errno($curl)) {
echo 'Scraping error: ' . curl_error($curl);
exit; }
curl_close($curl);
//Here we use DOM to begin collecting specific cURLed values we want in our SQL table.
$dom = new DOMDocument;
$dom->encoding = 'utf-8'; //Allows the DOM to display HTML entities for special characters like ö.
@$dom->loadHTML(utf8_decode($page)); //Loads the HTML of the cURLed page; '@' suppresses parser warnings.
$xpath = new DOMXpath($dom); //Allows us to use Xpath values.
//Xpaths that I've set - this is for the SQL part. Probably irrelevant.
$header = $xpath->query('(//div[@id="wrapper"]//p)[@class="header"][1]');
$price = $xpath->query('//tr[@class="price_tr"]/td[2]');
$currency = $xpath->query('//tr[@class="price_tr"]/td[3]');
$league = $xpath->query('//td[@class="left-column"]/p[1]');
//Here we collect specifically the item name from the DOM.
foreach($header as $e) {
$temp = new DOMDocument();
$temp->appendChild($temp->importNode($e,TRUE));
$val = $temp->saveHTML();
$val = strip_tags($val); //Removes the <p> tag from the data that goes into SQL.
$val = mb_convert_encoding($val, 'html-entities', 'utf-8'); //Allows the HTML entity for special characters to be handled.
$val = html_entity_decode($val); //Converts HTML entities for special characters to the actual character value.
$final = mysqli_real_escape_string($conn, trim($val)); //Defense against SQL injection attacks by canceling out single apostrophes in item names.
$item['title'] = $final; //Here's the item name, ready for the SQL table.
}
//Here's a bunch of code where I write to my SQL table. Again, this part works great!
}
I am not opposed to switching to regex if I need to ditch DOM, but I did three days' worth of lurking before I chose DOM over regex. I have spent a lot of time researching this problem, but everything I'm seeing says "Recv failure: Connection was reset by peer", which is not what I am getting. I'm really frustrated that I have to ask for help - I've been doing so great so far - just learning as I go. This is the first thing I've ever written in PHP.
TL;DR: I wrote a cURL web-scraper that works brilliantly only 80% of the time. 20% of the time, for an unknown reason, it errors out with "Recv failure: Connection was reset".
Hopefully someone can help me!! :) Thanks for reading even if you can't!
P.S. if you'd like to see my FULL code, it's at: http://pastebin.com/vf4s0d5L.
After researching this at length (I'd already been researching it for days before posting my question), I've caved in and accepted that this error is probably tied to the site I'm trying to scrape and therefore out of my control.
I did manage to work around it though, so I'll drop my workaround here...
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_URL, trim($url));
curl_setopt($curl, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($curl, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($curl, CURLOPT_TIMEOUT, 1200); //Amount of time I let cURL execute for.
$page = curl_exec($curl);
if(curl_errno($curl)) {
echo 'Scraping error: ' . curl_error($curl) . '</br>';
echo 'Dropping table...</br>';
$sql = "DROP TABLE table_item_info";
if (!mysqli_query($conn, $sql)) {
echo "Could not drop table: " . mysqli_error($conn);
}
mysqli_close($conn);
echo "TABLE has been dropped. Restarting.</br>";
goto start;
exit; }
curl_close($curl);
Basically, what I've done is implement error checking. If an error comes up under curl_errno($curl), I assume it's the connection-reset error. In that case, I drop my SQL table and then jump back to the start of my script using "goto start". At the top of my file I have the "start:" label.
This fixed my problem! Now I don't need to worry about whether the connection was reset or not. My code is smart enough to determine that on its own and reset the script if that was the case.
Hope this helps!
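If you would rather avoid goto, the same idea can also be written as a bounded retry loop around the failing request. This is only a rough sketch of that alternative (the $maxAttempts cap is my assumption, not part of the original workaround), and it retries just the one request instead of dropping the table and restarting the whole run:
$maxAttempts = 5; // assumed cap so a persistent failure cannot loop forever
$page = false;
for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
    $curl = curl_init($url);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_TIMEOUT, 1200);
    $page = curl_exec($curl);
    $failed = curl_errno($curl); // non-zero means the transfer failed
    curl_close($curl);
    if (!$failed) {
        break; // success: continue with the DOM parsing as before
    }
    echo 'Scraping error on attempt ' . $attempt . ", retrying...\n";
}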
OK, this is a really stupid one, but something is definitely wrong.
I have a PHP script that needs to check for 2 variables ($token and $pid):
if(isset($token) && isset($pid)){
$ppurl = "https://api-3t.paypal.com/nvp";
$cURL = curl_init();
curl_setopt($cURL, CURLOPT_HEADER, false);
curl_setopt($cURL, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($cURL, CURLOPT_TIMEOUT, 5);
$GetExpressCheckoutDetails = $ppurl."?USER=".$apiuser."&PWD=".$apipwd."&SIGNATURE=".$apisig."&METHOD=GetExpressCheckoutDetails&VERSION=78&TOKEN=".$token;
curl_setopt($cURL, CURLOPT_URL, $GetExpressCheckoutDetails);
$info = parsePPResponse(curl_exec($cURL));
die('what');
}
Yet when I access the script directly, without any variables, it still runs that code.
The same way, if I add this before it:
if(!isset($token)){ die('novar'); }
It will still run the code and die with the message "what".
This doesn't make any sense. Does anyone have a clue why this might be happening?
Based on your description, my best guess is that $pid was actually set and is just empty.
Before your if(isset($token) && isset($pid)){ line, run the following code:
print '<pre>';
print_r(get_defined_vars());
print '</pre>';
Then see if $pid or $token is present in the page output. If it is, you might need to change your condition to use the empty() function instead of isset().
!empty($token)
might work better
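The difference matters because isset() only checks that a variable exists and is not null, while empty() also treats '', 0, '0', and false as empty. A quick sketch of the behaviour for an empty string:
$token = ''; // set, but empty (e.g. a blank query-string parameter)
var_dump(isset($token));  // bool(true)  - the isset() check passes
var_dump(empty($token));  // bool(true)  - empty() flags it
var_dump(!empty($token)); // bool(false) - the cURL block would be skipped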
Try adding var_dump($token) to make sure that the variable really is not set, and that you are really editing the same file you think you are editing (this has happened to me).
I have a website that uses the WP Super Cache plugin. I need to recycle the cache once a day and then call 5 posts (URL addresses) so WP Super Cache puts these posts into the cache again (caching is quite time-consuming, so I'd like to have it pre-cached before users come, so they don't have to wait).
On my hosting I can use cron, but only for 1 call/hour, and I need to call 5 different URLs at once.
Is it possible to do that? Maybe create one HTML page with these 5 posts in iframes? Will something like that work?
Edit: Shell is not available, so I have to use PHP scripting.
The easiest way to do it in PHP is to use file_get_contents() (fopen() also works), if the HTTP stream wrapper is enabled on your server:
<?php
$postUrls = array(
'http://my.site.here/post1',
'http://my.site.here/post2',
'http://my.site.here/post3',
'http://my.site.here/post4',
'http://my.site.here/post5',
);
foreach ($postUrls as $url) {
// Get the post as a user would
$text = file_get_contents($url);
// Here you can check if the request was successful
// For example, use strpos() or regex to find a piece of text you expect
// to find in the post
// Replace 'copyright bla, bla, bla' with a piece of text you display
// in the footer of your site
if (strpos($text, 'copyright bla, bla, bla') === FALSE) {
echo('Retrieval of '.$url." failed.\n");
}
}
If file_get_contents() fails to open the URLs on your server (some ISPs restrict this behaviour), you can try to use cURL:
function curl_get_contents($url)
{
$ch = curl_init($url);
curl_setopt_array($ch, array(
CURLOPT_CONNECTTIMEOUT => 30, // timeout in seconds
CURLOPT_RETURNTRANSFER => TRUE, // tell curl to return the page content instead of just TRUE/FALSE
));
$text = curl_exec($ch);
curl_close($ch);
return $text;
}
Then use the function curl_get_contents() listed above instead of file_get_contents().
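For example, the warm-up loop from the first snippet would then look like this (same assumed URLs and footer text as above):
foreach ($postUrls as $url) {
    // Same request as before, but through cURL instead of the stream wrapper
    $text = curl_get_contents($url);
    if (strpos($text, 'copyright bla, bla, bla') === FALSE) {
        echo('Retrieval of '.$url." failed.\n");
    }
}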
An example using PHP without building a cURL request.
Using PHP's shell_exec(), you can have an extremely light function like so:
$siteList = array("http://url1", "http://url2", "http://url3", "http://url4", "http://url5");
foreach ($siteList as &$site) {
$request = shell_exec('wget '.$site);
}
Now of course this is not the most concise answer, and not always a good solution either; if you actually want anything from the response, you will have to work with it in a different way than with cURL. But it's a low-impact option.
Thanks to Arkascha's tip, I created a PHP page that I call from cron. This page contains a simple function using cURL:
function cache_it($Url){
if (!function_exists('curl_init')){
die('No cURL, sorry!');
}
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $Url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 50); //higher timeout needed for cache to load
curl_exec($ch); //don't need the output here; otherwise use $output = curl_exec($ch);
curl_close($ch);
}
cache_it('http://www.mywebsite.com/url1');
cache_it('http://www.mywebsite.com/url2');
cache_it('http://www.mywebsite.com/url3');
cache_it('http://www.mywebsite.com/url4');
<?php error_reporting(0);
$currency_code = $_GET['currency_code'];
$currency_opt = strtoupper($currency_code)."INR";
$jsn_response = file_get_contents('http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.xchange%20where%20pair%20in%20%28%22' .$currency_opt. '%22%29&format=json&env=store://datatables.org/alltableswithkeys&callback=');
$currencyrate_arr = json_decode($jsn_response, true);
$currency_rate = $currencyrate_arr['query']['results']['rate']['Rate'];
//var_dump($currency_rate);
if($currency_rate > 0){
echo $currency_text = $currency_rate;
}
else{
echo $currency_text = "SORRY! ERROR..";
}
?>
It was working fine, but now I am getting an error while using this piece of code for currency conversion.
The script is working for me, and produces correct results.
If you are receiving an error, it's either because of a server configuration, or an API limit.
You can check https://developer.yahoo.com/yql/faq/
Rate limits in YQL are based on your authentication. If you use IP-based authentication, then you are limited to 2,000 calls/hour/IP to the public YQL Web service URL (/v1/public/*)
If you need to exceed the 2000 calls per hour limit, read the above link for more information.
PS: If you want to debug, set:
error_reporting(E_ALL);
ini_set('display_errors', 1);
Edit: Since allow_url_fopen is disabled, you can't use file_get_contents() on outside URLs, but you can still use cURL.
So just replace the file_get_contents with:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://query.yahooapis.com/v1/public/yql?q=select%20*%20from%20yahoo.finance.xchange%20where%20pair%20in%20%28%22' .$currency_opt. '%22%29&format=json&env=store://datatables.org/alltableswithkeys&callback=');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$jsn_response = curl_exec($ch);
curl_close($ch);
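If it still fails, it can help to check the cURL error before closing the handle; something along these lines (a small debugging addition, not part of the original answer):
$jsn_response = curl_exec($ch);
if ($jsn_response === false) {
    // curl_error() explains why the transfer failed (DNS, timeout, SSL, ...)
    echo 'cURL error: ' . curl_error($ch);
}
curl_close($ch);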