How to call posts from PHP

I have a website that uses the WP Super Cache plugin. I need to recycle the cache once a day and then call 5 posts (URL addresses) so WP Super Cache puts these posts into the cache again (caching is quite time-consuming, so I'd like to have it pre-cached before users come so they don't have to wait).
On my hosting I can use CRON, but only for 1 call per hour, and I need to call 5 different URLs at once.
Is it possible to do that? Maybe by creating one HTML page with these 5 posts in iframes? Would something like that work?
Edit: Shell is not available, so I have to use PHP scripting.

The easiest way to do it in PHP is to use file_get_contents() (fopen() also works), if the HTTP stream wrapper is enabled on your server:
<?php
$postUrls = array(
    'http://my.site.here/post1',
    'http://my.site.here/post2',
    'http://my.site.here/post3',
    'http://my.site.here/post4',
    'http://my.site.here/post5',
);

foreach ($postUrls as $url) {
    // Get the post as a user would
    $text = file_get_contents($url);
    // Here you can check if the request was successful.
    // For example, use strpos() or a regex to find a piece of text you expect
    // to find in the post.
    // Replace 'copyright bla, bla, bla' with a piece of text you display
    // in the footer of your site.
    if (strpos($text, 'copyright bla, bla, bla') === FALSE) {
        echo('Retrieval of '.$url." failed.\n");
    }
}
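If you are unsure whether the wrapper is usable, a quick check of the allow_url_fopen setting (a small addition on my part) can save debugging time:

if (!ini_get('allow_url_fopen')) {
    die("allow_url_fopen is disabled; use the cURL fallback below.\n");
}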
If file_get_contents() fails to open the URLs on your server (some ISPs restrict this behaviour), you can try to use cURL:
function curl_get_contents($url)
{
    $ch = curl_init($url);
    curl_setopt_array($ch, array(
        CURLOPT_CONNECTTIMEOUT => 30,   // connection timeout in seconds
        CURLOPT_RETURNTRANSFER => TRUE, // tell cURL to return the page content instead of just TRUE/FALSE
    ));
    $text = curl_exec($ch);
    curl_close($ch);
    return $text;
}
Then use the function curl_get_contents() listed above instead of file_get_contents().
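For example, the cron-called script could then look like this (a minimal sketch, reusing the $postUrls array and the footer check from the first snippet):

foreach ($postUrls as $url) {
    $text = curl_get_contents($url);
    if (strpos($text, 'copyright bla, bla, bla') === FALSE) {
        echo('Retrieval of '.$url." failed.\n");
    }
}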

Here is an example using PHP without building a cURL request.
Using PHP's shell_exec(), you can have an extremely light call like so:
$siteList = array("http://url1", "http://url2", "http://url3", "http://url4", "http://url5");
foreach ($siteList as $site) {
    $request = shell_exec('wget '.$site);
}
Now of course this is not the most concise answer, and not always a good solution either; if you actually want anything from the response, you will have to work with it in a different way than with cURL. But it's a low-impact option.
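A slightly hardened variant (my own sketch, not part of the original answer) escapes the URL and discards the downloaded file, since only the request itself matters here:

foreach ($siteList as $site) {
    // -q: quiet, -O /dev/null: throw the body away; the request alone primes the cache
    shell_exec('wget -q -O /dev/null ' . escapeshellarg($site));
}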

Thanks to Arkascha's tip I created a PHP page that I call from CRON. This page contains a simple function using cURL:
function cache_it($Url){
    if (!function_exists('curl_init')){
        die('No cURL, sorry!');
    }
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $Url);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 50); // higher timeout needed for the cache to load
    curl_exec($ch); // don't need the output here; otherwise $output = curl_exec($ch);
    curl_close($ch);
}
cache_it('http://www.mywebsite.com/url1');
cache_it('http://www.mywebsite.com/url2');
cache_it('http://www.mywebsite.com/url3');
cache_it('http://www.mywebsite.com/url4');
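If the list grows, the repeated calls can be replaced by a loop over an array (same function, just a different calling pattern):

$urls = array(
    'http://www.mywebsite.com/url1',
    'http://www.mywebsite.com/url2',
    'http://www.mywebsite.com/url3',
    'http://www.mywebsite.com/url4',
);
foreach ($urls as $url) {
    cache_it($url);
}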

Related

What's the best way to call php variables from an external domain?

I have a small PHP script: domain1.com/script1.php
//my database connections, check functions and values, then, load:
$variable1 = 'value1';
$variable2 = 'value2';

if ($variable1 > 5) {
    $variable3 = 'ok';
} else {
    $variable3 = 'no';
}
And I need to load the variables of this script on several other sites of mine (different domains, servers and IPs), so I can control all of them from a single file, for example:
domain2.com/site.php
domain3.com/site.php
domain4.com/site.php
And the "site.php" file needs to call the variable that is in script1.php (but I didn't want to have to copy this file in each of the 25 domains and edit each of them every day):
site.php:
echo $variable1 . $variable2 . $variable3; // loaded from script1.php on another domain
I don't know if the best and easiest way is to pass this via an API, a cookie, JavaScript, JSON, or to try to load it as a PHP include after authorizing the domain in php.ini. I can't use GET variables in the URL, like ?variable1=abc.
My area is PHP (though not very advanced), and beyond that I am extremely much a layman, so depending on the solution I will have to hire a developer; but I wanted to understand what to ask the developer for, or maybe find the cheapest solution (even if not the best), as these are non-profit sites.
Thank you.
If privacy is not a concern, then file_get_contents('https://example.com/file.php') will do. Have the information passed as JSON text; it's the industry standard.
If you need to protect the information, make a POST request (using cURL or the Guzzle library) with some password, assuming you're using the https protocol.
On example.com server:
$param = $_REQUEST["param"];
$result = [
    'param' => $param,
    'hello' => "world"
];
echo json_encode($result);
On client server:
$content = file_get_contents('https://example.com/file.php');
$result = json_decode($content, true);
print_r($result);
For completeness, here's a POST request:
//
// A very simple PHP example that sends an HTTP POST to a remote site
//
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.example.com/file.php");
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS,
    "postvar1=value1&postvar2=value2&postvar3=value3");
// In real life you should use something like:
// curl_setopt($ch, CURLOPT_POSTFIELDS,
//     http_build_query(array('postvar1' => 'value1')));

// Receive server response ...
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$server_output = curl_exec($ch);
curl_close($ch);

$result = json_decode($server_output, true);
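And for the password protection mentioned above, the receiving script can verify a shared secret before answering (a sketch; the field name 'password' and the secret value are placeholders I made up):

// on the example.com server
$secret = 'change-me'; // placeholder shared secret
if (!isset($_POST['password']) || !hash_equals($secret, $_POST['password'])) {
    http_response_code(403); // reject callers without the secret
    exit;
}
echo json_encode(['hello' => 'world']);

On the client side, append something like password=... to the CURLOPT_POSTFIELDS string (or to the http_build_query() array).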

Getting whole HTML element with PHP

I want to get the whole <article> element, which represents one listing (containing the image, title, its link and description), but it doesn't work. Can someone help me please?
<?php
$url = 'http://www.polkmugshot.com/';
$content = file_get_contents($url);
$first_step = explode( '<article>' , $content );
$second_step = explode("</article>" , $first_step[3] );
echo $second_step[0];
?>
You should definitely be using cURL for this type of request.
function curl_download($url){
    // is cURL installed?
    if (!function_exists('curl_init')){
        die('cURL is not installed!');
    }
    $ch = curl_init();
    // URL to download
    curl_setopt($ch, CURLOPT_URL, $url);
    // User agent
    curl_setopt($ch, CURLOPT_USERAGENT, "Set your user agent here...");
    // Include header in result? (0 = no, 1 = yes)
    curl_setopt($ch, CURLOPT_HEADER, 0);
    // Should cURL return or print out the data? (true = return, false = print)
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // Timeout in seconds
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    // Download the given URL, and return output
    $output = curl_exec($ch);
    // Close the cURL resource, and free system resources
    curl_close($ch);
    return $output;
}
For best results, combine it with an HTML DOM parser (such as the PHP Simple HTML DOM Parser) and use it like:
// curl_download() returns a string, so parse it first;
// str_get_html() comes from the Simple HTML DOM library
$html = str_get_html($output);

// Find all images
foreach($html->find('img') as $element)
    echo $element->src . '<br>';

// Find all links
foreach($html->find('a') as $element)
    echo $element->href . '<br>';
Good Luck!
I'm not sure I get you right, but I guess you need a PHP DOM parser. I suggest this one (it's a great PHP library for parsing HTML).
Also you can get the whole HTML code like this:
$url = 'http://www.polkmugshot.com/';
$html = file_get_html($url);
echo $html;
Probably a better way would be to parse the document and run some XPath queries over it afterwards, like so:
$url = 'http://www.polkmugshot.com/';
$xml = simplexml_load_file($url);
$articles = $xml->xpath("//article");
foreach ($articles as $article) {
    // do sth. useful here
}
Read about SimpleXML here.
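Note that simplexml_load_file() only succeeds if the page is well-formed XML, which real-world HTML rarely is. A more tolerant sketch of the same idea uses DOMDocument's forgiving HTML parser together with DOMXPath:

$url = 'http://www.polkmugshot.com/';
$dom = new DOMDocument();
@$dom->loadHTML(file_get_contents($url)); // @ silences warnings about sloppy markup
$xpath = new DOMXPath($dom);
foreach ($xpath->query('//article') as $article) {
    echo $dom->saveHTML($article); // print each <article> element
}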
Extract the articles with DOMDocument. Working example:
<?php
$url = 'http://www.polkmugshot.com/';
$content = file_get_contents($url);
$domd = new DOMDocument();
@$domd->loadHTML($content); // @ suppresses warnings about malformed HTML
foreach($domd->getElementsByTagName("article") as $article){
    var_dump($domd->saveHTML($article));
}
and as pointed out by @Guns, you'd better use cURL, for several reasons:
1: file_get_contents() will fail if allow_url_fopen is not set to true in php.ini.
2: until around PHP 5.5.0, file_get_contents() kept reading from the connection until the connection was actually closed, which for many servers can be many seconds after all content is sent, while cURL only reads until it has received the number of bytes given in the Content-Length HTTP header, which makes for much faster transfers (luckily this was fixed).
3: cURL supports gzip and deflate compressed transfers, which again makes for much faster transfers (when the content is compressible, such as HTML), while file_get_contents() will always transfer plain.
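To actually get compressed transfers out of cURL you have to ask for them; setting CURLOPT_ENCODING to an empty string makes cURL offer every encoding it supports:

// "" = accept any encoding cURL was built with (gzip, deflate, ...)
curl_setopt($ch, CURLOPT_ENCODING, "");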

cURL using info from mySQL, then storing the cURL'ed info

I'm programming in PHP.
An article I've found useful so far was mainly about how to cURL through one site with a lot of information, but what I really need is how to cURL multiple sites with not so much information - a few lines each, as a matter of fact!
Another thing: the article's focus is mainly on storing the result in a txt file on the FTP server, but I have loaded around 900 addresses into MySQL and want to load them from there, then enrich the table with the information stored in the links - which I will provide beneath!
We have some open public libraries with addresses and information about these and an API.
Link to the main site:
The function I would like to use: http://dawa.aws.dk/adresser/autocomplete?q=
SQL Structure:
Data example: http://i.imgur.com/jP1J26U.jpg
e.g. this address: Dornen 2 6715 Esbjerg N (called AdrName in the database).
http://dawa.aws.dk/adresser/autocomplete?q=Dornen%202%206715%20Esbjerg%20N
This will give me the following output (which I want to store in the AdrID in the database):
[
    {
        "tekst": "Dornen 2, Tarp, 6715 Esbjerg N",
        "adresse": {
            "id": "0a3f50b8-d085-32b8-e044-0003ba298018",
            "href": "http://dawa.aws.dk/adresser/0a3f50b8-d085-32b8-e044-0003ba298018",
            "vejnavn": "Dornen",
            "husnr": "2",
            "etage": null,
            "dør": null,
            "supplerendebynavn": "Tarp",
            "postnr": "6715",
            "postnrnavn": "Esbjerg N"
        }
    }
]
How to store it all in a blob, as seen in the SQL structure?
If you want to make a cURL request in PHP, use this method:
function curl_download($Url){
    // is cURL installed yet?
    if (!function_exists('curl_init')){
        die('Sorry cURL is not installed!');
    }
    // OK cool - then let's create a new cURL resource handle
    $ch = curl_init();
    // Now set some options (most are optional)
    // Set URL to download
    curl_setopt($ch, CURLOPT_URL, $Url);
    // Set a referer
    curl_setopt($ch, CURLOPT_REFERER, "http://www.example.org/yay.htm");
    // User agent
    curl_setopt($ch, CURLOPT_USERAGENT, "MozillaXYZ/1.0");
    // Include header in result? (0 = no, 1 = yes)
    curl_setopt($ch, CURLOPT_HEADER, 0);
    // Should cURL return or print out the data? (true = return, false = print)
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // Timeout in seconds
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    // Download the given URL, and return output
    $output = curl_exec($ch);
    // Close the cURL resource, and free system resources
    curl_close($ch);
    return $output;
}
And then you call it using
print curl_download('http://dawa.aws.dk/adresser/autocomplete?q=Melvej');
Or you can directly convert it to a JSON object:
$jsonString=curl_download('http://dawa.aws.dk/adresser/autocomplete?q=Melvej');
var_dump(json_decode($jsonString));
The data you download is JSON, so you can store it in a varchar column rather than a blob.
Also, the site with the API does not seem bothered about the HTTP referer, user agent, etc., so you can use file_get_contents() in place of cURL.
So simply get all the results from your db, iterate over them, making a call to the api, and update the appropriate row with the correct data:
//get all the rows from your database
$addresses = DB::exec('SELECT * FROM addresses'); // I don't know how you actually access your db, this is just an example

foreach($addresses as $address){
    $searchTerm = $address['AdrName'];
    $addressId = $address['Vid'];

    //download the json
    $apidata = file_get_contents('http://dawa.aws.dk/adresser/autocomplete?q=' . urlencode($searchTerm));

    //save back to db
    DB::exec('UPDATE addresses SET status=? WHERE id=?', [$apidata, $addressId]);

    //if you want to access the data, you can use json_decode:
    $data = json_decode($apidata);
    echo $data[0]->tekst; //outputs Dornen 2, Tarp, 6715 Esbjerg N
}
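Since DB:: above is just a stand-in, here is the same loop against plain PDO (a sketch; the DSN, credentials, and the column names Vid/AdrName/AdrID are taken from the question and may need adjusting):

$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$update = $pdo->prepare('UPDATE addresses SET AdrID = :apidata WHERE Vid = :id');

foreach ($pdo->query('SELECT Vid, AdrName FROM addresses') as $address) {
    // fetch the JSON for this address from the API
    $apidata = file_get_contents(
        'http://dawa.aws.dk/adresser/autocomplete?q=' . urlencode($address['AdrName'])
    );
    // store the raw JSON back on the row
    $update->execute([':apidata' => $apidata, ':id' => $address['Vid']]);
}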

Exit out of a cURL fetch

I'm trying to find a way to only quickly access a file and then disconnect immediately.
So I've decided to use cURL since it's the fastest option for me. But I can't figure out how I should "disconnect" cURL.
With the code below, Apache's access log says that the file I tried accessing was indeed accessed, but I'm feeling a little iffy about this, because when I just run the while loop without breaking out of it, it just keeps looping. Shouldn't the loop stop when cURL has finished fetching the file? Or am I just being silly; is the loop just restarting constantly?
<?php
$Resource = curl_init();
curl_setopt($Resource, CURLOPT_URL, '...');
curl_setopt($Resource, CURLOPT_HEADER, 0);
curl_setopt($Resource, CURLOPT_USERAGENT, '...');
while(curl_exec($Resource)){
    break;
}
curl_close($Resource);
?>
I tried setting the CURLOPT_CONNECTTIMEOUT_MS / CURLOPT_CONNECTTIMEOUT options to very small values, but it didn't help in this case.
Is there a more "proper" way of doing this?
This statement is superfluous:
while(curl_exec($Resource)){
    break;
}
Instead just keep the return value for future reference:
$result = curl_exec($Resource);
The while loop does not help at all. So now to your question: you can tell cURL that it should only take some bytes from the body and then quit. That can be achieved by reducing CURLOPT_BUFFERSIZE to a small value and by using a callback function to tell cURL it should stop:
$withCallback = array(
    CURLOPT_BUFFERSIZE => 20, # ~ number of bytes you'd like to get
    CURLOPT_WRITEFUNCTION => function($handle, $data) {
        echo "WRITE: (", strlen($data), ") $data\n";
        return 0; # returning 0 aborts the transfer after the first chunk
    },
);
$handle = curl_init("http://stackoverflow.com/");
curl_setopt_array($handle, $withCallback);
curl_exec($handle);
curl_close($handle);
Output:
WRITE: (10) <!DOCTYPE
Another alternative is to make a HEAD request by using CURLOPT_NOBODY, which will never fetch the body. But it's not a GET request.
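A minimal sketch of that variant:

$handle = curl_init("http://stackoverflow.com/");
curl_setopt($handle, CURLOPT_NOBODY, true); // HEAD request: headers only, no body
curl_exec($handle);
curl_close($handle);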
The connect timeout settings control how long the connect may take before it times out. The connect is the phase up to the point where the server accepts the connection and curl knows that it has; it's not related to the phase in which curl fetches data from the server. That phase is governed by:
CURLOPT_TIMEOUT - The maximum number of seconds to allow cURL functions to execute.
You find a long list of available options in the PHP manual: curl_setopt.
Perhaps that might be helpful?
$GLOBALS["dataread"] = 0;
define("MAX_DATA", 3000); // how many bytes should be read?
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.php.net/");
curl_setopt($ch, CURLOPT_WRITEFUNCTION, "handlewrite");
curl_exec($ch);
curl_close($ch);
function handlewrite($ch, $data)
{
$GLOBALS["dataread"] += strlen($data);
echo "READ " . strlen($data) . " bytes\n";
if ($GLOBALS["dataread"] > MAX_DATA) {
return 0;
}
return strlen($data);
}

trying to run a CURL script in wordpress

I'm trying to run a cURL script in WordPress but I'm having a problem.
When I test it, I get a 500 internal error as WP changes the URL.
So the script is at www.site.com/curl_script.php - when I test that (navigate to www.site.com/curl_script.php), I end up going to www.site.com/curl_script.php/wp-admin/install.php, which returns a 500 internal error.
Now after playing around with the script, I've noticed the problem: it seems to be a function that I'm running (the cURL function) that's causing WordPress to take me to that URL.
I've had similar issues before and have managed to fix them by simply changing the names of the functions, but that doesn't seem to work anymore.
The function:
function verify_user($ref, $username, $uu_name){
    $ch = curl_init($server_root);
    curl_setopt($ch, CURLOPT_URL, "http://site.com/con1.php");
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_POSTFIELDS, $fields_string);
    curl_setopt($ch, CURLOPT_POST, 1);
    $result = curl_exec($ch);
    $data = json_decode($result);

    global $ref_;
    $ref_ = $data->ref_id;

    //fetch some more info
    $chh = curl_init($server_root);
    curl_setopt($chh, CURLOPT_URL, "http://site.com/con2.php");
    curl_setopt($chh, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($chh, CURLOPT_POST, 1);
    $resultt_2 = curl_exec($chh);
    $data_custt = json_decode($resultt_2);
    $cust_st = $data_custt->user_status;

    if ($cust_st == "FAILED"){
        echo "this is bad";
    }
    elseif ($cust_st == "PASSED") {
        echo "this is good";
    }
}
Now when I call this function:
verify_user_info($ref, $username, $uu_name);
WordPress plays up...
But when I leave the function out (don't call it), everything works fine.
It seems that WP is assuming the user is attempting to run the installation, when that's not the case.
Any ideas on how to fix this dynamically, as others will use this script too?
It sounds like you are getting redirected somehow, even though you shouldn't be if CURLOPT_FOLLOWLOCATION is not set. Try using the curl_getinfo() function to debug the URL that is being accessed.
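For example (a minimal sketch; run it right after curl_exec() on the handle in question):

$info = curl_getinfo($ch);
echo $info['url'];            // the last effective URL
echo $info['redirect_count']; // how many redirects were followed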
