(I'm scraping this stuff with the permission of the website in question, by the way).
It's a pretty simple web scraper that was working fine when I was loading all the links by hand, but when I tried to load them in via JSON and variables (so I can do lots of scraping with the one script and make the process more modular by just adding more links to the JSON), it runs in an infinite loop.
(Page has been loading for about 15 minutes now)
Here is my JSON. Only one store is in there for testing purposes, but there are going to be about 15 more.
[
{
"store":"Incu Men",
"cat":"Accessories",
"general_cat":"Accessories",
"spec_cat":"accessories",
"url":"http://www.incuclothing.com/shop-men/accessories/",
"baseurl":"http://www.incuclothing.com",
"next_select":"a.next",
"prod_name_select":".infobox .fn",
"label_name_select":".infobox .brand",
"desc_select":".infobox .description",
"price_select":"#price",
"mainImg_select":"",
"more_imgs":".product-images",
"product_url":".hproduct .photo-link"
}
]
Here is the PHP scraper code:
<?php
//Set infinite time limit
set_time_limit (0);
// Include simple html dom
include('simple_html_dom.php');
// Defining the basic cURL function
function curl($url) {
$ch = curl_init();
// Initialising cURL
curl_setopt($ch, CURLOPT_URL, $url);
// Setting cURL's URL option with the $url variable passed into the function
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
// Setting cURL's option to return the webpage data
$data = curl_exec($ch);
// Executing the cURL request and assigning the returned data to the $data variable
curl_close($ch);
// Closing cURL
return $data;
// Returning the data from the function
}
function getLinks($catURL, $prodURL, $baseURL, $next_select) {
$urls = array();
while($catURL) {
echo "Indexing: $url" . PHP_EOL;
$html = str_get_html(curl($catURL));
foreach ($html->find($prodURL) as $el) {
$urls[] = $baseURL . $el->href;
}
$next = $html->find($next_select, 0);
$url = $next ? $baseURL . $next->href : null;
echo "Results: $next" . PHP_EOL;
}
return $urls;
}
$string = file_get_contents("jsonWorkers/incuMens.json");
$json_array = json_decode($string,true);
foreach ($json_array as $value){
$baseURL = $value['baseurl'];
$catURL = $value['url'];
$store = $value['store'];
$general_cat = $value['general_cat'];
$spec_cat = $value['spec_cat'];
$next_select = $value['next_select'];
$prod_name = $value['prod_name_select'];
$label_name = $value['label_name_select'];
$description = $value['desc_select'];
$price = $value['price_select'];
$prodURL = $value['product_url'];
if (!is_null($value['mainImg_select'])){
$mainImg = $value['mainImg_select'];
}
$more_imgs = $value['more_imgs'];
$allLinks = getLinks($catURL, $prodURL, $baseURL, $next_select);
}
?>
Any ideas why the script would be running infinitely and not returning anything/stopping/printing anything to screen? I'm just gonna let it run until it stops. When I was doing this by hand it would only take a minute or so, sometimes less, so I'm sure it's a problem with my variables/JSON, but I can't for the life of me see where the issue lies.
Can anyone take a quick look and point me in the right direction?
There is a problem with your while($catURL) loop: inside it you assign the next page to $url, but the condition tests $catURL, which never changes, so the loop never ends (the echo also refers to $url, which is undefined at that point).
Moreover, you can force PHP to display progress information in your browser while the script runs with the flush() command.
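A minimal corrected sketch of getLinks(), assuming the same curl() helper and selectors from your script (the only real changes are updating $catURL itself and guarding against a failed page load):
function getLinks($catURL, $prodURL, $baseURL, $next_select) {
    $urls = array();
    while ($catURL) {
        echo "Indexing: $catURL" . PHP_EOL;
        flush(); // push progress output to the browser as the script runs
        $html = str_get_html(curl($catURL));
        if (!$html) {
            break; // page failed to load or came back empty, stop paginating
        }
        foreach ($html->find($prodURL) as $el) {
            $urls[] = $baseURL . $el->href;
        }
        $next = $html->find($next_select, 0);
        // advance the variable the while() condition actually checks
        $catURL = $next ? $baseURL . $next->href : null;
        $html->clear(); // free simple_html_dom memory between pages
    }
    return $urls;
}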
I am looking to collect the titles of all of the posts on a subreddit, and I wanted to know what would be the best way of going about this?
I've looked around and found some stuff talking about Python and bots. I've also had a brief look at the API and am unsure in which direction to go.
As I do not want to commit to an approach only to find out 90% of the way through that it won't work, I'm asking if someone could point me in the right direction regarding the language and any extras needed, for example pip for Python.
My own experience is in web languages such as PHP, so I initially thought a web app would do the trick, but I'm unsure whether this would be the best way and how to go about it.
So as my question stands
What would be the best way to collect the titles (in bulk) of a
subreddit?
Or if that is too subjective
How do I retrieve and store all the post titles of a subreddit?
Preferably it needs to:
do more than 1 page of (25) results
save to a .txt file
Thanks in advance.
PHP; in 25 lines:
$subreddit = 'pokemon';
$max_pages = 10;
// Set variables with default data
$page = 0;
$after = '';
$titles = '';
do {
$url = 'http://www.reddit.com/r/' . $subreddit . '/new.json?limit=25&after=' . $after;
// Set URL you want to fetch
$ch = curl_init($url);
// Set curl header option to false (we don't need the headers)
curl_setopt($ch, CURLOPT_HEADER, 0);
// Set curl option of nobody to false as we need the body
curl_setopt($ch, CURLOPT_NOBODY, 0);
// Set curl timeout of 5 seconds
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
// Set curl to return output as string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
// Execute curl
$output = curl_exec($ch);
// Get HTTP code of request
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
// Close curl
curl_close($ch);
// If http code is 200 (success)
if ($status == 200) {
// Decode JSON into PHP object
$json = json_decode($output);
// Set after for next curl iteration (reddit's pagination)
$after = $json->data->after;
// Loop through each post and append its title
foreach ($json->data->children as $k => $v) {
$titles .= $v->data->title . "\n";
}
}
// Increment page number
$page++;
// Loop through whilst the current page number is less than the maximum pages
} while ($page < $max_pages);
// Save titles to text file
file_put_contents(dirname(__FILE__) . '/' . $subreddit . '.txt', $titles);
I'm working with the Instagram API, and I'm a little bit confused about looping in a function.
I'm trying to write code to get all the images from an Instagram user, but the API limits each call to 20 images, and we must make another call for the next page.
I'm using https://github.com/cosenary/Instagram-PHP-API to my application, and here is the function to get images.
function getUserMedia($id = 'self', $limit = 0)
{
$params = array();
if ($limit > 0) {
$params['count'] = $limit;
}
return $this->_makeCall('users/' . $id . '/media/recent', strlen($this->getAccessToken()), $params);
}
When I make a call, the return value is:
{
"pagination":
{
"next_url": "https://api.instagram.com/v1/users/21537353/media/recent?access_token=xxxxxxx&max_id=1173734674550540529_21537353",
"next_max_id": "1173734674550540529_21537353"
}, [.... another result data ....]
That is the result of the first call, and it produces 20 images.
My questions are:
How do I pass the return value back into that function using the next_max_id parameter, so that it loops and calls the function again?
How do I merge the results into one object array?
I'm sorry if my English and my explanation are not good.
Thank you for your help.
You should use a recursive function and stop recursing when next_url is null/empty.
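A rough sketch of that recursive approach, hitting the raw endpoint from your pagination output directly (getAllMedia() is a hypothetical helper; swap in your own HTTP call and error handling):
function getAllMedia($userId, $accessToken, $maxId = null, array $media = array()) {
    $url = "https://api.instagram.com/v1/users/{$userId}/media/recent?access_token={$accessToken}";
    if ($maxId !== null) {
        $url .= '&max_id=' . urlencode($maxId);
    }
    $page = json_decode(file_get_contents($url));
    // merge this page's 20 (or fewer) items into the running result (question 2)
    $media = array_merge($media, $page->data);
    // recurse while there is another page; stop when next_max_id is null/empty (question 1)
    if (!empty($page->pagination->next_max_id)) {
        return getAllMedia($userId, $accessToken, $page->pagination->next_max_id, $media);
    }
    return $media;
}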
From the Instagram-PHP-API documentation, it seems to me that you should use the pagination() method to receive your next page:
$photos = $instagram->getTagMedia('kitten');
$result = $instagram->pagination($photos);
Just use a condition (if) to verify whether $result has content and, if it has, make another call with pagination() to request the next page. Do it recursively.
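For example, something along these lines (a sketch that assumes pagination() returns the next page's decoded response with a data property, and something empty when there are no more pages):
function collectMedia($instagram, $result, array $media = array()) {
    if (empty($result->data)) {
        return $media; // nothing (more) to collect, stop recursing
    }
    $media = array_merge($media, $result->data);
    // ask the library for the next page and recurse
    return collectMedia($instagram, $instagram->pagination($result), $media);
}
$photos = $instagram->getUserMedia('self');
$allPhotos = collectMedia($instagram, $photos);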
But I think it's also a good idea to implement it without Instagram-PHP-API, using a while loop:
$token = "<your-accces-token>";
$url = "https://api.instagram.com/v1/users/self/media/recent/?access_token=".$token;
while ($url != null) {
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$output = curl_exec($ch);
curl_close($ch);
$photos = json_decode($output);
if ($photos->meta->code == 200) {
// do stuff with photos
$url = (isset($photos->pagination->next_url)) ? $photos->pagination->next_url : null; // verify if there's another page
} else {
$url = null; // if error, stop the loop
}
sleep(1); // pause between requests so you don't hit Instagram too quickly and burn through your API rate limit
}
Good luck!
New to PHP, so bear with me...
I'm trying to send/make available the output variables from this simpleXml parser script to this other PHP file, which is supposed to send data to Brightcove's Media API.
Sending Script:
<?php
session_name("FeedParse");
session_start();
$_SESSION['bcName'] = $title;
$_SESSION['shortDescription'] = $description;
$_SESSION['remoteUrl'] = $videoFile;
$html = "";
$url = "http://feeds.nascar.com/feeds/video?command=search_videos&media_delivery=http&custom_fields=adtitle%2cfranchise&page_size=100&sort_by=PUBLISH_DATE:DESC&token=217e0d96-bd4a-4451-88ec-404debfaf425&any=franchise:%20Preview%20Show&any=franchise:%20Weekend%20Top%205&any=franchise:Up%20to%20Speed&any=franchise:Press%20Pass&any=franchise:Sprint%20Cup%20Practice%20Clips&any=franchise:Sprint%20Cup%20Highlights&any=franchise:Sprint%20Cup%20Final%20Laps&any=franchise:Sprint%20Cup%20Victory%20Lane&any=franchise:Sprint%20Cup%20Post%20Race%20Reactions&any=franchise:All%20Access&any=franchise:Nationwide%20Series%20Qualifying%20Clips&any=franchise:Nationwide%20Series%20Highlights&any=franchise:Nationwide%20Series%20Final%20Laps&any=franchise:Nationwide%20Series%20Victory%20Lane&any=franchise:Nationwide%20Series%20Post%20Race%20Reactions&any=franchise:Truck%20Series%20Qualifying%20Clips&any=franchise:Truck%20Series%20Highlights&any=franchise:Truck%20Series%20Final%20Laps&any=franchise:Truck%20Series%20Victory%20Lane&any=franchise:Truck%20Series%20Post%20Race%20Reactions&output=mrss";
$xml = simplexml_load_file($url);
$namespaces = $xml->getNamespaces(true); // get namespaces
for($i = 0; $i < 50; $i++){ // will return the 50 most recent videos
$title = $xml->channel->item[$i]->video;
$link = $xml->channel->item[$i]->link;
$title = $xml->channel->item[$i]->title;
$pubDate = $xml->channel->item[$i]->pubDate;
$description = $xml->channel->item[$i]->description;
$titleid = $xml->channel->item[$i]->children($namespaces['bc'])->titleid;
$m_attrs = $xml->channel->item[$i]->children($namespaces['media'])->content[0]->attributes();
$videoFile = $m_attrs["url"];
$html .= //"<h3>$title</h3>$description<p>$pubDate<p>$url<p>Video ID: $titleid<p>
print $title;
print $description;
print $videoFile;
// echo $html;/* tutorial for this script is here https://www.youtube.com/watch?v=4ZLZkdiKGE0 */
}
//http://support.brightcove.com/en/video-cloud/docs/media-write-api-php-example-upload-video
?>
Receiving Script:
<?php
session_start();
$title = $_SESSION['bcName'];
$description = $_SESSION['shortDescription'];
$videoFile = $_SESSION['remoteUrl'];
// Instantiate the Brightcove class
$bc = new Brightcove(
'//readtoken//', //Read Token BC
'//writetoken//' //Write Token BC
);
// Set the data for the new video DTO using the form values
$metaData = array(
'$title' => $_POST['bcName'],
'$description' => $_POST['bcShortDescription'],
);
//changed all the code below to what i think works for remoteUrl and URLs as opposed to actual video files
// Rename the file to its original file name (instead of temp names like "a445ertd3")
$url = $_URL['remoteUrl'];
//rename($url['tmp_name'], '/tmp/' . $url['name']);
//$url = '/tmp/' . $url['name'];
// Send the file to Brightcove
//Actually, this has been changed to send URL to BC, not file
echo $bc->createVideo($url,$metaData);
class Brightcove {
public $token_read = 'UmILcDyAFKzjtWO90HNzc67X-wLZK_OUEZliwd9b3lZPWosBPgm1AQ..'; //Read Token from USA Today Sports BC
public $token_write = 'svP0oJ8lx3zVkIrMROb6gEkMW6wlX_CK1MoJxTbIajxdn_ElL8MZVg..'; //Write Token from USA Today Sports BC
public $read_url = 'http://api.brightcove.com/services/library?';
public $write_url = 'http://api.brightcove.com/services/post';
public function __construct($token_read, $token_write = NULL ) {
$this->token_read = $token_read;
$this->token_write = $token_write;
}
public function createVideo($url = NULL, $meta) {
$request = array();
$post = array();
$params = array();
$video = array();
foreach($meta as $key => $value) {
$video[$key] = $value;
}
$params['token'] = $this->token_write;
$params['video'] = $video;
$post['method'] = 'create_video';
$post['params'] = $params;
$request['json'] = json_encode($post);
if($file) {
$request['file'] = '#' . $file;
}
// Utilize CURL library to handle HTTP request
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $this->write_url);
curl_setopt($curl, CURLOPT_POST, 1);
curl_setopt($curl, CURLOPT_POSTFIELDS, $request);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_VERBOSE, TRUE );
curl_setopt($curl, CURLOPT_CONNECTTIMEOUT, 300);
curl_setopt($curl, CURLOPT_TIMEOUT, 300);
$response = curl_exec($curl);
curl_close($curl);
// Responses are transfered in JSON, decode into PHP object
$json = json_decode($response);
// Check request error code and re-call createVideo if request
// returned a 213 error. A 213 error occurs when you have
// exceeded your allowed number of concurrent write requests
if(isset($json->error)) {
if($json->error->code == 213) {
return $this->createVideo($url, $meta);
} else {
return FALSE;
}
} else {
return $response;
}
}
}
?>
Did I set up sessions to work correctly here? Any ideas on why the receiving PHP script isn't picking up the data/variables outputted by the PHP feed parser script?
In your sending script, it looks like you're setting the session variables at the beginning of the script and you're expecting them to get updated whenever the local variable changes. This won't happen with the way you've written it.
You could make this happen by assigning the variables by reference, by putting an ampersand (&) before the local variable name, but this can get kinda tricky in some scenarios, and it might be best to skip that headache and instead just update the session variable directly.
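For illustration only (this is the reference mechanism being described, not a recommendation):
$_SESSION['bcName'] = &$title; // the session entry now references $title...
$title = 'Some new title';     // ...so this assignment also changes $_SESSION['bcName']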
However, another issue is that you're attempting to store multiple values (50, as indicated by the code comments) into a scalar session variable. So every time your loop iterates, it would overwrite the previous value. Perhaps what would be better would be to use an array structure:
<?php
session_name("FeedParse");
session_start();
$_SESSION['videos'] = array(); // initialize a session variable 'videos' to be an array
$url = "http://feeds.nascar.com/feeds/video?command=search_videos&media_delivery=http&custom_fields=adtitle%2cfranchise&page_size=100&sort_by=PUBLISH_DATE:DESC&token=217e0d96-bd4a-4451-88ec-404debfaf425&any=franchise:%20Preview%20Show&any=franchise:%20Weekend%20Top%205&any=franchise:Up%20to%20Speed&any=franchise:Press%20Pass&any=franchise:Sprint%20Cup%20Practice%20Clips&any=franchise:Sprint%20Cup%20Highlights&any=franchise:Sprint%20Cup%20Final%20Laps&any=franchise:Sprint%20Cup%20Victory%20Lane&any=franchise:Sprint%20Cup%20Post%20Race%20Reactions&any=franchise:All%20Access&any=franchise:Nationwide%20Series%20Qualifying%20Clips&any=franchise:Nationwide%20Series%20Highlights&any=franchise:Nationwide%20Series%20Final%20Laps&any=franchise:Nationwide%20Series%20Victory%20Lane&any=franchise:Nationwide%20Series%20Post%20Race%20Reactions&any=franchise:Truck%20Series%20Qualifying%20Clips&any=franchise:Truck%20Series%20Highlights&any=franchise:Truck%20Series%20Final%20Laps&any=franchise:Truck%20Series%20Victory%20Lane&any=franchise:Truck%20Series%20Post%20Race%20Reactions&output=mrss";
$xml = simplexml_load_file($url);
$namespaces = $xml->getNamespaces(true); // get namespaces
for($i = 0; $i < 50; $i++){ // will return the 50 most recent videos
$m_attrs = $xml->channel->item[$i]->children($namespaces['media'])->content[0]->attributes();
// on each loop iteration, create a new array structure with the video info in it and
// push it onto the 'video' array session variable
$video = array(
// cast to plain strings: SimpleXMLElement objects cannot be serialized into the session
'bcName' => (string) $xml->channel->item[$i]->video,
'shortDescription' => (string) $xml->channel->item[$i]->description,
'remoteUrl' => (string) $m_attrs["url"],
);
$_SESSION['videos'][] = $video;
}
Then, on your receiving script, you'll loop through $_SESSION['videos']:
<?php
session_start();
// Instantiate the Brightcove class
$bc = new Brightcove(
'UmILcDyAFKzjtWO90HNzc67X-wLZK_OUEZliwd9b3lZPWosBPgm1AQ..', //Read Token from USA Today Sports BC
'svP0oJ8lx3zVkIrMROb6gEkMW6wlX_CK1MoJxTbIajxdn_ElL8MZVg..' //Write Token from USA Today Sports BC
); // close the constructor call before starting the loop
foreach ((array)$_SESSION['videos'] as $video) {
$title = $video['bcName'];
$description = $video['shortDescription'];
$videoFile = $video['remoteUrl'];
// The code below this line may need to be adjusted. It does not seem quite right.
// Are you actually sending anything to this script via POST? Or should those just
// be the values we set above?
// What is $_URL? Should that just be the $videoFile value?
// Set the data for the new video DTO using the form values
$metaData = array(
'$title' => $_POST['bcName'],
'$description' => $_POST['bcShortDescription'],
);
//changed all the code below to what i think works for remoteUrl and URLs as opposed to actual video files
// Rename the file to its original file name (instead of temp names like "a445ertd3")
$url = $_URL['remoteUrl'];
//rename($url['tmp_name'], '/tmp/' . $url['name']);
//$url = '/tmp/' . $url['name'];
// Send the file to Brightcove
//Actually, this has been changed to send URL to BC, not file
echo $bc->createVideo($url,$metaData);
}
IMPORTANT NOTE:
Keep in mind that this will call the API once for each video in the session (sounds like up to 50 each time). So you'll be making 50 cURL requests on each run of this script. That seems a bit heavy, but perhaps that's expected. It would be worth investigating if their API allows you to compile the data into one call and send it all up at once, as opposed to connecting, sending the data, parsing the response, and disconnecting, 50 times.
I'm writing a page scraper for a site that is a little slow, but has a lot of information I'd like to use for widget purposes (with their permission). Currently it takes roughly 4-5 minutes to execute and parse the ~150 pages I scrape so far. It will be a crontab'd event, and a temporary table is used while it's being generated, then copied to a "live" table upon completion so it's a seamless transition from a client standpoint. However, can you see a way to speed up my code?
//mysql connection stuff here
function dnl2array($domnodelist) {
$return = array();
$nb = $domnodelist->length;
for ($i = 0; $i < $nb; ++$i) {
$return['pt'][] = utf8_decode(trim($domnodelist->item($i)->nodeValue));
$return['html'][] = utf8_decode(trim(get_inner_html($domnodelist->item($i))));
}
return $return;
}
function get_inner_html( $node ) {
$innerHTML= '';
$children = $node->childNodes;
foreach ($children as $child) {
$innerHTML .= $child->ownerDocument->saveXML( $child );
}
return $innerHTML;
}
// NEW curl instead of file_get_contents()
$c = curl_init($url);
curl_setopt($c, CURLOPT_HEADER, false);
curl_setopt($c, CURLOPT_USERAGENT, getUserAgent());
curl_setopt($c, CURLOPT_FAILONERROR, true);
curl_setopt($c, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($c, CURLOPT_AUTOREFERER, true);
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
curl_setopt($c, CURLOPT_TIMEOUT, 20);
// Grab the data.
$html = curl_exec($c);
// Check if the HTML didn't load right, if it didn't - report an error
if (!$html) {
echo "<p>cURL error number: " .curl_errno($c) . " on URL: " . $url ."</p>" .
"<p>cURL error: " . curl_error($c) . "</p>";
}
// $html = file_get_contents($url);
$doc = new DOMDocument;
// Load the html into our object
$doc->loadHTML($html);
$xPath = new DOMXPath( $doc );
// scrape initial page that contains list of everything I want to scrape
$results = $xPath->query('//div[@id="food-plan-contents"]//td[@class="product-name"]');
$test['itams'] = dnl2array($results);
foreach($test['itams']['html'] as $get_url){
$prepared_url[] = ""; // The url being scraped, modified slightly to gain access to more information -- not SO applicable data to see
}
$i = 0;
foreach($prepared_url as $url){
$c = curl_init($url);
curl_setopt($c, CURLOPT_HEADER, false);
curl_setopt($c, CURLOPT_USERAGENT, getUserAgent());
curl_setopt($c, CURLOPT_FAILONERROR, true);
curl_setopt($c, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($c, CURLOPT_AUTOREFERER, true);
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
curl_setopt($c, CURLOPT_TIMEOUT, 20);
// Grab the data.
$html = curl_exec($c);
// Check if the HTML didn't load right, if it didn't - report an error
if (!$html) {
echo "<p>cURL error number: " .curl_errno($c) . " on URL: " . $url ."</p>" .
"<p>cURL error: " . curl_error($c) . "</p>";
}
// $html = file_get_contents($url);
$doc = new DOMDocument;
$doc->loadHTML($html);
$xPath = new DOMXPath($doc);
$results = $xPath->query('//h3[@class="product-name"]');
$arr[$i]['name'] = dnl2array($results);
$results = $xPath->query('//div[@class="product-specs"]');
$arr[$i]['desc'] = dnl2array($results);
$results = $xPath->query('//p[@class="product-image-zoom"]');
$arr[$i]['img'] = dnl2array($results);
$results = $xPath->query('//div[@class="groupedTable"]/table/tbody/tr//span[@class="price"]');
$arr[$i]['price'] = dnl2array($results);
$arr[$i]['url'] = $url;
if($i % 5 == 1){
lazy_loader($arr); //lazy loader adds data to sql database
unset($arr); // keep memory footprint light (server is wimpy -- but free!)
}
$i++;
usleep(50000); // Don't be a bandwidth pig
}
// Get any stragglers
if(count($arr) > 0){
lazy_loader($arr);
}
$time = time() + (23 * 60 * 60); // Time + 23 hours for "tomorrow's date"
$tab_name = "sr_data_items_" . date("m_d_y", $time);
// and copy table now that script is finished
mysql_query("CREATE TABLE IF NOT EXISTS `{$tab_name}` LIKE `sr_data_items_skel`");
mysql_query("INSERT INTO `{$tab_name}` SELECT * FROM `sr_data_items_skel`");
mysql_query("TRUNCATE TABLE `sr_data_items_skel`");
It sounds like you're mostly dealing with slow server response speeds. At even 2 seconds for each of those 150 pages, you're looking at 300 seconds = 5 minutes. The best way you could speed this up is by using curl_multi_* to run multiple connections at the same time.
So replace the start of the foreach loop (up through the if !html check) with this:
reset($prepared_url); // set internal pointer to first element
$running = array(); // map from curl reference to url
$finished = false;
$mh = curl_multi_init();
$i = 0;
while(!$finished || !empty($running)){
// add urls to $mh up to a maximum
while (count($running) < 15 && !$finished)
{
$url = current($prepared_url); // take the URL under the pointer...
next($prepared_url); // ...then advance, so the first URL isn't skipped
if ($url === FALSE)
{
$finished = true;
break;
}
$c = setupcurl($url);
curl_multi_add_handle($mh, $c);
$running[(int)$c] = $url; // resources can't be array keys, so cast the handle to int
}
curl_multi_exec($mh, $active);
$info = curl_multi_info_read($mh);
if (false === $info) continue; // nothing to report right now
$c = $info['handle'];
$url = $running[(int)$c];
unset($running[(int)$c]);
$result = $info['result'];
if ($result != CURLE_OK)
{
echo "Curl Error: " . $result . "\n";
continue;
}
$html = curl_multi_getcontent($c);
$download_time = curl_getinfo($c, CURLINFO_TOTAL_TIME);
curl_multi_remove_handle($mh, $c);
// Check if the HTML didn't load right, if it didn't - report an error
if (!$html) {
echo "<p>cURL error number: " .curl_errno($c) . " on URL: " . $url ."</p>\n" .
"<p>cURL error: " . curl_error($c) . "</p>\n";
}
curl_close($c);
<<rest of foreach loop here>>
That will keep 15 downloads going at the same time, and process them as they finish.
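setupcurl() above stands in for your per-URL handle setup; a minimal version reusing the options from your original loop (getUserAgent() is your existing helper) might look like this:
function setupcurl($url) {
    $c = curl_init($url);
    curl_setopt($c, CURLOPT_HEADER, false);
    curl_setopt($c, CURLOPT_USERAGENT, getUserAgent());
    curl_setopt($c, CURLOPT_FAILONERROR, true);
    curl_setopt($c, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($c, CURLOPT_AUTOREFERER, true);
    curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($c, CURLOPT_TIMEOUT, 20);
    return $c;
}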
Anyway – so for the history: please see my comments up top.
As for caching: I'm using dnsmasq to cache.
My setup is using a recipe for chef, which I run through chef-solo. The templates contains my configuration and the attributes contain my settings. It's pretty straight forward.
So the beauty is that this allows me to put this server into DHCP (we use Amazon EC2 and this service distributes all IPs via DHCP to the virtual instances) and then I don't have to make any changes to my application to use them.
I have another recipe to edit /etc/dhclient.conf.
Does this help? Let me know where to elaborate more.
EDIT
Just for clarification: this is not a Ruby solution; I'm just using chef for configuration management (this part makes sure that services are always set up the same way, etc.). Dnsmasq itself acts as a local DNS server and caches the responses, which speeds lookups up.
The manual way is as follows:
On Ubuntu:
apt-get install dnsmasq
Then edit the /etc/dnsmasq.conf:
listen-address=127.0.0.1
cache-size=5000
domain-needed
bogus-priv
log-queries
Restart service and verify it's running (ps aux|grep dnsmasq).
Then put it into your /etc/resolv.conf:
nameserver 127.0.0.1
Test:
dig @127.0.0.1 stackoverflow.com
Execute twice, check time it took to resolve. Second one should be faster.
Enjoy! ;)
The first thing to do is to measure how much time is spent downloading the file from the server. Use the function microtime(true) to get a timestamp both before and after the call
file_get_contents($url);
and subtract the values. After you find out that the real bottleneck is inside your code and not on the side of network or remote server, only then you can start thinking about some optimizations.
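For instance, the measurement could be as simple as this (a minimal sketch):
$start = microtime(true);            // timestamp just before the download
$html = file_get_contents($url);
$elapsed = microtime(true) - $start; // seconds spent fetching the page
echo 'Download took ' . round($elapsed, 3) . " seconds\n";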
When you say that 150 pages take 5 minutes to load & parse, that's 2 seconds per page, and my wild guess is that most of that time is spent downloading the page from the server.
You should consider using cURL instead of both file_get_contents() and DOMDocument::loadHTMLFile(), because it's much faster.
See this question:
https://stackoverflow.com/questions/555523/file-get-contents-vs-curl-what-has-better-performance
You need to benchmark. DNS is not an issue: if you're scraping 150 pages, DNS will certainly stay cached on your resolver for the 4 minutes you need to parse the remaining 149 pages.
Try timing all page transfers with wget/curl; you may be surprised that it's not as fast as you think.
Try requesting in parallel, hitting them with 4 parallel requests will get your time down to 1 minute.
If you actually find that it's an XPath problem, use preg_split() or even an awk script with popen() to get your values.
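For example, plain regexes (here preg_match_all rather than preg_split) can pull a single field straight out of the raw HTML; a rough sketch based on the product-name markup from your XPath queries, not a drop-in replacement:
// grab every product name without building a DOM at all
if (preg_match_all('/<h3 class="product-name"[^>]*>(.*?)<\/h3>/si', $html, $matches)) {
    $names = array_map('trim', array_map('strip_tags', $matches[1]));
}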
Hey all, I have seen several questions on this topic here, but none of them have solved my problem. I have a script on my site which I want to use to generate several different types of emails to my users. I wanted a way to create template files for the different emails which accept $_POST variables to fill in relevant information, and to simply make a POST request to these templates and get back the response to use as the body of the email. I am attempting to write a function which accepts the location of the template file (either relative or absolute would work, but I would prefer relative, honestly) and an array of parameters that I would like to send to the template via POST. So far I have had no luck. Here is my code so far:
private function post_request($url, $data) {
$output = array();
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$result = curl_exec($ch);
$error = curl_error($ch); // grab any error message before the handle is closed
curl_close($ch);
if ($result) {
$output['status'] = "ok";
$output['content'] = $result;
} else {
$output['status'] = "failure";
$output['error'] = $error;
}
return $output;
}
I have been getting the error "couldn't connect to host" from cURL, but after outputting my URL to an error log I have been able to verify that copying and pasting the URL into Firefox shows the page correctly.
Any ideas? I am not married to the idea of using curl, so if there is a better option I would be more than happy to use it instead. Thanks for the help all!
You should be able to use file_get_contents() for this, so long as your host has not prevented it from accessing remote locations (and the $url script is not looking exclusively for POST data).
private function post_request($url, $data) {
$output = array();
$url_with_data = '';
foreach ( $data as $k=>$v ){ // Loop through data and create request string
$url_with_data .= '&' . urlencode( $k ) . '=' . urlencode( $v ); // encode each key/value individually
}
// Remove the leading ampersand
$url_with_data = substr( $url_with_data, 1 );
// Request file
// Format will be http://url.com?var1=data&var2=data&var3=data
$result = file_get_contents( $url . '?' . $url_with_data );
if ($result) {
$output['status'] = "ok";
$output['content'] = $result;
} else {
$output['status'] = "failure";
$output['error'] = 'Could not open remote file';
}
return $output;
}
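If the template script really does read from $_POST, file_get_contents() can also send an actual POST request via a stream context; a minimal sketch (not tested against your setup):
private function post_request($url, $data) {
    $context = stream_context_create(array(
        'http' => array(
            'method'  => 'POST',
            'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
            'content' => http_build_query($data), // encodes the array as key=value pairs
            'timeout' => 10,
        ),
    ));
    $result = file_get_contents($url, false, $context);
    if ($result !== false) {
        return array('status' => 'ok', 'content' => $result);
    }
    return array('status' => 'failure', 'error' => 'Could not open remote file');
}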
Another option: You say that both files reside on the same server. If that is the case, you could simply require() the template builder.
private function post_request($url, $data) {
$output = array();
@require_once('./path/to/template_builder.php');
if ($result) {
$output['status'] = "ok";
$output['content'] = $result;
} else {
$output['status'] = "failure";
$output['error'] = 'Could not open remote file';
}
return $output;
}
Then in template_builder.php:
<?php
unset( $result );
if ( is_array( $data ) ){
// Parse $data ...
$result = $email_template;
}
As it turns out, the issue ended up being a server configuration error. The server was timing out while attempting to contact the file because it was hitting the wrong DNS server. Fixing that solved my problem!