I've been piecing together code from various Twitter feed scripts to grab tweets, but now I've hit a wall with rate limiting and caching. Here's my code:
function tweets($twitter_handle, $tweet_limit, $tweet_links, $tweet_tags, $tweet_avatar, $tweet_profile) {
    /* Store Tweets in a JSON object */
    $tweet_feed = json_decode(file_get_contents('http://api.twitter.com/1/statuses/user_timeline.json?screen_name='.
        $twitter_handle.'&include_entities=true&include_rts=true&count='.$hard_max.''));
This works great until I hit the rate limit. Here's what I added to cache tweets:
function tweets($twitter_handle, $tweet_limit, $tweet_links, $tweet_tags, $tweet_avatar, $tweet_profile) {
    $url = 'http://api.twitter.com/1/statuses/user_timeline.json?screen_name='.$twitter_handle.'&include_entities=true&include_rts=true&count='.$hard_max.'';
    $cache = dirname(__FILE__) . '/cache/twitter';
    if(filemtime($cache) < (time() - 60))
    {
        mkdir(dirname(__FILE__) . '/cache', 0777);
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_POST, 1);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
        curl_setopt($ch, CURLOPT_REFERER, $_SERVER['REQUEST_URI']);
        $data = curl_exec($ch);
        curl_close($ch);
        $cachefile = fopen($cache, 'wb');
        fwrite($cachefile, $data);
        fclose($cachefile);
    }
    else
    {
        $data = file_get_contents($cache);
    }
    $tweet_feed = json_decode($data);
This, however, only returns the username and a timestamp (which is wrong), when it should be returning the Twitter avatar, tweet content, correct timestamp, etc. It also throws an error every few refreshes:
Warning: mkdir() [function.mkdir]: File exists in /home/content/36/8614836/html/wp-content/themes/NCmainSite/functions.php on line 110
Any help would be appreciated.
If you need more info, here's the rest of the function: http://snippi.com/s/9f066q0
Here, try this. I've fixed your issues; you also had a rogue POST option in your cURL call.
<?php
function tweets($twitter_handle, $tweet_limit, $tweet_links, $tweet_tags, $tweet_avatar, $tweet_profile) {
$http_query = array('screen_name'=>$twitter_handle,
'include_entities'=>'true',
'include_rts'=>'true',
'count'=>(isset($hard_max))?$hard_max:'5');
$url = 'http://api.twitter.com/1/statuses/user_timeline.json?'.http_build_query($http_query);
$cache_folder = dirname(__FILE__) . '/cache';
$cache_file = $cache_folder . '/twitter.json';
//Check folder exists
if(!file_exists($cache_folder)){mkdir($cache_folder, 0777);}
//Fetch if the cache file is missing or older than 60 seconds (though 60 is probably not enough)
if(!file_exists($cache_file) || filemtime($cache_file) < (time() - 60)){
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_REFERER, $_SERVER['REQUEST_URI']);
$data = curl_exec($ch);
curl_close($ch);
file_put_contents($cache_file,$data);
}else{
$data = file_get_contents($cache_file);
}
return json_decode($data);
}
$twitter = tweets('RemotiaSoftware', 'tweet_limit','tweet_links', 'tweet_tags', 'tweet_avatar', 'tweet_profile');
print_r($twitter);
?>
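One caveat worth adding (my own note, not part of the answer above): if the cURL request fails or Twitter returns an error page, the bad response will overwrite a perfectly good cache file. A minimal sketch of a guard around the write, assuming the same $url and $cache_file as above:
<?php
// Sketch: only refresh the cache when the fetch actually succeeded.
// $url and $cache_file are assumed to be built exactly as in the answer above.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 30);
curl_setopt($ch, CURLOPT_TIMEOUT, 5);
$data = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);
if($data !== false && $status == 200){
    file_put_contents($cache_file, $data);   // good response: refresh the cache
}elseif(file_exists($cache_file)){
    $data = file_get_contents($cache_file);  // failed fetch: fall back to the stale cache
}
?>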
I am trying to automate the configuration of a number of IP cameras from their embedded web servers (self-signed certificates). If you connect to a camera through a browser in the normal way (no script), you have to add a certificate exception, and it works fine.
I want to automate this; all my PHP scripts are run from a PowerShell CLI.
I have the following PHP script:
<?php
include('C:\wamp64\bin\php\php7.0.10\run\Librairie\LIB_parse.php');
include('C:\wamp64\bin\php\php7.0.10\run\Librairie\LIB_http.php');
include('C:\wamp64\bin\php\php7.0.10\run\Librairie\LIB_resolve_addresses.php');
$TableauIP = fopen('C:\wamp64\bin\php\php7.0.10\run\x\Ipcamera.txt', 'r');
$count = 0;
while (($URLcamera = fgets($TableauIP, 4096)) !== false){
$IP_unparsed = $URLcamera;
$Ipcamera = return_between($IP_unparsed, "//", "/", EXCL);
echo("Automatic configuration for : ".$Ipcamera."\n");
echo("...............\n\n");
echo("Downloading page : ".$IP_unparsed."\n\n");
$web_page =http_get($IP_unparsed, $ref = "");
echo "ERROR \n";
var_dump($web_page['ERROR']);
$head_section = return_between($string=$web_page['FILE'], $start="<head>", $end="</head>", $type=EXCL);
$meta_tag_array = parse_array($head_section, $beg_tag="<meta", $close_tag=">");
for($xx=0; $xx<count($meta_tag_array); $xx++){
echo $meta_tag_array[$xx]."\n";
}
for($xx=0; $xx<count($meta_tag_array); $xx++){
$meta_attribute = get_attribute($meta_tag_array[$xx], $attribute="http-equiv");
if(strtolower($meta_attribute)=="refresh"){
$new_page = return_between($meta_tag_array[$xx], $start="URL", $end=">", $type=EXCL);
$new_page = trim(str_replace("", "", $new_page));
$new_page = str_replace("=", "", $new_page);
$new_page = str_replace("\"", "", $new_page);
$new_page = resolve_address($new_page, $IP_unparsed);
}
break;
}
echo "HTML Head redirection detected<br>\n\n";
echo "Redirect page = ".$new_page."\n";
$web_page2 = http_get($new_page, $ref = "");
//$web_page = http_get($IP_unparsed.'/login.cs', $ref = "");
echo "FILE CONTENT \n";
var_dump($web_page2['FILE']);
echo "FILE ERROR \n";
var_dump($web_page2['ERROR']);
// for($xx=0; $xx<count($web_page); $xx++){
// echo($web_page[$xx]);
// }
// echo "ERROR \n";
// var_dump($new_page['ERROR']);
//*******************************
// $web_page = file($new_page);
// for($xx = 0; $xx < count($web_page); $xx++)
// echo $web_page[$xx];
//********************************
// $file_handle = fopen($new_page, "r");
// while (!feof($file_handle))
// {
// echo fgets($file_handle, 4096);
// }
// fclose($file_handle);
$count++;
}
?>
(I left the comments in; I've tried different ways to display the web page.)
As you can see, I am using WampServer_x64 on a basic Windows 7 machine.
I'm following a redirection to the https://x.x.x.x/login.cs page.
The important part is the download of $web_page2.
Here is the LIB_parse library (just the necessary lines), wrapping the cURL options in PHP functions:
function http_get($target, $ref)
{
return http($target, $ref, $method="GET", $data_array="", EXCL_HEAD);
}
function http($target, $ref, $method, $data_array, $incl_head)
{
# Initialize PHP/CURL handle
$ch = curl_init();
# Process data, if present
if(is_array($data_array))
{
# Convert data array into a query string (ie animal=dog&sport=baseball)
foreach ($data_array as $key => $value)
{
if(strlen(trim($value))>0)
$temp_string[] = $key . "=" . urlencode($value);
else
$temp_string[] = $key;
}
$query_string = join('&', $temp_string);
}
# HEAD method configuration
if($method == HEAD)
{
curl_setopt($ch, CURLOPT_HEADER, TRUE); // Include the HTTP header
curl_setopt($ch, CURLOPT_NOBODY, TRUE); // Don't return the body
}
else
{
# GET method configuration
if($method == GET)
{
if(isset($query_string))
$target = $target . "?" . $query_string;
curl_setopt ($ch, CURLOPT_HTTPGET, TRUE);
curl_setopt ($ch, CURLOPT_POST, FALSE);
}
# POST method configuration
if($method == POST)
{
if(isset($query_string))
curl_setopt ($ch, CURLOPT_POSTFIELDS, $query_string);
curl_setopt ($ch, CURLOPT_POST, TRUE);
curl_setopt ($ch, CURLOPT_HTTPGET, FALSE);
}
curl_setopt($ch, CURLOPT_HEADER, $incl_head); // Include head as needed
curl_setopt($ch, CURLOPT_NOBODY, FALSE); // Return body
}
curl_setopt($ch, CURLOPT_COOKIEJAR, COOKIE_FILE); // Cookie management.
curl_setopt($ch, CURLOPT_COOKIEFILE, COOKIE_FILE);
curl_setopt($ch, CURLOPT_TIMEOUT, CURL_TIMEOUT); // Timeout
curl_setopt($ch, CURLOPT_USERAGENT, WEBBOT_NAME); // Webbot name
curl_setopt($ch, CURLOPT_URL, $target); // Target site
curl_setopt($ch, CURLOPT_REFERER, $ref); // Referer value
curl_setopt($ch, CURLOPT_VERBOSE, FALSE); // Minimize logs
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, FALSE); // No certificate
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE); // Follow redirects
curl_setopt($ch, CURLOPT_MAXREDIRS, 4); // Limit redirections to four
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE); // Return in string
# Create return array
$return_array['FILE'] = curl_exec($ch);
$return_array['STATUS'] = curl_getinfo($ch);
$return_array['ERROR'] = curl_error($ch);
# Close PHP/CURL handle
curl_close($ch);
# Return results
return $return_array;
}
I do not know how to handle the TLS connection with cURL. I've been trying different things for hours. This is the issue I see: an encrypted alert.
[Wireshark capture of the TCP and TLS exchange]
I've added these lines to the original library:
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, FALSE);
//curl_setopt($ch, CURLOPT_SSLVERSION, 6);
curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
I can't get the web page.
Apparently, the OpenSSL version is 1.0.2h.
I have tried many different things, with many different error types, but it's always something around the SSL certificate.
I have no more ideas where to look.
If you can point me in the right direction, that would be great.
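Not from the original thread, but as a debugging starting point: a minimal, standalone cURL request against a single camera (outside the LIB_http wrapper) usually makes the TLS failure visible. This is only a sketch; https://x.x.x.x/login.cs is a placeholder for one address from Ipcamera.txt, and camera.pem would be the camera's exported certificate if you prefer verification over disabling it:
<?php
// Sketch: probe one camera's HTTPS login page and surface the TLS error.
$ch = curl_init('https://x.x.x.x/login.cs');   // placeholder address
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_SSLVERSION, CURL_SSLVERSION_TLSv1_2);
// Either trust the exported self-signed certificate explicitly...
// curl_setopt($ch, CURLOPT_CAINFO, __DIR__ . '/camera.pem');
// ...or, for testing only, skip verification entirely:
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch, CURLOPT_VERBOSE, true);        // dump the TLS handshake to STDERR
$body = curl_exec($ch);
if($body === false){
    echo 'cURL error: ' . curl_error($ch) . "\n";   // e.g. handshake or protocol failure
}else{
    echo 'HTTP status: ' . curl_getinfo($ch, CURLINFO_HTTP_CODE) . "\n";
}
curl_close($ch);
?>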
Hi, I need to perform the following PUT command in PHP using cURL, but I'm having trouble getting it to run. The file that needs to be transferred is a zip file. This is the curl command:
curl -X PUT -H "Content-Type: application/zip" --data-binary @yZip.zip http://183.262.144.266:1211/y-validation/repository/HELLO
This is the code I have so far
$ch = curl_init();
$filePath = 'yZip.zip';
curl_setopt($ch, CURLOPT_URL, 'http://183.262.144.266:1211/y-validation/repository/HELLO');
curl_setopt($ch, CURLOPT_PUT, 1);
curl_setopt($ch, CURLOPT_UPLOAD, 1);
$fh_res = fopen($filePath, 'r');
curl_setopt($ch, CURLOPT_INFILE, $fh_res);
curl_setopt($ch, CURLOPT_INFILESIZE, filesize($filePath));
curl_setopt($ch, CURLOPT_TIMEOUT, 86400); // 1 Day Timeout
curl_setopt($ch, CURLOPT_NOPROGRESS, false);
curl_setopt($ch, CURLOPT_BUFFERSIZE, 128);
$curl_response = curl_exec ($ch);
print_r($curl_response);
I've taken this code from various websites, but I keep getting errors and am not sure what to do next. Any ideas?
UPDATED: Fixed the errors and I am reaching the REST API successfully, but the zip file is not being uploaded correctly.
UPDATED 2: Updated with the code changes I've made since then to try to solve the problem, but the zip is still not being PUT correctly. Also, I'm working on PHP 5.3.8, so I can't use the CURLFile class. Can anyone help with this?
UPDATED 3: Still having problems with this. I'm trying to send headers, but that's not working either. Can anyone help me out?
I tried this code and made it work for me...
<?php
set_time_limit(600);
$ch = curl_init();
$filePath = 'file.zip';
curl_setopt($ch, CURLOPT_URL, 'http://site/reciver_script_name/uploading/path/name');
curl_setopt($ch, CURLOPT_POST, 1);
//curl_setopt($ch, CURLOPT_UPLOAD, 1);
$fh_res = fopen($filePath, 'r');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-type: application/zip'));
curl_setopt($ch, CURLOPT_INFILE, $fh_res);
curl_setopt($ch, CURLOPT_INFILESIZE, filesize($filePath));
curl_setopt($ch, CURLOPT_TIMEOUT, 86400); // 1 Day Timeout
curl_setopt($ch, CURLOPT_NOPROGRESS, false);
curl_setopt($ch, CURLOPT_BUFFERSIZE, 128);
$curl_response = curl_exec ($ch);
print_r($curl_response);
?>
And the receiver side:
Some path handling first...
$path = $_SERVER['SCRIPT_NAME'];
if (substr($path, 0, strlen($prefix)) != $prefix) {
exit_not_found();
}
$path = substr($path, strlen($prefix));
$parts = explode('/', $path);
if (!is_array($parts) || count($parts) != 3) {
exit_not_found();
}
...and the main part:
if (!$error) {
$file_stream = fopen($file, 'w');
if ($file_stream === false) {
$error = 'fopen failed';
}
}
if (!$error) {
$copy_result = stream_copy_to_stream(fopen('php://input', 'r'), $file_stream);
fclose($file_stream);
if (!$copy_result) {
$error = 'stream_copy_to_stream failed';
}
}
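If the request does need to stay a real PUT (to match the original curl command) rather than switching to POST, a sketch that works on PHP 5.3 without CURLFile could look like the following. The URL is the one from the question, and the zip is read into memory, which is fine for small archives:
<?php
// Sketch: send the zip as the raw body of a PUT, equivalent to
// curl -X PUT -H "Content-Type: application/zip" --data-binary @yZip.zip <url>
$filePath = 'yZip.zip';
$body = file_get_contents($filePath);              // whole file in memory; OK for small zips
$ch = curl_init('http://183.262.144.266:1211/y-validation/repository/HELLO');
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'PUT');
curl_setopt($ch, CURLOPT_POSTFIELDS, $body);       // raw bytes, not multipart
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    'Content-Type: application/zip',
    'Content-Length: ' . strlen($body),
));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 86400);          // 1 day timeout, as in the question
$response = curl_exec($ch);
if($response === false){
    echo 'cURL error: ' . curl_error($ch);
}else{
    print_r($response);
}
curl_close($ch);
?>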
I'm trying to load JSON data from this URL:
http://api.opencagedata.com/geocode/v1/json?query=48.84737%2C2.28605&pretty=1&no_annotations=1&no_dedupe=1&key=b61388b5a248b7cfcaa9579ed290485b
Using file_get_contents works with other JSON URLs, but this one is strange. Echoing it shows only "{", the first line. strlen() gives 1480, which is right. substr(2, 18) gives "documentation", which is right too. But I still can't echo the entire text. Maybe there's some way to read the text line by line and save it in another string? The entire text is still fully written to the text file.
Here's the PHP code I tried:
<?php
$url = file_get_contents("http://api.opencagedata.com/geocode/v1/json?query=48.84737%2C2.28605&pretty=1&no_annotations=1&no_dedupe=1&key=b61388b5a248b7cfcaa9579ed290485b");
$save = file_put_contents("filename.txt", $url);
echo $url;
?>
I also tried this function, but the result is the same.
function get_data($url) {
$ch = curl_init();
$timeout = 5;
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
$data = curl_exec($ch);
curl_close($ch);
return $data;
}
You can decode the return value with json_decode():
function get_data($url) {
$ch = curl_init();
$timeout = 5;
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
$data = curl_exec($ch);
curl_close($ch);
return json_decode($data,true);
}
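For completeness, a small usage sketch of that helper. The field names come from the OpenCage response the question describes ("documentation", plus the usual "results" array); treat them as illustrative:
<?php
$url = 'http://api.opencagedata.com/geocode/v1/json?query=48.84737%2C2.28605&pretty=1&no_annotations=1&no_dedupe=1&key=b61388b5a248b7cfcaa9579ed290485b';
$json = get_data($url);   // associative array, or null if the body wasn't valid JSON
if(is_array($json)){
    echo $json['documentation'] . "\n";             // e.g. the documentation URL
    echo $json['results'][0]['formatted'] . "\n";   // first geocoding result, if present
}
?>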
I really like having a share counter on my blog posts. I've noticed that it actually encourages visitors to share the content themselves. Because there are no WordPress share-count plugins out there that I find satisfying (most of them make way too many calls), I wrote the code myself.
It works perfectly, but it still slows down my site. So I would rather have it cached and refreshed once per hour or so. I don't know how to manage this, though... Any ideas?
This is what I put in the theme's functions file:
class shareCount {
private $url,$timeout;
function __construct($url,$timeout=10) {
$this->url=rawurlencode($url);
$this->timeout=$timeout;
}
function get_tweets() {
$json_string = $this->file_get_contents_curl('http://urls.api.twitter.com/1/urls/count.json?url=' . $this->url);
$json = json_decode($json_string, true);
return isset($json['count'])?intval($json['count']):0;
}
function get_fb() {
$json_string = $this->file_get_contents_curl('http://api.facebook.com/restserver.php?method=links.getStats&format=json&urls='.$this->url);
$json = json_decode($json_string, true);
return isset($json[0]['total_count'])?intval($json[0]['total_count']):0;
}
private function file_get_contents_curl($url){
$ch=curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($ch, CURLOPT_FAILONERROR, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch, CURLOPT_TIMEOUT, $this->timeout);
$cont = curl_exec($ch);
if(curl_error($ch))
{
die(curl_error($ch));
}
return $cont;
}
}
And this is what I use in single.php:
<!-- Begin mod: Add share counter -->
<span class="share-count">
<?php
$obj=new shareCount(get_permalink( $post->ID ));
echo $obj->get_tweets() + $obj->get_fb();
?>
</span>
<span class="share-text">
keer gedeeld
</span>
<!-- End mod: Add share counter -->
Then I also add some CSS.
Like vicente said, you should use the built-in transient cache.
private function file_get_contents_curl($url){
// Create unique transient key
$transientKey = 'sc_' . md5($url);
// Check cache
$cache = get_transient($transientKey);
if($cache) {
return $cache;
}
$ch=curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($ch, CURLOPT_FAILONERROR, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER,1);
curl_setopt($ch, CURLOPT_TIMEOUT, $this->timeout);
$cont = curl_exec($ch);
if(curl_error($ch))
{
die(curl_error($ch));
}
// Cache results for 1 hour
set_transient($transientKey, $cont, 60*60);
return $cont;
}
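As a follow-up sketch of my own (not part of the answer): if you would rather cache the combined number per post instead of per API call, something like this in single.php also works; the 'sharecount_' key prefix is just an example:
<?php
// Sketch: cache the combined share count per post for one hour.
$post_url  = get_permalink($post->ID);
$cache_key = 'sharecount_' . md5($post_url);   // hypothetical key prefix
$count     = get_transient($cache_key);
if($count === false){                          // not cached yet, or expired
    $obj   = new shareCount($post_url);
    $count = $obj->get_tweets() + $obj->get_fb();
    set_transient($cache_key, $count, 60*60);  // cache for 1 hour
}
echo $count;
?>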
I'm currently using PHP cURL to browse over 500 web pages a day with cookies.
I have to check each page to ensure that the account is still logged in and the pages are being viewed as a member, not a guest.
The script takes an hour or two to complete as it sleeps in between views.
I just want to know if there's anything I can do to reduce the load this script puts on the local server. I've made sure to clear variables at the end of each loop, but is there anything I'm missing that would help?
Any new cURL settings that would help?
$i = 0;
$useragents = array();
foreach($urls as $url){
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_COOKIEJAR, str_replace('\\','/',dirname(__FILE__)).'/cookies.txt');
curl_setopt($ch, CURLOPT_COOKIEFILE, str_replace('\\','/',dirname(__FILE__)).'/cookies.txt');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, $useragents[array_rand($useragents)]);
$html = curl_exec($ch);
curl_close($ch);
if(!$html)
die("No HTML - Not logged in");
if($i %10 != 0)
sleep(rand(5,20));
else
sleep(rand(rand(60,180), rand(300,660)));
$i++;
$html = '';
}
You could reuse your curl handle instead of creating a new one for each connection.
Clearing $html at the end of each iteration won't reduce memory usage and just adds an extra operation, because the variable gets overwritten in the next iteration anyway.
$i = 0;
$useragents = array();
$ch = curl_init();
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIEJAR, str_replace('\\','/',dirname(__FILE__)).'/cookies.txt');
curl_setopt($ch, CURLOPT_COOKIEFILE, str_replace('\\','/',dirname(__FILE__)).'/cookies.txt');
foreach($urls as $url){
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_USERAGENT, $useragents[array_rand($useragents)]);
$html = curl_exec($ch);
if(!$html)
die("No HTML - Not logged in");
if($i++ % 10 != 0)
sleep(rand(5,20));
else
sleep(rand(rand(60,180), rand(300,660)));
}
curl_close($ch);
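One optional refinement (my suggestion, not part of the answer): reusing the handle also lets cURL keep the TCP connection and DNS cache warm between requests to the same host. If later iterations ever need a clean slate of options, curl_reset() (PHP 5.5+) clears them without dropping live connections, after which the shared options must be set again:
// Optional (PHP 5.5+): clear all per-request options but keep live connections
// and the DNS cache, then re-apply the options shared by every request.
curl_reset($ch);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);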