I know I could measure the total loading time of an external URL with something like:
$start_request = time();
file_get_contents($url);
$end_request = time();
$time_taken = $end_request - $start_request;
But I don't need the total loading time; I want to measure only the server response time, like the "Wait" part of the result displayed here:
http://www.bytecheck.com/results?resource=https://www.example.com
How can I do this with PHP?
You can't do this with plain PHP timing alone. With time() or microtime() you only get the complete time that one or more commands took.
You need a tool that has access to the network-layer data. cURL can do this for you, but you have to enable PHP's cURL extension if that's not already done.
PHP can then take the result and process it.
<?php
// Create a cURL handle
$ch = curl_init('http://www.example.com/');
// Return the response body instead of dumping it to the output
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// Execute
curl_exec($ch);
// Check if any error occurred
if (!curl_errno($ch)) {
    $info = curl_getinfo($ch);
    echo 'Took ', $info['total_time'], ' seconds to send a request to ', $info['url'], "\n";
}
// Close handle
curl_close($ch);
You have a bunch of information in $info, like:
"filetime"
"total_time"
"namelookup_time"
"connect_time"
"pretransfer_time"
"starttransfer_time"
"redirect_time"
The complete list can be found in the curl_getinfo() documentation.
The "Wait" time should be the starttransfer_time - pretransfer_time,
so in your case you need:
$wait = $info['starttransfer_time'] - $info['pretransfer_time'];
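Putting the pieces together, a minimal sketch (same cURL calls as above; the URL is just a placeholder):
<?php
// Sketch: report only the "wait" phase (time to first byte after the
// request has been fully sent), using the curl_getinfo() keys above.
$ch = curl_init('http://www.example.com/');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // don't echo the body
curl_exec($ch);

if (!curl_errno($ch)) {
    $info = curl_getinfo($ch);
    $wait = $info['starttransfer_time'] - $info['pretransfer_time'];
    printf("Wait: %.3f seconds\n", $wait);
}
curl_close($ch);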
I have read some articles (like this, or this), and all of them implement long polling in PHP the same way, with usleep() in a loop, like this:
$source; // some data source - db, etc.
$data = null; // our return data
$timeout = 30; // timeout in seconds
$now = time(); // start time

// loop for $timeout seconds from $now until we get $data
while ((time() - $now) < $timeout) {
    // fetch $data
    $data = $source->getData();
    // if we got $data, break the loop
    if (!empty($data)) break;
    // wait 1 second before checking for new $data
    usleep(1000000);
}

// if there is no $data, tell the client to re-request (arbitrary status message)
if (empty($data)) $data = array('status' => 'no-data');

// send $data response to client
echo json_encode($data);
Is there another way? I know that PHP is only a scripting language, but I would like an approach that is based on events rather than checking and waiting until a timeout. Something like Continuations in Java would be perfect.
You could try React: http://reactphp.org/
It's not very mature yet, but it may suit your needs. Instead of doing long polling, you can do it asynchronously.
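For illustration, a minimal sketch of the event-driven version. This assumes react/event-loop is installed via Composer, and it reuses the hypothetical $source->getData() from the question:
<?php
// Sketch only: a periodic, event-loop-driven check instead of a blocking
// while/usleep loop. Assumes react/event-loop via Composer and the
// hypothetical $source from the question.
require 'vendor/autoload.php';

$loop = React\EventLoop\Factory::create();

$loop->addPeriodicTimer(1.0, function ($timer) use ($loop, $source) {
    $data = $source->getData();
    if (!empty($data)) {
        echo json_encode($data);
        $loop->cancelTimer($timer); // stop polling once we have data
    }
});

$loop->run();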
I would recommend http://ape-project.org/; it is mature and scalable.
Some time ago I used Query to check the status of a Minecraft server via PHP, but I wasn't happy with the results. Sometimes it took more than 10 seconds, or it didn't get the status at all even though the Minecraft server was up and the web server was in the same data center.
Which method do you think would be the most stable and performant: Query, stream_socket, or something else?
Should I run the check every 30 seconds via a cronjob, or just cache the results for 30 seconds?
You can create a PHP file with this code and run it periodically to store the status.
<?php
/**
 * @author Kristaps Karlsons <kristaps.karlsons@gmail.com>
 * Licensed under MPL 1.1
 */
function mc_status($host, $port = '25565') {
    $timeInit = microtime(true);
    // TODO: implement a way to store data (memcached or MySQL?) - please don't overload target server
    $fp = fsockopen($host, $port, $errno, $errstr, 10);
    if (!$fp) die($errstr . $errno);
    else {
        fputs($fp, "\xFE"); // xFE - get information about server
        $response = '';
        while (!feof($fp)) $response .= fgets($fp);
        fclose($fp);
        $timeEnd = microtime(true);
        $response = str_replace("\x00", "", $response); // remove NULL
        //$response = explode("\xFF", $response); // xFF - data start (old version, prior to 1.0?)
        $response = explode("\xFF\x16", $response); // data start
        $response = $response[1]; // chop off all before xFF (could be done with regex actually)
        //echo(dechex(ord($response[0])));
        $response = explode("\xA7", $response); // xA7 - delimiter
        $timeDiff = $timeEnd - $timeInit;
        $response[] = $timeDiff < 0 ? 0 : $timeDiff;
    }
    return $response;
}

$data = mc_status('mc.exs.lv', '25592'); // even better - don't use hostname but provide IP instead (DNS lookup is a waste)
print_r($data); // [0] - motd, [1] - online, [2] - slots, [3] - request time in seconds (use this to present latency information)
Credits: skakri (https://gist.github.com/skakri/2134554)
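As for cron versus caching: either works, but a simple file cache has the fewest moving parts. A sketch (the cache path and the 30-second window are arbitrary choices, not requirements):
<?php
// Sketch: serve a cached status and only re-query the server when the
// cache file is older than 30 seconds.
$cacheFile = '/tmp/mc-status.cache';
$maxAge = 30; // seconds

if (!file_exists($cacheFile) || (time() - filemtime($cacheFile)) > $maxAge) {
    $data = mc_status('mc.exs.lv', '25592');
    file_put_contents($cacheFile, serialize($data));
} else {
    $data = unserialize(file_get_contents($cacheFile));
}

print_r($data);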
I am trying to work with the Twitter Search API. I found a PHP library that does authentication with app-only auth, and I added the max_id argument to it. I would like to run 450 queries per 15 minutes (as per the rate limit), but I am not sure how to pass the max_id. The idea is to run it first with the default value of 0, take the max_id from the API's response, run the function again with the retrieved max_id value, and repeat this 450 times. I tried a few things, and I can get the max_id result after calling the function, but I don't know how to pass it back and call the function with the updated value.
<?php
function search_for_a_term($bearer_token, $query, $result_type = 'mixed', $count = '15', $max_id = '0') {
    $url = "https://api.twitter.com/1.1/search/tweets.json"; // base url
    $q = $query; // query term
    $formed_url = '?q=' . $q; // fully formed url
    if ($result_type != 'mixed') { $formed_url = $formed_url . '&result_type=' . $result_type; } // result type - mixed (default), recent, popular
    if ($count != '15') { $formed_url = $formed_url . '&count=' . $count; } // results per page - defaults to 15
    $formed_url = $formed_url . '&include_entities=true'; // makes sure the entities are included
    if ($max_id != '0') { $formed_url = $formed_url . '&max_id=' . $max_id; }
    $headers = array(
        "GET /1.1/search/tweets.json" . $formed_url . " HTTP/1.1",
        "Host: api.twitter.com",
        "User-Agent: jonhurlock Twitter Application-only OAuth App v.1",
        "Authorization: Bearer " . $bearer_token,
    );
    $ch = curl_init(); // set up a cURL handle
    curl_setopt($ch, CURLOPT_URL, $url . $formed_url); // set url to send to
    curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); // set custom headers
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response instead of printing it
    $retrievedhtml = curl_exec($ch); // execute the curl
    curl_close($ch); // close the curl
    $result = json_decode($retrievedhtml, true);
    return $result;
}
$results = search_for_a_term("mybearertoken", "mysearchterm");
/* would like to get all kinds of info from here and put it into a mysql database */
$max_id = $results["search_metadata"]["max_id_str"];
print $max_id; // this gives me the max_id for that page
?>
I know there must be some existing libraries that do this, but I can't use any of them, since none have been updated for app-only auth yet.
EDIT: I put a loop at the beginning of the script to run, e.g., 3 times, with a print statement to see what happens, but it only prints the same max_id each time instead of three different ones.
$max_id = '0';
$i = 0;
do {
    $result = search_for_a_term("mybearertoken", "searchterm", $max_id);
    $max_id = $result["search_metadata"]["max_id_str"];
    $i++;
    print ' ' . $max_id . ' ';
} while ($i < 3);
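One thing worth noting about this attempt, based purely on the function signature shown above: the third parameter of search_for_a_term() is $result_type, not $max_id, so the call in the loop never reaches the max_id parameter at all. Passing it positionally would have to spell out the intermediate defaults, sketched here:
// Hypothetical corrected call: $max_id belongs in the fifth position,
// so the defaults for $result_type and $count must be written out.
$result = search_for_a_term("mybearertoken", "searchterm", 'mixed', '15', $max_id);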
I am using the following code to retrieve an amount of Tweets from the Twitter API:
$cache_file = "cache/$username-twitter.cache";
$last = file_exists($cache_file) ? filemtime($cache_file) : false;
$now = time();
$interval = $interval * 60; // convert minutes to seconds

// Check the cache file age
if (!$last || (($now - $last) > $interval)) {
    // cache file doesn't exist, or is old, so refresh it
    // Get the data from the Twitter JSON API
    //$json = @file_get_contents("http://api.twitter.com/1/statuses/user_timeline.json?screen_name=" . $username . "&count=" . $count, "rb");
    $twitterHandle = fopen("http://api.twitter.com/1/statuses/user_timeline.json?screen_name=$username&count=$count", "rb");
    $json = stream_get_contents($twitterHandle);
    fclose($twitterHandle);

    if ($json) {
        // Decode JSON into an array
        $data = json_decode($json, true);
        $data = serialize($data);
        // Store the data in the cache
        $cacheHandle = fopen($cache_file, 'w');
        fwrite($cacheHandle, $data);
        fclose($cacheHandle);
    }
}

// read from the cache file with either new data or the old cache
$tweets = @unserialize(file_get_contents($cache_file));
return $tweets;
Of course $username and the other variables inside the fopen request are correct, and they produce a well-formed URL, as you can see in the error:
Warning: fopen(http://api.twitter.com/1/statuses/user_timeline.json?screen_name=Schodemeiss&count=5) [function.fopen]: failed to open stream: HTTP request failed! HTTP/1.1 400 Bad Request in /home/ellexus1/public_html/settings.php on line 187
That error is returned whenever I try to open my page.
Any ideas why this might be? Do I need to use OAuth just to get my own tweets? Do I have to register my website somewhere to be allowed to fetch posts?
I'm really not sure why this is happening. My host is JustHost.com, but I'm not sure if that makes any difference. All ideas are welcome!
Thanks.
Andrew
P.S. This code lives inside a function where $username, $interval and $count are passed in correctly; hence the error message contains a well-formed address.
Chances are you are getting rate-limited:
400 Bad Request: The request was invalid. An accompanying error message will explain why. This is the status code that will be returned during rate limiting.
150 requests per hour for non authenticated calls (Based on IP-addressing)
350 requests per hour for authenticated calls (Based on the authenticated users calls)
You have to authenticate to avoid these errors popping up.
Also, please use cURL when dealing with Twitter. I've used file_get_contents and fopen to call the Twitter API and found them very unreliable; you get hit with errors like that every now and then.
Replace the fopen with:
$ch = curl_init("http://api.twitter.com/1/statuses/user_timeline.json?screen_name=$username&count=$count");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response instead of printing it
$json = curl_exec($ch); // content stored in $json
curl_close($ch);
This may also help: the Twitter error codes are defined at https://developer.twitter.com/en/docs/basics/response-codes.html
I'm making a large request to the Brightcove servers to do a batch change of metadata on my videos. It seems it only made it through about 1,000 iterations and then stopped. Can anyone help adjust this code to prevent a timeout? It needs to make about 7,000-8,000 iterations.
<?php
include 'echove.php';
$e = new Echove(
    'xxxxx',
    'xxxxx'
);

// Read Video IDs
# Define our parameters
$params = array(
    'fields' => 'id,referenceId'
);

# Make our API call
$videos = $e->findAll('video', $params);
//print_r($videos);

foreach ($videos as $video) {
    //print_r($video);
    $ref_id = $video->referenceId;
    $vid_id = $video->id;
    switch ($ref_id) {
        case "":
            $metaData = array(
                'id' => $vid_id,
                'referenceId' => $vid_id
            );
            # Update a video with the new metadata
            $e->update('video', $metaData);
            echo "$vid_id updated successfully!<br />";
            break;
        default:
            echo "$ref_id was not updated. <br />";
            break;
    }
}
?>
Thanks!
Try the set_time_limit() function. Calling set_time_limit(0) will remove any time limits for execution of the script.
Also use ignore_user_abort() to bypass browser abort. The script will keep running even if you close the browser (use with caution).
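Combined, that's just two lines at the top of the script, before the include (a sketch; the rest of the batch script stays as in the question):
<?php
// Apply both suggestions before any long-running work starts.
set_time_limit(0);       // remove PHP's max_execution_time limit
ignore_user_abort(true); // keep running even if the browser disconnects

include 'echove.php';
// ... rest of the batch script from the question ...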
Try sending a 'Status: 102 Processing' header every now and then to prevent the browser from timing out (your best bet is about 15 to 30 seconds in between). After the request has been processed, you can send the final response.
The browser shouldn't time out any more this way.
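If emitting intermediate status lines is awkward in your setup, a common alternative (my variation, not part of the suggestion above) is to flush a little output from inside the loop so the connection is never idle long enough to be dropped:
<?php
// Sketch: inside the foreach loop from the question, push a byte to the
// client every 100 videos so the browser/proxy doesn't drop the connection.
$done = 0;
foreach ($videos as $video) {
    // ... the update logic from the question ...
    if (++$done % 100 === 0) {
        echo ' ';
        flush(); // send the byte immediately instead of buffering it
    }
}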