PHP cURL - Get remaining time of download

I am offering my users a remote-upload option: instead of a local upload, the content is downloaded directly to my server from (for example) their own server. For that I'm using cURL. Now I want to get the remaining time cURL needs to complete the download (the bitrate would be fine too).
Is there a way to get the remaining time cURL needs to finish the download via the PHP cURL module, or do I need to run the command-line client, somehow put its output into a file and read from there (since PHP blocks execution when using shell_exec() or exec())?
I'm already getting the number of bytes expected and how many cURL has downloaded so far. This is the relevant code so far:
function write_progress($ch, $original_size, $current_size, $os_up, $cs_up) {
    global $anitube, $cache;
    $cache->write("webupload-progress-".$anitube->input['inputname'], serialize(array("total" => $original_size, "loaded" => $current_size)));
}
ini_set('max_execution_time', '0');
$handle_file = "/tmp/file-".$anitube->generate(20).".tmp";
if(DIRECTORY_SEPARATOR == "\\") {
$handle_file = IN_DIR.$handle_file;
}
$file = fopen($handle_file, "w+");
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, urldecode(trim($anitube->input['filename'])));
curl_setopt($ch, CURLOPT_FILE, $file);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($ch, CURLOPT_NOPROGRESS, false);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, "write_progress");
if($anitube->users['ip'] == "127.0.0.1") {
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);
}
$data = curl_exec($ch);
if(curl_errno($ch) > 0) {
file_put_contents(IN_DIR."/logs/curl_errors.txt", date("d.m.Y H:i:s")." - Errno: ".curl_errno($ch)." - Error: ".curl_error($ch)." - cURL 4: ".print_r(curl_getinfo($ch), true)."\n", FILE_APPEND);
die(json_encode(array("success" => 0, "response" => $language->word('remote_file_not_available'))));
} elseif(curl_getinfo($ch, CURLINFO_HTTP_CODE) != 200) {
file_put_contents(IN_DIR."/logs/curl_errors.txt", date("d.m.Y H:i:s")." - Error: Connection denied - HTTP-Response-Code: ".curl_getinfo($ch, CURLINFO_HTTP_CODE)." - cURL 4: ".print_r(curl_getinfo($ch), true)."\n", FILE_APPEND);
die(json_encode(array("success" => 0, "response" => $language->word('remote_file_not_available'))));
}
curl_close($ch);
fclose($file);

PHP's cURL library does not appear to provide the estimated time remaining, but it is fairly trivial to calculate this from PHP using the CURLOPT_PROGRESSFUNCTION callback function. I've created a working example below. Note that if GZIP compression is enabled, the output will probably be delayed until the entire request is complete.
Example:
<?php
header( 'Content-Type: text/plain' );
//A helper function to flush output buffers.
function flush_all() {
    while ( ob_get_level() ) {
        ob_end_flush();
    }
    flush();
}
$download_start_time = null;
function write_progress( $ch, $original_size, $current_size, $os_up, $cs_up ) {
    global $download_start_time;
    //Get the current time.
    $now = microtime( true );
    //Remember the start time.
    if ( ! $download_start_time ) {
        $download_start_time = $now;
    }
    //Check if the download size is available yet.
    if ( $original_size ) {
        //Compute time spent transferring.
        $transfer_time = $now - $download_start_time;
        //Compute percent already downloaded.
        $transfer_percentage = $current_size / $original_size;
        //Compute estimated total transfer time.
        $estimated_transfer_time = $transfer_time / $transfer_percentage;
        //Compute estimated time remaining.
        $estimated_time_remaining = $estimated_transfer_time - $transfer_time;
        //Output the remaining time.
        var_dump( $estimated_time_remaining );
        flush_all();
    }
}
//Example usage.
$file = fopen( 'tmp.bin', "w+");
$ch = curl_init();
curl_setopt( $ch, CURLOPT_URL, 'https://ftp.mozilla.org/pub/mozilla.org/firefox/releases/35.0.1/win32/en-US/Firefox%20Setup%2035.0.1.exe' );
curl_setopt( $ch, CURLOPT_FILE, $file );
curl_setopt( $ch, CURLOPT_NOPROGRESS, false );
curl_setopt( $ch, CURLOPT_PROGRESSFUNCTION, 'write_progress' );
$data = curl_exec( $ch );
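Since the question also asks about the bitrate, the same callback values give an average transfer rate, because the elapsed time and downloaded byte count are already known. A minimal sketch along the same lines (the function and variable names below are illustrative, not part of the original answer):
function write_progress_rate( $ch, $original_size, $current_size, $os_up, $cs_up ) {
    global $download_start_time;
    $now = microtime( true );
    if ( ! $download_start_time ) {
        $download_start_time = $now;
    }
    $elapsed = $now - $download_start_time;
    if ( $elapsed > 0 && $current_size > 0 ) {
        //Average download rate so far, in bytes per second (multiply by 8 for bits per second).
        $bytes_per_second = $current_size / $elapsed;
        printf( "%.0f bytes/s\n", $bytes_per_second );
        flush_all();
    }
}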

Related

Stop a PHP Curl Download

I am trying to find a way to stop an active PHP cURL download. I am downloading large files from a remote server, and sometimes I would like to cancel the download after it has started. I have tried returning false from the CURLOPT_PROGRESSFUNCTION callback, but that did not work. I also tried deleting the file that was being downloaded, and that did not work either (web stats showed the download was continuing).
The below code is triggered via a quick ajax call:
$ch = curl_init( $file->url );
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_NOPROGRESS, false );
curl_setopt($ch, CURLOPT_FILE, $targetFile); //save the file to here
curl_setopt( $ch, CURLOPT_PROGRESSFUNCTION, function($resource, $download_size, $downloaded_size, $upload_size, $uploaded_size) use ($download_id) {
    if ( $download_size == 0 ) {
        $progress = 0;
    } else {
        $progress = round( $downloaded_size * 100 / $download_size );
    }
    // if download complete trigger completed function
    if($progress == 100) {
        self::DownloadCompleted($download_id);
    }
});
$curl = curl_exec($ch);
The solution was to return a non-zero value from the CURLOPT_PROGRESSFUNCTION callback, as per drew010 in the comments.
To get this done I added a check within the function to see if a file exists; if it does, the function returns 1 and the transfer aborts. I just create a file named after the download ID in the directory when I want to cancel the download. It works well for me.
$ch = curl_init( $file->url );
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_NOPROGRESS, false );
curl_setopt($ch, CURLOPT_FILE, $targetFile); //save the file to here
curl_setopt( $ch, CURLOPT_PROGRESSFUNCTION, function($resource, $download_size, $downloaded_size, $upload_size, $uploaded_size) use ($download_id) {
    //if the file exists, the download is aborted
    if(file_exists('path/to/directory/cancel.'.$download_id)) {
        self::CleanupCancelledDownload(); //function to clean up the partially downloaded file, etc.
        return 1; //returning a non-zero value cancels the cURL download.
    }
});
$curl = curl_exec($ch);
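To trigger the cancellation from somewhere else (for example an ajax "cancel" endpoint), creating the marker file is enough; a minimal sketch, assuming the same path convention as above:
// Hypothetical cancel endpoint: the progress callback above sees this file and aborts the transfer.
touch('path/to/directory/cancel.'.$download_id);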

Checking directory or file from external owned server

I am developing a PHP script for a music company that uses several servers, so the script needs to check whether a file exists on an external server.
They have three versions of each music file (mp3, mp4, etc.), and each version is served from its own external server. I have built three solutions; all of them worked like a charm, but they make the server slow.
First method:
$handle = curl_init($url);
curl_setopt($handle, CURLOPT_RETURNTRANSFER, TRUE);
/* Get the HTML or whatever is linked in $url. */
$response = curl_exec($handle);
/* Check for 404 (file not found). */
$httpCode = curl_getinfo($handle, CURLINFO_HTTP_CODE);
if($httpCode == 404) {
/* Handle 404 here. */
}
curl_close($handle);
/* Handle $response here. */
Second method: Using NuSOAP I made an API which internally checks for the file and returns yes/no.
Third method:
function checkurl($url)
{
    $file_headers = @get_headers($url);
    //var_dump($file_headers);
    if($file_headers[0] == 'HTTP/1.1 302 Moved Temporarily' || $file_headers[0] == 'HTTP/1.1 302 Found') {
        $exists = false;
    }
    else {
        $exists = true;
    }
    return $exists;
}
So I need a solution that doesn't make the server slow. Any suggestions?
Be sure to issue a HEAD request, not GET, since you don't want to get the file contents. And maybe you need to follow redirects, or not...
Example with curl (thanks to this blog post):
<?php
$url = 'http://localhost/c.txt';
echo "\n checking: $url";
$c = curl_init();
curl_setopt( $c, CURLOPT_RETURNTRANSFER, true );
curl_setopt( $c, CURLOPT_FOLLOWLOCATION, true );
curl_setopt( $c, CURLOPT_MAXREDIRS, 5 );
curl_setopt( $c, CURLOPT_CUSTOMREQUEST, 'HEAD' );
curl_setopt( $c, CURLOPT_HEADER, 1 );
curl_setopt( $c, CURLOPT_NOBODY, true );
curl_setopt( $c, CURLOPT_URL, $url );
$res = curl_exec( $c );
echo "\n\ncurl:\n";
var_dump($res);
echo "\nis 200: ";
var_dump(false !== strpos($res, 'HTTP/1.1 200 OK'));
SOAP or other web service implementation can be an option if the file is not available by HTTP.
If you want to use get_headers(), please note that by default it's slow because it issues a GET request. To make it use a HEAD request, you should change the default stream context (see get_headers() in the PHP manual):
stream_context_set_default(
    array(
        'http' => array(
            'method' => 'HEAD'
        )
    )
);
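With that default context in place, a plain get_headers() call issues a HEAD request; a minimal sketch (the status check is illustrative):
$headers = get_headers($url); // first element is the status line, e.g. "HTTP/1.1 200 OK"
$exists  = $headers !== false && strpos($headers[0], '200') !== false;
var_dump($exists);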
I thought it worked with the above answers, but it wasn't working when there were too many requests. I kept trying and found this solution, which works perfectly. The problem was actually too many redirects, so I set a 15-second timeout in cURL and it worked:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_NOBODY, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 15);
$r = curl_exec($ch);
$r = explode("\n", $r);
var_dump($r);

Reading POST data in PHP from cUrl

I am using cURL in PHP to make a request to an external service.
Interestingly enough, the server responds with raw "multipart/form-data" instead of plain binary file data.
My website is on shared hosting, therefore PECL HTTP is not an option.
Is there a way to parse this data with PHP?
Sample code:
$response = curl_exec($cUrl);
/* $response is raw "multipart/form-data" string
--MIMEBoundaryurn_uuid_DDF2A2C71485B8C94C135176149950475371
Content-Type: application/xop+xml; charset=utf-8; type="text/xml"
Content-Transfer-Encoding: binary
(xml data goes here)
--MIMEBoundaryurn_uuid_DDF2A2C71485B8C94C135176149950475371
Content-Type: application/zip
Content-Transfer-Encoding: binary
(binary file data goes here)
*/
EDIT: I tried piping the response into a localhost HTTP request, but the response data is likely to exceed the allowed memory size of the PHP process. Raising the memory limit is not very practical, and it also dramatically reduces server performance.
If there is no alternative to the original question, you may suggest a way to handle very large POST requests, along with XML parsing, as streams in PHP.
I know this would be hard; please comment. I am open to discussion.
If you need the zip file from the response, I guess you could just write the cURL response to a temp file and stream that as a workaround.
I've never tried that with multipart cURL responses, but I guess it should work.
$fh = fopen('/tmp/foo', 'w');
$cUrl = curl_init('http://example.com/foo');
curl_setopt($cUrl, CURLOPT_FILE, $fh); // redirect output to filehandle
curl_exec($cUrl);
curl_close($cUrl);
fclose($fh); // close filehandle or the file will be corrupted
If you do NOT need anything but the XML part of the response, you might want to disable headers:
curl_setopt($cUrl, CURLOPT_HEADER, FALSE);
and add an option to only accept XML as a response:
curl_setopt($cUrl, CURLOPT_HTTPHEADER, array('Accept: application/xml'));
//That's a workaround, since there is no curl option to request this directly, but HTTP allows it
[EDIT]
A shot in the dark...
Can you test with these curl option settings to see whether modifying them helps anything?
$headers = array (
'Content-Type: multipart/form-data; boundary=' . $boundary,
'Content-Length: ' . strlen($requestBody),
'X-EBAY-API-COMPATIBILITY-LEVEL: ' . $compatLevel, // API version
'X-EBAY-API-DEV-NAME: ' . $devID,
'X-EBAY-API-APP-NAME: ' . $appID,
'X-EBAY-API-CERT-NAME: ' . $certID,
'X-EBAY-API-CALL-NAME: ' . $verb,
'X-EBAY-API-SITEID: ' . $siteID,
);
$cUrl = curl_init();
curl_setopt($cUrl, CURLOPT_URL, $serverUrl);
curl_setopt($cUrl, CURLOPT_TIMEOUT, 30 );
curl_setopt($cUrl, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($cUrl, CURLOPT_SSL_VERIFYHOST, 0);
curl_setopt($cUrl, CURLOPT_HTTPHEADER, $headers);
curl_setopt($cUrl, CURLOPT_POST, 1);
curl_setopt($cUrl, CURLOPT_POSTFIELDS, $requestBody);
curl_setopt($cUrl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($cUrl, CURLOPT_FAILONERROR, 0 );
curl_setopt($cUrl, CURLOPT_FOLLOWLOCATION, 1 );
curl_setopt($cUrl, CURLOPT_HEADER, 0 );
curl_setopt($cUrl, CURLOPT_USERAGENT, 'ebatns;xmlstyle;1.0' );
curl_setopt($cUrl, CURLOPT_HTTP_VERSION, 1 ); // HTTP version must be 1.0
$response = curl_exec($cUrl);
if ( !$response ) {
print "curl error " . curl_errno($cUrl ) . PHP_EOL;
}
curl_close($cUrl);
[EDIT II]
This is just an attempt; as mentioned, I cannot get my curled pages to respond with multipart form data, so be gentle with me here ;)
$content_type = ""; //use last known content-type as a trigger
$tmp_cnt_file = "tmp/tmpfile";
$xml_response = ""; // this will hold the "usable" curl response
$hidx = 0; //header index.. counting the number of different headers received
function read_header($cUrl, $string) // this will be called once for every line of each header received
{
    global $content_type, $hidx;
    $length = strlen($string);
    if (preg_match('/Content-Type:(.*)/', $string, $match))
    {
        $content_type = $match[1];
        $hidx++;
    }
    /*
    should set $content_type to 'application/xop+xml; charset=utf-8; type="text/xml"' for the first
    and to 'application/zip' for the second response body
    echo "Header: $string<br />\n";
    */
    return $length;
}
function read_body($cUrl, $string)
{
    global $content_type, $xml_response, $tmp_cnt_file, $hidx;
    $length = strlen($string);
    if(stripos($content_type, "xml") !== false)
        $xml_response .= $string;
    elseif(stripos($content_type, "zip") !== false)
    {
        $handle = fopen($tmp_cnt_file."-".$hidx.".zip", "a");
        fwrite($handle, $string);
        fclose($handle);
    }
    /*
    elseif {...} else{...}
    depending on your needs
    echo "Received $length bytes<br />\n";
    */
    return $length;
}
and of course set the proper curlopts
// Set callback function for header
curl_setopt($cUrl, CURLOPT_HEADERFUNCTION, 'read_header');
// Set callback function for body
curl_setopt($cUrl, CURLOPT_WRITEFUNCTION, 'read_body');
Don't forget NOT to save the cURL response to a variable, because of the memory issues; hopefully everything you need will end up in $xml_response anyway.
//$response = curl_exec($cUrl);
curl_exec($cUrl);
And for parsing, you can refer to $xml_response and the temp files you created (starting with tmp/tmpfile-2 in this scenario). Again, I have not been able to test the code above in any way, so this might not work (but it should, imho ;))
[EDIT III]
Say we want curl to write all incoming data directly to another (outgoing) stream, in this case a socket connection
I'm not sure if it is as easy as this:
$fs = fsockopen($host, $port, $errno, $errstr);
$cUrl = curl_init('http://example.com/foo');
curl_setopt($cUrl, CURLOPT_FILE, $fs); // redirect output to sockethandle
curl_exec($cUrl);
curl_close($cUrl);
fclose($fs); // close handle
Otherwise we will have to use our known write and header callbacks with just a little trick:
//first open the socket (before initiating curl)
$fs = fsockopen($host, $port, $errno, $errstr);
// now for the new callback function
function socket_pipe($cUrl, $string)
{
    global $fs;
    $length = strlen($string);
    fputs($fs, $string); // add NOTHING to the received line, just send it to $fs; that was easy, wasn't it?
    return $length;
}
// and of course for the CURLOPT part
// Set callback function for header
curl_setopt($cUrl, CURLOPT_HEADERFUNCTION, 'socket_pipe');
// Set the same callback function for body
curl_setopt($cUrl, CURLOPT_WRITEFUNCTION, 'socket_pipe');
// do not forget to
fclose($fs); //when we're done
The thing is, not editing the result and simply piping it to $fs requires that Apache is listening on a certain port which you then assign your script to.
Or you will need to add ONE request line directly after fsockopen:
fputs($fs, "POST $path HTTP/1.0\n"); // where $path is your script, of course
I'm sorry I can't help much, because you did not post much code, but I remember having a similar issue when I was playing with curl_setopt options.
Did you use CURLOPT_BINARYTRANSFER?
From the PHP documentation: CURLOPT_BINARYTRANSFER - TRUE to return the raw output when CURLOPT_RETURNTRANSFER is used.
Just set CURLOPT_RETURNTRANSFER and CURLOPT_POST:
$c = curl_init($url);
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
curl_setopt($c, CURLOPT_CONNECTTIMEOUT, 1);
curl_setopt($c, CURLOPT_TIMEOUT, 1);
curl_setopt($c, CURLOPT_POST, 1);
curl_setopt($c, CURLOPT_POSTFIELDS,
array());
$rst_str = curl_exec($c);
curl_close($c);
You can re-assemble your binary data by doing something like this; I hope it helps.
$file_array = explode("\r\n\r\n", $file, 2); // split the raw response into the header block and the body
$header_array = explode("\r\n", $file_array[0]);
foreach($header_array as $header_value) {
    $header_pieces = explode(':', $header_value, 2);
    if(count($header_pieces) == 2) {
        $headers[$header_pieces[0]] = trim($header_pieces[1]);
    }
}
header('Content-Type: ' . $headers['Content-Type']);
header('Content-Disposition: ' . $headers['Content-Disposition']);
echo $file_array[1];
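If the whole multipart body fits in memory, the individual parts can also be separated on the MIME boundary before applying the header/body split above; a rough sketch, assuming $boundary was extracted from the response's Content-Type header:
$parts = explode('--' . $boundary, $response);
foreach ($parts as $part) {
    $part = trim($part);
    if ($part === '' || $part === '--') {
        continue; // skip the preamble and the closing "--" marker
    }
    $split = explode("\r\n\r\n", $part, 2);
    $part_headers = $split[0];                         // e.g. Content-Type: application/zip
    $part_body    = isset($split[1]) ? $split[1] : ''; // the XML or binary payload
}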
If you don't need the binary data, have you tried the following?
curl_setopt($c, CURLOPT_NOBODY, true);

PHP pass through proxy

I want to create a proxy-like application that forwards the request headers to the file server while the response goes straight to the client, so it doesn't consume all of my server's bandwidth.
The only way I can think of is using PHP cURL, but that doesn't work as intended, since it downloads the file first and then sends it to the client. I want to know whether there is a way to remove or minimize the bandwidth used.
What I want to do:
The client opens the page and presses the download button; then MY server requests the file from the file server (using a header) and sends it directly to the client, or MY server redirects the client to it.
The client opens the page and presses the download button.
MY server requests the file from the file server and sends it to the client 8 KB at a time (in the following example).
This uses CURLOPT_BUFFERSIZE, CURLOPT_HEADERFUNCTION and CURLOPT_WRITEFUNCTION.
<?php
/*
* curl-pass-through-proxy.php
*
* purpose: PHP cURL pass-through proxy; handles big files, https, authentication
* example: curl-pass-through-proxy.php?url=precise/ubuntu-12.04.4-desktop-i386.iso
* limitation: doesn't work on binary output if output_handler = ob_gzhandler is enabled in php.ini
* licence: BSD
*
* Copyright 2014 Gabriel Rota <gabriel.rota#gmail.com>
*
*/
$url = "http://releases.ubuntu.com/" . $_GET["url"]; // NOTE: this example don't use https
$credentials = "user:pwd";
$headers = array(
"GET ".$url." HTTP/1.1",
"Content-type: text/xml",
"Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
"Cache-Control: no-cache",
"Pragma: no-cache",
"Authorization: Basic " . base64_encode($credentials)
);
global $filename; // used in fn_CURLOPT_HEADERFUNCTION setting download filename
$filename = substr($url, strrpos($url, "/")+1); // find last /
function fn_CURLOPT_WRITEFUNCTION($ch, $str){
    $len = strlen($str);
    echo( $str );
    return $len;
}
function fn_CURLOPT_HEADERFUNCTION($ch, $str){
    global $filename;
    $len = strlen($str);
    header( $str );
    //~ error_log("curl-pass-through-proxy:fn_CURLOPT_HEADERFUNCTION:str:".$str.PHP_EOL, 3, "/tmp/curl-pass-through-proxy.log");
    if ( strpos($str, "application/x-iso9660-image") !== false ) {
        header( "Content-Disposition: attachment; filename=\"$filename\"" ); // set download filename
    }
    return $len;
}
$ch = curl_init(); // init curl resource
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, false); // false: output is passed to CURLOPT_WRITEFUNCTION instead of being returned
curl_setopt($ch, CURLOPT_TIMEOUT, 600); // 600 seconds
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers); // request headers for $url, including Authorization
curl_setopt($ch, CURLOPT_USERAGENT, $_SERVER['HTTP_USER_AGENT']);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false); // don't check certificate
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); // don't check certificate
curl_setopt($ch, CURLOPT_HEADER, false); // don't prepend headers to the body; they are handled by CURLOPT_HEADERFUNCTION
curl_setopt($ch, CURLOPT_BUFFERSIZE, 8192); // 8 KB chunks
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_HEADERFUNCTION, "fn_CURLOPT_HEADERFUNCTION"); // handle received headers
curl_setopt($ch, CURLOPT_WRITEFUNCTION, 'fn_CURLOPT_WRITEFUNCTION'); // called for every CURLOPT_BUFFERSIZE chunk received
if ( ! curl_exec($ch) ) {
error_log( "curl-pass-through-proxy:Error:".curl_error($ch).PHP_EOL, 3, "/tmp/curl-pass-through-proxy.log" );
}
curl_close($ch); // close curl resource
?>

I want to check if a site is alive within this cURL code?

I use this code to get a response/result from the other server, and I want to know how I can check whether the site is alive.
$ch = curl_init('http://domain.com/curl.php');
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);
if (!$result)
// it will execute some code if there is no result echoed from curl.php
All you really have to do is a HEAD request and check whether you get a 200 OK response after redirects. You do not need to request the full body for this. In fact, you simply shouldn't.
function check_alive($url, $timeout = 10) {
    $ch = curl_init($url);
    // Set request options
    curl_setopt_array($ch, array(
        CURLOPT_FOLLOWLOCATION => true,
        CURLOPT_NOBODY => true,
        CURLOPT_TIMEOUT => $timeout,
        CURLOPT_USERAGENT => "page-check/1.0"
    ));
    // Execute request
    curl_exec($ch);
    // Check if an error occurred
    if(curl_errno($ch)) {
        curl_close($ch);
        return false;
    }
    // Get HTTP response code
    $code = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    // Page is alive if 200 OK is received
    return $code === 200;
}
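For example, a quick usage sketch (hypothetical URL):
var_dump( check_alive( 'https://www.example.com' ) ); // bool(true) if the final response is 200 OK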
Here is a simpler one:
<?php
$yourUR="http://sitez.com";
$handles = curl_init($yourUR);
curl_setopt($handles, CURLOPT_NOBODY, true);
curl_exec($handles);
$resultat = curl_getinfo($handles, CURLINFO_HTTP_CODE);
echo $resultat;
?>
Check a web URL's status with a PHP/cURL function.
The condition: if the HTTP status is not 200 or 302, or the request takes longer than 10 seconds, the website is considered unreachable.
<?php
/**
 *
 * @param string $url URL that must be checked
 */
function url_test( $url ) {
    $timeout = 10;
    $ch = curl_init();
    curl_setopt( $ch, CURLOPT_URL, $url );
    curl_setopt( $ch, CURLOPT_RETURNTRANSFER, 1 );
    curl_setopt( $ch, CURLOPT_TIMEOUT, $timeout );
    $http_respond = curl_exec( $ch );
    $http_respond = trim( strip_tags( $http_respond ) );
    $http_code = curl_getinfo( $ch, CURLINFO_HTTP_CODE );
    curl_close( $ch );
    if ( ( $http_code == 200 ) || ( $http_code == 302 ) ) {
        return true;
    } else {
        // you can return $http_code here if necessary or wanted
        return false;
    }
}
// simple usage:
$website = "www.example.com";
if( !url_test( $website ) ) {
echo $website ." is down!";
} else {
echo $website ." functions correctly.";
}
?>
You can try it with the cURL command-line client:
curl -I "<URL>" 2>&1 | awk '/HTTP\// {print $2}'
It will print 200 when the site is alive.
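If you would rather call that from PHP, a minimal sketch using shell_exec() and curl's -w option to print just the status code (a variation on the command above, not a literal translation):
$status = trim(shell_exec('curl -sI -o /dev/null -w "%{http_code}" ' . escapeshellarg($url)));
$alive  = ($status === '200');
var_dump($alive);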
Keep it short and simple...
$string = #file_get_contents('http://domain.com/curl.php');
If $string is false or empty, the page is probably unreachable (or genuinely doesn't output anything).
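If you go this route, a stream context at least bounds how long the check can hang; a small sketch (the 10-second timeout is an assumption):
$context = stream_context_create(array('http' => array('timeout' => 10)));
$string  = @file_get_contents('http://domain.com/curl.php', false, $context);
$alive   = ($string !== false);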
