How can I add an exception? - PHP

I have a PHP site, www.test.com.
On the index page of this site, I am updating another site's (www.pgh.com) database with the following PHP code:
$url = "https://pgh.com/test.php?userID=".$userName. "&password=" .$password ;
$response = file_get_contents($url);
But now the site www.pgh.com is down, so it is also affecting my site www.test.com.
How can I add an exception (or something else) to this code, so that my site keeps working when the other site is down?

$response = file_get_contents($url);
if ($response === false)
{
    // Handle the error here instead of letting the page break
}
From the PHP manual
Adding a timeout:
$ctx = stream_context_create(array(
    'http' => array(
        'timeout' => 1
    )
));
file_get_contents("http://example.com/", false, $ctx);
file_get_contents() returns false on failure.
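Putting the two together, here is a minimal sketch (the one-second timeout is illustrative, and $url is assumed to be built as in the question):
// Short timeout so a dead pgh.com cannot stall the whole page load.
$ctx = stream_context_create(array(
    'http' => array('timeout' => 1)
));

// Suppress the warning and test the return value instead.
$response = @file_get_contents($url, false, $ctx);

if ($response === false) {
    // Remote site down or too slow: log it and keep serving www.test.com.
    error_log('Could not reach pgh.com, skipping the remote update');
} else {
    // Use $response as before.
}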

You have two options:
Add a timeout to the file_get_contents call using stream_context_create() (the manual has good examples; docs for the timeout parameter are here). This is not perfect, as even a one-second timeout will cause a noticeable pause when loading the page.
More complex but better: use a caching mechanism. Do the file_get_contents request in a separate script that gets called regularly (e.g. every 15 minutes) via a cron job (if you have access to one). Write the result to a local file, which your actual script will read; a sketch of this follows below.
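A rough sketch of that caching approach, assuming a cron-run fetcher script and a cache file path chosen purely for illustration (neither is part of the original answer):
// fetch_pgh.php - run from cron, e.g. */15 * * * * php /path/to/fetch_pgh.php
$ctx = stream_context_create(array('http' => array('timeout' => 5)));
$response = @file_get_contents($url, false, $ctx);   // $url built as in the question
if ($response !== false) {
    // Only overwrite the cache when the remote call succeeded.
    file_put_contents('/tmp/pgh_response.cache', $response, LOCK_EX);
}

// index.php - read the cached copy instead of hitting pgh.com on every page load.
$cacheFile = '/tmp/pgh_response.cache';
$response  = is_readable($cacheFile) ? file_get_contents($cacheFile) : false;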

Related

How to properly make a thousand requests to an API in PHP

I have to make a thousand requests to the IGDB API and I am having trouble making this work. Every time I run my script, it loads for some time and then my web host tells me "Error: there is a problem...It seems that something went wrong." (not very helpful, I know).
Since I believe the issue comes from the number of requests, I have tried reducing it, but I am down to 60 requests with a pause of 4 seconds between each and still no success.
My latest try:
$splice = array_splice($array, 0, 60);
foreach ($splice as $key => $value) {
    $request = wp_remote_get(
        'https://igdbcom-internet-game-database-v1.p.mashape.com/games/?fields=*&search=' . $value['Name'],
        array('headers' => array(
            'Accept'        => 'application/json',
            'X-Mashape-Key' => 'Key'
        ))
    );
    $body = wp_remote_retrieve_body($request);
    $data_api = json_decode($body, true);
    sleep(4);
}
Would anyone know what I am doing wrong? I am running out of ideas...
That's very likely to be nothing more than a timeout response from either PHP or the server.
Although there are ways to work around these limits, they are not there for nothing.
You should use the CLI to execute heavy batch queries like this, not CGI. CGI access is for regular users, whatever their role/privileges. As a developer, you have access to the code and to the server (or at least your sysadmin does if you are in a team). You SHOULD use the command line for these queries. It will take less time, have less chance to fail, and you'll have the error logs printed right away unless you redirect them to a file.
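As a rough sketch of running the batch from the command line (the script name, the wp-load.php bootstrap path, and the urlencode() call are assumptions added for illustration):
// import_games.php - run with: php import_games.php
require __DIR__ . '/wp-load.php';   // bootstrap WordPress so wp_remote_get() is available

if (php_sapi_name() !== 'cli') {
    exit("Run this script from the command line.\n");
}

set_time_limit(0);   // no PHP execution time limit for the long-running batch

foreach ($array as $value) {
    $request = wp_remote_get(
        'https://igdbcom-internet-game-database-v1.p.mashape.com/games/?fields=*&search=' . urlencode($value['Name']),
        array('headers' => array('Accept' => 'application/json', 'X-Mashape-Key' => 'Key'))
    );

    if (is_wp_error($request)) {
        // Errors go straight to the terminal instead of a blank hosting error page.
        fwrite(STDERR, $request->get_error_message() . "\n");
        continue;
    }

    $data_api = json_decode(wp_remote_retrieve_body($request), true);
    // ... store $data_api ...
}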

Identify which promise failed and dynamically change promise queue in Guzzle 6

I need to download a large number of large files, stored across multiple identical servers. A file, like '5.doc', that is stored on server 3, is also stored on server 55.
To speed this up, instead of using just one server to download all the files one after another, I'm using all servers at the same time. The problem is that one of the servers may be much slower than the others, or may even be down. When using Guzzle to batch download files, all of the files in that batch must be downloaded before another batch starts.
Is there a way to immediately start downloading another file alongside others so that all of the servers are constantly downloading a file?
If a server is down, I've set a timeout of 300 seconds, and when this is reached Guzzle will catch its ConnectException.
How do I identify which of the promises (downloads) have failed so I can cancel them? Can I get information about which file/server failed?
Below is a simplified example of the code I'm using to illustrate the point. Thanks for the help!
$filesToDownload = [['5.doc', '8.doc', '10.doc'], ['1.doc', '9.doc']]; // The file names that we need to download
$availableServers = [3, 55, 88]; // Server IDs that are available

foreach ($filesToDownload as $index => $fileBatchToDownload) {
    $promises = [];
    foreach ($availableServers as $key => $availableServer) {
        array_push(
            $promises,
            $client->requestAsync('GET', 'http://domain.com/' . $fileBatchToDownload[$index][$key], [
                'timeout' => 300,
                'sink' => '/assets/' . $fileBatchToDownload[$index][$key]
            ])
        );
        $database->updateRecord($fileBatchToDownload[$index][$key], ['is_cached' => 1]);
    }

    try {
        $results = Promise\unwrap($promises);
        $results = Promise\settle($promises)->wait();
    } catch (\GuzzleHttp\Exception\ConnectException $e) {
        // When we can't connect to the server or didn't download within the timeout
        foreach ($e->failed() as $failedPromise) {
            // Re-set record in database to is_cached = 0
            // Delete file from server
            // Remove this server from the $availableServers list as it may be down or too slow
            // Re-add this file to the next batch to download in $filesToDownload
        }
    }
}
I'm not sure how you are doing an asynchronous download of one file from multiple servers using Guzzle, but getting the array index of a failed request can be done with the promise's then() method:
array_push(
    $promises,
    $client->requestAsync('GET', "http://localhost/file/{$id}", [
        'timeout' => 10,
        'sink' => "/assets/{$id}"
    ])->then(
        function () {
            echo 'Success';
        },
        function () use ($id) {
            echo "Failed: $id";
        }
    )
);
then() accepts two callbacks. The first one is triggered on success and the second one on failure. The source calls them $onFulfilled and $onRejected. Other usages are documented in the Guzzle documentation. This way you can start downloading another file immediately after a failure.
Can I get information about which file/server failed?
When a promise fails, it means the request remained unfulfilled. In this case you can get the host and the requested path by type-hinting an instance of the RequestException class in the second then() callback:
use GuzzleHttp\Exception\RequestException;
// ...

array_push(
    $promises,
    $client->requestAsync('GET', "http://localhost/file/{$id}", [
        'timeout' => 10,
        'sink' => "/assets/{$id}"
    ])->then(
        function () {
            echo 'Success';
        },
        function (RequestException $e) {
            echo "Host: " . $e->getRequest()->getUri()->getHost(), "\n";
            echo "Path: " . $e->getRequest()->getRequestTarget(), "\n";
        }
    )
);
So you will have full information about the failing host and the file's name. If you need access to more information, note that $e->getRequest() returns an instance of the GuzzleHttp\Psr7\Request class, and all methods of that class are available here. (Guzzle and PSR-7)
When an item is successfully downloaded, can we then immediately start a new file download on this free server, whilst the other files are still downloading?
I think you should decide which files to download only when creating the promises at the very beginning, and repeat/renew failed requests within the second callback. Trying to create new promises after each successful promise may result in an endless process that downloads duplicate files, and that is not simple to handle.
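A rough sketch of retrying a failed download inside the rejection callback (the $retries counter and the single-retry limit are assumptions; returning a new promise from the handler chains the retry onto the original promise):
array_push(
    $promises,
    $client->requestAsync('GET', "http://localhost/file/{$id}", [
        'timeout' => 10,
        'sink' => "/assets/{$id}"
    ])->then(
        function () use ($id) {
            echo "Success: $id";
        },
        function (RequestException $e) use ($client, $id, &$retries) {
            // e.g. re-set is_cached = 0 in the database here
            $attempts = isset($retries[$id]) ? $retries[$id] : 0;
            if ($attempts < 1) {
                $retries[$id] = $attempts + 1;
                // Returning a new promise chains the retry (ideally against another server).
                return $client->requestAsync('GET', "http://localhost/file/{$id}", [
                    'timeout' => 10,
                    'sink' => "/assets/{$id}"
                ]);
            }
            throw $e;   // give up after one retry
        }
    )
);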

fopen() very slow when used in Joomla

I'm writing an authentication plugin for Joomla.
The plugin is calling an external site to verify the username and password.
The code works, but my problem is that when I'm calling fopen() from the Joomla plugin, it takes a very long time (63 seconds) to respond.
When running the same code on the server (but not through Joomla), the fopen() call only takes 0.1 second.
Are there any settings in Joomla that could make the fopen() call take so long? Or is there another function I should use instead of fopen()? (I have tried file_get_contents(), but with the same result.)
Below is the code I'm using (based on this article: http://wezfurlong.org/blog/2006/nov/http-post-from-php-without-curl/). (I don't have cURL installed, so that is not an option.)
$username = "admin";
$password = "1234" ;
$verb = 'GET'
$url = "https://xxx.xxx.xxx/api.phtml/login" ;
$params = array("username"=>$username, "password"=>$password);
$params = http_build_query($params);
$url .= '?' . $params;
$cparams = array( 'http' => array( 'method' => $verb,
'ignore_errors' => true ) );
$context = stream_context_create($cparams);
$fp = fopen($url, 'rb', false, $context);
allow_url_fopen is enabled.
Joomla! Version: Joomla! 2.5.27 Stable
PHP Version: 5.2.6-1+lenny10
I have been struggling with this for three days now, so any help would be very appreciated!
Thanks!
Usually when we have a problem with fopen(), we switch to cURL and that solves the problem. In any case, check your .htaccess and your local php.ini (or .user.ini) files for the Joomla site for any restrictions on fopen.
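If cURL does become available, a rough equivalent of the fopen() call above would look like this (the timeout values are illustrative):
$ch = curl_init($url);                           // $url already contains the query string
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body instead of printing it
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 5);     // fail fast instead of hanging for a minute
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$response = curl_exec($ch);
if ($response === false) {
    // Inspect curl_error($ch) to find out why the call failed.
}
curl_close($ch);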

PHP non blocking soap request

After a user signs up on my website, I need to send a SOAP request in a way that does not block the user. If the SOAP server is running slowly, I don't want the end user to have to wait on it. Is there a way I can send the request and let my main PHP application continue to run without waiting for a response from the SOAP server? If not, is there a way to set a maximum timeout on the SOAP request, and handle things differently if the request exceeds that timeout?
Edit:
I would ideally like to handle this with a max timeout for the request. I have the following:
//ini_set('default_socket_timeout', 1);
$streamOptions = array(
    'http' => array(
        'timeout' => 0.01
    )
);
$streamContext = stream_context_create($streamOptions);
$wsdl = 'file://' . dirname(__FILE__) . '/Service.wsdl';

try {
    if (file_get_contents($wsdl)) {
        $this->_soapClient = new SoapClient($wsdl, array(
            'soap_version'   => SOAP_1_2,
            'trace'          => true,
            'stream_context' => $streamContext
        ));
        $auth = array('UserName' => $this->_username, 'Password' => $this->_password);
        $header = new SoapHeader(self::WEB_SERVICE_URL, "WSUser", $auth);
        $this->_soapClient->__setSoapHeaders(array($header));
    }
} catch (Exception $e) {
    echo "we couldnt connect" . $e;
}

$this->_soapClient->GetUser();
I set the timeout to 0.01 to try and force the connection to timeout, but the request still seems to fire off. What am I doing wrong here?
I have had the same issues and have implemented a solution!
I overrode
SoapClient::__doRequest();
to allow multiple SOAP calls using
curl_multi_exec();
Have a look at this: asynchronous-soap
Four solutions:
Use AJAX to do the SOAP -> Simplest SOAP example
Use AJAX to call a second PHP file on your server which does the SOAP (best solution, in my opinion)
Put the SOAP request at the end of your PHP file(s) (not the deluxe solution)
Use pcntl_fork() and do everything in a second process (I advise against that; it might not work with every server configuration)
Depending on the way you implement this, PHP has plenty of timeout configurations,
for example socket_set_timeout() or stream_set_timeout() (http://php.net/manual/en/function.stream-set-timeout.php).
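As a rough sketch of capping the SOAP call's wait time (the five-second values are illustrative; GetUser() mirrors the call in the question):
// Cap how long PHP waits while reading the SOAP response.
ini_set('default_socket_timeout', 5);

$client = new SoapClient($wsdl, array(
    'soap_version'       => SOAP_1_2,
    'connection_timeout' => 5,    // how long to wait while connecting to the server
    'exceptions'         => true,
));

try {
    $client->GetUser();
} catch (SoapFault $e) {
    // Server unreachable or too slow: log it and continue the signup flow.
}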

How to speed up file_get_contents()? or any alternative way

I have a website scraping project. Look at this code:
<?php
include('db.php');
$r = mysql_query("SELECT * FROM urltable");
$rows = mysql_num_rows($r);
for ($j = 0; $j < $rows; ++$j) {
    $row = mysql_fetch_row($r);
    $html = file_get_contents(mysql_result($r, $j, 'url'));
    $file = fopen($j . ".txt", "w");
    fwrite($file, $html);
    fclose($file);
}
?>
I have a list of URLs. This code creates a text file containing the contents (HTML) of each URL.
When running this code, I can only make about one file per second [each file is ~20 KB]. My internet connection provides 3 Mbps download speed, but I can't utilize that speed with this code.
How do I speed up file_get_contents()? Or how do I speed up this code using threading, php.ini configuration, or any other method?
As this was not one of the suggestions on the duplicate page, I will add it here.
Take a close look at the curl_multi page in the PHP manual.
It's not totally straightforward, but once you get it running it's very fast.
Basically you issue multiple cURL requests and then collect the data as and when it returns. It returns in any order, so a bit of control is required.
I have used this in a data collection process to reduce 3-4 hours of processing to 30 minutes.
The only issue could be that you swamp a site with multiple requests and the owner considers that an issue and bans your access. But with a bit of sensible sleep()'ing added to your process, you should be able to reduce that possibility to a minimum.
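A stripped-down sketch of the curl_multi approach for the URL list above (the $urls array stands in for the rows read from urltable; error handling and sleep()'ing are omitted):
$mh = curl_multi_init();
$handles = array();

foreach ($urls as $j => $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_multi_add_handle($mh, $ch);
    $handles[$j] = $ch;
}

// Run all transfers in parallel.
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh);   // wait for activity instead of busy-looping
} while ($running > 0);

// Collect the results in the original order and write the files.
foreach ($handles as $j => $ch) {
    file_put_contents($j . '.txt', curl_multi_getcontent($ch));
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);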
You can add a few controls with streams, but cURL should be much better, if available.
$stream_options = array(
    'http' => array(
        'method' => 'GET',
        'header' => 'Accept-language: en',
        'timeout' => 30,
        'ignore_errors' => true,
    )
);
$stream_context = stream_context_create($stream_options);
$fc = file_get_contents($url, false, $stream_context);
