My little application works this way: in a simple text input, users are asked to insert a URL. On change, my script tries to extract and display the first 10 images found on that page.
<input type="text" name="url" id="url" value="">
$("form.link-form input#url").change(function() {
var request = $.ajax({
type: "GET",
url: "funzioni/ajax/loadImagestFromUrl.php",
data: "url=" + insertedUrl,
dataType: "html",
timeout: 5000,
success: function(res) {
loadUrlImages2div(msg99);
},
error: function() {
request.abort();
}
});
});
The PHP script loadImagestFromUrl.php runs this code, using the PHP Simple HTML DOM Parser library:
set_time_limit(5);
$html = file_get_html($url); // load the entire HTML of the page into this variable
$count = 1;
foreach ($html->find('img') as $element) { // images only
    if ($count == 11) break; // only the first 10 images
    echo "<img class=\"imgFromUrl\" src=\"" . $element->src . "\" />\n";
    $count++;
}
This works great in most cases, but some URLs are not reachable within a few seconds, or are password-protected, and the server keeps executing something even though I set a timeout of 5 seconds for the AJAX request and 5 seconds for the execution of the PHP code.
When this happens everything gets blocked; even refreshing the page is impossible, because it loads and loads and only after a long time returns "504 Gateway Time-out. The server didn't respond in time."
Can someone help me understand how to completely kill this request and let the server keep working?
I get that the solution must be found in the PHP timeout, not the AJAX timeout. This is clear.
I found this interesting discussion:
Handling delays when retrieving files from remote server in PHP
which suggests using the context parameter of the file_get_contents function.
But it doesn't actually work in my app.
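For reference, that context-based approach looks roughly like this (a sketch of my own; the 5-second value simply mirrors the timeouts used above):

// Build a stream context with an explicit read timeout
$context = stream_context_create(array(
    'http' => array(
        'timeout' => 5 // seconds before file_get_contents gives up reading
    )
));
$page = file_get_contents($url, false, $context);
if ($page === false) {
    // Bail out early instead of letting the request hang
    exit('Could not load the remote page in time.');
}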
So, as suggested by Wieger, I have tried to use cURL instead of file_get_contents. I defined this function:
function file_get_contents_curl($url) {
    $ch = curl_init();
    $timeout = 2;
    curl_setopt($ch, CURLOPT_AUTOREFERER, TRUE);        // set the Referer header on redirects
    curl_setopt($ch, CURLOPT_HEADER, 0);                // no headers in the output
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);        // return the body as a string
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);        // max seconds for the whole transfer
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout); // max seconds to connect
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, TRUE);     // follow redirects
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
and used it instead of file_get_contents inside the parser library.
Again, it works in most cases, but some URLs make the server load for a long time until the gateway timeout.
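One gotcha worth checking (an assumption on my part, since it depends on how your libcurl was built): the DNS-resolution phase can block past CURLOPT_TIMEOUT unless CURLOPT_NOSIGNAL is set. A variant of the wrapper above (renamed file_get_contents_curl_safe here just for illustration) that sets it and surfaces failures:

function file_get_contents_curl_safe($url) {
    $timeout = 2;
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    curl_setopt($ch, CURLOPT_TIMEOUT, $timeout);
    // Without this, some libcurl builds ignore the timeouts during DNS lookups
    curl_setopt($ch, CURLOPT_NOSIGNAL, 1);
    $data = curl_exec($ch);
    if ($data === false) {
        error_log('cURL failed for ' . $url . ': ' . curl_error($ch));
    }
    curl_close($ch);
    return $data;
}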
Related
I currently have a Laravel application which is doing a cURL request from one route to another route within the same application. My cURL looks like this:
//LOGGING THAT A CURL CALL IS ABOUT TO BE MADE
$url = env('APP_URL') . '/tests/add/results';
$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // return the response as a string
curl_setopt($ch, CURLOPT_HEADER, FALSE);
curl_setopt($ch, CURLINFO_HEADER_OUT, FALSE);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, TRUE);
curl_setopt($ch, CURLOPT_POSTFIELDS, $test_post_data);
curl_setopt($ch, CURLOPT_TIMEOUT, 10);
$response = curl_exec($ch);
In the route that the POST is being sent to, the first thing I log is the request data:
//LOGGING THAT I RECEIVED THE CURL CALL in the receiving function
I'm noticing that the request data gets logged exactly the timeout's length after the initial call, meaning the request is actually being received 10 seconds later.
In my logs I'll see something like:
10:10:10 - LOGGING CURL CALL
10:10:20 - Receiving CURL call
If I change the timeout to 30, then the log shows that the CURL call was received 30 seconds later.
Does anyone have any idea why this may be happening?
The response from the cURL call always just comes back as false.
I did the following to make the POST request work:
Instead of calling the route via cURL, I did a POST directly to the route's controller method:
$testController = new TestsController;
// Build an internal POST request instead of going over HTTP
$test_data_request = new \Illuminate\Http\Request();
$test_data_request->setMethod('POST');
$test_data_request->request->add( $test_post_data );
// Invoke the controller action directly with the crafted request
$testId = $testController->addTestResults($test_data_request);
You've not provided enough information, but I think the problem will be one or more of the following:
The web server at http://127.0.0.1:8000 is not running
The script located at http://127.0.0.1:8000/tests/add/results is running too long and the request times out before it is completed
The requested path is returning redirect headers and creates an infinite loop
The response is too big to finish the data transfer within the timeout (very weird if on localhost)
Try some more debugging and provide more information, so we can help you.
PS: First I would try to capture the outgoing headers (curl_setopt($ch, CURLINFO_HEADER_OUT, true);) and print out the response (var_dump($response);) - or save it to a file :)
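Something like this minimal sketch ($url here is a placeholder for whatever endpoint you are debugging):

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLINFO_HEADER_OUT, true); // record the headers cURL actually sends
$response = curl_exec($ch);
var_dump(curl_getinfo($ch, CURLINFO_HEADER_OUT));            // the outgoing request headers
var_dump($response === false ? curl_error($ch) : $response); // the response, or the error
curl_close($ch);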
I have to call an external script in which I make a first call with cURL to get data, which takes about 2-3 minutes. During this time I need to make another external call with cURL to get the progress of the first call. The issue is that my second call waits until the reply of the first cURL call comes. I also checked curl_multi, but that is not helping me either, as I want to make many calls while the first call is in progress. Can anyone help me solve this, please?
I suppose there is no need to make a second call to track the cURL progress. You can achieve the same by using the cURL option CURLOPT_PROGRESSFUNCTION with a callback function.
The callback method takes 5 arguments:
cURL resource
Total number of bytes expected to be downloaded
Number of bytes downloaded so far
Total number of bytes expected to be uploaded
Number of bytes uploaded so far
In the callback method you can calculate the percentage downloaded/uploaded. An example is given below:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://stackoverflow.com");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, 'progress'); // call progress() during the transfer
curl_setopt($ch, CURLOPT_NOPROGRESS, false);            // must be false, or the callback never fires
curl_setopt($ch, CURLOPT_HEADER, 0);
$html = curl_exec($ch);
curl_close($ch);

function progress($resource, $download_size, $downloaded, $upload_size, $uploaded)
{
    if ($download_size > 0)
        echo $downloaded / $download_size * 100 . "%\n";
    sleep(1); // note: sleeping here pauses the transfer itself
}
There is a way to do this - please see the following links; they explain how to do it using curl_multi_init: php.net curl_multi_init and http://arguments.callee.info/2010/02/21/multiple-curl-requests-with-php/
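For completeness, a minimal curl_multi sketch of my own (the two URLs are placeholders): both handles transfer concurrently, so the progress poll is not blocked by the long call.

$urls = array('https://example.com/long-job', 'https://example.com/progress');
$mh = curl_multi_init();
$handles = array();
foreach ($urls as $u) {
    $ch = curl_init($u);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_multi_add_handle($mh, $ch);
    $handles[] = $ch;
}
$running = null;
do {
    curl_multi_exec($mh, $running); // drive all transfers forward
    if ($running) {
        curl_multi_select($mh);     // wait for activity instead of busy-looping
    }
} while ($running > 0);
foreach ($handles as $ch) {
    echo curl_multi_getcontent($ch), "\n";
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);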
I am using a PHP script (currentimage.php) to get a snapshot from my CCTV IP camera, which works fine:
<?php
while (@ob_end_clean()); // discard any open output buffers
header('Content-type: image/jpeg');
// create curl resource
$ch = curl_init();
curl_setopt($ch, CURLOPT_USERAGENT, $useragent);
curl_setopt($ch, CURLOPT_URL, 'http://192.168.0.20/Streaming/channels/1/picture');
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // so $output really holds the image bytes
curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
// $output contains the output string
$output = curl_exec($ch);
echo $output;
// close curl resource to free up system resources
curl_close($ch);
?>
and another PHP/HTML page to display the data:
<img id="myImage" src="currentimage.php">
but I can't get it to refresh the image in the background every 30s.
I am trying AJAX, but with no success:
<script>
function refresh_image(){
    document.getElementById("myImage").setAttribute("src", "currentimage.php");
}
setInterval(function(){ refresh_image(); }, 30000);
</script>
What am I doing wrong? I will appreciate any help.
I have tried this concept of changing the displayed image via timed requests to a PHP script, and it works fine. A new request is fired to the image-getting script whenever the URL changes, so I appended a version number as a parameter to the URL:
var version = 1;
function refresh_image(){
    // A changing query string forces the browser to re-request the image
    document.getElementById("myImage").setAttribute("src", "currentimage.php?ver=" + version);
    version++;
}
setInterval(function(){
    refresh_image();
}, 2000);
I suspect the main problem here is that the browser caches your file and will not reload it if you set the attribute again. I've found a way around this though:
document.getElementById("myImage").setAttribute("src", "currentimage.php?version=1");
Save the version number and count it up every time you make the request. This way the browser will treat it as a new URL and reload it (check this using your browser's developer tools, in the network tab).
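A complementary server-side option (my own suggestion, not part of the answers above): have currentimage.php itself forbid caching, so the version parameter becomes a belt-and-braces measure:

// Sent from currentimage.php before the image bytes
header('Content-Type: image/jpeg');
header('Cache-Control: no-store, no-cache, must-revalidate, max-age=0');
header('Pragma: no-cache');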
I am trying to build a website in which I can track my fitness and nutrition.
I would like to use the API that is available from USDA
http://ndb.nal.usda.gov/ndb/doc/apilist/API-FOOD-REPORT.md
and this is where I want to get to:
http://nutritiondata.self.com/facts/cereal-grains-and-pasta/5680/2
not identical, but similar, with ongoing tracking and the ability to record the food I have eaten in my own database.
I appreciate there are apps out there that offer this functionality (MyPlate, for example), but I really fancy the challenge of doing this kind of thing myself.
I have set up a Joomla site,
I have checked that cURL is available and active,
I've installed Sorcerer by NoNumber,
I've read loads of articles on the construction of cURL calls.
How do I return data to my page?
The closest I think I have got is with:
$ch = curl -H "Content-Type:application/json" -d '{"ndbno":"01009","type":"f"}' DEMO_KEY#api.nal.usda.gov/ndb/reports;
$fp = fopen("example_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);
This brings my site offline with the error:
Parse error: syntax error, unexpected '"Content-Type:application/json' (T_CONSTANT_ENCAPSED_STRING) in /mounted-storage/home147/sub036/sc85544-PGMS/Domain/plugins/system/sourcerer/helper.php(570) : runtime-created function on line 8
Fatal error: Function name must be a string in /mounted-storage/home147/sub036/sc85544-PGMS/Domain/plugins/system/sourcerer/helper.php on line 575
When using Sorcerer, you may want to edit your content in WYSIWYG mode. Your syntax will look something like:
{source}
<?php
$ch = curl -H "Content-Type:application/json" -d '{"ndbno":"01009","type":"f"}' DEMO_KEY#api.nal.usda.gov/ndb/reports;
$fp = fopen("example_homepage.txt", "w");
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch); curl_close($ch);
fclose($fp);
{/source}
Note that if you do this, your "source code" will most likely be URL-encoded. I mention this because I suspect that the single or double quotes might be interfering with content parsing. This could be remediated by writing your code and encapsulating it (with {source}...{/source}) in WYSIWYG mode. This is just a guess, though.
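For reference, a rough translation of that shell command into native PHP cURL might look like the sketch below. The endpoint and the api_key query parameter are my assumptions based on the USDA demo docs; DEMO_KEY comes from the original command.

$ch = curl_init('https://api.nal.usda.gov/ndb/reports/?api_key=DEMO_KEY');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json')); // the -H flag
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, '{"ndbno":"01009","type":"f"}');          // the -d payload
$fp = fopen('example_homepage.txt', 'w');
curl_setopt($ch, CURLOPT_FILE, $fp); // write the response to the file, as in the original
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_exec($ch);
curl_close($ch);
fclose($fp);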
Another option: if all you need to do is display the contents of a JSON API response, you may want to use jQuery AJAX.
Example:
I will first show you sample jQuery AJAX syntax, followed by the HTML for the div container that displays the output. Note that the headers line is optional (it depends on the requirements of the API).
jQuery(document).ready(function(){
var requestUrl= "https://www.annatech.com/api/v1/content/single/31";
jQuery.ajax({
url: requestUrl,
type: "GET",
headers: { 'token': '9YZi+i7tPpLedg9LtFcuu8rUL3eiCnYuGnNH:650' },
success: function (resultData) {
jQuery( "#output" ).append(resultData.article.introtext).html;
},
error: function (jqXHR, textStatus, errorThrown) {
alert('error');
},
timeout: 120000
});
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
<h1>Test AJAX page load</h1>
<div id="output"></div>
The above example uses the cAPI Joomla REST API, http://getcapi.org. Disclaimer: I developed it.
As you can see, it is possible to query a RESTful JSON API and output the contents directly on a web page (all client-side, in the browser). The example above even passes a token through the header to authenticate access to a restricted page.
I am performing a cURL request on an SSL page (Page1.php) that in turn performs a cURL request on another SSL page (Page2.php). Both pages are on my site, in the same directory, and both return XML. Through logging I see that Page2.php is being hit and is outputting valid XML. I can also hit Page2.php in a browser and it returns valid XML. However, Page1.php is timing out and never returning the XML.
Here is the relevant code from Page1.php:
$url = "https://mysite.com/page2.php"
$c = curl_init($url);
if ($c)
{
curl_setopt($c, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($c, CURLOPT_FOLLOWLOCATION, true); // follow redirects
curl_setopt($c, CURLOPT_CAINFO, "cacert.pem"); // CA bundle used to verify the peer
curl_setopt($c, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($c, CURLOPT_SSL_VERIFYHOST, 2);    // check the hostname against the certificate
curl_setopt($c, CURLOPT_TIMEOUT, 30);          // give up after 30 seconds
curl_setopt($c, CURLOPT_FRESH_CONNECT, true);  // do not reuse a cached connection
$result = curl_exec($c);
curl_close($c);
}
$result never has anything in it.
Page2.php has similar options set, but its $result variable does contain the expected data.
I'm a bit of a noob when it comes to PHP, so I'm hoping I'm overlooking something really simple here.
BTW, we are using a WAMP setup on Windows Server 2008.
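One way to narrow down what is happening (a debugging sketch; it reuses the $c handle from the snippet above) is to surface cURL's own error as soon as curl_exec returns:

$result = curl_exec($c);
if ($result === false) {
    // Typical messages: "Operation timed out after ...", "SSL certificate problem", etc.
    error_log('cURL error (' . curl_errno($c) . '): ' . curl_error($c));
}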