I have a script that gets some values from an XML response.
Here is the code:
<?php
// Data
$xml_data = '<image_process_call><image_url>https://i.pinimg.com/originals/e4/41/54/e44154308e3466d987665c6d50887f06.jpg</image_url><methods_list><method><name>collage</name><params>template_name=Nun Face in Hole;</params></method></methods_list><result_format>jpg</result_format><result_size>800</result_size><template_watermark>false</template_watermark></image_process_call>';

// Settings
$app_id = '';
$key = '';
$sign_data = hash_hmac('SHA1', $xml_data, $key);

// Send request (urlencode the XML so spaces and special characters survive the query string)
$request_url = 'http://opeapi.ws.pho.to/addtask?data=' . urlencode($xml_data) . '&sign_data=' . $sign_data . '&app_id=' . $app_id;
$request_xml = simplexml_load_file($request_url);
$request_id  = strval($request_xml->request_id);

// isset() on a freshly assigned variable is always true, so check for a non-empty value instead
if ($request_id !== '') {
    $result_url = 'http://opeapi.ws.pho.to/getresult?request_id=' . $request_id;
    sleep(6);
    $result_xml    = simplexml_load_file($result_url);
    $result_status = strval($result_xml->status);
    $result_img    = strval($result_xml->result_url);

    if ($result_img !== '') {
        echo $result_img;
    } else {
        echo 'Result image not found';
    }
} else {
    echo 'Request ID not found';
}
?>
The problem is the time it takes to generate the second XML file. $result_xml takes a few seconds to become available, so I have to use the sleep(6) call.
If I remove it, I need to refresh the page (at least three times) to get the link to the generated image from the second XML.
Do you have an idea how to do this more professionally? I can't be sure that every image will be generated within 6 seconds (sometimes it takes less, sometimes more).
Is there any method for producing the result only after $result_img has been received? Thanks in advance for your help!
I think this is worth adding.
In practice, it looks like this:
The script makes the $request_xml call and the site returns this XML:
<image_process_response>
<request_id>2d8d4dec-4344-4df0-a1e1-0c8df304ad11</request_id>
<status>OK</status>
<description/>
<err_code>0</err_code>
</image_process_response>
The script takes the request_id from this XML and makes the $result_xml call. However, the image URL is not in that XML immediately; the script needs to wait a few seconds.
After refreshing the page three times, or by using the sleep(6) call, we finally get:
<image_process_response>
<request_id>2d8d4dec-4344-4df0-a1e1-0c8df304ad11</request_id>
<status>OK</status>
<result_url>
http://worker-images.ws.pho.to/i1/9F1E2EAF-5B31-4407-8779-9A85F35862D3.jpg
</result_url>
<result_url_alt>
http://worker-images.ws.pho.to.s3.amazonaws.com/i1/9F1E2EAF-5B31-4407-8779-9A85F35862D3.jpg
</result_url_alt>
<limited_image_url>
http://worker-images.ws.pho.to/i1/3F797C83-2C2E-401C-B4AF-C4D36BBD442D.jpg
</limited_image_url>
<nowm_image_url>
http://worker-images.ws.pho.to/i1/9F1E2EAF-5B31-4407-8779-9A85F35862D3.jpg
</nowm_image_url>
<duration>2950.879097ms</duration>
<total_duration>2956.124067ms</total_duration>
</image_process_response>
Edit:
After requesting the result immediately, I get XML like this:
<image_process_response>
<request_id>e615f0a1-ddee-4d81-94c4-a392f8f123e8</request_id>
<status>InProgress</status>
<description>The task is in progress, you need to wait for sometime.</description>
</image_process_response>
So this is the reason why I see a blank page...
Does anyone have an idea how to make the script keep re-requesting the second XML until it finds a result_url?
According to the Pho.to API, an "add task" request is a queued POST request.
My suggestion: send the getresult request in a while loop, but wait a shorter interval instead of a fixed 6 seconds. Check the status in image_process_response and keep looping while it is InProgress; after that you can safely read the processed image result (see the sketch below).
You may run into timeout issues caused by low timeout limits (DoS protection) if you run this script on a web server (via CGI/FastCGI). To handle that, put the "add task" step into a queue in your HTTP request and process it offline (that is, outside the web environment).
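Sticking with the simple in-request loop, a minimal sketch could look like this (it reuses the $request_id and getresult endpoint from the question; the 1-second interval and the 20-attempt cap are arbitrary choices):

// Poll getresult until the task is no longer InProgress (or we give up).
$result_url   = 'http://opeapi.ws.pho.to/getresult?request_id=' . $request_id;
$result_img   = '';
$max_attempts = 20;

for ($i = 0; $i < $max_attempts; $i++) {
    $result_xml = simplexml_load_file($result_url);

    if ($result_xml !== false && strval($result_xml->status) !== 'InProgress') {
        $result_img = strval($result_xml->result_url);
        break;
    }

    sleep(1); // wait a short, fixed interval before polling again
}

if ($result_img !== '') {
    echo $result_img;
} else {
    echo 'Result image not ready yet';
}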
I cannot understand why only part of the links are returned without the sleep(1) call. The script runs synchronously, and after $web_driver->executeScript the page should already be loaded, so all links should already be present.
<?php
require_once('vendor/autoload.php');

use Facebook\WebDriver\Remote\RemoteWebDriver;
use Facebook\WebDriver\WebDriverBy;

$caps = array("platform" => "SIERRA", "browserName" => "chrome", "version" => "69");
$web_driver = RemoteWebDriver::create(
    "http://localhost:4444/wd/hub",
    $caps
);

$web_driver->get("https://winestyle.ru/wine/gerard-bertrand/");
$web_driver->executeScript('window.scrollTo(0,document.body.scrollHeight);');
sleep(1);

$element = $web_driver->findElements(WebDriverBy::cssSelector(".bg-text[title='Артикул']"));
foreach ($element as $e) {
    echo $e->getText().'<br>';
}

$web_driver->quit();
?>
Returned without sleep:
Артикул:в101222
Артикул:в99863
Артикул:в99981
Артикул:в101225
Артикул:в101212
Артикул:в101224
Артикул:в101211
Артикул:в92722
Артикул:в92723
Артикул:в101208
Артикул:в101210
Артикул:в99979
Артикул:в101223
Артикул:в101220
Артикул:в101213
Артикул:в101221
Артикул:в101227
Артикул:в101218
Артикул:в101217
Артикул:в101215
Returned with sleep:
Артикул:в101222
Артикул:в99863
Артикул:в99981
Артикул:в101225
Артикул:в101212
Артикул:в101224
Артикул:в101211
Артикул:в92722
Артикул:в92723
Артикул:в101208
Артикул:в101210
Артикул:в99979
Артикул:в101223
Артикул:в101220
Артикул:в101213
Артикул:в101221
Артикул:в101227
Артикул:в101218
Артикул:в101217
Артикул:в101215
Артикул:в101226
Артикул:в99980
Артикул:в85254
Артикул:в66382
Артикул:в66386
Артикул:в66387
Артикул:в85253
Артикул:в101214
Артикул:в101219
Most probably the page implements lazy loading: extra elements are requested through AJAX only when the user scrolls to the end of the page.
And this is what's happening in your script: you've executed the JS to scroll to the end. If you call findElements at that moment, without the sleep, the page has no time to send the AJAX request, wait for and parse the response, and update the DOM, so you only get the elements that are currently present.
With the sleep you give it that chance.
Keep in mind that the hardcoded value of 1 second may sometimes work and sometimes not; if the backend takes more time to generate the response, or the network is slow, the new data may not arrive in time.
An alternative solution is to poll the DOM for the number of target elements every X milliseconds, and continue once that number increases. This must also handle the case where there are no more results (no more артикулы? :), so you can break out of the polling loop (for example, if there is a total results counter on the page, or similar).
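A rough sketch of that idea using php-webdriver's explicit wait, applied to the script from the question (the 10-second limit and 500 ms polling interval are arbitrary):

$selector = WebDriverBy::cssSelector(".bg-text[title='Артикул']");
$previous_count = count($web_driver->findElements($selector));

$web_driver->executeScript('window.scrollTo(0,document.body.scrollHeight);');

// Wait up to 10 seconds, checking every 500 ms, until more elements have appeared.
// If the count never grows, until() throws a timeout exception; that is the place
// to handle the "no more results" case.
$web_driver->wait(10, 500)->until(
    function ($driver) use ($selector, $previous_count) {
        return count($driver->findElements($selector)) > $previous_count;
    }
);

foreach ($web_driver->findElements($selector) as $e) {
    echo $e->getText().'<br>';
}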
I'm sorry if this sounds confusing, but I will try to explain it as well as possible.
In the controller I have a search function:
public function search() {
    /*
        I run my logic and get a few URLs from which I need to fetch further data.
        The URLs are saved in the $urls array:
            $urls[0] = "http://url1.com/search1";
            $urls[1] = "http://url2.com/search2";
        I then set this in the data variable and send it to the view
        so that it can be processed via AJAX.
        I tried using file_get_contents, but it fetches the URLs in series,
        one after the other. With 10 URLs (5 seconds per URL) the overall
        processing time increases drastically.
    */
    $data["urls"] = $urls;
    $resp = $this->load->view('ajaxer', $data, TRUE);
    /* based on $resp I need to run further business logic */
}
Now $resp is actually giving me only the HTML code. The HTML is never executed, and hence the AJAX is not run.
Any thoughts on how to execute this would be really helpful.
Regards,
Amit
Your code is absolutely OK, but your JavaScript is not getting any response data (only headers), because you are not returning any output.
If you want to "execute your HTML" you need to change the view line to this:
$this->load->view('ajaxer',$data);
or this:
$resp = $this->load->view('ajaxer',$data,TRUE);
echo $resp;
You forgot to echo the output in the controller. Apart from this, you need a few minor modifications in your function.
public function search() {
    /*
        I run my logic and get a few URLs from which I need to fetch further data.
        The URLs are saved in the $urls array:
            $urls[0] = "http://url1.com/search1";
            $urls[1] = "http://url2.com/search2";
        I then set this in the data variable and send it to the view
        so that it can be processed via AJAX.
        I tried using file_get_contents, but it fetches the URLs in series,
        one after the other. With 10 URLs (5 seconds per URL) the overall
        processing time increases drastically.
    */

    // Check whether the request came from AJAX. If not, echo a message and stop.
    // This prevents the function from being accessed outside of AJAX requests.
    if (!$this->input->is_ajax_request()) {
        echo "Only Ajax requests are allowed.";
        die;
    }

    $data["urls"] = $urls;
    $resp = $this->load->view('ajaxer', $data, TRUE);

    // Standard way to set the response for output in JSON format.
    // The 'status' flag lets the client check whether everything went correctly;
    // pass false when your feature's requirements call for it.
    $this->output->set_output(json_encode(array('status' => true, 'response' => $resp)));

    // Standard way to get the output set in the previous step.
    $string = $this->output->get_output();
    echo $string;
    exit();

    /* based on $resp I need to run further business logic */
}
The updated code is above. I hope this answers your question.
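As a side note: the comment in the question mentions that file_get_contents fetches the URLs one after another, which is where the long processing time comes from. If the goal is simply to avoid that serial delay, one server-side alternative (a sketch, not tied to CodeIgniter) is to fetch all the URLs in parallel with curl_multi:

// Fetch several URLs in parallel; $urls is the same array built in search().
function fetch_all_parallel(array $urls) {
    $mh = curl_multi_init();
    $handles = array();

    foreach ($urls as $key => $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 10); // arbitrary per-request timeout
        curl_multi_add_handle($mh, $ch);
        $handles[$key] = $ch;
    }

    // Run all handles until every transfer has finished.
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);

    $results = array();
    foreach ($handles as $key => $ch) {
        $results[$key] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);

    return $results;
}

$responses = fetch_all_parallel($urls);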
{
    "code": 420,
    "error_type": "OAuthRateLimitException",
    "error_message": "You have exceeded the maximum number of requests per hour. You have performed a total of 253 requests in the last hour. Our general maximum request limit is set at 30 requests per hour."
}
I just noticed that a client's website I look after has stopped showing its Instagram feed, so I loaded the feed URL straight into the browser and got the error above. I don't think there should have been 253 requests in an hour, but while Googling this problem I came across someone saying it happens because the API is logged in on every request. Sadly, I have "inherited" this code and haven't really worked with the Instagram API before, apart from fixing an error on this same website once before.
The client's site runs on WordPress, so I have wrapped the code that gets the images in a function:
function get_instagram($user_id = USERID, $count = 6, $width = 190, $height = 190) {
    $url = 'https://api.instagram.com/v1/users/'.$user_id.'/media/recent/?access_token=ACCESSTOKEN&count='.$count;

    // Also, perhaps you should cache the results, as the Instagram API is slow
    $cache = './'.sha1($url).'.json';
    if (file_exists($cache) && filemtime($cache) > time() - 60*60) {
        // If a cache file exists and it is newer than 1 hour, use it
        $jsonData = json_decode(file_get_contents($cache));
    } else {
        $jsonData = json_decode(file_get_contents($url));
        file_put_contents($cache, json_encode($jsonData));
    }

    $result = '<a style="background-image:url(/wp-content/themes/iwear/inc/img/instagram-background.jpg);" target="_BLANK" href="http://www.instagr.am" class="lc-box lcbox-4 instagram">'.PHP_EOL.'<ul>'.PHP_EOL;
    foreach ($jsonData->data as $key => $value) {
        $result .= "\t".'<li><img src="'.$value->images->low_resolution->url.'" alt="'.$value->caption->text.'" data-width="'.$width.'" data-height="'.$height.'" /><div class="lc-box-inner"><div class="title"><h2>images</h2></div><div class="description">'.$value->caption->text.'</div></div></li>'.PHP_EOL;
    }
    $result .= '</ul></a>'.PHP_EOL;

    return $result;
}
But as I said, this code has stopped working. Is there any way I could change it so that it actually works? I also notice there is mention of a cache in the (probably copied) Instagram code, but it isn't actually caching, so that could also be part of the solution.
Thanks
Try registering a new client in Instagram and then change
$url = 'https://api.instagram.com/v1/users/'.$user_id.'/media/recent/?access_token=ACCESSTOKEN&count='.$count;
to
$url = 'https://api.instagram.com/v1/users/'.$user_id.'/media/recent/?client_id=CLIENT_ID&count='.$count;
where CLIENT_ID is the client ID of your newly created client.
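On the caching point from the question, here is a sketch of one way to make the existing cache logic also protect against the rate limit: only overwrite the cache with responses that actually contain data, and fall back to the stale cache otherwise (the error shape is taken from the JSON at the top of the question):

$cache = './'.sha1($url).'.json';

if (file_exists($cache) && filemtime($cache) > time() - 60*60) {
    // Cache file exists and is newer than 1 hour: use it and skip the API entirely.
    $jsonData = json_decode(file_get_contents($cache));
} else {
    $fresh = json_decode(@file_get_contents($url));

    if ($fresh !== null && isset($fresh->data)) {
        // Looks like a normal response: use it and refresh the cache.
        $jsonData = $fresh;
        file_put_contents($cache, json_encode($jsonData));
    } elseif (file_exists($cache)) {
        // The API failed or returned an error (e.g. OAuthRateLimitException):
        // fall back to the stale cache instead of showing nothing.
        $jsonData = json_decode(file_get_contents($cache));
    } else {
        $jsonData = null; // nothing cached and nothing returned
    }
}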
I am working on a project in which I need to make an announcement when a txt file is created on the server, and I need to notify all users through an audio announcement. The audio should play at once in every client browser that is currently on the pages, and the playback needs to be synchronized as accurately as possible.
The announcement is composed of multiple audio files (a playlist).
After the announcement has been played on all active clients, the txt file will be deleted, and the server will go back to waiting/looking for another txt file.
for example:
Client 1 - server time: 19:22:01, received the announcement and playing the audio
Client 2 - server time: 19:22:01, received the announcement and playing the audio
Any recommendations on how to trigger the announcement at once on all clients? Any technique: a MySQL database, Flash, applets, HTML5 audio, jQuery, etc.?
Thanks.
I wrote a long-polling technique with plain PHP, Ajax and MySQL.
The PHP code is as follows:
$timeout = 600; // 600 polls x 100 ms = 60 seconds
while ($timeout > 0) {
    $res = db_query("QUERY");
    $return_value = create_value_from_result($res);

    // see if the result changed
    $db_hash = md5($return_value);
    if ($_SESSION['hash'] == $db_hash) {
        // the result didn't change -- sleep and poll again
        // usleep takes microseconds, 100000 is 100 milliseconds
        // this is the default database polling interval
        usleep(100000);
        $timeout--;
    } else {
        // the result changed -- reply with the result and store the new hash
        $timeout = 0;
        $_SESSION['hash'] = $db_hash;
    }
}
echo json_encode($return_value); // output the result as JSON for the Ajax caller
And the JavaScript is simple Ajax (Dojo in this case):
function longpoll() {
    dojo.xhrPost({
        url: 'longpolling.php',
        load: function (data, ioArgs) {
            data = dojo.fromJson(data);
            do_magic(data);
            // use setTimeout to avoid stack overflows
            // we could also use a while(1) loop,
            // but it might give browser errors such as 'script is
            // running too long' (not confirmed)
            setTimeout(longpoll, 0);
        }
    });
}
You need the 60-second timeout to make sure the browser doesn't time out on the Ajax call.
This way, as soon as the result of QUERY changes (a record gets inserted, or an existing record is updated), the PHP call returns and the Ajax call gets its result.
I have a script that runs every two minutes for a "Tweet-getter" application. In a nutshell it puts tweets onto Facebook. Every now and then it hiccups and despite my error checking, reposts old tweets continuously, every two minutes (the cycle of it being run as a cron job). I have a log.txt that in theory would help me determine what's going on here, but the problem is it isn't being written to every time the job runs. Here's the code:
<?php
$start_time = microtime();
require_once // a library and config
$facebook = new Facebook($api_key, $secret);
get_db_conn(); // returns $conn

$hold_me = mysql_fetch_array(mysql_query("SELECT * FROM `stats`"));
$last_id_posted = $hold_me[0]; // the status # of the most recently posted tweet

$me = "mytwittername";
$ch = curl_init("http://twitter.com/statuses/friends_timeline.xml?since_id=$last_id_posted");
curl_setopt($ch, CURLOPT_USERPWD, $me.":".$pw);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$xs = curl_exec($ch);
$data = new SimpleXMLElement($xs);
$latest_tweet_id = $last_id_posted;
$uid = get_uid(); // returns an array of facebookID->twittername
$user_count = count($uid);
curl_close($ch);

$total_tweets = 0;
$posted_tweets = 0;
foreach ($data->status as $tweet) {
    $name = strtolower($tweet->user->screen_name);
    if (array_key_exists($name, $uid)) {
        $total_tweets += 1;
        // $name = Twitter Name
        $message = $tweet->text;
        $fbid = $uid[$name];
        theposting($name, $message, $fbid); // posts tweet to facebook
        $this_id = $tweet->id;
        if ($this_id > $latest_tweet_id) {
            $latest_tweet_id = $this_id;
        }
    }
}

mysql_query("UPDATE stats SET lasttweet='$latest_tweet_id'");
commit_log(); // logs to a txt file how many tweets posted, how many users, execution duration, and time of execution
?>
So in theory the log is a string of "Monday 24th of August 2009 10:41:32 PM. Called all since # 3326415954. Updated to # 3526415953. 8 users. Took 0.086057 milliseconds. Posted 14 out of 20 tweets." lines. Occasionally though, it will skip two or three hours at a time, and in that time period it will "spam" people's facebook pages with multiple copies of the same tweet. I can't tell what might be breaking my code, but my suspicion is bad XML from twitter. All in all it's relatively low-traffic on my end, so I doubt I'm overloading my server or anything. The log.txt is 50kb right now, and last "broke" at ~35kb, so it's not a huge file slowing it down... Any thoughts would be appreciated!
The first thing I would do to improve the script is check for cURL errors with curl_errno and curl_error. If your malformed-XML theory is correct, chances are that whatever goes wrong will show up there. You may also want to specify a timeout for both cURL and PHP.
I've not used the SimpleXML library, but it does look as if there is a check for malformed XML: it produces an E_WARNING if the document is not well-formed.
Those two bits should eliminate any dodgy data.
Without seeing the other functions it's a bit hard to spot any other places where it could be going wrong.
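A sketch of what those checks could look like, dropped in around the existing curl_exec call (the timeout values are arbitrary):

curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10); // give up connecting after 10 seconds
curl_setopt($ch, CURLOPT_TIMEOUT, 30);        // give up on the whole request after 30 seconds

$xs = curl_exec($ch);

if ($xs === false) {
    // Log the cURL error and bail out instead of feeding bad data to the XML parser.
    error_log('Twitter request failed: ' . curl_errno($ch) . ' ' . curl_error($ch));
    exit;
}

// libxml can be told to collect errors instead of emitting E_WARNINGs.
libxml_use_internal_errors(true);
$data = simplexml_load_string($xs);
if ($data === false) {
    error_log('Malformed XML from Twitter: ' . print_r(libxml_get_errors(), true));
    exit;
}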
You should also test that your database query was successful.
Try selecting only $last_id_posted in your SQL SELECT, since you are throwing away the rest of the row anyway.
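For example (a sketch; it assumes the column is called lasttweet, as in the UPDATE statement in your script):

$res = mysql_query("SELECT lasttweet FROM `stats`");
if ($res === false) {
    error_log('stats query failed: ' . mysql_error());
    exit;
}
$row = mysql_fetch_array($res);
$last_id_posted = $row[0];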
$last_id_posted has no default value. What is the expected result of ?since_id=
Serialize the state of your DB/cURL response and XML, and dump it into your log file.
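A minimal sketch of that kind of dump, assuming log.txt is the same file commit_log() writes to:

// Dump the raw cURL response and the parsed XML so a bad run can be reconstructed later.
$dump = date('c') . "\n"
      . 'last_id_posted: ' . var_export($last_id_posted, true) . "\n"
      . 'raw response: '   . var_export($xs, true) . "\n"
      . 'parsed XML: '     . print_r($data, true) . "\n\n";
file_put_contents('log.txt', $dump, FILE_APPEND);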