I am hitting a strange error when I call $sns->publish (PHP): it never returns. I am not sure whether it dies silently, but I could not catch an exception or get a return code.
I was able to track this down to cases where the device token's endpoint is already disabled in the SNS console. It gets disabled on the initial call, presumably due to GCM reporting that the token is invalid.
What am I doing wrong, and how can I prevent the problem? I do not want to check every endpoint for being enabled, since I may be pushing to only 10 out of 1000. However, I definitely want my push loop to continue executing.
Any thoughts? The AWS team forum seems useless; it has been weeks since the original reply by an AWS team member asking for code, with no response since then.
You can check whether the endpoint is disabled before sending the push notification:
$arn_code = ARN_CODE_HERE;
$arn_arr = array("EndpointArn" => $arn_code);
$endpointAtt = $sns->getEndpointAttributes($arn_arr);
//print_r($endpointAtt);
if ($endpointAtt && $endpointAtt['Attributes']['Enabled'] !== 'false')
{
    ....PUBLISH CODE HERE....
}
It will not stop the execution.
Hope it helps.
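Alternatively, you can guard the publish call itself so a bad endpoint cannot break the loop. A minimal sketch, assuming the AWS SDK for PHP v2 (which surfaces a disabled endpoint as an SnsException with an "EndpointDisabled" error code); the $endpoints array and $message are illustrative placeholders:

```php
<?php
use Aws\Sns\Exception\SnsException;

// $endpoints is an illustrative list of EndpointArn strings.
foreach ($endpoints as $endpointArn) {
    try {
        $sns->publish(array(
            'TargetArn' => $endpointArn,
            'Message'   => $message,
        ));
    } catch (SnsException $e) {
        // e.g. EndpointDisabled after GCM invalidates the token;
        // log it and keep pushing to the remaining endpoints.
        error_log($e->getMessage());
        continue;
    }
}
```

This avoids one getEndpointAttributes round trip per recipient, at the cost of only learning about a disabled endpoint when the publish fails.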
I want to read all messages from an Azure Service Bus queue.
I have followed the instructions from the link below:
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-php-how-to-use-queues
Currently it fetches one message, but I want to fetch all messages from the Service Bus queue.
Thanks in advance.
I don't think it is possible to specify the number of messages you wish to read from the queue (at least with the PHP SDK; it is certainly possible with the .NET SDK). Essentially, receiveQueueMessage is a wrapper over either the Peek-Lock Message (Non-Destructive Read) or the Receive and Delete Message (Destructive Read) REST API method (depending on the configuration), and both of them return only a single message.
One way to solve this is to run your code in a loop until you no longer receive a message back from the queue. The issue you could run into with this is that you may get back duplicate messages: once the lock acquired by the peek-lock method expires, the message becomes visible again.
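That loop might look like the sketch below, using the WindowsAzure PHP SDK from the linked guide. The connection string and queue name are placeholders; receive-and-delete mode sidesteps the duplicate-message issue, at the cost of losing a message if processing fails after the read:

```php
<?php
require_once 'vendor/autoload.php';

use WindowsAzure\Common\ServicesBuilder;
use WindowsAzure\ServiceBus\Models\ReceiveMessageOptions;

$serviceBus = ServicesBuilder::getInstance()
    ->createServiceBusService('<Your_Connection_String>');

$options = new ReceiveMessageOptions();
$options->setReceiveAndDelete(); // destructive read: no lock to expire

// receiveQueueMessage returns null once the queue has nothing left
// (after the server-side receive timeout elapses).
while (($message = $serviceBus->receiveQueueMessage('myqueue', $options)) !== null) {
    echo $message->getBody(), "\n";
}
```

If you need at-least-once processing instead, keep the default peek-lock mode and call deleteMessage after each message is handled.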
Reading all the messages in a queue is possible with the Peek operation of Service Bus.
MessagingFactory messagingFactory = MessagingFactory.CreateFromConnectionString(<Your_Connection_String>);
var queueClient = messagingFactory.CreateQueueClient(<Your_Queue_Name>, ReceiveMode.PeekLock); // ReceiveMode is PeekLock by default, so this argument is optional.
BrokeredMessage message = null;
while (true)
{
    message = queueClient.Peek(); // You have read one message
    if (message == null) // Continue until the queue returns no message
        break;
}
Note that Peek can also return expired or locked messages. Give "Message browsing" a quick read to see whether this suits your requirement.
Facebook made a recent change to Leadgen Webhooks where you have to respond with "200 OK" for each lead that comes in, or they'll keep pinging you with that lead. The problem is, they don't mention whether there's a way to do that, and I've been trying to send a response, unsuccessfully so far.
What I've tried so far:
status_header(200);
http_response_code(200);
header($_SERVER['SERVER_PROTOCOL'] . ' 200 OK');
header($_SERVER['SERVER_PROTOCOL'] . ' 200 OK', true, 200);
function handle_200() {
    status_header(200);
    header($_SERVER['SERVER_PROTOCOL'] . ' 200 OK');
    error_log('test'); // This never even runs
}
add_action('send_headers', 'handle_200', 1);
None of the above has worked. I've also checked that, at the point where I was trying to change the status header, the headers had not yet been sent: headers_sent() returned FALSE.
I've also tried checking if the status header can be seen through either one of these:
error_log(print_r(getallheaders(), true));
error_log(print_r(headers_list(), true));
Neither printed the status header, but if I do something like header('foo: bar'), that does show up in the headers list.
I'm at a complete loss, any help to point in the right direction would be much appreciated.
That being said, it might be relevant to mention that this script runs in the theme functions of a WordPress environment. Furthermore, the script itself (where I'm making these changes) runs between ob_start() and ob_end_clean() (not sure if that's relevant).
EDIT: I just found out that when I cause a fatal error that ends my script prematurely, Facebook receives a 200 OK response (ironically). This makes me think the problem is caused by running the script between ob_start() and ob_end_clean(). Could that be delaying the header response?
EDIT 2: This remains unresolved. I have a temporary solution that successfully sends a response about 35% of the time. I'm at a point where I get a lead from the webhook, retrieve the info from the lead, and then do a SQL query to check whether that lead has been received before; if it has, exit(200). As simple as that. How could that take time for leads that already exist? It wouldn't! Yet my server (according to Facebook's Ad tool) still times out (without a response) 50% of the time. Something doesn't add up.
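One way to test the output-buffering theory is to drain every buffer and finish the request before any slow work runs. A minimal sketch, assuming the site runs under PHP-FPM (fastcgi_finish_request() is FPM-only; under mod_php the flush calls alone may be enough):

```php
<?php
// Tell Facebook "got it" immediately, then keep processing.
ignore_user_abort(true);       // keep running after the client disconnects
http_response_code(200);

// Drain any ob_start() buffers that would hold the response back.
while (ob_get_level() > 0) {
    ob_end_flush();
}
flush();

if (function_exists('fastcgi_finish_request')) {
    fastcgi_finish_request();  // response is sent; the script keeps running
}

// ... process the lead here, after the 200 has already gone out ...
```

Note that ob_end_clean() discards the buffer instead of sending it, which would fit the symptom of the response never arriving.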
When consuming data from an ActiveMQ queue, I'm running into the following problem with this code:
$stomp = new Stomp($activeMQURI);
$stomp->subscribe($queue);
while ($stomp->hasFrame()) {
    $frame = $stomp->readFrame();
    if ($frame) {
        $stomp->ack($frame);
    }
}
It only loops through about 1-10 messages before $stomp->hasFrame() returns false. The problem is that there are still 10k messages in the queue!
When I put a delay in after the acknowledgment, everything works as expected:
$stomp = new Stomp($activeMQURI);
$stomp->subscribe($queue);
while ($stomp->hasFrame()) {
    $frame = $stomp->readFrame();
    if ($frame) {
        $stomp->ack($frame);
        sleep(1);
    }
}
I was thinking this happens because the ActiveMQ server has not had a chance to process the ack before the consumer (my code) requests another frame. Can anyone explain the real reason this is happening, and maybe suggest a better fix than sleep?
You don't really specify which client you are using, so here's a general answer. Most clients provide a blocking receive call, with either a timed or an infinite wait, which returns when a message arrives (or indicates failure in the timed case). The speed at which the broker dispatches messages to your client depends on a great many factors, such as the number of consumers on the destination, the prefetch size set by each consumer, the speed of the network, etc. Your code should not expect immediate turnaround, and it should be able to deal with a lull in message traffic. That's about as good an answer as I can give without knowing more about your setup.
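If this is the PECL stomp extension, one sleep-free approach along those lines is to stop polling hasFrame() and instead block on readFrame() with a read timeout (the 5-second value below is an arbitrary choice):

```php
<?php
$stomp = new Stomp($activeMQURI);
$stomp->setReadTimeout(5);          // block up to 5s waiting for a frame
$stomp->subscribe($queue);

// readFrame() returns false only after no frame arrives within the
// timeout, so a brief dispatch lull no longer ends the loop early.
while (($frame = $stomp->readFrame()) !== false) {
    $stomp->ack($frame);
}
```

This way the loop ends when the queue has genuinely been quiet for the whole timeout, not the instant the local buffer happens to be empty.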
I'm working on a script where one server posts to us, we post to a second server, and in theory we receive a response from that second server and reply back to the server that posted to us. However, sometimes the server I'm posting to doesn't respond. The big problem is that the server that posted to us will assume the transaction went through successfully unless it hears otherwise. So essentially I need a way to send a response back to the first server if I never hear back from the server I'm posting to. I think register_shutdown_function could work for me, but I don't know how to set it up so that it only runs on error.
Thanks in advance
register_shutdown_function() should work for you. Just use a global variable to determine whether the remote server responded or not:
<?php

function on_shutdown() {
    if (!$GLOBALS['complete']) {
        // Handle server not responding here
    }
}

// Suppress "FATAL ERROR" message when time limit reached
error_reporting(0);

$complete = FALSE;
register_shutdown_function('on_shutdown');

// Code which posts to remote server here

$complete = TRUE;
This means that $complete will still be false if the code which posts to the remote server does not complete, so you can just check the truthiness of $complete in the function that fires on shutdown.
I have noticed that a few websites, such as hypem.com, show a "You didn't get served" error message when the site is busy, rather than just letting people wait, time out, or refresh, aggravating what is probably a server load issue.
We are too loaded to process your request. Please click "back" in your
browser and try what you were doing again.
How is this achieved before the server becomes overloaded? It sounds like a really neat way to manage user expectations if a site gets overloaded, while also giving the site time to recover.
Another option is this:
$load = sys_getloadavg();
if ($load[0] > 80) {
    header('HTTP/1.1 503 Too busy, try again later');
    die('Server too busy. Please try again later.');
}
I got it from PHP's site, http://php.net/sys_getloadavg, although I'm not sure what the values returned by sys_getloadavg represent.
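For reference, sys_getloadavg() returns the 1-, 5-, and 15-minute load averages (roughly, the number of processes running or waiting for CPU), not a percentage, so a threshold of 80 would only ever trip on a very large machine. A sketch that scales the threshold by core count instead (the nproc call is a Linux-only assumption, and 0.9 is an arbitrary cutoff):

```php
<?php
$load  = sys_getloadavg();                         // [1min, 5min, 15min]
$cores = (int) trim((string) shell_exec('nproc')); // Linux-only assumption
if ($cores < 1) {
    $cores = 1;
}

// A 1-minute load of 1.0 per core means the CPUs are fully busy.
if ($load[0] / $cores > 0.9) {
    header('HTTP/1.1 503 Service Unavailable');
    header('Retry-After: 60');
    die('Server too busy. Please try again later.');
}
```
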
You could simply create a 500.html file and have your webserver serve it whenever a 50x error is thrown.
I.e. in your apache config:
ErrorDocument 500 /errors/500.html
Or use a PHP shutdown function to check whether the request timeout (which defaults to 30s) has been reached and, if so, redirect to or render something static (so that rendering the error itself cannot cause problems).
Note that most sites where you'll see a "This site is taking too long to respond" message are effectively generating that message with javascript.
This may be to do with the database connection timing out, but that assumes that your server has a bigger DB load than CPU load when times get tough. If this is the case, you can make your DB connector show the message if no connection happens for 1 second.
You could also use a quick query to the logs table to find out how many hits/second there are and automatically not respond to any more after a certain point in order to preserve QOS for the others. In this case, you would have to set that level manually, based on server logs. An alternative method can be seen here in the Drupal throttle module.
Another alternative would be to use the Apache status page to get information on how many child processes are free, and to throttle if there are none left, as per @giltotherescue's answer to this question.
You can restrict the maximum number of connections in the Apache configuration too...
Refer
http://httpd.apache.org/docs/2.2/mod/mpm_common.html#maxclients
http://www.howtoforge.com/configuring_apache_for_maximum_performance
This is not a strictly PHP solution, but you could do like Twitter:
- serve a mostly static HTML and JavaScript app from a CDN or another server of yours
- make the calls to the actual heavy server-side (PHP, in your case) functions/APIs in AJAX from one of your static JS files
- set a timeout on those AJAX calls and show a "Seems like loading tweets may take longer than expected"-style notice.
You can use the PHP tick function to detect when the server hasn't responded within a specified amount of time, then display an error message. Basic usage:
<?php
$connection = false;

// Runs on every tick; aborts if the app has not finished
// within $connectionWaitingTime seconds.
function checkConnection($connectionWaitingTime = 3)
{
    global $time, $connection;
    if (($t = time() - $time) >= $connectionWaitingTime && !$connection) {
        echo "<p>Server not responding for <strong>$t</strong> seconds!!</p>";
        die("Connection aborted");
    }
}

register_tick_function("checkConnection");
$time = time();

declare (ticks=1)
{
    while (true) {} // simulates a loaded server
    $connection = true;
}
The while(true) is just there to simulate a loaded server.
To implement the script on your site, remove the while statement and add your page logic, e.g. dispatch an event or a front controller action.
The $connectionWaitingTime parameter of checkConnection is set to time out after 3 seconds, but you can change it to whatever you want.