Amazon Simple Queue Service (SQS) - php

I created a queue in SQS and added two messages (serialized PHP arrays: array('filename' => 0, ...) and array('filename' => 1, ...)). I'm using the newest version of the Amazon SDK for PHP from their git repo.
The problem is that when I use the receive_message function with these options:
MaxNumberOfMessages = 10
VisibilityTimeout = 0 // other values don't change much
I get only the first message, repeated 10 times:
<ReceiveMessageResponse>
<ReceiveMessageResult>
<Message>
<MessageId>82523332-75e0-444d-ae8f-55ccd5580beb</MessageId>
<ReceiptHandle>
v5iiyMGi3b6RunVNVvjOQOV+ZDqRV7sNLzj5pUAEj1brIAkucpYiGaM8UIdOEis9Kouh4s+cAkSAd7MhbJKPGM6SdKYE993x2Lf/DwEbhkfmzRxOevzUsyJCrrVdTSTSx0cNUqqV6Cgr/Asi72t/UOhbdXhTp3kaCaZfd2weymg=
</ReceiptHandle>
<MD5OfBody>ced185420292fbd06b32ea6e35da3d21</MD5OfBody>
<Body>
a:3:{s:8:"priority";i:2;s:8:"filename";i:0;s:11:"task_ticket";s:0:"";}
</Body>
</Message>
<Message>
<MessageId>82523332-75e0-444d-ae8f-55ccd5580beb</MessageId>
<ReceiptHandle>
v5iiyMGi3b6RunVNVvjOQOV+ZDqRV7sNLzj5pUAEj1brIAkucpYiGaM8UIdOEis9Kouh4s+cAkSAd7MhbJKPGM6SdKYE993x2Lf/DwEbhkfmzRxOevzUsyJCrrVdTSTSx0cNUqqV6Cgr/Asi72t/UOhbdXhTp3kaCaZfd2weymg=
</ReceiptHandle>
<MD5OfBody>ced185420292fbd06b32ea6e35da3d21</MD5OfBody>
<Body>
a:3:{s:8:"priority";i:2;s:8:"filename";i:0;s:11:"task_ticket";s:0:"";}
</Body>
</Message>
...and so on, always with "filename";i:0
I'm 100% sure that there are only two messages in the queue (I deleted and recreated it to be sure), and yet I get only the first one, repeated many times. This changes from time to time, and sometimes I get the second one mixed into the list. If I leave VisibilityTimeout at the default of 3 (or another non-zero value), the first one disappears for a while (as expected) and then I get the second one repeated many times.
get_queue_size returns 2, which is true.
I also tried the Amazon Scratchpad and made the same API calls by hand, with the same results. So, is SQS broken, or am I doing something totally wrong?

I believe this is expected behavior because you have set VisibilityTimeout = 0. Typically you would set the timeout value to be the expected duration to process a message. You must call delete on a read message before the visibility timeout expires or the message will be automatically re-queued.
In more complex systems a separate thread might be used to extend the timeout period for a single message if the initial timeout was not long enough.
As it sounds like you are just starting, it's important that you write your message processing code to account for reading the same message multiple times. Not only can your message get re-queued automatically, but SQS will occasionally return a duplicate message.
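The usual pattern is receive, process, then delete within the visibility timeout. A minimal sketch in the style of the 1.x SDK the question uses, assuming the SDK is already loaded (the queue URL and the process_task() helper are placeholders, and the timeout value is only an example):

$sqs = new AmazonSQS();
$queue_url = 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue'; // placeholder

$response = $sqs->receive_message($queue_url, array(
    'MaxNumberOfMessages' => 10,
    'VisibilityTimeout'   => 30, // roughly the time needed to process one message
));

foreach ($response->body->ReceiveMessageResult->Message as $message) {
    $task = unserialize((string) $message->Body);
    process_task($task); // hypothetical worker function

    // Delete before the visibility timeout expires, otherwise the message becomes visible again.
    $sqs->delete_message($queue_url, (string) $message->ReceiptHandle);
}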

Related

Recorded files lost when user hangs up

I'm writing a voice application in which I want to save a recorded sound file.
My code is:
$file = $clientid . rand(5, 10);
$agi->stream_file("itc-Por-favor-indique-su-nombre-numero-de-telefono");
$sal = $agi->record_file($file, "WAV", "0123456789#*", -1, NULL, true);
if ($sal['result'] > 0) {
    $bodytext = "Reclamo de la mesa de ayuda, cliente no identificado por IVR.
\nNumero de Telefono: " . $agi->request['agi_callerid'];
}
Whenever the caller hangs up during the recording, the recording application cannot finish and the recorded file is lost.
Does anyone have any idea how to handle this record function when the caller hangs up?
You need to use the 'k' option to instruct Asterisk that you want to save the file on hangup WITHOUT confirmation.
core show application Record
-= Info about application 'Record' =-
[Synopsis]
Record to a file.
[Description]
If filename contains '%d', these characters will be replaced with a number
incremented by one each time the file is recorded. Use 'core show file formats'
to see the available formats on your system. User can press '#' to terminate the
recording and continue to the next priority. If the user hangs up during a
recording, all data will be lost and the application will terminate.
${RECORDED_FILE}: Will be set to the final filename of the recording.
${RECORD_STATUS}: This is the final status of the command
DTMF:A terminating DTMF was received ('#' or '*', depending upon option
't')
SILENCE:The maximum silence occurred in the recording.
SKIP:The line was not yet answered and the 's' option was specified.
TIMEOUT:The maximum length was reached.
HANGUP:The channel was hung up.
ERROR:An unrecoverable error occurred, which resulted in a WARNING to the
logs.
[Syntax]
Record(filename.format[,silence[,maxduration[,options]]])
[Arguments]
format
Is the format of the file type to be recorded (wav, gsm, etc).
silence
Is the number of seconds of silence to allow before returning.
maxduration
Is the maximum recording duration in seconds. If missing or 0 there is no
maximum.
options
a: Append to existing recording rather than replacing.
n: Do not answer, but record anyway if line not yet answered.
q: quiet (do not play a beep tone).
s: skip recording if the line is not yet answered.
t: use alternate '*' terminator key (DTMF) instead of default '#'
x: Ignore all terminator keys (DTMF) and keep recording until hangup.
k: Keep recorded file upon hangup.
y: Terminate recording if *any* DTMF digit is received.
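If you are driving this from PHP as in the question, one way is to run the Record() dialplan application from the AGI script instead of record_file(). A rough sketch, assuming the phpagi exec() helper and comma-separated application arguments (the filename and durations are just examples):

$file = $clientid . rand(5, 10);
$agi->stream_file("itc-Por-favor-indique-su-nombre-numero-de-telefono");
// Record(filename.format[,silence[,maxduration[,options]]])
// 'k' keeps the recorded file even if the caller hangs up.
$agi->exec('Record', $file . '.wav,3,300,k');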

MySQL large requests don't work in AJAX and need LIMIT, while working in straight PHP

I have a MySQL table which can contain up to 500,000 rows and I am querying it on my site without any LIMIT clause. When I do this without AJAX, it works normally, but with AJAX (again without setting a LIMIT) no data is returned. I checked the AJAX code and there is no mistake there. The thing is, when I add a limit, for example 45,000, it works perfectly; but above that, AJAX returns nothing.
Can this be an AJAX issue (I found nothing similar on the web), or something else?
EDIT
Here is the SQL query:
SELECT ans.*, quest.inversion, t.wave_id, t.region_id, t.branch_id, quest.block,
       quest.saleschannelid, b.division, b.regionsid, quest.yes, quest.no
FROM cms_vtb as ans
LEFT JOIN cms_vtb_question as quest ON ans.question_id=quest.id
LEFT JOIN cms_task as t ON t.id=ans.task_id
LEFT JOIN cms_wave as w ON w.id=t.wave_id
LEFT JOIN cms_branchemployees as b ON b.id=t.branchemployees_id
WHERE t.publish='1' AND t.concurent_id='' AND ans.answer<>'3' AND w.publish='1' AND quest.questhide<>1
ORDER BY t.concurent_id DESC
LIMIT 44115
The JavaScript that makes the AJAX call:
var url = '&module=ajax_typespace1&<?=$base_url?>';
$.ajax({
    url: 'moduls_ajax.php?' + url,
    cache: false,
    dataType: 'html',
    success: function(data)
    {
        $("#result").html(data);
    }
});
Apparently it was a server error; adding ini_set('memory_limit', '2048M'); helped a lot.
The reason this happens has to do with how you format the data sent to the client. Not having seen the code of moduls_ajax.php, I can only suspect that you are probably assembling the query result into a variable - possibly in order to json_encode it properly?
But doing so may result in a huge memory allocation, whereas if you send the data piece by piece to the Web server, you may need a fraction of the memory only.
The same thing happens on your plain web page, where the same query is either output directly or not encoded at all. In that case you'll find that when the row count grows to about two or three times the current value, the working web page will stop as well.
For example:
$result = array();
while ($tuple = $resultset->fetch()) {
    $result[] = $tuple;
}
print json_encode($result);
Instead, do something like this (of course, it's a bit more complicated than before):
// Since we know it is an array with numeric keys, the JSON
// will be of the format [ <item>, <item>, ..., <item> ]
$sep = '[';
while ($tuple = $resultset->fetch()) {
    print $sep . json_encode($tuple);
    $sep = ',';
}
print ']';
Pros and cons
This is about three times as expensive as a single function call, and can also yield slightly worse compression (the web browser may receive the data in chunks of different sizes and have more difficulty compressing them optimally; it's usually a matter of tenths of one percent). On the other hand, in some setups the output will reach the client browser much sooner and possibly prevent browser timeouts.
The memory requirements, if the tuples are all more or less the same size, are around two to three N-ths of what they were before: if you have one thousand rows and needed one gigabyte to process the query, three to four megabytes ought to suffice now. Of course, this also means that the more rows there are, the better... and the fewer the rows, the less point there is in doing this.
More of the same
The same approach holds for other kind of assembling (to HTML, CSV and so on).
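For example, a CSV export could be streamed the same way, writing each row straight to the output instead of accumulating everything first (a rough sketch; $resultset->fetch() is the same hypothetical cursor as above, and the column names are only illustrative):

$out = fopen('php://output', 'w');               // write straight into the response body
fputcsv($out, array('id', 'answer', 'wave_id')); // header row
while ($tuple = $resultset->fetch()) {
    fputcsv($out, $tuple);                       // one row at a time, constant memory
}
fclose($out);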
In some cases it may be helpful to dump the data into an external temporary file and send a Location header to have it loaded by the browser. Sometimes it is possible (if PHP is compiled as an Apache module on a Unix system) to output the file after having deleted it, so that it's not necessary to do garbage collection on the temporary files:
$fp = fopen($temporary_file, 'r');
unlink($temporary_file); // The file is deleted, but the handle remains valid
fpassthru($fp);          // On some platforms this "short-circuits" the browser to the file
                         // descriptor, so the PHP script may terminate while output
                         // continues normally.
die();

PHP json_encode() Results In HTTP 500

I have a PHP 5.3.3 array that I need to json_encode; encoding fails and Apache returns an HTTP 500.
The array contains Snort rules; until recently the database contained about 700 rules and, the other day, about 10,000 rules were added. That's when the web application broke. The application retrieves the data JSON-encoded via PHP, so I json_decode() it and then go through a foreach loop to "restructure" the data into a new, temporary array. As part of building the new array I run htmlentities() (with ENT_QUOTES) on the "options" part of each Snort rule (otherwise I have browser display issues). Once the new array is complete I...
$data = json_encode(array_values($temp));
...which is where my code used to work but is now failing.
If you're not familiar with Snort rules, examples of the options part of a rule are:
flow:established,to_server; content:"?sid="; http_uri; pcre:"/\?sid=[0-9A-F]{180}/U"; reference:url,doc.emergingthreats.net/2007142; classtype:trojan-activity;
...and...
flow:established,to_server; content:"|00 00 00 83|"; depth:4; content:"<CPU>"; content:"</CPU><"; distance:0; content:"<MEM>"; content:"</MEM><"; distance:0; reference:url,doc.emergingthreats.net/bin/view/Main/TrojanDropper497; classtype:trojan-activity;
The PHP documentation for json_encode() notes that the data must be UTF-8, which mine is (ASCII according to mb_detect_encoding()). I have seen other posts about JSON and HTTP 500 issues. Many are unrelated to my problem, though there was one which caught my attention and was easy to rule out: I added set_time_limit() even though this didn't seem to be the issue; the failure occurs very promptly.
I'm not sure what else to do to troubleshoot.
Your expertise is much appreciated.
Thanks.
=== EDIT ===
The code, with new data, works in a dev environment.
Dev (works)
* Apache/2.2.8
* PHP 5.2.5
Prod (doesn't work)
* Apache/2.4.2
* PHP 5.5.3
Because the error started when more records were added, my suspicion is the script was now using too much memory. Perhaps the memory limit is lower in production than it is in development.
Solutions for this are documented here: http://www.ducea.com/2008/02/14/increase-php-memory-limit/
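Assuming memory is in fact the limit, a quick way to test is to raise it for this one script and log how much it actually uses (the 256M value is only an example):

// Raise the limit for this request only; tune the value as needed.
ini_set('memory_limit', '256M');

// ... build $temp and encode it as before ...
$data = json_encode(array_values($temp));

// See how close the script gets to the limit.
error_log('Peak memory: ' . memory_get_peak_usage(true) . ' bytes');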

trying to fix a crontab file duplicate by PID table

I'm trying to develop a crontab task that checks my email every 5 seconds. Normally I could run it every minute instead of every 5 seconds, but while reading some other posts with no solution, I found one with the same problem as mine: the script, after a period of time, was stopping. This is not a real problem because I can configure a crontab task and use sleep(5). I also have the same 1and1 server as in the other question, which I'm including here:
PHP script stops running arbitrarily with no errors
The real problem I had when I tried to solve this via crontab is that every minute a new PID was created, so within an hour I could have almost 50 processes running at the same time, all doing the same thing.
Here I include the .php file called by crontab every minute:
date_default_timezone_set('Europe/Madrid');
require_once ( $_SERVER['DOCUMENT_ROOT'] . '/folder1/path.php' );
require_once ( CLASSES . 'Builder.php');
$UIModules = Builder::getUIModules();
$UIModules->getfile();
So I found a solution by checking the process table. The idea is that if two PHP processes are running, the previous process is still working, so this new one should simply exit without doing anything. If only one process is running, the previous one has finished, so this new one can do the work. The approach is something like the following code:
$var_aux = exec("ps -A | grep php");
if (!is_array($var_aux)) {
    date_default_timezone_set('Europe/Madrid');
    require_once ( $_SERVER['DOCUMENT_ROOT'] . '/folder1/path.php' );
    require_once ( CLASSES . 'Builder.php');
    $UIModules = Builder::getUIModules();
    $UIModules->getfile();
}
I'm not sure about the is_array($var_aux) condition, because $var_aux always contains only the last process line, so it is a string of 28 characters; but in this case we want more than one process, so the condition could even be changed to if (strlen($var_aux) < 34). Note: I've given the length some extra margin, because a PID can go beyond 9999, which adds one more character.
The main problem I found with this is that the exec() call only gives me the last process; in other words, it always returns a string with a length of 28 (the line for this very script).
I don't know if what I've proposed is a crazy idea, but is it possible to get the whole process table with PHP?
You can use a much simpler solution than emulating crontab in PHP: use crontab itself.
Make multiple entries so the check runs every 5 seconds, and have each one call your PHP program.
A good description of how to set up crontab to perform sub-minute actions can be found here:
https://usu.li/how-to-run-a-cron-job-every-x-seconds
This solution requires at most 12 processes starting every minute.
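For illustration, the crontab would contain one entry per 5-second offset within the minute, along these lines (the PHP binary and script paths are placeholders):

# One entry per 5-second offset, 12 entries in total.
* * * * * /usr/bin/php /path/to/check_mail.php
* * * * * sleep 5; /usr/bin/php /path/to/check_mail.php
* * * * * sleep 10; /usr/bin/php /path/to/check_mail.php
# ... and so on, up to ...
* * * * * sleep 55; /usr/bin/php /path/to/check_mail.php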

keep getting MAX timeout error on urlencode

I keep getting an error on this code:
<?php
function encode_object(&$obj) {
    foreach ($obj as &$current) {
        if (is_string($current) or is_object($current) or is_array($current)) {
            if (is_object($current) or is_array($current)) {
                encode_object($current);
            }
            if (is_string($current)) {
                $current = urlencode($current);
            }
        }
    }
}
?>
This code has worked before, but for some reason every time I run it I get:
Fatal error: Maximum execution time of 30 seconds exceeded in * on line 9
What I'm trying to do is be able to give it an object, search through it and encode all of the strings.
I have tried multiple times but keep getting the same error.
I am using:
Apache 2.2.15.0
PHP 5.3.3
Windows 7 Ultimate build 7600
EDIT:
The input I'm entering is an error that, after going through this function, is meant to be converted into JSON and read by JavaScript through AJAX.
The input in this case would be:
array("error"=>
array(
"error"=>"error",
"number"=>2,
"message=>"json_encode() [<a href='function.json-encode'>function.json-encode<\/a>]: recursion detected",
"line"=>22))
That is another error that I will worry about later, but it seems that when I put
$obj['error']['message'] = 'blah';
on the object before I send it, the code works fine. So there is something about
json_encode() [<a href='function.json-encode'>function.json-encode<\/a>]: recursion detected
that urlencode seems to be having a problem with.
If it has worked before, then it seems there's nothing wrong with the code, just that the objects you're sending it are large and are taking longer to process than the default execution time set in PHP.
The quick and dirty way to handle this is to use the ini_set() function:
ini_set("max_execution_time",840); (in this case, 840 is 840/60 or 14 minutes)
I've used this before on a query with a particularly large result-set, one which took at minimum five minutes to load, and build the HTML for the page.
Note that this will not work if your host has "Safe Mode" enabled; in that case you actually have to change the setting in php.ini. Otherwise, I use this quick and dirty workaround fairly often for ridiculously huge parsing/processing tasks.
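A minimal sketch of the quick fix, placed at the top of the script that does the heavy encoding (the 840-second value is just the example from above):

// Allow this request to run for up to 14 minutes.
// Has no effect under Safe Mode; change php.ini instead in that case.
ini_set('max_execution_time', 840);

encode_object($obj);        // the recursive encoder from the question
$data = json_encode($obj);  // then encode as before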
