I'm implementing AMF service methods for a Flash front-end. Normally things work fine, but we've found that if two methods are called one right after the other, the second call returns null to the Flash front-end even though the method actually completes successfully on the PHP end (to verify this, I dump the return data to a file right before returning it).
Has anyone seen this behavior before? Is it some setting in ZendAMF?
Maybe wait for confirmation that the first method has finished before calling the second?
I use ZendAMF too. I have noticed that if one call fails, it will trigger a failure message for any other batched calls (Async tokens can be used to get around this).
I would try sending each call one at a time to find out which one is failing, if any. Personally, I use software called Charles, which is an HTTP proxy that lets me see the contents and error messages of any AMF calls I perform. You could also use Wireshark; either way, you would be able to see the exact request sent and any error messages being thrown by your backend.
Are you using any transactions in your code (like Doctrine)? Sometimes the code will pass tests and appear to write correctly, but choke when the transaction gets closed and end up throwing an error.
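For illustration, here's a rough sketch of that pitfall in Doctrine 1.x style ($record and the connection setup are assumed to exist): the save appears to succeed, but the failure only surfaces when the transaction is committed.

$conn = Doctrine_Manager::connection();
try {
    $conn->beginTransaction();
    $record->save();   // appears to succeed while the transaction is open
    $conn->commit();   // the real failure can surface here instead
} catch (Exception $e) {
    $conn->rollback();
    error_log($e->getMessage()); // log it, so the null AMF response isn't your only clue
    throw $e;
}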
It actually turns out the Flash side was using the same connection for two function calls. Making a separate connection for each call has solved the problem.
I want to send an email right after a user is registered, without the delay caused by Mail::send().
I don't want the email sent a few seconds later; I want it sent right away, but without blocking the controller that sends it. And for the love of Laravel, please don't suggest Queues & Jobs.
You can use notifications to avoid jobs, but you still need to set up queues for them to be asynchronous.
$user->notify(new OrderPurchased($order));
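For context, a rough sketch of what that notification class might look like (the class name comes from the one-liner above; the mail content is a placeholder). Implementing ShouldQueue is what makes Laravel deliver it off the request cycle:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Notifications\Messages\MailMessage;
use Illuminate\Notifications\Notification;

class OrderPurchased extends Notification implements ShouldQueue
{
    use Queueable;

    private $order;

    public function __construct($order)
    {
        $this->order = $order; // the order passed in from the controller
    }

    public function via($notifiable)
    {
        return ['mail'];
    }

    public function toMail($notifiable)
    {
        // Placeholder content; build the real message from $this->order.
        return (new MailMessage)
            ->subject('Order received')
            ->line('Thanks for your purchase!');
    }
}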
You can use partial response in this case (if you are using the Apache web server).
Refer to this for more info:
how to call function in background codeigniter
So far I have found two ways to deal with this:
fastcgi_finish_request:
I can use fastcgi_finish_request(), but only after I have returned/echoed the view (I'm using Laravel), so I have to restructure my code to handle different situations, and of course use it in conjunction with error_get_last() in case anything goes wrong.
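A minimal sketch of that flow in plain PHP (assuming PHP-FPM, since fastcgi_finish_request() only exists there; $renderedView and the mail details are placeholders):

echo $renderedView;          // send the response body to the client
fastcgi_finish_request();    // flush and close the connection; the browser stops waiting

// Everything below runs after the user already has the page.
mail($to, $subject, $body);  // the slow work happens here

if ($error = error_get_last()) {
    // The client is gone, so log failures instead of displaying them.
    error_log('Post-response failure: ' . $error['message']);
}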
laravel-async-mail:
Surprisingly, I found that on Packagist.com. Looking at the source code, it starts a new process to run an artisan command. (I hope someone can give an opinion on whether this has downsides in a production environment or not.)
My choice for now is this one, since it does exactly what I'm looking for.
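The underlying trick looks roughly like this (a sketch assuming a Unix host; the email:send artisan command and $userId are hypothetical). Backgrounding the command and discarding its output means exec() returns immediately:

$cmd = 'php artisan email:send ' . escapeshellarg($userId);
exec($cmd . ' > /dev/null 2>&1 &'); // fire-and-forget: exec() does not wait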
I have a method that calls two services from PHP at the same time. Due to the multi-tasking abilities of Flex, I think that each service is called in a different thread.
My problem is: both services return an Array of Objects from the database, but the second service feeds a DataGrid that has a handler for each record. This handler compares data from both Arrays, and when Flex finishes the second call before the first, the handler tries to compare data with a null object (the PHP service hasn't responded yet).
Any ideas?
EDIT:
On the day that I posted this question, some guy gave me an amazing idea, but sadly it seems he deleted his post; I don't know why.
I kept his idea in mind and found a solution that combines it with my design pattern.
He told me to set a flag indicating whether the data had already been loaded.
So here is what I'm doing now:
I call the first service;
I call the second service;
On the result of the first service, I check the flag on the second service; if it's true, the second dataset was already loaded, so I can just store my data in the DataGrid and the handler can run.
If the flag is false, the second dataset wasn't loaded yet, so instead of storing the data in the official dataProvider, I store it in a _temp dataProvider that is not bound to the dataGrid. In this case, when the second dataset is loaded, an event is dispatched to the first service telling it to take the _temp dataProvider and copy it to the official dataProvider.
Personally, I liked the solution, and it doesn't break the Table Data Gateway design pattern.
Thanks everyone for the help.
Due to the multi-tasking abilities of Flex, I think that each service is called in a different thread.
What makes you think Flex supports multi-threading? It really doesn't; everything runs on a single thread.
However, your calls are asynchronous in that when they are sent, the program does not stop to wait for an answer, but listens for a completion event.
Your solution is simple: Send only the first request, wait for it to complete, and then send the second request in the completion handler.
EDIT
To preserve your design pattern, you can apply two different strategies:
Use the Business Delegate pattern: Have a custom class act as the gateway and encapsulate the actual connections to your services. It could then dispatch its own COMPLETE event to trigger the handlers you have. To the DataGrid, it would appear like a single asynchronous call.
Use the Session Facade pattern: Create a single service on the server side, which accepts a single request, calls the referenced services locally, and returns the combined data for both services in a single response.
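As a rough sketch of the Session Facade idea on the PHP side (the class and method names here are hypothetical), a single AMF-exposed method aggregates both result sets so the Flex client makes only one asynchronous call:

class ReportFacadeService
{
    public function getCombinedData()
    {
        // Call the two existing services locally, on the server.
        $first  = (new FirstService())->getRecords();   // assumed local service
        $second = (new SecondService())->getRecords();  // assumed local service

        // Return both result sets in one response; the Flex handler can
        // compare them safely, since both are guaranteed to be present.
        return array('first' => $first, 'second' => $second);
    }
}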
Flex doesn't have multi-threading, but it can have multiple asynchronous calls in flight at once. You can deal with not knowing which will return first by having each result handler check that both services have returned before proceeding into code that depends on both.
Let us assume you have two services:
FirstService
SecondService
// Assumes the result handlers below are wired up, e.g. in MXML:
// <mx:RemoteObject id="myService" destination="...">
//     <mx:method name="FirstService" result="firstServiceResult(event)"/>
//     <mx:method name="SecondService" result="secondServiceResult(event)"/>
// </mx:RemoteObject>

private function init():void
{
    // Call only the first service; the second is chained from its result handler.
    myService.FirstService();
}

private function firstServiceResult(re:ResultEvent):void
{
    // Do what you need with the results of FirstService (e.g. store re.result in an array).
    // Afterwards, call the next service.
    myService.SecondService();
}

private function secondServiceResult(re:ResultEvent):void
{
    // Do what you need with the results of SecondService.
    // Both result sets are now available, so the comparison can safely run here.
}
I'm working with MooTools to develop a simple Ajax-y page that does some manipulation of DB records, using PHP as the backend.
I'm submitting the Ajax request to a PHP page that calls a function and returns TRUE or FALSE depending on whether the record could be updated/deleted.
However, it seems the MooTools onSuccess event fires any time the server returns a 200 status, regardless of the value returned (e.g., FALSE is still treated as a success).
How do I use onSuccess in a meaningful way, short of returning a 40x error code or something?
All answers given by #dombenoit and #Marc are technically correct.
However, I totally differ from #Marc's vision: to me, using HTTP status codes is both efficient and common for web services. I use them all the time and much favor them over outputting text, mainly for the following reasons:
they give you a free standard for handling incorrect values, instead of making you output text and parse it client-side, which means semantic repetition;
they make all tools understand something went wrong, as the question itself outlines;
they feel right in a REST architecture.
To support this vision, here's a question: what's the goal of your call? Is it to update or delete a record? Then if this goal is not reached, something went wrong. It failed, and the application should know it at application level, not by first saying 200/OK and then clarifying in the textual response that it did not! To me, it feels like using "undefined" instead of undefined.
So here, I would make PHP send an HTTP error status code, that is, one in the 4xx or 5xx range.
Then, the good question is: which code to use? This is a design decision that totally depends on your application and the degree of specificity you want to get to.
If the goal of the call is to update / delete, and the fact that it does not happen is extremely unlikely and is an unexpected, serious error (for example: the DB is inconsistent because there's no way the call could reference an entity that does not exist), then you should use 500 / Internal Server Error.
If it's possible that the targeted entity does not exist at the time of the call without that being a critical error (example: your app provides several ways to delete an item, so another one could have been used instead of this call), then I'd advise 410 / Gone: you get clear, expressive error handling for free! And you can still use 500 for actual errors (DB connection exceptions…).
Then, you could get even more specific about update errors only, for example with 409 / Conflict if that's the kind of errors you're trying to foresee with updates…
I always give a look at the HTTP status codes reference when I'm designing a webapp.
Just for the sake of completeness, here's how you send headers in PHP (without a framework, at least; check your framework for specifics):
header("HTTP/1.0 404 Not Found");
UPDATE: since it seems you decided to go with the answer that suggested using JSON to encode success or failure, I have to add the following points about resilience.
Not relying on status codes and relying only on application-level data makes your code very fragile. Indeed, there are situations where you get genuinely unexpected errors: not the application-level "exception" that you raised yourself, but something wrong at a lower level (server unavailable, bad config that makes the server crash, changed routing system…). These will all show through HTTP status codes (or through timeouts), but not through a JSON-encoded answer, since your application will have crashed before being able to output anything.
As #Dimitar put it, from a programming point of view, this is somewhat "naive": you trust not only your code (which you shouldn't), but also the environment (the server) and the network. That's a very, very optimistic vision.
Using status codes as the standard way to handle expected exceptional situations gives you free handling of those unexpected situations: you've already registered handlers other than onSuccess, and presumably good ones (retrying once, notifying the user, offering backup options…).
Personally, I feel that using HTTP status codes to indicate the success/failure of whatever was supposed to happen on the server is incorrect. Everything about the HTTP call itself worked perfectly; the fact that the server was unable to complete the request doesn't mean the HTTP request failed.
It's like driving to the store to buy something, only to find it's out of stock. Returning an HTTP 404, to me, would imply that the store itself was gone. Yet you've successfully driven to the store, walked inside, walked back out, and driven home.
So, use a JSON data structure to indicate the results of the requested transaction, which you can check for in your client-side code:
$results = array();
$results['success'] = false;
$results['failure_code'] = 'XXX'; // some code meaningful to your app
$results['failure_message'] = 'Something dun gone blowed up, pa!';
header('Content-Type: application/json'); // so the client parses it as JSON
echo json_encode($results);
Then in your Moo code:
onSuccess: function(responseJSON, responseText){
    if (!responseJSON.success) {
        alert("AJAX call failed: " + responseJSON.failure_message);
    }
}
If the call worked, then you'd have
$results = array();
$results['success'] = true;
$results['data'] = ....;
You have a couple of options, two of which you mentioned in your question. You can use onSuccess and execute some code based on a true/false response, like so:
onSuccess: function(responseText, xml){
    if (responseText == "false") {
        // do something...
    }
}
Or you could raise errors in your PHP code, returning an actual error status code, thus firing the onFailure event.
Or, as mentioned by Marc previously, you could use a JSON response format, in which case you would use MooTools' Request.JSON:
onSuccess: function(responseJSON, responseText){
    // do something...
}
I need various things to happen after the user has been sent a response, like how register_shutdown_function used to work.
I've had a play with sfShutdownPlugin, but it just uses register_shutdown_function. I've also had a look at using a destructor (only on an action), but Symfony doesn't seem to like that too much, and the postExecute method still happens before content is sent.
Have a look at how the filter chain works: can you find a point where the response has been sent but execution is still running? If so, you should be able to add your own filter to the chain at that point.
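A rough sketch of such a filter in symfony 1.x (the class name and the post-response work are placeholders; sfFilter and the chain API are standard):

class postResponseFilter extends sfFilter
{
    public function execute($filterChain)
    {
        // Let the rest of the chain run first: the action executes and
        // the response is rendered.
        $filterChain->execute();

        // Code here runs after the chain completes. Whether the client has
        // already received the bytes depends on output buffering.
        // ... do your post-response work here ...
    }
}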
I'm not sure this will quite do what you want, though: I don't think the browser will see the request as finished, so the spinner will keep going even after the content has loaded and is shown, and won't it delay document.ready etc.?
The sfStorage classes use shutdown functions, I think; it might be worth looking at what they do.
I have a page that I am performing an AJAX request on. The purpose of the page is to return the headers of an e-mail, which I have working fine. The problem is that it is called once per e-mail in the mailbox, and the execution time of the imap_open function is about a second, so that cost is paid on every call. Is there a way to make an AJAX call which will return the information as it becomes available and keep executing, to prevent multiple calls to a function with a slow execution time?
Cheers,
Gazler.
There are technologies out there that allow you to configure your server and Javascript to allow for essentially "reverse AJAX" (look on Google/Wikipedia for "comet" or "reverse AJAX"). However, it's not incredibly simple and for what you're doing, it's probably not worth all of the work that goes into setting that up.
It sounds like you have a very common problem: you're firing off a number of AJAX requests, and each one repeats a bit of setup work that realistically only needs to be done once.
I don't work in PHP, but if it's possible to persist the return value of imap_open (or whatever its side effects are) across requests, then you should try to do that and just reuse the saved resource.
Some pseudocode:
// Pseudocode: open the connection only once, then reuse it.
if (!persisted_resource) {
    persisted_resource = imap_open()
}
persisted_resource.use() ...
where persisted_resource should be some variable stored in session scope, application scope, or whatever PHP has available that is longer-lived than a request.
Then you can either have each request check this variable so only one request will have to call imap_open or you could initialize it while you're loading the page. Hopefully that's helpful.
Batch your results. Between loading all e-mails and loading a single e-mail at a time, you could batch a set of e-mail headers and send them back. Tweak this number until you find a good fit between responsiveness and content.
The PHP script would receive a range request in this case, such as:
emailHeaders.php?start=25&end=50
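For illustration, the endpoint could look something like this (the mailbox string and credentials are placeholders; only the start/end parameters come from the example URL above):

// emailHeaders.php: open the mailbox once per request and return a whole range of headers.
$start = max(1, (int) $_GET['start']);
$end   = (int) $_GET['end'];

// $user and $pass are assumed to be defined elsewhere.
$mbox = imap_open('{imap.example.com:993/imap/ssl}INBOX', $user, $pass);

$headers = array();
for ($i = $start; $i <= $end; $i++) {
    $headers[] = imap_headerinfo($mbox, $i); // cheap once the box is open
}
imap_close($mbox);

header('Content-Type: application/json');
echo json_encode($headers);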
JavaScript will maintain state and request data in chunks until all data is loaded. Or you could do some fancy stuff, such as creating client-side policies on when to request data and what data to request.
The browser is another bottleneck, as most browsers only allow two outgoing connections per host at any given time.
It sounds as though you need to process as many e-mails as have been received with each call. Then you can return data for all of them together and parse it out on the client side. However, that process cannot go on forever, and the server cannot initiate sending additional data after the HTTP request has been responded to, so you will have to make subsequent calls to process more e-mails later.
The server-side PHP script can be configured to send output as soon as it's generated. You basically need to disable everything that can cause buffering: output_buffering, output_handler, HTTP compression, intermediate proxies…
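A minimal sketch of unbuffered output in PHP (assuming no proxy or gzip layer re-buffers the response; fetchHeaderChunks() is a placeholder for the slow per-mail work):

@ini_set('zlib.output_compression', 'off'); // turn off compression
while (ob_get_level() > 0) {
    ob_end_flush(); // drain any output buffers that are already open
}

foreach (fetchHeaderChunks() as $chunk) {
    echo $chunk;
    flush(); // push each chunk to the client immediately
}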
The difficult part is that your JavaScript library needs to be able to handle partial input; that is, you need access to the downloaded data as soon as it's received. I believe it's technically possible, but some popular libraries like jQuery only let you read the data once the transfer is complete.