ZendAMF - what is all this traffic? - php

I am using ZendAMF with PHP and Flex (Flash Builder 4). It works great, but when I look at the traffic between my Flex application and ZendAMF, I notice packets moving even though my code is not requesting any communication.
For example, this is what my service looks like in flex:
var activityLogService:RemoteObject = new RemoteObject("zend");
activityLogService.showBusyCursor=true;
activityLogService.endpoint="http://myserver:80/amf/";
activityLogService.source="ActivityLogService";
Then I call something like activityLogService.getRecord(myPassedParams) after setting up my addlistener.
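On the PHP side, the gateway is a standard Zend_Amf_Server setup, roughly along these lines (a simplified sketch; the file layout and the getRecord() body are not the point here):

<?php
// amf/index.php - the gateway answering at http://myserver:80/amf/
require_once 'Zend/Amf/Server.php';
require_once 'ActivityLogService.php'; // class exposing getRecord() etc.

$server = new Zend_Amf_Server();
$server->setClass('ActivityLogService');
echo $server->handle();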
When I watch the network traffic with something such as Fiddler, I can see my request and the response come back.
However, I also see request packets that do not contain the names of my Zend service objects:
�����null�/1����
���
�Mflex.messaging.messages.CommandMessageoperationcorrelationIdmessageIdtimeToLivetimestampdestinationheaders bodyclientIdI3961D727-35B9-F41C-713A-AA42625FCFD9��
%DSMessagingVersion DSIdnil
The response coming back is pretty vague too:
�����
/1/onResult������
�Uflex.messaging.messages.AcknowledgeMessagecorrelationIdclientIddestinationmessageIdtimestamptimeToLiveheaders bodyI3961D727-35B9-F41C-713A-AA42625FCFD9I53D9441D-E1DC-4829-9B3F-000040DA9368I1322EAF2-B588-9929-0AC4-000013A22D80131282149600�
Are these just some kind of 'keep alive' messages?
If so, is there a way to turn them off?
Also, if so, is there a way I can use them to keep some kind of session alive on the server side maybe (maybe that's what they are for)?

The RemoteObject AMF implementation requires that the server-side implementation be stateful. This is defined as part of the protocol, so it shouldn't matter which back end you're talking to (i.e., my experience is with BlazeDS, LCDS and WebORB, but it should be the same with PHP).
When your application makes its first AMF call to a RemoteObject, it checks to see if the client has a DSid value set. This is essentially a unique ID which identifies the Flex client to the server.
If not, a call is issued to get a new DSid, and all outbound calls are suspended until that call returns. From then on, the DSid is passed in the header of every outbound AMF packet.
If you ever reset the DSid on the client (by setting FlexClient.getInstance().id = 'nil'), this process will repeat (i.e., all calls will be suspended again until the server has issued a new DSid to the client).
Basically, these are internal messages required for the AMF protocol to work.

Related

PHP while(true) loop for file updates

I've got the following problem at hand:
I'm having users on two separate pages, but saving page input to the same text file. While one user is editing, the other can't. I'm keeping track of this with sessions, and I write the changes and whose turn it is to edit into a file.
Works fine so far, and the output in the end is quite similar to a chat. However, right now I'm having users manually refresh their page and reload the file. What I'd like to do is have the page execute a redirect when the file timestamp changes (to indicate that the last user has saved their edits and it's another user's turn). I've looked into JavaScript short polling a little, but then found the PHP filemtime function and it looks much easier to use. Well, here's what I got:
while (true) {
    clearstatcache(); // filemtime() results are cached, so clear the cache before each check
    $oldtimestamp = filemtime("msks/" . $session['user']['kampfnr'] . ".txt");

    $waittimer = 2;
    $waittimer++;
    sleep($waittimer);

    clearstatcache();
    $newtimestamp = filemtime("msks/" . $session['user']['kampfnr'] . ".txt");

    if ($newtimestamp > $oldtimestamp) {
        addnav("", "kampf_ms.php?op=akt");
        redirect("kampf_ms.php?op=akt");
    }
}
In theory, while the user sees the output "it's ... turn to edit the file.", this should loop in the background, checking whether the file has already been updated, and if so, redirect the user.
In practice this heavily affects server performance (I'm on shared hosting) until it breaks with a memory exceeded error message.
Is something wrong with the code? Or is it generally a bad idea to use a while loop in this case?
Thanks in advance!
PHP should only be used to generate web content: the client makes a request to the server, the server runs the required script and returns the response to the client.
Once the page is built and sent to the client, the connection is closed; whatever happens afterwards, the client is not informed of it.
So with an infinite loop, not only can the client end up waiting for the response for an unbounded time, but the server can also be heavily impacted by the load. It is effectively a really bad idea :)
PHP can't be used for bidirectional communication on its own: it is only invoked to build the pages the client asks for, so it can't do anything "in the background" (not directly; you can call an external script, but not to notify a client).
Also, for bidirectional communication, PHP over "regular" HTTP is not a good fit because of the client/server architecture: the server is passive and only answers client requests.
I can suggest using the WebSocket protocol to build a chat-style application:
http://socket.io/
https://en.wikipedia.org/wiki/WebSocket
But for that, you need to use an "active" server solution, such as node.js or Ruby (it depends on your server's capabilities...).
The other way, if you want to stay in PHP, is to have the client make an Ajax request every 10 seconds or so to a PHP script which checks the file and sends a message back to the client if the file has been updated. But this is really discouraged because of the heavy performance loss, so forget it immediately.
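For completeness, here is roughly what such a polled check script could look like. This is only a sketch: the file path mirrors the question's example, and the parameter name and JSON response shape are made up for illustration:

<?php
// check_update.php - hypothetical endpoint the client would poll via Ajax
session_start();

// Assumes the same per-user file as in the question, with the id kept in the PHP session.
$file  = "msks/" . $_SESSION['user']['kampfnr'] . ".txt";
$known = isset($_GET['since']) ? (int) $_GET['since'] : 0; // timestamp the client already has

clearstatcache();
$current = file_exists($file) ? filemtime($file) : 0;

header('Content-Type: application/json');
echo json_encode(array(
    'changed'   => $current > $known,
    'timestamp' => $current,
));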

Detecting and pushing a stream of events to a web browser (HTML5, PHP, PostgreSQL)?

My currently in-development website is written in PHP. As users are using the site, they'll be performing actions and I'd like to be able to push notifications of these actions to other users that they're connected to.
Now while I'm sure that using EventSource and a PHP document to serve up the appropriate data: lines would work, I've got absolutely no idea how I should notify that PHP document when a new message actually needs to be sent.
What I essentially mean is that when an action takes place, there will be an entry in the PostgreSQL database with the message information (such as the action that was taken). However, it's not efficient to have each instance of the "messaging" PHP document (the one that EventSource is connected to) continuously poll PostgreSQL for new messages. With 50 users active at once, that would be 50 instances polling PostgreSQL, and as you can probably see, not a very efficient use of resources.
So I'm wondering whether anyone has any suggestions as to software that might assist with this problem. Ideally I'd like to be able to call a function that indicates an action has been undertaken, which is then sent to all the other instances of the "messaging" PHP document so that they can interpret the message, see whether it's relevant, and push it back to the client.
Essentially I need a way to notify running PHP instances (that were started via Apache) of a new message being created, by calling a function in another PHP instance with the message information. I don't need assistance with getting the messages to the client; I can do that with EventSource.
Does anyone have any suggestions as to how this task could be undertaken?
Conventional ways of solving the problem are using a Java applet (which can open a socket back to the originating server) or using long polling (e.g., Comet).
I've succeeded in doing this by using memcache with a messages-count key-value and a message-$i key-value where $i is an incrementing number. A PHP document is connected to via long polling and it continuously checks to see whether message-$(messages-count) exists, in which case it returns it.
There's a bit more to this since it will return multiple messages if they're created at once and also can load the initial checking number ($i) as a $_GET parameter, but this is essentially how it works. It's near instant and new messages can easily be added to memcache via PHP (each time you create a new message, you increment messages-count).
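A rough sketch of that pattern, assuming the Memcached extension; the key names follow the description above and everything else is illustrative:

<?php
// Writer side: publish a new message.
function publish_message(Memcached $mc, $message) {
    $count = (int) $mc->get('messages-count') + 1;
    $mc->set('message-' . $count, $message);
    $mc->set('messages-count', $count);
}

// Long-polling reader side: wait until a message newer than $i shows up.
function wait_for_message(Memcached $mc, $i, $timeoutSeconds = 25) {
    $deadline = time() + $timeoutSeconds;
    while (time() < $deadline) {
        $count = (int) $mc->get('messages-count');
        if ($count > $i) {
            return $mc->get('message-' . $count);
        }
        usleep(200000); // 200 ms between checks
    }
    return null; // nothing new; the client simply re-issues the long-poll request
}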
Take a look at php mem sharing

ajax multi-threaded

Is it possible to achieve true multi-threading with Ajax? If so, how? Please give me some related information, websites or books.
It depends on what you mean by "multithreaded".
JavaScript code is distinctly single-threaded. No JavaScript code will interrupt any other JavaScript code currently executing on the same page. An AJAX (XHR) request will trigger the browser to do something and (typically) call a callback when it completes.
On the server, each Ajax request is a separate HTTP request. Each of these will execute on its own thread. Depending on the Web server config, they may not even execute on the same machine. But each PHP script instance will be entirely separate, even if calling the same script. There is no shared state per se.
Now browsers typically cap the number of simultaneous Ajax requests a page can make on a per-host basis. This number is typically 2. I believe you can change it, but since the majority of people will have the default value, you have to assume it will be 2. More requests than that will queue until an existing request completes. This can lead to having to do annoying things like creating multiple host names like req1.example.com, req2.example.com, etc.
The one exception is sessions but they aren't multithreaded. Starting a session will block all other scripts attempting to start the exact same session (based on the cookie). This is one reason why you need to minimize the amount of time a session is opened for. Arguably you could use a database or something like memcache to kludge inter-script communication but it's not really what PHP is about.
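To keep that session window small, a script can release the session lock as soon as it no longer needs to write to the session. A sketch of the idea (the slow helper function is hypothetical):

<?php
session_start();

// Read (and, if necessary, write) session data up front.
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;

// Release the session lock so other requests from the same browser
// (e.g. parallel Ajax calls) are not blocked while we do slow work.
session_write_close();

// Slow work that does not need to modify the session...
$result = do_expensive_lookup($userId); // hypothetical helper

header('Content-Type: application/json');
echo json_encode($result);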
PHP is best used for simple request processing. A request is received. It is processed and a response is returned. That response could be HTML, XML, text, JSON or whatever. The request could be an HTTP request from the browser or an AJAX request.
Each of these request-response cycles should, where possible, be treated as separate entities.
Another technique used is long-polling. An HTTP request is sent to the server and may not return for a long time. This is used for Web-based chat and other "server push" type scenarios. Sometimes partial responses will be flushed without ending the request.
The last option (on Unix/Linux at least) is that PHP can spawn processes but that doesn't seem to be what you're referring to.
So what is it exactly you're trying to do?
You can't actually multi-thread, but what a lot of larger websites do is flush the output for a page and then use Ajax to load additional components on the fly, so that the user sees content even while the browser is still requesting new information. It's a good technique to know but, like everything else, you need to be careful how you use it.

AJAX return data before execution has finished

I have a page that I am performing an AJAX request on. The purpose of the page is to return the headers of an e-mail, which I have working fine. The problem is that this is called for each e-mail in a mailbox, which means it will be called once per mail in the box. The reason this is a problem is that the execution time of the imap_open function is about a second, so that delay is incurred on every call. Is there a way to make an AJAX call which will return the information as it becomes available and keep executing, to prevent multiple calls to a function with a slow execution time?
Cheers,
Gazler.
There are technologies out there that allow you to configure your server and Javascript to allow for essentially "reverse AJAX" (look on Google/Wikipedia for "comet" or "reverse AJAX"). However, it's not incredibly simple and for what you're doing, it's probably not worth all of the work that goes into setting that up.
It sounds like you have a very common problem which is essentially you're firing off a number of AJAX requests and they each need to do a bit of work that realistically only one of them needs to do once and then you'd be good.
I don't work in PHP, but if it's possible to persist the return value of imap_open (or whatever its side effects are) across requests, then you should try to do that and then just reuse that saved resource.
Some pseudocode:
if (!persisted_resource) {
persisted_resource = imap_open()
}
persisted_resource.use()....
where persisted_resource should be some variable stored in session scope, application scope or whatever PHP has available that's longer lived than a request.
Then you can either have each request check this variable so only one request will have to call imap_open or you could initialize it while you're loading the page. Hopefully that's helpful.
Batch your results. As a middle ground between loading all emails at once and loading a single email at a time, you could batch the email headers and send them back. Tweak this number until you find a good fit between responsiveness and content.
The PHP script would receive a range request in this case such as
emailHeaders.php?start=25&end=50
Javascript will maintain state and request data in chunks until all data is loaded. Or you could do some fancy stuff such as create client-side policies on when to request data and what data to request.
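A minimal sketch of what such a range endpoint could look like; the mailbox credentials and the JSON response are placeholders:

<?php
// emailHeaders.php?start=25&end=50 - hypothetical batched header endpoint
$start = isset($_GET['start']) ? max(1, (int) $_GET['start']) : 1;
$end   = isset($_GET['end'])   ? max($start, (int) $_GET['end']) : $start + 24;

// Open the mailbox once per request (placeholder credentials).
$imap = imap_open('{imap.example.com:993/imap/ssl}INBOX', 'user@example.com', 'secret');

// Fetch the overview (subject, from, date, ...) for the whole range in one call.
$headers = imap_fetch_overview($imap, $start . ':' . $end, 0);
imap_close($imap);

header('Content-Type: application/json');
echo json_encode($headers);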
The browser is another bottleneck as most browsers only allow 2 outgoing connections at any given time.
It sounds as though you need to process as many e-mails as have been received with each call. At that point, you can return data for all of them together and parse it out on the client side. However, that process cannot go on forever, and the server cannot initiate the return of additional data after the http request has been responded to, so you will have to make subsequent calls to process more e-mails later.
The server-side PHP script can be configured to send the output as soon as its generated. You basically need to disable all functionality that can cause buffering, such as output_buffering, output_handler, HTTP compression, intermediate proxies...
The difficult part is that your JavaScript library needs to be able to handle partial input. That is to say, you need access to the downloaded data as soon as it's received. I believe it's technically possible, but some popular libraries like jQuery only let you read the data once the transfer is complete.
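On the PHP side, a sketch of what "send the output as soon as it's generated" can look like; whether the chunks actually reach the browser unbuffered still depends on the web server and any proxies in between, and the header-producing function below is hypothetical:

<?php
// Try to disable the buffering layers so each chunk is sent immediately.
@ini_set('zlib.output_compression', '0');
while (ob_get_level() > 0) {
    ob_end_flush();
}
ob_implicit_flush(true);

foreach (fetch_mailbox_headers() as $header) { // hypothetical source of header data
    echo json_encode($header), "\n";
    flush(); // push this chunk to the client before continuing
}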

"Forging" (= mocking) an AMFPHP remoting request

I am using AMFPHP with great success to link my database with my Flex application. However I want to be able to test the remoting requests outside of flash, by typing something like:
http://localhost/amfphp/gateway.php?[WHAT DO I PUT HERE]
What do I put after the question mark in order to have the browser (or a C++ HTTP component) call the AMFPHP service, so that the HTTP request needn't "initiate" from Flash?
It sounds like you want to make an AMF call from PHP. You can't do this directly from a browser. The data would be returned in the binary AMF format, which of course PHP or a browser can't handle directly. I don't even think it can make the request.
You'll need an AMF client to make the call and decode the data - I suggest using SabreAMF.
Sabre AMF homepage
This is what a simple client method call looks like:
require 'SabreAMF/Client.php';

function make_request($param1, $param2) {
    $client = new SabreAMF_Client('http://your.server/amfphp/gateway.php');
    return $client->sendRequest('your_amf_service.yourAMFmethod', array($param1, $param2));
}
you then invoke this like
$result=make_request('cow',300);
and it returns an array.
You'd probably want to set up a PHP class with all of your methods so you can call each one easily, of course.
AMFPHP has the service browser, which lets you simulate calls to your server-side service and see the responses. It basically does an internal CURL request back to the same service file and passes in the arguments you provided, and acts as if it was done directly from the client-side Flash app.
AMF being a binary format, things are probably not going to be that simple : you'll have to find out how your data is encoded...
As a first step, maybe you could, from your gateway.php script, just dump everything it receives to a file when it's called from your Flash component?
This way, you could see what the received data looks like (and you'd know whether it's passed in POST or in GET).
Depending on what that data looks like, maybe you'll be able to "forge" a request to your server -- but I don't think it'll be as simple as just calling an URL from your browser...
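That dump can be done with one line at the very top of gateway.php, before the AMFPHP gateway runs (the output path here is just an example):

// Append the raw POST body (the binary AMF request) to a file for later inspection.
file_put_contents('/tmp/amf_requests.bin', file_get_contents('php://input'), FILE_APPEND);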
Considering the AMFPHP gateway is just a mechanism to translate (from/to binary) and dispatch to a class/method with various incoming parameters and finally a return() of data - can you just unit-test directly against the method, thus skipping the entire AMF layer?
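In other words, something along these lines, where the service class and method names are placeholders for whatever lives in your AMFPHP services directory:

require 'services/YourAmfService.php'; // hypothetical service class used by the gateway

$service = new YourAmfService();
$result  = $service->yourAMFmethod('cow', 300); // call the method directly, no AMF layer involved

var_dump($result); // or assert against it in a proper unit test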
