How to run PHP early, before the HTTP request body has finished uploading? - php

I'm creating a specific application in PHP that receives data from clients and processes it.
Sometimes the data sent by a client can be significantly large or of unknown length.
So I want to ask: is it possible to make PHP start processing the data before the client finishes uploading?
Here is what I'm looking at:
From the PHP docs, it is possible to use the PUT method and receive the content via php://input:
http://www.php.net/manual/en/features.file-upload.put-method.php
I'm planning on using PUT method with chunked transfer encoding.
But when I tried sending a PUT request to phpinfo.php (you know what's inside)
and intentionally did not send the last chunk (the zero-length chunk),
PHP did not respond until I completed the request.
(That's not what I expected from the PUT method; it should have some kind of advantage here.
If it can't start PHP early, I'd rather use an ordinary POST method.)
And according to this topic
Can I reply to a HTTP Request Before I've Finished Receiving it?
I don't think an early response from PHP will break HTTP.
(Don't worry about the client; I wrote it myself.)
Note: Breaking the data into many requests is not an option, because in order to process the data there are some memory/objects that can't be shared among multiple PHP instances (even with a memory table in MySQL).
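For illustration, here is a minimal sketch of the kind of incremental read I have in mind, assuming the SAPI actually hands PHP the body as it arrives (with most setups the web server buffers the full request first); process_chunk() is a hypothetical application function:

<?php
// Read the request body from php://input in fixed-size chunks and process
// each chunk as soon as it is read, instead of buffering the whole body.
$in = fopen('php://input', 'rb');
while (!feof($in)) {
    $chunk = fread($in, 8192);      // up to 8 KiB at a time
    if ($chunk === false || $chunk === '') {
        break;
    }
    process_chunk($chunk);          // hypothetical processing function
}
fclose($in);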

Related

Duration of $_POST Content (Lifetime)

This is a relatively simple question to which I haven't been able to find the answer (or my search skills are just rubbish). Say I send $_POST content through jQuery/AJAX, the content is handled by PHP, and a result is returned, finishing this function.
How long does this $_POST content still "live"? Is it "destroyed" as soon as the AJAX / PHP function is done?
Or does it remain in the system for a while?
HTTP is a stateless protocol, so the web server does not need to keep any data across multiple requests. When your web server passes multiple requests to PHP (usually PHP-FPM), it treats every request separately.
So if you send an HTTP POST request (which has a Content-Type of application/x-www-form-urlencoded or multipart/form-data) to PHP, the data will be loaded into $_POST before your script is executed. You can access it while your script is running, but afterwards all variables are deleted from memory, including $_POST.
If you want to use data across multiple requests, you need to persist it somehow, for example using sessions, files, databases, shared memory, or APCu.
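A minimal sketch of that last point, using the session (APCu, files, or a database work the same way conceptually); the key names are only illustrative:

<?php
// First request: copy the POST value somewhere durable before the script ends.
session_start();
if (isset($_POST['name'])) {
    $_SESSION['name'] = $_POST['name'];
}

// A later request: $_POST from the earlier request is gone, but the
// session still holds the value that was saved.
$name = isset($_SESSION['name']) ? $_SESSION['name'] : null;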

PHP - Could you use POST method to stream data to a server over a long period of time?

Caveat, I know this has the potential to be a ridiculously stupid question, but I had the thought and want to know the answer.
Aim: run an interactive session between browser and server with a single request, without ajax or websockets etc.
Scenario: a PHP file on the server receives data by the POST method from a user. The content length in the header is 8MB, so it keeps the connection open until it has received the full 8MB of data. But on the user side we are delivering this data very, very slowly (simulating a terrible connection, for example). The server is receiving the data bits at a time. [Can this be passed to the PHP file to process bits at a time, or does it only get passed once all the data is received?] It then does whatever it wants with those bits and delivers them to the browser, in an echo loop. At certain time intervals, the user injects new data into the 'stream', which will be surrounded by a continuous stream of padding data.
Is any of that possible? Or even with CGI? I am expecting this not to be possible really, but what stops the process timing out if someone does have a terrible connection and the POST data is huge?
As far as I know, you could do this, but the PHP file you are calling with the POST data will only be called by the webserver once it has received all the data. Otherwise, say you were sending an image with POST and your PHP script moved this image from the tempfiles directory to another directory before all the data had been received: you would have a corrupt image, nothing more, nothing less.
As long as the ini configurations have been altered correctly, I would think so. But it would be a great test to try out!
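For reference, a quick way to inspect the ini settings that typically decide whether a very large or very slow POST will be accepted at all (most of these have to be raised in php.ini or the server config rather than from the script itself):

<?php
// Print the limits that usually matter for huge or slow request bodies.
$keys = array('post_max_size', 'upload_max_filesize', 'max_input_time', 'max_execution_time');
foreach ($keys as $key) {
    printf("%s = %s\n", $key, ini_get($key));
}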

CakePHP 2.x - Access php://input directly to deal with large request body

I am working on an application that includes a small REST API. One of these methods needs to accept a large (nearly 300MB) binary file upload on a PUT request.
Because the file is quite large, and because there's some risk of several such requests running at once, I'd like to avoid holding the entire request body in memory. I had hoped to do this by reading from the php://input stream directly and siphoning it off to a file.
However, the controller's input() method appears to be interfering. I understand the rationale for input()—once you read the php://input stream, it's gone, so input() holds onto it for repeated access. Obviously, this is the behavior I wish to bypass in this instance.
It seems, though, that input() is being called somewhere before my controller code runs, because by the time I get to php://input, there's nothing left to read.
So, my question: Is there a way for a CakePHP controller to stream a very large request body onto disk without first loading the entire thing into memory?
I figured it out, so I hope the following will save someone else some time.
This turned out to be an error on my part, but not one related to CakePHP. If you read php://input from your controller without having called Controller::input(), you will (normally) get the raw request body as you'd expect.
My problem was that I had inadvertently set the request header Content-Type: application/json. This caused CakePHP's REST code to try to parse the body as input. When I correctly set the type (in my case application/x-gzip) everything functioned correctly.
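For anyone after the streaming part, here is a rough sketch of copying the raw body straight to disk from a CakePHP 2.x action without ever calling input(); the controller name, action name, and target path are only illustrative:

<?php
App::uses('AppController', 'Controller');

class UploadsController extends AppController {

    public function put_file() {
        $this->autoRender = false;               // no view for this action

        // Copy php://input to a file in chunks so the ~300MB body is never
        // held in memory all at once.
        $in  = fopen('php://input', 'rb');
        $out = fopen(TMP . 'upload.bin', 'wb');  // TMP is CakePHP's tmp dir constant
        stream_copy_to_stream($in, $out);
        fclose($in);
        fclose($out);

        $this->response->statusCode(201);
    }
}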

ajax multi-threaded

Is it possible to achieve true multi-threading with Ajax? If so, how? Please give me some related information, websites or books.
It depends on what you mean by "multithreaded".
Javascript code is distinctly singlethreaded. No Javascript code will interrupt any other Javascript code currently executing on the same page. An AJAX (XHR) request will trigger the browser to do something and (typically) call a callback when it completes.
On the server each Ajax request is a separate HTTP request. Each of these will execute on its own thread. Depending on the Web server config, they may not even execute on the same machine. But each PHP script instance will be entirely separate, even if calling the same script. There is no shared state per se.
Now browsers typically cap the number of simultaneous Ajax requests a page can make on a per host basis. This number is typically 2. I believe you can change it but since the majority of people will have the default value, you have to assume it will be 2. More requests than that will queue until an existing request completes. This can lead to having to do annoying things like creating multiple host names like req1.example.com, req2.example.com, etc.
The one exception is sessions, but they aren't multithreaded. Starting a session will block all other scripts attempting to start the exact same session (based on the cookie). This is one reason why you need to minimize the amount of time a session is kept open. Arguably you could use a database or something like memcache to kludge inter-script communication, but it's not really what PHP is about.
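A small sketch of keeping that session window short: read what you need, then release the lock before doing any slow work (the key name and the worker function are only illustrative):

<?php
session_start();
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null;
session_write_close();    // releases the session lock; later writes won't persist

// Long-running work here no longer blocks other requests that share the
// same session cookie.
do_slow_work($userId);    // hypothetical application function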
PHP is best used for simple request processing. A request is received. It is processed and a response is returned. That response could be HTML, XML, text, JSON or whatever. The request could be an HTTP request from the browser or an AJAX request.
Each of these request-response cycles should, where possible, be treated as separate entities.
Another technique used is long-polling. An HTTP request is sent to the server and may not return for a long time. This is used for Web-based chat and other "server push" type scenarios. Sometimes partial responses will be flushed without ending the request.
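A minimal long-poll endpoint might look something like this; check_for_new_data() stands in for whatever the application actually polls:

<?php
set_time_limit(70);                 // a little longer than the poll window
$deadline = time() + 60;

do {
    $data = check_for_new_data();   // hypothetical: poll a database, file, etc.
    if ($data !== null) {
        header('Content-Type: application/json');
        echo json_encode($data);
        exit;                       // respond once and end the request
    }
    sleep(1);                       // avoid busy-waiting
} while (time() < $deadline);

http_response_code(204);            // nothing new before the timeout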
The last option (on Unix/Linux at least) is that PHP can spawn processes but that doesn't seem to be what you're referring to.
So what is it exactly you're trying to do?
You can't actually multi-thread, but what a lot of larger websites do is flush the output for a page and then use Ajax to load additional components on the fly, so that the user sees content even while the browser is still requesting new information. It's a good technique to know but, like everything else, you need to be careful how you use it.

AJAX return data before execution has finished

I have a page that I am performing an AJAX request on. The purpose of the page is to return the headers of an e-mail, which I have working fine. The problem is that this is called for each e-mail in a mailbox, which means it will be called once per mail in the box. The reason this is a problem is that the execution time of the imap_open function is about a second, so each time it is called, this is executed. Is there a way to make an AJAX call which will return the information as it is available and keep executing, to prevent multiple calls to a function with a slow execution time?
Cheers,
Gazler.
There are technologies out there that allow you to configure your server and Javascript to allow for essentially "reverse AJAX" (look on Google/Wikipedia for "comet" or "reverse AJAX"). However, it's not incredibly simple and for what you're doing, it's probably not worth all of the work that goes into setting that up.
It sounds like you have a very common problem which is essentially you're firing off a number of AJAX requests and they each need to do a bit of work that realistically only one of them needs to do once and then you'd be good.
I don't work in PHP, but if it's possible to persist the return value of imap_open, or whatever its side effects are, across requests, then you should try to do that and then just reuse that saved resource.
Some pseudocode:
// pseudocode: open the connection only if one has not already been persisted
if (!persisted_resource) {
    persisted_resource = imap_open()
}
persisted_resource.use()   // ...then reuse it for the actual work
where persisted_resource should be some variable stored in session scope, application scope or whatever PHP has available that's longer lived than a request.
Then you can either have each request check this variable so only one request will have to call imap_open or you could initialize it while you're loading the page. Hopefully that's helpful.
Batch your results. Instead of loading all emails at once or loading a single email at a time, you could batch the email headers and send them back in groups. Tweak the batch size till you find a good fit between responsiveness and content.
The PHP script would receive a range request in this case such as
emailHeaders.php?start=25&end=50
Javascript will maintain state and request data in chunks until all data is loaded. Or you could do some fancy stuff such as create client-side policies on when to request data and what data to request.
The browser is another bottleneck as most browsers only allow 2 outgoing connections at any given time.
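A sketch of what the server side of such a range request could look like; the mailbox string and credentials are placeholders:

<?php
// emailHeaders.php?start=25&end=50 - return one batch of headers as JSON.
$mbox  = imap_open('{imap.example.com:993/imap/ssl}INBOX', 'user', 'pass');
$total = imap_num_msg($mbox);

$start = isset($_GET['start']) ? max(1, (int)$_GET['start']) : 1;
$end   = isset($_GET['end'])   ? (int)$_GET['end'] : $start + 24;
$end   = min($end, $total);

$headers = array();
for ($i = $start; $i <= $end; $i++) {
    $headers[] = imap_headerinfo($mbox, $i);   // one header object per message
}
imap_close($mbox);

header('Content-Type: application/json');
echo json_encode($headers);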
It sounds as though you need to process as many e-mails as have been received with each call. At that point, you can return data for all of them together and parse it out on the client side. However, that process cannot go on forever, and the server cannot initiate the return of additional data after the http request has been responded to, so you will have to make subsequent calls to process more e-mails later.
The server-side PHP script can be configured to send the output as soon as its generated. You basically need to disable all functionality that can cause buffering, such as output_buffering, output_handler, HTTP compression, intermediate proxies...
The difficult part is that your JavaScript library needs to be able to handle partial input. That is to say: you need access to the downloaded data as soon as it's received. I believe it's technically possible, but some popular libraries like jQuery only let you read the data once the transfer is complete.
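On the PHP side, the flushing idea sketched above might look like this (output_buffering itself has to be turned off in php.ini; compression and intermediate proxies can still buffer regardless):

<?php
@ini_set('zlib.output_compression', 'off');  // stop PHP-level compression buffering
while (ob_get_level() > 0) {
    ob_end_flush();                          // close any output buffers already started
}

for ($i = 1; $i <= 5; $i++) {
    echo "chunk $i\n";
    flush();                                 // push this piece toward the client now
    sleep(1);
}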
