Keep-alive in curl / PHP

I'm writing a gateway script in PHP which connects to a remote server, obtains some information and returns it for JSON usage (no JSONP possibility).
This gateway is being requested every second, so it's very important for curl to use keep-alive. From what I learned, curl will do it automatically if we use the same handle across multiple requests.
The question is: how do I store the handle between two page loads? It's not possible to store the handle resource in the session, and it can't be serialized either.
Or maybe there's another way to ensure keep-alive in curl?

Generally speaking, every request exists independently of every other request. Connections and other resources are not pooled between requests.
There are a couple of possible solutions:
Use a proxy with content adaptation (Squid and Greasyspoon would work here). This does take some work to set up, but you will be able to write scripts in Java, JavaScript or Ruby to adapt your content.
Run your PHP script as a daemon, sort of like a webserver. This would take a bit of engineering, but it can be done with PHP. You would be getting into sockets and threading; a rough sketch of the curl side of such a worker is shown below.
You might be able to use this as a starting point: http://nanoweb.si.kz/
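If you do go the daemon/worker route, here is a minimal, hedged sketch of the curl part: one handle is created once and reused for every poll, which is what lets libcurl keep the connection open. The URL, timeout and output path are placeholders, not anything from the original question.

    <?php
    // Minimal sketch of option 2: a long-running CLI worker that reuses one
    // curl handle so libcurl can keep the connection to the remote server
    // open between polls. URL, timeout and output path are placeholders.
    $ch = curl_init('https://remote.example.com/info');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 5,
    ]);

    while (true) {
        $json = curl_exec($ch);   // same handle => the connection is reused where possible
        if ($json === false) {
            error_log('curl error: ' . curl_error($ch));
        } else {
            // hand the JSON off to whatever actually serves the client
            file_put_contents('/tmp/latest.json', $json);
        }
        sleep(1);                 // the gateway is polled every second
    }
    // curl_close($ch); only when the worker shuts down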

Related

PHP - cross-application communication

I have two PHP applications on my server. One of them has a REST API which I would like to consume and render in the second application. What is a better way than curling the API? Can I somehow ask php-fpm for the data directly, or something like that?
Doing curl and making the request through the webserver seems wrong.
All this happens on a single server - I know it probably doesn't scale well, but it's a small project.
Why use REST if you can access the functions directly?
If everything is on the same server then there is no real need for REST, since it makes a somewhat pointless round trip through the webserver.
But if it is already there and you don't care about the overhead (if there's not much traffic going on, that's fine), then use file_get_contents instead of curl. It is easier to use, and I doubt either one is noticeably faster or slower; both work (a small sketch follows below).
You could also use a second webserver (a second virtual host) on a different port for internal use. That way things are nicely separated.
(If everything is on different servers, but on a local network, then using sockets would be fastest.)
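For the file_get_contents route, a minimal sketch could look like the following; the internal URL, port and timeout are assumptions for illustration only.

    <?php
    // Sketch: call the other application's REST API with file_get_contents
    // instead of curl. URL, port and timeout are assumptions.
    $context = stream_context_create([
        'http' => [
            'method'  => 'GET',
            'timeout' => 2,                         // fail fast on the local box
            'header'  => "Accept: application/json\r\n",
        ],
    ]);

    $body = file_get_contents('http://127.0.0.1:8081/api/items', false, $context);
    if ($body === false) {
        // handle the error however the consuming application reports failures
        throw new RuntimeException('internal API call failed');
    }

    $data = json_decode($body, true);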
"Doing curl and making the request through the webserver seems wrong" - I disagree with that. You can still achieve what you want to achieve using PHP cURL, even if it's on the same server.
I had the same problem, but I solved it by using MySQL to "queue" tasks; a worker could then use any polling method, or PHP could spawn a new server-side worker.
Since the results were stored in the same database, the PHP pages could load the results, or the current status, at any time.
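A rough sketch of that queue idea is below. The tasks table, its columns and the do_work() function are invented for illustration, not part of the original setup.

    <?php
    // Sketch of the MySQL "queue" approach. The tasks table (id, payload,
    // status, result) and do_work() are assumptions for illustration.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

    // Web request: enqueue a task and return immediately.
    $stmt = $pdo->prepare("INSERT INTO tasks (payload, status) VALUES (?, 'pending')");
    $stmt->execute([json_encode(['action' => 'sync'])]);

    // Worker (separate CLI process): poll for pending tasks and store the result.
    $task = $pdo->query("SELECT id, payload FROM tasks WHERE status = 'pending' LIMIT 1")
                ->fetch(PDO::FETCH_ASSOC);
    if ($task) {
        $result = do_work(json_decode($task['payload'], true));  // your own job logic
        $upd = $pdo->prepare("UPDATE tasks SET status = 'done', result = ? WHERE id = ?");
        $upd->execute([json_encode($result), $task['id']]);
    }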

AJAX to get data from the server

A page is sending an AJAX call to the server and should get item info in response. The array to look up/return is rather big and I can't hold it in the PHP file that accepts the request. So, as far as my knowledge and experience tell me, there are two methods:
Access the database for each request.
Store items in files (e.g. "item12.txt") and send the contents to the user.
My C experience says that opening and closing a file takes much more system time than the rest of the program. How is it in PHP? What is the preferred method (most importantly, resource-wise): the file system or a database? Is there any other way you would recommend (e.g. JavaScript directly loading the file with the variable array from the server for each request)? Maybe there's some innovative method lying around that you're aware of?
P.S. On the server side only a number will be accepted, so no worries about someone trying to access files on the server or doing anything fancy with the database.
Sockets
Depending on how many requests you will be handling, you could look into socket connections.
Sockets give you two-way communication between the client and the server, which would allow you to do interactive things as needed.
Socket tutorial 1
Socket tutorial 2
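In PHP itself, a socket server can be as small as the sketch below; the port, the wire format (one integer id in, JSON out) and the lookup() helper are all invented for illustration.

    <?php
    // Bare-bones PHP socket server sketch. Port, wire format and lookup()
    // are placeholders; there is no concurrency or error handling here.
    $server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
    if ($server === false) {
        die("could not bind: $errstr ($errno)\n");
    }

    while ($conn = stream_socket_accept($server, -1)) {
        $request = fread($conn, 1024);            // e.g. an item id sent by the client
        $itemId  = (int) trim($request);
        fwrite($conn, json_encode(['item' => $itemId, 'data' => lookup($itemId)]));
        fclose($conn);
    }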
Node.js
node.js is the new kid on the block. You write your own socket webserver and use JavaScript to communicate with it. This is a great alternative to Ajax, as it's much more efficient and reliable.
node.js can be run alongside PHP, and be used only for the Ajax-like calls.
node.js
node.js socket tutorial
There is nothing particularly innovative here. If you make low-frequency calls to the data and want super simple access, use files. But nowadays it is usually better to use a database; SQLite is fine, I think. If you need more performance, use MySQL or a NoSQL solution. Tools are made to solve problems, so use the right tool for your purpose.
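If SQLite turns out to be enough, a minimal endpoint along these lines would serve the AJAX call; the database file name and the items table/columns are assumptions for illustration.

    <?php
    // Minimal AJAX endpoint backed by SQLite. Database file name and the
    // items table/columns are assumptions.
    $id = isset($_GET['id']) ? (int) $_GET['id'] : 0;   // "a number only will be accepted"

    $pdo  = new PDO('sqlite:' . __DIR__ . '/items.sqlite');
    $stmt = $pdo->prepare('SELECT name, price FROM items WHERE id = ?');
    $stmt->execute([$id]);
    $item = $stmt->fetch(PDO::FETCH_ASSOC);

    header('Content-Type: application/json');
    echo json_encode($item ?: ['error' => 'not found']);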

PHP redirect file post stream

I'm writing an API in PHP to wrap a website's functionality and return everything as JSON/XML. I've been using curl and so far it's working great.
The website has a standard file upload POST that accepts file(s) up to 1GB.
So the problem is: how do I redirect the file upload stream to the corresponding website?
I could download the file and upload it afterwards, but my server limits me to just 20MB, and it seems like a poor solution anyway.
Is it even possible to control the stream and redirect it directly to the website?
I preserved the original answer at the bottom for posterity, but as it turns out, there is a way to do this.
What you need is a combination of the HTTP PUT method (which unfortunately isn't available in native browser forms), the PHP php://input wrapper, and a streaming PHP socket. This gets around several limitations: PHP disallows php://input for POST data, but it does nothing of the sort for PUT file data - clever!
If you're going to attempt this with Apache, you're going to need mod_actions installed and activated. You're also going to need to specify a PUT script with the Script directive in your virtual host/.htaccess.
http://blog.haraldkraft.de/2009/08/invalid-command-script-in-apache-configuration/
This allows PUT methods for only one URL endpoint. This is the file that will open the socket and forward its data elsewhere. In my example case below, this is just index.php.
I've prepared a boilerplate example of this using the Python requests module as the client sending the PUT request with an image file. If you run remote_server.py, it will open a service that just listens on a port and awaits the forwarded message from PHP. put.py sends the actual PUT request to PHP. You're going to need to set the hosts in put.py and index.php to the ones you define in your virtual host, depending on your setup.
Running put.py will open the included image file and send it to your PHP virtual host, which will, in turn, open a socket and stream the received data to the Python pseudo-service, which prints it to stdout in the terminal. A streaming PHP forwarder!
There's nothing stopping you from using any remote service that listens on a TCP port in the same way, in another language entirely. The client could be rewritten the same way, so long as it can send a PUT request.
The complete example is here:
https://github.com/DeaconDesperado/php-streamer
I actually had a lot of fun with this problem. Please let me know how it works and we can patch it together.
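Stripped down, the PHP side of that forwarder looks roughly like the sketch below; the host and port of the receiving service are placeholders, and the complete, working version lives in the repository linked above.

    <?php
    // index.php (the PUT endpoint): stripped-down sketch of the forwarder.
    // Host/port of the receiving service are placeholders.
    $in  = fopen('php://input', 'rb');                  // the raw PUT body
    $out = fsockopen('127.0.0.1', 9999, $errno, $errstr, 30);
    if ($out === false) {
        http_response_code(502);
        exit("could not reach upstream: $errstr\n");
    }

    while (!feof($in)) {
        $chunk = fread($in, 8192);                      // stream in 8 KB chunks,
        if ($chunk === false || $chunk === '') break;   // never buffering the whole file
        fwrite($out, $chunk);
    }

    fclose($in);
    fclose($out);
    echo "forwarded\n";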
Begin original answer
There is no native way in PHP to pass a file along asynchronously as it comes in with the request body without saving its state to disk in some manner. This means you are hard-bound by the memory limit on your server (20MB). The way the $_FILES superglobal is initialized after the request is received depends on this, as PHP will attempt to migrate that multipart data to a tmp directory.
Something similar can be achieved with the use of sockets, as this will circumvent the HTTP protocol at least, but if the file is passed in the HTTP request, PHP is still going to attempt to save it statefully in memory before it does anything at all with it. You'd have the tail end of the process set up with no practical way of getting that far.
The Stream library comes close, but it still relies on reading the file out of memory on the server side - it has to already be there.
What you are describing is a little bit outside of the HTTP protocol, especially since the request body is so large. HTTP is a request/response based mechanism, and one depends upon the other... it's very difficult to accomplish an in-place, streaming upload at an intermediary point, since this would imply some protocol that uploads while the bits are still streaming in.
One might argue this is more a limitation of HTTP than of PHP, and since PHP is designed expressly with the HTTP protocol in mind, you are moving outside its comfort zone.
Deployments like this are regularly attempted with great success using other scripting languages (Twisted in Python, for example; a lot of people are getting on board with NodeJS for its concurrent design pattern; there are alternatives in Ruby or Java that I know much less about).

Download single file with multiple connections (PHP)

Is it possible to download a single file with multiple parallel connections through PHP?
As far as I know, not easily, no.
You would have to build a PHP implementation of whatever protocols or additions to HTTP allow the download of files through multiple connections.
While that may not be impossible, I've never heard of such an implementation, and even if one exists, it stands to reason that doing this in PHP would be a horrible drain on server-side resources.
I recommend you look for solutions on the server side for this (i.e. Apache modules and settings for example).
Not through PHP alone. You may be able to do this using some client-side components (maybe even JavaScript), but I suspect it would be a lot of work. Many companies that distribute software, instead of having the user download installers via HTTP, deliver a small executable with a user interface that then downloads the file with all possible optimizations. Maybe you could go this way?
You can use the PHP http extension. It has a "range" option for requests, and it can wait for incoming request data in the background (open multiple socket connections, non-blocking function calls). I've even seen callbacks mentioned somewhere in there.
You also want to look into HttpRequestPool. Don't ask me for examples, though; if it's that important for your case, you'll have to write the processing logic yourself.
Or, straight from a quick Google search:
Parallel HTTP requests in PHP using PECL HTTP classes [Answer: HttpRequestPool class]
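HttpRequestPool belongs to the old pecl_http extension. With plain curl, the same idea can be sketched using curl_multi and Range requests; the URL, file size and byte ranges below are placeholders, and a real version would take the size from a HEAD request first.

    <?php
    // Sketch: fetch one file over two parallel connections using Range
    // requests and curl_multi. URL and chunk boundaries are placeholders.
    $url    = 'http://example.com/big.iso';
    $ranges = ['0-52428799', '52428800-104857599'];   // two 50 MB chunks

    $mh = curl_multi_init();
    $handles = [];
    foreach ($ranges as $i => $range) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_RANGE, $range);      // ask for just this byte range
        curl_multi_add_handle($mh, $ch);
        $handles[$i] = $ch;
    }

    do {                                              // run all transfers in parallel
        curl_multi_exec($mh, $running);
        curl_multi_select($mh);
    } while ($running > 0);

    // stitch the chunks back together in order
    $fp = fopen('big.iso', 'wb');
    foreach ($handles as $ch) {
        fwrite($fp, curl_multi_getcontent($ch));
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    fclose($fp);
    curl_multi_close($mh);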

File resource persistence in PHP

I'm developing a simple chat web application based on the MSN protocol. The server communicates with the MSN server through a file resource returned from fsockopen(). The client accesses the server via XMLHttpRequest. The server initially logs in and prints out the contact list (formatted as an HTML table), which the client receives through the responseText of the XMLHttpRequest object.
Here's the problem. The file resource that is responsible for communication with the MSN server must be kept alive in order for all chat-related functions to work (creating conversations, keeping track of offline/online state changes, etc.). However, in order for the XMLHttpRequest to complete, the PHP script must finish execution, which means the client will get no response from the XMLHttpRequest while the chat session is in progress.
What's worse, a file resource cannot be serialized, meaning I cannot simply store the chat session in a $_SESSION placeholder.
So, my question is: is there any possible way for me to 'transfer' a file resource from one script to another?
In most languages it's not possible to pass file handles between applications - AFAIK most operating systems don't allow it either.
The solution is to keep the server process running as a daemon - which means it needs to run outside of the webserver.
See
http://symcbean.blogspot.com/2010/02/php-and-long-running-processes.html
and
http://www.phpclasses.org/browse/package/5758.html
C.
A possible solution would be to have a PHP script on the server side that just doesn't end; that way, the resource corresponding to the fsockopen call would never be deleted, and the connection wouldn't be closed.
On that subject, you might want to search for the term "comet"; the basic idea is to have a script that runs forever on the server side and sends updates to the client whenever necessary.
Instead of having the browser send an Ajax request every X seconds, you'd keep an opened connection between the client and the server -- just note that, unfortunately, PHP is often said not to be the best tool for that job...
On Stack Overflow: [php] comet
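A bare-bones long-polling loop gives the flavour of that "script that doesn't end" approach; check_for_update() is a placeholder for however the server detects new data (file mtime, database row, and so on).

    <?php
    // Bare-bones long-polling sketch: the request stays open until there is
    // something to send. check_for_update() is a placeholder.
    set_time_limit(0);                 // don't let PHP kill the long request
    $timeout = 30;                     // give up after 30 s and let the client retry
    $start   = time();

    while (time() - $start < $timeout) {
        $update = check_for_update();  // placeholder for your own change detection
        if ($update !== null) {
            header('Content-Type: application/json');
            echo json_encode($update);
            exit;
        }
        usleep(250000);                // re-check every 250 ms on the server side
    }

    http_response_code(204);           // nothing happened; the client reconnects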
The resource can't survive the end of the request unless you create a PHP extension that does it (like persistent MySQL connections do with mysql_pconnect(), for example). However, you could use Comet techniques - for example the Bayeux protocol, supported by the Dojo toolkit among others - to talk to the server. That would require either a standalone server or a long-running request; in the latter case, make sure the PHP and webserver time limits don't kill the request for running too long.
Thanks everyone for the suggestions. Before I started this project I had considered using comet technology, but decided against it (PHP/Apache don't seem to handle it well). I've come up with a hacked-together solution: not the most elegant, but workable.
One PHP script is responsible for the MSN server communication; it runs as long as the user is active. It writes data to a file (email_out) and reads data from a file (email_in). Whenever the client sends an AJAX request, a separate PHP script writes any POST data to email_in and returns any data from email_out. Neither script reads or writes until it has exclusive access to the file, since the two would otherwise fight over the file resource.
I don't know - suggestions? This is certainly not the most efficient way of doing things, but it's really the only PHP/Apache solution I could think of.
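A sketch of that file handoff, using flock() so the two scripts don't fight over the files; the file paths are placeholders and the wire format (one line per message) is an assumption.

    <?php
    // Sketch of the AJAX-side script: write POST data to email_in and return
    // whatever the long-running MSN script left in email_out. flock() keeps
    // the two scripts from touching a file at the same time.
    $inFile  = '/tmp/email_in';
    $outFile = '/tmp/email_out';

    // Forward the client's message to the MSN worker.
    if (!empty($_POST['message'])) {
        $fh = fopen($inFile, 'ab');
        if (flock($fh, LOCK_EX)) {              // wait for exclusive access
            fwrite($fh, $_POST['message'] . "\n");
            flock($fh, LOCK_UN);
        }
        fclose($fh);
    }

    // Return anything the worker has produced since the last poll.
    $response = '';
    $fh = fopen($outFile, 'c+b');
    if (flock($fh, LOCK_EX)) {
        $response = stream_get_contents($fh);
        ftruncate($fh, 0);                      // consume the buffer
        flock($fh, LOCK_UN);
    }
    fclose($fh);

    echo $response;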
