How do I set up my server to work with PHP?

I want to use PHP with a server that I've coded myself (not Apache etc.). I guess I have to send the HTTP request and some additional data from my server to PHP, but I don't know how to make that connection or how to format that message.
I know I can run scripts with php.exe; the problem is that, done that way, it won't work with PHP sessions.

If you Google for the CGI specification, it will give you a reasonable idea of which environment variables to set up. Usually you exec the PHP process with the script filename as an argument, supplying it with the new environment variables. Are you familiar with the fork and exec* system calls?
It's a little more complicated than that, since it involves fiddling with standard input, output and error streams. To give you an idea, check out the thttpd source code, in the file libhttpd.c, function cgi_child.
If your server source code is not in C or C++, you will have to dig around for how your language spawns a child process, so that your server can run the PHP script and send the output to the browser. Some of that output will be HTTP headers, and the script may stop after those alone (e.g. an HTTP 2xx no-content or HTTP 3xx redirect response), in which case you send that back to the browser with \r\n\r\n to terminate the output.
To send the information to the PHP process, just set up the environment, and pass the script as an argument unless the process reads the SCRIPT_FILENAME environment variable you set up (see the CGI environment variables). Change into the directory where the binary is, or into the document root, whichever makes sense. Before spawning, handle incoming data (e.g. from a POST request) and let the spawned process read it from stdin (php-fpm probably reads it from the socket you set up instead, but I'm not entirely sure, so that's left as an exercise).
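Whatever language your server is written in, the flow is the same. Purely as an illustration (the binary path, script path and body below are all made up), here is that spawn-and-pipe dance sketched in PHP with proc_open:

<?php
// Rough sketch: run one request through php-cgi (all paths/values made up).
$body = 'id=123&name=title';

$env = array(
    'GATEWAY_INTERFACE' => 'CGI/1.1',
    'SCRIPT_FILENAME'   => '/var/www/script.php',   // hypothetical script
    'REQUEST_METHOD'    => 'POST',
    'CONTENT_TYPE'      => 'application/x-www-form-urlencoded',
    'CONTENT_LENGTH'    => (string) strlen($body),
    'REDIRECT_STATUS'   => '200',                   // PHP-specific requirement
);

$spec = array(
    0 => array('pipe', 'r'),   // child's stdin  <- request body
    1 => array('pipe', 'w'),   // child's stdout -> CGI response
    2 => array('pipe', 'w'),   // child's stderr
);

$proc = proc_open('/usr/bin/php-cgi', $spec, $pipes, '/var/www', $env);

fwrite($pipes[0], $body);                    // POST data goes to stdin
fclose($pipes[0]);

$cgiOutput = stream_get_contents($pipes[1]); // headers + \r\n\r\n + body
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);

thttpd's cgi_child does the same dance in C with fork, exec and dup2.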
It might be easier to run or spawn php-cgi in FastCGI mode, or php-fpm (the FastCGI Process Manager), which listens on a default or specified address and port. Run php-fpm -h for options and give it a whirl. This is definitely the way to go for session support, I think. Also make sure the process knows where to look for the php.ini file.
As well as the CGI spec, it would be helpful to read all about HTTP. Or at least a good chunk of it.


PHP redirect file post stream

I'm writing an API using PHP to wrap a website's functionality, returning everything in JSON/XML. I've been using cURL and so far it's working great.
The website has a standard file upload Post that accepts file(s) up to 1GB.
So the problem is how to redirect the file upload stream to the correspondent website?
I could download the file and upload it after that, but I'm limited by my server to just 20MB, and that seems a poor solution.
Is it even possible to control the stream and redirect it directly to the website?
I preserved the original answer at the bottom for posterity but, as it turns out, there is a way to do this.
What you need to use is a combination of the HTTP PUT method (which unfortunately isn't available in native browser forms), the PHP php://input wrapper, and a streaming PHP socket. This gets around several limitations - PHP disallows php://input for POST form data, but it does nothing of the sort with regard to PUT file data - clever!
If you're going to attempt this with Apache, you're going to need mod_actions installed and activated. You're also going to need to specify a PUT script with the Script directive in your virtualhost/.htaccess.
http://blog.haraldkraft.de/2009/08/invalid-command-script-in-apache-configuration/
This allows PUT methods on only one URL endpoint. This is the file that will open the socket and forward its data elsewhere. In my example case below, this is just index.php.
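As a rough sketch (host, port and chunk size are placeholders, error handling omitted), that index.php can boil down to something like:

<?php
// Sketch: forward an incoming PUT body to a remote TCP service, chunk by chunk.
$in  = fopen('php://input', 'rb');   // the raw PUT body, read as a stream
$out = stream_socket_client('tcp://127.0.0.1:8000', $errno, $errstr, 30);

while (!feof($in)) {
    fwrite($out, fread($in, 8192));  // 8 KB at a time keeps memory use flat
}

fclose($in);
fclose($out);
echo "forwarded\n";

Because php://input is consumed as a stream, the file never has to fit in memory or touch the disk on the PHP side.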
I've prepared a boilerplate example of this using the Python requests module as the client sending the PUT request with an image file. If you run remote_server.py, it will open a service that just listens on a port and awaits the forwarded message from PHP. put.py sends the actual PUT request to PHP. You're going to need to set the hosts in put.py and index.php to the ones you define in your virtual host, depending on your setup.
Running put.py will open the included image file and send it to your PHP virtual host, which will, in turn, open a socket and stream the received data to the Python pseudo-service, which prints it to stdout in the terminal. A streaming PHP forwarder!
There's nothing stopping you from using any remote service that listens on a TCP port in the same way, in another language entirely. The client could be rewritten the same way, so long as it can send a PUT request.
The complete example is here:
https://github.com/DeaconDesperado/php-streamer
I actually had a lot of fun with this problem. Please let me know how it works and we can patch it together.
Begin original answer
There is no native way in PHP to pass a file through asynchronously as it comes in with the request body without saving its state down to disk in some manner. This means you are hard bound by the memory limit on your server (20MB). The manner in which the $_FILES superglobal is initialized after the request is received depends on this, as PHP will attempt to migrate that multipart data to a tmp directory.
Something similar can be achieved with the use of sockets, as this will circumvent the HTTP protocol at least, but if the file is passed in the HTTP request, PHP is still going to attempt to save it statefully before it does anything at all with it. You'd have the tail end of the process set up with no practical way of getting that far.
The Stream library comes close, but it still relies on reading the file out of memory on the server side - it has to already be there.
What you are describing is a little bit outside of the HTTP protocol, especially since the request body is so large. HTTP is a request/response mechanism, and one depends upon the other... it's very difficult to accomplish an in-place, streaming upload at an intermediary point, since this would imply some protocol that uploads while the bits are still streaming in.
One might argue this is more a limitation of HTTP than of PHP, and since PHP is designed expressly with the HTTP protocol in mind, you are moving about outside its comfort zone.
Deployments like this are regularly attempted with high success using other scripting languages (Twisted in Python, for example; a lot of people are getting on board with Node.js for its concurrent design pattern; and there are alternatives in Ruby or Java that I know much less about).

Configure Apache to serialize access to a script?

Is there a way to configure Apache (any version) to serialize (and queue) requests to a specific set of scripts? Not all requests of course, just ones to a small set of particular URIs.
I can't think of any way this can be done with Apache itself, other than setting MaxClients to 1 and allowing only one server thread.
My approach to this problem would be a database or file where PHP stores whether it's already being executed for another client. If so, you could either wait or send an error message to the client.
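For example, here is a minimal sketch of the lock-file variant using flock (the lock path is made up). Drop LOCK_NB if you would rather have concurrent requests queue up and wait instead of being turned away:

<?php
// Sketch: serialize access to this script with an exclusive file lock.
$fp = fopen('/tmp/serialize-script.lock', 'c');  // hypothetical lock file
if (!flock($fp, LOCK_EX | LOCK_NB)) {            // already held by another request?
    header('HTTP/1.1 503 Service Unavailable');
    exit('busy, try again later');
}

// ... the work that must never run concurrently goes here ...

flock($fp, LOCK_UN);
fclose($fp);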

Call PHP from virtual/custom "web server"

Basically, I'm trying to figure out how PHP can be called from a "web server".
I've read the documentation, but it didn't help much.
As far as I can tell, there are three ways to invoke PHP:
via command line (eg: php -f "/path/to/script.php")
via CGI(??) / via FastCGI (???)
via a web server (eg: Apache) module
So let's start with CGI. Maybe I'm just blind, but the spec doesn't mention how on Earth the web server passes data (headers & callbacks) to the thing implementing CGI. The situation is even worse with FastCGI.
Next up, we have server-specific modules, and I don't even know what to search for there, since all leads end up nowhere.
Invoking a CGI script is pretty simple. PHP has a few peculiarities, but you basically only need to set up a list of environment variables, then call the PHP-CGI binary:
export GATEWAY_INTERFACE="CGI/1.1"
export SCRIPT_FILENAME=/path/to/script.php
export QUERY_STRING="id=123&name=title&parm=333"
export REQUEST_METHOD="GET"
export REDIRECT_STATUS=200
...
exec /usr/bin/php-cgi
Most of them are boilerplate. SCRIPT_FILENAME is how you pass the actual PHP filename to the PHP interpreter - not as an exec parameter. Crucial for PHP is also the non-standard variable REDIRECT_STATUS=200.
For a GET request you only need the environment variables. For a POST request, you simply pipe the HTTP request body as stdin to the executed php-cgi binary (and set CONTENT_LENGTH and CONTENT_TYPE to match). The returned stdout is the CGI response, consisting of an incomplete HTTP header (no status line), \r\n\r\n, and the page body.
(Just from memory. There may be a few more gotchas.)
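To turn that stdout into a full HTTP response, the server has to supply the status line itself, honouring any Status header the script emitted. A minimal sketch, assuming $cgiOutput holds everything php-cgi wrote:

<?php
// Sketch: convert CGI output (headers, blank line, body) to an HTTP response.
list($rawHeaders, $body) = explode("\r\n\r\n", $cgiOutput, 2);

$status  = '200 OK';            // default when the script sends no Status header
$headers = array();
foreach (explode("\r\n", $rawHeaders) as $line) {
    if (stripos($line, 'Status:') === 0) {
        $status = trim(substr($line, 7));   // e.g. "302 Found"
    } else {
        $headers[] = $line;
    }
}

// This string is what goes back to the client socket.
$response = "HTTP/1.1 $status\r\n" . implode("\r\n", $headers) . "\r\n\r\n" . $body;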
FastCGI is probably the best option since it's so widely used; it would give you language independence (you could drop in Ruby later, for example) and it's well documented, with lots of examples.
You could write your own Server API if you really want, but it's trickier than implementing FastCGI and has several disadvantages.
I wouldn't bother at all with straight CGI, FastCGI exists for a reason.

Apache Reverse Proxy - run a script first

I'm experimenting with some video content delivery using VLC and an Apache reverse proxy. Since VLC can support HTTP streaming, I'm sure that it will work with an Apache reverse proxy (I haven't tried this yet, but I don't see why it wouldn't work).
Before letting Apache proxy the http video stream, I would like to run a script first. Is there an option in Apache to do this?
If not, can someone think of a way for PHP to do some magic first, and then somehow redirect to the HTTP video stream, without making a VLC or Windows Media Player client cry? Done that way, the Apache reverse proxy would just have to point to the PHP script.
Either way, the idea of the script is to start the VLC streaming server.
Thanks
If you really want to do it in Apache, you can always write your own module :)
Alternatively, you can use mod_rewrite with the prg option (a rewrite map), where you basically have a rewrite rule processed by an external program. You can do whatever you want there (logging, etc.). Don't forget to set a RewriteLock file, or you will experience strange behaviour.
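As a sketch, such a prg map program could look like this in PHP (the map name, log path and mapping are invented). Apache starts the program once and feeds it one lookup key per line on stdin, expecting exactly one result per line on stdout:

#!/usr/bin/php
<?php
// Sketch: an external mod_rewrite "prg:" map, wired up with something like
//   RewriteMap streammap "prg:/usr/local/bin/streammap.php"
//   RewriteRule ^/video/(.*)$ ${streammap:$1} [L]
while (($key = fgets(STDIN)) !== false) {
    $key = rtrim($key, "\n");
    file_put_contents('/tmp/streammap.log', $key . "\n", FILE_APPEND); // logging, etc.
    echo '/streams/' . basename($key) . "\n"; // invented mapping; print "NULL" for no match
    flush();                                  // keep stdout unbuffered or Apache will hang
}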
You could also do "everything" in PHP and then use the Apache module mod_xsendfile, where you just send a header from PHP containing the location of the file on the filesystem.
It will not be disclosed to the client, but caught by the Apache module and served by Apache; your PHP process will terminate normally.
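Roughly, the PHP side of the mod_xsendfile approach reduces to this (content type and path are made up):

<?php
// Sketch: hand the actual file delivery off to mod_xsendfile.
// ... do the "magic" first, e.g. start the VLC streaming server ...
header('Content-Type: video/mp4');            // made-up content type
header('X-Sendfile: /var/media/stream.mp4');  // filesystem path, never shown to the client
exit;                                         // Apache serves the file; PHP exits normally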
These are the best out-of-the-box options I can think of.
If none of this works because you need to catch some stuff during or at the end of the transfer, you could just echo the file's contents with PHP; with correct output buffering you can achieve acceptable performance that way.
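A sketch of that echo approach, reading in chunks so the whole file never has to sit in memory (path and content type are again made up):

<?php
// Sketch: pipe a file through PHP in fixed-size chunks.
header('Content-Type: video/mp4');
$fp = fopen('/var/media/stream.mp4', 'rb');
while (!feof($fp)) {
    echo fread($fp, 8192);
    flush();                 // push each chunk out to the client right away
}
fclose($fp);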
Or you could do some logfile post-processing, if that solves your problem.

Best method for large file transfer via HTTP(S) using PHP (POST) invoked via shell

I want to set up an automated backup via PHP, so that using HTTP(S) I can POST a zip-file request to another server and send over a large .zip file. Basically, I want to back up an entire site (and its database) and have a cron job periodically transmit the file over via HTTP(S), something like
wget http://www.thissite.com/cron_backup.php?dest=www.othersite.com&file=backup.zip
The appropriate authentication security can be added afterwards...
I prefer HTTP(S) because this other site has limited use of FTP and is on a Windows box, so I figure the sure way to communicate with it is via HTTP(S); the other end would have a corresponding PHP script that would store the file.
This process needs to be completely programmatic (i.e. Flash uploaders will not work, as those need a browser; this script will run from a shell session).
Are there any generalized PHP libraries or functions that help with this sort of thing? I'm aware of the PHP script timeout issues, but I can typically alter php.ini to minimize this.
I'd personally not use wget, and just run from the shell directly for this.
Your php script would be called from cron like this:
/usr/local/bin/php /your/script/location.php args here if you want
This way you don't have to worry about yet another program (wget) to handle things. If your settings are the same at each run, just put them in a config file or directly into the PHP script.
Timeouts can be handled by this, which makes the PHP script run for an unlimited amount of time:
set_time_limit(0);
Not sure what libraries you're using, but look into cURL to do the POST; it should work fine.
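For instance, a sketch of that POST with cURL (URL, field name and file path are made up); CURLFile streams the zip from disk rather than loading it into memory:

<?php
// Sketch: POST a large zip file to the receiving script via cURL.
set_time_limit(0);  // as noted above, let the transfer run as long as it needs

$ch = curl_init('https://www.othersite.com/receive_backup.php'); // made-up URL
curl_setopt_array($ch, array(
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => array('file' => new CURLFile('/backups/backup.zip')),
    CURLOPT_RETURNTRANSFER => true,
));
$response = curl_exec($ch);
curl_close($ch);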
I think the biggest issues that would come up would be more server-related and less PHP/script-related, i.e. make sure you've got the bandwidth for it, and that your PHP script can connect to an outside server.
If it's at all possible, I'd stay away from doing large transfers over HTTP. FTP is far from ideal too - but for very different reasons.
Yes, it is possible to do this via FTP, HTTP and HTTPS using cURL, but it does not really solve any of the problems. HTTP is optimized around sending relatively small files in relatively short periods of time; when you stray from that, you end up undermining a lot of the optimization that's applied to webservers (e.g. if you've got a setting for MaxRequestsPerChild you could be artificially extending the life of processes which should have stopped, and there's the interaction between the LimitRequest* settings and max_file_size, not to mention various timeouts and the other limit settings in Apache).
A far more sensible solution is to use rsync over ssh for content/code backups, and the appropriate database replication method for the DBMS you are using - e.g. MySQL replication.
