Is there a way to configure Apache (any version) to serialize (and queue) requests to a specific set of scripts? Not all requests, of course, just ones to a small set of particular URIs.
I can't think of any way this can be done with Apache itself other than setting MaxClients to 1 and allowing only one server process.
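For reference, that would look roughly like this with the prefork MPM (Apache 2.4 renames MaxClients to MaxRequestWorkers); note it serializes every request to the whole server, not just the chosen URIs:

```apache
<IfModule mpm_prefork_module>
    StartServers         1
    MinSpareServers      1
    MaxSpareServers      1
    MaxClients           1
    MaxRequestsPerChild  0
</IfModule>
```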
My approach to this problem would be a database or file where PHP records whether the script is already being executed for another client. If so, you could either wait or send an error message to the client.
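A minimal sketch of the file variant using flock(); the lock file path is arbitrary, and blocking on the lock effectively queues the waiting requests (though the order isn't guaranteed):

```php
<?php
// Serialize access to this script with an exclusive file lock.
$lock = fopen('/tmp/my-script.lock', 'c');

if (!flock($lock, LOCK_EX | LOCK_NB)) {
    // Another request is already running this script.
    // Either wait: flock($lock, LOCK_EX);   // blocks until the lock is free
    // ...or bail out:
    http_response_code(503);
    exit('Busy, try again later.');
}

// ... do the serialized work here ...

flock($lock, LOCK_UN);
fclose($lock);
```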
I want to use PHP with a server that I've coded myself (not Apache etc.). I guess I have to send the HTTP request and some additional data from my server to PHP, but I don't know how to make that connection or how to format that message.
I know I can run scripts with php.exe; the problem is that, done that way, PHP sessions won't work.
If you Google for the CGI specification, that will give you a reasonable idea of which environment variables to set up. Usually you exec the PHP process with the script filename as an argument, supplying it with the new environment variables. Are you familiar with the fork and exec* system calls?
It's a little more complicated than that, since it involves fiddling with standard input, output and error streams. To give you an idea, check out the thttpd source code, in the file libhttpd.c, function cgi_child.
If your server source code is not in C or C++, you will have to dig around for how to spawn a child process to handle the PHP script being called in your server, and send the output to the browser. Some of that output will be HTTP headers, and the script may stop producing output after those (e.g. an HTTP 204 No Content or an HTTP 3xx redirect), in which case you send them back to the browser terminated with \r\n\r\n.
To send the information to the PHP process, set up the environment variables and possibly arguments, unless the process reads the SCRIPT_FILENAME environment variable you set up (see the CGI environment variables). Change into the directory where the binary is, or into the document root, whichever makes sense, and before spawning, handle incoming data (e.g. from a POST request) and let the spawned process read it from stdin (with php-fpm it probably reads from the socket you set up, but I'm not entirely sure, so that's left as an exercise).
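To make that concrete, here is a rough sketch in PHP (just for illustration; the same shape applies in any language) of spawning php-cgi with the CGI environment variables and feeding it the POST body on stdin. The script path, body and working directory are placeholders:

```php
<?php
$postBody = 'name=value';

$env = [
    'GATEWAY_INTERFACE' => 'CGI/1.1',
    'SERVER_PROTOCOL'   => 'HTTP/1.1',
    'REQUEST_METHOD'    => 'POST',
    'SCRIPT_FILENAME'   => '/var/www/html/target.php',
    'SCRIPT_NAME'       => '/target.php',
    'QUERY_STRING'      => '',
    'CONTENT_TYPE'      => 'application/x-www-form-urlencoded',
    'CONTENT_LENGTH'    => (string) strlen($postBody),
    'REDIRECT_STATUS'   => '200',   // php-cgi often refuses to run without this
];

$descriptors = [
    0 => ['pipe', 'r'],   // child's stdin  (we write the POST body here)
    1 => ['pipe', 'w'],   // child's stdout (CGI headers + blank line + body)
    2 => ['pipe', 'w'],   // child's stderr
];

$proc = proc_open('php-cgi', $descriptors, $pipes, '/var/www/html', $env);

fwrite($pipes[0], $postBody);
fclose($pipes[0]);

$cgiOutput = stream_get_contents($pipes[1]);   // split on "\r\n\r\n" to separate headers from body
fclose($pipes[1]);
fclose($pipes[2]);
proc_close($proc);
```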
It might be easier to run or spawn php-cgi (or php-fpm, the FastCGI Process Manager), which listens on a default or specified address and port. Run php-fpm -h for options and give it a whirl. This is definitely the way to go for session support, I think. Also make sure the process knows where to look for the php.ini file.
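For the FastCGI route, starting a listener is a one-liner; the addresses and config paths below are just examples:

```
php-cgi -b 127.0.0.1:9000                          # plain FastCGI listener
php-fpm --nodaemonize -y /etc/php/php-fpm.conf     # process manager; config path varies by system
# both accept -c <path> to point at a specific php.ini
```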
As well as the CGI spec, it would be helpful to read all about HTTP. Or at least a good chunk of it.
I have noticed that multiple users per day are being assigned the same session_id. I am using PHP 7.2 now, but looking back into the history of user sessions, this has been happening since I was using PHP 5.4.
I am just using the PHP defaults with session_start(), no custom session handler.
I have read that the session_id is a combination of the client IP and time, but given that I am using a load balancer, could that be limiting the randomness of the IP addresses?
What is the proper way to increase the uniqueness of session_ids to prevent collisions when using a load balancer?
If you are using nginx, you may want to check whether FastCGI micro-caching is enabled and disable it. This has caused similar errors before, as noted in a PHP.net bug report for PHP 7.1 running behind nginx:
Bug #75496 Session ID Collision happened few times
After the first case [of collision] we changed a hash entropy php settings in php.ini so session_id is now 48 chars but it didn't help to prevent second case.
Solution:
FastCGI micro caching at nginx that cached 200 responses together with session cookies.
Of course maybe it was set wrong at our server, but it definitely has nothing to do with PHP.
Please see:
https://bugs.php.net/bug.php?id=75496
I'm assuming you're running PHP-FPM on each web node, but I guess options 2 and 3 would probably work running PHP as an Apache module as well.
You've got a few options here:
1. Keep your load-balanced web nodes with Apache and have each one point to the same upstream PHP-FPM server. Obviously the third box running the single PHP-FPM process might need to be beefier, since it's handling PHP parsing for both web nodes.
2. Change the session storage to point to a file location (probably an NFS or SMB share) that both servers can access. I'm not sure I've ever done this, honestly, but it seems like it would work. Really, your web files should probably be on an NFS/SMB share already so you can deploy changes to only one location.
3. Spin up a Redis server and have both web nodes' PHP-FPM processes use that for sessions.
Number three is probably the best option in most cases.
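For option 3, the change can be as small as two php.ini lines on each web node, assuming the phpredis extension is installed (the address is a placeholder):

```ini
session.save_handler = redis
session.save_path = "tcp://10.0.0.5:6379"
```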
I have an AJAX image uploader that sends images to a PHP script, so that they are validated, resized and saved into a directory. The uploader allows multiple files to be uploaded at the same time. Since it allows multiple files, there can be timeouts, so I thought of increasing the execution time using set_time_limit. But I am having trouble determining how much time to set, since the default is 30 seconds. Will one minute be enough? The images upload properly on my local machine, but I suspect there will be timeout errors on a shared hosting service. Any ideas or thoughts on how others have implemented this would be valuable. Thanks.
You can set it to 5 minutes if you need to. But for obvious reasons, you don't really want to have it that high, especially for HTTP requests.
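If you do raise it, something like this at the top of the upload script is the usual shape; 300 is just an example value:

```php
<?php
set_time_limit(300);   // allow up to 5 minutes for this request

// Note that uploads are usually capped by these php.ini settings long before
// max_execution_time matters: upload_max_filesize, post_max_size, max_input_time.
```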
So... if I had the energy, this is what I'd do...
I need:
process_initializer.php
process_checker.php
client.html
the_process.php (runs as background)
...
client uploads files to process_initializer.
initializer creates a unique ID, maybe based off of time with milliseconds or some other advanced solution
initializer starts a background process, sending it necessary arguments like filenames along with the ID
initializer responds to the client with ID
client then polls process_checker to see what's going on with ID (maybe 20 second intervals - setTimeout(), whatever)
process_checker may check whether the file output_ID.txt exists (which the_process should create when it's done). If it doesn't exist, respond to the client that it's not ready; if it does, maybe send the output to the client, and then the client can do whatever it needs to.
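Very roughly, the first two files might look like this. The upload field name, the temp-file locations, and the nohup backgrounding (Unix-ish host assumed) are illustrative choices, not the only way to do it:

```php
<?php
// ---- process_initializer.php ----
$id    = uniqid('', true);                          // unique job ID (time-based is fine here)
$saved = sys_get_temp_dir() . "/upload_{$id}";

// Keep the uploaded file around after this request ends.
move_uploaded_file($_FILES['image']['tmp_name'], $saved);

// Start the worker in the background and return immediately.
exec('nohup php the_process.php ' . escapeshellarg($id) . ' ' . escapeshellarg($saved)
    . ' > /dev/null 2>&1 &');

echo json_encode(['id' => $id]);

// ---- process_checker.php ----
$id  = basename($_GET['id'] ?? '');                 // basename() guards against path tricks
$out = sys_get_temp_dir() . "/output_{$id}.txt";    // the_process.php writes this when finished

if (is_file($out)) {
    echo file_get_contents($out);                   // done: hand the result to the client
} else {
    http_response_code(202);                        // still working; client polls again later
    echo 'pending';
}
```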
When Apache runs PHP, it uses one php.ini configuration; when you run PHP from the command line or from another script, like exec('php the_process arg1 arg2'), it will use a different php.ini, referred to as the PHP CLI configuration, unless you have the CLI configured to use the same php.ini that Apache does. The important thing is that they can use different settings, so you can let CLI scripts take more time than your HTTP-called scripts.
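A quick way to check which configuration the command line is actually using (the CLI default for max_execution_time is 0, i.e. unlimited):

```
php --ini
php -r "echo ini_get('max_execution_time');"
```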
I have a PHP file on a server. When a user sends a request, that PHP file stores their values in a database. My problem is that it is storing values for only some of the requests.
If you want to ensure that a request was received, you can
Check the server access log.
No matter how your PHP script is hosted, you should be able to access this. The specifics vary depending on your situation, though.
Try logging requests to a text file when the script starts.
If your database insertion somehow fails, this should still work, so it's a possible way to ensure that the script actually gets the request.
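For the second option, a few lines at the very top of the script are enough; the log path is just an example:

```php
<?php
// Append one line per incoming request before doing anything that can fail.
file_put_contents(
    __DIR__ . '/requests.log',
    date('c') . ' ' . ($_SERVER['REMOTE_ADDR'] ?? '-') . ' ' . ($_SERVER['REQUEST_URI'] ?? '-') . PHP_EOL,
    FILE_APPEND | LOCK_EX
);
```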
Your webserver will probably log all connection attempts; find out where those logs are stored (it varies based on which webserver you use, which operating system you run it under, and how you have configured it), and then you can search for the specific URL that you are interested in.
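For example, on a typical Apache setup you could do something like this (the log path and script name are placeholders):

```
grep "save_user.php" /var/log/apache2/access.log
```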
I'm developing a simple chat web application based on the MSN protocol. The server communicates with the MSN server through a file resource returned from fsockopen(). The client accesses the server via XMLHttpRequest. The server initially logs in and prints out the contact list (formatted as an HTML table), which the client receives through the responseText of the XMLHttpRequest object.
Here's the problem: the file resource responsible for communication with the MSN server must be kept alive for all chat-related functions to work (creating conversations, keeping track of offline/online state changes, etc.). However, in order for the XMLHttpRequest to complete, the PHP script must finish execution, which means the client will get no response from the XMLHttpRequest while the chat session is in progress.
What's worse, a file resource cannot be serialized, meaning I cannot simply store the chat session in a $_SESSION[] placeholder.
So, my question is, is there any possible way for me to 'transfer' a file resource from one file to another?
In most languages it's not possible to pass file handles between applications - AFAIK most operating systems don't allow it either.
The solution is to keep the server process running as a daemon - which means it needs to run outside of the webserver.
See
http://symcbean.blogspot.com/2010/02/php-and-long-running-processes.html
and
http://www.phpclasses.org/browse/package/5758.html
C.
A possible solution would be to have a PHP script on the server side that just doesn't end; this way, the resource corresponding to the fsockopen call would never be deleted, and the connection wouldn't be closed.
For this, you might want to search for the term "comet"; the basic idea is to have a script that runs forever on the server side and sends updates to the client whenever necessary.
Instead of having the browser send an Ajax request every X seconds, you'd keep an open connection between the client and the server -- just note that, unfortunately, PHP is often said not to be the best tool for that job...
On Stack Overflow: [php] comet
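As a rough illustration of the idea, here is a long-polling sketch (simpler than a true streaming comet setup); the file path and timings are placeholders:

```php
<?php
// Hold the request open until there is something to send or ~50 seconds pass.
set_time_limit(60);
$deadline = time() + 50;
$outFile  = '/tmp/chat_updates.txt';   // example path

while (time() < $deadline) {
    clearstatcache();                  // is_file()/filesize() results are cached otherwise
    if (is_file($outFile) && filesize($outFile) > 0) {
        readfile($outFile);            // push the pending data to the client
        exit;
    }
    usleep(250000);                    // re-check every 250 ms
}
http_response_code(204);               // nothing new this time; the client re-polls
```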
The resource can't survive the end of the request unless you create a PHP extension that keeps it alive (as persistent MySQL connections do with mysql_pconnect(), for example). However, you could use Comet techniques, for example the Bayeux protocol supported by the Dojo toolkit among others, to talk to the server. That would require either a standalone server or a long-running request; in the latter case, ensure that the PHP and webserver time limits won't kill the request for running too long.
Thanks everyone for the suggestions. Before I started this project I had considered using comet technology, but decided against it (PHP/Apache don't seem to handle it well). I've come up with a hacked-together solution; not the most elegant, but workable.
One PHP script is responsible for the MSN server communication; it runs as long as the user is active. It writes data to one file (email_out) and reads data from another (email_in). Whenever the client sends an AJAX request, a separate PHP script writes any POST data to email_in and returns any data from email_out. Neither script reads or writes until it has exclusive access to the file, since they will be fighting over the file resource.
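Roughly, the hand-off script could look like this; the paths are examples, and flock() is one way to handle the contention:

```php
<?php
// ajax_endpoint.php - sketch of the hand-off between the client and the
// long-running MSN script, arbitrated with file locks.
$in  = fopen('/tmp/email_in',  'a');    // append-only: incoming POST data
$out = fopen('/tmp/email_out', 'c+');   // read/write: output from the long-running script

// Append whatever the client POSTed for the long-running script to pick up.
flock($in, LOCK_EX);
fwrite($in, file_get_contents('php://input') . PHP_EOL);
flock($in, LOCK_UN);
fclose($in);

// Return (and clear) anything the long-running script has written so far.
flock($out, LOCK_EX);
echo stream_get_contents($out);
ftruncate($out, 0);
flock($out, LOCK_UN);
fclose($out);
```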
I don't know - suggestions? This is certainly not the most efficient way of doing things, but it's really the only PHP/Apache solution I could think of.