I have a PHP script that uses file_get_contents to fetch a file on a remote server on every page load. Is it possible to make a persistent connection between the two servers to speed up the time it takes to fetch this file?
Your PHP process most likely terminates at the end of each request, so you will have to handle the persistent connection outside of the main PHP process.
I would recommend setting up Nginx as a local proxy and pointing your PHP script at it. You can then configure Nginx to use HTTP/1.1 keep-alive towards the remote server, which keeps a persistent connection open as long as requests come through regularly.
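A minimal sketch of that setup, assuming the remote file lives on remote.example.com and the local proxy listens on 127.0.0.1:8080 (all names, paths, and ports here are placeholders):

# Hypothetical nginx snippet: keep idle upstream connections alive
upstream remote_files {
    server remote.example.com:80;
    keepalive 8;                        # pool of idle upstream connections
}

server {
    listen 127.0.0.1:8080;
    location / {
        proxy_pass http://remote_files;
        proxy_http_version 1.1;         # keep-alive needs HTTP/1.1 ...
        proxy_set_header Connection ""; # ... and no "Connection: close"
        proxy_set_header Host remote.example.com;
    }
}

Your PHP script then fetches through the proxy, which reuses its upstream connection across page loads:

$data = file_get_contents('http://127.0.0.1:8080/path/to/file');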
Here is some background info:
I have a dynamically loaded extension running on HHVM using FastCGI.
I have a logged-in session ready, e.g. via example.com/login.php.
I have my own TCP server running within the HHVM FastCGI process, listening on a port, e.g. 8080.
(Assume the TCP server is started when the extension is loaded and waits for WebSocket connections.)
What I want to do is re-use the session started by login.php in my own TCP server that serves the WebSocket connections. The client should already send me the PHPSESSID cookie in the HTTP headers, so all I think I need to do is use that ID to look the session up somewhere in the HHVM runtime (since we're in the same process) and check whether it exists, so that I can safely stream data to that WebSocket.
Is there an API I can call to do that?
Is there any example I can follow? Can someone help me please?
Thank you,
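For reference, a minimal userland sketch of the lookup being described, assuming default file-based sessions (session.save_handler = files); the function name is made up, and the session id is assumed to come from the PHPSESSID cookie in the WebSocket handshake:

<?php
function sessionExists($sessionId) {
    // Reject anything that doesn't look like a session id before
    // touching the filesystem.
    if (!preg_match('/^[A-Za-z0-9,-]+$/', $sessionId)) {
        return false;
    }
    $path = session_save_path() ?: sys_get_temp_dir();
    return file_exists($path . '/sess_' . $sessionId);
}

// To read the session's contents, adopt the id and start the session:
// session_id($sessionId);
// session_start();                  // populates $_SESSION for that id
// $user = isset($_SESSION['user']) ? $_SESSION['user'] : null;
// session_write_close();            // release the lock promptly

Note that session_start() locks the session file, so a long-running server should call session_write_close() as soon as it has read what it needs.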
I have a TCP socket service, MyServ, running in the background (written in Java, though that doesn't really matter), and a web server with PHP that accesses MyServ over a persistent socket (pfsockopen).
The problem is, if one PHP request stops for whatever reason, it leaves some unread data in the persistent socket, and the following PHP request gets an error when reading from this socket.
I wonder how other services with a similar scenario (like php-mysql or php-memcached) deal with this problem? More specifically, how can PHP tell that a used persistent socket is clean?
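One common approach (a sketch of the general technique, not of what the mysql or memcached drivers literally do) is to check for and drain stale bytes with a non-blocking read before writing a new request:

<?php
// Hypothetical host/port for MyServ; pfsockopen hands back the same
// underlying connection across requests while it stays open.
$fp = pfsockopen('127.0.0.1', 9000, $errno, $errstr, 5);

// A socket that is readable *before* we have sent anything is dirty:
// it holds leftovers from an aborted request (or the server hung up).
$read = array($fp); $write = null; $except = null;
if (stream_select($read, $write, $except, 0) > 0) {
    stream_set_blocking($fp, false);
    while (true) {
        $chunk = fread($fp, 8192);
        if ($chunk === false || $chunk === '') {
            break;                        // drained (or EOF)
        }
    }
    stream_set_blocking($fp, true);
    if (feof($fp)) {
        // Server closed the connection; it has to be reopened.
        $fp = fsockopen('127.0.0.1', 9000, $errno, $errstr, 5);
    }
}

fwrite($fp, "REQUEST\n");                 // now in a known-clean state
$reply = fgets($fp);

Binary protocols such as MySQL's sidestep the problem by prefixing every packet with its length and a sequence number, so the client can always resynchronize at the next message boundary.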
I'm running a dedicated server...
CentOS 6
PHP 5.3.3
Nginx 1.0.15
Nginx runs PHP via FastCGI.
The server communicates with another server using remote SQL.
A file called download.php initiates a MySQL connection, checks some details in the database, and then begins streaming bytes to the user with a Content-Disposition header.
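For reference, the shape of such a script (a sketch only; table, column, and host names are made up):

<?php
// download.php: verify access against the remote SQL server, then stream.
$link = mysql_connect('sql.example.com', 'user', 'password');
mysql_select_db('downloads', $link);
$res  = mysql_query('SELECT path FROM files WHERE id = ' . (int)$_GET['id'], $link);
$row  = mysql_fetch_assoc($res);
mysql_close($link);                      // release the SQL connection early

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . basename($row['path']) . '"');
header('Content-Length: ' . filesize($row['path']));
readfile($row['path']);                  // stream the file to the client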
No matter what I do, I cannot get more than 5 simultaneous connections downloading a file. For instance, if I download a file using a download manager, a maximum of 5 connections can be made; the rest time out.
I've set up nginx to accept up to 32 connections, and the MySQL connection is closed before the file begins to stream, so there shouldn't be any connection limit issue there.
Does anybody have any idea how I can increase the number of connections?
Perhaps an idea of what else I can check?
Thanks.
Edit /etc/init.d/php_cgi and set server_childs=32, then restart the php_cgi service.
Problem solved!
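For context, in a spawn-fcgi style init script that variable caps how many PHP worker processes are forked; the surrounding lines below are illustrative, as the exact layout varies by distribution:

# /etc/init.d/php_cgi (excerpt)
server_childs=32        # number of PHP FastCGI worker processes
# ... later the script passes it along, typically as:
#   spawn-fcgi -C $server_childs ...

With only 5 children, each tied up streaming a long download, the sixth simultaneous request finds no free worker and times out, which matches the symptom exactly.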
Most likely the server is set to restrict connections by IP address. See http://www.nakedmcse.com/Home/tabid/39/forumid/14/postid/61/scope/posts/Default.aspx for more info.
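Such a per-IP cap would look roughly like this in nginx (an illustrative snippet; the zone name is made up, and on 1.0.x the directives are limit_zone/limit_conn rather than limit_conn_zone):

http {
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    server {
        location /downloads/ {
            limit_conn perip 5;   # at most 5 concurrent connections per IP
        }
    }
}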
I am trying to figure out when exactly PHP scripts are interpreted by an Apache server via mod_php within the connection lifetime (TCP session lifetime).
PHP scripts are executed by Apache in response to HTTP requests, and an HTTP request requires a fully established TCP connection. So with mod_php the interpreter runs inside the Apache worker handling that connection, after the TCP handshake has completed and the request has been read; with keep-alive, the same connection may then carry further requests, each triggering its own script execution.
My PHP script waits for a remote gate's response, normally for ~20 seconds. This keeps Apache httpd threads alive in memory with an open MySQL connection, and eventually the MaxClients value is exceeded. How can idle resources be freed while waiting for the remote gate's response?
One solution is:
1) run the remote gate request, then redirect the user to a page that keeps refreshing a certain URL, polling for the data to arrive;
2) write a rule for that URL in the nginx configuration file: if a specific file exists, hand the request to Apache to serve the data, otherwise show the refreshing page (see the sketch below);
3) have the remote gate request save its data in a file.
This way we have unlinked Apache from the script that makes the request to the remote gate, and we can keep that script as tiny as possible. While the remote request is in flight, the server is used only by that script and by the light requests nginx answers itself.
So it may be a good solution, but I would like to know the downsides of this approach, and whether there are better ways.
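A sketch of the nginx rule in step 2, assuming the remote-gate worker drops its result into /var/results/<token>.html (paths, names, and the token scheme are all made up); try_files is the idiomatic form of the "if the file exists" check:

# Poll URL: /result/<token>
location ~ ^/result/(?<token>[0-9a-f]+)$ {
    root /var/results;
    # Serve the result file directly once the worker has written it
    # (proxying to Apache works too); otherwise fall back to the
    # auto-refreshing page.
    try_files /$token.html @waiting;
}

location @waiting {
    root /var/results;
    # Static page containing <meta http-equiv="refresh" content="2">
    try_files /waiting.html =404;
}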
Well, you could close your MySQL connection while waiting for the remote gate's response:
mysql_close($link);
And then reopen it once you've got the response:
$link = mysql_connect('localhost', 'mysql_user', 'mysql_password');
If you need the MySQL connection only before or only after the remote gate's response, I would suggest establishing it just once, at the right position.
In general mysql_connect() is a little expensive, but compared to the ~20 seconds your response takes, it is definitely cheap.
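Put together, the pattern looks like this (a sketch; call_remote_gate() is a placeholder for however the script actually talks to the remote gate):

<?php
$link = mysql_connect('localhost', 'mysql_user', 'mysql_password');
// ... read whatever the request needs from the database ...
mysql_close($link);               // free the connection before the wait

$response = call_remote_gate();   // hypothetical blocking call, ~20s

// Reconnect only now that there is something to store.
$link = mysql_connect('localhost', 'mysql_user', 'mysql_password');
// ... write $response to the database ...
mysql_close($link);

This keeps the number of open MySQL connections low, but note that the Apache worker itself is still occupied for the full ~20 seconds, so it helps with MySQL limits more than with MaxClients.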