My PHP script waits for a remote gate response, normally for ~20 seconds. This causes Apache httpd threads to stay in memory with an open MySQL connection and eventually to exceed the MaxClients value. How can idle resources be freed while waiting for the remote gate response?
One solution is:
1) run the remote gate request and then redirect the user to a page that keeps refreshing a certain URL, polling for the data to arrive,
2) write a rule for that URL in the nginx configuration file:
if the specific file exists,
then pass the request to Apache to serve the data,
else show the refreshing page.
3) the remote gate request saves its data in a file.
Therefore we have unlinked Apache from the script that makes the request to the remote gate, and we can make that script as tiny as possible. During the remote request, the server is used only by that script and by the light requests handled by nginx.
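Roughly, the tiny script from steps 1) and 3) could look like this (the gate URL and the spool path are only placeholders):
<?php
// sketch only: fetch the remote gate response and drop it where nginx can check for it
$token = basename($_GET['token']); // hypothetical id of the pending request
$result = file_get_contents('https://remote-gate.example/api?token=' . urlencode($token));
// write to a temporary name first so nginx never serves a half-written file
file_put_contents('/var/spool/gate/' . $token . '.tmp', $result);
rename('/var/spool/gate/' . $token . '.tmp', '/var/spool/gate/' . $token . '.json');
?>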
So it may be a good solution, but I would like to know the downsides of this approach. And maybe there are better ways.
Well, you could close your MySQL connection while waiting for the remote gate's response:
mysql_close($link);
And then reopen it once you have the response:
$link = mysql_connect('localhost', 'mysql_user', 'mysql_password');
If you need the MySQL connection only before or only after the remote gate response, I would suggest establishing the MySQL connection only once, at the correct position.
In general, mysql_connect() is a little expensive.
But compared to the 20 seconds your response needs, it is definitely cheap.
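Put together, the flow could look roughly like this (the remote gate call itself is just a placeholder):
<?php
$link = mysql_connect('localhost', 'mysql_user', 'mysql_password');
// ... run the queries you need before the remote call ...
mysql_close($link); // free the connection for the ~20 second wait

$response = file_get_contents('http://remote-gate.example/request'); // placeholder for the remote gate request

$link = mysql_connect('localhost', 'mysql_user', 'mysql_password'); // reconnect once the response is in
// ... store $response ...
mysql_close($link);
?>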
I have a PHP script that uses file_get_contents to fetch a file on a remote server on every page load. Is it possible to make a persistent connection between the two servers to speed up the time it takes to fetch this file?
Your PHP process most likely ends with each request, so you will have to handle this outside of the main PHP process.
I would recommend setting up Nginx as a proxy, and pointing your PHP script at Nginx. You can then configure Nginx to use HTTP/1.1 keep-alive, which will keep a persistent connection open if requests are coming through regularly.
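On the PHP side the only change would be the URL, so that the request goes through the local Nginx proxy instead of straight to the remote server (the port and path below are assumptions):
<?php
// before: $file = file_get_contents('http://remote.example.com/data.txt');
// after: go through the local Nginx proxy, which keeps a keep-alive
// connection open to the remote server between page loads
$file = file_get_contents('http://127.0.0.1:8080/data.txt');
?>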
The target is simple: clients post HTTP requests to query data and update records by some keys. Highest request rate: 500/sec (the higher the better, but the main goal is to fulfil this requirement while keeping the system easy to build and using fewer machines).
What I've done: nginx + php-cgi (using PHP) to serve the HTTP requests; the PHP code uses Thrift RPC to retrieve data from a DB proxy that is only used to query and update the DB (MySQL). The DB proxy uses a MySQL connection pool and Thrift's TNonblockingServer. (In my country there are two ISPs; the DB proxy will be deployed on multi-ISP machines and so will the DB, while the web servers can be deployed on single-ISP machines, based on experience.)
What troubles me: when I do a stress test (above 500/sec), I find "TSocket: Could not connect to 172.19.122.32:9090 (Connection refused [111])" in the PHP log. I think it may be caused by running out of ports (maybe an incorrect conclusion). So I planned to use a Thrift connection pool to reduce the number of Thrift connections, but there is no connection pool in PHP (there seem to be some DB connection pool techniques) and PHP does not support this feature.
So I think maybe the project was designed the wrong way from the beginning (e.g. using PHP and Thrift). Is there a good way to solve this based on what I've done? I expect most people will doubt my awkward scheme; a new scheme would help a lot.
Thanks.
"TSocket: Could not connect to 172.19.122.32:9090 (Connection refused [111])" from php log shows the ports running out because of too many short connections in a short time. So I config the tcp TIME_WAIT status to recycle port in time using:
sysctl -w net.ipv4.tcp_timestamps=1
sysctl -w net.ipv4.tcp_tw_recycle=1
It works!
What troubled me is solved, but changing these kernel parameters will affect NAT, so it's not a perfect solution. I think a better design for this system is still worth discussing.
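One direction I may still try, assuming the Apache Thrift PHP library supports the persistent flag on TSocket (I have not verified this on my version): reuse the underlying TCP connection across requests instead of opening a new one every time, which should also reduce the number of sockets stuck in TIME_WAIT.
<?php
// sketch only: class names follow the Apache Thrift PHP library layout,
// and DbProxyClient is a placeholder for the generated client class
use Thrift\Transport\TSocket;
use Thrift\Transport\TBufferedTransport;
use Thrift\Protocol\TBinaryProtocol;

$socket = new TSocket('172.19.122.32', 9090, true); // third argument asks for a persistent (pfsockopen) connection
$transport = new TBufferedTransport($socket);
$protocol = new TBinaryProtocol($transport);
$client = new DbProxyClient($protocol); // hypothetical generated client

$transport->open();
// ... query / update through $client ...
$transport->close(); // with a persistent socket this should not tear down the TCP connection
?>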
I used to have a small chat app (which was almost working) that uses PHP, jQuery and MySQL. The volume of users is very small (only my friends use it). I used the long polling method for this.
And now I am thinking about using HTML5 WebSockets for this, because it is a lot more efficient. Also, most of my friends use Google Chrome (which already supports HTML5 WebSockets). I have gone through some tutorials about HTML5 WebSockets, and I have downloaded phpWebSocket from GitHub and gone through the code. But the readme file says that the PHP page that listens for incoming connections should be run using "php -q" from the command line. So I searched for what this "-q" flag does, and I found that it runs the page in quiet mode. So, when I run this in quiet mode, what happens? Will it run endlessly? Will this running process affect system resources?
This PHP page should run the entire time, shouldn't it? Only then can connections be accepted.
I have a shared hosting package with HostGator, and they allow cron jobs too. My present chat app (which uses the long polling method) inserts all the messages into the database. When the user polls, it searches the database for any new messages and then outputs them (if any).
So, I am a bit stuck here. :(
It should be run from the command line because as you suspected, it is intended to run endlessly. It binds to a socket on the server and listens for incoming connections. It can't be reliably run from the browser.
The "-q" option tells it not to output any browser headers such as X-Powered-By: PHP or Content-Type: text/html
It will consume as much memory as PHP requires for as long as it's running. The memory footprint on startup with no clients will vary between configurations. The more connected clients, the more CPU, memory and socket descriptors you will use. It uses select(), so the socket handling is efficient.
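To make the "binds to a socket and listens" part concrete, here is a stripped-down sketch of that kind of long-running loop. It is not the phpWebSocket code and it skips the WebSocket handshake and framing entirely; the port is arbitrary.
<?php
// minimal select-based echo server, run with: php -q server.php
$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, '0.0.0.0', 12345);
socket_listen($server);

$clients = array();
while (true) {
    $read = array_merge(array($server), $clients);
    $write = null;
    $except = null;
    socket_select($read, $write, $except, null); // block until a socket is readable

    foreach ($read as $sock) {
        if ($sock === $server) {
            $clients[] = socket_accept($server); // new client connected
        } else {
            $data = socket_read($sock, 2048);
            if ($data === false || $data === '') { // client disconnected
                socket_close($sock);
                $clients = array_filter($clients, function ($c) use ($sock) { return $c !== $sock; });
            } else {
                socket_write($sock, $data); // echo the data back
            }
        }
    }
}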
Also, since you're on shared hosting, you probably won't be able to use it because your user will most likely not have the ability to bind to a port and listen for connections.
As you can see in the demo, the URL to connect the WebSocket to is ws://localhost:12345/websocket/server.php. Unless you have a webserver capable of using WebSockets, you will have to run something like phpWebSocket that acts as a server and listens on a port other than 80.
Hope that helps.
HostGator's shared hosting packages do not allow clients to bind to local ports for incoming connections. This might be part of the problem.
http://support.hostgator.com/articles/pre-sales-policies/socket-connections
I have a GPS unit that can send data over a TCP connection, but I don't have the ability to modify the message it sends so that it would come to my server in the form of an HTTP request; it can only send a message in a predefined format.
So, I have the following questions:
1) Is it possible to have Apache handle a TCP connection that doesn't come in the form of an HTTP request, and have the message that is sent be processed by a PHP script?
2) If #1 isn't possible, how would you recommend I handle the data being sent to my server?
I will potentially have hundreds, if not thousands, of these GPS units sending data to my server, so I need an efficient way to handle all of the incoming connections (which is why I wanted Apache or some other production-worthy server to handle the TCP connections). I would like to be able to process the message sent over the connection with PHP, since that is what the rest of my application runs on, and I will need to insert the data into a database (which PHP is really good at).
In case it matters, the GPS unit can send data over a UDP connection, but from what I have read Apache doesn't work with UDP connections.
Any suggestions would be welcome.
To answer your questions:
1) Not without major modification
2) Build your own server. This is easily done with several platforms and in several languages. I personally like to use the Twisted Framework because Python is relatively simple to use and the framework is very flexible.
Using Apache wouldn't be practical; it would be using a nuclear bomb when a firecracker would suffice. Creating a PHP server is quite simple on Linux with the help of xinetd.
Modify /etc/services. Say you want your service to run on port 56789. In /etc/services, add the line:
gpsservice 56789/tcp
In /etc/xinetd.d/, create a file named gpsservice:
service gpsservice
{
socket_type = stream
protocol = tcp
wait = no
user = yourusername
server = /path/to/your/script
log_on_success = HOST PID
disable = no
}
Create your PHP script (chmod it to be executable):
#!/usr/bin/php
<?php
// xinetd hands the accepted TCP connection to this script as STDIN/STDOUT
$message = fgets(STDIN); // read the raw GPS message (adjust if the unit does not send newline-terminated data)
// ... parse $message and insert it into the database here ...
fwrite(STDOUT, "OK\n"); // optional acknowledgement back to the unit
?>
Restart xinetd: service xinetd restart
You now have a quick TCP server written in PHP.
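To sanity-check it, you can connect from another machine with a few lines of PHP (the hostname here is a placeholder, the port is the one configured above):
<?php
$fp = fsockopen('your.server.example.com', 56789, $errno, $errstr, 5);
fwrite($fp, "fake GPS message\n"); // pretend to be the GPS unit
echo fgets($fp); // prints the acknowledgement the script writes back, if any
fclose($fp);
?>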
On my laptop I have an app that makes 7 AJAX GET requests to a single PHP script at about the same time (milliseconds apart). They all return successfully with the result I want.
Then I moved this script to a server (Windows Server) running Apache and PHP. However, this process hangs when I make the same 7 AJAX requests. If I make each request individually, they all come back successfully! Something doesn't want me to do all 7.
Why is this happening? What configuration variables in php.ini and httpd.conf can I check to track this down?
Thanks
I think the problem might be on the browser side.
Most browsers have a limit of 2 concurrent connections to the same server.
When you moved your application to the server, the extra latency might have made your AJAX requests overlap, whereas on localhost they were being served in quick succession.
You may want to check out these related articles:
The Dreaded 2 Connection Limit
The Two HTTP Connection Limit Issue
Circumventing browser connection limits for fun and profit
The server may have a throttler in place to keep excessive requests from coming in too quickly.
Maybe your Apache configuration limits the number of concurrent connections from the same IP, or Windows itself does. What version of Windows is it? What kind of Apache installation is it: standalone or part of XAMPP?