I'm trying to better understand what "blocking" PHP is.
Say I have a server with 1 CPU thread, and I use PHP to connect to a remote database.
Is the CPU thread "blocked" while waiting for the remote DB to respond? Could another PHP process start up and run while the first PHP process is waiting for the remote DB's response?
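As a rough illustration (a minimal sketch, assuming the mysqli extension and a reachable MySQL server; the credentials are hypothetical and SELECT SLEEP(2) only simulates a slow remote query):

<?php
// Sketch: a blocking DB call consumes wall-clock time but almost no CPU time
// in the waiting PHP process, so the OS is free to schedule other PHP
// processes on that CPU thread in the meantime.
// Hypothetical credentials; SELECT SLEEP(2) just simulates a slow remote query.
$mysqli = new mysqli('db.example.com', 'user', 'secret', 'test');

$wallStart = microtime(true);
$cpuStart  = getrusage();

// The MySQL server waits 2 seconds; the PHP process simply blocks on the
// socket read until the result arrives.
$mysqli->query('SELECT SLEEP(2)');

$cpuEnd      = getrusage();
$wallElapsed = microtime(true) - $wallStart;
$cpuElapsed  = ($cpuEnd['ru_utime.tv_sec'] - $cpuStart['ru_utime.tv_sec'])
             + ($cpuEnd['ru_utime.tv_usec'] - $cpuStart['ru_utime.tv_usec']) / 1e6;

printf("wall: %.2fs, CPU: %.4fs\n", $wallElapsed, $cpuElapsed);
// Expected: wall is roughly 2s, CPU is close to 0s (blocked, not busy).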
First of all, I'm using pthreads. So the scenario is this: there are game servers that send logs over UDP to an IP and port you give them. I'm building an application that will receive those logs, process them and insert them into a MySQL database. Since I'm using blocking sockets (the number of servers will never go over 20-30), I'm thinking that I will create a thread for each socket, which will receive and process the logs for that socket. All the MySQL information that needs to be inserted into the database will be sent to a Redis queue, where it will get processed by another PHP process. Is this OK? More importantly, is it reliable?
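As a rough illustration of that layout, here is a minimal sketch of one receiver thread, assuming the pthreads and phpredis extensions on a thread-safe PHP CLI build; the class name, ports and the game:logs queue key are made up:

<?php
// Sketch: one thread per game server, blocking on its own UDP socket and
// pushing raw log lines onto a Redis list for a separate PHP worker to
// insert into MySQL. Class name, ports and queue key are placeholders.
class LogReceiver extends Thread
{
    private $port;

    public function __construct($port)
    {
        $this->port = $port;
    }

    public function run()
    {
        $socket = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
        socket_bind($socket, '0.0.0.0', $this->port);

        // Each thread gets its own Redis connection.
        $redis = new Redis();
        $redis->connect('127.0.0.1', 6379);

        while (true) {
            // Blocking receive: the thread sleeps until a datagram arrives,
            // so it costs essentially no CPU while idle.
            socket_recvfrom($socket, $line, 65535, 0, $ip, $srcPort);
            $redis->rPush('game:logs', $line);
        }
    }
}

$threads = array();
foreach (array(27961, 27962, 27963) as $port) { // one port per game server
    $threads[$port] = new LogReceiver($port);
    $threads[$port]->start();
}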
Don't use PHP for long-running processes (the PHP script that does the inserting in your diagram). The language is designed for web requests, which die after a couple of milliseconds or at most a few seconds. You will run into memory problems all the time.
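If you do run a long-lived PHP worker anyway, one common mitigation (a sketch only; the 100 MB threshold and processOneJob() are placeholders) is to let the worker watch its own memory and exit, relying on a supervisor such as supervisord or a cron wrapper to restart it:

<?php
// Sketch: a long-running worker that polices its own memory footprint and
// exits cleanly when it grows too large; an external supervisor restarts it.
// processOneJob() is a placeholder for "pop from Redis and INSERT into MySQL".
$limitBytes = 100 * 1024 * 1024;

while (true) {
    processOneJob();
    gc_collect_cycles();                      // reclaim cyclic references

    if (memory_get_usage(true) > $limitBytes) {
        exit(0);                              // let the supervisor restart us
    }
}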
I connect to MongoDB with PHP's driver and run a map-reduce command. Sometimes the map-reduce takes a long time, and this is not a problem for me, at least for now.
The problem is that when I kill the PHP process, the map-reduce continues to work. I want that when a client disconnects, all of its processes stop too, because the results of those processes are no longer necessary.
The problem is that when I kill the php process, the map-reduce continues to work.
How can MongoDB know the PHP process was killed? All it sees is that a command came in on a connection, and that connection is still being used.
This is one reason why you SHOULDN'T run map-reduce inline in your application, and why the general recommendation is not to.
The same problem applies to web servers and their connections to browsers: a PHP process will continue running after the browser is closed.
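One way to take the map-reduce out of line, sketched here with the legacy mongo driver (the database, collection names and JavaScript functions are placeholders): run it from a CLI worker with its output written to a collection, so web requests only read pre-computed results and killing the web-facing PHP process never matters.

<?php
// Sketch (legacy "mongo" driver): issue the map-reduce from a CLI worker and
// store the result in a collection instead of running it inline in a request.
// Database, collection names and the JS functions are placeholders.
$mongo = new MongoClient('mongodb://localhost:27017');
$db    = $mongo->selectDB('game');

$db->command(array(
    'mapreduce' => 'events',
    'map'       => new MongoCode('function () { emit(this.player, 1); }'),
    'reduce'    => new MongoCode('function (k, vals) { return Array.sum(vals); }'),
    'out'       => array('replace' => 'events_per_player'),
));

// The web request only reads the pre-computed collection:
// $db->events_per_player->find();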
I use the phpseclib library: http://phpseclib.sourceforge.net/ssh/intro.html
My script communicates with a remote server via bidirectional xml stream.
It uses the library's read() function to read another chunk of data every 30 seconds. In between, my script does something else and then calls sleep().
Can it happen that my script misses some data because it was "sleeping" when the data arrived? Is that possible? How else might it miss data coming in over the stream?
If you are referring to sleep() on the PHP (client) side, then it is a question of whether the SSH client is running in your thread or in its own thread.
If it's in your thread, then yes, it can miss data; if it's in its own thread, it won't: the data will be waiting for you when you come back.
NOTE: Doing what you are trying to do will be very unstable. Some SSH servers will disconnect you after a certain amount of idle time, and a connection that doesn't send or receive any data is likely to be terminated.
If you're timing out on the client side every 30 seconds, it's possible the server times out sooner than that if no packets are read or sent.
What'd be really helpful is the command you're running, the output you're expecting and the output you're getting back. That'll make diagnosing your issues easier.
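On the timeout point, a minimal sketch assuming phpseclib 1.x: let read() itself block with a timeout instead of sleeping between calls, so data that arrives in the meantime is buffered by the library rather than showing up while you are asleep. The host, credentials and handleXmlChunk() are placeholders.

<?php
// Sketch (phpseclib 1.x): block inside read() with a timeout instead of
// sleeping between reads. Host, credentials and handleXmlChunk() are placeholders.
include 'Net/SSH2.php';

$ssh = new Net_SSH2('remote.example.com');
if (!$ssh->login('user', 'password')) {
    exit('login failed');
}

$ssh->setTimeout(30);                  // read() returns after at most 30s of silence

while (true) {
    $chunk = $ssh->read();             // blocks until data arrives or the timeout hits
    if (is_string($chunk) && $chunk !== '') {
        handleXmlChunk($chunk);        // placeholder for the XML stream handling
    }
    // do the "something else" work here instead of calling sleep()
}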
I've written some JS scripts on my school's VLE.
It uses the UWA Widget Format and to communicate with a locally-hosted PHP script, it uses a proxy and AJAX requests.
Recently we've moved the aforementioned locally-hosted server from a horrible XP-based WAMP server to a virtual Windows Server 2008 machine running IIS and FastCGI PHP.
Since then - or maybe it was before and I just didn't notice - my AJAX calls are starting to take in excess of 1 second to run.
I've run the associated PHP script's queries in phpMyAdmin and, for example, the associated getCategories SQL takes 0.00023s to run, so I don't think the problem lies there.
I've pinged the server and it consistently returns <1ms as it should for a local network server on a relatively small scale network. The VLE is on this same network.
My question is this: what steps can I take to determine where the "bottleneck" might be?
First of all, test how long your script is actually running:
Simplest way to profile a PHP script
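For example, a minimal timing sketch for the endpoint itself (the log message wording is just an example):

<?php
// Sketch: log the server-side execution time of the AJAX endpoint, so PHP
// time can be separated from network and IIS/FastCGI overhead.
$start = microtime(true);

register_shutdown_function(function () use ($start) {
    error_log(sprintf('getCategories took %.1f ms', (microtime(true) - $start) * 1000));
});

// ... existing getCategories code: connect, query, json_encode, echo ...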
Secondly, you should check the disk activity on the server. If it is running too many FastCGI processes for the amount of available RAM, it will swap and it will be very slow. If the disk activity is very high, then you know you've found your culprit. Solve it by reducing the maximum number of fastcgi processes or by increasing the amount of server RAM.
I have 3 servers with processes that require all the CPU they can get. I let these processes write their standard output to a file:
./run.sh > run.log
Would this writing slow down the process? (the ./run.sh script starts eg. a Java program and sometimes a Ruby program)
Now I want to create a web interface that displays the output from the script while it is running. I can imagine writing a PHP script that refreshes every 5 seconds, creates an SSH connection to the server and fetches the run.log file.
But wouldn't that interfere with the process or slow it down? It is really crucial that the server is able to use as much of its power as possible. Or are there better ways to handle this? Instead of creating an SSH connection every 5 seconds, maybe a persistent connection and updates with Ajax? (Security is not a requirement.)
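For reference, a sketch of that polling approach with phpseclib (host, credentials and the log path are placeholders); fetching only the tail of run.log keeps the extra load on the busy server small:

<?php
// Sketch: one short SSH exec per page refresh / AJAX call, fetching only the
// last lines of run.log. Host, credentials and path are placeholders.
include 'Net/SSH2.php';

$ssh = new Net_SSH2('worker1.example.com');
$ssh->login('monitor', 'password');

echo '<pre>' . htmlspecialchars($ssh->exec('tail -n 50 /home/worker/run.log')) . '</pre>';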
Would this writing slow down the process? (the ./run.sh script starts eg. a Java program and sometimes a Ruby program)
Maybe; if the process writes a lot of data, it can easily slow the process down, because it will likely be writing synchronously to the disk. Otherwise, you don't have to worry.
An alternative would be having a setup where the script sends the output to the machine with the web application via some kind of message service. This would avoid polling the server; whether it would be more efficient depends on the details.
A simple and efficient mechanism would be to forward stdout to a UDP socket and have the web application listen on it and temporarily store those messages in a circular buffer.
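A minimal sketch of that receiving side, assuming a long-running PHP CLI listener (the port, capacity and the way the web page reads the buffer are placeholders); on the sending side, something like ./run.sh | tee run.log | nc -u webhost 9999 could forward stdout without touching the worker's code:

<?php
// Sketch: a CLI PHP listener that keeps the last N log messages in a
// fixed-size ring (circular buffer), so memory never grows.
// Port and capacity are placeholders.
$capacity = 200;
$buffer   = array_fill(0, $capacity, '');
$next     = 0;

$socket = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_bind($socket, '0.0.0.0', 9999);

while (true) {
    socket_recvfrom($socket, $msg, 65535, 0, $ip, $port);

    // Overwrite the oldest slot.
    $buffer[$next] = $msg;
    $next = ($next + 1) % $capacity;

    // The web page would read this via some shared storage (a small file,
    // a Redis key, etc.) that this loop would also update; omitted to keep
    // the sketch short.
}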