Multithreading implementation pattern - php

First of all, I'm using pthreads. The scenario is this: there are game servers that send logs over UDP to an IP and port you give them. I'm building an application that will receive those logs, process them, and insert them into a MySQL database. Since the number of servers will never go over 20-30, I'm using blocking sockets, and I'm thinking I will create a thread for each socket that will receive and process the logs for that socket. All the MySQL information that needs to be inserted into the database will be sent to a Redis queue, where it will get processed by another PHP process. Is this OK, or better: is it reliable?
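A minimal single-socket sketch of the receive loop described above, using only core stream functions (the port, log format, and queue name are made up); in the threaded design, each pthreads Thread would run a loop like this on its own socket:

```php
<?php
// Bind one UDP socket (hypothetical port 27600).
$server = stream_socket_server('udp://127.0.0.1:27600', $errno, $errstr,
                               STREAM_SERVER_BIND);
if ($server === false) {
    die("bind failed: $errstr\n");
}

// Turn one raw log datagram into a row destined for MySQL.
function parseLog(string $raw): array
{
    return ['received_at' => time(), 'line' => trim($raw)];
}

// Demo only: send ourselves one datagram so the blocking read returns.
$client = stream_socket_client('udp://127.0.0.1:27600');
fwrite($client, "L 10/21: player connected\n");

// The real worker would do this inside a while (true) loop.
$packet = stream_socket_recvfrom($server, 65535, 0, $peer);
$row = parseLog($packet);

// Hand the row to the separate writer process via a Redis queue instead
// of inserting here, e.g. $redis->rPush('loglines', json_encode($row));
echo json_encode($row), "\n";
```

The self-sent datagram is only there to make the sketch self-contained; in production the blocking `stream_socket_recvfrom()` simply waits for the next packet from a game server.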

Don't use PHP for long-running processes (the PHP script doing the inserting, in your diagram). The language is designed for web requests (which die after a couple of milliseconds, or seconds at most). You will run into memory problems all the time.

Related

Mysql Server performance using php/exec or php/mysqli

What is better (in performance/response) for PHP/MySQL requests:
Using the normal mysqli library
Using exec('mysql -u user -pPassword database -e "request" 2>&1 ')
By the way, I'm not concerned about the server's RAM, but about its CPU;
I want a fast answer to the user and I need your advice for that.
Thank you
Use the library (mysqli or PDO). Period. Full stop.
The exec approach needs to:
Launch a process (on the client)
Start up mysql (on the client)
Connect to MySQL (both client and server)
Prepare and execute the request (mostly server)
Disconnect (both)
Shut down the process (on the client)
And all of that achieves only one "request".
With mysqli:
Connect to mysql once (both)
Prepare and execute many requests (one at a time) (mostly server)
Disconnect once (both)
That is, instead of spending CPU, RAM, I/O, etc. on six steps per request, you spend those resources on connecting only once, not once per request.
In other words, resource usage is lower with the mysqli approach, at least when running multiple "requests". And the client is definitely doing less work, even for a single request.
"no server communication" -- No. That is not possible. All the database work is always done on the server. The client does little more than send requests and receive results.
mysql is a client program. mysqld is the server. They are different. Think of mysql as a standalone program that has a library equivalent to "mysqli" builtin.
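The connect-once pattern can be sketched like this, using PDO (the answer's other recommended library) with an in-memory SQLite database so the sketch runs anywhere; for MySQL you would swap in a mysql: DSN plus credentials:

```php
<?php
// Connect to MySQL once (both): here SQLite in-memory as a stand-in.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE logs (line TEXT)');

// Prepare and execute many requests over the same connection.
$stmt = $db->prepare('INSERT INTO logs (line) VALUES (?)');
foreach (['first', 'second', 'third'] as $line) {
    $stmt->execute([$line]);
}

echo $db->query('SELECT COUNT(*) FROM logs')->fetchColumn(), "\n";

// Disconnect once (both).
$db = null;
```

One connection, one prepared statement, three executions; the per-request cost is just the execute, not the six exec steps listed above.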

Do "blocking" php scripts consume system resources

I'm trying to understand better what "blocking" php is.
Say I have a server with 1 CPU thread, and I use PHP to connect to a remote database.
Is the CPU thread "blocked" while waiting for the remote DB to respond? Could another PHP process start up and run while the first php process is waiting for the remote DB's response?
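As a rough illustration of the question above (the 200 ms `usleep()` stands in for a blocking remote DB call): a blocked process accrues wall-clock time but almost no CPU time, which is exactly why the OS can schedule another PHP process on that CPU in the meantime:

```php
<?php
// Measure the CPU time this process consumes across a blocking wait.
$before = getrusage();
usleep(200000); // "blocked" for 200 ms of wall time
$after = getrusage();

// Sum user + system CPU time used during the wait, in microseconds.
$cpuMicros =
      ($after['ru_utime.tv_sec'] - $before['ru_utime.tv_sec']) * 1000000
    + ($after['ru_utime.tv_usec'] - $before['ru_utime.tv_usec'])
    + ($after['ru_stime.tv_sec'] - $before['ru_stime.tv_sec']) * 1000000
    + ($after['ru_stime.tv_usec'] - $before['ru_stime.tv_usec']);

// CPU time used while blocked is a tiny fraction of the 200 ms wall time.
var_dump($cpuMicros < 200000);
```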

Monitoring Memcached with PHP script

I've written a PHP script that I've scheduled with CRON to run every minute. The goal of the script is to verify that the memcached server is online. My strategy is simply to attempt to connect to the memcached server. If I connect successfully, I close the connection. If I do not successfully connect, I send an email alerting that memcached is offline.
My question: is this a sufficient test that memcached is up and running, or is it common practice to do more than just connect to memcached? Should I also test that I can set and retrieve a key/value pair?
Also, in the future, we may want to do more extensive monitoring of memcached so we can track memory usage, connections, number of requests, etc. Are there open source libraries for doing this from PHP? If so, which ones have performed nicely in your experience?
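The set-and-retrieve check asked about above could be sketched over the raw memcached text protocol with core PHP only (host, port, and the probe key name are assumptions); it verifies that the daemon is actually serving requests, not just accepting connections:

```php
<?php
function memcachedHealthy(string $host = '127.0.0.1', int $port = 11211): bool
{
    $fp = @fsockopen($host, $port, $errno, $errstr, 2);
    if ($fp === false) {
        return false;            // could not even connect
    }
    stream_set_timeout($fp, 2);

    // Store a 2-byte value under a probe key with a 10-second TTL.
    fwrite($fp, "set healthcheck 0 10 2\r\nok\r\n");
    $stored = fgets($fp);        // expect "STORED"

    // Read it back.
    fwrite($fp, "get healthcheck\r\n");
    $value = fgets($fp);         // "VALUE healthcheck 0 2"
    $data  = fgets($fp);         // "ok"
    fclose($fp);

    return $stored !== false && strncmp($stored, 'STORED', 6) === 0
        && $data   !== false && trim($data) === 'ok';
}

// Demo: with no daemon listening on this port, the check reports failure.
var_dump(memcachedHealthy('127.0.0.1', 11299));
```

On failure the cron script would send the alert email exactly as the connect-only version does.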
You don't need to build anything. There are a number of PHP scripts intended for monitoring, debugging and displaying stats for a Memcached server.
There are two that I know of and work well:
phpmemcacheadmin full monitoring and debugging suite
memcache.php, a simple script sort of like apc.php (bundled in the archive file for a release)

Pheanstalk (PHP client for beanstalk) - how do connections work?

I'd like some help understanding the use of pheanstalk (php beanstalk client). I have a PHP program that is executed on the server when form data is sent to it. The PHP program should then package the form data as a JSON structure and send it to a backend server process.
The thing I don't understand is the connection to the beanstalkd server. Should I create a new Pheanstalk() object each time the PHP program executes? In which case, am I incurring the cost of creating the connection? When is the connection closed (since there is no close() method in Pheanstalk)?
If the connection is persistent, is it shared among all executions of the PHP program, in which case, what happens in the case of concurrent hits? Thanks for any help.
Yes, you will have to create a new connection with Pheanstalk (or any other library) each time you start the program, since PHP starts each one fresh. The overhead is tiny though.
The Beanstalkd process is optimised to easily handle a number of connections, and will act on them atomically - you won't get a duplicate job, unless you put two of the same in there (and even then, they would have different job-ID's).
Pheanstalk doesn't even send anything to the daemon (not even to open the connection) until the first command is issued. For this reason you can't tell whether the daemon is alive until you actively make a request (in my tests, I fetch the list of current tubes). If you keep re-using the instantiated class within a running program, it will keep re-using the same connection, of course.
There's no formal close(), but unset($pheanstalk) would do the same thing by running the destructor. Again, the program is so transient, and the daemon can keep so many concurrent connections open (if it's allowed to), that it's not an issue; the connection is shut down when the program itself exits.
In short, don't worry. The overhead of connecting and sending data into, or out of, Beanstalkd will probably be a tiny fraction of any work that is done by the worker, or producer, in generating the request/response.
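A minimal producer sketch, assuming the pheanstalk/pheanstalk v4 API (the host and tube name are placeholders), illustrating the lazy connection and the destructor-based close described above:

```php
<?php
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

// No network I/O happens here; the socket opens on the first command.
$pheanstalk = Pheanstalk::create('127.0.0.1');

// Package the form data and send it to the backend worker's tube.
$pheanstalk->useTube('form-data');
$pheanstalk->put(json_encode($_POST));   // the connection opens here

// No explicit close(): the destructor closes the socket. Letting the
// script end would do the same thing.
unset($pheanstalk);
```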

MySQL trigger + notify a long-polling Apache/PHP connection

I know there are Comet server technologies that do this but I want to write something simple and home-grown.
When a record is inserted into a MySQL table, I want it to somehow communicate this data to a series of long-polled Apache connections using PHP (or whatever). So multiple people are "listening" through their browser and the second the MySQL INSERT happens, it is sent to their browser and executed.
The easy way is to have the PHP script poll the MySQL database, but that isn't really pushing from the server, and it introduces an unacceptable number of unnecessary database queries. I want to get the data from MySQL to the long-polling connection essentially without the listeners querying at all.
Any ideas on how to implement this?
I have been trying all kinds of ideas for a solution to this as well, and the only way I have found to take out the polling of SQL queries is to poll a file instead. If the file contains 0, continue looping; if it contains 1, have the loop run the SQL queries and send the results to the users. It does add another level of complexity, but I would think it means less work for MySQL (though the same for Apache or whatever looping daemon). You could also send the command to a daemon "comet" style, but from what I have seen of how sockets work, it is going to fork and loop on each request as well, so hopefully someone will find a better solution to this.
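The flag-file idea above can be sketched like this (the path is an assumption): the INSERT path flips the flag to 1, and the polling loop only touches MySQL when it sees 1, then resets the flag:

```php
<?php
$flag = sys_get_temp_dir() . '/new_rows.flag';

// Writer side, right after a successful INSERT:
file_put_contents($flag, '1');

// Poller side (one iteration of the long-poll loop):
if (@file_get_contents($flag) === '1') {
    file_put_contents($flag, '0');   // reset before querying
    // ... run the SQL query here and push the rows to listeners ...
    echo "flag was set, querying\n";
} else {
    echo "nothing new\n";
}
```

Checking a small file's contents each iteration is far cheaper than running a query against MySQL, which is the whole point of the scheme.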
This is something I have been looking for as well for many years. I have not found any functionality where the SQL server pushes out a message on INSERTs, DELETEs and UPDATEs.
TRIGGERS can run SQL on these events, but that is of no use here.
I guess you have to construct your own system. You can easily broadcast a UDP from PHP (Example in first comment), the problem is that PHP is running on the server side and the clients are static.
My guess would be that you could do a Java Applet running on the client, listening for the UDP message and then trigger an update of the page.
This was only some thoughts in the moment of writing...
MySQL probably isn't the right tool for this problem. Regilero suggested switching your DB, but an easier solution might be to use something like redis which has a pub/sub feature.
http://redis.io/topics/pubsub
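A sketch of the pub/sub approach, assuming the phpredis extension is installed (the channel name and the payload are made up):

```php
<?php
// Publisher side: runs in whatever code performs the MySQL INSERT.
$pub = new Redis();
$pub->connect('127.0.0.1', 6379);
// ... do the INSERT, then announce it:
$pub->publish('new-records', json_encode(['id' => 42, 'body' => '...']));

// Listener side: one per long-polled Apache/PHP connection. subscribe()
// blocks until a message arrives, so no database polling is needed.
$sub = new Redis();
$sub->connect('127.0.0.1', 6379);
$sub->subscribe(['new-records'], function ($redis, $channel, $message) {
    echo $message;   // flush this to the waiting browser
    exit;            // end the long poll after the first event
});
```

The listeners never touch MySQL at all; they only wake up when the publisher announces a new record.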
