MySQL server performance using php/exec or php/mysqli

Which is better (in performance/response time) for PHP/MySQL requests:
Using normal mysqli library
Using exec('mysql -u user -pPassword database -e "request" 2>&1 ')
By the way, I'm not concerned about the server's RAM but about its CPU.
I want a fast response for the user and need your advice on that.
Thank you

Use the library (mysqli or PDO). Period. Full stop.
The exec approach needs to:
Launch a process (on the client)
Start up mysql (on the client)
Connect to MySQL (both client and server)
Prepare and execute the request (mostly server)
Disconnect (both)
Shut down the process (on the client)
And all of that accomplishes only one "request".
With mysqli:
Connect to MySQL once (both)
Prepare and execute many requests (one at a time) (mostly server)
Disconnect once (both)
That is, instead of spending CPU, RAM, I/O, etc. on six steps to run one request, you spend those resources on connecting only once, not once per request.
In other words, resource usage is lower with the mysqli approach, at least when running multiple "requests". And the client is definitely doing less work, even for a single request.
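As a rough illustration, here is a minimal sketch of the mysqli approach, reusing a single connection for several queries (host, credentials, and table names are hypothetical); the exec() equivalent appears only as a comment for contrast:

    <?php
    // Hypothetical credentials and tables; adjust for your setup.
    $db = new mysqli('localhost', 'user', 'password', 'database');
    if ($db->connect_error) {
        die('Connect failed: ' . $db->connect_error);
    }

    // One connection, many requests -- no per-request process launch.
    $total = $db->query('SELECT COUNT(*) AS c FROM orders')->fetch_assoc()['c'];

    $stmt = $db->prepare('SELECT name FROM customers WHERE id = ?');
    $customerId = 0;
    $stmt->bind_param('i', $customerId);          // bound by reference
    foreach ([1, 2, 3] as $customerId) {
        $stmt->execute();
        $row = $stmt->get_result()->fetch_assoc();   // get_result() needs mysqlnd
        // ... use $row['name'] ...
    }
    $stmt->close();
    $db->close();

    // The exec() alternative pays the full connect/disconnect cost on every single call:
    // exec('mysql -u user -pPassword database -e "SELECT ..." 2>&1', $output);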
"no server communication" -- No. That is not possible. All the database work is always done on the server. The client does little more than send requests and receive results.
mysql is a client program. mysqld is the server. They are different. Think of mysql as a standalone program that has a library equivalent to "mysqli" built in.

Related

Is PDO::lastInsertId() in multithread single connection safe?

I read some threads here about PDO::lastInsertId() and its safety. It returns the last inserted ID from the current connection (so it's safe for a multiuser app as long as there is only one connection per user/script run).
I'm just wondering whether there is a possibility of getting an invalid ID when there is only one DB connection per long script (lots of SQL requests) on a multicore server system? The question is more theoretical than practical.
I think a PHP script runs linearly, but maybe I'm wrong.
PDO itself is not thread safe. You must provide your own thread safety if you use PDO connections from a threaded application.
The best, and in my opinion the only maintainable, way to do this is to make your connections thread-private.
If you try to use one connection from more than one thread, your MySQL server will probably throw Packet Out of Order errors.
The Last Insert ID functionality ensures multiple connections to MySQL get their own ID values even if multiple connections do insert operations to the same table.
For a typical php web application, using a multicore server allows it to handle more web-browser requests. A multicore server doesn't make the php programs multithreaded. Each php program, to handle each web request, allocates its own PDO connections. As you put it, each php script run is "linear". The multiple cores allow multiple scripts to run at the same time, but independently.
Last Insert ID is designed to be safe for that scenario.
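A minimal sketch of that per-connection behavior (DSN, credentials, and table are hypothetical): lastInsertId() reports the AUTO_INCREMENT value generated by this connection's most recent INSERT, regardless of what other connections insert into the same table:

    <?php
    // Hypothetical DSN and credentials.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'password', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $stmt = $pdo->prepare('INSERT INTO messages (body) VALUES (?)');
    $stmt->execute(['hello']);

    // The ID generated by THIS connection's last insert, even if other
    // connections are inserting into the same table concurrently.
    $id = $pdo->lastInsertId();
    echo "inserted id: $id\n";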
Under some circumstances a php program may leave the MySQL connection open when it's done so another php program may use it. This is called a persistent connection or connection pooling. It helps performance when a web site has many users connecting to it. The generic term for a reusable connection is a "serially reusable resource."
Some php programs may use threads. In this case the program must avoid letting more than one thread use the same connection at the same time, or it will get the dreaded Packet Out of Order errors.
(Virtually all machines have multiple cores.)

Multithreading implementation pattern

First of all, I'm using pthreads. So the scenario is this: there are servers of a game that send logs over UDP to an IP and port you give them. I'm building an application that will receive those logs, process them and insert them into a MySQL database. Since I'm using blocking sockets, because the number of servers will never go over 20-30, I'm thinking that I will create a thread for each socket that will receive and process logs for that socket. All the MySQL information that needs to be inserted into the database will be sent to a Redis queue, where it will get processed by another PHP process. Is this OK, or better, is it reliable?
Don't use PHP for long-running processes (the PHP script used for inserting, in your graph). The language is designed for web requests (which die after a couple of ms or at most seconds). You will run into memory problems all the time.
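For what it's worth, the queue hand-off the question describes would look roughly like the sketch below (the answer above still applies: the consumer is a long-running PHP process, so watch its memory). It assumes the phpredis extension, a hypothetical log_queue key, and hypothetical MySQL credentials:

    <?php
    // Producer side (inside a receiving thread): push a parsed log entry onto a Redis list.
    $redis = new Redis();
    $redis->connect('127.0.0.1', 6379);
    $logLine = '...';   // in reality, the UDP payload received by this thread
    $redis->lPush('log_queue', json_encode(['server' => 3, 'line' => $logLine]));

    // Consumer side (a separate PHP process): block until an entry arrives, insert into MySQL.
    $db = new mysqli('localhost', 'user', 'password', 'gamelogs');
    while (true) {
        $item = $redis->brPop(['log_queue'], 5);   // returns [key, value], or empty on timeout
        if (!$item) {
            continue;
        }
        $entry = json_decode($item[1], true);
        $stmt  = $db->prepare('INSERT INTO logs (server_id, line) VALUES (?, ?)');
        $stmt->bind_param('is', $entry['server'], $entry['line']);
        $stmt->execute();
        $stmt->close();
    }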

after client disconnects, mongodb should kill its map-reduce process

I connect to MongoDB with PHP's driver and run a map-reduce command. Sometimes the map-reduce takes a long time, and that is not a problem for me, at least for now.
The problem is that when I kill the PHP process, the map-reduce continues to work. I want all of the client's processes to stop when it disconnects, because the results of those processes are no longer needed.
The problem is that when I kill the php process, the map-reduce continues to work.
How can MongoDB know the PHP process was killed? All it sees is that a command came in on a connection and that the connection is still in use.
This is one reason why you SHOULDN'T run map-reduce inline in your application, and why it is recommended not to.
The same problem applies to web servers and browsers: a PHP process will continue running even after the browser is closed.

I would really like to use PHP and MySQL persistent connections, but how?

Apache/PHP/MySQL persistent connections have such a bad reputation because Apache handles each request in a child PHP process, each holding 1 persistent connection. When visitor numbers scale, MySQL reaches its max connections limit from all the apache/php child processes, each with 1 persistent connection. There are also the issues of temporary tables, user variables, charsets, transactions and last-insert-id.
In my situation, we don't have to deal with the latter issues, because we are only READING from MySQL. There are no updates to the db: those are handled by another set of scripts on a different machine. The scripts we want to have persistent connections for are the server end of AJAX requests, returning JSON data.
Each pageview has 5 AJAX requests, so 6 different php child processes are started on the server for each page requested (5 ajax, 1 html). Ideally, I could have ONLY 1 connection from the php/ajax server to MySQL. This connection would be shared by all php child processes.
So, how can I do this?
Should I use web server software other than Apache? nginx?
cheers
UPDATE: In this situation, the right way to connect to the MySQL server (http://bit.ly/15rBDP):
using the MySQL native driver (mysqlnd)
and mysqli
each client reconnecting using the mysql_change_user function
using MYSQLI_NO_CHANGE_USER_ON_PCONNECT in the php config ('coz we don't need to clean up).
UPDATE 2:
To clarify my question, what I want to have is: ALL php client processes connecting through only ONE persistent connection. This connection is defined, run and stored somehow (my question), but all new php client processes know about it and can use it. The problem with apache/php is that each php client process has 1 connection. If I serve 20,000 pages per minute, there will be 20,000 persistent connections. I want the 20,000 php child processes connecting to 1 unique, central, persistent connection to mysql.
You do realize that having only one (persistent) connection for all your requests effectively serializes all requests to your server. So request C has to wait for request B to finish, which has to wait for request A to finish, etc.
So having one connection turns your multi-threaded/multi-process webserver into a single-threaded application.
Read the accepted answer on this post : Which is better: mysql_connect or mysql_pconnect
Simply, using mysql persistent connections may be good or bad, depending on the hardware resources that you have as well as the way you code your applications.
A PHP-native MySQL driver (mysqlnd) is included in PHP 5.3, which has improved support for persistent connections.
http://dev.mysql.com/downloads/connector/php-mysqlnd/
http://blog.ulf-wendel.de/?p=211
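For reference, a minimal sketch (credentials hypothetical) of a persistent connection with mysqli on mysqlnd: prefixing the host with p: asks mysqli to reuse an already-open connection from the pool instead of opening a new one:

    <?php
    // The 'p:' prefix requests a persistent connection; an already-open connection
    // to this host/user/database is reused if one is available.
    $db = new mysqli('p:localhost', 'readonly_user', 'password', 'app');
    if ($db->connect_error) {
        die('Connect failed: ' . $db->connect_error);
    }

    $result = $db->query('SELECT id, payload FROM widgets LIMIT 10');
    echo json_encode($result->fetch_all(MYSQLI_ASSOC));   // fetch_all() needs mysqlnd

    // The connection stays open after the script ends, subject to
    // mysqli.max_persistent (php.ini) and max_connections (my.cnf).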

Debug MySQL's "too many connections"

I'm trying to debug an error I got on a production server. Sometimes MySQL gives up and my web app can't connect to the database (I'm getting the "too many connections" error). The server has a few thousand visitors a day, and at night I'm running a few cron jobs which sometimes do some heavy MySQL work (looping through 50,000 rows, inserting and deleting duplicates, etc.)
The server runs both apache and mysql on the same machine
MySQL has a pretty standard configuration (default max_connections)
The web app is using PHP
How do I debug this issue? Which log files should I read? How do I find the "evil" script? The strange thing is that if I restart the MySQL server it starts working again.
Edit:
Different apps/scripts are using different connectors to the database (mostly mysqli, but also Zend_Db)
First, use innotop (Google for it) to monitor your connections. It's mostly geared to InnoDB statistics, but it can be set to show all connections, including those not in a transaction.
Otherwise, the following are helpful: Use persistent connections / connection pools in your web apps. Increase your max connections.
It's not necessarily a long-running SQL query.
If you open a connection at the start of a page, it won't be released until the PHP script terminates - even if there is no query running.
You should add some stats to your pages to find out the slowest ones, and the most-hit ones. Closing the connection early would help, if possible.
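A minimal sketch of that "close early" idea (query and credentials are hypothetical): fetch everything first, close the connection, then do the slow rendering work without holding a MySQL connection:

    <?php
    $db = new mysqli('localhost', 'user', 'password', 'app');   // hypothetical credentials
    $rows = $db->query('SELECT id, title FROM articles ORDER BY id DESC LIMIT 20')
               ->fetch_all(MYSQLI_ASSOC);
    $db->close();   // connection released before any slow templating/rendering

    foreach ($rows as $row) {
        // ... render the page from $rows; no MySQL connection is held here ...
    }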
Try using persistent connections (mysql_pconnect), it will help reduce the server load caused by constantly opening and closing MySQL connections.
The starting point is probably to use mysqladmin processlist to get a list of the processes on the mysql server. The next step depends on what you find.
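Besides mysqladmin processlist on the command line, the same information can be pulled from PHP; a minimal sketch (credentials hypothetical) that lists current connections and compares the count to the configured limit:

    <?php
    $db = new mysqli('localhost', 'root', 'password', 'mysql');   // hypothetical credentials

    // Who is connected right now, and what are they running?
    foreach ($db->query('SHOW FULL PROCESSLIST') as $proc) {
        printf("%s\t%s\t%s\t%s\n", $proc['Id'], $proc['User'], $proc['Time'], $proc['Info']);
    }

    // How close are we to max_connections?
    $max = $db->query("SHOW VARIABLES LIKE 'max_connections'")->fetch_assoc()['Value'];
    $now = $db->query("SHOW STATUS LIKE 'Threads_connected'")->fetch_assoc()['Value'];
    echo "connections: $now / $max\n";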
