PHP script to check whether MySQL is running or not? - php

I am looking for a PHP script which can check whether MySQL is running or not on a CentOS / Linux VPS.
I have tried several methods but they are not working.
First Method:
<?php
$command = '/etc/init.d/mysql restart';
// If the connection fails, assume MySQL is down and try to restart it.
if (!($dbh = mysql_connect('localhost', 'username', 'password'))) {
    $output = shell_exec($command);
    echo "<pre>$output</pre>";
} else {
    echo "nothing happened";
}
?>
Second Method:
<?php
// STOP + START = RESTART..
// Note: "net stop" / "net start" are Windows service commands and will
// not work on a CentOS / Linux VPS.
system('net stop "MySQL"');  /* STOP */
system('net start "MySQL"'); /* START */
?>
Neither method worked for me in PHP.
The problem is: these days my site is under heavy MySQL load, and I have tried several things but I am unable to stop it from crashing.
Therefore I decided to make a PHP script to check whether MySQL is running or not.
If not, the PHP script will restart MySQL;
otherwise it will print "MySQL is running".
For this I've decided to set up a cron job, so that the PHP script will monitor whether MySQL is running.
I am also looking to save logs at the end, to check how many times MySQL got restarted.
Please, someone, find the fix for this issue and post it here.
Thanks in advance.

First of all, allowing your web-facing PHP to execute shell commands or external programs is inviting a security disaster.
Secondly, if your site is hanging up because of too much load in MySQL, restarting MySQL from PHP is inviting a data loss disaster. Spend some time performance tuning your database. There is quite a bit of information about this on the internet.
You can configure your server (presuming you control the server and are not using virtual hosting) to allow read-only (!) access to the .pid file that is present while MySQL is running, as suggested by @user1420752. You can check that it exists to see whether MySQL is running. That does not ensure that it is at all responsive, though.
Alternatively, you can run just about the cheapest of all MySQL queries:
SELECT 1
It will only succeed if MySQL is up, and it requires no I/O to complete.
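As a hedged sketch of that probe in PHP (using mysqli; the host and credentials are placeholders, not values from the question):
<?php
// Minimal liveness probe: connect and run SELECT 1.
// Host, username and password below are placeholders.
$mysqli = @new mysqli('localhost', 'username', 'password');
if ($mysqli->connect_errno) {
    echo "MySQL is down: " . $mysqli->connect_error . "\n";
} elseif ($mysqli->query('SELECT 1') !== false) {
    echo "MySQL is running\n";
} else {
    echo "Connected, but the query failed: " . $mysqli->error . "\n";
}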
There are quite a few monitoring solutions for operating systems (Windows/Linux) and databases. Some good ones are open source. I would suggest setting one up to notify you when CPU or IO utilization is high, or when the MySQL server stops, and focusing your efforts on MySQL tuning.

You can achieve this with a simple bash script. Schedule the script below in cron to run every 5 minutes, or as per your requirement.
#!/bin/bash
EMAILID=yourmail@domain.com
date > /var/log/server_check.log
# A cheap query against the privilege table; fails if MySQL is down.
mysql -uroot -proot123 -e "SELECT USER FROM mysql.user LIMIT 1" >> /var/log/server_check.log 2>&1
if [ $? -eq 0 ]
then
    echo -e "Hi,\n\nMy DB is running.\nThanks,\nDatabase Admin" | mail -s "My DB is running" $EMAILID
    exit
else
    echo -e "Hi,\n\nMy DB is not running, so now starting it.\nThanks,\nDatabase Admin" | mail -s "My DB is not running" $EMAILID
    service mysqld start
fi
Furthermore, this is just a check, not a solution; you should look at your queries in the slow log and at your DB configuration to find the root cause and work on it.
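If you prefer to keep the check in PHP, as the question asks, here is a hedged sketch that probes, restarts on failure, and appends to a log so you can count restarts later (the init-script path, log path and credentials are assumptions; adjust them for your system):
<?php
// Sketch only: probe MySQL, restart it on failure, log each restart.
// /etc/init.d/mysqld and /var/log/mysql_watchdog.log are assumed paths.
$logFile = '/var/log/mysql_watchdog.log';
$link = @mysqli_connect('localhost', 'username', 'password');
if ($link === false) {
    shell_exec('/etc/init.d/mysqld restart');
    // One timestamped line per restart makes the restarts easy to count.
    file_put_contents($logFile, date('Y-m-d H:i:s') . " restarted MySQL\n", FILE_APPEND);
} else {
    echo "MySQL is running\n";
    mysqli_close($link);
}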

If your MySQL service is blocked or has stopped running, use the following shell script to auto-start your MySQL service.
#!/bin/bash
now=$(date +"%T")
echo "in $(pgrep mysqld | wc -l) auto mysql start $now"
# If no mysqld process is running, start the service.
if [[ $(pgrep mysqld | wc -l) == 0 ]]
then
    sudo /etc/init.d/mysqld start
fi
Set the shell script up as a cron job (every 2 minutes here):
*/2 * * * * /bin/sh /home/auto_start_mysql.sh >> /home/cronlog.txt
For more detail check this URL: http://www.easycodingclub.com/check-mysql-running-or-not-using-shell-script-or-php-script/linux-centos/

Platform/version dependent, but how about:
$mysqlIsRunning = file_exists("/var/run/mariadb/mariadb.pid");

Related

Stopping infinite loop from PHP script run in Linux terminal

I am currently following a tutorial that teaches how to create a queue in PHP. An infinite loop was created in a PHP script. I simplified the code in order to focus on the question at hand:
while(1) {
echo 'no jobs to do - waiting...', PHP_EOL;
sleep(10);
}
I use PuTTY (with an SSH connection) to connect to the Linux terminal on my shared hosting account (GoDaddy). If I run php queuefile.php, I know it will run with no problems (I already tested the code with a finite for loop instead of the infinite while loop).
QUESTION: How could I exit the infinite loop once it has started? I have already read online about the option of writing code that "checks" whether it should continue looping, with something like the following:
$bool = TRUE;
while ($bool)
{
    if (!file_exists('allow.txt')) { $bool = FALSE; }
    //... the rest of the code
}
though I am curious whether there might be a command I can type in the terminal, or a set of keys I can press, that will cause the script to terminate. If there is any way of terminating the script, or a better way to make the previous "check", I would love your feedback!
Pressing Ctrl+C should stop the program that is running in the foreground.
You could also kill it from another session: log in, run ps aux | grep my-php-script.php to confirm it is your program, then use pkill -f my-php-script.php to kill the process.
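If you want pkill (which sends SIGTERM) to end the loop cleanly instead of aborting it mid-iteration, here is a hedged sketch using the pcntl extension (CLI only; pcntl_async_signals() needs PHP 7.1+, and the extension must be enabled on your host):
<?php
// Handle SIGTERM so the loop can finish its current iteration and exit.
pcntl_async_signals(true);

$running = true;
pcntl_signal(SIGTERM, function () use (&$running) {
    $running = false; // flip the flag; the while condition does the rest
});

while ($running) {
    echo 'no jobs to do - waiting...', PHP_EOL;
    sleep(10);
}
echo 'shutting down cleanly', PHP_EOL;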
I understand that you want to set up a cron job on your server. Therefore you should log in to your server via PuTTY and create the cron job.
For example:
After logging in:
crontab -e
Then add a line in crontab format (minute, hour, day of month, month, day of week, followed by the command):
1 2 3 4 5 /path/to/command arg1 arg2

How can I run a PHP script one at a time via cron?

I have a PHP script that I call via cron every minute,
but I don't want it to run again while the first call is still loading or in progress.
I have tried using MySQL, but my problem is that sometimes the script does not finish loading, so it cannot update MySQL to say that it is finished; it gets stuck loading.
Thank you
You have to use a lockfile, or check whether it is already running, before starting a new one.
See How to prevent PHP script running more than once?
I have taken the solution there and modified it somewhat:
#!/bin/bash
# change to the name of the php-file
SCRIPT='my-php-script.php'
# get running processes containing the php-file
PIDS=$(ps aux | grep "$SCRIPT" | grep -v grep)
if [ -z "$PIDS" ]; then
    echo "Starting $SCRIPT ..."
    # change path to the script's real path
    /usr/bin/env php "/path/to/script/$SCRIPT" >> "/var/log/$SCRIPT.log" &
else
    echo "$SCRIPT already running."
fi
You should try running this on the command line first to see that it works, then add it to cron. Depending on the system, some paths might be different and the script might not find the parts necessary to run correctly.
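A pure-PHP alternative is to take an exclusive, non-blocking lock at the top of the script itself; a minimal sketch, assuming the lock path is writable (the path is a placeholder):
<?php
// Refuse to run if another instance already holds the lock.
$lock = fopen('/tmp/my-php-script.lock', 'c');
if ($lock === false || !flock($lock, LOCK_EX | LOCK_NB)) {
    echo "my-php-script.php already running.\n";
    exit(1);
}
// ... long-running work goes here ...
flock($lock, LOCK_UN);
fclose($lock);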

Slow cronjobs on CentOS 5

I have 1 cronjob that runs every 60 minutes but, for some reason, it has recently been running slow.
Env: centos5 + apache2 + mysql5.5 + php 5.3.3 / raid 10/10k HDD / 16gig ram / 4 xeon processor
Here's what the cronjob does:
1. Parse the last 60 minutes of data:
a) one process parses user agents and saves the data to the database
b) one process parses impressions/clicks on the website and saves them to the database
2. From the data in step 1:
a) build a small report and send emails to the administrator/business
b) save the report into a daily table (available in the admin section)
I now see 8 processes (of the same file) when I run the command ps auxf | grep process_stats_hourly.php (found this command on Stack Overflow).
Technically I should only have 1, not 8.
Is there any tool in CentOS, or something I can do, to make sure my cronjob runs every hour without overlapping the next one?
Thanks
Your hardware seems to be good enough to process this.
1) Check whether you already have hanging processes. Using ps auxf (see tcurvelo's answer), check whether you have one or more processes that take too many resources. Maybe you don't have enough resources left to run your cronjob.
2) Check your network connections:
If your database and your cronjob are on different servers, you should check the response time between the two machines. Maybe you have network issues that make the cronjob wait for the network to send packets back.
You can use: Netcat, Iperf, mtr or ttcp.
3) Server configuration
Is your server configured correctly? Are your OS and MySQL set up correctly? I would recommend reading these articles:
http://www3.wiredgorilla.com/content/view/220/53/
http://www.vr.org/knowledgebase/1002/Optimize-and-disable-default-CentOS-services.html
http://dev.mysql.com/doc/refman/5.1/en/starting-server.html
http://www.linux-mag.com/id/7473/
4) Check your database:
Make sure your database has the correct indexes and make sure your queries are optimized. Read this article about the EXPLAIN command.
If a query over a few hundred thousand records takes time to execute, it will affect the rest of your cronjob; if you have such a query inside a loop, even worse.
Read these articles:
http://dev.mysql.com/doc/refman/5.0/en/optimization.html
http://20bits.com/articles/10-tips-for-optimizing-mysql-queries-that-dont-suck/
http://blog.fedecarg.com/2008/06/12/10-great-articles-for-optimizing-mysql-queries/
5) Trace and optimize PHP code
Make sure your PHP code runs as fast as possible.
Read these articles:
http://phplens.com/lens/php-book/optimizing-debugging-php.php
http://code.google.com/speed/articles/optimizing-php.html
http://ilia.ws/archives/12-PHP-Optimization-Tricks.html
A good technique to validate your cronjob is to trace it:
Based on your cronjob's stages, add some debug output, including how much memory is in use and how long the last stage took to execute, e.g.:
<?php
echo "\n-------------- DEBUG --------------\n";
echo "memory (start): " . memory_get_usage(TRUE) . "\n";
$start = microtime(TRUE);
// some process
$end = microtime(TRUE);
echo "\n-------------- DEBUG --------------\n";
echo "memory after some process: " . memory_get_usage(TRUE) . "\n";
echo "executed time: " . ($end - $start) . "\n";
By doing that you can easily find which stage takes how much memory and how long it takes to execute.
6) External servers/web service calls
Does your cronjob call external servers or web services? If so, make sure those respond as fast as possible. If you request data from a third-party server and that server takes a few seconds to return an answer, it will affect the speed of your cronjob, especially if these calls are in loops.
Try that and let me know what you find.
The ps output also shows when each process was started (see the STARTED column).
$ ps auxf
USER PID %CPU %MEM VSZ RSS TTY STAT STARTED TIME COMMAND
root 2 0.0 0.0 0 0 ? S 18:55 0:00 [kthreadd]
^^^^^^^
(...)
Or you can customize the output:
$ ps axfo start,command
STARTED COMMAND
18:55 [kthreadd]
(...)
Thus, you can be sure whether they are overlapping.
You should use a lockfile mechanism within your process_stats_hourly.php script. It doesn't have to be anything overly complex: you could have PHP write the PID that started the process to a file like /var/mydir/process_stats_hourly.txt. So if it takes longer than an hour to process the stats and cron kicks off another instance of the script, the new instance can check whether the lockfile already exists; if it does, it will not run.
However, you are left with the problem of how to "re-queue" the hourly script if it did find the lock file and couldn't start.
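A hedged sketch of that PID-lockfile idea (the file path is the one from the answer; posix_getpgid() requires the posix extension, which is an assumption about your setup):
<?php
// Check for a previous instance before doing the hourly work.
$lockFile = '/var/mydir/process_stats_hourly.txt';
if (file_exists($lockFile)) {
    $pid = (int) trim(file_get_contents($lockFile));
    // posix_getpgid() returns false if no process has that PID.
    if ($pid > 0 && @posix_getpgid($pid) !== false) {
        exit("Previous run (PID $pid) is still active.\n");
    }
    // Otherwise the lock is stale (a crashed run); fall through and take over.
}
file_put_contents($lockFile, getmypid());
register_shutdown_function(function () use ($lockFile) {
    @unlink($lockFile); // release the lock when the script ends
});
// ... hourly stats processing ...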
You might use strace -p 1234, where 1234 is the relevant process ID, on one of the processes that is running too long. Perhaps you'll understand why it is so slow, or even blocked.
Is there any tool in CentOS, or something I can do, to make sure my cronjob runs every hour without overlapping the next one?
Yes. CentOS' standard util-linux package provides a command-line convenience for filesystem locking. As Digital Precision suggested, a lockfile is an easy way to synchronize processes.
Try invoking your cronjob as follows:
flock -n /var/tmp/stats.lock process_stats_hourly.php || logger -p cron.err 'Unable to lock stats.lock'
You'll need to edit paths and adjust for $PATH as appropriate. That invocation will attempt to lock stats.lock, spawning your stats script if successful, otherwise giving up and logging the failure.
Alternatively your script could call PHP's flock() itself to achieve the same effect, but the flock(1) utility is already there for you.
How often is that logfile rotated?
A log-parsing job suddenly taking longer than usual sounds like the log isn't being rotated and is now too big for the parser to handle efficiently.
Try resetting the logfile and see if the job runs faster. If that solves the problem, I recommend logrotate as a means of preventing the problem in the future.
You could add a step to the cronjob to check the output of the command above (with grep -v grep added so the grep process does not match itself):
ps auxf | grep process_stats_hourly.php | grep -v grep
Keep looping until the command returns nothing, indicating that the process isn't running, then allow the remaining code to execute.
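A hedged sketch of that wait-loop in PHP (the 30-second poll interval is an arbitrary choice):
<?php
// Poll until no other instance of the stats script is visible in ps.
do {
    $lines = []; // exec() appends to its output array, so reset each pass
    exec('ps auxf | grep process_stats_hourly.php | grep -v grep', $lines);
    $busy = !empty($lines);
    if ($busy) {
        sleep(30); // wait before checking again
    }
} while ($busy);
// ... remaining code executes once the other instance has finished ...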

Best way to ping client

Right, I have a PHP script at work where the server pings a client. The problem I am facing is that sometimes the server cannot contact the client, although when I manually ping the client it pings successfully.
The ping command I am using is this: ping -q -w 3 -c 1 <ipaddresshere>
What would be the best way of pinging the clients, maybe 2/3 times, leaving a 2/3 second gap after a failed ping before retrying?
As you are in a Unix environment, you can always write and then call a shell script to handle the looping and waiting, but I'm surprised that you can't do that inside of PHP.
Also, I'm not sure about your sample ping command; the 2 different environments I checked seem to have different meanings for the options you mention than what you seem to intend. Try man ping OR ping --help.
The script below should give you a framework for implementing a ping-retry, but I can't spend a lot of time on it.
cat pingCheck.sh
#!/bin/bash -vx
IPaddr=$1
: ${maxPingTries:=3}
echo "maxPingTries=${maxPingTries}"
pingTries=0
while ${keepTryingToPing:-true} ; do
    if ping -n 3 -r 1 ${IPaddr} ; then
        keepTryingToPing=false
    else
        sleep ${sleepSecs:-3}
        if (( ++pingTries >= maxPingTries )) ; then
            printf "Exceeded count on ping attempts = ${maxPingTries}\n" 1>&2
            keepTryingToPing=false
        fi
    fi
done
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
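In PHP itself, a hedged sketch of the same retry idea (the flags assume Linux iputils ping, with -W as the per-attempt timeout; check man ping on your system):
<?php
// Ping a host up to $maxTries times, sleeping between failed attempts.
function pingHost($ip, $maxTries = 3, $gapSeconds = 2)
{
    $ipArg = escapeshellarg($ip);
    for ($try = 1; $try <= $maxTries; $try++) {
        exec("ping -q -c 1 -W 2 $ipArg", $output, $status);
        if ($status === 0) {
            return true; // host answered
        }
        if ($try < $maxTries) {
            sleep($gapSeconds);
        }
    }
    return false;
}

var_dump(pingHost('192.0.2.10')); // placeholder address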
For PHP, you can try PEAR's Net_Ping package.
Here is a link guiding you through it:
http://www.codediesel.com/php/ping-a-server-using-php/

mysqldump and wamp

Update: I finally got this thing working, but I'm still not sure what the problem was. I am using a WAMP server that I access through a networked folder.
The problem that still exists is that, to execute the mysqldump, I have to access the PHP file from the actual machine that is hosting the WAMP server.
End of update
I am running a WAMP server and trying to use mysqldump to back up a MySQL database I have. The following is the PHP code I am using to run mysqldump:
exec("mysqldump backup -u$user -p$pass > $sql_file");
When I run the script, the page just loads indefinitely and the backup is not created.
A blank file is being created, so I know something is happening.
Extra info:
* exec() is not disabled
* PHP is not running in safe mode
Any ideas??
Win XP, WAMP, MySQL 5.0.51b
mysqldump is likely to exceed the maximum time PHP is allowed to run on your system. Try running the command in cmd, or increase max_execution_time in your php.ini.
Are you sure $pass is defined and doesn't have a space character at the start?
If it weren't, mysqldump would sit waiting for command-line entry of the password.
I had the same thing happen a while back. A co-worker pointed me to the MySQL GUI tools and I have been making backups with that. The Query Browser that comes with it is nice, too.
MySQL GUI tools
It might help to look at the stderr output from mysqldump:
// The redirection order matters: "2>&1" first sends stderr to exec()'s
// $output, then "> $sql_file" sends the dump itself to the file.
$cmd = "mysqldump backup -u$user -p$pass 2>&1 > $sql_file";
exec($cmd, $output, $return);
if ($return != 0) { // 0 is ok
    die('Error: ' . implode("\r\n", $output));
}
Also you should use escapeshellarg() if $user or $pass are user-supplied.
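For example, a minimal sketch of that escaping (the "backup" database name and the variables are the ones from the question):
<?php
// Quote user-supplied values before interpolating them into a shell command.
$cmd = sprintf(
    'mysqldump backup -u%s -p%s',
    escapeshellarg($user),
    escapeshellarg($pass)
);
exec($cmd . ' > ' . escapeshellarg($sql_file), $output, $return);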
I've also struggled with using the mysqldump utility. A few things to check/try based on my experience:
Is your server set up to allow programs to run other programs with an exec command? (My webhost's server won't let me.) Test with a different command.
Is the mysqldump utility installed? Check with whereis mysqldump.
Try adding the optimize argument: --opt
