Right, I have a PHP script at work where the server pings a client. The problem I am facing is that sometimes the server cannot contact the client, although when I ping the client manually it pings successfully.
The ping command I am using is:
ping -q -w 3 -c 1 <ipaddresshere>
What would be the best way of pinging a client two or three times, leaving a gap of two or three seconds after a failed ping before retrying?
As you are in a Unix environment, you can always write and then call a shell script to handle the looping and waiting, though I'm surprised that you can't do that inside of PHP.
Also, I'm not sure about your sample ping command; the two different environments I checked seem to give the options you mention different meanings than what you seem to intend. Try man ping or ping --help.
The script below should give you a framework for implementing a ping-retry, but I can't spend a lot of time on it.
cat pingCheck.sh
#!/bin/bash -vx
# Retry pinging an address up to maxPingTries times, sleeping between attempts.
IPaddr=$1
: ${maxPingTries:=3}
echo "maxPingTries=${maxPingTries}"
pingTries=0
while ${keepTryingToPing:-true} ; do
    # -c 1 sends a single packet, -W 3 waits up to 3 seconds for a reply (Linux iputils options)
    if ping -c 1 -W 3 "${IPaddr}" ; then
        keepTryingToPing=false
    else
        sleep ${sleepSecs:-3}
        if (( ++pingTries >= maxPingTries )) ; then
            printf "Exceeded count on ping attempts = %s\n" "${maxPingTries}" 1>&2
            keepTryingToPing=false
        fi
    fi
done
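If you would rather keep the retry logic in PHP itself, here is a minimal sketch of the same idea (assuming a Linux iputils ping, where -c is the packet count and -W the reply timeout in seconds; adjust the options for your platform):

<?php
// Ping an address up to $maxTries times, sleeping $gapSeconds between failed attempts.
function pingWithRetry($ip, $maxTries = 3, $gapSeconds = 3)
{
    $ip = escapeshellarg($ip);
    for ($try = 1; $try <= $maxTries; $try++) {
        exec("ping -q -c 1 -W 3 $ip", $output, $returnVar);
        if ($returnVar === 0) {
            return true;          // host answered
        }
        if ($try < $maxTries) {
            sleep($gapSeconds);   // wait before the next attempt
        }
    }
    return false;                 // all attempts failed
}
?>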
I hope this helps.
P.S. As you appear to be a new user: if you get an answer that helps you, please remember to mark it as accepted, and/or vote it up (or down) as a useful answer.
For PHP, you can try PEAR's Net_Ping package.
Here is a link guiding you through it:
http://www.codediesel.com/php/ping-a-server-using-php/
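Roughly, usage looks like this (an untested sketch based on the Net_Ping documentation; see the linked article for the full walkthrough):

<?php
require_once 'Net/Ping.php';

$ping = Net_Ping::factory();
if (PEAR::isError($ping)) {
    echo $ping->getMessage();
} else {
    // Send two probes per call; retry logic can wrap this in a loop.
    $ping->setArgs(array('count' => 2));
    var_dump($ping->ping('example.com'));
}
?>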
I am looking for a PHP script which can check whether MySQL is running or not on a CentOS/Linux VPS.
I have tried several methods, but they are not working.
First Method:
<?php
$command = '/etc/init.d/mysql restart';
if (!($dbh = mysql_connect('localhost', 'username', 'password'))) {
    $output = shell_exec($command);
    echo "<pre>$output</pre>";
} else {
    echo "nothing happened";
}
?>
Second Method:
<?php
// STOP + START = RESTART..
system('net stop "MySQL"'); /* STOP */
system('net start "MySQL"'); /* START */
?>
Neither method worked for me in PHP.
The problem is: these days my site is under too much MySQL load; I have tried several methods but I am unable to stop it from crashing.
Therefore I decided to make a PHP script to check whether MySQL is running or not.
If it is not, the PHP script will restart MySQL;
otherwise it will print "MySQL is running".
For this I've decided to set up a cron job, so that the PHP script will monitor whether MySQL is running.
I would also like to save logs at the end, to check how many times MySQL got restarted.
Please, could someone find a fix for this issue and post it here?
Thanks in advance.
First of all, allowing your web server (PHP) to execute shell commands or executable programs is inviting a security disaster.
Secondly, if your site is hanging up because of too much load in MySQL, restarting MySQL from PHP is inviting a data loss disaster. Spend some time performance tuning your database. There is quite a bit of information about this on the internet.
You can configure your server (presuming you control the server and are not using virtual hosting) to allow read-only (!) access to the .pid file that is present while MySQL is running as suggested by #user1420752. You can check that it exists to see if MySQL is running. That does not ensure that it is at all responsive, though.
Alternatively, you can run just about the cheapest of all MySQL queries:
SELECT 1
It will only run if MySQL is up, but it does not require any I/O to complete (see this).
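For example, from PHP the check could be as small as this (a minimal sketch using mysqli; the credentials are placeholders):

<?php
// A successful "SELECT 1" means the server is up and actually answering queries.
$mysqli = @new mysqli('localhost', 'username', 'password');

if ($mysqli->connect_errno || !$mysqli->query('SELECT 1')) {
    echo "MySQL is not responding\n";
    // trigger a restart or send an alert here
} else {
    echo "MySQL is running\n";
}
?>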
There are quite a few monitoring solutions for operating systems (Windows/Linux) and your database. Some good ones are open source. I would suggest setting one up to notify you when CPU or IO utilization are high, or if the MySQL server stops, and focus your efforts on MySQL tuning.
You can achieve this with a simple bash script. Schedule the script below in cron every 5 minutes, or as per your requirements.
#!/bin/bash
# Simple MySQL health check: run a trivial query and mail the result.
EMAILID=yourmail@domain.com
date > /var/log/server_check.log
mysql -uroot -proot123 -e "SELECT USER FROM mysql.user LIMIT 1" >> /var/log/server_check.log 2>&1
if [ $? -eq 0 ]
then
    echo -e "Hi,\\n\\nMy DB is running.\\nThanks,\\nDatabase Admin" | mail -s "My DB is running" $EMAILID
    exit
else
    echo -e "Hi,\\n\\nMy DB is not running, so it is being started now.\\nThanks,\\nDatabase Admin" | mail -s "My DB is not running" $EMAILID
    service mysqld start
fi
Further, this is just a check, not a solution; you should look at your queries in the slow query log and at your DB configuration to find the root cause and work on that.
If your MySQL service is blocked or has stopped running, use the following shell script to auto-start it.
#!/bin/bash
# If no mysqld process is found, start the service.
now=$(date +"%T")
echo "mysqld process count: $(pgrep mysqld | wc -l) - auto mysql start check at $now"
if [[ $(pgrep mysqld | wc -l) -eq 0 ]]
then
    sudo /etc/init.d/mysqld start
fi
Set the shell script up as a cron job:
*/2 * * * * /bin/bash /home/auto_start_mysql.sh >> /home/cronlog.txt
For more detail check this URL: http://www.easycodingclub.com/check-mysql-running-or-not-using-shell-script-or-php-script/linux-centos/
Platform/version dependent, but how about:
$mysqlIsRunning = file_exists("/var/run/mariadb/mariadb.pid");
I have a PHP script running on Debian that calls the ping command and redirects the output to a file using exec():
exec('ping -w 5 -c 5 xxx.xxx.xxx.xxx > /var/f/ping/xxx.xxx.xxx.xxx_1436538580.txt &');
The PHP script then has a while loop that scans the /var/f/ping/ folder and checks to see if the ping has finished writing to it. I tried checking the output using:
exec('lsof | grep /var/f/ping/xxx.xxx.xxx.xxx_1436538580.txt');
to see if the file was still open, but it takes lsof about 10-15 seconds to return its results, which is too slow for what we need. Ideally it should be able to check this within 2 or 3 seconds.
Is there a faster/better way to test if the ping has completed?
Using grep with lsof is probably the slowest way, as lsof will scan everything. You can narrow the scope lsof covers to a single directory by doing:
lsof +D /var/f/ping
or similar.
There's a good and easy-to-read overview of lsof uses here:
http://www.thegeekstuff.com/2012/08/lsof-command-examples/
Alternately, you could experiment with:
http://php.net/manual/en/function.fam-monitor-file.php
and see if that meets your requirements better.
You need a deferred-queue pattern for this kind of task. Run the pings in the background via cron and keep a table or file with the job statuses.
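A minimal sketch of the file-based status idea (the paths and filenames here are hypothetical): have the background shell command drop a marker file when ping exits, so the polling loop only needs a cheap file_exists() check.

<?php
// Launch ping in the background and touch a ".done" marker once it exits.
$ip   = '192.0.2.1';
$out  = '/var/f/ping/' . $ip . '_' . time() . '.txt';
$done = $out . '.done';

exec(sprintf('(ping -w 5 -c 5 %s > %s; touch %s) > /dev/null 2>&1 &',
             escapeshellarg($ip), escapeshellarg($out), escapeshellarg($done)));

// Later, in the polling loop, completion is just a file check:
if (file_exists($done)) {
    $result = file_get_contents($out);   // ping has finished writing
}
?>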
I'll try to explain my problem as a timeline:
I've tried to run several external scripts from PHP and return the exit code to the browser via an AJAX call.
A single call should start or stop a service on that machine. That works fine on this development machine:
OS: Raspbian
Webserver: nginx 1.2.1
PHP: 5.4.3.6
However, I've moved the code to a larger machine with much more power, and everything seemed to work fine except for one thing:
A single call causes php-fpm to freeze and never come back. On closer examination I found that the call created a zombie process I cannot terminate (even with sudo).
OS: Ubuntu
Webserver: nginx 1.6.2
PHP: 5.5.9
The only solution seems to be to stop the php-fpm process and restart it. Then everything works fine again, until I call that script again.
Calling PHP line:
exec("sudo ".$script, $output, $return_var);
(All variables are plain strings with no special characters.)
Start script:
#!/bin/sh
service radicale start 2>&1
The service did start, by the way, but the web server froze every time and I had to restart PHP manually, which is not acceptable (even for a web server). And this happens only for that single script, only for that service (radicale), and only with that one command (start).
Searching Google led me to the point that there is a conflict between the PHP functions exec() and session_start().
Links:
https://bugs.php.net/bug.php?id=44942
https://bugs.php.net/bug.php?id=44994
Their conclusion was that the bug could be worked around with a construct like this:
...
session_write_close();
exec("sudo ".$script, $output, $return_var);
session_start();
...
But in my opinion that is not a fix, just a helpless workaround, because you lose the ability to let the user know that his action fully succeeded, and instead let him believe an error has occurred. Even more confusing is the fact that it runs fine on the Raspberry Pi A, but not on a 64-bit machine with a much larger CPU and 8 GB of RAM.
So is there a real solution anywhere, or is this workaround the only way to solve the problem? I've read an article about PHP having problems with exec()/shell_exec() and recognizing the return value. How can that be lost? Does anyone have a guess?
Thanks for reading my long, awful English; I'm not a native speaker and was not a very attentive student in my lessons.
It is likely the case that the new machine simply is not set up the way the Raspberry Pi was.
You need to do a few things in your shell before this will work on your larger machine:
1) Allow PHP to use sudo.
sudo usermod -G sudo -a your-php-user
Note that to get the username for your-php-user, you can just run a script that says:
<?php echo get_current_user(); ?>
or alternatively:
<?php echo exec('whoami'); ?>
2) Allow that user to use sudo without a password.
sudo visudo - this command will open /etc/sudoers with a failsafe to keep you from botching anything.
Add this line to the very end:
your-php-user ALL=(ALL) NOPASSWD: /path/to/your/script,/path/to/other/script
You can put as many scripts there, separated by commas, as you need.
Now, your script should work just fine.
Again, please note that you need to change your-php-user to whatever your PHP user is.
Hope this helps!
This is not a real solution, but it's a better solution than none.
Calling a bash script with
<?php
...
exec("sudo ".$script, $output, $return_var);
...
?>
ends, in this special case only, in a zombie process. Because php-fpm keeps waiting for a result, it holds the line, never giving up or timing out while the rest of its thread stays alive. So every other request to the PHP server stays in the queue and will never be processed. That may be okay for some long-running workers, but my request was done in a few milliseconds.
I did not find the cause of this. As far as my debugging could tell, it was not the fault of the triggered Radicale process, which always returned a clean and brave 0. It seemed that the PHP process just could not get a return line from it, and so it waits and waits.
With no time left, I changed the malfunctioning script from
#!/bin/sh
service radicale start 2>&1
to
#!/bin/sh
service radicale start > /dev/null 2>&1 &
...so every returned line is sent to nirvana and all subprocesses are detached. For now the server does not hang itself up and works as desired. But the feeling that this may be a major bug in PHP stays in the back of my head, along with the hope that someday someone may defeat that bug.
I am having issues executing a VBScript through Apache (WAMP) on Windows Server 2012. I am attempting to convert a Docx to PDF, and the script runs perfectly from the command line, but fails when running through PHP. Rather than posting the vbscript, I will provide a link to it: http://bit.ly/1gngYAn
When executed through PHP as follows, WINWORD.exe starts, as does the VBScript, and it just hangs there and nothing happens. No PDF is generated (and I never see the ~temporary.docx hidden file appear in the directory).
I have tried just about every iteration of exec, system, passthru and COM ( 'WScript.Shell' ), and all have the same outcome.
To avoid "escaping" issues, I also tried executing the script through a .bat file so no arguments needed to be passed, and the outcome was the same.
Here is my current php code (convert.vbs is the code from the link above):
$obj = new COM ( 'WScript.Shell' );
$obj->Run ( 'cmd /C wscript.exe //B C:\Users\Administrator\Desktop\convert.vbs c:\wamp\www\fileconv\temp_store\52fa8272bf84f.docx', 1, false );
//I have tried different "window styles" too, and it doesn't make a difference
I also tried modifying the apache service user to run as administrator (this is not a production server), and enabled "Allow service to interact with the desktop", and it had the same outcome.
I have also made sure the directories had "full control" by everyone (reading, writing, executing, etc).
It runs perfectly if I run from the command line or with my ".bat" file.
Since it hangs (the script and Word, not Apache), I have looked at the Event Viewer in the Control Panel, but there are no events that pertain.
My questions are: firstly, why is this happening, and secondly, if the first cannot be answered, is there a way I can get a more in-depth look at what happens when the process is executed, so as to troubleshoot it further? As of now, I have no data or output to review to help me troubleshoot.
Please feel free to ask for any details. I have tried many, many iterations to try to get this to work, searched high and low, and can't seem to come up with any answers.
I appreciate your assistance,
Louis
It took me a couple of days, but here is the solution I found:
I used PsExec - http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx
The following flags are required: -h -i -accepteula -u -p
(I tried without the -h, -accepteula and -i, but no dice. This is running on Windows Server 2012 under WAMP)
Here is an example:
exec('c:\psexec\PsExec -h -i -accepteula -u Administrator -p '.$password.' C:\Windows\System32\CScript.exe //Nologo //B c:\wamp\www\fileconv\convert.vbs '.$filename)
Now it executes properly and as intended.
I hope this helps someone in the same situation!
PS The WScript.Shell method of execution I used in my question works just as well as exec(), except exec() waits until the process exits.
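For completeness, the Run() call can be made to block as well by passing true as its third argument (bWaitOnReturn), in which case it returns the process exit code; a rough sketch, reusing the same command line as above:

$shell = new COM('WScript.Shell');
$exitCode = $shell->Run(
    'c:\psexec\PsExec -h -i -accepteula -u Administrator -p ' . $password .
    ' C:\Windows\System32\CScript.exe //Nologo //B c:\wamp\www\fileconv\convert.vbs ' . $filename,
    0,      // hidden window
    true    // wait for the process to exit
);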
You should use the exec() function.
Here is the URL: http://php.net/manual/fr/function.exec.php
I wonder if someone can help with a query I have. My server recently had an email account hacked, and subsequently a large amount of spam appeared in the mail queue. I've changed the password on the email account in question and used qmHandle to remove the spam from the mail queue. I would like to prevent this from happening again, so I was wondering if it would be possible for PHP to access the mail queue, with a cron job running every hour that alerts me if the mail queue exceeds a set number of mails, so I could react accordingly. My server is Linux running Red Hat, if that makes any difference.
Many thanks in advance.
As I don't know which mail daemon you use, I can just throw out some things to think about:
To display the queue, use "mailq" (on a Debian/Postfix system)
To access it from PHP, use "sudo" (execute a command as root from a non-privileged user)
Maybe filter/group it by adding "grep" to "mailq"
Since you are using qmail, and you have qmHandle on the server, it's fairly straightforward. qmHandle -s will give you some statistics, including the number of messages in the remote queue. The remote queue contains outgoing messages that are queued for delivery. You can cobble together a one-liner using grep and cut which will give you just the count of messages in the remote queue, like so: qmHandle -s | grep remote | cut -d: -f2
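If you do want the hourly check in PHP, a minimal cron-driven sketch could look like this (the threshold, the alert address and the use of PHP's mail() are placeholders/assumptions):

<?php
// Alert if the qmail remote queue grows beyond a threshold.
$threshold = 100;
$count = (int) trim((string) shell_exec('qmHandle -s | grep remote | cut -d: -f2'));

if ($count > $threshold) {
    mail('admin@example.com',
         "Mail queue alert: $count messages in remote queue",
         "qmHandle reports $count queued remote messages (threshold: $threshold).");
}
?>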
You don't need PHP to do that. A simple bash script run by cron would do it. Something like this:
# The threshold ($seuilMails), message file ($msgFile), subject ($sujet) and the
# mailto_admins function are assumed to be defined earlier in the script.
nbline=`mailq | wc -l`
if [ $nbline -gt $seuilMails ]
then
    echo -e "\nPostfix queue threshold exceeded ($nbline lines)" >> $msgFile
    sendMail=true
else
    echo -e "\nPostfix queue normal" >> $msgFile
fi
if [ "$sendMail" == true ]; then
    mailto_admins "$sujet" "$msgFile"
fi
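Scheduled hourly from cron, that check might look like this (the script path is a placeholder):

0 * * * * /bin/bash /usr/local/bin/check_mailq.sh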