Is there any command that I can run from PHP to kill a specific process on a remote Ubuntu server?
Also, is there any command to list all the PHP processes running on a remote Ubuntu server?
I was using
ps aux | grep php
to list all PHP processes after logging into the remote machine over SSH. But is it possible to get the process list from the local machine itself?
Note: I am running a set of cron jobs every 15 minutes and keeping the process ID of each in a DB. There are scenarios where I need to kill a certain process ID from my monitoring tool (on another server).
If there is such a command, I can call it from my PHP script.
Thanks!
This question could be on askubuntu.
If you have SSH access to your server, you can log onto it.
Then, you can do sudo killall php to kill all PHP processes, or sudo kill <idprocess> to kill a specific one.
Note that if you have an Apache server running, it can create new processes. Turning Apache off prevents it from creating new ones.
EDIT:
According to this post, you can use a package to supply the password directly, without interactivity. Thanks to that, your script can sign in to the server, kill the processes, and sign out again.
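For completeness, here is a rough sketch of how that could look from the local machine, assuming the PECL ssh2 extension is available; the host, credentials and PID are placeholders, and key-based authentication would be preferable to a hard-coded password:
<?php
// Hypothetical example: list PHP processes and kill one on a remote
// Ubuntu server from a local PHP script, using the PECL ssh2 extension.
$connection = ssh2_connect('remote.example.com', 22);
if (!$connection || !ssh2_auth_password($connection, 'deploy', 'secret')) {
    die("SSH connection/authentication failed\n");
}

// List all PHP processes on the remote machine ([p]hp avoids matching grep itself).
$stream = ssh2_exec($connection, 'ps aux | grep [p]hp');
stream_set_blocking($stream, true);
echo stream_get_contents($stream);

// Kill a specific process ID (e.g. one your cron jobs stored in the DB).
$pidToKill = 12345; // placeholder
$stream = ssh2_exec($connection, 'kill ' . (int) $pidToKill);
stream_set_blocking($stream, true);
stream_get_contents($stream); // wait for the command to finish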
You can use the posix_kill() function to send any signal to a process. Depending on your privileges, you may or may not see the results you expect.
The SIGKILL constant will send (as you'd expect) SIGKILL.
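A minimal sketch (the PID is a placeholder; the SIGKILL constant is defined when the pcntl extension is loaded, otherwise you can pass the numeric value 9):
<?php
$pid = 12345; // placeholder: e.g. a PID you stored in your DB
if (posix_kill($pid, SIGKILL)) {
    echo "Signal sent to $pid\n";
} else {
    echo "Failed: " . posix_strerror(posix_get_last_error()) . "\n";
}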
You really shouldn't be executing anything involving a combination of exec() and sudo under any circumstances. It leaves your setup vulnerable to a lot of potentially nasty people.
Related
I am trying to start a PHP process (under Apache, by calling a page from a browser) that will survive shutting down the Apache server (sudo service apache2 stop). Even when I make sure the created process has no parent (parent 1) and has its own session, the process still somehow dies when I stop (or restart) Apache.
I created a test.php file:
<?php
exec('setsid nohup sleep 1000 > /dev/null 2>/dev/null &');
?>
When doing an HTTP GET to this test.php, we indeed get an immediate OK response, and the process stays alive.
But, when we do:
sudo service apache2 stop
The sleep process dies.
How can someone kill a process when the process doesn't belong to its group or session, and when the process is not a child?
Apache probably keeps a list of the processes it spawned and kills them individually, not as a group. In that case, every process in the list gets kill(2)ed. But see the last paragraph below for a possible workaround.
Look at the man page for the kill(2) system call. In the ERRORS section, the only ways it can fail are:
EINVAL, meaning an invalid signal number has been passed. Doesn't apply here.
ESRCH, the process (or process group) doesn't exist. Doesn't apply either.
EPERM, you don't have permission to send the signal. This applies here, but the only processes (and this has nothing to do with process hierarchies or parental relationships) you are allowed to send signals to are those whose real/saved user ID equals the effective user ID of the sending process. So, as Apache has a registry of all the processes it launches, it is normal that it is able to kill the process.
Anyway, have you tried creating a process, creating a subprocess from that process, and executing the setsid in the grandchild process? That way, there's no chance for Apache to have registered it in its list of spawned processes. I have not tried that, but it could work.
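For what it's worth, here is a rough, untested sketch of that double-fork idea in PHP. Note that the pcntl extension is normally only available to CLI PHP, not to mod_php, so under Apache you would exec() a CLI helper script like this one; the sleep command is just a placeholder for the real job.
<?php
// double_fork.php - untested sketch of the double-fork + setsid idea.
// Run via CLI PHP, e.g. exec('php double_fork.php > /dev/null 2>&1 &');
$pid = pcntl_fork();
if ($pid === -1) {
    exit(1);                              // fork failed
}
if ($pid === 0) {                         // first child
    if (pcntl_fork() === 0) {             // grandchild
        posix_setsid();                   // detach into a new session
        pcntl_exec('/bin/sleep', ['1000']); // long-running placeholder job
        exit(1);                          // only reached if exec() fails
    }
    exit(0);                              // first child exits immediately
}
pcntl_waitpid($pid, $status);             // parent reaps the first child
echo "launched\n";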
From the FreeBSD kill(2) manual page:
For a process to have permission to send a signal to a process designated
by pid, the user must be the super-user, or the real or saved user ID of
the receiving process must match the real or effective user ID of the
sending process. A single exception is the signal SIGCONT, which may
always be sent to any process with the same session ID as the sender.
(emphasis is mine). In Linux, it's almost the same, except
... (if the sending process) have the CAP_KILL capability in the user namespace of the target process...
but this doesn't apply here.
I'm curious why the Apache server needs to be shut down, but I'm inclined to say you might try a different strategy altogether using a containerized solution. Exposing an nginx Docker container that can trigger your PHP process would likely be more stable than using Apache, as the Docker daemon always runs as root. I think it depends on the specific needs of your use case, though, so explaining why you need to shut down Apache might get you better answers.
List all active processes (e.g. ps aux); the processes created by Apache run as the user www-data.
When you stop the Apache service, I suspect they are terminated based on that or on some internal list. Stopping apache2.service calls apachectl -k graceful-stop, which sends SIGTERM to all "child processes". Unfortunately, I was unable to find the exact place in the code; maybe someone could, to verify the hypothesis.
A solution:
Run the process as another user. How you go about that depends on your case. You will have to define some kind of interface.
For example, you could have another process listen on localhost or use a Unix domain socket. The same can be achieved by using Gearman, as @Akshay Vanjare pointed out in the comments.
Your PHP script can then call that interface.
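To illustrate the interface idea, here is an untested sketch of a tiny listener on a Unix domain socket, run by a dedicated non-www-data user; the socket path, the command name and the job being started are all made up for the example.
<?php
// listener.php - run this as a different (non-www-data) user,
// e.g. started from that user's crontab with @reboot.
$sock = stream_socket_server('unix:///run/myworker/control.sock', $errno, $errstr);
if (!$sock) {
    die("cannot listen: $errstr\n");
}
while ($conn = stream_socket_accept($sock, -1)) {
    $cmd = trim(fgets($conn));
    if ($cmd === 'start-job') {
        // the long-lived process belongs to this user,
        // so stopping Apache will not touch it
        exec('setsid nohup sleep 1000 > /dev/null 2>&1 &');
        fwrite($conn, "started\n");
    } else {
        fwrite($conn, "unknown command\n");
    }
    fclose($conn);
}
The web-facing script (running as www-data) connects with stream_socket_client('unix:///run/myworker/control.sock', $errno, $errstr, 5), writes "start-job\n" and reads the reply. Remember to set the socket file's permissions so that www-data can write to it.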
No solutions:
It is a bad idea to control a daemon this way, see this answer.
As the user www-data you cannot start a process as another password-protected user, because su asks for the password, and switching UIDs is not a good idea.
For testing, I set up a user with an empty password and used su - NEW_USER -c "COMMAND" in the PHP script. Do not do this on any system you care about. It is insecure. Very insecure. And you will get a new process every time the script is called; you would have to take care to kill it.
Further thoughts:
I also tried a few alternatives to "nohup" the command, such as daemonizing, fork(), disown etc. They did not work for me.
I did not try hard to make the new process ignore SIGTERM. Maybe it is possible to solve the question that way, too.
To me, it makes sense for Apache to do (aggressive) cleanup when it is stopped. It is behavior I would expect from a web server, which has to handle a lot of children.
First, about my answer: it's not a good idea to control a daemon from the web.
Once this is said, you can have a PHP script that writes a flag or something like that, either to a database, a file, Redis, etc.
On the other side, make a PHP script that you schedule with cron to look for the flag. If it is found, the script can start a PHP daemon that will detach. Pay attention to running the cron script and the PHP daemon with a secured user and rights.
But once again, be careful with the security concerns.
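A rough sketch of that flag idea, using a plain file as the flag (the paths are placeholders): the web-facing script only does touch('/var/spool/myapp/start.flag');, and the following script runs from the crontab of a dedicated user, e.g. every minute.
<?php
// flag_check.php - run from cron (e.g. * * * * * php /opt/myapp/flag_check.php)
// under a dedicated, non-www-data user.
$flag = '/var/spool/myapp/start.flag';

if (is_file($flag)) {
    unlink($flag);   // consume the flag
    // start the detached PHP daemon; it now belongs to the cron user
    exec('setsid nohup php /opt/myapp/daemon.php > /dev/null 2>&1 &');
}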
From the PHP developer perspective I don't think there is a way to achieve that.
Or you can implement a queue and worker to do your tasks/executions.
For that, you can use either Redis or any other database.
https://laravel.com/docs/9.x/queues
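As a rough sketch of that queue/worker idea with Redis, assuming the phpredis extension; the queue name and payload format are made up for the example.
<?php
// worker.php - run under cron/supervisor as its own user, so the jobs
// execute outside of Apache's process tree.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Producer side (in the web-facing script):
//   $redis->rPush('jobs', json_encode(['task' => 'kill', 'pid' => 12345]));

while (true) {
    $item = $redis->blPop(['jobs'], 0);      // block until a job arrives
    if (!$item) {
        continue;
    }
    $job = json_decode($item[1], true);
    if (($job['task'] ?? '') === 'kill') {
        posix_kill((int) $job['pid'], 15);   // 15 = SIGTERM
    }
}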
You can write a service that you can then control with sudo service <name> start|stop|restart.
You can use visudo to edit the sudoers file to allow www-data to use sudo to run a specific sh file without being prompted for a password. This sh file will contain the line service your_service start.
Example: I wanted to be able to restart apache2 from a client side ajax request.
Context: a file server with no screen/keyboard attached. I have SSH set up, but I don't want to launch a client on my laptop. I protect the restart page with an Apache password-protected directory, along with other administrative pages. For example, I have a page to restart/shut down the whole system.
I did this:
Register a service to run with sudo service apache_restarter_service start
This service file calls a simple sh script, a two-line script: sleep 1 second and then call service apache2 restart.
Use visudo to edit the sudoers file. Allow the user www-data to use sudo to run an sh script that calls service apache_restarter_service start.
The service file is used only to start a process with root privileges, and it doesn't need to be long-lived, so there is no need for Restart=always in the service definition. This process cannot be killed by Apache, and Apache won't even know about it; it will only know that the sh script returned 0.
Example of changes to sudoers file:
www-data ALL=(ALL) NOPASSWD:/opt/apache_restarter/from_php.sh
from_php.sh only calls service apache_restarter_service start. No need for sudo here, because your PHP script already used sudo when calling from_php.sh.
from_php.sh is needed here because you don't want PHP to call service directly: a BUG in, or an ATTACK against, your script could do harm to your server. This way, only from_php.sh is authorized in sudoers.
The PHP script must call from_php.sh with sudo. You can use exec("sudo /opt/apache_restarter/from_php.sh"). An & at the end is not needed, because Apache won't be restarted immediately.
Example of service definition file apache_restarter_service.service:
[Unit]
Description=Apache restarter one shot
After=multi-user.target
StartLimitIntervalSec=0
[Service]
Type=simple
Restart=no
User=root
ExecStart=/opt/apache_restarter/run.sh
[Install]
WantedBy=multi-user.target
You probably want to avoid the service running again once the system is restarted; no harm really, but also useless. You can keep the service disabled and enable it in the from_php.sh script before starting it, or just tune the service file so it does not run once at system start.
You can tune StartLimitIntervalSec=0 to your needs.
You can tune User=root to a safer, less privileged user. In my case I don't care, but maybe you do. You may want to use www-data; I don't know of a reason why it wouldn't work, but I haven't tried www-data so far.
You can write run.sh in many ways and have it do different tasks. The simplest would be to just wait 1 second and then restart Apache with service apache2 restart.
While my interest is to be able to restart apache from client side, this also applies to spawn a process that won't be terminated when apache is stopped/restarted.
I also tried with nohup, &, setsid and whatnot. I will save people some time: nothing works. Apache must be terminating all child processes, as already suggested.
Update 2022-12-04
I got into the following situation: I need to start a python3 web server on an alternate port to provide file server statistics.
Why? The concept is that if nobody is interested in the statistics (iostat, ifstat, etc.), then stats_server.py is not even running (wasting RAM/CPU). So, having stats_server.py start with the system is overkill.
Once a user visits http://FILE_SERVER_IP/stats_launch.php, or it is requested from a JavaScript fetch, the program stats_server.py must be launched and listen on http://FILE_SERVER_IP:ALTERNATE_PORT. You may also need to handle CORS headers to be able to fetch from the alternate port.
It also keeps running, updating the stats at a one-second (or half-second) interval and saving them to its own memory, so it can calculate averages or rates per second. Doing this server side gives more precision and less latency than just sending the raw samples to the client side.
This URL can be visited directly (you will see the output of iostat and ifstat, and the values calculated by the Python script, in plain-text format), or the user can load FILE_SERVER_IP/stats.html, where a JavaScript fetch will periodically load that plain-text output and parse it into nice HTML/CSS.
You may think that everything can be solved by fork, or by using os.getpid(), os.getsid() and os.setsid() inside the Python script, to detach it from the original session and avoid termination with the parent process (search Stack Overflow for use cases), and it will, if your www-data user has the right permissions.
Otherwise, I still recommend the visudo approach above. You still need to detach if you start your Python file from a .sh file; the difference is that now you have the permissions to do so.
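For illustration, stats_launch.php could look roughly like this; the port number and the wrapper script are placeholders, and the wrapper would be whitelisted in sudoers like from_php.sh above and would start stats_server.py detached:
<?php
// stats_launch.php - start stats_server.py only when someone asks for it.
$port = 8090;                                    // ALTERNATE_PORT placeholder

$probe = @fsockopen('127.0.0.1', $port, $errno, $errstr, 1);
if ($probe) {
    fclose($probe);                              // already running, nothing to do
} else {
    // hypothetical sudoers-whitelisted wrapper that launches stats_server.py
    // detached from Apache and with the permissions it needs
    exec('sudo /opt/stats/launch_stats.sh');
}
header('Content-Type: text/plain');
echo "stats server listening on port $port\n";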
I recently ran a test to see whether a PHP script stops when I disconnect. The answer was no: the script is still running and flooding my database. It's a while(true) script, so do you guys know how to stop it from running?
It's not a dedicated server that I have full access to... I only have FTP, SSH and MySQL access.
I already tried renaming the file that was executed, but it's still running.
Try to get a list of all processes via SSH using
ps -ax
and find the process for your script. If you find it, you can kill it with
kill %pid
where %pid is the process ID from the table of processes.
Is it possible to invoke a PHP script automatically when someone uploads a file via an FTP client to our server?
Do you have complete shell access to the server?
What you need to do is detect whether or not the contents of a folder have changed, and have a script run when they do.
If you're on Windows this might be helpful.
If you're on *nix, have a look into inotify or launchd
Have them call a php script and away you go!
If you don't have complete control over the server, no doubt you can still execute cron jobs. Have one run a PHP script every x minutes that checks the contents of a directory, compares it to the contents from x minutes ago, and performs a diff between the two to find added or removed files.
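A minimal, untested sketch of that cron approach, comparing the current directory listing to a snapshot saved on the previous run (the paths are placeholders):
<?php
// watch_uploads.php - run from cron every x minutes.
$dir      = '/home/ftp/uploads';
$snapshot = '/var/tmp/uploads.snapshot.json';

$current  = array_values(array_diff(scandir($dir), ['.', '..']));
$previous = is_file($snapshot) ? (json_decode(file_get_contents($snapshot), true) ?: []) : [];

$added   = array_diff($current, $previous);
$removed = array_diff($previous, $current);   // files that disappeared since last run

foreach ($added as $file) {
    // react to the new upload here (process it, notify, etc.)
    error_log("new upload: $file");
}

file_put_contents($snapshot, json_encode($current));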
You can implement a port-knocking daemon with iptables. Port knocking is an automated way of gating access to an SSH or FTP account. You could write a daemon that listens to iptables and runs a PHP script when the right sequence of ports is knocked. I don't know if you can reduce the sequence to just one knock when you connect with an FTP client.
On Linux you can use the watch command:
watch
Usage: watch [-bdhnptvx] [--beep] [--differences[=cumulative]] [--exec] [--help] [--interval=<n>] [--no-title] [--version] <command>
You can pipe the output to a piece of software that actually handles the changes.
I have to run 2 commands through exec():
the first command is a wrapper that creates a (Plesk panel) subscription,
the second is also a Plesk command, for DNS.
Note: after I execute an add-subscription, Apache WILL RESTART!
So my question is:
can I call exec() somehow, to execute both commands on the Linux side without losing the second command?
Ex:
exec('(/wrapper2 3 --create ... && /wrapper2 4 --update-soa example.com ...) > /dev/null 2>&1');
Will PHP send both commands to Linux to execute, or will Apache restart after the first command, so that I can't execute the second command?
Thanks
Um... I'm thinking this is a bad deal. Generally it is a bad idea for a process to tell its parent to restart while the process needs to keep running. But even if it were a good idea: Apache is the parent process of PHP in that context (do ps -A, you'll not see PHP), and I can't imagine that it would let you restart it and keep running at the same time.
I'd approach it this way: if you can tolerate a delay, have a cron job look for whether a specific file exists and, if it does, execute the two commands that you need. In the worst case, make PHP output a file containing the two commands you want run and then have cron run that file.
Well, from my understanding the issue lies in the fact that Apache is going to be the parent of the script that is running; when Apache gets shut down, so will the script.
Assuming you can live with a somewhat hacky setup, you can set up a cron job that watches for a signal that it needs to restart the server (either a file you created via touch or something from PHP), which can handle everything outside the context of Apache's process.
A sort-of-dirty idea. :(
Put the commands in a shell script and execute that script. It's less complicated, and just in case, you can call it with other tools as well, for example on an Apache restart or via cron.
I think the reason Apache restarts is that your command executes for too long or costs too much in system resources, which makes the Apache sub-process exit.
Try using fastcgi mode instead of mod_php.
You can make a shell file to execute two commands.
I need an application to be running in the background on my web server, but I need to be able to start/stop the application with root privileges.
In order to do this, I want to have a service running that has root privileges, so that it can kill the application, and start it up again if need be.
Finally, I need to be able to send the start and kill commands to the service via Apache/PHP, so that it can be indirectly controlled through the web.
How do I create a Linux service?
How do I communicate with a Linux service in this manner?
Thanks in advance!
Use the exec command in your PHP script to call shell files. The shell files can be set up with the setuid bit so they run as their owner (instead of running with the web server's permissions).
Of course, you'll need to be very careful--lots of testing, monitoring, etc.
Finally, think about running the service as a dedicated user, not as root, e.g. like Apache and most other well-designed services do.
Added: Re: running a service in Linux. It depends on your flavor of Linux. If you want to be sure that your app's service will be automatically restarted if it fails, plus logging, check out Runit:
https://web.archive.org/web/1/http://blogs.techrepublic%2ecom%2ecom/opensource/?p=202
http://smarden.org/runit
Added: Instead of the setuid bit, I think Frank's suggestion (in a comment) of using sudo is better.
So, you have three pieces here:
Your web server without root privilege
An application
A daemon that is monitoring the application
Your problem is not launching the daemon; it is writing it, and communicating with it from the web server, without needing root privileges.
A daemon can be as simple as a non-interactive application launched in the background:
# my_daemon &
I am not a PHP developer, but searching for "message queue" and PHP, I discovered beanstalkd.
Looking at the example on its front page, it seems you can use it to do the following:
The Apache/PHP side sends a message to beanstalkd.
Your daemon reads the message from beanstalkd. Based on the command, it starts, kills or reloads the background application.
You can write your daemon in PHP, since there are clients in many languages.
You can also check this question
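For illustration, a rough sketch of the consumer side with the pda/pheanstalk client; API details differ between client versions, and the tube name, the commands and the myapp script are made up for the example.
<?php
// control_daemon.php - run as the privileged user; the web-facing PHP
// (running as www-data) is the producer, this daemon is the consumer.
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = Pheanstalk::create('127.0.0.1');

// Producer side, in the web-facing script:
//   $pheanstalk->useTube('app-control');
//   $pheanstalk->put('restart');

$pheanstalk->watch('app-control');
while (true) {
    $job = $pheanstalk->reserve();            // blocks until a message arrives
    switch ($job->getData()) {
        case 'start':   exec('/usr/local/bin/myapp start');   break;
        case 'stop':    exec('/usr/local/bin/myapp stop');    break;
        case 'restart': exec('/usr/local/bin/myapp restart'); break;
    }
    $pheanstalk->delete($job);                // acknowledge the message
}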
You can create a daemon which accepts the following commands:
daemon_name start
daemon_name stop
daemon_name restart
daemon_name reload
Starting the daemon should not be hard: just executing daemon_name start from a PHP script should run it. After starting, you can write the PID of the process to a lock file (for stopping, restarting or reloading later). The daemon should handle signals.
In a PHP script, you can then invoke daemon_name stop. This fires up a new process, which checks the lock file, gets the PID of the running daemon and sends it a signal that the running daemon handles. The lock file should then be removed/cleared, and the stop-initiating process can quit.
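An untested sketch of that start/stop pattern (the lock-file path and the daemon's work loop are placeholders):
<?php
// daemon_name.php - usage: php daemon_name.php start|stop
$lockFile = '/var/run/daemon_name.pid';
$action   = $argv[1] ?? 'start';

if ($action === 'stop') {
    if (is_file($lockFile)) {
        posix_kill((int) file_get_contents($lockFile), SIGTERM);
    }
    exit(0);
}

// start: record our PID and install a signal handler
file_put_contents($lockFile, (string) posix_getpid());
pcntl_async_signals(true);
$running = true;
pcntl_signal(SIGTERM, function () use (&$running) {
    $running = false;                 // leave the main loop cleanly
});

while ($running) {
    // ... do the daemon's real work here ...
    sleep(1);
}
unlink($lockFile);                    // clear the lock file on shutdown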
I think you should look at inetd, which can be configured to run all sorts of services, and it runs as root. You could then write a relatively simple program that is not itself root privileged, but which when run by root does the tasks you need done.
As far as communicating with the service goes, you did not say what type of service it is; however, assuming you're writing it yourself, the most common methods would be to communicate via UNIX sockets or mmap. It depends on your needs, really.
Oh yeah, I should point out that there are already applications for web management of Linux systems. Webmin is a really good one, and it can be configured to allow as much or as little control as you need.
As @shodanex suggests, using beanstalkd would be an excellent way to decouple a web front-end from a running-as-root command-line worker. It could trivially be set up to run only exactly what is required.
To run the worker, PEAR's System_Daemon can generate and run a daemon-running script, with start/stop/restart.
When doing this, be very, very careful. Never use any user-submitted data from the web in the exec command; this could allow someone to arbitrarily execute commands on your machine.
Also, I second Frank: use a sudo rule so you can run that specific command with the permissions you need, but nothing else. It will be more secure that way.
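To illustrate the warning above: whitelist what you accept from the request and still escape whatever you pass on; the allowed actions and the control_service.sh script are made up for the example.
<?php
// Never pass raw request data to exec(): whitelist first, escape second.
$allowed = ['start', 'stop', 'restart'];
$action  = $_POST['action'] ?? '';

if (!in_array($action, $allowed, true)) {
    http_response_code(400);
    exit("unknown action\n");
}

// $action is now one of a fixed set of strings; escapeshellarg() is still
// used as a second line of defence.
exec('sudo /usr/local/bin/control_service.sh ' . escapeshellarg($action));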
Of course with
sudo apt-get install openbsd-inetd
you can create the service you want.