Ubuntu (Linux in general): How to change the name of a background process? - php

When we run the "top" command from the command line we can see the processes, and under the COMMAND column we see a generic name.
For example, if I run a php process in the background like
/usr/bin/php /path/to/myscript.php &
I see just php listed under the COMMAND column when I run top.
Is there a way to change the name of the background process when I run it?
*This question is PHP specific.

A process doesn't really have a name; it has a pid (of type pid_t, which is some integer, the result of fork(2) or a related system call). Read credentials(7).
And the displayed php name is the right one: it is the argument at index 0 given to execve(2), i.e. the program name. The kernel doesn't run your PHP script directly; it runs the php interpreter, which takes your script as input (so the actual program being run is php). And your shell command explicitly gives /usr/bin/php as the program name. You could use strace(1) to check that.
Your shell is displaying (via jobs -l) the background processes. So you could write your own shell to display them differently.
Perhaps you could write in C some wrapper ELF executable which does the appropriate execve(2).
I'm not sure it is worth the trouble. See also proc(5) to understand how applications (like your shell, or ps, or top) query the kernel about processes (using the /proc filesystem).
As commented by melopmane, look also into prctl(2) and PR_SET_NAME (I never used that). I did however use pthread_setname_np(3), which concerns a thread.
(still, I don't think it is worth the trouble in your case; what is wrong with having a PHP process called php?)
See also setproctitle, or write some PHP extension in C to do that...
But you should not care! I even think that changing the name of your process that way is confusing to the sysadmin. He wants to know that it is some PHP thing. So even if you could mislead your sysadmin that way, you should not want to.
BTW, you could check (using proc(5)...) with a command like cat /proc/1234/maps (replace 1234 with the actual pid of your process) that the PHP interpreter is an important part of your virtual address space (so there is no reason to "hide" php as you want to), and you could also find your specific php process (if you have many of them) using pgrep(1).
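That said, if you really do want a different name to show up, recent PHP CLI releases (5.5 and later) expose cli_set_process_title(), which changes the title reported for the process (what ps, and top when showing the full command line, display) without any C wrapper or extension. A minimal sketch, assuming a long-running CLI script; the title string is just an example:

<?php
// myscript.php - rename the running CLI process; requires PHP >= 5.5 on Linux.
if (function_exists('cli_set_process_title')) {
    cli_set_process_title('my-background-worker');   // example title, not a real convention
}
// ... long-running work here ...
sleep(300);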

Related

Unable to delete Linux user using PHP and bash script

I'm able to delete Linux users directly from the shell using the command ./del_user.sh
del_user.sh is as below
#!/bin/sh
userdel username
When I run this script from a PHP page, it doesn't work.
echo shell_exec('./del_user.sh');
Both the files are in the same directory.
What could be the problem?
del_user.sh does not take an argument so you're running the script on whatever hardcoded user is in the script. This won't work unless you modify the script to allow a command line argument.
While this task can be accomplished by PHP, this is certainly not a primary function and should probably be delegated to a more suitable application.
As an aside, you haven't stated what your goal is or what this script is supposed to do. Is this part of some user management application or is it simply a one-off for some small task?
Answer:
To enable this, you'd have to give Apache the ability to sudo so it can gain temporarily raised privileges. You may have to do some googling on how to do this depending on your OS, but this link provides some direction on how to do it in Ubuntu: Ubuntu Forums
I would also recommend against using a bash script, as it's not really necessary here. You could use a PHP script that accepts a command-line argument. So in your main script you could have something like this: shell_exec('php /path/to/del_user.php username');
Then in del_user.php you'd have something like shell_exec('userdel '.$argv[1]);. More info on command-line arguments can be found here: $argv
Lastly, you could put it directly into your main script instead of using shell_exec twice. In other words, just use shell_exec('userdel '.$username); instead of calling a script. Since Apache will be able to sudo, it should be able to execute this directly.
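For illustration, a minimal sketch of what such a del_user.php might look like, assuming the Apache user has been granted passwordless sudo rights for userdel as discussed above (the sudo usage and error handling are assumptions, not part of the original question):

<?php
// del_user.php - delete the user named in the first command-line argument.
// Assumes the invoking user may run userdel via sudo without a password prompt.
if ($argc < 2) {
    fwrite(STDERR, "Usage: php del_user.php <username>\n");
    exit(1);
}
$username = escapeshellarg($argv[1]);   // guard against shell injection
echo shell_exec('sudo userdel ' . $username);

It would then be invoked from the main script as shell_exec('php /path/to/del_user.php ' . escapeshellarg($username));.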

execute system commands which need input using PHP

Is there a way in PHP to make exec() or one of its variants run a system command that needs user input during its execution? It could be an FTP transfer, for example, or even just a command whose output is paged with more. Say, for example, in the Windows command prompt I do a type bigfile.txt | more - it gives me one screen of output and then I use the keyboard to bring up the next line.
Is there a way to capture this behavior using any of the PHP command line execution functions, when running from the browser? If not in standard PHP are there any PEAR/PECL resources which anyone has used before which does this?
You can probably do this with proc_open, which can be used in a non-blocking manner, and gives you input and output pipes.
However, as a rule of thumb, I'd only attempt this as a last resort. Using a non-interactive executable or a native PHP library will usually be far more maintainable. For example, I'm struggling to come up with a reason you'd ever want to proc_open('mycommand | more') when you can just exec('mycommand').
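For what it's worth, here is a minimal proc_open() sketch of the idea, feeding input to a command through its stdin pipe; the command and the FTP host used are placeholders, not from the question:

<?php
// Describe pipes for the child's stdin, stdout and stderr.
$descriptors = array(
    0 => array('pipe', 'r'),   // child reads its stdin from this pipe
    1 => array('pipe', 'w'),   // child writes its stdout to this pipe
    2 => array('pipe', 'w'),   // child writes its stderr to this pipe
);
$proc = proc_open('ftp -n ftp.example.com', $descriptors, $pipes);
if (is_resource($proc)) {
    fwrite($pipes[0], "user anonymous guest@example.com\n");   // scripted "interactive" input
    fwrite($pipes[0], "bye\n");
    fclose($pipes[0]);                      // signal end of input
    echo stream_get_contents($pipes[1]);    // collect the command's output
    fclose($pipes[1]);
    fclose($pipes[2]);
    proc_close($proc);
}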

How to open an application via php and perl?

I am trying to print generated forms / receipts through PHP (the printers will be installed on the server, I am not attempting to print to a user's local printer). I have decided to try the following methodology:
IN PHP:
Generate a PDF file and save it on the server.
Call a perl script to print said PDF file.
IN perl:
Use system() to "open" Reader and print the given PDF silently.
What works:
I can generate PDFs in PHP.
I can call a perl script.
If the script has errors, they are reported to the browser window, i.e. if I purposely change file paths it fails and reports the appropriate reason.
Functions such as printf seem to work fine, as the output displays in the browser.
The exact same perl script (with the "non-functioning" line mentioned below) works properly when executed from the command line or the IDE.
What doesn't work:
In perl: system('"C:\\Program Files (x86)\\Adobe\\Reader 10.0\\Reader\\AcroRd32.exe" /N /T "C:\\test.pdf" 0-XEROX');
What happens:
NOTHING! I get no errors. It just flat out refuses to open Adobe Reader. All code below this line seems to run fine. It's like the function is being ignored. I am at a loss as to why, but I did try a few things.
What I've tried:
Changed permissions of the AcroRd32.exe to Everyone - Full Control.
Output the $? after the system() call. It is 1, but I don't know what 1 means in this case.
Verified that there are no disable_functions listed in PHP (though I think this is unrelated, as shell_exec seems to be working, since some of the Perl code runs).
Various other configurations that at least got me to the point where I can confirm that PHP is in fact calling the Perl script; it just isn't running the system() call.
Other info:
Apache 2.2.1.7
PHP 5.35
Perl 5.12.3 built for MSWin32-x86-multi-thread
WampServer 2.1
I'm at a loss here, and while it seems like this is an Apache / permissions problem, I cannot be sure. My experience with Apache is limited, and most of what I find online is Linux commands that don't work in my environment.
Try this:
my @args = ('C:/Program Files (x86)/Adobe/Reader 10.0/Reader/AcroRd32.exe',
            '/N', '/T', 'C:/test.pdf', '0-XEROX');
if (system(@args) != 0) {
    # Can't run acroread. Oh Noes!!!
    die "Unable to launch acrobat reader!\n";
}
The thing about system() is that it does two different things depending on the number and type of arguments it gets. If the argument is an array, or if there are multiple arguments, Perl assumes the first is the program to run with the rest as its arguments, and it launches the program itself.
If, however, it's just one string, Perl handles it differently. It runs your command-line interpreter (typically CMD.EXE on Windows) on the string and lets it do what it wants with it. This becomes problematic pretty quickly.
Firstly, both Perl and the shell do various kinds of interpolation on the string (e.g. collapsing doubled backslashes, tokenizing by space, etc.), and it gets very easy to lose track of what does what. I'm not at all surprised that your command doesn't work; there are just so many things that can go wrong.
Secondly, it's hard to know for sure what shell actually gets run on Windows, or what changes Perl makes to the string first. On Unix it usually doesn't matter: every shell does more or less the same with simple commands. But on Windows, you could be running raw CMD.EXE, GNU Bash, or some intermediate program that provides Unix-shell-like behaviour. And since there are several different ports of Perl to Windows, it could well change if you switch.
But if you use the array form, it all stays in Perl and nothing else happens under the hood.
By the way, the documentation for system() and $? (see perldoc -f system and perldoc perlvar) is well worth reading.
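On the PHP side it also helps to capture the Perl script's standard error and exit code instead of using shell_exec() alone, so a failing system() call shows up in the browser rather than disappearing silently. A rough sketch (the script path here is an example, not the asker's actual path):

<?php
// Merge stderr into stdout and collect the exit code of the Perl script.
exec('perl C:\\wamp\\scripts\\print_pdf.pl 2>&1', $output, $exitCode);
echo 'Exit code: ' . $exitCode . "<br>\n";
echo implode("<br>\n", array_map('htmlspecialchars', $output));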

How to achieve single-processing mode running php scripts?

I have a cron job: a PHP script which is called once every 5 minutes. I need to be sure that the previously called PHP script has finished execution; I do not want to mix data that's being processed.
There are three approaches I have used:
Creating an auxiliary text file which contains a running-state flag. The executed script analyzes the contents of the file and exits if the flag is set to true. It's the simplest solution, but every time I create such a script, I feel like I'm reinventing the wheel one more time. Are there any well-known patterns or best practices which would satisfy most needs?
Adding a UNIX service. This approach is the best for cron jobs, but it's more time-consuming to develop and test a UNIX service: good bash scripting knowledge is required.
Tracking processes using a database. A good solution, but sometimes database usage is not encouraged, and again, I do not want to reinvent the wheel; I hope there is already a good, flexible solution.
Maybe you have other suggestions on how to manage single-processing of PHP scripts? I would be glad to hear your thoughts.
I'd recommend using the file locking mechanism. You create a text file, and you make your process lock it exclusively (see PHP's flock() function: http://us3.php.net/flock). If it fails to lock, then you exit, because there is another instance running.
The advantage of using file locking is that if your PHP script dies unexpectedly or gets killed, the lock is automatically released. This will not happen if you use a plain text file for the status (if the script is supposed to update this file at the end of execution and it terminates unexpectedly, you will be left with stale data).
http://php.net/flock with LOCK_EX should be enough in your case.
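A minimal sketch of that approach; the lock file path is just an example:

<?php
// Acquire an exclusive, non-blocking lock; bail out if another run still holds it.
$fp = fopen('/tmp/my_cron_job.lock', 'c');   // 'c' creates the file if it doesn't exist
if ($fp === false || !flock($fp, LOCK_EX | LOCK_NB)) {
    exit("Previous run is still in progress\n");
}

// ... do the actual work here ...

flock($fp, LOCK_UN);   // released automatically anyway when the script exits or dies
fclose($fp);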
You could check whether or not your script is currently running using the ps command, helped by the grep command. "man ps" and "man grep" will tell you all about these Unix/Linux commands if you need information about them.
Let's assume your script is called 'my_script.php'. This unix command :
ps aux | grep my_script.php
...will tell you if your script is running. You can run this command with shell_exec() at the start of your script, and exit() if it's already running.
The main advantage of this method is that it can't be fooled by a stale flag: with a flag file, the script could have crashed, leaving the file in a state that makes you think it's still running.
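A rough sketch of that check, assuming the script is invoked as php my_script.php; note that the grep process itself would match the pattern, so it is filtered out with grep -v grep, and the current instance accounts for one matching line:

<?php
// Count processes whose command line mentions my_script.php, excluding grep itself.
$out   = shell_exec('ps aux | grep my_script.php | grep -v grep');
$lines = array_filter(explode("\n", trim((string) $out)));
if (count($lines) > 1) {   // exactly one line is this very instance
    exit("Already running\n");
}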
I'd stick to option number 1. It's simple and it works. As long as you only want to check whether the script has finished or not, it should be sufficient. If more complex data needs to be remembered, I'd go for option 3 in order to be able to 'memorize' the relevant data...
hth
K

How can I execute CGI files from PHP?

I'm trying to make a web app that will manage my Mercurial repositories for me.
I want it so that when I tell it to load repository X:
Connect to a MySQL server and make sure X exists.
Check if the user is allowed to access the repository.
If the above is true, get the location of X from the MySQL server.
Run a hgweb cgi script (python) containing the path of the repository.
Here is the problem, I want to: take the hgweb script, modify it, and run it.
But I do not want to: take the hgweb script, modify it, write it to a file and redirect there.
I am using Apache to run the httpd process.
Ryan Ballantyne has the right answer posted (I upvoted it). The backtick operator is the way to execute a shell script.
The simplest solution is probably to modify the hgweb script so that it doesn't "contain" the path to the repository, per se. Instead, pass it as a command-line argument. This means you don't have to worry about modifying and writing the hgweb script anywhere. All you'd have to do is:
//do stuff to get location of repository from MySQL into variable $x
//run shell script
$res = `python hgweb.py $x`;
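If the repository path pulled from MySQL could contain spaces or other shell metacharacters, a slightly more defensive variant of the same call would escape it first (escapeshellarg() is an addition here, not part of the original answer):

//run shell script, escaping the path taken from MySQL
$res = shell_exec('python hgweb.py ' . escapeshellarg($x));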
You can run shell scripts from within PHP. There are various ways to do it, and complications with some hosts not providing the proper permissions, all of which are well-documented on php.net. That said, the simplest way is to simply enclose your command in backticks. So, to unzip a file, I could say:
`unzip /path/to/file`
So, if your Python script is such that it can be run from a command-line environment (or you could modify it so it can), this would seem to be the preferred method.
As far as your question goes: no, you're not likely to get PHP to execute a modified script without writing it somewhere, whether that's a file on disk, a virtual file mapped to RAM, or something similar.
It sounds like you might be trying to pound a railroad spike with a twig. If you're to the point where you're filtering access based on user permissions stored in MySQL, have you looked at existing HG solutions to make sure there isn't something more applicable than hgweb? It's really built for doing exactly one thing well, and this is a fair bit beyond its normal realm.
I might suggest looking into Apache's native authentication as a more convenient method for controlling access to repositories, and then just serving the repo without modifying the script.
