I want to export data from MySQL quickly to an output file. As it turns out, the INTO OUTFILE syntax seems miles ahead of any kind of processing I can do in PHP performance-wise. However, this approach seems to be riddled with problems:
The output file can only be created in /tmp or /var/lib/mysql/ (mysqld user needs write permissions)
The output file's owner and group will be set to the mysqld user
The tmp dir is pretty much a dumpster fire because of settings like systemd's PrivateTmp (a file written to /tmp by mysqld may not even be visible to the web server).
How would I manage this in a way that isn't a nightmare in terms of managing the user accounts / file permissions?
I need to access the output file from my PHP script, and I would also like to output this file to the application directory if possible. Of course, if there is another way to export my query results in a performant way, I would like to know of it.
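For reference, here is roughly what the INTO OUTFILE approach looks like when driven from PHP; a minimal sketch with made-up credentials and table names, and the target path must also be allowed by MySQL's secure_file_priv setting:
<?php
// Minimal sketch of the INTO OUTFILE approach (hypothetical credentials and table).
// The file is written by the MySQL *server* process, which is where the
// ownership and permission problems described above come from.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'dbuser', 'dbpass');
$pdo->exec(
    "SELECT id, name, created_at " .
    "INTO OUTFILE '/tmp/export.csv' " .
    "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' " .
    "LINES TERMINATED BY '\\n' " .
    "FROM my_table"
);
// /tmp/export.csv now exists, but it is owned by the mysqld user and may live
// in that service's private /tmp, so the PHP process cannot necessarily read it.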
Currently I am thinking of the following approaches:
Add the mysqld user to the "www-data" group to give it access to application files and let it write to the application dir; other www-data users will hopefully be able to access the output files.
I could not get the access rights working for the mysql user. Having scripts add the user to the www-data group, or other such measures, would also increase the application deployment overhead.
I decided to go with the program piping method using the Symfony Process component.
mysql -u <username> -p<password> <database> -e "<query>" | sed 's/\t/","/g;s/^/"/;s/$/"/;' > /output/path/here.csv
Note that the CSV formatting might break if your column values contain reserved characters like \, ", \n, etc. You will also need to escape these characters (" to \", for example) and possibly do something about MySQL outputting NULL values as the literal string "NULL".
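Here is roughly how I run that pipeline with the Symfony Process component; a sketch only, assuming a Symfony version with Process::fromShellCommandline() (4.2+), with placeholder credentials and paths:
<?php
use Symfony\Component\Process\Process;

// The "$VAR" placeholders are expanded by the shell from the env array below,
// which keeps the credentials and the query out of the command string itself.
$command = 'mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "$QUERY" '
         . '| sed \'s/\t/","/g;s/^/"/;s/$/"/;\' > "$OUTPUT"';

$process = Process::fromShellCommandline($command, null, [
    'DB_USER' => 'username',
    'DB_PASS' => 'password',
    'DB_NAME' => 'database',
    'QUERY'   => 'SELECT * FROM my_table',
    'OUTPUT'  => '/path/to/app/var/export.csv',
]);
$process->setTimeout(300);
$process->run();

if (!$process->isSuccessful()) {
    throw new RuntimeException($process->getErrorOutput());
}
Because the output redirection is done by a shell running as the PHP/web user, the resulting CSV is owned by that user, which side-steps the mysqld ownership problem entirely.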
So running 'php index.php' gives me the output I want on the command line, but it will not give the output on the webpage.
So first of all I have this Python file, which basically does everything I want:
import subprocess
subprocess.call("sudo nmap -sP 192.168.1.0/24 > /home/pi/whohome.txt", shell=True)
searchfile = open("/home/pi/whohome.txt", "r")
for line in searchfile:
    if "android-5ab6eb374b5fd6" in line: print "Jeremy is home (phone)"
    if "Jeremys-MBP" in line: print "Jeremy is home (computer)"
    if "LMCMs-iPhone" in line: print "Liam is home (phone)"
    if "Liam" in line: print "Liam is home (computer)"
    if "android-4a186cbbeb2c5229" in line: print "Lara is home (phone)"
    if "LaraD" in line: print "Lara is home (computer)"
    if "KristiansiPhone" in line: print "Martin is home (phone)"
    if "Martins-MBP" in line: print "Martin is home (computer)"
searchfile.close()
Secondly, I have a shell script that puts the output of this Python command into another text file:
python /home/pi/myRoomMates.py > /var/www/html/website.txt
I then have the PHP file on an Apache web server running on the Raspberry Pi; it reads:
<?php
shell_exec('/home/pi/whoishome.sh');
echo file_get_contents ("/var/www/html/website.txt");
?>
So if I'm not mistaken, each time the page is refreshed it should execute that, wait for the exec to finish, then display the text file's contents? I have tried both shell_exec and plain exec; they both do the same.
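For debugging, something like this (same paths as above) should at least surface the exit code and any error output instead of failing silently:
<?php
// Debugging sketch: keep the exit code and stderr of the shell script,
// so a failing sudo/nmap call is no longer silent.
exec('/home/pi/whoishome.sh 2>&1', $output, $exitCode);
echo "exit code: " . $exitCode . "<br>\n";
echo nl2br(htmlspecialchars(implode("\n", $output)));
echo file_get_contents("/var/www/html/website.txt");
?>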
There are several permissions you have to ensure:
1. the apache user has to be allowed to use sudo (sudoers)
2. the apache user must be able to write to /home/pi/whohome.txt
3. the apache user must be able to write to /var/www/html/website.txt
4. /home/pi/whoishome.sh must be executable by the apache user
For points 1 to 3, it is normally not a good idea to give the apache user these rights.
You can make it easier if you run your Python script as a CGI script:
#!/usr/bin/env python
import subprocess

ADDRESS = "192.168.1.0/24"
USERS = {
    "android-5ab6eb374b5fd6": ("Jeremy", "phone"),
    "Jeremys-MBP": ("Jeremy", "computer"),
    "LMCMs-iPhone": ("Liam", "phone"),
    "Liam": ("Liam", "computer"),
    "android-4a186cbbeb2c5229": ("Lara", "phone"),
    "LaraD": ("Lara", "computer"),
    "KristiansiPhone": ("Martin", "phone"),
    "Martins-MBP": ("Martin", "computer"),
}

# A CGI script must emit a header before any other output.
print "Content-Type: text/plain"
print

nmap = subprocess.Popen(["sudo", "nmap", "-sP", ADDRESS], stdout=subprocess.PIPE)
for host, who in USERS.items():
    pass  # (placeholder removed below; kept mapping above)
for line in nmap.stdout:
    for host, who in USERS.items():
        if host in line:
            print "%s is home (%s)" % who
nmap.wait()
Then only points 1 and 4 must be fulfilled.
I suspect your problem is the sudo part of the nmap command line. If you replace the subprocess.call with subprocess.check_call, I think you will find that command raises a CalledProcessError.
Presumably, your user account is in the /etc/sudoers file, but the Web server is not.
Since the first thing the shell's output redirect operator (>) does is truncate the output file, that failed attempt to run nmap results in a zero-byte whohome.txt. The rest of the Python script then does the same to website.txt, and you end up with nothing to display on your Web site.
Solutions
No sudo required.
On my Linux desktop, I do not need to run nmap as root to do a local ping scan. If that's true on your system, then you should be able to just drop the sudo part of your nmap command, and be done with it.
There is a difference, though: nmap will perform more thorough testing of each target when the -sP ping sweep is run by root. From an old nmap man page (emphasis added):
-sP (Skip port scan).
[...]
The -sP option sends an ICMP echo request, TCP SYN to port 443, TCP ACK to port 80, and an ICMP timestamp request by default. When executed by an unprivileged user, only SYN packets are sent (using a connect call) to ports 80 and 443 on the target. When a privileged user tries to scan targets on a local ethernet network, ARP requests are used unless --send-ip was specified. [...]
Enable sudo for your Web server.
If you need this extra information (and it sounds like you do), you'd need to run nmap (or the Python script that calls it) with super-user privileges. I've never tried to force a Web server to do this, but I assume you would at least have to add your Web server's user to /etc/sudoers. Something like:
apache localhost=/usr/bin/nmap -sP
or:
httpd ALL=/usr/local/bin/nmap
...and so on, depending on the user name, where your nmap is located, how strictly you want to limit the arguments to nmap, etc.
Create an SUID executable to run nmap for you.
Alternatively (and I hate myself for recommending this --- there must be a better way), you can write a tiny SUID (Set User ID) program that executes only the nmap command you want. Here's a C program that will do it:
#include <stdio.h>
#include <unistd.h>
int main(void);
int main(void) {
int retval = 0;
char* const error_string = "ERROR: Failed to execute \"/usr/bin/nmap\"";
char* const nmap_args[] = {
"/usr/bin/nmap",
"-sP",
"192.168.1.0/24",
NULL
};
retval = execv("/usr/bin/nmap", nmap_args);
/* execv returns _only_ if it fails, so if we've reached this
* point, print an error and exit.
*/
perror(error_string);
return retval;
}
Save the above as something like nmap_lan.c, and compile with:
$ gcc -Wall -o nmap_lan nmap_lan.c
Then, move it to wherever you keep your Web site's scripts, and as root, change its ownership and permissions:
# chown root:root nmap_lan # Or whatever group name you use.
# chmod 4555 nmap_lan
The leading 4 sets the SUID bit. A color ls of the directory will probably show that file highlighted. The permissions should look like this:
# ls -l nmap_lan
-r-sr-xr-x. 1 root root 6682 May 23 03:04 nmap_lan
Any user who runs nmap_lan will be temporarily promoted to whoever owns the nmap_lan file (in this case, root) until the program exits. That's extraordinarily generous, which is why I hard-coded everything in that program... To change anything it does --- even just the IP range to scan --- you'll have to edit the nmap_lan.c file, re-compile, and re-install.
I've tested nmap_lan on my command line, and it produces privileged-user nmap output when run by an unprivileged user who normally gets only limited output.
Comments on the Python script
In general, Python is vastly better at parsing shell arguments than the shell is (the default value for shell is False for a reason), so have your Python script do as much of the job as possible, including parsing the shell command, redirecting input, and redirecting output.
A major advantage of doing the work in Python is that failure to open, read, write, or close any of your files will result in an immediate crash and a stack trace --- instead of the silent failure you've been dealing with.
I'd rewrite that call command to use a list of explicitly separated arguments. You can handle the output redirection by passing an opened file stream to the stdout parameter. You can eliminate your last bit of shell redirection by having Python open your output file and write to it explicitly.
nmap_file = '/home/pi/whohome.txt'

with open(nmap_file, 'wt', encoding='ascii') as fout:
    subprocess.call(
        ['/usr/bin/nmap', '-sP', '192.168.1.0/24'],  # Or just ['nmap_lan']
        stdout=fout,
        universal_newlines=True,
    )

output_file = '/var/www/html/website.txt'

with open(nmap_file, 'rt', encoding='ascii') as fin:
    with open(output_file, 'wt', encoding='ascii') as fout:
        for line in fin:
            ...
            print('Output here', file=fout)  # Add `file=...` to each print.
Also, unless you need that whohome.txt file for something else, you can eliminate it entirely by using check_output to store the output from the nmap command as a string, and then splitting it into separate lines. (The universal_newlines parameter also handles converting the bytes object into a str, at least in Python 3.)
lines = subprocess.check_output(
    ['/usr/bin/nmap', '-sP', '192.168.1.0/24'],  # Or just ['nmap_lan']
    universal_newlines=True,
).split('\n')

output_file = '/var/www/html/website.txt'

with open(output_file, 'wt', encoding='ascii') as fout:
    for line in lines:
        ...
        print('Output here', file=fout)  # Add `file=...` to each print.
Note that I used with blocks to get the file closing for free.
(Finally, that series of if commands is crying out to be rewritten as a for machine in machines_dict: loop, with the strings you're searching for as the keys in that dictionary, and the output you want to print as the values.)
I have created a PHP script that generates some .gz files. When I execute the script from the command line (CLI), it generates the .gz file with 'desert' as the owner, but when the script is executed through the browser it generates the .gz file with 'nobody' as the owner, which should not happen. I want the generated file to be owned by 'desert' rather than 'nobody' when the script is executed through the browser.
Here is the code I have created:
$file='test';
$newFileGZipCommand = 'cat '.$file.'_new | gzip > '.$file.'.gz';
//$newFileGZipCommand = 'sudo -u desert cat '.$file.'_new | gzip > '.$file.'.gz'; // This does not work
$newFileGZipCommandExecute = shell_exec($newFileGZipCommand);
//chmod($file.'.gz',0777) or die("Unable to change file permission");
//chown($file.'.gz', 'directu') or die("Unable to change file Owner");
I tried changing the file permissions and owner through the chmod() and chown() functions in PHP, but it says "chown(): Operation not permitted".
Any pointers on this are highly appreciated.
[Note: I cannot change the httpd.conf or any other configuration files]
Sudo normally requires an interactive shell to enter your password. That's obviously not going to happen in a PHP script. If you're sure you know what you're doing and you've got your security issues covered, try allowing the Apache user to run sudo without a password, but only for certain commands.
For example, adding the following line to your sudoers file will allow Apache to run sudo without a password, but only for the gzip command.
nobody ALL=NOPASSWD: gzip
Adjust the path and add any arguments to suit your needs.
Caution:
There might still be complications due to the way PHP calls shell commands.
Remember that it's very risky to allow the web server to run commands as root!
Another alternative:
Write a shell script with the suid bit to make it run as root no matter who calls it.
Probably a better alternative:
Write the commands to a queue and have cron pick them up, validate them (only allow known good requests), run them, and then mark the queue entry complete with the date and result.
Your end user can then click/wait for the update using Ajax.
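A rough sketch of that queue idea (the paths, file naming convention, and validation rule are all hypothetical; the cron job running as 'desert' is what actually executes anything):
<?php
// enqueue.php -- called from the web app; it only records *what* was requested.
$queueFile = '/var/spool/myapp/gzip-queue.txt';          // hypothetical path
$file = isset($_POST['file']) ? basename($_POST['file']) : '';
if ($file !== '' && preg_match('/^[A-Za-z0-9._-]+$/', $file)) {
    file_put_contents($queueFile, $file . PHP_EOL, FILE_APPEND | LOCK_EX);
}

<?php
// worker.php -- run from cron as the 'desert' user, e.g.:
//   * * * * * php /home/desert/worker.php
$queueFile = '/var/spool/myapp/gzip-queue.txt';
$dataDir   = '/var/www/html/data';                       // hypothetical location of the <file>_new files

$jobs = is_file($queueFile) ? file($queueFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) : array();
file_put_contents($queueFile, '');                       // mark the queue as picked up

foreach ($jobs as $file) {
    // Validate again: only allow known good requests.
    $src = $dataDir . '/' . $file . '_new';
    if (!preg_match('/^[A-Za-z0-9._-]+$/', $file) || !is_file($src)) {
        continue;
    }
    shell_exec('gzip -c ' . escapeshellarg($src) . ' > ' . escapeshellarg($dataDir . '/' . $file . '.gz'));
}
Because the worker runs under the 'desert' account, the .gz files it creates are owned by 'desert', which is what the question asks for.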
Hope this helps resolve your issue.
I have 2 servers, serv1 and serv2, and need to compare the images on those 2 servers to detect which files are missing or have been modified.
So far I have 3 options:
- Create an API using PHP
I created an API file that returns all the images in serv1/www/app/images/
get the modification time of each image
return the result as JSON
output is something like this: { 'path/to/file' : 123232433422 }
I fetch that on serv2, decode it, then merge the array with the images in serv2/www/app/images
get the array_diff, which works fine (a simplified sketch of this API is included after the list of options)
cons:
- takes a lot of time (fetching, decoding, merging, looping, comparing...)
- Use rsync
Dry run to get the list of images that exist on serv1 but are missing or modified on serv2 (very fast :))
cons:
apache can't run ssh because it's not authorized to access ~/.ssh/
would need to give apache permission but my client doesn't want it
so in short, I cannot use anything that would require extra permissions
- Maybe I could use some library or vendor package, but I doubt my client would allow it. If it can be done with a shell script or a PHP built-in function, I'll do it as long as it's possible.
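For reference, a simplified sketch of what the option 1 API boils down to (paths as mentioned above):
<?php
// api.php on serv1 -- returns { "relative/path.jpg": mtime, ... } as JSON (simplified sketch)
$base = '/www/app/images';
$result = array();

$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($base, FilesystemIterator::SKIP_DOTS)
);
foreach ($it as $file) {
    $relative = substr($file->getPathname(), strlen($base) + 1);
    $result[$relative] = $file->getMTime();
}

header('Content-Type: application/json');
echo json_encode($result);

// On serv2 I build the same { path: mtime } map locally, then:
//   $remote = json_decode(file_get_contents('http://serv1/api.php'), true);
//   $missingOrModified = array_diff_assoc($remote, $local);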
So my question is: is there another way to fetch the images and their modification dates without requiring authentication? My first solution is okay if it can be optimized, because if the array is too large it takes a lot of time.
I hope the solution can be done in PHP or a shell script.
Please help give me more options. Thanks.
Install the md5deep utility (or sha1deep) on both servers.
Execute md5deep on the first server and save the result to a text file:
user#server1> md5deep -l -r mydir > server1.txt
The result file will look like this:
e7c3fcf5ad7583012379ec49e9a47b28 .\a\file1.php
2ef76c2ecaefba21b395c6b0c6af7314 .\b\file2.txt
45e19bb4b38d529d6310946966f4df12 .\c\file3.bin
...
Then copy server1.txt to the second server and run md5deep in negative matching mode:
md5deep -l -r -X server1.txt mydir
This will print the checksums and names of all files on the second server that differ from the first server.
Alternatively, you can compare text files created by md5deep -l -r dir yourself using diff or similar utility.
Last note: it may be easier to simply run md5deep -l -r mydir | gzip > md5deep.txt.gz from cron on each server, so that you always have a ready-to-compare file list with checksums on each server (gzipped so it is fast to fetch).
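If you would rather do the final comparison in PHP (since you mentioned PHP or shell), the listings are easy to parse; a sketch, using the file names from the example above:
<?php
// Parse an md5deep listing ("<checksum>  <path>" per line) into [path => checksum].
function parseListing($filename) {
    $map = array();
    foreach (file($filename, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        list($hash, $path) = preg_split('/\s+/', $line, 2);
        $map[$path] = $hash;
    }
    return $map;
}

$serv1 = parseListing('server1.txt');
$serv2 = parseListing('server2.txt');

// Files that exist on serv1 but are missing on serv2, or whose checksum differs.
$missingOrModified = array_keys(array_diff_assoc($serv1, $serv2));
print_r($missingOrModified);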
I am working on a PHP website and it regularly gets infected by malware. I've gone through all the security steps but failed. However, I know how it infects my code every time: it appears at the start of my PHP index file as follows.
<script>.....</script><?
Can anybody please help me with how I can remove this starting block of code from every index file in my server folders? I will use a cron job for this.
I have already gone through the regex questions about removing JavaScript malware but did not find what I want.
You should change the FTP password for your website, and also make sure that there are no programs running in the background that open TCP connections on your server, enabling some remote dude to change your site files. If you are on Linux, check the running processes and kill/delete anything suspicious.
You can also make all server files read-only as root...
Anyhow, a trojan/malware/unauthorized FTP access is to blame, not JavaScript.
Also, this is more a SuperUser question...
Clients regularly call me to disinfect their non-backed-up, malware-infected PHP sites, on hosting servers they have no control over.
If I can get shell access, here is a script I wrote to run:
( set -x; pwd; date; time grep -rl zend_framework --include=*.php --exclude=*\"* --exclude=*\^* --exclude=*\%* . |perl -lne 'print quotemeta' |xargs -rt -P3 -n4 sed -i.$(date +%Y%m%d.%H%M%S).bak 's/<?php $zend_framework=.*?>//g'; date ; ls -atrFl ) 2>&1 | tee -a ./$(date +%Y%m%d.%H%M%S).$$.log
It may take a while but ONLY modifies PHP files containing the trojan's signature <?php $zend_framework=
It makes a backup of the infected .php versions to .bak so that, when re-scanned, those files are skipped.
If I cannot get shell access, e.g. FTP only, then I create a short cleaner.php file containing essentially that code for PHP to exec, but the web server often times out the script execution before it gets through all the subdirectories.
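A stripped-down sketch of such a cleaner.php (same trojan signature and .bak behaviour as the shell one-liner above; the start directory and timestamp format are just examples, so test it on a copy first):
<?php
// cleaner.php -- strips the '<?php $zend_framework=...' injection block from .php files,
// keeping a timestamped .bak copy of every file it modifies.
set_time_limit(0);

$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('.', FilesystemIterator::SKIP_DOTS)
);
foreach ($it as $file) {
    if (strtolower($file->getExtension()) !== 'php') {
        continue;
    }
    $path = $file->getPathname();
    $code = file_get_contents($path);
    if (strpos($code, '<?php $zend_framework=') === false) {
        continue;                       // not infected, or already cleaned
    }
    copy($path, $path . '.' . date('Ymd.His') . '.bak');
    file_put_contents($path, preg_replace('/<\?php \$zend_framework=.*?\?>/s', '', $code));
}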
WORKAROUND for your problem:
I put this in a crontab/at job to run e.g. every 12 hours, if such access to process scheduling directly on the server is possible. Otherwise, there are more convoluted approaches depending on what is permitted, e.g. calling the cleaner PHP from the outside once in a while, but making it start with different folders each time via sort --random-sort (because after 60 seconds or so it will get terminated by the web server anyway).
Change the database username/password.
Change the FTP password.
Change the WordPress hash keys.
Download the theme + plugins to your computer and scan them with an UPDATED antivirus, especially NOD32.
Don't look for the pattern that tells you it is malware; just patch all your software, close unused ports, and follow what people have told you here already instead of trying to clean the code with regexes or signatures...