I am trying to implement a simple backup feature for some directories (mainly directories in /etc), handled by Laravel. Basically, I store .tar archives containing the files of specific directories.
This is a command used to create a backup archive of a single directory:
shell_exec("cd {$backupPath} && tar -cf {$dirName}.tar -P {$fullPathToDir}")
This is a command to restore directory from a backup archive:
shell_exec("cd / && sudo tar -xf {$backupPath . $dirName} --recursive-unlink --unlink-first")
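The pair of commands can be exercised end-to-end without root; a minimal sketch using scratch directories in place of /etc (all paths here are examples, not from the original setup):

```shell
# Unprivileged round-trip of the backup/restore commands above.
set -e
backupPath=$(mktemp -d)   # where the .tar archives live
srcRoot=$(mktemp -d)      # stands in for a system directory
mkdir -p "$srcRoot/conf"
echo "key=1" > "$srcRoot/conf/app.ini"

# Backup: -P stores the absolute path, so a later extract lands in place
tar -cf "$backupPath/conf.tar" -P "$srcRoot/conf"

# Simulate drift, then restore. --recursive-unlink/--unlink-first are
# GNU tar options that remove existing entries before extracting.
echo "key=2" > "$srcRoot/conf/app.ini"
tar -xf "$backupPath/conf.tar" -P --recursive-unlink --unlink-first
```

Running this as the http user reproduces the question's scenario minus sudo, which helps separate tar problems from permission or sandboxing problems.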
For testing purposes I let the http user run sudo tar, though my initial idea was to create a bash script to handle that and add it to sudoers. Running the command directly or through a shell script gives the same errors.
The problem is if I run it through php I get errors like this:
Cannot unlink: Read-only file system
But if I run it from the command line, it works:
su http -s /bin/bash -c "cd / && sudo tar -xf {$backupPath . $dirName} --recursive-unlink --unlink-first"
Running this on both a full Arch Linux system and an Arch Linux docker container gives me the same results. I would appreciate any kind of help.
The issue was with the systemd unit for php-fpm 7.4, where ProtectSystem was set to full; after commenting it out, everything worked as expected.
sed -i 's:ProtectSystem=full:#ProtectSystem=full:' /usr/lib/systemd/system/php-fpm7.service
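A less invasive alternative, sketched below on the assumption that the unit is really named php-fpm7.service (the name varies by distro): a drop-in override survives package upgrades, unlike edits to the file under /usr/lib.

```shell
# Run as root. Creates a drop-in that relaxes the sandbox for this unit only.
mkdir -p /etc/systemd/system/php-fpm7.service.d
cat > /etc/systemd/system/php-fpm7.service.d/override.conf <<'EOF'
[Service]
ProtectSystem=false
EOF
systemctl daemon-reload
systemctl restart php-fpm7
```

Note that ProtectSystem=full exists precisely to stop a web worker from rewriting /etc, so loosening it trades away that protection; scoping the rule to a dedicated backup service unit would be safer than relaxing php-fpm itself.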
Related
If I ssh to the server, cd into public_html, and run the shell script (below), it works fine.
On second thought, it would be easier to just set up a crontab on the server and have it run every day.
But if I run it from the web page outlined below, the zip file 'chessclub.zip' is not created or synced. The bash script is located on the server at '/home/user/public_html/', but it won't be found and executed. How can I get the bash script to execute on the server, not locally?
HTML
<button onclick = 'getZIP.php;'>ZIP IT</button>
PHP 'getZIP.php'
<?php
shell_exec("/home/user/public_html/backup_cccr");
?>
SHELL SCRIPT ON SERVER ("backup_cccr")
#!/bin/bash
zip -r -9 -FS chessclub.zip * -x chessclub.zip
The best idea was to scrap the PHP zip functions in favour of the bash zip command, which works better here: (backup_cccr)
#!/bin/bash
zip -r -9 -FS chessclub.zip public_html/* -x 'public_html/chessclub.zip'
cp chessclub.zip public_html/
Copying the updated chessclub.zip to public_html means the file is accessible from a web browser.
I used a daily cron job to automatically create a backup. Easy to do.
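The cron setup can be as small as one line; a hypothetical entry (installed with crontab -e as the site user, so the job starts in /home/user, where the script's relative paths resolve):

```shell
# m h dom mon dow  command
30 2 * * * /home/user/backup_cccr >> /home/user/backup_cccr.log 2>&1
```

Redirecting output to a log file is optional but makes failed runs much easier to diagnose than cron's default mail behaviour.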
I have a working php-fpm docker container acting as the PHP backend to an nginx frontend. By "working" I mean that it renders phpinfo() output in the browser as expected.
My php-fpm container was built from the php-fpm-7.4 prod image of the devilbox docker repo. It has OCI8 enabled.
The issue: I keep getting ORA-28547 when trying oci_connect
What I have done:
1. Added /usr/lib/oracle/client64/lib to a file inside /etc/ld.so.conf.d and ran ldconfig -v.
2. Restarted the docker container.
3. phpinfo() now shows ORACLE_HOME=/usr/lib/oracle/client64/lib.
4. Added tnsnames.ora to /usr/lib/oracle/client64/lib/network/admin (there is a README.md file inside that folder that even tells you to do that).
5. Restarted the docker container again.
6. oci_connect() still fails with the same error.
What am I missing?
Thank you very much for any pointers, I think I have browsed to the end of the internet and back without finding a solution yet.
----SOLUTION: reinstall Instant Client and relink the libraries (ldconfig) to use the new Instant Client libraries. Create a modified Dockerfile that does this when the container is built.
I modified the Dockerfile of the php-fpm image to add new Instant Client files instead of the ones provided by the original file, which I was not able to make work. I rebuilt the image a few times (docker-compose up --build), and this is the file that does the trick:
FROM devilbox/php-fpm:7.4-work
#instantclient.conf content: /opt/instantclient
RUN echo "/opt/instantclient" >/etc/ld.so.conf.d/instantclient.conf
WORKDIR /opt
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-sdk-linux.x64-19.8.0.0.0dbru.zip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-basic-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-sdk-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-basic-linux.x64-19.8.0.0.0dbru.zip
RUN mv instantclient_19_8 instantclient
ADD tnsnames.ora /opt/instantclient/network/admin
RUN ldconfig -v
CMD ["php-fpm"]
EXPOSE 9000
# Insert the following into the .bash_profile or .profile of the user starting php-fpm
export ORACLE_HOME=/usr/lib/oracle/client64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
export PATH=$PATH:$ORACLE_HOME/bin
export TNS_ADMIN=$ORACLE_HOME/network/admin
# Test to Ping Remote Db to be connected by PHP
tnsping <tns-name of remote DB - i.e. db12c.world>
# then restart the PHP engine
Can you please check
https://github.com/caffeinalab/php-fpm-oci8/blob/master/Dockerfile
which seems to create a php-fpm-oci8 docker image?
The "wget" lines
wget -qO- https://raw.githubusercontent.com/caffeinalab/php-fpm-oci8/master/oracle/instantclient-basic-linux.x64-12.2.0.1.0.zip | bsdtar -xvf- -C /usr/local && \
wget -qO- https://raw.githubusercontent.com/caffeinalab/php-fpm-oci8/master/oracle/instantclient-sdk-linux.x64-12.2.0.1.0.zip | bsdtar -xvf- -C /usr/local && \
wget -qO- https://raw.githubusercontent.com/caffeinalab/php-fpm-oci8/master/oracle/instantclient-sqlplus-linux.x64-12.2.0.1.0.zip | bsdtar -xvf- -C /usr/local && \
can be dropped when you place the downloaded Instant Client files into the local host dir
/usr/local
and extract them, resulting in
/usr/local/instantclient_12_2
(or the 18c/19c equivalents).
The four "ln" commands have to be adjusted to reflect the local host instantclient dir.
The tnsnames.ora for Instant Client can be made available from the host via a VOLUME command.
-------------FINAL SOLUTION------------ (it was not network related; I had made a couple of changes to the files and also tried a different database, all at the same time, which made me think that it was the different database that fixed the issue)
After much trial and error, I came up with a Dockerfile that creates the correct configuration of files and connects to the database without any issues:
--Dockerfile: (to build php-fpm 7.4 using devilbox image)
FROM devilbox/php-fpm:7.4-work
ADD instantclient.conf /etc/ld.so.conf.d/
WORKDIR /opt
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-sdk-linux.x64-19.8.0.0.0dbru.zip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/19800/instantclient-basic-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-sdk-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-sqlplus-linux.x64-19.8.0.0.0dbru.zip
RUN unzip instantclient-basic-linux.x64-19.8.0.0.0dbru.zip
RUN mv instantclient_19_8 instantclient
ADD tnsnames.ora /opt/instantclient/network/admin
RUN ldconfig -v
CMD ["php-fpm"]
EXPOSE 9000
That's why I suggested using tnsping; unfortunately it is not included in any of the Instant Client packages, which is a pity, so you have to pick it up from a regular client with matching OS, bitness and Oracle release. As a workaround you could place the SQL*Plus package files into the container and try to connect with a dummy user like
sqlplus foo/foo@<ip>:<port>/<dbname>
which should generate an error that points at the layer that is failing:
ORA-01017 (user/password not matching): DB & listener are running
ORA-01034: listener is running but the DB is down
no return, or TNS errors: listener is down
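Since tnsping is missing from Instant Client, a rough reachability check of the listener port can also be done with bash alone; a sketch, with a hypothetical host and the default listener port:

```shell
# Prints "reachable" if a TCP connection to host:port succeeds within 3s,
# "unreachable" otherwise (firewall drop, listener down, bad route).
check_listener() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null \
    && echo reachable || echo unreachable
}

check_listener db.example.com 1521   # hypothetical database host
```

This only proves TCP connectivity, not that the listener speaks TNS, but it cleanly separates network problems (like the firewall issue below) from client-library problems.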
I got it! It was a firewall issue. I launched a tcpdump capture session and there was nothing wrong with php-fpm, oci8 or the instantclient libraries. The traffic was initiated but there was no response from the database. I made it work against a different database where this box has no firewall issues.
I will now try rebuilding the docker image so I can see what, if anything, I have to add manually.
That was incorrect (the firewall was not the origin of the problem). Rebuilding the docker image showed me where I had it wrong. See the original question for the solution.
I have made a button on WordPress and linked it to a Linux backend script.
This gives me no error, and the default user output is "www-data":
#!/bin/bash
whoami
touch file
But I want to trigger a git commit as my local user through this script.
#!/bin/bash
sudo -u my_username -p my_password -H sh -c "cd /var/www/html/forcetalks_new/; /usr/bin/git add . ; touch test_file"
This is neither touching the file nor adding the files. I wonder what the possible reason could be; any other solution is welcome.
PS: I tried using git commands in the first script after giving www-data sudo permissions for git, and that didn't work either.
Try the first script approach with:
#!/bin/bash
sudo -u my_username -H sh -c "/path/to/script > /path/to/log 2>&1"
So a script calling a script:
#!/bin/bash -x
whoami
git status
git add .
touch file
The goal is to see what is going on via /path/to/log, thanks to the bash -x option (and the git status command).
Note that if you want to add "file" itself, you should touch the file first, then add it.
Note also that a git commit will be needed at some point to actually persist those changes in the Git repository.
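One likely reason the original sudo line does nothing: sudo's -p flag sets the prompt text, it does not supply a password. For a script triggered by www-data the usual route is a NOPASSWD sudoers rule; a sketch with hypothetical names (install with visudo -f so a syntax error cannot lock you out):

```shell
# /etc/sudoers.d/www-data-git
www-data ALL=(my_username) NOPASSWD: /path/to/script
```

With that rule in place, PHP can simply run sudo -u my_username -H /path/to/script with no password handling at all.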
I have a PHP script running as a cronjob on the server.
However, I unexpectedly get a different $PATH for the same user, depending on how I execute the command.
I log in as user ubuntu:
ubuntu#:$ echo $PATH
/home/ubuntu/bin:/home/ubuntu/.local/bin:/home/ubuntu/.nvm/versions/node/v12.3.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
I then sudo su bitbucket:
bitbucket#:$ echo $PATH
/home/bitbucket/.nvm/versions/node/v12.3.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
I execute a script from a cronjob running as bitbucket and output the following debug to a log file:
$ whoami
bitbucket
The above proves the user is bitbucket, then:
$ echo $PATH
/usr/bin:/bin
Please note I am not running this as sudo: I am using sudo to switch user, but not using sudo to echo $PATH.
How is it that the same user has 2 different $PATH variables?
You didn't say which shell you're using so I'm going to assume it's bash. The first, and perhaps most important, thing to note is that when you run sudo su bitbucket you're getting an interactive shell. Which means that ~/.bashrc will be sourced. Lots of people modify PATH in that script. Something that tends to cause problems. Why? Because non-interactive shells, such as the one launched by cron to run your command, won't read ~/.bashrc.
Your cron job gets a PATH equivalent to running this command: sudo su bitbucket -c 'echo $PATH'. Play around with that to get a better understanding of how this works. For example, instead of echo $PATH try env.
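The practical fix follows from that: give the cron job its PATH explicitly instead of relying on ~/.bashrc. A sketch, with the nvm entry mirroring the interactive shell above (adjust to your install):

```shell
# Non-interactive shells don't read ~/.bashrc, so the cron script sets its
# own PATH rather than inheriting cron's minimal /usr/bin:/bin default.
export PATH="$HOME/.nvm/versions/node/v12.3.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
command -v env    # sanity check: commands now resolve via /usr/bin
```

Alternatively, a PATH=... line at the top of the crontab file applies to every job in that crontab.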
I have a website that needs to build a Debian package and move it into a different directory for people to download. I have been able to do this on Linux, using bash scripts to compress the package and build a Packages file with dpkg. Here's the bash script:
#!/bin/bash
echo "Enter app name"
read NAME
cd /home/stumpx/cydia/apps
dpkg -b "$NAME"
cp "/home/stumpx/cydia/apps/$NAME.deb" "/home/stumpx/cydia/upload/deb/$NAME.deb"
cd /home/stumpx/cydia/upload
dpkg-scanpackages -m . /dev/null > Packages
bzip2 -f -k /home/stumpx/cydia/upload/Packages
It would be nice, I guess, to make the .bz2 files too.
You forgot your question, but I'll answer it regardless: use exec() to invoke your bash script.
Basically you need to execute system commands, which is done via exec() in PHP. So you will have to write a bash script that does it all (build the package, compress it and move it) and execute it with PHP.