Ubuntu export /opt/lampp/lampp into PATH - php

I'm trying to put lampp into my PATH on Ubuntu, but I'm apparently doing something wrong because it doesn't work.
I put this into the ~/.bashrc file:
export PATH="/opt/lampp/lampp:$PATH"
and then I ran the following command in ~ :
$ source .bashrc
Thanks for your help
EDIT
Here is the content of the file .bashrc :
# ~/.bashrc: executed by bash(1) for non-login shells.
# see /usr/share/doc/bash/examples/startup-files (in the package bash-doc)
# for examples
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
# See bash(1) for more options
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# If set, the pattern "**" used in a pathname expansion context will
# match all files and zero or more directories and subdirectories.
#shopt -s globstar
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
# uncomment for a colored prompt, if the terminal has the capability; turned
# off by default to not distract the user: the focus in a terminal window
# should be on the output of commands, not on the prompt
#force_color_prompt=yes
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
# We have color support; assume it's compliant with Ecma-48
# (ISO/IEC-6429). (Lack of such support is extremely rare, and such
# a case would tend to support setf rather than setaf.)
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
#alias dir='dir --color=auto'
#alias vdir='vdir --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
#export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Add an "alert" alias for long running commands. Use like so:
# sleep 10; alert
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
# Alias definitions.
# You may want to put all your additions into a separate file like
# ~/.bash_aliases, instead of adding them here directly.
# See /usr/share/doc/bash-doc/examples in the bash-doc package.
if [ -f ~/.bash_aliases ]; then
. ~/.bash_aliases
fi
# enable programmable completion features (you don't need to enable
# this, if it's already enabled in /etc/bash.bashrc and /etc/profile
# sources /etc/bash.bashrc).
if ! shopt -oq posix; then
if [ -f /usr/share/bash-completion/bash_completion ]; then
. /usr/share/bash-completion/bash_completion
elif [ -f /etc/bash_completion ]; then
. /etc/bash_completion
fi
fi
### Added by the Heroku Toolbelt
export PATH="/usr/local/heroku/bin:$PATH"
export PATH="/opt/lampp/lampp:$PATH"

Okay, so I think I know what's going on! Change your $PATH line in ~/.bashrc to the following:
export PATH="/opt/lampp:$PATH"
Then try source ~/.bashrc, or open a new terminal. You should now have the lampp and xampp commands available. Both should do the same thing.
The problem is that the $PATH variable points to directories that contain executable files, rather than pointing to executable files directly. It appears that /opt/lampp/lampp is a symbolic link that points to /opt/lampp/xampp, which is an executable.
UPDATE: When you use sudo, it's likely not honoring your $PATH variable for security reasons. You might try running sudo visudo, and editing the line that says Defaults secure_path="..." to include /opt/lampp. Then sudo lampp start should work!
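The directory-vs-file distinction is easy to demonstrate in isolation. A minimal sketch (the temp directory and the mytool name are made up for the demo):

```shell
#!/bin/sh
# $PATH entries must be directories that contain executables,
# not paths to the executables themselves.
demo_dir=$(mktemp -d)
printf '#!/bin/sh\necho "mytool ran"\n' > "$demo_dir/mytool"
chmod +x "$demo_dir/mytool"

# Wrong: putting the executable itself on PATH -- lookup fails, prints "not found".
PATH="$demo_dir/mytool" command -v mytool || echo "not found"

# Right: putting the directory that contains it on PATH.
PATH="$demo_dir" command -v mytool   # prints the full path to mytool
```

The fix above works for exactly this reason: /opt/lampp is the directory, /opt/lampp/lampp is the file inside it.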

I also faced the same problem. To fix it, first add the export line to the ~/.bashrc file in your Ubuntu home directory:
export PATH="/opt/lampp:$PATH"
Open the file in an editor:
vim ~/.bashrc
and add the export command near the top.
Now you have to edit the sudoers file so that your path is in the list of secure paths; otherwise the xampp or lampp command won't work with sudo:
sudo visudo
Append ":/opt/lampp" to the end of the secure_path value, then save and exit (Ctrl+X, then Enter, in nano).
Your work is done; you can now use xampp anywhere with sudo privileges.
To list all of xampp's commands, type:
xampp
in the terminal.

Related

Add php and composer alias on QNAP Startup

I came across a couple of issues with my QNAP NAS TS-251+ whilst developing a new project. These are:
1) There is no php alias, and when I add one via the command line it is removed on NAS restart.
2) A similar thing happens for Composer, except on restart Composer itself is also removed from the system.
How can I stop this from happening, or get around it, so that the php and composer aliases are already set when my NAS restarts?
I managed to resolve this issue by adding a new script that runs when my NAS starts up. QNAP have provided some basic instructions on how to add a startup script on their wiki page under Running Your Own Application at Startup. However, I added a couple more steps.
These steps are fairly basic:
Login to your NAS Server via SSH.
Run the following command: mount $(/sbin/hal_app --get_boot_pd port_id=0)6 /tmp/config (running ls /tmp/config will then list the contents of the boot partition)
Run vi /tmp/config/autorun.sh; this will allow you to edit/create a file called autorun.sh (*)
For me, I wanted to keep this file as simple as possible so I wouldn't have to change it much; the real work is just called from within this shell script. So add the following to autorun.sh.
autorun.sh code example:
#!/bin/sh
# autorun script for Turbo NAS
/share/CACHEDEV1_DATA/.qpkg/autorun/autorun_startup.sh start
exit 0
You will notice the path /share/CACHEDEV1_DATA/.qpkg/autorun/; this is where the new script that I want to run is kept. Yours doesn't have to be there if you don't want, but I know scripts placed there will not be removed. autorun_startup.sh is the name of the script I want to run, and start is the command in the script I want executed.
Run chmod +x /tmp/config/autorun.sh to make sure that autorun.sh is actually runnable.
Save the file and run umount /tmp/config (Important).
Navigate to the folder you have put in the autorun.sh (script in my case /share/CACHEDEV1_DATA/.qpkg/autorun/) and create any folders along the way that you need.
Create your new shell file using vi and call it whatever you want (again, in my case it is called autorun_startup.sh) and add your script to the file. The script I added is below, but you can add whatever you want to your startup script. (*)
autorun_startup.sh code example:
#!/bin/sh
RETVAL=0
QPKG_NAME="autorun"
APACHE_ROOT=`/sbin/getcfg SHARE_DEF defWeb -d Qweb -f /etc/config/def_share.info`
QPKG_DIR=$(/sbin/getcfg $QPKG_NAME Install_Path -f /etc/config/qpkg.conf)
addPHPAlias() {
/bin/cat /etc/profile | /bin/grep "php" | /bin/grep "/usr/local/apache/bin/php" 1>>/dev/null 2>>/dev/null
[ $? -ne 0 ] && /bin/echo "alias php='/usr/local/apache/bin/php'" >> /etc/profile
}
addComposerAlias() {
/bin/cat /etc/profile | /bin/grep "composer" | /bin/grep "/usr/local/bin/composer" 1>>/dev/null 2>>/dev/null
[ $? -ne 0 ] && /bin/echo "alias composer='/usr/local/bin/composer'" >> /etc/profile
}
addPHPComposerAlias() {
/bin/cat /etc/profile | /bin/grep "php-composer" | /bin/grep "/usr/local/apache/bin/php /usr/local/bin/composer" 1>>/dev/null 2>>/dev/null
[ $? -ne 0 ] && /bin/echo "alias php-composer='php /usr/local/bin/composer'" >> /etc/profile
}
download_composer() {
curl -sS https://getcomposer.org/installer | /usr/local/apache/bin/php -- --install-dir=/usr/local/bin --filename=composer
}
case "$1" in
start)
/bin/echo "Enable PHP alias..."
/sbin/log_tool -t 0 -a "Enable PHP alias..."
addPHPAlias
/bin/echo "Downloading Composer..."
/sbin/log_tool -t 0 -a "Downloading Composer..."
download_composer
/bin/echo "Enable composer alias..."
/sbin/log_tool -t 0 -a "Enable composer alias..."
addComposerAlias
/bin/echo "Adding php composer alias..."
/sbin/log_tool -t 0 -a "Adding php composer alias..."
addPHPComposerAlias
/bin/echo "Use it: php-composer"
/sbin/log_tool -t 0 -a "Use it: php-composer"
;;
stop)
;;
restart)
;;
*)
echo "Usage: $0 {start|stop|restart}"
exit 1
esac
exit $RETVAL
Run chmod +x /share/CACHEDEV1_DATA/.qpkg/autorun/autorun_startup.sh to make sure your script is runnable.
Restart your NAS system to make sure the script has run. After the restart I just ran php -v in the terminal to check that the php alias worked, and it did.
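The addPHPAlias-style functions above all implement one reusable pattern: append a line to a file only if it is not already present, so the startup script stays idempotent across restarts. A standalone sketch of that pattern (the temp file stands in for /etc/profile):

```shell
#!/bin/sh
# Append a line to a file only if the file does not already contain it.
append_once() {
    line=$1
    file=$2
    grep -qF -- "$line" "$file" 2>/dev/null || echo "$line" >> "$file"
}

profile=$(mktemp)   # stand-in for /etc/profile
append_once "alias php='/usr/local/apache/bin/php'" "$profile"
append_once "alias php='/usr/local/apache/bin/php'" "$profile"   # second call is a no-op
grep -c "alias php=" "$profile"   # prints 1, not 2
```

Using grep -F avoids the original scripts' subtle issue where the alias line is treated as a regular expression.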
(*) With steps 3 and 8 you can either do this via something like WinSCP or continue doing it via the command line (SSH). I chose to do it via WinSCP, but the vi commands above work over SSH too.
I am fairly new to server-related stuff, so if anyone has a better way, cool.

PHP-CS-Fixer fix in precommit hook but file doesn't add to commit

I want to automatically fix files with php-cs-fixer before committing, and then commit the changes including those fixes.
So I created a pre-commit file, but I have two problems:
1) I can't find out which file was changed (maybe just a bash problem)
2) If I run "git add" without a condition, the changes are included in the commit, but not the files themselves
I've tried to show it clearly in the comments of the hook, so here it is:
#!/usr/bin/env bash
# get the list of changed files
staged_files=$(git diff --cached --name-only)
# command to fix files
cmd='vendor/bin/php-cs-fixer fix %s -q'
if [ -f 'php_cs_fixer_rules.php' ]; then
cmd='vendor/bin/php-cs-fixer fix %s -q --config=php_cs_fixer_rules.php'
fi
for staged in ${staged_files}; do # this cycle exactly works
# work only with existing files
if [[ -f ${staged} && ${staged} == *.php ]]; then # this condition exactly works
# use php-cs-fixer and get flag of correction
eval '$(printf "$cmd" "$staged")' # this command exactly works and corrects the file
correction_code=$? # but this doesn't work
# if fixer fixed the file
if [[ ${correction_code} -eq 1 ]]; then #accordingly this condition never works
$(git add "$staged") # even if the code goes here, then all changes will go into the commit, but the file itself will still be listed as an altered
fi
fi
done
exit 0 # do commit
Thanks in advance for any help.
In particular, I want to know why correction_code doesn't get a value, and why files have identical content after "git add" but still aren't committed.
In pre-commit, if you add some files with git add, those files will appear in the files to commit.
The problem in your pre-commit is [[ ${correction_code} -eq 1 ]].
When php-cs-fixer fix succeeds, it returns 0, not 1.
So the pre-commit should be:
#!/usr/bin/env bash
# get the list of changed files
staged_files=$(git diff --cached --name-only)
# build the command to fix files as a bash array, so it can be
# executed directly without printf/eval
cmd=(vendor/bin/php-cs-fixer fix -q)
if [ -f 'php_cs_fixer_rules.php' ]; then
cmd+=(--config=php_cs_fixer_rules.php)
fi
for staged in ${staged_files}; do
# work only with existing files
if [[ -f ${staged} && ${staged} == *.php ]]; then
# use php-cs-fixer and get the flag of correction
"${cmd[@]}" "$staged" # execute php-cs-fixer directly
correction_code=$? # if php-cs-fixer fix works, it returns 0
# HERE, if returns 0, add stage it again
if [[ ${correction_code} -eq 0 ]]; then
git add "$staged" # execute git add directly
fi
fi
done
exit 0 # do commit
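The staged-file listing at the top of the hook can be exercised outside of a hook in a throwaway repository (everything below lives in temp directories created just for the demo):

```shell
#!/bin/sh
# Show what `git diff --cached --name-only` reports to a pre-commit hook.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo 'v1' > a.php
git add a.php && git commit -qm init
echo '<?php echo "hi";' > a.php   # modify a tracked file
echo 'notes' > b.txt              # and create a new one
git add a.php b.txt
git diff --cached --name-only     # lists a.php and b.txt
```

This is also a convenient way to test the hook itself: copy it into .git/hooks/pre-commit in the scratch repo and run git commit.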

Imported files do not appear inside Docker container

I touched Docker for the first time yesterday, and I don't know much about web server administration in general. Just a heads up.
I'm struggling to make a simple PHP "hello world" run inside a Docker container. I have built a Docker container with the following dockerfile:
FROM nanoserver/iis
MAINTAINER nanoserver.es@gmail.com
ADD http://windows.php.net/downloads/releases/php-5.6.31-Win32-VC11-x64.zip php.zip
ADD https://nanoserver.es/nanofiles/vcruntime140.dll C:\\Windows\\System32\\vcruntime140.dll
ADD https://nanoserver.es/nanofiles/iisfcgi.dll C:\\Windows\\System32\\inetsrv\\iisfcgi.dll
ADD https://nanoserver.es/nanofiles/info.dll C:\\inetpub\\wwwroot\\info.php
COPY hello.php C:\\inetpub\\wwwroot\\hello.php
ENV PHP C:\\php
RUN powershell -command Expand-Archive -Path c:\php.zip -DestinationPath C:\php
RUN setx PATH /M %PATH%;C:\php
ADD https://nanoserver.es/nanofiles/php.ini C:\\php\\php.ini
RUN powershell -command \
rm C:\Windows\System32\inetsrv\config\Applicationhost.config ; \
Invoke-WebRequest -uri https://nanoserver.es/nanofiles/Applicationhost.txt -outfile C:\\Windows\\System32\\inetsrv\\config\\Applicationhost.config ; \
Remove-Item c:\php.zip -Force
# The above request fails, but I don't see how it would be relevant to my question.
CMD ["powershell.exe"]
I would expect this Dockerfile to create a container with c:\inetpub\wwwroot\info.php, c:\inetpub\wwwroot\hello.php and c:\php. However, Powershell inside the container gives me this output:
PS C:\inetpub\wwwroot> ls
Directory: C:\inetpub\wwwroot
Mode LastWriteTime Length Name
---- ------------- ------ ----
-a---- 4/11/2017 11:55 AM 703 iisstart.htm
-a---- 4/11/2017 11:55 AM 99710 iisstart.png
It feels like there is something fundamental here that I haven't grasped. Could someone help me out?
On Windows, you have to use forward slashes in the paths in a Dockerfile.
Official docs: https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/manage-windows-dockerfile
From docs
On Windows, the destination format must use forward slashes. For example, these are valid COPY instructions.
COPY test1.txt /temp/
COPY test1.txt c:/temp/
However, the following will NOT work.
COPY test1.txt c:\temp\
If either source or destination include whitespace, enclose the path in square brackets and double quotes.
COPY ["<source>", "<destination>"]
Also note that copying to a non-existent path in the image does not normally trigger an error; the destination directory must exist.

Linux PHP service restart does not work

I need help. I have a web interface that runs this section in PHP:
$cmd = "/usr/sbin/sudo /usr/sbin/service networking stop";
exec($cmd, $mes);
print_r($mes); # this is an empty message
$cmd = "/sbin/ifconfig";
exec($cmd, $mes);
print_r($mes);
print_r($mes) for the service stop is empty.
print_r($mes) for ifconfig is an array with all the information about the interfaces, but they are all up, not down, so the command above did not work well (the service is still running).
This script is run as the daemon user.
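Before editing sudoers, it helps to make the failure visible: PHP's exec() only captures stdout, so sudo's error message (for example a password or tty complaint) is lost. A sketch of the diagnostic pattern, shown with harmless commands in place of the real service call:

```shell
#!/bin/sh
# Run a command capturing combined stdout+stderr plus the exit status,
# so a silently failing sudo call becomes visible.
run_logged() {
    out=$("$@" 2>&1)
    rc=$?
    echo "exit=$rc output=$out"
    return $rc
}

run_logged ls /nonexistent-dir || true   # nonzero exit; the error text is captured
run_logged echo hello                    # exit=0 output=hello
```

In PHP the same idea is exec($cmd . ' 2>&1', $mes, $rc); and then checking $rc rather than assuming the command succeeded.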
This is my visudo:
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
www-data ALL =NOPASSWD: /bin/nc, /bin/cp, /bin/chmod, /bin/chown, /etc/init.d/, /usr/sbin/service
deamon ALL = NOPASSWD: /bin/cp, /bin/chmod, /bin/chown, /etc/init.d/, /usr/sbin/service, /home/optokonlmcp/sss.php, /sbin/ifconfig
Please, do you know why this PHP script does not work?
Thank you in advance
BR
MK
SOLUTION:
Create a script owned by root (root:root) with the command I need to run (sudo /usr/sbin/service ...); the command must contain sudo.
Add the script to visudo, plus all the commands mentioned in the script.
visudo now contains:
daemon ALL=NOPASSWD: /usr/bin/sudo, /path_my_script/script.sh
Of course the script must be executable, so I changed its permissions to 755.
Now you can try running the script as daemon; I use this command: sudo -u daemon /path_of_script/script.sh
The last step is to add the command to PHP: exec("sudo /path_of_script/script.sh");
Now I can restart the network via PHP.
Thank you
BR
MK
Replace exac($cmd, $mes); with exec($cmd, $mes);

How to deploy Gitlab project branch to directory

I have a Gitlab server (Ubuntu 14.04) which I am trying to use both as a host for my repositories and as a testing server for my PHP projects. Ideally, I would like to have Gitlab/Git export the "release" branch to /var/www/git/<project-name> when that branch is updated.
My Question: How can I export a specific branch in Gitlab, to a specific directory on the localhost, when the branch is updated?
I am aware that there are webhooks available in Gitlab, but it seems unnecessary and wasteful to have the server POST to itself for a local operation.
I suppose you are running the community edition of gitlab.
In that case, only the server administrator can configure hook scripts, by copying the required scripts into the affected repositories.
gitlab itself already uses the $GIT_DIR/hooks directory for its own scripts. Fortunately they forward control to any hook script in the gitlab-specific $GIT_DIR/custom_hooks directory. See also this question about how to run multiple hooks of the same type on gitlab.
The script itself could look like this:
#!/bin/bash
#
# Hook script to export current state of repo to a release area
#
# Always hardcode release area - if configured in the repo this might incur data loss
# or security issues
echo "Git hook: $0 running"
. $(dirname $0)/functions
git=git
release_root=/gitlab/release
# The above release directory must be accessible from the gitlab server
# and any client machines that want to access the exports. Please configure.
if [ $(git rev-parse --is-bare-repository) = true ]; then
group_name=$(basename $(dirname "$PWD"))
repo_name=$(basename "$PWD")
else
cd $(git rev-parse --show-toplevel)
group_name=$(basename $(readlink -nf "$PWD"/../..))
repo_name=$(basename $(readlink -nf "$PWD"/..))
fi
function do_release {
ref=$1
branch=$2
# Decide on name for release
release_date=$(git show -s --format=format:%ci $ref -- | cut -d' ' -f1-2 | tr -d -- -: | tr ' ' -)
if [[ ! "$release_date" =~ [0-9]{8}-[0-9]{6} ]]; then
echo "Could not determine release date for ref '$ref': '$release_date'"
exit 1
fi
dest_root="$release_root/$group_name/$repo_name"
dated_dir="dated/$release_date"
export_dir="$dest_root/$dated_dir"
# Protect against multiple releases in the same second
if [[ -e "$export_dir" ]]; then
export_dir="$export_dir-02"
dated_dir="$dated_dir-02"
while [[ -e "$export_dir" ]]; do
export_dir=$(echo $export_dir | perl -pe 'chomp; print ++$_')
dated_dir=$(echo $dated_dir | perl -pe 'chomp; print ++$_')
done
fi
# Create release area
if ! mkdir -pv "$export_dir"; then
echo 'Failed to create export directory: ' "$export_dir"
exit 1
fi
# Release
if ! git archive $branch | tar -x -C "$export_dir"; then
echo 'Failed to export!'
exit 1
fi
chmod a-w -R "$export_dir" # Not even me should change this dir after release
echo "Exported $branch to $export_dir"
( cd "$dest_root" && rm -f latest && ln -s "$dated_dir" latest )
echo "Adjusted $dest_root/latest pointer"
}
process_ref() {
oldrev=$(git rev-parse $1)
newrev=$(git rev-parse $2)
refname="$3"
set_change_type
set_rev_types
set_describe_tags
echo " Ref: $refname","$rev_type"
case "$refname","$rev_type" in
refs/heads/*,commit)
# branch
refname_type="branch"
function="branch"
short_refname=${refname##refs/heads/}
if [[ $short_refname == release ]]; then
echo " Push accepted. Releasing export for $group_name/$repo_name $short_refname"
do_release "$refname" "$short_refname"
else
echo " Push accepted. No releases done for $group_name/$repo_name $short_refname"
fi
;;
refs/tags/*,tag)
# annotated tag
refname_type="annotated tag"
function="atag"
short_refname=${refname##refs/tags/}
;;
esac
}
while read REF; do process_ref $REF; done
exit 0
The script was started based on this post-receive.send_email script which is already quoted on SO multiple times.
Configure a release area in the variable hardcoded in the script, or e.g. add a mechanism to read a config file in the repo. Maybe you want to give users control over this area. Depends on your security circumstances.
The release area must be accessible by the git@gitlab user, and of course by any client expecting the export.
The branch to export is hardcoded in the script.
The release area will be populated like this:
$release_root/$group_name/$repo_name/dated/$release_date
Plus a symbolic link latest pointing to the latest $release_date. The idea is that this is extensible to later be able to also export tags. If you expect to export different branches, a $branch should be included as a path component, too.
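The core export step in do_release, git archive piped into tar, can be tried standalone before wiring it into a hook. A sketch using a scratch repository (all paths are temporary, and the release branch name matches the one hardcoded above):

```shell
#!/bin/sh
# Export the tip of a branch to a plain directory, as do_release does.
repo=$(mktemp -d); export_dir=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo '<?php echo "hi";' > index.php
git add index.php && git commit -qm 'first release'
git branch -M release              # name the branch like the hook expects
git archive release | tar -x -C "$export_dir"
ls "$export_dir"                   # index.php -- and no .git directory
```

Unlike a clone, git archive produces a clean snapshot with no repository metadata, which is exactly what you want in a web-served release area.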
Access control of the gitlab server is not passed down to the directory structure. Currently I do this manually, and that is why I do not auto-populate all new repositories with this hook. I'd rather configure manually, and then adjust unix group permissions (and/or ACLs) on the $release_root/$groupname paths accordingly. This needs to be done only once per group and works because no one else is allowed to create new groups on my gitlab instance. This is very different from the default.
Anything else we can do for you? ;-)
