Clone EC2 instance with everything installed - PHP

I received an email from Amazon saying that I have to terminate my instance because of attacks, but I worked really hard to install and set up everything on that instance.
I've tried creating an image and then launching a new instance from that AMI, but it didn't work.
So how can I create another instance with the stuff that I installed on the current instance?
By the way, is there some way to copy everything on the instance?
Could someone please spare a hint?
Thanks!

If the instance's security is compromised you should really start from scratch.
But if you need to access data from the old instance, you can create a volume snapshot and then attach it to the new instance.
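If you go the snapshot route, the rough AWS CLI sequence looks something like this (all IDs, the availability zone, and the device name are placeholders; this is only a sketch of the flow, not the exact commands for your setup):
# Snapshot the root volume of the old instance
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "backup before terminating"
# Create a new volume from that snapshot in the new instance's availability zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
# Attach it to the new instance as a secondary device, then mount it and copy what you need
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf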
I know it's annoying to set everything up again, so I keep a shell script that configures everything remotely and run it each time I create a new instance. If you were trying to set up a LAMP server, you could write an install.sh script with something like:
#!/bin/sh
# Fill in the new instance's public IP and the path to your key pair
SERVER_IP= ...
SERVER_ADDR=ec2-user@$SERVER_IP
SERVER_PEM=/../../key.pem
# Pass the key before the destination so ssh treats -i as an option, not a remote command
sudo ssh -t -t -i $SERVER_PEM $SERVER_ADDR <<-EOF1
sudo yum update -y
sudo yum install -y httpd24 php56 mysql55-server php56-mysqlnd
EOF1

Related

WebdriverIO and Docker Setup

I am trying to set up a Docker container with WebdriverIO built into it, with the eventual aim of being able to run a CI/CD pipeline in GitLab, but I have absolutely no idea where to start.
My application is a PHP/MySQL based app which was also recently dockerised. I access it locally on http://localhost.
I have tried to create a docker image with wdio built into it, but it fails when trying to do the
npm init wdio --yes
as the --yes flag doesn't force any of the default settings, contrary to the official documentation. This then causes the wdio installation to fail.
What is confusing me even more is that there seem to be very few tutorials for this, the wdio documentation doesn't seem great, and the tutorials I can find all seem to mention Selenium. FYI, I am just a dev who has been tasked with taking some existing WDIO scripts and getting them ready for CI/CD; I don't know a massive amount about WDIO in the first place.
Does anyone have any basic steps I could follow that would describe the process of taking some local WDIO scripts and getting them to run inside a container, with the end goal of having them run in some sort of CI/CD pipeline?
When trying to create the image, the following command does not seem to work:
npm init wdio --yes
It would be much more appropriate to initialize a wdio project locally and then copy it into the image via the Dockerfile.
This is what it might look like:
FROM node:16
USER root
#===============================
# Set default workspace
#===============================
RUN mkdir /home/workspace \
&& chmod 2777 /home/workspace
COPY . /home/workspace
WORKDIR /home/workspace
This way, your Docker image has your whole project built in.
Then you could append the following commands to make sure the environment is ready for WebdriverIO to execute.
#==================================
# Install needed packages
#==================================
RUN apt update && apt upgrade -y
RUN npm install
If you need anything like a browser and a matching webdriver, there are many ways to install them.
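One option, assuming a Debian-based node image (package names may differ on other base images), is to pull Chromium and its driver from the distro packages:
RUN apt-get update && apt-get install -y chromium chromium-driver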
You can use ENTRYPOINT or CMD to make it execute the specified test suites once the container is up.
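For example, a minimal sketch that launches the default WebdriverIO test runner (assuming your config file is named wdio.conf.js; adjust to match your project):
CMD ["npx", "wdio", "run", "wdio.conf.js"]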
If you want a complete CI/CD flow with Docker containers, the details will depend on which service you use.

PHP-FPM won't start from a Dockerfile

Hello dear community,
I am trying to accomplish something very simple: I want to start a php-fpm service from a Docker container using a Dockerfile. My Dockerfile content is below:
FROM debian
RUN apt-get update && apt-get install php -y && apt-get install php7.3-fpm -y && service php7.3-fpm start
When I build this image from the dockerfile and run it as a container, the php-fpm service is not active.
I even tried using Docker's interactive mode (the -i flag) to ensure that the container was not exiting in case the service was running as a daemon.
I am confused because the command RUN service php7.3-fpm start from my dockerfile should have started the service.
To successfully start the service inside my container I actually have to log into it manually using docker exec -it <container ID> bash and run service php7.3-fpm start myself; then the service works and becomes active.
I don't understand why the php-fpm service is not starting automatically from my Dockerfile, any help would be very much appreciated. Thanks in advance!
To a first approximation, commands like service don't work in Docker at all.
A Docker container runs only a single foreground process. That's not usually an init system, or if it is, it's just enough to handle some chores like zombie process cleanup. Conversely, a Docker image only contains a filesystem image and some metadata on how to start that process, but it does not persist any running processes. So for example if you
RUN service php7.3-fpm start
it might record in some file that the service was supposed to have been started, but once the RUN command completes, the running process doesn't exist at all any more.
The easiest way to get a running PHP-FPM setup is to use the Docker Hub php image:
FROM php:7.3-fpm
This should do all of the required setup, including arranging for the FPM server to run as the main container command; you just need to COPY your application code in.
If you really want to run it yourself, you need to make it the main command of your custom image
CMD ["php-fpm"]
as is done in php:7.3-fpm's Dockerfile.
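If you do go the custom-image route on Debian, a minimal sketch might look like the following (the -F flag keeps php-fpm in the foreground so the container has a long-running main process; the package and binary names assume Debian 10, which ships PHP 7.3, and you may still need to adjust the pool's listen configuration for your setup):
FROM debian:buster
# Install PHP-FPM (pulls in the PHP 7.3 packages on buster)
RUN apt-get update && apt-get install -y php7.3-fpm && rm -rf /var/lib/apt/lists/*
# Run the FPM master process in the foreground as the container's main command
CMD ["php-fpm7.3", "-F"]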

How to run bundle from PHP script

I'm writing a webhook to automatically publish a site when I push to GitHub. Part of the process requires that I build the site with
bundle exec middleman build --clean
I'm trying to invoke that from a PHP script (the script called by the GitHub webhook), so the user is www-data. No matter what I try, however, I'm getting an error that bundle cannot be found.
How can I run a bundle command from a PHP script?
I was able to figure this out. First, I installed rvm as a multi-user installation to ensure the www-data account can access it.
$ curl -sSL https://get.rvm.io | sudo bash -s stable
Install the desired ruby version, in my case 2.3.1, then set rvm to use it:
$ rvm install 2.3.1
$ rvm use 2.3.1
Run gem to install any gems that are needed. Because rvm is a multi-user installation, these gems are stored to the system and not your specific user.
$ gem install packagename
I don't know if this is necessary, but I would close the SSH session and reopen it. rvm messes with environment variables, so better safe than sorry.
Run env to print all environment variables (printenv also works if env doesn't for some reason). You'll get a big list of everything that's set; you only need the Ruby-related ones. Do not copy/paste these values, they are examples I pulled from my system. Yours will be different!
PATH=/usr/local/rvm/gems/ruby-2.3.1/bin:/usr/local/rvm/gems/ruby-2.3.1@global/bin:/usr/local/rvm/rubies/ruby-2.3.1/bin:/usr/local/rvm/bin:/home/steven/bin:/home/steven/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
rvm_bin_path=/usr/local/rvm/bin
GEM_HOME=/usr/local/rvm/gems/ruby-2.3.1
IRBRC=/usr/local/rvm/rubies/ruby-2.3.1/.irbrc
MY_RUBY_HOME=/usr/local/rvm/rubies/ruby-2.3.1
rvm_path=/usr/local/rvm
rvm_prefix=/usr/local
rvm_ruby_string=ruby-2.3.1
GEM_PATH=/usr/local/rvm/gems/ruby-2.3.1:/usr/local/rvm/gems/ruby-2.3.1@global
RUBY_VERSION=ruby-2.3.1
Now we need PHP to recognize these variables. You'll need to find the right file on your system, which can be tricky; I don't have a way of knowing which one is correct, so I used trial and error.
The file on my system is /etc/php/5.6/fpm/pool.d/www.conf. Add all of the environment variables you previously grabbed into this file with the below format. Note that you DO need PATH in here as well!
env[rvm_path] = /usr/local/rvm
env[rvm_prefix] = /usr/local
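The same pattern applies to the rest of the variables from the env output, including PATH (the values below are illustrative, taken from the example output above; use the ones from your own system):
env[PATH] = /usr/local/rvm/gems/ruby-2.3.1/bin:/usr/local/rvm/gems/ruby-2.3.1@global/bin:/usr/local/rvm/rubies/ruby-2.3.1/bin:/usr/local/rvm/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
env[GEM_HOME] = /usr/local/rvm/gems/ruby-2.3.1
env[GEM_PATH] = /usr/local/rvm/gems/ruby-2.3.1:/usr/local/rvm/gems/ruby-2.3.1@global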
Now restart php-fpm. Your service name may be different from mine; I'm using the 5.6 build from ondrej/php.
Ubuntu 15.04 and newer (systemd):
$ sudo systemctl restart php5.6-fpm
Ubuntu 14.10 and older (no systemd):
$ sudo service php5.6-fpm restart
Finally, in the script itself you'll need to cd to the directory you're running the bundle command from. My short script is this:
cd /opt/slate
/usr/bin/git reset --hard
/usr/bin/git pull
bundle exec middleman build --clean
cp -R /opt/slate/build/* /var/www/docs
Works for me!
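For completeness, the PHP side of the webhook can then be as small as invoking that shell script and logging its output (the path /opt/slate/deploy.sh is a hypothetical name for wherever you save the script above):
<?php
// Called by the GitHub webhook; runs the deploy script and logs its combined output
$output = shell_exec('/opt/slate/deploy.sh 2>&1');
error_log($output);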

Questions on an external raspberry pi database

Okay, so I need a server with a database for a smartphone application I'm working on, and there are a few requirements:
- SQL database
- Returns query results as JSON
So I had this old Raspberry Pi lying around and I wanted to set it up for this, but there's something I'm uncertain of, and Google hasn't helped me yet.
I planned on using SQLite on the Pi and wanted to interact with it from PHP, then use PHP to convert the results into JSON, which would be retrieved by the smartphone.
The thing is, I don't know what service I can use on the Pi to make this .php file reachable from another device.
You can install nodejs by running:
sudo apt-get install nodejs nodejs-legacy npm
For the SQL database you can run:
sudo apt-get install mariadb-server mariadb-client
Install the Node.js MySQL module by running:
sudo npm install -g node-mysql
That should get you started.
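If you stay with the PHP approach you described in the question, the .php file itself is straightforward once a web server (Apache or nginx with PHP-FPM) is serving it; a minimal sketch, assuming a local MariaDB database mydb, a table mytable, and credentials you created (all hypothetical names):
<?php
// Connect to the local database (DSN and credentials are placeholders)
$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'dbuser', 'dbpass');
// Return the query result as JSON for the smartphone app
$rows = $pdo->query('SELECT * FROM mytable')->fetchAll(PDO::FETCH_ASSOC);
header('Content-Type: application/json');
echo json_encode($rows);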

Installing ElastiCache Cluster Client on PHP AWS Elastic Beanstalk (without creating resource)

Elastic Beanstalk does not, by default, install the ElastiCache Cluster Client PHP module. This is needed to connect to an ElastiCache node cluster. Reading around, most of the instructions relate to creating an ElastiCache resource (which I assume will also install the PHP module on the Elastic Beanstalk). I want to install the PHP module without creating the resource as I want to use an existing cluster.
(64bit Linux PHP5.5)
The module is not installed by default on Beanstalk or on any EC2 instance; you have to do this yourself. It is also something completely different from creating a resource: you can do one without the other.
The ElastiCache Cluster Client for PHP is an extension that you can install via pecl on your instances. You can do this manually but if the instance is ever destroyed you have to do this again. Therefore it is much better to include the extension's install procedure as part of your deployment process. In a beanstalk app you can do this by adding configurations files in your .ebextensions dir.
For example, create these two files. I took these from an actual config file:
#.ebextensions/01fileselasticachephp.config
files:
  "/tmp/AmazonElastiCacheClusterClient-latest-PHP54-64bit.tgz":
    mode: "000777"
    owner: ec2-user
    group: ec2-user
    source: http://elasticache-downloads.s3.amazonaws.com/ClusterClient/PHP-5.4/latest-64bit

#.ebextensions/02setupelasticachephp.config
commands:
  01install:
    command: "pecl install /tmp/AmazonElastiCacheClusterClient-latest-PHP54-64bit.tgz"
The actual names of the files don't matter; they are for your own organizational purposes. Anything in that directory with a .config extension will be executed in alphabetical order, which is why you want to prefix your files with a number so that they get executed in the right order: first download the extension, then install it. Mind you, you can also do it all at once in one file; I split it in two because my actual config files were a lot bigger.
Once you have these files in place do a deployment and the Elastic Cache Cluster Client will be installed.
Note that at the time I deployed this, only the 5.4 client was available that's why my example shows that. I don't know if there is a 5.5 client so it's up to you to find out. You should only need to change the file name and URL to point to the 5.5 extension and should be all set to go.
UPDATE (as of 10/2020)
The solution above didn't work for me with the current software versions, but it definitely pointed me in the right direction. What didn't work was specifically the pecl install command (even using pecl7): it always threw the error "could not extract the package.xml file from [...]" and I couldn't find a solution for it.
So here's the config file that worked for me:
commands:
  02-get-file:
    command: "wget https://elasticache-downloads.s3.amazonaws.com/ClusterClient/PHP-7.3/latest-64bit"
  02-untar:
    command: "sudo tar -zxf latest-64bit amazon-elasticache-cluster-client.so"
  03-move-file:
    command: "sudo mv amazon-elasticache-cluster-client.so /usr/lib64/php/7.3/modules/"
  04-create-ini:
    command: "grep -qF 'extension=amazon-elasticache-cluster-client.so' /etc/php-7.3.d/50-memcached.ini || echo 'extension=amazon-elasticache-cluster-client.so' | sudo tee --append /etc/php-7.3.d/50-memcached.ini"
  05-cleanup:
    command: "sudo rm latest-64bit*"
  06-restart-apache:
    command: "sudo /etc/init.d/httpd restart"
Hope this helps other people!
