PHP and Ruby with Docker

Is it possible to run two web apps at the same time, one using PHP and the other using Ruby, each in its own Docker container?

That should be no problem. Normally you have one app per container.
You could create a Docker container for your PHP server and a container for your Ruby server.
You just need to publish them on different host ports, because by default both will run on port 80 or 443, and then it should work.
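For example, a minimal sketch (the image names here are just placeholders) that publishes each app on its own host port:

# hypothetical images; each app listens on port 80 inside its container
docker run -d --name php-app -p 8080:80 my/php-app
docker run -d --name ruby-app -p 8081:80 my/ruby-app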

Docker is designed to run a single process per container; if you want to run more than one in the same container, you need a process supervisor such as supervisord, s6, or daemontools. Check the Docker documentation on supervisord:
https://docs.docker.com/articles/using_supervisord/
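As a rough sketch of that approach (the program names and binary paths are illustrative, not taken from the article), the container runs supervisord as its single foreground process and supervisord starts everything else:

# /etc/supervisor/conf.d/supervisord.conf
[supervisord]
nodaemon=true

[program:php-fpm]
command=/usr/sbin/php-fpm --nodaemonize

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"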

Related

Deploying Laravel with Docker containers

I plan to deploy my Laravel application with docker containers.
I need the following components for my application:
MySQL server
nginx server
Certbot for SSL activation
Queue worker for Laravel
Since the application is still under development (and probably will always be), it should be very easy to update (I will automate this with GitLab CI/CD) and it should have as little downtime as possible during the update.
Also, I want to be able to host multiple instances of the application, whereby only the .env file for Laravel is different. In addition to the live application, I want to host a staging application.
My current approach is to create a container for the MySQL server, one for the nginx server and one for the queue worker. The application code would be a layer in the nginx server container and in the queue worker container. When updating the application, I would rebuild the nginx container and the queue worker container. Is this a good approach? Or are there better ways to achieve this? And what would be a good approach for my MySQL server, nginx server, PHP version, ... to stay up to date without downtime for the application?
The main idea of Docker is to split your app into containers, so yes, it is good to have one container per service. In your example, I suggest keeping MySQL in one container, the queue worker in another, and so on. As a result, you will have a container for each service. Then I suggest creating an internal Docker network and connecting the containers to it. Also, I suggest using Docker volumes to store all your application data. To make configuration much easier, I suggest using Docker Compose.
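A minimal docker-compose.yml sketch along those lines (the image tag, build context and env file are assumptions, not a definitive setup); a staging instance can reuse the same file with a different .env and project name:

# docker-compose.yml
version: "3"
services:
  mysql:
    image: mysql:8.0
    env_file: .env
    volumes:
      - dbdata:/var/lib/mysql
  app:
    build: .            # nginx + PHP image containing your application code
    env_file: .env
    ports:
      - "80:80"
    depends_on:
      - mysql
  queue:
    build: .            # same image, different command
    env_file: .env
    command: php artisan queue:work
    depends_on:
      - mysql
volumes:
  dbdata: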

FTP access inside Docker for a PHP environment

I have a web server with multiple PHP websites. I push all my updates through FTP.
I intend to move to a more containerized environment without changing too much of my current basic workflow.
I would like to deploy each of my websites in a Docker container. The database for all the websites would be in another container.
I will have a Docker container acting as a reverse proxy.
To update my websites, I have two ideas:
Set up FTP access in the container so I can update it directly
Set up a shared directory with the host through a volume, so I can set up FTP access from the host
What do you think of this?
Thanks for your help
Changing the code inside a running container, or at all, is against Docker best practices, as containers are designed to be ephemeral.
A better idea would be to rebuild the image every time you update the code, allowing the containers to stay ephemeral, and making it easier to scale. You could implement this through CI/CD, but that is out of the scope of this question.
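A minimal sketch of such an image (the php:7.4-apache base and the code layout are assumptions about your project):

# Dockerfile -- rebuilt on every code change instead of editing a running container
FROM php:7.4-apache
COPY . /var/www/html/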
If you still really want to continue with the FTP idea, it's a good idea to have one container with an FTP service in it and another with the web server, as containers should have only one concern.
If your FTP server image is my/ftp-image and your web server image is my/web-server-image, you can start your containers like this:
docker run -itd --name my-web-server -p 80:80 -v files_volume_name_here:/path/to/files/in/container my/web-server-image
docker run -itd --name my-ftp-server [ports for ftp server here] -v files_volume_name_here:/path/to/files/in/container my/ftp-image
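If the named volume does not exist yet, create it first (the volume name is just the placeholder used in the commands above):

docker volume create files_volume_name_here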

Use Behat with Jenkins on an Amazon EC2 server

How can I set up and configure Behat, Ahoy, and Docker with Jenkins on an Amazon EC2 server?
I want to run my Behat features every time I push something to my Git account, with the help of Jenkins and Sauce Labs, on the EC2 server.
There are lots of ways to do this. What do you know about Amazon EC2? And Selenium? And Docker? There are lots of technologies here... Do you want to configure a Selenium grid? I'll try to answer some of this, but you are asking so many things... xD
I'll tell you my solution (a Selenium grid) first:
First of all, you need to create a Selenium hub on an EC2 Ubuntu 14.04 AMI without a UI and link it to your Jenkins master as a Jenkins slave, or use it directly as the master, whichever you want; command line only. Download the Selenium Server standalone JAR (be careful which version you download; if you download the Selenium 3 beta, things could change). There you can configure the hub. You can also add the Selenium hub as a service and configure it to run automatically at server start. It's important that you open the default Selenium port (or the one you configured) so the nodes can connect to it. You can do that in the Amazon EC2 console once you have created your instance: just add a security group with an inbound TCP rule for the port you want, restricted to the IPs you want.
Then you can create a Windows Server 2012 instance (for example; that's what I did) and do the same process. Download the same Selenium version and the chromedriver (there is no need to download a Firefox driver for Selenium versions before Selenium 3). Create a txt file with the Selenium command that links to the hub as a node, and rename it to *.bat in order to execute it. If you want to run the bat at start-up, you can create a service with the Task Scheduler or use NSSM (https://nssm.cc/). Don't forget to add the rules to the security groups for this machine too!
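As a rough sketch of the hub and node commands (the JAR version, hub IP and chromedriver path are illustrative):

# On the hub machine (Linux, no UI)
java -jar selenium-server-standalone-2.53.1.jar -role hub

# On the Windows node -- contents of the .bat file
java -Dwebdriver.chrome.driver=C:\selenium\chromedriver.exe ^
  -jar selenium-server-standalone-2.53.1.jar -role node ^
  -hub http://<hub-ip>:4444/grid/register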
You can link as many node servers as you want to your hub.
If you want to use docker, good luck! ;) Haha.
No, seriously: with Docker I recommend you start as simply as possible, trying to create a Dockerfile locally that runs the Jenkins server and the Selenium server NOT in grid mode. When you have it working locally, push the image to a repository. When you have all of this running, create an EC2 instance and install Docker. Pull your Selenium Docker image and run it, mapping the container ports to the host ports.
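A minimal sketch of that simple, non-grid setup (image tags and ports are the common defaults, but treat them as assumptions):

# Jenkins with its web UI on 8080 and the agent port on 50000
docker run -d --name jenkins -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts
# Standalone Selenium with Chrome, listening on 4444
docker run -d --name selenium -p 4444:4444 selenium/standalone-chrome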
You have so much work to do here... but it's really interesting. I recommend you go step by step, building a better infrastructure in every iteration. Don't try to add all those technologies at the same time.
There are lots of websites talking about these concepts.
Good luck!

How to build a sidecar container for passing files from a machine outside of the Kubernetes cluster?

I have a noob question. If I'm using a Docker image that uses a folder located on the host to do something, where should that folder be located in the Kubernetes cluster? I'm OK doing this with Docker, since I know where my host filesystem is, but I get lost when I'm on a Kubernetes cluster.
Actually, I don't know if this is the best approach, but what I'm trying to do is build a development environment for a PHP backend. Since I want every person to be able to run a container environment with their own files (which are on their own computers), I'm trying to build a sidecar container so that when launching the pod I can pass the files to the PHP container.
The problem is that I'm running Kubernetes to build a development environment for my company using a Vagrant (CoreOS + Kubernetes) solution, since we don't have a cloud service right now, so I can't use a persistent disk. I tried NFS, but it seems to be too much for what I want (just passing some files to the pod regardless of which PC I'm on). I also tried to use hostPath in Kubernetes, but the problem is that the machines from which I want to connect to the containers are located outside of the Kubernetes cluster (Vagrant + CoreOS + Kubernetes), so I'm trying to expose some containers on public IPs, but I cannot figure out how to pass the files (located on the machines outside of the cluster) to the containers.
Thanks for your help, I appreciate your comments.
Not so hard, actually. Checking my gist may give you some tips:
https://gist.github.com/resouer/378bcdaef1d9601ed6aa
See, do not try to consume files from outside the cluster; just package them in a Docker image and consume them in sidecar mode.
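A minimal pod sketch of that idea (the my/site-files image is hypothetical; it is assumed to contain your code under /src): the sidecar copies its packaged files into a shared emptyDir volume that the PHP container serves.

apiVersion: v1
kind: Pod
metadata:
  name: php-dev
spec:
  volumes:
    - name: code
      emptyDir: {}
  containers:
    - name: code-sidecar
      image: my/site-files        # hypothetical image built from your local files
      command: ["sh", "-c", "cp -r /src/. /code/ && tail -f /dev/null"]
      volumeMounts:
        - name: code
          mountPath: /code
    - name: php
      image: php:7.4-apache
      volumeMounts:
        - name: code
          mountPath: /var/www/html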

How to make the nginx and PHP containers communicate

I have a PHP web application and I want to convert it to Docker.
I have these containers:
mysql
php
nginx
I have the source code on my host in /var/www/site1.
Now when I launch nginx, I can mount site1 into the nginx container as /usr/nginx/share/html.
But I am not sure how to link it with the PHP container. Can I have a standalone PHP container with only PHP installed, or do I need a web server along with PHP?
My view on Docker containers is that each container typically represents one process, e.g. mysql or nginx as in your example. Containers typically communicate with each other over the network or via shared files in volumes.
Each container runs its own operating system (typically specified in the FROM section of your Dockerfile). In your case, you are suggesting that the nginx container runs as one process with one operating system and that the PHP libraries run in a different process (in a different OS). I'm not sure if this is doable, but it seems like a strange way of doing things.
My suggestion is that you create two containers:
nginx+php - this container holds the PHP installation as well as the nginx stuff
mysql - this container contains the database
The containers can communicate via classic networking or as linked containers.
However, the PHP files that you wish to execute (i.e. your website) should be mounted dynamically as a data volume on the nginx+php container, or come from a data volume container.
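A minimal sketch of that layout (my/nginx-php is a hypothetical image bundling nginx and PHP; the MySQL tag, password and paths are assumptions):

# database container, then the web container linked to it with the site code mounted from the host
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name web --link db:mysql -p 80:80 \
  -v /var/www/site1:/usr/share/nginx/html \
  my/nginx-php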
