I'm trying to set up Docker as a web development environment on my Mac. I share a local volume from my machine to the web root folder inside a container, but I got stuck on permissions: the web server inside Docker can't create new folders or write new files to the shared volume, because almost all files have permissions 744 and a different user and group.
I thought about making all permissions 777, but that doesn't sound good, because the bad permissions would then also end up in Git and, later, on a server. Either way, new files and folders created by the web server have the wrong permissions.
I also thought about creating a group on my Mac with the same name as the one the web server runs under inside Docker (www-data) and changing the permissions to 774, but that sounds clumsy.
What is the best way to manage files inside Docker quickly? In my workflow I need to edit PHP files and immediately see the result in the browser.
You can use docker exec -it container_id script, where script is either a sed command or a replacement of your file with a new version.
Starting with Docker 1.8, you can run a command as a specific user: docker exec -u myuser
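For example, a minimal sketch of fixing ownership once and then working as the web server's user, assuming a container named "web" (a hypothetical name) and Apache running as www-data:

docker exec -u root web chown -R www-data:www-data /var/www/html   # fix ownership once, as root
docker exec -it -u www-data web bash                               # then open a shell as the web server user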
I have two different Docker containers, each of which runs a PHP application. The problem I have to solve is copying a list of files (using the PHP copy command) from container 1 to container 2.
E.g.:
copy('/var/www/html/uploads/test.jpg', '/var/www/html/site/uploads/test.jpg');
Now, container 1 doesn't have access to container 2, which is the site.
What is the best way to fix this?
Use a shared volume to transfer data. So mount
-v filetransfer:/var/www/html/transfer
or
--mount type=volume,source=filetransfer,destination=/var/www/html/transfer
to both containers. If the two containers run as different non-root users, you have to ensure the file permissions are set accordingly.
If you want to avoid file corruption, use :ro (read-only) for all but one of the containers, or ensure that in code.
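A minimal sketch of wiring this up, assuming two containers named app1 and app2 and two image names (all hypothetical):

docker volume create filetransfer
docker run -d --name app1 -v filetransfer:/var/www/html/transfer my-php-image-1
docker run -d --name app2 -v filetransfer:/var/www/html/transfer:ro my-php-image-2

PHP in app1 then copies files into /var/www/html/transfer and app2 reads them at the same path; the :ro on app2 keeps it from corrupting the files.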
Other comments:
docker cp is used to copy files from host to container or vice versa.
Building a REST API just to copy a single file is, in my opinion, a bit over-engineered, as long as you're on the same host.
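For a one-off transfer you can also route through the host with docker cp (container names hypothetical):

docker cp app1:/var/www/html/uploads/test.jpg /tmp/test.jpg
docker cp /tmp/test.jpg app2:/var/www/html/site/uploads/test.jpg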
Right now we have a website running on PHP 5.6 on Azure, on a CentOS 7 based operating system.
Every time we want to deploy new code, we have to FTP into our server and manually transfer the files and folders. This is very error prone and costs us hours of deploying and debugging afterwards, every single time.
We develop on our local Windows machines using PHP with WAMP, so there's already a discrepancy between our local environments and the production environment.
I've been reading a lot about Docker lately and how it integrates with Bitbucket Pipelines, so I want to make our deployment flow more streamlined and automated with Bitbucket Pipelines.
Before I get to the technical details of what I have already tried, I want to make sure my general picture of the steps that need to be done is correct.
What I want to achieve is a way for me and my colleague to write our code and push it to our Bitbucket repository; from there the pipeline picks it up, creates a Docker container and deploys it automatically to our website. (Is this a good idea? What about active users during a new container deploy?)
These are the steps I think that need to be done, please correct me where I am wrong:
1) I create a CentOS virtual machine using VirtualBox.
2) On this VM I install Docker.
3) I create a Dockerfile that uses the php:7.3-apache base image, and I will install MySQL on top of it as well.
?? Do I need to do extra stuff here, like copying the folders with code, or is that done by Bitbucket ??
Now the problem I run into is creating this "Docker container" for my situation. I realize this is probably a very common use case for Docker, but I've read through thousands of tutorials and watched tons of videos, and I still can't find answers to my most basic questions; I end up stuck and frustrated for days/weeks.
I've got a fully working website created in CodeIgniter, but for the sake of this question I just want to have a working version of the Docker container containing PHP, MySQL and Apache.
I've logged into the CentOS VM and performed the following commands:
mkdir dockertest
touch index.html (and I placed some text in here)
touch index.php (and I placed a basic echo "hello world" in here)
touch docker-compose.yml
mkdir .docker
cd .docker
touch Dockerfile
touch vhost.conf
The Dockerfile looks like this:
FROM php:7.3.0-apache-stretch
MAINTAINER Dennis
COPY . /srv/app
COPY .docker/vhost.conf /etc/apache2/sites-available/000-default.conf
RUN chown -R www-data:www-data /srv/app && a2enmod rewrite
Then I'm able to build the image using
docker build --file .docker/Dockerfile -t docker-test .
Right now I can run the container with the following command:
docker run --rm -p 8080:80 docker-test
At this point I go to my CentOS VM, run curl localhost:8080, and get the following HTML:
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access on this server. <br />
</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at localhost Port 8080</address>
</body></html>
So I guess this means that the Apache server is running, but it does not see my index files anywhere.
I am massively overwhelmed by the amount of documentation and tutorials available for Docker, but they all seem either too high-level for me, or none of them targets CentOS 7, PHP, MySQL and Apache combined.
A question that's also bugging me: the advantage of Docker is that it can be deployed anywhere and the environment is exactly the same, so there are no problems like "it works on my localhost". But how exactly does this work? Do my colleague and I need to develop our code INSIDE the Docker container? How does that even work?
The process should be:
develop: you and your colleagues develop code and push it to a version control system (git on Bitbucket/GitHub) -> the code is in one trusted repository
build: you take this code and create a (or multiple) Docker image(s) with it. For the Apache server you need the HTML and JavaScript code, so build a Docker image starting from the Apache image, with a step that PULLs the code from the git repository into the container. That's your front-end server.
For the DB part, you probably want another container, or even a managed service that handles the migrations/updates for you, so you only need to worry about the data in the database. If you want to have your own container, make sure the data is in a VOLUME that is mounted in the container but otherwise stored on a local or network drive (i.e. NOT inside the container, where it would get destroyed on any update).
deploy: pull the images from the registry of your choice, and make sure the containers are connected as needed (i.e. either on the same host and linked, or on different nodes that can reach each other through a private network)
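As a rough sketch of what the build and deploy steps boil down to (image and registry names are hypothetical; in a pipeline these commands would live in your bitbucket-pipelines.yml):

docker build -t myregistry/mysite:latest .     # build the image with the code baked in
docker push myregistry/mysite:latest           # publish it to your registry
# then, on the production host:
docker pull myregistry/mysite:latest
docker stop mysite && docker rm mysite         # brief downtime; a load balancer avoids this
docker run -d --name mysite -p 80:80 myregistry/mysite:latest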
Notes:
Use Docker for Windows rather than creating a virtual machine and installing Docker inside it.
The host doesn't matter; it's the base image in the container that matters. Whether you deploy on an Ubuntu, CentOS or CoreOS host, the Docker base image is what determines how you install dependencies and make your code run.
On the build phase, you probably don't want to pull from git inside the image if your project is a private repository, because you would need to have credentials inside the image to do that. Instead, either pull the code from git outside the image and ADD it to the image, or use another (private) container that has the git credentials to pull the code, do the build, and dump a build artifact that you can then ADD to a shippable container.
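A minimal sketch of the "pull outside, copy in" variant, assuming a private Bitbucket repo (the repo URL and directory names are hypothetical):

git clone git@bitbucket.org:myteam/mysite.git src   # credentials stay on the build machine
docker build -t mysite .                            # the Dockerfile COPYs ./src into the image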
Imagine that I had created a service for uploading kitten pictures and use a Docker container in production.
To do this, I created a Docker image with a PHP 5.5 service, mounted the "upload" folder of my app from the host OS, and also mounted the folder with the source code.
After some time I decided to improve my app and changed the source code, and now it requires a different environment from the one in the Docker image.
For example, now I need PHP 5.6 instead of PHP 5.5.
So when I want to change the source code of my app, can I do it by switching the mounted source code folder with symlinks? (Or can't I, because Docker will keep the file handles open? If so, how do I switch the source code? Should I do it right in the container, without mounting?)
But how can I quickly switch the Docker container after changing the source code?
The fastest way would be to exec a shell session in the container, update the environment and restart the PHP service. As you have mounted the source code, there is no need to switch it.
The best way would be to create a Docker image with the required environment, stop the previous container, and then run the new image, mounting the appropriate directories.
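A minimal sketch of that swap, assuming the app source is in ./src and uploads in ./uploads (hypothetical paths) and a Dockerfile that now starts FROM php:5.6-apache:

docker build -t kittens:php5.6 .
docker stop kittens && docker rm kittens
docker run -d --name kittens -p 80:80 \
  -v "$PWD/src":/var/www/html \
  -v "$PWD/uploads":/var/www/html/uploads \
  kittens:php5.6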
Env: Linux
The PHP app runs as "www-data".
The PHP files in /var/www/html/app are owned by "ubuntu". Source files are pulled from a git repository; /var/www/html/app is the local git repository (origin: Bitbucket).
Issue: Our developers and devops engineers would like to pull the latest sources (frequently), and would like to initiate this over the web (rather than logging in with PuTTY and running the git pull command).
However, since the PHP app runs as "www-data", it cannot run a git pull (the files are owned by "ubuntu").
I am not comfortable with either alternative:
Running the Apache server as "ubuntu", due to the obvious security issues.
Making the git repository files owned by "www-data", as that makes it very inconvenient for developers logging into the server and editing the files directly.
What is the best practice for handling this situation? I am sure this must be a common issue for many setups.
Right now, we have a mechanism where devops triggers the git pull from the web (a PHP job, running as "www-data", creates a temp file), and a cron job, running as "ubuntu", reads the temp-file trigger and then issues the git pull command. There is a time lag between the trigger and the actual git pull, which is a minor irritant for now. I am in the process of setting up Docker containers and have the requirement to update the repo running in multiple containers on the same host. I wanted to use this opportunity to solve this problem in a better way, and I am looking for advice.
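For illustration, the trigger mechanism described above could look roughly like this (all paths hypothetical): the PHP job touches /tmp/git-pull.trigger, and a per-minute cron entry for "ubuntu" runs a script like:

#!/bin/sh
# pull-if-triggered.sh - runs from ubuntu's crontab every minute
if [ -f /tmp/git-pull.trigger ]; then
    rm /tmp/git-pull.trigger
    cd /var/www/html/app && git pull
fi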
We use Rocketeer and groups to deploy. Rocketeer deploys with the user set to the deployment user (ubuntu in your case), with read/write permission for that user and read/execute permission for the www-data group. Then, as a last step, it modifies the permissions on the web-writable folders so that PHP can write to them.
Rocketeer executes over SSH, so it can be triggered from anywhere as long as it can connect to the server (public keys help). You might be able to set up your continuous integration/automated deployment to trigger a deploy automatically when a branch is updated or tests pass.
In any case, something where the files are owned by one user that can modify them and the web group can read the files should solve the main issue.
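A minimal sketch of that owner/group scheme, assuming the app lives in /var/www/html/app and uploads is the only web-writable folder (hypothetical layout):

chown -R ubuntu:www-data /var/www/html/app
chmod -R u=rwX,g=rX,o= /var/www/html/app      # ubuntu edits, www-data reads
chmod -R g+w /var/www/html/app/uploads        # only here can PHP write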
If you are planning on using Docker, the simplest way would be to generate a new Docker image for each build and distribute it to your hosts. The image would simply contain the latest code at build time and never update itself; if a new version needs to be deployed, a new immutable image with the latest code is built and distributed.
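A minimal sketch of the per-build image idea, tagging each build with the commit it contains (registry name hypothetical):

TAG=$(git rev-parse --short HEAD)
docker build -t registry.example.com/app:"$TAG" .   # code is baked in at build time
docker push registry.example.com/app:"$TAG"
# every host then pulls and runs the exact same immutable image:
docker pull registry.example.com/app:"$TAG"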
I am not much of a web developer, so apologies in advance for this dumb question.
I have a test server (CentOS 6.3) with LAMP set up for me to play around with. From what I understand, the server executes whatever is in the /var/www/html directory. How do you edit the source files in that directory? Do you do a sudo vim foo.php each time you want to fix or add something? I'd imagine that would be a pain when you are building a complex application with many files and directories.
This is what worked for me. For the record, this is a CentOS 6.3 server running LAMP (on Rackspace).
First, I found out that Apache runs as user "apache" and group "apache" on CentOS systems. On other distros, I believe it runs as "www-data" in group "www-data". You can verify this by looking at /etc/httpd/conf/httpd.conf. We need to change the ownership of /var/www to this user. Replace "apache" below with "www-data" if that is the case for you.
chown -hR apache:apache /var/www
Now let's make it writable by the group:
chmod -R g+rw /var/www
Add yourself to the apache group:
usermod -aG apache yourusername
Replace apache with www-data in the above if that's the case for you.
Now log out and log back in - you can now edit the files, FTP them to this directory, or do whatever else you want to do.
Comments welcome. TNX!
There are many approaches to modifying and deploying websites/web apps.
CentOS 6 is, by default, accessible with SSH2 on port 22. If you are on Windows you can use a combination of PuTTY and WinSCP (to manage your server, and its files, respectively). If you're on Linux or Mac OS X, SSH is already built into your system and can be accessed with Terminal. I find that using SSH is favourable over other methods because of how widely supported, secure and lightweight it is.
You'll also need a decent text editor or IDE to edit the files if you want proper syntax detection. There's tons of choices, my favourites are Notepad++ and Sublime Text 2. Not to say that I haven't edited my PHP files from time to time using the nano text editor package directly in PuTTY (yum install nano).
If you're using an edit-save-upload approach, just remember to back up your files regularly; you will find out the hard way if you fail to do so. Also, never use root unless you need to. Creating a user solely to modify your websites is good practice (adduser <username>, then give that user write access to /var/www/html).
To answer your second question:
Once you get into heavier web development you will probably want to use something like Git. Deploying with git is beyond the scope of this question so I won't get into that. In brief, you can set it up so your development environment sits locally and you can use a combination of git commit and git push to deploy.
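As a taste, here is a minimal sketch of a git-based deploy using a bare repository on the server with a post-receive hook (paths and user are hypothetical):

# on the server:
git init --bare /home/deploy/site.git
printf '#!/bin/sh\ngit --work-tree=/var/www/html --git-dir=/home/deploy/site.git checkout -f\n' \
    > /home/deploy/site.git/hooks/post-receive
chmod +x /home/deploy/site.git/hooks/post-receive
# on your local machine:
git remote add production deploy@yourserver:/home/deploy/site.git
git push production master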
I use an FTP client (FileZilla) to download files, edit them and then re-upload them. If you're a one-(wo)man show on a test setup, just playing around to learn, this is probably sufficient. With more than one person, or when moving to a (test and) production setup, you should look at more control with SVN, like @Markus mentioned in another answer.
You should change the permissions of that directory (with chmod) so you have write permission and can then read and write to that directory. Then you don't need sudo.
Dude, read up on version control and source code control systems like Subversion and Git. The idea is to develop on your machine, revision-control the result, and then deploy a known working version to the production server.