I would like to increase the default max POST size in php.ini and the max upload size in the nginx config. How can I add that to an .sh file so it gets executed when I provision the box?
Use provisioning tools such as Puppet, Chef, Salt, Ansible, etc.
For example, put the lines below in your Vagrantfile; Vagrant will automatically apply your Puppet modules (such as php and nginx) with your changes.
config.vm.provision :puppet do |puppet|
  puppet.module_path = "modules"
  puppet.manifests_path = "manifests"
  puppet.manifest_file = "vagrant.pp"
  puppet.options = ['--verbose']
end
Take a look at these URLs.
https://docs.vagrantup.com/v2/provisioning/puppet_apply.html
https://docs.vagrantup.com/v2/provisioning/ansible.html
https://docs.vagrantup.com/v2/provisioning/chef_solo.html
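For the specific php.ini and nginx changes, a minimal vagrant.pp sketch could use exec resources (the file paths below assume an Ubuntu guest running PHP-FPM and nginx; adjust them to your box):
# manifests/vagrant.pp -- minimal sketch, not a full module
exec { 'raise-php-post-max-size':
  command => "/bin/sed -i 's/^post_max_size = .*/post_max_size = 100M/' /etc/php5/fpm/php.ini",
  unless  => "/bin/grep -q '^post_max_size = 100M' /etc/php5/fpm/php.ini",
}
exec { 'raise-nginx-upload-size':
  command => "/bin/sed -i '/http {/a client_max_body_size 100M;' /etc/nginx/nginx.conf",
  unless  => "/bin/grep -q 'client_max_body_size' /etc/nginx/nginx.conf",
}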
The correct answer to the exact question would be (given the current version of Homestead):
After cloning, go to src/stubs and edit the after.sh file
launch init.sh from the root of the repository
vagrant up
after.sh is a file that gets copied to the VM and run after Homestead finishes its provisioning.
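To answer the original question directly, here is a minimal after.sh sketch, assuming the box's PHP-FPM version and paths (check them on your VM, e.g. with php --ini):
#!/bin/sh
# Raise the PHP limits in place (php.ini path/version assumed)
sudo sed -i 's/^post_max_size = .*/post_max_size = 100M/' /etc/php/7.0/fpm/php.ini
sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 100M/' /etc/php/7.0/fpm/php.ini
# Raise the nginx upload limit via a drop-in config
echo 'client_max_body_size 100M;' | sudo tee /etc/nginx/conf.d/upload.conf
# Restart both services so the new limits take effect
sudo service php7.0-fpm restart
sudo service nginx restart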
I'm Dockerizing a legacy PHP project. I would like to have Xdebug enabled in the development environment, and my Dockerfile copies a pre-built php.ini into the container.
Due to some network issues we have to have xdebug.remote_connect_back = 0 on Mac OS X (and a corresponding xdebug.remote_host = docker.for.mac.localhost) and xdebug.remote_connect_back = 1 on Linux.
Is it possible to grab the current host OS type in the Dockerfile/Docker Compose so as to copy the php.ini corresponding to the host OS?
Use volumes in docker-compose.yml (see the Docker Compose volumes documentation). Create php.linux.ini and php.mac.ini in a config folder (or wherever) and map one of them into the container:
services:
  php:
    image: php
    volumes:
      - ./config/php.linux.ini:/etc/php.ini # or wherever the config is
Of course your users will have to manually swap php.linux.ini for php.mac.ini, but it's a one-time manual change.
That information isn't (and shouldn't be) available at image build time. The same Linux-based image could be run on native Linux, in a Linux VM on Mac (either the Docker Machine VM or the hidden VM provided by Docker for Mac), in a Linux VM on Windows, or even in a Linux VM on Linux, regardless of where it was originally built.
Configuration such as host names should be provided at container run time. Environment variables are a typical way to do this, or you can use the Docker volume mechanism to push configuration files in from the host.
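As a sketch of the environment-variable route (the variable name XDEBUG_REMOTE_HOST, the image tag, and the entrypoint are illustrative, not from the original project): pass the host-specific value at run time,
# docker-compose.yml (fragment)
services:
  php:
    image: php:7.2-fpm
    environment:
      - XDEBUG_REMOTE_HOST=docker.for.mac.localhost # or the Linux-appropriate value
and have the container's entrypoint render it into a conf.d file before PHP starts:
#!/bin/sh
# entrypoint.sh sketch: write the host-specific Xdebug setting at container start
echo "xdebug.remote_host=${XDEBUG_REMOTE_HOST}" > /usr/local/etc/php/conf.d/xdebug-host.ini
exec "$@"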
If your issue is purely around debugging your application, you can also set up a full development environment on your host, and only build into your image the things you need to run it in a more production-like environment.
I decided to use Docker Compose's ability to read .env files. The whole workflow is as follows:
create a .env.sample file with all the lines commented out:
# OS=windows
# OS=linux
# OS=mac
ignore the .env file by adding a /.env line to the .gitignore file
copy the sample file with $ cp .env.sample .env and leave uncommented just the one line corresponding to your OS
move the OS-specific, Xdebug-related section of php.ini into separate files named xdebug-mac.ini, xdebug-windows.ini, xdebug-linux.ini, etc.
add an args section to the chosen service in docker-compose.yml with a value like - OS=${OS} (see the sketch after this list)
in the corresponding Dockerfile, add these lines:
ARG OS
COPY ./xdebug-${OS}.ini /usr/local/etc/php/conf.d/
the OS value set in .env will be expanded at image build time
execute $ docker-compose up -d --build to build the image and start the container
commit all your changes on success so that your colleagues have Xdebug set up properly on any platform; don't forget to tell them to make their own instance of the .env file from the template
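A sketch of the matching docker-compose.yml wiring for the args step (the service name php and the build context are assumptions):
# docker-compose.yml (fragment); service name is illustrative
services:
  php:
    build:
      context: .
      args:
        - OS=${OS}  # Compose expands this from the .env file at build time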
I have MAMP installed on my Mac with macOS High Sierra 10.13.4.
I have run composer create-project roots/bedrock in my /Applications/MAMP/htdocs folder.
I have started up my servers via the MAMP UI. When I surf to http://localhost:8888/MAMP I get the MAMP start page, so everything seems to be working fine.
When I go to http://localhost:8888/bedrock I get a list of my files and dirs in my bedrock folder:
Index of /bedrock
Parent Directory
.env
.env.example
.gitignore
CHANGELOG.md
...
This is what my .env file looks like:
DB_NAME=adatabase
DB_USER=auser
DB_PASSWORD=apassword
# Optional variables
# DB_HOST=localhost
# DB_PREFIX=wp_
WP_ENV=development
WP_HOME=http://localhost:8888/bedrock
WP_SITEURL=${WP_HOME}/wp
I am wondering what I am doing wrong since I don't see the WordPress installation page.
It looks like Apache, by dint of MAMP's default config, isn't serving from the correct directory for a Bedrock project.
According to the bedrock docs, you should:
Set your site vhost document root to /path/to/site/web/ (/path/to/site/current/web/ if using deploys)
So, you'll need to modify your MAMP config to serve this project not from /Applications/MAMP/htdocs/bedrock, but from /Applications/MAMP/htdocs/bedrock/web.
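A sketch of that change in MAMP's Apache config (the httpd.conf path below is MAMP's default location; confirm it in your install, then restart the servers from the MAMP UI):
# /Applications/MAMP/conf/apache/httpd.conf
DocumentRoot "/Applications/MAMP/htdocs/bedrock/web"
MAMP's Preferences > Web Server tab can set the same document root through the UI.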
You will also need to click "Set Web & MySQL ports to 80 & 3306" via MAMP > Preferences > Ports.
Then browse to http://localhost/bedrock.
Please let me know if this works :)
I'm moving my WordPress farm (10 installs) to a Docker architecture.
I want to have one nginx container and run 10 php-fpm containers (MySQL is on an external server).
The php containers are named php_domainname and also have persistent storage.
I want to know how to do this:
a) How do I pass the domain name and container name to the vhost conf file?
b) When I start a php-fpm container, how do I:
1) add a vhost.conf file to nginx's conf folder
2) add a volume (persistent storage) to the nginx instance
3) restart the nginx instance
All the nginx+php Docker images I have found run both processes per instance, but I think that having 10+1 nginx instances overloads the machine and throws away the advantages of Docker.
Thanks
No need to reinvent the wheel; this one has already been solved by docker-proxy, which is also available on Docker Hub.
You can also use Consul or the like with service auto-discovery. This means:
you add a consul server to your stack
you register all FPM servers as nodes
you register every FPM-daemon as a service "fpm" in consul
For your nginx vhost conf, let's say located at /etc/nginx/conf.d/mywpfarm.conf, you use consul-template (https://github.com/hashicorp/consul-template) to generate the config from a Go template in which you use:
upstream fpm {
  {{range service "fpm"}}
  server {{.Address}}:{{.Port}};
  {{end}}
}
In the location where you forward .php-based requests to the FPM upstream, you now use the upstream above. This way nginx will load-balance across all available servers. If you shut down one FPM host, the config changes automatically and the FPM upstream gets adjusted (that's what consul-template is for: it watches for changes), so you can add new FPM services at any time and scale horizontally very easily.
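A minimal sketch of such a location block (the fastcgi parameters are the usual ones, not taken from the original setup):
location ~ \.php$ {
    include fastcgi_params;
    # route PHP requests to the consul-managed upstream defined above
    fastcgi_pass fpm;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}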
I'm using the Elastic Beanstalk multi-container environment. I've created my own lightweight PHP7 + nginx image (https://github.com/maestrooo/eb-docker-php7) which comes with some sane PHP.ini settings (https://github.com/maestrooo/eb-docker-php7/blob/master/config/custom.ini).
However, I'd like to turn off the "opcache.validate_timestamps" option. The problem is that if I do it in the custom.ini in the image, it will also be applied in development.
I therefore wanted to be able to set it to "off" through an .ebextensions file (this way it will only be deployed to production, as the eb local run command ignores .ebextensions files), so I added a .ebextensions/server.config file along these lines:
files:
  "/usr/local/etc/php/conf.d/project.ini":
    mode: "000644"
    owner: root
    group: root
    content: |
      opcache.validate_timestamps = off
While the file is properly added to the instance, the setting unfortunately still reads "On". It seems PHP needs to be restarted, but I've been unable to figure out how to restart PHP in the context of the Docker multi-container environment.
Was anyone able to do something similar?
Thanks
I'm a noob and using WordPress on Google Cloud. When attempting to upload a new theme, I get the following error message:
The uploaded file exceeds the upload_max_filesize directive in php.ini.
This limitation seems to be set by Google Compute Engine. I've found info about the limit being set in the php.ini file, but I can't seem to locate that file anywhere.
Can anyone give some idiot-proof, step-by-step instructions to increase the upload size beyond 2MB? I've installed the WP plug-ins that should do this, but the problem must be server-side.
I'm not sure what operating system or PHP version you are using. I run an Ubuntu 12.04 instance on Amazon Web Services using PHP-FPM, but the instructions should be basically the same for you. The directory where your php.ini file is saved (step 3) may be slightly different; go hunt for it.
Log in to your server via SSH.
Change user to root: sudo /bin/bash
Edit the php.ini file: nano /etc/php5/fpm/php.ini
Find the line that says upload_max_filesize = 2M. In nano, you can search by typing Ctrl+W.
Change it to whatever file size you want. Whatever you type must have an M (megabytes) or G (gigabytes) at the end (e.g. upload_max_filesize = 200M or = 1G).
Aim for the lowest number that you NEED, and keep in mind that PHP has another setting elsewhere that controls how long it will wait before a timeout. You can set a 2G upload limit, but if your timeout is 30 seconds you're still going to fail unless you can upload 2G in 30 seconds. As a general rule, aim low.
Type Ctrl+X to exit and save your file changes.
Restart PHP by typing service php5-fpm restart (a scripted version of these steps follows).
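If you would rather apply these steps non-interactively (for instance from a provisioning script), here is a minimal sketch assuming the same Ubuntu/PHP-FPM paths:
#!/bin/sh
# Sketch: raise the PHP upload limits in place (php.ini path assumed)
sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 200M/' /etc/php5/fpm/php.ini
sudo sed -i 's/^post_max_size = .*/post_max_size = 200M/' /etc/php5/fpm/php.ini
# Restart PHP-FPM so the new values take effect
sudo service php5-fpm restart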
In your Google Developers Console dashboard, at the left, under Compute you have VM instances.
Click to see all your instance parameters.
Click on the SSH button to access your server.
Type find / -name php.ini to find the location of your php.ini.
Type sudo nano followed by that path; in my case it was sudo nano /etc/php5/apache2/php.ini.
Find the line upload_max_filesize = 2M and change it to 20M or more.
Restart your server with sudo /etc/init.d/apache2 restart.
It worked fine for me!
Install Google Cloud SDK (GCS) https://cloud.google.com/sdk/gcloud/
GCS: Set up your account in the Google Cloud SDK Shell: gcloud auth login
Get connection string from web console https://console.developers.google.com
YourProject > Compute > Compute Engine > VM instances
On instance line: connect/SSH > popup menu > view gcloud command
Copy gcloud compute command line
GCS: Run it in Google Cloud SDK Shell to open SSH gcloud compute --project "your_project-id" ssh --zone "us-central1-a" "wordpress-your_id"
GCS: Copy php.ini to your localhost: gcloud compute --project "your_project-id" copy-files "wordpress-your_id":/etc/php5/apache2/php.ini php.ini --zone "us-central1-a"
Edit line upload_max_filesize = 2M in php.ini located in your Cloud SDK folder
GCS: Upload it back to the host's home directory: gcloud compute --project "your_project-id" copy-files php.ini "wordpress-your_id":~/php.ini --zone "us-central1-a"
SSH: Change user to root in PuTTY: sudo /bin/bash
SSH: Replace php.ini: cp php.ini /etc/php5/apache2/php.ini
SSH: Restart service: service apache2 restart
I did these steps:
In the terminal console you need to edit the correct php.ini; in my case it was:
vi /etc/php5/apache2/php.ini
I did the search and changed the upload_max_filesize and post_max_size variables like this:
post_max_size = 256M
upload_max_filesize = 256M
I restarted the Apache server:
/etc/init.d/apache2 restart
And it worked for me.
You need to know how Google's products work.
At a minimum, there are two things you can control:
The memory that WP itself will try to use as max: define('WP_MEMORY_LIMIT', '256M');
The size of the "container" you are using (600MHz/128MB on the free tier).
Even if it's running PHP, it's using Google's own infrastructure, based on Python and its own config files. So no .htaccess or php.ini will be parsed here, just the config files that you can read about in the documentation.
Anyway, from watching people's reports, it's known that you can at least run within the 128MB of the minimum instance. Also, I don't recommend using App Engine to host a blog.