I'm using the Elastic Beanstalk multi-container environment. I've created my own lightweight PHP7 + nginx image (https://github.com/maestrooo/eb-docker-php7) which comes with some sane PHP.ini settings (https://github.com/maestrooo/eb-docker-php7/blob/master/config/custom.ini).
However, I'd like to turn off the "opcache.validate_timestamps" option. The problem is that if I do it in the custom.ini in the image, it will also be applied in development.
I therefore wanted to set it to "off" through an .ebextensions file (this way it will only be applied in production, as the eb local run command ignores .ebextensions files), so I added a .ebextensions/server.config file with something like this:
files:
  "/usr/local/etc/php/conf.d/project.ini":
    mode: "000644"
    owner: root
    group: root
    content: |
      opcache.validate_timestamps = off
While the file is properly added to the instance, the option is unfortunately still set to "On". It seems PHP needs to be restarted, but I haven't been able to figure out how to restart PHP in the context of the Docker multi-container environment.
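For what it's worth, a rough way to restart the PHP container by hand (after an eb ssh into the instance) is sketched below; the image name in the filter is an assumption, and the host file is only visible inside the container if that path is mapped in through Dockerrun.aws.json:
# Find the container started from the PHP image (image name assumed) and restart it
# so PHP re-reads its ini files.
docker ps --filter "ancestor=maestrooo/eb-docker-php7" --format "{{.ID}}" | xargs -r docker restart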
Has anyone been able to do something similar?
Thanks
I have set up Mapbender on Ubuntu 20.04 in a VirtualBox machine. PostgreSQL, PostGIS and GeoServer are all installed on the VM. I created a map application and added a search router function (following the instructions in the documentation). The search works like a charm in the dev environment, but not in prod. In the dev environment it returns results, hovering the mouse over a result highlights the feature, and clicking on the result moves and zooms the map to the feature.
In the prod environment, nothing seems to happen when I type the search string and press search. The devtools report an internal server error 500, which is not very helpful, although in Firefox the devtools show Referrer policy "strict-origin-when-cross-origin" in red.
I have already modified the PostgreSQL configuration files (listen_addresses = '*' and a host entry for 0.0.0.0) to make sure it is not a database access problem.
Host Machine: Windows 10 Pro 20H2
Guest Machine: Ubuntu 20.04
Mapbender 3.2.6
Database Postgresql 12.8 with Postgis 3.0
WMS Served through Geoserver
PHP7.2
While I am not sure I have provided all the information needed to properly diagnose the problem, any indication of how to investigate and solve this issue is appreciated.
Update:
I modified php.ini to enable error logging by setting the following switches:
error_reporting = E_ALL
display_errors = Off
log_errors = On
ignore_repeated_errors = On
ignore_repeated_source = Off
error_log = /var/log/apache2/php_errors.log
But no errors are being logged so far, and the php_errors.log file is not being created. Even creating the file manually has no effect on the logging. I am not sure what I am missing. I want to reiterate, though, that the search works in the dev environment, so I can't see how it can be an authentication issue. I am trying the search in the prod environment in a browser from within the VM, so I use localhost to access the application.
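One thing I still need to check is whether the web server user can actually write to the configured log path; a quick sketch of that check, assuming PHP runs as www-data under Apache:
# Create the log file and hand it to the web server user so PHP can write to it,
# then restart Apache so the php.ini changes are picked up
sudo touch /var/log/apache2/php_errors.log
sudo chown www-data:www-data /var/log/apache2/php_errors.log
sudo systemctl restart apache2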
On dev tools I get the following:
jquery.min.js:formatted:4210 POST
http://localhost/mapbender1/application/bh_admin/element/337/0-ed10fcc5-57e7-1f83-8a76-c32030225b85/search 500 (Internal Server Error)
send # jquery.min.js:formatted:4210
ajax # jquery.min.js:formatted:3992
n.<computed> # jquery.min.js:formatted:4044
getJSON # jquery.min.js:formatted:4033
_search # js:14187
(anonymous) # jquery-ui.min.js:6
(anonymous) # js:13976
dispatch # jquery.min.js:formatted:2119
r.handle # jquery.min.js:formatted:1998
When clicking on jquery.min.js:4210, the following line is highlighted in the file:
g.send(b.hasContent && b.data || null),
Update 2
Following @IonBazan's suggestion, I found the prod.log file, albeit in a different folder, and the error indicates that the database service cannot be found. The log file was in:
/var/www/mapbender1/app/logs
And this is the message in the log file:
request.CRITICAL: Uncaught PHP Exception Symfony\Component\DependencyInjection\Exception\ServiceNotFoundException: "You have requested a non-existent service "doctrine.dbal.mobh_data_connection". Did you mean this: "doctrine.dbal.default_connection"?" at /var/www/mapbender1/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/Container.php line 348 {"exception":"[object] (Symfony\Component\DependencyInjection\Exception\ServiceNotFoundException(code: 0): You have requested a non-existent service "doctrine.dbal.mobh_data_connection". Did you mean this: "doctrine.dbal.default_connection"? at /var/www/mapbender1/vendor/symfony/symfony/src/Symfony/Component/DependencyInjection/Container.php:348)"} []
As I have mentioned before, the dev app is capable of accessing the service. This means, I suppose, that the DB connection parameters are correct in the parameters.yml and config.yml files. So I have a feeling there might be some cached item that needs updating, especially since the Mapbender documentation mentions this:
The cache-mechanism of the development-environment behaves
differently: Not all files are cached, thus code changes are directly
visible. Therefore the usage of the app_dev.php is always slower than
the production-environment.
And
The directory app/cache contains the cache-files. It contains
directories for each environment (prod and dev). But the mechanism of
the dev-cache, as described before, behaves differently.
If changes of the Mapbender interface or the code are made, the
cache-directory (app/cache) has to be cleared to see the changes in
the application.
So this turned out to be a folder permission issue. The reason the dev environment was working is that dev caches fewer components than prod, so changes to configuration files like parameters.yml and config.yml are reflected in dev but not in prod. At some point during the setup and configuration process, ownership of the cache/prod folder went to root, which left the www-data user without proper access rights to the folder. Bottom line: the prod cache was not being updated, which made the database connection service invisible to the prod environment, even though parameters.yml and config.yml had the correct settings.
What I did was the following, noting that some of the steps I performed might have been unnecessary; at this stage I will not be trying to find out which ones.
First step, stop the running services (Apache and PHP server):
sudo app/console server:stop
sudo service apache2 stop
Clear the prod cache:
sudo app/console cache:clear --env=prod --no-debug
I also used the cache:clear command with the --no-warmup switch, which essentially leaves you with an almost empty cache folder. I issued this command because the previous one left some files in the folder.
sudo app/console cache:clear --env=prod --no-warmup
Install the assets:
sudo app/console assets:install web --env=prod
Give www-data user the proper folder permissions:
sudo chown -R www-data:www-data /var/www/mapbender/app/cache
sudo chmod -R ug+w /var/www/mapbender/app/cache
Start Apache and PHP server:
sudo service apache2 start
sudo app/console server:start
Note that app/console needs to be executed from the folder /var/www/mapbender
As I mentioned earlier, there might be unnecessary steps, but this is more or less what I did, and the app is now working as expected.
Disclaimer: I am not a developer and the information presented here was assembled from more than one source, including the Mapbender documentation.
I'm Dockerizing a legacy PHP project. I would like to have Xdebug enabled in the development environment, and my Dockerfile copies a pre-built php.ini into the container.
Due to some network issues we have to set xdebug.remote_connect_back = 0 on Mac OS X (with the corresponding xdebug.remote_host = docker.for.mac.localhost) and xdebug.remote_connect_back = 1 on Linux.
Is it possible to detect the current OS type in the Dockerfile/Docker Compose setup in order to copy the php.ini corresponding to the host OS?
Use volumes described here in docker-compose.yml. Create php.linux.ini and php.mac.ini in a config folder (or wherever) and map one of them to the container:
services:
  php:
    image: php
    volumes:
      - ./config/php.linux.ini:/etc/php.ini # or wherever the config is
Of course your users will have to manually swap php.linux.ini for php.mac.ini, but it's a one-time manual change.
That information isn't (and shouldn't be) available at image build time. The same Linux-based image could be run on native Linux, a Linux VM on Mac (and then either the Docker Machine VM or the hidden VM provided by Docker for Mac), a Linux VM on Windows, or even a Linux VM on Linux, regardless of where it was originally built.
Configuration such as host names should be provided at container run time. Environment variables are a typical way to do this, or you can use the Docker volume mechanism to push in configuration files from the host.
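A minimal sketch of that run-time approach (the service name, image tag, and variable name below are placeholders, not anything from the question): supply the host-specific value as an environment variable and/or bind-mount the matching ini file from the host; how the ini picks the variable up (for example ${XDEBUG_REMOTE_HOST} interpolation or an entrypoint script) is an assumption about your setup.
services:
  php:
    image: php:7.4-fpm                     # placeholder image
    environment:
      # host-specific value, e.g. docker.for.mac.localhost on Docker for Mac
      - XDEBUG_REMOTE_HOST=${XDEBUG_REMOTE_HOST:-}
    volumes:
      # or push the whole OS-specific ini in from the host instead
      - ./config/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini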
If your issue is purely around debugging your application, you can also set up a full development environment on your host, and only build in to your image the things you need to run it in a more production-like environment.
I decided to use Docker Compose's ability to read .env files. The whole workflow is as follows:
create a .env.sample file with all the lines commented out:
# OS=windows
# OS=linux
# OS=mac
ignore the .env file by adding a /.env line to the .gitignore file
copy the sample file with $ cp .env.sample .env and uncomment just the one line corresponding to your OS
move the OS-specific Xdebug-related section of php.ini into separate files with names like xdebug-mac.ini, xdebug-windows.ini, xdebug-linux.ini, etc.
add an args section to the chosen service in docker-compose.yml with a value like - OS=${OS} (see the sketch after this list)
in the corresponding Dockerfile add these lines:
ARG OS=${OS}
COPY ./xdebug-${OS}.ini /usr/local/etc/php/conf.d/
The OS value set in .env will be expanded at image build time.
execute $ docker-compose up -d --build to build the image and start the container
on success, commit all your changes so your colleagues get Xdebug set up properly on any platform; don't forget to tell them to create their own .env file from the template
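For the args step above, a minimal docker-compose.yml fragment (the service name php is an assumption) could look like this:
services:
  php:
    build:
      context: .
      args:
        # OS comes from the .env file sitting next to docker-compose.yml
        - OS=${OS}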
I am running a website using AWS Elastic Beanstalk. In AWS Elastic Beanstalk, the default upload_max_filesize in php.ini is limited to 2M. I want to increase upload_max_filesize to 20M. I did the following and used 'Upload and Deploy' to deploy the new application source code with the new 99my_php_ini_change.config, but it does not automatically create /etc/php.d/zzz_my_own_php.ini. I also tried 'Create New Application' in Elastic Beanstalk, but the file /etc/php.d/zzz_my_own_php.ini was not created there either.
Where is the error?
What I did was:
put the file 99my_php_ini_change.config inside the .ebextensions folder under the application root.
99my_php_ini_change.config contains:
files:
    "/etc/php.d/zzz_my_own_php.ini" :
        mode: "000644"
        owner: root
        group: root
        content: |
            upload_max_filesize=20M
Do you see any error messages in the log file at /var/log/eb-activity.log? You can view the full log file by doing an eb ssh, or you can retrieve it through the EB console or from the command-line using eb logs. If there are any errors, please show your log file here.
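For example (the environment name my-env is a placeholder), the EB CLI commands would look roughly like this:
eb ssh my-env          # then inspect /var/log/eb-activity.log on the instance
eb logs my-env --all   # or download the full log bundle without SSH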
Also, YAML files are very sensitive to whitespace. You might try the following instead (notice the two spaces per indent level, lack of space before the colon, and lack of newline):
files:
  "/etc/php.d/zzz_my_own_php.ini":
    mode: "000644"
    owner: root
    group: root
    content: |
      upload_max_filesize=20M
Although the path /mnt/my-proj/app/../var/sessions/dev is accessible to both the normal user and www-data, I get the following message:
Warning: session_write_close(): Failed to write session data (user). Please verify that the current setting of session.save_path is correct (/mnt/op-accounting2/app/../var/sessions/dev)
I get the message above only in dev, but not in prod.
/mnt/my-proj/app/../var/sessions/dev and /mnt/my-proj/app/../var/sessions/prod have the same permissions: 777.
The path above is mounted as follows:
# mount -t vboxsf -o uid=1000,gid=33,umask=000 my-proj /mnt/my-proj;
What am I doing wrong?
I've read the following posts, but could find no solution for me:
PHP session handling errors
https://github.com/NewEraCracker/suhosin-patches/issues/3
PHP7 + Symfony 2.8, Failed to write session data
I'm using Vagrant 1.8.1 on Windows 8.1 Enterprise (64-bit) and ubuntu-xenial 16.04 in Vagrant. The provider is VirtualBox 5.0.20. The settings are mostly defaults. The path above is shared using the VirtualBox GUI with full access.
Kind regards,
Juri
SOLVED! :-)
Setting
save_path: "/var/lib/php/sessions"
in /mnt/my-proj/app/config/config.yml solved the problem. No adjusting of ini files in /etc/php/7.0/ was necessary (those files still contain only default values).
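For completeness, in a standard Symfony 2.8 config.yml the setting sits under the framework/session keys, roughly like this (surrounding keys omitted):
framework:
    session:
        save_path: "/var/lib/php/sessions"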
But I wonder why I didn't get that error message in prod?
You can just edit the PHP-FPM pool configuration file:
vi /etc/php/7.2/fpm/pool.d/www.conf
Then change the user and group PHP-FPM runs as from www-data to vagrant:
user = vagrant
group = vagrant
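After editing the pool configuration, PHP-FPM needs to be restarted for the change to take effect; assuming the stock Ubuntu service name for PHP 7.2:
sudo service php7.2-fpm restart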
In addition to the previous answer from Juri Sinitson, tweaking the VM instead of the project itself also solved it for me.
I added these lines to my Vagrant bash root provisioner:
sed -i "s/www-data/vagrant/g" /etc/apache2/envvars
service apache2 restart
This makes Apache run as vagrant, which gives Apache more power over the shared directory, since to the filesystem it is the user vagrant, and not www-data, that is touching it.
Maybe this is AppArmor-related.
I would like to increase the default max POST size in php.ini and the max upload size in the nginx config. How can I add that to an .sh file so it gets executed when I provision the box?
Use provisioning tools such as Puppet, Chef, Salt, Ansible, etc.
For example, put the lines below in your Vagrantfile; it will automatically apply Puppet modules (such as php and nginx) with your changes.
config.vm.provision :puppet do |puppet|
  puppet.module_path = "modules"
  puppet.manifests_path = "manifests"
  puppet.manifest_file = "vagrant.pp"
  puppet.options = ['--verbose']
end
Take a look at these URLs:
https://docs.vagrantup.com/v2/provisioning/puppet_apply.html
https://docs.vagrantup.com/v2/provisioning/ansible.html
https://docs.vagrantup.com/v2/provisioning/chef_solo.html
The correct answer to the exact question would be (given the current version of Homestead):
After cloning, go to src/stubs and edit the after.sh file
launch init.sh from the root of the repository
vagrant up
after.sh is a file that is copied to the VM and launched after Homestead finishes its provisioning.
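As an illustration of what could go into after.sh for the original php.ini/nginx question, here is a rough sketch; the PHP version, paths, and limits are assumptions and will need to be adapted to the box:
#!/usr/bin/env bash
# after.sh -- runs inside the VM after Homestead finishes its own provisioning
set -e

# Raise the PHP POST/upload limits (PHP version in the path is an assumption)
sudo sed -i 's/^post_max_size = .*/post_max_size = 64M/' /etc/php/7.4/fpm/php.ini
sudo sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 64M/' /etc/php/7.4/fpm/php.ini

# Raise the nginx request body limit via a drop-in config
echo 'client_max_body_size 64m;' | sudo tee /etc/nginx/conf.d/upload.conf > /dev/null

sudo service php7.4-fpm restart
sudo service nginx restart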