I have a Laravel installation and have set up three environments with their own corresponding config directories:
local
staging
production
I use php artisan migrate:make create_users_table etc., as described here, to create database migrations.
In my local environment I use Vagrant and a simple MySQL server setup, and on staging & production I use AWS RDS.
To configure database access for the staging environment I have an app/config/staging/database.php file with settings like this:
...
"mysql" => array(
"driver" => "mysql",
"host" => $_SERVER["RDS_HOSTNAME"],
"database" => $_SERVER["RDS_DB_NAME"],
"username" => $_SERVER["RDS_USERNAME"],
"password" => $_SERVER["RDS_PASSWORD"],
"charset" => "utf8",
"collaction" => "utf8_unicode_ci",
"prefix" => "",
),
...
I use git to deploy the app with git aws.push as described here.
The question is: How do I run the migration on my staging (and later production) EBS server when deploying?
I solved it by creating a new directory in the root of my project named .ebextensions. In that directory I created a config file my-scripts.config:
.ebextensions/
    my-scripts.config
app/
artisan
bootstrap
...
The file my-scripts.config is a YAML file that gets executed when Elastic Beanstalk deploys; it looks like this:
container_commands:
  01-migration:
    command: "php /var/app/ondeck/artisan --env=staging migrate"
    leader_only: true
Add the directory and file to git, commit, and run git aws.push and it will migrate.
Explanations on how stuff in .ebextensions works can be found here.
The path /var/app/ondeck is where your application lives when your script runs; it is afterwards copied to /var/app/current.
The artisan option --env=staging is useful for telling artisan what environment it should run in, so that it can find the correct database settings from app/config/staging/database.php
If you need a quick and dirty way to log why the migrate command fails, you might want to try something like "php /var/app/ondeck/artisan --env=staging migrate > /tmp/artisan-migrate.log" so that you can log into your EC2 instance and check the log.
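For reference, that logging variant slots into the same my-scripts.config file; a sketch (the log path /tmp/artisan-migrate.log is just an example, and 2>&1 also captures stderr):

```yaml
container_commands:
  01-migration:
    # Redirect artisan's output so failures can be inspected on the instance
    command: "php /var/app/ondeck/artisan --env=staging migrate > /tmp/artisan-migrate.log 2>&1"
    leader_only: true
```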
Following oskarth's answer: some instructions on how AWS Elastic Beanstalk deploys a new application version may have changed over the past few years. According to the AWS documentation on the container_commands key of .ebextensions, if the "cwd" option is not set, the working directory is the staging directory of the unzipped application. This means that during the deployment process the commands run from /var/app/staging/, where the extracted source version of the application is. Therefore, the artisan command can be executed either on its own or with the /var/app/staging/ path instead of /var/app/ondeck/, like this:
container_commands:
  01-migration:
    command: "php artisan --env=staging migrate"
    leader_only: true
or this
container_commands:
  01-migration:
    command: "php /var/app/staging/artisan --env=staging migrate"
    leader_only: true
I have deployed my project using both configurations above. I discovered this after hours of looking at the eb-engine.log file and reading the documentation over and over again; I hope nobody else takes as long after reading this. The logs can be accessed through the eb logs command in the terminal, through the environment console, or through the S3 bucket associated with the environment. Pretty much everything is explained in the documentation. I didn't comment on oskarth's answer because I'm not allowed to yet!
P.S. The /var/app/staging path has no relation to the staging environment in Laravel.
Worth mentioning here: if you are running your application in a Docker container on Beanstalk, you will have to run this inside the container.
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/98_build_app.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      echo "Running laravel migrations" >> /var/log/eb-activity.log
      docker exec $(docker ps -qf name=php-fpm) sh -c "php artisan --env=staging migrate --force || echo Migrations didnt run" >> /var/log/eb-activity.log 2>&1
The problem is the container name changes each time.
So the docker ps -qf name=php-fpm part just grabs a container whose name contains php-fpm. Replace that with something that matches the container you want to run the command in.
Also use --force because otherwise artisan will wait for a confirmation prompt.
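If the substring match is too loose (for example, several containers contain php-fpm in their names), one option is to anchor the filter; a sketch, assuming the container is literally named php-fpm (the Docker name filter accepts a regular expression):

```yaml
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/98_build_app.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      # Anchoring with ^...$ makes the filter match the exact name only
      CONTAINER=$(docker ps -qf 'name=^php-fpm$')
      if [ -n "$CONTAINER" ]; then
        docker exec "$CONTAINER" php artisan --env=staging migrate --force >> /var/log/eb-activity.log 2>&1
      else
        echo "php-fpm container not found, skipping migrations" >> /var/log/eb-activity.log
      fi
```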
Related
I'm following the Laravel documentation for using Laravel with Docker, starting with the Getting Started section: https://laravel.com/docs/9.x/installation#getting-started-on-macos. I use this curl command: curl -s "https://laravel.build/example-app" | bash to create the project, and then I run sail up to create the Docker containers, which works fine.
The issue happens when I come to work with the storage folder. I'm following this documentation: https://laravel.com/docs/9.x/filesystem#the-public-disk. I run php artisan storage:link which creates this item in the links array:
'links' => [
    public_path('storage') => storage_path('app/public'),
],
However, when I go to localhost and try to retrieve an asset by just entering the URL, I get a 404 error. I went into the Docker container to see if the symlink was actually created. This is the output: lrwxr-xr-x 1 root root 79 Dec 28 11:29 storage -> /Users/ex_user/example-app/storage/app/public. It points to my local storage directory, so I ran php artisan storage:link inside the container, and the output of the link was this: lrwxr-xr-x 1 root root 32 Dec 28 11:47 storage -> /var/www/html/storage/app/public. The images then load correctly. So how can I solve this without having to run the command within the Docker container? I do want the project to be able to work if somebody wants to run it without the Docker container. I haven't changed anything in my project; this is a fresh pull from the Laravel curl request, so I don't know if this is a bug that needs to be fixed. Any ideas on how to fix this? Thanks
I've tried changing the paths and creating a symlink within the container, but this is not the optimal solution
If you run storage:link from outside your container, the symlink will be relative to your local machine. Get into the container by running: docker exec -it name_of_your_php_container bash
The command php artisan storage:link creates absolute symbolic links by default.
To create relative symbolic links instead, use php artisan storage:link --relative. This requires that the symfony/filesystem dependency is installed.
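To see what --relative produces, here is a sketch that builds the same link shape by hand with ln (the directory names are made up; the real command operates on your Laravel project root):

```shell
# Recreate the layout Laravel uses (hypothetical project directory)
mkdir -p example-app/storage/app/public example-app/public
cd example-app/public
# A relative link survives the host-to-container path change, which is
# what `php artisan storage:link --relative` gives you
ln -s ../storage/app/public storage
readlink storage   # prints: ../storage/app/public
```

Because the target is expressed relative to public/, the link resolves correctly whether the project lives at /Users/ex_user/example-app on the host or /var/www/html in the container.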
I have a Laravel Web Application, and it works just fine locally, using a local .env file that references the local database.
I have the same Laravel Web Application deployed in production, where I find a .env, which is different from the one that I use locally.
Both the scenarios work perfectly, but when I wanted to perform a test with the remote database (that I can access from my local IP address), I copied the remote .env and renamed it .env.production.
How can I run php artisan serve using the .env.production file?
The php artisan serve help states that adding a --env parameter should do the trick, as you can see from the command output below:
php artisan serve --help

Description:
  Serve the application on the PHP development server

Usage:
  serve [options]

Options:
      --host[=HOST]        The host address to serve the application on [default: "127.0.0.1"]
      --port[=PORT]        The port to serve the application on
      --tries[=TRIES]      The max number of ports to attempt to serve from [default: 10]
      --no-reload          Do not reload the development server on .env file changes
  -h, --help               Display help for the given command. When no command is given display help for the list command
  -q, --quiet              Do not output any message
  -V, --version            Display this application version
      --ansi               Force ANSI output
      --no-ansi            Disable ANSI output
  -n, --no-interaction     Do not ask any interactive question
      --env[=ENV]          The environment the command should run under
  -v|vv|vvv, --verbose     Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug
but the command php artisan serve --env=production still loads the local database.
What am I doing wrong?
After some tests I found a working solution in a Laracasts Forum post, giving credit to a Laravel.io post, which consists of running the following command:
APP_ENV=production php artisan serve
This causes .env.production to be loaded and used by the local server, as needed.
I'm posting this here hoping it will be useful to someone in the same situation.
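The reason this works: Laravel (via its dotenv bootstrapping) checks whether a .env.{APP_ENV} file exists before falling back to .env, so the environment variable has to be set before the process starts. A rough shell sketch of that selection logic (a simplification, not Laravel's actual code):

```shell
APP_ENV=production
touch .env .env.production        # pretend both files exist

# Prefer .env.$APP_ENV when APP_ENV is set and the file exists
if [ -n "$APP_ENV" ] && [ -f ".env.$APP_ENV" ]; then
  ENV_FILE=".env.$APP_ENV"
else
  ENV_FILE=".env"
fi
echo "$ENV_FILE"   # prints: .env.production
```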
I am deploying a Symfony 4.4 app to AWS ElasticBeanstalk and noticed that the cache wasn't cleared after each deploy.
The app was running fine though, except for the stale cache.
To resolve the cache issue I added the following file:
/.ebextensions/deploy.config
container_commands:
  01-clear-cache:
    command: php bin/console cache:clear --no-warmup --env=prod
That seems to clear the cache, but then it somehow changes permissions, so I get the following error when trying to access the app:
Fatal error: Uncaught RuntimeException: Unable to write in the cache directory (/var/app/current/var/cache/prod)
Why does running cache:clear change permissions, and is there a way to avoid that happening, or at least to resolve it afterwards, i.e., in the same/another .ebextensions file?
These commands are run by the root user, as specified in the docs.
The specified commands run as the root user, and are processed in alphabetical order by name. Container commands are run from the staging directory, where your source code is extracted prior to being deployed to the application server. Any changes you make to your source code in the staging directory with a container command will be included when the source is deployed to its final location.
(Emphasis mine).
When re-creating the cache, the new directories are owned by root, and your PHP process can't write there if it needs to.
Execute your command so it runs as the same user as your PHP runtime. E.g. if it runs under the webapp user:
container_commands:
  01-clear-cache:
    command: sudo -u webapp php bin/console cache:clear --no-warmup --env=prod
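An alternative, if you can't run the command as the runtime user, is to fix ownership afterwards; a sketch (webapp is the default user on Elastic Beanstalk's PHP platforms, adjust if yours differs):

```yaml
container_commands:
  01-clear-cache:
    command: php bin/console cache:clear --no-warmup --env=prod
  02-fix-permissions:
    # Hand the freshly created cache directories back to the PHP runtime user
    command: chown -R webapp:webapp var/cache
```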
When using Ansible, you can simply use become: true to become the root user and become_user: xxx to become the desired user.
Example:
---
# roles/app/tasks/main.yml
- name: Run composer install
  become: true
  become_user: ubuntu
  composer:
    command: install
    working_dir: "{{ deploy_path }}"
Note that you have to define a variable called deploy_path.
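The deploy_path variable can live in your inventory or group vars; a minimal sketch (the path is an example value):

```yaml
# group_vars/all.yml (example)
deploy_path: /var/www/app/current
```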
I have an issue with Artisan usage on Windows 10 with XAMPP.
What have I done:
clone the project from the git repository
run composer install
In the project directory I try running:
php artisan
but I get the error:
Could not open input file: artisan
But if I run the command
php bin/console I get a list of commands like cache, debug, eloquent... so some tools are there... but no artisan.
How can I add/use artisan in an existing project?
Check if the artisan file exists in your project root folder. If it does, then you're probably in the wrong folder. If it doesn't exist, you can just download it from the official repo.
To configure an existing project, you'd typically check these things first:
Go to app/config/database.php and verify the username and password.
Then check whether the vendor folder and the composer.lock file exist in the project folder; if they do, remove them (this removes the old configuration so you can configure fresh).
Then go back to the command prompt and run composer update, which downloads the dependencies.
Now run php artisan serve
I'm new to GitHub and I found this site very useful for a lot of us. I store my Laravel project in GitHub, but there's a problem every time I clone it to go to production: at first, it always shows this error.
Warning: require(C:\xampp\htdocs\tourismPortal\bootstrap/../vendor/autoload.php): failed to open stream: No such file or directory in C:\xampp\htdocs\tourismPortal\bootstrap\autoload.php on line 17
Fatal error: require(): Failed opening required 'C:\xampp\htdocs\tourismPortal\bootstrap/../vendor/autoload.php' (include_path='.;C:\xampp\php\PEAR') in C:\xampp\htdocs\tourismPortal\bootstrap\autoload.php on line 17
I know this can be solved by running composer update, but is there any way to prevent this error so that every time I clone it, I can go to production without encountering it? Thanks. By the way, I'm using TortoiseGit to clone, pull and push.
Clone your project
Go to the folder application using cd command on your cmd or terminal
Run composer install on your cmd or terminal
Copy the .env.example file to .env in the root folder. You can type copy .env.example .env if using the Windows command prompt or cp .env.example .env if using a terminal on Ubuntu.
Open your .env file and change the database name (DB_DATABASE) to whatever you have, and set the username (DB_USERNAME) and password (DB_PASSWORD) fields to correspond to your configuration.
Run php artisan key:generate
Run php artisan migrate
Run php artisan serve
Go to http://localhost:8000/
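The .env steps above can be scripted; a sketch that fills in the database settings non-interactively (the key names are Laravel's defaults, the values are examples):

```shell
# Stand-in for the file the project ships (example keys only)
printf 'DB_DATABASE=laravel\nDB_USERNAME=root\nDB_PASSWORD=\n' > .env.example
cp .env.example .env
# Point the app at your own database
sed -i 's/^DB_DATABASE=.*/DB_DATABASE=myapp/' .env
sed -i 's/^DB_USERNAME=.*/DB_USERNAME=myuser/' .env
sed -i 's/^DB_PASSWORD=.*/DB_PASSWORD=secret/' .env
grep '^DB_' .env
```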
Yes you can, but it is not recommended at all.
You can delete everything in the .gitignore file and push the dependencies from a working project. Then it will work wherever you git clone it.
But there are so many drawbacks to this approach that I recommend you not do it.
Run git clone <your project's GitHub link>
Run composer install
Run cp .env.example .env or copy .env.example .env
Run php artisan key:generate
Run php artisan migrate
Run php artisan db:seed
Run php artisan serve
Go to link localhost:8000 OR 127.0.0.1:8000
You guys missed a step here. After this command:
php artisan key:generate
don't forget to run npm install && npm run dev
If you have any front-end setup, this command will install and build all the front-end dependencies...
Thanks for everyone's great help. I also ran the following command to make the project work:
composer update
Run the following commands:
git clone --single-branch --branch [TAG_VERSION] https://github.com/laravel/laravel.git [CUSTOM_PROJECT_NAME]
composer install