We have a Laravel deployment website set up under deploy.mysite.com which handles deployments for a multitude of other websites.
Another website I'm trying to deploy, which is also a Laravel site, resides under site2.myothersite.com.
Both are on the same server. Deploy calls a script on site2; this deploy script runs various commands after cd'ing to the project directory. We use the following to update the database structure:
php artisan migrate --force
Ordinarily when this is run directly via SSH when in the project root, it runs just fine.
However, when this is run via the deployment script (using PHP's exec() to run these commands), the process does work - but instead of updating the project that we've cd'd into, it updates the database structure of the deployment site!
It seems as if the php artisan migrate command ignores the fact that I've cd'd into another project and takes the database values from the deployment site instead.
How can I go about changing this behaviour?
After playing around with multiple different solutions I eventually realised what the problem was: the .env file from the project I was in had already set the environment variables, and nothing was overwriting them, so the commands were essentially running as the wrong site.
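The leak itself is easy to demonstrate: child processes started with exec() inherit the parent process's environment, so whatever the deploy site has already exported wins. A minimal illustration (the variable name is arbitrary):

```php
<?php
// Variables set in the parent PHP process are inherited by exec()'d children.
// This is how the deploy site's DB settings leak into the child artisan call.
putenv('DB_DATABASE=deploy_site_db');
exec('echo "$DB_DATABASE"', $output);
echo $output[0], "\n"; // prints deploy_site_db
```

This is also why simply calling phpdotenv's load() in the child is not enough: it refuses to overwrite variables that already exist.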
What I did to solve this
In my deploy script I manually loaded up the .env file and overwrote the environment variables with overload rather than just load.
Note: This will only work with phpdotenv versions below v5.
$dotenv = Dotenv::create(__DIR__);
$dotenv->overload();
UPDATE for V5
As per #nibnut's comment, since phpdotenv v5 (now in Laravel 7.0) the environment variables are loaded as immutable. In my case I'm happy for these to be mutable, in which case I can do the following instead of the above.
$dotenv = Dotenv::createMutable(__DIR__);
$dotenv->load();
https://github.com/vlucas/phpdotenv#immutability-and-repository-customization
This was all I had to do to get my original script working.
NOTE: as this is a Laravel install on both ends, the phpdotenv package was already installed. It could be used in any project by installing it separately.
Rather than cd'ing to the directory, you could change the command to something like this:
php /var/www/sitea/artisan migrate --force
That will run the artisan command for the provided directory.
$log = shell_exec("unset DB_HOST &&
unset DB_PORT &&
unset DB_PASSWORD &&
unset DB_USERNAME &&
unset DB_DATABASE &&
composer dump-autoload &&
cd /var/www/yourproject/ && php /var/www/yourproject/artisan migrate:fresh &&
php /var/www/yourproject/artisan db:seed &&
php /var/www/yourproject/artisan seed:translation");
novocaine's answer unfortunately no longer works in Laravel 7.x, since the .env file is loaded as immutable.
My current workaround is to create a temporary .env file in my project (i.e. ".env.temp") then use the --env argument when calling my artisan commands:
php artisan migrate --env=temp
This may not work for all use cases, but if you're calling artisan commands in project B from within project A, it should work fine.
novocaine's answer involves reloading the .env file. But you can also empty the environment variables with env -i.
This method avoids leaving some variables un-overwritten because the two .env files define different variable names. For example, your deployment site may have a variable called DEPLOY_KEY in its .env that is non-existent in the .env of site2. Reloading the .env file may leave the DEPLOY_KEY variable available to the site2 script, which may lead to security issues.
So, instead of running
exec('php artisan migrate --force');
you just have to add env -i:
exec('env -i php artisan migrate --force');
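The effect of env -i can be checked from PHP itself: a child started with env -i sees none of the parent's variables (the variable name here is illustrative):

```php
<?php
// env -i starts the child with an empty environment,
// so even variables exported by the parent process are invisible to it.
putenv('DEPLOY_KEY=secret');
exec('env -i sh -c \'echo "${DEPLOY_KEY:-unset}"\'', $output);
echo $output[0], "\n"; // prints unset
```

One caveat worth knowing: env -i also clears PATH, HOME and friends, so depending on your setup you may need to call the php binary by its full path.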
Source: https://unix.stackexchange.com/a/48995/375005
I'm using Laravel 8.0 in my application and I added a new .env variable:
APP_URL_FRONT=https://tmsv00008:6204
And how I use it:
To access the project use the following link: LINK
When I deploy the application I cache the config files:
php artisan config:cache
But the variable is not visible, and the strange thing is that if I execute:
php artisan config:clear
The variable is visible.
What am I doing wrong? Because it doesn't make sense to me.
Also, in this article they suggest to execute the commands in this order:
php artisan config:cache
php artisan config:clear
But this doesn't make sense to me. Shouldn't it be the other way around? And I think the cache command also clears the cache first anyway.
The env() function is strongly recommended to be used only in configuration files, since caching the configuration will make env() return null everywhere else.
In other words, create a configuration file with that environment variable, or add it to an existing one.
This note is hidden away in an old upgrade guide.
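A minimal sketch of that approach, using a hypothetical config/frontend.php file:

```php
<?php
// config/frontend.php (hypothetical file name)
// Wrapping the env var in a config entry means the value
// survives `php artisan config:cache`.
return [
    'url' => env('APP_URL_FRONT', 'http://localhost'),
];
```

Then read it with config('frontend.url') instead of env('APP_URL_FRONT') in your code and views, and re-run php artisan config:cache after each deploy.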
I use Github Actions workflows for my CI/CD processes for Node and PHP projects.
Within a workflow I clone my repository into the GitHub Actions runner virtual machine. Then, in order to run tests within the workflow, I have to have the .env file in the cloned repository.
The problem is my .env file is not part of the repository (which is the ubiquitous practice).
To solve the problem I use what I consider a workaround: I set up a MY_PROJECT_ENV GitHub Actions secret, manually put the content of my .env file there, and then dynamically create the .env file within my workflow with the Linux console command echo "${{ secrets.MY_PROJECT_ENV }}" > .env. This works.
But I would like to know: are there other approaches for providing .env files to GitHub Actions workflows?
There are 3 ways to do this that I know of by now. A year later I put the answer to my own question in a different question; see there.
For the sake of SO rules and findability I put a summary here.
You keep your .env file in the repository. Use a dotenv action to read your file into the workflow.
You keep the file out of the repository. Then you have 2 ways of getting .env variables:
2.1. As I wrote in my question above, manually copy the file content into a GitHub Actions secret variable and then, in your workflow, create the .env file from that variable.
2.2. Use the GitHub Actions API to create/update the secrets: write a Node.js script on your machine (chances are you already use Webpack, Gulp or a similar Node tool, so you have Node installed).
The script should read the local .env files and write their content to the GitHub secrets. Of course, you can write a custom console utility to do this in any language you use in your project.
As easy as this :)
As you know, .env files are not meant to be pushed to the remote repository.
You need to somehow add the environment variables to the machine that runs the program.
In your case, you can add environment variables in the workflow .yaml file as below:
steps:
  - name: Hello Program
    run: echo "Hello $FIRST_NAME $LAST_NAME!"
    env:
      FIRST_NAME: Akhil
      LAST_NAME: Pentamsetti
For more information, please visit the official GitHub docs on using environment variables.
I do the following, which is simple and effective:
Add environment variables (either define them in the yaml file or as secrets) as needed
Keep .env.example in the repository, and run the following at the start of the CI job:
# Create the .env file
cp .env.example .env
# Install dependencies so we can run artisan commands
composer install ...
# generate an APP_KEY
php artisan key:generate
An alternative to this is to commit a .env.ci file to the repository with env vars specific to the CI environment, and run cp .env.ci .env when running tests. Sensitive keys should still be set as secrets.
You can technically provide all of your env vars via secrets and envs in the YAML file and have no .env file at all, but I like having a random APP_KEY set per test run to ensure nothing relies on a specific APP_KEY.
Environment Precedence
As an aside, here's how environment precedence works with Laravel in phpunit tests. This is Laravel-specific and may come as a surprise, as it's not exactly how phpunit alone works outside of Laravel:
1. Env vars set in phpunit.xml always "win" (this is true in Laravel despite what phpunit's docs say about system env vars taking precedence over phpunit.xml items)
2. System environment variables (in GitHub Actions, these are the ones set as env vars when running commands in the yaml file)
3. .env file items
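For example, a <php> section in phpunit.xml like the following (values illustrative) overrides both the system environment and .env items in a Laravel test run:

```xml
<!-- phpunit.xml excerpt: these entries take precedence in Laravel tests -->
<php>
    <env name="DB_CONNECTION" value="sqlite"/>
    <env name="DB_DATABASE" value=":memory:"/>
</php>
```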
Source: I created/run Chipper CI, a CI platform for Laravel.
I recently inherited a project based on the Laravel framework which, after I had set it up, installed all requirements via Composer and run php artisan migrate, will not run via php artisan serve.
When I researched possible causes for this, I came across the following on SO:
laravel5: chdir(): No such file or directory (errno 2)
Using artisan serve after changing the public folder name
When I follow the suggested solution in the second one of adding the lines
$app = new Illuminate\Foundation\Application(
    realpath(__DIR__ . '/../')
);
to bootstrap/app.php I get the exact same error.
Is there a configuration file somewhere I need to update? Most of the suggested solutions I've found, like changing files under vendor, seem rather hacky. I'm really stuck on this and any help would be greatly appreciated - thanks!
Check the configuration of your Homestead.yaml file; there you can specify where your project is in the folders->map section.
It may be an error from composer, so try
composer dump-autoload -o
I am trying to make a script which automatically updates a project from a git repository. I run exec() from a PHP file with the name of the shell script as an argument. The script looks something like this:
git pull
php yii migrate
The git command works well, but the yii command is totally ignored. I'm doing this from the root directory of the Yii site, so it should work, but it doesn't.
How can I fix this?
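One way to narrow this down: exec() only returns stdout, so a failing command can look "ignored" when its error went to stderr. Merging stderr into stdout and checking the exit status makes the failure visible (the script name here is a deliberately nonexistent placeholder):

```php
<?php
// Capture output and exit status of a command that seems to be silently ignored
exec('php nonexistent-yii-script.php 2>&1', $output, $status);
if ($status !== 0) {
    // The command failed; $output now holds the error message
    echo "exit status: $status\n";
    print_r($output);
}
```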
If you are using Yii version 1.x, you have to run your command from inside the protected directory:
cd protected
php yiic.php migrate
First of all, if you want to run a console application in Yii2, just use
yii <route> [--option1=value1 --option2=value2 ... argument1 argument2 ...]
Second: yii migrate is the built-in console command for upgrading a database to its latest structure. So it probably does its work, just not what you want.
Try to rename your console command.
Reference links to read:
Guide: console commands
Guide: migrations (part with yii migrate).
I have two controllers with the same name:
app\controllers\CareersController.php (for public use)
app\controllers\Admin\CareersController.php (for admins)
Because of the naming conflict, I added namespace admin; to the admin controller.
Everything works fine locally but when I uploaded the new admin controller to my server, I get an error: Class Admin\CareersController does not exist
From what I understand, the fix is:
php artisan dump-autoload
and composer dump-autoload
However, I don't have shell access to run those commands, and Composer isn't installed on the server anyway. So, is there a way to rebuild the autoload files without shell access?
Run composer dump-autoload locally. Then, on your hosting site, you can manually update two files in the vendor/composer folder: autoload_classmap.php and autoload_static.php. I prefer to copy and paste the added classes from local to the hosting server.
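For illustration, the kind of entry a dump adds to vendor/composer/autoload_classmap.php looks like this (paths hypothetical; the real generated file defines $vendorDir and $baseDir at the top):

```php
<?php
// Excerpt from a generated vendor/composer/autoload_classmap.php (illustrative)
return array(
    'CareersController' => $baseDir . '/app/controllers/CareersController.php',
    'Admin\\CareersController' => $baseDir . '/app/controllers/Admin/CareersController.php',
);
```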
You don't need shell access. Artisan includes a dump-autoload function. You can just call it via a PHP route within your app:
Route::get('/updateapp', function()
{
\Artisan::call('dump-autoload');
echo 'dump-autoload complete';
});
Edit: just noticed you wrote "composer isn't installed on the server anyway". Not sure what will happen - try the command above and let us know.
If it doesn't work, then just run composer dump-autoload locally and upload your new autoload.php.
As a side point - is there any option to switch servers? You're going to keep running into various issues if you don't have command line & composer access. You could just use Forge and spin up a new server on DigitalOcean, Linode etc in less time than it would take to fix this issue :)
I was using shared hosting due to client requirements and did not have access to SSH or Composer. What I did was run composer dump-autoload on my local machine. I then figured out that, for my project, the autoloader only updates the composer directory inside vendor, so I re-uploaded that one folder after each dump-autoload rather than the whole vendor directory.
Edit:
Another pitfall that generated the same error for me, but with a different cause: I develop on a Windows machine, where file and directory names are case-insensitive. When deploying to a Linux server, the framework could not actually find my controllers, so I changed
Route::get('/news', 'newsController#index');
to
Route::get('/news', 'NewsController#index');
Now it is working, and autoload is doing its job correctly.