This is my first time adding a GitLab webhook to my Laravel application, which runs in Laradock.
First, I bring the containers up:
docker-compose up -d nginx redis mysql
Second, I add a webhook in my GitLab project, pointing to my Laravel site at http://example.com/deploy/.
Third, I add a route and a controller in Laravel:
// web.php
Route::post('/deploy', 'DeployController@index')->name('deploy');
// DeployController
//........
$result = shell_exec("/usr/bin/git pull");
logger('success result: ' . $result);
//.........
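For reference, a fuller sketch of how those lines might sit inside the controller (the repository path and the 2>&1 redirect here are illustrative assumptions, not my exact code):
// app/Http/Controllers/DeployController.php
namespace App\Http\Controllers;

class DeployController extends Controller
{
    public function index()
    {
        // change into the repository first and capture stderr so failures show up in the log
        $result = shell_exec('cd /var/www && /usr/bin/git pull 2>&1');
        logger('success result: ' . $result);

        return response('ok');
    }
}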
It doesn't work. Which step went wrong?
I found that php-fpm logs this:
[22-Jan-2018 07:46:46] WARNING: [pool www] child 7 said into stderr: "sh: 1: /usr/bin/git: not found"
I am new to Docker, so any comments or advice would be very helpful. Thanks!
The error is clear: git is either not installed in the container that runs PHP-FPM, or it is not in its $PATH.
/usr/bin/git: not found
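You can confirm this from inside the container that runs PHP-FPM, since that is where shell_exec() executes. A quick check (just a sketch, not required code):
// logs the path sh resolves for git; the output is empty when git is missing from the image
logger('git binary: ' . shell_exec('command -v git 2>&1'));
Installing git in that container image (and rebuilding it), or pointing shell_exec() at wherever git actually lives there, should make the warning go away.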
I'm working on a project with PHP 7.1.33 and Laravel 5.2.45 (a bit outdated, I know) running in a Docker container. I used the following code in bootstrap/app.php in order to send different log levels to stdout/stderr:
if (in_array(env('APP_ENV'), ['staging', 'production'])) {
    $app->configureMonologUsing(function ($monolog) {
        $bubble = false;

        $debugStreamHandler = new Monolog\Handler\StreamHandler('php://stdout', Monolog\Logger::DEBUG, $bubble);
        $monolog->pushHandler($debugStreamHandler);

        $infoStreamHandler = new Monolog\Handler\StreamHandler('php://stdout', Monolog\Logger::INFO, $bubble);
        $monolog->pushHandler($infoStreamHandler);

        $noticeStreamHandler = new Monolog\Handler\StreamHandler('php://stdout', Monolog\Logger::NOTICE, $bubble);
        $monolog->pushHandler($noticeStreamHandler);

        $warningStreamHandler = new Monolog\Handler\StreamHandler('php://stdout', Monolog\Logger::WARNING, $bubble);
        $monolog->pushHandler($warningStreamHandler);

        $errorStreamHandler = new Monolog\Handler\StreamHandler('php://stderr', Monolog\Logger::ERROR, $bubble);
        $monolog->pushHandler($errorStreamHandler);

        $criticalStreamHandler = new Monolog\Handler\StreamHandler('php://stderr', Monolog\Logger::CRITICAL, $bubble);
        $monolog->pushHandler($criticalStreamHandler);

        $alertStreamHandler = new Monolog\Handler\StreamHandler('php://stderr', Monolog\Logger::ALERT, $bubble);
        $monolog->pushHandler($alertStreamHandler);

        $emergencyStreamHandler = new Monolog\Handler\StreamHandler('php://stderr', Monolog\Logger::EMERGENCY, $bubble);
        $monolog->pushHandler($emergencyStreamHandler);
    });
}
Since this Laravel version does not have logging channel configuration, this is one of the few ways to set it up globally so that I can use the Log facade.
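For reference, application code then just uses the facade as usual, for example:
use Illuminate\Support\Facades\Log;

// picked up by the INFO handler above and written to stdout
Log::info('testing from controller');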
When I log from requests, everything works as expected (I also redirected the Apache logs to stdout/stderr):
[name]#[name]:~ $ docker logs -f [name]
[23-Jan-2023 21:09:37] NOTICE: fpm is running, pid 1
[23-Jan-2023 21:09:37] NOTICE: ready to handle connections
[23-Jan-2023 22:04:56] WARNING: [pool www] child 8 said into stdout: "[2023-01-23 19:04:56] staging.INFO: testing from controller [] []"
172.30.0.9 - 23/Jan/2023:22:04:55 +0000 "POST /index.php" 200
But when I log from console commands (running them manually with php artisan after entering the container with "docker exec -it [name] /bin/bash"), the output only shows up on the console of that session; it is not redirected the way it is for controllers. I would like to understand why, or whether I'm missing something. I tried inspecting the Laravel framework code, as well as the Monolog and Logger logic, looking for differences between the request and console contexts, but without success.
The main objective is to log jobs executed by a PHP instance running under Supervisor via "php artisan queue:work ...", so that once the output is captured on stdout/stderr it can be processed by CloudWatch or a similar tool.
Thank you.
I am developing a Laravel (8.83.24) app and testing with Laravel's built-in server ("php artisan serve"). I am using Process and am trying to debug a memory allocation error, which has led me here:
$process = new Process(['node', '-v']);
$process->run();
print_r($process);
That leads me to a line in the output:
[command] => cmd /V:ON /E:ON /D /C (node -v) 1>"C:\Users\mat\AppData\Local\Temp\sf_proc_01.out" 2>"C:\Users\mat\AppData\Local\Temp\sf_proc_01.err"
and sf_proc_01.err contains:
'node' is not recognized as an internal or external command,
operable program or batch file.
Clearly node can't be found. I am on Windows 10, and Node is set in both my System and User PATH. Running "node -v" from cmd.exe works and returns the version number, so Laravel's server doesn't seem to be able to find node. However, I can't check the path, because running this:
$process = new Process(['echo', '%PATH%']);
$process->run();
print_r($process);
Just leads to an output of
"%PATH%"
How can I make Node findable by Laravel / Process?
Many thanks for your help.
I eventually solved this by setting the path for node in my Laravel .env file:
NODE_PATH='C:\PROGRA~1\nodejs\node.exe'
NPM_PATH='D:\Work\Project\node_modules'
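With those values in place, the binary can be passed to Process by its absolute path instead of relying on a PATH lookup; a minimal sketch, assuming the NODE_PATH entry from the .env above:
use Symfony\Component\Process\Process;

// env('NODE_PATH') resolves to the absolute node.exe path set in .env; falls back to plain "node"
// (if you cache your config, move the env() call into a config file and read it via config())
$process = new Process([env('NODE_PATH', 'node'), '-v']);
$process->run();
echo $process->getOutput();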
Error 503 Service Unavailable
Service Unavailable
Guru Meditation:
XID: 5312211
Varnish cache server
I am working with cPanel and a subdomain, but I am getting this error from my Laravel project. Can you help me solve it?
I am using cPanel and Laravel 5.5.
Looks like you ran the artisan down command, but not the up command.
Just run:
php artisan up
and then you will get:
"Application is now live."
If you have tried the php artisan up command and your site is still not up and gives a 503, then delete the down file inside /storage/framework/.
I had the same problem in cPanel. Here is what I did to fix it:
I updated the PHP version to the latest, then went to the Shell option in the panel and entered this command, and my app is live now:
php artisan up
The cause of the error likely stems from Apache and PHP-FPM becoming overloaded with requests. PHP-FPM, and occasionally Apache, will need their limits adjusted to get around this.
To start, we recommend attempting to adjust the PHP-FPM pool limits from within the WHM "MultiPHP Manager." To do so globally:
[1] Access WHM >> MultiPHP Manager
[2] Select "System PHP-FPM Configuration"
[3] Adjust the "Max Children" and "Max Requests" fields. We recommend incrementing in values of 5 to 10 to ensure that PHP-FPM does not get overloaded.
[4] Save your configuration settings
To perform these changes by domain:
[1] Access WHM >> MultiPHP Manager
[2] Scroll down and locate a domain that experiences the problem.
[3] Select "Edit PHP-FPM" for the domain in question.
[4] Adjust the "Max Children" and "Max Requests" fields.
[5] Save your configuration settings
If you have run php artisan down previously, then you may face this issue.
You have to bring the application back up using the php artisan up command.
Why did this happen? I made the same mistake: I ran php artisan down and then ran
php artisan serve
and the CLI showed me
<http://127.0.0.1:8000>
[Thu Dec 31 00:25:23 2020] PHP 7.4.3 Development Server (http://127.0.0.1:8000) started
The application started, but it was showing 503 Service Unavailable. Then I ran php artisan up and the application worked.
Run this command to make it work:
php artisan up
If you get a Laravel "503 Service Unavailable" error but there is no info in the log file, check the filesystem status. You may have run out of disk space. Check what's filling it:
cd / && sudo du -h --max-depth=1 -x
503 Service Unavailable
The server is currently unavailable (because it is overloaded or down for maintenance). Generally, this is a temporary state.
I am having trouble deploying with Deployer 4.0.2, and I need help from somebody more experienced in this than me.
I want to deploy a repository of mine to a Ubuntu 16.04 server.
I am using Laravel Homestead as my development environment, where I also installed Deployer. From there I SSH into my remote server.
I was able to deploy my code with the root user, until I hit a RuntimeException that aborted my deployment:
Do not run Composer as root/super user! See https://getcomposer.org/root for details
That made me create another user called george, whom I gave superuser rights. I copied my public key from my local machine to a newly generated ~/.ssh/authorized_keys file, which gave me permission to access the server via SSH.
Yet when I run dep deploy with the new user:
server('production', '138.68.99.157')
    ->user('george')
    ->identityFile()
    ->set('deploy_path', '/var/www/test');
I get another RuntimeException:
Host key verification failed.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Now it looks like the new user george cannot access the ~/.ssh/id_rsa.pub key. So I copied the keys from the root folder into my home folder and also added the public key to the GitHub SSH settings:
cp root/.ssh/id_rsa.pub home/george/.ssh/id_rsa.pub
cp root/.ssh/id_rsa home/george/.ssh/id_rsa
Only to get the same error as before.
In the end I had to add GitHub to my list of known hosts:
ssh-keyscan -H github.com >> ~/.ssh/known_hosts
Only to get the next RuntimeException:
[RuntimeException]
sudo: no tty present and no askpass program specified
I managed to get past this by commenting out this code in deploy.php:
// desc('Restart PHP-FPM service');
// task('php-fpm:restart', function () {
// // The user must have rights for restart service
// // /etc/sudoers: username ALL=NOPASSWD:/bin/systemctl restart php-fpm.service
// run('sudo systemctl restart php-fpm.service');
// });
// after('deploy:symlink', 'php-fpm:restart');
to finally get the deployment process done. Now I ask myself: is the restart of PHP-FPM really necessary, so that I should keep debugging this deployment tool, or can I live without it?
And if I do need it, can somebody help me understand what I need it for? And maybe, as a luxury, also provide the solution to the RuntimeException?
Try this:
->identityFile('~/.ssh/id_rsa.pub', '~/.ssh/id_rsa', 'pass phrase')
It works great for me - no need for an askpass program.
It helps to be explicit in my experience.
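Applied to the server() definition from the question, the whole block might look roughly like this (the key paths and the passphrase are placeholders, not values I know):
// deploy.php, Deployer 4.x style as used in the question
server('production', '138.68.99.157')
    ->user('george')
    ->identityFile('~/.ssh/id_rsa.pub', '~/.ssh/id_rsa', 'pass phrase')
    ->set('deploy_path', '/var/www/test');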
As for your PHP-FPM restart task... I haven't seen that before. It shouldn't be needed. :)
EDIT:
The fact that you provide a passphrase there is probably a good sign that you ought to refactor your Deployer code a bit if you keep it under source control.
I am loading site specific data from a YAML file - which I am not submitting to source control.
The first bit of my stage.yml :
# Site Configuration
# -------------
prod_1:
  host: hostname
  user: username
  identity_file:
    public_key: /home/user/.ssh/key.pub
    private_key: /home/user/.ssh/key
  password: "password"
  stage: production
  repository: https://github.com/user/repository.git
  deploy_path: /var/www
  app:
    debug: false
    stage: 'prod'
And then, in my deploy.php :
if (!file_exists(__DIR__ . '/deployer/stage/servers.yml')) {
    die('Please create "' . __DIR__ . '/deployer/stage/servers.yml" before continuing.' . "\n");
}
serverList(__DIR__ . '/deployer/stage/servers.yml');
set('repository', '{{repository}}');
set('default_stage', 'production');
Notice that, when you use serverList, it replaces your server setup in deploy.php
I hope someone can help me out with this issue I'm facing.
I've made a fully functional project on a local server and would now like to deploy it to Bluemix Cloud Foundry.
I've followed the tutorial: https://console.eu-gb.bluemix.net/docs/starters/upload_app.html
But when I try to push it through the terminal with the following commands:
cf push app_name -b https://github.com/cloudfoundry/php-buildpack.git -s cflinuxfs2
cf push app_name -b https://github.com/cloudfoundry/go-buildpack
cf push app_name -c start_command
cf push app_name -m 512m
none of them seems to work, since every single time I get the following error:
Staging failed: Buildpack compilation step failed
-----> Composer command failed
FAILED
Error restarting application: BuildpackCompileFailed
It is a PHP app built with PhpStorm, using Symfony and Doctrine, if that matters.
I am fairly new to all the server/setup/deployment configuration as well as the command line.
EDIT 1
I figured out this part thanks to this link: https://support.run.pivotal.io/entries/109600943-cf-push-ing-a-symfony-app-fails-with-Composer-command-failed-
It seems that by default the buildpack assumes that you want all of the files you push to be public. Because of this assumption, it takes all of your files and moves them into the doc root of either HTTPD or Nginx.
The fix is to create the file .bp-config/options.json in the root of your project. Then, inside options.json, add:
{
"WEBDIR": "web"
}
This will tell the buildpack that you have a specific directory to use for the doc root, so it will just use that instead of moving everything into the default doc root.
However...
This brings me to a new issue, which returns the following error:
FAILED
Error restarting application: Start unsuccessful
If I check the recent log, the terminal shows me this:
2016-08-25T02:53:40.62+0200 [App/0] OUT Could not open input file: app.php
2016-08-25T02:53:40.62+0200 [App/0] ERR
2016-08-25T02:53:40.69+0200 [DEA/211] ERR Instance (index 0) failed to start accepting connections
2016-08-25T02:53:40.72+0200 [API/9] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-25T02:53:40.72+0200 [API/3] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-24T16:41:14.03+0200 [DEA/135] OUT Starting app instance (index 0) with guid abb206b3-b8ea-4269-b248-ec7b35f7098a
2016-08-24T16:41:26.26+0200 [App/0] ERR bash: start_command: command not found
2016-08-24T16:41:26.26+0200 [App/0] OUT
2016-08-24T16:41:26.35+0200 [DEA/135] ERR Instance (index 0) failed to start accepting connections
2016-08-24T16:41:26.38+0200 [API/6] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"5ebd6d77-68c4-4901-b9a8-b5cecfa4cddb", "instance"=>"7b5b555ae68645f4a2c09b73c0adbcb3", "index"=>0, "reason"=>"CRASHED", "exit_status"=>127, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472049686}
EDIT 2 (updated error msg)