I'm having trouble getting my worker process to update on Heroku. I have a worker dyno set in my Procfile that is connected to a Redis instance, and after deploying I can't see any of the changes I make to the worker file.
I've tried:
Resetting the dyno using heroku ps:restart worker.1 -a [appname]
Restarting all dynos using heroku ps:restart -a [appname]
Changing the contents of the file so the size is different
Changing the Procfile to point to a different PHP file
Nothing works. It looks like it picked up some of my changes overnight (maybe during reboot?) but I can't force it to pick up the changes... any ideas?
Logs to the rescue... I had an issue with my required include file paths, which was causing the build to fail. Heroku kept the last successful build running, which is why it looked like it was caching.
I was able to find this by watching the logs during the build:
heroku[worker.1]: Starting process with command `php bin/worker.php`
heroku[worker.1]: State changed from starting to up
heroku[worker.1]: Process exited with status 255
app[worker.1]: PHP Warning: require_once(../vendor/autoload.php): failed to open stream: No such file or directory in /app/bin/worker.php on line 9
app[worker.1]: PHP Fatal error: require_once(): Failed opening required '../vendor/autoload.php' (include_path='.:/app/.heroku/php/lib/php') in /app/bin/worker.php on line 9
heroku[worker.1]: State changed from up to crashed
Once I fixed those errors, the 'caching' stopped and everything worked properly.
The include path that worked:
require_once(__DIR__ . '/../vendor/autoload.php');
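The reason this matters: a bare relative path in require_once is resolved against the process's current working directory (and the include_path), not against the location of the script itself, so '../vendor/autoload.php' breaks when the dyno launches the worker from /app. A minimal sketch of the difference (bin/worker.php reflects my layout; adjust to yours):

<?php
// bin/worker.php

// Resolved relative to the CWD (/app when Heroku starts the dyno),
// so this fails even though the file exists at /app/vendor/autoload.php:
// require_once '../vendor/autoload.php';

// __DIR__ is always the directory of *this* file (/app/bin here),
// so the path resolves correctly wherever the process is started from:
require_once __DIR__ . '/../vendor/autoload.php';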
I am running Laravel on Homestead, and whenever I run any php artisan XXX command, a file named -1 is created in the root directory of the app.
The contents of the file look like this:
Log opened at 2017-12-22 13:54:00
I: Connecting to configured address/port: 10.0.2.2:9000.
E: Time-out connecting to client. :-(
Log closed at 2017-12-22 13:54:00
I am 99% sure it is related to some changes I made in my failed attempts to get XDebug breakpoints working with artisan commands. I exported some shell variables, as recommended in this answer, but when I run export -p I don't see any of them.
Did anyone have a similar issue? What setting can be causing such behavior?
Following the suggestion of LazyOne, I found the answer:
It seems that paths in the .ini file have to be absolute. So instead of:
xdebug.remote_log=~/code/xdebug.log
I had to set it to:
xdebug.remote_log=/home/vagrant/code/xdebug.log
and now it works as expected.
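For anyone else poking at this, the relevant block of my Xdebug ini ended up looking roughly like the following; the file location, host, and port are from my Homestead box, so treat them as an example rather than a recommendation:

; xdebug ini (location varies, e.g. /etc/php/fpm/conf.d/20-xdebug.ini)
xdebug.remote_enable=1
; 10.0.2.2 is the host machine as seen from inside the Vagrant VM
xdebug.remote_host=10.0.2.2
xdebug.remote_port=9000
; must be absolute: ~ is not expanded in ini files
xdebug.remote_log=/home/vagrant/code/xdebug.log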
I've been away from Laravel/Unix for some time, but I have a project to set up and hit some snags. The most recent version of Laravel requires PHP >= 5.6, so I got that updated, set up my new project with laravel new project, and made the usual modifications to the user/group permissions for the storage/ and bootstrap/cache folders.
I'm getting an HTTP 500 error from Nginx, so I checked the error log, and I'm getting this in /var/log/nginx/error.log:
FastCGI sent in stderr: "PHP Message: PHP Fatal Error: Class PDO not found in /home/user/public_html/project/config/database.php on line 16" while reading upstream ... upstream: "fastcgi://unix:/var/run/php5-fpm.sock"
PHP Version 5.6.28-1~dotdeb+7.1
When I checked /etc/php5/fpm/php.ini, the usual extension=pdo.so and extension=pdo_mysql.so lines were not there, so I added them to test it. However, those extensions are already requested by the ini files in the conf.d folder, and phpinfo() shows those files being scanned/loaded.
Yet further down in the phpinfo() output, PDO/PDO_MySQL is not listed.
UPDATE
I just used find / -name pdo.so (and the same for pdo_mysql.so) to find the paths to those files and manually modified the loading configuration files to point to them correctly. I restarted the server, but that doesn't change anything.
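For what it's worth, I also checked what the FPM SAPI itself reports, since the CLI php binary can read a different set of ini files than FPM does; roughly:

# ask the FPM binary, not the CLI, what it actually loads
sudo php5-fpm -m | grep -i pdo
sudo php5-fpm -i | grep -i 'Loaded Configuration'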
I apologize that this is long. I've spent a couple of hours scouring to make sure I wasn't just missing something silly, and I may still be.
Any ideas, overflowers?
Well, after much consternation and fiddling I figured this one out...
There was a recursive/endless include loop in the php-fpm.conf config, as shown here:
include=/etc/php5/fpm/*.conf
This was causing php-fpm.conf to attempt to include itself, which I caught in a bootup error: Failed to load configuration file /etc/php5/fpm/php-fpm.conf from /etc/php5/fpm/php-fpm.conf
So I modified that to include=/etc/php5/fpm/conf.d/*.conf, and everything started back up, and now PDO/PDO_MySQL is loading.
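In other words, the relevant part of /etc/php5/fpm/php-fpm.conf went from a self-matching glob to one that only picks up the drop-ins, along these lines:

; /etc/php5/fpm/php-fpm.conf
; BAD: the glob matches php-fpm.conf itself, so the file includes itself
;include=/etc/php5/fpm/*.conf
; GOOD: only pull in the additional config files
include=/etc/php5/fpm/conf.d/*.conf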
I hope someone can help me out with this issue I'm facing.
I've made a fully functional project on a local server and would now like to deploy it to Bluemix Cloud Foundry.
I've followed the tutorial: https://console.eu-gb.bluemix.net/docs/starters/upload_app.html
But when I try to push it from the terminal with any of the following commands:
cf push app_name -b https://github.com/cloudfoundry/php-buildpack.git -s cflinuxfs2
cf push app_name -b https://github.com/cloudfoundry/go-buildpack
cf push app_name -c start_command
cf push app_name -m 512m
none of them works, since every single time I get the following error:
Staging failed: Buildpack compilation step failed
-----> Composer command failed
FAILED
Error restarting application: BuildpackCompileFailed
It is a PHP app built with PhpStorm on Symfony and Doctrine, if that matters.
I am fairly new to all the server/setup/deployment configuration, as well as the command line.
EDIT 1
I figured out this part thanks to this link: https://support.run.pivotal.io/entries/109600943-cf-push-ing-a-symfony-app-fails-with-Composer-command-failed-
It seems that by default the buildpack assumes that you want all of the files you push to be public. Because of this assumption, it takes all of your files and moves them into the doc root of either HTTPD or Nginx.
Create the file .bp-config/options.json in the root of your project. Then inside options.json add:
{
    "WEBDIR": "web"
}
This will tell the buildpack that you have a specific directory to use for the doc root, so it will just use that instead of moving everything into the default doc root.
However...
This brings me to a new issue, which returns the following error:
FAILED
Error restarting application: Start unsuccessful
If I check the recent logs, the terminal gives me this:
2016-08-25T02:53:40.62+0200 [App/0] OUT Could not open input file: app.php
2016-08-25T02:53:40.62+0200 [App/0] ERR
2016-08-25T02:53:40.69+0200 [DEA/211] ERR Instance (index 0) failed to start accepting connections
2016-08-25T02:53:40.72+0200 [API/9] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-25T02:53:40.72+0200 [API/3] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"b6c3c871-5484-4f12-9d84-657cf6eacfbf", "instance"=>"c11566bdabe5458d9bfc4965c9c1aa85", "index"=>0, "reason"=>"CRASHED", "exit_status"=>1, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472086420}
2016-08-24T16:41:14.03+0200 [DEA/135] OUT Starting app instance (index 0) with guid abb206b3-b8ea-4269-b248-ec7b35f7098a
2016-08-24T16:41:26.26+0200 [App/0] ERR bash: start_command: command not found
2016-08-24T16:41:26.26+0200 [App/0] OUT
2016-08-24T16:41:26.35+0200 [DEA/135] ERR Instance (index 0) failed to start accepting connections
2016-08-24T16:41:26.38+0200 [API/6] OUT App instance exited with guid abb206b3-b8ea-4269-b248-ec7b35f7098a payload: {"cc_partition"=>"default", "droplet"=>"abb206b3-b8ea-4269-b248-ec7b35f7098a", "version"=>"5ebd6d77-68c4-4901-b9a8-b5cecfa4cddb", "instance"=>"7b5b555ae68645f4a2c09b73c0adbcb3", "index"=>0, "reason"=>"CRASHED", "exit_status"=>127, "exit_description"=>"failed to accept connections within health check timeout", "crash_timestamp"=>1472049686}
EDIT 2 (updated error msg)
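Worth noting about that second log: bash: start_command: command not found is the literal placeholder from cf push app_name -c start_command being executed. The -c flag expects an actual shell command. Just as an illustration (app.php as the entry point is an assumption drawn from the first log, not something I've verified):

cf push app_name -c "php app.php"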
I'm working on Laravel 5, and I'm using PHP's built-in server (php -S localhost:8888 folder-name -t) to serve the site.
Everything was working fine until I updated to Windows 10. Now, when I try to open the project in my browser, I get a blank page and this message in my cmd:
[Mon Aug 03 00:17:05 2015] PHP Fatal error: Unknown: Failed opening required 'public' (include_path='.;C:\php\pear\') in Unknown on line 0
What is going wrong?
It sounds like a permissions issue. I don't have much experience working with Laravel on Windows, but I just fixed a similar issue on an Ubuntu box.
The source of the trouble for me was that I had installed Composer as root, so I had to remove ./vendor, change the owner and group of ~/.composer to ubuntu:www-data (ubuntu is my user, www-data is the Nginx user), and rerun composer install. I also made sure the ./storage permissions were recursively set to 775, and changed the owner of my entire Laravel project to ubuntu:www-data.
Some stuff will definitely be different if you're on Windows, but hope this helps!
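In shell terms, what I ran was roughly this (ubuntu/www-data and the project path are from my box; substitute your own user, web-server group, and paths):

# remove the root-owned vendor dir and fix the composer cache ownership
rm -rf ./vendor
sudo chown -R ubuntu:www-data ~/.composer

# reinstall dependencies as the normal user
composer install

# make storage writable and give the whole project sane ownership
sudo chmod -R 775 ./storage
sudo chown -R ubuntu:www-data /path/to/laravel-project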
It is actually a permission error. But if you run it directly from your localhost server, it will work.
On Windows 10, the PHP server will not be able to access the PHP PEAR extension, which is why it is showing this error.
If you are running XAMPP, you have to go to your Apache localhost at http://localhost/laravel-folder/public.
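If you'd rather keep using the built-in server, note that -t takes the docroot as its argument, so pointing it at Laravel's public folder from the project root should do it (or just use php artisan serve, which wraps the same built-in server):

php -S localhost:8888 -t public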
On my development box (thank goodness it's not happening in production, as far as I know, yet), as I'm working on a PHP site, I get this occasional error:
Warning: require_once(filename.php): failed to open stream: Too many open files in path/functions.php on line 502

Fatal error: require_once(): Failed opening required 'filename.php' (include_path='/my/include/path') in path/functions.php on line 502
Line 502 of functions.php is my "autoload" function which automatically requires PHP files containing classes I use:
function autoload($className)
{
require_once $className . ".php"; // <-- Line 502
}
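For completeness, the function is hooked in once during bootstrap, something like the following (I'm assuming the standard spl_autoload_register wiring here; the legacy __autoload hook behaves the same for this purpose):

// registered once at startup; PHP then calls autoload() the first time
// each unknown class is referenced, triggering one require_once per class
spl_autoload_register('autoload');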
By an "occasional" error, I mean that it'll work fine for about a day of development, then when I first see this, I can refresh the page and it'll be okay again, then refreshing gives me the same error. This happens just a few times before it starts to show it every time. And sometimes the name of the file it's requiring (I have a script split out into several PHP files) is different... it's not always the first or last or middle files that it bombs on.
Restarting php-fpm seems to solve the symptoms, but not the problem in the long run.
I'm running PHP 5.5.3 on my Mac (OS X 10.8) with nginx 1.4.2 via php-fpm.
Running lsof | grep php-fpm | wc -l tells me that php-fpm has 824 files open. When I examined the actual output, I saw that, along with some .so and .dylib files, the vast majority of lines were like this:
php-fpm 4093 myuser 69u unix 0x45bc1a64810eb32b 0t0 ->(none)
The segment "69u" and the 0x45bc1a6481... number are different on each row. What could this mean? Is this the problem? (ulimit is "unlimited")
Incidentally (though perhaps unrelated), there are also one or two of these:
php-fpm 4093 myuser 8u IPv4 0x45bc1a646b0f97b3 0t0 TCP 192.168.1.2:59611->rest.nexmo.com:https (CLOSE_WAIT)
(I have some pages which use HttpRequest (PECL libraries) to call out to the Nexmo API. Are these not being closed properly or something? How can I crack down on those?)
Try setting php-fpm's limits (they're set to infinite by default) to values more appropriate for your needs.
For example:
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s
Maybe set this too, if your app works with lots of files:
rlimit_files = 1024
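One placement note: emergency_restart_threshold, emergency_restart_interval, and process_control_timeout are global directives, while rlimit_files is set per pool, so the layout looks something like this:

; php-fpm.conf
[global]
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

; pool config (e.g. www.conf)
[www]
rlimit_files = 1024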