I uploaded my Laravel project to Heroku web hosting, but it doesn't work. It says: "Application error. An error occurred in the application and your page could not be served. If you are the application owner, check your logs for details. You can do this from the Heroku CLI with the command heroku logs --tail"
When I run the command heroku logs --tail in cmd, it shows the output below.
It looks like there is more than one error; I don't really know what these errors mean, and I couldn't find anything about them anywhere.
2023-02-10T20:33:34.785966+00:00 heroku[web.1]: State changed from starting to crashed
2023-02-10T20:34:07.000000+00:00 app[api]: Build succeeded
2023-02-10T20:34:07.404549+00:00 heroku[web.1]: State changed from crashed to starting
2023-02-10T20:34:10.304458+00:00 heroku[web.1]: Starting process with command `npm start`
2023-02-10T20:34:12.223124+00:00 app[web.1]:
2023-02-10T20:34:12.223146+00:00 app[web.1]: > start
2023-02-10T20:34:12.223147+00:00 app[web.1]: > if-env NODE_ENV=production && npm run start:prod || npm run start:dev
2023-02-10T20:34:12.223147+00:00 app[web.1]:
2023-02-10T20:34:12.228492+00:00 app[web.1]: sh: 1: if-env: not found
2023-02-10T20:34:12.626616+00:00 app[web.1]:
2023-02-10T20:34:12.626644+00:00 app[web.1]: > start:dev
2023-02-10T20:34:12.626646+00:00 app[web.1]: > concurrently "nodemon - -ignore 'client/*'" "npm run client"
2023-02-10T20:34:12.626646+00:00 app[web.1]:
2023-02-10T20:34:12.632186+00:00 app[web.1]: sh: 1: concurrently: not found
2023-02-10T20:34:12.794122+00:00 heroku[web.1]: Process exited with status 127
2023-02-10T20:34:12.856093+00:00 heroku[web.1]: State changed from starting to crashed
2023-02-10T20:34:14.498753+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/" host=poradej-turnaj.herokuapp.com request_id=dbc569cd-b37e-4a45-a8cb-8e791fa59c1c fwd="84.42.219.107" dyno= connect= service= status=503 bytes= protocol=https
2023-02-10T20:34:14.675679+00:00 heroku[router]: at=error code=H10 desc="App crashed" method=GET path="/favicon.ico" host=poradej-turnaj.herokuapp.com request_id=5476f6cb-67c8-4f2e-a5e2-abdbfc923cc2 fwd="84.42.219.107" dyno= connect= service= status=503 bytes= protocol=https
Here is some of my code:
package.json
{
"private": true,
"scripts": {
"start": "if-env NODE_ENV=production && npm run start:prod || npm run start:dev",
"start:prod": "node server.js",
"start:dev": "concurrently \"nodemon - -ignore 'client/*'\" \"npm run client\""
},
"devDependencies": {
"#popperjs/core": "^2.11.6",
"axios": "^0.27",
"bootstrap": "^5.2.3",
"laravel-vite-plugin": "^0.6.0",
"lodash": "^4.17.19",
"postcss": "^8.1.14",
"sass": "^1.56.1",
"vite": "^3.0.0"
},
"engines": {
"npm": "9.4.2",
"node": "18.13.0"
},
"dependencies": {
"nodemon": "^2.0.20"
}
}
Procfile
web: vendor/bin/heroku-php-apache2 public/
Can you please give me some advice? Tell me if I should post any more code.
It looks like there are two command-line utilities that simply don't exist on the server: if-env and concurrently. If you look at the scripts in your package.json you'll see what I mean. The server is trying to run those commands, but the utilities aren't installed.
So you'll need to install them on the Heroku server (one possible way is sketched below the links), or change the way your frontend application starts.
https://www.npmjs.com/package/if-env
https://www.npmjs.com/package/concurrently
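For example, a minimal sketch of the first option (assuming you keep the current start script; the version ranges below are only examples, and they go under dependencies because Heroku typically prunes devDependencies in production builds) would be:
"dependencies": {
    "nodemon": "^2.0.20",
    "if-env": "^1.0.4",
    "concurrently": "^7.6.0"
}
Then commit the updated package.json and lockfile and redeploy.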
I'm not really sure why your server would be running those scripts, though. Usually servers pull the code down from GitHub, run npm build (or whatever), and then simply serve the compiled JS.
I don't know anything about Heroku though.
Related
I know this may seem similar to many questions already out there, but I've read dozens and I am still experiencing the same issue. My code works fine on my local machine, and my Raspberry Pi was configured by the exact same setup script, but it doesn't work on the Pi.
My OS is Raspbian 10 (Buster) with kernel version 5.10.103-v7l+. I am running PHP 8.1, along with the other packages in the following command: sudo apt install -y php8.1-common php8.1-cli php8.1-fpm php8.1-xml php8.1-curl
php -m lists curl and php -i and phpinfo() show that cURL is installed and enabled:
cURL support => enabled
cURL Information => 7.64.0
I know cURL is installed and enabled, so how do I get my PHP to recognize and use the package? It works perfectly on my local Linux box, but on the Pi it just doesn't. I have restarted my server and my entire machine and neither made a difference. I have run sudo apt update && sudo apt upgrade and everything is at the most recent version.
The 'problematic' snippet of code on my server is the following:
require "vendor/autoload.php";
use Symfony\Component\Panther\Client;
$options = [
"--headless",
"--window-size=1200,1100",
"--no-sandbox",
"--disable-gpu",
"--disable-dev-shm-usage",
];
$query = urlencode($_GET["q"]);
$url ="/search?hl=en&tbo=d&site=&source=hp&q=".$query;
$client = Client::createChromeClient("/usr/lib/chromium-browser/chromedriver", $options, [], "http://www.google.com");
$client->request("GET", $url);
My full error log is:
*168 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Call to undefined function Facebook\WebDriver\Remote\curl_init() in /var/lib/jenkins/workspace/url-here.com/vendor/php-webdriver/webdriver/lib/Remote/HttpCommandExecutor.php:190
Stack trace:
#0 /var/lib/jenkins/workspace/url-here.com/vendor/php-webdriver/webdriver/lib/Remote/RemoteWebDriver.php(100): Facebook\WebDriver\Remote\HttpCommandExecutor->__construct()
#1 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/ProcessManager/ChromeManager.php(75): Facebook\WebDriver\Remote\RemoteWebDriver::create()
#2 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(117): Symfony\Component\Panther\ProcessManager\ChromeManager->start()
#3 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(521): Symfony\Component\Panther\Client->start()
#4 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(273): Symfony\Component\Panther\Client->get()
#5 /var/lib/jenkins/workspace" while reading response header from upstream, client: 96.244.127.121, server: url-here.com, request: "GET /googleSearch.php?q=test HTTP/1.1", upstream: "fastcgi://unix:/run/php/php8.0-fpm.sock:", host: "url-here.com", referrer: "https://url-here.com/apps/browser.html"
Any and all help would be appreciated; I've been struggling with this one for a while. I would just avoid using curl_init() if I could, but unfortunately, as you can see, the function is called in a library that my project depends on.
This line from the error log indicates what the problem most likely is:
#5 /var/lib/jenkins/workspace" while reading response header from upstream, client: 96.244.127.121, server: url-here.com, request: "GET /googleSearch.php?q=test HTTP/1.1", upstream: "fastcgi://unix:/run/php/php8.0-fpm.sock:", host: "url-here.com", referrer: "https://url-here.com/apps/browser.html"
You've got PHP 8.0 here, but you've installed cURL for PHP 8.1. You need to check your FPM or server configuration.
I think you're using Nginx. If that's correct, this article might be helpful. Specifically, you'll want to look in:
/etc/nginx/sites-available to check which socket is used
/etc/php/VERSION/fpm/pool.d to check what sockets are available
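As a rough sketch of the fix (the socket and include names are assumptions based on a typical Debian/Raspbian layout, so check what actually exists on your Pi), the PHP location block in your Nginx site config should point at the 8.1 socket:
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    # use the PHP 8.1 FPM socket instead of the 8.0 one
    fastcgi_pass unix:/run/php/php8.1-fpm.sock;
}
Then reload Nginx, for example with sudo systemctl reload nginx.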
Today I ran into a strange problem:
Our project needed to bring a new module online.
When I ran artisan down in the production environment and visited the site, it did not show the maintenance page; it still showed the home page.
I checked that the CheckForMaintenanceMode middleware had already been added to the global middleware, and the down file in storage/framework exists.
When I run php index.php from the command line it returns the maintenance page, but when I visit the site from a browser or with curl it shows the home page.
I also tried this on the test server and locally, and there it all works fine.
I added a new route to test the middleware and visited its URL with curl and a browser; the result was a 404, the route was not found.
I thought it might be caused by the route cache, but there is no cache file in bootstrap/cache or storage/framework, because I never enabled the route cache!
Having no other ideas, I modified the index file, added a header() call at the top to redirect to an error HTML page, and the crazy thing happened: it still showed the home page!
What happened? I'm sure the project path is right.
Finally I reloaded php-fpm and everything recovered: the maintenance view works, the redirect works, and the routes are normal.
I still don't understand it, but I guess it may be caused by OPcache?
I have the OPcache extension enabled, with the default settings.
env:
laravel: 5.3
nginx: 1.8.1
php-fpm: 7.0.9 with opcache ext
First, check your FPM logs; usually something like this will pop up between the notices (check your debug/log levels):
[01-Mar-2017 23:59:45] NOTICE: [pool www] child 16951 started
[01-Mar-2017 23:59:48] WARNING: [pool www] child 14754
exited on signal 11 (SIGSEGV - core dumped) after 4393.427133 seconds
from start
Unfortunately, you have to disable OPcache. I've been seeing this issue from PHP 5.5 all the way up to 7.1. You will also find entries like these in the error logs:
2017/03/02 10:00:24 [error] 30498#30498: *170523 upstream timed out
(110: Connection timed out) while reading response header from upstream,
client: 81.243.144.1xx, server: fake.test.pro,
request: "POST /api/users/53e4203cfd1c46e08d5b570c2c93ff86/items HTTP/1.1",
upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "fake.test.pro",
referrer: "http://fake.test.pro/console"
This happens in particular with Laravel, but I've also seen it on WordPress installations. It stops when I disable OPcache, on all versions of php-fpm.
There are bug reports around on this issue but no fixes so far. I always end up doing this:
[opcache]
; Determines if Zend OPCache is enabled
opcache.enable=0
in the /etc/php/7.*/fpm/php.ini files. Then my application is robust again, and it costs us about 150 ms. It sucks.
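For completeness, the change only takes effect once FPM is restarted; on a systemd-based distro with PHP 7.0 that would be something like (the service name is an assumption, adjust it to your version):
sudo systemctl restart php7.0-fpm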
Problem
When xdebug server is running from IntelliJ IDEA, I get 502 Bad Gateway from nginx when I try loading my site to trigger breakpoints.
If I stop the xdebug server, the site works as intended.
So, I'm not able to run the debugger, but it did work previously (!). I'm not able to pinpoint why it suddenly stopped working.
Setup
A short explanation of the setup (let me know if I need to expand on this).
My PHP app is running in a Docker container, and it is linked to Nginx running in a different container using volumes_from in the Docker Compose config.
After starting the app, I can verify using phpinfo(); the xdebug module is loaded.
My xdebug.ini has the following content:
zend_extension=xdebug.so
xdebug.remote_enable=1
xdebug.remote_host=10.0.2.2
xdebug.remote_connect_back=0
xdebug.remote_port=5555
xdebug.idekey=complex
xdebug.remote_handler=dbgp
xdebug.remote_log=/var/log/xdebug.log
xdebug.remote_autostart=1
I got the IP address for remote_host (where the xdebug server is running) with these steps:
docker-machine ssh default
route -n | awk '/UG[ \t]/{print $2}' <-- Returns 10.0.2.2
To verify I could reach the debugging server from within my PHP container, I did the following:
docker exec -it randomhash bash
nc -z -v 10.0.2.2 5555
Giving the following output depending on xdebug server running or not:
Running: Connection to 10.0.2.2 5555 port [tcp/*] succeeded!
Not running: nc: connect to 10.0.2.2 port 5555 (tcp) failed: Connection refused
So IntelliJ IDEA is surely set up to receive connections on 5555. I also did the appropriate path mapping between my source file paths and the remote path (when setting up the PHP Remote Debugging server from within IDEA).
Any ideas? Kind of lost on this one as I don't have much experience with any of these technologies :D
This sometimes happens; the reason is errors in php-fpm combined with Xdebug (exactly that)!
When I was refactoring a colleague's code, one page on the project returned 502 Bad Gateway.
Here's what I found:
php-fpm.log
WARNING: [pool www] child 158 said into stderr: "*** Error in `php-fpm: pool www': free(): invalid size: 0x00007f1351b7d2a0 ***"
........
........
WARNING: [pool www] child 158 exited on signal 6 (SIGABRT - core dumped) after 38.407847 seconds from start
I found a piece of code that caused the error:
ob_start();
$result = eval("?>".$string."<"."?p"."hp return 1;");
$new_string = ob_get_clean();
But that is not all. The error occurred only for a certain state of $string which, at first glance, did not differ from the others. In my case the solution was simple: I removed the code that caused the error, which did not affect the functionality of the web page, and I continued to debug the rest of the code.
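For what it's worth, if you do need that kind of template evaluation, one possible workaround (just a sketch, not tested against the original code, and it may or may not avoid the underlying crash) is to drop eval() and include a temporary file instead, which keeps the same output-buffering semantics:
ob_start();
// included files start in HTML mode, so the "?>" prefix from the eval() version is not needed
$tmp = tempnam(sys_get_temp_dir(), 'tpl');
file_put_contents($tmp, $string . "<?php return 1;");
$result = include $tmp;        // like the eval(): outputs $string, then returns 1
$new_string = ob_get_clean();
unlink($tmp);                  // clean up the temporary file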
I had the same problem with the Vagrant Homestead Parallels box on an Apple Silicon Mac. Switching from PHP 7.3 to 7.4 fixed the issue for me.
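If it helps anyone in the same spot, Homestead lets you pin the PHP version per site in Homestead.yaml; a sketch (the site name and path are placeholders for illustration):
sites:
    - map: homestead.test
      to: /home/vagrant/code/project/public
      php: "7.4"
followed by vagrant reload --provision to apply it.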
I'm setting up continuous integration and I'm wondering if everything should take so damn long.
My build has been running for over a day now and it still isn't finished.
It is a normal Laravel app with around 20 controllers, so some time is to be expected, but over a day?
My config is fairly simple in my opinion:
build_settings:
ignore:
- "vendor"
setup:
composer:
action: "install"
test:
php_mess_detector:
allow_failures: true
php_code_sniffer:
standard: "PSR2"
php_cpd:
allow_failures: true
php_docblock_checker:
allowed_warnings: 10
skip_classes: true
php_loc:
directory: "src"
No errors, only the (by now) pesky status "Pending"
When I check the logs I get this error:
2016/01/28 08:01:32 [error] 6702#0: *4 FastCGI sent in stderr: "PHP message: PHP Fatal error: Class 'PHPCI\Controller' not found in /var/www/vendor/block8/b8framework/b8/Application.php on line 93" while reading response header from upstream, client: someipaddress, server: green.somedomain.com, request: "GET /assets/js/plugins/datepicker/locales/bootstrap-datepicker.en.js HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "green.somedomain.com", referrer: "http://green.somedomain.com/build/view/5"
I did composer update / install and I also added the following rule to the nginx configuration:
fastcgi_param SCRIPT_NAME index.php;
My question is, is this normal? Is my config good? Am I forgetting something?
You've not set up the build runner when you set up PHPCI. The web interface merely creates the build and displays the results; you need to run the command-line tool to actually run the builds.
There are three ways to set this up:
(New in 1.7 beta) PHPCI Worker with beanstalkd:
    Install beanstalkd
    Use supervisord (or similar) to run /path/to/phpci/console phpci:worker (see the sketch after this list)
(Recommended for 1.6 and below) PHPCI Daemon: https://www.phptesting.org/wiki/Run-Builds-Using-a-Daemon
(Fallback option) Cron: https://www.phptesting.org/wiki/Run-Builds-Using-Cron
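For the supervisord route, a minimal program definition might look like the following (the install path and user are assumptions, adjust them to your setup):
[program:phpci-worker]
command=php /path/to/phpci/console phpci:worker
directory=/path/to/phpci
user=www-data
autostart=true
autorestart=true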
I can't figure out why my nginx is crashing each time I try to use curl:
Code snippet to reproduce a crash:
$request = new \Buzz\Message\Request('GET', '/', 'https://google.com');
$response = new \Buzz\Message\Response();
$client = new \Buzz\Client\Curl();
$client->send($request, $response);
application log
2015/12/29 11:42:30 [error] 213#0: *416 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: dev-fr.local.xxxx.com, request: "GET /login/check-vkontakte?code=xxxxx HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "dev-ru.local.xxxx.com", referrer: "https://dev-ru.local.xxxx.com/"
/var/log/system.log
Dec 29 11:40:54 Alains-MacBook-Pro.local ReportCrash[75875]: Saved crash report for php-fpm[75864] version 0 to /Users/alain/Library/Logs/DiagnosticReports/php-fpm_2015-12-29-114054_Alains-MacBook-Pro.crash
crash report (beginning)
Process: php-fpm [75865]
Path: /usr/local/Cellar/php56/5.6.15/sbin/php-fpm
Identifier: php-fpm
Version: 0
Code Type: X86-64 (Native)
Parent Process: php-fpm [75858]
Responsible: php-fpm [75858]
User ID: 501
Date/Time: 2015-12-29 11:42:30.733 +0100
OS Version: Mac OS X 10.10.5 (14F1021)
Report Version: 11
Anonymous UUID: 1DC6CEB0-0479-4A5E-FFD2-E48BA3961196
Sleep/Wake UUID: A704AF01-8AE8-44D2-BBF3-DED65D834B0B
Time Awake Since Boot: 29000 seconds
Time Since Wake: 3000 seconds
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGSEGV)
Exception Codes: KERN_INVALID_ADDRESS at 0x0000000000000110
I searched for hours and kept giving up, but now I'm tired of it and really want to fix it.
Complete crash report: http://ninsuo.com/crash-nginx-xxxx.html
My phpinfo(): http://ninsuo.com/phpinfo-xxxx.html
The better practice is to reinstall PHP with:
brew install --with-fpm --with-homebrew-curl --with-homebrew-openssl --with-imap --with-homebrew-libxslt --without-snmp php56
and then reinstall curl with:
brew install --with-openssl curl
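After reinstalling, restart php-fpm (and Nginx) so they pick up the new binaries; if you manage them with Homebrew services, that would be something along the lines of:
brew services restart php56
brew services restart nginx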
I experienced the same problem earlier and fixed it by running php-fpm as root.
I also found links which confirm this solution: First link, Second link. Hope it will help you too.
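For reference, running php-fpm as root usually involves two things: setting the pool user in the FPM config and starting the daemon with the flag that explicitly allows root. A sketch using the Homebrew path from the crash report (and keep in mind this is a security trade-off, not a general recommendation):
; in the [www] pool section of php-fpm.conf
user = root
group = wheel
/usr/local/Cellar/php56/5.6.15/sbin/php-fpm -R    # -R (--allow-to-run-as-root) lets FPM start as root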