I am working on my project on two different servers. The code is cloned from Bitbucket, so it is exactly the same, but on one of them I get this error on every HTTP request:
Couldn't connect to host, Elasticsearch down?
500 Internal Server Error - HttpException
OS: Ubuntu 16.04
When I run service elasticsearch status I get this:
elasticsearch.service - LSB: Starts elasticsearch
Loaded: loaded (/etc/init.d/elasticsearch; bad; vendor preset: enabled)
Active: active (exited) since Sun 2017-01-15 11:05:25 CET; 1h 16min ago
Docs: man:systemd-sysv-generator(8)
Process: 1366 ExecStart=/etc/init.d/elasticsearch start (code=exited, status=0
Jan 15 11:05:25 ubuntu systemd[1]: Starting LSB: Starts elasticsearch...
Jan 15 11:05:25 ubuntu systemd[1]: Started LSB: Starts elasticsearch.
When I run fos:elastica:populate I get this error:
[Elastica\Exception\Connection\HttpException]
Couldn't connect to host, Elasticsearch down?
And running curl -XGET http://127.0.0.1:9200 I get
curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused
I have been searching for five days, and I think this issue has something to do with permissions.
Thanks a lot, @antonbormotov. I searched for how to start Elasticsearch and found this answer.
It seems that to get Elasticsearch to run on 16.04 you have to set
START_DAEMON to true in /etc/default/elasticsearch. It comes commented
out by default, and uncommenting it makes Elasticsearch start again
just fine.
Be sure to use systemctl restart instead of just start, because the
service is started right after installation, and apparently there's
some socket/pidfile/something that systemd keeps that must be released
before the service can be started again.
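The fix above can be sketched as a few shell commands (the paths assume the stock Debian/Ubuntu Elasticsearch package; verify them on your system):

```shell
# Uncomment START_DAEMON=true in the defaults file (path taken from the answer above)
sudo sed -i 's/^#START_DAEMON=true/START_DAEMON=true/' /etc/default/elasticsearch

# Use restart, not start, so systemd releases the state left over
# from the automatic post-install start
sudo systemctl restart elasticsearch

# Confirm the node now answers on the REST port
curl -XGET http://127.0.0.1:9200
```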
It's running now!
Our application runs in a Docker container on AWS:
Operating system: Ubuntu 14.04.2 LTS (Trusty Tahr)
Nginx version: nginx/1.4.6 (Ubuntu)
Memcached version: memcached 1.4.14
PHP version: PHP 5.5.9-1ubuntu4.11 (cli) (built: Jul 2 2015 15:23:08)
System Memory: 7.5 GB
We get blank pages and, less frequently, a 404 error. While checking the logs, I found that the PHP child process is killed, and it seems that memory is mostly used by the memcached and php-fpm processes, leaving very little free memory.
memcached is configured to use 2 GB of memory.
Here is the php-fpm www.conf:
pm = dynamic
pm.max_children = 30
pm.start_servers = 9
pm.min_spare_servers = 4
pm.max_spare_servers = 14
rlimit_files = 131072
rlimit_core = unlimited
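Not part of the original config, but when children keep dying on SIGSEGV a commonly suggested safeguard is to recycle workers periodically with pm.max_requests, so a leaking or corrupted child is replaced before it can destabilize the pool. A sketch (the value is an assumption, not a tuned number):

```ini
; Hypothetical addition to www.conf: respawn each child after N requests
; to limit the damage from memory leaks in extensions such as memcached.
pm.max_requests = 500
```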
Error logs
/var/log/nginx/php5-fpm.log
[29-Jul-2015 14:37:09] WARNING: [pool www] child 259 exited on signal 11 (SIGSEGV - core dumped) after 1339.412219 seconds from start
/var/log/nginx/error.log
2015/07/29 14:37:09 [error] 141#0: *2810 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: x.x.x.x, server: _, request: "GET /suggestions/business?q=Selectfrom HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "example.com", referrer: "http://example.com/"
/var/log/nginx/php5-fpm.log
[29-Jul-2015 14:37:09] NOTICE: [pool www] child 375 started
/var/log/nginx/php5-fpm.log:[29-Jul-2015 14:37:56] WARNING: [pool www] child 290 exited on signal 11 (SIGSEGV - core dumped) after 1078.606356 seconds from start
Coredump
Core was generated by `php-fpm: pool www`.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007f41ccaea13a in memcached_io_readline(memcached_server_st*, char*, unsigned long, unsigned long&) () from /usr/lib/x86_64-linux-gnu/libmemcached.so.10
dmesg
[Wed Jul 29 14:26:15 2015] php5-fpm[12193]: segfault at 7f41c9e8e2da ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:28:26 2015] php5-fpm[12211]: segfault at 7f41c966b2da ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:29:16 2015] php5-fpm[12371]: segfault at 7f41c9e972da ip 00007f41ccaea13a sp 00007ffcc5730b70 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:35:36 2015] php5-fpm[12469]: segfault at 7f41c96961e9 ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:35:43 2015] php5-fpm[12142]: segfault at 7f41c9e6c2bd ip 00007f41ccaea13a sp 00007ffcc5730b70 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:37:07 2015] php5-fpm[11917]: segfault at 7f41c9dd22bd ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
[Wed Jul 29 14:37:54 2015] php5-fpm[12083]: segfault at 7f41c9db72bd ip 00007f41ccaea13a sp 00007ffcc5730ce0 error 4 in libmemcached.so.10.0.0[7f41ccad2000+32000]
While googling for this same issue, and trying hard to find a solution that was not related to sessions (I had ruled that out) nor to bad PHP code (I have several websites running precisely the same version of WordPress, and none have issues... except for one), I came upon an answer saying that a possible solution involved removing some buggy extension (usually memcache/d, but it could be something else).
Since I had this same site working flawlessly on one Ubuntu server, when switching to a newer server I immediately suspected that the migration from PHP 5.5 to 7 caused the problem. It was just strange that no other website was affected. Then I remembered that another thing was different on this new server: I had also installed New Relic. This is both an extension and a small daemon that runs in the background and sends a lot of analytics data to New Relic for processing. Allegedly it's a PHP 5 extension, but, surprisingly, it loads well on PHP 7, too.
Now here comes the tricky bit. At some point, I had installed W3 Total Cache for the WordPress installation of that particular website. Later I saw that the performance of that server was so stellar that W3TC was unnecessary, so I stuck to a much simpler configuration and uninstalled W3TC. That's all very nice, but... I forgot that I had turned New Relic on in W3TC, too (allegedly, it adds some extra analytics data to be sent to New Relic). When I uninstalled W3TC, 'something' was probably left in the New Relic configuration on my server which was still attempting to send data through the W3TC interface (assuming W3TC has an interface... I really have no idea how it works at that level), and, because that specific bit of code was missing, the php-fpm handler for that website would fail... some of the time. Not all the time, because I'm assuming that, in most cases, nginx was serving static pages. Or maybe php-fpm, set to 'recycle' after 100 calls or so, would crash on stop. Whatever exactly was happening, it was definitely related to New Relic: as soon as I removed the New Relic extension from PHP, that website went back to working normally.
Because this is such a specific scenario, I'm just writing this as an answer, in the remote chance that someone in the future googles for the exact problem.
In my case it was related to Zend Debugger/Xdebug. The extension forwards TCP packets to the IDE (PhpStorm), which was not listening on that port (debugging was off). The solution is either to disable these extensions or to have the IDE listen on the debugging port.
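A minimal sketch of the "disable the extension side" option, assuming an Xdebug 2-era ini file (the directive names differ in Xdebug 3, where xdebug.start_with_request = no is the rough equivalent):

```ini
; Hypothetical xdebug.ini excerpt: stop Xdebug from connecting out to the IDE
xdebug.remote_enable = 0
; Or keep it enabled and make sure the IDE is actually listening on this port:
; xdebug.remote_port = 9000
```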
I had this problem after installing Xdebug, adding some properties to /etc/php/7.1/fpm/php.ini, and restarting nginx. This was on a Laravel Homestead box.
Simply restarting the php7.1-fpm service solved it for me.
It can happen if PHP is unable to write session information to a file. By default the location is /var/lib/php/session. You can change it with the session_save_path configuration directive.
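To check whether this is the problem, you can inspect the session directory and, if needed, point PHP somewhere writable. A sketch; the paths and the www-data user are assumptions that vary by distribution:

```shell
# Does the php-fpm user have write access to the session directory?
ls -ld /var/lib/php/session

# If not, either fix ownership...
sudo chown -R www-data:www-data /var/lib/php/session

# ...or point PHP at a directory it can write to, via php.ini:
#   session.save_path = "/tmp/php-sessions"
```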
phpMyAdmin having problems on nginx and php-fpm on RHEL 6
In my case it was Xdebug. After uninstalling it, it got back to normal.
In my case, it was caused by the New Relic PHP Agent. Therefore, for a specific function that caused a crash, I added this code to disable New Relic:
if (function_exists('newrelic_ignore_transaction')) {
    newrelic_ignore_transaction();
}
Refer to: https://discuss.newrelic.com/t/how-to-disable-a-specific-transaction-in-php-agent/42384/2
In our case it was caused by Guzzle + New Relic. The New Relic agent changelog mentions a Guzzle fix in version 7.3, but even using 8.0 didn't work, so something is still wrong. In our case this was happening only in two of our scripts that used Guzzle. We found two solutions:
Set newrelic.guzzle.enabled = false in newrelic.ini. You will lose data in the External Services tab this way, but you might not need it anyway.
Downgrade the New Relic agent to version 6.x, which somehow also works.
If you are reading this after something newer than version 8.0 has been released, you could also try updating the New Relic agent to the latest version; maybe they have fixed it.
In my case, I had to deactivate the output buffering call ob_start("buffer"); in my code ;)
A possible culprit is PHP 7.3 + Xdebug. Change Xdebug 2.7.0beta1 to Xdebug 2.7.0rc1 or the latest version of Xdebug.
For some reason, removing profile from the modes in my xdebug.ini fixed it for me.
i.e. change
xdebug.mode=debug,develop,profile
to
xdebug.mode=debug,develop
I had MySQL 5.6.16 working perfectly, when suddenly the MySQL connection became very slow to establish (20-21 seconds). I removed it completely, even the data tables, then downloaded and installed version 5.6.19 as a fresh installation, but this didn't fix the long connection time. Even with the MySQL command-line client it takes 20 seconds to get the "mysql>" prompt.
Any web-based PHP code that opens a MySQL connection also takes 20 seconds to return the page.
I also tried enabling skip_name_resolve, connect_timeout = 10, and wait_timeout = 50, but none of these did anything.
I have:
Windows 7 Home Premium SP1 x64
MySQL 5.6.19-enterprise-commercial-advanced
Apache 2.4.9 (Win64) OpenSSL 1.0.1f
PHP 5.5.10
MySQL Global Status: https://www.dropbox.com/s/gz44pvtomwbncog/MYSQL_GLOBAL_STATUS.txt
Thanks anyway; I figured out that this issue was caused by Windows itself. I used a repair tool, "Tweaking.com - Windows Repair", and it fixed the issue...
I have an application with some memcached operations. I have installed the PHP memcached extension version 2.0.1. From configure to make install, everything went smoothly with no errors.
Now, in my application, I instantiate a Memcached instance, and when I run methods like addServer, get, or set, everything runs fine. But when I call getStats or getVersion, I get this error:
/usr/local/bin/php: symbol lookup error: /usr/local/lib/php/extensions/no-debug-non-zts-20060613/memcached.so: undefined symbol: zend_parse_parameters_none
Can anyone help me with this? I've been stuck on this since yesterday.
Another strange observation is in NetBeans (I use version 7.0.1). When I create a Memcache object, I get autocompletion when I type $memcObj->, but the same is not true when the object is a Memcached instance: no autocompletion.
I believe zend_parse_parameters_none was introduced for the Zend API in 2008.
sh ~> git blame Zend/zend_API.h
...
11e5d2f2 (Zeev Suraski 2001-07-30 02:07:52 +0000 247) _zend_get_parameters_array_ex(param_count, argument_array TSRMLS_CC)
cc2b17d5 (Felipe Pena 2008-03-10 22:02:41 +0000 248) #define zend_parse_parameters_none() \
cc2b17d5 (Felipe Pena 2008-03-10 22:02:41 +0000 249) zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "")
58f88057 (Andrei Zmievski 2001-07-09 18:51:29 +0000 250)
I imagine your version of PHP is not compatible with PECL's memcached extension, as your extensions directory carries a Zend API timestamp from June 2006 (no-debug-non-zts-20060613), which predates the introduction of zend_parse_parameters_none.
Ideally you would want to look into upgrading PHP. Otherwise, try PECL's memcache extension, which does offer a stable version matching your PHP build.
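One way to check the mismatch described above is to compare the Zend module API dates PHP was built with against the date embedded in the extension directory name. A sketch; exact paths and output vary by build:

```shell
# The API dates printed here must match the ones the extension was built
# against, otherwise symbols such as zend_parse_parameters_none may be missing.
php -i | grep -E 'PHP API|PHP Extension|Zend Extension'

# The directory name (e.g. no-debug-non-zts-20060613) encodes the API date.
php -r 'echo ini_get("extension_dir"), PHP_EOL;'
```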
I'm running a Rackspace cloud server with CentOS + Apache 2 + PHP 5.4 + the pcntl module, a basic Kohana PHP framework, and a MongoDB task module that forks child processes. I get the following error if I try to run more than one child task process at the same time:
Unable to connect to MongoDB server at Interrupted system call
According to the author of the MongoDB task module, the issue is not related to the code but perhaps to the MongoDB driver or the server.
Does anyone know what this error means and/or what may be causing it?
Full error output:
0 /var/www/.../modules/mangodb/classes/mangodb.php(370):
MangoDB->connect()
1 /var/www/.../modules/mangodb/classes/mangodb.php(173):
MangoDB->_call('command', Array, Array)
2 /var/www/.../modules/mangotask/classes/model/queue/task.php(33):
MangoDB->command(Array)
3 /var/www/.../modules/mangoQueue/classes/controller/daemon.php(232):
Model_Queue_Task->get_next()
4 /var/www/.../modules/mangoQueue/classes/controller/daemon.php(111):
Controller_Daemon->daemon()
5 [internal function]: Controller_Daemon->action_index()
6 /var/www/.../system/classes/kohana/request/client/internal.php(118):
reflectionMethod->invoke(Object(Controller_Daemon))
7 /var/www/.../system/classes/kohana/request/client.php(64):
Kohana_Request_Client_Internal->execute_request(Object(Request))
8 /var/www/.../system/classes/kohana/request.php(1138):
Kohana_Request_Client->execute(Object(Request))
9 /var/www/.../index.php(109): Kohana_Request->execute()
Driver version 1.2.12 definitely has issues with forking, but this is something that should be resolved in the forthcoming 1.3.0 release. In particular, PHP-426 is one of the later issues to address this problem, as it relocated connection selection from MongoCursor to MongoCursor::doQuery(), allowing the driver to operate correctly after a fork. I would keep an eye out for the next 1.3.0 pre-release (either beta3 or rc1), and certainly when the final 1.3.0 version is released via http://pecl.php.net/package/mongo.