Broadcasting, laravel-echo-server not receiving events - php

I'm using Laravel in a project and I want to use broadcasting with laravel-echo-server and Redis. I have set them up in Docker containers. Output below:
Redis
redis_1 | 1:C 27 Sep 06:24:35.521 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1 | 1:C 27 Sep 06:24:35.577 # Redis version=4.0.2, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1 | 1:C 27 Sep 06:24:35.577 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | 1:M 27 Sep 06:24:35.635 * Running mode=standalone, port=6379.
redis_1 | 1:M 27 Sep 06:24:35.635 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1 | 1:M 27 Sep 06:24:35.635 # Server initialized
redis_1 | 1:M 27 Sep 06:24:35.635 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1 | 1:M 27 Sep 06:24:35.636 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1 | 1:M 27 Sep 06:24:35.715 * DB loaded from disk: 0.079 seconds
redis_1 | 1:M 27 Sep 06:24:35.715 * Ready to accept connections
A few warnings but nothing breaking.
laravel-echo-server
laravel-echo-server_1 | L A R A V E L E C H O S E R V E R
laravel-echo-server_1 |
laravel-echo-server_1 | version 1.3.1
laravel-echo-server_1 |
laravel-echo-server_1 | ⚠ Starting server in DEV mode...
laravel-echo-server_1 |
laravel-echo-server_1 | ✔ Running at localhost on port 6001
laravel-echo-server_1 | ✔ Channels are ready.
laravel-echo-server_1 | ✔ Listening for http events...
laravel-echo-server_1 | ✔ Listening for redis events...
laravel-echo-server_1 |
laravel-echo-server_1 | Server ready!
laravel-echo-server_1 |
laravel-echo-server_1 | [6:29:38 AM] - dG0sLqG9Aa9oVVePAAAA joined channel: office-dashboard
The client seems to join the channel without any problems.
However, if I fire an event, laravel-echo-server doesn't receive it.
I did a bit of research and found something about a queue worker, so I decided to run one (php artisan queue:work) and see if that did anything. According to the docs it should run only the first task in the queue and then exit (as opposed to queue:listen). And sure enough, it began processing the event I had fired earlier. But it didn't stop, and kept going until I killed it:
[2017-09-27 08:33:51] Processing: App\Events\CompanyUpdated
[2017-09-27 08:33:51] Processing: App\Events\CompanyUpdated
[2017-09-27 08:33:51] Processing: App\Events\CompanyUpdated
[2017-09-27 08:33:51] Processing: App\Events\CompanyUpdated
etc..
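For reference, and hedged against the exact Laravel version in use: since Laravel 5.3 (which this project appears to be on, given the behaviour above), queue:work keeps running by default, and the single-job behaviour needs a flag:
php artisan queue:work --once   # process a single job, then exit
php artisan queue:work          # run continuously until stopped
php artisan queue:listen        # run continuously, rebooting the framework for each job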
The following output showed in the redis container:
redis_1 | 1:M 27 Sep 06:39:01.562 * 10000 changes in 60 seconds. Saving...
redis_1 | 1:M 27 Sep 06:39:01.562 * Background saving started by pid 19
redis_1 | 19:C 27 Sep 06:39:01.662 * DB saved on disk
redis_1 | 19:C 27 Sep 06:39:01.663 * RDB: 2 MB of memory used by copy-on-write
redis_1 | 1:M 27 Sep 06:39:01.762 * Background saving terminated with success
Either I made so many API calls that the queue is massive, or something is going wrong. Additionally, laravel-echo-server didn't show any output after the jobs were 'processed'.
I have created a hook in my Model which fires the events:
public function __construct(array $attributes = []) {
    parent::__construct($attributes);

    parent::created(function ($model) {
        //event(new CompanyCreated($model));
    });

    parent::updated(function ($model) {
        event(new CompanyUpdated($model));
    });

    parent::deleted(function ($model) {
        event(new CompanyDeleted($model));
    });
}
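A hedged side note on this constructor: created()/updated()/deleted() are static registrars, so every instantiated model adds another listener, which can multiply the dispatched events (possibly the repeated Processing lines above). A minimal sketch of the conventional alternative, assuming standard Eloquent:
// Register the model event listeners once, when the model class boots,
// instead of once per instantiated model.
protected static function boot() {
    parent::boot();

    static::updated(function ($model) {
        event(new CompanyUpdated($model));
    });

    static::deleted(function ($model) {
        event(new CompanyDeleted($model));
    });
}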
Then this is the event it kicks off:
class CompanyUpdated implements ShouldBroadcast {
    use Dispatchable, InteractsWithSockets, SerializesModels;

    public $company;

    /**
     * Create a new event instance.
     *
     * @return void
     */
    public function __construct(Company $company) {
        $this->company = $company;
    }

    /**
     * Get the channels the event should broadcast on.
     *
     * @return Channel|array
     */
    public function broadcastOn() {
        return new Channel('office-dashboard');
    }
}
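As a hedged aside (not part of the original post): an event can also declare an explicit broadcast name via broadcastAs(); clients then listen with a leading dot, as the answer further down does with '.conversations.new_message':
// Hypothetical addition to CompanyUpdated: broadcast under a custom name.
public function broadcastAs() {
    return 'company.updated';
}
The front-end would then use .listen('.company.updated', ...) instead of the class name.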
And finally, this is the code on the front-end that's listening for the event:
window.Echo.channel('office-dashboard')
    .listen('CompanyUpdated', (e) => {
        console.log(e.company.name);
    });
.env file:
BROADCAST_DRIVER=redis
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=redis
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
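For completeness, a minimal laravel-echo-server.json sketch (assumed — the actual config isn't shown in the post). In a docker-compose setup, databaseConfig.redis.host must be the Redis service name (redis, matching REDIS_HOST above), not localhost:
{
    "authHost": "http://localhost",
    "authEndpoint": "/broadcasting/auth",
    "database": "redis",
    "databaseConfig": {
        "redis": {
            "host": "redis",
            "port": "6379"
        }
    },
    "devMode": true,
    "port": "6001",
    "protocol": "http"
}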
Why isn't the event passed to laravel-echo-server? Anything I'm missing or forgetting?

Started working out of the blue.

For me, @jesuisgenial's comment pointed me in the right direction.
You can easily tell whether the client is subscribing by checking the WS (WebSockets) filter under the Network tab in Chrome's developer tools.
Without the 1 second delay (no subscribe event):
Echo.private(`users.${context.getters.getUserId}`)
    .listen('.conversations.new_message', function (data) {
        console.log(data.message);
    })
With the 1 second delay (subscribe event present):
setTimeout(() => {
    Echo.private(`users.${context.getters.getUserId}`)
        .listen('.conversations.new_message', function (data) {
            console.log(data.message);
        })
}, 1000)

Related

PHP Warning: require_once

I'm trying to run a cloned project and set up my PHP environment, but every time I try to execute the php artisan serve command it gives me this error... The problem is that I'm new to Laravel...
almando@almando-ThinkPad-Edge-E531:~/Documents/laravelPro/active-ecommerce$ php artisan serve
Starting Laravel development server: http://127.0.0.1:8000
[Mon Dec 19 00:16:03 2022] PHP Warning: require_once(/home/almando/Documents/laravelPro/active-ecommerce/public/index.php): failed to open stream: No such file or directory in /home/almando/Documents/laravelPro/active-ecommerce/server.php on line 21
[Mon Dec 19 00:16:03 2022] PHP Fatal error: require_once(): Failed opening required '/home/almando/Documents/laravelPro/active-ecommerce/public/index.php' (include_path='.:/usr/share/php') in /home/almando/Documents/laravelPro/active-ecommerce/server.php on line 21
[Mon Dec 19 00:16:03 2022] 127.0.0.1:35764 [200]: /favicon.ico
[Mon Dec 19 00:16:10 2022] PHP Warning: require_once(/home/almando/Documents/laravelPro/active-ecommerce/public/index.php): failed to open stream: No such file or directory in /home/almando/Documents/laravelPro/active-ecommerce/server.php on line 21
[Mon Dec 19 00:16:10 2022] PHP Fatal error: require_once(): Failed opening required '/home/almando/Documents/laravelPro/active-ecommerce/public/index.php' (include_path='.:/usr/share/php') in /home/almando/Documents/laravelPro/active-ecommerce/server.php on line 21
^C
I don't know where to make the changes. Is there someone who can help me out on this? I would appreciate it.
Below is server.php:
<?php

/**
 * Laravel - A PHP Framework For Web Artisans
 *
 * @package Laravel
 * @author  Taylor Otwell <taylor@laravel.com>
 */

$uri = urldecode(
    parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH)
);

// This file allows us to emulate Apache's "mod_rewrite" functionality from the
// built-in PHP web server. This provides a convenient way to test a Laravel
// application without having installed a "real" web server software here.
if ($uri !== '/' && file_exists(__DIR__.'/public'.$uri)) {
    return false;
}

require_once __DIR__.'/public/index.php';
Below is index.php:
<?php

ini_set('serialize_precision', -1);

/**
 * Laravel - A PHP Framework For Web Artisans
 *
 * @package Laravel
 * @author  Taylor Otwell <taylor@laravel.com>
 */

define('LARAVEL_START', microtime(true));

/*
|--------------------------------------------------------------------------
| Register The Auto Loader
|--------------------------------------------------------------------------
|
| Composer provides a convenient, automatically generated class loader for
| our application. We just need to utilize it! We'll simply require it
| into the script here so that we don't have to worry about manual
| loading any of our classes later on. It feels great to relax.
|
*/

require __DIR__.'/vendor/autoload.php';

/*
|--------------------------------------------------------------------------
| Turn On The Lights
|--------------------------------------------------------------------------
|
| We need to illuminate PHP development, so let us turn on the lights.
| This bootstraps the framework and gets it ready for use, then it
| will load up this application so that we can run it and send
| the responses back to the browser and delight our users.
|
*/

$app = require_once __DIR__.'/bootstrap/app.php';

/*
|--------------------------------------------------------------------------
| Run The Application
|--------------------------------------------------------------------------
|
| Once we have the application, we can handle the incoming request
| through the kernel, and send the associated response back to
| the client's browser allowing them to enjoy the creative
| and wonderful application we have prepared for them.
|
*/

$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);

$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);

$response->send();

$kernel->terminate($request, $response);
The question is: how would you solve this if it were you?
Thank you
Assuming the folder doesn't have the needed permissions, try this:
chown -R {username}:www-data /home/almando/Documents/laravelPro/active-ecommerce
chmod -R 750 /home/almando/Documents/laravelPro/active-ecommerce
chmod -R 770 /home/almando/Documents/laravelPro/active-ecommerce/storage
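If permissions turn out not to be the cause, it is also worth confirming that the file the error names actually exists, since the message is literally "No such file or directory" (a hedged extra check, not from the original answer):
ls -l /home/almando/Documents/laravelPro/active-ecommerce/public/index.php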

Facter command slows down when phpunit is started over Jenkins

I'm having a problem where shell_exec in combination with the facter command is very slow when started via Jenkins. Commands other than facter (like whoami) are fast.
The code runs on an Ubuntu VM that was recently upgraded from 14.x to 18.04.1 LTS. On Ubuntu 14.x I didn't encounter this problem. Facter is currently on version 3.11.3.
I nailed the speed problem down to the shell_exec in combination with facter by using the following code:
<?php

require_once 'AbstractTestCase.php';

class FacterTest extends AbstractTestCase
{
    public function testSpeedDebug()
    {
        Core_Util_Debug::performanceStart('whoami');
        shell_exec('whoami');
        Core_Util_Debug::performanceEnd('whoami');

        Core_Util_Debug::performanceStart('facter');
        shell_exec('facter hostname');
        Core_Util_Debug::performanceEnd('facter');

        die(PHP_EOL);
    }
}
When started manually from the CLI the output is:
>> name: whoami | time: 0.005261 s | memory: 3.3359 kB RAM
>> name: facter | time: 0.160292 s | memory: 0 B RAM
When started over Jenkins the output is:
>> name: whoami | time: 0.005495 s | memory: 3.3359 kB RAM
>> name: facter | time: 8.652776 s | memory: 0 B RAM
Does anyone have an idea why it is so slow when started via Jenkins (8.65 s vs 0.16 s, roughly 50 times slower)?
Thank you in advance.
Extra info:
Gave it a quick try on Bamboo, and the behavior is the same as on Jenkins.
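One way to narrow this down further (a sketch, assuming Facter 3's documented --no-external-facts and --no-custom-facts flags): if either stripped-down variant is fast under Jenkins, the time is going into fact resolution rather than process spawning.
<?php
// Time facter with and without custom/external fact resolution.
foreach ([
    'facter hostname',
    'facter --no-external-facts hostname',
    'facter --no-custom-facts hostname',
] as $cmd) {
    $start = microtime(true);
    shell_exec($cmd);
    printf("%-40s %.3f s\n", $cmd, microtime(true) - $start);
}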

Laravel | Deployer and automatic deploy with gitlab ci

I want to set up automatic deployment to my production server.
This is my log from GitLab CI:
Running with gitlab-runner 11.9.0-rc2 (227934c0)
on docker-auto-scale fa6cab46
Using Docker executor with image lorisleiva/laravel-docker:latest ...
Pulling docker image lorisleiva/laravel-docker:latest ...
Using docker image sha256:4bd5ecacba7b0f46950944f376647090071a70a7b1ffa0eacb492719bd476c6b for lorisleiva/laravel-docker:latest ...
Running on runner-fa6cab46-project-11286864-concurrent-0 via runner-fa6cab46-srm-1552945620-f480ce3e...
Initialized empty Git repository in /builds/Woblex/web/.git/
Fetching changes...
Created fresh repository.
From https://gitlab.com/Woblex/web
* [new branch] master -> origin/master
Checking out ce51a64e as master...
Skipping Git submodules setup
Downloading artifacts for composer (179849020)...
Downloading artifacts from coordinator... ok id=179849020 responseStatus=200 OK token=zy8-CGce
Downloading artifacts for npm (179849021)...
Downloading artifacts from coordinator... ok id=179849021 responseStatus=200 OK token=NvUWyzkg
$ eval $(ssh-agent -s) # collapsed multi-line command
Agent pid 11
Identity added: (stdin) (git@gitlab.com)
$ find . -type f -not -path "./vendor/*" -exec chmod 664 {} \; # collapsed multi-line command
$ whoami
root
$ php artisan deploy woblex.cz -s upload
✈︎ Deploying HEAD on woblex.cz with upload strategy
➤ Executing task deploy:prepare
✔ Ok
➤ Executing task deploy:lock
✔ Ok
➤ Executing task deploy:release
✔ Ok
➤ Executing task deploy:update_code
➤ Executing task deploy:failed
✔ Ok
➤ Executing task deploy:unlock
✔ Ok
In Client.php line 99:
The command "cd /var/www/dev.woblex.cz && (/usr/bin/git clone --recursive git@gitlab.com:Woblex/web.git /var/www/dev.woblex.cz/releases/1 2>&1)" failed.
Exit Code: 128 (Invalid exit argument)
Host Name: woblex.cz
================
Warning: Identity file /home/gitlab/.ssh/id_rsa not accessible: No such file or directory.
deploy [-p|--parallel] [-l|--limit LIMIT] [--no-hooks] [--log LOG] [--roles ROLES] [--hosts HOSTS] [-o|--option OPTION] [--] [<stage>]
In Process.php line 239:
The command "vendor/bin/dep --file=vendor/lorisleiva/laravel-deployer/.build/deploy.php deploy 'woblex.cz'" failed.
Exit Code: 128 (Invalid exit argument)
Working directory: /builds/Woblex/web
Output:
================
✈︎ Deploying HEAD on woblex.cz with upload strategy
➤ Executing task deploy:prepare
✔ Ok
➤ Executing task deploy:lock
✔ Ok
➤ Executing task deploy:release
✔ Ok
➤ Executing task deploy:update_code
➤ Executing task deploy:failed
✔ Ok
➤ Executing task deploy:unlock
✔ Ok
Error Output:
================
In Client.php line 99:
The command "cd /var/www/dev.woblex.cz && (/usr/bin/git clone --recursive git@gitlab.com:Woblex/web.git /var/www/dev.woblex.cz/releases/1 2>&1)" failed.
Exit Code: 128 (Invalid exit argument)
Host Name: woblex.cz
================
Warning: Identity file /home/gitlab/.ssh/id_rsa not accessible: No such file or directory.
deploy [-p|--parallel] [-l|--limit LIMIT] [--no-hooks] [--log LOG] [--roles ROLES] [--hosts HOSTS] [-o|--option OPTION] [--] [<stage>]
ERROR: Job failed: exit code 1
Here is my deploy.php configuration:
<?php
return [
/*
|--------------------------------------------------------------------------
| Default deployment strategy
|--------------------------------------------------------------------------
|
| This option defines which deployment strategy to use by default on all
| of your hosts. Laravel Deployer provides some strategies out-of-box
| for you to choose from explained in detail in the documentation.
|
| Supported: 'basic', 'firstdeploy', 'local', 'pull'.
|
*/
'default' => 'basic',
/*
|--------------------------------------------------------------------------
| Custom deployment strategies
|--------------------------------------------------------------------------
|
| Here, you can easily set up new custom strategies as a list of tasks.
| Any key of this array are supported in the `default` option above.
| Any key matching Laravel Deployer's strategies overrides them.
|
*/
'strategies' => [
//
],
/*
|--------------------------------------------------------------------------
| Hooks
|--------------------------------------------------------------------------
|
| Hooks let you customize your deployments conveniently by pushing tasks
| into strategic places of your deployment flow. Each of the official
| strategies invoke hooks in different ways to implement their logic.
|
*/
'hooks' => [
    // Right before we start deploying.
    'start' => [
        //
    ],

    // Code and composer vendors are ready but nothing is built.
    'build' => [
        //
    ],

    // Deployment is done but not live yet (before symlink)
    'ready' => [
        'artisan:storage:link',
        'artisan:view:clear',
        'artisan:cache:clear',
        'artisan:config:cache',
        'artisan:migrate',
        'artisan:horizon:terminate',
    ],

    // Deployment is done and live
    'done' => [
        //
    ],

    // Deployment succeeded.
    'success' => [
        //
    ],

    // Deployment failed.
    'fail' => [
        //
    ],
],
/*
|--------------------------------------------------------------------------
| Deployment options
|--------------------------------------------------------------------------
|
| Options follow a simple key/value structure and are used within tasks
| to make them more configurable and reusable. You can use options to
| configure existing tasks or to use within your own custom tasks.
|
*/
'options' => [
    'application' => env('APP_NAME', 'Laravel'),
    'repository' => 'git@gitlab.com:Woblex/web.git',
    'git_tty' => false,
],
/*
|--------------------------------------------------------------------------
| Hosts
|--------------------------------------------------------------------------
|
| Here, you can define any domain or subdomain you want to deploy to.
| You can provide them with roles and stages to filter them during
| deployment. Read more about how to configure them in the docs.
|
*/
'hosts' => [
    'woblex.cz' => [
        'deploy_path' => '/var/www/dev.woblex.cz',
        'user' => 'gitlab',
        'identityFile' => '/home/gitlab/.ssh/id_rsa',
    ],
],
/*
|--------------------------------------------------------------------------
| Localhost
|--------------------------------------------------------------------------
|
| This localhost option give you the ability to deploy directly on your
| local machine, without needing any SSH connection. You can use the
| same configurations used by hosts to configure your localhost.
|
*/
'localhost' => [
//
],
/*
|--------------------------------------------------------------------------
| Include additional Deployer recipes
|--------------------------------------------------------------------------
|
| Here, you can add any third party recipes to provide additional tasks,
| options and strategies. Therefore, it also allows you to create and
| include your own recipes to define more complex deployment flows.
|
*/
'include' => [
//
],
/*
|--------------------------------------------------------------------------
| Use a custom Deployer file
|--------------------------------------------------------------------------
|
| If you know what you are doing and want to take complete control over
| Deployer's file, you can provide its path here. Note that, without
| this configuration file, the root's deployer file will be used.
|
*/
'custom_deployer_file' => false,
];
I think there is a problem with the user Laravel Deployer connects to my Ubuntu server as: I set the username in the config to 'gitlab', but when Deployer connects to the server with this user it apparently cannot access the file /home/gitlab/.ssh/id_rsa. Does anybody have anything that could help me? Thanks a lot!
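One hedged idea, since the CI job loads the key into an ssh-agent: identityFile is resolved on the machine running Deployer (the CI container), where /home/gitlab/.ssh/id_rsa does not exist, which would explain the warning. A sketch, assuming Deployer's forwardAgent host option:
'hosts' => [
    'woblex.cz' => [
        'deploy_path' => '/var/www/dev.woblex.cz',
        'user' => 'gitlab',
        // Use the key loaded into ssh-agent by init_ssh instead of a
        // file path that only exists on the remote server.
        'forwardAgent' => true,
    ],
],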
Edit: here is my .gitlab-ci.yml:
image: lorisleiva/laravel-docker:latest

.init_ssh: &init_ssh |
  eval $(ssh-agent -s)
  echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add - > /dev/null
  mkdir -p ~/.ssh
  chmod 700 ~/.ssh
  [[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  # Replace the last line with the following lines if you'd rather
  # leave StrictHostKeyChecking enabled (replace yourdomain.com):
  #
  # ssh-keyscan yourdomain.com >> ~/.ssh/known_hosts
  # chmod 644 ~/.ssh/known_hosts

.change_file_permissions: &change_file_permissions |
  find . -type f -not -path "./vendor/*" -exec chmod 664 {} \;
  find . -type d -not -path "./vendor/*" -exec chmod 775 {} \;

composer:
  stage: build
  cache:
    key: ${CI_COMMIT_REF_SLUG}-composer
    paths:
      - vendor/
  script:
    - composer install --prefer-dist --no-ansi --no-interaction --no-progress --no-scripts
    - cp .env.example .env
    - php artisan key:generate
  artifacts:
    expire_in: 1 month
    paths:
      - vendor/
      - .env

npm:
  stage: build
  cache:
    key: ${CI_COMMIT_REF_SLUG}-npm
    paths:
      - node_modules/
  script:
    - npm install
    - npm run production
  artifacts:
    expire_in: 1 month
    paths:
      - node_modules/
      - public/css/
      - public/js/

#codestyle:
#  stage: test
#  dependencies: []
#  script:
#    - phpcs --standard=PSR2 --extensions=php --ignore=app/Support/helpers.php app

phpunit:
  stage: test
  dependencies:
    - composer
  script:
    - phpunit --coverage-text --colors=never

staging:
  stage: deploy
  script:
    - *init_ssh
    - *change_file_permissions
    - whoami
    - php artisan deploy woblex.cz -s upload
  environment:
    name: staging
    url: http://www.dev.woblex.cz
  only:
    - master

production:
  stage: deploy
  script:
    - *init_ssh
    - *change_file_permissions
    - php artisan deploy new.woblex.cz -s upload
  environment:
    name: production
    url: https://www.new.woblex.cz
  when: manual
  only:
    - master

docker-compose create and deploy image to docker.com

I am trying to wrap my head around Docker. For me there are still grey areas, which I have tried to research on the internet.
I'm trying to set up Docker for an existing Laravel application.
I use Laradock (laradock/laradock) from GitHub (it's like Homestead, but for Docker).
I use a command like this:
docker-compose up -d nginx mysql
If I use the docker ps command, I can see the running containers for MySQL and Nginx, and maybe some others.
How can I take an image of the whole docker-compose setup?
I have tried this:
docker commit [container-id] username/reponame
An image does get created, but it's an incomplete image, since I am running a whole application with MySQL, Nginx, etc.
How do I take an image of the whole thing and deploy it to Docker?
All I want is to be able to deploy an existing project that uses multiple containers locally to containers on docker.com.
I'm using Docker on Windows 10.
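For what it's worth, a hedged aside (not from the original post): docker commit only snapshots a single container, so a Compose-based app is normally shipped as one image per service rather than one image for everything, e.g.:
docker-compose build    # build an image for every service that has a build: section
docker-compose push     # push the built images (assumes a Compose version with push support and an image: name per service)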
Update
Here is the single node I have; currently I am on the free plan with docker.com and microsoft.com.
(Screenshots omitted: the node on Microsoft Azure, the Laravel container, and the Docker repository used by the container.)
Finally, here is the image link, as my repository image is public:
https://hub.docker.com/r/pakistanihaider/pakistanihaider.me/
Now, for some reason, the image I created and pushed to Docker is not working and the site is not going live. Even on localhost, when I create the image and try to run it, it doesn't work. But when I run docker-compose up -d, everything works perfectly.
These are the logs I have online for my service:
[pakistanihaider-1]2017-01-02T17:56:56.794183200Z /usr/lib/python2.7/dist-packages/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
[pakistanihaider-1]2017-01-02T17:56:56.794218000Z 'Supervisord is running as root and it is searching '
[pakistanihaider-1]2017-01-02T17:56:56.923437200Z 2017-01-02 17:56:56,923 CRIT Supervisor running as root (no user in config file)
[pakistanihaider-1]2017-01-02T17:56:56.923533400Z 2017-01-02 17:56:56,923 WARN Included extra file "/etc/supervisor/conf.d/supervisord.conf" during parsing
[pakistanihaider-1]2017-01-02T17:56:56.958072800Z 2017-01-02 17:56:56,957 INFO RPC interface 'supervisor' initialized
[pakistanihaider-1]2017-01-02T17:56:56.958969800Z 2017-01-02 17:56:56,958 CRIT Server 'unix_http_server' running without any HTTP authentication checking
[pakistanihaider-1]2017-01-02T17:56:56.960595100Z 2017-01-02 17:56:56,960 INFO supervisord started with pid 1
[pakistanihaider-1]2017-01-02T17:56:57.968087400Z 2017-01-02 17:56:57,967 INFO spawned: 'nginx' with pid 7
[pakistanihaider-1]2017-01-02T17:56:57.970004700Z 2017-01-02 17:56:57,969 INFO spawned: 'hhvm-fastcgi' with pid 8
[pakistanihaider-1]2017-01-02T17:56:57.971064400Z 2017-01-02 17:56:57,970 INFO spawned: 'php5-fpm' with pid 9
[pakistanihaider-1]2017-01-02T17:56:59.547596700Z 2017-01-02 17:56:59,542 INFO success: nginx entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[pakistanihaider-1]2017-01-02T17:56:59.547617600Z 2017-01-02 17:56:59,542 INFO success: hhvm-fastcgi entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[pakistanihaider-1]2017-01-02T17:56:59.547623400Z 2017-01-02 17:56:59,543 INFO success: php5-fpm entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
What is going wrong here?

PHP dies when Doctrine MongoDB ODM performs successful file query

I am working on a Symfony2 project using Doctrine and MongoDB. To say the least, everything has been working great, until today when I uncovered a problem.
Queries that return one or more records/documents are apparently causing the PHP process to die. To be more specific, this only happens with queries where "file" results are returned. I do not get a PHP error, and no errors are logged to the Apache error log.
When I hit the URL that causes this query to run, I get net::ERR_EMPTY_RESPONSE in Chrome. I can output content with echo 'test';exit(); just before the query, and I see the content in my browser. If I put the same echo 'test';exit(); line right after the query, I get the empty response error.
I have a development environment set up on my computer, which includes the LAMP stack. However, I have it configured to connect to my remote MongoDB instance, and I have no problems when querying for files using my local setup. The various service versions differ slightly between my computer and the server. Based on this observation, it seems this is not a MongoDB service issue, but maybe a PHP extension issue?
I should add that I am able to successfully store files using the services on my server, but I can only query/retrieve the data with my local setup.
Is any log content generated when PHP dies like this?
I am running the following service versions:
OS: Ubuntu 12.04 LTS
Apache: 2.2.22
PHP: 5.3.10-1ubuntu3.1
Mongo PHP extension: 1.2.10
MongoDB-10gen: 2.0.5
Any help would be greatly appreciated. I have tried everything I know and have yet to find any clue as to what is actually causing this to happen.
--
My model looks like this:
<?php

namespace Project\Bundle\Document;

use Doctrine\ODM\MongoDB\Mapping\Annotations as MongoDB;
use Symfony\Component\Validator\Constraints as Assert;

/**
 * @MongoDB\Document
 */
class File {
    /**
     * @MongoDB\Id(strategy="auto")
     */
    protected $id;

    /**
     * @MongoDB\ObjectId
     * @MongoDB\Index
     * @Assert\NotBlank
     */
    protected $userId;

    /**
     * @MongoDB\ObjectId
     * @MongoDB\Index
     */
    protected $commonId;

    /**
     * @MongoDB\File
     */
    public $file;

    /**
     * @MongoDB\String
     */
    public $mimeType;

    /**
     * @MongoDB\Hash
     */
    public $meta;

    ... getters / setters ...
}
?>
I turned on verbose logging for the MongoDB server, and the query appears to run fine:
Wed May 9 20:04:29 [conn1] query dbdev.File.files query: { $query: { commonId: ObjectId('4fab01396bd985c215000000'), meta.size: "large" }, $orderby: {} } ntoreturn:1 nreturned:1 reslen:258 0ms
Wed May 9 20:04:29 [conn1] end connection 127.0.0.1:42087
Wed May 9 20:04:30 [DataFileSync] flushing mmap took 0ms for 5 files
Wed May 9 20:04:30 [DataFileSync] flushing diag log
Wed May 9 20:04:30 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 0ms
Wed May 9 20:04:30 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 0ms
Wed May 9 20:04:30 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 0ms
Wed May 9 20:04:30 [clientcursormon] mem (MB) res:46 virt:997 mapped:160
UPDATE
I used strace to find the following segmentation fault in Apache:
en("/opt/dev/app/cache/dev/doctrine/odm/mongodb/Hydrators/ProjectBundleDocumentFileHydrator.php", O_RDONLY) = 28
fstat(28, {st_mode=S_IFREG|0777, st_size=2462, ...}) = 0
fstat(28, {st_mode=S_IFREG|0777, st_size=2462, ...}) = 0
fstat(28, {st_mode=S_IFREG|0777, st_size=2462, ...}) = 0
fstat(28, {st_mode=S_IFREG|0777, st_size=2462, ...}) = 0
mmap(NULL, 2462, PROT_READ, MAP_SHARED, 28, 0) = 0x7fa3ae356000
munmap(0x7fa3ae356000, 2462) = 0
close(28) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
chdir("/etc/apache2") = 0
rt_sigaction(SIGSEGV, {SIG_DFL, [], SA_RESTORER|SA_INTERRUPT, 0x7fa3b3ce4cb0}, {SIG_DFL, [], SA_RESTORER|SA_RESETHAND, 0x7fa3b3ce4cb0}, 8) = 0
kill(5020, SIGSEGV) = 0
rt_sigreturn(0x139c) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
Process 5020 detached
This sounds like a bug which I fixed on May 3rd. I would suggest you try the latest version from GitHub (the v1.2 branch!). It would also help if you included the "mongodb" section of your phpinfo() output. If you still have an issue, please file a bug report with a small reproducible script at http://jira.mongodb.org/browse/PHP.
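For reference, a quick way to surface that driver detail from the CLI (a sketch; it assumes the legacy mongo extension listed in the question, not the newer mongodb one):
<?php
// Print the loaded legacy "mongo" driver version (e.g. 1.2.10),
// or a fallback message if the extension is not loaded.
echo phpversion('mongo') ?: 'mongo extension not loaded', PHP_EOL;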
