WebSockets problem while deploying Symfony application - php

I am using Ratchet WebSockets in my Symfony app. The WebSocket notification handler uses the Symfony Process component: when a particular message arrives on the socket, it starts certain Symfony commands via the Process component.
The socket server itself is managed by Supervisor. After every code deployment I run the supervisor restart command to apply the new code to the socket server.
And here comes the problem. If I deploy code changes while any of the Symfony commands is running, the socket disconnects, and the only way to bring it back up is to kill the supervisor process and start supervisor again. Of course, this way all the running processes die.
My goal is to let the running Symfony processes finish even after Supervisor restarts; the next processes would then run on the newly deployed code. I wonder what the standard approach to continuous delivery with WebSockets is.
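A common pattern (a sketch only; the program name, command, and timeout below are assumptions, not taken from the question) is to let supervisord send SIGTERM and wait before escalating, while not forwarding the signal to the child processes spawned through the Process component:

```ini
; Hypothetical supervisord entry for the Ratchet server.
[program:websocket-server]
command=php bin/console app:websocket-server
stopsignal=TERM
stopwaitsecs=300      ; give the socket server up to 5 min to shut down cleanly
stopasgroup=false     ; do not forward the stop signal to child processes...
killasgroup=false     ; ...so commands started via Process can finish on the old code
```

With killasgroup=false, restarting the program replaces the socket server without killing the Symfony commands it has already spawned; subsequent messages are then handled by processes started from the newly deployed code.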

Related

Laravel worker failing when run inside a K8s pod with 139 error code, SIGSEGV

We use a k8s deployment as a Laravel queue worker. The runtime is Alpine 3.10 with PHP 7.3 FPM and Laravel 5.6. Our resource requests are 512MB and the limit is 1Gi.
We run 8 replicas to offload the incoming messages from SQS; messages are dispatched to the queue via Kubernetes cron jobs. Each worker runs:
php /var/www/artisan queue:work ${CHANNEL} -vvv --tries=3 --sleep=3 --timeout=3600 --memory=${MEMORY}
where CHANNEL is the queue name (SQS) and MEMORY is the memory limit passed to the Laravel worker.
On average each pod is processing 170+ messages at a time, talking to various third-party APIs.
Problem:
Intermittently our pods restart with error code 139,
SIGSEGV, Segmentation fault.
This impacts our production systems because pods restart while a message is being processed.
This is a community wiki answer as it only addresses the issue from the docker container side. Feel free to expand on this as you wish.
The error code that you see indicates that container received SIGSEGV:
SIGSEGV indicates a segmentation fault. This occurs when a program
attempts to access a memory location that it’s not allowed to access,
or attempts to access a memory location in a way that’s not allowed.
From the Docker container standpoint, this either indicates an issue
with the application code or sometimes an issue with the base images
used by the container.
In that case you should make sure that you are not running an old Docker version, and then try to debug your code inside the container with a debugger. I am not familiar enough with this topic to guide you further, but this SO question might be useful for you.
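Since exit code 139 is a segmentation fault (128 + SIGSEGV) rather than an out-of-memory kill (which would be 137, i.e. 128 + SIGKILL), the debugger route above is the right one. It is still worth making sure the worker's --memory flag sits well below the container limit, so PHP restarts the worker cleanly instead of running at the edge of its allocation. A sketch of the relevant pod-spec fragment (the values shown are assumptions):

```yaml
# Hypothetical fragment of the worker deployment.
resources:
  requests:
    memory: "512Mi"
  limits:
    memory: "1Gi"
env:
  - name: MEMORY
    value: "768"   # MB passed to `queue:work --memory`; kept below the 1Gi limit
```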

RPCS on Server disappear after idle time of 15-20min

Apologies for my poor English.
I have created a simple WebSocket game with voryx/ThruwayBundle for Symfony. The game uses RPCs registered on the server. Everything works fine, but when I leave it idle for about 20 minutes the RPCs are no longer available, and I have to restart the WebSocket server to make them available again.
I tried to register my RPCs as workers, and I can see them running, but they are still unavailable:
(screenshot: websocket server process status)
The annotation I use to register RPC is
/**
 * @Register("games.snake.newplayer", serializerEnableMaxDepthChecks=true, worker="add-snake")
 */
I run server with command
nohup php app/console thruway:process start &
You can see it on http://amusement.cloudapp.net/
I am using an Ubuntu 15.10 server created in Microsoft Azure, if that helps.
I don't know what I can do to make those RPCs available at any time without restarting the WebSocket server. Should I set up some cron action to reset the WebSocket server when they stop responding, and how can I do that?
Edit#1
The RPCs work great on my local machine (Ubuntu 14.04).
To prevent the RPCs from disappearing, I created a Symfony console command to ping them with some test data, then registered this command as a cron job to be executed every minute.
I couldn't find the source of the problem, but this is an easy way to avoid it.
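The keep-alive described above might look like this in the crontab (the command name, path, and log file are hypothetical, not from the post):

```
* * * * * php /path/to/app/console app:ping-rpcs >> /var/log/rpc-ping.log 2>&1
```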

rabbitmq + phalconphp as consumer (with background jobs)

How do I connect PhalconPHP as a consumer to RabbitMQ?
As I understand it, I need a background process running so that PhalconPHP can listen for events/messages from RabbitMQ and process some time-intensive tasks (sending mail, writing to logs).
What would start the consumer (in Phalcon), maybe supervisord?
I found an article that says to just run php worker.php containing a listen method:
http://www.sitepoint.com/php-rabbitmq-advanced-examples/
While just running php worker.php will work, if you don't use a supervisor service and instead use a while(1) loop and send it to the background, there is no way to handle the process dying.
supervisord is recommended because you need to daemonize the process and ensure that if it dies, or if the system is rebooted, the process is restarted.
You might also want to check into upstart. It can achieve the same goal.
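A minimal supervisord program section for such a worker might look like this (the paths, program name, and user are assumptions):

```ini
; Hypothetical entry in /etc/supervisor/conf.d/consumer.conf
[program:rabbitmq-consumer]
command=php /var/www/worker.php
autostart=true          ; start when supervisord starts (e.g. on reboot)
autorestart=true        ; restart the consumer if it dies
startretries=10
user=www-data
redirect_stderr=true
stdout_logfile=/var/log/rabbitmq-consumer.log
```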

PHP application on Azure as "Console App"

Is there a way to easily run a PHP application from the command line on Windows Azure?
I have a standard web application (on Azure) and I want to communicate using WebSockets,
so I need a WebSocket server running all the time on Azure.
I use the Wrench project, which I need to run "all the time" to listen on some port and deal with WebSocket messages sent from JavaScript.
So again: how do I easily run a "persistent" PHP application on Azure?
Thank you in advance.
Sandrino's answer is fine, but I prefer ProgramEntryPoint for doing this sort of thing. The trouble with a background task is that (unless you build something on your own) nothing is monitoring it. Using ProgramEntryPoint, Windows Azure will monitor the process, and if it exits for any reason, the role instance will be restarted.
EDIT:
Sandrino points out that the PHP program isn't the only thing running. (There's also a website.) In that case, I'd recommend launching php.exe in Run() in WebRole.cs. Process.Start it and then do a .WaitForExit() on it. That way, if the process exits, the role itself will exit from Run(), causing the role instance to restart. See http://blog.smarx.com/posts/using-other-web-servers-on-windows-azure for an example.
In order to run your PHP script as a command line application you should use the PHP CLI (command line interface).
php.exe -f "yourWebSocketServce.php" -- -arg1 -arg2 -arg3
Now, in order to run this in Windows Azure you'll need to define a startup task that runs this command. You'll see that the default task type is simple, which means that the startup of your role will block until the task finishes. But in your case running the WebSocket in PHP will be a blocking process, that's why you should change the type to background (this will make sure the instance continues starting up while your WebSocket server is running).
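The background startup task described above is declared in ServiceDefinition.csdef; a sketch (the script file name is an assumption):

```xml
<!-- Hypothetical startup task: taskType="background" lets the role
     finish starting while the WebSocket server keeps running. -->
<Startup>
  <Task commandLine="startup\run-websocket.cmd"
        executionContext="elevated"
        taskType="background" />
</Startup>
```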
Here is a WebSockets service on Azure: Live XSockets.NET.
Have a look at http://live.xsockets.net, an easy way of getting started, but it depends on what you are going to do on the server side. The service mentioned can be used as a message dispatcher to notify clients of changes, etc. In other words, it is a way of boosting "regular" web apps.

RabbitMQ + PHP deployment strategy

I have a PHP project (Symfony2) that uses RabbitMQ. I use it as a simple message queue to delay some jobs (sending mails, importing data from APIs). The consumers run on the webserver and their code is part of the webserver repo; they are deployed together with the web code.
The questions are:
How do I start the consumers as daemons and make sure they always run?
When deploying the app, how do I shut down consumers "gracefully" so that they stop consuming but finish processing the message they started?
If it's any important, for deployment I use Capifony.
Thank you!
It may be worth looking at something like supervisord, which is written in Python. I've used it before for running workers for Gearmand, a job queue that fulfils a similar role to the way you're using RabbitMQ.
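For the graceful-shutdown part of the question, a common pattern is to trap SIGTERM in the consumer loop, so that a stop issued by supervisord during a Capifony deploy lets the current message finish before the process exits. A sketch, with a hypothetical queue API standing in for your RabbitMQ client (e.g. php-amqplib):

```php
<?php
// Sketch: graceful-stop consumer loop. $queue and handle() are
// hypothetical placeholders for your RabbitMQ client and job logic.
pcntl_async_signals(true);

$running = true;
pcntl_signal(SIGTERM, function () use (&$running) {
    // supervisord sends SIGTERM on stop/restart: leave the loop
    // after the message in flight is done, instead of dying mid-job.
    $running = false;
});

while ($running) {
    $message = $queue->get();   // blocking fetch (hypothetical API)
    if ($message !== null) {
        handle($message);       // the current message always completes
        $queue->ack($message);
    }
}
```

Pair this with a generous stopwaitsecs in the supervisord program section so supervisord waits for the loop to exit before escalating to SIGKILL.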
