Getting AWS Elastic Beanstalk health code programmatically via PHP

I have an AWS Elastic Beanstalk instance on which I have some crons running. I occasionally have issues like:
Environment health has transitioned from Ok to Severe. 4.5 % of the requests to the ELB are failing with HTTP 5xx (2 minutes ago)
I have alarms set to let me know about this, and that part is working correctly. But I want to stop the crons from executing if the environment health status is not "OK" or "Warning". Is there a way to programmatically get the health status code shown in the Elastic Beanstalk console from a PHP script running on the same instance?

I would use a CloudWatch alarm to trigger a notification email in such an event.
The first thing to understand is that Elastic Beanstalk is a convenient tool to bootstrap infrastructure on AWS. Anything you set up with Elastic Beanstalk, either via its CLI or the console, you can pretty much do directly from the EC2 pages as well.
On the AWS console, go to EC2.
Go to Load Balancing / Target Groups.
Find the Target Group that is attached to the Load Balancer created by Elastic Beanstalk.
Check the "Monitoring" tab; you should see "CloudWatch alarms: no alarm configured".
Click "Create Alarm" and finish off the rest; you may need to create a topic, set an email address, and define the alarm triggering criteria.

Assuming you have a health script within your application that Beanstalk pings periodically, you should be able to determine whether your app is healthy from within your application itself. You should not rely on programmatically getting the status from Beanstalk.
For example, within your health script you can set a flag indicating whether your application is healthy. Such a flag can take the form of a touched file on the filesystem, or a flag in a cache if you use one (e.g. Memcached).
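A minimal sketch of that idea, assuming a hypothetical health endpoint and a flag file (the paths and the health check itself are placeholders):

```php
<?php
// health.php -- the endpoint Beanstalk pings. Touch a flag file while healthy.
$healthy = my_app_checks_pass();   // placeholder for your real checks
$flag    = '/tmp/app_healthy';     // illustrative path

if ($healthy) {
    touch($flag);
    http_response_code(200);
} else {
    @unlink($flag);
    http_response_code(500);
}
```

The cron scripts then bail out unless the flag was touched recently:

```php
<?php
// cron.php -- skip the run unless the app reported healthy recently.
$flag   = '/tmp/app_healthy';
$maxAge = 120; // seconds; tune to your health-check interval

if (!is_file($flag) || time() - filemtime($flag) > $maxAge) {
    exit(0); // environment not known to be healthy; skip this run
}

// ... proceed with the actual cron work ...
```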

Related

Keep PHP file always running in EC2

I have a server.php file in my Elastic Beanstalk website running on an EC2 instance. It creates a websocket and keeps it alive with an infinite loop (it takes messages and sends them to the correct client).
But after deployment, server.php never starts running until I open it in my browser, and I am not sure whether it keeps running after that.
I don't know the right way to do this. How can I get server.php to start after deployment and keep running always?
Use supervisord. It's commonly used in the Laravel (PHP) world to keep workers running. It's quite convenient and has nice features that can be enabled, such as detecting whether a script failed to start, retries, automatic restarts, delayed start, and some more quality-of-life options.
There are some tutorials: link
and link
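A minimal supervisord program definition as a sketch; the file location, program name, paths, and user are assumptions (on Elastic Beanstalk the deployed app typically lives under /var/app/current, but verify on your platform):

```ini
; /etc/supervisord.d/websocket.ini -- illustrative name and paths
[program:websocket]
command=php /var/app/current/server.php   ; adjust to your deployment path
autostart=true                            ; start when supervisord starts
autorestart=true                          ; restart if the script exits
startretries=3                            ; give up after repeated failed starts
user=webapp                               ; typical EB app user; verify on your AMI
stdout_logfile=/var/log/websocket.log
redirect_stderr=true
```

After deploying the file, running supervisorctl reread followed by supervisorctl update picks up the new program.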

Amazon Elastic Beanstalk Worker cronjob (SQS) triggers same message multiple times

All,
I have a quite disturbing problem with my Amazon Elastic Beanstalk Worker combined with SQS, which is supposed to provide cron-job scheduling, all running with PHP.
Here is the scenario: I need a PHP script to be executed regularly in the background, which might eventually run for hours. I saw this nice introduction, which seems to cover exactly my scenario (AWS Worker Environments - see the Periodic Task part).
So I read quite a lot of howtos and set up an Elastic Beanstalk Worker with SQS (the queue is actually created automatically during creation of the worker) and provided the cron config (cron.yaml) within my deployment package.
The cron script is properly recognized. The sqs daemon starts, messages are put into the queue and trigger my PHP script exactly on schedule. The script is run and everything works fine.
The configuration of the queue looks like this: (screenshot of the SQS queue configuration)
However, after some time of processing (the script is still busy, and NO, it is not the next scheduled run) a second message is opened and another instance of the same script is executed, and another, and another... at exactly 5-minute intervals.
I suspect that somehow the message is not removed from the queue (although I ensured that the script sends status 200 back), which ends up creating a new delivery if the script runs for too long.
Is there a way to prevent the spawning of additional messages? Tell the queue or the sqs daemon not to create new in-flight messages? Do I have to remove the message in my code, although the tutorial states it should happen automatically?
I would like to just trigger the script, remove the message from queue and let the script run. No fancy fallback / retry mechanisms please :-)
I spent many hours trying to find something on the internet. Unsuccessful. Any help is appreciated.
Thanks
a second message is opened and another instance of the same script is executed, and another, and another... at exactly 5-minute intervals.
I doubt it is a second message. I believe it is the same message.
If you don't respond 200 OK before the Inactivity Timeout expires, then the message goes back to the queue, and yes, you'll receive it again, because the system assumes you've crashed, and you would want to see it again. That's part of the design.
There's an X-Aws-Sqsd-Receive-Count request header you're receiving that tells you approximately how many times the current message has been delivered. The X-Aws-Sqsd-Msgid request header identifies the unique message.
If you can't ensure that the script will finish before the timeout, then this is not likely an appropriate use case for this service. It sounds like the service is working correctly.
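As a sketch of how a worker endpoint might inspect those headers in PHP (purely illustrative; whether to acknowledge duplicates like this depends on your job's idempotency):

```php
<?php
// Worker endpoint invoked by the sqsd daemon for each SQS message.
// The daemon's request headers show up in $_SERVER with an HTTP_ prefix.
$receiveCount = (int)($_SERVER['HTTP_X_AWS_SQSD_RECEIVE_COUNT'] ?? 1);
$messageId    = $_SERVER['HTTP_X_AWS_SQSD_MSGID'] ?? 'unknown';

if ($receiveCount > 1) {
    // Same message redelivered -- the previous attempt probably outlived
    // the visibility timeout. Log it and acknowledge to stop the loop.
    error_log("Duplicate delivery of message $messageId (attempt $receiveCount)");
    http_response_code(200); // a 200 makes sqsd delete the message
    exit;
}

// ... do the long-running work here ...
http_response_code(200);
```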
I know this doesn't directly answer your question regarding configuration, but I ran into a similar issue - my queue configuration is set exactly like yours, and in my Elastic Beanstalk setup, I've set the Visibility Timeout to 1800 seconds (or half an hour) and Max Retries to 2.
If a job runs for more than a minute, it gets run again and is then thrown into the dead letter queue, even though a 200 OK is returned from the application every time.
After a few hours, I realized that it was the Nginx server that was timing out - checking the Nginx error log yielded that insight. I don't know why Elastic Beanstalk includes a web server in this scenario... You may want to check if EB spawns a web server in front of your application, if all else fails.
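If Nginx turns out to be the culprit in your setup too, the relevant directive is proxy_read_timeout. A hedged example; where exactly to place the override depends on your Elastic Beanstalk platform version (on recent Amazon Linux platforms such snippets go under .platform/nginx/conf.d/ in the deployment bundle, but verify for yours):

```nginx
# Illustrative override -- match the value to your worker's visibility timeout.
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
```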
Look at the Worker Environment documentation for details on the values you can configure. You can configure several different timeout values as well as "Max retries", which, if set to 1, will prevent re-sends. However, your Dead Letter Queue will fill up with messages that were actually processed successfully, so that might not be your best option.

Reloading multiple application servers on deploy

I have a setup with several application servers running the php-fpm service, all sharing a GlusterFS mount for the application code and other assets. In the current deploy process, the files get updated directly on the file server, and often the application service must be reloaded to reflect the changes. To achieve that, the deployment script needs to get into every server and issue a reload command, but with autoscaling, the number of servers is not the same at every moment.
Overall, I am sketching a couple of alternatives to solve this problem:
The first one, more artisanal and not perfect, as a proof of concept: a cron job that runs every X minutes on the application machines and looks for a file that should contain a unique piece of info such as the machine's hostname or IP address. If it matches, the job takes no action; if not, it reloads the service and writes itself into the file. In the deployment procedure, the script would clear the file, and all servers would get reloaded on the next cron run (see the sketch after these two options).
Second, a more sophisticated approach using a message queue or notification service that the running application machines would subscribe to at boot time, waiting for an order to reload. The deploy script would then publish a notification to make all servers aware that it is time. A cron job similar to the previous method's would then notice that and reload the app server.
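A rough PHP sketch of the first option; the flag path on the shared GlusterFS mount and the reload command are assumptions:

```php
<?php
// reload-check.php -- run from cron every X minutes on each app server.
$flag = '/mnt/gluster/deploy-reload-flag';   // cleared by the deploy script
$me   = gethostname();

$done = is_file($flag) ? file($flag, FILE_IGNORE_NEW_LINES) : [];

if (!in_array($me, $done, true)) {
    exec('sudo systemctl reload php-fpm');   // adjust to your service name
    file_put_contents($flag, $me . PHP_EOL, FILE_APPEND | LOCK_EX);
}
```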
Would any of that make sense? Is there any simpler or more standard way to trigger a broadcast to the application servers running at a given moment during the deploy procedure, without having to SSH into each one and issue the reload command? Any other advice or suggestions?
Thanks!

run php process in background, send email when finished

I am writing a script that allows users to download VM images from a remote repository. The images have to be downloaded from the remote repository (a) to a local server (b), and then the users can download the image from that local server (b) via a URL link. This is achieved via a PHP exec call against an API with URL endpoints.
The issue is that it can take a while for the image to transfer from the "a" machine to the "b" machine. Is there a way to have the download process execute in the background, so that when the image transfer is done, the user gets an email containing the link to the file?
Otherwise, the user will just sit at a spinning page for as long as the max_execution_time setting will allow.
I was looking at this site for reference, but it was not super helpful.
Edit: I am running on a LAMP setup
You may want to look into starting a worker via beanstalkd.
http://kr.github.io/beanstalkd/
You send a message containing the download link and the email address to send to. A worker can be started on demand when your message is sent, and it automatically starts the download. When the download is complete, your worker fires off the email.
The PHP library to allow you to interface with beanstalk can be found here:
https://github.com/pda/pheanstalk
Beanstalkd
Beanstalkd is a daemon written to handle running jobs asynchronously so that your user is not left hanging while waiting for a task to finish. It's written in C, and there are many client libraries to interface with it.
Pheanstalk
Pheanstalk is the PHP library for integrating with Beanstalkd. You can define Job classes and then use this API to submit those jobs for processing.
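A minimal sketch with Pheanstalk (v4-style API; the tube name, payload fields, and mail call are illustrative):

```php
<?php
// producer.php -- queue the download job and return to the user immediately.
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = Pheanstalk::create('127.0.0.1');
$pheanstalk->useTube('downloads');                        // illustrative tube name
$pheanstalk->put(json_encode([
    'image_url' => 'https://repo.example/images/vm.img',  // placeholder
    'email'     => 'user@example.com',
]));
```

```php
<?php
// worker.php -- runs in the background (e.g. under supervisord).
require 'vendor/autoload.php';

use Pheanstalk\Pheanstalk;

$pheanstalk = Pheanstalk::create('127.0.0.1');
$pheanstalk->watch('downloads');

while ($job = $pheanstalk->reserve()) {
    $data = json_decode($job->getData(), true);
    // ... download $data['image_url'] to the local server ...
    mail($data['email'], 'Your image is ready', 'Download link: ...'); // sketch
    $pheanstalk->delete($job);   // remove the job once it has been handled
}
```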
Most major frameworks have support for something like this.

Scaling cronjobs over multiple servers

Right now, we have a single server with a crontab that sends out daily emails. We would like to scale that server. The application is a standard Zend Framework application deployed on a CentOS server in the Amazon cloud.
We have already taken care of load balancing, content management, and deployment. However, the cronjobs are still an issue for us, as we need to guarantee that some jobs are performed only once.
For example, the daily email cronjob must be executed only once, by a single server. I'm looking for the best method to guarantee that only one server will execute it, exactly once.
I'm thinking about two solutions, but I was wondering if someone else has had the same issue.
Make one of the servers the "master", which alone sends out the daily emails. That is an issue if the server malfunctions, and generally we don't want to have a "special" server. It would also mean we need to keep track of which server is the master.
Have a queue of scheduled tasks to be performed. Each server polls that queue and sees which tasks need to be performed. The first server to "grab" a task performs it and marks it as done. I was looking at Amazon Simple Queue Service as a solution for the queue (see the sketch below).
Both solutions have advantages and disadvantages, and I was wondering if someone else has thought about this and might be able to help us here.
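A rough sketch of the second option with the AWS SDK for PHP; the queue URL and run_task() are placeholders, and it is SQS's visibility timeout that keeps two servers from working on the same task at once:

```php
<?php
require 'vendor/autoload.php';

use Aws\Sqs\SqsClient;

$sqs = new SqsClient(['region' => 'us-east-1', 'version' => 'latest']);
$queueUrl = 'https://sqs.us-east-1.amazonaws.com/123456789012/scheduled-tasks'; // placeholder

// Each server polls; a received message stays hidden from the other servers
// for the visibility timeout, so only one server works on it at a time.
$result = $sqs->receiveMessage([
    'QueueUrl'            => $queueUrl,
    'MaxNumberOfMessages' => 1,
    'WaitTimeSeconds'     => 20,   // long polling
    'VisibilityTimeout'   => 300,  // must exceed the task's runtime
]);

foreach ($result['Messages'] ?? [] as $message) {
    run_task($message['Body']);   // hypothetical task runner
    // Delete only after success, so a crashed server's task gets retried.
    $sqs->deleteMessage([
        'QueueUrl'      => $queueUrl,
        'ReceiptHandle' => $message['ReceiptHandle'],
    ]);
}
```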
When you need to scale out cron jobs, you are better off using a job manager like Gearman
Beanstalkd could also be an option for you.
I had the same problem. What I did was dead simple.
I spun up the cheapest EC2 instance on AWS.
I created the cronjob(s) only on this server.
The cron jobs just make a simple request to my endpoint / API (i.e. api.mydomain.com).
On my API, I just have a route watching for these special requests that runs the job I want. So basically, instead of running the task directly from a cronjob, I'm running it via an HTTP request.
I hope that makes sense! Now it doesn't matter how many servers you have; it will just scale. Also, your cronjob server's only function is to run dead-simple jobs that send a request, nothing more.
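As a sketch, the crontab on that dedicated instance could look like this (the URL follows the api.mydomain.com example above; the shared-secret header is illustrative, since you would want some authentication on that route):

```
# Fire the daily email job at 06:00; the API endpoint does the real work.
0 6 * * * curl -s -H "X-Cron-Secret: changeme" https://api.mydomain.com/cron/daily-emails
```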
