I have set up Plesk to run this through SSH and it works great; the only problem is that I have to run the script manually every day. This should be automated. Just putting the command "aws s3 sync [localdirectory] s3://[S3Directory]" in the Scheduled Tasks section on a domain doesn't allow the credentials to load. I am thinking there should be a relatively easy solution to automate this. I have set up PHP for it as well, so if there is a short script that would handle the same thing I would be interested in seeing an example.
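One approach (a minimal sketch; the paths, bucket name, and credential values below are placeholders to replace) is to wrap the sync in a small PHP script that puts the AWS credentials into the environment and then shells out to the CLI, and point the Plesk scheduled task at that script:

<?php
// sync.php - minimal sketch: export AWS credentials for the child process,
// then call the AWS CLI. All values below are placeholders.
putenv('AWS_ACCESS_KEY_ID=YOUR_KEY_ID');
putenv('AWS_SECRET_ACCESS_KEY=YOUR_SECRET_KEY');
// Alternatively, point the CLI at an existing credentials file:
// putenv('AWS_SHARED_CREDENTIALS_FILE=/var/www/vhosts/example.com/.aws/credentials');

$local  = '/var/www/vhosts/example.com/backups'; // placeholder local directory
$bucket = 's3://your-bucket/backups';            // placeholder S3 destination

exec('aws s3 sync ' . escapeshellarg($local) . ' ' . escapeshellarg($bucket) . ' 2>&1', $output, $status);
echo implode(PHP_EOL, $output), PHP_EOL;
exit($status);

The scheduled task then just runs php /path/to/sync.php, so no interactive shell (and none of its credential setup) is needed.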
I have a server.php file on my Elastic Beanstalk website running on an EC2 instance; it creates a WebSocket and keeps it alive with an infinite loop (it takes messages and sends them to the correct client).
But after deployment the server.php file never starts running until I open it in my browser, and I am not sure whether it keeps running after that.
I don't know the right way to do this. If this is the correct approach, how can I get server.php to start after deployment and keep running at all times?
Use supervisord. It's commonly used with Laravel (PHP) to keep queue workers running. It's quite convenient and has nice features that can be enabled, such as detecting whether a script failed to start, retries, automatic restarts, delayed starts, and some other quality-of-life things.
There are some tutorials here: link and link.
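A minimal supervisord program entry for this could look roughly like the following (the file path, program name, and log locations are assumptions to adapt to your deployment):

; /etc/supervisor/conf.d/websocket.conf - hypothetical example
[program:websocket-server]
command=php /var/app/current/server.php
autostart=true
autorestart=true
startretries=3
stdout_logfile=/var/log/websocket-server.out.log
stderr_logfile=/var/log/websocket-server.err.log

After adding it, reload supervisor (supervisorctl reread && supervisorctl update) and it will start server.php and restart it if it ever dies.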
I am working on a game panel that I am building in PHP and Bash. The web panel controls the game servers by running bash scripts when buttons are pressed. So far so good. My problems come when I need to run the bash scripts, because the user running them needs sudo privileges or the scripts will behave unexpectedly:
Should I create a new user and give it sudo privileges, then log in to the server via SSH (through PHP)? If so, how can I store the login credentials safely so that if an attacker breaches my website they won't find them?
Should I give www-data sudo privileges for the specific scripts? Is this a dangerous approach?
Is there any better and more secure way to run bash scripts from a webpage?
I am a newbie PHP developer and my first project is a game panel running on Ubuntu server. Please have mercy. :)
I wouldn't run any bash scripts directly from PHP; instead I would decouple the two by using a message queue.
Have the PHP script send a message to an exchange and mark the action as "in progress". Then have a bash script run as a consumer of a queue that receives the message, processes it, and runs the necessary script. Finally, pass a message on to another queue which is consumed by PHP to update the action status as "completed" or "failed", depending on the outcome. This is not a synchronous process, but it's the safer way to handle it.
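As a rough sketch of the PHP publishing side using php-amqplib (the queue name, credentials, and message fields are placeholders):

<?php
// publish_action.php - minimal sketch using php-amqplib; values are placeholders.
require __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$connection = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $connection->channel();

// Durable queue so pending actions survive a broker restart.
$channel->queue_declare('game_actions', false, true, false, false);

$payload = json_encode(['action' => 'restart', 'server_id' => 42]);
$msg = new AMQPMessage($payload, ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]);
$channel->basic_publish($msg, '', 'game_actions');

$channel->close();
$connection->close();

The consumer on the other side would run as its own (privileged) system user, read from game_actions, and execute the matching script, so the web server user never needs sudo.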
Suggested reading:
RabbitMQ
RabbitMQ and bash
What you are trying to do here is very dangerous: if you can execute bash scripts from within a web page, you will get hacked and the attacker will have full access to your machine.
What you need to do depends on what your project is, but basically you want to set up some form of service that runs your bash commands and have your web page call that service. This is not straightforward.
I want to run a cron job in my PHP web application on Bluemix, but I have not found any direct cron concept in Bluemix. How can I perform my cron task?
Many tutorials mention the Workload Scheduler service, but it is Java-based, I don't understand the concept, and there is no good tutorial about it.
For scripts that need to be called on a schedule, invoke them over HTTP from a cron job using curl or wget from a Linux host, often the one hosting the LAMP stack itself.
However, that doesn't work on a PaaS like Bluemix because you don't have shell access to any underlying VM, so the alternative is to install a cron job elsewhere on a server under your control and script it to hit your Bluemix script on a schedule.
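For example, a crontab entry on any Linux box you control (the URL and the 15-minute schedule here are just placeholders) could be:

# Call the Bluemix-hosted PHP script every 15 minutes (placeholder URL)
*/15 * * * * curl -fsS https://your-app.mybluemix.net/cron/run-task.php >/dev/null 2>&1

It is worth protecting that URL (for example with a secret token or basic auth) so that only your cron job can trigger the task.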
I feel a little bit silly for asking this question, but I can't seem to find an answer to this problem on the internet. After searching for several hours, I figured out that on a Linux server you use Supervisor to run "php artisan queue:listen" (either with or without daemon) continuously on your website to handle jobs pushed to the queue. This is all well and good, but what if I want to do this on a Windows Azure web app? After searching around, the solutions I found were:
Make a cron job to run "php artisan queue:listen" every minute (or every X minutes); I really dislike this solution and want to avoid it, especially if the site gets more traffic;
Add a WebJob that runs "php artisan queue:listen" continuously (the problem here is that I don't know how to write the script for the WebJob...);
I want to ask for your help: which of these is the correct solution, is there a better one, and if the WebJob is the best option, how do I write the script for it? Thanks in advance.
In short, Supervisor is a modern alternative to nohup (no hang up) with a few other bits and pieces tacked on. There are other tools that can keep a task running in the background (as a daemon), and the solution I use for Windows-based projects (very few, tbh) is Forever, which I discovered via: https://stackoverflow.com/a/18226392/5912664
C:\myprojectroot > forever -c php artisan queue:listen --queue=some_nice_queue --tries=3
How?
Install Node for Windows, then install Forever with npm:
C:\myprojectroot > npm install -g forever
If you're stuck getting Node running on Windows, I recommend the Windows package manager Chocolatey:
https://chocolatey.org/packages?q=node
Be sure to check for any log files that Forever creates, as I left one long enough to consume 30 GB of disk space!
For Azure, you can add a new WebJob to your web app and upload a .cmd file containing a command like this:
php %HOME%\site\wwwroot\artisan queue:work --daemon
and define it as a triggered WebJob with a CRON frequency of 0 * * * * * (every minute).
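If it helps, the schedule can also be specified by dropping a settings.job file next to the .cmd in the WebJob's folder (this uses the standard triggered-WebJob schedule format; the expression shown is the same every-minute one):

{
  "schedule": "0 * * * * *"
}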
That worked for me.
Best.
First of all, you cannot use a WebJob with Laravel on Azure: the Azure PHP Web App is hosted on Linux, and WebJobs do not work with Linux at the moment.
The best way to do cron jobs in Laravel on Azure is to create an Azure Logic App. You use the Recurrence trigger and then an HTTP action to send a POST request to your Laravel Web App. You use this periodic heartbeat to run whatever actions you need to do. Be sure to add authentication to your POST request.
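As a sketch of what the Laravel side of that heartbeat might look like (the route, the X-Heartbeat-Token header, the config key, and the report:daily command are all made-up placeholders):

<?php
// routes/api.php - hypothetical heartbeat endpoint (API routes skip CSRF checks)
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Artisan;
use Illuminate\Support\Facades\Route;

Route::post('/heartbeat', function (Request $request) {
    // Hypothetical shared-secret check; keep the token in your environment config.
    abort_unless(
        hash_equals((string) config('services.heartbeat.token'), (string) $request->header('X-Heartbeat-Token')),
        403
    );

    // Do the scheduled work here, e.g. a (made-up) artisan command.
    Artisan::call('report:daily');

    return response()->json(['status' => 'ok']);
});

The Logic App's HTTP action would then POST to /api/heartbeat with that header set.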
The next problem you will have is that the POST is synchronous, so the work you are doing cannot be extensive, or your HTTP request will time out or you will hit the time limit on PHP scripts (60 seconds).
The solution is not Laravel Jobs because here again you need something running in the background to process the queues.
The solution is also not PHP threads. The standard Azure PHP Web App does not support PHP Threads. You can of course build your own Web App and enable PHP threads, but this is really swimming upstream.
You simply have to live with synchronous logic. So the work you are doing with the heartbeat should take no more than about 60 seconds.
If you need more extensive processing then you really need to off load it to another place: another Web App, an Azure Function, etc.
But why not do that in the first place? The reason is cost and complexity. If you have something simple...like a daily report...you simply connect the report to the heartbeat and all the facilities for producing the report are right there in Laravel. To separate the daily report into its own container would require setup and the Web App it runs in would incur costs...not worth it in my view for something simple.
I have a setup with several application servers running the php-fpm service, all sharing a GlusterFS mount for the application code and other assets. In the current deploy process, the files get updated directly on the file server, and often the application service must be reloaded for the changes to take effect. To achieve that, the deployment script needs to get into every server and issue a reload command, but with autoscaling the number of servers is not the same at every moment.
Overall, I am sketching a couple of alternatives to solve this problem:
The first one, more artisanal and not perfect, as a proof of concept, would be a cron job that runs every X minutes on the application machines and looks for a file that should contain unique info like the machine's hostname or IP address. If it finds its own entry, it takes no action; if not, it reloads the service and writes itself into the file. During the deployment procedure, the script would clear the file, and all servers would get reloaded on the next cron run. (A rough sketch of this is shown after the second alternative below.)
The second, more sophisticated approach would use a message queue or notification service that the running application machines subscribe to at boot time, waiting for an order to reload. The deploy script would then publish a notification to make all servers aware it is time. A cron job similar to the one in the previous method (or a long-running consumer) would then notice that and reload the app server.
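For reference, the first alternative could be as small as a script like this (the marker path and the php-fpm unit name are assumptions):

#!/bin/bash
# reload-check.sh - hypothetical cron script for the first alternative.
# Reload php-fpm only if this host has not yet acted on the current deploy marker.
MARKER=/mnt/gluster/shared/deploy-reload.txt   # assumed path on the GlusterFS mount
HOST=$(hostname)

# If this host is not listed in the marker file yet, reload and record it.
if ! grep -qxF "$HOST" "$MARKER" 2>/dev/null; then
    systemctl reload php-fpm    # adjust the service/unit name to your distro
    echo "$HOST" >> "$MARKER"
fi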
Would any of that make sense? Is there a simpler or more standard way to broadcast to the application servers running at a given moment during the deploy procedure, without having to SSH into each one and issue the reload command? Any other advice or suggestions you can provide?
Thanks!