PHP Heroku background workers?

I'm running a PHP app on Heroku, and I need to set up background workers that talk to external APIs and write to my database. Heroku has plenty of documentation about setting up workers for Ruby, but not for PHP.
Is this easily doable on Heroku with PHP?
I've never tackled launching background processes, and I can't seem to find any docs detailing it...

Add the following environment variable:
$ heroku config:add LD_LIBRARY_PATH=/app/php/ext:/app/apache/lib
Then you can just add the worker to your Procfile.
worker: cd ~/www/ && ~/php/bin/php worker.php
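For context, the worker is just another process type, so once the Procfile line above is deployed you start it by scaling that process type. A sketch (dyno counts are examples, and CLI flags vary by toolbelt version):

```shell
# Start one worker dyno for the "worker:" entry in the Procfile
heroku ps:scale worker=1

# Watch the worker's output to confirm it's running
# (older toolbelts use --ps instead of --dyno)
heroku logs --tail --dyno worker
```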


How to automate entering pass-phrases into docker containers with bash scripts?

I need to know how to automate the process of entering pass-phrases for private keys within docker containers.
I have a problem setting up my web application container with Docker. During the setup process, I have to run composer install within my container, and one dependency needs to clone a private git repository. That requires my private key, and the key is secured with a pass-phrase, so when running composer install the pass-phrase has to be provided to clone the repository. There is no issue with that flow when I run these commands within the container manually.
But I'm planning to automate this whole process with a bash script, and when I run that script from the host machine, it doesn't ask for the pass-phrase and exits with a "Permission denied (publickey)" error.
How do I automate the complete process with a bash script without it exiting? I'm looking for a way to have it prompt for the pass-phrase when cloning that private repository, or any other working solution.
Usually the answer is "don't". The password entered during the build would become part of the image, often in the layer history. If you don't mind the password being included in the image, then it depends on the command being run. Sometimes you can echo p4ssw0rd | git clone ..., but other times the command will intentionally strip out the piped-in input and force a prompt for security. Other commands have options to pass the password on the command line with a flag. Some people handle prompts like this with an "expect" script that watches the output and sends the password when prompted. Each of these will put the password inside the image.
For your specific scenario, one option is to add a keypair that doesn't require a password to git. Another option is to checkout the code outside of the docker build, and COPY the repo into the image instead of letting git check it out.
If you absolutely need to include the password in the image, then look into a multi-stage build. Inject your password during the first stage, and then copy the result into the second stage so that the image you push does not include the password in the image layers.
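A rough sketch of that multi-stage approach (image tags, paths, and the key handling are illustrative only; this assumes the key no longer needs a pass-phrase once injected into the build stage):

```dockerfile
# Stage 1: has the private key and runs composer; its layers are
# discarded and never pushed
FROM composer:2 AS build
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa \
 && ssh-keyscan github.com >> /root/.ssh/known_hosts
COPY composer.json composer.lock ./
RUN composer install --no-dev

# Stage 2: only the installed code is copied; the key never appears here
FROM php:8-apache
COPY --from=build /app /var/www/html
```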
You can use secrets to manage any sensitive data which a container needs at runtime but you don’t want to store in the image or in source control.
Note: Docker secrets are only available to swarm services, not to standalone containers. To use this feature, consider adapting your container to run as a service. Stateful containers can typically run with a scale of 1 without changing the container code.
Check https://docs.docker.com/engine/swarm/secrets/#how-docker-manages-secrets
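A minimal sketch of that secrets flow (service, secret, and image names are examples; the secret is exposed inside the container at /run/secrets/&lt;name&gt;):

```shell
# Initialize swarm mode once (only needed if not already a manager)
docker swarm init

# Store the pass-phrase as a secret, then grant a service access to it
printf 'my-passphrase' | docker secret create ssh_passphrase -
docker service create --name webapp --secret ssh_passphrase myimage:latest
```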
I found the answer to my question, and it is as simple as adding the -it flags to the docker exec command in the bash script that calls composer install. I skipped those flags intentionally because the command runs within a bash script, and I hadn't realized that interactive mode would be required to prompt for the pass-phrase.
Anyhow, after modifying the bash script as below, the issue is solved.
docker exec -it {container_name} composer install
And after looking at the above answers, I felt that I hadn't given the complete picture of the issue, so I thought I should add some explanation here. The structure of the bash script is as follows:
Start the containers with docker-compose
In this case my application code was placed outside of the web app container, as a Docker volume, so it could be managed easily.
Install PHP dependencies into the web app container with the above docker exec command
Run database migrations, again with related docker exec commands
So, my issue was with the 2nd step, when installing composer packages. One package needed to be cloned from a private git repository during composer install. My private key is used for that clone, and since it's protected with a pass-phrase and the required flags (-it) for the interactive mode of docker exec were missing, the bash script exited with an error. After setting the -it flags, the issue is fixed.

GITHUB: I want to add a project from local to a live server through GitHub [duplicate]

Is there any way to set up git such that it listens for updates from a remote repo and will pull whenever something changes? The use case is I want to deploy a web app using git (so I get version control of the deployed application) but want to put the "central" git repo on Github rather than on the web server (Github's interface is just soooo nice).
Has anyone gotten this working? How does Heroku do it? My Google-fu is failing to give me any relevant results.
Git has "hooks", actions that can be executed after other actions. What you seem to be looking for is the "post-receive hook". In the GitHub admin, you can set up a post-receive URL that will be hit (with a payload containing data about what was just pushed) every time somebody pushes to your repo.
For what it's worth, I don't think auto-pull is a good idea: what if something wrong was pushed to your branch? I'd use a tool like Capistrano (or an equivalent) for such things.
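If you control the remote server directly, the same hook mechanism works without GitHub: a bare repository's post-receive hook can check the pushed branch out into the web root instead of running git pull. A minimal sketch (all paths and the branch name are examples):

```shell
# Create a bare repo whose post-receive hook deploys pushed commits
# into a separate work tree.
mkdir -p /tmp/demo && cd /tmp/demo
git init --bare server.git
mkdir -p deploy

# The hook: on every push, force-check the branch out into the deploy dir
cat > server.git/hooks/post-receive <<'EOF'
#!/bin/bash
GIT_WORK_TREE=/tmp/demo/deploy git checkout -f main
EOF
chmod +x server.git/hooks/post-receive
```

Pushing to this repository (or pointing a local remote at it over SSH) then updates /tmp/demo/deploy automatically.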
On unix-likes you can create a cron job that calls "git pull" (every day or every week or whatever) on your machine. On Windows you could use Task Scheduler or the "AT" command to do the same thing.
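For example, a hypothetical crontab entry (installed with crontab -e; the path, branch, and interval are placeholders):

```shell
# m h dom mon dow: pull every 5 minutes and log the result
*/5 * * * * cd /var/www/myapp && git pull origin main >> /var/log/gitpull.log 2>&1
```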
There are continuous integration programs like Jenkins or Bamboo, which can detect commits and trigger operations like build, test, package and deploy. They do what you want, but they are heavy with dependencies, hard to configure, and in the end they may poll the git repository periodically, which has the same effect as calling git pull from cron every minute.
I know this question is a bit old, but you can use a webhook and PHP to auto-pull your project with git (assuming your project involves a webserver).
See my gist here:
https://gist.github.com/upggr/a6d92e2808e9628ebe0d01fd93569f4a
As some have noticed after trying this, if you use PHP's exec(), it turns out that solving the permissions is not that simple.
The user that executes the command might not be your own, but www-data or apache.
If you have root/sudo access, I recommend you read this Jonathan's blog post
When you aren't allowed/can't solve permissions
My solution was a bit creative. I noticed I could create a script under my username with a loop, and git pull would work fine. But that, as pointed out by others, raises the problem of running a lot of useless git pulls every, say, 60 seconds.
So here are the steps to a more delicate solution using webhooks:
Deploy key: Go to your server and type:
ssh-keygen -t rsa -b 4096 -C "deploy" to generate a new deploy key; no need for write permissions (read-only is safer). Copy the public key to your GitHub repository settings, under "Deploy keys".
Webhook: Go to your repository settings and create a webhook. Let's assume the payload address is http://example.com/gitpull.php
Payload: create a PHP file with the code example below in it. The purpose of the payload is not to git pull, but to warn the following script that a pull is necessary. Here's the simple code:
gitpull.php:
<?php
/* Deploy (C) by DrBeco 2021-06-08 */
echo("<br />\n");
chdir('/home/user/www/example.com/repository');
touch('GITPULLMASTER'); /* the filename must be a quoted string, not a bare constant */
?>
Script: create a script in your preferred folder, say, /home/user/gitpull.sh with the following code:
gitpull.sh
#!/bin/bash
cd /home/user/www/example.com/repository
while true ; do
    if [[ -f GITPULLMASTER ]] ; then
        git pull > gitpull.log 2>&1
        mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
    fi
    sleep 10
done
Detach: the last step is to run the script in detached mode, so you can log out and keep it running in the background.
There are 2 ways of doing that; the first is simpler and doesn't need the screen software installed:
disown:
run ./gitpull.sh & to put it in background
then type disown -h %1 to detach and you can log out
screen:
run screen
run ./gitpull.sh
type control+a d to detach and you can log out
Conclusion
This solution is simple, and you avoid messing with keys, passwords, permissions, sudo, root, etc., while also preventing the script from flooding your server with useless git pulls.
The way it works is that it checks if the file GITPULLMASTER exists; if not, back to sleep. Only if it exists, then do a git pull.
You can change the line:
mv GITPULLMASTER GITPULLMASTER.`date +"%Y%m%d%H%M%S"`
to
rm GITPULLMASTER
if you prefer a cleaner directory. But I find it useful for debug to let the pull date registered (and untracked).
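The flag-file logic above can be exercised on its own. Here is a sketch of the check isolated into a function, with git pull replaced by an echo so it can run anywhere (filenames are the same as in the script):

```shell
# Flag-file check from gitpull.sh, isolated into a function;
# "git pull" is replaced by an echo so the logic runs standalone.
check_flag() {
    if [ -f GITPULLMASTER ]; then
        echo "pulling"    # stand-in for: git pull > gitpull.log 2>&1
        mv GITPULLMASTER "GITPULLMASTER.$(date +%Y%m%d%H%M%S)"
    else
        echo "sleeping"
    fi
}
```

Touching GITPULLMASTER makes the next invocation report "pulling" and renames the flag; with no flag present it reports "sleeping".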
For our on-premises Windows test servers, we use Windows Task Scheduler tasks, set to run every 3 minutes, pulling from Bitbucket Cloud to repositories on those servers. While not instantaneous, it meets our needs, and has proven to be reliable.

Git Post-Push hook replacing build folder

I'm working with an AngularJS app that I am hosting on Azure as a Web App.
My repository is on Bitbucket, connected as a deployment slot. I use TortoiseGit as a Windows shell for commit/pull/push. Grunt is my task runner.
Problem: I am not managing to replace the build folder after a push has been made. This is what is being exposed on Azure (see picture):
What I've tried:
Batch file using Windows ftp.exe replacing the folder after push using mput
Following this guide by taking advantage of Azure's engine for git deployment(s) that sits behind every web app.
WebHook on Bitbucket calling URL with simple PHP script doing a FTP upload
Currently my solution is that after a push I build using grunt build, connect through FTP with FileZilla, and replace the build folder manually.
How can I automate this?
Thanks in advance.
I solved this by refactoring my initial batch script, which calls:
npm install
bower install
grunt clean
grunt build
winscp.com /script=azureBuildScript.txt
and my azureBuildScript.txt:
option batch abort
option confirm off
open ftps://xxx\xxx:xxx#waws-prod-db3-023.ftp.azurewebsites.windows.net/
cd site/wwwroot
rmdir %existingFolder%
put %myFolder%
exit
This is being triggered Post-Push as a Hook job in TortoiseGit.
It turns out ftp.exe is restricted regarding passive mode and can't interpret such a session, whilst WinSCP handles it with ease.

How to run (or should I run) PHP composer on Jelastic?

Basically what I'm trying to do is to create a simple multi-node env with varnish+nginx+mariadb+memcached. So far I've managed to launch the environment and attach a git project to it. The problem is that we work with PHP and Symfony2, which requires composer to be executed at least once in order to properly deploy the application.
Outside of jelastic we use Jenkins + Ant (but we don't scale horizontally in automatic on the projects where this setup is used, so it's not a problem to add node manually).
So the question is: How can I run composer or ant with build.xml on each deploy?
I see that Java environments have a build server option, is there something like this for php environments?
PHP projects do not have a "standard" build server in the way that many Java projects do - requirements for PHP build tools are more varied depending on the particular project.
For example one customer may ask for grunt, another for ant, and another for phing.
If you want to perform a sophisticated build, you can create your own build node for your PHP project using an Elastic VPS or separate Docker environment. To deploy the built project to your servers you can use SSH connections, or simply git push and set the runtime environment to auto-update (e.g. via ZDT feature) from that git repo / branch.
If your needs are more simple, you can install composer directly onto your php runtime node in the normal way via SSH.
E.g.
$ curl -sS https://getcomposer.org/installer | php
There are more detailed tips about how to tidy that up (add to your PATH etc.) at http://kb.layershift.com/jelastic-install-composer
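For example, the standard follow-up after that installer (the target path assumes a typical Linux layout and root access on the node):

```shell
# Download the installer and run it with PHP
curl -sS https://getcomposer.org/installer | php

# Optional: move the phar onto your PATH so "composer" works everywhere
mv composer.phar /usr/local/bin/composer
composer --version
```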

Azure using cached deployment script

I have a PHP website that is automatically being deployed to Azure, this works well.
However, I would like to use a custom deployment script to automate moving certain files to Azure Storage/CDN on deploy. I've set up azure-cli using npm and created a deployment script using azure site deploymentscript --php. The deployment scripts were created and I've added my own script at the end of deploy.cmd.
My problem is that the generated deploy script does not appear to be used. In the Azure deploy log it says this:
Generating deployment script
Using cached version of deployment script (command: 'azure -y --no-dot-deployment -r "D:\home\site\repository" -o "D:\home\site\deployments\tools" --basic --sitePath "D:\home\site\repository"').
Running deployment command...
Command: "D:\home\site\deployments\tools\deploy.cmd"
etc...
That deploy.cmd is not the one it should be using; it's not the one I can change.
So, how do I get it to stop using a cached deploymentscript, or to use my custom deployment script?
Try adding the .deployment file in the root folder, and the deploy.cmd inside the specific project. This way I got Azure to do a non-cached deployment. Just ensure that you update the path to the .cmd file inside the .deployment file.
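For reference, the .deployment file is a small INI file that tells Azure's Kudu engine which script to run; a minimal sketch (the project path is an example):

```
[config]
command = myproject\deploy.cmd
```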
