I have an app on Heroku using PHP and PostgreSQL. Now I would like to back up my database regularly and either put the dumps in a folder on a server, or record their S3 URLs and download them.
I have been doing research on the topic. It seems the best option is the pgbackups add-on, which I already have and can use from the local command line, e.g.: heroku pgbackups:url --app=APP_NAME
I want to automate the process, let's say in a cron job. I see Heroku has workers, but I have never used them and this is still a development environment; the free plan does not include workers. Besides, my app doesn't really require background workers, and I don't want to buy worker dynos only for automatic database backups. Which way should I go?
If I can create PHP cron jobs on Heroku, then I need to know: how can I run Heroku commands from PHP? I tried exec and passthru, but neither of them seems to work on the Heroku server. On my localhost the above command (heroku pgbackups) works fine, provided the Heroku Toolbelt is installed on the local machine.
For Ruby there is a toolkit for server-side commands (https://github.com/heroku/), but I had no luck finding a PHP equivalent...
The overall purpose is to create the DB backup, store it on our own server, and download it. (Even though Heroku makes backups itself, we want to have them in our own hands. :)
How can I make it happen?
Probably the best thing to do is to have a cron job on your backup server that runs heroku pgbackups:url to get a URL to the latest pgbackup, and then downloads it with curl. Something like this:
curl $(heroku pgbackups:url) > latest_backup
For more info about heroku pgbackups:url, see:
https://devcenter.heroku.com/articles/pgbackups#downloading-a-backup
Doing anything with worker dynos wouldn't really make sense, because it wouldn't help you get the backup onto your backup server unless you were downloading and re-uploading it somewhere. Just running a cron job on your backup server and downloading it once is a lot more straightforward.
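If you'd still rather drive it from PHP (for example because cron on that box already runs PHP scripts), here is a rough sketch of the same idea. It assumes the Heroku Toolbelt is installed and authenticated on the backup server; APP_NAME and the target directory are placeholders.

<?php
// Rough PHP equivalent of the curl one-liner above, meant to be run from cron
// on your own backup server (not on a Heroku dyno). Assumes the Heroku
// Toolbelt is installed and logged in there.
$app = 'APP_NAME';
$out = shell_exec('heroku pgbackups:url --app=' . escapeshellarg($app));
$url = $out === null ? '' : trim($out);
if ($url === '') {
    fwrite(STDERR, "Could not get a backup URL from heroku pgbackups:url\n");
    exit(1);
}
$target = '/var/backups/heroku/latest_backup_' . date('Y-m-d') . '.dump';
// copy() streams the signed S3 URL straight to disk (needs allow_url_fopen=On).
if (!copy($url, $target)) {
    fwrite(STDERR, "Download failed\n");
    exit(1);
}
echo "Saved backup to $target\n";

A crontab entry such as 0 3 * * * php /path/to/heroku_backup.php (path is a placeholder) would then run it nightly.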
When I ran my web site on my old server, I would launch Transmit on my Mac (OS X 10.11.6), connect to the server, Control-click to open the remote PHP file, make the fix and save. The file was updated on the server within a second. That was great for running php/mysql/google_service tests that I can't run locally.
Now I have just moved my project to an Amazon server (AWS). Every time I need to run a test (for example against the S3 bucket, which I can't do locally), or modify a variable or change a flag, I have to do it in my local php/html/java/css/apis project, zip it, upload it via the Elastic Beanstalk panel, wait about half a minute, and then run it. I have found no way to edit a single file easily (open, edit, save) as I did before through Transmit. I can't keep going this way; it's wasting my time.
Do you know any better way to develop/test/run my project on AWS?
Have you considered using Docker?
https://aws.amazon.com/about-aws/whats-new/2015/04/aws-elastic-beanstalk-cli-supports-local-development-and-testing-for-docker-containers/
Or use MAMP?
https://www.mamp.info/en/
How do I get the changes from live back into my repo? The files running on the Heroku app have changed, and if I push now they will be overwritten.
I have my PHP code running on Heroku, storing 'database' things in local files:
{
"id":1,
"date":"12/1/2012",
"topImg":"/img/dates/1.jpg"
.....
So these things are stored in a JSON object and then just saved over the file.
Don't do this!
Local files are your enemy, because Heroku is a cloud application host that runs applications on multiple anonymous load-balanced nodes.
Perhaps you're running a single dyno right now for development purposes, but if you ever want to make your site go live you'll need at least two dynos (because Heroku free tier service is qualitatively different from their non-free tier service, particularly in that they will spin down a free dyno if it is not being used but they will never do that to a non-free dyno). When you have multiple dynos, using local files for anything other than caching will be totally unmanageable.
Even if you somehow stay with one dyno forever, Heroku dynos are not guaranteed to maintain their local storage -- if for instance there is a hardware failure on the machine your dyno is served from, Heroku will not hesitate to spin down your application, deleting all local storage, and spin it up again with just your application code loaded, because it does not expect your application to be using local storage for anything.
There is no one supported method for getting files off of a dyno, because, again, it's never a good idea to store local files on a dyno. However, if you really, really need to do this, you can use heroku run and run one-off commands to, for instance, open up a shell and upload the files somewhere. Again: do not do this for anything serious, because once you have multiple dynos it'll be nearly impossible to manage files on them.
Totally agree with @Andrew. Prefer something like MongoDB as a database-as-a-service with Heroku (https://addons.heroku.com/catalog/mongolab), or Elasticsearch if you want to add search over those documents (https://addons.heroku.com/catalog/searchbox). They are well designed for storing JSON docs, and with those services you can be sure your data will persist no matter what happens to your dynos.
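As a minimal sketch of that approach (assuming the MongoLab add-on, which sets a MONGOLAB_URI config var, and the mongodb/mongodb Composer library; the database, collection and field names are just placeholders), the JSON document from the question could be stored like this instead of in a local file:

<?php
// composer require mongodb/mongodb (also needs the mongodb PECL extension).
require 'vendor/autoload.php';

// MONGOLAB_URI is set by the MongoLab add-on; adjust if your add-on differs.
$client = new MongoDB\Client(getenv('MONGOLAB_URI'));
$dates  = $client->selectDatabase('myapp')->selectCollection('dates');

// Upsert the document that used to live in a local JSON file.
$doc = ['id' => 1, 'date' => '12/1/2012', 'topImg' => '/img/dates/1.jpg'];
$dates->updateOne(['id' => $doc['id']], ['$set' => $doc], ['upsert' => true]);

// Later, read it back instead of reading the file.
$stored = $dates->findOne(['id' => 1]);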
Now, to get your local files back off Heroku, I would do something like this:
run the Heroku bash with heroku run bash
run scp -P yourPort yourFile(s) userName@yourDestination:/pathToSaveLocation
log out from your Heroku instance
I hope this will help you.
Our website currently backs up every night to a separate server that we have, which is fine, but when we go to download the files the next day it takes a long time (usually around 36,000+ images) and affects the speeds of everyone else using our network. Ideally we would do the download in the middle of the night as well, except there's no one here to do it.
The server the backup is on is running cPanel, which appears to make it fairly simple to run a PHP file as a cron job.
I'm assuming the following; feel free to tell me if I'm wrong.
1) The server the backup is on runs cPanel. It appears that it shouldn't be too difficult to set up a PHP script to run as a cron job in the middle of the night.
2) We could deploy a PHP script that uses the FTP functions to connect to another server and start transferring these files as part of that cron job.
3) We are running XAMPP on a Windows platform. It includes FileZilla Server, so I'm assuming it should be able to accept incoming FTP connections.
4) Overall, we could deploy a script on the backup server that would run every night and send the files back to my local computer running XAMPP.
So that's what I'm guessing, but I'm getting stuck at the first hurdle. I've tried to create a script that runs on our local computer and sends a specified folder to the backup server when it executes, but all I seem to be able to find are scripts relating to single files. Although I have some experience with PHP, I haven't touched the FTP functions before, and they are giving me some problems. I've tried the other examples here on Stack Overflow with no success. :(
I'm just looking for the simplest form of a script that can upload a folder to a remote IP. Any help would be appreciated.
There is a fair amount of overhead involved in transferring a bunch of small files over FTP. I've seen such jobs take 5x as long, even over a local network. It is far easier to pack the files into something like a zip and send them as one large file.
You can use exec() to run zip from the command line (or whatever compression tool you prefer). After that, you can send the archive over FTP pretty quickly (you said you found methods for transferring a single file). For backup purposes, having the files zipped probably makes things easier to handle, but if you need them unzipped you can set up a job on the other machine to unpack the archive.
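As a rough sketch of that zip-then-FTP idea (paths, host and credentials are placeholders; it assumes the zip command is available and PHP's FTP extension is enabled):

<?php
// Pack everything into one archive so we transfer a single large file
// instead of ~36,000 small ones, then upload it over FTP.
$srcDir  = '/home/site/public_html/images';
$archive = '/tmp/images_' . date('Y-m-d') . '.zip';

exec('zip -r ' . escapeshellarg($archive) . ' ' . escapeshellarg($srcDir), $out, $rc);
if ($rc !== 0) {
    exit("zip failed\n");
}

$conn = ftp_connect('192.0.2.10');           // destination server IP (placeholder)
if (!$conn || !ftp_login($conn, 'ftpuser', 'ftppassword')) {
    exit("FTP connection/login failed\n");
}
ftp_pasv($conn, true);                       // passive mode usually plays nicer with firewalls
if (!ftp_put($conn, basename($archive), $archive, FTP_BINARY)) {
    exit("FTP upload failed\n");
}
ftp_close($conn);
echo "Uploaded " . basename($archive) . "\n";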
I've been working on a cloud-based (AWS EC2) PHP web application, and I'm struggling with one issue when it comes to working with multiple servers (all under an AWS Elastic Load Balancer). On one server, when I upload the latest files, they're instantly in production across the entire application. But this isn't true with multiple servers: you have to upload files to each of them every time you commit a change. That could work all right if you don't update anything very often, or if you just have one or two servers, but what if you update the system multiple times in one week, across ten servers?
What I'm looking for is a way to 'commit' changes from our dev or testing server and have them 'pushed' out to all of our production servers immediately. Ideally the update would be applied to one server at a time (even though it only takes a second or two per server), with the ELB not sending traffic to that server while its files are changing, so that no production traffic flowing through the ELB is disrupted.
What is the best way of doing this? One thought was to use SVN on the dev server, but that doesn't really 'push' to the servers. I'm looking for a process that takes just a few seconds to commit an update and then begins applying it to the servers. Also, for those of you familiar with AWS, what's the best way to update an AMI with the latest code so the auto-scaler always launches new instances with the latest version of the software?
There have to be good ways of doing this... I can't really picture sites like Facebook, Google, Apple, Amazon, Twitter, etc. updating hundreds or thousands of servers manually, one by one, every time they make a change.
Thanks in advance for your help. I'm hoping we can find a solution to this problem; what must be at least 100 Google searches by my business partner and me over the last day have mostly proven unsuccessful.
Alex
We use scalr.net to manage our web servers and load balancer instances, and it has worked pretty well so far. We have a server farm for each of our environments (two production farms, staging, sandbox). We have pre-configured roles for the web servers, so it's super easy to launch new instances and scale when needed. The web servers pull code from GitHub when they boot up.
We haven't completed all the deployment changes we want to make, but basically here's how we deploy new versions into our production environment:
We use Phing to update the source code and run the deployment on each web server. We created a task that executes a git pull and runs the database changes (the dbdeploy Phing task). http://www.phing.info/trac/
We wrote a shell script that executes Phing and added it to Scalr as a script. Scalr has a nice interface for managing scripts.
#!/bin/sh
cd /var/www
phing -f /var/www/build.xml -Denvironment=production deploy
Scalr has an option to execute scripts on all the instances in a specific farm, so for each release we just push to the master branch on GitHub and execute the Scalr script.
We want to create a GitHub hook that deploys automatically when we push to the master branch. Scalr has an API that can execute scripts, so it's possible.
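As a rough sketch of that hook (the secret and the deploy command are placeholders; in practice the command would call the Scalr API or the Phing script above rather than deploy locally), a small PHP endpoint on the build box could verify GitHub's push signature and trigger the release:

<?php
// Sketch of a GitHub push-hook receiver. GitHub POSTs the payload here on
// every push; we check the HMAC signature and kick off the deploy.
// GITHUB_HOOK_SECRET and deploy-production.sh are placeholders.
$secret  = getenv('GITHUB_HOOK_SECRET');
$payload = file_get_contents('php://input');
$sig     = isset($_SERVER['HTTP_X_HUB_SIGNATURE']) ? $_SERVER['HTTP_X_HUB_SIGNATURE'] : '';

// GitHub signs the raw payload with HMAC-SHA1 and sends "sha1=<hex digest>".
$expected = 'sha1=' . hash_hmac('sha1', $payload, $secret);
if (!hash_equals($expected, $sig)) {
    http_response_code(403);
    exit('bad signature');
}

$event = json_decode($payload, true);
// Only deploy on pushes to master.
if (isset($event['ref']) && $event['ref'] === 'refs/heads/master') {
    // Placeholder: trigger the Scalr/Phing deployment asynchronously.
    shell_exec('/usr/local/bin/deploy-production.sh > /dev/null 2>&1 &');
}
echo 'ok';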
Have a good look at KwateeSDCM. It enables you to deploy files and software on any number of servers and, if needed, to customize server-specific parameters along the way. There's a post about deploying a web application on multiple Tomcat instances, but it's language-agnostic and will work just as well for PHP, as long as you have SSH enabled on your AWS servers.
... and sorry for my English ...
I have something like an IS written in PHP, and I would like to build an updating system for my customers. Here is my vision:
I upload the new version to an FTP server (or web server).
After clicking Update, the system should compare versions (done), back up the old scripts (done) and then update like rsync: delete deleted files, replace changed ones, and add new files and folders.
For rsync I would have to open an SSH hole to my server, and I don't want to do that. I found zsync, but it is designed for single files, not for a whole folder tree.
Is there any easy way to do this? Some smart Linux utility, or a ready-made script in PHP?
Thanks for answers!
rsync can run on its own port, without any ssh involved at all: https://help.ubuntu.com/community/rsync#Rsync Daemon
But this assumes you're comfortable allowing the world at large to access your program. Do you want to restrict it to just your customers?
You could also publish your source via git; git can run as a daemon (http://www.kernel.org/pub/software/scm/git/docs/git-daemon.html) without requiring SSH. Again, this assumes you're comfortable with the world at large getting your program.
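If you'd rather stay in pure PHP over plain HTTP(S) instead of exposing an rsync or git daemon, one option is a manifest of file hashes: the update server publishes manifest.json (relative path => md5) next to the release files, the client adds or replaces anything that differs and deletes what is gone. A minimal sketch follows; the URL, paths and manifest format are all assumptions, and your existing backup step should run before it.

<?php
// Minimal sketch of an rsync-like update over plain HTTP: the update server
// publishes manifest.json (relative path => md5) alongside the release files.
// $baseUrl and $root are placeholders; run your backup step before this.
$baseUrl = 'https://updates.example.com/myapp/1.2.0/';
$root    = __DIR__;

$manifest = json_decode(file_get_contents($baseUrl . 'manifest.json'), true);

// Add new files and replace changed ones.
foreach ($manifest as $relPath => $md5) {
    $local = $root . '/' . $relPath;
    if (!is_file($local) || md5_file($local) !== $md5) {
        if (!is_dir(dirname($local))) {
            mkdir(dirname($local), 0755, true);
        }
        file_put_contents($local, file_get_contents($baseUrl . $relPath));
    }
}

// Delete local files that no longer exist in the new version.
// (In practice you would whitelist the directories this is allowed to touch.)
$it = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS)
);
foreach ($it as $file) {
    $rel = ltrim(str_replace($root . '/', '', $file->getPathname()), '/');
    if (!isset($manifest[$rel])) {
        unlink($file->getPathname());
    }
}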