Is redis on Heroku possible without an addon? - php

I'm looking into using Heroku for a PHP app that uses Redis. I've seen the various add-ons for Redis. With Redis To Go, for example, you can use an environment variable, $_ENV['REDISTOGO_URL'], in your PHP code as the URL of the Redis server.
Most of these add-ons have their own pricing schemes, which I'd like to avoid. I'm a little confused about how Heroku works. Is there a way I can just install Redis on my own dynos without the add-ons?
For example, could I have one worker dyno that acts as a server and another that acts as a client? If possible, how would I go about:
1. Installing and running the Redis server on a dyno? Is this the same as installing on any other Unix box? Can I just SSH in and install whatever I want?
2. Having one dyno connect to another via an IP/port over TCP? Do the worker dynos have their own referenceable IP addresses or named URLs that I can use? Can I get them dynamically from PHP somehow?
The PHP code for a Redis client assumes there is a host and port you can connect to, but I have no idea what they would be:
$redis = new Predis\Client(array(
    "scheme" => "tcp",
    "host"   => $host, // how do I get the host/port of a dyno?
    "port"   => $port
));

Running Redis on a dyno is an interesting idea. You will probably need to create a Redis buildpack so your dynos can download and run Redis. As "redis has no dependencies other than a working GCC compiler and libc", this should be technically possible.
However, here are some problems you may run into:
Heroku dynos don't have a static IP address
"dynos don’t have static IP addresses .. you can never access a dyno directly by IP"
Even if you set up and run Redis on a dyno I am not aware of a way to locate that dyno instance and send it redis requests. This means your Redis server will probably have to run on the same dyno as your web server/main application.
This also means that if you attempt to scale your app by creating more web dynos you will also be creating more local redis instances. Data will not be shared between them. This does not strike me as a particularly scalable design, but if your app is small enough to only require one web dyno it may work.
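For illustration, here is a minimal sketch of what the client side would look like under that design, assuming you have managed to start a Redis server on the same dyno listening on the default port (Predis installed via Composer; none of this is provided by Heroku itself):

<?php
// Sketch only: assumes a Redis server was started on this same dyno (e.g. via
// a custom buildpack) and listens on the default port. Other dynos cannot
// reach it, so the client connects to localhost. Requires Predis via Composer.
require 'vendor/autoload.php';

$redis = new Predis\Client(array(
    'scheme' => 'tcp',
    'host'   => '127.0.0.1', // same-dyno Redis only
    'port'   => 6379,
));

$redis->set('greeting', 'hello from the local dyno');
echo $redis->get('greeting'), PHP_EOL;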
Heroku dynos have an ephemeral filesystem
"no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted"
By default Redis writes its RDB file and AOF log to disk. You'll need to regularly back these up somewhere so you can fetch and restore after your dyno restarts. See the documentation on Redis persistence.
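If you do run Redis on a dyno, a backup step would have to run periodically from your own code. A rough sketch, assuming Redis writes its snapshot into a redis-data directory inside the app (the paths are placeholders) and that the copied file must ultimately be pushed to external storage such as S3, since the dyno filesystem is ephemeral:

<?php
// Rough backup sketch: assumes Redis on this dyno writes dump.rdb into
// ./redis-data (placeholder path); the copy destination below is also a
// placeholder and should really be an upload to durable external storage.
require 'vendor/autoload.php';

$redis = new Predis\Client(array('host' => '127.0.0.1', 'port' => 6379));

$redis->bgsave();  // ask Redis to write a fresh RDB snapshot
sleep(5);          // crude wait; poll $redis->lastsave() in real code

$rdbPath   = __DIR__ . '/redis-data/dump.rdb';
$backupDir = __DIR__ . '/backups';
if (!is_dir($backupDir)) {
    mkdir($backupDir, 0777, true);
}
if (is_file($rdbPath)) {
    copy($rdbPath, $backupDir . '/dump-' . date('Ymd-His') . '.rdb');
}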
Heroku dynos are rebooted often
"Dynos are cycled at least once per day, or whenever the dyno manifold detects a fault in the underlying hardware"
You'll need to be able to start your redis server each time the dyno starts and restore the data.
Heroku dynos have 512MB of RAM
"Each dyno is allocated 512MB of memory to operate within"
If your Redis server is running on the same dyno as your web server, subtract the RAM needed for your main app. How much Redis memory do you need?
Here are some questions attempting to estimate and track Redis memory use:
Redis: Database Size to Memory Ratio?
Profiling Redis Memory Usage
--
Overall: I suggest reading up on 12 Factor Apps to understand a bit more about heroku's intended application model.
The short version is that dynos are intended to be independent workers that can be easily created and discarded to meet demand, and that dynos access various resources to read or write data and serve your app. A redis instance is an example of a resource. As you can see from the items above, by using a redis add-on you're getting something that's guaranteed to be static, stable, and accessible.
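For comparison, this is roughly what the add-on approach looks like in PHP: the add-on exposes the Redis instance through a config var (like the REDISTOGO_URL mentioned in the question), and the app just parses it at runtime. The sketch assumes a redis://user:password@host:port style URL; adjust for whichever add-on you pick.

<?php
// Sketch of the add-on approach: the Redis location comes from a config var
// such as REDISTOGO_URL. The URL format is an assumption; check your add-on.
require 'vendor/autoload.php';

$url   = getenv('REDISTOGO_URL');   // or $_ENV['REDISTOGO_URL']
$parts = parse_url($url);

$redis = new Predis\Client(array(
    'scheme'   => 'tcp',
    'host'     => $parts['host'],
    'port'     => $parts['port'],
    'password' => isset($parts['pass']) ? $parts['pass'] : null,
));

$redis->ping();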
Reading material:
http://www.12factor.net/ - specifically Processes and Services
The Heroku Process Model
Heroku Blog - The Process Model

Redis has a client-server architecture: you can install it on one machine (in your case, a dyno) and access it from any client.
For more help on client libraries you can refer to this link,
or you can go through this Redis documentation, which is a simple case study of implementing a Twitter clone using Redis as the database and PHP.

Related

redis : 40+ servers reading the same redis content

I'm gathering sports data every minute with PHP scripts and storing it in Redis. It's all done on one Ubuntu 16.04 server. Let's call it the collector server.
My goal is to have that Redis generated database available to our customers. The DB will only be read-only to our customers.
The way we connect customers' servers to our Redis content is by directly pointing them to the Redis host:port of that collector server. If all our clients want to access the DB, I'm afraid the collector server would get stuck (40+ customers)...
That Redis content is updated every minute, and we are the owners of the customers' servers and content.
Is there a setup to do in Redis, or other ways, to have 40+ external servers reading the same Redis content DB without killing the collector server?
Before scaling, I recommend that you benchmark your application against Redis with real and/or simulated load - a single Redis server can handle an impressive load (see https://redis.io/topics/benchmarks), so you may be over-engineering this.
That said, to scale reads only, read about Redis' replication. If you want to scale writes as well, read about Redis cluster.
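As a rough sketch of the read-scaling idea: the collector server stays the master, one or more replicas are configured to replicate from it, the collector scripts keep writing to the master, and customer-facing reads are spread across the replicas. Hostnames and key names below are placeholders, and the replicas are assumed to be already set up to replicate from the collector.

<?php
// Sketch only: collector.example.com stands in for the existing master; the
// replica hosts are placeholders already replicating from it.
require 'vendor/autoload.php';

$master   = new Predis\Client('tcp://collector.example.com:6379');
$replicas = array(
    new Predis\Client('tcp://replica1.example.com:6379'),
    new Predis\Client('tcp://replica2.example.com:6379'),
);

// The collector scripts keep writing to the master as before...
$master->set('scores:latest', json_encode(array('home' => 2, 'away' => 1)));

// ...while customer-facing reads are spread across the replicas.
$replica = $replicas[array_rand($replicas)];
echo $replica->get('scores:latest'), PHP_EOL;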
+1 for Itamar's answer. But one more important thing you should keep in mind: letting your customers connect to your Redis resource directly is dangerous and should be avoided.
They will have your host:port and password and they will be able to connect, write, modify, delete, and even shutdown or change your password.
It is not scalable, and you'll probably notice it when it is already too late and too hard to change.
Some customers might have trouble connecting through routers and firewalls with the non-standard TCP port.
You should have one or more app servers that do the Redis communication on behalf of your customers.
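A minimal sketch of that idea: a small PHP endpoint on your own app server that exposes only a whitelist of read-only keys, so customers never see the Redis host, port, or password. The key names in the whitelist are illustrative only.

<?php
// Tiny read-only endpoint in front of Redis; customers call this over HTTP
// instead of connecting to Redis directly.
require 'vendor/autoload.php';

$redis = new Predis\Client(array('host' => '127.0.0.1', 'port' => 6379));

$allowedKeys = array('scores:latest', 'fixtures:today'); // expose only what customers need
$key = isset($_GET['key']) ? $_GET['key'] : '';

header('Content-Type: application/json');

if (!in_array($key, $allowedKeys, true)) {
    http_response_code(400);
    echo json_encode(array('error' => 'unknown key'));
    exit;
}

echo $redis->get($key);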

Easiest way to deploy a simple AWS Beanstalk PHP app without any PHP frameworks, using RDS + ElastiCache for data

Hope someone can point me. Google doesn't yield much that's simple to understand (there's stuff like Pheanstalk, etc.), and Amazon's own Beanstalk documentation is, as always, woefully arcane, presuming that we use Laravel or Symfony2.
We have a simple set of 10 PHP scripts that constitute our entire "website", with fast functional programming. In our testing this has been much faster than doing the same things with needless OOP. Anyway, with PHP 7, we're very happy with the simple functional code we have.
We could go the EC2 route. Two EC2 servers load balanced by ELB. Both EC2 servers just have Nginx running with PHP-FPM, and calling the RDS stuff for data (ElastiCache for some caching speed for read-only queries).
However, the idea is to lower management costs for EC2 by relying on Beanstalk for the simple processing that's needed in these 10 PHP scripts.
Are we thinking the right way? Is it simple to "upload" scripts to Beanstalk the way we do on EC2 via SSH or SFTP? Or is that only available programmatically via git etc.?
You can easily replicate your EC2 environment to Elastic Beanstalk using Docker containers.
Create a Docker container that contains required packages (nginx etc), any configuration files, and your PHP scripts. Then you'd deploy the container to Beanstalk.
With Beanstalk, you can define environment variables that are passed to the underlying EC2 instances where your application is running. Typically, one would use environment variables to pass, for example, the RDS hostname, username, and password to the Beanstalk application.
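For example, in PHP you would read those variables at runtime rather than hard-coding credentials. The variable names below (RDS_HOSTNAME and friends) are the ones Beanstalk conventionally sets when an RDS instance is attached to the environment; treat them as assumptions and verify them in your own configuration.

<?php
// Reads database settings from Beanstalk environment variables instead of
// hard-coding them; the RDS_* names are assumed, check your environment.
$host = getenv('RDS_HOSTNAME');
$port = getenv('RDS_PORT');
$name = getenv('RDS_DB_NAME');
$user = getenv('RDS_USERNAME');
$pass = getenv('RDS_PASSWORD');

$pdo = new PDO("mysql:host={$host};port={$port};dbname={$name}", $user, $pass);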
Additionally, you can store the Dockerfile, configuration files, and scripts in your git repository for version control, and fetch them whenever you create the container.
See the AWS documentation about deploying Beanstalk applications from Docker containers.

Remote Redis connection slow

I am experimenting with using Redis for a Drupal website, hosted on Ubuntu 14.04.
I have installed the redis drupal module and am using the Predis library. I have also installed the 'redis-server' Ubuntu package and left the default configuration.
Configuring the Drupal site to use Redis for its cache backend works fine and the pages are lightning fast.
The problem arrived when I tried to spin up an m3.medium AWS instance and host the Redis server there. The reason behind this is so that we can use one Redis server and connect to it from multiple servers (the live website is hosted on multiple instances behind a load balancer, so each instance should connect to the same Redis server).
I have set up the Redis server on the instance, modified the redis.conf file to bind the correct IP address so it can be accessed from the outside, opened up port 6379, then tried connecting to it from my local computer:
redis-cli -h IP
It worked fine so I decided to flip my local site's configuration to point to the new redis server.
The moment I did that, the site became painfully slow, and at first I thought it might not even load at all. After almost a minute it finally loaded the home page. Clicking around the site was almost as slow, though the time reduced to maybe 10-15 seconds. That is still unacceptable and doesn't even compare to the lightning-fast page loads when using the local Redis server.
My question is: is there some specific configuration I need to do to make the remote connection faster? Is there something preventing it from performing well? some bottleneck somewhere?
Let me know if you want me to add the drupal settings.php configuration, although I am using a pretty standard config.
Although I ran the same configuration you are trying for a PHP application, I had no issues hosting Redis on either a small or medium instance while handling large amounts of traffic. There must be a config issue somewhere. Another option to debug it would be to try switching to ElastiCache (AWS' Redis offering); it requires that all clients be within the same region, but could make finding your problem very easy.
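One quick way to narrow it down is to check whether per-command network latency is the bottleneck: time a batch of trivial commands against the remote server and compare with the same test against localhost. A cache-backed page can issue many such round trips, so even a few milliseconds per command adds up. A rough sketch (the hostname is a placeholder):

<?php
// Crude latency probe: times a batch of PING commands against the remote
// server. Run it once against the remote host and once against 127.0.0.1.
require 'vendor/autoload.php';

$redis = new Predis\Client(array('host' => 'your-redis-host.example.com', 'port' => 6379));

$iterations = 100;
$start = microtime(true);
for ($i = 0; $i < $iterations; $i++) {
    $redis->ping();
}
$elapsedMs = (microtime(true) - $start) * 1000;

printf("average round trip: %.2f ms\n", $elapsedMs / $iterations);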

Comprehensive guide to setting up a data driven website using Amazon web services for EC2

I have started making a website and was hosting it on Hostgator, but I am going to move it to Amazon Web Services before launch. The small problem is that previously I just uploaded my files to the relevant location on Hostgator and it all just worked. I have no experience in setting up a production-worthy server from scratch and I need to know how. I did set up a basic LAMP stack on the EC2 instance; however, I keep reading that when the EC2 instance goes down it will take all the data with it, and I cannot have that happen. I have also read that when it dies it won't do anything and you have to start the Apache server again; it is not automatic. I need it to be reliable and have the data stored independently so it will not crash, burn, and die if the server goes.
I have worked out that I will need S3 for static things such as my PDFs and images, as well as RDS for my MySQL database. My domain name is registered elsewhere, so I believe I need to use Route 53 as well. I want to use AWS for a few reasons: firstly it can scale, which is really important, but I'm not sure if this is built in or requires customization. I have been told that EC2 is very secure, and the last reason is that I can debug my PHP code. The debugging reason is that I have an error that only appears on the Hostgator server, not on my local LAMP stack, and I can't debug it there, so I should be able to once I move to EC2.
I have done a lot of looking around online and I can't find anything comprehensive about what to set up. I have been reading (some of you may think otherwise); however, I am overwhelmed by the amount of information, as it is either far too complicated, discussing theory that I do not care about, or too easy, not discussing how to use anything other than a generic install of a LAMP stack on EC2 without the other services.
I have seen http://bitnami.com/stack/lamp/cloud/amazon but do not think this is what I want, as again the EC2 instance has a MySQL database and it is not using RDS.
If someone can point me in the direction of a comprehensive guide to setting up a solid LAMP stack on AWS (maybe a book has even been written) that would be great, as I found the Amazon docs did not go into much detail and told me how to do things but not why I should do them and what purpose they had.
Thanks
I'll start by answering your questions first. As you are a newbie, I would suggest you don't pressure yourself to learn all of AWS; you can keep migrating slowly and keep discovering the magic of the cloud.
Q. When the EC2 instance goes down it will take all the data with it and I can not have that happen. I have also read that when it dies it won't do anything and you have to start up the Apache server again; it is not automatic?
A. When an EC2 instance goes down (down could mean a manual shutdown by you, an AWS network outage, or some other instance issue), only the ephemeral data, i.e. data in RAM or sessions, will be lost; whatever is on disk will remain on disk, and the instance will be available as soon as the problem is resolved.
Apache will start itself when the instance restarts, and remains up until you manually shut it down or some other issue occurs.
Q. I will need S3 for static things such as my PDFs and images, as well as RDS for my MySQL database?
A. It's good practice to keep static content on S3, but it is not necessary; you can set up FTP or manage your static content the way you are used to, such as keeping it in a folder of your website.
You don't necessarily need RDS to have a MySQL database; I have a process running on AWS with around 40 million transactions a day, and I do it on a normal MySQL install on an EC2 instance.
However, having RDS frees you from the daily backup and index maintenance hassles.
Q. My domain name is registered elsewhere, so I believe I need to use Route 53 as well?
A. Again, not necessary; you can just go to your domain manager, give your EC2 instance or Elastic Load Balancer a static public IP, change the A or CNAME records to point at it, and you'll be up and running in no time.
Q. I want to use AWS for a few reasons, firstly as it can scale, which is really important, but I'm not sure if this is built in or requires customization.
A. It can scale really well, but it depends on how you want it to scale, and it's highly customizable.
There are two kinds of scaling:
Vertical - you change your instance type from one type to another to get better disk, CPU, RAM, or network performance, but this requires you to stop your EC2 instance and change its type, which means a downtime of around 10 minutes while you do so.
Horizontal - you can put your (EC2-based) website behind a load balancer (ELB - Elastic Load Balancer) and add or remove instances as and when you deem suitable, or you can have an auto scaling policy to do it automatically depending on the load at your web server.
Security? - you can rest assured it is very secure; secure enough that I would bet my life on a properly secured EC2 instance. I can swear by Linux Thor that it works, and it works like a charm.
Debugging? - I suggest you debug by classic means: log errors and so on, treat EC2 like a normal machine, and slowly learn the tricks of the trade.
Now let's set up a basic, solid LAMP stack. I am assuming that you have a running Ubuntu instance and can SSH to it; in case you haven't been able to make one, see this.
Basically:
1. Create security groups - this is your firewall; it controls which ports are open and which EC2 instances can talk amongst themselves.
2. Create an EC2 instance - make any Ubuntu instance. Access your instance using SSH - SSH is basically a secure terminal connection to your EC2 machine, secured by a key file (.pem file); whoever has it can access your machine's data, so keep it very secure, and don't lose it.
3. Install LAMP using the tasksel utility.
4. Set up a public IP for yourself (costs a dollar per month) - you can use this IP to redirect your www.example.com traffic using the domain manager of your DNS provider - GoDaddy or someone similar, I suppose.
I think this will be enough to get you started with AWS.
Just to be safe and have a copy of your data, make an AMI of your EC2 instance with all the data on it. An AMI is an image from which you can make a similar or better instance in 10 minutes flat (or even less).
You will pay for: the instance type you chose, the public IP, traffic beyond a certain level (usually very cheap), disk usage (8 GB is the default disk), and the AMI volume.
Have fun with AWS.
To retain data during downtime, make sure you use EBS storage. It's the default nowadays. In the past, before EBS, instance storage was the default and you would lose data once the server was down, but with EBS storage, data is retained through the shutdown.
You can go one of the following two routes, depending on your needs.
1. Use AWS Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/) if you do not need to install anything additional. It's super easy, similar to Google Apps, and you can deploy your app quickly. You do not get a server to manage, just a place to deploy your app. You have to use RDS for the database and S3 for storage; you cannot store locally on the server where you are running.
2. Use an EC2 server with a static IP address. You can get pre-configured LAMP stacks from the marketplace. I use Bitnami cloud stacks for AWS, which come pre-configured with LAMP and many other apps. Just use their free account to create a micro instance for your PHP, select a server, and you are good to go: http://bitnami.com/cloud
You do not need to use Route 53 unless you need to manage DNS programmatically. You can just point your domain to the EC2 server by adding an entry in your DNS (GoDaddy or whoever your domain name provider is).
The Bitnami service also allows scheduled backups, but if you are not storing anything locally, you do not need frequent backups.
Make sure you use the Multi-AZ option in RDS, which is more reliable. When you provision a Multi-AZ DB Instance, Amazon RDS automatically creates a primary DB Instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Also, Amazon RDS automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery up to the last 5 minutes.
I hope this helps.
You should be using DynamoDB (http://aws.amazon.com/dynamodb/pricing/) with LAMP, without MySQL, for storage. Having a same-box database can almost never give you reliability, so this way you will not lose your data whatever your application box goes through. You can even read your application config from DynamoDB.
http://aws.amazon.com/documentation/dynamodb/
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUpTestingSDKPHP.html
Do I need to use EC2 with DynamoDB?
You won't lose data when the server is down. Just make sure you select an EBS volume, and not instance storage.
You can get a ready-made server from the AWS Marketplace. I used the following for my projects, but there are many other pre-configured servers available.
https://aws.amazon.com/marketplace/pp/B007IN7GJA/ref=srh_res_product_title?ie=UTF8&sr=0-2&qid=1382655655469
This, with an RDS server, is what you need. We use this all the time for production servers and have never had any issues.
Here are two guides that look good to me:
http://shout.setfive.com/2013/04/05/amazon-aws-ec2-lamp-quickstart-guide-5-steps-in-10-minutes/
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
If learning the Linux command line isn't your thing, you should consider going "up the stack" to a PaaS (Platform As A Service). They are things like Heroku, Google App Engine, and ElasticBeanStalk.
The trade-off between Infrastructure as a Service (IaaS like EC2) and a Platform as a Service (PaaS like Heroku):
- PaaS is quicker to get started with and there is less to learn. IaaS requires you to know the entire stack from the start (or hire/rent a sysadmin).
- PaaS usually gets more expensive as you grow, compared to IaaS (but it depends).
- PaaS has less control (you can't choose the language version, so you can't upgrade to get around a specific bug.)
- IaaS can literally do anything (it's just a Linux box)
- IaaS allows for more tuning (upgrade libraries to get features, switch to different instance type to trade off RAM for CPU, run HipHop for speed, add caching layers, etc)
You have a few choices:
Use only EC2. Install Apache+MySQL and your dynamic website on EC2. This will be very similar to setting it up on Hostgator except you are running a full server.
Use EC2 for "compute" (that is, the dynamic part of the site) and S3 for storage. This doesn't differ much from #1 above, except that you are using S3 for static file storage - which is great if you are expecting to host a lot of static content (multimedia, etc)
Set up your website using Amazon Elastic Beanstalk (which now supports PHP). However, if you go this route, you will need to host your database somewhere - which will likely be RDS.
I recommend going with #1. There is nothing wrong with that - yes, if EC2 goes down, it will take down your site with it, but to alleviate that, you can run two servers in two different regions (one in US East and one in US West) - I don't think two EC2 regions have ever gone down at the same time.
UPDATE: If you are concerned about backup/restore and making sure your data is safe, I recommend the following (I do this with a site in production on EC2):
Put your website code into Git/SVN source control and pull from there
Backup your MySQL database to Amazon S3 regularly (at least once a day) using mysqldump
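A rough sketch of such a backup script, run from cron on the instance; the credentials, database name, and backup path are placeholders, and shipping the resulting file to S3 (with the AWS SDK or CLI) is a separate step:

<?php
// Daily mysqldump backup sketch. Replace the placeholder credentials and
// paths; the dump is compressed and written to a dated file.
$file = sprintf('/var/backups/mysql/site-%s.sql.gz', date('Y-m-d'));

$cmd = sprintf(
    'mysqldump --single-transaction -u%s -p%s %s | gzip > %s',
    escapeshellarg('dbuser'),
    escapeshellarg('dbpassword'),
    escapeshellarg('sitedb'),
    escapeshellarg($file)
);

exec($cmd, $output, $status);
if ($status !== 0) {
    error_log('mysqldump backup failed with status ' . $status);
}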
I think you have some misconceptions.
If EC2 as a whole goes down (which is rare) then you do NOT lose your data. The site would simply be offline until Amazon restored services.
If your particular instance goes down due to a hardware issue, then you might lose data. This is no different than if your own server went belly up. The right answer is to simply make normal backups of your database and store it in S3 or some other location. Generally you will want to create and attach a second EBS volume to your DB server which has the DB files on it as well.
If you Terminate your instance then, yes, you will lose everything on it. However, Amazon makes terminating instances difficult so you don't do it accidentally.
Stopping your instance is like turning the computer off. The difference being that you can remotely turn it back on when you want. You can only stop EBS backed instances - which means that your data is safe while it is offline.
I would highly suggest that if you are uncomfortable with setting up and maintaining your own server that you should investigate fully managed hosting instead. EC2 is awesome, we've been on it for 2 years. However, we have a strong tech team that understands what it takes to run and manage servers.

Pull live code from heroku

How do I get the changes from live into my repo? The files running on the Heroku app have changed, and if I push now they will be overwritten.
I have my PHP code running on Heroku, storing 'database' things in local files.
{
"id":1,
"date":"12/1/2012",
"topImg":"/img/dates/1.jpg"
.....
So these things are stored in a JSON object and then just saved over.
Don't do this!
Local files are your enemy, because Heroku is a cloud application host that runs applications on multiple anonymous load-balanced nodes.
Perhaps you're running a single dyno right now for development purposes, but if you ever want to make your site go live you'll need at least two dynos (because Heroku free tier service is qualitatively different from their non-free tier service, particularly in that they will spin down a free dyno if it is not being used but they will never do that to a non-free dyno). When you have multiple dynos, using local files for anything other than caching will be totally unmanageable.
Even if you somehow stay with one dyno forever, Heroku dynos are not guaranteed to maintain their local storage -- if for instance there is a hardware failure on the machine your dyno is served from, Heroku will not hesitate to spin down your application, deleting all local storage, and spin it up again with just your application code loaded, because it does not expect your application to be using local storage for anything.
There is no one supported method for getting files off of a dyno, because, again, it's never a good idea to store local files on a dyno. However, if you really, really need to do this, you can use heroku run and run one-off commands to, for instance, open up a shell and upload the files somewhere. Again: do not do this for anything serious, because once you have multiple dynos it'll be nearly impossible to manage files on them.
Totally agree with @Andrew. Prefer to use something like MongoDB as a database-as-a-service with Heroku: https://addons.heroku.com/catalog/mongolab, or Elasticsearch if you want to add a search function over those documents: https://addons.heroku.com/catalog/searchbox. They are well designed to store JSON docs and, with those services, you are sure that your data will be persistent no matter what happens to your dynos.
Now, to get back your Heroku local files, I would do something like this:
Run the Heroku bash with heroku run bash
Do scp -P yourPort yourFile(s) userName@yourDestination:/pathToSaveLocation
Log out from your Heroku instance
I hope this will help you.
