Remote Redis connection slow - php

I am experimenting with using Redis for a Drupal website, hosted on Ubuntu 14.04.
I have installed the Redis Drupal module and am using the Predis library. I have also installed the 'redis-server' Ubuntu package and left the default configuration.
Configuring the Drupal site to use Redis for its cache backend works fine and the pages are lightning fast.
The problem arose when I spun up an m3.medium AWS instance and hosted the redis server there. The reason is that we want to use one redis server and connect to it from multiple servers (the live website is hosted on multiple instances behind a load balancer, so each instance should connect to the same redis server).
I set up the redis server on the instance, modified the redis.conf file to bind the correct IP address so it can be reached from outside, opened port 6379, then tried connecting to it from my local computer:
redis-cli -h IP
It worked fine so I decided to flip my local site's configuration to point to the new redis server.
The moment I did that, the site became painfully slow; at first I thought it might not load at all. After almost a minute it finally loaded the home page. Clicking around the site was almost as slow, though the time dropped to maybe 10-15 seconds. That is still unacceptable and doesn't even compare to the lightning-fast page loads with the local redis server.
My question is: is there some specific configuration I need to make the remote connection faster? Is there something preventing it from performing well, some bottleneck somewhere?
Let me know if you want me to add the drupal settings.php configuration, although I am using a pretty standard config.
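For reference, the standard wiring for the Redis module with Predis looks roughly like this in settings.php (the host below is a placeholder for the remote server's IP):

<?php
// settings.php - typical Drupal 7 Redis module configuration using Predis.
// Host/port are placeholders; adjust the module path for your install.
$conf['redis_client_interface'] = 'Predis';
$conf['redis_client_host']      = '10.0.0.5'; // remote Redis IP (placeholder)
$conf['redis_client_port']      = 6379;
$conf['cache_backends'][]       = 'sites/all/modules/redis/redis.autoload.inc';
$conf['cache_default_class']    = 'Redis_Cache';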

Although I ran the same configuration for a PHP application as you are trying, I had no issues hosting redis on either a small or medium instance and handling large amounts of traffic. There must be a config issue somewhere. Another option to debug it would be to try switching to ElastiCache (AWS's hosted Redis offering); it requires that all clients be within the same region, but it could make finding your problem very easy.
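Before hunting for a config issue, it is worth measuring raw round-trip latency: Drupal's cache backend issues many Redis commands per page, and each one pays a full network round trip, so even a few milliseconds per command multiplies into seconds. A rough sketch with Predis (the host is a placeholder):

<?php
// Rough round-trip latency check against a remote Redis (placeholder host).
require 'vendor/autoload.php';

$redis = new Predis\Client(['scheme' => 'tcp', 'host' => '10.0.0.5', 'port' => 6379]);

$n = 100;
$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    $redis->ping();
}
$ms = (microtime(true) - $start) * 1000 / $n;
printf("avg round trip: %.2f ms (a full page may issue hundreds of commands)\n", $ms);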

Related

Slow response from PHP code on a VPS

I have a simple site on a VPS. I used WAMP server and additionally set up PostgreSQL instead of MySQL. The site works fast on localhost and from the public IP, but...
I have a Webix grid on one page, and a PHP file puts data from the database into this grid when the page loads. On localhost, the grid loading time is 5 seconds, which I think is not bad for 35k rows. But when I load the page from the web on another PC, the loading time grows 4-5 times, to above 20 seconds.
The internet connection is fast; it looks like some server setting is slowing down the transfer of data from PHP or the execution of the PHP code. I had the same problem transferring files to the server over FTP, and solved it by disabling some checking options on the server.
Here is my PHP code:
<?php
// dhtmlxConnector data feed: renders a Postgres query as JSON for the Webix grid.
require_once("../codebase/data_connector.php");
require_once("../codebase/db_postgre.php");

$conn_string = "host=127.0.0.1 port=5432 dbname=demo user=postgres password=postgres";
$dbconn = pg_pconnect($conn_string);   // persistent connection to local Postgres
$dbtype = "Postgre";

$data = new JSONDataConnector($dbconn, $dbtype);
//$data->dynamic_loading(35);          // optional: load rows in pages of 35
$data->render_sql("SELECT id,count,contragent,value,inn,kpp FROM mart_customers",
                  "id", "count, contragent, value, inn, kpp");
pg_close($dbconn);
Does anybody have an idea what the problem could be? I tried some of the usual things, like changing localhost to 127.0.0.1 and adding Apache to the firewall exclusions, but nothing changed.
My setup: Windows Server 2012 R2 + WAMP + PostgreSQL.
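One way to narrow this down is to time the server-side work separately from the network transfer. A hypothetical wrapper around the same connector script (the filename is a placeholder) logs how long PHP and Postgres take to produce the JSON; if that stays near 5 seconds while remote loads take 20, the extra time is in the transfer, not the PHP code:

<?php
// Hypothetical timing harness: measure query + JSON generation time server-side.
$start = microtime(true);
ob_start();
require "grid_data.php";   // the connector script shown above (placeholder name)
$bytes = ob_get_length();
ob_end_flush();
error_log(sprintf("generated %d bytes in %.2f s", $bytes, microtime(true) - $start));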
I strongly suggest you use a development environment which matches your production/live environment as closely as possible.
For this, you could use VirtualBox (https://www.virtualbox.org/), which allows you to spin up a guest machine on your host in which you can use any OS, server, etc., together with any dependencies, on a project-by-project basis. There are other options, but I've only got experience with VirtualBox, so I can't comment on them.
Try using Vagrant together with VirtualBox: https://www.vagrantup.com/. Vagrant makes the provisioning of your virtual environments easy and reproducible without much effort.

W3 Total Cache settings only affect an individual EC2 instance in a multi-instance AWS setup with a load balancer?

I have a WordPress website that was set up on AWS by a hosting provider. The setup contains 2 EC2 instances, a Memcached server, an Amazon load balancer, and 2 separate database servers (one master and one slave replica, using HyperDB). The WordPress site also connects to a CloudFront CDN. The setup can autoscale, spawning new EC2 instances when the load increases.
Currently I am in the middle of setting up the CDN using W3 Total Cache. However, I have come across an issue: I save the CDN settings to enable it, and when I reload the page responsible for enabling the CDN, it shows the CDN as disabled.
After trying to set this a few times, I realised that whenever I change any W3 Total Cache setting, such as the CDN, it is only applied on the instance that saved the settings. It does not propagate to the other EC2 instances, and I have to repeat the same process on each one to ensure consistency.
I am also worried about what happens when my setup spawns new EC2 instances; I don't think the settings would carry forward in that situation.
Can somebody please explain whether this is completely normal on cloud-based setups such as AWS, or is there really an underlying issue?
Would it also be possible to have my W3 Total Cache settings updated on all of my EC2 instances when I save them, instead of having to change the cache settings one machine at a time?
Any feedback would be greatly appreciated. Thanks.
Stupid me.
I found out that W3 Total Cache stores its config settings in a file on the particular server. For the settings to sync across servers, we can download the settings file from one server and upload it to the others.
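If you want to verify whether the instances have drifted apart, a tiny script run on each box can print a checksum of that config file; matching hashes mean the settings are in sync. The path is W3 Total Cache's usual default, which is an assumption worth verifying on your install:

<?php
// Print a checksum of the W3 Total Cache config so instances can be compared.
// wp-content/w3tc-config/master.php is the usual default path (assumption).
echo md5_file(__DIR__ . '/wp-content/w3tc-config/master.php') . "\n";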
Thanks.

How to host a PHP file with AWS?

After many hours of reading documentation and messing around with Amazon Web Services, I am unable to figure out how to host a PHP page.
Currently I am using the S3 service for a basic website, but I know that S3 does not support dynamic pages. I was able to use Elastic Beanstalk to get the sample PHP application running, but I really have no idea how to use it. I read up on some other services, but they don't seem to do what I want, or they are just way too confusing.
So what I want to be able to do is host a website with amazon that has dynamic PHP pages. Is this possible and what services do you use?
For a PHP app, you really have two choices in AWS.
Elastic Beanstalk is a service that takes your code and manages the runtime environment for you. Once you've set it up, it's very easy to deploy, and you don't have to worry about managing servers; AWS does pretty much everything for you. You have less control over the environment, but if your app will run in EB then this is a pretty easy path.
EC2 is closer to conventional hosting. You need to decide how your servers are configured and deployed (what packages get installed, what version of Linux, instance size, etc.), your system architecture (whether you have separate instances for cache or database, whether or not you need a load balancer, etc.), and how you manage availability and scalability (multiple zones, multiple data centers, auto-scaling rules, etc.).
Now, those are all things that you can use; you don't have to. If you're just trying to learn about PHP on AWS, you can start with a single EC2 instance, deploy your code, and get it running in a few minutes without worrying about any of the stuff in the previous paragraph. Just create an instance from the Amazon Linux AMI, install Apache & PHP, open the appropriate ports in the firewall (AKA the EC2 security group), deploy your code, and you should be up & running.
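Once Apache and PHP are installed, a one-line test page in the web root (for example /var/www/html/info.php, a placeholder path) confirms the whole chain works, from the security group rules down to the PHP interpreter:

<?php
// Minimal smoke test: if this renders PHP's configuration page in a browser,
// Apache, PHP, and the security group rules are all working.
phpinfo();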
Your PHP must run on EC2 machines.
Amazon provides great tools to make life easy (Beanstalk, ECS for Docker, ...), but in the end, you own EC2 machines.
There is no such thing as a place where you can put your PHP code without worrying about anything else ;-(
If you are having problems hosting PHP sites on AWS, you can go with a service provider like Cloudways. They provide managed AWS servers with one-click installs of PHP frameworks and CMSes.

Comprehensive guide to setting up a data-driven website on Amazon Web Services EC2

I have started making a website and was hosting it on Hostgator, but I am going to move it to Amazon Web Services before launch. The small problem is that on Hostgator I just uploaded my files to the relevant location and it all just worked; I have no experience setting up a production-worthy server from scratch, and I need to know how.
I did set up a basic LAMP stack on an EC2 instance. However, I keep reading that when the EC2 instance goes down it will take all the data with it, and I cannot have that happen. I have also read that when it dies it won't do anything and you have to start the Apache server again manually; it is not automatic. I need it to be reliable, with the data kept independent, so it will not crash, burn and die if the server goes.
I have worked out that I will need S3 for static things such as my PDFs and images, as well as RDS for my MySQL database. My domain name is registered elsewhere, so I believe I need to use Route 53 as well.
I want to use AWS for a few reasons. Firstly, it can scale, which is really important, though I am not sure if this is built in or requires customization. I have been told EC2 is very secure. The last reason is that I can debug my PHP code: I have an error that only appears on the Hostgator server, not on my local LAMP stack, so I can't debug it there, but I should be able to once I move to EC2.
I have done a lot of looking around online and I can't find anything comprehensive about what to set up. I have been reading (some of you may think otherwise), but I am overwhelmed by the amount of information: it is either far too complicated, discussing theory I don't care about, or too basic, covering nothing beyond a generic LAMP install on EC2 without the other services.
I have seen http://bitnami.com/stack/lamp/cloud/amazon but I don't think it is what I want, as again the EC2 instance holds the MySQL database and I would not be using RDS.
If someone can point me in the direction of a comprehensive guide to setting up a solid LAMP stack on AWS (maybe a book has even been written), that would be great. I found the Amazon docs did not go into much detail; they told me how to do things but not why or what purpose they serve.
Thanks
I'll start by answering your questions. As you are a newbie, I would suggest you don't pressure yourself to learn all of AWS at once; you can migrate slowly and keep discovering the magic of the cloud.
Q. When the EC2 instance goes down it will take all the data with it, and I cannot have that happen. I have also read that when it dies it won't do anything and you have to start the Apache server again; it is not automatic?
A. When an EC2 instance goes down ("down" could mean you shut it down manually, or AWS's network is down, or the instance has some other issue), only the ephemeral data (data in RAM, such as sessions) is lost. Whatever is on an EBS disk stays on disk, and the instance becomes available again as soon as the problem is resolved.
Apache will start itself when the instance restarts (provided it is configured to start on boot, which is the default when installed from Ubuntu's packages), and it remains up until you shut it down manually or some other issue occurs.
Q. I will need S3 for static things such as my PDFs and images, as well as RDS for my MySQL database?
A. It is good practice to keep static assets on S3, but it is not necessary; you can set up FTP and manage your static content the way you are used to, such as keeping it in a folder of your website.
You don't necessarily need RDS to have a MySQL database. I have a process running on AWS with around 40 million transactions a day, and I run it on a plain MySQL install on an EC2 instance.
However, RDS does free you from the daily backup and index-maintenance hassles.
Q. My domain name is registered elsewhere so I believe I need to use Route 53 as well?
A. Again, not necessary. You can just go to your domain manager and change the A or CNAME records to point at a static public IP attached to your EC2 instance or Elastic Load Balancer, and you'll be up and running in no time.
Q. I want to use AWS for a few reasons, firstly that it can scale, which is really important, but I am not sure if this is built in or requires customization.
A. It can scale really well, but it depends on how you want it to scale, and it is highly customizable.
There are two kinds of scaling:
Vertical - you change your instance type to get better disk, CPU, RAM, or network performance. This requires stopping the EC2 instance and changing its type, which means a downtime of around 10 minutes.
Horizontal - you put your website (EC2-based) behind an Elastic Load Balancer (ELB) and add/remove instances as and when you deem suitable, or you set an auto-scaling policy to do it automatically depending on the load on your web servers.
Security? - you can rest assured it is very secure, so secure that I would bet my life on a well-secured EC2 instance; I can swear by Linux Thor that it works, and it works like a charm.
Debugging? - I suggest you debug by classic means: write error logs and the like. Just treat EC2 like a normal machine and slowly learn the tricks of the trade.
Now let's set up a basic, solid LAMP stack for ourselves. I am assuming you have a ready Ubuntu instance and can SSH to it; in case you haven't been able to make one, see this.
Basically:
1. Create security groups - this is your firewall; it controls which ports are open and also which EC2 instances can talk amongst themselves.
2. Create an EC2 instance - any Ubuntu instance. Access it using SSH; SSH is a secure terminal connection to your EC2 machine, protected by a key file (a .pem file). Whoever has that file can access your machine's data, so keep it very, very secure; you can't afford to lose it.
3. Install LAMP using the Tasksel utility.
4. Set up a public IP for yourself (costs about a dollar per month) - you can point your www.example.com traffic at this IP using the domain manager of your DNS provider (GoDaddy or someone alike, I suppose).
I think this should be enough to get you started with AWS.
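As a quick check that step 3 worked end to end, a few lines of PHP dropped in the web root can confirm that Apache, PHP, and MySQL are talking to each other (the credentials below are placeholders for whatever you chose during installation):

<?php
// LAMP smoke test: connect to the local MySQL server and print its version.
mysqli_report(MYSQLI_REPORT_OFF);          // report errors via connect_error, not exceptions
$mysqli = new mysqli('127.0.0.1', 'root', 'your-password');  // placeholder credentials
echo $mysqli->connect_error
    ? "connect failed: {$mysqli->connect_error}\n"
    : "connected to MySQL {$mysqli->server_info}\n";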
Just to be safe and keep a copy of your data, make an AMI of your EC2 instance with all the data on it. An AMI is an image from which you can create a similar or better instance in 10 minutes flat (or even less).
You will pay for: the instance type you chose, the public IP, traffic beyond a certain level (usually very, very cheap), disk usage (8 GB is the default disk), and AMI volume storage.
Have fun with AWS.
To retain data through downtime, make sure you use EBS storage; it is the default nowadays. In the past, before EBS, instance storage was the default and you would lose data once the server went down, but with EBS storage, data is retained through a shutdown.
You can go one of the following two routes, depending on your needs.
1. Use AWS Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/) if you do not need to install anything extra. It's super easy, similar to Google App Engine, and you can deploy your app quickly. You do not manage a server; you just deploy your app to one. You have to use RDS for the database and S3 for storage, as you cannot store files locally on the server where your app runs.
2. Use an EC2 server with a static IP address. You can get pre-configured LAMP stacks from the marketplace. I use Bitnami cloud stacks for AWS, which come pre-configured with LAMP and many other apps. Just use their free account to create a micro instance for your PHP app, select a server, and you are good to go. http://bitnami.com/cloud
You do not need to use Route 53 unless you need to manage DNS programmatically. You can just point your domain at the EC2 server by adding an entry in your DNS (GoDaddy or whoever your domain name provider is).
Bitnami's service also allows scheduled backups, but if you are not storing anything locally, you do not need frequent backups.
Make sure you use the Multi-AZ option in RDS, which is more reliable. When you provision a Multi-AZ DB instance, Amazon RDS automatically creates a primary DB instance and synchronously replicates the data to a standby instance in a different Availability Zone (AZ). Amazon RDS also automatically patches the database software and backs up your database, storing the backups for a user-defined retention period and enabling point-in-time recovery up to the last 5 minutes.
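From PHP, using RDS just means putting the RDS endpoint in the connection string instead of localhost; the endpoint and credentials below are placeholders. With Multi-AZ, a failover keeps the same endpoint, so the code does not need to change:

<?php
// Connect to an RDS MySQL instance; the host is the RDS endpoint (placeholder).
$pdo = new PDO(
    'mysql:host=mydb.abc123xyz.us-east-1.rds.amazonaws.com;dbname=app;charset=utf8mb4',
    'appuser',   // placeholder credentials
    'secret',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);
echo $pdo->query('SELECT VERSION()')->fetchColumn() . "\n";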
I hope this helps.
You could use DynamoDB (http://aws.amazon.com/dynamodb/pricing/) with LAMP, minus MySQL, for storage. Keeping the database on the same box can almost never give you reliability; this way you will not lose your data, whatever your application box goes through. You can even read your application config from DynamoDB.
http://aws.amazon.com/documentation/dynamodb/
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SettingUpTestingSDKPHP.html
Do I need to use EC2 with DynamoDB?
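For a feel of what that looks like from PHP, here is a minimal write with the AWS SDK for PHP v3 (the table name and item are hypothetical; credentials are picked up from the environment or the instance role):

<?php
// Minimal DynamoDB write using the AWS SDK for PHP v3.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient([
    'region'  => 'us-east-1',   // pick the region your app runs in
    'version' => 'latest',
]);

$client->putItem([
    'TableName' => 'AppConfig', // hypothetical table
    'Item' => [
        'key'   => ['S' => 'site_name'],
        'value' => ['S' => 'My PHP App'],
    ],
]);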
You won't lose data when the server is down; just make sure you select an EBS volume, not instance storage.
You can get a ready-made server from the AWS Marketplace. I used the following for my projects, but there are many other pre-configured servers available:
https://aws.amazon.com/marketplace/pp/B007IN7GJA/ref=srh_res_product_title?ie=UTF8&sr=0-2&qid=1382655655469
This, together with an RDS server, is what you need. We use it all the time for production servers and have never had any issues.
Here are two guides that look good to me:
http://shout.setfive.com/2013/04/05/amazon-aws-ec2-lamp-quickstart-guide-5-steps-in-10-minutes/
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html
If learning the Linux command line isn't your thing, you should consider going "up the stack" to a PaaS (Platform as a Service), such as Heroku, Google App Engine, or Elastic Beanstalk.
The trade-offs between Infrastructure as a Service (IaaS, like EC2) and a Platform as a Service (PaaS, like Heroku):
- PaaS is quicker to get started with, and there is less to learn. IaaS requires you to know the entire stack from the start (or hire/rent a sysadmin).
- PaaS usually gets more expensive than IaaS as you grow (but it depends).
- PaaS gives you less control (you can't choose the language version, so you can't upgrade to get around a specific bug).
- IaaS can literally do anything (it's just a Linux box).
- IaaS allows for more tuning (upgrade libraries to get features, switch to a different instance type to trade RAM for CPU, run HipHop for speed, add caching layers, etc.).
You have a few choices:
1. Use only EC2. Install Apache + MySQL and your dynamic website on EC2. This will be very similar to setting it up on Hostgator, except you are running the full server.
2. Use EC2 for "compute" (that is, the dynamic part of the site) and S3 for storage. This doesn't differ much from #1, except that you use S3 for static file storage, which is great if you expect to host a lot of static content (multimedia, etc.).
3. Set up your website using AWS Elastic Beanstalk (which now supports PHP). However, if you go this route, you will need to host your database somewhere, which will likely be RDS.
I recommend going with #1. There is nothing wrong with that; yes, if EC2 goes down, it will take your site down with it, but to mitigate that you can run two servers in two different regions (one in US East and one in US West); I don't think two EC2 regions have ever gone down at the same time.
UPDATE: If you are concerned about backup/restore and making sure your data is safe, I recommend the following (I do this with a site in production on EC2):
- Put your website code into Git/SVN source control, and deploy by pulling from there.
- Back up your MySQL database to Amazon S3 regularly (at least once a day) using mysqldump.
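As a sketch of that second step (bucket name and credentials are placeholders), a small PHP cron job can shell out to mysqldump and push the compressed dump to S3 with the AWS SDK:

<?php
// Nightly backup sketch: dump MySQL, gzip it, upload to S3 (placeholders throughout).
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$file = '/tmp/backup-' . date('Ymd') . '.sql.gz';
shell_exec('mysqldump -u backup -psecret mydb | gzip > ' . escapeshellarg($file));

$s3 = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$s3->putObject([
    'Bucket'     => 'my-backup-bucket',   // hypothetical bucket
    'Key'        => 'mysql/' . basename($file),
    'SourceFile' => $file,
]);
unlink($file);                            // remove the local copy once uploaded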
I think you have some misconceptions.
If EC2 as a whole goes down (which is rare), then you do NOT lose your data. The site would simply be offline until Amazon restored service.
If your particular instance goes down due to a hardware issue, then you might lose data. This is no different from your own server going belly up. The right answer is simply to make regular backups of your database and store them in S3 or some other location. Generally you will also want to create and attach a second EBS volume to your DB server to hold the database files.
If you Terminate your instance, then yes, you will lose everything on it. However, Amazon provides termination protection to make terminating instances difficult, so you don't do it accidentally.
Stopping your instance is like turning the computer off, the difference being that you can remotely turn it back on when you want. You can only stop EBS-backed instances, which means your data is safe while the instance is offline.
If you are uncomfortable setting up and maintaining your own server, I would highly suggest you investigate fully managed hosting instead. EC2 is awesome; we've been on it for 2 years. However, we have a strong tech team that understands what it takes to run and manage servers.

Is redis on Heroku possible without an addon?

I'm looking into using Heroku for a PHP app that uses Redis. I've seen the various add-ons for Redis. With Redis To Go, for example, you can use an environment variable, $_ENV['REDISTOGO_URL'], in your PHP code as the URL of the Redis server.
Most of these add-ons have their own pricing schemes, which I'd like to avoid. I'm a little confused about how Heroku works. Is there a way I can just install Redis on my own dynos without the add-ons?
For example, could I have one worker dyno that acts as a server and another that acts as a client? If so, how would I go about:
1. Installing and running the Redis server on a dyno? Is this the same as installing on any other Unix box? Can I just SSH in and install whatever I want?
2. Having one dyno connect to another via IP/port over TCP? Do worker dynos have their own referenceable IP addresses or named URLs that I can use? Can I get them dynamically from PHP somehow?
The PHP code for a Redis client assumes there is a host and a port to connect to, but I have no idea what they would be:
$redis = new Predis\Client(array(
    "scheme" => "tcp",
    "host"   => $host,   // how do I get the host/port of a dyno?
    "port"   => $port,
));
Running Redis on a dyno is an interesting idea. You will probably need to create a Redis buildpack so your dynos can download and run Redis. As "Redis has no dependencies other than a working GCC compiler and libc", this should be technically possible.
However, here are some problems you may run into:
Heroku dynos don't have a static IP address
"dynos don’t have static IP addresses .. you can never access a dyno directly by IP"
Even if you set up and run Redis on a dyno, I am not aware of a way to locate that dyno instance and send it Redis requests. This means your Redis server will probably have to run on the same dyno as your web server/main application.
This also means that if you scale your app by creating more web dynos, you will also be creating more local Redis instances, and data will not be shared between them. This does not strike me as a particularly scalable design, but if your app is small enough to require only one web dyno, it may work.
Heroku dynos have an ephemeral filesystem
"no files that are written are visible to processes in any other dyno and any files written will be discarded the moment the dyno is stopped or restarted"
By default Redis writes its RDB file and AOF log to disk. You'll need to back these up somewhere regularly so you can fetch and restore them after your dyno restarts. See the documentation on Redis persistence.
Heroku dynos are rebooted often
"Dynos are cycled at least once per day, or whenever the dyno manifold detects a fault in the underlying hardware"
You'll need to start your Redis server each time the dyno starts and restore the data.
Heroku dynos have 512MB of RAM
"Each dyno is allocated 512MB of memory to operate within"
If your Redis server is running on the same dyno as your web server, subtract the RAM needed by your main app. How much Redis memory do you need?
Here are some questions attempting to estimate and track Redis memory use:
Redis: Database Size to Memory Ratio?
Profiling Redis Memory Usage
--
Overall: I suggest reading up on 12-Factor Apps to understand a bit more about Heroku's intended application model.
The short version is that dynos are intended to be independent workers that can be easily created and discarded to meet demand, and that dynos access various resources to read or write data and serve your app. A redis instance is an example of a resource. As you can see from the items above, by using a redis add-on you're getting something that's guaranteed to be static, stable, and accessible.
Reading material:
http://www.12factor.net/ - specifically Processes and Services
The Heroku Process Model
Heroku Blog - The Process Model
Redis has a client-server architecture: you can install it on one machine (in your case, a dyno) and access it from any client.
For more help with client libraries, you can refer to this link,
or you can go through this Redis documentation, which is a simple case study of implementing a Twitter clone using Redis as the database and PHP.
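In practice the conventional route is a Redis reachable over the network, with its URL in an environment variable as mentioned in the question. Predis accepts such a URL directly, so the client side is a couple of lines (REDISTOGO_URL is whatever your provider exposes; the fallback here is a local instance for development):

<?php
// Connect Predis using a redis:// or tcp:// URL taken from the environment.
require 'vendor/autoload.php';

$url = getenv('REDISTOGO_URL') ?: 'tcp://127.0.0.1:6379'; // local fallback for dev
$redis = new Predis\Client($url);
$redis->set('hello', 'world');
echo $redis->get('hello'), "\n";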
