I'm a little bit confused about how scaling in Azure works. I'm using a Cloud Service and have 2 web roles running a PHP application. I can RDP into both machines, and the application runs great on each of them. I also don't have any problems calling the staging URL.
But I can't figure out if I configure scaling so that 2 machines run always, if I have to configure a load balancer somehow. Or is this already done for me?
In Azure VMs I had to create a load-balanced set for an endpoint, but what about cloud services? (Load balance virtual machines)
And how is this done in the XML configuration file for my service? What if I don't do it?
The question has been answered in the MSDN Forums:
Windows Azure supports load balancing for cloud services and standard websites; we just need to set the instance count to more than 1 to enable it. For virtual machines, it needs to be set up manually.
Answer at MSDN
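For reference, the instance count lives in the cloud service's ServiceConfiguration.cscfg. A minimal sketch, assuming placeholder service and role names (use the ones already in your project); with a count of 2, the built-in load balancer distributes traffic across both instances automatically, and if you leave it at 1 nothing breaks, you simply run a single instance:

    <?xml version="1.0" encoding="utf-8"?>
    <ServiceConfiguration serviceName="MyCloudService"
        xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="PhpWebRole">
        <!-- Two instances: Azure's load balancer spreads requests across them for you -->
        <Instances count="2" />
      </Role>
    </ServiceConfiguration>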
I have this setup currently.
My front end runs on Google App Engine (GAE) on PHP. I use PDO to connect to an open AWS RDS MySQL 8 instance.
I cannot move the data out of the AWS RDS instance.
I have been asked to make the RDS instance secure and not allow open incoming rules like 0.0.0.0/0 in the AWS security group.
I want to know if there is a serverless way to achieve this type of connection without setting up an EC2 proxy or a Google Compute Engine server.
I have not been able to find a solution, and every known solution points to setting up a proxy.
Anyone have any thoughts on this problem?
It's hard to say much without any code, error stacks, etc., but I have some thoughts.
If you are thinking of a "serverless" solution: "serverless" does not mean there is no server; it means you don't need to care about the server, which is maintained by some service provider (like GAE or another). So practically there is no 100% serverless solution; the server is just hidden behind the provider's logic.
The main issue is connecting to RDS. This should be possible from a local machine, so you should be able to develop something that works on your local machine and then deploy the same logic to App Engine.
If the above is not possible, you should consider GCP Cloud SQL or a serverless GCP alternative.
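To illustrate the "develop it locally first" point, here is a minimal sketch of a PDO connection to RDS over TLS; the endpoint, credentials and CA path are placeholders, and enforcing TLS does not replace tightening the security group, it only protects the traffic:

    <?php
    // Placeholder endpoint, credentials and CA bundle path - replace with your own.
    $dsn = 'mysql:host=my-db-instance.example.us-east-1.rds.amazonaws.com;dbname=appdb;charset=utf8mb4';

    $pdo = new PDO($dsn, 'app_user', 'app_password', [
        PDO::ATTR_ERRMODE      => PDO::ERRMODE_EXCEPTION,
        // Encrypt the connection using the RDS CA bundle.
        PDO::MYSQL_ATTR_SSL_CA => '/path/to/rds-combined-ca-bundle.pem',
    ]);

    $row = $pdo->query('SELECT NOW() AS server_time')->fetch(PDO::FETCH_ASSOC);
    var_dump($row);

If this works from your laptop, the same code should work from App Engine once the security group allows the relevant egress IPs or ranges.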
I'm using Google App Engine Flex to develop an AngularJS/PHP REST backend application.
I've successfully ported it from regular servers to App Engine, and I now want to integrate more with GCP services such as Stackdriver, Cloud Storage and so on:
Stackdriver: for logging & monitoring.
Cloud Storage: to store export data files and zip them before sending them to the browser.
My question is: how do I develop locally on my laptop (which can be online or offline)?
I didn't find "the way" to do local development in the documentation:
Should the Stackdriver or Cloud Storage client be configured to write to disk instead of reaching GCP?
Should I configure some proxy (like the cloud_sql_proxy) to be able to reach GCP? Should I create a project for my local dev? How does it work if I'm offline?
Any hint appreciated :)
App Engine Flexible doesn't come with a development server or service emulators for use during development, so you may use the services directly.
Stackdriver Logging: logs written to stdout and stderr are automatically sent to Stackdriver Logging for you, without needing to use the Stackdriver Logging library for PHP. This may be enough for you to get logs locally, but we recommend that you use the PSR-3 logger, which automatically adds metadata to your logs so that your application logs are correlated with the request logs. You can set it up to run locally and log to your project by following the doc here.
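As a rough sketch of that PSR-3 setup, assuming the google/cloud-logging Composer package is installed and GOOGLE_APPLICATION_CREDENTIALS points at a service account key for your dev project (the log name below is a placeholder):

    <?php
    require 'vendor/autoload.php';

    use Google\Cloud\Logging\LoggingClient;

    // PSR-3 compatible logger that batches entries to Stackdriver Logging.
    $logger = LoggingClient::psrBatchLogger('my-app-log');

    $logger->info('Export started', ['user' => 42]);
    $logger->error('Export failed');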
Stackdriver Monitoring: Google App Engine includes built-in support for Monitoring in the flexible environment (when deployed), which doesn't require configuration. The monitoring agent cannot be installed on your local machine, but there would be little point in monitoring it anyway.
Cloud Storage: an easy option is to create a dev bucket that you can use during development. You can create it in whichever project you wish and grant permissions to your development service account.
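A minimal sketch of that approach with the google/cloud-storage client; the project ID, key path and bucket name are placeholders for your own dev resources:

    <?php
    require 'vendor/autoload.php';

    use Google\Cloud\Storage\StorageClient;

    $storage = new StorageClient([
        'projectId'   => 'my-dev-project',
        'keyFilePath' => '/path/to/dev-service-account.json',
    ]);

    // Upload a local export file to the dev bucket.
    $bucket = $storage->bucket('my-app-dev-exports');
    $bucket->upload(fopen('/tmp/export.zip', 'r'), [
        'name' => 'exports/export-' . date('Ymd-His') . '.zip',
    ]);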
One common practice is to create different GCP projects for prod, staging and dev purposes. This allows you to create specific resources for a given environment. Taking logging as example, you'll be able to see logs and troubleshoot any issue with it within the dev project, without polluting your prod project's logs. That'd be true with CloudSQL, Datastore, etc...
You don't need to configure any proxy for those services. The cloud_sql_proxy is a convenient method to enforce secure connections and ease authentication with CloudSQL instances without the need to whitelist IP addresses.
Regarding the offline situation: of course, calls from your local app to those services will fail if you don't have an internet connection at that time (intermittent disconnections may actually help you test your retry and error handling mechanisms). If you expect to develop with no internet connection at all, you'll need to write stub services to mimic the expected behavior locally.
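One way to structure those stubs, sketched with assumed class names (only the Bucket type comes from the Google library; everything else here is illustrative): hide storage behind a small interface and swap in a local-disk implementation when you're offline.

    <?php
    interface ExportStore
    {
        public function save($name, $contents);
    }

    // Online implementation backed by a Cloud Storage bucket.
    class GcsExportStore implements ExportStore
    {
        private $bucket;

        public function __construct(\Google\Cloud\Storage\Bucket $bucket)
        {
            $this->bucket = $bucket;
        }

        public function save($name, $contents)
        {
            $this->bucket->upload($contents, ['name' => $name]);
        }
    }

    // Offline stub that just writes to a local directory.
    class LocalExportStore implements ExportStore
    {
        private $dir;

        public function __construct($dir)
        {
            $this->dir = $dir;
        }

        public function save($name, $contents)
        {
            file_put_contents(rtrim($this->dir, '/') . '/' . basename($name), $contents);
        }
    }

Your application code only depends on ExportStore, so switching between online and offline is a matter of wiring a different implementation.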
I am totally confused about how to host a dynamic website created using PHP and MySQL in the Amazon cloud.
I went through Amazon S3 and I hosted a static website there!
Then I tried Amazon EC2 and learned some aspects of the VPC concept. I thought that dynamic websites are hosted in the Amazon cloud using EC2. I followed some steps, and they taught me how to launch a website using Drupal (but I didn't want that!).
I couldn't find any other EC2 tutorials on deploying my web application.
Then I found AWS Elastic Beanstalk; I uploaded a simple PHP document and saw that it deployed successfully.
But still, I am not satisfied, because I don't know the correct way to deploy my PHP application.
So can anyone direct me on deploying a PHP/MySQL application on AWS?
Depends on your needs. Elastic Beanstalk might be a good option for many apps, but I chose EC2 for my app's backend (using PHP, MySQL and S3 for storage).
Quick steps to get you up and running:
Log into the AWS Management Console and start a new EC2 instance (Windows Server 2012 R2 Base > t2.micro should be good enough for a start!)
At step "6. Configure Security Group", add rules for at least HTTP, HTTPS and RDP (so you can connect via Remote Desktop)
Connect to your new instance via Remote Desktop and install a decent browser (Enable File Downloads in IE's Security Settings and download Chrome or Firefox)
Open the Windows Firewall and add rules for the same ports you opened in the Security Group of your Instance in the AWS Management Console. (Right-click on “Inbound Rules”, then select “New Rule…”)
Download and install XAMPP (I put it in C:\xampp)
Open the XAMPP Control panel and install Apache and MySQL as services (so they will start automatically when your instance launches); make sure everything is started up.
Now put your files in C:\xampp\htdocs\ and you're ready to go!
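As a quick sanity check, you could drop something like this into C:\xampp\htdocs\ (the file name is arbitrary and the credentials are just the XAMPP defaults; adjust to your setup):

    <?php
    // dbcheck.php - verifies that Apache serves PHP and that MySQL answers.
    $mysqli = new mysqli('localhost', 'root', '');

    if ($mysqli->connect_error) {
        die('MySQL connection failed: ' . $mysqli->connect_error);
    }

    echo 'PHP ' . PHP_VERSION . ' is running, MySQL reports version ';
    echo $mysqli->query('SELECT VERSION() AS v')->fetch_assoc()['v'];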
Bonus Steps:
Set up Filezilla FTP Server (and open the required ports in both the instance's security group and the Windows Firewall) so you can upload/download files without having to go through Remote Desktop.
Get an Elastic IP and assign it to your instance, so its IP address will never change.
Get an SSL certificate so you can use HTTPS.
The answer depends on the load that you are expecting and the resources you have to handle all the administration tasks.
If you expect heavy or variable loads, there are many reasons not to deploy a production PHP + MySQL application on a plain EC2 instance.
Here are some of the benefits of deploying to Elastic Beanstalk instead of a manually configured EC2 instance:
You get version control of each deployment.
You can scale up or down automatically if you need more/less instances to handle new load.
You get a load balancer in front of your EC2 instances with a bunch of out-of-the-box "recommended" configurations.
Regarding MySQL, if you go for an Amazon RDS instance you can handle replication, monitoring and automatic backups with pretty low effort. A lot of the configuration you would need to tweak is available through parameter groups.
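If you attach the RDS instance to the Elastic Beanstalk environment, Beanstalk exposes the connection details as RDS_* environment variables; here is a rough sketch of using them from PHP (if you run RDS separately, define equivalent variables yourself):

    <?php
    // Connection details injected by Elastic Beanstalk for an attached RDS instance.
    $host = $_SERVER['RDS_HOSTNAME'];
    $port = $_SERVER['RDS_PORT'];
    $name = $_SERVER['RDS_DB_NAME'];

    $pdo = new PDO(
        "mysql:host={$host};port={$port};dbname={$name};charset=utf8mb4",
        $_SERVER['RDS_USERNAME'],
        $_SERVER['RDS_PASSWORD'],
        [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
    );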
On the other hand, if you want full control of everything that is going on on your server (which means you have time to monitor, back up and do maintenance tasks, which is not my case :), or if you do not plan to have much traffic, or if you want the least expensive option, you should go with a low-cost EC2 instance.
In my experience (after 2 years of working on AWS with 10 production applications, I'm kind of a regular AWS user), pretty much every customization or change I needed on both RDS and Elastic Beanstalk I was able to tweak and get working, so I'm pretty satisfied with choosing the Elastic Beanstalk + RDS option.
Below are two links I found that are helpful for creating and updating an application with AWS Elastic Beanstalk:
https://aws.amazon.com/getting-started/tutorials/launch-an-app/
https://aws.amazon.com/getting-started/tutorials/update-an-app/
I have a WordPress-based site and have switched hosting companies 3 times in 3 months due to speed and uptime problems.
Is it possible to host my site on two servers, so that if one is down the site automatically runs from the second one? :)
Note 1: it is a design-image-based site (you could say a wallpaper site).
Yes, it's possible. You can host your site on multiple servers with one domain by providing different name servers, e.g. ns1.domainname.com, ns2.domainname.com, ns1.diffdomainname.com and so on, in your website's hosting panel. This addresses your issue: if your site is down on one server, it is picked up from the other server.
The solution to this problem is setting up highly available and auto-scalable infrastructure.
You can use AWS; it supports everything you need.
Your WordPress application can have 3 layers:
1) Load balancing layer
2) PHP application layer
3) Database layer
You can use Elastic Beanstalk for easy installation and maintenance of the load balancing layer (i.e. ELB) and the PHP application layer (i.e. EC2 servers).
Check out this doc for PHP application deployment using Elastic Beanstalk.
Make sure you are using multiple AZs (Availability Zones) in all layers to achieve high availability.
You can also use AWS RDS MySQL or Aurora for the DB layer.
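For the DB layer, the WordPress side of the wiring is just the usual wp-config.php constants pointed at the RDS/Aurora endpoint (all values below are placeholders):

    <?php
    // wp-config.php excerpt - point WordPress at the RDS/Aurora endpoint.
    define('DB_NAME',     'wordpress');
    define('DB_USER',     'wp_user');
    define('DB_PASSWORD', 'change-me');
    define('DB_HOST',     'my-wp-db.cluster-example.eu-west-1.rds.amazonaws.com');
    define('DB_CHARSET',  'utf8mb4');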
Yes! This technology is called "load balancing" for servers. You could configure your servers with open-source software, or you can do it with commercial companies that offer this.
You should do load balancing for each service that you need, like MySQL (or your database system) and Apache (or your web server).
Here's an example: Setting Up A High-Availability Load Balancer (With Failover and Session Support) With HAProxy/Heartbeat On Debian Etch
I'm working on a project that is located on 2 domains within the same server:
1. A DataSource system, which provides data for the main app
2. The main app, providing the data for the front-end app.
App 1 needs to work on a separate domain, as it's a data source for more applications. I'm trying to find some way to boost communication performance. A simple call from app 2 to app 1 takes approximately 0.3-0.4 s.
Is there any way to force the server to bypass TCP/IP communication and call the service directly from localhost?
Both applications are written in PHP with Zend Framework. The server is IIS. Both applications are based on SOAP solutions.
Would appreciate any tips. Will provide additional information if needed.
Thank you in advance for any help.
You have a misunderstanding here. If you call services on localhost (e.g. via Zend_HTTP_Client), you are still using the TCP/IP and HTTP layers. Everything works via sockets, whether the address is localhost or an external IP.
If the other application needs to be accessible "from the outside" (so no in-process integration is possible), you can IMHO only speed things up by using a faster web server (e.g. nginx), turning off web server modules you don't need, or writing your own socket server and dismissing a whole lot of the processing Apache and nginx do. http://devzone.zend.com/209/writing-socket-servers-in-php may help you with your first steps.
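For the last option, a minimal sketch of what such a socket server could look like; the port and the echo logic are placeholders, and the linked article goes much further:

    <?php
    // Tiny blocking socket server: answers each connection directly,
    // skipping the full web server + framework stack.
    $server = stream_socket_server('tcp://127.0.0.1:9000', $errno, $errstr);
    if ($server === false) {
        die("Could not start server: $errstr ($errno)\n");
    }

    while ($conn = stream_socket_accept($server, -1)) {
        $request  = fread($conn, 8192);          // raw request payload
        $response = 'echo: ' . trim($request);   // replace with your real data-source call
        fwrite($conn, $response . "\n");
        fclose($conn);
    }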