I'm trying to build an autoscaled infrastructure for a WordPress site on Google Compute Engine. For WordPress, I want to use the LEMP stack (Ubuntu 18.04, Nginx, MySQL, PHP), but with a separate Cloud SQL instance as the database.
Here's my plan:
Create a Boot disk with the WordPress site installed and set up
Create an Instance template from that Boot Disk
Create Instance Groups for my required regions with the Template above.
Create an HTTP Load Balancer to autoscale the instances.
But I'm really confused by the first step: how should I set up the WordPress site in order to create an Instance Template? I don't know how to set up an app on a Custom Image or a Boot Disk.
Is the approach above the right one?
How can I set up my WordPress site for use in an Instance Template?
Help me, please!
Thanks in advance!
The autoscaling feature of managed instance groups is meant for stateless VM instances: an autoscaler adds or removes instances from a managed instance group, so any data stored on the root disks of those VMs can be lost.
As you noted in your plan, the stateful component of your LEMP stack (the database) has to live outside of the managed instance group.
To create a template for the managed instance group, you can take the following steps:
Set up, configure and test your website on a single VM (the stateless component), configured to connect to the Cloud SQL instance (the stateful component).
Create a custom image from the VM's disk
gcloud compute images create [IMAGE_NAME] --source-disk [SOURCE_DISK] --source-disk-zone [ZONE]
Use this custom image to create an instance template for your managed instance group
These steps can be done with the gcloud command-line tool or the Google Cloud Console.
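As a rough sketch, the full flow could look like this with gcloud (image, template, group, zone and region names below are placeholders, not values from the question):

# 1. Create a custom image from the configured VM's boot disk
gcloud compute images create wp-image \
    --source-disk=wp-builder-disk --source-disk-zone=us-central1-a
# 2. Create an instance template from that image
gcloud compute instance-templates create wp-template \
    --image=wp-image --image-project=my-project --machine-type=n1-standard-1
# 3. Create a managed instance group from the template
gcloud compute instance-groups managed create wp-group \
    --template=wp-template --size=2 --region=us-central1
# 4. Enable autoscaling on the group; the HTTP(S) load balancer then
#    simply distributes traffic across whatever instances exist
gcloud compute instance-groups managed set-autoscaling wp-group \
    --region=us-central1 --max-num-replicas=5 --target-cpu-utilization=0.7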
Related
I wanted to move a website from a shared server to Google Cloud but I cannot wrap my head around it. Before giving up completely, I decided to ask this question:
I already completed the Hello World tutorial (https://cloud.google.com/php/getting-started/hello-world). But what if I want to update the index.html file? Where would I find it?
I was expecting to see it in one of the Storage buckets, but that's not the case... not even after setting up Kubernetes Engine.
If you decide to use Google App Engine Flexible (as the Hello World sample app you linked to does), you need to understand the extra layer of abstraction it puts over your server(s). App Engine Flexible is designed to make things easier for you: you work on your code on your local machine, modify it, update it, and then with one command (gcloud app deploy) you instruct App Engine to do one of the following:
start a VM (your server) and a Docker container with your app in it, if they are not already running
if you are updating an existing app, update the code in the VM that is your server; if your app receives a lot of traffic you may have more than one container and VM running, and all of them will get updated
Both things are presented schematically in the image in this section.
This way you can develop your app locally and not worry about actually getting inside the server with e.g. ssh. Your code lives in those VM(s) and App Engine manages it for you (although, if you really need to, it is still possible to ssh into a VM in the App Engine Flex environment).
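As a minimal sketch of that cycle (the project id is a placeholder):

# edit your code locally, then push the new version:
gcloud app deploy app.yaml --project=my-project
# open the deployed site in a browser and follow the logs:
gcloud app browse --project=my-project
gcloud app logs tail -s default --project=my-project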
If you have a static website, it can be hosted in Storage buckets, which is a different scenario. However, as you're using PHP, I assume your website is more likely to be dynamic.
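For completeness, a hedged sketch of the static-only route (the bucket name is an example; serving a bucket under your own domain also requires verifying that domain):

gsutil mb gs://www.example.com                       # create the bucket
gsutil -m rsync -r ./public gs://www.example.com     # upload the site
gsutil web set -m index.html -e 404.html gs://www.example.com
gsutil iam ch allUsers:objectViewer gs://www.example.com   # make it publicly readable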
I'm new to AWS and I'm running code in an Elastic Beanstalk environment. I want to regularly deploy code to the Beanstalk environment to push updates to all our running instances.
But I also have a WordPress blog for our main website, separate from the main website code. I have already set up an RDS instance for WordPress to use. The problem is that every time I deploy code to our main Beanstalk environment, it overwrites the WordPress files with the copies we have locally. For example, if an author made a new post before I deployed the code, the WordPress files get overwritten and the new post's files (images and so on) are removed.
So my question is, how can I detach WordPress from our Beanstalk environment? I don't want to create a separate Beanstalk environment just for WordPress.
Is there any way I can use S3 buckets to host the WordPress files and somehow make them available in the Beanstalk environment we're running for our main site, without creating a new environment? If so, what happens to the dynamic files uploaded by users? Will WordPress save them in S3?
You should definitely separate WordPress from your application. They are different systems, there's no reason to run them on the same host.
There are some extensions for WordPress that can publish the WordPress site as static HTML, which can then be hosted from an Amazon S3 bucket. This makes the site read-only, so interactive features won't work (e.g. search, eCommerce), but it's fine for normal blog pages. Some options:
WP Static HTML Output
Simply Static
WP-Static
Question on Quora
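A hedged sketch of pushing such an export to S3 (bucket and folder names are examples):

# upload the static export produced by one of the plugins above
aws s3 sync ./wp-static-export s3://my-blog-bucket --delete
# turn on static website hosting for the bucket
aws s3 website s3://my-blog-bucket --index-document index.html --error-document 404.html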
If that's not suitable, simply run it on a separate EC2 instance outside of the Beanstalk environment. You might even consider using Amazon Lightsail.
The main problem seems to be that you are not using the right configuration for a WordPress + Elastic Beanstalk installation.
Elastic Beanstalk creates a new application version when you deploy.
Therefore you cannot access anything from the previous application version, including the uploads folder.
In conclusion: you need to detach dynamic files from the application level while keeping the database the same.
How? Use a mounted EFS volume and/or S3, in combination with a WordPress S3 offload plugin. A rough sketch of the EFS side follows below.
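This is only an illustration (the file-system id, region and paths are made-up; on Elastic Beanstalk the mount would normally be scripted in an .ebextensions config so it runs on every new instance):

sudo yum install -y nfs-utils       # Amazon Linux; use apt-get on Ubuntu
sudo mkdir -p /var/app/current/wp-content/uploads
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ \
    /var/app/current/wp-content/uploads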
I am assuming you are using an RDS database that doesn't run on the host instance; if not, that is definitely not recommended. I can highly recommend following the best-practice step-by-step installation, including the configuration files, here.
I want to build a Laravel CMS with the following requirements:
Admin (manage all sites/database).
Multi-sites (running on sub-domains, manage own database) each with API access.
Same Codebase (can be replicated if needed).
Same Database with different data for each site.
Can you please let me know how to set up this environment using Laravel 5.4?
Thanks.
This project is finished now, so I'm posting my own solution to the question.
Here are the main points to take care of:
I've used Laravel 5.4 (later upgraded to 5.5).
For multi-tenancy I used the Landlord package for Laravel.
The server is configured to listen for wildcard sub-domains.
Each site runs on its own sub-domain.
A single database is used, with a site id (tenant id) column in each table.
Whenever the server gets a request, the sub-domain is matched to a tenant id in middleware, and only the records for that tenant are loaded.
I hope this helps someone else.
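For anyone reproducing this, here is a rough sketch of the wildcard sub-domain part (package name, domain, paths and PHP version are examples, not taken from the original setup; verify the exact package name on Packagist):

# multi-tenancy package
composer require hipsterjazzbo/landlord
# one Nginx server block answering for every tenant sub-domain:
sudo tee /etc/nginx/sites-available/cms <<'EOF'
server {
    listen 80;
    server_name example.com *.example.com;   # site1.example.com, site2.example.com, ...
    root /var/www/cms/public;
    index index.php;
    location / { try_files $uri $uri/ /index.php?$query_string; }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.2-fpm.sock;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/cms /etc/nginx/sites-enabled/cms
sudo nginx -t && sudo systemctl reload nginx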
To build a Laravel CMS with these requirements you can use the following:
You can set up an Admin role with policies and gates, or use Entrust.
For multi-sites you can use something like https://github.com/hyn/multi-tenant. With a multi-tenant package you can have multi-sites (running on sub-domains, managing their own database) each with API access, the same codebase (which can be replicated if needed), and the same database with different data for each site.
For configuring sub-domain routing you can read the official documentation: https://laravel.com/docs/5.4/routing#route-group-sub-domain-routing
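A minimal install sketch for the packages mentioned above (package names as published on Packagist; pick versions compatible with Laravel 5.4):

composer require zizaco/entrust      # roles/permissions for the Admin part
composer require hyn/multi-tenant    # multi-tenancy for the sub-domain sites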
I'm trying to deploy a Google App Engine app with this setup:
www.domain.com -> Wordpress Frontend
app.domain.com -> AngularJS Backend
api.domain.com -> Rest API used by Angular Backend
Can I achieve this using the basic app schema? Or should I use the modules API?
My main worry about using modules is that they use different instances, increasing the billing. Am I correct?
The Modules API is your best bet in this case. You can set automatic scaling on all modules so that new instances are only spun up when there are requests.
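A hedged sketch of how the routing could look (module names and scaling values are examples, not from your project):

cat > dispatch.yaml <<'EOF'
dispatch:
  - url: "www.domain.com/*"
    module: default        # WordPress frontend
  - url: "app.domain.com/*"
    module: app            # AngularJS app
  - url: "api.domain.com/*"
    module: api            # REST API
EOF
gcloud app deploy dispatch.yaml
# in each module's yaml, automatic scaling keeps idle instances down, e.g.:
cat >> api.yaml <<'EOF'
automatic_scaling:
  min_idle_instances: 0
  max_idle_instances: 1
EOF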
It's completely up to you...
Depending on how you structure your project you could do it either way, but with modules things would be a lot better organized. Yes, it would increase your monthly bill, whereas with a single default module your bill would likely be smaller but your code organization messier.
If "api.domain.com -> Rest API used by Angular Backend" uses any backend language other than PHP (Wordpress) then you would have to run them as two separate modules/projects since you cannot have both PHP and Python/Java/Go runtimes on the same instance.
If your "app.domain.com -> AngularJS Backend" part consists of static files only and no backend code (php/python/go/java) then that wouldn't require running instances as everything would be served from Google's frontend servers and not directly from your instances (the static files are normally not even included with the code you deploy unless you specify that you want that in app.yaml).
I am currently building a storage service, but I am very small and don't want to set up or pay for an Amazon S3 account. I already have my own hosting service, which I want to use. However, I want to make it simple to switch to Amazon S3 if the need arises. Therefore, I would like to have essentially an S3 'clone' on my server, which I can simply redirect to Amazon's servers at a later time. Is there any package that can do this?
EDIT: I am on a shared server where I cannot install software, so is there a simple php page that can do this?
Nimbus allows for that. From FAQ:
Cumulus is an open source implementation of the S3 REST API. Some features such as versioning and COPY are not yet implemented, but some additional features are added, such as file system usage quotas.
http://www.nimbusproject.org/doc/nimbus/faq/#what-is-cumulus
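Because Cumulus speaks the S3 REST API, switching to real S3 later is mostly a matter of changing the endpoint your client talks to. A hedged illustration with the AWS CLI (host, port and bucket are made-up):

# against the self-hosted S3-compatible service:
aws s3 ls s3://my-bucket --endpoint-url http://storage.example.com:8888
# later, against Amazon S3 itself, just drop the --endpoint-url flag:
aws s3 ls s3://my-bucket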
http://www.ubuntu.com/cloud
You would need several host computers, but it works well.
I too have a backup service, but it's based on several 48 TB RAID arrays.