What is the best way to mirror a DB server? [closed] - php

I am creating a website and expect fairly normal usage. For now I am setting up the system with one Apache server and two DB servers. I want any DB operation to be reflected on both DB servers, so that one of them can serve as a backup. How do I do that?
The ways I can think of are:
Perform the same operations on both DBs from PHP. This seems like a terrible idea.
Update one DB and sync the two DB servers periodically. This seems better.
Is there a better way to achieve this? How is it done in enterprises?

If you're using MySQL, there is quite powerful built-in replication.
Check out the docs
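As a hedged sketch of that built-in replication (server IDs, the host name, credentials, and log coordinates below are all placeholders, and the statements use the classic pre-8.0 syntax), the master and replica get distinct server IDs in my.cnf, and the replica is then pointed at the master:

# my.cnf on the master
[mysqld]
server-id = 1
log_bin   = mysql-bin

# my.cnf on the replica
[mysqld]
server-id = 2

-- then, on the replica:
CHANGE MASTER TO
  MASTER_HOST='master.example.internal',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;

You would also need a replication user on the master (GRANT REPLICATION SLAVE) and, in this two-server setup, a way to promote the replica if the master fails.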

Taking a backup every time an operation happens is a terrible idea. No application, modern or old, works this way. Even Windows System Restore takes its backups at scheduled times, not on each operation.
I'd suggest you write an SQL dump script and schedule a cron job that runs it once or twice a day. If you really need the data on the other server immediately (that is, if one of the DB servers crashes, your app must continue working on the backup server right away), you can also write an import script that runs once the dump finishes.
If you are not in that special case, where a second server must be up the moment the first DB server shuts down, you can simply store the dumped SQL files on the machine and only load them into a real database when they are needed.
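For illustration only (the paths, credentials, and database name are placeholders), the dump script could be little more than a PHP wrapper around mysqldump:

<?php
// dump.php - minimal sketch; assumes mysqldump is on the PATH
$file = '/var/backups/db-' . date('Ymd-His') . '.sql';
$cmd  = sprintf(
    'mysqldump --user=%s --password=%s %s > %s',
    escapeshellarg('backupuser'),
    escapeshellarg('secret'),
    escapeshellarg('mydb'),
    escapeshellarg($file)
);
exec($cmd, $output, $status);
if ($status !== 0) {
    error_log('DB dump failed with status ' . $status);
}

A crontab entry such as 0 3 * * * php /path/to/dump.php would then run it nightly at 03:00.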

Related

Is it possible for laravel to reach 1000 TPS? [closed]

There is an application built on Laravel, and it should be ready for a load of 1000 requests per second.
I have done the tasks below:
1. The Composer autoloader has been dumped (optimized).
2. Query results are cached.
3. All views have been minified.
What else should I consider?
(The app runs in a Docker container.)
How are you measuring whether you reach the TPS? I would first get a baseline in order to know how far off you are, and based on that start looking into which part of your application stack is the bottleneck (this includes the web server, the database server, and other services used). Tools available for this are JMeter and Apache Bench.
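For example (the URL and the numbers are placeholders), Apache Bench can give a quick requests-per-second baseline:

# 10,000 requests total, 100 in parallel
ab -n 10000 -c 100 http://your-app.example.com/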
In order to reach 1000 TPS you'll need to tweak the web server to allow for this type of load. How to approach this depends on the web server used, so it is difficult to provide specifics.
With regard to your DB server, there are tools available to benchmark it as well, such as pgBadger (Postgres), or the database's slow-query log.
Ultimately you would also want to be on one of the latest PHP versions, as there are considerable performance gains in every new release. Currently the latest released PHP version is 7.4.
In my opinion these tweaks would yield a greater performance gain than tweaking the PHP code (assuming there is no misuse of PHP). But this of course depends on the specifics of your application.
Optionally you should also be able to scale horizontally (as opposed to vertically), increasing total throughput by the TPS of each additional application server.
Tips to improve Laravel performance (the corresponding commands are sketched below):
Config caching
Route caching
Removing unused services
Classmap optimization
Optimizing the Composer autoloader
Limiting the use of plugins
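As a hedged sketch, these are the standard Laravel and Composer commands behind most of the tips above (verify them against your Laravel version):

php artisan config:cache          # combine all config files into one cached file
php artisan route:cache           # cache the route registrations
php artisan view:cache            # precompile all Blade views
composer dump-autoload --optimize # classmap/autoloader optimization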

Tracking the source of slowdowns [closed]

I was wondering if someone could give a high-level answer about how to track functions which are causing a slow-down.
We have a site with 6 thousand lines of code and at times there's a significant slowdown.
I was wondering what would be the best way to track the source of these occasional slowdowns? Should we attach a time execution tracker on each function or would you recommend something else?
It's a standard LAMP stack setup with PHP 5.2.9 (no frameworks).
The only way to properly track down why and where a script is slowing down is by using a profiler.
There are a few of these available for PHP: some require that you install a module on the server, some use a PHP-only library, and others are stand-alone.
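As one concrete module-based option (not named in the original answer, so treat it as an assumption that it fits your stack), Xdebug 2 can write cachegrind profiles with php.ini settings along these lines (the output directory is a placeholder):

; php.ini
xdebug.profiler_enable = 1
xdebug.profiler_output_dir = /tmp/profiles

The resulting files can then be inspected in a cachegrind viewer such as KCachegrind or Webgrind.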
My preferred profiler is Zend Studio, mainly because I use it as my IDE. It has the benefit of being both stand-alone and usable in conjunction with server-side modules (or the Zend Server package), allowing you to profile both locally and on production systems.
One of the easiest things to look for, however, is SELECT queries inside loops. They are notorious for causing slowdowns, especially once the table being queried holds more than a few hundred records.
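A minimal sketch of the pattern (the table and column names are made up): the loop issues one round-trip per record, while a single IN() query issues one round-trip in total.

<?php
// Slow: one query per iteration
foreach ($ids as $id) {
    $res = mysqli_query($db, "SELECT name FROM users WHERE id = " . (int)$id);
}

// Better: one query for all ids
$list = implode(',', array_map('intval', $ids));
$res  = mysqli_query($db, "SELECT id, name FROM users WHERE id IN ($list)");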
Another is if you have multiple AJAX calls in rapid succession while you're using the default PHP session handler (flat files). This can increase loading times significantly because the IO operations lock: only one request that uses the session can be handled at a time, even though AJAX is by its very nature asynchronous.
The best way to combat this is to use or write a custom session handler that stores the sessions in a database. Just make sure you don't saturate the DB connection limit.
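A minimal sketch of such a handler, in the PHP 5.2-compatible callback style (it assumes a hypothetical table sessions(id VARCHAR PRIMARY KEY, data TEXT, updated INT) and an open mysqli connection in $db):

<?php
function sess_open($path, $name) { return true; }
function sess_close() { return true; }
function sess_read($id) {
    global $db;
    $res = mysqli_query($db, "SELECT data FROM sessions WHERE id = '"
        . mysqli_real_escape_string($db, $id) . "'");
    $row = mysqli_fetch_row($res);
    return $row ? $row[0] : '';
}
function sess_write($id, $data) {
    global $db;
    return mysqli_query($db, "REPLACE INTO sessions (id, data, updated) VALUES ('"
        . mysqli_real_escape_string($db, $id) . "', '"
        . mysqli_real_escape_string($db, $data) . "', " . time() . ")");
}
function sess_destroy($id) {
    global $db;
    return mysqli_query($db, "DELETE FROM sessions WHERE id = '"
        . mysqli_real_escape_string($db, $id) . "'");
}
function sess_gc($maxlifetime) {
    global $db;
    return mysqli_query($db, "DELETE FROM sessions WHERE updated < "
        . (time() - $maxlifetime));
}
session_set_save_handler('sess_open', 'sess_close', 'sess_read',
    'sess_write', 'sess_destroy', 'sess_gc');
session_start();

Note that this naive sketch does no per-session locking, so concurrent AJAX requests can race on writes; that is the trade-off that makes it faster than the locking flat-file handler.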
First and foremost though: Get yourself a proper profiler. ;)

AWS with Moodle with load balancing [closed]

I am planning to install Moodle on Amazon EC2 with ELB. The approach I am thinking of is a couple of Moodle instances and a couple of DB instances; each Moodle instance points to the DB instances through a load balancer, and the DBs sync automatically.
Please advise: will this work?
I don't think there is an option in AWS to have multiple DB instances synchronized with each other while all of them accept both reads and writes. It seems they can only have read replicas, see here: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html and https://aws.amazon.com/rds/mysql/ (Replication chapter).
Also, it would be a big overhead to synchronize a high number of DB instances. Would the synchronization be synchronous? In that case it would carry a performance penalty.
The best option would be to have a number of Moodle instances behind a LB, all of them pointing to the same DB. I do not think the bottleneck sits in the DB. If you also tune the DB and add performant storage (SSDs, see the link above for details), everything should be OK.
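As a hedged illustration of that layout, every Moodle instance behind the load balancer would carry the same database settings in its config.php, all pointing at the one shared DB endpoint (the host name and credentials are placeholders, and this is only the DB fragment of the file):

<?php  // config.php fragment, identical on every Moodle instance
$CFG = new stdClass();
$CFG->dbtype = 'mysqli';
$CFG->dbhost = 'shared-db.example.internal'; // the single shared DB endpoint
$CFG->dbname = 'moodle';
$CFG->dbuser = 'moodleuser';
$CFG->dbpass = 'secret';
$CFG->prefix = 'mdl_';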

Best way to communicate from Android app to Linux daemon [closed]

I'm writing an Android app that is supposed to hand commands (or rather, data) to a C daemon running on another machine in the same network (it should also work from an external network at some point in the future), but I'm having trouble choosing the best way (or protocol) to do that.
Communicating through some sort of API (PHP, Python, etc.) isn't really an option (maybe I'm wrong about this), because the data is time-critical and should take the fastest route possible, so I'm trying to avoid the overhead that comes with HTTP and another layer between the daemon and the app. On the other hand, the daemon should be accessible from a locally running PHP script too (there should be an API in the future, so maybe the extra "layer" isn't that critical?).
But even if I choose the API solution, what's the best way then? Sockets, general IPC?
Any suggestions or experience with a similar situation would be helpful.
In your question you say that it's time-critical, but also that it's on the same network. As long as your application itself doesn't have performance problems, you won't run into timing issues. It also depends on your daemon, though.
I've worked with a lot of remote daemons, and TCP sockets have always been a good option; I've never hit any limitations using them. Just be sure to choose between implementing a Service, if your socket needs to stay alive for your app's whole life cycle, and an AsyncTask or Thread if it's for a limited task.
This is what I use, for instance:
// Connect with an explicit timeout instead of blocking indefinitely
socket = new Socket();
socket.connect(new InetSocketAddress(host, port), timeout);
// Read the daemon's replies as ISO-8859-1 text
in = new BufferedReader(new InputStreamReader(socket.getInputStream(), "ISO-8859-1"));

One user for all databases? [closed]

I have been doing a lot of work on my local machine, and I currently have 5 instances of WordPress running under WAMP. I am sure the answer is a matter of preference, but I want to know whether you should create a new user and database for each WordPress site. Here are a few of the names of my local directories: wordpress, wordpress-dev, wpsandbox, wpxp, etc. They all have their use, and I am actually adding one now, which is why I am asking this question.
Currently each WordPress install has its own database and user with the same name. Moving forward, should I stick with this, or is one global user assigned to every database best?
In a production environment you certainly want to have separate users to better manage permissions, minimize damage if one of the user credentials is compromised, etc.
In a development environment on your local computer I doubt it matters much; one user for all your databases is more convenient. I find myself doing that at times.
However one could make the argument that a developer should be using best practices from the get go, even in a development environment. So if your question is "should I be employing good security measure even in my local development environment" I think the answer is always "yes, unless you have a compelling reason not to."
If this is a local machine where you, and only you, have access, then having one user for all databases should not be a problem.
Yes, this comes down to personal preference, but as I see it, on a live production web service you should get into the practice of having one user with only the rights that it needs.
Example:
Database Name: Testing
Database Name: Another
Users:
Root -- used only to apply changes to the structure of the actual tables/schema.
User1 -- Select, Insert, Delete and the other necessary privileges, specific to Testing.
User2 -- the same as above, but specific to Another.
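A hedged sketch of the corresponding MySQL statements (the database and user names come from the example above; the host and passwords are placeholders):

-- application user for the Testing database only
CREATE USER 'user1'@'localhost' IDENTIFIED BY 'password1';
GRANT SELECT, INSERT, UPDATE, DELETE ON Testing.* TO 'user1'@'localhost';

-- application user for the Another database only
CREATE USER 'user2'@'localhost' IDENTIFIED BY 'password2';
GRANT SELECT, INSERT, UPDATE, DELETE ON Another.* TO 'user2'@'localhost';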
