I'm having a really hard time trying to go down the RIGHT road in a project.
I'm a one man band with a tight budget.
2 dedicated servers
MySQL 5 / PHP 5
I'm using server 1 to consume a lot of data from various feeds. The server/software runs 24/7, generating a huge database.
Server 2 holds a copy of the database with a web frontend.
I don't have any experience of MySQL replication. I've been researching and from what I can tell the slaves are updated right after the master.
I want to have a very speedy website, so that's why the processing is done on server 1, whilst server 2 simply selects data.
If MySQL replication is mimicking server 1 then surely this is going to slow down server 2 and have the opposite of the desired effect.
What I thought might best suit this scenario is to write a script to automate the process.
Server 2 has 2 databases. One for live one for processing.
The script ascertains which database is live and instead uses the other one.
It drops any tables in it.
The script dumps the database from server 1.
Installs it on server 2's newly emptied database.
The script changes the website's config file to utilise the new database.
The process can be repeated over and over.
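For what it's worth, here is a rough sketch of such a script in PHP. Every name, path, and credential below is a placeholder, not something from the question:

    <?php
    // Hypothetical dump-and-swap script (all names/paths are made up).
    $live    = trim(file_get_contents('/var/www/live_db.txt')); // e.g. "site_a"
    $standby = ($live === 'site_a') ? 'site_b' : 'site_a';

    // Rebuild the standby database from a dump of server 1.
    // mysqldump's default --add-drop-table takes care of emptying old tables.
    $cmd = "mysqldump -h server1 -u feeduser -pSECRET feeds"
         . " | mysql -u webuser -pSECRET $standby";
    exec($cmd, $output, $rc);
    if ($rc !== 0) {
        exit("Load failed; keeping the current live database\n");
    }

    // Flip the pointer the website's config reads, as atomically as possible.
    file_put_contents('/var/www/live_db.txt.tmp', $standby);
    rename('/var/www/live_db.txt.tmp', '/var/www/live_db.txt');

The website's config file would then read live_db.txt on each request to decide which database to SELECT from.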
Whilst the database install will be large, it can happen in its entirety at night and should mean no downtime.
Is this better than doing MySQL replication?
I would welcome advice.
It's hard to believe that a database dump/load cycle would be faster than replication, especially row-based (non-query) replication. Replication can be lagged (by running STOP SLAVE SQL_THREAD on the slave) if you don't want it during peak times (but of course you must have sufficient non-peak time to catch up). (Remember that MySQL has three replication modes: statement, row, and mixed. Statement-based imposes the exact same update load on the slaves; row-based just sends the rows that changed, and should be fairly cheap CPU-wise.)
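For reference, lagging the slave that way is just two statements (MySQL 5 syntax):

    -- On the slave: stop applying replicated events. The I/O thread keeps
    -- downloading the binlog, so nothing is lost; events simply queue up.
    STOP SLAVE SQL_THREAD;

    -- Later, off-peak: resume and let the slave catch up.
    START SLAVE SQL_THREAD;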
Either all your slaves are fast enough to apply changes and still have plenty of I/O bandwidth and CPU time to handle SELECTs, or no number of slaves will help. It's possible some other method (e.g., direct copying of data files) might be faster, but more fragile, and really you're talking about relatively minor gains. If you can't handle the update load, your choice with MySQL is to shard (split so each server is only responsible for part of the data) or buy faster hardware.
But ultimately, this is all taking shots in the dark. You can fairly easily change from replication, to rsync, to some insane scheme involving drbd, to whatever, that really only affects your database layer, maybe only the database itself. You need actual benchmarks—actual data—to make decisions like this. I will tell you that as a general rule, properly-designed large OLTP databases run out of I/O bandwidth first.
I'd suggest starting with what's easy. And that'd be a single database server, or built-in replication. Keep in mind that sharding may be necessary at some point.
Actually, there is probably one question you want to answer fairly early: Do you really want to go with MySQL? Consider PostgreSQL.
A high volume of inserts can most certainly impact front end performance, but the answer for your scenario depends on very specifically how your processing engine impacts resources. There are certain combinations of settings that will allow high performance on selects while inserting data constantly. It depends on your specific duty cycle, storage engine, indexing scheme, etc.
Start by thoroughly understanding table locking (http://dev.mysql.com/doc/refman/5.0/en/table-locking.html). This is a must!
Then you can explore features like INSERT DELAYED (http://dev.mysql.com/doc/refman/5.0/en/insert-delayed.html).
And optimize your indices (as few as possible) to reduce the impact of each insert (http://dev.mysql.com/doc/refman/5.0/en/insert-speed.html).
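For instance (table and column names invented; note that INSERT DELAYED only works for MyISAM, MEMORY, and ARCHIVE tables, not InnoDB):

    -- The server queues the row and returns control immediately; the actual
    -- write happens when the table is not in use.
    INSERT DELAYED INTO feed_items (source_id, fetched_at, payload)
    VALUES (42, NOW(), '...');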
Since it sounds like your requirements are driven by lots of data growth (inserts), if you can't get the performance you need from a single instance, replication probably won't help. In which case you should go for the nightly load scenario.
We have a similar use case, and we do nightly batch loads, with replication for backup/failover purposes only.
You say "If MySQL replication is mimicking server 1 then surely this is going to slow down server 2 and have the opposite of the desired effect."
I don't think this is going to slow down the server. Have you tried it and measured any performance difference? I really think this is the best way to go for you, unless you clearly measure a performance impact because of the replication.
You really haven't provided enough info on what you're aiming to do, but here's my best understanding: server1 is fetching data (using bandwidth) and processing it in some way, (using processing power and I/O); server2 is serving live info to users that is based on the post-processed data. Availability for server2 is more important than for server1, and a problem on server1 should not affect server2's operations.
If there's a significant difference between the raw data that server1 is fetching and the 'finished' data for use on server2, perhaps with some temporary data being produced along the way, just have server1 do its work, and use some kind of a script to periodically bring post-processed data from server1 to server2. Perhaps the post-processed data is smaller than the raw stuff that server1 is working on?
If server1 is not really doing much processing, just fetching of data and insertion into the db, then replication might be a reasonable way to move data from #1 to #2.
An in-between approach would be to only replicate certain post-processed tables, so server1 can do its work in other tables in mysql, and when the final product is being inserted into the replicated table, it will automatically appear on server2.
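With classic MySQL replication that filtering is configured on the slave, along these lines (database and table names are placeholders):

    # my.cnf on server2 (the slave): only apply changes to the finished tables.
    [mysqld]
    replicate-do-table = mydb.finished_products
    replicate-do-table = mydb.finished_prices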
Have fun.
I have a website and it has good traffic.
But the problem is that the database gets over thousands hit per seconds so I would like to get suggestions from you guys.
The server is capable of handling much traffic but MariaDB service gets off so What should I do?
I was thinking to use remote databases and a CDN server for contents.
Current server + Database server + CDN Server.
Does anyone have a better suggestion that will help me to get rid of the problem and also reduce the cost of CDN/database servers??
[Sorry for my grammatical mistakes]
but MariaDB service gets off so
I have no idea what this means.
I was thinking to use remote databases and a CDN server for contents
If you are serving content directly from your database then you will have performance problems. However, using a CDN is only going to help if:
1. A large part of your customer base has a long RTT to your server
2. You have already optimized the caching of your content
If 1 is not the case, then you'll get as much benefit running your own cache. If 2 is the case then you're either going to have to fix your caching or enhance your database capacity.
gets over thousands hit per seconds
So what? Depending on the queries and the hardware, that might be a constraint on your service performance. It might not. Tuning your DBMS and queries might help.
Read-only slaves can be used to provide 'infinite' read scaling.
Turn on the slowlog, etc., to find the 'worst' queries to improve on; a snippet for enabling it follows below. See http://mysql.rjweb.org/doc.php/mysql_analysis#slow_queries_and_slowlog
Galera Clustering may provide a small amount of write scaling.
Combining queries may help some.
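As mentioned above, enabling the slow log on a running server (MySQL 5.1+ / MariaDB) takes two statements:

    -- Log queries slower than one second; long_query_time accepts fractions,
    -- so tighten it as the worst offenders get fixed.
    SET GLOBAL slow_query_log = ON;
    SET GLOBAL long_query_time = 1;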
I am in the final stages of configuring a service that is accessible from four global locations (with plans to add more later). I will be running the servers on an Ubuntu 12.04 box with MariaDB. My initial thought was to create servers that run independently of each other with 4 distinct databases and live with the constraint that users would only be able to login to the server where they were initially registered.
However, I have just run into this article that has got me thinking...
From my reading of things, if I set up a Galera cluster with master-master replication as suggested in the article, I can have the luxury of one large database that is consistently available across all four servers. I have gathered (and am hoping) that with the cluster set up correctly and functioning well I need do pretty much nothing in my PHP code (the four MariaDB instances will have the same user to access the database) - not even alter the PDO connection string.
However, this sounds almost too good to be true. My questions are:
are there other issues involved here that make for complications?
Do the PHP PDO connection strings need to be altered in any way?
Does the fact that my application is already structured to ensure that there is absolutely zero chance of two servers attempting to simultaneously write the same row help?
And finally, am I right in reading from the MariaDB docs that this will not work with the TokuDB storage engine?
Is there a way to specifically stop the replication of a selected table? Could I in fact exploit the "only InnoDB/XtraDB" constraint and use another storage engine on the table I do not want to have replicated?
are there other issues involved here that make for complications?
There are some Known Limitations that you should be aware of. Generally, with clusters, you should ideally have an odd number of nodes to prevent split-brain conditions, but an even number will usually work just as well.
Do the PHP PDO connection strings need to be altered in any way?
No. Your existing connection strings should work.
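In other words, a plain PDO connection like the following (host and credentials invented) works unchanged whether it points at a single node or at a proxy in front of the cluster:

    <?php
    // The DSN is identical to a single-server MariaDB setup; only the host
    // changes if you later put HAProxy/glbd in front of the nodes.
    $pdo = new PDO(
        'mysql:host=db.example.com;dbname=myapp;charset=utf8',
        'appuser',
        'secret',
        array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
    );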
Does the fact that my application is already structured to ensure that there is absolutely zero chance of two servers attempting to simultaneously write the same row help?
Look at the known limitations and make sure your application will still do that. If you're using named locks, you'll need to change your application.
And finally, am I right in reading from the MariaDB docs that this will not work with the TokuDB storage engine?
TokuDB support was added in the recent galera cluster distribution. I have used some and it does replicate just like InnoDB but I wouldn't rely on it since it's new in the galera cluster build.
Is there a way to specifically stop the replication of a selected table? Could I in fact exploit the "only InnoDB/XtraDB" constraint and use another storage engine on the table I do not want to have replicated?
I've heard a lot of people ask if they can omit tables or databases from replication but I still haven't heard a good reason why. Galera replication provides HA and is cheap and easy so even if some tables aren't important I can't find any realistic reason to not replicate the data. That being said, you could have data not replicated by using MyISAM/Aria.
I've been using MariaDB with galera in multiple moderately sized projects and it is the best solution I've found for HA; it also provides performance benefits. Other solutions are generally expensive or not mature. One thing you should consider is setting up a proxy for connecting to the database servers, like HAProxy, mysql-proxy, or glbd (which I use), to provide better redundancy and connection balancing for performance.
In response to DroidOS's comment below:
Every write in the cluster needs to be agreed upon by every node so any latency between nodes is added to every write. So, basically, every write will have the greatest round trip time between the writing server and the other nodes added to it.
No. Galera replication is all or nothing across the entire cluster. If any node has a problem writing the data, which can happen if a table doesn't have a primary key, the node will gracefully kill itself since it can't guarantee its data is consistent with the rest of the cluster. If that happens, the rest of the cluster will continue to operate normally. If there is a network issue, if one of the segments has quorum, it will continue to operate normally. Any segments without quorum will wait for more nodes to get quorum but will not accept queries. With this behavior, you can be sure that any node that you are able to query is consistent with the rest of the cluster.
Given that this has turned out to be such a popular question I thought I should add an extra answer by way of comment for anyone who runs into it.
The big issue with synchronous replication is the latency introduced by the process. There will certainly be times when synchronous replication is required and latency has to be managed and then lived with. However, you might on reflection - as I did - realize that you can live with lazy replication. There are commercial solutions that deliver this, albeit at a hefty fee. You also have the possibility of rolling your own solution - easier than you might think.
Consider a web app in which a call to the app consists of PHP script running several MySQL queries, some of them memcached.
The PHP does not do a very complex job. It is mainly serving the MySQL data with some formatting.
In the past it used to be recommended to put MySQL and the app engine (PHP/Apache) on separate boxes.
However, when the data can be divided horizontally (for example, when there are ten different customers using the service and it is possible to divide the data per customer), and when Nginx + FastCGI is used instead of the heavier Apache, doesn't it make sense to put Nginx, memcached, and MySQL on the same box? Then, when more customers come, add similar boxes?
Background: We are moving to Amazon Ec2. And a separate box for MySQL and app server means double EBS volumes (needed on app servers to keep the code persistent as it changes often). Also if something happens to the database box, more customers will fail.
Clarification: Currently the app is running with LAMP on a single server (before moving to EC2).
If your application architecture is already designed to support Nginx and MySQL on separate instances, you may want to host all your services on the same instance until you receive enough traffic that justifies the separation.
In general, creating new identical instances with the full stack (Nginx + Your Application + MySQL) will make your setup much more difficult to maintain. Think about taking backups, releasing application updates, patching the database engine, updating the database schema, generating reports on all your clients, etc. If you opt for this method, you would really need to find some big advantages in order to offset all the disadvantages.
You need to measure carefully how much memory overhead everything has - I can't see nginx vs Apache making much difference; it's PHP which will use all the RAM (this in turn depends on how many processes the web server chooses to run, but that's more of a tuning issue).
Personally I'd stay away from nginx on the grounds that it is too risky to run such a weird server in production.
Databases always need lots of ram, and the only way you can sensibly tune the memory buffers is to have them on dedicated servers. This is assuming you have big data.
If you have very small data, you could keep it on the same box.
Likewise, memcached makes almost no sense if you're not running it on dedicated boxes. Taking memory from MySQL to give to memcached is really robbing Peter to pay Paul. MySQL can cache stuff in its innodb_buffer_pool quite efficiently (This saves IO, but may end up using more CPU as you won't cache presentation logic etc, which may be possible with memcached).
Memcached is only sensible if you're running it on dedicated boxes with lots of ram; it is also only sensible if you don't have enough grunt in your db servers to serve the read-workload of your app. Think about this before deploying it.
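To make the trade-off concrete, here is the usual read-through pattern as a sketch (server, table, and key names are invented):

    <?php
    // Read-through cache: check memcached first, fall back to MySQL.
    // Only worth it when the DB can't keep up with the read workload;
    // otherwise InnoDB's buffer pool is already doing this for you.
    $mc = new Memcached();
    $mc->addServer('cache1.example.com', 11211);

    function getProduct(PDO $pdo, Memcached $mc, $id) {
        $key = 'product:' . (int)$id;
        $row = $mc->get($key);
        if ($row === false) {                       // cache miss
            $stmt = $pdo->prepare('SELECT * FROM products WHERE id = ?');
            $stmt->execute(array($id));
            $row  = $stmt->fetch(PDO::FETCH_ASSOC);
            $mc->set($key, $row, 300);              // keep for 5 minutes
        }
        return $row;
    }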
If your application is able to work with PHP and MySQL on different servers (I don't see why this wouldn't work, actually), then, it'll also work with PHP and MySQL on the same server.
The real question is: will your servers be able to handle the load of Apache/nginx/PHP, MySQL, and memcached all at once?
And there is only one way to answer that question: you have to test in a "real" "production" configuration, to determine how loaded your servers are -- or use some tool like ab, siege, or OpenSTA to "simulate" that load.
If there is not too much load with everything on the same server... Well, go with it, if it makes the hosting of your application cheaper ;-)
I need a simple way for multiple running PHP scripts to share data.
Should I create a MySQL DB with a RAM storage engine, and share data via that (can multiple scripts connect to the same DB simultaneously?)
Or would flat files with one piece of data per line be better?
Flat files? Nooooooo...
Use a good DB engine (MySQL, SQLite, etc). Then, for maximum performance, use memcached to cache content.
In this way, you have the ease and reliability of sharing data between processes using proven server software that handles concurrency, etc... But you get the speed of having your data cached.
Keep in mind a couple things:
MySQL has a query cache. If you are issuing the same queries repeatedly, you can gain a lot of performance without adding a caching layer.
MySQL is really fast anyway. Have you load-tested to demonstrate it is not fast enough?
Please don't use flat files, for the sanity of the maintainers.
If you're just looking to have shared data, as fast as possible, and you can hold it all in RAM, then memcached is the perfect solution.
If you'd like persistence of data, then use a DBMS, like MySQL.
Generally, a DB is better. However, if you are sharing a small, mostly static amount of data, there might be performance benefits (and simplicity) in doing it with flat files.
For anything beyond trivial data sharing, though, I would pick a DB.
1- Where the flat file can be useful:
Flat files can be faster than a database, but only in very specific applications.
They are faster if the data is read from start to finish without any searches or writes.
If the data doesn't fit in memory and needs to be read fully to get the job done, it 'can' be faster than a database. Also, if there are a lot more writes than reads, flat files shine: most default database setups will make read queries wait for the writes to finish in order to maintain indexes and foreign keys, making write queries usually slower than simple reads.
TL;DR version:
Use flat files for job-based systems (e.g., simple log parsing), not for web search queries.
2- Flat file pitfalls:
If you're going with a flat file, you will need to synchronize your scripts when the file changes using a custom lock mechanism, which can lead to slowdowns, corruption, or even deadlock if you have a bug.
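In PHP that custom locking usually means flock(), and even when it works correctly you pay for it in blocked scripts:

    <?php
    // Every script touching the file must take the same lock,
    // or you get the corruption described above.
    $fp = fopen('/tmp/shared.dat', 'c+');   // read/write, create if missing
    if (flock($fp, LOCK_EX)) {              // blocks until others release it
        $data = stream_get_contents($fp);
        // ... modify $data ...
        ftruncate($fp, 0);
        rewind($fp);
        fwrite($fp, $data);
        fflush($fp);
        flock($fp, LOCK_UN);
    }
    fclose($fp);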
3- RAM-based database?
Most databases have in-memory caches for query results and search indexes, making them very hard to beat with a flat file. Because they already cache in memory, making a database run entirely from RAM is most of the time ineffective and dangerous. Better to properly tune the database configuration.
If you're looking to optimize performance using RAM, I would first look at running your PHP scripts, HTML pages, and small images from a RAM drive, where the cache mechanism is more likely to be crude and hit the hard drive systematically for non-changing static data.
Better results can be reached with a load balancer, or clustering with backplane connections, up to a RAM-based SAN array. But that's a whole other topic.
4- Can multiple scripts connect to the same DB simultaneously?
Yes; it's called connection pooling. In PHP (client side), the function to open a persistent connection is mysql_pconnect (http://php.net/manual/en/function.mysql-pconnect.php).
You can configure the maximum number of open connections in php.ini, I think. A similar setting on the MySQL server side defines the maximum number of concurrent client connections in /etc/mysql/my.cnf.
You must do this in order to take advantage of parallel processing on the CPU and avoid PHP scripts waiting for each other's queries to finish. It greatly increases performance under heavy load.
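A persistent connection in the old mysql extension looks just like a normal one (credentials are placeholders):

    <?php
    // mysql_pconnect() reuses an idle connection left over from a previous
    // request instead of opening a new TCP connection every time.
    $link = mysql_pconnect('localhost', 'webuser', 'secret');
    mysql_select_db('myapp', $link);
    $result = mysql_query('SELECT COUNT(*) FROM items', $link);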
There is also a connection pool/thread pool in the Apache configuration for regular web clients. See httpd.conf.
Sorry for the wall of text, was bored.
Louis.
If you're running them on multiple servers, a filesystem-based approach will not cut it (unless you've got a consistent shared filesystem, which is unlikely and may not be scalable).
Therefore you'll need a server-based database anyway to allow the sharing of data between web servers. If you're serious about either performance or availability, your application will support multiple web servers.
I would say that the MySQL DB would be the better choice unless you have some mechanism in place to deal with locks on the flat files (and some way to control access). In this case the DB layer (regardless of specific DBMS) is acting as an indirection layer, letting you not worry about it.
Since the OP doesn't specify a web server (and PHP actually can run from a command line), I'm not certain that the caching technologies are what they're after here. The OP could be looking to do some sort of on-the-fly data transform that isn't website driven. Who knows.
If your system has a PHP cache (that caches compiled PHP code in memory, like APC), try putting your data into a PHP file, as PHP code. If you have to write data, there are some security issues.
I need a simple way for multiple running PHP scripts to share data.
APC and memcached are both good options depending on context. Shared memory may also be an option.
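For example, with APC (key name invented); keep in mind APC's shared memory is per-machine, while memcached works across servers:

    <?php
    // One script writes...
    apc_store('shared_counter', 42);
    // ...and any other script on the same box can read it back.
    $value = apc_fetch('shared_counter');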
Should I create a MySQL DB with a RAM storage engine, and share data via that (can multiple scripts connect to the same DB simultaneously?)
That's also a decent option, but will probably not be as fast as APC or memcached.
Or would flat files with one piece of data per line be better?
If this is read-only data, that's a possibility -- but may be slower than any of the options above. Especially if the data is large. Rather than writing custom parsing code, however, consider simply building a PHP array, and include() the file.
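A sketch of that approach for read-mostly data (the path is invented; var_export() emits valid PHP source):

    <?php
    // Writer: dump the data as PHP code. Anything in this file gets executed
    // on include, which is the security issue mentioned in another answer.
    $data = array('foo' => 1, 'bar' => 2);
    file_put_contents(
        '/tmp/shared_data.php',
        '<?php return ' . var_export($data, true) . ';'
    );

    // Reader: with an opcode cache such as APC, the include is served
    // from memory instead of re-parsing the file.
    $data = include '/tmp/shared_data.php';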
If this is a datastore that may be accessed by several writers simultaneously, by all means do NOT use a flat file! Writing to a flat file from multiple processes is likely to lead to file corruption. You can lock the file, but you risk lock contention issues, and long lock wait times.
Handling concurrent writes is the reason applications like mysql and memcached exist.
The site I am developing in PHP makes many MySQL database requests per page view, albeit many are small requests against properly designed indexes. I do not know if it will be worthwhile to develop a cache script for these pages.
Is file I/O generally faster than database requests? Does this depend on the server? Is there a way to test how many of each your server can handle?
One of the pages checks the database for a filename, then checks the server to see if the file exists, then decides what to display. This, I would assume, would benefit from a cached page view?
Also, if there is any other information on this topic that you could point me to, that would be greatly appreciated.
If you're doing read-heavy access (looking up filenames, etc) you might benefit from memcached. You could store the "hottest" (most recently created, recently used, depending on your app) data in memory, then only query the DB (and possibly files) when the cache misses. Memory access is far, far faster than database or files.
If you need write-heavy access, a database is the way to go. If you're using MySQL, use InnoDB tables, or another engine that supports row-level locking. That will avoid people blocking while someone else writes (or worse, writing anyway).
But ultimately, it depends on the data.
It depends on how the data is structured, how much there is and how often it changes.
If you've got relatively small amounts of relatively static data with relatively simple relationships - then flat files are the right tool for the job.
Relational databases come into their own when the connections between the data are more complex. For basic 'look-up tables' they can be a bit overkill.
But if the data is constantly changing, then it can be easier to just use a database rather than handle the configuration management by hand - and for large amounts of data, with flat files you've got the additional problem of how to find the one bit that you need efficiently.
This really depends on many factors. If you have a fast database with much of the data cached in RAM, or a fast RAID system, chances are slim that you will gain much from simple filesystem caching on the web server. Also think about scalability. Under a high workload, a simple caching mechanism might easily become a bottleneck, while a database is well designed to handle high workloads.
If there are not so many requests and you (or the operating system) are able to keep the cache in RAM, you might be able to gain some performance. But then the question arises whether it is really necessary to perform caching under a low workload.
From a plain performance perspective, it is wiser to tune the database server and not complicate the data access logic with intermediate file caches. A good database server will do the caching on its own if the results are cacheable. (I'm not sure whether that is the case with MySQL.)
If you have performance problems, you should profile the pages to see the real bottlenecks. Even when you are - like me - a fan of optimized code, putting stronger/more hardware into the equation is cheaper in the long run.
If you still need to use caches, consider using an existing solution, like memcached.