Large BLOBs in a MySQL database are making my website slow - PHP

I have a website running on the Drupal 7 CMS with a MySQL database, and I'm facing a problem with the database: I have to store a lot of large texts as BLOBs in 3 tables, and at the moment each of those 3 tables is about 10 GB.
On those 3 tables I only run INSERT and SELECT queries.
Although my server has 16 GB of RAM, I believe the database is what makes my website so slow. What are your suggestions to solve this problem? How do large websites deal with huge amounts of data?
I'm also thinking of moving these 3 tables to another database, or even to another server.

The best solution will depend a lot on the nature of your site and exactly what you're looking for, so it's very difficult to give a concise answer here.
One common approach, for sites which aren't extremely latency-sensitive, is to actually store the textual/binary data in another service (e.g., Amazon's S3), and then only keep a key to that service stored in your database. Your application can then perform a database query, retrieve the key, and either send a request to the service directly (if you want to process the BLOB server-side) or instruct the client application to download the file from the service.
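As a rough sketch of that pattern (not a drop-in solution): assuming the AWS SDK for PHP v3, a plain PDO connection, and made-up table/bucket names, it could look like this:

<?php
// Sketch only: assumes the AWS SDK for PHP v3 (composer require aws/aws-sdk-php)
// and PHP 7+. Table and bucket names are hypothetical.
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3  = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
$pdo = new PDO('mysql:host=localhost;dbname=drupal', 'user', 'pass');

function storeLargeText(PDO $pdo, S3Client $s3, int $nodeId, string $text): void
{
    $key = 'texts/' . $nodeId . '/' . uniqid();

    // Put the large payload in object storage instead of a BLOB column.
    $s3->putObject([
        'Bucket' => 'my-site-assets',   // hypothetical bucket
        'Key'    => $key,
        'Body'   => $text,
    ]);

    // Keep only the small key in MySQL.
    $stmt = $pdo->prepare('INSERT INTO node_texts (node_id, s3_key) VALUES (?, ?)');
    $stmt->execute([$nodeId, $key]);
}

function fetchTextUrl(PDO $pdo, S3Client $s3, int $nodeId): string
{
    $stmt = $pdo->prepare('SELECT s3_key FROM node_texts WHERE node_id = ?');
    $stmt->execute([$nodeId]);
    $key = $stmt->fetchColumn();

    // Hand the client a short-lived URL instead of streaming the blob through PHP.
    $cmd = $s3->getCommand('GetObject', ['Bucket' => 'my-site-assets', 'Key' => $key]);
    return (string) $s3->createPresignedRequest($cmd, '+15 minutes')->getUri();
}

The database row stays tiny (an id and a key), so table scans and backups stop being dominated by the text payloads.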

Related

MySQL multiple remote sources queries

What's the best approach to move data between 3 to 5 remote MySQL sources plus one local one?
We are setting up a "centralized" panel to control these (I know this isn't the best/most secure way to do it), but we don't have any kind of sensitive data anyway and we work over HTTPS, so this is the fastest way to achieve it.
Currently, when we run INSERT/SELECT against these, we see huge load. Is it better to build an API for each site to handle the SELECT/UPDATE/INSERT per node (this might reduce or share the load between servers)? Or should we just keep connecting remotely to the MySQL servers and run everything against MySQL directly?
Of course we would have to build this API to fit each site and talk to the other(s); that way each server would use only its local MySQL and receive all data from the other sources through the API.
There might be SELECT queries returning 1000-2000 rows, but these are rare, so across 5 sources this could be up to 8000 rows. Mostly it is grouping data and stacking the values, i.e. adding up integer values to get a total from all sources.
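A rough sketch of the kind of aggregation I mean (hosts, credentials and table/column names are placeholders, not our real setup):

<?php
// Rough sketch of the aggregation described above; all hosts, credentials
// and table/column names are placeholders.
$sources = [
    'mysql:host=site1.example.com;dbname=stats',
    'mysql:host=site2.example.com;dbname=stats',
    'mysql:host=site3.example.com;dbname=stats',
];

$total = 0;
foreach ($sources as $dsn) {
    $pdo = new PDO($dsn, 'reporter', 'secret', [
        PDO::ATTR_TIMEOUT => 5,                       // don't hang on a slow node
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);
    // Each remote does its own grouping; only the summed value crosses the wire.
    $total += (int) $pdo->query('SELECT SUM(hits) FROM daily_counters')->fetchColumn();
}

echo "Total across all sources: $total\n";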

Access and store large amount of data from mysql server

We are developing an iOS/Android application which downloads large amounts of data from a server.
We're using JSON to transfer data between the server and client devices.
Recently the size of our data increased a lot (about 30000 records).
When fetching this data, the server request times out and no data gets fetched.
Can anyone suggest the best method to achieve a fast transfer of data?
Is there any method to prepare data initially and download data later?
Is there any advantage to using multiple databases on the device (SQLite DBs) and performing parallel insertions into them?
Currently we are downloading/uploading only changed data (using UUID and time-stamp).
What is the best approach to achieve this efficiently?
---- Edit -----
I think it's not only a problem of the number of MySQL records; at peak times multiple devices connect to the server to access data, so connections also end up waiting. We are using a high-performance server. I am mainly looking for a solution to handle this sync on the device. Is there any good method to simplify the sync or make it faster, using multi-threading, multiple SQLite DBs, etc.? Or data compression, using views, or something else?
A good way to achieve this would probably be to download no data at all.
I guess you won't be showing all 30k rows to your client, so why download them in the first place?
It would probably be better to create an API on your server that mediates between the mobile devices and the database, so the clients only download the data they actually need/want.
Then, with a cache system on the mobile side, you can make sure that clients don't download the same thing every time and that content they have already seen is available offline.
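As a rough illustration of "only download the data they actually need" (plain PDO; the table, columns and page size are assumptions, not from the question):

<?php
// Minimal sketch of a paged "changes since" endpoint; table/column names
// and the page size are assumptions, not part of the original question.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$since = $_GET['since'] ?? '1970-01-01 00:00:00'; // client sends its last sync time
$page  = max(0, (int) ($_GET['page'] ?? 0));
$limit = 500;                                     // keep each response small

$stmt = $pdo->prepare(
    'SELECT uuid, payload, updated_at
       FROM records
      WHERE updated_at > :since
      ORDER BY updated_at
      LIMIT :limit OFFSET :offset'
);
$stmt->bindValue(':since', $since);
$stmt->bindValue(':limit', $limit, PDO::PARAM_INT);
$stmt->bindValue(':offset', $page * $limit, PDO::PARAM_INT);
$stmt->execute();

header('Content-Type: application/json');
echo json_encode($stmt->fetchAll(PDO::FETCH_ASSOC));

The device keeps asking for the next page until a response comes back short, instead of pulling all 30,000 rows in one request that times out.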
When fetching this data, the server request times out and no data gets fetched.
Are you talking only about reads, or about writes too?
If you are talking about write access as well: are the 30,000 records the result of a single insert/update? Are you using a transactional engine like InnoDB? If so, are your queries wrapped in a single transaction? Having autocommit mode enabled can lead to massive performance issues:
Wrap several modifications into a single transaction to reduce the number of flush operations. InnoDB must flush the log to disk at each transaction commit if that transaction made modifications to the database. The rotation speed of a disk is typically at most 167 revolutions/second (for a 10,000RPM disk), which constrains the number of commits to the same 167th of a second if the disk does not “fool” the operating system.
Source
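In PHP terms, a minimal sketch of that advice with PDO (the table and columns are made up, and the incoming batch is assumed to arrive as JSON):

<?php
// Sketch: wrap the whole batch of inserts in one transaction so InnoDB flushes
// the log once per commit instead of once per row. Table/columns are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Assumed input: the uploaded batch as JSON in the request body.
$rows = json_decode(file_get_contents('php://input'), true) ?: [];

$stmt = $pdo->prepare('INSERT INTO records (uuid, payload) VALUES (?, ?)');

$pdo->beginTransaction();
try {
    foreach ($rows as $row) {
        $stmt->execute([$row['uuid'], $row['payload']]);
    }
    $pdo->commit();                          // single flush for the whole batch
} catch (Exception $e) {
    $pdo->rollBack();                        // nothing partial is left behind
    throw $e;
}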
Can anyone suggest the best method to achieve a fast transfer of data?
How complex is your query design? Inner or outer joins, correlated or non-correlated subqueries, etc.? Use EXPLAIN to inspect the efficiency. Read about EXPLAIN
Also, take a look at your table design: Have you made use of normalization? Are you indexing properly?
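For example, a quick way to inspect a plan from PHP (the query and tables here are placeholders, not from the question):

<?php
// Sketch: run EXPLAIN on the slow query and look at the "type", "key" and
// "rows" columns of the plan. Query/table names are placeholders.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$plan = $pdo->query(
    'EXPLAIN SELECT r.uuid, r.payload
        FROM records r
        JOIN devices d ON d.id = r.device_id
       WHERE r.updated_at > NOW() - INTERVAL 1 DAY'
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($plan as $row) {
    // "type" = ALL usually means a full table scan -> a candidate for an index.
    printf("table=%s type=%s key=%s rows=%s\n",
           $row['table'], $row['type'], $row['key'] ?? 'NULL', $row['rows']);
}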
Is there any method to prepare data initially and download data later?
How do you mean that? Maybe temporary tables could do the trick.
But without knowing any details of your project, downloading 30,000 records to a mobile device at one time sounds odd to me. Your application/DB design probably needs to be reviewed.
Anyway, for any data that does not need to be updated/inserted directly in the remote database, use a local SQLite database on the mobile device. This is much faster, as SQLite is a file-based DB and the data doesn't need to be transferred over the network.

Can MySQL handle large amounts of data?

I am developing a college management web application with PHP and MySQL. I chose MySQL as my database because of its free license. Will it handle large amounts of data? The college data gradually increases with more schools and with each year of accumulated data. Is MySQL the best choice for large amounts of data?
Thanks in advance
MySQL is perfectly fine; Facebook uses MySQL, for instance, and I can't imagine a more extensive database. See https://blog.facebook.com/blog.php?post=7899307130 from Facebook's blog.
MySQL is definitely a good choice for you to start with, as it is...
freely available
a de facto standard in combination with PHP
a good start for beginners
and yes, able to handle a huge amount of data
I've seen lots of companies and startups using MySQL and handling tons of data. If you run into performance issues later, you can deal with them then, e.g. add a caching layer, optimize MySQL, etc.
MySQL will handle large amounts of data just fine; making sure your tables are properly indexed will go a long way toward ensuring that you can retrieve large data sets in a timely manner. We have a client with a database of over 5 million records, and we don't have much trouble beyond the normal issues of dealing with a table that large.
Each flavor of SQL has its own differences; just make sure you do your due diligence to find the best options for your database and tables based on your needs.
MySQL (MyISAM) tables have a maximum size of 4 GB by default, but you can change this; PostgreSQL's default per-table limit is far higher (around 32 TB).
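For what it's worth, that 4 GB default applies to MyISAM tables with the 32-bit data pointer; here is a hedged sketch of checking and raising it (the table name is a placeholder):

<?php
// Sketch only: checking the current ceiling of a MyISAM table and raising it
// by giving MySQL a larger data pointer. Table name is a placeholder.
$pdo = new PDO('mysql:host=localhost;dbname=school', 'user', 'pass');

// Max_data_length shows the current ceiling for a MyISAM table.
$status = $pdo->query("SHOW TABLE STATUS LIKE 'grades'")->fetch(PDO::FETCH_ASSOC);
echo 'Current max size: ' . $status['Max_data_length'] . " bytes\n";

// Raising MAX_ROWS/AVG_ROW_LENGTH makes MySQL allocate a larger data pointer.
$pdo->exec('ALTER TABLE grades MAX_ROWS = 1000000000 AVG_ROW_LENGTH = 200');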
MySQL is an OK choice, but if you're expecting vast amounts of data I would prefer PostgreSQL (IMHO the best free DB available).

Managing multiple MySQL 1 GB databases with data input from PHP application

I am creating an application that uses MySQL and PHP. My current web hosting provider limits each MySQL database to 1 GB, but I am allowed to create many 1 GB databases. Even if I found another hosting provider that allowed larger databases, I wonder how data integrity and speed are affected by larger databases. Is it better to keep databases small in terms of disk size? In other words, what is the best-practice method of storing the same data (all text) from thousands of users? I am new to database design and planning. Eventually, I would imagine that a single database with data from thousands of users would grow to be inefficient, and that optimally the data should be distributed among smaller databases. Do I have this correct?
On a related note, how would my application know when to create another table (or switch to another table that was manually created)? For example, if I had one database that filled up with 1 GB of data, I would want my application to continue working without any service delays. How would I control the flow of incoming data from one database to a second, newly created one?
Similarly, suppose a user joins the website in 2011 and creates 100 records, thousands of other users do the same, and the 1 GB database fills up. Later on, that original user adds another 100 records, which end up in a second 1 GB database. How would my PHP code know which database to query for the 2 sets of 100 records? Would this be managed automatically in some way on the MySQL end, or would it need to be managed in the PHP code with IF/THEN/ELSE statements? Is this a service that some web hosting providers offer?
This is a very abstract question and I'm not sure generic Stack Overflow is the right place for it.
In any case: what is the best-practice method of storing data? How about: in a file on disk. Keep in mind that a database is just a glorified file with fancy 'read' and 'write' commands.
Optimization is hard; you can only ever trade things: CPU for memory usage, read speed for write speed, bulk storage for speed. (Or get a better hosting provider and make your databases as large as you want ;) )
To answer your second question: if you do go with the multiple-database approach, you will need to set up some system to 'migrate' users from one database to another when one gets full. If you reach 80% of 1 GB, start migrating users.
Detecting the size of a database is a tricky problem. You could, I suppose, look at the raw files on disk to see how big they are, but perhaps there are more clever ways.
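One of those "more clever ways" is to ask information_schema; a minimal sketch, assuming a hypothetical shard name and the 80% threshold mentioned above:

<?php
// Sketch: measure a database's size from information_schema instead of
// looking at the files on disk. Database name is a placeholder.
$pdo = new PDO('mysql:host=localhost;dbname=information_schema', 'user', 'pass');

$stmt = $pdo->prepare(
    'SELECT SUM(data_length + index_length) AS bytes
       FROM information_schema.TABLES
      WHERE table_schema = ?'
);
$stmt->execute(['app_shard_1']);
$bytes = (int) $stmt->fetchColumn();

$limit = 1 * 1024 * 1024 * 1024;            // the host's 1 GB cap
if ($bytes > 0.8 * $limit) {
    // Time to start migrating users to the next database, as suggested above.
    error_log('Shard app_shard_1 is over 80% of its 1 GB limit');
}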
I would suggest that SQLite would be the best option in your case. It supports databases of up to 2 terabytes (2^41 bytes), and the best part is that it requires no server-side installation, so it works everywhere. All you need is a library to work with the SQLite database.
You can also choose your host without worrying about which databases and sizes they support.

Best way to store complex relations of huge amount of data php/mysql

I'm trying to develop a web-based digital asset management application. I'm planning to build it in CodeIgniter with MySQL as the database. It is for a popular regional newspaper. They will have thousands of entries and terabytes of data, as tons of information will be entered daily.
There will be different content types like Media, Personality, Event, Issue, Misc, etc. All of this will be categorized. The thing is, everything will be interconnected. For example, the event "olympics" will be connected to all the participants in the personality table and to all the media related to it. I'm planning to implement this complex interconnection using a 'connections' table:
id | subject | connection | type
---+---------+------------+------
 1 |      98 |        190 | media
 2 |     283 |        992 | issue
 3 |     498 |        130 | info
So when a person opens the event "olympics", all its connections will be populated from this table. The 'subject' column will hold the id of 'olympics' and 'connection' will hold the id of the connected entry.
Is there a better way to do this? The content will have to be searched based on hundreds of different criteria, but there will be very few end users: only the newspaper's reporters (at most 100) will have access to this app, so the traffic and load will be very low, but the amount of information stored will be very high. I would like to hear from experienced developers, as I don't have much experience building something this big.
This is a complex question, in that you need to know a lot about tuning and configuring your MySQL database in order to handle both the load and the data. With such a low number of users you will be okay in terms of connections, so query execution time is the real bottleneck.
If you are on a 32-bit server, the maximum for a table is 4.2 billion rows and 4 GB without any configuration changes. You can raise the 4 GB table limit, but as far as I know the 4.2 billion row limit is the maximum on a 32-bit server.
Your table seems like it would be okay, but I would change "type" to an ENUM so the data is not stored as text (this reduces the overall table size).
You will have to index this table properly, and from the look of it the index would be on (subject, type). Without hard numbers or an example query with joins, it is hard to estimate how fast this query would run, but if it is indexed properly and has high cardinality you should be okay.
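A hedged sketch of both suggestions (the ENUM values, column sizes and sample ids are guesses, not from the question):

<?php
// Sketch of the suggested changes: "type" as an ENUM and a composite index
// on (subject, type). ENUM values and column sizes are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=dam', 'user', 'pass');

$pdo->exec("
    CREATE TABLE connections (
        id           INT UNSIGNED NOT NULL AUTO_INCREMENT,
        subject      INT UNSIGNED NOT NULL,
        `connection` INT UNSIGNED NOT NULL,
        type         ENUM('media', 'issue', 'info', 'personality', 'event', 'misc') NOT NULL,
        PRIMARY KEY (id),
        KEY idx_subject_type (subject, type)
    ) ENGINE=InnoDB
");

// The lookup described above: everything of one type connected to the
// "olympics" subject (98 is the sample id from the table above).
$stmt = $pdo->prepare(
    'SELECT `connection`, type FROM connections WHERE subject = ? AND type = ?'
);
$stmt->execute([98, 'media']);
$related = $stmt->fetchAll(PDO::FETCH_ASSOC);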
You can always add a Memcache layer between PHP and MySQL to cache some results, which gives you better performance when users run similar searches. With "hundreds of different criteria", though, you will most likely be hitting the database quite a bit.
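A minimal sketch of such a cache layer, assuming the PHP "memcached" extension (the key scheme and TTL are arbitrary):

<?php
// Sketch: cache a search result in Memcached so repeated identical searches
// skip MySQL. Assumes the php "memcached" extension; names and TTL are arbitrary.
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$pdo = new PDO('mysql:host=localhost;dbname=dam', 'user', 'pass');

function searchConnections(PDO $pdo, Memcached $cache, int $subject, string $type): array
{
    $key = "connections:$subject:$type";

    $rows = $cache->get($key);
    if ($rows !== false) {
        return $rows;                        // cache hit
    }

    $stmt = $pdo->prepare(
        'SELECT `connection` FROM connections WHERE subject = ? AND type = ?'
    );
    $stmt->execute([$subject, $type]);
    $rows = $stmt->fetchAll(PDO::FETCH_COLUMN);

    $cache->set($key, $rows, 300);           // keep for 5 minutes
    return $rows;
}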
Alternatively, you could also take a look at some NoSQL options such as MongoDB, which, depending on your data, might be a better fit.
