I need to check whether some integer value is already in my database (which is growing all the time), and this check has to be done several thousand times in one script run. I'm considering two alternatives:
Read all those numbers from MySQL database into PHP array and every time I need to check it, use in_array function.
Every time I need to check the number, just execute something like SELECT number FROM table WHERE number='#' LIMIT 1
On the one hand, searching an array stored in RAM should be faster than querying MySQL every time (as mentioned, these checks are performed several thousand times during one script execution). On the other hand, the DB is growing, and that array may become quite big, which may slow things down.
The question is: which way is faster, or better in some other respect?
I have to agree that #2 is your best choice. When performing a query with LIMIT 1, MySQL stops the query as soon as it finds the first match. Make sure the columns you intend to search by are indexed.
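For illustration, here is a minimal sketch of option #2 with a prepared statement; the connection credentials and the table/column names (`table`, `number`) are assumptions carried over from the question.

// Option #2 sketch: one indexed lookup per check, prepared once and reused.
$pdo  = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$stmt = $pdo->prepare('SELECT 1 FROM `table` WHERE `number` = ? LIMIT 1');

function numberExists(PDOStatement $stmt, $value) {
    $stmt->execute([$value]);
    return (bool) $stmt->fetchColumn(); // false when no row matched
}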
It sounds like you are duplicating a Unique Constraint in code...
CREATE TABLE MyTable (
    SomeUniqueValue INT NOT NULL,
    CONSTRAINT MyUniqueKey UNIQUE (SomeUniqueValue)
);
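With the constraint in place you can often skip the existence check entirely and let the database reject duplicates; a sketch using standard MySQL INSERT IGNORE against the example table above (the value 42 is just a placeholder):

-- Inserts only if the value is not already present; an affected-row count of 0 means it was a duplicate.
INSERT IGNORE INTO MyTable (SomeUniqueValue) VALUES (42);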
How does the number of times you need to check compare with the number of values stored in the database? If it's 1:100 then you're probably better off searching the database each time; if it's (some amount) less, then preloading the list will be faster. What happened when you tested it?
However, even if the ratio is low enough for loading the full table to be faster, this will gobble up memory and could, as a result, make everything else run more slowly.
So I would recommend not loading it all into memory. But if you can, batch the checks up to minimise the number of round trips to the database (see the sketch below).
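A minimal sketch of that batching idea, reusing the $pdo connection and the assumed `table`/`number` names from the earlier sketch; the candidate values are placeholders:

// Check many candidate numbers in one round trip instead of one query each.
$candidates   = [3, 17, 42, 99]; // hypothetical values collected during the script run
$placeholders = implode(',', array_fill(0, count($candidates), '?'));
$stmt = $pdo->prepare("SELECT `number` FROM `table` WHERE `number` IN ($placeholders)");
$stmt->execute($candidates);
$existing = $stmt->fetchAll(PDO::FETCH_COLUMN); // values already in the DB
$missing  = array_diff($candidates, $existing); // values not yet stored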
C.
Querying the database is the best option, for a few reasons. First, you said the database is growing, which means new values keep being added to the table, whereas with in_array you would be checking against stale values. Second, you might exhaust the RAM allotted to PHP with a very large amount of data. Third, MySQL has its own query optimizer and other optimizations, which makes it a far better choice than PHP for this.
I have a table with following columns:
ItemCode VARCHAR
PriceA DECIMAL(10,4)
PriceB DECIMAL(10,4)
The table has around 1,000 rows.
My requirement is to check the difference (PriceA-PriceB) for each row and then display top 50 items that have maximum price differences.
There are two ways I can implement this
1) Trust that the calculation in SQL is simple and fast, and run the following query:
SELECT ItemCode, (PriceA - PriceB) AS PDiff FROM testtable ORDER BY PDiff DESC LIMIT 50
and second,
2) Add one more column (called PriceDiff), which will store the difference (PriceA-PriceB).
However, these values will have to be maintained manually and need extra storage. But then the top 50 can be fetched with a simple ORDER BY PriceDiff DESC ... LIMIT 50 query.
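For reference, a sketch of what option 2 could look like; the PriceDiff column and index name are additions to the question's own table, and the UPDATE would have to be re-run whenever prices change:

-- Option 2 sketch: persist the difference and index it.
ALTER TABLE testtable ADD COLUMN PriceDiff DECIMAL(10,4);
UPDATE testtable SET PriceDiff = PriceA - PriceB;
CREATE INDEX idx_pricediff ON testtable (PriceDiff);

SELECT ItemCode, PriceDiff FROM testtable ORDER BY PriceDiff DESC LIMIT 50;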
My question is: in terms of speed and efficiency for a web application (displaying results on a website/app), which of the above methods is better?
I have attempted to measure the time consumed by each query, but both report similar figures, so I'm unable to draw any conclusions.
Any explanation by the experts, or any fine-tuning of code, will be really appreciated.
Thanks
In general, to improve performance you almost always trade memory for time. Caching results will improve speed, but it takes more memory. You can reduce memory usage by calculating things on the fly, at the expense of performance.
In your case, storing an additional 1,000+ values in the DB is a matter of a few extra KB. Calculating the diff on the fly will have a negligible impact on performance. Either option is absolute peanuts for any DB and server.
I would stick with doing the calculation on the fly, as that is less complex and keeps the DB normalized.
The first method is fastest, but it is prone to error, as was mentioned.
May I suggest another solution, using a primary key: have the web application compute the difference and write it into the new column for each row, keyed by that primary key.
Then, when you want the top 50, you select from the table that stores the differences, as in your second method.
These links explain primary keys and how to use them:
http://www.mysqltutorial.org/mysql-primary-key/
https://www.w3schools.com/sql/sql_primarykey.asp
I know this has been asked before at least in this thread:
is php sort better than mysql "order by"?
However, I'm still not sure about the right option here, since sorting on the PHP side is almost 40 times faster.
This MySQL query runs in about 350-400ms
SELECT
keywords as id,
SUM(impressions) as impressions,
SUM(clicks) as clicks,
SUM(conversions) as conversions,
SUM(not_ctr) as not_ctr,
SUM(revenue) as revenue,
SUM(cost) as cost
FROM visits WHERE campaign_id = 104 GROUP BY keywords DESC -- keywords is an integer column
Keywords and campaign_id columns are indexed.
It runs over about 150k rows and returns around 1,500 rows in total.
The results are then recalculated (we calculate click through rates, conversion rates, ROI etc, as well as the totals for the whole result set). The calculations are done in PHP.
Now my idea was to store the results with PHP APC for quick retrieval. However, we need to be able to order these results by the raw columns as well as by the calculated values, so if I wanted to order by click-through rate I'd have to use
SUM(clicks) / (SUM(impressions) - SUM(not_ctr)) within the query, which makes it around 40ms slower, and the initial 400ms is already a really long time.
In addition we paginate these results, but adding LIMIT 0,200 doesn't really affect the performance.
While testing the APC approach I executed the query, did the additional calculations and stored the array in memory so it would only be executed once during the initial request and that worked like a charm. Fetching and sorting the array from memory only took around 10ms, however the script memory usage was about 25mb. Maybe it's worth loading the results into a memory table and then querying that table directly?
This is all done on my local machine (i7, 8 GB RAM) with a default MySQL install; the production server is a 512 MB box on Rackspace which I haven't tested on yet, so if possible ignore the server setup.
So the real question is: Is it worth using memory tables or should I just use the PHP sorting and ignore the RAM usage since I can always upgrade the RAM? What other options would you consider in optimizing the performance?
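For reference, a minimal sketch of the memory-table idea mentioned above. ENGINE=MEMORY is standard MySQL, but such a table is lost on server restart and has to be rebuilt or refreshed when the underlying data changes; the table name is an assumption.

-- Materialise the aggregated rows once, then sort and paginate against the in-memory copy.
CREATE TABLE keyword_stats ENGINE=MEMORY AS
SELECT keywords AS id,
       SUM(impressions) AS impressions,
       SUM(clicks)      AS clicks,
       SUM(conversions) AS conversions,
       SUM(not_ctr)     AS not_ctr,
       SUM(revenue)     AS revenue,
       SUM(cost)        AS cost
FROM visits
WHERE campaign_id = 104
GROUP BY keywords;

-- Ordering by a calculated value is now cheap because only ~1500 rows are involved.
SELECT *, clicks / NULLIF(impressions - not_ctr, 0) AS ctr
FROM keyword_stats
ORDER BY ctr DESC
LIMIT 0, 200;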
In general, you want to do sorting on the database server rather than in the application. One good reason is that the database can implement parallel sorts and has access to indexes. That said, a general rule may not be applicable in all circumstances.
I'm wondering if your indexes are helping you. I would recommend that you try the query (see the EXPLAIN sketch after this list):
With no indexes
With an index only on campaign_id
With both indexes
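A sketch of how to compare those cases with EXPLAIN; the index names are hypothetical, so check SHOW INDEX FROM visits for the real ones:

-- See which index (if any) the current plan uses: look at the "key" column.
EXPLAIN SELECT keywords AS id, SUM(impressions) AS impressions
FROM visits WHERE campaign_id = 104 GROUP BY keywords;

-- Temporarily drop one index, re-run EXPLAIN and time the query, then restore it.
ALTER TABLE visits DROP INDEX idx_keywords;
ALTER TABLE visits ADD INDEX idx_keywords (keywords);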
Indexes are not always useful. One particularly important factor is called "selectivity". If you only have two campaigns in the table, then you are probably better off doing a full table scan than going through an index indirectly. This becomes particularly important when the table does not fit into memory (where every row lookup requires loading a page into the cache).
Finally, if this is going to be an application that expands beyond your single server, be careful. What is optimal on a single machine may not be optimal in a different environment.
I have a large table of about 14 million rows. Each row contains a block of text. I also have another table with about 6,000 rows; each row has a word and six numerical values for that word. I need to take each block of text from the first table, find the number of times each word from the second table appears in it, then calculate the mean of the six values for each block of text and store it.
I have a Debian machine with an i7 and 8 GB of memory, which should be able to handle it. At the moment I am using the PHP substr_count() function, but PHP just doesn't feel like the right solution for this problem. Other than working around time-out and memory-limit problems, does anyone have a better way of doing this? Is it possible to do it in SQL alone? If not, what would be the best way to run my PHP without overloading the server?
Process each record from the 'big' table one at a time. Load that single block of text into your program (PHP or whatever), do the searching and calculation, then save the resulting values wherever you need them.
Do each record as its own transaction, in isolation from the rest. If you are interrupted, use the saved values to determine where to start again.
Once you are done with the existing records, you only need to do this in the future when you insert or update a record, so it becomes much easier. You just need to take the big bite now to get the existing data updated.
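A minimal sketch of that row-at-a-time approach using PDO and substr_count(). All table and column names (texts.id, texts.body, words.word, words.v1..v6, a results table) are hypothetical, and the scoring formula is only a guess at "the mean of the six values"; adapt it to your actual calculation.

// Two connections: one streams the big table, the other writes results
// (a single connection cannot run new queries while an unbuffered result set is open).
$reader = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$reader->setAttribute(PDO::MYSQL_ATTR_USE_BUFFERED_QUERY, false); // stream 14M rows instead of buffering them
$writer = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');

// Load the small word table once; ~6000 rows easily fit in memory.
$words = $writer->query('SELECT word, (v1 + v2 + v3 + v4 + v5 + v6) / 6 AS mean_value FROM words')
                ->fetchAll(PDO::FETCH_KEY_PAIR);

$save = $writer->prepare('REPLACE INTO results (text_id, score) VALUES (?, ?)');
$rows = $reader->query('SELECT id, body FROM texts');

while ($row = $rows->fetch(PDO::FETCH_ASSOC)) {      // one block of text at a time
    $total = 0;
    $hits  = 0;
    foreach ($words as $word => $meanValue) {
        $count  = substr_count($row['body'], $word); // occurrences of this word in the block
        $total += $count * $meanValue;
        $hits  += $count;
    }
    $score = $hits ? $total / $hits : 0;             // placeholder aggregation; adapt to your formula
    $save->execute([$row['id'], $score]);
}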
What are you trying to do, exactly? If you are trying to create something like a search engine with a weighting function, maybe you should drop that and instead use MySQL's full-text search functions and indexes. If you still need this specific solution, you can of course do it completely in SQL, either in one query or with a trigger that runs each time a row is inserted or updated (a sketch of the SQL-only idea follows below). You won't be able to get this done properly with PHP without jumping through a lot of hoops.
To give you a specific answer, we would indeed need more information about the queries, the data structures and what you are trying to do.
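A sketch of the SQL-only idea. The occurrence count uses the standard LENGTH/REPLACE trick; the table and column names (texts, words, v1..v6) are hypothetical and the scoring expression is a placeholder. Note that REPLACE() matches case-sensitively.

-- For every text block, count how often each word occurs and combine it with the word's mean value.
SELECT t.id,
       SUM(((CHAR_LENGTH(t.body) - CHAR_LENGTH(REPLACE(t.body, w.word, ''))) / CHAR_LENGTH(w.word))
           * ((w.v1 + w.v2 + w.v3 + w.v4 + w.v5 + w.v6) / 6)) AS score
FROM texts t
CROSS JOIN words w
GROUP BY t.id;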
Redesign it:
If size on disk is not important, just join the two tables into one.
Put the 6,000-row table into memory (a MEMORY table) and back it up every hour, for example:
INSERT IGNORE INTO back.table SELECT * FROM my.table;
Create your "own" index in the big table, e.g. add an index column to the big table that holds the id of the matching row.
More information about the query is needed to find a solution.
I have a table to which approximately 100,000 rows are added every day, and I am supposed to generate reports from this table using PHP. Recently the script that does this has been taking too long to complete. How can I improve the performance? Should I shift to something other than MySQL that is more scalable in the long run?
MySQL is very scalable, that's for sure.
The key is not changing the DB from MySQL to something else. Instead you should:
Optimize your queries (it can sound silly to others, but a huge improvement I made some time ago was simply changing SELECT * into selecting only the column(s) I need; it's a frequent issue I see in other people's code too).
Optimize your table design (normalization etc.).
Add indexes on the column(s) you use frequently in your queries (see the sketch after this list).
Similar advice here.
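A minimal sketch of the first and third points; the table and column names are hypothetical:

-- Select only the columns the report needs instead of SELECT *.
SELECT order_id, created_at, amount
FROM report_rows
WHERE created_at >= '2012-01-01' AND created_at < '2012-02-01';

-- Index the column(s) the reports filter on most often.
ALTER TABLE report_rows ADD INDEX idx_created_at (created_at);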
For generating reports or file downloads with large chunks of data you should consider using flush() and increasing the time limit and memory limit (see the sketch below).
I doubt the problem lies in the number of rows, since MySQL can support a LOT of rows. But you can of course fetch x rows at a time and process them in chunks.
I do assume your MySQL is properly tuned for performance.
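A sketch of the chunked, streamed approach with PDO; the table name, chunk size and limits are assumptions:

// Raise limits for a long-running report and stream it out in chunks.
set_time_limit(0);                       // or a generous finite value
ini_set('memory_limit', '512M');

$pdo = new PDO('mysql:host=localhost;dbname=reports', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false); // so LIMIT/OFFSET placeholders bind as integers

$chunk  = 10000;
$offset = 0;
$stmt   = $pdo->prepare('SELECT id, amount FROM report_rows ORDER BY id LIMIT ? OFFSET ?');

do {
    $stmt->bindValue(1, $chunk, PDO::PARAM_INT);
    $stmt->bindValue(2, $offset, PDO::PARAM_INT);
    $stmt->execute();
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    foreach ($rows as $row) {
        echo $row['id'] . ',' . $row['amount'] . "\n"; // one report line
    }
    flush();                             // push output to the client as we go (add ob_flush() if output buffering is on)
    $offset += $chunk;
} while (count($rows) === $chunk);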
First analyse why (or: whether) your queries are slow: http://dev.mysql.com/doc/refman/5.1/en/using-explain.html
You should read the following and learn a little about the advantages of a well-designed InnoDB table and how best to use clustered indexes (only available with InnoDB!).
The example includes a table with 500 million rows and query times of 0.02 seconds.
MySQL and NoSQL: Help me to choose the right one
Hope you find this of interest.
Another thought is to move records beyond a certain age to a historical database for archiving, reporting, etc. If you don't need that large volume for transactional processing it might make sense to extract them from the transactional data store.
It's common to separate transactional and reporting databases.
I am going to make some assumptions:
Your 100k rows added every day have timestamps which are either real-time, or are offset by a relatively short amount of time (hours at most); your 100k rows are added either throughout the day or in a few big batches.
The data are never updated
You are using the InnoDB engine (frankly, you would be insane to use MyISAM for large tables, because in the event of a crash the index rebuild takes a prohibitively long time)
You haven't explained what kind of reports you're trying to generate, but I'm assuming that your table looks like this:
CREATE TABLE logdata (
dateandtime some_timestamp_type NOT NULL,
property1 some_type_1 NOT NULL,
property2 some_type_2 NOT NULL,
some_quantity some_numerical_type NOT NULL,
... some other columns not required for reports ...
... some indexes ...
);
And that your reports look like
SELECT count(*), SUM(some_quantity), property1 FROM logdata WHERE dateandtime BETWEEN some_time_range GROUP BY property1;
SELECT count(*), SUM(some_quantity), property2 FROM logdata WHERE dateandtime BETWEEN some_time_range GROUP BY property2;
Now, as we can see, both of these reports are doing a scan of a large amount of the table, because you are reporting on a lot of rows.
The bigger the time range becomes the slower the reports will be. Moreover, if you have a lot of OTHER columns (say some varchars or blobs) which you aren't interested in reporting on, then they slow your report down too (because the server still needs to inspect the rows).
You can use several possible techniques for speeding this up:
Add a covering index for each type of report, to include the columns you need and omit the columns you don't. This may help a lot, but it will slow inserts down.
Summarise data according to the dimension(s) that you want to report on. In this fictitious case, all your reports are either counting rows or SUM()ing some_quantity (see the summary-table sketch after this list).
Build mirror tables (containing the same data) which have appropriate primary keys / indexes/ columns to make the reports faster.
Use a column engine (e.g. Infobright)
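A sketch of the summarisation idea based on the fictitious logdata schema above; the summary table, its placeholder types and the daily refresh are assumptions:

-- Pre-aggregate once per day instead of scanning the raw rows for every report.
CREATE TABLE logdata_daily_p1 (
    report_date  DATE NOT NULL,
    property1    some_type_1 NOT NULL,
    row_count    BIGINT NOT NULL,
    quantity_sum some_numerical_type NOT NULL,
    PRIMARY KEY (report_date, property1)
);

INSERT INTO logdata_daily_p1 (report_date, property1, row_count, quantity_sum)
SELECT DATE(dateandtime), property1, COUNT(*), SUM(some_quantity)
FROM logdata
WHERE dateandtime >= CURDATE() - INTERVAL 1 DAY AND dateandtime < CURDATE()
GROUP BY DATE(dateandtime), property1;

-- Reports then read the much smaller summary table.
SELECT SUM(row_count), SUM(quantity_sum), property1
FROM logdata_daily_p1
WHERE report_date BETWEEN '2012-01-01' AND '2012-01-31'
GROUP BY property1;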
Summarisation is usually an attractive option if your use case supports it.
You may wish to ask a more detailed question with an explanation of your use-case.
The time limit can be temporarily turned off for a particular script, if you know that it may run over the limit, by calling set_time_limit(0); at the start of your script.
Other considerations such as indexing or archiving very old data to a different table should also be looked at.
Your best bet is something like MongoDB or CouchDB, both of which are non-relational databases oriented toward storing massive amounts of data. This is assuming that you've already tweaked your MySQL installation for performance and that your situation wouldn't benefit from parallelization.
I have about 1 million rows so its going pretty slow. Here's the query:
$sql = "SELECT `plays`,`year`,`month` FROM `game`";
I've looked up indexes, but they only make sense to me when there's a WHERE clause.
Any ideas?
Indexes can make a difference even without a WHERE clause, depending on what other columns you have in your table. If the 3 columns you are selecting make up only a small proportion of the table contents, a covering index on them could reduce the number of pages that need to be scanned (see the sketch below).
Moving less data around, though, either by adding a WHERE clause or by doing the processing in the database, would be even better if possible.
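A sketch of such a covering index on the three selected columns; the index name is an assumption, and EXPLAIN should then show "Using index":

ALTER TABLE `game` ADD INDEX idx_plays_year_month (`plays`, `year`, `month`);
EXPLAIN SELECT `plays`,`year`,`month` FROM `game`;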
If you don't need all 1 million records, you can pull n records:
$sql = "SELECT `plays`,`year`,`month` FROM `game` LIMIT 0, 1000";
Where the first number is the offset (where to start from) and the second number is the number of rows. You might want to use ORDER BY too, if only pulling a select number of records.
You won't be able to make that query much faster, short of fetching the data from a memory cache instead of the DB. Fetching a million rows takes time. If you need more speed, figure out whether the DB can do some of the work for you, e.g. sum/group things together (a sketch follows).
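For example, if the PHP side ultimately sums plays per year and month, that grouping can be pushed into MySQL; whether this matches your processing is an assumption:

SELECT `year`, `month`, SUM(`plays`) AS total_plays
FROM `game`
GROUP BY `year`, `month`;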
If you're not using all the rows, you should use the LIMIT clause in your SQL to fetch only a certain range of those million rows.
If you really need all the 1 million rows to build your output, there's not much you can do from the database side.
However, you may want to cache the result on the application side, so that the next time you want to serve the same output you can return the processed result from your cache (see the caching sketch below).
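A minimal sketch of that application-side cache, assuming the APCu extension is available; the cache key, the TTL and the buildOutput() helper are hypothetical:

// Serve the processed output from cache when possible; rebuild it otherwise.
$cacheKey = 'game_plays_report';
$output   = apcu_fetch($cacheKey, $hit);

if (!$hit) {
    $output = buildOutput();             // hypothetical: runs the query and processes the rows
    apcu_store($cacheKey, $output, 600); // keep it for 10 minutes
}

echo $output;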
The realistic answer is no. With no restrictions (i.e. a WHERE clause or a LIMIT) on your query, you're almost guaranteed a full table scan every time.
The only way to decrease the scan time would be to have less data (or perhaps a faster disk). It's possible you could rework your data to make the rows more compact (CHAR instead of VARCHAR in some cases, TINYINT instead of INT, etc.), but you're really not going to see much of a speed difference from that kind of micro-optimization. Indexes are where it's at.
Generally, if you're stuck with a case like this where you can't use indexes but you have large tables, it's the business logic that needs reworking. Do you always need to select every record? Can you do some application-side caching? Can you fragment the data into smaller sets or tables, perhaps organized by day or month?