PHP & SQL Query Efficiency for * or Specific Columns

I have an Oracle database view in which I have access to 17 columns and approximately 15k rows (this grows at a rate of about 700 rows per year). I only need to use 10 of the columns. At the moment I am searching for ways to make my query more efficient, since my app loads about 7.5k of the entries at first. I know I could load only, let's say, 1k entries and that would speed up the loading process; however, the users often need to query through more than the 1k entries loaded initially, and I do not want to make them wait through a second loading of data into the app.
So I guess my main question is: when I query the Oracle view, should I just do a SELECT * on the database, or select specific columns? I know that best practice states to only query the columns you need; however, I am looking at this from a performance standpoint: would I see a significant performance increase by querying only the 10 specific columns I need rather than doing a SELECT * on the view?

As @AndyLester says, the only way to know for sure is to try it out and see. There are reasons to expect that specifying the actual set of columns you need will be faster; the question is whether the difference will be "significant", which is something only you can tell us.
There are a few reasons to expect performance improvements:
Specifying the actual set of columns decreases the amount of data that has to be transmitted over the network and decreases the amount of memory that is consumed on the client. Whether this is significant or not depends on the relative size of the columns that you're selecting vs. the columns you're excluding. If you only need a bunch of varchar2(10) columns and the columns that you don't need include some varchar2(1000) columns, you might be eliminating the vast majority of your network traffic and of the RAM consumed on the client. If you're only excluding a few char(1) columns while you're selecting a bunch of clob columns, the reduction may be trivial.
Specifying the actual set of columns can produce a more efficient plan. Depending on the Oracle version, the view definition, and the definition of the underlying tables it's possible that some of the joins can be eliminated when you're selecting a subset of columns. This, in turn, can produce a much more efficient plan.
Specifying the actual set of columns means that your application's performance is much less likely to change if additional columns are added to the view. Your code won't suddenly start pulling that new data over the network into memory structures on the client. It may not need to join in the additional tables that might be referenced.
Since there is no downside to specifying the column list, I'd strongly suggest doing so regardless of the size of the performance improvement. If you're really concerned about performance, however, it's likely that you'd want to be looking at performance more holistically (examining what is actually taking time in your process, for example).
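For illustration, here is a minimal sketch of the explicit-column version from PHP using the OCI8 extension; the connection details, view name, and column names are placeholders, not your actual schema.
<?php
// Sketch only: fetch just the 10 columns the app uses instead of SELECT *.
// Credentials, view and column names below are hypothetical placeholders.
$conn = oci_connect('app_user', 'app_password', '//dbhost:1521/ORCL');
$sql = 'SELECT col_a, col_b, col_c, col_d, col_e,
               col_f, col_g, col_h, col_i, col_j
          FROM my_schema.my_view';
$stmt = oci_parse($conn, $sql);
oci_execute($stmt);
$rows = [];
while (($row = oci_fetch_assoc($stmt)) !== false) {
    $rows[] = $row;   // only the selected columns cross the network
}
oci_free_statement($stmt);
oci_close($conn);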


Which database for dealing with very large result-sets?

I am currently working on a PHP application (pre-release).
Background
We have a table in our MySQL database which is expected to grow extremely large - it would not be unusual for a single user to own 250,000 rows in this table. Each row in the table has an amount and a date, among other things.
Furthermore, this particular table is read from (and written to) very frequently - on the majority of pages. Given that each row has a date, I'm using GROUP BY date to minimise the size of the result-set given by MySQL - rows contained in the same year can now be seen as just one total.
However, a typical page will still have a result-set between 1000-3000 results. There are also places where many SUM()'s are performed, totalling many tens - if not hundreds - of thousands of rows.
Trying MySQL
On a typical page, MySQL was usually taking around 600-900ms. Using LIMIT and offsets wasn't helping performance, and the data has already been heavily normalised, so it doesn't seem like further normalisation would help.
To make matters worse, there are parts of the application which require the retrieval of 10,000-15,000 rows from the database. The results are then used in a calculation by PHP and formatted accordingly. Given this, the performance of MySQL wasn't acceptable.
Trying MongoDB
I have converted the table to MongoDB, and its speed is faster - it usually takes around 250ms to retrieve 2,000 documents. However, the $group command in the aggregation pipeline - needed to aggregate fields depending on the year they fall in - slows things down. Unfortunately, keeping a running total and updating it whenever a document is removed/updated/inserted is also out of the question, because although we can use a yearly total for some parts of the app, in other parts the calculations require that each amount falls on a specific date.
I've also considered Redis, although I think the complexity of the data is beyond what Redis was designed for.
The Final Straw
On top of all of this, speed is important. So performance is high up in terms of priorities.
Questions:
What is the best way to store data which is frequently read/written and rapidly growing, with the knowledge that most queries will retrieve a very large result-set?
Is there another solution to the problem? I'm totally open to suggestions.
I'm a little stuck at the moment; I haven't been able to retrieve such a large result-set in an acceptable amount of time. It seems most datastores are great for small retrieval sizes - even on large amounts of data - but I haven't been able to find anything on retrieving large amounts of data from an even larger table/collection.
I only read the first two lines, but you are using aggregation (GROUP BY) and then expecting it to run in realtime?
I will say you seem new to the internals of databases - not to undermine you, but to try and help you.
The group operator in both MySQL and MongoDB works in memory. In other words, it takes whatever data structure you provide, whether it be an index or a document (row), and it goes through each row/document, taking the field and grouping it up.
This means that you can speed it up in both MySQL and MongoDB by making sure you are using an index for the grouping, but still this only goes so far, even with housing the index in your direct working set in MongoDB (memory).
In fact, using LIMIT with an OFFSET as well is probably just slowing things down even further, frankly, since after writing out the set MySQL then needs to query again to get your answer.
Once done, it will write out the result: MySQL writes it out to a result set (memory and IO being used here), while MongoDB replies inline if you have not set $out, the maximum size of the inline output being 16MB (the maximum size of a document).
The final point to take away here is: aggregation at read time is horrible for performance.
There is no silver bullet that will save you here. Some databases will attempt to boast about their speed, etc., but the fact is most big aggregators use something called "pre-aggregated reports". You can find a quick introduction in the MongoDB documentation: http://docs.mongodb.org/ecosystem/use-cases/pre-aggregated-reports/
This means that you put the effort of aggregating and grouping onto some other process which can do it easily enough, allowing your reading thread - the one that needs to be realtime - to do its thing in realtime.
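As a rough sketch of that idea (assuming the mongodb/mongodb PHP library and a hypothetical yearly_totals collection), the write path bumps a running total so the realtime read path becomes a single findOne instead of a $group:
<?php
// Sketch only: maintain a pre-aggregated yearly total on every insert.
// Database, collection and field names are hypothetical.
require 'vendor/autoload.php';
$client = new MongoDB\Client('mongodb://localhost:27017');
$db = $client->selectDatabase('app');

$entry = ['amount' => 42.50, 'date' => new MongoDB\BSON\UTCDateTime()];
$db->entries->insertOne($entry);

$year = (int) date('Y');
$db->yearly_totals->updateOne(
    ['_id' => $year],
    ['$inc' => ['total' => $entry['amount'], 'count' => 1]],
    ['upsert' => true]                       // create the summary doc if missing
);

// The realtime read is now a single-document fetch, no $group needed.
$summary = $db->yearly_totals->findOne(['_id' => $year]);
The trade-off, as noted above, is that the parts of the app which need per-date amounts still have to read the detail documents; the pre-aggregated totals only help the screens that can live with yearly figures.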

MySQL Count / Sum Performance

I'm in the process of developing a large-scale application that will contain a few tables with a large dataset (potentially 1M+ rows). This application will be a game with multiple users completing tasks at the same time and will be very data intensive.
In this application, data will be aggregated for user statistics. I have come up with two scenarios to achieve my desired effect of calculating all the statistics.
Scenario 1
Maintain a separate table for user statistics, meaning that as a move is processed, the corresponding field is incremented.
Table Statistics (Moves, Origins, Points)
$Moves++;                  // one more move recorded
$Origins++;                // one more origin recorded
$Points += $movePoints;    // add the points earned by this move ($movePoints is illustrative)
Scenario 2
Count and sum the data fields as needed across all data.
Table Moves (Points, Origins)
SUM(Points)
SUM(Origins)
COUNT(Moves)
My question is, which of these two scenarios would be the most efficient on the database driver. It is my belief that Scenario 2 could possibly be more efficient because there will be far less data manipulation, but I'm unsure of the load that these queries may place on the DB.
I am using MySQL 5.5 InnoDB with a UTF8 Charset
The best route will depend on the frequency of reads vs. writes of points, origins and moves. Those frequencies, in turn, will be dependent upon use cases, code style and use (or lack) of caching.
It's difficult to provide a qualified opinion without more details, but consider that a dedicated table brings some additional complications: extra writes are needed for each operation, and the tallies must always be kept correct (i.e. match the underlying detail data). In light of that added complication, storing each logical data element once rather than twice in a relational database is usually the best course of action.
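If it helps, here is a rough sketch of Scenario 2 with PDO; the table, columns and index are hypothetical, but the point is that an index on user_id keeps the on-demand aggregate from scanning the whole moves table:
<?php
// Sketch only: store each move once, aggregate on demand.
// Table/column names and credentials are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=game;charset=utf8', 'user', 'pass');
$userId = 42;   // example value

// Assumes an index such as: CREATE INDEX idx_moves_user ON moves (user_id);
$stmt = $pdo->prepare(
    'SELECT COUNT(*)     AS moves,
            SUM(origins) AS origins,
            SUM(points)  AS points
       FROM moves
      WHERE user_id = :user_id'
);
$stmt->execute([':user_id' => $userId]);
$stats = $stmt->fetch(PDO::FETCH_ASSOC);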
If you're worried about performance and scalability you might want to consider a non-relational approach using database platforms like Mongo or DynamoDB.

Suggestions on Structuring a Database with Large Amounts of Data

I'm doing an RIA with JavaScript, MySQL and PHP on a Windows server.
I have 5,000 identically structured data sets I want to put in a database. 5 tables is enough for the data, all of which will be reasonably small except for one table that will have 300,000+ records for a typical data set.
Additionally, 500 users will get read only access to statistics compiled from those data sets. Those statistics are provided by PHP (no direct access is allowed). What's more, their access to data varies. Some users can only use one data set, others some, a few, all.
The results users see are relatively small; most requests return well under 100 rows, and the largest requests will be about 700 rows. All requests are through a JavaScript RIA which uses Ajax to connect to PHP which in turn connects to the data, does its thing and outputs JSON in response, which JavaScript then presents accordingly.
In thinking about how to structure this, three options present themselves:
Put the data sets in the same tables. That could easily give me 1,500,000,000 records in the largest table.
Use separate tables for each data set. That would limit the largest table size, but could mean 25,000 tables.
Forget the database and stick with the proprietary format.
I'm leaning towards #2 for a few reasons.
I'm concerned about issues in using very large tables (eg: query speeds, implementation limits, etc...).
Separate tables seem safer; they limit the impact of errors and structure changes.
Separate tables allow me to use MySQL's table level security rather than implementing my own row level security. This means less work and better protection; for instance, if a query is accidentally sent without row level security, users can get unauthorized data. Not so with table level security, as the database will reject the query out of hand.
Those are my thoughts, but I'd like yours. Do you think this is the right choice? If not, why not? What considerations have I missed? Should I consider other platforms if scalability is an issue?
1) I'm concerned about issues in using very large tables (eg: query speeds, implementation limits, etc...).
Whether the DBMS has to...
search through the large index of one table,
or search for the right table and then search through the smaller index of that table
...probably doesn't make much of a difference performance-wise. If anything, the second case has an undocumented component (the performance of locating the right table), so I'd be reluctant to trust it fully.
If you want to physically partition the data, MySQL has supported that directly since version 5.1, so you don't have to emulate it via separate tables.
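For example (a sketch only, assuming MySQL 5.1+ and a hypothetical measurements table whose primary key includes data_set_id, since partitioning columns must be part of every unique key):
<?php
// Sketch only: partition the one big table by data set rather than creating
// 25,000 separate tables. Names and partition count are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=ria', 'user', 'pass');
$pdo->exec(
    'ALTER TABLE measurements
        PARTITION BY HASH (data_set_id)
        PARTITIONS 64'
);

// Queries filtering on data_set_id are pruned to a single partition.
$stmt = $pdo->prepare('SELECT * FROM measurements WHERE data_set_id = ?');
$stmt->execute([17]);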
2) Separate tables seem safer; they limit the impact of errors and structure changes.
That's what backups are for.
3) Separate tables allow me to use MySQL's table level security rather than implementing my own row level security.
True enough; however, a similar effect can be achieved through views or stored procedures.
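As a sketch of the view-based variant (all names hypothetical): create one view per user, or per entitlement group, exposing only the permitted data sets, and grant SELECT on the view rather than on the base table.
<?php
// Sketch only: restrict a MySQL account to its permitted data sets via a view.
// View name, data set IDs and account are hypothetical examples.
$pdo = new PDO('mysql:host=localhost;dbname=ria', 'admin', 'pass');
$pdo->exec(
    'CREATE VIEW user_1234_measurements AS
     SELECT * FROM measurements WHERE data_set_id IN (3, 17, 42)'
);
$pdo->exec("GRANT SELECT ON ria.user_1234_measurements TO 'user_1234'@'localhost'");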
All in all, my instinct is to go with a single table, unless you know in advance that these data-sets differ enough structurally to warrant separate tables. BTW, I doubt you'd be able to do better with a proprietary format compared to a well-optimized database.

How to handle user's data in MySQL/PHP, for large number of users and data entries

Let's pretend with me here:
PHP/MySQL web-application. Assume a single server and a single MySQL DB.
I have 1,000 bosses. Every boss has 10 workers under them. These 10 workers (times 1k, totaling 10,000 workers) each have at least 5 database entries (call them work orders for this purpose) in the WebApplication every work day. That's 50k entries a day in this work orders table.
Server issues aside, I see two main ways to handle the basic logic of the database here:
Each Boss has an ID. There is one table called workorders and it has a column named BossID to associate every work order with a boss. This leaves you with approximately 1 million entries a month in a single table, and to me that seems to add up fast.
Each Boss has its own table that is created when that Boss signs up, i.e. work_bossID where bossID is the boss's unique ID. This leaves you with 1,000 tables, but these tables are much more manageable.
Is there a third option that I'm overlooking?
Which method would be the better-functioning method?
How big is too big for number of entries in a table (let's assume a small number of columns: less than 10)? (this can include: it's time to get a second server when...)
How big is too big for number of tables in a database? (this can include: it's time to get a second server when...)
I know that at some point we have to bring in talk of multiple servers and databases linked together... but again, let's focus on a single server here with a single MySQL DB.
If you use a single server, I don't think there is a problem with how big the table gets. It isn't just the number of records in a table, but how frequently it is accessed.
To manage large datasets, you can use multiple servers. In this case:
You can keep all workorders in a single table, and mirror them across different servers (so that you have slave servers)
You can shard the workorders table by boss (in this case you access the server depending on where the workorder belongs) - search for database sharding for more information
Which option you choose depends on how you will use your database.
Mirrors (master/slave)
Keeping all workorders in a single table is good for querying when you don't know which boss a workorder belongs to, eg. if you are searching by product type, but any boss can have orders in any product type.
However, you have to store a copy of everything on every mirror. In addition, only one server (the master) can handle UPDATE or INSERT requests (such as adding a workorder). This is fine if most of your SQL queries are SELECT queries.
Sharding
The advantage of sharding is that a given record is stored on only one shard, so you don't need a full copy of the data on every server.
However, if you are searching workorders by some attribute for any boss, you would have to query every server to check every shard.
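A rough sketch of what shard routing can look like in PHP (hosts, credentials and shard count are made up; real deployments would keep this mapping in configuration):
<?php
// Sketch only: route each query to a shard chosen by boss ID.
$shards = [
    0 => 'mysql:host=db-shard-0;dbname=orders',
    1 => 'mysql:host=db-shard-1;dbname=orders',
    2 => 'mysql:host=db-shard-2;dbname=orders',
    3 => 'mysql:host=db-shard-3;dbname=orders',
];

function shardFor($bossId, array $shards)
{
    $dsn = $shards[$bossId % count($shards)];
    return new PDO($dsn, 'app', 'secret');
}

// A query that knows the boss hits exactly one server...
$pdo = shardFor(1234, $shards);
$stmt = $pdo->prepare('SELECT * FROM workorders WHERE boss_id = ?');
$stmt->execute([1234]);
// ...whereas a search by any other attribute has to be fanned out to every shard.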
How to choose
In summary, use a single table if you can have all sorts of queries, including browsing workorders by an attribute (other than which boss it belongs to), and you are likely to have more SELECT (read) queries than write queries.
Use shards if you can have write queries on the same order of magnitude as read queries, and/or you want to save memory, and queries searching by other attributes (not boss) are rare.
Keeping queries fast
Large databases are not really a big problem if they are not overwhelmed by queries, because they can keep most of the data on hard disk and only what was accessed recently in cache (in memory).
The other important thing for preventing any single query from running slowly is to make sure you add the right index for each query you might perform, so the database can binary-search for the required record(s) instead of doing a linear scan.
If you need to maintain a count of records, whether of the whole table, or by attribute (category or boss), then keep counter caches.
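A minimal sketch of a counter cache, assuming PDO, InnoDB and hypothetical workorders/boss_counters tables: the tally is bumped in the same transaction as the insert, so reads never need a COUNT(*) over millions of rows.
<?php
// Sketch only: keep a per-boss count in step with inserts.
$pdo = new PDO('mysql:host=localhost;dbname=orders', 'app', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$bossId = 12; $workerId = 345; $details = 'Example work order';   // example values

$pdo->beginTransaction();
try {
    $pdo->prepare('INSERT INTO workorders (boss_id, worker_id, details) VALUES (?, ?, ?)')
        ->execute([$bossId, $workerId, $details]);

    $pdo->prepare(
        'INSERT INTO boss_counters (boss_id, workorder_count) VALUES (?, 1)
         ON DUPLICATE KEY UPDATE workorder_count = workorder_count + 1'
    )->execute([$bossId]);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}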
When to get a new server
There isn't really a single number you can assign to determine when a new server is needed because there are too many variables. This decision can be made by looking at how fast queries are performing, and the CPU/memory usage of your server.
Scaling is often a case of experimentation as it's not always clear from the outset where the bottlenecks will be. Since you seem to have a pretty good idea of the kind of load the system will be under, one of the first things to do is capture this in a spreadsheet so you can work out some hypotheticals. This allows you to do a lot of quick "what if" scenarios and come up with a reasonable upper end for how far you have to scale with your first build.
For collecting large numbers of records there are some straightforward rules:
Use the most efficient data type to represent what you're describing. Don't worry about using smaller integer types to shave off a few bytes, or shrinking varchars. What's important here is using integers for numbers, date fields for dates, and so on. Don't use a varchar for data that already has a proper type.
Don't over-index your table, add only what is strictly necessary. The larger the number of indexes you have, the slower your inserts will get as the table grows.
Purge data that's no longer necessary. Where practical, delete it. Where it needs to be retained for an extended period of time, make alternate tables you can dump it into. For instance, you may be able to rotate out your main orders table every quarter or fiscal year to keep it running quickly (see the rotation sketch after these rules). You can always adjust your queries to run against the other tables if required for reporting. Keep your working data set as small as practical.
Tune your MySQL server by benchmarking, tinkering, researching, and experimenting. There's no magic bullet here. There are many variables that may work for some people but might slow down your application, and they're highly dependent on OS, hardware, and the structure and size of your data. You can easily double or quadruple performance by allocating more memory to your database engine, for instance, whether it's InnoDB or MyISAM.
Try using other MySQL forks if you think they might help significantly. There are a few that offer improved performance over the regular MySQL, Percona in particular.
If you query large tables often and aggressively, it may make sense to de-normalize some of your data to reduce the number of expensive joins that have to be done. For instance, on a message board you might include the user's name in every message even though that seems like a waste of data, but it makes displaying large lists of messages very, very fast.
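On the purge/rotation point above, here is a rough sketch of a quarterly swap (table names are hypothetical): an empty table is renamed into place atomically, and the old data stays queryable under a dated name for reporting.
<?php
// Sketch only: rotate the working table each quarter, keeping the archive around.
$pdo = new PDO('mysql:host=localhost;dbname=game', 'app', 'secret');

$archive = 'workorders_2014_q1';   // example label for the outgoing data
$pdo->exec('CREATE TABLE workorders_new LIKE workorders');
$pdo->exec("RENAME TABLE workorders TO $archive, workorders_new TO workorders");
// Reporting queries can still UNION the archived tables when needed.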
With all that in mind, the best thing to do is design your schema, build your tables, and then exercise them. Simulate loading in 6-12 months of data and see how well it performs once really loaded down. You'll find all kinds of issues if you use EXPLAIN on your slower queries. It's even better to do this on a development system that's slower than your production database server so you won't have any surprises when you deploy.
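For instance, a quick way to eyeball a plan from PHP (query and names hypothetical):
<?php
// Sketch only: run EXPLAIN on a suspect query and look for full scans.
$pdo = new PDO('mysql:host=localhost;dbname=game', 'app', 'secret');
$plan = $pdo->query(
    "EXPLAIN SELECT boss_id, COUNT(*) AS orders
       FROM workorders
      WHERE created_at >= '2014-01-01'
      GROUP BY boss_id"
)->fetchAll(PDO::FETCH_ASSOC);
print_r($plan);   // type=ALL or huge "rows" estimates mean a missing index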
The golden rule of scaling is only optimize what's actually a problem and avoid tuning things just because it seems like a good idea. It's very easy to over-engineer a solution that will later do the opposite of what you intend or prove to be extremely difficult to un-do.
MySQL can handle millions if not billions of rows without too much trouble if you're careful to experiment and prove it works in some capacity before rolling it out.
I had a database size problem as well in one of my projects - a table so big that it used to slow the server down whenever I ran a query on it.
In my opinion, divide your data by date: decide what table size would be too big for you (let's say 1 million entries), then calculate how long it will take to reach that amount, and have a script run every such period to either create a new table named for the date and move the current data over, or simply back the table up and empty it.
It's like putting outdated material into archives.
If you choose the first option, you'll be able to access the older data easily by referring to the dated table.
Hope that idea helps.
Just create a workers table, a bosses table, a relationship table for the two, and then all of your other tables. A relationship structure like this is very flexible, because if it ever grows large enough you could create another relationship table linking the work orders to the bosses or to the workers.
You might want to look into BIGINT, but I doubt you'll need it. I know the relationship table will get massive, but that's good DB design.
For reference, MySQL's BIGINT ranges from -9223372036854775808 to 9223372036854775807 signed, or 0 to 18446744073709551615 unsigned.
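A minimal sketch of that structure (column choices are illustrative): INT UNSIGNED comfortably covers the IDs here, and BIGINT only becomes interesting if the relationship or workorder tables really approach billions of rows.
<?php
// Sketch only: bosses, workers, and a relationship table joining the two.
$pdo = new PDO('mysql:host=localhost;dbname=orders', 'app', 'secret');

$pdo->exec('CREATE TABLE bosses (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
) ENGINE=InnoDB');

$pdo->exec('CREATE TABLE workers (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
) ENGINE=InnoDB');

$pdo->exec('CREATE TABLE boss_worker (
    boss_id   INT UNSIGNED NOT NULL,
    worker_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (boss_id, worker_id),
    KEY idx_worker (worker_id)
) ENGINE=InnoDB');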

MySQL Table Optimization

I'm looking to optimize a few tables in a database because currently under high load the wait times are far too long...
Ignore the naming schema (it's terrible), but here's an example of one of the mailing list tables with around 1,000,000 records in it. At the moment I don't think I can really normalize it anymore without completely re-doing it all.
Now... How much impact will the following have:
Changing fields like the active field to use a boolean as opposed to a string of Yes/No
Combining some of the fields, such as Address1, 2, 3, 4, into a single TEXT field
Reducing the characters available, e.g. making a column VARCHAR(200) instead of something longer
Setting values to NULL rather than leaving them blank
One other thing I'm interested in: a couple of tables, including this one, use InnoDB as opposed to the standard MyISAM - is this recommended?
The front-end is coded in PHP so I'll be looking through that code as well; at the moment I'm just looking at the DB level, but any suggestions or help will be more than welcome!
Thanks in advance!
None of the changes you propose for the table are likely to have any measurable impact on performance.
Reducing the max length of the VARCHAR columns won't matter if the row format is dynamic, and given the number and length of the VARCHAR columns, a dynamic row format would be most appropriate.
What you really need to tune is the SQL that runs against the table.
Likely, adding, replacing and/or removing indexes is going to be the low hanging fruit.
Without the actual SQL, no one can make any reliable tuning recommendations.
For this query:
SELECT email from table WHERE mailinglistID = X.
I'd make sure I had an index on (mailinglistId, email) e.g.
CREATE INDEX mytable_ix2 ON mytable (mailinglistId, email);
However, beware of adding indexes that aren't needed, because maintaining indexes isn't free; indexes use resources (memory and I/O).
That's about the only tuning you're going to be able to do on that table, without some coding changes.
To really tune the database, you need to identify the performance bottleneck. Is it the design of the application SQL (obtaining table locks, concurrent inserts from multiple sessions blocking each other), or does the instance itself need to be tuned (increasing the size of the buffer cache, the InnoDB buffer pool, the key buffer)? SHOW ENGINE INNODB STATUS may give you some clues.
