I'm using a table to do login; the table has about 2,500 records. When I try to log in it takes nearly 30-40 seconds to load. I'm using PHP and MySQL. I need to make the SQL query faster to check the data. Solutions are welcome, thanks in advance.
When locating the causes of performance issues, there are many things to consider across your application stack:
Database: Indexes, Joins, Query Formation
Network in between: routing issues, bandwidth, connection speed
Code: Check that your code structure is not creating unnecessary delays. For example, some people span their validations across both client and server in a single method, which lengthens the method's lifetime. Try to put the core validation logic on the database side, for example in stored procedures, and prefer methods with less overhead.
You should have included your query so we can examine it.
Don't pull all your records (e.g. don't use Select * from users) and then loop through to find a match.
Instead, use a WHERE clause to limit the result set (ideally to one row). That is:
SELECT col1,col2,.. FROM Users
WHERE username='username' AND password='password'
You can try that and see the performance...
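Beyond selecting only the columns you need, two things usually fix a slow login lookup: an index on the lookup column and a prepared statement. A minimal sketch using mysqli, assuming a Users table like the one above (connection credentials are placeholders):

-- One-time schema change so MySQL doesn't scan every row on login:
ALTER TABLE Users ADD INDEX idx_username (username);

<?php
$db = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');

$stmt = $db->prepare(
    'SELECT id FROM Users WHERE username = ? AND password = ? LIMIT 1'
);
$stmt->bind_param('ss', $username, $password); // values from the login form
$stmt->execute();
$stmt->store_result();

$loggedIn = ($stmt->num_rows === 1); // in practice, store password hashes
                                     // and compare with password_verify()

That said, even without an index, 2,500 rows should come back in milliseconds; if login still takes 30+ seconds, the time is probably going somewhere other than this query (fetching all rows into PHP, network, or connection setup).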
I am working with PHP and Oracle, fetching data from an Oracle database and displaying it in a table on a PHP page. The problem is that it takes a long time to get the data, because I have a for loop running further queries.
First I run a query which gives me the total cards issued up to 23/11/2019 (the date assigned in the query); this query also gives all cards canceled between 23/11/2019 and 25/11/2019. Below is the query in PHP with oci_execute().
Then I wanted to write another query which gives the total reissued cards, where the condition is that the employee canceled a card before and it was reissued between 23/11/2019 and 25/11/2019. For this I did the following code.
Now the problem is that a for loop executes a query once per array element, which takes much time, so I only get the result after a long wait. Can you please tell me how I can make it faster? Thanks
There can be many causes of poor query performance. We cannot just look at a query, stroke our chins and then say "Ah ha! It's this line". Please read this excellent post on asking Oracle tuning questions.
Having said which, in this case you should reconsider the application design. Query loops within query loops are always a red flag. A single query which joins all the required tables for rendering in the client would likely be more efficient:
select cpa.empid
from eofficeuat.cardprintlog_cpa cpa
where cpa.cardstatus = 'READY'
and cpa.dateofissue BETWEEN TO_DATE('23/11/2019', 'dd/mm/yyyy') AND TO_DATE('25/11/2019', 'dd/mm/yyyy')
and cpa.empid in (
    select empid
    from eofficeuat.cardprintlog_cpa
    where cardstatus = 'DISCARDED'
)
After you sort out the SQL as other answers have suggested, then consider these other performance tips:
Use bind variables instead of string concatenation syntax like:
and eofficeuat.cardprintlog_cpa.empid='". $emp[$i] ."'
String concatenation is a SQL injection security risk; bind variables avoid it and let the database reuse the statement (see the sketch after the next tip).
Tune oci8.default_prefetch or oci_set_prefetch() to reduce 'round trips' between PHP and the database when fetching query results.
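A minimal oci8 sketch combining both tips; the credentials, connection string, and the :empid placeholder are illustrative assumptions:

<?php
$conn = oci_connect('appuser', 'apppass', '//dbhost/ORCL');

$stmt = oci_parse($conn,
    "select empid, cardstatus
       from eofficeuat.cardprintlog_cpa
      where empid = :empid"
);
oci_bind_by_name($stmt, ':empid', $empid); // bind variable, no concatenation
oci_set_prefetch($stmt, 100);              // fetch 100 rows per round trip
oci_execute($stmt);

while ($row = oci_fetch_assoc($stmt)) {
    // render the row...
}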
There isn't a single "make it fast" solution without first understanding the performance profile of your code. I strongly recommend utilising some type of application performance monitoring (APM) for your code. This will let you measure how long your script takes to run, how long it spends waiting on SQL queries, and so on.
There are a few things that jump out as potential performance issues (and the data from your APM solution will confirm this):
SQL Queries that have subqueries in their FROM clause can cause some performance issues depending on the query and data.
If you're retrieving lists of data that you only want an aggregate from, use COUNT() or other aggregate functions, for example SELECT COUNT(empid) AS empid_count. That way you're not processing the data twice (once in the query, and a second time in your code).
I have a SELECT which counts the number of rows from 3 tables, using the WordPress function $wpdb->get_var( $sql ); there are about 10,000 rows in the tables. Sometimes this SELECT takes under 1 second to run, sometimes more than 15. If I run the same SQL in phpMyAdmin it always returns the count in less than 1 second. Where could the problem be?
There are a couple of things you can do.
First, analyse your query. Putting EXPLAIN before the query will output data about how MySQL executes it, and you may be able to spot any problems with that.
Read more about EXPLAIN in the MySQL documentation.
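For example, just prefix the query (the one shown is a placeholder for your own):

EXPLAIN SELECT COUNT(*) FROM wp_posts WHERE post_status = 'publish';

A type column showing ALL in the output means a full table scan, which is usually what an index fixes.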
Also, WordPress may not have indexed the most commonly used columns.
Try indexing some of the columns which you most commonly use within your query and see if it helps.
For example:
ALTER TABLE wp_posts ADD INDEX (post_author,post_status)
Plugin
You could try a plugin such as Debug Queries, which prints the queries on the front-end and helps you find where things are taking a long time. Run it only in a development environment, not on a live website.
I would also recommend hooking up something like New Relic and trying to profile what's happening on the application side. If New Relic is not an option, you might be able to use xhprof (http://pecl.php.net/package/xhprof) and/or IfP (https://code.google.com/p/instrumentation-for-php/).

Very few queries will perform the same in production in an application as they do in direct SQL queries. You may have contention, read locks, or any number of other things that cause a query from PHP to effectively stall on its way to MySQL. In that case you might literally see the query running very fast, but the time it takes to actually begin executing that query from PHP would be very slow. You'll definitely need to profile what's happening on the way from WordPress to MySQL and back, based on what you're saying. The tools I mentioned should all be very useful for helping you accomplish that.
Background: I'm working on a system where the developers seem to be using a function which executes a MYSQL query like "SELECT MAX(id) AS id FROM TABLE" whenever they need to get the id of the LAST inserted row (the table having an auto_increment column).
I know this is a horrible practice (because concurrent requests will mess up the records), and I'm trying to communicate that to the non-tech / management team, whose response is...
"Oh okay, we'll only face this problem when we have
(a) a lot of users, or
(b) it'll only happen when two people try doing something
at _exactly_ the same time"
I don't disagree with either point, but I think we'll run into this problem much sooner than we plan. However, I'm trying to calculate (or figure out a mechanism to calculate) how many users would need to be using the system before we start seeing messed-up links.
Any mathematical insights into that? Again, I KNOW its a horrible practice, I just want to understand the variables in this situation...
Update: Thanks for the comments folks - we're moving in the right direction and getting the code fixed!
The point is not whether potential bad situations are likely; the point is that they are possible. As long as there's a non-trivial probability of a known issue occurring, it should be avoided.
It's not like we're talking about changing a one line function call into a 5000 line monster to deal with a remotely possible edge case. We're talking about actually shortening the call to a more readable, and more correct usage.
I kind of agree with @Mark Baker that there is some performance consideration, but since id is a primary key, the MAX query will be very quick. Sure, LAST_INSERT_ID() will be faster (since it's just reading from a session variable), but only by a trivial amount.
And you don't need a lot of users for this to occur. All you need is a lot of concurrent requests (not even that many). If the time between the start of the insert and the start of the select is 50 milliseconds (assuming a transaction safe DB engine), then you only need 20 requests per second to start hitting an issue with this consistently. The point is that the window for error is non-trivial. If you say 20 requests per second (which in reality is not a lot), and assuming that the average person visits one page per minute, you're only talking 1200 users. And that's for it to happen regularly. It could happen once with only 2 users.
And right from the MySQL documentation on the subject:

You can generate sequences without calling LAST_INSERT_ID(), but the utility of using the function this way is that the ID value is maintained in the server as the last automatically generated value. It is multi-user safe because multiple clients can issue the UPDATE statement and get their own sequence value with the SELECT statement (or mysql_insert_id()), without affecting or being affected by other clients that generate their own sequence values.
Instead of using SELECT MAX(id) you should do as the documentation says:
Instead, use the internal MySQL SQL function LAST_INSERT_ID() in an SQL query
Even so, if your logic spans multiple statements you could still have race conditions. The best option you have is to lock the tables before and after your requests, or even better, use transactions.
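A minimal sketch of the difference using the mysqli extension (table and column names are hypothetical):

<?php
$db = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');

// Wrong: another client can insert between these two statements,
// so MAX(id) may belong to somebody else's row.
// $db->query("INSERT INTO orders (customer_id) VALUES (42)");
// $row = $db->query("SELECT MAX(id) AS id FROM orders")->fetch_assoc();

// Right: insert_id (LAST_INSERT_ID() under the hood) is maintained
// per connection, so other clients' inserts cannot affect it.
$db->query("INSERT INTO orders (customer_id) VALUES (42)");
$id = $db->insert_id;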
I don't have the math for it, but I would point out that response (a) is a little silly. Doesn't the company want a lot of users? Isn't that a goal? That response implies that they'd rather solve the problem twice, possibly at great expense the second time, instead of solving it once correctly the first time.
This will happen whenever someone adds a row to the table between your insert and that query running. So to answer your question: even two people using the system is enough for things to go wrong.
At least using the LAST_INSERT_ID() will get the last ID for a particular resource so it won't matter how many new entries have been added in between.
In addition to the risk of getting the wrong ID value returned, there's also the additional database query overhead of SELECT MAX(id), and it's more PHP code to actually execute than a simple mysql_insert_id(). Why deliberately code something to be slow?
Recently I've been working on quite a big project with PHP + MySQL, and now I'm concerned about my MySQL usage. What should I do to make my MySQL setup as optimal as possible? Tell me everything you know; I'll be really grateful.
Second question: I use one MySQL query per page load to fetch the page's information. It's quite a big query, because I take information from a few tables with a join. Maybe I should do something else?
Thank you.
Some top tips from MySQL Performance tips forge
Specific Query Performance:
Use EXPLAIN to profile the query execution plan
Use the Slow Query Log (always have it on!)
Don't use DISTINCT when you have or could use GROUP BY
Insert Performance:
Batch INSERT and REPLACE (see the example after this list)
Use LOAD DATA instead of INSERT
LIMIT m,n may not be as fast as it sounds
Don't use ORDER BY RAND() if you have > ~2K records
Use SQL_NO_CACHE when you are SELECTing frequently updated data or large sets of data
Avoid wildcards at the start of LIKE queries
Avoid correlated subqueries in SELECT and WHERE clauses (try to avoid IN)
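To illustrate the batch-INSERT tip: one multi-row statement beats many single-row statements because the parse and client/server round trip happen once. A minimal example (table and column names are made up):

INSERT INTO log_events (user_id, event) VALUES
    (1, 'login'),
    (2, 'logout'),
    (3, 'login');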
Scaling Performance Tips:
Use benchmarking
Isolate workloads: don't let administrative work (e.g. backups) interfere with customer performance.
Debugging sucks, testing rocks!
As your data grows, indexing may change (cardinality and selectivity change), and your structure may need to change. Make your schema as modular as your code. Make your code able to scale. Plan for and embrace change, and get developers to do the same.
Network Performance Tips:
Minimize traffic by fetching only what you need.
1. Paging/chunked data retrieval to limit result size
2. Don't use SELECT *
3. Be wary of lots of small quick queries if a longer query can be more efficient
Use multi_query if appropriate to reduce round-trips (see the sketch after this list)
Use stored procedures to avoid bandwidth wastage
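For the multi_query tip, a minimal mysqli sketch (credentials and queries are placeholders); both result sets come back and are drained from a single round trip:

<?php
$db = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');

$db->multi_query(
    'SELECT COUNT(*) FROM orders;
     SELECT COUNT(*) FROM customers'
);
do {
    if ($result = $db->store_result()) {
        print_r($result->fetch_row()); // one count per result set
        $result->free();
    }
} while ($db->more_results() && $db->next_result());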
OS Performance Tips:
Use proper data partitions
1. For MySQL Cluster: start thinking about Cluster before you need it
Keep the database host as clean as possible. Do you really need a windowing system on that server?
Utilize the strengths of the OS
Pare down cron scripts
Create a test environment
Learn to use the EXPLAIN tool.
Three things:
Joins are not necessarily suboptimal. Oftentimes schemata that use joins will be faster than those that achieve the same but avoid table joins. The important thing is to know that your joins are optimal. EXPLAIN is very helpful but you also need to know how indexes work.
If you're grabbing data from the DB on every page hit, consider whether a caching system would work for you. If so, check out PHP memcache and memcached. They're easy to use in PHP and very fast; they're popular for a reason. (See the sketch after the next point.)
Back to MySQL: make sure your key buffer is sized correctly. You can also think about using dedicated key buffers for critical indexes that should remain in cache. Read about CACHE INDEX and LOAD INDEX INTO CACHE.
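A minimal caching sketch for point 2, using the pecl memcached extension; the server address, cache key, TTL, and the query function are all hypothetical:

<?php
$cache = new Memcached();
$cache->addServer('127.0.0.1', 11211);

$key  = 'front_page_data';
$rows = $cache->get($key);

if ($rows === false) {                // cache miss: hit the database once
    $rows = run_front_page_query();   // your existing DB call (placeholder)
    $cache->set($key, $rows, 300);    // keep it for 300 seconds
}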
"...because I take information from a few tables with a join"
Joins, even "big" joins aren't bad. Just be sure that you have good indexes.
Also note that performance with a couple of records is a lot different than performance with hundreds of thousands of records, so test accordingly.
For performance, this book is good: High Performance MySQL. The associated blog is good too.
My 2 cents: set your long_query_time to under 2 seconds and use mysqlsla (get it from hackmysql.com) to analyse the 'slow' queries... This way you can drill down into the slower queries as they come along...
(mysqlsla can also benefit from the log-queries-not-using-indexes option)
On hackmysql.com there's also a script called 'mysqlreport' that gives estimates of how your installation is running (once it's been running a while) and also gives pointers as to where to tune your setup more precisely...
Being perfect is a bit of a challenge and not the first target to set yourself.
Enable MySQL logging of all queries, and write some code which parses the log files and replaces any literal values in the SQL statements with placeholders.
e.g. changes
SELECT * FROM atable WHERE something=5 AND other='splodgy';
and
SELECT * FROM atable WHERE something=1 AND other='zippy';
to something like:
SELECT * FROM atable WHERE something=:1 AND other=:2;
(Sorry, I've not got my code which does this to hand - but it's not rocket science)
Then shove the re-written log into a table so you can prioritize your performance fixes based on length and frequency of execution.
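A rough sketch of such a rewriter (hypothetical; it only handles simple quoted strings and bare numbers, matching the placeholder style above):

<?php
function normalize_query($sql) {
    $n = 0;
    // Swap quoted strings and bare numbers for numbered placeholders
    // so identical query shapes can be grouped and counted.
    return preg_replace_callback(
        "/'[^']*'|\\b\\d+\\b/",
        function ($m) use (&$n) { return ':' . ++$n; },
        $sql
    );
}

echo normalize_query("SELECT * FROM atable WHERE something=5 AND other='splodgy'");
// SELECT * FROM atable WHERE something=:1 AND other=:2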
Is it possible to do a simple COUNT(*) query in a PHP script while another PHP script is running an INSERT...SELECT query?
The situation is that I need to create a table with ~1M or more rows from another table, and while inserting I do not want the user to feel the page is freezing, so I am trying to keep the count updated. But when using SELECT COUNT(*) FROM table while the insert runs in the background, I get only 0 until the insert is completed.
So is there any way to ask MySQL to return partial results first? Or is there a fast way to do a series of inserts with data fetched from a previous SELECT query while keeping about the same performance as a single INSERT...SELECT query?
The environment is PHP 4.3 and MySQL 4.1.
Without reducing performance? Not likely. With a little performance loss, maybe...
But why are you regularly creating tables and inserting millions of rows? If you do this only very seldom, can't you just warn the admin (presumably the only one allowed to do such a thing) that it takes a long time? If you're doing this all the time, are you really sure you're not doing it wrong?
I agree with Stein's comment that this is a red flag if you're copying 1 million rows at a time during a PHP request.
I believe that in a majority of cases where people are trying to micro-optimize SQL, they could get much greater performance and throughput by approaching the problem in a different way. SQL shouldn't be your bottleneck.
If you're doing a single INSERT...SELECT, then no, you won't be able to get intermediate results. In fact this would be a Bad Thing, as users should never see a database in an intermediate state showing only a partial result of a statement or transaction. For more information, read up on ACID compliance.
That said, the MyISAM engine may play fast and loose with this. I'm pretty sure I've seen MyISAM commit some but not all of the rows from an INSERT...SELECT when I've aborted it part of the way through. You haven't said which engine your table is using, though.
The other users can't see the insertion until it's committed. That's normally a good thing, since it makes sure they can't see half-done data. However, if you want them to see intermediate data, you could throw in an occasional call to COMMIT while you're inserting.
By the way, don't let anybody tell you to turn autocommit on. That's a HUGE time waster. I have a "delete and re-insert" job on my database that takes one third as long when I turn off autocommit.
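A sketch of the occasional-commit idea with autocommit off, using mysqli for brevity (the table, column, and $values array are hypothetical; on the question's PHP 4.3 you'd use the old mysql_* calls instead):

<?php
$db = new mysqli('localhost', 'dbuser', 'dbpass', 'mydb');
$db->autocommit(false);                // batch the work into transactions

$stmt = $db->prepare('INSERT INTO new_table (val) VALUES (?)');

$i = 0;
foreach ($values as $val) {            // $values: rows fetched earlier
    $stmt->bind_param('s', $val);
    $stmt->execute();
    if (++$i % 1000 === 0) {
        $db->commit();                 // readers (and COUNT(*)) now see this batch
    }
}
$db->commit();                         // commit the final partial batch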
Just to be clear, MySQL 4 isn't configured by default to use transactions. It uses the MyISAM table type which locks the entire table for each insert, if I remember correctly.
Your best bet would be to use one of the MySQL bulk insertion mechanisms, such as LOAD DATA INFILE, as these are dramatically faster at inserting large amounts of data. As for the counting: you could break the inserts into N groups of 1,000 rows (or whatever size Y), then divide your progress meter into N sections and update it after each group completes (see the sketch below).
Edit: Another thing to consider is, if this is static data for a template, then you could use a "select into" to create a new table with the same data. Not sure what your application is, or the intended functionality, but that could work as well.
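A rough sketch of the chunked approach, using the era-appropriate mysql_* functions (table and column names are made up; the ORDER BY on a unique key keeps the LIMIT paging deterministic):

<?php
$chunk  = 1000;
$offset = 0;

do {
    mysql_query(
        "INSERT INTO new_table (col1, col2)
         SELECT col1, col2 FROM old_table
         ORDER BY id
         LIMIT $offset, $chunk"
    );
    $copied  = mysql_affected_rows();
    $offset += $chunk;
    // update a progress counter (file, session, or status row) here
} while ($copied == $chunk);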
If you can get to the console, you can ask various status questions that will give you the information you are looking for. The command is SHOW PROCESSLIST.