Mysql 'where clause' with long tables? - php

I have designed a website like bit.ly, but a bit different. It is written in PHP with MySQL. When I was running it on localhost, everything seemed to work fine: pages loaded in 4.5 milliseconds, and I was as happy as a clam.
I uploaded it to the server, users started surfing the website and using it, and everything seemed to work fine until the main table started reaching millions of rows.
The table is about a million rows long right now (it has to be that way) and growing. The pages that need that table take 500ms to load... The MySQL query is the following:
select link
from table
where kind = $kind and kind_idd = $kind_idd and live = 1;
It can return more than one link; in fact, it usually returns between 10 and 50 links.
The problem is that WHERE clause. I am sure MySQL must have something to make it faster. I have been searching Google and found indexes, keys, and so on, but I couldn't find a website that explained it for dummies. If someone could give me an example to make this thing go fast, I would really appreciate it.
Thank you!

Try using MySQL's EXPLAIN so that you can see what is happening.
You probably need to ensure that you have indexes on kind, kind_idd and live; see http://dev.mysql.com/doc/refman/5.0/en/create-index.html
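As a minimal sketch, assuming MySQL 5.x and the table/column names from the question (the index name and the literal values standing in for $kind and $kind_idd are made up), a single composite index over the three filtered columns usually lets this query avoid a full table scan:

-- Check what the optimizer is doing now (look at the "key" and "rows" columns):
EXPLAIN SELECT link FROM `table` WHERE kind = 1 AND kind_idd = 123 AND live = 1;

-- Add one composite index covering all three columns in the WHERE clause:
ALTER TABLE `table` ADD INDEX idx_kind_kindidd_live (kind, kind_idd, live);

-- Run the EXPLAIN again: "key" should now show idx_kind_kindidd_live
-- and "rows" should drop from millions to roughly the 10-50 matching links.

With all three equality conditions covered by one index, MySQL can jump straight to the matching rows instead of checking every row in the table.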

Related

sql query return time is random

I have a weird problem with displaying the results from a database on a website. My database is 2.3 million rows long and has a FULLTEXT index. I am using this as a query:
SELECT *
FROM $tbl_name
WHERE MATCH(DESCRIPTION)
AGAINST ('$search')
LIMIT $start, $limit.
The results are displayed using PHP pagination, which is why I use LIMIT. Running the query on the actual database server through phpMyAdmin consistently returns in under 100ms. Although the query time is under 100ms, it can still take up to 2 minutes, because when I run the query it gets stuck at "loading". When the webpage runs the query I get anywhere from a 300ms load time to 2 minutes. Based on that I used Chrome's built-in developer console and saw that the receiving time of my PHP script is the problem. Copying the table to a new table and creating a new FULLTEXT index solves the problem for all of 2 minutes, then it just goes back to being insanely slow. I am using HostGator btw, which I'm guessing is the problem. If anyone has any idea on why it is so random I would greatly appreciate it, as 2 minutes to run a search is not acceptable ^.^, thanks a ton!
If this is a shared web server, it could be that other people's sites are bogging down the server. Maybe you need a dedicated server instead?
You could also try running the query against the database over and over a few times to see if performance degrades there after the first 2 minutes, like you notice through PHP; a rough way to do that is sketched below.
Also, 2 million rows is generally a lot for MySQL, and even more so for FULLTEXT with MySQL. You might want to consider a proper search engine such as Solr at that point.
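A minimal sketch of that repeated-run check, assuming a MySQL 5.x command-line session; the table name "articles" and the search term stand in for $tbl_name and $search from the question:

-- Enable per-query profiling for this session:
SET profiling = 1;

-- Run the same search several times in a row:
SELECT * FROM articles WHERE MATCH(DESCRIPTION) AGAINST ('some search term') LIMIT 0, 20;
SELECT * FROM articles WHERE MATCH(DESCRIPTION) AGAINST ('some search term') LIMIT 0, 20;
SELECT * FROM articles WHERE MATCH(DESCRIPTION) AGAINST ('some search term') LIMIT 0, 20;

-- Compare the durations. If they stay flat here but the page still crawls,
-- the time is being lost between PHP and MySQL (or on the host), not in the query itself.
SHOW PROFILES;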

wordpress database select speed

I have a SELECT which counts the number of rows from 3 tables, and I'm using the WP function $wpdb->get_var( $sql ); there are about 10,000 rows in the tables. Sometimes this SELECT takes under 1 second to load, sometimes more than 15. If I run this SQL in phpMyAdmin it always returns the number of rows in less than 1 second. Where could the problem be?
There are a couple of things you can do.
First, do an analysis of your query. Putting EXPLAIN before the query will output data about how MySQL executes it, and you may be able to spot problems there.
Read more about EXPLAIN in the MySQL manual.
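For instance, a sketch of what that looks like for a row-counting query over three WordPress tables; the tables and join columns here are standard WordPress ones chosen for illustration, not the asker's actual query:

EXPLAIN
SELECT COUNT(*)
FROM wp_posts p
JOIN wp_postmeta m ON m.post_id = p.ID
JOIN wp_term_relationships tr ON tr.object_id = p.ID
WHERE p.post_status = 'publish';

-- In the output, a "type" of ALL or an empty "key" column on any table
-- means that table is being scanned in full on every count.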
Also, WordPress may not have indexed the most commonly used columns.
Try indexing some of the columns which you most commonly use within your query and see if it helps.
For example:
ALTER TABLE wp_posts ADD INDEX (post_author,post_status)
Plugin
You could try a plugin such as Debug Queries, which prints the queries onto the front-end and helps you see where things are taking a long time. It is recommended to run this only in a development environment and not on a live website.
I would also recommend hooking up something like New Relic and trying to profile what's happening on the application side. If New Relic is not an option, you might be able to use xhprof (http://pecl.php.net/package/xhprof) and/or IfP (https://code.google.com/p/instrumentation-for-php/).
Very few queries perform the same from an application in production as they do as direct SQL queries. You may have contention, read locks, or any number of other things that cause a query from PHP to effectively stall on its way over to MySQL. In that case you might literally see the query running very fast, but the time it takes to actually begin executing that query from PHP would be very slow. You'll definitely need to profile what's happening on the way from WordPress to MySQL and back, based on what you're saying. The tools I mentioned should all be very useful for helping you accomplish that.

Monitoring in PhpMyAdmin

I guess I'm a little confused about what's going on here.
In phpMyAdmin, in the Status -> Monitor section, when my website is not even doing anything SQL-based at the time, I'm getting 6000 questions and 200 connections.
This very much does not seem normal. Can anyone give me some tips about what's really going on here? This can't be normal, right?
Edit:
I'm trying to connect to about four different tables every five seconds and pull information from them, and I believe it's causing my server to crash. Is this a bad practice?
I'm using jQuery and PHP. I think even through bad programming I can't be hitting the 400 queries a second phpMyAdmin says I'm hitting.
You might be able to see which queries are running from Status > Monitor, in a particular time range taken from the moving graph. See http://www.youtube.com/watch?v=7ZRZoCsrKis starting at 6:00.
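To sanity-check that 400-queries-a-second figure without phpMyAdmin in the middle, a minimal sketch using MySQL's standard status counters (the counter names are standard; the 10-second interval and the arithmetic are up to you):

-- "Questions" counts every statement the server has received since startup;
-- "Threads_connected" is the number of connections open right now.
SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Threads_connected';

-- Wait e.g. 10 seconds, run SHOW GLOBAL STATUS LIKE 'Questions' again,
-- and (new value - old value) / 10 is the real queries-per-second rate.
-- Keep in mind that phpMyAdmin's own status pages add to this count.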

Optimizing and compressing mysql db with drupal

I have developed a news website in a local language (UTF-8) which serves an average of 28k users a day. The site has recently started to show many errors and slow down. I got a call from the host saying that the DB is using almost 150GB of space. I believe that's way too much for the DB and think there is something critically wrong, but I cannot understand what it could be. The site is in Drupal and the DB is MySQL (InnoDB). Can anyone give directions as to what I should do?
UPDATE: It seems like the InnoDB dump is using the space. What can be done about it? What's the standard procedure to deal with this issue?
The question does not have enough info for a specific answer. Maybe your code is writing the same data to the DB multiple times, maybe you are logging to a table and the logs have become very big, or maybe somebody managed to get access to your site/DB and is misusing it.
You need to log in to your database and check which table is taking the most space. Use SHOW TABLE STATUS, which will tell you the size of each table. Then manually check the data in the table to figure out what is wrong.
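A minimal sketch of that check using information_schema; the schema name 'your_drupal_db' is a placeholder for your actual Drupal database:

SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       table_rows
FROM information_schema.TABLES
WHERE table_schema = 'your_drupal_db'
ORDER BY (data_length + index_length) DESC
LIMIT 10;

-- On Drupal sites the usual suspects are the watchdog, cache_* and sessions tables;
-- note that table_rows is only an estimate for InnoDB.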

MySql cache problems... some questions

First of all, I am using phpMyAdmin; is this okay or not?
Because when I have the cache disabled and do two queries after each other, the second query is always faster, so I am thinking maybe there is some internal cache in phpMyAdmin?
Secondly, is there any way to get the time of how long a query takes into PHP, and echo it to the browser? (So I can use PHP instead of phpMyAdmin.)
Thirdly, SHOW STATUS LIKE '%qcache%' gives me this:
Qcache_free_blocks 1
Qcache_free_memory 25154096
Qcache_hits 0
Qcache_inserts 2
Qcache_lowmem_prunes 0
Qcache_not_cached 62
Qcache_queries_in_cache 2
Qcache_total_blocks 6
How come Qcache_not_cached grows by 5 or 10 for every query I make? Shouldn't there only be 1 increase per query?
Also, when I enabled the cache and did a query, Qcache_queries_in_cache increased by 2... I thought it would increase by 1 for every query. Can someone explain?
THEN, when I did another query, the same as the one I cached, there was no performance gain at all; the query took as long as without the cache enabled...
Any help here would be appreciated, except referring me to the manual (I have read it already).
Thanks
UPDATE
Here is a typical query I make:
SELECT * FROM `langlinks` WHERE ll_title='Africa'
First of all, I am using PhpMyAdmin, is this okay or not?
I suppose it's better than nothing, and more user-friendly than a command-line client; but a thing I don't like with phpMyAdmin is that it sends queries you didn't write.
I've already seen phpMyAdmin send queries that were "hurting" a server while the one written by the user was OK, for instance (I don't have the exact example in mind).
Generally speaking, though, I'd say it's "ok" as long as you accept that more requests will be sent: phpMyAdmin displays lots of information (like the list of databases, tables, and so on), and has to get that information from somewhere!
Shouldn't there only be 1 increase per query?
If you really want to see the impact of your query and nothing else, you'd probably be better off using the command-line mysql client instead of phpMyAdmin: that graphical tool has to send queries to get the information it displays.
The question, actually, is: do you prefer a user-friendly tool, or do you want to monitor only what your query actually does?
In most cases, the answer is "user-friendly tool" -- and that's why phpMyAdmin has so much success ;-)
phpMyAdmin itself queries and updates status information from the MySQL server, so you won't see the counters increment by exactly one in phpMyAdmin.
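As a minimal sketch of watching the query cache from the command-line client instead, using the langlinks query from the question (this assumes the query cache is actually enabled, i.e. query_cache_size > 0 and query_cache_type = 1):

-- Note the current hit counter (it is a global, server-wide counter):
SHOW GLOBAL STATUS LIKE 'Qcache_hits';

-- Run the exact same query twice; the text must be byte-for-byte identical
-- for the second run to be served from the cache:
SELECT * FROM `langlinks` WHERE ll_title='Africa';
SELECT * FROM `langlinks` WHERE ll_title='Africa';

-- Check the counter again; an increase of 1 means the second run was a cache hit.
SHOW GLOBAL STATUS LIKE 'Qcache_hits';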
