sql query return time is random - php

I have a weird problem with displaying results from a database on a website. My table is 2.3 million rows long and has a FULLTEXT index. I am using this as a query:
SELECT *
FROM $tbl_name
WHERE MATCH(DESCRIPTION)
AGAINST ('$search')
LIMIT $start, $limit
The results are displayed using PHP pagination, which is why I use LIMIT. Running the query on the actual database server through phpMyAdmin consistently returns in under 100ms. Although the reported query time is under 100ms, the page can still take up to 2 minutes because when I run the query it gets stuck at "loading". When the webpage runs the query I get anywhere from a 300ms load time to 2 minutes.
Based on that, I used Chrome's built-in developer console and saw that the receiving time of my PHP script is the problem. Copying the table to a new table and creating a new FULLTEXT index solves the problem for all of 2 minutes, then it just goes back to being insanely slow. I am using HostGator, by the way, which I'm guessing is the problem. If anyone has any idea why it is so random I would greatly appreciate it, as 2 minutes to run a search is not acceptable ^.^, thanks a ton!

If this is a shared web server, it could be that other people's sites are bogging down the server. Maybe you need a dedicated server instead?
You could also try running the query against the database over and over a few times to see if performance degrades there after the first 2 minutes, like you notice through PHP (see the sketch below).
Also, 2 million rows is generally a lot for MySQL, and even more so for FULLTEXT with MySQL. You might want to consider a proper search engine like Solr at that point.
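For the repeated-timing test, here is a minimal sketch, assuming mysqli and placeholder connection details, table name, and search term:
$mysqli = new mysqli('localhost', 'user', 'pass', 'dbname'); // placeholders
$sql = "SELECT * FROM products WHERE MATCH(DESCRIPTION) AGAINST ('test') LIMIT 0, 25";
for ($i = 0; $i < 10; $i++) {
    $start = microtime(true);
    $result = $mysqli->query($sql);
    while ($result->fetch_assoc()) { } // drain the rows so fetch time is measured too
    printf("run %d: %.1f ms\n", $i + 1, (microtime(true) - $start) * 1000);
    sleep(30); // spread the runs past the ~2 minute mark where it reportedly degrades
}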

Related

Simple query slow in Laravel, but insanely fast in database console

I have a very strange problem that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, all with proper indexes on it.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [▼
0 => "44087"
]
"time" => 1015.2
The problem is, if I run THE SAME query inside the database console or phpMyAdmin, the query takes approximately 20 milliseconds.
I do not understand how that is possible, since I am using the same database, same query, same computer and same connection to the database.
What can be the reason?
phpMyAdmin will automatically add a LIMIT for you.
This is because phpMyAdmin will always paginate your query by default.
In your Laravel/Eloquent query, you are loading all 30k records in one go. That must take time.
To remedy this, try paginating or chunking your query (see the sketch below).
The total will take long, yes, but the chunks themselves will be very quick.
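A minimal sketch of both suggestions, assuming an Order Eloquent model and an Id primary key (both names are assumptions):
// Pagination: load only one page of 25 records per request.
$orders = Order::where('ClientId', $id)->paginate(25);

// Chunking: process all records, but in fixed-size batches.
DB::table('Orders')->where('ClientId', $id)->orderBy('Id')
    ->chunk(500, function ($rows) {
        // handle each batch of 500 rows here
    });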
I would try debugging the queries with the Debug Bar to see how much time each one takes, and which is taking longer. It's very easy to install and use: https://github.com/barryvdh/laravel-debugbar
You might also be interested in reading up on DB administration; it can give you some ideas. Good luck.
There are several issues here. The first is how Laravel works. Laravel only loads the services and classes that are actually used during your script. This is done to conserve resources, since PHP is meant to be run as a CGI script rather than a long-running process. As a result, your timing might include the connection setup step and not just the query execution. For a more reliable result, execute any query before timing your simple query (see the sketch below).
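A minimal sketch of that warm-up, timed with microtime():
DB::select('select 1'); // throwaway query so connection setup isn't included in the timing
$start = microtime(true);
$orders = DB::select('select * from Orders where ClientId = ?', [$id]);
printf("query took %.1f ms\n", (microtime(true) - $start) * 1000);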
There is another side to that behavior. In a long-running process, like a job runner, you ought not to change service parameters, because your parameter changes can spill into other jobs and cause undesired behavior. For example, if you provide an SMTP login feature, you ought to reset the email sender credentials after sending the email; otherwise you will run into an issue where a user who doesn't use that feature sends an email as another user who does. The mistake comes from assuming that services are reloaded every time a job is executed, which is the behavior when running the HTTP part.
Second, you're not using LIMIT, as some other posters pointed out.
I'm almost sure this is due to phpMyAdmin adding a LIMIT, which relates to what you are seeing in the page output.
If you look at the top of the phpMyAdmin page, you will see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should get the same performance when you add the LIMIT to your query.
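For instance, something like this (the 0, 25 page window is just phpMyAdmin's default, shown for illustration):
SELECT * FROM Orders WHERE ClientId = 44087 LIMIT 0, 25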
Enable the MySQL query log ("How to enable MySQL Query Log?"; a sketch follows below).
Run the query through phpMyAdmin.
See which queries you actually get in MySQL.
Run the app.
See which queries you actually get in MySQL.
Tell us what those extra queries were that slow things down.
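One way to turn the general query log on at runtime (the log file path is just an example):
SET GLOBAL general_log = 'ON';
SET GLOBAL general_log_file = '/tmp/mysql-general.log';
-- ...run the query from phpMyAdmin, then from the app, and compare the logged statements...
SET GLOBAL general_log = 'OFF';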
The query should have the same speed in phpMyAdmin as in the application; if not, use the EXPLAIN statement to see more details about the query.
The cause of this discrepancy may be one of many things other than MySQL, for example:
The PHP script itself may have functions that cause slow loading.
Check the server's error.log; maybe there are errors in those functions.
phpMyAdmin could use a different MySQL connection function than Laravel. Check which extension is used for the connection; maybe it's not compatible with the PHP version you use, and I think this could be the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, mysql_connect was much faster than the PDO extension on PHP < 5.6 in my experience, but the cause was always in the PHP functions in the script.

wordpress database select speed

I have a SELECT which counts the number of rows from 3 tables. I'm using the WP function $wpdb->get_var( $sql ), and there are about 10,000 rows in the tables. Sometimes this SELECT takes under 1 second to load, sometimes more than 15. If I run the SQL in phpMyAdmin it always returns the number of rows in less than 1 second. Where could the problem be?
There are a couple of things you can do.
First, do an analysis of your query. Putting EXPLAIN before the query will output data about how MySQL executes it, and you may be able to spot problems there.
Read more about EXPLAIN here
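For example, with a hypothetical count query against the posts table:
EXPLAIN SELECT COUNT(*) FROM wp_posts WHERE post_author = 1 AND post_status = 'publish'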
Also, WordPress may not have indexed the most commonly used columns.
Try indexing some of the columns which you most commonly use within your query and see if it helps.
For example:
ALTER TABLE wp_posts ADD INDEX (post_author,post_status)
Plugin
You could try a plugin such as Debug Queries, which prints the queries onto the front-end and helps you debug where things are taking a long time. It is recommended to run this only in the dev area and not on a live website.
I would also recommend hooking up something like New Relic and trying to profile what's happening on the application side. If New Relic is not an option, you might be able to use xhprof (http://pecl.php.net/package/xhprof) and/or IfP (https://code.google.com/p/instrumentation-for-php/).
Very few queries perform the same in production in an application as they do as direct SQL queries. You may have contention, read locks, or any number of other things that cause a query from PHP to effectively stall on its way over to MySQL. In that case you might literally see the query running very fast, but the time it takes to actually begin executing that query from PHP would be very slow. You'll definitely need to profile what's happening on the way from WordPress to MySQL and back, based on what you're saying. The tools I mentioned should all be very useful for helping you accomplish that.
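If you go the xhprof route, a minimal sketch looks like this (assuming the PECL extension is installed; the output path is just an example):
xhprof_enable();
$count = $wpdb->get_var( $sql ); // the slow call being profiled
$data = xhprof_disable();
file_put_contents('/tmp/xhprof.json', json_encode($data)); // inspect the timings offline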

Optimizing ajax to reduce mysql server load

I have an auction site that sometimes becomes heavily loaded, and MySQL is mostly what is seen consuming a lot of memory & CPU. The situation I have is as below.
An AJAX query goes to MySQL every second for every user who is online and watching the auction, to check the bid count against a previous value. If anyone places a bid, the count is different, so this AJAX call invokes one more AJAX call that retrieves records and displays, in a table, the bids specific to the user who is watching / logged in. I'm limiting this to the first 10 to reduce load.
However, the problem is that if there are 50 users online and one of them places a bid, 50 queries go to MySQL, all of them detect that the bid count has changed, and they issue further queries to get the records to display the bids for each user.
The bigger problem is that if there are 500 users online, 500 queries go to MySQL to detect a change, and if a bid is placed, another 500 queries (one specific to each online user) go to MySQL and can potentially crash the server.
Note: currently there is a single MySQL connection object used as a singleton in a PHP script that is responsible for executing queries, retrieving records, etc.
I'm essentially looking for a solution where 500 queries don't go to MySQL when 500 users are online, but all of them still get an update even if one of them places a bid on a particular auction. Any ideas / suggestions are highly welcome.
How can I best implement a solution for this scenario that reduces the load on MySQL?
Resource-wise we are fairly OK, running a VPS4 on HostGator. The only problem is CPU / memory usage, which hits 95% when many users are placing bids.
I'd appreciate some suggestions.
It sounds like you will want to take a look at memcached or some other caching service. You can have one process querying MySQL and updating memcached, and the AJAX endpoint querying memcached directly to retrieve the rows.
Memcached does not keep relational consistency, but querying it is much less resource-consuming than querying MySQL every single time.
PHP has a very nice interface to work with memcached: Memcache (see the sketch below).
The website of the memcached project.
There are a few other caching services. You might also want to look at query caching in MySQL, but this would still need several connections into MySQL, which will be very resource-consuming either way.
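A minimal sketch of the memcached pattern described above; the key scheme, table, and column names are assumptions:
$memcache = new Memcache();
$memcache->connect('127.0.0.1', 11211);

$key = 'bid_count_' . (int)$auctionId; // one cached counter per auction (naming is hypothetical)
$count = $memcache->get($key);
if ($count === false) {
    // Cache miss: fall back to MySQL once and repopulate the cache.
    $result = mysql_query('SELECT COUNT(*) FROM bids WHERE auction_id = ' . (int)$auctionId, $connection);
    $row = mysql_fetch_row($result);
    $count = $row[0];
    $memcache->set($key, $count, 0, 1); // expire after 1 second
}
echo $count; // the per-second AJAX poll reads this instead of hitting MySQL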
In the short term, you could also just run the detailed query directly. It will return nothing when there's nothing to update (which replaces the first query!).
That might buy you some time for caching or deeper analysis of your query speed.

API connect to mysql sometimes not performing queries

A very puzzling and exhausting problem of mine:
I developed an API to allow others to draw information from my database. My server collects the POSTed info, builds a MySQL query, performs the query [$query = mysql_query($string, $connection);], and returns the results (very simple).
The problem is that sometimes (say 1 out of every 5 tries) no info is returned. My server logs say that the resource ($query) is boolean (and therefore there are no results). My server receives the info from the remote API users every single time; the problem seems to be that my queries are sometimes just not being performed...
Why is this happening?
Is it a MySQL performance issue? I never see even a hint of a performance issue for queries made on my own page (i.e. not from the API)!
Please help...
Your query might be failing. Try doing this:
mysql_query($string, $conn) or die(mysql_error());
If the query is generating an error, this will stop the script and display the MySQL error. Using that error, you can fix your query so that everything works.
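For an API, a sketch that logs the failure instead of killing the script may fit better (the log wording is just an example):
$query = mysql_query($string, $connection);
if ($query === false) {
    // Record exactly which query failed and why, then return an error response.
    error_log('MySQL error: ' . mysql_error($connection) . ' in query: ' . $string);
}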
By the way, you are using $string as the variable name, but it might be a better idea to use something like $builtQuery, because "string" might be confusing if you need to edit the script later on.
Greetings.

MySql cache problems... some questions

First of all, I am using phpMyAdmin; is this okay or not?
I ask because when I have the cache disabled and run two queries one after the other, the second query is always faster, so I am thinking maybe there is an internal cache in phpMyAdmin?
Secondly, is there any way to get the time a query takes into PHP and echo it to the browser? (So I can use PHP instead of phpMyAdmin.)
Thirdly, SHOW STATUS LIKE '%qcache%' gives me this:
Qcache_free_blocks 1
Qcache_free_memory 25154096
Qcache_hits 0
Qcache_inserts 2
Qcache_lowmem_prunes 0
Qcache_not_cached 62
Qcache_queries_in_cache 2
Qcache_total_blocks 6
How come Qcache_not_cached grows by 5 or 10 for every query I make? Shouldn't there only be 1 increase per query?
Also, when I enabled the cache and ran a query, Qcache_queries_in_cache increased by 2... I thought it would increase by 1 for each query. Can someone explain?
Then, when I ran the same query as the one I had cached, there was no performance gain at all; the query took as long as with the cache disabled...
Any help here please, except for referring to the manual (I have read it already).
Thanks
UPDATE
Here is a typical query I make:
SELECT * FROM `langlinks` WHERE ll_title='Africa'
First of all, I am using PhpMyAdmin, is this okay or not?
I suppose it's better than nothing, and more user-friendly than a command-line client; but a thing I don't like about phpMyAdmin is that it sends queries you didn't write.
I've seen phpMyAdmin send queries that were "hurting" a server while the one the user had written was OK, for instance (I don't have the exact example in mind).
Generally speaking, though, I'd say it's OK as long as you accept that more requests will be sent: phpMyAdmin displays lots of information (like the list of databases, tables, and so on), and it has to get that information from somewhere!
Shouldn't there only be 1 increase per query?
If you really want to see the impact of your query and nothing else, you'd probably be better off using the command-line mysql client instead of phpMyAdmin: that graphical tool has to send queries to get the information it displays.
The question, actually, is: do you prefer a user-friendly tool? Or do you want to monitor only what your query actually does?
In most cases, the answer is "user-friendly tool" -- and that's why phpMyAdmin has so much success ;-)
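For example, from the command-line client (where only your own statements hit the server), you could snapshot the counters around a single query:
SHOW STATUS LIKE 'Qcache%';
SELECT * FROM `langlinks` WHERE ll_title='Africa';
SHOW STATUS LIKE 'Qcache%';
-- only your query ran between the two snapshots, so the counter deltas are yours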
phpMyAdmin itself runs queries and status updates against the MySQL server, so you won't see the counters increase by exactly one in phpMyAdmin.
