A very puzzling and exhausting problem of mine:
I developed an API to allow others to draw information from my database. My server collects the POSTed info, writes up a MySQL query, performs the query [$query = mysql_query($string, $connection);], and returns the results (very simple).
The problem is that sometimes (say 1 out of every 5 tries) no info is returned. My server logs say that the resource ($query) is boolean (and therefore there are no results). My server receives the info from the remote users of the API every single time; the problem seems to be that my queries are sometimes just not being performed...
Why is this happening?
Is it a MySQL performance issue? I never seem to have even a hint of a performance issue for queries made on my own page (i.e. not from the API)!
please help...
Your query might be failing. Try doing this:
mysql_query($string, $conn) or die(mysql_error());
If the query fails, die() will stop the script and display the MySQL error. Using that error, you can fix your query so that everything will eventually work.
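For instance, a slightly fuller sketch along the same lines, using the same legacy mysql_* API and the $string/$connection names from your snippet (purely illustrative):

$query = mysql_query($string, $connection);
if ($query === false) {
    // Log the exact query and the MySQL error instead of failing silently.
    error_log('Query failed: ' . $string . ' -- ' . mysql_error($connection));
    die(mysql_error($connection));
}
while ($row = mysql_fetch_assoc($query)) {
    // ... build the API response from $row ...
}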
By the way, you are using $string, but it might be a better idea to use $builtQuery, because "string" might be confusing if you are going to need to edit the script later on.
Greetings.
I have a very strange problem that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, all with proper indexes on it.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [▼
0 => "44087"
]
"time" => 1015.2
The problem is, if I run THE SAME query inside the database console or phpMyAdmin, the query takes approximately 20 milliseconds.
I do not understand how that is possible, since I am using the same database, the same query, the same computer and the same connection to the database.
What can be the reason?
PHPMyAdmin will automatically add LIMIT for you.
This is because PHPMyAdmin will always by default paginate your query.
In your Laravel/Eloquent query, you are loading all 30k records in one go. It must take time.
To remedy this, try paginating or chunking your query (see the sketch below).
The total will take long, yes, but the chunks themselves will be very quick.
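For example, a minimal sketch of chunking with the query builder (the Orders table and ClientId column come from the question; the chunk size of 500 and the Id ordering column are assumptions):

DB::table('Orders')
    ->where('ClientId', $id)
    ->orderBy('Id')                    // chunk() needs a deterministic ordering
    ->chunk(500, function ($orders) {
        foreach ($orders as $order) {
            // process one record at a time instead of holding all 30k in memory
        }
    });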
I would try debugging the queries with the Debug Bar to see how much time each one takes and which is taking longest. It's very easy to install and use: https://github.com/barryvdh/laravel-debugbar
I think you are also interested in DB administration; read this as well, you may get some ideas. Good luck.
There are several issues here. The first one is how Laravel works. Laravel only loads services and classes that are executed during your script. This is done to conserve resources, since PHP is meant to be run as a CGI script rather than a long-running process. As a result, your timing might include the connection setup step rather than just the query execution. For a more "reliable" result, execute any query before timing your simple query.
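A hedged sketch of what I mean: warm the connection with a throwaway query first, then time only the real one.

DB::select('select 1');                            // forces Laravel to open the DB connection up front

$start   = microtime(true);
$orders  = DB::select('select * from Orders where ClientId = ?', [$id]);
$elapsed = (microtime(true) - $start) * 1000;      // elapsed time in ms, excluding connection setup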
There is another side to that behavior. In a long-running process, such as a job runner, you should not change service parameters, because your changes can spill over into other jobs and cause undesired behavior. For example, if you provide an SMTP login feature, you should reset the email sender credentials after sending the email; otherwise you will run into an issue where a user who doesn't use that feature sends an email as another user who does. This mistake comes from assuming that services are reloaded every time a job is executed, which is the behavior when handling the HTTP part.
Second, as some other posters pointed out, you're not using LIMIT.
I'm almost sure this is due to phpMyAdmin adding a LIMIT, related to what you are seeing in the page output.
If you look at the top of the phpMyAdmin page, you will see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should have the same performance when you add the limit to your query.
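For instance (the limit of 25 simply mirrors phpMyAdmin's default page size):

// Same query as in the question, with an explicit LIMIT added
DB::select('select * from Orders where ClientId = ? limit 25', [$id]);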
How to enable MySQL Query Log?
Run the query through phpMyAdmin.
See which queries you actually get in MySQL.
Run the app.
See which queries you actually get in MySQL.
Tell us which extra queries showed up and slowed things down (a sketch of enabling the log follows below).
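A rough sketch of switching the general query log on from PHP (the PDO connection and credentials are placeholders, and this needs a suitably privileged MySQL account):

$pdo = new PDO('mysql:host=localhost;dbname=test', 'root', 'secret');   // hypothetical credentials
$pdo->exec("SET GLOBAL general_log = 'ON'");
$pdo->exec("SET GLOBAL log_output = 'TABLE'");   // write the log into mysql.general_log instead of a file

// Later, inspect what was actually sent to MySQL:
foreach ($pdo->query('SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 20') as $row) {
    echo $row['event_time'] . '  ' . $row['argument'] . PHP_EOL;
}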
The query should have the same speed in phpMyAdmin as in any application; whatever the application is, try using an EXPLAIN statement to see more details about the query.
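A rough sketch of running EXPLAIN from PHP (PDO and the $pdo connection are assumptions; the query is the one from the question):

$stmt = $pdo->prepare('EXPLAIN select * from Orders where ClientId = ?');
$stmt->execute([$id]);
print_r($stmt->fetchAll(PDO::FETCH_ASSOC));   // look at the "key" and "rows" columns for index usage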
The cause of this discrepancy may come from many things other than MySQL, for example:
The PHP script itself may have some functions that cause slow loading.
Try to check the server error.log; maybe there are errors in those functions.
Basically, phpMyAdmin could use a different MySQL connection function than Laravel. Try to check the extension used for the connection; maybe it's not compatible with the PHP version you use, and I think this is the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, mysql_connect was much faster than the PDO extension on PHP < 5.6 in my experience, but the cause was always the PHP functions in the script.
I have a weird problem with displaying the results from a database on a website. My database table is 2.3 million rows long and has a fulltext index. I am using this as a query:
SELECT *
FROM $tbl_name
WHERE MATCH(DESCRIPTION)
AGAINST ('$search')
LIMIT $start, $limit
The results are displayed using PHP pagination, which is why I use LIMIT. Running the query on the actual database server through phpMyAdmin consistently returns with times under 100ms. Although the query time is under 100ms, it can still take up to 2 minutes because when I run the query it gets stuck at "loading". When the webpage runs the query I get anywhere from a 300ms load time to 2 minutes. Based on that I used Chrome's built-in developer console and saw that the receiving time of my PHP script is the problem. Copying the table to a new table and creating a new fulltext index solves the problem for all of 2 minutes, then it just goes back to being insanely slow. I am using HostGator btw, which I'm guessing is the problem. If anyone has any idea on why it is so random I would greatly appreciate it, as 2 minutes to run a search is not acceptable ^.^, thanks a ton!
If this is a shared Webserver it could be that other people's sites are bogging down the server. Maybe you need to get a dedicated server instead?
You could also try running the query against the database over and over a few times to see if performance degrades there after the first 2 minutes like you notice through php.
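A quick, hedged sketch of that kind of repeat test (mysqli and the connection details are assumptions; $tbl_name and $search come from the question):

$mysqli = new mysqli('localhost', 'user', 'pass', 'db');   // hypothetical credentials
$stmt   = $mysqli->prepare("SELECT * FROM $tbl_name WHERE MATCH(DESCRIPTION) AGAINST (?) LIMIT 0, 30");
$stmt->bind_param('s', $search);

for ($i = 1; $i <= 10; $i++) {
    $start = microtime(true);
    $stmt->execute();
    $stmt->store_result();                                  // pull the full result so timing includes the fetch
    printf("run %d: %.1f ms\n", $i, (microtime(true) - $start) * 1000);
    $stmt->free_result();
}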
Also, 2 million rows is generally a lot for MySQL, especially for FULLTEXT with MySQL. You might want to consider a proper search engine at that point, like Solr, instead.
I'm not sure if this is a duplicate of another question, but I have a small PHP file that calls some SQL INSERT and DELETE for an image tagging system. Most of the time both insertions and deletes work, but on some occasions the insertions don't work.
Is there a way to view why the SQL statements failed to execute, something similar to when you use SQL functions in Python or Java, and if it fails, it tells you why (example: duplicate key insertion, unterminated quote etc...)?
There are two things I can think of off the top of my head, and one thing that I stole from amitchhajer:
pg_last_error will tell you the last error in your session. This is awesome for obvious reasons, and you're going to want to log the error to a text file on disk in case the issue is something like the DB going down. If you try to store the error in the DB, you might have some HILARIOUS* hi-jinks in the process of figuring out why.
Log every query to this text file, even the successful ones (a minimal sketch follows this list). Find out if the issue affects identical operations (an issue with your DB or connection, again) or certain queries every time (an issue with your app).
If you have access to the guts of your server (or your shared hosting is good,) enable and examine the database's query log. This won't help if there's a network issue between the app and server, though.
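A minimal sketch of the logging idea from the second point above (PDO is an assumption here, and the log path is hypothetical):

// Runs a query and appends both the query and any error to a plain text file.
// Assumes PDO's silent/warning error mode; with ERRMODE_EXCEPTION, wrap execute() in try/catch instead.
function runLogged(PDO $pdo, $sql, array $params = array())
{
    $stmt = $pdo->prepare($sql);
    $ok   = $stmt && $stmt->execute($params);

    $line = date('c') . '  ' . $sql . '  ' . json_encode($params);
    if (!$ok) {
        $line .= '  ERROR: ' . implode(' ', $stmt ? $stmt->errorInfo() : $pdo->errorInfo());
    }
    file_put_contents('/var/log/myapp/queries.log', $line . PHP_EOL, FILE_APPEND);   // hypothetical path

    return $stmt;
}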
But if I had to guess, I would imagine that when the app fails it's getting weird input. Nine times out of ten the input isn't getting escaped properly or - since you're using PHP, which murders variables as a matter of routine during type conversions - it's being set to FALSE or NULL or something and the system is generating a broken query like INSERT INTO wizards (hats, cloaks, spell_count) VALUES ('Wizard Hat', 'Robes', );
*not actually hilarious
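On that last point, one common way to avoid that kind of broken INSERT is a prepared statement, so an empty or FALSE value never gets spliced into the SQL text. A hedged sketch reusing the wizards example above (the $pdo connection and $spellCount variable are assumptions):

$stmt = $pdo->prepare('INSERT INTO wizards (hats, cloaks, spell_count) VALUES (?, ?, ?)');
$stmt->execute(array('Wizard Hat', 'Robes', (int) $spellCount));   // even a NULL/FALSE count becomes a valid 0 instead of breaking the SQL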
Start monitoring your SQL queries by starting the log. There you can see which queries are fired and any errors.
This tutorial to start the logger will help.
Depending on which API your PHP file uses (let's hope it's PDO ;) you could check for errors in your current transaction with something like
$naughtyPdoStatement->execute();
if ($naughtyPdoStatement->errorCode() != '00000') {
    DebuggerOfChoice::log(implode(' ', $naughtyPdoStatement->errorInfo()));
}
When using the legacy APIs there are equivalents like mysql_errno, mysql_error, pg_last_error, etc., which should enable you to do the same. DebuggerOfChoice::log of course can be whatever log function you'd like to utilise.
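An alternative sketch, if you prefer exceptions over checking errorCode() after every call:

$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);   // $pdo is the connection the statement came from
try {
    $naughtyPdoStatement->execute();
} catch (PDOException $e) {
    DebuggerOfChoice::log($e->getMessage());                     // same logger as above
}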
I have a working live search system that on the whole works very well. However it often runs into the problem that many versions of the search query on the server are running simultaneously, if users are typing faster than the results can be returned.
I am aborting the ajax request on receipt of a new one, but that of course does not affect the query already in progress on the server, and you end up with a severe bottleneck and a long wait to get your final results. I am using MySQL with MyISAM tables for this, and there does not seem to be any advantage in converting to InnoDB, as the result sets will be the same rows.
I tried using a session variable to make php wait if this session already has a query in progress but that seems to stop it working altogether.
The problem is solved if I make the ajax requests synchronous, but that would rather defeat the object here.
I was wondering if anyone had any suggestions as to how to make this work properly.
Best regards
John
Before doing anything more complicated, have you considered not sending the request until the user has stopped typing for at least a certain time interval (say, 1 second)? That should dramatically cut the number of requests being made with little effort on your part.
I'm using ajax, and quite often, if not all the time, the first request times out. In fact, if I delay for several minutes before making a new request, I always have this issue. But the subsequent requests are all OK. So I'm guessing that the first request used a database connection that is dead. I'm using MySQL.
Any good solution?
Can you clarify:
are you trying to make a persistent connection?
do basic MySQL queries work (e.g. SELECT 'hard-coded' FROM DUAL)
how long does the MySQL query take for your ajax call (e.g. if you run it from a mysql command-line or GUI client.)
how often do you write to the MySQL tables used in your AJAX query?
Answering those questions should help rule-out other problems that have nothing to do with making a persistent connection: basic database connectivity, table indexing / slow-running SQL, MySQL cache invalidation etc.
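If a persistent connection is what you're after, a minimal sketch (the DSN and credentials are placeholders):

$pdo = new PDO(
    'mysql:host=localhost;dbname=mydb',   // hypothetical DSN
    'user',
    'pass',
    array(PDO::ATTR_PERSISTENT => true)   // reuse the connection between requests instead of reconnecting
);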
Chances are that your problem is NOT opening the connection, but actually serving the request.
Subsequent calls are fast because of the MySQL query cache.
What you need to do is look for slow MySQL queries, for example by turning on the slow query log, or by looking at the server in real time using mytop or "SHOW PROCESSLIST" to see if there is a query that takes too long. If you find one, use EXPLAIN to make sure it's properly indexed.
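A rough sketch of switching the slow query log on and watching the process list from PHP (an existing PDO connection and a privileged account are assumed; the threshold is illustrative):

$pdo->exec("SET GLOBAL slow_query_log = 'ON'");
$pdo->exec("SET GLOBAL long_query_time = 0.5");           // log anything slower than half a second

// Or look at what is running right now:
foreach ($pdo->query('SHOW FULL PROCESSLIST') as $row) {
    echo $row['Time'] . 's  ' . $row['Info'] . PHP_EOL;
}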