I'm troubleshooting a bug and trying to rule out all possible explanations for why I'm witnessing the behavior that I am. I'm executing a number of MySQL queries in PHP (via CodeIgniter's Active Record class) and one explanation for the behavior that I'm seeing is that the queries aren't being executed synchronously, i.e. that PHP isn't waiting for the query to complete before issuing the next one.
I've always coded under the assumptions that if I insert something into a MySQL table via PHP, and then my next line of code executes a select, the results of my insertion will be available in the next statement. Are there any exceptions to this being the case?
Thanks for helping me preserve my sanity...
If you SELECT on the same server, using the same session/connection, and you haven't used INSERT DELAYED, the row should indeed exist. But load-balanced MySQL servers / implementations / caching may divert SELECTs to other servers or data locations...
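For example, with a single PDO connection the calls are synchronous, so the very next statement sees the insert. (A minimal sketch; the table name and credentials are made up, and PDO stands in for CodeIgniter's Active Record here.)

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->exec("INSERT INTO widgets (name) VALUES ('sprocket')");
// This runs only after the INSERT has completed on the server:
$count = $pdo->query("SELECT COUNT(*) FROM widgets WHERE name = 'sprocket'")->fetchColumn();
echo $count; // includes the row just inserted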
I have a very strange problem, that I cannot get my head around.
I am using Laravel for my backend application, where I am running a very simple query on a table with 30k records, all with proper indexes on it.
Here is the query:
DB::select('select * from Orders where ClientId = ?', [$id])
From the Laravel application this query runs for 1.2 seconds (the same happens if I use an Eloquent model):
"query" => "select * from Orders where ClientId = ?"
"bindings" => array:1 [▼
0 => "44087"
]
"time" => 1015.2
The problem is, if I run THE SAME query inside the database console or PHPMyAdmin, the query takes approximately 20 milliseconds.
I do not understand how that is possible, since I am using the same database, the same query, the same computer, and the same connection to the database.
What can be the reason?
PHPMyAdmin will automatically add LIMIT for you.
This is because PHPMyAdmin will always by default paginate your query.
In your Laravel/Eloquent query you are loading all 30k records in one go, which is bound to take time.
To remedy this, try paginating or chunking your query.
The total will take long, yes, but the chunks themselves will be very quick.
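A chunked version might look something like this (a sketch using the query builder; the ordering column 'Id' is an assumption, since chunk() requires an orderBy):

DB::table('Orders')
    ->where('ClientId', $id)
    ->orderBy('Id') // chunk() needs a stable ordering; column name assumed
    ->chunk(500, function ($orders) {
        foreach ($orders as $order) {
            // process one order at a time
        }
    });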
I would try debugging the queries with the Debug Bar to see how much time each one takes and which is the slowest. It's very easy to install and use: https://github.com/barryvdh/laravel-debugbar
If you are interested in DB administration, read this as well; you can get some ideas from it. Good luck.
There are several issues here. The first is how Laravel works. Laravel only loads the services and classes that are actually used during your script. This is done to conserve resources, since PHP is meant to be run as a CGI script rather than a long-running process. As a result, your timing might include the connection-setup step, not just the query execution. For a more reliable result, execute any query before timing your simple query.
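For instance, something like this should exclude the connection setup from the measurement (a sketch; microtime-based timing is just one way to do it):

DB::select('select 1'); // forces the connection to be established first
$start = microtime(true);
$orders = DB::select('select * from Orders where ClientId = ?', [$id]);
$elapsedMs = (microtime(true) - $start) * 1000; // comparable to the "time" value in the debug output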
There's another side to that behavior. In a long-running process, such as a job runner, you ought not to change service parameters, as this can cause undesired behavior and let your parameter changes spill over into other jobs. For example, if you provide an SMTP-login feature, you ought to reset the email sender's credentials after sending the email; otherwise you will run into an issue where a user who doesn't use that feature sends an email as another user who does. This comes from assuming that services are reloaded every time a job is executed, which is the behavior when running the HTTP part.
Second, as some other posters pointed out, you're not using LIMIT.
I'm almost sure this is due to PHPMyAdmin applying a LIMIT, which matches what you are seeing in the page output.
If you look at the top of the PHPMyAdmin page, you'll see something like this:
Showing rows 0 - 24 (314 total, Query took 0.0009 seconds.)
You should have the same performance when you add the limit to your query.
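For a like-for-like comparison, you could add the same limit yourself (a sketch; 25 matches PHPMyAdmin's default page size):

$rows = DB::select('select * from Orders where ClientId = ? limit 25', [$id]);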
How to enable MySQL Query Log?
Run query through phpmyadmin.
See which queries you actually have in MySQL.
Run app.
See which queries you actually have in MySQL.
Tell us which extra queries showed up; those are what slow it down.
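If you have the SUPER privilege, one quick way to switch the log on from Laravel itself is something like this (a sketch; logging to a table so you can query it back):

DB::statement("SET GLOBAL general_log = 'ON'");
DB::statement("SET GLOBAL log_output = 'TABLE'");
// Later, inspect what the server actually received:
$entries = DB::select('SELECT event_time, argument FROM mysql.general_log ORDER BY event_time DESC LIMIT 50');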
The query should run at the same speed in phpmyadmin as in any application. Try using an EXPLAIN statement to see more details about the query.
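For example (a sketch; the interesting columns are key and rows):

$plan = DB::select('EXPLAIN select * from Orders where ClientId = ?', [$id]);
// key = NULL and rows ~ 30000 would mean the index on ClientId isn't being used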
The cause of this discrepancy may be one of many reasons other than MySQL, for example:
The PHP script itself has functions that cause slow loading.
Try checking the server's error.log; maybe there are errors in those functions.
Basically, phpmyadmin could use a different MySQL connection function than Laravel. Try checking the extension used for the connection; maybe it's not compatible with the PHP version you use, and I think this is the cause of the slow query.
I have noticed this in some apps I have made, and the cause was always in the PHP functions or in the connection. For example, mysql_connect was much faster than the PDO extension on PHP < 5.6 in my experience, but the cause was always in the PHP functions in the script.
This seems like a pretty basic question but one I don't know the answer to.
I wrote a script in PHP that loops through some data and then performs an UPDATE on records in our database. There are roughly 150,000 records, so the script certainly takes a while to complete.
Could I potentially harm or interfere with the data insertion if I run a basic SELECT statement?
Say...I want to ensure that the script is working properly, so I run a basic SELECT COUNT() to see if the count increases in real time as the script runs. Is this possible, or would it screw something up?
Thank you!
Generally a SELECT call is incapable of "causing harm" provided you're not talking about SQL injection problems.
The InnoDB engine, which you should be using, has what's called Multi-Version Concurrency Control, or MVCC for short. It means that until your UPDATE statement (or the transaction it is part of) is finished, the SELECT will be run against the last consistent database state.
If you're using MyISAM, which is a very bad idea in most production environments due to the limitations of that engine and the way its data is stored without a rollback journal, the SELECT call will probably block until the UPDATE is applied, since MyISAM does not support MVCC.
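So on InnoDB, a monitoring query from a second connection is harmless; something like this (a sketch; table and column names are assumptions):

// Run from a separate connection/terminal while the UPDATE script is working:
$count = $pdo->query('SELECT COUNT(*) FROM records WHERE processed = 1')->fetchColumn();
echo $count; // shows only committed changes and never blocks the UPDATE on InnoDB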
I'm using mysql_insert_id within my code to get an auto-increment value.
I have read around and it looks like there is no race condition regarding this for different user connections, but what about the same user? Am I likely to run into race-condition problems when connecting to the database as the same username/user but from different connection sessions?
My application is in PHP. When a user submits a web request, my PHP executes code, and for that particular request/connection session I keep a persistent SQL connection open to MySQL for the length of that request. Will this cause me any race-condition problems?
None for any practical purpose. If you execute the last-id request right after executing your insert, there is practically not enough time for another insert to spoil it. Theoretically it might be possible, though.
According to the PHP Manual:
Note:
Because mysql_insert_id() acts on the last performed query, be sure to
call mysql_insert_id() immediately after the query that generates the
value.
Just in case you want to double-check, you can use this function to confirm your previous query:
mysql_info
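A minimal sketch with the legacy API (the mysql_* functions were removed in PHP 7; $link is an open connection, and note that mysql_info only reports details for multi-row inserts and similar statements):

mysql_query("INSERT INTO images (path) VALUES ('/img/a.png'), ('/img/b.png')", $link);
$firstId = mysql_insert_id($link); // id of the FIRST row of a multi-row insert
echo mysql_info($link); // e.g. "Records: 2  Duplicates: 0  Warnings: 0"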
The use of persistent connections doesn't mean that every request will use the same connection. It means that each apache thread will have its own connection that is shared between all requests executing on that thread.
The requests will run serially (one after another) which means that the same persistent connection will not be used by two threads running at the same time.
Because of this, your last_insert_id value will be safe, but be sure to check the result of your inserts before using it, because it returns the last_insert_id of the last successful INSERT, even if that wasn't the last executed INSERT.
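In other words, check before you trust the id, along these lines (a sketch; the legacy mysql_* API is assumed to match the question):

$ok = mysql_query("INSERT INTO tags (name) VALUES ('sunset')");
if ($ok === false) {
    die('Insert failed: ' . mysql_error());
}
$tagId = mysql_insert_id(); // safe: same connection, and the INSERT is known to have succeeded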
I'm not sure if this is a duplicate of another question, but I have a small PHP file that calls some SQL INSERT and DELETE statements for an image-tagging system. Most of the time both the insertions and the deletes work, but on some occasions the insertions don't.
Is there a way to view why the SQL statements failed to execute, similar to how SQL functions in Python or Java tell you why a statement failed (for example: duplicate key insertion, unterminated quote, etc.)?
There are two things I can think of off the top of my head, and one thing that I stole from amitchhajer:
pg_last_error will tell you the last error in your session. This is awesome for obvious reasons, and you're going to want to log the error to a text file on disk in case the issue is something like the DB going down. If you try to store the error in the DB, you might have some HILARIOUS* hi-jinks in the process of figuring out why.
Log every query to this text file, even the successful ones (see the sketch after this list). Find out whether the issue affects identical operations (an issue with your DB or connection, again) or certain queries every time (an issue with your app).
If you have access to the guts of your server (or your shared hosting is good,) enable and examine the database's query log. This won't help if there's a network issue between the app and server, though.
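Here's what that logging could look like (a sketch; the wrapper name and log path are made up, and it assumes PDO):

function run_logged_query(PDO $pdo, $sql, array $params = array()) {
    file_put_contents('/tmp/queries.log', date('c') . ' ' . $sql . "\n", FILE_APPEND);
    try {
        $stmt = $pdo->prepare($sql);
        $stmt->execute($params);
        return $stmt;
    } catch (PDOException $e) {
        // Log failures to disk, NOT to the DB, per the point above
        file_put_contents('/tmp/queries.log', date('c') . ' ERROR ' . $e->getMessage() . "\n", FILE_APPEND);
        throw $e;
    }
}

Note that PDO only throws PDOException if its error mode is set to PDO::ERRMODE_EXCEPTION.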
But if I had to guess, I would imagine that when the app fails it's getting weird input. Nine times out of ten the input isn't getting escaped properly or - since you're using PHP, which murders variables as a matter of routine during type conversions - it's being set to FALSE or NULL or something and the system is generating a broken query like INSERT INTO wizards (hats, cloaks, spell_count) VALUES ('Wizard Hat', 'Robes', );
*not actually hilarious
Start monitoring your SQL queries by turning on the query log. There you can see all the queries that are fired and any errors.
This tutorial on starting the logger will help.
Depending on which API your PHP file uses (let's hope it's PDO ;), you could check for errors in your current transaction with something like:
$naughtyPdoStatement->execute();
if ($naughtyPdoStatement->errorCode() != '00000') {
    DebuggerOfChoice::log(implode(' ', $naughtyPdoStatement->errorInfo()));
}
When using the legacy APIs, there are equivalents like mysql_errno, mysql_error, pg_last_error, etc., which should enable you to do the same. DebuggerOfChoice::log can of course be whatever log function you'd like to utilise.
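A common alternative is to make PDO throw on errors, so a failed INSERT can't slip by silently (a sketch; the DSN and credentials are placeholders, and $imageId / $tag stand in for your form input):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
try {
    $pdo->prepare('INSERT INTO tags (image_id, tag) VALUES (?, ?)')
        ->execute(array($imageId, $tag));
} catch (PDOException $e) {
    error_log($e->getMessage()); // e.g. duplicate key, unterminated quote, ...
}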
I have a PHP script which starts another PHP script multiple times in a foreach loop. The other PHP script writes data to the same database table.
Will this cause any problems, given that there will be around 30 processes writing to the same database table?
Or is this automatically handled by MySQL?
Thanks you!
Bye,
WorldSignia
It depends on what you are writing. INSERT can be used simultaneously. UPDATE ... WHERE ... might lead to conflicts.
Imagine you are executing UPDATE ... WHERE id=2 from two scripts at once. One might overwrite the other. You need to implement some locking facility.
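One way to serialize the two writers is a row lock via SELECT ... FOR UPDATE (a sketch; it assumes PDO, InnoDB, and made-up table/column names):

$pdo->beginTransaction();
$stmt = $pdo->prepare('SELECT counter FROM jobs WHERE id = ? FOR UPDATE'); // the second script blocks here
$stmt->execute(array(2));
$counter = $stmt->fetchColumn();
$pdo->prepare('UPDATE jobs SET counter = ? WHERE id = ?')->execute(array($counter + 1, 2));
$pdo->commit(); // releases the row lock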
You should be fine until two processes attempt to modify/retrieve the same row(s). If you suspect you might run into such problems, you may take a look at MySQL transactions (you need MySQL server 5 or later): http://dev.mysql.com/doc/refman/5.0/en/commit.html