mysqli_insert_id() is specific to the database connection -- it returns the ID of the row that this script invocation inserted most recently, not any other MySQL client. So there's no conflict if multiple applications are inserting into the database at the same time.
I am confused about two things. The first is
specific to the database connection
and the second is
MySQL client
Can anyone explain how MySQL connections work for different clients, or how these clients behave in an application? I'm sorry if it is a ridiculous question, but I am confused about why the value is not affected by other clients inserting data at the same time.
specific to the database connection
Suppose I run a PHP script that inserts a row, and then calls mysqli_insert_id(), and it tells me it generated value 1234.
Next, you also run a separate PHP script that inserts a row to the same table. We assume it generated value 1235.
Then my PHP script, still running in the same request as before, calls mysqli_insert_id() again. Does it report 1235? No -- it still reports 1234, because it will tell me only the most recent id generated during my session.
If my PHP request finishes, and then I make another PHP request that connects to MySQL, this counts as a new connection. On a new connection, mysqli_insert_id() returns 0 until I perform an insert.
MySQL client
In this context, a MySQL client is, for example, the PHP script that makes a connection to MySQL during one request.
A MySQL client can also be the command-line mysql tool, or a GUI interface like MySQL Workbench, or another application written in PHP or another language. The term "MySQL client" is meant to include all of these. Each client makes a connection to the MySQL database server, and each connection has its own idea of what the last insert id was.
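The behavior above can be sketched in a few lines. This is only an illustration: the host, credentials, and the `posts` table (with an AUTO_INCREMENT id) are placeholders, and it assumes a live MySQL server.

```php
<?php
// Two separate connections simulate two separate clients.
$mine  = new mysqli("localhost", "user", "pass", "mydb");
$yours = new mysqli("localhost", "user", "pass", "mydb");

$mine->query("INSERT INTO posts (title) VALUES ('first')");
echo $mine->insert_id;   // e.g. 1234

// A different client inserts in the meantime:
$yours->query("INSERT INTO posts (title) VALUES ('second')");
echo $yours->insert_id;  // e.g. 1235

// My connection still reports the id *it* generated last, not 1235:
echo $mine->insert_id;   // still 1234
```

Each connection tracks its own last insert id server-side, so no locking or coordination is needed between clients.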
Related
Let's say I'm using mysqli and plain PHP. I'm going to write pseudocode.
in config.php
$msql = new mysqli("hostname", "username", "password", "dbname"); // connects immediately; no separate connect() call needed
in app.php
include "config.php";
$result = $msql->query("SELECT * FROM posts");
foreach ($result as $r) {
    var_dump($r);
}
1) Question: Every time a user makes a request to mywebsite.com/app.php, is a new mysqli instance created and the old one destroyed? Or is there only one mysqli instance at all (one single connection to the database)?
Yes, each time your script runs it makes a new connection to the database and a new request to retrieve data.
Even if you don't close the connection at the end of your script, the mysqli connection is destroyed when the script ends, so you can't "trick" it into staying open or working in a cookie-like way.
That is what a script is supposed to do: connect to the database, do its job, leave the database (the connection dies).
On the other hand, if the same script runs two, three, or more queries, it's a different story: as mentioned above, the mysqli connection dies at the end of the script, meaning you make one connection, run all your script's queries over it, and exit afterwards.
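That per-request lifecycle can be sketched as follows. The credentials and table names here are placeholders for your own setup, and a live MySQL server is assumed.

```php
<?php
// One connection is opened per request and reused for every query in it.
$db = new mysqli("localhost", "user", "pass", "mydb");

$posts = $db->query("SELECT * FROM posts");      // query 1
$users = $db->query("SELECT * FROM users");      // query 2
$db->query("UPDATE stats SET hits = hits + 1");  // query 3

// Optional: PHP would close the connection at script end anyway.
$db->close();
```

The point is that the three queries share one connection, but nothing about that connection survives into the next request.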
Edit for answering comment:
Let's assume that I visit your page and a friend of mine visits at the same time. I connect to your database and request some data, and so does my friend. Here is the procedure:
We both trigger a backend script, one run for each of us. In each script an instance of mysqli is created, so we have two instances running at this time, but for two separate users.
That makes total sense; let me elaborate:
Think of a page where you book your holidays. If I want to see ticket prices for France and you want to see ticket prices for England, then the PHP script adds a WHERE clause for each of us, like:
->where('destination', 'France');
After that the data is sent to the frontend and I can see what I requested. Meanwhile my instance is dead, since I queried the database, got my result, and there is nothing more to be done.
The same happens with every user who joins at this time: he/she creates an instance, gets the data he/she wants, and lets the instance die.
Latest edit:
After reading your latest comment I figured out your issue. As I mentioned in my post, an instance that a user creates with mysqli cannot be shared with any other user. It is built to be unique.
What you can do, if you have that much traffic, is cache your data. You can use Redis, a database built specifically for that reason: to be queried a lot. You can also configure expiry so cached data is deleted after some time if you want.
I'm trying to migrate from an old site to a new site. The old site uses MySQL and the new site uses PostgreSQL. I wrote a migration script in PHP that queries data from the old DB so that I can insert it into the new DB within that same script. I need the script because I have to call other functions that manipulate the data, since the table columns aren't a one-to-one match, so I can't just do a backup-and-restore. I have a class for each of the two DBs that I use.
The MySQL queries work but the PostgreSQL ones don't. They fail with error messages saying pg_query(): 19 is not a valid PostgreSQL link resource in xxx
So is it possible to run them both in the same script? If I call the two scripts separately it works ok but I can't get the data from the old server to the new one.
I've looked everywhere and don't see many questions needing to use both DB's in one file.
Any help would be cool.
You are using the same variable for both resources, so you end up passing the MySQL resource to the PostgreSQL function.
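A minimal sketch of the fix: keep the two link resources in separate variables and always pass the matching one to each API. The hosts, credentials, and the `posts` table with its columns are placeholders, and live MySQL and PostgreSQL servers are assumed.

```php
<?php
// One variable per database link; never reuse one for the other.
$mysql = mysqli_connect("old-host", "user", "pass", "old_db");
$pgsql = pg_connect("host=new-host dbname=new_db user=user password=pass");

$result = mysqli_query($mysql, "SELECT id, title FROM posts");
while ($row = mysqli_fetch_assoc($result)) {
    // Transform the row as needed, then insert using the *PostgreSQL* link.
    pg_query_params($pgsql,
        "INSERT INTO posts (legacy_id, title) VALUES ($1, $2)",
        [$row['id'], $row['title']]);
}
```

Both extensions can coexist in one script without conflict; the error in the question comes purely from handing one extension the other's resource.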
I'm experiencing a very strange problem whereby my Solr index is not able to see a change just written to a MySQL database on another connection.
Here is the chain of events:
The user initiates an action on the website that causes a row to be added to a table in MySQL.
The row is added via mysql_query() (no transactions). If I query the database again from the same connection I can naturally see the change I just made.*
A call is immediately sent to a Solr instance via curl to tell it to do a partial update of its index using the Data Import Handler.
Solr connects to the MySQL database via a separate JDBC connection (same credentials and everything) and executes a query for all records updated since its last update.
At this point, however, the results returned to Solr do not include the last-added row, unless I insert a sleep() call immediately after making the change to the database and before sending the message to Solr.
*Note that if I actually do query the database at this point though, this takes enough time for the change to actually be picked up by Solr. The same occurs if I simply sleep(1) (for one second).
What I'm looking for is some reliable solution that can allow me to make sure the change will be seen by Solr before sending it the refresh message. According to all documentation I've found, however, the call to mysql_query() should already be atomic and synchronous and should not return control to PHP until the database has been updated. Therefore there doesn't appear to be any function I can call to force this.
Does anyone have any advice/ideas? I'm banging my head over this one.
Check what auto-commit is set to when inserting the record. Chances are the record just inserted is visible within the same database session but isn't committed yet. After this, some event causes the commit to occur, and only then can another thread/session "see" the record. Also check the transaction isolation level settings.
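One way to check this from PHP, sketched with mysqli (the question uses the old mysql_* API, and the table/column names here are placeholders): commit explicitly before pinging Solr, so the row is guaranteed visible to Solr's separate JDBC connection.

```php
<?php
// $db is an open mysqli connection (placeholder credentials).
$db = new mysqli("localhost", "user", "pass", "mydb");

$db->query("INSERT INTO articles (title) VALUES ('new post')");

// Inspect what the session is actually doing:
$row = $db->query("SELECT @@autocommit")->fetch_row();
if ($row[0] == 0) {
    // Autocommit is off: commit explicitly so other connections see the row.
    $db->commit();
}

// Only now is it safe to tell Solr to re-import.
```

If autocommit is on, the commit is unnecessary; if it's off, this is exactly the missing step that the sleep() call was papering over.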
I typically do not use the Data Import handler and would have the update in the website trigger a mechanism (either internal or external) to update the record into Solr using the appropriate Solr Client for the programming language being used. I have personally not had a lot of luck with the Data Import Handler in the past and as a result have preferred to use custom code for synchronizing Solr with the corresponding data storage platform.
I have an issue where an instance of Solr is querying my MySQL database to refresh its index immediately after an update is made to that database, but the Solr query is not seeing the change made immediately prior.
I imagine the problem has to be something like Solr is using a different database connection, and somehow the change is not being "committed" (I'm not using transactions, just a call to mysql_query) before the other connection can see it. If I throw a sufficiently long sleep() call in there, it works most of the time, but obviously this is not acceptable.
Is there a PHP or MySQL function that I can call to force a write/update/flush of the database before continuing?
You might make Solr use SET TRANSACTION ISOLATION LEVEL READ COMMITTED to get a more prompt view of updated data.
You should be able to do this with the transactionIsolation property of the JDBC URL.
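As a sketch of where that setting could live: with MySQL Connector/J, session variables such as the isolation level can be passed through the JDBC URL in the Data Import Handler's dataSource definition. The host, database, credentials, and exact driver/variable names below are assumptions; check your driver's documentation for the property it accepts.

```xml
<!-- Hypothetical DIH dataSource; placeholders throughout. -->
<dataSource type="JdbcDataSource"
    driver="com.mysql.jdbc.Driver"
    url="jdbc:mysql://localhost/mydb?sessionVariables=tx_isolation='READ-COMMITTED'"
    user="user" password="pass"/>
```

With READ COMMITTED, Solr's connection sees rows as soon as they are committed, rather than holding the snapshot that REPEATABLE READ (MySQL's default) would give it.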
I'm updating an old script that uses the PHP mysql functions to use mysqli, and I've already found an interesting question (mysqli_use_result() and concurrency), but it doesn't clarify one thing:
Let's say five users connect to a webpage at the same time, and user1 is the first to select data from a huge MyISAM table (15,000 forum post records with LEFT JOINs to the 'users' and 'attachments' tables).
While the PHP script retrieves the result, will the other users be unable to get results? Is that right?
Also, in the same situation: when user1 has fully received its result, an 'UPDATE ... SET view_count = view_count + 1' query is sent and the table is locked, I suppose. Will this same query fail for the other users?
About the article you've quoted: it just means that you should not do this:
mysqli_real_query($link, $query);
$result = mysqli_use_result($link);   // unbuffered result
// lots of 'client processing'
// table is blocked for updates during this
sleep(10);
while ($row = mysqli_fetch_assoc($result)) { /* ... */ }
In such situations you are advised to do this instead:
mysqli_real_query($link, $query);
$result = mysqli_store_result($link); // buffered result
// lots of 'client processing'
// table is NOT blocked for updates during this
sleep(10);
while ($row = mysqli_fetch_assoc($result)) { /* ... */ }
The article further says that if a second query is issued after calling mysqli_use_result() and before fetching the results, it will fail. This is meant per connection, per script. So other users' queries won't fail during this.
while the php scripts retrieves the result, the other users will not be able to get results, is that right?
No, this is not right. MySQL supports as many parallel connections as you have configured via max_connections in my.ini. Concurrent reads are handled by the MySQL server; client code does not have to worry about them unless the max connection limit is reached, in which case mysqli_connect() would fail. If your application reaches a point where this happens frequently, you'll in most cases first tweak your MySQL config so that it allows more parallel connections. If a hard threshold is reached, you'll turn to an approach like replication or MySQL Cluster.
Also using the same situation above, when user1 fully received it's result, an 'UPDATE set view_count = INC(1)' query is sent and the table is locked I suppose, and this same query will fail for the other users?
When there are concurrent reads and writes, this is of course a performance issue. But the MySQL server handles it for you, meaning the client code does not have to worry about it as long as connecting to MySQL works. Under really high load, you would typically use master-slave replication or MySQL Cluster.
and this same query will fail for the other users?
A database server is usually a bit more intelligent than a plain text file.
So your queries won't fail. They'd wait instead.
Though I wouldn't use mysqli_use_result() at all.
Why not just fetch your results already?
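The plain approach this answer recommends can be sketched in a few lines; the credentials and the `posts` table are placeholders, and a live MySQL server is assumed.

```php
<?php
// A default mysqli query is buffered (store_result behavior), so the
// table is freed for other clients as soon as the call returns.
$db = new mysqli("localhost", "user", "pass", "forum");
$result = $db->query("SELECT id, subject FROM posts");

foreach ($result as $row) {   // mysqli_result is traversable
    echo $row['id'], ": ", $row['subject'], "\n";
}
```

Unless a result set is too large to hold in PHP's memory, there is rarely a reason to reach for mysqli_use_result() at all.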