when using mysqli, does it make new connections every single time?

Let's say I'm using mysqli and pure PHP. I'm going to write pseudocode.
In config.php:
$mysqli = new mysqli("localhost", "username", "password", "dbname");
In app.php:
include "config.php";
$rows = $mysqli->query("SELECT * FROM posts");
foreach ($rows as $r) {
    var_dump($r);
}
1) Question is: every time a user makes a request or accesses mywebsite.com/app.php, is a new mysqli instance created and the old one destroyed? Or is there only one mysqli instance overall (one single connection to the database)?

Yes, each time your script runs it makes a new connection to the database and a new request to retrieve data.
Even though you don't close the connection at the end of your script, the mysqli connection is destroyed when the script ends, so you can't "trick" it into staying open or working in a cookie-like way, so to speak.
That is what a script is supposed to do: connect to the db, do its job, leave the db (the connection dies).
On the other hand, if the same script runs 2-3 or more queries, it's a different story: as I mentioned above, the mysqli connection dies at the end of the script, meaning you make 1 connection, run all of the script's queries over it, and exit afterwards.
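As a minimal sketch of that single-request lifecycle (credentials and table names are placeholders):
$db = new mysqli("localhost", "username", "password", "dbname"); // the connection opens here
$posts    = $db->query("SELECT * FROM posts");     // same connection...
$comments = $db->query("SELECT * FROM comments");  // ...reused for every query in this request
$db->close(); // explicit or not, the connection is gone once the script finishes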
Edit, to answer the comment:
Let's assume that I come to your page and a friend of mine arrives at the same time (let's assume that, as I said). I connect to your database and request some data, and so does my friend. Let's look at the procedure:
We both trigger a backend script to run, once for each of us. In this script an instance of mysqli is created, so we have 2 instances running at this time, but for two separate users.
And that makes total sense; let me elaborate on this:
Think of a page where you book your holidays. If I want to see ticket prices for France and you want to see ticket prices for England, then the PHP script that runs will add a where clause for each of us, like:
->where('destination', 'France');
After that the data is sent to the frontend and I am able to see what I requested. Meanwhile my instance is dead: I queried the database, got my result, and there is nothing more for it to do.
The same happens with every user who joins at this time. He/she will create an instance, get the data he/she wants, and let that instance die.
Latest edit:
After reading your latest comment I figured out what your issue was in the first place. As I mentioned in my post, a mysqli instance that one user created cannot be shared with any other user. It is built to be unique.
What you can do, if you have that much traffic, is cache your data. You can use Redis, a database built specifically for that reason, to be queried a lot, and you can configure caching on it so that it deletes the data after some time if you want.
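As a rough sketch of that caching idea, assuming the phpredis extension and an existing mysqli connection in $db (the key name and 60-second TTL are made up):
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$cached = $redis->get('posts:front-page');
if ($cached !== false) {
    $rows = json_decode($cached, true); // served from the cache, MySQL is not touched
} else {
    $rows = $db->query("SELECT * FROM posts")->fetch_all(MYSQLI_ASSOC);
    $redis->setex('posts:front-page', 60, json_encode($rows)); // expire after 60 seconds
}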

Related

FileMaker PHP API - why is the initial connection so slow?

I've just set up my first remote connection with FileMaker Server using the PHP API and something a bit strange is happening.
The first connection and response takes around 5 seconds, if I hit reload immediately afterwards, I get a response within 0.5 second.
I can get a quick response for around 60 seconds or so (I haven't timed it yet, but it seems like at least a minute and less than 5 minutes), and then it goes back to taking 5 seconds to get a response (after that it's quick again).
Is there any way of ensuring that it's always a quick response?
I can't give you an exact answer on where the speed difference may be coming from, but I'd agree with NATH's notion on caching. It's likely due to how FileMaker Server handles caching the results on the server side and when it clears that cache out.
In addition to that, a couple of things that are helpful to know when using custom web publishing with FileMaker when it comes to speed:
The fields on your layout will determine how much data is pulled
When you perform a find in the PHP API on a specific layout, e.g.:
$request = $fm->newFindCommand('myLayout');
$request->addFindCriterion('name', $myname);
$result = $request->execute();
What's being returned is data from all of the fields available on the myLayout layout.
In sql terms, the above query is equivalent to:
SELECT * FROM myLayout WHERE `name` = ?; -- with the $myname variable bound to ?
The FileMaker find will return every field/column available on the layout. You designate the returned columns by placing the fields you want on the layout. To get a true SELECT * from your table, you would include every field from the table on your layout.
All of that said, you can speed up your requests by only including fields on the layout that you want returned in the queries. If you only need data from 3 fields returned to your php to get the job done, only include those 3 fields on the layout the requests use.
Once you have the records, hold on to them so you can edit them
Taking the example from above, if you know you need to make changes to those records somewhere down the line in your PHP, store the records in a variable and use the setField and commit methods to edit them, e.g.:
$request = $fm->newFindCommand('myLayout');
$request->addFindCriterion('name', $myname);
$result = $request->execute();
$records = $result->getRecords();
...
// say we want to update a flag on each of the records down the line in our php code
foreach($records as $record){
$record->setField('active', true);
$record->commit();
}
Since you have the records already, you can act on them and commit them when needed.
I say this as opposed to grabbing them once for one purpose and then grabbing them again from the database later to make updates to the records.
It's not really an answer to your original question, but since FileMaker's API is a bit different from others and it doesn't have the greatest documentation, I thought I'd mention it.
There are some delays that you can remove.
Ensure that the layouts you are accessing via PHP are very simple, no unnecessary or slow calculations, few layout objects etc. When the PHP engine first accesses that layout it needs to load it up.
Also check for layout and file script triggers that may be run; IIRC the OnFirstWindowOpen script trigger is called when a connection is made.
I don't think that it's related to caching. Also, it's the same when accessing via XML. Haven't tested ODBC, but am assuming that it is an issue with this too.
Once the connection is established between FileMaker Server and your machine, FileMaker Server keeps this connection alive for about 3 minutes. You can see the connection in the client list in the FM Server Admin Console. The initial connection takes a few seconds to set up (depending on how many others are connected), and then ANY further queries are lightning fast. If you run your app again, it'll reuse that connection and give results in very little time.
You can do completely different queries (on different tables) in a different application, but as long as you execute the second one on the same machine and use the same credentials, FileMaker Server will reuse the existing connection and provide results instantly. This means that it is not due to caching, but it's just the time that it takes FMServer to initially establish a connection.
In our case, we're using a web server to make FileMaker PHP API calls. We have set up a cron every 2 minutes to keep that connection alive, which has pretty much eliminated all delays.
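A minimal sketch of such a keep-alive script, run from cron every couple of minutes (database name, host and credentials are placeholders; it assumes the standard FileMaker PHP API class):
require_once 'FileMaker.php';
// Any cheap authenticated call keeps the FileMaker Server connection warm,
// so real requests don't pay the multi-second connection cost.
$fm = new FileMaker('MyDatabase', 'fms.example.com', 'web_user', 'secret');
$fm->listLayouts();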
This is probably way late to answer this, but I'm posting here in case anyone else sees this.
I've seen this happen when using external authentication with FileMaker Server. The first query establishes a connection to Active Directory, which takes some time, and then subsequent queries are fast as FMS has got the authentication figured out. If you can, use local authentication in your FileMaker file for your PHP access and make sure it sits above any external authentication in your accounts list. FileMaker runs through the auth list from top to bottom, so this will make sure that FMS successfully authenticates your web query before it gets to attempt an external authentication request, making the authentication process very fast.

Passing database object in PHP functions

Not sure if appropriate but here goes.
I have built a small system for online reservation of slots during a day.
Because I use a database and connect to it all the time to run some queries, I created a simple class that creates the connection (using PDO) and runs queries (preparing them, running them, and if an error happens it handles and logs it, etc).
I also use AJAX a lot, so basically when a user wants to register, log in, get the schedule for the day, book a slot, cancel a booking and so on, I use AJAX to load a script that goes through the procedure for each action (I will call this the AJAX script), and in that script I include the relevant script that contains all the functions needed (I will call this the function script). The idea is that the AJAX script just gets the parameters, calls a list of functions and, based on the results, returns some kind of response. The function script contains all the code that builds the queries, gets the database data, makes any checks, creates new objects if needed, etc.
The way I was doing it is that at the start of the AJAX script I create my database class instance and then just pass it through to the functions as needed (mostly because I started with all the code in the AJAX script and then moved to creating the separate functions in the second script, leaving just the minimum code needed to guide the action).
So my question is: is it a good/better practice to remove the database class instance altogether from the AJAX script and instead include the database class script in the function script and instantiate it inside each function? I am wondering about the idea of creating connections along with each function and then destroying them (most of the functions are small and usually have one query or two; there are some that have a lot, in which I use transactions).
I have read about using a singleton as well, but I am not sure how it would work on the web. My understanding is that if there are 2 users logged in to the site and both try to fetch the schedule, then the script is called once for each user, making a different connection, even if the parameters of the connection are the same (I have a guest_user with select/insert/update privileges in my database). So even if I had a singleton, I would still have two separate connections in the above scenario, right? However, the difference is that as I have it now I would have two connections, each open for about 1 second, but with the change I am asking about I would have, say, 10 connections per user (10 functions called) for 100 ms each. Is this good or bad? Can it cause problems (if I extrapolate this to the real world, with say 1000 users, usually 20-40 at the same time on the site)?
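To make the comparison concrete, here is a simplified sketch of the current approach, with one connection created per request and passed into the functions (the Database class, its select() method and the function names are placeholders, not the actual code):
// ajax_schedule.php: one connection per request, shared by every function it calls
require 'Database.php';   // the PDO wrapper class described above (hypothetical name)
require 'functions.php';
$db = new Database();     // single connection for this whole request
echo json_encode(getDaySchedule($db, $_GET['date']));
// functions.php
function getDaySchedule(Database $db, $date)
{
    // reuses the caller's connection instead of opening a new one per function
    return $db->select("SELECT * FROM slots WHERE day = ? ORDER BY start_time", array($date));
}
Instantiating the class inside each function would simply repeat the connect/disconnect cost once per function call instead of once per request.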
What about security: can these connections be used to steal the data being exchanged? (Okay, this is far-fetched and not really an issue; the data are relatively harmless, other than phone numbers, but...)

Concurrent database access

When the web server receives a request for my PHP script, I presume the server creates a dedicated process to run the script. If another request for the same script arrives before the first one exits, does another process get started, or will the second request be queued in the server, waiting for the first request to exit? (Question 1)
If the former is correct, i.e. the same script can run simultaneously in a different process, then they will try to access my database.
When I connect to the database in the script:
$DB = mysqli_connect("localhost", ...);
query it, conduct more or less lengthy calculations and update it, I don't want the contents of the database to be modified by another instance of a running script.
Question 2: Does it mean that from the moment I connect to the database until I close it:
mysqli_close($DB);
the database is blocked for any access from other software components? If so, it effectively prevents the script instances from running concurrently.
UPDATE: #OllieJones kindly explained that the database was not blocked.
Let's consider the following scenario. The script in the first process discovers an eligible user in the Users table and starts preparing data to append for that user in the Counter table. At this moment the script in the other process preempts it and deletes the user from the Users table along with the associated data in the Counter table; it is then preempted by the first script, which writes the data for a user that no longer exists. That data becomes orphaned, i.e. inaccessible.
How to prevent such a contention?
In modern web servers, there's a pool of processes (or possibly threads) handling requests from users. Concurrent requests to the same script can run concurrently. Each request-handler has its own connection to the DBMS (they're actually maintained in a pool, but that's a story for another day).
The database is not blocked while individual request-handlers are using it, unless you block it explicitly by locking a table or doing a request like SELECT ... FOR UPDATE. For more information on this deep topic, read about transactions.
Therefore, it's important to write your database queries in such a way that they won't interfere with each other. For example, if you need to learn the value of an auto-incremented column right after you insert a row, you should use LAST_INSERT_ID() or mysqli_insert_id() instead of trying to query the data base: another user may have inserted another row in the meantime.
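For example, a minimal sketch of the insert-id case (table and column names are invented):
$db = new mysqli("localhost", "user", "pass", "mydb");
$userId = 42;
$total  = 19.99;
$stmt = $db->prepare("INSERT INTO orders (user_id, total) VALUES (?, ?)");
$stmt->bind_param("id", $userId, $total);
$stmt->execute();
// insert_id (equivalent to mysqli_insert_id($db)) is tied to this connection,
// so a concurrent INSERT by another user cannot change the value read here.
$orderId = $db->insert_id;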
The system test discipline for scaled-up web sites usually involves a rigorous load test in order to shake out all this concurrency.
If you're doing a bunch of work on a particular entity, in your case a User, you use a transaction.
First you do
BEGIN
to start the transaction. Then you do
SELECT whatever FROM User WHERE user_id = <<whatever>> FOR UPDATE
to choose the user and mark that user's row as busy-being-updated. Then you do all the work you need to do to fill out various rows in various tables relating to that user.
Finally you do
COMMIT
If you messed things up, or don't want to go through with the change, you do
ROLLBACK
and all your changes will be restored to their state right before the SELECT ... FOR UPDATE.
Why does this work? Because if another client does the same SELECT ... FOR UPDATE, MySQL will delay that request until the first transaction either commits or rolls back.
If another client works with a different userid, the operations may proceed concurrently.
You need the InnoDB access method to use transactions: MyISAM doesn't support them.
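A rough sketch of that flow from PHP with mysqli (table and column names are invented for illustration):
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); // make mysqli throw exceptions on errors
$db = new mysqli("localhost", "user", "pass", "mydb");
$userId = 123;
$db->begin_transaction();
try {
    // Lock this user's row until COMMIT or ROLLBACK.
    $stmt = $db->prepare("SELECT user_id FROM Users WHERE user_id = ? FOR UPDATE");
    $stmt->bind_param("i", $userId);
    $stmt->execute();
    // ... all the related work on other tables goes here ...
    $ins = $db->prepare("INSERT INTO Counter (user_id, value) VALUES (?, 1)");
    $ins->bind_param("i", $userId);
    $ins->execute();
    $db->commit();   // release the lock and make every change visible at once
} catch (Throwable $e) {
    $db->rollback(); // undo everything since begin_transaction()
    throw $e;
}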
With table-level locking (as in MyISAM), multiple reads can be done concurrently; a write operation will block all other operations, and a read will block all writes.
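If you are on MyISAM and want that table-level behaviour explicitly rather than implicitly, the equivalent with LOCK TABLES looks roughly like this (assuming a mysqli connection in $db and an illustrative table name):
$db->query("LOCK TABLES Users WRITE"); // blocks every other read and write on Users
// ... do the work ...
$db->query("UNLOCK TABLES");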

Force MySQL to write back

I have an issue where an instance of Solr is querying my MySQL database to refresh its index immediately after an update is made to that database, but the Solr query is not seeing the change made immediately prior.
I imagine the problem has to be something like Solr is using a different database connection, and somehow the change is not being "committed" (I'm not using transactions, just a call to mysql_query) before the other connection can see it. If I throw a sufficiently long sleep() call in there, it works most of the time, but obviously this is not acceptable.
Is there a PHP or MySQL function that I can call to force a write/update/flush of the database before continuing?
You might make Solr use SET TRANSACTION ISOLATION LEVEL READ COMMITTED to get a more prompt view of updated data.
You should be able to do this with the transactionIsolation property of the JDBC URL.

Live update notification on database changes MYSQL PHP

I was wondering how to trigger a notification if a new record is inserted into a database, using PHP and MySQL.
You can create a trigger that runs when an update happens. It's possible to run/notify an external process using a UDF (user defined function). There aren't any built-in methods of doing so, so it's a case of loading a UDF plugin that'll do it for you.
Google for 'mysql udf sys_exec' or 'mysql udf ipc'.
The simplest thing is probably to poll the DB every few seconds and see if new records have been inserted. Thanks to query caching in the DB, this shouldn't affect DB performance substantially.
MySQL does now have triggers and stored procedures, but I don't believe they have any way of notifying an external process, so as far as I know it's not possible. You'd have to poll the database every second or so to look for new records.
Even if it were, this assumes that your PHP process is long-lived, so that it can afford to hang around waiting for a record to appear. Given that most PHP is used for web sites, where the code runs and then exits as quickly as possible, it's unclear whether that's compatible with what you have.
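A bare-bones sketch of that polling approach (table, columns and the notification are placeholders; mysqli is used here instead of the old mysql_* functions):
$db = new mysqli("localhost", "user", "pass", "mydb");
$lastId = is_file('last_id.txt') ? (int) file_get_contents('last_id.txt') : 0; // where the previous run stopped
$stmt = $db->prepare("SELECT id, title FROM posts WHERE id > ? ORDER BY id");
$stmt->bind_param("i", $lastId);
$stmt->execute();
$result = $stmt->get_result();
while ($row = $result->fetch_assoc()) {
    // a new record appeared: notify however suits you (mail, push, queue, ...)
    mail('admin@example.com', 'New post', 'Post #' . $row['id'] . ' was inserted');
    $lastId = $row['id'];
}
file_put_contents('last_id.txt', $lastId);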
If all your database changes are made by PHP, I would create a wrapper function for mysql_query, and if the query type was INSERT, REPLACE, UPDATE or DELETE I would call a function to send the respective email (a rough sketch follows below).
EDIT: I forgot to mention but you could also do something like the following:
if (mysql_affected_rows($this->connection) > 0)
{
// mail(...)
}
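A sketch of that wrapper idea, written against the legacy mysql_* extension this answer assumes (swap in mysqli or PDO on modern PHP; the notification address is a placeholder):
function query_and_notify($sql)
{
    $result = mysql_query($sql);
    // Notify only for data-changing statements that actually affected rows.
    if (preg_match('/^\s*(INSERT|REPLACE|UPDATE|DELETE)\b/i', $sql)
        && mysql_affected_rows() > 0) {
        mail('admin@example.com', 'Database changed', $sql);
    }
    return $result;
}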
One day I asked in the MySQL forum whether events like those in Firebird or InterBase exist in MySQL, and I saw someone answer yes (I'm really not sure).
Check this: http://forums.mysql.com/read.php?84,3629,175177#msg-175177
This can be done relatively easily using stored procedures and triggers. I have created a 'Live View' screen with a scrolling display that is updated with new events from my events table. It can be a bit fiddly, but once it's running it's quick.
