Passing database object in PHP functions

Not sure if appropriate but here goes.
I have built a small system for online reservations of slots during a day.
Because I use a database and connect to it all the time to run queries, I created a simple class that creates the connection (using PDO) and runs queries (preparing them, executing them, managing and logging any errors, etc.).
I also use AJAX a lot. Basically, when a user wants to register, log in, get the schedule of the day, book a slot, cancel a booking and so on, I use AJAX to load a script that goes through the procedure for each action (I will call this the AJAX script), and in that script I include the relevant script that contains all the functions needed (I will call this the function script). The idea is that the AJAX script just gets the parameters and calls a list of functions which, based on the results, return some kind of response. The function script contains all the code that builds the queries, gets the database data, makes any checks, creates new objects if needed, etc.
The way I was doing it was that at the start of the AJAX script I create my database class instance and then just pass it through to the functions as needed (mostly because I started with all the code in the AJAX script and then moved to creating the separate functions in the second script, leaving just the minimum code needed to guide the action)...
So my question is: is it a good/better practice to remove the database class instance altogether from the AJAX script, and instead include the database class script in the function script and just instantiate it inside each function? I am wondering about the idea of creating connections along with each function and then destroying them (most of the functions are small and usually have one query or two; there are some that have a lot, in which I use transactions).
I have read about using a singleton as well, but I am not sure how it would work on the web. My understanding is that if there are 2 users logged in to the site and both try to fetch the schedule, then the script is called once for each user, making a different connection, even if the parameters of the connection are the same (I have a guest_user with select/insert/update privileges in my database). So even if I had a singleton, I would still have two separate connections in the above scenario, right? However, the difference is that as I have it now I would have two connections, each open for 1 second, but with the change I ask about I would have, let's say, 10 per user (10 functions called) for 100 ms each time... Is this good or bad? Can it cause problems (if I extrapolate this to the real world, with say 1000 users, usually 20-40 at the same time on the site)?
What about security? Can these connections be used to steal the data that are exchanged? (Okay, this is far-fetched and not really an issue; the data are relatively harmless, other than phone numbers, but...)
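For reference, a minimal sketch of the pattern described above, with one connection per request passed into the functions. The Database wrapper and its run() method are hypothetical stand-ins for the class described in the question:

// functions.php (the "function script")
function getDaySchedule($db, $date) {
    // the wrapper prepares/executes the query and handles/logs errors
    return $db->run('SELECT * FROM slots WHERE day = ?', array($date));
}

// ajax_schedule.php (the "AJAX script")
require 'database.php';
require 'functions.php';

$db = new Database();                 // one PDO connection for the whole request
$schedule = getDaySchedule($db, $_GET['date']);
echo json_encode($schedule);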

Related

when using mysqli, does it make new connections every single time?

Let's say I'm using mysqli and pure PHP. I'm going to write pseudocode.
in config.php
// mysqli's constructor takes host, username, password, dbname
// and connects immediately, so no separate connect() call is needed
$mysqli = new mysqli("localhost", "username", "password", "dbname");
in app.php
include "config.php";
$result = $mysqli->query("SELECT * FROM posts");
foreach ($result as $row) {
    var_dump($row);
}
1) Question is: every time a user makes a request or accesses the webpage at mywebsite.com/app.php, does a new mysqli instance get created and the old one get destroyed? Or is there only one mysqli instance at all (one single connection to the database)?
Yes, each time your script runs it will make a new connection to the database and a new request to retrieve data.
Even though you don't close the connection at the end of your script, the mysqli connection is destroyed when the script ends, so you can't "trick" it into staying open or working in a cookie-like way, let's say.
I mean, that's what a script is supposed to do: connect to the db, do its job, leave the db (the connection dies).
On the other hand, if in the same script you have 2-3 or more queries, then it's another story, because, as I mentioned above, the mysqli connection dies at the end of the script, meaning that you make 1 connection, run all your script's queries and exit after.
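As a rough sketch of that one-connection-per-request flow (table names are made up; fetch_all assumes the usual mysqlnd driver):

// one connection, several queries, all within a single request
$mysqli = new mysqli("localhost", "username", "password", "dbname");
$posts    = $mysqli->query("SELECT id, title FROM posts")->fetch_all(MYSQLI_ASSOC);
$comments = $mysqli->query("SELECT id, body FROM comments")->fetch_all(MYSQLI_ASSOC);
// ... build the page from $posts and $comments ...
$mysqli->close();   // optional: the connection dies anyway when the script ends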
Edit to answer a comment:
Let's assume that I come to your page and a friend of mine comes at the same time. I connect to your database and request some data, and so does my friend. Let's see the procedure:
We both trigger a backend script to run for each of us. In this script an instance of mysqli is created, so we have 2 instances running at this time, but for two separate users.
And that makes total sense; let me elaborate on this:
Think of a page where you book your holidays. If I want to see ticket prices for France and you want to see ticket prices for England, then in the PHP script that runs, a where clause is put in place for each of us, like:
->where('destination','France');
After that the data is sent to the frontend and I am able to see what I requested. Meanwhile my instance is dead, as I queried the database, got my result and there is nothing more to be done.
The same happens with every user who joins at this time: each will create an instance, get the data they want and let the instance die.
Latest edit:
After reading your latest comment I figured out what your issue was in the first place. So, as I mentioned in my post, an instance that a user created in mysqli cannot be shared with any other user. It is built to be unique.
What you can do if you have that much traffic is cache your data. You can use Redis, a database that is built specifically for that reason, to be queried a lot; you can also set caching on it so that it deletes the data after some time if you want.
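For example, a minimal cache-aside sketch, assuming the phpredis extension (the key name and TTL are arbitrary):

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$cached = $redis->get('posts:front-page');
if ($cached !== false) {
    $posts = json_decode($cached, true);   // cache hit: MySQL is not touched at all
} else {
    $mysqli = new mysqli("localhost", "username", "password", "dbname");
    $posts  = $mysqli->query("SELECT id, title FROM posts")->fetch_all(MYSQLI_ASSOC);
    $redis->setex('posts:front-page', 60, json_encode($posts));   // expire after 60 s
}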

FileMaker PHP API - why is the initial connection so slow?

I've just set up my first remote connection with FileMaker Server using the PHP API and something a bit strange is happening.
The first connection and response takes around 5 seconds; if I hit reload immediately afterwards, I get a response within 0.5 seconds.
I can get a quick response for around 60 seconds or so (I haven't timed it yet, but it seems like at least a minute and less than 5 minutes), and then it goes back to taking 5 seconds to get a response (after that it's quick again).
Is there any way of ensuring that it's always a quick response?
I can't give you an exact answer on where the speed difference may be coming from, but I'd agree with NATH's notion on caching. It's likely due to how FileMaker Server handles caching the results on the server side and when it clears that cache out.
In addition to that, a couple of things that are helpful to know when using custom web publishing with FileMaker when it comes to speed:
The fields on your layout will determine how much data is pulled
When you perform a find in the PHP api on a specific layout, e.g.:
$request = $fm->newFindCommand('myLayout');
$request->addFindCriterion('name', $myname);
$result = $request->execute();
What's being returned is data from all of the fields available on the myLayout layout.
In SQL terms, the above query is equivalent to:
SELECT * FROM myLayout WHERE `name` = ?; -- the $myname variable is bound to ?
The FileMaker find will return every field/column available. You designate the returned columns by placing the fields you want on the layout. To get a true SELECT * from your table, you would include every field from the table on your layout.
All of that said, you can speed up your requests by only including fields on the layout that you want returned in the queries. If you only need data from 3 fields returned to your php to get the job done, only include those 3 fields on the layout the requests use.
Once you have the records, hold on to them so you can edit them
Taking the example from above, if you know you need to make changes to those records somewhere down the line in your php, store the records in a variable and use the setField and commit methods to edit them. e.g.:
$request = $fm->newFindCommand('myLayout');
$request->addFindCriterion('name', $myname);
$result = $request->execute();
$records = $result->getRecords();
...
// say we want to update a flag on each of the records down the line in our php code
foreach ($records as $record) {
    $record->setField('active', 1);
    $record->commit();
}
Since you have the records already, you can act on them and commit them when needed.
I say this as opposed to grabbing them once for one purpose and then grabbing them again from the database later to make updates to the records.
It's not really an answer to your original question, but since FileMaker's API is a bit different from others and doesn't have the greatest documentation, I thought I'd mention it.
There are some delays that you can remove.
Ensure that the layouts you are accessing via PHP are very simple: no unnecessary or slow calculations, few layout objects, etc. When the PHP engine first accesses that layout it needs to load it up.
Also check for layout and file script triggers that may be run; IIRC the OnFirstWindowOpen script trigger is called when a connection is made.
I don't think that it's related to caching. Also, it's the same when accessing via XML. Haven't tested ODBC, but am assuming that it is an issue with this too.
Once the connection is established with FileMaker Server and your machine, FileMaker Server keeps this connection alive for about 3 minutes. You can see the connection in the client list in the FM Server Admin Console. The initial connection takes a few seconds to set up (depending on how many others are connected), and then ANY further queries are lightning fast. If you run your app again, it'll reuse that connection and give results in very little time.
You can do completely different queries (on different tables) in a different application, but as long as you execute the second one on the same machine and use the same credentials, FileMaker Server will reuse the existing connection and provide results instantly. This means that it is not due to caching, but it's just the time that it takes FMServer to initially establish a connection.
In our case, we're using a web server to make FileMaker PHP API calls. We have set up a cron every 2 minutes to keep that connection alive, which has pretty much eliminated all delays.
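The keep-alive itself can be trivial; a rough sketch, with the host, credentials and layout name as placeholders:

// keepalive.php - touch FileMaker Server so the connection stays in its client list
require_once 'FileMaker.php';

$fm  = new FileMaker('MyDatabase', 'fms.example.com', 'webuser', 'secret');
$cmd = $fm->newFindAnyCommand('myLayout');   // fetches one arbitrary record
$cmd->execute();

// crontab entry, every 2 minutes:
// */2 * * * * /usr/bin/php /var/www/keepalive.php > /dev/null 2>&1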
This is probably way late to answer this, but I'm posting here in case anyone else sees this.
I've seen this happen when using external authentication with FileMaker Server. The first query establishes a connection to Active Directory, which takes some time, and then subsequent queries are fast as FMS has got the authentication figured out. If you can, use local authentication in your FileMaker file for your PHP access and make sure it sits above any external authentication in your accounts list. FileMaker runs through the auth list from top to bottom, so this will make sure that FMS successfully authenticates your web query before it gets to attempt an external authentication request, making the authentication process very fast.

MySQL & webpage: Notification on insert in table

I have an application on a server that monitors a log file, and I've also added a view on the client side (in the form of a website). Now I would like to implement the following: whenever a new entry has been added, the view should update as fast as possible.
First I have thought of two practical solutions:
1) Call an AJAX function that requests a PHP page every second, which checks for updates and, if there are any, shows them. (Disadvantages: lots of HTTP overhead, a lot of the time there may be no message, lots of SQL calls)
2) Call an AJAX function that requests a different PHP page every minute, which also checks for updates for 1 minute, but only returns once it has found an update, or else after 1 minute. (Disadvantages: HTTP overhead, but less than option 1; may still have times without a message; still a lot of SQL calls)
Which of these would be better, or what alternative would you advise?
I have also thought of yet another solution, but I'm unsure of how to implement it. That would be that on every INSERT on a specific table in the MySQL database, the webpage would directly be notified, perhaps via a push connection, but I'm also unsure of how those work.
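For what it's worth, option 2 (long polling) could look roughly like this sketch; table and column names are made up:

<?php
// poll.php - holds the request open for up to 60 seconds and returns
// as soon as rows newer than the client's last seen id appear
$pdo = new PDO('mysql:host=localhost;dbname=logs', 'user', 'pass');

$lastId  = isset($_GET['last_id']) ? (int) $_GET['last_id'] : 0;
$timeout = time() + 60;

while (time() < $timeout) {
    $stmt = $pdo->prepare('SELECT id, message FROM log_entries WHERE id > ? ORDER BY id');
    $stmt->execute(array($lastId));
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    if ($rows) {
        echo json_encode($rows);   // the AJAX callback renders these and re-polls
        exit;
    }
    sleep(1);   // check once per second instead of hammering the database
}
echo json_encode(array());   // nothing new within the window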

PHP One time database query on application startup

I have an AJAX-based PHP app (without any frameworks etc.).
I need to retrieve some records from the database (HTML select element items) ONCE, and once only, during application startup, store them in a PHP array, and have this array available for future use to prevent further database calls, for ALL future users.
I could do this easily in Spring with initializing beans. Such a bean would have application scope (context) so that it could be used by ALL future user threads needing the data. That means the database retrieval would happen once, only during app boot, and then some bean would hold the dropdown data permanently.
I can't understand how to replicate this use case in PHP.
There's no "application" bootstrapping as such, not until the first user actually does something to invoke my PHP files.
Moreover, there is no application context: records retrieved for the first user will not be available to another user.
How do I solve this problem? (Note: I don't want to use any library like memcache or whatever.)
If you truly need to get the data only the first time the app is loaded by any user, then you could write something that gets the data from your database and then rewrites the HTML page that you want those values in. That way, when the next user comes along, they are viewing a static page that has been written by a program.
I'm not so sure that 1 call to the database every time a user hits your app is going to kill you, though. Maybe you've got a good reason, but avoiding the database all but 1 time seems ridiculous IMO.
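If you do go the pre-generated route, one rough sketch is to dump the rows into a static PHP file once and include it everywhere (file, table and connection details are made up):

// build_cache.php - run once (or whenever the items change)
$pdo  = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$rows = $pdo->query('SELECT id, label FROM select_items')->fetchAll(PDO::FETCH_ASSOC);
file_put_contents(
    __DIR__ . '/select_items.cache.php',
    '<?php return ' . var_export($rows, true) . ';'
);

// any later script, for any user, loads the array without touching the database:
$items = require __DIR__ . '/select_items.cache.php';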
If you need to hit the database one time per visitor, you could use $_SESSION. At the beginning of your script(s) you would start up a session and check to see if there are values in it from the database. If not, it's the user's first visit and you need to query the database. Store the database values in the $_SESSION superglobal and carry on. If the data is in the session, use it and don't query the database.
Would that cover you?
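A minimal sketch of that $_SESSION approach (connection details and table are placeholders):

session_start();

if (!isset($_SESSION['select_items'])) {
    // first request from this visitor: query once and stash the result
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
    $_SESSION['select_items'] = $pdo
        ->query('SELECT id, label FROM select_items')
        ->fetchAll(PDO::FETCH_ASSOC);
}

$items = $_SESSION['select_items'];   // no database hit on later requests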

Object oriented coding in a multi threaded request environment - PHP

I am writing a web application in an object oriented design. This application would be interacting with the database pretty often. A few regular operations are verifying a user's ACL permissions for the function/method requested, performing certain functions, etc. In a nutshell, the database would be used a lot. So my question is: if I develop my application using OOP and declare class level variables that hold the incoming input, and a parallel or concurrent request comes in from another user, would the input data be changed?
Would I have to do something separate to make sure that the application is multi-threaded and that the input coming in is not changed until the process is finished?
ex:
class myProces {
    var $input1;
    var $input2;

    function process1($ip1, $ip2) {
        $this->input1 = $ip1;
        $this->input2 = $ip2;
        $this->getDataDB();
    }

    function getDataDB() {
        // do some database activity, passing the values held in the class level variables
        $query = "select column from table where col1 = $this->input1 and col2 = $this->input2";
        $result = mysql_query($query);
        return $result;
    }
}
Now suppose I have two users hitting my application at the same time, each making a call to the functions in this class:
user1:
$obj = new myProces();
$obj->process1(1,2);
user2:
$obj = new myProces();
$obj->process1(5,6);
Now, if I do have class level variables, would they have changed values when concurrent requests come in? Does PHP do any kind of handling for multithreading? I am not sure if Apache can act as a message queue where requests can be queued.
Can anybody explain whether OOP for web applications with a heavy number of users is good, or whether any kind of multithreading has to be done by developers?
A couple of things:
This has nothing to do with OOP.
PHP doesn't support user threads
Each request will be using its own memory, so you don't have to worry about concurrent usage updating variables behind your back.
However, you do have to take care when dealing with data from a database. User 1 may read something, then User 2 may read the same thing and update it before User 1 finishes. Then when User 1 updates it, he may be accidentally overwriting something User 2 did.
These sorts of things can be handled with transactions, locks, etc. Again, it has nothing to do with OOP or multithreading.
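For instance, a read-modify-write guarded by a transaction and a row lock might look like this sketch (assumes InnoDB; the table, columns and $accountId value are made up):

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$accountId = 42;   // example row

$pdo->beginTransaction();
try {
    // FOR UPDATE blocks a concurrent request on the same row until we commit
    $stmt = $pdo->prepare('SELECT credits FROM accounts WHERE id = ? FOR UPDATE');
    $stmt->execute(array($accountId));
    $credits = (int) $stmt->fetchColumn();

    $pdo->prepare('UPDATE accounts SET credits = ? WHERE id = ?')
        ->execute(array($credits - 1, $accountId));

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}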
First: try to learn about PDO (unless that var before the variables means that you're using PHP4).
Second: as konforce and Grossman said, each user gets a different instance of PHP.
Third: this problem may occur in Java projects (and others) that use static objects or static methods. Don't worry about this in PHP.
There is no need to worry about mixing things up on the PHP side, but when you need to update or insert data, several users being able to modify the same subset of data will lead to unwanted consequences, such as inserting duplicate rows or modifying the same row. Thus, you need to use SQL features such as locking tables or rows.
This isn't a problem you have to worry about. Each connection to your web server spawns a totally separate instance of the PHP interpreter, with totally separate memory and resource handles. No objects in one will be affected by the other, no database connections in one will be affected by the other. Your class properties in one process are not ever modified by a request in another process.
Many of the top sites on the web run on Apache and PHP, with hundreds of concurrent requests happening simultaneously all day long, and they do not have to write any special code to handle it.
