I am writing a web application with an object-oriented design. The application will interact with the database quite often. A few regular operations are verifying a user's ACL permissions for the requested function/method, performing certain functions, etc. In a nutshell, the database would be used a lot. So my question is: if I develop my application using OOP and declare class-level variables that are set from incoming input, and a parallel or concurrent request comes in from another user, could the input data be changed?
Would I have to do something separate to make sure the application is multi-threaded, and that the input coming in is not changed until the process is finished?
ex:
class myProcess {
    var $input1; // PHP4-style "var"; in PHP5+ you would declare these "public"
    var $input2;

    function process1($ip1, $ip2) {
        $this->input1 = $ip1;
        $this->input2 = $ip2;
        return $this->getDataDB();
    }

    function getDataDB() {
        // Do some database activity with the class-level variables;
        // the values in the class-level variables are passed into the query.
        $query = "select column from table where col1 = $this->input1 and col2 = $this->input2";
        $result = mysql_query($query);
        // ... fetch and return something from $result
        return $result;
    }
}
Now if I have two users hitting my application at the same time, each making a call to the functions in this class:
user1:
$obj = new myProcess();
$obj->process1(1, 2);
user2:
$obj = new myProcess();
$obj->process1(5, 6);
Now, with class-level variables, would their values change when concurrent requests come in? Does PHP do any kind of handling for multithreading? I am not sure whether Apache can act as a message queue, where requests can be queued.
Can anybody explain whether OOP is suitable for web applications with a heavy number of users, or whether developers have to do any kind of multithreading themselves?
A couple of things:
This has nothing to do with OOP.
PHP doesn't support user threads.
Each request will be using its own memory, so you don't have to worry about concurrent usage updating variables behind your back.
However, you do have to take care when dealing with data from a database. User 1 may read something, then User 2 may read the same thing and update it before User 1 finishes. Then when User 1 updates it, he may be accidentally overwriting something User 2 did.
These sorts of things can be handled with transactions, locks, etc. Again, it has nothing to do with OOP or multithreading.
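For example, here's a minimal sketch of handling that read-modify-write race with a transaction and a row lock in PDO/MySQL (the accounts table, its columns, and the connection details are all hypothetical):
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->beginTransaction();
try {
    // SELECT ... FOR UPDATE locks the row until commit/rollback,
    // so a concurrent request has to wait before reading or updating it.
    $stmt = $pdo->prepare('SELECT balance FROM accounts WHERE id = ? FOR UPDATE');
    $stmt->execute(array(42));
    $balance = $stmt->fetchColumn();

    $stmt = $pdo->prepare('UPDATE accounts SET balance = ? WHERE id = ?');
    $stmt->execute(array($balance - 10, 42));

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}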
First: try to learn about PDO (unless that "var" before the variables means you're using PHP4).
Second: as konforce and Grossman said, each user gets a different instance of PHP.
Third: this problem may occur in Java projects (and others) that use static objects or static methods. Don't worry about this in PHP.
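As a minimal sketch of the PDO suggestion, the question's query could be rewritten with bound parameters (the DSN, credentials, and table/column names here are hypothetical), which also guards against SQL injection:
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('SELECT col0 FROM mytable WHERE col1 = :ip1 AND col2 = :ip2');
$stmt->execute(array(':ip1' => $ip1, ':ip2' => $ip2));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);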
There is no need to worry about mixing things up on the PHP side, but when you need to update or insert data, several users being able to modify the same subset of data can lead to unwanted consequences, such as inserting duplicate rows or modifying the same row. Thus, you need SQL mechanisms such as locking tables or rows.
This isn't a problem you have to worry about. Each connection to your web server spawns a totally separate instance of the PHP interpreter, with totally separate memory and resource handles. No objects in one will be affected by the other, no database connections in one will be affected by the other. Your class properties in one process are not ever modified by a request in another process.
Many of the top sites on the web run on Apache and PHP, with hundreds of concurrent requests happening simultaneously all day long, and they do not have to write any special code to handle it.
I'm having a problem retrieving documents from a MongoDB collection immediately after inserting them. I'm creating documents, then running a query (that cannot be determined in advance) to get a subset of all documents in the collection. The problem is that some or all of the documents I inserted aren't included in the results.
The process is:
Find the timestamp of the most recent record
Find transactions that have taken place since that time
Generate records for those transactions and insert() each one (this can and will become a single bulk insert)
find() some of the records
The documents are always written successfully, but more often than not the new documents aren't included when I run the find(). They are available after a few seconds.
I believe the new documents haven't propagated to all members of the replica set by the time I try to retrieve them, though I am suspicious that this may not be the case as I'm using the same connection to insert() and find().
I believe this can be solved with a write concern, but I'm not sure what value to specify to ensure that the documents have propagated to all members of the replica set, or at least the member that will be used for the find() operation if it's possible to know that in advance.
I don't want to hard code the total number of members, as this will break when another member is added. It doesn't matter if the insert() operations are slow.
Read preference
When you write to a collection, it's a good practice to set the readPreference to "primary" to make sure you're reading from the same MongoDB server that you've written to.
You do that with the MongoCollection::setReadPreference() method.
// Read from the primary, i.e. the same server the write just went to.
$db->mycollection->setReadPreference(MongoClient::RP_PRIMARY);
$db->mycollection->insert(['foo' => 'bar']);
$result = $db->mycollection->find([]);
Write concern (don't do it!)
You might be tempted to use a write concern to wait for the data to be replicated to all secondaries by using w=3 (for a 3-server setup). However, this is not the way to go.
One of the nice things about MongoDB replication is that it will do automatic failover. In that case you might have fewer than 3 servers that can accept the data, causing your script to wait forever.
There is no w=all to write to all servers that are up, and using such a write concern wouldn't be good anyway. A secondary that has just recovered from a failover might be hours behind and take a long time to catch up; your script would wait (hang) until all secondaries are caught up.
A good practice is never to use w=N with N > majority outside of administrative tasks.
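If you do want stronger durability than the default, a safer sketch with the legacy PHP driver is the "majority" write concern, which adapts as members are added or removed instead of hard-coding a count:
// Wait until a majority of replica set members have acknowledged the write.
$db->mycollection->insert(['foo' => 'bar'], ['w' => 'majority']);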
Basically you are looking for a write concern, which (in layman's terms) allows you to specify when an insert is considered finished.
In PHP this is done by providing an option in the insert statement, so you need something like:
w=N (Replica Set Acknowledged): the write will be acknowledged by the primary server and replicated to N-1 secondaries.
Or, if you do not want to hard-code N:
w=<tag set> (Replica Set Tag Set Acknowledged): the write will be acknowledged by members of the entire tag set.
$collection->insert($someDoc, ["w" => 3]); // e.g. the primary plus two secondaries
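And a sketch of the tag-set variant (the tag set name "allDCs" is hypothetical; it would have to be defined in the replica set configuration under getLastErrorModes):
$collection->insert($someDoc, ["w" => "allDCs"]);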
Not sure if appropriate but here goes.
I have built a small system for online reservations of slots during a day.
Because I use a database and connect to it all the time to run some queries I created a simple class that creates the connection (using PDO) and running queries (preparing them, running them, if an error happens it manages and logs it, etc).
I also use AJAX a lot, so basically when a user wants to register, log in, get the schedule of the day, book a slot, cancel a booking and so on, I use AJAX to load a script that goes through the procedure for each action (I will call this the AJAX script), and in that script I include the relevant script that contains all the functions needed (I will call this the function script). The idea is that the AJAX script just gets the parameters and calls a list of functions, which based on the results returns some kind of response. The function script contains all the code that builds the queries, gets the database data, makes any checks, creates new objects if needed, etc.
The way I was doing it was that at the start of the AJAX script I create my database class instance and then just pass it through to the functions as needed. (Mostly because I started with all the code in the AJAX script and then moved to creating the separate functions in the second script, leaving just the minimum code needed to guide the action...)
So my question is: is it a good/better practice to remove the database class instance altogether from the AJAX script and instead include the database class script in the function script and just instantiate it inside each function? I am wondering about the idea of creating connections along with each function and then destroying them (most of the functions are small and usually have one query or two; there are some that have a lot, in which I use transactions).
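To make the two options concrete, here is a minimal sketch of each (the Database wrapper and function names are hypothetical):
// Option A: one connection per request, passed into each function.
$db = new Database(); // wraps PDO, connects once
echo getSchedule($db, $day);
echo bookSlot($db, $day, $slot);

// Option B: each function creates its own short-lived connection.
function getScheduleB($day) {
    $db = new Database();     // new connection
    return $db->query('...'); // connection closes when $db goes out of scope
}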
I have read about using a singleton as well, but I am not sure how it would work on the web. My understanding is that if 2 users are logged in to the site and both try to fetch the schedule, then the script is called once for each user, making a different connection, even if the parameters of the connection are the same (I have a guest_user with select/insert/update privileges in my database). So even if I had a singleton, I would still have two separate connections in the above scenario, right? However, the difference is that as I have it now I would have two connections open for 1 sec each, but with the change I ask about I would have, say, 10 per user (10 functions called) for 100 ms each time... Is this good or bad? Can it cause problems (if I extrapolate this to the real world, with say 1000 users, usually 20-40 at the same time on the site)?
What about security: can these connections be used to steal the data that are exchanged? (Okay, this is far-fetched and not really an issue; the data are relatively harmless, other than phone numbers, but...)
PHP uses cookies, sessions or databases (and ORMs) in order to remember data (so they are not lost after a single HTTP request). However, in Java (I mean servlets etc.) there is another solution: in brief, you may choose different scopes for an object (how long it exists). Besides session scope or the simple single-HTTP-request lifetime, an object can "live" during the HTTP server's whole runtime and can be initialized at the startup of the HTTP server.
Data can therefore be shared between different users/sessions, and no database requests are required (which would decrease the efficiency of the whole web application). (I mean they're not required while the HTTP server is already running: the object and its state are "remembered".)
(And I do as much as I can to decrease SQL requests, even using PHP arrays for frequently read but practically never modified DB data.)
What I need in PHP is a way to:
Remember (store somewhere) data that can be changed and shared between many users, but not in the DB
Without using sessions (or cookies), have data shared across many requests (e.g. not a single AJAX request, but many requests to the same URL), which of course must be stored somewhere else for some time. For instance, I want to read all data (rows) with a single SQL request, remember them for a short period in PHP, and only then, one row at a time, send responses with, say, each row in a separate response to the appropriate AJAX function
Can anyone give me some hints on how to achieve this in PHP, preferably in the easiest possible way?
As a preface to this answer (which I'm sure you've already grasped), PHP's execution model essentially 'restarts' the process between requests and as such storage of anything cross-request in PHP alone is unachievable.
That leaves you with a few options, all of which are really varieties of database:
Use a simple key-value in-memory persistence layer, like memcached or Redis (see the sketch after this list)
Use a NoSQL solution with a bit more structure (and consistency, should you require it) that still works in-memory and is comparably quicker than an RDBMS
Use an RDBMS, because it'll work great, and the quantity of traffic you'd need to topple a well-designed schema on moderate hardware is probably much higher than you think
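As a minimal sketch of the first option using the PHP Memcached extension (the server address, cache key, and DB helper are hypothetical):
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);
// Try the cache first; fall back to the database and cache the result.
$rows = $mc->get('schedule:today');
if ($rows === false) {
    $rows = query_database_for_schedule(); // hypothetical DB call
    $mc->set('schedule:today', $rows, 60); // expire after 60 seconds
}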
HTH
I've implemented an Access Control List using 2 static arrays (for the roles and the resources), but I added a new table in my database for the permissions.
The idea of using a static array for the roles is that we won't create new roles all the time, so the data won't change all the time. I thought the same for the resources, also because I think the resources are something that only the developers should handle, since they're more related to the code than to the data. Do you know of any reasons to use a static array instead of a database table? When/why?
The problem with hardcoding values into your code is that, compared with a database change, code changes are much more expensive:
You usually need to create a new package to deploy. That package would need to be regression tested to verify that no bugs have been introduced. Hint: even if you only change one line of code, regression tests are necessary to verify that nothing went wrong in the build process (e.g. a library isn't correctly packaged, causing a module to fail).
Updating code can mean downtime, which also increases risk, because there is always a chance the update fails.
In an enterprise environment it is usually a lot quicker to get DB updates approved than a code change.
All of that costs time/effort/money. Note that, in my opinion, holding reference data or static data in a database does not mean a performance hit, because the data can always be cached.
Your static array is an example of 'hard-coding' your data into your program, which is fine if you never ever want to change it.
In my experience, for your use case, this is never going to be true, and hard-coding your data into your source will result in you constantly being asked to update those things you assumed would never change.
Protip: to a project manager and/or client, nothing is immutable.
I think this just boils down to how you think the database will be used in the future. If you leave the data in arrays, and then later want to create another application that interacts with this database, you will start to have to maintain the roles/resources data in both code bases. But, if you put the roles/resources into the database, the database will be the one authority on them.
I would recommend putting them in the database. You could read the tables into arrays at startup, and you'll have the same performance benefits and the flexibility to have other applications able to get this information.
Also, when/if you get to writing a user management system, it is easier to display a user's roles/resources by joining the tables than it is to get back the role/resource IDs and have to look up the pretty names in your arrays.
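For instance, a minimal sketch of that join (table and column names are hypothetical):
$stmt = $pdo->prepare(
    'SELECT r.name
       FROM roles r
       JOIN user_roles ur ON ur.role_id = r.id
      WHERE ur.user_id = ?'
);
$stmt->execute(array($userId));
$roleNames = $stmt->fetchAll(PDO::FETCH_COLUMN);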
Using static arrays gives you performance, since you do not need to access the database all the time, but safety is more important than performance, so I suggest you do the permission control in the database.
Read up on RBAC.
Things considered static should be coded statically. That is, if you really consider them static.
But I suggest using class constants instead of static array values.
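A minimal sketch of that suggestion (the Role class and $user object are hypothetical):
class Role
{
    const ADMIN  = 1;
    const EDITOR = 2;
    const GUEST  = 3;
}

if ($user->roleId === Role::ADMIN) {
    // grant access
}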
I am fairly new to web programming; I have mainly used Java to create desktop applications in the past.
I'm trying to figure out how to create persistent objects in PHP. Maybe persistent isn't the right word; I don't want the object to be unique to each client, like I would get by serializing it in a session variable. I want the object to be created on the server and have that same object be accessible at all times. The object would query the database and store some data. This way, on each page load, the PHP code would get that data from the same persistent object rather than having to query the database each time.
I am currently using the singleton pattern for object creation because my initial understanding was that it would allow me to accomplish what I want. Part of the object is an array, and when I execute a PHP page that adds a value to the array and access that value on the same page, it's fine. However, when I add a value to the array and then load another page that accesses the value, the array is back to being empty.
Is this possible? Am I overreacting in thinking that querying the database so much is bad? There will at times be as many as 20 users requesting data during any one second, and I feel like it's ridiculous to query the DB each time.
Thanks
PHP does not have the concept of persistence that Java does: the JVM allows Java applications to persist in memory between HTTP requests, while the webserver spawns a new PHP process every time a new HTTP request is served, so an object's static variables will not persist data for you between requests.
Use a database to store persistent data. Web programming focuses on concurrency, so you shouldn't worry about database queries; 20 per second is few. If you reach the limits of your database, you have the options of adding a caching layer or "scaling out" the hardware by adding read-only slaves.
Usually you get your persistence by using the database. If that's a bottleneck, you start caching data, for example in memcached or maybe a local file with a serialized array on your webserver.
Though it may not be the prettiest solution, you can use sessions for this matter.
class SomeObject {
    public function __construct() {
        // Requires session_start() to have been called earlier in the request.
        $_SESSION['object'] = serialize($this);
    }
}
and on another page, you can call the object simply with:
$object = unserialize($_SESSION['object']);
Though of course this approach seems the easiest, it should come with the utmost precaution:
Know that sessions, depending on the traffic of your server, should not be too large in size, as many users will concurrently ask for each of these sessions. Scale the size at your own discretion.
Always serialize and unserialize; sessions where this is not done will misbehave.
Whatever sails your boat. Do so with your own mindful analysis. Good luck.
Data persistence in web programming is done through the use of cookies/sessions and passing cookie/session variables across web page calls. Theoretically, any kind of data can be stored in these variables, but for most practical purposes only the more important data (look at them more like tokens) needed to identify/reconstruct the needed objects (with or without a database) are transferred to and from server and browser.
I'd advise you to take a good look at memcached. When you're talking about server load and performance capabilities, a useful metric is often pages/second. If you have a dedicated server and unoptimized but very intensive stuff going on, you may only be able to serve 5 pages/second. Utilizing data caching is a great way to increase that 3- to 10-fold. However, it's always a tradeoff as far as how stale the data can get. You will really need to test your site to properly understand (quantify) other possible performance-limiting factors such as other connections per page (images, CSS, etc.), file I/O, other network activity, and last but not least the actual
It is possible to store objects in the current session. Now just create a base class which is able to store and recreate the object itself. Any derived object will then also be persistent.
You might want to read here : persistent base class and example
As far as I know, the session is stored in RAM and thus should be faster than serializing the objects to disk to achieve persistence.
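A minimal sketch of such a base class (all names here are hypothetical; session_start() must run before the session is used, and the class definition must be loaded before unserialize() is called):
abstract class SessionPersistent
{
    public function save($key)
    {
        $_SESSION[$key] = serialize($this);
    }

    public static function load($key)
    {
        return isset($_SESSION[$key]) ? unserialize($_SESSION[$key]) : null;
    }
}

class ShoppingCart extends SessionPersistent
{
    public $items = array();
}

session_start();
$cart = ShoppingCart::load('cart') ?: new ShoppingCart();
$cart->items[] = 'book';
$cart->save('cart');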
Stop using singletons and use dependency injection.
The best way is to use a DataMapper (http://www.martinfowler.com/eaaCatalog/dataMapper.html) and attach it to an object by means of dynamic properties. Let the data mapper handle persistence.
$CS = new CookieStorage();                 // alternative storage backends...
$SS = new SessionStorage();
$FS = new FileStorage('config/object.db'); // ...here we use file-backed storage
$DM = new ObjectDataMapper($FS);
$Object = new Object($DM);
$Object->DynamicProperty = 1;
Now DynamicProperty will automatically persist and will automatically be loaded from the file object.db. And the class definition:
class Object // note: "Object" is no longer usable as a class name in PHP 7.2+
{
    protected $_Mapper;

    public function __construct(MapperInstance $Storage = NULL)
    {
        $this->setMapper($Storage ?: new ObjectDataMapper(new FileStorage('config/object.db')));
    }

    public function setMapper(MapperInstance $Mapper)
    {
        $this->_Mapper = $Mapper;
    }

    public function __get($name)
    {
        return $this->_Mapper->getProperty($name);
    }

    public function __isset($name)
    {
        return $this->_Mapper->isProperty($name);
    }

    public function __set($name, $value)
    {
        $this->_Mapper->setProperty($name, $value);
    }
}