Any idea what would be the best way to write a PHP function for an online registration system that tracks objects' occupancy?
Just to be clear:
I want to write a function that checks the availability of an object in the database by comparing two variables:
The starting time of each reservation;
Its duration (i.e. the finishing time);
So when a new reservation is entered, I check the database: if the number of reservations in that period doesn't exceed the limit of objects (compared against the previous reservations), it returns a message which I then pass to JavaScript to enable the Submit button; but if it does exceed the limit, my JavaScript suggests a duration that is available for the entered starting time.
I'm having some problems with my current PHP function:
First, I'm using many variables and many loops (which may slow the system down as it grows), and the code is quite messy!
Second, it doesn't distinguish between serial and concurrent reservations, so it treats both the same.
Here is a snippet of my function:
$reservation = new Reservation;
$reservations = $reservation->fetch_all();

$count = 0;     // number of reservations overlapping the new request
$arr = array(); // start/finish codes of the overlapping reservations
foreach ($reservations as $reservation) {
    // compare every minute of the new request with every minute of each stored reservation
    for ($j = $res_code['res_code_st']; $j < $res_code['res_code_fi']; $j++) {
        for ($i = $reservation['res_code_st']; $i < $reservation['res_code_fi']; $i++) {
            if ($i == $j) {
                $count = $count + 1;
                $arr[] = $reservation['res_code_st'];
                $arr[] = $reservation['res_code_fi'];
                break 2;
            }
        }
    }
}
This is the format I'm storing times in: for example, for 12:30 I store 1230, and for 09:20 I store 0920. I then check every minute of each stored item against every minute of the new reservation (everything happens within the same day: days don't matter!), and whenever a match is found I count it as another reservation in that period (which is why it doesn't distinguish concurrent from serial).
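For reference, a minimal sketch of that encoding as a helper (my own illustration, not from the original code; as an integer the leading zero disappears, e.g. 0920 becomes 920, but numeric comparison still orders the times correctly within one day):

function toResCode($hhmm) {
    list($h, $m) = explode(':', $hhmm);
    return (int)$h * 100 + (int)$m; // "12:30" -> 1230, "09:20" -> 920
}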
I believe it should be simple, but I'm kind of confused and can't come up with a better solution right now!
Thanks for your time :)
EDIT
I tried the way @kamil-maraz suggested. I think it saves some time by reducing complexity, but I still couldn't figure out how to check for concurrency.
Let me give an example:
There are four possible kinds of disturbance, which I try to show in this symbolic figure.
Suppose each line is a reservation across time; the first line is the new reservation and the next four are already stored in the DB.
The four disturbances are:
One that starts before the new request and ends in the middle of it;
One that starts before the new request and ends after it;
One that lies completely inside the new reservation;
One that starts inside the new request and ends after it;
0-----------------0
0--------------------------------0
0--------------0
0----------0
0-----0
$result = $db->prepare(
    'SELECT COUNT(reservation_id) FROM reservations
     WHERE (res_code_st < ? AND res_code_fi > ?)
        OR (res_code_st > ? AND res_code_fi < ?)
        OR (res_code_st < ? AND res_code_fi > ?)
        OR (res_code_st < ? AND res_code_fi > ?)');
$result->execute(array(
    $res_code['res_code_st'], $res_code['res_code_st'],
    $res_code['res_code_st'], $res_code['res_code_fi'],
    $res_code['res_code_st'], $res_code['res_code_fi'],
    $res_code['res_code_fi'], $res_code['res_code_fi']
));
$row = $result->fetch();
This gives me the number of reservations that disturb the interval of the new request. But what about this case:
0--------------------------0
0-----0
0-----0
0------0
Although there are 4 reservations in the interval (counting the new request), which would exceed the object limit (suppose the limit == 3), no more than 2 of them run at any single moment, so the request should still be valid (the concurrency problem I was talking about).
Any idea how I should change the SQL query to fix this?
It seems to me that this could be done entirely in the database. You are fetching all the results and then doing some magic over the data, but you could do it through a database query instead.
For example, something like this:
SELECT COUNT(id) FROM reservations WHERE date < ... AND date > ... AND ... etc
Then, in the PHP, you can test the count.
If you want to test different types of reservations (concurrent, etc.), you can use an aggregated table (like somebody used here), and you can store the reservation types in rows too.
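To address the concurrency case from the edit, one option is a sweep line in PHP; the following is an untested sketch that assumes the reservations table, the PDO connection $db and the $res_code array from the question. It fetches only the rows that overlap the new request (the simple condition st < new_fi AND fi > new_st covers all four disturbance cases at once), then walks through the start/end events in order, tracking how many reservations run at the same moment:

$stmt = $db->prepare(
    'SELECT res_code_st, res_code_fi FROM reservations
     WHERE res_code_st < ? AND res_code_fi > ?');
$stmt->execute(array($res_code['res_code_fi'], $res_code['res_code_st']));

$events = array();
foreach ($stmt->fetchAll() as $r) {
    $events[] = array($r['res_code_st'], +1); // a reservation begins
    $events[] = array($r['res_code_fi'], -1); // a reservation ends
}
// count the new request itself as well
$events[] = array($res_code['res_code_st'], +1);
$events[] = array($res_code['res_code_fi'], -1);
sort($events); // by time; at equal times -1 (end) sorts before +1 (start)

$active = 0;
$peak = 0;
foreach ($events as $e) {
    $active += $e[1];
    if ($active > $peak) {
        $peak = $active;
    }
}
// the new request fits only if $peak does not exceed the object limit

This way the 4-reservations-but-only-2-at-a-time example above is accepted: the peak, not the total count in the interval, is compared against the limit.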
Related
I can't figure out how to efficiently get SQL data for a Room/Rates/Dates=Amount...
First I load all the "RateData" for a date range with a PDO select. There are many rows, for many rooms, each with many rates... maybe, or maybe it is all empty except a couple of Amounts. It needs to display $0 for missing dates, so next...
I load the Rooms with PDO and loop through them, and for each room I load the Rates with PDO and loop through them (not a ton of rates per room, and not a ton of rooms, but possibly a very long date-range).
So then I loop through the date range and add $0 to the giant UI grid of Amounts by Rate/Date, nested under each Room. I have to do this anyway, as I also have a ton of logic on what to display in the parent Room row that averages the Rates and such.
So what I need to do is instead of using $0, I need to see if the Room/Rate/Date exists in RateData...
$RateAmount = 0;
$RateDataRow = $RateData.filter('Room=1 && Rate=1 && Date=2022-10-01');
If ($RateDataRow exists) {$RateAmount = $RateDataRow['Amount']}
How do I write the above pseudocode in PHP?
The only alternative I can think of would be to do 1000's of SQL calls to populate the grid... which seems bad. Maybe it is not that bad though if PDO caches and doesn't actually query the DB for each grid cell. Please advise. Thanks.
I tried this:
$currcost = 0;
// if $ratedata exists for currdate + currrate + currroom
function ratematch($row)
{
    if (($row['RoomType_Code'] == $currrate)
        && ($row['RatePlan_Code'] == $currroom)
        && ($row['Rate_Date'] == $currdate->format("Y-m-d"))) {
        return 1;
    } else {
        return 0;
    }
}
$match = array_filter($ratedata, 'ratematch');
if (!empty($match)) { $currcost = $match['Rate_Amount']; }
But I got an error about redeclaring a function. I have to redeclare it because it sits in a loop of currdate under a loop of currrate under a loop of currroom (about 1000 cells).
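For what it's worth, a closure would sidestep the redeclaration error: an anonymous function can be created inside the loop and capture the loop variables with use (a sketch, untested, with names taken from the post; note that array_filter returns an array of rows, so the first hit has to be pulled out of it):

$match = array_filter($ratedata, function ($row) use ($currroom, $currrate, $currdate) {
    return $row['RoomType_Code'] == $currroom
        && $row['RatePlan_Code'] == $currrate
        && $row['Rate_Date'] == $currdate->format("Y-m-d");
});
if (!empty($match)) {
    $matchrow = reset($match); // first matching row
    $currcost = $matchrow['Rate_Amount'];
}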
I made it work, I just have to manually loop through the RateData...
//Get Amount from DB
$currcost = 0;
foreach ($ratedata as $datarow) {
    if (($datarow['RoomType_Code'] == $currroom)
        && ($datarow['RatePlan_Code'] == $currrate)
        && ($datarow['Rate_Date'] == $currdate->format("Y-m-d"))) {
        $currcost = $datarow['Rate_Amount'];
        break;
    }
}
If anyone knows a more efficient way to "query" a previous query without a trip to the SQL server, please post about it. Looping through the fetchAll results thousands of times seems bad, but not as bad as running thousands of WHERE queries on the SQL server.
I suppose I could make a 3-dimensional array of $0, then loop through the RateData once to update that array, and finally loop through the 3-dimensional array to do my other calculations (average rate per room per week, Sat-Sun). Sounds hard though.
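A middle ground (a sketch, untested, with the field names from the code above): instead of a 3-dimensional array, build a flat lookup keyed by room/rate/date in one pass over the RateData; each grid cell then becomes a constant-time lookup instead of a scan.

// build the lookup once, right after the fetchAll()
$index = array();
foreach ($ratedata as $datarow) {
    $key = $datarow['RoomType_Code'] . '|' . $datarow['RatePlan_Code'] . '|' . $datarow['Rate_Date'];
    $index[$key] = $datarow['Rate_Amount'];
}

// then, inside the room/rate/date loops:
$key = $currroom . '|' . $currrate . '|' . $currdate->format("Y-m-d");
$currcost = isset($index[$key]) ? $index[$key] : 0;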
I am using PHP to get specific records from the database.
Which one is better?
1.
Select * From [table] Limit 50000, 10;
while ($row = $stmt->fetch()) {
    // save in array, total 10 times
}
or
2.
Select * From [table];
$start = 50000;
$length = 10;
$i = 0;
while ($row = $stmt->fetch()) {
    if ($i >= $start && $i < $start + $length) {
        // save in array, total 50010 times
    }
    $i++;
}
In this case, which one should I use?
Which one uses fewer DB resources?
Which one is better?
Too vague: what is "better"?
Which one uses fewer DB resources?
You're much better off with the first approach. It's efficient to select as little data as you need and no more. Selecting the whole table will force your script to use a lot more memory, because all that data needs to be kept live.
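For instance, a sketch of the first approach with PDO (the table name is a placeholder; binding the LIMIT arguments as integers matters, since quoted strings would break the clause):

$stmt = $db->prepare('SELECT * FROM mytable LIMIT ?, ?'); // mytable is a placeholder name
$stmt->bindValue(1, 50000, PDO::PARAM_INT); // offset
$stmt->bindValue(2, 10, PDO::PARAM_INT);    // row count
$stmt->execute();
$rows = $stmt->fetchAll(); // only 10 rows ever reach PHP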
The best answer you'll get is: test! You can run your queries multiple times in multiple ways and see for yourself. Just use SELECT SQL_NO_CACHE ... instead of the generic SELECT ... to force the DB to restart the work from scratch. Measure how long it takes to run the query and process the results.
function wayOne(){
    // execute your 1st query and loop through results
}
function wayTwo(){
    // execute 2nd query and loop through results
}

// Measures # of milliseconds it takes to execute another function
function timeThis(callable $callback){
    $start_time = microtime(true);            // float seconds, not the default string
    call_user_func($callback);
    $seconds = microtime(true) - $start_time; // duration in seconds
    return round($seconds * 1000);            // duration in milliseconds
}

$wayOneTime = timeThis('wayOne');
$wayTwoTime = timeThis('wayTwo');
You can then compare the two times. Generally (though not always), a process that takes significantly less time uses fewer resources.
function generateRandomData(){
    $db = new mysqli('localhost','XXX','XXX','scores');
    if (mysqli_connect_errno()) {
        echo 'Failed to connect to database. Please try again later.';
        exit;
    }
    $query = "insert into scoretable values(?,?,?)";
    for ($a = 0; $a < 1000000; $a++)
    {
        $stmt = $db->prepare($query);
        $id = rand(1,75000);
        $score = rand(1,100000);
        $time = rand(1367038800, 1369630800);
        $stmt->bind_param("iii", $id, $score, $time);
        $stmt->execute();
    }
}
I am trying to populate a data table in mysql with a million rows of data. However, this process is extremely slow. Is there anything obvious I'm doing wrong that I could fix in order to make it run faster?
As hinted in the comments, you need to reduce the number of queries by concatenating as many inserts as possible together. In PHP, it is easy to achieve that:
$query = "insert into scoretable values";
for($a = 0; $a < 1000000; $a++) {
$id = rand(1,75000);
$score = rand(1,100000);
$time = rand(1367038800 ,1369630800);
$query .= "($id, $score, $time),";
}
$query[strlen($query)-1]= ' ';
There is a limit on the maximum size of the queries you can execute, which is directly related to the max_allowed_packet server setting (this page of the MySQL documentation describes how to tune that setting to your advantage).
Therefore, you will have to reduce the loop count above to reach an appropriate query size, and repeat the process to reach the total number you want to insert, by wrapping that code with another loop.
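For example, a sketch of that wrapping loop (untested; the chunk size of 10000 is an arbitrary value for illustration, and $db is assumed to be the mysqli connection from the question):

$total = 1000000;
$chunk = 10000; // rows per INSERT; tune so each query stays below max_allowed_packet
for ($done = 0; $done < $total; $done += $chunk) {
    $query = "insert into scoretable values";
    for ($a = 0; $a < $chunk; $a++) {
        $id = rand(1,75000);
        $score = rand(1,100000);
        $time = rand(1367038800, 1369630800);
        $query .= "($id, $score, $time),";
    }
    $query[strlen($query)-1] = ' '; // overwrite the trailing comma
    $db->query($query);
}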
Another practice is to disable check constraints on the table on which you wish to do the bulk insert:
ALTER TABLE yourtablename DISABLE KEYS;
SET FOREIGN_KEY_CHECKS=0;
-- bulk insert comes here
SET FOREIGN_KEY_CHECKS=1;
ALTER TABLE yourtablename ENABLE KEYS;
This practice however must be done carefully, especially in your case, since you generate the values randomly. If you have any unique key within the columns you generate, you cannot use that technique with your query as it is, as it may produce duplicate key inserts. You probably want to add an IGNORE clause to it:
$query = "insert IGNORE into scoretable values";
This will cause the server to silently ignore duplicate entries on unique keys. To reach the total number of required inserts, just loop as many times as needed to fill up the remaining missing rows.
I suppose that the only place where you could have a unique key constraint is the id column. In that case, you will never be able to reach the number of rows you wish to have, since it is way above the range of random values you generate for that field. Consider raising that limit, or better yet, generate your ids differently (perhaps simply by using a counter, which will make sure every record uses a different key).
You are doing several things wrong. First thing you have to take into account is what MySQL engine you're using.
The default one is InnoDB; previously, the default engine was MyISAM.
I'll write this answer under assumption you're using InnoDB, which you should be using for plethora of reasons.
InnoDB operates in something called autocommit mode. That means that every query you make is wrapped in a transaction.
To translate that to a language that us mere mortals can understand - every query you do without specifying BEGIN WORK; block is a transaction - ergo, MySQL will wait until hard drive confirms data has been written.
Knowing that hard drives are slow (mechanical ones are still the most widely used), that means your inserts will only be as fast as the hard drive. Mechanical hard drives can usually perform about 300 input/output operations per second, so assuming you can do 300 inserts a second, you'll wait quite a bit to insert 1 million records.
So, knowing how things work - you can use them to your advantage.
The amount of data that the HDD will write per transaction will be generally very small (4KB or even less), and knowing today's HDDs can write over 100MB/sec - that indicates that we should wrap several queries into a single transaction.
That way MySQL will send quite a bit of data and wait for the HDD to confirm it wrote everything and that the whole world is fine and dandy.
So, assuming you have 1M rows you want to populate - you'll execute 1M queries. If your transactions commit 1000 queries at a time, you should perform only about 1000 write operations.
That way, your code becomes something like this:
(I am not familiar with the mysqli interface, so function names might be wrong, and seeing as I'm typing without actually running the code, the example might not work, so use it at your own risk.)
function generateRandomData()
{
    $db = new mysqli('localhost','XXX','XXX','scores');
    if (mysqli_connect_errno()) {
        echo 'Failed to connect to database. Please try again later.';
        exit;
    }
    $query = "insert into scoretable values(?,?,?)";
    // We prepare ONCE, that's the point of prepared statements
    $stmt = $db->prepare($query);
    $start = 0;
    $top = 1000000;
    for ($a = $start; $a < $top; $a++)
    {
        // If this is the very first iteration, start the transaction
        if ($a == 0)
        {
            $db->begin_transaction();
        }
        $id = rand(1,75000);
        $score = rand(1,100000);
        $time = rand(1367038800, 1369630800);
        $stmt->bind_param("iii", $id, $score, $time);
        $stmt->execute();
        // Commit on every thousandth query
        if (($a % 1000) == 0 && $a != ($top - 1))
        {
            $db->commit();
            $db->begin_transaction();
        }
        // If this is the very last query, then we just need to commit and end
        if ($a == ($top - 1))
        {
            $db->commit();
        }
    }
}
DB querying involves many interrelated tasks. As a result it is an 'expensive' process. It is even more 'expensive' when it comes to insertion/update.
Running a single query is the best way to enhance performance.
You can build the statement in the loop, then prepare and run it once.
eg.
$query = "insert into scoretable values ";
for($a = 0; $a < 1000000; $a++)
{
$values = " ('".$?."','".$?."','".$?."'), ";
$query.=$values;
...
}
...
//remove the last comma
...
$stmt = $db->prepare($query);
...
$stmt->execute();
Have a look at this gist I've created. It takes about 5 minutes to insert a million rows on my laptop.
How would I go about programming a message on my site that changes daily? I'm thinking of preloading all the messages in a MySQL database.
Any help would be appreciated!
Thanks,
I've tried
$msg_sql = "SELECT * FROM ".TABLE_PREFIX."quotes ORDER BY rand(curdate()) LIMIT 3";
$msg_res = mysqli_fetch_assoc(mysqli_query($link, $msg_sql));
But this only grabs the first MySQL result?
If you want a real message changing daily, you actually don't need to rely on a database or anything fancy. A simple idea might be to create a directory (say /var/www/motds) and populate it with files named YYYY-MM-DD.txt (where YYYY is a 4 digit year number, MM is a two digit month number and DD is a 2 digit day number).
Then, the only thing you need to do in order to display your motd is:
$filename = '/var/www/motds/'.date("Y-m-d").'.txt';
if (file_exists($filename)) {
echo file_get_contents($filename);
}
If you want your daily messages to be taken from a pool of entries (that you can pre-load), you might do something as follows:
$files = scandir('/var/www/motds'); // put files into an array
$messagecount = count($files) - 2; // .. and . shall not be considered
$day = date("z"); // what day do we have today?
echo file_get_contents('/var/www/motds/' . $files[($day % $messagecount) + 2]);
There are plenty of ways to get this done. You list PHP in your tags, so maybe check here:
PHP Script: Quote of the Day
or maybe here
I have two MySQL tables, Badges and Events. I use a join to find all the events and return the badge info for each event (title & description) using the following code:
SELECT COUNT(Badges.badge_ID) AS badge_count, title, Badges.description
FROM Badges
JOIN Events ON Badges.badge_id = Events.badge_id
GROUP BY title ASC
In addition to the counts, I need to know the value of the event with the most entries. I thought I'd do this in PHP with the max() function, but I had trouble getting that to work correctly. So I decided I could get the same result by modifying the above query with "ORDER BY badge_count DESC LIMIT 1", which returns a single row whose value is the highest count of all the events.
While this solution works well for me, I'm curious whether it takes more resources to make two calls to the server (because I'm now using two queries) instead of working it out in PHP. If I did do it in PHP, how could I get the max value of a particular item in an associative array (it would be nice to be able to return both the key and the value, if possible)?
EDIT:
OK, it's amazing what a few hours of rest will do for the mind. I opened up my code this morning and made a simple modification, which worked out for me. I simply kept a running variable for the count field and, if the new count was greater than the old one, changed it to the new value (see the "if" statement in the following code):
if ($c > $highestCount) {
    $highestCount = $c;
}
This might again lead to a "religious war", but I would go with the two queries version. To me it is cleaner to have data handling in the database as much as possible. In the long run, query caching, etc.. would even out the overhead caused by the extra query.
Anyway, to get the max in PHP, you simply need to iterate over your $results array:
function getMax($results) {
    if (count($results) == 0) {
        return NULL;
    }
    $max = reset($results);
    foreach ($results as $elem) {
        if ($max < $elem) { // need to do specific comparison here
            $max = $elem;
        }
    }
    return $max;
}
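If you also want the key (e.g. the title) along with the value, a small variant tracks both while iterating (a sketch, assuming $results is an associative array mapping title => count):

function getMaxEntry(array $results) {
    if (count($results) == 0) {
        return NULL;
    }
    $maxVal = reset($results); // first value
    $maxKey = key($results);   // its key
    foreach ($results as $key => $val) {
        if ($val > $maxVal) {
            $maxVal = $val;
            $maxKey = $key;
        }
    }
    return array($maxKey, $maxVal);
}

// usage: list($title, $count) = getMaxEntry($counts);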