Background
I am creating a ranking system that retrieves a batch of student records, compares their grades, assigns the highest one to the current rank, and then deletes that record before the next comparison. However, I'm having trouble deleting the record; I tried unset() but it does not seem to work either.
Question
This is the code I'm using. Please note that this is just pseudocode of what we are doing, not the actual code, to keep the question clear. Take a look at this code:
// Retrieve all the student records with grades.
$students = $this->grades->RetrieveRecords();
// Occupy slot.
$iterator = 0;
$highest_index = 0;
for ($i = 0; $i < 5; $i++) {
    // Search student for rank $i.
    foreach ($students as $student) {
        // Some comparisons.
        // Consider we found the highest yet.
        if ($highest < $student['grade']) {
            // Store which index it is, because it will be deleted
            // on the next cycle if this $student['grade'] is indeed the highest on this cycle.
            $highest_index = $iterator;
        }
        $iterator += 1;
    }
    // After getting the highest for rank $i, delete that record
    // from $students so it is excluded from the comparison on the next cycle.
    $unset($students[$highest_index]); // Does not work, any alternative? - Greg
    // Reset the foreach iterator for the next comparison cycle.
    $iterator = 0;
}
The $unset($students[$highest_index]); line is the one we need to do the job, but it doesn't. We just need to delete a specific record from result_array(), which is $students. For now we have run out of alternatives and are still searching across the internet/documentation. However, I'll just leave this here for some help.
We will also update this if we find a solution within the next few hours.
You can use array_filter():
$students = array_filter($students, function($student) use($highest)
{
return $student['grade'] < $highest;
});
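If you want to keep the delete-the-highest approach from the question, here is a minimal sketch (with made-up student data) that uses the key supplied by foreach, so a plain unset() call, without the stray $, removes the chosen record before the next cycle:

// Hypothetical data for illustration.
$students = [
    ['name' => 'A', 'grade' => 90],
    ['name' => 'B', 'grade' => 75],
    ['name' => 'C', 'grade' => 82],
];

$ranking = [];
for ($rank = 1; $rank <= 5 && !empty($students); $rank++) {
    $highest = null;
    $highest_index = null;
    foreach ($students as $index => $student) {
        if ($highest === null || $student['grade'] > $highest) {
            $highest = $student['grade'];
            $highest_index = $index; // remember the array key, not a separate counter
        }
    }
    $ranking[$rank] = $students[$highest_index];
    unset($students[$highest_index]); // plain unset(), no leading $
}

print_r($ranking);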
Related
I can't figure out how to efficiently get SQL data for a Room/Rates/Dates=Amount...
First I load all the "RateData" for a date range with a PDO select. There are many rows, for many rooms, each with many rates... maybe, or maybe it is all empty except a couple of Amounts. It needs to display $0 for missing dates, so next...
I load the Rooms with PDO and loop through them, and for each room I load the Rates with PDO and loop through them (not a ton of rates per room, and not a ton of rooms, but possibly a very long date-range).
So then I loop through the date range and add $0 to the giant UI grid of Amounts by Rate/Date, nested under each Room. I have to do this anyway, as I also have a ton of logic on what to display in the parent Room row that averages the Rates and such.
So what I need to do is instead of using $0, I need to see if the Room/Rate/Date exists in RateData...
$RateAmount = 0;
$RateDataRow = $RateData.filter('Room=1 && Rate=1 && Date=2022-10-01');
If ($RateDataRow exists) {$RateAmount = $RateDataRow['Amount']}
How do I write the above pseudocode in PHP?
The only alternative I can think of would be to do 1000's of SQL calls to populate the grid... which seems bad. Maybe it is not that bad though if PDO caches and doesn't actually query the DB for each grid cell. Please advise. Thanks.
I tried this:
$currcost = 0;
//if $ratedata exists for currdate + currrate + currroom
function ratematch($row)
{
    if (($row['RoomType_Code'] = $currrate)
        && ($row['RatePlan_Code'] = $currroom)
        && ($row['Rate_Date'] = $currdate->format("Y-m-d"))) {
        return 1;
    } else {
        return 0;
    }
}
$match = array_filter($ratedata, 'ratematch');
if (!empty($match)) {$currcost = $match['Rate_Amount'];}
But got an error about redeclaring a function. I have to redeclare it because it is in a loop of currdate under a loop of currrate under a loop of currroom (about 1000 cells).
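One way around that redeclaration error would be an anonymous function declared inline, which can capture the loop variables with use(). This is only a sketch of the closure approach (the column pairings follow the working loop below), not tested against the real data:

$match = array_filter($ratedata, function ($row) use ($currroom, $currrate, $currdate) {
    return $row['RoomType_Code'] === $currroom
        && $row['RatePlan_Code'] === $currrate
        && $row['Rate_Date'] === $currdate->format("Y-m-d");
});

if (!empty($match)) {
    // array_filter() keeps the original keys, so take the first remaining row.
    $currcost = reset($match)['Rate_Amount'];
}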
I made it work, I just have to manually loop through the RateData...
//Get Amount from DB
$currcost = 0;
foreach ($ratedata as $datarow) {
    if (($datarow['RoomType_Code'] == $currroom)
        && ($datarow['RatePlan_Code'] == $currrate)
        && ($datarow['Rate_Date'] == $currdate->format("Y-m-d"))) {
        $currcost = $datarow['Rate_Amount'];
        break;
    }
}
If anyone knows a more efficient way to "query" a previous query without a trip to the SQL server, please post about it. Looping through the fetchAll() result thousands of times seems bad, but not as bad as doing thousands of WHERE queries on the SQL server.
I suppose I could make a 3-dimensional array of $0, then just loop through the RateData once to update that array, and finally loop through the 3-dimensional array to do my other calculations (average rate per room per week, Sat-Sun). Sounds hard though.
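A lighter variant of that idea is to index the fetched rows once by a composite Room/Rate/Date key, so each grid cell becomes a single array lookup instead of a scan of the whole result set. This is just a sketch using the column names from the snippets above:

// Build the index once, right after fetchAll().
$rateIndex = [];
foreach ($ratedata as $row) {
    $key = $row['RoomType_Code'] . '|' . $row['RatePlan_Code'] . '|' . $row['Rate_Date'];
    $rateIndex[$key] = $row['Rate_Amount'];
}

// Inside the room/rate/date loops:
$key = $currroom . '|' . $currrate . '|' . $currdate->format("Y-m-d");
$currcost = isset($rateIndex[$key]) ? $rateIndex[$key] : 0; // $0 when the cell is missing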
So, basically, I have to loop through an array of 25,000 items, then compare each item with another array's IDs, and if the IDs from the first array and the second match, create another array of the matched items. That looks something like this:
foreach ($all_games as $game) {
foreach ($user_games_ids as $user_game_id) {
if ($user_game_id == $game["appid"]) {
$game_details['title'][] = $game["title"];
$game_details['price'][] = $game["price"];
$game_details['image'][] = $game["image_url"];
$game_details['appid'][] = $game["appid"];
}
}
}
I tested this loop with only 2,500 records from the first array ($all_games) and about 2,000 records from the second array ($user_games_ids), and as far as I can tell it takes about 10 seconds to execute that chunk of code, just the loops. Is that normal? Should it take that long, or am I approaching the issue from the wrong side? Is there a way to reduce that time? Because when I apply that code to 25,000 records that time will increase significantly.
Any help is appreciated,
Thanks.
EDIT: So there is no confusion, I can't use a DB query to improve the performance. Although I have added all 25,000 games to the database, I can't do the same for the user game IDs. There is no way I know of to get all of the users through the API I'm accessing, and even if there were, that would be a lot of users. I get the user's game IDs on the fly when a user enters their ID in the form; based on that I use file_get_contents to obtain those IDs and then cross-reference them with the database that stores all games. Again, that might not be the best way, but it's the only one I could think of at this point.
If you re-index the $all_games array by appid using array_column(), then you can reduce it to one loop and just check whether the data isset()...
$all_games = array_column($all_games, null, "appid");

foreach ($user_games_ids as $user_game_id) {
    if (isset($all_games[$user_game_id])) {
        $game_details['title'][] = $all_games[$user_game_id]["title"];
        $game_details['price'][] = $all_games[$user_game_id]["price"];
        $game_details['image'][] = $all_games[$user_game_id]["image_url"];
        $game_details['appid'][] = $all_games[$user_game_id]["appid"];
    }
}
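As a side note on the same re-indexing idea, the lookup can also be expressed as a key intersection. This sketch returns full rows keyed by appid rather than the parallel column arrays used in $game_details, so it is an illustration of the technique, not a drop-in replacement:

$all_games = array_column($all_games, null, "appid");
// Keep only the games whose appid appears in the user's list.
$owned = array_intersect_key($all_games, array_flip($user_games_ids));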
You see, I have these variables that are set within a while loop (I'm using MySQL as well). What I want to do is group some of those variables into one, and for the rest of the variables that are not repeated, assign them display:none. Look at this image:
Now, as you can see, there are two duplicate dates, but the values are different. In this case (keep in mind that I only want to group these "Recibos", but don't worry about that) the first row has a different date with no repeats, while the last two "Recibos" are duplicated.
Basically what I want to do is group the last two "Recibos" (the duplicate dates): only one row per date should appear, and the rest should be shown when the TR is hovered.
I only want a simple version of this for now; it can be some sort of test. As you can see in the image, I've been trying for weeks and still haven't gotten anywhere. I hope you can help me! Thanks in advance.
I have made this code, which is the one in the image above:
Inside the while:
$acumulado -= $monto;
if (($fila >= $mostrar_de_aca_en_adelante && $detalles == 'minimo') || ($detalles == 'maximos')) {
    $conteo++;
    $string_fecha[$conteo] = $fecha_operacion;
    $string_fecha_tmp[$conteo] = $fecha_operacion;
    $recibo_id_cliente[$conteo] = $id_cliente;
    $recibo_monto[$conteo] = $monto;
    $recibo_saldo_total[$conteo] = $saldo_total;
    $recibo_detalle_movimiento[$conteo] = $datos_detalles['detalle_movimiento'];
    $recibo_recibo[$conteo] = $recibo[0];
    $recibo_verificado[$conteo] = $datos_detalles['verificado'];
    $recibo_acumulado[$conteo] = $acumulado;
    $label = $monto;
}
Outside the while:
echo "<br><br><br>";
for($i = 1; $i <= $conteo; $i++){
if($string_fecha[$i] == $string_fecha_tmp[($i + 1)]){
$label += $recibo_monto[$i];
$string_imprime_tmp .= "<br>[Repetido]".$string_fecha[$i]."(".$label.")";
}
}
echo $string_imprime_tmp;
var_dump($string_fecha);
By the way: I know this could be done in MySQL, but here it can only be done in PHP; I'm limited to PHP.
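Here is a minimal sketch of the grouping step, reusing the variable names from the code above and only a couple of the fields for illustration: collect the rows by date first, then print one line per date while the duplicates stay available for the hover detail:

// Group the collected rows by date.
$grupos = [];
for ($i = 1; $i <= $conteo; $i++) {
    $grupos[$string_fecha[$i]][] = [
        'monto'  => $recibo_monto[$i],
        'recibo' => $recibo_recibo[$i],
    ];
}

// One visible line per date; the individual rows stay available for the hover detail.
foreach ($grupos as $fecha => $filas) {
    $total = array_sum(array_column($filas, 'monto'));
    echo $fecha . " (" . count($filas) . " recibos, total " . $total . ")<br>";
}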
I have a problem with Laravel's ORM Eloquent chunk() method.
It misses some results.
Here is a test query :
$destinataires = Destinataire::where('statut', '<', 3)
->where('tokenized_at', '<', $date_active)
->chunk($this->chunk, function ($destinataires) {
foreach($destinataires as $destinataire) {
$this->i++;
}
});
echo $this->i;
It gives 124838 results.
But:
$num_dest = Destinataire::where('statut', '<', 3)
->where('tokenized_at', '<', $date_active)
->count();
echo $num_dest;
gives 249676, so exactly TWICE the first code example.
My script is supposed to edit all matching records in the database. If I launch it multiple times, it just processes half of the remaining records each time.
I tried with DB::table() instead of the Model.
I tried to add a ->take(20000) but it doesn't seem to be taken into account.
I echoed the query with ->toSql() and everything seems to be fine (the LIMIT clause is added when I add the ->take() parameter).
Any suggestions?
Imagine you are using the chunk method to delete all of the records. The table has 2,000,000 records and you are going to delete all of them in chunks of 1000.
$query->orderBy('id')->chunk(1000, function ($items) {
foreach($items as $item) {
$item->delete();
}
});
It will delete the first 1000 records by fetching the first 1000 records with a query like this:
SELECT * FROM table ORDER BY id LIMIT 0,1000
And then the next query from the chunk method is:
SELECT * FROM table ORDER BY id LIMIT 1000,1000
Our problem is this: we delete 1000 records and then fetch rows 1000 to 2000 of what is left. The 1000 rows that have shifted up into the first positions are never fetched, so the second step of chunk skips 1000 records instead of deleting them! The same happens in every later step: each step misses another 1000 records, and this is the reason we do not get the expected result in these situations.
I made the example about deletion because it shows the exact behavior of the chunk method.
UPDATE:
You can use chunkById() for deleting safely.
Read more here:
http://laravel.at.jeffsbox.eu/laravel-5-eloquent-builder-chunk-chunkbyid
https://laravel.com/api/5.4/Illuminate/Database/Eloquent/Builder.html#method_chunkById
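For reference, a minimal sketch of the same deletion written with chunkById(), which pages on the primary key instead of an OFFSET, so rows deleted in one pass do not shift the next page:

$query->chunkById(1000, function ($items) {
    foreach ($items as $item) {
        $item->delete();
    }
});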
Quick answer: Use chunkById() instead of chunk().
When updating or deleting records while iterating over them, any change that affects the query's WHERE clause or ordering can alter which rows fall into each chunk. This can result in records never being included in the results.
The explanation can be found in the Laravel documentation:
If you are updating database records while chunking results, your chunk results could change in unexpected ways. If you plan to update the retrieved records while chunking, it is always best to use the chunkById method instead. This method will automatically paginate the results based on the record's primary key.
Example usage of chunkById():
DB::table('users')->where('active', false)
->chunkById(100, function ($users) {
foreach ($users as $user) {
DB::table('users')
->where('id', $user->id)
->update(['active' => true]);
}
});
(end of the update)
Below is the original answer which used the cursor() method instead of the chunk() method to solve the problem:
I had the same problem - only half of the total results were passed to the callback function of the chunk() method.
Here is the code which had the same problem - half of the transactions were not processed:
Transaction::whereNull('processed')->chunk(100, function ($transactions) {
$transactions->each(function($transaction){
$transaction->process();
});
});
I used Laravel 5.4 and managed to solve the problem replacing the chunk() method with cursor() method and changing the code accordingly:
foreach (Transaction::whereNull('processed')->cursor() as $transaction) {
$transaction->process();
}
Even though the answer doesn't address the problem itself, it provides a valuable solution.
For anyone looking for a bit of code that solves this, here you go:
while (Model::where('x', '>', 'y')->count() > 0)
{
Model::where('x', '>', 'y')->chunk(10, function ($models)
{
foreach ($models as $model)
{
$model->delete();
}
});
}
The problem is in the deletion/removal of models while chunking away at the total. Wrapping it in a while loop makes sure you get them all! This example works when deleting models; change the while condition to suit your needs!
When you fetch data using chunk(), the same SQL query is executed each time; only the offset changes, increasing by the chunk size given as the method's parameter. For example:
SELECT * FROM users WHERE status = 0;
Let's say there are 200 records (let's suppose that is a lot, so we want to retrieve the data in chunks of 50). The queries then look like this:
SELECT * FROM users WHERE status = 0 LIMIT 50 OFFSET 0
(the offset is dynamic: it is 50 on the next call, then 100, and 150 on the last one).
The problem when using Laravel's chunk while updating is that only the offset changes between calls, while the set of rows matching the WHERE condition shrinks. The first time, 200 records match the condition. But if we update the status of those first 50 rows, for example to 1 (status = 1), the next call still executes the same query:
SELECT * FROM users WHERE status = 0 LIMIT 50 OFFSET 50
(the offset keeps growing: 100 next time, and so on).
Only 150 records still match this query, since we set status = 1 on 50 rows. And, as said, the offset on the second call is 50, so we skip the first 50 of those 150 remaining rows and update the next 50 instead. That means rows 50 to 100 of the remaining 150 get status = 1, while rows 0 to 49 are skipped.
The third time we run this query:
SELECT * FROM users WHERE status = 0 LIMIT 50 OFFSET 100 (the offset advances by the chunk size on every page).
But only 100 users still have status = 0, so with an offset of 100 the query returns no rows and there is no more data to go through.
This is not what you would expect at first thought. But this is how it works, and it is why only half of the data gets updated while the other half is skipped.
I have two MySQL tables, Badges and Events. I use a join to find all the events and return the badge info for each event (title & description) using the following code:
SELECT COUNT(Badges.badge_ID) AS badge_count, title, Badges.description
FROM Badges
JOIN Events ON Badges.badge_id = Events.badge_id
GROUP BY title ASC
In addition to the counts, I need to know the value of the event with the most entries. I thought I'd do this in PHP with the max() function, but I had trouble getting that to work correctly. So I decided I could get the same result by modifying the above query with "ORDER BY badge_count DESC LIMIT 1", which returns an array with a single element whose value is the highest count total of all the events.
While this solution works well for me, I'm curious whether it takes more resources to make two calls to the server (because I'm now using two queries) instead of working it out in PHP. If I did do it in PHP, how could I get the max value of a particular item in an associative array (it would be nice to be able to return the key and the value, if possible)?
EDIT:
OK, it's amazing what a few hours of rest will do for the mind. I opened up my code this morning and made a simple modification, which worked out for me. I simply created a variable for the count field and, if the new value was greater than the old one, changed it to the new value (see the "if" statement in the following code):
if ($c > $highestCount) {
    $highestCount = $c;
}
This might again lead to a "religious war", but I would go with the two-queries version. To me it is cleaner to have data handling in the database as much as possible. In the long run, query caching, etc. would even out the overhead caused by the extra query.
Anyway, to get the max in PHP, you simply need to iterate over your $results array:
function getMax($results) {
    if (count($results) == 0) {
        return NULL;
    }
    $max = reset($results);
    foreach ($results as $elem) {
        if ($max['badge_count'] < $elem['badge_count']) { // compare on the count column
            $max = $elem;
        }
    }
    return $max;
}
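Since the question also asks for the key, here is a hedged variant of the same loop that returns both the key and the row. The name getMaxWithKey is just illustrative, and it assumes each row carries the badge_count column from the query above:

function getMaxWithKey($results) {
    $maxKey = null;
    foreach ($results as $key => $row) {
        if ($maxKey === null || $results[$maxKey]['badge_count'] < $row['badge_count']) {
            $maxKey = $key;
        }
    }
    return $maxKey === null ? NULL : array($maxKey, $results[$maxKey]);
}

// Usage (assuming $results is not empty):
list($key, $row) = getMaxWithKey($results);
echo $key . ": " . $row['badge_count'];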