I believe that I've encountered a bug in how Laravel 5.3 handles eager loading when the foreign key is a string which contains zerofilled numbers.
I have two models (using a mysql backend), School and Student. The id field in School is a string, and contains 5-digit numbers assigned by the state, which include leading zeros. Student BelongsTo School, and contains a school_id field which is defined identically to the id field in School. I have done some testing, and I'm finding that when I call Student::with('school') I get the expected School models as long as the school_id in Student is free of leading zeroes, but it returns no School models for school_id values with leading zeroes.
I've done direct testing with freshly minted School records, and the values are being stored correctly with leading zeroes in both database tables. When I query the tables directly, the leading zeroes work fine, but as soon as with() enters the equation, things break. I've attempted to reproduce the failure through other means, even manually constructing whereIn() calls to mirror the syntax of the queries constructed by with(), but everything else correctly returns the expected records.
This code worked fine prior to climbing the Laravel upgrade ladder from 4.1 to 5.3, so I'm wondering what may have changed. I've gone as far as digging into the GitHub repository for BelongsTo, and none of the parameter handling seems to strip leading zeroes, so I'm really at a loss as to why with() is breaking in this way.
So, does anybody have any insights they can share? I'm stumped, and would rather not have to engineer around with(). I'll also state up front that I can't drop the leading zeroes from the id field; it's not an option. They must actually be stored, not just displayed as with ZEROFILL.
UPDATE: I've attached an example that establishes that the school_id stored in Student can successfully connect to the corresponding School when used separately from the with() statement:
$students = Student::with('school')->where('id', '=', 780)->get();
$schools = School::whereIn('id', $students->pluck('school_id')->all())->get();
throw new \Exception(print_r($students, true).'|'.print_r($schools, true));
Here are the (edited for brevity) results of the \Exception:
Exception in AdminController.php line 273:
Illuminate\Database\Eloquent\Collection Object
(
    [items:protected] => Array
        (
            [0] => App\Models\Student Object
                (
                    [attributes:protected] => Array
                        (
                            [id] => 780
                            [school_id] => 01234
                        )
                    [relations:protected] => Array
                        (
                            [school] =>
                        )
                )
        )
)
Illuminate\Database\Eloquent\Collection Object
(
    [items:protected] => Array
        (
            [0] => App\Models\School Object
                (
                    [attributes:protected] => Array
                        (
                            [id] => 01234
                            [school] => Test1
                            [district_id] => 81000
                            [inactive] => 0
                            [see] => 0
                        )
                )
        )
)
So, while Student::with('school') fails to pull up the corresponding School, feeding the same Student->school_id values to School::whereIn() succeeds. I remain mystified.
You are not showing the model classes, but my guess is you need public $incrementing = false; in the School Eloquent model. Otherwise it will be coerced to int when matching the relationship, losing all leading zeroes.
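For illustration, a minimal sketch of the School model with that change (assuming Laravel 5.3, where you can also declare the key type explicitly; your real model will have more in it):

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class School extends Model
{
    // Primary key is a state-assigned, zero-padded string, not an auto-increment integer.
    public $incrementing = false;
    protected $keyType = 'string';
}

With $incrementing left at its default of true, Eloquent treats the key as an integer when matching eager-loaded models, so '01234' becomes 1234 and no School lines up with the Student.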
First of all, I'm happy to see people helping people here :D.
Sadly, I've also got a small issue.
DB 1 contains:
a UUID and many other values.
DB 2 contains:
a UUID and, again, many other values.
What I want is:
[0] => Array
    (
        [UUID] => 96
        [DB1 values]
        [DB2 values]
    )
I thought array_combine would do it, but sadly that isn't true, because then I lose some DB1 values.
Here are the database images:
--EDIT--
Your picture helped. You can handle that at the SQL level:
SELECT Login.*, Location.*
FROM Login
LEFT JOIN Location ON Login.UUID=Location.UUID
GROUP BY Login.UUID
Then the resulting data in PHP will already be correctly formatted.
Notice that I chose the Login table to drive the query; I could also have chosen the Location table, depending on the underlying structure.
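To fetch that from PHP, here is a minimal PDO sketch (the DSN, credentials, and database name are placeholders; note also that with FETCH_ASSOC the two same-named UUID columns collapse into one key, so select columns explicitly if that matters):

$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'user', 'pass');
$sql = 'SELECT Login.*, Location.*
        FROM Login
        LEFT JOIN Location ON Login.UUID = Location.UUID
        GROUP BY Login.UUID';
// Each row now carries the combined Login and Location values for one UUID.
$rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
print_r($rows);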
I'm trying to update the DB using some data pulled from an external API. The appstatus and amount both come from the same API, and I can see the values are correct and make it all the way down through the action to be updated; but when they are updated, amount is not persisted while the others are.
$this->Application->create();                           // reset model state for a fresh save
$this->Application->id = $appVal['Application']['id'];  // point the save at the existing record
$saved = $this->Application->save($app);
$this->Application->clear();
Contents of $app to be saved against the correct record ID of 147:
2015-07-02 22:23:42 Debug: DATA TO SAVE: Array
(
[Application] => Array
(
[appstatus] => Approved
[app_step] => 5
[amount] => 13001
)
)
Output from $saved after update of record ID 147:
2015-07-02 22:23:42 Debug: DATA THAT WAS SAVED: Array
(
[Application] => Array
(
[appstatus] => Approved
[user_status] => 2
[app_step] => 5
[amount_due] => 13001
[modified] => 2015-07-02 22:23:42
)
)
The appstatus, app_step, and modified values are all different in the database after an update, but the amount column is never persisted and remains 0, even though the update response indicates the amount to be 13001.
The migration used to add the amount_due column to the existing DB table looks okay, and I can see it in the DB:
ALTER TABLE `applications` ADD `amount_due` DOUBLE NOT NULL AFTER `app_step`
I've looked through the docs, and when using create() to update you should use clear(); I tried it, but it doesn't do anything. Any ideas why this is not being persisted?
UPDATE
I can update any other field in the model just before it saves (address, postalcode, amount, street_name, etc.) with no problem, but no matter what, amount_due is never updated.
Apparently, CakePHP caches the model schema, so the new field that was just added wasn't in the cache. After blowing away the cache, it all works as it should. That's time I'll never get back, but it's the first thing I'll check from now on, so lesson learnt.
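For reference, a sketch of clearing that cache in CakePHP 2 (the cache config name is the framework default; adjust if yours differs):

// Clear the cached model schema so newly added columns are picked up.
Cache::clear(false, '_cake_model_');
// Or simply delete the files under app/tmp/cache/models/.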
I have two big, two-dimensional arrays (pulled from some XML data). One (the A list) is ~1,000 items containing 5 fields; the other (the B list) is dynamically between 10,000 and 12,000 items containing 5 fields.
My idea was to compare EACH id key of list A against EACH id key of list B and, on a match, compose a new array of combined fields, or just the fields from array A if there is no match.
I used nested foreach loops and ended up with millions of iterations taking a long time to process. Needless to say... not a solution.
The shape of these two structures and the result I need reminded me straight away of an SQL join.
The questions are:
1.) Should I try SQL, or is nested foreach simply not the best PHP approach?
2.) Will a relational query be much faster than the iterations?
EDIT:
I pull data only periodically from an XML file (in a separate process) which contains 10+ fields for each node. Then I store the 5 fields I need in a CSV file, to later compare with table A, which I pull out of a MySQL database; basically much like a catalog update of attributes from a fresh feed.
I'm afraid the original idea of storing into a CSV was an error, and I should just save the feed updates into a database too.
EDIT 2
The B list looks like this:
Array
(
[0] => Array
(
[code] => HTS541010A9E680
[name] => HDD Mobile HGST Travelstar 5K100 (2.5", 1TB, 8MB, SATA III-600)
[price] => 385.21
[avail] => 0
[retail] => asbis
)
...
...
while the A list is similar in all but the 'code' field, which is the only one useful for comparison:
Array
(
[0] => Array
(
[code] => ASD-HTS541010A
[name] => HDD Mobile HGST Travelstar 5K100 (2.5", 1TB, 8MB, SATA III-600)
[price] => 385.21
[avail] => 0
[retail] => asbis
)
As you can see, each feed has the universal code BUT with some different random data as a prefix or suffix, so in each loop I have to do a couple of string operations (stripos and the like) to compare it to the feed's id for a match or close match.
Pseudo code:
$mylist = loadfromDB();   // list A, ~1,000 items
$whslist = loadfromCSV(); // list B, 10,000-12,000 items
foreach ($mylist as $myl) {
    foreach ($whslist as $whl) {
        $codeA = $myl['code'];
        $codeB = $whl['code'];
        if (stripos($codeA, $codeB) !== false || stripos($codeB, $codeA) !== false) {
            // direct substring match in either direction
            // ...
        } elseif (stripos(substr(strstr($codeA, '-'), 1), $codeB) !== false) {
            // match after stripping a prefix like "ASD-"
            // ...
        } elseif (stripos($codeB, substr($codeA, 0, -5)) !== false) {
            // match after trimming a 5-character suffix
            // ...
        }
    }
}
Using SQL will be faster because most SQL engines are optimized for joins, while your method is brute force. However, inserting all that data into MySQL tables is quite a heavy task, so it's still not the best solution.
I suggest you do the join in PHP, but with a smarter algorithm. Start by sorting the two arrays by the field you want to match on. Iterate over both sorted arrays together, using two iterators (or pointers or indices or whatever); let's say a iterates over A and b over B. On each iteration of the loop, compare the comparison fields of the elements pointed to by a and b. If a's is smaller, advance a. If b's is smaller, advance b. If a's is equal to b's, you have a match, which you should store in a new list, and then advance both a and b (assuming the relation is one-to-one; if it's one-to-many you only advance the "many" iterator, and if it's many-to-many you need a somewhat more complex solution).
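A minimal sketch of that sorted-merge join, assuming an exact match on a 'code' field (the fuzzy prefix/suffix matching above would need the codes normalized first):

// Sort both lists by the comparison field: O(n log n) each.
usort($a, function ($x, $y) { return strcmp($x['code'], $y['code']); });
usort($b, function ($x, $y) { return strcmp($x['code'], $y['code']); });

$result = array();
$i = $j = 0;
while ($i < count($a) && $j < count($b)) {
    $cmp = strcmp($a[$i]['code'], $b[$j]['code']);
    if ($cmp < 0) {
        $i++;                                    // a's key is smaller: advance a
    } elseif ($cmp > 0) {
        $j++;                                    // b's key is smaller: advance b
    } else {
        $result[] = array_merge($a[$i], $b[$j]); // match: merge rows (one-to-one assumed)
        $i++;
        $j++;
    }
}

That's one linear pass over both lists instead of ~10,000,000 pair comparisons.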
Mongo: make an array with the date as key?
[_id] => MongoId Object (
[$id] => 4fcf2f2313cfcd225700000d
)
[id] => 14
[name] => Aryan Roban
[news] => Array (
[08-06-2012] => 12
)
Here I want to make news an array with the date as the key. Also, how do I delete a particular key's entry?
For example
I want to delete the array element with key '08-06-2012' in the news array; I don't know its value.
Finding documents won't be a problem; this is very easy. Simply check whether news has a key which matches your search criteria:
db.foo.find({'news.08-06-2012': {'$exists': true}})
Don't forget to put an index on news.
But deleting them is not easily possible. There is another thread which shows a way to do that, but it's really rather a workaround: "In mongoDb, how do you remove an array element by its index". Sadly, this only works for arrays with numerical indexes and not for associative arrays.
Maybe you could use a separate collection for news? Then you could update and delete them easily. Otherwise, you could load the full document from your database, manipulate the news in your application, and save it afterwards. This would require two database queries, but should work.
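A rough sketch of that read-modify-write approach with the legacy PHP Mongo driver (the database name, collection name, and filter are assumptions):

// Load the document, drop the unwanted date key, write it back.
$collection = (new MongoClient())->mydb->foo;
$doc = $collection->findOne(array('id' => 14));
unset($doc['news']['08-06-2012']);
$collection->save($doc);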
Alternatively, we can use $unset, like so:
db.foo.update({/*...*/}, {$unset: {'news.08-06-2012': 1}})
I'm using FQL to query the Facebook Insights table and return the data to my server, where I plan to store it within a MySQL database.
I've found that all the Insights metrics return different data types for the value field. For example, something like application_installation_adds will return a value like this:
Array
(
[0] => stdClass Object
(
[metric] => application_installation_adds
[value] => 3
)
)
...while a metric like application_permission_views_top will return this if it has data:
Array
(
[0] => stdClass Object
(
[metric] => application_permission_views_top
[value] => stdClass Object
(
[permissions_impression_email] => 5
[permissions_impression_user_birthday] => 4
[permissions_impression_read_insights] => 4
)
)
)
...and this if it's empty:
Array
(
[0] => stdClass Object
(
[metric] => application_permission_views_top
[value] => Array
(
)
)
)
Given the different data types and values, I was wondering what the best way to store this data within my database would be.
I was thinking of setting up a table like this:
metric_name
metric_subname
value
and then using a process like this:
Get FQL result.
If it's an empty Array, do nothing (because there is no data).
If it's a single value, insert metric_name and value. Give the metric_subname a value of "Singular" (so that I know it's just a one-value metric).
If it's a stdClass Object, use a foreach loop to fill in the metric_subname column (see the sketch below).
This should give me a table like so, which will allow me to query the data in an easy manner down the track: http://i.stack.imgur.com/0fekJ.png
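A rough PHP sketch of that branching, assuming $results holds the decoded FQL rows shown above and insertMetric() is a hypothetical helper that performs the actual INSERT:

foreach ($results as $row) {
    if (is_object($row->value)) {
        // Multi-value metric: one DB row per sub-metric.
        foreach ($row->value as $subname => $subvalue) {
            insertMetric($row->metric, $subname, $subvalue);
        }
    } elseif (is_array($row->value) && empty($row->value)) {
        // Empty array: no data, do nothing.
    } else {
        // One-value metric.
        insertMetric($row->metric, 'Singular', $row->value);
    }
}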
Can anyone please provide feedback? Is this a good way to go about it, or are there better options?
Thanks in advance! :)
Do you know how many levels deep the hierarchy of metrics can go? If more than two, your solution doesn't scale.
Assume at some point depth will reach > 2 and design a generalized schema. One that comes to mind would be to keep two tables: metrics, and user_metrics. The metrics table would just contain the hierarchy of possible metrics, with one change: instead of name and subname, implement id, name, parent_id. The second table would associate a metric with a user, and store the value, so: metric_id, user_id, value.
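A sketch of that generalized schema in SQL (types and names are illustrative, not prescriptive):

CREATE TABLE metrics (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name      VARCHAR(255) NOT NULL,
    parent_id INT UNSIGNED NULL,  -- NULL for top-level metrics
    FOREIGN KEY (parent_id) REFERENCES metrics (id)
);

CREATE TABLE user_metrics (
    metric_id INT UNSIGNED NOT NULL,
    user_id   INT UNSIGNED NOT NULL,
    value     BIGINT NOT NULL,
    PRIMARY KEY (metric_id, user_id),
    FOREIGN KEY (metric_id) REFERENCES metrics (id)
);

A metric at any depth then becomes a chain of parent_id links, so the hierarchy is no longer capped at two levels.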