We are noticing a weird issue in Laravel 8 with PHP 7.4.
When we save data to the MySQL database, it stores garbage values.
Example data:
$product = Product::create([
    'product_id' => 6791044858042,
    'inventory_id' => 42309695242426
]);
The same thing happens when using $product = new Product();
The values in the DB are
"product_id":"701563066","inventory_id":"-27590470"
When we log $product to a file just after saving, it has the correct values. The data is not re-saved afterwards, and nothing indicates that it was modified either.
I inspected the maximum integer value for the column in the DB schema. It is bigint(20); I changed it to bigint(100) and even varchar(255), yet the result is the same.
Furthermore, the garbage values appear random rather than fixed.
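That said, the two stored values above are not arbitrary: each is exactly the low 32 bits of the original ID, which is what a 32-bit integer truncation somewhere between PHP and MySQL would produce. A quick diagnostic sketch (not from the original post) to check whether the PHP 7.4 build in use is 64-bit:

// Diagnostic sketch: verify the PHP build handles 64-bit integers.
var_dump(PHP_INT_SIZE); // 8 on a 64-bit build, 4 on a 32-bit build
var_dump(PHP_INT_MAX);  // 9223372036854775807 on 64-bit
// The observed corruption matches 32-bit truncation exactly:
//  6791044858042 mod 2^32 =  701563066
// 42309695242426 mod 2^32 = 4267376826, i.e. -27590470 as a signed 32-bit int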
I changed PHP back to 7.3 on the server and it seems fine.
Has anyone faced this issue before?
We are using Magento 1.9 for our application. Here is my sample code
$customer_collection = Mage::getModel("customer/customer")->load($data['customer_id']);
foreach ($data['data'] as $key => $customer) {
    $customer_collection->setData($customer['attribute_name'], $customer['attribute_value']);
}
$customer_collection->save(); // finally saving the data
The above code works for all fields except date fields. When we send multiple values including date fields, the other fields get updated but the date field does not. Can anyone help me solve this issue?
For the date field update, try using
$object->setData($attributeCode, Varien_Date::now());
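If the incoming value is an arbitrary date string, it may also help to normalize it to MySQL format before saving, since the EAV backend stores datetime attributes as 'Y-m-d H:i:s'. A sketch along those lines (the incoming format shown is hypothetical):

// Hypothetical sketch: normalize an incoming date string before setData().
$incoming = $customer['attribute_value']; // e.g. '03/21/2015'
$normalized = Varien_Date::formatDate(strtotime($incoming), true); // 'Y-m-d H:i:s'
$customer_collection->setData($customer['attribute_name'], $normalized);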
As @mladen-ilić suggested,
I did Flush Cache Storage and tried posting the data again. It works like a charm.
I am using the PHP API for adding/editing/viewing a FileMaker database. I am using FileMaker Pro 14 and FMS 14 in a Windows environment.
I am having an issue with adding/editing container fields. I tried the solution given in the following link: https://community.filemaker.com/thread/66165
It was successful. The FM script is:
Go to Layout [ The layout that shows your container field ]
New Record/Request
Set Variable[$url ; Value:Get(ScriptParameter)]
Insert from URL [Select, No Dialog ; database_name::ContainerField ; $url]
Exit Script
I don't want to add a new record. I have several container fields on the layout, so adding a record for each one is not a solution, and I need to be able to modify the container fields of older records.
I tried modifying the script as follows:
Go to Layout ["products" (products)]
Go to Record/Request/Page [Last]
Open Record/Request
Set Variable [$url; Value: Get(ScriptParameter)]
Insert from URL [Select, No Dialog; products::brochure; $url]
Exit Script []
Note: the [Last] parameter is just experimental.
The PHP script is as follows:
$runscript = $fm->newPerformScriptCommand('products', 'addContainerData', 'http://link_to_uploded_file');
$result = $runscript->execute();
$result returns success, but the file wasn't inserted in the container field.
Somebody pointed out to me that to use "Insert from URL" I have to specify a record ID. So I did the following:
I modified the PHP script to:
$editCommand = $fm->newEditCommand('products', $recordID, $editedData);
$editCommand->setPreCommandScript('addContainerData', 'http://url_to_my_uploaded_file');
$result = $editCommand->execute();
and the FM script (addContainerData) to
Set Variable [$url; Value: Get(ScriptParameter)]
Insert from URL [Select, No Dialog; products::brochure; $url]
Exit Script []
Again the result was success, but the file was not inserted into the container field.
What am I missing? What do I need to do to be able to add container data to new/old records?
A possible workaround is to use PHP functions to encode the file to Base64 and set that value on a text field in FileMaker. Once there, you can have an auto-enter calculation or a script take the Base64 value and decode it into a container field. This works well, especially for files with smaller sizes.
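A minimal sketch of that workaround on the PHP side, assuming a plain text field named brochure_base64 exists on the products layout (the field name is hypothetical):

// Hypothetical sketch: push the file as Base64 into a text field; a
// FileMaker auto-enter calculation or script then decodes it (e.g. with
// Base64Decode) into the container field.
$fileData = base64_encode(file_get_contents('/path/to/brochure.pdf'));
$edit = $fm->newEditCommand('products', $recordID, array('brochure_base64' => $fileData));
$result = $edit->execute();
if (FileMaker::isError($result)) {
    die('Edit failed: ' . $result->getMessage());
}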
Configuration
Ubuntu 14.04
PHP 5.5.9
MongoDB 3.0.2
MongoDB PHP driver 1.6.6
Setup
I have a globally distributed MongoDB replica set (for testing purpose) with 3 servers. One has priority = 1, others have priority = 0 so they will never become primary.
This setup is used to distribute files to the replicated servers by adding them directly on the primary server using GridFS. Distribution works fine.
I have created a simple PHP watcher script which is executed on the secondary servers using the read preference \MongoClient::RP_NEAREST. I wanted to determine the timestamp at which a replica has received all files from the primary.
To make sure that the PHP scripts on the secondary servers use the mongodb instance on their own server (and not the primary), I stopped the primary mongodb server. After doing this, the two remaining servers keep their secondary role.
Issue
With the primary server unavailable, I was still able to execute queries like count() and find() on regular collections (including the fs.files collection).
But calls that use GridFS throw a MongoConnectionException: No candidate servers found exception.
Script
I have created a little script with which you should be able to reproduce the error.
$serverList = '...';
$conn = new \MongoClient(
    'mongodb://' . $serverList,
    array(
        'replicaSet'     => 'r0',
        'readPreference' => \MongoClient::RP_NEAREST,
        'username'       => 'bat',
        'password'       => '',
        'db'             => 'bat'
    )
);
$db = $conn->selectDB('bat');

echo 'works fine...';
$files = $db->selectCollection('fs.files');
$documentsCount = $files->count();
$documents = $files->find();
foreach ($documents as $document) {
    echo $document['filename'] . ', ';
}

echo 'throws exception...';
$gridfs = $db->getGridFS();
$documentsCount = $gridfs->find()->count();
$documents = $gridfs->find();
foreach ($documents as $document) {
    echo $document->getFilename();
}
If the primary server is unavailable, the lines after echo 'works fine...'; will work fine, while the line after echo 'throws exception...'; will throw an exception.
Maybe this is related to an issue in the Java driver, JAVA-401; there was a similar problem with the usage of secondary servers and GridFS. Maybe GridFS is trying to ensure indexes when the fs.files collection contains fewer than 1000 files, which is not possible on a secondary.
I figured out the problem: it was simply the incorrect line $gridfs->find()->count(). If you execute $gridfs->count() or $gridfs->find(), it works fine.
So you'll have to use $gridfs->count() instead of $gridfs->find()->count().
I don't know why $gridfs->find()->count() only works when a primary server is available.
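With the corrected call, the GridFS part of the test script above also runs against the secondaries (same connection and setup assumed as in the original script):

echo 'works on secondaries too...';
$gridfs = $db->getGridFS();
$documentsCount = $gridfs->count(); // count on the collection, not on the cursor
$documents = $gridfs->find();
foreach ($documents as $document) {
    echo $document->getFilename() . ', ';
}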
I know what the issue is, but I don't know how to fix it. I just migrated my Magento store locally, and I suspect some data may have been lost when transferring the DB; the DB is very large. Anyhow, when I log in to my admin page, I get a 404 error, page not found.
I debugged the issue and got down to the wire. The exception is thrown in Mage/Core/Model/App.php, line 759 to be exact. The following is a snippet.
Mage/Core/Model/App.php
if (empty($this->_stores[$id])) {
    $store = Mage::getModel('core/store');
    /* @var $store Mage_Core_Model_Store */
    if (is_numeric($id)) {
        $store->load($id); // THIS ID IS FROM Mage_Core_Model_App::ADMIN_STORE_ID and it's empty, which causes the error
    } elseif (is_string($id)) {
        $store->load($id, 'code');
    }
    if (!$store->getCode()) { // RETURNS FALSE HERE BECAUSE NO ID SPECIFIED
        $this->throwStoreException();
    }
    $this->_stores[$store->getStoreId()] = $store;
    $this->_stores[$store->getCode()] = $store;
}
The store returns null because $id is null; it therefore does not load any model, which explains why getCode() returns false.
[EDIT]
If you want clarification, please ask for more before voting my post down. Remember, I am still trying to get help, not be neglected.
I am using version 1.4.1.1. When I type in the URL for admin, I get a 404 page. I walked through the code thoroughly and found that Mage_Core_Model_Store::getCode() returns null, which triggers the exception and ends the script. I do not have any other detail. I troubleshot the issue further by checking the database; the screenshot shows that there is in fact data in the code column.
So my question is: why is the model returning an empty value when the column clearly has one? What can I do to troubleshoot further and figure out why it's not working?
[EDIT UPDATE NEW]
I did some research. The reason it's returning NULL is that the store ID being passed is null:
Mage::getStoreConfigFlag('web/secure/use_in_adminhtml', Mage_Core_Model_App::ADMIN_STORE_ID); // THIS IS THE ID being specified
Mage_Core_Model_App::ADMIN_STORE_ID has no value in it, so this method throws the exception. I am not sure how to fix this.
You need to change the IDs of the admin store, website, and store group to zero. This is due to a MySQL import which didn't reset auto-increment values, combined with Magento hard-coding the fallback store ID to zero.
Refer to this answer for more information.
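A sketch of that fix in SQL, assuming a stock Magento 1.x schema (verify the codes/names against your own rows and back up the database first):

SET FOREIGN_KEY_CHECKS=0;
UPDATE core_website SET website_id = 0 WHERE code = 'admin';
UPDATE core_store_group SET group_id = 0 WHERE name = 'Default';
UPDATE core_store SET store_id = 0 WHERE code = 'admin';
SET FOREIGN_KEY_CHECKS=1;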
I know this is an older post, but I struggled with this exact same thing for a couple of days before figuring it out. I am not sure if it is something to do with the way the DB was backed up, but when I was trying to import a Magento 1.9.x database from a backup, I was getting all kinds of issues like this. I was able to fix the 404 by following the above steps, but then there were tons of other IDs that were incremented wrong all over the DB, causing all kinds of issues with products, menus, etc. The solution was to add this at the beginning of the SQL file
SET SQL_MODE="NO_AUTO_VALUE_ON_ZERO";
SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT;
SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS;
SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION;
SET NAMES utf8;
SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0;
SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0;
SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO';
SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0;
and add this at the end of the file
SET SQL_MODE=@OLD_SQL_MODE;
SET FOREIGN_KEY_CHECKS=@OLD_FOREIGN_KEY_CHECKS;
SET UNIQUE_CHECKS=@OLD_UNIQUE_CHECKS;
SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT;
SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS;
SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION;
SET SQL_NOTES=@OLD_SQL_NOTES;
prior to importing. Once I did that, the import went fine and all IDs were as they should be. Hope this helps someone out of a pinch as it did me.
Thanks,
Tony
I'm getting duplicate _ids when inserting documents into our Mongo database. This is an intermittent problem that only happens under some load (it is reproducible with some test scripts).
Here's some test code so you don't think I'm trying to double-insert the same object (I know that the PHP mongo driver adds the _id field):
// Insert a job
$job = array(
    'type'    => 'cleanup',
    'meta'    => 'cleaning the data',
    'user_id' => new MongoId($user_id),
    'created' => time(),
    'status'  => 'pending'
);
$this->db->job->insert($job, array('safe' => true)); // <-- failz here
I went on a frenzy and installed the latest stable (1.1.4) Mongo driver, to no avail. This isn't under heavy load. We're doing maybe 5 req/s on one server, so the ~16M-inserts-per-second limit of the ObjectId increment counter probably isn't the issue.
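For context on why that counter matters: an ObjectId is 12 bytes, and two _ids can only collide if every component matches. A small sketch with the legacy driver:

// Legacy ObjectId layout: 4-byte timestamp + 3-byte machine id + 2-byte PID + 3-byte counter.
// Two _ids collide only when all four parts are equal: same second, same
// machine, same PID, and the same counter value.
$id = new MongoId();
echo $id . "\n";                 // 24 hex characters
echo $id->getTimestamp() . "\n"; // the timestamp component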
Any ideas would be greatly appreciated. I'm hoping someone somewhere has used mongo with PHP and inserted more than 5 docs/s and had this issue ;).
-EDIT-
On CentOS 5.4 x86_64, linux 2.6.18-164.el5xen, Apache worker 2.2.15, PHP 5.2.13, MongoDB 1.8.1
-EDIT2-
As noted in the comments, I'm using the latest version of the PECL driver as of now (1.2.0) and the problem is still happening.
-EDIT3-
Forgot to post the exact error:
Uncaught exception 'MongoCursorException' with message 'E11000 duplicate key error index: hannibal.job.$_id_ dup key
There is a different solution for this (the prefork/worker MPM switch didn't help in my case; we were already running prefork, which is the default anyway).
The issue is that the insert array is passed by reference and modified by the PHP MongoDB library to include the ID. You need to clear the ID.
So imagine the following code:
$aToInsert = array('field' => $val1);
$collection->insert($aToInsert); // <-- '_id' gets added here
$aToInsert['field'] = $val2;
$collection->insert($aToInsert); // <-- this fails with the above error
Why? What happens with the library is:
$aToInsert = array('field' => $val1);
$collection->insert($aToInsert);
// $aToInsert has '_id' added by the PHP MongoDB library, therefore:
// $aToInsert = array('field' => $val1, '_id' => MongoId());
$aToInsert['field'] = $val2;
// Therefore $aToInsert = array('field' => $val2, '_id' => MongoId());
$collection->insert($aToInsert);
// This will not add a new '_id' as one already exists, and will now fail.
The solution is to reinitialise the array:
$aToInsert = array('field'=>$val1);
$collection->insert($aToInsert);
$aToInsert = array('field'=>$val2);
$collection->insert($aToInsert);
or to unset the ID:
$aToInsert = array('field'=>$val1);
$collection->insert($aToInsert);
unset($aToInsert['_id']);
$aToInsert['field'] = $val2;
$collection->insert($aToInsert); // <-- this will now work
Looks like it had to do with the Apache MPM installed (worker). After switching to Apache prefork, we've seen no more duplicate _id errors on the server.
My guess is this has something to do with the global counter the Mongo driver uses. I'm thinking the lack of communication between the threads may be the cause: maybe one pool keeps instance counters per thread, but since the PID is the same, you get conflicts.
I don't know the internals, but this seems the most likely explanation. Don't use the Apache worker MPM with the PHP MongoDB driver. Please comment and correct me if this is not the case, or if you know of a fix.