The problem
As per my previous question here, it was pointed out to me that I shouldn't be trying to fill related models in a Laravel factory (i.e. I should fill them in their own factory).
However, I have an observer that looks for related data during creation and tries to fill the related models (this is so I can create multiple related entities using just the ::create method and a single multi-step form). Now I need to add a check in the observer for whether this data is populated, so I don't have to specify it in the factory.
In doing so, I now get a segmentation fault when trying to seed my database:
Segmentation fault (core dumped)
I've narrowed the cause down to this line; without the isset check it works fine (other than $data['day'] not being specified, hence the check):
if (isset($data['day'])) $event->day->fill($data['day']);
Related Code
EventFactory.php
$factory->define(App\Event::class, function (Faker $faker) {
    return [
        "name" => "A Test Event",
        "description" => $faker->paragraphs(3, true),
        "event_start_date" => today(),
        "event_opening_date" => today(),
        "event_closing_date" => tomorrow(),
        "user_id" => 1,
        "banner_id" => 1,
        "gallery_id" => 1,
        "related_event_id" => 1,
        "status" => "published",
        "purchase_limit" => 1000,
        "limit_remaining" => 1000,
        "delivery_method" => "collection",
        "merchandise_delivery_method" => "collection"
    ];
});
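As an aside, the "fill them in their own factory" advice from my previous question would look something like this minimal sketch, assuming a Day model with its own factory exists and day is a hasOne relation (both assumptions on my part):
$factory->afterCreating(App\Event::class, function ($event, $faker) {
    // create the related Day through its own factory once the Event
    // has been persisted, instead of filling it in the observer
    $event->day()->save(factory(App\Day::class)->make());
});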
EventObserver.php
public function created($event)
{
    # get all attributes
    $data = $event->getAttributes();

    # fill any related models
    if (isset($data['day'])) $event->day->fill($data['day']);

    # save the event and any loaded relations
    $event->push();
}

public function updating($model)
{
    # get all attributes
    $data = $model->getAttributes();

    # fill any related models
    if (isset($data['day'])) $model->day->fill($data['day']);

    # save the model and any loaded relations
    $model->push();
}
Other Info
Command: sudo php artisan migrate:reset --seed
Host: Windows 10
VM Environment: Vagrant running Ubuntu 16.04 via HyperV, mounted share with Samba
PHP Version: 7.1.20
Laravel Version: 5.7
Update
Turns out the issue is actually with this line:
$event->push();
Could there be something recursive happening here?
Update 2
With Namoshek's help, I can now narrow it down to the following error from Xdebug:
Maximum function nesting level of '256' reached, aborting!
Increasing xdebug.max_nesting_level to 200000 brings back the segfault.
This seems to me like it's stuck in an infinite loop. However, I can't see how calling save() or push() in created would end up calling back to itself. Confused.
This did indeed turn out to be an infinite recursion issue. Eliminating the line:
$event->push(); // this line appears to call update again, which in turn calls push, which calls update, etc.
solved the problem.
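For anyone hitting the same loop, a minimal sketch of one way to break it: save only the related model instead of calling push(), so the Event's own save/update events (and with them this observer) are never fired again. This assumes day is a loaded hasOne relation, as in the code above.
public function created($event)
{
    # get all attributes
    $data = $event->getAttributes();

    # persist only the relation; saving the Day model fires Day events,
    # not Event events, so the observer is not re-entered
    if (isset($data['day'])) {
        $event->day->fill($data['day'])->save();
    }
}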
Related
I am trying to update PHP-DI from 5.4.6 to 6.4 and I don't know how to write a definition for objects which should be created every time they are injected.
In 5.4.6 I simply wrote:
return [
    'setasign\\Fpdi\\Fpdi' => DI\object()->scope(Scope::PROTOTYPE()),
];
But in 6.4, DI\object()->scope(Scope::PROTOTYPE()) no longer exists.
I read the documentation (https://php-di.org/doc/scopes.html) and understand that I can now use $container->make() instead of $container->get(), but then I would have to rewrite a lot of code in my repository.
Is there an option to resolve the problem directly in my definition file?
Thanks
This way, I get the same instance every time I call $container->get():
'setasign\\Fpdi\\Fpdi' => DI\factory(
    function () {
        return new setasign\Fpdi\Fpdi();
    }
),
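For what it's worth, a minimal sketch of the make() approach mentioned above, assuming PHP-DI 6: make() builds a fresh instance on every call, while get() keeps returning the shared one.
$container = (new DI\ContainerBuilder())->build();

$pdfA = $container->make(setasign\Fpdi\Fpdi::class);
$pdfB = $container->make(setasign\Fpdi\Fpdi::class);
// $pdfA !== $pdfB here, whereas two get() calls would return the same instance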
So I'm running CakePHP 4 on an EC2 instance with AWS ES 7, and I've set up the ElasticSearch plugin in CakePHP:
composer require cakephp/elastic-search "^3.0"
I've added the elastic datasource connection in config/app.php
'elastic' => [
    'className' => 'Cake\ElasticSearch\Datasource\Connection',
    'driver' => 'Cake\ElasticSearch\Datasource\Connection',
    'host' => 'search-DOMAIN.REGION.es.amazonaws.com',
    'port' => 443,
    'transport' => "AwsAuthV4",
    'aws_access_key_id' => "KEY",
    'aws_secret_access_key' => "SECRET",
    'aws_region' => "REGION",
    'ssl' => 'true',
],
... and I've activated the plugin:
use Cake\ElasticSearch\Plugin as ElasticSearchPlugin;

class Application extends BaseApplication
{
    public function bootstrap()
    {
        $this->addPlugin(ElasticSearchPlugin::class);
        // ...
    }
}
I've manually added 1 index record to ES via curl from the EC2 instance. So I know the communication between EC2 and ES works.
curl -XPUT -u 'KEY:SECRET' 'https://search-DOMAIN.REGION.es.amazonaws.com/movies?pretty' -d '{"director": "Burton, Tim", "genre": ["Comedy","Sci-Fi"], "year": 1996, "actor": ["Jack Nicholson","Pierce Brosnan","Sarah Jessica Parker"], "title": "Mars Attacks!"}' -H 'Content-Type: application/json'
I also managed to search for this record via curl without any problems.
In AppController.php I tried this simple search just to see if the plugin works, and for the life of me I can't get it to work:
# /src/Controller/AppController.php
...

use Cake\ElasticSearch\IndexRegistry;

class AppController extends Controller
{
    public function initialize(): void
    {
        parent::initialize();

        $this->loadModel('movies', 'Elastic');
        $query = $this->movies->find('all');
        $results = $query->toArray();
    }
}
I'm getting the following error:
Client error: POST
https://search-DOMAIN.REGION.es.amazonaws.com/movies/movies/_search
resulted in a 403 Forbidden response: {"message":"The security token
included in the request is invalid."}
Elastica\Exception\Connection\GuzzleException
It seems like the plugin adds the index name twice for some reason. I've looked everywhere for a setting that I might have missed. If I copy the above URL, remove the duplicate index segment, and paste it into a browser, it works fine:
https://search-DOMAIN.REGION.es.amazonaws.com/movies/_search
Am I missing something here?
I've even tried this method, and I get the same problem with the duplicated index value in the URL:
$Movies = IndexRegistry::get('movies');
$query = $Movies->find('all');
$results = $query->toArray();
I've tried a new/clean CakePHP instance and I get the same problem. Is there something wrong with the plugin? Is there a better approach to communicating with ES via CakePHP?
I'm not familiar with the plugin and Elasticsearch, but as far as I understand, one of the movies is the index name and the other is the type name, where the type name - at least according to the documentation - should be singular, i.e. the path would instead be expected to look like:
/movies/movie/_search
Furthermore, Index classes assume that the type mapping has the singular name of the index. For example the articles index has a type mapping of article.
https://book.cakephp.org/elasticsearch/3/en/3-0-upgrade-guide.html#types-renamed-to-indexes
Whether that would be the correct path with respect to what the Elasticsearch version in use supports might be a different question.
You may want to open an issue over at GitHub.
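If you do end up needing to control that path segment, here is a purely hypothetical sketch of a dedicated index class; whether the plugin's 3.x Index class actually exposes the type this way is an assumption on my part, not something I have verified:
// src/Model/Index/MoviesIndex.php
namespace App\Model\Index;

use Cake\ElasticSearch\Index;

class MoviesIndex extends Index
{
    // per the upgrade guide quoted above, the type mapping defaults to
    // the singular of the index name ('movie'); overriding getType() to
    // return it explicitly is an assumed hook - check the plugin's
    // Index API for the real one
    public function getType()
    {
        return 'movie';
    }
}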
I'm using Docker to run a Laravel project, and I have this command in Laravel that writes to a file in the storage folder:
Artisan::call(
    'app:cache',
    [
        "--message" => 'this is a command',
        "--seconds" => 0
    ]
);
When I call it through the web like this:
Route::get('/', function () {
    \Artisan::call(
        'app:cache',
        [
            "--message" => 'this is a command',
            "--seconds" => 0
        ]
    );
});
an exception is thrown from /src/vendor/symfony/console/Input/ArrayInput.php with this message: "Trying to access array offset on value of type int".
On the command line, however, the command works completely fine.
The issue came from the PHP version I was using. It was 7.4.1, and the package Laravel uses had changed for this version of PHP, which caused the error. I changed my PHP version to 7.2 and it worked.
I was given this code, but normally programmers should not use these kinds of packages this way. In this case, for example, the web layer should not call Artisan commands directly, because if a situation like this happens you will need to rewrite your code everywhere. Instead, write your own code and put what you want behind it. As an example:
Imagine you have an artisan command that writes some information to a file, and you want to use it in your web section. You should put that code into a class or function outside of your artisan command and use that class/function in both the artisan command and the web code. Then code like the following:
Route::get('/', function () {
    \Artisan::call(
        'app:cache',
        [
            "--message" => 'this is a command',
            "--seconds" => 0
        ]
    );
});
will become code like:
Route::get('/', function () {
    $cache = new \App\Custom\Cache(
        'this is a command', // message
        0                    // seconds
    );
});
This way, your functional code is separated from your framework, and even if you use some packages in your class/function and a package's usage needs to change (as it did here with the Artisan call), there is just one place to update, not multiple places.
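A hypothetical sketch of what such a class could look like; the namespace, file path, and file name are illustrative, not from the original code:
// app/Custom/Cache.php
namespace App\Custom;

use Illuminate\Support\Facades\Storage;

class Cache
{
    private $message;
    private $seconds;

    public function __construct(string $message, int $seconds = 0)
    {
        $this->message = $message;
        $this->seconds = $seconds;
    }

    // write the message to a file in the storage folder, like the
    // original app:cache command does; the file name is an assumption
    public function write()
    {
        Storage::put('app-cache.txt', $this->message);
    }
}
The app:cache command's handle() method can then call the same class (for instance $cache->write()), so the CLI and the web route share a single code path.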
Try calling it like this:
Artisan::call('app:cache --message="this is a command" --seconds=0');
and if you want to put dynamic variables in it:
Artisan::call('app:cache --message="' . $yourMessage . '" --seconds=' . $seconds);
Passing your variables in a single string like this means you do not have to handle any arrays, and it stays simple to read.
I am experiencing a strange issue where my code runs perfectly fine on my local machine without any errors, but on my live (production) server I get these errors when jobs run automatically (which go through the CLI instead of running through PHP, I believe):
SQLSTATE[23000]: Integrity constraint violation: 1062 Duplicate entry
...this only seems to happen when the function that performs the SQL inserts is run via a "php artisan" command (or when that command is run automatically by the server/jobs/cron jobs). The inserts are done like this:
$carValues = [
    'name' => $carName,
    'car_id' => $carID,
    'count' => $count
];

if ($car) {
    // Record already exists - update it
    $car->fill($carValues);
    $car->save(); // save changes
} else {
    // Record does not exist - add new record
    $car = Car::create($carValues);
}
In the above example, 'name' has a unique key and is what triggers the errors. Basically, if $car was already non-null before the above code segment we'd do an update, otherwise we'd create the record. This works 100% of the time on my local machine. Somehow, on the live server only (when using a php artisan command or letting the command run through the CLI/scheduled jobs), it runs into these duplicate entries, but the errors don't point me to any specific segment of code; they are dispatched as PDOException from .../vendor/laravel/framework/src/Illuminate/Database/Connection.php
I'm really confused by this one. Is there perhaps some way to have PDOException ignore these? It's stopping my scheduled jobs from running consistently, when ideally they should continue without throwing these errors. Again, it works on my local machine (running a Homestead/Vagrant box), which matches my online server's setup.
This is a concurrency problem.
You are only experiencing this in your production environment because that is where you have set up queued execution of jobs.
This means that there might be multiple jobs that run simultaneously.
So this happens:
job A tries to fetch $car (does not exist)
job B tries to fetch $car (does not exist)
job A then inserts it into the database
job B then tries to insert it into the database but can't because it has just been inserted by job A.
So you either have to add retries, or make use of "INSERT ... ON DUPLICATE KEY UPDATE", which performs the "update or create" at the database level.
Even though Laravel has a built-in updateOrCreate method, that is not concurrency safe either!
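A minimal sketch of that database-level variant for the cars example, assuming MySQL and a cars table (the column names are taken from the question):
use Illuminate\Support\Facades\DB;

// one atomic statement: insert the row, or update it if the unique
// 'name' key already exists, so concurrent jobs cannot race each other
DB::statement(
    'INSERT INTO cars (name, car_id, count) VALUES (?, ?, ?)
     ON DUPLICATE KEY UPDATE car_id = VALUES(car_id), count = VALUES(count)',
    [$carName, $carID, $count]
);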
An easy fix, and a way to test that this is actually the case, is to wrap your code in:
DB::transaction(function () {
... your code ....
}, 5);
which will retry the transaction 5 times if it fails.
Another way to solve this is to ensure that retry_after is longer than the longest job execution time.
So if a job takes 120 seconds, make sure this value is at least 180 seconds.
You can find the retry_after value in the config/queue.php file.
Here is what mine looks like for the Redis connection
'redis' => [
'driver' => 'redis',
'connection' => 'default',
'queue' => 'default',
'retry_after' => 240, // the longest job runs for 180 seconds. To prevent job duplication, make this longer
'block_for' => 2,
'after_commit' => false,
],
I'm getting a very strange error when running a unit test:
PDOException : SQLSTATE[42S02]: Base table or view not found: 1146 Table 'test.result' doesn't exist
/var/www/html/project1/rami/tests/Data/Models/DataImportTest.php:60
The test code in question (simplified to try to isolate the problem):
/**
 * @covers \Project1\Rami\Data\Models\DataImport::insertData
 */
public function testInsertData(): void {
    $this->object->insertData(1);

    $sql = 'SELECT request_id
            FROM requests
            WHERE request_id = 1;';
    $queryTable = $this->getConnection()->createQueryTable('result', $sql);

    $expectedTable = $this->createArrayDataSet([
        'result' => [
            [
                'request_id' => '1'
            ]
        ]
    ])->getTable('result');

    static::assertTablesEqual($expectedTable, $queryTable);
}
What's even stranger is that assertions in other tests that use assertTablesEqual run and pass fine; it is only this test that fails. PHPUnit appears to be introspecting a table on the database called "result" when creating the expected table (a table which does not exist in the database), but it doesn't do that for any of the other tests.
I have tried dropping the database and recreating it, reloading the dev/test environment (a Vagrant box), and even reprovisioning the Vagrant box with a fresh install of MariaDB, all without success.
Googling the error only shows Laravel related problems, with a small handful of similar problems in other PHP frameworks, but nothing related to testing.
The implementation works fine as far as I can tell, and running the query manually on the test database doesn't cause any error.
Any ideas?