How to set up a cron per user in Laravel - PHP

I have set up a cron that runs a stock update for users. Now I have a per-user setting: if the stock update is set to 'Yes' by a user, then the cron should run only for that particular user.
I have googled it but could not find any solution.
Any reference or advice is welcome.

In Laravel it's bad practice to put logic directly in cron. Use Artisan commands and the scheduler for that. This problem is common, and a simple approach is to filter which users need an update inside the command itself.
class UpdateStocks extends Command
{
    protected $signature = 'update:stocks';

    public function handle()
    {
        User::where('update_stock', 'Yes')->get()->each(function (User $user) {
            // run the stock update logic for this user only
        });
    }
}
Put the command in the scheduler.
$schedule->command('update:stocks')->daily();
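For context, here is a minimal sketch of where that line lives, assuming the command class sits in the default app/Console/Commands directory:

<?php

namespace App\Console;

use App\Console\Commands\UpdateStocks;
use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel
{
    protected $commands = [
        UpdateStocks::class,
    ];

    protected function schedule(Schedule $schedule)
    {
        // Runs once a day; the command itself filters which users get updated.
        $schedule->command('update:stocks')->daily();
    }
}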

Related

How can I publish a blog post in the future at a specific date and time in Laravel?

I'm at beginner level with the Laravel PHP framework. I built a blog web application, but I want to make some upgrades.
One of the upgrades is being able to schedule posts to be published in the future at a selected date and time, like on Facebook, or like the RainLab blog plugin in October CMS.
I don't know how to go about this; I would really appreciate it if someone could help me out.
The easiest way to implement delayed posting is to add a publish date column (e.g. published_at) to the posts table and only retrieve posts whose publish date is earlier than now.
Schema:
$table->timestamp('published_at');
Retrieve example:
$posts = Post::where('published_at', '<', now())
    ->orderByDesc('published_at')
    ->paginate(50);
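If the same filter is needed in several places, a local query scope on the Post model keeps it in one spot. This is only a sketch on top of the answer above, not part of it:

use Illuminate\Database\Eloquent\Model;

class Post extends Model
{
    // Usage: Post::published()->paginate(50);
    public function scopePublished($query)
    {
        return $query->where('published_at', '<', now())
                     ->orderByDesc('published_at');
    }
}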
First, I would create two columns in the database, such as posted_at and show; those columns will be helpful later.
You should create a command using
php artisan make:command MyCommand
Then in app/Console/Commands you will have your command.
In app/Console/Kernel.php, register your command (its class path) in the protected $commands property.
Inside your command, use Eloquent or the DB query builder to get all posts where show = 0 and posted_at has passed:
$now = date("Y-m-d");
$data = DB::table('test')->where('show', 0)->whereRaw("posted_at < '$now'")->get();
Now you can use an each loop and change show to 1, something like this:
$data->each(function ($item) {
    DB::table('test')->where('id', $item->id)->update(['show' => 1]);
});
The last job is to put code in the Kernel so that it runs every minute (the string must match your command's $signature); try this:
$schedule->command('my:command')->everyMinute();
EDIT:
So I checked my code and made some changes; your command should look more or less like this:
$now = date("Y-m-d");
$data = DB::table('test')->where('show_', 0)->whereRaw("date(posted_at) <= '$now'")->get();
$data->each(function ($item) {
    DB::table('test')->where('id', $item->id)->update(['show_' => 1]);
});
Remember to put this at the top of your command if you use the DB facade:
use DB;
If you use Eloquent, add this instead, but change the DB calls to your model name:
use App\Name_model;
And this is Kernel.php:
protected $commands = [
    'App\Console\Commands\MyCommand',
];

// and

protected function schedule(Schedule $schedule)
{
    $schedule->command('my:command')->everyMinute();
}
I checked that after one minute the records in my test database had changed, and show_ = 0 became show_ = 1. That's all.
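Pulling the fragments above together, the whole command could look roughly like this. The table name test, the columns show_ and posted_at, and the signature my:command are the example values used above, and whereDate() stands in for the raw date(posted_at) comparison:

<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class MyCommand extends Command
{
    protected $signature = 'my:command';

    protected $description = 'Publish posts whose publish date has passed';

    public function handle()
    {
        // Every unpublished post whose publish date has passed...
        DB::table('test')
            ->where('show_', 0)
            ->whereDate('posted_at', '<=', now())
            ->get()
            // ...gets flagged as published.
            ->each(function ($item) {
                DB::table('test')->where('id', $item->id)->update(['show_' => 1]);
            });
    }
}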

Laravel Scout: how to update the index in a controller

Here is my question: I want to update the Scout index saved in storage from my controller. Any ideas how to do it?
I am using the TNTSearch package. I know I can run the Artisan command at the command prompt with $ php artisan scout:import App\\Models\\Paper
But I'm working on a website where everyone can submit their journals, and I need a powerful search engine on my website. So in this situation I need to update the index every time a journal is submitted, so that everyone is able to search the journals.
I managed to do part of this task by making a provider, TNTSearchScoutServiceProvider.
Here is TNTSearchScoutServiceProvider:
class TNTSearchScoutServiceProvider extends \TeamTNT\Scout\TNTSearchScoutServiceProvider
{
    public function boot()
    {
        $this->app[EngineManager::class]->extend('tntsearch', function ($app) {
            $tnt = new TNTSearch();

            $driver = config('database.default');
            $config = config('scout.tntsearch') + config("database.connections.{$driver}");

            $tnt->loadConfig($config);
            $tnt->setDatabaseHandle(app('db')->connection()->getPdo());

            $this->setFuzziness($tnt);
            $this->setAsYouType($tnt);

            return new TNTSearchEngine($tnt);
        });

        // Register the import command so it can be run even when
        // we're not running in the console
        $this->commands([
            ImportCommand::class,
        ]);
    }
}
After adding this provider to config/app.php, I use it in the controller like this:
Artisan::call('tntsearch:import', ['model' => 'App\Models\Paper']);
But this throws this error:
unlink(C:\wamp64\www\mywbsite\storage/papers.index): Resource temporarily unavailable
Here is what I have accomplished so far:
Although it throws the error, I only get the last updated row in the search results, and the older rows don't show up.
So what are your suggestions? Is there a better way to do this? Or should I check the site every day and run the Artisan commands so that the table gets indexed?
I finally managed to solve this problem:
To update the index in storage, you just make a new object from the TNTIndexer class. First you create the index, then you select the columns you want to index with the query() method, and finally you run() the indexer. Before that, make sure to load the configuration. Here is the method that I wrote in the controller:
protected function add_to_search()
{
    $indexer = new TNTIndexer;

    $driver = config('database.default');
    $config = config('scout.tntsearch') + config("database.connections.{$driver}");
    $indexer->loadConfig($config);

    $indexer->createIndex('paper.index');
    $indexer->query('SELECT id, title, description, abstract, keywords FROM papers;');
    $indexer->run();
}
This way the index is always kept up to date through the controller.
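For example, the method can be called right after a paper is saved so the index is rebuilt immediately. The store() action, the lack of validation, and the papers.show route name are assumptions for illustration; it also assumes Illuminate\Http\Request and the Paper model are imported in the controller:

// Hypothetical store() action in the same controller as add_to_search()
public function store(Request $request)
{
    $paper = Paper::create(
        $request->only(['title', 'description', 'abstract', 'keywords'])
    );

    // Rebuild the TNTSearch index so the new paper shows up in search results.
    $this->add_to_search();

    return redirect()->route('papers.show', $paper);
}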

How do I get Laravel to set a row in a database to a specific value at midnight?

I have a table in a MySQL database and I want to set a row in that table to false every midnight.
How would I go about doing this?
Create a controller function and execute it at midnight.
Let's say you have a ScheduleController:
class ScheduleController extends Controller {
    public function resetDataBase()
    {
        // write query here to change the table row.
        // You may use raw queries
    }
}
Then call this function in App\Console\Kernel.php
protected function schedule(Schedule $schedule)
{
    $schedule->call('\App\Http\Controllers\ScheduleController@resetDataBase')
        ->dailyAt('00:00');
}
On the server where the application is hosted, you have to set up a crontab entry:
* * * * * php /path/to/artisan schedule:run >> /dev/null 2>&1
Refer to the docs for more info.
You can set up a task to run at midnight and do whatever you want in that task. Check out the docs for task scheduling here.
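As a sketch of that, the update can also live directly in the schedule as a closure instead of going through a controller; the table and column names below are placeholders:

// app/Console/Kernel.php (with use Illuminate\Support\Facades\DB; at the top of the file)
protected function schedule(Schedule $schedule)
{
    $schedule->call(function () {
        // Placeholder table/column names: reset the flag to false every midnight.
        DB::table('some_table')->where('id', 1)->update(['some_flag' => false]);
    })->dailyAt('00:00');
}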

How to perform a delete operation on a Model?

I'm coming from a Ruby on Rails background, where you load up the Rails console to delete a user or all users. I am new to Laravel 5 and I am looking for something similar to delete a user that is already in the SQLite3 database.
I see people talking about User::find(1)->delete(); to delete a user, but where do you put that and run it? Is there a console to perform a delete task in? I would like to know how to delete a user without dropping the table. I do not want to soft delete.
You can put this code in a controller, for example.
You can use
$user = User::find($id);
$user->delete();
if you don't use the SoftDeletes trait, or
$user = User::find($id);
$user->forceDelete();
if you do, and you want to really remove the user from the database, not just hide it from results.
You can read more on the Laravel docs page.
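For context, forceDelete() only matters when the model uses soft deletes. A minimal sketch of such a model (illustrative, not from the answer above):

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\SoftDeletes;

class User extends Model
{
    // With this trait, delete() only sets deleted_at;
    // forceDelete() actually removes the row.
    use SoftDeletes;
}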
In Laravel 5 you can use the destroy method:
User::destroy($id);
And, sure, you have a command line to do so:
$ php artisan tinker
and you can run, for example:
>> $user = App\User::find($id);
>> $user->delete();
Several ways to do this.
If your controller defines the user as an argument:
public function destroy(User $user)
{
    return $user->delete();
}
You can also delete any user by $id:
User::destroy($id);
Assuming you're wrapping these routes with some security.
Edit: Corrected spelling
You can use the example below to delete data with multiple parameters:
tableName::where('field_1', '=', $para1)
    ->where('field_2', '=', $para2)
    ->delete();
This still works with Laravel 7. I use the Tinker command line and the delete() method:
php artisan tinker
Now I can run commands directly:
>> App\User::find($id)->delete();

Laravel migration transaction

When developing, I'm having so many issues with migrations in Laravel.
I create a migration. When I finish creating it, there's a small error in the middle of the migration (say, a foreign key constraint) that makes "php artisan migrate" fail. It tells me where the error is, indeed, but then migrate ends up in an inconsistent state, where all the modifications to the database made before the error are applied, but not the following ones.
This means that when I fix the error and re-run migrate, the first statement fails, as the column/table has already been created/modified. Then the only solution I know is to go to my database and "roll back" everything by hand, which takes much longer.
migrate:rollback tries to roll back the previous migrations, as the current one was not applied successfully.
I also tried to wrap all my code in a DB::transaction(), but it still doesn't work.
Is there any solution for this? Or do I just have to keep rolling things back by hand?
Edit: adding an example (not writing Schema builder code, just some kind of pseudo-code):
Migration 1:
Create Table users (id, name, last_name, email)
Migration 1 executed OK. Some days later we make Migration 2:
Create Table items (id, user_id references users.id)
Alter Table users make_some_error_here
Now what will happen is that migrate will run the first statement and create the table items with its foreign key to users. Then, when it tries to apply the next statement, it will fail.
If we fix make_some_error_here, we can't run migrate because the table "items" has already been created. We can't rollback (nor refresh, nor reset), because we can't delete the table users while there's a foreign key constraint from the table items.
Then the only way to continue is to go to the database and delete the table items by hand, to get migrate back into a consistent state.
It is not a Laravel limitation. I bet you use MySQL, right?
As the MySQL documentation says here:
Some statements cannot be rolled back. In general, these include data
definition language (DDL) statements, such as those that create or
drop databases, those that create, drop, or alter tables or stored
routines.
And we have a recommendation from Taylor Otwell himself here, saying:
My best advice is to do a single operation per migration so that your
migrations stay very granular.
-- UPDATE --
Do not worry!
The best practices say:
You should never make a breaking change.
It means that in one deployment you create new tables and fields and release code that uses them. In the next deployment, you delete the unused tables and fields.
Now, even if you hit a problem in either of these deployments, don't worry if your migration failed: the running release still uses a working data structure anyway. And with a single operation per migration, you'll find the problem in no time.
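As an illustration of that single-operation-per-migration advice, the problematic Migration 2 from the question could be split into two separate migration files, so a failure in the second one leaves the first applied and cleanly recorded. The table and column names come from the question's pseudo-code; the rest is a sketch:

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// First migration's up(): only creates the items table.
Schema::create('items', function (Blueprint $table) {
    $table->increments('id');
    $table->unsignedInteger('user_id');
    $table->foreign('user_id')->references('id')->on('users');
});

// Second migration's up() (a separate file): only alters the users table,
// so if it fails, the items migration above stays recorded and can be rolled back normally.
Schema::table('users', function (Blueprint $table) {
    // the change that previously failed goes here, on its own
});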
I'm using MySQL and I'm having this problem.
My solution relies on your down() method doing exactly what you do in up(), but backwards.
This is what I've got:
try {
    Schema::create('table1', function (Blueprint $table) {
        //...
    });

    Schema::create('table2', function (Blueprint $table) {
        //...
    });
} catch (PDOException $ex) {
    $this->down();
    throw $ex;
}
So here, if something fails, it automatically calls the down() method and re-throws the exception.
Instead of wrapping the migration in transaction(), wrap it in this try.
As Yevgeniy Afanasyev highlighted Taylor Otwell saying (an approach I had already taken myself): have your migrations only work on specific tables or do one specific operation, such as adding/removing a column or key. That way, when you get failed migrations that cause inconsistent states like this, you can just drop the table and attempt the migration again.
I’ve experienced exactly the issue you’ve described, but as of yet haven’t found a way around it.
Just remove the failed code from the migration file and generate a new migration for the failed statement. Now, when it fails again, the earlier table creation is still intact because it lives in another migration file.
Another advantage of this approach is that you have more control and smaller steps when reverting the DB.
Hope that helps :D
I think the best way to do it is as shown in the documentation:
DB::transaction(function () {
    DB::table('users')->update(['votes' => 1]);
    DB::table('posts')->delete();
});
See: https://laravel.com/docs/5.8/database#database-transactions
I know it's an old topic, but there was activity a month ago, so here are my 2 cents.
This answer is for MySQL 8 and Laravel 5.8.
MySQL 8 introduced atomic DDL: https://dev.mysql.com/doc/refman/8.0/en/atomic-ddl.html
At the start of a migration, Laravel checks whether the schema grammar supports migrations in a transaction and, if it does, runs the migration inside one.
The problem is that the MySQL schema grammar has this set to false. We can extend the Migrator, the MySQL schema grammar, and the MigrationServiceProvider, and register the service provider like so:
<?php

namespace App\Console;

use Illuminate\Database\Migrations\Migrator as BaseMigrator;
use App\Database\Schema\Grammars\MySqlGrammar;

class Migrator extends BaseMigrator {

    protected function getSchemaGrammar( $connection ) {
        if ( get_class( $connection ) === 'Illuminate\Database\MySqlConnection' ) {
            $connection->setSchemaGrammar( new MySqlGrammar );
        }

        if ( is_null( $grammar = $connection->getSchemaGrammar() ) ) {
            $connection->useDefaultSchemaGrammar();
            $grammar = $connection->getSchemaGrammar();
        }

        return $grammar;
    }
}
<?php

namespace App\Database\Schema\Grammars;

use Illuminate\Database\Schema\Grammars\MySqlGrammar as BaseMySqlGrammar;

class MySqlGrammar extends BaseMySqlGrammar {

    public function __construct() {
        $this->transactions = config( "database.transactions", false );
    }
}
<?php

namespace App\Providers;

use Illuminate\Database\MigrationServiceProvider as BaseMigrationServiceProvider;
use App\Console\Migrator;

class MigrationServiceProvider extends BaseMigrationServiceProvider {

    /**
     * Register the migrator service.
     *
     * @return void
     */
    protected function registerMigrator() {
        $this->app->singleton( 'migrator', function( $app ) {
            return new Migrator( $app[ 'migration.repository' ], $app[ 'db' ], $app[ 'files' ] );
        } );

        $this->app->singleton( \Illuminate\Database\Migrations\Migrator::class, function ( $app ) {
            return $app[ 'migrator' ];
        } );
    }
}
<?php

return [

    'providers' => [

        /*
         * Laravel Framework Service Providers...
         */
        App\Providers\MigrationServiceProvider::class,

    ],

];
Of course, we have to add transactions to our database config...
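For instance, a sketch of that key: the custom grammar above reads config('database.transactions', false), so the key would sit in config/database.php (the exact placement and default are assumptions):

// config/database.php (sketch)
return [

    // ... the existing 'default', 'connections', etc. stay as they are ...

    // Read by the custom MySqlGrammar above; set to true to let
    // migrations run inside a transaction on MySQL 8.
    'transactions' => true,

];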
DISCLAIMER - Haven't tested yet, but looking only at the code it should work as advertised :) Update to follow when I test...
Most of the answers overlook a very important fact about a very simple way to structure your development against this. If you make all migrations reversible and add as much of the dev testing data as possible through seeders, then when artisan migrate fails on the dev environment you can correct the error and then run
php artisan migrate:fresh --seed
Optionally coupled with a :rollback to test rolling back.
For me personally artisan migrate:fresh --seed is the second most used artisan command after artisan tinker.
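For completeness, "reversible" here just means each migration's down() undoes exactly what its up() did, so migrate:rollback and migrate:refresh always work. A minimal sketch:

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

class CreateItemsTable extends Migration
{
    public function up()
    {
        Schema::create('items', function (Blueprint $table) {
            $table->increments('id');
            $table->timestamps();
        });
    }

    // down() mirrors up(), which is what makes rolling back safe.
    public function down()
    {
        Schema::dropIfExists('items');
    }
}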
