run update/delete query when mysql server starts/restarts? - php

I am developing an app based on PHP/Laravel. There is a scenario where I have to insert a large amount of data into a MySQL table, and the insert takes roughly 1-2 minutes to complete once the user submits it. I want to block any further insert request while the previous one is still in progress.
So I made a log table: a 0 is written to its is_completed field before the bulk insert starts on the other table, and when the bulk insert loop completes, I trigger another query that sets is_completed to 1 in the log table.
This way I can check whether the is_completed flag is set to 1 before allowing the user's next request, and otherwise skip the query. I use a transaction for the bulk insert, and I clear the log when handling any exception that occurs.
The problem arises if the server restarts while the bulk insert is in progress: in that case the is_completed flag is left at 0 in the log table, and the user won't be able to submit another insert until the next day.
Does MySQL provide any sort of automation so that a query can be run as soon as the DB server starts? If so, I would use it to clear the log table. Can anyone tell me how to do that, or suggest a better solution for my scenario?

You can use the init_file server option to run SQL statements on server startup.
https://dev.mysql.com/doc/refman/5.6/en/server-options.html#option_mysqld_init-file
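For example, a minimal setup along those lines, assuming your log table is called insert_log (an illustrative name, use whatever your real log table is called): add the option to the [mysqld] section of my.cnf / my.ini,
init_file=/etc/mysql/clear-insert-log.sql
and put the clean-up statement in that file. Each statement in an init file must sit on a single line, and the file must be readable by the server:
DELETE FROM insert_log WHERE is_completed = 0;
The statement runs every time mysqld starts, so a restart in the middle of a bulk insert can no longer leave a stale 0 behind in the flag.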

Related

MySQL: How to lock a whole table during a transaction?

I have a transaction like this (InnoDB):
START TRANSACTION;
SELECT 1 FROM test WHERE id > 5; -- let's assume this returns 0 rows
-- some very long operation here
-- if the previous SELECT returned 0 rows, this INSERT will be executed
INSERT INTO test VALUES ...;
COMMIT;
Now the problem is that if several sessions execute this at the same time, they will all end up running the INSERT: by the time the long task finishes in each of them, every session has had plenty of time to do the SELECT, and it returned 0 rows for all of them, since the INSERT had not yet been executed while the long task was running.
So basically, I need to somehow lock the whole table test (so it can't be read by other sessions and they will be forced to wait) after I execute START TRANSACTION, but I am not sure how, because I can't use the LOCK TABLES test query, because that COMMITs the transaction I have started.
I also cannot use SELECT .. FOR UPDATE, because that only prevents existing rows from being modified, but it won't prevent new rows from being inserted.
If you've got some long-running task which only needs to be run once, set a flag in a separate table to say that the task has started, and check that instead of the number of rows written to the target table, so that another instance of the service does not kick off the job twice.
This also has the advantage that you're not relying on the specifics of the task in order to know its status (so if the nature of the task changes, the integrity of your other code doesn't collapse), and you're not trying to work around the transaction by doing something horrible like locking an entire table. One of the points of using transactions is that it's not necessary to lock everything (although of course different isolation levels can be used, but that's not the subject of this discussion).
Then set the flag back to false when the last bit of the task has finished.
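A minimal sketch of that approach (the job_flags table and its columns are made-up names for illustration): claim the job with a single atomic UPDATE, and only run the long task plus the INSERT if your client reports exactly one affected row.
CREATE TABLE job_flags (
    job_name   VARCHAR(64) PRIMARY KEY,
    is_running TINYINT NOT NULL DEFAULT 0
);
INSERT INTO job_flags (job_name) VALUES ('long_task');

-- each session tries to claim the job; only one of them can flip 0 to 1
UPDATE job_flags SET is_running = 1
WHERE job_name = 'long_task' AND is_running = 0;
-- affected rows = 1: this session owns the job, run the long task and the INSERT into test
-- affected rows = 0: another session got there first, skip

-- when the last bit of the task has finished, release the flag
UPDATE job_flags SET is_running = 0 WHERE job_name = 'long_task';
Because a single UPDATE is atomic, only one of the competing sessions sees an affected-row count of 1, which is exactly the "has the task started?" check described above.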

I need to push data to a table post uploading file. Is there any way of doing it without triggers

I am uploading data into a temp table after the file upload. I then need to perform joins, select a few columns, and push that data to another table in PHP. Is there any way of doing this without using triggers?
If you don't want to use a trigger to react to insertions in a table, you pretty much need to poll that table for new insertions at a given interval. You will need a flag in the polled table to know whether a row has already been processed in a previous run, to prevent reacting multiple times to a single insert.
One way of doing this is a CRON job, run every minute, that executes a PHP script which simply selects all rows of your table where the isProcessed flag is false. Of course, the default value of the isProcessed column must be false.
Then, for each row obtained by that query, you do what you want to do, i.e. "perform joins, select a few columns and push that data to another table in PHP", to quote yourself.
Then update those same rows in your original temp table so that their isProcessed flag is now true.
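A rough sketch of such a script, using PDO; the table and column names (temp_upload, other_table, target_table, isProcessed) and the join are placeholders you would replace with your real query:
<?php
// poll.php - run from cron, e.g. every minute: * * * * * php /path/to/poll.php
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// 1. select all rows of the temp table whose isProcessed flag is still false
$ids = $pdo->query('SELECT id FROM temp_upload WHERE isProcessed = 0')
           ->fetchAll(PDO::FETCH_COLUMN);

$copy = $pdo->prepare(
    'INSERT INTO target_table (col_a, col_b)
     SELECT t.col_a, o.col_b
     FROM temp_upload t
     JOIN other_table o ON o.ref_id = t.id
     WHERE t.id = ?'
);
$mark = $pdo->prepare('UPDATE temp_upload SET isProcessed = 1 WHERE id = ?');

foreach ($ids as $id) {
    $copy->execute([$id]);   // 2. join, pick the columns, push into the other table
    $mark->execute([$id]);   // 3. flag the row so the next cron run skips it
}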
I assume you have a good reason not to use database triggers, because triggers are there to do exactly what you want in the simplest way it can be done.

PHP MySQL Task API, Prevent Duplicate Records

I am building a PHP RESTful-API for remote "worker" machines to self-assign tasks. The MySQL InnoDB table on the API host holds pending records that the workers can pick up from the API whenever they are ready to work on a record. How do I prevent concurrently requesting worker system from ever getting the same record?
My initial plan to prevent this is to UPDATE a single record with a uniquely generated ID in a default NULL field, and then poll for the details of the record where the unique ID field matches.
For example:
UPDATE mytable SET status = 'Assigned', uniqueidfield = '3kj29slsad'
WHERE uniqueidfield IS NULL LIMIT 1
And in the same PHP instance, the next query:
SELECT id, status, etc FROM mytable WHERE uniqueidfield = '3kj29slsad'
The resulting record from the SELECT statement above is then given to the worker. Would this prevent simultaneously requesting workers from being given the same record? I am not exactly sure how MySQL handles the lookups within an UPDATE query, and whether two UPDATEs could "find" the same record and then update it sequentially. If this works, is there a more elegant or standardized way of doing this (I'm not sure whether FOR UPDATE would need to be applied here)? Thanks!
Never mind my previous answer. I believe I understand what you are asking. I'll reword it so maybe it is clearer to others.
"If I issue two of the above update statements at the same time, what would happen?"
According to http://dev.mysql.com/doc/refman/5.0/en/lock-tables-restrictions.html, the second statement would not interfere with the first one.
Normally, you do not need to lock tables, because all single UPDATE statements are atomic; no other session can interfere with any other currently executing SQL statement.
A more elegant way is probably opinion based, but I don't see anything wrong with what you're doing.
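For what it's worth, a rough PHP sketch of that pattern (connection details are placeholders, the table and column names are the ones from the question); checking the affected-row count tells the worker whether it actually claimed a record:
<?php
$pdo = new PDO('mysql:host=localhost;dbname=tasks;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$token = bin2hex(random_bytes(16));   // unique per request

// atomically claim at most one unassigned record
$claim = $pdo->prepare(
    "UPDATE mytable SET status = 'Assigned', uniqueidfield = ?
     WHERE uniqueidfield IS NULL LIMIT 1"
);
$claim->execute([$token]);

if ($claim->rowCount() === 0) {
    exit;   // nothing left to hand out
}

// fetch the record this worker just claimed
$task = $pdo->prepare('SELECT id, status FROM mytable WHERE uniqueidfield = ?');
$task->execute([$token]);
$record = $task->fetch(PDO::FETCH_ASSOC);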

Termination of PHP script between two mysql queries

Is it possible that a PHP script gets terminated between two MySQL queries?
For example user registration:
1st INSERT: I will enter the basic first name, last name and address in one table,
and
2nd INSERT: I will enter the user's hashed password and salt in another table.
This operation requires two queries, and either record on its own is useless.
What if the PHP script terminates after executing the first query?
The user will just get a server error message, but one useless record will have been created.
Any solutions?
EDIT ------
My web host does not provide the InnoDB engine.
Only MyISAM is supported.
Use a transaction:
START TRANSACTION;
INSERT INTO foo ...
INSERT INTO bar ...
COMMIT;
If either INSERT fails, you ROLLBACK the transaction and you won't be left with "useless" records.
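In PHP this is typically wrapped like the sketch below (PDO shown; the table and column names are placeholders, and it assumes a transactional engine such as InnoDB, since MyISAM ignores transactions):
<?php
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$hash = password_hash('the-password', PASSWORD_DEFAULT);   // placeholder values
$salt = bin2hex(random_bytes(8));

try {
    $pdo->beginTransaction();
    $pdo->prepare('INSERT INTO users (first_name, last_name, address) VALUES (?, ?, ?)')
        ->execute(['John', 'Doe', '123 Main St']);
    $pdo->prepare('INSERT INTO credentials (user_id, password_hash, salt) VALUES (?, ?, ?)')
        ->execute([$pdo->lastInsertId(), $hash, $salt]);
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();   // neither row is kept if anything failed
    throw $e;
}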
If you use a transaction such as:
BEGIN;
INSERT INTO TBL(...) VALUES(...);
INSERT INTO TBL(...) VALUES(...);
COMMIT;
Everything is sent to the MySQL server in one go and then run as a batch, meaning your script can terminate as soon as the transfer is complete rather than waiting for each individual query. It also has the added bonus of using less RAM and being much faster.

Could one user update while other users cannot select

Is there a way in MySQL and PHP to allow only the person performing an update to view information about a particular record?
For example, if one user loads the page, they are presented with a record which must be updated; until that user finishes updating this record, any other user accessing this page should not be able to view this particular record.
You'll have to manually lock the row (by adding an IsLocked column) when someone requests the edit page, since the connection to the database is lost as soon as the PHP script execution ends (despite pooling et al., script execution stops, so you cannot unlock from the same thread again, because that connection may go to another script).
Don't forget to create a kind of unlocking script, initiated by cron for example, to unlock rows that have been locked for more than a given amount of time.
I don't recommend adding any column to the data table in question. I'd rather create a special locks table to hold the information:
create table locks (
    tablename varchar(64),
    primarykey int,
    userid int,
    locktime datetime
);
Then take the following principles into consideration:
each PHP request is a standalone MySQL connection; that's why solutions like SELECT ... FOR UPDATE won't work, and that's why you need to keep the userid of the person who actually made the first request and performed the lock
every access to the locks table must lock that table as a whole (using MySQL's LOCK TABLES statement) to avoid concurrency in locking the same row
if there is no way to know whether a particular user has abandoned editing the row (e.g. by closing the window with the record), then either the locktime timeout must be short, or you should provide some ping (i.e. AJAX) mechanism that resets locktime as long as the user is working on the locked record
the user can save changes to the record as long as he/she owns the lock and the locktime has not expired
tablename and primarykey are of course samples and you should adjust them to your needs :-)
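A rough sketch of taking such a lock (values hard-coded for illustration, and the 10-minute timeout is just an example):
LOCK TABLES locks WRITE;

-- is somebody else already editing row 42 of "records"?
SELECT userid FROM locks
WHERE tablename = 'records' AND primarykey = 42
  AND locktime > NOW() - INTERVAL 10 MINUTE;

-- if that returned nothing, take the lock for user 7
INSERT INTO locks (tablename, primarykey, userid, locktime)
VALUES ('records', 42, 7, NOW());

UNLOCK TABLES;
The LOCK TABLES ... WRITE around the check-and-insert is what prevents two PHP requests from grabbing the same record at the same moment.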
add a "locked" column to the table, and once a user calls the edit form, set the "locked" db value to the user_id, and after save set it back to false/null.
In your view action, check the locked value
You could also add a field named something like locked where you store a status. Maybe you add a field like lockedtime as well, where you save a timestamp indicating how long the lock is active. That depends on your needs.
There are also ways to do this natively, like:
SELECT * FROM table WHERE primarykey = x FOR UPDATE;
