I need to update or insert, in a Postgres DB, a string value containing letters and numbers, which must be incremented by 1 (through PHP string functions) every time there is an UPDATE.
I need to LOCK this table so that the PHP script can complete its insert-or-update flow without other visitors to the page receiving the same result from the SELECT.
The second to arrive would wait for the first to finish.
The second to arrive will ALWAYS perform an UPDATE.
This update could generate FILENAME0023, if there were 22 updates after the first insertion.
There could be more connections at the same time; I need to reserve the first result of the SELECT for the first client that connects to this PHP page.
The flow would be:
LOCK
SELECT column FROM table WHERE column = 'FILENAME0001';
IF NOT EXISTS { INSERT INTO table (column) VALUES ('$my_column'); }
ELSE { UPDATE table SET column = '$my_new_column' WHERE column = '$my_column'; }
UNLOCK
The variable $my_new_column is built with the PHP SUBSTR function, which cuts out the numeric part of the string so it can then be incremented by 1.
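That increment step could look like this (a sketch in Python rather than PHP, since the question includes no code; the FILENAME prefix and four-digit zero padding are taken from the examples above):

```python
def next_filename(current: str, prefix: str = "FILENAME") -> str:
    """Increment the numeric suffix of a value like FILENAME0023 -> FILENAME0024."""
    digits = current[len(prefix):]                   # the numeric part, e.g. "0023"
    number = int(digits) + 1                         # increment by 1
    return prefix + str(number).zfill(len(digits))   # keep the zero padding

print(next_filename("FILENAME0023"))  # FILENAME0024
```

The PHP equivalent would use substr() and str_pad() in the same way.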
This link is helping me a lot: THIS.
But it does not contain everything.
I also tried working with a stored procedure: LINK
But I have to handle the update in PHP, because I cannot increment a DB value as shown HERE: I do not have an INT but a string, and I cannot change this.
Can anyone help me?
I'd like to share my code, but believe me, I'd rather start fresh; all the code I tried led nowhere.
You should try to use
`INSERT ... ON DUPLICATE KEY UPDATE`
Here is a link with more examples about it.
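A minimal demonstration of the upsert idea (sketched with Python and SQLite so it is self-contained; SQLite and Postgres spell the clause INSERT ... ON CONFLICT ... DO UPDATE, while MySQL uses ON DUPLICATE KEY UPDATE; the table and counter column are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT PRIMARY KEY, hits INTEGER)")

# One statement inserts when the key is new and updates when it exists,
# so no separate SELECT (and no table lock) is needed.
sql = """INSERT INTO files (name, hits) VALUES (?, 1)
         ON CONFLICT(name) DO UPDATE SET hits = hits + 1"""
conn.execute(sql, ("FILENAME0001",))  # first call inserts the row
conn.execute(sql, ("FILENAME0001",))  # second call updates it instead

row = conn.execute("SELECT name, hits FROM files").fetchone()
print(row)  # ('FILENAME0001', 2)
```

(Requires SQLite 3.24 or later for the ON CONFLICT clause.)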
I've basically done a transaction in two different functions in my code, which are called back to back. The code is written entirely in PHP using MySQL.
insert_into_table1()
{
// transact start
// perform delete from table1
// insert into table from select statement from the database
// commit if done, else rollback
}
insert_into_table2()
{
// transact start
// perform delete from table2
// insert into table data from table1 after some processing (but same number of rows)
// commit if done, else rollback
}
The above is an example of the pseudo-code.
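A minimal runnable version of that pattern (sketched in Python with SQLite rather than PHP/MySQL; the source table and its values are invented for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE source (id INTEGER PRIMARY KEY AUTOINCREMENT, val INTEGER)")
conn.execute("CREATE TABLE table1 (id INTEGER PRIMARY KEY AUTOINCREMENT, val INTEGER)")
conn.executemany("INSERT INTO source (val) VALUES (?)", [(1,), (2,), (3,)])

def insert_into_table1(conn):
    # "with conn" opens a transaction: commit if done, else rollback
    with conn:
        conn.execute("DELETE FROM table1")  # perform delete from table1
        # insert into the table from a SELECT statement on the database
        conn.execute("INSERT INTO table1 (val) SELECT val FROM source")

insert_into_table1(conn)
print(conn.execute("SELECT COUNT(*) FROM table1").fetchone()[0])  # 3
```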
Initially, tables are empty.
The issue arises after the code runs. Both tables get inserted with N rows.
When the code is run again, the tables are emptied using the DELETE command (MySQL), and N rows are inserted again into both tables.
But after the second (or any later) run, the 'id' field of table1 (AUTO_INCREMENT with offset 1 for both tables; I've confirmed the value doesn't change) continues with N+1, whereas the second table's jumps by an offset of 10, when it should also continue with N+1.
I would like to know why this behavior occurs and how to fix it.
When you insert, the auto-increment value increases. When you delete, it doesn't roll back.
A TRUNCATE of the table will reset it, but you still lose all the data in the table.
That's fine in your case, as the tables are empty to begin with... but can you always guarantee that?
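The counter behavior is easy to reproduce; a sketch with SQLite, whose AUTOINCREMENT keyword behaves the same way here (the table is invented):

```python
import sqlite3

# DELETE does not roll the auto-increment counter back, so new rows
# continue from the old maximum rather than restarting at 1.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [("a",), ("b",), ("c",)])
conn.execute("DELETE FROM t")                   # empties the table...
conn.execute("INSERT INTO t (v) VALUES ('d')")  # ...but the counter survives
print(conn.execute("SELECT id FROM t").fetchone()[0])  # 4, not 1
```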
MyISAM DB
To be specific, I usually insert 5-10 rows at a time into one table. After the inserts are done, I want to get COUNT(DISTINCT(column)) and SUM(some_other_column) over the newly inserted rows and insert those values into another table.
The way I figured this might be done (but I don't know if it can) is to make an AFTER INSERT trigger, then run one SELECT on the table where I inserted the rows, insert the result into the other table, and break out of the trigger's for loop.
Suggestions, please. I feel bad about this somehow.
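Since a per-row trigger fires once for every inserted row, one alternative is to run the aggregate in application code, inside the same transaction as the batch insert. A sketch with Python and SQLite (all table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE detail (batch_id TEXT, col TEXT, some_other_column INTEGER)")
conn.execute("CREATE TABLE summary (batch_id TEXT, distinct_cols INTEGER, total INTEGER)")

rows = [("B1", "x", 10), ("B1", "x", 20), ("B1", "y", 30)]
with conn:  # batch insert and summary insert commit (or roll back) together
    conn.executemany("INSERT INTO detail VALUES (?, ?, ?)", rows)
    conn.execute("""INSERT INTO summary
                    SELECT batch_id, COUNT(DISTINCT col), SUM(some_other_column)
                    FROM detail WHERE batch_id = ? GROUP BY batch_id""", ("B1",))

print(conn.execute("SELECT * FROM summary").fetchone())  # ('B1', 2, 60)
```

Running both statements in one transaction gives the same "all or nothing" effect the trigger was meant to provide, without firing once per row.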
I was searching for a solution to my problem and didn't find one. Basically, the user can add or delete information, which deletes the row. The user can also rearrange this information up/down, which swaps the primary keys (called placeholder). The problem is that when the user deletes a row, the placeholder int doesn't move up by one. I was wondering if there's any way to move the rows up and renumber them starting from 1.
You shouldn't amend the auto-increment. It's part of the database design. You should use another column as the identifier, ID perhaps?
This is my db structure:
ID  NAME   SOMEVAL  API_ID
1   TEST   123456   A123
2   TEST2  223232   A123
3   TEST3  918922   A999
4   TEST4  118922   A999
I'm filling it using a function that calls an API and gets some data from an external service.
On the first run, I want to insert all the data I get back from the API. After that, each time I run the function, I just want to update the existing rows and add rows for any data the API call returned that is not yet in the db.
So my initial thought regarding the update process is to go through each row I get from the API and run a SELECT to see if it already exists.
I'm just wondering if this is the most efficient way to do it, or whether it's better to DELETE the relevant rows from the db and just re-insert them all.
NOTE: each batch of rows I get from the API has an API_ID, so when I say delete the rows, I mean something like DELETE FROM table WHERE API_ID = 'A999', for example.
If you are retrieving all the rows from the service, I recommend you drop all indexes, truncate the table, then insert all the data and recreate the indexes.
If you are retrieving some of the data from the service, I would drop all indexes, remove all relevant rows, insert all rows, then recreate all the indexes.
In such scenarios I'm usually going with:
start transaction
get row from external source
select local store to check if it's there
if it's there: update its values, remember local row id in list
if it's not there: insert it, remember local row id in list
at the end delete all rows that are not in remembered list of local row ids (NOT IN clause if the count of ids allows for this, or other ways if it's possible that there will be many deleted rows)
commit transaction
Why? Because usually I have local rows referenced by other tables, and deleting them all would break the references (not to mention DELETE CASCADE).
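A runnable sketch of those steps (using Python and SQLite instead of MySQL; the table layout and the name-based lookup key are assumptions made for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT UNIQUE, someval INTEGER)")
conn.executemany("INSERT INTO t (name, someval) VALUES (?, ?)",
                 [("TEST", 123456), ("TEST2", 223232)])

# rows from the external source: TEST changed, TEST2 gone, TEST3 new
api_rows = [("TEST", 999999), ("TEST3", 918922)]
kept_ids = []
with conn:  # start transaction
    for name, someval in api_rows:
        row = conn.execute("SELECT id FROM t WHERE name = ?", (name,)).fetchone()
        if row:   # it's there: update its values, remember local row id
            conn.execute("UPDATE t SET someval = ? WHERE id = ?", (someval, row[0]))
            kept_ids.append(row[0])
        else:     # it's not there: insert it, remember local row id
            cur = conn.execute("INSERT INTO t (name, someval) VALUES (?, ?)",
                               (name, someval))
            kept_ids.append(cur.lastrowid)
    # at the end, delete all rows not in the remembered list
    placeholders = ",".join("?" * len(kept_ids))
    conn.execute(f"DELETE FROM t WHERE id NOT IN ({placeholders})", kept_ids)

print(sorted(r[0] for r in conn.execute("SELECT name FROM t")))  # ['TEST', 'TEST3']
```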
I don't see any problem in performing SELECT, then deciding between an INSERT or UPDATE. However, MySQL has the ability to perform so-called "upserts", where it will insert a row if it does not exist, or update an existing row otherwise.
This SO answer shows how to do that.
I would recommend using INSERT...ON DUPLICATE KEY UPDATE.
If you use INSERT IGNORE, then the row won't actually be inserted if it results in a duplicate key on API_ID.
Add unique key index on API_ID column.
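To illustrate the mechanism (with SQLite's INSERT OR IGNORE standing in for MySQL's INSERT IGNORE; note this demo assumes one row per API_ID, which is stricter than the table shown in the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, api_id TEXT)")
conn.execute("CREATE UNIQUE INDEX idx_api ON t (api_id)")  # unique key on API_ID

conn.execute("INSERT OR IGNORE INTO t VALUES ('TEST', 'A123')")
conn.execute("INSERT OR IGNORE INTO t VALUES ('TEST2', 'A123')")  # duplicate key: skipped

print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 1
```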
If you have all of the data returned from the API that you need to completely reconstruct the rows after you delete them, then go ahead and delete them, and insert afterwards.
Be sure, though, that you do this in a transaction, and that you are using an engine that supports transactions properly, such as InnoDB, so that other clients of the database don't see rows missing from the table just because they are going to be updated.
For efficiency, you should insert as many rows as you can in a single query. Much faster that way.
BEGIN;
DELETE FROM table WHERE API_ID = 'A987';
INSERT INTO table (NAME, SOMEVAL, API_ID) VALUES
('TEST5', 12345, 'A987'),
('TEST6', 23456, 'A987'),
('TEST7', 34567, 'A987'),
...
('TEST123', 123321, 'A987');
COMMIT;