PHP MySQL: insert, delete, or update

I have two tables:
people:
peopleID (PK)
name
peopleaddress:
addressID (PK)
peopleID (fK)
address
addresstype
...other fields...
A person can have multiple addresses.
I have a form to add a new person (and addresses) and another form to edit that info. When I load the edit form, it fetches the person's info from the DB and puts it into the fields' value="" attributes.
Right now, when I submit the edit form, I use two SQL statements:
$stmt = $conn->prepare("DELETE from peopleaddress WHERE peopleID=?");
and
$stmt = $conn->prepare("INSERT INTO peopleaddress (peopleID, addresstype, active, street.....) VALUES (?, ?, ?, ....)");
It works well; the only downside is that addressID changes on every update and tends to grow fast.
Am I doing it the right way, or is there a way in PHP or better SQL to say:
if the new address exists » update
if it doesn't exist » insert
if all fields of an existing address are empty » delete
Thanks for your help!

Here's what you're looking for: "INSERT ... ON DUPLICATE KEY UPDATE"
INSERT INTO peopleaddress (peopleID, addresstype, active, ...) VALUES (1,2,3, ...)
ON DUPLICATE KEY UPDATE addresstype=2, active=3, ...;
If the row already exists (i.e. the insert would hit a duplicate key), it updates it; otherwise, it does the insert.
You may have to re-work your '?' substitutions.
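Reworked with mysqli placeholders, it might look like the sketch below. This assumes a UNIQUE key on (peopleID, addresstype) so MySQL can detect the duplicate; the column names are taken from the question and $conn is the mysqli connection:
// Sketch only: requires UNIQUE KEY (peopleID, addresstype) on peopleaddress.
$sql = "INSERT INTO peopleaddress (peopleID, addresstype, active, street)
        VALUES (?, ?, ?, ?)
        ON DUPLICATE KEY UPDATE active = VALUES(active), street = VALUES(street)";
$stmt = $conn->prepare($sql);
$stmt->bind_param("iiis", $peopleID, $addresstype, $active, $street);
$stmt->execute();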

If someone adds a new address to an existing account, it's okay to have a new row (and id). Keeping old addresses isn't necessarily a good or bad thing, and having addresses as a separate table is good (some people may have both a home and a business address, for example).
If all the fields are missing, your code's logic (PHP?) shouldn't even get to the point of running any SQL queries:
function admin_members_address_update()
{
    // 1. Create a $required array (field names here are just examples).
    $required = array('street', 'city', 'postcode');

    // 2. Check each one against $_POST.
    foreach ($required as $field) {
        // 3. If any required $_POST variable isn't set: HTTP 403 and end the function.
        if (!isset($_POST[$field]) || $_POST[$field] === '') {
            http_response_code(403);
            return;
        }
    }

    // Normal code here.
}
You should always check for variables you expect to be set, just as you should always escape (i.e. never trust) client data and keep your error reporting set to maximum. A hacker can easily omit a form field with any dev tool and probe for vulnerabilities based on the error messages the server generates.
Always check for failure before you presume success:
//if () {}
//else if () {}
//else if () {}
//else {}
SQL should be handled in a similar fashion:
$query1 = 'SELECT * FROM table;';
$result1 = mysqli_query($db, $query1);
if ($result1) {/* normal handling */}
else {/* SQL error reporting; pass __FUNCTION__ as a param so well-named functions identify where it failed */}
That's the rough shape of it (I don't have access to my actual code right now), but it should give you something to go on. Comment if you'd like me to update my answer when I have access to my code later.

A MySQL unsigned INT (max 4,294,967,295) will absorb an average of roughly 13 inserts per second, 24x7, for 10 years (4,294,967,295 ÷ (10 × 365 × 24 × 3600) ≈ 13.6). That's probably already as many addresses as there are on earth. If you're somehow pushing that limit, a BIGINT is massively larger and will in practice never run out.
Basically, don't worry about using up IDs with auto-increment. The decision between DELETE-then-INSERT and UPDATE/DELETE/INSERT should be based on whether you need to maintain persistent IDs for individual addresses. Deleting then inserting assigns a new ID even when it's really the same address, which is undesirable if you ever want a foreign key referencing address IDs. If you don't need that, don't worry about it; though as a personal preference I would probably incorporate UPDATE, as sketched below.
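A rough sketch of that UPDATE-based flow, assuming the edit form posts each address row along with its addressID (empty for newly added rows); the $_POST layout is an assumption, not from the question:
// Sketch only: assumes $_POST['addresses'] is an array of address rows,
// each carrying its addressID ('' for new rows) plus the address fields.
foreach ($_POST['addresses'] as $a) {
    if ($a['addressID'] === '') {
        // new address -> insert
        $stmt = $conn->prepare("INSERT INTO peopleaddress (peopleID, addresstype, address) VALUES (?, ?, ?)");
        $stmt->bind_param("iis", $peopleID, $a['addresstype'], $a['address']);
    } elseif (trim($a['address']) === '') {
        // existing address emptied -> delete
        $stmt = $conn->prepare("DELETE FROM peopleaddress WHERE addressID = ?");
        $stmt->bind_param("i", $a['addressID']);
    } else {
        // existing address changed -> update in place, so addressID stays stable
        $stmt = $conn->prepare("UPDATE peopleaddress SET addresstype = ?, address = ? WHERE addressID = ?");
        $stmt->bind_param("isi", $a['addresstype'], $a['address'], $a['addressID']);
    }
    $stmt->execute();
}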

Related

Best way to generate (and save) incremental invoice numbers in a multi-tenant MySQL database

I have found two different ways to, first, get the next invoice number and, then, save the invoice in a multi-tenant database where, of course, each tenant has their own invoices with their own incremental numbers.
My first (and current) approach is this (it works fine):
1. Add a new record to the invoices table. The invoice number doesn't matter yet (for example, 0 or empty).
2. Get the unique ID of THAT created record after the insert.
3. Do a "SELECT ... FROM table WHERE ID = $lastcreatedID FOR UPDATE".
4. Get the next invoice number with "SELECT @A:=MAX(NUMBER)+1 FROM TABLE WHERE ...".
5. Update the previously saved record with that invoice number: "UPDATE table SET NUMBER = $mynumber WHERE ID = $lastcreatedID".
This works fine, but I don't know if the FOR UPDATE is really needed, or whether this is the correct way to do this in a multi-tenant DB with respect to performance, etc.
The second (and simpler) approach is this (it works too, but I don't know if it is a safe approach):
INSERT INTO table (NUMBER,TENANT) SELECT COALESCE(MAX(NUMBER),0)+1,$tenant FROM table WHERE....
That's it.
Both methods work, but I would like to know the differences between them regarding speed and performance, whether either can create duplicates, etc.
Or... is there any better way to do this?
I'm using MySQL and PHP. The application is an invoice/sales cloud software that will be used by a lot of customers (tenants).
Thanks
Regardless of whether you're using these values as database IDs, re-using IDs is virtually guaranteed to cause problems at some point. And even if you're not re-using IDs, you're going to run into the case where two invoice-creation requests run at the same time and get the same MAX()+1 result.
To get around all this you need to implement a simple sequence generator that locks its storage while a value is being issued. E.g.:
CREATE TABLE client_invoice_serial (
    -- note: also FK this back to the client record
    client_id INTEGER UNSIGNED NOT NULL PRIMARY KEY,
    serial INTEGER UNSIGNED NOT NULL DEFAULT 0
);
$dbh = new PDO('mysql:...');

/* This defaults to 'on', making every query an implicit transaction. It needs
   to be off for this. You may or may not want to set this globally, or just
   turn it off before this and back on at the end. */
$dbh->setAttribute(PDO::ATTR_AUTOCOMMIT, 0);

// Simple best practice: ensures that SQL errors MUST be dealt with.
// Assumed to be enabled for the try/catch below.
$dbh->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$dbh->beginTransaction();
try {
    // the below will lock the selected row
    $select = $dbh->prepare("SELECT * FROM client_invoice_serial WHERE client_id = ? FOR UPDATE;");
    $select->execute([$client_id]);
    if( $select->rowCount() === 0 ) {
        $insert = $dbh->prepare("INSERT INTO client_invoice_serial (client_id, serial) VALUES (?, 1);");
        $insert->execute([$client_id]);
        $invoice_id = 1;
    } else {
        $invoice_id = $select->fetch(PDO::FETCH_ASSOC)['serial'] + 1;
        $update = $dbh->prepare("UPDATE client_invoice_serial SET serial = serial + 1 WHERE client_id = ?");
        $update->execute([$client_id]);
    }
    $dbh->commit();
} catch(\PDOException $e) {
    // make sure the transaction is cleaned up ASAP, then let the exception
    // bubble up into your general error handling
    $dbh->rollback();
    throw $e; // or throw a more pertinent error/exception of your choosing
}
// both committing and rolling back will release the lock
At a very basic level this is what MySQL is doing in the background for AUTOINCREMENT columns.
Do not use MAX(id)+1. It will, someday, bite you. There will be two invoices with the same number, and it will take us a few paragraphs to explain why it happened.
Instead, use AUTO_INCREMENT the way it is intended.
INSERT INTO Invoices (id, ...) VALUES (NULL, ...);
SELECT LAST_INSERT_ID(); -- specific to the connection
That is safe even when multiple connections are doing the same thing. No FOR UPDATE, no BEGIN, etc is necessary. (You may want such for other purposes.)
And, never delete rows. Instead, use the standard business practice of invalidating bad invoices. Imagine being audited.
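For example (a sketch; the status column and its values are made up for illustration):
ALTER TABLE Invoices ADD COLUMN status ENUM('open','paid','void') NOT NULL DEFAULT 'open';
-- "delete" an invoice by voiding it instead of removing the row:
UPDATE Invoices SET status = 'void' WHERE id = 123;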
All that said, there is still a potential problem. After a ROLLBACK or system crash, an id may be "burned". Also things like INSERT IGNORE allocate the id before checking to see whether it will be needed.
If you can live with the caveats, use AUTO_INCREMENT.
If not, then create a 1-row, 2-column table to simulate a sequence number generator: http://mysql.rjweb.org/doc.php/index_cookbook_mysql#sequence
Or use MariaDB's SEQUENCE
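A minimal sketch of that sequence-table idea, adapted per tenant (table and column names here are just examples; see the link above for the full recipe):
CREATE TABLE invoice_seq (
    tenant INT UNSIGNED NOT NULL PRIMARY KEY,
    next_num INT UNSIGNED NOT NULL DEFAULT 0
);
-- atomically claim the next number for one tenant:
UPDATE invoice_seq SET next_num = LAST_INSERT_ID(next_num + 1) WHERE tenant = 42;
SELECT LAST_INSERT_ID();  -- connection-specific, so safe under concurrency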
Both approaches do work, but each has its own demerits in high-traffic situations.
The first approach runs three queries for every invoice you create, putting extra load on your server.
The second approach can lead to duplicates when two invoices are generated with very little time difference (such that the SELECT returns the same max number for both invoices).
So both approaches may lead to problems under high traffic.
Two solutions to the problems are listed below:
Use generated columns: MySQL supports generated columns, which are derived from other column values for each row; see the MySQL documentation on generated columns.
Calculate the invoice number on the fly: since you're using the primary key as part of the invoice, let the DB handle generating unique primary keys, and then derive invoice numbers on the fly in your business logic from each invoice's id, as sketched below.
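A sketch of that last option (the formatting scheme is invented for the example):
// Derive the displayed invoice number from the auto-increment id;
// nothing extra is stored, so it can never collide or drift out of sync.
function invoice_number(int $id, string $tenantPrefix): string
{
    return sprintf('%s-%06d', $tenantPrefix, $id);
}
echo invoice_number(42, 'ACME'); // ACME-000042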

MySQL multiple condition Type

I'm creating a CMS from scratch.
I'm looking for a way to set a value in my database following a specific condition.
The context: I want to save records for deleted or edited comments that have been reported by the community.
Then I want to view those logs/records, but I have trouble marking whether those rows represent deleted or edited comments
(this is important for viewing the logs, obviously).
Here is the code I've written so far to insert the logs:
// Insert moderation logs
public function insertLogs($idCommentaire){
    $sql = "INSERT INTO logs(com_id, com_date, com_author, com_content, post_id)
            SELECT com_id, com_date, com_author, com_content, post_id
            FROM comments WHERE com_id = ?";
    $this->executeRequest($sql, array($idCommentaire));
}
Now I would like to record whether the comment was modified or deleted, depending on which method runs this SQL. Here is an example for deletion:
$this->admin->insertLogs($idCommentaire);
$this->admin->suppressCom($idCommentaire);
I've created a new ENUM column in MySQL ("deleted", "modified") but can't figure out how to update this logs table with that data on it.
Here is the SQL I'm thinking about:
UPDATE logs SET type = ("modified" OR "deleted") WHERE com_id = 70;
Note that this isn't valid code, just what I have in mind.
I'm talking about plain MySQL here, if it's possible to combine it all in one request.
Otherwise I would set up one more request, one for each case, but I don't know if that's really clean.
Any advice or thoughts on this?
Thank you all.
Your logic isn't too bad. However, I'll note a couple of things.
TYPE is a reserved word in MySQL and probably most RDBMSs. Pick another name for the column that indicates whether a post has been modified or deleted.
There is at least one more state for a post; it's approved, or published, or 'okay', or whatever you'd like to call it. So if you create an ENUM field called SomethingOtherThanTypeThatStillMeansType, or foo, or post_status, or whatever, include values for all potential states your data may have: deleted, modified, posted, not_yet_posted, edited, or whatever you think the system may support at the time of some future feature update.
You might also consider using an INT type for your status field; I'm thinking that might be a tad faster than an ENUM.
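Applied to the question's insertLogs(), one way to set the state in the same request is to pass it in as a parameter (a sketch; the log_status column name and its integer values are examples following the advice above):
// Sketch: the caller decides the state, so a single INSERT ... SELECT does it all.
public function insertLogs($idCommentaire, $logStatus) // e.g. 1 = modified, 2 = deleted
{
    $sql = "INSERT INTO logs(com_id, com_date, com_author, com_content, post_id, log_status)
            SELECT com_id, com_date, com_author, com_content, post_id, ?
            FROM comments WHERE com_id = ?";
    $this->executeRequest($sql, array($logStatus, $idCommentaire));
}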

Processing feedback about duplicate rows on bulk insert

I have a service that lets users import multiple items at once by uploading a CSV file instead of filling in the form for each one. Each row represents an item entity, identified by an id that is stored in a unique field in my MySQL database (only one item with a given id can exist).
When the user finishes the upload and CSV processing, I would like to provide feedback about which items in their file already existed in the database. I decided to go with INSERT IGNORE, parsing the ids out of the warnings (regex) and retrieving item information (SELECT) based on the collected ids. Browsing the internet, I did not find a common solution for this, so I would like to know if this approach is correct, especially when dealing with a larger number of rows (500+).
Base idea:
INSERT IGNORE INTO items (id, name, address, phone) VALUES (x,xx,xxx,xxxx), (y,yy,yyy,yyyy), etc;
SHOW WARNINGS;
$warning_example = [
    0 => ['Message' => "Duplicate entry '123456' for key 'id'..."],
    1 => ['Message' => "Duplicate entry '234567' for key 'id'..."],
];
$duplicates_count = 0;
foreach ($warning_example as $duplicated_item) {
    preg_match('/regex_to_extract_id/', $duplicated_item['Message'], $result);
    $id[$duplicates_count] = $result[1]; // the captured id, not the full match array
    $duplicates_count++;
}
$duplicates_string = implode(',', $id);
SELECT name FROM items WHERE id IN ($duplicates_string);
Also, what would be the simplest and most efficient regex for this task, given that the message structure is the same every time?
Duplicate entry '12345678' for key 'id'
Duplicate entry '23456789' for key 'id'
etc.
With preg_match:
preg_match(
"/Duplicate entry '(\d+)' for key 'id'/",
$duplicated_item['Message'],
$result
);
$id[$duplicates_count] = $result[1];
(\d+) represents a sequence of digits (\d), that should be captured (surrounding parentheses).
However, there are better ways to proceed if you have control over the way the data is imported. To start with, I would recommend first running a SELECT statement to check whether a record already exists, and running the INSERT only when needed. This avoids generating errors on the database side. It is also much more accurate than using INSERT IGNORE, which basically ignores all errors that occur during insertion (wrong data type or length, non-nullable value, ...): for this reason, it is usually not a good tool for checking uniqueness.
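A sketch of that pre-check, assuming the ids parsed from the CSV are already collected in $csv_ids and a mysqli connection (with mysqlnd, for get_result) is available:
// Sketch: one SELECT tells us which incoming ids already exist;
// only the remaining rows get inserted, and no warnings need parsing.
$placeholders = implode(',', array_fill(0, count($csv_ids), '?'));
$stmt = $mysqli->prepare("SELECT id, name FROM items WHERE id IN ($placeholders)");
$stmt->bind_param(str_repeat('i', count($csv_ids)), ...$csv_ids);
$stmt->execute();
$result = $stmt->get_result();
$existing = array(); // id => name, to report back to the user
while ($row = $result->fetch_assoc()) {
    $existing[$row['id']] = $row['name'];
}
// then INSERT only the CSV rows whose id is not a key of $existing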

Simple concurrency in PHP?

I have a small PHP function on my website which basically does 3 things:
check if user is logged in
if yes, check if he has the right to do this action (DB Select)
if yes, do the related action (DB Insert/Update)
If I have several users connected to my website at the same time who try to access this specific function, is there any possibility of a concurrency problem, like we can have in Java for example? I've seen some examples about semaphores or native PHP synchronization, but are they relevant for this case?
My PHP code is below:
if ( user is logged in ) {
    sql execution : "SELECT....."
    if ( sql select gives no results ) {
        sql execution : "INSERT....."
    } else if ( sql select gives 1 result ) {
        if ( selected column from result is >= 1 ) {
            sql execution : "UPDATE....."
        }
    } else {
        nothing here....
    }
} else {
    nothing important here...
}
Each user who accesses your website is served by a separate PHP process, so you do not need semaphores or anything like that. Taking care of simultaneous-access issues is your database's problem.
Not in PHP. But you might have users inserting or updating the same content, and you have to make sure that does not happen.
If they can only update their own user profile, no collision will occur.
BUT if they are editing shared content, as in a content-management system, they can overwrite each other's edits. Then you have to implement some locking mechanism.
For example (there are a lot of ways), you could write an update on the content recording the current time and user. The user then has a lock on the content for, say, 10 minutes. You should show the (in this case) 10-minute countdown to the user in the frontend, plus a cancel button to unlock the content, and ... you probably get the idea.
If another person tries to load the content within those 10 minutes, they get an error: "user xy is already editing ... lock expires at xx:xx".
Hope this helps.
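A rough sketch of such a lock in plain MySQL (the locked_by/locked_at columns are invented for the example):
-- try to take the lock; succeeds only if it is free or has expired
UPDATE content
SET locked_by = 42, locked_at = NOW()
WHERE id = 7
  AND (locked_by IS NULL OR locked_at < NOW() - INTERVAL 10 MINUTE);
-- affected rows = 1 -> you hold the lock; 0 -> someone else does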
In general, it is not safe to decide whether to INSERT or UPDATE based on a SELECT result, because a concurrent PHP process can INSERT the row after you executed your SELECT and saw no row in the table.
There are two solutions. Solution number one is to use REPLACE or INSERT ... ON DUPLICATE KEY UPDATE. These two query types are "atomic" from the perspective of your script and solve most cases. REPLACE tries to insert the row, but if it hits a duplicate key it replaces the conflicting existing row with the values you provide; INSERT ... ON DUPLICATE KEY UPDATE is a little more sophisticated, but is used in similar situations. See the documentation here:
http://dev.mysql.com/doc/refman/5.0/en/insert-on-duplicate.html
http://dev.mysql.com/doc/refman/5.0/en/replace.html
For example, if you have a table product_descriptions, and want to insert a product with ID = 5 and a certain description, but if a product with ID 5 already exists, you want to update the description, then you can just execute the following query (assuming there's a UNIQUE or PRIMARY key on ID):
REPLACE INTO product_description (ID, description) VALUES(5, 'some description')
It will insert a new row with ID 5 if it does not exist yet, or will update the existing row with ID 5 if it already exists, which is probably exactly what you want.
If it is not, then approach number two is to use locking, like so:
query('LOCK TABLES users WRITE');
if (num_rows('SELECT * FROM users WHERE ...')) {
    query('UPDATE users ...');
}
else {
    query('INSERT INTO users ...');
}
query('UNLOCK TABLES');

Loop until passcode is unique

This is for a file sharing website. In order to make sure a "passcode", which is unique to each file, is truly unique, I'm trying this:
$genpasscode = mysql_real_escape_string(sha1($row['name'].time())); // Make passcode out of time + filename.
$i = 0;
while ($i < 1) // Create new passcode in a loop until $i = 1
{
    $query = "SELECT * FROM files WHERE passcode='".$genpasscode."'";
    $res = mysql_query($query);
    if (mysql_num_rows($res) == 0) // Passcode doesn't exist yet? Stop making a new one!
    {
        $i = 1;
    }
    else // Passcode exists? Make a new one!
    {
        $genpasscode = mysql_real_escape_string(sha1($row['name'].time()));
    }
}
This really only prevents a duplicate passcode if two users upload a file with the same name at the exact same time, but hey, better safe than sorry, right? My question is: does this work the way I intend it to? I have no way to reliably (read: easily) test it, because even one second of difference generates a unique passcode anyway.
UPDATE:
Lee suggest I do it like this:
do {
    $query = "INSERT IGNORE INTO files
              (filename, passcode) values ('whatever', SHA1(NOW()))";
    $res = mysql_query($query);
} while( $res && (0 == mysql_affected_rows()) );
[Edit: I updated above example to include two crucial fixes. See my answer below for details. -#Lee]
But I'm afraid it will update someone else's row. That wouldn't be a problem if filename and passcode were the only fields in the database, but in addition there are also checks for mime type etc., so I was thinking of this:
//Add file
$sql = "INSERT INTO files (name) VALUES ('".$str."')";
mysql_query($sql) or die(mysql_error());
//Add passcode to last inserted file
$lastid = mysql_insert_id();
$genpasscode = mysql_real_escape_string(sha1($str.$lastid.time())); //Make passcode out of time + id + filename.
$sql = "UPDATE files SET passcode='".$genpasscode."' WHERE id=$lastid";
mysql_query($sql) or die(mysql_error());
Would that be the best solution? The last-inserted-id field is always unique so the passcode should be too. Any thoughts?
UPDATE 2: Apparently IGNORE does not replace a row if it already exists. This was a misunderstanding on my part, so that's probably the best way to go!
Strictly speaking, your test for uniqueness won't guarantee uniqueness under a concurrent load. The problem is that you check for uniqueness prior to (and separately from) the place where you insert a row to "claim" your newly generated passcode. Another process could be doing the same thing, at the same time. Here's how that goes...
Two processes generate the exact same passcode. They each begin by checking for uniqueness. Since neither process has (yet) inserted a row to the table, both processes will find no matching passcode in database, and so both processes will assume that the code is unique. Now as the processes each continue their work, eventually they will both insert a row to the files table using the generated code -- and thus you get a duplicate.
To get around this, you must perform the check, and do the insert in a single "atomic" operation. Following is an explanation of this approach:
If you want passcode to be unique, you should define the column in your database as UNIQUE. This will ensure uniqueness (even if your php code does not) by refusing to insert a row that would cause a duplicate passcode.
CREATE TABLE files (
    id int(10) unsigned NOT NULL auto_increment PRIMARY KEY,
    filename varchar(255) NOT NULL,
    passcode varchar(64) NOT NULL UNIQUE
)
Now, use mysql's SHA1() and NOW() to generate your passcode as part of the insert statement. Combine this with INSERT IGNORE ... (docs), and loop until a row is successfully inserted:
do {
    $query = "INSERT IGNORE INTO files
              (filename, passcode) values ('whatever', SHA1(NOW()))";
    $res = mysql_query($query);
} while( $res && (0 == mysql_affected_rows()) );
if( !$res ) {
    // an error occurred (eg. lost connection, insufficient permissions on table, etc)
    // no passcode was generated. handle the error, and either abort or retry.
} else {
    // success, unique code was generated and inserted into db.
    // you can now do a select to retrieve the generated code (described below)
    // or you can proceed with the rest of your program logic.
}
Note: The above example was edited to account for the excellent observations posted by #martinstoeckli in the comments section. The following changes were made:
changed mysql_num_rows() (docs) to mysql_affected_rows() (docs) -- num_rows doesn't apply to inserts. Also removed the argument to mysql_affected_rows(), as this function operates on the connection level, not the result level (and in any case, the result of an insert is boolean, not a resource number).
added error checking in the loop condition, and added a test for error/success after loop exits. The error handling is important, as without it, database errors (like lost connections, or permissions problems), will cause the loop to spin forever. The approach shown above (using IGNORE, and mysql_affected_rows(), and testing $res separately for errors) allows us to distinguish these "real database errors" from the unique constraint violation (which is a completely valid non-error condition in this section of logic).
If you need to get the passcode after it has been generated, just select the record again:
$res = mysql_query("SELECT * FROM files WHERE id=LAST_INSERT_ID()");
$row = mysql_fetch_assoc($res);
$passcode = $row['passcode'];
Edit: changed above example to use the mysql function LAST_INSERT_ID(), rather than PHP's function. This is a more efficient way to accomplish the same thing, and the resulting code is cleaner, clearer, and less cluttered.
Personally I would have written it a different way, but I'll offer you a much easier solution: sessions.
I guess you're familiar with sessions? Sessions are server-side remembered variables that time out at some point, depending on the server configuration (the default value is 10 minutes or longer). The session is linked to a client using a session id, a randomly generated string.
If you start a session on the upload page, an id will be generated which is guaranteed to be unique as long as the session is not destroyed, which should take about 10 minutes. That means that when you combine the session id and the current time you'll never get the same passcode twice: a session id plus the current time (in microseconds, milliseconds or seconds) is NEVER the same.
In your upload page:
session_start();
In the page where you handle the upload:
$genpasscode = mysql_real_escape_string(sha1($row['name'].time().session_id()));
// No need for the slow, whacky while loop, insert immediately
// Optionally you can destroy the session id
If you do destroy the session id, there's a very slim chance that another client could be issued the same session id, so I wouldn't advise that. I'd just allow the session to expire.
Your question is:
does this work the way I intend it to?
Well, I'd say... yes, it does work, but it could be optimized.
Database
To make sure to not have the same value in the field passcode on the database layer, add a unique key to this:
/* SQL */
ALTER TABLE `yourtable` ADD UNIQUE `passcode` (`passcode`);
(duplicate key handling has to be taken care of then, of course)
Code
Waiting a second until a new hash is created is OK, but under heavy load a single second can be a tiny eternity. Therefore I'd rather add another component to the sha1 part of your code: maybe a file id from the same database record, a user id, or whatever else makes this really unique.
If you don't have a unique id at hand, you can still fall back to a random number from PHP's rand function.
I don't think mysql_real_escape_string is needed in this context. sha1 returns a 40-character hexadecimal string anyway, even if there are bad characters in your rows.
$genpasscode = sha1(rand().$row['name'].time());
...should suffice.
Style
The passcode-generation code appears twice in your sample. Start cleaning this up by moving it into a function:
$genpasscode = gen_pc($row['name']);
...
function gen_pc($x)
{
    return sha1(rand().$x.time());
}
If I were doing it, I'd do it differently: I'd use session_id() to avoid duplicates as well as possible. That way you wouldn't need the loop, which may talk to your database several times.
You can add unique constraint to your table.
ALTER TABLE files ADD UNIQUE (passcode);
PS: You can use microtime or uniqid to make the passcode more unique.
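For example (a sketch; uniqid()'s second argument adds extra entropy on top of its microsecond base):
$genpasscode = sha1(uniqid($row['name'], true)); // filename prefix + microsecond-based unique id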
Edit:
Do your best to generate a unique value in PHP, and use the unique constraint to guarantee it on the database side. If your value is very nearly unique, but in some very rare case fails to be, just show a message like "The system is busy now. Please try again. :)"
