Desktop/webapp shared database overwriting data on duplicate key - PHP

I'm developing a webapp that shares a database with an in-production desktop app (i.e. I cannot modify the database, only try to mimic its behavior). The module I'm working on now stores notes in this database's notes table. I got it working: I added notes and they showed up in the desktop app. Then, after some time, I realized the notes' text and descriptions were being overwritten. Looking at the rows in the database, I noticed the modified_by user was set, telling me there had been a duplicate key on insert, followed later by an update. The primary key for this table is set to auto-increment, so I was very confused. After some digging I found a table called counters with a column called notes whose count matched the current index of the notes table. Before just blindly incrementing the counter on every insert, I installed Wireshark on the DB server, recorded the traffic on the DB port, and found this:
(Procedure when adding a note from desktop app)
UPDATE counters SET in_use = 'Y';
SELECT notes FROM counters WHERE key_col = 1;
/* Desktop app uses current count for new index */
UPDATE counters SET notes = /* current count +1 */ WHERE key_col = 1;
UPDATE counters SET in_use = 'N';
/* ...Inserts new note here with explicit ID = current count ... */
Now I'm even more confused. First, why set the table to auto-increment at all? Second, in_use is never checked before selecting the count and adding one... so what's the point of in_use? Couldn't this code lead to overwrites if two users inserted at the same time? Wouldn't the correct way to do this be to lock the counters table for every operation? I could try this, but I'm not sure how the desktop app will handle encountering a lock (based on experience: fatal error).
Aside from exactly duplicating this procedure and hoping for the best, I'm not exactly sure where to go from here. One thought is to:
<?php
const MAX_ATTEMPTS = 3;
$curKey = null;
for ($i = 0; $i < MAX_ATTEMPTS; $i++) {
    /*
    SELECT in_use, notes FROM counters WHERE key_col = 1;
    ...
    */
    if ('N' === $result['in_use']) {
        $curKey = $result['notes'];
        /* UPDATE the counter here - $curKey + 1 */
        break;
    }
    /* Sleep for 0.25 seconds to allow the current operation to finish */
    usleep(250000);
}
if (null === $curKey) {
    throw new \Exception('Could not insert note because counter table locked after ' . MAX_ATTEMPTS . ' attempts');
}
/* INSERT note code here... */
This seems OK, but could still possibly overwrite because of a) the time between selecting the count and writing the new count, and b) the desktop app not appearing to do any checking.
Any thoughts/suggestions?
EDIT: I made a stored function to do the checking during the select and update.
DELIMITER $$
CREATE DEFINER=`testUser`@`%` FUNCTION `getNextNoteIndex`(appKey INTEGER) RETURNS int(11)
BEGIN
    SELECT IF(`in_use` = 'N', `notes`, NULL) INTO @curIndex FROM `counters` WHERE `app_key` = appKey;
    IF @curIndex IS NOT NULL
    THEN
        SET @newIndex = @curIndex + 1;
        UPDATE `counters` SET `notes` = @newIndex WHERE `app_key` = appKey AND `in_use` = 'N' AND `notes` = @curIndex;
        IF ROW_COUNT() = 1
        THEN
            RETURN @newIndex;
        END IF;
    END IF;
    RETURN NULL;
END$$
DELIMITER ;
Usage:
SELECT testDB.getNextNoteIndex(1) AS $index;
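From PHP, calling it might look like this. A minimal sketch only: the connection details are placeholders, and the retry policy just mirrors the loop I posted above.
<?php
// Sketch of calling the stored function with retries; getNextNoteIndex()
// is the function defined in the EDIT above.
$mysqli = new mysqli('localhost', 'testUser', 'password', 'testDB');

$nextIndex = null;
for ($i = 0; $i < 3; $i++) {
    $result = $mysqli->query('SELECT testDB.getNextNoteIndex(1) AS next_index');
    $row = $result->fetch_assoc();
    if ($row['next_index'] !== null) {
        $nextIndex = (int) $row['next_index'];
        break;
    }
    usleep(250000); // counter in use or changed under us; retry
}

if ($nextIndex === null) {
    throw new \Exception('Counter unavailable after 3 attempts');
}
// INSERT the note here, using $nextIndex as the explicit ID.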

I do not know why they would need a separate table to do the auto-incrementing; it doesn't sound like a standard solution.
I'm confused as to what you can and cannot change (DB, backend code, etc.):
If you're on the inside, can you not ask the developer who built that intermediate incrementing table what it is for, and potentially get clarity there or bypass it altogether?
If you're on the outside, does it make sense to ask them for an API and use the endpoints they give you? Then any problems that arise from overwriting fall in their court.

Related

How to check each id in a column then allocate the first available number in PHP for mysql?

Using PHP, I'd like to check the next available number to use as an ID, after comparing it to the query that lists all my IDs.
In theory I can do this:
$clientid = 0;
$getid_query = "SELECT clientid FROM clients";
$response = mysqli_query($conny, $getid_query);
$ids = array();
while ($data = mysqli_fetch_assoc($response)) {
    $ids[] = $data['clientid'];
}
$freeid = false;
while ($freeid == false) {
    if (in_array($clientid, $ids)) {
        $clientid = $clientid + 1;
    } else {
        $freeid = true;
    }
}
This leaves $clientid as a unique ID, ready to be used for the next created client.
There might be a few syntax errors, but in general I've tried most combinations and can't seem to get it right. I've been testing the different outputs, such as
echo "Error:" . $row['clientid'];
and sometimes (more often than not) nothing displays.
Edit 2:
Hold up! I wanted to do it on the PHP side because the associated username is generated from the clientid (all in PHP). So I'm going to follow the links from liridyn and see if I can do something.
Would there be a safe way to query the database again as soon as a new client has been registered, in order to get their allocated clientid, so I can then update their row with the generated username?
Thanks
So, SELECT MAX(clientid) + 1 AS clientid FROM clients would replace your mixed PHP/SQL with a single query - but I would recommend against that, as adding AUTO_INCREMENT to clientid would instruct the SQL server to manage the ID for you automatically. If you then need that ID, instead of generating it in PHP or calling SELECT MAX(clientid) FROM clients (which still has a race condition), you can follow the advice in this answer or call mysqli_insert_id.
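For example, a minimal sketch of that approach, assuming clientid is now AUTO_INCREMENT; the name and username columns and the username scheme are invented for illustration:
// Insert the new client; clientid is omitted so the server assigns it.
mysqli_query($conny, "INSERT INTO clients (name) VALUES ('New Client')");

// The ID the server just generated; per-connection, so no race condition.
$clientid = mysqli_insert_id($conny);

// Generate the username from the clientid and write it back.
$username = 'user' . $clientid; // hypothetical username scheme
mysqli_query($conny, "UPDATE clients SET username = '$username' WHERE clientid = $clientid");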
I've ended up using AUTO_INCREMENT on the DB side. Thanks

CRON job: send email to users in DB?

If I've got a database of users that have filled out a form, can I use a cron job to send an automated email? If so, what is the best way to "loop" it so that it sends the email once to each user?
$data = mysql_query("
SELECT *
FROM completed
WHERE
followupsent='0000-00-00 00:00:00'
AND valuesent + INTERVAL 4 DAY <= NOW()
")
or die(mysql_error());
while($info = mysql_fetch_array( $data ))
{
}
This checks whether "followupsent" has already been updated (it is set to NOW() when the email is sent) and also checks how many days have passed since the value was sent.
I'm worried that putting the email-sending code inside the while loop is going to run for each row and end up sending a ton of emails at once.
Would using an if instead of a while:
if($info = mysql_fetch_array( $data ))
{
}
work, in order to send to the first record in the database and then let the cron job handle the rest by checking every minute for the next one?
That's a perfectly fine way to do it. It will only send once per record (assuming you have no duplicates in your completed table).
I assume you are updating the followupsent field in the loop.
I suggest:
You ensure that your completed table uses a transactional storage engine, e.g. InnoDB:
ALTER TABLE completed ENGINE=InnoDB;
You define a new BOOLEAN column that indicates (to any other database connections) that a given record is in the process of being updated:
ALTER TABLE completed ADD COLUMN updating BOOLEAN NOT NULL DEFAULT FALSE;
You use a locking read to SELECT the records that are to be emailed, followed within the same transaction by an UPDATE to the newly created column. For example, using PDO:
$dbh->beginTransaction();
$select = $dbh->query('
SELECT *
FROM completed
WHERE followupsent IS NULL
AND valuesent <= CURRENT_TIMESTAMP - INTERVAL 4 DAY
AND NOT updating
FOR UPDATE
');
$dbh->exec('
UPDATE completed
SET updating = TRUE
WHERE followupsent IS NULL
AND valuesent <= CURRENT_TIMESTAMP - INTERVAL 4 DAY
AND NOT updating
');
$dbh->commit();
You can then update the database (removing the updating flag, together with whatever other status you require) as each email is sent:
$success = $dbh->prepare('
UPDATE completed
SET followupsent = CURRENT_TIMESTAMP,
updating = FALSE
WHERE id = ?
');
$failure = $dbh->prepare('
UPDATE completed
SET updating = FALSE
WHERE id = ?
');
foreach ($select as $row) {
    $command = mail(...) ? $success : $failure;
    $command->execute(array($row['id']));
}
Note that if the script terminates prematurely, the database may be left in an undesirable state; i.e. records may have updating=TRUE but there is no longer any script that is processing them; this could lead to some records not being emailed at all. You may want to guard against this eventuality by registering a custom shutdown function, or else by periodically inspecting the database for such "orphaned" records (however you must of course be sure that they are not currently being processed by a running script).
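For example, a minimal sketch of the shutdown-function option, reworking the loop above (the cleanup policy itself is an assumption, and I'm presuming the table has an id primary key as before):
// Fetch the claimed rows up front so we know what to release on shutdown.
$rows = $select->fetchAll(PDO::FETCH_ASSOC);
$pendingIds = array_column($rows, 'id');

register_shutdown_function(function () use ($dbh, &$pendingIds) {
    if (empty($pendingIds)) {
        return; // every claimed row was processed normally
    }
    // Release rows we claimed but never finished processing.
    $in = implode(',', array_fill(0, count($pendingIds), '?'));
    $dbh->prepare("UPDATE completed SET updating = FALSE WHERE id IN ($in)")
        ->execute($pendingIds);
});

foreach ($rows as $row) {
    $command = mail(...) ? $success : $failure;
    $command->execute(array($row['id']));
    // Handled either way; don't let the shutdown handler touch this row.
    $pendingIds = array_diff($pendingIds, array($row['id']));
}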

I am trying to synchronise a Sage Line 50 Access database with MySQL via ODBC

I am trying to source the structure and data from a Sage Line 50 database, but am having trouble with my update/create script.
Basically, I am writing a tool so that the local intranet site can display data sourced from Sage to normal employees without using up a Sage login (orders due in, stock levels, etc.). I am having trouble with this because it seems that the Sage 50 database was developed by total morons. There are no unique keys in this database, or, more accurately, very very few. The structure is really old school; you can find it on pastebin HERE (a bit too big for here). You'll notice that there are some tables with 300+ columns, which I think is a bit stupid. However, I have to work with this, and so I need a solution.
There are a few syncing problems I have encountered. Primarily, ODBC can't limit statements to 1 row so that I can check the data types, and secondly, with there being no IDs, I can't check for duplicates when doing the insert. At the moment, this is what I have:
$rConn = odbc_connect("SageLine50", "user", "password");
if ($rConn == 0) {
die('Unable to connect to the Sage Line 50 V12 ODBC datasource.');
}
$result = odbc_tables($rConn);
$tables = array();
while (odbc_fetch_row($result)){
if(odbc_result($result,"TABLE_TYPE")=="TABLE") {
$tables[odbc_result($result,"TABLE_NAME")] = array();
}
}
This produces the first level of the list you see on pastebin.
A foreach statement is then run to produce the next level, with the columns within each table:
foreach ($tables as $k => $v) {
    $query = "SELECT * FROM " . $k;
    $rRes = odbc_exec($rConn, $query);
    $rFields = odbc_num_fields($rRes);
    $i = 1;
    while ($i <= $rFields) {
        $tables[$k][] = odbc_field_name($rRes, $i);
        $i++;
    }
    CreateTableandRows($k, $tables[$k]);
}
At the moment, I then have a bodged together function to create each table (not that I like the way it does it).
Because I can't automatically grab back one row (or a few rows) to check the type of data with gettype() and then automatically set the column type, the only way I can figure out to do this is to set every column type to TEXT and then change the types retrospectively based on a MySQL query.
Here is the function that's called for the table creation after the foreach above.
function CreateTableandRows($table, $rows) {
    $db = array(
        "host"  => '10.0.0.200',
        "user"  => 'user',
        "pass"  => 'password',
        "table" => 'ccl_sagedata'
    );
    $DB = new Database($db);
    $LocSQL = "CREATE TABLE IF NOT EXISTS `" . $table . "` (
        `id` int(11) unsigned NOT NULL auto_increment,
        PRIMARY KEY (`id`),";
    foreach ($rows as $k => $v) {
        $LocSQL .= "
        " . $v . " TEXT NOT NULL default '',";
    }
    $LocSQL = rtrim($LocSQL, ',');
    $LocSQL .= "
    ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    echo '<pre>' . $LocSQL . '</pre>';
    $DB->query($LocSQL);
}
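The retrospective type change mentioned above might then look something like this. A sketch only: the table/column names and the VARCHAR width are examples, with the width chosen after inspecting the imported data:
-- Find the widest value actually stored in a column...
SELECT MAX(CHAR_LENGTH(`ACCOUNT_REF`)) FROM `SALES_LEDGER`;

-- ...then tighten the column from TEXT to something appropriate.
ALTER TABLE `SALES_LEDGER` MODIFY `ACCOUNT_REF` VARCHAR(16) NOT NULL DEFAULT '';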
I then need/want a function that takes each table one at a time and synchronizes the data to the ccl_sagedata database. However, it needs to make sure it's not inserting duplicates; this script will be run to sync the Sage database at the start or end of each day, and without ID numbers REPLACE INTO won't work. I am obviously implementing auto-increment primary IDs for each new table in the ccl_sagedata DB, but I need to be able to reference something static in each table that I can identify through ODBC (I hope that makes sense). In my current function, it has to call the MySQL database for each row of the Sage database and see if there is a matching row.
function InsertDataFromSage($ODBCTable) {
    global $db; // note: $db is only defined in CreateTableandRows, so it must be made visible here
    $rConn = odbc_connect("SageLine50", "user", "pass");
    $query = "SELECT * FROM " . $ODBCTable;
    $rRes = odbc_exec($rConn, $query);
    $rFields = odbc_num_fields($rRes);
    $result = array();
    while ($row = odbc_fetch_array($rRes)) {
        $result[] = $row;
    }
    $DB = new Database($db);
    foreach ($result as $k => $v) {
        $CHECKEXISTS = "SELECT * FROM " . $ODBCTable . " WHERE"; // incomplete: nothing unique to match on
        $DB->query($CHECKEXISTS);
        // HERE WOULD BE A PART THAT PUTS DATA INTO THE DATABASE IF IT DOESN'T ALREADY EXIST
    }
}
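For the duplicate problem, one option (my suggestion, not anything Sage provides) is to hash each whole row into a synthetic key. A reworked sketch, assuming each ccl_sagedata table has been given an extra row_hash CHAR(32) column with a UNIQUE index (escaping kept deliberately simple):
function InsertDataFromSageUnique($ODBCTable, $db) {
    $rConn = odbc_connect("SageLine50", "user", "pass");
    $rRes = odbc_exec($rConn, "SELECT * FROM " . $ODBCTable);
    $DB = new Database($db);
    while ($row = odbc_fetch_array($rRes)) {
        // The MD5 of every column value acts as the synthetic unique key.
        $hash = md5(implode('|', $row));
        $cols = '`' . implode('`, `', array_keys($row)) . '`';
        $vals = "'" . implode("', '", array_map('addslashes', $row)) . "'";
        // INSERT IGNORE skips rows whose row_hash already exists, so
        // re-running the sync each day doesn't create duplicates.
        $DB->query("INSERT IGNORE INTO `" . $ODBCTable . "` (" . $cols . ", `row_hash`)
                    VALUES (" . $vals . ", '" . $hash . "')");
    }
}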
The only other thing I can think to note is that the 'new Database' class is simply a functionalised standard mysqli database class. It's not something I'm having problems with.
So, to recap:
I am trying to create a synchronization script that creates tables (if they don't exist) within a MySQL database and then imports/syncs the data.
ODBC can't limit the output, so I can't figure out the data types in the columns automatically (and I can't do it manually because it's a massive DB with 80+ tables).
I can't figure out how to stop the script creating duplicates, because there are no IDs in the Sage source database.
For those of you not in the UK, Sage is a useless accounting package that runs on water and coal.
The Sage database only provides data; it doesn't allow you to input data outside of CSV files in the actual program.
I know this is a bit late, but I'm already doing the same thing, just with MS SQL.
I've used a DTS package that truncates known copies of the tables (i.e. AUDIT_JOURNAL) and then copies everything in daily.
I also hit a bit of a wall trying to handle updates of these tables, hence the truncate and re-create. Sync time is seconds, so it's not a bad option. It may be a bit of a ball ache, but I'd say design your sync tables manually.
As you rightly point out, Sage is not very friendly to being poked, so I'd say don't try to sync it all either.
Presumably you'll need reports to present to users, but you don't need that much to do this. I sync COMPANY, AUDIT_JOURNAL, AUDIT_USAGE, CAT_TITLE, CAT_TITLE_CUS, CHART_LIST, CHART_LIST_CUS, BANK, CATEGORY, CATEGORY_CUS, DEPARTMENT, NOMINAL_LEDGER, PURCHASE_LEDGER and SALES_LEDGER.
This allows recreation of all the main reports (balance sheet, trial balance, supplier balances, etc., all with drill-down). If you need more help this late on, let me know. I have a web app called MIS that you could install locally, but the sync is a combo of ODBC and the DTS.
OK, you do not need to create a synchronisation script: you can query ODBC in real time, and you can even do joins, as you do in SQL, to retrieve data from multiple tables. The only thing you cannot do is write data back to Sage.
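For example, a minimal sketch of querying live over ODBC (the join column and the column names here are guesses based on the tables listed above):
$conn = odbc_connect("SageLine50", "user", "password");
// Join two tables directly over ODBC; ACCOUNT_REF as the common
// column is an assumption about the Sage schema.
$res = odbc_exec($conn, "
    SELECT SL.ACCOUNT_REF, SL.NAME, AJ.DATE, AJ.AMOUNT
    FROM SALES_LEDGER SL, AUDIT_JOURNAL AJ
    WHERE AJ.ACCOUNT_REF = SL.ACCOUNT_REF
");
while ($row = odbc_fetch_array($res)) {
    // Render straight to the intranet page; no local copy required.
}
odbc_close($conn);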

Query if an entry has changed since last check, and continuously check for a time

The High Level Idea:
I have a microcontroller that can connect to my site via an HTTP request. I want to feed the device a response as soon as a change is noted in the database.
Because the end device is a client (i.e. a microcontroller), I'm unaware of a method to push data to it without setting up port forwarding, which is heavily undesired. The problem arises when trying to send data from an external network to an internal one: either A) port forwarding, or B) have the client device initiate the request, which leads me to the idea of having the device send an HTTP request to a file that polls for changes.
Update:
Much thanks to Ollie Jones; I have implemented some of his suggestions here.
Jason McCreary suggested having a modified column, which is a big improvement as it should increase speed and reliability. Great suggestion! :)
If overworking the database is a concern in this example, maybe the following would work: when the data is inserted into the database, the changes are also written to a file, and the loop continuously checks that file for an update. Thoughts?
I have table1, and I want to see whether a specific row (based on a UID/key) has been updated since the last time I checked, as well as continuously check for 60 seconds to see whether the record gets updated.
I'm thinking I can do this using the INFORMATION_SCHEMA database.
This database contains information about tables, views, columns, etc.
Attempt at a solution:
<?php
$timer = time() + 60; // poll for up to 60 seconds
$KEY = $_POST['KEY'];
$done = 0;
if (isset($KEY)) {
    // login stuff
    require_once('Connections/check.php');
    $mysqli = mysqli_connect($hostname_check, $username_check, $password_check, $database_check);
    if (mysqli_connect_errno($mysqli)) {
        echo "Failed to connect to MySQL: " . mysqli_connect_error();
    }
    // end login
    $query = "SELECT data1, data2
              FROM station
              WHERE client = $KEY
              AND noted = 0;";
    $update = "UPDATE station
               SET noted = 1
               WHERE client = $KEY
               AND noted = 0;";
    while ($done == 0) {
        $result = mysqli_query($mysqli, $query);
        $update_result = mysqli_query($mysqli, $update); // don't overwrite the SQL string itself
        $row_cnt = mysqli_num_rows($result);
        if ($row_cnt > 0) {
            $row = mysqli_fetch_array($result);
            echo 'data1:' . $row['data1'] . '/';
            echo 'data2:' . $row['data2'] . '/';
            print $row[0];
            $done = 1;
        } else {
            $current = time();
            if ($timer > $current) {
                $done = 0;
                sleep(1); // no update yet; keep checking until the 60 seconds are up
            } else {
                $done = 1;
                echo 'done:nochange'; // 60 seconds passed; end loop
            }
        }
    }
    mysqli_close($mysqli);
    echo 'time:' . time();
} else {
    echo 'error:nokey';
}
?>
Is this an adequate method? Any suggestions to improve the speed as well as the reliability?
If I understand your application correctly, your client is a microcontroller. It issues an HTTP request to your PHP/MySQL web app once in a while. The frequency of that request is up to the microcontroller, but it seems to be about once a minute.
The request basically asks, "dude, got anything new for me?"
Your web app needs to send the answer, "not now" or "here's what I have."
Another part of your app is providing the information in question. And it's doing so asynchronously with your microcontroller (that is, whenever it wants to).
To make the microcontroller query efficient is your present objective.
(Note, if I have any of these assumptions wrong, please correct me.)
Your table will need a last_update column, a which_microcontroller column or the equivalent, and a notified column. Just for grins, let's also put in value1 and value2 columns. You haven't told us what kind of data you're keeping in the table.
Your software which updates the table needs to do this:
UPDATE theTable
SET notified = 0, last_update = now(),
    value1 = ?data,
    value2 = ?data
WHERE which_microcontroller = ?microid
It can do this as often as it needs to. The new data values replace and overwrite the old ones.
Your software which handles the microcontroller request needs to do this sequence of queries:
START TRANSACTION;
SELECT value1, value2
FROM theTable
WHERE notified = 0
AND microcontroller_id = ?microid
FOR UPDATE;
UPDATE theTable
SET notified=1
WHERE microcontroller_id = ?microid;
COMMIT;
This will retrieve the latest value1 and value2 items (your application's data, whatever it is) from the database, if it has been updated since last queried. Your php program which handles that request from the microcontroller can respond with that data.
If the SELECT statement returns no rows, your php code responds to the microcontroller with "no changes."
This all assumes microcontroller_id is a unique key. If it isn't, you can still do this, but it's a little more complicated.
Notice we didn't use last_update in this example. We just used the notified flag.
If you want to wait until sixty seconds after the last update, it's possible to do that. That is, if you want to wait until value1 and value2 stop changing, you could do this instead.
START TRANSACTION;
SELECT value1, value2
FROM theTable
WHERE notified = 0
AND last_update <= NOW() - INTERVAL 60 SECOND
AND microcontroller_id = ?microid
FOR UPDATE;
UPDATE theTable
SET notified=1
WHERE microcontroller_id = ?microid;
COMMIT;
For these queries to be efficient, you'll need this index:
(microcontroller_id, notified, last_update)
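For example (assuming the table really is named theTable; adjust the names to your schema):
ALTER TABLE theTable
    ADD INDEX idx_micro_notified (microcontroller_id, notified, last_update);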
In this design, you don't need to have your PHP code poll the database in a loop. Rather, you query the database when your microcontroller checks in for an update.
If all table1 changes are handled by PHP, then there's no reason to poll the database. Add the logic you need at the PHP level when you're updating table1.
For example (assuming OOP):
public function update() {
    if ($row->modified > (time() - 60)) {
        // perform code for modified in last 60 seconds
    }
    // run mysql queries
}

MySQL: increment a column by one

I use my own MVC framework, and the post action is:
function post($pid = 0, $title = '')
{
    $pid = $this->input->clean($pid);
    $stm = $this->Post->query("UPDATE posts SET `show_count` = show_count + 1 WHERE id = $pid");
    $stm->execute();
    $row = $this->Post->find()->where(array('id' => $pid))->fetchOne();
    $this->layout->set('comments', $this->comments($pid));
    $this->layout->set('row', $row);
    $this->side();
    $this->layout->view('post');
    echo $this->layout->render($row['title']);
}
When a record is fetched from the database, I want to add one to the show_count column. I use this query:
UPDATE posts SET show_count = show_count + 1 WHERE id = $pid
This works correctly on localhost, but on my shared host the query adds 2 to show_count instead of 1.
How can I solve this problem?
It's working on your localhost but not on your internet host. The host is probably doing something behind the scenes that makes it not work. Also, since you don't have access to all the configuration on the host server, you probably won't be able to just fix it.
Simplest fix is to do it in the software layer (assuming $pid is unique).
First get the show_count.
Then:
$oldShowCount = the current show_count
$newShowCount = $oldShowCount + 1
UPDATE posts SET show_count = $newShowCount WHERE id = $pid AND show_count = $oldShowCount
At this point, if no race condition occurred, the row will update (rows updated = 1). If a race condition occurred, the update will fail (rows updated = 0).
Then check the number of rows updated to verify it worked, and repeat until it does.
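A minimal sketch of that retry loop, reusing the framework calls from the question (and assuming its statement object exposes a rowCount() like PDO's):
$updated = 0;
while ($updated == 0) {
    // Read the current count.
    $row = $this->Post->find()->where(array('id' => $pid))->fetchOne();
    $oldShowCount = $row['show_count'];
    $newShowCount = $oldShowCount + 1;

    // Only apply the update if nobody else changed the count meanwhile.
    $stm = $this->Post->query(
        "UPDATE posts SET show_count = $newShowCount
         WHERE id = $pid AND show_count = $oldShowCount"
    );
    $stm->execute();
    $updated = $stm->rowCount(); // 0 means we lost a race; try again
}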
This is quite a common case.
You are running your script twice, probably because the browser is calling it twice. Check your rewrite rules.
