I'm creating a new web application, and I only want to update certain tables when the database has changed. This will be done by firing off a PHP script in a JavaScript setInterval.
My issue is that I am having a hard time finding a way to easily determine whether certain rows in a database table have changed. In other instances I have used SHOW TABLE STATUS and compared its update_time against the last update_time that I had already stored.
In this case, though, I don't want to look at the entire table; I just need to know if the rows in my query have changed. I originally tried to md5 the result array, as well as md5 the serialization of the array. It seems that neither of them works; both return the same md5 hash each time.
Just for example, here is my query:
$getUserTable = mysql_query("select * from uct_admin.resources where teamID='62' order by ResourceName");
Solution that I came up with after reading the answers:
$getUserTable = mysql_query("select * from uct_admin.resources where teamID='62' order by csiname");

// collect every row into a nested array
while ($row = mysql_fetch_assoc($getUserTable)) {
    $arrayTest[] = $row;
}

// http_build_query flattens the nested result array into a single string
// (the second argument is just a prefix for numeric keys), which md5 can then hash
$hash = md5(http_build_query($arrayTest, 'flags_'));
This works perfectly, since http_build_query handles the nested array returned from the query.
I would consider adding an additional TIMESTAMP field to the table that's updated on UPDATE/INSERT; then you could just check SELECT MAX(lastModified) FROM resources WHERE teamID = 62.
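For example (a rough sketch; I'm guessing at your table and column names here):

ALTER TABLE uct_admin.resources
    ADD lastModified TIMESTAMP NOT NULL
    DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

-- then poll with:
SELECT MAX(lastModified) FROM uct_admin.resources WHERE teamID = 62;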
If you don't want to change the table structure, you could add a trigger to this table and have it keep track of the last change to each team's data in a separate table.
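A sketch of that trigger approach (names are placeholders, and you'd want a matching AFTER INSERT trigger too):

CREATE TABLE team_changes (
    teamID INT PRIMARY KEY,
    lastChange DATETIME NOT NULL
);

CREATE TRIGGER resources_after_update
AFTER UPDATE ON uct_admin.resources
FOR EACH ROW
    REPLACE INTO team_changes (teamID, lastChange) VALUES (NEW.teamID, NOW());

Then SELECT lastChange FROM team_changes WHERE teamID = 62 becomes a very cheap check.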
I'd create a table to log all changes, called dbLogTable, with ID (a foreign key) and timeStamp as its columns. Each time your PHP script runs, use TRUNCATE TABLE dbLogTable to clear the table. Your PHP script could detect any changes by running SELECT COUNT(*) FROM dbLogTable, which would be zero if there were no changes and nonzero if there were. All changes could then be accessed via the foreign key, ID.
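Roughly like this (a sketch with assumed names):

CREATE TABLE dbLogTable (
    ID INT NOT NULL,                -- foreign key to the watched table
    timeStamp TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- each polling run:
SELECT COUNT(*) FROM dbLogTable;    -- nonzero means something changed
-- ...fetch the changed rows via ID, then:
TRUNCATE TABLE dbLogTable;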
Disclaimer: I'm a noob when it comes to DB work, so while this may be how I would do it, it might not be the best way to do it.
(I'd go with James' solution unless there's a specific reason you can't)
I did something similar and managed to get the hashing solution you alluded to working okay. It would be very strange to get the same hash from different data.
In my code I'm doing an implode on the array rather than a serialization. I then store the hash in the database and when I want to compare a new set of data against it I run the md5 on the new data and compare it to the hash.
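Something along these lines (a minimal sketch, assuming $rows holds the rows fetched from your query and $storedHash is the hash saved last time):

$flat = '';
foreach ($rows as $row) {
    $flat .= implode('|', $row); // flatten each row before hashing
}
$hash = md5($flat);

if ($hash !== $storedHash) {
    // the data changed; store the new hash and refresh your page content
}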
If it still doesn't work can we maybe see your code for hashing and some sample data? I'd be curious to see what's up if this method doesn't work.
Let's say I have dynamic numbers, each with a unique ID.
I'd like to insert them into a database. But if a certain ID (UNIQUE) already exists, I need to add to the value that is already there.
I've already tried using "ON DUPLICATE KEY UPDATE", but it's not really working out. And selecting the old data so I can add to it and then updating it is not efficient.
Is there any query that could do that?
Incrementing your value in your application does not guarantee you'll always have accurate results in your database because of concurrency issues. For instance, if two web requests need to increment the number with the same ID, depending on when the computer switches the processes on the CPU, you could have the requests overwriting each other.
Instead do an update similar to:
UPDATE `table` SET `number` = `number` + 1 WHERE `ID` = YOUR_ID
Check the return value from the statement. An update should return the number of rows affected, so if the value is 1, you can move on happy to know that you were as efficient as possible. On the other hand, if your return value is 0, then you'll have to run a subsequent insert statement to add your new ID/Value.
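In PHP that could look like this (a sketch using mysqli; the table and column names are assumptions). Since `number` = `number` + 1 always changes the value, an affected-rows count of 0 reliably means no matching row exists:

$stmt = $mysqli->prepare("UPDATE `table` SET `number` = `number` + ? WHERE `ID` = ?");
$stmt->bind_param('ii', $amount, $id);
$stmt->execute();

if ($stmt->affected_rows === 0) {
    // no row for this ID yet, so create it
    $ins = $mysqli->prepare("INSERT INTO `table` (`ID`, `number`) VALUES (?, ?)");
    $ins->bind_param('ii', $id, $amount);
    $ins->execute();
}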
This is also the safest way to ensure concurrency.
Hope this helps and good luck!
I did something different. Instead of updating the old values, I'm inserting new data and leaving the old rows, but using certain unique keys so I don't get duplicates. To display the data I use a simple SELECT query with SUM, grouped by ID. Works great; I just don't know if it's the most efficient way of doing it.
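In other words, something like this (table and column names are made up):

INSERT INTO value_log (item_id, value) VALUES (42, 7);

-- and for display:
SELECT item_id, SUM(value) AS total
FROM value_log
GROUP BY item_id;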
I read the topic https://meta.stackexchange.com/questions/36728/how-are-the-number-of-views-in-a-question-calculated . I understand the algorithm, but I don't understand how to do that in MySQL and PHP.
Every time a new hit is registered, it is also added to a memory buffer in addition to the expiring cache entry. The buffer itself also expires after a few minutes or after it is filled up to a certain size, whichever happens first. When it expires, everything it has accumulated is written into the database in bulk. They call it a "buffered write scheme".
Should we use the MEMORY storage engine in MySQL, or is there maybe a better solution with MySQL and PHP?
Can anyone show me how to implement a "buffered write scheme" for a view counter with PHP and MySQL?
Thanks very much.
Well, it won't go faster than MySQL.
A stored procedure for your query can speed up the process, but database design is the other half.
Make sure you have one table for counting:
user_thread_visit:
----------------------------
user_id | thread_id | count
----------------------------
Make sure there is an index, or better a unique index across the two columns, on "user_id" and "thread_id".
When a user logs in, read all of his thread_id and count values and save them in a $_SESSION array.
This way you can check via the $_SESSION var whether the user has already visited the page, and simply skip querying the database if he was already there; this will reduce queries drastically.
Then simply don't forget to UPDATE your database in case the user has never been on this thread, and also update your $_SESSION array directly.
With the query helper:
INSERT INTO table (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
you can simply combine INSERT INTO and UPDATE (whatever is needed) into one query, also improving performance.
This way a query is only fired when the user enters the thread for the first time, which you have no choice but to record somewhere, and a database is one of the fastest ways to do that.
As long as your thread_id and user_id fields are indexed, the SELECT query should be pretty fast, even with a million rows in the table.
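Putting it together, a sketch of the whole scheme (assumes an open $mysqli connection and a unique index on (user_id, thread_id); names match the table above):

session_start();

$userId   = (int) $_SESSION['user_id'];
$threadId = (int) $_GET['thread_id'];

if (!isset($_SESSION['visited_threads'][$threadId])) {
    // first visit in this session: one query records the hit
    $stmt = $mysqli->prepare(
        "INSERT INTO user_thread_visit (user_id, thread_id, count)
         VALUES (?, ?, 1)
         ON DUPLICATE KEY UPDATE count = count + 1"
    );
    $stmt->bind_param('ii', $userId, $threadId);
    $stmt->execute();

    $_SESSION['visited_threads'][$threadId] = true; // skip the DB next time
}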
I've got a PHP script pulling a file from a server and plugging the values in it into a Database every 4 hours.
This file can, and most likely will, change within the 4 hours (or whatever timeframe I finally choose). It's a list of properties and their owners.
Would it be better to check the file and compare it to each DB entry and update any if they need it, or create a temp table and then compare the two using an SQL query?
Neither.
What I'd personally do is run the INSERT command using ON DUPLICATE KEY UPDATE (assuming your table is properly designed and you are using at least one piece of information from your file as a UNIQUE key, which you should, based on your comment).
Reasons
Creating a temp table is a hassle.
Comparing is a hassle too. You need to select a record, compare it, update it if it differs, and so on; comparing each piece of info is just a giant waste of time when there's a better way to do it.
It is so much easier to just insert everything you find, and if a clash occurs, that means the record already exists and most likely needs updating.
That way you take care of everything with one query, your data integrity is preserved, and you can just keep filling your table or updating it with new records.
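For example, if the file gives you an address and an owner, and address is your UNIQUE key (names here are assumptions):

INSERT INTO properties (address, owner)
VALUES ('12 Main St', 'Alice'), ('7 Oak Ave', 'Bob')
ON DUPLICATE KEY UPDATE owner = VALUES(owner);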
I think it would be best to download the file and update the existing table, maybe using REPLACE or REPLACE INTO. "REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted." http://dev.mysql.com/doc/refman/5.0/en/replace.html
Presumably you have a list of columns that will have to match in order for you to decide that the two things match.
If you create a UNIQUE index over those columns, then you can use either INSERT ... ON DUPLICATE KEY UPDATE (manual) or REPLACE INTO ... (manual).
I want to use temporary tables in my PHP code. It is a form that will be mailed. I do use session variables and arrays, but some of the data filled in must be stored in a table format, and the user must be able to delete entries in case of typos, etc. Doing this with arrays could work (I'm not sure), but I'm kinda new at PHP and using tables seems so much simpler.

My problem is that using mysql_connect creates the table and adds the line of data, but when I add my 2nd line it drops the table and creates it again. Using mysql_pconnect works by not dropping the table, but at times it creates more than one instance of the table, and deleting entries? What a mess!

How can I best use temporary tables and not have them dropped when my page refreshes? Not using temporary tables may cause other issues if the user closes the page, leaving the table in the database.
Sounds like a mess! I am not sure why you are using a temp table at all, but you could create a random table name and assign it to a session variable. But this is hugely wrong as you would have a table for each user!
If you must use a database, add a field to the table called sessionID. When you do your inserting/deleting, reference the PHP session id.
Just storing the data in the session would probably be much easier though...
Better to create a permanent table and temporary rows. So, say you've serialized the object holding all your semi-complete form data as $form_data. At the end of the script, the last thing that should happen is that $form_data is stored to the database and the resulting row id is stored in your $_SESSION. Something like:
$form_data = serialize($form_object); // you can also serialize arrays, etc.
// $form_data may also need to be base64_encoded, and should be escaped before being put in the query
$q = "INSERT INTO incomplete_form_table (thedata) VALUES ('$form_data')";
$r = $mysqli->query($q);
$id = $mysqli->insert_id;
$_SESSION['currentform'] = $id;
Then, when you get to the next page, you reconstitute your form like this:
$q="SELECT thedata FROM incomplete_form_table WHERE id={$_SESSION['currentform']}";
$r=$mysql->query($q);
$form_data=$r->fetch_assoc();
$form=$form_data['thedata'];//if it was base64_encoded, don't forget to unencode it first
You can (auto-) clean up the incomplete_form_data table periodically if the table has a timestamp field. Just delete everything that you consider expired.
The above code has not been exhaustively checked for syntax errors.
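For the periodic cleanup, something like this could work (assuming you add a timestamp column, here called created_at):

DELETE FROM incomplete_form_table
WHERE created_at < NOW() - INTERVAL 1 DAY;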
I've got an application in PHP & MySQL where users write to and read from a particular table. One of the write modes is a batch, doing only one query with multiple values. The table has an ID which auto-increments.
The idea is that for each row in the table that is inserted, a copy is inserted in a separate table, as a history log, including the ID that was generated.
The problem is that multiple users can do this at once, and I need to be sure that the IDs loaded are the correct ones.
Can I be sure that if I do for example:
INSERT INTO table1 VALUES ('','test1'),('','test2')
that the ids generated are sequential?
How can I get the IDs that were just inserted, and be sure that those are the right ones?
I've thought of LOCK TABLES, but the users shouldn't notice this.
Hope I made myself clear...
Building an application that requires generated IDs to be sequential usually means you're taking a wrong approach - what happens when you have to delete a value some day, are you going to re-sequence the entire table? Much better to just let the values fall as they may, using a primary key to prevent duplication.
Based on the current implementations of MyISAM and InnoDB, yes. However, this is not guaranteed to be so in the future, so I would not rely on it.
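If you do rely on the current behaviour, note that for a multi-row INSERT, LAST_INSERT_ID() (and mysqli's insert_id) returns the ID of the first row of the batch, so you can work out the range. A sketch (the column name and the table1_history table are assumptions):

$mysqli->query("INSERT INTO table1 (name) VALUES ('test1'), ('test2')");

$firstId = $mysqli->insert_id;                       // ID of the first row in the batch
$lastId  = $firstId + $mysqli->affected_rows - 1;    // ID of the last row

// copy the batch into the history table in one statement
$mysqli->query("INSERT INTO table1_history (id, name)
                SELECT id, name FROM table1
                WHERE id BETWEEN $firstId AND $lastId");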