I want to use temporary tables in my PHP code, for a form that will be emailed. I do use session variables and arrays, but some of the data filled in must be stored in table format, and the user must be able to delete entries in case of typos, etc. Doing this with arrays could work (not sure), but I'm kind of new at PHP and using tables seems so much simpler. My problem is that with mysql_connect, the script creates the table and adds the line of data, but when I add my second line it drops the table and creates it again. Using mysql_pconnect works in that the table isn't dropped, but it sometimes creates more than one instance of the table, and deleting entries? What a mess! How can I best use temporary tables and not have them dropped when my page refreshes? Not using temporary tables may cause other issues: if the user closes the page, the table is left behind in the database.
Sounds like a mess! I'm not sure why you are using a temp table at all. You could create a random table name and assign it to a session variable, but that would be hugely wrong, as you would end up with a table for each user!
If you must use a database, add a field to the table called sessionID. When you do your inserting/deleting, reference the PHP session ID.
Just storing the data in the session would probably be much easier, though...
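For example, a minimal sketch of the session approach (field names are illustrative); each entry is a sub-array the user can delete by index:
session_start();
// add a new row from the submitted form
$_SESSION['entries'][] = array('name' => $_POST['name'], 'phone' => $_POST['phone']);
// delete a row (e.g. a typo) by its index, then re-index
unset($_SESSION['entries'][$_POST['delete_index']]);
$_SESSION['entries'] = array_values($_SESSION['entries']);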
Better to create a permanent table and temporary rows. So, say you've serialized the object holding all your semi-complete form data as $form_data. At the end of the script, the last thing that should happen is that $form_data gets stored to the database and the resulting row ID gets stored in your $_SESSION. Something like:
$form_data = serialize($form_object); // you can also serialize arrays, etc.
// $form_data may also need to be base64_encoded
$q = "INSERT INTO incomplete_form_table (thedata) VALUES ('$form_data')";
$r = $mysqli->query($q);
$id = $mysqli->insert_id;
$_SESSION['currentform'] = $id;
Then, when you get to the next page, you reconstitute your form like this:
$q="SELECT thedata FROM incomplete_form_table WHERE id={$_SESSION['currentform']}";
$r=$mysql->query($q);
$form_data=$r->fetch_assoc();
$form=$form_data['thedata'];//if it was base64_encoded, don't forget to unencode it first
You can (auto-)clean up the incomplete_form_table table periodically if it has a timestamp field. Just delete everything that you consider expired.
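A minimal sketch of that cleanup, assuming a TIMESTAMP column named created_at (not part of the code above):
// purge incomplete forms older than a day; run from cron or at the top of the script
$mysqli->query("DELETE FROM incomplete_form_table WHERE created_at < NOW() - INTERVAL 1 DAY");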
The above code has not been exhaustively checked for syntax errors.
I have the following problem:
I have got a dataset inside a text file (not XML- or CSV-encoded or anything, just field values separated by \t and \n) which is updated every 2 minutes. I need to put the data from the file into a MariaDB database, which itself is not very difficult to do.
What I am unsure about, however, is how I would go about updating the table when the file's contents change. I thought about truncating the table and then filling it again, but doing that every 2 minutes with about 1000 datasets would mean some nasty problems with the database being incomplete during those updates, which makes it unusable as a solution (and it wouldn't have been usable with fewer datasets either :D)
Another solution I thought about was to append the new data to the existing table and use a delimiter on the unique column (e.g. use IDs 1-1000 before the update, append the new data as IDs 1001-2000, switch over, then remove 1-1000; after two or so updates, start at ID 1 again).
Updating the changed fields is not an option, because the raw data format would make it really difficult to keep track of which column has changed (or hasn't).
I am, however, unsure about best practices, as I am relatively new to SQL and stuff, and would like to hear your opinion; maybe I am just overlooking something obvious...
Even better...
CREATE TABLE `new` LIKE `real`; -- permanent, not TEMPORARY
-- load `new` from the incoming data
RENAME TABLE `real` TO `old`, `new` TO `real`;
DROP TABLE `old`;
Advantages:
The table `real` is never invisible to the application, nor ever seen empty.
The RENAME is "instantaneous" and "atomic".
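A rough sketch of the whole cycle from PHP, assuming a mysqli connection in $mysqli and a tab-separated feed file at an illustrative path (LOAD DATA LOCAL also requires local_infile to be enabled):
$mysqli->query("CREATE TABLE `new` LIKE `real`");
$mysqli->query("LOAD DATA LOCAL INFILE '/tmp/feed.txt' INTO TABLE `new`
                FIELDS TERMINATED BY '\\t' LINES TERMINATED BY '\\n'");
// the application keeps reading `real` the whole time; the swap is atomic
$mysqli->query("RENAME TABLE `real` TO `old`, `new` TO `real`");
$mysqli->query("DROP TABLE `old`");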
As suggested by Alex, I will create a temporary table, insert my data into the temporary table, truncate the production table and then insert from the temporary table. Works like a charm!
I need to generate a PHP script that will carry out a sequential backup and update/rename of a number of MySQL tables. Can I do this in a single query, or will I need to generate a query for each action?
I need the script to do the following, in order:
DROP TABLE backup2
RENAME TABLE backup1 TO backup2
RENAME TABLE main TO backup1
COPY TABLE incomingmain TO main
TRUNCATE TABLE incomingmain
In practice, the table incomingmain will be populated from an external import before the table-update sequence above is carried out.
Can anyone please advise how I should structure this after connecting to the database?
You are better off using mysqli::multi_query().
It also depends on the return values, meaning: are you going to check for errors, or blindly run them all at once? If I were you, I would code it sequentially, just because it will look much cleaner from a coding point of view.
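A minimal sketch of the sequential approach, assuming a mysqli connection in $mysqli. MySQL has no COPY TABLE statement, so the copy step is written here as CREATE TABLE ... LIKE plus INSERT ... SELECT:
$steps = array(
    "DROP TABLE backup2",
    "RENAME TABLE backup1 TO backup2",
    "RENAME TABLE main TO backup1",
    "CREATE TABLE main LIKE incomingmain",         // stand-in for "COPY TABLE"
    "INSERT INTO main SELECT * FROM incomingmain",
    "TRUNCATE TABLE incomingmain",
);
foreach ($steps as $sql) {
    if (!$mysqli->query($sql)) {
        // stop immediately so later steps don't run against a half-rotated set of tables
        die("Failed at '$sql': " . $mysqli->error);
    }
}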
I have two tables in my database, 'file' and 'message'. If a user submits a file, I first insert the file name, location and so on into the file table. After that I immediately set the following variable:
$file_id = mysql_insert_id();
This will get the assigned ID of the file; with that variable I can then insert it into the message table, as message_file_id. That way I can get a message's file with just an ID.
All is working, but my question is: if I were to have two users submit a message with a file at the same time (which would be hard, or even impossible), is there a possibility that user A gets user B's file ID?
There are variables that could affect this, like one user having a faster or slower connection than another.
No, there is no chance user A will get user B's file ID, as long as both inserts are performed over the same connection. MySQL is smart enough to give out distinct auto-increment IDs to different connections, even if they are inserting into the same table concurrently. You don't even have to wrap the two inserts in a transaction.
So there is no problem at all as long as you're using mysql_insert_id(): it will always work as you would expect and won't mix up IDs, and you are completely safe to use its return value as the foreign key in a record related to the one you have just inserted.
http://dev.mysql.com/doc/refman/5.0/en/mysql-insert-id.html
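For reference, a minimal sketch of the pattern (column names are illustrative; the old mysql_* API is kept to match the question):
// both calls run on the same connection, so the ID cannot come from another user's insert
mysql_query("INSERT INTO file (name, location) VALUES ('a.pdf', '/uploads/a.pdf')");
$file_id = mysql_insert_id(); // last auto-increment ID generated on THIS connection
mysql_query("INSERT INTO message (body, message_file_id) VALUES ('see attached', $file_id)");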
I've got a PHP script pulling a file from a server and plugging the values in it into a Database every 4 hours.
This file can, and most likely will, change within the 4 hours (or whatever timeframe I finally choose). It's a list of properties and their owners.
Would it be better to check the file and compare it to each DB entry and update any if they need it, or create a temp table and then compare the two using an SQL query?
Neither.
What I'd personally do is run an INSERT using ON DUPLICATE KEY UPDATE (assuming your table is properly designed and that you are using at least one piece of information from your file as a UNIQUE key, which, based on your comment, you should be).
Reasons
Creating a temp table is a hassle.
Comparing is a hassle too. You need to select a record, compare it, update it if they're not equal, and so on; it's just a giant waste of time when there's a better way to do it.
It is so much easier to just insert everything you find; if a clash occurs, that means the record already exists and most likely needs updating.
That way you take care of everything with one query, your data integrity is preserved, and you can just keep filling your table with new records or updating existing ones.
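A minimal sketch, assuming an illustrative properties table in which property_id is the UNIQUE key:
// one statement per row from the file: inserts new rows, updates existing ones
$mysqli->query("INSERT INTO properties (property_id, owner, address)
                VALUES (123, 'J. Smith', '1 Main St')
                ON DUPLICATE KEY UPDATE owner = VALUES(owner), address = VALUES(address)");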
I think it would be best to download the file and update the existing table, maybe using REPLACE or REPLACE INTO. "REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted." http://dev.mysql.com/doc/refman/5.0/en/replace.html
Presumably you have a list of columns that will have to match in order for you to decide that the two things match.
If you create a UNIQUE index over those columns, then you can use either INSERT ... ON DUPLICATE KEY UPDATE (manual) or REPLACE INTO ... (manual).
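A minimal sketch of the REPLACE variant (names illustrative). Keep in mind that REPLACE deletes the old row and inserts a new one, so any columns you don't supply fall back to their defaults:
$mysqli->query("CREATE UNIQUE INDEX uniq_property ON properties (property_id)");
$mysqli->query("REPLACE INTO properties (property_id, owner) VALUES (123, 'J. Smith')");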
I'm creating a new web application and I only want to update certain tables when the db is changed, this will be done by firing off a PHP script in a JavaScript setInterval.
My issue is that I am having a hard time trying to find a way to easily determine if certain rows in the database table have changed. In other instances I have used "show table status" and compared that against the last update_time that I had already stored.
In this case, though, I don't want to look at the entire table; I just need to know if the rows in my query have changed. I originally tried running md5 on the array, as well as on the serialization of the array. Neither of them worked; they returned the same md5 hash each time.
Just for example, here is my query:
$getUserTable = mysql_query("select * from uct_admin.resources where teamID='62' order by ResourceName");
Solution that I came up with after reading the answers:
$getUserTable = mysql_query("select * from uct_admin.resources where teamID='62' order by csiname");
while ($row = mysql_fetch_assoc($getUserTable)) {
    $arrayTest[] = $row; // collect every row of the result set
}
// http_build_query flattens the nested array into a single string that md5 can hash
$hash = md5(http_build_query($arrayTest, 'flags_'));
This works perfectly, even with the nested array returned from the query.
I would consider adding an additional TIMESTAMP field to the table that's updated on UPDATE/INSERT; then you could just check SELECT MAX(lastModified) FROM uct_admin.resources WHERE teamID='62'.
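A minimal sketch of adding that column, assuming the name lastModified and a mysqli connection in $mysqli:
// MySQL maintains the value automatically on every INSERT and UPDATE
$mysqli->query("ALTER TABLE uct_admin.resources
                ADD COLUMN lastModified TIMESTAMP
                DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP");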
If you don't want to change the table structure, you could add a trigger to this table and have it keep track of the last change to each team's data in a separate table.
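A rough sketch of that trigger idea (the log table and trigger names are illustrative):
$mysqli->query("CREATE TABLE team_changes (teamID INT PRIMARY KEY, lastChange TIMESTAMP)");
// single-statement trigger body, so it can be created through query(); shown for UPDATE,
// with matching triggers needed for INSERT and DELETE
$mysqli->query("CREATE TRIGGER resources_touch AFTER UPDATE ON uct_admin.resources
                FOR EACH ROW
                REPLACE INTO team_changes (teamID, lastChange) VALUES (NEW.teamID, NOW())");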
I'd create a table to log all changes, called dbLogTable. ID (a foreign key) and timeStamp would be its columns. Your PHP script could detect changes by running SELECT COUNT(*) FROM dbLogTable, which would be zero if there were no changes and nonzero if there were; the changed records could then be accessed via the foreign key, ID. At the end of each run, clear the log with TRUNCATE TABLE dbLogTable.
Disclaimer: I'm a noob when it comes to DB work, so while this may be how I would do it, it might not be the best way to do it.
(I'd go with James' solution unless there's a specific reason you can't)
I did something similar and managed to get the hashing solution you alluded to working okay. It would be very strange to get the same hash from different data.
In my code I'm doing an implode on the array rather than a serialization. I then store the hash in the database, and when I want to compare a new set of data against it, I run md5 on the new data and compare the result to the stored hash.
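Something along these lines (a sketch; the separator choice and mysql_* usage are illustrative):
$parts = array();
while ($row = mysql_fetch_assoc($result)) {
    $parts[] = implode('|', $row); // flatten each row to a plain string
}
$hash = md5(implode("\n", $parts)); // md5 needs a string, not a nested array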
If it still doesn't work can we maybe see your code for hashing and some sample data? I'd be curious to see what's up if this method doesn't work.