I have a database that has a notice_current table and a notices_archive table. As part of a user logout process, I want to move all of their associated notices from the current table to the archive.
In my PHP application code I am currently using a transaction: I copy the notices over, then delete the rows from the notice_current table if there were no errors in the copying. However, I am wondering if MySQL has some built-in function or method for simply moving rows from one table to another. If so, it seems that this would be more efficient than my current implementation.
There's not a single built-in function for this, but if you're currently iterating over all of the rows, then something like this might be a lot more efficient:
BEGIN;
INSERT INTO notices_archive SELECT * FROM notice_current WHERE user_id=%;
DELETE FROM notice_current WHERE user_id=%;
COMMIT;
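If you're driving this from PHP, here is a minimal sketch of that same transaction using PDO (it assumes $pdo is an existing PDO connection with ERRMODE_EXCEPTION enabled; the function name is just for illustration):
<?php
// Move one user's notices from the live table to the archive atomically.
function archiveUserNotices(PDO $pdo, $userId)
{
    $pdo->beginTransaction();
    try {
        // Copy the rows to the archive first...
        $copy = $pdo->prepare('INSERT INTO notices_archive SELECT * FROM notice_current WHERE user_id = ?');
        $copy->execute(array($userId));

        // ...then remove them from the live table.
        $delete = $pdo->prepare('DELETE FROM notice_current WHERE user_id = ?');
        $delete->execute(array($userId));

        $pdo->commit();
    } catch (Exception $e) {
        $pdo->rollBack(); // undo the copy if either statement failed
        throw $e;
    }
}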
You can use phpMyAdmin to do this.
Go to phpMyAdmin, select your source database, click on the Operations tab, choose the copy option, set the parameters, and let phpMyAdmin do the job.
I'm developing a feedback form where students will be allowed to give feedback on particular subjects.
I have a table with three fields, "ID, Unique No, Password", where students' admission numbers are stored. Now here is what I want:
As soon as a student completes giving feedback, their data must be deleted from the table automatically.
Please help.
This can be done with a JOIN, but I'll demonstrate a trigger here, because I mentioned it in my comment above.
I assume you've got another table where you store the students' feedback data. Let's call it students_feedback, and the other students_admission, for this example.
Using MySQL triggers, you instruct the database to delete the student's admission data automatically ON INSERT. You want the INSERT event because it fires as soon as the feedback data is stored in the students_feedback table.
So, for example:
CREATE TRIGGER delete_admission AFTER INSERT
ON students_feedback FOR EACH ROW
BEGIN
/* NEW refers to the row just inserted; INSERT triggers have no OLD row */
DELETE FROM students_admission WHERE students_admission.id=NEW.id LIMIT 1;
END;
Use whatever DELETE query you want here.
NOTE: Before MySQL 5.0.10, triggers cannot contain direct references to tables by name.
As explained before, use a trigger. In phpMyAdmin, simply click on Triggers and create one that occurs after an INSERT on the table that records the students' feedback, along the lines of the example above.
I don't really agree, though, that using triggers is good practice. Triggers are business logic, and that logic should be implemented in your code. Splitting business logic between your app and your database makes it harder for the next developer to work on, since they won't know where to look. The only case where I think they're viable is keeping distributed databases updated in relation to each other.
In an old legacy system we make updates to a user's inventory. The inventory contains many different items; a user has one row per item ID, and each row holds the quantity of that item they own.
Now somewhere in this rather old and behemoth-like code is a problem whereby a user can end up with a negative quantity of an item. This should never happen.
Rather than approaching the problem from the top and going through every piece of code that interacts with the inventory table, we thought we might try to create some reporting to help us find the problems.
Before I go about implementing something that I think may solve this, I thought I'd put it out to the community to find out how they might approach it.
Perhaps we could start by creating ON UPDATE MySQL triggers that insert the suspect activity into another table for closer inspection, etc. Be creative.
If you add a timestamp field then you'll know when the last operation was carried out - from that, you could find the update entry in the MySQL log and possibly reconcile it with the application logs.
Alternatively you could set a trigger on the table...
CREATE TRIGGER no_negatives_in_yourtable
BEFORE UPDATE ON yourtable
FOR EACH ROW
BEGIN
IF (NEW.value<0) THEN
/* log it (NB: this insert is rolled back if the subsequent statement is enabled) */
INSERT INTO badthings (....) VALUES (...);
/* this forces the operation to fail, because the named table does not exist */
DROP TABLE `less than zero value in yourtable`;
END IF;
END
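As an aside, on MySQL 5.5 and later you can raise the error cleanly with SIGNAL instead of the DROP TABLE hack. A sketch of installing such a trigger from PHP, using the same hypothetical table and column names as above:
<?php
// Reject negative quantities outright (requires MySQL 5.5+ for SIGNAL).
// Sent through the API, so there's no need to change the client DELIMITER.
$pdo->exec("
    CREATE TRIGGER no_negatives_in_yourtable
    BEFORE UPDATE ON yourtable
    FOR EACH ROW
    BEGIN
        IF NEW.value < 0 THEN
            SIGNAL SQLSTATE '45000' SET MESSAGE_TEXT = 'negative quantity in yourtable';
        END IF;
    END
");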
I have a script that reads an Excel sheet containing a list of products. There are almost 10,000 products. The script reads these products and compares them with the products in the MySQL database, and checks:
if the product is not available, then ADD IT (so I have an insert query for that)
if the product is already available, then UPDATE IT (so I have an update query for that)
Now the problem is, it creates a very heavy load on the MySQL server, and it fails with the message "MySQL server has gone away".
I want to know: is there a better way to do this Excel-sheet work without putting such load on the MySQL server?
I am not sure if this is the case, but judging from your post, I assume you may be initializing a new connection to the MySQL server for every check. If that is indeed the case, you can simply connect once before the checks and run all subsequent queries through that connection.
Next to that, a good optimization would be to introduce indexes in MySQL, which would significantly speed up the product search: add an index on the product-table columns that your PHP search function references most.
You could also increase the MySQL buffer size to something above 256 MB to cache most of the results, and use InnoDB so you don't need to lock the whole table every time you run the check or the inserts.
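For example, assuming the lookup matches products on a code column (hypothetical table and column names), the index could be added once from PHP:
<?php
// One-time setup, not part of the import script itself:
// index the column the product lookup filters on.
$pdo->exec('ALTER TABLE products ADD INDEX idx_sku (sku)');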
I'm not sure why PHP has come into the mix. Excel can connect directly to a MySQL database, and you should be able to do a WHERE ... NOT IN query to add items and UPDATE statements for the ones that have changed, using Excel VBA.
http://helpdeskgeek.com/office-tips/excel-to-mysql/
You could try to condense your code somewhat (you might have already done this though), but if you think it can be whittled down more, post it and we can have a look.
Cache data you know exists already: if a product's attributes don't change regularly, you might not need to check them so often. You can cache the data for quick retrieval/changes later (see Memcached; other caching options are available). You could end up reducing your workload dramatically.
Have you separated your MySQL server? Try running the product checks on a different sub-system, and merge the databases into your main one hourly, daily, or whatever.
OK, here is a quick thought:
Instead of running a query after every check of whether the product is present or not, keep appending to your SQL string until you reach the end, and only then execute it. Note that the old mysql_query() function runs only a single statement, so to execute a batch like this you need mysqli::multi_query().
Example
$query = ""; // create a query container
if($present) {
$query .= "UPDATE ....;"; // remember the ";" delimiter
} else {
$query .= "INSERT ....;";
}
// Now, finally run the whole batch
$result = $mysqli->multi_query($query);
Now you make one round trip to the server at the end instead of one per product.
Update: another way to approach this is to let a single query handle it:
INSERT INTO table (a,b,c) VALUES (1,2,3)
ON DUPLICATE KEY UPDATE c=c+1;
If a row with a=1 already exists (a being a UNIQUE column), this is equivalent to:
UPDATE table SET c=c+1 WHERE a=1;
Reference: the MySQL manual entry for INSERT ... ON DUPLICATE KEY UPDATE.
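Building on that, here is a hedged sketch of a chunked upsert; it assumes a products table with a UNIQUE key on sku and an open mysqli connection, and all names are placeholders:
<?php
// Upsert products in chunks: one multi-row INSERT ... ON DUPLICATE KEY UPDATE
// per 500 rows instead of one round trip per product.
function upsertProducts(mysqli $mysqli, array $rows, $chunkSize = 500)
{
    foreach (array_chunk($rows, $chunkSize) as $chunk) {
        $values = array();
        foreach ($chunk as $row) {
            $sku   = $mysqli->real_escape_string($row['sku']);
            $name  = $mysqli->real_escape_string($row['name']);
            $price = (float) $row['price'];
            $values[] = "('$sku', '$name', $price)";
        }
        $sql = 'INSERT INTO products (sku, name, price) VALUES '
             . implode(',', $values)
             . ' ON DUPLICATE KEY UPDATE name = VALUES(name), price = VALUES(price)';
        if (!$mysqli->query($sql)) {
            die('Upsert failed: ' . $mysqli->error); // handle properly in real code
        }
    }
}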
I have a MySQL DB that receives a lot of data from a source once every week, on a certain day at a given time (about 1.2 million rows), and stores it in, let's call it, the "live" table.
I want to copy all the data from "live" table into an archive and truncate the live table to make space for the next "current data" that will come in the following week.
Can anyone suggest an efficient way of doing this? I am really trying to avoid -- insert into archive_table select * from live --. I would like the ability to run this archiver using PHP, so I can't use Maatkit. Any suggestions?
EDIT: Also, the archived data needs to be readily accessible. Since every insert is timestamped, if I want to look for the data from last month, I can just search for it in the archives.
The sneaky way:
Don't copy records over. That takes too long.
Instead, just rename the live table out of the way, and recreate:
RENAME TABLE live_table TO archive_table;
CREATE TABLE live_table (...);
It should be quite fast and painless.
EDIT: The method I described works best if you want an archive table per rotation period. If you want to maintain a single archive table, you might need to get trickier. However, if you're just wanting to do ad-hoc queries on historical data, you can probably just use UNION.
If you only wanted to save a few periods worth of data, you could do the rename thing a few times, in a manner similar to log rotation. You could then define a view that UNIONs the archive tables into one big honkin' table.
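A sketch of that rotation from PHP, using CREATE TABLE ... LIKE and the fact that a multi-table RENAME is atomic (table names are placeholders; assumes a PDO connection in $pdo):
<?php
// Rotate the live table: build an empty clone, then swap it in atomically.
$archive = 'archive_' . date('Y_m_d'); // e.g. archive_2011_03_07
$pdo->exec('CREATE TABLE live_table_new LIKE live_table');
$pdo->exec("RENAME TABLE live_table TO $archive, live_table_new TO live_table");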
EDIT2: If you want to maintain auto-increment stuff, you might hope to try:
RENAME TABLE live TO archive1;
CREATE TABLE live (...);
ALTER TABLE LIVE AUTO_INCREMENT = (SELECT MAX(id) FROM archive1);
but sadly, that won't work. However, if you're driving the process with PHP, that's pretty easy to work around.
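For instance, something along these lines (again a sketch, with placeholder names):
<?php
// Read the high-water mark from the archive, then set it with a literal
// value, since ALTER TABLE won't accept a subquery there.
$max = (int) $pdo->query('SELECT COALESCE(MAX(id), 0) FROM archive1')->fetchColumn();
$pdo->exec('ALTER TABLE live AUTO_INCREMENT = ' . ($max + 1));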
Write a script to run as a cron job to:
Dump the archive data from the "live" table (this is probably more efficient using mysqldump from a shell script)
Truncate the live table
Modify the INSERT statements in the dump file so that the table name references the archive table instead of the live table
Append the archive data to the archive table (again, could just import from dump file via shell script, e.g. mysql dbname < dumpfile.sql)
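A rough sketch of those four steps (database name, credentials, and paths are placeholders; error handling omitted):
<?php
$db   = 'dbname';
$dump = '/tmp/live_dump.sql';

// 1. Dump only the data from the live table (no CREATE TABLE statements).
shell_exec("mysqldump --no-create-info $db live_table > $dump");

// 2. Truncate the live table.
$pdo->exec('TRUNCATE TABLE live_table');

// 3. Point the dump's INSERT statements at the archive table instead.
file_put_contents($dump,
    str_replace('`live_table`', '`archive_table`', file_get_contents($dump)));

// 4. Append the rewritten dump to the archive table.
shell_exec("mysql $db < $dump");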
This would depend on what you're doing with the data once you've archived it, but have you considered using MySQL replication?
You could set up another server as a replication slave, and once all the data gets replicated, issue SET sql_log_bin = 0 before your delete or truncate so that statement isn't replicated as well.
I'm working on a basic php/mysql CMS and have a few questions regarding performance.
When viewing a blog page (or other sortable data) from the front-end, I want to allow a simple 'sort' variable to be added to the querystring, allowing posts to be sorted by any column. Obviously I can't blindly accept whatever arrives in the querystring, and need to make sure the column actually exists on the table.
At the moment I'm using
SHOW TABLES;
to get a list of all of the tables in the database, then looping over the array of table names and performing
SHOW COLUMNS FROM tbl_name;
on each.
My worry is that my CMS might take a performance hit here. I thought about using a static array of the table names but need to keep this flexible as I'm implementing a plugin system.
Does anybody have any suggestions on how I can keep this more concise?
Thank you
If you're using MySQL 5+, you'll find the information_schema database useful for this task. In it you can read table, column, and reference metadata with plain SQL queries. For example, you can check whether a specific column exists in a table:
SELECT count(*) FROM information_schema.COLUMNS
WHERE
TABLE_SCHEMA='your_database_name' AND
TABLE_NAME='your_table' AND
COLUMN_NAME='your_column';
And here is a list of the tables in which a specific column exists:
SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.COLUMNS WHERE COLUMN_NAME='your_column';
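For the CMS use case, that check can be wrapped into a small whitelist helper before the column name ever reaches ORDER BY (a sketch; table and column names are placeholders, and $pdo is an open PDO connection):
<?php
// Return true if $column really exists on $table in the current database.
function isSortableColumn(PDO $pdo, $table, $column)
{
    $stmt = $pdo->prepare(
        'SELECT COUNT(*) FROM information_schema.COLUMNS
         WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = ? AND COLUMN_NAME = ?'
    );
    $stmt->execute(array($table, $column));
    return (bool) $stmt->fetchColumn();
}

$sort = isset($_GET['sort']) ? $_GET['sort'] : 'created_at';
if (!isSortableColumn($pdo, 'posts', $sort)) {
    $sort = 'created_at'; // fall back to a safe default column
}
$rows = $pdo->query("SELECT * FROM posts ORDER BY `$sort`")->fetchAll();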
Since you're currently hitting the db twice before you do your actual query, you might want to consider just wrapping the actual query in a try{} block. Then if the query works you've only done one operation instead of 3. And if the query fails, you've still only wasted one query instead of potentially two.
The important caveat (as usual!) is that any user input must be cleaned before doing this.
You could query the table up front and store the columns in a cache layer (e.g. Memcached or APC). You could then set the expiry to infinite and only delete and re-create the cache entry when a plugin has been newly added, updated, etc.
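A sketch of that with Memcached (the key name and server details are placeholders):
<?php
// Serve the column list from cache; rebuild it only on a miss.
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$columns = $memcached->get('cms_posts_columns');
if ($columns === false) {
    $columns = $pdo->query('SHOW COLUMNS FROM posts')->fetchAll(PDO::FETCH_COLUMN);
    $memcached->set('cms_posts_columns', $columns, 0); // 0 = never expire
}
// Delete 'cms_posts_columns' whenever a plugin changes the schema.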
I guess the best bet is to put all that stuff you're getting from SHOW TABLES etc. in a file already and just include it, instead of running those queries every time. Or implement some sort of caching if the project is still in development and you think the fields will change.