I have a table which contains orders, and orders are being added to the table by users as time goes by.
I want to implement a service that checks if a row was added to the table.
Is there a specific way to do that?
thanks!
If you want to know which rows have been added since the last time you checked, put a timestamp in each row, and keep track somewhere (separately) of the newest row you've seen so far. To find new rows, query for all rows whose timestamp is newer than the newest one you've seen before. Then take the most recent timestamp from the result set and use it to update your "newest row seen so far" variable.
The database itself doesn't keep track of which rows have been newly-added because the meaning of "new" depends on who's asking. A row that was added six months ago is "new" to someone who hasn't checked since then. That's why you have to use timestamps, and have the application keep track of which timestamp currently marks the boundary between "old" and "new".
Edit: Actually, instead of timestamps, you might want to use an auto-increment integer column. With timestamps there's a slight chance that two rows may be added so close together in time that they get the same timestamp, and if the application does its query at a moment when only one of those rows has been inserted, it'll "miss" the other one next time it checks for new rows because it thinks that timestamp has been seen already. A value that always increases for every new row would avoid that problem, plus many tables have one already (for use as a primary key).
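The polling pattern described above can be sketched like this. This uses SQLite via Python purely to make the example self-contained and runnable; the `orders` table and column names are invented, and in MySQL you'd use an `AUTO_INCREMENT` column the same way:

```python
import sqlite3

# Hypothetical orders table; names are illustrative, not from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT, item TEXT)")
conn.executemany("INSERT INTO orders (item) VALUES (?)", [("a",), ("b",), ("c",)])

last_seen_id = 0  # the "newest row seen so far", persisted by the polling service

def fetch_new_rows(conn, last_seen):
    """Return rows added since last_seen, plus the new high-water mark."""
    rows = conn.execute(
        "SELECT id, item FROM orders WHERE id > ? ORDER BY id", (last_seen,)
    ).fetchall()
    new_last = rows[-1][0] if rows else last_seen
    return rows, new_last

rows, last_seen_id = fetch_new_rows(conn, last_seen_id)   # sees all 3 existing rows
conn.execute("INSERT INTO orders (item) VALUES ('d')")
rows2, last_seen_id = fetch_new_rows(conn, last_seen_id)  # sees only the new row
```

Because the id always increases, the two-rows-same-timestamp race described in the edit cannot occur.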
I need to add records to my table in CakePHP. The number of these records is very high. Every 2 months, some of these records will be removed from the database. Now my question is: is it possible to later reuse the ids of the removed records, in order to save space?
In the sense that if records with ids from 10 to 40 are removed, then if I want to add a new record, can I reuse these ids from 10 to 40?
The data in the table will take up the same amount of space regardless of what ID a row has; deleting rows doesn't leave a blank row or anything behind. I strongly recommend against reusing unique identifiers: the clue is in the name, they should be unique to each record and not recycled!
I'm putting together a series of identical tables that consist only of dates submitted to the DB. The dates mark when a particular job was executed.
The tables work like dominoes in that when a date is submitted in the update table/form the next table assumes the old date through a BEFORE UPDATE trigger, and so on, and so on.
Each of these tables can be linked from the update table, and the point of them is to view a history of work performed, when a particular job was executed. After 20 or so tables the older dates become somewhat irrelevant, but should all be eventually archived in a final, 21st table which absorbs all the dates that keep getting updated.
This last table is updated via trigger, but the old dates/values should be kept, perhaps separated by commas. In other words, while tables 1-20 contain only one date/entry per field (the overwritten date from the previous table), table 21 will list ALL the dates associated with that particular field that have been, or will be, passed down, so no OLD values are overwritten.
After extensive research I discovered that INSERT does not overwrite old data, but every attempt at writing a trigger with INSERT to this last table has failed. All the tables share the same ID, "1". No new tables are created; this is a simple exercise in storing data, and yet this last step is elusive.
No previous answers on SO really helped. How to do this simple job?
The UPDATE trigger that works for all the other tables, derived from a previous SO question, looks something like this:
BEGIN
    UPDATE work2 SET
        ins1  = OLD.ins1,
        insp1 = OLD.insp1,
        b1psp = OLD.b1psp,
        b1ptp = OLD.b1ptp,
        .......................... etc
    WHERE work2.id = OLD.id;
END
There must be a simple solution, yet I'm not familiar enough with PHP to find it. I'm using EasyPHP DevServer 14.1.
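One way to make the final table accumulate values rather than overwrite them is an UPDATE trigger that appends the old date onto a text column. Here is a runnable sketch using SQLite via Python (the table and column names are invented to mirror the snippet above); in MySQL the same idea would use CONCAT_WS inside the trigger body:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE work20 (id INTEGER PRIMARY KEY, ins1 TEXT);
CREATE TABLE work21 (id INTEGER PRIMARY KEY, ins1_history TEXT DEFAULT '');

-- When work20 is updated, append its old date to work21 instead of replacing it.
CREATE TRIGGER archive_dates BEFORE UPDATE ON work20
BEGIN
    UPDATE work21
    SET ins1_history = CASE
        WHEN ins1_history = '' THEN OLD.ins1
        ELSE ins1_history || ',' || OLD.ins1
    END
    WHERE work21.id = OLD.id;
END;

INSERT INTO work20 VALUES (1, '2014-01-01');
INSERT INTO work21 (id) VALUES (1);
""")
conn.execute("UPDATE work20 SET ins1 = '2014-02-01' WHERE id = 1")
conn.execute("UPDATE work20 SET ins1 = '2014-03-01' WHERE id = 1")
history = conn.execute("SELECT ins1_history FROM work21 WHERE id = 1").fetchone()[0]
# history now holds both overwritten dates, comma-separated
```

The key difference from the UPDATE trigger shown above is that the SET clause builds on the column's existing value instead of replacing it.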
We have a daily data feed. I need to determine which rows are new. (It's a long story, but there are no record numbers for the rows, and there aren't going to be any.) We need to be able to identify which rows are new since the previous data feed. The file comes in as JSON, and I have been putting it into a MySQL TABLE for other purposes.
How do I take yesterday's TABLE and compare it to today's TABLE, and to display those rows which have been added since yesterday? Can all this be done in MySQL, or do I need to do this with the help of PHP?
If I were doing this in PHP, I'm thinking I would search today's TABLE against yesterday's TABLE and flag rows in an added column in today's TABLE called NEW, setting it to "N" when a match is found. "Y" would be the default, meaning the row is new. Then, in MySQL, a SELECT WHERE new="Y" would display the new rows. Is this how to do this? Am I overlooking a better method? Thanks!
If you actually have two separate tables (which is how it sounds from your description, but is odd) and aren't comparing literally the same table, you can
SELECT partnumber FROM Today_table WHERE partnumber NOT IN (SELECT partnumber FROM Yesterday_table)
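A runnable sketch of that comparison (SQLite via Python for self-containment; the partnumber values are invented). One caveat worth knowing: NOT IN behaves unexpectedly if the subquery can return NULLs, in which case a NOT EXISTS or LEFT JOIN ... IS NULL form is safer:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Yesterday_table (partnumber TEXT);
CREATE TABLE Today_table (partnumber TEXT);
INSERT INTO Yesterday_table VALUES ('A1'), ('B2');
INSERT INTO Today_table VALUES ('A1'), ('B2'), ('C3');
""")

# Rows present today but absent yesterday are "new".
new_rows = conn.execute("""
    SELECT partnumber FROM Today_table
    WHERE partnumber NOT IN (SELECT partnumber FROM Yesterday_table)
""").fetchall()
# new_rows contains only ('C3',)
```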
I'm in the process of writing a system to search through a MySQL database of real estate listings. I'm concerned about performance and wanted some input on how to handle this.
The table that will be the most frequently queried is the 'listings' table and will contain over 600k records with 86 columns. This table will also be updated every 30 minutes as listings change.
Almost every search will be against records with a status of 'active' which will be about 15k of the 600k records. However, I need to retain all of the records for our internal reports. Also, each query will likely be searching for various parameters (#beds, #baths, etc) so caching may not be feasible.
I was considering maintaining a second table containing the PKs of records marked 'active', and creating a view of the tables joined on the listing's PK. However, I know that under certain conditions views can be very inefficient.
I did have the thought of maintaining two databases since the inactive listings won't be searched frequently and will require less maintenance.
Fortunately it's not in production yet and I have time for performance testing. One more thing, this will be hosted on a dedicated Linux server with the front-end written in PHP. Any insight offered is greatly appreciated.
I suggest that you create an archive table. You could set up a process to run every 30 minutes or once per day, depending on the requirements.
The archive table would have the same columns as the original table, plus EffDate and EndDate columns holding the dates/datetimes between which the record was active.
Such a table will make it possible to recreate the history at any point in time -- something that will prove useful, I'm sure.
You will need code to maintain this. The basic logic is to look up each record in your table against the most current version in the archive (the row where EndDate IS NULL and the ids match). Then:
1. If the new record is not present, create a new archive record with the current date as EffDate.
2. If it is present and all columns are the same, do nothing.
3. Otherwise, set EndDate on the current archive record and do (1).

Any archive records that have no matching new record at all should have EndDate set to the current date.
Typically, I have such tables updated once per day.
In the code that does this, I have a big ugly query (Excel helps me build it) that does the comparisons and determines which records are "New", "Modified", and "Removed". The "Removed" and "Modified" records have their current EndDate set to the current date. The "New" and "Modified" records then get a new archive record with EffDate set to the current date.
The values for EndDate and EffDate might be off by one in either direction, depending on how the updates really work. For a nightly update, for instance, EffDate might be set to tomorrow, or even to the date when the listing takes effect.
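The new/modified/removed logic above can be sketched like this (SQLite via Python for a self-contained example; a single price column stands in for the 86 real columns, and all names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE listings (id INTEGER PRIMARY KEY, price INTEGER);
CREATE TABLE listings_archive (
    id INTEGER, price INTEGER, EffDate TEXT, EndDate TEXT
);
""")

def sync_archive(conn, today):
    """Apply the new/modified/removed rules to the archive table."""
    cur = conn.cursor()
    current = {r[0]: r[1] for r in cur.execute("SELECT id, price FROM listings")}
    archived = {r[0]: r[1] for r in cur.execute(
        "SELECT id, price FROM listings_archive WHERE EndDate IS NULL")}
    for id_, price in current.items():
        if id_ not in archived:                       # new: open a record
            cur.execute("INSERT INTO listings_archive VALUES (?, ?, ?, NULL)",
                        (id_, price, today))
        elif archived[id_] != price:                  # modified: close, then reopen
            cur.execute("UPDATE listings_archive SET EndDate = ? "
                        "WHERE id = ? AND EndDate IS NULL", (today, id_))
            cur.execute("INSERT INTO listings_archive VALUES (?, ?, ?, NULL)",
                        (id_, price, today))
        # unchanged: do nothing
    for id_ in archived:
        if id_ not in current:                        # removed: close the record
            cur.execute("UPDATE listings_archive SET EndDate = ? "
                        "WHERE id = ? AND EndDate IS NULL", (today, id_))
    conn.commit()

conn.execute("INSERT INTO listings VALUES (1, 100), (2, 200)")
sync_archive(conn, "2024-01-01")
conn.execute("UPDATE listings SET price = 150 WHERE id = 1")
conn.execute("DELETE FROM listings WHERE id = 2")
sync_archive(conn, "2024-01-02")
# The archive now holds the full history: the open row for listing 1
# has the new price, and listing 2's row is closed with an EndDate.
```

The rows where EndDate IS NULL are exactly the current/active listings, which also addresses the "search only active records" concern.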
I'm working on a project for which I need to frequently insert ~500 records at a remote location. I will be doing this in a single INSERT to minimize network traffic. I do, however, need to know the exact id field (AUTO_INCREMENT) values.
My web searches seem to indicate I could probably use the last_insert_id and calculate all the id values from there. However, this wouldn't work if the rows get ids that are all over the place.
Could anyone please clarify what would or should happen, and if the mathematical solution is safe?
A multi-row insert is an atomic operation in MySQL (both MyISAM and InnoDB). Since the table is locked for writing during this operation, no other rows will be inserted or updated during its execution.
This means the IDs will in fact be consecutive (unless the auto_increment_increment option is set to something other than 1).
Auto increment does exactly that: it auto-increments, i.e. each new row gets the numerically next ID. MySQL does not re-use the IDs of deleted rows.
Your solution is safe because write operations acquire a table lock, so no other inserts can happen while your operation completes; you will get n contiguous auto-increment values for n inserted rows.
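The arithmetic can be sketched as follows (SQLite via Python for a runnable example). Two caveats worth noting: MySQL's LAST_INSERT_ID() returns the first generated id of a multi-row INSERT, while SQLite's lastrowid returns the last, so the direction of the calculation flips; and on newer MySQL versions with innodb_autoinc_lock_mode=2, consecutive values within one statement are no longer guaranteed, so check that setting before relying on this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE remote (id INTEGER PRIMARY KEY AUTOINCREMENT, val TEXT)")
conn.execute("INSERT INTO remote (val) VALUES ('seed')")  # id 1 already taken

rows = [("a",), ("b",), ("c",)]
# One multi-row INSERT statement, as in the question.
cur = conn.execute(
    "INSERT INTO remote (val) VALUES " + ",".join(["(?)"] * len(rows)),
    [v for (v,) in rows],
)
# SQLite reports the LAST id of the statement; with MySQL's LAST_INSERT_ID()
# you would instead compute first_id + n - 1 to get the last.
last_id = cur.lastrowid
first_id = last_id - len(rows) + 1
ids = list(range(first_id, last_id + 1))
# ids covers the n contiguous values assigned to the inserted rows
```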