Database table schema.
CREATE TABLE `stackoverflow`.`automatic` (
`id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY ,
`values` VARCHAR( 200 ) NOT NULL ,
`counts` BIGINT NOT NULL
) ENGINE = InnoDB;
Now I want counts to be incremented automatically in MySQL every time the row is updated, without an extra round trip to the database.
I know I could select the row, read the old value, and then increment it by one, but my concern is whether MySQL has any way to handle this sort of thing itself.
I haven't tried any code yet.
I just wanted to know from the experts out there whether they know anything about this.
As I am running against a deadline I don't want a lengthy approach, and I thought it would be great if MySQL already had something that could help me here.
Thanks.
You can do it in your UPDATE query itself (note the backticks: VALUES is a reserved word in MySQL, so the column name must be quoted):
UPDATE `stackoverflow`.`automatic` SET `values` = 'xyz', `counts` = counts + 1 WHERE id = 1;
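If you want MySQL itself to take care of the increment, so that no query has to mention counts at all, a BEFORE UPDATE trigger is the built-in mechanism for that. A minimal sketch against the table above; the trigger name is made up for illustration:
DELIMITER $$
CREATE TRIGGER trg_automatic_bump_counts
BEFORE UPDATE ON `stackoverflow`.`automatic`
FOR EACH ROW
BEGIN
  -- increment the counter on every update of the row
  SET NEW.counts = OLD.counts + 1;
END$$
DELIMITER ;
With the trigger in place, a plain UPDATE of `values` bumps counts automatically. Note that DELIMITER is a client directive (mysql CLI, phpMyAdmin), not part of SQL itself.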
I recently started learning some languages: HTML, CSS, and now PHP and MySQL. I created a sign up, log in and log out system using this tutorial:
http://www.tutorialrepublic.com/php-tutorial/php-mysql-login-system.php
I'm using XAMPP to run the Apache server, MySQL and phpMyAdmin. Everything seems to work fine, except for an issue with the primary key. When my form was finished I added some fictional user accounts to test it out, and afterwards I deleted them. The usernames and passwords were deleted, but the primary key (ID) doesn't reset: even though the first row should have ID 1, it starts at 3 because the rows with IDs 1 and 2 were deleted. With this as a result:
image of issue.
Can anyone help me with this?
That's the AUTO_INCREMENT behavior. As you can see here, you can reset your AUTO_INCREMENT value with something like this:
ALTER TABLE foo AUTO_INCREMENT=1
But this isn't recommended.
In the table creation, the id is set to AUTO_INCREMENT:
CREATE TABLE users (
id INT NOT NULL PRIMARY KEY AUTO_INCREMENT,
username VARCHAR(50) NOT NULL UNIQUE,
password VARCHAR(255) NOT NULL,
created_at DATETIME DEFAULT CURRENT_TIMESTAMP
);
This causes the counter to keep incrementing, even though the rows it already handed ids to may no longer be in the database.
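To illustrate, with the table above (values are hypothetical):
INSERT INTO users (username, password) VALUES ('a', 'x');  -- gets id 1
INSERT INTO users (username, password) VALUES ('b', 'y');  -- gets id 2
DELETE FROM users;                                         -- table is now empty
INSERT INTO users (username, password) VALUES ('c', 'z');  -- gets id 3, not 1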
You can reset the auto_increment value:
ALTER TABLE `table_name` AUTO_INCREMENT=1
You can either truncate your table (Operations tab in phpMyAdmin) or run the following query:
ALTER TABLE `mytable` AUTO_INCREMENT = 1;
But truncating the table is the simplest option; it's best to just do that.
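If you go the truncate route, the equivalent statement is:
TRUNCATE TABLE `mytable`;
Keep in mind that this removes every row as well as resetting the counter to 1, whereas the ALTER TABLE above only touches the counter (and on InnoDB it will not be set below the current maximum id).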
I am writing a little PHP script which simply returns data from a MySQL table using the query below:
"SELECT * FROM data where status='0' limit 1";
After reading the data I update the status of that particular row, using its id, with the query below:
"Update data set status='1' WHERE id=" . $db_field['id'];
Things are working well for a single client. Now I want to make this page available to multiple clients. There are more than 20 clients which will access the same page at almost the same time, continuously (24/7). Is there a possibility that two or more clients read the same data from the table? If yes, how do I solve it?
Thanks
You are right to consider concurrency. Unless you have only 1 PHP thread responding to client requests, there's really nothing to stop them each from handing out the same row from data to be processed - in fact, since they will each run the same query, they'll each almost certainly hand out the same row.
The easiest way to solve that problem is locking, as suggested in the accepted answer. That may work if the time the PHP server thread takes to run the SELECT ... FOR UPDATE or LOCK TABLES ... UNLOCK TABLES (non-transactional) is minimal, such that other threads can wait while each thread runs this code (it's still wasteful, as they could be processing some other data row, but more on that later).
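For reference, a minimal sketch of that locking approach, assuming InnoDB and a transaction around the whole read-then-mark step; 123 stands in for whatever id the SELECT returned:
START TRANSACTION;
SELECT * FROM data WHERE status = '0' LIMIT 1 FOR UPDATE;  -- row lock held until COMMIT
UPDATE data SET status = '1' WHERE id = 123;               -- 123 = the id read above
COMMIT;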
There is a better solution, though it requires a schema change. Imagine you have a table such as this:
CREATE TABLE `data` (
`data_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`data` blob,
`status` tinyint(1) DEFAULT '0',
PRIMARY KEY (`data_id`)
) ENGINE=InnoDB;
You don't have any way to transactionally claim "the next record to be processed", because the only field you have to update is status. But imagine your table looks more like this:
CREATE TABLE `data` (
`data_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`data` blob,
`status` tinyint(1) DEFAULT '0',
`processing_id` int(10) unsigned DEFAULT NULL,
PRIMARY KEY (`data_id`)
) ENGINE=InnoDB;
Then you can write a query something like this to claim the "next" row to be processed with your processing id:
UPDATE data
SET processing_id = @unique_processing_id
WHERE processing_id IS NULL AND status = 0 LIMIT 1;
And any SQL engine worth a darn will make sure you don't have 2 distinct processing IDs accounting for the same record to be processed at the same time. Then at your leisure, you can
SELECT * FROM data WHERE processing_id = @unique_processing_id;
and know that you're getting a unique record every time.
This approach also lends itself well to durability concerns; you're basically identifying the batch processing run per data row, meaning you can account for each batch job, whereas before you were potentially only accounting for the data rows.
I would probably implement the @unique_processing_id by adding a second table for this metadata (the auto-increment key is the real trick here, but other data-processing metadata could be added):
CREATE TABLE `data_processing` (
`id` int(10) unsigned NOT NULL AUTO_INCREMENT,
`date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
`data_id` int(10) unsigned DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB;
and using that as a source for your unique IDs, you might end up with something like:
INSERT INTO data_processing SET date = NOW();
SET @unique_processing_id = LAST_INSERT_ID();
UPDATE data
SET processing_id = @unique_processing_id
WHERE processing_id IS NULL AND status = 0 LIMIT 1;
UPDATE data
JOIN data_processing ON data_processing.id = data.processing_id
SET data_processing.data_id = data.data_id
WHERE data_processing.id = @unique_processing_id;
SELECT * FROM data WHERE processing_id = @unique_processing_id;
-- you are now ready to marshal the data to the client ... and ...
UPDATE data SET status = 1
WHERE status = 0
AND processing_id = @unique_processing_id
LIMIT 1;
Thus solving your concurrency problem, and putting you in better shape to audit for durability as well, depending on how you set up the data_processing table; you could track thread IDs, processing state, etc. to help verify that the data is really done being processed.
There are other solutions - a message queue might be ideal, allowing you to queue each unprocessed data object's ID to the clients directly (or through a PHP script) and then provide an interface for that data to be retrieved and marked processed separately from the queueing of the "next" data. But as far as MySQL-only solutions go, the concepts behind what I've shown you here should serve you pretty well.
The answer you seek might be to use transactions. I suggest you read the following post and its accepted answer:
PHP + MySQL transactions examples
If not, there is also table locking you should look at:
13.3.5 LOCK TABLES and UNLOCK TABLES
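A sketch of the table-locking variant, with 123 standing in for the id read by the SELECT (the lock blocks other sessions until UNLOCK TABLES):
LOCK TABLES data WRITE;
SELECT * FROM data WHERE status = '0' LIMIT 1;
UPDATE data SET status = '1' WHERE id = 123;
UNLOCK TABLES;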
I would suggest you use sessions for this.
You can save that id in the session,
so if one client is already working on that record, you can stop another client from accessing it.
I have the following schema with the following attributes:
USER(TABLE_NAME)
USER_ID|USERNAME|PASSWORD|TOPIC_NAME|FLAG1|FLAG2
I have 2 questions basically:
How can I make the attribute USER_ID the primary key, with its value incremented automatically each time I insert a row into the database? It shouldn't be under my control.
How can I retrieve a record from the database based on the latest time at which it was updated? (For example, if I updated a record at 2pm and the same record again at 3pm, and I retrieve it now at 4pm, I should get the record as it was updated at 3pm, i.e. the latest update.)
Please help.
I'm assuming that question one is in the context of MySQL. So, you can use the ALTER TABLE statement to mark a field as the PRIMARY KEY, and to mark it AUTO_INCREMENT:
ALTER TABLE User
ADD PRIMARY KEY (USER_ID);
ALTER TABLE User
MODIFY COLUMN USER_ID INT(4) AUTO_INCREMENT; -- of course, set the type appropriately
For the second question I'm not sure I understand correctly so I'm just going to go ahead and give you some basic information before giving an answer that may confuse you.
When you update the same record multiple times, only the most recent update is persisted. Basically, once you update a record, its previous values are not kept. So, if you update a record at 2pm, and then update the same record at 3pm, when you query for the record you will automatically receive the most recent values.
Now, if by updating you mean you would insert new values for the same USER_ID multiple times and want to retrieve the most recent, then you would need to use a field in the table to store a timestamp of when each record is created/updated. Then you can query for the most recent value based on the timestamp.
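A sketch of that approach, assuming a column named updated_at is added for the purpose (the column name is illustrative):
ALTER TABLE USER
  ADD COLUMN updated_at TIMESTAMP NOT NULL
  DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;
SELECT * FROM USER ORDER BY updated_at DESC LIMIT 1;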
I assume you're talking about Oracle since you tagged it as Oracle. You also tagged the question as MySQL where the approach will be different.
You can make the USER_ID column a primary key
ALTER TABLE <<table_name>>
ADD CONSTRAINT pk_user_id PRIMARY KEY( user_id );
If you want the value to increment automatically, you'd need to create a sequence
CREATE SEQUENCE user_id_seq
START WITH 1
INCREMENT BY 1
CACHE 20;
and then create a trigger on the table that uses the sequence
CREATE OR REPLACE TRIGGER trg_assign_user_id
BEFORE INSERT ON <<table name>>
FOR EACH ROW
BEGIN
:new.user_id := user_id_seq.nextval;
END;
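With the sequence and trigger in place, inserts simply omit USER_ID and the trigger fills it in; a sketch using the same placeholder convention as above:
INSERT INTO <<table name>> (username, password, topic_name, flag1, flag2)
VALUES ('some_user', 'some_password', 'some_topic', 0, 0);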
As for your second question, I'm not sure that I understand. If you update a row and then commit that change, all subsequent queries are going to read the updated data (barring exceptionally unlikely cases where you've set a serializable transaction isolation level and you've got transactions that run for multiple hours and you're running the query in that transaction). You don't need to do anything to see the current data.
(Answer based on MySQL; conceptually similar answer if using Oracle, but the SQL will probably be different.)
If USER_ID was not defined as a primary key or automatically incrementing at the time of table creation, then you can use:
ALTER TABLE tablename MODIFY USER_ID INT NOT NULL PRIMARY KEY AUTO_INCREMENT;
To issue queries based on record dates, you have to have a field defined to hold date-related datatypes. The date and time of record modifications would be something you would manage (e.g. add/change) based on the way in which you are accessing the records (some PHP-related way? It's unclear what scripts you have in play, based on your question). Once you have dates in your records, you can ORDER BY the date field in your SELECT query.
Check this out
For your AUTO_INCREMENT, it's a question already asked here
For your PRIMARY KEY use this:
ALTER TABLE USER ADD PRIMARY KEY (USER_ID)
Can you provide more information? If the value gets updated, you definitely do NOT have the old value that you entered at 2pm present in the DB, so querying for it will be fine.
You can use something like this:
CREATE TABLE IF NOT EXISTS user (
  USER_ID INT(8) UNSIGNED NOT NULL AUTO_INCREMENT,
  username VARCHAR(25) NOT NULL,
  password VARCHAR(25) NOT NULL,
  topic_name VARCHAR(100) NOT NULL,
  flag1 SMALLINT(1) NOT NULL DEFAULT 0,
  flag2 SMALLINT(1) NOT NULL DEFAULT 0,
  update_time TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (USER_ID)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;
For selection use query:
SELECT * from user ORDER BY update_time DESC
I am trying to use Dreamweaver to build a Lyrics database website.
I have a table for the lyrics and I have a column called "views" that I want to increase by 1 every time that particular lyric is viewed in a browser.
How can I accomplish this using MySQL?
What MySQL datatype or PHP can I use?
Please explain thoroughly because I do not know PHP or MySQL that well; I'm just trying.
Remember I am using Dreamweaver.
Thanks.
Well, we would need to see how your PHP and MySQL are laid out, to be honest. Do you want someone to just write it for you, or do you want to learn?
The query would be something like this:
$query = "UPDATE `myviewstable` SET count = count+1 WHERE id = '$id'";
I believe that would work. id is your lyric id and count is your column for keeping track of numbers.
UPDATE lyrics SET views=views+1 WHERE id = $id_of_song
and have that execute every time the lyrics page is loaded.
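As for the datatype question, an unsigned integer is the usual choice for a counter. In case the views column still needs to be created, a sketch (table and column names match the query above):
ALTER TABLE lyrics ADD COLUMN views INT UNSIGNED NOT NULL DEFAULT 0;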
I've never been a fan of a 'views' column because there is no proof that it is the actual number. Instead I would create a transaction table where I store a timestamp along with some other info; then if I wanted a count of how many times the lyrics were viewed I would just do:
SELECT count(*) FROM lyric_views WHERE lyric_id = ?
For demonstration purposes, the table design might look like:
CREATE TABLE `lyric_views` (
`id` int(11) unsigned NOT NULL auto_increment,
`lyric_id` int(11) unsigned NOT NULL,
`viewed_at` timestamp NOT NULL default CURRENT_TIMESTAMP
) ENGINE=InnoDB DEFAULT CHARSET=utf8
It might sound complicated, but it's not.
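Recording a view is then a single insert per page load; for example, with 42 standing in for the id of the lyric being viewed:
INSERT INTO lyric_views (lyric_id) VALUES (42);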
I have a table called "orders", with fields:
id INT(11) auto_increment,
realid INT(14)
Now, on every insert into this table, I want to do something like:
INSERT INTO orders VALUES (null, id+1000);
However I'll be doing it on a shop which is currently online, and I want to make the change in about 5 minutes. Will something like this work? If not, how do I do it?
I would think a simpler solution would be a calculated column, presuming your DBMS supports it. (This is using SQL Server syntax, although the MySQL syntax should be nearly identical, except that you would use AUTO_INCREMENT instead of IDENTITY(1,1)):
Create Table Foo (
Id int not null identity(1,1)
, Name varchar(50)
, Bar As Id + 1000
)
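A quick usage sketch of the computed column (still SQL Server syntax, matching the table above): Bar is derived on the fly, so you never insert or update it yourself.
INSERT INTO Foo (Name) VALUES ('example');
SELECT Id, Name, Bar FROM Foo;   -- Bar comes back as Id + 1000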
Of course, you could also just do it in the presentation or business tier instead of the database.