I am new to PHP development and just wondering if there's an existing function in PHP that duplicates the copy command in phpMyAdmin. I know the query sequence is below, but this is a long query/code since the table has a lot of columns. I mean, if phpMyAdmin has this feature, maybe it's calling a built-in function?
SELECT * FROM table WHERE id = X
INSERT INTO table (XXX) VALUES (XXX)
where the information for the INSERT is based on the result of the SELECT query.
Note: The id is primary and auto increment.
Here is the copy command in phpMyAdmin:
There is no built-in functionality in MySQL to duplicate a row other than an INSERT statement of the form: INSERT INTO tableName ( columns-specification ) SELECT columns-specification FROM tableName WHERE primaryKeyColumns = primaryKeyValue.
The problem is that you need to know the names of the columns beforehand. You also need to exclude auto_increment columns as well as primary-key columns, and come up with "smart defaults" for non-auto_increment primary key columns, especially composite keys. You'll also need to consider whether any triggers should be executed, and how to handle any constraints and indexes that may be designed to prevent the duplicate values a "copy" operation might introduce.
You can still do it in PHP, or even in pure MySQL (inside a stored procedure, using dynamic SQL), but you'll need to query information_schema to get metadata about your database - which may be more trouble than it's worth.
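For illustration, here is a rough PHP sketch of that idea using PDO and information_schema (the connection, table name, and id column are placeholders, and it ignores triggers, composite keys, and unique constraints):

$pdo = new PDO('mysql:host=localhost;dbname=your_db;charset=utf8', 'user', 'pass');

// fetch every column name except auto_increment columns
$cols = $pdo->prepare(
    "SELECT COLUMN_NAME FROM information_schema.COLUMNS
     WHERE TABLE_SCHEMA = DATABASE()
       AND TABLE_NAME = :t
       AND EXTRA NOT LIKE '%auto_increment%'"
);
$cols->execute([':t' => 'your_table']);
$columnList = '`' . implode('`, `', $cols->fetchAll(PDO::FETCH_COLUMN)) . '`';

// copy one row, letting MySQL assign a fresh auto_increment id
$copy = $pdo->prepare(
    "INSERT INTO `your_table` ($columnList)
     SELECT $columnList FROM `your_table` WHERE id = :id"
);
$copy->execute([':id' => 123]);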
Related
I'm working on a PHP project related to web scraping, and my aim is to store the data in a MySQL database. I'm using a unique key index on 3 columns of a 9-column table, and there are more than 5k records.
Should I check for unique data at the program level, for example by putting values in arrays and comparing them before inserting into the database?
Is there any way I can speed up my database insertion?
Never create a duplicate table - this is an SQL anti-pattern and it makes it more difficult to work with your data.
PDO and prepared statements may give you a little boost, but don't expect wonders from them.
A multi-row INSERT IGNORE may also give you a little boost, but don't expect wonders from it either.
You should generate a multi-insert query, like so:
INSERT INTO database.table (columns) VALUES (values),(values),(values)
Keep in mind to stay under MySQL's max_allowed_packet size.
This way the index file only has to be updated once.
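For example, a sketch of building such a query with PDO, chunking the rows so each statement stays well below max_allowed_packet ($pdo, $rows, and the table/column names here are assumptions):

// $rows is an array of [col_a, col_b, col_c] arrays from the scraper
foreach (array_chunk($rows, 500) as $chunk) {
    $placeholders = [];
    $values = [];
    foreach ($chunk as $row) {
        $placeholders[] = '(?, ?, ?)';
        $values[] = $row[0];
        $values[] = $row[1];
        $values[] = $row[2];
    }
    // duplicates on the unique key are silently skipped by INSERT IGNORE
    $sql = 'INSERT IGNORE INTO your_table (col_a, col_b, col_c) VALUES '
         . implode(', ', $placeholders);
    $pdo->prepare($sql)->execute($values);
}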
You could create a duplicate of the table that you currently have except no indices on any field. Store the data in this table.
Then use events to move the data from the temp table into the main table. Once the data is moved to the main table, delete it from the temp table.
You can track your updates with a trigger: update the table and write a trigger for it.
Use PDO or the mysqli_* functions to speed up insertion into the database.
You could use "INSERT IGNORE" in your query. That way the record will not be inserted if any unique constraints are violated.
Example:
INSERT IGNORE INTO table_name SET name = 'foo', value = 'bar', id = 12345;
Just looking for some tips and pointers for a small project I am doing. I have some ideas but I am not sure if they are the best practice. I am using mysql and php.
I have a table called nomsing in the database.
It has a primary key called row id which is an integer.
Then I have about 8 other tables referencing this table.
They are called nomplu, accsing, accplu, datsing, datplu, for instance.
Each has a column that references the primary key of nomsing.
Within my PHP code I have all the information to insert into the tables except one thing: the row id primary key of the nomsing table. So PHP generates a series of inserts like the following.
INSERT INTO nomsing(word, postress, gender) VALUES ('велосипед', '8', 'mask').
INSERT INTO nomplu(word, postress, NOMSING?REFERENCE) VALUES ('велосипеды', '2', #the reference to the id of the first insert#).
There are more inserts, but this one gets the point across. The second insert should reference the auto-generated id from the first insert. I want this to work as a transaction, so all inserts should complete or none should.
One idea I have is to not auto-generate the id but to generate it myself in PHP. That way I would know the id before the transaction, but then I would have to check whether the id was already in the db.
Another idea I have is to do the first insert, then query for the row id of that insert in PHP, and then make the second insert. I mean, both should work, but they don't seem like optimal solutions. I am not too familiar with the database's transactional features, so what would be the best approach in this case? I don't like the idea of inserting, then querying for the id, and then running the rest of the queries. It just seems very inefficient, or perhaps I am wrong.
Just insert a row in the master table. Then you can fetch the insert id (lastInsertId when on PDO) and use that to populate your other queries.
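A minimal sketch of that flow with PDO, wrapped in a transaction so either every insert succeeds or none do ($pdo is an existing connection, and nomsing_id stands in for whatever the referencing column in nomplu is actually called):

$pdo->beginTransaction();
try {
    $nomsing = $pdo->prepare(
        "INSERT INTO nomsing (word, postress, gender) VALUES (:word, :postress, :gender)"
    );
    $nomsing->execute([':word' => 'велосипед', ':postress' => '8', ':gender' => 'mask']);

    // id generated by the first insert
    $nomsingId = $pdo->lastInsertId();

    $nomplu = $pdo->prepare(
        "INSERT INTO nomplu (word, postress, nomsing_id) VALUES (:word, :postress, :id)"
    );
    $nomplu->execute([':word' => 'велосипеды', ':postress' => '2', ':id' => $nomsingId]);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}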
You could use the PHP version as given by JvdBerg, or MySQL's LAST_INSERT_ID. I usually use the former option.
See a similar SO question here.
You could add a new column to the nomsing table, called 'insert_order' (or similar) with a default value of 0, then instead of generating one SQL statement per insert create a bulk insert statement e.g.
INSERT INTO nomsing(word, postress, gender, insert_order)
VALUES ('велосипед', '8', 'mask', 1), ('abcd', '9', 'hat', 2).....
You generate the insert_order number with a counter in your loop, starting at one. Then you can perform one SELECT on the table to get the ids, e.g.
SELECT row_id
FROM nomsing
WHERE insert_order > 0;
Now that you have all the IDs, you can do a bulk insert for your following queries. At the end of your script, just do an update to reset the insert_order column back to 0:
UPDATE nomsing SET insert_order = 0 WHERE insert_order > 0;
It may seem messy to add an extra column to do this but it will add a significant speed increase over performing one query at a time.
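A rough outline of that approach in PHP/PDO, only to show the shape of it (table and column names follow the answer above; $words and $pdo are assumptions):

// 1. bulk insert into nomsing with a per-batch counter
$order = 1;
$values = [];
foreach ($words as $w) {
    $values[] = '(' . $pdo->quote($w['word']) . ', ' . $pdo->quote($w['postress']) . ', '
              . $pdo->quote($w['gender']) . ', ' . $order++ . ')';
}
$pdo->exec("INSERT INTO nomsing (word, postress, gender, insert_order) VALUES " . implode(', ', $values));

// 2. map each insert_order value back to the generated row_id
$ids = $pdo->query("SELECT insert_order, row_id FROM nomsing WHERE insert_order > 0")
           ->fetchAll(PDO::FETCH_KEY_PAIR);

// 3. use $ids to build the bulk inserts for nomplu, accsing, etc.

// 4. reset the helper column
$pdo->exec("UPDATE nomsing SET insert_order = 0 WHERE insert_order > 0");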
I'm having a spot of trouble with a bit of code meant to find duplicates of a name along with the platform. This will also be adapted to find unique IDs later on.
So for example, if there is a server named "Apple" on the Xbox and you try to insert a record with the name "Apple" with the same platform it will reject it. However, another platform with the same name is allowed, such as "Apple" with PS3.
I've tried coming up with ideas and searching for answers, but I'm kind of in the dark as to what is the best way to go about checking for duplicates.
So far this is what I have:
$nameDuplicate_sql = $db->prepare("SELECT * FROM `servers` WHERE name=':name' AND platform=':platform'");
$nameDuplicate_sql->bindValue(':name', $name);
$nameDuplicate_sql->bindValue(':platform', $platform);
$nameDuplicate_sql->execute();
I've tried a bunch of different solutions, some from here, others from the PHP's manual and etc. None appear to work though.
I'm trying to stick with PDO; however, this is one instance where I cannot figure out where to turn. If this were mysql_*, I probably could just use mysql_affected_rows, but with PDO I have no clue. rowCount seemed promising, but it always returns 0 since this is neither an INSERT, UPDATE, nor DELETE statement.
Oh, and I've tried the SQL statement in phpMyAdmin and it works; I tried it with a simple name/platform and it found rows properly.
If anyone can help me out here I'd appreciate it.
For most databases, PDOStatement::rowCount() does not return the number of rows affected by a SELECT statement. Instead, use PDO::query() to issue a SELECT COUNT(*) statement with the same predicates as your intended SELECT statement, then use PDOStatement::fetchColumn() to retrieve the number of rows that will be returned.
Your application can then perform the correct action.
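A minimal sketch of that approach, using a prepared statement rather than PDO::query() since the values come from user input (table and variable names are taken from the question):

$stmt = $db->prepare("SELECT COUNT(*) FROM `servers` WHERE name = :name AND platform = :platform");
$stmt->execute([':name' => $name, ':platform' => $platform]);

if ((int) $stmt->fetchColumn() > 0) {
    // a server with this name/platform combination already exists
}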
Instead of checking for duplicates, why not just enforce it on the database table directly? Create a composite key that will prohibit entries being made if they are already there?
CREATE TABLE servers (
serverName varchar(50),
platform varchar(50),
PRIMARY KEY (serverName, platform)
)
This way, you will never get duplicates, and it also allows you to use the mysql insert... on duplicate key update... syntax which sounds like it might be rather handy for you.
If you already have a Primary Key on it or you don't want to make a new table, you can use the following:
ALTER TABLE servers DROP PRIMARY KEY, ADD PRIMARY KEY(serverName, platform);
Edit: A primary key is either a single column or a combination of columns that must contain unique data. A single column cannot have the same value twice, but a composite key (which is what I am suggesting here) means that between the two columns, the same combination of data cannot appear twice.
In this case, what you want to do is add a server name and have it associated with a platform - the table will let you add as many rows containing the same server name as you like, as long as each one has a unique platform associated with it - and vice versa: you can have a platform listed as many times as you like, as long as all the server names for it are unique.
If you try to insert a record where the same servername/platform combination exists, the database simply won't let you do it. There is another golden benefit though. Due to this key constraint - mysql allows a special type of query to be used. It is the insert... on duplicate key update syntax. That means if you try to insert the same data twice (ie, database says no) you can catch it and update the row you already have in the table. For example:
You have a row with serverName=Fluffeh and it is on platform=Boosh but you don't know about it right now, so you try to insert a record with the intention of updating the server IP address.
Normally you would simply write something like this:
insert into servers (serverName, platform, IPAddress)
values ('$serverName', '$platform', '$IPAddy')
But with a nice primary key identified you can do this:
insert into servers (serverName, platform, IPAddress)
values ('$serverName', '$platform', '$IPAddy')
on duplicate key update IPAddress='$IPAddy';
The second query will insert the row with all the data if it doesn't exist already. If it does, Bam! It will update the IP address of the server, which was your intention all along.
Remove the single quotes around the parameter tokens in your query... they will be quoted once they are bound... that's part of the reason for a prepared statement.
$nameDuplicate_sql = $db->prepare("SELECT * FROM `servers` WHERE name = :name AND platform = :platform");
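Once the quotes are gone, fetch() is enough to tell whether a matching row exists, since it returns false when nothing matches (a sketch based on the question's code):

$nameDuplicate_sql->bindValue(':name', $name);
$nameDuplicate_sql->bindValue(':platform', $platform);
$nameDuplicate_sql->execute();

if ($nameDuplicate_sql->fetch(PDO::FETCH_ASSOC) !== false) {
    // duplicate name/platform combination found
}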
This seems to be a simple problem, but after a while of searching I can't figure out the answer.
I currently have a MySQL table in my local database used by a webapp, and the same table in a database on a remote server. Right now, I'm using the CREATE TABLE IF NOT EXISTS command through PHP to create the table on both databases:
CREATE TABLE IF NOT EXISTS users (
`id` int(10) NOT NULL AUTO_INCREMENT,
`username` varchar(18) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=2 ;
However, let's say I make a modification to the local database, adding a column, for example. It would be really annoying to have to go and change the remote database every time I change the local one. Is there an easier way to run code that creates a table if it doesn't exist, and if it does exist, makes sure its structure matches that of the CREATE TABLE statement?
Here's an example to make what I'm trying to convey a little clearer. Let's say on the local database I have a users table, and I decide that in my webapp I want to have another column, password. So I go to the local database and add a password column. Is there PHP/MySQL code I can run to check whether the users table exists, and if it does, make sure it has a password column, and if not, add it?
What you are actually looking for are Migrations, i.e. you are looking for a Schema Management Tool that lets you manage your database structure in versioned code diffs.
For instance, for your described scenario you would first create a script to create the table, e.g. 001_create_user_table.sql. Then you'd use the schema manager to connect and deploy these changes to your databases.
When you want to change or add something, you just write another script, for instance, 002_Add_Password_Column_To_User_Table.sql. Fill in just the code to do that change. Then run the schema manager again.
Typically, you tell the Schema Manager to go through all existing migrations files. On each run, the Schema manager will update a changelog table in the database, so when you run it, it will know which of your scripts it should apply.
The good thing is, you can add these migrations to your regular VCS, so you will always know which database schema you had at which version of your application. And you will have a proper changelog for them.
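As a sketch, the two migration scripts from the example could be as small as this (using the users table from the question; the password column type is an assumption):

-- 001_create_user_table.sql
CREATE TABLE users (
  id INT(10) NOT NULL AUTO_INCREMENT,
  username VARCHAR(18) NOT NULL,
  PRIMARY KEY (id)
);

-- 002_Add_Password_Column_To_User_Table.sql
ALTER TABLE users ADD COLUMN password VARCHAR(255) NOT NULL;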
To directly answer your question: you can create temporary procedures to detect whether a field exists, using a query like this:
SHOW COLUMNS FROM table_name LIKE 'column_name';
However, in the real world, database changes are generally rolled into three scripts: a create script and two deltas, one up and one down. Then the database is versioned so that you know what state the database is in at any given time.
To specifically check for a password column you can use DESCRIBE:
$colExists = false;
// walk the table definition and look for a 'password' field
$res = mysql_query('DESCRIBE `users`');
while ($row = mysql_fetch_assoc($res)) {
    if ($row['Field'] == 'password') {
        $colExists = true;
        break;
    }
}
if (!$colExists) {
    // create column
}
However, you should check into replication or some other automated tool to see if they would be a better solution for you.
Follow these steps (you can easily implement this in PHP; I assume the name of the table is Foo):
1.) Run the following code:
desc Foo
2.) Based on the result of the first step you can make your create table command (and you should)
3.) Store your data from the existing table which will be replaced in a variable (Optional, you only need this if you can potentially use data from the old table)
4.) Modify the extracted rows from step 3.) so they will be compatible with your new definition (Optional, you only need this if you can potentially use data from the old table)
5.) Get the rows from your new Foo table
6.) Merge the results from steps 4.) and 5.) (Optional, you only need this if you can potentially use data from the old table)
7.) Run a drop table for the old table
8.) Generate a replace into command to insert all your rows into the newly created Foo table (you can read more about this here)
After these steps, as a result, you will have the new version of the table. If your tables are too large, you can do a CREATE TABLE IF NOT EXISTS command and if that was not successful, run the alter command.
Also, you can make a library to do these steps and use it in the future instead of solving the same problem several times.
EDIT:
You can connect to the database using this function: mysql_connect (documentation here)
You can run a query using this function: mysql_query (documentation here)
Based on the first step you will get the field names (let's assume you store them in a variable called $bar), and you can use the result to generate your SELECT command (connecting to the database where you have important data; it may be both):
$field_list = "1";
foreach ($bar as $key => $value)
$field_list.= ",".$bar[$key];
mysql_connect(/*connection data*/);
mysql_query("select ".$field_list." from Foo");
You can use your new resource to build up an insert command to insert all your important data after the deletion and recreation (about resources you can read more here, and about how to generate your insert you can read here). I suggest you use REPLACE INTO instead of INSERT: it works like INSERT, except that it replaces the row if it already exists, which is better here than a plain insert (read more here).
So, use mysql_connect and mysql_query, and the resource returned by mysql_query can be used for the REPLACE INTO later (I've now linked the URLs for everything you need, so I'm pretty sure you'll solve the problem). Apologies for not being specific enough before.
I am currently working on a PHP project with an Oracle database. To update a table, the php code I'm working with uses a SQL "MERGE INTO" method to loop through a table and see if values for multiple records exist in another table. If they don't exist yet, the values are inserted into my table. If the values already exist, nothing happens.
I would like to have another query run after this that uses the auto incremented id's created in the MERGE INTO query. Is there a way to get an array of the newly created ids? I was hoping for something like mysql_insert_id, but I haven't found anything like that yet.
Thanks!
Oracle has supported the MERGE syntax since 9i. Haven't tried, but you might be able to use the RETURNING clause on the MERGE statement...
Oracle uses sequences for handling automatically incremented values. Once you've created a sequence, you can use:
sequence_name.CURRVAL
...to get the current value, similar to what mysql_insert_id would return. To populate a primary key, you'd use:
sequence_name.NEXTVAL
To populate a primary key in an INSERT statement, you'd use:
INSERT INTO your_table
  (pk_id, ...)
VALUES
  (your_sequence.NEXTVAL, ...)
You can use triggers as an alternative, but they won't return the current value.
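For reference, the classic way to make a sequence behave like an auto-increment column is a BEFORE INSERT trigger, roughly like this (table, column, and sequence names are placeholders):

CREATE SEQUENCE your_table_seq;

CREATE OR REPLACE TRIGGER your_table_bi
BEFORE INSERT ON your_table
FOR EACH ROW
BEGIN
  SELECT your_table_seq.NEXTVAL INTO :NEW.pk_id FROM dual;
END;
/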
What auto-incremented ids? AFAIK, there is no such thing in Oracle. You can simulate the behaviour by adding a trigger on the table and a sequence, but there is certainly no equivalent of mysql_insert_id().
I think you need to go back and find another way to identify your records.
C.