I want to update one column for all rows in a table with a random number. As far as my research goes, there is no rand() in Doctrine by default. The options I see are:
1. Add a custom DQL function, which would be MySQL-specific.
2. Update every row with a PHP-generated value.
Both options seem like bad practice to me. Is there something I'm missing?
I would go with a native query. It is much simpler than creating a custom DQL function.
$em = getEntityManager(); // obtain the EntityManager however your application provides it
$tableName = $em->getClassMetadata('Your:Entity')->getTableName();
// "column" below is a placeholder for your actual column name
$em->getConnection()->exec('UPDATE '.$tableName.' SET column = RAND()');
But if you prefer DQL, go with it.
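For reference, the custom DQL function isn't much code either. A minimal sketch (the class name and where you register it are up to you):

use Doctrine\ORM\Query\AST\Functions\FunctionNode;
use Doctrine\ORM\Query\Lexer;
use Doctrine\ORM\Query\Parser;
use Doctrine\ORM\Query\SqlWalker;

class Rand extends FunctionNode
{
    public function parse(Parser $parser)
    {
        $parser->match(Lexer::T_IDENTIFIER);        // RAND
        $parser->match(Lexer::T_OPEN_PARENTHESIS);  // (
        $parser->match(Lexer::T_CLOSE_PARENTHESIS); // )
    }

    public function getSql(SqlWalker $sqlWalker)
    {
        return 'RAND()'; // MySQL-specific, as noted in the question
    }
}

// Register it once during bootstrap:
$em->getConfiguration()->addCustomNumericFunction('RAND', 'Rand');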
But doing it in PHP would be the worst option:
- you would have to fetch all records first, and
- you would have to update each row one by one.
A database is not something you change every week, so don't be afraid of using vendor-specific functions.
I'm trying to add +1 to a column after a SELECT, but it's not working. What I want is: when I make a search, the script adds +1 to a column so I can track how many searches I've done.
Here's how it is now:
$QUERY = "SELECT company FROM test WHERE number = '$number[0]' LIMIT 1";
And I want to add this:
UPDATE users SET consultas=consultas+1 WHERE username = '$username'
If I add another $QUERY line, the script breaks. Any ideas?
By nature, SELECT queries are for returning information from the database, not updating the database. To this end, triggers aren't even available for SELECT queries to react to the action. As such, if you want to increment a value, this must be done in a separate query, as an UPDATE query or possibly an INSERT ... ON DUPLICATE KEY UPDATE query if that better suits your needs.
You should execute those as two separate queries. Also, be very careful to ensure your data is properly escaped because it looks like you've forgotten to do that.
Be sure to check the result code of each as an error may occur at any time. If you use PDO there's a fairly robust error handling pattern you can follow.
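For example, a minimal sketch with PDO and prepared statements (connection details and the surrounding script are assumed):

try {
    $pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    // 1) The SELECT, with the value bound rather than interpolated
    $select = $pdo->prepare('SELECT company FROM test WHERE number = ? LIMIT 1');
    $select->execute(array($number[0]));
    $company = $select->fetchColumn();

    // 2) The counter increment as a separate UPDATE
    $update = $pdo->prepare('UPDATE users SET consultas = consultas + 1 WHERE username = ?');
    $update->execute(array($username));
} catch (PDOException $e) {
    // Any error in either query lands here; handle or log it
    die('Query failed: ' . $e->getMessage());
}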
Directly under this small intro you'll see the layout of the database tables I'm working with, and then the details of my question. Please provide as much guidance as possible. I'm still learning PHP and SQL, and I really do appreciate your help as I get the hang of this.
Table One ('bue') --
chp_cd
rgn_no
bgu_cd
work_state
Table Two ('chapterassociation') --
chp_cd
rgn_no
bgu_cd
work_state
Database Type: PostgreSQL
I'm trying to do the following with these two tables, and I think it's a JOIN that I have to do, but I'm not all that familiar with JOINs and I'm trying to learn. So far I've created a query to select a set of data from these tables so that the query isn't run on the entire database. Now, with the data selected, I'm trying to do the following...
First and foremost, 'work_state' of table one ('bue') should be checked against 'work_state' of table two ('chapterassociation'). Once a match is found, 'bgu_cd' of table one ('bue') should be matched against 'bgu_cd' of table two ('chapterassociation'). When both matches are found, it will always point to a unique row within the second table ('chapterassociation'). Using that unique row within the second table ('chapterassociation'), the values of 'rgn_no' and 'chp_cd' should be UPDATED within the first table ('bue') to match the values within the second table ('chapterassociation').
I know this is probably asking a lot, but if someone could help me to construct a query to do this, it'd be wonderful! I really do want to learn, as I don't wish to be ignorant of this forever. I'm just not sure I completely understand how the JOIN and comparison here would work.
If I'm correct, I'll have to put this into separate queries, which will then be in PHP. So, for example, it'll probably be a few IF ELSE statements that end with the final result of the final query, which updates the values from table two to table one.
A JOIN will do both levels of matching for you...
bue
INNER JOIN
chapterassociation
ON bue.work_state = chapterassociation.work_state
AND bue.bgu_cd = chapterassociation.bgu_cd
The actual algorithm is determined by PostgreSQL. It could be a merge join, a hash join, etc., depending on indexes and other statistics about the data. But you don't need to worry about that directly; SQL abstracts it away for you.
Then you just need a mechanism to write the data from one table to the other...
UPDATE
bue
SET
rgn_no = chapterassociation.rgn_no,
chp_cd = chapterassociation.chp_cd
FROM
chapterassociation
WHERE bue.work_state = chapterassociation.work_state
AND bue.bgu_cd = chapterassociation.bgu_cd
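If you're running this from PHP, note that it's still a single statement, so no IF/ELSE logic is needed. A sketch, assuming a PDO connection to PostgreSQL:

$pdo = new PDO('pgsql:host=localhost;dbname=yourdb', 'user', 'pass');

// One round trip updates every matching row in bue
$affected = $pdo->exec("
    UPDATE bue
    SET rgn_no = chapterassociation.rgn_no,
        chp_cd = chapterassociation.chp_cd
    FROM chapterassociation
    WHERE bue.work_state = chapterassociation.work_state
      AND bue.bgu_cd = chapterassociation.bgu_cd
");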
I have a script that imports CSV files. What ends up in my database is, among other things, a list of customers and a list of addresses. I have a table called customer and another called address, where address has a customer_id.
One thing that's important to me is not to have any duplicate rows. Therefore, each time I import an address, I do something like this:
$address = new Address();
$address->setLine_1($line_1);
$address->setZip($zip);
$address->setCountry($usa);
$address->setCity($city);
$address->setState($state);
$address = Doctrine::getTable('Address')->findOrCreate($address);
$address->save();
What findOrCreate() does, as you can probably guess, is find a matching address record if it exists, otherwise just return a new Address object. Here is the code:
public function findOrCreate($address)
{
    $q = Doctrine_Query::create()
        ->select('a.*')
        ->from('Address a')
        ->where('a.line_1 = ?', $address->getLine_1())
        ->andWhere('a.line_2 = ?', $address->getLine_2())
        ->andWhere('a.country_id = ?', $address->getCountryId())
        ->andWhere('a.city = ?', $address->getCity())
        ->andWhere('a.state_id = ?', $address->getStateId())
        ->andWhere('a.zip = ?', $address->getZip());
    $existing_address = $q->fetchOne();
    if ($existing_address)
    {
        return $existing_address;
    }
    else
    {
        return $address;
    }
}
The problem with doing this is that it's slow. Saving each row in the CSV file (which translates into several INSERT statements on different tables) takes about a quarter of a second. I'd like to get it as close to "instantaneous" as possible, because I sometimes have over 50,000 rows in my CSV file. I've found that if I comment out the part of my import that saves addresses, it's much faster.

Is there some faster way I could do this? I briefly considered putting an index on it, but it seems like, since all the fields need to match, an index wouldn't help.
This certainly won't alleviate all of the time spent on tens of thousands of iterations, but why don't you manage your addresses outside of per-iteration DB queries? The general idea:
Get a list of all current addresses (store it in an array)
As you iterate, check array membership (a hash or checksum of the address fields works as the key); if the address doesn't exist yet, store it in the array and save it to the database.
Unless I'm misunderstanding the scenario, this way you're only making INSERT queries if you have to, and you don't need to perform any SELECT queries aside from the first one.
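A sketch of that idea, in the Doctrine 1 style of the question (the key is just a hash of the identifying fields, and $csvRows stands in for your parsed CSV data):

$existing = array();
foreach (Doctrine::getTable('Address')->findAll() as $a) {
    $key = md5($a->getLine_1().'|'.$a->getLine_2().'|'.$a->getCountryId().'|'
              .$a->getCity().'|'.$a->getStateId().'|'.$a->getZip());
    $existing[$key] = true;
}

foreach ($csvRows as $row) { // $csvRows: your parsed CSV data (assumed)
    $key = md5($row['line_1'].'|'.$row['line_2'].'|'.$row['country_id'].'|'
              .$row['city'].'|'.$row['state_id'].'|'.$row['zip']);
    if (!isset($existing[$key])) {
        $existing[$key] = true;
        // build and save the new Address here, as in your current code
    }
}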
I recommend that you investigate loading the CSV files into MySQL using LOAD DATA INFILE:
http://dev.mysql.com/doc/refman/5.1/en/load-data.html
In order to update existing rows, you have a couple of options. LOAD DATA INFILE does not have upsert functionality (INSERT ... ON DUPLICATE KEY UPDATE), but it does have a REPLACE option, which you could use to update existing rows. You need to make sure you have an appropriate unique index, though, and REPLACE is really just a DELETE plus an INSERT, which is slower than an UPDATE.
Another option is to load the data from the CSV into a temporary table, then merge that table with the live table using INSERT...ON DUPLICATE KEY UPDATE. Again, make sure you have an appropriate unique index, but in this case you're doing an update instead of a delete so it should be faster.
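A sketch of the staging-table variant (the file path, delimiters, and update columns are assumptions; adjust to your schema):

CREATE TEMPORARY TABLE address_staging LIKE address;

LOAD DATA INFILE '/path/to/addresses.csv'
INTO TABLE address_staging
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\n';

-- Requires a unique index on address covering the identifying columns
INSERT INTO address (line_1, line_2, country_id, city, state_id, zip)
SELECT line_1, line_2, country_id, city, state_id, zip
FROM address_staging
ON DUPLICATE KEY UPDATE zip = VALUES(zip);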
It looks like your duplicate checking is what is slowing you down. To find out why, figure out what query Doctrine is creating and run EXPLAIN on it.
My guess would be that you will need to create some indexes. Searching through the entire table can be very slow, but adding an index to zip would allow the query to only do a full search through addresses with that zip code. The EXPLAIN will be able to guide you to other optimizations.
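For example, the zip index is a one-liner (table and column names taken from the question):

CREATE INDEX idx_address_zip ON address (zip);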
What I ended up doing, which improved performance greatly, was to use INSERT ... ON DUPLICATE KEY UPDATE instead of findOrCreate().
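For anyone doing the same, a sketch of the statement (it assumes a unique index spanning the identifying columns; the LAST_INSERT_ID(id) trick makes the existing row's id available to PHP when a duplicate is hit):

INSERT INTO address (line_1, line_2, country_id, city, state_id, zip)
VALUES (?, ?, ?, ?, ?, ?)
ON DUPLICATE KEY UPDATE id = LAST_INSERT_ID(id);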
I have two tables called clients; they are exactly the same, but in two different databases. The master always needs to be updated from the secondary one, and all data should always be the same. The script runs once per day. What would be the best way to accomplish this?

I had the following solution, but I think maybe there's a better way to do this:
$sql = "SELECT * FROM client";
$res = mysql_query($conn,$sql);
while($row = mysql_fetch_object($res)){
$sql = "SELECT count(*) FROM clients WHERE id={$row->id}";
$res1 = mysql_query($connSecond,$sql);
if(mysql_num_rows($res1) > 0){
//Update second table
}else{
//Insert into second table
}
}
And then I need a solution to delete all the old data in the second table that's not in the master.

Any advice would be appreciated.
This is by no means an answer to your PHP code, but you should take a look at MySQL triggers. You should be able to create triggers (on updates / inserts / deletes) and have a trigger (like a stored procedure) update your table.

Going off the description you give, I would create a trigger that checks for changes to the secondary table, writes each change to the primary table, and deletes the initial entry (if so required) from the secondary table.

Triggers run according to conditions that you define.
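For example, a minimal sketch (this assumes both databases live on the same MySQL server, since triggers can't cross servers; the column names are made up):

CREATE TRIGGER clients_sync_insert
AFTER INSERT ON database2.clients
FOR EACH ROW
    REPLACE INTO database1.clients (id, name, email) -- columns assumed
    VALUES (NEW.id, NEW.name, NEW.email);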
Hopefully this gives you insight into 'another' way of doing this task.
More references on triggers for mysql:
http://dev.mysql.com/doc/refman/5.0/en/triggers.html
http://www.mysqltutorial.org/create-the-first-trigger-in-mysql.aspx
You can use mysql INSERT ... SELECT like this (but first truncate the target table):
TRUNCATE TABLE database2.client;
INSERT INTO database2.client SELECT * FROM database1.client;
It will be way faster than doing it in PHP.

And one thing to note:

As long as the MySQL user has been granted the right permissions on all the databases and tables the data is pulled from or pushed to, this will work. Though the mysql_select_db function selects one database, a MySQL statement may reference another one if you use a fully qualified reference like databasename.tablename.
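To illustrate with the old mysql_* API the question uses ($conn is assumed to be a link whose user has rights on both databases):

mysql_select_db('database1', $conn); // default DB for any unqualified names
mysql_query('TRUNCATE TABLE database2.client', $conn);
mysql_query('INSERT INTO database2.client SELECT * FROM database1.client', $conn);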
Not exactly answering your question, but how about just using one table instead of two? You could use a federated table to access the other (if it's on a different MySQL instance) or reference the table directly (like shamittomar's suggestion).
If both are on the same MySQL instance, you could easily use a view:
CREATE VIEW database2.client AS SELECT * FROM database1.client;
And that's it! No synchronizing, no cron jobs, no voodoo :)
I have a field in a table recipes that has been inserted using mysql_real_escape_string. I want to count the number of line breaks in that field and order the records by that number.
p.s. the field is called Ingredients.
Thanks everyone
This would do it:
SELECT *, LENGTH(Ingredients) - LENGTH(REPLACE(Ingredients, '\n', '')) as Count
FROM Recipes
ORDER BY Count DESC
The way I'm getting the number of line breaks is a bit of a hack, however, and I don't think there's a better way. I would recommend keeping a column with the number of line breaks if performance is a huge issue. For medium-sized data sets, though, I think the above should be fine.
If you wanted to have a cache column as described above, you would do:
UPDATE
Recipes
SET
IngredientAmount = LENGTH(Ingredients) - LENGTH(REPLACE(Ingredients, '\n', ''))
After that, whenever you update or insert a row, you could calculate the count (probably with PHP) and fill in this column beforehand. Or, if you're into that sort of thing, try out triggers.
I'm assuming a lot here, but from what I'm reading in your post, you could change your database structure a little bit, and both solve this problem and open your dataset up to more interesting uses.
If you separate ingredients into their own table and use a linking table to index which ingredients occur in which recipes, it'll be much easier to be creative with data manipulation. It becomes easier to count ingredients per recipe, to find similarities in recipes, to search for recipes containing sets of ingredients, etc. Also, your data would be more normalized and smaller (storing one global list of all ingredients vs. storing a set for each recipe).
If you're using a single text entry field to enter ingredients for a recipe now, you could do something like break up that input by lines and use each line as an ingredient when saving to the database. You can use something like PHP's built-in levenshtein() or similar_text() functions to deal with misspelled ingredient names and keep the data as normalized as possible without having to hand-groom your [users'] data entry too much.
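A sketch of that layout (all names are placeholders):

CREATE TABLE ingredient (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL UNIQUE
);

CREATE TABLE recipe_ingredient ( -- the linking table
    recipe_id     INT NOT NULL,
    ingredient_id INT NOT NULL,
    PRIMARY KEY (recipe_id, ingredient_id)
);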
This is just a suggestion, take it as you like.
You're going a bit beyond the capabilities and intent of SQL here. You could write a stored procedure to scan the string and return the number and then use this in your query.
However, I think you should revisit the design of whatever is inserting the Ingredients so that you avoid searching the strings of every row whenever you run this query. Add a 'num_linebreaks' column, calculate the number of line breaks, and set this column when you're adding the Ingredients.
If you've no control over the app that's doing the insertion, then you could use a stored procedure to update num_linebreaks based on a trigger.
Got it, thanks. The PHP code looks like:
$check = explode("\r\n", $_POST['ingredients']);
$lines = count($check);
So how could I update Ingred_count, based on the Ingredients field, in one fell swoop for all previous records?
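Following the UPDATE pattern shown earlier, a sketch for the backfill (it counts the "\r\n" separators to match the explode() above, then adds 1 to get the line count; column name taken from this follow-up):

UPDATE Recipes
SET Ingred_count = (LENGTH(Ingredients) - LENGTH(REPLACE(Ingredients, '\r\n', ''))) / 2 + 1;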