PHP and SQL: code is really slow

$unique = array();
$sql = "SELECT ID, TitleName, ArtistDisplayName, Mix FROM values_to_insert as A
        WHERE A.ID = ";
//Get a single row from our data that needs to be inserted...
while($result = $conn->query($sql . $count))
{
    //Get the $data of the single row query for inserting.
    $data = mysqli_fetch_row($result);
    $count++;
    //SQL to get a match of the single row of $data we just fetched...
    $get_match = "SELECT TitleName_ti, Artist_ti, RemixName_ti from titles as B
                  Where B.TitleName_ti = '$data[1]'
                  and B.Artist_ti = '$data[2]'
                  and B.RemixName_ti = '$data[3]'
                  LIMIT 1";
    //If this query returns a match, then push this data to our $unique value array.
    if(!$result = $conn->query($get_match))
    {
        //If this data has been pushed already (since our data includes repeats), then don't
        //put a repeat of the data into our unique array. Else, push the data.
        if(!in_array($unique, $data))
        {
            echo 'Pushed to array: ' . $data[0] . "---" . $data[1] . "</br>";
            array_push($unique, $data);
        }
        else
            echo 'Nothing pushed... </br>';
    }
}
This has been running for 5+ minutes and nothing has even printed to the screen. I'm curious what is eating up so much time, and whether there is an alternative method or function for whatever is taking so long. Some pointers in the right direction would be great.
This code basically gets the rows of table 'A' one at a time and checks whether there is a match in table 'B'. If there is a match, I don't want that $data; if there isn't, I then check whether the data itself is a repeat, because my table 'A' contains some duplicate values.
Table A has 60,000 rows
Table B has 200,000 rows

Queries within queries are rarely a good idea
But there appear to be multiple issues with your script. It might be easier to do the whole lot in SQL and just push each result row to the array. SQL can also remove the duplicates:
<?php
$unique = array();
$sql = "SELECT DISTINCT A.ID,
            A.TitleName,
            A.ArtistDisplayName,
            A.Mix
        FROM values_to_insert as A
        LEFT OUTER JOIN titles as B
            ON B.TitleName_ti = A.ID
            and B.Artist_ti = A.TitleName
            and B.RemixName_ti = A.ArtistDisplayName
        WHERE B.TitleName_ti IS NULL
        ORDER BY A.ID";
if($result = $conn->query($sql))
{
    //Fetch each row that has no match in titles, already de-duplicated by DISTINCT.
    while($data = mysqli_fetch_row($result))
    {
        array_push($unique, $data);
    }
}
As to your original query.
You have a counter (I presume it is initialised to 0, but if it holds a character then that will do odd things) and fetch the record with that ID. If the first ID were 1,000,000,000, you would have run a billion queries before ever finding a record to process. You can instead get all the rows in ID order by removing the WHERE clause and ordering by ID, as in the sketch below.
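A minimal sketch of that change, still doing the match checking in PHP and assuming the same $conn connection as in your code:
//Fetch every row of table A once, in ID order, instead of one query per ID.
$sql = "SELECT ID, TitleName, ArtistDisplayName, Mix
        FROM values_to_insert
        ORDER BY ID";
if ($result = $conn->query($sql))
{
    while ($data = mysqli_fetch_row($result))
    {
        //...check for a match and for duplicates here...
    }
}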
You then get a single record from a second query where the details match, but only process the row if no record is found, and you do not use any of the values that are returned. You can do this with a LEFT OUTER JOIN to find the matches and a WHERE clause that checks no match was found.
EDIT - as you have pointed out, the fields you appear to be using to match the records do not seem to logically match. I have used them as you did, but I expect you really want to match B.TitleName_ti to A.TitleName, B.Artist_ti to A.ArtistDisplayName and B.RemixName_ti to A.Mix.
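If that mapping is right, the join in the query above would become the following (this is a guess at the intended column pairing, so treat it as a sketch):
$sql = "SELECT DISTINCT A.ID,
            A.TitleName,
            A.ArtistDisplayName,
            A.Mix
        FROM values_to_insert as A
        LEFT OUTER JOIN titles as B
            ON B.TitleName_ti = A.TitleName
            and B.Artist_ti = A.ArtistDisplayName
            and B.RemixName_ti = A.Mix
        WHERE B.TitleName_ti IS NULL
        ORDER BY A.ID";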


I need to nest 2 arrays so that I can echo order header as well as order item details

I have updated my original post based on what I learned from your comments below. It is a much simpler process than I originally thought.
require '../database.php';
$pdo = Database::connect();
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$sql = "SELECT * FROM Orders WHERE id = 430";
$q = $pdo->prepare($sql);
$q->execute(array($id));
$data = $q->fetch(PDO::FETCH_ASSOC);
echo 'Order Num: ' . $data['id'] . '<br>';
$sql = "SELECT * FROM Order_items
JOIN Parts ON Parts.id = Order_Items.part_id
WHERE Order_Items.orders_id = 430";
$q = $pdo->prepare($sql);
$q->execute(array($line_item_id));
$data = $q->fetch(PDO::FETCH_ASSOC);
while ($data = $q->fetch(PDO::FETCH_ASSOC))
{
echo '- ' . $data['part_num'] . $data['qty'] . "<br>";
}
Database::disconnect();
Unfortunately, only my first query is producing results. The second query is producing the following ERROR LOG: "Base table or view not found: 1146 Table 'Order_items' doesn't exist" but I am expecting the following results.
Expected Results from Query 1:
Order Num: 430
Expected Results from Query 2:
- Screws 400
- Plates 35
- Clips 37
- Poles 7
- Zip ties 45
Now that I understand where you are coming from, let me explain a couple of things.
1. PDO and mysqli are two ways of accessing the database; they essentially do the same things, but the notation is different.
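For example, fetching one order header in each notation (a rough sketch; $pdo and $mysqli stand for already-opened connections):
// PDO notation
$q = $pdo->prepare("SELECT * FROM Orders WHERE id = ?");
$q->execute(array(430));
$data = $q->fetch(PDO::FETCH_ASSOC);

// mysqli notation
$result = $mysqli->query("SELECT * FROM Orders WHERE id = 430");
$data = $result->fetch_assoc();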
2. Arrays are variables with multiple "compartments". The most typical array has its compartments identified by a numerical index, like:
$array[0] = 'OR12345'; //order number
$array[1] = '2017-03-15'; //order date
$array[2] = 23; //id of a person/customer placing the order
etc. But this would require us to remember which number index means what. So in PHP there are associative arrays, which allow using text strings as indexes, and are used for fetching SQL query results.
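For example, the same data as above as an associative array (the keys here are made up just to illustrate the idea):
$array['order_number'] = 'OR12345'; //order number
$array['order_date'] = '2017-03-15'; //order date
$array['customer_id'] = 23; //id of a person/customer placing the order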
3. The statements
$data = $q->fetch(PDO::FETCH_ASSOC)
or
$row = $result->fetch_assoc()
do exactly the same thing: they put a record (row) from a query into an array, using the field names as indexes. This way it's easy to use the data, because you can refer to the field names (with a little syntax around them, e.g. $data['part_num']) when displaying or manipulating the field values.
4. The loop
while ($row = $result->fetch_assoc())
does two things: it checks whether there is still a row to fetch from the query results, and while there is one it puts it into the array $row for you to use and repeats everything between { and }.
So you fetch the row, display it in whatever form you want, and then loop around to fetch another row. If there are no more rows to fetch, the loop ends. A sketch combining points 3 and 4 is shown below.
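Putting points 3 and 4 together for your second query (a sketch only, using the table and column names from your post; the $pdo connection is assumed to be open):
$sql = "SELECT * FROM Order_Items
        JOIN Parts ON Parts.id = Order_Items.part_id
        WHERE Order_Items.orders_id = 430";
$q = $pdo->query($sql); // no parameters here, so query() is enough
while ($data = $q->fetch(PDO::FETCH_ASSOC))
{
    // each field of the current row is available by name
    echo '- ' . $data['part_num'] . ' ' . $data['qty'] . '<br>';
}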
5. You should avoid using commas in the FROM clause of a query. With that notation the join conditions end up in the WHERE clause (or you get every combination of rows from the tables), and it is bad practice anyway; the joins between tables should be specified explicitly. In the first query you want the order header only, and no additional table is needed in your example, so you should have just
SELECT *
FROM Orders
WHERE Orders.Order_ID = 12345
whereas in the second query I understand you have a table Parts, which contains descriptions of various parts that can be ordered? If so, then the second query should have:
SELECT *
FROM Order_items
JOIN Parts ON Parts.ID = Order_Items.Part_ID
WHERE Order_Items.Order_ID = 12345
If in your Orders table you had a field Supplier_ID for the ID of the supplier, pointing to a Suppliers table, and a field Customer_ID for the ID of the person placing the order, pointing to a Customers table, then the first query would look like this:
SELECT *
FROM Orders
JOIN Suppliers ON Suppliers.ID = Orders.Supplier_ID
JOIN Customers ON Customers.ID = Orders.Customer_ID
WHERE Orders.Order_ID = 12345
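To run any of these from PHP with the id supplied as a bound parameter rather than hard-coded, a sketch might look like this ($pdo is your existing connection and $order_id is a hypothetical variable holding the order number):
$sql = "SELECT *
        FROM Orders
        JOIN Suppliers ON Suppliers.ID = Orders.Supplier_ID
        JOIN Customers ON Customers.ID = Orders.Customer_ID
        WHERE Orders.Order_ID = ?";
$q = $pdo->prepare($sql);
$q->execute(array($order_id)); // the ? placeholder is filled from the array
$order = $q->fetch(PDO::FETCH_ASSOC);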
Hope this is enough for you to learn further on your own :).

Database Insert into Random Unused Row - With Transaction

I am writing an app that randomly assigns a number to each user and then puts it into a MySQL database. Many people use it at the same time, and as such I don't want parallel uses to overwrite each other.
My current code is the following:
$sql_get = "SELECT * FROM database";
$results = mysql_query($sql_get, $bd);
$list = array();
while($row = mysql_fetch_array($results))
{
    if ($row['userId'] == "")
    {
        array_push($list, $row['number']);
    }
}
$rand_nums = array_rand($list, 1);
$sql_update = "UPDATE database SET userId='". $userId ."' WHERE number=". $rand_nums;
$results = mysql_query($sql_update, $bd);
So basically, it gets the empty rows, puts them into a list, chooses a random empty row number and puts the data into that row. The current issue is that the select and the update can happen at the same time for multiple users, so two users may pick the same empty row and overwrite each other's data.
How can I structure this code (transaction or otherwise) to ensure concurrent use has no bad effects?
Thank you
You could do it all in one query:
UPDATE database AS d
SET d.userId = $userId
WHERE d.userId = ''
ORDER BY RAND()
LIMIT 1
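A sketch of running that from PHP and confirming a row was actually claimed (using mysqli rather than the old mysql_* functions; $conn is assumed to be an open mysqli connection, and `database` stands in for your real table name):
$stmt = $conn->prepare(
    "UPDATE `database`
     SET userId = ?
     WHERE userId = ''
     ORDER BY RAND()
     LIMIT 1");
$stmt->bind_param('s', $userId);
$stmt->execute();

if ($stmt->affected_rows === 1) {
    // Exactly one free row was claimed for this user.
} else {
    // No empty rows were left to assign.
}
Because the row is picked and updated in a single statement, two users running it at the same time should not be able to claim the same row; each UPDATE only touches a row that is still empty at the moment it executes.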

Can't fetch results of an SQL statement in PHP

I'm working on a search function in PHP, so I want to be able to search for any keyword in all the database tables, but I can't fetch the result of the SQL statement, which would be something like:
SELECT * FROM All_Tables
Here's my code:
$getTables = $this->db->query("show tables");
$tmpString = '';
while ($table_data = $getTables->fetch(PDO::FETCH_NUM))
{
    $tmpString .= $table_data[0] . ',';
}
$ALL_DATABASE_TABLES = substr($tmpString, 0, strlen($tmpString) - 1); //Remove the last ,
echo " $ALL_DATABASE_TABLES "; // Works, it shows all database tables

$query = "SELECT * FROM $ALL_DATABASE_TABLES";
$stmt = $this->db->query($query) or die(print_r($this->db->errorInfo()));
echo "Cool1"; // Works
echo "$ALL_DATABASE_TABLES "; //Works

// This loop doesn't work-----------------------
while ($row = $stmt->fetch(PDO::FETCH_NUM))
{
    echo "Cool2"; // Doesn't work
    echo "$row[0]"; // Doesn't work
}
//----------------------------------------------
$stmt->closeCursor();
Do you have any idea about that? Thank you guys.
Your code does the following:
Selects the full list of tables in your database.
Combines the full list found in #1 into a comma-separated list.
Uses the comma-separated list from #2 in a SELECT * query, thus attempting to select every record from every table in your database in a single result.
I'm surprised you don't actually receive an error or timeout/warning. The reason is that using a comma-separated list will attempt to do a join between every table. From the manual:
INNER JOIN and , (comma) are semantically equivalent in the absence of
a join condition: both produce a Cartesian product between the
specified tables (that is, each and every row in the first table is
joined to each and every row in the second table).
When your table list grows to even three or four tables, each with five or six columns and a long list of rows, your query will take forever to run; three tables of just 1,000 rows each already produce a Cartesian product of 1,000,000,000 rows. Now, a normal database has 10+ tables (at least), so MySQL would either throw an error or just time out (in my experience), which is why I find it odd that you're not receiving one.
What exactly is your goal, besides selecting every record from every table in your database?
If you really just want to list every record in every table, you can perform a separate SELECT * for each table:
$getTables = $this->db->query("show tables");
$tables = array();
while ($table_data = $getTables->fetch(PDO::FETCH_NUM)) {
    $tables[] = $table_data[0];
}
foreach ($tables as $table) {
    $stmt = $this->db->query('SELECT * FROM ' . $table) or die(print_r($this->db->errorInfo()));
    while ($row = $stmt->fetch(PDO::FETCH_NUM)) {
        echo $row[0] . '<br />';
    }
    $stmt->closeCursor();
}

Loop Through Records, Update one Record, and Exit

I want to select all records from my table and loop through those records until I get to the record where the numtimespaid column is equal to 0. Once I find that record I want to update its numtimespaid to 2 and then exit out. Here is what I have that is not working correctly:
$query1 = "SELECT * FROM ".$line." ORDER BY datestamp, timestamp";
$result1 = mysql_query($query1) or die(mysql_error());
while($row = mysql_fetch_array($result1)){
    if ($row[numtimespaid] == 0) {
        $queryupdate = "UPDATE ".$line." SET numtimespaid=1";
        $resultu = mysql_query($queryupdate);
        break;
    }
}
Any ideas as to what I'm doing wrong and/or the right way of doing this?
There is no need whatsoever to loop over the rowset from a SELECT statement; you can simply update the first matching row directly. This query will update exactly one record matching numtimespaid = 0. If you want to update all rows matching that criterion, just remove the LIMIT 1.
$result = mysql_query("UPDATE $line SET numtimespaid=1 WHERE numtimespaid = 0 ORDER BY datestamp, timestamp LIMIT 1");
By the way, we don't know what the contents of $line are, but hopefully you have properly filtered that value if it comes from user input. If it does come from user input, it's recommended to check its value against a whitelist of possible table names:
// $line can be one of table1, table2, table3
if (!in_array($line, array('table1','table2','table3'))) {
    // FAIL, don't execute the query
}
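Putting the whitelist check and the single-statement UPDATE together, a sketch might look like this (still using the old mysql_* functions from your code; the whitelist entries are placeholders for your real table names):
$allowed = array('table1', 'table2', 'table3'); // placeholder whitelist
if (!in_array($line, $allowed)) {
    die('Invalid table name');
}

$result = mysql_query(
    "UPDATE $line
     SET numtimespaid = 1
     WHERE numtimespaid = 0
     ORDER BY datestamp, timestamp
     LIMIT 1"
) or die(mysql_error());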
if ($row[numtimespaid] == 0) {
This generally gets interpreted as numtimespaid being an undefined constant. Put quotes around the key, like this:
if ($row['numtimespaid'] == 0) {
Then, realize Michael's answer is just better overall.

Checking if a value is in the db many times

I have a table A with just two columns: 'id' and 'probably'. I need to go over a long list of ids and determine, for each one, whether it is in A with a probability of '1'. What is the best way to do it?
I figured it would be best to run one big query on A at the beginning of the script, and then check against its results while I loop over each id, but then I realized I don't know how to do that (efficiently). I mean, is there any way to load all the results from the first query into one array and then do an in_array() check? I should mention that the first query should have few results, under 10 (while table A can be very large).
The other solution is doing a separate query on A for each id while I loop over them, but that seems not very efficient...
Any ideas?
If you have the initial list of ids in an array, you can use the PHP implode function like this:
$query = "select id
from A
where id in (".implode (',', $listOfIds).")
and probability = 1";
Now you pass the string as the first parameter of mysql_query and receive the list of ids with probability = 1 that are within your initial list, as sketched below.
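A sketch of the full flow, assuming $listOfIds holds the ids as integers (intval() keeps the generated SQL safe) and using the column name from the query above:
// Build the IN (...) list from integer ids only.
$ids = implode(',', array_map('intval', $listOfIds));

$query = "SELECT id
          FROM A
          WHERE id IN ($ids)
          AND probability = 1";
$result = mysql_query($query);

// Collect the matching ids into a lookup array.
$matching = array();
while ($row = mysql_fetch_assoc($result)) {
    $matching[] = $row['id'];
}

// Later, while looping over the original list:
// if (in_array($id, $matching)) { ... it is in A with probability 1 ... }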
// $skip the amount of results you wish to skip over
// $limit the max amount of results you wish to return
function returnLimitedResultsFromTable($skip=0, $limit=10) {
    // build the query
    $query = "SELECT `id` FROM `A` WHERE `probability`=1 LIMIT $skip, $limit";
    // store the result of the query
    $result = mysql_query($query);
    // if the query returned rows
    if (mysql_num_rows($result) > 0) {
        $results = array();
        // while there are rows in $result
        while ($row = mysql_fetch_assoc($result)) {
            // populate results array with each row as an associative array
            $results[] = $row;
        }
        return $results;
    } else {
        return false;
    }
}
Each time you call this function you would need to increase $skip by $limit (ten by default) in order to retrieve the next ten results.
Then to use the values in the $results array (for example to print them):
foreach ($results as $key => $value) {
    print $key . ' => ' . $value['id'] . '<br/>';
}
Build a comma-separated list with your ids and run a query like:
SELECT id
FROM A
WHERE id IN (id1, id2, id3, ... idn)
AND probability = 1;
Your first solution proposal states that:
You will query the table A, probably using a LIMIT clause since A is a large table.
You will place the retrieved data in an array.
You will iterate through the array to look for the id's with probability of '1'.
You will repeat the first three steps several times until table A is iterated fully.
That is very inefficient!
The algorithm described above would require lots of database access and unnecessary memory (for the temporary array). Instead, just use a SELECT statement with a WHERE clause and process only the data you want.
You need a query like the following I suppose:
SELECT id, probably FROM A WHERE A.probably = 1
If I understood you correctly, you should filter in the SQL query:
SELECT * FROM A WHERE A.probably = 1
