I'm using PHP PDO to access a SQL database. There's a table with just two columns (ID and VALUE). I want to read that table into an array such that
$array[ID]=VALUE
I know how I can do it manually with a for or while loop going through the rows one by one… but I was wondering if there's a better way of doing it.
You can use the PDO::FETCH_KEY_PAIR constant:
$sql = "select id, username from users limit 10";
$data = $pdo->query($sql)->fetchAll(PDO::FETCH_KEY_PAIR);
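As a self-contained sketch (an in-memory SQLite database is used purely so the example runs anywhere; the same fetchAll call works unchanged against MySQL):

```php
<?php
// Build a small table and read it into an id => value map.
// SQLite in-memory keeps the sketch self-contained; with MySQL you'd
// construct the PDO object with your usual DSN instead.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)');
$pdo->exec("INSERT INTO users (id, username) VALUES (1, 'alice'), (2, 'bob')");

// FETCH_KEY_PAIR requires the query to select exactly two columns:
// the first becomes the array key, the second the value.
$map = $pdo->query('SELECT id, username FROM users')
           ->fetchAll(PDO::FETCH_KEY_PAIR);
// $map is now [1 => 'alice', 2 => 'bob']
```

Note that FETCH_KEY_PAIR throws an error if the query selects anything other than exactly two columns.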
Related
I am building a reporting tool where a user can enter a SQL query and the tool will return the results in a CSV file. In addition to just writing to a CSV file, I also need to perform some additional logic here, so SELECT INTO OUTFILE will not work for me.
I know that executing arbitrary user provided SQL queries is bad, but this tool is going to be used only internally, so security shouldn't be a concern. Also I am limiting it to only select queries.
Now when I export the data in CSV format, I also want to output the column names of the query as the first row in the CSV file.
So my question is, is there a way to fetch the column names of a SQL query in PHP using PDO?
MySQL client tools like Sequel Pro are able to display the column names while displaying query results, so I am assuming that it should be possible, but I am not able to find it.
I am not writing the full PDO connection code here. You can use the code/logic below to get the column names returned by the query:
$stmt = $conn->query("SELECT COUNT(*), first_column, second_column FROM table_name");
$row = $stmt->fetch(PDO::FETCH_ASSOC);
$columns = array_keys($row);
print_r($columns); // [0 => 'COUNT(*)', 1 => 'first_column', 2 => 'second_column']
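One caveat: the array_keys() trick only works when the query returns at least one row. If the result set can be empty, PDOStatement::getColumnMeta() can read the column names directly (it is driver-dependent, but supported by the mysql and sqlite drivers). A sketch using an in-memory SQLite database so it runs standalone:

```php
<?php
// Column names without fetching any rows, via getColumnMeta().
// In-memory SQLite keeps the sketch self-contained; the same calls
// work with the pdo_mysql driver.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE users (id INTEGER, name TEXT)');

$stmt = $pdo->query('SELECT id, name FROM users'); // zero rows
$columns = [];
for ($i = 0; $i < $stmt->columnCount(); $i++) {
    $meta = $stmt->getColumnMeta($i);
    $columns[] = $meta['name'];
}
// $columns is ['id', 'name'] even though the table is empty
```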
The keys of the row result are the column names. You can display them like so:
$conn = mysqli_connect(YOUR_CONNECTION_INFO);
$result = mysqli_query($conn, YOUR_QUERY);
$row = mysqli_fetch_assoc($result);
foreach ($row as $key=>$value) {
echo $key;
}
You said security wasn't an issue since the code would only be used internally, but what happens if you create a new database user with limited rights and connect to the database using that user?
That way you can set up the rights as you want from your database and won't have to worry about users dropping tables.
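For example, a read-only MySQL account could be set up like this (the account and database names are placeholders, not from the question):

```sql
-- Hypothetical read-only account for the reporting tool
CREATE USER 'report_ro'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT SELECT ON reporting_db.* TO 'report_ro'@'localhost';
```

Connecting the tool with this user means even a malicious query can only read data, never modify it.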
I'm trying to update a MySQL database with data I fetched. (By the way, I need to do this for specific individual items, but that's not the problem.) When it comes to creating separate statements for fetching or updating, I can do that. Separately, I'm able to fetch data like this:
$query = "SELECT starting_amount FROM comp ORDER BY item DESC LIMIT 3, 1";
$result = $conn->query($query);
$row = mysqli_fetch_assoc($result);
and I'm able to update data like this:
$sql = "UPDATE comp SET final_amount=25 WHERE item='Y'";
but I can't put the two together (I tried several ways and failed). In other words, I am able to update a table record with data that I type manually, e.g. I type "25" manually in the UPDATE statement, which in this example is the value of 'starting_amount', but I don't know how to write an UPDATE that automatically uses data I fetch from the table. Again, in other words: how do I write the UPDATE statement so that "SET final_amount=" is followed by the fetched data? Thanks in advance for any help!
So, you just need to pass your fetched data into the query:
$starting_amount = $row['starting_amount'];
$sql = "UPDATE comp SET final_amount=$starting_amount WHERE item='Y'";
Firstly, I highly recommend looking into prepared statements - using a prepared statement to insert data is an easy way to prevent SQL injection attacks and also will make what you want to do a little easier.
Here's an example of a prepared update statement using mysqli based on your example:
$statement = $conn->prepare("UPDATE comp SET final_amount=? WHERE item='Y'");
$amount = 25;
$statement->bind_param('d', $amount); // bind_param needs a type string and a variable
$statement->execute();
I'll assume for this answer that you want to use just the first row of the resultset.
Using your example above, you can replace the bound value with a value from your row:
$statement->bind_param('d', $row['starting_amount']);
$statement->execute();
There's no need to do them as separate statements, since you can use a JOIN within the UPDATE itself.
UPDATE comp AS c1
JOIN (SELECT starting_amount
FROM comp
ORDER BY item DESC
LIMIT 3, 1) AS c2
SET c1.final_amount = c2.starting_amount
WHERE c1.item = 'Y'
I am trying to display the data from 'table' if a key inputted by the user is found in the database. Currently I have it set up so that the database checks if the key exists, like so:
//Select all from table if a key entry that matches the user specified key exists
$sql = 'SELECT * FROM `table` WHERE EXISTS(SELECT * FROM `keys` WHERE `key` = :key)';
//Prepare the SQL query
$query = $db->prepare($sql);
//Substitute the :key placeholder for the $key variable specified by the user
$query->execute(array(':key' => $key));
//Loop while the query still returns rows, i.e. while $r is truthy
while($r = $query->fetch(PDO::FETCH_ASSOC)) {
//Debug: Display the data
echo $r['data'] . '<br>';
}
These aren't the only SQL statements in the program that are required. Later, an INSERT query along with possibly another SELECT query need to be made.
Now, to my understanding, using WHERE EXISTS isn't always efficient. However, would it be more efficient to split the query into two separate statements and just have PHP check if any rows are returned when looking for a matching key?
I took a look at a similar question, however it compares multiple statements on a much larger scale, as opposed to a single statement vs a single condition.
@MarkBaker A JOIN doesn't have to be faster than an EXISTS. The query optimizer is able to rewrite the query on the fly if it sees a better way to execute it, and an EXISTS is more readable than a JOIN.
Fetching all the data and doing the filtering directly in PHP is always a bad idea. What if your table grows to millions of records? MySQL is going to find the best execution plan for you, and it will automatically cache the query if that improves performance.
In other words, you did everything correctly as far as we can see from your code. For further analysis, show us all of your queries.
I need some suggestions and ideas.
Here's the scenario. The server receives a bunch of IDs from client via Ajax. Some of these IDs may already exist in database some may not. I need to save those that are not.
One way would be to run a SELECT for each ID I have, but each time I receive about 300 IDs, which means 300 SQL queries, and I think that would slow the server. So what do you think is a better way to do this? Is there a way to extract the non-existing IDs with one SQL query?
P.S. The server is running on CakePHP.
I think what you need is SQL's IN keyword:
SELECT id FROM table WHERE id IN (?)
Where you would insert your IDs separated by comma, e.g.
$id_str = implode(',', $ids);
Make sure that $ids is an array of integers to prevent SQL injection
The outcome is a MySQL result containing all ids that exist. Build them into an array and use PHP's array_diff to get all IDs that do not exist. Full code:
$existent = [];
$result = $connection->query('SELECT id FROM table WHERE id IN (' .
    implode(',', $ids) . ')');
while ($row = $result->fetch_row()) {
    $existent[] = $row[0];
}
$not_existent = array_diff($ids, $existent);
If I understand you correctly, an INSERT IGNORE could do the trick:
INSERT IGNORE INTO `table` (`id`,`col`,`col2`) VALUES ('id','val1','val2');
then any duplicate IDs will be silently dropped, as long as id is UNIQUE or a PRIMARY KEY.
Also the keyword IN can be useful for finding rows with a value in a set. Eg
SELECT * FROM `table` WHERE `id` IN (2,4,6,7)
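Putting the two ideas together, here is a runnable sketch of the batch insert. It uses an in-memory SQLite database so it is self-contained; SQLite spells the statement INSERT OR IGNORE, while MySQL uses INSERT IGNORE, but the behaviour shown is the same:

```php
<?php
// Insert a batch of IDs, silently skipping the ones that already exist.
// SQLite in-memory for portability; with MySQL, drop the "OR".
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE items (id INTEGER PRIMARY KEY)');
$pdo->exec('INSERT INTO items (id) VALUES (2), (4)'); // pre-existing ids

$ids = [1, 2, 3, 4]; // the batch received from the client
$stmt = $pdo->prepare('INSERT OR IGNORE INTO items (id) VALUES (?)');
foreach ($ids as $id) {
    $stmt->execute([$id]); // duplicates of 2 and 4 are dropped silently
}

$count = $pdo->query('SELECT COUNT(*) FROM items')->fetchColumn();
// $count is 4: ids 1 and 3 were inserted, 2 and 4 were ignored
```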
Is it possible to store the results of a query in a table?
For example, I currently run a query using the following approach in php:
$sql = "SELECT id FROM mytable";
$query = pg_query($connection, $sql);
I then use pg_fetch_row() to access the result and return it to the web-browser. However, I would like to also store the results in a new table. I understand that I could run the same query twice so I would also have:
$sql_2 = "CREATE TABLE newtable AS SELECT id FROM mytable";
$query = pg_query($connection, $sql_2);
I was curious if there was a more efficient way of structuring my queries, so that I could both access the results, and insert the results into a table through just one query.
Unfortunately the CREATE TABLE command doesn't appear to accept a RETURNING clause, so you can't have that one command also return the results. You're likely to need two queries.
You could swap the queries: create the new table first, then fetch the results by selecting out of that new table. Reading the newly created table may be a bit faster than running the initial query twice, since Postgres won't need to look over any irrelevant data, and this also guarantees that the same results appear in both places.
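In SQL terms, the swapped order would look like this (newtable and mytable are the names from the question):

```sql
-- 1) Materialize the result set first
CREATE TABLE newtable AS SELECT id FROM mytable;

-- 2) Then read it back for the browser
SELECT id FROM newtable;
```

Each statement would go through its own pg_query() call, just as in the question.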