In Doctrine/PDO, how do I fetch SQL query column names?

I'm writing a piece of software that generates a PDF report from the execution of a raw, user-defined SQL query.
The PDF contains a simple table whose rows hold the SQL result rows. I'd like to add a table header with the column names retrieved along with the SQL results.
The column headers in SQL may have various structures, e.g.:
a) select * from users;
b) select name, surname, email from users;
c) select name as UserName, surname as UserSurname, email as UserEmail from users;
So far I fetch the SQL results as an associative array, take the keys of the first row and treat them as column names.
This only works if there is at least one row in the result set, which is a serious flaw in this approach.
I could generate the PDF with a "No results" label.
I could run a regex on the SQL query for named columns and execute describe table x, but that is plain ridiculous.
I also have even more ridiculous ideas, but that's not the way to go.
Does anybody have an idea how to solve this?
I use Doctrine on MySQL for this, but a simple PDO approach would be just as good as Doctrine's.
EDIT
Right after posting this question it occurred to me that I could generate a view out of my SQL query, then run SHOW COLUMNS FROM randomViewName; and drop the view immediately afterwards.
It's hacky and needs some db security work (I can handle that), but it's a working candidate.
What do you think?

It may not be a perfect solution, but I went along with the approach mentioned in the question.
I create a MySQL view. This gives me a queryable database object with exactly the column names I want.
describe nameOfViewYouJustCreated;
gives me exactly what I need.
The view is dropped afterwards.
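For illustration, here is a minimal sketch of that view trick with plain PDO. The connection details, the user query and the users table are made-up examples, and note that the user SQL is concatenated straight into DDL, so it must already come from a trusted source:
<?php
// Minimal sketch of the view trick with plain PDO (names are examples).
// Caution: $userSql is concatenated into DDL, so it must be trusted input.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$userSql = 'select name as UserName, surname as UserSurname from users';

$viewName = 'tmp_view_' . bin2hex(random_bytes(8)); // random name to avoid collisions

$pdo->exec("CREATE VIEW `$viewName` AS $userSql");
// DESCRIBE returns one row per column; the first field holds the column name.
$columns = $pdo->query("DESCRIBE `$viewName`")->fetchAll(PDO::FETCH_COLUMN);
$pdo->exec("DROP VIEW `$viewName`");

print_r($columns); // ['UserName', 'UserSurname'], even for an empty result set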

Related

How to update all records of all tables in database?

I have tables with a lot of records (could be more than 500,000 or 1,000,000).
I want to update some common columns with the same field name in all tables throughout the database.
I know the traditional way of writing separate queries against individual tables, but not a single query that updates all records of all tables.
What is the most efficient way to do this in SQL, without using dialect-specific features, so it works everywhere (Oracle, MSSQL, MySQL, Postgres etc.)?
ADDITIONAL INFO: There are no calculated fields. There are indexes. I have used generated SQL statements that update the tables row by row.
(This sounds like the classic case for normalizing that 'column'.)
Anyway... No. There is no single query to locate that column across all tables, then perform an UPDATE on each of the tables.
In MySQL, you can use the table information_schema.COLUMNS to locate all the tables containing a particular named column. With such a SELECT, you can generate (using CONCAT(), etc) the desired UPDATE statements. But then, you need to manually run them (via copy and paste).
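As a sketch of that generate-then-paste workflow (the database and column names are placeholders; review every generated statement before running anything):
<?php
// Generate one UPDATE per table containing the column (placeholders: your_db, common_col).
$pdo  = new PDO('mysql:host=localhost', 'user', 'pass');
$stmt = $pdo->prepare(
    "SELECT CONCAT('UPDATE `', TABLE_SCHEMA, '`.`', TABLE_NAME,
                   '` SET `', COLUMN_NAME, '` = ''new value'';')
     FROM information_schema.COLUMNS
     WHERE TABLE_SCHEMA = ? AND COLUMN_NAME = ?"
);
$stmt->execute(['your_db', 'common_col']);
foreach ($stmt->fetchAll(PDO::FETCH_COLUMN) as $update) {
    echo $update, "\n"; // inspect first, then run only the ones you really mean
}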
Granted, you could probably write a Stored Procedure to wrap that into a single call, but that is too risky. What if some other table has the same column name, but should not be updated?
Example of building ALTERs to change tables' Engines: http://mysql.rjweb.org/doc.php/myisam2innodb#generating_alters
Example of using an SP to "pivot" rows to columns, complete with executing the generated code: http://mysql.rjweb.org/doc.php/pivot
As for common code across multiple vendors -- forget it! Virtually every syntax needs some amount of tweaking.

Is it necessary to validate column names when submitting an SQL Query?

In my SQL queries I am submitting data from forms filled out by the user, and as shown here it is not possible to parameterize my column names with PDO. This matters because the column names in the query are inserted dynamically, based on the field names in the form. I can fairly easily validate the column names submitted in the $_POST array by pulling the real ones out of the database and throwing out any that don't match. Is this a good thing to do to avoid SQL injection, or is it simply a waste of system resources (as it effectively doubles the database work of any request)?
Is this a good thing to do to avoid SQL injection
No.
or is simply a waste of system resources
No.
It cannot be a waste as it's just a simple select from the system table.
But it can still be a sort of injection when a user isn't allowed to set some fields. Say, if there is an (imaginary) field "user_role" filled in by the site admin, and a user gets the possibility to define it in the POST, they can alter their access privileges.
So, hardcoding (whitelisting) allowed fields is the only reliable way.
as it effectively doubles the execution of any request that relies on the Database
Man. Databases are intended to be queried; it's their only purpose. A database that cannot sustain a simple select query is nonsense. And queries differ: one insert is way heavier than ten selects. You have to distinguish queries by quality, not quantity.
the column names in the query are inserted dynamically based on the field names in the form.
Though for insert/update queries this is quite true, for SELECT queries it is a BIG SIGN of bad design. I can live with variable field names in the WHERE/ORDER BY clauses, but if you have to vary the field list or the table name itself, your database design is wrong for sure.
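To make the whitelisting advice above concrete, here is a minimal sketch (the field names and the fallback are invented for this example; $pdo is an existing PDO connection):
<?php
// Hardcoded whitelist: only these columns can ever reach the query text.
$allowed = ['name', 'surname', 'email']; // note: no user_role here
$column  = $_POST['sort'] ?? 'name';

if (!in_array($column, $allowed, true)) {
    $column = 'name'; // fall back to a safe default (or throw an exception)
}

// The column name is now safe to interpolate; values are still bound as usual.
$stmt = $pdo->prepare("SELECT name, surname, email FROM users ORDER BY `$column`");
$stmt->execute();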
Aside from hard-coding the list of columns, you could keep the columns you want to allow querying on in another table in your database, such as:
QueryableSources
SrcTable        SrcColumn        DescriptionToUser
SomeTable       SomeColumn       Column used for
AnotherTable    AnotherColumn    Something Else
etc.
Then you build, for example, a combobox for the user to pick the "DescriptionToUser" content for easier readability, and YOU control the valid column and table source.
As for the VALUE they are searching for, DEFINITELY scrub/clean it (or better, bind it as a parameter) to prevent SQL injection.
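A quick sketch of how that lookup could work in PHP (QueryableSources and its columns are the hypothetical table above; $pdo is an existing PDO connection):
<?php
// Resolve the user's combobox pick against YOUR lookup table, not their input.
$stmt = $pdo->prepare(
    'SELECT SrcTable, SrcColumn FROM QueryableSources WHERE DescriptionToUser = ?'
);
$stmt->execute([$_POST['source']]);
$src = $stmt->fetch(PDO::FETCH_ASSOC);

if ($src === false) {
    exit('Unknown source'); // the combobox value was tampered with
}

// Table and column now come from your own table; the searched value is bound.
$query = $pdo->prepare(
    "SELECT * FROM `{$src['SrcTable']}` WHERE `{$src['SrcColumn']}` = ?"
);
$query->execute([$_POST['value']]);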
You can hard-code the column names to make it faster. You can also cache the pulled table description, so that you don't need to update the code every time the table schema changes.

Inserting in two tables with a single query

I am developing a web app using Zend Framework and the problem is about combining two SQL queries to improve efficiency. My table structure is like this:
>table message
id(int auto incr)
body(varchar)
time(datetime)
>table message_map
id(int auto incr)
message_id(foreign key from message table's id column)
sender(int) comment 'user id of sender'
receiver(int) comment 'user id of receiver'
To get the code working, I am first inserting the message body and time into the message table and then, using the last inserted id, I am inserting the message sender and receiver into the message_map table. Now what I want is to do this task in a single query, as one query would be more efficient. Is there any way to do so?
No, there isn't. You can only insert into one table at a time.
But I can't imagine you need to insert so many messages that performance really becomes an issue. Even with these separate statements, any database can easily insert thousands of records a minute.
Bulk inserts
Of course, when inserting multiple records into the same table, that's a different matter. This is indeed possible in MySQL and it will make your query a lot faster. It will give you trouble, though, if you need the insert ids of all those records.
mysql_insert_id() returns the first id that was generated by the last insert statement, if it was a bulk insert. So you could query all ids that are >= that id. That should give you all the records you just inserted, although the result may contain ids that other people inserted between your insert and the following query for those ids.
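A sketch of that id arithmetic, in the same old-school mysql_* style as the transaction example further down (the table and columns come from the question; whether the ids are strictly consecutive depends on the server's auto-increment settings):
<?php
// $connection is an existing mysql_connect() link.
// Bulk insert, then recover the generated ids.
mysql_query("INSERT INTO message (body, time)
             VALUES ('hi', NOW()), ('hello', NOW()), ('hey', NOW())", $connection);
$firstId = mysql_insert_id($connection);     // id of the FIRST row of the bulk insert
$count   = mysql_affected_rows($connection); // 3 here
// Usually the ids are $firstId .. $firstId + $count - 1, but a plain
// "WHERE id >= $firstId" can also catch rows inserted by others in the meantime.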
If it's only for these two tables, why don't you create a single table having all these columns in one, like:
>table message
id(int auto incr)
body(varchar)
sender(int) comment 'user id of sender'
receiver(int) comment 'user id of receiver'
time(datetime)
Then it will work the way you want.
I agree with GolezTrol. Otherwise, if you want optimized performance for your queries, perhaps you may choose to use stored procedures.
Indeed, combining those two inserts isn't possible. While you can use JOIN in select queries, you can't combine insert queries. If you're really worrying about performance, isn't there any way to join those two tables together? As far as I can see there's no point in keeping them separated; they're both about the message.
As stated before, executing a second insert query isn't that much of a server load anyway.
As others pointed out, you cannot really update multiple tables at once. And, you should not really be worried about performance, unless you are inserting thousands of messages in a short period of time.
Now, there is one thing you could worry about. Imagine you first insert the message body, and then try to insert the receiver/sender IDs. Suppose the first insert succeeds, while the second (for whatever reason) fails. That would corrupt your data a bit. To avoid that, you can use transactions, e.g.:
mysql_query("START TRANSACTION", $connection);
//your code
mysql_query("COMMIT", $connection);
That would ensure that either both inserts get into the database, or neither does. If you are using PDO, look at http://www.php.net/manual/en/pdo.begintransaction.php for examples.
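For reference, a hedged PDO version of the same idea (table and column names are taken from the question; $pdo, $body, $senderId and $receiverId are assumed to exist):
<?php
// Either both rows are stored or neither is.
$pdo->beginTransaction();
try {
    $pdo->prepare('INSERT INTO message (body, time) VALUES (?, NOW())')
        ->execute([$body]);
    $messageId = $pdo->lastInsertId(); // id generated by the first insert

    $pdo->prepare('INSERT INTO message_map (message_id, sender, receiver)
                   VALUES (?, ?, ?)')
        ->execute([$messageId, $senderId, $receiverId]);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack(); // undo the first insert if the second one failed
    throw $e;
}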

How can I search all of the databases on my MySQL server for a single string of information

I have around 150 different databases, with dozens of tables each, on one of my servers. I am looking to see which database contains a specific person's name. Right now, I'm using phpMyAdmin to search each database individually, but I would really like to be able to search all databases and all tables at once. Is this possible? How would I go about doing this?
A solution would be to use the information_schema database to list all databases, all tables, all fields, and loop over all that...
There is this script that could help with at least part of the work: anywhereindb (quoting):
This code is search all the tables and all the rows and columns in a MYSQL Database. The code is written in PHP. For faster result, we are only searching in the varchar field.
But, as Harmen noted, this only works with one database, which means you'd have to wrap something around it to loop over each database on your server.
For more information about that, take a look at Chapter 19, INFORMATION_SCHEMA Tables; especially the SCHEMATA table, which contains the name of every database on the server.
Here's another solution, based on a stored procedure, which means fewer client/server calls and might make it faster: http://kedar.nitty-witty.com/miscpages/mysql-search-through-all-database-tables-columns-stored-procedure.php
The right way to go about it would be to NORMALIZE your data in the first place!!!
You say "name", but most people have at least two names (a surname and a forename). Are these split up or in the same field? If they are in the same field, what order do they appear in? How are they capitalized?
The most efficient way to try to identify where the data might be would be to write a program in C which sifts the raw data files (while the DBMS is shut down) looking for the data, but that will only tell you what table they appear in.
Failing that, you need to write some PHP which iterates through each database ('SHOW DATABASES' works much like a select statement), then iterates through each table in the database, then generates a SELECT statement filtering on each CHAR or VARCHAR column large enough to hold the name you are looking for (try running 'DESC $table').
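A rough sketch of that iteration, driven by information_schema rather than SHOW/DESC ($pdo is an existing PDO connection, $needle is the name being searched for, and the schema exclusions should be refined to taste):
<?php
// Find every CHAR/VARCHAR column big enough to hold the needle, then probe it.
$needle = 'John Smith';
$cols = $pdo->query(
    "SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME
     FROM information_schema.COLUMNS
     WHERE DATA_TYPE IN ('char', 'varchar')
       AND CHARACTER_MAXIMUM_LENGTH >= " . strlen($needle) . "
       AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema')"
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($cols as $c) {
    $stmt = $pdo->prepare(
        "SELECT COUNT(*) FROM `{$c['TABLE_SCHEMA']}`.`{$c['TABLE_NAME']}`
         WHERE `{$c['COLUMN_NAME']}` = ?"
    );
    $stmt->execute([$needle]);
    if ($stmt->fetchColumn() > 0) {
        echo "{$c['TABLE_SCHEMA']}.{$c['TABLE_NAME']}.{$c['COLUMN_NAME']}\n";
    }
}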
Good luck.
C.
The best answer probably depends on how often you want to do this. If it is ad-hoc, once-a-week type stuff, then the above answers are good.
If you want to do this kind of search once a second, maybe create a "data warehouse" database that contains just the table:columns you want to search (heavily indexed, with a reference back to the source database if that is needed) populated by cron job or by stored procedures driven by changes in the 150 databases...

MySQL SHOW TABLES / SHOW COLUMNS - performance question

I'm working on a basic php/mysql CMS and have a few questions regarding performance.
When viewing a blog page (or other sortable data) from the front-end, I want to allow a simple 'sort' variable to be added to the querystring, allowing posts to be sorted by any column. Obviously I can't accept arbitrary input from the querystring, and need to make sure the column exists in the table.
At the moment I'm using
SHOW TABLES;
to get a list of all of the tables in the database, then looping over the array of table names and performing
SHOW COLUMNS FROM tableName;
on each.
My worry is that my CMS might take a performance hit here. I thought about using a static array of the table names but need to keep this flexible as I'm implementing a plugin system.
Does anybody have any suggestions on how I can keep this more concise?
Thank you.
If you are using MySQL 5+, you'll find the information_schema database useful for your task. In this database you can access information about tables, columns, and references with simple SQL queries. For example, you can find out whether a specific column exists in a table:
SELECT COUNT(*) FROM information_schema.COLUMNS
WHERE
TABLE_SCHEMA='your_database_name' AND
TABLE_NAME='your_table' AND
COLUMN_NAME='your_column';
Here is a list of the tables where a specific column exists:
SELECT TABLE_SCHEMA, TABLE_NAME FROM information_schema.COLUMNS WHERE COLUMN_NAME='your_column';
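A hedged example of using the first query from PHP to validate a sort column before building the ORDER BY (the posts table name is an assumption; $pdo is an existing PDO connection):
<?php
// Check that the requested sort column really exists before using it.
$stmt = $pdo->prepare(
    'SELECT COUNT(*) FROM information_schema.COLUMNS
     WHERE TABLE_SCHEMA = DATABASE() AND TABLE_NAME = ? AND COLUMN_NAME = ?'
);
$stmt->execute(['posts', $_GET['sort']]);

if ($stmt->fetchColumn() > 0) {
    // Verified to be a real column of `posts`, so it is safe to interpolate.
    $posts = $pdo->query("SELECT * FROM posts ORDER BY `{$_GET['sort']}`")->fetchAll();
}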
Since you're currently hitting the db twice before you do your actual query, you might want to consider just wrapping the actual query in a try{} block. Then if the query works you've only done one operation instead of 3. And if the query fails, you've still only wasted one query instead of potentially two.
The important caveat (as usual!) is that any user input be cleaned before doing this.
You could query the table up front and store the columns in a cache layer (e.g. memcached or APC). You could then set the expiry time on the cache entry to infinite and only delete and re-create it when a plugin has been newly added, updated, etc.
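A small sketch of that cache layer using APCu (APC's successor; the key name and the posts table are arbitrary examples):
<?php
// Serve the column list from the cache; rebuild it only on a miss.
$columns = apcu_fetch('posts_columns', $hit);
if (!$hit) {
    $columns = $pdo->query('SHOW COLUMNS FROM posts')->fetchAll(PDO::FETCH_COLUMN);
    apcu_store('posts_columns', $columns, 0); // 0 = never expires; delete the key when a plugin changes the schema
}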
I guess the best bet is to put all that stuff you're getting from SHOW TABLES etc. in a file already and just include it, instead of running those queries every time. Or implement some sort of caching if the project is still in development and you think the fields will change.
