I have a Windows program which generates PHP forms which will be filled in later.
Those PHP forms will populate a database. It looks very much like MySQL, but I can't be certain, so let's just call it ODBC.
And, yes, it does have to be a windows program.
There will also be PHP forms which query the database - examine which tables and fields it contains - and then generate forms which can be used to search the database (e.g. it finds a table with fields like "employee_name", etc. and generates a form which lets you search based on employee name).
Let's call that design time and run time.
At design time, some manager or IT guy or similar gets to define the nature of the database and at runtime 1) a worker fills in the form daily and 2) management can extract reports.
Here's my question: given that the database is defined at "design time" (and populated at run time), where and how is it best to do so?
1) I could use an ODBC interface from the Windows program, but I am having difficulty finding something good to work with from Delphi. Things like ADO and Firebird tend to expect you to already have a database and let you manipulate it, but I can find no code example of how to create a database and some tables, so ...
2) I could issue DOS commands from Delphi in my Windows program. I just tried and got a response to mysql --version, but I am not sure how interactive MySQL etc. are. That is, can I use a script file, or a very long stacked command with semicolons and returns as separators? E.g. 'CREATE DATABASE db; CREATE TABLE t1;'
3) Since the best way to work with databases seems to be PHP, perhaps my windows program could spit out a PHP page which would, when run in a browser, create the database.
I have tried to make this as uncomplicated as I can, but please feel free to ask questions. It may be that there are several valid ways, but there is probably one 'better' solution in terms of ease of implementation or maintenance.
Better scratch option 3. What if the user later wants to come back and have the windows program change the input form? It needs to update the database too.
Creating a database is usually a database administrator's task. Unless it is a local database, maybe an embedded one, the user would need to know where and how to create the database on the remote server, and she may have no clue about it. Where to store the database files? Which disks are available? And there could be many more parameters to set (memory buffer sizes, etc.), users to be created and so on. You also need very elevated privileges to be able to create a database, which is not something you give to average users or applications.
Therefore you usually ask the database administrator to create your database/schema; he will give you the credentials you need to connect, and then your application (or its setup) will create and initialize the needed objects (tables, etc.). Creating tables (and other objects) is usually as simple as running "CREATE TABLE ..." statements. Just remember that the server executes one SQL command at a time: if you need to run several commands you have to send them one after another yourself, although there are Delphi components which can split a script into commands and run them one after another.
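The same idea can be sketched in PHP, which most of this discussion revolves around. Here is a minimal sketch, assuming a reachable MySQL server and admin-level credentials (both invented here), that naively splits a DDL script on semicolons and sends each statement separately:

    <?php
    // Minimal sketch: run a multi-statement DDL script one command at a time.
    // Host, credentials and schema are assumptions for the example.
    $pdo = new PDO('mysql:host=localhost', 'admin_user', 'secret', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $script = "
        CREATE DATABASE IF NOT EXISTS timesheets;
        CREATE TABLE IF NOT EXISTS timesheets.employees (
            id INT AUTO_INCREMENT PRIMARY KEY,
            employee_name VARCHAR(100) NOT NULL
        );
    ";

    foreach (explode(';', $script) as $statement) {
        $statement = trim($statement);
        if ($statement !== '') {
            $pdo->exec($statement);   // one command per round trip
        }
    }

The naive split on ';' breaks as soon as a semicolon appears inside a string literal or a stored-routine body, which is exactly why dedicated script-splitting components exist.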
Related
I just took over a pretty terrible database design job, which heavily uses comma-separated values to store data. I know, I know, it is hell.
The DB is MySQL; I'm currently accessing it using MySQL Workbench.
I already have an idea in mind of what to remove and what new relation tables are needed.
So, my question is: how shall I proceed with migrating the comma-separated data to the new tables? Are there any tools specialized for normalizing databases?
Edit:
The server code is in PHP.
Define your new tables and attributes first.
Then, use PHP or Python or your favorite language with MySQL calls and write a one-time converter which loops over the old table(s) and records, and inserts the proper records into the new tables.
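As an illustration, here is a minimal one-time converter sketch in PHP; the table and column names (old_employees, skills_csv, employee_skills) are invented for the example:

    <?php
    // One-time converter sketch: explode a comma-separated column into rows
    // of a new relation table. All names here are hypothetical.
    $pdo = new PDO('mysql:host=localhost;dbname=school', 'user', 'pass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $insert = $pdo->prepare(
        'INSERT INTO employee_skills (employee_id, skill) VALUES (?, ?)'
    );

    foreach ($pdo->query('SELECT id, skills_csv FROM old_employees') as $row) {
        foreach (explode(',', $row['skills_csv']) as $skill) {
            $skill = trim($skill);
            if ($skill !== '') {
                $insert->execute([$row['id'], $skill]);
            }
        }
    }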
It appears you are looking for standard practices. There are varying degrees of denormalized databases out there. The ones I have come across were normalized with custom code and tools.
SQL Server Integration Services (SSIS) can be used in some cases. In your case, I'd build a script for the migration that involves:
creation of normalized tables
creating stored procedures or PHP script(s) to read data from the denormalized table, transform it and load it into the normalized tables
creating a log table or log file (see the sketch after this list)
performing the migration in a sandbox; writing logs while doing so
version control the script
correct the proc/script as needed
create another sandbox
run the full script on the sandbox
if successful, run the full script on prod (with logging)
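For instance, the logging piece of such a script might look like this minimal sketch, with a hypothetical migration_log table:

    <?php
    // Sketch of migration logging: every migrated record gets a log row,
    // so a sandbox run can be audited afterwards. Names are hypothetical.
    $pdo = new PDO('mysql:host=localhost;dbname=school', 'user', 'pass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $pdo->exec('CREATE TABLE IF NOT EXISTS migration_log (
        id INT AUTO_INCREMENT PRIMARY KEY,
        source_id INT NOT NULL,
        action VARCHAR(100) NOT NULL,
        logged_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    )');

    $log = $pdo->prepare(
        'INSERT INTO migration_log (source_id, action) VALUES (?, ?)'
    );

    // ... inside the migration loop, after each successful insert:
    $log->execute([42, 'migrated employee_skills row']);  // 42 = example source id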
SSIS is used for ETL in many organizations; it's the standard tool in the Microsoft BI stack and can also be used to migrate data between non-Microsoft DBs.
The open-source ETL tool Talend might also help in transforming your data. Personally, I believe that a PHP script will be the fastest and easiest way to manipulate the data.
My problem is I'm using a HUGE web application (a school system) with no documentation of the internal logic. I need to make a bulk update of a particular value, but I don't know which tables in the MySQL database contain the relevant data to update. The app itself runs on PHP. Is there an easy way to compare the database before and after I do an operation, so I can see which tables are affected? I tried using a diff tool on the dumped SQL before and after, but the database is so huge it's really impractical. I'm wondering if there is something better, or if I can just configure PHP somehow to log any MySQL operations from whatever file happens to trigger them.
You may want to run the performance tool from MySQL Workbench and look at the performance reports/statement analysis. This will work if you pick a time when the system is not being used, then run some function in the web app that updates the tables with the values you need to change. Look at the performance table before and after you run your experiment and look for the SQL statements which show use. It's not perfect, but it will at least help you begin to home in on the data you're looking for. The big 'gotcha' here is if the value you want to change is dynamically derived during the query process; then you'll have to understand how the derivation works and the source columns. But, again, this gives you a brute-force starting place.
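Another option, if you have admin-level privileges on the server, is MySQL's general query log, which records every statement the server receives. A minimal sketch (host and credentials are placeholders; check the privilege requirements for your MySQL version):

    <?php
    // Sketch: capture every SQL statement during a short experiment using
    // MySQL's general query log written to a table (requires admin rights).
    $pdo = new PDO('mysql:host=localhost', 'admin', 'secret', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    $pdo->exec("SET GLOBAL log_output = 'TABLE'");
    $pdo->exec("SET GLOBAL general_log = 'ON'");

    // ... now perform the bulk-update action in the web application ...

    $pdo->exec("SET GLOBAL general_log = 'OFF'");

    // Inspect which statements (and therefore which tables) were touched.
    foreach ($pdo->query(
        "SELECT event_time, argument FROM mysql.general_log
         WHERE command_type = 'Query' ORDER BY event_time"
    ) as $row) {
        echo $row['event_time'], '  ', $row['argument'], PHP_EOL;
    }

Remember to leave the log switched off and truncate mysql.general_log afterwards; it grows quickly on a busy server.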
Is it preferred to create tables in MySQL using a third-party application (phpMyAdmin, TOAD, etc...) instead of PHP?
The end result is the same, I was just wondering if one way is protocol.
No, there isn't a 'set-in-stone' program to manage your database and query it.
However, I highly recommend MySQL Workbench.
It allows you to graphically design your database, query your database server and do all kinds of administration tasks.
I'd say it is far easier to do so within an application created for that purpose. The database itself obviously doesn't care, as it's just DDL to it. Using TOAD or phpMyAdmin would help you do the job quicker and let you catch syntax errors prior to execution, or use a wizard so you're not writing the DDL by hand in the first place.
Usually a software project provides one or more text files containing the DDL statements to create the necessary tables. What tool you use to execute those statements doesn't really matter. Some PHP projects also provide an installer wizard PHP file which can be executed directly in the browser, so you don't need any additional tools at all.
I'll try to only answer what your question is - "Is it preferred to create tables in mysql using a third party application (phpmyadmin, TOAD, etc...) instead of php?"...
Yes, it is preferred to create tables, alter them, delete them, or do any DB-related activity that is outside the scope of the interfaces your application provides, using any of the many available MySQL clients. The reason is that these applications are designed to perform DB-related tasks and are best at doing them.
Though you may as well use PHP for creating tables depending on the situation, e.g. if the application uses dynamic tables or needs "temporary" tables for performing complex jobs or storing intermediate results/calculations. Or perhaps if the application provides interfaces to manage/control certain aspects: assume, for example, that an application has various user roles which each get their own column in a table. If the application gives the admin the right to delete or add new roles, which requires deleting or adding columns, it's best to run such queries from PHP.
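To illustrate that last case, a minimal sketch with a hypothetical user_roles table, where adding a role means adding a column:

    <?php
    // Sketch: an admin adds a new role, which requires a new column.
    // Table and naming scheme are hypothetical; the identifier is whitelisted
    // because column names cannot be bound as prepared-statement parameters.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    function addRoleColumn(PDO $pdo, string $role): void {
        if (!preg_match('/^[a-z_]+$/', $role)) {
            throw new InvalidArgumentException('bad role name');
        }
        $pdo->exec("ALTER TABLE user_roles
                    ADD COLUMN role_{$role} TINYINT(1) NOT NULL DEFAULT 0");
    }

    addRoleColumn($pdo, 'moderator');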
So, putting it again: use a MySQL client for any DB work that is not related to or covered by the functionality and interfaces your PHP code provides.
Sidenote: though I've used phpMyAdmin, TOAD, Workbench and a few others, I think nothing's as efficient and quick as the MySQL client itself, i.e. working directly at the MySQL prompt. If you've always used GUI clients, you might find the prompt unattractive at first, but it's real fun and helps you keep the syntax at your fingertips :-)
Your question might have been misunderstood by some people.
Charles Sprayberry was saying there's no best practice as far as which 3rd party MySQL client (i.e. phpmyadmin, TOAD, etc.) to use to edit your database. It comes down to personal preference.
Abhay was saying (and I really think this was the answer to your question), that typically, your application does not do DDL (although exceptions exist). Rather, your application will usually be performing DML commands only.
DML is Data Manipulation Language. For example:
select
insert
update
delete
DDL is Data Definition Language. For example:
create table
alter table
drop table
Basic SQL statements: DDL and DML
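For a concrete illustration of the difference (connection details and table are invented):

    <?php
    // Hypothetical illustration: DDL defines structure, DML manipulates rows.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    // DDL - run rarely, usually by an admin or an install script:
    $pdo->exec('CREATE TABLE employees (
        id INT AUTO_INCREMENT PRIMARY KEY,
        employee_name VARCHAR(100) NOT NULL
    )');

    // DML - what the application runs all day:
    $stmt = $pdo->prepare('INSERT INTO employees (employee_name) VALUES (?)');
    $stmt->execute(['Alice']);
    $rows = $pdo->query('SELECT * FROM employees')->fetchAll();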
So, first things first: I'm a student. I'm developing an application where other students can have access to a MySQL database. Basically, I wanted to spare the students the need to search for hosting or even install MySQL on their computers. Another plus is that they can present their work to the class just by browsing a website.

So, my idea was to use the same database for everyone and add a login system for the students. This way, I can associate a prefix with every student, and they can execute any type of query without worrying about clashing with someone else's table, because the system would prefix the tables in their queries automatically. My idea was to limit how many tables and rows each user can have, which shouldn't be hard with a parser. It doesn't necessarily need to be a parser in PHP; it could be in Perl or Python. PHP is just more convenient. .NET would be more troublesome because of Windows.
By the way, each class of "introduction to database systems" has around 50 students and there are 3 classes, so it could reach about 150 students...
For example, SELECT * FROM employees
has to become
SELECT * FROM prefix_employees
I do not know what the queries will look like; they could get fairly complex, so I'd probably need a well-written parser, which I haven't found yet for PHP.
Thanks guys, I hope I have made myself clear
Unfortunately, MySQL does not (AFAIK) have schemas the way some other databases (e.g. PostgreSQL) have them (for separating content (tables, etc...) logically within one database).
But I would definitely go for the separate-databases scenario.
Your parser (with the 'prefixing scheme') will be broken (unwillingly and also possibly willingly) unless you are willing to put an extreme amount of time into making it work.
I'd rather go with the "one database per user" approach. This solution requires some administration (you can either create the users/databases manually using a tool like phpMyAdmin, or simply create your own little administration panel in which you allow the students to register), but it will require far less work from you than filtering all requests.
This way, each student has his own login/password, with preferably a database of the same name on which he has all rights (this can be done automatically with phpMyAdmin), and is able to work without interfering with other students. You can be sure that some will try to break your security, no matter how hard you try and how well-intentioned you are. Confining them to separate databases leaves them no choice but to try to gain admin access to your DB, which will be pretty hard if you maintain an up-to-date server and complex enough passwords (and you don't store them in clear text in a "readable by all" .txt file on your university server).
Plus, you will be able to monitor the disk space, usage, etc... of each database individually, which is easier than having to look at tables separately.
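A minimal provisioning sketch for the one-database-per-user approach; host, account names and passwords are placeholders, and it must run under an administrative account:

    <?php
    // Sketch: create one database + one user per student, with full rights
    // limited to that database only. The student name is whitelisted because
    // identifiers cannot be bound as prepared-statement parameters.
    $pdo = new PDO('mysql:host=localhost', 'root', 'rootpass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    function provisionStudent(PDO $pdo, string $name, string $password): void {
        if (!preg_match('/^[a-z0-9_]+$/', $name)) {
            throw new InvalidArgumentException('bad student name');
        }
        $pdo->exec("CREATE DATABASE `$name`");
        $quoted = $pdo->quote($password);
        $pdo->exec("CREATE USER '$name'@'%' IDENTIFIED BY $quoted");
        $pdo->exec("GRANT ALL PRIVILEGES ON `$name`.* TO '$name'@'%'");
    }

    provisionStudent($pdo, 'student42', 'choose-a-strong-password');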
Depending on your exact requirements, you may be able to use table permissions to prevent one student from modifying (or viewing) data from another student. You would still need a process to allow students to create a new table with their assigned prefix (and to create an appropriate permissions entry), but once created, the DB would enforce access control on every query, so you would not have to (just don't allow student accounts to create/alter tables directly).
As for quotas, I'm not aware of MySQL directly supporting a quota system, but you could put the files that back each user's tables in a separate directory and use OS-level quotas to limit disk space usage.
Related to my previous question:
PHP and Databases: Views, Functions and Stored Procedures performance
Just to make a more specific question regarding large SELECT queries.
When would it be more convenient to use a View instead of writing the SELECT query in the code and calling it:
$connector->query($sql)->fetchAll();
What are the factors to take into account when deciding whether it's best to use a view, or just leave it as it is? Say, if you join several tables, select a certain amount of data, etc.
I'm asking in the context of a big web app (with PHP & Postgres), and looking for performance and optimization.
One thing to take into account when you are using PHP source code + views (instead of only PHP source code) is that you now have two kinds of sources to modify when you update your application:
you must put the new PHP sources on the server
and you must update the views
And you sometimes must do both at exactly the same time if you don't want your application to crash... Or you have to program under the assumption that the application must run OK with an outdated / more recent version of the views (for a couple of seconds).
Something else you might have to consider is versioning: versioning PHP scripts is easy: just use SVN and it's all right, as they're text files.
With views, to get the same kind of versioning, you have to work in text files (committed to SVN before you update them on the DB production server) and keep those in sync with the DB server -- seems easy, but it's not when you have to push an emergency patch to production ^^
Personally, I generally use views / stored procedures when it really makes a difference: for instance, if a calculation would require thousands of SQL queries (and thus thousands of calls from PHP, each waiting for the response, and so on) or too many data exchanges between the two servers, using a stored proc can really be great!
(I've never used Postgres, but the idea is the same with other products.)
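For illustration, a minimal sketch of moving a multi-join SELECT into a view and then consuming it from PHP; the tables, the view and the connection details are invented, and plain PDO stands in for the asker's $connector wrapper:

    <?php
    // Sketch: a reporting query captured as a view, then consumed from PHP.
    // The join and all names are hypothetical.
    $pdo = new PDO('pgsql:host=localhost;dbname=app', 'user', 'pass', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    // One-time DDL (normally versioned in a .sql file, per the advice above):
    $pdo->exec('CREATE OR REPLACE VIEW employee_report AS
        SELECT e.employee_name, d.department_name, COUNT(t.id) AS tasks
        FROM employees e
        JOIN departments d ON d.id = e.department_id
        LEFT JOIN tasks t ON t.employee_id = e.id
        GROUP BY e.employee_name, d.department_name');

    // Application code now stays short and stable:
    $rows = $pdo->query('SELECT * FROM employee_report')->fetchAll();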