I am trying to figure out how one would start the setup of a small CMS.
I have the groundwork built, but for the step of creating the database tables in MySQL: should this all be done at once in an install.php file? Is there a preferred method for creating many tables at once, even if I don't need to insert data into them at this time?
You can:
Import the schema file to your database prior to deploying the application
Have a script that creates the schema
Have a script that applies any changes to the current schema (for upgrades)
For a small CMS, I'd just keep the SQL in a schema file and import it when I need it.
You could also do a database copy from your dev -> live system. So you make the changes in the dev database as you need them and then push them to the live database. Something like SQLCompare for SQL Server works well.
WordPress does the install.php route, where you have to enter your credentials and such for the target database and it then pushes the changes to it.
If you're going to be distributing your application for 3rd parties to install on their own servers, a very common approach is to provide (as you said) a simple install.php file. If your application is more complicated, oftentimes an installation directory will come packaged with it. The user installing the application opens this in a browser, where your script typically does a few things:
Check PHP installation - verify (using function_exists()) all the required functions (and thus libraries) are installed and available. Alert the user of anything missing.
Allow the user to enter their configuration parameters - application specific settings required. Typically database hostname, username & password.
Test database connection - if successful, load initial tables. Commonly you keep your base schema file stored as a SQL file, so the application pushes this through the native mysql client, or issues the individual SQL commands directly.
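A minimal sketch of such an installer, assuming the base schema ships as a schema.sql file next to the script and that the form field names here are placeholders:

```php
<?php
// install.php - minimal installer sketch; field names, schema.sql and the layout are assumptions.

// 1. Check the PHP installation: verify required functions (and thus extensions) are available.
foreach (['mysqli_connect', 'mb_strlen', 'json_encode'] as $fn) {
    if (!function_exists($fn)) {
        die("Missing required PHP function: {$fn}() - install/enable the corresponding extension.");
    }
}

// 2. Configuration parameters entered by the user (normally collected via a form).
$host = $_POST['db_host'] ?? 'localhost';
$name = $_POST['db_name'] ?? '';
$user = $_POST['db_user'] ?? '';
$pass = $_POST['db_pass'] ?? '';

// 3. Test the database connection; if it succeeds, load the initial tables from schema.sql.
$db = @mysqli_connect($host, $user, $pass, $name);
if (!$db) {
    die('Could not connect: ' . mysqli_connect_error());
}

// Naive statement splitting - fine for a schema file that only contains CREATE TABLE statements.
$statements = array_filter(array_map('trim', explode(';', file_get_contents(__DIR__ . '/schema.sql'))));
foreach ($statements as $sql) {
    if (!mysqli_query($db, $sql)) {
        die('Schema statement failed: ' . mysqli_error($db));
    }
}

echo 'Installation complete - remember to remove install.php.';
```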
We're currently developing a 'sort of' e-commerce platform for our customers who are using our POS system.
This mainly consists of:
An Angular client-side
A PHP API as back-end
A MySQL database
Before I distribute the application to clients, I want to have a 'manageable' system for deploying and updating their platforms in case of code changes etc.
The initial setup would be:
Create database
Copy PHP files
Run composer
Run migrations
Modify configuration file for database credentials, salts, domain,..
Copy client side files
I was looking at Deployer for PHP, but I'm not sure how the whole database creation and config file modifications would work. I originally had the database creation in one of my migrations, but this would require a root db-user (or one with create permissions), and this user would need to be created as well.
The initial setup part could be done manually (it won't be more than about five installations per week, but I would like to make it as simple as possible so that our support team can do this instead of me every time).
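One way to keep that manual step simple enough for support is a one-time bootstrap script, run with a privileged MySQL account, that creates both the client database and the application's own (non-root) user; a rough sketch, where every name and credential is a placeholder:

```php
<?php
// create_client_db.php - one-time bootstrap sketch, run with a MySQL account allowed to CREATE databases/users.
// All names and credentials below are placeholders.
$client  = $argv[1] ?? 'client1';
$dbName  = 'shop_' . $client;
$appUser = 'shop_' . $client;
$appPass = bin2hex(random_bytes(16)); // generated password for the application's DB user

$root = new PDO('mysql:host=localhost;charset=utf8mb4', 'root', 'root-password-here', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$root->exec("CREATE DATABASE IF NOT EXISTS `{$dbName}` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci");
$root->exec("CREATE USER '{$appUser}'@'localhost' IDENTIFIED BY '{$appPass}'");
$root->exec("GRANT ALL PRIVILEGES ON `{$dbName}`.* TO '{$appUser}'@'localhost'");

echo "Database:  {$dbName}\n";
echo "DB user:   {$appUser}\n";
echo "Password:  {$appPass}\n";
echo "Put these in the installation's config file, then run composer and the migrations as usual.\n";
```

The migrations themselves can then run as the unprivileged application user, which sidesteps the root-user concern.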
The next part would be Updates.
I don't want to FTP to every server and apply changes. Updates can be both server-side and client-side. What would be the best way to do this:
Have a central system with all versions and registered websites at our end, and have each client server check daily for a new version. If there is a new version, download all files from our server and run the migrations.
Push the new version to all clients via Deployer. But wouldn't this overwrite or move the original config file (with the DB credentials etc.) when pushing the new version?
What if I need to add a new config setting? (Application settings are stored in the database, but things like the 'API' settings live in a config file.)
Chances are that all these client servers will be hosted via our hosting provider, so we'll have access to all of them and they'll all be the same (for the configuration and such).
I've only written web applications that ran in one (server) location, so updating those was easy, for example via DeployBot, and the database setup was done manually. Now I'm stepping up my game and I want to make sure that I don't give myself more work than necessary.
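On the config-file concern specifically: deployment tools such as Deployer usually handle this with 'shared' files that live outside the release directories and are symlinked into every new release, so an update never overwrites the per-client credentials. A rough deploy.php sketch - the recipe and task names vary between Deployer versions, and the hosts, repository and migration command are placeholders:

```php
<?php
// deploy.php - rough Deployer sketch; exact recipe/task names depend on your Deployer version.
namespace Deployer;

require 'recipe/common.php';

set('application', 'shop-platform');
set('repository', 'git@example.com:acme/shop-platform.git'); // placeholder repository

// Shared files/dirs are kept once per host and symlinked into every release,
// so the per-client config (DB credentials, salts, domain) survives each update.
set('shared_files', ['config/config.php']);
set('shared_dirs', ['storage/uploads']);

// One host entry per client installation (placeholders).
host('client1.example.com')->set('deploy_path', '/var/www/shop');
host('client2.example.com')->set('deploy_path', '/var/www/shop');

// Run the database migrations once the new release is in place.
task('deploy:migrate', function () {
    run('cd {{release_path}} && php bin/migrate.php'); // placeholder migration command
});
after('deploy:vendors', 'deploy:migrate');
```

For new config settings, a common pattern is to ship a versioned defaults file with the code and have the application merge it with the shared per-client config, so only genuinely client-specific values live in the file that deployments never touch.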
Here's our case on developing an e-commerce platform - maybe you'll find answers to your questions there.
Codenetix specializes in custom development, mostly web apps, so if you need help - let us know.
Good luck with your project!
I have a PHP application on a production server which is meant to register users for some services. It has two forms, each of which registers a user in a different table of my database.
The problem is that today one of the tables disappeared. I was able to restore it from a backup, but this doesn't get rid of the underlying problem.
How do I investigate this in order to determine how that table got lost - most likely dropped by some bot or something?
How would you proceed in a situation like this?
There are two ways:
Have a working backup of the system, and restore the files from it.
An undelete tool might help, if you deleted the db very recently (and ideally, if you unplugged the computer right afterward).
As for doing it with MySQL, though... on all systems I'm aware of, no. MySQL tables are files in the server's data directory, and dropping a table deletes those files. Once they're gone, they're gone, and only the methods above can get them back. A database is a directory of those files, and dropping it deletes the whole directory.
Check this free software
http://www.majorgeeks.com/Restoration_d4474.html
More information here - http://emaillenin.blogspot.com/2010/11/recover-accidentally-deleted-mysql.html
If your tables got dropped, find out which MySQL users have privileges to drop a table (there shouldn't be many) and which services log in with those users' credentials.
Maybe you have a web form with a PHP backend that doesn't clean up (escape) its input, in which case you may have been open to SQL injection.
In that case, you could check your webserver access logs.
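Both checks are quick to script; a sketch, where the connection details and the form's table/column names are assumptions:

```php
<?php
// Sketch: audit who can DROP tables, and parameterize the registration inserts.
// Connection details and table/column names are placeholders.

// 1. Which MySQL accounts have the DROP privilege? (Run with a user that can read the mysql schema.)
$admin = new PDO('mysql:host=localhost;charset=utf8mb4', 'root', 'root-password-here');
foreach ($admin->query("SELECT user, host FROM mysql.user WHERE Drop_priv = 'Y'") as $row) {
    echo "{$row['user']}@{$row['host']} can drop tables\n";
}

// 2. Registration handler: a prepared statement, so form input is never interpreted as SQL.
$app  = new PDO('mysql:host=localhost;dbname=services;charset=utf8mb4', 'app_user', 'app-password-here');
$stmt = $app->prepare('INSERT INTO service_a_users (email, name) VALUES (:email, :name)');
$stmt->execute([
    ':email' => $_POST['email'] ?? '',
    ':name'  => $_POST['name'] ?? '',
]);
```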
I am deploying a small PHP + MySQL service to my client and would like to know what is the proper way to set up the database:
Should I use the hosting provider control panel and create the database schema?
Or should I put SQL CREATE scripts in my PHP to run during the "init phase"? Do hosting providers even allow PHP to create tables?
It's a really small site, one tiny info page and one web service page for fetching data from the database.
I usually offload all deployment tasks into an install script. This way you can deploy in a matter of seconds, and can repeat the process if necessary. I do not know of a way to restrict scripts from making database modifications (other than MySQL user permissions, which will typically be defined by you).
It may depend what your hosting provider offers - personally I would use the control panel which should at least provide phpMyAdmin. You can then export your schema from your development database and import it to the live version.
Depending on your hosting provider you get a certain number of databases. The worst case is one database with a fixed name; most providers offer 5 or more, with the ability to choose your own database name, often with a prefix.
I would go for the hosting provider's panel, although you can issue any SQL statement through PHP.
Why add the complication of PHP for the installation?
Just use raw SQL. Simpler. Fire that into the database.
Use PHP for the interface. Creating tables/stored procedures/triggers etc. is a one-off event.
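If the hosting panel route isn't available and you do end up firing the raw SQL from PHP, mysqli's multi_query() can push a whole schema file in one pass; a small sketch with placeholder credentials:

```php
<?php
// load_schema.php - sketch: push a raw schema.sql into the database from PHP (placeholder credentials).
$mysqli = new mysqli('localhost', 'app_user', 'app-password-here', 'app_db');

if ($mysqli->multi_query(file_get_contents(__DIR__ . '/schema.sql'))) {
    // Walk every result set so errors in later statements are not silently swallowed.
    do {
        if ($result = $mysqli->store_result()) {
            $result->free();
        }
    } while ($mysqli->more_results() && $mysqli->next_result());
}

if ($mysqli->error) {
    die('Schema import failed: ' . $mysqli->error);
}
echo "Schema imported.\n";
```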
We're currently designing a rewrite of our PHP website. The new version will be under SVN version control and have a separate database for development and live sites.
Currently we have about 200,000 images on the site and we add around 5-10 a month. We'd like to have these images under SVN as well.
The current plan is to store and serve the images from the file system while serving their meta data from the database. Images will be served through a PHP imaging system with Apache rewrite rules so that http://host/image/ImageID will access a PHP script that queries the database for an image with the specified ID and (based on a path column in the table) returns the appropriate image.
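The serving script itself stays small; a sketch assuming an images table with id, path and mime columns, and a rewrite rule along the lines of RewriteRule ^image/(\d+)$ /image.php?id=$1:

```php
<?php
// image.php - sketch of the lookup-and-serve script; table/column names and credentials are assumptions.
// Apache: RewriteRule ^image/(\d+)$ /image.php?id=$1 [L]
$pdo = new PDO('mysql:host=localhost;dbname=site;charset=utf8mb4', 'app_user', 'app-password-here');

$stmt = $pdo->prepare('SELECT path, mime FROM images WHERE id = :id');
$stmt->execute([':id' => (int)($_GET['id'] ?? 0)]);
$image = $stmt->fetch(PDO::FETCH_ASSOC);

if (!$image || !is_file($image['path'])) {
    http_response_code(404);
    exit;
}

header('Content-Type: ' . $image['mime']);
header('Content-Length: ' . filesize($image['path']));
readfile($image['path']);
```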
The issue I'm having is keeping the image files and their meta data in sync between live and development sites.
Adding new images is (awkward, but) easy for the development team: we can add the image to our SVN repository in the same manner we do all files and manually create the meta data in both the live and test databases.
The problem arises when our employees need to upload new images through the website itself.
One viable solution I've been able to come up with is having our PHP upload script commit the new images to SVN and send INSERT queries to both live and development databases. But to me this seems inefficient. Plus SVN support in PHP is still experimental and I dislike having to rely on exec() calls.
I've also considered a third, separate database just for image meta data, as well as not storing the images in SVN at all (but they are part of the application and not just 'content' images that would be better off just being backed up).
I'd really like to keep images in SVN and if I do I need them to stay consistent with their meta data between the live and development site. I also have to provide a mechanism for user uploaded images.
What is the best way of handling this type of scenario?
The best way to handle this would be to use a separate process to keep your images and meta data in sync between live and dev. For the image files you can use a bash script running from cron to do an "svn add" and "svn commit" for any images uploaded to your live environment. Then you can run a periodic "svn up" in your dev environment to ensure that dev has the latest set. MySQL replication would be the best way to handle keeping the live and dev databases in sync given your data set. This solution assumes two things: 1) data flows in one direction, from prod to dev and not the other way around; 2) your users can tolerate a small degree of latency (the amount of time for which live and dev will be out of sync). The amount of latency will be directly proportional to the amount of data uploaded to prod. Given the 5-10 images added per month, latency should be infinitesimal.
I've had to solve this sort of problem for a number of different environments. Here are some of the techniques that I've used; some combination may solve your problem, or at least give you the right insight to solve it.
Version controlling application data during development
I worked on a database application that needed to be able to deliver certain data as part of the application. When we delivered a new version of the application, the database schema was likely to evolve, so we needed SQL scripts that would either (1) create all of the application tables from scratch, or (2) update all of the existing tables to match the new schema, add new tables, and drop unneeded tables. In addition, we needed to be able to prove that the upgrade scripts would work no matter which version of the application was being upgraded (we had no control of the deployment environment or upgrade schedules, so it was possible that a given site might need to upgrade from 1.1 to 1.3, skipping 1.2).
In this instance, what I did was take a tool that would dump the database as one large SQL script containing all of the table definitions and data. I then wrote a tool that split apart this huge script into separate files (fragments) for each table, stored procedure, function, etc. I wrote another tool that would take all of the fragments and produce a single SQL script. Finally, I wrote a third tool that was used during installation that would determine which scripts to run during installation based upon the state of the database and installed application. Once I was happy with the tools, I ran them against the current database, and then edited the fragments to eliminate extraneous data to leave only the parts that we wanted to ship. I then version-controlled the fragments along with a set of database dumps representing databases from the field.
My regression test for the database would involve restoring a database dump, running the installer to upgrade the database, then dumping the result and splitting the dump into fragments, and finally comparing the fragments against the committed version. If there were any differences, then that pointed to problems in the upgrade or installation fragments.
During development, the developers would run the installation tool to initialize (really upgrade) their development databases, then make their changes. They'd run the dump/split tool, and commit the changed fragments, along with an upgrade script that would upgrade any existing tables to match the new schema. A continuous integration server would check out the changes, build everything, and run all of the unit tests (including my database regression tests), then point the finger at any developer that forgot to commit all of their database changes (or the appropriate upgrade script).
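As an illustration of the dump-splitting step (the real tools were more involved), here is a minimal sketch that cuts a mysqldump file into one fragment per table, keyed off the header comment mysqldump writes before each table:

```php
<?php
// split_dump.php - minimal sketch: split a mysqldump file into per-table fragment files.
// Assumes the dump was produced by mysqldump, which emits a
// "-- Table structure for table `name`" comment before each table definition.
$dump      = file_get_contents($argv[1] ?? 'dump.sql');
$outputDir = $argv[2] ?? 'fragments';
@mkdir($outputDir, 0777, true);

$current   = 'preamble';
$fragments = [$current => ''];
foreach (explode("\n", $dump) as $line) {
    if (preg_match('/^-- Table structure for table `(.+)`/', $line, $m)) {
        $current = $m[1];
        $fragments[$current] = '';
    }
    $fragments[$current] .= $line . "\n";
}

foreach ($fragments as $name => $sql) {
    file_put_contents("{$outputDir}/{$name}.sql", $sql);
}
echo count($fragments) . " fragments written to {$outputDir}\n";
```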
Migrating Live data to a Test site
I build websites using Wordpress (on PHP and MySQL) and I need to keep 'live' and 'test' versions of each site. In particular, I frequently need to pull all of the data from 'live' to 'test' so that I can see how certain changes will look with live data. The data in this case is web pages, uploaded images, and image metadata, with the image metadata stored in MySQL. Each site has completely independent files and databases.
The approach that I worked out is a set of scripts that do the following:
Pull two sets (source and target) of database credentials and file locations from the configuration data.
Tar up the files in question for the source website.
Wipe out the file area for the target website.
Untar the files into the target file area.
Dump the tables in question for the source database to a file.
Delete all the data from the matching tables in the target database.
Load the table data from the dump file.
Run SQL queries to fix any source pathnames to match the target file area.
The same scripts could be used bidirectionally, so that they could be used to pull data to test from live or push site changes from test to live.
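A compressed sketch of that pull, treating paths, table names and credentials as placeholders and delegating the heavy lifting to tar, mysqldump and the mysql client via exec():

```php
<?php
// pull_live_to_test.php - sketch of the live -> test sync; all paths, tables and credentials are placeholders.
$src = ['db' => 'site_live', 'user' => 'live_user', 'pass' => 'live-pass', 'files' => '/var/www/live/uploads'];
$dst = ['db' => 'site_test', 'user' => 'test_user', 'pass' => 'test-pass', 'files' => '/var/www/test/uploads'];
$tables = 'wp_posts wp_postmeta'; // tables holding the pages and image metadata, for example

// Tar up the source file area, wipe the target area, untar into it.
exec(sprintf('tar -C %s -czf /tmp/files.tgz .', escapeshellarg($src['files'])));
exec(sprintf('rm -rf %s && mkdir -p %s', escapeshellarg($dst['files']), escapeshellarg($dst['files'])));
exec(sprintf('tar -C %s -xzf /tmp/files.tgz', escapeshellarg($dst['files'])));

// Dump the source tables; the dump's DROP TABLE statements clear the matching target tables on load.
exec(sprintf('mysqldump -u%s --password=%s %s %s > /tmp/tables.sql',
    $src['user'], escapeshellarg($src['pass']), $src['db'], $tables));
exec(sprintf('mysql -u%s --password=%s %s < /tmp/tables.sql',
    $dst['user'], escapeshellarg($dst['pass']), $dst['db']));

// Fix source pathnames/URLs so they point at the target site (illustrative query).
$pdo = new PDO("mysql:host=localhost;dbname={$dst['db']};charset=utf8mb4", $dst['user'], $dst['pass']);
$pdo->exec("UPDATE wp_posts SET guid = REPLACE(guid, 'live.example.com', 'test.example.com')");
```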
If you already have a solution to deal with data migration from dev to prod for your databases, why not store the actual images as BLOBs in the DB, along with the metadata?
As the images are requested, you can have a script write them to flat files on the server (or use something like memcache to help serve up common images) the first time, and then treat them as files afterwards (doing a file_exists() check or similar). Have your mod_rewrite script handle the DB lookup. This way, you will get the benefit of still having the majority of your users access 'flat' image files handled by your mod_rewrite script, and everything being nicely in sync with the various DBs. The downside is that your DBs get big of course.
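A sketch of that first-hit materialization, assuming a hypothetical images table with id, mime and data (BLOB) columns:

```php
<?php
// image.php - sketch: serve from a flat-file cache, falling back to the BLOB in the DB on the first request.
// Table/column names (images.id, images.mime, images.data) and credentials are assumptions for this sketch.
$id        = (int)($_GET['id'] ?? 0);
$cacheFile = __DIR__ . '/cache/' . $id . '.img';

if (!file_exists($cacheFile)) {
    $pdo  = new PDO('mysql:host=localhost;dbname=site;charset=utf8mb4', 'app_user', 'app-password-here');
    $stmt = $pdo->prepare('SELECT mime, data FROM images WHERE id = :id');
    $stmt->execute([':id' => $id]);
    if (!$row = $stmt->fetch(PDO::FETCH_ASSOC)) {
        http_response_code(404);
        exit;
    }
    file_put_contents($cacheFile, $row['data']);           // materialize the flat file
    file_put_contents($cacheFile . '.mime', $row['mime']); // remember the content type
}

header('Content-Type: ' . file_get_contents($cacheFile . '.mime'));
readfile($cacheFile);
```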
Where, when and how to create the administrator account/user for a private website?
So what I am asking is: what's the preferred technique for creating that first administrator account/user? In my case it's for a private web application. I am talking about the account/user that will own the application and will, if needed, create/promote the other administrators. I guess you could call this guy the root user?
Here are a few ways I have encountered in other websites/web applications.
Installation wizard:
You see this a lot in blog software or forums. When you install the application it will ask you to create an administrator user. A private web application will most likely not have this.
Installation file:
A file you run to install your application. This file will create the administrator account for you.
Configuration files:
A configuration file that holds the credentials for the administrator account.
Manually insert it into a database:
Manually insert the administrator info into the database.
When:
In a bootstrapping phase. Someone has suggested seeds.rb. I personally prefer to use the bootstrapper gem (with some additions that allow me to parse CSV files).
This action allows you to create a rake task which can be invoked like this:
rake db:bootstrap
This will create the initial admin user, as well as any seeding data (such as the list of countries, or a default blog format, etc). The script is very flexible. You can make it ask for a password, or accept a password parameter, if you feel like it.
How:
In all cases I use declarative_authorization in order to manage user permissions.
Your admin user must return a role called 'admin' (or whatever name you choose) in the list of roles attached to it. I usually have a single role per user, mainly because I can use role inheritance (e.g. admins are also editors by default). This means that in my database I've got a single field for users called "role_id". 0 is usually for the admin role, since it is the first one created.
Where:
A specific file inside db/bootstrap/users.rb (or yaml, or csv) specifies the details of a user with the admin role activated. The rake db:bootstrap task parses that file and creates the user accordingly.
I see you tagged ruby on rails here. In RoR you would probably use the seeds.rb file under /your_app/db.
If you are using asp.net, I might assume you are using MSSQL or maybe Oracle. Having a stored proc that runs as an install script might do the job.
I have seen php apps using an install.php file that when run once installs the necessary data into the database and then tells the installer to delete the file before the app will run.
So there are three ways to deal with it.
If you have user accounts on your website (and I see you have them), a config file with the administrator's credentials is very awkward. This solution forces you to duplicate a big part of the authentication logic. Better to keep the account in the database.
I understand you are preparing the application for yourself, not delivering it to your customers. Preparing an installation wizard or installation files seems to be a waste of time.
I would do the simplest thing - just a raw insert. Pros: no extra work, same authentication mechanism as for other users. If you are using some kind of database migrations, you could create a migration which creates a root account with some dummy password you can change later.
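For example, a seed migration/insert along these lines (table and column names are assumptions; the point is that the password is hashed exactly the way the normal login path expects):

```php
<?php
// Seed migration sketch: create the root/admin account once, with a throwaway password
// that must be changed after first login. Table/column names and credentials are assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'app_user', 'app-password-here');

$stmt = $pdo->prepare(
    'INSERT INTO users (email, password_hash, role) VALUES (:email, :hash, :role)'
);
$stmt->execute([
    ':email' => 'admin@example.com',                        // placeholder address
    ':hash'  => password_hash('change-me-now', PASSWORD_DEFAULT),
    ':role'  => 'admin',
]);
echo "Root account created; log in and change the password immediately.\n";
```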
Installation wizard:
- definitely the best approach. Clean, safe and user-friendly. Should be integrated with the application installer.
Installation file:
- OK, but only if you have one and only one script to run. Having more -> problems and potential security flaws (all the folks who forget to delete this file afterwards...)
Configuration files:
To avoid. You are demanding that the user know PHP, the internals of your app, and maybe server-side configuration (anything above FTP can be "difficult").
Manually insert it into a database:
To avoid * 2.
In addition, the last two solutions are impossible if you are using password hashing (i.e. MD5 or SHA1 with a site-specific salt), which is quite an obligation today.