My company has developed a web application using PHP + MySQL. The system displays a product's original price and a discount price to the user. If you are not logged in, you see the original price; if you are logged in, you see the discount price. It is pretty easy to understand.
But my company wants more features: the system should display different prices based on the user. For example, user A is a gold partner and gets 50% off, while user B is a silver partner and only gets 30% off. This logic was not planned for in the original system, so I need to add some attributes to the database, at least a user type in this example. Is there any recommendation on how to migrate the current database to my new version of the database? All existing data should be preserved, and the server should keep working 24/7 (i.e. without stopping the database).
Is it possible to do so? Also, any recommendations for future maintenance? Thank you.
I would recommend writing a tool that applies SQL scripts to your databases incrementally, much like Rails migrations.
In the system I am currently working on, we have such a tool written in Python. We name our scripts something like 000000_somename.sql, where the zeros are the revision number in our SCM (Subversion), and the tool is run as part of development/testing and finally when deploying to production.
This has the benefit of letting you go back in time in terms of database changes, much like you can with code (if you use a source control tool).
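A minimal sketch of such a runner in PHP, assuming the scripts live in a scripts/ directory and applied revisions are tracked in a schema_migrations table (both of these are assumptions, not details of the original tool):

<?php
// Minimal migration runner sketch: applies every *.sql script in ./scripts
// exactly once, in filename (revision) order, and records what has been run.
$pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Bookkeeping table so the tool knows which revisions were already applied.
$pdo->exec('CREATE TABLE IF NOT EXISTS schema_migrations (
    filename   VARCHAR(255) PRIMARY KEY,
    applied_at DATETIME NOT NULL
)');

$applied = $pdo->query('SELECT filename FROM schema_migrations')
               ->fetchAll(PDO::FETCH_COLUMN);

$scripts = glob(__DIR__ . '/scripts/*.sql');
sort($scripts); // 000123_add_user_type.sql sorts by revision number

foreach ($scripts as $path) {
    $name = basename($path);
    if (in_array($name, $applied, true)) {
        continue; // already run on this database
    }
    echo "Applying $name\n";
    // Note: files with several statements may need splitting, depending on the driver.
    $pdo->exec(file_get_contents($path));
    $stmt = $pdo->prepare('INSERT INTO schema_migrations (filename, applied_at) VALUES (?, NOW())');
    $stmt->execute([$name]);
}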
http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
Here are more concrete examples of ALTER TABLE.
http://php.about.com/od/learnmysql/p/alter_table.htm
You can add the necessary columns to your table with ALTER TABLE, then set the user type for each user with UPDATE. Then deploy the new version of your app that uses the new column.
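For example (a sketch only; the users table, the user_type column and the partner values are assumptions made for this illustration):

<?php
// Illustrative only: add a user_type column with a sensible default so
// existing rows stay valid, then tag the known partners before deploying
// the code that reads the column.
$pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

// Existing rows get the default value, so the running app is not disturbed.
$pdo->exec("ALTER TABLE users
            ADD COLUMN user_type VARCHAR(20) NOT NULL DEFAULT 'regular'");

// Backfill the partner levels.
$stmt = $pdo->prepare('UPDATE users SET user_type = ? WHERE id = ?');
$stmt->execute(['gold', 42]);
$stmt->execute(['silver', 43]);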
Did you use an ORM for your data access layer? I know Doctrine comes with a migration API that allows switching versions up and down (in case something goes wrong with the new version).
Leaving framework and ORM considerations aside, a fast script will minimize the slowdown (or the downtime if the process takes too long).
In my opinion, I'd rather accept a 30-second site interruption with an information page than a shorter interruption with visible bugs or no display at all. If the interruption time matters, it's best to do this at night or during low traffic.
This can all be done in one script (or at least launched from a single command line). When we had to write such scripts, we included, in a shell script:
putting the application in standby (temporary static page): you can use an .htaccess redirect or whatever is applicable to your app/server environment
svn update (or switch) to upgrade source code and assets
emptying caches, cleaning up temp files, etc.
rebuilding generated classes (Symfony specific)
upgrading the DB structure with ALTER / CREATE TABLE queries
if needed, migrating data from the old structure to the new one: depending on what you changed in the structure, this may require fetching data before altering the DB structure, or using temporary tables
if all went well, removing the temporary page. Upgrade done.
if something went wrong, displaying a red message to the operator so they can see what happened, try to fix it, and then remove the waiting page by hand.
The script should check each step and stop at the first error, and it should be verbose (but concise) about what it is doing at every step, so you can fix the app faster if something does go wrong.
The best would be a recoverable script (error at step 2 - stop process - manual fix - resume at step 3), but I never took the time to implement it that way.
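A rough sketch of such a step runner in PHP (the step labels mirror the list above; the commands and paths are placeholders, not the actual script):

<?php
// Deployment step runner sketch: verbose, stops at the first failing step
// and leaves the waiting page in place so the operator can intervene.
$steps = [
    'enable maintenance page'  => 'cp htaccess.maintenance .htaccess',
    'update source code'       => 'svn update /var/www/app',
    'clear caches'             => 'rm -rf /var/www/app/cache/*',
    'upgrade DB structure'     => 'mysql myapp < upgrades/001_add_user_type.sql',
    'migrate data'             => 'php upgrades/migrate_user_types.php',
    'disable maintenance page' => 'cp htaccess.normal .htaccess',
];

foreach ($steps as $label => $command) {
    echo "==> $label\n    $command\n";
    exec($command, $output, $status);
    echo implode("\n", $output), "\n";
    if ($status !== 0) {
        // Stop at the first error; the temporary page stays up until the
        // operator fixes the problem and removes it by hand.
        fwrite(STDERR, "FAILED at step '$label' (exit code $status)\n");
        exit(1);
    }
}
echo "Upgrade done.\n";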
It works pretty well, but this kind of script has to be tested intensively, on an environment as close as possible to the production one.
In general we develop such scripts locally and test them on the same platform as the production environment (just with different paths and DB).
If the waiting page is not an option, you can go without it, but you need to ensure data and user session integrity. For example, use LOCK on tables during the upgrade/data transfer, and use exclusive locks on modified files (SVN does this, I think).
There may be better solutions, but this is basically what I use and it does the job for us. The major drawback is that this kind of script has to be rewritten for each major release, which pushes me to look for other options, but which ones? I would be glad if someone here had a better and simpler alternative.
I want to build a detailed logger for my application. Because it can get very complex and has to save a lot of different things, I wonder where it is best to save the logs: in a database (and if a database, which kind is better for this kind of operation) or in a file (and if a file, in what format: text, CSV, JSON, XML)? My first thought was of course a file, because with a database I see a lot of problems, but I also want to be able to display those logs, and that is easier with a database.
I am building a log for HIPAA compliance and here is my rough implementation (not finished yet).
File vs. DB
I use a database table to store the last 3 months of data. Every night a cron job runs and pushes the older data (anything more than 3 months old) off into compressed files. I haven't written this script yet, but it should not be difficult. That way the last 3 months can be searched, filtered, etc., but the database won't be overwhelmed with log entries.
Database Preference
I am using MSSQL because I don't have a choice. I usually prefer MySQL though, as it has better paging optimization. If you are doing more than a very minimal amount of searching and filtering, or if you are concerned about performance, you may want to consider putting Apache Solr in front as a middle man. I'm not a DB expert, so I can't give you much more than that.
Table Structure
My table has 5 columns: Date, Operation (create, update, delete), Object (patient, appointment, doctor), ObjectID, and Diff (a serialized array of before and after values; changed values only, no empty or unchanged values, for the sake of saving space).
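A minimal sketch of how such a row could be written (the column and table names follow the description above; the helper function itself is hypothetical):

<?php
// Hypothetical audit-log writer: stores only the fields that actually
// changed, serialized, to keep the table small.
function logChange(PDO $pdo, string $operation, string $object, int $objectId,
                   array $before, array $after): void
{
    $diff = [];
    foreach ($after as $field => $newValue) {
        $oldValue = $before[$field] ?? null;
        if ($oldValue !== $newValue) {
            $diff[$field] = ['before' => $oldValue, 'after' => $newValue];
        }
    }

    $stmt = $pdo->prepare(
        'INSERT INTO audit_log (Date, Operation, Object, ObjectID, Diff)
         VALUES (NOW(), ?, ?, ?, ?)'
    );
    $stmt->execute([$operation, $object, $objectId, serialize($diff)]);
}

// Example: a patient record was updated.
// logChange($pdo, 'update', 'patient', 1234,
//           ['phone' => '555-0100'], ['phone' => '555-0199']);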
Summary
The most important question to consider is: do you need people to be able to access and filter/search the data regularly? If yes, consider a database for the recent history or the most important data.
If not, a file is probably the better option.
My hybrid solution is also worth considering. I'll be pushing the files off to an Amazon file server so they don't take up my web server's space.
You can build a detailed and complex logger using an existing library such as log4php, because it is fully tested, performs well compared to a custom design, and will also save development time. I have personally used a few libraries from PHP and .NET for complex logging needs in financial and medical domain projects.
If you need to do this from PHP, I would suggest using this:
https://logging.apache.org/log4php/
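A minimal usage sketch of log4php's basic API (the configuration file name, the require path and the logger name are placeholders that depend on how the library is installed):

<?php
// Basic log4php usage: configure once, then request named loggers anywhere.
// (Composer autoloading also works instead of this require.)
require_once 'log4php/Logger.php';

// Appenders, layouts and levels are defined in this config file (XML or PHP).
Logger::configure('log4php-config.xml');

$logger = Logger::getLogger('billing');
$logger->info('Invoice 1042 created');
$logger->warn('Payment gateway slow to respond');
$logger->error('Failed to store invoice 1042');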
I think the right answer is actually: Neither.
Neither a file nor a DB gives you proper search and filtering, and you need that when looking at logs. I deal with logs all day long (see http://sematext.com/logsene to see why), and I'd tackle this as follows:
log to file
use a lightweight log shipper (e.g. Logagent or Filebeat)
index logs into either your own Elasticsearch cluster (if you don't mind managing and learning) or one of the Cloud log management services (if you don't want to deal with Elasticsearch management, scaling, etc. -- Logsene, Loggly, Logentries...)
I have a list of single-use discount codes for an ecommerce site I'm partnering with. I need to set up a page on my site where my users can fill out a form, and then will be given one of the codes. The codes are pre-determined and have been sent to me in a text file; I can't just generate them on the fly. I need to figure out the best way to get an unused code from the list, and then remove it from the list (or update a flag to mark it as used) at the same time, to avoid any possibility of giving two people the same code. In other words, something similar to a queue, where I can remove one item from the queue atomically.
This webapp will be running on AWS and the current code is Python (though I could potentially use something else if necessary; PHP would be easy). Ideally I'd use one of the AWS services or mysql to do this, but I'm open to other solutions if they're not a royal pain to get integrated. Since I thought "queue," SQS popped into my head, but this is clearly not what it's intended for (e.g. the 14 day limit on messages remaining in the queue will definitely not work for me). While I'm expecting very modest traffic (which means even really hacky solutions would probably work), I'd rather learn about the RIGHT way to do this even at scale.
I can't give actual code examples, but one of the easiest ways to do it would be an incrementing counter at the top of the file, something like
0
code1
code2
code3
etc
and just skip that many lines each time a code is handed out, incrementing the counter afterwards.
You could also do this pretty simply in a database
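For instance, a hedged sketch with MySQL + PDO (the table and column names are assumptions): a single UPDATE claims one unclaimed row atomically, then you read back the row you just claimed.

<?php
// Atomic single-use code hand-out sketch. The UPDATE claims exactly one
// unclaimed row; because it is a single statement, two concurrent requests
// can never claim the same code.
function claimCode(PDO $pdo, string $claimToken): ?string
{
    $stmt = $pdo->prepare(
        'UPDATE discount_codes
            SET claimed_by = ?, claimed_at = NOW()
          WHERE claimed_by IS NULL
          LIMIT 1'
    );
    $stmt->execute([$claimToken]);

    if ($stmt->rowCount() === 0) {
        return null; // no codes left
    }

    // Read back the code this request just claimed.
    $stmt = $pdo->prepare('SELECT code FROM discount_codes WHERE claimed_by = ?');
    $stmt->execute([$claimToken]);
    return $stmt->fetchColumn() ?: null;
}

// Usage: pass a unique token per request (e.g. the user's id or a random value).
// $code = claimCode($pdo, bin2hex(random_bytes(16)));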
Amazon DynamoDB is a fast, NoSQL database from AWS, and it is potentially a good fit for this use case. Setting up a database table is easy, and you could load your codes into there. DynamoDB has a DeleteItem operation that also allows you to retrieve the data within the same, atomic operation (by setting the ReturnValues parameter to ALL_OLD). This would allow you to get and delete a code in one shot, so no other requests/processes can get the same code. AWS publishes official SDKs to help you connect to and use their services, including both a Python and PHP SDK (see http://aws.amazon.com/tools/).
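One way this could look with the AWS SDK for PHP (v3); everything beyond the DeleteItem/ReturnValues behavior described above (table name, key name, the scan-then-delete loop) is an assumption for this sketch:

<?php
// Sketch only: requires the AWS SDK for PHP v3; table/key names are assumptions.
require 'vendor/autoload.php';

use Aws\DynamoDb\DynamoDbClient;

$client = new DynamoDbClient(['region' => 'us-east-1', 'version' => 'latest']);

function claimCode(DynamoDbClient $client): ?string
{
    while (true) {
        // Find any remaining code (this step by itself is not atomic).
        $scan = $client->scan(['TableName' => 'discount_codes', 'Limit' => 1]);
        if (empty($scan['Items'])) {
            return null; // none left
        }
        $code = $scan['Items'][0]['code']['S'];

        // The delete is the atomic step: ALL_OLD returns the item only if
        // it still existed, i.e. nobody else grabbed it first.
        $result = $client->deleteItem([
            'TableName'    => 'discount_codes',
            'Key'          => ['code' => ['S' => $code]],
            'ReturnValues' => 'ALL_OLD',
        ]);
        if (!empty($result['Attributes'])) {
            return $code; // we won the race
        }
        // Someone else claimed it between scan and delete; try again.
    }
}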
A very flowery title indeed.
I have a PHP web application that is in the form of a web-based wizard. A user can run through the wizard, select options, run processes (DB queries), etc. They can go backwards and forwards and run processes again and again.
I am trying to work out how best to save the state of what users do/did, what processes they ran, etc. So basically a glorified log that I can pull up later.
How do I save these states or sessions? One option being considered by my colleague is to use an XML file for each session and save everything there. My idea is to use a database table to do this.
There are pros and cons for each, and I was hoping I could get answers on which option to go for. Suggestions of other feasible options would be great! Or, what kind of questions should I ask myself to choose the right implementation?
Technologies Currently Used
Backend: PHP and MS SQL Server, running on Windows Server 2005
FrontEnd: HTML, CSS, JavaScript (JQuery)
Any help will be greatly appreciated.
EDIT
There will be only one/two/three users per site where this system will be launched. Each site will not be connected in any way. The system can have about 10 to 100 sessions per month.
Using a database is probably the way to go. Just create a simple table that tracks actions by session ID. Don't index anything, as you want inserting rows to be a low-cost operation (you can create a temp table, add indexes, and run reports on it later).
XML files could also work -- you'd want to write a separate file for each session ID -- but doing analysis will probably be much more straightforward if you can leverage your database's feature set.
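A minimal sketch of the simple table approach described above (table and column names are illustrative; PDO with the sqlsrv driver is an assumption about the setup):

<?php
// Sketch: one row per user action, keyed by the PHP session id.
// No indexes, per the advice above, so inserts stay cheap.
$pdo = new PDO('sqlsrv:Server=localhost;Database=wizard', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->exec('CREATE TABLE wizard_actions (
    session_id VARCHAR(64)  NOT NULL,
    step       VARCHAR(100) NOT NULL,
    action     VARCHAR(100) NOT NULL,
    details    VARCHAR(MAX) NULL,
    created_at DATETIME     NOT NULL DEFAULT GETDATE()
)');

// Assumes session_start() has already been called.
function logAction(PDO $pdo, string $step, string $action, array $details = []): void
{
    $stmt = $pdo->prepare(
        'INSERT INTO wizard_actions (session_id, step, action, details)
         VALUES (?, ?, ?, ?)'
    );
    $stmt->execute([session_id(), $step, $action, json_encode($details)]);
}

// e.g. logAction($pdo, 'step3', 'ran_query', ['report' => 'monthly_totals']);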
If you're talking about a large number of users doing their operations simultaneously, and you want to trace their steps, I think it's better to go for a database-oriented approach. The database server can optimize data flow and disk writes, leading to better concurrent performance than constantly writing files to disk. You really should stress-test the system, whichever you choose, to make sure performance does not suffer under a big load.
I've arrived at the point where I realise that I must start versioning my database schemata and changes. I consequently read the existing posts on SO about that topic but I'm not sure how to proceed.
I'm basically a one-man company and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise). I work on PHP/MySQL projects.
What's an efficient and sufficient (no overkill) way to version my database schemata?
I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep the schema versions in step with my code revisions.
[edit] Momentary solution: for the moment I have decided to just make a schema dump, plus one with the necessary initial data, whenever I commit a tag (stable version). That seems to be enough for me at the current stage.[/edit]
[edit2] Plus, I'm now also using a third file called increments.sql where I put all the changes with dates, etc., to make it easy to trace the change history in one file. From time to time I integrate the changes into the two other files and empty increments.sql.[/edit2]
Simple way for a small company: dump your database to SQL and add it to your repository. Then, every time you change something, add the change to the dump file.
You can then use diff to see changes between versions, not to mention have comments explaining your changes. This will also make you virtually immune to MySQL upgrades.
The one downside I've seen to this is that you have to remember to manually add the SQL to your dumpfile. You can train yourself to always remember, but be careful if you work with others. Missing an update could be a pain later on.
This could be mitigated by creating some elaborate script to do it for you when submitting to subversion but it's a bit much for a one man show.
Edit: In the year that's gone by since this answer, I've had to implement a versioning scheme for MySQL for a small team. Manually adding each change was seen as a cumbersome solution, much like it was mentioned in the comments, so we went with dumping the database and adding that file to version control.
What we found was that test data was ending up in the dump and was making it quite difficult to figure out what had changed. This could be solved by dumping the schema only, but this was impossible for our projects since our applications depended on certain data in the database to function. Eventually we returned to manually adding changes to the database dump.
Not only was this the simplest solution, but it also solved certain issues that some versions of MySQL have with exporting/importing. Normally we would have to dump the development database, remove any test data, log entries, etc, remove/change certain names where applicable and only then be able to create the production database. By manually adding changes we could control exactly what would end up in production, a little at a time, so that in the end everything was ready and moving to the production environment was as painless as possible.
How about versioning a file generated by doing this:
mysqldump --no-data database > database.sql
Where I work, we have an install script for each new version of the app containing the SQL we need to run for the upgrade. This works well enough for 6 devs, with some branching for maintenance releases. We're considering moving to Auto Patch (http://autopatch.sourceforge.net/), which handles working out which patches to apply to any database you are upgrading. It looks like there may be some small complication handling branching with Auto Patch, but it doesn't sound like that'll be an issue for you.
I'd guess a batch file like this should do the job (didn't try it though)...
mysqldump --no-data -ufoo -pbar dbname > path/to/app/schema.sql
svn commit path/to/app/schema.sql
Just run the batch file after changing the schema, or let a cron/scheduler do it (but I don't know... I think commits happen even if just the timestamps changed and the contents are the same; I don't know if that would be a problem).
The main idea is to have a folder with this structure in your project base path:
/__DB
    /changesets
        /1123
    /data
    /tables
Now, how the whole thing works: you have 3 folders.
Tables
Holds the table create query. I recommend using the naming “table_name.sql”.
Data
Holds the table's insert data queries. I recommend using the same naming, "table_name.sql".
Note: Not all tables need a data file, you would only add the ones that need this initial data on project install.
Changesets
This is the main folder you will work with.
This holds the changesets made to the initial structure; it actually contains one folder per changeset.
For example, I added a folder 1123 which will contain the modifications made in revision 1123 (the number comes from your source control) and may contain one or more SQL files.
I like to group them by table, with the naming xx_tablename.sql, where xx is a number that gives the order in which they need to be run, since sometimes you need the modifications run in a certain order.
Note:
When you modify a table, you also add those modifications to the table and data files, since those are the files that will be used to do a fresh install.
This is the main idea.
For more details you can check this blog post.
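A sketch of how a small installer/updater could consume that layout (the folder names follow the structure above; everything else, including how the current revision is tracked, is illustrative):

<?php
// Illustrative runner for the /__DB layout described above.
// Fresh install: run every tables/*.sql then data/*.sql.
// Update: run the changeset folders newer than the installed revision, in order.
$pdo = new PDO('mysql:host=localhost;dbname=myapp', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

function runSqlFiles(PDO $pdo, string $dir): void
{
    $files = glob($dir . '/*.sql');
    sort($files); // respects the xx_tablename.sql ordering prefix
    foreach ($files as $file) {
        echo "Running $file\n";
        $pdo->exec(file_get_contents($file));
    }
}

$base = __DIR__ . '/__DB';
$currentRevision = 1100; // assumed to be stored somewhere by the installer

// Apply every changeset folder newer than the installed revision, in order.
$changesets = array_filter(
    array_map('basename', glob($base . '/changesets/*', GLOB_ONLYDIR)),
    fn ($rev) => (int) $rev > $currentRevision
);
sort($changesets, SORT_NUMERIC);

foreach ($changesets as $rev) {
    runSqlFiles($pdo, $base . '/changesets/' . $rev);
}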
Take a look at SchemaSync. It will generate the patch and revert scripts (.sql files) needed to migrate and version your database schema over time. It's a command line utility for MySQL that is language and framework independent.
Some months ago I searched for a tool for versioning MySQL schemas. I found many useful tools, like Doctrine migrations, RoR migrations, and some tools written in Java and Python.
But none of them satisfied my requirements.
My requirements:
No requirements except PHP and MySQL
No schema configuration files, like schema.yml in Doctrine
Able to read the current schema from a connection and create a new migration script that reproduces the identical schema in other installations of the application.
I started writing my own migration tool, and today I have a beta version.
Please try it if you have an interest in this topic.
Please send me feature requests and bug reports.
Source code: bitbucket.org/idler/mmp/src
Overview in English: bitbucket.org/idler/mmp/wiki/Home
Overview in Russian: antonoff.info/development/mysql-migration-with-php-project
Our solution is MySQL Workbench. We regularly reverse-engineer the existing Database into a Model with the appropriate version number. It is then possible to easily perform Diffs between versions as needed. Plus, we get nice EER Diagrams, etc.
At our company we did it this way:
We put all tables / DB objects in their own files, like tbl_Foo.sql. The files contain several "parts" that are delimited with
-- part: create
where create is just a descriptive identifier for a given part. The file looks like:
-- part: create
IF not exists ...
CREATE TABLE tbl_Foo ...
-- part: addtimestamp
IF not exists ...
BEGIN
ALTER TABLE ...
END
Then we have an XML file that references every single part that we want executed when we update the database to a new schema.
It looks pretty much like this:
<playlist>
<classes>
<class name="table" desc="Table creation" />
<class name="schema" desc="Table optimization" />
</classes>
<dbschema>
<steps db="a_database">
<step file="tbl_Foo.sql" part="create" class="table" />
<step file="tbl_Bar.sql" part="create" class="table" />
</steps>
<steps db="a_database">
<step file="tbl_Foo.sql" part="addtimestamp" class="schema" />
</steps>
</dbschema>
</playlist>
The <classes/> part is for the GUI, and <dbschema/> with <steps/> is used to partition changes. The <step/>s are executed sequentially. We have some other entities, like sqlclr, to do different things such as deploying binary files, but that's pretty much it.
Of course we have a component that takes that playlist file and a resource/filesystem object, cross-references the playlist, takes out the wanted parts and then runs them as admin on the database.
Since the "parts" in the .sql files are written so they can be executed on any version of the DB, we can run all parts on every previous/older version of the DB and bring it up to the current one.
Of course there are some cases where SQL Server parses column names "early", and we later have to modify those parts to become exec_sqls, but it doesn't happen often.
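A rough sketch of what such a playlist runner could look like (in PHP for illustration; the actual component is not shown in the answer, so everything here beyond the file formats described above is an assumption):

<?php
// Illustrative playlist runner: reads the XML above, extracts the named
// "-- part:" sections from each .sql file and executes them in order.
$pdo = new PDO('sqlsrv:Server=localhost;Database=a_database', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

function extractPart(string $sqlFile, string $partName): ?string
{
    // Split the file on "-- part: <name>" markers and return the wanted section.
    $chunks = preg_split('/^--\s*part:\s*(\w+)\s*$/m',
                         file_get_contents($sqlFile), -1,
                         PREG_SPLIT_DELIM_CAPTURE | PREG_SPLIT_NO_EMPTY);
    for ($i = 0; $i < count($chunks) - 1; $i++) {
        if (trim($chunks[$i]) === $partName) {
            return $chunks[$i + 1];
        }
    }
    return null;
}

$playlist = simplexml_load_file('playlist.xml');
foreach ($playlist->dbschema->steps as $steps) {
    foreach ($steps->step as $step) {
        $sql = extractPart((string) $step['file'], (string) $step['part']);
        if ($sql === null) {
            throw new RuntimeException("Missing part {$step['part']} in {$step['file']}");
        }
        echo "Executing {$step['file']} [{$step['part']}]\n";
        $pdo->exec($sql);
    }
}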
I think this question deserves a modern answer, so I'm going to give it myself. When I wrote the question in 2009, I don't think Phinx existed yet, and Laravel most definitely didn't.
Today, the answer to this question is very clear: write incremental DB migration scripts, each with an up and a down method, and run all of these scripts (or a delta of them) when installing or updating your app. And obviously add the migration scripts to your VCS.
As mentioned at the beginning, there are excellent tools in the PHP world today which help you manage your migrations easily. Laravel has DB migrations built in, including the respective shell commands. For everyone else there is a similarly powerful, framework-agnostic solution in Phinx.
Both Artisan migrations (Laravel) and Phinx work the same way: for every change in the DB, create a new migration, use plain SQL or the built-in query builder to write the up and down methods, and run artisan migrate resp. phinx migrate in the console.
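A minimal Phinx migration sketch (the users table and user_type column are just for illustration):

<?php
// Example Phinx migration: adds a user_type column so different partner
// levels can see different prices. Created with `phinx create AddUserType`.
use Phinx\Migration\AbstractMigration;

class AddUserType extends AbstractMigration
{
    public function up(): void
    {
        $this->table('users')
             ->addColumn('user_type', 'string', [
                 'limit'   => 20,
                 'default' => 'regular',
             ])
             ->update();
    }

    public function down(): void
    {
        $this->table('users')
             ->removeColumn('user_type')
             ->update();
    }
}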
I do something similar to Manos, except I have a 'master' file (master.sql) that I update with some regularity (once every 2 months). Then, for each change, I build a version-named .sql file with the changes. This way I can start off with master.sql and add each version-named .sql file until I get up to the current version, and I can update clients using the version-named .sql files to keep things simple.
Related to my previous question:
PHP and Databases: Views, Functions and Stored Procedures performance
Just to ask a more specific question regarding large SELECT queries.
When would it be more convenient to use a View instead of writing the SELECT query in the code and calling it:
$connector->query($sql)->fetchAll();
What are the factors to take into account when deciding whether it's best to use a view or just leave it as it is? Say, if you join several tables, select a certain amount of data, etc.
I'm asking in the context of a big web app (with PHP & Postgres), and looking for performance and optimization.
One thing to take into account when you use PHP source code + views (instead of only PHP source code) is that you now have two kinds of sources to modify when you update your application:
you must put the new PHP sources on the server
and you must update the views
And you sometimes must do both at exactly the same time if you don't want your application to crash... Or you have to program with the assumption that the application must run OK with an outdated / more recent version of the views (for a couple of seconds).
Something else you might have to consider is versioning. Versioning PHP scripts is easy: just use SVN and it's all right, as they are text files.
With views, to get the same kind of versioning, you have to work in text files (committed to SVN before you update them on the production DB server) and keep those in sync with the DB server. That seems easy, but it's not when you have to push an emergency patch to production ^^
Personally, I generally use views / stored procedures only when it really makes a difference: for instance, if a calculation would require thousands of SQL queries (and, so, thousands of calls from PHP, each waiting for its response, and so on) or too many data exchanges between the two servers, using a stored procedure can really be great!
(I've never used Postgres, but the idea is the same with other products.)
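For illustration, the two options discussed here look like this from the PHP side (the table, view and column names are made up; the view itself is created once in the database and then queried like a table):

<?php
// Option 1: the full SELECT lives in the PHP code.
$sql = 'SELECT o.id, o.total, c.name
          FROM orders o
          JOIN customers c ON c.id = o.customer_id
         WHERE o.created_at > NOW() - INTERVAL \'30 days\'';
$rows = $connector->query($sql)->fetchAll();

// Option 2: the SELECT lives in the database as a view (created once,
// e.g. in a migration), and the PHP code only references the view name.
// CREATE VIEW recent_orders AS SELECT ... ;  -- versioned like any other schema change
$rows = $connector->query('SELECT * FROM recent_orders')->fetchAll();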