A very flowery title indeed.
I have a PHP web application that takes the form of a web-based wizard. A user can run through the wizard, select options, run processes (DB queries), etc. They can go backwards and forwards and run processes again and again.
I am trying to work out how to best save the state of what users do/did, what process they ran etc. So basically a glorified log that I can pull up later.
How do I save these states or sessions? One option which is being considered by my colleague is using an XML file for each session and to save everything there. My idea is to use a database table to do this.
There are pros and cons for each, and I was hoping to get answers on which option to go for. Suggestions of other feasible options would be great! Or what kind of questions should I ask myself to choose the right implementation?
Technologies Currently Used
Backend: PHP and MS SQL Server, running on Windows Server 2005
FrontEnd: HTML, CSS, JavaScript (JQuery)
Any help will be greatly appreciated.
EDIT
There will be only one to three users per site where this system is launched. The sites will not be connected to each other in any way. The system will see about 10 to 100 sessions per month.
Using a database is probably the way to go. Just create a simple table that tracks actions by session ID. Don't index anything, as you want inserting rows to be a low-cost operation (you can create a temp table, add indexes, and run reports on it later).
XML files could also work -- you'd want to write a separate file for each sessionid -- but doing analysis will probably be much more straightforward if you can leverage your database's featureset.
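For illustration, here is a minimal sketch of such a table and the logging call. The table, column and connection details are placeholders rather than a prescribed schema, and it assumes the PDO sqlsrv driver is available:

```php
<?php
// Hypothetical wizard-action log table (no indexes, so inserts stay cheap):
//
//   CREATE TABLE wizard_action_log (
//       session_id VARCHAR(64)   NOT NULL,
//       action     VARCHAR(255)  NOT NULL,
//       details    NVARCHAR(MAX) NULL,
//       created_at DATETIME      NOT NULL DEFAULT GETDATE()
//   );

// Connection details are placeholders; this assumes the PDO sqlsrv driver.
$db = new PDO('sqlsrv:Server=localhost;Database=wizard', 'user', 'pass');

function log_action(PDO $db, $sessionId, $action, array $details = [])
{
    $stmt = $db->prepare(
        'INSERT INTO wizard_action_log (session_id, action, details, created_at)
         VALUES (?, ?, ?, GETDATE())'
    );
    $stmt->execute([$sessionId, $action, json_encode($details)]);
}

// Example: record that the user ran a process on step 3 of the wizard.
log_action($db, session_id(), 'run_process', ['step' => 3, 'query' => 'refresh_totals']);
```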
If you're talking about a large number of users doing their operations simultaneously and you want to trace their steps, I think it's better to go for a database-oriented approach. The database server can optimize data flow and disk writes, leading to better concurrent performance than constantly writing files to disk. You really should stress-test the system, whichever you choose, to make sure performance does not suffer under a big load.
Related
I want to build a detailed logger for my application. Because it can get very complex and has to save a lot of different things, I wonder where it is best to save it: in a database (and if a database, which kind is better for this sort of operation) or in a file (and if a file, what format: text, CSV, JSON, XML)? My first thought was of course a file, because with a database I see a lot of problems, but I also want to be able to display those logs, and that is easier with a database.
I am building a log for HIPAA compliance and here is my rough implementation (not finished yet).
File vs. DB
I use a database table to store the last 3 months of data. Every night a cron will run and push the older data (data past 3 months) off into compressed files. I haven't written this script yet but it should not be difficult. That way the last 3 months can be searched, filtered, etc. But the database won't be overwhelmed with log entries.
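A rough sketch of what that nightly job might look like (the table name, connection details and archive path are placeholders):

```php
<?php
// Rough sketch of the nightly archive job (run from cron). Table name,
// connection details and archive path are placeholders.
$db = new PDO('sqlsrv:Server=localhost;Database=emr', 'user', 'pass');

$cutoff = date('Y-m-d', strtotime('-3 months'));

// Pull everything older than 3 months.
$stmt = $db->prepare('SELECT * FROM audit_log WHERE log_date < ?');
$stmt->execute([$cutoff]);
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

if ($rows) {
    // Write the old rows to a gzip-compressed file, one JSON object per line.
    $gz = gzopen('/archives/audit_log_' . date('Y-m-d') . '.json.gz', 'w9');
    foreach ($rows as $row) {
        gzwrite($gz, json_encode($row) . "\n");
    }
    gzclose($gz);

    // Only then remove them from the live table.
    $db->prepare('DELETE FROM audit_log WHERE log_date < ?')->execute([$cutoff]);
}
```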
Database Preference
I am using MSSQL because I don't have a choice. I usually prefer MySQL, though, as it has better paging optimization. If you are doing more than a very minimal amount of searching and filtering, or if you are concerned about performance, you may want to consider an Apache Solr middleman. I'm not a DB expert so I can't give you much more than that.
Table Structure
My table has 5 columns: Date, Operation (create, update, delete), Object (patient, appointment, doctor), ObjectID, and Diff (a serialized array of before-and-after values; only changed values, with no empty or unchanged values, for the sake of saving space).
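As an illustration, the Diff value might be built something like this (a sketch of the idea, not the exact code I use):

```php
<?php
// Sketch: build the Diff value from before/after snapshots of a record,
// keeping only the fields that actually changed.
function build_diff(array $before, array $after)
{
    $diff = [];
    foreach ($after as $field => $newValue) {
        $oldValue = isset($before[$field]) ? $before[$field] : null;
        if ($oldValue !== $newValue) {
            $diff[$field] = ['before' => $oldValue, 'after' => $newValue];
        }
    }
    return serialize($diff); // stored in the Diff column
}

// Example: only the changed field ends up in the log row.
$before = ['name' => 'John Smith', 'phone' => '555-0100'];
$after  = ['name' => 'John Smith', 'phone' => '555-0199'];
$diffValue = build_diff($before, $after);
```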
Summary
The most important question to consider is: do you need people to be able to access and filter/search the data regularly? If yes, consider a database for the recent history or the most important data.
If not, a file is probably the better option.
My hybrid solution is also worth considering. I'll be pushing the files off to an Amazon file server so they don't take up my web server's space.
You can build a detailed and complex logger using an existing library like log4php, because it is fully tested and performs well compared to a custom design of your own, and it will also save development time. I have personally used a few libraries from PHP and .NET for complex logging needs in some financial and medical domain projects.
If you need to do this from PHP, I would suggest using this:
https://logging.apache.org/log4php/
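A minimal usage sketch, roughly along these lines (the include path, configuration file and logger name are just examples):

```php
<?php
// Minimal log4php sketch; the include path, config file and logger name
// are just examples, adjust them to your setup.
require_once 'log4php/Logger.php';

Logger::configure('config/log4php.xml'); // appenders, layouts and levels live here

$log = Logger::getLogger('wizard');
$log->info('User started the wizard');
$log->warn('Query took longer than expected');
$log->error('Process failed');
```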
I think the right answer is actually: Neither.
Neither a file nor a DB gives you proper search and filtering, and you need that when looking at logs. I deal with logs all day long (see http://sematext.com/logsene to see why), and I'd tackle this as follows:
log to file
use a lightweight log shipper (e.g. Logagent or Filebeat)
index logs into either your own Elasticsearch cluster (if you don't mind managing and learning) or one of the Cloud log management services (if you don't want to deal with Elasticsearch management, scaling, etc. -- Logsene, Loggly, Logentries...)
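For the first step ("log to file"), writing one JSON object per line keeps the file easy for a shipper to tail and forward; here is a sketch (the log path and field names are just examples):

```php
<?php
// Sketch: append one JSON object per line to a log file that a shipper
// (Logagent, Filebeat, ...) can tail and forward. Path and fields are examples.
function log_event($level, $message, array $context = [])
{
    $entry = [
        '@timestamp' => date('c'),
        'level'      => $level,
        'message'    => $message,
        'context'    => $context,
    ];
    // FILE_APPEND + LOCK_EX so concurrent requests don't interleave lines.
    file_put_contents(
        '/var/log/myapp/app.log',
        json_encode($entry) . "\n",
        FILE_APPEND | LOCK_EX
    );
}

log_event('info', 'wizard step completed', ['step' => 2, 'user' => 42]);
```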
I've just finished a basic PHP file that lets indie game / application developers store user data, handle user logins, self-deleting variables, etc. It all revolves around storage.
I've made systems like this before, but always hit the max_user_connections issue, which I personally can't currently change, as I use a friend's hosting, and free hosting providers often limit max_user_connections anyway. This time, I've made the system fully text-file based (each file holding JSON structures).
The system works fine currently, as it's being tested by only me and another 4/5 users per second. The PHP script basically opens a text file (based upon query arguments), uses json_decode to convert the contents into the relevant PHP structures, then alters and writes back to the file. Again, this works fine at the moment, as there are few users using the system - but I believe if two users attempted to alter a single file at the same time, the person who writes to it last will overwrite the data that the previous user wrote to it.
Using SQL databases always seemed to handle queries quite slowly - even basic queries. Should I try to implement some form of server-side caching system, or possibly file write stacking system? Or should I just attempt to bump up the max_user_connections, and make it fully SQL based?
Are there limits to the number of users that can READ text files per second?
I know game / application / web developers must create optimized PHP storage solutions all the time, but what are the best practices in dealing with traffic?
It seems most hosting companies set the max_user_connections to a fairly low number to begin with - is there any way to alter this within the PHP file?
Here's the current PHP file, if you wish to view it:
https://www.dropbox.com/s/rr5ua4175w3rhw0/storage.php
And here's a forum topic showing the queries:
http://gmc.yoyogames.com/index.php?showtopic=623357
I did plan to release the PHP file, so developers could host it on their own site, but I would like to make it work as well as possible, before doing this.
Many thanks for any help provided.
Dan.
I strongly suggest you not re-invent the wheel. There are many options available for persistent storage. If you don't want to use SQL consider trying out any of the popular "NoSQL" options like MongoDB, Redis, CouchDB, etc. Many smart people have spent many hours solving the problems you are mentioning already, and they are hard at work improving and supporting their software.
Scaling a MySQL database service is outside the scope of this answer, but if you want to throttle up what your database service can handle you need to move out of a shared hosting environment in any case.
"but I believe if two users attempted to alter a single file at the same time, the person who writes to it last will overwrite the data that the previous user wrote to it."
- that is for sure. It even throws an error if the 2nd tries to save while the first has it open.
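If you stay with files, doing the read-modify-write under an exclusive flock() is the usual way to avoid that lost update; here is a rough sketch (the file path and data layout are just examples):

```php
<?php
// Sketch: update a JSON storage file under an exclusive lock so two
// concurrent requests can't overwrite each other's changes.
// The file path and data layout are examples only.
function update_json_file($path, callable $modify)
{
    $fp = fopen($path, 'c+'); // create if missing, don't truncate
    if (!$fp || !flock($fp, LOCK_EX)) {
        throw new RuntimeException("Could not lock $path");
    }

    $data = json_decode(stream_get_contents($fp), true) ?: [];

    $data = $modify($data);   // apply the caller's changes

    ftruncate($fp, 0);        // rewrite the whole file
    rewind($fp);
    fwrite($fp, json_encode($data));
    fflush($fp);

    flock($fp, LOCK_UN);
    fclose($fp);
}

// Example: increment a per-user counter.
update_json_file('data/user_42.json', function ($data) {
    $data['logins'] = isset($data['logins']) ? $data['logins'] + 1 : 1;
    return $data;
});
```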
"Are there limits to the number of users that can READ text files per second?"
- no, but it is pointless to open a file just for reading multiple times. That file needs to be cached, e.g. in a content delivery network.
"I know game / application / web developers must create optimized PHP storage solutions all the time, but what are the best practices in dealing with traffic?"
- usually a database will do a better job than files, starting from the fact that the most frequently selected data is kept in RAM, while the most frequently used .txt files are not. As @oliakaoil said, read about the DB differences and see what you need.
I'm hoping to develop a LAMP application that will centre around a small table, probably less than 100 rows, maybe 5 fields per row. This table will need to have the data stored within accessed rapidly, maybe up to once a second per user (though this is the 'ideal', in practice, this could probably drop slightly). There will be a number of updates made to this table, but SELECTs will far outstrip UPDATES.
Available hardware isn't massively powerful (it'll be launched on a VPS with perhaps 512 MB RAM) and it needs to be scalable: there may only be 10 concurrent users at launch, but this could rise to the thousands (and, as we all hope with these things, maybe 10,000s, but at that level there will be more powerful hardware available).
As such, I was wondering if anyone could point me in the right direction for a starting point. All the data retrieved will be the same for all users, so I'm trying to investigate whether there is any way of sharing this data across all users, rather than performing 10,000 identical selects a second. Soooo:
1) Would the mysql_query_cache cache these results and allow access to the data, WITHOUT requiring a re-select for each user?
2) (Apologies for how broad this question is; I'd appreciate even the briefest of responses greatly!) I've been looking into the APC cache, as we already use it as an opcode cache. Is there a method of caching the data in the APC cache, doing just one MySQL select per second to update this cache, and then accessing APC for each user? Or perhaps an alternative cache?
Failing all of this, I may look into having a separate script which handles the queries and outputs the data, and somehow just piping this one script's data to all users. This isn't a fully formed thought and I'm not sure of the implementation, but perhaps a combo of AJAX to pull the outputted data from... "Somewhere"... :)
Once again, apologies for the breadth of these question - a couple of brief pointers from anyone would be very, very greatly appreciated.
Thanks again in advance
If you're doing something like an AJAX chat which polls the server constantly, you may want to look at node.js instead, which keeps an open connection between server and browser. This way, you can have changes pushed to the user when they happen and you won't need to do all that redundant checking once per second. This can scale very well to thousands of users and is written in javascript on the server-side, so not too difficult.
The problem with using the MySQL cache is that the entire table cache gets invalidated on any write to that table. You're better off using a caching solution like memcached or APC if you're trying to control that behavior more precisely. And yes, APC would be able to cache that information.
One other thing to keep in mind is that you need to know when to invalidate the cache as well, so you don't have stale data.
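For example, with APCu (the older APC extension uses apc_fetch/apc_store instead) the caching could look roughly like this; the key name, table and one-second TTL are arbitrary:

```php
<?php
// Sketch: serve the small shared table from APCu and only hit MySQL when the
// cache entry is missing or expired. Key name, table and TTL are arbitrary.
function get_shared_rows(PDO $db)
{
    $rows = apcu_fetch('shared_table_rows', $hit);
    if ($hit) {
        return $rows;
    }

    $rows = $db->query('SELECT * FROM shared_table')->fetchAll(PDO::FETCH_ASSOC);

    // Cache for 1 second: at most one SELECT per second, however many users.
    apcu_store('shared_table_rows', $rows, 1);

    return $rows;
}
```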
You can use APC, XCache or Memcached for database query caching, or you can use Varnish or Squid for gateway caching...
My company has developed a web application using PHP + MySQL. The system can display a product's original price and a discount price to the user. If you haven't logged in, you get the original price; if you have logged in, you get the discount price. It is pretty easy to understand.
But my company wants more features in the system: it wants to display different prices based on the user. For example, user A is a golden partner, so he gets 50% off. User B is a silver partner and only gets 30% off. This logic was not prepared for in the original system, so I need to add some attributes to the database, at least a user type in this example. Is there any recommendation on how to migrate the current database to my new version of the database? Also, all the data should be preserved, and the server should work 24/7 (without stopping the database).
Is it possible to do so? Also, any recommendations for future maintenance would be appreciated. Thank you.
I would recommend writing a tool to run SQL queries to your databases incrementally. Much like Rails migrations.
In the system I am currently working on, we have such a tool written in Python. We name our scripts something like 000000_somename.sql, where the 0s are the revision number in our SCM (Subversion), and the tool is run as part of development/testing and finally deployment to production.
This has the benefit of being able to go back in time in terms of database changes, much like in code (if you use a source code version control tool) too.
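A stripped-down PHP version of such a runner might look like this (the table name, directory and connection details are made up for the example):

```php
<?php
// Sketch of an incremental migration runner: applies every migrations/*.sql
// file that hasn't been recorded yet, in filename (= revision) order.
// Table name, directory and connection details are made up for this example,
// and each migration file is assumed to contain a single statement.
$db = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$db->exec('CREATE TABLE IF NOT EXISTS schema_migrations (
               revision   VARCHAR(255) PRIMARY KEY,
               applied_at DATETIME NOT NULL
           )');

$applied = $db->query('SELECT revision FROM schema_migrations')
              ->fetchAll(PDO::FETCH_COLUMN);

$files = glob('migrations/*.sql');
sort($files); // 000001_..., 000002_..., so order follows the revision numbers

foreach ($files as $file) {
    $revision = basename($file, '.sql');
    if (in_array($revision, $applied)) {
        continue; // already run against this database
    }
    $db->exec(file_get_contents($file));
    $db->prepare('INSERT INTO schema_migrations (revision, applied_at) VALUES (?, NOW())')
       ->execute([$revision]);
    echo "Applied $revision\n";
}
```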
http://dev.mysql.com/doc/refman/5.1/en/alter-table.html
Here are more concrete examples of ALTER TABLE.
http://php.about.com/od/learnmysql/p/alter_table.htm
You can add the necessary columns to your table with ALTER TABLE, then set the user type for each user with UPDATE. Then deploy the new version of your app that uses the new column.
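For example (the column names, IDs and discount values are purely illustrative):

```php
<?php
// Sketch: add a partner level to the existing users table and backfill it.
// Column names, IDs and discount values are purely illustrative.
$db = new PDO('mysql:host=localhost;dbname=shop', 'user', 'pass');

// 1) Add the new column with a safe default so existing code keeps working.
$db->exec("ALTER TABLE users
           ADD COLUMN partner_level VARCHAR(20) NOT NULL DEFAULT 'standard'");

// 2) Backfill the known partners.
$db->prepare("UPDATE users SET partner_level = 'golden' WHERE id = ?")->execute([101]);
$db->prepare("UPDATE users SET partner_level = 'silver' WHERE id = ?")->execute([102]);

// 3) The new application code can then map the level to a discount.
$discounts = ['golden' => 0.50, 'silver' => 0.30, 'standard' => 0.00];
```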
Did you use an ORM for the data access layer? I know Doctrine comes with a migration API which allows switching versions up and down (in case something goes wrong with the new version).
Outside of any framework or ORM considerations, a fast script will minimize the slowdown (or downtime if the process is too long).
In my opinion, I'd rather have a 30-second website access interruption with an information page than a shorter interruption with visible bugs or no display at all. If interruption time matters, it's best to do this at night or when there is less traffic.
This can all be done in one script (or at least launched from one command line). When we had to do such scripts, we included the following in a shell script:
put the application in standby (temporary static page): you can use an .htaccess redirect or whatever is applicable to your app/server environment.
svn update (or switch) to upgrade the source code and assets
empty caches, clean up temp files, etc.
rebuild generated classes (Symfony specific)
upgrade the DB structure with ALTER / CREATE TABLE queries
if needed, migrate data from the old structure to the new: depending on what you changed in the structure, it may require fetching data before altering the DB structure, or using temp tables.
if all went well, remove the temporary page. Upgrade done.
if something went wrong, display a red message so the operator can see what happened, fix it, and then remove the waiting page by hand.
The script should do checks at each step and stop at the first error, and it should be verbose (but concise) about what it does at every step, so you can fix the app faster if something goes wrong.
The best would be a recoverable script (error at step 2, stop the process, fix manually, resume at step 3), but I never took the time to implement it that way.
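For illustration, the "stop at the first error" part can be as simple as a small runner like this (the commands are placeholders for the steps above):

```php
<?php
// Sketch: run the upgrade steps in order, stop at the first failure, and be
// verbose about each step. The commands are placeholders for the steps above.
$steps = [
    'enable maintenance page'  => 'cp maintenance.htaccess .htaccess',
    'update source code'       => 'svn update',
    'clear caches'             => 'rm -rf cache/*',
    'upgrade DB structure'     => 'php scripts/migrate.php',
    'disable maintenance page' => 'cp normal.htaccess .htaccess',
];

foreach ($steps as $name => $command) {
    echo "==> $name: $command\n";
    passthru($command, $exitCode);
    if ($exitCode !== 0) {
        // Leave the maintenance page up so the operator can investigate.
        fwrite(STDERR, "FAILED at step '$name' (exit code $exitCode). Aborting.\n");
        exit(1);
    }
}
echo "Upgrade done.\n";
```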
It works pretty well, but this kind of script has to be tested intensively, on an environment as close as possible to the production one.
In general we develop such scripts locally and test them on the same platform as the production environment (just with different paths and DB).
If the waiting page is not an option, you can go without it, but you need to ensure data and user session integrity. As an example, use LOCK on tables during the upgrade/data transfer and use exclusive locks on modified files (SVN does, I think).
There could be better solutions, but this is basically what I use and it does the job for us. The major drawback is that this kind of script has to be rewritten at each major release, which pushes me to search for other options, but which one? I would be glad if someone here had a better and simpler alternative.
First of all, the website I run is hosted and I don't have access to be able to install anything interesting like memcached.
I have several web pages displaying HTML tables. The data for these HTML tables are generated using expensive and complex MySQL queries. I've optimized the queries as far as I can, and put indexes in place to improve performance. The problem is if I have high traffic to my site the MySQL server gets hammered, and struggles.
Interestingly - the data within the MySQL tables doesn't change very often. In fact it changes only after a certain 'event' that takes place every few weeks.
So what I have done now is this:
Save the HTML table once generated to a file
When the URL is accessed, check whether the saved file exists
If the file is older than 1 hour, run the query and save a new file; if not, output the file
This ensures that for the vast majority of requests the page loads very fast, and the data can at most be 1hr old. For my purpose this isn't too bad.
What I would really like is to guarantee that if any data changes in the database, the cache file is deleted. This could be done by finding all scripts that do any change queries on the table and adding code to remove the cache file, but it's flimsy as all future changes need to also take care of this mechanism.
Is there an elegant way to do this?
I don't have anything but vanilla PHP and MySQL (recent versions) - I'd like to play with memcached, but I can't.
Ok - serious answer.
If you have any sort of database abstraction layer (hopefully you will), you could maintain a field in the database for the last time anything was updated, and manage that from a single point in your abstraction layer.
e.g. (pseudocode): On any update set last_updated.value = Time.now()
Then compare this to the time of the cached file at runtime to see if you need to re-query.
If you don't have an abstraction layer, create a wrapper function to any SQL update call that does this, and always use the wrapper function for any future functionality.
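In PHP that might look roughly like this (the table, column and file names are placeholders, and build_report_table() stands in for your existing query and rendering code):

```php
<?php
// Sketch: keep a single last_updated marker and compare it to the cache
// file's mtime. Table, column and file names are placeholders, and
// build_report_table() stands in for your existing query + HTML code.
$db        = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');
$cacheFile = 'cache/report_table.html';

// Wrapper used for every write query; it bumps the marker as a side effect.
function run_update(PDO $db, $sql, array $params = [])
{
    $db->prepare($sql)->execute($params);
    $db->exec('UPDATE cache_meta SET last_updated = NOW() WHERE id = 1');
}

// On page load: regenerate only if the data changed after the cache was written.
$lastUpdated = strtotime(
    $db->query('SELECT last_updated FROM cache_meta WHERE id = 1')->fetchColumn()
);

if (!file_exists($cacheFile) || filemtime($cacheFile) < $lastUpdated) {
    $html = build_report_table($db); // expensive queries + HTML generation
    file_put_contents($cacheFile, $html, LOCK_EX);
} else {
    $html = file_get_contents($cacheFile);
}
echo $html;
```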
"There are only two hard things in Computer Science: cache invalidation and naming things." —Phil Karlton
Sorry, doesn't help much, but it is sooooo true.
You have most of the ends covered, but a last_modified field and cron job might help.
There's no way of deleting files from MySQL itself; Postgres would give you that facility, but MySQL can't.
You can cache your output to a string using PHP's output buffering functions. Google it and you'll find a nice collection of websites explaining how this is done.
I'm wondering, however: how do you know that the data expires after an hour? Or are you assuming the data won't change dramatically enough in 60 minutes to warrant constant page generation?
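In case it helps, here is a bare-bones sketch of that output-buffering approach (the cache path and lifetime are just examples):

```php
<?php
// Bare-bones page cache using output buffering. Cache path and lifetime
// are examples; the 1 hour matches the question above.
$cacheFile = 'cache/' . md5($_SERVER['REQUEST_URI']) . '.html';
$maxAge    = 3600;

if (file_exists($cacheFile) && (time() - filemtime($cacheFile)) < $maxAge) {
    readfile($cacheFile);   // serve the cached copy and stop
    exit;
}

ob_start();                 // capture everything echoed below

// ... run the expensive queries and echo the HTML table here ...

file_put_contents($cacheFile, ob_get_contents(), LOCK_EX);
ob_end_flush();             // send the freshly generated page to the browser
```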