Using PHP to separate reads / writes

We have decided to move from a single database to a replicated master-slave architecture, routing all of our reads to the slave and all writes to the master.
We are going down this route because an addition to our product has produced a large increase in database connections, which leads to performance problems with our reporting suite.
We are using MySQL (5.1.55) and the application is developed in PHP.
A couple of general queries on this:
How would you tell the application which DB to read from? Would you do it within the PHP, or use something like mysqlnd_ms or MySQL Proxy?
Where would ajax requests read from? We have a page which allows users to flag a record. This is then saved in the database and users can see which records have been flagged.
Thanks for any advice.
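For illustration, here is a minimal sketch of doing the split inside PHP itself, assuming one master and one slave and the mysqli extension (hostnames, credentials, and the class name are placeholders); mysqlnd_ms or MySQL Proxy achieve the same routing without touching application code:

<?php
// Hypothetical read/write splitter: plain SELECTs go to the slave,
// everything else goes to the master.
class SplitDb
{
    private $master;
    private $slave;

    public function __construct()
    {
        $this->master = new mysqli('master.db.example', 'user', 'pass', 'app');
        $this->slave  = new mysqli('slave.db.example', 'user', 'pass', 'app');
    }

    public function query($sql)
    {
        $isRead = preg_match('/^\s*SELECT\b/i', $sql) === 1;
        return ($isRead ? $this->slave : $this->master)->query($sql);
    }
}

As for the Ajax flagging page: the write goes to the master, but a read fired immediately afterwards may not yet see the new row on the slave because of replication lag, so reads of just-written data are commonly pinned to the master.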

Related

PHP application / game database - SQL vs Text files - Speed / User connections

I've just finished a basic PHP file that lets indie game / application developers store user data, handle user logins, self-deleting variables, etc. It all revolves around storage.
I've made systems like this before, but always hit the max_user_connections issue, which I personally can't change at the moment, as I use a friend's hosting; free hosting providers often limit max_user_connections anyway. This time, I've made the system fully text-file based (each file holding JSON structures).
The system works fine currently, as it's being tested by only me and another 4/5 users per second. The PHP script basically opens a text file (based on query arguments), uses json_decode to convert the contents into the relevant PHP structures, then alters them and writes them back to the file. Again, this works fine at the moment, as there are few users on the system, but I believe that if two users attempted to alter a single file at the same time, whoever writes last would overwrite the data the previous user wrote.
SQL databases always seemed to handle my queries quite slowly, even basic ones. Should I try to implement some form of server-side caching, or possibly a file-write stacking system? Or should I just attempt to bump up max_user_connections and make the system fully SQL-based?
Are there limits to the number of users that can READ text files per second?
I know game / application / web developers must create optimized PHP storage solutions all the time, but what are the best practices in dealing with traffic?
It seems most hosting companies set the max_user_connections to a fairly low number to begin with - is there any way to alter this within the PHP file?
Here's the current PHP file, if you wish to view it:
https://www.dropbox.com/s/rr5ua4175w3rhw0/storage.php
And here's a forum topic showing the queries:
http://gmc.yoyogames.com/index.php?showtopic=623357
I plan to release the PHP file so developers can host it on their own sites, but I would like to make it work as well as possible before doing so.
Many thanks for any help provided.
Dan.
I strongly suggest you not re-invent the wheel. There are many options available for persistent storage. If you don't want to use SQL consider trying out any of the popular "NoSQL" options like MongoDB, Redis, CouchDB, etc. Many smart people have spent many hours solving the problems you are mentioning already, and they are hard at work improving and supporting their software.
Scaling a MySQL database service is outside the scope of this answer, but if you want to throttle up what your database service can handle you need to move out of a shared hosting environment in any case.
"but I believe if two users attempted to alter a single file at the same time, the person who writes to it last will overwrite the data that the previous user wrote to it."
- That is for sure. It can even throw an error if the second user tries to save while the first still has the file open.
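For completeness, here is a minimal sketch of avoiding that lost update with flock() (the filename and the 'coins' field are made up for illustration):

<?php
// Open for read/write, creating the file if needed, without truncating.
$fp = fopen('storage.json', 'c+');
if ($fp === false) {
    die('cannot open storage file');
}
flock($fp, LOCK_EX);                  // block until we hold an exclusive lock
$raw  = stream_get_contents($fp);
$data = $raw ? json_decode($raw, true) : array();

$data['coins'] = (isset($data['coins']) ? $data['coins'] : 0) + 10;  // example mutation

ftruncate($fp, 0);                    // rewrite the file from the start
rewind($fp);
fwrite($fp, json_encode($data));
fflush($fp);
flock($fp, LOCK_UN);                  // release the lock
fclose($fp);

Note that flock() is advisory: it only helps if every script that touches the file uses it.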
"Are there limits to the number of users that can READ text files per second?"
- No, but it is pointless to open a file just to read it multiple times. A file like that should be cached, in memory or behind a content delivery network.
"I know game / application / web developers must create optimized PHP storage solutions all the time, but what are the best practices in dealing with traffic?"
- Usually a database will do a better job than files, starting from the fact that the most frequently selected data is kept in RAM, while the most frequently read .txt files are not. As @oliakaoil said, read about the differences between databases and see what you need.

What are the negative impacts of too many MySQL (Ajax) requests for a PHP chat application?

Hi, I created a simple PHP/MySQL/Ajax chat application and I have a few questions. Before that, let me explain how it works.
If a user is on the chat page, the Ajax script sends a request to a PHP file that fetches the chat history (the latest messages) and returns it as HTML. This request is repeated every second to show the latest messages to the user viewing the page.
So far it's been working great.
Now, my questions and concerns: 1.) What are the cons of using a method like this, if any? 2.) What should I worry about most if it gets a large user base and many people use it simultaneously? (Mostly because it's making a request every second, for each user on it.)
The MySQL table is an InnoDB table, and I'm using only one SELECT statement without a WHERE clause, something like SELECT * FROM table ORDER BY id DESC LIMIT 10 (basically, I'm making MySQL do something very easy, a piece of cake).
3.) Any suggestions are welcome ;)
Thanks very much,
Vikash
Definitely, you will need to look at scalability issues for both the web server and database server. There are technologies such as MySQL clustering for improving performance on the database and web clustering for the HTTP side of things.
With large scale use you may also look at trimming down the table by removing early posts and dumping them to a separate table for low-frequency access. You could also have some method of caching the database requests via some worker threads so the database reads are minimal, but the front-end will have the ability to cope with the high volume of requests.
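One cheap way to shrink the per-second load is to stop re-sending the same 10 rows on every poll: have the client remember the highest message id it has seen and fetch only newer rows. A sketch, with table and column names invented for illustration:

<?php
// Hypothetical poll endpoint: the client passes ?last_id=<highest id seen>.
$lastId = (int) (isset($_GET['last_id']) ? $_GET['last_id'] : 0);

$db   = new mysqli('localhost', 'user', 'pass', 'chat');
$stmt = $db->prepare(
    'SELECT id, user, message FROM messages WHERE id > ? ORDER BY id ASC LIMIT 50'
);
$stmt->bind_param('i', $lastId);
$stmt->execute();

header('Content-Type: application/json');
echo json_encode($stmt->get_result()->fetch_all(MYSQLI_ASSOC));

On most polls this returns an empty array, which is far cheaper to build and transfer than a re-rendered HTML history.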
I got 60 people in phpFreeChat (php/ajax/mysql chat) and it was a complete processor hog. It brought an 8 core server to its knees.

MySQL query to reduce server load

I have a MySQL server that will be queried regularly through a PHP front end. I'm slightly worried about server load, as there will be a fair number of people accessing the webpage, with each session querying the database regularly. The results of the query, and in essence the webpage, will be the same for all users.
Is there a way of querying the database once and outputting the data/results to the webpage that all users connect to and view? Basically, run the query once for all users who visit the webpage, rather than having each user query the database.
Any suggestions appreciated.
Thanks
You don't have to worry; databases are intended for exactly this.
Most sites in the world run exactly the same way: a MySQL server queried regularly through a PHP front end. There's nothing wrong with it.
A well-tuned SQL server and properly designed queries will serve far more traffic than you think. You would need exceptionally high traffic before worrying about such things.
Don't forget that MySQL has its own query cache.
Also, please note that there are no users "connected" to the webpage: they connect, get the page contents, and disconnect.
You should give the server a try. If the server turns out to be overloaded, you can always try Memcached. It can be used via PHP or by MySQL directly, and it will save you from hitting the DB server with similar queries, i.e. the load on the server will decrease drastically.
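As a sketch of that suggestion, the usual pattern is to serve the shared result from Memcached and only hit MySQL when the cached copy has expired (the key name, table name, and the 60-second TTL below are arbitrary choices):

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$rows = $mc->get('shared_page_data');
if ($rows === false) {                          // cache miss: query once, cache it
    $db   = new mysqli('localhost', 'user', 'pass', 'app');
    $rows = $db->query('SELECT * FROM report_data')->fetch_all(MYSQLI_ASSOC);
    $mc->set('shared_page_data', $rows, 60);    // expire after 60 seconds
}
// ...render $rows into the page as usual...

With this in place, at most one query per minute reaches MySQL no matter how many users load the page.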
If the webpage will be the same for all users, why do you even need to have a MySQL backend?
I think the best solution would be to have a standalone script running periodically (e.g. as a cron job) which generates the static HTML for your web pages. That way, there is no need for users to query the database when they are just going to end up with the exact same page anyway.
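A sketch of that cron approach, with the path, table, and column names invented for illustration (e.g. run every five minutes via */5 * * * * php build_page.php):

<?php
// build_page.php: run the shared query once, render, write static HTML.
$db   = new mysqli('localhost', 'user', 'pass', 'app');
$rows = $db->query('SELECT * FROM report_data')->fetch_all(MYSQLI_ASSOC);

ob_start();
foreach ($rows as $row) {
    printf("<li>%s</li>\n", htmlspecialchars($row['name']));
}
file_put_contents('/var/www/html/report.html', '<ul>' . ob_get_clean() . '</ul>');

Web requests then serve report.html as a plain static file and never touch MySQL at all.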
If it's a large query with joins, you could create a view in MySQL over the queried data and query the view instead, updating it if the underlying data changes.

different databases for handling sessions...am I doing the right thing?

I'm looking for some advice on whether or not I should use a separate database to handle my sessions.
We are writing a web app for multiple users to login and check/update their account specific information. We didn't want to use the file storage method on the webserver for storing session information, so we decided to use a database (MySQL). It's working fine, but I'm wondering about performance when this gets into production.
Currently, we have two databases (rst_sessions, and rst). The "RST" database is where all the tables are stored for the webapp...they are all MYSQL InnoDB using Referential Integrity/foreign keys to link the tables. The "RST_SESSIONS" database simply has one table and all the session information gets stored there.
Here's one of my concerns. In the PHP code, if I want to run a query against "RST" I have to select that database inside PHP ( $db->select("RST") ), and when I'm done with the query I have to re-select "RST_SESSIONS" ( $db->select("RST_SESSIONS") ), or else the session-specific information doesn't get set. So, throughout the webapp, the code does a lot of selecting and re-selecting of the two databases. Is this likely to cause performance issues with a user base of, say, 10,000 to 15,000? Would we be better off moving the RST_SESSIONS table into the RST database to avoid all the selecting?
One reason we initially set things up this way was to be able to store the sessions information on a separate database server so it didn't interfere with the operations of the webapp database.
What are some of the pros and cons of both methods, and what would you suggest we do for performance? Thanks in advance.
If you're worried about performance, an alternative solution would be to not store your sessions in a database at all, but to use something like memcached; the PHP memcached extension already provides a session handler.
A couple of advantages of using memcached:
No hit to the disk: everything is in RAM.
Of course, this means sessions will be lost if your server crashes; but if a crash happens, you'll probably have bigger troubles than just losing sessions, and it is not likely to happen often.
Used in production by many websites, and it works well (I'm using it for a couple of websites).
Better scalability: if you need more RAM or more CPU power for your memcached cluster, just add a couple of servers.
And I would add: once you've started using memcached, you can also use it as a caching mechanism ;-)
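Switching PHP's built-in sessions over to memcached is mostly configuration; a minimal sketch assuming the PECL memcached extension is installed (host and port are placeholders, and the same two settings can live in php.ini instead):

<?php
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '127.0.0.1:11211');

session_start();              // sessions are now stored in memcached
$_SESSION['user_id'] = 42;    // and used exactly as before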
Now, to answer your specific questions:
Instead of selecting the DB, I would use two distinct connections:
One for the DB that's used for the application,
And another for the DB that's used for the sessions.
Of course, this means a bit more load on the server (it doubles the number of open connections), but it makes sure that, the day it becomes necessary, you'll be able to move the "sessions" database to another server: you'll just have to reconfigure a connection string, and as the application already uses two separate connections, it'll still work fine.
If you can live with it, just open a second connection to the database. That way you won't have to switch between databases at all. Of course, now you consume twice as many connections, and may need to bump the limit.
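A sketch of that two-connection approach (hosts and credentials are placeholders, and the users query is invented for illustration); since each connection is bound to its own database, no select/re-select is ever needed, and moving RST_SESSIONS to its own server later is just a hostname change:

<?php
$appDb     = new mysqli('db.example', 'user', 'pass', 'RST');
$sessionDb = new mysqli('db.example', 'user', 'pass', 'RST_SESSIONS');

// Application queries use one connection, session queries the other.
$user = $appDb->query('SELECT name FROM users WHERE id = 1')->fetch_assoc();
$sessionDb->query("UPDATE SESSION SET ACTIVE = 1 WHERE SID = 'abc123'");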
Unless there's some overriding reason to put your auth information in a separate database, why not put it with the rest of your data? You may find it convenient to have everything in one place.
Note also that you can qualify table names in your SQL queries with a schema (database) name, e.g.
SELECT ACTIVE
FROM RST_SESSIONS.SESSION
WHERE SID = 'whatever'
This may get you out of the need to switch dbs explicitly, if they're both on the same server.

Saving the State of a System

A very flowery title indeed.
I have a PHP web application in the form of a web-based wizard. A user can run through the wizard, select options, run processes (DB queries), etc. They can go backwards and forwards and run processes again and again.
I am trying to work out how best to save the state of what users did, which processes they ran, etc.: basically a glorified log that I can pull up later.
How do I save these states or sessions? One option which is being considered by my colleague is using an XML file for each session and to save everything there. My idea is to use a database table to do this.
There are pros and cons to each, and I was hoping to get answers on which option to go for. Suggestions of other feasible options would be great! Also, what kind of questions should I ask myself to choose the right implementation?
Technologies Currently Used
Backend: PHP and MS SQL Server, running on Windows Server 2005
FrontEnd: HTML, CSS, JavaScript (JQuery)
Any help will be greatly appreciated.
EDIT
There will be only one/two/three users per site where this system will be launched. Each site will not be connected in any way. The system can have about 10 to 100 sessions per month.
Using a database is probably the way to go. Just create a simple table that tracks actions by session id. Don't index anything, as you want inserting rows to be a low-cost operation (you can create a temp table, add indexes, and run reports on it later).
XML files could also work -- you'd want to write a separate file for each sessionid -- but doing analysis will probably be much more straightforward if you can leverage your database's featureset.
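As a sketch of that table-based log, using PDO with the sqlsrv driver to match the MS SQL Server backend (the DSN, table, and column names are assumptions):

<?php
$pdo = new PDO('sqlsrv:Server=localhost;Database=wizard', 'user', 'pass');

// One-time setup, deliberately unindexed so inserts stay cheap:
// CREATE TABLE wizard_log (
//     session_id VARCHAR(64),
//     action     VARCHAR(255),
//     detail     NVARCHAR(MAX),
//     logged_at  DATETIME DEFAULT GETDATE()
// );

$stmt = $pdo->prepare(
    'INSERT INTO wizard_log (session_id, action, detail) VALUES (?, ?, ?)'
);
$stmt->execute(array(session_id(), 'ran_query', json_encode(array('step' => 3))));

Each wizard step then fires one cheap INSERT, and reporting can later copy the rows into an indexed table as suggested above.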
If you're talking about a large number of users performing their operations simultaneously, and you want to trace their steps, I think it's better to go for a database-oriented approach. The database server can optimize data flow and disk writes, leading to better concurrent performance than constantly writing files to disk. Whichever you choose, you really should stress-test the system to make sure performance does not suffer under a big load.
