Accessing MySQL from PHP and another process at the same time - php

I'm writing a program that runs (24/7) on a Linux server and adds entries to a MySQL database.
The contents of the database are presented on a web interface with PHP and the user should be able to delete entries using the web interface.
Is it possible to access the database from multiple processes at the same time?

Yes, databases are designed for this purpose quite well. You'll want to keep a few things in mind in your design:
Concurrency and race conditions on database writes (see the sketch after this list).
Performance.
Separate database permissions for separate applications.
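For the race-condition point, here is a minimal PHP/PDO sketch (the entries table, its id column and the credentials are made up for illustration) of a web-side delete that copes with the row already having been removed by another user or by the 24/7 process:

<?php
// Sketch only: hypothetical `entries` table and credentials.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'webuser', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$stmt = $pdo->prepare('DELETE FROM entries WHERE id = :id');
$stmt->execute([':id' => (int) $_POST['id']]);

if ($stmt->rowCount() === 0) {
    // The row was already gone (deleted elsewhere) or never existed; not an error.
    echo 'Entry no longer exists.';
}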

Unless you're doing something like accessing the DB through a singleton, the maximum number of simultaneous MySQL connections PHP will use is limited in your php.ini. I believe it defaults to 100.

Yes, multiple users can access the database at the same time.
You should however take care that the data stays consistent.
If you create or edit an entry with many small SQL statements and someone uses the web interface in the meantime, this may lead to errors.
If you have a simple database this should not be a problem; otherwise you should consider using transactions.
http://dev.mysql.com/doc/refman/5.0/en/ansi-diff-transactions.html
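For example, a minimal PDO sketch, assuming a hypothetical InnoDB entries table and a counter table: either all of the small statements take effect or none of them do, so the web interface never sees a half-finished entry.

<?php
// Sketch only: table names, columns and credentials are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'daemon', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$title = 'example entry';

try {
    $pdo->beginTransaction();

    // Several small statements that belong together.
    $pdo->prepare('INSERT INTO entries (title, created_at) VALUES (?, NOW())')
        ->execute([$title]);
    $pdo->prepare('UPDATE entry_counts SET total = total + 1 WHERE id = 1')
        ->execute();

    $pdo->commit();       // make both changes visible at once
} catch (Exception $e) {
    $pdo->rollBack();     // undo everything if any statement failed
    throw $e;
}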

Yes, and there will not be any problems deleting records while that automated 24/7 program is running, as long as you are using the InnoDB engine. This is because transactions are isolated from one another: effectively one starts after another has finished, and the database is consistent every time.
This answer, "How to implement the ACID model for a database", has many relevant points.
Read about the ACID properties of a database. A MySQL database with the InnoDB engine will take care of all these things for you, and you need not worry about them.

Related

Mysql, data migration between databases/servers (migrate now with regular updates later)

This is somewhat of an abstract question but hopefully pretty simple at the same time. I just have no idea of the best way to go about this except for an export/import, and I can't do that due to permission issues. So I need some alternatives.
On one server, we'll call it 1.2.3, I have a database with 2 schemas, Rdb and test. These schemas have 27 and 3 tables respectively. This database stores call info from our phone system, but we have reader access only so we're very limited in what we can do beyond selecting and joining for data records and info.
I then have a production database server, call it 3.2.1, with my main schemas, and I'd like to place the previous 30 tables into one of these production schemas. After the migration is done, I'll need to create a script that will check the data on the first connection and then update the new schema on the production connection, but that's after the bulk migration is done.
I'm wondering if a PHP script would be the way to go about this initial migration, though. I'm using MySQL Workbench and the export wizard fails for the read-only database, but if there's another way in the interface then I don't know about it.
It's quite a bit of data, and I'm not necessarily looking for the fastest way but the easiest and most fail-safe way.
For a one time data move, the easiest way is to use the command line tool mysqldump to dump your tables to file, then load the resulting file with mysql. This assumes that you are either shutting down 1.2.3, or will reconfigure your phone system to point to 3.2.1 (or update DNS appropriately). Also, this is much easier if you can get downtime on the phone system to move the data.
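If the command-line tools are not an option and you fall back to the PHP script mentioned in the question, a rough sketch of a batch copy might look like the following. Hostnames, credentials, the table list and the destination schema are placeholders, and it assumes the destination tables have already been created with the same structure.

<?php
// Rough sketch only: copies tables row by row in batches from the read-only
// source (1.2.3) to production (3.2.1). Placeholders throughout.
$src = new PDO('mysql:host=1.2.3;dbname=Rdb', 'reader', 'secret');
$dst = new PDO('mysql:host=3.2.1;dbname=production', 'writer', 'secret');
$src->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$dst->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$tables = ['calls', 'extensions'];   // list all 30 tables here
$batch  = 5000;

foreach ($tables as $table) {
    $offset = 0;
    while (true) {
        // Without an ORDER BY this assumes no concurrent writes during the copy.
        $rows = $src->query("SELECT * FROM `$table` LIMIT $batch OFFSET $offset")
                    ->fetchAll(PDO::FETCH_ASSOC);
        if (!$rows) {
            break;
        }
        $cols         = array_keys($rows[0]);
        $placeholders = '(' . rtrim(str_repeat('?,', count($cols)), ',') . ')';
        $insert       = $dst->prepare(
            "INSERT INTO `$table` (`" . implode('`, `', $cols) . "`) VALUES $placeholders"
        );
        foreach ($rows as $row) {
            $insert->execute(array_values($row));
        }
        $offset += $batch;
    }
}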
we have reader access only so we're very limited in what we can do beyond selecting and joining for data records
This really limits your options.
Master/slave replication requires the REPLICATION SLAVE privilege, and you probably need a user with the SUPER privilege to create the replication user.
Trigger-based replication solutions like SymmetricDS will require a user with privileges to create triggers and routines.
An "Extract, Transform, Load" solution like Clover ETL will work best if tables have LAST_CHANGED timestamps. If they don't, you would need the ALTER privilege to add them.
Different tools for different goals.
Master/slave replication is generally used for disaster recovery, availability, or read scaling.
Heterogeneous replication is used to replicate some (or all) tables between different environments (could be different RDBMSs, or different replica sets) in a continuous but asynchronous fashion.
ETL is for bulk, hourly/daily/periodic data movements, with the ability to pick a subset of columns, aggregate, convert timestamp formats, merge multiple sources, and generally fix whatever you need to with the data.
That should help you determine really what your situation is - whether it's a one time load with a temporary data sync, or if it's an on-going replication (real-time, or delayed).
Edit:
https://www.percona.com/doc/percona-toolkit/LATEST/index.html
Check out the Percona Toolkit, specifically pt-table-sync and pt-table-checksum. They will help with this.

MySQL PHP application performance with a single database

I am designing a "high" traffic application which relies mainly on PHP and MySQL database queries.
I am designing the database tables so they can hold 100,000 rows, and each page load queries the DB for user data.
Could I experience slow performance or database errors when there are, say, 1000 users connected?
Asking because I cannot find specifications on the real performance limits of MySQL databases.
Thanks
If the user data remains unchanged when loading another page, you could think about storing that information in the session.
Also, you should analyze what the read/write ratio is in your database and on specific tables. MyISAM and InnoDB are very different when it comes to locking. Many connections can slow down your server, so try to cache connections.
Take a look at http://php.net/manual/en/pdo.connections.php
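For instance, PDO can keep ("cache") a connection open across requests with the persistent flag, so each page load does not pay the connection setup cost again; the host and credentials below are placeholders.

<?php
// Sketch: a persistent connection is kept open by the PHP worker process and
// reused on the next request instead of reconnecting every time.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8mb4', 'appuser', 'secret', [
    PDO::ATTR_PERSISTENT => true,
    PDO::ATTR_ERRMODE    => PDO::ERRMODE_EXCEPTION,
]);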
If designed wrongly, one user might kill your server. You need to run performance tests and find bottlenecks by profiling your code. Use EXPLAIN for your queries.
Well-designed databases can handle tens of millions of rows, but poorly designed ones can't.
Don't worry about performance; just try to design it well.
It's hard to say whether a design is good or not; you should always do some stress tests before you set up your application or website to see the performance. Tools I often use are mysqlslap (for MySQL only) and Apache's ab command; you can Google them for details.

Best practice to record large amount of hits into MySQL database

Well, this is the thing. Let's say that my future PHP CMS needs to drive 500k visitors daily and I need to record them all in a MySQL database (referrer, IP address, time, etc.). This way I need to insert 300-500 rows per minute and update 50 more. The main problem is that the script would call the database every time I want to insert a new row, which is every time someone hits a page.
My question: is there any way to locally cache incoming hits first (and what is the best solution for that: APC, CSV...?) and periodically send them to the database, every 10 minutes for example? Is this a good solution, and what is the best practice for this situation?
500k daily is just 5-7 queries per second. If each request is served in 0.2 sec, then you will have almost zero simultaneous queries, so there is nothing to worry about.
Even if you have 5 times more users, everything should still work fine.
You can just use INSERT DELAYED and tune your MySQL.
About tuning: http://www.day32.com/MySQL/ has a very useful script (it will change nothing, it just shows you tips on how to optimize your settings).
You can use memcache or APC to write the log there first, but with INSERT DELAYED MySQL will do almost the same work, and will do it better :)
Do not use files for this. The DB will handle locks much better than PHP. It's not trivial to write effective mutexes, so let the DB (or memcache, APC) do this work.
A frequently used solution:
You could implement a counter in memcached which you increment on each visit, and push an update to the database for every 100 (or 1000) hits.
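A rough sketch of that idea with the pecl memcached extension (the server address, key name and hits table are made up):

<?php
// Sketch only: count hits in memcached and flush to MySQL every 100 hits.
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$mc->add('hit_counter', 0);               // seed the key if it does not exist yet
$count = $mc->increment('hit_counter');

if ($count !== false && $count >= 100) {
    // Subtract what we are about to write so concurrent increments are not lost.
    $mc->decrement('hit_counter', $count);
    $pdo = new PDO('mysql:host=localhost;dbname=stats', 'logger', 'secret');
    $pdo->prepare('INSERT INTO hits (page, total) VALUES (?, ?)
                   ON DUPLICATE KEY UPDATE total = total + VALUES(total)')
        ->execute([$_SERVER['REQUEST_URI'], $count]);
}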
We do this by storing locally on each server to CSV, then having a minutely cron job to push the entries into the database. This is to avoid needing a highly available MySQL database more than anything - the database should be able to cope with that volume of inserts without a problem.
Save them to a directory-based database (or flat file, it depends) somewhere and, at a certain time, use PHP code to insert/update them into your MySQL database. Your PHP code can be executed periodically using cron, so check whether your server has cron so that you can set the schedule for that, say every 10 minutes.
Have a look at this page: http://damonparker.org/blog/2006/05/10/php-cron-script-to-run-automated-jobs/. Some code has already been written and is ready for you to use :)
One way would be to use the Apache access.log. You can get quite fine-grained logging by using the cronolog utility with Apache. cronolog will handle the storage of a very large number of rows in files and can rotate them based on day, year, etc. Using this utility will prevent your Apache from suffering from log writes.
Then, as said by others, use a cron-based job to analyse these logs and push whatever summarized or raw data you want into MySQL.
You may think of using a dedicated database (or even database server) for write-intensive jobs, with specific settings. For example you may not need InnoDB storage and could keep simple MyISAM tables. And you could even think of another database storage engine (as said by @Riccardo Galli).
If you absolutely HAVE to log directly to MySQL, consider using two databases. One optimized for quick inserts, which means no keys other than possibly an auto_increment primary key. And another with keys on everything you'd be querying for, optimized for fast searches. A timed job would copy hits from the insert-only to the read-only database on a regular basis, and you end up with the best of both worlds. The only drawback is that your available statistics will only be as fresh as the previous "copy" run.
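A sketch of the timed copy job described here (database and table names are invented; both schemas are assumed to live on the same server):

<?php
// Sketch: run from cron; copy new rows from the insert-optimized schema
// to the fully indexed reporting schema.
$pdo = new PDO('mysql:host=localhost', 'stats', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$last = (int) $pdo->query('SELECT COALESCE(MAX(id), 0) FROM reporting.hits')
                  ->fetchColumn();

$pdo->prepare('INSERT INTO reporting.hits SELECT * FROM logging.hits WHERE id > ?')
    ->execute([$last]);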
I have also previously seen a system which records the data into a flat file on the local disc on each web server (be careful to do only atomic appends if using multiple processes), and periodically writes it asynchronously into the database using a daemon process or cron job.
This appears to be the prevailing optimum solution; your web app remains available if the audit database is down, and users don't suffer poor performance if the database is slow for any reason.
The only thing I can say is: be sure that you have monitoring on these locally generated files - a build-up definitely indicates a problem, and your Ops engineers might not otherwise notice.
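A minimal sketch of the atomic append mentioned above (the file path and logged fields are arbitrary); a cron job or daemon can later read, load and truncate the file:

<?php
// Sketch: append one CSV line per hit. Appends of short lines to a local file
// are effectively atomic, but the explicit lock makes the intent clear when
// several PHP processes write to the same file.
$line = [date('c'), $_SERVER['REMOTE_ADDR'], $_SERVER['REQUEST_URI'],
         isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : ''];

$fh = fopen('/var/log/myapp/hits.csv', 'a');
if ($fh !== false) {
    if (flock($fh, LOCK_EX)) {
        fputcsv($fh, $line);
        flock($fh, LOCK_UN);
    }
    fclose($fh);
}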
For a high number of write operations and this kind of data, you might find MongoDB or CouchDB more suitable.
Because INSERT DELAYED is only supported by MyISAM, it is not an option for many users.
We use MySQL Proxy to defer the execution of queries matching a certain signature.
This will require a custom Lua script; example scripts are here, and some tutorials are here.
The script will implement a Queue data structure for storage of query strings, and pattern matching to determine what queries to defer. Once the queue reaches a certain size, or a certain amount of time has elapsed, or whatever event X occurs, the query queue is emptied as each query is sent to the server.
You can use a queue strategy using beanstalkd or IronMQ.

Why would you use two (or more) databases instead of one?

Many database libraries come set up for multiple database connections, but I've never actually known of a scripting application that needed to connect to two databases during its run (compiled, daemon-running languages are a different matter).
I understand having database slaves so that you can spread the load out, but usually on startup only one of them is chosen to handle that script's needs.
So why would a PHP or Ruby application need to connect to more than one database? Or rather, why would you split your data up among several databases?
The only thing I can think of is bad design from a slowly evolving system that started off in multiple separate parts.
Are you talking about different physical database servers or different databases in the "schema" sense?
Regarding physical servers, if you're using MySQL replication you might write to the master and always read from a slave. This helps split the load between the databases.
The simple answer is "scalability".
The ready availability of replication and clustering in a number of database products makes multiple database use a definite 'this must be possible'. Any decent ORM should know how to connect to multiple databases as required.
But even when the main application doesn't connect to more than one, there will often be other needs that do. Report generation, either scripted or ad hoc, often involves queries that run for a long time. These are best run on database replicas dedicated to (and configured for) these queries so they don't disrupt the main application.
Another good use is a type of scripted processing. Many apps will have a regular process that needs to rummage through a large part of the database. Whilst updates obviously have to go to the master, the big read queries can be run off a replica.
Of course, the obvious need is simple performance. I oversaw a webapp and database that grew from surviving comfortably on one MySQL database on a 32-bit dual-core machine with 3GB to needing two 8-core 64-bit servers with 8GB. Once it reached this stage, it relied on the database handler directing traffic to both servers. We had a window of about 50 minutes in a day where it could survive on just one database.
I have a Ruby application that connects to multiple databases. One database contains user login credentials (which is shared between several other projects). Another database contains archived data that my application tracks and compares (that only my application accesses). Another database contains data regarding physical machine resources which my application uses to generate new data (these resources are used by several different applications). By splitting the data into multiple databases, different applications only access the data that they need to be accessing.
It is all too frequently the case that some of the data you need is stored in The Wrong Database. Sometimes it's personnel records in a PeopleSoft (Oracle) database. Maybe it's Enterprise CRM data on Informix. Or some departmental database stored in MS SQL Server. Whatever it is, it's in a different database, but you still need access (hopefully read-only).
Unless your primary database is magic-based, it isn't going to be able to provide you with remote table access for every other database out there. (Most will only provide remote access to other databases of the same type, eg: MySQL->MySQL.) When that all too frequent situation occurs, you'll have no other option but to have multiple database connections, and be glad that your framework supports it.
I have a site that connects to two databases. One powers the website content (the CMS DB); the other drives a web application that runs within the site (large amounts of non-CMS data). In fact, the latter uses replication.
I don't feel that's bad design. If one set of data has no relation to the other, then it makes sense even from a pure organization perspective to house it in a separate DB. Otherwise, people would just put all their tables in one DB.
For added security, I always create two accounts for every database: a read-only account (good for SELECT) and a read-write account (for SELECT, UPDATE, INSERT, DELETE and whatever else I might need). On some pages, I may need to use both accounts, thus I will consume two connections for only one database.
Well, reading from one and writing to another is a very common use case. It's easy and fun to write a data access layer that reads from one connection (reading from the slave), and writes to another (the master). A single script might make multiple reads before writing -- perhaps some lookups are necessary for validation, for instance.
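A minimal sketch of such a layer, assuming one master and one read replica (hostnames and credentials are placeholders):

<?php
// Sketch: route SELECTs to the replica and everything else to the master.
class Db
{
    private $read;
    private $write;

    public function __construct()
    {
        $opts = [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION];
        $this->read  = new PDO('mysql:host=replica.internal;dbname=app', 'reader', 'secret', $opts);
        $this->write = new PDO('mysql:host=master.internal;dbname=app', 'writer', 'secret', $opts);
    }

    public function select($sql, array $params = [])
    {
        $stmt = $this->read->prepare($sql);
        $stmt->execute($params);
        return $stmt->fetchAll(PDO::FETCH_ASSOC);
    }

    public function execute($sql, array $params = [])
    {
        $stmt = $this->write->prepare($sql);
        $stmt->execute($params);
        return $stmt->rowCount();
    }
}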
Scripting languages are also frequently used for integration. You might have two off-the-shelf codebases, both of which want to maintain their own database. Your integration code might want to talk to both of them.
You can usually design your way out of using more than one connection, but in general I don't see anything fundamentally wrong with using connections to more than one database.
Other reasons to have multiple databases: we have one application that everyone can access. We also have client databases that are very different from client to client. It is easier to maintain the application that all clients use (and which is maintained by a different team) if the client-specific data is separated out into its own databases. It is also easier to move a client to a new server when they become a large enterprise client, rather than one of the smaller clients who run on a server with many other clients.
Further, there are types of data that are transactional and need to be in databases set to full recovery mode with full transaction logging. Other data is only populated from imports and does not need transaction logging, which might slow down the system as the log grows large enough to handle a 10,000,000 record import. These are often split out into a separate database so they can be in simple recovery mode, as it is not necessary to recover the data from the transaction log if there is a problem; it can easily be recovered by re-running the import.
Then data is split out into data warehouses which are optimized for reporting, not transactions. Again, these reporting databases are usually separate databases (often on separate servers).
Then you have the databases for multiple different COTS applications (we have accounting databases, credit card transaction processing databases, HR databases, and our project management database). A particular website might need to access more than one of these or transfer information from one to the other. Believe me, vendors won't let you copy their database structure into one database to rule them all.
We have several hundred databases here on many different servers.

different databases for handling sessions...am I doing the right thing?

I'm looking for some advice on whether or not I should use a separate database to handle my sessions.
We are writing a web app for multiple users to login and check/update their account specific information. We didn't want to use the file storage method on the webserver for storing session information, so we decided to use a database (MySQL). It's working fine, but I'm wondering about performance when this gets into production.
Currently, we have two databases (rst_sessions and rst). The "RST" database is where all the tables for the webapp are stored... they are all MySQL InnoDB, using referential integrity/foreign keys to link the tables. The "RST_SESSIONS" database simply has one table, and all the session information gets stored there.
Here's one of my concerns. In the PHP code, if I want to run a query against "RST" then I have to select that database as such inside PHP ( $db->select("RST") )... when I'm done with the query I have to re-select "RST_SESSIONS" ( $db->select("RST_SESSIONS") ) or else the session-specific information doesn't get set. So, throughout the webapp the code is doing a lot of selecting and re-selecting of the two databases. Is this likely to cause performance issues with a user base of, say, 10,000-15,000? Would we be better off moving the RST_SESSIONS table into the RST database to avoid all the selecting?
One reason we initially set things up this way was to be able to store the sessions information on a separate database server so it didn't interfere with the operations of the webapp database.
What are some of the pro's and con's of both methods and what would you suggest we do for performance? Thanks in advance.
If you're worried about performance, an alternative solution would be to not store your sessions in the database, but to use something like memcached; the PHP memcached extension already provides a session handler (a minimal configuration sketch follows below).
A couple of advantages of using memcached:
No hit to the disk: everything is in RAM.
Of course, this means sessions will be lost if your server crashes; but if a crash happens, you'll probably have other troubles than just losing sessions, and this is not likely to happen often.
Used in production by many websites, and works well (I'm using it for a couple of websites).
Better scalability: if you need more RAM or more CPU power for your memcached cluster, just add a couple of servers.
And I would add: once you've started using memcached, you can also use it as a caching mechanism ;-)
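A minimal configuration sketch for that handler (requires the pecl memcached extension; the server address is a placeholder), either in php.ini or at the top of a bootstrap file:

<?php
// Sketch: hand session storage to memcached instead of files or MySQL.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', '127.0.0.1:11211');

session_start();
$_SESSION['user_id'] = 42;   // stored in RAM on the memcached server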
Now, to answer to your specific questions :
Instead of selecting the DB, I would use two distinct connections:
One for the DB that's used for the application,
And another for the DB that's used for the sessions.
Of course, this means a bit more load on the server (it doubles the number of opened connections), but it makes sure that, the day it becomes necessary, you'll be able to move the "session" database to another server: you'll just have to reconfigure a connection string, and as the application already uses two separate connections, it'll still work fine.
If you can live with it, just open a second connection to the database. That way you won't have to switch between databases at all. Of course, now you consume twice as many connections, and may need to bump the limit.
Unless there's some overriding reason to put your auth information in a separate database, why not put it with the rest of your data? You may find it convenient to have everything in one place.
Notice also that you can qualify your table names in your SQL queries with a schema (database) name, e.g.
SELECT ACTIVE
FROM RST_SESSIONS.SESSION
WHERE SID=*whatever*
This may get you out of the need to switch dbs explicitly, if they're both on the same server.
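In PHP terms this means a single connection can serve both schemas with no $db->select() switching at all; a small sketch (the credentials and the RST.ACCOUNTS table are placeholders):

<?php
// Sketch: one connection, each query naming its schema explicitly.
$pdo = new PDO('mysql:host=localhost', 'rst_user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

session_start();
$sid = session_id();

// Session lookup from the RST_SESSIONS schema...
$stmt = $pdo->prepare('SELECT ACTIVE FROM RST_SESSIONS.SESSION WHERE SID = ?');
$stmt->execute([$sid]);

// ...and application data from the RST schema, on the same connection.
$accounts = $pdo->query('SELECT * FROM RST.ACCOUNTS')->fetchAll(PDO::FETCH_ASSOC);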
