I made a website using MySQL (InnoDB), php-mysqli and PDO to track progress data.
The backend runs massively concurrent SELECT-then-UPDATE operations. Each transaction contains two statements: SELECT id FROM ... WHERE started = 0 FOR UPDATE and UPDATE ... SET started = 1 .... They are kept as separate statements because some pre-processing happens between them. The backend works quite well.
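For illustration, a minimal sketch of such a backend transaction with PDO; the table name tasks, the LIMIT 1, and the connection details are placeholders, not the real schema:

<?php
// Hypothetical sketch of the backend transaction described above.
$pdo = new PDO('mysql:host=localhost;dbname=progress;charset=utf8', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$pdo->beginTransaction();
try {
    // FOR UPDATE locks the selected row(s) so concurrent workers cannot claim the same id.
    $id = $pdo->query('SELECT id FROM tasks WHERE started = 0 LIMIT 1 FOR UPDATE')->fetchColumn();
    if ($id !== false) {
        // ... pre-processing happens here ...
        $pdo->prepare('UPDATE tasks SET started = 1 WHERE id = ?')->execute([$id]);
    }
    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}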
Now I'm writing a frontend to monitor the database, and I plan to use statements like SELECT COUNT(started) FROM ... WHERE started = 0. I'm not sure which kind of lock MySQL will take in that case. I don't need every frontend call to succeed; the success rate of the backend matters more.
To clarify, "backend" here means several PHP files accessed programmatically as a REST API; "frontend" is a PHP page accessed by users.
Q: Can these monitoring SELECTs cause backend failures? If so, is there a way to prioritize backend access?
Related
I have a SugarCRM installation.
My problem is that when I do a search in SugarCRM, the search query blocks all other queries.
Id User Host db Command Time State Info
49498 sugar xx.xx.xx.xx:59568 sugarcrm Query 5 Sorting result SELECT leads.id ,leads_cstm.cedula_c,leads_cstm.numplanilla_c,leads_cstm.profession_c,leads_cstm.b
49502 sugar xx.xx.xx.xx:59593 sugarcrm Sleep 5 NULL
As you can see, the query with Id 49502 is, I presume, waiting for query 49498 to finish.
The first query is a search query which takes a very long time to execute.
The second query is the query for the index page.
The odd thing is that if I open two terminals and connect to MySQL with the same user as my SugarCRM installation, I can execute both queries concurrently. But if I start a search in the browser, then open a new tab and try to access the index page, that second tab hangs until the first tab finishes or gets a timeout from the server.
I have tested this with PHP running both as a module and as CGI.
So I guess it must be something with the mysql_query function itself?
Any ideas? It's very hard to optimize the db (lots of tables, lots of content), but at least the site should be usable concurrently...
Probably because you're using file-based sessions. PHP locks the session file while it's in use, which essentially makes all requests from a particular session serial.
The only ways around this are either to call session_write_close() to release the session lock on a per-script basis, once any session-changing code is complete, or to implement your own session handler with its own locking logic.
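For example, a minimal sketch of the session_write_close() approach (connection details and the query are placeholders standing in for the slow SugarCRM search):

<?php
session_start();
$userId = isset($_SESSION['user_id']) ? $_SESSION['user_id'] : null; // read what you need first

session_write_close(); // releases the session file lock for this request

// The slow query below no longer blocks other pages opened by the same session.
$db = new mysqli('localhost', 'sugar', 'secret', 'sugarcrm');
$result = $db->query('SELECT id FROM leads LIMIT 10'); // stand-in for the long-running search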
We have decided that we are going to move from a single database to a replicated database in a master-slave architecture and are going to get all of our reads to go to the slave and writes to the master.
The reason we are going down this route is that an addition to our product has caused a large increase in database connections, which leads to performance problems with our reporting suite.
We are using MySQL (5.1.55) and the application is developed in PHP.
A couple of general queries on this:
How would you tell the application which DB to read from? Would you do it within the PHP (a rough sketch of that option is below)? Or use something like mysqlnd_ms or MySQL Proxy?
Where would AJAX requests read from? We have a page that allows users to flag a record; the flag is then saved in the database and users can see which records have been flagged.
Thanks for any advice.
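For reference, a hedged sketch of what the in-PHP read/write split might look like, with placeholder hostnames, credentials, and table names:

<?php
// Two connections: writes go to the master, reads go to the slave.
$master = new PDO('mysql:host=master.db.local;dbname=app', 'user', 'pass');
$slave  = new PDO('mysql:host=slave.db.local;dbname=app', 'user', 'pass');

// The AJAX "flag a record" call is a write, so it uses the master.
$recordId = 123; // example id
$master->prepare('UPDATE records SET flagged = 1 WHERE id = ?')->execute([$recordId]);

// Listing flagged records is a read, so it uses the slave. Replication lag means
// a flag written a moment ago may not be visible yet; read from the master when
// the user must immediately see their own change.
$flagged = $slave->query('SELECT id FROM records WHERE flagged = 1')->fetchAll();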
Hi, I created a simple PHP/MySQL/AJAX chat application and I have a few questions. Before that, let me explain how it works.
When a user is on the chat page, the AJAX script sends a request to a PHP file that fetches the chat history (latest messages) and returns it as HTML. This request is repeated every second so the user always sees the latest messages.
So far it's been working great.
Now my questions and concerns are: 1) What are the cons of using a method like this, if any? 2) What should I worry about most if it gets a large user base and many people use it simultaneously? (Mostly because it makes a request every second for each user.)
The MySQL table is an InnoDB table, and I'm using only one SELECT statement without a WHERE clause, something like SELECT * FROM table ORDER BY id DESC LIMIT 10 (basically, I'm asking MySQL to do something as easy as cake).
3) Any suggestions are welcome ;)
Thanks very much,
vikash
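For context, a rough sketch of the polled endpoint described above (table and column names are guesses, not the actual schema):

<?php
// Returns the latest 10 messages as HTML; the page requests this every second.
$db = new mysqli('localhost', 'chat', 'secret', 'chatdb');

$result = $db->query('SELECT username, message FROM messages ORDER BY id DESC LIMIT 10');
$rows = $result->fetch_all(MYSQLI_ASSOC);

foreach (array_reverse($rows) as $row) { // oldest first for display
    echo '<p><b>' . htmlspecialchars($row['username']) . ':</b> '
       . htmlspecialchars($row['message']) . '</p>';
}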
Definitely, you will need to look at scalability issues for both the web server and database server. There are technologies such as MySQL clustering for improving performance on the database and web clustering for the HTTP side of things.
With large-scale use you may also look at trimming down the table by moving early posts into a separate table for low-frequency access. You could also cache the database results via some worker process so that database reads stay minimal while the front end can still cope with the high volume of requests.
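As one hedged illustration of that caching idea, assuming the APCu extension is available (the cache key and one-second TTL are arbitrary choices):

<?php
// Serve the last rendered chat HTML from a shared cache for up to 1 second,
// so N users polling every second cause roughly one query per second in total.
function latestMessagesHtml(mysqli $db) {
    $html = apcu_fetch('chat_latest_html');
    if ($html === false) {
        $result = $db->query('SELECT username, message FROM messages ORDER BY id DESC LIMIT 10');
        $html = '';
        foreach ($result->fetch_all(MYSQLI_ASSOC) as $row) {
            $html .= '<p>' . htmlspecialchars($row['username']) . ': '
                   . htmlspecialchars($row['message']) . '</p>';
        }
        apcu_store('chat_latest_html', $html, 1); // 1-second TTL
    }
    return $html;
}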
I got 60 people in phpFreeChat (php/ajax/mysql chat) and it was a complete processor hog. It brought an 8 core server to its knees.
I would like to create an interface for manipulating invoices in a transaction-like manner.
The database consists of an invoices table, which holds billing information, and an invoice_lines table, which holds line items for the invoices. The website is a set of scripts which allow the addition, modification, and removal of invoices and their corresponding lines.
The problem I have is this: I would like the ACID properties of the database to be reflected in the web application.
Atomic: When the user hits save, either the entire invoice is modified or the entire invoice is not changed at all.
Consistent: The application code already ensures consistency; lines cannot be added to non-existent invoices, and invoice IDs cannot be duplicated.
Isolated: If a user is in the middle of a set of changes to an invoice, I would like to hide those changes from other users until the user clicks save.
Durable: If the web site dies, the data should be safe. This already works.
If I were writing a desktop application, it would maintain a connection to the MySQL database at all times, allowing me to simply use the BEGIN TRANSACTION and COMMIT at the beginning and end of the edit.
From what I understand, you cannot BEGIN TRANSACTION on one PHP page and COMMIT on a different page, because the connection is closed between page requests.
Is there a way to make this possible without extensions? From what I have found, only SQL Relay does this (but it is an extension).
You don't want long-running transactions, because they limit concurrency. Consider the Command pattern instead: http://en.wikipedia.org/wiki/Command_pattern
The usual translation of this kind of processing to the web is to keep the data in the session or in the page itself. Typically, after each page is completed, the entered data is stored in the session (or in the page), and once all pages have been filled in and the user hits a "Process" (or "Save") button, the data is converted into its database form and saved, including relational data like yours. There are many ways to do this, but most developers use an architecture along these lines (session data or state within the page) to satisfy what you are describing.
You'll get plenty of advice here on different architectures, but I can say that the Zend Framework (http://framework.zend.com) together with Doctrine (http://www.doctrine-project.org/) makes this fairly easy: Zend provides much of the MVC architecture and session management, and Doctrine provides the basic CRUD (create, retrieve, update, delete) you are looking for, plus the other aspects (uniqueness, commit, rollback, etc.). Keeping a connection open to MySQL may cause timeouts and exhaust the available connections.
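As a bare-bones sketch of the session-draft idea (the field names and draft structure are invented for illustration):

<?php
// While the user edits, accumulate invoice changes in the session only.
session_start();

if (isset($_POST['description'], $_POST['amount'])) {
    $_SESSION['invoice_draft']['lines'][] = [
        'description' => $_POST['description'],
        'amount'      => $_POST['amount'],
    ];
}

// Nothing touches the invoices/invoice_lines tables yet, so other users keep
// seeing the unmodified invoice. On "Save", the whole draft is written to the
// database in one short transaction.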
Database transactions aren't really intended for this purpose - if you did use them, you'd probably run into other problems.
In any case you can't use them here, because each page request (potentially) uses its own connection and so cannot share a transaction with any other request.
Keep the modifications to the invoice somewhere else while the user is editing them, then apply them when she hits save; you can do this final apply step in a transaction (albeit quite a short-lived one).
Long-lived transactions are usually bad.
The solution is not to open the transaction during the GET phase. Do all of it (BEGIN TRANSACTION, the processing, and COMMIT) during the POST triggered by the "save" button.
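A hedged sketch of that POST handler with PDO (table names, the draft structure, and credentials are placeholders):

<?php
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    session_start();
    $draft = $_SESSION['invoice_draft']; // assembled during the earlier edit requests

    $pdo = new PDO('mysql:host=localhost;dbname=billing', 'user', 'pass');
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $pdo->beginTransaction();
    try {
        $pdo->prepare('UPDATE invoices SET total = ? WHERE id = ?')
            ->execute([$draft['total'], $draft['id']]);

        $pdo->prepare('DELETE FROM invoice_lines WHERE invoice_id = ?')
            ->execute([$draft['id']]);

        $ins = $pdo->prepare('INSERT INTO invoice_lines (invoice_id, description, amount) VALUES (?, ?, ?)');
        foreach ($draft['lines'] as $line) {
            $ins->execute([$draft['id'], $line['description'], $line['amount']]);
        }

        $pdo->commit();   // atomic: either the whole invoice is saved or none of it
    } catch (Exception $e) {
        $pdo->rollBack(); // nothing was changed
        throw $e;
    }
}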
Persistent connections may help you:
http://php.net/manual/en/features.persistent-connections.php
Note this caveat from the manual: "Another is that when using transactions, a transaction block will also carry over to the next script which uses that connection if script execution ends before the transaction block does."
But I recommend finding another approach to the problem.
For example: create a cache table.
When you need to "commit", transfer the records from the cache table to the "real" tables.
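A hedged sketch of that cache-table commit step, assuming a staging table invoice_lines_cache keyed by session id (names and schema are placeholders):

<?php
session_start();
$pdo = new PDO('mysql:host=localhost;dbname=billing', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$sessionId = session_id();

$pdo->beginTransaction();
try {
    // Move the pending rows into the real table...
    $pdo->prepare('INSERT INTO invoice_lines (invoice_id, description, amount)
                   SELECT invoice_id, description, amount
                   FROM invoice_lines_cache WHERE session_id = ?')
        ->execute([$sessionId]);

    // ...then clear them from the cache table.
    $pdo->prepare('DELETE FROM invoice_lines_cache WHERE session_id = ?')
        ->execute([$sessionId]);

    $pdo->commit();
} catch (Exception $e) {
    $pdo->rollBack();
    throw $e;
}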
Although there are already some good answers, I was stuck with the same question and think I found some good responses to it. In my opinion the best approach is to use a framework like Doctrine (O/R mapping), which has this kind of approach more or less implemented. Here is a link to what I'm talking about.
Related to my previous question:
PHP and Databases: Views, Functions and Stored Procedures performance
Just to ask a more specific question regarding large SELECT queries.
When would it be more convenient to use a View instead of writing the SELECT query in the code and calling it:
$connector->query($sql)->fetchAll();
What factors should be taken into account when deciding whether it's best to use a view or to just leave the query as it is? Say, if you join several tables, select a certain amount of data, etc.
I'm asking in the context of a big web app (with PHP & Postgres), with performance and optimization in mind.
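To make the comparison concrete, a small hypothetical example of the two options (table, view, and column names are invented; $connector is the PDO-style connector from the question):

<?php
$connector = new PDO('pgsql:host=localhost;dbname=app', 'user', 'pass');

// Option 1: the join lives in the PHP code as a plain SELECT.
$sql = 'SELECT o.id, c.name, SUM(i.amount) AS total
        FROM orders o
        JOIN customers c ON c.id = o.customer_id
        JOIN order_items i ON i.order_id = o.id
        GROUP BY o.id, c.name';
$rows = $connector->query($sql)->fetchAll();

// Option 2: the same join is defined once in the database as a view
// (CREATE VIEW order_totals AS SELECT ...), and the code only references it.
$rows = $connector->query('SELECT * FROM order_totals')->fetchAll();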
One thing to take into account when you are using PHP source code + views (instead of only PHP source code) is that you now have two kinds of sources to modify when you update your application:
you must put the new PHP sources on the server
and you must update the views
And you sometimes have to do both at exactly the same time if you don't want your application to crash... or you have to write your code so that the application still runs OK against an outdated / more recent version of the views (for a couple of seconds).
Something else you might have to consider is versioning: versioning PHP scripts is easy, just use SVN and you're all right, as they're text files.
With views, to get the same kind of versioning, you have to work in text files (committed to SVN before you apply them on the production DB server) and keep those in sync with the DB server -- it seems easy, but it's not when you have to push an emergency patch to production ^^
Personally, I generally use views / stored procedures when they really make a difference: for instance, if a calculation would require thousands of SQL queries (and therefore thousands of calls from PHP, each waiting for a response, and so on) or too much data exchanged between the two servers, using a stored procedure can really be great!
(I've never used Postgres, but the idea is the same with other products.)
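To illustrate the round-trip point with a hypothetical example (the recalculate_prices() function, table, and connection details are invented):

<?php
$connector = new PDO('pgsql:host=localhost;dbname=app', 'user', 'pass');
$itemIds = range(1, 5000); // example: thousands of ids to update

// Many round trips: one query per item, each paying network latency.
$stmt = $connector->prepare('UPDATE items SET price = price * 1.1 WHERE id = ?');
foreach ($itemIds as $id) {
    $stmt->execute([$id]);
}

// One round trip: the loop runs inside the database, in a stored function
// defined separately (e.g. a PL/pgSQL function recalculate_prices()).
$connector->query('SELECT recalculate_prices()');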