I'm using ajax, and quite often, if not all the time, the first request times out. In fact, if I wait several minutes before making a new request, I always hit this issue, but the subsequent requests are all OK. So I'm guessing that the first request used a database connection that had gone dead. I'm using MySQL.
Any good solution?
Can you clarify:
are you trying to make a persistent connection?
do basic MySQL queries work (e.g. SELECT 'hard-coded' FROM DUAL)?
how long does the MySQL query for your ajax call take (e.g. if you run it from a mysql command-line or GUI client)?
how often do you write to the MySQL tables used in your AJAX query?
Answering those questions should help rule out other problems that have nothing to do with making a persistent connection: basic database connectivity, table indexing / slow-running SQL, MySQL cache invalidation, etc.
Chances are that your problem is NOT opening the connection, but actually serving the request.
Subsequent calls are fast because of the MySQL query cache.
What you need to do is look for slow MySQL queries, for example by turning on the slow query log, or by watching the server in real time using mytop or SHOW PROCESSLIST to see if there is a query that takes too long. If you find one, use EXPLAIN to make sure it's properly indexed.
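As a rough sketch, the diagnostics above look like this in a MySQL session (threshold value is just an example):

```sql
-- Enable the slow query log at runtime (needs sufficient privileges)
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;   -- log anything slower than 1 second

-- Watch what is running right now
SHOW FULL PROCESSLIST;

-- Once a slow query is identified, inspect its plan; look for full table
-- scans ("type: ALL") and empty "key" columns, which suggest missing indexes
EXPLAIN SELECT ... ;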
Related
I'm developing a project where I need to retrieve HUGE amounts of data from an MsSQL database and process that data. The data retrieval comes from 4 tables, 2 of them with 800-1000 rows, but the other two with 55000-65000 rows each.
The execution time wasn't tolerable, so I started to rewrite the code, but I'm quite inexperienced with PHP and MsSQL. At the moment I'm running PHP on localhost:8000, generating the server with "php -S localhost:8000".
I think this is one of my problems: a poor server for a huge amount of data. I thought about XAMPP, but I need a server where I can install the MsSQL drivers without problems so I can use those functions.
I cannot change MsSQL for MySQL or make other changes like that; the company wants it that way...
Can you give me some advice about how to improve the performance? Any server that I can use to improve the PHP execution? Thank you very much in advance.
The PHP execution should be the least of your concerns. If it is, most likely you are going about things in the wrong way. All the PHP should be doing is running the SQL query against the database. If you are not using PDO, consider it: http://php.net/manual/en/book.pdo.php
First look to the way your SQL query is structured, and how it can be optimised. If in complete doubt, you could try posting the query here. Be aware that if you can't post a single SQL query that encapsulates your problem you're probably approaching the problem from the wrong angle.
I am assuming from your post that you do not have recourse to alter the database schema, but if you do, that would be the second course of action.
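A minimal PDO sketch, using an in-memory SQLite database purely so the example is self-contained; for SQL Server you would swap the DSN for something like 'sqlsrv:Server=myhost;Database=mydb' (hypothetical host and database names), with the rest unchanged:

```php
<?php
// Connect; ERRMODE_EXCEPTION makes failures visible instead of silent.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// Toy schema/data so the query below has something to run against.
$pdo->exec('CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)');
$pdo->exec("INSERT INTO items (name) VALUES ('alpha'), ('beta')");

// Prepared statements let the driver plan once and bind values safely.
$stmt = $pdo->prepare('SELECT name FROM items WHERE id = :id');
$stmt->execute([':id' => 2]);
echo $stmt->fetchColumn(), "\n"; // beta
```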
Try to do as much data processing in SQL Server as possible. Don't do joins or other data processing in PHP that can be done in the RDBMS.
I've seen PHP code that retrieved data from multiple tables and matched rows based on several conditions. That is just one example of such misuse.
Also try to handle data in sets in SQL (be it MS* or My*) and avoid, if possible, row-by-row processing. The optimizer will produce a much better plan.
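A sketch of the difference (table and column names are made up, and SQLite in memory stands in for the real server): instead of fetching both tables and matching rows in a PHP loop, let the database join and aggregate them in one set-based query.

```php
<?php
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE orders (id INTEGER, customer_id INTEGER)');
$pdo->exec('CREATE TABLE customers (id INTEGER, name TEXT)');
$pdo->exec("INSERT INTO customers VALUES (1, 'Ana'), (2, 'Ben')");
$pdo->exec('INSERT INTO orders VALUES (10, 1), (11, 2), (12, 1)');

// Set-based: one query; the optimizer picks the join strategy.
$rows = $pdo->query(
    'SELECT c.name, COUNT(*) AS n
       FROM orders o JOIN customers c ON c.id = o.customer_id
      GROUP BY c.name ORDER BY c.name'
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($rows as $r) {
    echo "{$r['name']}: {$r['n']}\n"; // Ana: 2, Ben: 1
}
```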
This is a small database. Really. My advice:
- Use paging for the tables and fetch the data in portions (by parts)
- Use indexes on the tables
- Try to find a more powerful server. Hosting companies often use one database server for thousands of users' databases, and it is very slow. I suffered from this and finally bought a dedicated server.
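The paging point, sketched in T-SQL since the question is about SQL Server (2012 or later for OFFSET/FETCH; table and column names are hypothetical):

```sql
-- Fetch page 3 with 50 rows per page
SELECT id, name
FROM big_table
ORDER BY id
OFFSET 100 ROWS FETCH NEXT 50 ROWS ONLY;

-- An index on the sort/filter column keeps each page cheap
CREATE INDEX idx_big_table_id ON big_table (id);
```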
I am working on a website that needs to serve multiple requests from the same table simultaneously. We made a simple index page in CakePHP which draws some data from the database (10 rows, to be precise), and a colleague executed a test simulating 1000 users viewing the same page at the same time, meaning that 1000 identical requests would be issued to the database. The thing is that at around 500 requests, the database stopped being responsive, everything just froze and we had to kill the processes.
What comes to mind is that each and every request is executed on its own connection, and this would explain why the MySQL server was overwhelmed. From a few searches online, and on SO, I can see that PHP does not support connection pooling natively, as can be done in a Java application, for instance. Having based our app on CakePHP 2.5.3, however, I would like to think that there is some underlying mechanism that overcomes these limitations. Perhaps I am not doing something right?
Any suggestion is welcome, I just want to make sure to exhaust every possible solution.
If the results are going to be the same for each query, you can cache the query result; then it will not send multiple requests to the database.
Try this plugin:
https://github.com/ndejong/CakephpAutocachePlugin
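If you're not tied to that plugin, the underlying idea is plain cache-aside: check a cache before hitting the database. A minimal sketch, where a plain array stands in for a real backend (Memcached, Redis, or CakePHP's Cache class) and the function name is illustrative:

```php
<?php
// Cache-aside: return cached rows if fresh, otherwise query and store.
function cachedQuery(PDO $pdo, string $sql, array &$cache, int $ttl = 60): array
{
    $key = md5($sql);
    if (isset($cache[$key]) && $cache[$key]['expires'] > time()) {
        return $cache[$key]['rows']; // cache hit: no database round trip
    }
    $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);
    $cache[$key] = ['rows' => $rows, 'expires' => time() + $ttl];
    return $rows;
}

// Demo with an in-memory SQLite database so the example is self-contained.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE posts (id INTEGER, title TEXT)');
$pdo->exec("INSERT INTO posts VALUES (1, 'hello')");

$cache  = [];
$first  = cachedQuery($pdo, 'SELECT * FROM posts', $cache); // hits the DB
$second = cachedQuery($pdo, 'SELECT * FROM posts', $cache); // served from cache
```

With 1000 identical page views, only the first within the TTL touches MySQL; the rest are served from memory.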
Now I must start by saying that I can't copy the query; this is a general question.
I've got a query with several joins that takes 0.9 seconds when run using the mysql CLI. I'm now trying to run the same query on a PHP site and it's taking 8 seconds. There are some other big joins on the site that are obviously slower, but this query is taking much too long. Is there a PHP cache for database connections that I need to increase? Or is this just to be expected?
PHP doesn't really do much with MySQL; it sends a query string to the server, and processes the results. The only bottleneck here is if it's an absolutely vast query string, or if you're getting a lot of results back - PHP has to cache and parse them into an object (or array, depending on which mysql_fetch_* you use). Without knowing what your query or results are like, I can only guess.
(From comments): If we have 30 columns and around, say, a million rows, this will take an age to parse (we later find that it's only 10k rows, so that's OK). You need to rethink how you do things:
See if you can reduce the result set. If you're paginating things, you can use LIMIT clauses.
Do more in the MySQL query; instead of getting PHP to sort the results, let MySQL do it by using ORDER BY clauses.
Reduce the number of columns you fetch by explicitly specifying each column name instead of SELECT * FROM ....
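The three points above combined in one sketch (table and column names are made up):

```sql
-- Instead of SELECT * and sorting/paginating in PHP:
SELECT id, name, created_at          -- only the columns you need
  FROM articles
 ORDER BY created_at DESC            -- let MySQL do the sorting
 LIMIT 20 OFFSET 0;                  -- paginate in the query itself
```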
Some wild guesses:
The PHP version uses different parameters and variables for each query, so MySQL cannot cache it, while the version you type on the MySQL CLI uses the same parameters, so MySQL can fetch it from its cache. Try adding SQL_NO_CACHE to your query on the CLI to see if the result then takes longer.
You are not testing on the same machine? Is the MySQL database you run the PHP query against on the same machine as the one you use from the CLI? I mean: you are not testing one on your laptop and the other on some production server, are you?
You are testing over a network: when the MySQL server is not installed on the same host as your PHP app, you will see a MySQL connection that uses "someserver.tld" instead of "localhost" as the database host. In that case PHP needs to connect over the network, while your CLI already has that connection, or connects only locally.
The actual connection is taking a long time. Try to run and time the query from your PHP system a thousand times in a row. Instead of "connect to server, query database, disconnect", you should time "connect to server, query the database a thousand times, disconnect". Some PHP applications do the former: they connect and disconnect for each and every query. And if your MySQL server is not configured correctly, connecting can take a huge share of the total time.
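That timing experiment might look like this (SQLite in memory so the sketch runs anywhere; against a real MySQL host, the interesting number is how much of the total goes to connecting):

```php
<?php
$start = microtime(true);
$pdo = new PDO('sqlite::memory:');          // connect once
$connected = microtime(true);

for ($i = 0; $i < 1000; $i++) {
    $pdo->query('SELECT 1')->fetchColumn(); // query a thousand times
}
$done = microtime(true);

printf("connect: %.4fs, 1000 queries: %.4fs\n",
       $connected - $start, $done - $connected);
```

If the connect share dominates when run against your real server, look at connection overhead (network, DNS, per-request reconnects) rather than the query itself.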
How are you timing it?
If the 'long' version is accessed through a php page on a website, could the additional 7.1 seconds not just be the time it takes to send the request and then process and render the results?
How are you connecting? Does the account you're using have a hostname in the grant tables? If you're connecting via TCP, MySQL will have to do a reverse DNS lookup on your IP to figure out if you're allowed in.
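If the reverse DNS lookup turns out to be the culprit, one common mitigation is disabling name resolution in my.cnf (note that grants must then be defined by IP address, not hostname):

```ini
[mysqld]
skip-name-resolve
```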
If it's the connection causing this, then do a simple test:
select version();
If that takes 8 seconds, then it's connection overhead. If it returns instantly, then it's PHP overhead in processing the data you've fetched.
The mysql_query function should take about the same time as the mysql client, but every extra mysql_fetch_* call will add up.
I have a working live search system that on the whole works very well. However it often runs into the problem that many versions of the search query on the server are running simultaneously, if users are typing faster than the results can be returned.
I am aborting the ajax request on receipt of a new one, but that of course does not affect the query already in progress on the server, and you end up with a severe bottleneck and a long wait for your final results. I am using MySQL with MyISAM tables for this, and there does not seem to be any advantage in converting to InnoDB, as the result sets will be the same rows.
I tried using a session variable to make php wait if this session already has a query in progress but that seems to stop it working altogether.
The problem is solved if I make the ajax requests synchronous, but that would rather defeat the object here.
I was wondering if anyone had any suggestions as to how to make this work properly.
Best regards
John
Before doing anything more complicated, have you considered not sending the request until the user has stopped typing for at least a certain time interval (say, 1 second)? That should dramatically cut the number of requests being made with little effort on your part.
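That technique is usually called debouncing. A minimal client-side sketch (function and handler names are illustrative): the timer resets on every keystroke, so the AJAX call only fires once typing pauses.

```javascript
// Debounce: delay `fn` until `waitMs` ms have passed with no new call.
function debounce(fn, waitMs) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), waitMs);
  };
}

// Usage: wrap the search handler, then attach it to the input's keyup event.
const search = debounce((term) => {
  // issueAjaxSearch(term);  // hypothetical function that fires the request
  console.log('searching for', term);
}, 1000);

search('my');      // cancelled by the next call
search('my que');  // only this one runs, 1s after the last keystroke
```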
I have a MySQL server running that will be queried regularly through a php front end. I'm slightly worried about server load as there will be a fair amount of people accessing the webpage, with each session querying the database regularly. The results of the query, and in essence the webpage will be the same for all users.
Is there a way of querying the database once, and outputting the data/results to the webpage, from which all users connect to and view? Basically running the query for all users that connect to the webpage, rather than each user querying the database.
Any suggestions appreciated.
Thanks
You don't have to worry: databases are intended for exactly that.
Most sites in the world run exactly the same way: a MySQL server that is queried regularly through a PHP front end. There's nothing wrong with it.
A well-tuned SQL server and a properly designed query will serve much more than you think.
You would need exceptionally high traffic before you should start worrying about such things.
Don't forget that MySQL has its own query cache.
Also please note that there are no users "connected" to the webpage. They connect, get the page contents and disconnect.
You should give the server a try. If the server is overloaded, you can always try Memcached. It can be used from PHP or by MySQL directly. It will save you from querying the DB server with similar queries, i.e. the load on the server will decrease drastically.
If the webpage will be the same for all users, why do you even need to have a MySQL backend?
I think the best solution would be to have a standalone script running periodically (e.g. as a cron) which generates the static HTML for your web pages. That way, there is no need for users to query the database when they are just going to end up with the exact same page anyway.
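A sketch of that generator script (file paths, table, and query are all made up; SQLite in memory stands in for the real MySQL connection). A cron entry such as `*/5 * * * * php generate_page.php` would refresh the static file every five minutes.

```php
<?php
// Query once, render once, write static HTML that the web server can
// then serve to every visitor with no database access at all.
$pdo = new PDO('sqlite::memory:');
$pdo->exec('CREATE TABLE news (title TEXT)');
$pdo->exec("INSERT INTO news VALUES ('First post'), ('Second post')");

$titles = $pdo->query('SELECT title FROM news')->fetchAll(PDO::FETCH_COLUMN);

$html = "<ul>\n";
foreach ($titles as $title) {
    $html .= '<li>' . htmlspecialchars($title) . "</li>\n";
}
$html .= "</ul>\n";

file_put_contents(sys_get_temp_dir() . '/index.html', $html);
```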
If it's a large query with joins, you could create a view in MySQL with the queried data and query the view, updating the view if the data changes.
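A sketch with hypothetical names; note that a plain MySQL view is not materialized, so it mainly saves re-stating the join rather than recomputing it:

```sql
CREATE VIEW report_view AS
SELECT o.id, c.name, o.total
  FROM orders o
  JOIN customers c ON c.id = o.customer_id;

SELECT * FROM report_view WHERE total > 100;
```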