I guess I'm a little confused about what's going on here.
In phpMyAdmin, in the Status -> Monitor section, when my website
is not even doing anything SQL-based at the time, I'm getting 6000 questions and 200 connections.
This very much does not seem normal. Can anyone give me some tips about what's really going on here? This can't be normal, right?
edit:
I'm trying to connect to about four different tables every five seconds and pull information from them, and I believe it's causing my server to crash. Is this bad practice?
I'm using jQuery and PHP. I think that even with bad programming I can't be hitting the 400 queries a second phpMyAdmin says I'm hitting.
You might be able to see which queries are running from Status > Monitor, in a particular time range taken from the moving graph. See http://www.youtube.com/watch?v=7ZRZoCsrKis starting at 6:00.
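If you prefer to check from the SQL console, the counters phpMyAdmin is graphing are also available directly; here is a minimal sketch using standard MySQL status commands (nothing here is specific to your setup):
SHOW GLOBAL STATUS LIKE 'Questions';    -- total statements executed since the server last started
SHOW GLOBAL STATUS LIKE 'Connections';  -- total connection attempts since the server last started
SHOW GLOBAL STATUS LIKE 'Uptime';       -- seconds the server has been running
SHOW FULL PROCESSLIST;                  -- what is actually executing right now
Dividing Questions by Uptime gives you the average queries per second over the server's lifetime, which is usually a more meaningful number than the raw counter.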
I have a Debian VPS configured with a standard LAMP stack.
On this server there is only one site (a shop) which has a few cron jobs - mostly PHP scripts. One of them is an update script executed by the Lynx browser, which sends tons of queries.
When this script runs (it takes 3-4 minutes to complete) it consumes all MySQL resources, and the site almost doesn't work (pages generate in 30-60 seconds instead of 1-2s).
How can I limit this script (i.e. extend its execution time by limiting the resources available to it) to allow other services to run properly? I believe there is a simple solution to the problem but I can't find it. It seems my Google superpowers have been limited the last two days.
You don't have access to modify the offending script, so fixing this requires database administrator work, not programming work. Your task is called tuning the MySQL database.
(I guess you already asked your vendor for help with this, and they said no.)
Run top or htop while the script runs. Is the CPU pinned at 100%? Is RAM exhausted?
1) Just live with it, and run the update script at a time of day when your web site doesn't have many visitors. Fairly easy, but not a real solution.
2) As an experiment, add RAM to your VPS instance. It may let MySQL do things all-in-RAM that it's presently putting on the hard drive in temporary tables. If it helps, that may be a way to solve your problem with a small amount of work, and a larger server rental fee.
3) Add some indexes to speed up the queries in your script, so each query gets done faster. The question is, what indexes will help? (Just adding indexes randomly generally doesn't help much.)
First, figure out which queries are slow. Give the command SHOW FULL PROCESSLIST repeatedly while your script runs. The Info column in that result shows all the running queries. Copy them into a text file to keep them. (Or you can use MySQL's slow query log, about which you can read online.)
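If you go the slow query log route, something along these lines should work on a stock MySQL server (the threshold and file path are only examples, adjust them for your system):
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';  -- example path
SET GLOBAL long_query_time = 1;                              -- log anything slower than 1 second
SET GLOBAL slow_query_log = 'ON';
-- ...let the update script run, inspect the log file, then turn logging off:
SET GLOBAL slow_query_log = 'OFF';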
Second, analyze the worst offending queries to see whether there's an obvious index to add. Telling you how to do that generally is beyond the scope of a Stack Overflow answer. You might ask another question about a specific query. Before you do, please
read this note about asking good SQL questions, and pay attention to the section on query performance.
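To make that concrete with an invented example (your tables and queries will differ): if the slow query turned out to be something like
SELECT * FROM orders WHERE customer_id = 42 AND status = 'open' ORDER BY created_at;
then a composite index on the columns in the WHERE clause (and, optionally, the ORDER BY column) would be the obvious candidate:
ALTER TABLE orders ADD INDEX idx_customer_status_created (customer_id, status, created_at);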
4) It's possible your script is SELECTing many rows, or using SELECT to summarize many rows, from tables that also need to be updated when users visit your web site. In that case your visitors may be waiting for those SELECTs to finish. If you could change the script, you could put this statement right before the long-running SELECTs.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
This allows the SELECT statement after it to do a "dirty read", in which it might get an earlier version of an updated row. See here.
Or, if you can figure out how to insert one statement into your obscured script, put this one right after it opens a database session.
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Without access to the source code, though, you only have one way to see if this is the problem. That is, access the MySQL server from a privileged account, right before your script runs, and give these SQL commands.
SHOW VARIABLES LIKE 'tx_isolation';
SET GLOBAL TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
then see if the performance problem is improved. Set it back after your script finishes, probably like this (depending on the tx_isolation value retrieved above):
SET GLOBAL TRANSACTION ISOLATION LEVEL READ COMMITTED;
Warning: a permanent global change to the isolation level might foul up your application if it relies on transaction consistency. This is just an experiment.
5) Harass the script's author to fix this problem.
Slow queries? High CPU? High I/O? Then you must look at the queries. You cannot "tune your way out of a performance problem". Tuning might give you a few percent improvement; fixing the indexes and queries is likely to give you a lot more improvement.
See this for finding the 'worst' queries; then come back with SELECTs, EXPLAINs, and SHOW CREATE TABLEs for help.
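For reference, the kind of thing to gather looks like this, using a made-up table name; run it for each of the worst queries you captured:
EXPLAIN SELECT id, total FROM orders WHERE status = 'open' ORDER BY created_at DESC LIMIT 50;
SHOW CREATE TABLE orders;
EXPLAIN shows which indexes (if any) the query is using, and SHOW CREATE TABLE shows the indexes that currently exist.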
I have a large application written using Laravel 5.2. The application seems to run well for a while (a day or two) and then it starts slowing down (each request takes 15+ seconds).
I am trying to figure out what could be causing the speed degradation. To get started, I listed the top 4 categories (below) that I should review, in order.
SQL server problems: locks, long-running queries.
PHP problems which could be causing extra/unneeded work, like long-running loops.
Web server issues, like memory leaks or slow response times.
Network issues.
For the first category (i.e. SQL problems) I evaluated all of the queries and everything seems to be light and fairly fast. There are no long-running queries, and I find no SQL locks. I have not eliminated this as a possible issue, but for now it seems fair to look elsewhere. One thing to note is that the application generates lots of queries, which suggests that I may be running into the N+1 case.
While categories 3 and 4 are important, I'd like to focus some time on the second category (i.e. code problems), which is where I need help. I need to be able to figure out a couple of things to help me make an educated judgement about whether there is a code issue or not. Here are a few things that I'd like to know/start logging.
How long each class takes to execute, to see if one takes a particularly long time to run.
How many queries are generated from each class, and a list of them, so I can identify the source of the N+1 case.
I am using the Clockwork extension in Google Chrome, which is helping me a lot. But I am unable to break down the results at the class level, which would give me a deeper understanding of what is going on.
How can I gather the 2 items listed above? Is it possible to hook into Clockwork and add that info as a filter, so I can see all of it in the Chrome extension?
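One low-tech option I'm considering for item 2, in case Clockwork can't be broken down any further, is to let MySQL record every statement for a short window and count the duplicates; this is only a sketch, and turning the general log on briefly (not permanently) is assumed:
SET GLOBAL log_output = 'TABLE';    -- write to the mysql.general_log table instead of a file
SET GLOBAL general_log = 'ON';      -- record every statement the application sends
-- ...exercise one slow request, then look for near-identical statements repeated many times (the N+1 smell):
SELECT argument, COUNT(*) AS times_run
FROM mysql.general_log
WHERE command_type = 'Query'
GROUP BY argument
ORDER BY times_run DESC
LIMIT 20;
SET GLOBAL general_log = 'OFF';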
I've been given a big project by a big client and I've been working on it for 2 months now. I'm getting closer and closer to a solution but it's just so insanely complex that I can't quite get there, and so I need ideas.
The project is quite simple: there is a database of 1 million+ lat/lng coordinates with lots of additional data for each record. A user visits a page and enters some search terms, which filter out quite a lot of the records. All of the records that match the filter are displayed (often clustered) on a Google Map.
The problem with this is that the client demands it be fast, lean, and low-bandwidth. Hence, I'm stuck. What I'm currently doing is: present the first clusters, and when the user hovers over a cluster, begin loading in the data for that cluster's children.
However, I've upped it to 30,000 of the millions of listings and it's starting to drag a little. I've made as many optimizations as I possibly can. When the filter is changed, I AJAX a query to the DB and return all the IDs of the matches, then update the map to reflect this.
So, optimization is not an option. I need an entirely new conceptual model for this. Any input at all would be highly appreciated, as this is an incredibly complex project and I can't find anything even remotely close to it. I even looked at MMORPGs, which have a lot of similar problems (and I have made a few), but the concept of having a million players in one room is still something MMORPG makers cringe at. People commonly suggest there may be bottlenecks, but let me say that it's not a case of optimizing my way out of this. I need a new model in which a huge database stays on the server but is displayed fluidly to the user.
I'll be awarding 500 rep as soon as it becomes available for anything that solves this.
Thanks- Daniel.
I think there are a number of possible answers to your question depending on where it is slowing down, so here are a few thoughts.
A wider table can affect the speed with which a query is returned. Longer records mean that more disk is being accessed to get the right data, so you might want to think about limiting your initial table to hold only the information that you filter on. Having said that, it will also depend on the db engine you are using; some suffer more than others.
Ensuring that your tables are correctly indexed makes a HUGE difference in performance. You need to make sure that the query is using the indexes to quickly get to the records that it needs.
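As a purely illustrative sketch (the table and column names below are invented), if the filter boils down to a bounding box plus an attribute or two, a composite index lets the query narrow things down without touching all million rows:
ALTER TABLE listings ADD INDEX idx_cat_lat_lng (category, lat, lng);
SELECT id, lat, lng
FROM listings
WHERE category = 3
  AND lat BETWEEN 51.28 AND 51.69
  AND lng BETWEEN -0.51 AND 0.33;
Run EXPLAIN on your real query to confirm the index is actually being picked up.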
A friend was working with Google Maps and said that the API really suffered if too much was displayed on the maps. This might just be totally out of your control.
Having worked for Epic Games in the past, the reason that "millions of players in a room" is something to cringe at is more often hardware driven. In a game, having that number of players would grind the graphics card to a halt as it tries to render all the polygons of the models. Secondly (and likely more importantly), the problem is that you have to send each client information about what each item/player is doing. This means that your bandwidth use will spike very heavily. Your server might handle the load, but the players' internet connections might not.
I do think that you need to edit your question, though, with some extra information on WHAT is slowing down. Your database? Your query? The Google API? The transfer of data between the server and the client machine?
Let's be honest here; a db with 1 million records being accessed by (presumably) a large number of users is not going to run very well unless you put some extremely powerful hardware behind it.
In this type of case, I would suggest using several different database servers and setting up some decent load-balancing regimes in order to keep them running as smoothly as possible. First and foremost, you will need to find out the "average" load you can place on a db server before it starts to lag; let's say, for example, this is 50,000 records. Setting a low MaxClients per server may help with server performance and prevent crashes, but it might aggravate your users when they can't execute any queries due to high load... but it's something to keep in mind if your budget doesn't allow for much wiggle room hardware-wise.
On the topic of hardware, however, that's something you really need to take a look at. Databases typically don't use a huge amount of CPU/RAM, but they can be quite taxing on your HDD. I would recommend going for SAS or SSD drives before looking at other components in your setup; these will make a world of difference for you.
As far as load balancing goes, a very common technique used by most content providers is that when one query/particular content item (such as a popular video on YouTube, etc.) is pulling in an above-average amount of traffic, you cache its result. A quick and dirty approach to this is to use an if statement in your search bar which grabs a static HTML page instead of actually running the query.
Another approach is to have a separate, standalone db server used only for running the queries which are taking in an excessive amount of traffic.
With that said, never underestimate your code optimisation. While the differences may seem subtle to you, when run across millions of queries by thousands of users, those tiny differences really do add up.
Best of luck with it - let me know if you need any further assistance.
Eoghan
Google has a service named "BigQuery". It is a SQL server in the cloud. It uses Google's fast servers for SQL and can search millions of data rows quickly. Unfortunately it is not free, but maybe it will help you out:
https://developers.google.com/bigquery/
I have a working live-search system that on the whole works very well. However, it often runs into the problem that many versions of the search query are running simultaneously on the server, if users are typing faster than the results can be returned.
I am aborting the AJAX request on receipt of a new one, but that of course does not affect the query already in progress on the server, and you end up with a severe bottleneck and a long wait to get your final results. I am using MySQL with MyISAM tables for this, and there does not seem to be any advantage in converting to InnoDB as the result sets will be the same rows.
I tried using a session variable to make PHP wait if this session already has a query in progress, but that seems to stop it working altogether.
The problem is solved if I make the AJAX requests synchronous, but that would rather defeat the object here.
I was wondering if anyone had any suggestions as to how to make this work properly.
Best regards
John
Before doing anything more complicated, have you considered not sending the request until the user has stopped typing for at least a certain time interval (say, 1 second)? That should dramatically cut the number of requests being made with little effort on your part.
I have developed a news website in a local language (UTF-8) which serves an average of 28k users a day. The site has recently started to show a lot of errors and slow down. I got a call from the host saying that the db is using almost 150GB of space. I believe that's way too much for the db and think there is something critically wrong, but I cannot understand what it could be. The site is in Drupal and the db is MySQL (InnoDB). Can anyone give me directions as to what I should do?
UPDATE: Seems like the InnoDB dump is using the space. What can be done about it? What's the standard procedure to deal with this issue?
The question does not have enough info for a specific answer. Maybe your code is writing the same data to the DB multiple times, maybe you are logging to a table and the logs have become very big, or maybe somebody managed to get access to your site/DB and is misusing it.
You need to login to your database and check which table is taking the most space. Use SHOW TABLE STATUS (link) which will tell you the size of each table. Then manually check the data in the table to figure out what is wrong.
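A quick way to rank the tables by size is a standard information_schema query; 'your_drupal_db' below is a placeholder for your actual database name:
SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024) AS size_mb,
       table_rows
FROM information_schema.TABLES
WHERE table_schema = 'your_drupal_db'
ORDER BY (data_length + index_length) DESC
LIMIT 10;
In Drupal installs, the watchdog and cache_* tables are frequent offenders, but that is only a guess until you see the output.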