I have a small PHP framework that basically just loads a controller and a view.
An empty page loads in about 0.004 seconds, using microtime() at the beginning and end of execution.
Here's the "problem". If I do the following:
$link = @mysql_connect($server,$user,$pass,$link);
@mysql_select_db($database, $link);
the page loads in about 0.500 seconds. A whopping 12500% jump in time to render the empty page.
Is this normal, or am I doing something seriously wrong here... (I'm hoping for the latter).
EDIT: Could someone say what a normal time penalty is for just connecting to a mysql db like above.
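For reference, a minimal way to time just the connection, separate from the rest of the framework (a sketch using the old mysql_* extension from the question, which has since been removed from PHP; $server, $user, $pass and $database are the question's own variables):
<?php
// Time only the connect + select_db calls, nothing else.
$start = microtime(true);

$link = mysql_connect($server, $user, $pass);
mysql_select_db($database, $link);

printf("connect + select_db: %.4f seconds\n", microtime(true) - $start);
?>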
Error suppression with @ will slow down your script. Also, an SQL connection depends on the speed of the server, so if the server is slow to respond, your script will execute slowly.
Actually, I don't get what you're trying to do with the $link variable.
$link = @mysql_connect($server,$user,$pass,$link); is probably wrong and possibly not doing what you want (i.e. nothing, in your example), unless you have more than one link to databases (advanced stuff).
The php.net documentation states:

new_link
If a second call is made to mysql_connect() with the same arguments, no new link will be established, but instead, the link identifier of the already opened link will be returned. The new_link parameter modifies this behavior and makes mysql_connect() always open a new link, even if mysql_connect() was called before with the same parameters. In SQL safe mode, this parameter is ignored.
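In other words, a small sketch of what the new_link parameter changes (reusing the question's variables):
<?php
// Two calls with identical arguments normally return the same link...
$a = mysql_connect($server, $user, $pass);
$b = mysql_connect($server, $user, $pass);        // reuses $a's link

// ...unless new_link is true, which forces a second, separate connection.
$c = mysql_connect($server, $user, $pass, true);  // new link
?>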
On my webserver, the load time is always about the same on average (4000 µs the first time and 600 µs the second time).
Half a second to connect to a mysql database is a bit slow but not unusual either. If it's on another server on the network with an existing load, it might be absolutely normal.
I wouldn't worry too much about this problem.
(Oh, old question! Never mind, still replying.)
Have you tried verifying your data integrity? Do a REPAIR TABLE and an OPTIMIZE TABLE on all tables. I know that sometimes when a table has gone corrupt, the connection can take an enormous amount of time or fail altogether.
Maybe the reason is slow domain name resolution.
skip-name-resolve
Add this to my.cnf, then restart mysqld.
Note that with skip-name-resolve enabled, you can't use hostnames in MySQL user permissions.
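For reference, the my.cnf fragment would look roughly like this (assuming the option goes under the [mysqld] section):
[mysqld]
skip-name-resolve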
I'm in a situation which really puzzles me, and nobody seems to know what the problem is.
I've got a website written in PHP/Laravel, which connects to a MySQL server. If I call an API endpoint several times in quick succession (by clicking a button on the site about 10-15 times very fast), the last few calls take very long. Where a call normally takes about 200ms, it suddenly takes multiples of 30 full seconds. So one call takes 30 seconds, another one (or more) takes 60, another one or more take 90 seconds, and so on. All calls end successfully, but they just take an enormous time to finish.
I first thought it could maybe be the PHP max_execution_time, so I set that to 15. Unfortunately, there was no change; the calls still take multiples of 30 seconds. Plus, if that were the problem, it would return an error, whereas in this case I get correct 200 responses.
After some fiddling around I ran watch -n 0.3 "mysqladmin processlist" to see if MySQL was maybe the cause. During the long-running calls I see the following:
I'm not exactly sure what this is and why it might happen. I thought that mysql might be hanging, so I surrounded the mysql query in php with syslogs to see if the code actually hangs on the mysql call:
syslog(LOG_ERR, "BEFORE");
$results = TheTable::where('edg', $edg)->get();
$theResponse = response(json_encode($results), 200)->header('Content-Type', 'application/json');
syslog(LOG_ERR, "AFTER");
return $theResponse;
But that doesn't seem to be the case. The BEFORE and AFTER syslogs always appear immediately after each other, even when I see the queries in the mysqladmin processlist in the "sleep" state.
Furthermore, as you can see it is a very simple read query, so I guess some kind of MySQL lock can't be the problem (because I think locks are only used on writes).
And from this point I'm kinda lost on how to proceed debugging this. Does anybody know what these results tell me and where I can find the solution to this problem? All tips are welcome!
[PS: Some more info on the setup]
The setup is slightly more complex than I described above, but since I don't think that has anything to do with it, I saved that information for last. We've actually got two Ubuntu 16.04 servers with a load balancer in front of them. The two servers both contain a MySQL server which is in some kind of master/master sync mode. Although I feel very comfortable on Linux, I didn't set this up and I'm not a sysadmin, so I might be missing something there. I asked the guys who administer this setup, but they say the problem must be in the code. I'm not sure where that could be though.
Again; all tips are welcome!
Ideally the code should close the connection once its task is done, or reuse the connection where required, depending on how the code is written.
If you want MySQL to take care of this, I would suggest checking the wait_timeout setting/variable. Its value is in seconds.
For example, if you set wait_timeout=10, then any connection that has been in the sleep state for more than 10 seconds will be closed by MySQL automatically.
Note: the setting above is global and dynamic, so it can be changed without restarting MySQL.
Example command:
mysql> set global wait_timeout=10;
Some pages on my site can take a bit of time to process and thus the connection to mysql closes.
I'm not in a position to increase the timeout variable for mysql, so I'd therefore like to find a way to tell codeigniter to attempt to reconnect X amount of times when it tries to run a query before dying.
I've done this in non-CI projects by running all queries through a custom query function which (roughly as sketched after this list):
checks if the connection is there
if it isn't, tries to reconnect and stores the number of attempts in a global variable
runs the query
returns the results.
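Outside CI, that wrapper might look roughly like this (the function name, globals and attempt limit are made up for illustration; it uses the old mysql_* extension):
<?php
// Hypothetical wrapper: ping the server, reconnect a few times if the link
// has gone away, then run the query.
function query_with_retry($sql, $maxAttempts = 3)
{
    global $server, $user, $pass, $database;

    $attempts = 0;
    while (!mysql_ping() && $attempts < $maxAttempts) {
        $attempts++;
        $link = mysql_connect($server, $user, $pass, true);  // force a fresh link
        mysql_select_db($database, $link);
    }

    return mysql_query($sql);
}
?>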
What is the best method of doing something like this in CI?
I'm testing an app hosted at app.promls.net, but something is off with the script execution.
On localhost, when the page is just plain text created via PHP, execution takes only -> timer: 0.12875008583069 seconds.
When the content is created dynamically and comes from the MySQL database:
timer: 0.44203495979309 seconds. / timer: 0.65762710571289 seconds. / timer: 0.48272085189819 seconds.
The times are different on the server: execution takes about 8 seconds.
Could anyone give me a recommendation on how to test and optimize my PHP execution?
I was optimizing the MySQL database using DESCRIBE and EXPLAIN, because some queries return tons of rows for a simple search.
Now that I have finished, I would like to explore some new options for PHP execution.
I know that adding compression to the HTML helps, but that only reduces the transport time between the server and the client when an HTML response is returned. Now I want to optimize the PHP execution itself, and apply any MySQL tricks that could help improve the response time.
Note: I have been thinking about using HipHop for PHP and memcached or Cassandra, but I guess those aren't really the answer to this problem, since my app has hardly any activity (user actions) and not much data.
Thanks in advance; I'm open to any comments or suggestions.
With such a big difference in execution time, we would need details on the host configuration (shared? dedicated?).
Is MySQL skipping the DNS lookup? If not, try the skip-name-resolve setting in my.cnf, or use IPs rather than hostnames in the privilege/user tables. The only time I have seen such latency, it was because of a DNS timeout in the connection between MySQL and PHP.
First off, try doing the following to your MySQL DB:
Run "OPTIMIZE TABLE mytable" on all of your tables
Run "ANALYZE TABLE mytable" on all of your tables.
Add indexes to the table fields that you are using
Be sure to substitute each table name for "mytable" in the above statements.
See if doing the first two makes a difference, then add the indexes.
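If there are many tables, a sketch like this can run the first two statements over all of them (it assumes the old mysql_* extension and an already open connection):
<?php
// Run OPTIMIZE and ANALYZE on every table in the currently selected database.
$tables = mysql_query('SHOW TABLES');
while ($row = mysql_fetch_row($tables)) {
    mysql_query('OPTIMIZE TABLE `' . $row[0] . '`');
    mysql_query('ANALYZE TABLE `' . $row[0] . '`');
}
?>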
I have a basic HTML file, using jQuery's ajax, that is connecting to my polling.php script every 2 seconds.
The polling.php simply connects to MySQL, checks for IDs newer than my hidden, stored current ID, and then echoes if there is anything new. Since the JavaScript is connecting every 2 seconds, I am getting thousands of connections in TIME_WAIT, just for my client. This is because my script is re-connecting to MySQL over and over again. I have tried mysql_pconnect but it didn't help any.
Is there any way I can get PHP to open 1 connection, and continue to query using it? Instead of reconnecting every single time and making all these TIME_WAIT connections. Unsure what to do here to make this work properly.
I actually ended up doing basic long polling. I made a simple PHP script that runs an infinite while loop and queries every 2 seconds. If it finds something new, it echoes it out and breaks the loop. My jQuery simply connects to it via ajax and waits for a response; on response, it updates my page and restarts the polling. Very simple!
PS, the Long Polling method also reduces browser memory issues, as well as drastically reduces the TIME_WAIT connections on the server.
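Roughly, the PHP side of that long poll looks like the sketch below (the table, column and last_id parameter are made up for illustration, and it assumes an already open mysql_* connection; a real script should also cap the loop so it ends before the web server's own timeout):
<?php
// Long-polling endpoint: hold the request open until something newer than the
// client's last seen ID appears, then respond and let the client reconnect.
set_time_limit(65);
$lastId = (int) $_GET['last_id'];

while (true) {
    $result = mysql_query("SELECT id, body FROM messages WHERE id > $lastId ORDER BY id");
    if (mysql_num_rows($result) > 0) {
        while ($row = mysql_fetch_assoc($result)) {
            echo json_encode($row), "\n";
        }
        break;                // data sent; the client restarts the poll
    }
    sleep(2);                 // check again in two seconds
}
?>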
There's no trivial way of doing this, as pconnect doesn't work across multiple web page calls. However, some approaches to minimise the database throughput would be:
Poll less frequently (every 2 seconds is perhaps a bit excessive?).
Have a "master" PHP script that runs every 'n' seconds, extracts the data from the database and saves it in the appropriate format (serialised PHP array, XML, HTML data, etc.) in the filesystem. (I'd recommend writing to a temp file and then renaming over the existing one to minimise any partial file collection issues.) The Ajax requested PHP page would then simply use the information in this data file.
In terms of executing the master PHP script, you could either use cron or simply have it run for the first user who requests the page once the contents of the data file are deemed too stale. (You could use the data file's timestamp for this purpose via the filemtime function.) I'd personally use the latter approach, as cron is overkill for this purpose.
You could take this even further and use memcached instead of a flat file, etc. if so required. (That said, it would perhaps be an over-complex solution at this stage of affairs.)
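A rough sketch of the "regenerate on demand" variant (the file path, staleness threshold and fetch_latest_rows() helper are all hypothetical):
<?php
$cacheFile = '/tmp/poll-data.json';
$maxAge    = 2;   // seconds before the file counts as stale

if (!file_exists($cacheFile) || time() - filemtime($cacheFile) > $maxAge) {
    // Write to a temp file first, then rename over the old one, so readers
    // never see a partially written file.
    $tmp = $cacheFile . '.tmp';
    file_put_contents($tmp, json_encode(fetch_latest_rows()));
    rename($tmp, $cacheFile);
}

echo file_get_contents($cacheFile);
?>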
I'm building a web application that allows users to run a query against one of two databases. They are only allowed to submit 1 query to each database at a time. Currently, I'm setting $_SESSION['runningAdvancedQuery'] (for instance) to let me know if they're running a query. This is to prevent someone from opening up a 2nd browser tab and hitting the resource again. It works great except...
If I start a query in a tab, then close that tab before it finishes, the flag does not get unset. Therefore, if I reopen the page, I cannot run any queries because it thinks I have one still running. Anyone have advice for getting around this?
Set this value not to, say, 1, but to a Unix timestamp, and check it by comparing the last-query timestamp to now, requiring a certain amount of time to pass before the next query may execute. Remember to set the block time to a safe value - the longest time a query can take to run. If the user closes his tab, he will be "unlocked" again after a short time.
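As a sketch (the session key name and the "longest possible query" window are assumptions):
<?php
session_start();
$maxQuerySeconds = 120;   // longest a query is ever expected to run

$last = isset($_SESSION['runningAdvancedQuery']) ? $_SESSION['runningAdvancedQuery'] : 0;
if (time() - $last < $maxQuerySeconds) {
    die("Your other query isn't finished yet.");
}

$_SESSION['runningAdvancedQuery'] = time();   // take the lock
// ... run the query ...
unset($_SESSION['runningAdvancedQuery']);     // release it when the query is done
?>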
<?php
ignore_user_abort(true);
session_start();

if (!isset($_SESSION['runningAdvancedQuery'])) {
    // set session variable
    $_SESSION['runningAdvancedQuery'] = true;
    // run query
    // unset session variable
    unset($_SESSION['runningAdvancedQuery']);
} else {
    // show error: "your other query isn't finished yet"
    echo "your other query isn't finished yet";
}
?>
Instead of setting $_SESSION['runningAdvancedQuery'] to true, you could set it to the output of SELECT CONNECTION_ID(); and check SHOW PROCESSLIST; to see whether it's still running. This would be an add-on to the other answers though: especially when using persistent connections, other processes could have the connection_id in use.
As the same user, you are always allowed to see your own processes. So, to combine the other checks:
Store the timestamp, connection_id and actual SQL query (I'd strip all whitespace and convert to either upper or lower case, so that slight differences in presentation don't matter).
Of course use the ignore_user_abort() functionality, do a session_write_close() before the query, restart the session & set it to finished after the query.
On a check for the running query, check whether
The connection-id is still present.
The timestamp + seconds the query is running are reasonably close to the current time.
The query (lowercase, stripped whitespace) is about the same query as requested (use SHOW FULL PROCESSLIST; to get the whole query).
Optionally, take it a step further, and give people the possibility to KILL QUERY <connection_id>; (only if the previous 3 checks all resulted in a positive).
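Putting the bookkeeping together, a rough sketch using mysqli (the credentials, $sql and the session key are made up for illustration):
<?php
session_start();
$db  = new mysqli('localhost', 'user', 'pass', 'mydb');
$sql = 'SELECT * FROM big_table';   // the long-running query

// Before running it: remember the connection id, start time and normalized SQL.
$row = $db->query('SELECT CONNECTION_ID()')->fetch_row();
$_SESSION['runningAdvancedQuery'] = array(
    'connection_id' => (int) $row[0],
    'started_at'    => time(),
    'sql'           => strtolower(preg_replace('/\s+/', '', $sql)),
);
session_write_close();              // don't hold the session lock while the query runs
// ... run $sql ...

// Later, to check whether that query is really still running:
$lock = $_SESSION['runningAdvancedQuery'];
$stillRunning = false;
$result = $db->query('SHOW FULL PROCESSLIST');
while ($proc = $result->fetch_assoc()) {
    if ((int) $proc['Id'] === $lock['connection_id']
        && strtolower(preg_replace('/\s+/', '', (string) $proc['Info'])) === $lock['sql']) {
        $stillRunning = true;   // same connection, same query: treat it as still busy
    }
}
?>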