I would like to inform you about another problem with the server. When we update records, they are updated in the database, but when we access them in the browser through the front-end PHP application, the updated records are not displayed immediately. It takes 15-20 minutes or more, or sometimes we have to close the browser and open a different one before the updated records appear there. I have already cleared the browser cache, but the problem still remains. I have checked this in different browsers such as IE 6.0, 7.0, Chrome, Safari and Mozilla, but have been unable to find a solution.
Please suggest what the problem with the server might be.
Please check this URL:
http://www.nicee.org/trial/view.php
To expand on my comment slightly, I should point out that PHP isn't really something I'm familiar with; it's just that the problem described sounded familiar.
To pseudo code it:
-- On Update
Open Transaction with Database
Run SQL command(s)
Close/commit transaction
If you don't do this final step (i.e. committing the transaction & closing it), then it will remain open and will lock this table (depending on how your DB is set up). You must ensure that it is always finished (rolled back if there is an error, too).
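For illustration, a minimal PHP sketch of that flow with mysqli could look like this (connection details and the table/column names are placeholders, not your actual schema):

<?php
// Make mysqli throw exceptions so a failed statement reaches the catch block.
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$conn = new mysqli('localhost', 'user', 'password', 'mydb');

$name = 'example';
$id   = 1;

$conn->begin_transaction();                 // open transaction with database
try {
    $stmt = $conn->prepare('UPDATE records SET name = ? WHERE id = ?');
    $stmt->bind_param('si', $name, $id);    // run SQL command(s)
    $stmt->execute();
    $stmt->close();
    $conn->commit();                        // close/commit transaction
} catch (Exception $e) {
    $conn->rollback();                      // roll back on error so nothing stays locked
    throw $e;
}
$conn->close();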
From a quick google, this may be useful:
http://www.devarticles.com/c/a/MySQL/Using-Transactions-with-MySQL-4.0-and-PHP/
I am aware that similar answers were given before, but I feel that my issue is somehow context-specific. My apologies if it turns out to be an exact duplicate, I am very open to suggestions on how to make the post clearer.
I am having a hard time replicating the issue, so I have to describe it "qualitatively".
What I have is an HTML form that is submitted to a MySQL database hosted on AWS. The issue is that the .php file containing the query that writes to the database doesn't always work. I know it's a vague depiction of the issue, but it works when I test it from the devices I have available, while in my real scenario (a survey on MTurk) it misses most of the connections.
I wonder if it might be a memory issue, because under "Current Activity" I keep seeing open connections, even though my PHP calls mysqli_close($conn);
However, the survey on MTurk has so far been published in batches of 9 people at a time, so even assuming the connections aren't being closed... is that a number that could cause issues if all the SQL does is post a form?
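For illustration, a stripped-down version of such a form-handling script would be along these lines (the host, table and column names here are made up, not my actual code):

<?php
// Sketch only: open a connection, insert the posted fields with a prepared
// statement, then close explicitly. Credentials, table and columns are placeholders.
$conn = mysqli_connect('my-rds-endpoint.amazonaws.com', 'user', 'password', 'survey_db');
if (!$conn) {
    http_response_code(500);
    exit('Connection failed: ' . mysqli_connect_error());
}

$workerId = isset($_POST['worker_id']) ? $_POST['worker_id'] : '';
$answer   = isset($_POST['answer'])    ? $_POST['answer']    : '';

$stmt = mysqli_prepare($conn, 'INSERT INTO responses (worker_id, answer) VALUES (?, ?)');
mysqli_stmt_bind_param($stmt, 'ss', $workerId, $answer);
mysqli_stmt_execute($stmt);
mysqli_stmt_close($stmt);

mysqli_close($conn);   // explicitly release the connection once the insert is done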
It was suggested that I look at SHOW PROCESSLIST while running the query, and this is the output after submitting the form 3-4 times in a row:
Straight to my issue: I have a database with casino tables; each table has some places where I can add and remove people.
All this works fine, but when I open my project in two different browsers, I can't see the updates I've made from the other one.
So I was thinking of an AJAX request every 5 seconds or something like that, but I don't like this approach.
Then I started looking for another solution and found the MQTT server, but couldn't find a good example of how it works with MySQL. I saw the Mosquito-PHP library, and maybe I can get it working on my server, but I'm confused about how to get the status. If someone adds a person to a table, how do I check that there has been a change?
I've read that MQTT uses something like an infinite loop; is it a good idea to check MySQL for changes in that loop?
Thank you in advance for any suggestions, and sorry for my English; I'm still learning.
I believe you need to divide your complicated task into simpler parts, possibly these could be a guideline:
for each browser session you should have a last update date
whenever the browser extracts relevant data, your session's update date should be updated
you should have a last event date on the server
the browser should send an AJAX request to the server every five seconds, called a heartbeat event
upon each heartbeat, the server should check whether your last update is earlier than the last event and respond accordingly (see the sketch after this list)
if the heartbeat response indicates that your data is not at least as new as the last event, the client side should send another request for the new info
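A minimal PHP sketch of such a heartbeat endpoint could look like this (the table/column names and the session key are assumptions for the example):

<?php
// heartbeat.php - sketch of the server side of the scheme above.
// The table name `casino_events` and its `updated_at` column are assumptions.
session_start();

$pdo = new PDO('mysql:host=localhost;dbname=casino', 'user', 'password');
$lastEvent = $pdo->query('SELECT MAX(updated_at) FROM casino_events')->fetchColumn();

// The session's "last update date", stored when the browser last fetched fresh data.
$lastUpdate = isset($_SESSION['last_update']) ? $_SESSION['last_update'] : '1970-01-01 00:00:00';

header('Content-Type: application/json');
echo json_encode(array(
    'stale'      => ($lastEvent !== null && $lastEvent > $lastUpdate),
    'last_event' => $lastEvent,
));

When the client sees stale set to true, it issues a second request for the fresh table data, and the script that serves that data writes the new date into $_SESSION['last_update'], which covers the "session's update date" points above.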
Here is the problem we have been facing for the past few weeks.
1/ Our setup
PHP 5.4 + MySQL
2 dedicated servers, load-balanced
Sessions are replicated between the 2 servers using memcached
3 applications running on these servers :
One custom-developed application, using default PHP session settings
Another custom-developed application, using different session settings (cookie name, path)
One Wordpress CMS
2/ The problem
The problem occurs on our first application.
Some of our users reported that they sometimes get disconnected after a few minutes (when the session is set up to last 3 hours). It can happen to them several times in the same day, then there is no disconnection for a few days, but the problem always comes back.
So far the fraction of users impacted is small, but I would like to solve this before it "spreads" to other users.
The problem seems to occur in different places in the application, though we have identified 3 scenarios where most of the errors occur:
Some involve submitting a form (the $_SESSION variable is modified)
Others simply involve opening a popup page, with no modification of the session data
We have tried to reproduce the different scenarios described by the users: sometimes we have been able to, but most of the time we don't have any problem, which makes it hard to debug.
Other notes:
The problem is recent, this application had been running for years without any problem.
It doesn't seem to be related to our server load, because the problem still occurred during the summer break when our traffic was low
It only affects one session/user at a time: all the other users logged in at the same time don't experience this problem
The problem occurred on all the different browsers (IE, Firefox, Chrome)
3/ Technical analysis
When a disconnect occurs, the user is redirected to a page "Your session has expired or you don't have the right to view". When this page is loaded, we get a technical email with a dump of the $_SESSION variable.
When a session expires the normal way, the email we get shows that the $_SESSION variable is empty (normal behavior).
When an unexpected disconnect occurs, what is interesting is that $_SESSION is not entirely empty: out of the ~20 elements the array contained, only one is left (always the same).
So this would mean the session is not expired, but not enough data is left to "identify" the user, hence the "no rights" page being displayed. As confirmation, when this occurs we can check in memcached that the session still holds some data.
These are the potential problem causes we have identified so far, and what we have done to rule them out:
Memcached indicates between 70 and 80% free space, so we don't think it is the problem.
We removed Memcached and went back to using an NFS shared directory for session files: the problem actually got worse. This would point to an application bug, because NFS is slower to write data, so session loss would occur more often.
We have browsed all the different forums (including SO) talking about PHP session data loss, and reviewed our code accordingly. The code base is big, but we have used automated tools and scripts to avoid missing a file.
session_start() is called at the beginning of each page.
exit() is called after each header("Location...") (sketched below)
register_globals is Off
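In sketch form, the session_start()/exit() pattern from the points above (simplified, not actual application code):

<?php
// Simplified sketch of the convention described above.
session_start();                       // first statement on every page

if (!isset($_SESSION['user_id'])) {
    header('Location: /session-expired.php');
    exit();                            // always exit immediately after a redirect header
}
// ... page logic using $_SESSION ...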
We have tested the possible interactions between our 2 other applications and the problematic one, though they don't share any code, database or session handling. Nothing identified there.
We have analyzed our access logs around the times of the disconnections, to check for behavior patterns: no luck here either.
So we have no idea what causes this problem, as it seems to occur randomly. My questions are:
The problem could come from our code: did we miss anything to check? This seems unlikely as the code works most of the time for all our users, but I am still considering it.
The problem could come from another application/process that would "empty" part of the session variable array. We have also reviewed the code from the other applications, but didn't find anything that could cause this.
And if another process is doing this, why would it only empty some sessions and not all of them?
Thanks for your help.
I don't think you'll get a definitive answer to your question. There are too many probable causes and you haven't shown any code.
Still, my guess is that you have memcached.sess_locking turned Off, or if you have a custom session implementation - that it doesn't implement locking at all.
Eventually, this leads to a race condition between two simultaneous HTTP requests.
My guess is based on the often seen bad advice to turn off locks or free them as soon as possible, in order to achieve higher performance.
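For reference, with the php-memcached session handler the relevant directives are along these lines (server addresses are placeholders, and directive names can vary slightly between extension versions):

; php.ini - memcached-backed sessions with locking left on
session.save_handler   = memcached
session.save_path      = "server1:11211,server2:11211"
memcached.sess_locking = On   ; turning this Off lets concurrent requests overwrite each other's session writes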
If this problem "suddenly" occurred, check what has changed. Did you do any work on the application? If so check committed code (you talked about automated tools so I expect there to be a repository which would allow for accurate finding of code changes).
Did you change anything on the server? Like upgrade software, upgrade/change hardware, or make changes to the other two applications?
One thing that popped into mind: did you check the drives you use for caching? It could be a corrupted part of the file system, which would explain the random user part.
A couple of things I always do:
Try to determine the moment of first occurrence as accurately as possible. At my work this occasionally triggers someone saying "oh yeah, that might have to do with when I changed/updated/created this or that", so this might help. On the other hand, it can sometimes take days, weeks or more before something gets noticed, so start expanding that time-frame if nothing comes up.
You already have a couple of scenarios; find the common factor in them. If they don't share any code, stop looking there. If they DO share code, search there. Of course, sharing (part of) it here might allow us to help you search.
Do an organised search. I usually do the main application check when I am the one working most on the application (or even better when I created it). A colleague will check surrounding applications that might have influence on it. In your case those 2 other applications. Finally our sysadmin will check for newly installed or updated software on the server(s) and he will also check with our network guys if anything changed hardware wise or network related (for other people this could be the hosting provider).
It could be as simple as a WordPress plugin that uses sessions and calls either session_name() or session_id() with a different value, clashing with your custom application that uses the default session settings.
Since WordPress itself does not use sessions, plugins are often written from the perspective of having free rein with sessions. I just did a search on a WordPress test site and found sessions used in a gallery plugin, a plugin for putting a background image on the page, a shopping cart plugin, and a plugin I was writing that needed to carry an uploaded file from one admin page to another.
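As a purely hypothetical illustration (not taken from a real plugin), something like this in any plugin would operate on the very same session record as a custom application that relies on the default session name and save path:

<?php
// Hypothetical plugin code, for illustration only. The default session name
// and save path mean this is the same session the custom application uses.
if (session_id() === '') {
    session_start();
}
$_SESSION = array('gallery_position' => 1);   // replaces the shared array, silently dropping the app's keys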
I'm working on a simple PHP application, using CouchDB and PHP-on-Couch to access some views, and it's working great. My next step is to introduce Ajax to update the frontend with data from the database.
I understand you can use the _changes notifications to detect any changes made on the database easily enough. So, it's a matter of index.html monitoring for changes (through long polling), which calls loadView.php to update the page content.
Firstly, I hope the above is the correct method of going about it...
Secondly, when browsing to index.html, the page seems to never fully load (the page load bar never completes). When a change is made, Firebug shows the results as expected, but not any subsequent changes. At this point, the page seems to have stopped its infinite loading.
So far, I'm using jQuery to make the Ajax call...
$.getJSON('http://localhost:5984/db?callback=?', function(db) {
    console.log(db.update_seq);
    $.getJSON('http://localhost:5984/db/_changes?since='+db.update_seq+'&feed=continuous&callback=?', function(changes) {
        console.log(changes);
    });
});
Any ideas what could be happening here?
I believe the answer is simple enough.
A longpoll query is AJAX, guaranteed to respond only once, like fetching HTML or an image. It may take a little while to respond while it waits for a change; or it may reply immediately if changes have already happened.
A continuous query is COMET. It will never "finish" the HTTP reply, it will keep the connection open forever (except for errors, crashes, etc). Every time a change happens, zoom, Couch sends it to you.
So in other words, try changing feed=continuous to feed=longpoll and see if that solves it.
For background, I suggest the CouchDB Definitive Guide on change notifications and of course the excellent Couchbase Single Server changes API documentation.
We just ran into a problem with our cloud host - they've changed their apache settings to force a much shorter page timeout, and now during certain processes (report creation, etc.) that take more than 15 seconds (which the client is fine with; we're processing huge amounts of data) we get an error:
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request POST /administrator/index.php.
Reason: Error reading from remote server
I have confirmed that our code is still running correctly in the background, and double-checked with the host that this is really just a timeout. Their suggestion was to create a progress bar that is associated with the backend code; that way Apache knows something is still going on and won't time out.
I've done progress bars associated with page load events (i.e. when all images are loaded, etc.) but have no idea how to go about creating a progress bar associated with backend code. This is a Joomla site, coded in MVC PHP, and the code that's causing the issue is part of the model; the various pieces that could be involved are all doing humongous queries. The tables are indexed correctly and the queries are optimized; the issue is not how to make the processes take less time, because we're on a cloud server and the timeout limit could be changed to 5 seconds tomorrow without any kind of warning. What I need is someone to point me in the right direction of how to create the progress bar so it's actually associated with the function being run in the model.
Any ideas? I'm a complete beginner as far as this goes.
The easiest way I can think of is to use a two-step process:
Have the model write out events to a textfile or something when it gets to a critical point.
Use some ajax method to have the page regularly check that file for updates, and update the progress bar as such.
Whatever the background process does, it should update something like a file or database entry with a percentage completed every X seconds or at set places in its flow. Then you can call another script from JavaScript every X seconds and have it return the percentage completed from that record (a concrete sketch follows the outline below).
updateRecord(0);
readLargeFile();
updateRecord(25);
encodeLargeFile();
updateRecord(50);
writeLargeFile();
updateRecord(75);
celebrate();
updateRecord(100);
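As a rough sketch of what updateRecord() and the polled script could look like (the temp-file path and field name are assumptions; a database row updated and queried the same way would also work):

<?php
// Sketch only: updateRecord() writes the current percentage where a tiny
// status script can read it back.
function updateRecord($percent)
{
    file_put_contents('/tmp/report_progress.txt', (string) $percent);
}

// progress.php (separate file): the page polls this every few seconds via
// AJAX and moves the progress bar to the returned percentage.
header('Content-Type: application/json');
echo json_encode(array('percent' => (int) @file_get_contents('/tmp/report_progress.txt')));

Because each poll is a quick request that returns immediately, the proxy timeout never comes into play for the page the user is actually watching.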