When I enter this query:
sqlite> DELETE FROM mails WHERE (id = 71);
SQLite returns this error:
SQL error: database is locked
How do I unlock the database so this query will work?
On Windows you can try the program http://www.nirsoft.net/utils/opened_files_view.html to find out which process is holding the db file open. Try closing that program to unlock the database.
On Linux and macOS you can do something similar. For example, if your locked file is development.db:
$ fuser development.db
This command will show what process is locking the file:
> development.db: 5430
Just kill the process...
kill -9 5430
...And your database will be unlocked.
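If fuser isn't available, a rough cross-platform way to do the same search from a script is sketched below; it assumes the third-party psutil package is installed (pip install psutil) and that the locked file is development.db:

import psutil

target = "development.db"   # the locked database file
for proc in psutil.process_iter(["pid", "name"]):
    try:
        for f in proc.open_files():
            if f.path.endswith(target):
                print(proc.pid, proc.info["name"], f.path)
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        continue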
I caused my sqlite db to become locked by crashing an app during a write. Here is how I fixed it:
echo ".dump" | sqlite old.db | sqlite new.db
Taken from: http://random.kakaopor.hu/how-to-repair-an-sqlite-database
The SQLite wiki DatabaseIsLocked page offers an explanation of this error message. It states, in part, that the source of contention is internal (to the process emitting the error). What this page doesn't explain is how SQLite decides that something in your process holds a lock and what conditions could lead to a false positive.
This error code occurs when you try to do two incompatible things with a database at the same time from the same database connection.
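A minimal way to reproduce this kind of self-inflicted contention from a single Python process (the file and table names are taken from the question; using two connections and a zero timeout is an assumption for the demo):

import sqlite3

conn1 = sqlite3.connect("mails.db")
conn2 = sqlite3.connect("mails.db", timeout=0)        # fail immediately instead of retrying

conn1.execute("CREATE TABLE IF NOT EXISTS mails (id INTEGER PRIMARY KEY)")
conn1.commit()

conn1.execute("INSERT INTO mails (id) VALUES (71)")   # opens a write transaction, never committed
try:
    conn2.execute("DELETE FROM mails WHERE id = 71")  # second writer in the same process
except sqlite3.OperationalError as e:
    print(e)                                          # "database is locked"
finally:
    conn1.commit()                                    # finishing the first transaction releases the lock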
Changes related to file locking were introduced in v3 and may be useful for future readers; they can be found here: File Locking And Concurrency In SQLite Version 3
If you want to remove a "database is locked" error then follow these steps:
Copy your database file to some other location.
Replace the database with the copied database. This will dereference all processes which were accessing your database file.
Deleting the -journal file sounds like a terrible idea. It's there to allow sqlite to roll back the database to a consistent state after a crash. If you delete it while the database is in an inconsistent state, then you're left with a corrupted database. Citing a page from the sqlite site:
If a crash or power loss does occur and a hot journal is left on the disk, it is essential that the original database file and the hot journal remain on disk with their original names until the database file is opened by another SQLite process and rolled back. [...]
We suspect that a common failure mode for SQLite recovery happens like this: A power failure occurs. After power is restored, a well-meaning user or system administrator begins looking around on the disk for damage. They see their database file named "important.data". This file is perhaps familiar to them. But after the crash, there is also a hot journal named "important.data-journal". The user then deletes the hot journal, thinking that they are helping to cleanup the system. We know of no way to prevent this other than user education.
The rollback is supposed to happen automatically the next time the database is opened, but it will fail if the process can't lock the database. As others have said, one possible reason for this is that another process currently has it open. Another possibility is a stale NFS lock, if the database is on an NFS volume. In that case, a workaround is to replace the database file with a fresh copy that isn't locked on the NFS server (mv database.db original.db; cp original.db database.db). Note that the sqlite FAQ recommends caution regarding concurrent access to databases on NFS volumes, because of buggy implementations of NFS file locking.
I can't explain why deleting a -journal file would let you lock a database that you couldn't before. Is that reproducible?
By the way, the presence of a -journal file doesn't necessarily mean that there was a crash or that there are changes to be rolled back. Sqlite has a few different journal modes, and in PERSIST or TRUNCATE modes it leaves the -journal file in place always, and changes the contents to indicate whether or not there are partial transactions to roll back.
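A quick way to check which journal mode a database file is using, sketched with Python's sqlite3 module (the file name is just an example):

import sqlite3

conn = sqlite3.connect("mydata.db")
print(conn.execute("PRAGMA journal_mode").fetchone()[0])   # e.g. 'delete', 'persist', 'truncate', 'wal'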
SQLite db files are just files, so the first step would be to make sure the file isn't read-only. The other thing to do is to make sure that you don't have some sort of GUI SQLite DB viewer with the DB open. You could have the DB open in another shell, or your code may have the DB open. Typically you would see this if a different thread, or an application such as SQLite Database Browser, has the DB open for writing.
My lock was caused by the system crashing and not by a hanging process. To resolve this, I simply renamed the file then copied it back to its original name and location.
Using a Linux shell that would be:
mv mydata.db temp.db
cp temp.db mydata.db
If a process has a lock on an SQLite DB and crashes, the DB stays locked permanently. That's the problem. It's not that some other process has a lock.
I had this problem just now, using an SQLite database on a remote server, stored on an NFS mount. SQLite was unable to obtain a lock after the remote shell session I used had crashed while the database was open.
The recipes for recovery suggested above did not work for me (including the idea to first move and then copy the database back). But after copying it to a non-NFS system, the database became usable and no data appears to have been lost.
Some operations, like indexing, can take a very long time, and they lock the whole database while they run. In instances like that, the journal file might not even be used!
So the best/only way to check if your database is locked because a process is ACTIVELY writing to it (and thus you should leave it the hell alone until its completed its operation) is to md5 (or md5sum on some systems) the file twice.
If you get a different checksum, the database is being written, and you really really REALLY don't want to kill -9 that process because you can easily end up with a corrupt table/database if you do.
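A minimal sketch of that checksum test in Python, assuming the locked file is development.db and that waiting five seconds between hashes is enough to catch an active writer:

import hashlib
import time

def file_md5(path):
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

first = file_md5("development.db")
time.sleep(5)
second = file_md5("development.db")

if first != second:
    print("The file is actively being written to -- leave it alone.")
else:
    print("No change detected; the lock is probably not protecting an active write.")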
I'll reiterate, because it's important - the solution is NOT to find the locking program and kill it - it's to find if the database has a write lock for a good reason, and go from there. Sometimes the correct solution is just a coffee break.
The only way to create this locked-but-not-being-written-to situation is if your program runs BEGIN EXCLUSIVE, because it wanted to do some table alterations or something, then for whatever reason never sends an END afterwards, and the process never terminates. All three conditions being met is highly unlikely in any properly-written code, and as such 99 times out of 100 when someone wants to kill -9 their locking process, the locking process is actually locking your database for a good reason. Programmers don't typically add the BEGIN EXCLUSIVE condition unless they really need to, because it prevents concurrency and increases user complaints. SQLite itself only adds it when it really needs to (like when indexing).
Finally, the 'locked' status does not exist INSIDE the file as several answers have stated - it resides in the Operating System's kernel. The process which ran BEGIN EXCLUSIVE has requested from the OS a lock be placed on the file. Even if your exclusive process has crashed, your OS will be able to figure out if it should maintain the file lock or not!! It is not possible to end up with a database which is locked but no process is actively locking it!!
When it comes to seeing which process is locking the file, it's typically better to use lsof rather than fuser (this is a good demonstration of why: https://unix.stackexchange.com/questions/94316/fuser-vs-lsof-to-check-files-in-use). Alternatively if you have DTrace (OSX) you can use iosnoop on the file.
I added "Pooling=true" to connection string and it worked.
This error can be thrown if the file is in a remote folder, like a shared folder. I changed the database to a local directory and it worked perfectly.
I found the documentation of the various states of locking in SQLite to be very helpful. Michael, if you can perform reads but can't perform writes to the database, that means that a process has gotten a RESERVED lock on your database but hasn't executed the write yet. If you're using SQLite3, there's a new lock called PENDING where no more processes are allowed to connect but existing connections can still perform reads, so if this is the issue you should look at that instead.
I had this problem in an app which accessed SQLite from two connections: one was read-only and the second was for writing and reading. It looked like the read-only connection was blocking writes from the second connection. Finally, it turned out that prepared statements have to be finalized or, at least, reset IMMEDIATELY after use. Until a prepared statement is reset or finalized, it keeps the database blocked for writing.
DON'T FORGET TO CALL:
sqlite3_reset(xxx);
or
sqlite3_finalize(xxx);
I just had something similar happen to me - my web application was able to read from the database, but could not perform any inserts or updates. A reboot of Apache solved the issue at least temporarily.
It'd be nice, however, to be able to track down the root cause.
The lsof command on my Linux environment helped me to figure out that a process was hanging, keeping the file open.
Killed the process and the problem was solved.
This link solved the problem: When Sqlite gives: Database locked error
It solved my problem; it may be useful to you.
And you can use BEGIN TRANSACTION and END TRANSACTION to avoid locking the database in the future.
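For example, with Python's sqlite3 module (the table name comes from the question; the 10-second timeout is an assumption), keeping each write inside a short explicit transaction releases the lock as soon as it commits:

import sqlite3

conn = sqlite3.connect("mails.db", timeout=10)   # wait up to 10 s if another writer holds the lock
with conn:                                        # commits on success, rolls back on error
    conn.execute("DELETE FROM mails WHERE id = ?", (71,))
# the write lock is released once the transaction commits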
It should be an internal problem of the database...
For me it manifested after trying to browse the database with "SQLite Manager"...
So, if you can't find another process connected to the database and you just can't fix it, try this radical solution:
Export your tables (you can use "SQLite Manager" on Firefox).
If the migration altered your database schema, delete the last failed migration.
Rename your "database.sqlite" file.
Execute "rake db:migrate" to make a new working database.
Give the right permissions to the database for importing the tables.
Import your backed-up tables.
Write the new migration.
Execute it with "rake db:migrate".
In my experience, this error is caused by: You opened multiple connections.
e.g.:
1 or more sqlitebrowser (GUI)
1 or more electron thread
rails thread
I am not sure about the details of how SQLite3 handles multiple threads/requests, but when I closed the sqlitebrowser and the electron thread, rails ran fine and didn't block any more.
I ran into this same problem on Mac OS X 10.5.7 running Python scripts from a terminal session. Even though I had stopped the scripts and the terminal window was sitting at the command prompt, it would give this error the next time it ran. The solution was to close the terminal window and then open it up again. Doesn't make sense to me, but it worked.
I just had the same error.
After 5 minutes of googling I found that I hadn't closed one shell which was using the db.
Just close it and try again ;)
I had the same problem. Apparently the rollback function seems to overwrite the db file with the journal, which is the same as the db file but without the most recent change. I've implemented this in my code below and it's been working fine since then, whereas before my code would just get stuck in the loop as the database stayed locked.
Hope this helps.
My Python code:
##############
#### Defs ####
##############
import sqlite3 as sqlite
import time

def conn_exec(connection, cursor, cmd_str):
    done = False
    try_count = 0
    while not done:
        try:
            cursor.execute(cmd_str)
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception as error:
            if try_count % 1200 == 0:  # print the error roughly once a minute (1200 * 0.05 s)
                print("\t", "Error executing command", cmd_str)
                print("Message:", error)
            if try_count > 0 and try_count % 2400 == 0:  # if we have waited about 2 minutes, roll back
                print("Forcing Unlock")
                connection.rollback()
            time.sleep(0.05)
            try_count += 1

def conn_commit(connection):
    done = False
    try_count = 0
    while not done:
        try:
            connection.commit()
            done = True
        except sqlite.IntegrityError:
            # Ignore this error because it means the item already exists in the database
            done = True
        except Exception as error:
            if try_count % 1200 == 0:  # print the error roughly once a minute (1200 * 0.05 s)
                print("\t", "Error committing changes")
                print("Message:", error)
            if try_count > 0 and try_count % 2400 == 0:  # if we have waited about 2 minutes, roll back
                print("Forcing Unlock")
                connection.rollback()
            time.sleep(0.05)
            try_count += 1

##################
#### Run Code ####
##################
db_path = "my_database.db"  # path to your SQLite database file
connection = sqlite.connect(db_path)
cursor = connection.cursor()

# Create tables if database does not exist
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS fix (path TEXT PRIMARY KEY);''')
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS tx (path TEXT PRIMARY KEY);''')
conn_exec(connection, cursor, '''CREATE TABLE IF NOT EXISTS completed (fix DATE, tx DATE);''')
conn_commit(connection)
One common reason for getting this exception is when you are trying to do a write operation while still holding resources for a read operation. For example, if you SELECT from a table, and then try to UPDATE something you've selected without closing your ResultSet first.
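The ResultSet wording above is JDBC; as a rough Python sqlite3 analogue (the column names are made up for illustration), finish the read before starting the write on the same connection:

import sqlite3

conn = sqlite3.connect("mails.db")
# Materialise the SELECT first, so no read statement is still open...
rows = conn.execute("SELECT id FROM mails WHERE archived = 0").fetchall()
# ...and only then perform the writes.
for (mail_id,) in rows:
    conn.execute("UPDATE mails SET archived = 1 WHERE id = ?", (mail_id,))
conn.commit()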
I was having "database is locked" errors in a multi-threaded application as well, which appears to be the SQLITE_BUSY result code, and I solved it with setting sqlite3_busy_timeout to something suitably long like 30000.
(On a side-note, how odd that on a 7 year old question nobody found this out already! SQLite really is a peculiar and amazing project...)
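The same idea from Python, either through the connect timeout or the busy_timeout pragma (the database name and the 30-second value are just examples):

import sqlite3

conn = sqlite3.connect("app.db", timeout=30)     # retry for up to 30 s when the database is busy
conn.execute("PRAGMA busy_timeout = 30000")      # equivalent setting at the SQL level, in milliseconds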
Before going down the reboot option, it is worthwhile to see if you can find the user of the sqlite database.
On Linux, one can employ fuser to this end:
$ fuser database.db
$ fuser database.db-journal
In my case I got the following response:
philip 3556 4700 0 10:24 pts/3 00:00:01 /usr/bin/python manage.py shell
Which showed that I had another Python program with pid 3556 (manage.py) using the database.
An old question with a lot of answers; here are the steps I recently followed after reading the answers above. In my case the problem was due to cifs resource sharing. This case was not reported previously, so I hope it helps someone.
Check that no connections are left open in your java code.
Check that no other processes are using your SQLite db file, with lsof.
Check that the user owning your running jvm process has r/w permissions over the file.
Try to force the locking mode on the connection opening with:
final SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(false);
config.setLockingMode(LockingMode.NORMAL);
connection = DriverManager.getConnection(url, config.toProperties());
If you're using your SQLite db file over an NFS shared folder, check this point of the SQLite FAQ, and review your mounting configuration options to make sure you're avoiding locks, as described here:
//myserver /mymount cifs username=*****,password=*****,iocharset=utf8,sec=ntlm,file,nolock,file_mode=0700,dir_mode=0700,uid=0500,gid=0500 0 0
I got this error in a scenario a little different from the ones described here.
The SQLite database rested on an NFS filesystem shared by 3 servers. On 2 of the servers I was able to run queries on the database successfully; on the third one, though, I was getting the "database is locked" message.
The thing with this 3rd machine was that it had no space left on /var. Every time I tried to run a query in ANY SQLite database located on this filesystem I got the "database is locked" message, and also this error in the logs:
Aug 8 10:33:38 server01 kernel: lockd: cannot monitor 172.22.84.87
And this one also:
Aug 8 10:33:38 server01 rpc.statd[7430]: Failed to insert: writing /var/lib/nfs/statd/sm/other.server.name.com: No space left on device
Aug 8 10:33:38 server01 rpc.statd[7430]: STAT_FAIL to server01 for SM_MON of 172.22.84.87
After the space situation was handled everything got back to normal.
If you're trying to unlock the Chrome database to view it with SQLite, then just shut down Chrome.
Windows
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Web Data
or
%userprofile%\Local Settings\Application Data\Google\Chrome\User Data\Default\Chrome Web Data
Mac
~/Library/Application Support/Google/Chrome/Default/Web Data
From your previous comments you said a -journal file was present.
This could mean that you have opened an (EXCLUSIVE?) transaction and have not yet committed the data. Did your program or some other process leave the -journal behind??
Restarting the sqlite process will look at the journal file and clean up any uncommitted actions and remove the -journal file.
As Seun Osewa has said, sometimes a zombie process will sit in the terminal with a lock acquired, even if you don't think it possible. Your script runs, crashes, and you go back to the prompt, but there's a zombie process spawned somewhere by a library call, and that process has the lock.
Closing the terminal you were in (on OSX) might work. Rebooting will work. You could look for "python" processes (for example) that are not doing anything, and kill them.
Related
My Problem I am Having:
I load the database page for one of my innoDB databases from within phpMyAdmin and it loads EXTREMELY slow. We're talking like up to 5 minutes of load time. This only happens on the MAIN page, meaning, when you view the database and the left sidebar that shows all the tables shows up.
After that initial load time, each individual table can be clicked on and load almost immediately. But those tables are loaded in an iframe without reloading the left sidebar of database tables which is why they load so quickly.
After that initial load time, each individual table can be opened in a new tab/window immediately, but doing it that way does not include the left sidebar of database tables, which I am sure is the reason they load so quickly.
What I Expect To Be Happening:
I expect to be able to load the main page of my innoDB database from within phpMyAdmin without it taking 5 minutes to load.
What I've tried:
I've had this issue for months and it drives me crazy every day. I've come to live with it, actually. I simply load that initial page immediately every day and go do something else so I don't have to watch it, because it just makes me angry.
I have my timeout set to about 15 minutes, so if I think it's been longer than 10 minutes, I will open where it says "localhost" in a different tab, which brings me to the login screen, log back in, and then it brings me to the list of databases, which loads quickly. This is because if I simply load that main page, then log in, it will bring me back to that index page and I'll wait another 5 minutes for it to load. Grr..
OK so, I Googled and Googled and found tons of suggestions about making innoDB not do row counts and stuff like that. I've tried all of them. Nothing is working! :(
I found something called "$cfg['Server']['IgnoreSomeISrows'] = true;" which did not help whatsoever. I don't even know what it did; it didn't work, so I meant to remove it, but I forgot and just left it there. No, commenting it out does not help either, thank you.
Some Version Info:
OS
CentOS release 6.5 (Final)
Database:
Server: Localhost via UNIX socket
Software: MySQL
Software version: 5.1.71-log - Source distribution
Protocol version: 10
Web Server
Apache/2.2.15 (CentOS)
Database client version: libmysql - 5.1.71
PHP extension: mysqli
phpMyAdmin
Version information: 3.5.8.2, latest stable version: 4.1.5
Personally, I also experienced extreme slowness with phpMyAdmin when viewing a "View" table. What I did was upgrade phpMyAdmin to the latest version, and my problem was solved. Maybe you can give phpMyAdmin v4 a try.
Thank you Tom Kim for leading me to the answer.
There wasn't enough room in comments so I will elaborate with an additional answer on exactly what I did to solve my issue. I do not know why the yum version of phpMyAdmin was causing me distress.
backup your config file (if you have made one)
remove the yum version(s) of phpMyAdmin (there are 2 different ones)
download the latest version of phpMyAdmin from their website
unzip it and move it into the normal place
replace (or create) the config file
add a virtual host entry for it and make sure to restrict access to ONLY YOUR IP ADDRESS for security purposes
restart Apache
Have some tequila to celebrate! preferably reposado because it's the best type :) (this part is VERY important)
Here is my answer in bash form:
(I assume you have phpMyAdmin or phpmyadmin already installed and configured... I won't give you a config file, but I'll give you the vhost file; it's mostly based on the one from the yum version of phpMyAdmin):
mkdir /tmp/phpMyAdminNew;
cp /usr/share/phpMyAdmin/config.inc.php /tmp/phpMyAdminNew/config.inc.php;
yum remove phpMyAdmin phpmyadmin;
cd /tmp;
wget -O /tmp/phpMyAdminNew/phpMyAdmin-4.1.5-all-languages.zip http://sourceforge.net/projects/phpmyadmin/files/phpMyAdmin/4.1.5/phpMyAdmin-4.1.5-all-languages.zip;
unzip -d /tmp/phpMyAdminNew /tmp/phpMyAdminNew/phpMyAdmin-4.1.5-all-languages.zip;
mv /tmp/phpMyAdminNew/phpMyAdmin-4.1.5-all-languages /usr/share/phpMyAdminNew
cp /tmp/phpMyAdminNew/config.inc.php /usr/share/phpMyAdminNew/config.inc.php
echo -e 'Alias /my_secret_phpmyadmin_portal /usr/share/phpMyAdminNew\n\n<Directory /usr/share/phpMyAdminNew/>\n\t<IfModule mod_authz_core.c>\n\t\t# Apache 2.4\n\t\t<RequireAny>\n\t\t\tRequire ip 127.0.0.1\n\t\t\tRequire ip ::1\n\t\t\t# Require ip xxx.xxx.xxx.xxx\n\t\t</RequireAny>\n\t</IfModule>\n\t<IfModule !mod_authz_core.c>\n\t\t# Apache 2.2\n\t\tOrder Deny,Allow\n\t\tDeny from All\n\t\tAllow from 127.0.0.1\n\t\tAllow from ::1\n\t\t# Allow from xxx.xxx.xxx.xxx\n\t</IfModule>\n</Directory>\n\n<Directory /usr/share/phpMyAdminNew/setup/>\n\t<IfModule mod_authz_core.c>\n\t\t# Apache 2.4\n\t\t<RequireAny>\n\t\t\tRequire ip 127.0.0.1\n\t\t\tRequire ip ::1\n\t\t\t# Require ip xxx.xxx.xxx.xxx\n\t\t</RequireAny>\n\t</IfModule>\n\t<IfModule !mod_authz_core.c>\n\t\t# Apache 2.2\n\t\tOrder Deny,Allow\n\t\tDeny from All\n\t\tAllow from 127.0.0.1\n\t\tAllow from ::1\n\t\t# Allow from xxx.xxx.xxx.xxx\n\t</IfModule>\n</Directory>\n\n# These directories do not require access over HTTP - taken from the original\n# phpMyAdmin upstream tarball\n\n<Directory /usr/share/phpMyAdminNew/libraries/>\n\tOrder Deny,Allow\n\tDeny from All\n\tAllow from None\n</Directory>\n\n<Directory /usr/share/phpMyAdminNew/setup/lib/>\n\tOrder Deny,Allow\n\tDeny from All\n\tAllow from None\n</Directory>\n\n<Directory /usr/share/phpMyAdminNew/setup/frames/>\n\tOrder Deny,Allow\n\tDeny from All\t\nAllow from None\n</Directory>\n\n# This configuration prevents mod_security at phpMyAdmin directories from\n# filtering SQL etc. This may break your mod_security implementation.\n#\n#<IfModule mod_security.c>\n#\t<Directory /usr/share/phpMyAdminNew/>\n#\t\tSecRuleInheritance Off\n#\t</Directory>\n#</IfModule>' > /etc/httpd/conf.d/phpMyAdminNew.conf;
rm -rf /tmp/phpMyAdminNew
service httpd graceful
clear; echo -e '\n\n##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~##\n ##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~##\n ##~~~~~~~~~~~~~~~~~~~~~~~~~~~~~##\n ##~~~~~~~~~~~~~~~~~~~~~~~~~~~##\n ##~~~~~~~~~~~~~~~~~~~~~~~~~##\n ##~~~~~~~~~~~~~~~~~~~~~~~##\n ##~~~~~~~~~~~~~~~~~~~~~##\n ##~~~~~~~~~~~~~~~~~~~##\n ###~~~~~~~~~~~~~~~###\n ####~~~~~~~~~~~####\n #####~~~~~~~~#####\n ##################\n ## TEQUILA SHOT ##\n ##################\n\n';
I'm playing with MongoDB and I'm trying to import .csv files into the DB, and I'm getting a strange error. In the process of uploading, the script just ends for no reason, and when I try to run it again nothing happens; the only solution is to restart Apache. I have already set an unlimited timeout in php.ini. Here is the script:
$dir = "tokens/";
$fileNames = array_diff( scandir("data/"), array(".", "..") );
foreach($fileNames as $filename)
if(file_exists($dir.$filename))
exec("d:\mongodb\bin\mongoimport.exe -d import -c ".$filename." -f Date,Open,Next,Amount,Type --type csv --file ".$dir.$filename."");
I've got around 7000 .csv files and it manages to insert only about 200 before the script ends.
Can anyone help? I would appreciate any help
You are missing back end infrastructure. It is just insane to try to load 7000 files into a database as part of a web request that is supposed to be short lived and is expected, by some of the software components as well as the end user, to only last a few seconds or maybe a minute.
Instead, create a backend service and command and control for this procedure. In the web app, write each file name to be processed to a database table or even a plain text file on the server and then tell the end user that their request has been queued and will be processed within the next NN minutes. Then have a cron job that runs every 5 minutes (or even 1 minute) that looks in the right place for stuff to do and can create reports of success or failure and/or send emails to tell the original requestor that it is done.
If this is intended as an import script and you are set on using PHP, it would be preferable to at least use the PHP CLI environment instead of performing this task through a web server. As it stands, it appears the CSV files are located on the server itself, so I see no reason to get HTTP involved. This would avoid an issue where the web request terminates and abruptly aborts the import process.
For processing the CSV, I'd start by looking at fgetcsv or str_getcsv. The mongoimport command really does very little in the way of validation and sanitization. Parsing the CSV yourself will allow you to skip records that are missing fields, provide default values where necessary, or take other appropriate action. As you iterate through records, you can collect documents to insert in an array and then pass the results on to MongoCollection::batchInsert() in batches. The driver will take care of splitting up large batches into chunks to actually send over the wire in 16MB messages (MongoDB's document size limit, which also applies to wire protocol communication).
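The answer above is about PHP's MongoCollection::batchInsert(); purely as an illustration of the same parse-and-batch idea, here is a sketch in Python (pymongo is assumed, and the field names are taken from the mongoimport call in the question):

import csv
import glob
import os
from pymongo import MongoClient

client = MongoClient()                      # assumes a local mongod on the default port
db = client["import"]

for path in glob.glob("tokens/*.csv"):
    collection = db[os.path.splitext(os.path.basename(path))[0]]
    batch = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f, fieldnames=["Date", "Open", "Next", "Amount", "Type"]):
            batch.append(row)
            if len(batch) >= 1000:          # send documents in chunks instead of one request per row
                collection.insert_many(batch)
                batch = []
    if batch:
        collection.insert_many(batch)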
I'm running php5 FPM with APC as an opcode and application cache. As is usual, I am logging php errors into a file.
Since that is becoming quite large, I tried to configure logrotate. It works, but after rotation, php continues to log to the existing logfile, even when it is renamed. This results in scripts.log being a 0B file, and scripts.log.1 continuing to grow further.
I think (haven't tried) that running php5-fpm reload in postrotate could resolve this, but that would clear my APC cache each time.
Does anybody know how to get this working properly?
I found that "copytruncate" option to logrotate ensures that the inode doesn't change. Basically what is [sic!] was looking for.
This is probably what you're looking for. Taken from: How does logrotate work? - Linuxquestions.org.
As written in my comment, you need to prevent PHP from writing into the same (renamed) file. Copying a file normally creates a new one, and truncating is also part of the option's name, so I would assume the copytruncate option is an easy solution (from the manpage):
copytruncate
    Truncate the original log file in place after creating a copy, instead of moving the old log file and optionally creating a new one. It can be used when some program cannot be told to close its logfile and thus might continue writing (appending) to the previous log file forever. Note that there is a very small time slice between copying the file and truncating it, so some logging data might be lost. When this option is used, the create option will have no effect, as the old log file stays in place.
See Also:
Why we should use create and copytruncate together?
Another solution I found on a server of mine is to tell php to reopen the logs. I think nginx has this feature too, which makes me think it must be quite commonplace. Here is my configuration:
/var/log/php5-fpm.log {
rotate 12
weekly
missingok
notifempty
compress
delaycompress
postrotate
invoke-rc.d php5-fpm reopen-logs > /dev/null
endscript
}
For the past few hours, our server has been hanging every time we do a session_start().
For testing purposes I created a script which looks like this:
<?php
session_start();
?>
Calling it from the console hangs and it can't even be stopped with ctrl-c, only kill -9 works. The same for calling it via Apache. /var/lib/php/session/ stays empty but permissions are absolutely fine, www can write and also has read permissions for all parent folders.
According to the admins there were no changes made on the server and there is no special code registered for sessions. The Server is CentOS 4 or 5 and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas, any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many reasons for that, here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly for whatever reason, it causes session_start() to hang indefinitely on any future script executions.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x')
B. Never use the following in your php.ini file (setting the entropy file to "/dev/random"); this will cause your session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
C.
session_start() needs a directory to write to.
You can get Apache plus PHP running in a normal user account. Apache will then of course have to listen to an other port than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created
Otherwise, your scripts will seem to 'hang' on session_start().
If this helps:
In my scenario, session_start() was hanging at the same time I was using the XDebug debugger within PHPStorm, the IDE, on Windows. I found that there was a clear cause: Whenever I killed the debug session from within PHPStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5x64, PHP 5.2.10-1. A clean ANSI file in the root with nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
OK, I faced the same problem on 2 PCs: one a Mac mini with XAMPP and one Windows 10 with XAMPP.
On both, PHP spent forever running session_start(). Both PHP versions were 7.x.x.
I found that the session file was locked for reading and writing. So I added code to make PHP read the session file and immediately unlock it when done, with:
<?php
session_start([
'read_and_close' => true,
]);
?>
or
<?php
//For PHP 5.x
session_start();
session_write_close();
?>
After this, PHP unlocks the session file => problem solved.
The problem:
I've experienced (and fixed) the problem where file-based sessions hang the request, and database-based sessions get out of sync by storing out-of-date session data (like storing each session save in the wrong order).
This is caused by any subsequent request that loads a session (simultaneous requests), like ajax, video embed where the video file is delivered via php script, dynamic resource file (like script or css) delivered via php script, etc.
In file based sessions file locking prevents session writing thus causing a deadlock between the simultaneous request threads.
In database based session the last request thread to complete becomes the most recent save, so for example a video delivery script will complete long after the page request and overwrite the since updated session with old session data.
The fix:
If your ajax or resource delivery script doesn't need to use sessions then it's easiest to just remove session usage from it.
Otherwise you'd best make yourself a coffee and do the following:
Write or employ a session handler (if not already doing so) as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples available via google search).
In your session handler function write() prepend the code ...
// processes may declare their session as read only ...
if(!empty($_SESSION['no_session_write'])) {
unset($_SESSION['no_session_write']);
return true;
}
In your ajax or resource delivery php script add the code (after the session is started) ...
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately if you need to have simultaneous requests each loading a session then it is required.
NOTE if your ajax or resource delivery script does actually need to write/save data, then you need to do it somewhere other than in the session, like database.
Just put session_write_close(); before session_start();
as below:
<?php
session_write_close();
session_start();
.....
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas, I had a session_start() dying only in particular cases and scripts. The reason my session was dying was ultimately because I was storing a lot of data in them after a particularly intensive script, and ultimately the call to session_start() was exhausting the 'memory_limit' setting in php.ini.
After increasing 'memory_limit', those session_start() calls no longer killed my script.
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case it seems like it was the NFS share that was locking the session. After restarting the NFS server and enabling only 1 node of web clients, the sessions worked normally.
Yet another few cents that might help someone. In my case I was storing complex data with several different class objects in $_SESSION, and session_start() couldn't handle the unserialization because not every class was loaded at session_start. The solution in my case was to serialize/jsonify the data before saving it into $_SESSION and reverse the process after getting the data out of the session.
I am using MySQL 5.0 for a site that is hosted by GoDaddy (linux).
I was doing some testing on my web app, and suddenly I noticed that the pages were refreshing really slowly. Finally, after a long wait, I got to a page that said something along the lines of "MySQL Error, Too many connections...", and it pointed to my config.php file which connects to the database.
It has just been me connecting to the database, no other users. On each of my pages, I include the config.php file at the top, and close the mysql connection at the end of the page. There may be several queries in between. I fear that I am not closing mysql connections enough (mysql_close()).
However, when I try to close them after running a query, I receive connection errors on the page. My pages are PHP and HTML. When I try to close a query, it seems that the next one won't connect. Would I have to include config.php again after the close in order to connect?
This error scared me because in 2 weeks, about 84 people start using this web application.
Thanks.
EDIT:
Here is some pseudo-code of my page:
require_once('../scripts/config.php');
<?php
mysql_query..
if(this button is pressed){
mysql_query...
}
if(this button is pressed){
mysql_query...
}
if(this button is pressed){
mysql_query...
}
?>
some html..
..
..
..
..
<?php
another mysql_query...
?>
some more html..
..
..
<?php mysql_close(); ?>
I figured that this way, each time the page opens, the connection opens, and then the connection closes when the page is done loading. Then, the connection opens again when someone clicks a button on the page, and so on...
EDIT:
Okay, so I just got off the phone with GoDaddy. Apparently, with my Economy Package, I'm limited to 50 connections at a time. While my issue today happened with only me accessing the site, they said that they were having some server problems earlier. However, seeing as how I am going to have 84 users for my web app, I should probably upgrade to "Deluxe", which allows for 100 connections at a time. On a given day, there may be around 30 users accessing my site at a time, so I think the 100 would be a safer bet. Do you guys agree?
Shared-hosting providers generally allow a pretty small amount of simultaneous connections for the same user.
What your code does is :
open a connection to the MySQL server
do its stuff (generating the page)
close the connection at the end of the page.
The last step, when done at the end of the page is not mandatory : (quoting mysql_close's manual) :
Using mysql_close() isn't usually
necessary, as non-persistent open
links are automatically closed at the
end of the script's execution.
But note you probably shouldn't use persistent connections anyway...
Two tips :
use mysql_connect insead of mysql_pconnect (already OK for you)
Set the fourth parameter of mysql_connect to false (already OK for you, as it's the default value) : (quoting the manual) :
If a second call is made to
mysql_connect() with the same
arguments, no new link will be
established, but instead, the link
identifier of the already opened link
will be returned.
The new_link
parameter modifies this behavior and
makes mysql_connect() always open a
new link, even if mysql_connect() was
called before with the same
parameters.
What could cause the problem, then ?
Maybe you are trying to access several pages in parallel (using multiple tabs in your browser, for instance), which will simulate several users using the website at the same time ?
If you have many users using the site at the same time and the code between mysql_connect and the closing of the connection takes lots of time, it will mean many connections being opened at the same time... And you'll reach the limit :-(
Still, as you are the only user of the application, considering you have up to 200 simultaneous connections allowed, there is something odd going on...
Well, thinking about "too many connections" and "max_connections"...
If I remember correctly, max_connections does not limit the number of connections you can open to the MySQL Server, but the total number of connections that can be opened to that server, by anyone connecting to it.
Quoting MySQL's documentation on Too many connections :
If you get a Too many connections
error when you try to connect to the
mysqld server, this means that all
available connections are in use by
other clients.
The number of connections allowed is
controlled by the max_connections
system variable. Its default value is
100. If you need to support more connections, you should set a larger
value for this variable.
So, actually, the problem might not come from you nor your code (which looks fine, actually) : it might "just" be that you are not the only one trying to connect to that MySQL server (remember, "shared hosting"), and that there are too many people using it at the same time...
... And if I'm right and it's that, there's nothing you can do to solve the problem : as long as there are too many databases / users on that server and that max_connection is set to 200, you will continue suffering...
As a sidenote : before going back to GoDaddy asking them about that, it would be nice if someone could validate what I just said ^^
I had about 18 months of dealing with this (http://ianchanning.wordpress.com/2010/08/25/18-months-of-dealing-with-a-mysql-too-many-connections-error/)
The solutions I had (that would apply to you) in the end were:
tune the database according to MySQLTuner.
defragment the tables weekly based on this post
Defragmenting bash script from the post:
#!/bin/bash
# Get a list of all fragmented tables
FRAGMENTED_TABLES="$( mysql -e "USE information_schema; SELECT TABLE_SCHEMA,TABLE_NAME
FROM TABLES WHERE TABLE_SCHEMA NOT IN ('information_schema','mysql') AND
Data_free > 0;" | grep -v '^+' | sed 's,\t,.,' )"
for fragment in $FRAGMENTED_TABLES; do
database="$( echo $fragment | cut -d. -f1 )"
table="$( echo $fragment | cut -d. -f2 )"
[ $fragment != "TABLE_SCHEMA.TABLE_NAME" ] && mysql -e "USE $database;
OPTIMIZE TABLE $table;" > /dev/null 2>&1
done
Make sure you are not using persistent connections. This is usually a bad idea...
If you've got that covered, then at the very most you will need to support just as many connections as you have Apache processes. Are you able to change the max_connections setting?
Are you sure that the database server is completely dedicated to you?
Log on to the database as root and use "SHOW PROCESSLIST" to see who's connected. Ideally hook this into your monitoring system to view how many connections there are over time and alert if there are too many; a small script sketch for this follows the netstat examples below.
The maximum database connections can be configured in my.cnf, but watch out for running out of memory or address space.
If you have shell access, use netstat to see how many sockets are opened to your database and where they come from.
On Linux, type:
netstat -n -a |grep 3306
On windows, type:
netstat -n -a |findstr 3306
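If you want to hook the SHOW PROCESSLIST suggestion above into a script or a monitoring check, here is a minimal sketch; pymysql and the credentials are assumptions:

import pymysql

conn = pymysql.connect(host="localhost", user="root", password="secret")
with conn.cursor() as cur:
    cur.execute("SHOW PROCESSLIST")
    rows = cur.fetchall()
print(len(rows), "connections currently open")
for row in rows:
    print(row)   # (Id, User, Host, db, Command, Time, State, Info)
conn.close()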
The solution could be one of these; I came across this in an MCQA test, and even I did not understand which one is right!
Set this in my.cnf "set-variable=max_connections=200"
Execute the command "SET GLOBALmax_connections = 200"
Use always mysql_connect() function in order to connect to the mysql server
Use always mysql_pconnect() function in order to connect to the mysql server
The following are possible solutions:
1) Increase the max connection setting by setting the global variable in mysql.
set global max_connections=200;
Note: It will increase the server load.
2) Empty your connection pool as below :
FLUSH HOSTS;
3) Check your process list and kill specific processes if you don't want any of them.
You may refer to this:
article link