Using session_set_save_handler() in PHP 5.4.40, I have created a session handler that works well... except for session.upload_progress data when uploading a file.
My session handler completely ignores upload progress data, and I can only seem to get the progress data to save at all when:
session.save_handler=files; and
the session file is saved in the path given by session.save_path
Is it possible to save session.upload_progress data in the database?
Update: as of PHP 7 this still appears to be an issue. I have therefore opened PHP bug #74131.
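For context, the kind of handler registration being described looks roughly like this; a minimal sketch where the sessions table, its columns, and the PDO wiring are all hypothetical:
<?php
// Minimal DB-backed handler sketch (PHP 5.4+). In principle,
// session.upload_progress data should pass through write() like any
// other session data, which is exactly what does not happen here.
class DbSessionHandler implements SessionHandlerInterface
{
    private $pdo;

    public function __construct(PDO $pdo) { $this->pdo = $pdo; }

    public function open($savePath, $name) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        $stmt = $this->pdo->prepare('SELECT data FROM sessions WHERE id = ?');
        $stmt->execute(array($id));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        return $row ? $row['data'] : '';
    }

    public function write($id, $data)
    {
        $stmt = $this->pdo->prepare(
            'REPLACE INTO sessions (id, data, updated_at) VALUES (?, ?, NOW())'
        );
        return $stmt->execute(array($id, $data));
    }

    public function destroy($id)
    {
        $stmt = $this->pdo->prepare('DELETE FROM sessions WHERE id = ?');
        return $stmt->execute(array($id));
    }

    public function gc($maxlifetime)
    {
        $stmt = $this->pdo->prepare(
            'DELETE FROM sessions WHERE updated_at < NOW() - INTERVAL ? SECOND'
        );
        return $stmt->execute(array($maxlifetime));
    }
}

session_set_save_handler(new DbSessionHandler(new PDO('mysql:host=localhost;dbname=app', 'user', 'pass')), true);
session_start();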
The PHP documentation says:
Warning: The web server's request buffering has to be disabled for this to work properly, else PHP may see the file upload only once fully uploaded. Servers such as Nginx are known to buffer larger requests.
So what you want to do may be impossible...
@Pancho I switched to using PECL uploadprogress. Works fine. You have to use mod_php and not FastCGI/FPM. User sessions are all in the DB.
A basic page with just session_start(); loads just fine, but once I've set something, for example $_SESSION['pet'] = "dog";, the page load time is around 5 seconds.
I'm using AWS's memcached server and the connection time to it from the EC2 instance is really fast. I'm not sure where the slow down is coming from.
The session.save_handler is set to memcached and session.save_path is set to xxx.cfg.use1.cache.amazonaws.com:11211
phpinfo() also displays the Registered save handlers as "files user memcache memcached".
EDIT:
I uploaded test files to demonstrate the issue. The first file is simply session_start(); print_r($_SESSION); (http://rr915webapi.us-east-1.elasticbeanstalk.com/session.php). The second file is session_start(); $_SESSION['pet']="dog"; $_SESSION['name']="bob"; (http://rr915webapi.us-east-1.elasticbeanstalk.com/session-set.php). After you load the second file, you can see the first takes a while longer to load than it did initially.
By setting the following in the PHP ini file, the response time was reduced to milliseconds.
session.lazy_write = 0
memcached.sess_locking = Off
Some possibilities:
If the PHP server running your code and your memcached server (cfg.use1.cache.amazonaws.com) are hosted in different regions, that alone could explain the latency...
There seems to be a bug in libmemcached 1.0.16; updating to 1.0.18 will fix the problem. See the comments at https://github.com/iuscommunity/wishlist/issues/143 and https://bugs.launchpad.net/libmemcached/+bug/1589344
I know this has been asked tons of times here and all over the internet; however, the solutions I found are not working, and this has been driving me crazy for several months now.
I have a very simple PHP page:
<?php session_start(); ?>
I'm getting the nightmare errors "headers already sent" and "cache limiter" in my error_log. It doesn't affect the function of any of the scripts, but it fills up the error_log considerably. There is no error when running from the browser.
I have tried the TextWrangler editor for Mac, choosing the Unicode UTF-16 with no BOM option when saving. However, after creating the file in TextWrangler, making sure that the extension was .php, and uploading the file to the server, I ran the file directly and got the following in the browser:
<�?php session_start(); ?>
So the file is not encoded properly, and I don't know why. With the regular UTF-8 encoding from either TextEdit or TextWrangler, the header error would appear in the cron job as stated before.
I write all the text myself without copying, so there should be no BOM characters in the file. Is there any REAL solution for this error? Should I use an ANSI editor? Isn't the 'UTF-16 with no BOM' save option supposed to avoid these errors? Or must these errors always appear when a session is used from cron?
Lastly, I use the following cron job in cPanel: php -q /path/to/file.php
Calling your PHP script from cron invokes the PHP CLI environment, which is not the same as calling the same script from the browser.
For example, there are no cookies and, obviously, there is no session by default.
However, if you still want to enjoy sessions, there is a way. You can try this:
<?php
session_id("temp");
session_start();
print_r($_SESSION);
?>
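If the goal is simply to silence the "headers already sent" / "cache limiter" noise under cron, a sketch that should also work is to disable cookies and the cache limiter before starting the session (reusing the "temp" id from above):
<?php
// CLI has no browser to receive headers, so suppress the session cookie
// and cache-limiter headers before starting the session.
ini_set('session.use_cookies', 0);
session_cache_limiter('');
session_id('temp');
session_start();
print_r($_SESSION);
?>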
If you give us information about the final goal or the use case behind this script we can help you more!
I have the following PHP Files:
fileUploadForm.php
handleUpload.php
fileUploadForm.php contains the following:
Output $_SESSION['errorMessage'] (if any)
Output a file upload form that posts to handleUpload.php
handleUpload.php performs the following actions:
Validates the session (redirects to login if validation fails)
Validates the file (sets $_SESSION['errorMessage'] if validation fails)
Scans the file for viruses
Moves the file
Updates the database
The script is having trouble with large file uploads. I have set all of the php.ini settings regarding file uploads to be ridiculously huge, for testing purposes, so I don't believe this is a configuration issue.
The following behavior is confusing me:
When I watch the file grow in /tmp, the upload continues well past the max_input_time that was set. My understanding was that once max_input_time is exceeded, the script terminates, and in turn, so would the file upload. Any thoughts on why this isn't happening?
If I stop the file upload midstream and refresh fileUploadForm.php (not resubmit it), the script outputs the file-validation error messages that are set in handleUpload.php. This seems to indicate that even though the file upload did not complete, lines of code in handleUpload.php were executed. Does PHP execute a script and receive the form data asynchronously? I would have thought that the script would wait until all form data was received before executing any code, but this assumption is contradicted by the behavior I am seeing. In what order do the data POST and script execution occur?
When max_input_time, along with the rest of the config values, is set to be ridiculously large for testing, very large uploads will complete. However, the rest of the script just seems to die: the virus scan and file move never happen, nor do the database updates. I have error handling set for each action in the script, but no errors are thrown. The page just seems to have died. Any thoughts on why this might happen, or how to catch such an error?
Thanks in advance.
Kate
This quote from your second question answers (at least partially) the other two:
I would have thought that the script would wait until all form data was received before executing any code.
In order for PHP to be able to handle all input data, Apache (or whatever HTTP server you are using) will first wait for the file upload to be complete and only after that it will process the PHP script. So, PHP's max_input_time check will come into play after the file upload process is completed.
Now you'd probably ask why the virus scanning, file moving and other script procedures don't happen, since logically any time counter related to PHP should start with the script's execution, which happens after all input data is received. Well, that SHOULD be the case, and to be honest my thoughts on this are a shot in the dark, but either some other limit is being exceeded, or the script is started with the request but suspended by the httpd until it is ready to proceed, and effectively some of those counters might expire during this time.
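For reference, these are the php.ini directives that usually govern large uploads; the values below are purely illustrative, not recommendations:
; Illustrative testing values only.
upload_max_filesize = 2G   ; max size of one uploaded file
post_max_size = 2G         ; must be >= upload_max_filesize
max_input_time = 600       ; seconds PHP may spend receiving/parsing input
max_execution_time = 600   ; seconds the script itself may run
memory_limit = 512M        ; script memory ceiling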
I don't know how to answer your second question, as a refresh would mean that all of the data is re-POST-ed and should be re-processed. I doubt that you'd do the other thing, simply loading handleUpload.php without re-submitting the form, but it's a possibility I should mention. A second guess would be that if the first request was terminated unexpectedly, some garbage collection and/or recovery process happens the second time.
Hope that clears it up a bit.
We have a web app using Andrew Valums' AJAX file uploader. If we kick off 5-10 image uploads at once, more often than not at least 2 or 3 will result in the same GD error, "Corrupt JPEG data":
Warning: imagecreatefromjpeg() [function.imagecreatefromjpeg]:
gd-jpeg, libjpeg: recoverable error: Corrupt JPEG data:
47 extraneous bytes before marker 0xd9 in ....
However, this did not happen on our old test server or our local development boxes, only on our new production server.
The file size on the server is the same as the original on my local machine, so it completes the upload but I think the data is being corrupted by the server.
I can "fix" the broken files by deleting them and uploading again, or manually uploading via FTP
We had a shared host on GoDaddy and have just started to have this issue on a new box (which I set up, so that probably explains a lot :) CentOS 5.5+, Apache 2.2.3, PHP 5.2.10.
You can see some example good and bad pictures here: http://174.127.115.220/temp/pics.zip
When I BinDiffed them I saw a consistent pattern: the corruption always comes in 64-byte blocks, and while the distance between corrupted blocks is not constant, the number 4356 comes up a lot.
I really think we can rule out the Internet, as error checking and retransmission with TCP is pretty reliable; further, there seems to be no difference between browser versions, or if I turn anti-virus and firewalls off.
So my guess is the configuration of Apache / PHP?
Some cameras will append some data inside the file that will get interpreted incorrectly (most likely due to character encoding within the headers).
A solution I found was to read the file in binary mode, like so:
$fh = fopen('test.jpg', 'rb'); // 'b' = binary mode
$str = '';
while ($fh !== false && !feof($fh)) {
    $str .= fread($fh, 1024);
}
if ($fh !== false) {
    fclose($fh);
}
$test = @imagecreatefromstring($str); // @ silences the recoverable-error warning
imagepng($test, 'save.png');
Well, I think the problem is the JPEG header data, and as far as I know there is nothing PHP can do about it. I think the problem is your file uploader; maybe there is some configuration for it that you are missing.
Hmm, a 64-byte corruption? ...or did you mean 64-bit?
I'm going to suggest that the issue is in fact a result of the PHP script. The problem that regularly comes up here is that the script inserts CRLFs into the data stream being uploaded, caused by the differences between the Windows/*nix line-ending standards.
The solution is to force the PHP script to handle the upload in binary mode (use the 'b' flag for ALL fopen() calls in the PHP upload code). It is safe to upload a text file in binary mode, as at least you can still see the data.
Read here for more information on this issue:
http://us2.php.net/manual/en/function.fopen.php
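For instance, a minimal sketch (the filenames are illustrative only):
<?php
// The 'b' flag prevents newline translation on Windows builds of PHP.
$in  = fopen('upload.tmp', 'rb');  // read the uploaded bytes as-is
$out = fopen('photo.jpg', 'wb');   // write them back out untouched
stream_copy_to_stream($in, $out);
fclose($in);
fclose($out);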
This can be solved with:
ini_set ('gd.jpeg_ignore_warning', 1);
I had this problem with GoDaddy hosting.
I had created the database on GoDaddy using their cPanel interface. It was created as "latin collation" (or something like that). The database on the development server was UTF8. I've tried all solutions on this page, to no avail. Then I converted the database to UTF8, and it worked.
Database encoding shouldn't affect BLOB data (or so I would think). BLOB stands for Binary Large Object, to my knowledge!
Also, strangely, the data was copied from the dev to production server while the database was still "latin", and it was not corrupted at all. It's only when inserting new images that the problem appeared. So I guess the image data was being fed to MySQL as text data, and I think there is a way (when using SQL) of inserting binary data, and I did not follow it.
Edit: just took a look at the MySQL export script, here it is:
INSERT INTO ... VALUES (..., _binary 0xFFD8FF ...
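For anyone hitting the same thing, a sketch of inserting binary data through a prepared statement, so the bytes are never treated as text; the images table and its columns are hypothetical:
<?php
// Bind the image bytes as a LOB so the driver transmits them as binary.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO images (name, data) VALUES (?, ?)');
$stmt->bindValue(1, 'photo.jpg');
$stmt->bindValue(2, file_get_contents('photo.jpg'), PDO::PARAM_LOB);
$stmt->execute();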
Anyway, hope this will help someone. The OP did not indicate what solved his problem...
For the past few hours, our server has been hanging every time session_start() is called.
For testing purposes I created a script which looks like this:
<?php
session_start();
?>
Calling it from the console hangs, and it can't even be stopped with Ctrl-C; only kill -9 works. The same goes for calling it via Apache. /var/lib/php/session/ stays empty, but permissions are absolutely fine: www can write, and also has read permissions on all parent folders.
According to the admins, there were no changes made on the server, and there is no special code registered for sessions. The server is CentOS 4 or 5, and yesterday everything was working perfectly. We rebooted the server and updated PHP, but nothing changed.
I've run out of ideas; any suggestions?
UPDATE
We solved this problem by moving the project to another server, so while the problem still exists on one server there is no immediate need for a solution anymore.
I will keep the question open in case someone has an idea for others having a similar problem in the future, though.
There are many reasons for that; here are a few of them:
A. The session file could be opened exclusively.
When the file lock is not released properly, for whatever reason, it causes session_start() to hang indefinitely on any future script executions.
Workaround: use session_set_save_handler() and make sure the write function uses fopen($file, 'w') instead of fopen($file, 'x').
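A minimal sketch of such a write callback (the path is illustrative):
<?php
// Open with 'w' (create/truncate) rather than 'x' (fail if the file
// already exists), so a leftover session file cannot block the write.
function sessionWrite($id, $data)
{
    $file = '/var/lib/php/session/sess_' . $id; // illustrative path
    $fh = fopen($file, 'w');
    if ($fh === false) {
        return false;
    }
    fwrite($fh, $data);
    fclose($fh);
    return true;
}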
B. Never set the session entropy file to "/dev/random", whether in php.ini or as below; /dev/random blocks when the kernel entropy pool runs dry, and this will cause your session_start() to hang:
<?php
ini_set("session.entropy_file", "/dev/random");
ini_set("session.entropy_length", "512");
?>
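If you want an entropy file at all, /dev/urandom is the usual non-blocking substitute:
<?php
// /dev/urandom never blocks, unlike /dev/random.
ini_set("session.entropy_file", "/dev/urandom");
ini_set("session.entropy_length", "512");
?>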
C. session_start() needs a directory to write to.
You can get Apache plus PHP running in a normal user account. Apache will then of course have to listen on a port other than 80 (for instance, 8080).
Be sure to do the following things:
- create a temporary directory PREFIX/tmp
- put php.ini in PREFIX/lib
- edit php.ini and set session.save_path to the directory you just created
Otherwise, your scripts will seem to 'hang' on session_start().
If this helps:
In my scenario, session_start() was hanging while I was using the XDebug debugger within PhpStorm, the IDE, on Windows. I found that there was a clear cause: whenever I killed the debug session from within PhpStorm, the next time I tried to run a debug session, session_start() would hang.
The solution, if this is your scenario, is to make sure to restart Apache every time you kill an XDebug session within your IDE.
I had a weird issue with this myself.
I am using CentOS 5.5x64, PHP 5.2.10-1. A clean ANSI file in the root with nothing other than session_start() was hanging. The session was being written to disk and no errors were being thrown. It just hung.
I tried everything suggested by Thariama, and checked PHP compile settings etc.
My Fix:
yum reinstall php; /etc/init.d/httpd restart
Hope this helps someone.
To everyone complaining about the 30 seconds of downtime being unacceptable, this was an inexplicable issue on a brand new, clean OS install, NOT a running production machine. This solution should NOT be used in a production environment.
OK, I faced the same problem on two PCs: a Mac mini running XAMPP and a Windows 10 machine running XAMPP.
On both, PHP took forever to run session_start(). Both PHP versions are 7.x.x.
I found that the session file was locked for reading and writing, so I added code to make PHP read the session file and immediately release the lock when done:
<?php
session_start([
'read_and_close' => true,
]);
?>
or
<?php
//For PHP 5.x
session_start();
session_write_close();
?>
After this, PHP unlocks the session file => problem solved.
The problem:
I've experienced (and fixed) the problem where file-based sessions hang the request, and where database-based sessions get out of sync by storing out-of-date session data (like storing each session save in the wrong order).
This is caused by any subsequent request that loads a session (i.e. simultaneous requests): AJAX, a video embed where the video file is delivered via a PHP script, a dynamic resource file (like a script or CSS) delivered via a PHP script, etc.
With file-based sessions, file locking prevents session writing, thus causing a deadlock between the simultaneous request threads.
With database-based sessions, the last request thread to complete becomes the most recent save, so for example a video delivery script will complete long after the page request and overwrite the since-updated session with old session data.
The fix:
If your AJAX or resource delivery script doesn't need to use sessions, then it's easiest to just remove session usage from it.
Otherwise you'd best make yourself a coffee and do the following (a combined sketch follows after these steps):
Write or employ a session handler (if not already doing so) as per http://www.php.net//manual/en/class.sessionhandler.php (many other examples are available via a Google search).
In your session handler's write() function, prepend the code ...
// processes may declare their session as read only ...
if (!empty($_SESSION['no_session_write'])) {
    unset($_SESSION['no_session_write']);
    return true;
}
In your AJAX or resource delivery PHP script, add the code (after the session is started) ...
$_SESSION['no_session_write'] = true;
I realise this seems like a lot of stuffing around for what should be a tiny fix, but unfortunately if you need to have simultaneous requests each loading a session then it is required.
NOTE: if your AJAX or resource delivery script does actually need to write/save data, then you need to do it somewhere other than the session, like the database.
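Put together, a sketch of how the snippets above might combine using PHP's SessionHandler class (PHP 5.4+); the class name here is made up, and the flag name is the one used above:
<?php
// Extend the built-in SessionHandler so any request can declare its
// session read-only via the flag used above.
class ReadOnlyAwareHandler extends SessionHandler
{
    public function write($id, $data)
    {
        // processes may declare their session as read only ...
        if (!empty($_SESSION['no_session_write'])) {
            unset($_SESSION['no_session_write']);
            return true; // report success, but discard the stale data
        }
        return parent::write($id, $data);
    }
}

session_set_save_handler(new ReadOnlyAwareHandler(), true);
session_start();

// In the AJAX/resource delivery script, after session_start():
$_SESSION['no_session_write'] = true;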
Just put session_write_close(); before session_start();
as below:
<?php
session_write_close();
session_start();
.....
?>
I don't know why, but changing this value in /etc/php/7.4/apache2/php.ini worked for me:
;session.save_path = "/var/lib/php/sessions"
session.save_path = "/tmp"
To throw another answer into the mix for those going bananas: I had session_start() dying only in particular cases and scripts. The reason my sessions were dying was ultimately that I was storing a lot of data in them after a particularly intensive script, and the call to session_start() was exhausting the memory_limit setting in php.ini.
After increasing 'memory_limit', those session_start() calls no longer killed my script.
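For what it's worth, the limit can also be raised per script before the session is started (256M is just an illustrative value):
<?php
// Illustrative value only; raise the ceiling before the session data
// is read back in and unserialized.
ini_set('memory_limit', '256M');
session_start();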
For me, the problem seemed to originate from SELinux. The needed command was chcon -R -t httpd_sys_content_t [www directory] to give access to the right directory.
See https://askubuntu.com/questions/451922/apache-access-denied-because-search-permissions-are-missing
If you use pgAdmin 4 this can happen as well.
If you have File > Preferences > SQL Editor > Options > "Auto Commit" disabled, and you just ran a query using the query tool but didn't manually commit, then session_start() will freeze.
Enable auto commit, or manually commit, or just close pgAdmin, and it will no longer freeze.
In my case, it seems it was the NFS share that was locking the session. After restarting the NFS server and enabling only one node of web clients, the sessions worked normally.
Yet another few cents that might help someone. In my case I was storing complex data with several different class objects in $_SESSION, and session_start() couldn't handle the unserialization because not every class was loaded at session_start(). The solution in my case was to serialize/JSON-encode the data before saving it into $_SESSION, and to reverse the process after getting the data out of the session.
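A short sketch of the pattern (the $report object and its shape are hypothetical):
<?php
session_start();

// Store a JSON string instead of raw objects, so a later session_start()
// never has to unserialize classes that may not be loaded yet.
$_SESSION['report'] = json_encode($report);

// ...on a later request, after session_start()...
$report = json_decode($_SESSION['report'], true); // back as plain arrays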