I have a folder named "tmp" on my web server; it contains 237 files.
Each file has a name like sess_1at4ka9r77f0f4n4ijldv561d1 and contains data like FBRLH_state|s:32:"183047584cfac9ca4353c21535caa39d";
All files were last modified on 06/08/2018 (today). I think they contain PHP session data.
I use this code to create PHP sessions:
ini_set('session.gc_maxlifetime', 86400 * 90);
session_start();
Now this is a bit confusing: only 12 people visited my website today, so why are there 237 session files?
Did I write some bad code? How can I minimize the tmp folder's size?
Nothing in the code you've shared determines whether your session files will eventually be removed. That code only says that, if garbage collection is triggered as a result of this script's execution, only files older than that lifetime will be removed.
Whether the current script will launch session clean-up, and how often that happens, is determined by the runtime values of these directives:
session.gc_probability
session.gc_divisor
(All this, assuming you haven't fiddled with session.save_handler to implement a custom mechanism, in which case your custom code should take care of handling data removal properly.)
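For illustration, here is how those directives combine in a script; the numbers are examples, not recommendations:

```php
<?php
// With these values, each request has a 1-in-100 chance (the default
// ratio) of triggering garbage collection, which then deletes session
// files that have been untouched for more than 90 days.
ini_set('session.gc_maxlifetime', 86400 * 90);
ini_set('session.gc_probability', 1);
ini_set('session.gc_divisor', 100);
session_start();
```

Setting gc_divisor to 1 would make every request run the collector, which is heavy on busy sites; the default 1/100 is usually a reasonable trade-off.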
As for the 237 files, I presume it's the default shared directory where all PHP scripts on the server store sessions—but of course it's just a guess (I don't even know their modification times).
Related
I'm currently running into a huge performance problem, more precisely: load generated by lots of I/O on a flat sessions folder containing lots of files (100,000+).
The session-files/htdocs folder is located on managed storage; two separate servers (behind a load balancer, running apache2) use these files through an NFS mount and access the same sessions folder (to keep it persistent) at once.
Unfortunately the project is highly frequented and generates a lot of session files. Even with a max_lifetime of 2 hours we generate 100,000+ session files, which is way too much for the I/O nodes.
Is there a way to split these sessions dynamically into subfolders? E.g. all session files matching sess_1* into /tmp/sessions/1, sess_2* into /tmp/sessions/2, and so on? With this approach the storage/I/O nodes would only have to handle ~10,000 files per folder, which should speed up garbage collection and guard against I/O load.
I found this excerpt from the PHP (session.save_path) doc:
http://de3.php.net/manual/en/session.configuration.php#ini.session.save-path
There is an optional N argument to this directive that determines the
number of directory levels your session files will be spread around
in. For example, setting to '5;/tmp' may end up creating a session
file and location like
/tmp/4/b/1/e/3/sess_4b1e384ad74619bd212e236e52a5a174If. In order to
use N you must create all of these directories before use. A small
shell script exists in ext/session to do this, it's called
mod_files.sh, with a Windows version called mod_files.bat. Also note
that if N is used and greater than 0 then automatic garbage collection
will not be performed, see a copy of php.ini for further information.
Also, if you use N, be sure to surround session.save_path in "quotes"
because the separator (;) is also used for comments in php.ini.
Has anyone implemented this in PHP before, and could you provide me with some sample PHP code to handle session files in subfolders using this mod_files.sh?
Unfortunately it's pretty poorly documented ...
Sometimes the solution is easier than it might appear at first. Somehow I thought PHP had to handle and manage the Apache requests to the sessions directory tree. However, Apache does it on its own once session.save_path has been changed.
1.) call this (modified) script ( http://snipplr.com/view/27710/modfilessh-php/ ) once via ssh:
*sh path/to/script/mod_files.sh path/to/sessions depth* (in my case: "mod_files.sh /tmp/sessions 1")
2.) double-check the ownership (chown) of the new sessions directory tree
3.) change "session.save_path" to "1;/tmp/sessions"
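For reference, the steps above can be sketched as a few shell commands. The path and depth are the ones from my case; the loop mimics what mod_files.sh (from ext/session in the PHP source tree) does for depth 1:

```shell
# Create the sessions directory plus one level of subdirectories
# (0-9, a-f), matching "mod_files.sh /tmp/sessions 1":
base=/tmp/sessions
mkdir -p "$base"
for d in 0 1 2 3 4 5 6 7 8 9 a b c d e f; do
    mkdir -p "$base/$d"
done

# Then point PHP at it in php.ini (the quotes matter, because ";"
# also starts a comment in php.ini):
#   session.save_path = "1;/tmp/sessions"
```

Remember to chown the tree to the user your web server runs as, and note that with N > 0 PHP's automatic garbage collection is disabled, so you need your own cron-based cleanup.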
Thanks for your help nevertheless!
My problem is quickly described: I need to extend the session data lifetime beyond the default settings in php.ini, without changing php.ini. I am looking for a solution that can be applied to a number of different PHP setups across server platforms, so the script doesn't need to be changed for every install.
Since I don't want to change defaults on my server and want to stay as independent as possible with my script, I am looking for a way to exceed the default 1440 seconds after which the garbage collector disposes of my session data prematurely.
Simply setting ini_set('session.gc_maxlifetime', 36000);
to 10 hours will not work, as on some servers the GC runs unaffected by PHP's settings and deletes my sessions after 24 minutes anyway, as described here.
To get around this problem, the author suggests changing session.save_path to another folder unaffected by the OS's GC, thereby enforcing my session.gc_maxlifetime setting.
Unfortunately I was unable to create a temp folder within PHP's tmp space, and though I'd like to, I don't seem to be able to since I don't have 0600 access on most servers.
One solution would be to point my session data to a folder created right in my shared host space, but that seems insecure, as this folder must then be available online and is therefore exposed to possible ID theft.
Though I do not know whether that is the case.
Another solution would be to add $_SESSION["stayalaive"] = time(); to the login script, since the GC only deletes sessions untouched for the specified amount of time, so the session would be extended every time the login script is called. But that means if the user does not click anything for 24 minutes, the session will be deleted anyway, which is something I could possibly live with, but it also adds another process that seems unnecessary.
So my question is how to set up my session data to stay alive for 10 hours without clocking too much performance for it.
I have used php.ini directives inside scripts before, and besides, you can make directories inside your hosting's reserved space.
So (at the very beginning of your script) this should work, no doubt:
<?php
// obtain the directory this script lives in
$APPPATH = dirname(__FILE__);
if ( ! file_exists($APPPATH . '/tmp/sessions'))
{
    mkdir($APPPATH . '/tmp/sessions', 0700, TRUE);
}
ini_set('session.save_path', $APPPATH . '/tmp/sessions');
ini_set('session.gc_maxlifetime', 36000);
session_start();
?>
Both directives have the PHP_INI_ALL changeable mode, so they can be set inside scripts.
Any webhost worth their salt will give you a directory above your public_html (or whatever) folder. If yours does, then you can create a directory for sessions there, and it won't be accessible from the web.
If your hosting is so crappy that anything you're allowed to touch via FTP/SSH/whatever is also available via HTTP, things are more annoying.
So assuming you have a crappy host, here are a few ideas:
1) Store sessions inside your web root, and use .htaccess to make it non-browsable.
2) Store session data in the database.
Either of those options should enable you to set your own garbage-collection rules via ini_set(), and avoid having other processes clobber your sessions.
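For option 1, a minimal .htaccess placed in the sessions directory could look like this (Apache 2.4 syntax; on Apache 2.2 the equivalent would be `Order deny,allow` / `Deny from all`). This is a sketch, assuming your host allows .htaccess overrides at all:

```
# Block all HTTP access to this directory; PHP can still read
# and write the session files through the filesystem.
Require all denied
```

If overrides are disabled (AllowOverride None), this file is silently ignored, so verify by requesting a file in that directory from a browser.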
My php.ini file is set to expire sessions within 24 hours. But my users complain after being logged out after just 20 minutes or so.
I use session_start at the beginning of every page. Could that be messing things up for me?
Or could there be anything else causing this?
Just realized I might be on shared hosting, and it might have some group settings for garbage collection of sessions. Does anyone know how to look into this, or how to set mine to be more specific?
Thanks!
Check phpinfo() to see what the settings really are. PHP has multiple .ini files, and its settings can be overridden in multiple places, so your session settings may not be the ones actually in effect. phpinfo()'s output shows the "Local" this-is-what's-in-effect values.
Beyond that, session_start() won't delete a session itself, but it MAY trigger a session garbage collector run based on a few gc_* .ini settings. It's a probabilistic thing, though, and won't happen every time you start a session.
Another possibility is that your session files are going into a system temp directory somewhere, and something external to PHP is cleaning up that directory at 20 minute intervals. So check what the session.save_path setting is and see if anything's cleaning up that location.
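A quick way to dump the relevant effective values from inside a script, without hunting for the right .ini file (only standard session directives are used here):

```php
<?php
// Print the session settings actually in effect, the same values
// phpinfo() reports in its "Local" column.
echo 'save_path: ', session_save_path(), "\n";
echo 'gc_maxlifetime: ', ini_get('session.gc_maxlifetime'), "\n";
echo 'gc_probability: ', ini_get('session.gc_probability'), "\n";
echo 'gc_divisor: ', ini_get('session.gc_divisor'), "\n";
```

If save_path points at a shared system directory like /tmp, another tenant's gc_maxlifetime (or an external cleanup job) can delete your files early.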
ini_set('session.gc_maxlifetime',28800); #28800 - just an example time - set your own
ini_set('session.gc_probability',1);
ini_set('session.gc_divisor',1); #1/1 = run garbage collection on every request
session_save_path('/path to your sessions folder');
ob_start();
session_start();
You do need to create a session folder first.
This works for sure on GoDaddy shared hosting.
On VPS you can use this or just update your php.ini file.
I have a script named INDEX.php that runs from the root directory //htdocs, because that script needs to use $_SESSION variables and other things in a subfolder.
Now if I try to debug using Eclipse, it asks me for a new workspace; even if I put the new workspace under htdocs, the settings inside the script are still lost.
How do I resolve this? How do I set up the dev environment in Eclipse so that it treats the code as if it were run from htdocs?
This is a poorly asked question. What do you mean by "script needs to use $SESSION variables and other things in sub folder"? If you're referring to $_SESSION, it has nothing to do with folders.
If you're saying that values within $_SESSION are not staying there from one execution to the next, then you need to make sure that cookies are enabled, and that whatever browser/environment you are using to view the page supports cookies.
The cookie holds the ID that identifies the session that allows PHP to find the session data. You can also pass the ID from one URL to another, but that probably won't work in your case.
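To make the mechanism concrete, here is a minimal sketch using only standard PHP session functions (no project-specific names assumed):

```php
<?php
session_start();
// The browser stores this ID in a cookie named session_name()
// (by default "PHPSESSID"); on the next request PHP reads the
// cookie and uses the ID to locate the matching session file.
echo session_name(), '=', session_id(), "\n";
// Passing the ID in a URL instead would use the SID constant, e.g.:
//   echo '<a href="next.php?' . SID . '">next</a>';
```

Either way, no folder layout is involved from the script's point of view; $_SESSION is populated transparently once the ID is matched.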
I've got a really annoying problem with file uploads.
Users can choose a file in an html file field. When they submit the form, this file will be uploaded.
On the serverside I just use standard PHP code (move_uploaded_file). I do nothing weird.
Everything works perfectly.
I can see the file on the server, I can download it again, ...
However sometimes this doesn't work. I upload the file, process it and I get no errors.
But the file just doesn't exist on the server.
Each time I upload that specific file I get no errors but it never gets saved.
Only if I rename it (test.file to tst.file, for example) can I upload it and have it actually saved.
I get this problem very rarely. And renaming always works. But I can't ask users to rename their files obviously...
I have no access to the apache tmp file directory, no access to logs or settings so this makes debugging even harder. I only have this problem on this particular server (which I don't manage; I don't even have access to it) and I use the exact same code on lots of servers that don't have this problem.
I would be grateful if someone could help me out here or point me in the right direction.
Try adding this debug code:
echo '<pre>';
print_r($_FILES);
echo '</pre>';
You should see an error number. You can lookup what it means at http://uk3.php.net/manual/en/features.file-upload.errors.php
Might also be worth checking to make sure the destination file doesn't already exist.
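Building on that, here is a hedged sketch of acting on the error code before moving the file. The field name "userfile" and the uploads/ destination are assumptions, not from the original code:

```php
<?php
// Inspect the upload status; only move the file on success.
if (empty($_FILES['userfile'])) {
    echo "No upload data at all - post_max_size may have been exceeded\n";
} elseif ($_FILES['userfile']['error'] === UPLOAD_ERR_OK) {
    $dest = __DIR__ . '/uploads/' . basename($_FILES['userfile']['name']);
    if (!move_uploaded_file($_FILES['userfile']['tmp_name'], $dest)) {
        echo "Upload reported OK but the move failed - check permissions on $dest\n";
    }
} else {
    // The code maps to the UPLOAD_ERR_* constants listed on the
    // manual page linked above.
    echo 'Upload failed with error code ' . $_FILES['userfile']['error'] . "\n";
}
```

Note the first branch: when post_max_size is exceeded, $_FILES arrives completely empty, so checking only ['error'] would miss that case.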
My first thought was file-size issues. If post_max_size or upload_max_filesize in php.ini are too small, you can end up with similar results, where the file just seems to disappear. You would get an error in the Apache logs (which you mention you have no access to).
In those cases, the $_FILES array would simply be empty, as if the file never arrived. Since your responses to Gumbo and James Hall show that PHP is reporting a proper upload, I'm led to wonder about the processing you mention.
If, during processing, memory gets maxed out or the script runs too long, it may be dying before it gets a chance to move the file. You'll want to check these:
memory_limit
max_execution_time
max_input_time
Otherwise, without the Apache logs, it might be a good idea to write to a log file of your own throughout your file-processing script. Try a file_exists() on the tmp file, and see what info you can get about it (permissions, etc.).
Unfortunately PHP doesn't get involved until the upload is finished, which means you won't get much info during the upload, only after the fact. Your best option might be to talk to the hosting company and get access to the logs, even if only for a short time. In my experience, I've rarely had trouble getting at the logs, or at least getting a tech to check them for me while I run tests (in the case where a shared server doesn't split its logs; it seems ridiculous, but I've seen it before).
Edit: I realize you can't change those PHP settings, but you might want to see what they are in order to find out whether they're potential problems for your script. For instance, a low memory limit will kill your processing script if it's less than the size of the uploaded file.
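To at least rule those limits out, you can read (not change) them from your script. This is a sketch using only standard directive names; error_log() usually reaches a log you can read even on shared hosting:

```php
<?php
// Log the effective upload-related limits so they can be compared
// against the size of the file that disappears.
foreach (array('post_max_size', 'upload_max_filesize', 'memory_limit',
               'max_execution_time', 'max_input_time') as $directive) {
    error_log($directive . ' = ' . ini_get($directive));
}
```

While debugging you could swap error_log() for echo to see the values directly in the page.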
If an upload fails you don't get the same kind of error as a PHP syntax error or such.
But you can check the file upload status and report the error to the user yourself.
This is what you said...
"I have no access to the apache tmp file directory, no access to logs or settings so this makes debugging even harder. I only have this problem on this particular server (which I don't manage; I don't even have access to it) and I use the exact same code on lots of servers that don't have this problem."
According to what you said above, I assume that you are using a server that is shared among many users. If the Apache of this server is configured with something like "mod_suphp", then your PHP scripts will be executed using the privileges of your UNIX user account ("jef1234", for example), which means the files you create will have you ("jef1234") as the owner (instead of "apache" or "www-data").
The system's temporary directory (usually "/tmp") is usually configured with the "sticky bit" on. This means everyone can create files in this directory, but the created files are only accessible by the owner (you may treat this as the one who created it).
As a result, if the server configuration is not careful enough, you may have file-naming collisions with other users' files. For example, when you upload "test.file", if another user has already uploaded a file with the same name, the system refuses to overwrite the file created by him, and thus you have to use another name.
Usually the problem does not exist because PHP is smart enough to generate temporary names for uploaded files (i.e. $_FILES["html_form_input_name"]["tmp_name"]). If somehow you can confirm that this is really the reason, the server is obviously misconfigured. Tell your system administrator about the problem and ask him to solve it. If it cannot be solved, you may do some JavaScript tricks on the name of the file before it is uploaded (not tested, just an idea)...
★ When the user submits the form, rename the file from, for example, "test.file" to "jef1234-test.file-jef1234". After the file is uploaded, move the file (ie move_uploaded_file()) to another place and rename it to the original filename by removing the added strings.
Hope this helps...
Asuka Kenji