I am trying to save tokens to a text file with this PHP code, but after about 2 KB the file mysteriously empties and I lose all the data. Why does this happen, and how do I prevent it?
$fh = fopen('token.txt', 'a+');     // 'a+' opens for reading and appending; writes always go to the end
fwrite($fh, $access_token . "\n");  // append the token on its own line
fclose($fh);
Where does this token data originate from? It doesn't appear that you are appending to the file so much as just writing and overwriting it (I could be wrong, as I don't write to files often). Anyway, if this data is being accumulated in something like a session, a cookie, or a GET variable before being written out to your text file, that could be part of the issue. As far as I know, sessions, cookies, and GET parameters have size limits in most cases, and once a limit is reached they break in one way or another. So if that's the case, and your session/cookie/GET value grows too large, whatever operates on it may treat it as null, invalid, or empty, and then write that empty equivalent into the file you're writing to.
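One quick way to check whether the value really is empty by the time it reaches the write (just a debugging sketch, not a fix):

$len = strlen((string) $access_token);
error_log("about to write a token of length $len");  // shows up in the PHP error log

if ($len > 0) {
    $fh = fopen('token.txt', 'a');  // plain append is enough here; 'a+' only adds read access
    if ($fh !== false) {
        fwrite($fh, $access_token . "\n");
        fclose($fh);
    }
}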
Unfortunately, without more context about your overall script, from where these tokens are generated to whatever runs, or keeps recurring, to make them larger and larger, it's hard to give an answer that will be of much assistance to you. Based on the code you have above, I see no real limits as such.
Also, this could be a Windows issue (permissions or otherwise), a server configuration issue, or a PHP configuration issue; a lot of different variables tie into this problem, more so as it sounds like it's a self-hosted development stack on your own machine.
So all in all, the more information you can give us, the better we are at helping :-)
Most examples I've seen for updating text-based log files seem to suggest checking that the file exists, loading it into one big string with file_get_contents(), appending your new log entries to it, and then writing it back with file_put_contents().
I may be over-thinking this, but I think I see two problems there. First, if the log file gets big, isn't it somewhat wasteful of the script's available memory to stuff the huge file contents into a variable? Second, it seems that if you did any processing between the 'get' and the 'put', you risk the possibility that multiple site visitors may update the file between the two calls, resulting in lost log info.
So for a script that is simply called (GET or POST) and exits after doing some work, wouldn't it be better to just build up your current (shorter) log string to be written, and then, just before exit(), open the file in APPEND mode and write?
It would seem that either approach could lead to losing data if there were no LOCK on the file between the get and the put. In the case of file_get/put_contents, I see that method does have a flag available called LOCK_EX, which I assume attempts to prevent that occurrence. But then there is still the issue of the time taken to pull a large file into a variable and add to it before writing it back. Wouldn't it be better to use fopen() in append mode with some kind of 'lock' between the fopen() and the fwrite()?
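To make it concrete, something like this is what I have in mind (an untested sketch; the log path and message are just placeholders):

$logfile = 'app.log';            // placeholder path
$entry   = date('c') . " something happened\n";

$fh = fopen($logfile, 'a');      // append mode: writes always go to the end
if ($fh !== false) {
    if (flock($fh, LOCK_EX)) {   // exclusive lock so concurrent requests don't interleave
        fwrite($fh, $entry);
        fflush($fh);             // push the write out before releasing the lock
        flock($fh, LOCK_UN);
    }
    fclose($fh);
}

// or the one-call equivalent:
file_put_contents($logfile, $entry, FILE_APPEND | LOCK_EX);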
I apologise, as I DO understand that "best way to do something" questions are not appreciated by the community. But surely there is a preferred way that addresses the concerns I'm raising?
Thanks for any help.
What I'm running here is a graphical file manager, akin to OneDrive or OpenCloud or something like that. Files, folders, accounts, and the main server settings are all stored in the database as JSON-encoded objects (yes, I did get rid of columns in favor of JSON). The problem is that if multiple requests use the same object at once, they'll often save back incorrect data, because the requests obviously can't communicate their changes to each other.
For example, when someone starts a download, it loads the account object of the owner of that file, increments its bandwidth counter, and then encodes/saves it back to the DB at the end of the download. But say I have 3 downloads of the same file at once: they'll all load the same account object, change the data as they see fit, and save back their data without regard to the others that overlap. In this case, the 3 downloads would show as 1.
Besides downloads and bandwidth being undercounted, I'm also having a problem where I'm trying to create a maintenance function that loads the server object and doesn't save it back for potentially several minutes. This obviously won't work while downloads are happening and manipulating the server object all the while, because it'll all just be overwritten with old data when the maintenance function finishes.
Basically it's a threading issue. I've looked into PHP APC in the hope that I could make objects persist globally between threads, but that doesn't work, since it just serializes/deserializes data for each request rather than actually having each request point to an object in memory.
I have absolutely no idea how to fix this without completely designing a new system that's totally different.... which sucks.
Any ideas on how I should go about this would be awesome.
Thanks!
It's not a threading issue. Your database doesn't conform to any of the standard rules for designing databases, including even the first normal form: every cell must contain only one value. When you store JSON data in the DB, you cannot write an SQL statement that makes that transaction atomic. So, yes, you need to put that code in a trash bin.
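For comparison, with normalized columns the bandwidth update becomes a single atomic statement, so concurrent downloads can't clobber each other (the table and column names here are just an assumption):

// Hypothetical schema: accounts(id, bandwidth_used, download_count)
$pdo  = new PDO('mysql:host=localhost;dbname=filemanager', 'user', 'pass');
$stmt = $pdo->prepare(
    'UPDATE accounts
        SET bandwidth_used = bandwidth_used + :bytes,
            download_count = download_count + 1
      WHERE id = :id'
);
$stmt->execute(array(':bytes' => $bytesSent, ':id' => $accountId));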
In case you really need to get that code working, you can use a mutex to synchronize the running PHP scripts. The most common implementation in PHP is a file-based mutex.
You can try to use flock; I guess you already have a user id before getting the JSON from the DB.
$lockdir = "/tmp/userlocks";
if (!is_dir($lockdir)) {
    mkdir($lockdir, 0777, true);  // make sure the lock directory exists
}
$lockfile = "$lockdir/$userid.txt";
$fp = fopen($lockfile, "w+");
if (flock($fp, LOCK_EX)) {  // note: LOCK_EX blocks until the lock is free; add LOCK_NB to fail immediately instead
    // Do your JSON update
    flock($fp, LOCK_UN);    // unlock
} else {
    // could not obtain the lock
}
fclose($fp);
What you need to figure out is what to do when the lock is already held: maybe wait 0.5 seconds and try to obtain the lock again, or send a message like "Only one simultaneous download allowed", or ...
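A retry loop along those lines could look roughly like this (just a sketch; the retry count and delay are arbitrary):

$fp = fopen($lockfile, "w+");
$gotLock = false;

for ($attempt = 0; $attempt < 10; $attempt++) {
    if (flock($fp, LOCK_EX | LOCK_NB)) {  // non-blocking: gives up immediately if the lock is held
        $gotLock = true;
        break;
    }
    usleep(500000);                       // wait 0.5 s before trying again
}

if ($gotLock) {
    // Do your JSON update
    flock($fp, LOCK_UN);
} else {
    echo "Only one simultaneous download allowed";
}
fclose($fp);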
This will be a newbie question, but I'm learning PHP for one sole purpose (atm): to implement a solution. Everything I've learned about PHP was learned in the last 18 hours.
The goal is adding indirection to my JavaScript GET requests to allow cross-domain access to another website. I also don't wish to hammer said website, and want to put safeguards in place. I can't rely on them being in the JavaScript, because that can't account for other peers sending their own requests.
So right now I have the following makeshift code, without any throttling measures:
<?php
$expires = 15;                  // cache lifetime in seconds

if (!$_GET["target"])
    exit();

$fn   = md5($_GET["target"]);   // cache key derived from the target URL
$file = "cache/" . $fn;         // path of the cached copy

if (!$_GET["cache"]) {
    // read request: serve the cached copy only if it exists and hasn't expired
    if (file_exists($file) && time() - filemtime($file) <= $expires)
        echo file_get_contents($file);
}
else if ($_GET["data"]) {
    // write request: store the data the client fetched from the target
    file_put_contents($file, $_GET["data"]);
}
?>
It works perfectly, as far as I can tell (it doesn't account for the improbable checksum clash). Now what I want to know, and what my Google searches refuse to procure for me, is how PHP actually launches and when it ends.
Obviously if I were running my own web server I'd have a bit more insight into this: I'm not, and I have no shell access either.
Basically I'm trying to figure out whether I can control for when the script ends in the code, and whether every 'get' request to the php file would launch a new instance of the script or whether it can 'wake up' the same script. The reason being I wish to track whether, say, it already sent a request to 'target' within the last n milliseconds, and it seems a bit wasteful to dump the value to a savefile and then recover it, over and over, for something that doesn't need to be kept in memory for very long.
Every HTTP request starts a new instance of the interpreter; it's basically an implementation detail whether this is a whole new process, or a reuse of an existing one.
This generally pushes you towards good simple and scalable designs: you can run multiple server processes and threads and you won't get varying behaviour depending whether the request goes back to the same instance or not.
Loading a recently-touched file will be very fast on Linux, since it will come right from the cache. Don't worry about it.
Do worry about the fact that by directly appending request parameters to the path you have a serious security hole: people can get data=../../../etc/passwd and so on. Read http://www.php.net/manual/en/security.variables.php and so on. (In this particular example you're hashing the inputs before putting them in the path so it's not a practical problem but it is something to watch for.)
More generally, if you want to hold a cache across multiple requests the typical thing these days is to use memcached.
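If memcached is available on the host, the "when did I last hit this target" timestamp can live there instead of in a file; a rough sketch (the server address, key prefix, and one-second window are assumptions):

$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);  // assumes a local memcached instance

$key  = 'lastfetch_' . md5($_GET['target']);
$last = $mc->get($key);              // false if the key doesn't exist yet

if ($last !== false && time() - $last < 1) {
    http_response_code(429);         // a request for this target went out under a second ago
    exit();
}

$mc->set($key, time(), 60);          // remember this fetch for 60 seconds
// ... proceed with the proxy/cache logic ...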
PHP works on a per-connection basis, i.e. each request for a PHP file is seen as a new instance. Each instance ends, generally, when the connection is closed. You can, however, use sessions to save data between connections for a specific user.
For basic use of sessions look into:
session_start()
$_SESSION
session_destroy()
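A minimal sketch of carrying a per-visitor timestamp between requests with a session (note that a session is per visitor, so it can't see what other visitors are doing):

session_start();                     // must run before any output is sent

if (isset($_SESSION['last_request']) && time() - $_SESSION['last_request'] < 1) {
    exit('Too many requests');       // this visitor already hit the script within the last second
}
$_SESSION['last_request'] = time();  // remember when this visitor last ran the script

// ... do the real work ...

// session_destroy();                // only if you want to discard the session entirely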
Ok, I got a script from: http://abeautifulsite.net/blog/2008/03/jquery-file-tree/
It's a directory listing script, and I am having trouble with it. It works out of the box, no problems per se, other than the fact that it goes way further back into the system structure than I should somehow even be allowed to see.
The person that made the script has this one line that throws me off, and I can't make heads or tails of it.
file_exists($root . $_POST['dir'])
I've never seen $root in that context before, nor is it defined anywhere in the script from what I can tell. So is that a valid thing? If not, can anyone tell me how I can use this script just to display directories starting at a specific directory? The page I link to above shows an example, but it doesn't seem to have any bearing on how the script works.
On the other hand, if someone knows of a canned script that's very similar in nature, I'd be happy to give that a look too. But I'd really like to edit this one to work the way I want it to, so any help would be appreciated.
An example of how far back it's going can be found at http://domainsvault.com/tree/
I say it's going far back because I don't even have access to those directories through my FTP; it's a shared system (HostGator).
*EDIT* Thanks everyone for the input; this is essentially what I was afraid of hearing. I had hoped we could skip reinventing the wheel by using this script, but it's looking more and more like a bricked concept, far from worth using or attempting to tamper with. It'd likely be a lot easier for me to build something from scratch than to deal with this. It was just one of those canned scripts you find that looks aesthetically pleasing to the eye, and you hope for the best. Didn't turn out to be the case; thanks again, all.
file_exists($root . $_POST['dir'])
Run away.
This connector script does no checking on what paths you pass to it, so it's perfectly possible to escape the root (which, yes, you're supposed to set manually) and browse any files on your server that the web user has access to.
Also, it fails to do URL-escaping, and it mangles Unicode through inadvisable use of htmlentities. This will make files whose names contain various punctuation or non-ASCII characters fail.
This is a shonky and insecure script. Do not deploy it.
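For reference, the usual guard against that kind of traversal looks something like this (a general sketch, not a patch for this particular script):

$rootReal  = realpath('/home/user/public_html/files');   // the only directory you want to expose
$requested = realpath($rootReal . '/' . $_POST['dir']);  // resolves any ../ and symlinks

// refuse anything that resolves to a path outside the root
if ($requested === false || strpos($requested . '/', $rootReal . '/') !== 0) {
    exit('Invalid path');
}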
$root is a user-defined variable. It should be defined somewhere in the script; it may be a global. The script can still work if the variable doesn't exist (its definition might have been deleted in a previous code refactor); in that case you can just remove the variable from the line you quoted here.
I think $root means $_SERVER['DOCUMENT_ROOT']. You can define it as
$root = $_SERVER['DOCUMENT_ROOT'];
at the beginning of the script.
I'm creating a script that makes use of the $GLOBALS variable quite a lot. Is there such a thing as putting too much into a variable?
If I have a lot of information stored in $GLOBAL variable when the page loads, is this going to slow down the site much or not really?
Is there a limit to how much information one should store in a variable? How does it work?
And would it be better to remove information from that variable when I am done with it?
Thanks for your help! I want to make sure I get this right before I go any further.
In PHP, there's a memory_limit configuration directive (in php.ini) that you should be aware of.
As meder says, you should really be taking a step back and re-evaluating things. Do you actually use all of that data on each and every web server request?
In almost every case, you'd be better off loading only the data you need, when you need it.
For instance, even if you're reading all this data from some file, instead of a database, you're probably better off splitting that file up into logical groups, and loading the data you need (once!), just before using it (the first time).
Assuming you're running Apache/mod_php, loading everything on every request will balloon the size of your httpd processes, and when you scale with traffic, you'll just start swapping out (which means your app will slow to a crawl, or even worse, become deadlocked) that much faster.
If you really need all or most of the data available for all (or nearly all) requests, consider looking into something like memcache. You can devise ways to share (read-only) data between processes, instead of duplicating it for each and every request.
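As a sketch of the "load it only when you need it" idea (the loadSettingsGroup() helper and the config/*.json files are hypothetical):

// Reads one logical group of settings the first time it is asked for,
// then reuses the already-parsed copy for the rest of the request.
function loadSettingsGroup($group) {
    static $loaded = array();
    if (!isset($loaded[$group])) {
        $loaded[$group] = json_decode(
            file_get_contents("config/$group.json"),   // e.g. config/mail.json
            true
        );
    }
    return $loaded[$group];
}

// only requests that actually send mail pay the cost of loading mail settings
$mail = loadSettingsGroup('mail');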
Some people use a "Registry" object to handle globals.
See how Kevin Waterson does it:
http://www.phpro.org/tutorials/Model-View-Controller-MVC.html (See "5. The Registry")
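The general shape of such a Registry is roughly this (a simplified sketch, not Kevin Waterson's exact code):

class Registry
{
    private static $store = array();

    // keep a value under a name instead of scattering it through $GLOBALS
    public static function set($key, $value) {
        self::$store[$key] = $value;
    }

    public static function get($key) {
        return isset(self::$store[$key]) ? self::$store[$key] : null;
    }
}

// usage
Registry::set('db', new PDO('sqlite::memory:'));
$db = Registry::get('db');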