Tearing eyeballs out, PHPThumb class on shared hosting is failing - php

UPDATE & SOLUTION: Everyone, for whoever has this problem in the future, I figured out how to solve it. If you use the phpThumb class, you MUST import the settings from the config file yourself, as it will not do this otherwise. I discovered this by opening the test file that Clint gave. To do so, insert this code after you instantiate the object:
// $PATH is assumed to point at the phpThumb directory
if (include_once($PATH . 'phpThumb.config.php')) {
    foreach ($PHPTHUMB_CONFIG as $key => $value) {
        // pass each config value through as a 'config_*' parameter
        $keyname = 'config_' . $key;
        $phpThumb->setParameter($keyname, $value);
    }
}
Thanks to the people who attempted to help, and thanks Clint for at least giving me somewhere to look.
Original Question:
Before I can no longer read this message due to my newfound blindness, I need help with something. And before you go further, I must warn you that I have to link to other sites to show you the problems, so get ready to have a few tabs open.
So I am using phpThumb to generate images for my gallery website. It was actually going along really great, until I uploaded the script so that my business partner could start showing clients an alpha stage of the script (I know it says beta).
http://speakwire.net/excitebeta2/?m=2
The problem becomes very obvious on the two gallery pages, which I happened to link to: the images are not being created at all. If you go into the admin panel, they seem to work, but that is merely a cache generated from my desktop. I have meticulously stopped at every step and even tried to manipulate the class code. I looked for other scripts, but they did not help me, because they did not have what I needed. Because the code is proprietary, though, I cannot share it. I bet you are thinking "Oh my farce god", but here is something you can look at, because I am able to replicate the same problem with the code I got before:
http://speakwire.net/phpthumbtest/
The second website has the EXACT same structure and code as:
http://mrphp.com.au/code/image-cache-using-phpthumb-and-modrewrite
The few exceptions: the first is allowing the 100x100 parameters, but those are supposed to be changed, and I know that is not causing the error, because its very existence is optional and removing it only lets people do naughty things. The second is something I added only after the error persisted: chmod(dirname($path), 0777); because, for some weird reason, mkdir won't give the folder 777 permissions.
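(For reference, the usual explanation for mkdir not honouring 0777 is the process umask: mkdir's mode argument is filtered through it, so a umask of 0022 silently turns 0777 into 0755. A minimal sketch of the standard workaround, where $path stands in for whatever directory is being created:)
// clear the umask so mkdir's mode is applied literally
$oldUmask = umask(0);
if (!is_dir($path)) {
    mkdir($path, 0777, true); // recursive create with the full mode
}
umask($oldUmask); // restore the previous umask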
The old image: http://speakwire.net/phpthumbtest/images/flowerupright.JPG
The new image: http://speakwire.net/phpthumbtest/thumbs/100x100/images/flowerupright.JPG
As seen with the new image, it is unable to write the file. That appears to be phpThumb's doing, whether because of the parameters I'm giving it or because the hosting does not permit it.
Which brings me to the point: the script works superbly on my desktop WAMP, but fails on the GoDaddy hosting. My business partner is going to open an account soon on the hosting we plan to put people on, but the problem is still there, and if it is happening here, it most certainly can happen there too, even though it won't be on GoDaddy's servers later.
The specific place where it is failing I will insert here; for the rest you need to open up the mrphp.com.au site, as it is way too long to post here.
require('../phpthumb/phpthumb.class.php');
$phpThumb = new phpThumb();
$phpThumb->setSourceFilename($image);
$phpThumb->setParameter('w', $width);
$phpThumb->setParameter('h', $height);
$phpThumb->setParameter('f', substr($thumb, -3, 3)); // set the output format from the extension
//$phpThumb->setParameter('far', 'C'); // scale outside
//$phpThumb->setParameter('bg', 'FFFFFF'); // scale outside
if (!$phpThumb->GenerateThumbnail()) { // RETURNS FALSE FOR SOME REASON
    error('cannot generate thumbnail'); // and is called, due to the failure
}
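(Side note for anyone debugging the same failure: phpThumb keeps an internal log of why it gave up. If I'm reading the class right, it exposes a fatalerror string and a debugmessages array, so a temporary diagnostic like this, placed where the failure occurs, will usually name the exact cause, such as a missing GD function or an unwritable cache directory:)
if (!$phpThumb->GenerateThumbnail()) {
    // dump phpThumb's own account of what went wrong
    echo 'Fatal error: ' . $phpThumb->fatalerror . "\n";
    echo implode("\n", $phpThumb->debugmessages);
    exit;
}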
I would love you long time, whoever helps me with this, because I have spent essentially all my free time for the last few days, including time meant for sleeping, trying to figure this out.
EDIT: http://speakwire.net/phpthumbtest/index2.php
I added this at Clint's suggestion; it seems ImageMagick isn't working. Could that really be the problem, and how would I fix it?

Sounds like a permissions issue. Make sure Apache has write access to whatever folder you are writing to.
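(To confirm that quickly from PHP itself, a throwaway probe script along these lines will tell you which user PHP runs as and whether the cache directory is writable; the thumbs/ path here is an assumption, substitute wherever phpThumb writes its cache:)
<?php
$dir = __DIR__ . '/thumbs'; // assumed cache directory, adjust as needed

// on shared hosting, PHP often runs as a different user than your FTP account
if (function_exists('posix_geteuid')) {
    $user = posix_getpwuid(posix_geteuid());
    echo 'PHP runs as: ' . $user['name'] . "\n";
}

echo $dir . (is_writable($dir) ? ' is writable' : ' is NOT writable') . "\n";

// a real write test settles it definitively
$probe = $dir . '/write_test.tmp';
if (@file_put_contents($probe, 'ok') !== false) {
    echo "Write test succeeded\n";
    unlink($probe);
} else {
    echo "Write test failed\n";
}
?>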

Related

PHP - Allow access from only one domain

I have kind of a strange situation right now. Basically, my company is currently putting links to the latest builds of our software behind a gate, but once a user signs in, they can distribute the link to the builds: free 30-day trials, unaccounted for.
So I would like to block access to URL /downloads/file.extension, or even just /downloads/ entirely, unless the referring URL is allowed_domain.com.
I would love to use .htaccess, but the hosting provider we use (WP Engine) has their servers set up on Nginx. Not only that, but I would have to send my Nginx code to WP Engine for them to implement for me, which is not practical.
Thus, I am looking for a PHP-ONLY solution, or a WordPress plugin that I apparently didn't notice.
And yes, I know this is a really stupid way for my company to be storing important files, but I need to fix this now until they come up with a better way.
You could use this method that I'm using. You're going to need to list the allowed IP addresses. Enjoy.
<?php
// whitelist of allowed IPs (placeholder values)
$allow = array("201.168.0.1", "456.789.123", "789.123.456");

// the forwarded-for header only exists behind a proxy
$forwarded = isset($_SERVER['HTTP_X_FORWARDED_FOR']) ? $_SERVER['HTTP_X_FORWARDED_FOR'] : '';

if (!in_array($_SERVER['REMOTE_ADDR'], $allow) && !in_array($forwarded, $allow)) {
    header("Location: http://yourdomain.com/index.php"); // redirect everyone else
    exit();
}
?>
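One caveat on this approach: HTTP_X_FORWARDED_FOR is supplied by the client and is trivially spoofable, so it should never be trusted on its own; REMOTE_ADDR is the only value the server itself vouches for.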
Unfortunately, it is not possible to do what you described. When someone requests a file from the site, the web server first checks whether the file exists, and if it does, it serves it immediately; the request never goes through any PHP code. Therefore you cannot write any PHP code that will intercept this request and block it.
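(That said, if the files could be moved outside the web root, which may well not be possible on WP Engine, every download would have to pass through a PHP script, and a gatekeeper becomes feasible. A minimal sketch, where the storage path and the session flag are both assumptions:)
<?php
// usage: download.php?file=build.zip (serves files stored OUTSIDE the web root)
session_start();

// hypothetical login flag set by the existing sign-in gate
if (empty($_SESSION['logged_in'])) {
    header('HTTP/1.1 403 Forbidden');
    exit('Access denied');
}

$base = '/home/site/private_downloads/';                 // assumed storage directory
$file = isset($_GET['file']) ? basename($_GET['file']) : ''; // strip any path components

if ($file === '' || !is_file($base . $file)) {
    header('HTTP/1.1 404 Not Found');
    exit('No such file');
}

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="' . $file . '"');
header('Content-Length: ' . filesize($base . $file));
readfile($base . $file);
?>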

How can I edit a live webpage without causing chaos for the page's visitors during my editing session?

Pardon me if these very basic questions have been asked and answered many times, but several hours of searching turned up nothing pertinent.
(1) After "going live" with a webpage, a developer wants to make some changes to the page, but does not want those changes to "go live" until he has completed a cycle of PROTOTYPING and TESTING the changes (you know, basic SDLC). How does he do this if the website is live?
(2) An even more basic rephrasing of the question: while I am spending, say, 30 minutes making updates to an existing, live webpage, it appears that any visitor to that page during that time will observe, IN REAL TIME, every incremental change I am making (including inadvertent blunders, typos, etc.). I must be missing something really obvious here, so forgive me! How do I make changes to a currently live webpage without causing such chaos during my edit sessions?
I am seeing so many bad comments and one bad answer. You never work on live pages. What if you break something? The fact that you are asking this question tells me you are not yet experienced enough to be sure you won't make a mistake.
There is really only one good way to do this in my opinion.
Create a subdomain, dev.whatever.com, and put a copy of your current live site there. Work on your local project to start; when you are happy with that result, move it to your dev site and make sure there are no issues. Do it all there first, and then, when you are happy, move it to live.
Why not just save a copy of the page with a different file name? Upload it and work off that until it works how you want. No one will know the page exists except you, so no one will see any issues. Once you are happy with it, just use that page's code.
I think working on a local repository and learning version control would be best. This way, in case you do happen to completely break your code, you can revert to a previous working version without any worries. You can also use a local server, so that only you can see the current state of your code. Once you are happy with everything and all the code works, you can push the changes to your remote server.
version control: https://git-scm.com/
Further to my comment, here is a simple page-caching script that uses a define to toggle between cache mode and dynamic mode. Set CACHESITE to false to return to live rendering of the page. This would need to be expanded to bypass the cache when you are an admin (or on some other trigger that you decide), so you can see the changes being made while the ordinary user sees the cached page:
<?php
// if true, serve and save cached copies of this page
define("CACHESITE", true);

function CachePage($content = '', $saveto = 'index.php')
{
    if (CACHESITE) {
        // if you have a define for the site root, use it instead
        $dir    = $_SERVER['DOCUMENT_ROOT'] . '/temp/';
        $cached = $dir . $saveto;

        // if a cached copy exists, return it instead of the fresh content
        if (is_file($cached)) {
            return file_get_contents($cached);
        }
        if (!is_dir($dir)) {
            mkdir($dir, 0755, true);
        }
        // first hit: save the freshly rendered content for next time
        file_put_contents($cached, $content);
    }
    return $content;
}

// whatever your content is, it needs to be in a string;
// use output buffering if you have include pages
$test = 'test is the best' . rand();
echo CachePage($test, 'thispage.php');
?>
At any rate, there are lots of ways to do this sort of thing, but caching works relatively well.

My write-to folder keeps emptying

I am trying to save tokens to a file using this code, but after about 2 KB the file mysteriously empties and I lose all the data. Why does this happen, and how do I prevent it?
$fh = fopen('token.txt', 'a+');
fwrite($fh, $access_token . "\n");
fclose($fh);
This token data you speak of: where does it originate? It doesn't look to me like you are appending to the file so much as writing and overwriting it (I could be wrong; I don't write to files often). Anyway, if this data is being accumulated in something like a session, a cookie, or a GET variable before being shot off into your text file, that could be part of the issue. As far as I know, sessions, cookies, and GET variables all have size limits, and once a limit is reached they break in one form or another. If that's the case, and your session/cookie/GET value has grown too large, whatever operates on it may be treating it as null, invalid, or empty, and writing that equivalent into your file.
Unfortunately, without more context on your overall script (where these tokens are generated, and what runs or keeps recurring to make them larger and larger), it's hard to give an answer of much assistance. Based on the code above, I see no real limits exactly.
Also, this could be a Windows issue (permissions or otherwise), a server configuration issue, or a PHP configuration issue; a lot of different variables tie into this problem, especially since it sounds like a self-hosted development stack on your own machine.
So, all in all, the more information you can give us, the better we can help :-)
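(One concrete possibility worth ruling out, assuming the script can be hit by concurrent requests: two overlapping writes can clobber or truncate each other. An append with an exclusive lock closes that door in one line:)
<?php
// append with an exclusive lock so concurrent requests
// cannot truncate or interleave each other's writes
file_put_contents('token.txt', $access_token . "\n", FILE_APPEND | LOCK_EX);
?>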

php script that deletes itself after completion

There's a problem that I'm currently investigating: after a coworker left, some files that he created got deleted one night, basically all his work on a completed project that the boss hadn't paid him for. From what I know, all access credentials have since been changed.
Is it possible to do this by setting up a file to do the deletion task and then delete the file in question? Or something similar that would change the code after the task has been done? Is this untraceable? (I'm thinking he could have cleverly disguised the request as a normal one; I have skimmed through the code base and the raw access logs and found nothing.)
It's impossible to tell whether this is what actually happened or not, but setting up a mechanism that deletes files is trivial.
This works for me:
<?php // index.php
unlink("index.php"); // the script removes its own file
It would be a piece of cake to set up a script that, if given a certain GET variable for example, would delete itself and a number of other files.
Except for the server access logs, I'm not aware of a way to trace this - however, depending on your OS and file system, an undelete utility may be able to recover the files.
It has already been said in the comments how to prevent this - using centralized source control, and backups. (And of course paying your developers - although this kind of stuff can happen to anyone.)
Is it possible to do this by setting up a file to do the deletion task and then delete the file in question?
Yes, it is. He could have left an innocuous-looking PHP file on the server which, when accessed over the web later, would give him shell access. Getting this file to self-delete when he is done is possible.
Create a php file with the following in it:
<?php
if ($_GET['vanish'] == 'y') {
    echo "You wouldn't find me the next time you look!";
    // the leading # keeps the unlink disarmed; remove it to actually delete
    #unlink(preg_replace('!\(\d+\)\s.*!', '', __FILE__));
} else {
    echo "I can self destruct ... generally";
}
?>
Put it on your server and navigate to it. Then navigate to it again with a "vanish=y" argument and see what happens.

PHP odd variable Question. Pertaining to "Root"

OK, I got a script from: http://abeautifulsite.net/blog/2008/03/jquery-file-tree/
It's a directory-listing script. I am having trouble with it. It works out of the box, no problems per se, other than the fact that it somehow goes way further back into the system structure than I am allowed to see.
The person that made the script has this one line that throws me off, and I can't make heads or tails of it:
file_exists($root . $_POST['dir'])
I've never seen $root used in that context before, nor is it defined anywhere in the script from what I can tell. So is that a valid thing? If not, can anyone tell me how I can use this script to display directories starting at a specific directory? The document I point to with the above link shows an example, but it doesn't seem to mean anything to the script's workings.
On the other hand, if someone knows of a canned script that's very similar in nature, I'd be happy to give that a look too. But I'd really like to edit this one to work the way I want it to work, so any help would be appreciated.
An example of how far back it goes can be found at http://domainsvault.com/tree/
I say it goes far back because I don't even have access to those directories through my FTP; it's a shared system (HostGator).
EDIT: Thanks, everyone, for the input; this is essentially what I was afraid of hearing. I had hoped we could skip reinventing the wheel by using this concept, but it appears more and more that it's basically a bricked concept, far from worth using or attempting to tamper with. It would likely be a lot easier for me to build something from scratch than to deal with this. It was just one of those canned scripts that looks aesthetically pleasing, and you hope for the best. That didn't turn out to be the case. Thanks again, all.
file_exists($root . $_POST['dir'])
Run away.
This connector script does no checking on what paths you pass to it, so it's perfectly possible to escape the root (which, yes, you're supposed to set manually) and browse any files on your server that the web user has access to.
Also, it fails to do URL escaping, and it mangles Unicode through inadvisable use of htmlentities. This will make files whose names contain various punctuation or non-ASCII characters fail.
This is a shonky and insecure script. Do not deploy it.
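(For anyone who wants to salvage the idea rather than this particular script, the standard fix is to canonicalise the requested path and verify it still lives under the configured root before touching the filesystem. A minimal sketch, with $root standing in for the manually configured base directory:)
<?php
$root = '/var/www/files/'; // the manually configured base directory (assumption)

$dir       = isset($_POST['dir']) ? $_POST['dir'] : '/';
$base      = realpath($root);
$requested = realpath($root . $dir); // resolves symlinks and ../ tricks

// reject anything that resolved outside the root (or doesn't exist)
if ($requested === false ||
    ($requested !== $base && strpos($requested, $base . DIRECTORY_SEPARATOR) !== 0)) {
    header('HTTP/1.1 403 Forbidden');
    exit('Invalid path');
}

// safe to list $requested from here on
foreach (scandir($requested) as $entry) {
    if ($entry !== '.' && $entry !== '..') {
        echo htmlspecialchars($entry, ENT_QUOTES, 'UTF-8') . "\n";
    }
}
?>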
$root is a user-defined variable. It should be defined somewhere in the script - it may be a global. The script can still work if the variable doesn't exist (it might have been deleted in a previous code refactor), in that case you should just delete the variable from the line you copied here.
I think $root means $_SERVER['DOCUMENT_ROOT']. You can define it as

$root = $_SERVER['DOCUMENT_ROOT'];

at the beginning.
