I am working on a CMS that will be installed for many clients, but as I keep improving it I make modifications to a few files. I want these updates to be applied automatically to all the projects that use the same files.
I thought of running a check file every time the CMS is opened. This file would compare the version of the local file with the remote file; for this I can keep a log or something for the versions, no big deal, but that's not the problem. Here is some sample code I thought of:
$url = 'http://www.example.com/myfile.php';
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_HEADER, false);
$data = curl_exec($curl);
curl_close($curl);
The problem is getting the content of myfile.php: since it's a PHP file, the server will execute it and return the output, but I want the actual source of the file. I understand that this is normally not possible, as it would be a security problem (anybody would be able to get the PHP code of other sites), but is there any way to get the contents of a remote PHP file, maybe by giving special permissions to a remote connection?
Thanks.
You should create a download script on your remote server which returns the original PHP source using readfile().
<?php
$file = $_SERVER['DOCUMENT_ROOT'] . $_GET['file'];
// #TODO: Add a security check that the file is of type php and below the document root. Use realpath() to check this.
header("Content-Type: text/plain");
header("Content-Disposition: attachment; filename=\"" . basename($file) . "\"");
readfile($file);
?>
Get the file contents by fetching http://example.com/download.php?file=fileName
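In case it helps, here is a rough sketch of the realpath() check mentioned in the TODO above; the .php extension check and the 404 response are assumptions, adjust them to your setup.
<?php
// Sketch of the security check from the TODO: resolve the requested path
// and make sure it stays below the document root and is a .php file.
$root = realpath($_SERVER['DOCUMENT_ROOT']);
$file = realpath($_SERVER['DOCUMENT_ROOT'] . '/' . $_GET['file']);

if ($file === false || strpos($file, $root . DIRECTORY_SEPARATOR) !== 0
    || pathinfo($file, PATHINFO_EXTENSION) !== 'php') {
    http_response_code(404); // hypothetical choice: pretend the file does not exist
    exit;
}

header("Content-Type: text/plain");
header("Content-Disposition: attachment; filename=\"" . basename($file) . "\"");
readfile($file);
?>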
I'm trying to create a small service that is able to receive a file and resend it to another location without creating a physical copy in any way (for security reasons).
I'm sending the file with a POST request as raw binary data and accessing it with file_get_contents('php://input').
Simplified code:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://totally-legit-url');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: text/plain']);
curl_setopt($ch, CURLOPT_POSTFIELDS, file_get_contents('php://input'));
$response = curl_exec($ch);
The problem I'm encountering, however, is that whenever I send a file, even a small one (I tried several sizes, for instance 2KB, 299KB and 8MB), PHP automatically creates a temporary file in the sys_temp_dir directory, which is exactly what I'm trying to prevent. I've been searching everywhere for an answer, but I have not found a way to stop PHP from creating a temporary file. I've even tried setting sys_temp_dir to /dev/null, but then the script just crashed with an error.
The script is running on Apache 2.4.25 with PHP 7.1.3.
Is there a way to force the server to not create temp files and work with memory only?
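Not an answer to the temp-file question itself, but as a sketch of keeping the body out of a PHP string, you could let curl read php://input as a stream instead of buffering it with file_get_contents(). The URL and the Content-Length handling below are assumptions, and whether the SAPI still writes a temporary file for the incoming request is a separate matter.
<?php
// Sketch (assumed URL): stream the raw request body straight into curl
// instead of loading it all into memory with file_get_contents().
$in  = fopen('php://input', 'rb');
$len = isset($_SERVER['CONTENT_LENGTH']) ? (int) $_SERVER['CONTENT_LENGTH'] : 0;

$ch = curl_init('http://totally-legit-url');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_UPLOAD, true);            // stream the request body from a handle
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'POST');   // keep the forwarded request a POST
curl_setopt($ch, CURLOPT_INFILE, $in);
curl_setopt($ch, CURLOPT_INFILESIZE, $len);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: text/plain']);
$response = curl_exec($ch);
curl_close($ch);
fclose($in);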
I am working on a small project to download files from an FTP server using a web-based query. I have created an HTML form as a front end, which takes the FTP server name and the file names from the user and then passes them to a PHP script, which connects to the FTP server and downloads the specified files. I have kept both the HTML and PHP files on my university webserver. I am able to download files on the machine running the webserver when I run the PHP script directly from the command line on the server, but I am not able to download files to my local computer using a web browser.
| Web browser running on local machine | <-- not able to download file -->
| My PHP script running on web-server machine | <-- able to download file --> | FTP server |
<?php
$targetFile = 'file.txt';
$curl = curl_init();
$fh = fopen(dirname(__FILE__) . '/'.$targetFile,'w+b');
if ($fh === false) {
print "File not opened<br>";
exit;
}
echo "<br>configuring curl...<br>";
curl_setopt($curl, CURLOPT_URL, "ftp://full_path_name_of_file");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_HEADER, 0);
curl_setopt($curl, CURLOPT_VERBOSE, 1);
curl_setopt($curl, CURLOPT_FILE, $fh);
curl_exec($curl);
echo curl_error($curl);
curl_close($curl);
fclose($fh);
?>
The file is downloaded successfully when I run this PHP script from the command line on the server machine, but when I invoke the script from the web browser on my personal machine, I get the error "File not opened".
Can you please tell me if there is any way I can download the file via my web browser?
Thanks!
This might be a file ownership issue
Please check the permissions and the ownership of the file.
In order to debug this a bit better, you might use parts of the script provided here:
https://stackoverflow.com/a/10377285/1163786
Check the PHP configuration
There is a difference between the PHP configuration for the CLI and the one for the webserver; the latter might have some restrictions compared to the CLI one. Please compare or diff the two files to see the configuration differences.
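To see quickly which configuration each side loads, a tiny probe script (run once from the CLI and once through the browser) might be enough; the settings dumped here are just examples of values that often differ.
<?php
// Probe: which SAPI and php.ini are in use, plus a few settings that
// commonly differ between CLI and webserver configurations.
echo PHP_SAPI, "\n";
echo php_ini_loaded_file(), "\n";
var_dump(ini_get('open_basedir'), ini_get('disable_functions'), ini_get('allow_url_fopen'));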
The download itself is not initiated
The script downloads a file via curl from an FTP server and stores it in a folder on your webserver, but you are not pointing the browser (client) to the downloaded file, nor initiating a downstream to the browser from the script.
I would add a check that the script is not called from the CLI, and then do a header redirect to the downloaded file.
if (PHP_SAPI !== 'cli') {
    header("Location: " . WWW_URL_SERVER . $path_to . $targetFile);
}
You can test the download standalone by using the "direct link" in your browser to initiate a download. This is again a permission thing: this time the webserver itself serves the static file and needs permission to do so.
Referencing: Redirect page after process complete in PHP
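Alternatively, instead of redirecting, the same script could stream the just-downloaded file back to the browser. A rough sketch, reusing $targetFile from the question and assuming nothing has been echoed before the headers:
<?php
// Sketch: serve the file that was saved next to the script, instead of
// redirecting the browser to it. Header values are typical assumptions.
if (PHP_SAPI !== 'cli') {
    $path = dirname(__FILE__) . '/' . $targetFile;
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . basename($targetFile) . '"');
    header('Content-Length: ' . filesize($path));
    readfile($path);
    exit;
}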
This code I wrote does work and downloads the file to the client browser, but with a delay: it seems to first download the file to some kind of temp directory, and only when that's done does it pass it to the client.
I don't want this; I want the file to be downloaded without any delay.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch);
curl_close($ch);

header('Content-Type: application/force-download');
header('Content-Disposition: attachment; filename="'.$filename.'"');
header('Content-Length: '.strlen($data));
echo $data;
The file is not downloaded to a temporary disk location. The delay you're describing is caused by the fact that curl_exec() blocks your script's execution: it has to wait until the entire file has been downloaded before your script can continue to run.
To get access to the data while it's being downloaded, you'll need to stream it with curl, either by writing to a file handle with CURLOPT_FILE or by using CURLOPT_WRITEFUNCTION with a callback function. Check out this answer for a detailed solution.
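For illustration, a minimal sketch of the CURLOPT_WRITEFUNCTION approach, assuming $url and $filename are set as in the question: every chunk is pushed to the client as soon as curl receives it.
<?php
// Sketch: forward each chunk to the browser as it arrives instead of
// waiting for curl_exec() to finish buffering the whole file.
header('Content-Type: application/force-download');
header('Content-Disposition: attachment; filename="' . $filename . '"');

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_WRITEFUNCTION, function ($ch, $chunk) {
    echo $chunk;             // send this chunk to the client right away
    flush();                 // push the output to the client (ob_flush() may also be needed if output buffering is on)
    return strlen($chunk);   // tell curl how many bytes were handled
});
curl_exec($ch);
curl_close($ch);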
I'm trying to download a zip file using curl from one virtual host to another on the same server. The zip file contains *.php and *.jpg files.
The problem is: sometimes the JPG files get corrupted.
Here is my code:
$out = fopen(ABSPATH.'/templates/default.zip','w+');
$ch = curl_init();
curl_setopt($ch, CURLOPT_FILE, $out);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_URL, 'http://share.example.com/templates/default.zip');
curl_exec($ch);
curl_close($ch);
$zip = new ZipArchive;
if ($zip->open(ABSPATH.'/templates/default.zip') === TRUE)
{
if($zip->extractTo(ABSPATH.'/templates'))
{
echo 'OK';
}
$zip->close();
}
//$zip->close();
I don't understand what happens to my JPG files. I also tried using pclzip.lib.php, but no luck. How can I solve this problem?
Thanks in advance
Have you tried downloading the file via curl and unzipping it normally (i.e. without PHP), to figure out whether the download or the unzip causes the problem?
You might also try replacing either part using shell_exec (wget instead of curl, unzip instead of ZipArchive). I mean just for debugging, not for production.
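Since both virtual hosts live on the same server, another quick way to narrow it down is to compare a checksum of the original zip with the downloaded copy; the source path below is a hypothetical example.
<?php
// Debugging sketch: if the two hashes differ, the transfer corrupts the
// file; if they match, the problem is in the extraction step.
$src = '/var/www/share/templates/default.zip';   // hypothetical path on the source vhost
$dst = ABSPATH . '/templates/default.zip';
echo hash_file('md5', $src), "\n";
echo hash_file('md5', $dst), "\n";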
Finally I found the problem.
I'm using the Nginx web server; when I changed this in the Nginx config files:
sendfile on;
became
sendfile off;
My images are not corrupted anymore, so it's not a PHP or curl problem. An interesting article: http://technosophos.com/node/172
I have a script that pulls URLs from the database and downloads them (pdf or jpg) to a local file.
Code is:
$cp = curl_init($remote_url);
$fp = fopen($dest_temp, "w");
curl_setopt($cp, CURLOPT_FILE, $fp);
#curl_setopt($cp, CURLOPT_HEADER, TRUE);
curl_exec($cp);
curl_close($cp);
fclose($fp);
If the remote file is there, it works fine. If the remote file is not there, it just bombs and the browser hangs forever.
What's the best approach to handling this? Should I somehow check for the file first, or can I set options above that will handle this? I tried setting timeouts but it had no effect.
This is my first experience using cURL.
I used to use wget much as you're using curl and got frustrated by the lack of ability to know what is going on, because it's essentially calling out to an external program.
I use Perl's WWW::Mechanize, and the link below is a PHP version which might be a bit more robust for you in dealing with such instances.
http://www.compasswebpublisher.com/php/www-mechanize-for-php
Hope this helps.
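If you prefer to stay with plain curl, a rough sketch of failing fast when the remote file is missing could look like this; the timeout values are arbitrary examples and the variable names follow the question.
<?php
// Sketch: bail out early on connection problems and check the HTTP status
// before keeping the downloaded file.
$cp = curl_init($remote_url);
$fp = fopen($dest_temp, 'w');
curl_setopt($cp, CURLOPT_FILE, $fp);
curl_setopt($cp, CURLOPT_FAILONERROR, true);      // treat 404 and other HTTP errors as failures
curl_setopt($cp, CURLOPT_CONNECTTIMEOUT, 10);     // seconds allowed to establish the connection
curl_setopt($cp, CURLOPT_TIMEOUT, 60);            // overall limit for the transfer
$ok   = curl_exec($cp);
$code = curl_getinfo($cp, CURLINFO_HTTP_CODE);
curl_close($cp);
fclose($fp);

if (!$ok || $code !== 200) {
    unlink($dest_temp);                           // discard the partial or empty file
    // handle the failure however fits your application
}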