Using TorrentFlux's Download Action Programmatically - php

Summary: I'm writing a script that automatically downloads a .torrent file from isohunt.com and then downloads that torrent's contents to the DOWNLOADS folder, but I can't get the torrent's contents to download.
I have a torrent file (file.torrent). I can use TorrentFlux's web interface to download the torrent, but I want to start the download programmatically.
I found that the TorrentFlux web app uses dispatcher.php like this to start a download:
dispatcher.php?action=start&transfer=_file.torrent
I'm trying to request this URL with cURL, but it's not working:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://localhost/torrentflux/html/dispatcher.php?action=start&transfer=file.torrent");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_exec($ch);
curl_close($ch); // curl_close() needs the handle
Note: I'm asking here because the official TorrentFlux forum has a database error.
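One likely cause, though this is an assumption rather than a confirmed diagnosis: dispatcher.php sits behind TorrentFlux's login, so an unauthenticated cURL request gets redirected to the login page instead of starting the transfer. A minimal sketch that reuses a logged-in session cookie (the cookie value is a placeholder you'd copy from a browser session):
// Hedged sketch: send the session cookie of an already logged-in
// TorrentFlux user along with the dispatcher request. The PHPSESSID
// value below is a placeholder, not something the API defines.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://localhost/torrentflux/html/dispatcher.php?action=start&transfer=file.torrent");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_COOKIE, "PHPSESSID=<session id from a logged-in browser>");
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1); // follow any login redirect
$response = curl_exec($ch);
curl_close($ch);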

Related

How to prevent temp files from being created in PHP when receiving binary files

I'm trying to create a small service that is able to receive a file and resend it to another location without creating a physical copy in any way (for security reasons).
I'm sending the file with a POST request as raw binary data and accessing it with file_get_contents('php://input').
Simplified code:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'http://totally-legit-url');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: text/plain']);
// Forward the raw request body to the target URL
curl_setopt($ch, CURLOPT_POSTFIELDS, file_get_contents('php://input'));
$response = curl_exec($ch);
The problem I'm encountering is that whenever I send a file, even a small one (I tried several sizes, for instance 2 KB, 299 KB, and 8 MB), PHP automatically creates a temporary file in the sys_temp_dir directory, which is exactly what I'm trying to prevent. I've been searching everywhere for an answer but have not found a way to stop PHP from creating the temporary file. I even tried setting sys_temp_dir to /dev/null, but then the script just crashed with an error.
The script is running on Apache 2.4.25 with PHP 7.1.3.
Is there a way to force the server to not create temp files and work with memory only?
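One approach worth trying is to forward the body as a stream instead of loading it into a string, so the script itself never holds a full copy of the file. A minimal sketch; note that whether PHP still buffers the incoming request to disk depends on the SAPI and on settings such as enable_post_data_reading, so this only removes the extra in-script copy:
// Hedged sketch: stream php://input to the target rather than
// buffering it with file_get_contents(). The URL is the same
// placeholder as in the question.
$in = fopen('php://input', 'rb');
$ch = curl_init('http://totally-legit-url');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'POST'); // upload as POST instead of PUT
curl_setopt($ch, CURLOPT_UPLOAD, 1);             // read the body from CURLOPT_INFILE
curl_setopt($ch, CURLOPT_INFILE, $in);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: text/plain']);
$response = curl_exec($ch);
curl_close($ch);
fclose($in);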

Update local php file from remote php file

I am working on a CMS that will be installed for many clients, but as I keep improving it I modify a few files. I want these updates to be applied automatically to all the projects using the same files.
I thought of executing a check file every time the CMS is opened. This file would compare the version of the local file with that of the remote file; for this I can keep a log or something for the versions, which is no big deal. That's not the problem. Here is some sample code I thought of:
$url = 'http://www.example.com/myfile.php';
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true); // return the body as a string
curl_setopt($curl, CURLOPT_HEADER, false);        // exclude response headers
$data = curl_exec($curl);
curl_close($curl);
The problem is getting the content of myfile.php: since it's a PHP file, the server will execute it and return the output, but I want the actual source of the file. I understand that this is normally impossible, as it would be a security problem (anybody would be able to read the PHP code of other sites), but is there any way to get the contents of a remote PHP file, perhaps by giving special permissions to a remote connection?
Thanks.
You should create a download script on your remote server that returns the original PHP source by using readfile().
<?php
// download.php: returns the raw source of the requested file
$file = $_SERVER['DOCUMENT_ROOT'] . $_GET['file'];
// TODO: Add a security check that the file is of type php and lies below
// the document root. Use realpath() to check this.
header("Content-Type: text/plain");
header('Content-Disposition: attachment; filename="' . basename($file) . '"');
readfile($file);
?>
Get the file contents by fetching http://example.com/download.php?file=fileName
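On the CMS side, the updater can then fetch the source and overwrite the local copy. A minimal sketch, assuming the download.php above is deployed on the remote server and the version check has already decided that an update is needed:
// Hypothetical client-side updater: fetch the raw source served by
// download.php and replace the local file.
$url = 'http://www.example.com/download.php?file=/myfile.php';
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$code = curl_exec($curl);
curl_close($curl);
// Only overwrite the local file if the fetch returned something
if ($code !== false && $code !== '') {
    file_put_contents(__DIR__ . '/myfile.php', $code);
}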

PHP script to download file using web browser

I am working on a small project to download files from an FTP server using a web-based query. I have created an HTML form as a front end, which takes from the user the FTP server name and the file names, and then interacts with the PHP script to connect to the FTP server and download the specified files. I have kept both the HTML and PHP files on my university web server. I am able to download files on the machine running the web server when I run the PHP script directly from the command line on the server, but I am not able to download files on my local computer using a web browser.
| Web browser running on local machine | <-- not able to download file -->
| My PHP script running on web-server machine | <-- able to download file --> | FTP server |
<?php
$targetFile = 'file.txt';
$curl = curl_init();
// Open a local file to receive the download; fopen() returns false on failure
$fh = fopen(dirname(__FILE__) . '/' . $targetFile, 'w+b');
if ($fh === false) {
    print "File not opened<br>";
    exit;
}
echo "<br>configuring curl...<br>";
curl_setopt($curl, CURLOPT_URL, "ftp://full_path_name_of_file");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_HEADER, 0);
curl_setopt($curl, CURLOPT_VERBOSE, 1);
curl_setopt($curl, CURLOPT_FILE, $fh); // write the transfer into $fh
curl_exec($curl);
echo curl_error($curl);
curl_close($curl);
fclose($fh);
?>
The file is downloaded successfully when I run this PHP script from the command line on the server machine, but when I invoke the script from the web browser on my personal machine, I get the error "File not opened".
Can you please tell me if there is any way I can download the file via my web browser?
Thanks!
This might be a file ownership issue
Check the permissions and the ownership of the target directory: invoked through the browser, the script runs as the web server user, which may not be allowed to create the file, while your CLI user is.
In order to debug this a bit better, you might use parts of the script provided here:
https://stackoverflow.com/a/10377285/1163786
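For a quick first check, a minimal diagnostic sketch (it assumes the POSIX extension is available):
// Show which user PHP runs as and whether the script's directory is writable
$dir = dirname(__FILE__);
$user = function_exists('posix_geteuid')
    ? posix_getpwuid(posix_geteuid())['name']
    : 'unknown (POSIX extension not loaded)';
echo "running as: $user<br>";
echo "$dir writable: " . (is_writable($dir) ? 'yes' : 'no') . "<br>";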
Check the PHP configuration
There is a difference between the PHP configuration for the CLI and the one for the web server; the latter might have restrictions that the CLI one does not. Compare or diff the two files to see the configuration differences.
The download itself is not initiated
The script downloads a file via cURL from an FTP server and stores it in a folder on your web server, but you are neither pointing the browser (client) to the downloaded file nor initiating a downstream to the browser from the script. I would add a check that the script is not called from the CLI and then do a header forward to the downloaded file.
if (PHP_SAPI !== 'cli') {
    // WWW_URL_SERVER is assumed to be a constant with the server's base URL
    header('Location: ' . WWW_URL_SERVER . $path_to . $targetFile);
}
You can test the download standalone by using the "direct link" in your browser to initiate a download. This is again a permission thing: this time the web server itself serves the static file and needs permission to do so.
Referencing: Redirect page after process complete in PHP
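Alternatively, instead of redirecting to the static file, the script can stream it to the browser itself. A minimal sketch, reusing $targetFile from the question (the path is an assumption):
// Hedged alternative: serve the just-downloaded file directly
$path = dirname(__FILE__) . '/' . $targetFile;
if (PHP_SAPI !== 'cli' && is_readable($path)) {
    header('Content-Type: application/octet-stream');
    header('Content-Disposition: attachment; filename="' . basename($path) . '"');
    header('Content-Length: ' . filesize($path));
    readfile($path);
    exit;
}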

Archives downloaded with CURL not valid

I have a PHP script that automatically downloads some zip files from certain URLs using cURL functions.
But there's a problem: if I open a zip archive downloaded with cURL using Windows' native zip extractor, I get an "invalid archive" error. If I download the zip file from the URL with my browser, it is fine.
For example: the zip downloaded with cURL is 21.8 KB, while the one downloaded with the browser is 21.4 KB.
Here's my cURL setup:
curl_setopt($this->ch, CURLOPT_URL, $link);
curl_setopt($this->ch, CURLOPT_HEADER, TRUE); // response headers end up in $data
$data = curl_exec($this->ch);
Then I save the file ($data) locally on my website like this:
$file = fopen($full_path, "w+");
fputs($file, $data);
fclose($file);
Both zips open fine in WinRAR, but I need the script to download zip files that are 100% valid.
Can anyone help me with this?
Figured out the solution: CURLOPT_HEADER must be set to false, otherwise the HTTP response headers are written into the response (and therefore into my zip files).
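For reference, a corrected sketch of the snippet above (it assumes the same $this->ch handle and $full_path as in the question):
// With CURLOPT_HEADER disabled, $data contains only the zip bytes
curl_setopt($this->ch, CURLOPT_URL, $link);
curl_setopt($this->ch, CURLOPT_HEADER, FALSE);
curl_setopt($this->ch, CURLOPT_RETURNTRANSFER, TRUE);
$data = curl_exec($this->ch);
file_put_contents($full_path, $data); // write the valid archive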

How to ensure PHP curl fully downloads pdf and pdftk command to skip damaged pdfs when merging

Currently I have a PHP script that downloads 50+ PDFs and merges them. But sometimes a PDF does not download fully and is therefore damaged, and when the merge command runs, pdftk throws an exception because of the damaged PDFs.
I am using cURL to download the PDFs. Is it possible to check that a file is fully downloaded before downloading the next one? Or is it possible for pdftk to merge all files while skipping the damaged ones?
Below is the code:
Downloading:
$fp = fopen($pathS, 'w');
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $urlS);
curl_setopt($ch, CURLOPT_FILE, $fp); // stream the download into $fp
$data = curl_exec($ch);
curl_close($ch);
fclose($fp);
Merging:
"C:\Program Files\PDF Labs\PDFtk Server\bin\pdftk.exe" 1.pdf...0.pdf cat output %mydate%.pdf"
By using:
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0); // no connection timeout
curl_setopt($ch, CURLOPT_TIMEOUT, 0);        // no overall transfer timeout
I can ensure that a PDF is fully downloaded before proceeding to the next one: with both timeouts set to 0, cURL waits indefinitely instead of aborting a slow transfer partway through.
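As an extra safeguard (a sketch beyond the original fix): compare the number of bytes written against the Content-Length the server reports, and skip the file when the transfer comes up short, so pdftk only ever sees complete PDFs:
// Hedged sketch: verify the transfer length after downloading
$fp = fopen($pathS, 'w');
$ch = curl_init($urlS);
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 0);
curl_setopt($ch, CURLOPT_TIMEOUT, 0);
$ok       = curl_exec($ch);
$expected = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD); // -1 if unknown
$received = curl_getinfo($ch, CURLINFO_SIZE_DOWNLOAD);
curl_close($ch);
fclose($fp);
// Treat the PDF as damaged if the transfer failed or came up short,
// and leave it out of the pdftk file list
$complete = ($ok !== false) && ($expected <= 0 || $received >= $expected);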
