I have a PHP script that uploads files to an FTP server from a location with a very low-bandwidth, low-reliability connection. I am currently using the FTP functions of PHP.
Sometimes the connection drops in the middle of a transfer. Is there a way to later resume the upload? How can this be done?
Edit:
People have misunderstood this as happening from a browser. However, that is not the case: it is a PHP CLI script, so it is about a local file being uploaded to FTP, with no user interaction.
Try getting the remote file's size and then issuing the APPE ftp command with the difference. This will append to the file. See http://webmasterworld.com/forum88/4703.htm for an example. If you want real control over this, I recommend using PHP's cURL functions.
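If you go the cURL route, a rough sketch of that resume logic could look like the following. This is only a sketch under the assumption that the server reports the partial file's size; the host, credentials and file names are placeholders.
<?php
// Ask the FTP server how much of the file already arrived, then append the rest.
$local_file = '/path/to/local.dat';                         // placeholder
$remote_url = 'ftp://user:pass@ftp.example.com/remote.dat'; // placeholder

$ch = curl_init($remote_url);
curl_setopt($ch, CURLOPT_NOBODY, true);                   // size only, no transfer
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$remote_size = (int) curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
curl_close($ch);
$remote_size = max(0, $remote_size);                      // -1 means nothing uploaded yet

$fp = fopen($local_file, 'rb');
fseek($fp, $remote_size);                                 // skip the part already on the server
$ch = curl_init($remote_url);
curl_setopt($ch, CURLOPT_UPLOAD, true);
curl_setopt($ch, CURLOPT_FTPAPPEND, true);                // issue APPE instead of STOR
curl_setopt($ch, CURLOPT_INFILE, $fp);
curl_setopt($ch, CURLOPT_INFILESIZE, filesize($local_file) - $remote_size);
$ok = curl_exec($ch);
curl_close($ch);
fclose($fp);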
I think all the previous answers are wrong. PHP manages the resume itself: for that, FTP_AUTOSEEK must be activated, and you have to use FTP_AUTORESUME during the upload. The code should be something like this:
$ftp_connect = ftp_connect($host, $port);
ftp_set_option($ftp_connect, FTP_AUTOSEEK, TRUE);
$stream = fopen($local_file, 'r');
$upload = ftp_fput($ftp_connect, $file_on_ftp, $stream, FTP_BINARY, FTP_AUTORESUME);
fclose($stream);
ftp_put() and ftp_fput() have a $startpos parameter. You should be able to do what you need using this parameter (start the transfer at the size of the partial file already on your server).
However, I have never used it, so I can't speak to its reliability. You should try it; a sketch follows.
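As a rough illustration of the $startpos idea (not tested against the original setup; $ftp, $remote_file and $local_file are placeholders for an already logged-in connection and your paths):
$remote_size = ftp_size($ftp, $remote_file);   // -1 if the file does not exist yet
$start = max(0, $remote_size);

$fp = fopen($local_file, 'rb');
// With FTP_AUTOSEEK enabled (the default), ftp_fput() seeks the local stream
// to $start before sending, so only the missing tail is transferred.
ftp_set_option($ftp, FTP_AUTOSEEK, true);
ftp_fput($ftp, $remote_file, $fp, FTP_BINARY, $start);
fclose($fp);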
EDIT:
Another example, by cballou. Look here; it is a very simple and good example.
You could do something like this:
<?php
$host   = 'your.host.name';
$user   = 'user';
$passwd = 'yourpasswd';

$ftp_stream = ftp_connect($host);
// EDIT
ftp_set_option($ftp_stream, FTP_AUTOSEEK, 1);

if (ftp_login($ftp_stream, $user, $passwd)) {
    // activate passive mode
    // ftp_pasv($ftp_stream, TRUE);
    $ret = ftp_nb_put($ftp_stream, $remotefile, $localfile, FTP_BINARY, FTP_AUTORESUME);
    while (FTP_MOREDATA == $ret) {
        // continue transfer
        $ret = ftp_nb_continue($ftp_stream);
    }
    if (FTP_FINISHED !== $ret) {
        echo 'Failure occurred on transfer ...';
    }
} else {
    print "FAILURE while logging in ...";
}
// free
ftp_close($ftp_stream);
?>
Do you mean that you want to allow the user to resume the upload from your web interface?
Does this suit you?
http://www.webmasterworld.com/php/3931706.htm
Just use FileZilla; it can handle the uploading process for you. It will keep the transfer queue, and after you close and reopen it you will be able to continue the process.
For automation and tracking, we simply use Git and Capistrano. There is also a PHP option; try googling it.
Related
I'm a student, new to PHP (and web development in general), and am trying to write a simple interface with my college's WebDAV server.
The following code (with appropriate credentials and address), which uses an HTTP WebDAV Client plugin I found (https://github.com/pear/HTTP_WebDAV_Client), successfully returns the first 8k of data from the .txt/.html/.js files I've tried it with, but no more.
From what I can tell, the likely culprit is that the server is using chunked transfer encoding (which makes sense), which leads me to believe I will have to read in a stream of data, rather than a single file/chunk (again, I'm new to this). If so, I'm not certain of how to accomplish this.
My understanding is that cURL would likely be the fastest way to do this, but I don't think Dreamhost has cURL enabled for PHP.
//this loads the HTTP_WebDAV_Client plugin:
// https://github.com/pear/HTTP_WebDAV_Client
require_once "HTTP/WebDAV/Client.php";
//this is for testing purposes only (for obvious reasons)
$user = $_GET['user'];
$pass = $_GET['pass'];
$fileName = $_GET['fileName'];
//actual address obscured
$dir = "webdavs://" . $user . ":" . $pass . "#webfs.xxx.com/main/hcwebdav/";
$file = fopen($dir . $fileName, "rb");
//$content;
if (!$file) {
    die("Error opening file :( $user $pass");
} else {
    // This returns only the first chunk:
    echo file_get_contents($dir . $fileName);

    // this has the same behavior
    /*
    while ($line = fread($file, 8192)) {
        $content .= $line;
    }
    echo $content;
    fclose($file);
    */
}
I hope this question isn't too stupid :/ I'm trying to write web applications to help intro-level students learn to code, and this plug-in would make it very easy for them to publish their own websites from a browser-based code editor/mini-IDE!
Cheers!
My suggestion leads away from the package you are using and your issue with it, but SabreDAV is the most popular WebDAV library in the PHP community, so why not use it instead?
http://sabre.io/
https://github.com/fruux/sabre-dav
Docu: http://sabre.io/dav/davclient/
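For reference, a minimal sketch of fetching a whole file with sabre/dav's client (assuming the sabre/dav package is installed via Composer; the base URI, credentials and file name are placeholders):
<?php
require 'vendor/autoload.php';

// The WebDAV endpoint and credentials below are placeholders.
$client = new Sabre\DAV\Client([
    'baseUri'  => 'https://webfs.example.com/main/hcwebdav/',
    'userName' => 'user',
    'password' => 'pass',
]);

// A plain GET; the client buffers the full (chunked) response body.
$response = $client->request('GET', 'somefile.txt');

if ($response['statusCode'] === 200) {
    echo $response['body'];
}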
I want to store some data retrieved using an API on my server. Specifically, these are .mp3 files of (free) learning tracks. I'm running into a problem though. The mp3 link returned from the request isn't to a straight .mp3 file, but rather makes an ADDITIONAL API call which normally would prompt you to download the mp3 file.
file_put_contents doesn't seem to like that. The mp3 file is empty.
Here's the code:
$id    = $_POST['cid'];
$title = $_POST['title'];

if (!file_exists("tags/" . $id . "_" . $title)) {
    mkdir("tags/" . $id . "_" . $title);
} else {
    echo "Dir already exists";
}

file_put_contents("tags/{$id}_{$title}/all.mp3", fopen($_POST['all'], 'r'));
And here is an example of the second API I mentioned earlier:
http://www.barbershoptags.com/dbaction.php?action=DownloadFile&dbase=tags&id=31&fldname=AllParts
Is there some way to bypass this intermediate step? If there's no way to access the direct URL of the mp3, is there a way to redirect the file download prompt to my server?
Thank you in advance for your help!
EDIT
Here is the current snippet. I should be echoing something, correct?
$handle = fopen("http://www.barbershoptags.com/dbaction.php?action=DownloadFile&dbase=tags&id=31&fldname=AllParts", 'rb');
$contents = stream_get_contents($handle);
echo $contents;
Because this echoes nothing.
SOLUTION
OK, I guess file_get_contents is supposed to handle redirects just fine, but this wasn't happening. So I found this function: https://stackoverflow.com/a/4102293/2723783 to return the final redirect of the API. I plugged that URL into file_get_contents and voila!
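A rough equivalent with cURL, for anyone who prefers not to resolve the redirect by hand (the API URL is the example from the question; the target path reuses the $id/$title variables from the earlier snippet):
$apiUrl = 'http://www.barbershoptags.com/dbaction.php?action=DownloadFile&dbase=tags&id=31&fldname=AllParts';

$ch = curl_init($apiUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);        // follow the intermediate redirect
$data = curl_exec($ch);
$finalUrl = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL); // the URL we actually ended up at
curl_close($ch);

file_put_contents("tags/{$id}_{$title}/all.mp3", $data);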
You seem to be just opening the file handle and not getting the contents using fread() or another similar function:
http://www.php.net/manual/en/function.fread.php
$handle = fopen($_POST['all'], 'rb');
file_put_contents("tags/{$id}_{$title}/all.mp3", stream_get_contents($handle));
Can you please suggest a simple PHP script that I can insert into a web page, which will track and record the HTTP_REFERER of every user who comes to the page?
Thank you so much in advance for your help.
Using $_SERVER['HTTP_REFERER'] is not reliable.
However, if you still want to go that route, you can use the following.
What this does is use a ternary operator to check if the referer is set.
If a referer is found, it will record it to a file, appending to it using the 'a' mode. Otherwise, if one is not found, it will simply echo a message and not write anything to the file.
If you don't want to keep adding to the file, use the 'w' mode.
Caution: using the 'w' mode will overwrite any previously written content.
<?php
$refer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : null;

if (!empty($refer)) {
    $fp = fopen('file.txt', 'a');
    fwrite($fp, $refer . "\n"); // Using \n makes a new line; use \r\n on Windows
    fclose($fp);
    echo "Referer found and written to file.";
} else {
    echo "No referer, nothing written to file.";
}

// Or use this to write "No referer" in the file as a replacement
/*
else {
    $fp = fopen('file.txt', 'a');
    fwrite($fp, "No referer" . "\n");
    fclose($fp);
    echo "No referer.";
}
*/
This is a very simple script that will let you achieve it:
$fp = fopen('myfile.txt', 'w');
fwrite($fp, $_SERVER['HTTP_REFERER']);
fclose($fp);
It might be a better idea to log this in Apache (if that's the platform).
About logfiles:
http://httpd.apache.org/docs/1.3/logs.html
Then use some designated software to analyze the logs.
The reason for this is that building your own tracking script is a lot more work than it might seem, even at the simplest level, if it's to be usable.
Another idea is to install some third-party logging software. StatCounter, for instance, uses logfiles and can give you what you want, I think.
I am using this code to download a package from server A and put it on server B (copy), but it does not always work: sometimes the transfer does not complete and the file is incomplete, and sometimes it goes well. Can I improve this code in any way, or use cURL to do the same thing?
This is my code:
// from server a to server b
$filename    = 'http://domain.com/file.zip';
$dest_folder = TEMPPATH . '/';

$out_file = @fopen(basename($filename), 'w');
$in_file  = @fopen($filename, 'r');

if ($in_file && $out_file) {
    while ($chunk = @fgets($in_file)) {
        @fputs($out_file, $chunk);
    }
    @fclose($in_file);
    @fclose($out_file);

    $zip = new ZipArchive();
    $result = $zip->open(basename($filename));
    if ($result) {
        $zip->extractTo($dest_folder);
        $zip->close();
    }
}
The problem is that it is not consistent. The file does not get transferred every time; it often arrives incomplete, and the script does not run well.
$filename = 'http://domain.com/file.zip';
$local = basename($filename);   // wget saves the download under this name
echo `wget $filename`;
echo `unzip $local`;
or
$ch = curl_init();
$timeout = 5;
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
$data = curl_exec($ch);
curl_close($ch);
fwrite(fopen($destfile, 'w'), $data);
Really though, you need to figure out why it is failing. Is the zip operation killing it? Is the php script timing out because it took too long to execute? Is it running out of memory? Is the server on the other end timing out? Get some error reporting and debug data and try to figure out why it's not working. The code you have should be fine, and reliable.
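As a small aid for the "get some error reporting" step, something like this at the top of the script (only while debugging) will surface the failures instead of hiding them:
// Show every warning and notice while debugging; remove again in production.
error_reporting(E_ALL);
ini_set('display_errors', '1');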
Have you checked the timeout settings on your server? Maybe they are causing your script to time out before the code is executed completely.
Make sure that you are allowed to open external URLs with fopen in your server's settings (allow_url_fopen), and that you have the right access permissions to get this file.
Make sure the firewall of server A is allowing server B and not just blocking its IP.
Try using cURL, or file_get_contents and file_put_contents; they are likely to work too and avoid the manual loop (see the sketch after these suggestions).
Check if the problem is with the ZipArchive class or getting the file itself.
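A minimal sketch of that file_get_contents / file_put_contents route, assuming allow_url_fopen is on and reusing TEMPPATH from the question (the URL is a placeholder):
set_time_limit(0);                               // don't let a slow transfer hit the script timeout

$src  = 'http://domain.com/file.zip';
$dest = TEMPPATH . '/' . basename($src);

$data = file_get_contents($src);                 // false on failure
if ($data === false) {
    die('download failed');
}
file_put_contents($dest, $data);

$zip = new ZipArchive();
if ($zip->open($dest) === true) {
    $zip->extractTo(TEMPPATH . '/');
    $zip->close();
}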
The fact that you are having temperamental issues suggests you may be having the same issue I've encountered - which has nothing to do with the code.
I'm pulling my zip from a remote server using cURL and then extracting the locally saved zip. Sometimes it works, sometimes not... this caused some serious hair pulling to begin with.
I'm uploading my zip via filezilla and what I have found is it frequently crashes out, retries a few times and eventually works. The uploaded file has the correct file size and looks like it's successfully uploaded but if I download it again sometimes it's simply corrupted and can't be unzipped.
So long as I make sure my uploaded zip is fine my script works fine... so here it is:
$zip_url = "http://www.mydomain.com.au/";
$version = "1.0.1.zip"; // zip name

$ch      = curl_init();
$tmp_zip = fopen($version, 'w');                   // open local file for writing
curl_setopt($ch, CURLOPT_URL, "$zip_url$version"); // pull remote file
curl_setopt($ch, CURLOPT_FILE, $tmp_zip);          // save to local file
$data = curl_exec($ch);                            // do execute
curl_close($ch);
fclose($tmp_zip);                                  // close local file

// extract latest build
$zip = new ZipArchive;
$zip->open($version);
$result = $zip->extractTo(".");                    // extract to this directory
$zip->close();

if ($result) {
    @unlink($version);                             // delete local zip if extracted
} else {
    echo "failed to unzip";
}
One big difference between my code and the previous answer is that I'm using CURLOPT_FILE rather than CURLOPT_RETURNTRANSFER. You can read why CURLOPT_FILE is better for large transfers at:
www.phpriot.com/articles/download-with-curl-and-php
So yeah, I'm working on a Windows system, and while this works locally, I know it will break on other people's servers. What's a cross-platform way to do the same as this?
function fetch($get, $put) {
    file_put_contents($put, file_get_contents($get));
}
I don't see why that would fail unless the other computer is on PHP 4. What you would need to do to make it backwards compatible is provide replacements for file_get_contents and file_put_contents:
if (version_compare(phpversion(), '5', '<')) {
    function file_get_contents($file) {
        // mimic functionality here
    }
    function file_put_contents($file, $data) {
        // mimic functionality here
    }
}
Here would be the solution using simple file operations:
<?php
$file = "http://www.domain.com/thisisthefileiwant.zip";
$hostfile = fopen($file, 'r');
$fh = fopen("thisisthenameofthefileiwantafterdownloading.zip", 'w');

while (!feof($hostfile)) {
    $output = fread($hostfile, 8192);
    fwrite($fh, $output);
}

fclose($hostfile);
fclose($fh);
?>
Ensure your directory has write permissions enabled. (CHMOD)
Therefore, a replacement for your fetch($get, $put) would be:
function fetch($get, $put) {
    $hostfile = fopen($get, 'r');
    $fh = fopen($put, 'w');

    while (!feof($hostfile)) {
        $output = fread($hostfile, 8192);
        fwrite($fh, $output);
    }

    fclose($hostfile);
    fclose($fh);
}
Hope it helped! =)
Cheers,
KrX
Shawn's answer is absolutely correct; the only thing is that you need to make sure your $put variable is a valid path on either the Windows server or the Unix server.
Well, when I read your question I understood that you wanted to bring a file from a remote server to your server locally; this can be done with the FTP extension of PHP:
http://www.php.net/manual/en/function.ftp-fget.php
If this is not what you intend, I believe what Shawn says is correct; otherwise, tell me in the comments and I'll help you more.
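In case the FTP route is what you want, a minimal sketch with ftp_fget() (host, credentials and paths are placeholders):
$conn = ftp_connect('ftp.example.com');
ftp_login($conn, 'user', 'pass');
ftp_pasv($conn, true);                                   // passive mode is often required

$local = fopen('local-copy.zip', 'wb');
if (ftp_fget($conn, $local, 'remote/file.zip', FTP_BINARY)) {
    echo "File downloaded.\n";
}
fclose($local);
ftp_close($conn);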
If the fopen wrappers are not enabled, the curl extension could be: http://php.net/curl