Instead of uploading the file and moving it straight to a location on the server, I would rather keep it in the session and only upload it later, if a certain condition is met.
Here is my Method that currently saves the File to my server:
public function step3store() {
    $file = Input::file('file');
    $identifier = date("Ymd") . " - " . Session::get('lastName') . "_" . Session::get('firstName');
    $destinationPath = base_path() . '/uploads/' . $identifier;
    $extension = $file->getClientOriginalExtension();
    $filename = $identifier . " - " . uniqid() . "." . $extension;
    $upload_success = $file->move($destinationPath, $filename);

    if ($upload_success) {
        return Response::json('success', 200);
    } else {
        return Response::json('error', 400);
    }
}
And I am thinking about using something like this instead:
Session::put([
    'file' => Input::get('file'),
]);
But whenever I check my session after uploading a file, the value of "file" is null.
Since I am uploading multiple files via Ajax, I am not sure whether that somehow breaks the way I put files into the session.
So, how do I save multiple files uploaded via Ajax into the Laravel session?
Thanks in advance.
Sessions are for small, trivial bits of data only, not large bits of data and files like images.
Store the image in a directory like normal, then move it to another directory if the user completes the form. Have a “garbage collection” script that runs periodically and cleans any images from the first directory in case a user hasn’t completed the form after some time (see the sketch below).
Your sentence, “only then I wanted to use real server resource”, makes no sense: if you were to save the file to the session, that would still use your server’s resources, since sessions are written to disk. The same goes for storing the file in the database as a BLOB (don’t do that either). You’re still using your server’s resources either way, so saving the file to the session doesn’t avoid them.
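A minimal sketch of the “garbage collection” script mentioned above, assuming the temporary uploads live in storage/app/temporary_uploads and that 24 hours is an acceptable age limit:

$temporaryDir = storage_path('app/temporary_uploads'); // assumed location
$maxAge = 60 * 60 * 24;                                // seconds

foreach (glob($temporaryDir . '/*') as $file) {
    if (is_file($file) && (time() - filemtime($file)) > $maxAge) {
        unlink($file); // abandoned upload; the user never completed the form
    }
}

Run it from the scheduler or a cron job.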
This is how you should do it. Storing an entire image in the session is not a good idea. Session cookies can't store big data.
Store the image on the server. Give the image an id. And store that id on the session.
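A minimal sketch of that approach, reusing the facades from the question (the 'pending' directory name and the 'pending_files' session key are just examples): move the upload to disk as usual and keep only a small reference to it in the session.

public function step3store() {
    $file = Input::file('file');
    $filename = uniqid() . '.' . $file->getClientOriginalExtension();
    $file->move(base_path() . '/uploads/pending', $filename);

    // Remember the stored files by name only, so the session stays small
    $pending = Session::get('pending_files', []);
    $pending[] = $filename;
    Session::put('pending_files', $pending);

    return Response::json('success', 200);
}

When the later condition is met, the controller can look up 'pending_files' and move (or delete) the files it lists.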
Previously, I stored client files locally, on a server using PHP (and running Apache). They would upload files, and each one would be given a randomized string ending in a pdf / jpg file extension. The original file name would be kept in a database along with the randomized name to link them back together when the user wanted the file.
I wanted to transition to storing files on a private bucket in S3. The first thing I'm seeing is this article which says to give Object keys a unique name, but all the examples I'm seeing just put the user's file name in there.
This is an issue because if one user stores test.pdf and another, entirely different user uploads test.pdf, the keys collide and one upload overwrites the other. Another issue is that if I keep using random file names like I have been doing, and the user then fetches the file through a pre-signed request, they will be getting a file named with some random string rather than the name of the file they thought they uploaded.
What should I be doing to separate out a user's files while keeping the original file name on S3?
Personally, I do exactly what you describe in your first example. The S3 file gets a UUID generated for the file name in the bucket and all the metadata including the original file name goes in the database.
I don't even bother giving the S3 file an extension.
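A rough sketch of that pattern in Laravel, assuming a version that ships Str::uuid() and a hypothetical Attachment model whose s3_filename / filename columns mirror the model code further down:

use Illuminate\Support\Str;

$upload = $request->file('upload');   // 'upload' is an example field name
$key = (string) Str::uuid();          // S3 object key, no extension

Storage::disk('s3')->put($key, file_get_contents($upload->getRealPath()));

Attachment::create([
    's3_filename' => $key,
    'filename'    => $upload->getClientOriginalName(),
    'mime_type'   => $upload->getMimeType(),
]);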
To expand on my comments and on the question about how to read the files back: I'm using Laravel with Intervention\Image.
My GET endpoint for the attachment controller returns this function in my model:
/**
 * Gets an image from Amazon and returns it
 * @param boolean $thumb
 * @return null|Image
 */
public function output($thumb = false)
{
    if ($this->s3_filename === null) {
        return null;
    }

    // Grab the image from S3
    $this->image = $this->s3->get('/' . $this->getPath() . '/' . ($thumb ? 'thumb/' : '') . $this->s3_filename);

    if ($this->image === null) {
        return null;
    }

    return Image::make($this->image)->response()->withHeaders([
        'content-disposition' => 'inline; filename="' . ($thumb ? 'thumb_' : '') . $this->filename . '"',
    ]);
}
How about using buckets/folders?
Bucket names need to be unique across ALL of AWS (not sure if that has changed), but the folders within them are fine.
But otherwise:
myBucket/
    user1/
        test.pdf
    user2/
        test.pdf
There's no additional cost to having directories within buckets AFAIK, so you should be good.
You can also use a UUID instead of user1, and keep a table somewhere that maps usernames to UUIDs to generate the bucket/folder path.
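A small sketch of that mapping, assuming a hypothetical user_folders table that stores each user's UUID and Laravel's S3 filesystem driver:

$prefix = DB::table('user_folders')->where('user_id', Auth::id())->value('uuid');
$name   = $request->file('upload')->getClientOriginalName();

// The object ends up at e.g. "1f3c.../test.pdf", so two users can both upload test.pdf
Storage::disk('s3')->putFileAs($prefix, $request->file('upload'), $name);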
I am making an external API call which returns several objects, each with a download URL inside. They are all PDFs, and I would like to provide a single ZIP download link for them.
The way it works now is that I download every single PDF to a specific user folder, then make a ZIP out of them and present that back to the user.
I see this as a somewhat slow and inefficient way of doing things, since the user has to trigger another download for the ZIP, so I am basically making the user wait for two downloads of the same batch of files.
Is there a smarter way to deal with this?
$path = '/user_downloads/' . Auth::user()->id . '/';

if (!Storage::disk('uploads')->has($path)) {
    Storage::disk('uploads')->makeDirectory($path);
}

$zipper = new \Chumper\Zipper\Zipper;
$zip_filename = 'zipped/' . uniqid() . '_merged.zip';

foreach (json_decode($res)->hits as $hit) {
    $filename = public_path() . $path . uniqid() . '_file.pdf';
    copy($hit->document_download_link, $filename);
    $zipper->make($zip_filename)->add($filename);
}
The add method can receive an array of files, so I would build the array of files inside the foreach and, when finished, create the zip:

$files = [];

foreach (json_decode($res)->hits as $hit) {
    $filename = public_path() . $path . uniqid() . '_file.pdf';
    copy($hit->document_download_link, $filename);
    $files[] = $filename;
}

$zipper->make($zip_filename)->add($files);
This question has a couple of ways you could present the files to the user one by one, but that is less user friendly and might get blocked by browsers.
You could probably also use JSZip (I haven't looked too closely at it), but that would use the browser's RAM to compress the PDFs, which is not ideal, especially on mobile devices.
I think your current solution is the best one.
So I've been using the FTP functions directly in my project (manually setting up $conn_id, calling ftp_put($conn_id, ...), ftp_close(), and so on).
Now I've added "use Storage" in my controller, set the host, username and password for FTP in filesystems.php, and changed all the calls in my controller to the "Storage::" style.
The problem is that my files get damaged during the upload to storage. After the upload the files appear (I've tried both local and remote FTP storage), but I can't open them: I get a "Could not load image" error for files put in my /storage/app folder, and an empty square when opening a URL from the remote storage. While I was using ftp_put(...) and friends, everything worked perfectly.
The only thing I've noticed is the error explanation given when trying to open a file placed in /storage/app:
Error interpreting JPEG image file (Not a JPEG file: starts with 0x2f 0x76)
What could this one mean and how could I handle this situation? Would highly appreciate any possible help!
UPD: it looks like somewhere during the upload the file stops being a file of its native format and is then forcibly renamed back, which causes the corruption. I upload a .jpeg file, something happens, and it gets saved with .jpeg at the end while no longer actually being a JPEG. Still no idea.
Well, I got it: the problem was that I left the arguments the way they were with ftp_put(), i.e. (to, from) as two paths, but Storage::put() expects the file contents, not a path, in the "from" position. So Storage::put($to, file_get_contents($from), 'public') solved my problem.
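In other words (a minimal sketch, assuming an 'ftp' disk is configured in filesystems.php and using example paths):

$localPath  = storage_path('app/photo.jpeg');   // example local file
$remotePath = 'images/photo.jpeg';              // example destination on the FTP disk

// ftp_put() took two paths; Storage::put() takes a destination path and the *contents*.
Storage::disk('ftp')->put($remotePath, file_get_contents($localPath), 'public');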
This is for information purposes, since she has requested another way of doing it. No need to thumb it up or down.
public function store(Request $request)
{
    $this->validate($request, array(
        // I have done the validations but skip them here
    ));

    // NOTE: $property (the model these pictures belong to) is assumed to be
    // defined earlier; that part is not shown here.

    // OBTAINING THE IMAGES
    $files = $request->images;

    // COUNTING HOW MANY WERE RECEIVED
    $file_count = count($files);

    // INITIALIZING A COUNTER
    $uploadcount = 0;

    foreach ($files as $file) {
        $filename = $file->getClientOriginalName();

        // This is where they stay temporarily, to be fetched for processing
        $temporary = public_path() . '/uploads/temporary/' . $property->id;
        if (!file_exists($temporary)) File::makeDirectory($temporary);
        $temp = $file->move($temporary, $filename);

        $thumbs = public_path() . '/uploads/thumbs/' . $property->id;
        if (!file_exists($thumbs)) File::makeDirectory($thumbs);
        Image::make($temp)->resize(240, 160)->save($thumbs . '/' . $filename);

        // We set up another directory where we want to save copies with other sizes
        $gallery = public_path() . '/uploads/gallery/' . $property->id;
        if (!file_exists($gallery)) File::makeDirectory($gallery);
        Image::make($temp)->resize(400, 300)->save($gallery . '/' . $filename);

        $picture = new Picture;
        $picture->property_id = $property->id;
        $picture->name = $filename;
        $picture->save();

        $uploadcount++;
    }

    if ($uploadcount == $file_count) {
        Session::flash('success', 'Upload successful');
        return redirect()->route('property.show', $property->id);
    } else {
        Session::flash('errors', 'screwed up');
        // $validator is not defined when using $this->validate(), so just redirect back with input
        return Redirect::to('upload')->withInput();
    }
}
So I have an upload service with many people uploading the same files to my Amazon S3 bucket. I changed my app design so the SHA1 of the file is calculated upon upload and checked against the list of uploaded files.
If it exists, I simply assign the file to the new uploader as well.
The problem with this is that the file keeps the name the first uploader gave it, so all subsequent uploaders get that first name as well.
I can use download="" attribute in HTML5 but it doesn't work in IE:
http://caniuse.com/#search=download
The files are stored remotely, so I can't change the header unless I download each file to my local server first, which makes no sense.
Please advise.
The only way I see to do this is to add metadata to the object just before redirecting the user to the download.
You can add metadata to your files in S3, with "Content-Disposition" as the key and 'attachment; filename="~actual file name~"' as the value. That lets you force the name and trigger the download.
This way, you don't have to download any files to the local file system.
The caveat is that if someone else requests the same file within milliseconds, the first user might get the name requested by the second user.
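A hedged sketch of that metadata approach with the AWS SDK for PHP v3, assuming $s3 is an S3Client and that the bucket name, object key and download name are placeholders: the object is copied onto itself with its metadata replaced just before the redirect.

$s3->copyObject([
    'Bucket'             => 'my-bucket',                // placeholder bucket
    'Key'                => $objectKey,                 // existing object key
    'CopySource'         => 'my-bucket/' . $objectKey,  // copy the object onto itself
    'MetadataDirective'  => 'REPLACE',                  // replace, don't copy, the metadata
    'ContentDisposition' => 'attachment; filename="' . $downloadName . '"',
]);

If the race condition described above is a concern, an alternative is to leave the object untouched and generate a pre-signed GetObject URL with a per-request ResponseContentDisposition override instead.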
Use the rename() function.
example:
$file = "boo.png";
$newName = "scary";
$ext = substr( $file, strpos( "." ), strlen( $file ) );
$path = "/images/downloaded/";
rename($path . $file, $path . $newName . $ext);
See the PHP documentation for rename().
The whole idea: I need to be sure that a file is never saved more than once, and that I don't lose any file, because if two files get the same MD5 hash the second file will not be saved.
(My goal: don't save the same file twice on the hard disk.)
In other words, if one user uploads an image and afterwards another user uploads the same image, I don't want to save the second image, because it already exists on the hard disk. All of this is because I need to save space on my hard disk.
This is my code; it works fine:
$targetFolder = '/test/uploadify/uploads'; // Relative to the root
$tempFile = $_FILES['Filedata']['tmp_name'];
$targetPath = $_SERVER['DOCUMENT_ROOT'] . $targetFolder;

$myhash = md5_file($tempFile);
$temp = explode(".", $_FILES['Filedata']['name']);
$extension = end($temp);
$targetFile = rtrim($targetPath, '/') . '/' . $myhash . '.' . $extension;

// Validate the file type
$fileTypes = array('jpg', 'jpeg', 'gif', 'png'); // File extensions
$fileParts = pathinfo($_FILES['Filedata']['name']);

if (!in_array($fileParts['extension'], $fileTypes)) {
    echo 'Invalid file type.';
} elseif (file_exists($targetFile)) {
    // A file with the same hash is already on disk, so don't write it a second time
    echo 'exist';
} else {
    move_uploaded_file($tempFile, $targetFile);
}
Thanks to all of you.
Well, of course you can do this; in fact this is the approach I use to avoid file duplication (and I mean not having two files with the same content, not just silly name collisions).
If you are worried about collisions, then you might take a look at sha1_file:
http://es1.php.net/manual/en/function.sha1-file.php
What are the chances that two messages have the same MD5 digest and the same SHA1 digest?
I've been using the md5 approach the way you are suggesting here for image galleries and it works just fine.
Another thing to take into account is the time it takes to calculate the hash: the more complex the hash, the more time it needs, but that only really matters when processing really big batches.
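For example, the hashing line in the question could be swapped for sha1_file(), or even both digests concatenated if you are worried about collisions (a sketch, using the same $_FILES input as above):

$myhash = sha1_file($_FILES['Filedata']['tmp_name']);

// ...or, for the truly paranoid, both digests combined:
$myhash = md5_file($_FILES['Filedata']['tmp_name']) . sha1_file($_FILES['Filedata']['tmp_name']);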
If I understand your question correctly, your goal is just to generate unique file names. If so, there is no point in reinventing the wheel: every hash function with a fixed output length is going to have collisions, so just use the built-in tempnam() function.
Manual states:
Creates a file with a unique filename, with access permission set to 0600, in the specified directory. If the directory does not exist, tempnam() may generate a file in the system's temporary directory, and return the full path to that file, including its name.
Following should work well enough:
$targetDirectory = $_SERVER['DOCUMENT_ROOT'] . '/test/uploadify/uploads';
$uploadedFile = $_FILES['Filedata']['tmp_name'];
$targetFile = tempnam($targetDirectory, '');
move_uploaded_file($uploadedFile, $targetFile);
You could always add the system's current time in milliseconds to the filename. That, plus the md5, would be very unlikely to ever produce the same value twice.
The chance is very small, but it is there. You can read more here and here.
I suggest you add a salt to the end of the filename to make it practically impossible for files to conflict (you should put the salt through a separate md5() call, though):
$salt = md5(round(microtime(true) * 1000));
$hash = md5_file($_FILES['Filedata']['tmp_name']);
$targetFile = rtrim($targetPath, '/') . '/' . $hash . $salt . '.' . $extension;
You should then insert the filename in a database so you can access it later.
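A minimal sketch of that last step with PDO, assuming an existing $pdo connection and a hypothetical uploads table:

$stmt = $pdo->prepare('INSERT INTO uploads (original_name, stored_name) VALUES (?, ?)');
$stmt->execute([
    $_FILES['Filedata']['name'],        // the name the user uploaded
    $hash . $salt . '.' . $extension,   // the name actually written to disk
]);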