When uploading an image, PHP stores the temporary file in a local directory on the server.
Is it possible to change this temp location so it's off the local server?
Reason: I'm using load balancing without sticky sessions, and I don't want files to be uploaded to one server and then not be available on another. Note: I don't necessarily complete the file upload and work on the file in one go.
Preferred temp location would be AWS S3 - also just interested to know if this is possible.
If it's not possible, I could make the file upload a single complete process that also puts the finished file in its final location.
I'm just interested to know whether the PHP temp image/file location can be somewhere other than the local server.
Thank you.
You can mount an S3 bucket with s3fs on the instances behind your ELB, so that all your uploads are shared between application servers. As for /tmp, don't touch it: since the final destination is the shared S3 mount, you don't have to worry about it.
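As a minimal sketch of the upload handler under this setup (the mount point /mnt/static.example.com and the 'image' form field name are assumptions):
<?php
// upload.php - the bucket is mounted at /mnt/static.example.com via s3fs.
// PHP still writes the temp file to local /tmp; we only control the final destination.
$target = '/mnt/static.example.com/uploads/' . basename($_FILES['image']['name']);
if (move_uploaded_file($_FILES['image']['tmp_name'], $target)) {
    echo 'Stored on the shared S3 mount: ' . $target;
} else {
    http_response_code(500);
    echo 'Upload failed.';
}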
If you have a lot of uploads, S3 might become a bottleneck. In that case, I suggest setting up a NAS. Personally, I use GlusterFS because it scales well and is very easy to set up. It has some replication issues, but if you don't use replicated volumes at all you'll be fine.
Other alternatives are Ceph, Sector/Sphere, XtreemFS, Tahoe-LAFS, POHMELFS and many others...
You can directly upload a file from a client to S3 with some newer technologies as detailed in this post:
http://www.ioncannon.net/programming/1539/direct-browser-uploading-amazon-s3-cors-fileapi-xhr2-and-signed-puts/
Otherwise, I personally would suggest using each server's tmp folder for exactly that: temporary storage. After the file is on your server, you can always upload it to S3, which would then be accessible across all of your load balanced servers.
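A rough sketch of that second step with the AWS SDK for PHP (the bucket name, region, key prefix and the 'image' form field are placeholders):
<?php
// push_to_s3.php - copy the file PHP stored locally up to S3 after the upload completes.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
$s3->putObject([
    'Bucket'     => 'static.example.com',
    'Key'        => 'uploads/' . basename($_FILES['image']['name']),
    'SourceFile' => $_FILES['image']['tmp_name'],  // the temp file on this server
]);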
I have logic to upload images to S3 in PHP. I need to upload the same set of images to an SFTP server as well. As I see it, there are two options: the first is to upload each image from my server to the SFTP server at the same time as I upload it to S3, and the other is to write a script that transfers the images from S3 to the SFTP server. I need the same set of images to be both on the SFTP server and in S3.
Out of the two approaches, which one is optimal? Is there any other way to approach my requirement? If a sample PHP script for local-to-SFTP file transfer is available, please provide the code.
I cannot say for sure which one is optimal, but I can definitely see a potential issue with option #1. If you perform the second upload (i.e. from your "local" server to the SFTP server) during the first upload, you make PHP wait on that operation before returning the response to the client. This could cause some unnecessary hanging for the user agent connecting to the local server.
I would explore option #2 first. If possible, look into SSHFS. This is a way to mount a remote filesystem over SSH; it uses SFTP to transfer the files. This may be a possible solution where all you have to do is write the file once to the local server's filesystem and then again to the mounted, remote filesystem. SSHFS would take care of the transfer for you.
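As a minimal sketch of that idea, assuming the SFTP server has already been mounted over SSHFS at /mnt/sftp (the mount point, paths and the 'image' form field are placeholders):
<?php
// store_both.php - write the upload once locally and once to the SSHFS mount.
// SSHFS performs the actual SFTP transfer behind the scenes.
$filename = basename($_FILES['image']['name']);
$local    = '/var/www/uploads/' . $filename;   // local copy (also the one sent to S3)
$remote   = '/mnt/sftp/uploads/' . $filename;  // SSHFS mount of the SFTP server
move_uploaded_file($_FILES['image']['tmp_name'], $local);
copy($local, $remote);  // a plain copy() is enough; SSHFS makes the remote path look local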
I want to upload a large file from my computer to an S3 server without editing php.ini. First I choose a file with the browse button and submit it with the upload button, and then it is uploaded to S3. But the form's file data doesn't get posted when I upload a large file, and I don't want to edit php.ini. Is there any way to upload a large local file to the S3 server?
I've done this by implementing Fine Uploader's PHP implementation for S3. As of recently it is available under an MIT license. It's an easy way to upload huge files to S3 without changing your php.ini at all.
It's not the worst thing in the world to set up. You'll need to set some environment variables for the public/secret keys, set up CORS settings on the bucket, and write a PHP page based on one of the examples, which will call a PHP endpoint that handles the signing.
One thing that was not made obvious to me was that, when setting the environment variables, they expect you to make two separate AWS users with different privileges for security reasons.
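This is not Fine Uploader's own signing contract (it signs policy documents), but as a rough illustration of the general idea: the browser uploads straight to S3 and PHP only signs the request, so php.ini limits never apply to the file itself. A hypothetical presigned-PUT endpoint using the AWS SDK for PHP, with placeholder bucket and key:
<?php
// sign.php - returns a presigned PUT URL so the browser can upload directly to S3.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
$cmd = $s3->getCommand('PutObject', [
    'Bucket' => 'my-upload-bucket',                        // placeholder bucket
    'Key'    => 'uploads/' . basename($_GET['filename']),  // client-supplied name, trimmed with basename()
]);
$url = (string) $s3->createPresignedRequest($cmd, '+15 minutes')->getUri();
header('Content-Type: application/json');
echo json_encode(['url' => $url]);  // the browser then PUTs the file body to this URL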
Try raising the limits per directory. Note that upload_max_filesize and post_max_size are PHP_INI_PERDIR settings, so ini_set() cannot change them at runtime; set them in an .htaccess file (Apache with mod_php) or a .user.ini file instead, for example:
php_value upload_max_filesize 300M
php_value post_max_size 300M
I have an Nginx server with php-fpm installed on CentOS 6.4.
1. Current status:
I use an NFS server to hold versions and files; Nginx mounts the NFS share and serves files from it.
This scenario is working and handles large volumes of traffic.
2. Desired scenario:
Replace the NFS with S3 and mount it using FUSE & s3fs.
In that scenario the server fails when forced to handle high traffic.
Are FUSE & s3fs much slower, or am I missing something?
Thanks.
The problem is in thinking that S3 behaves like an NFS mount. It does not. NFS and other disk volumes work on blocks; S3 works on entire objects. Generally, when accessing files through s3fs, the entire file is copied to tmp, and it has much higher latency than disk access.
A couple of solutions:
If you prefer NFS, you can set up an NFS share from a separate instance and mount it on the instances that need it.
Otherwise, you can deploy your code directly to the instance itself (which is something you can automate) and run your code from there. Static assets could be served directly from S3 via CloudFront, or via CloudFront with a custom origin.
Take a look at the RioFS project; it allows you to mount an S3 bucket as a local directory (we are using FUSE).
As datasage mentioned earlier, you can't really compare NFS with S3, as these two are completely different filesystems used in different scenarios. RioFS lets you upload/download files and list directory contents. Anything extra (like appending data to a file) is not supported.
I'm working on a project that is being hosted on Amazon Web Services. The server setup consists of two EC2 instances, one Elastic Load Balancer and an extra Elastic Block Store on which the web application resides. The project is supposed to use S3 for storage of files that users upload. For the sake of this question, I'll call the S3 bucket static.example.com
I have tried using s3fs (https://code.google.com/p/s3fs/wiki/FuseOverAmazon), RioFS (https://github.com/skoobe/riofs) and s3ql (https://code.google.com/p/s3ql/). s3fs will mount the filesystem but won't let me write to the bucket (I asked this question on SO: How can I mount an S3 volume with proper permissions using FUSE). RioFS will mount the filesystem and will let me write to the bucket from the shell, but files that are saved using PHP don't appear in the bucket (I opened an issue with the project on GitHub). s3ql will mount the bucket, but none of the files that are already in the bucket appear in the filesystem.
These are the mount commands I used:
s3fs static.example.com -ouse_cache=/tmp,allow_other /mnt/static.example.com
riofs -o allow_other http://s3.amazonaws.com static.example.com /mnt/static.example.com
s3ql mount.s3ql s3://static.example.com /mnt/static.example.com
I've also tried using this S3 class: https://github.com/tpyo/amazon-s3-php-class/ and this FuelPHP specific S3 package: https://github.com/tomschlick/fuel-s3. I was able to get the FuelPHP package to list the available buckets and files, but saving files to the bucket failed (but did not error).
Have you ever mounted an S3 bucket on a local linux filesystem and used PHP to write a file to the bucket successfully? What tool(s) did you use? If you used one of the above mentioned tools, what version did you use?
EDIT
I have been informed that the issue I opened with RioFS on GitHub has been resolved. Although I decided to use the S3 REST API rather than attempting to mount a bucket as a volume, it seems that RioFS may be a viable option these days.
Have you ever mounted an S3 bucket on a local linux filesystem?
No. It's fun for testing, but I wouldn't let it near a production system. It's much better to use a library to communicate with S3. Here's why:
It won't hide errors. A filesystem only has a few error codes it can send you to indicate a problem. An S3 library will give you the exact error message from Amazon so you understand what's going on, log it, handle corner cases, etc.
A library will use less memory. Filesystem layers will cache lots of random stuff that you may never use again. A library puts you in control to decide what to cache and what not to cache.
Expansion. If you ever need to do anything fancy (set an ACL on a file, generate a signed link, versioning, lifecycle, change durability, etc), then you'll have to dump your filesystem abstraction and use a library anyway.
Timing and retries. Some fraction of requests randomly error out and can be retried. Sometimes you may want to retry a lot, sometimes you would rather error out quickly. A filesystem doesn't give you granular control, but a library will.
The bottom line is that S3 under FUSE is a leaky abstraction. S3 doesn't have (or need) directories. Filesystems weren't built for billions of files. Their permissions models are incompatible. You are wasting a lot of the power of S3 by trying to shoehorn it into a filesystem.
Two random PHP libraries for talking to S3:
https://github.com/KnpLabs/Gaufrette
https://aws.amazon.com/sdkforphp/ - this one is useful if you expand beyond just using S3, or if you need to do any of the fancy requests mentioned above.
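For illustration, a minimal sketch of writing an object through Gaufrette's S3 adapter (the bucket name and key are placeholders, and the adapter class name can vary between Gaufrette versions):
<?php
// write_via_library.php - talk to S3 through a library instead of a FUSE mount.
require 'vendor/autoload.php';
use Aws\S3\S3Client;
use Gaufrette\Adapter\AwsS3;
use Gaufrette\Filesystem;
$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
$filesystem = new Filesystem(new AwsS3($s3, 'static.example.com'));
// Any S3 failure surfaces as a real exception instead of a vague filesystem error code.
$filesystem->write('uploads/avatar.png', file_get_contents('/tmp/avatar.png'), true);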
Quite often, it is advantageous to write files to the EBS volume, then force subsequent public requests for the file(s) to route through CloudFront CDN.
In that way, if the app must do any transformations to the file, it's much easier to do on the local drive & system, then force requests for the transformed files to pull from the origin via CloudFront.
e.g. if your user is uploading an image for an avatar, and the avatar image needs several iterations for size & crop, your app can create these on the local volume, but all public requests for the file will take place through a CloudFront origin-pull request. That way, you have maximum flexibility to keep the original file (or an optimized version of it), and any subsequent user requests can either pull an existing version from a CloudFront edge, or CloudFront will route the request back to the app and create any necessary iterations.
An elementary example of the above would be WordPress, which creates multiple sized/cropped versions of any graphic image uploaded, in addition to keeping the original (subject to file size restrictions, and/or plugin transformations). CDN-capable WordPress plugins such as W3 Total Cache rewrite requests to pull through CDN, so the app only needs to create unique first-request iterations. Adding browser caching URL versioning (http://domain.tld/file.php?x123) further refines and leverages CDN functionality.
If you are concerned about rapid expansion of EBS volume file size or inodes, you can automate a pruning process for seldom-requested files, or aged files.
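As a rough sketch of the avatar scenario above (paths and sizes are made up), the app writes the iterations to the local/EBS volume and lets CloudFront pull them from the origin on first request:
<?php
// make_avatars.php - create resized avatar iterations on the local/EBS volume using GD.
$source = imagecreatefromjpeg('/var/www/uploads/avatar-original.jpg');
foreach ([256, 64, 32] as $size) {
    $resized = imagescale($source, $size, $size);  // square iterations, just for the example
    imagejpeg($resized, "/var/www/uploads/avatar-{$size}.jpg", 85);
    imagedestroy($resized);
}
imagedestroy($source);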
I need a Flash uploader to use in my CMS project.
I need something like this, but with a greater max upload size (it doesn't allow uploading files larger than ini_get('upload_max_filesize')).
My server doesn't allow me to override ini settings, so I'm looking for an uploader which can upload large files independently of the ini settings.
If you want to get around the ini limit, one option would be to switch to an FTP uploader.
I used net2ftp once and it was easy enough to install; I haven't used it since (almost a year and a half), but I see from their page that the project is still updated and not dead, so you might give it a try.
You just download the package, place it in your webapp, customize it, and you're set.
You might want to create a dedicated FTP user with appropriate permissions, and not use the root one, of course.
You won't be able to post more data to the server than upload_max_filesize (and post_max_size) allow.
As a workaround you can upload the data to Amazon S3 and sync it back via s3sync.
We have a setup with Plupload in place for one of our clients and are able to upload up to 2 GB per file (that's a client restriction; I don't know about S3 restrictions).
Mind you that S3 costs some money.