I have a WordPress blog which is replicated, i.e. 2 servers behind a load balancer serve the same WordPress blog. I pointed both servers to the same database, so I have no problem there. However, when a user is forwarded (by the load balancer) to server-1 and uploads files, they are kept on server-1. The same goes for server-2. Those files are not shared between the 2 servers, so a user who is forwarded to server-2 will not see the files (e.g. images) which were uploaded to server-1.
I read that the upload folder can be changed but "This path can not be absolute. It is always relative to ABSPATH".
What are the best practices to share the upload folder between servers?
Options:
Set something up to replicate files between servers, e.g. rsync in a cron job.
Mount a network share to the uploads folder on both servers (see the wp-config.php sketch after this answer).
You are already load balancing, so why not offload some of the HTTP load as well?
Move the uploads to something like S3.
Here is one plugin for it: http://wordpress.org/plugins/wp2cloud-wordpress-to-cloud/
Moving the rest of your static files (e.g. theme and plugin files) there as well would also be good for the server load.
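If you go the network-share route, pointing WordPress at the shared directory can be done with the UPLOADS constant, which (as noted in the question) is always interpreted relative to ABSPATH. A minimal sketch, assuming the share is already mounted at wp-content/shared-uploads on both servers (the path is illustrative):

<?php
// Add this to wp-config.php on both servers (sketch; assumes the network
// share is already mounted at wp-content/shared-uploads on each machine).
// The UPLOADS constant is always treated as relative to ABSPATH.
define('UPLOADS', 'wp-content/shared-uploads');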
Related
So I have launched a classified ads website, but I am realizing that it is getting too slow to load. I have a single upload folder and a thumb_cache folder, each containing 4680 images, on regular shared hosting. Could this be why the site is slowing down? Can over 4000 image files in one upload folder slow down a site? I developed the site in PHP; would PHP have a hard time finding images in one folder with over 4000 image files?
How should I organize the upload directory for better performance? Should I automatically create a folder within the upload folder for each ad with PHP?
I also get this warning:
imagepng() [function.imagepng]: Unable to open
'/home/content/72/9959172/html/thumb_cache/
185x200__width__uploaded_files^classified_img^tractor61354.PNG' for writing:
Stale NFS file handle in
/home/content/72/9959172/html/al/includes/funcs_lib.inc.php on line 1168
The answer highly depends on your server configuration. Basically, yes, it is possible that this can be a problem under certain server installations of PHP.
In any case, it can and will become a problem once you start thinking about backups and maintenance.
Many servers and applications use two-letter prefixes derived from the file's hash sum. For example, a file with hash 75f5164e4bd2de372b99a3e2e2718aed would be placed under the 75/f5/ folder. This can solve some of these problems.
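A rough sketch of that layout in PHP (the uploads path and the form field name are assumptions for illustration):

<?php
// Sketch: spread files across subfolders derived from the file's MD5 hash,
// so a hash like 75f5164e... ends up in uploads/75/f5/.
$tmpPath = $_FILES['image']['tmp_name'];            // uploaded temp file (assumed field name)
$hash    = md5_file($tmpPath);
$subdir  = substr($hash, 0, 2) . '/' . substr($hash, 2, 2);
$target  = __DIR__ . '/uploads/' . $subdir;

if (!is_dir($target)) {
    mkdir($target, 0755, true);                     // create the nested folders as needed
}

$ext = pathinfo($_FILES['image']['name'], PATHINFO_EXTENSION);
move_uploaded_file($tmpPath, $target . '/' . $hash . '.' . $ext);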
No, this should not be an issue; I have stored around 10,000 images inside one folder. You will only have an issue if you load all the images in that folder on a single page, but even that can be solved by a jQuery lazy-load plugin.
PHP has nothing to do with how files are stored on your OS, unless you are retrieving them in a loop and processing them.
I have a PHP application hosted on AppFog, and sometimes it needs to generate files and store them on the server. The files are saved with file_put_contents() or with the imagejpeg() and imagepng() functions. After a while the files get removed. Can you tell me why, and how I can prevent it?
Many PAAS providers, including AppFog, do not provide a persistent filesystem. Generally, you can save files but they will be removed when you redeploy your application.
For persistent file storage, you are encouraged to use a cloud provider like Amazon S3.
From the AppFog FAQ:
Does AppFog have a persistent file system?
Not yet. We're working on this feature, but in the meantime, the file system is volatile. This means that any changes you make to the file system through a web interface, including any admin changes and content uploads, will be lost on the app's next start, stop, restart, deploy, or resource change. Because of this, you should make any changes to the file system on a local development environment and keep media assets and content uploads on an external storage system like Amazon's S3.
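For example, with the AWS SDK for PHP you can register its S3 stream wrapper and keep using file_put_contents() almost unchanged. This is only a sketch; the bucket name, region, and key are assumptions, and the SDK picks up credentials from your environment:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Sketch: write generated files to S3 instead of the ephemeral filesystem.
// 'my-app-uploads' and the region are placeholders.
$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);
$s3->registerStreamWrapper();    // enables the s3:// protocol for filesystem functions

// $imageData would normally hold the generated file contents; placeholder here.
$imageData = 'example contents';
file_put_contents('s3://my-app-uploads/images/generated.png', $imageData);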
How can I set my uploader to save uploaded files into a folder on my computer? I'm hosting a free server on 000webhost.com and I have a simple uploader script ^^
So I'd like to do something like this:
//specify folder for file upload
$folder = "C:\Users\Tepa\Desktop\Ohjelmat\Uploads";
In order to have a website save files on your local computer, you would need to install an FTP or other file server on your home computer and send the file to it by IP address or DNS name.
I would suggest instead tackling this from the opposite direction: write a script that pulls these files to your local computer rather than opening it up to the outside world. You can set it up as a cron job / scheduled task so that the files are pulled in automatically.
If the files you are interested in are stored in a publicly accessible folder on your host, actually accomplishing this task is fairly trivial (see: cURL or file_get_contents)
You lose real-time synchronization, but gain some peace of mind.
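A minimal sketch of such a pull script, run on your local machine as the scheduled task (the remote URL is an assumption; the local path reuses the one from the question):

<?php
// Sketch: pull one uploaded file from a publicly accessible folder on the host
// down to the local machine. Run this locally, e.g. from a scheduled task.
$remote = 'http://example.000webhostapp.com/uploads/example.jpg';   // assumed URL
$local  = 'C:\\Users\\Tepa\\Desktop\\Ohjelmat\\Uploads\\example.jpg';

$data = file_get_contents($remote);
if ($data !== false) {
    file_put_contents($local, $data);
}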
When uploading an image, PHP stores the temp file in a local directory on the server.
Is it possible to change this temp location so it's off the local server?
Reason: I'm using load balancing without sticky sessions, and I don't want files to be uploaded to one server and then not available on another server. Note: I don't necessarily complete the file upload and work on the file in one go.
The preferred temp location would be AWS S3; I'm also just interested to know whether this is possible.
If it's not possible, I could make the file upload a complete process that also puts the finished file in its final location.
So, just interested to know: can the PHP temp image/file location be off the local server?
Thank you.
You can mount an S3 bucket with s3fs on your instances behind the ELB, so that all your uploads are shared between the application servers. As for /tmp, don't touch it; since the final destination is S3 and it is shared, you don't have to worry.
If you have a lot of uploads, S3 might become a bottleneck. In that case, I suggest setting up a NAS. Personally, I use GlusterFS because it scales well and is very easy to set up. It has some replication issues, but if you don't use replicated volumes at all, you are fine.
Other alternatives are Ceph, Sector/Sphere, XtreemFS, Tahoe-LAFS, POHMELFS and many others.
You can directly upload a file from a client to S3 with some newer technologies as detailed in this post:
http://www.ioncannon.net/programming/1539/direct-browser-uploading-amazon-s3-cors-fileapi-xhr2-and-signed-puts/
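The server-side half of that approach is generating a signed PUT URL that the browser can upload to directly. With the AWS SDK for PHP it might look roughly like this (the bucket, key, and expiry are assumptions):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Sketch: create a presigned PUT URL so the browser uploads straight to S3,
// bypassing the load-balanced web servers. Bucket/key/expiry are placeholders.
$s3 = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);

$command = $s3->getCommand('PutObject', [
    'Bucket' => 'my-uploads-bucket',
    'Key'    => 'uploads/photo.jpg',
]);
$request = $s3->createPresignedRequest($command, '+15 minutes');

echo (string) $request->getUri();   // hand this URL to the browser for its PUT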
Otherwise, I would personally suggest using each server's tmp folder for exactly that: temporary storage. After the file is on your server, you can always upload it to S3, which is then accessible across all of your load-balanced servers.
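As a sketch of that flow, assuming the AWS SDK for PHP, a placeholder bucket name, and a form field called image:

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

// Sketch: accept the upload into PHP's usual temp location, then push the
// finished file to S3 so every balanced server can reach it.
$s3  = new S3Client(['version' => 'latest', 'region' => 'us-east-1']);
$key = 'uploads/' . basename($_FILES['image']['name']);

$s3->putObject([
    'Bucket'     => 'my-uploads-bucket',            // placeholder bucket
    'Key'        => $key,
    'SourceFile' => $_FILES['image']['tmp_name'],   // PHP's temp upload path
]);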
I'm planning to use multiple file servers to host my website's uploaded files. What's the best way to do it? Should I install a web server on the other machines as well, or is there any special software for routing files on the network? What would you pros do?
Thanks,
Taher.
Here's one way you could do it...
Create a central routing handler specifically for grabbing files off the network, and name your file servers as subdomains pointing to the various machines.
When a user clicks on the download link, e.g.
www.example.com/GetDownload.php?id=10
...the GetDownload.php page would look in the database to see where the file has been stored (assuming you're keeping track of file locations in the database), or use whatever your convention is for keeping track of uploads, to determine the location of the file on your network. Then it can simply redirect to the appropriate server/download folder. So GetDownload.php?id=10, upon finding the location of the file, would redirect to the appropriate server/URL:
AFile.doc is on FileServerB, redirect...
FileServerA.Example.com
Here! --> FileServerB.Example.com/A/AFile.doc
FileServerC.Example.com
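A minimal GetDownload.php along those lines might look like the following; the database credentials, the files table, its columns, and the subdomain scheme are all assumptions for illustration:

<?php
// Sketch: look up which file server holds the file, then redirect the client there.
// Assumes a `files` table with `id`, `server` (e.g. "FileServerB") and `path` columns.
$pdo = new PDO('mysql:host=localhost;dbname=site', 'user', 'pass');

$stmt = $pdo->prepare('SELECT server, path FROM files WHERE id = ?');
$stmt->execute([(int) $_GET['id']]);
$file = $stmt->fetch(PDO::FETCH_ASSOC);

if ($file === false) {
    http_response_code(404);
    exit('File not found');
}

// e.g. http://FileServerB.example.com/A/AFile.doc
header('Location: http://' . $file['server'] . '.example.com/' . $file['path']);
exit;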
You can also configure Gluster and mount the GlusterFS volume on the web servers... You will also have a fault-tolerant system.