I'm writing my first web application using the PHP framework Laravel and a MySQL database. The application is quite image-heavy, with users likely to have 1 GB or more of images each hosted by me. Images will be stored in the server's file system, with a file reference stored in the database.
I'm just thinking ahead as I code: the application will initially be hosted on my InMotion VPS, but it will (hopefully) ultimately outgrow that.
Is there anything I should be doing at this early stage to make sure that the application could be scaled?
I would say the easiest and best thing to do right now is move all of your users' images to a CDN-backed object store like Rackspace Cloud Files or similar. That way, if/when you upgrade the framework, change servers, add new ones, etc., most of your physical files are located somewhere else, and you get all of the benefits of a CDN network on top. Beyond that, I wouldn't worry about over-optimizing too much.
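For example, in a recent Laravel version you could push uploads straight to a cloud disk through the filesystem abstraction and keep only the path in MySQL. A minimal sketch, assuming an "s3" (or compatible) disk is configured in config/filesystems.php and a made-up Image model for the database reference:

```php
<?php
// Hypothetical Laravel controller method: put an uploaded image on a cloud
// disk (S3 or any compatible object store) and keep only the path in MySQL.
// Assumes a recent Laravel version with an "s3" disk configured in
// config/filesystems.php, plus a made-up Image model for the DB reference.

namespace App\Http\Controllers;

use App\Models\Image;                       // hypothetical model
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;

class ImageController extends Controller
{
    public function store(Request $request)
    {
        $request->validate(['image' => 'required|image|max:10240']);

        // The upload goes straight to the cloud disk; the returned path is
        // the only thing stored locally (in the database).
        $path = $request->file('image')->store('user-images', 's3');

        Image::create([
            'user_id' => $request->user()->id,
            'path'    => $path,
        ]);

        // Build a public (or temporary signed) URL from the same path later.
        return Storage::disk('s3')->url($path);
    }
}
```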
Related
I am currently developing a large web application. Users will be able to post images as well as music files to my server. I am using PHP with the CodeIgniter framework on an Apache server from A2hosting.com. I was wondering how I will be able to manage space. I know that they offer unlimited storage, but I also know that I am going to run into issues if too many people upload too much.
What is the best way to deal with this? Would you have a separate hosting plan just for storing all this media? Could it be stored through a third party? Will my site eventually slow down because of how much storage I am holding for people?
I guess I would like to know what issues I am going to run into. My project is almost complete and I want to avoid any large-scale errors. I am the only one working on this project, so manpower is pretty precious, as well as time.
Any help and insights will be greatly appreciated.
Anyone offering you "unlimited storage" at a fixed rate is having you on.
We put our media files on Amazon S3, which is designed to handle trillions of files.
If you do host the uploaded data locally, please don't place your uploads folder anywhere in your web root or anywhere directly accessible by remote users. Expanding your storage is easy, but recovering from a total website or server compromise is not!
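As a minimal sketch of that idea: keep the uploads in a directory outside the document root and stream them back through a small PHP script after whatever access check you need. The paths and the session check below are placeholders:

```php
<?php
// download.php - serve a user upload stored outside the web root.
// Assumes files live in /var/app-data/uploads (not under the document root)
// and were previously saved there by your upload handler.

$uploadDir = '/var/app-data/uploads';

// Placeholder access check: replace with your real authentication logic.
session_start();
if (empty($_SESSION['user_id'])) {
    header('HTTP/1.1 403 Forbidden');
    exit('Forbidden');
}

// Prevent path traversal: keep only the base name and verify the resolved
// path is still inside the upload directory.
$name = basename(isset($_GET['file']) ? $_GET['file'] : '');
$path = realpath($uploadDir . '/' . $name);

if ($path === false || strpos($path, $uploadDir . '/') !== 0 || !is_file($path)) {
    header('HTTP/1.1 404 Not Found');
    exit('Not found');
}

// Stream the file back with a sensible content type.
$mime = function_exists('mime_content_type')
    ? mime_content_type($path)
    : 'application/octet-stream';

header('Content-Type: ' . $mime);
header('Content-Length: ' . filesize($path));
header('Content-Disposition: attachment; filename="' . $name . '"');
readfile($path);
```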
I have a website that currently uses two servers, an application server and a database server. However, the load on the application server is increasing, so we are going to add a second application server.
The problem I have is that the website lets users upload files to the server. How do I get the uploaded files onto both servers?
I do not want to store images directly in a database as our application is database intensive already.
Is there a way to keep the files in sync across the servers, or is there something else I can do?
Any help would be appreciated.
Thanks
EDIT: I am adding the following links, which helped me understand this problem better:
Synchronize Files on Multiple Servers
and
Keep Uploaded Files in Sync Across Multiple Servers - LAMP
For anyone reading this post: NFS seems to be the better of the two.
NFS will keep the files in sync; you could also use FTP to push uploads to all servers, but NFS looks like the way to go.
This is really a question for Server Fault.
Anyway, I think you should definitely consider moving into the cloud.
Syncing uploads from one server to another is simply unreliable: you have no idea what kind of errors you will get or why, and the syncing process itself loads both servers. For me, the proper solution is going to the cloud.
Should you choose the syncing route anyway, you have a couple of options (a minimal cron-driven sketch follows the list):
Use rsync to sync the files you need between the servers.
Use crontab to sync the files every X minutes/hours/days.
Copy the files upon some event (user login, etc.).
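As promised above, here is a minimal sketch of a sync script you could run from cron on the server that receives the uploads. It simply shells out to rsync, and assumes rsync is installed and passwordless SSH (key auth) is already set up to the second server; the hostname and paths are placeholders:

```php
<?php
// sync-uploads.php - push new uploads to the second web server.
// Intended to be run periodically from cron. Assumes rsync is installed and
// passwordless SSH (key auth) is configured for the deploy user.
// "web2.example.com" and the paths below are placeholders.

$source = '/var/www/uploads/';
$target = 'deploy@web2.example.com:/var/www/uploads/';

// -a preserves permissions/timestamps, -z compresses in transit,
// --partial keeps partially transferred files so interrupted runs can resume.
$cmd = sprintf(
    'rsync -az --partial %s %s 2>&1',
    escapeshellarg($source),
    escapeshellarg($target)
);

exec($cmd, $output, $exitCode);

if ($exitCode !== 0) {
    // Log failures so a broken sync doesn't go unnoticed.
    error_log("upload sync failed (exit $exitCode): " . implode("\n", $output));
}
```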
I got this answer from Server Fault:
The most appropriate course of action in a situation like this is to break the file share into a separate service of its own. Don't duplicate files if you have a network that can let the files be "everywhere (almost) at once." You can do this through NFS/CIFS or through a proper storage protocol like iSCSI. Mount as local storage in the appropriate directory. Depending on the performance of your network and your storage needs, this could add a couple of undetectable milliseconds to page load time.
So using NFS to share the files between servers would work, or, as stated by @kgb, you could designate one server to hold all uploaded files and have the other servers pull from it (just make sure you run a cron job or something to back up the files).
Most sites solve this problem by using a third-party file service like Amazon S3 for user uploads.
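As a concrete illustration of the S3 route, here is a minimal sketch using the AWS SDK for PHP (v3, installed via Composer as aws/aws-sdk-php). The bucket name, form field name, and region are placeholders, and credentials are assumed to come from the environment or an IAM role:

```php
<?php
// Minimal sketch: push an uploaded file to S3 with the AWS SDK for PHP (v3).
// Bucket, region, and the "upload" form field name are placeholders;
// credentials are assumed to come from the environment or an IAM role.

require 'vendor/autoload.php';

use Aws\S3\S3Client;
use Aws\Exception\AwsException;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

try {
    $result = $s3->putObject([
        'Bucket'     => 'my-app-user-uploads',
        'Key'        => 'uploads/' . basename($_FILES['upload']['name']),
        'SourceFile' => $_FILES['upload']['tmp_name'],
    ]);

    // Store the key (or $result['ObjectURL']) in your database instead of a
    // local file path.
    echo $result['ObjectURL'];
} catch (AwsException $e) {
    error_log('S3 upload failed: ' . $e->getMessage());
}
```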
Another option is a piece of software called BTSync. It is very easy to install and use and lets you keep files in sync across as many servers as you need; it takes only three terminal commands to install and is very efficient.
You could also use the database server for file storage: not in the database itself, I mean, but by running a web server on that machine too. It won't increase CPU load much, but it will need more bandwidth.
You could do it with rsync. People have suggested NFS, but that creates a single point of failure: if the NFS server goes down, both of your servers are stuck. Correct me if I'm wrong.
I have a simple CRM system that allows sales to put in customer info and upload appropriate files to create a project.
The system is already hosted in the cloud, but the office's internet upload speed is horrendous. One file may take 15 minutes or more to finish, causing a bottleneck in the sales process.
Upgrading our office internet is not an option; what other good solutions are out there?
I propose splitting the project submission form into two parts. The project info fields are posted directly to our cloud-hosted web app and stored in the appropriate DB table, while the file itself is submitted to a LAN server with a simple database and an API that the cloud-hosted web app can call to retrieve the file via a download link if it is ever needed again. Details of this setup still need to be worked out, but this is what I want to do in general.
Is this a good approach to solving the slow-upload problem? I've never done this before, so are there any obstacles to this implementation (cross-domain restrictions come to mind, but I believe that can be fixed by using an iframe)?
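If it helps make the idea concrete, here is a rough sketch of what the LAN side could look like: a single PHP endpoint that accepts the file, stores it on the LAN server, records it in a small table, and returns an ID the cloud app can keep as its download reference. Everything here (paths, table and column names, credentials) is hypothetical.

```php
<?php
// lan-upload.php - hypothetical LAN-side endpoint for the split-form idea.
// Accepts the file from the sales form, stores it on the LAN server, records
// it in a small table, and returns an ID the cloud app can save alongside the
// project record. Table, column, and credential values are made up.

$uploadDir = '/srv/crm-files';
$pdo = new PDO('mysql:host=localhost;dbname=crm_files', 'crm', 'secret');

if (!isset($_FILES['file']) || $_FILES['file']['error'] !== UPLOAD_ERR_OK) {
    http_response_code(400);
    exit(json_encode(array('error' => 'no file')));
}

$name   = basename($_FILES['file']['name']);
$stored = uniqid('f_', true);

if (!move_uploaded_file($_FILES['file']['tmp_name'], $uploadDir . '/' . $stored)) {
    http_response_code(500);
    exit(json_encode(array('error' => 'could not store file')));
}

$stmt = $pdo->prepare(
    'INSERT INTO files (stored_name, original_name, uploaded_at) VALUES (?, ?, NOW())'
);
$stmt->execute(array($stored, $name));

// The cloud app saves this ID with the project and builds a download link
// such as https://files.office.example.com/download.php?id=...
header('Content-Type: application/json');
echo json_encode(array('id' => $pdo->lastInsertId()));
```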
If bandwidth is the bottleneck, then you need a solution that doesn't chew up all your bandwidth. You mentioned that you can't upgrade your bandwidth - what about putting in a second connection?
If not, the files need to stay on the LAN a little longer. It sounds like your plan would be to keep the files on the LAN forever, but you can store them locally initially and then push them later.
When you do copy the files out to the cloud, be sure to compress them and also set up rate limiting (so they take up maybe 10% of your available bandwidth during business hours).
Also put some monitoring in place to make sure the files are being sent in a timely manner.
I hope nobody needs to download those files! :(
I have a CodeIgniter app in a git repo. Currently I deploy a new installation on my server for each new client I sign up.
Each new client has its own database and its own files in a folder such as: /var/www/vhosts/example.com/client/client1/
Each client gets a subdomain that I map through Plesk: client1.example.com.
My question:
Does it perform better to have a single app installation manage all of these clients, with a different database.php config per client,
i.e. /var/www/vhosts/example.com/httpdocs/*
and an .htaccess redirect that remaps each subdomain's URI to the right config?
Or does it perform better to keep a separate installation for each client, as I have now? (A rough sketch of what I mean by the first option is below.)
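Roughly what I have in mind for the single-installation option is a per-host switch in the shared config, something like the hypothetical excerpt below; client names and credentials are placeholders:

```php
<?php
// application/config/database.php (excerpt) - hypothetical per-subdomain
// database selection for a single shared CodeIgniter installation.
// Client hostnames and credentials are placeholders.

$active_group = 'default';
$active_record = TRUE;

// Map each client subdomain to its own database.
$client_databases = array(
    'client1.example.com' => 'app_client1',
    'client2.example.com' => 'app_client2',
);

$host = isset($_SERVER['HTTP_HOST']) ? strtolower($_SERVER['HTTP_HOST']) : '';

if (!isset($client_databases[$host])) {
    exit('Unknown client: ' . htmlspecialchars($host));
}

$db['default']['hostname'] = 'localhost';
$db['default']['username'] = 'app_user';
$db['default']['password'] = 'secret';
$db['default']['database'] = $client_databases[$host];
$db['default']['dbdriver'] = 'mysql';
$db['default']['dbprefix'] = '';
$db['default']['pconnect'] = FALSE;
$db['default']['db_debug'] = FALSE;
$db['default']['char_set'] = 'utf8';
$db['default']['dbcollat'] = 'utf8_general_ci';
```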
Server Information:
PHP 5.3
MySQL 5.x
CodeIgniter 2.1
WireDesignz HMVC
Sparks (various)
CentOS DV4 from MediaTemple
I'd say keep them apart.
Each client will have their own set of requirements. While the server won't change that much, your code base will, and over time it will become hard not to break something for one client while building something for another.
As separate projects, you'll also be able to move larger sites away from the smaller ones, though this depends on what type of traffic your clients receive.
And having each application in its own repository (you are using version control, right?) will keep you that much more organized.
Performance-wise, a smaller application tailored to one client, running only what that client needs, will probably outperform one monolithic site serving all of your clients any day.
Hope I understood that correctly; this is just my personal opinion.
I am currently working on configuring my CakePHP (1.3) web app to run in a high-availability setup. I have four web boxes running the app itself and a MySQL cluster as the database backend. Users upload 12,000-24,000 images a week (35-70 GB). The app then generates two additional files from each original, a thumbnail and a medium-size preview image, which means a total of 36,000-72,000 files potentially added to the repositories each week.
What I am trying to wrap my head around is how to handle the large number of static file requests coming from users trying to view these images. I could have multiple web boxes serving only static files, with a load balancer dispatching the requests.
But does anyone on here have any ideas on how to keep all static file servers in sync?
If any of you have any experiences you would like to share, or any useful links for me, it would be very appreciated.
Thanks,
serialk
It's quite a thorny problem.
Technically you can get a high-availability shared directory through something like NFS (or SMB if you like), using DRBD and Linux-HA for an active/passive setup. That will give you good availability against the loss of a single server, but it is quite wasteful and not easy to scale: you'd have to have the app itself decide which server(s) to go to, configure NFS mounts, etc., and it all gets rather complicated.
So I'd probably push for not keeping the images in a filesystem at all, or at least not a conventional one. I am assuming you need the flexibility to add more storage in the future; if you can keep the storage and I/O requirements constant, DRBD with HA NFS is probably a good system.
For storing files in a flexible "cloud", consider either:
Tahoe-LAFS,
or perhaps, at a push, Cassandra, which would require a bit more integration but may be better in some ways.
MySQL Cluster is not great for big blobs, as it (mostly) keeps the data in RAM; also, the high consistency it provides requires a lot of locking, which makes updates scale (relatively) badly at high workloads.
But you could still consider putting the images in MySQL Cluster anyway, particularly as you have already set it up; it would add no further operational overhead.