How can I create an Amazon S3 clone on my host? - php

I am currently building a storage service, but I am very small and don't want to set up or pay for an Amazon S3 account. I already have my own hosting service which I want to use. However, I want to make it simple to switch to Amazon S3 if the need arises. Therefore, I would like to have essentially an S3 'clone' on my server, which I can simply redirect to Amazon's servers at a later time. Is there any package that can do this?
EDIT: I am on a shared server where I cannot install software, so is there a simple PHP page that can do this?

Nimbus allows for that. From its FAQ:
Cumulus is an open source implementation of the S3 REST API. Some features such as versioning and COPY are not yet implemented, but some additional features are added, such as file system usage quotas.
http://www.nimbusproject.org/doc/nimbus/faq/#what-is-cumulus
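If you're on shared hosting where Cumulus can't be installed, one rough workaround is a single PHP front controller that mimics the small subset of the S3 REST API you actually use (PUT/GET/DELETE on /bucket/key paths) and stores objects on the local filesystem. A minimal sketch, assuming PHP 7+; the storage root, the X-Auth-Token check and the URL layout are placeholders, not real S3 authentication or the full S3 API:

    <?php
    // s3clone.php - sketch of an S3-style endpoint on shared hosting.
    // Expects URLs like s3clone.php/bucket-name/path/to/key

    $storageRoot = __DIR__ . '/objects';
    $token       = 'change-me';               // placeholder shared secret

    @mkdir($storageRoot, 0755, true);

    $path = ltrim($_SERVER['PATH_INFO'] ?? '', '/');
    if ($path === '' || ($_SERVER['HTTP_X_AUTH_TOKEN'] ?? '') !== $token) {
        http_response_code(403);
        exit;
    }

    // Keep keys inside the storage root (very rough sanitisation for a sketch).
    $target = realpath($storageRoot) . '/' . str_replace('..', '', $path);

    switch ($_SERVER['REQUEST_METHOD']) {
        case 'PUT':    // create/overwrite an object from the request body
            @mkdir(dirname($target), 0755, true);
            file_put_contents($target, file_get_contents('php://input'));
            http_response_code(200);
            break;
        case 'GET':    // return the object, 404 if missing
            if (is_file($target)) {
                header('Content-Type: application/octet-stream');
                readfile($target);
            } else {
                http_response_code(404);
            }
            break;
        case 'DELETE': // remove the object
            is_file($target) ? unlink($target) : http_response_code(404);
            break;
        default:
            http_response_code(405);
    }

Keeping your application's read/write calls behind a small wrapper class then makes the later switch to real S3 a matter of swapping this script's URL for an S3 client pointed at Amazon's servers.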

http://www.ubuntu.com/cloud
You would need several host computers, but it works well.
I too have a backup service, but it's based on several 48TB RAID arrays.

Related

How to back up an Amazon AWS website (Laravel) to an Amazon AWS S3 bucket?

This strikes me as something rather mundane, but so far I cannot find any reference to how I can back up my Amazon AWS-hosted application to my AWS S3 bucket automatically on a regular basis.
Long story short, I have an AWS VPS with MySQL and WordPress installed and am now looking to do weekly/monthly backups of the site, including the DB. This seems to be possible via a WP plugin called "UpdraftPlus", which allows backups to be created and pushed over to S3 buckets, but given that this is a rather simple feature, I am quite sure that Amazon must have a solution for this already; I just can't seem to figure out how to do it.
I run both a WP site and a Laravel application, if that matters, so I could rely on either to assist with the backup.
Thanks for any advice on this.
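One low-tech approach that does not depend on a WP plugin is a small PHP script run from cron on the VPS: dump the database with mysqldump, archive the web root, and push both to S3 with the AWS SDK for PHP. A sketch, assuming the SDK is installed via Composer; the bucket name, database name and paths below are placeholders:

    <?php
    // backup-to-s3.php - run weekly from cron, e.g.:
    //   0 3 * * 0 php /home/ubuntu/backup-to-s3.php
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    $bucket  = 'my-backup-bucket';            // placeholder bucket name
    $stamp   = date('Y-m-d');
    $dbDump  = "/tmp/db-$stamp.sql.gz";
    $siteTar = "/tmp/site-$stamp.tar.gz";

    // Dump the database and archive the web root (paths are examples).
    exec("mysqldump --single-transaction wordpress | gzip > " . escapeshellarg($dbDump));
    exec("tar -czf " . escapeshellarg($siteTar) . " /var/www/html");

    $s3 = new S3Client([
        'region'  => 'us-east-1',
        'version' => 'latest',
    ]);

    // Upload both artifacts; keys are prefixed by date so older backups are kept.
    foreach ([$dbDump, $siteTar] as $file) {
        $s3->putObject([
            'Bucket'     => $bucket,
            'Key'        => "backups/$stamp/" . basename($file),
            'SourceFile' => $file,
        ]);
        unlink($file);
    }

A lifecycle rule on the bucket can then expire backups older than a few months, so the weekly dumps don't accumulate forever.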

Accessing SCORM content sitting on an EC2 instance from an application server

I need to access SCORM content through my application (LMS). This content is stored on the filesystem of a different AWS EC2 instance (an Ubuntu machine).
My server has different application instances installed for serving different clients. Each client instance has a separate filesystem repo sitting on the same EC2 server.
How do I ensure that the SCORM content opened by a specific user interacts only with that specific client application instance, so that the SCORM interaction parameter values get saved in the correct database?
Note: My application does not have a multi-tenant architecture.
Sorry for such a generic question; I am a little confused, so a little direction would be appreciated to help me find my way from there.
SCORM and cross-domain would require some work. If you can get the content and the platform on the same domain, that would go easier. Another option would be to just get all the content files to point to their CSS/JS/IMG assets on the other server. Then you're playing in the right sandbox.
Short of that, there are some IFRAME hacks out there to do the same, but they pretty much require you to touch things that would lend themselves to just pointing to all the assets anyway, without the go-between.
SCORM is JavaScript-to-JavaScript communication. The LMS will send the data stored in the session to the backend, commonly on a commit call.
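To keep each learner's interaction data tied to the right client instance, the commit handler on the LMS side can resolve the database from the user's server-side session rather than from anything the content itself sends. A hypothetical sketch of such a PHP commit endpoint; the session keys, DSN map and table are illustrative, not part of any SCORM spec:

    <?php
    // scorm-commit.php - hypothetical endpoint the LMS JavaScript posts to on commit.
    session_start();

    // The client instance was decided at login and lives only in the session,
    // so cross-domain content cannot pick a different tenant.
    $clientId = $_SESSION['client_instance'] ?? null;
    $userId   = $_SESSION['user_id'] ?? null;

    $dsnByClient = [                      // illustrative per-client databases
        'client_a' => 'mysql:host=localhost;dbname=lms_client_a',
        'client_b' => 'mysql:host=localhost;dbname=lms_client_b',
    ];

    if (!$clientId || !$userId || !isset($dsnByClient[$clientId])) {
        http_response_code(403);
        exit;
    }

    $payload = json_decode(file_get_contents('php://input'), true) ?: [];

    $pdo  = new PDO($dsnByClient[$clientId], 'lms_user', 'secret');
    $stmt = $pdo->prepare(
        'INSERT INTO scorm_interactions (user_id, element, value) VALUES (?, ?, ?)'
    );

    // Store each cmi.* element/value pair sent by the SCORM API wrapper.
    foreach ($payload as $element => $value) {
        $stmt->execute([
            $userId,
            $element,
            is_scalar($value) ? (string) $value : json_encode($value),
        ]);
    }

    echo json_encode(['result' => 'true']);

Because the routing decision never leaves the server, serving the content from another EC2 instance only affects where the static files come from, not which database the commit lands in.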

Google Compute Engine - PHP upload files to bucket

I created a Google Cloud Platform account, deployed the LAMP server, and logged in through SFTP and uploaded my existing site.
My site is currently running on another server, so I also uploaded all my databases and everything.
On my current server with HostGator I upload files to the /home/username/public_html/uploads folder, but on Google you can upload to buckets, so I figured I would use their Cloud Storage instead.
I tried doing a PHP move_uploaded_file() to the bucket: gs://mybucket/uploads; however, that's not working.
What's the trick? The only documentation I can find is for App Engine, but I'm just using Compute Engine with LAMP installed.
Also, which would be better? Should I stick with saving the uploads to the LAMP server, or should I use buckets?
Thanks!
You could use the JSON API directly:
https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload
There is also a PHP library for using the Google APIs in general:
https://developers.google.com/api-client-library/php/start/get_started
Then you could use that library to insert objects into your bucket with a call like this:
https://cloud.google.com/storage/docs/json_api/v1/objects/insert
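As an alternative to calling the JSON API by hand, the google/cloud-storage Composer package (a newer, Storage-specific client, distinct from the generic API client library linked above) wraps those calls. A sketch of moving a PHP form upload into a bucket from a Compute Engine VM; the bucket and form field names are placeholders, and the VM's service account needs storage read/write scope:

    <?php
    // upload-to-gcs.php - sketch: push a form upload into a Cloud Storage bucket.
    require 'vendor/autoload.php';

    use Google\Cloud\Storage\StorageClient;

    // On a Compute Engine VM the client can authenticate via the instance's
    // service account, provided the VM was created with storage write scope.
    $storage = new StorageClient();
    $bucket  = $storage->bucket('mybucket');          // placeholder bucket name

    if (!isset($_FILES['upload']) || $_FILES['upload']['error'] !== UPLOAD_ERR_OK) {
        http_response_code(400);
        exit('No file uploaded');
    }

    // Stream the temp file straight to the bucket instead of move_uploaded_file().
    $tmpPath = $_FILES['upload']['tmp_name'];
    $object  = $bucket->upload(
        fopen($tmpPath, 'r'),
        ['name' => 'uploads/' . basename($_FILES['upload']['name'])]
    );

    echo 'Stored as gs://mybucket/' . $object->name();

The gs:// stream wrapper that makes move_uploaded_file() work transparently is an App Engine convenience; on a plain Compute Engine LAMP box you call the API (directly or via a library like this) instead.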
If you're already running a LAMP stack, it's fine to serve files directly from the VM itself, using its local disk; you don't need to start using Google Cloud Storage for typical, simple use cases.
Google Cloud Storage, which is where the buckets terminology comes from, is useful for storing lots of files or serving a very large site, whether for processing via Hadoop or for serving a global audience. Because it involves more work on your part, it makes sense once your site is popular enough that serving data directly from the VM is more expensive than using Google Cloud Storage.
However, Google Cloud Storage scales much further than disks on a VM (though you can attach additional disks, you're limited to 10TB of persistent disk per VM, and it's not an automatic process); whether that matters depends on whether your use case will ever need that much space.
Bottom line: if your site is working fine with the LAMP stack on your VM, it's fine to keep it. It's a good idea to keep track of your TCO, so keep an eye on the cost of your compute, storage, and network traffic, and see at which point it may make sense to move some of your assets to Google Cloud Storage.
You can use Google Cloud Platform Pricing Calculator to estimate your costs, but also take a look at pricing for Google Compute Engine (which includes compute, storage, and networking) as well as Google Cloud Storage, and see how your particular use case will work out.

Online file storage logic

I want to develop a website like a file manager, where users register and get a fixed amount of disk space, let's say 20MB.
Users can then upload their PDF, DOC, TXT, JPEG, etc. files up to their disk limit.
I can develop up to this point using PHP.
Now, below are my issues:
1) If users' files are corrupted, they should be able to roll back their folders to a state from 2-3 days earlier.
Files must be secure and safe from viruses, as users are uploading their important documents.
Is there any third-party storage server that provides such a facility?
2) Also, all files should be previewable in the browser.
I am using the Google Docs viewer. Is it a good and safe way to preview files in the browser?
But Google links are accessible to everyone; I need to add some restrictions so that a file can be viewed only by its owner.
I know it's a major task, but I just need some sort of logic. Please share your thoughts.
Thanks.
Any cloud storage service can be used for this; you'll get disk space. There is no storage server that provides a revision control system for this out of the box. You can use Git or SVN for it, though, but as the files are binary you will not get the full benefit of these tools.
How files are previewed is up to you. If you use PHP, you build the site and, at the backend, use the API to interact with the storage service. Google Docs is not an option for this if you use PHP. Also note that Google links can be made private.
I suggest this:
Find a cloud storage service and use that storage from your server. Any will do.
Create the UI using PHP and control access using PHP too.
Manipulate files on your server directly, or on the third-party storage server via its API.
Use a revision control system to track the changes, and use its API from the PHP end.
Some cloud storage services:
Amazon S3. It also supports versioning (see the sketch after this list).
Google Cloud Storage
Microsoft Azure
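For points 1) and 2) together, Amazon S3 covers both the rollback requirement (bucket versioning keeps older copies of every object) and the owner-only preview requirement (time-limited pre-signed URLs instead of public links). A sketch with the AWS SDK for PHP; the bucket name and object key are placeholders:

    <?php
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    $s3     = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
    $bucket = 'my-file-manager';                 // placeholder bucket name

    // 1) Turn on versioning once per bucket: every overwrite keeps the old copy,
    //    so a user can be rolled back to a version from 2-3 days ago.
    $s3->putBucketVersioning([
        'Bucket'                  => $bucket,
        'VersioningConfiguration' => ['Status' => 'Enabled'],
    ]);

    // List the stored versions of one user's file to offer a rollback choice.
    $versions = $s3->listObjectVersions([
        'Bucket' => $bucket,
        'Prefix' => 'user42/report.pdf',         // placeholder key
    ]);

    // 2) Owner-only preview: hand the browser a URL that expires quickly,
    //    instead of a public link anyone can open.
    $cmd = $s3->getCommand('GetObject', [
        'Bucket' => $bucket,
        'Key'    => 'user42/report.pdf',
    ]);
    $previewUrl = (string) $s3->createPresignedRequest($cmd, '+10 minutes')->getUri();

The pre-signed URL can then be handed to a browser-side viewer, so only a user who has just authenticated against your PHP app ever receives a working link.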
Try Microsoft SkyDrive, Google Drive, or Dropbox.

Storing and retrieving user files on a different server / sub domain

I'm looking for some quick info about best practices for storing users' uploaded files on different servers or subdomains...
For example, photos on Facebook of course aren't on facebook.com/files/users/453485 etc...
but rather on photos.ak.fbcdn.net or whatever...
I'm wondering how, with PHP, I can upload to a different server whilst maintaining a MySQL connection to my original one... is it possible?
Facebook uses a content delivery network (CDN, hence fbcdn, or Facebook content delivery network) and probably uses web services to pass binary data (photos) from server to server.
Rackspace Cloud offers a similar service. Here is an example application of their PHP library to access their webservice api: http://cloudfiles.rackspacecloud.com/index.php/Sample_PHP_Application
I'm going to make the assumption that you have multiple webservers, and want to be able to access the same set of files on each one. In that case, some sort of shared storage that each machine can access might be a good place to start.
Here are a couple options I've used:
Shared NFS Volume [ http://en.wikipedia.org/wiki/Network_File_System_(protocol) ]
MogileFS [ http://www.danga.com/mogilefs/ ]
Amazon S3 [ http://aws.amazon.com/s3/ ]
If you don't have control over the hardware or aren't able to install extra software, I'd suggest Amazon S3. There is an API that you can use to shuttle files back and forth. The only downside is that you don't get to use storage that you may already have, and it will cost you some money.
If you do have access to the hardware and software, MogileFS is somewhat like S3, in that you have an API to access the files. But it is different in that you get to use your existing storage, and get to do so at no additional cost.
NFS is a typical place where people start, because it's the simplest way to get going. The downside is that you'll have to be able to configure servers and set up an NFS volume for them to mount.
But if I were starting a high-volume photo hosting service, I'd use S3, and I'd put a CDN like Akamai in front of it.
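On the "upload to a different server whilst maintaining a MySQL connection" question: the file transfer and the database write are simply two separate calls in the same request, so pushing the bytes to S3 (or any remote store) does not interfere with your existing connection at all. A sketch with the AWS SDK for PHP; the bucket, table, user id and credentials are placeholders:

    <?php
    // handle-upload.php - sketch: store the file on S3, record it in local MySQL.
    require 'vendor/autoload.php';

    use Aws\S3\S3Client;

    $s3  = new S3Client(['region' => 'us-east-1', 'version' => 'latest']);
    $pdo = new PDO('mysql:host=localhost;dbname=myapp', 'appuser', 'secret');

    if (!isset($_FILES['photo']) || $_FILES['photo']['error'] !== UPLOAD_ERR_OK) {
        http_response_code(400);
        exit;
    }

    $key = 'users/453485/' . uniqid('', true) . '-' . basename($_FILES['photo']['name']);

    // 1) Ship the bytes to the storage server (S3 here) straight from the temp file.
    $result = $s3->putObject([
        'Bucket'     => 'my-photo-bucket',       // placeholder bucket
        'Key'        => $key,
        'SourceFile' => $_FILES['photo']['tmp_name'],
    ]);

    // 2) Keep the metadata in your original MySQL database exactly as before.
    $stmt = $pdo->prepare('INSERT INTO photos (user_id, s3_key, url) VALUES (?, ?, ?)');
    $stmt->execute([453485, $key, $result['ObjectURL']]);

Putting a CDN in front of the bucket, as suggested above, then only changes the URL you store, not this flow.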
