I have photo-upload functionality in a PHP web application, which is going to be hosted as a distributed application across many servers.
The problem: how should I handle the upload so that each photo becomes available on all servers? My application is in PHP.
You could use an existing platform to perform the syncing. Most modern OSes support syncing, and you could also use external tools.
At the OS level:
You can use rsync for *nix servers (plus, this has been ported to Windows too)
Here's a way to do it between Windows servers
Using external tools:
Goodsync enterprise
Syncbreeze
Under *nix, you could use rsync
Here's a collection of programs (free and paid) to do this
You could use a cloud-based service like Dropbox on all your servers
Using PHP:
You could use librsync
You could add an FTP/SSH server to each of your distributed servers. Once a file is uploaded to one server, it can be pushed via FTP to the others. PHP includes functionality for FTP; check the FTP section in the PHP manual, and see the sketch after this list.
Use FXP to transfer files between servers. The KioobFTP class can be used for this, as it implements FXP transfers.
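For the PHP FTP route, here's a minimal sketch using PHP's built-in FTP extension. The hostnames, credentials and paths are placeholders for illustration:

<?php
// Minimal sketch: after a photo lands on this server, push it to the other
// servers in the cluster over FTP. Hostnames, credentials and paths are
// placeholders, not real values.
$localFile  = '/var/www/uploads/photo.jpg';
$remoteFile = 'uploads/photo.jpg';

foreach (['server2.example.com', 'server3.example.com'] as $host) {
    $conn = ftp_connect($host);
    if ($conn === false) {
        error_log("Could not connect to $host");
        continue;
    }
    if (ftp_login($conn, 'syncuser', 'secret')) {
        ftp_pasv($conn, true); // passive mode is friendlier to firewalls
        if (!ftp_put($conn, $remoteFile, $localFile, FTP_BINARY)) {
            error_log("Upload to $host failed");
        }
    }
    ftp_close($conn);
}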
Related
I'm running a PHP app on GCloud (Google App Engine). This app will require users to submit files for processing via FTP. A Python cron job will process them.
Given that dev to prod is via the GAE deployment, I'm assuming there is no FTP access to the app folder structure.
How would I go about providing simple one-way FTP to my users? Can I deploy a Python project that will be a server? Or do I need to run a VM?
I've done some searching which suggests the VM option, but surely there are other options?
App Engine projects are not based on server virtual machines. App Engine is a platform as a service, not infrastructure as a service. Your code is packaged up and served on Google App Engine in a manner that can scale easily. App Engine is not a drop-in replacement for your old-school web hosting; it's quite a bit different.
That said, FTP is just a mechanism to move files. If your files just need to be processed by a job, you can provide an upload for your users where the files end up residing in Google Cloud Storage, and your cron job then reads from that location and does whatever processing is needed. What comes out of that processing might bring further considerations. Don't treat FTP as a requirement, but rather as one means of moving files, and you'll probably have plenty of options.
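If you go the Cloud Storage route, the upload handler can be quite small. A sketch, assuming the google/cloud-storage Composer package; the bucket name and form field are placeholders:

<?php
// Sketch only: accept a browser upload and store it in Google Cloud Storage,
// where the cron job can pick it up later. Assumes the google/cloud-storage
// Composer package; "my-processing-bucket" is a placeholder.
require 'vendor/autoload.php';

use Google\Cloud\Storage\StorageClient;

$storage = new StorageClient(); // on App Engine, uses the default service account
$bucket  = $storage->bucket('my-processing-bucket');

// $_FILES['upload'] comes from a standard <input type="file" name="upload">
$bucket->upload(
    fopen($_FILES['upload']['tmp_name'], 'r'),
    ['name' => 'incoming/' . basename($_FILES['upload']['name'])]
);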
I am totally confused on how to host a Dynamic website created using PHP and MySQL in Amazon Cloud.
I went through Amazon S3 and I hosted a static website there!
Then I tried Amazon EC2 and learned some aspects of the concept of a VPC. I thought dynamic websites were hosted in the Amazon cloud using EC2. I followed some steps, but they taught me how to launch a website using Drupal (which I didn't want!).
I couldn't find any other tutorials on deploying my web application to EC2.
Then I found AWS Elastic Beanstalk; I uploaded a simple PHP document and saw it deploy successfully.
But I'm still not satisfied, because I don't know which is the correct way to deploy my PHP application.
So can anyone direct me on deploying a PHP/MySQL application on AWS?
Depends on your needs. Elastic Beanstalk might be a good option for many apps, but I chose EC2 for my app's backend (using PHP, MySQL and S3 for storage).
Quick steps to get you up and running:
Log into the AWS Management Console and start a new EC2 instance (Windows Server 2012 R2 Base > t2.micro should be good enough for a start!)
At step "6. Configure Security Group", add rules for at least HTTP, HTTPS and RDP (so you can connect via Remote Desktop)
Connect to your new instance via Remote Desktop and install a decent browser (Enable File Downloads in IE's Security Settings and download Chrome or Firefox)
Open the Windows Firewall and add rules for the same ports you opened in the Security Group of your Instance in the AWS Management Console. (Right-click on “Inbound Rules”, then select “New Rule…”)
Download and install XAMPP (I put it in C:\xampp)
Open the XAMPP Control panel and install Apache and MySQL as services (so they will start automatically when your instance launches); make sure everything is started up.
Now put your files in C:\xampp\htdocs\ and you're ready to go!
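To verify the stack is wired up, you can drop a small test script into C:\xampp\htdocs\. This assumes XAMPP's default MySQL account (root with an empty password), which you should change before exposing the instance to the internet:

<?php
// test.php: quick sanity check for the Apache + PHP + MySQL stack.
// Assumes XAMPP's default credentials (root, empty password); change
// these before going to production.
$mysqli = new mysqli('127.0.0.1', 'root', '');
if ($mysqli->connect_error) {
    die('MySQL connection failed: ' . $mysqli->connect_error);
}
echo 'PHP ' . PHP_VERSION . ' is up; MySQL reports version ' . $mysqli->server_info;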
Bonus Steps:
Set up Filezilla FTP Server (and open the required ports in both the instance's security group and the Windows Firewall) so you can upload/download files without having to go through Remote Desktop.
Get an Elastic IP and assign it to your instance, so its IP address will never change.
Get an SSL certificate so you can use HTTPS
The answer depends on the load that you are expecting and the resources you have to handle all the administration tasks.
If you expect heavy or variable loads, there are many reasons not to deploy a production PHP + MySQL application on a manually configured EC2 instance.
Here are some of the benefits of deploying to Elastic Beanstalk instead:
You get version control of each deployment.
You can scale up or down automatically if you need more/less instances to handle new load.
You get a load balancer in front of your EC2 instances with a bunch of out-of-the-box "recommended" configurations.
Regarding MySQL, if you go for an Amazon RDS instance you can handle replication, monitoring and automatic backups with pretty low effort. A lot of the configurations you would need to tweak are available through parameter groups.
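For the application code, pointing PHP at RDS is no different from any other MySQL connection; only the host changes. A sketch with a placeholder endpoint copied from the RDS console's "Endpoint" field:

<?php
// Sketch: connect to an RDS MySQL instance from PHP. The endpoint,
// database name and credentials below are placeholders.
$pdo = new PDO(
    'mysql:host=mydb.abc123xyz.us-east-1.rds.amazonaws.com;dbname=myapp;charset=utf8mb4',
    'dbuser',
    'dbpassword',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);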
On the other hand, if you want to have full control of everything going on on your server (meaning you have time to monitor, back up and do maintenance tasks, which is not my case!), or if you do not plan to have much traffic, or if you want the least expensive option, you should go with a low-cost EC2 instance.
In my experience (after two years of working on AWS with 10 production applications, I'm kind of a regular AWS user), pretty much every customization or change I needed on both RDS and Elastic Beanstalk I was able to tweak and get working, so I'm pretty satisfied with choosing the Elastic Beanstalk + RDS option.
Below are two links I found helpful for creating and updating an application with AWS Elastic Beanstalk:
https://aws.amazon.com/getting-started/tutorials/launch-an-app/
https://aws.amazon.com/getting-started/tutorials/update-an-app/
We need to execute an .exe file in a remote Windows Azure Server.
We call it from PHP with shell_exec. The .exe should create new files in two different folders on the server, generate entries in a database and return a string, but it doesn't work.
We don't have any problem executing it on our local server with Windows 7 Enterprise and IIS 7. That's why we thought it could be a permissions problem, so we created a .user.ini file with the following content:
safe_mode= off
safe_mode_exec_dir= off
Unfortunately, that doesn't work either.
Any suggestions?
Thanks in advance.
You are most probably working with Windows Azure Web Sites. This is high-density shared hosting with tightened security. If you need things like shell_exec, you should move to a Windows Azure Cloud Service (Web Role), where you have full control over the OS and web server / PHP settings.
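Before migrating, it's worth confirming the sandbox really is the blocker. A small diagnostic sketch (the .exe path and argument are placeholders):

<?php
// Diagnostic sketch: check whether shell_exec is available before blaming
// file permissions. On locked-down shared hosts it is often listed in
// disable_functions. The .exe path below is a placeholder.
$disabled = array_map('trim', explode(',', (string) ini_get('disable_functions')));
if (in_array('shell_exec', $disabled, true) || !function_exists('shell_exec')) {
    die('shell_exec is disabled on this host.');
}

$output = shell_exec('C:\\tools\\myjob.exe ' . escapeshellarg('some-argument'));
var_dump($output); // NULL usually means the command failed or produced no output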
Using a Cloud Service you will be able to use shell_exec. However, when you move to a Cloud Service you have to begin thinking about saving files in Azure Blob Storage, as the local storage for a Cloud Service is:
Not persistent
Not synchronized across role instances
If you don't yet know what a Role and a Role Instance are, or are a little confused, please go through this article.
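As a starting point for the Blob Storage part, here is a sketch assuming the microsoft/azure-storage-blob Composer package; the connection string and container name are placeholders:

<?php
// Sketch: write output files to Azure Blob Storage instead of local disk,
// so they survive instance recycling and are visible to all role instances.
// Assumes the microsoft/azure-storage-blob Composer package; connection
// string and container name are placeholders.
require 'vendor/autoload.php';

use MicrosoftAzure\Storage\Blob\BlobRestProxy;

$connectionString = 'DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...';
$blobClient = BlobRestProxy::createBlobService($connectionString);

// Upload a file produced by the .exe to a container named "results".
$content = fopen('C:\\output\\result.txt', 'r');
$blobClient->createBlockBlob('results', 'result.txt', $content);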
I'm trying to learn about node.js, and there are tons of examples out there, but one question I can't find an answer to (or an example of) is how it works with web hosts (I'm using inmotionhosting.com).
Say I have a basic website, www.url.com/index.php (note: I'm also using PHP). For that website to work, all I have to do is upload a file through the file manager on my web hosting site.
How does node.js work? Do I just upload a node.js file to the web host as well?
In all the examples, they use localhost with port 8000 or something. Can someone shed some light?
Thanks!
You will need at least VPS hosting to install node.js; shared hosting won't allow you to install any application on your own, unless the host gives you the option to do so.
Then it all depends on how you have configured it and what application node.js is serving; there's no default place to upload files unless a path is set, either by you or by the web host.
From nodejs.org:
Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices.
That description isn't specific to web use; for the web you should use a framework such as ExpressJS, where you can build your web application (or anything else). Your host should provide further information on how to manage packages in your node.js instance, configuration, etc.
Some useful links:
Domain API
ExpressJS
ExpressJS examples
Here is something I came across:
Hosting your node app?
Hosting Node Apps
nodeFu
Supported hosting providers?
Node Hosting
Just read the descriptions for each section.
I need to upload files from web browser to S3.
I evaluated many upload components and found something that I really like: a CGI program called FileChucker.
The only problem is, this program is geared toward upload to servers, not S3.
I'd like to upload directly to S3/EC2; I don't want to upload files to my server and then send them to S3.
I contacted FileChucker's authors to ask about compatibility with S3/EC2, and they said:
I'm not familiar with how S3/EC2 work on the backend, but FileChucker can be installed on any server that supports standard web technologies (namely Perl CGI), and it can save its uploaded files to any path accessible via that server. So if EC2 supports Perl, and if it exposes its S3 storage via the standard mechanism (i.e. a filesystem path), then it should work fine.
It's been a few days since I started investigating S3/EC2, so I can't say for sure. Can someone tell me if this condition is met?
if EC2 supports Perl, and if it exposes its S3 storage via the standard mechanism (i.e. a filesystem path), then it should work fine.
I believe EC2 does support Perl, but I'm not sure.
You can actually use a bog-standard HTTP <form> POST to upload straight to an S3 bucket.
http://aws.amazon.com/articles/1434
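The gist of that article: your server only signs an upload policy, and the browser POSTs the file straight to S3, so it never touches your own server. A PHP sketch of the signing step (the legacy Signature Version 2 scheme, as in the article; bucket and keys are placeholders):

<?php
// Sketch: sign a browser-based S3 POST upload policy (legacy Signature
// Version 2). Bucket name and keys below are placeholders.
$bucket    = 'my-upload-bucket';
$accessKey = 'AKIA...';
$secretKey = 'YOUR_SECRET_KEY';

$policy = base64_encode(json_encode([
    'expiration' => gmdate('Y-m-d\TH:i:s\Z', time() + 3600), // valid for 1 hour
    'conditions' => [
        ['bucket' => $bucket],
        ['starts-with', '$key', 'uploads/'],
        ['content-length-range', 0, 10485760], // max 10 MB
    ],
]));
$signature = base64_encode(hash_hmac('sha1', $policy, $secretKey, true));

// Embed $accessKey, $policy and $signature as hidden fields in the HTML
// <form> that POSTs directly to https://my-upload-bucket.s3.amazonaws.com/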
Some FTP clients also allow you to upload to S3. On the Mac, Transmit is one such client.
There's also S3FS if you want to mount an S3 bucket as a filesystem on Mac/Linux.