I have a PHP script that requires no web hosting features or disk space. The script simply retrieves information from a server, does some processing on it,
and then passes it on to a client (an iPhone app in this case).
The only catch is that when traffic gets high, the demand for bandwidth and speed rises accordingly.
Does anyone know of a high-speed, high-bandwidth service (apart from web hosting services) that lets you host such a PHP script on a static IP?
Thanks.
You may want to try some sort of cloud service where you can set up exactly the environment you need. Say your script needs a lot of RAM but only a little CPU power (or the other way around): you can provision exactly such a system. Amazon EC2 is one of many cloud computing providers out there.
On the performance point, you can use something like Facebook's HipHop to compile your PHP script to C++; then you have the performance you need.
A cloud solution is perfect. You can even write shell scripts that resize instances (more or less RAM) as demand goes up and down.
Like everyone here mentioned, cloud hosting is your best bet. It's slightly more expensive for resources and bandwidth than a dedicated server, but it is superior in performance, latency, and scalability. I have a similar setup for a current project running on the Rackspace cloud with 100K+ active users on a daily basis, and I have had no problems (it's been running for 6 months).
If your code is simple, don't use PHP!
You can consider:
Python
G-WAN server + C
Java
PHP is good for big projects because it's simple and fast to use, test, and debug...
I am currently using an AWS micro instance as a web server for a website that allows users to upload photos. Two questions:
1) When looking at my CloudWatch metrics, I have recently noticed CPU spikes, the website receives very little traffic at the moment, but becomes utterly unusable during these spikes. These spikes can last several hours and resetting the server does not eliminate the spikes.
2) Although seemingly unrelated, whenever I post a link to my website on Twitter, the server crashes (i.e., "Error Establishing a Database Connection"). Once I restart Apache and MySQL, the website returns to normal functionality.
My only guess is that the issue somehow stems from the limitations of the micro instance. Unfortunately, when I upgraded to a small instance, the site was actually slower, due to the fact that micro instances can burst to two EC2 compute units.
Any suggestions?
If you want to stay in the free tier of AWS (micro instance), you should offload as much as possible away from your EC2 instance.
I would suggest uploading the images directly to S3 instead of routing them through your web server (see an example here: http://aws.amazon.com/articles/1434).
S3 can also be used to serve most of your static assets (images, JS, CSS...) instead of your weak web server. You can also use these S3 files as the origin of an Amazon CloudFront (CDN) distribution to improve your application's performance.
Another service that can help you offload work is SQS (Simple Queue Service). Instead of handling every user request online, you can send some requests (upload done, for example) as messages to SQS and have a reader process them at its own pace. This is a good way to handle momentary load caused by several users working with your service simultaneously.
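The queueing idea can be sketched without any AWS dependencies. A minimal illustration of the pattern, with Python's standard-library queue standing in for SQS (a real deployment would use an SQS client instead):

```python
import queue
import threading

jobs = queue.Queue()  # stand-in for an SQS queue

def handle_upload(image_id):
    """Web tier: acknowledge immediately, enqueue the heavy work."""
    jobs.put({"action": "thumbnail", "image": image_id})
    return "202 Accepted"

processed = []

def worker():
    """Reader process: drains messages at its own pace."""
    while True:
        msg = jobs.get()
        if msg is None:  # shutdown sentinel
            break
        processed.append(msg["image"])
        jobs.task_done()

t = threading.Thread(target=worker)
t.start()
for i in range(3):
    handle_upload(i)
jobs.put(None)
t.join()
print("processed:", processed)
```

The web request returns as soon as the message is enqueued; the expensive work happens off the request path, which is exactly what keeps a small instance responsive under a burst.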
Another service is DynamoDB (a managed NoSQL database service). You can move most of your current MySQL data and queries to DynamoDB. Amazon DynamoDB also has a free tier that you can enjoy.
With the combination of the above, your micro instance can handle the few remaining dynamic pages until you need to scale your service with your growing success.
Wait… I'm sorry. Did you say you were running both Apache and MySQL Server on a micro instance?
First of all, that's never a good idea. Secondly, as documented, micros have low I/O and can only burst to 2 ECUs.
If you want to continue using a resource-constrained micro instance, you need to (a) put MySQL somewhere else, and (b) use something like Nginx instead of Apache as it requires far fewer resources to run. Otherwise, you should seriously consider sizing up to something larger.
I had the same issue: as far as I understand it, the problem is that AWS throttles you once you exceed a predefined usage level. They allow a small burst, but after that things become horribly slow.
You can test this by logging in and doing something CPU-intensive. If you use the CPU for a couple of seconds, the whole box becomes extremely slow. After that you have to wait, doing nothing at all, for things to get back to "normal".
That was the main reason I went with a VPS instead of AWS.
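One way to observe this throttling from inside a Linux guest is the "steal" column of /proc/stat, which counts CPU time the hypervisor withheld from the VM. A rough, Linux-specific sketch (field order per the proc(5) man page):

```python
def parse_steal(stat_line):
    """Parse the aggregate 'cpu' line from /proc/stat and return the
    fraction of time stolen by the hypervisor (steal / total).
    Fields: user nice system idle iowait irq softirq steal ..."""
    fields = stat_line.split()
    assert fields[0] == "cpu"
    values = [int(v) for v in fields[1:]]
    steal = values[7] if len(values) > 7 else 0
    total = sum(values)
    return steal / total if total else 0.0

if __name__ == "__main__":
    with open("/proc/stat") as f:
        print("steal fraction:", parse_steal(f.readline()))
```

A steal fraction that climbs during your "burst" test is a strong hint that the hypervisor, not your code, is the bottleneck.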
My web pages are hosted with company A. I have moved to company B. My account at company A will stay active for a few days. How can I test, and therefore compare, the speed of the two?
Thank you.
I manage the web server through cPanel, so I guess a PHP script or something similar would be ideal.
You can use JMeter; it's easy to use. You can build regression tests for the scenarios you actually care about, i.e. see how the web servers perform under different loads and compare the two companies.
Or you can use Pingdom Tools; even Firebug will show you how long each web server takes to load the same resources.
Note that your speed test will mostly depend on latency, i.e. your distance to one web server versus the other, plus how many applications are running on the server at company A versus the one at company B, if it's shared hosting.
As mentioned before:
See the 'hops' to your server with traceroute or tracert
There are some great server tools like Nagios which will give you a LOT of info about the server & network.
If they are your own servers you could even try a DDOS attack on them to see how much stress they can take.
You could simply run a PHP script with 50k loop iterations of different actions (file I/O, database queries, etc.) to see how the two servers compare, in milliseconds, on each kind of action.
You could set up a jsPerf script to run a few hundred AJAX requests and see how each server performs.
It really depends on what you suspect the problem to be. There are tools and methods to test each layer (network, storage, CPU, etc.).
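The 50k-iteration idea translates directly into any language. Here is a sketch of the same benchmark in Python, with an in-memory SQLite database standing in for the real database server; the absolute numbers are meaningless on their own and only matter relative to the other host:

```python
import sqlite3
import tempfile
import time

def bench(label, fn, iterations=50_000):
    """Run fn `iterations` times and report the total in milliseconds."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"{label}: {elapsed_ms:.1f} ms")
    return elapsed_ms

# file I/O: 50k small writes to a temporary file
with tempfile.NamedTemporaryFile(mode="w") as tmp:
    io_ms = bench("file write", lambda: tmp.write("x\n"))

# database: 50k trivial queries (SQLite stands in for the real DB here)
db = sqlite3.connect(":memory:")
db_ms = bench("db query", lambda: db.execute("SELECT 1").fetchone())
```

Run the identical script on both hosts and compare the per-action timings, rather than a single aggregate number, so you can tell disk-bound slowness apart from database-bound slowness.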
If you are using Apache, have a look at ab, the Apache benchmarking tool.
You can also run tracert or traceroute, depending on your OS, to test the network path and latency to each server.
For the server speed itself, you can instrument your scripts directly (start a timer at the beginning of the page load and display the elapsed time when it ends).
There are several things that go into "server speed", not only the horsepower :)
I am starting to redesign my current website project and would like to replace it with an architecture that can scale easily and performs well.
Our prototype runs on a PHP framework (CakePHP) + MySQL Server + a 1 GHz virtual server running Windows Server 2008 (feasibility test).
This system will not go online since it would not be able to meet the requirements.
Should be able to handle LOTS of HTTP requests per second (scalability via Hadoop, maybe?); this may become a bottleneck.
Should be able to handle many simultaneous uploads per minute (media files), i.e. filesystem reads and writes, plus media conversion (at the moment something like the LAME encoder; are there faster tools?) => bottleneck
The database gets many queries per second (clustering somehow, via SQL Cluster or any cheaper product available?)
USE of CDN for static mediafiles
UNIX System?
Should file compression be used? CPU vs. bandwidth cost?
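On the compression question, the CPU-vs-bandwidth tradeoff is easy to measure directly rather than guess. A small sketch using Python's gzip (the payload here is a made-up HTML-like blob; real pages compress differently):

```python
import gzip
import time

def compression_tradeoff(payload: bytes, level: int = 6):
    """Return (size ratio, seconds spent) for gzip at the given level."""
    start = time.perf_counter()
    compressed = gzip.compress(payload, compresslevel=level)
    seconds = time.perf_counter() - start
    return len(compressed) / len(payload), seconds

page = b"<html>" + b"lorem ipsum dolor sit amet " * 2000 + b"</html>"
for level in (1, 6, 9):
    ratio, secs = compression_tradeoff(page, level)
    print(f"level {level}: {ratio:.3f} of original size, {secs * 1000:.2f} ms")
```

Text assets typically shrink dramatically for very little CPU, so gzip is almost always worth it for HTML/JS/CSS; already-compressed media (JPEG, MP3) gains nothing and should be served uncompressed.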
The scary parts are the HTTP requests and the media upload & conversion.
I started researching on www.highscalability.com for some good ebooks and would appreciate it if some pros out here could give me some advice or helpful links.
You could take a look at those books:
High Performance Web Sites: Essential Knowledge for Front-End Engineers (Steve Souders)
Even Faster Web Sites: Performance Best Practices for Web Developers (Steve Souders)
Building Scalable Web Sites (Cal Henderson)
Use Linux as the operating system.
Nginx as the web server (several instances can be load-balanced); Nginx can handle a lot of HTTP requests!
Redis for caching.
Solr or Sphinx for searching.
You could use MongoDB to store the files, or take a look at MogileFS.
For media conversions you could try node.js:
http://www.benfarrell.com/2012/06/14/morph-your-media-with-node-js/.
There are a few conversion(?) libraries at github:
http://github.com/TooTallNate/node-lame
Perhaps a few servers with Gearman installed could handle the conversion as well.
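Whichever worker system you pick, the conversion step itself usually boils down to a CLI invocation. A hedged sketch around the LAME encoder (it assumes the `lame` binary is installed; the job-handler wiring is illustrative, not Gearman-specific):

```python
import subprocess

def lame_command(src, dst, bitrate=128):
    """Build the argv for an MP3 encode with the LAME CLI."""
    return ["lame", "-b", str(bitrate), src, dst]

def convert_job(src, dst):
    # A worker (e.g. a Gearman job handler) would call this off the
    # request path, so uploads return to the user immediately.
    subprocess.run(lame_command(src, dst), check=True)

print(lame_command("in.wav", "out.mp3"))
```

Keeping the command construction separate from the execution makes the worker easy to unit-test and easy to swap for a faster encoder later.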
Use Linux (CentOS or Debian), not Windows.
Use pseudo-static URLs (URL rewriting).
Use caching.
If browsing speed is slow, you need a CDN.
If you have the budget, more CPUs are ideal.
You can use traffic monitoring software to judge whether you need more bandwidth.
I have a PHP spider script that fetches website contents and saves them to my database. It takes about 50 days to finish its job. I have a virtual dedicated server for it, but as you can see, execution takes a huge amount of time. So, can cloud computing help with my task? I searched for cloud computing with PHP, but could not find a good getting-started guide. Can you help me with how to start using cloud computing to run PHP scripts faster?
You can go with Amazon EC2. Some, or should I say many, big companies in the world use Amazon EC2.
But before scaling up that far, I suggest you rewrite your code to be multi-threaded/multi-core, using Python, C, or Java, whichever is your favorite.
For more information/startup guide/pricing for Amazon EC2 just check out this link:
http://aws.amazon.com/ec2/
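A spider is mostly I/O-bound, so concurrency usually buys far more than a bigger machine. Here is a sketch of the idea in Python (as the answer suggests); `process` is a hypothetical stand-in for the real fetch-and-store step:

```python
from concurrent.futures import ThreadPoolExecutor

def process(url: str) -> str:
    """Hypothetical stand-in: the real spider would download the page
    and write its contents to the database here."""
    return f"stored:{url}"

def crawl(urls, workers: int = 16):
    """Crawling is I/O-bound, so a thread pool overlaps the network waits."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process, urls))

print(crawl(["http://a.example", "http://b.example"]))
```

With 16 workers, time spent waiting on one slow site is overlapped with work on fifteen others; a 50-day sequential crawl can shrink by roughly the worker count before bandwidth or politeness limits kick in.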
I recently experienced a flood of traffic on a Facebook app I created (mostly for the sake of education, not with any intention of marketing)
Needless to say, I did not think about scalability when I created the app. I'm now in a position where my meager virtual server hosted by MediaTemple isn't cutting it at all, and it's really coming down to raw I/O of the machine. Since this project has been so educating to me so far, I figured I'd take this as an opportunity to understand the Amazon EC2 platform.
The app itself is created in PHP (using Zend Framework) with a MySQL backend. I use application caching wherever possible with memcached. I've spent the weekend playing around with EC2, spinning up instances, installing the packages I want, and mounting an EBS volume to an instance.
But what's the next logical step that is going to yield good results for scalability? Do I fire up one AMI instance for MySQL and one for the Apache service? Or do I just replicate the instances as many times as I need and do some sort of load balancing on the front end? Ideally, I'd like a centralized database because I aggregate statistics across all database rows; however, this is not a hard requirement (there are probably some application-specific workarounds I could come up with).
I know there's probably no straightforward answer here, so opinions and suggestions are welcome.
So many questions - all of them good though.
In terms of scaling, you've a few options.
The first is to start with a single box. You can scale upwards with a more powerful box; EC2 has various instance sizes. This involves a server migration each time you want a bigger box.
Easier is to add servers. You can start with a single instance for Apache & MySQL. Then when traffic increases, create a separate instance for MySQL and point your application to this new instance. This creates a nice layer between application and database. It sounds like this is a good starting point based on your traffic.
Next you'll probably need more application power (web servers) or more database power (MySQL cluster etc.). You can have your DNS records pointing to a couple of front boxes running some load balancing software (try Pound). These load balancing servers distribute requests to your webservers. EC2 has Elastic Load Balancing which is an alternative to managing this yourself, and is probably easier - I haven't used it personally.
Something else to be aware of - EC2 has no persistent storage. You have to manage persistent data yourself using the Elastic Block Store. This guide is an excellent tutorial on how to do this, with automated backups.
I recommend that you purchase some reserved instances if you decide EC2 is the way forward. You'll save yourself about 50% over 3 years!
Finally, you may be interested in services like RightScale which offer management services at a cost. There are other providers available.
First step is to separate concerns. I'd split off with a separate MySQL server and possibly a dedicated memcached box, depending on how high your load is there. Then I'd monitor memory and CPU usage on each box and see where you can optimize where possible. This can be done with spinning off new Media Temple boxes. I'd also suggest Slicehost for a cheaper, more developer-friendly alternative.
Some more low-budget PHP deployment optimizations:
Using a more efficient web server like nginx to handle static file serving and then reverse proxy app requests to a separate Apache instance
Implement PHP with FastCGI on top of nginx using something like PHP-FPM, getting rid of Apache entirely. This may be a great alternative if your Apache needs don't extend far beyond mod_rewrite and simpler Apache modules.
If you prefer a more high-level, do-it-yourself approach, you may want to check out Scalr (code at Google Code). It's worth watching the video on their web site. It facilitates a scalable hosting environment using Amazon EC2. The technology is open source, so you can download it and implement it yourself on your own management server. (Your Media Temple box, perhaps?) Scalr has pre-built AMIs (EC2 appliances) available for some common use cases.
web: Utilizes nginx and its many capabilities: software load balancing, static file serving, etc. You'd probably only have one of these, and it would probably implement some sort of connection to Amazon's EBS, or persistent storage solution, as mentioned by dcaunt.
app: An application server with Apache and PHP. You'd probably have many of these, and they'd get created automatically if more load needed to be handled. This type of server would hold copies of your ZF app.
db: A database server with MySQL. Again, you'd probably have many of these, and more slave instances would get created automatically if more load needed to be handled.
memcached: A dedicated memcached server you can use to have centralized caching, session management, et cetera across all your app instances.
The Scalr option will probably take some more configuration changes, but if you feel your scaling needs are accelerating quickly, it may be worth the time and effort.