Currently I'm working with PHP, and I find that I can serve a web page using just the PHP CLI, so I don't understand exactly why we have to install an additional server like Apache or Nginx.
I don't know why your question was voted down. I see it as an opportunity to focus on a slightly broader but highly related question: Why should we be extremely careful about what software we allow onto public-facing infrastructure? And, even more generally, what sort of software is okay to place on public-facing infrastructure? And its corollary: what does good server software look like?
First off, there is no such thing as secure software. This means you should always hold a very skeptical view of anything that opens a single port on a computer to enable network connections (in either direction). However, there is a very small set of software that has had enough eyeballs on it to guarantee a certain minimum level of assurance that things will probably not go horribly wrong. Apache is the most battle-tested server out there and Nginx comes in at a close second as far as modern web servers are concerned. The built-in PHP HTTP server is not a good choice for a public-facing system let alone testing production software as it lacks the qualities of good network server design and may have undiscovered security vulnerabilities in it. For those and other reasons, the developers include a warning against using the built-in PHP server. It was added because users kept asking for it but that doesn't mean it should be used.
It is also a good idea not to trust network servers written by someone who doesn't know what they are doing. I frequently see ill-conceived network servers written in Node or Go, typically WebSocket-based solutions or servers used to work around some issue with another piece of software, that implicitly open security holes in the infrastructure even if the author didn't intend to do so. Just because someone can do something doesn't mean that they should and, when it comes to writing network servers, they shouldn't. Frequently those servers are proxied behind Apache or Nginx, which affords some defense against standard attacks. However, once an attacker gets past the defenses of Apache or Nginx, it's up to the software to provide its own defenses, which, sadly, are almost always significantly lacking. As a result, any time I see a proxied service running on a host, I brace myself for the inevitable security disaster that awaits - Ruby, Node, and Go developers being the biggest offenders. The moment a developer decides to write a network server is the moment they've probably chosen the wrong strategy, unless they have a very specific reason to do so AND are aware of and prepared to defend against a wide range of attack scenarios. A developer needs to be well-versed in a wide variety of disciplines before taking on the extremely difficult task of writing a network server, scalable or otherwise. It is my experience that few developers out there are actually capable of that task without introducing major security holes into their own or their users' infrastructure. While the PHP core developers generally know what they are doing elsewhere, I have personally found several critical bugs in their core networking logic, which shows that they are collectively lacking in that department. Therefore their built-in web server should be used sparingly, if at all.
Beyond security, Apache and Nginx are designed to handle "load" far better than the built-in PHP server. "Load" here is the answer to the question, "How many requests per second can be serviced?" The answer is actually extremely complicated. Depending on code complexity, what is being hosted, what hardware is in use, and what is running at any point in time, a single host can handle anywhere from 20 to 20,000 requests per second, and that number can vary greatly from moment to moment. Apache comes with a tool called Apache Bench (ab) that can be used to benchmark the performance of a web server. However, benchmarks should always be taken with a grain of salt and viewed from the perspective of "Can we get this application to go any faster?" rather than "My application is faster than yours."
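For example, a quick benchmark of a local page with ab might look like this (the URL and request counts are just placeholders):

ab -n 1000 -c 50 http://localhost/index.php

The -n option is the total number of requests and -c is how many are issued concurrently; the report includes a requests-per-second figure, which is the "load" number being discussed here.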
As far as developing software in PHP goes (since SO is a programming question site), I recommend mirroring your production environment as closely as possible. If Apache will be running remotely, then running Apache locally provides the best simulation of the real thing so that there aren't a bunch of last-minute surprises. PHP code running under the Apache module can behave significantly differently from PHP code running under the built-in PHP server (e.g. $_SERVER differences)!
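As a quick illustration of those differences, a throwaway script like the following (the file name probe.php is made up) can be dropped into both environments and the output compared:

<?php
// probe.php - dump the server-provided environment so two setups can be compared.
// Keys such as SERVER_SOFTWARE, DOCUMENT_ROOT, and PATH_INFO often differ between
// Apache/PHP-FPM and the built-in "php -S" server.
header('Content-Type: text/plain');
ksort($_SERVER);
foreach ($_SERVER as $key => $value) {
    echo $key, ' = ', (is_scalar($value) ? $value : json_encode($value)), "\n";
}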
If you are like me and don't like setting up Apache and PHP, and don't need Apache running all the time, I maintain a set of scripts for setting up portable versions of Apache, PHP, and MariaDB (roughly equivalent to MySQL) for Windows over here:
https://github.com/cubiclesoft/portable-apache-maria-db-php-for-windows/
If your software application is actually intended to be run using the built-in PHP server (e.g. a localhost only server), then I highly recommend introducing a buffer layer such as the CubicleSoft WebServer class:
https://github.com/cubiclesoft/ultimate-web-scraper/
By using a PHP userland class like that one, you can gain certain assurances that the built-in PHP server cannot provide while still being a pure PHP solution (i.e. no extra dependencies): fewer, if any, buffer overflow opportunities; the server is interpreted through the Zend Engine, resulting in fewer rogue code execution opportunities; and it has more features than the built-in server, including complete customization of the server request/response cycle itself. PHP itself can start such a server during OS boot by utilizing a tool similar to Service Manager:
https://github.com/cubiclesoft/service-manager/
Of course, all of that means a user has to trust your application's code, which opened a port, to run on their computer. For example, what happens if a website starts port scanning localhost ports via the user's web browser? And, if it does find the port that your software is running on, can that website start deleting files or run code that installs malware? It's the unusual exploits that will really trip you up. A "zero open ports" plus "disconnected network cable/disabled WiFi" strategy is the only known way to truly secure a device. Every open port and established connection carries risk.
Good network-enabled software will have been battle-tested and hardened against a wide range of attacks. Writing such software is a responsibility that takes a lot of time to get right and it will generally show if it is done wrong. PHP's built-in server feels sloppy and lacks basic configuration options. I can't recommend its use for any reasonable purpose.
If you refer to the PHP documentation:
Warning
This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
http://php.net/manual/en/features.commandline.webserver.php
So yes, as it states, this is a good tool for testing purposes. You can quickly start a server and test your scripts in your browser. But that does not mean it provides all of the features you get with a production-level server like Apache or Nginx :)
You can use the built-in server in your local development environment. But you should use a more secure, feature-rich web server in your production environment, which demands much more in terms of security, handling a large number of requests, etc.
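For local development, that can be as simple as the following; the port and directory are just examples:

cd /path/to/your/project
php -S localhost:8000

Then browse to http://localhost:8000/yourscript.php and the built-in server will run your script directly, with no Apache or Nginx configuration required.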
Related
I have implemented an expert advisor using the MQL4 language to be executed in MetaTrader.
Now, if I need to execute it, I always need to run MetaTrader and attach my EA program to a live currency pair graph in it.
I want to know whether there is a method to execute MQL4 scripts in servers so that I do not need to keep my computer always on. I googled this question, but I could not find an appropriate answer to it.
I found there is a way to transfer data from MetaTrader to a web server (MQL to PHP), but I have no idea whether it is useful for solving my problem (http://mql4-php.iinuu.eu/).
Thanks in advance.
Yes, there are a few DLL-based methods to transfer "just" DATA
A ZeroMQ DLL for socket-based messaging approaches.
Windows raw sockets for low-level socket programming.
A few other, DLL-based, tools for passing data to/from remote or parallel processes.
No, there are no known methods to run MQL4-CODE on a server
Each MQL4 source-code file is first compiled into an .EX4 file. Such "executable" files are loaded and executed by a similarly proprietary piece of software -- the MetaTrader4 Terminal. So far, there is no known server-process implementation of this functionality, and MetaQuotes, Inc. neither sells such software nor shows any visible effort to release it. For legal reasons, there are unlikely to be any open-source programmes that work in this direction either, since similar efforts have triggered legal action in the name of protecting intellectual property whenever the non-published nature of the data transfers and/or operations distributed between the MetaTrader4 Terminal [localhost-side] and MetaTrader4 Server [broker-side] programmes was touched, analysed or re-engineered.
But there is a way to achieve what you want
It is common practice to run the localhost-side piece of software -- the MetaTrader4 Terminal -- on a remote machine that is kept running 24/7/365 in a professional data centre.
Using this kind of approach, your MQL4 code still runs natively inside a MetaTrader4 Terminal process; however, the machine (a Windows O/S based machine) is virtualised into a VM and hosted in the data centre's infrastructure.
There are nevertheless some steps and measures needed to protect your privacy and your intellectual property rights when considering the VM/hosted mode of operation for your EA/script.
This mode of operation allows you to connect from your localhost to the data centre only when you want to visually check, manually correct, or modify your code, which otherwise keeps running non-stop in the MetaTrader4 Terminal.
Regarding the following requirement:
"I want to know whether there is a method to execute MQL4 scripts in servers so that I do not need to keep my computer always on."
You can subscribe to a VPS (Virtual Private Server) service to which you can attach your EA (.ex4) files. Basically, it acts as server hosting (but a really small instance, just enough to run your MT4 Terminal).
There are many VPS offerings. Just google Metatrader4 VPS.
In fact, MetaQuotes itself also offers this service, straight from your MT4. Once you subscribe to that service and attach your .EX4, you can then switch off your PC and the EA will still be running on the VPS.
You can find details here Link.
Most brokers nowadays offer Virtual Private Server (VPS) solutions, which aim to reduce the latency and slippage on your trades. This means that your system will be "virtually" closer to the broker's services, reducing the time it takes for pricing and execution orders to travel from your VPS to the broker's servers.
I was recently curious about PHP 5.4's built-in webserver. On the surface it seems as though, while rather barebones, with enough work it could be possible to distribute PHP applications that traditionally depend on a separate web server, like WordPress, as standalone scripts that you could just run with php -S localhost:80 app.php (or, more likely, './wordpress.sh'). They might even ship with their own PHP interpreter that has all the features the application needs, which would obviate the need for targeting many different versions of the language.
It's re-inventing the wheel somewhat, but it would certainly increase portability and reduce complexity for the end user.
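As a rough sketch of how such a standalone distribution might work, the built-in server accepts a router script; the file names below are made up for illustration:

<?php
// router.php - hypothetical entry point for a self-contained app.
// Start it with: php -S localhost:8080 router.php
$path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$file = __DIR__ . '/public' . $path;

// Returning false tells the built-in server to deliver the requested
// static file (CSS, JS, images) itself.
if ($path !== '/' && is_file($file)) {
    return false;
}

// Everything else goes through the application's front controller.
require __DIR__ . '/public/index.php';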
However, I saw the following on the documentation page:
This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
This would obviously refer to issues like proper filesystem security and serving the correct HTTP headers, which can be worked through. However, is there more to it? Are there inherent security concerns and/or technical limitations with using PHP's built-in web server in a production environment that can't be worked around? If so, what are they?
I can think of plenty of operational issues why you wouldn't want to do this:
Logging
Rewrites
Throttling
Efficiency (not tested, but I'm guessing Nginx is a lot faster than PHP's built-in non-optimized server)
Integration with anything else you have that extends Nginx, Apache, and IIS (things like New Relic)
However, there is a solution where you get most of the benefit of running PHP with its built-in web server while getting most of the benefit of running a web server out front. That is, you could use a server like Nginx as a reverse proxy to PHP's built-in web server. In this situation, HTTP becomes a replacement for FastCGI, analogous to common usages of the built-in HTTP server in Node.js applications.
Now, I can't speak to the specifics of the warning in the documentation as I am not one of the PHP authors. If it were me, I'd not run PHP alone for the reasons above, but I might consider running it behind a real web server like Nginx. For me though, setting up PHP with PHP-FPM and what not isn't that difficult, and I'll take that over guessing at the seaworthiness of a built-in server that is documented to be for testing only.
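A minimal sketch of that reverse-proxy arrangement, assuming the built-in server was started on 127.0.0.1:8000 (the names, paths, and ports are placeholders):

server {
    listen 80;
    server_name example.local;

    location / {
        # Forward everything to the built-in PHP server started with:
        #   php -S 127.0.0.1:8000 -t /var/www/app/public
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}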
The problem with PHP's built-in web server is that it is single-threaded!
That has performance and security implications. The performance implication, obviously, is that only one user can be served at a time (until one request finishes, another cannot start).
The security implication is that it's pretty easy to DoS that server, using a simple open socket that sends tiny amounts of data (similar to Slowloris).
It's useful for simple, one-page, non-interactive applications that have no risk of denial of service.
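You can see the single-threaded behaviour for yourself with a trivial script (the file name is made up); by default the built-in server handles one request at a time:

<?php
// slow.php - start the server with: php -S localhost:8000
// Open this script in two browser tabs: the second request does not even
// begin until the first one's sleep() has finished.
sleep(10);
echo "Done at " . date('H:i:s');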
PHP's built-in server only supports HTTP/1.0, which means clients have to make a new TCP/IP connection for every request. This is very slow.
It is not intended for production use and may not be able to gracefully handle crashes and memory leaks, raising stability concerns. More importantly, PHP itself warns about this explicitly:
Warning This web server was designed to aid application development. It may also be useful for testing purposes or for application demonstrations that are run in controlled environments. It is not intended to be a full-featured web server. It should not be used on a public network.
http://php.net/manual/en/features.commandline.webserver.php
This is probably an odd question, but it is something that I have been wondering about lately.
I have an application that requests a page (a PHP script that works like an API and outputs a simple string) from my web server every second. That seems like quite a lot of traffic, and I was wondering if any issues could arise from that.
For instance, I should probably pay attention to the web server logging, to make sure it doesn't fill up the disk. RAM/CPU isn't a problem at this point. APC is enabled. The scripts are optimized. What else should I look into, if anything?
This is probably the same situation I would encounter with a lot of visitors coming to my site, but I have never had that experience yet.
Thanks!
Every second? That's 86,400 times a day per client. That's a lot for PHP! But it should be okay unless you have multiple clients or some kind of I/O-heavy or database-backed system behind it.
Otherwise, PHP 5 [with PHP-FPM] and APC on Nginx sounds well suited for this use, if you must use PHP.
If this component of your application aggregates data without a database, by mining other data sources over the internet, you may want to check with the data providers that realtime polling is permissible and to ensure your addresses are whitelisted explicitly.
Firewalls aren't to be forgotten: use a permit-by-exception security policy, i.e. iptables -t filter -P INPUT DROP, fine-tuned down to the packet level using the iptables -t raw table as well. One of the greatest threats to mission-critical web server performance is the ability of an adversary to identify a node as critical by analyzing traffic frequency and volume. Closing all non-critical ports at the lowest level is an easy defense.
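A rough sketch of such a permit-by-exception ruleset might look like this; the port and client address are only examples:

iptables -t filter -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -s 203.0.113.10 -j ACCEPT

The default policy drops everything inbound, loopback and established traffic are allowed back in, and only the one known client may reach the API port.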
Another option is automated failover, strung together with node monitoring for this server and rapid deployment of a drop-in replacement appliance using a cloud VPS provider such as DigitalOcean or Amazon Web Services. This is an alternative to running redundant servers (or instances) permanently, and it is fun to set up.
Applications which require realtime request processing with failover are often seen in the financial industry in high-value risk environments, as well as in the security and transportation industries in safety-critical risk environments. If either of these scenarios applies to you, you may wish to consider rebuilding this component of your application from the ground up using a specially-purposed language such as Ada, Erlang, or Haskell. This would allow you to optimize resource utilization at a lower level and therefore obtain optimum performance. Depending on your risk environment, this may or may not be worthwhile for you.
I am developing a server application that mostly serves registered users and provides services upon request.
It was built from scratch, and I am not fundamentally a PHP developer.
I must say that this is the first time I am deploying a production application to the web.
The end of phase 1 is drawing near, and I would like to know what you think is the best approach to deploying this server.
How do I debug on a production server? How do I log all my actions to log files?
How should I handle the resources and traffic? How would I know if the server is reaching its limits?
Any approaches to evening out the load? I have one DB; is it possible to let another server use the same DB to take load off the first one?
(Of course these are general and long-shot questions, but I'd like to have a bigger picture of what I am doing here.)
Is there anything I should know about security measures beyond my own code? For example, can I trace a hacking attempt?
If you are asking all these basic questions, then you should not be deploying a production server. But needless to say, you are.
It's usually never a good idea to debug on a production server; that is what QA, development, or testing servers are for. If you are deploying a combination of Apache, PHP, and MySQL, then for PHP there is usually a php_error.log file for you to look at. Its location is based on your httpd.conf.
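If you want PHP's errors to end up in a predictable place on the production box, a minimal sketch looks like this, either via php.ini or at the top of a bootstrap file (the log path is just an example):

<?php
// Production-style error handling: hide errors from visitors, log them instead.
ini_set('display_errors', '0');
ini_set('log_errors', '1');
ini_set('error_log', '/var/log/php/app_error.log'); // pick a path your web server user can write to

// Your own diagnostic messages land in the same file:
error_log('user registration failed for id=42');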
Handling resources and traffic depends on your volume; you have to answer that question yourself. Google "MySQL configuration optimization" and you'll find many helpful tips to correctly configure and optimize its speed, and when you hit your limit, you'll know.
Security is another very vague question; security differs based on your needs, such as a bank vs. a mom-and-pop website. Research network security, and always keep this in mind: don't overdo it; security is good enough when it accomplishes the task.
As mentioned, you shouldn't debug on a production server; all testing should have been completed in your development environment. However, of course you should thoroughly test everything once it is on the production server and fix any issues that may appear. By making sure your development environment is as close as possible to the live one in terms of settings etc., you can mitigate the risks, but you can never totally eradicate all potential issues.
Depending on your server you can try running a command such as "top" or "topaz" at the command line; this works on a Unix box if it has the correct installation and will tell you how much CPU is free and how much is being used. This gives a rough idea of whether it can handle the traffic you are throwing at it. Handling traffic and managing resources is a huge area in itself; there's a lot that can be done, such as load balancing if you have multiple servers, and you may find VMware helpful here also. There are also call-gapping techniques that can be employed depending on what your application is for and who is using it. And yes, you can have one database shared between more than one server.
There is professional monitoring software you can buy to show how busy your servers are that may help; just search for "monitoring software", for example.
Security is another huge area, and the solutions would depend on what you are deploying and who is going to use it. You should be aware of all the methods of attack that your application is likely to be subjected to and have planned your code to cope with them, for example SQL injection, session hijacking, etc.
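For the SQL injection case specifically, the usual defence in PHP is prepared statements; here is a minimal sketch using PDO (the database credentials, table, and column names are made up):

<?php
// Never interpolate user input into SQL; bind it as a parameter instead.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$stmt = $pdo->prepare('SELECT id, email FROM users WHERE username = :username');
$stmt->execute([':username' => $_GET['username'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);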
You need contingency plans in case hackers compromise your site and ideally disaster recovery plans as well, depending on how critical your application is.
Best advice is to plan everything thoroughly before you start and get your bosses to approve the plans to cover yourself.
If you have more precise questions I can give more precise answers. I used to work for a global bank and have experience of releasing critical code to production servers, with all the red tape that goes with that
;-)
This has been bugging me for a while now.
In a deployed PHP web application, one can upload a changed PHP script and have the updated file picked up by the web server without having to restart.
The problem? Ruby, Groovy, & Python, etc. are all "better" than PHP in terms of language expressiveness, concision, power, ...your-reason-here.
Currently, I am really enjoying Groovy (via Grails), but the reality is that the JVM does not do well (at all) with dynamic reloading of application code in production. Basically, PermGen out-of-memory errors are a virtual guarantee, and that means the application can crash at any time -- not good.
Ruby frameworks seem to have this solved somewhat from what I have read: Passenger has an option to dynamically reload changed files in polled directories on the next request (thus preventing connected users from being disconnected, session lost, etc.).
Standalone Python I am not sure about at all; it may, like PHP, allow dynamic reloading of Python scripts without a web server restart.
As far as our web work is concerned, invariably clients wind up wanting to make changes to a deployed application regardless of how detailed and well planned the spec was. Telling the client, "sure, we'll implement that [simple] change at 4AM tomorrow [so as to not wreak havoc with connected users]", won't go over too well.
As of 2011, where are we in terms of dynamic reloading and scripting languages? Are we forever doomed, relegated either to the convenience of PHP or to the joys of non-PHP languages and being forced to restart a deployed application?
BTW, I am not at all a fan of JSPs, GSPs, and Ruby, Python templating equivalents, despite their reloadability. This is a cake & eat it too thread, where we can make a change to any aspect of the application and not have to restart.
You haven't specified a web server. If you're using Apache, mod_wsgi is your best bet for running Python web apps, and it has a reloading mechanism that doesn't require a server restart.
I think you're making a bigger deal out of this than it really is.
Any application for which it is that important that it never be down for 1/2 a minute (which is all it takes to reboot a server to pick up a file change) really needs to have multiple application server instances in order to handle potential failures of individual instances. Once you have multiple application servers to handle failures, you can also safely restart individual instances for maintenance without causing a problem.