How to execute an MQL4 program on a server? - php

I have implemented an expert advisor (EA) in the MQL4 language that runs in MetaTrader.
Right now, whenever I need to execute it, I have to run MetaTrader and attach my EA to a live currency-pair chart in it.
I want to know whether there is a way to execute MQL4 programs on a server, so that I do not need to keep my computer on all the time. I googled this question, but I could not find an appropriate answer.
I found there is a way to transfer data from MetaTrader to a web server (MQL to PHP), but I have no idea whether it helps solve my problem (http://mql4-php.iinuu.eu/).
Thanks in advance.

Yes, there are a few DLL-based methods to transfer "just" DATA (a minimal PHP receiver is sketched below this list):
ZeroMQ DLL for socket-based messaging approaches.
Windows raw sockets for low-level socket programming.
A few other DLL-based tools for passing data to/from remote or parallel processes.
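For illustration, here is a hedged sketch of what the PHP side of such a data bridge (in the spirit of the mql4-php project linked in the question) might look like. The field names, the log path, and the idea that the EA POSTs each tick are assumptions, not details of any specific bridge:

```php
<?php
// Hypothetical receiving endpoint: an EA in MetaTrader4 POSTs a tick
// (symbol/bid/ask are assumed field names) and this script appends it
// to a log file on the web server.
if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    $symbol = $_POST['symbol'] ?? 'UNKNOWN';
    $bid    = (float) ($_POST['bid'] ?? 0);
    $ask    = (float) ($_POST['ask'] ?? 0);

    file_put_contents(
        __DIR__ . '/ticks.log',
        sprintf("%s %s bid=%.5f ask=%.5f\n", date('c'), $symbol, $bid, $ask),
        FILE_APPEND | LOCK_EX
    );
    echo 'OK'; // the EA can check this acknowledgement
}
```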
No, there are no known methods to run MQL4-CODE on a server
Each MQL4 source file is first compiled into an .EX4 file. Such "executables" are then loaded and run inside an equally proprietary piece of software -- the MetaTrader4 Terminal. So far there is no known server-process implementation of this runtime, and MetaQuotes, Inc. neither sells one nor shows any visible effort to release such software. For legal reasons, open-source programmes working in this direction are also unlikely: similar efforts have triggered legal action, in the name of protecting intellectual property, whenever the non-published nature of the data transfers and/or operations distributed between the MetaTrader4 Terminal [localhost-side] and MetaTrader4 Server [broker-side] programmes was touched, analysed, or re-engineered.
But there is a way to get what you want
It is common practice to run the localhost-side piece of software -- the MetaTrader4 Terminal -- on a remote machine that is kept running 24/7/365 in a professional DataCentre.
With this approach, your MQL4 code still runs natively inside a MetaTrader4 Terminal software process; the difference is that the machine (a Windows O/S based machine) is virtualised into a VM and hosted in a DataCentre infrastructure.
There are nevertheless some steps and measures needed to protect your privacy and your intellectual-property rights once you consider this VM-hosted mode of operating your EA/script.
Operating this way, you connect from your localhost to the DataCentre only when you want to visually check or manually correct or modify your code, which otherwise keeps running non-stop in the remote MetaTrader4 Terminal.

Noting the following requirement:
"I want to know whether there is a method to execute MQL4 scripts in
servers so that I do not need to keep my computer always on."
You can subscribe to VPS (Virtual Private Server) services to which you can attach your EA (.ex4) files. Basically, it acts as server hosting (but a really small one, just enough to run your MT4 Terminal).
There are many VPS offerings. Just google "MetaTrader4 VPS".
In fact, MetaQuotes itself offers this service, straight from your MT4 Terminal. Once you subscribe to it and attach your .EX4, you can switch off your PC and the EA will keep running on the VPS.

Most brokers nowadays offer Virtual Private Server (VPS) solutions, which aim to reduce the latency and slippage on your trades. This means that your system will be "virtually" closer to the broker's servers, reducing the time it takes for pricing data and execution orders to travel between your VPS and the broker.

Related

run php as a server without Apache

Currently I'm working with PHP, and I find that I can serve a web page using just the PHP CLI, so I don't understand exactly why we have to install an additional server like Apache or Nginx.
I don't know why your question was voted down. I see it as focusing on a slightly broader but highly related question: why should we be extremely careful about what software we allow onto public-facing infrastructure? More generally, what sort of software is okay to place on public-facing infrastructure, and, as a corollary, what does good server software look like?
First off, there is no such thing as secure software. This means you should always hold a very skeptical view of anything that opens even a single port on a computer to enable network connections (in either direction). However, there is a very small set of software that has had enough eyeballs on it to guarantee a certain minimum level of assurance that things will probably not go horribly wrong. Apache is the most battle-tested server out there, and Nginx comes in a close second as far as modern web servers are concerned. The built-in PHP HTTP server is not a good choice for a public-facing system, let alone for testing production software, as it lacks the qualities of good network-server design and may have undiscovered security vulnerabilities. For those and other reasons, the developers include a warning against using the built-in PHP server. It was added because users kept asking for it, but that doesn't mean it should be used.
It is also a good idea not to trust network servers written by someone who doesn't know what they are doing. I frequently see ill-conceived network servers written in Node or Go, typically WebSocket-based solutions or servers used to work around some issue with another piece of software, that implicitly open security holes in the infrastructure even if the author didn't intend to. Just because someone can do something doesn't mean they should, and when it comes to writing network servers, they shouldn't. Frequently those servers are proxied behind Apache or Nginx, which affords some defense against standard attacks. However, once an attacker gets past the defenses of Apache or Nginx, it's up to the software to provide its own defenses, which, sadly, are almost always significantly lacking. As a result, any time I see a proxied service running on a host, I brace myself for the inevitable security disaster that awaits - Ruby, Node, and Go developers being the biggest offenders. The moment a developer decides to write a network server is the moment they've probably chosen the wrong strategy, unless they have a very specific reason to do so and are aware of, and prepared to defend against, a wide range of attack scenarios. A developer needs to be well-versed in a wide variety of disciplines before taking on the extremely difficult task of writing a network server, scalable or otherwise. In my experience, few developers are actually capable of that task without introducing major security holes into their own or their users' infrastructure. While the PHP core developers generally know what they are doing elsewhere, I have personally found several critical bugs in their core networking logic, which shows that they are collectively lacking in that department. Therefore their built-in web server should be used sparingly, if at all.
Beyond security, Apache and Nginx are designed to handle "load" far better than the built-in PHP server. "Load" here answers the question, "How many requests per second can be serviced?" The answer is actually extremely complicated. Depending on code complexity, what is being hosted, what hardware is in use, and what is running at any point in time, a single host can handle anywhere from 20 to 20,000 requests per second, and that number can vary greatly from moment to moment. Apache comes with a tool called Apache Bench (ab) that can be used to benchmark the performance of a web server. However, benchmarks should always be taken with a grain of salt and viewed from the perspective of "Can we get this application to go any faster?" rather than "My application is faster than yours."
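As a rough illustration only (and no substitute for a real tool like ab), a serial PHP loop can give a ballpark requests-per-second figure; the target URL below is a placeholder:

```php
<?php
// Crude serial benchmark: fetch a URL $n times and report req/s.
// Unlike ab there is no concurrency, so treat the result as a floor.
$url = 'http://localhost/index.php'; // hypothetical target
$n = 100;

$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    file_get_contents($url);
}
$elapsed = microtime(true) - $start;

printf("%d requests in %.2fs => %.1f req/s\n", $n, $elapsed, $n / $elapsed);
```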
As far as developing software in PHP goes (since SO is a programming question site), I recommend trying to mirror your production environment as best as possible. If Apache will be running remotely, then running Apache locally provides the best simulation of the real thing so that there aren't a bunch of last-minute surprises. PHP code running under the Apache module may have significantly different behavior than PHP code running under the built-in PHP server (e.g. $_SERVER differences)!
If you are like me and don't like setting up Apache and PHP, and don't need Apache running all the time, I maintain a set of scripts for setting up portable versions of Apache, PHP, and MariaDB (roughly equivalent to MySQL) for Windows over here:
https://github.com/cubiclesoft/portable-apache-maria-db-php-for-windows/
If your software application is actually intended to be run using the built-in PHP server (e.g. a localhost only server), then I highly recommend introducing a buffer layer such as the CubicleSoft WebServer class:
https://github.com/cubiclesoft/ultimate-web-scraper/
By using a PHP userland class like that one, you gain certain assurances that the built-in PHP server cannot provide while still having a pure PHP solution (i.e. no extra dependencies): fewer, if any, buffer-overflow opportunities; the server is interpreted through the Zend Engine, resulting in fewer rogue-code-execution opportunities; and it has more features than the built-in server, including complete customization of the server request/response cycle itself. PHP itself can start such a server during OS boot by utilizing a tool similar to Service Manager:
https://github.com/cubiclesoft/service-manager/
Of course, that all means that a user has to trust your application's code that opened a port to run on their computer. For example, what happens if a website starts port scanning localhost ports via the user's web browser? And, if they do find the port that your software is running on, can that website start deleting files or run code that installs malware? It's the unusual exploits that will really trip you up. A "zero open ports" with "disconnected network cable/disabled WiFi" strategy is the only known way to truly secure a device. Every open port and established connection carries risk.
Good network-enabled software will have been battle-tested and hardened against a wide range of attacks. Writing such software is a responsibility that takes a lot of time to get right and it will generally show if it is done wrong. PHP's built-in server feels sloppy and lacks basic configuration options. I can't recommend its use for any reasonable purpose.
If you refer to the PHP documentation:
Warning
This web server was designed to aid application development. It may
also be useful for testing purposes or for application demonstrations
that are run in controlled environments. It is not intended to be a
full-featured web server. It should not be used on a public network.
http://php.net/manual/en/features.commandline.webserver.php
So yes, as it states, this is a good tool for testing purposes. You can quickly start a server and test your scripts in your browser. But that does not mean it provides all of the features you get with a production-level server like Apache or Nginx :)
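For testing, the server is started with the CLI's documented -S flag, optionally with a router script. A minimal sketch (the router.php name and the static-file pattern are choices made here, not mandated by PHP):

```php
<?php
// router.php: start the built-in server with
//   php -S localhost:8000 router.php
// Returning false tells the built-in server to serve the requested
// file as-is (static assets); anything else is handled right here.
if (preg_match('/\.(css|js|png|jpe?g|gif)$/', $_SERVER['REQUEST_URI'])) {
    return false;
}
echo 'Handled by router: ' . htmlspecialchars($_SERVER['REQUEST_URI']);
```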
You can use the built-in server in your local development environment, but you should use a more secure, feature-rich web server in your production environment, which demands much more in terms of security, handling large numbers of requests, and so on.

How to restrict PHP web app access to one machine?

Basically, I want to provide a web application (built with PHP, MySQL, and Apache) to users, with source code, in case they don't have an Internet connection. But with that, I have to ensure the web application package (Apache, PHP, MySQL, and the actual application with its data) cannot be copied and run on another machine (maybe we can bind authentication to the hard-disk serial ID).
The first solution that struck my mind was to build a standalone application, but we don't have that option because we are limited to a web application only.
One solution I thought of is to create a web-browser-like container in Java or another standalone programming language (which may use one of the system's browsers inside), where we have additional authentication for the current machine and it internally uses the system's browser for HTTP requests/responses.
Please share your ideas about the feasibility/implementation of the above solution, or any better solution.
One thing to keep in mind: we are providing all the source code along with the servers, so authentication in the database or PHP won't be much use.
But with that, I have to ensure the web application package (Apache, PHP, MySQL, and the actual application with its data) cannot be copied and run on another machine (maybe we can bind authentication to the hard-disk serial ID).
This is, strictly speaking, impossible.
The first solution that struck my mind was to build a standalone application, but we don't have that option because we are limited to a web application only.
Have you ever heard of IDA Pro? JD-GUI? ILSpy?
Stand-alone applications can trivially be reverse-engineered. This will protect nothing.
Your best options are:
Provide a cloud service that is totally agnostic towards HTTP clients, so that you own the back-end machines containing your source code, and give your customers a dumb open-source front-end that speaks to the back-end.
Enforce your software policies (i.e. only allowed to run one copy of the software) with the appropriate tool for the job: Lawyers and contracts.

Webserver optimization: Dealing with frequent PHP requests

This is probably an odd question, but it is something that I have been wondering about lately.
I have an application that requests a page (a PHP script that works like an API and outputs a simple string) from my web server every second. That seems like quite a lot of spam, and I was wondering whether any issues could arise from it.
For instance, I should probably pay attention to the web-server logging, to make sure it doesn't spam the disk until it's full. RAM/CPU isn't a problem at this point. APC is enabled. The scripts are optimized. What else should I look into, if anything?
This is probably the same situation I would encounter with a lot of visitors coming to my site, but I have never had that experience yet.
Thanks!
Every second? That's 86,400 times a day per client. That's a lot for PHP! But it should be okay unless you have multiple clients or some kind of I/O-heavy or database-backed system behind it.
Otherwise, php5[-fpm] with APC on nginx sounds suited for this use, if you must use PHP.
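As one concrete illustration of taking pressure off a once-per-second endpoint, the response string can be cached in APC so that most hits skip the expensive work entirely. A minimal sketch using the classic apc_fetch/apc_store calls (compute_response() is a hypothetical stand-in for your real logic):

```php
<?php
// Serve a cached copy of the API string; recompute at most once
// every 5 seconds no matter how often clients poll.
function compute_response() {
    // hypothetical stand-in for the real (expensive) work
    return 'status=ok time=' . time();
}

$out = apc_fetch('api_response', $hit);
if (!$hit) {
    $out = compute_response();
    apc_store('api_response', $out, 5); // TTL in seconds
}
header('Content-Type: text/plain');
echo $out;
```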
If this component of your application aggregates data without a database, by mining other data sources over the Internet, you may want to check with the data providers that realtime polling is permissible, and ensure your addresses are explicitly whitelisted.
Firewalls aren't to be forgotten: use a permit-by-exception security policy, i.e. iptables -t filter -P INPUT DROP, fine-tuned down to the packet level using the iptables raw table as well. One of the greatest threats to mission-critical webserver performance is the ability of an adversary to identify a node as critical by analyzing traffic frequency and volume. Closing all non-critical ports at the lowest level is an easy defense.
Another option is automated failover, strung together from node monitoring for this server and rapid deployment of a drop-in replacement appliance on a cloud VPS provider such as DigitalOcean or Amazon Web Services. This is an alternative to permanently running redundant servers (or instances), and it is fun to set up.
Applications that require realtime request processing with failover are often seen in the financial industry in high-value risk environments, as well as in the security and transportation industries in safety-critical risk environments. If either of these scenarios applies to you, you may wish to consider rebuilding this component of your application from the ground up using a specially-purposed language such as Ada, Erlang, or Haskell. That would let you optimize resource utilization at a lower level and thereby obtain optimum performance. Depending on your risk environment, this may or may not be worthwhile for you.

Full Oracle backup from PHP

I maintain a PHP driven web application with Oracle backend. The app interacts with a number of third-party apps so information is managed with a combination of XML files, Microsoft Access databases and HTML forms. There are currently 80 tables with many BLOBs and a pretty good bunch of foreign key relationships. All procedures are carefully explained in a document that (of course) nobody ever reads. The customer was feeling uneasy about his data so he was given an estimate with some improvements that could be made (stuff like adding previews and confirmations in some operations).
Sadly, the customer misinterpreted one of the specs (a partial export estimated at 12 man-hours), and he's expecting a full backup-and-restore feature that would allow him to save and restore the complete database through a web browser without DBA intervention.
Before having yet another argument with the client, I'd like to know whether I have any option to actually implement this feature in a timely manner, considering that it doesn't need any refinements (e.g., there is no need to select what to restore).
Production server is a Windows Server 2003 box running PHP/5.2.9. The Oracle server is a remote box running "Oracle9i Release 9.2.0.1.0 - 64bit Production".
(Please note I'm not a DBA so there may be well-known solutions I'm not aware of.)
Oracle is a monster. Once you've read this, you'll realise that how you back up the system depends entirely on how it has been configured. The short answer is to automate whatever the manual process is - invoke it as a long-running process (since this is MS Windows, prefix the rman command with 'start'), then use polling to detect when it finishes (e.g. wrap rman in a DOS batch file that logs start and end times).
I'd be hard pushed to think of a problem more difficult to provide a generic solution for than Oracle running on top of MS Windows. The latter may be nice for users who click buttons, but automating anything on it is a PITA.
Have fun :)
Finally, I had the chance to implement a full Oracle backup from PHP in a later project. I used the Oracle Data Pump command-line utilities, available since 10g. In short:
You define an Oracle directory object to map a keyword to a physical directory, and grant write permission on it to the app's Oracle user.
You run expdp with the appropriate arguments and get a complete dump in a single file.
To restore a backup, you run impdp.
It's also advisable to run the commands with proc_open() rather than system(), since on Windows you can use bypass_shell and keep fine-grained control of the process.
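A hedged sketch of that approach (the connection string, the DUMP_DIR directory object, and the dump-file name are placeholders, not values from the original setup):

```php
<?php
// Run an Oracle Data Pump full export from PHP on Windows via proc_open().
// 'bypass_shell' skips cmd.exe so we control the spawned process directly.
$cmd = 'expdp app_user/secret@ORCL full=Y directory=DUMP_DIR dumpfile=full_backup.dmp';

$descriptors = [
    1 => ['pipe', 'w'], // stdout
    2 => ['pipe', 'w'], // stderr (expdp reports progress here)
];

$proc = proc_open($cmd, $descriptors, $pipes, null, null, ['bypass_shell' => true]);
if (!is_resource($proc)) {
    die("Could not start expdp\n");
}

$log = stream_get_contents($pipes[1]) . stream_get_contents($pipes[2]);
fclose($pipes[1]);
fclose($pipes[2]);

$exit = proc_close($proc);
echo $exit === 0 ? "Export finished.\n" : "Export failed:\n$log";
```

To restore, the same pattern applies with impdp in place of expdp.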
As for this question, the pre-10g alternative is the "exp" / "imp" combo.

Is it possible to have peer-to-peer communication using nothing but PHP

Is it possible to implement p2p using just PHP? Without Flash or Java, and obviously without installing some sort of agent/client on anyone's computer.
So even though it might not be "true" p2p, it could use a server to establish the connection somehow, but the rest of the communication must be done peer to peer.
I apologize for the slight miscommunication: by "PHP" I meant not the PHP binary, but a PHP script hosted on a web server remote from both peers, so each peer has nothing but a browser.
without installing some sort of
agent/client on one's computer
Each computer would have to have the PHP binaries installed.
EDIT
I see in a different post you mentioned browser-based. Security restrictions in JavaScript would prohibit this type of interaction.
No.
You could write a P2P client / server in PHP — but it would have to be installed on the participating computers.
You can't have PHP running on a webserver cause two other computers to communicate with each other without having P2P software installed.
You can't even use JavaScript to help — the same origin policy would prevent it.
JavaScript running a browser could use a PHP based server as a middleman so that two clients could communicate — but you aren't going to achieve P2P.
Since 2009 (when this answer was originally written), the WebRTC protocol has been written and has achieved widespread support among browsers.
This allows you to perform peer-to-peer communication between web browsers, but you need to write the code in JavaScript (WebAssembly might also be an option, and one that would let you run PHP).
You also need a bunch of non-peer server code to support WebRTC (e.g. to allow peer discovery and to proxy data around firewalls), which you could write in PHP.
It is not feasible because a server-side application (PHP) does not have access to the peer's system, which is required to define the ports, IP addresses, etc. needed to establish a socket connection.
ADDITION:
But if you were to run PHP on each peer's own web server, that might give you what you're looking for.
Doesn't peer-to-peer communication imply that communication goes directly from one client to another, without any servers in the middle? Since PHP is server-based software, I don't think any program you write in it can be considered true p2p.
However, if you want to enable client-to-client communication with a PHP server as the middleman, that's definitely possible.
Depends on whether you want the browser to be sending data to this PHP application.
I've made IRC bots entirely in PHP, though, which showed their status and output in my web browser in a fashion much like mIRC. I just set the timeout limit to infinite and connected to the IRC server using sockets. You could connect to anything, though. You can even make it listen for incoming connections and handle them.
What you can't do is get a browser to keep a two-way connection open without breaking off requests (not yet, anyway...).
Yes, but it's not what's generally called p2p, since there is a server in between. I have a feeling, though, that what you want is to have your peers communicate with each other, rather than have a direct connection between them with no 'middleman' server (which is what is normally meant by p2p).
Depending on the scalability requirements, implementing this kind of communication can be trivial (a simple polling script on the clients) or demanding (an asynchronous comet server).
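For the trivial end of that spectrum, here is a hedged sketch of a polling relay; the relay.php name, the query parameter, and the file-based storage are all choices made here for illustration, and real code would need locking, authentication, and per-client queues:

```php
<?php
// relay.php: trivial "middleman" message drop for two clients.
// POST ?room=X stores a message; GET ?room=X returns and clears it.
$room = preg_replace('/[^a-z0-9]/i', '', $_GET['room'] ?? 'default');
$file = sys_get_temp_dir() . "/relay_$room.txt";

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    file_put_contents($file, file_get_contents('php://input'), LOCK_EX);
} elseif (is_file($file)) {
    readfile($file);  // hand the pending message to the polling client
    unlink($file);
}
```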
In case someone comes here wondering whether you can write P2P software in PHP: the answer is yes. In that case Quentin's answer to the original question is correct - PHP would have to be installed on the computer.
You can do whatever you want in PHP, including writing true p2p software. To create a true P2P program in PHP, you would use PHP as an interpreted language WITHOUT a web server, and you would use sockets - just as you would in C/C++. The originally accepted answer is both right and wrong - unless the original poster was asking whether PHP running on a web server could be a p2p client, in which case the answer is of course simply no.
Basically, to do this you'd write a PHP script that:
Opens a server socket connection (stream_socket_server/socket_create)
Finds a list of peer IPs
Opens a client connection to each peer
...
Prove everyone wrong.
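To make that outline concrete, here is a hedged sketch of such a CLI PHP peer; the port and the hard-coded peer list are placeholders, and a real client would discover peers dynamically and multiplex connections with stream_select():

```php
<?php
// Minimal CLI peer: listen on a TCP port while also dialing out to
// known peers. Run with the PHP CLI, not under a web server.
$server = stream_socket_server('tcp://0.0.0.0:9000', $errno, $errstr);
if ($server === false) {
    die("listen failed: $errstr ($errno)\n");
}

$peers = ['203.0.113.5:9000']; // placeholder peer address
foreach ($peers as $peer) {
    $c = @stream_socket_client("tcp://$peer", $errno, $errstr, 5);
    if ($c !== false) {
        fwrite($c, "HELLO\n"); // toy handshake
        fclose($c);
    }
}

// Accept one inbound peer and acknowledge its first line.
// (A real client would loop and use stream_select() instead.)
if ($in = @stream_socket_accept($server, 30)) {
    $line = fgets($in);
    fwrite($in, 'ACK ' . $line);
    fclose($in);
}
```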
No, not really. PHP scripts are meant to run for only a very short time. Usually the default maximum runtime is two minutes, which will normally not be enough for p2p communication. After that, the script will be cancelled, though the server administrator can deactivate the limit. But even then, the HTTP connection between the server and the client must be held open for the whole download; the client's browser will show its page-loading indicator that entire time. If the connection breaks, most web servers will kill the PHP script, so the p2p download is cancelled.
So it may be possible to implement the p2p protocol, but in a client/server scenario you run into problems with the execution model of PHP scripts.
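For what it's worth, a long-running PHP process (like the IRC-bot approach mentioned above) can lift those limits itself when the administrator allows it; a minimal sketch:

```php
<?php
// Remove the execution time limit and survive client disconnects,
// so the process can hold sockets open indefinitely.
set_time_limit(0);       // 0 = no limit (the CLI default anyway)
ignore_user_abort(true); // keep running if the HTTP client goes away
```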
Both parties would need to be running a server such as Apache, although for demonstration purposes you could get away with just the built-in PHP test server. Next, you are going to have to research firewall hole punching in PHP; I saw a script, I think on GitHub, but that was a long time ago. Yes, it can be done. If your client is not a savvy programmer type, you would probably need to ensure that they have PHP installed and running. The PATH variable may not work unless you add it to the system registry on Windows, so make sure you provide a .bat file so both parties can ensure the path is in the registry and Windows can find PHP. (Sorry, I am not a Linux user.)
Next you have to develop the code. There are instructions for how hole punching works, and it does require a server on the public Internet so that the two computers can find each other's IP addresses. Maybe you could rig something up on a free host such as www.000.webhost.com, or alternatively use some kind of built-in mechanism, such as the person's email address, to report the current IP.
The biggest problem is routers and firewalls, but since packets, even when directed at a public IP, still need to know the destination on the LAN, the information on how to write the packet should be straightforward. With any luck you might find a script that has done most of the work for you.
