Websites on the same server communicating - php

I want to separate the API part of my site out into another install, as the site has become big and buggy with everything lumped together. I would like to know what ways there are, if any, to make two sites communicate when they are on the same server.
I originally thought I could get the client-facing site to just include the models from the API site through a custom loader for CodeIgniter, but I am currently leaning towards having the API site take advantage of Laravel, which would obviously rule out loading them directly.
Currently I have some calls that use cURL to POST requests; is this the only way? I was hoping to drop the HTTP calls in favour of something more direct.

As I said in my comments to the question, I'm definitely no expert on this kind of stuff, but my original thinking was that IPC-style stuff could be done, maybe using named pipes.
PHP does allow for this in its POSIX and process control functions. Simply use posix_mkfifo to create a named pipe and then you should be able to use fopen, fread, etc. (along with the stream_* functions if you need to) to write to and read from the pipe. However, I'm not sure how well that works with a single writer and multiple readers, and it's also probably quite a large change to your code to replace the HTTP stuff you currently have.
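A minimal sketch of that idea, assuming the reader and the writer run as separate processes (the FIFO path and the message format are my own assumptions):

<?php
// Writer side (e.g. the client-facing site); the reader runs in a separate process.
$pipe = '/tmp/api-requests.fifo';
if (!file_exists($pipe)) {
    posix_mkfifo($pipe, 0600);   // create the FIFO once
}
$w = fopen($pipe, 'w');          // blocks until a reader has the other end open
fwrite($w, json_encode(['action' => 'getUser', 'id' => 42]) . "\n");
fclose($w);

// Reader side (e.g. the API install), in its own script/process:
$r = fopen($pipe, 'r');
$request = json_decode(fgets($r), true);
fclose($r);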
So the next possibility is that, if you want to stick with HTTP (and don't mind the small overhead of forming HTTP requests, headers, etc.), then you should ensure that your server is using local sockets to cut down on transport costs. If your web site's domain name is the same hostname as the server itself this should already be happening (as your /etc/hosts file will have an entry pointing the hostname to 127.0.0.1). However, if not, all you need to do is add such an entry and, as far as I'm aware, it'll work. At the very worst you could hardcode 127.0.0.1 in your code (and ensure your webserver responds correctly to these requests), of course.
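For example, if the API install answers as api.example.local (a name I'm making up here), a single line in /etc/hosts keeps those HTTP calls on the loopback interface:

127.0.0.1   api.example.local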

Related

How do you store non-serializable, process-specific objects between PHP requests?

We have a need to keep a collection of socket objects around that are associated with different client browser sessions, so that when the client's browser makes a subsequent request, we can use the existing socket connection/session to make a request on their behalf. The socket is to something that is not HTTP. Is there a way to store objects like this in PHP that will survive across page requests?
Is there a way to store objects like this in PHP that will survive across page requests?
No.
To quote zombat's answer to a very similar question:
In PHP, there is no concept of page instances. Every request to the web server is a fresh start. All the classes are re-loaded, so there is no concept of class sharing, nor is there a concept of resource pooling, unless it is implemented externally. Sharing sockets between web requests therefore isn't really possible.
If the objects were serializable, you could use PHP's serialize() and unserialize() in conjunction with MySQL memory tables to solve this issue. Other than that, I don't think there's much else you can do.
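For what it's worth, a rough sketch of that serialize()/MEMORY-table idea (it only helps for objects that can be serialized; the table layout and credentials are assumptions):

<?php
// Assumed table: CREATE TABLE session_objects (id VARCHAR(64) PRIMARY KEY, data BLOB) ENGINE=MEMORY;
session_start();
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');

// Request A: store the serialized object, keyed by the session id
$stmt = $pdo->prepare('REPLACE INTO session_objects (id, data) VALUES (?, ?)');
$stmt->execute([session_id(), serialize($someObject)]);

// Request B: restore it
$stmt = $pdo->prepare('SELECT data FROM session_objects WHERE id = ?');
$stmt->execute([session_id()]);
$someObject = unserialize($stmt->fetchColumn());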
In PHP the script dies after the page loads, so there is no way to do it directly. However, you can create a long-living daemon which opens all the sockets the process requires and keeps them open between page loads. Of course you'll need to isolate these sockets with some kind of access key to make sure different sessions can't reach other users' sockets. Also keep in mind that the daemon will die at some point, so make sure you have logic to restart all the sockets that were open. But it can certainly be achieved.
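A very rough sketch of such a daemon, run from the CLI and kept alive by something like supervisord; the control-socket path, the one-line protocol and the backend address are all assumptions:

<?php
// socket_daemon.php - holds upstream sockets open between page requests.
$control = stream_socket_server('unix:///tmp/socket-daemon.sock', $errno, $errstr);
if (!$control) {
    die("Cannot create control socket: $errstr\n");
}

$upstream = [];   // long-lived sockets, keyed by an access key

while (true) {
    $client = stream_socket_accept($control, -1);   // wait for the next page request
    if (!$client) {
        continue;
    }

    // Assumed protocol: "<accesskey> <payload>\n"
    [$key, $payload] = explode(' ', trim(fgets($client)), 2) + [1 => ''];

    // Open the upstream connection on first use, re-use it afterwards
    if (!isset($upstream[$key]) || feof($upstream[$key])) {
        $upstream[$key] = stream_socket_client('tcp://backend.example:9000', $e, $msg, 5);
    }

    if ($upstream[$key]) {
        fwrite($upstream[$key], $payload . "\n");
        fwrite($client, fgets($upstream[$key]));    // relay one reply line back
    }
    fclose($client);
}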
This isn't a complete answer; but it steps towards an answer.
As has been pointed out ad nauseam elsewhere, the standard, classic model of using PHP, via a web server (Apache, Nginx, etc.), does not allow you to do this, because each page hit starts with an entirely fresh set of variables.
Three thoughts:
You need a persistence layer. Obviously this is where you store stuff in a database, or use APC (APCu in PHP7+), Redis, or something similar.
Your problem, however, is that you specify "unserializable."
My next suggestion would be: perhaps you could persist the elements necessary to construct the object, and re-initialise the object for each PHP request, as in the sketch below. It won't be as amazingly performant as you'd like, but it's the most useful solution without having to rewrite everything. Perhaps you've already tried it.
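For example (a hand-wavy sketch; the key names and the backend address are assumptions), you persist the parameters rather than the resource, and rebuild the socket on each request:

<?php
session_start();

// Persist the *ingredients* of the object, not the object itself
if (!isset($_SESSION['socket_config'])) {
    $_SESSION['socket_config'] = ['host' => 'backend.internal', 'port' => 9000];
}

// Re-create the unserializable resource from the stored parameters on every request
$cfg = $_SESSION['socket_config'];
$fp  = stream_socket_client("tcp://{$cfg['host']}:{$cfg['port']}", $errno, $errstr, 5);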
The next step is to do something completely out-there. One of the advantages the NodeJS infrastructure has is that the entire server loop persists.
So you could try one of the alternate methods of running PHP, like ReactPHP or PHP FastCGI. (There are others, but I can't remember them off the top of my head. I'll edit this if I remember.)
This involves an entirely different way of writing PHP--as different as NodeJS programming is from stuffing around with jQuery inside your browser. It wouldn't run within Apache. Rather, it would run directly as an app on your Unix server. And you'll have to cater for things like garbage collection so you don't have memory leaks, and write nice tight event loops.
The plus side is, because your thread persists and handles each subsequent request, you're able to handle requests in exactly the way you're after.
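As a taste of what that looks like, here is a minimal ReactPHP HTTP server (assuming reasonably recent react/http and react/socket installed via Composer); the in-memory counter survives between requests precisely because the process never exits:

<?php
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ServerRequestInterface;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

$counter = 0;   // an ordinary variable, yet it persists across requests

$http = new HttpServer(function (ServerRequestInterface $request) use (&$counter) {
    $counter++;
    return new Response(200, ['Content-Type' => 'text/plain'], "Request #{$counter}\n");
});

$http->listen(new SocketServer('127.0.0.1:8080'));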
Your comment mentions these are local sockets. And it sounds like the PHP application acts as a socket client. So the only thing which is needed by the PHP session is a common identifier, such as a user ID, and for the sockets to be named consistently.
So, for example:
<?php
session_start();
$userid = $_SESSION['userid'];

// Connect to the per-user Unix domain socket
$fp = stream_socket_client("unix:///tmp/socket-" . $userid, $errno, $errstr, 30);
if ($fp) {
    fwrite($fp, "your request here\n");
    $response = fgets($fp);   // read one reply line
    fclose($fp);
}

PHP - cross-application communication

I have two PHP applications on my server. One of them has a REST API which I would like to consume and render in the second application. What is a better way than curling the API? Can I somehow ask php-fpm for the data directly, or something like that?
Doing curl and making a request through the webserver seems wrong.
All this happens on a single server - I know it probably doesn't scale well, but it's a small project.
why use REST if you can access the functions directly?
If everything is on the same server then there is no need for some REST, since it makes a somewhat pointless run through the webserver.
But if it is already there and you don't care about the overhead (if there's not much traffic going on then it would make sense), then use file_get_contents instead of cURL; it is easier to use, and I doubt there is much speed difference either way; both are fine.
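A rough sketch of such a call (the URL, headers and JSON shape are assumptions):

<?php
$context = stream_context_create([
    'http' => [
        'method'  => 'GET',
        'header'  => "Accept: application/json\r\n",
        'timeout' => 5,                         // don't hang if the other app is down
    ],
]);

$json = file_get_contents('http://127.0.0.1/api/v1/items', false, $context);
if ($json !== false) {
    $items = json_decode($json, true);
}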
You could also use a second webserver (a second virtualhost) on a different port for internal use. That way things are nicely separated.
(If everything is on different servers, but on a local network, then using sockets would be fastest.)
Doing curl and making request through the webserver seems wrong. - I disagree with that. You can still achieve what you want using PHP cURL, even if it's on the same server.
I had the same problem, but I solved it by using MySQL to "queue" tasks; a worker could then use any polling method, or PHP could execute a new server-side worker. A sketch of the idea is below.
Since the results were stored in the same database, the PHP pages could load the results, or the status, at any time.
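Rough sketch of that approach; the table layout and the do_work() helper are hypothetical:

<?php
// Assumed table:
// CREATE TABLE tasks (id INT AUTO_INCREMENT PRIMARY KEY, payload TEXT,
//                     status ENUM('pending','done') DEFAULT 'pending', result TEXT NULL);
$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');

// Web request: enqueue a task and return immediately
$pdo->prepare('INSERT INTO tasks (payload) VALUES (?)')
    ->execute([json_encode(['action' => 'sync', 'userid' => 42])]);

// Worker (cron job or long-running CLI script): poll for pending work
$task = $pdo->query("SELECT * FROM tasks WHERE status = 'pending' LIMIT 1")->fetch(PDO::FETCH_ASSOC);
if ($task) {
    $result = do_work(json_decode($task['payload'], true));   // hypothetical helper
    $pdo->prepare("UPDATE tasks SET status = 'done', result = ? WHERE id = ?")
        ->execute([json_encode($result), $task['id']]);
}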

PHP: DIY WebDav Server

I know that there are a number of libraries available, but I am trying to learn more about the WebDav protocol itself for a project I’m developing.
For stage 1, I would like to implement a virtual read-only file system in PHP, presenting as a WebDav server.
As far as I can tell, it would need to be able to:
list virtual files & directories
change directories
print the contents of a single file
I’ve found a number of sources, but they either try to do too much or gloss over the implementation of the protocol itself.
Can someone explain or point me to a source that might answer the following:
What are the steps in the communication between the client & server?
How does PHP receive a request, and how should the response be formatted?
Thanks
When I originally started sabre/dav, I made sure to read the entire RFC first. You really need to have a good idea of all the features, the data model and how they work together.
After that, you probably only really need to look at the PROPFIND, OPTIONS and GET methods. One option is to just look at what a client sends your way... figure out based on the RFCs what the response should be, and then write the code that sends the correct response.
Another good way to start learning is to hook up an existing WebDAV client to a WebDAV server and inspect what kind of messages they send back and forth.
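To make that concrete, here is a very rough, read-only sketch of the first verbs a client will send (the virtual file list is an assumption, and a real server must also honour the Depth header, ETags, locking and the rest of RFC 4918):

<?php
$method = $_SERVER['REQUEST_METHOD'];

if ($method === 'OPTIONS') {
    header('DAV: 1');                          // advertise WebDAV class 1 support
    header('Allow: OPTIONS, GET, PROPFIND');
    exit;
}

if ($method === 'PROPFIND') {
    http_response_code(207);                   // Multi-Status
    header('Content-Type: application/xml; charset=utf-8');
    echo '<?xml version="1.0" encoding="utf-8"?>';
    echo '<D:multistatus xmlns:D="DAV:">';
    foreach (['/' => true, '/readme.txt' => false] as $path => $isDir) {   // virtual listing
        echo '<D:response><D:href>' . htmlspecialchars($path) . '</D:href>'
           . '<D:propstat><D:prop><D:resourcetype>'
           . ($isDir ? '<D:collection/>' : '')
           . '</D:resourcetype></D:prop>'
           . '<D:status>HTTP/1.1 200 OK</D:status></D:propstat></D:response>';
    }
    echo '</D:multistatus>';
    exit;
}

if ($method === 'GET' && $_SERVER['REQUEST_URI'] === '/readme.txt') {
    header('Content-Type: text/plain');
    echo "Contents of a virtual file\n";
    exit;
}

http_response_code(405);                       // anything else is not supported yet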

Detecting calls to the outside from PHP (curl, file_get_contents, ...)

Is there a simple way to detect all outside calls made from PHP?
In an open-sourced project I have a lot of 3rd-party scripts. With the use of New Relic I was able to trace long execution times back to some of these scripts making calls back to their servers.
I don't mind this, but I want to know what data these scripts are sending, and most of all I don't want to have a slow site when a 3rd-party script's server is down or not accessible.
Is there an easy way to log all curl, file_get_contents, etc. requests?
Thanks!
You are searching for a packet sniffer. Usually you'll use tcpdump and/or Wireshark.
http://wiki.ubuntuusers.de/tcpdump
https://www.wireshark.org/
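For example (the interface name and ports are assumptions; HTTPS payloads will of course be encrypted):

# show outbound HTTP traffic with the payload in ASCII
sudo tcpdump -i eth0 -nn -A 'tcp dst port 80'

# or capture to a file and inspect it in Wireshark later
sudo tcpdump -i eth0 -w outbound.pcap 'tcp port 80 or tcp port 443'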
There are many solutions, but my preferred one is:
Build your own proxy: for example a dedicated Apache server (it can run on the same IP but on a different port) that will handle this type of operation. After that, you change all of your old URLs to pass through your proxy.
Imagine that you have this in your code: curl_init('www.google.com'); you would change it to: curl_init('http://localhost:8090/CALLS_OUTSIDE_PHP_CONTROLLER.php?url_to_redirect=www.google.com');
The PHP controller running on port 8090 can do many things, such as blacklisting/whitelisting certain URLs, doing regular URL checks in the background... lots of useful stuff.
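A rough sketch of what that CALLS_OUTSIDE_PHP_CONTROLLER.php could look like (the whitelist and log path are assumptions):

<?php
$allowed = ['www.google.com', 'api.example.com'];       // whitelist of outside hosts

$target = $_GET['url_to_redirect'] ?? '';
$url    = (strpos($target, '://') === false ? 'http://' : '') . $target;
$host   = parse_url($url, PHP_URL_HOST);

if (!in_array($host, $allowed, true)) {
    http_response_code(403);
    exit('Blocked outbound call to ' . htmlspecialchars((string) $host));
}

error_log(date('c') . " outbound call to {$url}\n", 3, '/var/log/outbound_calls.log');

$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 5,                        // fail fast if the remote host is down
]);
echo curl_exec($ch);
curl_close($ch);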

Deploy PHP website to client server without showing PHP files

I asked a recent question regarding the use of readfile() for remotely executing PHP, but maybe I'd be better off setting out the problem to see if I'm thinking the wrong way about things, so here goes:
I have a PHP website that requires users to login, includes lots of forms, database connections and makes use of $_SESSION variables to keep track of various things
I have a potential client who would like to use the functionality of my website, but on their own server, controlled by them. They would probably want to restyle the website using content and CSS files local to their server, but that's a problem for later
I don't want to show them my PHP code, since that's the value of what I'd be providing.
I had thought to do this with calls to include() from the client's server to mine, which at least keeps variable scope intact, but many sites (and the PHP docs) seem to recommend readfile(), file_get_contents() or similar. Ideally I'd like to have a simple wrapper file on the client's server for each "real" one on my server.
Any suggestions as to how I might accomplish what I need?
Thanks,
ColmF
As suggested, comment posted as an answer & modified a touch
PHP is an interpreted language and as such 'reads' the files and parses them. Yes, it can store cached bytecode in certain cases, but it's not like the higher-level languages that compile to and run from bytecode. That means the PHP 'compiler' requires your actual source code to work. Check out zend.com/en/products/guard which might do what you want, though I believe it means your client has to use Zend Server.
Failing that, sign a contract with the company that includes clauses about not reusing your code, etc. That's your best protection in this case. You should also be careful, though: if you're using anything under an 'open source' license your entire app may be considered open source, and thus this is all moot.
This is not an unusual practice for many companies. I have produced software I'm particularly proud of and a company wants to use it. As they believe in their own information security, for either 'personal' reasons or because they have to comply with a standard such as PCI, there are times my application must run in their environments. I have offered my products as 'web services' where they query my servers with data and receive responses. In that case my source is completely protected, as this is no different from any other closed API. In every case I have licensed the copy to the client with provisions that they are not allowed to modify nor distribute it. This is a legally binding contract and completely expected from the client's side of things. Of course there were provisions that I would provide support, etc., but that's neither here nor there.
Short answers:
Legal agreement, likely your best bet from everyone's point of view
Zend guard like product, never used it so I can't vouch for it
Private API but this won't really work for you as the client needs to host it
Good luck!
If they want it wholly contained on their server then your best bet is a legal solution not a technical one.
You license the software to them and you make sure the contract states the intellectual property belongs to you and it cannot be copied/distributed etc without prior permission (obviously you'll need some better legalese than that, but you get the idea).
Rather than remote execution, I suggest you use a PHP source protection system, such as Zend Guard, ionCube or sourceguardian.
http://www.zend.com/en/products/guard/
http://www.ioncube.com/
http://www.sourceguardian.com/
Basically, you're looking for a way to proxy your application out to a remote server (i.e. your clients'). To use something like readfile() on the client's site is fine, but you're still going to need multiple scripts on their end. Basically, readfile scrapes what's available at a particular file path or URL and pipes it to the end user. So if I were to do readfile('http://google.com'), it would output the source code for Google's homepage.
Assuming you don't just want to have a dummy form on your clients' sites, you're going to need to have some code hanging out on their end. The code is going to have to intercept the form submissions (so you'll need a URL parameter on the page you're scraping with readfile to tell your code that the form submission URL is your client's site and not your own). This page (the form submission handler page) will need to make calls back to your own site. Think something like this:
readfile("https://your.site/whatever?{$_SERVER['QUERY_STRING']}");
Your site is then going to process the response and then pass everything back to your clients' sites.
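For illustration, one of those wrapper files on the client's server could be as small as this (the URL and the plain pass-through behaviour are assumptions):

<?php
// Forward the intercepted form submission to your server and relay the response.
$ch = curl_init('https://your.site/form-handler');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query($_POST),
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_TIMEOUT        => 10,
]);
echo curl_exec($ch);
curl_close($ch);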
Hopefully I've gotten you on the right path. Let me know if I was unclear; I realize this is a lot of info.
I think you're going to have a hard time with this unless you want some kind of funny wrapper that does curl type requests to your server. Especially when it comes to handling things like sessions and cookies.
Are you sure a PHP obfuscator wouldn't be sufficient for what you are doing?
Instead of hosting it yourself, why not do what most php applications do and simply distribute the program to your client with an auto-update feature? Hosting it yourself is complicated, from management of websites to who is paying for the hosting.
If you don't want it to be distributed, then find a pre-written license that allows you to do this. If you can't find one then it's time to talk to a lawyer.
You can't stop them from seeing your code. You can make it very hard for them to understand your code, which is a good second best. See our SD PHP Obfuscator for a tool that will scramble the identifiers and the whitespacing in the code, making it much more difficult to understand.
