I'm starting my first project with yo + grunt + angular.js.
I have a service which needs to read some data from my server; I built it using Angular's $http service.
I've also built a RESTful web service (implemented in PHP, but it could be Java, C, Perl, ..., it doesn't matter) which exposes an API to get the data.
The server from which grunt serves my ng-app is currently (and probably always will be) the same machine on which the PHP web service runs (under Apache).
I wonder if this is an acceptable architecture... I end up having two distinct web servers (grunt and Apache) on the same machine... Moreover, I always have to add an "Access-Control-Allow-Origin:127.0.0.1" header to the output of my PHP service... :-(
Is it possible to serve PHP from grunt, for example?
UPDATE: I am talking about the development stage... Of course in production I wouldn't use grunt...
To better explain myself: I would like to use relative URLs in $http()... with the same code at both the development and the production stages...
In production I can expect that to work, because there will be only one server for both the deployed Angular app and the PHP service; but in development, when the Angular app is served by grunt, who is supposed to interpret the PHP? Grunt itself? If so, how?
UPDATE 2, AND A POSSIBLE SOLUTION: After thinking quite a bit about this issue (and also reading this article), and not receiving satisfactory answers here, I decided I will use this approach:
Development
Use a "production-like" server (Apache, lighttpd, ...) to serve real PHP pages.
Use absolute URLs with $http or $resource to access that server (distinct from Grunt, which serves the angular.js pages). The URLs will be easily configurable, so that switching to production requires only minimal work (and fewer chances for errors).
In PHP scripts, before producing the (JSON) output, always emit a proper "Access-Control-Allow-Origin" header; the value of the directive will also be easily configurable (see the sketch after this list).
Production
Deploy the angular.js app to the same server where the PHP is deployed.
Change the URLs and make them relative, since the client-side scripts now share the same origin as the PHP service.
Change the "Access-Control-Allow-Origin" header to allow only local requests (or possibly remove the header altogether...).
I would be very pleased if anybody would like to comment on this solution, dispute it, or propose a better one...
Our solution to the problem at work was to create flat files with sample data inside the app folder, use relative URLs with $resource and $http, and then deploy our code as an application at the same subdirectory level... /fx/api/fund for example.
This allows grunt to serve up something static for seeing what the design of the Angular app will look like, while still providing a full experience. Then we have a development server that gets updated when we commit code (using Jenkins), which we can check for real functionality and run our test suite against.
This approach is a little clumsy, but it lets us get the benefits of the grunt approach and still have a testing server. We also have our builds use the minified version, so that we can test that minification won't break the app.
The only problem with this approach is that the built-in web server in grunt can't handle POST requests, so anything issuing a POST will fail.
It sounds like you are trying to do the same thing as me (a solution for local development only).
I am using yo angular to start an Angular project, but I want to connect to a PHP service to deliver some content.
I used grunt-connect-proxy to pass my POST requests to Apache. This works well, except for the fact that $_POST remains empty when sending form data, e.g. $http.post('/api',{"foo":"bar"}). I posted an issue about this, but it still remains unsolved and I cannot figure out how to make it work. Anyway, the other solution is to keep everything in the same folder/domain.
That was my story
Actually, the story had a tail: I finally figured out what was causing the problem; see this post.
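For anyone hitting the same wall: a common cause of an empty $_POST with $http.post('/api',{"foo":"bar"}) is that Angular sends the request body as JSON, while PHP only populates $_POST for form-encoded bodies. Whether or not that was the exact issue in the linked post, the usual PHP-side fix is to read the raw body, roughly like this:

```php
<?php
// Angular's $http.post sends application/json by default, so $_POST stays empty;
// read and decode the raw request body instead.
$data = json_decode(file_get_contents('php://input'), true);

header('Content-Type: application/json');
echo json_encode(['received_foo' => isset($data['foo']) ? $data['foo'] : null]);
```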
Not receiving a satisfactory answer, and after thinking quite a bit about the issue myself, my conclusions are the ones already laid out in UPDATE 2 above: in development, serve the real PHP pages from a production-like server (Apache, lighttpd, ...), reached through easily configurable absolute URLs and with an easily configurable "Access-Control-Allow-Origin" header; in production, deploy the angular.js app on the same server as the PHP, switch to relative URLs, and restrict or drop the CORS header.
Before this question gets closed, I know the setup above is possible. I just want clarification on some things.
I just started learning Aurelia because I want to convert one of my projects into a web app. My project is built with HTML + CSS + JavaScript (jQuery) + PHP (MySQL).
I haven't used any sort of framework before.
In the guide, they mention a few ways to set up a web server. I used the HTTP server with Node. Now this is where I need some help understanding a few things.
I don't want to use Node.js; I want to use PHP on the server. Will that work, and how?
When using an Apache server, I know any PHP page is sent to the interpreter that renders the final HTML. I use XAMPP, and its Apache comes bundled with PHP. Does the HTTP server used by Node come with PHP? Is this even a sensible question?
Now, I know Aurelia is purely front-end. If it is used to make single-page applications, it uses Ajax. So now I have made the following assumption:
Using Aurelia, the user accesses the root page of the app that the web server sends. After that, Aurelia makes various Ajax requests to the server, which will use my PHP files to do the database query stuff.
Is that right, or am I missing something? And can I just use XAMPP (Apache) to host my app instead of the server from Node?
Aurelia is a framework that, after you export it to any server, does not rely on any back-end software at all. This means that, with the help of the http-/fetch-client API, you can just call out to your PHP script.
I have an example on my GitHub:
https://github.com/rjpvroegop/randyvroegop.nl-made-with-aurelia
Here I use the http-client to post data to my PHP script, which has a very simple email functionality.
You can see the action inside my view-model in src/pages/contact/index.js.
You can see the PHP script in src/assets/components/contactengine.php.
These work the way they should. Note: you have to change your gulp build if you want your PHP served the way I serve mine, from the dist folder after gulp-watch or gulp-export.
Besides that, you can use any back-end functionality you like, as long as it returns the proper data. This PHP script does that (a generic sketch of such an endpoint follows the steps below). If you download my distribution to test this, you can simply do the following:
Run gulp export from your terminal in the root folder.
Copy everything from the export folder to your PHP web server.
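For reference, here is a generic sketch of what the PHP side of such a call could look like; this is not the actual contactengine.php, and the field names are made up:

```php
<?php
// Hypothetical endpoint for an Aurelia http-/fetch-client POST.
// The fetch-client typically sends JSON, which arrives via php://input rather than $_POST.
header('Content-Type: application/json');

$payload = json_decode(file_get_contents('php://input'), true);

if (!isset($payload['email'], $payload['message'])) {   // made-up field names
    http_response_code(400);
    echo json_encode(['error' => 'missing fields']);
    exit;
}

// ... send the email, query the database, etc. ...
echo json_encode(['status' => 'ok']);
```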
I would like to solve the following scenario somehow: we have an nginx server acting as a reverse proxy for some Apache servers. We need to make sure that when a request reaches the nginx proxy, it is pre-processed by a PHP script that sets some HTTP request headers based on the URL content, and only then passed on to Apache.
We should avoid redirects in this process, but I have no idea how to do it.
Thanks a lot...
[EDIT]
Sorry for the vague question. Our setup is as follows: nginx is used as a balancer in front of some Apache web servers. On the web servers runs an application that generates e-commerce content (and page categories) based on an analysis of the submitted URL. We use a third-party analysis tool that requires a request header populated with the category, but the categories are computed by the PHP code of the application... I need the request processed by nginx to carry that header before it reaches Apache. I could extract the code from the PHP application and create an intermediate layer, but I have no idea how to manage the whole process.
Here is a simple diagram (black: as-is; green: to-be, or maybe-to-be):
[simple solution diagram]
Your question is very vague, and will probably be closed on that basis. My response here is intended as a comment, but it's a bit long for the comment box.
That you are using nginx as a reverse proxy implies that you are somewhat concerned with performance. While it would be quite possible to implement what you describe, the nature of PHP running inside a web server means it will be rather inefficient at this task: each incoming web request will require a new connection to the backend web server.
Presumably there is some application running on or behind the apache webservers - is there a reason you don't implement the required functionality there?
Can you provide examples of the changes you need to apply to requests and responses? It's possible that some of this could be handled by nginx or apache.
Alternatively, you might have a look at ICAP (RFC 3507), which is a protocol designed to support this kind of transformation. Although there are server implementations written in PHP, I suspect they would have most of the same performance issues referenced above.
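That said, if the category logic can be extracted into a small PHP script, one documented nginx pattern for this is the auth_request module: nginx fires a subrequest to the script for every incoming request, captures a response header from it with auth_request_set (via the $upstream_http_* variables), and copies it onto the proxied request with proxy_set_header. A sketch of the PHP side only; the script name, header names and category logic are all assumptions:

```php
<?php
// category.php (hypothetical): the target of an nginx auth_request subrequest.
// nginx can pass the original URI along, e.g. proxy_set_header X-Original-URI $request_uri;
$uri = isset($_SERVER['HTTP_X_ORIGINAL_URI']) ? $_SERVER['HTTP_X_ORIGINAL_URI'] : '/';

// Stand-in for the real category analysis extracted from the application.
$category = (strpos($uri, '/shoes/') !== false) ? 'shoes' : 'default';

// auth_request only looks at the status code: 200 lets the request through.
// The category travels back as a response header, which nginx captures with
//   auth_request_set $category $upstream_http_x_category;
// and forwards to Apache with
//   proxy_set_header X-Category $category;
header('X-Category: ' . $category);
http_response_code(200);
```

Note that this still costs one extra subrequest per incoming request, which is exactly the inefficiency described above.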
I am still confused about this. Is it possible? If it is possible, please help me out.
If you want an application which works on any computer without installing any additional software, you should not use PHP, which is a server-side language.
However, if you really need to do this, you should ship the PHP files of the application together with a library that can act as an HTTP server from within PHP (like ReactPHP), and write a bootstrap application which:
starts the server script
opens a web browser at the application URI
Of course, using a library like ReactPHP will probably force you to rewrite some parts of your application.
What ReactPHP actually does:
sets up a server which listens on a specified port (it could be the default, 80)
provides a callback which is fired when a request comes in
You can then put your dispatching code inside that callback, as in the sketch below.
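Here is a minimal sketch of that bootstrap's server part, using ReactPHP's HTTP component (react/http, installed via Composer); the API names below follow react/http v1.x, so treat the details as an assumption and check the documentation:

```php
<?php
// server.php: a self-contained HTTP server written in PHP, no Apache needed.
require __DIR__ . '/vendor/autoload.php';

use Psr\Http\Message\ServerRequestInterface;
use React\Http\HttpServer;
use React\Http\Message\Response;
use React\Socket\SocketServer;

// The callback fires for every incoming request; dispatch your application code here.
$http = new HttpServer(function (ServerRequestInterface $request) {
    return Response::plaintext('You requested ' . $request->getUri()->getPath());
});

// Listen on a specified port (80 would need elevated privileges, so 8080 here).
$socket = new SocketServer('127.0.0.1:8080');
$http->listen($socket);

echo "Server running at http://127.0.0.1:8080\n";
```

The bootstrap application would then start this script in the background and open the user's browser at http://127.0.0.1:8080.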
I know that there are a number of libraries available, but I am trying to learn more about the WebDAV protocol itself for a project I'm developing.
For stage 1, I would like to implement a virtual read-only file system in PHP, presenting itself as a WebDAV server.
As far as I can tell, it would need to be able to:
list virtual files & directories
change directories
print the contents of a single file
I’ve found a number of sources, but they either try to do too much or gloss over the implementation of the protocol itself.
Can someone explain or point me to a source that might answer the following:
What are the steps in the communication between the client & server?
How does PHP receive a request, and how should the response be formatted?
Thanks
When I originally started sabre/dav, I made sure to read the entire RFC first. You really need to have a good idea of all the features, the data model, and how they work together.
After that, you probably only really need to look at the PROPFIND, OPTIONS and GET methods. One option is to just look at what a client sends your way, figure out based on the RFCs what the response should be, and then write the code that sends the correct response.
Another good way to start learning is to hook up an existing WebDAV client to an existing WebDAV server and inspect what kind of messages they send back and forth.
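To make the shape of the exchange concrete, here is a minimal read-only sketch in PHP, hand-rolling just enough of OPTIONS, PROPFIND and GET for a single hard-coded virtual file; a real server would also have to honour the Depth header, list collection members, and cover the rest of the RFC:

```php
<?php
// dav.php (hypothetical): a toy WebDAV responder for one virtual file, /hello.txt.
$method = $_SERVER['REQUEST_METHOD'];

switch ($method) {
    case 'OPTIONS':
        header('DAV: 1');                          // advertise WebDAV class 1 support
        header('Allow: OPTIONS, PROPFIND, GET');
        break;

    case 'PROPFIND':
        http_response_code(207);                   // 207 Multi-Status
        header('Content-Type: application/xml; charset=utf-8');
        echo '<?xml version="1.0" encoding="utf-8"?>';
        echo '<D:multistatus xmlns:D="DAV:">'
           .   '<D:response>'
           .     '<D:href>/hello.txt</D:href>'
           .     '<D:propstat>'
           .       '<D:prop>'
           .         '<D:resourcetype/>'           // empty resourcetype means a plain file
           .         '<D:getcontentlength>13</D:getcontentlength>'
           .       '</D:prop>'
           .       '<D:status>HTTP/1.1 200 OK</D:status>'
           .     '</D:propstat>'
           .   '</D:response>'
           . '</D:multistatus>';
        break;

    case 'GET':
        header('Content-Type: text/plain');
        echo 'Hello, WebDAV';                      // 13 bytes, matching getcontentlength
        break;

    default:
        http_response_code(405);                   // read-only: reject PUT, DELETE, ...
}
```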
I'm making a simple web-based control panel, and I figured the easiest way to accomplish it would be to have PHP on the two machines (one being the web-facing machine, the other being behind a VPN). Basically, when I press a button on the site on the externally facing IP of machine 1, it should send a request to the internally facing IP (e.g. 192.168.100.1) of machine 2 and run the PHP file there (test.php, plus some $_GET data) without actually redirecting the end user to 192.168.100.1, because obviously that would time out, since the user has no access to it.
If all you want is to make certain internal PHP pages accessible on the external server, you should consider setting up a reverse proxy instead of manually proxying requests with PHP.
See the Apache documentation for an example: http://httpd.apache.org/docs/2.2/mod/mod_proxy.html
Of course this won't work if you do your authentication on the external server and/or need to execute additional PHP code on the external server before/after the internal PHP code. In that case refer to Mihai's or Louis's answer.
You can use cURL to send or forward HTTP requests from machine 1 to machine 2 and to receive the responses machine 2 gives you and (if needed) process those responses to show them to the user.
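A minimal sketch of that cURL forwarding, running on machine 1; the action parameter and URL are made up to match the example in the question:

```php
<?php
// Hypothetical forwarder on machine 1: relays a button press to machine 2
// over the VPN and shows machine 2's response to the user.
$action = urlencode(isset($_GET['action']) ? $_GET['action'] : 'status');

$ch = curl_init('http://192.168.100.1/test.php?action=' . $action);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // capture the response instead of printing it
curl_setopt($ch, CURLOPT_TIMEOUT, 10);           // don't hang forever if machine 2 is down

$response = curl_exec($ch);
if ($response === false) {
    http_response_code(502);
    echo 'Machine 2 did not respond: ' . curl_error($ch);
} else {
    echo $response;                              // the end user never sees 192.168.100.1
}
curl_close($ch);
```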
You could also use (XML-/JSON-)RPC or SOAP, which would be a bit more elegant and extensible (and more commonplace than raw cURL), but it comes with a steeper learning curve and more setup time/work.
You should also be able to use file_get_contents (which normally supports the http protocol), or http_get, a function designed for simple HTTP GET requests.
It might not be the most ideal way, but should be fairly easy to do.
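For comparison, the file_get_contents variant of the same forwarding is nearly a one-liner (it assumes allow_url_fopen is enabled in php.ini):

```php
<?php
// Same idea as the cURL version, using the http stream wrapper.
$response = file_get_contents('http://192.168.100.1/test.php?action=status');
echo ($response === false) ? 'Machine 2 did not respond' : $response;
```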