I have several PHP libraries (scripts, classes, and function files) that I want to make available as a web-accessible service. I am trying to be as deliberate with the wording as possible since 'Web Service' seems to be rather nuanced. From what I can tell there are two main flavors of Web Service, REST and WSDL/SOAP, with the latter seeming more applicable to what I want to do, but it seems like a lot of overhead and possibly overkill. Could I simply make a PHP page that accepts a "function" parameter to indicate what action to take, then echo out the response like normal? Requiring the construction of a SOAP message as part of an AJAX call seems horrible.
What is the difference between requesting a PHP page and a Web Service response (aside from the SOAP protocol)?
Would you ever return a JSON string in SOAP?
Are the implementations separate, exclusive or in parallel?
Could you (or would you even want to) use Apache rewrites to accomplish nearly the same effect as REST or WSDL, directing the request to a page and appending a parameter for the requested action?
Or am I overthinking all this? Should I not worry about SOAP and just go with the standard PHP function parameter, returning text or JSON?
I am also looking ahead a bit, since I work with a lot of legacy code bases, Ruby, Perl, Python, and Java, and would eventually want to make a Service from them as well. Or at least incorporate the libraries somehow.
I am going to recommend this book to you, which is an amazing reference for advanced PHP topics, and is very current. It has a chapter that focuses on networking with PHP, and a specific section on creating your own PHP-based web services. It also contains loads upon loads of other up-to-date kung fu for PHP developers.
http://www.amazon.com/PHP-Advanced-Object-Oriented-Programming-QuickPro/dp/0321832183/
I can tell you what worked for me.
I had to create a small web service from which an outside application needed to get a list of products. I echoed a JSON-encoded array, while using .htpasswd to protect the data from prying eyes :). The data was accessible very easily with a small cURL script, and it took about 2-3 hours.
If you need web service users to manage information, or if you need an ACL, you will have to look into SOAP and/or REST more. For what I needed it was more than enough.
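For what it's worth, the whole approach can be sketched in a few lines. This is a minimal, hypothetical version of the "one page, one 'function' parameter" idea from the question, returning JSON; the action names and data here are made up, and the real endpoint would sit behind .htpasswd exactly as described above.

```php
<?php
// api.php - minimal sketch: dispatch on a "function" parameter, echo JSON.
// Action names and data are hypothetical placeholders.
function handle_request(array $params): string
{
    // Whitelist of exposed operations; anything else is rejected.
    $actions = [
        'list_products' => function () {
            // In a real app this would query the database.
            return ['Widget', 'Gadget'];
        },
    ];

    $name = $params['function'] ?? '';
    if (!isset($actions[$name])) {
        return json_encode(['error' => 'unknown function']);
    }
    return json_encode(['result' => $actions[$name]()]);
}

// The real endpoint would then do:
//   header('Content-Type: application/json');
//   echo handle_request($_GET);
```

A cURL client on the other end only has to hit the URL with `?function=list_products` and `json_decode()` the body, which is all the "web service" many use cases need.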
I've built an AngularJS application over the last several months that uses a MySQL database for its data. The data is fetched by Angular making calls to PHP, and PHP returns JSON strings, etc.
The issue is that once this application is running inside node-webkit, none of the PHP works, so all of the content areas are empty. I assume (though the documentation on this issue is nonexistent, so I have no confirmation) this happens because node-webkit is a client-side application framework and therefore won't run server-side languages like PHP. Is there a way to extend node-webkit to run PHP and other server-side languages?
I have done my best to find an answer to this question before posting, but documentation for this is nonexistent, and all of the information I have found about node-webkit talks about installing Node on your server, installing npm packages for MySQL, and having Angular make calls to Node. This defeats the purpose of the application entirely, as it is designed so that the exe/deb/rpm/dmg can run and you can set up a database with any cloud database provider and be ready to go. Not ideal if you have to buy a VPS just to run this one thing.
I have to assume this is possible in some way. I refuse to believe that everyone with an NW.js application hard-codes all their data.
Thanks in advance
I know of four methods to accomplish this. Some of which you have preferred not to do but I am going to offer them in the hopes it helps you or someone else.
1. Look for an npm package that can do this for you. You should be able to implement this functionality within node.js - https://www.npmjs.com/search?q=mysql
2. You can host your PHP remotely. Using node-remote you can give this server the appropriate access to your NW.js project.
3. You can code a RESTful PHP application that your JavaScript can pass information off to.
4. You can use my boilerplate code to run PHP within an NW.js project. It fires up an express.js web server internally to accomplish this, but the server is restricted to the machine and does not accept outside connections - https://github.com/baconface/php-webkit
Options 1 and 4 both carry a risk in your case: your project can be reverse engineered to reveal the source code, and the connection information can be retrieved rather easily. So those should only be used in an application on trusted machines; options 2 and 3 are the ideal solutions.
What's the best method for posting some data from a server-side script to a PHP web app on another server?
I have control over both ends, but I need to keep it as clean as possible.
I'm hoping people don't mistake this as a request for code, I'm not after anything like that, just a suitable method, even the name of a technology is good enough for me. (FYI the recipient web app will be built in Yii which supports REST if that matters).
Use cURL: http://curl.haxx.se
If you're calling from a PHP script, you can use PHP's cURL extension: https://php.net/curl
Probably best to do it over SSL, if you want to keep the info safe.
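A minimal sketch of that with PHP's cURL functions might look like the following; the URL in the usage comment is a placeholder, and error handling is kept to a minimum.

```php
<?php
// Sketch: POST an array of fields to another server with PHP's cURL extension.
function post_data(string $url, array $fields)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
    $body = curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return [$status, $body];
}

// Usage (not executed here; URL is a placeholder):
//   [$status, $body] = post_data('https://example.com/receive.php', ['id' => 42]);
```

Use an https:// URL (and leave certificate verification on) if the data needs to stay private in transit, per the SSL advice above.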
Most of the answers here mention cURL, which is fine for smaller use-cases. However if you have more complex and/or growing needs, or plan to open up access to other servers in the future, you might want to consider creating and consuming a web service.
This article makes a somewhat compelling argument for RESTful web services over SOAP-based, but depending on who will be consuming the service, a SOAP-based web service can be both simple to consume (How to easily consume a web service from PHP) and set up (php web service example). Consuming a RESTful web service is easily done via cURL (Call a REST API in PHP).
The choice really comes down to scope and your consuming audience.
You can access your REST API with PHP's cURL extension.
You will find examples here.
If you use a framework, some have alternatives to cURL which are easier to handle (like Zend_Http_Client).
Or for very simple purposes (and if your PHP settings allow it, i.e. allow_url_fopen is enabled), you could use file_get_contents().
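For the file_get_contents() route, a stream context is what turns the call into a POST rather than a plain GET. A rough sketch (the URL in the usage comment is a placeholder):

```php
<?php
// Sketch: build a stream context that makes file_get_contents() send a POST.
// Requires allow_url_fopen=On in php.ini.
function build_post_context(array $fields)
{
    return stream_context_create([
        'http' => [
            'method'  => 'POST',
            'header'  => 'Content-Type: application/x-www-form-urlencoded',
            'content' => http_build_query($fields),
        ],
    ]);
}

// Usage (not executed here; URL is a placeholder):
//   $body = file_get_contents('https://example.com/api.php', false,
//                             build_post_context(['id' => 42]));
```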
I am very new to both iPhone app development and PHP development, though I have around 8 years of experience in .NET technologies. We have started developing an iPhone app which will talk to various third-party APIs like Facebook, Twitter, Foursquare, and Google geocoding.
Now a lot of these interactions will have to happen from within the app itself, for instance the initial authentication with Facebook, posting messages to Facebook, etc. But we need some of the interactions to happen at the server for a variety of reasons, and since I am a .NET developer the obvious means I could think of was web services.
We didn't want to use SOAP for a variety of reasons, and we tried developing our own framework for web services using JSON, but realized it would be too much effort to add features like security to the framework we were creating.
So we decided to go with an established framework like Zend where we could implement security and other features out of the box. We also have to decide between using Zend JSON-RPC and Zend REST. The questions I have are multi-fold; please understand I am very new to PHP development, so some of my questions might be very basic.
1. I would like to know from anyone who has developed iPhone apps interacting with a lot of third-party APIs: how much interaction have you put on the server, and are there any other efficient ways to communicate with a server besides web services?
2. Between Zend REST and Zend RPC, which is more secure and which will take less development effort? I am guessing Zend REST will be more secure and Zend RPC will take less development effort.
3. Is it a good idea to use an established framework like Zend for your development when performance is of utmost importance? Will using Zend add overhead in terms of performance?
4. How secure are Zend JSON-RPC calls, and how can I make the service calls more secure when using Zend JSON-RPC?
I am a .NET developer transitioning into app and PHP development, so I am hoping to get some guidelines on the whole approach we are planning to take from someone experienced in these areas.
Let's see how to best answer this one.
Answer to 1
Haven't done an iPhone app. At work I build/maintain an Adobe AIR client-side application that does lots of service calls. My rule of thumb is to do anything that makes sense on the client (take advantage of its resources) instead of nagging the server constantly. Usually our application loads all the info it needs from the server upfront and has enough data to do a lot with. Every once in a while it needs to send that information back to the server to be stored in a secure location, but most of the logic of how things work is in the client-side app.
Since we are using Adobe technologies, we are using AMF as the transport protocol to send data back and forth between the client and server.
Answer to 2
Security will be up to you to handle; I talk more about this in answer 4. For REST you are just passing a GET/POST/DELETE/etc. with values that are not hidden. With XML-RPC you are just passing XML, which anyone can see as well. Now, REST is a discussion on its own: as there is no real standard, it's hard to define what REST is when people are talking about it. If you want to use REST, I don't think Zend_Rest does a good job of really handling it. There are other frameworks that focus on REST that might work better for you. Also, if security is important, use HTTPS instead of HTTP.
If you choose to do REST (the right way), I think it'll take you longer to implement.
Answer to 3
It's all about how you architect it. I use Zend for the services I've described above at work. I've built it in a way where you can call the API using JSON-RPC or AMF (and I can easily add XML-RPC or others if I want) and consume the same resource. I use AMF for our AIR application and JSON-RPC for my PHP sites/tools. I like JSON better as I feel it's lighter weight than XML, and for me it's easier to work with.
Next, I have cron jobs scheduled where every night I cache thousands of queries' worth of data from the db into memory - data that I know won't change in the next day and will be used quite often. Anything not cached by this process will be cached individually as it's requested by a client, with a specific expiration time. What does this all mean? All my service calls are extremely fast and efficient. Many times I don't even have to hit the db, so the time to process a request on the server side is a split second.
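A toy sketch of that second, per-request caching layer: cache a callable's result under a key with an expiration time. This uses the filesystem to stay self-contained; in a real setup you would use memcached or APCu as described.

```php
<?php
// Sketch: cache the result of an expensive computation (e.g. a DB query)
// under $key, recomputing only after $ttl seconds have passed.
function cached(string $key, int $ttl, callable $compute)
{
    $file = sys_get_temp_dir() . '/cache_' . md5($key);
    if (is_file($file) && (time() - filemtime($file)) < $ttl) {
        return unserialize(file_get_contents($file)); // cache hit
    }
    $value = $compute();                  // cache miss: run the expensive work
    file_put_contents($file, serialize($value));
    return $value;
}

// Usage (hypothetical query):
//   $products = cached('products_all', 3600, function () {
//       return run_expensive_db_query();
//   });
```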
Also, if you use Zend, don't use the framework for an API; just use the server module as a standalone piece. Don't use the whole MVC stack - create a standalone file for each protocol you want to use. I have a json.php which handles the JSON-RPC requests and an amf.php file that handles AMF requests. Both files are pretty lightweight inside; they just need to instantiate the Zend_Json_Server or Zend_Amf_Server, point the class path to where my classes are, and handle the request.
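To make the shape of that json.php concrete without requiring Zend itself: the following is a stripped-down, hypothetical sketch of the dispatch that Zend_Json_Server automates for you (along with SMD generation, reflection, and proper JSON-RPC error handling). The service class and method names here are made up.

```php
<?php
// Hypothetical service class - the "class path" you hand to the server.
class ProductService
{
    public function listProducts(): array
    {
        return ['Widget', 'Gadget'];
    }
}

// Bare-bones JSON-RPC dispatch: decode the request, call the named
// method on the service object, encode the result.
function dispatch_jsonrpc(string $raw, $service): string
{
    $req = json_decode($raw, true);
    $method = $req['method'] ?? '';
    if (!method_exists($service, $method)) {
        return json_encode(['error' => 'method not found', 'id' => $req['id'] ?? null]);
    }
    $result = $service->$method(...($req['params'] ?? []));
    return json_encode(['result' => $result, 'id' => $req['id'] ?? null]);
}

// The real json.php would read php://input and echo the response:
//   echo dispatch_jsonrpc(file_get_contents('php://input'), new ProductService());
```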
Answer to 4
Whichever protocol you use, you'll have to build security into it like you would with anything. You can use the Zend authentication modules and ACL as well. If you are passing sensitive data back and forth, whether it's JSON, XML, or REST, you'll need to encrypt that data or someone will see it. AMF is a binary format, making that a bit harder to do, but that's beside the point. Whichever protocol you choose, you still need to build some authentication mechanism to make sure others don't use the service without access.
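One common authentication mechanism of that kind is HMAC request signing with a shared secret; a rough sketch (the parameter names are illustrative, and a real implementation would also include a timestamp/nonce to prevent replay):

```php
<?php
// Sketch: sign a request's parameters with a shared secret, and verify
// the signature on the server side.
function sign_request(array $params, string $secret): array
{
    ksort($params);                                // canonical parameter order
    $params['sig'] = hash_hmac('sha256', http_build_query($params), $secret);
    return $params;
}

function verify_request(array $params, string $secret): bool
{
    $sig = $params['sig'] ?? '';
    unset($params['sig']);
    ksort($params);
    $expected = hash_hmac('sha256', http_build_query($params), $secret);
    return hash_equals($expected, $sig);           // constant-time comparison
}
```

Either side tampering with a parameter invalidates the signature, so the server can reject forged calls without needing the payload itself to be secret (pair it with HTTPS if it is).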
If you are looking for more info on the different ways to build web services using Zend, I think the book Zend Framework Web Services is a good resource to start with. I hope this helps get you started.
How can New Relic tap into my app with a simple install? How does it know all the methods, requests, etc?
It works for RoR, PHP, etc.
Can anyone explain the technology behind it? I'm interested in tapping into my Rails app, but I want to do so smoothly like New Relic.
Thanks
First up, you will not manage to duplicate the functionality of New Relic on your own. Ignoring the server side, the rpm gem is a pretty complex piece of software doing a lot of stuff. Have a look at the source if you want to see how it hooks into the Rails system. The source is worth a read, as it does some cool stuff in terms of threading and marshaling of the data before sending it back to their servers.
If you want a replacement because New Relic is expensive (and rightly so - it's awesome at what it does), then have a look at the FreeRelic project on GitHub.
They use aspect-oriented programming concepts and reflection heavily to intercept the original method call and add instrumentation around it.
In a general way, New Relic's gem inserts a kind of middleware into your web framework and collects data from your endpoint (think of a Rails route) until its response. After every "harvest" interval (defaulting to 60 seconds), it sends a POST request to New Relic's services with this data.
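That collect-then-harvest cycle can be illustrated with a toy PHP sketch. New Relic's real agents hook the framework internals rather than relying on a manual wrapper like this, so this only shows the shape of the idea; all names are hypothetical.

```php
<?php
// Toy agent: wrap a call, record its duration, buffer the metric, and
// flush ("harvest") the buffer periodically to a collector.
class MiniAgent
{
    private $metrics = [];

    public function instrument(string $name, callable $fn)
    {
        $start = microtime(true);
        $result = $fn();                       // run the instrumented code
        $this->metrics[] = [
            'name' => $name,
            'ms'   => (microtime(true) - $start) * 1000,
        ];
        return $result;
    }

    public function harvest(): array
    {
        // The real agent POSTs this batch to the collector every ~60s.
        $batch = $this->metrics;
        $this->metrics = [];
        return $batch;
    }
}
```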
You can also tailor data you need with Custom Metrics, Custom Events.
It is also possible to run queries with NRQL and build graphs with the results (as you would in Grafana).
They have a customized service for WordPress too, but it is a bit messy at the start.
If you want to save some money, one option is to configure CloudWatch + Datadog, but I would give their service a shot if uptime is crucial for your app.
For a Rails solution you could simply implement a more verbose logging level (development/debug level) and interrogate the production.log file for specific events, timings, etc.
For Java they attach a Java agent to the JVM which intercepts method calls and monitors them. You can use AspectJ to replicate the same behaviour and log every method call to wherever you want - say, custom CloudWatch metrics.
In the case of Java it's bytecode instrumentation. They "hack" the key methods of your application server and add their code to them. Then they send relevant transaction info to their server and aggregate it, and you can see the summary. It's a really complicated process, so I don't think one dev could implement it.
If you’re already familiar with New Relic’s application monitoring, then you probably know about New Relic’s agents that run in-process on web apps, collecting and reporting all sorts of details about what’s happening in the app. RUM leverages the agents to dynamically inject JavaScript into pages as they are built. The injected JavaScript collects timing information in the browser and contains details that identify the specific app and web transaction processed on the backend, as well as how time was spent in the app for each request. When a page completes loading in an end user’s browser, the information is sent back to New Relic asynchronously – so it doesn’t affect page load time.
You can turn RUM on/off via your application settings in New Relic. As well, you can turn RUM on/off via the agent’s configuration file (newrelic.yml – a ‘browser_monitoring auto_instrument’ flag has been introduced).
The agents have been enhanced to automatically inject JavaScript into the HTML pages, so using RUM is as simple as checking the checkbox on the New Relic control panel. However, if you’d prefer more control, you can use New Relic’s Agent API to generate the JavaScript and thus control exactly when and where the header and footer scripts are included.
I asked a recent question regarding the use of readfile() for remotely executing PHP, but maybe I'd be better off setting out the problem to see if I'm thinking the wrong way about things, so here goes:
I have a PHP website that requires users to log in, includes lots of forms and database connections, and makes use of $_SESSION variables to keep track of various things.
I have a potential client who would like to use the functionality of my website, but on their own server, controlled by them. They would probably want to restyle the website using content and CSS files local to their server, but that's a problem for later.
I don't want to show them my PHP code, since that's the value of what I'd be providing.
I had thought to do this with calls to include() from the client's server to mine, which at least keeps variable scope intact, but many sites (and the PHP docs) seem to recommend readfile(), file_get_contents() or similar. Ideally I'd like to have a simple wrapper file on the client's server for each "real" one on my server.
Any suggestions as to how I might accomplish what I need?
Thanks,
ColmF
As suggested, comment posted as an answer & modified a touch
PHP is an interpreted language and as such 'reads' the files and parses them. Yes, it can store cached byte code in certain cases, but it's not like the higher-level languages that compile and work in bytecode, which means that the PHP 'compiler' requires your actual source code to work. Check out zend.com/en/products/guard, which might do what you want, though I believe it means your client has to use the Zend Server.
Failing that, sign a contract with the company that includes clauses forbidding reuse of your code, etc. That's your best protection in this case. You should also be careful, though: if you're using anything under an 'open source' license, your entire app may be considered open source, and thus this is all moot.
This is a standard practice for many companies. I have produced software I'm particularly proud of, and a company wants to use it. Because they believe in their own information security, for either 'personal' reasons or because they have to comply with a standard such as PCI, there are times my application must run in their environments. I have offered my products as 'web services' where they query my servers with data and receive responses. In that case my source is completely protected, as this is no different from any other closed API. In every case I have licensed the copy to the client with provisions that they are not allowed to modify or distribute it. This is a legally binding contract and completely expected from the client's side of things. Of course there were provisions that I would provide support, etc., but that's neither here nor there.
Short answers:
Legal agreement, likely your best bet from everyone's point of view
Zend guard like product, never used it so I can't vouch for it
Private API but this won't really work for you as the client needs to host it
Good luck!
If they want it wholly contained on their server then your best bet is a legal solution not a technical one.
You license the software to them and you make sure the contract states the intellectual property belongs to you and it cannot be copied/distributed etc without prior permission (obviously you'll need some better legalese than that, but you get the idea).
Rather than remote execution, I suggest you use a PHP source protection system, such as Zend Guard, ionCube or sourceguardian.
http://www.zend.com/en/products/guard/
http://www.ioncube.com/
http://www.sourceguardian.com/
Basically, you're looking for a way to proxy your application out to a remote server (i.e., your clients' servers). To use something like readfile() on the client's site is fine, but you're still going to need multiple scripts on their end. Basically, readfile() scrapes what's available at a particular file path or URL and pipes it to the end user. So if I were to do readfile('google.com'), it would output the source code of Google's homepage.
Assuming you don't just want to have a dummy form on your clients' sites, you're going to need to have some code hanging out on their end. The code is going to have to intercept the form submissions (so you'll need a URL parameter on the page you're scraping with readfile to tell your code that the form submission URL is your client's site and not your own). This page (the form submission handler page) will need to make calls back to your own site. Think something like this:
readfile("https://your.site/whatever?{$_SERVER['QUERY_STRING']}");
Your site is then going to process the response and then pass everything back to your clients' sites.
Hopefully I've gotten you on the right path. Let me know if I was unclear; I realize this is a lot of info.
I think you're going to have a hard time with this unless you want some kind of funny wrapper that makes cURL-type requests to your server, especially when it comes to handling things like sessions and cookies.
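For the record, such a wrapper would look roughly like this: the client-side file builds cURL options that forward the POST (including the session cookie) to your server. The URLs are placeholders, and a real proxy would need far more hardening than this sketch.

```php
<?php
// Sketch: build the cURL options for a wrapper page that forwards a
// request (with its cookie header) to the real backend script.
function build_proxy_options(string $backend, array $post, string $cookieHeader): array
{
    return [
        CURLOPT_URL            => $backend,
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query($post),
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => ['Cookie: ' . $cookieHeader], // forward the session cookie
    ];
}

// The wrapper page on the client's server would then do:
//   $ch = curl_init();
//   curl_setopt_array($ch, build_proxy_options('https://your.site/real.php',
//       $_POST, $_SERVER['HTTP_COOKIE'] ?? ''));
//   echo curl_exec($ch);
```

Note this only forwards cookies one way; keeping Set-Cookie responses, redirects, and file uploads working is exactly where this approach gets painful, as the answer above warns.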
Are you sure a PHP obfuscator wouldn't be sufficient for what you are doing?
Instead of hosting it yourself, why not do what most php applications do and simply distribute the program to your client with an auto-update feature? Hosting it yourself is complicated, from management of websites to who is paying for the hosting.
If you don't want it to be distributed, then find a pre-written license that allows you to do this. If you can't find one then it's time to talk to a lawyer.
You can't stop them from seeing your code. You can make it very hard for them to understand your code, which is a good second best. See our SD PHP Obfuscator for a tool that will scramble the identifiers and the whitespace in the code, making it much more difficult to understand.