I know that you can cache the WSDL, but is there a way to cache the SOAP responses through configuration of the PHP SoapClient?
Obviously, we could "cache" the data ourselves by building some tables in a database and running a cron job, but that takes much more effort. I am wondering whether there is a way to specify caching of the actual SOAP data being returned from the server to the client.
Similar to how a browser can cache various data based on headers?
Do I need to have the SOAP server configured properly, or is this something I can do strictly on the SoapClient side?
Our SOAP server is run by a third-party vendor over which we have little control, so I am hoping to keep the solution on the SoapClient side if possible.
Open to all suggestions/alternatives (aside from the one I mentioned) if this does not exist.
In short - no. That type of caching is very application-specific, so it's not built into the protocol for you. I would say that the solution you sketched out yourself is a good way to go. A side effect of such a setup is that you get a level of decoupling between your main application and the external service. This can be useful for a lot of things once you get past the initial development phase (debugging, service windows, logging etc.).
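If you do end up rolling your own on the client side, a thin wrapper around SoapClient is usually all it takes. Below is a minimal sketch assuming a disk cache keyed on the serialized request; the cache directory, TTL and the example operation name are made up, and SoapClient itself offers nothing like this out of the box.

    <?php
    // Minimal sketch: cache raw SOAP response envelopes on disk by overriding __doRequest().
    // The cache directory and TTL below are arbitrary illustration values.
    class CachingSoapClient extends SoapClient
    {
        private $cacheDir = '/tmp/soap-cache';
        private $ttl = 300; // seconds

        public function __doRequest($request, $location, $action, $version, $oneWay = false): ?string
        {
            // Key on everything that identifies the call, including the request body.
            $file = $this->cacheDir . '/' . md5($location . $action . $version . $request) . '.xml';

            if (is_file($file) && (time() - filemtime($file)) < $this->ttl) {
                return file_get_contents($file);   // serve the cached response envelope
            }

            $response = parent::__doRequest($request, $location, $action, $version, $oneWay);

            if ($response !== null) {
                if (!is_dir($this->cacheDir)) {
                    mkdir($this->cacheDir, 0700, true);
                }
                file_put_contents($file, $response);
            }
            return $response;
        }
    }

    // Used exactly like a normal SoapClient (WSDL URL and operation are placeholders):
    // $client = new CachingSoapClient('https://example.com/service?wsdl');
    // $result = $client->someOperation(['foo' => 'bar']);

The obvious caveat is that you are caching blind: anything time-sensitive or user-specific in the response will be served stale until the TTL runs out.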
I need to convert EPP (a session-based protocol - https://www.rfc-editor.org/rfc/rfc5734) to an HTTP request/response-based protocol (JSON). The JSON part has already been coded and is working with a few clients.
I've looked at nginx with WebSockets, but WebSockets appear too high-level for the raw EPP protocol.
I need to solve the following process:
nginx to terminate an SSL TCP connection
read off the EPP request (XML) - preferably in PHP
convert to JSON and send it to an HTTP server
read the result
convert to XML and send it back to the EPP connection
Are there any recommended technologies within nginx to achieve this? I can code the PHP socket server without too much hassle.
So you are building an EPP server? Welcome to the EPP world, from someone who has been in it since its birth, or even before :-)
EPP is a "simple" protocol using XML over TLS (typically; there are some instances over HTTPS, and during the drafting period there were other proposals, such as over SMTP or BXXP).
So, as a server, you need something able to handle TLS termination and read XML. This is possible in any language and is not rocket science. Of course, the devil lies in the details, and you do not provide enough details/context to see exactly what constraints or specific problems you may have.
So you may be a little off-topic here, because if you want people to help you write a simple server that handles TLS and reads XML, you would need to show some code.
Please make sure to read RFC 5734 multiple times, in particular the transport considerations. Of course you need to remember that it is a stateful protocol, so if you "forward" the requests internally over a stateless protocol you will need to carry some sort of authentication.
You do not need WebSockets; in fact, I do not understand why you mention them. You just need TLS termination, not HTTPS termination.
Have a look at HAProxy too; it is a popular tool for handling things like that.
But again, based on your specific (unknown) constraints (especially the number of clients, volume of queries, SLAs needed, etc.), something as simple as stunnel may be enough.
Note that mod_epp exists for Apache. It may not be very actively maintained anymore, but it could give you ideas. It allows you to use any CGI program under Apache while the server in fact receives EPP frames rather than HTTP ones.
As a side note, besides security (which should be covered by RFC 5734), I would recommend being careful about encodings and XML namespaces, and avoiding mixing multiple serialisation mechanisms in the same stream (JSON inside XML is a bad idea, as is XML inside JSON, but I do not know exactly how your "convert" part works).
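To make the transport considerations concrete: RFC 5734 frames every EPP data unit with a 32-bit total-length header (header included) in network byte order. A rough PHP sketch of the relay loop is below; it assumes TLS is terminated upstream (stunnel, HAProxy or the nginx stream module), and the backend URL, port and XML-to-JSON mapping are placeholders. It also skips the EPP session logic itself (greeting, login, etc.).

    <?php
    // Sketch only: assumes TLS has already been terminated upstream, so this PHP process
    // only sees a plain TCP stream per EPP client.
    $server = stream_socket_server('tcp://0.0.0.0:1700', $errno, $errstr);  // port is arbitrary here
    $conn   = stream_socket_accept($server, -1);                            // one client, for illustration

    // RFC 5734: every data unit starts with a 32-bit total length (header included), network byte order.
    function read_epp_frame($conn)
    {
        $header = stream_get_contents($conn, 4);
        if ($header === false || strlen($header) < 4) {
            return null;                                    // connection closed
        }
        $total = unpack('N', $header)[1];
        return stream_get_contents($conn, $total - 4);      // the XML payload
    }

    function write_epp_frame($conn, $xml)
    {
        fwrite($conn, pack('N', strlen($xml) + 4) . $xml);  // prepend the length header again
    }

    // Hypothetical backend call - the URL and the XML<->JSON mapping are placeholders.
    function forward_to_json_backend($xml)
    {
        $ctx = stream_context_create(['http' => [
            'method'  => 'POST',
            'header'  => "Content-Type: application/json\r\n",
            'content' => json_encode(['epp' => $xml]),      // placeholder conversion
        ]]);
        $reply = file_get_contents('http://127.0.0.1:8080/epp', false, $ctx);
        return $reply === false ? null : $reply;            // assumed to already be response XML
    }

    while (($xml = read_epp_frame($conn)) !== null) {
        $response = forward_to_json_backend($xml);
        if ($response !== null) {
            write_epp_frame($conn, $response);
        }
    }

Since EPP is stateful, the backend call in a real setup would also need to carry whatever session/authentication token your JSON API uses.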
What's the best method for posting some data from a server-side script to a PHP web app on another server?
I have control over both ends, but I need to keep it as clean as possible.
I'm hoping people don't mistake this for a request for code; I'm not after anything like that, just a suitable method - even the name of a technology is good enough for me. (FYI, the recipient web app will be built in Yii, which supports REST, if that matters.)
Use cURL: http://curl.haxx.se
If you're calling from a PHP script, you can use PHP's cURL extension: https://php.net/curl
Probably best to do it over SSL, if you want to keep the info safe.
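If cURL is the route you take, a minimal sketch of such a POST from PHP looks like this; the URL and field names are placeholders:

    <?php
    // Minimal sketch: POST form data to the receiving PHP app over HTTPS.
    $ch = curl_init('https://example.com/receiver.php');   // placeholder URL
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query(['name' => 'value']),
        CURLOPT_RETURNTRANSFER => true,                     // return the response instead of printing it
    ]);
    $response = curl_exec($ch);
    if ($response === false) {
        error_log('POST failed: ' . curl_error($ch));
    }
    curl_close($ch);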
Most of the answers here mention cURL, which is fine for smaller use cases. However, if you have more complex and/or growing needs, or plan to open up access to other servers in the future, you might want to consider creating and consuming a web service.
This article makes a somewhat compelling argument for RESTful web services over SOAP-based, but depending on who will be consuming the service, a SOAP-based web service can be both simple to consume (How to easily consume a web service from PHP) and set up (php web service example). Consuming a RESTful web service is easily done via cURL (Call a REST API in PHP).
The choice really comes down to scope and your consuming audience.
You can access your REST API with PHP's cURL extension.
You will find examples here.
If you use a framework, some have alternatives to cURL which are easier to handle (like the Zend HTTP client).
Or, for very simple purposes (and if your PHP settings allow it), you could use file_get_contents().
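A rough sketch of that file_get_contents() variant, again with a placeholder URL (it needs allow_url_fopen enabled):

    <?php
    // Sketch: POST with file_get_contents() via a stream context (requires allow_url_fopen = On).
    $context = stream_context_create(['http' => [
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => http_build_query(['name' => 'value']),
    ]]);
    $response = file_get_contents('https://example.com/receiver.php', false, $context);  // placeholder URL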
I'm looking at connecting an existing PHP codebase to a remote CORBA service. All actual data is passed via XML so I don't think I need an IDL to PHP mapping for remote invocation. I just need to connect to the CORBA service, transmit to it the XML string, and read the XML response.
After some research on this I found the CORBA4PHP PHP extension, which I'm about to try, although I have some reservations (it was last updated in 2009). I also found numerous implementations in Java and Python.
To avoid dealing with a new PHP extension I'm wondering if there exists a CORBA HTTP proxy of sorts in any language that would take care of communicating with the CORBA service. I would HTTP POST to the proxy (or some socket communication), it would relay it to the CORBA service, and relay back to me its response.
Does such a proxy exist?
I don't know of such a service, but perhaps others might know of one. That said, given how simple your IDL is, I would just go ahead and try the CORBA4PHP extension and use that if it works.
If it doesn't work (I've no idea about its quality), it would be really simple to build such a proxy yourself:
Download a free ORB (let's say you get one for Java, say JacORB)
Compile your IDL and build a client to the CORBA service
Add a front-end API to your Java application that your PHP code can use to call it, passing in the string parameter containing your XML (POST sounds reasonable, and there are plenty of ways to implement such a thing in Java)
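On the PHP side, talking to such a home-grown proxy would then just be an HTTP POST of the raw XML. A sketch, assuming the proxy replies with the CORBA service's XML; the endpoint is made up:

    <?php
    // Sketch: send the XML string to a hypothetical Java/JacORB proxy and read the XML reply.
    function call_corba_proxy($xml)
    {
        $ch = curl_init('http://localhost:8080/corba-proxy');       // placeholder proxy endpoint
        curl_setopt_array($ch, [
            CURLOPT_POST           => true,
            CURLOPT_POSTFIELDS     => $xml,                          // raw XML body, not form-encoded
            CURLOPT_HTTPHEADER     => ['Content-Type: text/xml; charset=utf-8'],
            CURLOPT_RETURNTRANSFER => true,
        ]);
        $responseXml = curl_exec($ch);
        curl_close($ch);
        return $responseXml;                                         // false on transport error
    }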
I'm currently re-developing a fairly large-scale PHP web application. Part of this redevelopment involves moving the bulk of some fairly hefty business logic out of the core of the web app and moving it into a set of SOAP web services.
What's currently concerning me (just slightly) is the perceived overhead this brings with it in terms of local HTTP traffic. I should explain that the SOAP web services currently, and for the foreseeable future, will reside on the same physical server, and if and when they move they will remain on the same network. What concerns me is the fact that each call that used to be an internal PHP function call is now an HTTP request invoking a similar function call.
Obviously this is something I can measure as we move further along the line, but I was wondering if anyone could offer any advice, or more importantly share any previous experience of taking an application down this route.
Are you doing hundreds or thousands of these calls a second? If not, then you probably don't have to worry.
But profile it. Set up a prototype of the system working across the network with a large number of SOAP calls, and see if it slows down to unacceptable levels.
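Something as rough as the following is usually enough for a first number; the WSDL URL, the SOAP operation and the direct function it is compared against are all hypothetical:

    <?php
    // Rough sketch: time N SOAP round-trips against N direct function calls.
    $client = new SoapClient('http://localhost/service?wsdl');   // placeholder WSDL
    $n = 100;

    $start = microtime(true);
    for ($i = 0; $i < $n; $i++) {
        $client->calculateSomething(['id' => $i]);                // hypothetical SOAP operation
    }
    printf("SOAP:   %.4f s for %d calls\n", microtime(true) - $start, $n);

    $start = microtime(true);
    for ($i = 0; $i < $n; $i++) {
        calculate_something($i);                                  // hypothetical direct PHP call
    }
    printf("Direct: %.4f s for %d calls\n", microtime(true) - $start, $n);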
If the server is running on the same physical box then you can't have any privilege separation. And increasing capacity by adding tiers to the stack (instead of equivalent nodes) is a recipe for non-availability.
If you're hiding something behind SOAP, the HTTP overhead is likely to be relatively small compared to whatever the 'something' is doing (e.g. reading from a database). However, when you add up the cost of constructing the SOAP request, decomposing the SOAP request and composing the response, plus the overhead of HTTP, one has to wonder why you don't provide a shortcut by calling the PHP code implemented within the server directly, in the same thread of execution.
If you were to design an abstract SOAP interface that mapped directly to and from PHP types, then this could be short-circuited without the overhead of maintaining different APIs.
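To make that concrete, here is one way such a shortcut could look: the caller depends on an interface, and whether the implementation goes over SOAP or runs in-process becomes a deployment decision. All names are illustrative:

    <?php
    // Sketch: the calling code depends on an interface, not on the transport.
    interface OrderService
    {
        public function getOrder($id);
    }

    // In-process implementation, used while everything lives on the same box.
    class LocalOrderService implements OrderService
    {
        public function getOrder($id)
        {
            // ... call the business logic directly ...
            return ['id' => $id];
        }
    }

    // SOAP-backed implementation, used once the service moves to another host.
    class SoapOrderService implements OrderService
    {
        private $client;

        public function __construct(SoapClient $client)
        {
            $this->client = $client;
        }

        public function getOrder($id)
        {
            return $this->client->__soapCall('getOrder', [$id]);
        }
    }

Swapping one implementation for the other is then a one-line change wherever the object is constructed.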
Does it HAVE to be a SOAP web service? Is this the client telling you this, or is it your decision?
You seem concerned about HTTP calls, which is fine, but what about the overhead of unnecessarily serializing/de-serializing to/from XML to be transferred over the "wire" - that "wire" being the same machine? =)
It doesn't really make sense to have a SOAP-based web service that is only going to be consumed by a client on the same machine.
However, I agree with @Skilldrick's answer - it's not going to be an issue as long as you are intelligent about your HTTP calls.
Cache whenever you can, batch your calls, etc.
SOAP is more verbose than REST. REST uses the HTTP protocol to do the same with less network bandwidth, if that's your concern.
See:
WhatIsREST
Wikipedia
So, to really answer your question, remember the 80/20 rule. Use a benchmarking/tracing tool to help you find where your hotspots are. Fix those and forget about the rest.
I'm using PHP 5 and the built-in SoapClient.
This is really a question for the developers of PHP's SOAP support.
The SoapClient gives you 2 choices: WSDL mode, which caches the WSDL file locally, and non-WSDL mode, which requires you to build your own requests.
Using the WSDL is obviously more convenient. But, I wonder how much processing this does each time you create a SoapClient instance. The WSDL is cached, but does it have to re-process the entire WSDL each time you create a SoapClient? If so, it seems it might be more efficient (CPU-wise) to go the non-WSDL route.
It's no problem to create the non-WSDL SoapClient in my situation. Should I?
Parsing the WSDL takes enough time that they built in a cache (so yes, it's painful). I guess the real question is when that cache expires (on script exit?) and how many calls you are making (per script?).
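For what it's worth, the WSDL cache is controlled by php.ini settings (soap.wsdl_cache_enabled, soap.wsdl_cache_dir, soap.wsdl_cache_ttl - the TTL defaults to one day), and the disk cache survives across requests, so it is not rebuilt on every script run. You can also control it per client; a short sketch with a placeholder WSDL URL:

    <?php
    // Per-client control of the WSDL cache via the cache_wsdl constructor option.
    // Globally, soap.wsdl_cache_enabled=0 in php.ini disables it for every request.
    $client = new SoapClient('https://example.com/service?wsdl', [   // placeholder WSDL URL
        'cache_wsdl' => WSDL_CACHE_BOTH,   // or WSDL_CACHE_DISK, WSDL_CACHE_MEMORY, WSDL_CACHE_NONE
    ]);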
Also, it sounds like you're trying to prematurely optimize something. If it's not a problem, don't worry about it. You could end up spending time on something that doesn't matter.