PHP SOAP client: WSDL vs. non-WSDL - which is faster?

I'm using PHP 5 and the built-in SoapClient.
This is really a question for the developers of PHP Soap support.
The SoapClient gives you two choices: WSDL mode, which caches the WSDL file locally, and non-WSDL mode, which requires you to build your own requests.
Using the WSDL is obviously more convenient. But, I wonder how much processing this does each time you create a SoapClient instance. The WSDL is cached, but does it have to re-process the entire WSDL each time you create a SoapClient? If so, it seems it might be more efficient (CPU-wise) to go the non-WSDL route.
It's no problem to create the non-WSDL SoapClient in my situation. Should I?
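For reference, the two modes differ only in how the client is constructed; a minimal sketch (the service URL, namespace and GetQuote operation are placeholders, not a real service):

```php
<?php
// WSDL mode: PHP downloads and parses the WSDL (subject to the WSDL cache)
// and generates the call signatures for you.
$client = new SoapClient('http://example.com/service?wsdl');   // placeholder URL
$result = $client->GetQuote(['symbol' => 'PHP']);                // hypothetical operation

// Non-WSDL mode: nothing is fetched or parsed up front; you supply the
// endpoint and namespace yourself and spell out each call explicitly.
$client = new SoapClient(null, [
    'location' => 'http://example.com/service',   // placeholder endpoint
    'uri'      => 'http://example.com/ns',        // placeholder namespace
]);
$result = $client->__soapCall('GetQuote', [new SoapParam('PHP', 'symbol')]);
```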

It takes long enough that they built a cache in (so yes, it's painful). I guess the real questions are: when does that cache expire (on script exit?), and how many calls are you making (per script)?
Also, it sounds like you're trying to prematurely optimize something. If it's not a problem, don't worry about it. You could end up spending time on something that doesn't matter.
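For what it's worth, the WSDL cache is controlled by php.ini (soap.wsdl_cache_enabled, soap.wsdl_cache_dir, soap.wsdl_cache_ttl, which defaults to one day), so it persists across requests rather than expiring on script exit, and it can also be tuned per client. A small sketch (the WSDL URL is a placeholder):

```php
<?php
// Inspect the current WSDL cache settings from php.ini.
var_dump(ini_get('soap.wsdl_cache_enabled'));   // usually "1"
var_dump(ini_get('soap.wsdl_cache_ttl'));       // lifetime in seconds, default 86400

// Per-client tuning: WSDL_CACHE_MEMORY or WSDL_CACHE_BOTH also keeps the
// parsed WSDL in process memory, so repeated instantiation within one
// script stays cheap; WSDL_CACHE_NONE disables caching for this client.
$client = new SoapClient('http://example.com/service?wsdl', [   // placeholder URL
    'cache_wsdl' => WSDL_CACHE_BOTH,
]);
```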

Related

Connection pooling strategy using guzzle

I want to achieve high availability with Solr Cloud.
I need to develop a Solr PHP client that supports node failure.
My current lead is to work with Guzzle's RetryMiddleware and somehow keep track of which nodes are up or down.
My question is: is this a good approach? (I'm not very familiar with Guzzle.)
I'm not familiar with Solr Cloud, but IMO if you want to create a proper client, you need to write your own middleware for Guzzle with the specific fallback logic inside.
RetryMiddleware is basically for retrying the same request after a delay period, nothing more. You cannot change the request (send it to a different node or something). That's why I think it could be only a part of the solution.
Otherwise the question is too broad at the moment.
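If it helps, this is roughly the shape such a custom middleware could take with Guzzle 6/7: on a connection error it re-issues the same request against the next node in a list. Treat it as a sketch; the node URLs, the 'failover_attempt' option name and the give-up condition are placeholders rather than anything Solr-specific.

```php
<?php
use GuzzleHttp\Client;
use GuzzleHttp\Exception\ConnectException;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Uri;
use Psr\Http\Message\RequestInterface;

function failoverMiddleware(array $nodes)
{
    return function (callable $handler) use ($nodes) {
        return function (RequestInterface $request, array $options) use ($handler, $nodes) {
            $attempt = $options['failover_attempt'] ?? 0;

            return $handler($request, $options)->otherwise(
                function ($reason) use ($handler, $request, $options, $nodes, $attempt) {
                    // Only fail over on connection-level errors, and stop
                    // once every node has been tried.
                    if (!$reason instanceof ConnectException || $attempt >= count($nodes) - 1) {
                        throw $reason;
                    }
                    $next    = new Uri($nodes[$attempt + 1]);
                    $retried = $request->withUri(
                        $request->getUri()->withHost($next->getHost())->withPort($next->getPort())
                    );
                    $options['failover_attempt'] = $attempt + 1;
                    return $handler($retried, $options);
                }
            );
        };
    };
}

// Hypothetical node list; the first entry is used as the default host.
$nodes = ['http://solr-node-1:8983', 'http://solr-node-2:8983'];
$stack = HandlerStack::create();
$stack->push(failoverMiddleware($nodes));
$client = new Client(['base_uri' => $nodes[0], 'handler' => $stack]);
```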

How to do an asynchronous set in the Couchbase PHP client library

In the Couchbase PHP client library there is an exposed method called getDelayed that accepts a callback as a parameter and allows you to do an asynchronous get against Couchbase.
The problem is that I can't find any method to do an asynchronous set (I was expecting something like setDelayed).
Does anyone know a way to do this? What could be the reason to implement an asynchronous get but not a set?
The Couchbase client library does not provide asynchronous methods for set/update operations.
I'm wondering what the purpose would be. The async get makes sense: you send a bunch of keys and get a response whenever the data comes back to you. With a set, you're sending the data to the server. The Couchbase server caches the data in memory and returns instantly; my understanding is that it then queues the write for disk, and after writing to disk it queues it for view indexing. There really isn't much to "delay": the call returns to you immediately after the data is written to memory, so there isn't anything to do asynchronously. The only thing I could think of is that the driver could cache the data for you before sending it to the Couchbase server, but I'm not sure that would accomplish much beyond using memory in the Couchbase client (and making OOM errors harder to debug).
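For context, the async-get pattern the question refers to looks roughly like this. Note this is a sketch in the style of the old 1.x extension: the constructor arguments, the exact getDelayed() signature and the shape of the callback row vary between SDK versions, so treat them as assumptions; the bucket, keys and values are placeholders.

```php
<?php
// Hypothetical sketch of getDelayed() (old 1.x Couchbase extension style);
// argument order and callback shape are assumptions, not a verified API.
$cb = new Couchbase('127.0.0.1:8091', '', '', 'default');   // placeholder connection

// Kick off the multi-get; the callback fires as each result arrives.
$cb->getDelayed(['user:1', 'user:2', 'user:3'], function ($cb, $row) {
    // $row typically carries the key and value of one fetched document.
    printf("%s => %s\n", $row['key'], $row['value']);
});

// A plain set(), by contrast, returns as soon as the server has the data
// in memory - which is why there is no setDelayed() counterpart.
$cb->set('user:4', json_encode(['name' => 'Ada']));
```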

CORBA broker / proxy over HTTP or accessible via sockets (for PHP)?

I'm looking at connecting an existing PHP codebase to a remote CORBA service. All actual data is passed via XML so I don't think I need an IDL to PHP mapping for remote invocation. I just need to connect to the CORBA service, transmit to it the XML string, and read the XML response.
After some research I found the CORBA4PHP PHP extension, which I'm about to try, although I have some reservations (it was last updated in 2009). I also found numerous implementations in Java and Python.
To avoid dealing with a new PHP extension, I'm wondering if there exists a CORBA HTTP proxy of sorts, in any language, that would take care of communicating with the CORBA service. I would HTTP POST to the proxy (or use some socket communication), it would relay the request to the CORBA service, and it would relay the response back to me.
Does such a proxy exist?
I don't know of such a service, but perhaps others might know of one. That said, given how simple your IDL is, I would just go ahead and try the CORBA4PHP extension and use that if it works.
If it doesn't work (I've no idea about its quality), it would be really simple to build such a proxy yourself:
Download a free ORB (let's say you get one for Java, say JacORB)
Compile your IDL and build a client to the CORBA service
Add a front-end API to your Java application that your PHP code can call, passing in the string parameter containing your XML (POST sounds reasonable, and there are plenty of ways to implement such a thing in Java; a minimal sketch of the PHP side of that call follows below)
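Whatever you end up using on the Java side, the PHP side of the call can stay trivial; a minimal sketch (the /corba-proxy URL and the XML payload are hypothetical placeholders for whatever endpoint your bridge exposes):

```php
<?php
// Hypothetical PHP side of the bridge: POST the raw XML to the HTTP proxy
// and read back the XML it relays from the CORBA service.
$xmlRequest = '<request><op>lookup</op></request>';   // placeholder payload

$ch = curl_init('http://internal-host:8080/corba-proxy');   // placeholder proxy URL
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => $xmlRequest,
    CURLOPT_HTTPHEADER     => ['Content-Type: text/xml'],
    CURLOPT_RETURNTRANSFER => true,
]);
$xmlResponse = curl_exec($ch);
curl_close($ch);

echo $xmlResponse;   // XML string relayed back from the CORBA service
```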

How to cache php soapclient responses?

I know that you can cache the WSDL but is there a way to cache the soap responses through configuration of the php soapclient?
Obviously, we could "cache" ourselves by constructing some tables in a database and running a cron. That would take much more effort, so I am wondering if there is a way to specify caching of the actual SOAP data being returned from the SOAP server to the client.
Similar to how a browser can cache various data based on headers?
Do I need to have the SOAP server configured for this, or is it something I can do strictly on the SoapClient side?
Our SOAP server is run by a third-party vendor we have little control over, so I am hoping to keep the solution on the SoapClient side if possible.
Open to all suggestions/alternatives (aside from the one I mentioned) if this does not exist.
In short - no. That type of caching is very application-specific, so it's not built into the protocol for you. I would say that the solution you sketched out yourself is a good way to go. A side effect of such a queue is that you get a level of decoupling between your main application and the external service. This can be useful for a lot of things once you get past the initial development phase (debugging, service windows, logging etc.).
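If a purely client-side cache is enough, one common trick (not a built-in SoapClient feature) is to subclass SoapClient and memoize the raw transport in __doRequest(). The cache directory, TTL, WSDL URL and operation below are placeholders, and a real version should swap the file cache for APCu/Redis and avoid caching SOAP fault responses.

```php
<?php
// Hypothetical caching subclass: memoizes the raw SOAP envelope returned by
// __doRequest(), keyed on the endpoint, action and request body.
class CachingSoapClient extends SoapClient
{
    private $cacheDir = '/tmp/soap-cache';   // placeholder location
    private $ttl      = 300;                 // placeholder lifetime in seconds

    public function __doRequest($request, $location, $action, $version, $oneWay = 0)
    {
        $file = $this->cacheDir . '/' . md5($location . $action . $request) . '.xml';

        if (is_file($file) && (time() - filemtime($file)) < $this->ttl) {
            return file_get_contents($file);   // serve the cached envelope
        }

        $response = parent::__doRequest($request, $location, $action, $version, $oneWay);

        if (!is_dir($this->cacheDir)) {
            mkdir($this->cacheDir, 0777, true);
        }
        file_put_contents($file, $response);

        return $response;
    }
}

$client = new CachingSoapClient('http://vendor.example.com/service?wsdl');   // placeholder WSDL
$result = $client->SomeVendorOperation(['id' => 42]);                         // hypothetical call
```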

Should I be worried about HTTP overhead when calling local web services?

I'm currently re-developing a fairly large-scale PHP web application. Part of this redevelopment involves moving the bulk of some fairly hefty business logic out of the core of the web app and moving it into a set of SOAP web services.
What's currently concerning me (just slightly) is the perceived overhead this brings with it in terms of local HTTP traffic. I should explain that the SOAP web services currently, and for the foreseeable future, will reside on the same physical server, and if and when they move they will remain on the same network. What concerns me is that each call that used to be an internal PHP function call is now an HTTP request invoking a similar function call.
Obviously this is something I can measure as we move further along the line, but I was wondering if anyone could offer any advice, or more importantly share any previous experience of taking an application down this route.
Are you doing hundreds or thousands of these calls a second? If not, then you probably don't have to worry.
But profile it. Set up a prototype of the system working across the network with a large number of SOAP calls, and see if it slows down to unacceptable levels.
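A rough way to do that profiling in a few lines (the WSDL URL, the getCustomer operation and the in-process stand-in function are placeholders for your real endpoint and business logic):

```php
<?php
// Placeholder stand-in for the business logic that is currently a plain
// in-process function call.
function getCustomerDirect($id)
{
    return ['id' => $id, 'name' => 'Example'];
}

// Placeholder WSDL URL and operation name; point these at your prototype.
$client = new SoapClient('http://localhost/app/service?wsdl');
$n = 100;

$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    $client->getCustomer($i);          // hypothetical SOAP operation: loopback HTTP + XML
}
printf("SOAP over loopback: %.3f s for %d calls\n", microtime(true) - $start, $n);

$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    getCustomerDirect($i);             // same work with no HTTP/XML layer
}
printf("Direct calls:       %.3f s for %d calls\n", microtime(true) - $start, $n);
```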
If the server is running on the same physical box then you can't have any privilege separation. And increasing capacity by adding tiers to the stack (instead of equivalent nodes) is a recipe for non-availability.
If you're hiding something behind SOAP, then the HTTP overhead is likely to be relatively small compared to what the 'something' is doing (e.g. reading from a database). However, when you add up the cost of constructing the SOAP request, decomposing the SOAP request, composing the response, and the HTTP overhead on top, one has to wonder why you don't provide a shortcut by calling the PHP code implemented within the server directly, in the same thread of execution.
If you were to design an abstract SOAP interface that mapped directly to and from PHP types, then this could be short-circuited without any overhead in maintaining different APIs.
Does it HAVE to be a SOAP web service? Is this the client telling you this, or is it your decision?
You seem concerned about HTTP calls, which is fine, but what about the overhead of unnecessarily serializing/de-serializing to/from XML just to transfer it over the "wire" - that "wire" being the same machine? =)
It doesn't really make sense to have a SOAP-based web service that is only going to be consumed by a client on the same machine.
However, I agree with @Skilldrick's answer - it's not going to be an issue as long as you are intelligent about your HTTP calls.
Cache whenever you can, batch your calls, etc.
SOAP is more verbose than REST. REST uses the HTTP protocol to do the same thing with less network bandwidth, if that's your concern.
See:
WhatIsREST
Wikipedia
To really answer your question: remember the 80/20 rule. Use a benchmarking/tracing tool to help you find where your hotspots are. Fix those and forget about the rest.
