In the Couchbase PHP client library there is an exposed method called getDelayed that accepts a callback as a parameter and lets you do an asynchronous get against Couchbase.
The problem is that I can't find any method to do an asynchronous set (I was expecting something like setDelayed).
Does anyone know a way to do this? What could be the reason for implementing an asynchronous get but not a set?
The Couchbase client library does not provide asynchronous methods for set/update operations.
I'm wondering what the purpose would be. The async get makes sense: you send a bunch of keys and get a response whenever the data comes back to you. With a set, you send the data to the server; the Couchbase server caches it in memory and returns instantly. My understanding is that it then queues the item for disk writing, and after writing to disk it queues it for view indexing. There really isn't much to "delay": the call returns as soon as the item is written to memory, so there isn't anything to do asynchronously, is there? The only thing I can think of is that the driver could buffer the item for you before sending it to the server, but I'm not sure that would accomplish much except using memory in the client (and making OOM errors harder to debug).
Related
I recently used the PHP Thrift client to call a service implemented by a Java Thrift server.
But I found that when transferring a large amount of complex data, PHP spends a lot of time serializing and deserializing it, because of the tens of thousands of TBinaryProtocol::readXXX() and TBinaryProtocol::writeXXX() calls involved.
Any good ideas for optimizing this?
TBufferedTransport or TFramedTransport may help. The former only adds a buffer in between to reduce I/O calls, while the latter also changes the transport stack by modifying the wire data (an Int32 holding the total length of the data block is inserted at the beginning).
Hence, TBufferedTransport is a purely local thing, whereas TFramedTransport must be used on both client and server. Aside from that, the two work very similarly.
Furthermore, some of the available server types require TFramedTransport, so for any new API it may be a good choice to use TFramedTransport from the beginning.
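The framing that TFramedTransport adds can be illustrated in plain PHP. This is just a sketch of the wire format, with hypothetical helper names; the real transport classes live in the Thrift library:

```php
<?php
// Sketch of the wire format TFramedTransport uses: each message is
// prefixed with a big-endian Int32 holding the payload length.

function frame(string $payload): string {
    // pack('N') produces an unsigned 32-bit big-endian integer.
    return pack('N', strlen($payload)) . $payload;
}

function unframe(string $wire): string {
    $len = unpack('N', substr($wire, 0, 4))[1];
    return substr($wire, 4, $len);
}

$msg = 'serialized thrift struct bytes';
$framed = frame($msg);

assert(strlen($framed) === strlen($msg) + 4);  // 4-byte length prefix
assert(unframe($framed) === $msg);             // round trip intact
echo "round trip ok\n";
```

Because the length is known up front, the receiving side can read the whole frame in one tight loop instead of many small reads, which is part of why framed servers can be faster.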
Suppose we need to log some data on every PHP call for each request made to the server of a high-traffic, request-heavy web application, basically to trace the actions taken by each client.
I was considering saving the entries in memory and then logging them all in one go, to prevent frequent disk access.
Is there a PHP framework that already does this which I can reuse?
I need to do this on the actual production server, so I don't want to use tools like Xdebug.
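The "collect in memory, write once per request" idea is small enough to sketch without a framework. All names here are illustrative, not from any particular library:

```php
<?php
// Buffer log lines in memory and flush them to disk in a single
// write at the end of the request, instead of one fwrite per entry.

class BufferedLogger {
    private $buffer = [];
    private $file;

    public function __construct(string $file) {
        $this->file = $file;
        // Flush automatically when the request/script ends.
        register_shutdown_function([$this, 'flush']);
    }

    public function log(string $message): void {
        $this->buffer[] = date('c') . ' ' . $message;
    }

    public function flush(): void {
        if ($this->buffer === []) {
            return;
        }
        // One append, one disk access for the whole request.
        file_put_contents($this->file, implode("\n", $this->buffer) . "\n",
                          FILE_APPEND | LOCK_EX);
        $this->buffer = [];
    }
}

$log = new BufferedLogger(sys_get_temp_dir() . '/trace.log');
$log->log('client 42 viewed item 7');
$log->log('client 42 added item 7 to cart');
$log->flush();
```

The shutdown hook means a fatal error late in the request still gets whatever was buffered up to that point written out.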
Redis would be my suggestion.
There is a PHP library, Rediska, that makes use of a Redis server. The data can then be dumped to the database at some point, e.g. when server load isn't at its highest.
I would use apc_store; I use it for a lightweight PHP request-time analyzer:
https://gist.github.com/aiphee/8004486cbd37b3f13efd271b8457cb38
A page is sending an AJAX call to the server and should get item info in response. The array to look up and return from is rather big, and I can't hold it in the PHP file that accepts the request. So, as far as my knowledge and experience tell me, there are two methods:
Access database for each request.
Store items in files (e.g. “item12.txt”) and send contents to the user.
My C experience says that opening and closing a file takes much more system time than the rest of the program. How is it in PHP? Which method is preferred (most importantly, resource-wise): the file system or a database? Is there any other way you would recommend (e.g. JavaScript directly loading the file with the variable array from the server for each request)? Maybe there's some innovative method lying around that you're aware of?
P.S. The server side will only accept a number, so no worries about someone trying to access arbitrary files on the server or do anything fancy to the database.
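For the file-per-item variant, the handler can be very small. This is a sketch; paths, the item file naming scheme, and the fixture data are all made up for illustration:

```php
<?php
// AJAX endpoint sketch: only an integer id is accepted, so the
// filename cannot be manipulated by the client.

$dir = sys_get_temp_dir();                                  // stand-in for your data directory
file_put_contents("$dir/item12.txt", 'blue widget, 4.99');  // demo fixture

$id = (int) ($_GET['id'] ?? 12);     // the cast kills any path trickery
$path = "$dir/item$id.txt";

if (is_file($path)) {
    // file_get_contents opens, reads and closes in one call; for small
    // files the per-request open/close overhead is modest in PHP too.
    if (PHP_SAPI !== 'cli') {
        header('Content-Type: text/plain');
    }
    echo file_get_contents($path);
} else {
    http_response_code(404);
    echo 'unknown item';
}
```

The integer cast is what makes the "only a number is accepted" guarantee hold even if someone sends `?id=../../etc/passwd`.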
Sockets
Depending on how many requests you will be handling, you could look into socket connections.
Sockets give you two-way communication between the client and the server, which lets you do interactive things as needed.
Socket tutorial 1
Socket tutorial 2
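The two-way exchange sockets enable can be sketched entirely inside one PHP process using a socket pair; a real application would use stream_socket_server() / stream_socket_client() across the network instead. The request/reply strings are made up:

```php
<?php
// stream_socket_pair gives two connected endpoints, which is enough
// to demonstrate bidirectional client<->server traffic.
// (STREAM_PF_UNIX works on Unix; use STREAM_PF_INET on Windows.)

list($server, $client) = stream_socket_pair(
    STREAM_PF_UNIX, STREAM_SOCK_STREAM, STREAM_IPPROTO_IP
);

// The "client" asks for an item...
fwrite($client, "GET item 12\n");

// ...the "server" reads the request and answers on the same socket.
$request = fgets($server);
if (trim($request) === 'GET item 12') {
    fwrite($server, "blue widget, 4.99\n");
}

// The client receives the reply without a new HTTP round trip.
$reply = trim(fgets($client));
echo "got: $reply\n";

fclose($server);
fclose($client);
```

The key difference from AJAX is that the connection stays open, so either side can push data at any time instead of paying connection setup cost per request.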
Node.js
node.js is the new kid on the block. You write your own socket web server and use JavaScript to communicate with it. This is a great alternative to Ajax, as it's much more efficient and reliable.
node.js can be run alongside PHP, and only be used for ajax-like calls.
node.js
node.js socket tutorial
There is nothing innovative here. If you make low-frequency calls to the data and want super-simple access, use files. But nowadays it's usually better to use a database; SQLite is fine, I think. If you need more performance, use MySQL or a NoSQL solution. Tools are made to solve problems: use the right tool for your purpose.
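The SQLite route needs no server at all; here is a sketch with PDO, assuming the pdo_sqlite driver is available (the table and column names are made up):

```php
<?php
// SQLite keeps the whole lookup table in a single file (or in memory),
// so there is no connection/server overhead as with MySQL.

$db = new PDO('sqlite::memory:');   // use 'sqlite:/path/items.db' for a file
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$db->exec('CREATE TABLE items (id INTEGER PRIMARY KEY, info TEXT)');
$db->exec("INSERT INTO items VALUES (12, 'blue widget, 4.99')");

// Prepared statement: the id would arrive from the AJAX call.
$stmt = $db->prepare('SELECT info FROM items WHERE id = ?');
$stmt->execute([12]);

echo $stmt->fetchColumn(), "\n";   // blue widget, 4.99
```

For a read-mostly lookup table this sits nicely between the two options in the question: one file on disk like option 2, but with indexed access like option 1.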
I'm trying to get Realtime info (Speed, Downloaded, Left) of File Downloads made by a script coded in PHP & Curl.
"curl_getinfo" gives all the required data but it gives it only after the download.
Anyway to get it realtime ?
AFAIK there is no way to do this with cURL. The curl_multi_* functions will let you make a number of asynchronous requests and check their current state, but only insofar as they will tell you whether they have completed and what their current error level is.
As far as I know, you can't get live speed/downloaded/remaining figures out of cURL. You would have to write your own HTTP request logic using fsockopen() or similar; then you can include logic to update some kind of display as the request progresses. This has the disadvantage of being much more difficult to make asynchronous: because of PHP's lack of multi-threading support, you would have to use exec() or pcntl_fork() and build some kind of horrible IPC architecture.
I have done things like this before, and my honest opinion is that it's not worth the effort. If you still want to go ahead with it, I'll dig out some of the stuff I used when I did it.
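The manual approach boils down to reading the body in chunks yourself and updating counters after each chunk. Here is a sketch using a plain local stream as a stand-in for the remote file (a real downloader would open the stream with fsockopen() or an http:// URL instead):

```php
<?php
// Reading in fixed-size chunks lets you compute downloaded/left/speed
// after every chunk, instead of only after completion.

$src = sys_get_temp_dir() . '/payload.bin';
file_put_contents($src, str_repeat('x', 100000));   // stand-in for a remote file

$total = filesize($src);     // over HTTP this comes from Content-Length
$downloaded = 0;
$start = microtime(true);

$in = fopen($src, 'rb');
while (!feof($in)) {
    $chunk = fread($in, 8192);
    $downloaded += strlen($chunk);

    $elapsed = max(microtime(true) - $start, 1e-6);
    $speed = $downloaded / $elapsed;   // average bytes/sec so far
    $left  = $total - $downloaded;
    // e.g. printf("\r%d/%d bytes, %d left, %.0f B/s", $downloaded, $total, $left, $speed);
}
fclose($in);

echo "done: $downloaded bytes\n";
```

For the record: if I remember correctly, newer PHP releases also expose CURLOPT_PROGRESSFUNCTION via curl_setopt() (together with CURLOPT_NOPROGRESS set to false), which calls back with downloaded/total figures during the transfer, so it may be worth checking whether your PHP version supports that before hand-rolling anything.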
I know that you can cache the WSDL but is there a way to cache the soap responses through configuration of the php soapclient?
Obviously, we could "cache" it ourselves by constructing some tables in a database and running a cron job. That would take much more effort, and I am wondering if there is a way to specify caching of the actual SOAP data being returned from the SOAP server to the client.
Similar to how a browser can cache various data based on headers ?
Do I need to have the SOAP server configured appropriately, or is this something I can do strictly on the SoapClient side?
Our SOAP server belongs to a 3rd-party vendor which we have little control over, so I am hoping to keep the solution on the SoapClient side if possible.
Open to all suggestions/alternatives (aside from the one I mentioned) if this does not exist.
In short: no. That type of caching is very application-specific, so it's not built into the protocol for you. I would say that the solution you sketched out yourself is a good way to go. A side effect of such a queue is that you get a level of decoupling between your main application and the external service. This can be useful for a lot of things once you get past the initial development phase (debugging, service windows, logging, etc.).
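A lighter client-side option than a database plus cron is a simple cache-aside wrapper around each call. This sketch uses a plain callable standing in for the actual SoapClient method; the function name, cache location, and key scheme are all illustrative:

```php
<?php
// Cache-aside pattern: check a local store before hitting the remote
// service, with a TTL so stale data eventually expires.

function cached_call(string $key, int $ttl, callable $fetch) {
    $file = sys_get_temp_dir() . '/soapcache_' . md5($key);

    if (is_file($file) && time() - filemtime($file) < $ttl) {
        return unserialize(file_get_contents($file));    // cache hit
    }

    $result = $fetch();                                  // miss: real SOAP call
    file_put_contents($file, serialize($result), LOCK_EX);
    return $result;
}

$calls = 0;
$lookup = function () use (&$calls) {
    $calls++;                  // stands in for $soapClient->getPrice(...)
    return ['price' => 4.99];
};

$key = 'getPrice:12:' . uniqid();   // unique per run so the demo starts cold
$a = cached_call($key, 300, $lookup);
$b = cached_call($key, 300, $lookup);

echo "calls made: $calls\n";        // only the first call hit the "service"
```

The same idea works with apcu_store/apcu_fetch or Redis in place of the files; the important part is choosing a TTL that matches how stale the vendor's data is allowed to be.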