I am creating a PHP (don't hate!!) script that creates a long-term connection to Apple's new APNS server, as per their new documentation.
The general concept is a while(true) loop that sleeps for n seconds, and checks a queue for outbound push notifications, which are created and inserted into a database by a separate application.
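For concreteness, a minimal sketch of that loop (fetch_pending() and send_push() are hypothetical helpers of mine backed by the database queue, not part of any real API):

    // Sketch of the polling loop described above, under the assumption
    // that one curl handle is kept open so the HTTP/2 connection is reused.
    $ch = curl_init();
    while (true) {
        foreach (fetch_pending($pdo) as $notification) {
            send_push($ch, $notification); // hypothetical helper
        }
        sleep($n); // check the queue every n seconds
    }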
I am getting stuck comprehending the following section of the documentation because of my lack of knowledge of the HTTP/2 spec and protocol.
Best Practices for Managing Connections
<snip> You can check the health of your connection using an HTTP/2 PING frame.
As this loop runs, I need to be alerted to the health of my connection so that I can reconnect if I get disconnected or the connection is somehow terminated.
So, to summarize: how would I send an HTTP/2 PING frame using cURL, specifically PHP's cURL, and what might the response look like?
I suppose that, since cURL uses nghttp2 as the low-level library to interact with HTTP/2, this has something to do with it, but I am not sure how to use nghttp2 functions from within cURL: https://nghttp2.org/documentation/nghttp2_submit_ping.html
curl (currently) offers no API that allows an application to send specific HTTP/2 frames like PING.
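Since there is no PING API, one workaround (my assumption, not an official mechanism) is to issue a cheap request on the same handle and treat any transport error as a dead connection:

    // Hedged sketch: approximate a health check with a lightweight request.
    // Reusing the same handle lets curl reuse the established connection.
    function connection_is_healthy($ch) {
        curl_setopt($ch, CURLOPT_URL, 'https://api.push.apple.com'); // probe only
        curl_setopt($ch, CURLOPT_NOBODY, true);          // headers only, no body
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_TIMEOUT, 5);
        curl_exec($ch);
        return curl_errno($ch) === 0; // any transport error: reconnect
    }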
Related
I want to create a web application where the UI updates in real time (or as close to real time as you're going to get). Data for the UI comes from the server.
I don't want to use traditional HTTP requests where I constantly send requests to the server for new data. I'd rather have a connection open, and have the server push data to the client.
I believe this is the publisher/subscriber pattern.
I've heard people mention zeromq, and React and to use Websockets. But from all the examples I've looked at I can't really find anything on this. For example zeromq has examples that show server and client. Do I implement the server, and then use websockets on the UI end as the client?
How would something like this be implemented?
Traditional HTTP requests are still what all of this is about.
You can have Regular HTTP Requests:
- The user sends a request to the server
- The server responds to that request
There's also Ajax Polling and Ajax Long Polling, the concept is similar.
Ajax Polling means an HTTP request is sent every X seconds to look for new information.
Example: Fetch new Comments for a section.
Ajax Long Polling is similar, but when you send a request to the server and there is no response ready for the client, you let the connection hang (for a defined period of time).
If new information comes in during that time, you are already waiting for it; otherwise, after the time expires, the process restarts. Instead of constantly going back and forth, you send a request and wait, and whether or not you receive a response, after a period of time you restart the process.
A WebSocket connection still starts out as an HTTP request (the upgrade handshake).
The client carries the weight on the front end by opening a WebSocket connection to a destination.
This connection will not close, and it will receive and send real-time information back and forth.
Specific actions and replies from the server need to be programmed as callbacks on the client side for things to happen.
With WebSockets you can receive and transmit in real time; it's a full-duplex, bi-directional connection.
So yes, in case it wasn't clear.
You set up a WebSocket server, running in a loop, waiting for connections.
When it receives one, there's a chat-like exchange between that server and the client; the client needs programmed callbacks to handle the server's responses.
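For a concrete starting point, here is a minimal sketch using the third-party Ratchet library (my choice for illustration, installed via composer require cboden/ratchet; not something prescribed above):

    // Sketch of a WebSocket echo server on a loop, as described above.
    require 'vendor/autoload.php';

    use Ratchet\MessageComponentInterface;
    use Ratchet\ConnectionInterface;
    use Ratchet\Server\IoServer;
    use Ratchet\Http\HttpServer;
    use Ratchet\WebSocket\WsServer;

    class EchoServer implements MessageComponentInterface {
        public function onOpen(ConnectionInterface $conn) {}
        public function onMessage(ConnectionInterface $from, $msg) {
            $from->send($msg); // push data straight back to the client
        }
        public function onClose(ConnectionInterface $conn) {}
        public function onError(ConnectionInterface $conn, \Exception $e) {
            $conn->close();
        }
    }

    IoServer::factory(new HttpServer(new WsServer(new EchoServer())), 8080)->run();

The client-side callbacks mentioned above would then be the browser's onopen/onmessage handlers.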
I have a website that needs to send notifications to online clients in real time, just as Facebook does. After more googling, I found a lot of documentation about push and pull technology, and ways to implement them using Ajax or sockets. I need to know what is best to use in my case, and how it would be coded using JavaScript or jQuery and PHP.
I cannot tell you what's best to use in your case without knowing your case in detail.
In most cases it is enough to have the clients check with the server every one or two seconds, asking if something new has happened. I prefer this over sockets most of the time because it works on every web server without any configuration changes and in any browser supporting AJAX, even old ones.
If you have few clients (because every client requires an open socket on the server) and you want true real-time behavior, you can use WebSockets. There are several PHP implementations, for example this one: http://code.google.com/p/phpwebsocket/
If you can ensure that there will be only a single browser open per logged-in user, then you can apply this long-polling technique easily.
Policy for Ajax Call:
Do not make a request every 2 seconds.
Instead, wait and make a request only 2 seconds after getting the response from the previous request.
If a request does not respond within 12 seconds, do not wait; send a fresh request. This is the connection-lost case.
Policy for server response:
If there is an update, respond immediately. To check whether there is an update, rely on the session (better if you can send some hint from the client side, like the latest message received; this second update-checking mechanism removes the single-browser restriction mentioned above).
Otherwise, sleep() for 1 second (do not use a busy loop; use sleep), then check again whether there is an update; if there is, respond. If not, sleep again for 1 second, and repeat until a total of 10 seconds has elapsed, then respond with "no update" (sketched below).
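A rough sketch of that server-side policy (has_update() and fetch_update() are hypothetical stand-ins for your session/database check):

    // Long-polling response policy: check once a second, give up after 10 s.
    $waited = 0;
    while (!has_update($_SESSION) && $waited < 10) {
        sleep(1); // do not busy-loop; yield for a second
        $waited++;
    }
    if (has_update($_SESSION)) {
        echo json_encode(fetch_update($_SESSION)); // respond immediately
    } else {
        echo json_encode(array('update' => false)); // 10 s elapsed, no update
    }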
If you apply this policy (commonly known as long polling), you will find processor usage reduced from 95% to 4% under heavy load.
Hope this explains. Best of luck.
Just apply the long-polling technique using jQuery.
Sockets are not yet supported everywhere and also you would need to open a listening socket on the server for this to work.
I'm using the JAXL library to implement a Jabber chat bot written in PHP, which is then run as a background process using the PHP CLI.
Things work quite well, but I've been having a hard time figuring out how to make the chat bot reconnect upon disconnection!
I notice when I leave it running over night sometimes it drops off and doesn't come back. I've experimented with $jaxl->connect() and $jaxl->startStream(), and $jaxl->startCore() after jaxl_post_disconnect hook, but I think I'm missing something.
One solution would be to test your connection:
1) making a "ping" request to your page/controller or whatever
2) setTimeout(functionAjaxPing, 10000);
3) then read the Ajax response, and if it equals "anyStringKey", your connection works fine
4) else: reconnect() / errorMessage() / whatEver()
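The server side of that ping can be trivial; a sketch (the key string is whatever your client checks against):

    // ping.php - the endpoint polled in step 1; the client's timeout
    // callback compares the response body against this exact string.
    echo 'anyStringKey';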
This is what IRC chat uses, I think.
But this will generate more traffic, since the ping/pong requests are needed.
Hope this helps a bit. :)
If you are using Jaxl v3.x all you need is to add a callback for on_disconnect event.
Also you must use XEP-0199 XMPP Ping. What this XEP does is periodically send XMPP pings to the connected Jabber server. It will also receive server pings and send back the required pong packet (for instance, if your client is not replying to server pings, jabber.org will drop your connection after some time).
Finally you MUST also use whitespace pings. A whitespace ping is a single space character sent to the server. This is often enough to make NAT devices consider the connection “alive”, and likewise for certain Jabber servers, e.g. Openfire. It may also make the OS detect a lost connection faster—a TCP connection on which no data is sent or received is indistinguishable from a lost connection.
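A sketch of the callback registration (hedged; this is JAXL v3's add_cb style as I recall it, so verify against the library's docs):

    // Re-establish the session when JAXL reports a disconnect.
    $client->add_cb('on_disconnect', function() {
        // re-run your connect/login logic here, perhaps after a short sleep
    });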
What I ended up doing was creating a crontab that simply executed the PHP script again.
In the PHP script I read a specific file for the PID of the last fork. If it exists, the script attempts to kill it. Then the script uses pcntl_fork() to fork the process (which is useful for daemonizing a PHP script anyway) and captures the new PID to a file. The fork then logs in to Jabber with JAXL as usual.
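A sketch of that approach (the pid file path is arbitrary; requires the pcntl and posix extensions):

    // Kill the previous fork, if any, then daemonize and record our PID.
    $pidfile = '/tmp/jabber-bot.pid';
    if (file_exists($pidfile)) {
        posix_kill((int) file_get_contents($pidfile), SIGTERM);
    }
    $pid = pcntl_fork();
    if ($pid === -1) {
        die('fork failed');
    } elseif ($pid) {
        file_put_contents($pidfile, $pid); // parent records the child's PID
        exit(0);                           // parent exits; child lives on
    }
    // Child process: log in to Jabber with JAXL as usual.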
After talking with the author of JAXL it became apparent this would be the easiest way to go about this, despite being hacky. The author may have worked on this particular flaw in more recent iterations, however.
One flaw of this particular method is that it requires pcntl_fork(), which is not compiled into PHP by default.
I'm currently pinging URLs using CURL + PHP. But in my script, a request is sent, then it waits until the response comes, then another request, ... If each response takes ~3s to come, in order to ping 10k links it takes more than 8 hours!
Is there a way to send multiple requests at once, like some kind of multi-threading?
Thank you.
Use the curl_multi_* functions available in cURL. See http://www.php.net/manual/en/ref.curl.php
You must group the URLs into smaller sets: adding all 10k links at once is not likely to work. So create a loop around the following code and use a subset of URLs (like 100) in the $urls variable.
    $all = array();
    $handle = curl_multi_init();
    foreach ($urls as $url) {
        $all[$url] = curl_init($url);
        // Set further curl options for $all[$url] here; CURLOPT_RETURNTRANSFER
        // is required for curl_multi_getcontent() below.
        curl_setopt($all[$url], CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($handle, $all[$url]);
    }
    $running = 0;
    do {
        curl_multi_exec($handle, $running);
        // Wait for activity on any of the handles instead of busy-looping.
        curl_multi_select($handle);
    } while ($running > 0);
    foreach ($all as $url => $curl) {
        $content = curl_multi_getcontent($curl);
        // do something with $content
        curl_multi_remove_handle($handle, $curl);
        curl_close($curl);
    }
    curl_multi_close($handle);
First off I would like to point out that this is not a basic task which you can do on any kind of shared hosting provider. I assume you will get banned for sure.
So I assume you are able to compile software (VPS?) and start long-running processes in the background (using the PHP CLI). I would use Redis (I liked Predis very much as a PHP client library) to push messages onto a list. (P.S.: I would prefer to write this in node.js/python (the explanation below works for PHP), because I think this task can be coded in those languages pretty quickly. I am going to try to write it and post the code on GitHub later.)
Redis:
Redis is an advanced key-value store. It is similar to memcached but the dataset is not volatile, and values can be strings, exactly like in memcached, but also lists, sets, and ordered sets. All these data types can be manipulated with atomic operations to push/pop elements, add/remove elements, perform server-side union, intersection, difference between sets, and so forth. Redis supports different kinds of sorting abilities.
Then start a couple of worker processes which will take (blocking if none is available) messages from the list.
Blpop:
This is where Redis gets really interesting. BLPOP and BRPOP are the blocking equivalents of the LPOP and RPOP commands. If the queue for any of the keys they specify has an item in it, that item will be popped and returned. If it doesn't, the Redis client will block until a key becomes available (or the timeout expires - specify 0 for an unlimited timeout).
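A worker loop with Predis might look like this (hedged sketch; the list key and check_host() are made up for illustration):

    // Block on the Redis list and process URLs as they arrive.
    require 'vendor/autoload.php';
    $redis = new Predis\Client();
    while (true) {
        list($key, $url) = $redis->blpop('queue:urls', 0); // 0 = wait forever
        check_host($url); // hypothetical: ping first, fall back to curl
    }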
cURL is not exactly pinging (ICMP echo), but I guess some servers could block these requests (security). I would first try to ping the host (using an nmap snippet) and fall back to cURL if the ping fails, because pinging is faster than using cURL.
Libcurl:
A free client-side URL transfer library, supporting FTP, FTPS, Gopher (protocol), HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, FILE, LDAP, LDAPS, IMAP, POP3, SMTP and RTSP (the last four only in versions newer than 7.20.0, released 9 February 2010).
Ping:
Ping is a computer network administration utility used to test the reachability of a host on an Internet Protocol (IP) network and to measure the round-trip time for messages sent from the originating host to a destination computer. The name comes from active sonar terminology. Ping operates by sending Internet Control Message Protocol (ICMP) echo request packets to the target host and waiting for an ICMP response.
But then you should do a HEAD request and only retrieve the headers, to check whether the host is up. Otherwise you would also be downloading the content of the URL (which takes time and costs bandwidth).
HEAD:
The HEAD method is identical to GET except that the server MUST NOT return a message-body in the response. The metainformation contained in the HTTP headers in response to a HEAD request SHOULD be identical to the information sent in response to a GET request. This method can be used for obtaining metainformation about the entity implied by the request without transferring the entity-body itself. This method is often used for testing hypertext links for validity, accessibility, and recent modification.
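With curl this maps to CURLOPT_NOBODY; a small sketch:

    // HEAD-only check: fetch headers to see whether the host is up.
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);         // send a HEAD request
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // don't print the headers
    curl_setopt($ch, CURLOPT_TIMEOUT, 5);
    curl_exec($ch);
    $up = curl_errno($ch) === 0 && curl_getinfo($ch, CURLINFO_HTTP_CODE) < 400;
    curl_close($ch);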
Then each worker process should use curl_multi. I think this link might provide a good implementation (though it does not do HEAD requests) to get some concurrency into each process.
You can either fork your PHP process using pcntl_fork or look into cURL's built-in multi-handle support. https://web.archive.org/web/20091014034235/http://www.ibuildings.co.uk/blog/archives/811-Multithreading-in-PHP-with-CURL.html
PHP doesn't have true multi-thread capabilities.
However, you can always perform your cURL requests asynchronously.
This would allow you to fire off batches of pings instead of one at a time.
Reference: How do I make an asynchronous GET request in PHP?
Edit: Just keep in mind you're going to have to make your PHP script wait until all responses come back before terminating.
Christian
curl has the "multi request" facility which is essentially a way of doing threaded requests. Study the example on this page: http://www.php.net/manual/en/function.curl-multi-exec.php
You can use the PHP exec() function to execute unix commands like wget to accomplish this.
exec('wget -O - http://example.com/url/to_ping > /dev/null 2>&1 &');
It's by no means an ideal solution, but it does get the job done, and by sending the output to /dev/null and running it in the background you can move on to the next "ping" without waiting for the response.
Note: Some servers have exec() disabled for security purposes.
I would use system() and execute the ping script as a new process, or multiple processes.
You can make a centralized queue with all the addresses to ping, then kick off several ping scripts to work through it.
Just note:
If a program is started with this function, in order for it to continue running in the background, the output of the program must be redirected to a file or another output stream. Failing to do so will cause PHP to hang until the execution of the program ends.
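In practice that means redirecting output and backgrounding each process, e.g. (ping_worker.php is a hypothetical script name):

    // system() returns immediately because output is redirected and
    // the process is backgrounded with '&'.
    system('php ping_worker.php > /dev/null 2>&1 &');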
To handle this kind of task, try out I/O multiplexing strategies. In a nutshell, the idea is that you create a bunch of sockets, feed them to your OS (say, using epoll on Linux / kqueue on FreeBSD), and sleep until an event occurs on some of the sockets. Your OS's kernel can handle hundreds or even thousands of sockets in parallel in a single process.
You can not only handle TCP sockets but also deal with timers / file descriptors in a similar fashion in parallel.
Back to PHP, check out something like https://github.com/reactphp/event-loop which exposes a good API and hides lots of low-level details.
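A tiny taste of that API (hedged; based on react/event-loop's documented usage, which may differ between versions):

    // Run a periodic timer on the ReactPHP event loop.
    require 'vendor/autoload.php';
    $loop = React\EventLoop\Factory::create();
    $loop->addPeriodicTimer(1.0, function () {
        echo "tick\n"; // kick off / poll non-blocking work here
    });
    $loop->run();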
Run multiple php processes.
Process 1: pings sites 1-1000
Process 2: pings sites 1001-2000
...
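For example (ping_worker.php is hypothetical and would read its offset/limit from argv):

    // Spawn one backgrounded worker per block of 1000 URLs.
    for ($offset = 0; $offset < 10000; $offset += 1000) {
        exec(sprintf('php ping_worker.php %d %d > /dev/null 2>&1 &', $offset, 1000));
    }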
I've been debugging some Soap requests we are making between two servers on the same VLAN. The app on one server is written in PHP, the app on the other is written in Java. I can control and make changes to the PHP code, but I can't affect the Java server. The PHP app forms the XML using the DOMDocument objects, then sends the request using the cURL extension.
When the soap request took longer than 5 minutes to complete, it would always wait until the max timeout limit and exit with a message like this:
Operation timed out after 900000 milliseconds with 0 bytes received
After sniffing the packets that were being sent, it turns out that the problem was caused by a 5 minute timeout in the network that was closing what it thought was a stale connection. There were two ways to fix it: bump up the timeout in iptables, or start sending KeepAlive packets over the request.
To be thorough, I would like to implement both solutions. Bumping up the timeout was easy for ops to do, but sending KeepAlive packets is turning out to be difficult. The cURL library itself supports this (see the --keepalive-time flag for the CLI app), but it doesn't appear that this has been implemented in the PHP cURL library. I even checked the source to make sure it wasn't an undocumented feature.
So my question is this: How the heck can I get these packets sent? I see a few clear options, but I don't like any of them:
Write a wrapper that will kick off the request by shell_execing the CLI app. This is a hack that I just don't like
Update the cURL extension to support this. This is a non-option according to Ops.
Open the socket myself. I know just enough to be dangerous. I also haven't seen a way to do this with fsockopen, but I could be missing something.
Switch to another library. What exists that supports this?
Thanks for any help you can offer.