Ratchet PHP WAMP - React / ZeroMQ - Specific user broadcast - php

Note: This is not the same as this question which utilises MessageComponentInterface. I am using WampServerInterface instead, so this question pertains to that part specifically. I need an answer with code examples and an explanation, as I can see this being helpful to others in the future.
Attempting looped pushes for individual users
I'm using the WAMP part of Ratchet and ZeroMQ, and I currently have a working version of the push integration tutorial.
I'm attempting to perform the following:
The zeromq server is up and running, ready to log subscribers and unsubscribers
A user connects in their browser over the websocket protocol
A loop is started which sends data to the specific user who requested it
When the user disconnects, the loop for that user's data is stopped
I have points (1) and (2) working, however the issue I have is with the third one:
Firstly: How can I send data to each specific user only? Broadcast sends it to everyone, unless the 'topics' end up being individual user IDs, maybe?
Secondly: I have a big security issue. If I'm sending which user ID wants to subscribe from the client-side, which it seems like I need to, then the user could just change the variable to another user's ID and their data is returned instead.
Thirdly: I'm having to run a separate php script containing the zeromq code to start the actual looping. I'm not sure this is the best way to do it, and I would rather have this working completely within the codebase as opposed to a separate php file. This is a major area I need sorted.
The following code shows what I currently have.
The server that just runs from console
I literally type php bin/push-server.php to run this. Subscriptions and un-subscriptions are output to this terminal for debugging purposes.
$loop = React\EventLoop\Factory::create();
$pusher = new Pusher; // instantiate the Pusher class shown below
$context = new React\ZMQ\Context($loop);
$pull = $context->getSocket(ZMQ::SOCKET_PULL);
$pull->bind('tcp://127.0.0.1:5555');
$pull->on('message', array($pusher, 'onMessage'));
$webSock = new React\Socket\Server($loop);
$webSock->listen(8080, '0.0.0.0'); // Binding to 0.0.0.0 means remotes can connect
$webServer = new Ratchet\Server\IoServer(
new Ratchet\WebSocket\WsServer(
new Ratchet\Wamp\WampServer(
$pusher
)
),
$webSock
);
$loop->run();
The Pusher that sends out data over websockets
I've omitted the useless stuff and concentrated on the onMessage() and onSubscribe() methods.
public function onSubscribe(ConnectionInterface $conn, $topic)
{
$subject = $topic->getId();
$ip = $conn->remoteAddress;
if (!array_key_exists($subject, $this->subscribedTopics))
{
$this->subscribedTopics[$subject] = $topic;
}
$this->clients[] = $conn->resourceId;
echo sprintf("New Connection: %s" . PHP_EOL, $conn->remoteAddress);
}
public function onMessage($entry) {
$entryData = json_decode($entry, true);
var_dump($entryData);
if (!array_key_exists($entryData['topic'], $this->subscribedTopics)) {
return;
}
$topic = $this->subscribedTopics[$entryData['topic']];
// This sends out everything to multiple users, not what I want!!
// I can't send() to individual connections from here I don't think :S
$topic->broadcast($entryData);
}
The script to start using the above Pusher code in a loop
This is my issue - this is a separate php file that will hopefully be integrated into other code in the future, but currently I'm not sure how to use it properly. Do I grab the user's ID from the session? I still need to send it from the client-side...
// Thought sessions might work here but they don't work for subscription
session_start();
$userId = $_SESSION['userId'];
$loop = React\EventLoop\Factory::create();
$context = new ZMQContext();
$socket = $context->getSocket(ZMQ::SOCKET_PUSH, 'my pusher');
$socket->connect("tcp://localhost:5555");
$i = 0;
$loop->addPeriodicTimer(4, function() use ($socket, $loop, $userId, &$i) {
$entryData = array(
'topic' => 'subscriptionTopicHere',
'userId' => $userId
);
$i++;
// So it doesn't go on infinitely if run from browser
if ($i >= 3)
{
$loop->stop();
}
// Send stuff to the queue
$socket->send(json_encode($entryData));
});
Finally, the client-side js to subscribe with
$(document).ready(function() {
var conn = new ab.Session(
'ws://localhost:8080'
, function() {
conn.subscribe('topicHere', function(topic, data) {
console.log(topic);
console.log(data);
});
}
, function() {
console.warn('WebSocket connection closed');
}
, {
'skipSubprotocolCheck': true
}
);
});
Conclusion
The above is working, but I really need to figure out the following:
How can I send individual messages to individual users? When they visit the page that starts the websocket connection in JS, should I also be starting the script that shoves stuff into the queue in PHP (the zeromq)? That's what I'm currently doing manually, and it just feels wrong.
When subscribing a user from JS, it can't be safe to grab the user's id from the session and send that from the client-side. This could be faked. Please tell me there is an easier way, and if so, how?

Note: My answer here does not include references to ZeroMQ, as I am not using it any more. However, I'm sure you will be able to figure out how to use ZeroMQ with this answer if you need to.
Use JSON
First and foremost, the Websocket RFC and WAMP Spec state that the topic to subscribe to must be a string. I'm cheating a little here, but I'm still adhering to the spec: I'm passing JSON through instead.
{
"topic": "subject here",
"userId": "1",
"token": "dsah9273bui3f92h3r83f82h3"
}
JSON is still a string, but it allows me to pass through more data in place of the "topic", and it's simple for PHP to do a json_decode() on the other end. Of course, you should validate that you actually receive JSON, but that's up to your implementation.
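For example, the check inside the subscribe handler (shown further down) could look like this rough sketch; the field names follow the JSON above, everything else is an assumption:
// Inside onSubscribe(): bail out early if the payload isn't the JSON we expect
$data = json_decode($subscription->getId(), true);
if (json_last_error() !== JSON_ERROR_NONE
    || !isset($data['topic'], $data['userId'], $data['token'])) {
    $conn->close(); // not our JSON: refuse the subscription
    return;
}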
So what am I passing through here, and why?
Topic
The topic is the subject the user is subscribing to. You use this to decide what data you pass back to the user.
UserId
Obviously the ID of the user. You must verify that this user exists and is allowed to subscribe, using the next part:
Token
This should be a one use randomly generated token, generated in your PHP, and passed to a JavaScript variable. When I say "one use", I mean every time you reload the page (and, by extension, on every HTTP request), your JavaScript variable should have a new token in there. This token should be stored in the database against the User's ID.
Then, once a websocket request is made, you match the token and user id to those in the database to make sure the user is indeed who they say they are, and they haven't been messing around with the JS variables.
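A rough sketch of that flow, assuming a PDO connection and an invented users.ws_token column (the table and column names are mine, not a fixed schema):
// On every HTTP request (page render): issue a fresh one-use token
// $userId = the logged-in user's id from your session/auth layer
$token = bin2hex(openssl_random_pseudo_bytes(32));
$pdo->prepare('UPDATE users SET ws_token = ? WHERE id = ?')
    ->execute(array($token, $userId));
// ...then drop $token into a JavaScript variable in the rendered page

// Later, inside the websocket subscribe handler, after decoding the JSON
$stmt = $pdo->prepare('SELECT COUNT(*) FROM users WHERE id = ? AND ws_token = ?');
$stmt->execute(array($data['userId'], $data['token']));
if (!$stmt->fetchColumn()) {
    $conn->close(); // user id + token pair doesn't match what we stored
    return;
}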
Note: In your event handler, you can use $conn->remoteAddress to get the IP of the connection, so if someone is trying to connect maliciously, you can block them (log them or something).
Why does this work?
It works because every time a new connection comes through, the unique token ensures that no user will have access to anyone else's subscription data.
The Server
Here's what I am using for running the loop and event handler. I am creating the loop, doing all the decorator style object creation, and passing in my EventHandler (which I'll come to soon) with the loop in there too.
$loop = Factory::create();
$webSock = new React\Socket\Server($loop); // the listening socket used below
$webSock->listen(8080, '0.0.0.0');
new IoServer(
new WsServer(
new WampServer(
new EventHandler($loop) // This is my class. Pass in the loop!
)
),
$webSock
);
$loop->run();
The Event Handler
use Ratchet\ConnectionInterface;
use Ratchet\MessageComponentInterface;
use Ratchet\Wamp\WampServerInterface;
use React\EventLoop\LoopInterface;
use React\EventLoop\Timer\TimerInterface; // adjust this namespace to your react/event-loop version

class EventHandler implements WampServerInterface, MessageComponentInterface
{
/**
* @var \React\EventLoop\LoopInterface
*/
private $loop;
/**
* @var array List of connected clients
*/
private $clients = array();
/**
* Pass in the react event loop here
*/
public function __construct(LoopInterface $loop)
{
$this->loop = $loop;
}
/**
* A user connects, we store the connection by the unique resource id
*/
public function onOpen(ConnectionInterface $conn)
{
$this->clients[$conn->resourceId]['conn'] = $conn;
}
/**
* A user subscribes. The JSON is in $subscription->getId()
*/
public function onSubscribe(ConnectionInterface $conn, $subscription)
{
// This is the JSON passed in from your JavaScript
// Obviously you need to validate it's JSON and expected data etc...
$data = json_decode($subscription->getId());
// Validate the users id and token together against the db values
// Now, let's subscribe this user only
// 5 = the interval, in seconds
$timer = $this->loop->addPeriodicTimer(5, function() use ($subscription) {
$data = "whatever data you want to broadcast";
return $subscription->broadcast(json_encode($data));
});
// Store the timer against that user's connection resource Id
$this->clients[$conn->resourceId]['timer'] = $timer;
}
public function onClose(ConnectionInterface $conn)
{
// There might be a connection without a timer
// So make sure there is one before trying to cancel it!
if (isset($this->clients[$conn->resourceId]['timer']))
{
if ($this->clients[$conn->resourceId]['timer'] instanceof TimerInterface)
{
$this->loop->cancelTimer($this->clients[$conn->resourceId]['timer']);
}
}
unset($this->clients[$conn->resourceId]);
}
/** Implement all the extra methods the interfaces say that you must use **/
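// For reference, the remaining required methods can be empty stubs if you don't need them.
// The signatures below are taken from Ratchet's interfaces - double-check them against your installed version.
public function onUnSubscribe(ConnectionInterface $conn, $topic) {}
public function onCall(ConnectionInterface $conn, $id, $topic, array $params) {}
public function onPublish(ConnectionInterface $conn, $topic, $event, array $exclude, array $eligible) {}
public function onMessage(ConnectionInterface $conn, $msg) {}
public function onError(ConnectionInterface $conn, \Exception $e) {}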
}
That's basically it. The main points here are:
Unique token, userid and connection id provide the unique combination required to ensure that one user can't see another user's data.
A unique token means that if the same user opens another page and requests to subscribe, they'll have their own connection id + token combo, so the same user won't have duplicate subscriptions on the same page (basically, each connection has its own individual data).
Extension
You should be ensuring all data is validated and is not a hack attempt before you do anything with it. Log all connection attempts using something like Monolog, and set up e-mail forwarding if any critical errors occur (like the server stopping because someone is attempting to hack it).
Closing Points
Validate Everything. I can't stress this enough. Your unique token that changes on every request is important.
Remember, if you re-generate the token on every HTTP request, and you make a POST request before attempting to connect via websockets, you'll have to pass the re-generated token back to your JavaScript before trying to connect (otherwise your token will be invalid). See the sketch after these points.
Log everything. Keep a record of everyone that connects, asks for what topic, and disconnects. Monolog is great for this.
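Here is the sketch mentioned above for handing a freshly generated token back to the page; the endpoint name, the saveTokenForUser() helper, and the JSON shape are hypothetical placeholders, not part of the framework:
// refresh-token.php -- called via AJAX right before opening the websocket connection
session_start();
$token = bin2hex(openssl_random_pseudo_bytes(32));
saveTokenForUser($_SESSION['userId'], $token); // hypothetical helper: persist the token against the user
header('Content-Type: application/json');
echo json_encode(array('token' => $token));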

To send to specific users, you need a ROUTER-DEALER pattern instead of PUB-SUB. This is explained in the Guide, in chapter 3. Security, if you're using ZMQ v4.0, is handled at the wire level, so you don't see it in the application. It still requires some work, unless you use the CZMQ binding, which provides an authentication framework (zauth).
Basically, to authenticate, you install a handler on inproc://zeromq.zap.01, and respond to requests over that socket. Google ZeroMQ ZAP for the RFC; there is also a test case in the core libzmq/tests/test_security_curve.cpp program.
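A rough sketch of the ROUTER-DEALER idea using the php-zmq extension (the endpoint, identity and payloads here are assumptions for illustration, not taken from the Guide verbatim); the two halves are separate processes:
// --- server process: a ROUTER can address a reply to one specific peer ---
$context = new ZMQContext();
$router  = $context->getSocket(ZMQ::SOCKET_ROUTER);
$router->bind('tcp://127.0.0.1:5556');

$identity = $router->recv(); // ROUTER receives [identity, payload] per DEALER message
$payload  = $router->recv();

$router->send($identity, ZMQ::MODE_SNDMORE); // address the reply to that one peer
$router->send('data for this user only');

// --- client process: a DEALER with a fixed identity ---
$context = new ZMQContext();
$dealer  = $context->getSocket(ZMQ::SOCKET_DEALER);
$dealer->setSockOpt(ZMQ::SOCKOPT_IDENTITY, 'user-42');
$dealer->connect('tcp://127.0.0.1:5556');
$dealer->send('subscribe me');
echo $dealer->recv();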

Related

call recording using twilio proxy api

I am doing communication between two people using Twilio Proxy. When a customer replies to my Twilio Proxy reserved number, a session is created and that session handles all communication, SMS as well as calls. How can I record the call in this scenario?
When I add a Twilio number to the proxy pool, the "a call comes in" and "a message comes in" URLs are changed to the proxy service name.
Thanks
I found a pretty good project that demonstrates exactly that! And the best part is that it really works very well!
Basically, you only need to set up the webhook that fires when a call is made, and in that webhook you call processRecordCall :D
The method is here:
async function processRecordCall(req){
let callSID = req.body.inboundResourceSid;
console.log(req.body.inboundResourceSid);
console.log(req.body.inboundResourceStatus);
console.log(req.body.outboundResourceStatus);
//The only Proxy callback we care about is the one that provides us context for the initiated outbound dial. We will ignore all other callbacks (including those with context for SMS). Alternatively and depending on the use case, we could use the Proxy "Intercept" callback, but this has additional requirements for handling error status.
if(req.body.outboundResourceStatus == "initiated")
{
//Get all recordings for a given call.
let recordings = await client.recordings.list({callSid: callSID});
console.log(recordings.length);
//We only want to start recording if there are no existing recordings for this callSid. *This may be overkill as I haven't hit this edge case. Consider it preventive for now*
if(recordings.length == 0)
{
let recordingSet = false;
let callStatus = "";
//Self-invoking function to facilitate iterative attempts to start a recording on the given call. See example here: https://stackoverflow.com/a/3583740
(function tryRecord (i) {
setTimeout(async function () {
if(!recordingSet && i > 0)
{
console.log(`I is ${i}`);
//Fetch the call to see if still exists (this allows us to prevent unnecessary attempts to record)
//Prod code probably needs callback or async added to ensure downstream execution is as expected
let call = await client.calls(callSID).fetch();
callStatus = call.status;
//Only attempt to create the recording if the call has not yet completed
if(callStatus != "completed")
{
//Try to create the recording
try{
let result = await axios.post(`https://api.twilio.com/2010-04-01/Accounts/${accountSid}/Calls/${callSID}/Recordings.json`, {},
{
auth: {
username: accountSid,
password: authToken
}
});
//This code assumes an HTTP 201 *Created* status and thus successful creation of the recording. Production code may need to explicitly ensure HTTP 201 (eg. handle any other possible statuses)
console.log(`statusCode: ${result.status}`);
recordingSet = true; //This isn't entirely necessary (eg. could just set i=1 to force stop iteration), but I'm making an assumption that most use cases will prefer having this for reporting purposes
} catch(err){
//TODO(?) - There may be specific errors that should be explicitly handled in a production scenario (eg. 21220)
}
}
else //The call is completed, so there's no need to loop anymore
{
console.log("Force stop iteration");
i = 1; //Set i = 1 to force stop the iteration
}
}
if (--i || !recordingSet) tryRecord(i); // decrement i and call myLoop again if i > 0 or if recording is not set
}, 1000) //NOTE: 1 second time delay before retry (this can be whatever works best --responsibly-- for the customer)
})(15); //NOTE: 15 attempts to record by default (unless forced to stop or recording already started. This can be whatever works best --responsibly-- for the customer)
}
else
{
console.log("recording exists already");
}
}
else
{
console.log(`We ignored callback with outgoing status '${req.body.outboundResourceStatus}'`);
}
}
I'm linking the full project below
https://github.com/Cfee2000/twilio-proxy-admin-app

How to build a mysql queuing system with count downs?

I am working on a browser/mobile game and I am trying to build a system that automatically ends queued tasks after a certain time has passed. It's the basic research schema used in most games.
Research A costs $100 and will take 1 hour to complete. Do I have to check every second for tasks that are at or past their completion time and trigger an event to clear them and increment the level number? Is there a better or more optimal way? This idea works by itself, but what happens if you need to run 5 or 6 different queues in the game design? Should I abstract them enough to get them all into one table?
I apologize if I seem a little vague or erratic with my questions. I am trying to figure out where to start with this concept.
I'm not very familiar with it, but I believe you could use websockets or Node.js to create a callback event, which you could then trigger from a PHP socket server.
You can base yourself off this tutorial: http://www.sanwebe.com/2013/05/chat-using-websocket-php-socket
Steps
First, identify the message type using the websocket.onmessage callback, something similar to this should work:
websocket.onmessage = function(ev)
{
var msg = JSON.parse(ev.data); //Assuming you'll encode the message components in JSON with PHP
if ( msg.type == "research_end" )
{
FinishResearch(msg.content); //Assuming that the content element of the JSON array contains the ID of the research
}
}
Secondly, make the server send the actual message. To not make this too complicated or long I'll just pretend that sendMessage($msg, $client) is a function that sends a message to a client.
However, as explained in the tutorial, each client socket is stored in an array called $clients, you'll have to add some kind of identifier to each research so it's easy to know which research belongs to what client.
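For what it's worth, a minimal sketch of such a sendMessage(), assuming the mask() frame-encoding helper from the linked tutorial is available (both names are placeholders rather than a fixed API):
function sendMessage($msg, $client)
{
    $framed = mask($msg); // wrap the payload in a websocket frame (helper from the tutorial)
    return socket_write($client, $framed, strlen($framed));
}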
Now, here's an important part. On the server there will be a variable called $research which will be structured as such:
$research['peername'][0]['time'] = 60000
$research['peername'][0]['type'] = 20
You can add the research by sending an outgoing message to the websocket server by using this:
var msg = {message: '20', type: 'research', time : '300000'}; //Create the request object
websocket.send(JSON.stringify(msg)); //Send it to the socket server as a JSON string, decode with json_decode once it arrives
Then, when it gets to the server and is identified as a research request, we call a callback called doResearch which takes two arguments
//loop through all connected sockets
foreach ($changed as $changed_socket) {
//check for any incomming data
while(socket_recv($changed_socket, $buf, 1024, 0) >= 1)
{
$received_text = unmask($buf); // Unmask data
$msg_array = json_decode($received_text, true); // Decode the JSON string we sent to the server (as an associative array)
doResearch($msg_array, $changed_socket); // Let's say this function contains all the procedures to do the research
}
}
doResearch would be similar to this:
function doResearch($msg_array, $socket)
{
global $research;
socket_getpeername($socket, $addr); // fills $addr with the peer's address
$name = $addr;
$count = isset($research[$name]) ? count($research[$name]) : 0;
$research[$name][$count]['time'] = $msg_array['time'];
$research[$name][$count]['type'] = $msg_array['type'];
$research[$name][$count]['socket'] = $socket; // keep the socket so we can message this client later
}
And finally, you would have to add a conditional like this inside the main server loop:
foreach ( $research as $peer => $items )
{
foreach ( $items as $key => $i2 )
{
if ( time() >= $i2['time'] ) // assuming 'time' holds the completion timestamp
{
$sql->query("INSERT INTO researches (peer, researchid) VALUES ('".$peer."', '".$i2['type']."')");
sendMessage('Research type '.$i2['type'].' has finished.', $i2['socket']);
unset($research[$peer][$key]); // don't announce the same research twice
}
}
}
Then, that would check if a research has been finished and insert it into the database.
Hope this helps.

How to access to an external database securely?

I'm developing a mobile app which has to access an external webapp (PHP + CodeIgniter) that administrates the actions queried via AJAX.
There is a problem with this approach: if anyone sees the URLs used, they could delete rows or modify users' info in the database. So I came up with this system to avoid that:
After a successful login I would do this:
// getToken : https://stackoverflow.com/a/13733588/2154101
$private_token = getToken(50);
$this->session->set_userdata('private_token', $private_token);
$public_token = getToken(50);
$this->session->set_userdata('secure_token', md5("$private_token:$public_token"));
$data['token'] = $public_token;
// some stuff ...
// send $data in JSON
Then the client would send the public token in the next query, and I would do this on the server:
$public_token = $this->input->post('token');
$data['token'] = get_public_token($public_token);
// some stuff ...
// send $data in JSON
Where get_public_token is within a helper with this code:
// Helpers in CodeIgniter are plain functions, so grab the CI instance instead of using $this
function get_public_token($public_token) {
$CI =& get_instance();
$last_secure_token = $CI->session->userdata('secure_token');
$private_token = $CI->session->userdata('private_token');
$actual_token = md5("$private_token:$public_token");
if ($actual_token === $last_secure_token) {
// Rotate both tokens for the next request
$private_token = getToken(50);
$public_token = getToken(50);
$CI->session->set_userdata('private_token', $private_token);
$CI->session->set_userdata('secure_token', md5("$private_token:$public_token"));
return $public_token;
} else { // you are cheating me ...
$CI->session->sess_destroy();
redirect('/');
}
}
So only the user of this session could modify the data of the database.
I'm just trying to do the same explained here: https://stackoverflow.com/a/17371101/2154101
The session are encrypted, and I store them in a database too.
Do you think this method will work ok? Am I missing something important?
You should create an API for your mobile application, with an authentication mechanism.
If your database holds user-specific data, then you should create an account for each user. That way, if a user sniffs the network and tries to call the API manually, they can only change their own data.
There are some API libraries for php out there, you should look into that.
Actually your solution is doing more than necessary. The only token of interest is the public_token sent back and forth. So you can throw away private_token and secure_token from session data, keeping only public_token for checking. Your current check is something like (X + 5)/2 == (14 + 5)/2 (is [received_token + 5]/2 equal to [14 + 5]/2 ?) when you can simplify to X == 14.
However, if someone is sniffing the network, they can grab the last token sent to a client and use it to hijack that session. They can execute anything until the original client sends a request with the now-outdated token, killing the session.
A better solution would be to create a secure_key after login and keep it at both ends (client and server). The server would keep sending a new public_token with each response, but the client would send md5(secure_key + public_token) with each request. This narrows the hijacking window even further, to the exact point where the session started. Without the original key, attackers can't create a valid md5.
However we are talking about minor hacking fans here. Anyone more zealous could hack that anyway. If you are concerned about that, then throw away all that stuff and simply use a HTTPS connection. With a trusted connection your sessions and access control rules are protected.
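A minimal sketch of that secure_key idea in CodeIgniter terms, reusing the getToken() helper from the question (the names and flow are illustrative, not a drop-in implementation):
// At login: create the shared key once and hand it to the client (ideally over HTTPS)
$secure_key = getToken(50);
$this->session->set_userdata('secure_key', $secure_key);

// On every response: issue a fresh public token
$public_token = getToken(50);
$this->session->set_userdata('public_token', $public_token);
$data['token'] = $public_token;

// On every request: the client must send md5(secure_key . public_token)
$expected = md5($this->session->userdata('secure_key') . $this->session->userdata('public_token'));
if ($this->input->post('token') !== $expected) {
    $this->session->sess_destroy(); // signature doesn't match: drop the session
    redirect('/');
}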
A better way is to create an API using SOAP or SAML2.
OAuth can be a very good solution: http://oauth.net/. It takes care of tokens and is a well-secured, proven API. If you wish to support secure authentication for a web application plus a mobile application, it is a good choice.
On the other hand, it really depends on how complex your current system is and how the system is going to be in future.

Salesforce callout using PHP

Apologies, since I may not know the terminologies for the salesforce API. I just started programming a connector to interact with salesforce and I am stuck.
I have a requirement where, each time a new entry is added to the Leads section, I have to retrieve a couple of fields (First Name and Product Code) and pass them to a different piece of software that uses PHP.
<?php
require "conf/config_cleverbridge_connector.inc.php";
require "include/lc_connector.inc.php";
// Start of Main program
// Read basic parameters
if ($LC_Username === "")
{
$LC_Username = readParam("USER");
}
if ($LC_Password === "")
{
$LC_Password = readParam("PASSWORD");
}
$orderID = "";
$customerID = substr(readParam("PURCHASE_ID"), 0, 10);
$comment = readParam("EMAIL")."-".readParam("PURCHASE_ID");
// Create product array
$products = array();
$itemID = readParam("INTERNAL_PRODUCT_ID");
$quantity = 1;
if (!ONCE_PER_PURCHASED_QUANTITY)
{
$quantity = readParam("QUANTITY");
}
// Add product to the product array
$products[] = array (
"itemIdentification" => $itemID,
"quantity" => $quantity,
);
// Create the order
$order = array(
"orderIdentification" => $orderID,
"customerIdentification" => $customerID,
"comment" => $comment,
"product" => $products,
);
// Calling webservice
$ticket = doOrder($LC_Username, $LC_Password, $order);
if ($ticket)
{
Header("HTTP/1.1 200 Ok");
Header("Content-Type: text/plain");
print TICKET_URL.$result->order->ticketIdentification;
exit;
}
else
{
$error = "No result from WSConnector_doOrder";
trigger_error($error, E_USER_WARNING);
printError(500, "Internal Error.");
exit;
}
// End of Main program
?>
Now this is the code that I got and have to work with. And this is hosted on a different remote server.
I am very, very new to Salesforce and I am not really sure how to trigger a call to this PHP file on a remote site.
The basic idea is:
1. New entry in Lead is created.
2. Immediately 2 fields (custID and prodID) are sent to this PHP file I have pasted above (some of the variables are different)
3. This does its processing and sends 2 fields back to salesforce.
Any help or guidance is appreciated. Even links to read up on is okay as I am completely clueless.
PS: I have another example where it makes use of JSON Messages if that may make any difference.
Thanks
I'll repost the links from my comment :)
https://salesforce.stackexchange.com/questions/23977/is-it-possible-to-get-the-record-id
Web hook in salesforce?
If your PHP endpoint is visible on the open web (not a part of some intranet or just your own localhost) then simplest thing to do would be to send an Outbound Message from Salesforce. No coding required, just some XML document you'll have to parse on the PHP side. Plus it will automatically attempt to resend the messages if the host is unreachable...
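As a rough sketch, the PHP side of an Outbound Message listener boils down to parsing the SOAP body and returning an Ack; the namespace and field names below follow the standard outbound-message format, but treat them as assumptions and verify against the WSDL Salesforce generates for you:
// Read the SOAP notification Salesforce POSTs to your endpoint
$soap = file_get_contents('php://input');
$xml  = simplexml_load_string($soap);
$xml->registerXPathNamespace('sf', 'urn:sobject.enterprise.soap.sforce.com');

// Pull out the fields you selected when defining the outbound message
$firstNames = $xml->xpath('//sf:FirstName');
$leadIds    = $xml->xpath('//sf:Id');
// ...hand them to your existing doOrder() logic here...

// Salesforce keeps retrying until it receives this Ack
header('Content-Type: text/xml');
echo '<?xml version="1.0" encoding="UTF-8"?>'
   . '<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body>'
   . '<notificationsResponse xmlns="http://soap.sforce.com/2005/09/outbound"><Ack>true</Ack></notificationsResponse>'
   . '</soapenv:Body></soapenv:Envelope>';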
If your app can't be accessed from SF servers then I think your PHP app will have to be the "actor": querying SF every X minutes for new Leads, or maybe subscribing to the Streaming API. This means you'd have to store SF credentials in your PHP app and remember to either change the password periodically or tick the "password never expires" checkbox on the "integration user"'s profile.
So you're getting the notification, you generate your tickets, and it's time to send them back. Will you want to pretend the update of the Lead was done by the person that created it, or will you want to see "last modified by: Integration User"? An outbound message can contain a session id which you can use to act as the person who initiated the action (created the lead and fired the workflow), at least until they log out or the session times out.
For message back you can use SOAP or REST salesforce apis - read the docs to figure out how to send an update command (and if you want to make it clear it was done by special user associated with this PHP app - how to log in to the APIs). I think the user's profile must have "API enabled" ticked before you could reuse somebody's session so maybe it's better to have a dedicated account for integrations like that...
Another thing to keep in mind with outbound messages is to ignore messages sent from sandboxes, so that if somebody makes a test environment you will not call your "production" database of tickets. You can also remember to modify the outbound message and remote site setting every time a sandbox is made so you'll have "prod talking to prod, test talking to test". I know you can include the user's session id in the OM, so maybe you can also add the organization's id (for production it'll stay the same, every new sandbox will have a new id).
The problem with this approach is that it might not scale. If 1000 leads are inserted in one batch (for example with Data Loader), you'll get spammed with 1000 outbound messages. Your server must be able to handle such load, and it also means you're using 1 API request to send every single update back. You can check the limit of API requests in Setup -> Company Information. Developer Edition has this limit very low, sandboxes are better, production is best (it also depends on how many user licenses you have bought). That's why I asked about batching them up.
More coding but also more reliable would be to ask SF for changes every X minutes (Streaming API? Normal query? check the "web hook" answer) and send an update of all these records in one go. SELECT Id, Name FROM Lead WHERE Ticket__c = null (note there's nothing about AND LastModifiedDate >= :lastTimeIChecked)...
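A hedged sketch of that polling option using the Salesforce REST API with cURL; the instance URL, API version, access token and the Ticket__c field are assumptions based on the example query above:
$base  = 'https://yourInstance.salesforce.com/services/data/v29.0';
$query = urlencode('SELECT Id, FirstName FROM Lead WHERE Ticket__c = null');

// Fetch the leads that still need a ticket ($accessToken obtained via OAuth or the SOAP login call)
$ch = curl_init($base . '/query?q=' . $query);
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => array('Authorization: Bearer ' . $accessToken),
));
$result = json_decode(curl_exec($ch), true);

foreach ($result['records'] as $lead) {
    // Generate the ticket in your system, then PATCH it back onto the Lead
    $ch = curl_init($base . '/sobjects/Lead/' . $lead['Id']);
    curl_setopt_array($ch, array(
        CURLOPT_CUSTOMREQUEST  => 'PATCH',
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_HTTPHEADER     => array(
            'Authorization: Bearer ' . $accessToken,
            'Content-Type: application/json',
        ),
        CURLOPT_POSTFIELDS     => json_encode(array('Ticket__c' => 'T-123')),
    ));
    curl_exec($ch);
}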

How to use Predis to publish more than once

How can I publish info between the clients more than once?
I mean, when I publish info from one user to the other, he receives it and can reply, but this only works once.
When one user sends something to the other, the GET is loaded and the receiving stops. How can I make it so the clients keep receiving forever, not only once?
Pub/sub works like a channel: you put data in on one side and you get the same data out on the other side.
So published data will be received only when there is some subscriber for it.
Use the pub/sub context and subscribe to a channel, say "x"; on the other side, keep taking the data (say from the user) and publish it to the same channel with the publish command each time.
Subscriber:
$redis = new Predis\Client(); // put settings here, if required
$pubsub = $redis->pubSub();
$pubsub->subscribe($channel1);
foreach ($pubsub as $message)
{
switch ($message->kind) {
case 'subscribe':
echo "Subscribed to {$message->channel}\n";
break;
case 'message':
// do something
break;
}
}
Publisher:
while(1) // or whatever condition
{
$redis->publish($channel2, $userdata);
}
You can use chat messages to break the connection: for example, publish "exit", and on the subscriber side close the connection when "exit" is received; then, on the publisher side, check whether any subscriber is still attached and, if not, close it too.
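A small sketch of that exit convention on the subscriber side; I believe the Predis pub/sub consumer exposes unsubscribe(), which ends the foreach loop once no channels are left, but double-check against your Predis version:
foreach ($pubsub as $message) {
    if ($message->kind === 'message') {
        if ($message->payload === 'exit') {
            $pubsub->unsubscribe(); // drop the subscription; the loop ends and we can close up
        } else {
            // handle a normal chat message here
        }
    }
}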
