I found this SMPP Transceiver implementation:
http://sourceforge.net/projects/php-smppv3-4/files/
The way I use smpp_transceiver.php is the following:
File send.php
require_once "smpp_transceiver.php";
...
// Open socket
$tx = new SMPP('SOME IP HERE', $port); // (1)
$tx->debug = false;
$tx->system_type = $systemType;
$tx->addr_npi = 1;
// Login as transmitter
$bindResult = $tx->bindTransmitter($username, $password);
$tx->sms_source_addr_npi = 1;
$tx->sms_source_addr_ton = 0;
$tx->sms_dest_addr_ton = 0;
$tx->sms_dest_addr_npi = 1;
// Send SMS
$sendResult = $tx->sendSMS($from, $to, $msg);
// Close socket
$tx->close(); // (2)
$state2 = $tx->state;
// Delete object
unset($tx);
Very simple question:
Is it okay to constantly open (1) and close (2) sockets?
This send.php is supposed to act as a webservice.
So I will be calling this many consecutive times:
http://...../send.php?mobile=......&body=hey
http://...../send.php?mobile=......&body=blah
http://...../send.php?mobile=......&body=zort
http://...../send.php?mobile=......&body=troz
I was told that SMPP connection should be kept alive, and this is clearly not happening here.
So, two more questions:
How can I keep the connection alive, given that this is PHP and smpp_transceiver.php is a non-static class? I want every call to send.php to use the same socket connection.
Should I implement some kind of synchronized lock(o) around smpp_transceiver.php if the previous thing is not possible?
You should keep the connection alive when possible, but it also depends on how frequent "frequent" really is.
If "frequent" is a few times a minute, then it's not the end of the world; carry on.
If "frequent" is a few times a second, then you may want to seek another approach for the actual SMPP portion. PHP isn't a great choice for services that need to be kept alive for long periods of time. Try Python, Node or Ruby.
Related
I have this webpage running the following PHP script 1k0_base.php
The code is as follows (excluding irrelevant or sensitive information):
include 'PhpSerial.php';
$serial = new PhpSerial;
$serial->deviceSet("COM5");
$serial->confBaudRate(9600);
$serial->confParity("none");
$serial->confCharacterLength(8);
$serial->confStopBits(1);
$serial->confFlowControl("none");
$serial->deviceOpen();
$readx = $serial->readPortLine(100,"\n");
echo nl2br("\n".$readx."\n\n");
$read0 = $serial->readPortLine(24,"");
echo nl2br("\n".$read0."\n\n");
//$read1 = $serial->readPortLine(24,"");
$serial->deviceClose();
$readxx = explode(" ",$readx);
print_r($readxx);
echo nl2br("\n".$readxx[0]."\n"); //test
$read00 = explode(" ",$read0);
print_r($read00);
//$read11 = explode(" ",$read1);
$arrCount = count($read00);
// Find the first positive numeric token in the line: the mass reading.
for ($x = 0; $x < $arrCount; $x++) {
    $m = 1;
    if ($read00[$x] > 0) {
        $m = $read00[$x];
        break;
    }
}
$massCharNum = $x; // index of the token that held the mass
// Find the first alphabetic token in the line: the unit.
for ($x = 0; $x < $arrCount; $x++) {
    $unit = 1;
    if (ctype_alpha($read00[$x])) {
        $unit = $read00[$x];
        break;
    }
}
date_default_timezone_set('America/Denver');
echo nl2br("\n\nDebug String: ".$read0."\n");
echo nl2br("Mass: ".$m."\n");
echo nl2br("Unit: ".$unit."\n");
echo nl2br("Time: ".date("n/j/y H:i:s",time())."\n\n");
I am connecting to a scale via RS232, through an RS232 extender, to a Prolific USB-to-Serial Comm Port (COM5), as identified in Device Manager.
The Arduino IDE I am using is able to read and communicate with the COM5 port, and I have confirmed that the data is being printed every 5 seconds.
This code has been shown to work with other scale-monitoring serial-to-USB setups; the only difference is the pass-through DB9 RS232 going into this Prolific USB-to-Serial adapter.
The following is the data sheet for serial comms for the scale I'm using (same as the other scales that are working with the code as intended).
The next image is a snippet of the PhpSerial.php I am using. I am not sure why reading the port line returns an empty string and array, even though the serial monitor shows the data being printed. The code also breaks when I force the COM5 port to be busy, as well as when I disconnect it. So the code knows it's connected, but it still doesn't read the values being printed.
Picture verifying the COM5 serial print statements on Arduino IDE
SOLVED: It turns out PHP is not robust at reading serial data (go figure). After many attempts to get the code to work with the Prolific drivers on the serial-to-USB connection, I gave up and bought a serial-to-USB cable with the more standard FTDI chipset. That turned out to be enough to get PhpSerial.php to read the incoming data. If someone knows a more articulate or detailed explanation as to why this is the case, I'd love to hear feedback and thoughts.
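(Not part of the original post, just a thought.) Before swapping hardware, one cheap diagnostic is to force the COM port to the scale's settings with the Windows mode command right before PhpSerial opens it, in case the Prolific driver is holding different settings. A rough sketch:
<?php
// Sketch: explicitly configure COM5 before opening it with PhpSerial.
// Assumes Windows and the built-in `mode` command.
exec('mode COM5 BAUD=9600 PARITY=n DATA=8 STOP=1', $output, $rc);
if ($rc !== 0) {
    die("Could not configure COM5:\n" . implode("\n", $output));
}

include 'PhpSerial.php';
$serial = new PhpSerial;
$serial->deviceSet("COM5");
$serial->deviceOpen();
// ...then read as in the script above.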
Note: This is not the same as this question which utilises MessageComponentInterface. I am using WampServerInterface instead, so this question pertains to that part specifically. I need an answer with code examples and an explanation, as I can see this being helpful to others in the future.
Attempting looped pushes for individual users
I'm using the WAMP part of Ratchet and ZeroMQ, and I currently have a working version of the push integration tutorial.
I'm attempting to perform the following:
The zeromq server is up and running, ready to log subscribers and unsubscribers
A user connects in their browser over the websocket protocol
A loop is started which sends data to the specific user who requested it
When the user disconnects, the loop for that user's data is stopped
I have points (1) and (2) working, however the issue I have is with the third one:
Firstly: How can I send data to each specific user only? Broadcast sends it to everyone, unless perhaps the 'topics' end up being individual user IDs?
Secondly: I have a big security issue. If I'm sending which user ID wants to subscribe from the client-side, which it seems like I need to, then the user could just change the variable to another user's ID and their data is returned instead.
Thirdly: I'm having to run a separate PHP script containing the code for ZeroMQ to start the actual looping. I'm not sure this is the best way to do this, and I would rather have this working completely within the codebase as opposed to a separate PHP file. This is a major area I need sorted.
The following code shows what I currently have.
The server that just runs from console
I literally type php bin/push-server.php to run this. Subscriptions and un-subscriptions are output to this terminal for debugging purposes.
$loop = React\EventLoop\Factory::create();
$pusher = new Pusher;
$context = new React\ZMQ\Context($loop);
$pull = $context->getSocket(ZMQ::SOCKET_PULL);
$pull->bind('tcp://127.0.0.1:5555');
$pull->on('message', array($pusher, 'onMessage'));
$webSock = new React\Socket\Server($loop);
$webSock->listen(8080, '0.0.0.0'); // Binding to 0.0.0.0 means remotes can connect
$webServer = new Ratchet\Server\IoServer(
new Ratchet\WebSocket\WsServer(
new Ratchet\Wamp\WampServer(
$pusher
)
),
$webSock
);
$loop->run();
The Pusher that sends out data over websockets
I've omitted the useless stuff and concentrated on the onMessage() and onSubscribe() methods.
public function onSubscribe(ConnectionInterface $conn, $topic)
{
$subject = $topic->getId();
$ip = $conn->remoteAddress;
if (!array_key_exists($subject, $this->subscribedTopics))
{
$this->subscribedTopics[$subject] = $topic;
}
$this->clients[] = $conn->resourceId;
echo sprintf("New Connection: %s" . PHP_EOL, $conn->remoteAddress);
}
public function onMessage($entry) {
$entryData = json_decode($entry, true);
var_dump($entryData);
if (!array_key_exists($entryData['topic'], $this->subscribedTopics)) {
return;
}
$topic = $this->subscribedTopics[$entryData['topic']];
// This sends out everything to multiple users, not what I want!!
// I can't send() to individual connections from here I don't think :S
$topic->broadcast($entryData);
}
The script to start using the above Pusher code in a loop
This is my issue - this is a separate php file that hopefully may be integrated into other code in the future, but currently I'm not sure how to use this properly. Do I grab the user's ID from the session? I still need to send it from client-side...
// Thought sessions might work here but they don't work for subscription
session_start();
$userId = $_SESSION['userId'];
$loop = React\EventLoop\Factory::create();
$context = new ZMQContext();
$socket = $context->getSocket(ZMQ::SOCKET_PUSH, 'my pusher');
$socket->connect("tcp://localhost:5555");
$i = 0;
$loop->addPeriodicTimer(4, function() use ($socket, $loop, $userId, &$i) {
$entryData = array(
'topic' => 'subscriptionTopicHere',
'userId' => $userId
);
$i++;
// So it doesn't go on infinitely if run from browser
if ($i >= 3)
{
$loop->stop();
}
// Send stuff to the queue
$socket->send(json_encode($entryData));
});
Finally, the client-side js to subscribe with
$(document).ready(function() {
var conn = new ab.Session(
'ws://localhost:8080'
, function() {
conn.subscribe('topicHere', function(topic, data) {
console.log(topic);
console.log(data);
});
}
, function() {
console.warn('WebSocket connection closed');
}
, {
'skipSubprotocolCheck': true
}
);
});
Conclusion
The above is working, but I really need to figure out the following:
How can I send individual messages to individual users? When they visit the page that starts the websocket connection in JS, should I also be starting the script that shoves stuff into the queue in PHP (the zeromq)? That's what I'm currently doing manually, and it just feels wrong.
When subscribing a user from JS, it can't be safe to grab the user's id from the session and send that from the client side. This could be faked. Please tell me there is an easier way, and if so, how?
Note: My answer here does not include references to ZeroMQ, as I am not using it any more. However, I'm sure you will be able to figure out how to use ZeroMQ with this answer if you need to.
Use JSON
First and foremost, the Websocket RFC and WAMP Spec state that the topic to subscribe to must be a string. I'm cheating a little here, but I'm still adhering to the spec: I'm passing JSON through instead.
{
"topic": "subject here",
"userId": "1",
"token": "dsah9273bui3f92h3r83f82h3"
}
JSON is still a string, but it allows me to pass through more data in place of the "topic", and it's simple for PHP to do a json_decode() on the other end. Of course, you should validate that you actually receive JSON, but that's up to your implementation.
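As a minimal sketch of that validation (my own helper, not part of any spec), the server side can decode the string and check for the exact keys shown above before trusting any of it:
// Returns the decoded subscription data, or null if it isn't the JSON we expect.
function decodeSubscription($topicString)
{
    $data = json_decode($topicString, true);
    if (!is_array($data)) {
        return null; // not JSON at all
    }
    foreach (array('topic', 'userId', 'token') as $key) {
        if (!isset($data[$key]) || !is_string($data[$key]) || $data[$key] === '') {
            return null; // missing or malformed field
        }
    }
    return $data;
}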
So what am I passing through here, and why?
Topic
The topic is the subject the user is subscribing to. You use this to decide what data you pass back to the user.
UserId
Obviously the ID of the user. You must verify that this user exists and is allowed to subscribe, using the next part:
Token
This should be a one-use, randomly generated token, generated in your PHP and passed to a JavaScript variable. When I say "one use", I mean that every time you reload the page (and, by extension, on every HTTP request), your JavaScript variable should have a new token in it. This token should be stored in the database against the user's ID.
Then, once a websocket request is made, you match the token and user id to those in the database to make sure the user is indeed who they say they are, and they haven't been messing around with the JS variables.
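A rough sketch of that lifecycle, assuming a PDO connection and a hypothetical ws_token column on the users table (names are illustrative only):
// During the normal HTTP request that renders the page:
$token = bin2hex(random_bytes(32)); // PHP 7+; regenerate on every page load
$pdo->prepare('UPDATE users SET ws_token = ? WHERE id = ?')
    ->execute(array($token, $userId));
// ...then echo $token into the JavaScript variable used for the subscribe call.

// Later, inside onSubscribe(), after decoding the JSON "topic":
$check = $pdo->prepare('SELECT 1 FROM users WHERE id = ? AND ws_token = ?');
$check->execute(array($data['userId'], $data['token']));
if (!$check->fetchColumn()) {
    $conn->close(); // user id and token don't match: reject the subscription
    return;
}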
Note: In your event handler, you can use $conn->remoteAddress to get the IP of the connection, so if someone is trying to connect maliciously, you can block them (log them or something).
Why does this work?
It works because every time a new connection comes through, the unique token ensures that no user will have access to anyone else's subscription data.
The Server
Here's what I am using for running the loop and event handler. I am creating the loop, doing all the decorator style object creation, and passing in my EventHandler (which I'll come to soon) with the loop in there too.
$loop = Factory::create();
$webSock = new React\Socket\Server($loop);
$webSock->listen(8080, '0.0.0.0'); // Create the listening socket, as in the push server above
new IoServer(
new WsServer(
new WampServer(
new EventHandler($loop) // This is my class. Pass in the loop!
)
),
$webSock
);
$loop->run();
The Event Handler
class EventHandler implements WampServerInterface, MessageComponentInterface
{
/**
* @var \React\EventLoop\LoopInterface
*/
private $loop;
/**
* @var array List of connected clients
*/
private $clients;
/**
* Pass in the react event loop here
*/
public function __construct(LoopInterface $loop)
{
$this->loop = $loop;
}
/**
* A user connects, we store the connection by the unique resource id
*/
public function onOpen(ConnectionInterface $conn)
{
$this->clients[$conn->resourceId]['conn'] = $conn;
}
/**
* A user subscribes. The JSON is in $subscription->getId()
*/
public function onSubscribe(ConnectionInterface $conn, $subscription)
{
// This is the JSON passed in from your JavaScript
// Obviously you need to validate it's JSON and expected data etc...
$data = json_decode($subscription->getId());
// Validate the users id and token together against the db values
// Now, let's subscribe this user only
// 5 = the interval, in seconds
$timer = $this->loop->addPeriodicTimer(5, function() use ($subscription) {
$data = "whatever data you want to broadcast";
return $subscription->broadcast(json_encode($data));
});
// Store the timer against that user's connection resource Id
$this->clients[$conn->resourceId]['timer'] = $timer;
}
public function onClose(ConnectionInterface $conn)
{
// There might be a connection without a timer
// So make sure there is one before trying to cancel it!
if (isset($this->clients[$conn->resourceId]['timer']))
{
if ($this->clients[$conn->resourceId]['timer'] instanceof TimerInterface)
{
$this->loop->cancelTimer($this->clients[$conn->resourceId]['timer']);
}
}
unset($this->clients[$conn->resourceId]);
}
/** Implement all the extra methods the interfaces say that you must use **/
}
That's basically it. The main points here are:
Unique token, userid and connection id provide the unique combination required to ensure that one user can't see another user's data.
Unique token means that if the same user opens another page and requests to subscribe, they'll have their own connection id + token combo, so the same user won't have double the subscriptions on the same page (basically, each connection has its own individual data).
Extension
You should be ensuring all data is validated and is not a hack attempt before you do anything with it. Log all connection attempts using something like Monolog, and set up e-mail forwarding if any criticals occur (like the server stopping because someone is being a bastard and attempting to hack your server).
Closing Points
Validate Everything. I can't stress this enough. Your unique token that changes on every request is important.
Remember, if you re-generate the token on every HTTP request, and you make a POST request before attempting to connect via websockets, you'll have to pass back the re-generated token to your JavaScript before trying to connect (otherwise your token will be invalid).
Log everything. Keep a record of everyone that connects, asks for what topic, and disconnects. Monolog is great for this.
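For reference, a minimal Monolog setup for this could look like the following (the channel name and log path are placeholders):
use Monolog\Logger;
use Monolog\Handler\StreamHandler;

$log = new Logger('websocket');
$log->pushHandler(new StreamHandler(__DIR__ . '/websocket.log', Logger::INFO));

// e.g. inside onSubscribe(), after decoding the topic JSON:
$log->info('Subscribe attempt', array(
    'ip'     => $conn->remoteAddress,
    'userId' => isset($data['userId']) ? $data['userId'] : null,
));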
To send to specific users, you need a ROUTER-DEALER pattern instead of PUB-SUB. This is explained in the Guide, in chapter 3. Security, if you're using ZMQ v4.0, is handled at the wire level, so you don't see it in the application. It still requires some work, unless you use the CZMQ binding, which provides an authentication framework (zauth).
Basically, to authenticate, you install a handler on inproc://zeromq.zap.01, and respond to requests over that socket. Google ZeroMQ ZAP for the RFC; there is also a test case in the core libzmq/tests/test_security_curve.cpp program.
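To make the routing part concrete, here is a bare-bones sketch with the php-zmq extension (identities and endpoints are placeholders, and both ends are shown in one script purely for illustration; in a real setup the DEALER must already be connected before the ROUTER sends, or the message is silently dropped):
// Client side: a DEALER that announces its user ID as its identity.
$ctx    = new ZMQContext();
$dealer = $ctx->getSocket(ZMQ::SOCKET_DEALER);
$dealer->setSockOpt(ZMQ::SOCKOPT_IDENTITY, 'user-42');
$dealer->connect('tcp://127.0.0.1:5556');

// Server side: a ROUTER can address one specific peer by that identity.
$router = $ctx->getSocket(ZMQ::SOCKET_ROUTER);
$router->bind('tcp://127.0.0.1:5556');
usleep(100000); // give the demo sockets a moment to connect (sketch only)

// First frame = identity of the target DEALER, second frame = payload.
$router->send('user-42', ZMQ::MODE_SNDMORE);
$router->send(json_encode(array('msg' => 'only for user-42')));

echo $dealer->recv(); // only the matching DEALER receives this message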
I'm interacting with ActiveMQ via STOMP. I have one process which publishes messages and multiple processes that subscribe to and process the messages (about 10 parallel instances).
After reading a message, I want to be sure that if my application fails or crashes for some reason, the message will not be lost. So naturally, I turned to transactions. Unfortunately, I discovered that once a consumer reads a message as part of a transaction, none of the following messages are sent to the other consumers until the transaction ends.
Test case: abc queue has a 100 messages. If I activate the following code in two different browser tabs, the first will return in 10 seconds and the second will return in 20 seconds.
<?php
// Reader.php
$con = new Stomp("tcp://localhost:61613");
$con->connect();
$con->subscribe(
"/queue/abc",
array()
);
$tx = "tx3".microtime();
echo "TX:$tx<BR>";
$con->begin($tx);
$messages = array();
for ($i = 0; $i < 10; $i++) {
$t = microtime(true);
$msg = $con->readFrame();
if (!$msg) {
die("FAILED!");
}
$t = microtime(true)-$t; echo "readFrame() took $t seconds to complete<BR>";
array_push($messages, $msg);
$con->ack($msg, $tx);
sleep(1);
}
$con->abort($tx);
Is there something I'm missing code-wise? Is there a way to configure ActiveMQ (or send a header) that will make the transaction remove the item from the queue, allow other processes to consume the other messages, and, if the transaction fails or times out, put the item back in?
PS: I thought about creating another queue - DetentionQueue for each reading process but I really rather not do it if I have a choice.
You will probably want to adjust the prefetch size of the subscription so that ActiveMQ doesn't send all the messages on the queue to client 1 before client 2 gets a chance to get any. By default it's set to 1000, so it's best to tune it for your use case.
You can set the prefetch size via the "activemq.prefetchSize=1" header on the subscribe frame. Refer to the ActiveMQ Stomp page for all the frame options.
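In the PHP Stomp client from the question, that header goes straight onto the subscribe call; something like this (with client acknowledgement added, since the reader acks each frame):
$con = new Stomp("tcp://localhost:61613");
$con->connect();
$con->subscribe(
    "/queue/abc",
    array(
        "activemq.prefetchSize" => 1, // only one unacknowledged message per consumer
        "ack" => "client"             // acknowledge explicitly, as the reader already does
    )
);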
I have been tracking emails for years using a "beacon" image, and for those clients that allow images to download, it has worked great to track how many people have opened the email.
I came across the service "DidTheyReadIt", which shows how long the client actually read the email. I tested it with their free service, and it is actually pretty close to the times I opened the email.
I am very curious how they achieve the ability to track this. I am certain that whatever solution is chosen, it will put a lot of load on the server/database, and that many of the community will reply with "Stop, No and Don't", but I do want to investigate this and try it out, even if it's just enough for me to run a test on the server and say "hell no".
I did some googling and found this article, which has a basic solution: http://www.re-cycledair.com/tracking-email-open-time-with-php
I made a test using sleep() within the beacon image page:
<?php
set_time_limit(300); // 300 seconds
ignore_user_abort(false);
$hostname_api = "*";
$database_api = "*";
$username_api = "*";
$password_api = "*";
$api = mysql_pconnect($hostname_api, $username_api, $password_api) or trigger_error(mysql_error(),E_USER_ERROR);
mysql_select_db($database_api, $api);
$fileName = "logo.png";
$InsertSQL = "INSERT INTO tracker (FileName,Time_Start,Time_End) VALUES ('$fileName',Now(),Now()+1)";
mysql_select_db($database_api, $api);
$Result1 = mysql_query($InsertSQL, $api) or die(mysql_error());
$TRID = mysql_insert_id();
//Open the file, and send to user.
$fp = fopen($fileName, "r");
header("Content-type: image/png");
header('Content-Length: ' . filesize($fileName));
readfile($fileName);
set_time_limit(60);
$start = time();
for ($i = 0; $i < 59; ++$i) {
// Update Read Time
$UpdateSQL = "UPDATE tracker SET Time_End = Now() WHERE TRID = '$TRID'";
mysql_select_db($database_api, $api);
$Result1 = mysql_query($UpdateSQL, $api) or die(mysql_error());
time_sleep_until($start + $i + 1);
}
?>
The problem with the code above (other than updating the database every second) is that once the script runs it continues to run even if the user disconnects (or moves to another email in this case).
I added "ignore_user_abort(false);", however as there is no connection to the mail client and the headers are already written I dont think the "ignore_user_abort(false);" can fire.
I looked at the post Track mass email campaigns and one up from the bottom "Haragashi" says:
"You can simply build a tracking handler which returns the tracking image byte by byte. After every byte flush the response and sleep for a period of time.
If you encounter a stream closed exception the client has closed the e-mail (deleted or changed to another e-mail who knows).
At the time of the exception you know how long the client 'read' the e-mail."
Does anyone know how I could "simply build a tracking handler" like this or know of a solution I can implement into my code that will force the code to stop running when the user disconnects?
I think the problem is that you aren't doing a header redirect every so often. The reason that it is necessary is because once a script starts executing in PHP+Apache, it basically disregards the client until finished. If you force a redirect every X seconds, it makes the server re-evaluate if the client is still connected. If the client isn't connected, it can't force the redirect, and therefore stops tracking the time.
When I played around with this stuff, my code looked like:
header("Content-type: image/gif");
while(!feof($fp)) {
sleep(2);
if(isset($_GET['clientID'])) {
$redirect = $_SERVER['REQUEST_URI'];
} else {
$redirect = $_SERVER['REQUEST_URI'] . "&clientID=" . $clientID;
}
header("Location: $redirect");
exit;
}
If the client ID was set, then above this block of code I would log this attempt at reading the beacon in the database. It was easy to simply increment the time-on-email column by 2 seconds every time the server forced a redirect.
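The logging step itself can be as simple as the following (a sketch reusing the tracker table from the question and assuming clientID carries the tracker row ID; escaping and error handling kept minimal):
// Runs before the redirect block above, only once clientID is present in the URL.
// Every redirect round-trip adds roughly 2 seconds of confirmed "open" time.
$clientID = mysql_real_escape_string($_GET['clientID']);
mysql_query(
    "UPDATE tracker
        SET Time_End = DATE_ADD(Time_End, INTERVAL 2 SECOND)
      WHERE TRID = '$clientID'"
) or die(mysql_error());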
Would you not do something more like this:
<?php
// Time the request
$time = time();
// Ignore user aborts and allow the script
// to run forever
ignore_user_abort(true);
set_time_limit(0);
// Loop until the connection drops, i.e. the reader
// closes the email, moves to another message, or
// clicks the "Stop" button.
while(1)
{
// Did the connection fail?
if(connection_status() != CONNECTION_NORMAL)
{
break;
}
// Sleep for 1 second
sleep(1);
}
// Connection is now terminated, so insert the amount of seconds since start
$duration = time() - $time;
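One caveat with this approach (my note, not the answerer's): PHP usually only notices that the client has gone away when it actually tries to send output, so in practice the loop has to keep pushing bytes out, much like the byte-by-byte handler quoted earlier. A combined sketch, using the beacon file name from the question and a hypothetical final UPDATE:
<?php
// Sketch: stream the beacon image very slowly, flushing as we go,
// and stop timing as soon as the client disconnects.
$time = time();
ignore_user_abort(true);
set_time_limit(0);

$fileName = "logo.png"; // beacon image, as in the question
header("Content-type: image/png");
$bytes = file_get_contents($fileName);

for ($i = 0, $len = strlen($bytes); $i < $len; $i++) {
    echo $bytes[$i];
    if (ob_get_level() > 0) {
        ob_flush();
    }
    flush();
    if (connection_status() != CONNECTION_NORMAL) {
        break; // the reader closed or moved away from the email
    }
    sleep(1); // stretch the download out; cap this in a real implementation
}

$duration = time() - $time;
// Record $duration against the tracker row, e.g.
// UPDATE tracker SET Time_End = ... WHERE TRID = ...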
I'm trying to make my PHP server a bit more efficient.
I've built an object named Client which holds a connected client's information, such as name, id, etc. (each client has an open socket connection with the server).
For now I have one array of socket connections, and one array of Client objects.
When I'm referring to a connection, I search my Client array to find the client that matches that connection.
It works great, but it's a bit inefficient. With a small number of clients on the server you don't feel it, but I'm afraid that with thousands of connections it will slow the server down.
As a solution I thought about a two-dimensional array, but I have a logic problem designing it.
Can I do something like this:
$clients = array();
$temp = array($newsock, new Client());
$clients[] = $temp;
I want $clients[] to be the socket and $clients[][] to be the client object.
In each row of $clients I will have only $clients[$index][0], which will be my client object for that connection.
Will I be able to send this to the socket_select() function?
You say that you have within your client object an id attribute. Why not use that id as the key for both arrays?
Socket connections array
Client object array
You might even be able to hold the connection and the client object in one array, each under the same key I talked about before - the client's id.
In any case, wherever you decide to store your client's connection, you will be able to pass it to all the relevant socket functions (see the sketch after this list) -
socket_select();
socket_accept();
socket_write();
etc...
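A rough sketch of that layout and of how it plays with socket_select() (variable names are illustrative only): one array keyed by the client's id, holding both the socket and the Client object, so the keys that socket_select() leaves in the read array point straight back at the right client.
$clients = array();

// On accept:
$newsock = socket_accept($serverSocket);
$client  = new Client();
$clients[$client->id] = array('socket' => $newsock, 'client' => $client);

// Each loop iteration: build the read array, keeping the client-id keys.
$read = array();
foreach ($clients as $id => $entry) {
    $read[$id] = $entry['socket'];
}
$write = $except = null;
if (socket_select($read, $write, $except, 0) > 0) {
    // socket_select() filters $read but preserves its keys,
    // so $id identifies which client to service.
    foreach ($read as $id => $socket) {
        $data = socket_read($socket, 2048, PHP_BINARY_READ);
        // ...handle $data for $clients[$id]['client']...
    }
}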
With regard to the efficiency of your server, I implemented some forking for broadcasting data to large numbers of clients (all of them, in the example of a chat server).
This is the implementation that I used for forking the broadcasts -
function broadcastData($socketArray, $data){
global $db;
$pid = pcntl_fork();
if($pid == -1) {
// Something went wrong (handle errors here)
// Log error, email the admin, pull emergency stop, etc...
echo "Could not fork()!!";
} elseif($pid == 0) {
// This part is only executed in the child
foreach($socketArray AS $socket) {
// There's more happening here but the essence is this
socket_write($socket, $data, strlen($data));
// TODO : Consider additional forking here for each client.
}
// The child exits here, which is what raises the SIGCHLD signal in the parent
exit(0);
}
// The child process is now occupying the same database
// connection as its parent (in my case mysql). We have to
// reinitialize the parent's DB connection in order to continue using it.
$db = dbEngine::factory(_dbEngine);
}
The code above was lifted from a previous question of mine (which was self-answered):
Terminating zombie child processes forked from socket server
Perhaps it will assist you if you choose to start forking processes.