I have a question about parallel socket connections in PHP. I'm not a socket expert, which is why I'm asking here.
I created a library that can persist data (string/int/float/array/object or anything that is losslessly serializable), like a cache, in PHP. I have included the part of my code that I think causes the problem; if you need additional info, visit: https://github.com/dude920228/php-cache
The concept was:
Create a TCP socket server
Accept connections
Read a serialized PHP array from the connecting client
Call the action handler for the action received from the client, do some stuff with the data that was sent if necessary, or write back to the client
Close the connection on the client side
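For illustration, a minimal client following this flow might look like the sketch below (the host, port, and the 'set' action layout are assumptions made up for the example, not the library's actual wire format):
// Hypothetical client sketch: connect, send one serialized request array,
// read the optional reply, then close from the client side.
$socket = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_connect($socket, '127.0.0.1', 9500); // assumed host/port
$request = serialize(array('action' => 'set', 'key' => 'foo', 'value' => 42));
socket_write($socket, $request, strlen($request));
$reply = socket_read($socket, 2048); // the server may write a response back
socket_close($socket);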
It works like a charm with around 25 parallel connections, but beyond that some clients do not receive the data they should. At 100 parallel connections, only about half of the clients get the data. The program does not emit any errors, and the only error control operator I use is on the socket_accept call in the main loop. The main loop looks like this:
public function run()
{
    $this->socket = $this->ioHandler->createServerSocket();
    while ($this->running) {
        $this->maintainer->checkBackup(time(), $this->bucket);
        $this->maintainer->maintainBucket($this->bucket);
        if (($connection = @socket_accept($this->socket))) {
            $clientId = uniqid();
            socket_set_nonblock($connection);
            $this->clients[$clientId] = $connection;
            $read = $this->clients;
            $write = array();
            $except = array();
            socket_select($read, $write, $except, null);
            $dataString = $this->ioHandler->readFromSocket($connection);
            $data = unserialize($dataString);
            ($this->actionHandler)($data, $this->bucket, $this->ioHandler, $connection);
            $this->ioHandler->closeSocket($connection);
            unset($this->clients[$clientId]);
        }
    }
    $this->close();
}
Am I misusing the socket_select call there? Or is the problem in another class? Thanks in advance for the answers!
I am scanning a subnet to gather WMI information on assets on the network. However, if a machine is stuck in POST or has some unknown issue, it freezes the script, which never moves on to the next machine. The question is: is there a way to set a timeout on PHP COM WbemScripting.SWbemLocator?
$host = '10.1.1.5'; //Host is online, but may be hung trying to shut down
//Check if host is online
if(@$fp = fsockopen($host,135,$errCode,$errStr,0.1))
{
    //Create new connection
    $WbemLocator = new COM("WbemScripting.SWbemLocator");
    //Connect to remote workstation
    $WbemServices = $WbemLocator->ConnectServer($host, 'root\\cimv2', $user, $password);
    $WbemServices->Security_->ImpersonationLevel = 3;
    //Basic WMI query
    $system = $WbemServices->execQuery("Select * from Win32_ComputerSystem");
    foreach($system as $n){
        $hostname = $n->Name; //Hostname
    }
    //Process all data -> insert into DB
}
//Move on to the next machine. In this case, the script will never move on
Short answer: no, because it's a COM object, not a PHP object, and there's no facility to control this directly.
Longer, speculative answer: you could try tweaking the TCP parameters via the registry, notably TcpInitialRtt, although there doesn't seem to be a lot of scope for changing the behaviour unless you are running on a very reliable and uncongested network. And you'll probably break other things running on the machine.
I have a page that contains a picture, and the picture should refresh every second. I have a socket.php file that opens a connection to a C++ program, asks it for a picture, and then outputs it. I have JS code that asks socket.php for an image every second.
So, every second, my JS code in the client's browser asks socket.php on my server to send the user a new picture, and socket.php asks my C++ code for a picture, receives it, and passes it on to the client's browser.
Everything is OK.
But when I change the interval from 1 second to 50 milliseconds, the number of apache2 processes on my server shoots up to about 200, and this uses too much RAM on my server.
My question is: what should I do to have a persistent connection between PHP and C++, so that a new connection isn't created for every user request? Would a persistent connection help avoid this number of Apache processes?
This is my socket.php file:
if(isset($_GET['message']))
    $message = $_GET['message'];
else
    $message = "-1";

$host = "127.0.0.1";
$port = 12345;
$socket = socket_create(AF_INET, SOCK_STREAM, 0) or die("Could not create socket\n");
$result = socket_connect($socket, $host, $port) or die("Could not connect to server\n");
socket_write($socket, $message, strlen($message)) or die("Could not send data to server\n");

$b = '';
$buf = '';
while(true)
{
    $bytes = socket_recv($socket, $buf, 2048, 0);
    if($bytes == 0) break;
    $b .= $buf;
}

$im = imagecreatefromstring($b);
header('Content-Type: image/jpeg');
imagejpeg($im);
imagedestroy($im);
This is my js code:
function updateImage()
{
    if(!isPaused)
    {
        if(newImage.complete) {
            document.getElementById("myimg").src = newImage.src;
            newImage = new Image();
            newImage.src = "socket.php?message=0&image" + count++ + ".jpg";
        }
        setTimeout(updateImage, 50);
    }
}
60 calls in a minute is nothing, and so is 200. I think the problem is that you don't close the socket.
The best approach is to make your C++ program update a MySQL DB every second and have the page ask for the updated image from the DB; this way you gain flexibility.
At that point you can also do caching of the image.
Also, you can serve as many image users as you want without opening new sockets and without calling the C++ application, as sketched below.
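A rough sketch of the serving side under that approach (the latest_image table, its columns, and the credentials are made up for the example; it assumes the C++ program keeps overwriting one row with the newest JPEG):
// Sketch: serve the newest image from MySQL instead of opening a
// socket to the C++ program on every request.
$mysqli = new mysqli('localhost', 'user', 'pass', 'imagedb');
$result = $mysqli->query("SELECT data FROM latest_image ORDER BY created DESC LIMIT 1");
$row = $result->fetch_assoc();
header('Content-Type: image/jpeg');
echo $row['data']; // raw JPEG bytes written by the C++ updater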
As far as I can tell, one really cannot persist a connection from an Apache instance of PHP (e.g. mod_php) to another service, except for database connections (e.g. all of those supported by PDO). #volkinc's link to how PHP executes is a pretty good example which illustrates where such a persistent link would have to be cached/stored by PHP, but is not.
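For the database exception mentioned above, a persistent connection is requested through a PDO constructor option; a minimal sketch:
// Ask PDO for a persistent connection: the underlying link is kept
// alive by the PHP worker process and reused across requests.
$dbh = new PDO('mysql:host=localhost;dbname=test', $user, $pass,
    array(PDO::ATTR_PERSISTENT => true));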
Please help me implement notifications of new messages for users.
Right now I have this client code:
function getmess(){
    $.ajax({
        url: "notif.php",
        data: {"id": id},
        type: "GET",
        success: function(result){
            $("#count").html(result);
            setTimeout(getmess, 10000);
        }
    });
}
and this server code:
$mysqli = new mysqli('localhost', 'root', '', 'test');
if (mysqli_connect_errno()) {
    printf("error: %s\n", mysqli_connect_error());
    exit;
}

session_start();
$MY_ID = $_SESSION['id'];

while (true) {
    $result = $mysqli->query("SELECT COUNT(*) FROM messages WHERE user_get='$MY_ID'");
    if (mysqli_num_rows($result)) {
        while ($row = mysqli_fetch_array($result)) {
            echo $row[0]."";
        }
        flush();
        exit;
    }
    sleep(5);
}
The problem is that this script does not update in real time when a new message is added to the database. But if I press a button with onclick="getmess();", it works.
First, you check your database every 5 seconds, so you can't achieve real time: you have up to a 5-second delay.
Second, there is no way to achieve real time by polling.
The way to deliver notifications in near real time is to send the message from the same code that inserts into the database; that is, you should not query the database for new records, but push the data to the client whenever a new record is written, even with long-polling as the transport protocol.
How to achieve this? Unfortunately, PHP is not a good choice. You need a non-blocking server to hold the connection, you need to know which connection is waiting for what data, and you need a way for PHP (your backend) to notify that connection.
You can use the Tornado web server, node.js, or nginx to handle the connections. You assign an identifier to each connection (you probably already have one: the user id), and when a new record is added, the PHP script performs an HTTP request to the notification server (Tornado, node.js, nginx) saying which data goes to which user.
For nginx, take a look at nginx push stream.
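As a rough sketch, the PHP code that stores a message could then publish to the push server in the same request (the /pub publish endpoint and the per-user channel naming below are assumptions that depend on how the push stream module is configured):
// After inserting the message, notify the push server (sketch).
// Assumes nginx push stream exposes a publisher location at /pub and
// that each user listens on a channel named after their user id.
$ch = curl_init('http://127.0.0.1/pub?id=user_' . $recipientId);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, json_encode(array('unread' => $count)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);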
I have an issue that has only cropped up now. I am on a shared web hosting plan that allows a maximum of 10 concurrent database connections. The web app has dozens of queries, some PDO, some mysql_*.
Loading one page in particular peaks at 5-6 concurrent connections, meaning it takes only 2 users loading it at the same time to spit an error at one or both of them.
I know this is inefficient and I'm sure I can cut that down quite a bit, but my idea at the moment is to move the PDO code into a function, pass in a query string and an array of variables, and have it return an array (partly to tidy my code).
THE ACTUAL QUESTION:
How can I get this function to keep retrying until it manages to execute, holding up the script that called it (and any script that called that one) until it manages to execute and return its data? I don't want things executing out of order; I am happy with code being delayed for a second or so during peak times.
Since someone will ask for code, here's what I do at the moment. I have this in a file on its own so I have a central place to change connection parameters; the if statement merely removes the need to continuously change the parameters when I switch from my test server to the live server:
$dbtype = "mysql";
$server_addr = $_SERVER['SERVER_ADDR'];
if ($server_addr == '192.168.1.10') {
$dbhost = "localhost";
} else {
$dbhost = "xxxxx.xxxxx.xxxxx.co.nz";
}
$dbname = "mydatabase";
$dbuser = "user";
$dbpass = "supersecretpassword";
I 'include' that file at the top of a function:
include 'db_connection_params.php';
$pdo_conn = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
then run commands like this, all on the one connection:
$sql = "select * from tbl_sub_cargo_cap where sub_model_sk = ?";
$capq = $pdo_conn->prepare($sql);
$capq->execute(array($sk_to_load));
while ($caprow = $capq->fetch(PDO::FETCH_ASSOC)) {
    //stuff
}
You shouldn't need 5-6 concurrent connections for a single page; each page should really only ever use 1 connection. I'd try to re-architect whatever part of your application opens multiple connections on a single page.
However, you should be able to catch a PDOException when the connection fails (documentation on connection management), and then retry some number of times.
A quick example,
<?php
$retries = 3;
while ($retries > 0)
{
    try
    {
        $dbh = new PDO("mysql:host=localhost;dbname=blahblah", $user, $pass);
        // Do query, etc.
        $retries = 0;
    }
    catch (PDOException $e)
    {
        // Should probably check $e is a connection error, could be a query error!
        echo "Something went wrong, retrying...";
        $retries--;
        usleep(500000); // Wait 0.5s between retries (usleep takes microseconds).
    }
}
10 concurrent connections is A LOT; they can easily serve 10-15 online users, and it takes heavy effort to exhaust them.
So there is something wrong with your code.
There are 2 main reasons for it:
slow queries take too much time, and thus serving one hit holds one mysql connection for too long.
multiple connections are opened from every script.
The former has to be investigated, but the latter is simple:
Do not mix mysql_* and PDO in one script: you are opening 2 connections at a time.
When using PDO, open the connection only once and then use it throughout your code.
Reducing the number of connections in one script is the only way to go.
If you have multiple instances of the PDO class in your code, you will need to add the timeout-handling code you want to every call, so heavy code rewriting is required anyway.
Replace these new instances with global $pdo; instead. It will take the same amount of time, but it will be a permanent solution, not a temporary patch as you want it.
Please be sensible.
PHP automatically closes all the connections at the end of the script; you don't have to close them manually.
Having only one connection throughout one script is common practice, used by developers all around the world. You can use it without any doubts. Just use it, as in the sketch below.
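One simple way to share a single connection is a small helper that creates the PDO instance once and hands out the same object afterwards; a sketch reusing the asker's db_connection_params.php from above:
// Create the PDO connection once and reuse the same instance everywhere.
function db()
{
    static $pdo = null;
    if ($pdo === null) {
        include 'db_connection_params.php';
        $pdo = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
    }
    return $pdo;
}

// Anywhere in the code:
$capq = db()->prepare("select * from tbl_sub_cargo_cap where sub_model_sk = ?");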
If you have a transaction open and want to log something in the database, you sometimes need 2 connections in one script.
I have a page where a few thousand users can hit a method at the same time. I have the following code, where I connect every time. Since this goes to a separate memcache server, will this cause slowdowns? Is there a way to connect just once and reuse that connection? Do I have to close the connection after every request?
$primary_connected = $memcache_primary->connect($primary_memcache_server, 11211);
if($primary_connected){
    $data = $memcache_primary->get($key);
    if ($data != NULL) {
        return $data;
    }
}
else{
    /////Get data from database
}
If you are using the PHP memcached class (the one with the d on the end, not memcache) then yes, you can open a persistent connection.
You can pass a persistent ID to the constructor which will open a persistent connection and subsequent instances that use the same persistent ID will use that connection.
$memcached = new Memcached('method_name_or_persistent_identifier');
$memcached->addServer(...);
// use it
Hope that helps.
See Memcached::__construct() for more details.
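Note that with a persistent ID the server list survives between requests, so it's worth guarding the addServer call; a sketch adapted to the code above (the pool name is made up):
// Reuse one persistent memcached connection per worker process.
$memcached = new Memcached('my_pool');
if (!count($memcached->getServerList())) {
    // Only add servers the first time this persistent id is used,
    // otherwise the list keeps growing on every request.
    $memcached->addServer($primary_memcache_server, 11211);
}
$data = $memcached->get($key);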