I'm using a for loop to speed up my script. The problem is that each process that runs inside the loop takes several minutes to complete. Is it possible to move on to the next iteration of the loop if the previous one hasn't completed? I know that PHP isn't a multi-threaded language, so perhaps Python would be a better choice.
ini_set('memory_limit', '2048M');
ini_set('max_execution_time', 0);

$list = file_get_contents('auth.txt');
$list = nl2br($list);
$exp = explode('<br />', $list);
$count = count($exp);

for ($i = 0; $i < $count; $i++) {
    $auth = $exp[$i];
    echo 'Trying ' . $auth . "\n";
    // This takes several minutes. Is it possible to move on to the next one before it has completed?
    exec('python test.py --auth=' . $auth);
}
Use & to run the script in the background:
exec('python test.py --auth='.$auth . ' > /dev/null 2>&1 &');
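If you want the whole loop to fire the jobs off without waiting for each one, a minimal sketch might look like this (the escapeshellarg() call, trim() and the output redirection are additions for safety, not part of the original code):

<?php
ini_set('max_execution_time', 0);

$lines = file('auth.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

foreach ($lines as $auth) {
    // escapeshellarg() guards against stray shell characters in the token
    $cmd = 'python test.py --auth=' . escapeshellarg(trim($auth));
    echo 'Starting ' . $auth . "\n";

    // redirect output and append & so exec() returns immediately
    // instead of waiting for the Python script to finish
    exec($cmd . ' > /dev/null 2>&1 &');
}

Note that this launches every job at once; if auth.txt is long, you may want to start the processes in batches so you don't spawn hundreds of Python processes at the same time.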
I want to calculate the execution time of the popen() function in PHP. The function does not provide the execution time by default.
My pseudo code is like below:
$handle = popen($cmd, 'r');
while (!feof($handle)) {
    $buffer = fgets($handle, FREAD_BUFFER_SIZE);
    $out .= $buffer . " ";
}
I found a solution here but I am not sure it's the right way.
Consider using the time() or microtime(true) functions.
$currentTimeinSeconds = microtime(true);
This returns the number of seconds that have passed since 1970-01-01 (the Unix epoch) at the moment it is called. Make two variables (e.g. $t1 and $t2) and simply subtract the first from the second to get the execution time.
$t1 = microtime(true);
// [function you want to time]
$t2 = microtime(true);
$ex_time = $t2 - $t1;
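Applied to the popen() loop from the question, a minimal sketch could look like this ($cmd, the buffer size and the ls -l command are placeholders, not part of the original code):

<?php
$cmd = 'ls -l';        // whatever command you are timing
$buffer_size = 4096;   // stands in for FREAD_BUFFER_SIZE

$t1 = microtime(true); // timestamp before popen()

$out = '';
$handle = popen($cmd, 'r');
while (!feof($handle)) {
    $buffer = fgets($handle, $buffer_size);
    $out .= $buffer . ' ';
}
pclose($handle);

$t2 = microtime(true); // timestamp after the stream is fully read

$ex_time = $t2 - $t1;
echo 'popen loop took ' . $ex_time . " seconds\n";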
So, I have a database with big data. The data to use is currently about 2.6 GB.
All the data needs to be written to text files for later use in other scripts.
The data is limited per file and split into multiple parts: 100 results per file (around 37 MB per file). That's about 71 files.
The data is JSON data that is serialized and then encrypted with OpenSSL.
The data is correctly written to the files, until the max execution time is reached after 240 seconds. That's after about 20 files...
Well, I can just extend that time, but that's not the problem.
The problem is the following:
Writing file 1-6: +/- 5 seconds
Writing file 7-8: +/- 7 seconds
Writing file 9-11: +/- 12 seconds
Writing file 12-14: +/- 17 seconds
Writing file 14-16: +/- 20 seconds
Writing file 16-18: +/- 23 seconds
Writing file 19-20: +/- 27 seconds
Note: the times are the time needed per file.
In other words, with every file I'm writing, the writing time per file goes up significantly, which of course makes the script slow.
The structure of the script is a bit like this:
$needed_files = count needed files/parts
for ($part = 1; $part <= $needed_files; $part++) { // loop through parts
    $query > mysqli select data
    $data > json_encode > serialize > openssl_encrypt
    file_put_contents($filename.$part, $data, LOCK_EX);
}
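Fleshed out a little, that structure looks roughly like the sketch below (the table name, the LIMIT/OFFSET query per part and the COUNT(*) step are simplified placeholders, not the exact code):

<?php
$limit = 100;                               // results per file/part
$needed_files = ceil($total_rows / $limit); // $total_rows from a prior COUNT(*) query

for ($part = 1; $part <= $needed_files; $part++) {
    $offset = ($part - 1) * $limit;

    // one query per part
    $result = mysqli_query($conn, "SELECT * FROM notches LIMIT $limit OFFSET $offset");
    $rows = mysqli_fetch_all($result, MYSQLI_ASSOC);
    mysqli_free_result($result);

    // encode and encrypt the chunk, then write it to its own part file
    $data = openssl_encrypt(json_encode($rows), "aes128", $pass, false, $iv);
    file_put_contents($filename . $part, $data, LOCK_EX);
}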
WORKING CODE AFTER HELP
$notchDetails = mysqli_query($conn, "SELECT * FROM notches WHERE projectid = ".$projectid."");

$rec_count = 0;
$limit = 100;
$part = 1;
$data1 = array();

while ($notch = mysqli_fetch_assoc($notchDetails)) {
    $data1[] = $notch;
    $rec_count++;

    if ($rec_count >= $limit) {
        $data = json_encode($data1);
        $data = openssl_encrypt(bin2hex($data), "aes128", $pass, false, $iv);
        $filename = $mainfolder."/".$projectfolder."/".$subfolder."/".$fname.".part".$part."".$fext;
        file_put_contents($filename, $data, LOCK_EX);

        $part++;
        $rec_count = 0;
        $data = "";
        $data1 = array();
    }
}

// write any remaining rows that did not fill a complete part
if (!empty($data1)) {
    $data = json_encode($data1);
    $data = openssl_encrypt(bin2hex($data), "aes128", $pass, false, $iv);
    $filename = $mainfolder."/".$projectfolder."/".$subfolder."/".$fname.".part".$part."".$fext;
    file_put_contents($filename, $data, LOCK_EX);
}

mysqli_free_result($notchDetails);
Personally I would have coded this as a single SELECT with no LIMIT, and then, based on a $rec_per_file = ?; setting, written the outputs from within the single while-fetch-results loop.
Excuse the cryptic code; you didn't give us much of a clue.
<?php
//ini_set('max_execution_time', 600); // only use if you have to

$filename = 'something';
$filename_suffix = 1;
$rec_per_file = 100;

$sql = "SELECT ....";

Run query

$rec_count = 0;
$data = array();

while ( $row = fetch a row ) {

    $data[] = serialize > openssl_encrypt

    $rec_count++;

    if ( $rec_count >= $rec_per_file ) {

        $json_string = json_encode($data);

        file_put_contents($filename.$filename_suffix,
                          $json_string,
                          LOCK_EX);

        $filename_suffix++; // inc the suffix
        $rec_count = 0;     // reset counter
        $data = array();    // clear data

        // add 30 seconds to the remaining max_execution_time
        // or at least a number >= to the time you expect this
        // while loop to get back to this if statement
        set_time_limit(30);
    }
}

// catch the last few rows
$json_string = json_encode($data);
file_put_contents($filename.$filename_suffix, $json_string, LOCK_EX);
Also, I am not sure why you would want to both serialize() and json_encode() the same data.
I had a thought, based on your comment about execution time. If you place a set_time_limit(seconds) call inside the if inside the while loop, it might be cleaner, and you would not have to set ini_set('max_execution_time', 600); to a very large number, which, if you have a real error in here, may cause PHP to continue processing for a long time before kicking the script out.
From the manual:
Set the number of seconds a script is allowed to run. If this is reached, the script returns a fatal error. The default limit is 30 seconds or, if it exists, the max_execution_time value defined in the php.ini.
When called, set_time_limit() restarts the timeout counter from zero. In other words, if the timeout is the default 30 seconds, and 25 seconds into script execution a call such as set_time_limit(20) is made, the script will run for a total of 45 seconds before timing out.
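As a rough sketch of that pattern (the inner hashing loop below is just a stand-in for the real query/encode/encrypt/write work done for one part):

<?php
for ($part = 1; $part <= 100; $part++) {
    // stand-in for one part's real work
    for ($i = 0; $i < 200000; $i++) {
        hash('sha256', (string) $i);
    }

    // restart the timeout counter: from this call the script gets a fresh
    // 30 seconds, no matter how long it has already been running
    set_time_limit(30);
}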
I have a website that periodically gets a large number of sleeping PHP processes. My hosting service sets a limit of 20 concurrently running processes. If it goes over the limit, my site goes down with a 503 error.
It is a rare occurrence and doesn't seem to have any correlation to the number of people visiting my site.
As a safeguard I would like to have a cron job with a PHP script that would kill PHP processes that have been sleeping for over 10 minutes.
I have a PHP function that will kill all MySQL processes that have been sleeping for more than 10 minutes:
public function kill_sleeping_mysql_processes()
{
    $result = $this->db->query("SHOW FULL PROCESSLIST");
    foreach ($result->result_array() as $row)
    {
        if ($row['Command'] == "Sleep" && $row['Time'] > 600)
        {
            $this->db->query("KILL {$row['Id']}");
        }
    }
}
The question is: how do I do the same with PHP processes?
I can get a readout of PHP processes with this code:
exec("ps aux | less", $output);
and I can kill specific PHP processes with this code if I have the PID:
$pid = 11054;
exec("kill -9 $pid");
But how can I selectively kill php processes that have been sleeping more than 10 min?
I cobbled something together. It is not elegant and is a bit of a hack, but it seems to work, although I am going to test it further before putting it in a cron job.
public function kill_dormant_php_processes()
{
    $output_array = array();
    exec("ps aux | grep -v grep", $ps_output);
    array_shift($ps_output); // drop the header row

    if (count($ps_output) > 0)
    {
        $i = 0;
        foreach ($ps_output as $ps)
        {
            $ps = preg_split('/ +/', $ps);
            $output_array[$i] = new stdClass();
            $output_array[$i]->pid  = $ps[1]; // PID column
            $output_array[$i]->stat = $ps[7]; // STAT column
            $output_array[$i]->time = $ps[9]; // TIME column
            $i++;
        }
    }

    if ( ! empty($output_array))
    {
        foreach ($output_array as $row)
        {
            if ($row->stat == 'S' && date('H:i', strtotime($row->time)) > date('H:i', strtotime('00:01')))
            {
                exec("kill -9 $row->pid");
            }
        }
    }
}
I am sure there must be a better way to do it.
Could someone explain why 00:01 in the read out seems to translate to 6 min?
freedom 6933 6.0 0.1 57040 13040 ? S 16:55 0:01 /usr/local/bin/php53.cgi -c .:/home/freedom/:/etc index.php
As an alternative to the PHP script shared here, you can use the killall command with an "older than" time filter (using the -o option) to kill all those processes.
This command for example will kill all php-cgi processes that have been running for more than 30 minutes:
killall -o 30m /usr/bin/php-cgi
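Since the goal was a cron job with a PHP script, a minimal wrapper could be as simple as the sketch below (the 10m threshold and the /usr/bin/php-cgi path are assumptions; adjust them to your host's setup):

<?php
// kill_old_php.php -- run from cron, e.g. every 5 minutes.
// Kills php-cgi processes that have been running for more than 10 minutes.
exec('killall -o 10m /usr/bin/php-cgi 2>&1', $output, $status);

// a non-zero status usually just means no process was old enough to kill;
// log the output anyway so unexpected errors are visible
if ($status !== 0) {
    error_log('kill_old_php: ' . implode(' ', $output));
}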
I need to log in to a production server, retrieve a file, and update my database with the data in this file. Since this is a production server, I don't want to fetch the whole file every 5 minutes, because the file may be huge and this may impact the server. I need to get the last 30 lines of this file at 5 minute intervals with as little impact as possible.
The following is my current code; I would appreciate any insight into how best to accomplish this:
<?php
$user = "id";
$pass = "passed";

$c = curl_init("sftp://$user:$pass@server1.example.net/opt/vmstat_server1");
curl_setopt($c, CURLOPT_PROTOCOLS, CURLPROTO_SFTP);
curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
$data = curl_exec($c);
curl_close($c);

$data = explode("\n", $data);
?>
Marc B is wrong. SFTP is perfectly capable of partial file transfers. Here's an example of how to do what you want with phpseclib, a pure PHP SFTP implementation:
<?php
include('Net/SFTP.php');
$sftp = new Net_SFTP('www.domain.tld');
if (!$sftp->login('username', 'password')) {
    exit('Login Failed');
}
$size = $sftp->size('filename.remote');
// outputs the last ten bytes of filename.remote
echo $sftp->get('filename.remote', false, $size - 10);
?>
In fact, I'd recommend an approach like this anyway, since some SFTP servers don't let you run commands via the system shell. Plus, SFTP works against Windows SFTP servers, whereas tail is unlikely to be available there even if you do have shell access. Overall, it's a much more portable solution.
If you want to get the last x lines of a file, you could loop repeatedly, reading a chunk of bytes each time, until you have encountered x newline characters. I.e. get the last 10 bytes, then the 10 bytes before those, then the 10 bytes before those, and so on.
An answer by @Sammitch to a duplicate question, Get last 15 lines from a large file in SFTP with phpseclib:
The following should result in a blob of text with at least 15 lines from the end of the file that you can then process further with your existing logic. You may want to tweak some of the logic depending on if your file ends with a trailing newline, etc.
$filename = './file.txt';
$filesize = $sftp->size($filename);
$buffer_size = 4096;
$offset = $filesize; // start at the end
$result = '';
$lines = 0;

while ($offset > 0 && $lines < 15) {
    // work backwards, one chunk at a time
    if ($offset < $buffer_size) {
        $chunk_size = $offset; // last (partial) chunk at the start of the file
        $offset = 0;
    } else {
        $chunk_size = $buffer_size;
        $offset -= $buffer_size;
    }

    $buffer = $sftp->get($filename, false, $offset, $chunk_size);

    // count the number of newlines as we go
    $lines += substr_count($buffer, "\n");
    $result = $buffer . $result;
}
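To then reduce $result to exactly the last 15 lines (as noted above, trailing-newline handling may need a tweak for your file), something like this should do:

// keep only the last 15 lines of the accumulated text
$last15 = array_slice(explode("\n", rtrim($result, "\n")), -15);
foreach ($last15 as $line) {
    // ... your existing per-line processing ...
    echo $line, "\n";
}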
SFTP is not capable of partial file transfers. You might have better luck using a full-blown SSH connection and a remote 'tail' operation to get the last lines of the file, e.g.
$lines = shell_exec("ssh user@remote.host 'tail -30 the_file'");
Of course, you might want something a little more robust that can handle things like network glitches that prevent ssh from getting through, but as a basic starting point, this should do the trick.
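For instance, a slightly more defensive sketch might check that the command actually returned something before using it (the ConnectTimeout option, the retry count and the pause are arbitrary choices, not part of the original suggestion):

<?php
$lines = array();

// try a few times in case of a transient network problem
for ($attempt = 1; $attempt <= 3; $attempt++) {
    $raw = shell_exec("ssh -o ConnectTimeout=10 user@remote.host 'tail -30 the_file' 2>/dev/null");
    if (is_string($raw) && trim($raw) !== '') {
        $lines = explode("\n", trim($raw));
        break;
    }
    sleep(5); // brief pause before retrying
}

if (empty($lines)) {
    error_log('Could not fetch the last 30 lines from remote.host');
}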
I am developing a new section on my site and I've noticed a small latency when logging in. On my computer it works great, but when I put it on the server it is slower. The login process is slower on the server, not on my computer.
It is half a second to 1 second slower.
I suspect my hosting is not as fast as they claim, since it's fast on my computer.
Is there a way I can monitor the speed of the server, for example a command line tool or PHP script I can run to find out what's wrong?
Put these three lines of code in various places in your script (replacing "foo" with a description of where you place it in the code):
$h = fopen('log.txt', 'a');
fwrite($h, 'foo: ' . microtime(true) . "\n");
fclose($h);
Then, run your script, and you can see which part is slow.
At the top of the script, put
<?php
function microtime_float()
{
list($usec, $sec) = explode(" ", microtime());
return ((float)$usec + (float)$sec);
}
$start_time = microtime_float();
and at the end
$exec_time = microtime_float() - $start_time;
echo 'Page loaded in: ' . $exec_time . ' seconds';
?>
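On PHP 5 and later you can skip the helper entirely, since microtime(true) already returns a float:

<?php
$start_time = microtime(true);

// ... the rest of the page ...

echo 'Page loaded in: ' . (microtime(true) - $start_time) . ' seconds';
?>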
Compare your local copy with the remote copy.