PHP WebSocket Handshake

I'm creating a secure WebSocket server with stream_socket_server, and I'd like an accept function that sequentially takes incoming connections and performs the handshake as a blocking operation. With Chrome the client connects and immediately disconnects, and with Firefox I get an error about fread.
The snippet is:
$write = null;
$except = null;
$sockets = $this->socket;
@stream_select( $sockets, $write, $except, 0 );
foreach ( $sockets as $socket ) {
    $resource = @stream_socket_accept( $socket );
    if ( !$resource ) {
        return false;
    }
    $accepts = array( $resource );
    @stream_select( $accepts, $write, $except, 0, 1000 );
    foreach ( $accepts as $accepted ) {
        $buffer = '';
        $bytes_to_read = 8192;
        while ( $chunk = fread( $accepted, $bytes_to_read ) ) {
            $buffer .= $chunk;
            $status = stream_get_meta_data( $accepted );
            $bytes_to_read = $status["unread_bytes"];
            if ( strlen( $buffer ) === 1 ) {
                $bytes_to_read = 8192;
            }
        }
        $response = HandShake::perform( $buffer );
        $responseLength = strlen( $response );
        for ( $written = 0; $written < $responseLength; $written += $fwrite ) {
            $fwrite = fwrite( $accepted, substr( $response, $written ) );
            if ( ( $fwrite === false ) || ( $fwrite === 0 ) ) {
                stream_socket_shutdown( $accepted, STREAM_SHUT_RDWR );
                return false;
            }
        }
    }
}
Can you tell me why it doesn't work and how to fix my snippet?
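For what it's worth, the usual suspect in a loop like this is relying on fread() returning an empty string to mark the end of the request: the browser keeps the socket open after sending its handshake, so the loop either blocks in fread() or exits early depending on timing, which would explain the different behaviour in Chrome and Firefox. The handshake request is a plain HTTP request terminated by a blank line, so one option is to read only until "\r\n\r\n". A minimal, untested sketch of that read step, reusing $accepted and HandShake::perform from the snippet above:

// Read the HTTP handshake request; the headers end at the first blank line.
$buffer = '';
while ( strpos( $buffer, "\r\n\r\n" ) === false ) {
    $chunk = fread( $accepted, 8192 );
    if ( $chunk === false || $chunk === '' ) {
        // client went away before completing the handshake
        stream_socket_shutdown( $accepted, STREAM_SHUT_RDWR );
        return false;
    }
    $buffer .= $chunk;
}
$response = HandShake::perform( $buffer );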


Script that denies access to an IP after they have filled a form on my site

I have an existing PHP script that checks whether an IP has been written to a .dat file, and redirects the user (to Google, in this case) if the IP is in the file. If not, it lets them continue signing up to my site.
The code below writes the IP to the blacklist when the one-time-access control is switched on. I placed it in index.php:
if ( $One_time_access == 1 )
{
    $fp = fopen( "blacklist.dat", "a" );
    fputs( $fp, "\r\n$ip\r\n" );
    fclose( $fp );
}
Then the code below does all the work of figuring out the IP, what to ban, and so on:
<?php
class IpList {
private $iplist = array();
private $ipfile = "";
private $range = "";
public function __construct( $list ) {
$contents = array();
$this->ipfile = $list;
$lines = $this->read( $list );
foreach( $lines as $line ) {
$line = trim( $line );
# remove comment and blank lines
if ( empty($line ) ) {
continue;
}
if ( $line[0] == '#' ) {
continue;
}
# remove inline comments
$temp = explode( "#", $line );
$line = trim( $temp[0] );
# create content array
$contents[] = $this->normal($line);
}
$this->iplist = $contents;
}
public function __destruct() {
}
public function __toString() {
return implode(' ',$this->iplist);
}
public function is_inlist( $ip ) {
$retval = false;
foreach( $this->iplist as $ipf ) {
$retval = $this->ip_in_range( $ip, $ipf );
if ($retval === true ) {
$this->range = $ipf;
break;
}
}
return $retval;
}
/*
** public function that returns the ip list array
*/
public function iplist() {
return $this->iplist;
}
/*
** public function that returns the range entry that matched
*/
public function message() {
return $this->range;
}
public function append( $ip, $comment ) {
return file_put_contents( $this->ipfile, $ip.( $comment ? " # ".$comment : "" ).PHP_EOL, FILE_APPEND );
}
public function listname() {
return $this->ipfile;
}
/*
** private function that reads the file into array
*/
private function read( $fname ) {
try {
$file = file( $fname, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES );
}
catch( Exception $e ) {
throw new Exception( $fname.': '.$e->getMessage()."\n" );
}
return $file;
}
private function ip_in_range( $ip, $range ) {
// return ip_in_range( $ip, $range );
if ( strpos($range, '/') !== false ) {
// IP/NETMASK format
list( $range, $netmask ) = explode( '/', $range, 2 );
if ( strpos( $netmask, '.' ) !== false ) {
// 255.255.255.0 format w/ wildcards
$netmask = str_replace('*', '0', $netmask );
$dnetmask = ip2long( $netmask );
return ((ip2long( $ip ) & $dnetmask) == (ip2long($range) & $dnetmask ));
}
else {
// IP/CIDR format
// ensure $range is in 0.0.0.0 format
$r = explode( '.', $range );
while( count( $r ) < 4 ) {
$r[] = '0';
}
for($i = 0; $i < 4; $i++) {
$r[$i] = empty($r[$i]) ? '0': $r[$i];
}
$range = implode( '.', $r );
// build netmask
$dnetmask = ~(pow( 2, ( 32 - $netmask)) - 1);
return ((ip2long($ip) & $dnetmask)==(ip2long($range) & $dnetmask));
}
}
else {
if ( strpos( $range, '*' ) !== false ) {
// 255.255.*.* format
$low = str_replace( '*', '0', $range );
$high = str_replace( '*', '255', $range );
$range = $low.'-'.$high;
}
if ( strpos( $range, '-') !== false ) {
// 128.255.255.0-128.255.255.255 format
list( $low, $high ) = explode( '-', $range, 2 );
$dlow = $this->toLongs( $low );
$dhigh = $this->toLongs( $high );
$dip = $this->toLongs( $ip );
return (($this->compare($dip,$dlow) != -1) && ($this->compare($dip,$dhigh) != 1));
}
}
return ( $ip == $range );
}
private function normal( $range ) {
if ( strpbrk( $range, "*-/" ) === false ) {
$range .= "/32";
}
return str_replace( ' ', '', $range );
}
private function toLongs( $ip ) {
# split the dotted quad in half; each half becomes a comparable number
# (octets are decimal, so convert from base 10, not base 16)
$Parts = explode( '.', $ip );
$Ip = array( '', '' );
for ($i = 0; $i < 2; $i++) {
$Ip[0] .= str_pad(base_convert($Parts[$i], 10, 2), 16, '0', STR_PAD_LEFT);
}
for ($i = 2; $i < 4; $i++) {
$Ip[1] .= str_pad(base_convert($Parts[$i], 10, 2), 16, '0', STR_PAD_LEFT);
}
return array(base_convert($Ip[0], 2, 10), base_convert($Ip[1], 2, 10));
}
private function compare( $ipdec1, $ipdec2 ) {
if( $ipdec1[0] < $ipdec2[0] ) {
return -1;
}
elseif ( $ipdec1[0] > $ipdec2[0] ) {
return 1;
}
elseif ( $ipdec1[1] < $ipdec2[1] ) {
return -1;
}
elseif ( $ipdec1[1] > $ipdec2[1] ) {
return 1;
}
return 0;
}
}
class IpBlockList {
private $statusid = array( 'negative' => -1, 'neutral' => 0, 'positive' => 1 );
private $whitelist = array();
private $blacklist = array();
private $message = NULL;
private $status = NULL;
private $whitelistfile = NULL;
private $blacklistfile = NULL;
public function __construct(
$whitelistfile = 'assets/includes/whitelist.dat',
$blacklistfile = 'assets/includes/blacklist.dat' ) {
$this->whitelistfile = $whitelistfile;
$this->blacklistfile = $blacklistfile;
$this->whitelist = new IpList( $whitelistfile );
$this->blacklist = new IpList( $blacklistfile );
}
public function __destruct() {
}
public function ipPass( $ip ) {
$retval = False;
if ( !filter_var( $ip, FILTER_VALIDATE_IP, FILTER_FLAG_IPV4 ) ) {
throw new Exception( 'Requires valid IPv4 address' );
}
if ( $this->whitelist->is_inlist( $ip ) ) {
// Ip is white listed, so it passes
$retval = True;
$this->message = $ip . " is whitelisted by ".$this->whitelist->message().".";
$this->status = $this->statusid['positive'];
}
else if ( $this->blacklist->is_inlist( $ip ) ) {
$retval = False;
$this->message = $ip . " is blacklisted by ".$this->blacklist->message().".";
$this->status = $this->statusid['negative'];
}
else {
$retval = True;
$this->message = $ip . " is unlisted.";
$this->status = $this->statusid['neutral'];
}
return $retval;
}
public function message() {
return $this->message;
}
public function status() {
return $this->status;
}
public function append( $type, $ip, $comment = "" ) {
if ($type == 'WHITELIST' ) {
$retval = $this->whitelist->append( $ip, $comment );
}
elseif( $type == 'BLACKLIST' ) {
$retval = $this->blacklist->append( $ip, $comment );
}
else {
$retval = false;
}
return $retval;
}
public function filename( $type ) {
if ($type == 'WHITELIST' ) {
$retval = $this->whitelist->listname();
}
elseif( $type == 'BLACKLIST' ) {
$retval = $this->blacklist->listname();
}
else {
$retval = false;
}
return $retval;
}
}
$ips = array( $_SERVER['REMOTE_ADDR'],
);
$checklist = new IpBlockList( );
foreach ($ips as $ip ) {
$result = $checklist->ipPass( $ip );
if ( $result ) {
// Continue with page
}
else {
header("Location: https://www.google.co.uk/");
die();
}
}
?>
I want to know:
1. Can I write the IPs to a text file instead of a .dat?
2. Is there an easier way to do this, or a shorter script?
All help will be appreciated :)
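On question 1: the .dat extension is just a name; file() and file_put_contents() read and write it the same way they would a .txt, so you can rename it freely. On question 2: if you only need exact-match bans (none of the CIDR/wildcard/range matching the class above supports), a much shorter sketch would do, assuming one IP per line in a blacklist.txt:

$ip = $_SERVER['REMOTE_ADDR'];
// load the blacklist, one IP per line
$blacklist = array_map( 'trim', file( 'blacklist.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES ) );
if ( in_array( $ip, $blacklist, true ) ) {
    header( 'Location: https://www.google.co.uk/' );
    die();
}
// after the form is submitted, ban the IP:
file_put_contents( 'blacklist.txt', $ip . PHP_EOL, FILE_APPEND );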

Convert DBF to CSV

I have a number of DBF database files that I would like to convert to CSVs. Is there a way to do this in Linux, or in PHP?
I've found a few methods to convert DBFs, but they are very slow.
Try soffice (LibreOffice):
$ soffice --headless --convert-to csv FILETOCONVERT.DBF
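If you need to script that conversion over many files from PHP, a small sketch (assuming soffice is on the PATH; --outdir sets where the CSVs land):

// Batch-convert DBF files by shelling out to LibreOffice.
foreach ( glob( '/path/to/*.DBF' ) as $file ) {
    shell_exec( 'soffice --headless --convert-to csv --outdir '
        . escapeshellarg( dirname( $file ) ) . ' ' . escapeshellarg( $file ) );
}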
Alternatively, with the dbase extension: change the $files variable to the path to your DBF files, and make sure the file extension matches the case of your files.
set_time_limit( 24192000 );
ini_set( 'memory_limit', '-1' );
$files = glob( '/path/to/*.DBF' );
foreach ( $files as $file )
{
    echo "Processing: $file\n";
    $fileParts = explode( '/', $file );
    $endPart = $fileParts[key( array_slice( $fileParts, -1, 1, true ) )];
    $csvFile = preg_replace( '~\.[a-z]+$~i', '.csv', $endPart );
    if ( !$dbf = dbase_open( $file, 0 ) ) die( "Could not connect to: $file" );
    $num_rec = dbase_numrecords( $dbf );
    $num_fields = dbase_numfields( $dbf );
    $fields = array();
    $out = '';
    for ( $i = 1; $i <= $num_rec; $i++ )
    {
        $row = @dbase_get_record_with_names( $dbf, $i );
        $firstKey = key( array_slice( $row, 0, 1, true ) );
        foreach ( $row as $key => $val )
        {
            if ( $key == 'deleted' ) continue;
            if ( $firstKey != $key ) $out .= ';';
            $out .= trim( $val );
        }
        $out .= "\n";
    }
    file_put_contents( $csvFile, $out );
}
Using @Kohjah's code, here's an update using a (IMHO) better fputcsv approach:
// needs the dbase PHP extension (http://php.net/manual/en/book.dbase.php)
function dbfToCsv( $file )
{
    $output_path = 'output' . DIRECTORY_SEPARATOR . 'path';
    $path_parts = pathinfo( $file );
    $csvFile = $path_parts['filename'] . '.csv';
    $output_path_file = $output_path . DIRECTORY_SEPARATOR . $csvFile;
    if ( !$dbf = dbase_open( $file, 0 ) ) {
        return false;
    }
    $num_rec = dbase_numrecords( $dbf );
    $fp = fopen( $output_path_file, 'w' );
    for ( $i = 1; $i <= $num_rec; $i++ ) {
        $row = dbase_get_record_with_names( $dbf, $i );
        if ( $i == 1 ) {
            // print header
            fputcsv( $fp, array_keys( $row ) );
        }
        fputcsv( $fp, $row );
    }
    fclose( $fp );
}
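Usage would then be something like this (the path is a placeholder, and the output directory must already exist):

foreach ( glob( '/path/to/*.DBF' ) as $file ) {
    dbfToCsv( $file );
}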

index.php returning a blank web page on openshift host

I have a website I am trying to maintain for a project:
http://uomtwittersearch-jbon0041.rhcloud.com/
The user connects to Twitter through the application and authenticates by using the twitteroauth library (by abraham). The process works fine up until it lands on index.php (calling index.inc as the respective HTML page) where it gives me a blank page. On localhost it works perfectly fine so I am not sure what could be causing this. Other pages such as connect.php initialize as required.
Visiting the website as it is will give an error; I assume that is because it cannot find index.php directly, since it lies in the folder twitteroauth-master. I will fix this once I manage to at least make the contents of index.php appear, but for now I am visiting:
http://uomtwittersearch-jbon0041.rhcloud.com/twitteroauth-master/connect.php
first (this also goes for anyone who would like to visit it). If you have Twitter, log on with your details; this will take you to index.php, which will be blank. Alternatively, one can simply replace 'connect' with 'index' in the URL.
What could be causing the blank page for index.php?
This is only my first ever web development project so I am not sure if this is something obvious. Moreover, I am using OpenShift for hosting.
EDIT --------------------
This is my index.php script. Again, the script works without any problems on localhost.
<?php
//session_save_path(home/users/web/b2940/ipg.uomtwittersearchnet/cgi-bin/tmp);
ini_set('display_errors',1);
error_reporting(E_ALL);
session_start ();
require_once ('twitteroauth/twitteroauth.php');
require_once ('config.php');
include ('nlp/stop_words.php');
include ('nlp/acronyms.php');
set_time_limit ( 300 );
//////////////////////// TWITTEROAUTH /////////////////////////////////////
/* If access tokens are not available redirect to connect page. */
if (empty ( $_SESSION ['access_token'] ) || empty ( $_SESSION ['access_token'] ['oauth_token'] ) || empty ( $_SESSION ['access_token'] ['oauth_token_secret'] )) {
header ( 'Location: ./clearsessions.php' );
}
/* Get user access tokens out of the session. */
$access_token = $_SESSION ['access_token'];
/* Create a TwitterOauth object with consumer/user tokens. */
$connection = new TwitterOAuth ( CONSUMER_KEY, CONSUMER_SECRET, $access_token ['oauth_token'], $access_token ['oauth_token_secret'] );
///////////////////////////////////////////////////////////////////////////
///// UNCOMMENT BELOW TO AUTOMATICALLY SPECIFY CURRENTLY LOGGED IN USER
//$user = $connection->get('account/verify_credentials');
//$user_handle = $user->screen_name;
$user_handle = 'AngeloDalli';
$timeline = getContent ( $connection, $user_handle, 1 );
$latest_id = $timeline [0]->id_str;
$most_recent = getMostRecentTweet ();
if ($latest_id > $most_recent) {
$t_start = microtime(true); // start indexing
$timeline = getContent ( $connection, $user_handle, 200 );
$json_index = decodeIndex ();
$json_index = updateIndex ( $timeline, $connection, $user_handle, $json_index, $most_recent );
$json_index = sortIndex ( $json_index );
$json = encodeIndex ( $json_index );
updateMostRecentTweet ( $latest_id );
$_SESSION ['index_size'] = countIndex ( $json_index );
$t_end = microtime(true); // finish indexing
$content = 'New tweets indexed! Number of tweets in index: ' . $_SESSION ['index_size'];
// total indexing time
$time = 'Total time of indexing: ' . ($t_end - $t_start) . ' seconds';
} else {
$content = 'No new tweets indexed!';
$time = '';
}
/////////////////////// FUNCTIONS //////////////////////////////////////////////
function getContent($connection, $user_handle, $n) {
$content = $connection->get ( 'statuses/user_timeline', array (
'screen_name' => $user_handle,
'count' => $n
) );
return $content;
}
function decodeIndex() {
$string = file_get_contents ( INDEX_PATH );
if ($string) {
$json_index = json_decode ( $string, true );
} else {
$json_index = [ ];
}
return $json_index;
}
function updateIndex($timeline, $connection, $user_handle, $json_index, $most_recent) {
// URL arrays for uClassify API calls
$urls = [ ];
$urls_id = [ ];
// halt if no more new tweets are found
$halt = false;
// set to 1 to skip first tweet after 1st batch
$j = 0;
// count number of new tweets indexed
$count = 0;
while ( (count ( $timeline ) != 1 || $j == 0) && $halt == false ) {
$no_of_tweets_in_batch = 0;
$n = $j;
while ( ($n < count ( $timeline )) && $halt == false ) {
$tweet_id = $timeline [$n]->id_str;
if ($tweet_id > $most_recent) {
$text = $timeline [$n]->text;
$tokens = parseTweet ( $text );
$coord = extractLocation ( $timeline, $n );
addSentimentURL ( $text, $tweet_id, $urls, $urls_id );
$keywords = makeEntry ( $tokens, $tweet_id, $coord, $text );
foreach ( $keywords as $type ) {
$json_index [] = $type;
}
$n ++;
$no_of_tweets_in_batch ++;
} else {
$halt = true;
}
}
if ($halt == false) {
$tweet_id = $timeline [$n - 1]->id_str;
$timeline = $connection->get ( 'statuses/user_timeline', array (
'screen_name' => $user_handle,
'count' => 200,
'max_id' => $tweet_id
) );
// skip 1st tweet after 1st batch
$j = 1;
}
$count += $no_of_tweets_in_batch;
}
$json_index = extractSentiments ( $urls, $urls_id, $json_index );
echo 'Number of tweets indexed: ' . ($count);
return $json_index;
}
function parseTweet($tweet) {
// find urls in tweet and remove (HTTP ONLY CURRENTLY)
$tweet = preg_replace ( '/(http:\/\/[^\s]+)/', "", $tweet );
// split tweet into tokens and clean
$words = preg_split ( "/[^A-Za-z0-9]+/", $tweet );
// /[\s,:.##?!()-$%&^*;+=]+/ ------ Alternative regex
$expansion = expandAcronyms ( $words );
$tokens = removeStopWords ( $expansion );
// convert to type-frequency array
$tokens = array_filter ( $tokens );
$tokens = array_count_values ( $tokens );
return $tokens;
}
function expandAcronyms($terms) {
$words = [ ];
$acrok = array_keys ( $GLOBALS ['acronyms'] );
$acrov = array_values ( $GLOBALS ['acronyms'] );
for($i = 0; $i < count ( $terms ); $i ++) {
$j = 0;
$is_acronym = false;
while ( $is_acronym == false && $j != count ( $acrok ) ) {
if (strcasecmp ( $terms [$i], $acrok [$j] ) == 0) {
$is_acronym = true;
$expansion = $acrov [$j];
}
$j ++;
}
if ($is_acronym) {
$expansion = preg_split ( "/[^A-Za-z0-9]+/", $expansion );
foreach ( $expansion as $term ) {
$words [] = $term;
}
} else {
$words [] = $terms [$i];
}
}
return $words;
}
function removeStopWords($words) {
$tokens = [ ];
for($i = 0; $i < count ( $words ); $i ++) {
$is_stopword = false;
$j = 0;
while ( $is_stopword == false && $j != count ( $GLOBALS ['stop_words'] ) ) {
if (strcasecmp ( $words [$i], $GLOBALS ['stop_words'] [$j] ) == 0) {
$is_stopword = true;
} else
$j ++;
}
if (! $is_stopword) {
$tokens [] = $words [$i];
}
}
return $tokens;
}
function extractLocation($timeline, $n) {
$geo = $timeline [$n]->place;
if (! empty ( $geo )) {
$place = $geo->full_name;
$long = $geo->bounding_box->coordinates [0] [1] [0];
$lat = $geo->bounding_box->coordinates [0] [1] [1];
$coord = array (
'place' => $place,
'latitude' => $lat,
'longitude' => $long
);
} else {
$coord = [ ];
}
return $coord;
}
function addSentimentURL($text, $tweet_id, &$urls, &$urls_id) {
$urls_id [] = $tweet_id;
$url = makeURLForAPICall ( $text );
$urls [] = $url;
}
function makeURLForAPICall($tweet) {
$tweet = str_replace ( ' ', '+', $tweet );
$prefix = 'http://uclassify.com/browse/uClassify/Sentiment/ClassifyText?';
$key = 'readkey=' . CLASSIFY_KEY . '&';
$text = 'text=' . $tweet . '&';
$version = 'version=1.01';
$url = $prefix . $key . $text . $version;
return $url;
}
function makeEntry($tokens, $tweet_id, $coord, $text) {
$types = array ();
while ( current ( $tokens ) ) {
$key = key ( $tokens );
array_push ( $types, array (
'type' => $key,
'frequency' => $tokens [$key],
'tweet_id' => $tweet_id,
'location' => $coord,
'text' => $text
) );
next ( $tokens );
}
return $types;
}
function extractSentiments($urls, $urls_id, &$json_index) {
$responses = multiHandle ( $urls );
// add sentiments to all index entries
foreach ( $json_index as $i => $term ) {
$tweet_id = $term ['tweet_id'];
foreach ( $urls_id as $j => $id ) {
if ($tweet_id == $id) {
$sentiment = parseSentiment ( $responses [$j] );
$json_index [$i] ['sentiment'] = $sentiment;
}
}
}
return $json_index;
}
// - Without sentiment, indexing is performed at reasonable speed
// - With sentiment, very frequent API calls greatly reduce indexing speed
// - file_get_contents() for Sentiment API calls is too slow, therefore considered cURL
// - cURL is still too slow and indexing performance is still not good enough
// - therefore considered using multi cURL which is much faster than by just using cURL
// on its own and significantly improved sentiment extraction which in turn greatly
// improved indexing with sentiment
function multiHandle($urls) {
// curl handles
$curls = array ();
// results returned in xml
$xml = array ();
// init multi handle
$mh = curl_multi_init ();
foreach ( $urls as $i => $d ) {
// init curl handle
$curls [$i] = curl_init ();
$url = (is_array ( $d ) && ! empty ( $d ['url'] )) ? $d ['url'] : $d;
// set url to curl handle
curl_setopt ( $curls [$i], CURLOPT_URL, $url );
// on success, return actual result rather than true
curl_setopt ( $curls [$i], CURLOPT_RETURNTRANSFER, 1 );
// add curl handle to multi handle
curl_multi_add_handle ( $mh, $curls [$i] );
}
// execute the handles
$active = null;
do {
curl_multi_exec ( $mh, $active );
} while ( $active > 0 );
// get xml and flush handles
foreach ( $curls as $i => $ch ) {
$xml [$i] = curl_multi_getcontent ( $ch );
curl_multi_remove_handle ( $mh, $ch );
}
// close multi handle
curl_multi_close ( $mh );
return $xml;
}
// SENTIMENT VALUES ON INDEX.JSON FOR THIS ASSIGNMENT ARE NOT CORRECT SINCE THE
// NUMBER OF API CALLS EXCEEDED 5000 ON THE DAY OF HANDING IN. ONCE THE API CALLS
// ARE ALLOWED AGAIN IT CLASSIFIES AS REQUIRED
function parseSentiment($xml) {
$p = xml_parser_create ();
xml_parse_into_struct ( $p, $xml, $vals, $index );
xml_parser_free ( $p );
$positivity = $vals [8] ['attributes'] ['P'];
$negativity = 1 - $positivity;
$sentiment = array (
'pos' => $positivity,
'neg' => $negativity
);
return $sentiment;
}
function sortIndex($json_index) {
$type = array ();
$freq = array ();
$id = array ();
foreach ( $json_index as $key => $row ) {
$type [$key] = $row ['type'];
$freq [$key] = $row ['frequency'];
$id [$key] = $row ['tweet_id'];
}
array_multisort ( $type, SORT_ASC | SORT_NATURAL | SORT_FLAG_CASE,
$freq, SORT_DESC,
$id, SORT_ASC,
$json_index );
return $json_index;
}
function encodeIndex($json_index) {
$json = json_encode ( $json_index, JSON_FORCE_OBJECT | JSON_PRETTY_PRINT );
$index = fopen ( INDEX_PATH, 'w' );
fwrite ( $index, $json );
fclose ( $index );
return $json;
}
function countIndex($json_index) {
$tweets = [ ];
$count = 0;
for($i = 0; $i < count ( $json_index ); $i ++) {
$id = $json_index [$i] ['tweet_id'];
if (in_array ( $id, $tweets )) {
} else {
$tweets [] = $id;
$count ++;
}
}
return $count;
}
function lookup($array, $key, $val) {
foreach ( $array as $item ) {
if (isset ( $item [$key] ) && $item [$key] == $val) {
return true;
}
}
// only report failure after checking every item
return false;
}
function getMostRecentTweet() {
$file = fopen ( 'latest.txt', 'r' );
$most_recent = fgets ( $file );
if (! $most_recent) {
$most_recent = 0;
}
fclose ( $file );
return $most_recent;
}
function updateMostRecentTweet($latest_id) {
$file = fopen ( 'latest.txt', 'w' );
fwrite ( $file, $latest_id . PHP_EOL );
fclose ( $file );
}
include ('index.inc');
?>
I have fixed the problem. When creating my application on OpenShift using the application wizard, I was specifying PHP 5.3 as the cartridge instead of PHP 5.4. Note the way I'm specifying certain empty arrays above: the short [ ] syntax only exists as of PHP 5.4, so under the 5.3 cartridge the script failed to parse and the page came up blank.
The true lesson to take from this is: always be sure about the version of the language you're developing with.
Thank you for any help given, and I hope this may be of use to someone else in the future!
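For what it's worth, a cheap guard against this class of problem is to fail loudly when the deployed runtime is older than the code requires. A minimal sketch (the 5.4.0 floor matches the short array syntax used above; it has to live in a bootstrap file that itself sticks to old syntax, since a parse error fires before any runtime check in the same file):

// Fail fast if the deployed PHP is older than the code requires.
if ( version_compare( PHP_VERSION, '5.4.0', '<' ) ) {
    die( 'This application requires PHP >= 5.4; found ' . PHP_VERSION );
}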

Break query data into several files

I want to get a table from the database and then break the output data into files (50 entries in each file): list01.txt, list02.txt... But somehow I got stuck on the question of how to break the data up more effectively.
if ( $result = $mysqli->query($query) ) {
    icount = 0;
    while ( $row = mysqli_fetch_array($result) ) {
        if ( icount % 50 == 0 ) {
            $snum = int( icount / 50 );
            $filename = 'scripts/spisok'.$snum.'.txt';
            $handle = fopen( $filename, 'w' );
        }
        fwrite( $filename, $row['uname'].';'.$row['email'].'<br />' );
        icount++;
    }
    echo 'ok';
    $result->free();
}
Can I just break $result into 50-entry arrays first and then write them all? Sorry, I'm a novice at PHP.
You can also use array_chunk:
<?php
if ( $result = $mysqli->query( $query ) ) {
    $data = array();
    while ( $row = mysqli_fetch_array( $result ) ) {
        $data[] = $row['uname'].';'.$row['email'];
    }
    $result->free();
    // divide the array into split lists of the desired size
    $chunks = array_chunk( $data, 50 );
    // loop through chunks
    foreach ( $chunks as $index => $chunk ) {
        $file = 'scripts/spisok'.( $index + 1 ).'.txt';
        $chunk = implode( "<br />", $chunk );
        file_put_contents( $file, $chunk );
        // or
        /*
        $handle = fopen( $file, 'w' );
        fwrite( $handle, $chunk );
        fclose( $handle );
        */
    }
    unset( $data, $chunks );
}
?>
There is nothing particularly wrong with what you are doing except for some syntax errors here and there.
Try this:
if ( $result = $mysqli->query($query) ) {
    $icount = 0;
    $handle = NULL;
    while ( $row = mysqli_fetch_array($result) ) {
        if ( $icount % 50 == 0 ) {
            if ( $handle !== NULL ) {
                fclose($handle);
            }
            $snum = (int)( $icount / 50 );
            $filename = 'scripts/spisok'.$snum.'.txt';
            $handle = fopen( $filename, 'w' );
        }
        fwrite( $handle, $row['uname'].';'.$row['email'].'<br />' );
        $icount++;
    }
    if ( $handle !== NULL ) {
        fclose($handle);
    }
    echo 'ok';
    $result->free();
}

PHP Infinite Loop Problem

function httpGet( $url, $followRedirects=true ) {
    global $final_url;
    $url_parsed = parse_url($url);
    if ( empty($url_parsed['scheme']) ) {
        $url_parsed = parse_url('http://'.$url);
    }
    $final_url = $url_parsed;
    $port = $url_parsed["port"];
    if ( !$port ) {
        $port = 80;
    }
    $rtn['url']['port'] = $port;
    $path = $url_parsed["path"];
    if ( empty($path) ) {
        $path = "/";
    }
    if ( !empty($url_parsed["query"]) ) {
        $path .= "?".$url_parsed["query"];
    }
    $rtn['url']['path'] = $path;
    $host = $url_parsed["host"];
    $foundBody = false;
    $out = "GET $path HTTP/1.0\r\n";
    $out .= "Host: $host\r\n";
    $out .= "User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1) Gecko/20061010 Firefox/2.0\r\n";
    $out .= "Connection: Close\r\n\r\n";
    if ( !$fp = @fsockopen($host, $port, $errno, $errstr, 30) ) {
        $rtn['errornumber'] = $errno;
        $rtn['errorstring'] = $errstr;
    }
    fwrite($fp, $out);
    while (!@feof($fp)) {
        $s = @fgets($fp, 128);
        if ( $s == "\r\n" ) {
            $foundBody = true;
            continue;
        }
        if ( $foundBody ) {
            $body .= $s;
        } else {
            if ( ($followRedirects) && (stristr($s, "location:") != false) ) {
                $redirect = preg_replace("/location:/i", "", $s);
                return httpGet( trim($redirect) );
            }
            $header .= $s;
        }
    }
    fclose($fp);
    return(trim($body));
}
This code sometimes goes into an infinite loop. What's wrong here?
There is a big, red warning box in the feof() documentation:
Warning
If a connection opened by fsockopen() wasn't closed by the server, feof() will hang. To workaround this, see below example:
Example #1 Handling timeouts with feof()
<?php
function safe_feof($fp, &$start = NULL) {
    $start = microtime(true);
    return feof($fp);
}
/* Assuming $fp is previously opened by fsockopen() */
$start = NULL;
$timeout = ini_get('default_socket_timeout');
while (!safe_feof($fp, $start) && (microtime(true) - $start) < $timeout)
{
    /* Handle */
}
?>
Also, you should only write to or read from the file pointer if it is valid (which you are not doing; you just set an error message). This leads to the second big red warning box:
Warning
If the passed file pointer is not valid you may get an infinite loop, because feof() fails to return TRUE.
Better would be:
$result = '';
if ( !$fp = @fsockopen($host, $port, $errno, $errstr, 30) ) {
    $rtn['errornumber'] = $errno;
    $rtn['errorstring'] = $errstr;
}
else {
    fwrite($fp, $out);
    while (!@feof($fp)) {
        //...
    }
    fclose($fp);
    $result = trim($body);
}
return $result;
A last remark: If you follow a redirect with
if ( ($followRedirects) && (stristr($s, "location:") != false) ) {
    $redirect = preg_replace("/location:/i", "", $s);
    return httpGet( trim($redirect) );
}
you never close the file pointer. I think this would be better:
if ( ($followRedirects) && (stristr($s, "location:") != false) ) {
    $redirect = preg_replace("/location:/i", "", $s);
    $result = httpGet( trim($redirect) );
    break;
}
// ...
return $result;
feof() will return false if the connection is still open on a TCP/IP stream.
function httpGet( $url, $followRedirects=true ) {
    [...]
    return httpGet( trim($redirect) );
}
Nothing prevents you from fetching the same URL again and again.
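One way to rule that out is to cap the redirect depth. A sketch of the same function with an added budget parameter ($maxRedirects is an addition, not in the original code); only the redirect branch changes:

function httpGet( $url, $followRedirects = true, $maxRedirects = 5 ) {
    // ... same socket setup and read loop as above ...
    // in the header-reading branch, spend one unit of budget per hop:
    if ( $followRedirects && $maxRedirects > 0 && stristr( $s, "location:" ) !== false ) {
        $redirect = trim( preg_replace( "/location:/i", "", $s ) );
        fclose( $fp );
        return httpGet( $redirect, true, $maxRedirects - 1 );
    }
    // ...
}

Once the budget hits zero the Location header is treated like any other header line, so an A -> B -> A redirect loop terminates after five hops instead of recursing forever.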
