xampp crashes when many simultaneous API requests are made - php

I'm making an application which takes in a user's tweets using the Twitter API, and one component of it performs sentiment extraction from the tweet texts. For development I'm using XAMPP, of course with the Apache HTTP Server as my workspace. I'm using Eclipse for PHP as an IDE.
For the sentiment extraction I'm using the uClassify Sentiment Classifier. The Classifier exposes an API that receives a number of requests, and for each request it sends back XML data from which the sentiment values can be parsed.
Now the application may process a large number of tweets (the maximum allowed is 3200) at once. For example, if there are 3200 tweets, the system sends 3200 API calls at once to this Classifier. Unfortunately, at this number the system does not scale well; in fact, XAMPP crashes after running with these calls for a short while. With a modest number of tweets (for example 500), the system works fine, so I am assuming it is due to the large number of API calls. It may help to note that uClassify allows a maximum of 5000 API calls per day, but since the maximum here is 3200, I am pretty sure I am not exceeding that limit.
This is pretty much my first time working on this kind of web development, so I'm not sure if I'm making a rookie mistake here. I don't know what I could be doing wrong or where to start looking. Any advice/insight would help a lot!
EDIT: added source code in question
Update index method
function updateIndex($timeline, $connection, $user_handle, $json_index, $most_recent) {
    // URL arrays for uClassify API calls
    $urls = [];
    $urls_id = [];
    // halt if no more new tweets are found
    $halt = false;
    // set to 1 to skip first tweet after 1st batch
    $j = 0;
    // count number of new tweets indexed
    $count = 0;
    while ((count($timeline) != 1 || $j == 0) && $halt == false) {
        $no_of_tweets_in_batch = 0;
        $n = $j;
        while (($n < count($timeline)) && $halt == false) {
            $tweet_id = $timeline[$n]->id_str;
            if ($tweet_id > $most_recent) {
                $text = $timeline[$n]->text;
                $tokens = parseTweet($text);
                $coord = extractLocation($timeline, $n);
                addSentimentURL($text, $tweet_id, $urls, $urls_id);
                $keywords = makeEntry($tokens, $tweet_id, $coord, $text);
                foreach ($keywords as $type) {
                    $json_index[] = $type;
                }
                $n++;
                $no_of_tweets_in_batch++;
            } else {
                $halt = true;
            }
        }
        if ($halt == false) {
            $tweet_id = $timeline[$n - 1]->id_str;
            $timeline = $connection->get('statuses/user_timeline', array(
                'screen_name' => $user_handle,
                'count' => 200,
                'max_id' => $tweet_id
            ));
            // skip 1st tweet after 1st batch
            $j = 1;
        }
        $count += $no_of_tweets_in_batch;
    }
    $json_index = extractSentiments($urls, $urls_id, $json_index);
    echo 'Number of tweets indexed: ' . $count;
    return $json_index;
}
Extract sentiment method
function extractSentiments($urls, $urls_id, &$json_index) {
    $responses = multiHandle($urls);
    // add sentiments to all index entries
    foreach ($json_index as $i => $term) {
        $tweet_id = $term['tweet_id'];
        foreach ($urls_id as $j => $id) {
            if ($tweet_id == $id) {
                $sentiment = parseSentiment($responses[$j]);
                $json_index[$i]['sentiment'] = $sentiment;
            }
        }
    }
    return $json_index;
}
Method for handling multiple API calls
This is where the uClassify API calls are being processed at once:
function multiHandle($urls) {
    // curl handles
    $curls = array();
    // results returned in xml
    $xml = array();
    // init multi handle
    $mh = curl_multi_init();
    foreach ($urls as $i => $d) {
        // init curl handle
        $curls[$i] = curl_init();
        $url = (is_array($d) && !empty($d['url'])) ? $d['url'] : $d;
        // set url to curl handle
        curl_setopt($curls[$i], CURLOPT_URL, $url);
        // on success, return actual result rather than true
        curl_setopt($curls[$i], CURLOPT_RETURNTRANSFER, 1);
        // add curl handle to multi handle
        curl_multi_add_handle($mh, $curls[$i]);
    }
    // execute the handles
    $active = null;
    do {
        curl_multi_exec($mh, $active);
    } while ($active > 0);
    // get xml and flush handles
    foreach ($curls as $i => $ch) {
        $xml[$i] = curl_multi_getcontent($ch);
        curl_multi_remove_handle($mh, $ch);
    }
    // close multi handle
    curl_multi_close($mh);
    return $xml;
}

The problem is with giving curl too many URLs in one go. I am surprised you can manage 500 in parallel, as I've seen people complain of problems with even 200. This guy has some clever code to run just 100 at a time, adding the next one each time one finishes, though I notice he later edited it down to just 5 at a time.
I just noticed the author of that code has released an open-source library around this idea, so I think this is the solution for you: https://github.com/joshfraser/rolling-curl
As to why you get a crash, a comment on this question suggests the cause might be reaching the maximum number of OS file handles: What is the maximum number of cURL connections set by? Other suggestions are that you are simply using a lot of bandwidth, CPU and memory. (If you are on Windows, open Task Manager to see if this is the case; on Linux, use top.)
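If you'd rather not pull in a library, here is a minimal sketch of the same idea, assuming a plain array of URLs: feed curl_multi only a limited window of handles and start the next URL each time one finishes. The function name multiHandleThrottled and the limit of 50 are my own choices, not part of your code or of rolling-curl.
// Sketch: fetch $urls with at most $maxParallel concurrent cURL handles.
// Returns responses keyed the same way as $urls.
function multiHandleThrottled(array $urls, $maxParallel = 50) {
    $responses = array();
    $mh = curl_multi_init();
    $queue = $urls;        // URLs not yet started, keyed like $urls
    $running = array();    // original key => cURL handle

    $startNext = function () use (&$queue, &$running, $mh) {
        if (empty($queue)) {
            return;
        }
        reset($queue);
        $key = key($queue);        // take the next pending URL
        $url = $queue[$key];
        unset($queue[$key]);
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_multi_add_handle($mh, $ch);
        $running[$key] = $ch;
    };

    // prime the window
    for ($i = 0; $i < $maxParallel && $queue; $i++) {
        $startNext();
    }

    do {
        curl_multi_exec($mh, $active);
        curl_multi_select($mh, 1.0); // wait for activity instead of busy-spinning
        // harvest finished handles and top the window back up
        while ($info = curl_multi_info_read($mh)) {
            $ch = $info['handle'];
            $key = array_search($ch, $running, true);
            $responses[$key] = curl_multi_getcontent($ch);
            curl_multi_remove_handle($mh, $ch);
            curl_close($ch);
            unset($running[$key]);
            $startNext();
        }
    } while ($active || $running);

    curl_multi_close($mh);
    return $responses;
}
With something like this the 3200 uClassify calls still all get made, but only a bounded number of sockets and file handles are open at any moment, which is what rolling-curl does in a more polished form.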

Related

Calls to Office365 API to synchronize events, throttling

I am trying to synchronize a few events from Outlook to my local DB and I call the API as below:
$url = 'https://outlook.office365.com/api/v2.0/users/' . $this->user . '/CalendarView/'
    . '?startDateTime=' . $start_datetime
    . '&endDateTime=' . $end_datetime;
This gives me all the events from Outlook between two specific dates.
Then I go and save all these events using the code below. The problem is that it returns only 10 events at a time.
$http = new \Http_Curl();
$http->set_headers( $this->get_headers() );
$response = $http->get( $url );
$data = array();
$continue = true;
while ( $continue ) {
    if ( isset( $response->value ) ) {
        $arr = array();
        foreach ( $response->value as $event ) {
            $arr[] = $event;
        }
        $data = array_merge( $data, $arr );
    }
    $property = '@odata.nextLink';
    if ( isset( $response->$property ) ) {
        $url = $response->$property;
        $response = $http->get( $url );
    } else {
        $continue = false;
    }
}
unset( $http );
return $data;
I then tried to call the API as below, setting the top parameter, but I end up with many empty events.
$url = 'https://outlook.office365.com/api/v2.0/users/' . $this->user . '/CalendarView/'
    . '?startDateTime=' . $start_datetime
    . '&endDateTime=' . $end_datetime
    . '&top=100';
I am trying to avoid making more than 60 calls per minute. Is there any way to first get the number of events between two dates and then retrieve all of them, so that the top parameter can be set to the total number of events?
The correct query parameter is $top, not top. Notice the $ in there.
http://docs.oasis-open.org/odata/odata/v4.0/errata03/os/complete/part2-url-conventions/odata-v4.0-errata03-os-part2-url-conventions-complete.html#_Toc453752362
5.1.5 System Query Options $top and $skip
The $top system query option requests the number of items in the queried collection to be included in the result. The $skip query option requests the number of items in the queried collection that are to be skipped and not included in the result. A client can request a particular page of items by combining $top and $skip.
The semantics of $top and $skip are covered in the [OData-Protocol] document. The [OData-ABNF] top and skip syntax rules define the formal grammar of the $top and $skip query options respectively.
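Putting that together, here is a minimal sketch of the fix, reusing the Http_Curl wrapper and get_headers() from your own code above; the page size of 50 is an arbitrary choice. Build the query with $top (http_build_query encodes it as %24top) and keep following @odata.nextLink until it disappears:
$query = http_build_query( array(
    'startDateTime' => $start_datetime,
    'endDateTime'   => $end_datetime,
    '$top'          => 50, // page size; note the leading $
) );
$url = 'https://outlook.office365.com/api/v2.0/users/' . $this->user . '/CalendarView/?' . $query;

$http = new \Http_Curl();
$http->set_headers( $this->get_headers() );

$data = array();
do {
    $response = $http->get( $url );
    if ( isset( $response->value ) ) {
        $data = array_merge( $data, (array) $response->value );
    }
    // keep paging while the server hands back a continuation link
    $next = '@odata.nextLink';
    $url  = isset( $response->$next ) ? $response->$next : null;
} while ( $url !== null );
The service may still cap the page size you ask for, so the nextLink loop is kept; if you need to stay under 60 calls per minute you can usleep() between pages.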

PHP code for Yahoo API for downloading CSV file

I have been using the Yahoo Financial API to download historical stock data from Yahoo. As has been reported on this site, as of mid-May the old API was discontinued. There have been many posts addressing the form of the new call, e.g.:
https://query1.finance.yahoo.com/v7/finance/download/AAPL?period1=315561600&period2=1496087439&interval=1d&events=history&crumb=XXXXXXXXXXX
As well as methods for obtaining the crumb:
Yahoo Finance URL not working
But I must be misunderstanding what the procedure is, as I always get an error saying that it "Failed to open stream: HTTP request failed. HTTP/1.0 401 Unauthorized".
Below is my code. Any and all assistance is welcome. I have to admit that I am an old Fortran programmer and my coding reflects this.
Good Roads
Bill
$ticker = "AAPL";
$yahooURL = "https://finance.yahoo.com/quote/" . $ticker . "/history";
$body = file_get_contents($yahooURL);
$headers = $http_response_header;
$icount = count($headers);
for ($i = 0; $i < $icount; $i++)
{
    $istart = -1;
    $istop = -1;
    $istart = strpos($headers[$i], "Set-Cookie: B=");
    $istop = strpos($headers[$i], "&b=");
    if ($istart > -1 && $istop > -1)
    {
        $Cookie = substr($headers[$i], $istart + 14, $istop - ($istart + 14));
    }
}
$istart = strpos($body, "CrumbStore") + 22;
$istop = strpos($body, '"', $istart);
$Crumb = substr($body, $istart, $istop - $istart);
$iMonth = 1;
$iDay = 1;
$iYear = 1980;
$timestampStart = mktime(0, 0, 0, $iMonth, $iDay, $iYear);
$timestampEnd = time();
$url = "https://query1.finance.yahoo.com/v7/finance/download/" . $ticker . "?period1=" . $timestampStart . "&period2=" . $timestampEnd . "&interval=1d&events=history&crumb=" . $Cookie . "";
while (!copy($url, $newfile) && $iLoop < 10)
{
    if ($iLoop == 9) echo "Failed to download data." . $lf;
    $iLoop = $iLoop + 1;
    sleep(1);
}
@Craig Cocca this isn't exactly a duplicate, because the reference you gave provides a solution in Python, which doesn't help much for those of us who use PHP but haven't learnt Python. I'd love to see a solution in PHP. I've examined the Yahoo page and am able to extract the crumb, but I can't work out how to put it into a stream and GET call.
My latest (failed) effort is:
$headers = [
    "Accept" => "*/*",
    "Connection" => "Keep-Alive",
    "User-Agent" => sprintf("curl/%s", curl_version()["version"])
];
// open connection to Yahoo
$context = stream_context_create([
    "http" => [
        "header" => (implode(array_map(function($value, $key) { return sprintf("%s: %s\r\n", $key, $value); }, $headers, array_keys($headers)))) . "Cookie: $Cookie",
        "method" => "GET"
    ]
]);
$handle = @fopen("https://query1.finance.yahoo.com/v7/finance/download/{$symbol}?period1={$date_now}&period2={$date_now}&interval=1d&events=history&crumb={$Crumb}", "r", false, $context);
if ($handle === false)
{
    // trigger (big, orange) error
    trigger_error("Could not connect to Yahoo!", E_USER_ERROR);
    exit;
}
// download first line of CSV file
$data = fgetcsv($handle);
The two dates are unix coded dates i.e.: $date_now = strtotime($date);
I've now managed to download share price history. At the moment I'm only taking the current price figures, but my download method receives historical data for the past year (i.e. until Yahoo decides to put some other block on the data).
My solution uses the "simple_html_dom.php" parser, which I've added to my /includes folder.
Here is the code (modified from the original version from the Harvard CS50 course, which I recommend for beginners like me):
function lookup($symbol)
{
    // reject symbols that start with ^
    if (preg_match("/^\^/", $symbol))
    {
        return false;
    }
    // reject symbols that contain commas
    if (preg_match("/,/", $symbol))
    {
        return false;
    }
    // body of price history search
    $sym = $symbol;
    $yahooURL = 'https://finance.yahoo.com/quote/' . $sym . '/history?p=' . $sym;
    // get stock name
    $data = file_get_contents($yahooURL);
    $title = preg_match('/<title[^>]*>(.*?)<\/title>/ims', $data, $matches) ? $matches[1] : null;
    $title = preg_replace('/[[a-zA-Z0-9\. \| ]* \| /', '', $title);
    $title = preg_replace('/ Stock \- Yahoo Finance/', '', $title);
    $name = $title;
    // get price data - use simple_html_dom.php (added to /include)
    $body = file_get_html($yahooURL);
    $tables = $body->find('table');
    $dom = new DOMDocument();
    $elements[] = null;
    $dom->loadHtml($tables[1]);
    $x = new DOMXpath($dom);
    $i = 0;
    foreach ($x->query('//td') as $td) {
        $elements[$i] = $td->textContent . " ";
        $i++;
    }
    $open = floatval($elements[1]);
    $high = floatval($elements[2]);
    $low = floatval($elements[3]);
    $close = floatval($elements[5]);
    $vol = str_replace(',', '', $elements[6]);
    $vol = floatval($vol);
    $date = date('Y-m-d');
    $datestamp = strtotime($date);
    $date = date('Y-m-d', $datestamp);
    // return stock as an associative array
    return [
        "symbol" => $symbol,
        "name" => $name,
        "price" => $close,
        "open" => $open,
        "high" => $high,
        "low" => $low,
        "vol" => $vol,
        "date" => $date
    ];
}
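For completeness, a quick usage sketch, assuming simple_html_dom.php is included from the /includes folder mentioned above (so file_get_html() exists) and that Yahoo still serves the history table in this layout:
require_once __DIR__ . '/includes/simple_html_dom.php'; // provides file_get_html()

$stock = lookup("AAPL");
if ($stock !== false) {
    // prints something like: AAPL closed at <price> on <date>
    printf("%s closed at %.2f on %s\n", $stock["symbol"], $stock["price"], $stock["date"]);
}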

list=allpages does not deliver all pages

I have a problem: I want to fill a list with the names of all pages in my wiki. My script:
$TitleList = [];
$nsList = [];
$nsURL = 'wiki/api.php?action=query&meta=siteinfo&siprop=namespaces|namespacealiases&format=json';
$nsJson = file_get_contents($nsURL);
$nsJsonD = json_decode($nsJson, true);
foreach ($nsJsonD['query']['namespaces'] as $ns)
{
    if ( $ns['id'] >= 0 )
        array_push($nsList, $ns['id']);
}
# populate the list of all pages in each namespace
foreach ($nsList as $n)
{
    $urlGET = 'wiki/api.php?action=query&list=allpages&apnamespace=' . $n . '&format=json';
    $json = file_get_contents($urlGET);
    $json_b = json_decode($json, true);
    foreach ($json_b['query']['allpages'] as $page)
    {
        echo("\n" . $page['title']);
        array_push($TitleList, $page["title"]);
    }
}
But about 35% of the pages that I can visit on my wiki are still missing from the list (testing with "Random page"). Does anyone know why this could happen?
MediaWiki API doesn't return all results at once, but does so in batches.
A default batch is only 10 pages; you can specify aplimit to change that (500 max for users, 5,000 max for bots).
To get the next batch, you need to specify the continue= parameter; in each batch, you will also get a continue property in the returned data, which you can use to ask for the next batch. To get all pages, you must loop as long as a continue element is present.
For example, on the English Wikipedia, this would be the first API call:
https://en.wikipedia.org/w/api.php?action=query&list=allpages&apnamespace=0&format=json&aplimit=500&continue=
...and the continue object will be this:
"continue":{
"apcontinue":"\"Cigar\"_Daisey",
"continue":"-||"
}
(Updated according to the comment by the OP, with example code.)
You would now want to flatten the continue array into URL parameters, for example using http_build_query().
See the more complete explanation here:
https://www.mediawiki.org/wiki/API:Query#Continuing_queries
A working version of your code should look like this (tested against Wikipedia with slightly different code):
# populate the list of all pages in each namespace
foreach ($nsList as $n) {
    // Increase aplimit (up to 5,000) if you are using a bot account
    $baseUrl = 'wiki/api.php?action=query&list=allpages&apnamespace=' . $n . '&format=json&aplimit=500&';
    $next = '';
    while ($next !== null) {
        $urlGET = $baseUrl . $next;
        $json = file_get_contents($urlGET);
        $json_b = json_decode($json, true);
        foreach ($json_b['query']['allpages'] as $page)
        {
            echo("\n" . $page['title']);
            array_push($TitleList, $page["title"]);
        }
        // keep paging as long as the API returns a continue element
        $next = isset($json_b['continue']) ? http_build_query($json_b['continue']) : null;
    }
}

Php regex returning repeats in nested arrays

I'm trying to get a list of all occurrences of a file being included in a php script.
I'm reading in the entire file, which contains this:
<?php
echo 'Hello there';
include 'some_functions.php';
echo 'Trying to find some includes.';
include 'include_me.php';
echo 'Testtest.';
?>
Then, I run this code on that file:
if (preg_match_all("/(include.*?;){1}/is", $this->file_contents, $matches))
{
    print_r($matches);
}
When I run this match, I get the expected results... which are the two include sections, but I also get repeats of the exact same thing, or random chunks of the include statement. Here is an example of the output:
Array (
[0] => Array ( [0] => include 'some_functions.php'; [1] => include 'include_me.php'; )
[1] => Array ( [0] => include 'some_functions.php'; [1] => include 'include_me.php'; ) )
As you can see, it's nesting arrays with the same result multiple times. I need 1 item in the array for each include statement, no repeats, no nested arrays.
I'm having some trouble with these regular expressions, so some guidance would be nice. Thank you for your time.
What about this one?
<?php
preg_match_all( "/include(_once)?\s*\(?\s*(\"|')(.*?)\.php(\"|')\s*\)?\s*;?/i", $this->file_contents, $matches );
// for file names
print_r( $matches[3] );
// for full lines
print_r( $matches[0] );
?>
If you want a better and cleaner way, then the only way is PHP's token_get_all:
<?php
$tokens = token_get_all( $this->file_contents );
$files = array();
$index = 0;
$found = false;
foreach ( $tokens as $token ) {
    // in php 5.2+ line numbers are returned in element 2
    $token = ( is_string( $token ) ) ? array( -1, $token, 0 ) : $token;
    switch ( $token[0] ) {
        case T_INCLUDE:
        case T_INCLUDE_ONCE:
        case T_REQUIRE:
        case T_REQUIRE_ONCE:
            $found = true;
            if ( isset( $token[2] ) ) {
                $index = $token[2];
            }
            $files[$index] = null;
            break;
        case T_COMMENT:
        case T_DOC_COMMENT:
        case T_WHITESPACE:
            break;
        default:
            if ( $found && $token[1] === ";" ) {
                $found = false;
                if ( !isset( $token[2] ) ) {
                    $index++;
                }
            }
            if ( $found ) {
                if ( in_array( $token[1], array( "(", ")" ) ) ) {
                    break; // skip parentheses around the file name
                }
                $files[$index] .= $token[1];
            }
            break;
    }
}
// if your php version is above 5.2
// $files index will be line numbers
print_r( $files );
?>
Use get_included_files(), or the built-in tokenizer if the script is not included
I'm searching through a string of another file's contents and not the current file
Then your best bet is the tokenizer. Try this:
$scriptPath = '/full/path/to/your/script.php';
$tokens = token_get_all(file_get_contents($scriptPath));
$matches = array();
$incMode = null;
foreach ($tokens as $token) {
    // ";" should end include stm.
    if ($incMode && ($token === ';')) {
        $matches[] = $incMode;
        $incMode = array();
    }
    // keep track of the code if inside include statement
    if ($incMode) {
        $incMode[1] .= is_array($token) ? $token[1] : $token;
        continue;
    }
    if (!is_array($token))
        continue;
    // start of include stm.
    if (in_array($token[0], array(T_INCLUDE, T_INCLUDE_ONCE, T_REQUIRE, T_REQUIRE_ONCE)))
        $incMode = array(token_name($token[0]), '');
}
print_r($matches); // array(token name, code)
print_r($matches); // array(token name, code)
Please read how preg_match_all works.
The first item in the array ($matches[0]) contains the full texts matched by the whole regular expression.
The next items contain the texts captured by the parenthesized subpatterns.
You should use $matches[1].
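To make the structure concrete, here is a tiny sketch (the sample string is made up for illustration):
$code = "include 'a.php'; echo 'x'; include 'b.php';";
preg_match_all("/(include.*?;)/is", $code, $matches);

print_r($matches[0]); // full pattern matches: include 'a.php'; and include 'b.php';
print_r($matches[1]); // capture group 1: identical here, because the group wraps the whole pattern
That duplication between $matches[0] and $matches[1] is exactly the "repeat" you are seeing; pick one of the two and ignore the other.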

get server ram with php

Is there a way to know the available RAM in a server (Linux distro) with PHP (without using Linux commands)?
Edit: sorry, the objective is to be aware of the RAM available on the server / virtual machine, for that particular server (even if that memory is shared).
If you know this code will only be running under Linux, you can use the special /proc/meminfo file to get information about the system's virtual memory subsystem. The file has a form like this:
MemTotal: 255908 kB
MemFree: 69936 kB
Buffers: 15812 kB
Cached: 115124 kB
SwapCached: 0 kB
Active: 92700 kB
Inactive: 63792 kB
...
That first line, MemTotal: ..., contains the amount of physical RAM in the machine, minus the space reserved by the kernel for its own use. It's the best way I know of to get a simple report of the usable memory on a Linux system. You should be able to extract it via something like the following code:
<?php
$fh = fopen('/proc/meminfo', 'r');
$mem = 0;
while ($line = fgets($fh)) {
    $pieces = array();
    if (preg_match('/^MemTotal:\s+(\d+)\skB$/', $line, $pieces)) {
        $mem = $pieces[1];
        break;
    }
}
fclose($fh);
echo "$mem kB RAM found";
?>
(Please note: this code may require some tweaking for your environment.)
Using /proc/meminfo and getting everything into an array is simple:
<?php
function getSystemMemInfo()
{
    $data = explode("\n", file_get_contents("/proc/meminfo"));
    $meminfo = array();
    foreach ($data as $line) {
        list($key, $val) = explode(":", $line);
        $meminfo[$key] = trim($val);
    }
    return $meminfo;
}
?>
var_dump( getSystemMemInfo() );
array(43) {
["MemTotal"]=>
string(10) "2060700 kB"
["MemFree"]=>
string(9) "277344 kB"
["Buffers"]=>
string(8) "92200 kB"
["Cached"]=>
string(9) "650544 kB"
["SwapCached"]=>
string(8) "73592 kB"
["Active"]=>
string(9) "995988 kB"
...
Linux commands can be run using the exec function in PHP. This is efficient and will do the job (if the objective is to get the memory).
Try the following code:
<?php
exec("free -mtl", $output);
print_r($output);
?>
A small and tidy snippet to get all of the values associated with their keys:
$contents = file_get_contents('/proc/meminfo');
preg_match_all('/(\w+):\s+(\d+)\s/', $contents, $matches);
$info = array_combine($matches[1], $matches[2]);
// $info['MemTotal'] = "2047442"
I don't think you can access the host server's memory info without a specially written PHP extension. The PHP core library does not allow access to the extended memory info (perhaps for security reasons).
However, if your script has access to /proc/meminfo, then you can query that special file and grab the info you need. On Windows (although you haven't asked for it) we can use the com_dotnet PHP extension to query the Windows framework via COM.
Below you can find my getSystemMemoryInfo, which returns that info for you whether you run the script on a Linux or Windows server. The wmiWBemLocatorQuery is just a helper function.
function wmiWBemLocatorQuery( $query ) {
    if ( class_exists( '\\COM' ) ) {
        try {
            $WbemLocator = new \COM( "WbemScripting.SWbemLocator" );
            $WbemServices = $WbemLocator->ConnectServer( '127.0.0.1', 'root\CIMV2' );
            $WbemServices->Security_->ImpersonationLevel = 3;
            // use wbemtest tool to query all classes for namespace root\cimv2
            return $WbemServices->ExecQuery( $query );
        } catch ( \com_exception $e ) {
            echo $e->getMessage();
        }
    } elseif ( ! extension_loaded( 'com_dotnet' ) ) {
        trigger_error( 'It seems that the COM is not enabled in your php.ini', E_USER_WARNING );
    } else {
        $err = error_get_last();
        trigger_error( $err['message'], E_USER_WARNING );
    }
    return false;
}
// _dir_in_allowed_path is your function to detect if a file is within the allowed path (see the open_basedir PHP directive)
function getSystemMemoryInfo( $output_key = '' ) {
    $keys = array( 'MemTotal', 'MemFree', 'MemAvailable', 'SwapTotal', 'SwapFree' );
    $result = array();
    try {
        // LINUX
        if ( ! isWin() ) {
            $proc_dir = '/proc/';
            $data = _dir_in_allowed_path( $proc_dir ) ? @file( $proc_dir . 'meminfo' ) : false;
            if ( is_array( $data ) ) {
                foreach ( $data as $d ) {
                    if ( 0 == strlen( trim( $d ) ) )
                        continue;
                    $d = preg_split( '/:/', $d );
                    $key = trim( $d[0] );
                    if ( ! in_array( $key, $keys ) )
                        continue;
                    $value = 1000 * floatval( trim( str_replace( ' kB', '', $d[1] ) ) );
                    $result[$key] = $value;
                }
            }
        } else { // WINDOWS
            $wmi_found = false;
            if ( $wmi_query = wmiWBemLocatorQuery(
                "SELECT FreePhysicalMemory,FreeVirtualMemory,TotalSwapSpaceSize,TotalVirtualMemorySize,TotalVisibleMemorySize FROM Win32_OperatingSystem" ) ) {
                foreach ( $wmi_query as $r ) {
                    $result['MemFree'] = $r->FreePhysicalMemory * 1024;
                    $result['MemAvailable'] = $r->FreeVirtualMemory * 1024;
                    $result['SwapFree'] = $r->TotalSwapSpaceSize * 1024;
                    $result['SwapTotal'] = $r->TotalVirtualMemorySize * 1024;
                    $result['MemTotal'] = $r->TotalVisibleMemorySize * 1024;
                    $wmi_found = true;
                }
            }
            // TODO a backup implementation using the $_SERVER array
        }
    } catch ( Exception $e ) {
        echo $e->getMessage();
    }
    return empty( $output_key ) || ! isset( $result[$output_key] ) ? $result : $result[$output_key];
}
Example on an 8 GB RAM system
print_r(getSystemMemoryInfo());
Output
Array
(
[MemTotal] => 8102684000
[MemFree] => 2894508000
[MemAvailable] => 4569396000
[SwapTotal] => 4194300000
[SwapFree] => 4194300000
)
If you want to understand what each field represents, then read more.
It is worth noting that on Windows this information (and much more) can be acquired by executing and parsing the output of the shell command systeminfo.
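For example, a rough sketch of that approach; note that the "Total Physical Memory" label and its number formatting are locale-dependent, so treat the parsing below as an assumption to adapt:
// Run systeminfo and pull out the "Total Physical Memory" line.
// NOTE: label and number formatting depend on the Windows locale.
exec('systeminfo', $lines);
foreach ($lines as $line) {
    if (stripos($line, 'Total Physical Memory') !== false) {
        // e.g. "Total Physical Memory:     8,102 MB"
        $mb = (int) str_replace(',', '', preg_replace('/[^\d,]/', '', $line));
        echo "Total physical memory: {$mb} MB\n";
        break;
    }
}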
exec("grep MemTotal /proc/meminfo", $aryMem);
$aryMem[0] has your total RAM minus kernel usage.
I don't remember ever having seen such a function; it's kind of outside the scope of what PHP is made for, actually.
Even if there were such functionality, it would probably be implemented in a way specific to the underlying operating system, and probably wouldn't work on both Linux and Windows (see sys_getloadavg for an example of that kind of thing).
// helpers
/**
 * @return array|null
 */
protected function getSystemMemInfo()
{
    $meminfo = @file_get_contents("/proc/meminfo");
    if ($meminfo) {
        $data = explode("\n", $meminfo);
        $meminfo = [];
        foreach ($data as $line) {
            if (strpos($line, ':') !== false) {
                list($key, $val) = explode(":", $line);
                $val = trim($val);
                $val = preg_replace('/ kB$/', '', $val);
                if (is_numeric($val)) {
                    $val = intval($val);
                }
                $meminfo[$key] = $val;
            }
        }
        return $meminfo;
    }
    return null;
}
// example call to check health
public function check() {
    $memInfo = $this->getSystemMemInfo();
    if ($memInfo) {
        $totalMemory = $memInfo['MemTotal'];
        $freeMemory = $memInfo['MemFree'];
        $swapTotalMemory = $memInfo['SwapTotal'];
        $swapFreeMemory = $memInfo['SwapFree'];
        if (($totalMemory / 100.0) * 30.0 > $freeMemory) {
            if (($swapTotalMemory / 100.0) * 50.0 > $swapFreeMemory) {
                return new Failure('Less than 30% free memory and less than 50% free swap space');
            }
            return new Warning('Less than 30% free memory');
        }
    }
    return new Success('ok');
}
