Calculate g-force from acceleration over a 1-second interval - PHP

I have extracted a CSV file with accelerometer data (in m/s2) from a GoPro metadata file (using a GitHub library).
One second of accelerometer data contains ~200 samples on 3 axes. A sample of this file looks like this:
[sample rows omitted; the columns are millis, acc_x, acc_y, acc_z]
In PHP, for each instantaneous value on the X axis, I convert from m/s2 to g like this:
function convert_meters_per_second_squared_to_g($ms2) {
    // 1 g = 9.80665 m/s2
    return $ms2 * 0.101971621297793; // 1 / 9.80665 == 0.101971621297793
}
Sample code for 200 rows (1 second) of the CSV file:
$acc_x_summed_up = 0;
if (($handle = fopen($filepath, "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        list($millis, $acc_x, $acc_y, $acc_z) = $data;
        $acc_x_summed_up += $acc_x;
    }
}
$g_force = convert_meters_per_second_squared_to_g($acc_x_summed_up);
But how do I get the g-force value for each second on the X axis? I tried summing up the values and converting the sum, but the result is clearly wrong, as I get values up to 63 G.
[ UPDATE: ]
The instantaneous g-force values (all 3 axes, separately) are displayed on a graph (using Highcharts). The GoPro video file is displayed side by side with the graph (using the YouTube JavaScript API) and played in real time.
The graph and video are already working fine side by side. Only the g-force values are wrong.
Note: The video file has a g-force overlay (embedded in it) showing 2 axes (x, y).
I have awarded the bounty to @Joseph_J because it seemed a good solution and because I'm forced by the SO system to award it (over the weekend). Thanks everyone for your answers!

I believe you are treating each instantaneous value as if it occurred over 1 second, rather than instantaneously.
I'd say your best bet is to do each calculation by multiplying $acc_x by the resolution of your data divided by gravity's acceleration. In your case the resolution is 5 ms, or one two-hundredth of a second, so each term becomes $acc_x * 0.005 / 9.80665.
Using the information you provided, the 63 G result you got should be more like 0.315 G. This seems more appropriate, though I don't know the context of the data.
EDIT: I forgot to mention that you should still sum all the $acc_x * 0.005 / 9.80665 values over the 200 samples. You can do this in blocks or as a running sum; blocks will be less taxing on the system, while a running sum will be more accurate. Pointed out by @Joseph_J.
EDIT 2: As per your request for a source, I could not find much on calculating average acceleration (and therefore g-force) directly, but you can use the same principle behind average velocity on velocity-time graphs. I did find a scenario similar to yours here: Source and Source 2
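A minimal sketch of that per-second block sum (reusing $filepath and the CSV layout from the question; the 200-samples-per-second block size is an assumption based on the stated sample rate):
<?php
$g = 9.80665;            // m/s2 per g
$dt = 0.005;             // 5 ms between samples (assumed from ~200 samples/second)
$samplesPerSecond = 200;

$block = [];
$perSecondG = [];        // one averaged X-axis g value per second of footage

if (($handle = fopen($filepath, "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        list($millis, $acc_x, $acc_y, $acc_z) = $data;
        $block[] = (float) $acc_x;
        if (count($block) === $samplesPerSecond) {
            // Sum of a_i * dt over the 1 s window is the change in velocity;
            // dividing by the 1 s window length gives the average acceleration,
            // which is numerically the same value. Divide by g to express it in G.
            $perSecondG[] = array_sum($block) * $dt / $g;
            $block = [];
        }
    }
    fclose($handle);
}
print_r($perSecondG);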
Hope this helps!

As per my comment, summing it up doesn't work because force is not additive over time. What you want is to calculate the average acceleration:
function convert_meters_per_second_squared_to_g($acc_array) {
    $acc_average = array_sum($acc_array) / count($acc_array);
    return $acc_average * 0.101971621297793;
}

$acc_x_array = [];
if (($handle = fopen($filepath, "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        list($millis, $acc_x, $acc_y, $acc_z) = $data;
        $acc_x_array[] = $acc_x;
    }
}
$g_force = convert_meters_per_second_squared_to_g($acc_x_array);

Maybe your question can be seen as equivalent to asking for the net change in velocity between samples at one-second intervals?
In that sense, what you need to do is to integrate up all the small accelerations in your 5 ms intervals, so as to compute the net change in velocity over a period of one second (i.e. 200 samples). That change in velocity, divided by the 1-second interval, represents an average acceleration during that 1-second period.
So, in your case, what you'd need to do is add up all the AcclX, AcclY & AcclZ values over a one-second period and multiply by 0.005 to get the vector representing the change in velocity (in units of metres per second). If you then divide that by the one-second extent of the time window, and by 9.80665 m/s^2, you'll end up with the (vector) acceleration in units of G. If you want the (scalar) acceleration, you can then just compute the magnitude of that vector as sqrt(ax^2 + ay^2 + az^2).
You could apply the same principle to get an average acceleration over a different time window, so long as you divide the sum of AcclX, AcclY, AcclZ (after multiplying by the 0.005 s inter-sample time) by the duration of the window over which you've integrated. This is just like approximating the time derivative of a function f(t) by (f(t+d) - f(t))/d; in fact, it is a better approximation to the derivative at the midpoint of the interval, namely t + d/2. For example, you could sum the values over a 2 s window to get an average value at the centre of that 2 s span. There's no need to report these averages only every two seconds; you could simply slide the window along by 0.5 s to get the next reported average acceleration 0.5 s later.
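A short sketch of that integration approach (assuming the question's CSV layout of millis, acc_x, acc_y, acc_z and 200 samples per second; the magnitude column is optional):
<?php
$g = 9.80665;
$dt = 0.005;                 // seconds between samples
$samplesPerSecond = 200;

$rows = array_map('str_getcsv', file($filepath));
$perSecond = [];

for ($i = 0; $i + $samplesPerSecond <= count($rows); $i += $samplesPerSecond) {
    $dvx = $dvy = $dvz = 0.0;
    for ($j = $i; $j < $i + $samplesPerSecond; $j++) {
        // Accumulate a * dt to get the change in velocity over the window.
        $dvx += $rows[$j][1] * $dt;
        $dvy += $rows[$j][2] * $dt;
        $dvz += $rows[$j][3] * $dt;
    }
    // The window is 1 s, so the change in velocity numerically equals the
    // average acceleration over the window; divide by g to express it in G.
    $perSecond[] = [
        'Ax' => $dvx / $g,
        'Ay' => $dvy / $g,
        'Az' => $dvz / $g,
        'A'  => sqrt($dvx * $dvx + $dvy * $dvy + $dvz * $dvz) / $g,
    ];
}
print_r($perSecond);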

THE UPDATED UPDATED SOLUTION
This solution will take your CSV and create an array containing your time, Ax, Ay, & Az values after they have been converted to G's. You should be able to take this array and feed it right into your graph.
The value displayed at each interval will be the average acceleration "at" the interval, not before or after it.
I added a parameter to the function so you can define how many intervals per second you want to display on your graph. This will help smooth out your graph a bit.
I also set the initial and final values. Since this finds the average acceleration at the interval, it needs data on both sides of the interval. Obviously at 0 we are missing the left-hand side, and on the last interval we are missing the right-hand side.
I chose to use all the data from one interval to the next, which overlaps half the values of each interval with the next one. This smooths out (reduces the noise in) the averages, instead of each interval picking up where the previous one left off. I added a parameter so you can toggle the overlap on and off.
Hope it works for you!
function formatAccelData($data, $split, $scale, $overlap = TRUE) {
    if (!$data || !$split || !$scale || !is_int($split) || !is_int($scale)) {
        return FALSE;
    }

    $g = 9.80665;
    $round = 3;
    $value1 = 1;
    $value2 = 2;

    if (!$overlap) { // Toggle overlapping data.
        $value1 = 2;
        $value2 = 1;
    }

    // Set the initial condition at t = 0.
    $results = array();
    $results[0]['seconds'] = 0;
    $results[0]['Ax'] = round(($data[0][1]) / $g, $round);
    $results[0]['Ay'] = round(($data[0][2]) / $g, $round);
    $results[0]['Az'] = round(($data[0][3]) / $g, $round);

    $count = 1;
    $interval = (int)(1000 / $split) / $scale;

    for ($i = $interval; $i < count($data); $i += $interval) {
        $Ax = $Ay = $Az = 0;
        for ($j = $i - ($interval / $value1); $j < $i + ($interval / $value1); $j++) {
            $Ax += $data[$j][1];
            $Ay += $data[$j][2];
            $Az += $data[$j][3];
        }
        $results[$count]['seconds'] = round($count / $scale, $round);
        $results[$count]['Ax'] = round(($Ax / ($interval * $value2)) / $g, $round);
        $results[$count]['Ay'] = round(($Ay / ($interval * $value2)) / $g, $round);
        $results[$count]['Az'] = round(($Az / ($interval * $value2)) / $g, $round);
        $count++;
    }

    // Drop the last interval: it will not have enough data to be calculated.
    array_pop($results);

    // Set the final condition with the data from the end of the last complete interval.
    $results[$count - 1]['seconds'] = round(($count - 1) / $scale, $round);
    $results[$count - 1]['Ax'] = round(($data[$i - $interval][1]) / $g, $round);
    $results[$count - 1]['Ay'] = round(($data[$i - $interval][2]) / $g, $round);
    $results[$count - 1]['Az'] = round(($data[$i - $interval][3]) / $g, $round);

    return $results;
}
To use:
$data = array_map('str_getcsv', file($path));
$split = 5;      // (int)  number of milliseconds between data points
$scale = 4;      // (int)  number of data points per second you want to display
$overlap = TRUE; // (bool) overlap data from one interval to the next
$results = formatAccelData($data, $split, $scale, $overlap);
print_r($results);
THE OLD UPDATED SOLUTION
Remember, this function takes the average leading up to the interval, so it's really half an interval behind.
function formatAccelData($data, $step) {
    $fps = 1000 / $step;   // samples per second
    $second = 1;
    $frame = 0;
    $count = 0;
    $results = array();

    for ($i = 0; $i < count($data); $i += $fps) {
        $Ax = $Ay = $Az = 0;
        for ($j = 0; $j < $fps; $j++) {
            $Ax += $data[$frame][1];
            $Ay += $data[$frame][2];
            $Az += $data[$frame][3];
            $frame++;
        }
        $results[$count]['seconds'] = $second;
        $results[$count]['Ax'] = ($Ax / $fps) * 0.101971621297793;
        $results[$count]['Ay'] = ($Ay / $fps) * 0.101971621297793;
        $results[$count]['Az'] = ($Az / $fps) * 0.101971621297793;
        $second++;
        $count++;
    }
    return $results;
}
How to use:
$data = array_map('str_getcsv', file($path));
$step = 5; //milliseconds
$results = formatAccelData($data, $step);
print_r($results);

Related

PHP distribute percentage based on total numbers

I'm trying to distribute 100% across a total number of items (not equally). It can be done manually, but I'm looking for an automatic way to do it in PHP; to do it manually I had to open a calculator and work it out by hand.
What I'm trying to achieve is the result similar to this:
$value = 10000;
$total_numbers = 9;
$a1 = $value*0.2;
$a2 = $value*0.175;
$a3 = $value*0.15;
$a4 = $value*0.125;
$a5 = $value*0.1;
$a6 = $value*0.08;
$a7 = $value*0.07;
$a8 = $value*0.05;
$a9 = $value*0.04;
So as you can see, the earlier variables get a bigger share than the later ones, but if you add the factors up they come to 1, which is 100%. Now let's say I have total_numbers = 20: I'd have to rewrite it all and work the proportions out the hard way with a calculator. Is there any way this can be done automatically with a function where I just pass the total number and it distributes the value into proportions like this?
The first share will always be bigger than the rest, the second bigger than the rest but smaller than the first, the third bigger than the rest but smaller than the first and second, and so on.
function distributeValue($value, $num) {
    $parts = $num * ($num + 1) / 2;
    $values = [];
    for ($i = $num; $i > 1; --$i) {
        $values[] = round($value * $i / $parts);
    }
    $values[] = $value - array_sum($values);
    return $values;
}

var_dump(distributeValue(10000, 9));
This works by calculating the $num-th triangle number (the number you get by adding all the integers from 1 to $num) and dividing the total value up into that number of parts.
It then starts by taking $num parts, then $num - 1 parts, and so on.
Since the values are rounded, the last step is to take the total minus all the other values, which comes to roughly one part. If you are fine with getting floats instead of ints, you can remove the $values[] = $value - array_sum($values); line and change the condition of the for loop to $i > 0.
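For reference, here is roughly what the sample call should produce (shares worked out by hand from the triangle-number weights 9/45, 8/45, ..., 1/45, so treat the exact rounding as approximate):
// distributeValue(10000, 9)
// => [2000, 1778, 1556, 1333, 1111, 889, 667, 444, 222]   (sums to 10000)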

Optimal way of cycling through 1000's of values

I need to find the value of x where the variance of two results (which take x into account) is the closest to 0. The problem is, the only way to do this is to cycle through all possible values of x. The equation uses currency, so I have to check in increments of 1 cent.
This might make it easier:
$previous_var = null;
$high_amount = 50;

for ($i = 0.01; $i <= $high_amount; $i += 0.01) {
    $val1 = find_out_1($i);
    $val2 = find_out_2();
    $var = variance($val1, $val2);

    if ($previous_var === null) {
        $previous_var = $var;
    }

    // If this variance is larger, it means the previous one was the closest to
    // 0, as the variance has now started increasing
    if ($var > $previous_var) {
        $i -= 0.01;
        break;
    }
    $previous_var = $var;
}
$optimal_monetary_value = $i;
I feel like there is a mathematical approach that would make "cycling through every cent" more optimal. It works fine for small values, but if you start using 1000's as the $high_amount it takes quite a few seconds to calculate.
Based on the comment in your code, it sounds like you want something similar to bisection search, but a little bit different:
function calculate_variance($i) {
    $val1 = find_out_1($i);
    $val2 = find_out_2();
    return variance($val1, $val2);
}

function search($lo, $loVar, $hi, $hiVar) {
    // find the midpoint between the hi and lo values
    $mid = round($lo + ($hi - $lo) / 2, 2);
    if ($mid == $hi || $mid == $lo) {
        // we have converged, so pick the better value and be done
        return ($hiVar > $loVar) ? $lo : $hi;
    }
    $midVar = calculate_variance($mid);
    if ($midVar >= $loVar) {
        // the optimal point must be in the lower interval
        return search($lo, $loVar, $mid, $midVar);
    } elseif ($midVar >= $hiVar) {
        // the optimal point must be in the higher interval
        return search($mid, $midVar, $hi, $hiVar);
    } else {
        // we don't know where the optimal point is for sure, so check
        // the lower interval first
        $loBest = search($lo, $loVar, $mid, $midVar);
        if ($loBest == $mid) {
            // we can't be sure this is the best answer, so check the hi
            // interval to be sure
            return search($mid, $midVar, $hi, $hiVar);
        } else {
            // we know this is the best answer
            return $loBest;
        }
    }
}

$optimal_monetary_value = search(0.01, calculate_variance(0.01), 50.0, calculate_variance(50.0));
This assumes that the variance is monotonically increasing when moving away from the optimal point. In other words, if the optimal value is O, then for all X < Y < O, calculate_variance(X) >= calculate_variance(Y) >= calculate_variance(O) (and the same with all > and < flipped). The comment in your code and the way you have it written make it seem like this is true. If this isn't true, then you can't really do much better than what you have.
Be aware that this is not as good as bisection search. There are some pathological inputs that will make it take linear time instead of logarithmic time (e.g., if the variance is the same for all values). If you can strengthen the requirement that calculate_variance(X) >= calculate_variance(Y) >= calculate_variance(O) to calculate_variance(X) > calculate_variance(Y) > calculate_variance(O), you can make this logarithmic in all cases by checking how the variance for $mid compares to the variance for $mid + 0.01 and using that to decide which interval to check.
Also, you may want to be careful about doing math with currency. You probably either want to use integers (i.e., do all math in cents instead of dollars) or use exact-precision numbers.
If you know nothing at all about the behavior of the objective function, there is no other way than trying all possible values.
On the other hand, if you have a guarantee that the minimum is unique, the golden-section method will converge very quickly. It is a variant of Fibonacci search, which is known to be optimal (it requires the minimum number of function evaluations).
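A minimal golden-section sketch, assuming the variance is unimodal over the search range and reusing the question's find_out_1(), find_out_2() and variance() functions:
<?php
function objective($i) {
    return variance(find_out_1($i), find_out_2());
}

// Golden-section search for the minimum of a unimodal function on [$lo, $hi].
function golden_section_min($lo, $hi, $tol = 0.01) {
    $phi = (sqrt(5) - 1) / 2;            // ~0.618
    $a = $hi - $phi * ($hi - $lo);
    $b = $lo + $phi * ($hi - $lo);
    $fa = objective($a);
    $fb = objective($b);

    while (($hi - $lo) > $tol) {
        if ($fa < $fb) {                 // minimum lies in [$lo, $b]
            $hi = $b;
            $b = $a;  $fb = $fa;
            $a = $hi - $phi * ($hi - $lo);
            $fa = objective($a);
        } else {                         // minimum lies in [$a, $hi]
            $lo = $a;
            $a = $b;  $fa = $fb;
            $b = $lo + $phi * ($hi - $lo);
            $fb = objective($b);
        }
    }
    // Snap to the nearest cent, since the domain is currency.
    return round(($lo + $hi) / 2, 2);
}

$optimal_monetary_value = golden_section_min(0.01, 50.0);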
Your function may have different properties which call for other algorithms.
Why not implement a binary search?
<?php
$high_amount = 50;

// find_out_2() does not depend on the cursor, so compute it once
// outside the loop instead of recalculating it each time.
$val2 = find_out_2();
$previous_var = variance(find_out_1(0.01), $val2);

$start = 0;
$end = $high_amount * 100;   // work in cents
$closest_variance = NULL;

while ($start < $end) {
    $section = intval(($start + $end) / 2);
    $cursor = $section / 100;
    $val1 = find_out_1($cursor);
    $variance = variance($val1, $val2);

    if ($variance <= $previous_var) {
        $start = $section + 1;   // keep looking above the midpoint
    } else {
        $closest_variance = $cursor;
        $end = $section;         // keep looking below the midpoint
    }
    $previous_var = $variance;
}

if (!is_null($closest_variance)) {
    $closest_variance -= 0.01;
}

PHP Generate x amount of random odd numbers within a range

I need to generate x amount of random odd numbers, within a given range.
I know this can be achieved with simple looping, but I'm unsure which approach would be best, and whether there is a better mathematical way of solving this.
EDIT: Also I cannot have the same number more than once.
Generate x integer values over half the range, and for each value double it and add 1.
ANSWERING REVISED QUESTION: 1) Generate a list of candidates in range, shuffle them, and then take the first x. Or 2) generate values as per my original recommendation, and reject and retry if the generated value is in the list of already generated values.
The first will work better if x is a substantial fraction of the range, the latter if x is small relative to the range.
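As a sketch of that doubling trick (my own illustration; $max here is an assumed odd upper bound of the range):
// Draw k uniformly from [0, ($max - 1) / 2] and map it to 2k + 1,
// which is a uniformly random odd number in [1, $max].
$odd = 2 * mt_rand(0, intval(($max - 1) / 2)) + 1;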
ADDENDUM: I should have thought of this approach earlier; it's based on conditional probability. I don't know PHP (I came at this from the "random" tag), so I'll express it as pseudo-code:
generate(x, upper_limit)
    loop with index i from upper_limit downto 1 by 2
        p_value = x / floor((i + 1) / 2)
        if rand <= p_value
            include i in selected set
            decrement x
            return/exit if x <= 0
        end if
    end loop
end generate
x is the desired number of values to generate, upper_limit is the largest odd number in the range, and rand generates a uniformly distributed random number between zero and one. Basically, it steps through the candidate set of odd numbers and accepts or rejects each one based on how many values you still need and how many candidates still remain.
I've tested this and it really works. It requires less intermediate storage than shuffling and fewer iterations than the original acceptance/rejection.
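A direct PHP translation of that pseudo-code, as a sketch (it assumes $upper_limit is the largest odd number in the range and $x is no larger than the number of odd candidates):
<?php
// Select $x distinct odd numbers from {1, 3, ..., $upper_limit} without
// shuffling, using the conditional-probability walk described above.
function generate_odds($x, $upper_limit) {
    $selected = array();
    for ($i = $upper_limit; $i >= 1 && $x > 0; $i -= 2) {
        $remaining = (int) (($i + 1) / 2);   // odd candidates still available: i, i-2, ..., 1
        $p_value = $x / $remaining;
        if (mt_rand() / mt_getrandmax() <= $p_value) {
            $selected[] = $i;
            $x--;
        }
    }
    return $selected;
}

print_r(generate_odds(5, 19));   // e.g. 5 distinct odd numbers in [1, 19]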
Generate a list of the elements in the range, pick a random element, remove it from the list and add it to your series. Repeat x times.
Or you can generate an array with the odd numbers in the range, then do a shuffle
Generation is easy:
$range_array = array();
for ($i = 0; $i < $max_value; $i++) {
    $range_array[] = $i * 2 + 1;
}
Shuffle:
shuffle($range_array);
Then slice out the first x elements:
$result = array_slice($range_array, 0, $x);
This is a complete solution.
function mt_rands($min_rand, $max_rand, $num_rand) {
    if (!is_integer($min_rand) or !is_integer($max_rand)) {
        return false;
    }
    if ($min_rand >= $max_rand) {
        return false;
    }
    if (!is_integer($num_rand) or ($num_rand < 1)) {
        return false;
    }
    if ($num_rand > ($max_rand - $min_rand + 1)) {
        // not enough distinct values in the range
        return false;
    }
    $rands = array();
    while (count($rands) < $num_rand) {
        $loops = 0;
        do {
            ++$loops; // loop limiter, use it if you want to
            $rand = mt_rand($min_rand, $max_rand);
        } while (in_array($rand, $rands, true));
        $rands[] = $rand;
    }
    return $rands;
}

// let's see how it went
var_export($rands = mt_rands(0, 50, 5));
Code is not tested. Just wrote it. Can be improved a bit but it's up to you.
This code generates 5 odd unique numbers in the interval [1, 20]. Change $min, $max and $n = 5 according to your needs.
<?php
function odd_filter($x)
{
    if (($x % 2) == 1)
    {
        return true;
    }
    return false;
}

// seed with microseconds
function make_seed()
{
    list($usec, $sec) = explode(' ', microtime());
    return (float) $sec + ((float) $usec * 100000);
}
srand(make_seed());

$min = 1;
$max = 20;
// number of random numbers
$n = 5;

if (($max - $min + 1) / 2 < $n)
{
    print "interval [$min, $max] is too short to generate $n odd numbers!\n";
    exit(1);
}

$result = array();
for ($i = 0; $i < $n; ++$i)
{
    $x = rand($min, $max);
    // not already in the hash and is odd
    if (!isset($result[$x]) && odd_filter($x))
    {
        $result[$x] = 1;
    }
    else // new iteration needed
    {
        --$i;
    }
}
$result = array_keys($result);
var_dump($result);

displaying axis from min to max value - calculating scale and labels

Writing a routine to display data on a horizontal axis (using PHP gd2, but that's not the point here).
The axis runs from $min to $max and displays a diamond at $result; such an image will be around 300px wide and 30px high, like this:
[example image: an axis from 0 to 3 with a diamond marker at 0.6; source: testwolke.de]
In the example above, $min=0, $max=3, $result=0.6.
Now, I need to calculate a scale and labels that make sense, in the above example e.g. dotted lines at 0 .25 .50 .75 1 1.25 ... up to 3, with number-labels at 0 1 2 3.
If $min=-200 and $max=600, dotted lines should be at -200 -150 -100 -50 0 50 100 ... up to 600, with number-labels at -200 -100 0 100 ... up to 600.
With $min=.02 and $max=5.80, dotted lines at .02 .5 1 1.5 2 2.5 ... 5.5 5.8 and numbers at .02 1 2 3 4 5 5.8.
I tried explicitly telling the function where to put dotted lines and numbers by arrays, but hey, it's the computer who's supposed to work, not me, right?!
So, how to calculate???
An algorithm (example values $min=-186 and $max=+153 as limits):
Take these two limits $min, $max and mark them if you wish.
Calculate the difference between $max and $min: $diff = $max - $min
153 - (-186) = 339
Calculate the base-10 logarithm of the difference: $base10 = log($diff, 10) = 2.5302
Round down: $power = floor($base10) = 2.
This power of ten is your base unit.
To calculate $step, calculate this:
$base_unit = pow(10, $power) = 100;
$step = $base_unit / 2; (if you want 2 ticks per $base_unit).
Check whether $min is divisible by $step; if not, take the nearest multiple of $step above it as the loop start
(in the case of $step = 50 it is $loop_start = -150)
for ($i = $loop_start; $i <= $max; $i += $step) { // the $i values are your ticks
}
I tested it in Excel and it gives quite nice results. You may want to extend it,
for example (in point 5) by calculating $step first from $diff,
say $step = $diff / 4, and rounding $step in such a way that $base_unit is divisible by $step;
this avoids situations where you get only four ticks with $step=25 between 101 and 201, but 39 steps with $step=25 between 0 and 999.
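A small PHP sketch of those steps (the function name and the ticks-per-unit parameter are mine):
<?php
function axis_ticks($min, $max, $ticks_per_unit = 2) {
    $diff = $max - $min;                        // e.g. 153 - (-186) = 339
    $power = floor(log10($diff));               // e.g. floor(2.5302) = 2
    $base_unit = pow(10, $power);               // e.g. 100
    $step = $base_unit / $ticks_per_unit;       // e.g. 50

    // First tick: the smallest multiple of $step that is >= $min.
    $loop_start = ceil($min / $step) * $step;   // e.g. -150

    $ticks = array();
    for ($i = $loop_start; $i <= $max; $i += $step) {
        $ticks[] = $i;
    }
    return $ticks;
}

print_r(axis_ticks(-186, 153));  // -150, -100, -50, 0, 50, 100, 150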
ACM Algorithm 463 provides three simple functions to produce good axis scales with outputs xminp, xmaxp and dist for the minimum and maximum values on the scale and the distance between tick marks on the scale, given a request for n intervals that include the data points xmin and xmax:
Scale1() gives a linear scale with approximately n intervals and dist being an integer power of 10 times 1, 2 or 5.
Scale2() gives a linear scale with exactly n intervals (the gap between xminp and xmaxp tends to be larger than the gap produced by Scale1()).
Scale3() gives a logarithmic scale.
The original 1973 paper is online here, which provides more explanation than the code linked to above.
The code is in Fortran but it is just a set of arithmetical calculations so it is very straightforward to interpret and convert into other languages. I haven't written any PHP myself, but it looks a lot like C so you might want to start by running the code through f2c which should give you something close to runnable in PHP.
There are more complicated functions that give prettier scales (e.g. the ones in gnuplot), but Scale1() would likely do the job for you with minimal code.
(This answer builds on my answer to a previous question Graph axis calibration in C++)
(EDIT -- I've found an implementation of Scale1() that I did in Perl):
use strict;
use POSIX qw(log10);

sub scale1 ($$$) {
    # from TOMS 463
    # returns a suitable scale ($xMinp, $xMaxp, $dist), when called with
    # the minimum and maximum x values, and an approximate number of intervals
    # to divide into. $dist is the size of each interval that results.
    # @vInt is an array of acceptable values for $dist.
    # @sqr is an array of geometric means of adjacent values of @vInt, which
    # is used as break points to determine which @vInt value to use.
    #
    my ($xMin, $xMax, $n) = @_;
    my @vInt = (1, 2, 5, 10);
    my @sqr  = (1.414214, 3.162278, 7.071068);
    if ($xMin > $xMax) {
        my ($tmp) = $xMin;
        $xMin = $xMax;
        $xMax = $tmp;
    }
    my ($del) = 0.0002; # accounts for computer round-off
    my ($fn) = $n;
    # find approximate interval size $a
    my ($a) = ($xMax - $xMin) / $fn;
    my ($al) = log10($a);
    my ($nal) = int($al);
    if ($a < 1) {
        $nal = $nal - 1;
    }
    # $a is scaled into a variable named $b, between 1 and 10
    my ($b) = $a / 10**$nal;
    # the closest permissible value for $b is found
    my ($i);
    for ($i = 0; $i < @sqr; $i++) {
        last if ($b < $sqr[$i]);
    }
    # the interval size is computed
    my ($dist) = $vInt[$i] * 10**$nal;
    my ($fm1) = $xMin / $dist;
    my ($m1) = int($fm1);
    $m1-- if ($fm1 < 0);
    $m1++ if (abs(($m1 + 1.0) - $fm1) < $del);
    # the new minimum and maximum limits are found
    my ($xMinp) = $dist * $m1;
    my ($fm2) = $xMax / $dist;
    my ($m2) = int($fm2) + 1;
    $m2-- if ($fm2 < -1);
    $m2-- if (abs($fm2 + 1 - $m2) < $del);
    my ($xMaxp) = $dist * $m2;
    # adjust limits to account for round-off if necessary
    $xMinp = $xMin if ($xMinp > $xMin);
    $xMaxp = $xMax if ($xMaxp < $xMax);
    return ($xMinp, $xMaxp, $dist);
}

sub scale1_Test {
    my @par = (-3.1, 11.1, 5,
                5.2, 10.1, 5,
               -12000, -100, 9);
    print "xMin\txMax\tn\txMinp\txMaxp\tdist\n";
    for (my $i = 0; $i < @par / 3; $i++) {
        my ($xMinp, $xMaxp, $dist) = scale1($par[3*$i+0],
                                            $par[3*$i+1], $par[3*$i+2]);
        print "$par[3*$i+0]\t$par[3*$i+1]\t$par[3*$i+2]\t$xMinp\t$xMaxp\t$dist\n";
    }
}
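Since the question is about PHP, here is a rough PHP port of Scale1() (a sketch loosely translated from the Perl above, with the integer-truncation bookkeeping replaced by floor()/ceil(); it has not been validated against the original TOMS 463 code):
<?php
// Given data limits and a requested number of intervals, return a "nice"
// scale as array($xMinp, $xMaxp, $dist).
function scale1($xMin, $xMax, $n) {
    $vInt = array(1, 2, 5, 10);
    $sqr  = array(1.414214, 3.162278, 7.071068);
    $del  = 0.0002;                        // allowance for round-off

    if ($xMin > $xMax) {
        list($xMin, $xMax) = array($xMax, $xMin);
    }

    $a   = ($xMax - $xMin) / $n;           // approximate interval size
    $nal = (int) floor(log10($a));
    $b   = $a / pow(10, $nal);             // scaled into [1, 10)

    $i = 0;
    while ($i < count($sqr) && $b >= $sqr[$i]) {
        $i++;
    }
    $dist = $vInt[$i] * pow(10, $nal);     // the "nice" tick distance

    $m1 = floor($xMin / $dist);
    if (abs(($m1 + 1.0) - $xMin / $dist) < $del) {
        $m1++;
    }
    $xMinp = $dist * $m1;

    $m2 = ceil($xMax / $dist);
    if (abs($xMax / $dist - ($m2 - 1.0)) < $del) {
        $m2--;
    }
    $xMaxp = $dist * $m2;

    // never let the scale exclude the data
    if ($xMinp > $xMin) $xMinp = $xMin;
    if ($xMaxp < $xMax) $xMaxp = $xMax;

    return array($xMinp, $xMaxp, $dist);
}

print_r(scale1(-186, 153, 7));  // e.g. array(-200, 200, 50)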
I know that this isn't exactly what you are looking for, but hopefully it will get you started in the right direction.
$min = -200;
$max = 600;
$difference = $max - $min;
$labels = 10;
$picture_width = 300;

/* Get units per label */
$difference_between = $difference / ($labels - 1);
$width_between = $picture_width / $labels;

/* Make the label array */
$label_arr = array();
$label_arr[] = array('label' => $min, 'x_pos' => 0);

/* Loop through the number of labels */
for ($i = 1, $l = $labels; $i < $l; $i++) {
    $label = $min + ($difference_between * $i);
    $label_arr[] = array('label' => $label, 'x_pos' => $width_between * $i);
}
A quick example would be something along the lines of $increment = ($max - $min) / $scale, where you can tweak $scale to be the variable by which the increment scales. Since you divide by it, the increment changes proportionately as your max and min values change. After that you would have something like:
$end = false;
$last_value = $min;
while ($end == false) {
    $breakpoint = $last_value + $increment; // that's your current breakpoint
    if ($breakpoint > $max) {
        $end = true;
    }
    $last_value = $breakpoint;
}
At least that's the concept... Let me know if you have trouble with it.

Finding the nearest data point and sunshine data in php

I have a big problem. The problem is the following:
1) I have a .dat file (a .dat file is used to store the data). It is a 25 MB file which contains
over 200K rows of latitudes and longitudes. An example of one such row is:
-59.083 -26.583 9.4 5.2 3.3 4.3 8.1 6.6 5.3 8.4 8.3 10.0 9.1 5.1
It starts with the latitude and longitude, followed by the sunshine data (hours) for January, February, and so on through December.
I have more than 200K rows like the one above.
My task is to calculate the sunshine hours at a particular latitude and longitude. Suppose I have latitude = 3.86082 and longitude = 100.50509; my task would be to find the average sunshine hours per month at this latitude and longitude. The second problem is that I am not going to have an exact match of the given latitude and longitude with the ones in the file. So first of all I have to find the nearest point, and then I have to calculate the sunshine hours.
I am using the following code to find the nearest point, but it is taking a huge amount of time because of the bulk of data in the file:
$file_name = 'grid_10min_sunp.dat';
$handle = fopen($file_name, "r");
$lat1 = 13.86082;
$lan1 = 100.50509;

$lat_lon_sunshines = make_sunshine_dict($file_name);
$closest = 500;

for ($c = 0; $c < count($lat_lon_sunshines); $c++) {
    $lat2 = $lat_lon_sunshines[$c]['lat'];
    $lan2 = $lat_lon_sunshines[$c]['lan'];
    $sunshines = $lat_lon_sunshines[$c]['sunshine'];

    $lat_diff = abs(round((float)($lat1), 4) - $lat2);
    if ($lat_diff < $closest) {
        $diff = $lat_diff + abs(round((float)($lan1), 4) - $lan2);
        if ($diff < $closest) {
            $closest = $diff;
            $sunshinesfinal = $sunshines;
        }
    }
    $sunshines = '';
}
print_r($sunshinesfinal);
die;
The function make_sunshine_dict($file_name) also goes through each line of the file and prepares an array as follows:
$sunshines_dict = array();
$f = file_get_contents($file_name);
$handle = fopen($file_name, "r");
$kkk = 0;

while ($buffer = fgets($handle)) {
    $tok = strtok($buffer, " \n\t");
    $lat = $tok;
    $latArray[] = $tok;
    $tok = strtok(" \n\t");

    $months = array();
    for ($k = 0; $tok !== false; $k += 1) {
        if ($k == 0) {
            $lan = $tok;
            $lanArray[] = $tok;
        }
        if ($k != 0) {
            $months[] = $tok;
            // "month $k : " . $months[$k] . "<br>";
        }
        $tok = strtok(" \n\t");
    }

    $data[$kkk]['lat'] = $lat;
    $data[$kkk]['lan'] = $lan;

    foreach ($months as $m => $sunshine) {
        $sumD = 0;
        $iteration = 31;
        for ($n = 1; $n <= $iteration; $n++) {
            // approximate day length D for day $J of the year at this latitude
            $J = ($m + 1) * $n;
            $P = asin(.39795 * cos(.2163108 + 2 * atan(.9671396 * tan(.00860 * ($J - 186)))));
            $value = (sin(0.8333 * M_PI / 180) + sin($lat * M_PI / 180) * sin($P)) / (cos($lat * M_PI / 180) * cos($P));
            /* $value ? ($value > 1 and 1) : $value;
               $value ? ($value < -1 and -1) : $value; */
            $D = 24 - ((24 / M_PI) * acos($value));
            $sumD = $sumD + $D;
        }
        $sunshinesdata = (($sumD / 30) * (float)($sunshine) * .01);
        $data[$kkk]['sunshine'][$m] = $sunshinesdata;
    }
    $kkk++;
}
return $data;
Please help, and please let me know if you require more information.
And please remember that I cannot use a default PHP function for the sunshine information here, because I am also taking cloud cover and other factors into consideration.
A lot of the code looks wrong (in terms of just figuring out the sunshine hours). Although, I'll admit, I'm lost as to what you are doing in the make_sunshine_dict function.
In terms of helping you speed things up, you are reading your file twice:
$f = file_get_contents($file_name);
$handle = fopen($file_name, "r");
Also, you can map Lat/Long coordinates into a grid. In other words, make a 2 dimensional array where each row and each column represents 1 degree of latitude and 1 degree of longitude, respectively. As you read in your file, dump entries into the correct lat/long bucket in the grid. When you need to find the location closest to a given lat/lon then you only need to compare that location against the points in its bucket.
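A sketch of that bucketing idea (the 1-degree bucket size, the $grid name and the reuse of the question's 'lat'/'lan' keys are assumptions):
<?php
// Build a 2-D bucket grid: one bucket per whole degree of latitude/longitude.
$grid = array();
foreach ($lat_lon_sunshines as $point) {
    $latKey = (int) floor($point['lat']);
    $lonKey = (int) floor($point['lan']);
    $grid[$latKey][$lonKey][] = $point;
}

// Look up only the bucket containing the target point. (In practice you would
// also check the neighbouring buckets when the target sits near a bucket edge
// or its own bucket is empty.)
$latKey = (int) floor($lat1);
$lonKey = (int) floor($lan1);
$bucket = isset($grid[$latKey][$lonKey]) ? $grid[$latKey][$lonKey] : array();

$closest = null;
$best = PHP_INT_MAX;
foreach ($bucket as $point) {
    $d = abs($point['lat'] - $lat1) + abs($point['lan'] - $lan1);
    if ($d < $best) {
        $best = $d;
        $closest = $point;
    }
}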
Load the .dat file into a SQL database at some interval with a cron job, and then just query the database like you always would.
