I'm getting some weird data in my database (it doesn't add up to what I'm trying to do).
Here's my for loop, which counts weeks from 1 to 52 and builds a URL out of each week number to pass to a function for processing:
for ($week = 1; $week < 52 ; $week++) {
kalenderFetch("heren2kalender","http://kovv.mavari.be/xlsKalenderScheidsrechter.aspx?&reeks=H3A&week=".$week);
}
The function it posts to then processes the URL, extracts the data from the table, and inserts it into a database:
As far as I can tell it does not insert the data from all weeks; it just stops at random, and sometimes I get more data, sometimes less.
I was also getting a lot of errors (such as time-limit and memory-limit errors).
First of all, use:
for ($week = 1; $week <= 52; $week++)
Further, if you get time and memory exceptions, try:
set_time_limit(0);
ini_set('memory_limit', '128M');
Whether this works or not depends on your PHP settings; you may also have to increase memory_limit in your php.ini file.
In short: how do I break out of the loop once it has been running for more than 53 seconds? Something like this:
while (5 < 200)
{
    // do some work here, which will take around 1-6 seconds
    if (/* this loop has been running for more than 53 seconds */)
        break;
}
In case you want to know why I want to do this:
This is what I am doing: I am copying data from 7 JSON pages on the web and inserting it into my MySQL database.
The code is something like this:
$start_time = microtime(TRUE); // Helps to count PHP script execution time

// connect to database
// $number = get last inserted number from database

while ($number < $last_number_to_fetch) // pseudocode: loop over the remaining ids
{
    // open the 7 JSON page links like this - "example.com/$number?api=xxxyyzz"
    // use prepared PDO statements to insert the data into MySQL
}
// Code to count PHP script execution time
$end_time = microtime(TRUE);
$time_taken = $end_time - $start_time;
$time_taken = round($time_taken,5);
echo '<p>Page generated in '.$time_taken.' seconds.</p>';
So in my case it takes around 5.2 seconds to complete one whole loop iteration that adds all the data. Some JSON files are empty, though, and an iteration takes only about 1.4 seconds when they are.
So I want to run millions of iterations like that (adding data from millions of JSON files). If my code runs 24/7, it will take me about a month to complete the task.
But after the code has run for 90 seconds, I get this error:
I am using a CRON job to do the task, and it looks like the server gives the same error to the CRON job.
So I want the CRON job to run every 1 minute, so that I do not get the timeout error.
But I am afraid of this: what if the script has inserted half of the rows when the minute is up, and never inserts the other half? Then, when the next run starts, the code moves on to the next $number.
So I want to break; out of the loop after 53 seconds (if an iteration starts at 52 seconds, breaking at the end of it will put me at around 58-59 seconds).
I will put the break; right before the end of the loop body (before the closing }), so I never exit the loop while the data has only been inserted into half of the rows.
I guess your PHP max_execution_time is set to 90 seconds. You can change it with set_time_limit(), but I don't think that is a good approach for this.
Give pcntl or pthreads a try; they could save you a lot of time.
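For the "break after ~53 seconds" part itself, a minimal sketch looks like this (the helper name and the $numbers queue are my inventions, not from the question): record the start time once, then test the elapsed time after each page's inserts complete, and break when the budget is spent.

```php
// Hypothetical helper: has this run's time budget been spent?
function timeBudgetSpent(float $startedAt, float $budget = 53.0): bool
{
    return (microtime(true) - $startedAt) >= $budget;
}

$startedAt = microtime(true);
$numbers   = range(1, 100); // hypothetical queue of ids still to fetch

foreach ($numbers as $number) {
    // ... fetch "example.com/$number?api=..." and insert its rows
    //     with prepared PDO statements here ...

    // Checking *after* the inserts means a page is never left half-done.
    if (timeBudgetSpent($startedAt)) {
        break; // the next cron run picks up at the next $number
    }
}
```

Since each iteration takes roughly 1-6 seconds, checking after the work completes means the script stops somewhere between 53 and about 59 seconds, which matches the behaviour described in the question.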
Any idea what would be the best way to write a PHP function for an online reservation system that tracks object occupancy?
Just to be clear:
I want to check the availability of an object in the database by writing a function that compares two variables:
Starting time of reservations;
Their duration (finishing time);
So when a new reservation is entered, I check the database; if it does not exceed the limit of objects for that period (compared against previous reservations), it returns a message which I then pass to JavaScript to enable the Submit button; but if it exceeds the limit, my JavaScript suggests a duration that is available for the entered starting time.
In my current PHP function I am having some problems:
First, I am using a lot of variables and a lot of nested loops (which may slow the system down as it grows), and the code looks quite messy.
Second, it does not recognize the difference between serial and concurrent reservations, so it treats both the same.
Here is a snippet of my function:
$reservation  = new Reservation;
$reservations = $reservation->fetch_all();

foreach ($reservations as $reservation) {
    for ($j = $res_code['res_code_st']; $j < $res_code['res_code_fi']; $j++) {
        for ($i = $reservation['res_code_st']; $i < $reservation['res_code_fi']; $i++) {
            if ($i == $j) {
                $count = $count + 1;
                $arr[] = $reservation['res_code_st'];
                $arr[] = $reservation['res_code_fi'];
                break 2;
            }
        }
    }
}
Note that I'm storing times in this format: for example, for 12:30 I store 1230 and for 09:20 I store 0920. I then check every minute of every stored reservation against every minute of the new reservation (everything happens on the same day; days don't matter), and whenever it finds a match I count it as another reservation in that period (which is why it can't tell concurrent from serial reservations).
I believe it should be simple, but I'm a bit confused and can't see a better solution right now!
Thanks for your time :)
EDIT
I tried the way suggested by #kamil-maraz. I think it saves some time by reducing complexity, but I still couldn't figure out how to check the concurrency.
Let me give one example:
There are four kinds of disturbance, which I try to show in this symbolic figure.
Suppose each line is a reservation across time; the first line is the new reservation and the next four are already stored in the DB.
The four disturbances are:
One that starts before and ends in the middle of the new request;
One that starts before and ends after the new request;
One that is completely inside the new reservation;
One that starts after the new request and ends after it.
0-----------------0
0--------------------------------0
0--------------0
0----------0
0-----0
$result = $db->prepare(
    'SELECT COUNT(reservation_id) FROM reservations
     WHERE (res_code_st < ? AND res_code_fi > ?)
        OR (res_code_st > ? AND res_code_fi < ?)
        OR (res_code_st < ? AND res_code_fi > ?)
        OR (res_code_st < ? AND res_code_fi > ?)'
);
$result->execute(array(
    $res_code['res_code_st'], $res_code['res_code_st'],
    $res_code['res_code_st'], $res_code['res_code_fi'],
    $res_code['res_code_st'], $res_code['res_code_fi'],
    $res_code['res_code_fi'], $res_code['res_code_fi'],
));
$row = $result->fetch();
This gives me the number of reservations in the interval of the new request. But what about this case:
0--------------------------0
0-----0
0-----0
0------0
Although there are 4 reservations in the interval, which should be invalid (suppose the object limit == 3), not more than 2 reservations are active at any single time, so it is actually still valid (the concurrency problem I was talking about).
Any idea how I should change the SQL query to fix this?
It seems to me that this could be done entirely in the database. You are fetching all the rows and then doing some magic over the data in PHP, but you can do it with a single database query.
For example, something like this:
SELECT COUNT(id) FROM reservations WHERE date < ... AND date > ... AND ... etc
Then, in PHP, you can test the count.
If you want to distinguish different types of reservations (concurrent, etc.), you can use an aggregated table (like somebody used here) and store the reservation type in each row as well.
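For the concurrency case specifically, one option is to fetch only the overlapping rows with a single condition (existing start < new finish AND existing finish > new start covers all four overlap patterns in the figure) and then compute the peak concurrency in PHP. A rough sketch, with the column names shortened to st/fi and the function name invented for the example:

```php
// $existing: rows overlapping the new interval, e.g. fetched with
//   SELECT res_code_st AS st, res_code_fi AS fi FROM reservations
//   WHERE res_code_st < :new_fi AND res_code_fi > :new_st
// Returns the highest number of simultaneous reservations inside
// [$newSt, $newFi), counting the new reservation itself.
function peakConcurrency(array $existing, int $newSt, int $newFi): int
{
    // Concurrency can only increase at a start time, so sampling the
    // count at each start point inside the new interval is enough.
    $starts = [$newSt];
    foreach ($existing as $r) {
        if ($r['st'] > $newSt && $r['st'] < $newFi) {
            $starts[] = $r['st'];
        }
    }

    $peak = 0;
    foreach ($starts as $t) {
        $count = 1; // the new reservation covers every sampled point
        foreach ($existing as $r) {
            if ($r['st'] <= $t && $r['fi'] > $t) {
                $count++;
            }
        }
        $peak = max($peak, $count);
    }
    return $peak;
}
```

With the data from the second figure (three back-to-back reservations under one long new request), this returns 2, so with an object limit of 3 the new reservation is correctly accepted.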
I have a problem with my script, specifically with a loop. If someone can help me, please.
Problems:
The loop starts perfectly, but only the first iteration works without any problem.
When the result reaches 0 and the next iteration starts, it gives me -1, -2, -3, etc., still on the first account.
And the correct behaviour would be:
when the next iteration starts, it should add +2 to the SMTP limit, increment $server_index++, and start from the first account again, instead of sticking on the first account and getting negative numbers on the same account.
$server_index = 0;
while ($customer = mysql_fetch_assoc($result)) {
    // Start server switch
    $available_server_limit = $servers[$server_index]["per_day_limit"]
        - getEmailUsage($servers[$server_index]["id"], date("d/m/Y", time()));
    if ($available_server_limit == 0) {
        if ($server_index == $servers_count - 1) {
            //exit(showError("Sorry! We don't have limit to send more emails"));
            mysql_query("UPDATE `smtp_servers` SET `per_day_limit`=`per_day_limit`+2");
            $server_index = 0;
        } else {
            $server_index++;
        }
    }
It's really difficult to see exactly what you want from only this much info. However, I should point out that you will only reach $server_index++ IF $available_server_limit == 0.
If $available_server_limit does not equal 0, you'll get stuck.
The loop, by the way, is a while statement relying on variables you're not using. May I suggest a foreach or for statement?
On another note, please do away with the mysql_* functions while you still can. They are deprecated as of recent versions of PHP.
See this for more info
I also notice that you don't show the whole loop. So with that in mind, here's my GUESS at the best way around this for YOU.
$server_index = 0;
foreach ($servers as $key => $val)
{
    $available_server_limit = $servers[$server_index]["per_day_limit"]
        - getEmailUsage($servers[$server_index]["id"], date("d/m/Y", time()));
    if ($available_server_limit == 0)
    {
        if ($server_index == $servers_count - 1)
        {
            mysql_query("UPDATE `smtp_servers` SET `per_day_limit`=`per_day_limit`+2");
            $server_index = 0;
        }
    }
    $server_index++;
Note: I didn't close the foreach loop because your while loop isn't closed either. I figure you've got more code after this that you're using.
Image : http://i40.tinypic.com/2hodx55.png
I have built a network interface monitor using PHP and SNMP, but when I execute it on localhost my graph keeps dropping back to the origin (0) again and again (please see the image), and the speed on the Y axis is wrong: at times it goes into the millions.
Can anyone tell me what the problem is in the code below?
<?php
$int="wlan0";
session_start();
$rx0 =snmpget('localhost','public','.1.3.6.1.2.1.2.2.1.10.3');
$tx0 =snmpget('localhost','public','.1.3.6.1.2.1.2.2.1.16.3');
sleep(5);
$rx1 =snmpget('localhost','public','.1.3.6.1.2.1.2.2.1.10.3');
$tx1 =snmpget('localhost','public','.1.3.6.1.2.1.2.2.1.16.3');
$rx0 = substr($rx0, 11);
$tx0 = substr($tx0, 11);
$rx1 = substr($rx1, 11);
$tx1 = substr($tx1, 11);
$tBps = $tx1 - $tx0;
$rBps = $rx1 - $rx0;
$round_rx=$rBps;
$round_tx=$tBps;
$time=date("U")."000";
$_SESSION['rx'][] = "[$time, $round_rx]";
$_SESSION['tx'][] = "[$time, $round_tx]";
$data['label'] = $int;
$data['data'] = $_SESSION['rx'];
if (count($_SESSION['rx'])>60)
{
$x = min(array_keys($_SESSION['rx']));
unset($_SESSION['rx'][$x]);
}
echo '{"label":"'.$int.'","data":['.implode(",", $_SESSION['rx']).']}';
?>
What you are seeing here is a classic case of polling a counter faster than its refresh interval. It is often the case that counters (in this case, interface counters) are updated every few seconds (10-15 seconds is a common value).
If the counter updates every 15 seconds, and you ask for data every 5 seconds, then you will receive the same value once or twice in a row (depending on latency, processing time, etc.). If you receive the same value twice, then you will see a zero value for the delta (which is what your image shows).
There are two ways to get around this:
Ask for data less frequently than the counters are updated (30-second polling usually works fine). Obviously, if you can find out the exact refresh interval, then you can use that.
Modify the configuration of your equipment to refresh its counters faster. Sometimes this is possible, sometimes it is not; it just depends on the manufacturer, the software, and what has been implemented.
For Net-SNMP "snmpd" daemons, you can walk NET-SNMP-AGENT-MIB::nsCacheTable (1.3.6.1.4.1.8072.1.5.3) for more information about its internal caching of counters.
For example:
snmpwalk -v2c -cpublic localhost 1.3.6.1.4.1.8072.1.5.3 | grep .1.3.6.1.2.1.2.2
NET-SNMP-AGENT-MIB::nsCacheTimeout.1.3.6.1.2.1.2.2 = INTEGER: 3
NET-SNMP-AGENT-MIB::nsCacheStatus.1.3.6.1.2.1.2.2 = INTEGER: cached(4)
Here, you can see that my particular box is caching IF-MIB::ifTable (.1.3.6.1.2.1.2.2), which is the table you're using, every three seconds. In my case, I would not ask for data any more often than every three seconds. NET-SNMP-AGENT-MIB::nsCacheTimeout (.1.3.6.1.4.1.8072.1.5.3.1.2) is marked as read-write, so you might be able to issue an SNMP "set" command to change the caching duration.
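As a side note on the million-scale Y-axis spikes: once the polling interval is right, the raw difference should also be divided by the actual elapsed time and guarded against 32-bit counter wrap, or a single wrap will show up as a huge bogus value. A sketch of that calculation (the function name is mine, not part of the original script):

```php
// Turn two raw ifInOctets/ifOutOctets readings taken $seconds apart
// into a bytes-per-second rate, allowing for 32-bit Counter32 wrap.
function counterRate(int $prev, int $curr, float $seconds): float
{
    $delta = $curr - $prev;
    if ($delta < 0) {
        $delta += 2 ** 32; // the Counter32 wrapped around between polls
    }
    return $delta / $seconds;
}
```

In the script above this would replace the bare `$tx1 - $tx0` / `$rx1 - $rx0` subtractions, with `$seconds` set to the real interval between the two snmpget calls rather than the nominal sleep(5).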
I have a PHP page that queries a MySQL database and returns about 20,000 rows; however, the browser takes over 20 minutes to display them. I have added an index to my database and it is used: the query time on the command line is about 1 second for the 20,000 rows, but in the web application it takes far longer. Does anyone know what is causing this problem, and a better way to improve it? Below is my PHP code to retrieve the data:
select * from table where Date between '2010-01-01' and '2010-12-31'
$result1 = mysql_query($query1) or die('Query failed: ' . mysql_error());
while ($line = mysql_fetch_assoc($result1)) {
echo "\t\t<tr>\n";
$Data['Date'] = $line['Date'];
$Data['Time'] = $line['Time'];
$Data['Serial_No'] = $line['Serial_No'];
$Data['Department'] = $line['Department'];
$Data['Team'] = $line['Team'];
foreach ($Data as $col_value) {
echo "\t\t\t<td>$col_value</td>\n";
}
echo "\t\t</tr>\n";
}
Try adding an index to your date column.
Also, it's a good idea to learn about the EXPLAIN command.
As mentioned in the comments above, 1 second is still pretty long for your results.
You might consider putting all your output into a single variable and then echoing the variable once the loop is complete.
Also, browsers wait for tables to be completely formed before showing them, so that will slow your results (at least slow the process of building the results in the browser). A list may work better - or better yet a paged view if possible (as recommended in other answers).
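The "single variable" suggestion might look like this sketch: render each row into a string and echo everything once after the loop (the helper name is my invention; htmlspecialchars() is added since the values end up inside HTML):

```php
// Build one <tr> of the table as a string instead of echoing cell by cell.
function rowHtml(array $line): string
{
    $cells = '';
    foreach ($line as $col_value) {
        $cells .= "\t\t\t<td>" . htmlspecialchars((string) $col_value) . "</td>\n";
    }
    return "\t\t<tr>\n" . $cells . "\t\t</tr>\n";
}

// In the fetch loop: $html .= rowHtml($line);
// After the loop, a single: echo $html;
```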
It's not PHP that's causing it to be slow, but the browser itself rendering a huge page. Why do you have to display all that data anyway? You should paginate the results instead.
Try constructing a static HTML page with 20,000 table elements. You'll see how slow it is.
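A minimal pagination sketch along those lines (the ?page= parameter and the 100-rows-per-page figure are assumptions, not from the question):

```php
// Translate a 1-based page number into a row offset for LIMIT/OFFSET.
function pageOffset(int $page, int $perPage): int
{
    return (max(1, $page) - 1) * $perPage;
}

$perPage = 100;
$page    = (int) ($_GET['page'] ?? 1);
$offset  = pageOffset($page, $perPage);

$query1 = "SELECT * FROM table WHERE Date BETWEEN '2010-01-01' AND '2010-12-31'
           LIMIT $perPage OFFSET $offset";
```

The browser then only ever renders 100 table rows at a time, which sidesteps the 20-minute rendering problem entirely.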
You can also improve that code:
while ($line = mysql_fetch_assoc($result1)) {
    echo "\t\t<tr>\n";
    foreach ($line as $col_value) {
        echo "\t\t\t<td>$col_value</td>\n";
        flush(); // optional, but gives your program a sense of responsiveness
    }
    echo "\t\t</tr>\n";
}
In addition, you should increase your acceptance rate.
You could time each step of the script by echoing the time before and after connecting to the database, running the query, and outputting the code.
That will tell you how long the different steps take. You may find that it is indeed the traffic causing the delay and not the query.
On the other hand, when you have a table with millions of records, retrieving 20,000 of them can take a long time even when it is indexed. 20 minutes is extreme, though...