I'm trying to modify a script that appends random numbers to filenames, and instead have a neat counter that increases by +1 for each new file.
The original function is very simple; it looks like this:
$name .= generateRandomString(5);
What I've come up with, with my mediocre skills, is:
$name .= $count = 1; while ($count <= 10) { echo "$count "; ++$count; }
However, when I run the code it just keeps on looping. I was looking for a function similar to generateRandomString but for increasing numbers. Is there one?
Any ideas?
Use the current timestamp in place of $count; it will always keep increasing.
$name .= time();
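If you specifically want a counter that goes up by one for each new file rather than a timestamp, one option is to persist the last value in a small text file. A minimal sketch, assuming a writable counter.txt next to the script (the file name is illustrative, not from your code):
// Read the previous counter (0 if the file does not exist yet), bump it, save it back
$counterFile = 'counter.txt';
$count = file_exists($counterFile) ? (int) file_get_contents($counterFile) : 0;
$count++;
file_put_contents($counterFile, $count);
$name .= $count;   // e.g. file1, file2, file3, ...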
I am new to PHP, so please bear with me if this is an easy question. I have a PHP script that I want to be executed only 10 times a day and not more than that. I don't want to use cron for this. Is there any way to do this in PHP only?
Right now I have set up a counter which increases by one every time anyone runs the script, and I limit it to 10 runs; if it exceeds that, it shows an error message.
function limit_run_times(){
    $counter = 1;
    $file = 'counter.txt';
    if(file_exists($file)){
        $counter += file_get_contents($file);
    }
    file_put_contents($file, $counter);
    if($counter > 11){
        die("limit is exceeded!");
    }
}
I want an efficient way to do this so that the script is executed at most 10 times per day, every day, i.e. the counter gets reset to 0 each day. Or is there another, more efficient method?
I would rather recommend that you use a database instead; it's cleaner and simpler to maintain.
However, it is achievable with file handling as well. The file will have the format 2019-05-15 1 (date and counter separated by a tab, \t). Fetch the contents of the file, split the values with explode(), then do your comparisons and checks and return values accordingly.
function limit_run_times() {
    // Variable declarations
    $fileName = 'my_log.txt';
    $dailyLimit = 10;
    $date = '';
    $counter = 1;

    // Read the previous "date<TAB>counter" state, if the log file already exists
    if (file_exists($fileName)) {
        $parts = explode("\t", file_get_contents($fileName));
        $date = $parts[0];
        $counter = $parts[1] + 1;
    }

    // Check the counter - if it's higher than the limit on today's date, stop here
    if ($counter > $dailyLimit && date("Y-m-d") === $date) {
        die("Daily execution limit ($dailyLimit) exceeded! Please try again tomorrow.");
    }

    // We only get here if the count is $dailyLimit or less
    // If the stored date is not today, start a new day: reset the counter to 1
    // (this execution counts as the first one) and set the new date
    if (date("Y-m-d") !== $date) {
        $counter = 1;
        $date = date("Y-m-d");
    }

    file_put_contents($fileName, $date . "\t" . $counter);
    return true;
}
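For completeness, the database variant I recommended could look roughly like this. Everything here is an assumption made for illustration: a MySQL table created as CREATE TABLE run_log (run_date DATE PRIMARY KEY, runs INT NOT NULL) and an existing PDO connection in $pdo.
function limit_run_times_db(PDO $pdo, $dailyLimit = 10) {
    // Insert today's row, or atomically bump its counter if it already exists
    $pdo->prepare(
        "INSERT INTO run_log (run_date, runs) VALUES (CURDATE(), 1)
         ON DUPLICATE KEY UPDATE runs = runs + 1"
    )->execute();

    // Read back today's count and enforce the limit
    $stmt = $pdo->query("SELECT runs FROM run_log WHERE run_date = CURDATE()");
    if ((int) $stmt->fetchColumn() > $dailyLimit) {
        die("Daily execution limit ($dailyLimit) exceeded! Please try again tomorrow.");
    }
    return true;
}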
I am trying to create a random string which will be used as a short reference number. I have spent the last couple of days trying to get this to work, but it seems to get to around 32766 records and then continues with endless duplicates. I need a minimum of 200,000 variations.
The code below is a very simple mockup to explain what happens. The reference should follow the format 1a-x1y2z (example), which should allow far more than 32k combinations.
I have a feeling it may be related to memory, but I'm not sure. Any ideas?
<?php
function createReference() {
    $num = rand(1, 9);
    $alpha = substr(str_shuffle("abcdefghijklmnopqrstuvwxyz"), 0, 1);
    $char = '0123456789abcdefghijklmnopqrstuvwxyz';
    $charLength = strlen($char);
    $rand = '';
    for ($i = 0; $i < 6; $i++) {
        $rand .= $char[rand(0, $charLength - 1)];
    }
    return $num . $alpha . "-" . $rand;
}

$codes = [];
for ($i = 1; $i <= 200000; $i++) {
    $code = createReference();
    while (in_array($code, $codes) == true) {
        echo 'Duplicate: ' . $code . '<br />';
        $code = createReference();
    }
    $codes[] = $code;
    echo $i . ": " . $code . "<br />";
}
exit;
?>
UPDATE
So I am beginning to wonder whether this is something with our WAMP setup (Bitnami), as our local machine gets to exactly 1024 records before it starts duplicating. By removing one character from the generated string (making the for loop run 5 times instead of 6) it gets to exactly 32768 records.
I uploaded the script to our CentOS server and had no duplicates.
What in our environment could cause such behaviour?
The code looks overly complex to me. Let's assume for the moment you really want to create n unique strings, each based on a single random value (rand/mt_rand/something in the integer range).
You can start by decoupling the generation of the random values from the encoding (there seems to be nothing in the code that makes a string dependent on any previous state - except for the uniqueness). Comparing integers is quite a bit faster than comparing arbitrary strings.
mt_rand() returns anything between 0 and mt_getrandmax() (2^31 - 1 on typical builds, depending on how PHP has been compiled), which gives roughly 2^31 possible values. You want to pick 200k; let's make it 400k, which is still only about 1/5000 of the value range. It's therefore reasonable to assume everything goes well with the uniqueness, then check at a later time and add more values if a collision occurred. Again, this is much faster than checking in_array() in each iteration of the loop.
Once you have enough values, you can encode/convert them to whatever format you wish. I don't know whether the <digit><character>-<something> format is mandatory, but assuming it is not: base_convert()
<?php
function uniqueRandomValues($n) {
    $values = array();
    while (count($values) < $n) {
        for ($i = count($values); $i < $n; $i++) {
            $values[] = mt_rand();
        }
        // Drop duplicates; if any were found, the while loop tops the array up again
        $values = array_unique($values);
    }
    return $values;
}

function createReferences($n) {
    return array_map(
        function($e) {
            return base_convert($e, 10, 36);
        },
        uniqueRandomValues($n)
    );
}

$start = microtime(true);
$references = createReferences(400000);
$end = microtime(true);
echo count($references), ' ', count(array_unique($references)), ' ', $end - $start, ' ', $references[0];
prints e.g. 400000 400000 3.3981630802155 f3plox on my i7-4770. (The $end-$start part is consistently between 3.2 and 3.4.)
Using base_convert() there can be strings like li10, which can be quite annoying to decipher if you have to manually type the string.
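If you need to avoid that, one option is to encode the integers yourself against a custom alphabet with the ambiguous characters left out. A rough sketch; the alphabet choice and function name are mine, not part of the code above:
<?php
// Encode an integer against a reduced alphabet (0, o, 1, l, i removed).
// Requires PHP 7+ for intdiv().
function encodeReadable($number, $alphabet = '23456789abcdefghjkmnpqrstuvwxyz') {
    $base = strlen($alphabet);
    $out = '';
    do {
        $out = $alphabet[$number % $base] . $out;
        $number = intdiv($number, $base);
    } while ($number > 0);
    return $out;
}

echo encodeReadable(mt_rand());
?>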
I do not believe this to be a duplicate; I've looked for one, but I really had no clue what to call this exactly.
I want to know why a loop that is ten times larger than another loop doesn't take ten times longer to run.
I was doing some testing to try and figure out how to make my website faster and more responsive, so I was using microtime() before and after functions. On my website, I'm not sure how to pull out lists of table rows with certain attributes without going through the entire table, and I wanted to know if this was what was slowing me down.
So using the following loop:
echo microtime(), "<br>";
echo microtime(), "<br>";
session_start();
$connection = mysqli_connect("localhost", "root", "", "") or die(mysqli_connect_error());
echo microtime(), "<br>";
echo microtime(), "<br>";
$x = 1000;
$messagequery = mysqli_query($connection, "SELECT * FROM users WHERE ID='$x'");
while (!$messagequery or mysqli_num_rows($messagequery) == 0) {
    echo('a');
    $x--;
    $messagequery = mysqli_query($connection, "SELECT * FROM users WHERE ID='$x'");
}
echo "<br>";
echo microtime(), "<br>";
echo microtime(), "<br>";
I got the following output and similar outputs:
0.14463300 1376367329
0.14464400 1376367329
0.15548900 1376367330
0.15550000 1376367330 < these two
[a's omitted, for readability]
0.33229800 1376367330 < these two
0.33230700 1376367330
~0.18-0.20 seconds, not that bad, nobody will notice that. So I wondered what would happen as my website grew. What would happen if I had 10 times as many (10,000) table rows to search through?
0.11086600 1376367692
0.11087600 1376367692
0.11582100 1376367693
0.11583600 1376367693
[lots of a's]
0.96294500 1376367694
0.96295500 1376367694
~0.83-0.88 seconds. Why isn't it 1.8-2.0 seconds? Does it take time to start and stop a loop or something?
UPDATE: To see if it was the MySQL queries adding the overhead, I tested it without MySQL:
echo microtime(), "<br>";
echo microtime(), "<br>";
session_start();
$connection = mysqli_connect("localhost", "root", "W2072a", "triiline1") or die(mysqli_connect_error());
echo microtime(), "<br>";
echo microtime(), "<br>";
$x = 1000000;
while ($x > 10) {
    echo('a');
    $x--;
}
echo "<br>";
echo microtime(), "<br>";
echo microtime(), "<br>";
Now it appears that at one million iterations it takes ~100 milliseconds (right?) and at ten million it takes ~480 milliseconds. So, my question still stands: why do larger loops run disproportionately fast? It's not important, I'm not planning my entire website design around this, but I am interested.
Normally, loops will scale linearly.
Possible bug: If you haven't already done so, consider what might happen if there was no record with id 900.
I would strongly recommend using MySQL to do your filtration work for you via WHERE clauses rather than sorting through the information this way. It's not really scalable.
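For instance, a single query can replace the whole decrementing loop, assuming the users table and ID column from your snippet (a sketch, not a drop-in replacement):
$messagequery = mysqli_query($connection,
    "SELECT * FROM users WHERE ID <= 1000 ORDER BY ID DESC LIMIT 1");
// Returns the row with the highest ID that is <= 1000, or an empty result if none exists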
Frankly, the line
while(!$messagequery or mysqli_num_rows($messagequery) == 0) {
doesn't make sense to me. $messagequery will be false if a failure occurs, and you want the loop to run as long as mysqli_num_rows($messagequery) is NOT equal to zero, I think. However, that's not what the above code does.
If mysqli_num_rows($messagequery) is equal to zero, the loop will continue.
If mysqli_num_rows($messagequery) is NOT equal to zero, the loop will stop.
See operator precedence: http://php.net/manual/en/language.operators.precedence.php
Does that help answer your question?
If you are really interested in this, you might take a look at the opcodes that PHP creates. The Vulcan Logic Disassembler (VLD) might help you with this.
However, this shouldn't be your concern if you are only interested in your site's speed. You won't get speed benefits or drawbacks from the loops themselves, but from the things they actually loop over (MySQL queries, arrays, ...).
Compare this small test script:
<pre>
<?php
$small_loop = 3000;
$big_loop = $small_loop * $small_loop;

$start = microtime(true);
// Big loop
for ($i = 0; $i < $big_loop; $i++) {
    ; // do nothing
}
echo "Big loop took " . (microtime(true) - $start) . " seconds\n";

$start = microtime(true);
// Small loops
for ($i = 0; $i < $small_loop; $i++) {
    for ($j = 0; $j < $small_loop; $j++) {
        ;
    }
}
echo "Small loops took " . (microtime(true) - $start) . " seconds\n";
?>
</pre>
The output for me was:
Big loop took 0.59838700294495 seconds
Small loops took 0.592453956604 seconds
As you can see, the difference between one big loop and 3000 nested small loops isn't really significant.
I have a script which lists all possible permutations of a string in an array, which, admittedly, might be used instead of a wordlist. If I get this to work, it'll be impossible not to get a hit eventually, unless there is a limit on attempts.
Anyway, the script obviously takes a HUGE amount of memory, something which will set any server on fire. What I need help with is finding a way to spread out the memory usage: something like resetting the script and continuing where it left off, perhaps by going to another file or by using sessions. I have no clue.
Here's what I've got so far:
<?php
ini_set('memory_limit', '-1');
ini_set('max_execution_time', '0');

$possible = "abcdefghi";
$input = "$possible";

function string_getpermutations($prefix, $characters, &$permutations)
{
    if (count($characters) == 1)
        $permutations[] = $prefix . array_pop($characters);
    else
    {
        for ($i = 0; $i < count($characters); $i++)
        {
            $tmp = $characters;
            unset($tmp[$i]);
            string_getpermutations($prefix . $characters[$i], array_values($tmp), $permutations);
        }
    }
}

$characters = array();
for ($i = 0; $i < strlen($input); $i++)
    $characters[] = $input[$i];

$permutations = array();
print_r($characters);
string_getpermutations("", $characters, $permutations);
print_r($permutations);
?>
Any ideas? :3
You could store the permutations in files, every XXX permutations, then reopen the files when needed in the correct order to display or use your permutations. (Files or whatever you want, as long as you can free PHP memory.)
I see that you're just printing the permutations, but maybe you want to do something else with them? So it depends somewhat.
Also, try to unset as many unused variables as soon as possible while generating your permutations.
Edit: Sometimes, using references as you did for your permutations array can result in higher memory use. In case you haven't tried it yet, check which works better, with or without the reference.
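To make the streaming idea concrete, here is a minimal sketch that yields the permutations one at a time with a generator and writes them straight to a file, so the full list never sits in memory. It assumes PHP 7+ (for yield from); the output file name is illustrative:
<?php
// Same recursive idea as the original script, but yielding each permutation
// instead of collecting them all in an array.
function permutations($prefix, array $characters) {
    if (count($characters) == 1) {
        yield $prefix . array_pop($characters);
        return;
    }
    for ($i = 0; $i < count($characters); $i++) {
        $tmp = $characters;
        unset($tmp[$i]);
        yield from permutations($prefix . $characters[$i], array_values($tmp));
    }
}

// Stream every permutation to disk; memory stays flat no matter how many there are
$handle = fopen('permutations.txt', 'w');
foreach (permutations('', str_split('abcdefghi')) as $p) {
    fwrite($handle, $p . "\n");
}
fclose($handle);
?>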
I want to generate lotto numbers like the generator here:
http://www.nationallottery.co.za/lotto_home/NumberGenerator.asp
May I know what the logic or approach would be to generate the lotto numbers using PHP, MySQL and Ajax?
I would be thankful.
Sample Example:
$generated = array();
while (count($generated) < 6)
{
    $no = mt_rand(1, 49);
    // in_array() avoids the array_search() pitfall where a match at index 0 is falsy
    if (!in_array($no, $generated))
    {
        $generated[] = $no;
    }
}
echo implode(" : ", $generated);
You just need to generate random numbers.
Create multiple random numbers and style them however you want. That site appears to have replaced text numbers with images, which are probably programmatically generated. If you want multiple rows like they offer, just make a form like theirs and return the correct number of rows. It shouldn't be too hard.
The PHP function you are looking for is mt_rand().
Use it to generate an integer between two given values, something like this:
<?php
// Note: this simple version does not prevent the same number from appearing twice
for ($i = 1; $i <= 6; $i++) {
    echo mt_rand(1, 45) . ' ';
}
?>
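An alternative sketch that also avoids duplicates without a retry loop: build the full pool, shuffle it, and take the first six. The 1-49 range is taken from the example further up; adjust it to whatever the lottery actually uses.
<?php
$pool = range(1, 49);               // all possible balls
shuffle($pool);                     // randomise the order
$numbers = array_slice($pool, 0, 6);
sort($numbers);                     // optional: show them in ascending order
echo implode(' : ', $numbers);
?>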