I am developing an event organization website. When a user registers for an event, they are given a unique random number (10 digits), which we use to generate a barcode and mail to them. Now,
I want to make the number unique for each registered event.
And also auto-increment.
One solution is to grab all the existing numbers into an array, generate an auto-increment number in Laravel (of the form 0000000001 to 9999999999), loop through and check all the values, grab the first value that doesn't equal any of the values in the array, and add it to the database.
But I am thinking that there might be a better solution to this. Any suggestion?
Select the maximum number stored in your DB and add 1 to it, like:
SELECT (MAX(Column_Name)+1) AS Max_val FROM Table_Name;
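For example, a minimal PHP sketch of that query (assuming a PDO connection in $pdo; the registrations table and ticket_number column are just placeholder names), padded out to the 10-digit format:
$sql    = "SELECT COALESCE(MAX(ticket_number), 0) + 1 AS max_val FROM registrations";
$next   = $pdo->query($sql)->fetchColumn();
$ticket = str_pad((string) $next, 10, '0', STR_PAD_LEFT); // e.g. "0000000042"
Keep in mind that two concurrent registrations could read the same MAX value, so keep a unique index on the column and retry on a duplicate-key error.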
I suggest a simple timestamp-based solution using the Carbon class to produce a unique number from the timestamp. It's fairly simple to generate a basic unique stamp this way.
You can use it as given below:
use Carbon\Carbon;
$current_timestamp = Carbon::now()->timestamp; // Produces something like this 1552296328
You can use it as a unique identifier. If you want the next numbers, just add 1. But keep in mind that you then have to manage the next batch of numbers in a timely manner (i.e. if you have generated 500 numbers by incrementing, you should not generate another number for the next 500 seconds, otherwise the numbers will repeat). If you want to know more, you can read here.
A solution with the rand() function may not work here, because it can reproduce a number that already exists in the database and you will get a unique constraint violation error (i.e. if the column is unique in the DB).
No matter what approach you use, it will never be truly random; it will be a PRNG. For your case, I think auto-increment with zero-fill should be enough.
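A rough sketch of a zero-filled auto-increment column in MySQL (table and column names are placeholders; note that the ZEROFILL display attribute is deprecated in MySQL 8, where you would pad in application code instead):
CREATE TABLE registrations (
    ticket_number BIGINT(10) UNSIGNED ZEROFILL NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (ticket_number)
);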
But if you are set on using a random number, then PHP's rand() function should be enough. 10 digits means 10,000,000,000 unique numbers, so unless your project has millions of events it should realistically be no problem, and approach 1 should work. Also, after generating any random number you can check whether that number is already present (there is a 0.000001% or so chance); if it is, generate another random number.
But if your project gets very successful (i.e. millions of events), then problems similar to Y2K might creep up.
A MySQL UUID would give you something truly unique: Store UUID v4 in MySQL
You don’t need to worry about auto incrementing.
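If you don't want to pull in a library, here is a minimal sketch of generating a version 4 UUID in plain PHP (the helper name is just for illustration):
function uuid_v4(): string
{
    $bytes = random_bytes(16);
    $bytes[6] = chr((ord($bytes[6]) & 0x0f) | 0x40); // set the version nibble to 4
    $bytes[8] = chr((ord($bytes[8]) & 0x3f) | 0x80); // set the RFC 4122 variant bits
    return vsprintf('%s%s-%s-%s-%s-%s%s%s', str_split(bin2hex($bytes), 4));
}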
I'm trying to generate a unique order number for my e-commerce application. This is my code:
<?php
$bytes = random_bytes(3);
$random_hash = bin2hex($bytes);
$order_num = $random_hash . "1";
echo strtoupper(hash('crc32b', $order_num));
The order number (1 in the example) is going to be an auto-increment value retrieved from MySQL.
Does this ensure uniqueness?
I wanted a short unique final value, max 8-10 chars.
A numbers-only solution would be fine too.
As far as I know, most hash algorithms make no guarantee of when collisions might occur, so you're probably just as likely to get a collision with your proposed code as using the random part on its own.
If the auto-increment part is unique, and the random part is just to avoid guesses, you could just concatenate the two parts together (i.e. everything in your example before the hash call). That way if the same random number comes up twice, it will have different numbers on the end.
If that results in something too long, you could do something with base_convert (or a similar encoding) to convert the number into a shorter representation.
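For example, a rough sketch of that idea (variable names are placeholders; note that base_convert() works through floats, so keep the combined number comfortably below 2^53):
$order_id = 4567;                 // the AUTO_INCREMENT value from MySQL
$random   = random_int(100, 999); // small random prefix so ids aren't guessable
$code     = strtoupper(base_convert($random . $order_id, 10, 36));
echo $code;                       // "QGLJ" when $random happens to be 123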
The hash function will not provide any uniqueness to the id; it only obfuscates it a bit.
If you have, let's say, 100 possible values, you would get 100 possible hashes from them, no more. If an attacker wants to brute-force the hashes, he can pick those 100 possible hashes and try them.
In your case, with only 3 bytes of randomness, you would not get anywhere near all possible combinations before you hit a duplicate; because of the birthday problem, the same random number will be generated much earlier than the size of the 3-byte keyspace suggests.
There are two common approaches when it comes to unique ids:
You let the database automatically increment the id; this makes sure that the id is unique.
You generate a UUID (a global id of 16 bytes), which offers such a huge keyspace that a duplicate is extremely unlikely. In practice one can neglect the possibility of duplicates.
The UUID has a lot of advantages and one disadvantage:
(+) UUIDs can be generated in a decentralized way, e.g. in an offline scenario.
(+) One can generate the id before the row is inserted in the database, so one does not have to wait for the row to be created in the db.
(+) The ids are not deterministic, so an attacker cannot guess the next id.
(-) They use more storage space and are a bit slower when searching.
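As a side note on that last point, a common mitigation is to store the UUID as 16 raw bytes instead of a 36-character string; a rough MySQL sketch (table and column names are placeholders):
CREATE TABLE orders (
    id BINARY(16) NOT NULL PRIMARY KEY
);
INSERT INTO orders (id) VALUES (UNHEX(REPLACE(UUID(), '-', '')));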
I'd like to generate a long list of 9-digit sequences.
Let's call them ID.
So each ID is unique, and the main purpose is to have them all really different: it is unacceptable to have 2 IDs which differ by only 1 or 2 digits in sequence.
Do you have any ideas how to implement this without comparing each newly generated ID against every previously generated one?
Perhaps there is already some algorithm, or a simple MySQL function, to compare how close those strings are?
You could try the following formula for your IDs; you would only need to check that the ID value doesn't already exist in the table. Here, salt is a constant between 0 and 100 that never changes once you pick a value (I would recommend using a prime number, and definitely not 0):
ID = random integer * 101 + salt;
This generates ID values like the following (for salt = 73):
469956305
017775467
001195913
913620520
156482807
577463533
470183959
049290800
078643925
141526626
If you take any two of these ID values and compare them, you'll notice that no two numbers differ by only one or two digits in sequence. I wrote a script to compare all possible ID values between 0 and 3000000, and there were no two ID values of this form differing by 1 or 2 digits in sequence. If you want to test it out yourself, here's the script I used (in C#): http://ideone.com/lFHnlX - I reduced the upper limit because of timeout on IDEone.
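A minimal PHP sketch of that formula (salt = 73 as in the sample values above; the 9-digit cap and zero-padding are assumptions based on the question):
$salt = 73;                                   // fixed prime, chosen once and never changed
$id   = random_int(0, 9900989) * 101 + $salt; // 9900989 keeps the result within 9 digits
echo str_pad((string) $id, 9, '0', STR_PAD_LEFT);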
You want to get away with not-checking for uniqueness and you don't want IDs to be similar? Then you're really looking for UUIDs/GUIDs.
MySQL's built-in uuid() function will get you there.
As Robert Harvey points out, UUIDs are alphanumeric (not numeric) and longer than 9 characters, but you're going to have to sacrifice something – you cannot satisfy all of your constraints simultaneously.
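For example:
SELECT UUID(); -- returns a 36-character string such as '3f06af63-a93c-11e4-9797-00505690773f'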
I have just found this great tutorial as it is something that I need.
However, after having a look, it seems that this might be inefficient. The way it works is: first generate a unique key, then check if it exists in the database to make sure it really is unique. But the larger the database gets, the slower the function gets, right?
Instead, I was thinking: is there a way to add ordering to this function, so that all that has to be done is to check the previous entry in the DB and increment the key? That way it would always be unique.
function generate_chars()
{
    $num_chars = 4; // max length of random chars
    $i = 0;
    $my_keys = "123456789abcdefghijklmnopqrstuvwxyz"; // keys to be chosen from
    $keys_length = strlen($my_keys);
    $url = "";
    while ($i < $num_chars)
    {
        // pick an index from 0 to $keys_length - 1 so every key can be chosen
        $rand_num = mt_rand(0, $keys_length - 1);
        $url .= $my_keys[$rand_num];
        $i++;
    }
    return $url;
}

function isUnique($chars)
{
    // check the uniqueness of the chars
    global $link;
    $q = "SELECT * FROM `urls` WHERE `unique_chars`='" . $chars . "'";
    $r = mysql_query($q, $link);
    //echo mysql_num_rows($r); die();
    if (mysql_num_rows($r) > 0):
        return false;
    else:
        return true;
    endif;
}
The tiny url people like to use random tokens because then you can't just troll the tiny url links. "Where does #2 go?" "Oh, cool!" "Where does #3 go?" "Even cooler!" You can type in random characters but it's unlikely you'll hit a valid value.
Since the key space is rather sparse (4 characters with 36* possibilities each gives you 1,679,616 unique values; 5 gives you 60,466,176), the chance of collisions is small (indeed, that sparseness is a desired part of the design), and a good SQL index will make the lookup trivial (indeed, it's the primary lookup for the url, so they optimize around it).
If you really want to avoid the lookup and just use auto-increment, you can create a function that turns an integer into a string of seemingly random characters, with the ability to convert back. So "1" becomes "54jcdn" and "2" becomes "pqmw21". Similar to Base64 encoding, but not using consecutive characters.
(*) I actually like using fewer than 36 characters -- single-cased, no vowels, and no similar characters (1, l, I). This prevents accidental swear words and also makes it easier for someone to speak the value to someone else. I even map similar characters to each other, accepting "0" for "O". If you're entirely machine-based, you could use upper and lower case and all digits for even greater possibilities.
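A minimal sketch of that reversible encoding with a reduced alphabet along the lines of the footnote (the alphabet and function names are only examples; pick your own ordering and never change it):
$alphabet = '23456789bcdfghjkmnpqrstvwxz'; // no vowels, no 0/O, no 1/l/I

function encode_id(int $id, string $alphabet): string
{
    $base = strlen($alphabet);
    $out  = '';
    do {
        $out = $alphabet[$id % $base] . $out; // prepend the digit for this place value
        $id  = intdiv($id, $base);
    } while ($id > 0);
    return $out;
}

function decode_id(string $token, string $alphabet): int
{
    $base = strlen($alphabet);
    $id   = 0;
    foreach (str_split($token) as $char) {
        $id = $id * $base + strpos($alphabet, $char);
    }
    return $id;
}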
In the database table, there is an index on the unique_chars field, so I don't see why that would be slow or inefficient.
UNIQUE KEY `unique_chars` (`unique_chars`)
Don't rush to do premature optimization on something that you think might be slow.
Also, there may be some benefit in a url shortening service that generates random urls instead of sequential urls.
I don't know why you'd bother. The premise of the tutorial is to create a "random" URL. If the random space is large enough, then you can simply rely on pure, dumb luck. If your random character space is 62 characters (A-Za-z0-9), then the 4 characters they use, given a reasonable random number generator, give odds of 1 in 62^4, which is 1 in 14,776,336. Five characters is 1 in 916,132,832. So, a conflict is, literally, "1 in a billion".
Obviously, as the documents fill up, the odds of a collision increase.
With 10,000 documents and five characters, it's 1 in 91,613, almost 1 in 100,000 (for round numbers).
That means, for every new document, you have a 1 in 91,613 chance of hitting the DB again for another pull on the slot machine.
It is not deterministic. It's random. It's luck. In theory, you can hit a string of really, really, bad luck and just get collision after collision after collision. Also, it WILL, eventually, fill up. How many URLs do you plan on hashing?
But if 1 in 91,613 odds isn't good enough, boosting it to 6 chars makes it more than 1 in 5M for 10,000 documents. We're talking almost LOTTO odds here.
Simply put, make the key big enough (7 characters? 8?) and the problem pretty much "wishes" itself out of existence.
Couldn't you encode the URL as Base36 when it's generated, and then decode it when visited - that would allow you to remove the database completely?
A snippet from Channel9:
The formula is simple, just turn the Entry ID of our post, which is a long, into a short string by Base-36 encoding it and then stick 'http://ch9.ms/' onto the front of it. This produces reasonably short URLs, and can be computed at either end without any need for a database look up. The result, a URL like http://ch9.ms/A49H, is then used in creating the twitter link.
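In PHP that round trip is roughly a one-liner each way; a small sketch (472085 is simply the number that happens to encode to the A49H seen in the quote):
$entryId  = 472085;                                              // the post's auto-increment id
$code     = strtoupper(base_convert((string) $entryId, 10, 36)); // "A49H"
$shortUrl = 'http://ch9.ms/' . $code;
$original = (int) base_convert(strtolower($code), 36, 10);       // back to 472085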
I solved a similar problem by implementing an algorithm that generates serial numbers one by one in base 36. I had my own ordering of the base-36 characters, all of which are unique. Since it was generating numbers serially, I did not have to worry about duplication. The complexity and apparent randomness of the number depend on the ordering of the base-36 characters... though only to the public, because to my application they are just serial numbers :)
Check out this guy's functions - http://www.pgregg.com/projects/php/base_conversion/base_conversion.php source - http://www.pgregg.com/projects/php/base_conversion/base_conversion.inc.phps
You can use any base you like, for example, to convert 554512 to base 62, call
$tiny = base_base2base(554512, 10, 62); and that evaluates to $tiny = '2KFk'.
So, just pass in the unique id of the database record.
In a project I used this in, I removed a few characters from the $sChars string and am using base 58. You can also rearrange the characters in the string if you want the values to be less easy to guess.
You could of course add ordering by simply numbering the urls:
http://mytinyfier.com/1
http://mytinyfier.com/2
and so on. But if the hash key is indexed in the database (which it obviously should be), the performance boost would be minimal at best.
I wouldn't bother doing ordered enumeration for two reasons:
1) SQL servers are very effective at checking such hash collisions (given correct indexes)
2) That might hurt privacy, as users would be able to easily figure out what other users are tinyurl-ing.
Use autoincrement on the database, and get the latest id as described by http://www.acuras.co.uk/articles/24-php-use-mysqlinsertid-to-get-the-last-entered-auto-increment-value
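A minimal sketch of that approach with mysqli (connection details, table and column names are placeholders; the linked article shows the same idea with mysql_insert_id()):
$mysqli  = new mysqli('localhost', 'user', 'pass', 'mydb');
$longUrl = 'https://example.com/some/very/long/path';
$stmt    = $mysqli->prepare("INSERT INTO urls (long_url) VALUES (?)");
$stmt->bind_param('s', $longUrl);
$stmt->execute();
$newId   = $mysqli->insert_id; // the ordered, guaranteed-unique key for the short url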
Perhaps this is a bit off-answer, but my general rule for creating always-unique keys is simply md5( time() * 100 + rand( 0, 100 ) ); there is roughly a one-in-a-hundred chance that if two people use the service in the same second they will get the same result (unlikely, but not impossible).
That said, md5( rand( 0, n ) ) works too.
That might work, but the easiest way to approach the problem would probably be with hashing. Theoretically speaking, a hash lookup runs in O(1) time: it only has to perform the hash and then make one actual hit to the database to retrieve the value. You would then have to handle hash collisions, but it seems like this is probably what most of the tinyurl providers do. And a good hash function isn't terribly hard to write.
I have also created a small tinyurl service.
I wrote a script in Python that generates keys and stores them in a MySQL table named tokens with status U (unused).
But I am doing it in offline mode: I have a cron job on my VPS that runs the script every 10 minutes. The script checks whether there are fewer than 1000 keys in the table; if so, it keeps generating keys that are unique and do not already exist in the table and inserting them, until the count is back up to 1000.
For my service, 1000 keys per 10 minutes are more than enough; you can set the timing or the number of keys generated according to your needs.
Now, when a tiny url needs to be created on my website, my PHP script just fetches an unused key from the table and marks its status as T (taken). The PHP script does not have to bother about uniqueness, as my Python script has already populated the table with only unique keys.
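The fetch-and-mark step might look roughly like this (a sketch only; the tokens table, its token/status columns, and InnoDB row locking via FOR UPDATE are assumptions based on the description above):
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
$mysqli->begin_transaction();
$row  = $mysqli->query("SELECT token FROM tokens WHERE status = 'U' LIMIT 1 FOR UPDATE")->fetch_assoc();
$stmt = $mysqli->prepare("UPDATE tokens SET status = 'T' WHERE token = ?");
$stmt->bind_param('s', $row['token']); // assumes at least one unused token is available
$stmt->execute();
$mysqli->commit();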
Couldn't you just trim the hash to the length you wish?
$tinyURL = substr(md5($longURL . time()),0,4);
Granted, this may not provide as much pseudo randomness as using the entire string length. But, if you hash the long URL concatenated with the time(), wouldn't this be sufficient? Thoughts on using this method? Thanks!
I've seen lots of examples of how to use uniqid() in PHP to create a unique string, but need to create a unique order number (integers only, no letters).
I liked the idea of uniqid() because, from what I understand, it uses date/time, so the chance of another identical id being created is nil... (if I'm understanding the function correctly)
mt_rand should do the trick.
It generates a random number between its first parameter and its second parameter. For example, to generate a random number between 500 and 1000, you'd do:
$number = mt_rand(500,1000);
But if you're using it as an order number, you should just use an autoincrement column. Not only is that what it's there for, but what would you do in the event where the same number was generated more than once? Assuming you're using MySQL, you can read about autoincrement columns here.
Use hexdec to convert the hex string to a number. http://us.php.net/manual/en/function.hexdec.php
hexdec(uniqid())
uniqid() does what you're thinking it does.. but if you're plugging this value into a database, you're better off using an auto incrementing field for ids.. it really depends on what you're using the ids for.
I personally use date('U') to generate a string based on the number of seconds since the UNIX EPOCH. If this isn't random enough (if you think you're going to have two orders being placed within the same exact second) simply add another layer with mt_rand(0,9):
$uniqid = date('U') . mt_rand(0,9);
This will, in almost all cases, give you an incremental ID except for the case of having orders created at exactly the same second, in which case the second order could precede the first.