First of all, I'm aware of the vast keyspace of Bitcoin addresses. However, I've been experimenting with Vanitygen lately, and I was wondering: if all the addresses it generates were fed directly to a local server running the compiled blockchain instead of being written to a file, wouldn't that be feasible?
With the current Vanitygen source, is it possible to drop chunks of addresses directly onto the local server (say, Insight) and check for a positive balance?
How would you start with that?
Thanks in advance.
Here's my PHP code (feel free to use it):
<?php
// Each line of in.csv is assumed to hold a balance-query URL for the
// address in its first column, followed by the address/key data.
$lines = file('in.csv', FILE_IGNORE_NEW_LINES);
$i = 0;
foreach ($lines as $line_num => $line) {
    $fields = explode(',', $line);
    $balance = file_get_contents($fields[0]); // fetch the balance
    $i++;
    if ($balance !== false && $balance !== "0") {
        // non-zero balance: keep the whole line
        file_put_contents('out.txt', $line . "\n", FILE_APPEND);
    }
    echo "\n" . $i;
}
?>
Update: There is just one question here: can Vanitygen-generated addresses be directed straight to a local server running the compiled blockchain rather than written to a file? My code as of now checks about 1,000 addresses/second, while I've heard of people checking as many as 50K addresses/second for a positive balance. I've tried using cwebsocket from here but can't figure out a way to integrate it into Vanitygen.
To import the addresses, you need to format the private key into "Wallet Import Format", or WIF.
See: https://en.bitcoin.it/wiki/Wallet_import_format
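For reference, WIF is just Base58Check: a version byte (0x80 on mainnet) prepended to the 32-byte private key, followed by a 4-byte double-SHA256 checksum, all Base58-encoded. A minimal sketch, assuming the GMP extension is available (the function name is mine):
<?php
// Convert a raw 32-byte private key (64 hex chars) to mainnet WIF (uncompressed).
function privToWif($privHex) {
    $payload = hex2bin('80' . $privHex);                  // version byte + key
    $hash = hash('sha256', hash('sha256', $payload, true), true);
    $data = $payload . substr($hash, 0, 4);               // append checksum

    // Base58-encode $data; no leading-zero handling needed,
    // since the 0x80 prefix byte is never zero.
    $alphabet = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz';
    $num = gmp_import($data);
    $wif = '';
    while (gmp_cmp($num, 0) > 0) {
        list($num, $rem) = gmp_div_qr($num, 58);
        $wif = $alphabet[gmp_intval($rem)] . $wif;
    }
    return $wif;
}
?>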
The native client will want to reindex the whole blockchain per address if you import keypairs it did not generate itself.
The native client also has a cap on how many addresses it will track.
I've made this script to extract data from a CSV file.
$url = 'https://flux.netaffiliation.com/feed.php?maff=3E9867FCP3CB0566CA125F7935102835L51118FV4';
$data = array_map(function($line) { return str_getcsv($line, '|'); }, file($url));
It's working exactly as I want, but I've just been told that it's not the proper way to do it and that I really should use fgetcsv instead.
Is that right? I've tried many ways to do it with fgetcsv but didn't manage to get anything close.
Here is an example of what I would like to get as output:
$data[4298][0] = 889698467841
$data[4298][1] = Figurine Funko Pop! - N° 790 - Disney : Mighty Ducks - Coach Bombay
$data[4298][2] = 108740
$data[4298][3] = 14.99
First of all, there is no single "proper" way to do things in programming. It is up to you and depends on your use case.
I just downloaded the CSV file; it is about 20 MB. In your solution you download the whole file at once. If you have no memory restrictions and the delay of downloading the whole file up front doesn't matter, your solution is the better one when you want to guarantee that the whole content gets processed: you read everything at once, and further processing does not depend on external factors like your Internet connection.
If you use fgetcsv, you read from the URL line by line, sequentially. Your connection has to stay open until each line has been processed. You do not need a big memory allocation, but it takes longer to process the whole content.
Both methods have their pros and cons. How often will you run this script? Consider your use case and decide which method is best for you.
Here is the same result without array_map():
$url = 'https://flux.netaffiliation.com/feed.php?maff=3E9867FCP3CB0566CA125F7935102835L51118FV4';
$lines = file($url);
$data = [];
foreach ($lines as $line)
{
    $data[] = str_getcsv(trim($line), '|');
    // optionally:
    // $data[] = explode('|', trim($line));
}
$lines = null;
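For completeness, here is a sketch of the fgetcsv variant the asker mentioned, streaming straight from the URL (assuming allow_url_fopen is enabled):
$url = 'https://flux.netaffiliation.com/feed.php?maff=3E9867FCP3CB0566CA125F7935102835L51118FV4';
$data = [];
if (($fh = fopen($url, 'r')) !== false) {
    // length 0 = no line-length limit; '|' is the delimiter
    while (($row = fgetcsv($fh, 0, '|')) !== false) {
        $data[] = $row;
    }
    fclose($fh);
}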
I'm parsing a 1,000,000-line CSV file in PHP to recover this data: IP address, DNS, and the cipher suites used.
In order to know whether some DNS names (having several mail servers) use different cipher suites across their servers, I store in an array one object per DNS name containing the name, the list of IP addresses of its servers, and the list of cipher suites it uses. At the end I have an array of 1,000,000 elements. To count the DNS names whose servers use different cipher-suite configs, I do:
$res = 0;
foreach ($this->allDNS as $dnsObject) {
    if (count($dnsObject->getCiphers()) > 1) { // several different configs
        $res++;
    }
}
return $res;
Problem: this consumes too much memory; I can't run my code on the 1,000,000-line CSV (if I don't store the data in an array, I parse the file in 20 seconds). Is there a way around this problem?
NB: I already put
ini_set('memory_limit', '-1');
but this line just bypasses the memory error.
Saving all of that CSV data in memory will definitely take its toll.
One logical solution to your problem is to have a database store all of that data.
You may refer to this link for a tutorial on parsing your CSV file and storing it to database.
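For illustration, a minimal sketch of that approach using PDO; the DSN, credentials, and the scan_results table are assumptions, not from the post:
// Stream the CSV and insert one row at a time, so memory stays flat.
// Assumes the CSV has exactly the three columns: ip, dns, cipher.
$pdo = new PDO('mysql:host=localhost;dbname=scans', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO scan_results (ip, dns, cipher) VALUES (?, ?, ?)');

$fh = fopen('data.csv', 'r');
while (($row = fgetcsv($fh)) !== false) {
    $stmt->execute($row);
}
fclose($fh);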
Write the processed data (for each line separately) into one file (or database):
file_put_contents('data.txt', $parsingresult, FILE_APPEND);
FILE_APPEND will append $parsingresult to the end of the file's content.
Then you can access the processed data with file_get_contents() or file().
Anyway, I think using a database and some pre-processing would be the best solution if this is needed more often.
You can use fgetcsv() to read and parse the CSV file one line at a time. Keep the data you need and discard the line:
// Store the useful data here
$data = array();
// Open the CSV file
$fh = fopen('data.csv', 'r');
// The first line probably contains the column names
$header = fgetcsv($fh);
// Read and parse one data line at a time
while ($row = fgetcsv($fh)) {
    // Get the desired columns from $row
    // Use $header if the order or number of columns is not known in advance
    // Store the gathered info into $data
}
// Close the CSV file
fclose($fh);
This way it uses the minimum amount of memory needed to parse the CSV file.
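Applied to the question above, a sketch of that idea: aggregate only a set of ciphers per DNS name while streaming, so memory scales with the number of distinct DNS names rather than with the number of lines (the ip, dns, cipher column order is an assumption):
$perDns = array();                      // dns => set of ciphers seen
$fh = fopen('data.csv', 'r');
fgetcsv($fh);                           // skip the header line (assumed present)
while (($row = fgetcsv($fh)) !== false) {
    list($ip, $dns, $cipher) = $row;    // assumed column order
    $perDns[$dns][$cipher] = true;      // array keys act as a set
}
fclose($fh);

$res = 0;
foreach ($perDns as $ciphers) {
    if (count($ciphers) > 1) {          // more than one distinct config
        $res++;
    }
}
echo $res;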
I am trying to extract user email addresses from files on my server. The problem is that most of the files are .txt, but some are CSV files with a .txt extension. When I try to read and extract, I am not able to read the CSV files that have a .txt extension. Here is my code:
<?php
$handle = fopen('2.txt', 'r');
while (!feof($handle)) {
    $string = fgets($handle);
    // pattern to match email addresses
    $pattern = '/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}/i';
    preg_match_all($pattern, $string, $matches);
    foreach ($matches[0] as $match) {
        echo $match;
        echo '<br><br>';
    }
}
?>
I have tried to use this code for that. The program reads the CSV files as one complete block, but reads the text files line by line. There are thousands of files, so it is difficult to identify which is which.
Could you suggest what I should do to resolve this? A solution that can read either format would be awesome.
Well, your files are different, so you will have to take a different approach for each of them. In more general terms this is usually called adapting, and it is mostly provided by the Adapter design pattern.
Using the adapter design pattern, you would have code that inspects the extension of the file to be opened and switches on either txt or csv. Based on the value, you would retrieve a TxtParser or a CsvParser respectively.
However, before diving deep into this territory you might want to have a look at the files first. I cannot say this for sure without seeing the structures, but you can. If the contents of the text and CSV files are the same, a very simple approach is to change the extension of all files to either txt or csv and process them with the same logic, knowing that files with the same extension will now be handled in the same manner.
But from what I understood, the file structures actually differ. So, to keep your code concise, use the adapter pattern: two separate classes/functions for parsing, plus one on top that chooses the right parsing function (this top function would actually be a form of a strategy) and runs it. A sketch follows.
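A rough sketch of that dispatch, under the assumption that both file kinds should end up as plain strings to scan for addresses (all class and function names here are illustrative):
interface LineParser {
    public function parse($path); // returns an array of strings
}

class TxtParser implements LineParser {
    public function parse($path) {
        return file($path, FILE_IGNORE_NEW_LINES);
    }
}

class CsvParser implements LineParser {
    public function parse($path) {
        // join the CSV fields with spaces so the email regex can run per line
        return array_map(function ($line) {
            return implode(' ', str_getcsv($line));
        }, file($path, FILE_IGNORE_NEW_LINES));
    }
}

function parserFor($path) {
    switch (strtolower(pathinfo($path, PATHINFO_EXTENSION))) {
        case 'csv':
            return new CsvParser();
        default:
            return new TxtParser();
    }
}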
Either way, I very much doubt there is a ready-made solution for the problem you are facing, as a file's structure is mostly your own.
OK, so the problem is that the CSV file has very long lines. Given that restriction, I suggest you use the example from php.net. Here it is:
$handle = #fopen("/tmp/inputfile.txt", "r");
if ($handle) {
while (($buffer = fgets($handle, 4096)) !== false) {
echo $buffer;
// do your operation for searching here
}
if (!feof($handle)) {
echo "Error: unexpected fgets() fail\n";
}
fclose($handle);
}
I'm writing a feature for an admin panel that blocks IP addresses at the Apache level. The file is called blacklist.txt and looks like 10.0.0.1,10.0.0.2,10.0.0.3, ... all on a single line, with each IP address separated by a comma. After reading What is the best way to write a large file to disk in PHP?, I am still unsure of the best practices on the matter.
Here's what I want to do: if an administrator presses the 'ban hammer', the file is read looking for strpos($file, $ip); if the IP is not found, it is appended to the end of the file and the .htaccess file blocks it accordingly.
Question: is a .txt file suitable for this potentially large amount of data? I do not want to execute a query to check whether someone is banned every time a page is requested.
EDIT:
The purpose is to block individual IP addresses that have had 10 failed login attempts in the past 12 hours. I would think the 'recover my password' feature would keep a normal client from hitting that limit.
Question: is a .txt file suitable for this potentially large amount of data?
No, it is not. A database with proper indexing is.
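A hedged sketch of what that could look like (the SQLite file and table name are assumptions): a primary key on the IP makes the per-request check a single indexed lookup.
$pdo = new PDO('sqlite:bans.db');
$pdo->exec('CREATE TABLE IF NOT EXISTS bans (ip TEXT PRIMARY KEY)');

// Ban an IP (INSERT OR IGNORE keeps it idempotent in SQLite);
// $ip comes from the admin's 'ban hammer' handler.
$pdo->prepare('INSERT OR IGNORE INTO bans (ip) VALUES (?)')
    ->execute(array($ip));

// Per-request check: one indexed lookup
$stmt = $pdo->prepare('SELECT 1 FROM bans WHERE ip = ?');
$stmt->execute(array($_SERVER['REMOTE_ADDR']));
$isBanned = (bool)$stmt->fetchColumn();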
First, for reading your file in CSV format, there are many ways. For example:
$rows = array_map('str_getcsv', file('myfile.csv'));
$header = array_shift($rows);
$csv = array();
foreach ($rows as $row) {
    $csv[] = array_combine($header, $row);
}
src: http://steindom.com/articles/shortest-php-code-convert-csv-associative-array
For checking on each page load while minimizing reads of that file, you can use a memory cache, something like Memcache, and then search the array for the incoming IP. Note: a memory cache is faster than a database query.
PHP shared memory ref: http://www.php.net/manual/en/book.shmop.php
memCache php.net/memcache
Array Search php.net/in_array
also to return the key if value found php.net/array_search
Note: in a 1 MB file you can store roughly 65K IPs, assuming each IP takes the 16-byte form "255.255.255.255,".
It's even better if you use the IP as the array key; then, instead of searching the array for the IP, you can check whether the key exists with php.net/array_key_exists.
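A sketch combining both suggestions (the Memcache host, cache key, and TTL are assumptions): cache the blacklist as an array keyed by IP, so the per-request check is a hash lookup rather than a file read.
$mc = new Memcache();
$mc->connect('127.0.0.1', 11211);

$banned = $mc->get('blacklist');
if ($banned === false) {
    // cache miss: rebuild from the one-line, comma-separated file
    $ips = array_filter(explode(',', trim(file_get_contents('blacklist.txt'))));
    $banned = array_fill_keys($ips, true);
    $mc->set('blacklist', $banned, 0, 300); // refresh every 5 minutes
}

if (array_key_exists($_SERVER['REMOTE_ADDR'], $banned)) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}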
This has been bugging me for ages now, but I can't figure it out.
Basically, I'm using a hit counter which stores unique IP addresses in a file. What I'm trying to do is make it count how many hits each IP address has made.
So instead of the file reading:
222.111.111.111
222.111.111.112
222.111.111.113
I want it to read:
222.111.111.111 - 5
222.111.111.112 - 9
222.111.111.113 - 41
This is the code I'm using:
$file = "stats.php";
$ip_list = file($file);
$visitors = count($ip_list);
if (!in_array($_SERVER['REMOTE_ADDR'] . "\n", $ip_list))
{
$fp = fopen($file,"a");
fwrite($fp, $_SERVER['REMOTE_ADDR'] . "\n");
fclose($fp);
$visitors++;
}
What I was trying to do is change it to:
if (!in_array($_SERVER['REMOTE_ADDR'] . " - [ANY NUMBER] \n", $ip_list))
{
    $fp = fopen($file, "a");
    fwrite($fp, $_SERVER['REMOTE_ADDR'] . " - 1 \n");
    fclose($fp);
    $visitors++;
}
else if (in_array($_SERVER['REMOTE_ADDR'] . " - [ANY NUMBER] \n", $ip_list))
{
    CHANGE [ANY NUMBER] TO [ANY NUMBER]+1
}
I think I can figure out the adding part, but how do I represent the [ANY NUMBER] part so that it finds the IP whatever the following number is?
I realise I'm probably going about this all wrong, but if someone could give me a clue I'd really appreciate it.
Thanks.
This is a bad idea; don't do it this way.
It's normal to store website statistics in the file system, but not with pre-aggregation applied to them.
If you are going to use the file system, then do post-aggregation on the data; otherwise use a database.
What you are doing is a very bad idea.
But let's first answer the actual question you are asking.
To be able to do that, you will first have to parse the file into a data structure that allows it. I'd personally recommend an array in the form IP => AMOUNT.
For example (untested code):
$fd = file($file);
$ip_list = array();
foreach ($fd as $line) {
    list($ip, $amount) = explode("-", $line);
    $ip_list[$ip] = $amount;
}
Note that the code is not perfect: it leaves a space at the end of $ip and another in front of $amount due to the nature of your original data, but it works well enough to point you in the right direction. A more accurate solution would involve regular expressions or changing the original data source to a more convenient format. A sketch of the counting and write-back step follows.
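Completing the idea with an untested sketch: normalize the stray spaces mentioned above, bump the count for the current IP, and rewrite the file in the same "IP - N" format.
// Trim the spaces left over from the naive explode("-") above
$ip_list = array_combine(array_map('trim', array_keys($ip_list)),
                         array_map('trim', $ip_list));

$ip = $_SERVER['REMOTE_ADDR'];
$ip_list[$ip] = isset($ip_list[$ip]) ? (int)$ip_list[$ip] + 1 : 1;

// Rewrite the whole file with the updated counts
$out = '';
foreach ($ip_list as $addr => $amount) {
    $out .= $addr . ' - ' . (int)$amount . "\n";
}
file_put_contents($file, $out);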
Now, the real answer to your actual problem.
Your process will quickly become a performance bottleneck, as you would have to open that file, process it, and write it all back afterwards (not sure if you can do in-line editing of an open file) for every request.
As you are trying to do a per-IP hit count, there are far better solutions to your problem:
Use an existing solution for it (like piwik)
Use an actual database for your data
Keep your file simple, with just a list of IPs, and post-process it offline periodically into the format you want
You can avoid writing that file altogether if you have access to your web server's logs (and they are set up to log every request with the originating IP); you can post-process that file instead
in_array() does a basic whole-string match; it will NOT look for substrings. Ignoring how bad an idea it is to use a flat file for data storage, what you want is preg_grep, which allows you to use regexes:
$ip_list = file('ips.txt');
$matches = preg_grep('/^\d+\.\d+\.\d+\.\d+ - \d+$/', $ip_list);
Of course, this is a very basic and very loose IP address match. preg_grep() preserves the keys of the matched lines, but you still have to parse each matched line yourself before you can change its count in $ip_list.