Laravel - Saving a .txt File to mySQL Database - php

I have been struggling to create something that can save .txt files into a MySQL database. I have managed to build something that saves JSON data, but not .txt files.
Here is the .txt file in question: https://celestrak.org/NORAD/elements/sbas.txt. This file contains a few satellites with their data. Each satellite has exactly three lines, no exceptions. For example, here is one satellite:
AOR-E (EGNOS/PRN 120)
1 24307U 96053A 17257.68868765 -.00000150 00000-0 00000-0 0 9992
2 24307 2.8040 77.2609 0004175 104.1816 44.8421 1.00271450 76939
The first line tells us the satellite name. The next two lines give us some parameters and always start with the numbers 1 and 2. This format will not change: the name on line 0, followed by two lines that start with 1 and 2 respectively.
What I want to be able to do is to create a row for each satellite - with the columns object_name for line 0, tle_line1 for line 1 and tle_line2 for line 2.
I have managed to create something that saves data from a JSON format into the SQL database. Maybe something can be derived from that?
I am using Laravel and Guzzle for the HTTP requests:
$api = new Client([
    'base_uri' => 'https://celestrak.org',
]);
$response = $api->get('jsonlocater');
$data = json_decode($response->getBody()->getContents(), true);
foreach ($data as $attributes) {
    $attributes = array_change_key_case($attributes, CASE_LOWER);
    Satellites::create($attributes);
}

First of all, I'm not sure what your response format is, but using vanilla PHP you could do something like the following to fetch the contents into an array:
$url = 'https://celestrak.org/NORAD/elements/sbas.txt';
// FILE_SKIP_EMPTY_LINES keeps a trailing blank line from producing
// an incomplete final chunk below.
$lines = file($url, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$arrays = array_map(function ($chunk) {
    $columns = ['object_name', 'tle_line1', 'tle_line2'];
    return array_combine($columns, array_map('trim', $chunk));
}, array_chunk($lines, 3));
Now, if you dd($arrays), the result is a list of associative arrays, one per satellite.
From this result, you should easily be able to create the entries in your database: each array should become an entry/row in your database table. For example:
\DB::table('table_name')->insert($arrays);
Note that if you have timestamps (created_at & updated_at) in your table, you'll have to add those fields to each array when generating the arrays.
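Putting the two pieces together, a small helper could turn the raw .txt body (fetched with Guzzle as in the question, or any other way) into rows keyed by the question's column names. This is a sketch: parseTle() is a hypothetical name, not a library function, and it assumes the strict three-lines-per-satellite format described above.

```php
// Hypothetical helper: turn raw TLE text into rows ready for the
// satellites table. Assumes exactly three lines per satellite.
function parseTle(string $text): array
{
    // Split on newlines, trim, and drop empty lines (e.g. a trailing one).
    $lines = array_values(array_filter(
        array_map('trim', preg_split('/\r?\n/', $text)),
        'strlen'
    ));

    $rows = [];
    foreach (array_chunk($lines, 3) as $chunk) {
        if (count($chunk) === 3) {
            $rows[] = array_combine(['object_name', 'tle_line1', 'tle_line2'], $chunk);
        }
    }
    return $rows;
}
```

Each returned row could then be passed to Satellites::create($row) in a loop, provided those three columns are listed in the model's $fillable array.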

Related

Getting all from json api array using php

I want to improve how I fetch data from an API. In this case I want to fetch every app id from the Steam API and list them one per line in a .txt file. Do I need an infinite (or very high-count) loop, incrementing a counter after every iteration, to fetch them all? I mean, counting up from id 0 with, for example, a foreach loop? I'm thinking it would take ages and sounds very much like bad practice.
How do I get every appid {"appid:" n} from the response of http://api.steampowered.com/ISteamApps/GetAppList/v0001?
<?php
//API-URL
$url = "http://api.steampowered.com/ISteamApps/GetAppList/v0001";
//Fetch content and decode
$game_json = json_decode(curl_get_contents($url), true);
//Define file
$file = 'steam.txt';
//This is where I'm lost. One massive array {"app": []} with lots of {"appid": n}.
//I know how to get one specific targeted line, but how do I get them all?
$line = $game_json['applist']['apps']['app']['appid'][every single line, one at a time]
//Write to file, one id per line.
//Like:
//5
//7
//8
//and so on
file_put_contents($file, $line, FILE_APPEND);
?>
Any pointing just in the right direction will be MUCH appreciated. Thanks!
You don't need to worry about counters with foreach loops; they are designed to visit and work with each item in the object.
$file = "steam.txt";
$game_list = "";
$url = "http://api.steampowered.com/ISteamApps/GetAppList/v0001";
$game_json = file_get_contents($url);
$games = json_decode($game_json);
foreach ($games->applist->apps->app as $game) {
    // now $game is a single entry, e.g. {"appid":5,"name":"Dedicated server"}
    $game_list .= "$game->appid\n";
}
file_put_contents($file, $game_list);
Now you have a text file with 28000 numbers in it. Congratulations?
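For anyone worried about building one large string in memory first, a minor variant streams each id straight to the file instead. writeAppIds() is a made-up helper name, and the JSON shape is assumed to match the v0001 response used above.

```php
// Hypothetical helper: write one appid per line from the decoded
// v0001 response object, streaming instead of accumulating a string.
function writeAppIds(object $games, string $file): void
{
    $out = fopen($file, 'w');
    foreach ($games->applist->apps->app as $game) {
        fwrite($out, $game->appid . "\n");
    }
    fclose($out);
}
```

For ~28000 ids either way is fine; streaming only starts to matter for much larger payloads.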

PHP - SPL file object returning null on seek

Alright, I am using an SPL file object in a+ mode. The data appends correctly, but I want to validate the last row of that data by reading it back from the file. The data is in CSV format; it works for the first line in the file and appends the second line correctly, but when seeking to the second line it returns NULL.
<?php
$handle = new SplFileObject("filePath here","a+");
$handle->setFlags(SplFileObject::READ_CSV);
// write using fputcsv
$status = $handle->fputcsv($data,',');
$status = $handle->fputcsv($data,',');
$handle->seek(1);
$fields = $handle->fgetcsv(',');
$fields2 = $handle->current();
// Both fields and fields2 return NULL
There are two lines' worth of data in the file. Please help!
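One plausible fix, not verified on every PHP version: flush the appended rows before seeking, then read the row back with current() alone (seek() already positions on the line, so a following fgetcsv() would skip past it):

```php
// Sketch, assuming PHP 8: append two CSV rows, flush the writes, then
// seek back to line index 1 (the second line) and read it via current().
$path = tempnam(sys_get_temp_dir(), 'csv'); // stand-in for "filePath here"

$handle = new SplFileObject($path, 'a+');
$handle->setFlags(SplFileObject::READ_CSV);

$handle->fputcsv(['a', 'b']);
$handle->fputcsv(['c', 'd']);
$handle->fflush();            // make sure the appended rows hit the file

$handle->seek(1);             // position on the second line
$fields = $handle->current(); // with READ_CSV this is the parsed row
```

The likely culprit in the original snippet is the combination of an unflushed write buffer and calling fgetcsv() after seek(), which advances past the line you wanted.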

Get random email address from CSV file

I am developing a PHP application where I need to fetch 5 random email addresses from a CSV file and send them to a user.
I have worked with CSV files many times but don't know how to fetch a random subset with a limit.
NOTE: the CSV file has more than 200k emails.
If anyone has an idea or suggestion, please share it.
If the CSV is too big and won't be saved in a DB
You'll have to loop through all of the rows in the CSV once to count them.
You'll have to call a random-number generator function (rand, mt_rand, others...) parametrized to output numbers from 0 to $count - 1, and call it 5 times (to get 5 numbers).
You'll have to loop through all of the rows in the CSV again and only copy the information for the rows whose number matches one of the randomly generated values.
Nota bene: don't use file_get_contents with str_getcsv; use fopen with fgetcsv instead. The first approach loads the entire file into memory, which we don't want to do; the second reads the file line by line.
If the CSV is too big and will be saved in a DB
Loop through the CSV rows and insert each record into the DB.
Use a select query with LIMIT 5 and ORDER BY RAND().
If the CSV is small enough to fit into memory
Loop through the CSV rows and create an array holding all of them.
You'll have to call a random-number generator function (rand, mt_rand, others...) parametrized to output numbers from 0 to the array count minus one, and call it 5 times (to get 5 numbers).
Then retrieve the rows from the big array by their index number -- using the randomly generated numbers as indexes.
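The first (streaming, no-DB) scenario above could be sketched as follows. pickRandomEmails() is a hypothetical name, and the file is assumed to hold one email per row in its first column:

```php
// Two-pass streaming pick: count the rows first, choose distinct random
// row numbers, then collect only those rows on a second pass.
function pickRandomEmails(string $file, int $n): array
{
    $fh = fopen($file, 'r');

    // Pass 1: count rows without loading the file into memory.
    $count = 0;
    while (fgetcsv($fh) !== false) {
        $count++;
    }
    if ($count < $n) {
        $n = $count; // fewer rows than requested: take them all
    }

    // Choose $n distinct 0-based row numbers (keys enforce uniqueness).
    $wanted = [];
    while (count($wanted) < $n) {
        $wanted[mt_rand(0, $count - 1)] = true;
    }

    // Pass 2: keep only the chosen rows.
    rewind($fh);
    $picked = [];
    $rowNo = 0;
    while (($row = fgetcsv($fh)) !== false) {
        if (isset($wanted[$rowNo])) {
            $picked[] = $row[0]; // email assumed to be in the first column
        }
        $rowNo++;
    }
    fclose($fh);
    return $picked;
}
```

At 200k rows this stays at constant memory, at the cost of reading the file twice.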
If the CSV file is not too big you can load the whole file into an array to get something like
$e[0] = 'someone1@somewhere.com';
$e[1] = 'someone2@somewhere.com';
$e[2] = 'someone3@somewhere.com';
then you can pick a random email with $e[rand(0, sizeof($e) - 1)];
and do this 5 times (with a check for duplicates)
Read all emails from the CSV, then select 5 random emails from the email array.
To select 5 random entries use the array_rand function. Note that it returns keys, not values:
$email = array('test@test.com','test2@test.com','test3@test.com','test4@test.com','test5@test.com');
$keys = array_rand($email, 5);
print_r(array_intersect_key($email, array_flip($keys))); // 5 random emails
For a large number of emails, try something like:
$max = count($email);
$email_rand = array();
for ($i = 0; $i < 5; $i++) {
    $a = mt_rand(0, $max - 1);
    $email_rand[] = $email[$a]; // note: duplicates are possible here
}
print_r($email_rand);
<?php
$handle = fopen('test.csv', 'r');
// Read every row into an array (fgetcsv only returns one row per call).
$csv = array();
while (($row = fgetcsv($handle)) !== false) {
    $csv[] = $row;
}
fclose($handle);

function randomMail($key)
{
    global $csv;
    return $csv[$key];
}

$randomKey = array_rand($csv, 5);
print_r(array_map('randomMail', $randomKey));
This is a small utility to achieve what you expect; change the declaration of the randomMail function as you see fit.
for ($i = 0; $i < 5; $i++) {
    // $RANDOM is a bash feature, so invoke bash explicitly. Caveat: $RANDOM
    // only goes up to 32767, so rows beyond that can never be selected.
    $cmd = 'bash -c \'awk -v n="$((RANDOM % $(wc -l < ~/Downloads/email.csv) + 1))" "NR == n" ~/Downloads/email.csv >> listemail.txt\'';
    exec($cmd);
}
Afterwards, read the selected emails from listemail.txt.

Preg-Match-All - Synonym File

I am writing a PHP script that will parse through a file (synonyms.dat) and match each list of synonyms with its parent word, for about 150k words.
Example from file:
1|2
(adj)|one|i|ane|cardinal
(noun)|one|I|ace|single|unity|digit|figure
1-dodecanol|1
(noun)|lauryl alcohol|alcohol
1-hitter|1
(noun)|one-hitter|baseball|baseball game|ball
10|2
(adj)|ten|x|cardinal
(noun)|ten|X|tenner|decade|large integer
100|2
(adj)|hundred|a hundred|one hundred|c|cardinal
(noun)|hundred|C|century|one C|centred|large integer
1000|2
(adj)|thousand|a thousand|one thousand|m|k|cardinal
(noun)|thousand|one thousand|M|K|chiliad|G|grand|thou|yard|large integer
10000|1
(noun)|ten thousand|myriad|large
In the example above I want to link ten thousand, myriad, large to the word 10000.
I have tried various methods of reading the .dat file into memory using file_get_contents, exploding the file at \n, and using various array-search techniques to find the 'parent' word and its synonyms. However, this is extremely slow and more often than not crashes my web server.
I believe what I need to do is use preg_match_all to split the string, and then just iterate over it, inserting into my database where appropriate.
$contents = file_get_contents($page);
preg_match_all("/([^\s]+)\|[0-9].*/",$contents,$out, PREG_SET_ORDER);
This matches each of the head-word lines:
1|2
1-dodecanol|1
1-hitter|1
But I don't know how to capture the fields between each match, i.e. the synonyms themselves.
This script is intended to be run once, to get all the information into my database appropriately. For those interested, I have a database 'synonym_index' which holds a unique id of each word, as well as the word. Then another table 'synonym_listing' which contains a 'word_id' column and a 'synomym_id' column where each column is a foreign key to synonym_index. There can be multiple synonym_id's to each word_id.
Your help is greatly appreciated!
You can use explode() to split each line into fields. (Or, depending on the precise format of the input, fgetcsv() might be a better choice.)
Illustrative example, which will almost certainly need adjustment for your specific use case and data format:
$infile = fopen('synonyms.dat', 'r');
while (!feof($infile)) {
    $line = rtrim(fgets($infile), "\r\n");
    if ($line === '') {
        continue;
    }
    // Line follows the format HEAD_WORD|NUMBER_OF_SYNONYM_LINES
    list($headWord, $n) = explode('|', $line);
    $synonyms = array();
    // For each synonym line...
    while ($n--) {
        $line = rtrim(fgets($infile), "\r\n");
        $fields = explode('|', $line);
        $partOfSpeech = substr(array_shift($fields), 1, -1);
        $synonyms[$partOfSpeech] = $fields;
    }
    // Now here, when $headWord is '10000', $synonyms should be array(
    //     'noun' => array('ten thousand', 'myriad', 'large')
    // )
}
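At the point marked by the final comment, each head word and its synonyms could be pushed into the two tables described in the question (synonym_index and synonym_listing). A sketch, using PDO with SQLite purely for illustration; the table and column names are taken from the question, the function names are made up:

```php
// Get-or-create a word's id in synonym_index.
function wordId(PDO $db, string $word): int
{
    $stmt = $db->prepare('SELECT id FROM synonym_index WHERE word = ?');
    $stmt->execute([$word]);
    $id = $stmt->fetchColumn();
    if ($id !== false) {
        return (int) $id;
    }
    $db->prepare('INSERT INTO synonym_index (word) VALUES (?)')->execute([$word]);
    return (int) $db->lastInsertId();
}

// Link a head word to every synonym in the parsed per-part-of-speech map.
function linkSynonyms(PDO $db, string $headWord, array $synonymsByPos): void
{
    $headId = wordId($db, $headWord);
    $ins = $db->prepare('INSERT INTO synonym_listing (word_id, synonym_id) VALUES (?, ?)');
    foreach ($synonymsByPos as $partOfSpeech => $synonyms) {
        foreach ($synonyms as $synonym) {
            $ins->execute([$headId, wordId($db, $synonym)]);
        }
    }
}
```

For a one-off import of 150k words, wrapping the whole loop in a single transaction will speed the inserts up considerably.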
Wow, for this type of functionality you have databases, with tables and indices.
PHP is meant to serve a request/response, not to read a big file into memory. I advise you to put the data in a database. That will be much faster, and it is made for exactly this.

how to insert value in a particular location in csv file using php

Is it possible to write at a particular location in a CSV file using PHP?
I don't want to append data at the end of the CSV file; I want to add data at the end of a row that already has values in the CSV.
thanks in advance
No, it's not possible to insert new data into the middle of a file, due to the nature of filesystems.
Only appending at the end is possible.
So, the only solution is to make another file: write the beginning part of the source, append the new value, then append the rest of the source file, and finally rename the resulting file to the original name.
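The rewrite-and-rename steps just described might look like this. appendToCsvRow() is a made-up name, rows are assumed 0-based, and the new value is appended as an extra cell at the end of the target row:

```php
// Sketch: append a value to the end of one existing row by streaming
// the file to a temp copy and renaming it over the original.
function appendToCsvRow(string $file, int $targetRow, string $value): void
{
    $in  = fopen($file, 'r');
    $tmp = $file . '.tmp';
    $out = fopen($tmp, 'w');

    $rowNo = 0;
    while (($line = fgets($in)) !== false) {
        $line = rtrim($line, "\r\n");
        if ($rowNo === $targetRow) {
            $line .= ',' . $value; // add the new cell to this row
        }
        fwrite($out, $line . "\n");
        $rowNo++;
    }
    fclose($in);
    fclose($out);
    rename($tmp, $file); // replace the original in one step
}
```

Because only one line is held in memory at a time, this works for CSV files of any size.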
There you go. Complete working code:
<?php
// A helper function to insert data at any position in an array.
function array_insert($array, $pos, $val)
{
    $array2 = array_splice($array, $pos);
    $array[] = $val;
    return array_merge($array, $array2);
}

// What and where you want to insert.
$DataToInsert = '11,Shamit,Male';
$PositionToInsert = 3;

// Full path & name of the CSV file.
$FileName = 'data.csv';

// Read the file into an array of lines, without trailing newlines
// (otherwise the implode() below would double them).
$arrLines = file($FileName, FILE_IGNORE_NEW_LINES);

// Insert the data into this array.
$Result = array_insert($arrLines, $PositionToInsert, $DataToInsert);

// Convert the result array back to a string and write it to the file.
file_put_contents($FileName, implode("\n", $Result));
?>
Technically, Col. Shrapnel's answer is absolutely right.
Your problem is that you don't want to deal with all these file operations just to change some data, and I agree with you. But you're looking for the solution at the wrong level. Put this problem at a higher level: create a model that represents an entity in your CSV database, modify the model's state, and call its save() method. That method should be responsible for writing the model's state in CSV format.
Still, you can use a CSV library that abstracts low level operations for you. For instance, parsecsv-for-php allows you to target a specific cell:
$csv = new parseCSV();
$csv->sort_by = 'id';
$csv->parse('data.csv');
# "4" is the value of the "id" column of the CSV row
$csv->data[4]['firstname'] = 'John';
$csv->save();
