I'm trying to use the Google Charts API to present power (watts) over time. I have an SQLite database in which I store my data. A PHP script then gathers the data and outputs it into a JSON file. For the charts to work I need to keep a specific structure. Whenever I run the PHP script, the JSON file gets overwritten with the new data. I need help with the PHP script so that it always outputs the data according to Google's parameters.
I'm aiming to end up with an area chart that plots power on the Y axis and the datestamps on the X axis.
I've read these documents in Google's documentation, but I can't figure out how to output the data the way they do.
https://developers.google.com/chart/interactive/docs/php_example
https://developers.google.com/chart/interactive/docs/reference#dataparam
// Collect the raw columns from the query result.
while ($res = $stmt->fetch(PDO::FETCH_ASSOC)) {
    $voltage[] = $res['voltage'];
    $current[] = $res['current'];
    $datestamp[] = $res['datestamp'];
}

// Convert the datestamps to unix timestamps.
$unixtimestamp = array();
foreach ($datestamp as $nebulosa) {
    $unixtimestamp[] = strtotime($nebulosa);
}

// Power (W) = voltage (V) * current (A), row by row.
$power = array();
foreach ($voltage as $key => $door) {
    $power[] = $door * $current[$key];
}

//echo "<pre>", print_r($power, true), "</pre>";

// Write both arrays to data.json (this produces the broken output shown below).
$fp = fopen('data.json', 'w');
fwrite($fp, json_encode($power));
fwrite($fp, json_encode($datestamp));
fclose($fp);
The JSON file has this format after running the PHP script:
[3.468,5]["2016-10-14 14:56:22","2016-10-14 14:56:23"]
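For reference, Google's PHP example linked above builds a single JSON object with 'cols' and 'rows' keys rather than two separate arrays. Here is a minimal sketch of what that could look like (an illustration reusing the $power and $unixtimestamp arrays computed above, not the asker's final code):
// Sketch: build the DataTable literal Google Charts expects,
// with a datetime column and a number column.
$table = array(
    'cols' => array(
        array('label' => 'Time', 'type' => 'datetime'),
        array('label' => 'Power (W)', 'type' => 'number')
    ),
    'rows' => array()
);

foreach ($power as $key => $watts) {
    // Google's JSON format takes datetimes as "Date(y, m, d, h, i, s)" strings,
    // with a zero-based month.
    $t = $unixtimestamp[$key];
    $cell = sprintf('Date(%d, %d, %d, %d, %d, %d)',
        date('Y', $t), date('n', $t) - 1, date('j', $t),
        date('G', $t), (int) date('i', $t), (int) date('s', $t));

    $table['rows'][] = array('c' => array(
        array('v' => $cell),
        array('v' => $watts)
    ));
}

// One valid JSON document per run, overwriting the previous file.
file_put_contents('data.json', json_encode($table));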
Background
I'm trying to complete a code challenge where I need to refactor a simple PHP application that accepts a JSON file of people, sorts them by registration date, and outputs them to a CSV file. The provided program already functions and works fine with a small input, but it intentionally fails with a large input. To complete the challenge, the program must be modified so it can parse and sort a 100,000-record, 90 MB file without running out of memory, as it currently does.
In its current state, the program uses file_get_contents(), followed by json_decode(), and then usort() to sort the items. This works fine with the small sample data file, but not with the large sample data file: it runs out of memory.
The input file
The file is in JSON format and contains 100,000 objects. Each object has a registered attribute (example value: 2017-12-25 04:55:33), and this is how the records in the CSV file should be sorted, in ascending order.
My attempted solution
Currently, I've used the halaxa/json-machine package, and I'm able to iterate over each object in the file. For example:
$people = \JsonMachine\JsonMachine::fromFile($fileName);

foreach ($people as $person) {
    // do something
}
Reading the whole file into memory as a PHP array is not an option, as it takes up too much memory. The only solution I've been able to come up with so far is iterating over each object in the file, finding the person with the earliest registration date and printing that, then iterating over the whole file again, finding the next person with the earliest registration date and printing that, and so on.
The big issue with that is the nested loops: a loop that runs 100,000 times containing a loop that runs 100,000 times. It's not a viable solution, and that's as far as I've made it.
How can I parse, sort, and print to CSV a JSON file with 100,000 records? Usage of packages / services is allowed.
I ended up importing into MongoDB in chunks and then retrieving the documents in the correct order to print them.
Example import:
use JsonMachine\JsonMachine;
use MongoDB\Client;

// $client->databaseName->collectionName via the library's magic accessors.
$this->collection = (new Client($uri))->collection->people;
$this->collection->drop();

$people = JsonMachine::fromFile($fileName);

$chunk = [];
$chunkSize = 5000;
$personNumber = 0;

foreach ($people as $person) {
    $personNumber += 1;
    $chunk[] = $person;

    if ($personNumber % $chunkSize == 0) { // Chunk is full
        $this->collection->insertMany($chunk);
        $chunk = [];
    }
}

// The very last chunk was not filled to the max, but we still need to import it
if (count($chunk)) {
    $this->collection->insertMany($chunk);
}

// Create an index for quicker sorting
$this->collection->createIndex(['registered' => 1]);
Example retrieve:
$results = $this->collection->find(
    [],
    [
        'sort' => ['registered' => 1],
    ]
);

// For every person...
foreach ($results as $person) {
    // For every attribute...
    foreach ($person as $key => $value) {
        if ($key != '_id') { // No need to include the new MongoDB ID
            echo some_csv_encode_function($value) . ',';
        }
    }
    echo PHP_EOL;
}
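If you want a concrete replacement for the some_csv_encode_function placeholder above, PHP's built-in fputcsv handles the quoting and escaping. A minimal sketch, assuming the remaining fields are scalar values:
// Sketch: stream each sorted document to stdout as CSV with fputcsv,
// instead of hand-encoding the fields.
$out = fopen('php://output', 'w');

foreach ($results as $person) {
    $row = iterator_to_array($person);
    unset($row['_id']); // No need to include the new MongoDB ID
    fputcsv($out, array_values($row));
}

fclose($out);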
I want to improve how I fetch data from an API. In this case I want to fetch every app id from the Steam API and list them one per line in a .txt file. Do I need an infinite (or very high-count) loop (with ++ after every iteration) to fetch them all? I mean, counting up from id 0 with, for example, a foreach loop? I'm thinking it will take ages and it sounds very much like bad practice.
How do I get every appid {"appid": n} from the response of http://api.steampowered.com/ISteamApps/GetAppList/v0001?
<?php
//API-URL
$url = "http://api.steampowered.com/ISteamApps/GetAppList/v0001";
//Fetch content and decode
$game_json = json_decode(curl_get_contents($url), true);
//Define file
$file = 'steam.txt';
//This is where I'm lost. One massive array {"app": []} with lots of {"appid": n}.
//I know how to get one specific targeted line, but how do I get them all?
$line = $game_json['applist']['apps']['app']['appid'][every single line, one at a time]
//Write to file, one id per line.
//Like:
//5
//7
//8
//and so on
file_put_contents($file, $line, FILE_APPEND);
?>
Any pointing just in the right direction will be MUCH appreciated. Thanks!
You don't need to worry about counters with foreach loops; they are designed to go through and work with each item in the object.
$file = "steam.txt";
$game_list = "";
$url = "http://api.steampowered.com/ISteamApps/GetAppList/v0001";
$game_json = file_get_contents($url);
$games = json_decode($game_json);
foreach($games->applist->apps->app as $game) {
// now $game is a single entry, e.g. {"appid":5,"name":"Dedicated server"}
$game_list .= "$game->appid\n";
}
file_put_contents($file, $game_list);
Now you have a text file with 28000 numbers in it. Congratulations?
So I already have a script that collects the first 4,999 follower ids of a Twitter user using the API in XML format. I semi-understand how the cursor process works, but I am confused about how to implement it so it loops until it gathers all the followers. The user I am attempting to gather will take about 8 calls. Any ideas on how to implement the cursor loop?
<?php
$xmldata = 'http://api.twitter.com/1/followers/ids/microsoft.xml';
$open = fopen($xmldata, 'r');
$content = stream_get_contents($open);
fclose($open);
$xml = simplexml_load_file($xmldata);
$cursor = $xml->next_cursor;
$file = fopen ('output1.csv', 'w+');
fwrite($file, "User id\n\r");
while($cursor =! 0)
{
foreach ($xml->ids->id as $id)
{
fwrite($file, $id . ", ");
fwrite($file, "\n");
}
$xmldata = 'http://api.twitter.com/1/followers/ids.xml?cursor='. $cursor
.'&screeb_name=microsoft';
?>
Let me take the example of Microsoft's followers (346K followers as of now).
https://api.twitter.com/1/followers/ids.json?cursor=-1&screen_name=microsoft
That fetches only 5000 user IDs; that's the Twitter API limit per call. So you need to take the next_cursor string from the JSON output:
"next_cursor_str":"1418048755615786027"
So your next call would be:
https://api.twitter.com/1/followers/ids.json?cursor=1418048755615786027&screen_name=microsoft
Keep doing this until the next_cursor is ZERO.
As you keep doing this again and again, just keep storing all the ids.
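A minimal sketch of that loop in PHP (using the JSON endpoint from this answer; the ids and next_cursor_str field names are taken from the v1 response format shown above):
// Sketch of the cursor loop described above.
$screenName = 'microsoft';
$cursor = '-1'; // -1 asks for the first page
$allIds = array();

do {
    $url = 'https://api.twitter.com/1/followers/ids.json'
        . '?cursor=' . $cursor . '&screen_name=' . $screenName;
    $page = json_decode(file_get_contents($url), true);

    // Each page holds up to 5000 ids; keep collecting them.
    $allIds = array_merge($allIds, $page['ids']);

    // Use the string cursor to avoid precision loss on big numbers.
    $cursor = $page['next_cursor_str'];
} while ($cursor != 0); // zero means there are no more pages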
I have a file in .shp format and I need to convert it to an Excel spreadsheet programmatically. I want to do this using PHP or JavaScript.
I once used the small PHP lib ShapeFile; you can get it on phpclasses.org. Although its design is not so good, it works.
Here is a little example from my own code:
require_once 'lib/ShapeFile.inc.php';

$shp = new ShapeFile($filename, array('noparts' => false));

if ($shp->getError() !== '') {
    print_r($shp->getError());
} else {
    $records = array();
    while ($record = $shp->getNext()) {
        $dbf_data = $record->getDbfData();
        $shp_data = $record->getShpData();
        // Dump the information
        $obj = array(
            'type' => $shp->getShpTypeName($record->getShpType())
        );
        $obj['shape'] = $shp_data;
        $obj['meta'] = $dbf_data;
        $records[] = $obj;
    }
}
print_r($records);
So, after that, $records contains all the data from the shapefile. Of course, you will need some time to figure out what a shapefile is and what data it can hold (assuming you are not familiar with it). Start from Wikipedia. Essentially it is a bunch of arrays with some labels.
Then use some PHP Excel lib (just search on SO) and you're done :)
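As one example of such a lib (my assumption, not part of the original answer): with PhpSpreadsheet installed via Composer, the 'meta' (DBF attribute) part of each record could be written to a spreadsheet roughly like this:
// Sketch: dump the DBF attribute rows from $records into an .xlsx file
// with PhpSpreadsheet (composer require phpoffice/phpspreadsheet).
require 'vendor/autoload.php';

$spreadsheet = new \PhpOffice\PhpSpreadsheet\Spreadsheet();
$sheet = $spreadsheet->getActiveSheet();

// Header row from the DBF field names of the first record.
$sheet->fromArray(array_keys($records[0]['meta']), null, 'A1');

// One row per shapefile record, starting at row 2.
$rowNumber = 2;
foreach ($records as $record) {
    $sheet->fromArray(array_values($record['meta']), null, 'A' . $rowNumber);
    $rowNumber++;
}

$writer = new \PhpOffice\PhpSpreadsheet\Writer\Xlsx($spreadsheet);
$writer->save('shapefile.xlsx');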
I have remade my original post as it was terribly formatted. Basically, I would like some advice / tips on how to generate a line graph with two Y axes (temperature and humidity) to display some information from my text file, temperaturedata.txt. I have included a link to one of my posts from the JpGraph forum only because it displays the code clearly.
I understand that since it is a JpGraph problem I shouldn't post here, but the community here is a lot more supportive and active. Many thanks in advance for all your help, guys!
my code
I don't see any reason why you shouldn't post here about jpgraph. And I don't see why you shouldn't post your sample code and data here, either.
The code you've posted on the other site is broken. Check line #42.
Furthermore, you're passing JpGraph a single row (specifically, the last row) via $keyval. $data is where all your data is stored, though in the wrong format. A very quick fix was:
$keyval = array();
$keyval['time'] = array();
$keyval['count'] = array();
$keyval['temperature'] = array();
$keyval['humidity'] = array();

if ($file) {
    while (!feof($file)) {
        $line = trim(fgets($file));
        if (strlen($line)) {
            $fields = explode(":", $line);
            $keyval['time'][] = $fields[0];
            $keyval['count'][] = $fields[1];
            $keyval['temperature'][] = $fields[2];
            $keyval['humidity'][] = $fields[3];
        }
    }
    fclose($file);
}
which transposed $data and renamed it $keyval. (Where it used to hold time data in $data[x]['time'], it now holds it in $keyval['time'][x].) And we're passing $keyval['temperature'], which is a simple array of temperature values.
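From there, a rough sketch of feeding those arrays into a two-axis JpGraph chart (include paths and image size are assumptions; adjust to your install):
// Sketch: temperature on the left Y axis, humidity on the right (Y2) axis.
require_once 'jpgraph/jpgraph.php';
require_once 'jpgraph/jpgraph_line.php';

$graph = new Graph(800, 400);
$graph->SetScale('textlin');   // text X scale, linear Y scale
$graph->SetY2Scale('lin');     // second (right-hand) Y axis

// Temperature on the primary axis.
$tempPlot = new LinePlot($keyval['temperature']);
$graph->Add($tempPlot);

// Humidity on the secondary axis.
$humPlot = new LinePlot($keyval['humidity']);
$graph->AddY2($humPlot);

// Use the time column as X axis labels.
$graph->xaxis->SetTickLabels($keyval['time']);

$graph->Stroke();              // output the chart image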