Hi everyone! I'm working with a large tab-delimited file that contains counts and lengths. I want to display this large data set in a Highcharts chart, but the count can go up to 100k, so trying to display every value is a lost cause. Instead I want to cluster the data into bins of 500-700. An example of my data:
35 1265
36 1310
37 1180
38 1064
39 938
40 906
41 821
42 903
43 845
44 816
45 815
46 853
47 858
.......
72721 1
72732 1
72878 1
72984 1
73138 1
73279 1
73283 1
73379 1
74322 1
74373 1
75038 1
76222 1
76606 1
77153 1
78573 1
80839 1
I have tried to find an example of a bar/column chart in Highcharts that can work with this, but no luck yet. It is part of a Laravel website, so my best option (I thought) is to alter the data in PHP before passing it to Highcharts.
I want a loop that fills an array with the sum of all the second values for counts between 0 and 500, then another array with the sum of all the data from 501-1000, and so on. I have tried searching the internet, but I must be googling wrong. Does someone have any suggestions on how I could handle this?
I want the output to look something like this:
0-500 23520
501-1000 235216
1001-1500 235138
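Since the question boils down to a fixed-width summing loop, here is a minimal sketch of it (shown in Python rather than PHP for brevity; `bin_counts` and the sample `rows` are my own names, and I assume the first column selects the bucket, as in the 0-500 / 501-1000 example above):

```python
def bin_counts(rows, width=500):
    """Sum the second value of each (first, second) pair into
    fixed-width buckets keyed by the first value."""
    bins = {}
    for key, value in rows:
        b = max(key - 1, 0) // width   # 1-500 -> bucket 0, 501-1000 -> bucket 1
        bins[b] = bins.get(b, 0) + value
    return bins

# tiny demo with rows in the same shape as the file above
rows = [(35, 1265), (36, 1310), (501, 7), (1200, 3)]
for b, total in sorted(bin_counts(rows).items()):
    label = "0-500" if b == 0 else f"{b * 500 + 1}-{(b + 1) * 500}"
    print(label, total)
```

The same loop translates directly to PHP with an associative array in place of the dict, and the resulting labels/totals can be fed to Highcharts as categories and series data.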
Could anyone help me? I am trying to find a formula and write a piece of code in PHP that does the following.
Imagine we have 3 types of something, k = 1,2,3, and the length of the sequences can vary (n), but neighboring values must not(!) be the same: no 1,1 or 2,2.
For example
k = 1,2,3
n = 5
Output
1,2,3,1,2 |
1,2,3,1,3 |
1,2,3,2,1 |
1,2,3,2,3 |
1,3,2,1,2 |
1,3,2,1,3 |
1,3,2,3,1 |
1,3,2,3,2
.........
Maybe this is a problem with a common name; if so, please share it with me and I'll try to find some resources about it.
Thanks
The simplest way to generate such lists is recursion (feasible if n and k are not large; note that the number of variants is k*(k-1)^(n-1)).
Pseudocode:
Generate(list, n, k, lastvalue)
    if list.length = n
        output(list)
    else
        for i = 1 .. k
            if i != lastvalue
                Generate(list + i, n, k, i)
Delphi code:
procedure Generate(list: string; n, k, lastvalue: Integer);
var
  i: Integer;
begin
  if Length(list) = n then
    Memo1.Lines.Add(list)
  else
    for i := 1 to k do
      if i <> lastvalue then
        Generate(list + IntToStr(i), n, k, i);
end;

begin
  Generate('', 4, 3, 0);
end.
Output for n=4, k=3
1212 1213 1231 1232 1312 1313 1321 1323
2121 2123 2131 2132 2312 2313 2321 2323
3121 3123 3131 3132 3212 3213 3231 3232
Well, you do a loop in a loop. Since k has a length and the numbers in k are the ones you move along, they form the outer loop (a for loop). You also know n, so you can build the output position by position.
Place k[1] as the first element of the sequence; it doesn't change until the inner loop is over (in this case k[1] is 1). Then run a while loop with an index, say a, over the array of length n, which starts as (1, null, null, null, null). While a != n.length, check position n[a-1] to make sure the new value is not the same as its neighbor. Whenever a reaches n.length, change the last position to the next value in the k array, then step back one spot (n[a-1]), change it, and keep backtracking all the way to the start until every spot has been changed and the start of the array has reached the highest value of k. To make life easier, you can keep an extra array, say j, which records when the nearest spot n[a] has received its last possible value.
By the way, whenever you step back, reset the spots you passed to null so that all the numbers in the k array become available again. When the j array is full, reset everything and advance the outer loop.
Hope I was of help. If you have any questions, feel free to ask.
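The backtracking walk described above can be sketched like this (a Python illustration with my own naming; it keeps an explicit position index instead of recursing, as this answer suggests):

```python
def sequences(n, k):
    """Iteratively enumerate all length-n sequences over 1..k with no
    two equal neighbors, using explicit backtracking over a position index."""
    result = []
    seq = [0] * n          # 0 plays the role of "null" (not yet assigned)
    pos = 0
    while pos >= 0:
        seq[pos] += 1               # try the next value at the current spot
        if seq[pos] > k:            # exhausted this spot: reset it and back up
            seq[pos] = 0
            pos -= 1
            continue
        if pos > 0 and seq[pos] == seq[pos - 1]:
            continue                # neighbor clash: try the next value
        if pos == n - 1:
            result.append(tuple(seq))   # a complete valid sequence
        else:
            pos += 1                # move on to the next spot
    return result
```

For n = 4, k = 3 this yields the same 24 sequences as the recursive version, matching k*(k-1)^(n-1).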
I made a Sphinx configuration with 10 fields.
Some of the fields are strings, so I defined them in the source part of the configuration like this:
sql_attr_uint = section_id
sql_field_string = name
sql_field_string = element_code
sql_field_string = section_code
Everything indexed well:
collected 18334 docs, 2.5 MB
sorted 18.9 Mhits, 100.0% done
total 18334 docs, 2460468 bytes
total 13.065 sec, 188322 bytes/sec, 1403.26 docs/sec
total 44 reads, 0.112 sec, 3255.7 kb/call avg, 2.5 msec/call avg
total 366 writes, 0.386 sec, 735.4 kb/call avg, 1.0 msec/call avg
rotating indices: successfully sent SIGHUP to searchd (pid=3131).
When I run a search query directly from the command line, everything works fine: I see the text values of the string fields I defined. But when I make the same search query through sphinxapi, it returns the same result, except that instead of the string values I see digits that change with every query:
[96659] => Array
(
[weight] => 1
[attrs] => Array
(
[name] => 140436931107525
[element_code] => 140436931107617
[section_id] => 4016
[section_code] => 140436931107680
)
)
Please, anybody, what does this mean? I need the string values; I don't want to make additional SQL queries to the database.
It sounds like your sphinxapi.php file is too old. Use the one from the version of sphinx you have installed.
I have the following text file with 48,891 names. I would like to import them into a MySQL database. I need the gender (in the first three characters) and the name; the rest should be ignored.
M Aad 4 $
M Aadam 1 $
F Aadje 1 $
M Ådne + 1 $
M Aadu 12 $
?F Aaf 1 $
F Aafke 4 $
? Aafke 1 $
F Aafkea 1 $
M Aafko 1 $
M Aage 761 $
M Åge + 56 $
F Aagje 1 2 $
F Aagot 1 $
F Ågot + 2 $
F Aagoth 1 $
F Ågoth + 1 $
M Åke + 118 $
M Aalbert 1 $
M Aalderich 1 $
M Aalderk 1 $
The database handling isn't my question. The problem is how to filter out only the gender and the name from the text file.
Thanks for any help in advance.
To do this in SQL only, have a look at LOAD DATA INFILE; otherwise you can do it with a little program.
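Such a "little program" for the filtering step could look like this (a Python sketch with my own names; I assume the gender code and the name are simply the first two whitespace-separated fields of each line, which holds for all the sample lines above):

```python
def parse_names(lines):
    """Extract (gender, name) from each line; the remaining fields
    (frequency figures and the trailing '$') are ignored."""
    out = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 2:
            out.append((parts[0], parts[1]))
    return out

# tiny demo with lines in the same shape as the file above
sample = ["M Aad 4 $", "?F Aaf 1 $", "? Aafke 1 $"]
for gender, name in parse_names(sample):
    print(gender, name)
```

The resulting pairs can then be inserted into MySQL with ordinary parameterized INSERT statements.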
I've found that the ETL software Kettle by Pentaho makes it pretty easy to import text files into MySQL (especially if you don't have the most technical background). The documentation is kind of sparse and the software is far from perfect, but I can often import data in a fraction of the time I would spend writing a script for one specific file. You can select a text-file input, specify the delimiters, fixed widths, etc., and then export directly into your SQL server (they support MySQL, SQLite, Oracle, and much more). The drag-and-drop interface is great for anyone who doesn't want to write code every time they run an import.
I have tried to use this possible solution without any luck:
$test = passthru('/usr/bin/top -b -n 1');
preg_match('([0-9]+ total)', $test, $matches);
var_dump($matches);
That code shows the following text:
top - 19:15:43 up 31 days, 23 min, 1 user, load average: 0.00, 0.01, 0.05
Tasks: 85 total, 1 running, 84 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.0%sy, 0.0%ni, 99.8%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 247980k total, 224320k used, 23660k free, 68428k buffers
Swap: 521212k total, 37120k used, 484092k free
[...] 0.4 0:00.00 top
array(0) { }
How can I pick out a certain piece of information, such as the total number of tasks, so that it shows only, for example, 84 total?
Thanks in advance.
I got it to work thanks to Barmar and Satish. Here's the correct code:
$total = exec("/usr/bin/top -b -n 1 | grep Tasks: | awk -F' ' '{print $4}'");
Many thanks! :)
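For reference, the same extraction can also be done with a single regular expression instead of a grep/awk pipeline. (The original attempt failed mainly because passthru() prints the command's output rather than returning it, so $test was empty when preg_match ran.) Here is the regex idea sketched in Python on the header shown above; `task_total` is my own name:

```python
import re

def task_total(text):
    """Pull the number that precedes 'total' on the Tasks: line."""
    m = re.search(r'Tasks:\s*(\d+)\s+total', text)
    return m.group(1) if m else None

header = ("top - 19:15:43 up 31 days, 23 min, 1 user, "
          "load average: 0.00, 0.01, 0.05 "
          "Tasks: 85 total, 1 running, 84 sleeping, 0 stopped, 0 zombie")
print(task_total(header))  # -> 85
```

In PHP the equivalent would be preg_match('/Tasks:\s*(\d+)\s+total/', shell_exec('/usr/bin/top -b -n 1'), $m), since shell_exec() actually returns the output.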