Asterisk full log parser - PHP

I want to make a log parser for an Asterisk PBX, but I don't know where to start.
I have figured out what I need from the log. The lines I need look like this:
[Apr 12 11:11:56] VERBOSE[3359] logger.c: -- Called 111
The number in VERBOSE[....] is the same for all lines belonging to one call.
The first thing I have to do is collect the lines that contain that VERBOSE number, so I can identify the call. The second is to read the text; there are some standard messages, so recognizing them won't be hard.
I would like to read the log in real time (the file is written in real time) and display it on a webpage, using PHP or Ajax.
What I want is to show rows on a webpage as users call, with each new call added under the current/answered call.
Any tips or examples would be great.
Thanks,
Sebastian

I would do it as two simple CGI programs.
The first program parses the log and shows the date and call identifier; the identifier is a link to the second program. In Python you can use a regexp:
import re

# 1st group - date, 2nd - call id
RX_VERBOSE = re.compile(r'(.*) VERBOSE\[(\d+)\] ')

def make_link(call_id):
    return '<a href="show_call.cgi?call_id=%d">%d</a>' % (call_id, call_id)

def show_calls(logfn):
    call_ids = []
    f = open(logfn)
    for line in f:
        rx = RX_VERBOSE.search(line)
        if rx:
            call_id = int(rx.group(2))
            if call_id not in call_ids:
                print('%s %s' % (rx.group(1), make_link(call_id)))
                call_ids.append(call_id)
    f.close()
The second program shows the lines that contain a given call identifier:
def show_call_details(logfn, call_id):
    search_str = ' VERBOSE[%s] ' % call_id
    f = open(logfn)
    for line in f:
        if search_str in line:
            print(line.rstrip())
    f.close()
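Both scripts above re-read the whole log on every request. For the real-time webpage the question asks about, the page (e.g. via periodic Ajax polls) can call a small helper that remembers the file offset and parses only what was appended since the last poll. This is a sketch, not part of the original answer; the function name and polling scheme are my own:

```python
def read_new_lines(logfn, offset):
    """Return the complete lines appended to `logfn` since `offset`,
    together with the offset to resume from on the next poll."""
    with open(logfn, 'rb') as f:
        f.seek(offset)
        data = f.read()
    # Hold back a trailing partial line until the writer finishes it.
    end = data.rfind(b'\n') + 1
    lines = data[:end].decode('utf-8', 'replace').splitlines()
    return lines, offset + end
```

Each poll, the returned lines can be run through RX_VERBOSE from the answer above and the new rows appended to the page.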

Related

Detect when song changes in Shoutcast server

I have a script that is reading the last line of a log file for my radio station using the following command.
watch -n 1 php script.php
That command above executes my script in 1 sec intervals. The log this script is reading has output as listed below.
2016-04-28 23:30:34 INFO [ADMINCGI sid=1] Title updated [Bankroll Fresh Feat. Street Money Boochie - Get It]
2016-04-28 23:30:34 INFO [ADMINCGI sid=1] DJ name updated [1]
2016-04-28 23:30:36 INFO [YP] Updating listing details for stream #1 succeeded.
Every time the song changes, 3 more lines are added to the logs as in the example output above. I need a way to do 3 things.
1) Detect only the latest occurrence of an entry in the logs matching the pattern of line #1
2) Execute code when that occurs and do nothing else until that happens again.
3) Regex to Extract data between 'second set' of square brackets on a line delimited by a "-" e.g. [Rihanna feat. Keyshia Cole - Title (Remix)]
Before the log output of my radio script changed, my script would detect when a song change occurred by tailing the logs for the 'Title Updated' line and then extract the artist and title name from within the square brackets on that same line.
Once that happens the data is sent to a MySQL database and sent to Twitter.
I have tried using "strpos" within an if statement to first detect a line that contains "Title updated" and then execute a function to grab the song information from that line. That works, but only in a static scenario where I put an example excerpt of line #1 into a variable and run my script against it. It does detect the line and serves its purpose, but I need the script to stay dynamic: do something only when this event happens, and sit idle in the meantime.
Right now I had to go bootleg and do the following.
Created a function to grab the last 3 lines from the log and then put each line into an array. Going off the example output above.
array[0] = target line
array[1] = next line
array[2] = next line
The logs stay in this state, with no other output, until the next song change; then the pattern repeats with only the artist and title information changing. Because my script always looks at array[0], which is always the line I need, it immediately posts duplicates to Twitter on a song change. Luckily I implemented error codes in the Twitter portion, so I can use sleep() to force the script to idle for 180 seconds (3 minutes), roughly the average song length. That is decent and it does post, but my tweets are no longer in real time, because songs have different lengths.
Here is a snippet from my script below...
$lines = read_file(TEXT_FILE, LINES_COUNT);
foreach ($lines as $line) {
    $pattern = "/\[([^\]]*)\]/";
    if (preg_match_all($pattern, $lines[0], $matches)) {
        foreach ($matches[1] as $a) {
            $fulltitle = explode("-", $matches[1][1]);
            $artist = $fulltitle[0];
            $title = $fulltitle[1];
        }
    }
}
The script below is the direction I would like to go back to, and it does work in the static version shown. As soon as I point the script at the last line of the log, it never detects a change, because it always detects the next line after the target line (I believe the regex may be responsible, but I'm not sure).
$line = "2016-04-27 22:56:48 INFO [ADMINCGI sid=1] Title updated [Tessa Feat. Gucci Mane - Get You Right]";
echo $line;
$pattern = "/\[([^\]]*)\]/";
$needle = " Title updated ";
if (strpos($line, $needle) !== false) {
    preg_match_all($pattern, $line, $matches);
    foreach ($matches[1] as $a) {
        $fulltitle = explode("-", $matches[1][1]);
        $artist = $fulltitle[0];
        $title = $fulltitle[1];
    }
}
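One way around the "detects the next line after the target line" problem is to key the match on the "Title updated" marker itself, which appears only on the target line, and to remember the last song seen so the script acts only on an actual change. A sketch in Python rather than PHP (the function name is mine, the log format is from the question):

```python
import re

# Captures the "Artist - Title" text inside the square brackets that
# follow the "Title updated" marker (the second bracket pair on the line).
TITLE_RE = re.compile(r'Title updated \[([^\]]*)\]')

def extract_song(line):
    """Return (artist, title) from a 'Title updated' log line, else None."""
    m = TITLE_RE.search(line)
    if not m:
        return None
    # Split on the first "-" only, mirroring the explode("-", ...) usage.
    artist, _, title = m.group(1).partition('-')
    return artist.strip(), title.strip()
```

Feed each new log line through this; post to Twitter only when the returned pair differs from the last one posted, and stay idle otherwise.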

Force a statement to visually write to a file slowly

I want to take a File.open('somefile', 'w+') and make it read a file, take one line of text at a time, and visually write it slowly into another file. I'm asking because I can find nothing in code that already does this, nor anything that controls how fast a program writes. I know this can be simulated in a program such as Adobe After Effects, so long as you provide a cursor after a character and the visual effect doesn't happen too quickly, but I have 4,000 lines of code to iterate over and can't afford to do this manually. The effect can also be achieved with a Microsoft macro, but that requires the text to be entered manually into the macro, with no option of copy and paste.
-solutions preferred in Python, Ruby, and PHP-
If I understood properly what you are trying to achieve, here you go:
input = File.read('readfrom.txt')

File.open('writeto.txt', 'w+') do |f|
  input.chars.each do |c|
    f.print(c) # print 1 char
    f.flush    # flush the stream
    sleep 1    # sleep
  end
end
This is one quick and dirty way of doing it in Python.
from time import sleep

mystring = 'My short text with a newline here\nand then ensuing text'
dt = 0.2  # 0.2 seconds

with open('fn_out', 'w+') as f:
    for ch in mystring:
        f.write(ch)
        f.flush()
        sleep(dt)
f.flush() will result in updating the file with the changes.
One could make this more elaborate by having a longer pause after each newline, or a variable timestep dt.
To watch the change one has to repeatedly reload the file, as pointed out by @Tom Lord, so you could run something like this beforehand to watch it in the terminal:
watch -n 0.1 cat fn_out
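The "longer pause after each newline" variation mentioned above can be sketched like this (the function name and delay values are arbitrary):

```python
from time import sleep

def typewrite(text, out_path, char_delay=0.05, newline_delay=0.5):
    """Write `text` to `out_path` one character at a time, flushing after
    each one so a watcher sees it appear, pausing longer after newlines."""
    with open(out_path, 'w') as f:
        for ch in text:
            f.write(ch)
            f.flush()
            sleep(newline_delay if ch == '\n' else char_delay)
```

The same `watch -n 0.1 cat fn_out` trick works for observing the output.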
After some serious testing, I have finally developed a piece of code that will do the very thing I want. Tom Lord gave me some new words to use in my search terms, "simulate typing", and this led me to win32ole with its SendKeys function. Here is code that will iterate over all the characters in a file and print them out exactly as they were saved, while simulating typing. I will see about making this into a gem for future use.
require 'win32ole'

wsh = WIN32OLE.new("WScript.Shell")
wsh.Run("Notepad.exe")
while not wsh.AppActivate("Notepad")
  sleep 1
end

def fileToArray(file)
  x = []
  File.foreach("#{file}") do |line|
    x << line.split('')
  end
  return x.flatten!
end

tests = fileToArray("readfrom.txt")
x = 0
while x < tests.length
  send = tests[x]
  wsh.SendKeys("#{send}")
  x += 1
  sleep 0.1
end

Combine 2 CSV files based on a match within a column disregarding the header row

I have been scouring the ole interweb for a solution but have not found anything successful. I have a CSV output from one script that presents data in a specific way, and I need to match it against and merge it with another file. Added bonus if I can round values to two decimal places.
File 1: dataset1.csv (column 1 is the primary key, i.e. what I want to search the other file for.)
5033db62b38f86605f0baeccae5e6cbc,20.875,20.625,41.5
5033d9951846c1841437b437f5a97f0a,3.3529411764705882,12.4117647058823529,13.7647058823529412
50335ab3ab5411f88b77900736338bc6,6.625,1.0625,3
5033db62b38f86605f0baeccae5e6cbc,2.9375,1,1.4375
File 2: dataset2.csv (if column 2 of file 2 matches column 1 of file 1, join column 1 from file 2, replacing the data in column 1 of file 1.)
"dc2","5033db62b38f86605f0baeccae5e6cbc"
"dc1","5033d9951846c1841437b437f5a97f0a"
Desired results:
File 1 (or new file3):
dc1,3.35,12.41,13.76
dc2,20.875,20.625,41.5
Just to demonstrate that I have been trying to find a way, and not just randomly asking a question hoping someone else would solve my problem.
I have found a number of resources that say to use join.
join -o 1.1,1.2,1.3,1.4,2.3 file1 file2, etc. I have tested this a number of different ways. I read in several posts that the inputs need to be sorted; with strings that long, that's a little hard. Not to mention file 1 may have 30 to 40 entries while file 2 may only have 10. I just need a name associated with the long string.
I started looking at grep, but then I would need a foreach loop to cycle through all the results, and there has to be an easier way.
I have also looked at AWK - now this is a fun one trying to figure out exactly how to make this work.
awk 'FNR==NR {a[$2]; next} $2 in a' file.csv testfile2.csv
Yeah.... I have tried many ways to get this comparison working, as this seems to be the general idea, but still haven't got it to work. I would like this to be a simple shell script for Linux, something I can call from a PHP page and have it run - e.g. if the user hits refresh, it churns through and digests the data.
Any help would be greatly appreciated!
Thank you.
j.
You can use a combination of sort and GNU awk:
mergef.awk:
BEGIN { FS= "[ ,\"]+"; }
FNR == NR { if ( !($1 in vals) ) vals [ $1 ] = sprintf("%.2f,%.2f,%.2f", $2, $3,$4) ;}
FNR != NR { print $2 "," vals[ $3 ]; }
Say your files are f1.csv and f2.csv then use this command:
awk -f mergef.awk f1.csv f2.csv | sort
The first line in the script deals with the quotes present in the second file (because of this setting there is an empty field $1 for the second file).
The second line reads in the first file; the if takes care that only the first occurrence of a key is used.
The last line prints the new keys from the second file along with the stored values from the first file, retrieved via the old keys.
FNR == NR is true only while the first file is being read.
Using python and the pandas library:
import pandas as pd

# Read in the csv files.
df1 = pd.read_csv('dataset1.csv', header=None, index_col=0)
df2 = pd.read_csv('dataset2.csv', header=None, index_col=1)

# Round values in the first file to two decimal places.
df1 = df1.round(2)

# Merge the two files.
df3 = pd.merge(df2, df1, how='inner', left_index=True, right_index=True)

# Write the output.
df3.to_csv('output.csv', index=False, header=False)
Except for formatting the numbers, this does the job:
$ join -t, -1 1 -2 2 -o2.1,1.2,1.3,1.4 <(sort file1) <(tr -d '"' <file2 | sort -t, -k2)
dc1,3.3529411764705882,12.4117647058823529,13.7647058823529412
dc2,2.9375,1,1.4375
dc2,20.875,20.625,41.5
Note that there are two matches for dc2.
Bonus: for the required formatting, pipe the output of the previous command to
$ ... | tr ',' ' ' | xargs printf "%s,%.2f,%.2f,%.2f\n"
dc1,3.35,12.41,13.76
dc2,2.94,1.00,1.44
dc2,20.88,20.62,41.50
But then, perhaps awk is a better alternative. This is to show that no programming is required if you can utilize the existing Unix toolset.
Here is a solution with PHP:
foreach (file("dataset1.csv") as $line_no => $csv) {
    if (!$line_no) continue; // in case you have a header on first line
    $fields = str_getcsv($csv);
    $key = array_shift($fields);
    $data1[$key] = array_map(function ($v) { return number_format($v, 2); }, $fields);
}

foreach (file("dataset2.csv") as $csv) {
    $fields = str_getcsv($csv);
    if (!isset($data1[$fields[1]])) continue;
    $data2[$fields[0]] = array_merge(array($fields[0]), $data1[$fields[1]]);
}

ksort($data2);

$csv = implode("\n", array_map(function ($v) {
    return implode(',', $v);
}, $data2));

file_put_contents("dataset3.csv", $csv);
NB: As you mentioned that the first file will be using column 1 as a primary key, a duplicate key value should not occur. If it does, the last occurrence will prevail.
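For comparison, here is a sketch of the same merge using only Python's standard csv module (the function name is mine; as in the awk answer, the first occurrence of a key in file 1 wins):

```python
import csv

def merge(f1_path, f2_path, out_path):
    # Map each hash key from file 1 to its values, rounded to 2 decimals.
    vals = {}
    with open(f1_path, newline='') as f1:
        for row in csv.reader(f1):
            vals.setdefault(row[0], ['%.2f' % float(v) for v in row[1:]])
    # For each (name, key) row in file 2, emit the name plus file 1's values.
    with open(f2_path, newline='') as f2, open(out_path, 'w', newline='') as out:
        writer = csv.writer(out)
        for name, key in csv.reader(f2):
            if key in vals:
                writer.writerow([name] + vals[key])
```

The output rows follow file 2's order; add a sort on the result if the dc1/dc2 ordering matters.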

getting result from Vowpal Wabbit daemon mode

I am running VW in daemon mode. As a standalone executable, it runs perfectly fine. In daemon mode, I see something about predictions and options initially but not the end result. Not sure what exactly is going on.
This is how I call VW6
/bin64/vw --daemon --num_children 2 -t -i ~/modelbow.vw6 --min_prediction 0 --max_prediction 1 -p stdout 2>&1
I checked that vw6 is running fine. I send data using a simple PHP script (debug lines removed for brevity):
$fp = fsockopen("localhost",26542, $errno, $errstr, 3);
$fp_dat = fopen("/tmp/ml.dat", "r");
$mldata = explode("\n", file_get_contents("/tmp/ml.dat"));
$mlstr = implode($mldata);
fwrite($fp, $mlstr);
$result = trim(fgets($fp, 1024));
print $result;
Print $result above prints nothing. The only thing I see in stdout is
num sources = 1
Num weight bits = 28
learning rate = 10
initial_t = 1
power_t = 0.5
decay_learning_rate = 1
predictions = stdout
only testing
average since example example current current current
loss last counter weight label predict features
Meanwhile in standalone mode, if I run with the same model and the same dat file, just without the --daemon option, it happily gives a result at the end:
...
...
predictions = stdout
only testing
average since example example current current current
loss last counter weight label predict features
1.000000 ba66dfc7a135e2728d08010b40586b90
Any idea what could be going wrong here with the daemon mode? I tried the -p /tmp/ option as well, and ran the daemon mode with sudo, but nothing helped. Is there a debug dump option, a verbose option, or something else to find out what exactly is going on?
thanks
The reason it is not working is not in vw but in the PHP client code.
explode on "\n" strips the newlines out.
implode without a glue-string parameter results in glue-string defaulting to the empty string.
Result: newlines are stripped out.
All examples are merged into one big (and incomplete, since there's no newline at the end) example.
vw needs newlines to separate examples, without them it will be waiting forever for the 1st example to complete.
So I think you need to change the implode line of code to:
$mlstr = implode("\n", $mldata);
for it to work.
You will also need an additional ending newline to get the last line through.
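The effect of the missing glue string is easy to reproduce in a couple of lines (shown in Python; the example data is illustrative, not real vw input):

```python
raw = "1 |features a b\n0 |features c d\n"

lines = raw.split("\n")   # like explode("\n", ...): the newlines are gone
wrong = "".join(lines)    # like implode($mldata): one run-on pseudo-example
right = "\n".join(l for l in lines if l) + "\n"   # separated, trailing newline kept
```

`wrong` fuses the two examples into one unterminated line, which is exactly what the daemon ends up waiting on; `right` reconstructs the original newline-separated input.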

Modify PHP_Beautifier in Vim to not strip empty lines

Just finished incorporating PHP_Beautifier into Vim, and the fact that it removes whitespace irks me. Apparently it's a bug since 2007. There is a hack to fix this problem, but it leads to other problems. Instead I decided to use a roundabout method.
First Convert multiple blank lines to a single blank line via the command as suggested here
:g/^\_$\n\_^$/d
Next Convert all blank lines to something unique like so (make sure it does not get changed during beautification)
:%s/^[\ \t]*\n/$x = 'It puts the lotion on the skin';\r/ge
Next Call PHP_Beautifier like so
:% ! php_beautifier --filters "ArrayNested() IndentStyles(style=k&r) NewLines(before=if:switch:foreach:else:T_CLASS,after=T_COMMENT:function)"<CR>
Finally Change all unique lines back to empty lines like so
:%s/$x = 'It puts the lotion on the skin';//ge
All four work when I tested them independently. I also have the third step mapped to my F8 key like so
map <F8> :% ! php_beautifier --filters "ArrayNested() IndentStyles(style=k&r) NewLines(before=if:switch:foreach:else:T_CLASS,after=T_COMMENT:function)"<CR>
But when I try to string the commands together via the pipe symbol, like so (I padded the pipes with whitespace to better show the different commands)
map <F8> :g/^\_$\n\_^$/d | %s/^[\ \t]*\n/$x = 'It puts the lotion on the skin';\r/ge | % ! php_beautifier --filters "ArrayNested() IndentStyles(style=k&r) NewLines(before=if:switch:foreach:else:T_CLASS,after=T_COMMENT:function)" | %s/$x = 'It puts the lotion on the skin';//ge<CR>
I get the following error
Error detected while processing /home/xxx/.vimrc:
line 105:
E749: empty buffer
E482: Can't create file /tmp/vZ6LPjd/0
Press ENTER or type command to continue
How do I bind these multiple commands to a key, in this case F8.
Thanks to ib's answer, I finally got this to work. If anyone is having this same problem, just copy this script into your .vimrc file
func! ParsePHP()
  exe 'g/^\_$\n\_^$/d'
  %s/^[\ \t]*\n/$x = 'It puts the lotion on the skin';\r/ge
  exe '%!php_beautifier --filters "ArrayNested() IndentStyles(style=k&r)"'
  %s/$x = 'It puts the lotion on the skin';//ge
endfunc

map <F8> :call ParsePHP()<CR>
For some Ex commands, including :global and :!, a bar symbol (|) is
interpreted as a part of a command's argument (see :help :bar for the full
list). To chain two commands, the first of which allows a bar symbol in its
arguments, use the :execute command.
:exe 'g/^\_$\n\_^$/d' |
\ %s/^[\ \t]*\n/$x = 'It puts the lotion on the skin';\r/ge |
\ exe '%!php_beautifier --filters "ArrayNested() IndentStyles(style=k&r) NewLines(before=if:switch:foreach:else:T_CLASS,after=T_COMMENT:function)"' |
\ %s/$x = 'It puts the lotion on the skin';//ge
