PHP: Extract data from PDF in array format

I have the following PDF file (a Marsheet PDF) and I'm trying to extract the data shown in the example below. I have tried PDFParse, PDFtoText, etc., but none of them work properly. Is there any solution or example?
<?php
// Output something like this, or suggest a better option if you have one
$data_array = array(
    array(
        "name"          => "Mr Andrew Smee",
        "medicine_name" => "FLUOXETINE 20MG CAPS",
        "description"   => "TAKE ONE ONCE DAILY FOR LOW MOOD. CAUTION:YOUR DRIVING REACTIONS MAY BE IMPAIRED",
        "Dose"          => '9000',
        "StartDate"     => '28/09/15',
        "period"        => '28',
        "Quantity"      => '28'
    ),
    array(
        "name"          => "Mr Andrew Smee",
        "medicine_name" => "SINEMET PLUS 125MG TAB",
        "description"   => "TAKE ONE TABLET FIVE TIMES A DAY FOR PD
(8am,11am,2pm,5pm,8pm)
THIS MEDICINE MAY COLOUR THE URINE. THIS IS
HARMLESS. CAUTION:REACTIONS MAY BE IMPAIRED
WHILST DRIVING OR USING TOOLS OR MACHINES.",
        "Dose"          => '0800,1100,1400,1700,2000',
        "StartDate"     => '28/09/15',
        "period"        => '28',
        "Quantity"      => '140'
    ),
    // etc...
);
?>

TL;DR You are almost certainly not going to do this with a library alone.
Update: a working solution (not a perfect solution!) is coded below, see 'in practice'. It requires:
defining the areas where the text is;
the possibility of installing and running a command line tool, pdf2json.
Why it is not easy
PDF files contain typesetting primitives, not extractable text; sometimes the difference is slight enough that you can get by, but usually having only extractable text, in an easily accessible format, means that the document looks "slightly wrong" aesthetically, and therefore the generators that create the "best" PDFs for text extraction are also the least used.
Some generators exist that embed both the typesetting layer and an invisible text layer, allowing you to see the beautiful text and to extract the good text. At the expense, you guessed it, of the PDF size.
In your example, you only have the beautiful text inside the file, and the existence of a grid means that the text needs to be properly typeset.
So, inside, what there actually is to be read is this. Notice the letters inside round parentheses:
/R8 12 Tf
0.99941 0 0 1 66 765.2 Tm
[(M)2.51003(r)2.805( )-2.16558(A)-3.39556(n)
-4.33056(d)-4.33056(r)2.805(e)-4.33056(w)11.5803
( )-2.16558(S)-3.39556(m)-7.49588(e)-4.33117(e)556]TJ
ET
and if you assemble the (s)(i)(n)(g)(l)(e) letters inside, you do get "Mr Andrew Smee", but then you need to know where these letters are relative to the page, and to the data grid. You also need to beware of spaces. Above, there is one explicit space character, parenthesized, between "Mr" and "Andrew"; but if you removed such spaces and fixed the offsets of all the following letters, you would still read "Mr Andrew Smee" and save two characters. Some PDF "optimizers" will try to do just that, and then, not considering offsets, the "text" string of that entity will just be "MrAndrewSmee".
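To make the parenthesized format concrete, here is a minimal sketch (mine, not part of any library mentioned here) that reassembles the text of a single TJ array and inserts a space when a kerning offset is large enough to look like a word gap; the -100 threshold is a guess and is font-dependent:

<?php
// Reassemble the visible text of one TJ array. Text chunks are parenthesized;
// the numbers between them are kerning offsets in thousandths of an em.
function tjToText($tj, $gapThreshold = -100.0) {
    // Capture parenthesized chunks and the numeric offsets between them.
    preg_match_all('/\(((?:\\\\.|[^()\\\\])*)\)|(-?\d+(?:\.\d+)?)/', $tj, $m, PREG_SET_ORDER);
    $out = '';
    foreach ($m as $tok) {
        if ($tok[1] !== '' || $tok[0][0] === '(') {
            $out .= stripslashes($tok[1]); // literal text chunk (naive unescaping)
        } elseif ((float)$tok[2] < $gapThreshold) {
            $out .= ' ';                   // a large negative offset reads as a space
        }
    }
    return $out;
}

echo tjToText('[(M)2.51003(r)2.805( )-2.16558(A)-3.39556(n)-4.33056(d)]TJ');
// -> "Mr And" (these kerning offsets are far too small to trigger extra spaces)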
And that is why most text extraction libraries, which can't easily manage character offsets (they use "text lines", and by and large they don't care about grids) will give you something like
Mr Andrew Smee 505738 12/04/54 (61
or, in the case of "optimized" texts,
MrAndrewSmee50573812/04/54(61
(which still gives the dangerous illusion of being parsable with a regex -- sometimes it is, sometimes it isn't, most of the times it works 95% of the time, so that the remaining 5% turns into a maintenance nightmare from Hell), but, more importantly, they will not be able to get you the content of the medication details timetable divided by cell.
Any information which is space-correlated (e.g. a name has different meanings if it's written in the left "From" or in the right "To" box) will be either lost, or variably difficult to reconstruct.
There are PDF "protection" schemes that exploit the capability of offsetting the text, and will scramble the strings. With offsets, you can write:
9 l 10 d 4 l 5 1 H 2 e 3 l o 6 W 7 o 8 r
and the PDF viewer will show you "Hello World"; but read the text directly, and you get "ldlHeloWor", or worse. You could add malicious text and place it outside the page, or write it in transparent color, to prank whoever succeeds in removing the easily removed optional copy-paste protection of PDF files. Most libraries would blithely suck up the prank text together with the good text.
Trying with most libraries, and why it might work (but probably not)
Libraries such as XPDF (and its wrappers phpxpdf, pdf2html, etc.) will give you a simple call such as this
// open PDF
$pdfToText->open('PDF-book.pdf');
// PDF text is now in the $text variable
$text = $pdfToText->getText();
$pdfToText->close();
and your "text" will contain everything, and be something like:
...
START DATE START DAY
WEEK 1 WEEK 2 WEEK 3 WEEK 4
DATE 28 29 30 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
19/10/15
Medication Details
Commencing
D.O.B
Doctor
Hour:Dose 1 2 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 7 1 2 3 4 5 6 7
Patient
Number
Period
MEDICATION ADMINISTRATION RECORD SHEETS Pharmacy No.
Document No.
02392 731680
28
0900 1
TAKE ONE ONCE DAILY FOR LOW MOOD.
CAUTION:YOUR DRIVING REACTIONS MAY BE IMPAIRED.
28
FLUOXETINE 20MG CAPS
Received Quantity returned quant. by destroyed quant. by
So, reading the above, ask yourself: what is that second 28? Can you tell whether it is the received quantity, the returned quantity or the destroyed quantity without looking at the PDF? Sure, if there's only one number, chances are that it is the received quantity. But it is a bet.
And is 02392 731680 the document number? It looks like it is (it is not).
Notice also that in the PDF, the medicine name is before the notes. In the extracted text, it is after. By looking at the offsets inside the PDF, you understand why, and it's even a good decision -- but looking at the extracted text, it's not so easy.
So, automatic analysis looks enticingly like it can be done, but as I said, it is a very risky business. It is brittle: someone entering the wrong (for you) text somewhere in the document, sometimes even filling in the fields out of sequential order, will result in a PDF which is visually correct and, at the same time, inexplicably unparseable. What are you going to tell your users?
Sometimes, a subset of the available information is stable enough for you to get the work done. In that case, XPDF or PDF2HTML, a bunch of regex, and you're home free in half a day. Yay you! Just keep in mind that any "little" addition to the project might then be impossible. Two numbers are added that are well separated in the PDF; are they 128 and 361, or 12 and 8361, or 1283 and 61? All you get in $text is 128361.
So if you go that way, document it clearly and avoid expectations which might be difficult to maintain. Your initial project might work so well, so fast, on so little, that an addition is accepted unbeknownst to you - and you're then required to do the impossible. Explaining why the first 95% was easy and the subsequent 5% very hard might be more than your job is worth.
One difficult way to do it, which worked for me
But can you do the same thing "by hand"? After all, by looking at the PDF, you know what you are seeing. Can the same thing be done by a machine? (this still applies). Sure, in this - after all - clearly delimited problem of computer vision, you very probably can. It just won't be quick and easy. You need:
a very low level library (or reading the PDF yourself; you just need to uncompress it first, and there are tools for that, e.g. pdftk). You need to recover the text with coordinates. "C" for "hospitalized" is worth nothing. "C, 495.2, 882.7" plus the coordinates of your grid tells you of a hospitalization on October 13th, 2015 -- and that is the information you are after!
patience (or a tool) to input the coordinates of the text zones. You need to tell the system which area is October 13th, 2015... as well as all the other days. For example:
// Cell name X1 Y1 X2 Y2 Text
[ 'PatientName', 60, 760, 300, 790, '' ],
[ 'PatientNumber', 310, 760, 470, 790, '' ],
...
[ 'Grid01Y01X01', 90, 1020, 110, 1040, '' ],
...
Note that very many of those values can be calculated programmatically: once you have the top-left corner and one cell's size, the others follow with only a very slight error. You needn't type in by hand six grids of four weeks, six rows each, seven days per week.
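As an illustration (my sketch, not the original project's code; the (90, 1020) origin and the 20-unit cell size are lifted from the sample values above), the cells could be generated like this:

<?php
// Generate [name, x1, y1, x2, y2, text] cells for a rows-by-columns grid,
// starting from a known top-left corner and a fixed cell size.
function gridCells($x0, $y0, $w, $h, $rows, $cols) {
    $cells = [];
    for ($r = 0; $r < $rows; $r++) {
        for ($c = 0; $c < $cols; $c++) {
            $name = sprintf('Grid01Y%02dX%02d', $r + 1, $c + 1);
            $cells[] = [$name,
                        $x0 + $c * $w,       $y0 + $r * $h,
                        $x0 + ($c + 1) * $w, $y0 + ($r + 1) * $h,
                        ''];
        }
    }
    return $cells;
}

// Six rows by 28 day columns (four weeks of seven days):
$cells = gridCells(90.0, 1020.0, 20.0, 20.0, 6, 28);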
You can use the same structure to create a PNG with red areas indicating which cells you've covered; that is useful to visually check that you did not forget anything.
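With GD, that check could look like the following sketch; 'page.png' (a raster of the page at the same scale as the template coordinates) and the $cells array from the previous sketch are assumptions:

<?php
// Paint every template cell as a translucent red rectangle over the page
// raster, so uncovered areas stand out at a glance.
$img = imagecreatefrompng('page.png');
$red = imagecolorallocatealpha($img, 255, 0, 0, 90); // 90/127 = fairly transparent
foreach ($cells as $cell) {
    list($name, $x1, $y1, $x2, $y2) = $cell;
    imagefilledrectangle($img, (int)$x1, (int)$y1, (int)$x2, (int)$y2, $red);
}
imagepng($img, 'coverage.png');
imagedestroy($img);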
At that point you parse the PDF, and every time you find a text at coordinates (x1,y1) you scan all of your cells and determine where the text should be (there are faster ways to do that using XY binary search trees). If you find 'Mr Andrew S' at 66, 765.2 you add it to PatientName. Then you find 'mee' at 109.2, 765.2 and you also add it to PatientName. Which now reads 'Mr Andrew Smee'.
If the horizontal distance is above a certain threshold, you add a space (or more than one).
(For very small text there's a slight risk of the letters being output out of order by the PDF driver and corrected through kerning, but usually that's not a problem).
At the end of the whole cycle you will be left with
[ 'PatientName', 60, 760, 300, 790, 'Mr Andrew Smee' ],
[ 'PatientNumber', 310, 760, 470, 790, '505738' ],
and so on.
I did this kind of work for a large PDF import project some years back and it worked like a charm. Nowadays, I think most of the heavy lifting could be done with TcLibPDF.
The painful part is recording by hand, the first time, the information for the grid; possibly there are tools for that, or one could whip up an HTML5/AJAX editor using canvases.
In practice
Most of the work has already been done by the excellent pdf2json tool which, given the 'Andrew Smee' PDF, outputs something like:
[
{
"height" : 1263,
"width" : 892
"number" : 1,
"pages" : 1,
"fonts" : [
{
"color" : "#000000",
"family" : "Times",
"fontspec" : "0",
"size" : "15"
},
...
],
"text" : [
{ "data" : "12/04/54",
"font" : 0,
"height" : 17,
"left" : 628,
"top" : 103,
"width" : 70
},
{ "data" : "28/09/15",
"font" : 0,
"height" : 17,
"left" : 105,
"top" : 206,
"width" : 70
},
{ "data" : "AQUARIUS",
"font" : 0,
"height" : 17,
"left" : 99,
"top" : 170,
"width" : 94
},
{ "data" : " ",
"font" : 0,
"height" : 17,
"left" : 193,
"top" : 170,
"width" : 5
},
{ "data" : "NURSING",
"font" : 0,
"height" : 17,
"left" : 198,
"top" : 170,
"width" : 83
},
...
In order to make things simple, I converted the Andrew Smee PDF to a PNG and resampled it to 892 x 1263 pixels (any size will do, as long as you keep track of it; below, the size is saved in 'width' and 'height'). This way I can read pixel coordinates straight off my old PaintShop Pro's status bar :-).
The "Address" field is from 73,161 to 837,193.
My sample "template", with only three fields, is therefore in PHP 5.7 (with short array syntax, [ ] instead of Array() )
<?php
function template() {
    $template = [
        'Address'   => [ 'x1' => 73, 'y1' => 161, 'x2' => 837, 'y2' => 193 ],
        'Medicine1' => [ 'x1' =>  1, 'y1' => 283, 'x2' => 251, 'y2' => 299 ],
        'Details1'  => [ 'x1' =>  1, 'y1' => 302, 'x2' => 251, 'y2' => 403 ],
    ];
    foreach ($template as $fieldName => $candidate) {
        $template[$fieldName]['elements'] = [ ];
    }
    return $template;
}
// shell_exec('/usr/local/bin/pdf2json "Andrew-Smee.pdf" andrew-smee.json');
$parsed = json_decode(file_get_contents('ann-underdown.json'), true);

$paged = [ ];
foreach ($parsed as $page) {
    $template = template();
    foreach ($page['text'] as $text) {
        // Will it blend?
        foreach ($template as $fieldName => $candidate) {
            if ($text['top'] > $candidate['y2']) {
                continue; // Too low.
            }
            if (($text['top'] + $text['height']) < $candidate['y1']) {
                continue; // Too high.
            }
            if ($text['left'] > $candidate['x2']) {
                continue; // Too far right.
            }
            if (($text['left'] + $text['width']) < $candidate['x1']) {
                continue; // Too far left.
            }
            $template[$fieldName]['elements'][] = $text;
        }
    }
    // Now I must reassemble all my fields
    foreach ($template as $fieldName => $data) {
        $list = $data['elements'];
        // Sort top-to-bottom, then left-to-right
        usort($list, function ($txt1, $txt2) {
            for ($r = 8; $r >= 1; $r /= 2) {
                if (($txt1['top'] / $r) < ($txt2['top'] / $r)) {
                    return -1;
                }
                if (($txt1['top'] / $r) > ($txt2['top'] / $r)) {
                    return 1;
                }
                if (($txt1['left'] / $r) < ($txt2['left'] / $r)) {
                    return -1;
                }
                if (($txt1['left'] / $r) > ($txt2['left'] / $r)) {
                    return 1;
                }
            }
            return 0;
        });
        $text   = '';
        $starty = false;
        foreach ($list as $data) {
            if ($data['top'] > $starty + 5) {
                if ($starty > 0) {
                    $text .= "\n"; // New line within the same cell
                }
            } else {
                // Same line: add a space
                // $text .= ' ';
            }
            $starty = $data['top'];
            // Add text to current line
            $text .= $data['data'];
        }
        // Remove extra spaces
        $text = preg_replace('# +#', ' ', $text);
        $template[$fieldName] = $text;
    }
    $paged[] = $template;
}
print_r($paged);
And the result (on a multipage PDF)
Array
(
[0] => Array
(
[Address] => AQUARIUS NURSING HOME 4-6 SPENCER ROAD, SOUTHSEA PO4 9RN
[Medicine1] => ATORVASTATIN 40MG TABS
[Details1] => take ONE tablet at NIGHT
)
[1] => Array
(
[Address] => AQUARIUS NURSING HOME 4-6 SPENCER ROAD, SOUTHSEA PO4 9RN
[Medicine1] => SOTALOL 80MG TABS
[Details1] => take ONE tablet TWICE each day
DO NOT STOP TAKING UNLESS YOUR DOCTOR TELLS
YOU TO STOP.
)
[2] => Array
(
[Address] => AQUARIUS NURSING HOME 4-6 SPENCER ROAD, SOUTHSEA PO4 9RN
[Medicine1] => LAXIDO ORANGE SF 13.8G SACHETS
[Details1] => ONE to TWO when required
DISSOLVE OR MIX WITH WATER BEFORE TAKING.
NOT IN CASSETTE
)
)

Sometimes it's hard to extract PDFs into the required format/output directly using libraries or tools. The same problem occurred to me recently, where I had 1600+ PDFs and needed to extract their data and store it in a DB. I tried almost all the libraries and tools, and none of them helped me. So I put in some manual effort to find a pattern and processed them using PHP. For this I used the PHP library PDF TO HTML.
Install PDF TO HTML library
composer require gufy/pdftohtml-php:~2
This will convert your PDF into HTML code, with each <div> tag representing a page and the <p> tags representing the titles and their values. Now, using the p tags, if you can identify the common pattern, it is not hard to put that into the logic to process all the PDFs and convert them into CSV/XLS or anything else. In my case the pattern repeated after every 11 <p> tags, so I used that (see the chunking sketch after the code below).
$pdf = new Gufy\PdfToHtml\Pdf('<PDF_FILE_PATH>');

// get total number of pages
$total_pages = $pdf->getPages();

// Iterate through each page and extract the p tags
for ($i = 1; $i <= $total_pages; $i++) {
    // This will convert pdf to html
    $html = $pdf->html($i);

    // Create a dom document
    $domOb = new DOMDocument();

    // load html code in the dom document
    $domOb->loadHTML(mb_convert_encoding($html, 'HTML-ENTITIES', 'UTF-8'));

    // Get SimpleXMLElement from Dom Node
    $sxml = simplexml_import_dom($domOb);

    // here you have the p tags
    foreach ($sxml->body->div->p as $pTag) {
        // your logic
    }
}
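If the layout really does repeat every 11 <p> tags, the "your logic" part might look like the sketch below. This is my illustration, not the original author's code; the chunk size of 11 and the idea of mapping fixed positions to named columns are assumptions:

// Collect the <p> texts and cut them into records of 11 fields each
// (the repeat length observed above).
$records = [];
$buffer = [];
foreach ($sxml->body->div->p as $pTag) {
    $buffer[] = trim((string)$pTag);
    if (count($buffer) === 11) {
        $records[] = $buffer; // e.g. map fixed positions to named columns here
        $buffer = [];
    }
}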
Hope this helps you as it helped me a lot.

Related

Why does rand seem more random than mt_rand when only doing (1, 2)?

I have some elements that I'm trying to randomize at 50% chance of output. Wrote a quick if statement like this.
$rand = mt_rand(1, 2);
if ($rand == 1) {
    echo "hello";
} else {
    echo "goodbye";
}
I notice that when using mt_rand, "goodbye" is output many times in a row, whereas if I just use rand, it's a more equal distribution.
Is there something about mt_rand that makes it worse at handling a simple 1-2 randomization like this? Or is my dataset so small that these results are just anecdotal?
To get the same value "many times in a row" is a possible outcome of a randomly generated series. It would not be completely random if such a pattern were not allowed to occur. If you would continue taking samples, you would also find that the opposite value will sometimes occur several times in a row, provided you keep going long enough.
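To see how unremarkable such streaks are, you can count the longest run of identical values in a sample; this little check is my addition, separate from the pair-counting test below:

<?php
// Longest run of identical values in 10000 draws. For a fair two-valued
// generator, the longest run is typically around log2(10000), i.e. about 13.
$longest = $run = 0;
$prev = null;
for ($i = 0; $i < 10000; $i++) {
    $n = mt_rand(1, 2);
    $run = ($n === $prev) ? $run + 1 : 1;
    if ($run > $longest) {
        $longest = $run;
    }
    $prev = $n;
}
echo "Longest run: $longest\n";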
One way to test that the generated values are indeed quite random and uniformly distributed, is to count how many times the same value is generated as the one generated before, and how many times the opposite value is generated.
Note that the strings "hello" and "goodbye" don't add much useful information; we can just look at the values 1 and 2.
Here is how you could do such a test:
// $countAfter[$i][$j] will contain the number of occurrences of
// a pair $i, $j in the randomly generated sequence.
// So there is an entry for [1][1], [1][2], [2][1] and [2][2]:
$countAfter = [1 => [1 => 0, 2 => 0],
               2 => [1 => 0, 2 => 0]];
$prev = 1; // We assume for simplicity that the "previously" generated value was 1
for ($i = 0; $i < 10000; $i++) { // Produce a large enough sample
    $n = mt_rand(1, 2);
    $countAfter[$prev][$n]++; // Increase the counter that corresponds to the generated pair
    $prev = $n;
}
print_r($countAfter);
You can see in this demo that the 4 numbers that are output do not differ that much. Output is something like:
Array (
    [1] => Array (
        [1] => 2464
        [2] => 2558
    )
    [2] => Array (
        [1] => 2558
        [2] => 2420
    )
)
This means that 1 and 2 are generated about an equal number of times and that a repetition of a value happens just as often as a toggle in the series.
Obviously these numbers are rarely exactly the same, since that would mean the last couple of generated values would not be random at all, as they would need to bring those counts to the desired value.
The important thing is that your sample needs to be large enough to see the pattern of a uniform distribution confirmed.

Divide a two dimensional array into surfaces

For a project it's required to arrange a two-dimensional array into planes that are in proportion to each other based on the percentage of each plane (I hope this makes sense; else see the example below). In this 2D array the 'first' level represents the rows and the 'second' level the columns. For example:
array(
// row 1
array(
// items
number1
number2
numberN
),
// row 2
array(
// items..
),
// row N
array(
// items..
)
)
The numbers in this array have to be added/arranged in such a way that they form panels. The panels together form one grid. Each number represents an item (what it is doesn't matter for the question). I came up with a solution myself; click here for a print of the 2D array (the groups are color coded).
Let's say there are three groups (listed below). The groups represent the panels introduced above. Each group has a percentage between zero and one hundred; the sum of the percentages of the planes must be one hundred percent. The maximum number of groups is seven. Example group info:
Group 1 (Panel A): 70%
Group 2 (Panel B): 20%
Group 3 (Panel C): 10%
Again this arrangement should result in one large panel with (sub)panels in it. As shown in this schematic figure.
I came up with the idea to divide the end result into four corners. Each corner is calculated by the rules. These corners should then be mirrored (horizontally and/or vertically) based on which corner it is (upper left, upper right, lower left, lower right).
List of rules;
The number of items should be the same for each row
The aspect ratio of the complete grid should be 2 to 1, so the width is two times the height.
The number of rows is based on the total number of items, since the aspect ratio is known.
After some days of work I was able to come up with a working script, but it acts weird (weird as in: not as expected) in some cases. See my current solution above.
So, my question is: how do others solve this? Is this a known problem, and are there solutions (like an algorithm) for this kind of question? I have been struggling with this problem for a long time now, searching the internet and trying to find similar problems, but I did not succeed.
I am not asking for a ready made solution. Just a pointer in the right direction would be much appreciated.
Assuming the planes should have approximately the same "aspect ratio" as the complete matrix, you could use this algorithm:
Calculate for each percentage what would be the coefficient to apply to the width and height to get the exact area that would be left over after subtracting the percentage of the available area. This coefficient is the square root of the coefficient that needs to be applied to the area (related to the percentage).
As this coefficient will in general be a non-integer number, check which way of rounding the width and height yields an area that comes closest to the desired area.
Repeat this for each plane.
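Worked example for the first group (10%) on a 20 x 32 grid (area 640): the remaining fraction is 0.9, so the target area is 576 and the side coefficient is sqrt(0.9), about 0.9487, giving an unrounded inner plane of 18.97 x 30.36. Of the four rounding combinations, 19 x 30 = 570 comes closest to 576, so the outermost plane keeps (640 - 570) / 640 = 10.9375% for itself; that is exactly the pct reported in the output further down.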
Here is the code:
function createPlanes($width, $height, $groupPercentages) {
    $planes = [ ];
    $area = $width * $height;
    $planeWidth = $width;
    $planeHeight = $height;
    $sumPct = 0;
    $coefficient2 = 1;
    foreach ($groupPercentages as $pct) {
        $plane = [
            "column" => floor(($width - $planeWidth) / 2),
            "row"    => floor(($height - $planeHeight) / 2),
            "width"  => $planeWidth,
            "height" => $planeHeight,
        ];
        $coefficient2 -= $pct / 100;
        $coefficient = sqrt($coefficient2);
        $planeArea = $coefficient2 * $area;
        $planeWidth = $coefficient * $width;
        $planeHeight = $coefficient * $height;
        // determine all possible combinations of rounding:
        $deltas = [
            abs(floor($planeWidth) * floor($planeHeight) - $planeArea),
            abs(floor($planeWidth) * min(ceil($planeHeight), $plane["height"]) - $planeArea),
            abs(min(ceil($planeWidth), $plane["width"]) * floor($planeHeight) - $planeArea),
            abs(min(ceil($planeWidth), $plane["width"]) * min(ceil($planeHeight), $plane["height"]) - $planeArea)
        ];
        // Choose the one that brings the area closest to the required area
        $choice = array_search(min($deltas), $deltas);
        $planeWidth  = $choice & 2 ? ceil($planeWidth)  : floor($planeWidth);
        $planeHeight = $choice & 1 ? ceil($planeHeight) : floor($planeHeight);
        $newSumPct = ($area - $planeWidth * $planeHeight) / $area * 100;
        $plane["pct"] = $newSumPct - $sumPct;
        $sumPct = $newSumPct;
        $planes[] = $plane;
    }
    return $planes;
}

// Example call for a 2D array with 20 columns and 32 rows, and
// three percentages: 10%, 20%, 70%:
$planes = createPlanes(20, 32, [10, 20, 70]);
The $planes variable will get this content:
array (
array (
'column' => 0,
'row' => 0,
'width' => 20,
'height' => 32,
'pct' => 10.9375,
),
array (
'column' => 0,
'row' => 1,
'width' => 19,
'height' => 30,
'pct' => 20,
),
array (
'column' => 1,
'row' => 3,
'width' => 17,
'height' => 26,
'pct' => 69.0625,
),
)
The inner attributes define where the plane starts (row, column), and how large it is (height, width), and which is the actual percentage that plane is in relation to the total area.
Note that the actual 2D array does not need to be part of the algorithm, as its values don't influence it.

PHP : How to detect a shift between two images

I would like to detect and get the shift between two images, but I can't find anything except similarity comparison. The idea is the same as in this post, but in PHP instead of Python:
How to detect a shift between images
I imagine something like this: $coordinate = image_shift($image_1, $image_2);
$coordinate would be something like this:
['x' => 10, 'y' => -12, 'tilt' => 0.2], with x and y in pixels and tilt in radians.
Thanks in advance for your help.

Filter an array based on density

I have a sample graph like the one below, which I plotted with a set of (x, y) values in an array X.
http://bubblebird.com/images/t.png
As you can see, the graph has dense peak values between 4000 and 5100.
My exact question is: can I programmatically find the range where the graph is most dense?
i.e., given array X, how can I find the range within which the graph is dense?
For this array it would be 4000 - 5100.
Assume that the array has only one dense region, for simplicity.
I'd be thankful if you can suggest pseudocode/code.
You can use the variance of the signal on a moving window.
Here is an example (in the graph, the test signal is red, the windowed variance green, and the filtered signal blue):
Test signal generation:
import numpy as np
X = np.arange(200) - 100.
Y = (np.exp(-(X/10)**2) + np.exp(-((np.abs(X)-50.)/2)**2)/3.) * np.cos(X * 10.)
Compute the moving window variance:
window_length = 30  # number of points for the window
variance = np.array([np.var(Y[max(0, i - window_length // 2): i + window_length // 2]) for i in range(200)])
Get the indices where the variance is high (here I choose the criterion "variance greater than half of the maximum variance"; you can adapt it to your case):
idx = np.where(variance > 0.5 * np.max(variance))
X_min = np.min(X[idx])
# -14.0
X_max = np.max(X[idx])
# 15.0
Or filter the signal (set the points with low variance to zero):
Y_modified = np.where(variance > 0.5 * np.max(variance), Y, 0)
You could calculate the absolute difference between adjacent values, then maybe smooth things a little with a sliding window, and then find the regions where the smoothed absolute difference values are above 50% of the maximum value.
Using Python (you have Python in the tags), this would look like this:
a = ( 10, 11, 9, 10, 18, 5, 20, 6, 15, 10, 9, 11 )
diffs = [abs(i[0]-i[1]) for i in zip(a,a[1:])]
# [1, 2, 1, 8, 13, 15, 14, 9, 5, 1, 2]
maximum = max(diffs)
# 15
result = [i>maximum/2 for i in diffs]
# [False, False, False, True, True, True, True, True, False, False, False]
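Since the surrounding thread is PHP, here is the same idea ported to PHP (my port, not the answerer's code):

<?php
// Absolute neighbour differences, then flag the positions where the
// difference exceeds half of the maximum difference.
$a = array(10, 11, 9, 10, 18, 5, 20, 6, 15, 10, 9, 11);
$diffs = array();
for ($i = 1; $i < count($a); $i++) {
    $diffs[] = abs($a[$i] - $a[$i - 1]);
}
$max = max($diffs);
$result = array_map(function ($d) use ($max) { return $d > $max / 2; }, $diffs);
// The dense region spans the first to the last flagged index.
$trueIdx = array_keys($result, true);
echo "dense from index ", $trueIdx[0], " to ", end($trueIdx) + 1, "\n"; // 3 to 8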
You could use a classification algorithm (for example k-means) to split the data into clusters and find the most weighted cluster; a rough sketch follows below.
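For illustration, here is a hypothetical 1-D two-means sketch over the neighbour differences (a real implementation would cluster the (x, y) points or the windowed variance, and would handle empty clusters and convergence properly):

<?php
// Naive 1-D k-means with k = 2: split values into a "low" and a "high"
// cluster by iteratively reassigning points to the nearest centroid.
function kmeans1d($values, $iters = 20) {
    $c = array(min($values), max($values)); // initial centroids
    for ($it = 0; $it < $iters; $it++) {
        $groups = array(array(), array());
        foreach ($values as $v) {
            $k = (abs($v - $c[0]) <= abs($v - $c[1])) ? 0 : 1;
            $groups[$k][] = $v;
        }
        foreach ($groups as $k => $g) {
            if (count($g) > 0) {
                $c[$k] = array_sum($g) / count($g);
            }
        }
    }
    return $c; // e.g. [~2, ~11.8] for the diffs of the sample above
}
print_r(kmeans1d(array(1, 2, 1, 8, 13, 15, 14, 9, 5, 1, 2)));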

Flesch-Kincaid Readability: Improve PHP function

I wrote this PHP code to implement the Flesch-Kincaid Readability Score as a function:
function readability($text) {
    $total_sentences = 1; // one full stop = two sentences => start with 1
    $punctuation_marks = array('.', '?', '!', ':');
    foreach ($punctuation_marks as $punctuation_mark) {
        $total_sentences += substr_count($text, $punctuation_mark);
    }
    $total_words = str_word_count($text);
    $total_syllables = 3; // assuming this value since I don't know how to count them
    $score = 206.835 - (1.015 * $total_words / $total_sentences) - (84.6 * $total_syllables / $total_words);
    return $score;
}
Do you have suggestions how to improve the code? Is it correct? Will it work?
I hope you can help me. Thanks in advance!
The code looks fine as far as heuristics go. Here are some points to consider that make the quantities you need to calculate considerably more difficult for a machine:
What is a sentence?
Seriously, what is a sentence? We have periods, but they can also be used for Ph.D., e.g., i.e., Y.M.C.A., and other non-sentence-final purposes. When you consider exclamation points, question marks, and ellipses, you're really doing yourself a disservice by assuming a period will do the trick. I've looked at this problem before, and if you really want a more reliable count of sentences in real text, you'll need to parse the text. This can be computationally intensive, time-consuming, and hard to find free resources for. In the end, you still have to worry about the error rate of the particular parser implementation. However, only full parsing will tell you what's a sentence and what's just a period's other many uses. Furthermore, if you're using text 'in the wild' -- such as, say, HTML -- you're going to also have to worry about sentences ending not with punctuation but with tag endings. For instance, many sites don't add punctuation to h1 and h2 tags, but they're clearly different sentences or phrases.
Syllables aren't something we should be approximating
This is a major hallmark of this readability heuristic, and it's the one that makes it the most difficult to implement. Computational analysis of syllable count in a work requires the assumption that the assumed reader speaks in the same dialect as whatever your syllable count generator is being trained on. How sounds fall around a syllable is actually a major part of what makes accents accents. If you don't believe me, try visiting Jamaica sometime. What this means is that even if a human were to do the calculations for this by hand, it would still be a dialect-specific score.
What is a word?
Not to wax psycholinguistic in the slightest, but you will find that space-separated words and what a speaker conceptualizes as words are quite different. This makes the concept of a computable readability score somewhat questionable.
So in the end, I can answer your question of 'will it work'. If you're looking to take a piece of text and display this readability score among other metrics to offer some kind of conceivable added value, the discerning user will not bring up all of these questions. If you are trying to do something scientific, or even something pedagogical (as this score and those like it were ultimately intended), I wouldn't really bother. In fact, if you're going to use this to make any kind of suggestions to a user about content that they have generated, I would be extremely hesitant.
A better way to measure reading difficulty of a text would more likely be something having to do with the ratio of low-frequency words to high-frequency words along with the number of hapax legomena in the text. But I wouldn't pursue actually coming up with a heuristic like this, because it would be very difficult to empirically test anything like it.
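For illustration, a crude version of such a frequency-based measure might look like this sketch (my assumption of what to count, not a validated metric; the tokenisation is naive):

<?php
// Type/token counts plus hapax legomena (words occurring exactly once).
function lexicalStats($text) {
    $words = str_word_count(strtolower($text), 1);
    $freq = array_count_values($words);
    $hapax = count(array_filter($freq, function ($n) { return $n === 1; }));
    return array(
        'tokens' => count($words),
        'types'  => count($freq),
        'hapax'  => $hapax,
    );
}
print_r(lexicalStats('the cat sat on the mat and the dog slept'));
// tokens: 10, types: 8, hapax: 7 ("the" occurs three times, everything else once)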
Take a look at the PHP Text Statistics class on GitHub.
Please have a look at the following two classes and their usage information. They will surely help you.
Readability Syllable Count Pattern Library Class:
<?php
class ReadabilitySyllableCheckPattern {
public $probWords = [
'abalone' => 4,
'abare' => 3,
'abed' => 2,
'abruzzese' => 4,
'abbruzzese' => 4,
'aborigine' => 5,
'acreage' => 3,
'adame' => 3,
'adieu' => 2,
'adobe' => 3,
'anemone' => 4,
'apache' => 3,
'aphrodite' => 4,
'apostrophe' => 4,
'ariadne' => 4,
'cafe' => 2,
'calliope' => 4,
'catastrophe' => 4,
'chile' => 2,
'chloe' => 2,
'circe' => 2,
'coyote' => 3,
'epitome' => 4,
'forever' => 3,
'gethsemane' => 4,
'guacamole' => 4,
'hyperbole' => 4,
'jesse' => 2,
'jukebox' => 2,
'karate' => 3,
'machete' => 3,
'maybe' => 2,
'people' => 2,
'recipe' => 3,
'sesame' => 3,
'shoreline' => 2,
'simile' => 3,
'syncope' => 3,
'tamale' => 3,
'yosemite' => 4,
'daphne' => 2,
'eurydice' => 4,
'euterpe' => 3,
'hermione' => 4,
'penelope' => 4,
'persephone' => 4,
'phoebe' => 2,
'zoe' => 2
];
public $addSyllablePatterns = [
"([^s]|^)ia",
"iu",
"io",
"eo($|[b-df-hj-np-tv-z])",
"ii",
"[ou]a$",
"[aeiouym]bl$",
"[aeiou]{3}",
"[aeiou]y[aeiou]",
"^mc",
"ism$",
"asm$",
"thm$",
"([^aeiouy])\1l$",
"[^l]lien",
"^coa[dglx].",
"[^gq]ua[^auieo]",
"dnt$",
"uity$",
"[^aeiouy]ie(r|st|t)$",
"eings?$",
"[aeiouy]sh?e[rsd]$",
"iell",
"dea$",
"real",
"[^aeiou]y[ae]",
"gean$",
"riet",
"dien",
"uen"
];
public $prefixSuffixPatterns = [
"^un",
"^fore",
"^ware",
"^none?",
"^out",
"^post",
"^sub",
"^pre",
"^pro",
"^dis",
"^side",
"ly$",
"less$",
"some$",
"ful$",
"ers?$",
"ness$",
"cians?$",
"ments?$",
"ettes?$",
"villes?$",
"ships?$",
"sides?$",
"ports?$",
"shires?$",
"tion(ed)?$"
];
public $subSyllablePatterns = [
"cia(l|$)",
"tia",
"cius",
"cious",
"[^aeiou]giu",
"[aeiouy][^aeiouy]ion",
"iou",
"sia$",
"eous$",
"[oa]gue$",
".[^aeiuoycgltdb]{2,}ed$",
".ely$",
"^jua",
"uai",
"eau",
"[aeiouy](b|c|ch|d|dg|f|g|gh|gn|k|l|ll|lv|m|mm|n|nc|ng|nn|p|r|rc|rn|rs|rv|s|sc|sk|sl|squ|ss|st|t|th|v|y|z)e$",
"[aeiouy](b|c|ch|dg|f|g|gh|gn|k|l|lch|ll|lv|m|mm|n|nc|ng|nch|nn|p|r|rc|rn|rs|rv|s|sc|sk|sl|squ|ss|th|v|y|z)ed$",
"[aeiouy](b|ch|d|f|gh|gn|k|l|lch|ll|lv|m|mm|n|nch|nn|p|r|rn|rs|rv|s|sc|sk|sl|squ|ss|st|t|th|v|y)es$",
"^busi$"
];
}
?>
Another class, the readability algorithm class, with two methods to calculate the score:
<?php
class ReadabilityAlgorithm {

    function countSyllable($strWord) {
        $pattern = new ReadabilitySyllableCheckPattern();
        $strWord = strtolower(trim($strWord)); // the patterns are lower-case
        // Check for problem words
        if (isset($pattern->probWords[$strWord])) {
            return $pattern->probWords[$strWord];
        }
        // Strip prefixes and suffixes, counting how many were removed
        // (the patterns are regular expressions, so preg_replace is needed)
        $strWord = preg_replace(
            array_map(function ($p) { return '`' . $p . '`'; },
                      $pattern->prefixSuffixPatterns),
            '', $strWord, -1, $tmpPrefixSuffixCount);
        // Split on non-vowel characters and count the vowel groups
        $arrWordParts = preg_split('`[^aeiouy]+`', $strWord);
        $wordPartCount = 0;
        foreach ($arrWordParts as $strWordPart) {
            if ($strWordPart <> '') {
                $wordPartCount++;
            }
        }
        $intSyllableCount = $wordPartCount + $tmpPrefixSuffixCount;
        // Check syllable patterns
        foreach ($pattern->subSyllablePatterns as $strSyllable) {
            $intSyllableCount -= preg_match('`' . $strSyllable . '`', $strWord);
        }
        foreach ($pattern->addSyllablePatterns as $strSyllable) {
            $intSyllableCount += preg_match('`' . $strSyllable . '`', $strWord);
        }
        return ($intSyllableCount == 0) ? 1 : $intSyllableCount;
    }

    function calculateReadabilityScore($stringText) {
        # Calculate score
        $totalSentences = 1;
        $punctuationMarks = array('.', '!', ':', ';');
        foreach ($punctuationMarks as $punctuationMark) {
            $totalSentences += substr_count($stringText, $punctuationMark);
        }
        // get ASL (average sentence length) value
        $totalWords = str_word_count($stringText);
        $ASL = $totalWords / $totalSentences;
        // find syllables value
        $syllableCount = 0;
        $arrWords = explode(' ', $stringText);
        $intWordCount = count($arrWords);
        for ($i = 0; $i < $intWordCount; $i++) {
            $syllableCount += $this->countSyllable($arrWords[$i]);
        }
        // get ASW (average syllables per word) value
        $ASW = $syllableCount / $totalWords;
        // Compute the readability score
        $score = 206.835 - (1.015 * $ASL) - (84.6 * $ASW);
        return $score;
    }
}
?>
Example of how to use:
<?php
// Create object to count readability score
$readObj = new ReadabilityAlgorithm();
echo $readObj->calculateReadabilityScore("Lorem Ipsum is simply dummy text of the printing and typesetting industry. Lorem Ipsum has been the industry's standard dummy text ever since the 1500s, when an unknown printer took a galley of type and scrambled it to make a type specimen book. It has survived not only five centuries, but also the leap into: electronic typesetting, remaining essentially unchanged. It was popularised in the 1960s with the release of Letraset sheets containing Lorem Ipsum passages, and more recently; with desktop publishing software like Aldus PageMaker including versions of Lorem Ipsum!");
?>
I actually don't see any problems with that code. Of course, it could be optimized a bit if you really wanted to by replacing all the different functions with a single counting loop. However, I'd strongly argue that it isn't necessary and even outright wrong. Your current code is very readable and easy to understand, and any optimizations would probably make things worse from that perspective. Use it as it is, and don't try to optimize it unless it actually turns out to be a performance bottleneck.
