I'm having a rough time with what is probably a simple problem. I have a SimpleXMLElement containing an array of objects (PDF data bytes). I'm trying to iterate over the array and have the code write each chunk of PDF data bytes out to an individual text file, so I can then write them to individual PDF files. My code is as follows:
$docfiles = $xml->EnvelopeStatus->EnvelopeID . "/" . $xml->EnvelopeStatus->DocumentStatuses->DocumentStatus->Name . '.txt';
$doctxt = fopen($docfiles,"w");
$docarr = array();
foreach ($xml->DocumentPDFs->DocumentPDF as $post) {
    foreach ($post->PDFBytes as $docusigntxt) {
        $docarr[] = $docusigntxt;
        foreach ($docarr as $value) {
            fwrite($doctxt, $value);
        }
    }
}
fclose($doctxt);
What am I doing wrong? Any suggestions would be greatly appreciated.
If you haven't already done so, it might be worth checking out the DocuSign SOAP API sample code that's available on GitHub: https://github.com/docusign/DocuSign-eSignature-SDK. What you're trying to accomplish is pretty straightforward -- I imagine the PHP folder in the GitHub repository contains a code snippet that shows you how to achieve your goal.
This was just a case of learning how to properly drill down into the SimpleXML structure. Here is what solved my problem. I skipped creating the text files and went straight to creating the PDFs, which is what I wanted:
foreach ($xml->DocumentPDFs->DocumentPDF as $value) {
    $binary = base64_decode($value->PDFBytes);
    file_put_contents($xml->EnvelopeStatus->EnvelopeID . "/" . $value->Name, $binary);
}
That's it.
For each document (.pdf, .txt, .docx, etc.) I also have a corresponding JSON file with the same filename.
Example:
file1.json,
file1.pdf,
file2.json,
file2.txt,
filex.json,
filex.pdf,
But I also have some JSON files that are not accompanied by a corresponding document.
I want to delete all JSON files which have no corresponding document. I'm really stuck because I can't find a proper solution to my problem.
I know how to scandir(), get the filename and extension from pathinfo(), etc., but the issue is that for each JSON file I find in the directory I would have to perform another loop over that directory, excluding all JSON files, and check whether the same filename exists, so that I can decide whether to delete it. (This is how I think it could be solved.)
The problem here is performance, since there are millions of files and for each JSON file I would have to loop over millions of files.
Can anyone guide me to a better solution?
Thank you!
Edit: Since no one will help without a piece of code being posted first (and I think this Stack Overflow habit is definitely wrong), here is what I'm trying:
<?php
$dir = "2000/";
$files = scandir($dir);
foreach ($files as $file) {
    $fullName = pathinfo($file);
    if ($fullName['extension'] === 'json') {
        if (!in_array($fullName['filename'] . '.pdf', $files)) {
            unlink($dir . $file);
        }
    }
}
Now, as you can see, I can only search for one type of document (.pdf in this case). I want to search for every extension excluding .json, and I also don't want to run a foreach/in_array() for each JSON file; I'd like to achieve all of this in a single loop.
Maybe you should approach it the other way around? I mean, iterate through the JSON files, try to find a corresponding document for each one, and if none is found, remove the JSON file.
It would look as follows:
$dir = "2000/";
foreach (glob($dir . "*.json") as $file) {
    // glob() already includes the directory in the returned path
    $file = new \SplFileInfo($file);
    // if the only match for "basename.*" is the JSON file itself, there is no companion document
    if (count(glob($dir . $file->getBasename('.' . $file->getExtension()) . ".*")) === 1) {
        unlink($dir . $file->getFilename());
    }
}
Manual
PHP: SplFileInfo
PHP: glob
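If the glob() call per JSON file turns out to be too slow for millions of files, another option is to swap it for a hash lookup: index every non-JSON basename once, then check each JSON file against that index. This is only a sketch, and it assumes everything lives in one flat directory:
$dir = "2000/";
// First pass: remember the basename of every non-JSON file.
$documents = array();
foreach (new DirectoryIterator($dir) as $fileInfo) {
    if ($fileInfo->isFile() && strtolower($fileInfo->getExtension()) !== 'json') {
        $documents[$fileInfo->getBasename('.' . $fileInfo->getExtension())] = true;
    }
}
// Second pass: delete every JSON file whose basename never appeared above.
foreach (new DirectoryIterator($dir) as $fileInfo) {
    if ($fileInfo->isFile() && strtolower($fileInfo->getExtension()) === 'json') {
        if (!isset($documents[$fileInfo->getBasename('.json')])) {
            unlink($fileInfo->getPathname());
        }
    }
}
That is two linear passes instead of one glob() per JSON file, so the cost stays roughly proportional to the number of files.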
I haven't done much coding in HTML5 and PHP before, as I've always used Python and only created system applications instead of web-based apps.
I've tried, but could not find, any information that might assist me with my latest task.
I would like users to be able to upload a CSV or XML file (I haven't decided on the format yet) that contains SKUs in one column and prices in another.
I then want the user to be able to specify a set of variables and have the document edited to that effect.
I'm not sure if I would have to use MySQL to achieve this; I have no experience with it, so if I can avoid it at all, that would be preferable.
Any advice / suggestions on material for doing this, or even actual examples of how this might be achieved would go a long way to increasing my understanding of how to approach this task.
Kind Regards.
Lewis
You can use the fgetcsv() and fputcsv() functions to manipulate a CSV file in PHP.
For XML files you can use the SimpleXML parser.
I will give an example for CSV files in php.
Reading a CSV file
if (file_exists("/tmp/my_file.csv")) {
    $file = fopen("/tmp/my_file.csv", "r");
} else {
    echo "file not found";
    exit;
}
$data = array();
while (!feof($file)) {
    $data[] = fgetcsv($file);
}
fclose($file);
//now you can manipulate $data as you wish
Writing to CSV file
$list = array
(
"abcd,efgh,ijkl,mnop",
"qrst,uvwx,yzab,cdef"
);
$file = fopen("my_file.csv", "w");
foreach ($list as $line) {
    fputcsv($file, explode(',', $line));
}
fclose($file);
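If you go with XML instead, SimpleXML lets you do roughly the same thing. The file names and the <item>/<sku>/<price> element names below are only assumptions about what your format might look like:
// Structure assumed: <items><item><sku>..</sku><price>..</price></item>...</items>
$xml = simplexml_load_file("prices.xml");
if ($xml === false) {
    die("could not parse XML");
}
foreach ($xml->item as $item) {
    $sku   = (string) $item->sku;
    $price = (float) $item->price;
    // Example adjustment: raise every price by 10%.
    $item->price = $price * 1.10;
}
// Write the modified document back out.
$xml->asXML("prices_updated.xml");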
Problem
I'm trying to edit HTML/PHP files server-side with PHP. With an AJAX POST I send three different values to the server:
the url of the page that needs to be edited
the id of the element that needs to be edited
the new content for the element
The PHP file I have now looks like this:
<?php
$data = json_decode(stripslashes($_POST['data']));
$count = 0;
foreach ($data as $i => $array) {
    if (!is_array($array) && $count == 0) {
        $count = 1;
        // $array = file url
    } elseif (is_array($array)) {
        foreach ($array as $i => $content) {
            // $array[0] = id's
            // $array[1] = contents
        }
    }
}
?>
As you can see I wrapped the variables in an array so it's possible to edit multiple elements at a time.
I've been looking for a solution for hours but can't decide what the best (or even a feasible) solution is.
Solution
I tried creating a new DOMDocument and loading in the HTML, but when dealing with a PHP file this solution isn't possible, since it can't save PHP files:
$html = new DOMDocument();
$html->loadHTMLFile('file.php');
$html->getElementById('myId')->nodeValue = 'New value';
$html->saveHTMLFile("foo.html");
(From this answer)
Opening a file, writing to it and saving it is another way to do this. But I guess I would have to use str_replace or preg_replace that way.
$fname = "demo.txt";
$fhandle = fopen($fname,"r");
$content = fread($fhandle,filesize($fname));
$content = str_replace("oldword", "newword", $content);
$fhandle = fopen($fname,"w");
fwrite($fhandle,$content);
fclose($fhandle);
(From this page)
I read everywhere that str_replace and preg_replace are risky because I'm trying to edit all kinds of DOM elements, not one specific string/element. I guess the code below comes close to what I'm trying to achieve, but I can't really trust it:
$replace_with = 'id="myID">' . $replacement_content . '</';
if ($updated = preg_replace('#id="myID">.*?</#Umsi', $replace_with, $file)) {
    // write the contents of $file back to index.php, and then refresh the page.
    file_put_contents('file.php', $updated);
}
(From this answer)
Question
In short: what is the best solution, and is it even possible to edit an HTML element's content in different file types with only an id provided?
Desired steps:
get the file from the url
find the element with the id
replace its content
First of all, you are right in not wanting to use a regex function for HTML parsing. See the answer here.
I'm going to answer this question under the presumption that you are committed to the idea of retrieving PHP files server-side before they are interpreted. There is an issue with your approach right now, since you seem to be under the impression that you can retrieve the source PHP file via the URL parameter - but that URL points to the result (the interpreted PHP). So be careful that your structure does what you want.
I am under the assumption that the PHP files are structured like this:
<?php include_some_header(); ?>
<tag>...</tag>
<!-- some HTML -->
<?php //some code ?>
<tag>...</tag>
<!-- some more HTML -->
<?php //some code ?>
Your problem now is that you cannot use an HTML reader (and writer), since your file is not HTML. My answer is that you should restructure your code, separating templating language from business logic. Get started with some templating language. Afterwards, you'll be able to open the template, without the code, and write back the template using a DOM parser and writer.
Your only alternative in your current setup is to use the replace function as you have found in this answer. It's ugly. It might break. But it's definitely not impossible. Make backups before writing over your own code with your own code.
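To make the template route a bit more concrete, here is a minimal sketch of reading a pure-HTML template, changing one element by id, and writing it back. The file name and the id are placeholders, and it assumes the template contains no embedded PHP:
$doc = new DOMDocument();
libxml_use_internal_errors(true);   // don't spam warnings about unknown HTML5 tags
$doc->loadHTMLFile('template.html');
libxml_clear_errors();
// Look the element up by its id attribute via XPath.
$xpath = new DOMXPath($doc);
$node = $xpath->query('//*[@id="myId"]')->item(0);
if ($node !== null) {
    $node->nodeValue = 'New value';
}
$doc->saveHTMLFile('template.html');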
With help from the people on Stack Overflow I can now parse JSON from a file and save a value into a database.
However, the file I intend to read from is actually a massive 2 GB file. My web server will not hold this file, but it will hold a zipped version of it: an 80 MB .gz file.
I believe there is a way to parse JSON from a zipped (.gz) file. Can anybody help?
I have found the function below, which I believe will do this, but I don't know how to link it to my code:
private function uncompressFile($srcName, $dstName) {
    $sfp = gzopen($srcName, "rb");
    $fp = fopen($dstName, "w");
    while ($string = gzread($sfp, 4096)) {
        fwrite($fp, $string, strlen($string));
    }
    gzclose($sfp);
    fclose($fp);
}
My current PHP code is below and works. It reads a basic small file, JSON-decodes it (the JSON is a series of separate lines, hence the need for FILE_IGNORE_NEW_LINES) and then takes a value and saves it to a MySQL database.
However, I believe I need to somehow combine these two bits of code so I can read the zipped file without exceeding my 100 MB storage on my web server.
$file="CIF_ALL_UPDATE_DAILY_toc-update-sun";
$trains = file($json_filename, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($trains as $train) {
$json=json_decode($train,true);
foreach ($json as $key => $value) {
$input=$value['main_train_uid'];
$q="INSERT INTO railstptest (main_train_uid) VALUES ('$input')";
$r=mysqli_query($mysql_link,$q);
}
}
}
if (is_null($json)) {
die("Json decoding failed with error: ". json_last_error());
}
mysqli_close($mysql_link);
Many Thanks
EDIT
Here is a short snippet of the JSON. There is a series of these.
I would only want to get a few key values, for example the values G90491 and P20328. A lot of the info I would not need:
{"JsonAssociationV1":{"transaction_type":"Delete","main_train_uid":"G90491","assoc_train_uid":"G90525","assoc_start_date":"2013-09-07T00:00:00Z","location":"EDINBUR","base_location_suffix":null,"diagram_type":"T","CIF_stp_indicator":"O"}}
{"JsonAssociationV1":{"transaction_type":"Delete","main_train_uid":"P20328","assoc_train_uid":"P21318","assoc_start_date":"2013-08-23T00:00:00Z","location":"MARYLBN","base_location_suffix":null,"diagram_type":"T","CIF_stp_indicator":"C"}}
It may be possible to do stream extraction of the file and then use a stream JSON parser. ZipArchive has getStream, and someone created a streaming JSON parser for PHP.
You will have to write a listener that inserts the database values as they are found and discards unnecessary JSON so it does not consume memory.
$zip = new ZipArchive;
$zip->open("file.zip");
$parser = new JsonStreamingParser_Parser($zip->getStream("file.json"),
new DB_Value_Inserter);
$parser->parse();
Based on your question, you're working with gzip instead of zip. To get the stream you can use
fopen("compress.zlib://path/to/file.json", "r");
It's difficult to write the DB_Value_Inserter since you haven't provided the format of the JSON you need, but it seems like you can probably just override the Listener::value method and just write the string values you receive.
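Just to give a feel for the listener, here is a very rough sketch of what DB_Value_Inserter could look like. The key() and value() method names, and the constructor argument, are assumptions based on the description above; the real class has to implement whatever listener interface your streaming-parser library actually defines, and the remaining callbacks can be left as no-ops:
class DB_Value_Inserter
{
    private $mysqli;
    private $currentKey;

    public function __construct(mysqli $mysqli)
    {
        $this->mysqli = $mysqli;
    }

    // Called by the parser for every object key it encounters.
    public function key($key)
    {
        $this->currentKey = $key;
    }

    // Called by the parser for every scalar value; insert only the ones we care about.
    public function value($value)
    {
        if ($this->currentKey === 'main_train_uid') {
            $stmt = $this->mysqli->prepare(
                'INSERT INTO railstptest (main_train_uid) VALUES (?)');
            $stmt->bind_param('s', $value);
            $stmt->execute();
            $stmt->close();
        }
    }
}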
PHP has compression wrappers that can help with opening and reading lines from compressed files. One is for reading gzip files:
$gzipFile = 'CIF_ALL_UPDATE_DAILY_toc-update-sun.gz';
$trains = new SplFileObject("compress.zlib://{$gzipFile}", 'r');
$trains->setFlags(SplFileObject::DROP_NEW_LINE | SplFileObject::READ_AHEAD
| SplFileObject::SKIP_EMPTY);
Because SplFileObject is iterable, you can keep your outer foreach loop the way it is. Of course, fgets() remains an alternative to using SplFileObject.
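To tie it together, the rest of your script barely needs to change. Here is a sketch that keeps your table and column names but swaps the interpolated query for a prepared statement:
$stmt = mysqli_prepare($mysql_link,
    'INSERT INTO railstptest (main_train_uid) VALUES (?)');
foreach ($trains as $train) {
    $json = json_decode($train, true);
    if ($json === null) {
        die("Json decoding failed with error: " . json_last_error());
    }
    foreach ($json as $key => $value) {
        $uid = $value['main_train_uid'];
        mysqli_stmt_bind_param($stmt, 's', $uid);
        mysqli_stmt_execute($stmt);
    }
}
mysqli_stmt_close($stmt);
mysqli_close($mysql_link);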
I have been looking for a convenient way of making and maintaining translations of my Kohana modules. I have played around with POEdit and have extracted all the __() strings from my modules. I really like the way POEdit works: later on it just takes a quick update to gather all new strings and save a new catalog. Afterwards I could convert the .po files to PHP arrays, more or less... it seems a bit complicated with all the steps.
I have seen this approach, but I would rather not install tables and new modules for translations; I think this gets too complicated and "drupalish" ;-).
How do you manage localizations and translations into different languages in your Kohana projects? Any hints would be much appreciated!
This is how I did it. First of all, POEdit for Mac is very buggy and strange, unfortunately.
In POEdit, I created a new catalog with the correct path and __ as a keyword.
I ran POEdit to extract all the strings.
After that I ran the simple PHP script below over the generated .po file, and pasted its output into the files in the project's i18n folder.
$file = 'sv_SE.po';
$translations = array();
$po = file($file);
$current = null;
foreach ($po as $line) {
    if (substr($line, 0, 5) == 'msgid') {
        $current = trim(substr(trim(substr($line, 5)), 1, -1));
    }
    if (substr($line, 0, 6) == 'msgstr') {
        $translations[$current] = trim(substr(trim(substr($line, 6)), 1, -1));
    }
}
echo "<?php\n\n";
foreach ($translations as $msgid => $msgstr) {
    echo '\'' . $msgid . '\' => \'' . $msgstr . "',\n";
}
echo "\n?>";
By using POEdit it's easy to maintain the project localizations, since it syncs all the strings when you simply click "Update". I get a report of new and obsolete strings and can update the localizations in a few moments. Hope it helps someone.
Try the I18n_Plural module. I like how it handles plural forms; it's very simple and easy. There are a lot of examples in the readme file (shown on the main module page).