Passing parameters to a bat (or vbs) in PHP

Objective: Use PHP to call a vbs that converts an xls/xlsx file to a csv.
Question: How can I pass a source file path and a destination file path to a vbs that converts xls/xlsx to csv and run that vbs in a PHP web application?
Details: I have a working vbs that takes a source file path and a destination file path and converts the xls/xlsx at the source path into a csv. I can execute it from the Windows command line and it does exactly what I want. I can also put the execution command into a bat file and run the bat file to achieve the same result. However, when I use exec()/shell_exec()/system() in PHP to execute the same command, no csv is created. (If I try to run the bat from PHP using system(), the contents of the bat file show up on the page; for instance, the line echo Conversion complete! prints "echo Conversion complete!" followed by "Conversion complete.") I haven't seen any errors yet.
Note: I know about PHPExcel, I'd prefer not to use it.
excelToCsv.vbs
On Error Resume Next
If WScript.Arguments.Count < 2 Then
    WScript.Echo "Please specify the source and the destination files. Usage: ExcelToCsv <xls/xlsx source file> <csv destination file>"
    WScript.Quit
End If
csv_format = 6 ' xlCSV file format
Set objFSO = CreateObject("Scripting.FileSystemObject")
src_file = objFSO.GetAbsolutePathName(Wscript.Arguments.Item(0))
dest_file = objFSO.GetAbsolutePathName(WScript.Arguments.Item(1))
Dim oExcel
Set oExcel = CreateObject("Excel.Application")
Dim oBook
Set oBook = oExcel.Workbooks.Open(src_file)
oBook.SaveAs dest_file, csv_format
oBook.Close False
oExcel.Quit
batConverter.bat
excelToCsv.vbs conversionTestSourceMS2003.xls batTest.csv
echo Conversion Complete!
index.phtml
<?php
system("cmd /c batConvert.bat")
?>
Note: All of the above files (along with conversionTestSourceMS2003.xls) are in the same directory. I have not implemented any way to pass the parameters (since I can't get it to work even if it's all hard coded...)
Set Up: PHP5, Zend Framework, WAMP, Windows 7 (localhost).

For the sake of simplicity, I merged everything into a single ASP page. This should let me reproduce a similar problem in IIS, and since everything is in a single ASP script, I can see the error more directly. My test machine is running Windows Vista SP2 with IIS 7 and Excel 2007 SP3.
excelToCsv.asp
<%
Option Explicit
Dim csv_format, src_file, dest_file, strPath, objFSO
csv_format = 6
src_file = "conversionTestSourceMS2003.xls"
dest_file = "testbat.csv"
strPath = "[HARDCODED PATH HERE]\"
src_file = strPath & src_file
dest_file = strPath & dest_file
Dim objExcel, objBook
Set objExcel = CreateObject("Excel.Application")
Set objBook = objExcel.Workbooks.Open(src_file)
objBook.SaveAs dest_file, csv_format
objBook.Close False
Response.Write "Conversion Complete!"
objExcel.Quit
%>
When running this code, I got a generic ASP error. So I enabled detailed error messages in ASP, and I got the following error:
Microsoft Office Excel error '800a03ec'
Microsoft Office Excel cannot access the file '[HARDCODED PATH HERE]\conversionTestSourceMS2003.xls'. There are several possible reasons:
• The file name or path does not exist.
• The file is being used by another program.
• The workbook you are trying to save has the same name as a currently open workbook.
/temp/ExcelToCsv.asp, line 18
Now, this is not Apache, but I do believe the problem is related to yours. This error implies there is a security/permission problem where Excel cannot do what it needs to do to access or read the file. In fact, I encountered similar errors when I was executing the VBScript (and passing the error up the chain) from PHP (in IIS).
I believe it can be resolved by changing the Windows User being used to create the process. This can be configured in Services.msc by editing the Apache service and changing the Log On tab to an actual Windows user instead of a Service Account. I have not tested it yet, though, since setting up Apache is not something I can do right now.
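For the parameter-passing part of the original question, here is a minimal sketch of how the VBScript could be invoked from PHP with the two paths as arguments, once the permission issue is sorted out. It assumes cscript.exe is on the PATH; the directory used below is only a placeholder for wherever the script and workbook actually live.
<?php
// Minimal sketch: run excelToCsv.vbs via cscript with two arguments.
// The paths below are placeholders for this example; adjust to your setup.
$vbs  = 'C:\\wamp\\www\\converter\\excelToCsv.vbs';
$src  = 'C:\\wamp\\www\\converter\\conversionTestSourceMS2003.xls';
$dest = 'C:\\wamp\\www\\converter\\batTest.csv';

// //nologo suppresses the cscript banner; escapeshellarg() guards against
// spaces in the paths; 2>&1 folds stderr into the captured output.
$cmd = 'cscript //nologo ' . escapeshellarg($vbs) . ' '
     . escapeshellarg($src) . ' ' . escapeshellarg($dest) . ' 2>&1';

exec($cmd, $output, $exitCode);

echo $exitCode === 0
    ? 'Conversion complete!'
    : 'Conversion failed: ' . implode("\n", $output);
?>
Calling the .vbs through a .bat relies on the wscript file association, which is one more thing that can differ between an interactive session and the Apache service account; invoking cscript.exe explicitly sidesteps that.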

Related

How to execute Xpdf (pdftotext.exe) on a shared drive?

I'm trying to convert PDF to text via PHP and Xpdf (pdftotext.exe). On my localhost everything works well, but when I try to move everything to the server, I run into trouble.
First of all, I checked some settings on the server: safe_mode is off, exec() is not disabled, and permissions are rwxrwxrwx.
Then I try this:
$command = "\\\\149.223.22.11\\cae\\04_Knowledge-base\\tools\\pdftotext.exe -enc UTF-8 ". $fileName . " \\\\149.223.22.11\\cae\\04_Knowledge-base\\output.txt";
$result = exec($command,$output,$args);
echo shell_exec($command);
which isn't working. When I look, $result and $output are empty, but $args returns 1, which corresponds to "Incorrect function" according to this document on Windows system error codes.
The whole command looks like \\149.223.22.11\cae\04_Knowledge-base\tools\pdftotext.exe -enc UTF-8 \\149.223.22.11\cae\04_Knowledge-base\testpdf\04_egerland_final_paper.pdf \\149.223.22.11\cae\04_Knowledge-base\output.txt, and when it is entered directly on the command line, it works.
So I'm a bit out of ideas. Does anyone have a hint?
Edit 2016-02-01 - additional testing
So I made additional tests, and when I try to run a similar command with exec() from localhost (target .exe file, input and output file in the same location, only using localhost instead of the server), it works. I'm now checking differences in the server settings. Could the problem be that localhost's Server API is Apache 2.0 Handler while the server's is CGI/FastCGI?
So it was all a mistake on my side. I had not checked the IIS permissions properly and forgot to assign a user to the virtual directory I wanted to access. So my only advice, and the lesson from this, is to double (maybe triple) check that you have all permissions set correctly.
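One debugging step that would have made this quicker to spot: exec() discards stderr, so a permission or UNC error from pdftotext.exe never shows up in $output. Redirecting stderr to stdout makes the real error visible. A small sketch, reusing the paths and variables from the question:
<?php
// Sketch: capture stderr as well, so permission/UNC errors become visible.
$command = '\\\\149.223.22.11\\cae\\04_Knowledge-base\\tools\\pdftotext.exe'
         . ' -enc UTF-8 ' . escapeshellarg($fileName)
         . ' \\\\149.223.22.11\\cae\\04_Knowledge-base\\output.txt 2>&1';

exec($command, $output, $returnCode);

if ($returnCode !== 0) {
    // With 2>&1 the error text from pdftotext (or from Windows itself)
    // ends up in $output instead of being silently dropped.
    error_log('pdftotext failed (' . $returnCode . '): ' . implode("\n", $output));
}
?>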

Check PDF page count with PHP (on Linux)

I have a webpage where I let users upload files to their account folder, PDF and JPG files only. I want to count the number of pages in each uploaded PDF and show it to the users.
To do this, I was using the pdfinfo binary on Linux, part of the Xpdf project.
This is the man page of the binary file: http://linuxcommand.org/man_pages/pdfinfo1.html
You can download the .zip with the binaries there: http://www.foolabs.com/xpdf/download.html
My code (this worked perfectly, but yesterday it failed):
function getNumPagesInPDF($document){
    if(!file_exists($document))return null;
    $cmd = "pdfinfo";
    // Open the document
    exec($cmd." '".$document."'", $output);
    // Browse the data
    $pagecount = 0;
    foreach($output as $op){
        // Extract the number of pages
        if(preg_match("/Pages:\s*(\d+)/i", $op, $matches) === 1){
            $pagecount = intval($matches[1]);
            break;
        }
    }
    return $pagecount;
}
I can run the command over SSH and it works on the server. But this code no longer works from PHP, even though nothing in the code has changed.
Ah! A little addition: I checked that exec() is enabled in my PHP using:
function exec_enabled() {
    $disabled = explode(',', ini_get('disable_functions'));
    return !in_array('exec', $disabled);
}
if (exec_enabled()){
    echo "exec works";
}
Another addition: PHP doesn't show any error related to this, and I have error logging enabled to a log file (including warnings). My host recently activated mod_security.
TASK 1: Check the $document variable: the path is OK, relative to where the PHP file is located. The path exists and so does the file.
TASK 2: Check whether $output contains anything: no, the $output array is empty. Why? I can't understand it.
TASK 3: Check $cmd." '".$document."'": it's OK, and copying the resulting command into SSH works. I'm lost.
As per the comment discussion, we've seen that running a binary using a bare filename does not always work. This is as true on the console as it is inside a system command like exec().
When you run pdfinfo in either environment, the system will search through the environment variable PATH to discover which directories to find it in. This variable is nearly always different between your user account and the Apache environment, which is why it is important to always specify the fully-qualified filename when running a binary programmatically.
As far as I know, exec() does not regard the folder containing the current PHP script as the current working directory. Even if it did, the current directory . would need to be in the Apache user's PATH in order for this to be found. Thus, I am not sure why this used to work for you, but it emphasises the importance of the above lesson: always use the full path.
You should also read the path from a settings file, rather than hardwiring it in code. This will help you as you move from local, test, staging and live environments of your app, which may store this binary in different locations.
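As an illustration of both points, here is a sketch of the same function with the binary location pulled from a constant that could live in a settings file. The path shown is only an example; use whatever "which pdfinfo" reports on your server.
<?php
// Example location only; define this in your settings/config file.
define('PDFINFO_BIN', '/usr/local/bin/pdfinfo');

function getNumPagesInPDF($document)
{
    if (!file_exists($document)) {
        return null;
    }

    // Fully-qualified binary path, and the document escaped for the shell.
    exec(PDFINFO_BIN . ' ' . escapeshellarg($document), $output);

    foreach ($output as $op) {
        if (preg_match('/Pages:\s*(\d+)/i', $op, $matches) === 1) {
            return intval($matches[1]);
        }
    }
    return 0;
}
?>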

PHP problems when transferring code from Windows to OS X

I have recently bought a new MacBook Pro. Before I had my MacBook Pro I was working on a website on my desktop computer. And now I want to transfer this code to my new MacBook Pro.
The problem is that when I transferred the code (I put it on Dropbox and simply downloaded it on my MacBook Pro) I started to see lots of error messages in my PHP code.
The error message I'm receiving is:
Warning: Cannot modify header information - headers already sent by (output started at /some/file.php:1) in /some/file.php on line 23
I have done some research on this, and it seems that this error is most frequently caused by a new line, simple whitespace, or any output before the <?php tag. I have looked through all the places where cookies are being set and where I'm using the header() function. I haven't detected any output or whitespace that could possibly interfere and cause this problem.
Noteworthy is that the error always says the output started at line 1, which got me wondering whether there is some difference in the way the Mac OS X and Windows operating systems handle new lines or whitespace. Or could the Dropbox transfer have messed something up?
The code on one of the pages (login.php) that produces the error:
<?php
include "mysql_database.php";
login();
$id = $_SESSION['Loggedin'];
setcookie("login", $id, (time()+60*60*24*30));
header('Location: ' . $_SERVER['HTTP_REFERER']);
?>
login function:
function login() {
    $connection = connecttodatabase();
    $pass = "";
    $user = "";
    $query = "";
    if (isset($_POST['user']) && $_POST['user'] != null) {
        $user = $_POST['user'];
        if (isset($_POST['pass']) && $_POST['pass'] != null) {
            $pass = md5($_POST['pass']);
            $query = "SELECT ID FROM Anvandare WHERE Nickname='$user' AND Password ='$pass'";
        }
    }
    if ($query != "") {
        $id = $connection->query($query);
        $id = mysqli_fetch_assoc($id);
        $id = $id['ID'];
        $_SESSION['Loggedin'] = $id;
    }
    closeconnection($connection);
}
Complete error:
Warning: Cannot modify header information - headers already sent by (output started at /Users/name/GitHub/website/login.php:1) in /Users/namn/GitHub/website/login.php on line 9
Check if there are spaces in front of your PHP opening tag. Also try resaving the file from Notepad++ using the Windows (CRLF) line endings. (Edit > EOL Conversion > Windows format)
Noteworthy is that the error always says the output started at line 1, which got me wondering whether there is some difference in the way the Mac OS X and Windows operating systems handle new lines or whitespace. Or could the Dropbox transfer have messed something up?
Don’t redo your code or worry about the header() calls or even the cookie stuff. That is not the issue.
The issue is that Windows line endings are different from Mac line endings. More details here.
Different operating systems use different characters to mark the end of line:
Unix / Linux / OS X uses LF (line feed, '\n', 0x0A)
Macs prior to OS X use CR (carriage return, '\r', 0x0D)
Windows / DOS uses CR+LF (carriage return followed by line feed, '\r\n', 0x0D0A)
And what happens in cases like this is that the formatting of the page causes the PHP parser in Apache to choke on the files, possibly sending content to the browser before you intend to when making header() calls or setting cookies. In other words, the mangled line endings force a "header" to be sent because the file itself is inadvertently outputting data to the browser.
The solution might be to avoid using Dropbox & just copy the files onto a flash drive & transfer it that way. That’s an idea but I am not convinced that Dropbox was the culprit in this. Meaning the issue might still exist even if you copy the files to a flash drive.
Or if that does not work, do as the linked to article suggests & download a good text editing tool like TextWrangler. Just load the files into TextWrangler & then manually change the line endings so they are Mac (CR) and resave the files.
Another long-term solution to this issue might be to use a version control system like git coupled with an account on GitHub to manage your code. The benefit is by pushing code to GitHub & pulling code from GitHub, the process itself will deal with cross-platform line ending headaches. And you don’t need to worry about inadvertent oddities caused by a straight copy of files to a service like DropBox.
But again, pretty convinced this has nothing to do with Dropbox. It’s all about Windows line endings being different from Mac OS X line endings.
EDIT: There are some interesting ideas on how to handle the bulk conversion of Windows line endings to Mac OS X line endings on Mac OS X Hints. The most intriguing one is the use of zip and unzip to facilitate the process. I have not tried this, so caveat emptor! But it does sound worth testing, since the last line states: "BTW, it's the '-a' flag to unzip, that is causing the ascii files to have their lines endings converted."
I've always used the following (in a file named fixascii):
#!/bin/sh
zip -qr foo.zip "$@" && unzip -aqo foo.zip && rm foo.zip
And then execute it as:
fixascii [files or directories to convert]
Which has the benefit over most of these other commands in that you
can point it with impunity at an entire directory tree and it will
process all the files in it and not corrupt any binaries that may
happen to have a string of bits in them that look like a line-ending.
I've seen too many times where someone corrupted a ton of images and
other binaries, when trying to fix line-endings on text files using
dos2unix or tr in combination with find but failed to ensure that only
text files were processed. Unzip figure out which files are ascii,
converts them, and leaves the binaries alone.
BTW, it's the "-a" flag to unzip, that is causing the ascii files to
have their lines endings converted.
And then looking in the official man page for unzip under the -a (convert text files) option; emphasis is mine:
Ordinarily all files are extracted exactly as
they are stored (as ''binary'' files). The -a option causes files
identified by zip as text files (those with the 't' label in zipinfo
listings, rather than 'b') to be automatically extracted as
such, converting line endings, end-of-file characters and the
character set itself as necessary. (For example, Unix files use
line feeds (LFs) for end-of-line (EOL) and have no end-of-file (EOF)
marker; Macintoshes use carriage returns (CRs) for EOLs; and most
PC operating systems use CR+LF for EOLs and control-Z for EOF. In
addition, IBM mainframes and the Michigan Terminal System use
EBCDIC rather than the more common ASCII character set, and NT
supports Unicode.) Note that zip's identification of text files
is by no means perfect; some ''text'' files may actually be binary
and vice versa. unzip therefore prints ''[text]'' or
''[binary]'' as a visual check for each file it extracts when using
the -a option. The -aa option forces all files to be extracted
as text, regardless of the supposed file type.
EDIT: Also, if you have access to a Linux machine, you might want to check out dos2unix. More details here as well. And I found another Stack Overflow question here.
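If you would rather not install another editor, a short PHP command-line script can normalize the files to LF endings (what OS X uses, per the list above). This is only a sketch, and it assumes the files are plain text, so point it at your .php sources only, never at images or other binaries:
<?php
// Sketch: normalize the line endings of the given text files to LF.
// Run from the command line: php fix-line-endings.php login.php mysql_database.php
foreach (array_slice($argv, 1) as $path) {
    $text = file_get_contents($path);
    // Convert CRLF first, then any remaining lone CR, to LF.
    $text = str_replace(array("\r\n", "\r"), "\n", $text);
    file_put_contents($path, $text);
    echo "normalized: $path\n";
}
?>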
Finally found an easy way to fix this! I was looking through the php.ini file when I came across an option named auto_detect_line_endings, which has its default value set to Off.
The description to this option is:
; If your scripts have to deal with files from Macintosh systems,
; or you are running on a Mac and need to deal with files from
; unix or win32 systems, setting this flag will cause PHP to
; automatically detect the EOL character in those files so that
; fgets() and file() will work regardless of the source of the file.
; http://php.net/auto-detect-line-endings
Which is exactly what I was looking for!
I simply used the ini_set() function at the beginning of my database file (which I load on every PHP page), and it seems to have solved the problem for me! The option changed with ini_set() also reverts to its php.ini value when the script completes.
Full line of the ini_set() function that I used:
ini_set("auto_detect_line_endings", true);
Thanks for all your help guys!
More info on ini_set() function here: ini_set() function
More info on the auto_detect_line_endings option here: Auto detect line endings option

PHP: ftp_get() can download file with same name only once

I'm having a strange problem using ftp_get() on one of the two identical instances. One is on localhost and another on an actual server. I'm using the following to download a file via FTP. Both of the instances download from the same FTP servers with the same credentials and same paths.
$result = ftp_get($connection, $downloadPath, $serverPath, FTP_BINARY);
if ($result) {
    $successfulWrites[] = $downloadPath; // file name only without path
} else {
    // on second attempt to download file with same name, ftp_get() returns false
    // this is where I throw an exception in my code
}
On my localhost, I can download the same file over and over, and it doesn't matter what the file name on the FTP server is or where it's located.
On the second instance, whose code is identical to the localhost's (i.e. pulled from the same git repo), I can download a file once, but the same file cannot be downloaded again, and ftp_get() returns false. If I change the name of the file on the FTP server, I can download it, but after that it won't work again, i.e. ftp_get() will return false.
I don't have access to the FTP server log. If it's available, I'm going to try to get it today from the host. But can anyone think of a reason this might be happening? ftp_get() just returns true or false without any explanation, so I'm pretty stuck with this.
I'm using PHP 5.4, and I have no idea what the spec is of the FTP (regular FTP) server.
As discussed, it sounded like ftp_get() was successfully obtaining the file and writing it locally. I wonder whether, due to a permissions problem, it fails when it tries to write the file locally again. Thus, the FTP channel itself is fine, and the problem is purely local.
I'm somewhat surprised at this though, as I would imagine PHP would have raised a warning. Is your error_reporting set to allow this whilst you are debugging?
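A quick way to test that theory is to check the local side and surface warnings before calling ftp_get(). A sketch, using the same variables as the snippet in the question:
<?php
// Debugging sketch: show warnings and rule out local write problems
// before blaming the FTP channel.
error_reporting(E_ALL);
ini_set('display_errors', '1');

if (file_exists($downloadPath) && !is_writable($downloadPath)) {
    echo "Local file exists and is not writable: $downloadPath\n";
} elseif (!is_writable(dirname($downloadPath))) {
    echo "Local directory is not writable: " . dirname($downloadPath) . "\n";
}

$result = ftp_get($connection, $downloadPath, $serverPath, FTP_BINARY);
var_dump($result); // false here, with a warning above, points at the local side
?>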

Scan uploaded files for malware under Windows using PHP

I'm trying to install ClamAV on Windows but I can't find out how.
What I actually want is to scan uploaded files for malware and return a value like "safe" or "Infected by: X".
Do you think it's possible on Windows using a free library?
Do you know if there is paid software that can do this (even using the command line)?
I managed to do it by installing ClamWin (clamwin-0.97.6) on the Windows 2008 server. I created the eicar.txt file in order to test detection:
X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
Created test.php file:
<?php
$file = 'C:/Users/Localadmin/Desktop/testfile/eicar.txt'; // infected test file
$db = '"C:/Documents and Settings/All Users/.clamwin/db/"'; // path to database of virus definition
$scan_result = shell_exec("D:/programs/clamwin/bin/clamscan --database=$db $file");
echo $scan_result;
?>
It gives me this result:
Eicar-Test-Signature FOUND
----------- SCAN SUMMARY -----------
Known viruses: 1568163
Engine version: 0.97.6
Scanned directories: 0
Scanned files: 1
Infected files: 1
Data scanned: 0.00 MB
Data read: 0.00 MB (ratio 0.00:1)
Time: 7.363 sec (0 m 7 s)
Then you can process the string $scan_result to figure out what number is reported after 'Infected files: ', as in the sketch below.
I will be using it to scan files uploaded via a form, and since the scanning takes time (7 seconds) I will use some Ajax script which can nicely return feedback to the user, such as "Uploading file..." and "Scanning for viruses..."
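For the parsing step mentioned above, a small sketch that pulls the number out of the scan summary (using the same $scan_result as in test.php):
<?php
// Sketch: extract the "Infected files" count from the clamscan summary.
if (preg_match('/Infected files:\s*(\d+)/', $scan_result, $matches)) {
    $infected = (int) $matches[1];
    echo $infected === 0 ? 'safe' : "Infected files found: $infected";
} else {
    echo 'Could not read the scan summary';
}
?>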
You can install ClamAV for Windows (ClamWin) and use PHP's passthru() function to scan a file via the command line and get the output back. Parse it, then display your message. You will have to adjust your PHP timeout value, or configure your application to upload the file and have the user poll for the status while a background script scans it and inserts the result into a database or something. Try looking at virustotal.com; they do this, and scan files with over 20 AV scanners.
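Instead of (or in addition to) parsing the text output, clamscan's exit code can be used: it returns 0 when no virus is found, 1 when a virus is found, and 2 when an error occurred. A sketch, reusing the binary, database and file paths from the answer above:
<?php
// Sketch: rely on clamscan's exit code rather than parsing its output.
$clamscan = 'D:/programs/clamwin/bin/clamscan'; // no spaces, so no quoting needed
$db       = 'C:/Documents and Settings/All Users/.clamwin/db/';

exec($clamscan . ' --database=' . escapeshellarg($db) . ' ' . escapeshellarg($file),
     $output, $code);

switch ($code) {
    case 0:  echo 'safe'; break;                      // no virus found
    case 1:  echo 'Infected'; break;                  // virus found
    default: echo 'Scan error (code ' . $code . ')';  // 2 = some error occurred
}
?>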
