I have a PHP script that reads and modifies the content of files to compile a set of JS files (i.e. minify them). It works when I run it through the webserver, but when I execute it in PowerShell, the files can't be read.
The purpose is to create an automated script that builds the compiled JS file for deployment.
Here is the PHP code I'm using:
$compiledFile = './compiled.js';
$files = array(
    'script/file1.js',
    'script/file2.js',
    'script/file3.js'
);
$minifiedJs = '';
foreach ($files as $file) {
    $file = '../'.$file;
    if (!is_file($file)) {
        // gets here every time when run in PowerShell
        continue;
    }
    $content = file_get_contents($file);
    $minifiedJs .= JSMin::minify($content);
}
file_put_contents($compiledFile, $minifiedJs);
I have a feeling it has something to do with the file path, but hours of searching haven't helped.
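If it is the path, one hedged guess: relative paths such as '../script/file1.js' resolve against the current working directory, which under PowerShell is wherever the prompt happens to be rather than the script's folder. A minimal sketch that anchors the paths to the script's own location instead (the '../' layout is assumed from the code above, not confirmed):

<?php
// Resolve everything relative to this script's directory,
// so the CLI working directory no longer matters.
$baseDir = dirname(__DIR__); // parent of the folder containing this script
$files = array(
    'script/file1.js',
    'script/file2.js',
    'script/file3.js'
);

foreach ($files as $file) {
    $path = $baseDir . '/' . $file;
    if (!is_file($path)) {
        echo "Missing: $path\n"; // makes the failing path visible when run from PowerShell
        continue;
    }
    // ... minify as before ...
}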
EDIT: I'm pretty sure the issue has to do with the firewall, which I can't access. I'm marking Canis' answer as correct and will figure something else out, possibly wget or just manually scraping the files and hoping no major updates are needed.
EDIT: Here's the latest version of the builder and here's the output. The build directory has the proper structure and most of the files, but the files contain only their names and extensions - there is no data inside them.
I am writing a PHP script that searches the local directory for files, then scrapes my localhost (XAMPP) for the same files to copy into a build folder (the goal is to build the site with PHP on localhost and then put it on a server as HTML).
Unfortunately I am getting the error: Warning: copy(https:\\localhost\intranet\builder.php): failed to open stream: No such file or directory in C:\xampp\htdocs\intranet\builder.php on line 73.
That's one example - every file in the local directory spits back the same error. The source addresses are correct (I can reach the file on localhost from the address in the error log) and the local directory is properly constructed - the files just aren't being copied into it. The full code is here; the most relevant section is:
// output build files
foreach ($paths as $path)
{
    echo "<br>";
    $path = str_replace($localroot, "", $path);
    $source = $hosted . $path;
    $dest = $localbuild . $path;
    if (is_dir_path($dest))
    {
        mkdir($dest, 0755, true);
        echo "Make folder $source at $dest. <br>";
    }
    else
    {
        copy($source, $dest);
        echo "Copy $source to $dest. <br>";
    }
}
You are trying to use URLs to traverse local filesystem directories. URLs are only for the webserver to understand web requests.
You will have more luck if you change this:
copy(https:\\localhost\intranet\builder.php)
to this:
copy(C:\xampp\htdocs\intranet\builder.php)
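To spell that out (this is my own illustration, and the destination path is just a placeholder): copy() takes a source and a destination, and both are plain filesystem paths, not URLs:

<?php
$source = 'C:\\xampp\\htdocs\\intranet\\builder.php';
$dest   = 'C:\\xampp\\htdocs\\intranet\\build\\builder.php'; // hypothetical build location

if (!copy($source, $dest)) {
    echo "Failed to copy $source to $dest";
}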
EDIT
Based on your additional info in the comments, I understand that you need to generate static HTML files for hosting on a static-only webserver. This is not really an issue of copying files; it's about accessing the HTML that the script generates when run through a webserver.
You can actually do this in a few different ways. I'm not sure exactly how the generator script works, but it seems like that script is trying to copy the supposed output of loads of PHP files.
To get the generated content from a PHP file you can either use the command-line php command to execute the script, like so: c:\some\path>php some_php_file.php > my_html_file.html, or use the power of the webserver to do it for you:
<?php
$hosted = "https://localhost/intranet/"; // UPDATED
foreach ($paths as $path)
{
    echo "<br>";
    $path = str_replace($localroot, "", $path);
    $path = str_replace("\\", "/", $path); // ADDED
    $source = $hosted . $path;
    $dest = $localbuild . $path;
    if (is_dir_path($dest))
    {
        mkdir($dest, 0755, true);
        echo "Make folder $source at $dest. <br>";
    }
    else
    {
        $content = file_get_contents($source);
        file_put_contents(str_replace(".php", ".html", $dest), $content);
        echo "Copy $source to $dest. <br>";
    }
}
In the code above I use file_get_contents() to read the HTML from the URL you are using (https://...), which in this case, unlike copy(), will call up the webserver and trigger the PHP engine to produce the output.
Then I write the pure HTML to a file in the $dest folder, replacing .php with .html in the filename.
EDIT
Added and revised the code a bit above.
I am facing an issue with this code:
<?php
$files = scandir("D:/Dummy");
foreach ($files as $file) {
    $filenam = $file;
    $path_to_file = $filenam;
    $file_contents = file_get_contents($path_to_file);
    echo "Hello ".$filenam;
    $printFileName = "";
    if (strpos("9222339940", $file_contents) === false)
    {
        $printFileName = $filenam." ";
    }
}
echo $printFileName;
?>
Basically, I have written this code to scan all the files in the directory and check each file for the mobile number that I need to replace. But for some reason I'm not able to run the script; it throws the error:
file_get_contents(name of the file) failed to open stream. No such file or directory error.
The scandir() function of PHP will only return the basenames of the files within the directory. That is, if your directory D:\Dummy contains a file test.txt, then scandir() will not return the full path D:\Dummy\test.txt, but only test.txt. So the PHP process will not find the file, because you need to provide the complete path of the file.
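A minimal sketch of that fix, assuming the D:/Dummy directory from the question - the full path is built by prefixing the scanned directory, and the . and .. entries that scandir() also returns are skipped:

<?php
$dir = "D:/Dummy";
$files = scandir($dir);

foreach ($files as $file) {
    if ($file === "." || $file === "..") {
        continue; // scandir() also lists the two dot entries
    }
    $path_to_file = $dir . "/" . $file; // full path, not just the basename
    $file_contents = file_get_contents($path_to_file);
    // ... search for / replace the number here ...
}
?>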
We have a few fields in an HTML page, and their values are written into a file using PHP.
The file has the data; however, a bash script that takes this file (with a .txt extension) as input is not working.
When the file is opened and re-saved manually, the bash script works properly!
I've tried changing the permissions of the file, but the bash script still won't use it. Any help on this is greatly appreciated.
PHP Script:
$name = "test.txt";
$handle = fopen($name, "w");
fwrite($handle, "my message");
fclose($handle);
Bash Script:
INPUT="$1"
OLDIFS=$IFS
IFS=,
[ ! -f $INPUT ] && { echo "$INPUT file not found"; exit 99; }
while read message number ; do
echo $message # or whatever you want to do with the $line variable
#j=$[$(line)]
echo $number
echo "$message" | gnokii --sendsms $number --smsc $SMSC
done < $INPUT
IFS=$OLDIFS
You probably want to use fclose($handle) rather than unlink().
EDIT: OK then, "because...":
fclose($handle) closes the file referenced by the handle $handle.
unlink() takes a file path string as a parameter rather than a resource.
If used as intended, unlink() will actually delete the file.
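A tiny illustration of the difference (the filename is just a placeholder):

<?php
// fclose() releases an open handle; the file itself stays on disk.
$handle = fopen("test.txt", "w");
fwrite($handle, "my message");
fclose($handle);

// unlink() takes a path, not a handle, and removes the file from disk.
unlink("test.txt");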
unlink(), as mentioned, would delete the file. I just created a plain script like this:
<?php
$name = "/var/www/test.txt";
$handle = fopen($name, "w");
fwrite($handle, "test,9944");
fclose($handle);
?>
This script works fine for storing the data, but running the bash script on the resulting test.txt does nothing. Yet after opening and re-saving the file under the same name, it works.
Finally got the answer: it's the end of line (a trailing newline) that is missing from the text file created by PHP. Anyway, thank you all for contributing.
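In other words, bash's `while read` loop only processes lines that end with a newline, so the single line PHP wrote was never read. A minimal sketch of the fix, appending a newline when writing:

<?php
$name = "/var/www/test.txt";
$handle = fopen($name, "w");
fwrite($handle, "test,9944" . PHP_EOL); // trailing newline so `while read` sees the line
fclose($handle);
?>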
My PHP/MySQL web application has a function that combines a set of images for each user into a single PDF using ImageMagick. Each PDF is then placed into a ZIP file. Each image can be up to 5 MB in size. My problem is that the browser times out before the user can download the document.
Any ideas on how to best do this?
I was thinking I could create the ZIP file without sending it to the browser (i.e. remove the "headers" at the end of my code) and then email a link to the file; however, the user would still have to wait a long time for the ZIP file to be created (it appears as if the browser is just hanging). Could this be done with AJAX behind the scenes, or directly on the server somehow?
$tmp_path = sys_get_temp_dir().'/';
$archive_name = "myarchive.zip";
$zip = new ZipArchive();
if ($zip->open($archive_name, 1 ? ZIPARCHIVE::OVERWRITE : ZIPARCHIVE::CREATE) !== true) {
    return false;
}
foreach ($rows AS $row) {
    $irows = get_images($row['user_id']);
    $images = array();
    foreach ($irows AS $irow) {
        $doc = fetch_document_path($irow['id']);
        $output_file_name = $tmp_path.$irow['id'].'.jpg';
        exec('convert '.$doc.' '.$output_file_name);
        $images[] = $irow['id'].'.jpg';
    }
    $images = implode(' ', $images);
    $output_file_name = $tmp_path.$row['name'].'.pdf';
    exec('convert '.$images.' "'.$output_file_name.'"');
    $zip->addFile($output_file_name, basename($output_file_name));
}
$zip->close();
header('Content-type: application/zip');
header('Content-Disposition: attachment; filename="output.zip"');
readfile($archive_name);
IMO you should run some background task that will do the job. The background worker can, for example, save the URL of the result file in the DB after it has finished (it can also save some more information, like the current status and job progress); meanwhile, the webpage can periodically ask the server via AJAX whether the job is done, and finally display the link when it is available.
The simplest way to achieve that is to run your script as a background process:
$arg1 = escapeshellarg($somearg1);
$arg2 = escapeshellarg($somearg2);
exec(sprintf('/usr/bin/php archiver.php %s %s > /dev/null 2>&1 & echo $!', $arg1, $arg2));
archiver.php should begin with the following lines:
<?php
ignore_user_abort(true);
set_time_limit(0);
This prevents the script from being stopped when the parent script finishes.
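To sketch the AJAX-polling side mentioned above: a hypothetical status.php the page could poll, assuming archiver.php writes its progress to a status.json file (the filename and fields are placeholders of mine, not from the original setup):

<?php
// status.php - polled by the page via XMLHttpRequest.
// Assumes archiver.php periodically writes something like
// {"done": false, "progress": 42, "url": null} to status.json.
header('Content-Type: application/json');

$statusFile = __DIR__ . '/status.json';

if (!is_file($statusFile)) {
    // Job has not started yet (or the status file was cleaned up).
    echo json_encode(array('done' => false, 'progress' => 0, 'url' => null));
    exit;
}

// Relay whatever the background worker last reported.
readfile($statusFile);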
The second idea I have is more complex - you can write a daemon that runs in the background waiting for jobs. To communicate with it you can use queues (like AMQP) or just the database. With a daemon you'll have more control over what happens and when - with the first approach it can happen that your application fires off too many processes.
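Purely as an illustration of the daemon approach, a sketch of a database-polled worker loop; the jobs table, its columns and the credentials are placeholders of mine, not part of the original answer:

<?php
// worker.php - hypothetical long-running daemon that polls a jobs table.
set_time_limit(0);

$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

while (true) {
    // Pick up the oldest pending job, if there is one.
    $stmt = $pdo->query("SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1");
    $job  = $stmt->fetch(PDO::FETCH_ASSOC);

    if ($job === false) {
        sleep(5); // nothing to do, poll again later
        continue;
    }

    $pdo->exec("UPDATE jobs SET status = 'running' WHERE id = " . (int)$job['id']);

    // ... build the PDFs and the ZIP here, updating progress as it goes ...

    $pdo->exec("UPDATE jobs SET status = 'done' WHERE id = " . (int)$job['id']);
}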
I have a problem with a function that hasn't worked as expected since I moved my site from shared hosting to a VPS (both have the same Linux OS, PHP version 5.2.9 and Perl version 5.8.8).
While my script stores a remote file into a local directory, I run a simple PHP script at regular intervals (5 seconds) using XMLHttpRequest; this PHP script executes a Perl script that returns the current size of the file (the bytes downloaded so far).
Here is the PHP code:
<?php
if (isset($_GET['file'])) {
    clearstatcache();
    $file = $_GET['file'];
    exec("/usr/bin/perl /home/xxxxxx/public_html/cgi-bin/filesize.pl $file", $output);
    //print_r($output);
    if (!empty($output) && $output[0] != "") {
        $currentSize = $output[0];
        file_put_contents('progress.txt', $currentSize);
    } else {
        ...
        ...
    }
}
?>
Here is the Perl code
#!/usr/bin/perl
$filename = $ARGV[0];
$filepath = '/home/xxxxxx/public_html/tmp_dir/'.$filename.'.flv';
$filesize = -s $filepath;
print $filesize;
When I ran these scripts on the shared server I had no problem and could see the download progress, but now the file size is only printed once the remote file has been fully downloaded, so I can't see the progress.
I think I need to change something in the PHP settings, but I'm not sure what needs to be changed.
EDIT: OK, my mistake - the filesize() function works fine. Thank you all.
If you need the file size, you could also just call PHP's filesize() function and avoid using Perl altogether.
The problem is probably caused by a different file location. Are you positive that the file '/home/xxxxxx/public_html/tmp_dir/'.$filename.'.flv' exists? You could test it in Perl with:
if (-e '/home/xxxxxx/public_html/tmp_dir/'.$filename.'.flv')
Remember that you could use PHP's filesize() instead:
<?php
if (isset($_GET['file'])) {
    clearstatcache();
    $file = $_GET['file'];
    if (file_exists("/home/xxxxxx/public_html/tmp_dir/$file.flv")) {
        $currentSize = filesize("/home/xxxxxx/public_html/tmp_dir/$file.flv");
        file_put_contents('progress.txt', $currentSize);
    } else {
        ...
        ...
    }
}
?>