I need to open a RAR archive, find all *.txt files inside it, read those files, and put their contents into a PHP array.
$file="archive.rar";
$ext = pathinfo($file, PATHINFO_EXTENSION);
$tmp=$_SERVER['DOCUMENT_ROOT'].'/tmp';
if($ext=='rar'){
$archive = RarArchive::open($file);
$entries = $archive->getEntries();
foreach ($entries as $entry) {
echo file_get_contents($entry->getName());// not working
$entry->extract($tmp);
}
$archive->close();
}
The RarEntry::getName() method returns only the path relative to the archive. file_get_contents() knows nothing about the archive and simply tries to find a regular file in the current directory. You can use the rar:// protocol wrapper instead, which follows this format:
rar://<url encoded archive name>[*][#[<url encoded entry name>]]
e.g.
echo file_get_contents('rar://' . $file . '#' . $entry->getName());
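Putting it together, a sketch of what the question asks for (reading every *.txt entry into an array) could look like this; the .txt filter and the $texts array are my additions, and the PECL rar extension is assumed to be installed:
<?php
// Sketch only: assumes the PECL rar extension (RarArchive / RarEntry) is available.
$file  = 'archive.rar';
$texts = array();

if (pathinfo($file, PATHINFO_EXTENSION) === 'rar') {
    $archive = RarArchive::open($file);
    if ($archive !== false) {
        foreach ($archive->getEntries() as $entry) {
            $name = $entry->getName();
            // Only read entries whose name ends in .txt
            if (strtolower(pathinfo($name, PATHINFO_EXTENSION)) === 'txt') {
                // Read the entry through the rar:// stream wrapper
                $texts[$name] = file_get_contents('rar://' . $file . '#' . $name);
            }
        }
        $archive->close();
    }
}

print_r($texts);
?>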
In my program I need to read .png files from a .tar file.
I am using the PEAR Archive_Tar class (http://pear.php.net/package/Archive_Tar/redirected)
Everything is fine if the file I'm looking for exists, but if it is not in the .tar file then the function times out after 30 seconds. The class documentation states that it should return null if it does not find the file...
$tar = new Archive_Tar('path/to/mytar.tar');
$filePath = 'path/to/my/image/image.png';
$file = $tar->extractInString($filePath); // This works fine if the $filePath is correct
// if the path to the file does not exists
// the script will timeout after 30 seconds
var_dump($file);
return;
Any suggestions on solving this or any other library that I could use to solve my problem?
The listContent method will return an array of all files (and other information about them) present in the specified archive. So if you check if the file you wish to extract is present in that array first, you can avoid the delay that you are experiencing.
The code below isn't optimised (for multiple calls extracting different files, for example, the $files array should only be populated once) but it is a good way forward.
include "Archive/Tar.php";
$tar = new Archive_Tar('mytar.tar');
$filePath = 'path/to/my/image/image.png';
$contents = $tar->listContent();
$files = array();
foreach ($contents as $entry) {
$files[] = $entry['filename'];
}
$exists = in_array($filePath, $files);
if ($exists) {
$fileContent = $tar->extractInString($filePath);
var_dump($fileContent);
} else {
echo "File $filePath does not exist in archive.\n";
}
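Building on the note above that the $files array should only be populated once, here is a rough reusable sketch; the helper name tar_read_file() and the by-reference cache are my own additions, not part of Archive_Tar:
<?php
include "Archive/Tar.php";

// Sketch: read one file from a tar archive, or return null if it is absent.
// The archive listing is built once and reused via $listingCache.
function tar_read_file(Archive_Tar $tar, $filePath, &$listingCache)
{
    if ($listingCache === null) {
        $listingCache = array();
        foreach ($tar->listContent() as $entry) {
            $listingCache[] = $entry['filename'];
        }
    }
    if (!in_array($filePath, $listingCache, true)) {
        return null; // matches what the documentation promises
    }
    return $tar->extractInString($filePath);
}

// Usage: the listing is built on the first call and reused afterwards.
$tar   = new Archive_Tar('mytar.tar');
$cache = null;
var_dump(tar_read_file($tar, 'path/to/my/image/image.png', $cache));
?>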
I would like to delete all files matching a particular extension in a specified directory and its entire subtree. I suppose I should be using unlink, but some help would be highly appreciated... Thank you!
You need a combination of this:
Recursive File Search (PHP)
and unlink() to delete each match.
You should be able to edit that example so that, instead of echoing the file, it deletes it (see the sketch below).
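For reference, a minimal sketch of that combination using SPL's RecursiveDirectoryIterator (my own example, not taken from the linked answer):
<?php
// Delete every file with the given extension under $dir, recursively.
// Minimal sketch: no error handling for unreadable directories or failed unlinks.
function delete_by_extension($dir, $extension)
{
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS)
    );
    foreach ($it as $fileInfo) {
        if ($fileInfo->isFile() && strtolower($fileInfo->getExtension()) === strtolower($extension)) {
            unlink($fileInfo->getPathname());
        }
    }
}

delete_by_extension('/home/username/directory', 'txt');
?>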
To delete files with a specific extension from a directory and its subdirectories, you can use the following function:
<?php
function delete_recursively_($path, $match)
{
    // Totals are kept across recursive calls
    static $deleted = 0,
           $deleted_size = 0;

    $dirs  = glob($path . "*");
    $files = glob($path . "*" . $match);

    // Delete matching files in the current directory
    foreach ($files as $file) {
        if (is_file($file)) {
            $deleted_size += filesize($file);
            unlink($file);
            $deleted++;
        }
    }

    // Recurse into subdirectories
    foreach ($dirs as $dir) {
        if (is_dir($dir)) {
            $dir = basename($dir) . "/";
            delete_recursively_($path . $dir, $match);
        }
    }

    return "$deleted files deleted with a total size of $deleted_size bytes";
}
?>
e.g. To remove all text files you can use it as follows:
<?php echo delete_recursively_('/home/username/directory/', '.txt'); ?>
I have an HTML file listing links to artists' pages. What I want to do is use a PHP script to list them instead of listing them manually. I would also like to have a thumbnail image above each corresponding link, but I want to get the links working first before I add the images. I'm using the following script, but it's not working:
<?php
$directory = "C:/wamp/myprojects/UMVA/web/includes/artists";
$phpfiles = glob($directory . "*.html");
foreach ($phpfiles as $phpfile)
{
    echo '<a href="' . $phpfile . '">' . $phpfile . '</a>';
}
?>
The folder containing the html files is artists. It doesn't work using the full pathname and it doesn't work using just 'artists' or '/artists' as the pathname. The 'artists' folder is in the same directory 'web' as the php file with the script. What am I missing here?
This should actually do the trick; note the slash between $directory and the pattern, which was missing in your original code:
$htmlFiles = glob("$directory/*.{html,htm}", GLOB_BRACE);
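For illustration, the original loop rewritten with that pattern could look like the sketch below; the 'includes/artists/' href prefix is an assumption about how the folder is exposed by the web server:
<?php
$directory = "C:/wamp/myprojects/UMVA/web/includes/artists";
$htmlFiles = glob("$directory/*.{html,htm}", GLOB_BRACE);
foreach ($htmlFiles as $htmlFile) {
    // glob() returns full paths, so use only the file name for display
    $name = basename($htmlFile);
    // The href prefix below is an assumption; adjust it to the web path of the folder
    echo '<a href="includes/artists/' . htmlspecialchars($name) . '">'
        . htmlspecialchars($name) . '</a><br>';
}
?>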
Not sure where the error is, but you can also use SPL iterators, like GlobIterator, in a more reusable way. GlobIterator returns SplFileInfo objects that provide a lot of useful information about each file.
Here are the doc pages:
http://fr2.php.net/manual/en/class.globiterator.php
http://fr2.php.net/manual/en/class.splfileinfo.php
Here is an example:
$it = new GlobIterator('C:/wamp/myprojects/UMVA/web/artists/*.jpg');
foreach ($it as $file) {
    // I added htmlspecialchars too; never output unsafe data without escaping it
    echo '<a href="' . htmlspecialchars($file->getFilename()) . '">'
        . htmlspecialchars($file->getFilename()) . '</a><br>';
}
If your directory is always 'C:/wamp/myprojects/UMVA/web/artists', I think you can try scandir($dirname) instead of glob().
Here is a simple script that will look for HTML files in the current directory and create a hyperlink for each one based on its title tag.
<?php
// Get all HTML files in the directory
$html_files = glob("*.{html,htm}", GLOB_BRACE);
$url = "https://" . $_SERVER['HTTP_HOST'] . $_SERVER['REQUEST_URI'];
// Print out each file as a link, using its <title> as the link text
foreach ($html_files as $file) {
    $contents = file_get_contents($file);
    $start = strpos($contents, '<title>');
    if ($start !== false) {
        $end  = strpos($contents, '</title>', $start);
        $line = substr($contents, $start + 7, $end - $start - 7);
        echo "<center><a href=\"$url$file\">$line</a></center><br>\n";
    }
}
?>
Save this file as index.php, place the HTML files in the folder, and browse to the URL.
I'm trying to create an Intranet page that looks up all PDF documents in a UNC path and then returns them in a list of hyperlinks that open in a new window. I'm nearly there; however, the following code displays the full UNC path. My question: how can I display only the filename (preferably without the .pdf extension too)? I've experimented with the basename function but can't seem to get the right result.
// Path to network share
$uncpath = "//myserver/adirectory/personnel/";
// Get all files with a .pdf extension
$files = glob($uncpath . "*.pdf");
// Print each file name
foreach ($files as $file)
{
    echo "<a target=_blank href='File:///$file'>$file</a><br>";
}
The links work fine; it's just that the display text shows //myserver/adirectory/personnel/document.pdf rather than just document. Note the above code was taken from another example I found whilst researching. If there's a whole better way then I'm open to suggestions.
echo basename($file);
http://php.net/basename
Modify your code like this:
<?php
$uncpath = "//myserver/adirectory/personnel/";
// Get all files with a .pdf extension
$files = glob($uncpath . "*.pdf");
// Print each file, showing only the base name as the link text
foreach ($files as $file)
{
    echo "<a target=_blank href='File:///$file'>" . basename($file) . "</a><br>";
}
?>
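Since the question also asks to drop the .pdf extension from the display text, note that basename() accepts an optional suffix to strip; a small variation of the echo line (my addition, not part of the answer above):
// basename() can strip a known suffix, so "document.pdf" becomes "document"
echo "<a target=_blank href='File:///$file'>" . basename($file, '.pdf') . "</a><br>";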
You may try this, if basename() does not work for some reason:
$file_a = explode('/', $file);
if (trim(end($file_a)) == '') {
    // Path ended with a slash; take the last non-empty segment
    $filename = $file_a[count($file_a) - 2];
} else {
    $filename = end($file_a);
}
Thanks in advance. I'm getting this warning when using the code below:
Warning: file_get_contents(test.php) [function.file-get-contents]: failed to open stream: No such file or directory in /path/index.php on line so-n-so.
Here's the code I am using:
<?php
// Scan directory for files
$dir = "path/";
$files = scandir($dir);

// Iterate through the list of files
foreach ($files as $file) {
    // Determine info about the file
    $parts = pathinfo($file);
    // If the file extension == php
    if ($parts['extension'] === "php") {
        // Read the contents of the file
        $contents = file_get_contents($file);
        // Find first occurrence of opening template tag
        $from = strpos($contents, "{{{{{");
        // Find first occurrence of ending template tag
        $to = strpos($contents, "}}}}}");
        // Pull out the unique name from between the template tags
        $uniqueName = substr($contents, $from + 5, $to - $from - 5);
        // Print out the unique name
        echo $uniqueName . "<br/>";
    }
}
?>
The error message says that the file isn't found.
This is because scandir() returns only the basename of the files from your directory. It doesn't include the directory name. You could use glob() instead:
$files = glob("$dir/*.php");
This returns the path in the result list, and would also make your extension check redundant.
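As a sketch, the loop then simplifies to something like this (the pathinfo() extension check and the path prefix are no longer needed):
<?php
$dir = "path/";

// glob() returns paths including $dir, so file_get_contents() can open them directly
foreach (glob($dir . "*.php") as $file) {
    $contents = file_get_contents($file);
    $from = strpos($contents, "{{{{{");
    $to   = strpos($contents, "}}}}}");
    if ($from !== false && $to !== false) {
        echo substr($contents, $from + 5, $to - $from - 5) . "<br/>";
    }
}
?>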
I would suggest that you need to exclude . and .. from the list of files picked up by scandir() (see the scandir() documentation).
// Iterate through the list of files
foreach ($files as $file) {
    if ('.' == $file or '..' == $file) {
        continue;
    }
    ...
You also need to put the directory path before your file name:
$contents = file_get_contents($dir . $file);
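Putting both of those fixes together, a corrected scandir() version might look roughly like this sketch, keeping the original template-tag logic:
<?php
$dir = "path/";

foreach (scandir($dir) as $file) {
    // Skip the . and .. entries returned by scandir()
    if ($file == '.' || $file == '..') {
        continue;
    }
    $parts = pathinfo($file);
    if (isset($parts['extension']) && $parts['extension'] === "php") {
        // Prepend the directory so file_get_contents() can find the file
        $contents = file_get_contents($dir . $file);
        $from = strpos($contents, "{{{{{");
        $to   = strpos($contents, "}}}}}");
        if ($from !== false && $to !== false) {
            echo substr($contents, $from + 5, $to - $from - 5) . "<br/>";
        }
    }
}
?>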