I'm pretty new to PHP and we're trying to write a plugin for WordPress. We have a server with images on it, and we'd like the plugin to hold a list of images to download from that server. It then needs to go through that list, read each image from the server into the $_FILES variable, and pass that to WordPress's media_handle_upload function.
I've been able to read a remote file with the following code. But I'm not sure where to go from here.
$url = 'http://www.planet-source-code.com/vb/2010Redesign/images/LangugeHomePages/PHP.png';
$img = curl_init();
curl_setopt($img, CURLOPT_URL, $url);
curl_setopt($img, CURLOPT_HEADER, 1);
curl_setopt($img, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($img, CURLOPT_BINARYTRANSFER, 1);
$file = curl_exec($img);
curl_close($img);
$file_array = explode("\r\n\r\n", $file, 2); // headers and body are separated by a blank line (\r\n\r\n)
$header_array = explode("\r\n", $file_array[0]);
foreach ($header_array as $header_value) {
    $header_pieces = explode(':', $header_value, 2); // limit to 2 so values containing ':' survive
    if (count($header_pieces) == 2) {
        $headers[trim($header_pieces[0])] = trim($header_pieces[1]);
    }
}
header('Content-Type: ' . $headers['Content-Type']);
header('Content-Disposition: ' . $headers['Content-Disposition']);
$imgFile = $file_array[1]; // the body is everything after the blank line, no offset needed
echo $imgFile;
Solution: Lookup table
Create a list of image names or image file paths (links) in JSON, XML, or plain-text format, so it acts as a lookup table. It can be parsed easily (much like customizing an RSS feed): fetch the JSON or XML file, decode the data into an array, and process each entry from there.
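A minimal sketch of that flow (the manifest URL and its "images" key are hypothetical; download_url() and media_handle_sideload() are WordPress core helpers, and the latter accepts a $_FILES-style array, so you don't need to populate $_FILES yourself):
// In a plugin you may need these includes before using the helpers below:
// require_once ABSPATH . 'wp-admin/includes/file.php';
// require_once ABSPATH . 'wp-admin/includes/media.php';
// require_once ABSPATH . 'wp-admin/includes/image.php';
$manifest = json_decode(file_get_contents('https://img.example.com/images.json'), true);
$images = isset($manifest['images']) ? $manifest['images'] : array();
foreach ($images as $imageUrl) {
    $tmp = download_url($imageUrl); // downloads to a temp file, returns its path or WP_Error
    if (is_wp_error($tmp)) {
        continue; // skip images that fail to download
    }
    $file_array = array(
        'name'     => basename(parse_url($imageUrl, PHP_URL_PATH)),
        'tmp_name' => $tmp,
    );
    $attachment_id = media_handle_sideload($file_array, 0); // 0 = not attached to a post
    if (is_wp_error($attachment_id)) {
        @unlink($tmp); // clean up the temp file on failure
    }
}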
I am currently implementing an upload mechanism for files on my webserver into my Dropbox app directory.
As stated in the API docs, there is the /upload endpoint (https://www.dropbox.com/developers/documentation/http/documentation#files-upload) which accepts files up to 150 MB in size. However, I'm dealing with images and videos with a potential size of up to 2 GB.
Therefore I need to use the upload_session endpoints. There is an endpoint to start the session (https://www.dropbox.com/developers/documentation/http/documentation#files-upload_session-start), to append data and to finish the session.
What is currently unclear to me is how exactly to use these endpoints. Do I have to split my file on my server into 150 MB chunks (and how would I do that with a video file?), then upload the first chunk with /start, the next chunks with /append, and the last one with /finish? Or can I just specify the file and the API somehow (??) does the splitting for me? Obviously not, but I somehow can't get my head around how I should calculate, split, and store the chunks on my webserver without losing the session in between...
Any advice or further leading links are greatly appreciated. Thank you!
As Greg mentioned in the comments, you decide how to manage the "chunks" of the files. In addition to his .NET example, Dropbox has a good upload session implementation in the JavaScript upload example of the Dropbox API v2 JavaScript SDK.
At a high-level, you're splitting up the file into smaller sizes (aka "chunks") and passing those to the upload_session mechanism in a specific order. The upload mechanism has a few parts that need to be used in the following order:
Call /files/upload_session/start. Use the resulting session_id as a parameter in the following methods so Dropbox knows which session you're interacting with.
Incrementally pass each "chunk" of the file to /files/upload_session/append_v2. A couple of things to be aware of:
The cursor (the session_id from the start call plus your current byte offset into the file) gets passed as a parameter in each consecutive call to this method, with the offset updated after every chunk you send.
The final call must include the property "close": true, which closes the session so it can be finished.
Pass the final cursor (and commit info) to /files/upload_session/finish. If you see the new file metadata in the response, then you did it!!
If you're uploading many files instead of large ones, then the /files/upload_session/finish_batch and /files/upload_session/finish_batch/check are the way to go.
I know this is an old post, but here is a fully functional solution for your problem. Maybe someone else will find it useful. :)
<?php
$backup_folder = glob('/var/www/test_folder/*.{sql,gz,rar,zip}', GLOB_BRACE); // Accepted file types (sql,gz,rar,zip)
$token = '<ACCESS TOKEN>'; // Dropbox Access Token;
$append_url = 'https://content.dropboxapi.com/2/files/upload_session/append_v2';
$start_url = 'https://content.dropboxapi.com/2/files/upload_session/start';
$finish_url = 'https://content.dropboxapi.com/2/files/upload_session/finish';
if (!empty($backup_folder)) {
foreach ($backup_folder as $single_folder_file) {
$file_name= basename($single_folder_file); // File name
$destination_folder = 'destination_folder'; // Dropbox destination folder
$info_array = array();
$info_array["close"] = false;
$headers = array(
'Authorization: Bearer ' . $token,
'Content-Type: application/octet-stream',
'Dropbox-API-Arg: '.json_encode($info_array)
);
$chunk_size = 50000000; // 50 MB per chunk
$fp = fopen($single_folder_file, 'rb');
$fileSize = filesize($single_folder_file); // File size
$tosend = $fileSize;
$first = $tosend > $chunk_size ? $chunk_size : $tosend;
$ch = curl_init($start_url);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, fread($fp, $first));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
$tosend -= $first;
$resp = json_decode($response, true); // the start response is JSON, so decode it properly
$session_id = $resp['session_id'];
$position = $first;
$info_array["cursor"] = array();
$info_array["cursor"]["session_id"] = $session_id;
while ($tosend > $chunk_size)
{
$info_array["cursor"]["offset"] = $position;
$headers[2] = 'Dropbox-API-Arg: '.json_encode($info_array);
curl_setopt($ch, CURLOPT_URL, $append_url);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_POSTFIELDS, fread($fp, $chunk_size));
curl_exec($ch);
$tosend -= $chunk_size;
$position += $chunk_size;
}
unset($info_array["close"]);
$info_array["cursor"]["offset"] = $position;
$info_array["commit"] = array();
$info_array["commit"]["path"] = '/'. $destination_folder . '/' . $file_name;
$info_array["commit"]["mode"] = array();
$info_array["commit"]["mode"][".tag"] = "overwrite";
$info_array["commit"]["autorename"] = true;
$info_array["commit"]["mute"] = false;
$info_array["commit"]["strict_conflict"] = false;
$headers[2] = 'Dropbox-API-Arg: '. json_encode($info_array);
curl_setopt($ch, CURLOPT_URL, $finish_url);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_POSTFIELDS, $tosend > 0 ? fread($fp, $tosend) : null);
curl_exec($ch);
curl_close($ch);
fclose($fp);
unlink($single_folder_file); // Remove files from server folder
}
}
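One thing the script above skips is checking the responses. A small sketch of verifying the finish call (under the assumption, per the Dropbox docs, that a successful finish returns the new file's metadata as JSON, so a "name" field indicates success):
$finish_response = curl_exec($ch); // capture the finish response instead of discarding it
$meta = json_decode($finish_response, true);
if (isset($meta['name'])) {
    echo 'Uploaded as ' . $meta['name'] . PHP_EOL; // success: file metadata came back
} else {
    echo 'Upload failed: ' . $finish_response . PHP_EOL;
}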
I am looking for a function that gets the metadata of a .mp3 file from a URL (NOT a local .mp3 file on my server).
Also, I don't want to install http://php.net/manual/en/id3.installation.php or anything similar on my server.
I am looking for a standalone function.
Right now I am using this function:
<?php
function getfileinfo($remoteFile)
{
$url=$remoteFile;
$uuid=uniqid("designaeon_", true);
$file="../temp/".$uuid.".mp3";
$size=0;
$ch = curl_init($remoteFile);
//==============================Get Size==========================//
$contentLength = 'unknown';
$ch1 = curl_init($remoteFile);
curl_setopt($ch1, CURLOPT_NOBODY, true);
curl_setopt($ch1, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch1, CURLOPT_HEADER, true);
curl_setopt($ch1, CURLOPT_FOLLOWLOCATION, true); //not necessary unless the file redirects (like the PHP example we're using here)
$data = curl_exec($ch1);
curl_close($ch1);
if (preg_match('/Content-Length: (\d+)/i', $data, $matches)) { // case-insensitive: some servers send lowercase header names
$contentLength = (int)$matches[1];
$size=$contentLength;
}
//==============================Get Size==========================//
if (!$fp = fopen($file, "wb")) {
echo 'Error opening temp file for binary writing';
return false;
} else if (!$urlp = fopen($url, "r")) {
echo 'Error opening URL for reading';
return false;
}
try {
$to_get = 65536; // 64 KB
$chunk_size = 4096; // Haven't bothered to tune this, maybe other values would work better??
$got = 0;
$data = '';
// Grab the first 64 KB of the file
while (!feof($urlp) && $got < $to_get) {
    $data .= fgets($urlp, $chunk_size);
    $got += $chunk_size;
}
fwrite($fp, $data);
// Grab the last 64 KB of the file, if we know how big it is
if ($size > 0) {
    curl_setopt($ch, CURLOPT_FILE, $fp);
    curl_setopt($ch, CURLOPT_HEADER, 0);
    curl_setopt($ch, CURLOPT_RESUME_FROM, $size - $to_get);
    curl_exec($ch);
}
curl_close($ch);
// Now $fp should hold the first and last 64 KB of the file!!
fclose($fp);
fclose($urlp);
} catch (Exception $e) {
    fclose($fp);
    fclose($urlp);
    echo 'Error transferring file using fopen and cURL !!';
    return false;
}
$getID3 = new getID3;
$filename=$file;
$ThisFileInfo = $getID3->analyze($filename);
getid3_lib::CopyTagsToComments($ThisFileInfo);
unlink($file);
return $ThisFileInfo;
}
?>
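For reference, a hypothetical call to the function above might look like this (the URL is a placeholder; after CopyTagsToComments(), getID3 merges recognized tags into the 'comments' array):
$info = getfileinfo('http://example.com/song.mp3');
if ($info !== false && isset($info['comments']['title'][0])) {
    echo $info['comments']['title'][0]; // e.g. the track title from the ID3 tags
}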
This function downloads 64 KB from the URL of an .mp3 file, then returns the metadata array by using getID3 (which works on local .mp3 files only), and then deletes the 64 KB it downloaded.
The problem with this function is that it is way too slow by its nature (it downloads 64 KB per .mp3; imagine 1000 mp3 files).
To make my question clear : I need a fast standalone function that reads metadata of a remote URL .mp3 file.
Yeah, well what do you propose? How do you expect to get data if you don't get data? There is no way to have a generic remote HTTP server send you that ID3 data. Really, there is no magic. Think about it.
What you're doing now is already pretty solid, except that it doesn't handle all versions of ID3 and won't work for files with more than 64KB of ID3 tags. What I would do to improve it is to use multi-cURL.
There are several PHP classes available that make this easier:
https://github.com/jmathai/php-multi-curl
$mc = EpiCurl::getInstance();
$results[] = $mc->addUrl(/* Your stream URL here */); // Run this in a loop, 10 at a time or so
foreach ($results as $result) {
// Do something with the data.
}
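If you'd rather avoid an extra dependency, here is a rough sketch of the same idea using PHP's built-in curl_multi_* functions, fetching only the first 64 KB of each file in parallel ($urls is a placeholder list; the server has to honor the Range header for this to save bandwidth):
$urls = array('http://example.com/a.mp3', 'http://example.com/b.mp3'); // placeholder URLs
$mh = curl_multi_init();
$handles = array();
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_RANGE, '0-65535'); // request only the first 64 KB
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}
do {
    curl_multi_exec($mh, $running);
    curl_multi_select($mh); // wait for activity instead of busy-looping
} while ($running > 0);
foreach ($handles as $url => $ch) {
    $data = curl_multi_getcontent($ch); // first 64 KB (if the server honored the Range)
    // write $data to a temp file and run getID3->analyze() on it, as in the question
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);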
1 - I have configured the Google Picker and it is working fine; I select the file from the picker and get the file ID.
2 - After the token refresh and the rest of the process, I get the file metadata and the file export link
$downloadExpLink = $file->getExportLinks();
$downloadUrl = $downloadExpLink['application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
3 - After that I use this
if ($downloadUrl) {
$request = new Google_HttpRequest($downloadUrl, 'GET', null, null);
$httpRequest = Google_Client::$io->authenticatedRequest($request);
if ($httpRequest->getResponseHttpCode() == 200)
{
$content = $httpRequest->getResponseBody();
print_r($content);
} else {
// An error occurred.
return null;
    }
}
and get this response
[responseBody:protected] => PK...docProps/app.xml... (binary DOCX/ZIP data, truncated)
4 - I use some cURL functions to get the file from Google Drive and save it to the server. A file is created in the server directory, but it is truncated. I use this code:
$downloadExpLink = $file->getExportLinks();
$downloadUrl = $downloadExpLink['application/vnd.openxmlformats-officedocument.wordprocessingml.document'];
//$downloadUrl value is
/*https://docs.google.com/feeds/download/documents/export/Export?id=1CEt1ya5kKLtgK************IJjDEY5BdfaGI&exportFormat=docx*/
When I put this URL into the browser it downloads the file successfully, but when I use this URL to fetch the file with cURL or any PHP code and try to save it on the server, it saves a corrupted file.
$ch = curl_init();
$source = $downloadUrl;
curl_setopt($ch, CURLOPT_URL, $source);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec ($ch);
curl_close ($ch);
$destination = "test/afile5.docx";
$file = fopen($destination, "w+");
fputs($file, $data);
fclose($file);
The result is a corrupted file stored on the server, yet when I use this same code to get any file other than the Google Drive export, it downloads to the server successfully.
Can anyone please help with how to download the file from $downloadUrl to my server using PHP?
I am using csxi to scan documents as images, but I have to upload PDF files to the server. How can I convert an image to PDF in PHP? Or is there any way to make csxi scan documents as PDF instead of images?
If you have ImageMagick installed on your machine you could use the ImageMagick bindings for PHP to execute some simple PHP code to do this task:
$im=new Imagick('my.png');
$im->setImageFormat('pdf');
$im->writeImage('my.pdf');
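If the scan produces several pages that belong in one PDF, the same extension can write a multi-page document as well (a minimal sketch; the file names are placeholders):
$im = new Imagick();
$im->readImage('page1.png'); // each readImage() call adds another page
$im->readImage('page2.png');
$im->setImageFormat('pdf');
$im->writeImages('scan.pdf', true); // true = combine all pages into one file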
Alternatively if you don't have ImageMagick available you could use a commercial API such as Zamzar which supports image to PDF conversion via PHP (more info in the docs).
Code to use this would be:
<?php
// Build request
$endpoint = "https://api.zamzar.com/v1/jobs";
$apiKey = "YOUR_API_KEY";
$sourceFilePath = "my.png";
$targetFormat = "pdf";
$sourceFile = curl_file_create($sourceFilePath);
$postData = array(
"source_file" => $sourceFile,
"target_format" => $targetFormat
);
// Send request
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $endpoint);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'POST');
curl_setopt($ch, CURLOPT_POSTFIELDS, $postData);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERPWD, $apiKey . ":");
$body = curl_exec($ch);
curl_close($ch);
// Process response (with link to converted files)
$response = json_decode($body, true);
print_r($response);
?>
Wrap your image inside HTML and use an HTML-to-PDF converter like FPDF or mPDF.
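For instance, a minimal sketch with mPDF (assuming it was installed via composer require mpdf/mpdf; the image name is a placeholder):
require_once __DIR__ . '/vendor/autoload.php';
$mpdf = new \Mpdf\Mpdf();
$mpdf->WriteHTML('<img src="my.png" style="width:100%">'); // wrap the image in HTML
$mpdf->Output('my.pdf', \Mpdf\Output\Destination::FILE);   // write the PDF to disk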
You can use the ConvertAPI service; it's easy to install:
composer require convertapi/convertapi-php
require_once('vendor/autoload.php');
use \ConvertApi\ConvertApi;
//get api key: https://www.convertapi.com/a/si
ConvertApi::setApiSecret('xxx');
$result = ConvertApi::convert('pdf', ['File' => '/dir/test.png']);
# save to file
$result->getFile()->save('/dir/file.pdf');
to convert multiple files and other options check https://github.com/ConvertAPI/convertapi-php
Here, PHP 7.4, Laravel 7+, ImageMagick 7.1.0-Q16, and Ghostscript 10.00.0 are used.
The method below first deletes any files already contained in the JpgToPdf folder, stores the uploaded JPEG there, and then converts it to a PDF.
/**
 * JPG to PDF (web)
 *
 * @method convertJpgToPdf
 */
public function convertJpgToPdf(Request $request)
{
try {
//get list of files
$files = Storage::files('JpgToPdf');
/* Check whether the folder already contains any files;
 * if it does, get each file name and delete the files one by one.
 */
if (count($files) >= 1)
{
foreach($files as $key => $value)
{
//get the file name
$file_name = basename($value);
//delete file from the folder
File::delete(storage_path('app/JpgToPdf/'. $file_name));
}
}
if ($request->has('jpeg_file'))
{
$getPdfFile = $request->file('jpeg_file');
$originalname = $getPdfFile->getClientOriginalName();
$path = $getPdfFile->storeAs('JpgToPdf', $originalname);
}
// file name without extension
$filename_without_ext = pathinfo($originalname, PATHINFO_FILENAME);
//get the upload file
$storagePath = storage_path('app/JpgToPdf/' . $originalname);
$imagick = new Imagick();
$imagick->setResolution(300, 300);
$imagick->readImage($storagePath);
$imagick->setImageCompressionQuality( 100 );
$imagick = $imagick->mergeImageLayers(Imagick::LAYERMETHOD_FLATTEN); // returns a new Imagick object, so reassign it
$imagick->setImageAlphaChannel(Imagick::ALPHACHANNEL_REMOVE);
$imagick->writeImage( storage_path('app/JpgToPdf/') . $filename_without_ext .'.pdf' );
return response()->download(storage_path('app/JpgToPdf/') . $filename_without_ext .'.pdf' );
} catch (CustomModelNotFoundException $exception) {
// Throws error exception
return $exception->render();
}
}
For just a few images, do it manually and easily with the Chrome web browser. You won't need an internet connection.
Save the following with .html extension in the same folder of your image:
<html>
<body>
<img src="image.jpg" width="100%">
</body>
</html>
Open the HTML file with Google Chrome,
Ctrl + P, to open the print dialog,
Choose Save as PDF, to save it locally.
Alternatively, you could send a copy to your smartphone via Google Cloud Print.
I should start by saying I have no PHP experience whatsoever, but I know this script can't be that ambitious.
I'm using WordPress' metaWeblog API to batch the creation of several hundred posts. Each post needs a discrete title, a description, and URLs for two images, the latter being custom fields.
I have been successful in producing one post by manually entering data into the following file:
<?php // metaWeblog.Post.php
$BLOGURL = "http://path/to/your/wordpress";
$USERNAME = "username";
$PASSWORD = "password";
function get_response($URL, $context) {
if(!function_exists('curl_init')) {
die ("Curl PHP package not installed\n");
}
/*Initializing CURL*/
$curlHandle = curl_init();
/*The URL to be downloaded is set*/
curl_setopt($curlHandle, CURLOPT_URL, $URL);
curl_setopt($curlHandle, CURLOPT_HEADER, false);
curl_setopt($curlHandle, CURLOPT_HTTPHEADER, array("Content-Type: text/xml"));
curl_setopt($curlHandle, CURLOPT_POSTFIELDS, $context);
/*Now execute the CURL, download the URL specified*/
$response = curl_exec($curlHandle);
return $response;
}
function createPost(){
/*The contents of your post*/
$description = "post description";
/*Forming the content of blog post*/
$content['title'] = $postTitle;
$content['description'] = $description;
/*Pass custom fields*/
$content['custom_fields'] = array(
array( 'key' => 'port_thumb_image_url', 'value' => "$imagePath" ),
array( 'key' => 'port_large_image_url', 'value' => "$imagePath" )
);
/*Whether the post has to be published*/
$toPublish = false;//false means post will be draft
$request = xmlrpc_encode_request("metaWeblog.newPost",
array(1,$USERNAME, $PASSWORD, $content, $toPublish));
/*Making the request to wordpress XMLRPC of your blog*/
$xmlresponse = get_response($BLOGURL."/xmlrpc.php", $request);
$postID = xmlrpc_decode($xmlresponse);
echo $postID;
}
?>
In an attempt to keep this short, here is the most basic example of the script that iterates through a directory and is "supposed" to pass the variables $postTitle and $imagePath and create the posts.
<?php // fileLoop.php
require('path/to/metaWeblog.Post.php');
$folder = 'foldername';
$urlBase = "images/portfolio/$folder";//truncate path to images
if ($handle = opendir("path/to/local/images/portfolio/$folder/")) {
/*Loop through files in truncated directory*/
while (false !== ($file = readdir($handle))) {
$info = pathinfo($file);
$file_name = basename($file,'.'.$info['extension']); // strip file extension
$postTitle = preg_replace("/\.0|\./", " ", $file_name); // Make file name suitable for post title !LEAVE!
echo "<tr><td>$postTitle</td>";
$imagePath = "$urlBase/$file";
echo " <td>$urlBase/$file</td>";
createPost($postTitle, $imagePath);
}
closedir($handle);
}
?>
It's supposed to work like this,
fileLoop.php opens the directory and iterates through each file
for each file in the directory, a suitable post title(postTitle) is created and a url path(imagePath) to the server's file is made
each postTitle and imagePath is passed to the function createPost in metaWeblog.php
metaWeblog.php creates the post and passes back the post id to finish creating the table row for each file in the directory.
I've tried declaring the function in fileLoop.php, and I've tried combining the files completely. It either creates the table with all the files or doesn't step through the directory at all. I'm missing something, I know it.
I don't know how to incorporate $_POST here, or use sessions; as I said, I'm very new to programming in PHP.
You need to update your declaration of the createPost() function so that it takes into account the parameters you are attempting to send it.
So it should be something like this:
function createPost($postTitle, $imagePath){
/*The contents of your post*/
$description = "post description";
...
}
More information about PHP function arguments can be found on the associated manual page.
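Note that createPost() also reads $BLOGURL, $USERNAME and $PASSWORD, which are defined at file scope and are not automatically visible inside a function in PHP. One quick fix (a sketch, not the only option) is to pull them in with global:
function createPost($postTitle, $imagePath){
    global $BLOGURL, $USERNAME, $PASSWORD; // make the file-scope config visible here
    /*The contents of your post*/
    $description = "post description";
    // ... rest of the function unchanged
}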
Once this has been remedied you can use CURL debugging to get more information about your external request. To get more information about a CURL request try setting the following options:
/*Initializing CURL*/
$curlHandle = curl_init();
/*The URL to be downloaded is set*/
curl_setopt($curlHandle, CURLOPT_URL, $URL);
curl_setopt($curlHandle, CURLOPT_HTTPHEADER, array("Content-Type: text/xml"));
curl_setopt($curlHandle, CURLOPT_POSTFIELDS, $context);
curl_setopt($curlHandle, CURLOPT_HEADER, true); // Display headers
curl_setopt($curlHandle, CURLOPT_VERBOSE, true); // Display communication with server
/*Now execute the CURL, download the URL specified*/
$response = curl_exec($curlHandle);
print "<pre>\n";
print_r(curl_getinfo($curlHandle)); // get error info
echo "\n\ncURL error number:" . curl_errno($curlHandle); // print error info
echo "\n\ncURL error:" . curl_error($curlHandle);
print "</pre>\n";
The above debug example code is from eBay's help pages.
It should show you if WordPress is rejecting the request.
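Since the reply is XML-RPC, you can also inspect the decoded response for a fault; xmlrpc_is_fault() ships with the same xmlrpc extension the script already uses:
$postID = xmlrpc_decode($xmlresponse);
if (is_array($postID) && xmlrpc_is_fault($postID)) {
    // WordPress rejected the request; print the reason
    echo "XML-RPC fault {$postID['faultCode']}: {$postID['faultString']}";
}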