I am using the Google Visualization API on my PHP-based website. I am planning to use TCPDF to provide a PDF version of the same images that I am generating with the Google Visualization API.
Is there any way to save these visualizations on my server so that I can use them directly in TCPDF?
If not, can I automatically take screenshots of the images and use those in TCPDF?
The reason I am not taking and saving screenshots manually is that all the values in the visualizations are dynamic and change every day, so I am looking for something automatic. Suggestions please?
The result of the Google Visualization API can be an image or Flash (for interactive charts). If your result is an image (the most common case), you can download it to a folder on your server and add it to TCPDF from that local path.
You can use cURL, for example:
$ch = curl_init($remoteURL); // get the image from this URL
$fp = fopen($localFile, "wb"); // write the image to this server path

// set the output file and other appropriate options
$options = array(
    CURLOPT_FILE           => $fp,
    CURLOPT_HEADER         => 0,
    CURLOPT_FOLLOWLOCATION => 1,
    CURLOPT_TIMEOUT        => 60, // 1 minute timeout (should be enough)
);
curl_setopt_array($ch, $options);

// execute the request and clean up
curl_exec($ch);
curl_close($ch);

// close the file
fclose($fp);
Once the image is in a temporary path on your server, you can add it to TCPDF using the "Image" method and the path to the file you saved.
The server path must be writable by the web server. This is especially important on *nix environments.
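A minimal sketch of that step, assuming TCPDF is already installed and the chart was downloaded to /tmp/chart.png (a placeholder path):

require_once('tcpdf/tcpdf.php');

$pdf = new TCPDF();
$pdf->AddPage();
// place the saved chart at x=15, y=40 with a width of 180 (default unit is mm)
$pdf->Image('/tmp/chart.png', 15, 40, 180);
$pdf->Output('charts.pdf', 'I'); // "I" sends the PDF inline to the browser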
Here are more ways to do the file download:
http://www.php-mysql-tutorial.com/wikis/php-tutorial/reading-a-remote-file-using-php.aspx
Preface
Environment
OS: Ubuntu
PHP: 7.4
Laravel: ^8.12
I'm writing a scraper page for a web app that I'm developing, and these are the steps I'm trying to achieve:
Scrape an image from the target website.
Convert the image from PNG to WebP.
Upload the converted WebP to my server via FTP.
The core of the program is one line:
Storage::disk('ftp')->put("/FILE_UPLOAD_LOCATION", file_get_contents("scraped-image.png"));
This confirmed that my FTP environment was configured correctly.
Attempts and Errors
I should just be able to do the following should I not?
$image = imagecreatefromstring(file_get_contents("remote_url.png"));
imagepalettetotruecolor($image); // needed for webp
Storage::disk('ftp')->put('remote_destination', imagewebp($image));
The answer is no. This does not work; it just creates an "image" file with 1 as its contents.
I haven't tried much outside of this, but hopefully someone has an answer or can point me in a new direction.
This is happening because imagewebp() doesn't return the image, but a boolean indicating whether it worked or not.
You must create a handle to store the image in memory, then use that handle to store the file on the FTP disk:
$handle = fopen("php://memory", "w+b"); // "w+b" so we can write the image, then read it back
$image = imagecreatefromstring(file_get_contents("remote_url.png"));
imagepalettetotruecolor($image); // needed for webp
imagewebp($image, $handle); // writes the WebP data to the stream instead of returning it
imagedestroy($image); // free up memory
rewind($handle); // rewind so the upload reads from the beginning of the stream
Storage::disk('ftp')->put('remote_destination', $handle);
fclose($handle); // close the handle and free up memory
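A design note: if the scraped images can be large, php://temp behaves like php://memory but transparently spills to a temporary file beyond a size threshold, which keeps memory use bounded.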
My App Engine project (running on the App Engine standard environment) uses a media-converter service outside of App Engine. I can easily start a new conversion job and get notified whenever the job is done. The media converter delivers a temporary URL for retrieving the conversion result (an mp4 file).
Now I want to start a background job to download this converted file from the media converter to my Google Cloud Storage.
Whatever I have tried so far, I cannot download files larger than 32 MB.
These are my approaches so far:
First, a straight copy with file_put_contents / file_get_contents, as suggested at https://cloud.google.com/appengine/docs/standard/php/googlestorage/
$options = [
'Content-Type' => 'video/mp4',
'Content-Disposition' => 'inline',
];
$context = stream_context_create(['gs' => $options]);
file_put_contents($targetPath, file_get_contents($url), 0, $context);
Then I tried to work directly with streams:
$CHUNK_SIZE = 1024*1024;
$targetStream = fopen($targetPath, 'w');
$sourceStream = fopen($url, 'rb');
if ($sourceStream === false) {
return false;
}
while (!feof($sourceStream)) {
$buffer = fread($sourceStream, $CHUNK_SIZE);
Logger::log("Chuck");
fwrite($targetStream, $buffer);
}
fclose($sourceStream);
fclose($targetStream);
Then I was surprised that this actually worked (up to 32 MB):
copy($url, $targetPath);
I'm running out of ideas now. Any suggestions? I basically need the cp function of gsutil in PHP.
https://cloud.google.com/storage/docs/quickstart-gsutil
I think this Stack Overflow question describes a similar issue:
Large file upload to google cloud storage using PHP
There is a strict limit of 32 MB on incoming requests.
Refer to "Incoming Bandwidth" for more details: https://cloud.google.com/appengine/quotas. This must be the reason why you are not able to go beyond the 32 MB limit.
Possible solution:
Can you modify the media-converter service?
If yes: create an API in the media-converter service and do the Cloud Storage upload from the media-converter service itself, invoking the endpoint from your App Engine application. Use a service account for Cloud Storage authentication (https://cloud.google.com/storage/docs/authentication#storage-authentication-php).
If no: you can do the same thing using Compute Engine. Create an API on a Compute Engine instance, pass it the URL of the file from the background job in App Engine, and upload to Cloud Storage from there using service-account authentication.
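A minimal sketch of that upload step, assuming the google/cloud-storage Composer package; the bucket name, object name, key-file path, and $temporaryConverterUrl are placeholders:

use Google\Cloud\Storage\StorageClient;

$storage = new StorageClient([
    'keyFilePath' => '/path/to/service-account.json', // hypothetical key file
]);
$bucket = $storage->bucket('my-bucket');

// stream the converted file straight into the bucket; passing a stream
// avoids loading the whole mp4 into memory
$source = fopen($temporaryConverterUrl, 'rb');
$bucket->upload($source, ['name' => 'converted/video.mp4']);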
I'm trying to download files from a Google Drive link from Google's server to my web server, to avoid the 100 MB max size on PHP POST uploads.
<?php
$link = $_POST["linkToUpload"];
$upload = file_put_contents("uploads/test".rand().".txt", fopen($link, 'r'));
header("Location: index.php");
?>
Inserting a normal link like http://example.com/text.txt works fine. The problem comes when linking Google Drive: https://drive.google.com/uc?authuser=0&id=000&export=download. This is a direct link from Google Drive, but it doesn't work. So I tried to insert the link I obtained by downloading the file locally, https://doc-08-bo-docs.googleusercontent.com/docs/securesc/000/000/000/000/000/000?e=download, and it's still not working. Do you think Google is trying to prevent server-to-server copies? Or is there another method to do it?
If you want to fetch files with your own application, you should use the API (Application Programming Interface) to get them.
Have a look at the file download documentation for Google Drive.
Example download snippet in PHP:
$fileId = '0BwwA4oUTeiV1UVNwOHItT0xfa2M';
$response = $driveService->files->get($fileId, array(
'alt' => 'media'));
$content = $response->getBody()->getContents();
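For completeness, a hedged sketch of how $driveService might be constructed and the content written to disk, assuming the google/apiclient Composer package (newer, namespaced versions; older releases use underscored class names like Google_Client) and a hypothetical credentials.json file:

require 'vendor/autoload.php';

$client = new Google\Client();
$client->setAuthConfig('credentials.json'); // OAuth or service-account JSON
$client->addScope(Google\Service\Drive::DRIVE_READONLY);
$driveService = new Google\Service\Drive($client);

// ... run the snippet above, then persist the bytes:
file_put_contents('downloaded_file', $content);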
I have a plan with one requirement:
server 1 = web server, holding the source (PHP etc.)
server 2 = storage for the files/documents uploaded by clients
The client accesses the web application on server 1 and then uploads or downloads documents/images, but I want the documents or images stored on server 2.
So the web source and the files/images are kept separate.
But I don't know the ideal way to transfer files from server 1 to server 2. I have tried using FTP and cURL.
Please also consider how the client downloads files from server 2.
What is the ideal approach to transfer files (upload or download) for the scenario above?
You can accomplish this with PHP FTP.
Make sure you have the correct permissions to write files on the servers.
Download a file using cURL:
set_time_limit(0); // unlimited max execution time for big files

$fp = fopen('/path/download/file/server1.img', 'wb'); // CURLOPT_FILE expects a file handle, not a path
$options = array(
    CURLOPT_FILE    => $fp,
    CURLOPT_TIMEOUT => 28800, // set this to 8 hours so we don't time out on big files
    CURLOPT_URL     => 'http://server2.com/path/download/file/server2.img',
);

$ch = curl_init();
curl_setopt_array($ch, $options);
curl_exec($ch);
curl_close($ch);
fclose($fp);
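And for the reverse direction, a minimal sketch of pushing a file from server 1 to server 2 with PHP's FTP extension (hostname, credentials, and paths below are placeholders):

$conn = ftp_connect('server2.example.com');
if ($conn === false) {
    die('Could not connect to server 2');
}
ftp_login($conn, 'ftp_user', 'ftp_password');
ftp_pasv($conn, true); // passive mode is usually needed behind NAT/firewalls

// upload the local file to server 2 in binary mode
if (ftp_put($conn, '/remote/path/file.img', '/local/path/file.img', FTP_BINARY)) {
    echo 'Upload successful';
}
ftp_close($conn);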
For this you can use any CDN hosting provider, like Rackspace or another. I have actually used Rackspace: we uploaded files there, keeping storage separate from the code. They provide an API to upload, delete, and retrieve files, and you can even create directories. They give you an API key and secret key which are used to make API calls. While implementing this, write common functions that can be used throughout the project for these operations.
After uploading a file to a GCS bucket that is publicly readable, when I access the file directly (http://storage.googleapis.com//image.png) the browser (Chrome) downloads the file instead of displaying it.
In the Inspector, I can see it is being served with a content type of binary/octet-stream.
How do I make sure the file has the correct content type when retrieving?
When using the GCS PHP Filesystem Library, you can set the Content-Type of the uploaded object using the file options parameter. There is an example of it in the documentation:
$options = ['gs' => ['Content-Type' => 'text/plain']];
$ctx = stream_context_create($options);
file_put_contents('gs://my_bucket/hello.txt', 'Hello', 0, $ctx);
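As an optional check (assuming the object is public, at a hypothetical URL), you can confirm the served Content-Type from PHP:

// fetch just the response headers for the uploaded object
$headers = get_headers('https://storage.googleapis.com/my_bucket/hello.txt', 1);
echo $headers['Content-Type']; // should now print "text/plain" instead of "binary/octet-stream"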