I'm trying to upload user files directly to Google Cloud Storage from my app on App Engine Flex (PHP 7.2). The code works locally (not using the GCP local dev server), but when I deploy to App Engine and open the page/view I get a 500 error. Can anyone with more GCP and GCS experience point out what I'm doing wrong?
I added debugging statements (removed here for brevity) to the live code, and execution stops directly before the call to $object->beginSignedUploadSession() (see code below).
use Google\Cloud\Storage\StorageClient;

$config = [
    'projectId' => 'my-project-id',
];
$storage = new StorageClient($config);
$bucket = $storage->bucket('my-project-id.appspot.com');
$object = $bucket->object('csv_data/tmpcsvdata-' . $model->file_hash . '.csv');
// Execution stops here on App Engine:
$upload_url = $object->beginSignedUploadSession();
Locally this correctly generates the signed upload URL so I can insert it into the view, and from there the AJAX takes care of uploading the user's file to GCS. Live, the application error handler (Yii2) returns Error (#403) with no further details, and the App Engine logs show nothing beyond the Error 500.
On the assumption that #403 means Forbidden and that the issue is with credentials, I've re-checked those, but they seem fine. I assume I don't need to provide a keyFile or keyFilePath because the app runs on App Engine (unlike when I run it locally).
I've done some fairly extensive searches but can't find anything that seems to relate.
Any suggestions would be greatly appreciated.
Update
Managed to get some more error details. The error message is "IAM Service Account Credentials API has not been used in project XXXX" and the exception is "GuzzleHttp\Exception\ClientException". So it seems it is a 403 Forbidden and I guess must be credentials related but I'm still not sure how I should fix this.
Make sure your App Engine service account has the iam.serviceAccounts.signBlob permission. You can get it by granting the Service Account Token Creator role; Google's IAM documentation has a guide to granting access.
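Based on the error in the update above, two things need to be in place: the IAM Service Account Credentials API must be enabled, and the role granted. A sketch with gcloud (replace my-project-id with your project; the App Engine default service account is normally PROJECT_ID@appspot.gserviceaccount.com):

```shell
# Enable the API named in the error message.
gcloud services enable iamcredentials.googleapis.com --project my-project-id

# Grant the App Engine default service account the Token Creator role,
# which includes iam.serviceAccounts.signBlob.
gcloud projects add-iam-policy-binding my-project-id \
  --member="serviceAccount:my-project-id@appspot.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"
```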
In the end I solved my issue by uploading my service account JSON credentials file to App Engine in my deploy.
I had previously listed this file in .gcloudignore and wasn't using it in the live config for the StorageClient, based on my understanding of the documentation for App Engine and Cloud Storage, which seemed to imply that credentials would be automatically detected by App Engine and applied as necessary.
I'm not sure if this solution is secure or best practice but it worked for me in this case.
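For reference, this is roughly what the live config ends up looking like: a minimal sketch assuming the JSON key ships with the deploy (the file path is a placeholder; keep the key outside any public web root).

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\Storage\StorageClient;

// The key file path is a placeholder -- point it at the JSON key
// you deploy with the app.
$storage = new StorageClient([
    'projectId'   => 'my-project-id',
    'keyFilePath' => __DIR__ . '/config/service-account.json',
]);
```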
Related
I've been spending a few days trying to figure out how to set AWS S3 as external storage for ResourceSpace, and I've been getting more and more confused by this app.
I'm using the open-source version and trying to customize it to my needs.
I've been through the web app's lengthy documentation but couldn't find anything about setting storage (like other web apps out there). However, I found a feature called syncdir that sets an alternative external storage location (for backup), but from the documentation there doesn't seem to be a direct method to specify storage or integrate S3 with it.
I've tried the following:
I tried the usual approach for integrating AWS S3 with any PHP website: changing the storage directory 'storagedir' and the 'syncdir' directory in the config.default file (I added the required S3 autoload file and my AWS keys to the config file), but it's not working; the site is still storing files locally.
Note: I've integrated aws s3 before with Laravel 5.7 & Codeigniter 3 frameworks successfully.
I tried adding the require for aws-autoload into the file where the upload functions are, and tried to find the code responsible for uploading, but the code is confusing to me; I can't tell where the upload functionality lives (it's not a plain PHP function where $_FILES receives your upload).
I changed the place of the require for aws-autoload to include/general.php, but no luck.
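For comparison, this is the bare AWS SDK for PHP usage that the aws-autoload exposes: a generic sketch, not ResourceSpace-specific code. The bucket name, region, key prefix, and credentials below are all placeholders.

```php
<?php
require 'vendor/autoload.php'; // the aws-autoload mentioned above

use Aws\S3\S3Client;

// Placeholder credentials and region -- substitute your own.
$s3 = new S3Client([
    'version'     => 'latest',
    'region'      => 'us-east-1',
    'credentials' => [
        'key'    => 'YOUR_AWS_KEY',
        'secret' => 'YOUR_AWS_SECRET',
    ],
]);

// Upload one local file into the bucket.
$localPath = '/path/to/local/file.jpg'; // placeholder
$s3->putObject([
    'Bucket'     => 'your-bucket',
    'Key'        => 'filestore/' . basename($localPath),
    'SourceFile' => $localPath,
]);
```

The point is that some code path in the app has to make a call like this (or use a stream wrapper); changing 'storagedir' alone only moves the local directory.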
Followed up with some forums on the matter like:
using external storage
Amazon S3 integration
I'm assuming that by using the config file (to store AWS credentials, with storage set to an S3 bucket URL) and including the aws-autoload in the general/upload file, the app would automatically understand where it should upload, but no error or bug is reported to address it.
But most of what I found relates to the paid version of the DAM system, where it seems to be already set up on Amazon.
Please advise; any help is appreciated.
I'm using WAMP on Windows 10, by the way.
Check out this discussion; it might help you:
https://groups.google.com/forum/#!topic/resourcespace/JT833klfwjc
It looks like S3 support is still a work in progress, so you may see WIP code; you will find links to the code in that thread.
I am a newbie in PHP, but a long-time coder. I am trying to get the Google Drive API for PHP working with the example files provided, but I am getting a redirect error with the following information in the message:
redirect_uri=http://jppp.com:8080/interface/googletest/simplefileupload.php
In every answer I have seen, the problem is that the redirect URI in the code does not match the one in the Google console, but in my case they appear identical. Here is the line of code (from the example). I can't use localhost because the LAMP server is hosted in Docker, and Google won't let you use an IP address for the URI, so I ended up pointing jppp.com at the Docker container in my hosts file.
$redirect_uri = 'http://jppp.com:8080/interface/googletest/simplefileupload.php';
and here is the relevant section of my Google console API credentials. I have also waited more than half an hour in case Google was slow to update, and tried it in a different browser.
[screenshot of the Google API credentials screen]
Can anyone see a difference between the code and the console? What else can I try to get this working?
Thanks,
Derek
Apart from having this URL added as an allowed redirect_uri, you need to configure the domain itself in the Google Console.
Once you add it and verify ownership, you can use it. The section can be found here:
https://console.developers.google.com/apis/credentials/domainverification
We have a bunch of mixed up authenticated accounts from the past few years in our database, where some were given "Full Access" early on and later it was changed to just using the "App Folder".
Is there any way of using the API to know if the access_token we have is within an App Folder, or to the whole account?
We basically want to switch all accounts to App Folders, but only want to alter those that need it. We'll have to move folders and also store a default path in the DB.
Having looked through the documentation, I can't see anything that gives this info. Any thoughts?
The Dropbox API is designed so that the permission level (e.g., app folder vs full Dropbox) is transparent to the app, so there isn't a good/official programmatic way to detect the permission for any given access token. We'll consider this a feature request though.
That said, some features of the API are only accessible to full Dropbox apps, so you could use those as a way to implicitly detect the permission. For example, the /2/sharing/list_folders endpoint is only usable by full Dropbox apps:
Apps must have full Dropbox access to use this endpoint.
That could potentially work for you, though it's still not a great solution, since in the case of an app folder app, you'll get a 400 error with a plain text body (and not a nice structured 409 error) so you can't reliably tell the difference between that and some other 400 error. To work around that you could match against the error message, but that could change of course:
Error in call to API function "sharing/list_folders": Your API app is an "App Folder" app. It is not allowed to access this API function.
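A sketch of that implicit probe with raw cURL. The endpoint and the plain-text 400 behaviour are as described above; the helper names are hypothetical, and the classification logic is kept separate from the HTTP call so the fragile string match is easy to adjust.

```php
<?php
// Classify a /2/sharing/list_folders response by status and body.
function permission_from_response(int $status, string $body): string
{
    if ($status === 200) {
        return 'full_dropbox';
    }
    // App folder apps get a plain-text 400 naming the restriction;
    // matching the message is brittle, since it could change.
    if ($status === 400 && strpos($body, 'App Folder') !== false) {
        return 'app_folder';
    }
    return 'unknown';
}

// Perform the probe call with a given OAuth access token.
function probe_permission(string $accessToken): string
{
    $ch = curl_init('https://api.dropboxapi.com/2/sharing/list_folders');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => '{}',
        CURLOPT_HTTPHEADER     => [
            'Authorization: Bearer ' . $accessToken,
            'Content-Type: application/json',
        ],
    ]);
    $body   = (string) curl_exec($ch);
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);
    return permission_from_response($status, $body);
}
```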
Thanks to @Greg for pointing me in the right direction.
Because /sharing/list_folders didn't return a structured error I could inspect, I was unable to distinguish between Full Dropbox and App Folder access.
For future reference, to solve my query I did the following:
- Sent an API request to /2/sharing/get_folder_metadata with a made-up shared_folder_id
- This will always return an error (unless you somehow managed to pick a shared_folder_id that exists!)
- If the error contained a response (which will always be invalid_id), then the access_token has Full Dropbox access
- If the error was empty, then the access_token has only App Folder access
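The steps above can be sketched with raw cURL; the endpoint and the invalid_id error summary are as described, while the function names and the bogus shared_folder_id value are mine.

```php
<?php
// Decide from the error body whether the token has full Dropbox access.
function has_full_access(string $errorBody): bool
{
    // Full Dropbox tokens get a structured error mentioning invalid_id;
    // app folder tokens get an empty response instead.
    return strpos($errorBody, 'invalid_id') !== false;
}

// Probe with a deliberately bogus shared_folder_id.
function probe_full_access(string $accessToken): bool
{
    $ch = curl_init('https://api.dropboxapi.com/2/sharing/get_folder_metadata');
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => json_encode(['shared_folder_id' => '0']),
        CURLOPT_HTTPHEADER     => [
            'Authorization: Bearer ' . $accessToken,
            'Content-Type: application/json',
        ],
    ]);
    $body = (string) curl_exec($ch);
    curl_close($ch);
    return has_full_access($body);
}
```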
I am a newbie trying to create a Facebook app using PHP and Facebook's PHP SDK. The app is hosted on Heroku, and the sample app they provide works fine there. However, I am now trying to get the sample app to work on Apache 2.2, and I have encountered a lot of problems along the way. Straight to the point: my latest problem is doing Facebook login on localhost; the 'An error occurred. Please try later' message appears in the popup dialog. This does not happen on Heroku.
Can someone please enlighten me as to what steps I can take to overcome this error? I don't think it has anything to do with a coding error, since I am just following the provided sample app. Thanks!
This is happening because FB does not recognize the URL of the host issuing requests. You need to determine what the FQDN of your localhost server is then add it to the Site URL section of the FB App Settings manager. Chances are you have the URL of your Heroku server in there now.
It's simple: for localhost you must create another sample application in Facebook for your local copy, and you must have the permission; then you can log in and work with your local one.
Simply put, for every copy of your app you must create a corresponding app in Facebook.
Hope it helps you.
Good luck.
I'm trying to run the example.php file that comes with the Facebook SDK. I have a hosting server that runs PHP, and I also changed the IDs to the corresponding ones for my app. Here is the message I'm getting:
This webpage is not available
The webpage at https://filipeximenes.com/facebook/ might be temporarily down or it may have moved permanently to a new web address.
Error 501 (net::ERR_INSECURE_RESPONSE): Unknown error.
I'm pointing the canvas to this address: http://filipeximenes.com/facebook/
Thanks.
Based on your description, I think that this is your problem:
Do you have a valid security certificate on the hosting server? I ran into that problem recently when deploying an FB app. Since October, you have to have a valid cert even in sandbox mode for the FB app to run properly. If you don't have one it causes weird problems.
Just a thought that I hope helps.
One other thing to do from a debugging perspective is to look at the actual app running on your hosting server without viewing it via FB. If you get the same error message there, you know it has nothing to do with the FB SDK.
Thanks!
Matt