I am using a somewhat older version of the cloud-vision PHP package, 0.19.0, because of dependency conflicts with other packages. This might be the cause of the problem, but I am not sure.
When working on localhost I make a request and it all goes well: the Vision API returns valid responses. But since deploying to production, every attempt to use it returns an error:
"message": "Request must specify image and features., "code": 3,"status": "INVALID_ARGUMENT","details": []
Is it the old package, or is something else the problem here? I ran out of ideas.
I am using the PHP library, so the code is pretty simple. I call
file_get_contents($imageUrl)
and pass the resulting string to the Vision client functions.
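For reference, here is a minimal sketch of what that call chain looks like with the pre-1.0 VisionClient API; the project ID and the feature list below are placeholders, not my actual values:

use Google\Cloud\Vision\VisionClient;

// Placeholder project ID for illustration.
$vision = new VisionClient(['projectId' => 'my-project-id']);

// Wrap the raw image bytes and request a feature (label detection here).
$image = $vision->image(file_get_contents($imageUrl), ['LABEL_DETECTION']);

// Send the annotate request; this is the call that fails in production.
$annotation = $vision->annotate($image);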
The problem was with how $imageUrl was built on local/dev versus production. I am using AWS, so the dev $imageUrl was different from the production one, and the production URL was returning an access denied error.
Check your URLs when working with AWS.
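A quick sanity check like the sketch below would have caught it, since a failed fetch means an empty or bogus image body reaches the Vision API:

// Fail fast if the image URL is not readable; an S3 "Access Denied"
// response shows up here instead of as a confusing Vision API error.
$imageData = @file_get_contents($imageUrl);
if ($imageData === false) {
    throw new RuntimeException('Could not fetch image from ' . $imageUrl);
}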
Related
I'm trying to get user uploads sent directly to Google Cloud Storage from my app on AppEngine Flex (PHP 7.2). The code works locally (not using the GCP local dev server), but when I deploy the code to AppEngine and open the page/view I get a 500 error. Can anyone with more experience with GCP and GCS point out what I'm doing wrong?
I put debugging statements (removed for brevity) into the live server code, and I can see that execution stops directly before the call to $object->beginSignedUploadSession() (see code below).
use Google\Cloud\Storage\StorageClient;

// Client uses AppEngine's ambient credentials (no keyFile given).
$config = [
    'projectId' => 'my-project-id'
];
$storage = new StorageClient($config);

// Target object in the default AppEngine bucket.
$bucket = $storage->bucket('my-project-id.appspot.com');
$object = $bucket->object('csv_data/tmpcsvdata-' . $model->file_hash . '.csv');

// This is the call the debugging statements stop in front of.
$upload_url = $object->beginSignedUploadSession();
Locally this correctly generates the signed upload URL, so I can insert it into the view and the AJAX then takes care of uploading the user's file to GCS. Live, the application error handler (Yii2) returns Error (#403) but presents no other details, and the AppEngine logs show nothing beyond Error 500.
On the assumption that #403 means Forbidden and that the issue is with credentials, I've re-checked those, but they seem fine; I assume I don't need to provide a keyFile or keyFilePath because the app runs on AppEngine (unlike when I run it locally).
I've done some fairly extensive searches but can't find anything that seems to relate.
Any suggestions would be greatly appreciated.
Update
Managed to get some more error details. The error message is "IAM Service Account Credentials API has not been used in project XXXX" and the exception is "GuzzleHttp\Exception\ClientException". So it does seem to be a 403 Forbidden, and I guess it must be credentials related, but I'm still not sure how to fix it.
Make sure your AppEngine service account has the iam.serviceAccounts.signBlob permission. You can get it by granting the Service Account Token Creator role to that service account in the IAM console.
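Since the error message also says the IAM Service Account Credentials API "has not been used", you likely need to enable it as well. Roughly like this (the project ID and the default AppEngine service account name below are assumptions):

gcloud services enable iamcredentials.googleapis.com --project=my-project-id
gcloud projects add-iam-policy-binding my-project-id \
    --member="serviceAccount:my-project-id@appspot.gserviceaccount.com" \
    --role="roles/iam.serviceAccountTokenCreator"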
In the end I solved my issue by including my service account JSON credentials file in my AppEngine deploy.
I had previously listed it in the .gcloudignore file (so it was never deployed) and wasn't using it in the live config for the StorageClient, based on my understanding of the documentation for AppEngine and Cloud Storage, which seemed to imply that credentials would be automatically detected by AppEngine and applied as necessary.
I'm not sure if this solution is secure or best practice but it worked for me in this case.
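For anyone trying the same approach, the client config ends up looking roughly like this; the file name below is just a placeholder for whatever key file you deploy:

use Google\Cloud\Storage\StorageClient;

// Point the client at the deployed key file explicitly instead of
// relying on AppEngine's ambient credentials. Path is a placeholder.
$storage = new StorageClient([
    'projectId'   => 'my-project-id',
    'keyFilePath' => __DIR__ . '/service-account.json',
]);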
I have random problems with my S3 setup. I have several environments (cloud, dev, etc. machines), and on almost all of them S3 works perfectly fine (I am using the PHP SDK with Gaufrette). The only exception is the production environment, where it does not work. It has nothing to do with the bucket or with my credentials: I provide the credentials via environment variables and have not changed them. Across several deploys I have seen it both working and not working. I have no idea when it works and when it doesn't, but as soon as I have deployed one codebase, whether it works or not seems to be fixed. Sometimes even an empty redeploy can solve the issue.
Here are the logs I am getting:
https://gist.github.com/KeKs0r/872af7eff4d723a589c5
I have read that the signature sometimes has problems with special characters, or that in some environments it is related to timezones. How could I check those settings, and what should I be looking for?
(I am working with AWS SDK 1.5.17.1)
This is one example signature:
AmazonS3[x-aws-requestheaders][Authentication]: "AWS MYKEY:pEU9UV/Yu1+7V71P55UuON8nGpQ="
Is the issue maybe caused by the / and + signs? Why is the SDK not taking care of them?
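In case it helps, this is the kind of quick check I can run on the production machine for the timezone theory (just a sketch; the endpoint is an assumption). S3 rejects requests whose clock drifts too far from Amazon's (RequestTimeTooSkewed):

// What PHP thinks the timezone and current UTC time are.
echo 'PHP timezone: ' . date_default_timezone_get() . "\n";
echo 'Local UTC time: ' . gmdate('r') . "\n";

// Compare against the Date header S3 itself sends back.
$headers = get_headers('https://s3.amazonaws.com', 1);
echo 'S3 server time: ' . $headers['Date'] . "\n";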
I am currently trying out Google App Engine for PHP in my local development environment. So far I have been following the instructions at https://developers.google.com/appengine/docs/php/gettingstarted/helloworld to test out a small app and get used to how the SDK works. However, when I get to the point of loading the test web server using the SDK, I get an error trying to load the very basic helloworld.php example. The command I currently run is:
../GoogleAppEngineSDK/google_appengine/dev_appserver.py --php_executable_path=/usr/bin/php --port=9999 helloworld/
As you can see, I use a custom port to avoid a conflict with another application that runs on the default 8080. The SDK engine loads fine, but as soon as I try to access my application at localhost:9999 I get the error:
AssertionError("invalid CGI response: ''",)
and the web page itself throws a 500 error.
So far my attempts to correct the problem have yielded nothing, and I was wondering if there may be something I am missing.
You should make sure you're pointing to the php-cgi executable, not php. Not every OS ships with it, so you may need to install it. The getting started guide has more detailed instructions.
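With the command from the question, that would look something like this (assuming php-cgi is installed at /usr/bin/php-cgi; the path varies by system):

../GoogleAppEngineSDK/google_appengine/dev_appserver.py --php_executable_path=/usr/bin/php-cgi --port=9999 helloworld/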
Just had this issue. Changing my php_executable_path to /opt/local/bin/php-cgi54 did the trick.
I've been developing services locally and exposing them with OAuth (the oauth-php framework). Everything works perfectly fine locally on my laptop, but when I deploy everything to my server it doesn't work anymore. I get the following error when I'm trying to get a request_token:
Can't verify request, missing oauth_consumer_key or oauth_token
I've been investigating why it doesn't behave the same, and the only clue I found is in the OAuth log: it looks like the oauth-php framework doesn't properly fetch the parameters from my POST requests.
I have the same version of PHP on my server and on my local environment. I don't know what else could be affecting the oauth-php framework.
What can I do to find the problem? I don't know where to look...
Thanks!
Martin
I noticed that the Authorization header was being stripped from the request. I renamed it on the client and server side and everything is working great.
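For anyone hitting the same thing, the server-side half of the fix amounts to something like this; the custom header name X-Authorization is just what I picked, anything your stack won't strip will do:

// Map the renamed header back before oauth-php parses the request, since
// this server setup strips the standard Authorization header from PHP.
if (!isset($_SERVER['HTTP_AUTHORIZATION']) && isset($_SERVER['HTTP_X_AUTHORIZATION'])) {
    $_SERVER['HTTP_AUTHORIZATION'] = $_SERVER['HTTP_X_AUTHORIZATION'];
}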
In our application, the backend is accessed via Zend_XmlRpc. In the backend, I'm using Zend_Http_Client together with Zend_Http_Client_Adapter_Curl to connect to another web service over HTTPS.
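The backend's client setup is essentially the following sketch (host and URL are placeholders):

// Zend_Http_Client configured with the cURL adapter, as described above.
$client = new Zend_Http_Client('https://test.example.com/service', array(
    'adapter' => 'Zend_Http_Client_Adapter_Curl',
));
$response = $client->request(Zend_Http_Client::GET);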
During unit tests everything works as expected and the remote service is reachable. But when the frontend connects to the backend via Zend_XmlRpc and causes the backend to do exactly the same thing the unit tests do, I get the following error:
inet_pton(): Unrecognized address test.example.com#0 (url changed)
This is caused by Zend_Validate_Ip->isValid('test.example.com').
The only difference I can spot is the additional frontend-backend connection, which also uses Zend components for communication. Everything else is the same.
Anybody any idea?
Looks like it might be a resolver issue on the server that ZF isn't catching beforehand. It's getting a hostname where it should be getting an IP address (obviously), and it can't convert that string to a binary IP address.
It was an error in Zend_Validate that was fixed in release 1.9.