Podio error when getting apps - php

Error
ErrorException: array_merge(): Argument #1 is not an array in /my/server/vendor/podio/podio-php/lib/PodioObject.php:200
Stack trace:
#0 [internal function]: Illuminate\Foundation\Bootstrap\HandleExceptions->handleError(2, 'array_merge(): ...', '/my/server/...', 200, Array)
#1 /my/server/vendor/podio/podio-php/lib/PodioObject.php(200): array_merge(NULL, Array)
#2 /my/server/vendor/podio/podio-php/models/PodioApp.php(39): PodioObject::member(Object(PodioResponse))
#3 /my/path.php(413): PodioApp::get(xxxxxxx)
This appears to be a bug in the Podio PHP SDK or the Podio API. The json_response (which is what causes the array_merge error) is null, yet the HTTP response code is 200. I cannot reproduce it reliably, but it occurs roughly 10% of the time in a script that makes around 30 of these calls. I can run the Get App call directly from the documentation just fine.
I know it's an error with the responses because my script breaks at a different place on each rerun, depending on which data hasn't been loaded from the API correctly.
Test 1: Exception at line 344 as the result of $app1 being null
Test 2: Exception at line 814 as the result of $app3 being null
etc...
This is a script that was not modified and has been in place for over 6 months, but stopped working sometime last week.
EDIT: I've also confirmed that the same error occurs with cURL, so it isn't an SDK-specific issue.

The same intermittent error has been occurring for us as well, ever since the TLS change was rolled out.
A temporary workaround is to wrap calls in a do/while loop that retries when there are errors.
E.g.
// Get item from API
$attempts = 0;
do {
    try {
        $item = PodioItem::get($itemId);
    } catch (\Exception $e) {
        $attempts++;
        Log::error("PodioItemGetFailure #" . $attempts . ". " . $e->getMessage());
        sleep(3);
        continue;
    }
    break;
} while ($attempts < 3);
This is a bit nasty, so hopefully there is a resolution of the cause on Podio's side soon.

These intermittent errors should not be happening anymore :)
Unless, of course, your network connection is unstable or choppy.
In any case, it's good to have proper handling for network-dependent calls (which includes any Podio API call). I can only suggest that all Podio API calls go through a queueing mechanism that allows a retry when the network is unstable or Podio is down for maintenance, for example.
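For what it's worth, here is a minimal sketch of a generic retry wrapper along those lines, assuming podio-php is already set up and authenticated; the helper name podioCallWithRetry and the $appId/$itemId variables are made up for illustration, not part of the SDK.
<?php
// Hypothetical helper, not part of podio-php: retries any Podio call a few times
// before giving up, which papers over transient network/API hiccups.
function podioCallWithRetry(callable $call, $maxAttempts = 3, $delaySeconds = 3)
{
    $attempt = 0;
    while (true) {
        try {
            return $call(); // success: hand the API result straight back
        } catch (\Exception $e) {
            $attempt++;
            if ($attempt >= $maxAttempts) {
                throw $e; // still failing after the last attempt: let the caller handle it
            }
            sleep($delaySeconds); // brief back-off before retrying
        }
    }
}

// Usage: wrap the calls that fail intermittently ($appId and $itemId assumed to exist).
$app = podioCallWithRetry(function () use ($appId) {
    return PodioApp::get($appId);
});
$item = podioCallWithRetry(function () use ($itemId) {
    return PodioItem::get($itemId);
});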

Related

Roundcube attachment upload internal server error

So basically I have been looking for a solution for like two days straight and nothing seems to be helping.
I am using the Roundcube mail client with IMAP, Postfixadmin and Dovecot, and whenever I try to upload attachments I get an internal server error.
Here is something I managed to catch in the logs:
[11-Nov-2021 01:41:27 UTC] PHP Fatal error: Uncaught TypeError: fclose(): Argument #1 ($stream) must be of type resource, null given in /var/www/roundcube/program/lib/Roundcube/rcube_imap_generic.php:430
Stack trace:
#0 /var/www/roundcube/program/lib/Roundcube/rcube_imap_generic.php(430): fclose()
#1 /var/www/roundcube/program/lib/Roundcube/rcube_imap_generic.php(1149): rcube_imap_generic->closeSocket()
#2 /var/www/roundcube/program/lib/Roundcube/rcube_imap.php(215): rcube_imap_generic->closeConnection()
#3 /var/www/roundcube/program/lib/Roundcube/rcube.php(1038): rcube_imap->close()
#4 /var/www/roundcube/program/include/rcmail.php(921): rcube->shutdown()
#5 [internal function]: rcmail->shutdown()
#6 {main}
thrown in /var/www/roundcube/program/lib/Roundcube/rcube_imap_generic.php on line 430
There are a lot of settings all around the server, so if you think you need some of them for debugging, just ask and I'll happily post them here.
EDIT: I made a quick video of everything going on. You can see that the upload "fails" with an internal server error message, but after refreshing the page the attachment is there and it is sent with the email. After receiving that email, I can't see the attachment preview, but when I click on it I can view and download it.
After a few long days, I finally managed to figure this out on my own, and it's really simple. What's happening is that Roundcube is trying to close a stream that doesn't exist (the handle is already null).
So, to all of you facing the same problem: to fix this, edit the file "path/to/roundcube/program/lib/Roundcube/rcube_imap_generic.php" at line 430.
Change this:
protected function closeSocket()
{
    @fclose($this->fp);
    $this->fp = null;
}
Into this:
protected function closeSocket()
{
    if ($this->fp) {
        @fclose($this->fp);
    }
    $this->fp = null;
}

Using msg_send() in PHP results in error 11... Cannot find a solution

I am currently building a small application that uses the message queue functionality built into PHP.
I have 1 "server" process and 1 "client" process. Messages flow from server to client.
They are simple JSON objects that are serialised and then sent.
This is the code used:
<?php
$send = msg_send($q, MESSAGE_TYPE_EXECUTION, $update, true, false, $error);
if (isset($error) && $error != 0) {
    echo 'Execution error: ' . $error . PHP_EOL;
}
// $q is the message queue resource
// MESSAGE_TYPE_EXECUTION is the integer 1
// $update is the JSON string
// true means the message should be serialised
// false means the send is non-blocking
// $error gets filled with an error code when an error occurs (see below)
This works without issue, until it does not.
Sometimes after a couple of minutes, sometimes after a couple of hours the following error appears:
PHP Warning: msg_send(): msgsnd failed: Resource temporarily unavailable in
/var/www/server.php on line 57
The value of the $error variable is the integer 11.
All messages that follow this error also fail with error 11, until I restart the process and everything works again (for a while, until the same error appears again).
I have been searching but cannot find any explanation of what error 11 is, or how it can be handled and fixed without restarting the process.
Any clue, information, example etc is welcome. I would really like for server.php to be reliable.
-- edit --
client.php is the process that fetches the messages (which are all more or less the same, but with different values).
It uses this to fetch the messages from the queue (filled by server.php):
<?php
$update = msg_receive($q, 0, $messagetype, 1024, $message, true, MSG_IPC_NOWAIT && MSG_NOERROR, $error);
if ($update) {
    // Do stuff
}
usleep(1000000);
I have not yet checked memory usage, will look into that
Platform used:
PHP 7.1.3
CentOS 7
So, the solution was found after some information and leads (read the comments on my original question) brought up by @ChrisHaas (thanks again!). After some tinkering, everything is running smoothly now, without error 11 from msg_send().
PHP's msg_send() is basically a wrapper around the msgsnd system call,
so a lot of information can be found in its documentation, including the errors you might encounter (in combination with the flags used when reading messages with msg_receive()).
The queue is limited in both total size and the number of messages it can hold (I have, however, not found a way to increase the total size of the queue).
The reason I was getting error 11 was due to a couple of things:
The client I created was too slow at fetching messages from the queue, causing the queue to hit its maximum limit and fail. I did not find a way to recover from this situation other than restarting all processes involved, only to repeat the same cycle over and over again.
I also increased the max_size used when reading messages with msg_receive(), as some messages were big (most were small). If you declare too small a size, the big messages remain in the queue and clog it up until everything fails. Increasing the max_size helped with fetching the bigger messages too.
Long story short: from my perspective, error 11 (EAGAIN) points to a full message queue (I still do not have a 100% clear, documented answer though).
Pointers to fix the issue:
Be sure your max_size is large enough to fetch even the biggest messages.
Be sure to read messages out at least as fast as you send them into the queue.
Check your queue(s) with the command ipcs -q in the terminal. It lets you see the queues that are currently active, and keeping an eye on it lets you spot a queue slowly filling up when there is a problem. (You can do the same check from PHP; see the sketch below.)
I wish the documentation on php.net were better in this case...
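To complement the ipcs -q check in the list above, here is a small sketch (only a sketch, with a made-up threshold) of doing the same monitoring from PHP with msg_stat_queue(), so the sender can notice a backed-up queue before msg_send() starts returning error 11:
<?php
// Assumes $q is the queue resource obtained elsewhere with msg_get_queue().
$stats = msg_stat_queue($q);

// msg_qnum   = number of messages currently sitting in the queue
// msg_qbytes = maximum number of bytes the queue may hold in total
printf("Messages waiting: %d (queue capacity: %d bytes)\n",
    $stats['msg_qnum'], $stats['msg_qbytes']);

// Hypothetical guard: if the consumer is clearly falling behind, back off a bit
// instead of letting msg_send() fail with EAGAIN (error 11).
if ($stats['msg_qnum'] > 1000) {
    usleep(250000);
}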

Can't request data on elasticsearch

I am developing graphs for the connection logs of a website. The logs are parsed by Logstash and served via Elasticsearch.
I've developed some graphs, nothing outstanding or hard, but they work.
On Friday I started Elasticsearch, and every script I wrote to get the data fails when I send the query.
My first thought was that I had somehow modified the query, so I printed it and sent it to Elasticsearch (with the head plugin). The query was fine and I got results.
I tried purging the logs from Logstash and Elasticsearch, and re-feeding them from known good data... That didn't fix anything.
I checked the config for errors and used a backup of a working one; that didn't help either.
As a last hope, I turned on PHP error display, and I do get an exception thrown from deep inside the Elasticsearch client:
Fatal error: Uncaught exception 'Guzzle\Http\Exception\ServerErrorResponseException' with message 'Server error response [status code] 500 [reason phrase]
Internal Server Error [url] http://localhost:9200/empreinte_index/mobile/_search' in /home/empreinte/vendor/guzzle/guzzle/src/Guzzle/Http/Exception/BadResponseException.php:43
Stack trace:
#0 /home/empreinte/vendor/guzzle/guzzle/src/Guzzle/Http/Message/Request.php(145): Guzzle\Http\Exception\BadResponseException::factory(Object(Guzzle\Http\Message\EntityEnclosingRequest), Object(Guzzle\Http\Message\Response))
#1 [internal function]: Guzzle\Http\Message\Request::onRequestError(Object(Guzzle\Common\Event), 'request.error', Object(Symfony\Component\EventDispatcher\EventDispatcher))
#2 /home/empreinte/vendor/symfony/event-dispatcher/Symfony/Component/EventDispatcher/EventDispatcher.php(164): call_user_func(Array, Object(Guzzle\Common\Event), 'request.error', Object(Symfony\Component\EventDispatcher\EventDispatcher))
#3 /home/empreinte/vendor/symfony/event-dispatcher/Symfony in /home/empreinte/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Connections/GuzzleConnection.php on line 238
So this seems to support the idea that my code is fine, but I can't find what I did wrong, nor where to look.
Here is a minimal example of the script I use:
<?php
echo "Days";
require '/home/empreinte/vendor/autoload.php';
$client = new Elasticsearch\Client();

$Query['index'] = 'empreinte_index';
$Query['type'] = 'web';
echo ".";

// Building the timeframe needed. For brevity, using hardcoded data.
$timeframe = "{"from" : "1404165600", "to" : "1404252000" },{"from" : "1404252000", "to" : "1404338400" }";
echo ".";

$Query['body'] = '
{
    "aggs" :
    {
        "temps" :
        {
            "range" :
            {
                "field" : "time",
                "ranges" : [' . $timeframe . ']
            },
            "aggs" :
            {
                "new_users" :
                {
                    "terms" :
                    {
                        "field" : "is_newuser"
                    }
                }
            }
        }
    }
}';
echo ".";

$result = $client->search($Query);
// Parse the data to get it into a usable form for the graphs
echo "OK</br>";
?>
This outputs "Days...", plus the exception if PHP is set to display it.
(If requested, I'll post the config file and some logs.)
How can I fix this? Where can I find a similar error from which I can work out a fix? What does the error mean?
If you check the response, it looks like a request problem: 'request.error'.
When you construct the timeframe, you use double quotes to delimit the string but also within the string; this can be a problem as well.
I do not really think this is the problem, but try losing the quotes around the longs representing the timestamps.
Finally, print the query and try it in a tool like Sense from Elasticsearch, the head plugin, or the kopf plugin.
Hope that helps
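One way to sidestep the quoting question entirely (a sketch only, assuming PHP 5.4+ and the 1.x elasticsearch-php client used in the question, which accepts the body as a nested array) is to build the query as PHP arrays and let the client do the JSON encoding:
<?php
require '/home/empreinte/vendor/autoload.php';

$client = new Elasticsearch\Client();

// The same hard-coded ranges as in the question, but as data rather than a string.
$timeframe = [
    ['from' => 1404165600, 'to' => 1404252000],
    ['from' => 1404252000, 'to' => 1404338400],
];

$query = [
    'index' => 'empreinte_index',
    'type'  => 'web',
    'body'  => [
        'aggs' => [
            'temps' => [
                'range' => [
                    'field'  => 'time',
                    'ranges' => $timeframe,
                ],
                'aggs' => [
                    'new_users' => [
                        'terms' => ['field' => 'is_newuser'],
                    ],
                ],
            ],
        ],
    ],
];

$result = $client->search($query);
This removes one class of malformed-JSON problems; if the 500 persists, the Elasticsearch server log should then show the real cause.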

Alternating class not found error when using session handler

First of all, in case it's relevant: this is in a session handler. This function is the one that writes to the database, and it is passed to session_set_save_handler along with my other functions like this:
session_set_save_handler('sess_open', 'sess_close', 'sess_read', 'sess_write', 'sess_destroy', 'sess_gc');
I have this chunk of code...
$qid = "select count(*) as total
from zen_sessions
where sesskey = '" . $key . "'";
if(!class_exists('DB'))
require_once dirname(dirname(__FILE__)).'/class/DB.class.php';
var_dump(new DB()); //this is line 109
$total = DB::select_one($qid);
The conditional and var_dump are there for testing. Oddly enough, sometimes it works fine while other times it gives me an error:
Fatal error: Class 'DB' not found in /path/to/file/session_functions.php on line 109
I cannot figure out why this doesn't crash at the require instead of the var_dump, and why it only happens sometimes.
Thanks in advance for any insight.
edit-- response to comment/question:
The result of the following code
var_dump(class_exists('DB', false));
var_dump(is_file(dirname(__DIR__).'/class/DB.class.php'));
is:
bool(false) bool(true)
before trying to require it, and the same result after the require (or true/true when it doesn't give me an error).
It looks something like:
bool(true) bool(true) object(DB)#3 (0) { }
The previous output is what I get about once out of every 5 page loads, while the error is the result on the other 4.
Edit2 -- new findings.
Even more curious: according to the manual, I should never even see these debugging statements or errors:
Note:
The "write" handler is not executed until after the output stream is
closed. Thus, output from debugging statements in the "write" handler
will never be seen in the browser. If debugging output is necessary,
it is suggested that the debug output be written to a file instead.
Edit 3 - A Note for clarity:
The DB class should have been autoloaded (and is everywhere else in the application); the class_exists and require are simply there for testing purposes.
Edit 4 - Stack trace
I decided to throw an exception when the class isn't found in order to see the stack trace; this is what I get:
Fatal error: Uncaught exception 'Exception' with message 'DB Class Not Found.'
in /path/to/file/session_functions.php:108
Stack trace: #0 [internal function]: sess_write('074dabb967260e9...', 'securityToken|s...')
#1 {main} thrown in /path/to/file/session_functions.php on line 108
The only thing I can think of that may be causing this comes from a warning in the PHP docs for session_set_save_handler:
Warning
Current working directory is changed with some SAPIs if session is closed in the script termination. It is possible to close the session earlier with session_write_close().
From what you are experiencing, I am guessing the current working directory is changed, so require_once doesn't find the file.
I would try adding session_write_close(); somewhere in your code and see if that fixes it.
Admittedly, not sure why is_file would return true in this case, but maybe worth a shot.
Even though I cannot be sure, I bet the error is somewhere else and is just manifesting itself in the way you've described.
To test and debug your code, you need to use a debugger like PDT. But then the problem is that you need to debug a part of your code that is out of the debugger's reach: the session writer! To overcome this, you can use session_write_close(). Put it somewhere at the end of your bootstrap, or if you don't have one, you can do it like this:
<?php
function shutdown_function()
{
    session_write_close();
}
register_shutdown_function('shutdown_function');
Then, by setting a breakpoint, you can start debugging your session code from there. Let me know if I win the bet.
try:
$save_handler = new DB();
session_set_save_handler($save_handler, true);
Then map the read, write, etc. functions inside your class. I faced a similar issue (bizarre random errors about a class not being found) while implementing another user's custom save-handler workaround for HHVM with Redis, and this is how I fixed it. If you are using HipHop Virtual Machine (or possibly some other type of JIT compiler or app cache), sometimes your project can cache some functions without updating, producing odd errors like this. Usually a restart of the FastCGI daemon and adding whitespace to one of your files is enough to force it to reinterpret your project.
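For reference, here is a rough sketch of what that object-based handler could look like with SessionHandlerInterface (PHP 5.4+). The DB::query()/DB::select_one() calls and the zen_sessions columns are assumptions based on the snippets in the question, not a real API, and proper escaping of $id/$data is omitted for brevity:
<?php
require_once __DIR__ . '/class/DB.class.php'; // load the class explicitly, before any session I/O

class DbSessionHandler implements SessionHandlerInterface
{
    public function open($savePath, $sessionName) { return true; }
    public function close() { return true; }

    public function read($id)
    {
        // Hypothetical DB API and schema; adapt to the real DB class.
        $value = DB::select_one("select value from zen_sessions where sesskey = '" . $id . "'");
        return $value !== null ? (string) $value : '';
    }

    public function write($id, $data)
    {
        DB::query("replace into zen_sessions (sesskey, expiry, value) values ('"
            . $id . "', '" . (time() + 1440) . "', '" . $data . "')");
        return true;
    }

    public function destroy($id)
    {
        DB::query("delete from zen_sessions where sesskey = '" . $id . "'");
        return true;
    }

    public function gc($maxlifetime)
    {
        DB::query("delete from zen_sessions where expiry < '" . time() . "'");
        return true;
    }
}

session_set_save_handler(new DbSessionHandler(), true);
session_start();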

Selenium, php, phpUnit, 404 error calls testComplete() rather than continue, how do I stop this?

I am using Selenium Server and PHPUnit to run PHP-based tests. My tests are simple: check the page is there, that it loads and has no errors on the page, and then move on. I have a missing page, and rather than saying "yep, it's not there" and moving on, I get:
Time: 16 seconds, Memory: 14.75Mb
There was 1 error:
1) OlympicsSiteMapEnglishPages::testMyTestCase
PHPUnit_Framework_Exception: Response from Selenium RC server for testComplete().
XHR ERROR: URL = http://my.url/somepage Response_Code = 404 Error_Message = Not Found.
/some/path/some_file.php:375
FAILURES!
Tests: 1, Assertions: 0, Errors: 1.
I really need to figure out how to stop it doing this! I have tried catching the exception like so:
try {
    $this->open("/rel/url.php", 1);
} catch (PHPUnit_Framework_AssertionFailedError $e) {
    return array_push($this->verificationErrors, $e->toString());
}
Any clues guys, I really need help!
Many thanks,
Alex
PHPUnit_Framework_Exception inherits from Exception
PHPUnit_Framework_AssertionFailedError inherits from Exception
If you want to catch them both, you'll either have to catch the 'sort of expected' PHPUnit_Framework_Exception earlier on (and possibly rethrow it as a PHPUnit_Framework_AssertionFailedError), or resort to the generic try {} catch (Exception $e) {}, as in the sketch below.
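A sketch of that generic catch applied to the open() call from the question, keeping the existing verificationErrors convention; only the class names already mentioned above are assumed:
try {
    $this->open("/rel/url.php");
} catch (PHPUnit_Framework_AssertionFailedError $e) {
    // soft failure: record it and keep the test running
    array_push($this->verificationErrors, $e->toString());
} catch (Exception $e) {
    // catches PHPUnit_Framework_Exception as well (the XHR 404 case above)
    array_push($this->verificationErrors, $e->getMessage());
}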
Right, after a bit of digging nothing works, so my hacky (and very slow) workaround is to try to open each URL first using:
if (@fopen(rtrim($this->url, "/") . "/blah/blah/blah", "r"))
{
    // do some test, which should now not exit the browser as we know the page exists
}
Please, could someone find a much better way of doing this!
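A slightly less nasty variant of the same idea, still only a sketch: ask for the HTTP status with get_headers() before driving the browser, so a 404 never reaches Selenium at all. The URL building mirrors the fopen() line above.
$url = rtrim($this->url, "/") . "/blah/blah/blah";
$headers = @get_headers($url);

// $headers[0] is the status line, e.g. "HTTP/1.1 200 OK" when the page exists.
if ($headers !== false && strpos($headers[0], '404') === false) {
    // page exists: run the usual Selenium checks without tripping the XHR 404 error
    $this->open("/blah/blah/blah");
} else {
    array_push($this->verificationErrors, "Page missing: " . $url);
}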
