I've recently run into a very strange issue where PHP's dns_get_record() function is returning stale results.
On the same server, if I use host or dig from the command line, I get the correct results.
I've even used dig to query each individual nameserver and all return the correct current value for the record.
The servers are running Ubuntu 16.04 and are up-to-date.
This is happening on 2 of my servers at Linode, not all of them, so it doesn't sound like a Linode network issue.
After a few hours, the issue resolved itself without a reboot.
There was never a hosts entry for the domain, and to the best of my knowledge, vanilla Ubuntu does not have any built-in DNS cache.
Can anyone explain how PHP's dns_get_record() works and why it would return different results from host or dig in a terminal?
I can't explain the caching behaviour itself, but you may want to try a library.
E.g. bluelibraries/dns
This library lets you choose between four DNS handlers:
PHP's default dns_get_record()
DIG (if your system can run this command)
UDP - raw server queries (only suitable when responses are smaller than 512 bytes)
TCP - raw server queries (personally, I would recommend this one)
You can run the same query through different handlers, compare the results, and pick the one that works for you.
E.g., retrieve records using dns_get_record, DIG, UDP, and TCP:
// PHP -> dns_get_record()
$records = DNS::getRecords('bluelibraries.com', RecordTypes::TXT, DnsHandlerTypes::DNS_GET_RECORD);
// DIG command
$records = DNS::getRecords('bluelibraries.com', RecordTypes::TXT, DnsHandlerTypes::DIG);
// UDP
$records = DNS::getRecords('bluelibraries.com', RecordTypes::TXT, DnsHandlerTypes::UDP);
// TCP
$records = DNS::getRecords('bluelibraries.com', RecordTypes::TXT);
Example response:
Array
(
    [0] => BlueLibraries\Dns\Records\Types\Txt\DomainVerification Object
        (
            [data:protected] => Array
                (
                    [host] => bluelibraries.com
                    [class] => IN
                    [ttl] => 0
                    [type] => TXT
                    [txt] => google-site-verification=test-636b-4a56-b349-test
                )
        )
)
I'm running a simple PHP script that loops through a list of URLs, does a DNS lookup, and saves the A record in a database. It runs perfectly on my local server, but when I put it on my live server the records are all internal 192... addresses, because it's using the internal DNS server.
I'm using Net_DNS2 and have told it to use Google DNS:
$r = new Net_DNS2_Resolver(array('nameservers' => array('8.8.8.8')));
but this doesn't seem to be working; I still get back 192... internal addresses.
Any ideas how to resolve this? I already moved from PHP's dns_get_record() function to Net_DNS2 so that I could set a nameserver, but it doesn't seem to be working.
I am trying to do a scan and scroll operation on an index, as shown in the example:
$client = ClientBuilder::create()->setHosts([MYESHOST])->build();
$params = [
"search_type" => "scan", // use search_type=scan
"scroll" => "30s", // how long between scroll requests. should be small!
"size" => 50, // how many results *per shard* you want back
"index" => "my_index",
"body" => [
"query" => [
"match_all" => []
]
]
];
$docs = $client->search($params); // Execute the search
$scroll_id = $docs['_scroll_id']; // The response will contain no results, just a _scroll_id
// Now we loop until the scroll "cursors" are exhausted
while (true) {
// Execute a Scroll request
$response = $client->scroll([
"scroll_id" => $scroll_id, //...using our previously obtained _scroll_id
"scroll" => "30s" // and the same timeout window
]
);
// Check to see if we got any search hits from the scroll
if (count($response['hits']['hits']) > 0) {
// If yes, Do Work Here
// Get new scroll_id
// Must always refresh your _scroll_id! It can change sometimes
$scroll_id = $response['_scroll_id'];
} else {
// No results, scroll cursor is empty. You've exported all the data
break;
}
}
The first $client->search($params) API call executes fine and I am able to get back the scroll id. But the $client->scroll() API call fails, and I am getting the exception: "Elasticsearch\Common\Exceptions\NoNodesAvailableException: No alive nodes found in your cluster"
I am using Elasticsearch 1.7.1 and PHP 5.6.11
Please help
I found the PHP driver for Elasticsearch riddled with issues, so the solution I settled on was to implement the RESTful API with curl via PHP. Everything worked much quicker and debugging was much easier.
I would guess the example is not up to date with the version you're using (the link you've provided is for 2.0, and you say you use 1.7.1). Just add this inside the loop:
try {
$response = $client->scroll([
"scroll_id" => $scroll_id, //...using our previously obtained _scroll_id
"scroll" => "30s" // and the same timeout window
]
);
} catch (Elasticsearch\Common\Exceptions\NoNodesAvailableException $e) {
break;
}
Check whether your server is running with the following command:
service elasticsearch status
I had the same problem and solved it.
I had added script.disable_dynamic: true to elasticsearch.yml, as explained in the DigitalOcean tutorial https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-elasticsearch-on-ubuntu-14-04, so the Elasticsearch server would not start.
I removed the following line from elasticsearch.yml:
script.disable_dynamic: true
Then I restarted the Elasticsearch service and set the network host to local ("127.0.0.1").
I would recommend using PHP's curl functions directly for Elasticsearch queries.
I find them easier to use than any Elasticsearch client library: you can simulate any query with curl on the command line, and you can find many examples, documentation, and discussions on the internet.
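As a sketch of that approach (the base URL and index name here are assumptions; the endpoints match the 1.x scan/scroll API used in the question), the same scroll loop over plain curl might look like this:

```php
<?php
// Sketch: talk to the Elasticsearch REST API directly with curl,
// instead of the official PHP client.
function esRequest(string $url, ?string $body = null): array
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    if ($body !== null) {
        curl_setopt($ch, CURLOPT_POSTFIELDS, $body);
    }
    $raw = curl_exec($ch);
    if ($raw === false) {
        $err = curl_error($ch);
        curl_close($ch);
        throw new RuntimeException("curl failed: $err");
    }
    curl_close($ch);
    return json_decode($raw, true);
}

// An empty match_all must encode as {} (an object), not [] (an array).
function matchAllBody(): string
{
    return json_encode(['query' => ['match_all' => new stdClass()]]);
}

// Guarded so the sketch is safe to run without a live cluster.
if (getenv('RUN_ES_DEMO')) {
    $base = 'http://localhost:9200';
    $docs = esRequest("$base/my_index/_search?search_type=scan&scroll=30s&size=50",
                      matchAllBody());
    $scrollId = $docs['_scroll_id'];

    while (true) {
        // In ES 1.x the scroll id is sent as the raw request body.
        $response = esRequest("$base/_search/scroll?scroll=30s", $scrollId);
        if (count($response['hits']['hits']) === 0) {
            break; // cursor exhausted
        }
        // ... do work on $response['hits']['hits'] ...
        $scrollId = $response['_scroll_id']; // always refresh the scroll id
    }
}
```

Debugging is simpler because any failing request can be replayed verbatim with command-line curl.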
Maybe you should try to telnet to your machine:
telnet [your_es_host] [your_es_port]
to check whether you can access it.
If not, try to open that port or disable your machine's firewall.
That error basically means it can't find your cluster, likely due to misconfiguration on either the client's side or the server's side.
I have had the same problem with scroll: it was working with certain indexes but not with others. It must have been a bug in the driver, as it went away after I updated the elasticsearch/elasticsearch package from 2.1.3 to 2.2.0.
Uncomment the network.host line in elasticsearch.yml:
network.host: 198....
and set it to:
127.0.0.1
Like this:
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
# http.port: 9200
#
I use Elasticsearch 2.2 in Magento 2 under an LXC container.
I set up the Elasticsearch server in Docker as described in the doc: https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html
But it used a different network (networks: - esnet) and could not talk to the application network. After removing the networks setting, it worked well.
Try:
Stop your elasticsearch service if it's already running
Go to your elasticsearch directory via terminal, run:
> ./bin/elasticsearch
This worked for me.
First question ever here...
I have OwnCloud running on a Raspberry Pi 2.
I can access it locally with no issues.
Ports 22, 80, and 443 have been forwarded.
I can SSH into the machine from outside local.
But, if I try to access http/https from outside of my local network, I get:
"You are accessing the server from an untrusted domain.
Please contact your administrator. If you are an administrator of this instance, configure the "trusted_domain" setting in config/config.php. An example configuration is provided in config/config.sample.php.
Depending on your configuration, as an administrator you might also be able to use the button below to trust this domain."
I have the following in my config.php:
'trusted_domains' =>
array (
0 => '192.168.10.10'
),
Commenting it out fixes the problem, but that's not the best solution.
I've spent some time looking around forums looking for answers and feel I have everything set up correctly. I'm just missing something...
FYI the router is an ASUS RT-N66W
When you're accessing it remotely, you're not using 192.168.10.10, you'd be using a public IP address or external hostname. It's this which you need to add to your trusted domains. Let's say you're accessing it using an external IP of 12.34.56.78:
'trusted_domains' =>
array (
0 => '192.168.10.10',
1 => '12.34.56.78'
),
And if you also decide to use an external hostname:
'trusted_domains' =>
array (
0 => '192.168.10.10',
1 => '12.34.56.78',
2 => 'owncloud.mydomain.com'
),
You can add as many of those as is necessary for your setup.
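Conceptually, the check boils down to comparing the host the browser used to reach the server against that list. A simplified sketch (not ownCloud's actual code) of what such a check looks like:

```php
<?php
// Simplified sketch of a trusted-domain check (not ownCloud's actual code):
// the host from the incoming request must appear in the configured list.
function isTrustedDomain(string $requestHost, array $trustedDomains): bool
{
    // Strip an optional :port suffix, as in "owncloud.mydomain.com:443".
    $host = strtolower(preg_replace('/:\d+$/', '', $requestHost));
    foreach ($trustedDomains as $trusted) {
        if ($host === strtolower($trusted)) {
            return true;
        }
    }
    return false;
}

$trusted = ['192.168.10.10', '12.34.56.78', 'owncloud.mydomain.com'];
var_dump(isTrustedDomain('owncloud.mydomain.com:443', $trusted)); // bool(true)
var_dump(isTrustedDomain('example.com', $trusted));               // bool(false)
```

This is why the internal IP alone is not enough: when you arrive via the external IP or hostname, that exact string is what gets compared against the list.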
In addition to Nick's remarks, there is also the example config file (config.sample.php), which helps you understand how to edit the real config file (config.php).
In case somebody wonders where to find the configuration files:
you will find them here
/var/www/owncloud/config/config.php
/var/www/owncloud/config/config.sample.php
I'm currently making use of Gearman with PHP using the standard bindings (docs here). All is functioning fine, but I have one small issue: I'm not able to detect when a call to GearmanClient::addServer (docs here) is "successful", by which I mean...
The issue is that adding the server attempts no socket I/O, meaning that the server may not actually exist or be operational. This means that subsequent calls (in the scenario where the server does not in fact exist) fail and result in PHP warnings.
Is there any way, or what is the best way, to confirm that the Gearman Daemon is operational on the server before or after adding it?
I would like to achieve this so that I can reliably handle scenarios in which Gearman has died, or the server is uncontactable.
Many thanks.
We first tried this by manually calling fsockopen on the host and port passed to addServer, but it turns out that this can leave a lot of hanging connections as the Gearman server expects something to happen over that socket.
We use a monitor script to check the status of the daemon and its workers — something similar to this perl script on Google Groups. We modified the script to restart the daemon if it was not running.
If this does not appeal, have a look at the Gearman Protocol (specifically the “Administrative Protocol” section, referenced in the above thread) and use the status command. This will give you information on the status of the jobs and workers, but also means you can perform a socket connection to the daemon and not leave it hanging.
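As a sketch of that status-command approach: the administrative protocol is plain line-based text, with one tab-separated line per registered function and a terminating line containing a single dot. Host and port below are assumptions (4730 is gearmand's default); adjust for your setup:

```php
<?php
// Sketch: probe a Gearman daemon via its text-based administrative protocol.
// The "status" command returns one line per function:
//   FUNCTION \t QUEUED \t RUNNING \t AVAILABLE_WORKERS
// terminated by a line containing a single ".".
function parseGearmanStatus(string $raw): array
{
    $result = [];
    foreach (explode("\n", $raw) as $line) {
        $line = rtrim($line, "\r");
        if ($line === '.' || $line === '') {
            break;
        }
        [$function, $queued, $running, $workers] = explode("\t", $line);
        $result[$function] = [
            'queued'  => (int) $queued,
            'running' => (int) $running,
            'workers' => (int) $workers,
        ];
    }
    return $result;
}

function gearmanStatus(string $host = '127.0.0.1', int $port = 4730): array
{
    // A failed (or timed-out) connect tells you the daemon is unreachable.
    $fp = @fsockopen($host, $port, $errno, $errstr, 2);
    if ($fp === false) {
        throw new RuntimeException("gearmand unreachable: $errstr");
    }
    fwrite($fp, "status\n");
    $raw = '';
    while (($line = fgets($fp)) !== false) {
        $raw .= $line;
        if (rtrim($line) === '.') {
            break; // end of the status listing
        }
    }
    fclose($fp);
    return parseGearmanStatus($raw);
}
```

Because the connection is opened, used for a complete command, and closed, it avoids the hanging-connection problem that a bare fsockopen probe causes.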
You can use this library: https://github.com/necromant2005/gearman-stats
It has no external dependencies.
$adapter = new \TweeGearmanStat\Queue\Gearman(array(
'h1' => array('host' => '10.0.0.1', 'port' => 4730, 'timeout' => 1),
'h2' => array('host' => '10.0.0.2', 'port' => 4730, 'timeout' => 1),
));
$status = $adapter->status();
var_dump($status);
I'm working on creating a new type of email protocol, and in order to do that I had to set up an SRV DNS record for my domain.
In promoting this protocol, I'll need to be able to discover if a given host uses my system (and if not fall back to an older protocol).
So, is there a way to pull a DNS record (such as SRV) using PHP without using a PECL extension or shelling out to the command line? (I already know I can use ob_start() and system("host -t SRV hostname"), but I'm looking for a better way, if one exists.)
Use dns_get_record
array dns_get_record ( string $hostname [, int $type = DNS_ANY [, array &$authns [, array &$addtl ]]] )
Fetch DNS Resource Records associated with the given hostname.
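For example, to look up SRV records (the domain below is a real public XMPP SRV record, used purely as an illustration; substitute your own _service._proto.domain name):

```php
<?php
// Query SRV records through the system resolver - no PECL extension,
// no shelling out. dns_get_record() returns an array of records,
// or false on failure.
$records = @dns_get_record('_xmpp-client._tcp.jabber.org', DNS_SRV);

if ($records !== false) {
    foreach ($records as $r) {
        // Each SRV entry carries priority, weight, port, and target host.
        printf("%s:%d (pri %d, weight %d)\n",
               $r['target'], $r['port'], $r['pri'], $r['weight']);
    }
}
```

An empty array (no SRV record published) is how you would detect that a host does not support the new protocol, triggering the fallback to the older one.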
Have you considered PEAR::Net_DNS?
http://pear.php.net/package/Net_DNS
As far as I can tell it uses socket connections (tcp/udp) and decodes the resolver data itself. The available methods look pretty extensive.