Updating remote site with drush and ssh - php

I'm very new to drush. We have a git repo of a Drupal site that I would like to push to the remote server using drush. I could easily scp the Drupal files or set up a cron job on the remote that runs git pull, but I would still like to learn how to push code and sync a remote Drupal site with my local one.
Currently, I have Drupal running locally and I use git to update the repo. SSH is already configured and I can ssh to the remote Drupal server using keys. I have also created a .drush/aliases.drushrc.php file and tested it by running drush @dev status, which worked well:
<?php
$aliases['dev'] = array(
  'root' => '/var/www/html',
  'uri' => 'dev.example.com',
  'remote-host' => '192.168.1.50',
);
?>
Now I would like my local Drupal site to be synchronized with our server at 192.168.1.50. The local Drupal files are in /home/ubuntu/drupal_site.
I have a few questions:
What is the drush command (and which parameters) to update the remote Drupal server?
What will the command be if the remote server doesn't have the Drupal files yet?

Back up before synchronizing, with drush ard or drush @dev ard or with whichever alias is appropriate. You can set the backup path in the alias settings.
I assume you named your remote server dev, so I keep that name in the following and use the alias local for the local Drupal site.
Add an alias for your local Drupal site (a sketch of such an entry is shown below). Then you can use the following command to synchronize the files:
drush rsync @local @dev
Here @local is the source and @dev the target. More details on how to use the rsync command can be displayed with:
drush help rsync
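For reference, the @local alias entry in .drush/aliases.drushrc.php could look something like this (a sketch that reuses the local path from the question; a local site needs no remote-host):
$aliases['local'] = array(
  'root' => '/home/ubuntu/drupal_site',
  'uri' => 'http://localhost',
);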
You also need to synchronize the database to get the remote site running. For this, add the database credentials to the alias data for @local and @dev. It will look something like this:
'databases' => array(
  'default' => array(
    'default' => array(
      'driver' => 'mysql',
      'username' => 'USERNAME',
      'password' => 'PASSWORD',
      'port' => '',
      'host' => 'localhost',
      'database' => 'DATABASE',
    ),
  ),
),
Replace the placeholders with your data. Then the databases can be synchronized with:
drush sql-sync @local @dev
Here @local is the source and @dev the target.
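For clarity, the databases key nests directly inside the alias entry, so the @dev entry from the question would end up looking roughly like this (credentials are placeholders):
$aliases['dev'] = array(
  'root' => '/var/www/html',
  'uri' => 'dev.example.com',
  'remote-host' => '192.168.1.50',
  'databases' => array(
    'default' => array(
      'default' => array(
        'driver' => 'mysql',
        'username' => 'USERNAME',
        'password' => 'PASSWORD',
        'port' => '',
        'host' => 'localhost',
        'database' => 'DATABASE',
      ),
    ),
  ),
);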
Initially the synchronization will happen in one direction. After that, it is good practice to synchronize files from the development or test site to the production site, and the database the other way around, from the production site to the development or test site.

Drush and Git workflows differ somewhat, as Drush can pull packages separately; you could still use Git to push code to the server. Be sure to check the /files directory, which is usually listed in .gitignore; a possible approach is to mirror the files directory directly from the live site.
A common approach to updating and checking two or more sites at the same time (local and remote) is to use Drush aliases for your sites in a script on your machine.
Articles like this one are a good starting point.

Related

"no alive nodes found in cluster" while indexing docs

I have a "legacy" php application that we just migrated to run on Google Cloud (Kubernetes Engine). Along with it I also have a ElasticSearch installation (Elastic Cloud on Kubernetes) running. After a few incidents with Kubernetes killing my Elastic Search when we're trying to deploy other services we have come to the conclusion that we should probably not run ES on Kubernetes, at least if are to manage it ourselves. This due to a apparent lack of knowledge for doing it in a robust way.
So our idea is now to move to managed Elastic Cloud instead which was really simple to deploy and start using. However... now that I try to load ES with the data needed for our php application if fails mid-process with the error message no alive nodes found in cluster. Sometimes it happens after less than 1000 "documents" and other times I manage to get 5000+ of them indexed before failure.
This is how I initialize the es client:
$clientBuilder = ClientBuilder::create();
$clientBuilder->setElasticCloudId(ELASTIC_CLOUD_ID);
$clientBuilder->setBasicAuthentication('elastic',ELASTICSEARCH_PW);
$clientBuilder->setRetries(10);
$this->esClient = $clientBuilder->build();
ELASTIC_CLOUD_ID & ELASTICSEARCH_PW are set via environment vars.
The request looks something like:
$params = [
  'index' => $index,
  'type' => '_doc',
  'body' => $body,
  'client' => [
    'timeout' => 15,
    'connect_timeout' => 30,
    'curl' => [CURLOPT_HTTPHEADER => ['Content-type: application/json']],
  ],
];
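The params are then presumably passed to the client for each document, with something like the following (an assumption; the question does not show the actual call):
$response = $this->esClient->index($params);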
The body and the target index depend on how far we get with the "ingestion", but it is generally pretty standard stuff.
All of this works without any real problems when running against our own installation of Elasticsearch in our own GKE cluster.
What I've tried so far is adding the retries and timeouts, but none of that seems to make much of a difference.
We're running:
php 7.4
Elasticsearch 7.11
Elasticsearch PHP client 7.12 (via Composer)
If you use WAMP64, this error can occur; you would have to use XAMPP instead.
Try the following command in the command prompt. If it succeeds, the cluster is reachable and the problem is in your configuration:
curl -u elastic:<password> https://<endpoint>:<port>
(Example for Elastic Cloud)
curl -u elastic:<password> example.es.us-central1.gcp.cloud.es.io:9234
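The same connectivity check can be done from PHP with the client's info() call (a sketch, assuming the client is built with ClientBuilder as shown in the question):
// Returns cluster metadata if the endpoint and credentials are reachable.
$response = $esClient->info();
print_r($response['version']['number']);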

OwnCloud 8.0.1 on Raspberry Pi - Untrusted Domain

First question ever here...
I have OwnCloud running on a Raspberry Pi 2.
I can access it locally with no issues.
Ports 22, 80, and 443 have been forwarded.
I can SSH into the machine from outside local.
But, if I try to access http/https from outside of my local network, I get:
"You are accessing the server from an untrusted domain.
Please contact your administrator. If you are an administrator of this instance, configure the "trusted_domain" setting in config/config.php. An example configuration is provided in config/config.sample.php.
Depending on your configuration, as an administrator you might also be able to use the button below to trust this domain."
I have the following in my config.php:
'trusted_domains' =>
array (
0 => '192.168.10.10'
),
Commenting it out fixes the problem, but that's not the best solution.
I've spent some time looking around forums looking for answers and feel I have everything set up correctly. I'm just missing something...
FYI the router is an ASUS RT-N66W
When you're accessing it remotely, you're not using 192.168.10.10; you'd be using a public IP address or an external hostname, and that is what you need to add to your trusted domains. Let's say you're accessing it using an external IP of 12.34.56.78:
'trusted_domains' =>
array (
0 => '192.168.10.10',
1 => '12.34.56.78'
),
And if you also decide to use an external hostname:
'trusted_domains' =>
array (
0 => '192.168.10.10',
1 => '12.34.56.78',
2 => 'owncloud.mydomain.com'
),
You can add as many of those as is necessary for your setup.
In addition to Nick's remarks, there is also the example config file (config.sample.php), which helps you understand how to edit the real config file (config.php).
In case somebody wonders where to find the configuration files, they are here:
/var/www/owncloud/config/config.php
/var/www/owncloud/config/config.sample.php

How to add ignored files into heroku?

I have a PHP app on Heroku (the free tier). My app uses the CodeIgniter framework and I need to ignore some files with a .gitignore; for example, I have a file database.php with the database connection information. That information is different in each environment. My question is: how can I upload these kinds of files to Heroku?
This is the wrong way to solve your problem. Any config information like:
api keys
database host / password
any other environment-specific configuration
shouldn't be versioned in git.
You should use environment variables. I assume you want to use a different database host/username/password depending on whether you're in the development environment (your working directory), in staging, or in production.
1) Check how to set environment variables (it's different between Windows and Linux/Mac OS X). There are helpers for this, and it's good practice to use them (even more so when using Heroku)
2) Since you're using PHP, you'll be able to retrieve those variables with getenv. You should have something like:
$dbUserName = getenv('DB_USER_NAME');
$dbHost = getenv('DB_HOST');
$dbPassword = getenv('DB_PASSWORD');
// use the variables above to make a db connection
3) Time to set the env variables on your Heroku app (see more)
$ heroku config:set DB_USER_NAME=foo
$ heroku config:set DB_HOST=bar
$ heroku config:set DB_PASSWORD=fee
Your code will automatically use the correct config when you push to Heroku (you won't have to change anything in your code base between your dev and production environments, and each team member can have a different config on their local dev machine).
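Applied to the CodeIgniter database.php from the question, that could look roughly like this (a sketch using CodeIgniter's classic $db array; DB_NAME is an extra variable you would set the same way as the others):
// application/config/database.php (excerpt)
$db['default']['hostname'] = getenv('DB_HOST');
$db['default']['username'] = getenv('DB_USER_NAME');
$db['default']['password'] = getenv('DB_PASSWORD');
$db['default']['database'] = getenv('DB_NAME'); // assumed extra env variable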

Deployment using ssh with key without providing passphrase for private key (ssh-agent)

Wherein lies the difference between Capistrano and Rocketeer when it comes to the passphrase for a private key?
I already have both Capistrano and Rocketeer deployment strategies set up properly and working. Capistrano lets ssh-agent provide the passphrase - Rocketeer, as it seems, does not. The question is not about how but why the passphrase is needed.
Background:
I want to use Rocketeer for deployment of a Laravel application instead of Capistrano. It seems as if it delegates the SSH connection to Laravel.
After setting only the remote server's name in the configuration and running a check, after some prompts for credentials Rocketeer stores the needed passphrase and the path to my desired private key in a non-version-controlled file.
I do not want to have credentials for establishing an SSH connection stored on my disk, especially not the passphrase to any of my private keys.
So, why is anything more than the server's name required?
I see that Laravel has those fields prepared in its remotes config; I just could not find out which component is ultimately responsible and why it does not leave the SSH connection completely to the system itself.
Is it Rocketeer, Laravel, Symfony, phpseclib or even PHP itself underneath that needs that much information to establish an SSH connection?
It's Laravel's missing implementation of phpseclib's ssh-agent support that requires that much information to establish an SSH connection.
That's why Rocketeer does not allow relying on the ssh-agent alongside username/password and private key/passphrase authentication, as Capistrano does.
A proposal was stated and merged to include phpseclib's undocumented implementation for using the ssh-agent instead of an explicit key.
Rocketeer would profit from this as it relies on said implementation of phpseclib in Laravel.
(Thanks to @hannesvdvreken, @ThomasPayer and @passioncoder for pointing in the right directions)
There are some things you might want to know.
You can use the default app/config/remote.php, or you can use the Rocketeer config.php that gets published under app/packages/anahkiasen/rocketeer.
I tend to use the Laravel file. I made a copy of that file in the app/config/development folder, which is ignored by git via .gitignore. I only write the passphrase of my private key down in that file. It will get merged with the array in app/config/remote.php.
Here's my app/config/development/remote.php file:
<?php

return array(
  'connections' => array(
    'staging' => array(
      'keyphrase' => 'your-secret-here',
    ),
    'production' => array(
      'keyphrase' => 'your-secret-here',
    ),
  ),
);
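For context, the shared app/config/remote.php that this gets merged into looks roughly like the following in Laravel 4 (host, username and key path are placeholders; only the keyphrase lives in the git-ignored per-environment file):
<?php

return array(
  'default' => 'production',
  'connections' => array(
    'production' => array(
      'host'      => 'example.com',           // placeholder
      'username'  => 'deploy',                // placeholder
      'password'  => '',
      'key'       => '/home/you/.ssh/id_rsa', // placeholder
      'keyphrase' => '',                      // overridden locally by app/config/development/remote.php
      'root'      => '/var/www',
    ),
  ),
);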
Hope this helps.

Doctrine 1.2, How to build database on web host?

I have my website ready on localhost and use Doctrine 1.2 for the database. I want to upload the website to a web host to try it, so I changed the parameters (database, user, password, host) of the DSN in the config.php file, but I don't know how to build the database, since I used to run this command in the CMD:
php doctrine build-all-reload
and I can't use the exec() command or its alternatives on a shared host.
I use PHP in my website.
So how can I build my database?
If you have a YAML schema file, you can create a PHP script and run the following to generate your models from it:
$options = array(
  'packagesPrefix' => 'Plugin',
  'baseClassName' => 'MyDoctrineRecord',
  'suffix' => '.php'
);
Doctrine_Core::generateModelsFromYaml('/path/to/yaml', '/path/to/model', $options);
In general, Doctrine_Core has a few methods to create and drop the database and to insert data once you have set up the connection. It is pretty straightforward.
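For example, a small build script could mirror what build-all-reload does, roughly like this (a sketch based on Doctrine 1.2's static helpers; the bootstrap file and all paths are placeholders you would adapt):
require_once 'bootstrap.php'; // assumed: loads Doctrine and configures the connection

Doctrine_Core::dropDatabases();    // drop the configured database(s)
Doctrine_Core::createDatabases();  // recreate them
Doctrine_Core::generateModelsFromYaml('/path/to/yaml', '/path/to/models');
Doctrine_Core::createTablesFromModels('/path/to/models'); // build the tables
Doctrine_Core::loadData('/path/to/fixtures');             // load fixture data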
Dump your database on localhost and load it on your web host using whatever means you have available (phpMyAdmin, for example).
Migrations will be harder to handle that way, though.
