Laravel 8 laravelcollective/remote: can I create a dynamic connection? - php

I'm migrating our in-house backup app to Laravel 8, and so far so good. I'm interested in using the laravelcollective/remote facade (SSH) to SSH onto the remote server and run some commands, which it looks like this package would be very good at (rather than the php exec() calls the current backup app uses).
My question, however, is: can I build an array/object from the database and use those details as a connection, without having to manually maintain the config/remote.php file? Keeping that file in sync with server changes would be a nightmare, as we frequently update users, and sites are added and removed on a regular basis. Any ideas? As mentioned, we store the SSH credentials in the database, which is populated via a connection form within the app.
I've built a simple test function in a controller and stepped into it with my debugger. I expected to see the array/object created from the config/remote file, and was planning to just add new items to it, but I couldn't find any array/object containing the default empty production config set as the default in config/remote.php.
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use SSH;

class SecureServerController extends Controller
{
    public function test()
    {
        SSH::into()->run([
            'cd /var/www',
            'git pull origin master',
        ], function ($line) {
            echo $line.PHP_EOL;
        });
    }
}
The following is the route used:
use App\Http\Controllers\SecureServerController;
Route::get('/test', [SecureServerController::class, 'test']);
thanks
*** EDIT ***
So I had a look at the code for the SSH facade and found I could create a config array and pass it via the connect function:
$config = [
    'host'     => '123.123.123.123',
    'username' => 'the-user',
    'password' => 'a-password',
];

SSH::connect($config)->run([
    'cd /var/www',
    'git pull origin master',
], function ($line) {
    echo $line.PHP_EOL;
});
However, I see no way to use any port except the default 22; almost all our servers use a non-default port as an additional layer of obfuscation.
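To avoid maintaining config/remote.php by hand, one option is to map each database row of stored credentials onto the config array at runtime. This is only a sketch: the helper and column names are hypothetical, and it assumes (unverified) that the package accepts a host:port string the way illuminate/remote's RemoteManager parsed it, which would also solve the non-default port problem.

```php
<?php

// Hypothetical helper: turn a row of stored SSH credentials into the
// config array that SSH::connect() accepts. Column names are made up;
// adapt them to your own servers table.
function buildRemoteConfig(array $row): array
{
    $host = $row['host'];

    // Assumption: the remote manager splits "host:port", so a
    // non-default port can be passed in via the host string.
    if (!empty($row['port']) && (int) $row['port'] !== 22) {
        $host .= ':' . $row['port'];
    }

    return [
        'host'     => $host,
        'username' => $row['username'],
        'password' => $row['password'],
    ];
}

// Example row as it might come back from the database.
$row = [
    'host'     => '123.123.123.123',
    'port'     => 2222,
    'username' => 'the-user',
    'password' => 'a-password',
];

$config = buildRemoteConfig($row);
// $config['host'] is now '123.123.123.123:2222'
```

If the host:port assumption does not hold in your version of the package, a fallback is to call phpseclib directly, since phpseclib's SSH2 constructor takes the port as its second argument.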

Related

Laravel Dusk: how to use in-memory DB for testing

I've been trying to use an in-memory database while testing with Laravel Dusk. We have a file, .env.dusk.local, with the following values:
DB_CONNECTION=sqlite
DB_DATABASE=:memory:
Here is a snippet of a browser testing file.
class ViewOrderTest extends DuskTestCase
{
    use DatabaseMigrations;

    /** @test */
    public function user_can_view_their_order()
    {
        $order = factory(Order::class)->create();

        $this->browse(function (Browser $browser) use ($order) {
            $browser->visit('/orders/' . $order->id);
            $browser->assertSee('Order ABC'); // Order name
        });
    }
}
When php artisan dusk is executed, Dusk starts browser testing.
However, Dusk seems to be accessing my local DB: an order name that only exists in my local DB appears in the testing browser, while 'Order ABC' is what should be displayed.
According to the doc, Laravel Dusk allows us to set the environmental variables.
To force Dusk to use its own environment file when running tests, create a .env.dusk.{environment} file in the root of your project. For example, if you will be initiating the dusk command from your local environment, you should create a .env.dusk.local file.
I don't feel that Dusk is accessing the separate DB.
Any advice will be appreciated.
You can't use a :memory: database while running Laravel Dusk browser tests. Your development server and the Dusk tests run as separate processes, and the Dusk test cannot access the memory of the process running the development server.
The best solution is to create an SQLite file database for testing:
'sqlite_testing' => [
    'driver'   => 'sqlite',
    'database' => database_path('sqlite.testing.database'),
    'prefix'   => '',
],
Create the sqlite.testing.database file inside the database folder.
Make sure to run the development server before running the tests:
php artisan serve --env dusk.local
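The two setup steps above amount to creating the empty file and then serving the app under the Dusk environment; a minimal shell version (paths as given in the answer):

```shell
# create the empty SQLite file the test connection points at
mkdir -p database
touch database/sqlite.testing.database

# then start the development server under the Dusk environment
# before running the tests (commented out here; run it manually):
# php artisan serve --env dusk.local
```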
You need a connection in config/database.php:
'sqlite_testing' => [
    'driver'   => 'sqlite',
    'database' => ':memory:',
    'prefix'   => '',
],
Then in your phpunit.xml file use:
<env name="DB_DEFAULT" value="sqlite_testing" />
or in your tests use:
putenv('DB_DEFAULT=sqlite_testing');
Don't forget to use the RefreshDatabase trait to reset the database before each test.

Force Behat to use an in-memory DB with Laravel 4.2 during cURL requests

I'm trying to get some behavior tests integrated into my API test suite. I'm leveraging Laravel 4.2 and already have a nice suite of unit tests. The problem I'm running into is persistent data after these test suites run, as well as correctly populating seed data.
I've tried to wire SQLite into my bootstrap process following a few examples I've seen in various places online, but all this really does is set up my DB when I call behat from the CLI; during the application run (most specifically, any cURL requests out to the API) Laravel still ties my DB to my local configuration, which uses a MySQL db.
Here's an example snippet of a test suite. The 'Adding a new track' scenario is one where I'd like my API to use SQLite when the request and payload are sent to it.
Feature: Tracks
  Scenario: Finding all the tracks
    When I request "GET /api/v1/tracks"
    Then I get a "200" response

  Scenario: Finding an invalid track
    When I request "GET /api/v1/tracks/1"
    Then I get a "404" response

  Scenario: Adding a new track
    Given I have the payload:
      """
      {"title": "behat add", "description": "add description", "short_description": "add short description"}
      """
    When I request "POST /api/v1/tracks"
    Then I get a "201" response
Here is a snippet of my bootstrap/start.php file. What I am trying to accomplish is for my Behat scenario (i.e. 'Adding a new track') request to hit the testing config, so I can manage it with an SQLite db.
$env = $app->detectEnvironment(array(
    'local'          => array('*.local'),
    'production'     => array('api'),
    'staging'        => array('api-staging'),
    'virtualmachine' => array('api-vm'),
    'testing'        => array('*.testing'),
));
Laravel does not know about Behat. Create a special environment for it, with its own database.
Here is what I have in my start.php:
if (getenv('APP_ENV') && getenv('APP_ENV') != '')
{
    $env = $app->detectEnvironment(function()
    {
        return getenv('APP_ENV');
    });
}
else
{
    $env = $app->detectEnvironment(array(
        'local' => array('*.local', 'homestead'),
        /* ... */
    ));
}
APP_ENV is set in your Apache/Nginx VirtualHost config. For Apache:
SetEnv APP_ENV acceptance
Create a special local test URL for Behat to use and put that in the VirtualHost entry.
I recommend using an SQLite file-based database, and deleting the file before each Feature or Scenario. I found it to be much quicker than MySQL. I wanted to use SQLite's in-memory mode, but I could not find a way to persist data between requests with the in-memory database.
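The "delete the file before each Feature or Scenario" step can be wrapped in a small helper and called from a Behat hook. This is a sketch under assumptions: the function name is hypothetical, and re-running migrations/seeds against the fresh file is left out, since that part depends on your bootstrap.

```php
<?php

// Hypothetical helper: reset the file-based SQLite database to an
// empty file. In a Behat FeatureContext this would be called from a
// @BeforeScenario (or @BeforeFeature) hook, after which migrations
// and seeds would run against the fresh file.
function resetSqliteFile(string $path): void
{
    if (file_exists($path)) {
        unlink($path); // drop all data persisted by the previous scenario
    }
    touch($path);      // recreate an empty database file
    clearstatcache();  // so file_exists()/filesize() see the new state
}

$db = sys_get_temp_dir() . '/behat.testing.sqlite';
file_put_contents($db, 'stale data from a previous scenario');
resetSqliteFile($db);
// $db now exists again, but is empty
```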

Deployment using ssh with key without providing passphrase for private key (ssh-agent)

Wherein lies the difference between Capistrano and Rocketeer when it comes to the passphrase for a private key?
I already have both Capistrano and Rocketeer deployment strategies set up properly and working. Capistrano lets ssh-agent provide the passphrase; Rocketeer, as it seems, does not. The question is not about how, but why the passphrase is needed.
Background:
I want to use Rocketeer for deployment of a Laravel application instead of Capistrano. It seems as if it delegates the SSH connection to Laravel.
After setting only the remote server's name in the configuration and running a check, and after some prompts for credentials, Rocketeer stores the needed passphrase and the path to my desired private key in a non-version-controlled file.
I do not want credentials for establishing an SSH connection stored on my disk, especially not the passphrase to any of my private keys.
So, why is anything more than the server's name required?
I see that Laravel has those fields prepared in its remotes config; I just could not find out which component is eventually responsible, and why it does not leave the SSH connection completely to the system itself.
Is it Rocketeer, Laravel, Symfony, phpseclib, or even PHP itself underneath that needs that much information to establish an SSH connection?
It's Laravel's missing implementation of phpseclib's ssh-agent support that requires that much information to establish an SSH connection.
That's why Rocketeer does not allow relying on the ssh-agent alongside username/password and private key/passphrase authentication, as Capistrano does.
A proposal was stated and merged to include phpseclib's undocumented implementation for using the ssh-agent instead of an explicit key.
Rocketeer would profit from this as it relies on said implementation of phpseclib in Laravel.
(Thanks to @hannesvdvreken, @ThomasPayer and @passioncoder for pointing in the right directions)
There are some things you might want to know.
You can use the default app/config/remote.php, or you can use the Rocketeer config.php that gets published under app/packages/anahkiasen/rocketeer.
I tend to use the Laravel file. I made a copy of that file in the app/config/development folder, which is ignored by git via .gitignore. I only write the passphrase of my private key in that file. It will get merged with the array in app/config/remote.php.
Here's my app/config/development/remote.php file:
return array(
    'connections' => array(
        'staging' => array(
            'keyphrase' => 'your-secret-here',
        ),
        'production' => array(
            'keyphrase' => 'your-secret-here',
        ),
    ),
);
Hope this helps.

Doctrine 1.2, How to build database on web host?

I have my website ready on localhost and use Doctrine 1.2 for the database. I want to upload the website to a web host to try it, so I changed the parameters (database, user, password, host) of the DSN in the config.php file, but I don't know how to build the database, since I used to run this command in the CMD:
php doctrine build-all-reload
and I can't use the exec() command or its alternatives on a shared host.
I use PHP on my website.
So how can I build my database?
If you have a yml file, you can create a PHP script and run the following to create the db from your yml:
$options = array(
    'packagesPrefix' => 'Plugin',
    'baseClassName'  => 'MyDoctrineRecord',
    'suffix'         => '.php'
);

Doctrine_Core::generateModelsFromYaml('/path/to/yaml', '/path/to/model', $options);
In general, Doctrine_Core has a few methods to create, drop, and insert into the db after you have set up the connection. It is pretty straightforward.
Dump your database on localhost and load it on your web host using whatever means you have available (phpMyAdmin, for example).
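For a MySQL-backed site, the dump-and-load could look like the following from the command line; the database names and users are placeholders, and shared hosts without shell access would use phpMyAdmin's export/import tabs instead.

```shell
# on your local machine: dump schema and data
mysqldump -u local_user -p local_db > dump.sql

# on the web host (if you have shell access): load the dump
mysql -u remote_user -p remote_db < dump.sql
```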
Migrations will be worse.

Codeigniter cron job from CLI throws memcached errors

I'm trying to set up the cron jobs for my CodeIgniter application; however, when I run the cron, it throws memcached errors:
PHP Fatal error: Call to a member function get() on a non-object in /var/www/domain.com/www/dev/system/libraries/Cache/drivers/Cache_memcached.php on line 50
Fatal error: Call to a member function get() on a non-object in /var/www/domain.com/www/dev/system/libraries/Cache/drivers/Cache_memcached.php on line 50
I have no idea why this keeps being thrown. I can't find any errors in my cron job file, and I don't know how to solve this, because I don't know where this is being called from; I looked into my autoloaded libraries and helpers, and none of them seem to be wrong.
I can also confirm that memcached is installed; if I visit my site, memcached indeed works.
I tried suppressing the get() call in Cache_memcached.php with an @, but this didn't help, because no output is shown at all (and there is supposed to be output).
The command I run for the cron (user: www-data) is:
/usr/bin/php -q /var/www/domain.com/www/dev/index.php cron run cron
I'm running Ubuntu 11.10 x86_64.
This is my cron file:
<?php if ( ! defined('BASEPATH')) exit('No direct script access allowed');

class Cron extends CI_Controller {

    var $current_cron_tasks = array('cron');

    public function run($mode)
    {
        if ($this->input->is_cli_request())
        {
            if (isset($mode) || ! empty($mode))
            {
                if (in_array($mode, $this->current_cron_tasks))
                {
                    $this->benchmark->mark('cron_start');

                    if ($mode == 'cron')
                    {
                        if ($this->cache->memcached->get('currency_cache'))
                        {
                            if ($this->cache->memcached->delete('currency_cache'))
                            {
                                $this->load->library('convert');
                                $this->convert->get_cache(true);
                            }
                        }

                        echo $mode . ' executed successfully';
                    }

                    $this->benchmark->mark('cron_end');
                    $elapsed_time = $this->benchmark->elapsed_time('cron_start', 'cron_end');
                    echo $elapsed_time;
                }
            }
        }
    }
}
The first thing to try would be the following, to determine whether memcached is supported:
var_dump($this->cache->memcached->is_supported());
The second thing to ensure is that you've got a memcached.php file in application/config/
It should contain a multidimensional array of memcached hosts with the following keys:
host
port
weight
The following example defines two servers. The array keys server_1 and server_2 are irrelevant; they can be named anything.
$config = array(
    'server_1' => array(
        'host'   => '127.0.0.1',
        'port'   => 11211,
        'weight' => 1
    ),
    'server_2' => array(
        'host'   => '127.0.0.2',
        'port'   => 11211,
        'weight' => 1
    )
);
The next thing I'd try is to check whether the controller can be run in the web browser, as opposed to the CLI, or whether you get the same error there.
Also, explicitly loading the memcached driver might be worth trying. The following will load the memcached driver and, failing that, fall back to the file cache driver:
$this->load->driver('cache', array('adapter' => 'memcached', 'backup' => 'file'));
Using this method allows you to call $this->cache->get(), which takes the fallback into account too.
Another thing to check is that you're not using separate php.ini files for web and CLI.
On Ubuntu it's located in
/etc/php5/cli/php.ini
And you should ensure that the following line is present, and not commented out:
extension=memcache.so
Alternatively, you can create a file /etc/php5/conf.d/memcache.ini with the same contents.
Don't forget to restart services after changing configuration files.
You can check that memcache is indeed set up correctly for the CLI by executing the following:
php -i | grep memcache
The problem is that $this->cache->memcached is NULL (or otherwise uninitialized), meaning that the cache hasn't been initialized.
An easy fix would be to simply create the memcache object yourself. The proper fix, however, would be to look through the source and trace how the memcache object normally gets instantiated: look for new Memcache and set a debug_print_backtrace() there, trace the debug stack back, and compare it with what your cron does; find where it goes wrong, then correct it. This is basic debugging btw, sorry.
Also, make sure you do load the drivers. If your cron uses a different bootstrap function than your normal index (never used CI, is that even possible?), then make sure that the memcache init is placed in the right location.
-edit-
$this->cache->memcached probably isn't actually NULL, but the actual connection to the Memcache server definitely wasn't made before you started calling get().
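The fatal error described above can be reproduced and guarded against in isolation. A plain-PHP sketch (no CodeIgniter; names are stand-ins) of the defensive check that turns the uninitialized-driver case into an ordinary cache miss:

```php
<?php

// Stand-in for $this->cache->memcached when the driver never got
// initialized under the CLI: the property simply isn't an object.
$memcached = null;

// Calling $memcached->get(...) directly here would be exactly the
// "Call to a member function get() on a non-object" fatal error.
// Guarded lookup: treat the missing driver as a cache miss instead
// (CI's dummy cache driver also returns FALSE on every get()).
$value = is_object($memcached)
    ? $memcached->get('currency_cache')
    : false;

// $value is FALSE, so the cron can skip the cache-refresh branch
// cleanly instead of dying before producing any output.
```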
