Doctrine 1.2: how to build the database on a web host? - php

I have my website working on localhost and use Doctrine 1.2 for the database. I want to upload the website to a web host to try it, so I changed the DSN parameters (database, user, password, host) in the config.php file. But I don't know how to build the database, since locally I used to run this command in the CMD:
php doctrine build-all-reload
and I can't use the exec() command or its alternatives on a shared host.
My website is written in PHP.
So how can I build my database?

If you have a YAML schema file, you can create a PHP script and run the following to generate your models from it:
$options = array(
    'packagesPrefix' => 'Plugin',
    'baseClassName'  => 'MyDoctrineRecord',
    'suffix'         => '.php'
);
Doctrine_Core::generateModelsFromYaml('/path/to/yaml', '/path/to/model', $options);
In general, Doctrine_Core has several methods to create, drop and populate the database once you have set up the connection. It is pretty straightforward.
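For the original question, build-all-reload can be reproduced from a plain PHP script that you run once through the browser, without exec(). Below is a rough sketch using Doctrine 1.2's static helpers; the library path, DSN and model/fixture paths are placeholders for your own:

```php
<?php
// Bootstrap Doctrine 1.2 (adjust the path to your copy of the library).
require_once 'doctrine/lib/Doctrine.php';
spl_autoload_register(array('Doctrine_Core', 'autoload'));
Doctrine_Manager::connection('mysql://user:password@host/dbname');

// Roughly what "php doctrine build-all-reload" does:
Doctrine_Core::dropDatabases();                          // drop the database
Doctrine_Core::createDatabases();                        // recreate it
Doctrine_Core::generateModelsFromYaml('schema.yml', 'models/');
Doctrine_Core::createTablesFromModels('models/');        // create the tables
Doctrine_Core::loadData('data/fixtures/');               // reload fixture data
```

Note that on many shared hosts the database user is not allowed to drop or create databases; in that case skip the first two calls and let createTablesFromModels build the tables inside the existing database.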

Dump your database on localhost and load it on your web host using whatever means you have available (phpMyAdmin, for example).
Migrations will be harder to deal with.

WordPress deployment when database is full of string length parameters

Some plugins and themes store PHP-serialized data (not JSON) in the database, where every string carries an "s:" length parameter. An example row may be helpful here:
(9238, 14133, '_mail', 'a:9:{s:6:"active";b:1;s:7:"subject";s:25:"Website title";s:6:"sender";s:49:"[your-name] <wordpress@domain.com>";s:9:"recipient";s:46:"mail1@domain.com";s:4:"body";s:210:"Nadawca: [your-name] <[your-email]>\n\nTreść wiadomości:\n[your-message]\n\n-- \nTa wiadomość została wysłana przez formularz kontaktowy na stronie";s:18:"additional_headers";s:22:"Reply-To: [your-email]";s:11:"attachments";s:0:"";s:8:"use_html";b:0;s:13:"exclude_blank";b:0;}'),
As you can see, every string in this row is prefixed with something like "s:6", which means the following string is 6 characters long. I removed the real domains etc. from the example, so not all the lengths are correct now.
Why am I writing about this? I usually prepare a project on a dev copy on my server and, when it's done, copy it to the target (prod) server. I do this deployment in the standard way:
Copy all files from dev to prod (using ssh usually)
Export dev database
Manually change dev links to prod links in database (e.g. website.dev.domain.com to website-prod-domain.com)
Import database with changes to prod
Change database credentials in wp-config
And everything goes smoothly unless the database contains some "s:" parameters. Then I usually have to go to the WP admin panel and manually redo all the plugin configuration and personalization options that stop working after deployment.
Is there any good solution or script to make my deployment easy even when I run into "s:" on the way?
My problem was with serialized data. The best solution I found is a plugin dedicated to migrating the database: https://pl.wordpress.org/plugins/wp-migrate-db/
The plugin handles serialized data very nicely (its find & replace understands serialized data).
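The underlying problem can be reproduced in a few lines of PHP. This sketch (the domain names are placeholders) shows why a plain text replace corrupts serialized data, and what a length-aware replace does instead: unserialize first, replace inside the values, then re-serialize so the "s:" length prefixes are recomputed.

```php
<?php
// The old domain appears inside a serialized string, so its "s:NN" prefix
// encodes the old length.
$row = serialize(array('sender' => 'admin@website.dev.domain.com'));

// Naive str_replace changes the string but not the "s:28" length prefix,
// so the result no longer unserializes:
$naive = str_replace('website.dev.domain.com', 'website-prod-domain.com', $row);
var_dump(@unserialize($naive)); // bool(false)

// Length-aware replace: unserialize, replace inside the PHP values,
// then re-serialize (serialize() recomputes every length prefix):
$data = unserialize($row);
array_walk_recursive($data, function (&$value) {
    if (is_string($value)) {
        $value = str_replace('website.dev.domain.com', 'website-prod-domain.com', $value);
    }
});
$safe = serialize($data);
var_dump(unserialize($safe)); // round-trips fine, with the new domain
```

This is essentially what the "find & replace that handles serialized data" in wp-migrate-db does for you across the whole database.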

Load test with 4000 concurrent users

I am using CodeIgniter for my API implementation. The server resources and technologies used are as follows:
SUMMARY
Framework : CodeIgniter
Database : MySQL (Hosted on RDS)
Hosting : AWS EC2 t2.micro
Web Server : Nginx
I am using 2 slave and 1 master databases as per the following link:
CodeIgniter configure different IP for READ and WRITE MySQL data
I am using the following in every controller for making a database connection.
$DB1_READ = $this->load->database('READ', TRUE);
$DB2_READ = $this->load->database('READ', TRUE);
$DB3_WRITE = $this->load->database('WRITE', TRUE);
I am also using different READ and WRITE connections in the Database.php file:
2 READ SERVERS and 1 WRITE SERVER (MASTER)
PROBLEM
As I am loading the database in my controllers, even when I only call my READ database it also makes a connection to the WRITE server.
Where should I call $this->load->database so that, if I am only going to execute READ queries against the READ database, no connection to the WRITE server is opened?
It is opening an unwanted connection to WRITE, which in turn increases load on my master database. As per AWS RDS I can only have 1 master, so this is a huge issue for me, as my master dies every time.
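One way to avoid the unwanted connection: CodeIgniter opens the connection at the moment $this->load->database() is called, so defer each call until a query actually needs that group. A sketch of a base controller with lazy getters (MY_Controller and the method names are assumptions, not from the question):

```php
<?php
// Hypothetical base controller: each connection group is opened only on
// first use, so a read-only request never touches the WRITE (master) server.
class MY_Controller extends CI_Controller
{
    private $db_read = null;
    private $db_write = null;

    // Use this instead of loading 'READ' unconditionally in the constructor.
    protected function db_read()
    {
        if ($this->db_read === null) {
            $this->db_read = $this->load->database('READ', TRUE);
        }
        return $this->db_read;
    }

    // Only code paths that actually write ever call this, so the master
    // connection is opened only when a write is really needed.
    protected function db_write()
    {
        if ($this->db_write === null) {
            $this->db_write = $this->load->database('WRITE', TRUE);
        }
        return $this->db_write;
    }
}
```

A controller action would then use $this->db_read()->query(...) for reads and call $this->db_write() only inside write paths, keeping the master's connection count proportional to actual writes.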

How to handle secret files on Azure App Services with Git

We have a PHP app where, to encrypt the connection with the database, we need to provide 3 files that shouldn't be publicly accessible but have to be present on the server to make the DB connection (https://www.cleardb.com/developers/ssl_connections).
Obviously we don't want to store them in the SCM with the app code, so the only idea that comes to my mind is using a post-deploy action hook and fetching those files from a storage account (with keys and URIs provided in the app parameters).
Is there a nicer/cleaner way to achieve this? :)
Thank you,
You can try using a Custom Deployment Script to execute additional scripts or commands during the deployment task. You can create a PHP script that downloads the certificate files from Blob Storage to a file system location on the server; the DB connection in your PHP application can then use these files.
Following are the general steps:
Enable the Composer extension in your portal.
Install the azure-cli module via npm; refer to https://learn.microsoft.com/en-us/azure/xplat-cli-install for more info.
Create a deployment script for PHP via the command azure site deploymentscript --php.
Run composer require microsoft/windowsazure, making sure you end up with a composer.json that lists the Storage SDK dependency.
Create a PHP script in your root directory that downloads the files from Blob Storage (e.g. named run.php):
require_once 'vendor/autoload.php';

use WindowsAzure\Common\ServicesBuilder;

$connectionString = "<connection_string>";
$blobRestProxy = ServicesBuilder::getInstance()->createBlobService($connectionString);

$container = 'certificate';
$blobs = array('client-key.pem', 'client-cert.pem', 'cleardb-ca.pem');
foreach ($blobs as $b) {
    $blobResult = $blobRestProxy->getBlob($container, $b);
    file_put_contents($b, stream_get_contents($blobResult->getContentStream()));
}
Modify the deploy.cmd script, adding the line php run.php under the KuduSync step.
Deploy your application to Azure Web App via Git.
Any further concern, please feel free to let me know.

Updating remote site with drush and ssh

I'm very new to Drush. We have a Git repo of a Drupal site that I would like to push to the remote server using Drush. I can easily scp the Drupal files or set up a cron job on the remote that runs git pull, but I would still like to learn how to push code and sync a remote Drupal site with my local Drupal.
Currently I have Drupal running locally and use Git to update the repo. SSH is already configured and I can ssh to the remote Drupal server using keys. I have also created the .drush/aliases.drushrc.php file and tested it by running drush @dev status. It worked well.
<?php
$aliases['dev'] = array(
    'root' => '/var/www/html',
    'uri' => 'dev.example.com',
    'remote-host' => '192.168.1.50'
);
?>
Now I would like my local Drupal site to be synchronized with our server at 192.168.1.50. The local Drupal files are in /home/ubuntu/drupal_site.
I have a few questions:
What are the drush command/parameters to update the remote Drupal server?
What will the drush command/parameters be if the remote server doesn't have the Drupal files yet?
Back up before synchronizing, with drush ard or drush @dev ard, or with the suited alias. You can set the backup path in the alias settings.
I think you named your remote server dev, so I keep this in the following and use the alias local for the local Drupal site.
Add the alias for your local drupal site. Then you can use the following command to synchronize the files:
drush rsync @local @dev
Here @local is the source and @dev the target. More details on how to use the rsync command can be displayed with:
drush help rsync
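The alias for the local site goes in the same .drush/aliases.drushrc.php file; a sketch using the local path from the question (the URI is an assumption):

```php
<?php
// Hypothetical alias for the local site; root comes from the question,
// the uri value is a placeholder for your local vhost.
$aliases['local'] = array(
    'root' => '/home/ubuntu/drupal_site',
    'uri'  => 'http://localhost',
);
```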
You also need to synchronize the database to get the remote site running. For this, add the database credentials to the alias data for @local and @dev. It will look something like this:
'databases' => array(
    'default' => array(
        'default' => array(
            'driver' => 'mysql',
            'username' => 'USERNAME',
            'password' => 'PASSWORD',
            'port' => '',
            'host' => 'localhost',
            'database' => 'DATABASE',
        ),
    ),
),
Replace the placeholders with your data. Then the databases can be synchronized with:
drush sql-sync @local @dev
Here @local is the source and @dev the target.
Initially the synchronization happens in one direction. After that, it is good practice to synchronize files from the development or test site to the production site, while the database is synchronized the other way around, from the production site to the development or test site.
Drush and Git workflows differ in a way, as Drush can pull packages separately; you could probably use Git to push to the server. Be sure to check the /files directory, which is usually in the .gitignore file; a possible approach would be to mirror the files directory directly from the live site.
A common approach to update and check two or more sites at the same time (local and remote) is to use Drush aliases for your sites in a script on your machine.
Articles like this one are a good starting point.

Deployment using SSH with a key without providing the passphrase for the private key (ssh-agent)

Wherein lies the difference between Capistrano and Rocketeer when it comes to the passphrase for a private key?
I already have both Capistrano and Rocketeer deployment strategies set up properly and working. Capistrano lets ssh-agent provide the passphrase; Rocketeer, as it seems, does not. The question is not about how, but why the passphrase is needed.
Background:
I want to use Rocketeer instead of Capistrano for deployment of a Laravel application. It seems that Rocketeer delegates the SSH connection to Laravel.
After setting only the remote server's name in the configuration and running a check, Rocketeer prompts for credentials and then stores the needed passphrase and the path to my private key in a non-version-controlled file.
I do not want credentials for establishing an SSH connection stored on my disk, and especially not the passphrase to any of my private keys.
So, why is anything more than the server's name required?
I see that Laravel has those fields prepared in its remotes config; I just could not find out which component is ultimately responsible, and why it does not leave the SSH connection completely to the system itself.
Is it Rocketeer, Laravel, Symfony, phpseclib or even PHP itself underneath that needs that much information to establish an SSH connection?
It's Laravel's missing integration of phpseclib's ssh-agent support that requires that much information to establish an SSH connection.
That's why Rocketeer does not allow relying on the ssh-agent alongside username/password and private key/passphrase authentication, as Capistrano does.
A proposal was submitted and merged to include phpseclib's undocumented support for using the ssh-agent instead of an explicit key.
Rocketeer would profit from this as it relies on said implementation of phpseclib in Laravel.
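For reference, phpseclib's ssh-agent support (the undocumented implementation mentioned above) can be used on its own roughly like this; the class names are from phpseclib 0.3.x, and the host and username are placeholders:

```php
<?php
// Hedged sketch of phpseclib 0.3.x ssh-agent authentication. If this were
// wired into Laravel's remote connections, no passphrase would need to be
// stored on disk: the running ssh-agent supplies the key material.
require_once 'Net/SSH2.php';
require_once 'System/SSH_Agent.php';

$agent = new System_SSH_Agent();        // talks to the running ssh-agent
$ssh = new Net_SSH2('example.com');     // placeholder host
if (!$ssh->login('username', $agent)) { // agent is passed instead of a key
    exit('Login failed');
}
echo $ssh->exec('uptime');
```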
(Thanks to @hannesvdvreken, @ThomasPayer and @passioncoder for pointing in the right directions.)
There are some things you might want to know.
You can use the default app/config/remote.php, or you can use the Rocketeer config.php that gets published under app/packages/anahkiasen/rocketeer.
I tend to use the Laravel file. I made a copy of that file in the app/config/development folder, which is ignored by Git via .gitignore. I only write the passphrase of my private key down in that file. It gets merged with the array in app/config/remote.php.
Here's my app/config/development/remote.php file:
return array(
    'connections' => array(
        'staging' => array(
            'keyphrase' => 'your-secret-here',
        ),
        'production' => array(
            'keyphrase' => 'your-secret-here',
        ),
    ),
);
Hope this helps.