Manage local SVN file structures with PHP

I have a developer tool that modifies the local file system using PHP. These projects are often checked out from SVN. The issue I would like to avoid is the moving/deleting/renaming of version-controlled directories, and the breakage that occurs in a checked-out SVN working copy when those alterations are made locally.
Is there a simple way to apply the matching SVN commands (move, add, delete, rename, etc.) when I perform the corresponding file system operations?
Or is it easier just to delete any .svn dirs found in move/rename targets?
To clarify this:
A checked-out SVN working copy with a structure of:
somedir/
--foo/
--cheese/
User 1 alters this file system structure (using the dev tool), renaming the 'foo' dir (which is under version control) to 'modified'. When they go to commit their changes, SVN will error because the rename was not done via SVN commands.
This could be running on a variety of development servers (usually desktop machines running Apache on Windows or Mac).
I don't want to ignore or disconnect from the SVN.

In your PHP file, you could run the SVN commands via exec():
<?php
$Output = array();
$ExitCode = 0;
// escapeshellarg() keeps paths with spaces or quotes from breaking the command
exec('svn mv ' . escapeshellarg($Current) . ' ' . escapeshellarg($New), $Output, $ExitCode);
$Output will contain the command's output (one array element per line) and $ExitCode will be 0 if the command was successful.
Keep in mind, however, that for security reasons the exec() function might be disabled on some systems (via the disable_functions directive). If you have control of your php.ini file, you can choose to enable or disable it.
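Building on that, here is a minimal sketch of a move helper that routes version-controlled paths through svn and falls back to a plain rename() otherwise. The svnMove() name is hypothetical, and the .svn check assumes a pre-1.7 working copy where every directory carries its own .svn folder (1.7+ clients keep a single .svn at the working copy root, so there you would need to walk up the tree instead):
<?php
// Hypothetical helper: move a path with svn when it is under version
// control, otherwise with a plain rename(). Assumes the svn binary is
// on the PATH and (for the .svn check) a pre-1.7 working copy layout.
function svnMove($current, $new)
{
    $versioned = is_dir(dirname($current) . '/.svn');

    if ($versioned) {
        $output = array();
        $exitCode = 0;
        exec('svn mv ' . escapeshellarg($current) . ' ' . escapeshellarg($new),
             $output, $exitCode);
        return $exitCode === 0;   // svn mv both moves the path and records the rename
    }

    return rename($current, $new);
}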

Programmatically, you may be better off just removing the .svn folders; it is probably only a few lines of code (see the sketch below).
Another thing to try: why don't you export the module with svn export? That way you don't get .svn folders at all.
Also note that you can script all svn commands and run them.
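If you do go the route of stripping .svn folders, a rough sketch of that in PHP, using the SPL iterators (paths are collected first and deleted afterwards, so the iterator never walks a directory that has already been removed):
<?php
// Recursively delete a directory and everything inside it.
function removeDir($dir)
{
    foreach (new FilesystemIterator($dir, FilesystemIterator::SKIP_DOTS) as $item) {
        $item->isDir() ? removeDir($item->getPathname()) : unlink($item->getPathname());
    }
    rmdir($dir);
}

// Find every .svn directory under $root, then remove them all.
function stripSvnDirs($root)
{
    $targets = array();
    $it = new RecursiveIteratorIterator(
        new RecursiveDirectoryIterator($root, FilesystemIterator::SKIP_DOTS),
        RecursiveIteratorIterator::SELF_FIRST
    );
    foreach ($it as $item) {
        if ($item->isDir() && $item->getFilename() === '.svn') {
            $targets[] = $item->getPathname();
        }
    }
    foreach ($targets as $dir) {
        removeDir($dir);
    }
}
Bear in mind this permanently disconnects the affected tree from version control, which conflicts with the "don't want to disconnect" requirement above, so it only suits move/rename targets you intend to treat as plain files.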

If I understood correctly, you have one version-controlled directory (slave) within another (master). Why not just add the slave directory to the master's svn:ignore list?
svn propset svn:ignore slave .
svn ci . -m 'Added ignore dir'
From now on, running svn up/add/delete in the master directory won't mess up the slave checkout, while the latter remains fully functional.

Related

Clearing cache manifest via CLI

I've automated the deployment of my site and I have a script that runs framework/sake /dev/build "flush=1". This works; however, it clears the cache directory of the user who runs it, which is different from the Apache user (which I can't run it as).
I've read a few bug reports and people talking about it on the SilverStripe forum, but either there is no answer or it doesn't work, for example:
define('MANIFEST_FILE', TEMP_FOLDER . "/manifest-main");
I thought about just deleting the cache directory, however its name contains a randomised string, so it is not easy to script.
What's the best way to clear the cache via the command line?
To get this to work, you first need to move the cache from the default directory to within the web directory, by creating a folder named silverstripe-cache at the web root. Also make sure the path is read/write for the web server (the SilverStripe default config blocks it from being readable by the public).
Then you can script:
sudo -u apache /path/to/web/root/framework/sake dev/build "flush=1"

Git ignore and Git hook - file is replaced after a commit-push

My Git repository is located at /var/repo/myRepo.git. I set up a Git post-receive hook to copy the files from my repository to my project folder:
git --work-tree=/var/www/laravel --git-dir=/var/repo/myRepo.git checkout -f
Each time I commit and push something to the server, the file var/www/laravel/config/services.php is replaced, and the modification I made on the server is overwritten by my local copy.
For instance, say I manually modify the following file like this on the server (over an SSH session):
var/www/laravel/config/services.php
This is the modified content of this file
It will be like this after a commit and push:
var/www/laravel/config/services.php
This is the default content of this file
I tried to add /config/services.php to my .gitignore but it does not seem to work.
.gitignore
/node_modules
/public/storage
/public/hot
/storage/*.key
/vendor
/.idea
Homestead.json
Homestead.yaml
.env
/config/services.php
What should I do so this file is not replaced each time I commit something on my server ?
You have only two options:
don't check it in, or
don't check it out.
Your git checkout -f command means "get me the latest commit, overwriting everything." If the latest commit has a new version of a file, that overwrites the old version of the file.
(Moreover, a .gitignore file does not mean what you think it means. It's not a list of files to ignore. It's a list of files—or name patterns—not to complain about. Usually most important, it lets you declare to Git: "Yes, I know these are in my work-tree and not in my index; don't tell me that." That's on the input side—i.e., the "don't check it in" part.)
This leads to a general rule about configurable software, where the software itself is maintained in Git, or indeed any version control system: Do not put the configuration into the version control system. The configuration is not part of the software.
Consider Git itself, for instance. You must configure Git to tell it your user.name and user.email in order to make commits with your user-name and email address. Now imagine Git came with its configuration file built into the software, which said your user name is Fred and your email is fred@fred.fred. Every time you updated Git to a new version, it would change your name back to "Fred <fred@fred.fred>". That's not very friendly, is it?
Git stores your configuration outside of the Git software. There are, to be sure, default configuration entries, but anything you set, is kept elsewhere. Updating Git does not touch this. This is not specific to Git, or even version-control systems: any system that provides upgrades must not store its configuration in a file that is destroyed by the upgrade.
So, stop doing that.
I did git rm /config/services.php and re-imported the file manually. Now the file is not replaced by Git.

Keep the user's configuration when running emacs in sudo mode

I have to run emacs in sudo mode, to edit some .html or .php files in my /var/www directory.
When I run it in normal-user mode, there's no problem with the syntax, and the colors (I installed the php-mode.el extension). Unfortunately when I run it in sudo mode, I lose this configuration.
Is there any way to get it back?
Unfortunately when I run it in sudo mode, I lose this configuration, which is sad.
That's completely expected. When you run a command with sudo you're running it as a different user, usually root. In most cases the target user's configuration will be used.
Is there any way to get it back ?
I believe the best option here is to run Emacs normally and then edit the file as root using TRAMP. In this case I think prefixing the file with /sudo:: will do the trick, e.g. C-x C-f /sudo::/var/www/html/foo.php RET. Emacs will prompt you for your password, just like sudo would on the command line.
TRAMP does a lot more than letting you edit certain files as root via sudo, and it is probably worth your time to browse its manual.
It is unlikely that you would ever need to run sudo emacs, and it's bad practice to run things as root unnecessarily.
The sudoedit command / sudo -e option exists for the purpose of editing files owned by other users. The editor which will be used is described in the manual (man sudo) under the description for the -e option:
The editor specified by the policy is run to edit the temporary files. The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR environment variables (in that order). If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor sudoers(5) option is used.
With this approach, you will be editing a file as your normal user, and hence with your normal editor config.
Alternatively, use the Tramp methods built into Emacs, as per Chris' answer (which is most likely simpler if you are editing many files).
If these are files you essentially need write access to in general, perhaps you should add yourself to a group that has write access.

executing suphp from memory

I manage a Linux server running Apache with PHP and suPHP. Like most setups, all program files are currently stored on disk.
I want to run suPHP from RAM. I copied everything in the suphp folder (config files, folders, and program files totaling about 2.8 MB) to RAM. Then, on disk, I renamed the folder so the old version doesn't get accessed. I used the -a switch with cp to preserve permissions and ownership.
I then made two attempts, both of which failed.
First, I made a symbolic link (using ln -s) named suphp pointing to the suphp folder in RAM. When I browsed the file/folder structure, everything looked identical, as if the setup was ready to work.
Since that didn't work, I made another attempt: I removed the suphp symbolic link, created an empty suphp folder, and bind-mounted it (using mount --bind) to the suphp folder in RAM. That did not work either.
I then looked at my Apache error_log; during the time I tried these steps, and until I restored the original setup, I received various error messages that all had text similar to this in common:
"(2)No such file or directory: couldn't create child process: (suphp folder location)/sbin/suphp for (full path to php file on website)".
What baffles me is why it would report "no such file or directory" instead of some other error.
suPHP will be loaded into RAM by Apache anyway, since it is an Apache module (mod_suPHP). There is no reason to move the actual files/executables to the system's RAM directory and set up symlinks as you have done.
If your intent is to keep PHP scripts in memory, consider using an opcode cache such as the one built into PHP 5.5 (OPcache) or APC; see How to use PHP OPCache?
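If you want to verify that the opcode cache is actually active, something along these lines should do (a sketch; assumes PHP 5.5+ with the bundled OPcache extension loaded):
<?php
// Report whether OPcache is loaded and how many scripts it currently holds.
if (function_exists('opcache_get_status')) {
    $status = opcache_get_status(false);   // false = omit per-script details
    printf("enabled: %s, cached scripts: %d\n",
           $status['opcache_enabled'] ? 'yes' : 'no',
           $status['opcache_statistics']['num_cached_scripts']);
} else {
    echo "OPcache extension is not loaded\n";
}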

Laravel 4 - cloning the local project on the VPS

I use Laravel 4 to develop my projects.
I develop on my Mac, commit to Git, then clone the project on the server (a Linode 1 GB VPS).
Since the "vendor" folder is git-ignored by default, I usually run "composer install" after cloning the project on the server.
After that, for any other packages I install locally, I run "composer update" on the server.
Yesterday, I reported this problem - PHP Composer update "cannot allocate memory" error (using Laravel 4)
So far, I have not found a solution. I even tried a "fresh" clone and "composer install"; it still gives me the memory error. This is extremely frustrating.
My question, then, is: is it OK to just upload my entire project to the server? Since the "vendor" folder is the only thing that is git-ignored, if I just copy everything there, would it work? (I haven't tried it, since my server is live at the moment and I don't want to damage anything.)
What is the actual role of the "compiled.php" file? Is it platform-dependent? Can I copy that file too?
I've seen this memory issue quite a few times now and have read other people reporting it as well. I hope I can just upload the entire project folder and cross my fingers that it will work.
Thanks for your help!
I do not have a VPS, or even shell access to the custom/shared hosting from my provider, but I can run git and composer commands without that.
Use sshfs: http://osxfuse.github.io/
sshfs makes an SFTP connection to your server and mounts the server into a local directory.
This way, you can run git and composer commands locally; you do not depend on your VPS/hosting server. sshfs sends the files to the remote server in the background.
To mount the VPS to a local dir, run this:
sshfs user@serverip:. /path/to/existing/local/dir    # mount the remote home dir
cd !$                                                # change into the mounted dir
Or, to mount a specific dir:
sshfs user@serverip:foldername /path/to/existing/local/dir
cd !$
Now you can do whatever you want.
A good thing for you to know: it is possible to set up the Laravel config in such a way that the same app (the very same copy of the code) can act differently on different servers (environments).
I am writing that because, if you sync your remote server with a local copy of the code, sooner or later you will stumble upon issues like having to change the db credentials or app setup after every sync, which of course doesn't make sense :)
Check out the Laravel 4 Docs on environment configuration to read more about that, or follow this tutorial by Andrew Elkins: How to set Laravel 4 Environments
The environment is based on URL matches.
You’ll find that configuration in /bootstrap/start.php
$env = $app->detectEnvironment(array(
    'local' => array('your-machine-name'),
));
Now say you are developing locally and use the prefix/postfix "local", e.g. my-new-site.local or local.my-new-site:
$env = $app->detectEnvironment(array(
    'local' => array('local.*', '*.local'),
));
That sets the environment. Now, to use it, you'll need to create a local folder in /app/config/:
mkdir app/config/local
Say you want a different database configuration for local: just copy the database config file into the local directory and modify it (a sketch of the result follows).
cp app/config/database.php app/config/local/database.php
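Since Laravel 4 cascades environment config over the base files, the copied file only needs to return the keys you actually change. A sketch of app/config/local/database.php (the credentials below are placeholders):
<?php
// app/config/local/database.php - overrides app/config/database.php
// for the 'local' environment only. Values below are placeholders.
return array(
    'connections' => array(
        'mysql' => array(
            'driver'   => 'mysql',
            'host'     => 'localhost',
            'database' => 'my_local_db',
            'username' => 'root',
            'password' => '',
        ),
    ),
);
You can confirm which environment was picked up with App::environment() (or php artisan env) before relying on the override.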
To sum up and answer your question:
1) I guess it's OK to copy the whole project dir to the remote server (although, if you're copying vendor, it might take a long time - it usually contains a large number of files)
2) If you do so, remember to have the updated composer.json on the remote server (to reflect all the necessary requirements)
3) If you are using different database servers locally and remotely, you obviously have to run migrations and seeders on the remote server (this also concerns package migrations/seeds)
4) After you migrate all your files, run
composer dump-autoload
php artisan optimize --force --env=YourProductionEnvironmentName
which should rebuild the bootstrap files and autoloaders
5) If you are using the Laravel environments setup mentioned above, remember to have your remote server detected as production (if your local is testing/staging).
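For point 5, the detection block from bootstrap/start.php shown earlier might end up looking something like this (the hostname patterns are placeholders for your own):
$env = $app->detectEnvironment(array(
    'local'      => array('local.*', '*.local'),          // your development machines
    'production' => array('myapp.com', 'www.myapp.com'),  // placeholder production hosts
));
Note that Laravel 4 falls back to 'production' when nothing matches, which is usually what you want on the remote box anyway.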
