How to quick-launch a server with Winginx? - php

I'm searching for a way to start the server with a .bat file, or by adding a prefix to a shortcut, to launch the Winginx program with a specific project and specific services (PHP, nginx, MySQL).
PS: I don't need to install the services. It's a portable server.
Thanks.

You can make a .bat file that uses the cd (change directory) command to change the current directory to the location of the services and then executes each service command.
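For example, a minimal sketch of such a .bat file; the install path, service binaries, and port are assumptions about a typical portable Winginx layout, so adjust them to your own setup:
rem Hypothetical portable Winginx layout -- adjust paths to your install
cd /d C:\winginx
rem Launch each service in its own window
start "" nginx\nginx.exe
start "" php\php-cgi.exe -b 127.0.0.1:9000
start "" mysql\bin\mysqld.exe --console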

Related

Git Post-Push hook replacing build folder

I'm working with an AngularJS app that I am hosting on Azure as a Web App.
My repository is on Bitbucket, connected as a deployment slot. I use TortoiseGit as a Windows shell for commit/pull/push. Grunt is my task runner.
Problem: I am not managing to replace the build folder after a push has been made, so a stale build is what ends up being exposed on Azure.
What I've tried:
A batch file using Windows ftp.exe, replacing the folder after a push using mput
Following this guide, taking advantage of Azure's engine for git deployment(s) that sits behind every web app
A WebHook on Bitbucket calling a URL with a simple PHP script doing an FTP upload
Currently my workaround is: after a push I build with grunt build, connect through FTP with FileZilla, and replace the build folder manually.
How can I automate this?
Thanks in advance.
I solved this by refactoring my initial batch script, which calls:
npm install
bower install
grunt clean
grunt build
winscp.com /script=azureBuildScript.txt
and my azureBuildScript.txt:
option batch abort
option confirm off
open ftps://xxx\xxx:xxx@waws-prod-db3-023.ftp.azurewebsites.windows.net/
cd site/wwwroot
rmdir %existingFolder%
put %myFolder%
exit
This is being triggered Post-Push as a Hook job in TortoiseGit.
It turns out ftp.exe is restricted regarding passive mode and can't handle such a session, whilst WinSCP does this with ease.

Allow web server to edit files inside a docker container

I'm trying to set up Docker as a web development environment on my Mac. I share a local volume from my machine to the web root folder inside a container. But I got stuck on permissions: the web server inside Docker can't create a new folder or write new files to the shared volume, because almost all files have permissions 744 and a different user and group.
I thought about making all permissions 777, but that doesn't sound good, because the bad permissions would also end up in Git and, later, on a server. Either way, new files and folders created by the web server have the wrong permissions.
I also thought about creating a group on my Mac with the same name as the one the web server runs under inside Docker (www-data) and changing permissions to 774. But that sounds clumsy.
What is the best way to quickly manage files inside Docker? My workflow requires editing PHP files and immediately seeing the result in the browser.
You can use docker exec -it container_id script, script being either a sed invocation or a replacement of your file with a new version.
Starting with Docker 1.8, you can also run the command as a specific user: docker exec -u myuser
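For example, a hedged sketch; the container name, user, and file paths are placeholders:
# Edit a file in place inside the running container
docker exec -it my_container sed -i 's/old/new/' /var/www/html/index.php
# Or open a shell inside the container as the web server user (Docker 1.8+ per the above)
docker exec -u www-data -it my_container bash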

PHP OSX XAMPP - exec mount command

Good Afternoon,
I am currently working on a PHP project which requires a PHP script to mount a Windows shared drive. I'm currently building on OSX with XAMPP.
exec('mount -t smbfs //user:pass@192.168.1.1/Share /Volumes/Share 2> temp/error.txt');
Now I understand why this does not work: it's due to permissions. Apache is running as user daemon. I could change the user that Apache runs as to fix this "challenge", but I want to avoid any changes to the server if possible.
I would like to reach out and see if there is a better way to go about this.
Any ideas?
OK, so I got it working.
I just needed the web server (user daemon) to own the folder in which the share is mounted.
E.g. I created a folder called "tempshare" that user daemon owns, in the same folder as the PHP script (don't worry, it will be moved out of the web root when complete):
exec('mount -t smbfs //user:pass@192.168.1.1/Share /path/to/tempshare 2> temp/error.txt');
Seemed to work. Any advice on security using this method?
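A minimal sketch of the preparation step, assuming Apache runs as user daemon (the path is a placeholder):
# Create the mount point and hand it to the web server user
mkdir /path/to/tempshare
sudo chown daemon /path/to/tempshare
One security note: credentials passed on the command line are visible in the process list while mount runs, so keep them out of version control and keep the script outside the web root, as planned.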

Azure Webjobs and php.exe

I have a PHP application (ARPReach) installed on an Azure VM. I use Task Scheduler to run a .bat every 5 minutes that has this simple line of code:
"C:\Program Files (x86)\PHP\v5.4\php.exe" E:\Web\arp\a.php cli/auto
Now I want to move this PHP app to an Azure Website, and I need similar scheduling functionality to the above.
I got the path to php.exe from the Kudu site and added a Webjob (cron.bat) to my website with the following line:
"D:\Program Files (x86)\PHP\v5.4\php.exe" D:\home\site\wwwroot\a.php cli/auto
and it seems to work fine.
However, I'm not sure if this is the right way to do it with Azure WebJobs / Websites. I mean, are the paths going to change after a restart or auto-scaling?
Can anyone confirm for me?
A restart or scaling will not change this path; it should not change unless Azure Websites stops supporting PHP version 5.4 for some reason.
So this batch file should work fine, though I would use %ProgramFiles% instead of hard-coding D:\Program Files (x86).
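A minimal sketch of the resulting cron.bat, assuming PHP 5.4 stays at its current location; on Azure Websites %HOME% points at D:\home, and on a 64-bit host %ProgramFiles(x86)% resolves to the "Program Files (x86)" directory:
rem Run the app's CLI entry point via the installed PHP
"%ProgramFiles(x86)%\PHP\v5.4\php.exe" %HOME%\site\wwwroot\a.php cli/auto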

Looking for a safe way to deploy PHP code

How we do things now
We have a file server (using NFS) that multiple web servers mount and use as their web root. When we deploy our codebase, we SCP an archive (tar.gz) to the NFS server and unarchive the data directly in the "web directory" of the file server.
The issue
During the deploy process we see some I/O errors, mostly when a requested file cannot be read: Smarty error: unable to read resource: "header.tpl". These errors go away after the deploy is finished, so we assume it's because unarchiving the data directly into the web directory isn't the safest of things. I'm guessing we need something atomic.
My Question
How can we atomically copy new files into an existing directory (the web server's root directory)?
EDIT
The files that we are uncompressing into the web directory are not the only files in that directory. We are adding files to a directory that already has files in it, so copying the whole directory or using a symlink is not an option (that I know of).
Here's what I do.
DocumentRoot is, for example, /var/www/sites/www.example.com/public_html/:
cd /var/www/sites/www.example.com/
svn export http://svn/path/to/tags/1.2.3 1.2.3
ln -snf 1.2.3 public_html
You could easily modify this to expand your .tar.gz before changing the symlink instead of exporting from svn. The important part is that the change is the atomic application of the symlink.
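For instance, a hedged sketch of the tar.gz variant (archive name and release number are placeholders):
cd /var/www/sites/www.example.com/
mkdir 1.2.4
tar -xzf /tmp/release-1.2.4.tar.gz -C 1.2.4
ln -snf 1.2.4 public_html
ln -snf repoints public_html in one command, which is the atomic switch this approach relies on.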
I think rsync is a better choice than scp, since only the changed files would be synchronized. But deploying code by hand-rolled script is not convenient for a team of developers, and the errors during deployment are not very readable.
You could consider Capistrano, Magallanes, or Deployer, but they are scripts too. I'd recommend you try walle-web, a deployment tool written in PHP on Yii2 that works out of the box. I have hosted it in our company for months, and it works smoothly for deploying to test, staging, and production environments.
It relies on a group of standard tools (rsync, git, ln), but a web UI is generally nicer for operations. Have a try :)
Why don't you just have two directories with two different versions of the site? When you've finished deploying into site_2, you switch the site directory in your web server config (for example Apache) and copy all files over to the site_1 directory. Then you can deploy into site_1 and switch back from site_2 with the same method, as sketched below.
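A hedged sketch of that switch for Apache; the vhost file and paths are hypothetical:
# Point the vhost at the freshly deployed copy, then reload without dropping connections
sed -i 's#/var/www/site_1#/var/www/site_2#' /etc/apache2/sites-available/site.conf
apachectl graceful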
RSync was born to run... er... I mean to do this very thing
RSync works over local file systems and ssh - it's very robust and fast - sending/copying only changed files.
It can be configured to delete any files that have been deleted (or are simply just missing from the source), or it can be configured to leave them alone. You can set up exclude lists to exclude certain files/directories when syncing.
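For example, a hedged sketch; the source path, host, and exclude list are placeholders:
# Mirror the release into the web root, pruning deleted files and skipping VCS metadata
rsync -az --delete --exclude '.git' /path/to/release/ user@webserver:/var/www/sites/example/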
Here's a link to a tutorial.
Re: atomic - link to another question on SO
I like the NFS idea. We also deploy our code to an NFS server that is mounted on our frontends. In fact, we run a shell script when we want to release a new version. What we do is use a current symlink pointing to the latest release directory, like this:
/fasmounts/website/current -> /fasmounts/website/releases/2013120301/
And apache document root is:
/fasmounts/website/current/public
(in fact apache document root is /var/www which is a symlink to /fasmounts/website/current/public)
The shell script updates the current symlink to the new release AFTER everything has been uploaded correctly.
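The key step of that script might look like this (a sketch; the release path is a placeholder):
# Repoint "current" at the new release only after the upload has been verified
ln -snf /fasmounts/website/releases/2013120301 /fasmounts/website/current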
