I'm trying to set up ViewSVN for viewing our Subversion repository.
My SVN repository uses https for access. However, regardless of whether I supply svn://, svn+ssh:// or https:// in the ViewSVN configuration for my repository, I always get this in my Apache log:
svn: URL protocol is not supported 'https://my.repository.com'
Everything of course works perfectly when running from the command line.
My localconfig.php file defines the SVN root server as
$config['svnroot']='https://my.repository.com';
One other thing: I am using JavaSVN.
Do you happen to have multiple versions of SVN installed, e.g. ViewSVN using /usr/bin/svn whereas you are using /usr/local/bin/svn or something like that?
[EDIT]
I don't know JavaSVN, but maybe it behaves differently when run by different users, or maybe it doesn't load the additional plugins needed to handle other protocols. Do you have superuser access to the machine? Try running the checkout from the command line as the web server user:
su nobody
svn checkout https://...
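If you can't su to the web server account, a throwaway PHP page run through Apache answers the same question. This is only a hedged sketch (output reading is left to you): it shows which svn binary the web server user resolves and whether that build advertises https support.
<?php
// Request this through Apache so it runs as the web server user.
header('Content-Type: text/plain');
echo "binary: " . trim((string) shell_exec('which svn 2>&1')) . "\n\n";
echo shell_exec('svn --version 2>&1');
// A build that can reach https:// repositories lists a line such as
// "handles 'https' scheme" under its repository access (RA) modules.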
I have the "port number" and the "root folder" added in svnroot, like the follwoing
$config['svnroot']='https://my.repository.com:8088/svn';
I use TortoiseSVN and it shows my svnroot; I think JavaSVN has the same feature.
I'm trying to integrate a custom web page with git. In my PHP scripts, I use the option -c credential.helper="store --file=..." so that the web page does not stop and wait for a password to be typed in. The user ID is specifically designated for automated tasks like this. As part of the web interface, I have some code that updates the credentials file when the password expires.
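Roughly, the PHP side issues the git command like this (a hedged sketch; the subcommand and paths are placeholders, not the exact production code):
<?php
// Placeholder paths and subcommand; only the credential.helper wiring matters here.
$credFile = 'C:\\TFS\\Train\\.git\\.git_web_credentials';
$cmd = 'git -c credential.helper="store --file=' . $credFile . '" push origin master 2>&1';
exec($cmd, $output, $exitCode);
// When the helper is not used, git sits here waiting for a username on stdin
// instead of returning, which matches the captured output described below.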
During development, when I issue the git commands at the windows command prompt (the "terminal" for you *NIX readers :) they all work fine. However, when I put the commands inside my PHP script and run it via the web server, they fail. I've managed to capture the output of the git task, and it's waiting for someone to type in a user ID.
It seems that git on Windows will automatically drop back to the "manager" helper if all other mechanisms fail. The following sequence of commands illustrates this:
C:\TFS\Train>git config --get --show-origin credential.helper
C:\TFS\Train>type .git\.git_web_credentials
http://Promote5:[.....]@tocgnxt1pv%3a8080
C:\TFS\Train>git -c credential.helper="store --file=C:\TFS\Train\.git\.git_credentials" tag -a -m "Testing tags from the command line" Who_created_this_tag2
C:\TFS\Train>git show Who_created_this_tag2
tag Who_created_this_tag2
Tagger: JimHyslop <jim.hyslop@xxx.xxx>
Date: Wed Dec 6 18:13:27 2017 -0500
[... remainder of output elided ...]
C:\TFS\Train>git --version
git version 2.13.0.windows.1
As you can see, the "tagger" line indicates that I'm the one who applied the tag.
I've even tried deleting the credentials file completely, but it still falls back to using my identity.
Is there any way to suppress this automatic fall back to using my Windows credentials? It's making it very difficult to debug my PHP script: I never know from the command prompt whether the command succeeded because it was able to use the credentials file, or because it fell back to using my own ID (which the web service cannot do).
Edit to add: The password is actually in the credentials file, I masked it out. The file was initially created by entering the ID and password from a Cygwin shell, which is running a slightly newer version of git: 2.15.0. I'm running a WAMP stack for my web service.
After more investigation, I realized that the 'tagger' line is pulled from the configuration information under the [user] section. The user information shown in the log is unrelated to the authorization/credentials method.
The real question is: why is git ignoring the command-line option "-c credential.helper="? Or, if it's not ignoring the option, how can I figure out exactly what is failing with the credential helper?
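One way to see what is actually happening is to run the same command from PHP with git's tracing switched on and capture stderr. GIT_TRACE and GIT_CURL_VERBOSE are standard git environment variables; the subcommand and log path below are placeholders.
<?php
putenv('GIT_TRACE=1');         // logs the commands git runs, including credential helpers
putenv('GIT_CURL_VERBOSE=1');  // logs the HTTP exchange, including auth negotiation
$cmd = 'git -c credential.helper="store --file=C:\\TFS\\Train\\.git\\.git_web_credentials"'
     . ' fetch origin 2> C:\\TFS\\Train\\git_trace.log';
exec($cmd, $output, $exitCode);
// The trace shows whether "git credential-store --file=..." was ever invoked
// and how the remote responded, which narrows down where the fallback happens.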
For the record, the solution I came up with was to write a custom credential manager, as outlined at git-scm.com.
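For anyone curious what such a helper looks like: the credential protocol described there has the helper read attribute=value lines on stdin and, for the get action, print username= and password= lines on stdout. This is only a rough PHP sketch of that shape; the JSON file is a placeholder for wherever the web app stores the rotating password, and error handling is omitted.
<?php
// Invoked by git as: <helper> get|store|erase
$action = isset($argv[1]) ? $argv[1] : '';
if ($action !== 'get') {
    exit(0); // ignore "store" and "erase"
}
// Drain the attributes git sends (protocol, host, ...); this helper serves a
// single fixed account, so they are not needed here.
while (($line = fgets(STDIN)) !== false && trim($line) !== '') {
}
$secret = json_decode(file_get_contents('C:/TFS/Train/web_credentials.json'), true);
echo 'username=' . $secret['user'] . "\n";
echo 'password=' . $secret['password'] . "\n";
It can then be registered with something like -c credential.helper="!php C:/path/to/helper.php"; the leading ! tells git to treat the value as a shell command rather than the name of a built-in helper.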
EDIT: Sorry everyone, this isn't something you could have fixed! The AppKernel class had been modified to change the cache directory, as below:
public function getCacheDir()
{
    if (isset($_SERVER['HTTP_HOST'])) {
        return $this->rootDir.'/cache/'.$this->environment.'/'.$_SERVER['HTTP_HOST'];
    } else {
        return $this->rootDir.'/cache/'.$this->environment.'/default';
    }
}
So not down to Symfony, or PHP, but a previous developer (presumably not on Windows!). Thanks for all your help, +1s all round.
I'm hoping there's a simple answer to this, but right now I can't see it!
Windows 10
Symfony 2.8.11
PHP 5.5.9
For convenience, I'd like to use PHP's built-in webserver (via the Symfony Console) to run a Symfony (2.8) application, on a port other than 80. I have a colleague successfully doing this, but he's using Linux, and I'm on Windows 10. The issue is that, on anything other than the standard port 80, when Symfony builds its cache the port is appended to one of the directory names, with a colon, which is illegal in Windows filenames (although not elsewhere). The cache build process fails, and the app doesn't run.
I'm starting the PHP server via Symfony's Console, like so:
php app/console server:run appname.local
The directory it's trying to build is:
C:\git\appname\app/cache/dev/appname.local:8000
And so I get the error:
RuntimeException in bootstrap.php.cache line 2763:
Unable to create the cache directory (C:\git\appname\app/cache/dev/appname.local:8000)
I'd just use the standard port (this does work), but in fact I want to run several things at once, and they can't all be on 80.
Is there any way I can run a Symfony site on PHP's webserver, on Windows, on a non-standard port, in such a way that Symfony doesn't choke at the point of building the cache? For clarity, I could change webserver, and I could change OS, but for the purposes of this question assume that those are fixed. I'd prefer not to switch off the cache (it's slow enough as it is!) but that's an option if it would help.
EDIT: it seems like this works for at least some people, so there must be something different about my config. Best bet is probably the PHP version, which is quite old (not for any particular reason, just laziness).
Symfony has a command to run a web server (which uses PHP's built-in server):
php bin/console server:start
This command will start the server on port 8000 (the default config).
Have a look here for more information about the available options: https://symfony.com/doc/current/setup/built_in_web_server.html
To start the server on a particular port, specify the port after the IP address on the command line. Using the IP address is useful too, because you can then access the server from a different host.
For example, let's say you run ipconfig /all and you see your IPv4 address is 192.168.1.100. Then you can run:
php bin/console server:start 192.168.1.100:8888
This starts Symfony's built-in web server on port 8888 on IP address 192.168.1.100. So in a browser you can enter: http://192.168.1.100:8888/ where / is the route you want to access.
To stop the built-in server enter:
php bin/console server:stop 192.168.1.100:8888
You'll see messages on the command line showing the stopping/starting of the built-in web server.
As in my question edit, this was down to developer action rather than a Symfony or PHP issue to be solved by the community.
public function getCacheDir()
{
    if (isset($_SERVER['HTTP_HOST'])) {
        return $this->rootDir.'/cache/'.$this->environment.'/'.$_SERVER['HTTP_HOST']; //KABOOM
    } else {
        return $this->rootDir.'/cache/'.$this->environment.'/default';
    }
}
Thanks for all your efforts!
public function getCacheDir()
{
    if (isset($_SERVER['HTTP_HOST'])) {
        return $this->rootDir.'/cache/'.$this->environment.'/'.str_replace(':', '_', $_SERVER['HTTP_HOST']);
    } else {
        return $this->rootDir.'/cache/'.$this->environment.'/default';
    }
}
A colon isn't a valid character in a Windows filename, so replace it with an underscore.
I use Composer on a network where the only way to access the internet is via an HTTP or SOCKS proxy. I have the http_proxy and https_proxy environment variables set. When Composer tries to access HTTPS URLs I get this:
file could not be downloaded: failed to open stream: Cannot connect to HTTPS server through proxy
As far as I know, the only way to connect to an HTTPS site through a proxy is using the CONNECT verb. How can I use Composer behind this proxy?
If you are using Windows, you should set the same environment variables, but Windows style:
set http_proxy=<your_http_proxy:proxy_port>
set https_proxy=<your_https_proxy:proxy_port>
That will work for your current cmd.exe session. If you want to make this permanent, I suggest you set the environment variables at the system level.
If you're on Linux or Unix (including OS X), you should put this somewhere that will affect your environment:
export HTTP_PROXY_REQUEST_FULLURI=0 # or false
export HTTPS_PROXY_REQUEST_FULLURI=0 # or false
You can put it in /etc/profile to globally affect all users on the machine, or your own ~/.bashrc or ~/.zshrc, depending on which shell you use.
If you're on Windows, open the Environment Variables control panel, and add either system or user environment variables with both HTTP_PROXY_REQUEST_FULLURI and HTTPS_PROXY_REQUEST_FULLURI set to 0 or false.
For other people reading this (not you, since you said you have these set up), make sure HTTP_PROXY and HTTPS_PROXY are set to the correct proxy, using the same methods. If you're on Unix/Linux/OS X, setting both upper and lowercase versions of the variable name is the most complete approach, as some things use only the lowercase version, and IIRC some use the upper case. (I'm often using a sort of hybrid environment, Cygwin on Windows, and I know for me it was important to have both, but pure Unix/Linux environments might be able to get away with just lowercase.)
If you still can't get things working after you've done all this, and you're sure you have the correct proxy address set, then look into whether your company is using a Microsoft proxy server. If so, you probably need to install Cntlm as a child proxy to connect between Composer (etc.) and the Microsoft proxy server. Google CNTLM for more information and directions on how to set it up.
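For what it's worth, as far as I can tell those variables end up controlling the request_fulluri option of PHP's HTTP stream wrapper, so you can test the proxy outside of Composer with a few lines of PHP (a hedged sketch; the proxy address is a placeholder):
<?php
$context = stream_context_create([
    'http' => [
        'proxy'           => 'tcp://proxy.example.com:8080',
        'request_fulluri' => false, // the behaviour that HTTPS_PROXY_REQUEST_FULLURI=0 asks for
    ],
]);
$body = file_get_contents('https://getcomposer.org/versions', false, $context);
var_dump($body !== false); // true means the proxy let the HTTPS request through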
If you have to use credentials, try this:
export HTTP_PROXY="http://username:password@webproxy.com:port"
Try this:
export HTTPS_PROXY_REQUEST_FULLURI=false
This solved the issue for me when working behind a proxy at a company a few weeks ago.
This works; this was my case:
C:\xampp\htdocs\your_dir>SET HTTP_PROXY="http://192.168.1.103:8080"
Replace with your proxy IP and port.
On Windows, insert:
set http_proxy=<proxy>
set https_proxy=<proxy>
before
php "%~dp0composer.phar" %*
or on Linux, insert:
export http_proxy=<proxy>
export https_proxy=<proxy>
before
php "${dir}/composer.phar" "$#"
iconoclast's answer did not work for me.
I upgraded my php from 5.3.* (xampp 1.7.4) to 5.5.* (xampp 1.8.3) and the problem was solved.
Try iconoclast's answer first, if it doesn't work then upgrading might solve the problem.
You can use the standard HTTP_PROXY environment var. Simply set it to the URL of your proxy. Many operating systems already set this variable for you.
Just export the variable, then you don't have to type it all the time.
export HTTP_PROXY="http://johndoeproxy.cu:8080"
Then you can do composer update normally.
Operation timed out (IPv6 issues)
You may run into errors if IPv6 is not configured correctly. A common error is:
The "https://getcomposer.org/version" file could not be downloaded: failed to
open stream: Operation timed out
We recommend you fix your IPv6 setup. If that is not possible, you can try the following workarounds:
Workaround Linux:
On Linux, it seems that running this command helps to give IPv4 traffic a higher priority than IPv6, which is a better alternative than disabling IPv6 entirely:
sudo sh -c "echo 'precedence ::ffff:0:0/96 100' >> /etc/gai.conf"
Workaround Windows:
On Windows the only way is to disable IPv6 entirely, I am afraid (either in Windows or in your home router).
Workaround Mac OS X:
Get name of your network device:
networksetup -listallnetworkservices
Disable IPv6 on that device (in this case "Wi-Fi"):
networksetup -setv6off Wi-Fi
Run composer ...
You can enable IPv6 again with:
networksetup -setv6automatic Wi-Fi
That said, if this fixes your problem, please talk to your ISP about it to try and resolve the routing errors. That's the best way to get things resolved for everyone.
Hoping it will help you!
Based on the ideas above, I created a shell script that sets up a proxy environment for Composer:
#!/bin/bash
export HTTP_PROXY=http://127.0.0.1:8888/
export HTTPS_PROXY=http://127.0.0.1:8888/
zsh # you can also use bash or another shell
This code lives in a file named ~/bin/proxy_mode_shell and starts a new zsh instance whenever you need the proxy. Once the update has finished, simply press Ctrl+D to quit proxy mode.
Add export PATH=~/bin:$PATH to ~/.bashrc or ~/.zshrc if you cannot run proxy_mode_shell directly.
I want to make SVN updates easier by calling a PHP script.
I created this PHP script:
$cmd = "svn update https://___/svn/website /var/www/html/website/ 2>&1";
exec($cmd, $out);
As the user running the script is apache (not root), I get some permission errors.
If I change the owner of every directory to apache (or chmod everything to 777) I have another problem. Because I use the https protocol, the apache user has to permanently accept the certificate of the SVN server. I tried to do "su - apache" and accept the certificate, but the OS says that "apache" is not a valid user. I also don't know how I could accept the certificate with the exec() function.
Any ideas? How can I make running svn update easier?
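One hedged option for the certificate part: Subversion 1.6 and later accept --trust-server-cert together with --non-interactive, so no prompt can ever block the web process, and --config-dir points the client at a directory the apache user can write to. The paths and credentials below are placeholders.
<?php
$cmd = 'svn update /var/www/html/website/'
     . ' --non-interactive --trust-server-cert'   // never prompt; accept the https certificate
     . ' --config-dir /var/www/.subversion'       // a directory writable by the apache user
     . ' --username svnuser --password svnpass 2>&1';
exec($cmd, $out, $ret);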
Is the error telling you that the user isn't a valid svn user? If apache is the user running httpd, you should be able to su to it. This is the script I use:
/usr/bin/svn --config-dir=/home/user/.subversion --username=svnuser --password=svnpass update
Once the password is saved, you can remove it from the command. Again, make sure the user/pass above is a valid SVN user.
Lately I've actually migrated to using Hudson for svn updates as you can schedule it as well as run manually and do a bunch of other tasks, plus you can view the svn logs for each commit as well as any console errors.
Why not use PHP's SVN functions instead of the (insecure) exec()?
http://www.php.net/manual/en/function.svn-auth-set-parameter.php has good examples of the authentication options.
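A hedged sketch of what that can look like, assuming the PECL svn extension is installed (the credentials and path are placeholders):
<?php
svn_auth_set_parameter(SVN_AUTH_PARAM_DEFAULT_USERNAME, 'svnuser');
svn_auth_set_parameter(SVN_AUTH_PARAM_DEFAULT_PASSWORD, 'svnpass');
// Ignore verification errors for the self-signed https certificate so nothing
// tries to prompt interactively from the web process.
svn_auth_set_parameter(PHP_SVN_AUTH_PARAM_IGNORE_SSL_VERIFY_ERRORS, true);
$result = svn_update('/var/www/html/website');
var_dump($result); // revision number on success, false on failure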
Run getent passwd apache in a shell. This will return the shell of the apache user. Most likely, it is /bin/nologin or /bin/false. Change this to /bin/bash. You'll also need to specify the home directory and create it on the file system.
UPDATE: getent passwd apache actually returns the whole /etc/passwd entry for the apache user; the last token in that string is the shell.
I'm using Drush and Drush Make to automate downloading Drupal modules on a corporate network behind an NTLM-SSPI proxy. Drush and Drush Make use cURL to download files, and cURL supports NTLM-SSPI proxies. I configured cURL for the proxy in my .curlrc file:
--proxy proxy.example.com:8080
--proxy-ntlm
--proxy-user user:password
Drush itself is able to download modules from drupal.org because it uses curl from the command line. But Drush Make uses the PHP cURL API (libcurl). It looks like, when used this way, cURL does not use the configuration in my .curlrc file.
Is there a way to configure libcurl/PHP cURL with a .curlrc file?
No, the entire .curlrc parser and all associated logic is only present in the command-line tool code. It is not included in the library at all (and the PHP/cURL binding only uses libcurl, the library, not the command-line tool).
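So if Drush Make really is going through the PHP cURL API, the proxy has to be set as explicit options in code. A hedged sketch of the PHP equivalent of the .curlrc in the question (host, port, credentials and URL are placeholders):
<?php
$ch = curl_init('https://example.org/some-module.tar.gz');
curl_setopt_array($ch, array(
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_PROXY          => 'proxy.example.com:8080', // --proxy
    CURLOPT_PROXYAUTH      => CURLAUTH_NTLM,            // --proxy-ntlm
    CURLOPT_PROXYUSERPWD   => 'user:password',          // --proxy-user
));
$data = curl_exec($ch);
if ($data === false) {
    echo curl_error($ch);
}
curl_close($ch);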
Drush actually invokes the command-line tool and runs it, so you can do this in the ~/.curlrc file, but you need to make sure your entries are set up correctly.
leet#test:~$ cat ~/.curlrc
# Proxy mainly for drush make
proxy = http://localhost:3128
# Drush make workaround for https
#insecure
It can be created with:
echo -e "\n# Proxy mainly for drush make\nproxy = http://localhost:3128\n# Drush make workaround for https\n#insecure\n" >> ~/.curlrc
Remember, this will only work for your user. I think you can set a system-wide default if you put curlrc in the same folder as your curl binary, or in /etc/curl, but I have not tested this.
I use this all the time, for quick aegir builds.
Hope that helps.
LeeT