The answer to the following doesn't satisfy me; I wish to know a bit more about what's going on.
Can anyone explain the $pty argument in the ssh2_exec() function call?
Does it force the client to tell the server to spawn a PTY, or is the PTY totally client-sided?
As far as I know, it's attached to a process such as an sshd, for example, which would require a call to the server.
Also, when set to true, does it emulate the default shell? What is that default?
I know you can pass 'xterm', for example, which emulates a PTY; is this any different? From my perspective, emulation implies it's not a real PTY.
That may be a little confusing to read, but I'm trying to grasp this concept.
Thank you. I appreciate it.
A "pty" is essentially a "pipe" between some sort of application or daemon (for example, I work on virtualization, and we use a pty to provide the virtual terminal for a virtual machine). A pty has a "master" and a "slave" side. The slave side is what your normal "terminal" program would use - xterm or ssh, etc. The master is used by whatever "thing" provides the data into the terminal [and if you write into the pty, e.g. when you type or paste text into an xterm] it gets read by the process controlling the master - the master then does whatever it should do with such data - e.g. sending it across the network in an ssh case.
It is all to do with what the command at the far end sees: the $pty argument (like ssh's -t flag) makes the client ask the server to spawn a PTY for the command, so it is not purely client-sided.
If you are running a command that is "interactive" over ssh - say "ssh somemachine make menuconfig" [assuming your home directory is a Linux source directory - we'll ignore the fact that it probably isn't] - the default is to not allocate a pty, so menuconfig will probably fail [to operate correctly, at least] because it's an "interactive" text program that allows you to press keys to move around, etc. Using "ssh -t somemachine make menuconfig" will give your session a pty. Alternatively, plain "ssh somemachine" will give you a pty by default, since you are expected to type things into the other end.
That pty lives on the remote machine: sshd connects the command to the "slave" side and holds the "master" side itself, feeding data arriving from your end into the master and relaying the command's output back across the network to your local ssh client.
This page describes what I've tried to say: http://lugatgt.org/2009/10/28/ssh-tips-and-tricks-2/
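To tie this back to the original question: as I understand it, the $pty argument is the request the client sends to the server, and the string you pass ('xterm', 'vt102', ...) just becomes the TERM type of the allocated pty. A minimal sketch, assuming the ssh2 extension is installed; the host, credentials, and command are placeholders:
<?php
// Hypothetical host and credentials - substitute your own.
$conn = ssh2_connect('somemachine', 22);
ssh2_auth_password($conn, 'user', 'secret');

// Without the third argument no pty is requested, which suits
// non-interactive commands. Passing 'xterm' asks the remote sshd to
// allocate a pty with TERM=xterm, so interactive/curses programs
// behave as they would in a real terminal.
$stream = ssh2_exec($conn, 'make menuconfig', 'xterm');

stream_set_blocking($stream, true);
echo stream_get_contents($stream);
?>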
My development environment consists of the single-threaded built-in PHP server. Works great:
APP_ENV=dev php -S localhost:8080 -c php.ini web/index.php
One issue with this is that the built-in server is single-threaded, so lots of parallel XHRs resolve sequentially. Worst of all, it doesn't mimic our production environment very well: some front-end issues with concurrency simply don't exist in this setup.
My question:
What's an existing solution that I could leverage that would proxy requests asynchronously to multiple instances of the same PHP built-in server?
For example, I'd have a few terminal sessions running the built-in server on different ports, and each request would be routed to a different one of those instances. In other words, I want multiple instances of my application running in parallel using the simplest possible setup (no Apache or Nginx if possible).
A super-server, like inetd or tcpserver†, works well. I'm a fan of the latter:
tcpserver waits for incoming connections and, for each connection,
runs a program of your choice.
With that, you now want a proxy that pulls each connection off the wire and hands it over to a connection-specific PHP server. Pretty simple:
$ cat proxy-to-php-server.sh
#!/bin/bash -x
# pick a random port -- this could be improved (no check that it's free)
port=$(shuf -i 2048-65000 -n 1)
# start a PHP built-in server on that port in the background
php -S localhost:"${port}" -t "$(realpath "${1:?Missing path to serve}")" &
pid=$!
# give the server a moment to come up
sleep 1
# relay this connection's standard input/output to the server via nc
nc localhost "${port}"
# kill the server we started
kill "${pid}"
Ok, now you're all set. Start listening on your main port:
tcpserver -v -1 0 8080 ./proxy-to-php-server.sh ./path/to/your/code/
In English, this is what happens:
tcpserver starts listening on all interfaces at port 8080 (0 8080) and prints debug information on startup and each connection (-v -1)
For each connection on that port, tcpserver spawns the proxy helper, serving the given code path (path/to/your/code/). Pro tip: make this an absolute path.
The proxy script starts a purpose-built PHP web server on a random port. (This could be improved: script doesn't check if port is in use.)
Then the proxy script relays its standard input (coming from the connection tcpserver accepted) to the purpose-built server via nc, and nc's output back to the client.
The conversation happens, and then the proxy script kills the purpose-built server.
This should get you in the ballpark. I've not tested it extensively. (Only on GNU/Linux, CentOS 6 specifically.) You'll need to tweak the proxy's invocation of the built-in PHP server to match your use case.
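For the setup in the question, for example, that invocation would presumably become something like the following (untested; it merges the question's command with the script's random port):
APP_ENV=dev php -S localhost:"${port}" -c php.ini web/index.php &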
Note that this isn't a "load balancing" server, strictly: it's just a parallel ephemeral server. Don't expect too much production quality out of it!
† To install tcpserver:
$ curl -sS http://cr.yp.to/ucspi-tcp/ucspi-tcp-0.88.tar.gz | tar xzf -
$ cd ucspi-tcp-0.88/
$ curl -sS http://www.qmail.org/moni.csi.hu/pub/glibc-2.3.1/ucspi-tcp-0.88.errno.patch | patch -Np1
$ sed -i 's|/usr/local|/usr|' conf-home
$ make
$ sudo make setup check
I'm going to agree that replicating a virtual copy of your production environment is your best bet. You don't just want to surface issues, you want to surface the same issues you'd see in production. Also, there's little guarantee that you will hit all of the same issues under an alternate setup.
If you do want to do this, however, you don't have particularly many options. Either you direct incoming requests to an intermediate piece of software which then dispatches them to the PHP backends - this would be the Apache/Nginx solution - or you don't, and each request is handled directly by a single PHP thread.
If you're not willing to use that interposed software, there's only one layer left between you and the client: networking. You could, in theory, set up round-robin DNS for yourself. You give yourself multiple IPs, start a PHP server listening on each, and then let your client connections get spread across them. Note that this would pin each client to a specific process, which may not be the level of parallelism you're looking for.
I have set up a LAMP web server, and I am looking to run an application on the server side when the client clicks a button on the server's web interface. This application will look for a certain USB device by serial number, open it up, and send a packet of bytes to the device.
I have an index.html, which only has a button with an action to call my test.php file which uses shell_exec() to call my application.
When the application is invoked through the web interface, the application writes out an error indicating that it couldn't open the USB device (this is a built-in error for this application, so the application runs; it just cannot locate the USB device).
But when I invoke the application via the Terminal, the application finds the usb device and writes to it no problem.
I am looking for some advice! Simply put: is what I'm doing feasible? If so, how can I get the application to find the USB device when invoked via the web interface? I have a feeling it has something to do with permissions, but you never know.
test.php:
<?php
echo shell_exec("/home/pi/FDTI_test/FDTI_test_application");
?>
NOTE:
The usb device is connected, works great with its driver, and is connected to the server via usb.
The application works when invoked via the terminal on server side, but not when invoked via web interface.
I think you're on the right track with this being a permissions issue.
In a typical LAMP stack, PHP runs as a module inside the Apache process, unless you've configured it differently. In my server OS of choice, that process runs as the user 'www-data' by default.
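If you want to confirm which user your PHP actually runs as, a quick diagnostic from the web side (and appending 2>&1 to your own command will surface its stderr in the browser too):
<?php
// Prints the user the web server executes PHP as, e.g. 'www-data'.
echo shell_exec("whoami");
// Same as the question's test.php, but with stderr captured as well.
echo shell_exec("/home/pi/FDTI_test/FDTI_test_application 2>&1");
?>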
Probably the easiest solution would be to give sudo permission to your web user account, and set the sudoers file to NOPASSWD. This is very insecure, so only do this in rare cases.
<?php echo shell_exec("sudo /home/pi/FDTI_test/FDTI_test_application"); ?>
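The sudoers entry (edited via visudo) would look something like the line below, assuming Apache runs as www-data; restricting it to the one binary is at least less bad than a blanket NOPASSWD on everything:
www-data ALL=(ALL) NOPASSWD: /home/pi/FDTI_test/FDTI_test_application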
The next easiest option is to give the web user account permission to write to the USB device directly. Depending on your distribution, you may only need to add the user to the 'adm' group (or whichever group owns the device node; check with ls -l).
sudo usermod -a -G adm www-data
Again, this may not be the most secure method, but more secure than the first option.
Lastly, you could look into the hardest solution, which would be to install a patched version of Apache which allows suexec. This is about as insecure as the second option, but much more difficult to implement. (I would have included a link to a tutorial, but I'm limited to 2 links as this is my first answer.)
Hope This Helps!
I want to statically assign the IP address of my Arch Linux machine using PHP. I want to change the IP by setting the netmask, interface, broadcast, address, and gateway. The user puts the values into an HTML page, the HTML page posts the data to the PHP page, and I want to change the IP using this data. How do I do this?
Files can also be used, right? I was thinking of writing directly into rc.conf using file functions. Will this work, and how? I have my Arch Linux box up with Apache and PHP. Any help is appreciated! Thank you. :)
You should write yourself a shell script and launch that via PHP, instead of trying to accomplish such a task with PHP itself.
If you don't know how to do that, you should ask a related question on https://unix.stackexchange.com/.
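To sketch that division of labor (with the permission caveats from the other answers): /usr/local/bin/set-ip.sh below is a hypothetical script that would do the actual network reconfiguration; PHP only validates the input and hands it over.
<?php
// Validate the user-supplied values before they go anywhere near a shell.
$address = isset($_POST['address']) ? $_POST['address'] : '';
$netmask = isset($_POST['netmask']) ? $_POST['netmask'] : '';

if (!filter_var($address, FILTER_VALIDATE_IP) || !filter_var($netmask, FILTER_VALIDATE_IP)) {
    die('Invalid input');
}

// Hand the validated, escaped values to the shell script.
$cmd = '/usr/local/bin/set-ip.sh ' . escapeshellarg($address) . ' ' . escapeshellarg($netmask);
echo shell_exec($cmd . ' 2>&1');
?>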
Why would you use PHP to attempt to configure a server? You should configure the server using pre-existing tools and commands that are designed for that purpose.
$ su
# ifconfig <interface, typically eth0> down
# ifconfig eth0 192.168.1.105 netmask 255.255.255.0 up
# ifconfig eth0
You COULD wrap those commands in an exec() statement, but I don't see a PHP script having the necessary system permissions to complete them successfully.
Under normal conditions, you may not be able to do this from the web server directly, for security reasons.
There are several problems, like permissions on files under /etc, the security context of the user Apache runs as, etc.
One secure way is to create a cron task which runs under the root account and regularly checks for the existence of some file that can be generated by Apache (PHP).
Once the file appears, the cron task can reconfigure whatever is needed using ifconfig, with appropriate privileges, based on the content of this file.
Don't forget that your Apache should be configured to listen on all interfaces and not rely on IP-based VirtualHosts, or you will immediately lose your connection to it.
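The web-side half of that could be as small as the sketch below; /tmp/ip-change.req is a hypothetical path, and the root cron job would still have to re-validate the contents before acting on them:
<?php
$address = isset($_POST['address']) ? $_POST['address'] : '';

// Only write the request file if the value is a syntactically valid IP;
// the privileged cron task must not trust this file blindly.
if (filter_var($address, FILTER_VALIDATE_IP)) {
    file_put_contents('/tmp/ip-change.req', $address . "\n");
}
?>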
** Preface: I don't know much about networking. If I described the setup wrong, I'll try again. **
I have a server cluster of serverA, serverB, and serverC, all behind a firewall and on a switch. I want to move a file from serverA to serverB programmatically. In the past, when I had to move a file on serverA to another location on serverA, I just called exec("sudo mv file1 /home/user/file1");. Can I still do this when multiple servers are involved?
EDIT: All great responses, guys. I'll look into how the servers are clustered and find out if it's a mount or what's going on. Thank you EVERYONE! You guys are my heroes!
If you use a common share like NFS that is mounted on all the servers, you can use mv on a file.
If you don't have that option, you can transfer the file to another server using scp or rsync.
Well, first of all, you should use the native functions to move files around. See rename: http://us2.php.net/rename. It just means that you need to make sure the permissions are correct in both locations (likely they need to be owned by the Apache user).
But in answer to your actual question, it really depends on the setup. Generally, another server you could move files to would have a mount point, and it would look like any other directory, so you wouldn't need any changes to your code at all. This is probably the best way to do it.
If you have to use FTP or something like that, you'll need to use the appropriate libraries for whatever protocol is required.
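In the mount-point case, the code is the same whether the target is local or remote; a minimal sketch, where /mnt/serverB is an assumed NFS mount of serverB's filesystem:
<?php
$src = '/home/user/file1';
$dst = '/mnt/serverB/home/user/file1';

// rename() also copes with moves across filesystems in PHP.
if (!rename($src, $dst)) {
    die("Move failed - check permissions on both ends\n");
}
?>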
While this option is probably a bit too complicated to set up, let me point to UDP hole punching.
If the addresses of all servers are known and fixed, this technique is able to traverse firewalls and NATed networks.
In principle, hole punching works like this:
Let A and B be the two hosts, each in its own private network; N1 and N2 are the two NAT devices:
A and B try to create a UDP connection to each other
Most likely both attempts fail, since no holes are prepared yet
But: The NAT devices N1 and N2 create UDP translation states and assign temporary external port numbers
A and B contact each others' NAT devices directly on the translated ports; the NAT devices use the previously created translation states and send the packets to A and B
This even works if the addresses of A and B are unknown to each other; in that case, one needs a publicly known intermediate system S. See the Wikipedia article to learn more.
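For the sending side of those steps, PHP's socket extension is enough. A rough sketch; the peer address and the ports are placeholders:
<?php
// Bind first so our NAT creates a mapping for this source port,
// then fire a datagram at the peer's (translated) address and port.
$sock = socket_create(AF_INET, SOCK_DGRAM, SOL_UDP);
socket_bind($sock, '0.0.0.0', 40000);

// This outbound packet is what "punches" the hole in our own NAT.
socket_sendto($sock, "punch", 5, 0, '203.0.113.7', 40000);

// If the peer does the same towards us, this will eventually receive it.
$buf = '';
socket_recvfrom($sock, $buf, 512, 0, $peerAddr, $peerPort);
echo "Got '$buf' from $peerAddr:$peerPort\n";
?>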
You can use the Linux command-line tool scp to copy files over a network via SSH.
Make sure SSH keys are configured between the servers.
Example:
exec("scp [-Cr] [[user@]serverA:]/path/to/file [more ...] [[user@]serverB:]/path/to/file");
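If the PECL ssh2 extension is available, you can also do the transfer without shelling out; a sketch with placeholder host, user, and paths:
<?php
// Copy a local file to serverB over SSH using key-based auth.
$conn = ssh2_connect('serverB', 22);
ssh2_auth_pubkey_file($conn, 'user',
    '/home/user/.ssh/id_rsa.pub', '/home/user/.ssh/id_rsa');
ssh2_scp_send($conn, '/path/to/file', '/path/to/file', 0644);
?>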
I was wondering whether knockd (http://www.zeroflux.org/cgi-bin/cvstrac.cgi/knock/wiki) would be a good way to be able to restart Apache without logging in over SSH. But my programming question is whether there is a way to send TCP/UDP packets via PHP so I can knock from a web client.
I am aware that this is not the safest way of doing it, but I only want to do things like updating the SVN checkout and restarting Apache, without having any passwords in the script as I would when using SSH for it.
You may use the fsockopen() functions... but what you are doing (and the way you are doing it) is very risky from a security standpoint. As has been said, SSH is the way. :)
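For what it's worth, a TCP knock via fsockopen() could look roughly like the sketch below; the host and the port sequence are placeholders for whatever your knockd.conf defines:
<?php
// For TCP knocks the SYN packet is all knockd needs to see, so the
// connection itself is expected to fail or time out - errors are ignored.
$host = 'example.com';
foreach (array(7000, 8000, 9000) as $port) {
    $fp = @fsockopen($host, $port, $errno, $errstr, 1);
    if ($fp) {
        fclose($fp);
    }
    usleep(100000); // short pause between knocks
}
?>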
If you really want to restart the Apache server using remote (non-SSH) access, you can create a small PHP daemon that just watches for a specific file (e.g. /tmp/restart.apache) and, when that file appears, runs exec("/etc/init.d/apache restart") (or whatever the command is for your distribution). This daemon should run as root... and the thing is that the whole security burden is yours this way; you have to make sure this cannot get arbitrarily executed...
About your port-knock idea... a simple port scanner may restart your Apache by mistake. :) Port knocking is recommended for use in conjunction with SSH auth, not directly with Apache. :)
Seriously, you do not want to do what you're trying to do.
You should look into calling your remote server through some sort of secure protocol, like SSH. And on the client side, have a small PHP utility application/script that executes remote SSH commands (preferably with a keyfile-only authentication mechanism).
Why not have a PHP script that calls "svn update"? As long as the files are writable by the user Apache runs as, it works great. Just hit that URL to update the website.
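Something like this, where the working-copy path is an example:
<?php
// Pulls the latest revision into the working copy Apache serves.
echo shell_exec("svn update /var/www/site 2>&1");
?>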
For SVN there is a whole PHP API; try searching for SVN on php.net.