I wanted to know if there is any real difference between using PHP's SFTP support for file operations, like shown here, and ssh2_scp_send. I am planning on uploading multiple files per PHP session and don't want to spend too much time uploading them. If anyone is aware of how these two functions are implemented in PHP, it would be great to know so I can choose the right one.
Thanks Again.
SFTP and SCP are different protocols, though both run over SSH.
SCP can only copy (upload/download) files. It cannot do any other operation, like listing directory contents or deleting files.
SFTP is a full-fledged remote file system protocol.
SCP might be quicker in general, as it makes better use of the SSH virtual connection channel. But it's worth testing both.
Also note that SCP will generally work against *nix SSH servers only. SFTP is more universal.
For details on SFTP and SCP, refer to:
https://en.wikipedia.org/wiki/SSH_File_Transfer_Protocol
https://en.wikipedia.org/wiki/Secure_copy_protocol
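If you want to actually benchmark the two from PHP, as suggested above, a rough sketch of the SCP route for a batch of files might look like this (host, credentials, file names and paths are placeholders, not anything from the original question):

<?php
// SCP: copy several files over one SSH connection and time it.
// Host, credentials and paths are placeholders; requires the PECL ssh2 extension.
$conn = ssh2_connect('example.com', 22);
ssh2_auth_password($conn, 'user', 'password');

$files = ['report.csv', 'image.png', 'notes.txt'];
$start = microtime(true);
foreach ($files as $file) {
    // Each call copies one file; the SSH connection is reused.
    if (!ssh2_scp_send($conn, "/local/dir/$file", "/remote/dir/$file", 0644)) {
        echo "Failed to send $file\n";
    }
}
printf("SCP took %.2f s\n", microtime(true) - $start);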
ssh2_sftp() is used on an existing connection; you then use fopen() with the ssh2.sftp:// wrapper. However, this is also considered 'insecure' by most, which is where ssh2_scp_send()/ssh2_scp_recv() come in: they use Secure Copy, based on the BSD RCP protocol. SCP uses Secure Shell (SSH) for data transfer and uses the same mechanisms for authentication, thereby ensuring the authenticity and confidentiality of the data in transit. A client can send (upload) files to a server, optionally including their basic attributes (permissions, timestamps). Clients can also request files or directories from a server (download). SCP runs over TCP port 22 by default. Like RCP, there is no RFC that defines the specifics of the protocol.
http://www.php.net/manual/en/function.ssh2-scp-recv.php
http://en.wikipedia.org/wiki/Secure_copy
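For comparison, the ssh2_sftp()/fopen() route mentioned above would look roughly like this; the same placeholder host, credentials and paths are assumed:

<?php
// SFTP: upload the same files through the ssh2.sftp:// stream wrapper.
// Host, credentials and paths are placeholders; requires the PECL ssh2 extension.
$conn = ssh2_connect('example.com', 22);
ssh2_auth_password($conn, 'user', 'password');
$sftp = ssh2_sftp($conn);

$files = ['report.csv', 'image.png', 'notes.txt'];
$start = microtime(true);
foreach ($files as $file) {
    $local  = fopen("/local/dir/$file", 'rb');
    // intval() of the SFTP resource is the usual workaround for building the wrapper URL.
    $remote = fopen('ssh2.sftp://' . intval($sftp) . "/remote/dir/$file", 'wb');
    stream_copy_to_stream($local, $remote);
    fclose($local);
    fclose($remote);
}
printf("SFTP took %.2f s\n", microtime(true) - $start);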
So, here is the setup I have to work with:
I have five servers total in different locations. One server is purely a web server for hosting static files. The other four servers are solely FTP servers, each containing files uploaded by users through PHP scripts.
What I want to do is be able to choose the server with the most free space available and send the next user-uploaded file to it. I've searched around, and there doesn't seem to be any way to do that with only FTP commands.
I found a question about Determining the Free Space of an FTP Server, which showed that it is possible to create and periodically update a file with a Linux shell script, but the servers I have are, and will stay, Windows machines.
My only solution would be to host web servers on the FTP servers with a simple index.php that reports the free space determined by disk_free_space(), but that seems a bit much for something so simple.
All I'm looking for is a way to find out this information with FTP commands, or possibly a way to link the servers to a VPN somehow and use PHP to figure out the amount of free space, though I wouldn't know exactly how to do that, or even if it would work...
If you are using the IIS FTP server on the Windows machine, you can configure IIS to include the free disk space in the LIST command response.
In the IIS Manager, go to your FTP site and select the FTP Directory Browsing applet. There, in the Display following information in directory listing setting, check the Available bytes option.
Then, the LIST FTP command response will look like:
226-Directory has 27,906,826,240 bytes of disk space available.
226 Transfer complete.
You can test this with the WinSCP FTP client, which can make use of this information. Just go to the Space available tab of the Server and Protocol Information dialog.
(I'm the author of WinSCP)
Other FTP servers support other ways to retrieve free disk space.
See How to check free space in a FTP Server?
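If you want the PHP upload script itself to pick the server with the most free space, one option is to drive the FTP control connection directly and parse the multi-line 226 reply shown above. A rough sketch; the hosts, credentials and exact reply wording are assumptions, and there is no error handling:

<?php
// Read one (possibly multi-line) FTP reply; it ends with a "NNN <text>" line.
function readReply($fp) {
    $reply = '';
    while (($line = fgets($fp)) !== false) {
        $reply .= $line;
        if (preg_match('/^\d{3} /', $line)) {
            break;
        }
    }
    return $reply;
}

// Log in, issue LIST, and extract the "bytes of disk space available" figure.
function ftpFreeSpace($host, $user, $pass) {
    $fp = fsockopen($host, 21, $errno, $errstr, 10);
    readReply($fp);                                   // 220 welcome banner
    fwrite($fp, "USER $user\r\n"); readReply($fp);
    fwrite($fp, "PASS $pass\r\n"); readReply($fp);

    fwrite($fp, "PASV\r\n");                          // passive mode for the data connection
    preg_match('/\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)/', readReply($fp), $m);
    $data = fsockopen("$m[1].$m[2].$m[3].$m[4]", $m[5] * 256 + $m[6], $errno, $errstr, 10);

    fwrite($fp, "LIST\r\n");
    readReply($fp);                                   // 150 opening data connection
    stream_get_contents($data);                       // drain the listing itself
    fclose($data);

    $final = readReply($fp);                          // 226-Directory has ... bytes ...
    fwrite($fp, "QUIT\r\n");
    fclose($fp);

    if (preg_match('/([\d,]+) bytes of disk space available/', $final, $m)) {
        return (int) str_replace(',', '', $m[1]);
    }
    return null;
}

// Pick the FTP server with the most free space for the next upload.
$servers = ['ftp1.example.com', 'ftp2.example.com', 'ftp3.example.com', 'ftp4.example.com'];
$space = [];
foreach ($servers as $host) {
    $space[$host] = ftpFreeSpace($host, 'user', 'password');
}
arsort($space);
echo 'Upload to: ' . key($space) . "\n";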
I would like to host an FTP server on: mywebsite.domain/ftpserver.php
The XAMPP server runs on my computer.
The PHP part is important, because I would like to authenticate the users using their passwords in the MySQL database, and their directory's name is also stored in the database.
Or, if there is a free FTP server, how could I create FTP users from PHP?
I know that this question is some years old, but in my opinion, the accepted answer is not correct.
You can omit the webserver (XAMPP / Apache) and run a PHP script from the command line. This PHP script can listen on a TCP port (e.g. port 21, https://www.php.net/manual/en/function.socket-listen.php) and so it can receive (FTP) requests directly from a client. You will reach the server via mywebsite.domain. mywebsite.domain/yourscript.php is not necessary, because PHP will listen directly on the given port.
But there's a big drawback: you have to implement the complete FTP protocol yourself in PHP. That's quite a big task, and you have to know what you are doing.
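To give an idea of the scale of that task, here is a bare-bones sketch of the socket approach: a CLI PHP script that binds to the FTP control port and answers a couple of commands with hard-coded replies. Everything else (the full command set, data connections, the MySQL user lookup) would still have to be written; the replies below are illustrative only.

<?php
// Toy FTP "server": accepts one client at a time and answers a few commands.
// Binding to port 21 usually requires root; use a high port while testing.
$server = socket_create(AF_INET, SOCK_STREAM, SOL_TCP);
socket_set_option($server, SOL_SOCKET, SO_REUSEADDR, 1);
socket_bind($server, '0.0.0.0', 21);
socket_listen($server);

while (true) {
    $client = socket_accept($server);
    socket_write($client, "220 Toy PHP FTP server ready\r\n");

    while (true) {
        $line = socket_read($client, 1024, PHP_NORMAL_READ);
        if ($line === false || $line === '') {
            break;                                    // client went away
        }
        $cmd = strtoupper(trim($line));
        if ($cmd === '') {
            continue;                                 // stray CR/LF fragment
        }
        if (strpos($cmd, 'USER') === 0) {
            // This is where you would look the user up in your MySQL table.
            socket_write($client, "331 Password required\r\n");
        } elseif (strpos($cmd, 'QUIT') === 0) {
            socket_write($client, "221 Goodbye\r\n");
            break;
        } else {
            socket_write($client, "502 Command not implemented\r\n");
        }
    }
    socket_close($client);
}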
This cannot easily be done. PHP almost always works with a web server, serving HTTP and HTTPS requests, not FTP requests. You could configure it to answer FTP requests on port 21, as said in the other answer, but then you still have to process all the FTP requests.
A second point would be: why FTP? You can serve files with the HTTP and HTTPS protocols as well. The only limitation is that users cannot use a custom client; they have to use a browser.
I have a webpage that currently takes an upload from a user and stores this into a directory (/upload). [Linux based Server]
Instead of storing this on the server in that directory, I am looking for a way to transfer the file onto a local machine. [Running Ubuntu 12.04]
Assuming I already have public/private keys setup how might I go about doing this?
Current Ideas:
ftp transfer
rsync
Ideas:
1) Stop running anything on the server, and forward every byte to your local box. Just run ssh -N -R :8080:localhost:3000 remote.host.com This will allow anyone to hit http://remote.host.com:8080 and get your port 3000. (If you do port 80, you'll need to SSH in as root.) Performance will be kinda bad, and it won't be that reliable. But might be fine for real-time transfer where you're both online at once.
2) use inotifywait to watch the upload dir on the server, and trigger rsync from the server to your local box. (Requires exposing SSH port of your box to the world.) If you sometimes delete files, use unison bidirectional file sync instead. (Although unison doesn't work on long filenames or with lots of files.)
3) Leave the system as-is, and just run rsync from cron on your local box. (Ok, not realtime.)
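A PHP-flavoured variant of idea 2: push each file from the upload handler itself instead of watching the directory. Since you already have keys set up, the script can copy the file straight to your box with ssh2_scp_send (this requires the PECL ssh2 extension on the server and, like idea 2, your box's SSH port reachable from it). Host, user, key paths and directories below are placeholders:

<?php
// After handling the upload on the web server, push the file to the local
// machine over SSH. Host, user, key paths and directories are placeholders.
$uploaded = '/var/www/upload/' . basename($_FILES['file']['name']);
move_uploaded_file($_FILES['file']['tmp_name'], $uploaded);

$conn = ssh2_connect('my-home-box.example.com', 22);
ssh2_auth_pubkey_file($conn, 'me', '/var/www/.ssh/id_rsa.pub', '/var/www/.ssh/id_rsa');

if (!ssh2_scp_send($conn, $uploaded, '/home/me/uploads/' . basename($uploaded), 0644)) {
    error_log("Could not copy $uploaded to the local machine");
}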
Of course, most people just use Dropbox or similar.
So what would be the best way to have a backup of the code and DB? Is it downloading it locally via HTTP?
But I fear that is a security risk, as some hacker might get access to it.
I am looking into compressing the files and then encrypting the compressed file.
But I don't know what encryption I should use, and whether a Linux CLI tool is available for password-protected encryption?
Thanks
Arshdeep
The community over at Hacker News raves about Tarsnap. As per the site:
Tarsnap is a secure online backup service for BSD, Linux, OS X, Solaris, Cygwin, and can probably be compiled on many other UNIX-like operating systems. The Tarsnap client code provides a flexible and powerful command-line interface which can be used directly or via shell scripts.
Tarsnap is not free, but it is extremely cheap.
If you're worried about transports, use SSH. I tend to use replication over an SSH tunnel to keep a MySQL database in sync. A backup of the version control server (which is not the same as the deployment server) is passed by rsync over SSH. If you want to encrypt files locally you could use gpg, which would of course not work in tandem with the database replication; in that case you'd be forced to use a dump or snapshot of your database at regular intervals.
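If you go the dump-and-encrypt route, gpg's symmetric mode is one answer to the "password-protected encryption with a Linux CLI tool" part of the question. Here is a sketch of a backup script that shells out to mysqldump, tar and gpg; paths, the database name and the passphrase file are placeholders, and newer GnuPG versions may additionally need --pinentry-mode loopback:

<?php
// Backup sketch: dump the database, bundle it with the code, and
// password-protect the archive with symmetric gpg encryption.
// All paths, the database name and the passphrase file are placeholders.
$stamp   = date('Ymd-His');
$dump    = "/tmp/db-$stamp.sql";
$archive = "/tmp/backup-$stamp.tar.gz";

// 1. Dump the database (credentials taken from ~/.my.cnf here).
exec('mysqldump mydatabase > ' . escapeshellarg($dump), $out, $rc);
if ($rc !== 0) { exit("mysqldump failed\n"); }

// 2. Compress the code and the dump into one archive.
exec('tar czf ' . escapeshellarg($archive) . ' /var/www/mysite ' . escapeshellarg($dump), $out, $rc);
if ($rc !== 0) { exit("tar failed\n"); }

// 3. Encrypt the archive with a passphrase (produces $archive.gpg).
exec('gpg --batch --symmetric --cipher-algo AES256'
   . ' --passphrase-file /root/.backup-passphrase '
   . escapeshellarg($archive), $out, $rc);
if ($rc !== 0) { exit("gpg failed\n"); }

// The .gpg file can now be copied off-site (scp/rsync over SSH) and the
// unencrypted intermediates removed.
unlink($dump);
unlink($archive);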
You don't make that much sense here.
If you are downloading locally, then you don't go over public networks, so it is not an issue.
Unless you simply meant to download. But the question is: download what?
On the other hand, the issue of securing the upload (for the initial setup) and for maintenance is equally important.
Securing your resources, such as the code repository and database, is critical, but if you have SSH access to your server you already have an encrypted tunnel established, and transferring files over that tunnel (scp) is quite secure; if you are paranoid (or in need), you can bump up security by restricting the SSH server to protocol version 2 only.
I have a script on one server, and I want that script to create a file on another server of mine using PHP, NOT via FTP.
There are many ways to do this. I'd pick the first one myself because it's easiest to set up:
If you have PHP+Apache on the other server, just call some script on the other server using file_get_contents() with an HTTP URL as the filename, or use cURL if you need to POST the file contents as well (there's a rough sketch of this after the list).
If the servers are in the same network (LAN, VPN), you can use Windows shares/Samba or NFS to mount a remote directory into your local filesystem and simply write to the file directly using the fopen()/fwrite() functions
Use SSH via SCP or SFTP
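For the first option, the sending side could look roughly like this; the URL, the 'upload' field name and the shared-secret token are made up for illustration:

<?php
// Sender: POST a local file to a script on the other server using cURL.
// URL, field names and the token are assumptions, not a fixed API.
$ch = curl_init('https://other-server.example.com/receive.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, [
    'token'  => 'shared-secret',                     // simple auth, assumed
    'upload' => new CURLFile('/path/to/local/file.txt'),
]);
$response = curl_exec($ch);
if ($response === false) {
    echo 'Upload failed: ' . curl_error($ch) . "\n";
}
curl_close($ch);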
PHP allows sending files across SSH - see the ssh2* family of functions, in particular ssh2_scp_send and ssh2_scp_recv.
I've never used them myself, but the infrastructure is there in Linux, just like SMB in Windows.
In general, FTP is the only regularly and easily available way (in PHP) to create a file on another server.
There are of course other protocols that enable you to create a file, but they all require installation of software on either one or both servers:
Samba (would enable access to the remote server through an absolute file path)
WebDaV (PHP client libraries available)
SCP (Finding a PHP client is probably going to be hard)
If both servers run PHP, it's probably easiest to set up a PHP script on the remote server that accepts file data through POST and writes it out to a local file. Not a perfect solution, though, due to the limits usually imposed on POST uploads.
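A minimal sketch of such a receiving script; the field names and the shared-secret check are assumptions and not a complete security scheme:

<?php
// receive.php on the remote server: accept a POSTed file and store it locally.
// Field names and the token check are assumptions.
if (!isset($_POST['token']) || $_POST['token'] !== 'shared-secret') {
    http_response_code(403);
    exit('Forbidden');
}
if (!isset($_FILES['upload']) || $_FILES['upload']['error'] !== UPLOAD_ERR_OK) {
    http_response_code(400);
    exit('No file received');
}
$target = __DIR__ . '/incoming/' . basename($_FILES['upload']['name']);
if (!move_uploaded_file($_FILES['upload']['tmp_name'], $target)) {
    http_response_code(500);
    exit('Could not store file');
}
echo 'Stored as ' . $target;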
You could always use DAV, but it might require some configuration on the receiving server. There is also SSHFS, which lets you easily mount the remote directory locally over an SSH tunnel, or just use the ssh2_* family of functions as Andy Shellam suggested.
Really, there are lots of ways to accomplish this.