I know there are PHP functions that let a user download a file, but I have not seen a single one that lets a PHP script navigate to a file, download it, and store it in a specific directory.
So here is what I want to do. I have a web host that runs PHP applications, and I have a website with a calendar. The calendar has options in a side menu:
Tools ---> Export as doc
I want to write a PHP script that, every day, automatically goes to the calendar's Tools menu, downloads the calendar (called Team Calendar), and stores it on the web host where the script can use it.
For experimental purposes, let's assume the calendar URL is http://webdesign.about.com/od/php/ht
How do I go about this?
Thanks a bunch
EDIT: I tried wget, and below is what I got. How can I make it download the file as a doc from Tools?
[/cygdrive/c/documents and settings/omar.khawaja]$ wget http://confluence.com/display/prodsupport/Team+Calendar
--2011-06-02 16:33:43-- http://confluence.rogersdigitalmedia.com/display/prodsupport/Team+Calendar
Resolving confluence.com... 204.225.248.160
Connecting to confluence.com|204.225.248.160|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: http://confluence.com/login.action;jsessionid=2F13926CF763FE4F3862FAFC24AB81D7?os_destination=%2Fdisplay%2Fprodsupport%2FTeam%2BCalendar [following]
--2011-06-02 16:33:43-- http://confluence.com/login.action;jsessionid=2F13926CF763FE4F3862FAFC24AB81D7?os_destination=%2Fdisplay%2Fprodsupport%2FTeam%2BCalendar
Connecting to confluence.com|204.225.248.160|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7865 (7.7K) [text/html]
Saving to: `login.action;jsessionid=2F13926CF763FE4F3862FAFC24AB81D7#os_destination=%2Fdisplay%2Fprodsupport%2FTeam+Calendar'
100%[==============================================================================>] 7,865 --.-K/s in 0.04s
2011-06-02 16:33:43 (207 KB/s) - `login.action;jsessionid=2F13926CF763FE4F3862FAFC24AB81D7#os_destination=%2Fdisplay%2Fprodsupport%2FTeam+Calendar' saved [7865/7865]
You need to use a cron job on the server to do this. Have that cron job call a PHP script that downloads the doc and saves it to a directory on the web server.
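A minimal sketch of such a script, assuming cURL is available (all URLs, credentials, and paths below are placeholders, and the login field names are an assumption; your wget output shows the calendar URL redirects to a login form, so the script has to authenticate and keep the session cookie):

<?php
// fetch_calendar.php -- run daily from cron, e.g.:
//   0 6 * * * /usr/bin/php /var/www/scripts/fetch_calendar.php

$loginUrl  = 'http://confluence.com/login.action';
$exportUrl = 'http://confluence.com/display/prodsupport/Team+Calendar';
$dest      = '/var/www/calendars/team-calendar.doc';
$cookies   = '/tmp/calendar-cookies.txt';

// Log in first and store the session cookie.
$ch = curl_init($loginUrl);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'os_username' => 'myuser', // assumed login field names
    'os_password' => 'mypass',
)));
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookies);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_exec($ch);
curl_close($ch);

// Fetch the export with the saved session and write it to disk.
$ch = curl_init($exportUrl);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookies);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$doc = curl_exec($ch);
curl_close($ch);

if ($doc !== false) {
    file_put_contents($dest, $doc);
}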
I'm trying to get my blog's RSS feed and manipulate it in PHP. According to the documentation, the XML feed for any WordPress blog can be downloaded at this address:
http://www.example.com/feed/atom/
I've written some simple code that works fine on a test server, but won't work on my hosted server:
$feedUrl = 'http://www.example.com/blog/feed/atom/';
$rawFeed = file_get_contents($feedUrl);
$feedXML = new SimpleXmlElement($rawFeed);
The reason is that my hosting provider prevents scripts from making HTTP (port 80) connections back to the same server they run on.
How can I get access to the feed without making an HTTP request to the same server?
I have tried reading the path directly (i.e. /home/example.com/blog/feed/atom), but nothing is found, because a proper HTTP request is needed to generate the XML feed. I've also tried a cURL request, but got the same result.
It's a tricky problem! Thanks for any help!
Note: My solution needs to run on a non-WP page.
Some hosting providers let you set up cron jobs through their admin console, without needing command-line access. In that situation, you may be able to use a WP-CLI command to generate the feed output and save it to a file by appending something like "> filename.txt" to the command; see the sketch after the links below.
See here: http://wp-cli.org/
And possibly here: http://wp-cli.org/commands/eval-file/
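As a rough, untested sketch (feed-dump.php and the file names are illustrative, and the exact query setup may vary by WordPress version), the file run by wp eval-file could render the feed in-process, avoiding any HTTP request back to the same server:

<?php
// feed-dump.php -- intended usage from a cron job:
//   wp eval-file feed-dump.php > feed-cache.xml
// do_feed_atom() loads WordPress's Atom feed template directly, so the
// feed is generated inside PHP rather than over an HTTP request.
do_feed_atom(false); // false = the posts feed, not a comments feed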
I'm working on a Twilio project in PHP that will play back a frequently changing audio file.
Twilio's TwiML Voice documentation states to:
make sure your web server is sending the proper headers to inform us that the contents of the file have changed
Which headers are these, and how do I set them in PHP?
Which headers are these?
This is how caching works with Twilio:
Twilio requests the .mp3 from your server with a GET request. Your server sends back a 200 OK along with an ETag header. Twilio saves the ETag, as well as the mp3 file, in its database.
The next time Twilio sends a GET request to that URL, it sends the saved ETag back in an If-None-Match header. If the file has not changed since Twilio last accessed it, your server responds with 304 Not Modified and, crucially, does not resend the mp3 data. Twilio then uses the mp3 it has stored in its database. Reading the mp3 from its own database is much faster for Twilio than downloading it from your server again (and it saves your server bandwidth).
If you change the content of the mp3 served at that URL, then on Twilio's next GET request your server sends back a 200 OK with a new ETag, and Twilio downloads and caches the new file.
How do I set them in PHP?
header("ETag: \"uniqueID\");
When sending a file, the web server attaches an identifier for it in a header called ETag. When the file is requested again, the browser checks whether it already has a cached copy; if it does, it sends the saved identifier along with the request in an If-None-Match header. The server compares the identifiers: if they match, it responds with header("HTTP/1.1 304 Not Modified"); and no body; otherwise it sends the file normally.
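Putting that together, here is a minimal sketch of the full round trip (the mp3 path is a placeholder; hashing the file is just one convenient way to generate an ETag that changes with the content):

<?php
$file = '/var/www/audio/message.mp3';
$etag = '"' . md5_file($file) . '"'; // changes whenever the file changes

header('ETag: ' . $etag);

// If the client sent the same ETag back, its cached copy is still valid.
if (isset($_SERVER['HTTP_IF_NONE_MATCH']) &&
    trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    header('HTTP/1.1 304 Not Modified');
    exit; // crucially, no file data is sent
}

// New or changed file: send the full contents.
header('Content-Type: audio/mpeg');
header('Content-Length: ' . filesize($file));
readfile($file);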
One easy way to check is to add a fake key-value pair to the end of the URL, like http://yoururl.com/play.mp3?key=somevalue. Your site will still serve the same mp3 it would serve without the query string, but to Twilio it appears to be a new (uncached) URL.
Twilio uses Squid to cache MP3s. You can control how long an item is cached using the Cache-Control header:
Cache-Control: max-age=3600
http://wiki.squid-cache.org/SquidFaq/InnerWorkings#How_does_Squid_decide_when_to_refresh_a_cached_object.3F
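In PHP that header is set just like the others (one hour here is only an example TTL):

// Tell Twilio's Squid cache it may keep the file for up to an hour.
header('Cache-Control: max-age=3600');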
We have a script which downloads a CSV file. When we run the script from the command line on the EC2 instance it runs fine: it downloads the file and sends a success message to the user.
But if we run through a browser then we get:
error 324 (net::ERR_EMPTY_RESPONSE): The server closed the connection without sending any data.
When we checked on the back end, the file had been downloaded, but the success message sent after the download never reaches the browser.
We are using cURL to download from a remote location with authentication. The folder is owned by user and group "ec2-user" and has full permissions (777).
To summarize: the file is downloaded, but the browser never receives any data or the success message we print.
P.S.: The problem occurs when the downloaded file is 8-9 MB; with a smaller file, say 1 MB, it works. So either the script execution time, the download file size, or some EC2 instance config is blocking the response to the browser. The same script works perfectly on our GoDaddy Linux VPS. We have already increased the script's max execution time.
Sadly, this is a known problem without a good solution. There's a very long thread on the Amazon forum here: https://forums.aws.amazon.com/thread.jspa?threadID=33427. The solution offered there is to send a keep-alive message to stop the connection from being closed after 60 seconds of idleness. Not a great solution, but I don't think there's a better one unless Amazon fixes the problem, which doesn't seem likely given that the thread has been open for three years.
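One way to implement that workaround, sketched here under the assumption that the CSV is fetched with cURL (the URL, credentials, and paths are placeholders): emit a padding byte to the browser from cURL's progress callback so the connection never sits idle for a full 60 seconds.

<?php
// Keep the browser connection alive during a long cURL download by
// flushing a harmless padding byte every 30 seconds.
set_time_limit(0);
header('Content-Type: text/plain');

$out = fopen('/tmp/report.csv', 'w');
$ch  = curl_init('https://remote.example.com/report.csv');
curl_setopt($ch, CURLOPT_USERPWD, 'user:pass'); // assumed basic auth
curl_setopt($ch, CURLOPT_FILE, $out);
curl_setopt($ch, CURLOPT_NOPROGRESS, false);    // enable the callback below

$last = time();
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, function () use (&$last) {
    if (time() - $last >= 30) { // stay well under the 60-second timeout
        echo ' ';               // padding byte the browser will ignore
        @ob_flush();
        flush();
        $last = time();
    }
    return 0; // a non-zero return would abort the transfer
});

curl_exec($ch);
curl_close($ch);
fclose($out);
echo "Download complete\n";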
I tried using cURL to post to a local file and it fails. Can it be done? My two management systems are on the same server, and it seems unnecessary to traverse the Internet just to reach a file on the same hard drive.
Using localhost didn't do the trick.
I also tried posting to $_SERVER['DOCUMENT_ROOT'].'/dir/to/file.php' with the POST data. It's for an API that is encrypted, so I'm not sure exactly how it works. It's for a billing system I have, and I just realized that it sends data back (it's an API).
It's simply POST data and an XML response. I could write an HTML form with input fields and get the same result, but there isn't really anything else to know.
The main question is: Does curl have the ability to post to a local file or not?
it is post data. it's for an API that is encrypted so i'm not sure exactly how it works
Without further details, nobody can tell you what you should do.
But if it is indeed a script on the local server that receives POST data, then you can send a POST request to it using the URL:
$url = "https://$_SERVER[SERVER_NAME]/path/to/api.php";
And then receive its output from the cURL call.
$data = curl($url)->post(1)->postdata(array("billing"=>1234345))
->returntransfer(1)->exec();
// (you would use the cumbersome curl_setopt() calls instead)
So you get back the XML or JSON (or whatever) response.
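For reference, here is the same request written out with the actual curl_setopt() calls:

<?php
$url = "https://{$_SERVER['SERVER_NAME']}/path/to/api.php";

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('billing' => 1234345));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data = curl_exec($ch); // the XML/JSON response body
curl_close($ch);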
If they're on the same drive, then use file operations instead:
file_put_contents('/path/to/the/file', $contents);
Use cURL only if you absolutely NEED the HTTP layer involved for some reason, or if you're dealing with a remote server. Going over HTTP would also mean the 'target' script has to handle a file upload plus whatever other data you need to send, and then that script would end up doing file operations ANYWAY; in effect you've taken a round-the-world flight just to get from your living room to the kitchen.
file://localfilespec.ext worked for me. I had two files in the same folder on a Linux box, in a folder that is not served by my web server, and I used the file:// wrapper to post to file://test.php and it worked great. It's not pretty, but it'll work for dev until I move it to its final resting place.
Does curl have the ability to post to a local file or not?
To curl a local file, you need to set up an HTTP server, since file:// won't work here:
npm install http-server -g
Then run the HTTP server in the folder where the file is:
$ http-server
See: Using node.js as a simple web server.
Then test the curl request from the command line against localhost, for example:
curl http://127.0.0.1:8081/file.html
Then you can do the same in PHP.
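For example, the equivalent of that request in PHP:

// 8081 matches the port in the curl example above; http-server prints
// the port it actually binds when it starts.
$contents = file_get_contents('http://127.0.0.1:8081/file.html');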
I am using a cURL-based PHP application to make requests to another web server that handles them asynchronously. For each request I create a file named <databaseid>.req holding the info I will need when the response returns; the file name also serves as the request's identifier. The requests are made as HTTP XML POSTs. The file is written using:
file_put_contents("reqs/<databaseid>.req", $data, FILE_APPEND);
What happens is that while the requests are being generated in bulk (about 1,500 per second), the responses start coming back from the web server. Each response is caught by another script, which reads the <databaseid> from the response and opens the corresponding request file using:
$aResponse = file("reqs/<databaseid>.req");
Now what happens is that for about 15% of the requests, the file() call fails and generates an entry like this in the Apache log:
file(reqs/<databaseid>.req): failed to open stream: No such file or directory in <scriptname> on line <xyz>
A cleanup script that runs later has verified that the file did exist.
Any ideas?
There are functions to handle simultaneous file access, such as flock(), but it's normally easier to simply use a database. Any decent DBMS has already worked this out for you.
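If you do stay with plain files, here is a minimal flock() sketch (variable names are illustrative):

<?php
// Writer: hold an exclusive lock while appending the request info.
$fp = fopen("reqs/$databaseid.req", 'a');
if ($fp && flock($fp, LOCK_EX)) {
    fwrite($fp, $payload);
    fflush($fp);
    flock($fp, LOCK_UN);
    fclose($fp);
}

// Reader: hold a shared lock while reading the file back.
$fp = fopen("reqs/$databaseid.req", 'r');
if ($fp && flock($fp, LOCK_SH)) {
    $aResponse = array();
    while (($line = fgets($fp)) !== false) {
        $aResponse[] = $line;
    }
    flock($fp, LOCK_UN);
    fclose($fp);
}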