I have a problem where my hosting company won't let me run a cron job in this format from my control panel:
/usr/bin/php /home/sites/MYDOMAIN.com/index.php?option=com_community&task=cron
Or:
www.MYDOMAINNAME.com/index.php?option=com_community&task=cron
However, if I run the second job in a browser, i.e.:
www.MYDOMAINNAME.com/index.php?option=com_community&task=cron
it works fine.
My support says I have to create a file that requests the URL. The only problem is I don't know how to request a URL from PHP. I have asked on a few sites, but got nothing. One suggestion I got was to call the URL with lynx:
lynx -dump http://www.MYDOMAIN.com/index.php?option=com_community&task=cron
My file is called bump.php, and this is what I have in it:
<?php
echo file_get_contents('DOMAIN.com/index.php?option=com_community&task=cron');
?>
You have to access the file in question via your webserver, not directly from the filesystem. If you access it as a file, it will just return the PHP source code and not execute it.
There are several ways to access files via the webserver. One is the method you showed, file_get_contents. You will need to add http:// in front of the URL to tell PHP that you want it fetched remotely and not opened as a local file.
file_get_contents is not always allowed to download remote files; if the allow_url_fopen setting is disabled, it will not work. You can check this link for the configuration setting that controls remote file access:
http://www.php.net/manual/en/filesystem.configuration.php#ini.allow-url-fopen
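For example, bump.php could look like this (a minimal sketch, assuming allow_url_fopen is enabled, using the domain from your question):
<?php
// Request the cron URL over HTTP so the webserver executes the PHP
// instead of just returning its source code.
$result = file_get_contents('http://www.MYDOMAIN.com/index.php?option=com_community&task=cron');
echo $result;
?>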
Another solution is to use the cURL extension (if available):
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.example.com/");
// Don't include the response headers in the output.
curl_setopt($ch, CURLOPT_HEADER, 0);
// Without CURLOPT_RETURNTRANSFER, the response is printed directly.
curl_exec($ch);
curl_close($ch);
http://www.php.net/manual/en/function.curl-exec.php
There are other extensions you can use if cURL is not available either...
Related
I am using file_get_contents in PHP to get information from a client's collections on contentDM. CDM has an API so you can get that info by making php queries, like, say:
http://servername:port/webutilities/index.php?q=function/arguments
It has worked pretty well thus far, across computers and operating systems. However, this time the query works a little differently:
http://servername/utils/collection/mycollectionname/id/myid/filename/myname
For this query I fill in mycollectionname, myid, and myname with relevant values. myid and mycollectionname have to exist in the system, obviously. However, myname can be anything you want. When you run the query, it doesn't return a web page or anything to your browser; it just automatically downloads a file with myname as the file name and puts it in your local /Downloads folder.
I DON'T WISH TO DOWNLOAD THIS FILE. I just want to read the contents of the file it returns directly into PHP as a string. The file I am trying to get just contains xml data.
file_get_contents works to get the data in that file if I use it with PHP 7 and Apache on my laptop running Ubuntu. But on my desktop, which runs Windows 10 and XAMPP (Apache and PHP 5), I get this error (I've replaced sensitive data with ###):
Warning:
file_get_contents(###/utils/collection/###/id/1110/filename/1111.cpd):
failed to open stream: No such file or directory in
D:\Titus\Documents\GitHub\NativeAmericanSCArchive\NASCA-site\api\update.php
on line 18
My coworkers have been unable to help me, so I am curious if anyone here can confirm or deny whether this is an operating system issue or a PHP version issue, and whether there's a solid alternative method that is likely to work in PHP 5 on both Windows and Ubuntu.
file_get_contents() is a simple screwdriver. It's very good for getting data via simple GET requests where the headers, HTTP request method, timeout, cookie jar, redirects, and other important things do not matter.
fopen() with a stream context, or cURL with setopt, are power drills with every bit and option you can think of.
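For example, here is what a stream context looks like (a minimal sketch; it works with fopen and file_get_contents alike, and the URL is a placeholder):
<?php
// Build a context that controls the method, headers and timeout of the request.
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'GET',
        'header'  => "Accept: application/xml\r\n",
        'timeout' => 10, // seconds
    ),
));
$data = file_get_contents('http://example.com/api', false, $context);
?>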
In addition to this, due to some recent website hacks, we had to secure our sites more. In doing so, we discovered that file_get_contents stopped working, while cURL still worked.
I'm not 100% sure, but I believe this php.ini setting may have been blocking the file_get_contents request:
; Disable allow_url_fopen for security reasons
allow_url_fopen = 0
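You can check the effective value from within PHP (a one-line sketch):
<?php
// "1" means remote URLs are allowed; "0" or "" means they are blocked.
var_dump(ini_get('allow_url_fopen'));
?>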
Either way, our code now works with curl.
References:
http://25labs.com/alternative-for-file_get_contents-using-curl/
http://phpsec.org/projects/phpsecinfo/tests/allow_url_fopen.html
You can solve this problem by using the PHP cURL extension. Here is an example that does the same thing you were trying:
function curl($url)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    // Return the response as a string instead of printing it.
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    $data = curl_exec($ch);
    curl_close($ch);
    return $data;
}
$url = 'your_api_url';
$data = curl($url);
Finally, you can inspect your data with print_r($data). I hope this works for you.
Reference : http://php.net/manual/en/book.curl.php
I am working on a small project to download files from an FTP server using a web-based query. I have created an HTML form as a front end, which takes from the user the FTP server name and the file names, and then interacts with the PHP script to connect to the FTP server and download the files specified. I have kept both the HTML and PHP files on my university webserver. I am able to download files on the machine running the webserver when I run the PHP script directly from the command line on the server, but I am not able to download files on my local computer using a web browser.
| Web browser on local machine | <-- not able to download file -->
| PHP script on webserver machine | <-- able to download file --> | FTP server |
<?php
$targetFile = 'file.txt';
$curl = curl_init();

// Open a local file (next to this script) to store the download in.
$fh = fopen(dirname(__FILE__) . '/' . $targetFile, 'w+b');
if ($fh === false) {
    print "File not opened<br>";
    exit;
}

echo "<br>configuring curl...<br>";
curl_setopt($curl, CURLOPT_URL, "ftp://full_path_name_of_file");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($curl, CURLOPT_HEADER, 0);
curl_setopt($curl, CURLOPT_VERBOSE, 1);
// Write the response body directly to the open file handle.
curl_setopt($curl, CURLOPT_FILE, $fh);

curl_exec($curl);
echo curl_error($curl);
curl_close($curl);
fclose($fh);
?>
The file is downloaded successfully when I run this PHP script from the command line on the server machine. But when I invoke the script from the web browser on my personal machine, I get the error "File not opened".
Can you please tell me if there is any way I can download the file via my web browser?
Thanks!
this might be a file permission or ownership issue
Please check the permissions and ownership of the target directory. The CLI runs the script as your own user, while the webserver runs it as its own user (for example www-data), which may not be allowed to write there.
In order to debug this a bit better, you might use parts of the script provided here:
https://stackoverflow.com/a/10377285/1163786
check the php configuration
There is a difference between the PHP configuration for the CLI and the one for the webserver; the latter may have some restrictions compared to the CLI one. Please compare or diff the two php.ini files to see the configuration differences.
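For example, to find out which php.ini each environment loads (a small sketch):
<?php
// Serve this file through Apache, then compare with the CLI output of:
//   php -r 'echo php_ini_loaded_file(), PHP_EOL;'
echo php_ini_loaded_file();
?>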
the download itself is not initiated
The script downloads a file via cURL from an FTP server and stores it in a folder on your webserver, but you are not pointing the browser (client) to the downloaded file, nor initiating a download stream to the browser from the script.
I would add a check that the script is not called from the CLI, and then do a header forward to the downloaded file:
if (PHP_SAPI !== 'cli') {
    header('Location: ' . WWW_URL_SERVER . $path_to . $targetFile);
}
You can test the download standalone by using the "direct link" in your browser to initiate it. This is again a permission thing: this time the webserver itself serves the static file and needs permission to do so.
Referencing: Redirect page after process complete in PHP
I am a realtor/web designer. I am trying to write a script that downloads a zip file to my server from an FTP server. My website is all PHP/MySQL. The problem I am having is that I cannot actually log in to the FTP server: the file is provided through a link that is available to me, but access to the actual server is not. Here is the link I have access to:
ftp://1034733:ze3Kt699vy14#idx.living.net/idx_fl_ftp_down/idx_ftmyersbeach_dn/ftmyersbeach_data.zip
PHP's normal functions for accomplishing this give me a connection error; PHP does not have permission to access this server. Any solution to this problem would be a life saver for me. I am looking to use a cron job to run this script every day so I don't have to physically download the file and upload it to my (GoDaddy) server, which is my current solution (not a good one, I know!).
Also, I can figure out how to unzip the file myself, as I have done some work with PHP's zip extension, but any tips for an efficient way to do this would be appreciated as well. I am looking to access a text file inside the zip archive called "ftmyers_data.txt".
Do you have shell access to the server on which your PHP application runs? If so, you might be able to automate the retrieval using a shell script and the ftp shell command.
Use php_curl:
$curl = curl_init();
$fh = fopen("file.zip", 'w');
curl_setopt($curl, CURLOPT_URL, "ftp://1034733:ze3Kt699vy14#idx.living.net/idx_fl_ftp_down/idx_ftmyersbeach_dn/ftmyersbeach_data.zip");
// Return the downloaded data as a string...
curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
$result = curl_exec($curl);
// ...and write it to the local zip file.
fwrite($fh, $result);
fclose($fh);
curl_close($curl);
Easy as pi. :)
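If you then need "ftmyers_data.txt" from inside the archive, PHP's ZipArchive can read it straight into a string (a minimal sketch, assuming the zip extension is installed):
<?php
// Read a single file out of the downloaded archive without extracting it all.
$zip = new ZipArchive();
if ($zip->open('file.zip') === true) {
    $text = $zip->getFromName('ftmyers_data.txt');
    $zip->close();
    echo $text;
}
?>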
I need to implement a way to make POST calls to pages located on the same server or on another server. We cannot use include because the files we are calling usually connect to different databases or have functions with the same name.
I've been trying to implement this using cURL, and while it works perfectly when calling files on another server, I get absolutely nothing when making a call to the same server the calling file is on.
EDIT TO ADD SOME CODE:
A simplified version of what I'm doing:
File1.php
<?php
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "www.myserver.com/File2.php");
curl_setopt($ch, CURLOPT_FRESH_CONNECT, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_VERBOSE, true);
$result = curl_exec($ch);
curl_close($ch);
echo $result;
?>
File2.php
<?php
echo "I'M IN!!";
?>
After calling File1.php, I get nothing, but if File2.php is on another server then I get a result.
Any help?
I tried using both the server URL (http...) and the full path to the files (/home/wwww....)
Be aware that if you're issuing the cURL request to your own site, you're using the default session handler, and the page you're requesting via cURL uses the same session as the page generating the request, then you'll run into a deadlock situation.
The default session handler locks the session file for the duration of the page request. When you try to request another page using the same session, that subsequent request will hang until the request times out or the session file becomes available. Since you're doing an internal cURL request, the script running cURL holds a lock on the session file, and the cURL request can never complete, as the target page can never load the session.
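One common way around this deadlock (a sketch, assuming the default file-based session handler) is to release the session lock before issuing the internal request:
<?php
// Write and close the session so the target page can acquire it.
session_write_close();

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://www.myserver.com/File2.php");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch);
curl_close($ch);
echo $result;
?>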
This can also happen when you request your local server through its public domain and Apache cannot resolve it back to the local machine. You have to check which local IP Apache is using for that domain, then edit the /etc/hosts file and add a new row with the local IP plus your domain. For example:
My local IP for that domain in Apache's virtual host is 172.190.1.120 and my domain is mydomain.com, so I will add:
172.190.1.120 mydomain.com
Then your cURL request will work properly.
You should refactor your code. In addition to what Marc B mentioned, this approach will unnecessarily slow down your script (potentially by a large margin) and cause lots of confusion. No offense, but this is just an incredibly hacky fix for bad logic.
I am trying to pass some information from the server where the script is running to the next URL or server.
I tried using cURL, but I ran into the following two disadvantages:
if it cannot locate the file, it reports "file not found"
it waits until the execution of the remote file is completed.
How can I overcome both of these, either with cURL or with other commands?
Edit:
Now I would like to suppress the "file not found" error message displayed by cURL, even if the file really doesn't exist.
I do not want any output from the destination page, so I don't want to wait until the execution of the destination page is finished. I just want to trigger the code and continue with the rest of my script.
Example:
I am trying to build a log system that lives entirely on another webserver. A client website that implements the log system will send some of the data required by the system by calling a file on my webserver.
Code I am using:
// create a new cURL resource
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "http://example.com/log.php?data=data");
curl_setopt($ch, CURLOPT_HEADER, 0);
// grab URL and pass it to the browser
curl_exec($ch);
// close cURL resource, and free up system resources
curl_close($ch);
Why don't you want to use standard PHP features?
<?php
$logpage = file_get_contents('http://example.com/log.php?data=data');
echo $logpage;
?>
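Note that file_get_contents also waits for the remote page to finish. If the goal is only to trigger log.php and move on, one common workaround is to give cURL a very short timeout (a sketch; delivery is not guaranteed, and the URL is the placeholder from the question):
<?php
// Fire-and-forget: send the request, then stop waiting almost immediately.
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com/log.php?data=data");
curl_setopt($ch, CURLOPT_NOSIGNAL, 1);     // required for sub-second timeouts
curl_setopt($ch, CURLOPT_TIMEOUT_MS, 200); // wait at most 200 ms for a response
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_exec($ch); // returns false on timeout, which we deliberately ignore
curl_close($ch);
?>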