Is it possible to make cURL access a URL and return the result as a file resource, the way fopen() does?
My goals:
Parse a CSV file
Pass it to fgetcsv
My obstacle: fopen() is disabled.
My chunk of code (using fopen):
$url = "http://download.finance.yahoo.com/d/quotes.csv?s=USDEUR=X&f=sl1d1t1n&e=.csv";
$f = fopen($url, 'r');
print_r(fgetcsv($f));
Then I tried this with cURL:
$curl = curl_init();
curl_setopt($curl, CURLOPT_VERBOSE, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, false);
curl_setopt($curl, CURLOPT_POST, true);
curl_setopt($curl, CURLOPT_POSTFIELDS, $param);
curl_setopt($curl, CURLOPT_URL, $url);
$content = @curl_exec($curl);
curl_close($curl);
But, as usual, $content comes back as a string.
Now, is it possible for cURL to return a file resource pointer, just like fopen()? I'm on PHP < 5.3 (5.1.x or so), so I can't use str_getcsv(), which is only available in 5.3.
My error
Warning: fgetcsv() expects parameter 1 to be resource, boolean given
Thanks
Assuming that by "fopen is disabled" you mean allow_url_fopen is disabled, a combination of CURLOPT_FILE and php://temp makes this fairly easy:
$f = fopen('php://temp', 'w+');
$curl = curl_init();
curl_setopt($curl, CURLOPT_URL, $url);
curl_setopt($curl, CURLOPT_FILE, $f);
// Do you need these? Your fopen() method isn't a post request
// curl_setopt($curl, CURLOPT_POST, true);
// curl_setopt($curl, CURLOPT_POSTFIELDS, $param);
curl_exec($curl);
curl_close($curl);
rewind($f);
while ($line = fgetcsv($f)) {
    print_r($line);
}
fclose($f);
Basically this creates a pointer to a "virtual" file, and cURL stores the response in it. Then you just rewind the pointer to the beginning, and the stream can be treated as if you had opened it with fopen($url, 'r').
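One refinement worth knowing: php://temp keeps the data in memory until it crosses a threshold (2 MB by default) and then transparently spills to a temporary file on disk. If the CSV may be large, you can set that threshold explicitly:

// Keep up to 5 MB in memory, then spill to a temp file on disk.
$f = fopen('php://temp/maxmemory:' . (5 * 1024 * 1024), 'w+');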
You can create a temporary file using fopen() and then fwrite() the contents into it. After that, the newly created file is readable by fgetcsv(). The tempnam() function handles the creation of uniquely named temporary files.
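A minimal sketch of that approach, using the URL from the question (sys_get_temp_dir() needs PHP 5.2.1+; substitute any writable directory on older versions):

// Fetch the CSV with cURL, write it to a uniquely named temporary file,
// then read it back with fgetcsv() as if it were an ordinary local file.
$tmpFile = tempnam(sys_get_temp_dir(), 'csv');

$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
$body = curl_exec($curl);
curl_close($curl);

$fw = fopen($tmpFile, 'w');
fwrite($fw, $body);
fclose($fw);

$f = fopen($tmpFile, 'r');
while ($line = fgetcsv($f)) {
    print_r($line);
}
fclose($f);
unlink($tmpFile); // clean up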
According to the comments on str_getcsv(), users without access to the function could try the one below. There are also various other approaches in those comments, so make sure you check them out.
function str_getcsv($input, $delimiter = ',', $enclosure = '"', $escape = '\\', $eol = '\n') {
    if (is_string($input) && !empty($input)) {
        $output = array();
        $tmp = preg_split("/".$eol."/", $input);
        if (is_array($tmp) && !empty($tmp)) {
            while (list($line_num, $line) = each($tmp)) {
                if (preg_match("/".$escape.$enclosure."/", $line)) {
                    while ($strlen = strlen($line)) {
                        $pos_delimiter       = strpos($line, $delimiter);
                        $pos_enclosure_start = strpos($line, $enclosure);
                        if (
                            is_int($pos_delimiter) && is_int($pos_enclosure_start)
                            && ($pos_enclosure_start < $pos_delimiter)
                        ) {
                            $enclosed_str = substr($line, 1);
                            $pos_enclosure_end = strpos($enclosed_str, $enclosure);
                            $enclosed_str = substr($enclosed_str, 0, $pos_enclosure_end);
                            $output[$line_num][] = $enclosed_str;
                            $offset = $pos_enclosure_end + 3;
                        } else {
                            if (empty($pos_delimiter) && empty($pos_enclosure_start)) {
                                $output[$line_num][] = substr($line, 0);
                                $offset = strlen($line);
                            } else {
                                $output[$line_num][] = substr($line, 0, $pos_delimiter);
                                $offset = (
                                    !empty($pos_enclosure_start)
                                    && ($pos_enclosure_start < $pos_delimiter)
                                )
                                    ? $pos_enclosure_start
                                    : $pos_delimiter + 1;
                            }
                        }
                        $line = substr($line, $offset);
                    }
                } else {
                    $line = preg_split("/".$delimiter."/", $line);
                    /*
                     * Validate against pesky extra line breaks creating false rows.
                     */
                    if (is_array($line) && !empty($line[0])) {
                        $output[$line_num] = $line;
                    }
                }
            }
            return $output;
        } else {
            return false;
        }
    } else {
        return false;
    }
}
How can I make sure that the file exists on the server, and find out its size from the URL, without downloading the file first?
$url = 'http://site.zz/file.jpg';
file_exists($url); // always false
filesize($url);    // not working
Can anyone help with a working example, please?
The file_exists() function only works on files that exist locally on the server.
Similarly, filesize() returns the size of a file that exists locally on the server.
If you are trying to get the size of a file at a given URL, you can try this approach:
function get_remote_file_info($url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
    curl_setopt($ch, CURLOPT_HEADER, TRUE);
    curl_setopt($ch, CURLOPT_NOBODY, TRUE);
    $data = curl_exec($ch);
    $fileSize = curl_getinfo($ch, CURLINFO_CONTENT_LENGTH_DOWNLOAD);
    $httpResponseCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    return [
        'fileExists' => (int) $httpResponseCode == 200,
        'fileSize'   => (int) $fileSize
    ];
}
Usage:
$url = 'http://site.zz/file.jpg';
$result = get_remote_file_info($url);
var_dump($result);
Example output:
array(2) {
["fileExists"]=>
bool(true)
["fileSize"]=>
int(12345)
}
Without any libraries or opening the file:
$data = get_headers($url, true);
$size = isset($data['Content-Length']) ? (int) $data['Content-Length'] : 0;
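Note that get_headers() issues a GET request by default; if you only want the headers, you can switch the default stream context to HEAD first (this also affects other stream-based calls made afterwards):

// Ask get_headers() to issue a HEAD request instead of a full GET.
stream_context_set_default(array('http' => array('method' => 'HEAD')));

$data = get_headers($url, true);
$size = isset($data['Content-Length']) ? (int) $data['Content-Length'] : 0;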
Open remote files:
function fsize($path) {
    $fp = fopen($path, "r");
    $inf = stream_get_meta_data($fp);
    fclose($fp);

    foreach ($inf["wrapper_data"] as $v) {
        if (stristr($v, "content-length")) {
            $v = explode(":", $v);
            return trim($v[1]);
        }
    }

    return 0;
}
Usage:
$file = "https://zzz.org/file.jpg";
$inbytes = fsize($file);
Use sockets:
function getRemoteFileSize($url) {
    $parse = parse_url($url);
    $host = $parse['host'];
    $fp = @fsockopen($host, 80, $errno, $errstr, 20);
    if (!$fp) {
        return 0;
    }

    fputs($fp, "HEAD ".$url." HTTP/1.1\r\n");
    fputs($fp, "HOST: ".$host."\r\n");
    fputs($fp, "Connection: close\r\n\r\n");

    $headers = "";
    while (!feof($fp)) {
        $headers .= fgets($fp, 128);
    }
    fclose($fp);

    $headers = strtolower($headers);
    $array = preg_split("|[\s,]+|", $headers);
    $key = array_search('content-length:', $array);
    $ret = $array[$key + 1];

    if ($array[1] == 200) return $ret;
    else return -1 * $array[1];
}
You can't access the file size of a remote file this way.
You have to check it against a local file path.
I am trying to search (filter) for files in a Dropbox folder, but no files are found even though there are files that match the filter. I am not using the PHP library provided by Dropbox.
Here is an extract of the code:
class Dropbox {
    private $headers = array();
    private $authQueryString = "";
    public $SubFolders = array();
    public $Files = array();

    function __construct() {
        $this->headers = array('Authorization: OAuth oauth_version="1.0", oauth_signature_method="PLAINTEXT", oauth_consumer_key="'.DROPBOX_APP_KEY.'", oauth_token="'.DROPBOX_OAUTH_ACCESS_TOKEN.'", oauth_signature="'.DROPBOX_APP_SECRET.'&'.DROPBOX_OAUTH_ACCESS_SECRET.'"');
        $this->authQueryString = "oauth_consumer_key=".DROPBOX_APP_KEY."&oauth_token=".DROPBOX_OAUTH_ACCESS_TOKEN."&oauth_signature_method=PLAINTEXT&oauth_signature=".DROPBOX_APP_SECRET."%26".DROPBOX_OAUTH_ACCESS_SECRET."&oauth_version=1.0";
    }

    public function GetFolder($folder, $fileFilter = "") {
        //Add the required folder to the end of the base path for the call
        if ($fileFilter == "")
            $subPath = "metadata/sandbox";
        else
            $subPath = "search/sandbox";
        if (strlen($folder) > 1) {
            $subPath .= (substr($folder, 0, 1) != "/" ? "/" : "").$folder;
        }

        //Set up the post parameters for the call
        $params = null;
        if ($fileFilter != "") {
            $params = array(
                "query" => $fileFilter
            );
        }

        //Clear the sub folders and files logged
        $this->SubFolders = array();
        $this->Files = array();

        //Make the call
        $content = $this->doCall($subPath, $params);

        //Log the files and folders
        for ($i = 0; $i < sizeof($content->contents); $i++) {
            $f = $content->contents[$i];
            if ($f->is_dir == "1") {
                array_push($this->SubFolders, $f->path);
            } else {
                array_push($this->Files, $f->path);
            }
        }

        //Return the content
        return $content;
    }

    private function doCall($urlSubPath, $params = null, $filePathName = null, $useAPIContentPath = false) {
        //Create the full URL for the call
        $url = "https://api".($useAPIContentPath ? "-content" : "").".dropbox.com/1/".$urlSubPath;

        //Initialise the curl call
        $ch = curl_init();

        //Set up the curl call
        curl_setopt($ch, CURLOPT_HTTPHEADER, $this->headers);
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        if ($params != null)
            curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
        $fh = null;
        if ($filePathName != null) {
            $fh = fopen($filePathName, "rb");
            curl_setopt($ch, CURLOPT_BINARYTRANSFER, true);
            curl_setopt($ch, CURLOPT_INFILE, $fh);
            curl_setopt($ch, CURLOPT_INFILESIZE, filesize($filePathName));
        }

        //Execute and get the response
        $api_response = curl_exec($ch);
        if ($fh != null)
            fclose($fh);

        //Process the response into an object
        $json_response = json_decode($api_response);

        //Has there been an error?
        if (isset($json_response->error)) {
            throw new Exception($json_response->error);
        }

        //Send the response back
        return $json_response;
    }
}
I then call the GetFolder method of Dropbox as such:
$dbx = new Dropbox();
$filter = "MyFilter";
$dbx->GetFolder("MyFolder", $filter);
print "Num files: ".sizeof($dbx->Files);
As I am passing $filter into GetFolder, it uses the search/sandbox path and creates a parameter array ($params) with the required query parameter in it.
The process works fine if I don't provide the $fileFilter parameter to GetFolder and all files in the folder are returned (uses the metadata/sandbox path).
Other methods of the Dropbox class (not shown here for brevity) use the $params feature and they too work fine.
I have been using the Dropbox Core API reference for guidance (https://www.dropbox.com/developers/core/docs#search).
At first glance, it looks like you're making a GET request to /search but passing parameters via CURLOPT_POSTFIELDS. Try using a POST or encoding the search query as a query string parameter.
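If you'd rather keep it a GET, a rough sketch against the question's GetFolder()/doCall() would be to append the query to the URL (assuming the v1 /search endpoint accepts it as a URL parameter; untested):

// Encode the search term as a query string parameter instead of a POST body.
$subPath = "search/sandbox/" . ltrim($folder, "/")
         . "?" . http_build_query(array("query" => $fileFilter));
$content = $this->doCall($subPath);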
EDIT
Below is some code that works for me (usage: php search.php <term>). Note that I'm using OAuth 2 instead of OAuth 1, so my Authorization header looks different from yours.
<?php
$access_token = '<REDACTED>';
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, 'https://api.dropbox.com/1/search/auto');
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Authorization:Bearer ' . $access_token));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('query' => $argv[1]));
$api_response = curl_exec($ch);
echo "Matching files:\n\t" . join("\n\t",
array_map(function ($file) {
return $file['path'];
}, json_decode($api_response, true)))."\n";
?>
Is there an alternative to file_get_contents? This is the code I'm having issues with:
if (!$img = file_get_contents($imgurl)) {
    $error[] = "Couldn't find the file named $card.$format at $defaultauto";
} else {
    if (!file_put_contents($filename, $img)) {
        $error[] = "Failed to upload $filename";
    } else {
        $success[] = "All missing cards have been uploaded";
    }
}
I tried using cURL but couldn't quite figure out how to accomplish what this is accomplishing. Any help is appreciated!
There are many alternatives to file_get_contents(); I've posted a couple of them below.
fopen
function fOpenRequest($url) {
    $file = fopen($url, 'r');
    $data = stream_get_contents($file);
    fclose($file);
    return $data;
}
$fopen = fOpenRequest('https://www.example.com');// This returns the data using fopen.
curl
function curlRequest($url) {
    $c = curl_init();
    curl_setopt($c, CURLOPT_URL, $url);
    curl_setopt($c, CURLOPT_RETURNTRANSFER, true);
    $data = curl_exec($c);
    curl_close($c);
    return $data;
}
$curl = curlRequest('https://www.example.com');// This returns the data using curl.
You could use either of these options, with the data stored in a variable, to perform what you need to.
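For example, plugging the cURL helper into the snippet from the question ($imgurl, $card, $format, $defaultauto and $filename are the question's variables):

// Fetch the image with the cURL helper instead of file_get_contents().
if (!$img = curlRequest($imgurl)) {
    $error[] = "Couldn't find the file named $card.$format at $defaultauto";
} elseif (!file_put_contents($filename, $img)) {
    $error[] = "Failed to upload $filename";
} else {
    $success[] = "All missing cards have been uploaded";
}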
I am writing a PHP program that downloads a PDF from a backend and saves it to a local drive. How do I check whether the file exists before downloading it?
Currently I am using cURL (see code below) to check and download, but it still saves a file of about 1 KB even when the PDF does not exist.
$url = "http://wedsite/test.pdf";
$path = "C:\\test.pdf;"
downloadAndSave($url,$path);
function downloadAndSave($urlS,$pathS)
{
$fp = fopen($pathS, 'w');
$ch = curl_init($urlS);
curl_setopt($ch, CURLOPT_FILE, $fp);
$data = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
echo $httpCode;
//If 404 is returned, then file is not found.
if(strcmp($httpCode,"404") == 1)
{
echo $httpCode;
echo $urlS;
}
fclose($fp);
}
I want to check whether the file exists before even downloading. Any idea how to do it?
You can do this with a separate curl HEAD request:
curl_setopt($ch, CURLOPT_NOBODY, true);
$data = curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
When you actually want to download the file, set CURLOPT_NOBODY back to false.
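Wired into the code from the question, that could look roughly like this (a sketch; downloadAndSave(), $url and $path are taken from the question):

// Check with a HEAD-style request first, then download only if the file is there.
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_NOBODY, true);         // headers only, no body
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // don't echo the response
curl_exec($ch);
$httpCode = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

if ($httpCode != 404) {
    downloadAndSave($url, $path); // function from the question
}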
Call this before your download function and it's done:
<?php
function remoteFileExists($url) {
    $curl = curl_init($url);

    //don't fetch the actual page, you only want to check the connection is ok
    curl_setopt($curl, CURLOPT_NOBODY, true);

    //do request
    $result = curl_exec($curl);

    $ret = false;

    //if request did not fail
    if ($result !== false) {
        //if request was ok, check response code
        $statusCode = curl_getinfo($curl, CURLINFO_HTTP_CODE);
        if ($statusCode == 200) {
            $ret = true;
        }
    }

    curl_close($curl);

    return $ret;
}
?>
Since you are using HTTP to fetch a resource on the internet, what you really want to check is that the return code is a 404.
On some PHP installations, you can just use file_exists($url) out of the box. This does not work in all environments, however. http://www.php.net/manual/en/wrappers.http.php
Here is a function much like file_exists() but for URLs (despite its name it uses get_headers() rather than cURL):
<?php
function curl_exists($url) {
    $file_headers = @get_headers($url);
    if ($file_headers[0] == 'HTTP/1.1 404 Not Found') {
        $exists = false;
    } else {
        $exists = true;
    }
    return $exists;
}
?>
source: http://www.php.net/manual/en/function.file-exists.php#75064
Sometimes the CURL extension isn't installed with PHP. In that case you can still use the socket library in the PHP core:
<?php
function url_exists($url) {
    $a_url = parse_url($url);
    if (!isset($a_url['port'])) $a_url['port'] = 80;
    $errno = 0;
    $errstr = '';
    $timeout = 30;
    if (isset($a_url['host']) && $a_url['host'] != gethostbyname($a_url['host'])) {
        $fid = fsockopen($a_url['host'], $a_url['port'], $errno, $errstr, $timeout);
        if (!$fid) return false;
        $page  = isset($a_url['path'])  ? $a_url['path'] : '';
        $page .= isset($a_url['query']) ? '?'.$a_url['query'] : '';
        fputs($fid, 'HEAD '.$page.' HTTP/1.0'."\r\n".'Host: '.$a_url['host']."\r\n\r\n");
        $head = fread($fid, 4096);
        $head = substr($head, 0, strpos($head, 'Connection: close'));
        fclose($fid);
        if (preg_match('#^HTTP/.*\s+(200|302)\s#i', $head)) {
            $pos = strpos($head, 'Content-Type');
            return $pos !== false;
        }
        return false;
    } else {
        return false;
    }
}
?>
source: http://www.php.net/manual/en/function.file-exists.php#73175
An even faster function can be found here:
http://www.php.net/manual/en/function.file-exists.php#76246
In the first example above, $file_headers[0] may contain more than, or something other than, 'HTTP/1.1 404 Not Found', e.g.:
HTTP/1.1 404 Document+%2Fdb%2Fscotbiz%2Freports%2FR20131212%2Exml+not+found
So it's important to use some other test, e.g., regex, as '==' is not reliable.
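For example, a check that looks only at the status code in the first header line (a small sketch along those lines):

// Test just the status code in the status line, e.g. "HTTP/1.1 404 ...".
$file_headers = @get_headers($url);
$exists = $file_headers !== false
       && !preg_match('#^HTTP/\S+\s+404\b#', $file_headers[0]);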
I have implemented a function that runs on each page I want to restrict to logged-in users. The function automatically redirects the visitor to the login page if he or she is not logged in.
I would like to make a PHP function, run from an external server, that iterates through a set of URLs (an array of URLs, one for each protected page) and checks whether each one redirects. That way I could easily make sure the protection is up and running on every page.
How could this be done?
Thanks.
$urls = array(
    'http://www.apple.com/imac',
    'http://www.google.com/'
);

$ch = curl_init();
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

foreach ($urls as $url) {
    curl_setopt($ch, CURLOPT_URL, $url);
    $out = curl_exec($ch);

    // line endings is the wonkiest piece of this whole thing
    $out = str_replace("\r", "", $out);

    // only look at the headers
    $headers_end = strpos($out, "\n\n");
    if ($headers_end !== false) {
        $out = substr($out, 0, $headers_end);
    }

    $headers = explode("\n", $out);
    foreach ($headers as $header) {
        if (substr($header, 0, 10) == "Location: ") {
            $target = substr($header, 10);
            echo "[$url] redirects to [$target]<br>";
            continue 2;
        }
    }

    echo "[$url] does not redirect<br>";
}
I use cURL to fetch only the headers, then compare my URL with the final URL reported by cURL:
$url="http://google.com";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_TIMEOUT, '60'); // in seconds
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_NOBODY, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$res = curl_exec($ch);
if (curl_getinfo($ch)['url'] == $url) {
    echo "not redirect";
} else {
    echo "redirect";
}
You could always try adding:
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
since 302 means the resource moved; letting the cURL call follow it will return whatever the new URL returns.
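With CURLOPT_FOLLOWLOCATION enabled you can also ask cURL afterwards whether any redirect actually happened ($ch and $url as in the answer above); a short sketch:

curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$out = curl_exec($ch);

// After following redirects, cURL can report how many hops it took
// and which URL it finally ended up on.
$redirects = curl_getinfo($ch, CURLINFO_REDIRECT_COUNT);
$finalUrl  = curl_getinfo($ch, CURLINFO_EFFECTIVE_URL);
echo $redirects > 0 ? "[$url] redirects to [$finalUrl]" : "[$url] does not redirect";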
Getting the headers with get_headers() and checking if Location is set is much simpler.
$urls = [
    "https://example-1.com",
    "https://example-2.com"
];

foreach ($urls as $key => $url) {
    $is_redirect = does_url_redirect($url) ? 'yes' : 'no';
    echo $url . ' is redirected: ' . $is_redirect . PHP_EOL;
}

function does_url_redirect($url) {
    $headers = get_headers($url, 1);
    if (!empty($headers['Location'])) {
        return true;
    } else {
        return false;
    }
}
I'm not sure whether this really makes sense as a security check.
If you are worried about files being called directly without your "is the user logged in?" check being run, you could do what many big PHP projects do: in the central include file (where the security check is done), define a constant BOOTSTRAP_LOADED or whatever, and in every file check whether that constant is set.
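A minimal sketch of that pattern (file and constant names are only examples):

// bootstrap.php - included first on every page; this is where the login check runs.
define('BOOTSTRAP_LOADED', true);

// protected_page.php - refuses to run when called directly.
if (!defined('BOOTSTRAP_LOADED')) {
    die('Direct access is not permitted');
}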
Testing is great and security testing is even better, but I'm not sure what kind of flaw you are looking to uncover with this. To me, this idea feels like a waste of time that will not bring any real additional security.
Just make sure your script die()s after the header("Location: ...") redirect; that is essential to stop additional content from being displayed after the header call. (A missing die() wouldn't be caught by your idea anyway, as the redirect header would still be issued.)
If you really want to do this, you could also use a tool like wget and feed it a list of URLs. Have it fetch the results into a directory, and check (e.g. by looking at the file sizes that should be identical) whether every page contains the login dialog. Just to add another option...
Do you want to check the HTTP code to see if it's a redirect?
$params = array('http' => array(
    'method' => 'HEAD',
    'ignore_errors' => true
));

$context = stream_context_create($params);

foreach (array('http://google.com', 'http://stackoverflow.com') as $url) {
    $fp = fopen($url, 'rb', false, $context);
    $result = stream_get_contents($fp);

    if ($result === false) {
        throw new Exception("Could not read data from {$url}");
    } else if (!strstr($http_response_header[0], '301')) {
        // Do something here
    }
}
I hope this helps:
function checkRedirect($url)
{
    $headers = get_headers($url);
    if ($headers) {
        if (isset($headers[0])) {
            if ($headers[0] == 'HTTP/1.1 302 Found') {
                // find the Location header instead of relying on a fixed index
                foreach ($headers as $header) {
                    if (stripos($header, 'Location:') === 0) {
                        //this is the URL where it's redirecting
                        return trim(substr($header, strlen('Location:')));
                    }
                }
            }
        }
    }
    return false;
}
$isRedirect = checkRedirect($url);
if (!$isRedirect) {
    echo "URL Not Redirected";
} else {
    echo "URL Redirected to: " . $isRedirect;
}
You can use a session: if the session variable is not set, redirect the user to the login page.
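A minimal sketch of that check (the session key name is just an example):

session_start();

// If there is no logged-in user in the session, send the visitor to the login page and stop.
if (empty($_SESSION['user_id'])) {
    header('Location: /login.php');
    exit;
}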
I modified Adam Backstrom's answer and implemented chiborg's suggestion (download only the HEAD). It does one more thing: it checks whether the redirect points to a page on the same server or to an external one. For example, terra.com.br redirects to terra.com.br/portal; PHP considers that a redirect, which is correct, but I only wanted to list URLs that redirect to a different host.
function RedirectURL() {
    $urls = array('http://www.terra.com.br/', 'http://www.areiaebrita.com.br/');

    foreach ($urls as $url) {
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_HEADER, true);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        // chiborg suggestion: fetch the headers only
        curl_setopt($ch, CURLOPT_NOBODY, true);

        // ================================
        // READ URL
        // ================================
        curl_setopt($ch, CURLOPT_URL, $url);
        $out = curl_exec($ch);

        // line endings is the wonkiest piece of this whole thing
        $out = str_replace("\r", "", $out);
        echo $out;

        $headers = explode("\n", $out);
        foreach ($headers as $header) {
            if (substr(strtolower($header), 0, 9) == "location:") {
                // Check whether the redirect points to a page on the same host or to another one.
                // terra.com.br redirects to terra.com.br/portal: that is fine.
                // areiaebrita.com.br redirects to bwnet.com.br: that is what we want to report.
                // Some servers redirect only to a folder (e.g. net11.com.br redirects to /heiden),
                // so only inspect Location headers that contain "http".
                if (strpos(strtolower($header), "http") !== false) {
                    $address = explode("/", $header);
                    print_r($address);
                    // $address[0] = "location: http:"
                    // $address[1] = ""
                    // $address[2] = "www.terra.com.br"
                    // $address[3] = "portal"
                    echo "url (address from array) = " . $url . "<br>";
                    echo "address[2] = " . $address[2] . "<br><br>";
                    // Check whether the original host is still the host of the redirect target.
                    // If it is, the server did not redirect away from itself.
                    $host = strtolower(parse_url($url, PHP_URL_HOST));
                    if (strpos(strtolower($address[2]), $host) !== false) {
                        echo "URL NOT REDIRECT";
                    } else {
                        // redirected to a different host (areiaebrita)
                        echo "SORRY, URL REDIRECT WAS FOUND: " . $url;
                    }
                }
            }
        }
    }
}
function unshorten_url($url) {
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_HEADER, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_URL, $url);
    $out = curl_exec($ch);

    $real_url = $url; // default (if no redirect)
    if (preg_match("/location: (.*)/i", $out, $redirect))
        $real_url = $redirect[1];

    if (strstr($real_url, "bit.ly")) // the redirect is another shortened url
        $real_url = unshorten_url($real_url);

    return $real_url;
}
I have just made a function that checks if a URL exists or not
$ch = curl_init();
curl_setopt($ch, CURLOPT_HEADER, true);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

function url_exists($url, $ch) {
    curl_setopt($ch, CURLOPT_URL, $url);
    $out = curl_exec($ch);

    // line endings is the wonkiest piece of this whole thing
    $out = str_replace("\r", "", $out);

    // only look at the headers
    $headers_end = strpos($out, "\n\n");
    if ($headers_end !== false) {
        $out = substr($out, 0, $headers_end);
    }

    $headers = explode("\n", $out);
    foreach ($headers as $header) {
        if (strpos($header, 'HTTP/1.1 200 OK') !== false) {
            return true;
        }
    }

    return false;
}
Now I use an array of URLs to check whether each one exists, as follows:
$my_url_array = array('http://howtocode.pk/result', 'http://google.com/jobssss', 'https://howtocode.pk/javascript-tutorial/', 'https://www.google.com/');

for ($j = 0; $j < count($my_url_array); $j++) {
    if (url_exists($my_url_array[$j], $ch)) {
        echo 'This URL "' . $my_url_array[$j] . '" exists. <br>';
    }
}
I can't understand your question. You have an array of URLs and you want to know whether the user came from one of them?
If I've understood correctly:
$urls = array('http://url1.com', 'http://url2.ru', 'http://url3.org');

if (in_array($_SERVER['HTTP_REFERER'], $urls)) {
    echo 'FROM ARRAY';
} else {
    echo 'NOT FROM ARR';
}