Creating a history list, preventing multiples - php

I'm creating a website, and one of the features is a sidebar that shows the user the pages they've visited since they got to the website; they can click any of those names to get taken back to that page. I have it working, but when they click back to an earlier page and then click through the pages again, it writes out the repeat pages. For instance, say we have 10 pages, main01 to main10. The user gets to page main05 and decides he wants to go back to main03; he clicks main03 on his history list and goes there just fine, decides he wants to keep going, and clicks "continue", which brings him to page main04. That's fine, but the history list becomes:
main01
main02
main03
main04
main05
main03
main04
So what I've tried to do is create a function that checks whether the visited page has already been added to the array; if it has, it should echo nothing, and if it hasn't, it echoes the correct link. But whenever I try it now, it just displays the last page you visited and overwrites that every time you change pages. Here is my code:
if ($_POST['visited']) {
    $_SESSION['visitedpages'][$_SESSION['i']] = $_POST['visited'];
    $_SESSION['i']++;
    echo "<pre>";
    print_r($_SESSION['visitedpages']);
    echo "</pre>";
}
if ($_SESSION['visitedpages']) {
    $a_length = count($_SESSION['visitedpages']);
    for ($x = 0; $x < $a_length; $x++) {
        $name = $_SESSION['visitedpages'][$x];
        $exists = checkifexists($name, $a_length);
        if (!$exists) {
            echo "$name<br />";
        }
        else {
            echo "";
        }
    }
}
function checkifexists($name, $a_length) {
    for ($z = 0; $z < $a_length; $z++) {
        $existingname = $_SESSION['visitedpages'][$z-1];
        if ($name === $existingname) {
            return true;
        }
    }
    return false;
}
How can I get this working correctly? Any help would be appreciated.
EDIT: Actually, it looks like I got it working by checking whether the page exists when it's being written to the array, rather than when it's being written as a link. However, now whenever I click a link to visit a page (for instance main03 from main05), it goes to main03 but the history list doesn't display. Any input on this?
EDIT2: So I changed it to use in_array, per your suggestion, and it's displaying properly, but it's still listing duplicates. Here is the code I'm using:
if (in_array($_POSTED['visited'], $_SESSION['visitedpages'])) {
    echo "";
}
else {
    $_SESSION['visitedpages'][$_SESSION['i']] = $_POST['visited'];
    $_SESSION['i']++;
    echo "<pre>";
    print_r($_SESSION['visitedpages']);
    echo "</pre>";
    $arrayinit = true;
}
and now the array looks like this whenever I visit previous pages:
Array
(
    [0] => main01
    [1] => main02
    [2] => main03
    [3] => main04
    [4] => main03
)

You need to check your checkifexists function.
You're checking to see if this page has already been mentioned; if it has, you return true. But what you're doing is going through the whole of $_SESSION['visitedpages'] to see if the page is in there, and of course it is, because the page you're checking is itself already in that array.
Try calling it with:
$exists = checkifexists($name, $x - 1);
Then you'll just check elements earlier in the array to see if they're duplicates.
For what it's worth, you might be able to do this more efficiently with in_array.
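For instance (a minimal sketch, untested). Note that your EDIT2 snippet reads $_POSTED['visited'], which is always unset, so the in_array guard never fires; with the superglobal spelled $_POST it would look like the below, and appending with [] also spares you the manual $_SESSION['i'] counter:

session_start();
if (!isset($_SESSION['visitedpages'])) {
    $_SESSION['visitedpages'] = array();
}
// Only record the page if we haven't seen it yet
if (isset($_POST['visited']) && !in_array($_POST['visited'], $_SESSION['visitedpages'])) {
    $_SESSION['visitedpages'][] = $_POST['visited'];
}
// Every entry is unique, so just print the list
foreach ($_SESSION['visitedpages'] as $name) {
    echo "$name<br />";
}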

Related

Display different content depending on Referrer

Hey, I am trying to display a different phone number for visitors who come to my website from my Google AdWords campaign.
The code below works without the else statement (so if I click through to the page from Google it displays a message, and if I visit the site directly it does not). When I add the else statement it outputs both numbers. Thank you.
<?php
// The domain list.
$domains = Array('googleadservices.com', 'google.com');
$url_info = parse_url($_SERVER['HTTP_REFERER']);
if (isset($url_info['host'])) {
    foreach ($domains as $domain) {
        if (substr($url_info['host'], -strlen($domain)) == $domain) {
            // GOOGLE NUMBER HERE
            echo ('1234');
        }
        // REGULAR NUMBER HERE
        else {
            echo ('12345');
        }
    }
}
?>
Your logic is slightly skewed: you're checking whether the host from parse_url matches the domains in your array, but you're running through the whole array each time. So you get both a match and a non-match, because google.com matches one entry but not the other.
I'd suggest making your domains array into an associative array:
$domains = Array('googleadservices.com' => '1234',
                 'google.com'           => '12345');
Then you just need to check once:
if (isset($url_info['host'])) {
    if (isset($domains[$url_info['host']])) {
        echo $domains[$url_info['host']];
    }
}
I've not tested this, but it should be enough for you to see the logic.
(I've also removed the substr check - you may need to put that back in, to ensure that you're getting the exact string that you need to look for)
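If you do need the suffix matching back, so that a host like www.google.com still matches, a sketch that keeps the associative array (again untested):

if (isset($url_info['host'])) {
    foreach ($domains as $domain => $number) {
        // Match if the referrer host ends with the domain
        if (substr($url_info['host'], -strlen($domain)) == $domain) {
            echo $number;
            break; // stop at the first match so only one number is printed
        }
    }
}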

parsing SEO friendly url without htaccess or mod_rewrite

Can anyone suggest a method in PHP, or a function, for parsing SEO-friendly URLs that doesn't involve htaccess or mod_rewrite? Examples would be awesome.
http://url.org/file.php/test/test2#3
This returns:
Array ( [scheme] => http [host] => url.org [path] => /file.php/test/test2 [fragment] => 3 ) /file.php/test/test2
How would I separate out the /file.php/test/test2 section? I guess test and test2 would be arguments.
EDIT:
@Martijn - I did figure out what you suggested before getting the notification about your answer. Thanks, btw. Is this considered an OK method?
$url = 'http://url.org/file.php/arg1/arg2#3';
$test = parse_url($url);
echo "host: $test[host] <br>";
echo "path: $test[path] <br>";
echo "frag: $test[fragment] <br>";
$path = explode("/", trim($test[path]));
echo "1: $path[1] <br>";
echo "2: $path[2] <br>";
echo "3: $path[3] <br>";
echo "4: $path[4] <br>";
You can use explode to get the parts from your array:
$path = trim($array['path'], "/"); // trim the path of slashes
$path = explode("/", $path);
unset($path[0]); // the first one is the file, the others are sections of the url
If you really want to make it zero-based again, add this as the last line:
$path = array_values($path);
In response to your edit:
You want to make this as flexible as you can, so no fixed code based on a maximum of 5 items. You will probably never exceed that, but don't pin yourself to it; it's just overhead you don't need.
If you have a pages system like this:
id  parent  name               url
1   -1      Foo                foo
2   1       Bar, child of Foo  bar-child-of-foo
Make a recursive function: pass the array to a function which takes the first section to find a root item:
SELECT * FROM pages WHERE parent=-1 AND url=$path[0]
That query will return an id; use that in the parent column with the next value of the array. Unset each found value of the $path array. In the end, you will have an array with the remaining parts.
To sketch an example:
function GetFullPath(&$path, $parent = -1) {
    global $conn;    // the mysqli connection
    $fullpath = "/"; // start with a slash
    // Query for a child of this item whose url matches the next section
    $url = mysqli_real_escape_string($conn, current($path));
    $result = mysqli_query($conn, "SELECT * FROM pages WHERE parent=".(int)$parent." AND url='".$url."' LIMIT 1");
    // If a row exists, append more of the url recursively:
    if ($result->num_rows !== 0) {
        $fetch = $result->fetch_assoc();
        $fullpath .= $fetch['url'];
        // Remove the first part, so one level deeper we start with the next value
        $path = array_slice($path, 1);
        // Use the fetched id to go deeper: find a child with the current item as parent
        $fullpath .= GetFullPath($path, $fetch['id']);
    }
    // Return the result; if nothing is found at all, the result will be "/", probably home
    return $fullpath;
}
echo GetFullPath($path); // I pass it by reference; any alterations in the function happen to the variable outside the scope as well
This is a draft, I did not test it, but you get the idea I'm trying to sketch. You can use the same method to get the ID of the page you are at: just keep passing the variable back up again.
One of these days I'm getting the hang of recursion ^^.
Edit again: Oops, that turned out to be quite some code.

Zend Php Foreach Loop array

I have an input field and an array of emails from a DB table.
I am comparing the similarity of the input field to the array, but somehow I am stuck with the loop.
Does that loop compare each email with the input field?
It always brings me to google.com no matter what I input, same or not.
Here's the code from the controller:
if (isset($_POST['btn_free_download']))
{
    // Get the current email from the input field
    $email = $this->getRequest()->getParam('email');
    // Get all emails from the user
    $referred_users = $this->_helper->user()->getReferredUsers()->toArray();
    // Check the similarity with a loop
    foreach ($referred_users as $referred_user)
    {
        similar_text($email, $referred_user['email'], $similar);
    }
    // If yes, pop up message or redirect to some page
    if ($similar < 97)
    {
        $this->_redirect('http://google.com');
    }
    // If not, redirect user to free download page
    else
    {
        $this->_redirect('http://yahoo.com');
    }
}
I think you need to check the manual. foreach works the same whether you use it in Zend, any other framework, or raw PHP.
$referred_users = $this->_helper->user()->getReferredUsers()->toArray();
$referred_users will probably hold an array of rows from the user table, each one carrying an email, say:
$referred_users = array(array('email' => 'one@email.com'), array('email' => 'two@email.com'), array('email' => 'three@email.com'));
Then when you use the foreach loop, it will iterate through each of the emails in the array:
foreach ($referred_users as $referred_user)
{
    // 1st pass: $referred_user['email'] = 'one@email.com'; 2nd pass: 'two@email.com'; and so on
    similar_text($email, $referred_user['email'], $similar);
}
Now let us discuss your logic here:
// If yes, pop up message or redirect to some page
if ($similar < 97)
{
    $this->_redirect('http://google.com');
}
// If not, redirect user to free download page
else
{
    $this->_redirect('http://yahoo.com');
}
Unless the last element in the array $referred_users is almost exactly equal to your $email (i.e. $email = 'three@email.com'), $similar will hold a value below 97 and you will always be redirected to Google. That's because only the last comparison survives the loop: each iteration overwrites $similar.
I assume that is not what you are trying to do, and that you are not familiar with how foreach works, which is why you are not getting the expected result.
Assuming you want to check whether any email in the array matches the one entered via the param (if the array comes from a table, whether the entered email equals any email in the table), and then redirect somewhere or show some message, else carry on, the solution below might be helpful to you.
$similarText = false;
foreach ($referred_users as $referred_user)
{
    // 1st pass compares against 'one@email.com', 2nd against 'two@email.com', and so on
    similar_text($email, $referred_user['email'], $similar);
    if ($similar > 97) {
        $similarText = true;
        break; // a match was found, no need to keep looking
    }
}
// If yes, pop up message or redirect to some page
if ($similarText)
{
    $this->_redirect('http://google.com');
}
// If not, redirect user to free download page
else
{
    $this->_redirect('http://yahoo.com');
}
Hope you got the idea. But please do check the manual before posting a question in the future.
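As an aside: if what you actually want is an exact match rather than 97% similarity, you can skip similar_text entirely. A minimal sketch (assuming PHP 5.5+ for array_column, and rows shaped like the example above):

$emails = array_column($referred_users, 'email'); // pull out just the email column
if (in_array($email, $emails)) {
    $this->_redirect('http://google.com');
}
else {
    $this->_redirect('http://yahoo.com');
}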

PHP random link without repetitions

My PHP is poor, but I'm trying my best to improve!!
I'm attempting to code a really simple PHP script that loads a random HTML page from a text-file list.
Once people have viewed the HTML page, they link back to the random.php file and it loads another page... this can continue forever.
I'm using a text-file list as I'll regularly be adding more pages. My issue is there is nowhere in my code to prevent repeat visits!! Right now I only have about 8 links, and on more than one occasion I've had the same link 'randomly' come up 3 times in a row :( Hoping there is something simple I can add to this to prevent repetitions, and if all links have been viewed, then it resets. Many thanks :)
<body>
<?php
$urlist=file("randomlinks.txt");
$nl=count($urlist);
$np=rand(0,$nl-1);
$url=trim($urlist[$np]);
header("Location: $url");
exit;
?>
</body>
Since the user does not know in what order the links are in the text file, reading them in sequence would still seem "random" (and you can shuffle them when first creating the file). So you can either:
1. save in the session the index of the last link seen, or
2. tie the link index to the system time. This does not prevent repetitions, but it guarantees that no two consecutive links come out equal, unless you hit 'refresh' after exactly the right amount of time.
Method 1:
$urlist = file("randomlinks.txt");
$nl = count($urlist);
session_start();
if (!isset($_SESSION['link'])) // If link is not in session
    $_SESSION['link'] = 0;     // Start from 0 (the first)
$np = $_SESSION['link']++;     // Next time will use the next one
$_SESSION['link'] %= $nl;      // Start over if $nl exceeded
$url = trim($urlist[$np]);
header("Location: $url");
Method 2:
...
$nl = count($urlist);
$np = time() % $nl; // Seconds since the Epoch, modulo $nl: gives a number that
                    // cycles between 0 and $nl-1, every $nl seconds
$url = trim($urlist[$np]);
header("Location: $url");
Another method would be to remember the last N links seen - but for this, you need a session variable - so as not to get them again too soon.
session_start();
if (!isset($_SESSION['urlist']))   // Do we know the user?
    $_SESSION['urlist'] = array(); // No, start with an empty list
if (empty($_SESSION['urlist']))    // Is the list empty?
{
    $_SESSION['urlist'] = file("randomlinks.txt"); // Fill it.
    $safe = array_pop($_SESSION['urlist']); // Set the last entry aside,
    shuffle($_SESSION['urlist']);           // shuffle the rest of the list,
    array_push($_SESSION['urlist'], $safe); // and put the entry back on top, so the
                                            // first link after a refill is never the
                                            // one just seen
}
$url = trim(array_pop($_SESSION['urlist']));
If you have five URLs 1, 2, 3, 4 and 5, you might get:
1 5 3 4 2 1 4 2 5 3 1 2 3 5 4 1 4 3 2 5 1 4 ...
...the list is N-1 random :-), all links appear with equal frequency, and the same link may reappear at most at a 2-remove, like the "4" above (...4 1 4...); if it does, you'll never see it again for at least $nl visits.
ALSO
You should not use header() from within a <body> tag. Remove <body> altogether.
You don't need exit() if you are at the natural end of the script: the script will exit by itself.
The simplest way I can think of would be to use a cookie.
The Internet is full of tutorials such as the following:
http://www.w3schools.com/php/php_cookies.asp
For example:
<?php
if (isset($_COOKIE["visitList"])) {
    $visited = explode(",", $_COOKIE["visitList"]);
    foreach ($visited as $value) {
        if ($value == /* new site url */) {
            // Find a new one
        }
    }
}
else {
    $expire = time() + 60*60*24*30;
    setcookie("visitList", "List-of-visited-URLs, separated-by-commas", $expire);
}
?>
I have not had a chance to test this code, but hopefully it can give you ideas.
As noted in the comments, the same thing could be accomplished using PHP sessions:
<?php
session_start();
if (isset($_SESSION["visitList"])) {
    $visited = explode(",", $_SESSION["visitList"]);
    foreach ($visited as $value) {
        if ($value == /* new site url */) {
            // Find a new one
        }
    }
}
else {
    $_SESSION["visitList"] = /* new site URL */;
}
?>
I would use PHP sessions to do this. Take a look at this example.
Store an array of available pages in a session variable. Every time you get a page, you remove that page from the array. When the array is empty, you reset it again from your original source.
Here's what your code might look like:
session_start();
if (empty($_SESSION["pages"]))
    $_SESSION["pages"] = file("randomlinks.txt");
$nl = count($_SESSION["pages"]);
$np = mt_rand(0, $nl - 1);
// get the page, remove it from the array, and shift all higher elements down:
list($url) = array_splice($_SESSION["pages"], $np, 1);
die(header("Location: $url"));

recursive or simple php loop

I'm having some problems understanding how to resolve this loop:
I'm developing a small scraper for myself, and I'm trying to figure out how to loop between 2 methods until all the links are retrieved from the website.
I'm already retrieving the links from the first page; the problem is that I can't make a loop to verify the newly extracted links:
Here is my code:
$scrape->fetchlinks($url); // I scrape the links from the first page of the website
// for each one found I insert the url in the DB with status = "n"
foreach ($scrape->results as $result) {
    if ($result) {
        echo "$result \n";
        $crawler->insertUrl($result);
        // I select all the links with status = "n" to scrape the stored links
        $urlStatusNList = $crawler->selectUrlByStatus("n");
        while (sizeof($urlStatusNList > 1)) {
            foreach ($urlStatusNList as $sl) {
                $scrape->fetchlinks($sl->url); // I suppose it would retrieve all the new sublinks
                $crawler->insertUrl($sl->url); // insert the sublinks in the db
                $crawler->updateUrlByIdStatus($sl->id, "s"); // update the scraped link with status = "s", so I will not check these links again
                // here I would like the loop to repeat for each new link in the db with status = 'n', until the system cannot retrieve more links and the script stops
            }
        }
    }
}
Any type of help is very welcome. Thanks in advance!
In pseudo-code you're looking for something like this:
do
{
    grab new links and add them to the database
} while ( number of links not yet extracted from the database > 0 )
It will keep going on and on, without recursion...
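Translated into PHP with the objects from your snippet (a rough sketch, untested; I'm assuming selectUrlByStatus("n") returns an empty array when nothing is pending, and note it only terminates if insertUrl skips URLs that are already stored):

// Seed the queue: scrape the first page and store its links with status = "n"
$scrape->fetchlinks($url);
foreach ($scrape->results as $result) {
    if ($result) {
        $crawler->insertUrl($result);
    }
}
// Keep scraping until no link with status = "n" is left
do {
    $pending = $crawler->selectUrlByStatus("n");
    foreach ($pending as $sl) {
        $scrape->fetchlinks($sl->url); // fetch the sublinks of this stored link
        foreach ($scrape->results as $sublink) {
            if ($sublink) {
                $crawler->insertUrl($sublink); // new links enter the queue as "n"
            }
        }
        $crawler->updateUrlByIdStatus($sl->id, "s"); // mark this link as scraped
    }
} while (count($pending) > 0);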
