I am writing a VoteBox (like/dislike box) in HTML and I cannot find how to do this anywhere.
Basically, there will be a like and a dislike button. When one of them gets clicked, it will forward the user to a page, and on that page I want some PHP code that will increase the number in upvotes.txt/downvotes.txt.
I have tried doing it with a database, but the problem is I want anyone to be able to use this without any setup.
I also want the front page to display the number of upvotes and the number of downvotes.
So it's kinda like this (most of this isn't real code BTW, I'm new to PHP):
//this is the code for upvote.html
$upvotes = get_data_from_TXT_file
$changedupvotes = $upvotes + 1
set_data_in_txt_file_to_$changedupvotes
Sorry if I haven't explained this very well.
Any help appreciated.
This is a skeleton of the code that you can use:
$file = 'file.txt'; // your file name
// error handling etc to make sure file exists & readable
$fdata = file_get_contents ( $file ); // read file data
// parse $fdata if needed and extract number
$fdata = intval($fdata) + 1;
file_put_contents($file, $fdata); // write it back to file
Reference:
http://www.php.net/manual/en/function.file-get-contents.php
http://php.net/manual/en/function.file-put-contents.php
You can use file() to read the file into an array, increment the upvotes, and then write the data back using file_put_contents():
if (file_exists('upvotes.txt')) {
    $content = file('upvotes.txt');             // read all lines into an array
    $upvotes = intval($content[0]) + 1;         // take the first line and increment it
    file_put_contents('upvotes.txt', $upvotes); // write the new count back
} else {
    // handle the error appropriately
}
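To round this out, here is a minimal sketch of the whole flow. The file names (vote.php, upvotes.txt, downvotes.txt) and the redirect target (index.php) are assumptions for illustration, not anything from the original question:
<?php
// vote.php (hypothetical): link the like button to vote.php?type=up and dislike to vote.php?type=down
$type = (isset($_GET['type']) && $_GET['type'] === 'down') ? 'down' : 'up';
$file = $type . 'votes.txt';                   // upvotes.txt or downvotes.txt

$count = file_exists($file) ? intval(file_get_contents($file)) : 0;
file_put_contents($file, $count + 1, LOCK_EX); // LOCK_EX so two simultaneous votes don't clobber each other

header('Location: index.php');                 // send the voter back to the front page
exit;
?>
And on the front page, displaying both totals is just a matter of reading the two files:
<?php
$upvotes   = file_exists('upvotes.txt')   ? intval(file_get_contents('upvotes.txt'))   : 0;
$downvotes = file_exists('downvotes.txt') ? intval(file_get_contents('downvotes.txt')) : 0;
echo "Likes: $upvotes, Dislikes: $downvotes";
?>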
I'm trying to export a lot of data through a CSV export. The amount of data is really big, around 100,000 records and counting.
My client usually uses two tabs to browse and check several things at the same time, so a requirement is that while the export is being generated, he can continue browsing the system.
The issue is that while the CSV is being generated on the server, the session is blocked; you cannot load another page until the generation is completed.
This is what I'm doing:
Open the file
Loop through the data (one query per cycle, each cycle fetches 5000 records). PS: I cannot change this because of certain limitations.
Write the data into the file
Free memory
Close the file
Set headers to begin the download
During the entire process, it's not possible to navigate the site in another tab.
The block of code:
$temp = 1;
$first = true;
$fileName = 'csv_data_' . date("Y-m-d") . '-' . time() . '.csv';
$filePath = CSV_EXPORT_PATH . $fileName;
// create CSV file
$fp = fopen($filePath, 'a');
// get data
for ($i = 1; $i <= $temp; $i++) {
    // get lines
    $data = $oPB->getData(ROWS_PER_CYCLE, $i); // ROWS_PER_CYCLE = 5000
    // if nothing came back, exit
    if (empty($data)) {
        break;
    }
    // write the data that will be exported into the file
    fwrite($fp, $export->arrayToCsv($data, '', '', $first));
    // count elements
    $temp = ceil($data[0]->foundRows / ROWS_PER_CYCLE); // foundRows is always the same value, doesn't change per query.
    $first = false; // hide header for next rows
    // free memory
    unset($data);
}
// close file
fclose($fp);
/**
 * Begin Download
 */
$export->csvDownload($filePath); // set headers
Some considerations:
The counting is done in the same query, and it's not causing an infinite loop; it works as expected. The total is contained in $data[0]->foundRows, which avoids an unnecessary extra query to count all the available records.
There are several memory limitations due to environment settings that I cannot change.
Does anyone know how I can improve this, or any other solution?
Thanks for reading.
I'm replying only because it can be helpful to someone else. A colleague came up with a solution for this problem.
Call the function session_write_close() before
$temp = 1;
Doing this, you end the current session and store the session data, so I am able to download the file and continue navigating in other tabs.
I hope it helps someone.
Some considerations about this solution:
You must not need to use any session data after calling session_write_close().
The export script is in another file. For example: home.php calls export.php through a link.
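To make the placement concrete, here is a rough sketch of how the top of export.php could look. session_start() and $userId are illustrative only; whatever session values the export still needs must be read before the lock is released:
<?php
// export.php (sketch only): release the session lock before the long-running loop
session_start();                 // or however your bootstrap opens the session
$userId = $_SESSION['user_id'];  // illustrative: read any session data you still need first

session_write_close();           // store and release the session; other tabs are no longer blocked

$temp = 1;
$first = true;
// ... the rest of the export loop from the question runs here unchanged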
Problem
I'm trying to edit HTML/PHP files server-side with PHP. With an AJAX POST I send three different values to the server:
the url of the page that needs to be edited
the id of the element that needs to be edited
the new content for the element
The PHP file I have now looks like this:
<?php
$data = json_decode(stripslashes($_POST['data']));
$count = 0;
foreach ($data as $i => $array) {
    if (!is_array($array) && $count == 0) {
        $count = 1;
        // $array = file url
    }
    elseif (is_array($array)) {
        foreach ($array as $j => $content) { // separate key so the outer $i isn't overwritten
            // $array[0] = ids
            // $array[1] = contents
        }
    }
}
?>
As you can see, I wrapped the variables in an array so it's possible to edit multiple elements at a time.
I've been looking for a solution for hours but can't decide what the best (or even a possible) approach is.
Solution
I tried creating a new DOMDocument and loading in the HTML, but when dealing with a PHP file this solution isn't possible, since it can't save PHP files:
$html = new DOMDocument();
$html->loadHTMLFile('file.php');
$html->getElementById('myId')->nodeValue = 'New value';
$html->saveHTMLFile("foo.html");
(From this answer)
Opening a file, writing to it and saving it is another way to do this. But I guess I'd have to use str_replace or preg_replace in that case.
$fname = "demo.txt";
$fhandle = fopen($fname, "r");
$content = fread($fhandle, filesize($fname));
fclose($fhandle); // close the read handle before re-opening for writing
$content = str_replace("oldword", "newword", $content);
$fhandle = fopen($fname, "w");
fwrite($fhandle, $content);
fclose($fhandle);
(From this page)
I've read everywhere that str_replace and preg_replace are risky, because I'm trying to edit all kinds of DOM elements and not one specific string/element. I guess the code below comes close to what I'm trying to achieve, but I can't really trust it:
$replace_with = 'id="myID">' . $replacement_content . '</';
if ($updated = preg_replace('#id="myID">.*?</#Umsi', $replace_with, $file)) {
// write the contents of $file back to index.php, and then refresh the page.
file_put_contents('file.php', $updated);
}
(From this answer)
Question
In short: what is the best solution, and is it even possible to edit an HTML element's content in different file types with only an id provided?
Desired steps:
get the file from the url
find the element with the id
replace its content
First of all, you are right in not wanting to use a regex function for HTML parsing. See the answer here.
I'm going to answer this question under the presumption you are committed to the idea of retrieving PHP files server-side before they are interpreted. There is an issue with your approach right now, since you seem to be under the impression that you can retrieve the source PHP file by the URL parameter - but that's the location of the result (interpreted PHP). So be careful your structure does what you want.
I am under the assumption that the PHP files are structured like this:
<?php include_some_header(); ?>
<tag>...</tag>
<!-- some HTML -->
<?php //some code ?>
<tag>...</tag>
<!-- some more HTML -->
<?php //some code ?>
Your problem now is that you cannot use an HTML reader (and writer), since your file is not HTML. My answer is that you should restructure your code, separating templating language from business logic. Get started with some templating language. Afterwards, you'll be able to open the template, without the code, and write back the template using a DOM parser and writer.
Your only alternative in your current setup is to use the replace function as you have found in this answer. It's ugly. It might break. But it's definitely not impossible. Make backups before writing over your own code with your own code.
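For completeness, here is a rough sketch of that replace-with-backup approach. The file name and the id are placeholders, and it assumes the element's opening tag ends right after the id attribute, as in your snippet, which is exactly why it is fragile:
<?php
// hypothetical example: replace the content of the element with id="myID" in file.php
$target = 'file.php';
$replacementContent = 'New value';

// keep a timestamped backup before touching the source file
copy($target, $target . '.' . time() . '.bak');

$file = file_get_contents($target);
$replaceWith = 'id="myID">' . $replacementContent . '</';
$updated = preg_replace('#id="myID">.*?</#Umsi', $replaceWith, $file);

if ($updated !== null && $updated !== $file) {
    file_put_contents($target, $updated);
}
?>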
I have been working on this very simple (to somebody well versed) problem for a few hours now. Google is throwing up no clear answers and I can't find anything similar here, so please don't shoot me down for asking for help.
I'm working on a simple page to show current donations (will refresh regularly) at an upcoming charity event.
I've managed to get /index.php to output the donation total (here's the source for data.php):
<?php $data='0.00';?>
Index does the following, basically:
<?php
$pound = htmlspecialchars("£", ENT_QUOTES);
include 'assets/files/donation_total/data.php';
echo '<h1>'.$pound.$data.'</h1>';
?>
I'm working on action_donations.php now, which SHOULD take the value from data.php after using substr to get the total donations value (stripping 13 characters from the left and 4 from the right).
But it's not working. It outputs nothing. What am I doing wrong?
<?php
// get contents of a file into a string
$filename = "/assets/files/donation_total/data.php";
$handle = fopen($filename, "r");
$contents = fread($handle, filesize($filename));
fclose($handle);
echo $contents;
$value = substr($contents, 13);
$value_cleaned = substr($value, 0, -4);
echo $value_cleaned;
?>
I simply need it to read data.php for the current total, take the value from the form, add the two together, then write that value back to data.php
change the path to data.php to:
include_once('../assets/files/donation_total/data.php');
If I understood that correctly, the whole purpose of the last script is to echo out the number that's 0.00 in your example. Why don't you just do it like in your index.php and include data.php? Then you can just use $data and echo that out. Reading a PHP file as text and using hardcoded character offsets to strip away the surrounding code is generally a bad idea.
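Building on that, here is a rough sketch of what action_donations.php could look like with the include approach. The form field name 'donation' and the relative paths are assumptions; adjust them to your setup:
<?php
// action_donations.php (sketch only)
include '../assets/files/donation_total/data.php';   // defines $data, e.g. '0.00'

$donation = isset($_POST['donation']) ? (float) $_POST['donation'] : 0.0;
$newTotal = number_format((float) $data + $donation, 2, '.', '');

// write the new total back to data.php in the same format it was read from
file_put_contents(
    '../assets/files/donation_total/data.php',
    "<?php \$data='" . $newTotal . "';?>"
);

echo $newTotal;
?>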
OK, so I have a .txt file with a bunch of URLs. I've got a script that picks one of the lines randomly. I then included this on another page.
However, I want the URL to change every 15 minutes, so I'm guessing I'm gonna need to use a cron job; I'm just not sure how to put it all into place.
I found that if you include the file, it still gives a random output each time, so I'm guessing that if I run the cron and the include together it's going to get messy.
So what I'm thinking is: I have a script that randomly selects a URL from my initial text file, then it saves it to another .txt file, and I include that file on the final page.
I just found this which is sort of in the right direction:
Include php code within echo from a random text
I'm not the best at writing PHP (though I can understand it perfectly), so all help is appreciated!
So what I'm thinking is I have a script that randomly selects a url from my initial text file then it saves it to another .txt file and I include that file on the final page.
That's pretty much what I would do.
To re-generate that file, though, you don't necessarily need a cron.
You could use the following idea:
If the file has been modified less than 15 minutes ago (which you can find out using filemtime() and comparing it with time()),
then use what's in the file;
else
re-generate the file, randomly choosing one URL from the big file,
and use the newly generated file.
This way, there's no need for a cron: the first user who arrives more than 15 minutes after the previous modification of the file will re-generate it, with a new URL.
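A rough sketch of that idea (the file names urls.txt and current_url.txt are just placeholders):
<?php
// sketch only: regenerate current_url.txt at most every 15 minutes
$source  = 'urls.txt';        // the big list, one URL per line
$current = 'current_url.txt'; // the file that gets included on the final page

if (!file_exists($current) || time() - filemtime($current) > 15 * 60) {
    $urls = file($source, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
    $pick = $urls[array_rand($urls)];
    file_put_contents($current, $pick);
}

echo file_get_contents($current); // or include it, as in the question
?>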
Alright so I sorta solved my own question:
<?php
// load the file that contains the URLs
$adfile = "urls.txt";
$ads = array();
// one URL per line
$fh = fopen($adfile, "r");
while (!feof($fh)) {
    $line = fgets($fh, 10240);
    $line = trim($line);
    if ($line != "") {
        $ads[] = $line;
    }
}
fclose($fh);
// randomly pick a URL
$num = count($ads);
$idx = rand(0, $num - 1);
$f = fopen("output.txt", "w");
fwrite($f, $ads[$idx]);
fclose($f);
?>
However, is there any way I can delete the chosen line once it has been picked?
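In case it helps a future reader, one way to do that (a sketch, assuming urls.txt fits comfortably in memory) is to remove the picked entry from the array and rewrite the source file:
<?php
// sketch: pick a random URL, save it, and remove it from the source list
$ads = file("urls.txt", FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

$idx = array_rand($ads);
file_put_contents("output.txt", $ads[$idx]);                // the chosen URL for the include

unset($ads[$idx]);                                          // drop the chosen line...
file_put_contents("urls.txt", implode("\n", $ads) . "\n");  // ...and write the rest back
?>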
I think I already know the answer to this question, but curious as I am, I'll ask it anyway.
I'm running a webshop whose products come with a CSV file. I can import all the objects without any trouble; the only thing is that the image and thumbnail locations are not included in the database dump (it's never perfect, heh). You might say "do it manually then", and that's what I did at first, but after 200 products and RSI I gave up and looked for a better, more efficient way to do this.
I have asked my distributor and I can use their images for my own purposes without having any copyright problems.
When I look at the location of the images, the url looks like this:
../img/i.php?type=i&file=1250757780.jpg
Does anyone have an idea how this problem can be tackled?
For scraping a website, I found this code:
<?php
function save_image($pageID) {
    $base = 'http://www.gistron.com';
    // use cURL functions to "open" page
    // load $page as source code for target page
    // find catalog/ images on this page
    preg_match_all('~catalog/([a-z0-9\.\_\-]+(\.gif|\.png|\.jpe?g))~i', $page, $matches);
    /*
    $matches[0] => array of image paths (as in source code)
    $matches[1] => array of file names
    $matches[2] => array of extensions
    */
    for ($i = 0; $i < count($matches[0]); $i++) {
        $source = $base . $matches[0][$i];
        $tgt = $pageID . $matches[2][$i]; // NEW file name: ID + extension
        if (copy($source, $tgt)) $success = true;
        else $success = false;
    }
    return $success; // rough validation; only reports the last image from the source
}

// download images from each page
for ($i = 1; $i <= 6000; $i++) {
    if (!save_image($i)) echo "Error with page $i<br>";
}
?>
For some reason it prints this error for every page: Error with page 1, Error with page 2, etc.
Well, you can either ask the distributor to give you the image names in the CSV file, so you can construct the URLs directly, or you will have to scrape their website with a script and fetch the images (I'd ask them for permission before doing that).
That URL doesn't really tell you where the picture is located, only that a script i.php will be called and the file name is passed in as the parameter file on the query string.
Where the i.php script actually goes to find the image cannot be deduced from just the info you present here. You'd have to inspect the script to find that out, I think.
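If the distributor does add an image-name column to the CSV, a rough sketch of fetching each file through that i.php endpoint could look like this. The base URL, CSV layout, and target directory are all assumptions, and this only works if i.php serves the raw image to your requests:
<?php
// sketch only: download product images listed in a CSV column
$base = 'http://www.example-distributor.com/img/i.php?type=i&file='; // assumed base URL

if (($csv = fopen('products.csv', 'r')) !== false) {
    while (($row = fgetcsv($csv)) !== false) {
        $imageName = $row[5]; // assumed column index holding e.g. 1250757780.jpg

        $ch = curl_init($base . urlencode($imageName));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        $imageData = curl_exec($ch);
        curl_close($ch);

        if ($imageData !== false) {
            file_put_contents('images/' . $imageName, $imageData); // assumes an images/ directory exists
        }
    }
    fclose($csv);
}
?>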