This is the basic code I use to pull info from my affiliate product feed. As you can see below, I'm pulling picture links and storing the URLs in my database. The problem is that I'm showing 20 products per page, and if the affiliate's server isn't working properly it slows my site down a lot.
What I'd like to do is store the whole images somehow instead of hotlinking the URLs. I think that would improve my site's performance a lot. Any ideas?
$feed = 'my affiliate feed';
$xml = simplexml_load_file($feed);
foreach ($xml->productinfo as $productinfo)
{
    $pic0 = $productinfo->picture[0];
    $pic1 = $productinfo->picture[1];
    $pic2 = $productinfo->picture[2];
    mysql_query("INSERT INTO ".$table." (pic0, pic1, pic2) VALUES ('$pic0', '$pic1', '$pic2')");
}
Thank you
First, change the pic0, pic1 and pic2 fields to the BLOB type. (You might also want to store the MIME type, for example via getimagesize(), for use with header() when delivering the images.)
$feed = 'my affiliate feed';
$xml = simplexml_load_file($feed);
foreach ($xml->productinfo as $productinfo)
{
    for ($i = 0; $i <= 2; $i++) {
        $pic[$i] = mysql_real_escape_string(file_get_contents($productinfo->picture[$i]));
    }
    mysql_query("INSERT INTO $table (pic0, pic1, pic2) VALUES ('$pic[0]', '$pic[1]', '$pic[2]')");
}
I presume that you have an ID field in $table. Deliver images in a new PHP script like this:
if (!isset($_GET['id'])) die('No ID');

if (!isset($_GET['pic']) || !in_array($_GET['pic'], array(0, 1, 2)))
    $i = '0';
else
    $i = mysql_real_escape_string($_GET['pic']);

$sql = sprintf("SELECT pic$i FROM $table WHERE id=%s", mysql_real_escape_string($_GET['id']));
$result = mysql_query($sql) or die("Invalid query: " . mysql_error());
$row = mysql_fetch_array($result);

header("Content-type: image/jpeg");
echo $row[0];
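The Content-type header above assumes every picture is a JPEG. If the feed mixes formats, here is a minimal sketch of capturing the MIME type at insert time; the mime0 column is an assumption you would add yourself, and getimagesizefromstring() needs PHP 5.4+:
$data = file_get_contents($productinfo->picture[0]);
$info = getimagesizefromstring($data);         // also returns a 'mime' key, e.g. "image/png"
$mime = $info ? $info['mime'] : 'image/jpeg';  // fall back if the data isn't a recognisable image

// mime0 is a hypothetical extra column alongside pic0
mysql_query(sprintf("INSERT INTO $table (pic0, mime0) VALUES ('%s', '%s')",
    mysql_real_escape_string($data),
    mysql_real_escape_string($mime)));

// ...and in the delivery script, select mime0 too and replace the hard-coded header with:
// header("Content-type: " . $row['mime0']);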
Well, you could also store the images as base64.
BLOBs don't make the website faster; the more DB requests you have, the slower the site gets.
If it's only a few images it's OK, provided you have a fast DB server.
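If the goal is simply to stop depending on the affiliate's server at page-render time, a hedged alternative to BLOBs is to download each image once into a local folder and store the local path; this is a sketch only, where the images/ directory and the local_pic0 column are assumptions rather than part of the original schema:
$url = (string) $productinfo->picture[0];
$localFile = 'images/' . md5($url) . '.jpg';   // hashed filename avoids collisions; the .jpg extension is assumed

if (!file_exists($localFile)) {
    $data = file_get_contents($url);           // hit the affiliate server only once per image
    if ($data !== false) {
        file_put_contents($localFile, $data);
    }
}

mysql_query("INSERT INTO $table (local_pic0) VALUES ('" . mysql_real_escape_string($localFile) . "')");
Pages then reference the local file directly, so no extra DB or remote request is needed per image.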
I'm developing an app about books and I have a table for contents. Previously, I made the mistake of splitting the content myself and inserting the pieces into the database; after inserting 90 books I had 1000 records and realized this was wrong. Now I have a content table where every book's full content is stored in a single record, and I want to split that content on the "*" character.
But it doesn't work.
This is my previous code:
$sql2 = "SELECT tblcontent.content
from tblcontent,tblstory
where tblcontent.storyid=tblstory.storyid
and tblcontent.storyid='$storyid'";
$r2 = #mysqli_query($dbLink,$sql2);
$result2 = array();
while($res2 = #mysqli_fetch_assoc($r2))
{
$result2[] = $res2;
}
and I've added this code:
$result2=explode("*",$result2);
and
if(isset($result2)){
echo json_encode(array("result2"=>$result2));
}
But it doesn't work. Thanks in advance.
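For what it's worth, explode() works on a string, so it would have to be applied to each row's content field rather than to the $result2 array itself. A minimal sketch under that assumption, reusing $r2 from the query above:
$result2 = array();
while ($res2 = @mysqli_fetch_assoc($r2))
{
    // Split this book's content on "*" and keep the pieces with the row.
    $res2['content'] = explode("*", $res2['content']);
    $result2[] = $res2;
}

echo json_encode(array("result2" => $result2));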
I have tried to get a cleaner URL by adding a .htaccess file to my directory. However, I have stumbled upon a small problem which I haven't yet been able to solve. I let my members post content on my website. When content is posted, the title is saved and modified to produce a cleaner URL. For example,
/dir/post.php?id=362 with the title [Hello friends] becomes ->
/dir/Hello-friends
My problem is how to prevent the same URL from being produced over and over again. I want subsequent URLs with the same title to get something added to them, like a number. For example,
/dir/Hello-friends (The first post)
/dir/Hello-friends-2 (The second post, but here a number is added).
This is my PHP code:
$conn = new mysqli($servername, $username, $password, $dbname);
if (mysqli_connect_error()) {
die("Database connection failed: " . mysqli_connect_error());
}
function php_slug($string)
{
$slug = preg_replace('/[^a-z0-9-]+/', '-', trim(strtolower($string)));
return $slug;
}
$title = mysqli_real_escape_string($conn,$title1);
$text1 = mysqli_real_escape_string($conn,$text0);
$text2 = mysqli_real_escape_string($conn,$text00);
$text3 = mysqli_real_escape_string($conn,$text000);
$text4 = mysqli_real_escape_string($conn,$text0000);
$text5 = mysqli_real_escape_string($conn,$text00000);
$text6 = mysqli_real_escape_string($conn,$text000000);
$pid = $_POST['pid'];
$post_title = $title;
$post_title = htmlentities($title);
$sql_titel = "SELECT post_title FROM posts WHERE title = '$title'";
$result_titel = mysqli_query($con, $sql_titel);
$resultsFound = mysqli_num_rows($result_titel);
if ($resultsFound > 0) {
$resultsFound++;
$post_title .= '-'.$resultsFound;
}
$sql = "INSERT INTO posts (title, text1, text2, text3, text4, text5, text6, post_title, pid)
VALUES ('$title', '$text1', '$text2', '$text3', '$text4', '$text5', '$text6', '".php_slug($post_title)."', '$pid')";
if ($conn->query($sql) === TRUE) {
echo "<script>alert('controlling post...!')</script>";
} else {
echo "Error: " . $sql . "<br>" . $conn->error;
}
$conn->close();
}
If you want to add a random number:
if ($_POST['submit']) {
    $post_title = $title;
    $post_title = htmlentities($title);
    $sql_titel = "SELECT post_title FROM posts WHERE post_title = '$post_title'";
    $result_titel = mysqli_query($con, $sql_titel);
    if (mysqli_num_rows($result_titel) > 0) {
        $post_title = $post_title . '-' . mt_rand(1, 1000);
    }
}
A simple extension to your code is to use the number of rows returned, like this:
if ($_POST['submit']) {
    $post_title = htmlentities($title);

    // !!! You should use parameterized queries here !!!
    $sql_titel = "SELECT post_title FROM posts WHERE title = '$title'";
    $result_titel = mysqli_query($con, $sql_titel);

    // Using the number of rows returned as our collision ID:
    $sameNameID = mysqli_num_rows($result_titel);
    if ($sameNameID > 0) {
        // Bump it up by 1 (so we essentially get 0,2,3,4,5..):
        $sameNameID++;
        // Add it to the post title:
        $post_title .= '-'.$sameNameID;
    }
}
Importantly, notice that it's checking the title field rather than post_title.
Also be aware that you're probably vulnerable to SQL injection, i.e. a random person on the internet could do whatever they want to your database. htmlentities() does not protect you from injection; you should use prepared statements (e.g. PDO) instead.
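For illustration, a minimal sketch of the same title check using a PDO prepared statement; the connection credentials are placeholders, not values from your code:
$pdo = new PDO('mysql:host=localhost;dbname=mydb;charset=utf8', 'dbuser', 'dbpass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// The title is bound as a parameter, so quotes inside it cannot break the query.
$stmt = $pdo->prepare('SELECT COUNT(*) FROM posts WHERE title = ?');
$stmt->execute(array($title));
$sameNameID = (int) $stmt->fetchColumn();

if ($sameNameID > 0) {
    $post_title .= '-' . ($sameNameID + 1);
}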
But having said that, you might want to take inspiration from websites like StackOverflow itself, where a number (the article ID) is always present in the URL.
In StackOverflow's case, it's the ID which actually routes the request - this makes it possible to change the question (or title, in your case) later. For example, all of these link to this question:
https://stackoverflow.com/questions/41537052/
https://stackoverflow.com/questions/41537052/prevent-the-same-url-occuring
https://stackoverflow.com/questions/41537052/prevent-the-same-url-occuring-renamed
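A hedged sketch of that pattern applied here: put the row id first in the rewritten URL (for example /dir/362/hello-friends) and route purely on the number, ignoring the slug text. The URL shape and the lookup are assumptions, not your existing rewrite rules:
// e.g. REQUEST_URI = "/dir/362/hello-friends" or, after a rename, "/dir/362/hi-friends"
if (preg_match('#^/dir/(\d+)#', $_SERVER['REQUEST_URI'], $m)) {
    $id = (int) $m[1];   // only the numeric id matters for the lookup
    // ...fetch the post by $id; the slug part is cosmetic, so renamed titles keep their old URLs working
} else {
    header('HTTP/1.1 404 Not Found');
    exit;
}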
I have this function: http://pastebin.ca/2058418
It basically checks whether a table contains some URLs to pictures of a band. If it does, it orders the table randomly, chooses the first result and outputs the HTML to insert the picture. If the table does not have pictures for that specific band, it downloads an XML file containing the image URLs, parses the XML, inserts the URLs into the table, and then builds the HTML for the image as before.
In terms of HTML output, you can't tell whether the image URL has been cached or not. HOWEVER, when the image URL is being cached for the first time, whatever web browser you use will not display the image. The HTML is fine; the image is linked correctly.
Do you have any ideas? A live version of the site which contains this function is here: http://redfern.me/similar/. I have just emptied the tables, so there shouldn't be many cached URLs. Try choosing a band and see if the image loads. You can tell whether the URLs were cached or not by looking at the bottom of the page.
Basically, it looks like you weren't returning anything after you fetched the images for the first time.
<?php
function getimage($artist) {
    $api_key = "XXXXXXXXX";
    $iquery = mysql_query("SELECT url FROM `images` WHERE artist = '" . $artist . "' ORDER BY RAND() LIMIT 1");
    if ($artist != "") {
        $artist = str_replace(" ", "+", $artist);
        if (mysql_num_rows($iquery) == 0) {
            // Nothing cached yet: fetch the image list from the API and cache it.
            $url = "http://developer.echonest.com/api/v4/artist/images?format=xml&api_key=" . $api_key . "&name=" . $artist . "&results=20";
            $data = file_get_contents($url);
            if ($data === false) { return 'Error Getting Image'; }   // note: === comparison, not assignment
            $images = new SimpleXMLElement($data);
            foreach ($images as $image) {
                foreach ($image->image as $indimage) {
                    $insiquery = "INSERT INTO images (id, artist, url) VALUES (NULL, '$artist', '" . $indimage->url . "')";
                    mysql_query($insiquery);
                }
            }
            // Return one of the freshly cached URLs ($indimage still holds the last one).
            return "<img src=\"" . $indimage->url . "\" alt=\"$artist image\" />";
        } else {
            // Already cached: use the randomly selected row from the query above.
            $imgurl = mysql_fetch_array($iquery);
            return "<img src=\"" . $imgurl['url'] . "\" alt=\"$artist image\" />";
        }
    } else {
        return "Image Acquire Function Error: <i>No Band Specified</i>";
    }
    return null;
}
?>
echo getimage('Britney Spears');
That's simply because when you enter the "XML call" branch, your variable $imgurl[0] is empty.
Try something like this:
    $images = new SimpleXMLElement($data);
    $xmlImgUrl = array();
    foreach ($images as $image) {
        foreach ($image->image as $indimage) {
            $xmlImgUrl[] = $indimage->url;
            $insiquery = "INSERT INTO images (id, artist, url) VALUES (NULL, '$artist', '" . $indimage->url . "')";
            mysql_query($insiquery);
        }
    }
}   // end of the if (mysql_num_rows($iquery) == 0) block from the original function

// $nrows is assumed to hold mysql_num_rows($iquery), captured before the branch.
$imgurl = $nrows == 0 ? $xmlImgUrl : mysql_fetch_row($iquery);
if (!empty($imgurl)) echo "<img src=\"" . $imgurl[0] . "\" alt=\"$artist image\">";
This builds an array of image URLs when there are no results in MySQL.
Long before I knew anything - not that I know much even now - I designed a web app in PHP which inserted data into my MySQL database after running the values through htmlentities(). I eventually came to my senses, removed this step, applied the encoding on output rather than input, and went on my merry way.
However, I've since had to revisit some of this old data, and unfortunately I have an issue: when it's displayed on screen, I'm getting values that have effectively been run through htmlentities() twice.
So, is there a MySQL or phpMyAdmin way of changing all the older, affected rows back into their proper characters, or will I have to write a script to read, decode and update each of the 17 million rows across 12 tables?
EDIT:
Thanks for the help, everyone. I wrote my own answer below with some code in it; it's not pretty, but it worked on the test data earlier, so barring someone pointing out a glaring error in my code while I'm in bed, I'll be running it on a backup DB tomorrow and then on the live one if that works out all right.
I ended up using this. It's not pretty, but I'm tired, it's 2am, and it did its job! (Edit: on test data.)
$tables = array('users', 'users_more', 'users_extra', 'forum_posts', 'posts_edits', 'forum_threads', 'orders', 'product_comments', 'products', 'favourites', 'blocked', 'notes');

foreach ($tables as $table)
{
    // Only touch rows created before the cutoff, i.e. before the double encoding was fixed.
    $sql = "SELECT * FROM {$table} WHERE data_date_ts < '{$encode_cutoff}'";
    $rows = $database->query($sql);
    while ($row = mysql_fetch_assoc($rows))
    {
        $new = array();
        foreach ($row as $key => $data)
        {
            $new[$key] = $database->escape_value(html_entity_decode($data, ENT_QUOTES, 'UTF-8'));
        }
        // Drop the first column (assumed to be the id) so it isn't overwritten.
        array_shift($new);

        $new_string = "";
        $i = 0;
        foreach ($new as $new_key => $new_data)
        {
            if ($i > 0) { $new_string .= ", "; }
            $new_string .= $new_key . "='" . $new_data . "'";
            $i++;
        }
        $sql = "UPDATE {$table} SET " . $new_string . " WHERE id='" . $row['id'] . "'";
        $database->query($sql);
        // plus some code to check that all out
    }
}
Since PHP was the method of encoding, you'll want to use it to decode. You can use html_entity_decode to convert them back to their original characters. Gotta loop!
Just be careful not to decode rows that don't need it. Not sure how you'll determine that.
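One hedged heuristic for spotting the rows that need it (assuming the second htmlentities() pass turned & into &amp;, so doubly encoded text contains sequences like &amp;quot; or &amp;#039;):
// Returns true if the text looks like it went through htmlentities() twice.
function looks_double_encoded($text) {
    return preg_match('/&amp;(?:[a-zA-Z]+|#\d+);/', $text) === 1;
}

// Example: '&amp;quot;hello&amp;quot;' => true, '&quot;hello&quot;' => false.
if (looks_double_encoded($value)) {
    $value = html_entity_decode($value, ENT_QUOTES, 'UTF-8');
}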
I think writing a PHP script is a good thing to do in this situation. As Dave said, you can use the html_entity_decode() function to convert your texts back.
Try your script on a table with few entries first; it will save you a lot of testing time. Of course, remember to back up your table(s) before running the PHP script.
I'm afraid there is no shortcut. The computation for millions of rows remains quite expensive no matter how you convert the data back, so go for a PHP script... it's the easiest way.
This is my bullet-proof version. It iterates over all tables and string columns in a database, determines the primary key(s) and performs the updates.
It is intended to be run from the command line so you can see the progress information.
<?php
$DBC = new mysqli("localhost", "user", "dbpass", "dbname");
$DBC->set_charset("utf8");

$tables = $DBC->query("SHOW FULL TABLES WHERE Table_type='BASE TABLE'");
while ($table = $tables->fetch_array()) {
    $table = $table[0];
    $columns = $DBC->query("DESCRIBE `{$table}`");
    $textFields = array();
    $primaryKeys = array();
    while ($column = $columns->fetch_assoc()) {
        // check for char, varchar, text, mediumtext and so on
        if ($column["Key"] == "PRI") {
            $primaryKeys[] = $column['Field'];
        } else if (strpos($column["Type"], "char") !== false || strpos($column["Type"], "text") !== false) {
            $textFields[] = $column['Field'];
        }
    }
    if (!count($primaryKeys)) {
        echo "Cannot convert table without primary key: '$table'\n";
        continue;
    }
    foreach ($textFields as $textField) {
        $sql = "SELECT `" . implode("`,`", $primaryKeys) . "`,`$textField` from `$table` WHERE `$textField` like '%&%'";
        $candidates = $DBC->query($sql);
        $tmp = $DBC->query("SELECT FOUND_ROWS()");
        $rowCount = $tmp->fetch_array()[0];
        $tmp->free();
        echo "Updating $rowCount in $table.$textField\n";
        $count = 0;
        while ($candidate = $candidates->fetch_assoc()) {
            $oldValue = $candidate[$textField];
            $newValue = html_entity_decode($candidate[$textField], ENT_QUOTES | ENT_XML1, 'UTF-8');
            if ($oldValue != $newValue) {
                $sql = "UPDATE `$table` SET `$textField` = '"
                    . $DBC->real_escape_string($newValue)
                    . "' WHERE ";
                foreach ($primaryKeys as $pk) {
                    $sql .= "`$pk` = '" . $DBC->real_escape_string($candidate[$pk]) . "' AND ";
                }
                $sql .= "1";
                $DBC->query($sql);
            }
            $count++;
            echo "$count / $rowCount\r";
        }
    }
}
?>
cheers
Roland
It's a bit kludgy but I think the mass update is the only way to go...
$Query = "SELECT row_id, html_entitied_column FROM table";
$result = mysql_query($Query, $connection);
while($row = mysql_fetch_array($result)){
$updatedValue = html_entity_decode($row['html_entitied_column']);
$Query = "UPDATE table SET html_entitied_column = '" . $updatedValue . "' ";
$Query .= "WHERE row_id = " . $row['row_id'];
mysql_query($Query, $connection);
}
This is simplified, no error handling etc.
Not sure what the processing time would be on millions of rows so you might need to break it up into chunks to avoid script timeouts.
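A hedged sketch of one way to chunk it, keyed on row_id so each batch picks up where the previous one stopped; the batch size and the set_time_limit() call are assumptions:
set_time_limit(0);                     // or run from the CLI, where there is no time limit
$batchSize = 5000;
$lastId = 0;

do {
    $Query = "SELECT row_id, html_entitied_column FROM table "
           . "WHERE row_id > $lastId ORDER BY row_id LIMIT $batchSize";
    $result = mysql_query($Query, $connection);
    $rows = 0;
    while ($row = mysql_fetch_array($result)) {
        $lastId = $row['row_id'];
        $updatedValue = mysql_real_escape_string(html_entity_decode($row['html_entitied_column']), $connection);
        mysql_query("UPDATE table SET html_entitied_column = '$updatedValue' WHERE row_id = $lastId", $connection);
        $rows++;
    }
} while ($rows == $batchSize);         // stop once a batch comes back short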
I had the exact same problem. Since I had multiple clients running the application in production, I wanted to avoid running a PHP script to clean the database for every one of them.
I came up with a solution that is far from perfect, but does the job painlessly.
Track all the spots in your code where you use htmlentities() before inserting data, and remove those calls.
Change your "display data as HTML" method to something like this:
return html_entity_decode(htmlentities($chaine, ENT_NOQUOTES), ENT_NOQUOTES);
The undo-redo process is kind of ridiculous, but it does the job. And your database will slowly clean itself every time users update the incorrect data.
How do I go about retrieving all of the blogposts from my Wordpress blog via an external PHP script? Is this even possible? I have seen the API for creating a Wordpress plugin, but I'm not sure if that is relevant in this particular case. Any suggestions are greatly appreciated. Thank you.
Your external script can load the WordPress API with
include('blog/wp-load.php'); // change blog/ to your actual path
Then you can use get_posts or query_posts to get the posts you want.
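For example, a minimal sketch using get_posts() once wp-load.php has been included (the path and the output markup are assumptions):
<?php
require_once 'blog/wp-load.php';   // change blog/ to your actual path

// Fetch the five most recent published posts.
$posts = get_posts(array('numberposts' => 5, 'post_status' => 'publish'));

foreach ($posts as $post) {
    // $post is a WP_Post object; get_permalink() builds the public URL.
    echo '<a href="' . get_permalink($post->ID) . '">' . esc_html($post->post_title) . "</a><br />\n";
}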
Wordpress has a feed that your posts get published to by default. You can read the XML feed, and parse out the relevant data.
I've got a vanity site that I use to send clients to, and I also contribute occasionally to a blog. One of the things that my vanity site shows is a short list of links to the top 5 most recent posts from the blog. Here's the code I use to do it:
<ul>
<?php
// Slurp the latest posts from Wordpress' RSS feed, and cache them for a short time.
$posts = '';
$cachefile = 'my-blog-cache-file.html';
if (is_readable($cachefile) && filemtime($cachefile) > (time() - 1800)) {
    readfile($cachefile);
}
else {
    $doc = new DOMDocument();
    $doc->load('http://my.wordpress.blog/feed');
    $items = $doc->getElementsByTagName('item');
    foreach ($items as $i)
    {
        if ($i->hasChildNodes()) {
            $title = $link = $author = '';   // reset for each item so values don't leak between posts
            foreach ($i->childNodes as $cn) {
                if ($cn->nodeName == 'title') $title = $cn->nodeValue;
                if ($cn->nodeName == 'link') $link = $cn->nodeValue;
                if ($cn->nodeName == 'dc:creator') $author = $cn->nodeValue;
            }
            if ($title != '' && $link != '' && $author == 'my name') {
                $posts .= '<li><a href="'.$link.'">'.$title.'</a></li>'."\n";
            }
        }
    }
    file_put_contents($cachefile, $posts);
    echo $posts;
}
?>
</ul>
Feel free to use this code. You can examine the feed of your own blog and decide what elements you want to parse out. Generally your feed will be located at your blog's URL, with /feed tacked onto the end.
The other alternative of course is to use PHP to connect to your database and read the database yourself :)
// You'll want to set your database credentials
mysql_connect($server, $username, $password);
mysql_select_db($wp_db);

// Modify the fields to pull whatever data you need for the output, even perhaps join the wp_users table for user data
// Setting the ORDER BY to DESC to mimic the Wordpress ordering with newest first
$sql = "SELECT ID, post_author, post_date, post_content, post_title, post_status, post_name, guid FROM wp_posts ORDER BY post_date DESC";
$data = mysql_query($sql);

// count() doesn't work on a result resource; mysql_num_rows() gives the real row count
$num = mysql_num_rows($data);
for ($i = 0; $i < $num; $i++) {
    $row = mysql_fetch_array($data);
    // Output your posts to something
    print_r($row);
}
This should allow you to play with the data far more easily :)
You'll want to take a look at Magpie. It's a fairly straightforward RSS client for PHP, which lets you subscribe to any feed and get the posts with just a few lines of code.
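A hedged sketch of typical Magpie usage, assuming rss_fetch.inc is on your include path and the feed URL is a placeholder:
<?php
require_once 'rss_fetch.inc';   // MagpieRSS entry point

$rss = fetch_rss('http://my.wordpress.blog/feed');

// Each item is an associative array with keys such as 'title' and 'link'.
foreach (array_slice($rss->items, 0, 5) as $item) {
    echo '<a href="' . htmlspecialchars($item['link']) . '">' . htmlspecialchars($item['title']) . "</a><br />\n";
}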