I am currently trying to make an e-zine using WordPress, and have most of it done. The homepage displays a list of the "pieces" included in that edition of the e-zine. I would like to make it so that, when an edition expires (currently using the Post Expirator plugin), a new page is created automatically that resembles the front page, showing the index of that particular (now expired) edition.
I'm not very experienced with PHP, and still a newbie at WordPress. How could I accomplish this?
The idea is this: you just have to get the expiration date and make a condition with it. You only need basic PHP skills to do it. Here's the logic:
// assumes $curdate and $expiration_date are already set (current time and the post's expiration date)
if ( $curdate > $expiration_date ) { // change the condition to ">=" if you want to create the post on and after the expiration date
    // sample WordPress wp_insert_post
    $my_post = array(
        'post_title'    => 'My post',
        'post_content'  => 'This is my post.',
        'post_status'   => 'publish',
        'post_author'   => 1,
        'post_category' => array( 8, 39 )
    );
    // Insert the post into the database
    wp_insert_post( $my_post );
}
For more info, visit http://codex.wordpress.org/Function_Reference/wp_insert_post
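If you need the actual expiration date that Post Expirator stores, it keeps it in post meta; here is a minimal sketch, assuming the plugin's usual meta key '_expiration-date' (a Unix timestamp; the key may differ between plugin versions):
// read the expiration timestamp saved by Post Expirator (assumed meta key)
$expiration_date = get_post_meta( get_the_ID(), '_expiration-date', true );
$curdate = current_time( 'timestamp' );
if ( $expiration_date && $curdate > $expiration_date ) {
    // the edition has expired; build $my_post and call wp_insert_post() as above
}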
Here is what I ended up doing, using Felipe's suggestions as a starting point. There might be a less convoluted way of doing this, but, as I said, I'm just a beginner, so this is what I came up with:
First, I created a $volnum variable, which keeps track of the current volume number. Then, I cache the front page so that later I can save it as an independent HTML document.
This is at the beginning of the index.php, before the get_header().
<?php $volnum = ''; // will hold the current volume number ?>
<?php ob_start(); // start buffering the front page output ?>
On the front page, I have an editorial and, next to it, the content index. I save the editorial's tag (which is always "voln", where 'n' is the volume number) as the volume number (the foreach may not be necessary since the editorial only has one tag):
<?php $tags = get_the_tags();
foreach ($tags as $tag){
$volnum = $tag->name;
}
?>
Finally, at the end of the document, after the closing HTML, I have added the following code:
<?php
$handle = opendir("past_vol/");
$volExists = false;
// go through every file in the directory and see if this volume was already saved
while ( ($name = readdir($handle)) !== false ) {
    if ( $volnum.".html" == $name ) {
        $volExists = true;
        break;
    }
}
closedir($handle);
if ( $volExists == false ) {
    $cachefile = "past_vol/".$volnum.".html";
    $fp = fopen($cachefile, 'w');
    fwrite($fp, ob_get_contents());
    fclose($fp);
}
ob_end_flush();
?>
"past_vol" is the directory where I am saving the past volume html files. So the directory is opened, the amount of files is counted, and a loop that goes through each file's name is started. If a file with the same name as $volnum, then $volExists is true. If at the end of the loop $volExists is false, then it saves the cached page.
Again, it could probably be optimized a whole lot, but for now this works!
I have a problem with my WordPress query.
What I'm trying to do:
I have a CSV file with product data (name, price, stock, SKU, etc.)
I want to import this file, but when I try to get the product ID by SKU the query load is too high for my server. I'm also doing something ill-advised: inside a foreach I try to look up every product_id.
Is it possible to split my WP query without killing my server?
I've tried sleep(), but it makes no difference...
My code is here:
public function new_import_stock_prices(){
global $wpdb;
global $post;
if ( !function_exists( 'wc_get_product_id_by_sku' ) ) {
require_once '/includes/wc-product-functions.php';
}
echo '<h1>Import stanów magazynowych i cen z pliku CSV </h1>';
echo '<h4>Plik pobierany jest z netis/products.csv</h4>';
$fn = 'https://e-xxxxx.pl/xxx/products.csv';
$file_array = file($fn);
echo '<table>';
echo '<tr>';
echo '<td>LP</td>';
echo '<td>Nazwa</td>';
echo '<td>SKU</td>';
echo '<td>Stan magazynowy</td>';
echo '<td>Cena</td>';
echo '<td>Product ID</td>';
$i = 1;
if ( in_array( 'woocommerce/woocommerce.php', apply_filters( 'active_plugins', get_option( 'active_plugins' ) ) ) ) {
foreach ($file_array as $line_number =>&$line)
{
if ($line_number > 0 && $line_number % 10 == 0) {
$row2=explode('|',$line);
$sku = $row2[1];
// get the product ID from the SKU
$product_id = $wpdb->get_var( $wpdb->prepare( "SELECT post_id FROM $wpdb->postmeta WHERE meta_key='_sku' AND meta_value='%s' LIMIT 1", $sku ) );
// Get an instance of the WC_Product object
$product = new WC_Product( $product_id );
//Get product stock quantity and stock status
$stock_quantity = $product->get_stock_quantity();
$stock_status = $product->get_stock_status();
echo '<tr>';
echo '<td>'.$i.'</td>';
echo '<td>'.$row2[0].'</td>';
echo '<td>'.$row2[1].'</td>';
echo '<td>'.$row2[5].'</td>';
echo '<td>'.$row2[2].'</td>';
echo '<td>'.$product_id.'</td>';
echo '</tr>';
$i = $i +1;
sleep(10);
}
}
}
echo '</table>';
}
BTW. my wp_postmeta table has ~900 000+ records :O
"And I want to import this file"
I don't see any code for importing, I see code for displaying. Assuming by import, you mean display:
What's probably happening is one of a few things:
You're running out of memory (you should get an error for this)
Don't use file($fn); use file functions that open the file and read it line by line, such as fgetcsv
You're running out of time
Not much you can do about this, except send less data
You're overwhelming the browser buffer by sending too much output
Again, not much you can do about this but send less data.
The only real solution (Assuming by import, you mean display) is to page the data.
Now even with a file you can page the data, but I would suggest using SplFileObject instead of the procedural file functions. That said, you can page using the procedural style, but it's by byte offset, not page number.
While I can't code an entire paging system I can give you some tips:
For example
// hard to tell how many lines are in the file
$fn = 'https://e-xxxxx.pl/xxx/products.csv';
$f = fopen($fn, 'r');
fseek($f, (int) $_GET['offset']); // seek to a byte offset
$i = 0;
while (!feof($f) && ($row = fgetcsv($f)) && null !== $row[0]) {
    // ... output the row here ...
    ++$i;
    if ($i == 10) {
        $offset = ftell($f); // byte offset of the next page, pass this around in the url
        break;
    }
}
ftell and fseek allow you to get or move the file pointer (in bytes), so you can start reading from a predefined offset that you can pass around in the URL, etc.
You can do the same thing with SplFileObject, but a bit better.
try {
    $fn = 'https://e-xxxxx.pl/xxx/products.csv';
    $csv = new SplFileObject($fn, 'r');
} catch (RuntimeException $e) {
    printf("Error opening csv: %s\n", $e->getMessage());
}
$csv->seek((int) $_GET['line']); // seek to a predefined line
while (!$csv->eof() && ($row = $csv->fgetcsv()) && null !== $row[0]) {
    // ... output the row here ...
    if (($csv->key() - $_GET['line']) == 10) {
        $line = $csv->key(); // line number of the next page, pass this in the url
        break;
    }
}
The main advantage of SPL is you can use the row number, which is much easier to work with.
You can also get the total number of lines in a file like this
$csv->seek(PHP_INT_MAX);
$total = $csv->key();
$csv->rewind(); //or $csv->seek($_GET['line'])
Basically this seeks to the largest possible INT PHP can handle, and because the file has a finite number of lines, the pointer ends up on the last line of the file; then, using key, we can get the line number. After that we simply rewind to where we want to read from.
I mention the total number of rows because in paging it's nice to be able to show that.
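For example, here is a tiny sketch of turning that total into page links (assuming 10 rows per page and the ?line= parameter from the snippet above):
// build page links from the total line count (10 rows per page assumed)
$per_page = 10;
$pages = (int) ceil($total / $per_page);
for ($p = 0; $p < $pages; $p++) {
    printf('<a href="?line=%d">%d</a> ', $p * $per_page, $p + 1);
}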
Another option (to display)
Besides paging, you can output the page without buffering.
// Turn off output buffering
ini_set('output_buffering', 'off');
// Turn off PHP output compression
ini_set('zlib.output_compression', false);
//Flush (send) the output buffer and turn off output buffering
//ob_end_flush();
while (ob_get_level()) ob_end_flush();
// Implicitly flush the buffer(s)
ini_set('implicit_flush', true);
ob_implicit_flush(true);
Combine this with one of the methods I showed above to read the file 1 line at a time, and you may be able to eventually read all that data out.
Saving
For saving the data, you're probably going to need to break it into batches; the same paging approach can be used here (using an offset or line number), so that you only import a couple thousand rows at a time. I would also recommend not outputting the data, because you can give the browser more buffer than it can handle and lock it up. However, if you page the data, you can break it into small enough chunks that the browser can handle it.
You can even automate this using successive AJAX calls. Basically, you would call the code on the backend to save a certain number of rows (x). The server would respond, and then you would make another call for (x) more rows, save & repeat. A sketch of such an endpoint follows.
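This is only a rough sketch of what that batch endpoint could look like, assuming WooCommerce 3.x CRUD methods, the column order from the question (name|sku|price|...|stock), and a hypothetical wp_ajax action name:
// hypothetical admin-ajax endpoint: process a limited number of CSV rows per call
add_action( 'wp_ajax_batch_import_stock', function () {
    $batch_size = 500;
    $start_line = isset( $_GET['line'] ) ? (int) $_GET['line'] : 0;

    $csv = new SplFileObject( 'https://e-xxxxx.pl/xxx/products.csv', 'r' );
    $csv->seek( $start_line );

    $processed = 0;
    while ( !$csv->eof() && $processed < $batch_size ) {
        $row = explode( '|', trim( $csv->current() ) );
        $csv->next();
        $processed++;
        if ( count( $row ) < 6 ) {
            continue; // skip malformed lines
        }
        $product_id = wc_get_product_id_by_sku( $row[1] );
        if ( !$product_id ) {
            continue; // unknown SKU
        }
        $product = wc_get_product( $product_id );
        $product->set_stock_quantity( (int) $row[5] );
        $product->set_regular_price( $row[2] );
        $product->save();
    }

    // tell the caller where to resume, or that the import is finished
    wp_send_json( array(
        'next_line' => $start_line + $processed,
        'done'      => $csv->eof(),
    ) );
} );
The JavaScript side would simply keep calling this action with the returned next_line until done is true.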
"I want to display all product IDs to check that they're correct. The next step is changing the stock and price, and saving the products."
It would be easier to do this work in something like Excel, just from a data-entry standpoint; no one wants to edit thousands of rows on a web page and then have their session time out or something like that.
Hope that helps.
I need to start from the msgstr to retrieve the msgid. The reason is that I've got several translations, all EN to SOME OTHER LANG, but now in the current installation I need to go back from SOME OTHER LANG to EN. Note that I'm also working in WordPress, though maybe that is not important. There are a couple of similar questions here, but not exactly what I need.
So is there a way to accomplish this?
WordPress ships with a PO reader and writer as part of the pomo package. Below is a pretty simple script that swaps the msgid and msgstr fields around and writes out a new file.
As already pointed out in the comments, there are several things that make this potentially problematic:
Your target strings must all be unique (and not empty)
If you have message context, this will stay in the original language.
Your original language must have only two plural forms.
Onward -
<?php
require_once 'path/to/wp-includes/pomo/po.php';
$source_file = 'path/to/languages/old-file.po';
$target_file = 'path/to/languages/new-file.po';
// parse original message catalogue from source file
$source = new PO;
$source->import_from_file($source_file);
// prep target messages with a different language
$target = new PO;
$target->set_headers( $source->headers );
$target->set_header('Language', 'en_US');
$target->set_header('Language-Team', 'English from SOME OTHER LANG');
$target->set_header('Plural-Forms', 'nplurals=2; plural=n!=1;');
/** @var Translation_Entry $entry */
foreach( $source->entries as $entry ){
$reversed = clone $entry;
// swap msgid and msgstr (singular)
$reversed->singular = $entry->translations[0];
$reversed->translations[0] = $entry->singular;
// swap msgid_plural and msgstr[1] (plural)
if( $entry->is_plural ){
$reversed->plural = $entry->translations[1];
$reversed->translations[1] = $entry->plural;
}
// append target file with modified entry
$target->add_entry( $reversed );
}
// write final file back to disk
file_put_contents( $target_file, $target->export() );
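If you also need the binary catalogue, the same pomo package can write it; here is a small follow-up sketch, assuming the MO class from pomo/mo.php (it shares the headers/entries structure with PO):
// optionally compile the reversed catalogue straight to a .mo file
require_once 'path/to/wp-includes/pomo/mo.php';

$mo = new MO;
$mo->headers = $target->headers;
$mo->entries = $target->entries;
$mo->export_to_file( 'path/to/languages/new-file.mo' );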
Have a project where I'm scraping a few sites with data, then outputting onto one site. To help with load times, I'm trying to rig it so once every 10 mins, my main website does a full data scrape, then stores it all into a cache folder called "cache", stored in the root folder. Then, anytime I refresh main site after that 10 mins, it pulls from the cache, making load times quite fast at that point.
Trouble is, load times haven't changed, which they really should have using this method, so I'm doing something wrong. Would appreciate any help. Now, I can confirm the data IS being stored in the cache, because I see the files automatically appearing there. So the issue has to be that the code is broken where it's supposed to grab the data from the cache: after the data is stored every 10 minutes, it's not being read back.
Part of me wonders if the issue is with how the filenames are being saved in the cache; right now they seem to be random values. For example, one is named f32dd7f0b85eb4c1be0bb9a417cc29ea553d898e.html
I'd think it needs to be saved as a specific file name. Not sure how to achieve that, though. The code at the end of my PHP reference files seems to specify this, so I'm not sure what the issue is. The code that is supposed to be doing this is at the bottom of the post.
I'm really new to PHP, and honestly have only gotten this far through some very nice and helpful people. I'm close, but not quite there yet with this cache framework.
global.php in root folder:
<?php
$_cache_time =600; //10 minutes
$_cache_dir="./cache"; //cache dir
function deleteBlankInArray($var){
return !ctype_space($var)&&!empty($var);
}
function cache_start($filename)
{
global $_cache_dir,$_cache_time;
$cachefile = $_cache_dir.'/'.sha1($filename).'.html';
ob_start();
if (file_exists($cachefile) && (time() - $_cache_time < filemtime($cachefile)))
{
include($cachefile);
ob_flush();
return true;
}
return false;
}
function cache_end($filename)
{
global $_cache_dir,$_cache_time;
$cachefile = $_cache_dir.'/'.sha1($filename).'.html';
$fp = fopen($cachefile, 'w');
fwrite($fp, ob_get_contents());
fclose($fp);
ob_flush();
}
My main website is an XHTML site. It's referencing these PHP pages like this:
<?php include 's&pcurrent.php';?>
<?php include 'news.php';?>
It's referencing/outputting multiple PHP files, which is why load times are slow when not pulling from the cache.
And lastly, this is an example of one of the PHP files that is being "included". This one is called litecoinchange.php
<?php
error_reporting(E_ALL^E_NOTICE^E_WARNING);
include_once "global.php";
//filename of the file
if(!cache_start("litecoinchange.php")){
$doc = new DOMDocument;
// We don't want to bother with white spaces
$doc->preserveWhiteSpace = false;
$doc->strictErrorChecking = false;
$doc->recover = true;
$doc->loadHTMLFile('https://coinmarketcap.com/');
$xpath = new DOMXPath($doc);
$query = "//tr[#id='id-litecoin']";
$entries = $xpath->query($query);
foreach ($entries as $entry) {
$result = trim($entry->textContent);
$ret_ = explode(' ', $result);
//make sure every element in the array don't start or end with blank
foreach ($ret_ as $key=>$val){
$ret_[$key]=trim($val);
}
//delete the empty element and the element is blank "\n" "\r" "\t"
//I modify this line
$ret_ = array_values(array_filter($ret_, 'deleteBlankInArray'));
//echo the last element
echo $ret_[7];
//filename of the file
cache_end("litecoinchange");
}
}
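One thing worth noting about these two files as posted: global.php derives the cache file name from sha1() of whatever string it is given, so cache_start() and cache_end() only cooperate if they receive the identical key. A quick usage illustration (hypothetical key):
// the cache file only gets reused if both calls hash the same string
include_once "global.php";

if (!cache_start("litecoinchange")) {
    // ... scrape and echo the value here ...
    cache_end("litecoinchange"); // same key as above, so the same sha1 file name
}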
I'm working on a small project where users can see images tagged with, in this case, "kitties". Instagram only allows 5000 requests/hour; I don't think it will reach this, but I'm choosing to cache anyway. Also because I can't figure out how to get the back-link to work.
I can only get the link for the next page; the link for the recent page then becomes the current page, a link to itself.
Also, the API can return a strange number of images, sometimes 14, sometimes 20 and so on. I want it to always show 20 images per page and only have 5 pages (100 images), and then update this file every 5-10 minutes or so.
So my plan is to store about 100 images in a file. I got it working, but it's incredibly slow.
The code looks like this:
$cachefile = "instagram_cache/".TAG.".cache";
$num_requests = 0; //Just for developing and check how many request it does
//If the file does not exsists or is older than *UPDATE_CACHE_TIME* seconds
if (!file_exists($cachefile) || time()-filemtime($cachefile) > UPDATE_CACHE_TIME)
{
$images = array();
$current_file = "https://api.instagram.com/v1/tags/".TAG."/media/recent?client_id=".INSTAGRAM_CLIENT_ID;
$current_image_index = 0;
for($i = 0; $i >= 0; $i++)
{
//Get data from API
$contents = file_get_contents($current_file);
$num_requests++;
//Decode it!
$json = json_decode($contents, true);
//Get what we want!
foreach ($json["data"] as $x => $value)
{
array_push($images, array(
'img_nr' => $current_image_index,
'thumb' => $value["images"]["thumbnail"]["url"],
'fullsize' => $value["images"]["standard_resolution"]["url"],
'link' => $value["link"],
'time' => date("d M", $value["created_time"]),
'nick' => $value["user"]["username"],
'avatar' => $value["user"]["profile_picture"],
'text' => $value['caption']['text'],
'likes' => $value['likes']['count'],
'comments' => $value['comments']['data'],
'num_comments' => $value['comments']['count'],
));
//Check if the requested amount of images is equal or more...
if($current_image_index > MAXIMUM_IMAGES_TO_GET)
break;
$current_image_index++;
}
//Check if the requested amount of images is equal or more, even in this loop...
if($current_image_index > MAXIMUM_IMAGES_TO_GET)
break;
if($json['pagination']['next_url'])
$current_file = $json['pagination']['next_url'];
else
break; //No more files to get!
}
file_put_contents($cachefile, json_encode($images));
}
This feels like a very ugly hack; any ideas on how to make this work better?
Or can someone tell me how to make that "back-link" work like it should? (Yes, I could use JS and go -1 in history, but no!)
Any ideas, suggestions, help, comments etc are appreciated.
Why not subscribe to real-time updates and store the images in the DB? Then, when they are rendered, you can check if the image URL is valid (i.e. check if the photo has been deleted). Getting the data from your own DB will be much faster than from Instagram.
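For that validity check, a minimal sketch using a plain HTTP request against the stored URL (the 'thumb' field here is just the one from the question's $images array):
// returns true if the image URL still resolves, i.e. the photo hasn't been deleted
function instagram_image_exists($url) {
    $headers = @get_headers($url);
    return $headers && strpos($headers[0], '200') !== false;
}

// usage: skip cached rows whose thumbnail no longer exists
// if (!instagram_image_exists($image['thumb'])) { continue; }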
I would like to create a cache for the PHP pages on my site. I found many solutions, but what I want is a script which can generate an HTML page from my database. For example:
I have a page for categories which grabs all the categories from the DB, so the script should be able to generate an HTML page along the lines of my-categories.html. Then, if I choose a category, I should get a my-x-category.html page, and so on for other categories and sub-categories.
I can see that some websites have URLs like: www.the-web-site.com/the-page-ex.html
even though they are dynamic.
Thanks a lot for the help.
Check the ob_start() function:
ob_start();
echo 'some_output';
$content = ob_get_contents();
ob_end_clean();
echo 'Content generated :'.$content;
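Tying that to the question, here is a small sketch of capturing a page's output and saving it as a static file (categories.php is a hypothetical script that prints the category listing from the DB):
// capture the generated page and save it as my-categories.html
ob_start();
include 'categories.php';          // hypothetical: prints the category listing from the DB
$html = ob_get_clean();
file_put_contents('my-categories.html', $html);
echo $html;                        // still serve the page to the current visitor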
You can get URLs like that using URL rewriting. E.g. for Apache, see mod_rewrite:
http://httpd.apache.org/docs/2.2/mod/mod_rewrite.html
You don't actually need to be creating the files. You could create them, but it's more complicated, as you need to decide when to update them if the data changes.
In my opinion this is the best solution. I use it to cache a JSON file for my Android app, and it can easily be reused in other PHP files.
It reduced the file size from ~1 MB to ~163 KB (gzip).
Create a cache folder in your directory.
Then create a cache_start.php file and paste in this code:
<?php
header("HTTP/1.1 200 OK");
//header("Content-Type: application/json");
header("Content-Encoding: gzip");
$cache_filename = basename($_SERVER['PHP_SELF']) . "?" . $_SERVER['QUERY_STRING'];
$cache_filename = "./cache/".md5($cache_filename);
$cache_limit_in_mins = 60; // cache lifetime: 60 minutes (one hour)
if (file_exists($cache_filename))
{
$secs_in_min = 60;
$diff_in_secs = (time() - ($secs_in_min * $cache_limit_in_mins)) - filemtime($cache_filename);
if ( $diff_in_secs < 0 )
{
print gzencode(file_get_contents($cache_filename)); // the cached file is stored uncompressed, so gzip it here to match the header
exit();
}
}
ob_start("ob_gzhandler");
?>
Create cache_end.php and paste in this code:
<?php
$content = ob_get_contents();
ob_end_clean();
$file = fopen ( $cache_filename, 'w' );
fwrite ( $file, $content );
fclose ( $file );
echo gzencode($content);
?>
Then create, for example, index.php (the file which you want to cache):
<?php
include "cache_start.php";
echo "Hello Compress Cache World!";
include "cache_end.php";
?>
Manual caching (creating the HTML and saving it to a file) may not be the most efficient way, but if you want to go down that path I recommend the following (ripped from a simple test app I wrote to do this):
$cache_filename = basename($_SERVER['PHP_SELF']) . "?" . $_SERVER['QUERY_STRING'];
$cache_limit_in_mins = 60 * 32; // this forms 32hrs
// check if we have a cached file already
if ( file_exists($cache_filename) )
{
$secs_in_min = 60;
$diff_in_secs = (time() - ($secs_in_min * $cache_limit_in_mins)) - filemtime($cache_filename);
// check if the cached file is older than our limit
if ( $diff_in_secs < 0 )
{
// it isn't, so display it to the user and stop
print file_get_contents($cache_filename);
exit();
}
}
// create an array to hold your HTML output, this is where you generate your HTML
$output = array();
$output[] = '<table>';
$output[] = '<tr>';
// etc
// Save the output as manual cache
$file = fopen ( $cache_filename, 'w' );
fwrite ( $file, implode('', $output) );
fclose ( $file );
print implode('', $output);
I use APC for all my PHP caching (on an Apache server)
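A minimal sketch of that approach (note: with the newer APCu extension the functions are apcu_fetch/apcu_store; the category query below is just an example of an expensive call, not part of the original answer):
// cache an expensive result in APC for 10 minutes
$categories = apc_fetch('category_list', $hit);
if (!$hit) {
    $categories = get_categories(); // e.g. the DB-heavy call you want to avoid repeating
    apc_store('category_list', $categories, 600);
}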
If you're not opposed to frameworks, try using the Zend Framework's Zend_Cache. It's pretty flexible, and (unlike some of the framework modules) easy to implement.
You can use Cache_Lite from PEAR. Details here:
http://mahtonu.wordpress.com/2009/09/25/cache-php-output-for-high-traffic-websites-pear-cache_lite/
I was thinking from the point of view of load on the database, charges for data bandwidth, and speed of loading. I have some pages which are unlikely to change in years (I know it is easy to use a CMS system based on a database). Unlike in the US, here the cost of bandwidth can be high. Does anybody have any views on whether to create static HTML pages or dynamic ones (PHP, ASP.NET)?
Links to the pages would be stored in a database anyway.