There are 8,000 HTML files in my data directory. I parse all of them by traversing the target directory and calling a parse function on each file. The program echoes each filename as it is parsed.
<?php
$base_dir = "c:\\data";

function parse($fname) {
    // parse data from the file (code omitted)
    echo $fname;
}

foreach (new RecursiveDirectoryIterator($base_dir) as $item) {
    parse($item->getPathname());
}
?>
I found that the filenames are not displayed on the web page until the whole PHP file has finished executing - they all appear at once after about 10 minutes, instead of one by one as each file is parsed.
How can I display each result on the web page instantly, one by one, rather than waiting until the whole PHP file has finished?
Seeing each filename as it is parsed is important to me: it tells me whether there is a problem in my code. If a long time passes with nothing displayed on the page, I get nervous.
If you just need to trace your program's execution, you could use error_log($fname) instead of echo. It is less effort than AJAX, and you can track execution live by running this in a terminal on Ubuntu:
tail -F /var/log/apache2/error.log
(on other *nix systems the path may differ).
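For instance, a minimal sketch of the question's loop with error_log() in place of echo:

foreach (new RecursiveDirectoryIterator($base_dir) as $item) {
    error_log("parsing: " . $item->getPathname()); // written to the server error log immediately
    parse($item->getPathname());
}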
1. PLAIN
The simple way is to disable any output buffering and call flush() for good measure, and to disable output compression as well.
<?php
while (ob_get_level()) {
ob_end_clean(); // or ob_end_flush() if you want to display previous output from this script.
}
...
print "Whatever\n";
flush();
The above allows for minimal modification to your current code. It has few guarantees, since the actual path between the parse process and your browser might contain entities (proxies, etc.) over which you have little or no control; if you execute external programs, those might be fully buffered instead of line buffered; and so on.
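Applied to the loop from the question, a minimal sketch might be (zlib.output_compression can be changed at runtime; whether output actually reaches the browser unbuffered still depends on the server and any proxies):

<?php
@ini_set('zlib.output_compression', 'Off'); // avoid the compressor buffering the output
while (ob_get_level()) {
    ob_end_clean(); // drop any buffers started by output_buffering
}
ob_implicit_flush(true); // flush after every output call

foreach (new RecursiveDirectoryIterator($base_dir) as $item) {
    parse($item->getPathname()); // echoes the file name
    echo "<br>\n";
    flush(); // push it to the browser right away
}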
2. AJAX
A more complicated (but more satisfying) way is, if possible, to break this into two chunks. This also poses a security risk that needs addressing.
In the main PHP file you output a Javascript variable (inside a Javascript block).
print "var filesToProcess = [\n";
print implode(",", array_map(
function($rawFileName) {
return '"' . /*javascriptQuote*/($rawFileName) . '"';
},
$filesToProcess
));
print "];\n";
This makes an array with all the files available to the client-side JavaScript.
You can now do the processing in AJAX:
function processOneFile() {
    if (!window.filesToProcess.length) {
        return;
    }
    // jQuery
    $.post('/path/to/processor.php',
        { file: window.filesToProcess.pop() },
        function(result) {
            // ... see below
        }
    ).always(function(){
        window.setTimeout(processOneFile, 100); // pass the function itself, not a string
    });
}
window.setTimeout(processOneFile, 100);
This will call the PHP file for one file after another. The result must be returned as JSON:
Header("Content-Type: application/json;charset=UTF-8");
die(json_encode(array( "status" => "OK", "file" => $file, "problem" => null )));
Security risk: the client is sending along a file name, any file name, and the script is executing something on that. In general you do not know who the client is, or who it may be, so you need to validate the file name (e.g. ascertain its basename does indeed exist in the target directory you sent in the first place):
if (!file_exists($yourDir . DIRECTORY_SEPARATOR . basename($file))) {
    header("Content-Type: application/json;charset=UTF-8");
    die(json_encode(array(
        "status"  => "FAILED",
        "file"    => $file,
        "problem" => "file could not be found",
        /* "real-problem" => "Sorry, friend, it didn't work." */
    )));
}
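Putting the validation and the response together, a hypothetical processor.php might look like this (a sketch only: the directory path is an assumption, and parse() stands in for your per-file work, here assumed to return null on success or an error message on failure):

<?php
// processor.php - hypothetical sketch
$targetDir = "c:\\data"; // assumed; must match the directory used to build filesToProcess
$file = isset($_POST['file']) ? $_POST['file'] : '';
$path = $targetDir . DIRECTORY_SEPARATOR . basename($file);

header("Content-Type: application/json;charset=UTF-8");
if (!file_exists($path)) {
    die(json_encode(array("status" => "FAILED", "file" => $file, "problem" => "file could not be found")));
}
$problem = parse($path); // assumed: null on success, error message on failure
die(json_encode(array(
    "status"  => ($problem === null) ? "OK" : "FAILED",
    "file"    => $file,
    "problem" => $problem,
)));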
The array will be returned to the Javascript function above:
function(result) {
    var msg;
    if (result.status == "OK") {
        msg = result.file + " OK";
    } else {
        msg = result.file + ": " + result.problem;
    }
    $('#logDiv').append($('<p>').text(msg));
}
The above will transform an HTML element
<div id="logDiv"></div>
in, say,
<div id="logDiv">
<p>file1.pdf OK</p>
<p>file2.pdf: missing UUID entry</p>
...
</div>
Since you know the initial filesToProcess.length, you can also display a progress bar (there are even jQuery plugins that make it as easy as $('#bar').progress(n*100/initialLength)).
Or, for lengthy per-file processes, you can send out two log lines: one just before the $.post,
$('#logDiv').append($('<p>').text("Parsing " + file));
and the second from the response handler as shown above.
Do you mean showing results without refreshing the web page?
If so, you can use AJAX to send the request.
Related
There's a lot of code in each file, too much to post, so I'm giving you a general idea of what's happening in each file.
index.php
[html dropdown menu code etc.]
scripts.js
[AJAX detects user selection from dropdown, grabs fetch.php which pulls database to generate html code for secondary dropdown selections to put in index.php]
fetch.php
[Generates secondary dropdown code based on user selection and query of the database]
I need to see what exactly is being queried in order to debug, so I'd like to echo the SQL SELECT statement:
$query = "SELECT * FROM databasename WHERE.."
That is in fetch.php, when the user makes a selection in index.php - how do I do this?
When I deal with AJAX that I return as JSON, one trick I use is to take advantage of output buffering. You can't just echo or output anything you want, because it will mess up the JSON data. For example:
ob_start(); //turn on buffering at beginning of script.
.... other code ...
print_r($somevar);
.... other code ...
$debug = ob_get_clean(); //put output in a var
$data['debug'] = $debug;
header('Content-Type: application/json');
echo json_encode($data); //echo JSON data.
What this does is wrap any output from your script into your JSON data, so that its format is not messed up.
Then on the JavaScript side you can use console.log:
$.post(url, input, function(data){
if(data.debug) console.log(data.debug);
});
If you are not used to debugging with console.log(), you can usually hit F12 to open the debugger in most browsers; the output will appear in the "console" tab. IE9 had a bit of an issue with console.log() if I recall, but I don't want to go too far off track.
NOTE: Just make sure not to leave this stuff in the code when you move it to production; it's very simple to just comment this line out:
//$data['debug'] = $debug;
And then your debug information won't be exposed in production. There are other ways to do this automatically, but it depends on whether you develop locally and then publish to the server. For example, you can switch on $_SERVER['SERVER_ADDR'], which will be ::1 or 127.0.0.1 when it's local. This has a few drawbacks, mainly that the server address is not available from the Command Line Interface (CLI). So typically I tie it into a global constant that says what "mode" the site is in (included in the common entry point, typically index.php).
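As a quick sketch, the address-based check mentioned above could look like this:

$addr = isset($_SERVER['SERVER_ADDR']) ? $_SERVER['SERVER_ADDR'] : ''; // empty when run from the CLI
if ($addr === '::1' || $addr === '127.0.0.1') {
    $data['debug'] = $debug; // local request, safe to expose debug output
}

And the constant-based version: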
if(!defined('ENV_DEVELOPMENT')) define('ENV_DEVELOPMENT','DEVELOPMENT');
if(!defined('ENV_PRODUCTION')) define('ENV_PRODUCTION','PRODUCTION');
if(!defined('ENVIRONMENT')) define('ENVIRONMENT',ENV_DEVELOPMENT);
//site is in Development mode; swap to the next line for production
//if(!defined('ENVIRONMENT')) define('ENVIRONMENT',ENV_PRODUCTION);
Then it is a simple matter to check it:
if(ENVIRONMENT == ENV_DEVELOPMENT) $data['debug'] = $debug; // only attach debug output in development
If you know how to use error reporting you can even tie into that using
if(ini_get('display_errors') == 1) $data['debug'] = $debug;
This will only show the debug output when display_errors is on.
Hope that helps.
UPDATE
Because I mentioned it in the comments, here is an example of it wrapped in a class (this is a simplified version, so I didn't test it):
class LibAjax{
public static function respond($callback, $options=0, $depth=32){
$result = ['userdata' => [
'debug' => false,
'error' => false
]];
ob_start();
try{
if(!is_callable($callback)){
//I have better exception in mine, this is just more portable
throw new Exception('Callback is not callable');
}
$callback($result);
}catch(\Exception $e){
//example 'Exception[code:401]'
$result['userdata']['error'] = get_class($e).'[code:'.$e->getCode().']';
//if(ENVIRONMENT == ENV_DEVELOPMENT){
//prevents leaking data in production
$result['userdata']['error'] .= ' '.$e->getMessage();
$result['userdata']['error'] .= PHP_EOL.$e->getTraceAsString();
//}
}
$debug = '';
while (ob_get_level() > 0) {
    // collect and close any nested output buffers
    $debug .= ob_get_clean();
}
//if(ENVIRONMENT == ENV_DEVELOPMENT){
//prevents leaking data in production
$result['userdata']['debug'] = $debug;
//}
header('Content-Type: application/json');
echo self::jsonEncode($result, $options, $depth);
}
public static function jsonEncode($result, $options=0, $depth=32){
$json = json_encode($result, $options, $depth);
if(JSON_ERROR_NONE !== json_last_error()){
//debug is not passed in this case, because you cannot be sure that, that was not what caused the error. Such as non-valid UTF-8 in the debug string, depth limit, etc...
$json = json_encode(['userdata' => [
'debug' => false,
'error' => json_last_error_msg()
]],$options);
}
return $json;
}
}
Then when you make an AJAX response you just wrap it like this (note that $result is passed by reference; this way we don't have to return, and in the case of an exception we update $result in "real time" instead of on completion):
LibAjax::respond( function(&$result){
$result['data'] = 'foo';
});
If you need to pass additional data into the closure, don't forget you can use the use statement, like this:
$otherdata = 'bar';
LibAjax::respond( function(&$result) use($otherdata){
$result['data'][] = 'foo';
$result['data'][] = $otherdata;
});
This handles catching any output and putting it in debug, if the environment is correct (commented out in the code above). Please, please make sure to implement some kind of protection so that the output is not sent to clients on production; I can't stress that enough. It also catches any exceptions and puts them in error. And it handles the header and encoding too.
One big benefit to this is a consistent structure to your JSON: you will know (on the client side) that if data.userdata.error is set, then you have an exception on the back end. It gives you one place to tweak your headers, JSON encoding, etc.
One note: in PHP 7 you'll have to (or should) catch the Throwable interface instead of Exception if you want to catch both Error and Exception classes, or use two catch blocks.
Let's just say I do a lot of AJAX and got sick of re-writing this all the time. My actual class is more extensive than this, but that's the gist of it.
Cheers.
UPDATE1
"One thing I had to do for things to display was to parse the data variable before I console.log() it."
This is typically because you are not passing the correct header back to the browser. Send this just before calling json_encode:
header('Content-Type: application/json');
This just lets the browser know what type of data it is getting back. One thing most people forget is that on the web, all responses are delivered as text - even images, file downloads, and web pages. It's all just text; what makes that text into something special is the Content-Type the browser thinks it is.
One thing to note about header() is that you cannot output anything before sending the headers. However, this plays well with the code I posted, because that code captures all the output and sends it after the header is sent.
I updated the original code to include the header; I previously had it only in the more complex class version I posted later. If you add that in, it should get rid of the need to manually parse the JSON.
One last thing I should mention: I check whether I got JSON back or plain text, since you could still get text in the event that some error occurs before the output buffering is started.
There are 2 ways to do this.
If Data is a string that needs to be parsed
$.post(url, {}, function(data){
if( typeof data == 'string'){
try{
data = $.parseJSON(data);
}catch(err){
data = {userdata : {error : data}};
}
}
if(data.userdata){
if( data.userdata.error){
//...etc.
}
}
//....
}
Or, if you send the header and it's always JSON, then it's a bit simpler:
$.post(url, {}, function(data){
if( typeof data == 'string'){
data = {userdata : {error : data}};
}
if(data.userdata){
if( data.userdata.error){
//...etc.
}
}
//....
}
Hope that helps!
UPDATE2
Because this topic comes up a lot, I put a modified version of the above code on my GitHub you can find it here.
https://github.com/ArtisticPhoenix/MISC/blob/master/AjaxWrapper/AjaxWrapper.php
Echo the contents and do a die() or exit; afterwards. Then in the Network tab of your browser, start recording, run the AJAX request (it'll fail), check the resource by name, and view the Response: it will show you what was echoed by the script.
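In fetch.php that can be as simple as (a throwaway debug line, not production code):

echo $query; // temporarily dump the SQL so it shows up in the response
exit;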
Taken from: Request Monitoring in Chrome
Chrome currently has a solution built in.
Use CTRL+SHIFT+I (or navigate to Current Page Control > Developer > Developer Tools.
In the newer versions of Chrome, click the Wrench icon > Tools > Developer Tools.) to enable the Developer Tools.
From within the developer tools click on the Network button. If it isn't already, enable it for the session or always.
Click the "XHR" sub-button.
Initiate an AJAX call.
You will see items begin to show up in the left column under "Resources".
Click the resource and there are 2 tabs showing the headers and return content.
Other browsers also have a Network tab, but you will need to use what I commented to get the string value of the query.
ArtisticPhoenix's solution above is delightful.
I have a problem where I call a PHP function on page load. The function checks whether a file exists; if it does, it returns the filename, and if it doesn't, it runs a fairly resource-intensive script that converts a waveform image from an audio file. The audio files are large, so creating the image can take some time, and if an audio file doesn't yet have its image the page load takes as long as the conversion does.
What I'm after is for this function to return a placeholder image if the real one doesn't exist, but carry on with the conversion after the page has loaded, or in the background, so that when the page is reloaded later the correct image is there.
I can currently return the placeholder image, but then the process stops and the real image never gets generated. Here's what I have so far:
function example($file_path, $file_name) {
    if ($file_path) {
        if (file_exists("/path/to/folder/{$file_name}.png")) {
            return "/path/to/folder/{$file_name}.png";
        }
        if (!file_exists("/path/to/folder/{$file_name}.png")) {
            return "/path/to/folder/processing.png";
            // Some stuff in here (the conversion) - never reached, since it's after the return
            // return $new_image;
        }
    }
    return FALSE;
}
As you can see, this just stops when the file doesn't exist, but I want the "stuff in here" to continue in the background. Is that possible, or do I need a different approach, like a cron job? Any help appreciated.
You might try a queuing system like Resque: https://github.com/chrisboulton/php-resque
You can then create a job that processes the information, and return quickly with the "processing" image.
With this approach you won't know when it is finished, though.
In my experience this is still easier than arguing with the operations guys to compile PHP with multi-threading support.
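A rough sketch of that with php-resque (names like WaveformJob and generate_waveform() are illustrative, not part of the library; see the project README for worker setup):

// Enqueue the heavy work and return the "processing" image immediately.
require 'vendor/autoload.php';        // assumes php-resque installed via Composer
Resque::setBackend('localhost:6379'); // Redis backend
Resque::enqueue('waveforms', 'WaveformJob', array('audio' => $file_name));

// WaveformJob.php - picked up in the background by a worker
// (started with e.g. QUEUE=waveforms php resque.php)
class WaveformJob {
    public function perform() {
        generate_waveform($this->args['audio']); // assumed helper that creates the PNG
    }
}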
I'd do it with AJAX. If the image is found, just put it there.
Otherwise, put the placeholder, and add a JS flag with data to load the waveform image.
The PHP code that generates the HTML document does no conversion at all; a separate request handler takes requests coming from JS and performs the conversion with the supplied data.
The data created during HTML generation is passed to JS, which uses it to send a request for the conversion. While JS waits for the response you handle the loading state, and when the response arrives you put the image into the placeholder.
If you're running on FastCGI / FPM you could consider doing the following:
You put a regular <img> tag with the src attribute pointing to your script.
If your script needs to regenerate, you make the browser redirect to a processing image.
If the image is ready, you redirect to the created image (you could do an AJAX poll on the page as well)
How to do step 2?
Normally, the browser has to wait for your script to end before performing a render or redirect; but FastCGI (PHP-FPM) has a special function for this: fastcgi_finish_request. It's largely undocumented, but its use is simple:
if ($need_to_process) {
header('Location: /path/to/processing.png');
fastcgi_finish_request();
// do processing here
} else {
header('Location: /path/to/final_image.png');
}
Alternative
You can apply it to your existing process as well if you have a template that you can immediately render just before doing fastcgi_finish_request().
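For the waveform case above, a sketch under PHP-FPM might be (render_page() and generate_waveform() are assumed names standing in for the existing page and conversion code):

if (file_exists($png)) {
    echo render_page($png);             // page with the real image
} else {
    echo render_page('processing.png'); // page with the placeholder
    fastcgi_finish_request();           // the browser receives the response now
    generate_waveform($audio, $png);    // keeps running after the response is closed
}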
Yet another alternative
Use a task scheduler like Gearman.
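With the pecl/gearman extension that could look roughly like this (the function name and payload are illustrative):

// Client side: fire and forget.
$client = new GearmanClient();
$client->addServer(); // defaults to 127.0.0.1:4730
$client->doBackground('generate_waveform', json_encode(array('file' => $file_name)));

// Worker side (a long-running CLI process):
$worker = new GearmanWorker();
$worker->addServer();
$worker->addFunction('generate_waveform', function(GearmanJob $job) {
    $data = json_decode($job->workload(), true);
    generate_waveform($data['file']); // assumed helper that creates the PNG
});
while ($worker->work());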
you can use "try" and "finally"
try {
return "hello world";
} finally {
//do something
}
I am not able to comment because my reputation is below 50, but I wanted to note something about mohammadhasan's answer. It seems to work, but avoid a return statement in both the try and finally blocks:
try {
return "hello world";
} finally {
//do not put return here
}
Example:
function runner() {
try {
return "I am the trial runner";
} finally {
return "I am the default runner";
}
}
echo runner();
will only show "I am the default runner", because the return in finally overrides the one in try.
My PHP script uses PHP Simple HTML DOM Parser to parse HTML and collect all the links and images I want; it can run for quite a while, depending on the number of images to download.
I thought it would be a good idea to allow cancelling in this case. Currently I call my PHP using jQuery AJAX; the closest thing I could find is PHP's register_shutdown_function, but I'm not sure it works for my case. Any ideas?
So once the PHP script is launched, it can't be disturbed? Like firing AJAX again to call an exit in the same PHP file?
That is only worthwhile if you are processing a really massive data load through AJAX. For other cases, just handle it in JS and don't display the result if the request was cancelled.
But as I said, if you are processing a huge load of data, you can add an interrupt condition to every nth step of the running script and fulfill that condition using another script. For example, you can use a file to store the interrupt flag, or a MySQL MEMORY table.
Example.
1, process.php (ajax script processing loads of data)
// clean up previous potential interrupt flag
$fileHandler = fopen('interrupt_condition.txt', 'w+');
fwrite($fileHandler, '0');
fclose($fileHandler);
function interrupt_check() {
    $interruptfile = file('interrupt_condition.txt');
    // read the first line, trim it, and parse the value - if it is 1, interrupt the script
    if (trim($interruptfile[0]) == "1") {
        echo json_encode(array("interrupted" => 1));
        die();
    }
}
$i = 0;
foreach ($huge_load_of_data as $object) {
$i++;
if ($i % 10 == 0) { // check for interrupt condition every 10th record
interrupt_check();
}
// your processing code
}
interrupt_check(); // check for last time (if something changed while processing the last 10 entries)
2, interrupt_process.php (ajax script to propagate cancel event to file)
$fileHandler = fopen('interrupt_condition.txt', 'w+');
fwrite($fileHandler, '1');
fclose($fileHandler);
This will definitely affect the performance of your script, but it gives you a backdoor to stop execution. This is a very simple example - you would need to make it more complex to support multiple simultaneous users, etc.
You can also use a MySQL MEMORY table, Memcache (a non-persistent caching server), or whatever non-persistent storage you can find.
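For example, the same interrupt flag kept in a MySQL MEMORY table instead of a file (the table, column, and $job_id are illustrative):

$pdo = new PDO('mysql:host=localhost;dbname=mydb', 'user', 'pass');
$pdo->exec("CREATE TABLE IF NOT EXISTS interrupt_flags (
    job_id VARCHAR(64) PRIMARY KEY,
    flag   TINYINT NOT NULL DEFAULT 0
) ENGINE=MEMORY");

// inside interrupt_check():
$stmt = $pdo->prepare("SELECT flag FROM interrupt_flags WHERE job_id = ?");
$stmt->execute(array($job_id));
if ((int)$stmt->fetchColumn() === 1) {
    echo json_encode(array("interrupted" => 1));
    die();
}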
I have a PHP web crawler that just checks out websites. I decided a few days ago to make the crawler's progress show in real time using AJAX. The PHP script writes JSON to a small file, and AJAX reads that file.
I double- and triple-checked my PHP script, wondering what on earth was going on, because after I finished the simple AJAX script the data appearing in my browser leaped up and down in strange directions.
The PHP script executed perfectly and very quickly, but my AJAX would slowly increase the values (polling every 2 seconds, as configured), then drop. The numbers only ever increase in PHP; they never go down. Yet the numbers showing up on my web page go up and down, as if the buffer is serving multiple sessions or reading from something that is still being updated, even though the PHP stopped about an hour ago.
Is there something I'm missing or need to clear, like a buffer or a reset button?
This is the most I can show; I just slapped it together a really long time ago. If you know of better code then please share - I welcome any help. But I'm fairly new, so please explain anything beyond basic functions.
AJAX
//open our json file
var ajaxRequest = new XMLHttpRequest(); // create the request object
ajaxRequest.onreadystatechange = function(){
    if(ajaxRequest.readyState == 4){
        //display json file contents
        document.form.total_emails.value = ajaxRequest.responseText;
    }
}
ajaxRequest.open("GET", "test_results.php", true);
ajaxRequest.send(null);
PHP
//get addresses and links
for($x=0; $x<=$limit; $x++){
$input = get_link_contents($link_list[0]);
array_shift($link_list);
$link_list = ($x%100==0 || $x==5)?filter_urls($link_list,$blacklist):$link_list;
//add the links to the link list and remove duplicates
if(count($link_list) <= 1000) {
preg_match_all($link_reg, $input, $new_links);
$link_list = array_merge($link_list, $new_links);
$link_list = array_unique(array_flatten($link_list));
}
//check the addresses against the blacklist before adding to a a file in JSON
$res = preg_match_all($regex, $input, $matches);
if ($res) {
foreach(array_unique($matches[0]) as $address) {
if(!strpos_arr($address,$blacklist)){
$enum++;
json_file($results_file,$link_list[0],$enum,$x);
write_addresses_to_file($address, $address_file);
}
}
}
unset($input, $res, $efile);
}
The symptoms might indicate that the PHP script isn't closing the file properly after writing, and/or a race condition where the AJAX routine fetches the JSON data between the PHP fopen() and the new data being written.
A possible solution would be for the PHP script to write to a temp file, then rename to the desired filename after the data is written and the file is properly closed.
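A minimal sketch of that (the variable names are illustrative):

$tmp = $results_file . '.tmp';
file_put_contents($tmp, json_encode($progress)); // write the whole payload to a temp file
rename($tmp, $results_file); // atomic on the same filesystem, so readers never see a half-written file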
Also, it's a good idea to check response.status == 200 as well as response.readyState == 4.
Tools like ngrep and tcpdump can help debugging this type of problem.
I have a hefty PHP script.
So much so that I have had to do
ini_set('memory_limit', '3000M');
set_time_limit (0);
It runs fine on one server, but on another I get: Out of memory (allocated 1653342208) (tried to allocate 71 bytes) in /home/writeabo/public_html/propturk/feedgenerator/simple_html_dom.php on line 848
Both are on the same package from the same host, but different servers.
Above problem solved - new problem below, for the bounty.
Update: The script is so big because it crawls a site and parses data from 252 pages, including over 60,000 images, of which it makes two copies. I have since broken it down into parts.
I have another problem now, though. When I write an image from the outside site to my server like this:
try {
$imgcont = file_get_contents($va); // $va is an img src from an array of thousands of srcs
$h = fopen($writeTo,'w');
fwrite($h,$imgcont);
fclose($h);
} catch(Exception $e) {
$error .= (!isset($error)) ? "error with <img src='" . $va . "' />" : "<br/>And <img src='" . $va . "' />";
}
All of a sudden it goes to a 500 Internal Server Error page and I have to run it again, at which point it works, because files are only copied if they don't already exist. Is there any way I can catch the 500 response code and retry the URL, so the whole thing can be an automated process?
If this is memory related, I would personally use copy() rather than file_get_contents(). It supports the same file wrappers, and I don't see any advantage in loading the whole file into memory just to write it back to the filesystem.
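That is, roughly:

// copy() streams the remote file to disk instead of loading it all into memory first
if (!copy($va, $writeTo)) {
    $errors[] = "error with <img src='" . $va . "' />"; // collect failures; $errors is illustrative
}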
Otherwise, your error_log might give you more information as to why the 500 happens.
There are three parties involved here:
Remote - The server(s) that contain the images you're after
Server - The computer that is running your php script
Client - Your home computer if you are running the script from a web browser, or the same computer as the server if you are running it from Cron.
Is the 500 error you are seeing generated by 'Remote' and seen by 'Server' (i.e. the images are temporarily unavailable)?
Or is it generated by 'Server' and seen by 'Client' (i.e. there is a problem with your script)?
If it is being generated by 'Remote', then see Ali's answer for how to retry.
If it is being generated by your script on 'Server', then you need to identify exactly what the error is - the php error logs should give you more information. I can think of two likely causes:
Reaching PHP's time limit. PHP will only spend a certain amount of time working before returning a 500 error. You can set this to a higher value, or regularly re-set the timer with a call to set_time_limit(), but that won't work if your server is configured in safe mode.
Reaching PHP's memory limit. You seem to have encountered this already, but it's worth making sure your script still isn't eating lots of memory. Consider outputting debug data (possibly only if you set $config['debug_mode'] = true or something). I'd suggest:
try {
echo 'Getting '.$va.'...';
$imgcont = file_get_contents($va); // $va is an img src from an array of thousands of srcs
$h = fopen($writeTo,'w');
fwrite($h,$imgcont);
fclose($h);
echo 'saved. Memory usage: '.(memory_get_usage() / (1024 * 1024)).' <br />';
unset($imgcont);
} catch(Exception $e) {
$error .= (!isset($error)) ? "error with <img src='" . $va . "' />" : "<br/>And <img src='" . $va . "' />";
}
I've also added a line to remove the image from memory, in case PHP isn't doing this correctly itself (in theory that line shouldn't be necessary).
You can avoid both problems by making your script process fewer images at a time and calling it regularly - either using Cron on the server (the ideal solution, although not all shared webhosts allow this), or some software on your desktop computer. If you do this, make sure you consider what will happen if there are two copies of the script running at the same time - will they both fetch the same image at the same time?
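One common guard against two overlapping runs is a non-blocking lock file, e.g.:

$lock = fopen('/tmp/image_fetch.lock', 'c'); // 'c' creates the file if it doesn't exist
if (!flock($lock, LOCK_EX | LOCK_NB)) {
    exit("Another copy is already running\n"); // a previous run still holds the lock
}
// ... fetch images ...
flock($lock, LOCK_UN);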
So it sounds like you're running this process via a web browser. I'm guessing you may be getting the 500 error from Apache timing out somehow after a certain period, or the process dying, or something funky. I would suggest you do one of the following:
A) Move the image downloading to a background process. You can run the crawl script in the browser, which writes the URLs of the images to be downloaded to the DB or similar, and another script fires up via cron and fetches all the images. You could also have this script work in batches of 100 or so at a time to keep memory consumption down.
B) Call the script directly from the command line (this is really the preferred method for something like this anyway, and you should still probably separate the image fetching into another script).
C) If the command line is not an option for some reason, have your browser-loaded script touch a file, and have a cron job that runs every minute and looks for that file. Then it fires up your script; you can have the output written to a file to check later, or send an email when it's completed.
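Option C could be as small as this pair of scripts (the marker path and fetch_all_images() are illustrative):

// browser-loaded script: just leave a marker
touch('/tmp/run_image_fetch');

// cron script (run every minute): do the work only if the marker exists
if (file_exists('/tmp/run_image_fetch')) {
    unlink('/tmp/run_image_fetch'); // consume the marker first
    fetch_all_images();             // assumed function containing the download loop
}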
Is there anyway I can receive the 500 response code and send it back it to the url to make it go again? As this is to all be an automated process?
Here's the simple version of how I would do it:
function getImage($va, $writeTo, $retries = 3)
{
while ($retries > 0) {
if ($imgcont = file_get_contents($va)) {
file_put_contents($writeTo, $imgcont);
return true;
}
$retries--;
}
return false;
}
This doesn't create the file unless we successfully fetch the image file, and it will retry three times by default. You will of course need to add any required exception handling, error checking, and so on.
I would definitely stop using file_get_contents() and write the files in chunks, like this:
$read  = fopen($url, 'rb');
$write = fopen($local, 'wb');
$chunk = 8096;
while (!feof($read)) {
    fwrite($write, fread($read, $chunk));
}
fclose($read);
fclose($write);
This will be nicer to your server, and should hopefully solve your 500 problems. As for "catching" a 500 error, this is simply not possible. It is an irretrievable error thrown by your script and written to the client by the web server.
I'm with Swish; this is not really the kind of task that PHP is intended for - you'd be much better off using some sort of server-side scripting.
Is there anyway I can receive the 500 response code and send it back it to the url to make it go again?
Have you considered using another library? Fetching files from an external server seems to me more like a job for curl or ftp than file_get_contents() etc. If the error is external and you're using curl, you can detect the 500 return code and handle it appropriately without crashing. If not, then maybe you should split your program into two files - one that fetches a single file/image, and another that uses curl to repeatedly call the first one. Unless the 500 error means that all PHP execution crashes, you would be able to detect the failure and handle it.
Something like this (the pseudocode sketched out with curl; the URL is illustrative):
file1.php:
foreach ($list_of_files as $filename) {
    do {
        $ch = curl_init('http://yourserver/file2.php?filename=' . urlencode($filename));
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE); // retry on 500
        curl_close($ch);
    } while ($status == 500);
}
file2.php:
$filename = $_GET['filename'];
echo use_curl_to_get_page($filename); // fetch-and-save logic for a single file
Thanks for all your input. I had separated everything by the time I wrote this question, so the crawler fires the image grabber, etc.
I took on board the solution to split the number of images, and that also helped.
I also added a try/catch around the file read.
This was only being called from the browser during testing, but now that it is all up and running it is going to be a cron job.
Thanks Swish and Benubird for your particularly detailed and educational answers. Unfortunately I had no cooperation from the developers on the back end where the images come from (long and complicated story).
Anyway, all good now, so thanks. (Swish, how do you call a script from the command line? My knowledge of this field is severely lacking.)