PHP file loads slowly if it has many methods from classes

I have some concerns about my PHP file, which takes too long to process.
I'm using XAMPP.
The problem is that when I call too many methods of my classes, the PHP file loads or executes too slowly.
Here's my example code:
class Sample {
    public function show() {
        echo 'test';
    }
}

$classObject = new Sample();
$classObject->show();
When I run the PHP code above, it takes about 1.5 s, I think, but if I add more method calls like this:
$classObject = new Sample();
$classObject->show();
$classObject->show();
the PHP code takes almost 3 s to execute.
Is there a way to solve this problem?
I have already found what the problem is: when I use OOP-style PHP to fetch data from my database, it takes several seconds to execute, but when I use procedural-style PHP it runs faster.

Firstly, you should take a look at some PHP debugging techniques, because I don't see any attempt at debugging in your code sample.
To your problem:
What you've described is an issue that may or may not be PHP's fault. The thing is, you have to understand how request processing works; in your case (XAMPP + PHP) it goes like this:
WebServer start:
XAMPP starts Apache (or Tomcat, or whatever web server you're using)
Apache starts PHP binary
PHP loads the configuration (php.ini)
The request:
You'll start a web browser (doesn't matter which one)
You'll enter a web address (with or without a specific path - URI)
Then some DNS things happen - that's NOT important for you now
The browser sends a plain-text document to the server via the HTTP protocol, which communicates over TCP - also not so important for you at this time
The WebServer receives an HTTP request and begins processing it
The WebServer runs your PHP script in the already running PHP binary and awaits the output
The response:
Then the WebServer takes the output of your script and sends it back to the browser
The browser decides (by the headers) how (and if) the content will be displayed to you
Then the browser begins displaying the response content - in your case an HTML or plain-text document
While drawing the document, the browser processes all JavaScript and CSS along the way (top to bottom)
With this knowledge, you need to find out WHERE along the way the delay occurs.
The first thing you should do is take a look at how long your script takes to run, so:
<?php
$start_time = microtime(true);
$classObject = new Sample();
$classObject->show();
$classObject->show();
echo 'Processing took: ' . number_format(microtime(true) - $start_time, 6, '.', '') . ' seconds';
Then, you should look into the developer tools your browser provides, for example DevTools in Google Chrome (very good for debugging, though not the best); hit F12 to open it.
In the Network tab you'll see something like this:
Time is the duration between the moment the request was fully sent and the moment the response was fully received
Load is the sum of the durations of all the requests made up to that moment
Finish is the duration between the moment the first request was fully sent and the moment the last response was fully received
DOMContentLoaded is the duration of rendering the entire document (browser/client side)
Once you have all of these durations, you can decide where the problem most likely is :)
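Since the original question mentions that fetching data from the database is where the OOP version feels slow, it is also worth timing the database work on its own. Here is a minimal sketch of that idea; the mysqli credentials and the users table are just placeholders, so adapt them to your own code:
<?php
$start = microtime(true);

// hypothetical connection details - replace with your own
$db = new mysqli('localhost', 'user', 'password', 'mydatabase');
echo 'Connect:    ' . number_format(microtime(true) - $start, 6) . " s\n";

$t = microtime(true);
$result = $db->query('SELECT id, name FROM users'); // hypothetical table
echo 'Query:      ' . number_format(microtime(true) - $t, 6) . " s\n";

$t = microtime(true);
while ($row = $result->fetch_assoc()) {
    // process the row here
}
echo 'Fetch loop: ' . number_format(microtime(true) - $t, 6) . " s\n";
Whichever of the three numbers dominates tells you whether the time goes into connecting, running the query, or processing the results.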

Related

PHP FastCGI Simple Counter

I am unable to understand and run a simple PHP script in FCGI mode. I am learning both Perl and PHP, and I got the Perl version of the FastCGI example below to work as expected.
Perl FastCGI counter:
#!/usr/bin/perl
use FCGI;

$count = 0;
while (FCGI::accept() >= 0) {
    print("Content-type: text/html\r\n\r\n",
          "<title>FastCGI Hello! (Perl)</title>\n",
          "<h1>FastCGI Hello! (Perl)</h1>\n",
          "Request number ", ++$count,
          " running on host <i>$ENV{'SERVER_NAME'}</i>");
}
Searching for something similar in PHP, I found talk about "fastcgi_finish_request", but I have no clue how
to accomplish the counter example in PHP. Here is what I tried:
<?php
header("content-type: text/html");
$counter++;
echo "Counter: $counter ";
//http://www.php.net/manual/en/intro.fpm.php
fastcgi_finish_request(); //If you remove this line, then you will see that the browser has to wait 5 seconds
sleep(5);
?>
Perl is not PHP. That doesn't mean you can't often interchange things and port code between the two, but when it comes to runtime environments there are bigger differences that you cannot simply carry over.
FastCGI operates at the request/protocol level, which is fully abstracted away by the PHP runtime, so you don't have as much control in PHP as you have in Perl with use FCGI;.
Therefore you cannot just port that code.
Besides that, fastcgi_finish_request is totally unrelated to the Perl code. You must have confused it, or thrown it in on the off chance it might help. Either way, it's not really useful in this counter example.
PHP and HTTP are stateless.
All data is only relevant for the current, ongoing request.
If you need to save state, you might consider storing the data in a cookie, the session, a cache, or the database.
So the implementation of this "counter" example will be different for Perl and PHP.
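In PHP, a per-visitor counter would normally go through the session. Here is a minimal sketch using only standard session functions; note that it counts per visitor, unlike the Perl example, which counts requests per FastCGI process:
<?php
session_start(); // load (or create) this visitor's session

// increment a counter that is persisted between requests in the session store
$_SESSION['count'] = isset($_SESSION['count']) ? $_SESSION['count'] + 1 : 1;

echo '<title>Hello! (PHP)</title>';
echo '<h1>Hello! (PHP)</h1>';
echo 'Request number ' . $_SESSION['count'] .
     ' running on host <i>' . htmlspecialchars($_SERVER['SERVER_NAME']) . '</i>';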
Your usage of fastcgi_finish_request won't bring the functionality you expect from Perl.
Think about a long-running calculation where you output data in the middle.
You can do that with fastcgi_finish_request: the data is pushed to the browser, while the long-running task keeps running.
The connection is opened for FastCGI and PHP together.
Normally the connection stays open until PHP finishes, and then the FastCGI connection is closed -
unless you hit PHP's execution timeout or the FastCGI connection timeout. fastcgi_finish_request handles the case where the FastCGI connection to the browser is closed BEFORE PHP finishes execution.
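To illustrate what fastcgi_finish_request is actually for, here is a minimal sketch; it requires PHP-FPM, and the sleep and the report file are only placeholders for a real long-running task:
<?php
echo 'Request accepted, the heavy work continues in the background...';

// flush the output and close the connection to the browser right now
fastcgi_finish_request();

// the browser already has its response; this part keeps running server-side
sleep(30); // placeholder for the long-running calculation
file_put_contents('/tmp/report.txt', 'finished at ' . date('c')); // hypothetical result file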
Simple Hit Counter Example for PHP
<?php
$hit_count = @file_get_contents('count.txt'); // read count from file
$hit_count++; // increment hit count by 1
echo $hit_count; // display
@file_put_contents('count.txt', $hit_count); // store the new hit count
?>
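Note that this read-increment-write sequence can lose hits when two requests arrive at the same time. A sketch of the same counter with an exclusive lock (assuming the script is allowed to create count.txt) looks like this:
<?php
$fp = fopen('count.txt', 'c+'); // create the file if it does not exist yet
if ($fp && flock($fp, LOCK_EX)) { // exclusive lock: one writer at a time
    $hit_count = (int) stream_get_contents($fp) + 1;
    ftruncate($fp, 0); // wipe the old value
    rewind($fp);
    fwrite($fp, (string) $hit_count);
    flock($fp, LOCK_UN);
    fclose($fp);
    echo $hit_count;
}
?>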
Honestly, that's not even how you should do it using Perl either.
Instead, I'd recommend using CGI::Session to track session information:
#!/usr/bin/perl
use strict;
use warnings;
use CGI;
use CGI::Carp qw(fatalsToBrowser);
use CGI::Session;
my $q = CGI->new;
my $session = CGI::Session->new($q) or die CGI::Session->errstr;
print $session->header();
# Page View Count
my $count = 1 + ($session->param('count') // 0);
$session->param('count' => $count);
# HTML
print qq{<html>
<head><title>Hello! (Perl)</title></head>
<body>
<h1>Hello! (Perl)</h1>
<p>Request number $count running on host <i>$ENV{SERVER_NAME}</i></p>
</body>
</html>};
Alternatively, if you really want to go barebones, you could keep a local file as demonstrated in: I still don't get locking. I just want to increment the number in the file. How can I do this?

Best way to implement frequent calls to taxing Python scripts

I'm working on a bit of Python code that uses mechanize to grab data from another website. Because of the complexity of the website, the code takes 10-30 seconds to complete. It has to work its way through a couple of pages and such.
I plan on having this piece of code called fairly frequently. I'm wondering about the best way to implement something like this without causing a huge server load. Since I'm fairly new to Python, I'm not sure how the language works.
If the code is in the middle of processing one request and another user calls the code, can two instances of the code run at once? Is there a better way to implement something like this?
I want to design it in a way that it can complete the hefty tasks without being too taxing on the server.
My data-grabbing scripts all use caching to minimize load on the server. Be sure to use HTTP tools such as "If-Modified-Since", "If-None-Match", and "Accept-Encoding: gzip".
Also consider using the multiprocessing module so that you can run requests in parallel.
Here is an excerpt from my downloader script:
import gzip
import threading
import urllib2
import StringIO

# Response (a result record) and writefile() are defined elsewhere in the script.

def urlretrieve(url, filename, cache, lock=threading.Lock()):
    'Read contents of an open url, use etags and decompress if needed'
    request = urllib2.Request(url)
    request.add_header('Accept-Encoding', 'gzip')
    with lock:
        if ('etag ' + url) in cache:
            request.add_header('If-None-Match', cache['etag ' + url])
        if ('date ' + url) in cache:
            request.add_header('If-Modified-Since', cache['date ' + url])
    try:
        u = urllib2.urlopen(request)
    except urllib2.HTTPError as e:
        return Response(e.code, e.msg, False, False)
    content = u.read()
    u.close()
    compressed = u.info().getheader('Content-Encoding') == 'gzip'
    if compressed:
        content = gzip.GzipFile(fileobj=StringIO.StringIO(content), mode='rb').read()
    written = writefile(filename, content)
    with lock:
        etag = u.info().getheader('Etag')
        if etag:
            cache['etag ' + url] = etag
        timestamp = u.info().getheader('Date')
        if timestamp:
            cache['date ' + url] = timestamp
    return Response(u.code, u.msg, compressed, written)
The cache is an instance of shelve, a persistent dictionary.
Calls to the downloader are parallelized with imap_unordered() on a multiprocessing Pool instance.
You can run more than one Python process at a time. As for causing excessive load on the server, that can only be alleviated by making sure you have just one instance running at any given time, or some other fixed number of processes, say two. To accomplish this you can look at using a lock file or some kind of system flag, a mutex, etc.
But, the best way to limit excessive use is to limit the number of tasks running concurrently.
The basic answer is "you can run as many instances of your program as you want, as long as they don't share critical resources (like a database)".
In the real world, you often need to use the multiprocessing module and to properly design your application to handle concurrency and avoid corrupted state or deadlocks.
By the way, the design of multiprocess apps is beyond the scope of a simple question on Stack Overflow...

PHP - how to assure correct command execution sequence

Given simple code like:
$file = 'hugefile.jpg';
$bckp_file = 'hugeimage-backup.jpg';
copy($file, $bckp_file);
// here comes some manipulation on $bckp_file.
The assumed problem is that if the file is big or huge - let's say a JPG - one would think that it would take the server some time to copy it (by time I mean even a few milliseconds), but one would also assume that execution would reach the next line even faster.
So in theory, I could end up with a "no such file or directory" error when trying to manipulate a file that has not yet been created, or worse, start to manipulate a TRUNCATED file.
My question is: how can I ensure that $bckp_file was created (or in this case, copied) successfully before the NEXT line, which manipulates it?
What are my options to "pause" or "delay" execution of the next line until the file creation/copy has completed?
Right now I can only think of something like
if (!copy($file, $bckp_file)) {
    echo "failed to copy $file...\n";
}
which will only alert me but will not resolve anything (much like having the PHP error)
or
if (copy($file, $bckp_file)) {
    // move the manipulation to here ..
}
But this is also not really valid, because let's say the copy was not executed: I would just fall through the if block without achieving my goal and without errors.
Is this even a problem, or am I over-thinking it?
Or does PHP have a built-in mechanism to ensure that?
Any recommended practices?
Any thoughts on the issue?
What are my options to "pause" or "delay" execution of the next line until the file creation/copy has completed?
copy() is a synchronous function meaning that code will not continue after the call to copy() until copy() either completely finishes or fails.
In other words, there's no magic involved.
if (copy(...)) { echo 'success!'; } else { echo 'failure!'; }
Along with synchronous IO, there is also asynchronous IO. It's a bit complicated to explain in technical detail, but the general idea of it is that it runs in the background and your code hooks into it. Then, whenever a significant event happens, the background execution alerts your code. For example, if you were going to async copy a file, you would register a listener to the copying that would be notified when progress was made. That way, your code could do other things in the mean time, but you could also know what progress was being made on the file.
PHP handles file uploads by saving the whole file in a temporary directory on the server before executing any of your script (so you can use $_FILES from the beginning), and it's safe to assume all functions are synchronous -- that is, PHP will wait for each line to execute before moving to the next line.
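So the usual pattern is simply to branch on copy()'s return value and only touch the backup once it reports success. A minimal sketch of that; error_get_last() is only used here to surface the reason for a failure:
<?php
$file      = 'hugefile.jpg';
$bckp_file = 'hugeimage-backup.jpg';

if (copy($file, $bckp_file)) {
    // copy() has fully finished at this point, so the backup is safe to manipulate
    echo 'Backup created (' . filesize($bckp_file) . " bytes)\n";
    // ... manipulation of $bckp_file goes here ...
} else {
    $err = error_get_last();
    echo 'Backup failed: ' . ($err ? $err['message'] : 'unknown error') . "\n";
}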

PHP Memory differing server to server

I have a hefty PHP script.
So much so that I have had to do
ini_set('memory_limit', '3000M');
set_time_limit (0);
It runs fine on one server, but on another I get: Out of memory (allocated 1653342208) (tried to allocate 71 bytes) in /home/writeabo/public_html/propturk/feedgenerator/simple_html_dom.php on line 848
Both are on the same package from the same host, but different servers.
Above problem solved; new problem below for bounty
Update: The script is so big because it crawls a site and parses data from 252 pages, including over 60,000 images, which it makes two copies of. I have since broken it down into parts.
I have another problem now, though. When I am writing an image from the outside site to my server like this:
try {
    $imgcont = file_get_contents($va); // $va is an img src from an array of thousands of srcs
    $h = fopen($writeTo,'w');
    fwrite($h,$imgcont);
    fclose($h);
} catch(Exception $e) {
    $error .= (!isset($error)) ? "error with <img src='" . $va . "' />" : "<br/>And <img src='" . $va . "' />";
}
All of a sudden it goes to a 500 Internal Server Error page and I have to run it again, at which point it works, because files are only copied if they don't already exist. Is there any way I can catch the 500 response code and resend the request to the URL to make it go again, as this is all meant to be an automated process?
If this is memory related, I would personally use copy() rather than file_get_contents(). It supports the file wrappers the same way, and I don't see any advantage in loading the whole file into memory just to write it back to the filesystem.
Otherwise, your error_log might give you more information as to why the 500 happens.
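For the loop in the question, that swap is essentially a one-liner; $va and $writeTo are the same variables as in the snippet above, so treat this as a sketch:
<?php
// copy() streams the remote image straight to disk instead of
// buffering the whole file in memory first
if (!copy($va, $writeTo)) {
    error_log("Failed to copy $va to $writeTo");
}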
There are three parties involved here:
Remote - The server(s) that contain the images you're after
Server - The computer that is running your php script
Client - Your home computer if you are running the script from a web browser, or the same computer as the server if you are running it from Cron.
Is the 500 error you are seeing being generated by 'Remote' and seen by 'Server' (i.e. the images are temporarily unavailable)?
Or is it being generated by 'Server' and seen by 'Client' (i.e. there is a problem with your script)?
If it is being generated by 'Remote', then see Ali's answer for how to retry.
If it is being generated by your script on 'Server', then you need to identify exactly what the error is - the php error logs should give you more information. I can think of two likely causes:
Reaching PHP's time limit. PHP will only spend a certain amount of time working before returning a 500 error. You can set this to a higher value, or regularly re-set the timer with a call to set_time_limit(), but that won't work if your server is configured in safe mode.
Reaching PHP's memory limit. You seem to have encountered this already, but it's worth making sure your script still isn't eating lots of memory. Consider outputting debug data (possibly only if you set $config['debug_mode'] = true or something). I'd suggest:
try {
    echo 'Getting '.$va.'...';
    $imgcont = file_get_contents($va); // $va is an img src from an array of thousands of srcs
    $h = fopen($writeTo,'w');
    fwrite($h,$imgcont);
    fclose($h);
    echo 'saved. Memory usage: '.(memory_get_usage() / (1024 * 1024)).' <br />';
    unset($imgcont);
} catch(Exception $e) {
    $error .= (!isset($error)) ? "error with <img src='" . $va . "' />" : "<br/>And <img src='" . $va . "' />";
}
I've also added a line to remove the image from memory, in case PHP isn't doing this correctly itself (in theory that line shouldn't be necessary).
You can avoid both problems by making your script process fewer images at a time and calling it regularly - either using Cron on the server (the ideal solution, although not all shared webhosts allow this), or some software on your desktop computer. If you do this, make sure you consider what will happen if there are two copies of the script running at the same time - will they both fetch the same image at the same time?
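One common way to guard against two copies running at once is a lock file held for the lifetime of the run; here is a minimal sketch, with the lock path chosen arbitrarily:
<?php
$lock = fopen('/tmp/image-fetch.lock', 'c'); // example lock path
if (!$lock || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit("Another copy of the script is already running\n");
}

// ... fetch the batch of images here ...

flock($lock, LOCK_UN); // the lock is also released automatically when the script exits
fclose($lock);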
So it sounds like you're running this process via a web browser. I'm guessing that you may be getting the 500 error from Apache timing out after a certain period of time, or from the process dying, or something funky. I would suggest you do one of the following:
A) Move the image downloading to a background process: you can run the crawl script in the browser, which will write the URLs of the images to be downloaded to the DB or something, and another script will fire up via cron and fetch all the images. You could also have this script work in batches of 100 or so at a time to keep memory consumption down.
B) Call the script directly from the command line (this is really the preferred method for something like this anyway, and you should still probably separate the image fetching into another script).
C) If the command line is not an option for some reason, have your browser-loaded script touch a file, and have a cron job that runs every minute and checks whether the file exists. Then it fires up your script; you can have the output written to a file for you to check later, or send an email when it's completed. See the invocation example below.
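For options B and C, the actual invocation is just the PHP CLI binary pointed at your script; the paths here are placeholders:
# run the crawler once from a shell (option B)
php /path/to/fetch_images.php

# or let cron fire it every ten minutes (options A/C), logging the output
*/10 * * * * php /path/to/fetch_images.php >> /path/to/fetch.log 2>&1
Running it from the CLI also side-steps the web server's request handling entirely.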
Is there any way I can catch the 500 response code and resend the request to the URL to make it go again, as this is all meant to be an automated process?
Here's the simple version of how I would do it:
function getImage($va, $writeTo, $retries = 3)
{
    while ($retries > 0) {
        if ($imgcont = file_get_contents($va)) {
            file_put_contents($writeTo, $imgcont);
            return true;
        }
        $retries--;
    }
    return false;
}
This doesn't create the file unless we successfully get our image file, and it will retry three times by default. You will of course need to add any required exception handling, error checking, etc.
I would definitely stop using file_get_contents() and write the files in chunks, like this:
$read = fopen($url, 'rb');
$write = fopen($local, 'wb');
$chunk = 8096;
while (!feof($read)) {
    fwrite($write, fread($read, $chunk));
}
fclose($read);
fclose($write);
This will be nicer to your server, and should hopefully solve your 500 problems. As for "catching" a 500 error, this is simply not possible. It is an irretrievable error thrown by your script and written to the client by the web server.
I'm with Swish; this is not really the kind of task that PHP is intended for - you'd be much better off using some other sort of server-side scripting.
Is there any way I can catch the 500 response code and resend the request to the URL to make it go again?
Have you considered using another library? Fetching files from an external server seems to me more like a job for curl or ftp than file_get_contents, etc. If the error is external and you're using curl, you can detect the 500 return code and handle it appropriately without crashing. If not, then maybe you should split your program into two files - one of which fetches a single file/image, and the other uses curl to repeatedly call the first one. Unless the 500 error means that all PHP execution crashes, you would be able to detect the failure and handle it.
Something like this pseudocode:
file1.php:
foreach (list_of_files as filename) {
    do {
        x = call_curl('file2.php', filename);
    } while (x == 500);
}
file2.php:
filename = $_GET['filename'];
results = use_curl_to_get_page(filename);
echo results;
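Fleshing out the curl side of that pseudocode, detecting the status code and retrying might look roughly like this; the function and parameter names are just illustrative:
<?php
function fetchWithRetry($url, $maxTries = 3)
{
    for ($try = 1; $try <= $maxTries; $try++) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
        $body   = curl_exec($ch);
        $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
        curl_close($ch);
        if ($status == 200) {
            return $body; // success, stop retrying
        }
        // any other status (e.g. 500) falls through and we try again
    }
    return false; // gave up after $maxTries attempts
}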
Thanks for all your input. I had separated everything by the time I wrote this question, so the crawler fired the image grabber, etc.
I took on board the solution to split the number of images, and that also helped.
I also added a try/catch around the file read.
This was only being called from the browser during testing, but now that it is all up and running it is going to be a cron job.
Thanks Swish and Benubird for your particularly detailed and educational answers. Unfortunately I had no cooperation with the developers on the backend where the images are coming from (long and complicated story).
Anyway, all good now, so thanks. (Swish, how do you call a script from the command line? My knowledge of this field is severely lacking.)

GUI Application becomes unresponsive while http request is being done

I have just made my first proper little desktop GUI application that basically wraps a GUI (php-gtk) interface around a SimpleTest Web Test case to make it act as a remote testing client.
Each time the local Web Test case runs, it sends an HTTP request to another SimpleTest case (that has an XHTML interface) sitting on my server.
The application allows me to run one local test that collates information from multiple remote tests. It just has a 'Start Test' button, a 'Stop Test' button, and a setting to increase/decrease the number of remote tests conducted in each HTTP request. Each test run takes about an hour to complete.
The trouble is, most of the time the application is making HTTP requests, and whenever an HTTP request is being made, the application's GUI is unresponsive.
I have taken to making the application wait a few seconds (iterating through Gtk::main_iteration) between requests in order to give the user time to resize the window, press the Stop button, etc. But this makes the whole test run take a lot longer than necessary.
<?php
require_once('simpletest/web_tester.php');

class TestRemoteTestingClient extends WebTestCase
{
    function testRunIterations()
    {
        ...
        $this->assertTrue($this->get($nextUrl), 'getting from pointer:' . $this->_remoteMementoPointer);
        $this->assertResponse(200, "checking response for " . $nextUrl);
        $this->assertText('RemoteNodeGreen');
        $this->doGtkIterationsForMinNSeconds($secs);
        ...
    }

    public function doGtkIterationsForMinNSeconds($secs)
    {
        $this->appendStatusMessage("Waiting " . $secs);
        $start = time();
        $end = $start + $secs;
        while (time() < $end)
        {
            $this->appendStatusMessage("Waiting " . ($end - time()));
            while (Gtk::events_pending()) Gtk::main_iteration();
        }
    }
}
Is there a way to keep the application responsive while it is making an HTTP request?
I am considering splitting the application into two, where:
Test Controller Application - Acts as a settings-writer / report-reader: it writes to a settings file and reads a report file.
Test Runner Application - Acts as a settings-reader / report-writer: for each iteration it reads the settings file, runs the test, then writes a report.
So to tell it to close down - I'd:
Press the Stop Button on the 'Test Controller Application',
which writes to the settings file,
which is read by the 'Test Runner Application'
which stops, then
writes to the report file to say it stopped
the 'Test Controller Application' reads the report and updates the status
and so on...
However, before I go ahead and split the application in two, I am wondering if there is any other obvious way to deal with this issue. I suspect it is probably quite common and a well-trodden path.
Also is there an easier way to send messages between two applications that sit on the same machine?
You will have to implement some form of threading in order to manage multiple processes simultaneously. That way, your app won't become unresponsive while you execute other tasks in the background.
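PHP-GTK itself has no real threads, so one practical workaround is to push the HTTP work into a child PHP process with proc_open() and keep pumping the GTK main loop while polling it. A rough sketch; worker.php is a hypothetical script that performs one remote test run and prints its result:
<?php
// launch the slow HTTP work in a separate PHP process
$proc = proc_open('php worker.php', array(1 => array('pipe', 'w')), $pipes);
stream_set_blocking($pipes[1], false); // don't let reads block the GUI

$output = '';
while (true) {
    $output .= stream_get_contents($pipes[1]); // collect whatever is ready so far
    $status = proc_get_status($proc);
    if (!$status['running']) {
        break; // the worker has finished
    }
    while (Gtk::events_pending()) {
        Gtk::main_iteration(); // keep the window responsive
    }
    usleep(50000); // 50 ms between polls
}
fclose($pipes[1]);
proc_close($proc);
// $output now holds the worker's result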
