I built my crawler based on Selenium with ChromeDriver, and I want to measure the code coverage of the web application while my automated crawler crawls it.
So, my question is: how do I do that using Xdebug (I'm new to it)? I installed Xdebug for my PHP, but I don't know how to get started. Can anyone give me the steps, because I haven't found any resources that help me?
I don't have a direct example, but I would approach this in the following way. The code is untested and will likely require changes to work; take it as a starting point.
In any case, you want to do the following things:
Collect code coverage data for each request, and store that to a file
Aggregate the code coverage data for each of these runs, and merge them
Collecting Code Coverage for Each Request
Traditionally, code coverage is generated for unit tests with PHPUnit. PHPUnit uses a separate library, PHP Code Coverage, to collect, merge, and generate reports for the per-test coverage. You can use this library standalone.
To collect the data, I would do composer require phpunit/php-code-coverage and then create an auto_prepend file, with something like the following in it:
<?php
require 'vendor/autoload.php';
use SebastianBergmann\CodeCoverage\Filter;
use SebastianBergmann\CodeCoverage\Driver\Selector;
use SebastianBergmann\CodeCoverage\CodeCoverage;
use SebastianBergmann\CodeCoverage\Report\Html\Facade as HtmlReport;
$filter = new Filter;
$filter->includeDirectory('/path/to/directory');
$coverage = new CodeCoverage(
(new Selector)->forLineCoverage($filter),
$filter
);
$coverage->start($_SERVER['REQUEST_URI']);
function save_coverage()
{
global $coverage;
$coverage->stop();
$data = $coverage->getData();
file_put_contents('/tmp/path/crawler/' . bin2hex(random_bytes(16)) . '.serialized', serialize($data));
}
register_shutdown_function('save_coverage');
?>
(I copied most of that from the introduction in the php-code-coverage README.md)
You need to configure this prepend file in php.ini with the auto_prepend_file directive.
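For reference, the relevant php.ini lines might look like this (the path is just an example, and with Xdebug 3 you also have to enable coverage mode):
; php.ini
auto_prepend_file = /path/to/coverage_prepend.php
; Xdebug 3 only collects code coverage when this mode is active
xdebug.mode = coverage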
When you now crawl through your web site, you should get a file with code coverage for each request in /tmp/path/crawler/, but make sure that directory exists first.
Merging Code Coverage
For this step, you need to write a script that loads each of the generated files (look at glob()) and merges them together.
PHP Code Coverage has a method for this too. It would look something like:
<?php
require 'vendor/autoload.php';
use SebastianBergmann\CodeCoverage\Filter;
use SebastianBergmann\CodeCoverage\Driver\Selector;
use SebastianBergmann\CodeCoverage\CodeCoverage;
use SebastianBergmann\CodeCoverage\Report\Html\Facade as HtmlReport;
$filter = new Filter;
$filter->includeDirectory('/path/to/directory');
$coverage = new CodeCoverage(
(new Selector)->forLineCoverage($filter),
$filter
);
foreach (glob('/tmp/path/crawler/*.serialized') as $file)
{
$data = unserialize( file_get_contents( $file ) );
$fileCoverage = new CodeCoverage(
(new Selector)->forLineCoverage($filter),
$filter
);
$fileCoverage->setData( $data );
$coverage->merge( $fileCoverage );
}
/* now generate the report, as per the README.md again */
(new HtmlReport)->process($coverage, '/tmp/code-coverage-report');
?>
If I find some time, I will do a video on this.
Related
I should state up front that I'm not familiar with PrestaShop; I'm using version 1.7.6.
I'm trying to understand how I could use the CSV import function without going through the user interface.
I tried to look for documentation on a possible web API, but I found nothing.
What I'd like to accomplish is the following scenario:
I have two web applications on the same server
/my_webapp
/my_prestashop
By "my_webapp" I receive a csv file, process it and produce a new csv file.
Now continuing running the process in "my_webapp", I would like to instantiate the ambient of the prestashop application to invoke the import csv function by passing it the new file just created.
Searching the web I found some sample code but, trying to use and adapt it, I am not making it work.
For example, on “my_webapp” folder I just create a “myimport.php” file and call it with two GET parameters.
The following is the call:
localhost/my_webapp/myimport.php?csv=prod.csv&limit=5
note: the file “prod.csv” is on
"path to admin folder"/import
Content of “myimport.php” file:
<?php
$rootPrestashop = '/var/www/html/my_prestashop';
define('_PS_ADMIN_DIR_', $rootPrestashop.'/admin_shop'); //not sure if this instruction is needed
$pathConfig = $rootPrestashop.'/config/config.inc.php';
$initConfig = $rootPrestashop.'/init.php';
require_once($pathConfig);
require_once($initConfig); //this line throws an error, and then I can't test the rest!
$importCtrl = new AdminImportControllerCore();
$crossSteps = array();
$limit = $_GET["limit"];
$importCtrl->productImport(false, $limit, $crossSteps, true, 0);
This is what I'm trying to do, but I failed to initialize the environment.
Maybe I'm going about it the wrong way and there's a better approach.
Can anyone help me understand whether I can carry out this process, and what the correct way would be? Thanks in advance.
if (!defined('_PS_ADMIN_DIR_')) {
    define('_PS_ADMIN_DIR_', __DIR__);
}
// Bootstrap the PrestaShop environment from inside the admin directory
include _PS_ADMIN_DIR_.'/../config/config.inc.php';
// Only continue if a back-office employee is logged in; otherwise redirect to the admin login page
if (!Context::getContext()->employee->isLoggedBack()) {
    Tools::redirectAdmin(Context::getContext()->link->getAdminLink('AdminLogin'));
}
Our site uses the Vimeo PHP library (https://github.com/vimeo/vimeo.php).
Currently I'm calling the library within snippets, e.g.:
require_once("____/autoload.php");
$vimeo = new \Vimeo\Vimeo(____AuthKeys, etc.___);
...
$videos = $vimeo->request('/me/albums/____')['body']['data'];
...
But this means way more calls to the API than necessary ... right?
Vimeo recommends caching the response, but I'm not sure how to do that in modx.
I'm guessing the first 3 lines only need to be run once, then cached ... until we make changes to our Vimeo account (add videos, albums, etc.)
What's the best way to accomplish this?
The only part that changes from snippet to snippet is the $vimeo->request... portion ... is there a way to only have that at the start of our snippets?
You can use getCache to cache the complete output for a longer period of time, but if you want to cache data inside your snippet, you can use the modCacheManager for that.
For example, that might look like this:
require_once("____/autoload.php");
$vimeo = new \Vimeo\Vimeo(____AuthKeys, etc.___);
...
$cacheManager = $modx->getCacheManager();
$videos = $cacheManager->get('vimeo_videos');
if (empty($videos)) {
$videos = $vimeo->request('/me/albums/____')['body']['data'];
$cacheManager->set('vimeo_videos', $videos, 3600);
}
// Process $videos further
That will cache the data for one hour (note the 3600 in the set call).
I mass produce very similar sites, meaning they all use the same basic components, pages and are all single industry specific. These are my low end deeply discounted site designs. None of these sites ever get more than 20-30 visitors a day, so any extra load on the server isn't an issue.
In the interest of time, since they all use the same components (though they may be in different locations or in a different order), I would like to write one definition file that can be included with every site, so I can just call the defined constants instead of writing out the code a couple hundred times every year on every site I build. It would also make my life MUCH easier when editing later.
The definition file would look similar to the following:
define('UPCONTACT','<h1>Contact Us</h1>');
define('ULCONTACT','Contact Us');
define('UPABOUTUS','<h1>About Us</h1>');
define('ULABOUTUS','About Us');
Obviously this is a very basic example but I think you get the idea.
So the question is what are the pros and cons of using define() in this manner?
It's pretty much ok. The disadvantage is that, given you are using constants, you can't override them for a single page or site.
Use an array instead:
config.php
<?php
return array(
'aboutus' => '<h1>About Us</h1>',
'contactus' => 'Contact Us'
);
include it like this in your site:
$config = include('config.php');
Then you can print it very easily
<?php echo $config['aboutus'] ?>
You can also change a value when you need it:
$config = include('config.php');
$config['aboutus'] = '<h1>About My Company</h1>';
This is probably your best option.
It has upsides and downsides.
The upsides are that this approach is quicker than loading settings from a database (and spares you creating a database, an abstraction layer, ...).
The downsides are that it is not customizable by the client. If they will need changes, make clear beforehand that the website is static and that you will charge them for every change.
IMHO it is better to have some things customizable by the client and other things not. But there's no technical issue at all with using define() in that way (except perhaps the allowed data types).
A better way is to use an ini file or something like that (and it's easily editable from a smartphone, if this is a recurring task for you :).
There is a built-in PHP function that can simplify your life:
http://php.net/manual/fr/function.parse-ini-file.php
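As a rough sketch (the file name and keys are only examples), an ini file could hold the same strings and be loaded with parse_ini_file():
; config.ini
UPCONTACT = "<h1>Contact Us</h1>"
ULCONTACT = "Contact Us"
And in PHP:
<?php
$config = parse_ini_file(__DIR__.'/config.ini');
echo $config['UPCONTACT'];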
Or, if you want a stronger and more flexible system, go for templating (look at Smarty, or self-made regex templating).
I wrote my first regex templating function long years ago, and eventually quit Smarty to do it manually.
Note:
Constants don't let you modify them dynamically in inline code, and they have poor type support (before PHP 7 you could not store an array in one without serializing it, for example).
I would suggest cascaded ini files:
$conf_dir = dirname(__FILE__);
$config = array_merge_recursive(
    parse_ini_file($conf_dir.'/base.ini'),
    parse_ini_file($conf_dir.'/client.ini')
);
The benefits are readability, the inability to execute them (I like to lock down the things that can be), and the fact that you can track the base ini in git (or whatever you use) and not the client one. There are some downsides, but such is life. They just feel cleaner, though they are not faster than .php, to be sure.
And if you want to eliminate any redundant execution (listen, any "performance benefit" still has "benefit" in it), use serialization:
<?php
define('CACHE_DIR', '/tmp/');
// CACHE_DIR is where the serialized config is stored; adjust it to your setup.
$ini_cache = CACHE_DIR.'config.ser';
if(!file_exists($ini_cache)) {
    // Build your config in any way you wish.
    $conf_dir = dirname(__FILE__);
    $config = array_merge_recursive(
        parse_ini_file($conf_dir.'/base.ini'),
        parse_ini_file($conf_dir.'/client.ini')
    );
    // Store it serialized
    file_put_contents($ini_cache, serialize($config));
} else {
    $config = unserialize(file_get_contents($ini_cache));
}
You can get more creative with this, but essentially it allows you to store/generate your configuration in any way you wish. If you don't want to have to delete the serialized cache on every change, you could add an atime check:
<?php
define('CACHE_DIR', '/tmp/');
// CACHE_DIR is where the serialized config is stored; adjust it to your setup.
$ini_cache = CACHE_DIR.'config.ser';
$conf_dir = dirname(__FILE__);
$config = array();
if(file_exists($ini_cache)) {
    $client_stat = stat($conf_dir.'/client.ini');
    $cache_stat = stat($ini_cache);
    // Reuse the cache only if client.ini's atime is older than the cache's atime
    if($client_stat['atime'] < $cache_stat['atime']) {
        $config = unserialize(file_get_contents($ini_cache));
    }
}
if(empty($config)) {
    // Build your config in any way you wish.
    $config = array_merge_recursive(
        parse_ini_file($conf_dir.'/base.ini'),
        parse_ini_file($conf_dir.'/client.ini')
    );
    // Store it serialized
    file_put_contents($ini_cache, serialize($config));
}
With either serialization method, you can use whatever $config generation scheme you prefer, and if you use PHP you can even get really creative/complicated with it, and the cache hit on the page will be negligible.
Say I have domain.com/php/ with all my PHP functions, and I share an FTP account with the front-end developers for domain.com/frontend/, so the frontend developers can do their work and call the "../php/" functions. Is it safe to assume my PHP code is protected? Or, to ask it another way, is there any way for them to see the PHP source code, or somehow copy/include those files and display them?
You could restrict the user by jailing them to a folder:
http://allanfeid.com/content/creating-chroot-jail-ssh-access
This way they would only have access to the folders where they create their files. Then simply give them the paths to the PHP files they need, or create an object or PHP function template they can call to access those paths.
Pseudo code:
class GlobalPaths
function getPathToThisResource(return string)
You can use the UNIX account system to make files unreadable to certain users. The problem is, if the PHP files can include each other, they can read each other's sources. You can use an RPC system to hide the backend code. The frontend would only communicate with the RPC interface, and it wouldn't need to read the sources of the backend code.
For example, on the frontend:
<?php
error_reporting(-1);
function ask_backend($cmd, $args) {
$url = "http://localhost:8800/backend/rpc.php?cmd=" . urlencode($cmd) . "&args=" . urlencode(json_encode($args));
$decoded = json_decode($data = file_get_contents($url), true);
if ($decoded === null) throw new Exception("invalid data from backend: " . $data);
if ($decoded["status"] !== "ok") throw new Exception("error occurred on backend: " . $data);
return $decoded["msg"];
}
?>
The frontend then calls it like this:
<?php
$res = ask_backend("greeter", ["peter"]);
var_dump($res);
?>
On the backend, you could have rpc.php as follows:
<?php
error_reporting(-1);
$cmd = $_GET["cmd"];
$gargs = json_decode($_GET["args"],true);
$cmds = [
"greeter" => function($args) {
list($name) = $args;
return "hello " . $name;
}
];
$res = ($cmds[$cmd]($gargs));
$res = json_encode(["status"=>"ok", "msg"=>$res]);
echo $res;
?>
The disadvantage of this implementation is that you can only pass JSON serializable objects. Of course you can use Protocol Buffers for serialization instead. You don't even need to use HTTP, but I used that since you probably already have an HTTP server if you are running PHP.
Keep in mind that the RPC interface only needs to be available to localhost! And most importantly for your use case: the sources do not need to be readable by the developers of the frontend. Since it is not publicly accessible, you could consider using something like PHPDaemon for the backend, since that makes it easier to build a proper REST interface.
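As a minimal sketch of that localhost restriction (you could also enforce it in the web server configuration instead), rpc.php could start with a remote-address check:
<?php
// Reject requests that do not come from the local machine
if (!in_array($_SERVER['REMOTE_ADDR'], array('127.0.0.1', '::1'), true)) {
    http_response_code(403);
    exit;
}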
When I wrote this code, I didn't use PHPUnit or functional tests, because I don't know how to write tests for this code. I know how to write tests for other functions and code, but I don't know which tests are required for this code. Can anybody explain?
All the functions are from the Google AdWords API, not my own. I don't need to test those either.
$adStatsSelector = new AdStatsSelector();
$adStatsSelector->dateRange = new DateRange('20100901','20101001');
$user = new AdWordsUser();
$user->LogDefaults();
$servicedAccountService = $user->GetServicedAccountService('v201008');
$selector = new ServicedAccountSelector();
$graph = $servicedAccountService->get($selector);
foreach($graph->accounts as &$account) {
$user->SetClientId($account->login);
$campaignService = $user->GetCampaignService('v201008');
$selector = new CampaignSelector(null,null,$adStatsSelector);
$page = $campaignService->get($selector);
$account->campaigns = $page->entries;
}
As the code stands it's hard to define tests because it's not in the form of something we can call; we can't vary inputs and make sure we get certain outputs.
So as best I can see, the thing you can test is whether $graph ends up populated with the expected data: somehow you invoke this code and have a look at what's in $graph.
Now looking at the code several questions come to mind:
Why is the date range hard coded?
What's that v201008, why is that hard coded?
Where's the error handling? Can those $user->??? methods fail?
So I would modify this code to generalise it and put it in a function. We can then test the function. Imagine a function (in pseudo code):
graph = getGraph(start, end, version)
Now you can vary the inputs and check the response, but ... how do you know what the response should be? You may do best to mock the services you use. You can then also assert that you are calling the services with the correct parameters, and for some tests force the services to raise error conditions.
Summary: writing code to be testable really helps when you want to test things; in doing so you tend to focus on the dull but important stuff such as error handling.
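As a rough, untested sketch of that refactoring (the function name, parameter list, and injected $user are my own; the AdWords calls are copied from the code above), it might look like:
<?php
// $user is injected so a test can pass in a stubbed/mocked AdWordsUser
function getGraph($user, $start, $end, $version)
{
    $adStatsSelector = new AdStatsSelector();
    $adStatsSelector->dateRange = new DateRange($start, $end);

    $servicedAccountService = $user->GetServicedAccountService($version);
    $graph = $servicedAccountService->get(new ServicedAccountSelector());

    foreach ($graph->accounts as &$account) {
        $user->SetClientId($account->login);
        $campaignService = $user->GetCampaignService($version);
        $page = $campaignService->get(new CampaignSelector(null, null, $adStatsSelector));
        $account->campaigns = $page->entries;
    }
    return $graph;
}
In a test you could then pass in a mock $user whose services return canned data, assert on the returned $graph, and force the services to throw in order to exercise error handling.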