PHP cURL within a form

I am trying to perform a cURL request from within a form. I have three files: monitoringform.php, results.php and curl.php. When the user enters a MAC address into the form and presses submit, I would like the MAC address value to be processed through curl.php, the result obtained, and then displayed by results.php. The cURL function works fine, but I cannot get the form to POST to results.php. While I was playing around I noticed the IP sometimes shows up, but it is incorrect. Could it be stale data from the session? Any help is greatly appreciated!
monitoringform.php
<?php session_start(); ?>
<h3>Monitoring Request Form</h3>
<form name="form1" id="form1" action="" method="post">
<label for="regularInput">Modem MAC</label>
<input type="text" name="modemmac" id="modemmac" />
<button type="submit" name="submit">Submit Form</button>
</form>
results.php
<?php session_start(); ?>
<?php
include "curl.php";
$mactosearch = $_POST['modemmac'];
$mymessage=$_GET["message"];
if($mymessage=="success")
{
echo "<b>Modem IP Address:</b> $modemip <br />";
}
else{ echo "You are getting error";}
?>
curl.php
<?php
// Defining the basic cURL function
function curl($url) {
// Assigning cURL options to an array
$options = Array(
CURLOPT_RETURNTRANSFER => TRUE, // Setting cURL's option to return the webpage data
CURLOPT_FOLLOWLOCATION => TRUE, // Setting cURL to follow 'location' HTTP headers
CURLOPT_AUTOREFERER => TRUE, // Automatically set the referer where following 'location' HTTP headers
CURLOPT_CONNECTTIMEOUT => 120, // Setting the amount of time (in seconds) before the request times out
CURLOPT_TIMEOUT => 120, // Setting the maximum amount of time for cURL to execute queries
CURLOPT_MAXREDIRS => 10, // Setting the maximum number of redirections to follow
CURLOPT_USERAGENT => "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8", // Setting the useragent
CURLOPT_URL => $url, // Setting cURL's URL option with the $url variable passed into the function
);
$ch = curl_init(); // Initialising cURL
curl_setopt_array($ch, $options); // Setting cURL's options using the previously assigned array data in $options
$data = curl_exec($ch); // Executing the cURL request and assigning the returned data to the $data variable
curl_close($ch); // Closing cURL
return $data; // Returning the data from the function
}
// Defining the basic scraping function
function scrape_between($data, $start, $end){
$data = stristr($data, $start); // Stripping all data from before $start
$data = substr($data, strlen($start)); // Stripping $start
$stop = stripos($data, $end); // Getting the position of the $end of the data to scrape
$data = substr($data, 0, $stop); // Stripping all data from after and including the $end of the data to scrape
return $data; // Returning the scraped data from the function
}
$scraped_page = curl("http://localhost?q_mac=$mactosearch&fields=only:ip");
$modemip = scrape_between($scraped_page, "<ip>", "</ip>");
?>
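For context, here is a minimal sketch of the flow described above, assuming the form is meant to submit straight to results.php and that curl.php only defines curl() and scrape_between() (file names as in the question; the localhost lookup URL is the question's own placeholder):
<!-- monitoringform.php: point the form at results.php -->
<form name="form1" id="form1" action="results.php" method="post">
    <label for="modemmac">Modem MAC</label>
    <input type="text" name="modemmac" id="modemmac" />
    <button type="submit" name="submit">Submit Form</button>
</form>

<?php
// results.php: read the POSTed MAC first, then run the lookup
include "curl.php"; // curl.php should only define curl() and scrape_between()
if (isset($_POST['modemmac']) && $_POST['modemmac'] !== '') {
    $mactosearch  = urlencode($_POST['modemmac']);
    $scraped_page = curl("http://localhost?q_mac=$mactosearch&fields=only:ip");
    $modemip      = scrape_between($scraped_page, "<ip>", "</ip>");
    echo "<b>Modem IP Address:</b> $modemip <br />";
} else {
    echo "No MAC address was submitted.";
}
?>
The point of the sketch is that curl.php should not run the lookup at include time; otherwise $mactosearch is undefined (or stale) when it executes.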

Related

something is not working in CURL scraping

I am trying to scrape torrentz2.eu search results using the old (now dead) torrentz.eu scraper code. When I run http://localhost/jits/torz/api.php?key=kabali it shows me a warning and a null value:
Notice: Undefined variable: results_urls in /Applications/XAMPP/xamppfiles/htdocs/jits/torz/api.php on line 59
null
Why? Can anybody tell me what's wrong with the code? Here it is:
<?php
$t= $_GET['key'];
// Defining the basic cURL function
function curl($url) {
// Assigning cURL options to an array
$options = Array(
CURLOPT_RETURNTRANSFER => TRUE, // Setting cURL's option to return the webpage data
CURLOPT_FOLLOWLOCATION => TRUE, // Setting cURL to follow 'location' HTTP headers
CURLOPT_AUTOREFERER => TRUE, // Automatically set the referer where following 'location' HTTP headers
CURLOPT_CONNECTTIMEOUT => 120, // Setting the amount of time (in seconds) before the request times out
CURLOPT_TIMEOUT => 120, // Setting the maximum amount of time for cURL to execute queries
CURLOPT_MAXREDIRS => 10, // Setting the maximum number of redirections to follow
CURLOPT_USERAGENT => "Mozilla/5.0 (Windows NT 5.1; rv:31.0) Gecko/20100101 Firefox/31.0", // Setting the useragent
CURLOPT_URL => $url, // Setting cURL's URL option with the $url variable passed into the function
);
$ch = curl_init(); // Initialising cURL
curl_setopt_array($ch, $options); // Setting cURL's options using the previously assigned array data in $options
$data = curl_exec($ch); // Executing the cURL request and assigning the returned data to the $data variable
curl_close($ch); // Closing cURL
return $data; // Returning the data from the function
}
?>
<?php
// Defining the basic scraping function
function scrape_between($data, $start, $end){
$data = stristr($data, $start); // Stripping all data from before $start
$data = substr($data, strlen($start)); // Stripping $start
$stop = stripos($data, $end); // Getting the position of the $end of the data to scrape
$data = substr($data, 0, $stop); // Stripping all data from after and including the $end of the data to scrape
return $data; // Returning the scraped data from the function
}
?>
<?php
$url = "https://torrentz2.eu/search?f=$t"; // Assigning the URL we want to scrape to the variable $url
$results_page = curl($url); // Downloading the results page using our curl() function
//var_dump($results_page);
//die();
$results_page = scrape_between($results_page, "<dl><dt>", "<a href=\"http://www.viewme.com/search?q=$t\" title=\"Web search results on ViewMe\">"); // Scraping out only the middle section of the results page that contains our results
$separate_results = explode("</dd></dl>", $results_page); // Exploding the results into separate parts in an array
// For each separate result, scrape the URL
foreach ($separate_results as $separate_result) {
if ($separate_result != "") {
$results_urls[] = scrape_between($separate_result, "\">", "<b>"); // Scraping the link text out of each result and adding it to our URL array
}
}
//print_r($results_urls); // Printing out our array of URLs we've just scraped
if($_GET["key"] === null) {
echo "Keyword Missing ";
} else if(isset($_GET["key"])) {
echo json_encode($results_urls);
}
?>
For the old torrentz.eu scraper code, see the referenced Git repo.
First, you get the notice "Undefined variable: results_urls" because $results_urls is used without being defined first. Define it, then use it.
Do something like this:
// $results_urls defined here:-
$results_urls = [];
// For each separate result, scrape the URL
foreach ($separate_results as $separate_result) {
if ($separate_result != "") {
$results_urls[] = scrape_between($separate_result, "\">", "<b>"); // Scraping the link text out of each result and adding it to our URL array
}
}
Secondly, null is printed because $results_urls never gets populated: $separate_results is not populated correctly and contains only a single empty value.
I debugged further and found that $results_page is false, so whatever you are trying to do with the scrape_between() call is not working as expected. Fix that part.
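As a rough illustration (not part of the original code), making the failure visible before scraping helps confirm where it breaks; a sketch using the same curl() helper:
$url = "https://torrentz2.eu/search?f=" . urlencode($t);
$results_page = curl($url);
if ($results_page === false || $results_page === '') {
    // The HTTP request itself failed (blocked, timed out, DNS error, etc.),
    // so scrape_between() can only ever produce an empty result.
    die("cURL returned no data for $url");
}
if (stripos($results_page, "<dl><dt>") === false) {
    // The page came back, but the start marker used by scrape_between()
    // no longer exists in the site's current HTML.
    die("Expected markers not found - the site markup may have changed");
}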

How to use foreach to loop through a file with variable variables?

The following PHP code using foreach does not seem to work. I believe it has to do with "<a href='/$value/access'>".
I've shared the entire codebase.
Does anyone know what is wrong with my statement?
// include/functions.php
<?php
// Defining the basic cURL function
function curl($url) {
// Assigning cURL options to an array
$options = Array(
CURLOPT_RETURNTRANSFER => TRUE, // Setting cURL's option to return the webpage data
CURLOPT_FOLLOWLOCATION => TRUE, // Setting cURL to follow 'location' HTTP headers
CURLOPT_AUTOREFERER => TRUE, // Automatically set the referer where following 'location' HTTP headers
CURLOPT_CONNECTTIMEOUT => 120, // Setting the amount of time (in seconds) before the request times out
CURLOPT_TIMEOUT => 120, // Setting the maximum amount of time for cURL to execute queries
CURLOPT_MAXREDIRS => 10, // Setting the maximum number of redirections to follow
CURLOPT_USERAGENT => "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8", // Setting the useragent
CURLOPT_URL => $url, // Setting cURL's URL option with the $url variable passed into the function
);
$ch = curl_init(); // Initialising cURL
curl_setopt_array($ch, $options); // Setting cURL's options using the previously assigned array data in $options
$data = curl_exec($ch); // Executing the cURL request and assigning the returned data to the $data variable
curl_close($ch); // Closing cURL
return $data; // Returning the data from the function
}
// Defining the basic scraping function
function scrape_between($data, $start, $end){
$data = stristr($data, $start); // Stripping all data from before $start
$data = substr($data, strlen($start)); // Stripping $start
$stop = stripos($data, $end); // Getting the position of the $end of the data to scrape
$data = substr($data, 0, $stop); // Stripping all data from after and including the $end of the data to scrape
return $data; // Returning the scraped data from the function
}
?>
// code.php
<?php
//include functions
include_once("include/functions.php");
// Set URL
$url = "https://www.instituteforsupplymanagement.org/ismreport/mfgrob.cfm";
$source = curl($url);
// Collect dataset
$arr = array("PMI","New Orders");
foreach ($arr as $value) {
$data = scrape_between($source,"<strong>$$value","</tr>");
print_r($data);
}
?>
Within the string, $$value is not treated as a variable variable: $value is interpolated and the leading $ stays literal, so you end up with strings like "<strong>$PMI". Why not just assign your variable variable to a temp variable first:
$start = $$value;
$data = scrape_between($source,"<strong>$start","</tr>");
Also, I don't know how you plan to use "New Orders" as a variable name, what with the space and all.
Maybe you are actually trying to use the literal strings "PMI" and "New Orders" in the marker, in which case just drop the extra $.
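To make the interpolation behaviour concrete, a small sketch ($PMI here is a made-up placeholder, since the original code never defines it):
$PMI   = 'PMI marker text';
$value = 'PMI';
echo "<strong>$$value";     // prints: <strong>$PMI  (the extra $ stays literal, $value is interpolated)
echo "<strong>{${$value}}"; // prints: <strong>PMI marker text  (curly-brace syntax resolves the variable variable)
$start = $$value;           // the temp-variable approach from above
echo "<strong>$start";      // prints: <strong>PMI marker text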

Login with curl and move to another page

I'm trying to access one page of a website with cURL, but it requires being logged in. I tried the code below to log in and it was successful:
<?php
$user_agent = "Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20140319 Firefox/24.0 Iceweasel/24.4.0";
$curl_crack = curl_init();
CURL_SETOPT($curl_crack,CURLOPT_URL,"https://www.vininspect.com/en/account/login");
CURL_SETOPT($curl_crack,CURLOPT_USERAGENT,$user_agent);
CURL_SETOPT($curl_crack,CURLOPT_PROXY,"183.78.169.60:37899");
CURL_SETOPT($curl_crack,CURLOPT_PROXYTYPE,CURLPROXY_SOCKS5);
CURL_SETOPT($curl_crack,CURLOPT_POST,True);
CURL_SETOPT($curl_crack,CURLOPT_POSTFIELDS,"LoginForm[email]=naceriwalid%40hotmail.com&LoginForm[password]=passwordhere&toploginform[rememberme]=0&yt1=&toploginform[rememberme]=0");
CURL_SETOPT($curl_crack,CURLOPT_RETURNTRANSFER,True);
CURL_SETOPT($curl_crack,CURLOPT_FOLLOWLOCATION,True);
CURL_SETOPT($curl_crack,CURLOPT_COOKIEFILE,"cookie.txt"); //Put the full path of the cookie file if you want it to write on it
CURL_SETOPT($curl_crack,CURLOPT_COOKIEJAR,"cookie.txt"); //Put the full path of the cookie file if you want it to write on it
CURL_SETOPT($curl_crack,CURLOPT_CONNECTTIMEOUT,30);
CURL_SETOPT($curl_crack,CURLOPT_TIMEOUT,30);
$exec = curl_exec($curl_crack);
if(preg_match("/^you are logged|logout|successfully logged$/i",$exec))
{
echo "yoooha";
}
?>
Now the only problem I'm facing: let's say I don't want to be redirected to the logged-in landing page, I want to go to this page instead, http://example.com/buy. How can I do that in the same code?
If you want to go to /buy after you log in, just use the same cURL handle and issue another request for that page. cURL will retain the cookies for the duration of the handle (and across subsequent requests, since you are saving them to a file and reading them back with the cookie jar).
For example:
$user_agent = "Mozilla/5.0 (X11; Linux i686; rv:24.0) Gecko/20140319 Firefox/24.0 Iceweasel/24.4.0";
$curl_crack = curl_init();
CURL_SETOPT($curl_crack,CURLOPT_URL,"https://www.vininspect.com/en/account/login");
CURL_SETOPT($curl_crack,CURLOPT_USERAGENT,$user_agent);
CURL_SETOPT($curl_crack,CURLOPT_PROXY,"183.78.169.60:37899");
CURL_SETOPT($curl_crack,CURLOPT_PROXYTYPE,CURLPROXY_SOCKS5);
CURL_SETOPT($curl_crack,CURLOPT_POST,True);
CURL_SETOPT($curl_crack,CURLOPT_POSTFIELDS,"LoginForm[email]=naceriwalid%40hotmail.com&LoginForm[password]=passwordhere&toploginform[rememberme]=0&yt1=&toploginform[rememberme]=0");
CURL_SETOPT($curl_crack,CURLOPT_RETURNTRANSFER,True);
CURL_SETOPT($curl_crack,CURLOPT_FOLLOWLOCATION,True);
CURL_SETOPT($curl_crack,CURLOPT_COOKIEFILE,"cookie.txt"); //Put the full path of the cookie file if you want it to write on it
CURL_SETOPT($curl_crack,CURLOPT_COOKIEJAR,"cookie.txt"); //Put the full path of the cookie file if you want it to write on it
CURL_SETOPT($curl_crack,CURLOPT_CONNECTTIMEOUT,30);
CURL_SETOPT($curl_crack,CURLOPT_TIMEOUT,30);
$exec = curl_exec($curl_crack);
if(preg_match("/^you are logged|logout|successfully logged$/i",$exec))
{
$post = array('search' => 'keyword', 'abc' => 'xyz');
curl_setopt($curl_crack, CURLOPT_POST, 1); // keep using POST for the next request
curl_setopt($curl_crack, CURLOPT_POSTFIELDS, http_build_query($post)); // set post data
curl_setopt($curl_crack, CURLOPT_URL, 'http://example.com/buy'); // set url for next request
$exec = curl_exec($curl_crack); // make request to buy on the same handle with the current login session
}
Here are some other examples of using PHP & cURL to make multiple requests:
How to login in with Curl and SSL and cookies (links to multiple other examples)
Grabbing data from a website with cURL after logging in?
Pinterest login with PHP and cURL not working
Login to Google with PHP and Curl, Cookie turned off?
PHP Curl - Cookies problem
You just need to change the URL after the login is complete and then run curl_exec() again, like this:
<?php
//login code goes here
if(preg_match("/^you are logged|logout|successfully logged$/i",$exec))
{
echo "Logged in! now lets go to other page while we are logged in, shall we?";
//The new URL that you want to go to while logged in goes in bottom line :
CURL_SETOPT($curl_crack, CURLOPT_URL, "https://new_url_to_go.com/something");
$exec = curl_exec($curl_crack);
// now $exec contains the the content of new page with login
}
curl_close($curl_crack); // don't forget to close the cURL session at the end
?>
First, define this function to get an associative array containing the URL's response headers and content (see http://nadeausoftware.com/articles/2007/06/php_tip_how_get_web_page_using_curl):
/**
* Get a web file (HTML, XHTML, XML, image, etc.) from a URL. Return an
* array containing the HTTP server response header fields and content.
*/
function get_web_page( $url, $params, $is_post = true )
{
$options = array(
CURLOPT_RETURNTRANSFER => true, // return web page
CURLOPT_HEADER => false, // don't return headers
CURLOPT_FOLLOWLOCATION => true, // follow redirects
CURLOPT_ENCODING => "", // handle all encodings
CURLOPT_USERAGENT => "Mozilla/4.0 (compatible;)", // i'm mozilla
CURLOPT_AUTOREFERER => true, // set referer on redirect
CURLOPT_CONNECTTIMEOUT => 120, // timeout on connect
CURLOPT_TIMEOUT => 120, // timeout on response
CURLOPT_MAXREDIRS => 10, // stop after 10 redirects
);
if($is_post) { //use POST
$options[CURLOPT_POST] = 1;
$options[CURLOPT_POSTFIELDS] = http_build_query($params);
} else { //use GET
$url = $url.'?'.http_build_query($params);
}
$ch = curl_init( $url );
curl_setopt_array( $ch, $options );
$content = curl_exec( $ch );
$err = curl_errno( $ch );
$errmsg = curl_error( $ch );
$header = curl_getinfo( $ch );
curl_close( $ch );
$header['errno'] = $err;
$header['errmsg'] = $errmsg;
$header['content'] = $content;
return $header;
}
Then try this to load http://www.example.com/buy after the login is successful:
// after curl login setup
$exec = curl_exec($curl_crack);
if(preg_match("/^you are logged|logout|successfully logged$/i",$exec))
{
// close login CURL resource, and free up system resources
curl_close($curl_crack);
$params = array('product_id' => 'xxxx', 'qty' => 10);
$url = 'http://www.example.com/buy';
//use above function to get the url content via POST params
$result = get_web_page($url, $params, true);
if($result['http_code'] == 200) {
//echo the content
echo $result['content'];
die();
}
}

How to scrape dynamic data with PHP Simple HTML DOM Parser [duplicate]

This question already has answers here:
Scrape web page data generated by javascript
(2 answers)
Closed 3 years ago.
First let me say that I have read over numerous "scraping" threads on here and none have been of help to me. I have also searched around the internet for days, and now that I am getting close to the wire I am hoping someone can shed some light on this for me.
I am using PHP Simple HTML DOM Parser to scrape some data from a page. The URL I am working with serves dynamic content and I cannot seem to get anything to work to pull that content in. I need to scrape the plain text from <tr id="0" class="ui-widget-content jqgrow ui-row-ltr" role="row"> through <tr id="9" class="ui-widget-content jqgrow ui-row-ltr" role="row">; I feel like once I get one to work I can get the others. Because this info is not actually in the page when it is loaded, but rather comes in after the page loads, I am in a rut.
With that said, here is what I have tried:
echo file_get_html('http://sheriffclevelandcounty.com/p2c/jailinmates.aspx')->plaintext;
The above shows me everything BUT the info I need.
I also tried using the IMDb example from the plugin, modified to my needs; this is it:
// Defining the basic cURL function
function curl($url) {
// Assigning cURL options to an array
$options = Array(
CURLOPT_RETURNTRANSFER => TRUE, // Setting cURL's option to return the webpage data
CURLOPT_FOLLOWLOCATION => TRUE, // Setting cURL to follow 'location' HTTP headers
CURLOPT_AUTOREFERER => TRUE, // Automatically set the referer where following 'location' HTTP headers
CURLOPT_CONNECTTIMEOUT => 120, // Setting the amount of time (in seconds) before the request times out
CURLOPT_TIMEOUT => 120, // Setting the maximum amount of time for cURL to execute queries
CURLOPT_MAXREDIRS => 10, // Setting the maximum number of redirections to follow
CURLOPT_USERAGENT => "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1a2pre) Gecko/2008073000 Shredder/3.0a2pre ThunderBrowse/3.2.1.8", // Setting the useragent
CURLOPT_URL => $url, // Setting cURL's URL option with the $url variable passed into the function
);
$ch = curl_init(); // Initialising cURL
curl_setopt_array($ch, $options); // Setting cURL's options using the previously assigned array data in $options
$data = curl_exec($ch); // Executing the cURL request and assigning the returned data to the $data variable
curl_close($ch); // Closing cURL
return $data; // Returning the data from the function
}
// Defining the basic scraping function
function scrape_between($data, $start, $end){
$data = stristr($data, $start); // Stripping all data from before $start
$data = substr($data, strlen($start)); // Stripping $start
$stop = stripos($data, $end); // Getting the position of the $end of the data to scrape
$data = substr($data, 0, $stop); // Stripping all data from after and including the $end of the data to scrape
return $data; // Returning the scraped data from the function
}
$scraped_page = curl("http://sheriffclevelandcounty.com/p2c/jailinmates.aspx"); // Downloading the jail inmates page to the variable $scraped_page
$scraped_data = scrape_between($scraped_page, '<table id="tblII" class="ui-jqgrid-btable" cellspacing="0" cellpadding="0" border="0" role="grid" aria-multiselectable="false" aria-labelledby="gbox_tblII" style="width: 456px;">', '</table>'); // Scraping the downloaded data in $scraped_page for the content of the jqGrid table
echo $scraped_data; // Echoing $scraped_data
Of course neither of these work, so my question is: How do I use the PHP Simple DOM Parser to get dynamic content that is loaded after page load? Is it possible or am I just completely on the wrong track here?
I understand that you need the dynamic data that is loaded into the jqGrid. For that you can POST directly to the URL that returns the data:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://sheriffclevelandcounty.com/p2c/jqHandler.ashx?op=s");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch,CURLOPT_POST, 1);
curl_setopt($ch,CURLOPT_POSTFIELDS, array(
'rows'=>10000, //Here you can specify how many records you want
't'=>'ii'
));
$output = curl_exec($ch);
curl_close($ch);
echo "<pre>";
print_r(json_decode($output));
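If the endpoint returns the usual jqGrid-style JSON (an object with a rows array of cell data; not verified here), pulling the records out might look roughly like this:
$data = json_decode($output);
if ($data !== null && isset($data->rows)) {
    foreach ($data->rows as $row) {
        // Each $row typically carries the grid's cell values; adjust the
        // property names to whatever the print_r() output above actually shows.
        print_r($row);
    }
} else {
    echo "Unexpected response format";
}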

Send AJAX-like post request using PHP only

I'm currently working on an automation script in PHP (no HTML!).
I have two PHP files. One executes the script, and the other receives $_POST data and returns information.
The question is: how do I send a POST from one PHP script to another, get variables back, and continue working in the first script, with no HTML form and no redirects?
I need to make requests a couple of times from the first PHP file to the other, under different conditions, returning different types of data depending on the request.
I have something like this:
<?php // action.php (first PHP script)
/*
doing some stuff
*/
$data = sendPost('get_info');// send POST to getinfo.php with attribute ['get_info'] and return data from another file
$mysqli->query("INSERT INTO domains (id, name, address, email)
VALUES('".$data['id']."', '".$data['name']."', '".$data['address']."', '".$data['email']."')") or die(mysqli_error($mysqli));
/*
continue doing some stuff
*/
$data2 = sendPost('what_is_the_time');// send POST to getinfo.php with attribute ['what_is_the_time'] and return time data from another file
// pseudo-code: the helper I have in mind, called with 'get_info' or 'what_is_the_time'
function sendPost($attribute){
    // do the POST with the desired attribute
    return $data;
}
?>
I think I need some function that is called with an attribute, sends the POST request, and returns data based on that request.
And the second PHP file:
<?php // getinfo.php (another PHP script)
if($_POST['get_info']){
//do some actions
$data = anotherFunction();
return $data;
}
if($_POST['what_is_the_time']){
$time = time();
return $time;
}
function anotherFunction(){
//do some stuff
return $result;
}
?>
Thanks in advance guys.
Update: OK, the cURL method fetches the whole output of the PHP file. How do I get back just a $data variable instead of the whole output?
You should use cURL. Your function will look like this:
function sendPost($data) {
$ch = curl_init();
// you should put here url of your getinfo.php script
curl_setopt($ch, CURLOPT_URL, "getinfo.php");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$result = curl_exec ($ch);
curl_close ($ch);
return $result;
}
Then you should call it this way:
$data = sendPost( array('get_info'=>1) );
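Regarding the update (getting back only $data rather than the page's whole output): a common pattern is to have getinfo.php echo JSON and decode it on the calling side, since return does nothing at file scope. A rough sketch along those lines:
<?php // getinfo.php: echo the result instead of "return"
if (isset($_POST['get_info'])) {
    echo json_encode(anotherFunction()); // this body is exactly what cURL receives
    exit;
}
if (isset($_POST['what_is_the_time'])) {
    echo json_encode(array('time' => time()));
    exit;
}
?>
On the calling side, decode it: $data = json_decode(sendPost(array('get_info' => 1)), true);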
I will give you an example class; you can use it to make POST calls with or without an authorization header. I hope this helps!
/*
for your reference . Please provide argument like this,
$requestBody = array(
'action' => $_POST['action'],
'method'=> $_POST['method'],
'amount'=> $_POST['amount'],
'description'=> $_POST['description']
);
$http = "http://localhost/test-folder/source/signup.php";
$resp = Curl::postAuth($http,$requestBody);
*/
class Curl {
// without header
public static function post($http,$requestBody){
$curl = curl_init();
// Set some options - we are passing in a useragent too here
curl_setopt_array($curl, array(
CURLOPT_RETURNTRANSFER => 1,
CURLOPT_URL => $http ,
CURLOPT_USERAGENT => 'From Front End',
CURLOPT_POST => 1,
CURLOPT_POSTFIELDS => $requestBody
));
// Send the request & save response to $resp
$resp = curl_exec($curl);
// Close request to clear up some resources
curl_close($curl);
return $resp;
}
// with authorization header
public static function postAuth($http, $requestBody, $token = null){
if(!isset($token)){
$response = new stdClass();
$response->code = 400;
$response->message = "auth not found";
return json_encode($response);
}
$curl = curl_init();
$headers = array(
'auth-token: '.$token,
);
// Set some options - we are passing in a useragent too here
curl_setopt_array($curl, array(
CURLOPT_HTTPHEADER => $headers ,
CURLOPT_RETURNTRANSFER => 1,
CURLOPT_URL => $http ,
CURLOPT_USERAGENT => 'From Front End',
CURLOPT_POST => 1,
CURLOPT_POSTFIELDS => $requestBody
));
// Send the request & save response to $resp
$resp = curl_exec($curl);
// Close request to clear up some resources
curl_close($curl);
return $resp;
}
}
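For example, tying it back to the question's getinfo.php (the URL and field are placeholders):
$url  = "http://localhost/getinfo.php";
$resp = Curl::post($url, array('get_info' => 1));
$data = json_decode($resp, true); // assuming getinfo.php echoes JSON, as sketched above
print_r($data);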
