How to capture a response after $_POST in PHP?

So I have a file named instance.php that contains my class "listing". Page 2 sends
$_POST arguments to instance.php. instance.php in turn instantiates an object of "listing" and echoes a unique ID. I want page 2 to capture this ID, and maybe even more than that.
The problem is that instance.php and page 2 do not live on the same server. I know how to handle OOP in Java quite well, but I'm getting the feeling that in PHP it's not that straightforward. I'm pretty sure I'm missing something here.
Could you help me by suggesting a good design for my implementation requirement?
EDIT: To be clear, I'm searching for a way for two or more PHP files that don't live on the same server/domain to have this relationship.

If you insist on using POST for the interaction (which is a somewhat strange choice), then you will have to use cURL to facilitate it.
The other way would be to use the file_get_contents() function. That would limit you to using only the GET method:
// in your Page2
$val = 12345;
$data = file_get_contents("http://external.site.foo/instance.php?param={$val}");
var_dump($data);
// in the instance.php
echo $_GET['param'] , '-verified';

You would need to install and use cURL. The code in page 2 will look something like:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "http://example.com/instance.php");
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, "key1=value1&key2=value2");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response as a string instead of printing it
$id = curl_exec($ch);
curl_close($ch);
If you want to retrieve more than the ID, I would suggest building an array in instance.php and serializing it:
echo serialize(array("id" => 1, "name" => "John"));
and unserializing it in page 2:
...
$arr = unserialize(curl_exec($ch));
...
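Note that calling unserialize() on data fetched from a remote host is risky, since crafted input can instantiate arbitrary objects. A safer alternative (my suggestion, not part of the original answer) is to exchange JSON instead:

```php
<?php
// In instance.php: emit the payload as JSON instead of serialize()
echo json_encode(array("id" => 1, "name" => "John"));

// In page 2: decode the cURL response back into an array
$response = '{"id":1,"name":"John"}'; // stands in for curl_exec($ch)
$arr = json_decode($response, true);
echo $arr["name"]; // John
```

json_decode($response, true) returns an associative array, so the result is used exactly like the unserialize() version above.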

Related

Any idea how to connect to this API?

I tried to find a widget to show Tinkoff bank's currency rate, because it changes every 60 seconds, but found nothing.
Finally I found this API, but there is no documentation for it.
I tried to find articles about parsing it, but I guess there's no use in that because of the absence of any tags.
I need to show the currency rate on my website via this API. Any idea?
Big thanks!
You just need to fetch the content. You can use cURL or file_get_contents().
cURL version:
<?php
$url = "https://www.tinkoff.ru/api/v1/currency_rates";
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, TRUE);
$r = curl_exec($curl);
curl_close($curl);
$array = json_decode($r, true);
echo "<pre>";
print_r($array);
echo "</pre>";
?>
file_get_contents version:
<?php
$r = file_get_contents('https://www.tinkoff.ru/api/v1/currency_rates');
echo "<pre>";
print_r(json_decode($r, true));
echo "</pre>";
?>
Both of them will work unless the remote website requires you to be human (i.e. has extra verification to stop robot requests). cURL would be the better choice in that case, because you can fake a user agent using a header array.
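As a sketch, faking the user agent with cURL looks like this (the header values are illustrative, and curl_exec is left commented out so the snippet doesn't depend on the remote site responding):

```php
<?php
$curl = curl_init("https://www.tinkoff.ru/api/v1/currency_rates");
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
// Pretend to be a regular browser; some sites block cURL's default agent
curl_setopt($curl, CURLOPT_USERAGENT, "Mozilla/5.0 (X11; Linux x86_64)");
curl_setopt($curl, CURLOPT_HTTPHEADER, array("Accept: application/json"));
// $r = curl_exec($curl); // then json_decode($r, true) as above
curl_close($curl);
```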
Once you have the array built, it's just a matter of accessing the required data, using $r as the array decoded from the remote JSON structure.
It looks pretty straightforward to me; anyone with a decent knowledge of PHP can do this, given the output above. With that information, I would:
Get the result into PHP using file_get_contents().
Parse the result as an array using json_decode($contents, true).
Display the value using $output["payload"]["rates"][0]["buy"] or something similar.
At the time of writing, the above gets me 58.
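Putting those steps together as a sketch: the sample JSON below stands in for the live API response, and the payload/rates structure is assumed from the answer above, so the live shape may differ.

```php
<?php
// Sample response in the shape described above (the live API will differ)
$contents = '{"payload":{"rates":[{"fromCurrency":{"name":"USD"},"buy":58,"sell":60}]}}';

// Parse the result as an associative array
$output = json_decode($contents, true);

// Pull out the "buy" rate
echo $output["payload"]["rates"][0]["buy"]; // 58
```

In practice you would obtain $contents with file_get_contents() or the cURL snippet above.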

How do you submit a PHP form that doesn't return results immediately using Python?

There is a PHP form which queries a massive database. The URL for the form is https://db.slickbox.net/venues.php. It takes up to 10 minutes after the form is submitted for results to be returned, and the results are returned inline on the same page. I've tried using Requests, urllib2, lxml, and Selenium, but I cannot come up with a solution using any of these libraries. Does anyone know of a way to retrieve the page source of the results after submitting this form?
If you know of a solution, then for the sake of testing just fill out the name field ("vname") with the name of any store/gas station that comes to mind. Ultimately I also need to set the checkboxes with the "checked" attribute, but that's a subsequent goal once I get this working. Thank you!
I usually rely on cURL to do this kind of thing.
Instead of submitting the form with the button and scraping the result, call the response page directly, passing it your request.
As I work in PHP, it's quite easy to do this. With Python, you would need pycURL to manage the same thing.
So the only thing to do is to call venues.php with the right argument values sent via the POST method with cURL.
You will need to prepare your request (country code, category name), but you won't need to check the checkbox or load the website page in your browser.
ini_set('max_execution_time', 1200); // wait up to 20 minutes before quitting
$ch = curl_init();
// set URL and other appropriate options
curl_setopt($ch, CURLOPT_URL, "https://db.slickbox.net/venues.php");
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // capture the response instead of echoing it
// prepare arguments for the form
$data = array('adlock' => 1, 'age' => 0, 'country' => 145, 'imgcnt' => 0, 'lock' => 0, 'regex' => 1, 'submit' => 'Search', 'vname' => 'test');
// add arguments to our request
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
// launch the request
if (!($result = curl_exec($ch))) {
    trigger_error(curl_error($ch));
}
echo $result;
How about ghost?
from ghost import Ghost

ghost = Ghost()
with ghost.start() as session:
    page, extra_resources = session.open("https://db.slickbox.net/venues.php", wait_onload_event=True)
    ghost.set_field_value("input[name=vname]", "....")
    # Any other values
    page.fire_on('form', 'submit')
    page, resources = ghost.wait_for_page_loaded()
    content = session.content  # or page.content, I forget which
Afterwards you can use BeautifulSoup to parse the HTML, or Ghost may have some rudimentary utilities to do that.

How to get Wikipedia page HTML with absolute URLs using the API?

I'm trying to retrieve articles through the Wikipedia API using this code:
$url = 'http://en.wikipedia.org/w/api.php?action=parse&page=example&format=json&prop=text';
$ch = curl_init($url);
curl_setopt ($ch, CURLOPT_RETURNTRANSFER, 1);
$c = curl_exec($ch);
$json = json_decode($c);
$content = $json->{'parse'}->{'text'}->{'*'};
I can view the content on my website and everything is fine, but I have a problem with the links inside the article I retrieved. If you open the URL you can see that all the links start with href=\"/,
meaning that if someone clicks any related link in the article, it redirects them to www.mysite.com/wiki/... (Error 404) instead of en.wikipedia.org/wiki/...
Is there any piece of code that I can add to the existing one to fix this issue?
This seems to be a shortcoming in the MediaWiki action=parse API. In fact, someone already filed a feature request asking for an option to make action=parse return full URLs.
As a workaround, you could either try to mangle the links yourself (like adil suggests), or use index.php?action=render like this:
http://en.wikipedia.org/w/index.php?action=render&title=Example
This will only give you the page HTML with no API wrapper, but if that's all you want anyway then it should be fine. (For example, this is the method used internally by InstantCommons to show remote file description pages.)
You should be able to fix the links like this:
$content = str_replace('<a href="/w', '<a href="//en.wikipedia.org/w', $content);
In case anyone else needs to replace all instances of the URL with a regular expression, you'll need the g flag (in JavaScript):
/<a href="\/w/g
(PHP's str_replace already replaces every occurrence by default.)
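The same fix can be done in PHP with preg_replace; a minimal sketch, where the input HTML is illustrative:

```php
<?php
$content = '<p><a href="/wiki/Example">Example</a> and <a href="/wiki/Test">Test</a></p>';
// Prefix every relative /wiki or /w link with the Wikipedia host
$content = preg_replace('#<a href="(/w[^"]*)"#', '<a href="//en.wikipedia.org$1"', $content);
echo $content;
```

The protocol-relative //en.wikipedia.org prefix matches the str_replace version above, so the links work over both http and https.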

Parsing iTunes json, then displaying it on webpage, and caching it with PHP

I'm working on a project where I need to get info from iTunes, cache it, and display it on a webpage with PHP. I prefer to use cURL, since it seems faster, but I'm more familiar with file_get_contents(). An example of the JSON URL is http://itunes.apple.com/lookup?id=284910350. I'm able to grab and decode it, but I'm having trouble from there.
Here's my start:
<?php
$cas = curl_init('http://itunes.apple.com/lookup?id=284910350');
curl_setopt($cas, CURLOPT_RETURNTRANSFER, 1);
$jsonitunes = curl_exec($cas);
curl_close($cas);
$arr = json_decode($jsonitunes,true);
foreach($arr as $item) {
echo "kind: ". $item['kind'] ."<br>";
}
?>
I can print the array, or var_dump it, but can't seem to grab any values. After that, I need to cache the whole thing. If possible, I'd like it to grab new content when it arrives, or on a frequent schedule, without weighing down the server.
PHP Notice: Undefined index: kind in /var/www/html/frank/scratch.php on line 9
That should be your first clue (make sure you're logging notices somewhere you can see them as you work). When you see that, you know you're referencing the array incorrectly.
Your next step should be
var_dump($arr);
to see where the key you're looking for actually is.
Then you should see that you actually need
foreach($arr['results'] as $item) {
Have you tried the json_decode function? You should be able to use cURL to download the contents of that page, store it in a variable, and then use json_decode.
<pre>
<?php
$ch = curl_init("http://itunes.apple.com/lookup?id=284910350");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$content = curl_exec($ch);
curl_close($ch);
$jsonDecoded = json_decode($content, true);
echo $jsonDecoded['results'][0]['artistName'];
?>
</pre>

serialize form to POST after getting content using CURL

I want to POST to a URL using cURL and PHP.
There is a big form on the webpage, and I don't want to manually copy all the variables and put them in my POST request.
I'm guessing there has to be a way to serialize the form automatically (using the DOM or something) and then just change whatever values I need.
I could not google my way out of this one, so I was wondering whether anyone would be kind enough to help.
So, is there any way to automatically serialize a form which is buried in a bunch of HTML content I just pulled from a URL?
Thanks for any help,
Andrew
$http = new HttpQueryString();
$http->set($_POST);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS,$http->get());
Requires PECL pecl_http >= 0.22.0
It's not too clear to me whether you are asking how to get the form from the browser to the server, or how to place the posted form in the cURL request.
From PHP, assuming the form posted over, it would be as simple as:
curl_setopt($handle, CURLOPT_POSTFIELDS, $_POST);
though no data is validated that way.
From the web side, I'm not sure why you would serialize the form using the DOM/JavaScript, as opposed to just submitting it via a normal post.
Not sure what the question really is, but you're either wanting to do something like this:
$fields_string = http_build_query($data_to_send);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS,$fields_string);
or you need to look into something like this (using the Simple HTML DOM library's file_get_html()):
$html = file_get_html('http://www.thesite.com/thepage.html');
foreach($html->find('input') as $element)
echo $element->name . '<br>';
I don't understand the question. It sounds like you want to screen-scrape a form, fill it in, and then POST it back to the page you got it from. Is that right?
Edit in response to comment:
I'd recommend scraping the cURL'd HTML with a tool like Simple HTML DOM (that's what I use for scraping with PHP). The documentation for your library of choice will help you figure out how to identify the form fields. After that, you'll want to cURL the form's action page, with the CURLOPT_POSTFIELDS option set to the values you want to pass to the form, urlencode()'d of course.
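To sketch that scrape-and-resubmit idea with PHP's built-in DOMDocument instead of a third-party library (the HTML, field names, and values here are all illustrative): parse the fetched page, collect each input's name/value pair into an array, override what you need, then hand the array to CURLOPT_POSTFIELDS.

```php
<?php
// Illustrative HTML standing in for the page you fetched with cURL
$html = '<form action="/submit"><input name="user" value="bob">'
      . '<input name="token" value="abc123"></form>';

$doc = new DOMDocument();
$doc->loadHTML($html);

// Collect every input's name => value pair
$fields = array();
foreach ($doc->getElementsByTagName('input') as $input) {
    $fields[$input->getAttribute('name')] = $input->getAttribute('value');
}

// Override just the values you care about, then POST the lot
$fields['user'] = 'alice';
// curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($fields));
print_r($fields);
```

This keeps hidden fields (like the token above) intact, which is usually the point of serializing the form rather than rebuilding it by hand.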
