I'm trying to set up a bot for Bittrex using the Bittrex API (https://bittrex.com/Home/Api). I previously tried Python but had a hard time, as the documentation examples are in PHP, so I decided to switch to PHP. I'm trying to create the bot but having a hard time getting started. I pasted the initial code:
$apikey='xxx';
$apisecret='xxx';
$nonce=time();
$uri='https://bittrex.com/api/v1.1/market/getopenorders?apikey='.$apikey.'&nonce='.$nonce;
$sign=hash_hmac('sha512',$uri,$apisecret);
$ch = curl_init($uri);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('apisign:'.$sign));
$execResult = curl_exec($ch);
$obj = json_decode($execResult);
And according to this video:
https://youtu.be/K0lDTK3D-74?t=5m30s
It should return this:
http://i.imgur.com/jCoAUT9.png
But when I put the same thing in a PHP file with my own API key and secret, I just get a blank webpage with nothing on it. This is what my PHP file looks like (API key and secret removed for security reasons):
http://i.imgur.com/DYYoY0g.png
Any idea why this could be happening and how I could fix it?
Edit: No need for help anymore. I decided to go back to python and try to do it there and finally made it work :D
The video you're working from has faked their results. Their code doesn't do anything with the value of $obj, so I wouldn't expect anything to show up on the web page. (And definitely not with the formatting they show.)
If you're unfamiliar enough with PHP that this issue wasn't immediately apparent to you, this is probably a sign that you should step back and get more familiar with PHP before you continue -- especially if you're going to be running code that could make you lose a lot of money if it isn't working properly.
You need to echo your $obj or at least var_dump() it to see the content on a webpage.
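For illustration, here is a minimal sketch of that fix, assuming the request itself succeeds. Note that without CURLOPT_RETURNTRANSFER, curl_exec() returns true rather than the response body, so $obj would never hold the decoded JSON:
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // make curl_exec() return the body instead of printing it
$execResult = curl_exec($ch);
if ($execResult === false) {
    die('cURL error: ' . curl_error($ch)); // surface transport errors instead of a blank page
}
$obj = json_decode($execResult);
var_dump($obj); // dump the decoded response so something actually appears on the page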
Related
I am creating a PHP package that I want anyone to be able to use.
I've not done any PHP development in a few years, and I'm unfamiliar with PEAR and PECL.
The first part of my question is about PEAR and PECL:
It seems to me that PEAR and PECL update my machine rather than doing anything to my code base, which leads me to assume that anything I do with them will also need to be duplicated by anyone wanting to use my package. Is that correct?
The second part of my question is specific: I just want to make a simple HTTP POST request, and ideally I'd like to do it without any configuration required from those who use my package.
These are the options I'm considering:
HTTPRequest seems like the perfect option, but it throws "Fatal error: Uncaught Error: Class 'HttpRequest' not found" when I try to use it out of the box, and when I follow these instructions for installing it I get "autoheader: error: AC_CONFIG_HEADERS not found in configure.in
ERROR: `phpize' failed". I don't want to debug something like that just to make a simple HTTP request, nor do I want anyone using my package to have to struggle through it.
I've used HTTP_Request2 via a pear install and it works for me, but nothing is added to my code base at all, so presumably this will break for anyone using my package unless they follow the same install steps?
I know that I can use cURL, but its syntax seems way over the top for such a simple action (I want my code to be really easy to read).
I guess I can use file_get_contents(); is that the best option?
Perhaps I'll phrase the second part of my question as:
Is there an approach that is considered best practice for (1) making an HTTP request in PHP, and (2) creating a package that anyone can easily use?
This really depends on what you need the request for. While cURL can be daunting when you first learn it, I prefer it most of the time; once you get used to the syntax and the various options, it becomes quite readable. When all I need to do is query a page with no headers, I usually use file_get_contents(), which looks a lot nicer and simpler, and I think most PHP developers would agree. Overall I recommend cURL: once you need to set headers it keeps the request organized, and it is more widely used than wrangling file_get_contents().
EDIT
When learning how to do cURL in PHP, the list of options on the documentation page is your friend! http://php.net/manual/en/function.curl-setopt.php
Here's an example of a simple POST request using PHP that will return the response text:
$data = array("arg1" => "val1", "arg2" => true); // POST data included in your query
$ch = curl_init("http://example.com"); // Set url to query
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST"); // Send via POST
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($data)); // Set POST data
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // Return response text
curl_setopt($ch, CURLOPT_HTTPHEADER, array("Content-Type: application/x-www-form-urlencoded")); // send POST data as form data
$response = curl_exec($ch);
curl_close($ch);
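For comparison, here is a sketch of the same POST done with file_get_contents() and a stream context; this assumes allow_url_fopen is enabled on the host:
$data = array("arg1" => "val1", "arg2" => true); // same POST data as above
$context = stream_context_create(array(
    'http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => http_build_query($data), // form-encoded body
    ),
));
$response = file_get_contents("http://example.com", false, $context); // returns the response text, or false on failure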
I have an interesting situation when calling the Shopify API. I use the standard procedure for calling the URL and getting the data, like this:
define('SHOPIFY_SHOP', 'myteststore.myshopify.com');
define('SHOPIFY_APP_API_KEY', 'xxxx');
define('SHOPIFY_APP_PASSWORD', 'yyy');
$shop_url = 'https://'.SHOPIFY_APP_API_KEY.':'.SHOPIFY_APP_PASSWORD.'@'.SHOPIFY_SHOP;
$response = Requests::get($shop_url.'/admin/products.json');
And I correctly get the response, parse the data, and all works great. Now, when I put it on the actual server (Ubuntu 12.04), I noticed a weird message from the Shopify API:
[API] Invalid API key or access token (unrecognized login or wrong password)
I tried creating a new app, but it's still the same: the same file and the same setup work on my machine but not on the server. (The only difference in the file is the path to the Requests library: require_once './Requests/library/Requests.php'; on Linux and require_once '..\Requests\library\Requests.php'; on Windows.) As stated, I use the Requests library, and I assume the library (or something else) rewrites the URL so that it doesn't reach Shopify correctly.
I tried using cURL with the URL directly, and it works that way as well. Can anyone point out what might be causing this?
Update: I moved to another library, which solved the issue, but I would like to know what was causing this, since I had a great experience with Requests up to this point.
I'm starting to use the same lib, and I stumbled upon something relevant right after finding this question:
https://github.com/rmccue/Requests/issues/142#issuecomment-147276906
Quoting relevant part:
This is an intentional part of the API design; in a typical use case,
you won't necessarily need data sent along with a request. Building
the URL for you is just a convenience.
Requests::get is a helper function designed to make GET requests
lightweight in the code, which is why there's no $data parameter
there. If you need to send data, use Requests::request instead
$response = Requests::request( 'http://httpbin.org/get', $headers, $data, Requests::GET, $options );
// GET is the default for type, and $options can be blank, so this can be shortened:
$response = Requests::request( 'http://httpbin.org/get', $headers, $data );
I couldn't figure out why this is happening; it appears the Requests library strips the parameters from GET requests, so I moved to the unirest library, which solved the issue.
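If you would rather stay with Requests, one thing worth trying (a sketch, not verified against Shopify) is passing the credentials through the library's auth option instead of embedding them in the URL, so nothing depends on how the URL is rewritten:
$options = array('auth' => array(SHOPIFY_APP_API_KEY, SHOPIFY_APP_PASSWORD)); // basic auth via the options array
$response = Requests::get('https://' . SHOPIFY_SHOP . '/admin/products.json', array(), $options);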
I'm trying to get a JSON string from a page in my Laravel project, using this:
$json = file_get_contents($url);
$data = json_decode($json, TRUE);
return View::make('adventuretime.marceline')
->with('json', $json)
->with('title', 'ICE KING')
->with('description', 'I am the Ice King')
->with('content', 'ice king');
But since I'm only running on localhost, I think that's why this doesn't work and doesn't output anything. What is the proper way to make this flexible, so it can fetch the JSON string from any $url value using PHP?
Looking at the comments above, it is possible that the $url you are using is not valid; check it by pointing your browser there and seeing what happens.
If you are sure that the $url is fine but you still get a 404 Not Found error, verify that you have proper Laravel routing defined for that address. If the routes are fine, maybe you forgot to run
composer dump-autoload
after modifying your routes.php. If so, try the above and refresh the browser to see if it helps.
Furthermore, bear in mind that with your current function you can only submit GET requests. What is more, this function may be unavailable for fetching remote URLs on some hosting servers, for security reasons. If you still want to use it, it'd be good to check
if ($json !== FALSE)
before you process the $json response: if file_get_contents() fails, it returns FALSE.
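Putting that together, a minimal defensive sketch might look like this (App::abort is just one way to bail out in Laravel):
$json = file_get_contents($url);
if ($json === FALSE) {
    // fetch failed: bad URL, blocked by the host, or allow_url_fopen disabled
    App::abort(500, 'Could not fetch ' . $url);
}
$data = json_decode($json, TRUE);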
Referring to the part of your question:
what is the proper way for it to be flexible and be able to get the JSON string with any $url
I'd suggest using cURL as a standard and convenient way to fetch remote content. With cURL you have better control over sending the HTTP request and receiving the response. Personally, in my Laravel 4 apps I often use the jyggen/curl package. You can read its docs here: jyggen docs
If you are not satisfied with cURL and you want greater control, try Guzzle. As the authors state, Guzzle is a PHP HTTP client & framework for building RESTful web service clients.
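For instance, a fetch with Guzzle might look like this (a sketch assuming a recent Guzzle version installed via Composer; class names differ in older releases):
require 'vendor/autoload.php';

$client = new GuzzleHttp\Client();
$response = $client->get($url); // throws an exception on connection or HTTP errors
$data = json_decode((string) $response->getBody(), true); // cast the body stream to a string before decoding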
I have a PHP script with which I'm trying to get the contents of a page. The code I'm using is below:
$url = "http://test.tumblr.com";
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, TRUE);
$txt = curl_exec($ch);
curl_close($ch);
echo "$txt";
It works fine for me as it is now. The problem I'm having is that if I change the URL string to
$url = "http://-test.tumblr.com"; or $url = "http://test-.tumblr.com";
it will not work. I understand that -test.example.com and test-.example.com are not valid hostnames, but with Tumblr they do exist. Is there a workaround for this?
I even tried creating a header redirect in another PHP file, so cURL would first get a valid hostname, but it works the same way.
Thank you
Domain Names with hyphens
As you can see in a previous question about the allowed characters in a subdomain, - is not a valid character to start or end a subdomain with. So this is actually correct behavior.
The same problem was reported on the curl mailing list some time ago, but since curl follows the standard, there is actually nothing for them to change on their side.
Most likely tumblr knows about this and therefore offers some alternative address leading to the same site.
Possible workaround
However, you could try using nslookup to manually look up the IP and then send your request directly to that IP (while manually setting the Host header to the correct hostname). I didn't try this out, but it seems nslookup is capable of resolving malformed domain names that start or end with a hyphen.
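An untested sketch of that idea: look up the IP by hand with nslookup, then point cURL at the IP while forcing the Host header, so cURL never has to resolve the malformed name itself:
$ip = '203.0.113.10'; // placeholder: substitute the address nslookup reports for the hyphenated host
$ch = curl_init("http://$ip/");
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Host: test-.tumblr.com')); // Tumblr routes requests on this header
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$txt = curl_exec($ch);
curl_close($ch);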
curl
Additionally, you should know that PHP's curl functions are a direct interface to libcurl, the same library behind the curl command-line tool. So if you encounter special behavior, it is most likely due to libcurl's logic rather than the PHP functions.
I heard it is possible to capture web pages using PHP (maybe above 6.0) on a Windows server. I got some sample code and tested it, but none of it worked correctly.
Does anyone know the right way to capture a web page and save it as an image file in a web application? Please teach me.
You could use the Browsershots API: http://browsershots.org/
With the XML-RPC interface you can use almost any language to access it:
http://api.browsershots.org/xmlrpc/
Though you have asked for a PHP solution, I would like to share another solution in Perl: WWW::Mechanize, along with LWP::UserAgent and HTML::Parser, can help with screen scraping.
Some documents for reference:
Web scraping with WWW::Mechanize
Screen-scraping with WWW::Mechanize
Downloading the HTML of a web page is commonly known as screen scraping. This can be useful if you want a program to extract data from a given page. The easiest way to request HTTP resources is to use a tool called cURL. cURL comes as a standalone Unix tool, but there are libraries for it in about every programming language. To capture this page from the Unix command line, type:
curl http://stackoverflow.com/questions/1077970/in-any-languages-can-i-capture-a-webpageno-install-no-activex-if-i-can-plz
In PHP, you can do the same:
<?php
$ch = curl_init() or die('failed to initialize cURL');
curl_setopt($ch, CURLOPT_URL,"http://stackoverflow.com/questions/1077970/in-any-languages-can-i-capture-a-webpageno-install-no-activex-if-i-can-plz");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
$data1 = curl_exec($ch) or die(curl_error($ch));
echo "<font color=black face=verdana size=3>".$data1."</font>";
echo curl_error($ch);
curl_close($ch);
?>
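If you want to keep a copy of the page instead of echoing it, you can write the fetched markup straight to disk:
file_put_contents('capture.html', $data1); // save the page source for later processing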
Now, before copying an entire website, you should check its robots.txt file to see whether robots are allowed to spider the site, and you may want to check whether an API is available that lets you retrieve the data without scraping the HTML.