I'm working on a guild homepage with a page that displays our guild roster. The data is pulled from the Blizzard API using OAuth, and everything works so far. The problem is that the roster page takes at least 10 seconds to load, while Postman usually returns the same results in about 200 ms. After some testing I figured out that the requests inside the for-loop slow things down by a lot. Since this is my first time working with cURL, my code is a bit messy. Do you have any tips on how to improve the requests in the for-loop? Thanks in advance.
<?php
$curl_handle = curl_init();
curl_setopt($curl_handle, CURLOPT_URL, "https://eu.battle.net/oauth/token");
curl_setopt($curl_handle, CURLOPT_POSTFIELDS, ['grant_type' => 'client_credentials']);
curl_setopt($curl_handle, CURLOPT_USERPWD, $apikey . ':' . $apisecret);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl_handle, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
curl_setopt($curl_handle, CURLOPT_USERAGENT, 'Mozilla/5.001 (windows; U; NT4.0; en-US; rv:1.0) Gecko/25250101');
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, ['Content-Type: multipart/form-data']);
$response = curl_exec($curl_handle);
$access_token = json_decode($response)->access_token;
curl_reset($curl_handle);
curl_setopt($curl_handle, CURLOPT_URL, "https://eu.api.blizzard.com/data/wow/guild/malfurion/war-hounds/roster?namespace=profile-eu");
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, ['Content-Type: multipart/form-data','Authorization: Bearer ' . $access_token]);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl_handle, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
curl_setopt($curl_handle, CURLOPT_USERAGENT, 'Mozilla/5.001 (windows; U; NT4.0; en-US; rv:1.0) Gecko/25250101');
$guildrequest = curl_exec($curl_handle);
$guildname = json_decode($guildrequest, true);
$name1 = $guildname['members'];
curl_reset($curl_handle);
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, ['Content-Type: multipart/form-data','Authorization: Bearer ' . $access_token]);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl_handle, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
curl_setopt($curl_handle, CURLOPT_USERAGENT, 'Mozilla/5.001 (windows; U; NT4.0; en-US; rv:1.0) Gecko/25250101');
echo "<table style=\"background-color:#595959;border:2px solid black\">";
echo "<tr><th>Roster</th><th>Name</th><th>Klasse</th></tr>";
echo "<style> tr:nth-child(even) {background-color: #4a4a4a;} table{border-collapse: collapse;}
</style>";
for ($i = 0; $i < count($name1); $i++)
{
    // player-class general JSON
    curl_setopt($curl_handle, CURLOPT_URL, $name1[$i]['character']['playable_class']['key']['href']);
    $playerClass = curl_exec($curl_handle);
    $playerClassName = json_decode($playerClass, true);
    // player-class media
    curl_setopt($curl_handle, CURLOPT_URL, $playerClassName['media']['key']['href']);
    $icon = curl_exec($curl_handle);
    $playerClassIcon = json_decode($icon, true);
    $urlPrep = $playerClassIcon['assets'][0]['value'];
    $url = str_replace('https://', 'http://', $urlPrep);
    // player general JSON
    curl_setopt($curl_handle, CURLOPT_URL, $name1[$i]['character']['key']['href']);
    $character = curl_exec($curl_handle);
    $characterJSON = json_decode($character, true);
    // player media
    $urlMedia = $characterJSON['media']['href'];
    curl_setopt($curl_handle, CURLOPT_URL, $urlMedia);
    $charactermedia = curl_exec($curl_handle);
    $charactermediaJSON = json_decode($charactermedia, true);
    $urlPrep2 = $charactermediaJSON['assets'][0]['value'];
    if ($urlPrep2 == null) {
        $urlPrep2 = $charactermediaJSON['avatar_url'];
    }
    $url2 = str_replace('https://', 'http://', $urlPrep2);
    $color = 'black';
    echo "<tr><td><img src=\"" . $url2 . "\" alt=\"" . $name1[$i]['character']['name'] . "\"></td><td style=\"font-weight:bold;color:" . $color . "\"><a style=\"font-weight:bold;color:" . $color . "\" href=\"https://worldofwarcraft.com/de-de/character/eu/" . $name1[$i]['character']['realm']['slug'] . "/" . $name1[$i]['character']['name'] . "\">" . $name1[$i]['character']['name'] . "</a></td><td><img src=\"" . $url . "\" alt=\"" . $playerClassName['name']['de_DE'] . "\"></td></tr>";
}
?>
First, I'd recommend you obscure your API keys on principle, assuming you've shown the real ones.
As for your question, without going through the rest of your code to make it more efficient, or looking into whether some other API request mode might work better, I'd suggest saving the returned values as reusable transients that get refreshed only when necessary. The refresh can be handled in a variety of ways, but the main result is that the whole set of requests won't need to be made every time the page is loaded.
https://developer.wordpress.org/apis/handbook/transients/
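For example, here is a minimal sketch of the transient approach (the key name, the 10-minute lifetime, and the build_roster_html() wrapper are hypothetical; get_transient/set_transient are the WordPress functions from the link above):

$roster = get_transient('guild_roster_html');
if ($roster === false) {
    // build_roster_html() is a hypothetical function wrapping the cURL calls above
    $roster = build_roster_html();
    set_transient('guild_roster_html', $roster, 10 * MINUTE_IN_SECONDS);
}
echo $roster;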
Don't ever do this:
curl_setopt($curl_handle, CURLOPT_POSTFIELDS, [
'grant_type' => 'client_credentials'
]);
curl_setopt($curl_handle, CURLOPT_HTTPHEADER, [
'Content-Type: multipart/form-data'
]);
You risk ruining the request. It is supposed to look like this:
Content-Length: 163
Content-Type: multipart/form-data; boundary=------------------------becb5cf73f034d20
--------------------------becb5cf73f034d20
Content-Disposition: form-data; name="grant_type"
client_credentials
--------------------------becb5cf73f034d20--
When you attempt to overwrite the multipart header that curl generates automatically, you risk removing the boundary from it, making the request unparsable. Let curl generate the multipart header itself; don't override it.
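In other words, the token request only needs something like this (a sketch based on the question's own code):

$curl_handle = curl_init("https://eu.battle.net/oauth/token");
curl_setopt($curl_handle, CURLOPT_POSTFIELDS, ['grant_type' => 'client_credentials']);
curl_setopt($curl_handle, CURLOPT_USERPWD, $apikey . ':' . $apisecret);
curl_setopt($curl_handle, CURLOPT_RETURNTRANSFER, true);
// no Content-Type override via CURLOPT_HTTPHEADER: curl adds the multipart
// header, boundary included, by itself
$response = curl_exec($curl_handle);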
Optimization: the first request, which fetches an OAuth token, can be replaced by a cronjob that generates a fresh token every minute, 24/7, and stores it in a database. Fetching the token from your own database should be much faster than requesting a new one from Blizzard, so that alone would speed things up: always generate new tokens with a cronjob and cache them.
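A minimal sketch of such a refresher script (a file-based cache instead of a database, to keep it short; filenames are hypothetical):

// refreshToken.php - run from cron every minute
$ch = curl_init("https://eu.battle.net/oauth/token");
curl_setopt($ch, CURLOPT_POSTFIELDS, ['grant_type' => 'client_credentials']);
curl_setopt($ch, CURLOPT_USERPWD, $apikey . ':' . $apisecret);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
file_put_contents('token.json', curl_exec($ch));
curl_close($ch);

// the roster page then reads the cached token instead of requesting one:
// $access_token = json_decode(file_get_contents('token.json'))->access_token;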
But since the request you send is not dynamic at all (it doesn't depend on $_GET, $_POST, or anything like that), having a cronjob generate a fresh HTML page every minute, and simply serving the newest version of that file, would be much faster still. The fastest a cronjob can normally run is every 60 seconds. If that is too slow for you (say you need every 10 or 30 seconds), consider using a daemon, or use a cronjob hack: next to the every-minute entry php generateHtml.php, add a second every-minute entry sleep 30 && php generateHtml.php. This hack effectively runs the job every 30 seconds instead of the normal 1-minute limit; you can do the same with sleep 10 &&, sleep 20 &&, etc., to run it every 10 seconds. A sketch of the setup follows below.
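A minimal sketch of that generator, assuming hypothetical file names and paths (the crontab lines implement the 30-second hack):

// generateHtml.php
// crontab:
//   * * * * * php /path/to/generateHtml.php
//   * * * * * sleep 30 && php /path/to/generateHtml.php
ob_start();
include 'roster.php';                        // the existing roster-rendering script
$html = ob_get_clean();
file_put_contents('roster.html.tmp', $html); // write to a temp file first, then
rename('roster.html.tmp', 'roster.html');    // rename so visitors never see a half-written page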
Using a cronjob will make your actual page loads take milliseconds instead of the 10+ seconds you're suffering with now :)
I'm currently running a personal file upload site, which roughly does the following:
Select files to upload through the website
PHP uploads and zips the files together
The zipped file gets sent via cURL to a remote storage server
A URL is presented to share the files
I'm having an issue with step 3. I currently have a progress bar for the upload from the browser to the website, and I'd like another percentage showing the cURL transfer to the storage server. I've seen that you can use CURLOPT_PROGRESSFUNCTION, but I cannot get it working, even after a lot of searching.
My current code is as below;
// Prepare the POST data
$data = array(
'file' => new CurlFile('/path/to/local.zip', 'application/zip', 'uploaded_archive.zip'),
'upload_folder' => 'folder_name'
);
// Send the POST
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://example.com/upload.php");
curl_setopt($ch, CURLOPT_HEADER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:54.0) Gecko/20100101 Firefox/54.0");
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_NOPROGRESS, false);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION, 'progressCall');
curl_setopt($ch, CURLOPT_BUFFERSIZE, 128);
curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 160);
curl_setopt($ch, CURLOPT_TIMEOUT, 160);
$res = curl_exec($ch);
curl_close($ch);
This calls "progressCall" with the progress, which looks like the below to test;
// Function to save the output of progress from CURLOPT_PROGRESSFUNCTION
function progressCall($ch, $dltotal, $dlnow, $uptotal, $upnow) {
$progress = "$ch/$dltotal/$dlnow/$uptotal/$upnow";
file_put_contents('tmp/curlupload.txt', 'PERC: ' . $progress . "\n", FILE_APPEND);
}
The problem is, I get the output below:
https://gist.github.com/ialexpw/6160e32495d857cc0d8984373f3808ae
PERC: Resource id #15/0/0/0/65532
PERC: Resource id #15/0/0/0/65532
PERC: Resource id #15/0/0/0/131064
PERC: Resource id #15/0/0/0/131064
PERC: Resource id #15/0/0/0/196596
This seems to show the uploaded amount going up (which is OK), but the total upload amount stays 0. I've tried a lot of examples online, but I cannot work out why the total amount to be uploaded is always 0. I tried setting a Content-Length header, but it made no difference.
Any help would be appreciated if something is wrong with the above.
Thanks!
Hello, sorry for responding two years later, but I was just working on this!
It is possible for uploads, but as mentioned, in this example we don't have the total file size.
The easiest way, if possible, is to get the file size into the callback function, with a global for example:
function progressCall($ch, $dltotal, $dlnow, $uptotal, $upnow) {
    // get this beforehand with: $fsize = filesize($thefile);
    global $fsize;
    $percent = round($upnow / $fsize * 100);
    echo $percent . ' % uploaded';
}
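For what it's worth, here is a sketch of the same idea without a global, capturing the file size in a closure instead. It assumes PHP 5.5+ (where the handle is passed as the first callback argument, as in the question's output) and reuses the paths from the question:

$path  = '/path/to/local.zip';
$fsize = filesize($path);

curl_setopt($ch, CURLOPT_NOPROGRESS, false);
curl_setopt($ch, CURLOPT_PROGRESSFUNCTION,
    function ($ch, $dltotal, $dlnow, $uptotal, $upnow) use ($fsize) {
        if ($fsize > 0) { // guard against division by zero
            $percent = round($upnow / $fsize * 100);
            file_put_contents('tmp/curlupload.txt', 'PERC: ' . $percent . "%\n", FILE_APPEND);
        }
    });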
I have a repetitive task that I do daily: log in to a web portal, click a link that pops open a new window, and then click a button to download an Excel spreadsheet. It's a time-consuming task that I would like to automate.
I've been doing some research with PHP and cURL, and while it seems like it should be possible, I haven't found any good examples. Has anyone ever done something like this, or do you know of any tools that are better suited for it?
Are you familiar with the basics of HTTP requests? Like, do you know the difference between a POST and a GET request? If what you're doing amounts to nothing more than GET requests, then it's actually super simple and you don't need to use cURL at all. But if "clicking a button" means submitting a POST form, then you will need cURL.
One way to check this is by using a tool such as Live HTTP Headers and watching what requests happen when you click on your links/buttons. It's up to you to figure out which variables need to get passed along with each request and which URLs you need to use.
But assuming that there is at least one POST request, here's a basic script that will post data and get back whatever HTML is returned.
<?php
if ( $ch = curl_init() ) {
$data = 'field1=' . urlencode('somevalue');
$data .= '&field2[]=' . urlencode('someothervalue');
$url = 'http://www.website.com/path/to/post.asp';
$userAgent = 'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)';
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
$html = curl_exec($ch);
curl_close($ch);
} else {
$html = false;
}
// write code here to look through $html for
// the link to download your excel file
?>
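As a sketch of that last step (the regex is an assumption; adjust it to the real page's markup):

// look for the first link ending in .xls or .xlsx and fetch it with a plain GET
if ($html && preg_match('/href="([^"]+\.xlsx?)"/i', $html, $m)) {
    $ch = curl_init($m[1]);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_USERAGENT, $userAgent);
    $xls = curl_exec($ch);
    curl_close($ch);
    file_put_contents('report.xls', $xls);
}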
Try this:
$ch = curl_init();
$csrf_token = $this->getCSRFToken($ch); // gets the CSRF token from the website, if you need one
$ch = $this->signIn($ch, $csrf_token);  // your sign-in function; it must return the curl handle
curl_setopt($ch, CURLOPT_HTTPGET, true);
curl_setopt($ch, CURLOPT_TIMEOUT, 300); // in case the file is large
curl_setopt($ch, CURLOPT_URL, "https://your-URL/anything");
$return = curl_exec($ch);
// the important part
$destination = "files.xlsx";
if (file_exists($destination)) {
    unlink($destination);
}
$file = fopen($destination, "w+");
fputs($file, $return);
if (fclose($file)) {
    echo "downloaded";
}
curl_close($ch);
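If the file is really large, a variation worth considering is streaming straight to disk with CURLOPT_FILE instead of buffering the whole response in memory (a sketch, reusing the names above):

$fp = fopen($destination, "w");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, false); // must not be combined with CURLOPT_FILE
curl_setopt($ch, CURLOPT_FILE, $fp);             // write the body directly to the file
curl_exec($ch);
fclose($fp);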
I have a PHP script which downloads all of the content at a URL via cURL:
<?php
function get_curl_output($link)
{
$channel = curl_init();
curl_setopt($channel, CURLOPT_URL, $link);
curl_setopt($channel, CURLOPT_HEADER, 0);
curl_setopt($channel, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($channel, CURLOPT_CONNECTTIMEOUT, 10000);
curl_setopt($channel, CURLOPT_TIMEOUT, 10000);
curl_setopt($channel, CURLOPT_VERBOSE, true);
curl_setopt($channel, CURLOPT_USERAGENT, 'Mozilla/5.0 (compatible; Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2) Gecko/20070219');
curl_setopt($channel, CURLOPT_FOLLOWLOCATION, true);
$output = curl_exec($channel);
if(curl_errno($channel))
{
file_put_contents('curlerror.txt', curl_error($channel) . PHP_EOL, FILE_APPEND);
}
curl_close($channel);
return $output;
}
?>
<?php
function downloader($given_url) {
    // Download the content of every URL in $given_url.
    // If $given_url is an array with 10 URLs inside it, download all of them.
    foreach ((array) $given_url as $link) {
        $content_stored_here = get_curl_output($link);
        // ...and write that content to a file
    }
}
?>
Now everything goes fine as long as there is no connection loss or IP change. However, my connection randomly gets a new IP address after some hours, since I don't have a static IP address.
I use mod_php in Apache with the WinNT MPM thread worker.
Once I get the new IP address, my code stops working, but it throws no errors.
EDIT:
I wrote the same program in C++ (changing some function names and tweaking compiler and linker settings); the C++ version also stops in the middle of the program once I get a new IP address or lose the connection.
Any insights on this?
You don't need such huge timeouts; when there is no connection, cURL keeps trying to connect for up to 10000 seconds according to your code. Set it to something more reasonable, just 10 for example.
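Something like this (a sketch against the question's get_curl_output(); the retry count and delay are arbitrary choices, not part of the original code):

curl_setopt($channel, CURLOPT_CONNECTTIMEOUT, 10); // give up connecting after 10 s
curl_setopt($channel, CURLOPT_TIMEOUT, 60);        // abort the whole transfer after 60 s

// optionally retry, so a transfer that dies when the IP changes can recover
$output = false;
for ($attempt = 0; $attempt < 3 && $output === false; $attempt++) {
    $output = curl_exec($channel);
    if ($output === false) {
        sleep(5); // give the connection time to come back up
    }
}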
The problem is, I get some parts of the contents but not the user reviews. In Firebug I can see the review contents, but when I check the page source there is nothing inside the corresponding HTML tags; the tags aren't even the same. Here is my code:
<?php
//Headers
include('simple_html_dom.php');
function getPage($page, $redirect = 0, $cookie_file = '')
{
$ch = curl_init();
$headers = array("Content-type: application/json");
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, 0);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
if($redirect)
{
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
}
curl_setopt($ch, CURLOPT_URL, $page);
if($cookie_file != '') {
curl_setopt($ch, CURLOPT_COOKIEJAR, $cookie_file);
curl_setopt($ch, CURLOPT_COOKIEFILE, $cookie_file);
}
curl_setopt($ch, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.0.6) Gecko/20060728 Firefox/1.5.0.6');
$return = curl_exec($ch); //Mozilla/4.0 (compatible;)
curl_close($ch);
return $return;
}//EO Fn
//Source
$url = 'http://www.vitals.com/doctor/profile/1982660171/reviews/1982660171';
//Parsing ...
$contents = getPage($url, 1, 'cookies.txt');
$html = str_get_html($contents);
//Output
echo $html->outertext;
?>
Can anyone please help me figure out what I should do to get the whole page so that I can grab the reviews?
They're just stored as JSON in a <script> block towards the top of the page. Parse it out with RegEx or Simple HTML DOM and run it through json_decode.
var json = {"provider":{"id":"1982660171","display_name":"Stephen R Guy, MD","last_name":"Guy","first_name":"Stephen","middle_name":"Russell","master_name":"Stephen_Guy","degree_types":"MD","familiar_name":"Stephen","years_experience":"27","birth_year":"1956","birth_month":"5","birth_day":"23","gender":"M","is_limited":"false","url_deep":"http:\/\/www.vitals.com\/doctor\/profile\/1982660171\/Stephen_Guy","url_public":"http:\/\/www.vitals.com\/doctors\/Dr_Stephen_Guy.html","status_code":"A","client_ids":"1","quality_indicator_set":[{"type":"quality-indicator\/consumer-feedback","count":"2","suboverall_set":[{"name_short":"Promptness","overall":"3"},{"name_short":"Courteous Staff","overall":"4"},{"name_short":"Bedside Manner","overall":"4"},{"name_short":"Spends Time with Me","overall":"4"},{"name_short":"Follow Up","overall":"4"}],"name":"Consumer Reviews","overall":"4.0","measure_set":[{"feedback_response_id":"1756185","input_source_ids":"{0}","date":"1301544000","value":"4","scale":{"best":"1","worst":"4"},"review":{"type":"review\/consumer","comment":"I will never birth with another dr. Granted that's not saying much as I don't like dr's but I actually find him as valuable as the midwives who I adore. I liked Horlacher but when Kitty left I followed the midwives and then followed again....Dr. Guy is GREAT. I honestly don't know who I'd rather support me at my birth; Margie and Lisa or Dr. Guy. ....I wonder if I can just get all of them.Guy's great. Know what you want. Tell him. Be strong and he'll support you.I give him 10 stars. Oh...my baby's 3 years old now. He's GREAT! ","date":"1301544000"},"sub_measure":[{"name":"Waiting time during a visit","name_short":"Promptness","value":"3","scale":{"best":"4","worst":"1"}},{"name":"Courtesy and professionalism of office staff ","name_short":"Courteous Staff","value":"4","scale":{"best":"4","worst":"1"}},{"name":"Bedside manner (caring)","name_short":"Bedside Manner","value":"4","scale":{"best":"4","worst":"1"}},{"name":"Spending enough time with me","name_short":"Spends Time with Me","value":"4","scale":{"best":"4","worst":"1"}},{"name":"Following up as needed after my visit","name_short":"Follow Up","value":"4","scale":{"best":"4","worst":"1"}}]},{"feedback_response_id":"420734","input_source_ids":"{76}","link":"http:\/\/local.yahoo.com\/info-15826842-guy-stephen-r-md-university-women-s-health-center-dayton","date":"1142398800","value":"4","scale":{"best":"1","worst":"4"},"review":{"type":"review\/consumer","comment":"Excellent Doctor: I really like going to this office. They are truely down to earth people and talk my \"non-medical\" language. I have been using thier office since 1997 and they have seen me through 2 premature pregnancies!","date":"1142398800"}}],"wait_time":"50"}]}};
But again, make sure you have permissions to do this...
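For example, a sketch using the getPage() function above (the regex and the array paths are assumptions based on the snippet; verify them against the live page):

$contents = getPage($url, 1, 'cookies.txt');
if (preg_match('/var\s+json\s*=\s*(\{.*\});/s', $contents, $m)) {
    $data = json_decode($m[1], true);
    foreach ($data['provider']['quality_indicator_set'] as $set) {
        if (!isset($set['measure_set'])) continue;
        foreach ($set['measure_set'] as $measure) {
            if (isset($measure['review']['comment'])) {
                echo $measure['review']['comment'] . "\n\n";
            }
        }
    }
}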
define('COOKIE', './cookie.txt');
define('MYURL', 'https://register.pandi.or.id/main');
function getUrl($url, $method='', $vars='', $open=false) {
$agents = 'Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.204 Safari/534.16';
$header_array = array(
"Via: 1.1 register.pandi.or.id",
"Keep-Alive: timeout=15,max=100",
);
static $cookie = false;
if (!$cookie) {
$cookie = session_name() . '=' . time();
}
$referer = 'https://register.pandi.or.id/main';
$ch = curl_init();
if ($method == 'post') {
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "$vars");
}
curl_setopt($ch, CURLOPT_HEADER, 1);
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_HTTPHEADER, $header_array);
curl_setopt($ch, CURLOPT_USERAGENT, $agents);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($ch, CURLOPT_MAXREDIRS, 10);
curl_setopt($ch, CURLOPT_REFERER, $referer);
curl_setopt($ch, CURLOPT_COOKIE, $cookie);
curl_setopt($ch, CURLOPT_COOKIEJAR, COOKIE);
curl_setopt($ch, CURLOPT_COOKIEFILE, COOKIE);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
$buffer = curl_exec($ch);
if (curl_errno($ch)) {
echo "error " . curl_error($ch);
die;
}
curl_close($ch);
return $buffer;
}
function save_captcha() { // no parameter needed: the function creates its own curl handle below
$agents = 'Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.16 (KHTML, like Gecko) Chrome/10.0.648.204 Safari/534.16';
$url = "https://register.pandi.or.id/jcaptcha";
static $cookie = false;
if (!$cookie) {
$cookie = session_name() . '=' . time();
}
$ch = curl_init(); // Initialize a CURL session.
curl_setopt($ch, CURLOPT_URL, $url); // Pass URL as parameter.
curl_setopt($ch, CURLOPT_USERAGENT, $agents);
curl_setopt($ch, CURLOPT_COOKIESESSION, true);
curl_setopt($ch, CURLOPT_COOKIE, $cookie);
curl_setopt($ch, CURLOPT_COOKIEJAR, COOKIE);
curl_setopt($ch, CURLOPT_COOKIEFILE, COOKIE);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); // Return stream contents.
curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1); // We'll be returning this
$data = curl_exec($ch); // // Grab the jpg and save the contents in the
curl_close($ch); // close curl resource, and free up system resources.
$captcha_tmpfile = './captcha/captcha-' . rand(1000, 10000) . '.jpg';
$fp = fopen($captcha_tmpfile, 'w');
fwrite($fp, $data);
fclose($fp);
return $captcha_tmpfile;
}
if (isset($_POST['captcha'])) {
$id = "yudohartono";
$pw = "mypassword";
$postfields = "navigation=authenticate&login-type=registrant&username=" . $id . "&password=" . $pw . "&captcha_response=" . $_POST['captcha'] . "press=login";
$url = "https://register.pandi.or.id/main";
$result = getUrl($url, 'post', $postfields);
echo $result;
} else {
$open = getUrl('https://register.pandi.or.id/main', '', '', true);
$captcha = save_captcha();
$fp = fopen("./cookie12.txt", 'r');
$a = fread($fp, filesize("./cookie12.txt"));
fclose($fp);
echo "<form action='' method='POST'>
<img src='" . $captcha . "' />
<input type='text' name='captcha' value=''>
<input type='submit' value='proses'>
</form>";
if (!is_readable('cookie.txt') && !is_writable('cookie.txt')) {
echo "cookie fail to read";
chmod('../pandi/', '777');
}
}
This is cookie.txt:
# Netscape HTTP Cookie File
# http://curl.haxx.se/rfc/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.
register.pandi.or.id FALSE / FALSE 0 JSESSIONID 05CA8241C5B76F70F364CA244E4D1DF4
After I submit the form, it just displays:
HTTP/1.1 200 OK Date: Wed, 27 Apr 2011 07:38:08 GMT Server: Apache-Coyote/1.1 X-Powered-By: Servlet 2.4; Tomcat-5.0.28/JBoss-4.0.0 (build: CVSTag=JBoss_4_0_0 date=200409200418) Content-Length: 0 Via: 1.1 register.pandi.or.id Content-Type: text/plain X-Pad: avoid browser bug
Otherwise, I get the error "Captcha invalid".
Logging in to PANDI always fails. What is wrong with my script?
I don't want to break the captcha; I want to display it and have the user type it in on my web page, so users can register dotID domains from my site automatically.
A captcha is intended to differentiate between humans and robots (programs). It seems like you are trying to log in with a program, so the captcha is doing its job :).
I don't see a legal way around it.
It happens because you took your captcha image from the first getUrl (i.e., the first curl_exec) and solved that captcha, but to submit it you called getUrl again (i.e., another curl_exec), which loads a new page with a new captcha. So you are submitting the old captcha answer against a new captcha. I had the same problem and resolved it this way.
The captcha is a dynamic image created by the server each time you hit the page. It keeps changing on every load, so you must extract the captcha from the same page load that you then submit for login.
Using a headless browsing solution this is possible, e.g. zombie.js on Node. It may also be possible to extract the image from the captcha and, using image recognition, "read" it and convert it to text, which is then posted with the form.
As of today, the only surefire method to "trick" a captcha is to use headless browsing.
Yes, Andro Selva is right. The second request returns a new captcha: the first load comes from the getUrl function and the second from the save_captcha function, so these are two different images.
It must do something like this:
Download the captcha image before closing the curl session and before the POST, and make the script wait until the user provides the captcha answer (I would use preg_match to extract it). It will require some JavaScript as well.
If the captcha image is generated by JavaScript, you need to execute that JavaScript with the same cookie or token. In that situation, the easier solution is to record the headers with, e.g., the LiveHTTPHeaders add-on for Mozilla Firefox.
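A sketch of the flow, reusing the COOKIE constant and URLs from the question (the point being that the captcha fetch and the later login POST share one cookie jar, with no extra page load in between):

$ch = curl_init("https://register.pandi.or.id/jcaptcha");
curl_setopt($ch, CURLOPT_COOKIEJAR, COOKIE);   // persist the JSESSIONID...
curl_setopt($ch, CURLOPT_COOKIEFILE, COOKIE);  // ...and reuse it on the login POST
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
file_put_contents('./captcha/captcha.jpg', curl_exec($ch));
curl_close($ch);
// show ./captcha/captcha.jpg in the form; when it comes back, send the login
// POST with the same COOKIE file so the server sees the same session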
I don't know how to do it in PHP; you have to fetch the captcha and find a way to solve it. There are plenty of algorithms that can do it for you, but if you want to use Java, I already adapted the source code from this link to solve captchas, and it works very well against a lot of captcha systems.
So you could try to implement your own captcha solver (which will take a lot of time), try to find an existing implementation for PHP, or, IMHO, the best option: use the JDownloader code base.