I have a script that extracts people's names from an external web page by passing an ID as a parameter.
Note: the information provided by this external website is publicly accessible; anyone can look up this data.
This is the code that I created:
function names($ids)
{
    $url = 'https://www.exampledomain.com/es/query_data_example?name=&id=' . $ids;
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_HTTPHEADER, array("Accept-Language: es-es,es"));
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
    $html  = curl_exec($ch);
    $error = curl_error($ch);
    curl_close($ch);

    preg_match_all('/<tr class="odd"><td><a href="(.*?)">/', $html, $matches);

    $result = "";
    // Note: the closing parenthesis must wrap $matches[1], not the comparison,
    // i.e. count($matches[1]) == 1, not count($matches[1] == 1).
    if (count($matches[1]) == 1) {
        $result = $matches[1][0];
        $result = str_replace('/es/person/', '', $result);
        $result = substr($result, 0, -12);
        $result = str_replace('-', ' ', $result);
        $result = ucwords($result);
    }
    return $result;
}
Note 2: in the variable $url I have placed an example URL; it is not the real URL. It is just shaped exactly like the original URL that I use in my code.
I call the function and show the result with an echo:
$info = names('8476756848');
echo $info;
and everything is perfect: I extract the name of the person that the ID belongs to.
The problem arises when I call that function inside a for (or while) loop, since I have an array with many IDs:
$myids = ["2809475460", "2332318975", "2587100534", "2574144252", "2611639906", "2815870980", "0924497817", "2883119946", "2376743158", "2387362041", "2804754226", "2332833975", "258971534", "2574165252", "2619016306", "2887098054", "2449781007", "2008819946", "2763767158", "2399362041", "2832047546", "2331228975", "2965871534", "2574501252", "2809475460", "2332318975", "2587100534", "2574144252", "2611639906", "2815870980", "0924497817", "2883119946", "2376743158", "2387362041", "2804754226", "2332833975", "258971534", "2574165252", "2619016306", "2887098054", "2449781007", "2008819946", "2763767158", "2399362041", "2832047546", "2331228975", "2965871534", "2574501252", "2809475460", "2332318975", "2587100534", "2574144252", "2611639906", "2815870980", "0924497817", "2883119946", "2376743158", "2387362041", "2804754226", "2332833975", "258971534", "2574165252", "2619016306", "2887098054", "2449781007", "2008819946", "2763767158", "2399362041", "2832047546", "2331228975", "2965871534", "2574501252"];
//Note: These data are for example only, they are not the real ids.
$size = count($myids);
for ($i = 0; $i < $size; $i++) {
    //sleep(20);
    $data = names($myids[$i]);
    echo "ID IS: " . $myids[$i] . "<br> THE NAME IS: " . $data . "<br><br>";
}
The result is something like this:
ID IS: 258971534
THE NAME IS:
ID IS: 2883119946
THE NAME IS:
and so on. It shows me the IDs, but the names are not being extracted by the names function.
It prints the whole list of IDs but no names at all, as if the names function were not working.
If I put only 3 IDs in the array and run the for loop again, it gives me the names of those 3 IDs, because they are few. But when the array contains many IDs, the function no longer returns any names. It is as if the multiple requests are not accepted, or are being limited; I do not know.
I have placed set_time_limit(0) at the beginning of my PHP file to avoid the 30-second maximum-execution-time error, because I thought that was why the function was not working, but that did not help. I also tried placing a sleep(20) inside the loop, before calling the names function, in case I was making too many requests too quickly to that web page, but that did not work either.
This script is already in production on a server that I have hired, and this problem prevents it from working properly.
Note: there may be arrays with more than 2000 IDs. I am even preparing a script that will read .txt and .csv files containing more than 10000 IDs, extract each ID, call the names function, and then save the IDs and names into a MySQL database table.
Does anyone know why no names are extracted when there are many IDs, while the names function does work when there are only a few, for example 1 or 10?
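One way to make failures at scale visible is to log curl_error() per request and retry with a backoff, reusing a single handle instead of creating one per ID. The sketch below assumes the example URL from the question; fetchWithRetry is a hypothetical helper, not part of the original code.

```php
<?php
// Hypothetical helper: fetch one URL, retrying on failure with a linear backoff.
function fetchWithRetry($ch, string $url, int $maxRetries = 3): ?string
{
    for ($attempt = 1; $attempt <= $maxRetries; $attempt++) {
        curl_setopt($ch, CURLOPT_URL, $url);
        $html = curl_exec($ch);
        if ($html !== false) {
            return $html;
        }
        // Log the failure so timeouts or rate limiting become visible.
        error_log("Attempt $attempt for $url failed: " . curl_error($ch));
        sleep($attempt * 2);
    }
    return null;
}

$myids = ["2809475460", "2332318975"]; // sample subset of the real array

// Reuse one handle for every request instead of init/close per ID.
$ch = curl_init();
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_FOLLOWLOCATION => true,
    CURLOPT_TIMEOUT        => 10,
]);

foreach ($myids as $id) {
    $url  = 'https://www.exampledomain.com/es/query_data_example?name=&id=' . $id;
    $html = fetchWithRetry($ch, $url);
    if ($html === null) {
        echo "ID $id: request failed after retries<br>";
        continue; // skip instead of silently returning an empty name
    }
    // ... run preg_match_all() and the name cleanup from names() here ...
}
curl_close($ch);
```

Logging the error string is the key step: it tells you whether the remote site is timing out, returning an HTTP error, or cutting the connection once you pass a certain request volume.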
Okay, so here goes: I am using a REST API called strichliste.
I am creating a user credit payment system.
I am trying to grab a user's balance by username. The problem is that
with my REST API I can only get the balance via the user ID.
I have created a bit of PHP that grabs all the current users with their corresponding IDs and balances, using this below:
function getbal(){
    // Get cURL resource
    $curl = curl_init();
    // Set some options
    curl_setopt_array($curl, array(
        CURLOPT_RETURNTRANSFER => 1,
        CURLOPT_URL => 'https://example.io:8081/user/'
    ));
    // Send the request & save response to $resp
    $resp = curl_exec($curl);
    // Close request to clear up some resources
    curl_close($curl);
    print_r($resp);
}
This is the resulting response I get after using this in my main PHP script:
<? getbal(); ?>
Result:
{
"overallCount":3,
"limit":null,
"offset":null,"entries":[
{"id":1,
"name":"admin",
"balance":0,
"lastTransaction":null
},
{"id":2,
"name":"pghost",
"balance":0,
"lastTransaction":null
},
{"id":3,
"name":"sanctum",
"balance":0,
"lastTransaction":null
}
]
}
As you can see, there are currently only 3 users, but this will grow every day, so the script needs to adapt to a growing number of users.
Inside my PHP script I have a variable with the currently logged-in user, for example:
$user = "sanctum";
I want a PHP script that will use the output from getbal() and output only the entry for the given user, in this case sanctum.
I want it to output the json_decode'd entry for that specific user:
{"id":3,"name":"sanctum","balance":0,"lastTransaction":null}
Can anyone help?
$user = "sanctum";
$userlist = getbal(); // note: getbal() must return the decoded response rather than print it
function findUser($u, $l){
    if(!empty($l['entries'])){
        foreach($l['entries'] as $key => $val){
            // Compare against the parameter $u; $user is out of scope inside the function.
            if($val['name'] == $u){
                return $val;
            }
        }
    }
    return null;
}
This way, once you have the list and the user, you can just invoke findUser() by plugging in the user list and the user.
$userData = findUser($user, $userlist);
However, I would suggest finding a way to get the server to return only the user you are looking for, instead of fetching the whole list and then filtering by username. But that's another discussion for another time.
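Putting the pieces together, here is a minimal end-to-end sketch. It assumes the endpoint from the question, and changes getbal() to return the decoded array instead of printing the raw JSON, since findUser() needs an array to iterate over:

```php
<?php
// Fetch the user list and decode the JSON body into an associative array.
function getbal(): ?array
{
    $curl = curl_init();
    curl_setopt_array($curl, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 5,
        CURLOPT_URL            => 'https://example.io:8081/user/',
    ]);
    $resp = curl_exec($curl);
    curl_close($curl);
    return $resp === false ? null : json_decode($resp, true);
}

// Return the entry whose name matches $u, or null if absent.
function findUser(string $u, ?array $l): ?array
{
    foreach ($l['entries'] ?? [] as $val) {
        if ($val['name'] === $u) {
            return $val;
        }
    }
    return null;
}

$user     = "sanctum";
$userData = findUser($user, getbal());
if ($userData !== null) {
    echo json_encode($userData); // e.g. {"id":3,"name":"sanctum","balance":0,"lastTransaction":null}
}
```

Because findUser() tolerates a null or empty list, a failed request degrades to "user not found" instead of a PHP warning.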
I am working with the Ecwid API, and am now moving toward updating my orders from our fulfillment site with tracking info and shipping status.
Fulfillment Operations is going to export an XML file of the order updates.
I first created a basic script to update a single order, and this works fine:
// Post Tracking number and change Status to shipped
// trackingNumber : ""
// fulfillmentStatus : "SHIPPED"
$storeID = "";
$myToken = "";
$data = array("trackingNumber" => "9405503699300250719362", "fulfillmentStatus" => "SHIPPED", "orderNumber" => "7074");
$data_string = json_encode($data);
$url = "https://app.ecwid.com/api/v3/".urlencode($storeID)."/orders/".$data['orderNumber']."?token=".$myToken;
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT");
curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json','Content-Length: ' . strlen($data_string)));
$response = curl_exec($ch);
curl_close($ch);
I've also created the script to pull in the XML file and convert it to JSON, to 'PUT' the data over to the shopping cart:
<?php
// The file data.xml contains an XML document with a root element
// and at least an element /[root]/title.
if (file_exists('data.xml')) {
$xml = simplexml_load_file('data.xml');
print_r($xml);
} else {
exit('Failed to open data.xml.');
}
$data_string = json_encode($xml);
echo '<br><br>';
echo "<pre>";
print_r($data_string);
?>
Now this is where I am lost: putting the two parts together so that the script loops through the XML file (as JSON content) with multiple orderNumber(s) and updates the trackingNumber and fulfillmentStatus of each order.
Vitaly from Ecwid team here.
I see that you want to update orders in your Ecwid store via API from an XML file.
So the whole process is:
get details of XML file
parse data in it, find out the total number of orders there
form a loop for each order in the file
make a request to Ecwid API to update order in each loop
In your second code snippet, I see print_r($data_string); - what does it print to the screen?
I imagine the next steps would be:
Manage to correctly find order details in the XML file (order
number, tracking number) while in the loop
Make each loop update specific order in the store
For the step 1, I suggest saving data from XML file to a convenient format for you in PHP, e.g. object or array.
For example, if it was an array, it will be something like this:
Array = [recordArray 1, recordArray 2, recordArray 3]
recordArray = [ orderNumber, trackingNumber ]
For the step 2: each loop iteration will go through a recordArray in the Array and get the necessary orderNumber and trackingNumber for the request.
Then the request will use this data to update an order in your Ecwid store, just as you showed in the code snippet above. However, the values 9405503699300250719362 and 7074 will be dynamic and different on each iteration.
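The steps above can be sketched as follows. This is a rough outline, not a definitive implementation: it assumes the XML file has elements named order, orderNumber and trackingNumber, which you should adjust to match your actual fulfillment export.

```php
<?php
$storeID = "";  // your Ecwid store ID
$myToken = "";  // your Ecwid API token

$xml = simplexml_load_file('data.xml');
if ($xml === false) {
    exit('Failed to open data.xml.');
}

// Assumed structure (adjust to your file):
// <orders><order><orderNumber>7074</orderNumber><trackingNumber>9405...</trackingNumber></order>...</orders>
foreach ($xml->order as $order) {
    $orderNumber    = (string) $order->orderNumber;
    $trackingNumber = (string) $order->trackingNumber;

    // Build the JSON payload for this specific order.
    $data_string = json_encode([
        "trackingNumber"    => $trackingNumber,
        "fulfillmentStatus" => "SHIPPED",
    ]);

    $url = "https://app.ecwid.com/api/v3/" . urlencode($storeID)
         . "/orders/" . urlencode($orderNumber) . "?token=" . $myToken;

    // Same PUT request as in the first snippet, once per order.
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_setopt($ch, CURLOPT_POSTFIELDS, $data_string);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_HTTPHEADER, [
        'Content-Type: application/json',
        'Content-Length: ' . strlen($data_string),
    ]);
    $response = curl_exec($ch);
    curl_close($ch);
}
```

Checking $response (or curl_getinfo() for the HTTP status) inside the loop is worthwhile, so one failed update does not pass silently.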
If you have any questions, please feel free to contact me: http://developers.ecwid.com/contact
Thank you.
I have been trying to fetch some data from the wikia website using the simple_html_dom library for PHP. Basically, I use the wikia API to render the page as HTML and extract data from there. After extracting, I pump the data into a MySQL database. My problem is that when I pull 300 records, I usually get stuck around record 93, with file_get_html returning null, which causes my find() call to fail. I am not sure why it stops at 93 records, but I have tried various solutions such as:
ini_set('default_socket_timeout', 120);
set_time_limit(120);
Basically, I have to access a wikia page 300 times to get those 300 records, but I mostly only manage about 93 before file_get_html returns null. Any idea how I can tackle this issue?
I have tested with cURL as well and have the same issue.
function test($url){
    $ch = curl_init();
    $timeout = 5;
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
    $result = curl_exec($ch);
    curl_close($ch);
    return $result;
}
$baseurl = 'http://xxxx.wikia.com/index.php?';
foreach($resultset_wiki as $name){
    // Create DOM from URL or file
    $options = array("action" => "render", "title" => $name['name']);
    $baseurl .= http_build_query($options, '', '&');
    $html = file_get_html($baseurl);
    if($html === FALSE) {
        echo "issue here";
    }
    // this code is for cURL, but commented out while testing with file_get_html instead
    $a = test($baseurl);
    $html = new simple_html_dom();
    $html->load($a);
    // find div stuff and mysql data pumping here
}
$resultset_wiki is an array with the list of titles to fetch from wikia; this data set is also loaded from the DB before performing the search.
In practice I hit this type of error:
Call to a member function find() on a non-object in
I answered my own issue: it turned out to be the URL I was building. I have switched to cURL with POST, posting the action and title parameters instead.
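For reference, the likely culprit in the original loop is the line $baseurl .= http_build_query(...): it appends a new query string to $baseurl on every iteration, so the URL keeps growing until requests start failing. A minimal sketch of building a fresh URL each time, or POSTing the parameters as the self-answer describes:

```php
<?php
$baseurl = 'http://xxxx.wikia.com/index.php';

foreach ($resultset_wiki as $name) {
    $options = ["action" => "render", "title" => $name['name']];

    // Option 1: build a fresh URL each iteration instead of appending to $baseurl.
    $url = $baseurl . '?' . http_build_query($options, '', '&');

    // Option 2 (as in the self-answer): POST the parameters instead.
    $ch = curl_init($baseurl);
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($options, '', '&'));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $a = curl_exec($ch);
    curl_close($ch);

    if ($a === false) {
        continue; // skip failed fetches instead of calling find() on a non-object
    }
    $html = new simple_html_dom();
    $html->load($a);
    // ... find() and MySQL inserts here ...
}
```

Guarding the fetch result before calling load() avoids the "Call to a member function find() on a non-object" fatal error entirely.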
The following code is supposed to search for a term on Twitter, loop through all the result pages, and return one big array with the results from each page appended at every step.
foreach($search_terms as $term){
    //populate the obj array by going through all pages
    //set up connection
    $ch = curl_init();
    // go through all pages and save in an object array
    for($j = 1; $j < 16; $j++){
        $url = 'http://search.twitter.com/search.json?q=' . $term . '&rpp=100&page=' . $j;
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $var[$j] = curl_exec($ch);
        curl_close($ch);
        $obj = array_merge((array)$obj, (array)json_decode($var[$j], true));
    }
}
It doesn't quite work, though, and I am getting these errors:
curl_setopt(): 3 is not a valid cURL handle resource
curl_exec(): 3 is not a valid cURL handle resource
curl_close(): 3 is not a valid cURL handle resource
...and this is repeated all the way from 3 to 7:
curl_setopt(): 7 is not a valid cURL handle resource
curl_exec(): 7 is not a valid cURL handle resource
curl_close(): 7 is not a valid cURL handle resource
//set up connection
$ch = curl_init();
// go through all pages and save in an object array
for($j=1; $j<16;$j++){
You need the call to curl_init() inside your loop, since you close the handle with curl_close() at the end of each iteration; after the first iteration, $ch no longer refers to a valid cURL handle.
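A corrected sketch of the loop follows (alternatively, you could keep one handle and move curl_close() to after both loops). The added is_array() check also guards against pages that return no decodable JSON:

```php
<?php
$search_terms = ["php", "curl"]; // example terms
$obj = [];

foreach ($search_terms as $term) {
    for ($j = 1; $j < 16; $j++) {
        // Create a fresh handle each iteration, since it is closed below.
        $ch  = curl_init();
        $url = 'http://search.twitter.com/search.json?q=' . urlencode($term)
             . '&rpp=100&page=' . $j;
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
        $var[$j] = curl_exec($ch);
        curl_close($ch);

        // Only merge pages that actually decoded to an array.
        $decoded = json_decode($var[$j], true);
        if (is_array($decoded)) {
            $obj = array_merge($obj, $decoded);
        }
    }
}
```

Initializing $obj = [] up front also silences the undefined-variable notice that array_merge((array)$obj, ...) was papering over in the original.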