I want to find every possible path in a non-oriented (undirected) but weighted graph.
I have done some research on Dijkstra, but I'm only able to find the shortest path; even by looking at the code, I'm not able to find every path (I think it's because of the method Dijkstra uses to avoid searching every path).
I know the starting point, but not the ending point. All I want is every path whose total weight is no more than X (and, of course, ALL the paths < X).
Let's say I have this array to define my edges (in PHP):
$aRoutes = array(
array(0,0,0),
array(0,1,10),
array(0,2,20),
array(0,3,30),
array(0,4,100),
array(1,1,0),
array(1,2,50),
array(1,3,15),
array(1,4,10),
array(2,2,0), //2->3 is below in the form 3->2
array(2,4,10),
array(3,3,0),
array(3,2,20),//but it's not oriented !
array(3,4,60),
array(4,4,0)
);
As you can see, every node has 4 reachable neighbours.
Now, starting from node 0, I want to know every path whose total weight is no more than 50 (and if I can also get the "longest" one, that would be the best code! :) )
In our case the possible paths are:
0->1 = 10
0->1->3 = 25
0->1->3->2 = 45
0->1->4 = 20
0->1->4->2 = 30
0->1->4->2->3 = 50
0->2 = 20
0->2->3 = 40
0->2->4 = 30
0->3 = 30
0->3->1 = 45
0->3->2 = 50
As you can see, the "best" one is 0->1->4->2->3 = 50
(pretty easy to spot, as it's the longest one).
I'm pretty bad at recursive coding, which is why I'm asking for your help (I barely understand what is done in every Dijkstra implementation I have found).
Answers can be coded in PHP, Java, VB, C, or C++; I understand all of them and can translate the code once I have it (with a slight preference for PHP :p ).
Thanks in advance for your help.
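Not from the original post: a minimal sketch of the brute-force enumeration, in Python rather than the asker's preferred PHP (it translates directly). Since every path must be simple and the running total only grows, a plain recursive depth-first search that prunes as soon as the limit would be exceeded is enough; no Dijkstra machinery is needed.

```python
def all_paths_within(edges, start, limit):
    # Build an adjacency map; the graph is undirected, so add both directions.
    adj = {}
    for a, b, w in edges:
        if a == b:
            continue  # skip the zero-weight self-loop rows
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))

    results = []

    def dfs(node, path, total):
        for nxt, w in adj.get(node, []):
            # Keep paths simple and prune once the weight limit is exceeded.
            if nxt in path or total + w > limit:
                continue
            results.append((path + [nxt], total + w))
            dfs(nxt, path + [nxt], total + w)

    dfs(start, [start], 0)
    return results

# The question's $aRoutes, as (from, to, weight) tuples:
routes = [
    (0, 0, 0), (0, 1, 10), (0, 2, 20), (0, 3, 30), (0, 4, 100),
    (1, 1, 0), (1, 2, 50), (1, 3, 15), (1, 4, 10),
    (2, 2, 0), (2, 4, 10),
    (3, 3, 0), (3, 2, 20), (3, 4, 60),
    (4, 4, 0),
]
paths = all_paths_within(routes, start=0, limit=50)
```

Run against the question's data with a limit of 50, this also surfaces 0->2->4->1 = 40, which the hand-made list above misses; picking the "longest" path is then just a `max` over the returned totals.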
I'm setting up a web app, where users can choose the starting point and the number of characters to read from a text file containing 1 billion digits of pi.
I have looked, but I can't find any similar problems. Because the starting position isn't known in advance, I can't use other solutions.
Here is the function written in Python:
def pi(left: int, right: int):
    f.seek(left + 1)
    return f.read(right)
For example, entering 700 as the starting point and 9 as the number of characters should return "Pi(700,9): 542019956".
Use fseek to move the file pointer to the position you need, and fread to read the number of characters you need - just like your Python sample code.
Actually, this capability is built in to file_get_contents.
$substr = file_get_contents('pi_file.txt', false, null, 700, 9);
A handy feature of that function that I learned about just now after using it for the past 7 years.
I have this very simple PHP call to Alpha Vantage API to fill a table (or list) with NASDAQ stock prices:
<?php
function get_price($commodity = "")
{
    $url = 'https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol=' . $commodity . '&outputsize=full&apikey=myKey';
    $obj = json_decode(file_get_contents($url), true);
    $date = $obj['Meta Data']['3. Last Refreshed'];
    $result = $obj['Time Series (Daily)']['2018-03-23']['4. close'];
    $rd_result = round($result, 2);
    echo $rd_result; // echo the rounded price
}
?>
<?php get_price("XOM");
get_price("AAPL");
get_price("MSFT");
get_price("CVX");
get_price("CAT");
get_price("BA");
?>
And it works, but it is just so freakishly slow. It can take over 30 seconds to load, while the JSON file from Alpha Vantage loads in a fraction of a second.
Does anyone know where I'm going wrong?
This is what I did when the API took time to reply; my solution is written in C#, but the logic would be the same.
string[] AlphaVantageApiKey = { "RK*********", "B2***********", "4FD*********QN", "7S3Z*********FRX", "U************I3" };
int ApiKeyValue = 0;
foreach (var stock in listOfStocks)
{
    DataTable dtResult = DataRetrival.GetIntradayStockFeedForSelectedStockAs(stock.Symbol.Trim().ToUpper(), ApiKeyValue);
    ApiKeyValue = (ApiKeyValue == 4) ? 0 : ApiKeyValue + 1;
}
I use 5 to 6 different API keys when querying data and loop through them, one per call, thereby reducing the load on any one particular key.
I observed that this improved my performance a lot: it takes me less than 1 minute to get intraday data for 50 stocks.
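The same round-robin idea can be sketched in Python; `fetch_quote` here is a hypothetical stand-in for the actual Alpha Vantage request, and the keys are placeholders:

```python
from itertools import cycle

def fetch_quote(symbol, api_key):
    # Hypothetical stand-in for the real HTTP call to the API.
    return f"{symbol} via {api_key}"

api_keys = ["KEY_A", "KEY_B", "KEY_C"]  # placeholder keys
key_pool = cycle(api_keys)              # endless round-robin over the pool

def fetch_all(symbols):
    # Pair each request with the next key in the rotation,
    # spreading the per-key rate limit across the whole pool.
    return [fetch_quote(sym, next(key_pool)) for sym in symbols]
```

`itertools.cycle` replaces the manual `(ApiKeyValue == 4) ? 0 : ApiKeyValue + 1` counter from the C# version.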
Another way you can improve your performance is to use
outputsize=compact
compact returns only the latest 100 data points in the time series.
UPDATE: Batch Stock Quotes
You might want to consider using this type of query as well. Multiple stock quotes all in one call.
Also, using the full output size grabs data from the past 20 years, where applicable. Take that out of your query and let the API use its condensed default output.
EDIT: As noted above, you should adjust your query accordingly. But it could also be an issue with your server. I tested this for a use case I am working on, and it takes me a few seconds to get the data, albeit I am only pulling one stock symbol per page at a time.
Try increasing your memory limit if things are too slow for your liking.
<?php
ini_set('memory_limit','500M'); // or your desired limit
?>
Also, if you have shared hosting, that might be the problem. However, I do not know enough about your server to answer that fully.
I want to take a File.open('somefile', 'w+'), have it read a file one line of text at a time, and visually write it slowly into another file. I'm asking because I can find nothing in code that already does this, nor anything that controls how fast a program writes. I know this can be simulated in a program such as Adobe After Effects (so long as you provide a cursor after each character and the effect doesn't happen too quickly), but I've got 4,000 lines of code to iterate over and can't afford to do this manually. The effect can also be achieved with a Microsoft macro, but that requires the text to be entered into the macro manually, with no option to copy and paste.
(Solutions preferred in Python, Ruby, or PHP.)
If I understood properly what you are trying to achieve, here you go:
input = File.read('readfrom.txt') # note: File.read's second argument is a length, not a mode
File.open('writeto.txt', 'w+') do |f|
  input.chars.each do |c|
    f.print(c) # print one character
    f.flush    # flush the stream
    sleep 1    # pause before the next character
  end
end
This is one quick and dirty way of doing it in Python.
from time import sleep

mystring = 'My short text with a newline here\nand then ensuing text'
dt = 0.2  # 0.2 seconds between characters

with open('fn_out', 'w+') as f:
    for ch in mystring:
        f.write(ch)
        f.flush()
        sleep(dt)
f.flush() updates the file with the changes after each character. Note that the file is opened once, outside the loop; reopening it with 'w+' for every character would truncate it each time.
One could make this more elaborate by having a longer pause after each newline, or a variable timestep dt.
To watch the change, one has to repeatedly reload the file, as pointed out by @Tom Lord, so you could run something like this beforehand to watch it in the terminal:
watch -n 0.1 cat fn_out
After some serious testing, I have finally developed a piece of code that does the very thing I want. Tom Lord gave me some new search terms ("simulate typing"), which led me to win32ole and its SendKeys function. Here is code that will iterate over all the characters in a file and print them out exactly as they were saved, while simulating typing. I will see about making this into a gem for future use.
require 'win32ole'

wsh = WIN32OLE.new("WScript.Shell")
wsh.Run("Notepad.exe")
while not wsh.AppActivate("Notepad")
  sleep 1
end

def fileToArray(file)
  x = []
  File.foreach(file) do |line|
    x << line.split('')
  end
  return x.flatten!
end

tests = fileToArray("readfrom.txt")

x = 0
while x < tests.length # '<', not '<=': tests[tests.length] would be nil
  send = tests[x]
  wsh.SendKeys("#{send}")
  x += 1
  sleep 0.1
end
Is there any way to make a LinePlot's SetFillColor fill upwards instead of downwards in JPGraph (PHP)?
$p6 = new LinePlot($arrayLimSpecI);
$graph->Add($p6);
$p6->SetColor("#0B610B");
$p6->SetWeight(3);
$p6->SetStyle("solid");
//FillColor Fills the plotline from point 0 to point X
$p6->SetFillColor('red#0.65');
What I've tried:
$band = new PlotBand(HORIZONTAL,BAND_SOLID,$valueS,"max",'red');
$band->ShowFrame(false);
$graph->AddBand($band);
But this band doesn't seem to adjust to the graph's dynamic scale. If I set $valueS to show at Y-mark 10, it shows at mark 40 instead.
EDIT: Suggestions for another library that is currently supported and capable of doing this are also welcome.
I am trying to parse XML files to store data in a database. I have written the code below in PHP, and it runs successfully.
The problem is that it takes around 8 minutes to read a complete file (around 30 MB), and I have to parse around 100 files every hour.
So my current code is clearly of no use to me. Can anybody suggest a better solution? Or should I switch to another language?
What I've found online is that I could do it with Perl/Python, or with something called XSLT (which, frankly, I'm not so sure about).
$xml = new XMLReader();
$xml->open($file);
while ($xml->name === 'node1') {
    $node = new SimpleXMLElement($xml->readOuterXML());
    foreach ($node->node2 as $node2) {
        //READ
    }
    $xml->next('node1');
}
$xml->close();
Here's an example of a script I used to parse the WURFL XML database found here.
I used Python's ElementTree module and wrote out a JavaScript array, although you can easily modify the script to write a CSV instead (just change the final 3 lines).
import xml.etree.ElementTree as ET

tree = ET.parse('C:/Users/Me/Documents/wurfl.xml')
root = tree.getroot()

dicto = {}  # to store the data

for device in root.iter("device"):  # parse out the device objects
    dicto[device.get("id")] = [0, 0, 0, 0]  # set up a list to store the needed variables
    for child in device:  # iterate through each device
        if child.get("id") == "product_info":  # find the product_info id
            for grand in child:
                if grand.get("name") == "model_name":  # and the model_name id
                    dicto[device.get("id")][0] = grand.get("value")
                    dicto[device.get("id")][3] += 1
        elif child.get("id") == "display":  # and the display id
            for grand in child:
                if grand.get("name") == "physical_screen_height":
                    dicto[device.get("id")][1] = grand.get("value")
                    dicto[device.get("id")][3] += 1
                elif grand.get("name") == "physical_screen_width":
                    dicto[device.get("id")][2] = grand.get("value")
                    dicto[device.get("id")][3] += 1
    if not dicto[device.get("id")][3] == 3:  # make sure I had enough
        # otherwise it's an incomplete dataset
        del dicto[device.get("id")]

arrays = []
for key in dicto.keys():  # sort this all into another list
    arrays.append(key)
arrays.sort()  # and sort it alphabetically

with open('C:/Users/Me/Documents/wurfl1.js', 'w') as new:  # now to write it out
    for item in arrays:
        new.write('{\n id:"' + item + '",\n Product_Info:"' + dicto[item][0] + '",\n Height:"' + dicto[item][1] + '",\n Width:"' + dicto[item][2] + '"\n},\n')
Just counted this as I ran it again - took about 3 seconds.
In Perl you could use XML::Twig, which is designed to process huge XML files (bigger than can fit in memory)
#!/usr/bin/perl
use strict;
use warnings;

use XML::Twig;

my $file = shift @ARGV;

XML::Twig->new( twig_handlers => { 'node1/node2' => \&read_node } )
         ->parsefile( $file );

sub read_node
{   my( $twig, $node2 ) = @_;
    # your code; the whole node2 string is $node2->sprint
    $twig->purge; # if you want to reduce the memory footprint
}
You can find more info about XML::Twig at xmltwig.org
In the case of Python, I would recommend using lxml.
As you are having performance problems, I would recommend iterating through your XML and processing it part by part; this saves a lot of memory and is likely to be much faster.
On an old server I am reading a 10 MB XML file within 3 seconds; your situation might be different.
About iterating with lxml: http://lxml.de/tutorial.html#tree-iteration
Review this line of code:
$node = new SimpleXMLElement($xml->readOuterXML());
The documentation for readOuterXML has a comment that it sometimes attempts to resolve namespaces, etc. In any case, I would suspect a big performance problem here.
Consider using readInnerXML() instead, if you can.