I'm trying to nest material with the least drop or waste.
Table A

    Qty  Type  Description  Length
    2    W     16x19        16'
    3    W     16x19        12'
    5    W     16x19        5'
    2    W     5x9          3'
Table B

    Type  Description  StockLength
    W     16x19        20'
    W     16x19        25'
    W     16x19        40'
    W     5x9          20'
I've looked all over, researching greedy algorithms, bin packing, knapsack, 1D-CSP, branch and bound, brute force, and others. I'm pretty sure it is a cutting stock problem. I just need help coming up with the function(s) to run this. I don't have just one stock length but multiple, and a user may enter his own inventory of less common lengths. Any help figuring out a function or algorithm to use in PHP to come up with the optimized cutting pattern and the stock lengths needed with the least waste would be greatly appreciated.
Thanks
If your question is "gimme the code", I am afraid that you have not given enough information to implement a good solution. If you read the whole of this answer, you will see why.
If your question is "gimme the algorithm", I am afraid you are looking for an answer in the wrong place. This is a technology-oriented site, not an algorithms-oriented one. Even though we programmers do of course understand algorithms (e.g., why it is inefficient to pass the same string to strlen in every iteration of a loop, or why bubble sort is not okay except for very short lists), most questions here are like "how do I use API X using language/framework Y?".
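To spell out that strlen point with a trivial PHP illustration:

    // Inefficient: strlen() is re-evaluated on every iteration.
    for ($i = 0; $i < strlen($s); $i++) { /* ... */ }

    // Better: compute the length once.
    $len = strlen($s);
    for ($i = 0; $i < $len; $i++) { /* ... */ }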
Answering complex algorithm questions like this one requires a certain kind of expertise (including, but not limited to, lots of mathematical ability). People in the field of operations research have worked on this kind of problem more than most of us ever have. Here is an introductory book on the topic.
As an engineer trying to find a practical solution to a real-world problem, I would first get answers for these questions:
How big is the average problem instance you are trying to solve? Since your generic problem is NP-complete (as Jitamaro already said), moderately big problem instances require the use of heuristics. If you are only going to solve small problem instances, you might be able to get away with implementing an algorithm that finds the exact optimum, but of course you would have to warn your users that they should not use your software to solve big problem instances.
Are there any patterns you could use to reduce the complexity of the problem? For example, do the items always or almost always come in specific sizes or quantities? If so, you could implement a greedy algorithm that focuses on yielding high-quality solutions for common scenarios.
What would be your optimality vs. computational efficiency tradeoff? If you only need a good answer, then you should not waste mental or computational effort in trying to provide an optimal answer. Information, whether provided by a person or by a computer, is only useful if it is available when it is needed.
How much are your customers willing to pay for a high-quality solution? Unlike database or Web programming, which can be done by practically everyone because algorithms are kept to a minimum (e.g. you seldom code the exact procedure by which a SQL database provides the result of a query), operations research does require both mathematical and engineering skills. If you are not charging for them, you are losing money.
This looks to me like a variation of 1D bin packing. You may try a best fit and then try it with different sortings of table B. In any case, no efficient algorithm can guarantee a solution within less than 3/2 of the optimum, because this is an NP-complete problem. Here is a nice tutorial: http://m.developerfusion.com/article/5540/bin-packing. I used it a lot to solve my problem.
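As a rough starting point, a best-fit-decreasing pass over multiple stock lengths might look like this in PHP. This is a minimal sketch only: the function and field names are made up, it treats every stock length as available in unlimited quantity, it ignores kerf/cut width, and it assumes every piece fits in the largest stock.

    function bestFitDecreasing(array $pieces, array $stockLengths): array
    {
        rsort($pieces);       // cut the longest pieces first
        sort($stockLengths);  // try the shortest stock first so ties favour less material
        $bars = [];           // each bar: ['stock' => length, 'remaining' => length, 'cuts' => [...]]

        foreach ($pieces as $piece) {
            // Best fit: the open bar whose leftover is smallest but still big enough.
            $best = null;
            foreach ($bars as $i => $bar) {
                if ($bar['remaining'] >= $piece
                    && ($best === null || $bar['remaining'] < $bars[$best]['remaining'])) {
                    $best = $i;
                }
            }
            if ($best === null) {
                // No open bar fits: start the shortest stock length that holds the piece.
                foreach ($stockLengths as $stock) {
                    if ($stock >= $piece) {
                        $bars[] = ['stock' => $stock, 'remaining' => $stock, 'cuts' => []];
                        $best = count($bars) - 1;
                        break;
                    }
                }
            }
            $bars[$best]['cuts'][] = $piece;
            $bars[$best]['remaining'] -= $piece;
        }
        return $bars;
    }

    // The W16x19 demand from table A, against the 20'/25'/40' stock from table B:
    $bars = bestFitDecreasing([16, 16, 12, 12, 12, 5, 5, 5, 5, 5], [20, 25, 40]);

Re-running it with the stock list sorted the other way (or exhaustively permuted, for small inventories) and keeping the cheapest result is the "different sortings of table B" idea from above.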
I'm using PHP. I have a list of numbers with a min of 1 and a max of 10:
1,2,4,10,4,3,1,6,9,8,2,10,5,6,7,3,1...
Is there a way to find the next logical number in the sequence (or at least the possible number(s))?
I think I can loop through the array and find the one that came up least, but I'm not sure it will work.
You can create a list of functions that test that list of numbers for a specific pattern, yes; however, that is much different from what humans do, which is to "discover" a pattern. Humans also test for patterns they have seen in the past, but we are capable of discovering a pattern we haven't seen before with the algorithms inside our heads. If you want the code to try to discover patterns in your list of numbers, that would be artificial intelligence coding. It very much does exist, though it's a big topic altogether.
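That said, the naive frequency count you describe is easy to write. A minimal PHP sketch (it just guesses the least-seen value so far, which only makes sense if you assume the sequence tends toward a uniform distribution, and that may well be false):

    $numbers = [1, 2, 4, 10, 4, 3, 1, 6, 9, 8, 2, 10, 5, 6, 7, 3, 1];

    // Count how often each value 1..10 has appeared so far.
    $counts = array_fill(1, 10, 0);
    foreach ($numbers as $n) {
        $counts[$n]++;
    }

    // Guess the value(s) seen least often so far.
    $least = min($counts);
    $candidates = array_keys($counts, $least);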
I hope that explanation helps :)
Edited:
Here's a link if you are interested in knowing more about Artificial Intelligence coding:
https://www.youtube.com/watch?v=TjZBTDzGeGg&list=PLUl4u3cNGP63gFHB6xb-kVBiQHYe_4hSi
I am not skilled in the world of statistics, so I hope this will be easy for someone; my lack of skill also made it very hard to find the correct search terms on this topic, so I may have missed my answer while searching. Anyway: I am looking at arrays of data, say CPU usage for example. How can I capture accurate information in as few data points as possible on, say, a set of data containing one-second time intervals of CPU usage over one hour, where the first 30 minutes are 0% and the second 30 minutes are 100%? Right now, the only single data point I can think of is the mean, which is 50% and not useful at all in this case. Another case is when the usage graph is like a wave, evenly bouncing up and down between 0 and 100, yet still giving a mean of 50%. How can I capture this data? Thanks.
If I understand your question, it is really more of a statistics question than a programming question. Do you mean, what is the best way to capture a population curve with the fewest variables possible?
First, most standard statistics assume that the system is more or less stable (although if the system is unstable, the numbers you get will let you know, because they will be nonsensical).
The main measures that you need to know statistically are the mean, the population size, and the standard deviation. From these, you can calculate the rough bell curve defining the population curve, and know the accuracy of the curve based on the scale of the standard deviation.
This gives you a three variable schema for a standard bell curve.
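As a quick illustration, computing those summary statistics in PHP takes only a few lines (a sketch; it uses the population standard deviation, so divide by n - 1 instead if you are working with a sample):

    // Mean and population standard deviation of a series of samples.
    function meanAndStdDev(array $samples): array
    {
        $n = count($samples);
        $mean = array_sum($samples) / $n;
        $sumSq = 0.0;
        foreach ($samples as $x) {
            $sumSq += ($x - $mean) ** 2;
        }
        return [$mean, sqrt($sumSq / $n)];
    }

    // Half an hour at 0% and half an hour at 100%: mean 50, standard
    // deviation 50. The huge deviation is what tells you the mean alone
    // is hiding the real shape of the data.
    [$mean, $sd] = meanAndStdDev(array_merge(array_fill(0, 1800, 0), array_fill(0, 1800, 100)));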
If you want to get into further detail, you can add Cpk and Ppk, which are calculated fields.
Otherwise, you may need to get into non-linear regression and curve fitting, which is best handled on a case-by-case basis (not great for programming).
Check out the following sites for calculating the Cp, Cpk, Pp and Ppk:
http://www.qimacros.com/control-chart-formulas/cp-cpk-formula/
http://www.macroption.com/population-sample-variance-standard-deviation/
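For reference, once you have the mean and standard deviation, the Cp/Cpk formulas from those links reduce to a couple of lines (USL and LSL are the upper and lower specification limits you choose for your process):

    // Process capability indices from the mean, the standard deviation,
    // and the chosen upper/lower specification limits.
    function cpAndCpk(float $mean, float $sd, float $lsl, float $usl): array
    {
        $cp  = ($usl - $lsl) / (6 * $sd);                    // potential capability
        $cpk = min($usl - $mean, $mean - $lsl) / (3 * $sd);  // capability given centering
        return [$cp, $cpk];
    }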
I was taking a look at some important forums such as SMF, phpBB, and vBulletin, and I realized they are not in 3NF (third normal form).
They have many NULL fields; for example, in an SMF forum a member row can have all of these columns set to NULL:
pm_ignore_list, messageLabels, personalText, websiteTitle, websiteUrl, location, ICQ, AIM, YIM, MSN, timeFormat, userTitle, notifyAnnouncements, secretQuestion, secretAnswer, validation_code, additionalGroups, smileySet
So... let's say 18 fields which can be NULL in any row of the table.
That's not 3NF...
Why do they do it? I am sure they know a lot about databases...
Thanks.
The number one reason for denormalization is performance, which is a notorious problem with many discussion forums.
Originally SQL was not designed to store hierarchical data easily, and there are many less-than-optimal schema designs trying to work around this limitation.
One or more of these reasons might apply.
The database wasn't "designed" at all; it gradually accumulated more and more columns as any programmer who worked on it decided to add one. (Programmers are often only minimally trained in database design.)
The "design", such as it is, is the result of committee decisions. (See above.)
The "design" was known to be not the best idea, but was implemented in order to get the software to ship. The underlying fantasy is usually to fix it properly before the next release. (Often never gets fixed.)
The table was denormalized for faster SELECT performance. In my experience, though, SELECT speed usually suffers more from a) the overuse of ID numbers and b) misunderstanding normalization than from high degrees of normalization.
I was thinking about an idea for auto-generated answers; well, the answer would actually be a URL instead of an actual answer, but that's not the point.
The idea is this:
On our app we've got a reporting module which basically shows page views, clicks, conversions, and details about visitors like where they're from: pretty much a similar thing to Google Analytics, but way more simplified.
And now I was thinking, instead of making users select stuff like countries, traffic sources, etc. from dropdown menus (those features would be available as well), it would be pretty cool to allow them to type in questions which would result in a link to their expected part of the report. An example:
How many conversions did I have from Japan on variant 3? (one page can have many variants)
would result in:
/campaign/report/filter/campaign/(current campaign id they're on)/country/Japan/variant/3/
It doesn't seem too hard to do it myself, but it's just that it would take quite a while to make it accurate enough.
I've tried Googling but had no luck finding an existing script, so maybe you know of something like my idea that's open source and reliable/flexible enough to suit my needs.
Thanks!
You are talking about natural language processing - an artificial intelligence topic. This can never be perfect, and eventually boils down to the system only responding to a finite number of permutations of one question.
That said, if that is fine with you, then you simply need to identify "tokens". For example:
how many - evaluate to count
conversions - evaluate to all "conversions"
from - apply a filter...
japan - ...using japan
etc.
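A very crude PHP sketch of that token idea (the metric list, the country handling, and the exact URL segments are all invented for illustration; a real version would need synonyms, stemming, a proper country lookup table, and much more):

    // Map recognised tokens in a free-text question to report filters.
    function questionToReportUrl(string $question, int $campaignId): string
    {
        $q = strtolower($question);
        $filters = [];

        // "how many <metric>" -> a countable metric we know about (toy list).
        foreach (['conversions', 'clicks', 'views'] as $metric) {
            if (strpos($q, $metric) !== false) {
                $filters['metric'] = $metric;
                break;
            }
        }

        // "from <country>" -> country filter (a real version needs a lookup table).
        if (preg_match('/from\s+([a-z]+)/', $q, $m)) {
            $filters['country'] = ucfirst($m[1]);
        }

        // "variant <n>" -> variant filter.
        if (preg_match('/variant\s+(\d+)/', $q, $m)) {
            $filters['variant'] = $m[1];
        }

        $url = "/campaign/report/filter/campaign/{$campaignId}";
        foreach ($filters as $key => $value) {
            $url .= "/{$key}/{$value}";
        }
        return $url . '/';
    }

    // questionToReportUrl('How many conversions did I have from Japan on variant 3?', 42)
    // -> /campaign/report/filter/campaign/42/metric/conversions/country/Japan/variant/3/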
I'm not much into audio engineering, so please go easy on me. I'm receiving an audio file as input, and need to detect whether the speaker is male or female. Any ideas how to go about doing this?
I'm using PHP, but am open to using other languages, and don't mind learning a little bit of sound theory as long as the time is proportionate to the task.
I can't really provide specific insight into this problem, but I'd start by reading the following article: Gender Classification from Speech.
That should at least give an idea of the concepts / methodologies involved (this article describes this quite well as far as I can tell).
First of all you will have to find pitch values, and one great algorithm for finding pitch values for voice can be found in this article: http://www.fon.hum.uva.nl/paul/papers/Proceedings_1993.pdf.
It's amazingly accurate.
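To give a feel for the final step, here is a toy PHP sketch: a naive autocorrelation pitch estimate over one frame of samples, followed by a blunt threshold. The 165 Hz cutoff is only a rough figure (typical adult male voices sit around 85-180 Hz and female around 165-255 Hz, so the ranges overlap), and the paper above describes a far more robust estimator than this.

    // Toy autocorrelation pitch estimate for one frame of voiced speech.
    // $samples: array of floats; $rate: sample rate in Hz.
    function estimateF0(array $samples, int $rate): float
    {
        $n = count($samples);
        $minLag = (int)($rate / 400);  // ignore pitches above ~400 Hz
        $maxLag = (int)($rate / 60);   // ...and below ~60 Hz
        $bestLag = $minLag;
        $bestCorr = -INF;
        for ($lag = $minLag; $lag <= $maxLag && $lag < $n; $lag++) {
            $corr = 0.0;
            for ($i = 0; $i + $lag < $n; $i++) {
                $corr += $samples[$i] * $samples[$i + $lag];
            }
            if ($corr > $bestCorr) {
                $bestCorr = $corr;
                $bestLag = $lag;
            }
        }
        return $rate / $bestLag;
    }

    // Blunt classification; anything near the boundary is genuinely ambiguous.
    function guessGender(float $f0): string
    {
        return $f0 < 165.0 ? 'male' : 'female';
    }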
I'm with Christophe, both in that I don't have too much experience with this and also think some research would be your best path.
If I had to take a stab at this though, I would guess that it would involve computing the frequency spectrum of the sample using Fourier transforms, and then figuring out where the mean frequency lies. Build up a large sample of male vs. female voices, for different cultures and languages, and then compare your specific sample's mean frequency to the established means for male vs. female.
I could be completely wrong though, so research is really your best bet.
One approach would be to use artificial neural networks. You provide the neural net with some examples for training and it should hopefully learn to correctly classify the voices. You will probably have to do some feature extraction using Fourier transforms to get the data into a suitable form.
There are several papers about this kind of approach if you search on Google for "neural network speaker identification" but unfortunately I am not familiar enough with them to recommend any particular one.