  • Alex Selby - Collection of things
    Alex Selby - Collection of things: English football league predictions; Eternity page; Optimal heads-up preflop holdem; midi <-> ascii conversion; Comment on D-Wave paper and treewidth-based optimiser.

    Original URL path: http://www.archduke.org/ (2016-04-27)


  • Eternity page
    notes from a talk below. Official Ertl site (included for completeness). Some explanatory pages: Our solution; Our method (the talk below is more recent and probably a better explanation); Notes from a talk on solving Eternity; Less technical description. Some partly accurate news stories: Telegraph story about the puzzle's launch, 3/6/1999 (local copy; original cannot be found). Telegraph feature about the inventor's house, November 1999. "Eternity puzzle is a best-seller but it is impossible" (fortunately I didn't read this story until after we solved it), 14/12/1999. The puzzle inventor is sanguine in This is North Scotland, 28/1/2000 (local copy; original cannot be found). The inventor fears the worst, 15/3/2000. BBC news story during the period the solutions were being verified, 2/10/2000. The Times adds to the tension: are we one of the three winners? 17/10/2000. (The Times is now a subscription service; to find this article from the Times archives, try searching for the keywords "621 digits monckton unfathomable".) The Indy: an announcement from the scrutineers is expected shortly. But not that shortly, as it turned out. 17/10/2000. The local paper joins in, 17/10/2000 (local copy; original cannot be found). Yahoo press release: looks like we did win after all. Send it. 26/10/2000 (local copy; original cannot be found). BBC news agrees, 26/10/2000. So does the Indy, 27/10/2000, and The Times, 27/10/2000. (The Times is now a subscription service; to find this article from the Times archives, try searching for the keywords "monckton aberdeenshire cardboard 48 209".) And here are some pictures to prove it. If he can wear that tie, I can wear that jumper. He's the marketing director of the

    Original URL path: http://www.archduke.org/eternity/index.html (2016-04-27)

  • Simplex program directory
    down, is thought by some to capture enough of the game of Texas Holdem (the most popular version of poker played today) to be actually useful to serious players. As far as I know, it is the first perfect-play solution to two-player Texas Holdem with no betting after the flop. This article by Nick Christenson, originally from twoplustwo magazine, has some very kind things to say about how influential this program may have been. Things have moved on since 1999 in the field of academic poker studies, and there have been many academic papers (some even citing my program and notes) describing sophisticated exact and approximate solutions of versions of real-world poker. One interesting observation is that in no-limit poker, Jam/Fold strategies (either raise all-in or fold) can be very close to optimal. This is good news for theoreticians, since Jam/Fold strategies are much more tractable to analyse than strategies that would involve many future betting rounds, but it is perhaps disappointing for poker players, since it means that the game is in some situations nearly solved. This is an example of such a paper. Edited version of original usenet article. The program

    Original URL path: http://www.archduke.org/simplex/index.html (2016-04-27)

  • midi <-> ascii conversion
    There are several different ways the timing of notes can be specified; you can choose the one most appropriate for the edit you want to do. For example, you might want to type in a piece of music, insert/delete notes, or change the track of notes, none of which would be quite so easy in midi's native delta-time format. asc2mid.c, mid2asc.c, instructions. These are precompiled
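    For anyone curious what the delta-time point above amounts to in practice, here is a tiny illustrative Python sketch (my own, not part of asc2mid.c or mid2asc.c): converting between delta times and absolute times is easy, but editing is more convenient in the absolute form, because inserting or deleting a note does not disturb the timing of later events.

        def deltas_to_absolute(events):
            """events: list of (delta_ticks, note) pairs in track order."""
            out, t = [], 0
            for delta, note in events:
                t += delta
                out.append((t, note))       # absolute tick for this event
            return out

        def absolute_to_deltas(events):
            """Inverse conversion; events must be sorted by absolute tick."""
            out, prev = [], 0
            for t, note in events:
                out.append((t - prev, note))
                prev = t
            return out

        track = [(0, "C4 on"), (480, "C4 off"), (0, "E4 on"), (480, "E4 off")]
        assert absolute_to_deltas(deltas_to_absolute(track)) == track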

    Original URL path: http://www.archduke.org/midi/index.html (2016-04-27)

  • stuff » D-Wave: comment on comparison with classical computers
    the edges. In practice the subset chosen didn't seem to make much difference.

    An important point is that the problem description in [2] is not correct in saying that the entries Q_ij were chosen randomly to have the values ±1 (i, j label vertices of the 439-vertex subgraph of C_8, and Q_ij is allowed to be nonzero when i=j or when ij is an edge of the 439-vertex graph). The procedure that was actually used in this paper was to choose Ising model parameters J_ij (i<j) and h_i to be IID in {-1,1}, and then the problem of minimising $\sum_{i<j} J_{ij} s_i s_j + \sum_i h_i s_i$ over spins $s_i \in \{-1,1\}$ (the native form for D-Wave) was converted to QUBO form for the software solvers by substituting $s_i = 2x_i - 1$. This amounts to setting $Q_{ij} = 4J_{ij}$ (i<j), $Q_{ij} = 0$ (i>j) and $Q_{ii} = 2(h_i - \sum_{j<i} J_{ji} - \sum_{j>i} J_{ij})$. Obviously you can also shuffle weight between Q_ij and Q_ji without changing anything. This different problem description makes a big difference, because the instances are much harder than if they arose from choosing Q_ij IID in {-1,1}, or any one of several variants like this starting with Q_ij, such as enforcing symmetry or upper-triangularity. The difficulty of these QUBO instances appears to be intermediate between those arising from the description in [2], which are extremely easy, and those arising from the Ising model description with h_i = 0, which are quite hard.
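    To make the substitution above concrete, here is a small Python sketch of the Ising-to-QUBO conversion (my own illustration, not code from Prog-QUBO or from [2]); the assertion at the end checks that the two objectives differ only by a constant.

        import numpy as np

        def ising_to_qubo(J, h):
            """Convert 'minimise sum_{i<j} J_ij s_i s_j + sum_i h_i s_i, s_i in {-1,+1}'
            into 'minimise sum_{i<=j} Q_ij x_i x_j, x_i in {0,1}' via s_i = 2*x_i - 1.
            J is an n x n array used only strictly above the diagonal; h has length n.
            Returns (Q, offset), the two objectives differing by the constant offset."""
            n = len(h)
            Q = np.zeros((n, n))
            offset = 0.0
            for i in range(n):
                Q[i, i] = 2 * (h[i] - J[:i, i].sum() - J[i, i+1:].sum())
                offset -= h[i]
                for j in range(i + 1, n):
                    Q[i, j] = 4 * J[i, j]
                    offset += J[i, j]
            return Q, offset

        # Consistency check on a small random +-1 instance (illustrative only).
        rng = np.random.default_rng(0)
        n = 6
        J = np.triu(rng.choice([-1, 1], size=(n, n)), k=1)
        h = rng.choice([-1, 1], size=n)
        x = rng.integers(0, 2, size=n)          # arbitrary QUBO configuration
        s = 2 * x - 1                           # corresponding spin configuration
        ising_value = (J * np.outer(s, s)).sum() + h @ s
        Q, off = ising_to_qubo(J, h)
        assert np.isclose(x @ Q @ x + off, ising_value)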
    Prog-QUBO is available here, along with full test cases and results. This program works in several modes of operation. In its heuristic search mode with strategy 1 or strategy 3, dubbed Prog-QUBO(S1) and Prog-QUBO(S3), or S1 and S3 for short (the modes mainly considered here), it starts with a random state, then optimises over rows and columns, or trees, of the 8x8 graph of K_{4,4}'s until no more benefit is possible. Then it repeats until it runs out of time. Optimisation over a row, column or tree is efficient because they are thin in the sense of treewidth: they can be built up semi-K_{4,4} by semi-K_{4,4}, at each stage only having a small internal boundary. Further details are given in Appendix B.

    It is a slightly delicate matter to judge the performance of heuristic (non-proving) solvers. One method of evaluation would be to see what the best value they can produce in a given amount of time. This has the drawback that you can't get speed comparison ratios, so you can't say, for example, how many D-Waves are equal in value to how many CPUs, and it will also strongly depend on the time you choose to allow. The other kind of method is to keep the quality of answer fixed and measure the time taken to arrive at this answer. This obviously depends on the quality level chosen, but here the problems are sufficiently easy (taking milliseconds) that it makes sense to ask for the optimum. It may be that for a particular real-life use, finding a quick good value may be more important than finding the optimum, but at least the optimum is an objective standard by which we can compare solvers. In any case it is the only real basis of comparison we have available from the published information in [2], since we don't have access to a complete breakdown of the answers produced by the D-Wave device. Section 4 of [2] attempts to reverse-engineer the time taken by the D-Wave chip to find an optimal answer, and we may attempt to mimic this to make a comparison. To this end we define the Time to Solve (TTS) for a heuristic solver operating on a given instance to be the average time taken to find the optimum value. It is assumed that the solver may be using a random algorithm, and the average is meant to be over this space of randomness.

    There are a couple of points arising. The first point is that this single measure, TTS, does not take into account how confident the solver is that its solution is optimal. Of course it cannot be sure the solution is optimal (otherwise it would be a proving solver, not a heuristic one), but it may be confident to a greater or lesser extent, and this degree of confidence may or may not be important for the end use. A more discerning performance measure might include two times: the expected time to find the first solution (the TTS), and the expected time to be, say, 99% confident that the best solution obtained so far is the optimum; but here we disregard the time to be confident and just consider the TTS as the performance measure. This is like having an oracle which knows the true optimum, causing the heuristic program to halt at the point it finds the optimum. Fortunately, whatever the shortcomings of this measure, they apply more or less equally to D-Wave and Prog-QUBO, because both work, broadly speaking, by quickly making lots of trial solves from random start positions, and the only means they have to be confident that the solution obtained is optimal (or has a given quality) is to look at the distribution of values (energies) obtained. If the lowest value obtained has occurred many times, then that suggests that it may be optimal.

    The second point arising is that if we measure the average by asking the solver to find, say, 500 solutions and then dividing the time by 500, we need to be sure that it is finding 500 independent solutions, i.e. that no information from one solution gets used to help find another. It may be trivial to find many variants of a given solution, but we are interested in the time taken to solve from scratch, because in a real application we probably only want one solution; we are only finding lots of solutions here to get a more accurate time. To summarise, the measurement procedure would ideally be this: repeat 500 times: randomise the configuration and clear all state, then keep producing candidate configurations until you hit one whose value is v_min, where v_min is the optimal value. ("Configuration" refers to the values of the binary variables associated with each vertex, and "state" is auxiliary statistical information accumulated on the way.) However, this procedure depends on knowing v_min, and it is more convenient to have an autonomous evaluation procedure that doesn't rely on knowing this optimal value beforehand. So the actual procedure used was this:

    1. Set v* ← ∞.
    2. Set n ← 0 and the timer ← 0.
    3. Repeat until n = 500:
       - Randomise the configuration and clear all state.
       - Keep producing the next candidate configuration until you find one of value v ≤ v*.
       - If v < v*, then set v* ← v and go to step 2; otherwise (v = v*) set n ← n+1.

    If v* manages to find its way down to v_min (which was verified later to have happened in every case tried), then the elapsed time since the last time the timer was set to 0 will be the same in expectation as in the ideal first version of the procedure above. This requires finding 501 optimal values to measure the time to find 500, because the first optimal value will have been found under conditions of doing extra resets, so the time taken to find the first optimal value will have the wrong expectation.
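    In code, the autonomous procedure reads roughly as follows (a paraphrase only: the reset()/next_value() solver interface is a hypothetical stand-in, not Prog-QUBO's actual interface).

        import time

        def measure_tts(solver, runs=500):
            """Sketch of the autonomous TTS measurement described above.
            `solver` is assumed to expose reset() (randomise the configuration and
            clear all state) and next_value() (value of the next candidate
            configuration); this interface is hypothetical."""
            v_star = float("inf")       # lowest value seen so far over all runs
            n = 0
            t0 = time.process_time()    # the "timer", restarted whenever v_star improves
            while n < runs:
                solver.reset()
                while True:
                    v = solver.next_value()
                    if v <= v_star:
                        break
                if v < v_star:          # new best value: discard previous counts
                    v_star = v
                    n = 0
                    t0 = time.process_time()
                else:                   # v == v_star: one more (presumed) optimum found
                    n += 1
            return (time.process_time() - t0) / runs    # estimated TTS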
    There is a convenient side effect of asking Prog-QUBO(S) to find 500 independent optima, namely that in practice it can do so unsupervised, i.e. without being told the true optimum value. If it keeps doing random runs until the lowest value it has obtained has occurred 500 times, then for the problems here it is highly likely that this value is the true optimum. In fact Prog-QUBO uses the value 500 as its default setting, and in all 6000 instances tried the value so obtained actually was the true optimum. This is due to the branching ratios from one value to another for the problems considered here: the number of occurrences of value x+Δ divided by the number of occurrences of value x (Δ being the smallest change possible for the problem in question) is in practice almost always lower than 50, and usually much lower. If this branching ratio were 50, then the chance you have missed the optimum after collecting 500 lowest values obtained is about exp(-500/50) ≈ 4.5×10^-5. There is another convenient side effect. The TTS can be rather small (e.g. 0.3ms), so to get an accurate measurement of CPU time using a system clock with an accuracy of 10ms, it is necessary to repeat the run several times.

    To summarise: for each problem instance, Prog-QUBO(S1) and Prog-QUBO(S3) are run for as long as necessary to find 500 independent optimal solutions. These are later checked for optimality. The TTS is taken to be the running time divided by 500, and is used as the performance measure. This is intended to be directly comparable to the analysis in section 4 of [2], where the expected Time to Solve (TTS) for D-Wave is calculated based on the number of solutions that appear in a sample of 1000 anneals. This D-Wave analysis is also effectively assuming that an oracle will step in and halt the search as soon as the device finds an optimum. In neither case is the Time to Solve the same as the time to solve and be confident you have the optimum.

    Prog-QUBO was run in heuristic mode, single-threaded on a single core of a standard desktop processor, on 6000 random test instances (available in the github repository), split as 1000 instances of three different types of problem, run with two different strategies. The setup details are available in Appendix A. The three sets of 1000 are:

    Set 1: size 439, weightmode 5, QUBO examples intended to be as similar as possible to those on which the V5 chip (and V6 chip in V5-downgrade mode) were tested.
    Set 2: size 502, weightmode 5, QUBO examples intended to be as similar as possible to those on which the V6 chip was tested.
    Set 3: size 439, weightmode 7, no-external-field Ising Model examples in QUBO form, intended to be harder instances for testing.

    Set 1: V5/V6 chip comparison with Prog-QUBO on the size-439 graph

    Size 439 weightmode 5 QUBO results | Sample size | TTS mean | TTS median | TTS 80%ile | TTS 90%ile | Percentage optimal
    D-Wave V5 | 100 | ≥224ms | 202ms | 214ms | 265ms | 85-100
    D-Wave V6 | 1000 | ≥56ms | 37ms | 40ms | ≥50ms | 85-100
    D-Wave V5, excluding setup | 100 | ≥23ms | 1.4ms | 13ms | 64ms | 85-100
    D-Wave V6, excluding setup | 1000 | ≥20ms | 0.63ms | 3.8ms | ≥14ms | 85-100
    Prog-QUBO(S1) | 1000 | 2.8ms | 1.7ms | 3.5ms | 5.2ms | 100
    Prog-QUBO(S3) | 1000 | 1.6ms | 1.2ms | 2.1ms | 2.9ms | 100

    Prog-QUBO lines in the above table constructed from data here, here; Prog-QUBO running single-core, single-threaded; full test conditions explained here. D-Wave lines constructed from data from Fig. 6, following the method of section 4 of [2]. Graphs made using this program. The last few D-Wave sample values, especially for the V5 cases, are unreliable and may be underestimates, for the reasons given below.
    The values in the D-Wave V5 line of the above table were calculated from the information in Fig. 6 of [2] and the methods of section 4 of that paper. The last column of Fig. 6 consists of 100 circles, each representing, for its corresponding instance, the number of times the minimum found value showed up in a sample of 1000 anneals. These values, extracted directly from the pdf, are reproduced here and in sorted form here. As an example, to calculate the TTS 80%ile D-Wave V5 figure, we first look at the 20th and 21st multiplicities in the sorted list, which are 21 and 25. We expect the 80th-percentile multiplicity to be between these two values, so call it 23. This then implies that 1000/23 anneals are necessary on average to reach an optimum, and this takes 201 + (1000/23)×0.29 ≈ 214ms. Obviously the D-Wave times for easy/medium instances are dominated by the setup component. In fact the setup component may be a slight overestimate, because the figures reported in [2] were upper bounds, estimated from the maximum of several example values. The graph on the right shows what the V5/V6 timings would look like without the setup component entirely.

    The "percentage optimal" column is not meant to be a measured outcome from the comparison (it wouldn't be fair, because the programs and D-Wave devices ran for different amounts of time), but it points to some uncertainty, at the time of writing, as to which of the 100 D-Wave test instances from [2] produced optimal answers. It doesn't make a great deal of difference to the overall comparison, as we could assume that the D-Wave device found optimal answers in all cases and the conclusion that D-Wave is currently not as fast as software would be the same, but all the same it is better to make a like-for-like comparison if possible. To a large extent this uncertainty can be mitigated by not delving too far into the tail of very difficult test instances. That is, it makes sense to compare an instance of median difficulty for Prog-QUBO with one of median difficulty for D-Wave, and it probably makes sense to compare 90th-percentile-difficulty instances (since we don't care if the harder 10% were solved slowly or not solved at all, and we expect the easier 90% to have been solved optimally), but it is more risky to compare, say, 98th-percentile-difficulty instances, since all of the 98% of easier instances were solved optimally by Prog-QUBO but might not have been solved optimally in the case of D-Wave. There is another reason not to compare 98th-percentile-difficulty instances, or anything beyond about the 90th percentile, and that is that the statistical uncertainty from the set of D-Wave tests would be too great, since we are only working from a set of 100 instances. (In fact 1000 n=439 instances were tested on D-Wave, but only 100 of these were solved optimally by CPLEX, due to speed limitations at the time, and only the information about these 100 has been published in any detail.) Looking at the above method of calculation, it is apparent that when the number of minimum values found gets too low, there will be a large uncertainty in the D-Wave time. It is already somewhat uncertain for the 90th-percentile-difficulty problem, since we are dividing 4 or 5 into 1000, and this uncertainty would be much larger for a more difficult instance.

    It looks like Prog-QUBO(S3) is about 31x faster than D-Wave V6 for a median problem, 19x faster for a problem in the 80th percentile of difficulty, and ≥17x faster for a 90th-percentile problem. It looks like the D-Wave V6 device is catching up as the problems get harder, but in reality it is the other way around, due to the constant setup time (36ms) included in the D-Wave figures. From the point of view of what the V6 device can actually do, it is reasonable to include the constant setup time, but from the point of view of following trends, the setup time acts as a distortion. See below for a setup-less comparison and a discussion of trends.
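    The percentile calculation just described is mechanical enough to put in a few lines of Python (my paraphrase of the method of section 4 of [2]; the 201ms setup time and the roughly 0.29ms per anneal implied by the figures above are inputs assumed by this sketch, and the sample size of 1000 anneals is hard-wired).

        def dwave_tts_percentile(multiplicities, pct, setup_ms=201.0, anneal_ms=0.29):
            """multiplicities: for each instance, how many times the minimum found
            value appeared in its 1000-anneal sample (the circles of Fig. 6).
            Returns an estimated TTS at the pct'th percentile of difficulty."""
            m = sorted(multiplicities)              # ascending, so hardest instances first
            k = len(m) * (100 - pct) // 100         # e.g. pct=80 with 100 instances -> k=20
            mult = (m[k - 1] + m[k]) / 2            # between the k'th and (k+1)'th entries
            expected_anneals = 1000 / mult          # anneals needed on average to hit an optimum
            return setup_ms + expected_anneals * anneal_ms

        # With the 20th and 21st sorted multiplicities equal to 21 and 25 this gives a
        # multiplicity of 23, hence 201 + (1000/23)*0.29 ≈ 214ms, as in the text.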
    Set 2: V6 chip comparison (size-502 graph)

    Besides being physically faster than the V5 device at setup and readout, the V6 can handle problems up to size 502, which may be a more important difference, as these instances can be considerably harder than those of size 439. Test results from the V6 device are less detailed than those from the V5 device, and are described in [2] as very preliminary. Nonetheless there appears to be enough information in that paper to make a comparison. This time the important information is contained in Fig. 8, from which (again inspecting the pdf) it is possible to discover the number of minimum found values for 1000 instances, each having 10000 sample values (anneals). Unfortunately we have no way of knowing which of these values correspond to optimal solutions, which would be useful to know to make a like-for-like comparison with Prog-QUBO(S3), which is heuristically finding answers that are verified to be optimal. But again we can more or less sidestep this problem by not going beyond about the 90th percentile of difficulty. This time the assumption is that if the minimum value occurs more than about 50 times in D-Wave's 10000 samples (which happens for problems of up to about the 90th percentile of difficulty), then it is reasonably likely that it is the optimum. This assumes that D-Wave's branching ratios are somewhat well behaved, which is actually not completely clear from Fig. 5, but again any errors in these assumptions would be in D-Wave's favour, so we don't particularly care, since these wouldn't affect our overall conclusion that D-Wave appears not to be currently as fast as software.

    Size 502 weightmode 5 QUBO results | Sample size | TTS mean | TTS median | TTS 80%ile | TTS 90%ile | Percentage optimal
    D-Wave V6 | 1000 | ≥60ms | 36.9ms | 42.1ms | 59.3ms | 85-100
    D-Wave V6, excluding setup | 1000 | ≥24ms | 0.89ms | 6.1ms | ≥23ms | 85-100
    Prog-QUBO(S1) | 1000 | 5ms | 3.1ms | 6.8ms | 10ms | 100
    Prog-QUBO(S3) | 1000 | 2.6ms | 1.9ms | 3.5ms | 5.0ms | 100

    Prog-QUBO lines in the above table constructed from data here, here; Prog-QUBO running single-core, single-threaded; full test conditions explained here. D-Wave lines constructed from data from Fig. 8, following the method of section 4 of [2]. Graphs made using this program. The V6 chip takes 36 + 0.126k ms to do one setup and anneal k times, though again these times may be slight overestimates.

    Prog-QUBO(S3) is about 19x faster for a median problem, 12x faster for an 80th-percentile problem, and also about 12x faster for a 90th-percentile problem. Similar comments to the V5 439 comparison apply.

    Set 3: No-external-field test (size-439 graph)

    We might also wonder how D-Wave performs on the hardest form of its native problem, i.e. the set of couplings (weights) for the Chimera graph that produces the hardest problem. While probably not the most difficult form, a harder class of problems for the heuristic solver, of the type that are directly encodable into D-Wave, occurs when the J_ij are IID in {-1,1} (i<j) and h_i = 0. As noted in [1], h_i being non-zero biases the x_i and makes the problem much easier, and experiments using Prog-QUBO confirm this. That paper has test results using h_i = 0 instances, but only for the older 108-qubit device, and instances of this size are sufficiently small to be solvable very easily, even using only local searches, so we lack any D-Wave comparison for these size-439 tests; but the software results are printed here for interest.

    Size 439 no external field weightmode 7 QUBO results | Sample size | TTS mean | TTS median | TTS 80%ile | TTS 90%ile | Percentage optimal
    Prog-QUBO(S1) | 1000 | 34ms | 22ms | 46ms | 69ms | 100
    Prog-QUBO(S3) | 1000 | 16ms | 12ms | 22ms | 31ms | 100

    Prog-QUBO lines in the above table constructed from data here, here; Prog-QUBO running single-core, single-threaded; full test conditions explained here.
    The original Prog-QUBO no-external-field instances derive from Ising Model mode with h_i = 0, which can be implemented using the program options w3, x 1 (which allow state variables to take values in {-1,1}). This is distributionally equivalent to using weightmode 7 (w7), which constructs instances of these problems in QUBO form, i.e. with state variables taking values in {0,1}. The second form is used here, since the other problems are also in QUBO form.

    Optimal values and proving solvers

    The 3000 test instances described above were all solved by Prog-QUBO(m5), i.e. Prog-QUBO in proving mode (option m5), a mode which guarantees to find the optimal answer. In each case the value found by Prog-QUBO in heuristic mode was in fact the optimal answer. These answers were all subsequently independently checked by Jean-François Puget using CPLEX. Interestingly, Prog-QUBO(m5) finds Set 2 (the 1000 size-502 weightmode-5 instances) a lot harder than Set 3, even though it is the other way around for the heuristic solvers. Full output for the proving runs can be found here: Set 1, Set 2, Set 3. The timing comparison with the heuristic solver can be found in the summary files in these directories: Set 1, Set 2, Set 3.

    Table of timings for proving runs:

    Problem set | Sample size | TTS mean | TTS median | TTS 80%ile | TTS 90%ile
    Prog-QUBO(m5), Set 1 | 1000 | 1.8s | 1.6s | 2.5s | 3.4s
    Prog-QUBO(m5), Set 2 | 1000 | 110s | 110s | 150s | 170s
    Prog-QUBO(m5), Set 3 | 1000 | 13s | 11s | 18s | 24s

    This table was built from summary files in these directories: Set 1, Set 2, Set 3. Prog-QUBO running single-core, single-threaded; full test conditions explained here. CPLEX, as used in [2], took a little under 30s for a median instance, and between 1min and 10min for 80th- and 90th-percentile-difficulty instances. Since then, Puget has found ways to improve CPLEX's performance and reports that CPLEX can solve a Set 1/2/3 problem in an average time of 4.8s/41s/79s respectively, using 32 threads. (He uses a variant kind of average which is between the geometric and arithmetic mean, but on these problem sets this average is quite similar to the arithmetic mean and median.)

    Prog-QUBO(m5) works by keeping track of which of the 256 states of each K_{4,4} are necessary to consider as part of an optimal solution. Initially all 256 states are allowed for each of the 8^2 K_{4,4}'s. Then, for each possible allowed boundary value b ∈ B of each K_{4,4} (up to 2^16 of them), that K_{4,4} is exhausted and the set of its optimal states S(b) is calculated. Then the program finds as small a set of K_{4,4} states as it can that meets each S(b) for b ∈ B. This becomes the new set of allowable states for that K_{4,4}. The process is repeated for all K_{4,4}'s in the graph until no further restriction is possible. Then a treewidth-based exhaust is used with the restricted set of values. Before this is done, each of the eight possible global graph automorphisms is applied to the problem to see which one minimises the predicted running time. This method is quite simple, but fast enough to do the job of verifying the 3000 test examples in a reasonable amount of time. The implementation readily parallelises, running nearly n times faster on an n-core machine for the harder instances tested (for n=4 and n=6).
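    The step "find as small a set of K_{4,4} states as possible that meets each S(b)" is a hitting-set problem, which could, for example, be approximated greedily. The following Python sketch illustrates that idea only; it is not Prog-QUBO(m5)'s actual code, whose method may differ.

        def greedy_hitting_set(optimal_state_sets):
            """optimal_state_sets: an iterable of sets S(b), one per allowed boundary
            value b, each containing the K_{4,4} states that are optimal for that
            boundary.  Returns a (hopefully small) set of states meeting every S(b)."""
            remaining = [set(s) for s in optimal_state_sets if s]
            chosen = set()
            while remaining:
                # Pick the state that hits the largest number of not-yet-hit sets.
                candidates = set().union(*remaining)
                best = max(candidates, key=lambda st: sum(st in s for s in remaining))
                chosen.add(best)
                remaining = [s for s in remaining if best not in s]
            return chosen

        # e.g. S(b1)={3,7}, S(b2)={7,12}, S(b3)={12}: the states {7,12} meet all three.
        print(greedy_hitting_set([{3, 7}, {7, 12}, {12}]))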
    Scaling and Speculation

    Beyond seeing how the D-Wave devices match up with modern conventional processors in their absolute timings, one might also wonder how cleverly these devices are behaving in more fundamental terms. We would like to know whether D-Wave is acting non-trivially non-classically, and though this discussion does not claim to answer that question, it is still perhaps interesting to look a little further into the timings to see if we can extract any clues as to its behaviour, though naturally the interpretation of these results should be taken with a pinch of salt. These interpretations are just a bit of fun, really.

    Rather than look at absolute timings, which depend arbitrarily on the technology level of the classical computer in the comparison, we can look at scaling behaviour. To do this we need to jettison the setup-time component of the D-Wave timings, which dominates the timings for easy/medium instances, because we are interested in the pure annealing. The setup consists of reprogramming the inductive couplers and then waiting for the temperature to stabilise back down to the operating temperature of 0.02K. This takes 201ms in the V5 chip and 36ms in the V6 chip, but it is conceivable that this time could be significantly reduced further. To create another point of comparison, the heuristic S1 strategy is deliberately worsened into a strategy named S0, Prog-QUBO(S0), where only local exhausts are used: that is, for each random start configuration, each K_{4,4} is optimised in turn until no further improvement is possible. This has the potential to be an interesting test, because doing local exhausts like this only is not likely to work very well if there are long-range dependences, and you may think that if the hardware works well even though there are long-range dependences, then that is a sign that something clever is going on.

    To start with, here are the comparisons of D-Wave with Prog-QUBO S0, S1, S3, where the setup time has been excluded from the D-Wave figures. The last few D-Wave sample values in the above graphs, especially for the V5 case, are unreliable and may be underestimates, for the reasons given in the Set 1 section above.

    We can fix n and look at the scaling as the difficulty of the problem instance increases. For n=439 there are five datasets: the original 100 n=439 V5 examples, the 1000 n=439 V6 examples, and the 1000 Set 1 examples on which Prog-QUBO S0, S1 and S3 are run. Since the first of these only has 100 samples, we downsample the results from the 1000-instance datasets to be comparable. For n=502 there are only four datasets, because the V5 chip cannot go up to n=502. The D-Wave samples assume that their results are optimal. As noted above, this assumption is reasonably safe for p up to 90 or so, but may be questionable in the 90-100 region, in which case the D-Wave sample points shown in the above graphs can be regarded as lower bounds. The far-left tail of the D-Wave plots, on the other hand, may be underestimating its speed slightly, partly because the really easy problems morally require only a fraction of an anneal, but this is not represented here.

    For n=439 the V5 and V6 seem to have similar scaling behaviours. In both graphs, for p ≥ 50, the V5/V6 chips appear to be scaling on these logarithmic plots as multiples of the software strategies S0, S1, S3. For n=502 the multiples are approximately 1.8, 3, 3.5, which would correspond to the relationships $\text{TTS}_{\text{V6}}(p) \approx \text{const}_0\,\text{TTS}_{\text{S0}}(p)^{1.8} \approx \text{const}_1\,\text{TTS}_{\text{S1}}(p)^{3} \approx \text{const}_3\,\text{TTS}_{\text{S3}}(p)^{3.5}$ for p ≥ 50. So by that measure the D-Wave chips aren't doing anything very interesting, being worse than the local optimiser Prog-QUBO(S0).
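    One way to read off such a "multiple on a logarithmic plot" is to regress log TTS of the device against log TTS of a software strategy at matching difficulty percentiles. A Python sketch with made-up numbers follows (the measured data are in the linked files; nothing below is taken from them).

        import numpy as np

        def scaling_multiple(tts_device_ms, tts_software_ms):
            """Least-squares slope of log(device TTS) against log(software TTS),
            i.e. the exponent alpha in TTS_device ≈ const * TTS_software**alpha.
            Both inputs are TTS values at the same difficulty percentiles."""
            x = np.log(np.asarray(tts_software_ms, dtype=float))
            y = np.log(np.asarray(tts_device_ms, dtype=float))
            alpha, _log_const = np.polyfit(x, y, 1)
            return alpha

        # Illustrative, made-up values only (not the measured data):
        v6 = [0.9, 2.0, 6.0, 23.0]          # device TTS at increasing difficulty, ms
        s0 = [230.0, 380.0, 750.0, 1500.0]  # software S0 TTS at the same percentiles, ms
        print(scaling_multiple(v6, s0))     # about 1.7 for these made-up numbers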
    On the other hand, we can fix the difficulty and look at the scaling as n increases, and arrive at just the opposite conclusion.

    D-Wave timings excluding setup | Median | 80%ile | 90%ile | Mean
    V5 439 | 1.45ms | 12.6ms | 64ms | 23.1ms
    V6 439 | 0.63ms | 3.8ms | 13.8ms | 20.1ms
    S0 439 | 57ms | 160ms | 290ms | 130ms
    S1 439 | 1.7ms | 3.5ms | 5.2ms | 2.8ms
    S3 439 | 1.2ms | 2.1ms | 2.9ms | 1.6ms
    V6 502 | 0.89ms | 6.1ms | 23.3ms | 24.1ms
    S0 502 | 230ms | 750ms | 1500ms | 830ms
    S1 502 | 3.1ms | 6.8ms | 10ms | 5.0ms
    S3 502 | 1.9ms | 3.5ms | 5.0ms | 2.6ms
    V6 502 / V6 439 | 1.4 | 1.6 | 1.7 | 1.2
    S0 502 / S0 439 | 4.0 | 4.7 | 5.2 | 6.4
    S1 502 / S1 439 | 1.8 | 1.9 | 1.9 | 1.8
    S3 502 / S3 439 | 1.6 | 1.7 | 1.7 | 1.6

    Prog-QUBO lines in the above table constructed from data here: S0 439, S1 439, S3 439, S0 502, S1 502, S3 502. D-Wave data from here: V5 439, V6 439, V6 502. Prog-QUBO running single-core, single-threaded; full test conditions explained here.

    So by this measure the D-Wave chip is scaling up to the larger problem size better than S0, S1 and S3 are.

    Conclusion

    Overall, it looks like there is reasonable evidence that the D-Wave devices can currently be bettered in software. Trying to imagine how this conclusion could be untrue: (i) it could be that there is some class of problem, hitherto untested, on which the D-Wave does much better than software, and that this class of problem is useful; (ii) it could be that the test set used here is not in fact similar to the test set used in [2], which is easily checkable; (iii) it could be that there is a new D-Wave device that is much better than the ones for which the test results were published.

    On the other hand, we have actually been quite conservative in our estimates of the software speed. (i) We have used a test set which is kind to D-Wave. If we imagine D-Wave to be helping solve some real-world problem, then such a problem would have to be translated into QUBO or Ising Model form, quite possibly incurring a large loss of efficiency. To prove a point, the above software could simulate this whole process, down to emulating the resulting D-Wave QUBO problem, but in practice it is possible that a more direct method would be a lot better, as we saw with QAP. (ii) Even the particular form of the native problem chosen may be kind to D-Wave, since in Ising model form the non-zero couplings J_ij, h_i are all of the same magnitude (±1), which presumably makes it possible to set them in an analogue device, whereas J_ij and h_i arising from a real problem might exhibit a troublesomely wider dynamic range. On the other hand, this would also make it more difficult for the software heuristic solver. (iii) The program Prog-QUBO(S3) is fast enough to prove a point, but I'm sure that it's possible to write something much faster. (iv) The software timings quoted are all single-threaded, but any normal desktop processor now would have multiple cores/threads available, and the Prog-QUBO(S) strategies readily parallelise for all but the easiest instances, and a six-core machine could run nearly six times as fast for the hard instances.

    We've also been giving the benefit of the doubt to the D-Wave machine about price, in that right now we could obviously buy thousands of ordinary computers for the cost of one D-Wave machine, and we have only been comparing one D-Wave device with one desktop machine. The large cost of a present D-Wave machine is no bad reflection on it, in that if it's a pioneering device then it will naturally be more expensive than a mass-produced device with decades of development behind it. In effect, the above comparison assumes that D-Wave is ultimately mass-producible down to the cost of a decent desktop machine (1000 or so). This is perhaps slightly out of keeping with the ideal of comparing against a current D-Wave device, but it makes for a more interesting comparison, in my view.
    In general, I think that the 512-qubit Chimera graph may not be all that powerful a tool, as there are large induced subgraphs of small treewidth (see Appendix B), which means that it can be approximated to some extent by simple graphs. There is nothing special about the Chimera family in this respect: any planar graph with small-degree, local connectivity would be the same. I imagine it will need to reach at least the promised 2048 qubits, and would really need to be doing some sort of quantum annealing, before it becomes something computationally useful, though even at that point the advantage over software is not totally clear. Of course, all this would not detract from the fascinating and wonderful achievement of D-Wave if the device really is doing something non-trivially non-classical. That is surely the interesting question, and it would indicate whether something along these lines could eventually be made computationally useful, and much more. But I believe that's a research question (a very interesting one) rather than a practical one right now.

    Appendix A: Test conditions

    All software (Prog-QAP, Prog-QUBO) tests were carried out on a hex-core 3.2GHz Intel i7-3930K with 32GiB 1333MHz DDR3 RAM. The operating system was Ubuntu 12.04 (64-bit) using Linux kernel 3.2.0. Prog-QAP and Prog-QUBO are single-threaded; all timings are given for runs on a single core, and the quoted timings are CPU time as measured by the C function clock() and the Python function time.clock(). Though each given instance was run on a single core, to save time different instances were run simultaneously on multiple cores. To be clear, it was still the self-measured CPU time for each run that was reported, and in practice this is actually slightly larger (10% or so) than if the run had been run on its own on a single core; therefore the timing figures are conservative from this point of view, and the multiple cores were not used to produce faster timings. The computers were free of any other intensive processes, and at no point were more qubo processes running on them than there were cores available, thus avoiding any possible hyperthreading.

    The time taken to construct each qubo instance (generating random numbers or reading from a file) was not included in the timings, on the grounds that the task is to solve the problem, not construct and solve the problem. The time taken to compile the instance into a form more amenable for the solver possibly should have been included in the timings (it's a question of what form the problem is likely to appear in in a real-world example), though this was separately measured to be less than 0.1ms, so is virtually negligible for the purposes of these tests. Each heuristic qubo run consists of finding 500 optimal solutions, which makes it possible to measure the CPU time to reasonable accuracy with the operating system CPU clock, whose resolution is 1 centisecond. (The fastest TTS was 0.3ms, so 500 of these takes 15 centiseconds.) Each solution was found from scratch, with no information allowed to leak from the previous solution. No attempt was made to flush the processor cache between optimal solutions, though the
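    For what it's worth, the repeated-run CPU timing described in this appendix can be reproduced with any process-CPU-time clock. Here is a minimal Python sketch using time.process_time as a modern stand-in for the clock()/time.clock() calls mentioned above; solve_once is a placeholder for a single from-scratch solve.

        import time

        def cpu_time_per_solve(solve_once, repeats=500):
            """Average CPU time of `solve_once` over `repeats` runs, so that even a
            coarse clock (the appendix mentions 1-centisecond resolution) gives a
            usable figure for sub-millisecond TTS values.  `solve_once` is assumed
            to start from scratch each call, carrying no state between calls."""
            t0 = time.process_time()
            for _ in range(repeats):
                solve_once()
            return (time.process_time() - t0) / repeats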

    Original URL path: http://www.archduke.org/stuff/d-wave-comment-on-comparison-with-classical-computers/ (2016-04-27)

  • English football league predictions
    GMT; Wednesday 13 January 2016 at 22:00:01 GMT; Tuesday 12 January 2016 at 22:00:01 GMT; Monday 11 January 2016 at 22:00:01 GMT; Sunday 10 January 2016 at 22:00:01 GMT; Saturday 9 January 2016 at 22:00:01 GMT; Friday 8 January 2016 at 22:00:01 GMT; Thursday 7 January 2016 at 22:00:01 GMT; Wednesday 6 January 2016 at 22:00:01 GMT; Tuesday 5 January 2016 at 22:00:01 GMT; Monday 4 January 2016 at 22:00:01 GMT; Sunday 3 January 2016 at 22:00:01 GMT; Saturday 2 January 2016 at 22:00:01 GMT; Friday 1 January 2016 at 22:00:01 GMT; Thursday 31 December 2015 at 22:00:01 GMT; Wednesday 30 December 2015 at 22:00:01 GMT; Tuesday 29 December 2015 at 22:00:01 GMT; Monday 28 December 2015 at 22:00:01 GMT; Sunday 27 December 2015 at 22:00:01 GMT; Saturday 26 December 2015 at 22:00:01 GMT; Friday 25 December 2015 at 22:00:01 GMT; Thursday 24 December 2015 at 22:00:01 GMT; Wednesday 23 December 2015 at 22:00:01 GMT; Tuesday 22 December 2015 at 22:00:01 GMT; Monday 21 December 2015 at 22:00:01 GMT; Sunday 20 December 2015 at 22:00:01 GMT; Saturday 19 December 2015 at 22:00:01 GMT; Friday 18 December 2015 at 22:00:01 GMT; Thursday 17 December 2015 at 22:00:01 GMT; Wednesday 16 December 2015 at 22:00:01 GMT; Tuesday 15 December 2015 at 22:00:01 GMT; Monday 14 December 2015 at 22:00:01 GMT; Sunday 13 December 2015 at 22:00:01 GMT; Saturday 12 December 2015 at 22:00:01 GMT; Friday 11 December 2015 at 22:00:01 GMT; Thursday 10 December 2015 at 22:00:01 GMT; Wednesday 9 December 2015 at 22:00:01 GMT; Tuesday 8 December 2015 at 22:00:01 GMT; Monday 7 December 2015 at 22:00:01 GMT; Sunday 6 December 2015 at 22:00:01 GMT; Saturday 5 December 2015 at 22:00:01 GMT; Friday 4 December 2015 at 22:00:01 GMT; Thursday 3 December 2015 at 22:00:01 GMT; Wednesday 2 December 2015 at 22:00:01 GMT; Tuesday 1 December 2015 at 22:00:01 GMT; Monday 30 November 2015 at 22:00:01 GMT; Sunday 29 November 2015 at 22:00:01 GMT; Saturday 28 November 2015 at 22:00:01 GMT; Friday 27 November 2015 at 22:00:01 GMT; Thursday 26 November 2015 at 22:00:01 GMT; Wednesday 25 November 2015 at 22:00:01 GMT; Tuesday 24 November 2015 at 22:00:01 GMT; Monday 23 November 2015 at 22:00:01 GMT; Sunday 22 November 2015 at 22:00:01 GMT; Saturday 21 November 2015 at 22:00:01 GMT; Friday 20 November 2015 at 22:00:01 GMT; Thursday 19 November 2015

    Original URL path: http://www.archduke.org/footballpreds/ (2016-04-27)

  • Solution
    125k. Don't ask me why the larger picture is the smaller file. The following three are best for printing out, as they are in resolution-independent formats: pdf is likely to work if you use Windows, and postscript is likely to work if you use a version of Unix or have a postscript printer. pdf (16k), postscript gzipped (16k), postscript uncompressed (149k). And these are non-antialiased black and

    Original URL path: http://www.archduke.org/eternity/solution/index.html (2016-04-27)

  • Description
    placement of any piece, we make sure that all length-11 subpaths of the boundary are valid according to the above-mentioned lookup table. In addition to this, when we decide which site to tile, we calculate how constrained every site is according to an n-ply search (n is adjustable, but typically only 1 or 2). So if there is any site where you can't place n pieces leaving a boundary which passes the lookup-table test, then the partial will be rejected. We think it is generally important to tile the most constrained site, but in some cases it is clearly a waste of time and even a simple fixed order is better, so we have some idea how we might improve our method of choosing sites. For hex9 we find the 73752 solutions in about 25 seconds with no special symmetry code, which at the time we wrote it (c. Feb 2000) was quite a bit faster than other publicised solvers. On other benchmarks, like hex36, we were also quite a lot faster. But on one of the latest benchmarks, milestone 32, our solver is 20 times slower than Soliver. On the other hand, I think milestone 32 is particularly suited to a fixed order, because it is long and thin, so there is an obvious good order and there is no altitude-type boundary.

    One way of seeing why we want to choose the most constrained site first (assuming there is no cost in finding out what it is) is this. Pretend that the branching factor when there are n pieces in place is always b_n. Then the number of nodes searched is N = 1 + b_0(1 + b_1(1 + b_2(1 + ...))). If b_n > b_{n+1} then you reduce N by swapping b_n and b_{n+1}, so you get the smallest N by putting b_0, b_1, ... in increasing order.

    What is the branching factor at a particular site? You could say it is the number of legal placements that cover that site. But some of those placements may lead immediately to dead ends (e.g. 30-degree angles), so it seems reasonable to discard these and count just the placements that don't lead to immediate impossibilities. But this treats placements which have only one possible followup move the same as placements which have many followup moves. Wouldn't a more refined measure treat the former with less weight than the latter? Well, maybe. If you took this to the extreme then your measure would end up being how many global solutions there were, so you have to take into account how long it actually takes to calculate the local information to get a meaningful balance. If you expect to do, say, 10^10 nodes of work underneath the current node, then it is presumably worth choosing the site to branch on with quite a lot of care, but if you only expect 10^5 nodes then you can't afford to waste too much time choosing the site. The above waffle was leading to the idea of a 2-ply version of the local branching factor, which is not just the sum over local placements of followup placements. Our solver decides on an appropriate n, works out an n-ply version of the local branching factor at each site, and then branches on the site with the smallest one. However, it seems that for the kind of length exhaust we typically did (of the order of minutes) it is rarely worth using more than the ordinary n=1, so this method does not gain all that much in the cases in which we used it. But because I like the n-ply local branching factor, I'm including a description of it here. Perhaps it might even be useful for other search problems.
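    The claim that increasing order of branching factors minimises the node count is easy to check directly; here is a small Python sketch (my illustration, not part of our solver).

        from itertools import permutations

        def nodes_searched(branching_factors):
            """N = 1 + b0*(1 + b1*(1 + b2*(...))) for the given ordering."""
            n = 1
            for b in reversed(branching_factors):
                n = 1 + b * n
            return n

        bs = [5, 2, 9, 3]
        best = min(permutations(bs), key=nodes_searched)
        print(best, nodes_searched(best))   # (2, 3, 5, 9) 309: the cheapest ordering
        print(sorted(bs) == list(best))     # True: increasing order minimises N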
    Data gathering

    We're using the notation that a "region" is the empty shape to be tiled, and its "size" is the number of pieces that fit in it. Having made a solver, we set about tiling endgame regions. But here we run up against a fundamental problem with eternity pieces. Say, for example, we want to tile 1000 regions of size n and count the number of solutions for each, to see what kind of regions tend to be more tilable. If these regions and pieces arise from placing 209-n pieces in the Eternity grid in the usual way, then finding a solution to any one of them obviously means you have solved Eternity anyway, so that is no good. If, on the other hand, the regions and pieces you use are artificial in some way, then you don't know that the answers you get are meaningful. Basically, any method of predicting how well you're doing is likely to rely on a form of extrapolation somewhere along the line. We took a lot of care trying to make sure the extrapolation was OK, but we still couldn't be sure it was, so as a result we still can't be completely sure how often to expect a solution.

    We chose to extrapolate by using extra pieces, i.e. exhausting regions using more pieces than fit in the region. This method, and the modeling process that followed, underwent many changes, but we'll describe our first version of it here. To begin with, we worked with regions of size 24 (obtained in some fairly ad hoc way) and then replaced the 24 pieces in remaining hand with 90 random pieces. The advantage of using extra pieces is that we expect to get solutions, but not to take too long to exhaust. With these numbers we are sort of getting (90 choose 24)/4, about 10^21, times more (fake) solutions at the cost of a much smaller increase in running time. The 1/4 comes in because there is a 1/4 chance that a random set

    Original URL path: http://www.archduke.org/eternity/method/desc.html (2016-04-27)


