  • Notes from talk
    the depth-d tree rooted on the site in question will increase the total number of nodes searched on average. Unfortunately this method does not turn out to make a large improvement with Eternity, because it is often too expensive to do a depth-2 reconnoitre, so I won't talk about it any more here. I think it might conceivably be of interest for other search problems, though. A more detailed account can be found here.

    Why exhaustive search is not the best method for Eternity

    Anyway. Consider a further simplification of the b_i model. Imagine that each piece has a typical number t of ways of tiling each site (if this made sense, t would be a small positive number, around about 0.03). Then b_i, the number of ways to cover a site after having placed i pieces, would be (209-i)t, or, for an N-piece puzzle (trying to put N pieces into a size-N region with a similarly behaved set of pieces), (N-i)t. So the total number of solutions would be the product of this over i = 0, 1, ..., N-1, i.e. N!t^N. Obviously there will be an N for which this becomes larger than 1. So it looks like there should be a critical size of puzzle, above which a solution spontaneously comes into existence. It also looks like there should be many solutions to large puzzles.

    That was meant to be a plausibility argument as to why you should expect a critical size. This model is really too crude to use the numbers which come out of it. One quantitative aspect of it that I don't exactly believe is that it would predict that the time taken to exhaust the critical-size problem is about 10^12 operations; I believe it to be much worse than this, and probably well outside the range of present computers. But on the other hand, something would have to be majorly funny if there weren't a critical size, because it is hard to imagine b_i differs so much from (N-i)t that whatever "N!t^N" should be doesn't tend to infinity.

    So let's forget about that model now, and call the critical size c. We did actually estimate c, and at the same time the total number of solutions to the full 209-piece problem, in a way not dependent on the preceding model (but still not foolproof). I think we put c at about 70 or 80, and the total number of solutions to the full Eternity at about 10^95. This may seem like a lot, but they are very sparse. (Many more non-solutions, whatever that means.)
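    As a quick sanity check on the crude model, one can compute where N!t^N first exceeds 1. Here is a minimal Python sketch, using t = 0.03 (the rough figure quoted above); everything else in it is illustrative rather than from the talk. It works in logs to avoid overflow:

        import math

        def log_expected_solutions(N, t=0.03):
            # Crude model: expected number of solutions ~ N! * t^N.
            # log(N!) = lgamma(N+1); staying in logs avoids overflow.
            return math.lgamma(N + 1) + N * math.log(t)

        N = 1
        while log_expected_solutions(N) <= 0:   # i.e. expected count <= 1
            N += 1
        print("critical size under the crude model:", N)   # prints 88

    For t = 0.03 this crosses 1 at N = 88, the same order as the independent c = 70-80 estimate above, though, as said, the model is too crude for its numbers to be taken seriously.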
    Now there are two sorts of puzzle: those for which N < c, and those, like Eternity, for which N > c.

    First consider the former case, N < c. This is overconstrained. We don't expect any spontaneous solutions, but of course the problem setter can plant an artificial one. I guess the only method to solve this is exhaustive search. How did Christopher Monckton arrive at a 209-piece puzzle? During our work on the puzzle it was very unclear what he intended about various aspects of the puzzle (although it doesn't really affect your approach to solving); maybe now it is a little clearer. He probably had a computer program which exhaustively searched small puzzles. If he used Eternity-type pieces, then he would have used puzzles considerably below the critical size. He would have exhausted them with his program and noticed that the running time increased very rapidly as he increased N. Perhaps he found N = 50 was already impossible for him to exhaust, and so he reasoned that N = 209 would be way outside anyone's reach. I think that he intended the puzzle to be impossible in its first year, but he was going to make it easier year by year until someone solved it. (Digression on the unusual rules and the optional hint pieces: a hint piece is the position of a piece in CM's solution. He was going to release more hints each year it remained unsolved. By releasing enough hints he can eventually make the puzzle easy, although the six hints released in the first year actually make the puzzle more difficult, because six isn't enough.)

    Anyway, this is where it gets interesting, because you can't extrapolate a difficulty estimate beyond N = c. For N > c it actually starts to get easier. If you had to completely exhaust, not stopping when you've found a solution, then it would carry on getting harder and harder, and for N = 209 you would have to do at least 10^95 work. But of course when N > c you expect many solutions, so you don't need to exhaust. It is easy to see why: you could quickly put the first N - c pieces down more or less any old how, and then you'd be left with a size-c problem for which you'd expect one solution, so you could go and solve it. This shows that the time taken to find one solution to an N > c puzzle (e.g. Eternity) by simple exhaustive search, stopping when you've found a solution, should be the same as the time taken to exhaust a size-c puzzle. As mentioned above, I believe this to be a lot of time. We didn't estimate it, but as a guesstimate I would think at least 10^20 operations, and maybe many more.

    Improvements to exhaustive search

    In this way we can see that for N > c the problem certainly doesn't get any harder. But of course you can do better than that: you can use your freedom in the first N - c piece placements to set up a favourable state with c pieces left. (Actually, at this point we should stop talking about c, because its value will shift if you start paying attention to what pieces you are putting down to begin with, since you will be left with an atypical set of pieces.) What is a favourable state? Well, one for which your program will expect to find solutions at a decent rate, such as 10^-7 Hz. (Actually I don't think it's useful to estimate the rate of a state directly; more about this later.)

    At this point an analogy might be useful. Compare trying to fit pieces into a grid with fitting packages into the boot of a car. Some of the packages are awkwardly shaped, and you usually put those ones in first, keeping the nicer shapes till later, when you'll have less room for manoeuvre. You will also try to ensure that at each stage the space that's left is as sensible a shape as you can arrange (for example, fragmented = bad). The situation is also quite similar to the game Tetris, if you've ever played that. Everyone knows that the wiggly pieces (S and Z) are much more awkward than the long thin one or the T. Also, some board configurations are bad (e.g. having lots of spikes) and others good (e.g. convex boundary).

    So now the approximate form of the algorithm is starting to emerge. It looks as though there should be two phases: the first phase will work hard to choose favourable states of a certain size (in fact about 30), as fast and as well as possible, and the second phase will exhaust them. It is clear that for states of less than a certain size it will pay to completely exhaust them, as opposed to selectively searching them (pruning off branches), because the time taken to exhaust them will be much less than the time taken to create them. (Actually there is some loss involved in switching abruptly from pruned search to exhaust, but not a ridiculous amount. We found our actual solution by that method, so we didn't pay that much serious attention to improving it.) A favourable state will be one with good (tilable) pieces left and a nice boundary of the region. So we need to know how to put numbers to these ideas.
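    To make the two-phase shape concrete, here is a minimal generic skeleton in Python. This is not the authors' program: children, score and is_solution are hypothetical stand-ins for the real move generator, the probability model and the solution test, and switch_at=30 and beam are placeholders (the text only says the switch happened at about 30 pieces left).

        from typing import Callable, Iterable, Optional, TypeVar

        S = TypeVar("S")

        def two_phase_search(
            state: S,
            pieces_left: int,
            children: Callable[[S], Iterable[S]],   # legal next placements
            score: Callable[[S], float],            # model's solvability estimate
            is_solution: Callable[[S], bool],
            switch_at: int = 30,                    # ~30 pieces left, per the text
            beam: int = 5,                          # how selective phase 1 is
        ) -> Optional[S]:
            if is_solution(state):
                return state
            if pieces_left <= 0:
                return None
            kids = list(children(state))
            if pieces_left > switch_at:
                # Phase 1: pruned search -- only descend into the most
                # favourable-looking children according to the model.
                kids = sorted(kids, key=score, reverse=True)[:beam]
            # Phase 2 (and within the beam): plain backtracking.
            for child in kids:
                found = two_phase_search(child, pieces_left - 1, children,
                                         score, is_solution, switch_at, beam)
                if found is not None:
                    return found
            return None

        # Toy demo: "place" the numbers 0..5; a complete set is a solution.
        print(two_phase_search(
            frozenset(), 6,
            children=lambda s: (s | {i} for i in range(6) if i not in s),
            score=len,
            is_solution=lambda s: len(s) == 6,
            switch_at=3, beam=2))

    The point of the structure is the one made above: prune hard while states are cheap and plentiful, then switch to a complete exhaust once exhausting is cheaper than creating.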
    What follows next is an account of the basic method we used. We refined it in various ways, for example to take into account possible second-order effects of good pieces tending to work better with other good pieces, but it is not necessary to talk about these modifications to get the idea of what we did.

    Adding extra pieces

    First I need to mention a fundamental problem that you always come up against when trying to make estimates of how solvable things are. Ideally we would like to try an experiment something like this: sample many states of size 100; exhaust them all; see what kind of pieces turn up in solutions (presumably these are the better, more tilable, pieces). Unfortunately, as discussed above, exhausting a size-100 region is harder than solving the original problem of Eternity. If we instead try states of size 40, say, then we may be able to exhaust them, but we won't find any solutions at all, because the chance of a state of size 40 having a solution is very small indeed. There is no happy medium size. So I think any method of predicting how good a state is (how likely it is to be solvable, say) will rely on some form of extrapolation, unless you are capable of solving the original Eternity problem many times over. This introduces the problem that you don't know whether your extrapolation process is accurate, or whether it is producing biased results for some reason. We took quite a lot of care to try to reduce and monitor this problem, but we were never really sure that the numbers coming out of the model were really accurate. Of course, there is the hope to fall back on that even if your model isn't accurately telling you how well you're doing, it still may be directing your search to do the right thing; for example, this would be true if all its probability estimates were out by the same constant factor.

    We chose to extrapolate by adding extra pieces. That is, we exhausted states which had more pieces than the size of the region. For example, say you use 90 pieces in a region of size 24. Suppose that you expected 10^-17 solutions for a random state with 24 pieces and region size 24. Obviously then sampling states with 24 pieces is out of the question, because you'd have to do this about 10^17 times before you found a single solvable state. But 90-choose-24 is about 4x10^21, so you'd expect about 40000 solutions to the 90-piece state. (Well, actually you'd expect about 40000/4 = 10000 solutions due to parity considerations, but we don't need to worry about that. In fact I don't think you ever really need to worry about parity when solving Eternity, clever though it is. A description of 8-fold parity can be found on Ed Pegg's page. There has also been a lot written on this and other clever non-rotation/reflection-invariant parities on the Eternity mailing list by David Eddy and others.)

    Of course, there has to be a cost for this benefit: exhausting 90 pieces in a region of size 24 will take longer than exhausting 24 pieces in a region of size 24. But it won't take anything like 4x10^21 times longer. If you think about it, it is clear that doing a single 90-piece exhaust is far more efficient than doing the 90-choose-24 separate 24-piece exhausts using every possible 24-piece subset of the 90. (Illustrate this.) It turns out we can now find a happy medium, where it doesn't take forever to exhaust and we do expect solutions. The sort of numbers we used were: letting an exhaust take of the order of 10 minutes, to find up to thousands of solutions. This provides one data point. This was then repeated a few thousand times on two computers over the course of one or two weeks to create enough data to extract the model parameters.
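    The 90-pieces-in-a-size-24-region arithmetic above is easy to reproduce; the 10^-17 figure is the text's assumed solution density for a random 24-piece state:

        import math

        p_random = 1e-17                 # expected solutions of a random 24-piece state
        subsets = math.comb(90, 24)      # number of 24-piece subsets of 90 pieces
        print(f"{subsets:.1e}")          # ~4.4e+21, the text's "about 4x10^21"
        print(subsets * p_random)        # ~4e4 expected solutions ("about 40000")
        print(subsets * p_random / 4)    # ~1e4 after the 4-fold parity loss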
    The other nice thing about extrapolating by adding excess pieces is that the model we chose is still exactly solvable; that is, one can calculate the likelihood exactly. This makes it possible to maximise the likelihood over the parameters.

    The basic model

    This is a description of the mathematical process we used to turn the data from the above exhausts into a probability that a given region of a certain size can be tiled with a given set of pieces. The basic idea was to have a number p_i for each piece, so i = 1, ..., 209. (Actually we usually work with l_i, their logarithms.) For a given state of the appropriate size, you multiply together the p_i to arrive at the chance that those pieces tile the region. You can also add other parameters m_j, which take into account the effect of the region shape. These are four forms of the same thing: HTML version (if this comes out as gibberish then try one of the lower three); PDF version (this is likely to work if you use Windows, and may work anyway); Postscript version (this is likely to work if you're using a version of Unix; you may need to use ghostview to read it); TeX source. During the talk I just talked about a simplified model with no boundary terms, w_B = 1, in which P(R, X) = exp(sum over i in X of l_i), and the likelihood was L = (sum over i of a_i l_i) - (sum over B of A_k(B) P). The model would probably be useless if the likelihood, which is defined as a sum of zillions of terms, really needed to be evaluated by adding up these terms. Fortunately the potentially problematic term, A_k(B) P, can be evaluated in a different way. This explains the earlier remark about the extrapolation process leading to a calculable likelihood. (Mention that the analogue of A_k(B) P for interaction terms is not calculable, so it is much harder to deal with that model.)
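    As an illustration of the kind of fit involved (not the authors' exact likelihood, whose form is only sketched above), here is a toy maximum-likelihood estimate of per-piece log-tilabilities l_i, assuming each state's solution count is Poisson with mean exp(sum of l_i over the pieces in the state):

        import numpy as np

        rng = np.random.default_rng(0)
        n_pieces, n_states, per_state = 20, 2000, 8

        # Hypothetical "true" log-tilabilities and a batch of sampled states.
        true_l = rng.normal(0.0, 0.4, n_pieces)
        states = np.array([rng.choice(n_pieces, per_state, replace=False)
                           for _ in range(n_states)])

        # Simulated exhaust results: counts, Poisson around exp(sum l_i).
        counts = rng.poisson(np.exp(true_l[states].sum(axis=1)))

        # Gradient ascent on the Poisson log-likelihood
        #   L = sum_s [ k_s log mu_s - mu_s ],  mu_s = exp(sum_{i in s} l_i),
        # whose gradient is dL/dl_i = sum_{s containing i} (k_s - mu_s).
        l = np.zeros(n_pieces)
        for _ in range(2000):
            mu = np.exp(l[states].sum(axis=1))
            grad = np.zeros(n_pieces)
            np.add.at(grad, states.ravel(), np.repeat(counts - mu, per_state))
            l += 1e-4 * grad

        print(np.corrcoef(l, true_l)[0, 1])   # should come out close to 1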
    After a week or two we had enough data to make the parameter estimates reliable. We tested this by evaluating the parameters using only one half of the data, then using only the other half of the data. We were satisfied we had enough data when the two results were similar.

    This is a graph of l_i (tilability) against rank, obtained from some variant of the above process. The worst piece is a little hard to see, because it lies on the y-axis at (0, -2.8). This is piece 166, dubbed the "beast piece" by Patrick Hamlyn, who also noticed it was bad. It is a lot worse than the second-worst piece. On the other side, the best piece, at (208, 0.66), is piece 35, known less imaginatively to Oliver and me as the "superpiece", and it is much better than the second-best piece. It seems there are four more or less equal second-best pieces; these are pieces 51-54, and they are very similar-looking. (Postscript, pdf.) Here are the 12 best pieces and the 5 worst according to some variant of this method. (Postscript, pdf.) You may notice that the best pieces tend to be more convex and smooth than the worst pieces, which are noticeably wiggly. This accords with our intuition, but intuition only takes us so far, and we really do need to ask the computer how good the pieces were. For example, pieces 120, 124 and 125 are also convex, but all are ranked about 30th. It would be a significant loss to classify these pieces as second-best.

    Breadth-first search

    We now have a probability of being able to tile a size-24 region, but we are going to need the probabilities of being able to tile other, greater, sizes of region. Of course we would actually prefer to know the rate of finding solutions, i.e. the probability of solution divided by the time to search. But rate estimates are likely to be unstable, and it doesn't cost too much to ignore the difference, because we hope the expected rate is approximately an increasing function of the probability, so maximising probability is approximately the same as maximising rate.

    So we would like a set of model parameters for sizes greater than 24. Unfortunately we can't use the same method as in the above section (The basic model); we couldn't even afford to spend (209-24) x 2 weeks collecting the data, and of course it would take a lot longer than that using the above method, because it takes longer to find solutions for larger sizes. Instead we used a kind of iterative method to try to find parameters for sizes > 24. This involved mimicking the procedure which would be used for an actual run which tried to find solutions (see next paragraph). These runs are tree-like: states of size n have child states (some of which will be kept), which are of size n-1. We can then adjust the size-n parameters to try to get them to predict (in the sense of least squares, say) the probability given by the size-(n-1) model on its children (or you can do it for grandchildren, etc.). The procedure we were using was not all that good, and the resulting parameters are almost certainly not as good as they could be. This was one of the things we were trying to improve before we were so rudely interrupted by an actual solution.
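    That least-squares bootstrapping step can be illustrated in miniature. In this toy (everything below is hypothetical scaffolding, not the authors' code), the "size n-1 model evaluated on children" is faked by a noisy linear function of the pieces present, and the size-n per-piece parameters are then fitted to it by least squares:

        import numpy as np

        rng = np.random.default_rng(1)
        n_pieces, n_states = 30, 400

        # Which pieces remain in each sampled size-n state (0/1 indicators).
        X = (rng.random((n_states, n_pieces)) < 0.5).astype(float)

        # Stand-in for "the size n-1 model averaged over a state's children":
        # here just a noisy linear function of the pieces present.
        child_l = rng.normal(0.0, 0.3, n_pieces)
        y = X @ child_l + rng.normal(0.0, 0.05, n_states)

        # Least-squares fit of the size-n per-piece parameters to the targets.
        l_n, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(np.corrcoef(l_n, child_l)[0, 1])   # near 1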

    Original URL path: http://www.archduke.org/eternity/talk/notes.html (2016-04-27)

  • Inventor's £1m jigsaw challenge
    inventor is gambling on Eternity becoming the Rubik Cube of the Millennium, allowing him to pay the prize from his royalties. He launched the puzzle at Hamleys toy store in Regent Street yesterday. He said: "It won't be a computer which solves it, and it won't be a mathematician either. It can only be done by a human using intuition and visual skill and awareness. Ordinary people have as much chance as people in Mensa. It could be an eight-year-old child or a granny." Mr Monckton, 47, was a member of the Downing Street policy unit during the Thatcher era. He said he gave a prototype of the puzzle to the former Prime Minister as a leaving present, and four years later received a telephone call begging for the solution. The object of Eternity is to put the 209 pieces together to fit a dodecagonal board with no gaps or pieces missing. Mr Monckton, who has taken out insurance to cover the prize money, said the known solution had been sealed by loss adjustors in a vault. He hopes sales of the £29.99 puzzle will be enough to cover the prize, but said: "We are taking

    Original URL path: http://www.archduke.org/eternity/newsarticles/npuz03.html (2016-04-27)

  • This is North Scotland - Press and Journal - Evening Express Online
    then he will release Eternity 2, with a £5 million award for its completion. The Eternity game has been hailed as one of the biggest puzzle crazes to hit the shops since the Rubik's Cube was in its heyday. Experts reckon the first Eternity game, released in the summer last year, is so tricky it could take at least four years to complete. Rumours were around on the Internet at the tail end of last year that some people were on the verge of completing the game. These claims forced Mr Monckton to put his neo-classical palace on the market. He explained just why he decided to raise the stakes with Eternity 2, despite facing months of uncertainty to cover his original gamble: "When you are an entrepreneur you have to take gambles, and you're always looking for the one that will make you rich. Even if you get down to piece 208 of 209 and it doesn't fit, you have to start all over again. Some people say they are close, but they don't really know the truth." He added: "Eternity is one of those games that, if you get it right, it could prove very

    Original URL path: http://www.archduke.org/eternity/newsarticles/edarch0.html (2016-04-27)

  • This is North Scotland - Press and Journal - Evening Express Online
    was built in 1825, and the 200-acre estate includes an ice house, dairy and trout loch. A spokeswoman for eBay UK, the site where the estate is being advertised, said they were expecting quite a lot of interest. She said: "We are predicting a person from the US will possibly be interested in owning such a piece of land, and also the title of laird which comes with it.

    Original URL path: http://www.archduke.org/eternity/newsarticles/edarch1.html (2016-04-27)

  • Yahoo! Finance - ETERNITY £1m WINNER MAKES INVENTOR HOMELESS
    took only seven months to crack Eternity. Alex had worked full time on it for six months when he realised they could be first with a solution. Now Alex, who lives in a rented house in Cambridge and says the cash will allow him to do what he wants with his life, will share his win with Oliver, who lives in university halls of residence. So how did the two men do it? Alex said: "I was given the puzzle in June last year as a birthday present, but didn't look at it until November. When I saw on the internet that quite a few guys had 200-plus pieces, I felt fairly confident Eternity could be solved. I decided to use a computer, but to give it a lot of help along the way. First we worked out the difficulty of each piece using a probability model. We then programmed the computer to start work on the harder pieces first. It took about two weeks to decide on the most promising possibilities. We then constantly improved and refined the computer programme, although we did go up a few blind alleys. We were lucky to solve it so quickly; it could have taken another two months." Oliver, who will use his share of the prize to buy a home, added: "It was a clever puzzle, just the right level of difficulty to be a real challenge." Their success means Christopher Monckton, 48, who spent 14 years developing Eternity and got insurance via underwriters by agreeing to contribute any royalties if the prize was claimed, will have to sell his early 19th-century Fraserburgh house and estate, Aberdeenshire. But he is undaunted: "Eternity was the best selling puzzle ever; a quarter of a million people bought it. I'm delighted

    Original URL path: http://www.archduke.org/eternity/newsarticles/an939.html (2016-04-27)

  • This Is North Scotland
    a breach of the rules. The best-selling 209-piece brain teaser, which had millions of permutations, sold quickly on its release last summer. Experts had predicted that it could take up to four years to crack the code of the 12-sided Eternity puzzle. Mr Monckton said: "The game was designed so that to complete it you really had to think about it in the right way. It seems like someone has found out what that way is and has cracked it. They should feel very proud of themselves. Even the fastest computers would take years to complete the game." The former policy adviser to Baroness Thatcher said he didn't mind having to put his luxury home on the market: "There is no disappointment at all. The Eternity game was like entering into any business enterprise; sometimes it pays off handsomely and other times it doesn't. The last thing that I am going to do is start whinging and wringing my hands. The only reason we aren't all absolutely awash with cash is that the American market didn't want to take Eternity. We think that was a marketing problem, so when we launch the sequel we will sell the concept to US toy manufacturers early on." Mr Monckton said Eternity 2 would be a good deal more difficult, and would make use of the talents of the person who solved the first game: "We expect to have it out in about two years. It will be quite a different sort of game and will be a lot harder." He added: "I will miss the house. We have been here for four wonderful years and it really is a dream home. The estate costs £30,000 a year to run but makes about £18,000 through

    Original URL path: http://www.archduke.org/eternity/newsarticles/edarch2.html (2016-04-27)

  • midi ascii conversion
    event which has been given a name. Most common events have a name: NT (play a note), Time signature, Channel volume, Instrument, End of track, Tempo, Key, Text. For example: (i) ST A0 3C 64; (ii) Meta Event type 7F 0 0 119 14 0 (don't put in a length; asc2mid counts the data bytes); (iii) Sysex Event F7 0 1 2 3 (don't put in a length; asc2mid counts the data bytes); (iv) Channel volume 100; Time signature 4/4, clocks/mtick 96, crotchets/32ndnote 8. NT C 3+1/2 von 80: turn on C two octaves above middle C, lasting 3 1/2 crotchets, with velocity 80 (asc2mid will insert the corresponding note-off event at the right time). NT Bb on: turn on Bb one and a bit octaves below middle C, with velocity given by the most recent von on the same track. NT R 3: insert a rest of 3 crotchets on this track; this doesn't actually write any midi events, and its only effect is on the next FOL or SIM. The instrument numbers go from 1 to 128. It is up to the user to put in an End track at the end of every track. mid2asc: usage is mid2asc [-c|-f|-r] [-s] midifile textfile. The -c, -f and -r flags affect how the timing of the event lines is specified. The default (none of -c, -f or -r) outputs BA/CR-type timings (item (i) in the above ascii file format description). There is a slight complication in this case, because the time signature affects the meaning of bar number, and the time signature can change at any time on any track (and should affect all tracks). This shouldn't cause any trouble, but to guide the eye when the time signature changes on
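    The excerpt above is cut off mid-sentence, but it conveys the flavour of asc2mid's note lines. As a rough illustration only, here is a small Python sketch that parses simplified NT lines of the kind shown; the exact asc2mid grammar (octave marks, duration syntax, "von" handling) is not fully reproduced here, so treat the format assumed below as approximate:

        def parse_duration(tok):
            # Durations are measured in crotchets; accept "3", "1/2" or "3+1/2".
            total = 0.0
            for part in tok.split("+"):
                num, _, den = part.partition("/")
                total += float(num) / (float(den) if den else 1.0)
            return total

        def parse_nt(line):
            # "NT R 3" is a rest; otherwise: pitch, optional duration tokens,
            # optional "von <velocity>". Unknown tokens (e.g. "on") are skipped.
            fields = line.split()
            assert fields[0] == "NT"
            if fields[1] == "R":
                return ("rest", parse_duration(fields[2]))
            dur = vel = None
            i = 2
            while i < len(fields):
                if fields[i] == "von":
                    vel = int(fields[i + 1]); i += 2
                elif fields[i][0].isdigit():
                    dur = (dur or 0.0) + parse_duration(fields[i]); i += 1
                else:
                    i += 1
            return ("note", fields[1], dur, vel)

        print(parse_nt("NT C 3+1/2 von 80"))   # ('note', 'C', 3.5, 80)
        print(parse_nt("NT R 3"))              # ('rest', 3.0)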

    Original URL path: http://www.archduke.org/midi/instrux.html (2016-04-27)

  • stuff » About
    About — Alex Selby's blog. Pages: About; D-Wave comment on comparison with classical computers; D-Wave scaling comparison with classical algorithms; Efficient subgraph-based sampling of models with frustration; Harder QUBO instances on a Chimera graph; English football league predictions; Description of predictor.

    Original URL path: http://www.archduke.org/stuff/about/ (2016-04-27)


