Just Theorizing

G.A.T. Engine general discussion
Sooz
Proficient
Posts: 246
Joined: Wed Feb 09, 2011 12:24 am
Location: Canada

Just Theorizing

Post by Sooz » Sun Jun 24, 2012 3:13 pm

If GAT were allowed to go into the billions, finding better and better tables for the higher hits, would the tables eventually converge? That is, would the predicted numbers eventually become the same for every table?
Of course this is totally impractical because by that time, the predicted draw would be long past.
I'm just theorizing.
.....Sooz

lottoarchitect
Site Admin
Posts: 1635
Joined: Tue Jan 15, 2008 5:03 pm
Location: Greece

Re: Just Theorizing

Post by lottoarchitect » Sun Jun 24, 2012 5:04 pm

That's an interesting question Sooz. In theory, the tables should converge only once a table figures out a perfect understanding of the randomness of the draws, if such a thing is even possible. After that first perfect GAT table there will still be some off predictions, but such perfect GAT tables will be observed more and more often, until a point where every following one is perfect and produces the same numbers. In practice I doubt this will ever be observed, no matter how many tables are scanned.

lottoarchitect

relowe
Advanced
Posts: 63
Joined: Tue Feb 05, 2008 12:17 am
Location: Melbourne

Re: Just Theorizing

Post by relowe » Tue Jul 10, 2012 5:45 am

Just theorizing in a slightly different direction.

GAT tables are presented to us in the panorama panel because the results of their prediction are found to be desirable.

I presume each GAT table has some sort of individual parameters/characteristics about it that allows it to make its own prediction.

My theorizing is - are these parameters/characteristics available from individual tables so that they could be, say, 'aggregated' and applied in a way that makes some sort of super-predictive GAT?

I think, LA, you have said in the past that a good result in one table can carry forward into a subsequent GAT - here I am thinking of several tables having some combined input into a subsequent GAT.

Probably on the wrong bus here but you never know your luck. :D

relowe

lottoarchitect
Site Admin
Posts: 1635
Joined: Tue Jan 15, 2008 5:03 pm
Location: Greece

Re: Just Theorizing

Post by lottoarchitect » Tue Jul 10, 2012 9:52 am

Hi relowe, it is hard to describe what really happens with GATs and how they form their predictions. Also, I don't want to give out many details of the process, to protect the system.
relowe wrote:
I presume each GAT table has some sort of individual parameters/characteristics about it that allows it to make its own prediction.
This is the signature detected and formulated up to that point in the GAT generation process. The signature assigned to that GAT table makes the prediction.
Each GAT is influenced by all the preceding GATs (the signatures formulated up to that point) plus the randomness evolution occurring at the current stage, and this produces the new signature for the current GAT. So, in practice, all GATs are connected. This is also the reason we can't compute GAT 1000 if we don't have GATs 1-999 computed; we can't jump directly to GAT 1000.
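
If it helps to picture the chained dependence only, here is a toy sketch in Python. The names and the hashing are invented purely for illustration and have nothing to do with the actual signature logic; the point is just that step N folds in every earlier step, so there is no shortcut.

# Toy illustration only (invented names, not the real GAT signature process):
# each step's "signature" folds in the previous one, so there is no way to
# jump straight to table 1000 without computing tables 1..999 first.
import hashlib

def evolve(signature: bytes, step: int) -> bytes:
    # stand-in for "previous signatures + randomness evolution at this stage"
    return hashlib.sha256(signature + step.to_bytes(4, "big")).digest()

def signature_of(n: int, seed: bytes = b"seed") -> bytes:
    sig = seed
    for step in range(1, n + 1):   # must pass through every earlier table
        sig = evolve(sig, step)
    return sig

print(signature_of(1000).hex()[:16])   # requires all 999 earlier steps first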
relowe wrote:
My theorizing is - are these parameters/characteristics available from individual tables so that they could be, say, 'aggregated' and applied in a way that makes some sort of super-predictive GAT?
Based on the previous paragraph, this happens already. Any later GAT can be considered a "super-predictive GAT" because it aggregates the influence of all the previous GATs. This is the core process in GAT. There can be logical errors in doing this aggregation outside the logic implemented by GAT. A prediction remains valid ONLY if we avoid using data that must be unknown when estimating a new GAT. That means, if I make a prediction with GAT #1000 for draw X, I mustn't know the prediction accuracy for draw X of any previous GAT used to produce GAT #1000. Knowing and using this info to estimate GAT #1000 invalidates the prediction.

lottoarchitect

relowe
Advanced
Posts: 63
Joined: Tue Feb 05, 2008 12:17 am
Location: Melbourne

Re: Just Theorizing

Post by relowe » Thu Jul 12, 2012 1:37 am

Thanks LA for the information above. It helps in understanding the process and how we may extract the best out of GAT, but looking at skirrow's excellent results it would appear he might have found a super GAT generating his predictions. Top results there.

If I may ask further, and again with skirrow's results in mind, where he has used just 10 draws as statistical data - which I gather must be where the lottery 'signature' information is found - is there a minimum number of draws needed to find this information, i.e. might 3 be enough, and conversely can one have too many? And if there is a minimum number of draws one shouldn't go below, might there be an optimum number of draws to use, and if so, how might that number be found, given that the program allows us to choose between 5 and 500?

So if just a few past draws are used, might 'signature' recognition be relatively poor, but if it does find a signature it must be a strong one? And if one were to use 500, might it find every signature regardless of its strength?

I also appreciate your comment re 'I don't want to give out many details of the process, to protect the system', so whatever you can advise will be well received.

relowe

lottoarchitect
Site Admin
Posts: 1635
Joined: Tue Jan 15, 2008 5:03 pm
Location: Greece

Re: Just Theorizing

Post by lottoarchitect » Thu Jul 12, 2012 11:25 am

Hi relowe, there aren't any easy answers to your questions. Primarily, the process evolves in a manner that isn't humanly understandable, which means we can't really state exact reasons why something happens or not. That is the nature of the process, and the only thing we can do is test the results and possibly conclude whether something works or not. Each lottery is unique in its properties, which means a particular setting used in Skirrow's results might not be as productive in another lottery, where other settings can be more rewarding. The case really is to test various settings and decide what can deliver a better outcome. If we can't determine such a favorable setting, we just opt for the quickest one.

Having said that, even if Skirrow had used 20 stat. data, I believe he'd have got similar results, perhaps at a different GAT ID scale (maybe sooner, maybe later). The nature of the process doesn't care that much whether there are 10 or 20 stat. data: anything that can be found in 10 draws can also be found in 20, plus something more. Of course the GATs will propagate differently, but in the end there will be similarly performing GATs to use, perhaps even better performing ones, because there is more data from which to extract even better signatures, which will possibly be longer-living. On the other hand, more data may also deteriorate the outcome, because randomness itself behaves like that, but the actual randomness evolution process will amend this one way or another. Even a setting of 3 stat. data could yield good results, although the amount of information embedded in 3 draws is rather low to be conclusive about performance in the long run; that means we can't expect a long-living signature out of only 3 draws, although it is not impossible.

As you can see, there is no easy answer. Basically, in all the tests I have performed, 10-20 is the optimum setting for stat. data; it contains enough data for prediction and is also not so big that it slows down the evolution process. I haven't yet found any lottery that benefits from bigger stat. data settings, although I can't exclude the possibility.

The case with total tested draws has a similar problem. If we test a small range, then we can find the most optimal GATs that perform admirably in this short range. Over a longer test-draws period, we may have GATs that deliver more overall hits but not so often consecutively, as is generally the case with a lower range; there can be GATs that produce even 10 consecutive good wins within that big test range, but overall their total hits will be lower (and therefore they are not shown in the panorama, since this means they have much longer cold cycles). The first case cannot show the actual cold cycles which exist in every GAT, so we can't be sure whether such a GAT is at the end of its hot cycle; the latter case does contain such cold cycles and is therefore closer to real performance. The sweet spot should be somewhere between those two extremes.

The GAT default settings take the safest route: more tested draws (default 100), which resembles real-life performance over a longer period of time. That means that a table tested on e.g. 100 draws can possibly deliver a similar outcome over the next 100 future draws. We can't say that for a GAT derived over a span of 10 tested draws. If it did manage e.g. 10 consecutive times to match 5 correct, it would be extraordinary to expect it to achieve that performance again over the 10 future draws; a cold cycle is due to begin. The case of 100 tested draws does not have this cycle issue because it already contains several cold cycles, and the more regular the state of these cycles, the more confidence we have in a similar outcome in the future.
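
To make the hot/cold cycle point a bit more concrete, here is a small generic sketch in Python (my own illustration with made-up numbers, not anything taken from inside GAT): given a table's per-draw hit counts over the tested range, it splits them into hot and cold runs. A 10-draw window can easily consist of a single hot run, while a longer window inevitably exposes the cold cycles as well.

# Generic illustration (made-up data, not GAT code): split a per-draw hit
# series into hot/cold runs, where "hot" means at least `good` correct numbers.
def hit_cycles(hits, good=3):
    runs = []                                    # list of (state, length) pairs
    for h in hits:
        state = "hot" if h >= good else "cold"
        if runs and runs[-1][0] == state:
            runs[-1] = (state, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((state, 1))              # start a new run
    return runs

sample = [0, 1, 4, 5, 3, 4, 0, 1, 0, 2, 0, 3, 4, 1, 0]
print(hit_cycles(sample))   # [('cold', 2), ('hot', 4), ('cold', 5), ('hot', 2), ('cold', 2)]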
Finally, the strength of the signature is affected only by the actual lottery data, and is also influenced by the stat. data setting. If the reduced randomness of the draws does not contain strong elements for the signature, then the signature may not deliver several consecutive good hits, or a GAT that can do so may only be found at very late stages, perhaps in the range of billions.

As a general note, the more tested draws we have, the more GATs need to be explored.

cheers
lottoarchitect

relowe
Advanced
Posts: 63
Joined: Tue Feb 05, 2008 12:17 am
Location: Melbourne

Re: Just Theorizing

Post by relowe » Fri Jul 13, 2012 5:54 am

Hi LA,

Many thanks for your very comprehensive reply, which really is a terrific guide to using GAT and to the settings to use. I have had to read it a couple of times to take it all in - longer-living GATs, cold cycles, 10-20 past draws for stat data - it's all there. My problem, in a way, is that the more I use GAT the more I seem to wonder about what is going on in that black box (you have kept the lid on it and I am OK with that), but I have to say I am still intrigued as to how it goes about its business. With lottery signatures and finding them, I was thinking along the lines of a weak radio receiver only capable of receiving the strongest signal at, say, 10 or fewer stat draws, and then a powerful receiver capable of receiving almost every signal out there, which would be when one chose 500 stat draws. I need to adjust my thinking - it isn't like that. But I shall settle on about 15 past draws for the stat setting, work from there, and keep looking for those cold cycles that are about to become hot.

Thanks again

relowe

draughtsman
Site Admin
Posts: 112
Joined: Thu Jan 17, 2008 4:22 am
Location: Melbourne, Australia

Re: Just Theorizing

Post by draughtsman » Mon Jul 30, 2012 10:59 am

Hi Lotto Architect,

If I may add to this thought provoking thread.

I think I am correct in describing the GAT analysis process as a 'chain' type of event where the 'benefits' of one GAT are passed on to the next, i.e. as you have indicated, the 1000th GAT table depends on all those preceding it. So wouldn't it be nice to see the hit table chart rising steadily, with improving hits across the draw history, as the benefits of one GAT table are passed on to the next table to enhance its predictive capability.

Thus there must be a question - why don't we see such a rising predictive trend, instead of the one, two or three hot cycles we now see across the analyzed draws?

I will presume it is because the good 'genes' of one table become negated when they encounter a table of not-so-good predictability. So why not, where a table has good genes, keep this table in reserve, so that when a poor predictive result occurs that poor-result table is bypassed, and only when another table has a good predictive result do the good genes of the earlier table reinforce the genes of that next good table. These reinforced genes would then be directed to the next good predictive GAT table.

Would this not lead to an enhancement of the predictive process?

regards

draughtsman

lottoarchitect
Site Admin
Posts: 1635
Joined: Tue Jan 15, 2008 5:03 pm
Location: Greece

Re: Just Theorizing

Post by lottoarchitect » Tue Jul 31, 2012 3:14 am

Hi draughtsman,

Interesting thinking here, with a reasonable question - you do push this theorizing thread to its limits :)
Anyway,
draughtsman wrote:
I think I am correct in describing the GAT analysis process as a 'chain' type of event where the 'benefits' of one GAT are passed on to the next, i.e. as you have indicated, the 1000th GAT table depends on all those preceding it. So wouldn't it be nice to see the hit table chart rising steadily, with improving hits across the draw history, as the benefits of one GAT table are passed on to the next table to enhance its predictive capability.
Well, with the history known, we can make the results look super successful, but the actual predictions will be just like a random selection. Knowing which are the good "genes" of one GAT and passing them to the next GAT will result in exactly that. The 2nd GAT will be better, the 3rd better than the 2nd, etc., and very soon we'll reach a GAT that hits everything. Does that mean that when we pick that best GAT, it will provide the correct numbers for the next unknown draw? Absolutely not. In fact, the result will have nothing to do with prediction - no different from a random selection.
The reason is that passing "genes" where the performance result is known invalidates the very first rule of prediction: we must not know how well a GAT did in terms of prediction (so we can't know which are the good "genes" in the first place) before passing its knowledge to the next GAT.
In GAT, although GAT tables seem to be produced in a chain, in reality they are not. It is just the process broken into steps that makes it look sequential. Indeed, a later GAT needs information from the preceding ones, and all GATs predicting a particular draw must NOT KNOW that draw, at any stage, in terms of hits production. So, if we predict test draw #10, we can use prediction information for draws up to #9 for all GATs, produce the prediction for draw #10 for all GATs, and finally find out how well they did in predicting draw #10, which can then become part of the known information in all GATs when predicting draw #11 with the same process.

Do you recall version 2.0? It worked exactly like that: it computed all the GATs for a given draw, then updated the hits information, then proceeded to predict the next draw for all GATs. This is how a GAT is formed. From this you can understand that a given GAT does not need (and it would be wrong to use) the complete hits achievement of the previous GATs over the full tested draws in order to be formed. And from this you probably understand that passing "good genes" to the next GATs is not possible, because the information about which are the good genes is not known to its full extent until all the preceding GATs are fully computed for all tested draws - and by the time this information is available, all the following GATs have already been fully computed too.
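
For anyone who wants that discipline spelled out, here is a minimal walk-forward sketch in Python (invented names, nothing like the engine internals): each table predicts a draw using only the hit information of earlier draws, and the draw is scored only after every prediction has been made.

# Generic walk-forward sketch (invented names, not the GAT engine itself):
# predictions for a draw are produced before that draw's hits are revealed.
def walk_forward(tables, draws, pick):
    history = {t: [] for t in tables}            # per-table hit counts known so far
    for draw in draws:
        # every table predicts first, without any access to `draw`
        predictions = {t: pick(t, history[t]) for t in tables}
        # only afterwards is the draw scored and added to the known history
        for t in tables:
            history[t].append(len(predictions[t] & set(draw)))
    return history

# toy usage: table t always picks the numbers t..t+4 (purely to show the flow)
draws = [{3, 8, 15, 22, 31, 40}, {1, 4, 9, 16, 25, 36}]
hits = walk_forward(range(1, 4), draws, lambda t, hist: set(range(t, t + 5)))
print(hits)   # {1: [1, 2], 2: [1, 1], 3: [1, 1]}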
draughtsman wrote:
Thus there must be a question - why don't we see such a rising predictive trend, instead of the one, two or three hot cycles we now see across the analyzed draws?
This can't work, because the outcome will be no better than a random selection of numbers. Any program that does that is in practice just a random number generator; even if it can show brilliant hits, they will be fictitiously produced and irrelevant to actual prediction. This approach will fail miserably, for the reason explained in the previous paragraph.
draughtsman wrote:
I will presume it is because the good 'genes' of one table become negated when they encounter a table of not-so-good predictability. So why not, where a table has good genes, keep this table in reserve, so that when a poor predictive result occurs that poor-result table is bypassed, and only when another table has a good predictive result do the good genes of the earlier table reinforce the genes of that next good table. These reinforced genes would then be directed to the next good predictive GAT table.

Would this not lead to an enhancement of the predictive process?
The randomness evolution process automatically handles this "bad genes" creation, if we can call it that. A bad GAT also plays its part, so its current, partially known information has its role in the prediction. We can't skip that GAT; it is part of the process and a necessary one.

To expand this theory further, the only way to get better hits is to evaluate more GATs. One tool to help towards that is a better evaluator (what is used to propagate randomness evolution - this is where the S-Variance method I talked about belongs), which will simply make good GATs show up sooner, instead of having to inspect several millions. That means, if we run the current GAT engine with its current evaluator and get GATs that e.g. match a 5-hit 10 times, and we find this after 10M GAT tables scanned, a better evaluator will produce this result at e.g. 1M scanned GATs. Both evaluators will produce the same hits; the difference is that a better evaluator will do it quicker. With both evaluators there will still be those "bad" but necessary GATs. The same applies to GATs producing consecutive hits: a "bad" evaluator will simply take millions of scanned GATs to produce that, a better one will give it sooner, so the difference is really the waiting time to get proper GATs to use.
The older 1.x GAT versions used a much more complex evaluator mechanism, which I may re-introduce in a later version as "Type B", with the current one becoming "Type A". The difference between these two evaluators is that "Type A" (the current one in GAT 2.x) is much faster and needs fewer stat. data to do its thing. "Type B" is much slower and needs more stat. data, but can possibly give good GATs at a slightly lower ID ranking - which is the reason it is not included in this version; it is actually slower to reach that lower-ID GAT for the same performance, but it may have some other qualities worth exploring. S-Variance will work in cooperation with these two evaluator modes, but it can't operate on its own.

The other method is the synthetic one. Due to the nature of the process, we can combine various GATs to form better GATs synthetically, without violating the 1st rule of prediction I mentioned above. This can possibly deliver better hits overall than considering on its own each GAT available at the moment we run the synthetic process. So, two low-par GATs (e.g. each picking 5 numbers and each able to deliver 2 correct constantly - think of a flat red line hitting 2) can be combined to produce a 10-number (5+5) synthetic GAT that produces 2 + 2 correct numbers all the time (2 correct out of the 5 picked from each GAT used to produce the synthetic). Of course this case is easy for the user to spot and take advantage of, but think of it in the general situation. Whether this approach is worth doing depends on whether a 10-number GAT can be produced that delivers a similar hits achievement by the time we decide to run the synthetic.

The process I plan to create in GAT will be more complex than that, i.e. combine 3 numbers from GAT 1, 2 from GAT 2, 4 from GAT 3 etc. to produce a synthetic GAT, and it will be applicable at the conclusion of the normal run of the engine. Running this right after each new GAT is computed is out of the question, because combining all these GATs would take a lot of time and stall the normal process. So, this will be available as a second pass over the initial run, to find possibly even better results. For this to work, e.g. if we want to seek 7-number GATs, the engine must also keep track of 1-6 number GATs, so as to be able to combine them into 7-number synthetic GATs, since allowing computation of 1-6 number GATs will deliver the best performing ones for this combination task. The speedup should therefore be disabled when working with the synthetic - it is not strictly required, since each computed GAT can return any amount of requested numbers, but the problem is that it will not be an optimal one for the task.
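
Just to show the arithmetic of the 5+5 example above (a toy sketch in Python with made-up picks and a made-up draw, not the planned synthetic process itself):

# Toy arithmetic for the synthetic idea (made-up picks and draw, not GAT code):
# merging the picks of two 5-number tables gives a 10-number line whose hits
# are simply the sum of the parts, as long as the picks don't overlap.
gat_a = {4, 11, 19, 27, 35}      # pretend this table steadily hits 2 of its 5
gat_b = {2, 8, 23, 31, 44}       # and so does this one
synthetic = gat_a | gat_b        # the 10-number synthetic line

draw = {4, 8, 19, 23, 36, 41}    # a pretend 6-number draw
print(len(gat_a & draw), len(gat_b & draw), len(synthetic & draw))   # 2 2 4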

As you can see, there are a lot of new additions for GAT, each addressing part of the solution to better GAT seeking, and I firmly believe these additions I propose will take GAT to the next level.

cheers
lottoarchitect

draughtsman
Site Admin
Posts: 112
Joined: Thu Jan 17, 2008 4:22 am
Location: Melbourne, Australia

Re: Just Theorizing

Post by draughtsman » Thu Aug 02, 2012 11:40 am

Thanks Lotto Architect for your extensive response to my question. I did think I was onto something there with the passing on of some good genes between the GATs, but only to the next best one. You have given me a lot to digest, but it is clear from this whole thread that one can't violate the prediction process you have within the GAT system.

Your indication of things to come with GAT is very positive, and it is all looked forward to with much anticipation.

Technique in using and interpreting the GAT tables is really what we need to focus on now to get the best out of this predictive process.

thanks again

draughtsman
