Play Now

General Instructions



Below is the NeuralReality AI Engine on-line web user interface for remote neural network input parameter configuration. Please be aware that Neural-Lotto will only calculate up to the latest lottery drawing stored within its databases, as shown in the Last Drawing textbox. It is your responsibility to know the mechanics and timing of your chosen lottery and to allow sufficient time between Neural-Lotto’s calculations and the next lottery drawing. Usually, a minimum of 2 or 3 hours of lead time should suffice. Too little lead time may cause Neural-Lotto to calculate drawing numbers incorrectly, because the latest official drawing is always updated on-line as soon as possible and may overlap with calculations in progress.

Please select the Region, Country, State and Lottery of your choice. The Type, Last Drawing and Price (Neural-Lotto) will be updated accordingly.

Then select the appropriate parameter values (explained below) using the sliders provided. After submitting, you will be redirected to PayPal to pay for the service (the amount shown in the Price textbox). After successful payment, your input parameters will be forwarded to the NeuralReality AI Engine for processing, and the results will be sent to the email address provided in your profile. Please make sure to add our domain (neural-lotto.net) to your list of Safe Senders/Recipients and/or check your Spam folder often.

Even though Neural-Lotto is the most advanced neural network of its kind, and learns the more it is used, predicting lottery draws is not an exact science, and results will vary depending on historic datasets, parameters and general conditions. With enough patience, observation and experimentation, however, spectacular results can be obtained in very short periods of time.

If you don’t know where to set the sliders, we recommend setting them around half-way and starting from there. Each time you play, note your settings and how close the Neural-Lotto results come to the official lottery draw. Then readjust gently, always observing the results. Remember: more is not always better!

Note to Israel users: For the Israel Double Lotto lottery, Neural-Lotto uses variants of the standard algorithms 1 through 5, all infused with basic KGA6 DNA. This is the first real attempt to incorporate KGA6 technological advances into the standard NeuralReality AI algorithms, which is why Double Lotto results will differ from New Lotto results.

Note to New Zealand users: For the NZ Big Wednesday lottery, Neural-Lotto will respond with 6 numbers + 1 bonus. The bonus number represents the coin flip/toss; i.e.: 1 = heads, 2 = tails.

 

Thank you for trying out Neural-Lotto.
You must be a registered user to proceed.




*** MULTI PROMO CODES ***

We can now directly provide highly-discounted multi promo codes!
These enable any user to play without having to go through PayPal each time!
Play at a highly discounted rate without the checkout hassle! Click HERE to start saving!
We also accept Bitcoin (only for KGA6 v3 Official Store purchases): 1FCSHxxeHnhc2tMwDjeJKjQQsb6iPxc58U
We’ll also post Official Store KGA6 v3 PromoCode coupons here from time to time:
N/A (n/a)
n/a

 

 

 

Parameter Definitions


Learning Rate

The Learning Rate parameter ranges from 0 to 1, in steps of 0.05. This parameter controls how quickly the neural network adjusts itself during training, helping it learn faster. However, care should be taken, since a high Learning Rate setting can make the neural network over-adjust itself and may also cause it to skip certain pattern-search branches, sometimes producing inaccurate results.
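
As a rough illustration only (not the NeuralReality AI Engine’s actual internals), the learning rate in a typical neural network scales each weight adjustment. The toy gradient-descent step below, with hypothetical names and a made-up target function, shows how a high setting can overshoot while a low one converges slowly:

    # Toy sketch of a learning-rate-scaled weight update (hypothetical names).
    def gradient_step(weight, gradient, learning_rate):
        # A large learning_rate takes a big step and can overshoot the minimum;
        # a small one converges slowly but more safely.
        return weight - learning_rate * gradient

    # Minimizing f(w) = (w - 3)^2, whose gradient is 2 * (w - 3):
    w = 0.0
    for _ in range(20):
        w = gradient_step(w, 2 * (w - 3), learning_rate=0.25)
    print(round(w, 4))  # approaches 3.0; with learning_rate=1.05 the steps diverge instead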

History
The History parameter ranges from 10 to 200, in steps of 1. This parameter specifies how many historic drawings the neural network should use to search for patterns and calculate numbers. Even though each lottery contained within the Neural-Lotto databases spans more than 200 drawings, tests have shown that patterns and trends are more pronounced in shorter series. Generally, using more than 200 linear historic drawings (without a Stepping parameter) tends to over-express patterns or render chaotic pattern sets. This parameter is closely linked to the Epochs and Stepping parameters.
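
Conceptually (a sketch only, with stand-in data rather than the real Neural-Lotto databases), the History window is simply the most recent block of drawings taken from the full archive:

    import random

    # Stand-in archive of 2,000 past 6/49 draws, oldest first (hypothetical data).
    archive = [sorted(random.sample(range(1, 50), 6)) for _ in range(2000)]

    history = 100                 # History slider value
    window = archive[-history:]   # only the 100 most recent draws are fed to the network
    print(len(window))            # 100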

Neurons/Layers
The Neurons/Layers parameter ranges from 10 to 990, in steps of 10. This parameter specifies how many neurons are created per hidden layer (specified later on). For example, if Neurons/Layers = 50 and Layers = 20, then 1,000 neurons will be created (50 x 20). Please note that this does not include Perceptrons: Neural-Lotto can include up to 20,000 additional Perceptrons, as it sees fit, depending on overall conditions, user parameters and the historic dataset. More neurons may not necessarily translate into greater precision, depending on existing conditions, but will require more processing time. Experimentation with your chosen lottery is required for optimum results.
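
As a simple illustration (an assumed structure, not the engine’s actual layout), the two sliders together define the hidden-layer geometry and the total neuron count:

    # Hypothetical sketch of how the two sliders define the hidden-layer geometry.
    neurons_per_layer = 50
    layers = 20

    hidden_sizes = [neurons_per_layer] * layers   # 20 hidden layers of 50 neurons each
    total_neurons = neurons_per_layer * layers    # 50 x 20 = 1,000 neurons
    print(hidden_sizes[:3], total_neurons)        # [50, 50, 50] 1000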

Epochs
The Epochs parameter ranges from 100 to 10,000, in steps of 100. This parameter specifies how many training cycles (run, compare, adjust) must be completed prior to the final run. In certain cases, Neural-Lotto will not complete all epochs if the desired result is reached earlier (once at least 50% of the specified epochs have been executed). However, training Neural-Lotto for the maximum number of cycles does not necessarily guarantee maximum precision, as it is possible to overdo it and make the neural network stray. This parameter is closely linked to the Learning Rate and Momentum parameters. While other, lesser neural networks require up to 5 million training cycles or more, the highly advanced NeuralReality AI Engine needs 10 thousand cycles or less. Once again, experimentation is the best way to proceed in this case.
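
A minimal sketch of that early-stop rule, assuming a hypothetical train_one_epoch() callback and error target (none of these names come from the actual engine):

    # Hypothetical sketch of an epoch loop with the 50%-minimum early stop described above.
    def train(network, epochs, target_error, train_one_epoch):
        for epoch in range(1, epochs + 1):
            error = train_one_epoch(network)              # one cycle: run, compare, adjust
            if error <= target_error and epoch >= epochs // 2:
                break                                     # desired result reached after at least 50% of epochs
        return network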

Layers
The Layers parameter ranges from 1 to 999, in steps of 1 (approx.). This parameter specifies how many hidden layers are created, each with the specified number of neurons. For example, if Neurons/Layers = 50 and Layers = 20, then 1,000 neurons will be created (50 x 20). More hidden layers translate into a more stable network, but do not necessarily translate into greater precision. They also make the dynamic multithreaded backpropagation work harder. While desirable in certain conditions, excessive hidden layers can result in a huge neural network that learns slowly and may require unsafe amounts of Momentum (explained later on) to converge. As with the Neurons/Layers parameter, careful trial and observation is the best way to go.
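
To see why depth adds work per pass, here is a bare-bones forward pass through a stack of hidden layers (a generic numpy sketch under assumed sizes, not the engine’s multithreaded implementation):

    import numpy as np

    # Generic forward pass through 20 hidden layers of 50 neurons each (assumed sizes).
    rng = np.random.default_rng(0)
    sizes = [6] + [50] * 20 + [6]             # input, 20 hidden layers, output
    weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes[:-1], sizes[1:])]

    x = rng.standard_normal(6)                # one input vector
    for w in weights:
        x = np.tanh(x @ w)                    # every extra layer adds another matrix product
    print(x.shape)                            # (6,)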

Algorithm
The Algorithm parameter ranges from 1 to 5, in steps of 1. This parameter specifies which of the 5 highly adaptive, fuzzy logic artificial intelligence learning algorithms is used to process the data. There are no hard-and-fast rules as to which algorithm is better; hence no descriptions or names are provided.

Stepping
The Stepping parameter ranges from 0 to 200, in steps of 1. This parameter specifies how many decremental (backstepping) historic blocks are used for training purposes. For example, if Stepping = 10 and History = 100, then 10 cycles/blocks/steps (x Epochs) are executed. So, in a 2,000 historic-drawing lottery, Draws #1900 to #1999 (History = 100) are used to find the pattern resulting in Draw #2000 (Step 1). Then Draws #1899 to #1998 (again History = 100) are used to find the pattern resulting in Draw #1999 (Step 2), and so on, until 10 Steps (blocks) are completed, ending with Draws #1891 to #1990 used to find the pattern resulting in Draw #1991 (Step 10). Care must be taken, as this parameter (when equal to or greater than 1) is effectively multiplied by the number of Epochs. A setting of 0 cancels the parameter, so only a single historic block is used to obtain the latest drawing. Using the maximum setting effectively doubles the amount of historic drawings used.
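
A rough sketch of how those backstepping blocks could be enumerated (hypothetical names; the draw archive is numbered oldest-first, as in the example above):

    # Hypothetical sketch: enumerate the backstepping training blocks described above.
    def stepping_blocks(total_draws, history, stepping):
        blocks = []
        for step in range(max(stepping, 1)):       # Stepping = 0 behaves like a single block
            target = total_draws - step            # draw number the block tries to predict
            start = target - history               # first draw in that History window
            blocks.append((start, target - 1, target))
        return blocks

    # With 2,000 draws, History = 100 and Stepping = 10:
    for start, end, target in stepping_blocks(2000, 100, 10):
        print(f"Draws #{start}-#{end} -> Draw #{target}")
    # Prints Draws #1900-#1999 -> Draw #2000 down to Draws #1891-#1990 -> Draw #1991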

Momentum
The Momentum parameter ranges from 0 to 1, in steps of 0.05. This parameter helps the neural network avoid settling or converging on local minima and/or optima, improving the learning rate in some situations by helping to smooth out unusual conditions in the training dataset. Care must be taken when setting Momentum: a value that is too high risks overshooting the minima (and/or optima), which can destabilize the neural network. A value that is too low cannot reliably avoid local minima/optima and can also make training slow. Experimentation and observation are advised.
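
For illustration only (a generic momentum-style update with hypothetical names, not necessarily what the NeuralReality AI Engine does internally), a momentum term blends the previous step into the current one so the weights can roll past small local dips:

    # Generic momentum update sketch (hypothetical names).
    def momentum_step(weight, velocity, gradient, learning_rate, momentum):
        # The carried-over velocity helps skip shallow local minima, but a momentum
        # value that is too high can overshoot and oscillate around the target.
        velocity = momentum * velocity - learning_rate * gradient
        return weight + velocity, velocity

    # Minimizing f(w) = (w - 3)^2 again, gradient 2 * (w - 3):
    w, v = 0.0, 0.0
    for _ in range(30):
        w, v = momentum_step(w, v, 2 * (w - 3), learning_rate=0.1, momentum=0.5)
    print(round(w, 2))  # approximately 3.0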

Wheeling
The Wheeling parameter permits selecting more output numbers than your lottery of choice normally permits. For example, if your lottery is of type 6/49, the Wheeling parameter instructs the neural network to output 7 or more numbers instead of the usual 6. The resulting numbers can then be used in a wheeling system of your choice. If this parameter is unchecked, the neural network will output only the standard base numbers for the lottery. Please bear in mind that not all lotteries are “wheelable”.
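
For example (a generic full-wheel sketch, independent of Neural-Lotto, with made-up numbers), wheeling 8 numbers in a 6/49 game means playing every 6-number combination of those 8:

    from itertools import combinations

    # Generic full-wheel sketch: play every 6-number ticket drawn from 8 wheeled numbers.
    wheeled = [3, 7, 12, 19, 24, 31, 40, 47]   # 8 numbers instead of the usual 6
    tickets = list(combinations(wheeled, 6))
    print(len(tickets))                        # 28 tickets (8 choose 6)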

KGA6 v3
The KGA6 v3 checkbox appears if your lottery of choice is suitable/compatible. When it is checked, all parameters will be reset to their initial values and disabled. Only the Wheeling parameter will default to an internal preset value, but it will also be disabled; this value cannot be changed. Also, you will not be able to “remember” your settings for this lottery. Your order will be forwarded to the KGA6 v3 Engine instead of the NeuralReality AI Engine. Please bear in mind that KGA6 will respond with a series of combinations suitable for direct play. You must make sure you play all combinations! Also, the KGA6 results will not be visible on-line; they will be emailed to your registered address. You must also make sure you include the neural-lotto.net domain in your Safe Senders list, or you may not receive results in a timely manner!