• Neural networks from scratch in Forth

    From Ahmed@21:1/5 to All on Mon Dec 2 20:12:56 2024
    Hi,
    Here is a session (with gforth) using neural networks
    (neural_networks.fs) applied to the XOR operation.

    ----------------the session begins here---------
    Gforth 0.7.9_20200709
Authors: Anton Ertl, Bernd Paysan, Jens Wilke et al., for more type `authors'
    Copyright © 2019 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later
    <https://gnu.org/licenses/gpl.html>
    Gforth comes with ABSOLUTELY NO WARRANTY; for details type `license'
    Type `help' for basic help

    \ this is a session using neural networks for the XOR operation ok
    include neural_networks.fs
    neural_networks.fs:134:1: warning: redefined b with B
    locate1.fs:142:3: warning: original location ok

    \ create data ok
    4 >n_samples ok
    create data1 ok
    0e f, 0e f, 0e f, ok
    0e f, 1e f, 1e f, ok
    1e f, 0e f, 1e f, ok
    1e f, 1e f, 0e f, ok
    data1 >data ok
    \ this concerns the XOR operation ok

\ create the neural network: it has 2 inputs, 1 output, and 2 hidden layers with 5 neurons in each hidden layer ok
    1 5 5 2 2 neuralnet: net1 ok
    ' net1 is net ok
    net_layers ok

    \ activation functions ok
    ' dlatan is act_func ok
' dllinear is act_func_ol \ a linear activation function for the output layer ok

    \ setting learning rate ok
    1e-3 >eta ok
    0e >beta ok

    \ tolerance and relative tolerance ok
    1e-4 >tol ok
    0e >rtol ok

    \ epochs ok
    1000000 >epochs ok
\ this is the maximal number of epochs; the algorithm terminates when the cost is less than the tolerance tol ok

    \ setting display steps when learning ok
    1000 >display_step ok

    \ adaptation of eta to speedup learning if possible ok
    false >adapt_eta ok

\ reinitialize the weights and biases before each learning run when redoing the learning phase ok
    true >init_net ok

\ method to initialize weights and biases ok
    ' init_weights_2 is init_weights ok
    ' init_biases_2 is init_biases ok

\ now we launch the learning (backpropagation algorithm) ok
    learn
    Learning...
    -----------
    epochs| Cost
    ------+ ----
    0 1.9799033462046
    1000 0.478161583121087
    2000 0.435711003426376
    3000 0.376641058924564
    4000 0.289059769511348
    5000 0.175586135423502
    6000 0.0717553727810072
    7000 0.0181228454797771
    8000 0.00315094688675379
    9000 0.000449783250624701 ok

    \ now we verify it ok
    test
    inputs | outputs (desired outputs)
    -------+--------------------------
    0. 0. | 0.006715207738167 (0. )
    0. 1. | 0.991841706392265 (1. )
    1. 0. | 0.993839285400743 (1. )
    1. 1. | 0.00680589396777978 (0. ) ok

    \ we can also do predictions ok
    0e 1e to_inputs forward_pass .outputs
    out_n°| value
    ------+------
    0 | 0.991841706392265 ok
\ which is correct (approximately equal to 1) ok

    -----------the session finishes here----------------------

The program works with Gforth, iForth and VFX Forth.

    Ahmed

    --

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ahmed@21:1/5 to All on Mon Dec 2 20:16:48 2024
    Hi again,
    Here is the program neural_networks.fs.

    -----------The code begins here---------------

    include random.fs \ for Gforth
    \ include random3.frt \ for iForth
    \ : random choose ; \ for iForth and vfxForth



    \ ------------
    \ Net construction
    \ -------------

    \ activation functions
    : dllinear ( f: x -- x 1) 1e ;

    : sigmoid fnegate fexp 1e f+ 1/f ;
    : dlsigmoid ( f: x -- y y' )
    sigmoid fdup fdup 1e fswap f- f*
    ;

    : dlatan fdup fatan fswap fdup f* 1e f+ 1/f ;

    : dltanh ftanh fdup fdup f* 1e fswap f- ;

    defer act_func
    ' dlatan is act_func

    defer act_func_ol
    ' dllinear is act_func_ol
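
\ Each dlXXX word leaves the activation value and its derivative on
\ the FP stack ( f: x -- y y' ); the backward pass uses y' directly.
\ A quick sanity check at the prompt (a sketch, not from the original
\ post):
\   0e dlsigmoid f. f.  \ prints y' = 0.25, then y = 0.5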

    \ neural net
    defer net

    variable count_layer
    : >count_layer count_layer ! ;
    : count_layer> count_layer @ ;

    : (neuralnet)
    create dup , swap , 0 , 0 do , 0 , loop , 0 ,
    ;

    : layer_address net cell+ count_layer @ 2* cells + cell+ ;
    : set_layer_address here layer_address ! ;
    : get_layer_address layer_address @ ;

    : neurons_current_layer net cell+ count_layer @ 2* cells + @ ;
    : neurons_previous_layer net cell+ count_layer @ 1- 2* cells + @ ;
    : neurons_next_layer net cell+ count_layer @ 1+ 2* cells + @ ;

    : input_layer
    0 count_layer !
    set_layer_address
    neurons_current_layer 0 do 0e f, loop \ O
    ;

    : add_layer
    1 count_layer +!
    set_layer_address

    neurons_current_layer 0 do 0e f, loop \ O
    neurons_current_layer 0 do 0e f, loop \ Op
    neurons_current_layer 0 do 0e f, loop \ D
    neurons_current_layer 0 do 0e f, loop \ B
    neurons_current_layer 0 do 0e f, loop \ dB

    neurons_current_layer 0 do
    neurons_previous_layer 0 do \ weights
    0e f, \ w
    loop
    loop

    neurons_current_layer 0 do
    neurons_previous_layer 0 do \ dweights
    0e f, \ dw
    loop
    loop
    ;

    : output_layer add_layer ;

    \ inputs/outputs
    100 value n_inputs_max
    100 value n_outputs_max

    create inputs n_inputs_max floats allot
    create outputs n_outputs_max floats allot
    create desired_outputs n_outputs_max floats allot

    variable n_inputs
    variable n_outputs

    : get_n_inputs net cell+ @ ;
    : get_n_outputs net cell+ net @ 0 do 2 cells + loop 2 cells + @ ;

    \ neuralnet
    : neuralnet:
    (neuralnet)
    ;

    : net_layers
    input_layer
    net @ 0 do
    add_layer
    loop
    output_layer
    get_n_inputs n_inputs !
    get_n_outputs n_outputs !
    ;
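
\ Reading (neuralnet) above, the stack convention appears to be:
\   n_outputs  h1 .. hN  n_inputs  N  neuralnet: name
\ so "1 5 5 2 2 neuralnet: net1" gives a 2-input, 1-output net with
\ two hidden layers of 5 neurons, and "7 2 6 1 neuralnet: net1"
\ gives a 6-input, 7-output net with one hidden layer of 2 neurons.
\ After "' name is net", net_layers allots the layer storage and
\ caches n_inputs and n_outputs.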

    : th_layer ( l -- a)
    count_layer ! get_layer_address
    ;


    : O ( nl al i -- nl al a)
    floats
    over +
    ;

    : Op ( nl al i -- nl al a)
    >r
    over r> + floats
    over +
    ;

    : D ( nl al i -- nl al a)
    >r over 2* r> + floats
    over +
    ;

    : B ( nl al i -- nl al a)
    >r
    over 3 * r> + floats
    over +
    ;

    : dB ( nl al i -- nl al a)
    >r
    over 4 * r> + floats
    over +
    ;

    : W ( np nl al d s -- np nl al a)
    rot >r >r >r
    2dup 5 * swap
    r> * + r> + floats r> tuck +
    ;

    : dW ( np nl al d s -- np nl al a)
    rot ( np nl d s al)
    >r >r >r ( np nl) ( r: al s d)
    2dup swap dup ( np nl nl np np)
    r> * r> + ( np nl nl np np*d+s) ( r: al)
    rot rot ( np nl np*d+s nl np)
    5 + * + ( np nl np*d+s+nl*[np+5])
    floats
    r> tuck +
    ;
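
\ The offset words above imply this per-layer layout, for a layer
\ with nl neurons and np neurons in the previous layer (all floats):
\   O[nl]  Op[nl]  D[nl]  B[nl]  dB[nl]  W[nl][np]  dW[nl][np]
\ i.e. O at i, Op at nl+i, D at 2*nl+i, B at 3*nl+i, dB at 4*nl+i,
\ W[d][s] at 5*nl + d*np + s, and dW[d][s] at nl*(np+5) + d*np + s.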

    : calc_input_layer \ input layer
    0 count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    inputs i floats + f@
    i O f!
    loop
    2drop
    ;

    : calc_hidden_layers \ hidden layers
    net @ 0 do
    i 1+ count_layer !
    neurons_previous_layer
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    i B f@
    neurons_previous_layer 0 do
    j i W f@
    -1 count_layer +!
    neurons_current_layer
    get_layer_address
    i O f@
    f* f+
    2drop
    1 count_layer +!
    loop
    act_func
    i Op f!
    i O f!
    loop
    2drop drop
    loop
    ;

    : calc_output_layer \ output layer
    net @ 1+ count_layer !
    neurons_previous_layer
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    i B f@
    neurons_previous_layer 0 do
    j i W f@
    -1 count_layer +!
    neurons_current_layer
    get_layer_address
    i O f@
    f* f+
    2drop
    1 count_layer +!
    loop
    act_func_ol
    i Op f!
    i O f!
    loop
    2drop drop
    ;

    : >outputs \ outputs
    net @ 1+ count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    i O f@
    outputs i floats + f!
    loop
    2drop
    ;

    : forward_pass
    calc_input_layer
    calc_hidden_layers
    calc_output_layer
    >outputs
    ;
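
\ Forward pass, per neuron d of each layer:
\   O[d] = f( B[d] + sum over s of W[d][s] * O_prev[s] )
\ with act_func in the hidden layers and act_func_ol in the output
\ layer; Op[d] keeps the derivative f' for the backward pass.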

    \ ------------------------
    \ Learning with Backpropagation Algorithm
    \ ------------------------

    \ the criterion: cost
    fvariable cost
    : calc_cost
    0e
    n_outputs @ 0 do
    outputs i floats + f@
    desired_outputs i floats + f@
    f- fdup f* f+
    loop
    0.5e f*
    cost f!
    ;

    : see_cost forward_pass calc_cost ;
    : .cost see_cost cost f@ f. ;

    \ init W and B randomly
    : (frandom) 10000000000 dup s>f random s>f fswap f/ ;
    : frandom ( f: a--b) (frandom) f* ;
    : frand ( f: a -- b) fover f- frandom f+ ;
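\ (frandom) builds a uniform deviate in [0,1) from the integer
\ random ( n -- 0..n-1 ); frand then maps it into [a,b).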

    defer init_dweights
    defer init_dbiases

    defer init_weights
    defer init_biases

    : init_dweights_1
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_previous_layer
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    neurons_previous_layer 0 do
    0e j i dW f!
    loop
    loop
    2drop drop
    loop
    ;

    : init_dbiases_1
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    0e i dB f!
    loop
    2drop
    loop
    ;

    : init_weights_1
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_previous_layer
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    neurons_previous_layer 0 do
    -1e-1 1e-1 frand j i W f!
    loop
    loop
    2drop drop
    loop
    ;

    : init_biases_1
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    -1e-1 1e-1 frand i B f!
    loop
    2drop
    loop
    ;

    : init_weights_2
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_previous_layer
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    neurons_previous_layer 0 do
    -1e 1e frand j i W f!
    loop
    loop
    2drop drop
    loop
    ;

    : init_biases_2
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    -1e 1e frand i B f!
    loop
    2drop
    loop
    ;

    ' init_dweights_1 is init_dweights
    ' init_dbiases_1 is init_dbiases

    ' init_weights_1 is init_weights
    ' init_biases_1 is init_biases

    : init_net
    init_dweights
    init_dbiases
    init_weights
    init_biases
    ;

    \ deltas
    : calc_deltas_output_layer
    net @ 1+ count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    outputs i floats + f@
    desired_outputs i floats + f@
    f-
    i D f!
    loop
    2drop
    ;

    : calc_deltas_hidden_layers
    0 net @ 1- do
    i 1+ count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    i Op f@
    count_layer @ 1+ count_layer !
    neurons_previous_layer
    neurons_current_layer
    get_layer_address
    0e
    neurons_current_layer 0 do
    i D f@
    i j W f@
    f* f+
    loop
    2drop drop
    count_layer @ 1- count_layer !
    f*
    i D f!
    loop
    2drop
    -1
    +loop
    ;

    : calc_deltas
    calc_deltas_output_layer
    calc_deltas_hidden_layers
    ;
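
\ The deltas follow the usual backpropagation equations:
\   output layer:  D[d] = O[d] - desired[d]
\   hidden layers: D[d] = Op[d] * sum over k of D_next[k] * W_next[k][d]
\ (the output delta is not scaled by Op, which is exact for the
\ linear act_func_ol used in the sessions in this thread)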

\ calculate weight and bias increments
    fvariable eta
    fvariable beta

    : >eta eta f! ;
    : >beta beta f! ;

    1e-4 >eta
    9e-1 >beta
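
\ The increments implement gradient descent with momentum:
\   dW <- beta * dW - eta * D * O_prev   (calc_dweights)
\   dB <- beta * dB - eta * D            (calc_dbiases)
\   W  <- W + dW                         (update_weights)
\   B  <- B + dB                         (update_biases)
\ beta = 0e gives plain gradient descent, as in the XOR session.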

    \ dweights
    : calc_dweights
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_previous_layer
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    i D f@
    neurons_previous_layer 0 do
    -1 count_layer +!
    neurons_current_layer
    get_layer_address
    i O f@
    1 count_layer +!
    fover f*
    eta f@ f* fnegate
    2drop
    j i dW f@
    beta f@ f*
    f+
    j i dW f!
    loop
    fdrop
    loop
    2drop drop
    loop
    ;

    \ dbiases
    : calc_dbiases
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    i D f@
    eta f@ f* fnegate
    i dB f@
    beta f@ f*
    f+
    i dB f!
    loop
    2drop
    loop
    ;


    \ update weights and biases
    \ weights
    : update_weights
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_previous_layer
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    neurons_previous_layer 0 do
    j i dW f@
    j i W f@
    f+
    j i W f!
    loop
    loop
    2drop drop
    loop
    ;

\ biases
    : update_biases
    net @ 1+ 0 do
    i 1+ count_layer !
    neurons_current_layer
    get_layer_address
    neurons_current_layer 0 do
    i dB f@
    i B f@
    f+
    i B f!
    loop
    2drop
    loop
    ;

    : one_pass
    forward_pass
    calc_cost
    calc_deltas
    calc_dweights
    update_weights
    calc_dbiases
    update_biases
    ;

    \ data
    variable n_samples
    variable data
    : >n_samples n_samples ! ;
    : >data data ! ;
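
\ The training set is one flat array of floats: each sample stores
\ its n_inputs input values followed by its n_outputs desired
\ outputs, as in the XOR table "0e f, 0e f, 0e f," above.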

    \ one epoch
    fvariable sum_cost
    fvariable previous_sum_cost
    1e9 previous_sum_cost f!

    : one_epoch
    0e sum_cost f!
    n_samples @ 0 do
    data @
    i n_inputs @ n_outputs @ + *
    n_inputs @ 0 do
    2dup
    i + floats + f@
    inputs i floats + f!
    loop
    2drop

    data @
    i n_inputs @ n_outputs @ + * n_inputs @ +
    n_outputs @ 0 do
    2dup
    i + floats + f@
    desired_outputs i floats + f!
    loop
    2drop

    one_pass
    cost f@ sum_cost f@ f+ sum_cost f!
    loop
    ;

    \ learn for several epochs
    variable n_epochs
    fvariable tol \ tolerance
    fvariable rtol \ relative tolerance
    variable display_step
    variable adapt_eta?
    variable init_net?

    : >epochs n_epochs ! ;
    : >tol tol f! ;
    : >rtol rtol f! ;
    : >display_step display_step ! ;
    : >adapt_eta adapt_eta? ! ;
    : >init_net init_net? ! ;

    1000 >epochs
    1e-3 >tol
    0e >rtol
    1 >display_step
    false >adapt_eta
    true >init_net

    : learn
    cr s" Learning..." type
    cr s" -----------" type
    cr s" epochs| Cost" type
    cr s" ------+ ----" type

    init_net? @ if
    init_net
    then
    n_epochs @ 0 do
    one_epoch

    i display_step @ mod 0= if
cr i . 3 spaces sum_cost f@ f. \ 2 spaces previous_sum_cost f@ f.
    then

    sum_cost f@ tol f@ f< if
    unloop exit
    then

    sum_cost f@ previous_sum_cost f@ f- fabs
    rtol f@ f< if
    unloop exit
    then

    adapt_eta? @ if
    sum_cost f@ previous_sum_cost f@ f> if
    eta f@ 0.99e f* >eta
    beta f@ 0.99e f* >beta
    1e9 previous_sum_cost f!
    cr ." -------------updating eta and
    beta-----------------------"
    then
    then
    sum_cost f@ previous_sum_cost f!
    loop
    ;
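
\ learn stops early when sum_cost drops below tol, or when the
\ change in sum_cost between epochs falls below rtol; with
\ adapt_eta set, eta and beta are both shrunk by 1% whenever the
\ cost increases from one epoch to the next.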


    : test
    cr ." inputs | outputs (desired outputs)"
    cr ." -------+--------------------------"
    n_samples @ 0 do
    cr
    n_inputs @ 0 do
    data @ j n_inputs @ n_outputs @ + * i + floats + f@
    inputs i floats + f!
    loop
    forward_pass
    n_inputs @ 0 do
    inputs i floats + f@ f.
    loop
    ." | "
    n_outputs @ 0 do
    outputs i floats + f@ f.
    ." ("
    data @ j n_inputs @ n_outputs @ + * n_inputs @ + i + floats + f@
    f.
    ." ) "
    loop
    loop
    ;


    \ for making predictions
    : to_inputs 0 n_inputs @ 1- do inputs i floats + f! -1 +loop ;
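
\ to_inputs pops n_inputs floats from the FP stack into the inputs
\ array, filling from the last input down, so values are typed in
\ natural order:
\   0e 1e to_inputs  \ inputs[0] = 0, inputs[1] = 1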

    : outputs_ident ;
    : outputs_softmax
    n_outputs @ 0 do
    outputs i floats + f@
    1e0 f* fexp
    outputs i floats + f!
    loop

    0e
    n_outputs @ 0 do
    outputs i floats + f@ f+
    loop

    n_outputs @ 0 do
    outputs i floats + f@
    fover f/
    outputs i floats + f!
    loop
    fdrop
    ;
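
\ outputs_softmax rescales the outputs to exp(o_i) / sum of exp(o_j);
\ the 1e0 f* acts as a fixed temperature of 1. outputs_probs below
\ just divides each output by their sum, which assumes the outputs
\ are non-negative.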

: outputs_probs ( -- )
    0e
    n_outputs @ 0 do
    outputs i floats + f@ f+
    loop
    n_outputs @ 0 do
    outputs i floats + f@
    fover f/
    outputs i floats + f!
    loop
    fdrop
    ;

defer outputs_ips \ i stands for ident, p for probs and s for softmax
    ' outputs_probs is outputs_ips

    : .outputs
    cr ." out_n°| value"
    cr ." ------+------"
    n_outputs @ 0 do
    cr i . ." | " outputs i floats + f@ f.
    loop
    ;

    : net_predict to_inputs forward_pass .outputs ;
    : net_predict_probs to_inputs forward_pass outputs_probs .outputs ;
    : net_predict_softmax to_inputs forward_pass outputs_softmax .outputs ;
    : net_predict_ips to_inputs forward_pass outputs_ips .outputs ;


    \ Prediction: possible forms
    \ net_predict
    \ net_predict_probs
    \ net_predict_softmax
    \ net_predict_ips
    \ forward_pass .outputs
    \ forward_pass outputs_probs .outputs
    \ forward_pass outputs_softmax .outputs


    -----------The code finishes here--------------------

    Enjoy,

    Ahmed

    --

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mhx@21:1/5 to All on Mon Dec 2 21:10:23 2024
    Interesting ... I last looked at neural nets some 40 years ago (iForth
    full distribution dfwforth/examples/neural). At that time Jack Woehr
was also busy with it. There should be Forth files floating around.

Here's my Little Red Riding Hood application; I think it can fit your framework.

    -marcel

    (*
    * LANGUAGE : ANS Forth
    * PROJECT : Forth Environments
    * DESCRIPTION : neural net with backpropagation
    * CATEGORY : Example
    * AUTHOR : Marcel Hendrix, November 26 1989
    * LAST CHANGE : October 13, 1991, Marcel Hendrix
    *)



?DEF Sensors [IF] FORGET Sensors [THEN]



    -- **** Define the layers. ******************************************


    6 =: Sensors -- Inputs
    2 =: HiddenUnits -- set up 1-dimensional I/Hidden/O vectors
-- NOTE that this is ONE unit less than WJ&JH used!
    -- (We have a Hidden HiddenUnit though (dummy)).
    7 =: OutputUnits -- 7 outputs


    INCLUDE backprop.frt

    REVISION -lrrh "--- Neural Applications: LRRH 1.11 ---"


    -- **** End of layer defs. ******************************************

    (* Application Level *)

    :ABOUT
    CR
    CR ." ** Little Red Riding Hood Learns the Facts of Life II **"
    CR ." A Neural Net Application using Backpropagation "
    CR
    CR ." <l> <s> ADD-PAIR -- Pattern <l> primed for linking <s>."
    CR ." <l> î {Grandma Wolf Woodcutter}"
    CR ." <s> î {Love Hate Sex}."
    CR ." DRILL -- All primed pairs are coded-in."
    CR ." NO-CONNECTIONS -- Forget all associations."
    CR ." <l> REACT -- Test if <l> <s> pair is reproduced."
    CR ." .STATUS -- Prints inputs | outputs | targets."
    CR ." .WEIGHTS -- Prints all weights."
    CR ." <z> TO LearningRate -- LearningRate, oscillates > 1000)."
    CR ." <w> TO Retries -- Retry Rate (normally 3000)."
    CR ." Noisy | Clean -- Select if input is noisy or not."
    CR ." <y> TO Noise -- 1 out of <y> relations in <l> is CR"
    " -- corrupted, if Noisy."
    CR ." FALSE | TRUE TO ?display -- See matrices during learning or not."
    CR ." DO-IT! -- Sets up default and learn patterns."
    CR ." .ABOUT -lrrh -- Print this info." CR
    CR ." Note1: When running, '+' and '-' influence LearningRate,"
    CR ." '/' switch .STATUS and .WEIGHTS,"
    CR ." 'd' turns display on and off,"
    CR ." 'ESC' breaks."
    CR ." Note2: <list> PERSON WHATIF? "
    CR ." where <list> is ORed members of the following set: "
    CR ." {BigEars BigEyes BigTeeth Kindly Wrinkled Handsome}"
    CR ." Example: BigEars BigTeeth OR PERSON WHATIF? " ;

    -- Bitpatterns: (only 16 characteristics
    -- are possible ==> n <= 16)

    0 2^x =: BigEars 1 2^x =: BigEyes 2 2^x =: BigTeeth
    3 2^x =: Kindly 4 2^x =: Wrinkled 5 2^x =: Handsome

    -- Likewise, number of actions (p) limited to 16.

    0 2^x =: RunAway 1 2^x =: Scream 2 2^x =: Look?
    3 2^x =: Kiss 4 2^x =: Approach 5 2^x =: OfferFood
    6 2^x =: Flirt

CREATE Grandma 0 1 0 1 1 0 sensor, -- BigEyes Kindly Wrinkled
CREATE Wolf 1 1 1 0 0 0 sensor, -- BigEars BigEyes BigTeeth
CREATE Woodcutter 1 0 0 1 0 1 sensor, -- BigEars Kindly Handsome

    -- Output patterns

    CREATE Love 0 0 0 1 1 1 0 output, -- Kiss Approach OfferFood
CREATE Hate 1 1 1 0 0 0 0 output, -- RunAway Scream Look?
CREATE Sex 0 0 0 0 1 1 1 output, -- Approach OfferFood Flirt

    -- PERSON only works if n <= 32

    Sensors 2+ ARRAY aperson
    Sensors 1+ TO 0 aperson
    One TO 1 aperson

    : PERSON DEPTH 0= ABORT" Describe!" \ <bp1>..<bpn> --- <'input>
    DEPTH 1- 0 ?DO OR LOOP \ BigEars PERSON WHATIF?
    #32 Sensors - LSHIFT
    Sensors 0 DO DUP 0< IF One ELSE Zero ENDIF
    Sensors 1- I - 2+ TO aperson
    1 LSHIFT
    LOOP DROP
    'OF aperson ;

    : .FACT "0.5" \ <n> <bool> --- <>
    > IF CR 1- 2^x
    CASE
    BigEars OF ." -- Big ears" ENDOF
    BigEyes OF ." -- Big eyes" ENDOF
    BigTeeth OF ." -- Big teeth" ENDOF
    Kindly OF ." -- A kindly appearance" ENDOF
    Wrinkled OF ." -- A wrinkled complexion" ENDOF
    Handsome OF ." -- A handsome feller" ENDOF
    ." -- something illegal?"
    ENDCASE
    ELSE DROP
    ENDIF ;

    : .ACTION \ <n> <bool> --- <>
    "0.5"
    > IF CR 2^x
    CASE
    RunAway OF ." -- run away" ENDOF
    Scream OF ." -- scream" ENDOF
    Look? OF ." -- woodcutter?" ENDOF
    Kiss OF ." -- kiss on cheek" ENDOF
    Approach OF ." -- approach" ENDOF
    OfferFood OF ." -- offer food" ENDOF
    Flirt OF ." -- flirt" ENDOF
    ." -- it is something illegal?"
    ENDCASE
    ELSE DROP
    ENDIF ;

    : doLrrh-sensation
    CR ." The little girl digests the following facts :" CR
    /inputs
    1 DO
    I I InputValues .FACT
    LOOP
    CR CR ." That is why she decides to: " CR
    /outputs
    0 DO
    I I ActualOutputs .ACTION
    LOOP CR ;

    : Lrrh-sensation ['] doLrrh-sensation [IS] SHOW-NET ;

    : doLrrh TIMER-RESET
    NO-CONNECTIONS
    Grandma Love ADD-PAIR
    Wolf Hate ADD-PAIR
    Woodcutter Sex ADD-PAIR
    DRILL
    .ELAPSED ;

    : Lrrh ['] doLrrh [IS] DO-IT! ;

    Lrrh-sensation Lrrh #900 TO LearningRate
    .ABOUT -lrrh

    (* End of Application *)

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ahmed@21:1/5 to All on Mon Dec 2 20:24:22 2024
    Hi again,

I used it to approximate functions.
It can be used for multi-input/multi-output (multivariable) functions.
I tested it to recognize the digits 0...9 from a data set based on 8x8
matrices (with 0e and 1e elements) giving a visual representation of the digits.
It can create neural nets of any size (limited by memory and
computation speed).

    Any suggestion is welcome.

    Ahmed

    --

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ahmed@21:1/5 to mhx on Mon Dec 2 23:46:49 2024
    On Mon, 2 Dec 2024 21:10:23 +0000, mhx wrote:

    Interesting ... I last looked at neural nets some 40 years ago (iForth
    full distribution dfwforth/examples/neural). At that time Jack Woehr
    was also busy it with it. There should be Forth files floating around.

    Here's my Little Red Riding Hood application, I think it can fit your framework.

    -marcel


    Here is the code using the neural_networks.fs program:
    This code is saved as lrrh.fs

    --------------the code begins here-----------------

    include neural_networks.fs

    \ data
    \ inputs
    0 value BigEars
    1 value BigEyes
    2 value BigTeeth
    3 value Kindly
    4 value Wrinkled
    5 value Handsome
    \ Outputs
    0 value RunAway
    1 value Scream
    2 value Look?
    3 value Kiss
    4 value Approach
    5 value OfferFood
    6 value Flirt

    \ training data
    : X 1e f, ;
    : _ 0e f, ;

    : Grandma _ X _ X X _ ;
    : Wolf X X X _ _ _ ;
    : Woodcutter X _ _ X _ X ;

    : Love _ _ _ X X X _ ;
    : Hate X X X _ _ _ _ ;
    : Sex _ _ _ _ X X X ;

    3 >n_samples
    create data1
    Grandma Love
    Wolf Hate
    Woodcutter Sex

    data1 >data

    \ neuralnet
    7 2 6 1 neuralnet: net1
    ' net1 is net
    net_layers
    ' dlatan is act_func
    ' dllinear is act_func_ol

    \
    1000000 >epochs
    1e-1 >eta
    0e >beta
    1 >display_step
    1e-4 >tol
    0e >rtol
    true >init_net
    false >adapt_eta

    \ learning

    : 0_1_outputs
    n_outputs @ 0 do
    outputs i floats + f@
    fround
    outputs i floats + f!
    loop
    ;

    : .fact
    0.5e f> if
    cr
    case
    BigEars of ." -- Big ears" endof
    BigEyes of ." -- Big eyes" endof
    BigTeeth of ." -- Big teeth" endof
    Kindly of ." -- A kindly appearance" endof
    Wrinkled of ." -- A wrinkled complexion" endof
    Handsome of ." -- A handsome feller" endof
    ." -- something illegal?"
    endcase
    else
    drop
    then
    ;

    : .action
    0.5e f> if
    cr
    case
    RunAway of ." -- run away" endof
    Scream of ." -- scream" endof
    Look? of ." -- look for the woodcutter" endof
    Kiss of ." -- kiss on the cheek" endof
    Approach of ." -- approach" endof
    OfferFood of ." -- offer food" endof
    Flirt of ." -- flirt" endof
    ." -- it is something illegal?"
    endcase
    else
    drop
    then
    ;

    : doLrrh-sensation
    cr ." The little girl digests the following facts :" cr
    n_inputs @ 0 do
    i inputs i floats + f@ .fact
    loop
    cr cr ." That is why she decides to: " cr
    n_outputs @ 0 do
    i outputs i floats + f@ .action
    loop
    cr
    ;

    defer show-net
    ' doLrrh-sensation is show-net

    defer do-it!
    : doLrrh timer-reset
    learn
    cr .elapsed
    ;

    ' doLrrh is do-it!

    \
    : X 1e ;
    : _ 0e ;
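
\ X and _ now push 0e/1e instead of compiling them with f, , so the
\ person words below leave 6 floats ready for to_inputs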

    : person1 _ X _ X X _ ;
    : person2 X X X _ _ _ ;
    : person3 X _ _ X _ X ;

    : Grandma_ _ X _ X X _ ;
    : Wolf_ X X X _ _ _ ;
    : Woodcutter_ X _ _ X _ X ;

    : seen to_inputs forward_pass 0_1_outputs ;

    \ do-it!
    \ person seen show-net

    -------the code ends here--------------

Here is a session for this program (with Gforth):

    include lrrh.fs

    do-it!
    Learning...
    -----------
    epochs| Cost
    ------+ ----
    0 4.4188457512276
    1 3.53222220254148
    2 3.06114255073205
    3 2.80622819639435
    4 2.66162924195142
    5 2.56800089024257
    6 2.49016050909303
    7 2.40496982172919
    8 2.29464055287916
    9 2.1441048550535
    10 1.94269711717085
    11 1.69027187896353
    12 1.40430631490554
    13 1.1190606012813
    14 0.870894985775117
    15 0.679777383963026
    16 0.543995031957486
    17 0.449275168744935
    18 0.379986601054602
    19 0.324695945315252
    20 0.276816623400656
    21 0.233339792513982
    22 0.193391009200482
    23 0.157134197427679
    24 0.125050933150959
    25 0.0975217531128389
    26 0.0746387387740572
    27 0.0561824514472255
    28 0.0416947230420414
    29 0.0305858793909606
    30 0.0222337205931627
    31 0.0160539496745273
    32 0.0115391355450612
    33 0.00827254819709797
    34 0.00592577107731758
    35 0.00424787278571643
    36 0.00305156160479014
    37 0.00219950142863882
    38 0.00159232177966559
    39 0.00115883013151037
    40 0.000848383616732726
    41 0.000625131744197768
    42 0.00046377036443924
    43 0.0003464626655083
    44 0.000260634102035286
    45 0.0001974076049767
    46 0.000150500468205112
    47 0.000115450242616698
    48 0.0000890730757258027
    85.481600ms ok
    person1 seen show-net
    The little girl digests the following facts :

    -- Big eyes
    -- A kindly appearance
    -- A wrinkled complexion

    That is why she decides to:

    -- kiss on the cheek
    -- approach
    -- offer food
    ok
    person2 seen show-net
    The little girl digests the following facts :

    -- Big ears
    -- Big eyes
    -- Big teeth

    That is why she decides to:

    -- run away
    -- scream
    -- look for the woodcutter
    ok
    person3 seen show-net
    The little girl digests the following facts :

    -- Big ears
    -- A kindly appearance
    -- A handsome feller

    That is why she decides to:

    -- approach
    -- offer food
    -- flirt
    ok
    grandma_ seen ok
    show-net
    The little girl digests the following facts :

    -- Big eyes
    -- A kindly appearance
    -- A wrinkled complexion

    That is why she decides to:

    -- kiss on the cheek
    -- approach
    -- offer food
    ok
    Wolf_ seen ok
    show-net
    The little girl digests the following facts :

    -- Big ears
    -- Big eyes
    -- Big teeth

    That is why she decides to:

    -- run away
    -- scream
    -- look for the woodcutter
    ok
    Woodcutter_ seen ok
    show-net
    The little girl digests the following facts :

    -- Big ears
    -- A kindly appearance
    -- A handsome feller

    That is why she decides to:

    -- approach
    -- offer food
    -- flirt
    ok


Notice that the learning took 85.481600 ms.

    Thanks for this example.

    Ahmed

    --

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From mhx@21:1/5 to All on Tue Dec 3 09:31:24 2024
    Notice that the learning took 85.481600ms

    Thanks for this example.

    I'm impressed that you could port this so quickly and cleanly!

    -marcel

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ahmed@21:1/5 to mhx on Tue Dec 3 13:38:49 2024
    On Tue, 3 Dec 2024 9:31:24 +0000, mhx wrote:


    I'm impressed that you could port this so quickly and cleanly!

    -marcel

    Not really!
I think the word ``0_1_outputs'' is superfluous and we don't need it.
    The definition of the word ``seen'' becomes:

    : seen to_inputs forward_pass ;

    Ahmed

    --

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From albert@spenarnc.xs4all.nl@21:1/5 to Ahmed on Tue Dec 3 16:02:32 2024
    In article <bff5904288a7105aa4c56d00bdf6bb5e@www.novabbs.com>,
    Ahmed <melahi_ahmed@yahoo.fr> wrote:
    On Tue, 3 Dec 2024 9:31:24 +0000, mhx wrote:


    I'm impressed that you could port this so quickly and cleanly!

    -marcel

    Not really!

    You can contest that others may be impressed.

I think the word ``0_1_outputs'' is superfluous and we don't need it.
    The definition of the word ``seen'' becomes:

    : seen to_inputs forward_pass ;

    Ahmed

    --

    Groetjes Albert
    --
    Temu exploits Christians: (Disclaimer, only 10 apostles)
    Last Supper Acrylic Suncatcher - 15Cm Round Stained Glass- Style Wall
    Art For Home, Office And Garden Decor - Perfect For Windows, Bars,
    And Gifts For Friends Family And Colleagues.

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ahmed@21:1/5 to albert@spenarnc.xs4all.nl on Tue Dec 3 15:19:35 2024
    On Tue, 3 Dec 2024 15:02:32 +0000, albert@spenarnc.xs4all.nl wrote:

    In article <bff5904288a7105aa4c56d00bdf6bb5e@www.novabbs.com>,
    Ahmed <melahi_ahmed@yahoo.fr> wrote:
    On Tue, 3 Dec 2024 9:31:24 +0000, mhx wrote:


    I'm impressed that you could port this so quickly and cleanly!

    -marcel

    Not really!

    You can contest that others may be impressed.

    It's not for the impression.

    'Not really!' was for 'quickly and cleanly!'.


    Groetjes Albert


    Ahmed

    --

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ahmed@21:1/5 to All on Tue Dec 3 19:46:33 2024
    Hi again,

I did some examples using this framework. Save the examples in separate
files.

    Example 01: Fitting data (linear case)

    \ ------------------Example 01:--------------------
    include neural_networks.fs

    \ data samples
    \ x1, yd, 10 samples
    10 >n_samples
    create data1
    0e f, 0e f,
    1e f, 1e f,
    2e f, 2e f,
    3e f, 3e f,
    4e f, 4e f,
    5e f, 5e f,
    6e f, 6e f,
    7e f, 7e f,
    8e f, 8e f,
    9e f, 9e f,

    data1 >data


\ neuralnet
    1 2 2 1 2 neuralnet: net1
    ' net1 is net
    net_layers

    ' dllinear is act_func

    1e-2 >eta
    0e >beta
    1000 >epochs
    1e-4 >tol
    1 >display_step
    false >adapt_eta
    true >init_net

    learn
    test

    \ --------------Example 01 ends here-------------------------

    Example 02: fitting data (nonlinear case)

    \ -------------Example 02:-----------------------------------
    include neural_networks.fs

    \ data samples
    \ x1, yd, 10 samples
    10 >n_samples
    create data1
    0e f, 0e f,
    1e f, 1e f,
    2e f, 2e f,
    3e f, 3e f,
    4e f, 4e f,
    5e f, 4e f,
    6e f, 4e f,
    7e f, 3e f,
    8e f, 2e f,
    9e f, 1e f,

    data1 >data

\ neuralnet
    1 5 5 1 2 neuralnet: net1
    ' net1 is net
    net_layers

    ' dlatan is act_func

    1e-2 >eta
    0e >beta
    1000000 >epochs
    1e-2 >tol
    100 >display_step
    false >adapt_eta
    true >init_net

    learn
    test
    \ --------------Example 02 ends here--------------

    Example 03: Approximation of a function: f(x) = sin(x) + cos(x)

    \ --------------Example 03------------------------
    include neural_networks.fs

\ data samples: sin(x)+cos(x) values for x = 0, 1, ..., 9

    : f1() fsincos f+ ;

    defer f()
    ' f1() is f()

    variable n_samples1
    10 n_samples1 !

    create data1 n_samples1 @ 2* floats allot

    : data_samples
    n_samples1 @ 0 do
    i s>f
    fdup
    data1 i 2* floats + f!
    f()
    data1 i 2* 1+ floats + f!
    loop
    ;
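
\ each sample is 2 floats: x followed by f(x), matching the
\ n_inputs/n_outputs layout expected by one_epoch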

    data_samples

    data1 >data
    n_samples1 @ >n_samples

    \ neuralnet
    1 5 5 5 1 3 neuralnet: net1
    ' net1 is net
    net_layers

    ' dlatan is act_func

    1e-3 >eta
    9e-1 >beta
    1000000 >epochs
    1e-4 >tol
    100 >display_step
    false >adapt_eta
    true >init_net

    learn
    test
    \ -------------------Example 03 ends here--------------------

    Example 04: Work with several nets in the same session

    \ ------------------Example 04---------------------------
    include neural_networks.fs

\ data samples: sin(x)+cos(x) values for x = 0, 1, ..., 9

    : f1() fsincos f+ ;

    defer f()
    ' f1() is f()

    variable n_samples1
    10 n_samples1 !

    create data1 n_samples1 @ 2* floats allot

    : data_samples
    n_samples1 @ 0 do
    i s>f
    fdup
    data1 i 2* floats + f!
    f()
    data1 i 2* 1+ floats + f!
    loop
    ;

    data_samples

    data1 >data
    n_samples1 @ >n_samples

    \ neuralnet
    \ 2 neural nets in the same session

    1 10 1 1 neuralnet: net1
    ' net1 is net
    net_layers

    1 10 10 1 2 neuralnet: net2
    ' net2 is net
    net_layers

    ' dlatan is act_func

    \ for use do:
    \ ' net1 is net learn
    \ ' net2 is net learn
    \ ' net3 is net learn
    \ ' net4 is net learn

    1e-2 >eta
    0e >beta
    1000000 >epochs
    1e-4 >tol
    1000 >display_step
    false >adapt_eta
    true >init_net

    cr
    cr
    s" Using net1" type
    s" ----------" type

    ' net1 is net
    learn
    test

    cr
    cr
    s" Using net2" type
    s" ----------" type

    ' net2 is net
    learn
    test

    \ ----------------Example 04 ends here----------------------

    Example 05: Multi-input, multi-output neural net

    \ ---------------Example 05--------------------------------
    include neural_networks.fs

\ data samples: x+y and x*y for x = 0..4 and y = 0..4

    : f1() ( f: x y -- x+y x*y)
    fover fover f+ frot frot f* ;

    defer f()
    ' f1() is f()

    variable n_samples1
    25 n_samples1 !
    4 constant n_i/o

    create data1 n_samples1 @ n_i/o * floats allot

    variable ci

    : data_samples
    5 0 do
    5 0 do
    5 j * i + ci !
    j s>f fdup data1 ci @ n_i/o * floats + f!
    i s>f fdup data1 ci @ n_i/o * 1+ floats + f!
    f()
    fswap
    data1 ci @ n_i/o * 2 + floats + f!
    data1 ci @ n_i/o * 3 + floats + f!
    loop
    loop
    ;
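
\ each sample is 4 floats: x, y, x+y, x*y
\ (the 2 inputs followed by the 2 desired outputs)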

    data_samples

    data1 >data
    n_samples1 @ >n_samples

    \ neuralnet: 2 inputs, 2 outputs and 3 hidden layers (10 neurons each)
    2 10 10 10 2 3 neuralnet: net1
    ' net1 is net
    net_layers

    ' dlatan is act_func

    1e-3 >eta
    0e >beta
    1000000 >epochs
    1e-2 >tol
    1000 >display_step
    false >adapt_eta
    true >init_net

    learn
    test
    \ -------------Example 05 ends here---------------------------

    Enjoy.

    Ahmed

    --

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From Ahmed@21:1/5 to All on Tue Dec 3 20:33:26 2024
    Hi again,
    A last example concerns digit recognition.
    The digit is in {0, 1, ...,9}.
The samples (examples) are 8x8 matrices (patterns for the digits).

The example begins here; it consists of three files:
    - numbers_data.fs
    - use_neural_net.fs
    - net_predict.fs

    These files are given below:

    \ ------------numbers_data.fs---------------------

    \ data samples numbers

    variable n_samples1
    10 n_samples1 !
    : _ 0e f, ;
    : X 1e f, ;

    create data1
    \ sample 1
    \ inputs
    _ _ _ _ _ _ _ _
    _ _ X X X _ _ _
    _ X _ _ _ X _ _
    _ X _ _ _ X _ _
    _ X _ _ _ X _ _
    _ X _ _ _ X _ _
    _ _ X X X _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    X _ _ _ _ _ _ _ _ _

    \ sample 2
    \ inputs
    _ _ _ _ _ _ _ _
    _ _ _ X _ _ _ _
    _ _ X X _ _ _ _
    _ X _ X _ _ _ _
    _ _ _ X _ _ _ _
    _ _ _ X _ _ _ _
    _ _ X X X _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ X _ _ _ _ _ _ _ _

    \ sample 3
    \ inputs
    _ _ _ _ _ _ _ _
    _ _ X X X _ _ _
    _ X _ _ _ X _ _
    _ _ _ _ X _ _ _
    _ _ _ X _ _ _ _
    _ _ X _ _ _ _ _
    _ X X X X X _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ _ X _ _ _ _ _ _ _

    \ sample 4
    \ inputs
    _ _ _ _ _ _ _ _
    _ _ X X X _ _ _
    _ X _ _ _ X _ _
    _ _ _ X X _ _ _
    _ _ _ _ _ X _ _
    _ X _ _ _ X _ _
    _ _ X X X _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ _ _ X _ _ _ _ _ _

    \ sample 5
    \ inputs
    _ _ _ _ _ _ _ _
    _ _ _ _ X _ _ _
    _ _ _ X X _ _ _
    _ _ X _ X _ _ _
    _ X _ _ X _ _ _
    _ X X X X X _ _
    _ _ _ _ X _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ _ _ _ X _ _ _ _ _

    \ sample 6
    \ inputs
    _ _ _ _ _ _ _ _
    _ X X X X X _ _
    _ X _ _ _ _ _ _
    _ X X X X _ _ _
    _ _ _ _ _ X _ _
    _ X _ _ _ X _ _
    _ _ X X X _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ _ _ _ _ X _ _ _ _

    \ sample 7
    \ inputs
    _ _ _ _ _ _ _ _
    _ _ X X X _ _ _
    _ X _ _ _ _ _ _
    _ X X X X _ _ _
    _ X _ _ _ X _ _
    _ X _ _ _ X _ _
    _ _ X X X _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ _ _ _ _ _ X _ _ _

    \ sample 8
    \ inputs
    _ _ _ _ _ _ _ _
    _ X X X X X _ _
    _ X _ _ _ X _ _
    _ _ _ _ X _ _ _
    _ _ _ X _ _ _ _
    _ _ _ X _ _ _ _
    _ _ _ X _ _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ _ _ _ _ _ _ X _ _

    \ sample 9
    \ inputs
    _ _ _ _ _ _ _ _
    _ _ X X X _ _ _
    _ X _ _ _ X _ _
    _ _ X X X _ _ _
    _ X _ _ _ X _ _
    _ X _ _ _ X _ _
    _ _ X X X _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ _ _ _ _ _ _ _ X _

    \ sample 10
    \ inputs
    _ _ _ _ _ _ _ _
    _ _ X X X _ _ _
    _ X _ _ _ X _ _
    _ _ X X X X _ _
    _ _ _ _ _ X _ _
    _ X _ _ _ X _ _
    _ _ X X X _ _ _
    _ _ _ _ _ _ _ _
    \ outputs
    \ 0 1 2 3 4 5 6 7 8 9
    _ _ _ _ _ _ _ _ _ X


    : _ 0e ;
    : X 1e ;
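
\ _ and X are redefined here: inside "create data1" they compiled
\ floats with f, ; from now on they just push 0e/1e so a pattern
\ typed at the prompt can feed to_inputs directly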

    \ -----------------numbers_data.fs ends here---------------------


    \ -----------------use_neural_net.fs----------------------
    include neural_networks.fs

    \ including data
    include numbers_data.fs

    data1 >data
    n_samples1 @ >n_samples


\ neuralnet: 64 inputs, 10 outputs and 1 hidden layer (10 neurons)
    10 10 64 1 neuralnet: net1
    ' net1 is net
    net_layers

    ' dlsigmoid is act_func
    ' dlsigmoid is act_func_ol

    1e-3 >eta
    9e-1 >beta
    1000000 >epochs
    1e-2 >tol
    100 >display_step
    false >adapt_eta
    true >init_net


    learn

    \ --------------use_neural_net.fs ends here-----------------

    \ --------------net_predict.fs---------------------------
    include use_neural_net.fs

    : .result
    0
    0e
    n_outputs @ 0 do
    outputs i floats + f@
    fover fover f< if
    drop
    fnip
    i
    else
    fdrop
    then
    loop
    cr
    cr s" The result:" type
    cr s" -----------" type
    cr s" It is: " type . s" with probability: " type f.
    cr
    ;
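
\ .result above is an argmax over the outputs: it keeps the index
\ and value of the largest output and reports that index as the
\ recognized digit, with the (normalized) output as its probability.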

    \ Learning
    learn

    \ predictions
    cr
    cr s" Now, prediction:" type
    cr s" ----------------" type
    \ change the inputs and see if the neuralnet recognizes it
    \ the inputs

    : X 1e ;
    : _ 0e ;

    cr
    cr s" First example:" type
    cr s" --------------" type
    cr
    _ _ _ _ _ _ _ _
    _ _ X X X _ _ _
    _ X _ _ _ X _ _
    _ _ _ X _ _ _ _
    _ _ _ _ X _ _ _
    _ X _ _ _ X _ _
    _ X X X X _ _ _
    _ _ _ _ _ _ _ _
    to_inputs
    forward_pass outputs_probs .outputs .result
    cr

    cr
    cr s" Second example:" type
    cr s" ---------------" type
    cr
    _ _ _ _ _ _ _ _
    _ _ X X X _ _ _
    _ X _ _ _ X _ _
    _ _ _ _ X _ _ _
    _ _ _ X _ _ _ _
    _ _ X _ _ _ _ _
    _ _ X X X _ _ _
    _ _ _ _ _ _ _ _
    to_inputs
    forward_pass outputs_probs .outputs .result
    cr


    cr
    cr s" Third example:" type
    cr s" -------------" type
    cr
    _ _ _ _ _ _ _ _
    _ _ _ _ X _ _ _
    _ _ _ X _ _ _ _
    _ _ _ X _ _ _ _
    _ _ _ X _ _ _ _
    _ _ _ X _ _ _ _
    _ _ X _ _ _ _ _
    _ _ _ _ _ _ _ _
    to_inputs
    forward_pass outputs_probs .outputs .result
    cr


    cr
    cr s" Fourth example:" type
    cr s" ---------------" type
    cr
    _ _ _ _ _ _ _ _
    _ _ _ _ X _ _ _
    _ _ _ X _ X _ _
    _ _ X _ _ X _ _
    _ X X X X X _ _
    _ _ _ _ _ X _ _
    _ _ _ _ _ X _ _
    _ _ _ _ _ _ _ _
    to_inputs
    forward_pass outputs_probs .outputs .result
    cr


    cr
    cr s" fifth example:" type
    cr s" ---------------" type
    cr
    _ _ _ _ _ _ _ _
    _ X X X X X _ _
    _ _ _ _ _ X _ _
    _ _ _ _ _ X _ _
    _ _ _ _ _ X _ _
    _ _ _ _ _ X _ _
    _ _ _ _ _ _ _ _
    _ _ _ _ _ _ _ _
    to_inputs
    forward_pass outputs_probs .outputs .result
    cr

    \ --------------net_predict.fs ends here----------------


In the terminal (command line), type:
gforth net_predict.fs
or include it from within Gforth, iForth or VFX Forth.


    \ Prediction: possible forms
    \ net_predict
    \ net_predict_probs
    \ net_predict_softmax
    \ net_predict_ips
    \ forward_pass .outputs .result
    \ forward_pass outputs_ident .outputs .result
    \ forward_pass outputs_probs .outputs .result
    \ forward_pass outputs_softmax .outputs .result
    \ forward_pass .result
    \ forward_pass outputs_ident .result
    \ forward_pass outputs_probs .result
    \ forward_pass outputs_softmax .result


    Enjoy.
    Ahmed

    --

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)