I'm pretty new to ML and TensorFlow. What I'm trying to do is essentially invert a function using TensorFlow.
So I have a function h = f(c1, c2, ..., cn, T). It is a smooth function of all its variables. I want to train a model that gives me T given known values of c1...cn and h.
For now I'm using a `keras.Sequential` model with 2 or 3 `Dense` layers.
For the loss I use `mean_absolute_error`, and for the optimizer `Adam()`.
To train the model I generate a dataset from my h(c1...cn, T) function by varying its arguments, and I use the values of T as the training labels.
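For reference, here is a minimal sketch of my setup. The function `f` below is just a smooth stand-in (my real function is different), and the layer widths, sample counts, and value ranges are placeholders I've been experimenting with:

```python
import numpy as np
from tensorflow import keras

# Stand-in for my real smooth function h = f(c1, ..., cn, T);
# the actual function is different, but equally smooth.
def f(c, T):
    return np.sum(c, axis=1) * np.exp(-T) + T ** 2

rng = np.random.default_rng(0)
n_samples, n_c = 10_000, 3

# Generate the dataset by varying the arguments of f.
c = rng.uniform(0.0, 1.0, size=(n_samples, n_c))
T = rng.uniform(0.0, 2.0, size=n_samples)
h = f(c, T)

# Model inputs are (c1..cn, h); the label is T.
X = np.column_stack([c, h])
y = T

model = keras.Sequential([
    keras.Input(shape=(n_c + 1,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer=keras.optimizers.Adam(), loss="mean_absolute_error")
model.fit(X, y, epochs=10, batch_size=64, verbose=0)
```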
The accuracy of the resulting model does not seem very good to me: I'm getting errors of about 10%, which seems poor given that the training data come from a perfectly smooth function.
My questions are:
Am I doing something particularly wrong?
How many units should I provide for each layer? In tutorials they use either Dense(64) or Dense(1). What difference does it make in my particular case? Should the width be proportional to the number of inputs of the model?
Maybe I should use some other types of layers/optimizers/losses?
Thank you in advance for your replies!