I intend to make a better version of this later on. The code here is quite messy and should only be regarded as a proof of concept.
A Variational Autoencoder with PCA and a GUI with sliders to generate faces.
Trained and built from scratch with PyTorch, scikit-learn, and tkinter.
The images used to train the model come from the Flickr-Faces-HQ dataset.
Convolutional encoder
Deconvolutional decoder
(3,128,128) -> [...] -> (512) -> [...] -> (3,128,128)
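As a rough sketch of this shape, a minimal PyTorch VAE with the same input/latent/output signature might look like the following. The layer counts, channel widths, and activations here are assumptions for illustration, not the repository's exact architecture:

```python
import torch
import torch.nn as nn

LATENT = 512  # latent size from the (3,128,128) -> (512) -> (3,128,128) pipeline

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Each stride-2 conv halves the spatial size: 128 -> 64 -> 32 -> 16 -> 8
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        # A VAE predicts a mean and a log-variance for each latent dimension
        self.fc_mu = nn.Linear(256 * 8 * 8, LATENT)
        self.fc_logvar = nn.Linear(256 * 8 * 8, LATENT)

    def forward(self, x):
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT, 256 * 8 * 8)
        # Each stride-2 transposed conv doubles the spatial size: 8 -> ... -> 128
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 256, 8, 8)
        return self.deconv(h)

x = torch.randn(2, 3, 128, 128)
mu, logvar = Encoder()(x)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
out = Decoder()(z)
print(mu.shape, out.shape)  # torch.Size([2, 512]) torch.Size([2, 3, 128, 128])
```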
Principal Component Analysis is applied to the (#images, latent_space)-sized matrix that contains all 70,000 encoded images as 512-dimensional vectors.
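A minimal sketch of this PCA step with scikit-learn, using random data as a stand-in for the real 70,000 x 512 matrix of encoded faces (the component count of 32 is an assumption about how many sliders the GUI exposes):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 512))  # placeholder for the 70,000 encodings

pca = PCA(n_components=32)            # one principal component per slider (count assumed)
coords = pca.fit_transform(latents)   # (1000, 32): each face's slider coordinates

# inverse_transform maps slider values back to a 512-dim latent vector;
# all-zero sliders land exactly on the mean of the encoded faces (the "mean face")
mean_face_latent = pca.inverse_transform(np.zeros((1, 32)))[0]
print(coords.shape, mean_face_latent.shape)  # (1000, 32) (512,)
```

The decoder then turns `mean_face_latent` (or any other reconstructed latent vector) back into a 128x128 image.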
A screenshot of the GUI, with sliders, Reset and Random buttons:
This is the mean face, i.e. the average face. It corresponds to all sliders set to zero.
A randomly generated face:
And here are a few interesting slider effects (reminder: these directions were found and sorted by the PCA):
slider_0, interpretation: Background color (the strongest eigenvalue)

slider_1, interpretation: Face orientation

slider_2, interpretation: Light direction

slider_3, interpretation: Hair color

slider_4 and slider_5, interpretation: Sex and light direction

slider_15 and slider_19, interpretation: Smile, plus chin size and face tilt

Other sliders affect sex, skin color, smile, and face width, and many sliders only affect the background.
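The slider mechanics described above can be sketched as follows. Here `components` and `mean_latent` mirror sklearn's `PCA.components_` and `PCA.mean_` attributes but hold random stand-in values, and the final decoder call is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)
components = rng.normal(size=(32, 512))  # rows: PCA directions (stand-in values)
components /= np.linalg.norm(components, axis=1, keepdims=True)
mean_latent = rng.normal(size=512)       # the "mean face" in latent space

def sliders_to_latent(slider_values):
    """Map slider positions (one per principal component) to a 512-dim latent vector."""
    return mean_latent + slider_values @ components

zero_face = sliders_to_latent(np.zeros(32))           # Reset button: the mean face
random_face = sliders_to_latent(rng.normal(size=32))  # Random button: random sliders

# Moving only slider_0 shifts the latent code along exactly one PCA direction
delta = sliders_to_latent(np.eye(32)[0]) - zero_face
print(np.allclose(delta, components[0]))  # True
```

This is why each slider has an isolated, interpretable effect: it moves the latent code along a single principal direction before decoding.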