FRAME Model and Deep FRAME for Texture Synthesis

FRAME Model

Original paper: Filters, Random Fields and Maximum Entropy (FRAME): Towards a Unified Theory for Texture Modeling

Markov Random Fields provide a physically inspired way of modeling texture patterns, while filters offer a powerful and biologically plausible way of extracting features. Combining these two classes of texture models yields the FRAME (Filters, Random Fields, and Maximum Entropy) model, which alternates between filter selection and filter-response (histogram) matching under the maximum entropy principle.
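
As a sketch of the resulting distribution (the notation here loosely follows the original paper; $F^{(k)}$ denotes a selected filter and $H^{(k)}(I)$ the histogram of its responses on image $I$), the maximum entropy density takes a Gibbs form:

$$
p(I; \Lambda) = \frac{1}{Z(\Lambda)} \exp\Big\{ -\sum_{k=1}^{K} \big\langle \lambda^{(k)}, H^{(k)}(I) \big\rangle \Big\},
$$

where the Lagrange multipliers $\Lambda = \{\lambda^{(k)}\}$ are fitted so that the expected histograms under the model match the histograms observed in the training texture, and $Z(\Lambda)$ is the normalizing constant.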

Deep FRAME

Original paper: A Theory of Generative ConvNet.

The multi-layered, hierarchical FRAME model (Deep FRAME) has a structure similar to that of a convolutional neural network (CNN). In contrast to a discriminative CNN, Deep FRAME is a generative model in the sense that (1) it extracts a representation from which the input image can be reconstructed, and (2) it has an explicit dictionary of basis functions/filters that is learned during training.
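
As a sketch of this formulation (loosely following the generative ConvNet paper's notation, where $w$ collects all filter weights), the model is an exponential tilting of a Gaussian white-noise reference distribution $q(I)$:

$$
p(I; w) = \frac{1}{Z(w)} \exp\{ f(I; w) \}\, q(I),
$$

where $f(I; w)$ is the scoring function computed by the ConvNet (a sum of top-layer filter responses) and $Z(w)$ is the normalizing constant.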

Compared to the vanilla FRAME model, Deep FRAME has the following characteristics: (1) it matches statistics of neuron responses at two to three layers rather than a single layer; (2) for simplicity, instead of matching the full histogram of each neuron's response, it matches only the expected response; and (3) instead of selecting filters from a hand-designed set (e.g., Gabor filters), it learns the filters from data. The number of neurons in the model can be large, so the model is dense rather than sparse; how dense depends on the design of the network structure (the number of layers and the number of neurons, i.e., filters, at each layer). It remains unclear at this stage how to design a good structure for images in different entropy regimes.
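
To make point (2) concrete, learning can proceed by analysis-by-synthesis: the log-likelihood gradient is the difference between the expected responses (and their parameter gradients) on training images and on images synthesized from the current model, with the synthesized images obtained by Langevin dynamics. Below is a minimal PyTorch-style sketch under these assumptions; the scoring network `f`, step sizes, and sampler settings are illustrative choices, not the paper's.

```python
import torch

def langevin_sample(f, img, n_steps=20, step_size=0.01, sigma=1.0):
    """Draw approximate samples from p(I; w) ∝ exp(f(I; w)) q(I) via Langevin dynamics.

    q(I) is taken to be Gaussian white noise with std sigma, which contributes
    the -img / sigma**2 term to the gradient of the log-density.
    """
    img = img.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        score = f(img).sum()                        # f(I; w): sum of top-layer responses
        grad = torch.autograd.grad(score, img)[0]   # ∂f/∂I
        with torch.no_grad():
            img += 0.5 * step_size ** 2 * (grad - img / sigma ** 2)
            img += step_size * torch.randn_like(img)
        img.requires_grad_(True)
    return img.detach()

def learning_step(f, optimizer, data_batch, synth_batch):
    """One maximum-likelihood update: match expected responses on data and synthesis.

    The parameter gradient of the log-likelihood is
    E_data[∂f/∂w] - E_model[∂f/∂w], so we descend the corresponding loss.
    """
    synth_batch = langevin_sample(f, synth_batch)
    loss = f(synth_batch).mean() - f(data_batch).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return synth_batch
```

In a scheme like this, the synthesized images are kept as persistent chains that are revised by a few Langevin steps at every learning iteration, so texture synthesis comes out as a by-product of the learning algorithm itself.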
