
The Representation Theory of Neural Networks

Author

Listed:
  • Marco Armenta

    (Department of Mathematics, Université de Sherbrooke, Sherbrooke, QC J1K 2R1, Canada;
    Department of Computer Science, Université de Sherbrooke, Sherbrooke, QC J1K 2R1, Canada)

  • Pierre-Marc Jodoin

    (Department of Computer Science, Université de Sherbrooke, Sherbrooke, QC J1K 2R1, Canada)

Abstract

In this work, we show that neural networks can be represented via the mathematical theory of quiver representations. More specifically, we prove that a neural network is a quiver representation with activation functions, a mathematical object that we represent using a network quiver. Furthermore, we show that network quivers gently adapt to common neural network concepts such as fully connected layers, convolution operations, residual connections, batch normalization, pooling operations and even randomly wired neural networks. We show that this mathematical representation is by no means an approximation of what neural networks are, as it exactly matches reality. This interpretation is algebraic and can be studied with algebraic methods. We also provide a quiver representation model to understand how a neural network creates representations from the data. We show that a neural network saves the data as quiver representations and maps them to a geometrical space called the moduli space, which is given in terms of the underlying oriented graph of the network, i.e., its quiver. This follows as a consequence of our defined objects and of understanding how the neural network computes a prediction in a combinatorial and algebraic way. Overall, representing neural networks through quiver representation theory leads to 9 consequences and 4 inquiries for future research that we believe are of great interest for better understanding what neural networks are and how they work.
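To make the abstract's central construction concrete, the sketch below illustrates, in plain Python, a quiver representation with activation functions on a tiny feedforward network quiver: each arrow of the directed graph carries a scalar weight (the representation), each non-input vertex applies an activation, and the prediction is computed by propagating values along the arrows. This is an illustrative reading of the paper's definitions under simplifying assumptions, not the authors' code; the graph, weights, and names such as forward are hypothetical.

    # Minimal sketch (hypothetical, not the authors' code): a quiver
    # representation with activation functions on a small feedforward
    # network quiver. Arrows carry the weights; vertices apply activations.
    from collections import defaultdict

    # Arrows of the quiver: (source vertex, target vertex, weight on the arrow).
    arrows = [
        ("x1", "h1", 0.5), ("x1", "h2", -1.2),
        ("x2", "h1", 0.8), ("x2", "h2", 0.3),
        ("h1", "y", 1.0), ("h2", "y", -0.7),
    ]

    def relu(v):
        return max(0.0, v)

    def forward(arrows, inputs, output="y", activation=relu):
        # Group incoming arrows by target so each vertex can be evaluated
        # from its predecessors -- the combinatorial/algebraic computation
        # of a prediction that the abstract refers to.
        incoming = defaultdict(list)
        for s, t, w in arrows:
            incoming[t].append((s, w))
        values = dict(inputs)  # input data sits at the source vertices

        def value(v):
            if v not in values:
                linear = sum(w * value(s) for s, w in incoming[v])  # the representation acts
                values[v] = activation(linear)  # activation at the vertex
                # (In the paper, output vertices carry the identity
                # activation; applying relu here is a simplification.)
            return values[v]

        return value(output)

    # Place input data at the source vertices and propagate to the output.
    print(forward(arrows, {"x1": 1.0, "x2": 2.0}))  # prints 2.1

The sketch covers only the forward computation; in the paper's terms, the data itself is also encoded as a quiver representation and sent to the moduli space determined by the network quiver.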

Suggested Citation

  • Marco Armenta & Pierre-Marc Jodoin, 2021. "The Representation Theory of Neural Networks," Mathematics, MDPI, vol. 9(24), pages 1-42, December.
  • Handle: RePEc:gam:jmathe:v:9:y:2021:i:24:p:3216-:d:701116

Download full text from publisher

File URL: https://www.mdpi.com/2227-7390/9/24/3216/pdf
Download Restriction: no

File URL: https://www.mdpi.com/2227-7390/9/24/3216/
Download Restriction: no
