A new alternative to the Fast Artificial Neural Network Library (FANN) in C

TFCNN is an alternative to the already well-established C library FANN. It is a fully connected neural network library in C with a small footprint, and as such it can be included in your project via a single header file.

Being so lightweight also makes it a great fit for embedded projects.
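For readers unfamiliar with the idea, a fully connected layer boils down to something like the following sketch. This is generic, illustrative C rather than TFCNN's actual API; the function and parameter names are my own.

```c
/* Generic sketch of a fully connected (dense) layer forward pass in C.
   Illustrative only; not TFCNN's actual API or code. */
#include <math.h>
#include <stddef.h>

/* Logistic sigmoid, one classic activation choice. */
static float sigmoid(const float x)
{
    return 1.0f / (1.0f + expf(-x));
}

/* out[j] = sigmoid( bias[j] + sum_i in[i] * w[j*inputs + i] ) */
static void forward_dense(const float* in, const size_t inputs,
                          const float* w, const float* bias,
                          float* out, const size_t outputs)
{
    for(size_t j = 0; j < outputs; j++)
    {
        float sum = bias[j];
        for(size_t i = 0; i < inputs; i++)
            sum += in[i] * w[j*inputs + i];
        out[j] = sigmoid(sum);
    }
}
```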

TFCNNv1 targets any platform that compiles C code. It features binary classification and a staple set of 5 activation functions, 5 optimisers, and 3 uniform weight initialisation methods. A CPU-based uint8 version is additionally available which can be used for both training and classification, unlike FANN's equivalent implementation, which cannot be used in the training process. This does, however, come at the additional cost of casting operations, and of slow float32 emulation on platforms without native floating-point support.
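To give a sense of where the uint8 overhead comes from, here is a generic sketch of a quantised dot product in which each stored weight is expanded back to float for every multiply-accumulate. The scale/zero-point scheme and the function name are illustrative assumptions, not necessarily how TFCNN packs its weights.

```c
/* Sketch of why a uint8 weight path adds casting overhead: each quantised
   weight is cast and dequantised per multiply-accumulate.
   The scale/zero-point layout here is an assumption for illustration. */
#include <stdint.h>
#include <stddef.h>

static float dot_u8(const float* in, const uint8_t* qw, const size_t n,
                    const float scale, const float zero)
{
    float sum = 0.0f;
    for(size_t i = 0; i < n; i++)
    {
        /* per-element cast + dequantise: the extra cost vs a float32 path */
        const float w = ((float)qw[i] - zero) * scale;
        sum += in[i] * w;
    }
    return sum;
}
```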

TFCNNv2 targets Linux platforms and features a superfluous set of 20 activation functions, along with both softmax and regular multiple-classification implementations. Without going into too much detail: where v1 is vanilla and reliable, using tried-and-tested methods, v2 expands upon that with a wider selection of options, some of them purely experimental, such as derivatives based on lookup tables in places where, arguably, they really should not be used.
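For context, softmax classification simply turns the raw network outputs into a probability distribution over the classes, along these lines. Again, this is a generic sketch and not TFCNN's implementation.

```c
/* Minimal softmax sketch for multiple classification: converts raw output
   logits into a probability distribution over classes.
   Generic illustration, not TFCNN's code. */
#include <math.h>
#include <stddef.h>

static void softmax(const float* logits, float* probs, const size_t classes)
{
    /* subtract the max logit for numerical stability */
    float max = logits[0];
    for(size_t i = 1; i < classes; i++)
        if(logits[i] > max) max = logits[i];

    float sum = 0.0f;
    for(size_t i = 0; i < classes; i++)
    {
        probs[i] = expf(logits[i] - max);
        sum += probs[i];
    }
    for(size_t i = 0; i < classes; i++)
        probs[i] /= sum;
}
```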

TFCNN supports more than FANN in some areas and less in others, and for this reason I believe each library suits its own specific use cases.

TFCNN is the better fit for beginners who wish to understand how a classical neural network works without having to flick through too many different source files. It really is as simple, clear, and concise as an implementation of such a neural network in the C programming language could be.

If you are interested in learning more about the TFCNN project, please visit the GitHub repository.

submitted by /u/SirFletch
