Glossary of Artificial Intelligence

AI

Artificial Intelligence refers to any technique that enables computers to mimic human behaviour.

CNN

Convolutional Neural Networks
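The core operation of a convolutional layer is sliding a small kernel over an input and computing a weighted sum at each position. A minimal NumPy sketch (illustrative data and kernel; real CNN layers add channels, padding, and learned weights):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Weighted sum of the patch under the kernel at position (i, j)
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge detector applied to an image with a sharp vertical edge
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
print(conv2d(image, kernel))  # responds strongly where the edge is
```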

DBN

Deep Belief Networks

Deep Learning

The area of specialisation within ML that extracts patterns from data using deep (multi-layer) neural networks.

GAN

Generative Adversarial Networks are a generative learning technique in which two networks are trained in competition: a generator produces candidate data while a discriminator learns to distinguish generated data from real data. Other generative approaches include auto-encoders and transformers for natural language processing.

HCA

Hierarchical Cluster Analysis is an unsupervised learning algorithm that builds a hierarchy of clusters, merging or splitting them by similarity.
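A minimal sketch of hierarchical clustering using SciPy (the data points are illustrative): `linkage` builds the cluster tree bottom-up, and `fcluster` cuts it into a chosen number of flat clusters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two well-separated groups of 2-D points (illustrative data)
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                   [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])

# Build the hierarchy bottom-up, merging the closest clusters first
Z = linkage(points, method="ward")

# Cut the tree so that exactly two flat clusters remain
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)  # first three points share one label, last three the other
```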

Keras

A high-level Deep Learning API that makes it very simple to train and run neural networks. It can run on top of TensorFlow, Theano, or Microsoft Cognitive Toolkit. TensorFlow comes with its own implementation of this API, called tf.keras, which provides support for more advanced TensorFlow features.

LLE

Locally Linear Embedding is an unsupervised learning algorithm for nonlinear dimensionality reduction and visualization; it preserves the local neighbourhood structure of the data.
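A minimal sketch with Scikit-Learn's `LocallyLinearEmbedding` (the curve data is illustrative): points on a 1-D curve embedded in 3-D are unrolled into a single dimension.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Points along a noisy 1-D curve embedded in 3-D (illustrative data)
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 4 * np.pi, 200))
X = np.column_stack([np.sin(t), np.cos(t), t]) + 0.01 * rng.normal(size=(200, 3))

# Unroll the curve into one dimension using local linear patches
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=1)
X_1d = lle.fit_transform(X)
print(X_1d.shape)
```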

LSTM

Long Short-Term Memory networks are a type of RNN used for sequence processing; gated cells let them retain information over long time spans.
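A minimal NumPy sketch of a single LSTM time step (random weights, purely illustrative): three sigmoid gates decide what the cell forgets, writes, and exposes at each step.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step: gates decide what to forget, write, and expose."""
    z = W @ np.concatenate([x, h_prev]) + b
    n = h_prev.size
    f = sigmoid(z[:n])           # forget gate
    i = sigmoid(z[n:2*n])        # input gate
    o = sigmoid(z[2*n:3*n])      # output gate
    g = np.tanh(z[3*n:])         # candidate cell update
    c = f * c_prev + i * g       # new cell state (long-term memory)
    h = o * np.tanh(c)           # new hidden state (short-term output)
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))  # random demo weights
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # process a length-5 sequence
    h, c = lstm_step(x, h, c, W, b)
print(h.shape)
```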

ML

Machine Learning is a branch of study under Artificial Intelligence that gives computer systems the ability to learn without being explicitly programmed.

PCA

Principal Component Analysis (PCA) is an unsupervised learning algorithm for dimensionality reduction and visualization; it projects data onto the directions of greatest variance.
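A minimal NumPy sketch of PCA (the diagonal point cloud is illustrative): centre the data, take the eigenvectors of the covariance matrix, and project onto the top eigenvector.

```python
import numpy as np

# Points that vary mostly along one diagonal direction (illustrative data)
rng = np.random.default_rng(0)
t = rng.normal(size=100)
X = np.column_stack([t, t]) + 0.05 * rng.normal(size=(100, 2))

# PCA: centre the data, then take eigenvectors of the covariance matrix
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top = eigvecs[:, -1]                     # direction of maximum variance
X_1d = Xc @ top                          # project onto the first component
print(top, X_1d.shape)                   # top is close to the diagonal (1,1)/sqrt(2)
```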

RBM

Restricted Boltzmann Machines are neural networks trained in an unsupervised manner to model their input data. Stacking RBMs produces a DBN: the layers are trained sequentially in an unsupervised manner, and then the whole system is fine-tuned using supervised learning techniques, making the overall pipeline semi-supervised.

RNN

Recurrent Neural Networks
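A minimal NumPy sketch of one vanilla RNN step (random weights, purely illustrative): the same weights are reused at every position in the sequence, and the hidden state carries information forward.

```python
import numpy as np

def rnn_step(x, h_prev, W_x, W_h, b):
    """One vanilla RNN step: the new hidden state mixes input and memory."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W_x = rng.normal(scale=0.1, size=(n_hid, n_in))   # random demo weights
W_h = rng.normal(scale=0.1, size=(n_hid, n_hid))
b = np.zeros(n_hid)

h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # the same weights process every step
    h = rnn_step(x, h, W_x, W_h, b)
print(h.shape)
```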

Scikit-Learn

Created by David Cournapeau in 2007 and now led by a team of researchers at the French Institute for Research in Computer Science and Automation (Inria), this framework implements many efficient Machine Learning algorithms and is a great entry point for AI.
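All Scikit-Learn estimators share the same fit/predict interface, which is what makes the library such an easy entry point. A minimal sketch using the bundled iris dataset and a k-nearest-neighbours classifier (the choice of model and split are illustrative):

```python
# Train and evaluate a classifier with Scikit-Learn's uniform fit/predict API
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)              # every estimator exposes fit()
accuracy = clf.score(X_test, y_test)   # and a matching predict()/score()
print(accuracy)
```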

SVM

Support Vector Machines are a kind of supervised learning algorithm used for classification and regression; they separate classes with a maximum-margin boundary.
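A minimal sketch with Scikit-Learn's `SVC` on two linearly separable classes (the data points are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable classes (illustrative data)
X = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
              [3.0, 3.0], [3.2, 3.1], [3.1, 2.9]])
y = np.array([0, 0, 0, 1, 1, 1])

# A linear SVM finds the maximum-margin boundary between the classes
clf = SVC(kernel="linear")
clf.fit(X, y)
pred = clf.predict([[0.1, 0.1], [3.0, 2.8]])
print(pred)  # one point from each side of the boundary
```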

TensorFlow

A lower-level, more complex library for distributed numerical computation. It allows very large neural networks to be trained and executed efficiently by distributing the computations across multi-GPU servers. It was created by Google and supports many large-scale Machine Learning applications. It was open-sourced in November 2015, with version 2.0 released in September 2019.