
Neural Networks - An intuitive and exhaustive guide

Last updated December 1, 2025

Machine Learning

Hi, I'm Nyior Clement, the lead author and AI Advocate at Guidely. I worked on this Guide with Patrick Fleith, who is a Senior AI Engineer building AI applications to improve space mission design and operations.

We are really glad you are here!

Neural networks can feel intimidating at first, yet they sit at the centre of almost everything happening in AI today. Once you understand them, the rest of the field starts to feel a lot more familiar.

There are many resources on neural networks, but our goal is different. We want you to:

  1. Feel the ideas.
  2. See them clearly.
  3. Build an intuition that stays with you long after this guide.

This is why we begin with the why.

Before jumping right into neural networks, we walk through the problems older learning methods struggled to solve. When you see why those methods fall short, the need for neural networks becomes obvious. That moment of clarity changes how you learn everything that comes after.

This guide is hands-on too.

By the end, you will understand the core ideas deeply, and you will build a neural network from scratch.

You are in the right place. Stay with us xD.

What You'll Learn

  • Part 1: The prelude, from rule-based to learning algorithms
  • Part 2: What are neural networks, really? [coming soon]
  • Part 3: Inside a neural network [coming soon]
  • Part 4: How neural networks learn [coming soon]
  • Part 5: Building a neural network from scratch [coming soon]

Part 1: The prelude, from rule-based to learning algorithms

  • Why people turned from rule-based algorithms to learning algorithms
  • The anatomy of learning algorithms

Part 2: What are neural networks, really?

  • From traditional machine learning to neural networks: why people moved beyond traditional ML algorithms
  • How the human brain inspired neural networks
  • The architecture of neural networks

Part 3: Inside a neural network

  • Inputs, weights, bias, activation
  • Activation functions (ReLU, Sigmoid)
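To give you a taste of what Part 3 covers, here is a minimal sketch of a single neuron in Python and NumPy. The function names and the example numbers are our own illustrative choices, not code from the guide:

```python
import numpy as np

def relu(z):
    # ReLU: pass positive values through, zero out negatives
    return np.maximum(0.0, z)

def sigmoid(z):
    # Sigmoid: squash any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b, activation=relu):
    # A single neuron: weighted sum of inputs plus a bias,
    # passed through an activation function
    return activation(np.dot(w, x) + b)

x = np.array([1.0, 2.0])    # inputs
w = np.array([0.5, -0.25])  # weights
b = 0.1                     # bias
print(neuron(x, w, b))           # relu(0.5*1 - 0.25*2 + 0.1) = 0.1
print(neuron(x, w, b, sigmoid))  # sigmoid(0.1), roughly 0.525
```

Every neuron in a network, no matter how large, computes exactly this: a weighted sum, a bias, an activation.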

Part 4: How neural networks learn

  • Loss function as “distance from perfect”
  • Gradient descent (mountain-descent analogy)
  • Backpropagation as feedback correction
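As a small preview of Part 4, the "descend the mountain" idea can be sketched in a few lines: measure the distance from perfect (the loss), find the slope (the gradient), and step downhill. The one-parameter model and numbers below are our own toy example, not the guide's:

```python
# Fit w so that the prediction w * x matches the target y.
x, y = 2.0, 6.0   # one training example; the perfect w would be 3.0
w = 0.0           # start from a bad guess

learning_rate = 0.1
for step in range(50):
    pred = w * x
    loss = (pred - y) ** 2     # squared distance from perfect
    grad = 2 * (pred - y) * x  # d(loss)/dw, the slope of the loss
    w -= learning_rate * grad  # step downhill, against the slope

print(round(w, 3))  # converges toward 3.0
```

Backpropagation is what computes that `grad` term for every weight in a deep network, so the same downhill step can be taken everywhere at once.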

Part 5: Building a neural network from scratch

  • Walk through a small demo: recognize handwritten digits (MNIST)
  • No libraries beyond Python and NumPy
  • Under 100 lines of code

Enjoyed the read? Help us spread the word — say something nice!
