
The Evolution of Computing
The journey of artificial intelligence begins with a question: Why do we need AI? To answer this, let’s review the evolution of computing. Initially, computers were programmed using imperative paradigms. With an imperative approach, you explicitly instruct the computer with a precise sequence of steps to achieve a desired outcome. This method relies on imperative knowledge: much like following a recipe, you must know exactly which ingredients and operations are needed to produce a specific result.
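To make this concrete, here is a minimal sketch of the imperative style: every step the computer takes is spelled out explicitly. The task (averaging a list of numbers) is invented purely for illustration.

```python
# Imperative programming: we tell the computer *how* to compute the result,
# one explicit step at a time. The numbers are made up for illustration.

numbers = [4, 8, 15, 16]

# Step 1: start with an empty running total.
total = 0

# Step 2: add each number to the total, one at a time.
for n in numbers:
    total = total + n

# Step 3: divide the total by how many numbers there were.
average = total / len(numbers)

print(average)  # -> 10.75
```

Every detail of the computation is dictated in advance; the program cannot handle any task other than the exact sequence of steps it was given.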



AI Learning Paradigms
There are three main paradigms within artificial intelligence:
- Supervised Learning
- Unsupervised Learning
- Reinforcement Learning
Supervised Learning
Supervised learning involves training a model using labeled examples. Imagine teaching children by showing them many pictures of dogs and cats so they learn to identify new images based on common patterns. This method excels in classification tasks but requires vast amounts of labeled data. Its performance may decline when data points fall between established classes, making this approach inherently narrow in scope.
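The idea of learning from labeled examples can be sketched with a toy nearest-neighbor classifier: given labeled training points, an unseen point receives the label of its closest neighbor. The feature values and labels below are invented for illustration.

```python
# A minimal sketch of supervised learning: a 1-nearest-neighbor classifier.
# Each training example is a (features, label) pair; prediction assigns an
# unseen point the label of the closest labeled example.

def nearest_neighbor(train, query):
    """Return the label of the training example closest to `query`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: sq_dist(ex[0], query))
    return label

# Invented labeled data: (weight_kg, ear_length_cm) -> species.
labeled = [
    ((30.0, 10.0), "dog"),
    ((25.0, 9.0), "dog"),
    ((4.0, 5.0), "cat"),
    ((5.0, 6.0), "cat"),
]

print(nearest_neighbor(labeled, (28.0, 9.5)))  # -> "dog"
```

Note that the model is only as good as its labels: a query that falls between the "dog" and "cat" clusters still gets forced into one of the two known classes, which is exactly the narrowness described above.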

Unsupervised Learning
In unsupervised learning, models analyze data without any labeled examples. This approach is akin to setting a baby in an unfamiliar environment and allowing it to learn and identify patterns independently. Techniques such as K-means clustering and Principal Component Analysis (PCA) fall under this category. Unsupervised learning is particularly useful for tasks like anomaly detection, where the definition of “normal” is not predefined. However, it can struggle when dealing with complex, chaotic environments.
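The K-means technique mentioned above can be sketched in a few lines: points are assigned to their nearest centroid, and each centroid is then moved to the mean of its assigned points. The one-dimensional data and starting centroids are invented for illustration.

```python
# A minimal sketch of unsupervised learning: K-means on 1-D data.
# No labels are given -- the algorithm discovers the two groups itself.

def kmeans_step(points, centroids):
    """One assignment + update step of K-means."""
    clusters = [[] for _ in centroids]
    for p in points:
        nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # New centroid = mean of assigned points (keep the old one if empty).
    return [sum(c) / len(c) if c else centroids[i] for i, c in enumerate(clusters)]

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]   # two natural groups, unlabeled
centroids = [0.0, 10.0]                   # arbitrary starting guesses
for _ in range(5):
    centroids = kmeans_step(points, centroids)

print(centroids)  # centroids settle near the two natural cluster centers
```

After a few iterations the centroids converge to roughly 1.0 and 9.1, recovering the two groups without ever being told they exist.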

Reinforcement Learning
Reinforcement learning bridges the gap between supervised and unsupervised techniques. This paradigm operates much like training a pet: the model receives rewards for correct actions and penalties for mistakes. Examples include Q-learning, deep Q-networks, policy gradient methods, and Proximal Policy Optimization (PPO). Reinforcement learning is particularly effective in controlled environments such as games or simulations, though its application to more complex, real-world scenarios remains challenging.

Self-Supervised Learning
To overcome the limitations inherent in the previous paradigms, self-supervised learning was introduced. This method leverages the strengths of supervised, unsupervised, and reinforcement learning by setting its own tasks and learning by predicting missing pieces of information. For instance, models like GPT mask certain words in a text and then predict the missing words, rewarding themselves for correct predictions and penalizing errors. This approach is both powerful and scalable, forming the foundation for models as large and intricate as GPT-4.
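The masking idea can be illustrated with a deliberately simple sketch: the training signal comes entirely from the data itself. We count which word follows each word in a tiny corpus, then "mask" a word and predict it from its left neighbor. The corpus and the count-based stand-in for a real model are invented for illustration.

```python
# A minimal sketch of self-supervision: no human labels -- the text itself
# supplies both the inputs and the targets.

from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat ate the fish",
]

# "Training": count which word follows each word in the corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_masked(prev_word):
    """Predict a hidden word from the word just before it."""
    return follows[prev_word].most_common(1)[0][0]

# Self-supervised task: mask the word after "sat" and predict it.
print(predict_masked("sat"))  # -> "on"
```

Real models like GPT replace the frequency table with a neural network and condition on far more context, but the principle is the same: the labels are manufactured from the raw data, which is what makes the approach scale.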
This lesson has traced the evolution of computational paradigms—from imperative and declarative programming through various AI learning methodologies—to the rise of self-supervised learning as a pathway to generalized intelligence. In the next section, we will delve into neural networks, the core components that make modern AI systems adaptive and powerful.