Abstraction is the foundation of communication, whether it's between humans or between humans and machines. Humans developed language to abstract and convey thoughts, ideas, and emotions. Human cognition relies on abstraction — our brains build models of the world, and these models let us make decisions without being distracted by unnecessary detail.
"San Francisco" is an abstraction representing a complex city known for its iconic Golden Gate Bridge, historic cable cars, diverse neighbourhoods, and tech industry presence. Code is an abstraction for humans to communicate with machines. Abstractions are why you can use an iPhone without understanding the transistors inside.
Abstractions are ubiquitous in our digital world. They bridge the gap between human understanding and technological complexity. Let's consider some examples:
- High-Level Languages abstract away machine code and let programmers write human-readable text. A single line like `print("Hello, World!")` abstracts the underlying machine operations required to display text.
- Operating Systems abstract away hardware complexities, allowing programmers to focus on higher-level tasks.
- Graphical User Interfaces (GUIs) abstract away command-line interfaces with visual representations.
- JavaScript Frameworks like React have abstracted away the complexities of DOM manipulation and state management. Now, developers can focus on building user interfaces with reusable components.
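You can peek one layer beneath the first abstraction above using Python's standard `dis` module, which shows the CPython bytecode that a single `print` line compiles to (bytecode is itself still an abstraction over real machine code):

```python
import dis

def greet():
    print("Hello, World!")

# Disassemble the function to see the stack-machine instructions
# (CPython bytecode) hiding beneath one line of Python.
dis.dis(greet)
```

Running this prints several bytecode instructions for what reads as one human-friendly statement, which is exactly the point: the abstraction hides that machinery.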
LLMs: A Not-So-Reverse Abstraction
Most abstractions so far simplify complex technology for human use, but language models seem to run in the opposite direction: they abstract human knowledge and language into a form computers can work with.
- Word2Vec transforms words into vector representations. This abstracts human language into a form that computers can process mathematically. It allows machines to grasp word relationships in a way that's computationally manageable.
- Transformers abstract the concept of "attention" in language, mimicking how humans focus only on relevant parts of information. This abstraction has led to significant improvements in machine language understanding and generation.
- Large Language Models (LLMs) like GPT-3 take this a step further, abstracting enormous amounts of knowledge into a single, unified model. They can generate text, answer questions, and even write code. They abstract human language into a machine-friendly form.
At first glance, these models might seem like reverse abstractions: instead of simplifying technology for humans, they simplify human knowledge for machines. Yet the outcome is still a more accessible interface for human-computer interaction, one that abstracts away the need for specialized programming skills.
This "reverse abstraction" opens up new possibilities. By abstracting human language for machines, we've created a new layer of abstraction. This makes advanced AI capabilities more accessible to both developers and end-users.
Abstractions as a Means to Democratise
Every once in a while, a revolutionary technology arrives with the potential to change everything. But as the technology evolves, abstractions play a key role in realising that potential and democratising the technology.
The command-line interface was revolutionary technology in its day. In 1984, the Macintosh introduced a graphical user interface that abstracted away the text-based command line.
AI is the revolutionary technology of our moment. But finding the right abstractions will be key to steering it toward human progress.