
What is knowledge?

[Midjourney image]

I don't know quantum physics. I haven't studied it. Someday, I'd like to.

But something I've heard/read about quantum mechanics is that particles aren't "things" in the classical sense: they're more like "configurations". They're like "dollars in a bank account". Each dollar in my bank account isn't an individual "thing". It's the configuration of atoms in the computer memory of the bank. If I owe someone one of my dollars and set up a program to automatically transfer a dollar from my account to them tomorrow, it's not that "one of the dollars in my bank account" is theirs. David Deutsch examines this puzzle in the multiverse chapter of his book, The Beginning of Infinity. I won't go into more depth here, both because I can't (I'm still trying to digest the chapter) and because it's not the point.

The point I'm trying to make is that what appears like a "thing" to us is not really a "thing" on its own. It's more like an aspect of some other phenomena.

"Infinity" isn't a single thing. It's a "suitcase word", referring to multiple things with a single term. Instead of a single concept of "infinity", we now know that there are different infinities. Different things. The countable infinity of the naturals, rationals, and integers. The uncountable infinity of the real and complex numbers. I think I read that taking the power set of an infinite set always produces a strictly larger infinity, so iterating the power set gives rise to an unending hierarchy of ever-larger infinite cardinalities.
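Cantor's theorem makes this hierarchy precise: for any set, its power set has strictly greater cardinality, so starting from the naturals and repeatedly taking power sets never stops producing new, larger infinities.

```latex
% Cantor's theorem: for every set S, |S| < |\mathcal{P}(S)|.
% Iterating from the naturals yields an unending tower of cardinalities:
\[
  |\mathbb{N}| \;<\; |\mathcal{P}(\mathbb{N})| \;<\; |\mathcal{P}(\mathcal{P}(\mathbb{N}))| \;<\; \cdots
\]
% The first step already reaches the reals: |\mathcal{P}(\mathbb{N})| = 2^{\aleph_0} = |\mathbb{R}|.
```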

So when I write "What is knowledge?", what I mean is... Is "knowledge" a thing? Is it its own thing? Is it a configuration? Is it an emergent property? Is it a suitcase word used to describe multiple different concepts?

What is intelligence?

The better we understand intelligence, the better we will be able to reproduce it (in machines).

This video describes the paper "Communicative Agents for Software Development" and the code in this GitHub project.

The primary objective of ChatDev is to offer an easy-to-use, highly customizable and extendable framework, which is based on large language models (LLMs) and serves as an ideal scenario for studying collective intelligence.

What exactly is collective intelligence? Is it the same as Engelbart's notion of collective IQ?

Doug Engelbart coined the term Collective IQ as a measure of how well people can work together on important challenges – how quickly and intelligently they can anticipate or respond to a situation, leveraging their collective perception, memory, insight, vision, planning, reasoning, foresight, and experience into applicable knowledge. Collective IQ is ultimately a measure of effectiveness.

A more specific question

The Book of Why posits that human-level intelligence will require an inference engine: a way for computers to describe, formally, why something happened, and to explain what would have happened if the causes of that thing were removed. That is, in order for them to understand what we mean by why, they will need to be able to investigate counterfactual scenarios: What would have happened (if conditions had been different)?
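To make the idea concrete, here is a minimal sketch of counterfactual reasoning over a toy structural causal model. The variables and structure are my own illustration (the classic rain/sprinkler example), not an implementation from The Book of Why: each variable is a function of its causes, and a counterfactual query re-runs the model with a cause removed while everything else is held fixed.

```python
def wet_grass(rain: bool, sprinkler: bool) -> bool:
    # Structural equation: the grass is wet if it rained or the sprinkler ran.
    return rain or sprinkler

# Factual world: it rained, the sprinkler was off, and the grass is wet.
factual = wet_grass(rain=True, sprinkler=False)

# Counterfactual query: "Would the grass have been wet had it not rained?"
# We intervene on the cause (set rain=False) and hold the rest fixed.
counterfactual = wet_grass(rain=False, sprinkler=False)

print(factual)         # True
print(counterfactual)  # False
```

Answering "why is the grass wet?" then amounts to finding a cause whose removal flips the outcome, which is exactly the kind of question the posited inference engine would need to answer.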

For a moment, let's assume it's true.

My next question is, can a single agent (where "single" means something like "possessing some particular memory, history, identity" and "agent" means "something that acts") create new knowledge? Can it be "intelligent" the way a human being can be?

In fact, can a human be intelligent or create knowledge without other humans? Or maybe to be more careful:

Can a human create knowledge without interacting with other intelligent agents?

Can human brains create knowledge in an agentic vacuum? Deutsch's The Beginning of Infinity rightly points out that they certainly need evidence from the physical world, and that here on Earth we are rich in evidence. But is it possible for humans (these "universal constructors") to "do what they do" without each other?

Are individual intelligence and knowledge creation a "configuration" of activities across a collection of agents? And does this explain at all why "rubber ducking", "talking to a peer", and "getting involved on the forums" are repeated suggestions? It's not simply that your peer may know things you don't, or that you are forced to organize and synthesize your thoughts... Is there something else going on?

Here's what may be required:

Are there ideas that cannot be created (or are vanishingly unlikely to be created) by a single "mind" (or "history" or "context window"), but that can easily be created when multiple "minds" interact?

Does the creation of an artificial intelligence depend on not one but many agents interacting with it?

Formalization

I don't know how to formalize these questions, or whether they are truly meaningful. However, it seems like a good idea to write about things I don't understand when they feel compelling, even if those ideas turn out to lead nowhere. Perhaps later these thoughts will re-emerge in a new light, after I've found better ways to think about questions like these. Then I can determine whether I was confused about something fundamental (which rendered everything after that point nonsensical), or whether I might be onto something real but lacked the sophistication to investigate these kinds of questions more rigorously.

Lately (with the help of the fastai course), I've been embracing the idea that theory follows practice. My willingness to wonder out loud (without rigor) is meant to be part of that embrace.

More images

[Midjourney images]