In the comments, I had a discussion on the structure of knowledge. There are two general points of view. The first point of view, called foundationalism, is that knowledge starts with a few basic principles, upon which the rest of knowledge is built. The second point of view, called coherentism, is that knowledge is structured like a web, with inferences going in every direction.
This is a long-standing philosophical question, and you can read superior accounts from more authoritative sources.
Both coherentism and foundationalism have features that should raise eyebrows among critical thinkers. Namely, foundationalism involves believing its foundations without evidence or reason, while coherentism involves circular reasoning.
The Stanford Encyclopedia of Philosophy observes that coherentists typically defend their view by attacking foundationalism. Here I will instead mount a positive defense of coherentism by arguing for the virtues of circular reasoning.
The problem with circles
Circular reasoning is a fallacy of deduction: a proposition is proven from conclusions that were themselves derived from that same proposition. For example, taking letters to represent propositions, we might have the following circle:
A -> B -> A
It is possible for such a circle to be true in the technical sense. For instance, I can use 1+1=2 to prove that 2+2=4, and then use 2+2=4 to prove 1+1=2, and each of those proofs would be valid and sound.
However, the circular structure subverts conventions about how deduction is used. Deduction starts from propositions that we agree on (or are at least willing to grant for the sake of argument), and we use reasoning to prove a new conclusion that we did not agree on, or did not think of. In the above circle, “B -> A” places A in the position of a new conclusion. But if A is a new conclusion, then we did not initially agree on it or did not think of it, and therefore it was not a useful starting point.
Circular reasoning can sometimes be technically correct, but it is not useful. Keep that problem in mind.
Circles of induction
When we switch from deductive reasoning to inductive reasoning, it changes everything. Every arrow, every inference, becomes controvertible and uncertain. Consider: we could have two arguments, each of the form “A -> B”, and these arguments would not be redundant, because it is harder to dispute two distinct arguments than to dispute one.
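To make the idea of non-redundant arguments concrete, here is a toy model of my own (not from the post): treat each inductive argument as having some independent chance of being sound. Under that simplifying independence assumption, two distinct arguments for the same conclusion support it more strongly than either one alone. The function name and the numbers are illustrative, nothing more.

```python
def combined_support(*strengths):
    """Probability that at least one of several independent arguments holds.

    Each strength is the probability that a single argument for the
    conclusion is sound. Independence is a simplifying assumption.
    """
    failure = 1.0
    for p in strengths:
        failure *= (1.0 - p)  # all arguments fail together
    return 1.0 - failure

print(combined_support(0.7))       # one argument alone: 0.7
print(combined_support(0.7, 0.6))  # two distinct arguments: 0.88
```

On this toy model, a second argument can never make the conclusion less supported; it only adds support to the extent that it is genuinely distinct from the first.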
As a result, we may never really be sure of any particular starting point. In the coherentist picture, we may think W -> A, where W refers to the greater web of ideas, but that doesn’t mean that we are sure of A. So consider the following circle:
W -> A
W -> B
W -> C
A -> B -> C -> A
Now we have multiple distinct arguments for A:
W -> A
W -> C -> A
W -> B -> C -> A
As long as these arguments are meaningfully distinct, they may reinforce each other. The circle also gives us multiple distinct arguments for B and C. Thus, even if we weren’t entirely sure of A, B, or C individually, we may be more sure of them when considered together.
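The structure above can be read as a directed graph, and the three arguments for A are simply the distinct simple paths from W to A. Here is a sketch of my own that enumerates them; the edge list encodes exactly the inferences listed above, and the function is an assumed illustration, not anything from the post.

```python
# The web of inferences from the example: W -> A, W -> B, W -> C,
# plus the circle A -> B -> C -> A.
edges = {
    "W": ["A", "B", "C"],
    "A": ["B"],
    "B": ["C"],
    "C": ["A"],
}

def argument_paths(start, goal, path=None):
    """All simple (non-repeating) inference paths from start to goal."""
    path = (path or []) + [start]
    if start == goal and len(path) > 1:
        return [path]
    found = []
    for nxt in edges.get(start, []):
        if nxt not in path:
            found.extend(argument_paths(nxt, goal, path))
    return found

for p in argument_paths("W", "A"):
    print(" -> ".join(p))
```

Running this recovers the same three arguments for A, and swapping the goal to "B" or "C" yields their multiple distinct arguments as well.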
Recall that I said circular reasoning may be technically correct, but isn’t useful in a deductive context. I just solved the problem by showing how it can be useful in an inductive context.
The obvious objection is that I appealed to an additional element W. The full web of ideas would of course be very complicated, but since it isn’t built on any particular foundation, W is arguably nothing but a series of circles. But let me illustrate something that is even worse than a circle: an anti-circle.
A -> B -> C -> not-A
Anti-circles happen all the time in our web of knowledge, and they indicate that at least one of the inferences must be disputed. The strength of a circle structure is directly proportional to how worried you were about finding an anti-circle instead. I contend that avoiding anti-circles is extremely challenging. If you have a structure with a lot of nontrivial circles, it would be difficult to replace the structure with something different without introducing many anti-circles.
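One way to make the circle/anti-circle distinction precise (my own framing, not the author's) is to give each inference a sign: +1 if it supports its target, -1 if it refutes it. A cycle is then an anti-circle exactly when the product of signs around it is negative, meaning the loop somewhere turns back against itself. A minimal sketch, with all node names and edge signs assumed for illustration:

```python
# Signed inferences: +1 means "supports", -1 means "refutes".
signed_edges = {
    ("A", "B"): +1,
    ("B", "C"): +1,
    ("C", "A"): -1,   # C -> not-A: the anti-circle from the example
    ("X", "Y"): +1,   # a consistent two-step circle, for contrast
    ("Y", "X"): +1,
}

def cycle_sign(cycle):
    """Product of edge signs around a cycle given as a list of nodes."""
    sign = 1
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        sign *= signed_edges[(a, b)]
    return sign

def is_anti_circle(cycle):
    return cycle_sign(cycle) < 0

print(is_anti_circle(["A", "B", "C"]))  # True: A -> B -> C -> not-A
print(is_anti_circle(["X", "Y"]))       # False: a self-consistent circle
```

On this picture, a web of knowledge is in good shape when its cycles all come out positive, which is one way of cashing out the claim that avoiding anti-circles is hard: rewiring the structure risks flipping the sign of some cycle you weren't attending to.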
Circles in scientific practice
Since I am not trained in philosophy, it is likely my views are naive in some way. But I am a professional physicist, so at least I don’t have to be naive about how it all works out in practice.
The most common kind of circle is experiment -> theory -> experiment -> theory. In most fields of physics, experimentalists and theorists are distinct groups of people, so this actually involves communication between people with different expertise. I am an experimentalist.
Experimental observations definitely do not serve as a foundation, particularly since our observations are so complicated. It is not uncommon for different experimentalists to arrive at contradictory results, and I’m not even talking about the interpretation of those results. It can be hard to trace the source of such a contradiction, because there are just so many things it could be. The measurement devices are very complicated, and no two are quite the same. There are many kinds of experimental error that take time for the community to understand and correct for. And there’s also the fact that some fraction of the data just doesn’t make sense given what we understand. We have to formulate theories about what went wrong and how to correct for it, but we also need to consider the hypothesis that the results are correct. This requires some theoretical understanding.
Theorists, of course, don’t really understand any of those experimental details, and don’t understand exactly what kinds of errors are common. What they do understand is that there is a large body of experimental observations. They need to build a theory that will account for the observations they consider most important. They often just ignore the observations that are inconsistent (or at least they don’t talk about them much in their publications), because maybe it’s just some unknown experimental error. Or maybe it’s a theoretical error. Computing the consequences of a theory can be extremely difficult, involving many layers of approximations, free parameters, and leaps of judgment.
In some ways, the portrait I painted earlier is too pristine. We would be so lucky to have such neat circles! When a theory makes a straightforward prediction, that turns heads and causes cash to flow. As for experiments confirming one theory over all alternatives, we like to say that happens all the time, but if you look across experiments you will find different research groups advocating opposing conclusions.
I think it should be clear why a foundationalist approach does not work in this context. If we built more and more layers upon a single foundation, the result would not be very robust to error. When some of the “facts” inevitably turn out to be incorrect, you’d like the entire structure above them not to topple.