First, you may know that Voevodsky gave a series of lectures on his ‘univalent foundations’ at Haifa University, which I attended, and I believe I grasped the gist of it (though I would have to refresh my memory to speak more soundly).

As I see it, one may somewhat compare the setting here to the case of the Bourbaki project: Bourbaki had an ambitious grand project to create, if you wish, ‘the modern-day Elements of Euclid’. In whatever way one judges their work and influence in light of that ambition, they did give us some very good mathematical textbooks, while in other volumes they might have chosen a rather bizarre and unfruitful path.

Similarly, I am not convinced, as much as I know about computers, that if one’s aim is to use computers to check proofs, one has any necessary need of Voevodsky’s approach. All one needs is some formalization (and maybe now, with half-intelligent software like Apple’s Siri, even that may be partly avoided). And the 20th century (the inception, in fact, occurred in the 19th) offered many such formalizations.

Yet Voevodsky’s system is very, very nice mathematics. I was impressed, for instance (as far as I recall), by the ‘unification’ of truth-values and objects. (In the usual approach to logic, a basic distinction is drawn between ‘predicates’ – true or false when objects are substituted in their ‘empty places’, such as ‘negative(x)’ or ‘x is bigger than y’ (or, outside mathematics, ‘good(person)’) – and ‘terms’ – functions, if one wishes – where substituting yields objects, such as ‘the square of’ (or, outside mathematics, ‘the father of’). When there are no empty places, these are, respectively, sentences – true or false (their truth value) – or constants.)
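This unification is precisely what the ‘propositions as types’ reading of type theory provides: a predicate becomes a function into a universe of propositions, formally parallel to a term-forming function into a type of objects. A minimal sketch in Lean 4 (the names `negativeLike` and `square` are illustrative, not taken from the lectures):

```lean
-- A 'predicate' is a function into Prop, the universe of propositions:
def negativeLike (x : Int) : Prop := x < 0

-- formally parallel to a 'term': a function into a type of objects.
def square (x : Int) : Int := x * x

-- With no empty place left, we get a sentence (a closed proposition)...
example : (-3 : Int) < 0 := by decide

-- ...or a constant (a closed term).
example : square 3 = 9 := by decide
```

In this setting a proof is simply an inhabitant of the proposition-as-type, so truth-values and objects live in one uniform framework.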

True, his construction rests on a foundation of topology, indeed homotopy, where, for example, a circle is ‘the same’ as a solid round ‘bagel’, or as a circle with a solid ball-like ‘bead’ put on it, and similarly for the mappings. But that should not deter us – we do not seek a ‘philosophical foundation’, no more, as I see it, than prerequisites for a ‘computerizing project’. One may say, for example, that the formal languages ordinarily used in logic are natural-numbers ‘appendages’ par excellence. So why not use ‘topology/homotopy appendages’ as well? Remember: we wish to seek and get mathematical wisdom – and Voevodsky’s system is very beautifully that – not some philosophical ‘foundational reassurance’.

As for checking mathematical proofs by computer, I would venture to quote from my arXiv article ‘Challenges to Some Philosophical Claims about Mathematics’ (arXiv:1601.07125), page 6:

“Also, it might be helpful to stress the nature of a proof as something ultimately formalizable, thus capable of being viewed as a mathematical object itself. In this spirit, one should distinguish between the formal force of the proof and the real-world question whether I really have a proof, so whether I really know that the theorem is true: have I not made a mistake? Can I trust my memory? Can I trust this book’s claim that so and so was proved? Can I trust the hardware and software of a computer that I used in the proof? etc.”

The problem is that while we may (almost) trust that the hardware of a computer will do what the (digital) program – a formal system which may be thought of as mathematical – dictates (a cosmic ray may thwart that, but that could be remedied, say, by repetitions), once we move beyond that formal realm – say, to worry about mistakes – one formal system is as good as another. If we do not trust our own checking of the mathematical proof, then the computer program may as well harbor mistakes (bugs). And basically, checking one formal system by relying on another is no ‘paradigm change’.
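The remark that a cosmic-ray fault could be ‘remedied, say, by repetitions’ can be made concrete: run the (possibly transiently faulty) check several times and take a majority vote. A minimal sketch in Python, under an assumed toy fault model (the fault rate and function names are illustrative):

```python
import random
from collections import Counter

def flaky_check(proof_ok: bool, fault_rate: float = 0.01) -> bool:
    """One run of a proof checker whose answer a transient fault
    (the 'cosmic ray') may flip with small probability."""
    answer = proof_ok
    if random.random() < fault_rate:
        answer = not answer  # transient hardware fault flips the result
    return answer

def majority_check(proof_ok: bool, runs: int = 5) -> bool:
    """Repeat the check and take a majority vote; independent transient
    faults are then exponentially unlikely to carry the vote."""
    votes = Counter(flaky_check(proof_ok) for _ in range(runs))
    return votes[True] > votes[False]
```

With a 1% fault rate and 5 independent runs, a wrong majority requires at least 3 simultaneous faults, which is on the order of 10^-5 – illustrating why repetition restores trust in the hardware, though not in the informal-to-formal translation discussed above.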

Another quote from my article above may be related (page 6, continuing):

“Note that Gödel’s Incompleteness Theorem is sometimes taken as denying mathematics a totally formal/mechanical/suitable-for-computers character (which it might have had, had Hilbert’s program succeeded), guaranteeing its human/creative nature. Another cluster of mathematical ideas – Computational Complexity Theory – as if counters this. A theorem of length L whose shortest proof is of length more than, say, L^10 is practically not a theorem for us, and proving theorems in the sense of finding a proof of length not more than L^10 (if there is such a proof) is ‘just’ an NP-complete problem, equivalent to any other NP-complete problem (such as coloring a graph by 3 colors) if computation that takes polynomial time is considered ‘easy’, something to be done quickly by machine, in a sense ‘trivial’. In this way mathematics becomes an afternoon riddle – coloring a graph – and only the ‘hope’ that P ≠ NP prevents it from being trivial altogether in the just-mentioned sense. But one might ‘answer’ this (and let the pendulum swing back to ‘human mathematics’) by noting that all this refers to proof as a formal (indeed mathematical) entity, while the proofs that mathematicians think, discuss and write are something different, ‘human and unmathematical’. For example, an author of a mathematical article, or an editor of a journal, endowed by results of Computational Complexity Theory with an efficient formal way to check proofs written formally, still has to check whether the formal version is really a rendering of the ‘informal’ ideas (so the formally-written proof may be found wrong though the ideas are correct).”
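The NP pattern the quoted passage invokes – a given certificate (a proof, a coloring) is checkable in polynomial time, while finding one is a search – can be sketched with the passage’s own example of 3-coloring. A minimal sketch in Python (the function names and the brute-force search are illustrative, not an efficient algorithm):

```python
from itertools import product

def is_valid_coloring(edges, coloring):
    """Polynomial-time verifier: no edge may join two same-colored vertices."""
    return all(coloring[u] != coloring[v] for u, v in edges)

def find_3_coloring(n, edges):
    """Exhaustive search over all 3**n candidate colorings -- exponential,
    just as searching for a bounded-length proof is a search problem."""
    for coloring in product(range(3), repeat=n):
        if is_valid_coloring(edges, coloring):
            return coloring
    return None

# A 4-cycle: verifying any certificate is fast; the search space is 3**4 = 81.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(find_3_coloring(4, edges))  # → (0, 1, 0, 1)
```

The asymmetry is the point: `is_valid_coloring` plays the role of the ‘efficient formal way to check proofs’, while `find_3_coloring` plays the role of the mathematician hunting for a proof of length at most L^10.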