\documentclass[authoryear,preprint]{sigplanconf}
\usepackage{flushend}
\usepackage{agda}
\usepackage{alltt}
\usepackage{fancyvrb}
\usepackage{ucs}
\usepackage[utf8x]{inputenc}
\usepackage{tikz}
\usetikzlibrary{cd}
\usetikzlibrary{quotes}
\usepackage{adjustbox}
\usepackage{amsthm}
\usepackage{latexsym}
\usepackage{MnSymbol}
\usepackage{courier}
\usepackage{thmtools}
\usepackage{bbold}
\usepackage[hyphens]{url}
\usepackage{bbm}
\usepackage{proof}
\usepackage{amstext}
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{comment}
\usepackage{stmaryrd}
\usepackage{listings}
\usepackage{graphicx}
\usepackage{textgreek}
\usepackage{extarrows}
\usepackage{textcomp}
\usepackage{multicol}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Macros
\newcommand{\nboxtimes}[2]{\,\,~{^{#1}\boxtimes^{#2}}~\,\,}
\newcommand{\mm}{\texttt{\textminus}}
\newcommand{\pp}{\texttt{+}}
\newcommand{\inl}[1]{\textsf{inl}~#1}
\newcommand{\inr}[1]{\textsf{inr}~#1}
\newcommand{\idv}[3]{#2 \xrightarrow{#1} #3}
\newcommand{\cp}[3]{#1\stackrel{#2}{\bullet}#3}
\newcommand{\idt}[3]{#2 \equiv_{#1} #3}
\newcommand{\idrt}[3]{#3 \equiv_{#1} #2}
\newcommand{\refl}[1]{\textsf{refl}~#1}
\newcommand{\lid}{\textsf{lid}}
\newcommand{\alt}{~|~}
\newcommand{\rid}{\textsf{rid}}
\newcommand{\linv}{l!}
\newcommand{\rinv}{r!}
\newcommand{\invinv}{!!}
\newcommand{\assoc}{\circ}
\newcommand{\identlp}{\mathit{unite}_+}
\newcommand{\identrp}{\mathit{uniti}_+}
\newcommand{\identlsp}{\mathit{unites}_+}
\newcommand{\identrsp}{\mathit{unitis}_+}
\newcommand{\swapp}{\mathit{swap}_+}
\newcommand{\assoclp}{\mathit{assocl}_+}
\newcommand{\assocrp}{\mathit{assocr}_+}
\newcommand{\identlt}{\mathit{unite}_*}
\newcommand{\identrt}{\mathit{uniti}_*}
\newcommand{\identlst}{\mathit{unites}_*}
\newcommand{\identrst}{\mathit{unitis}_*}
\newcommand{\swapt}{\mathit{swap}_*}
\newcommand{\assoclt}{\mathit{assocl}_*}
\newcommand{\assocrt}{\mathit{assocr}_*}
\newcommand{\distz}{\mathit{dist}_0}
\newcommand{\factorzl}{\mathit{factorl}_0}
\newcommand{\factorzr}{\mathit{factorr}_0}
\newcommand{\absorbl}{\mathit{absorbl}_0}
\newcommand{\absorbr}{\mathit{absorbr}_0}
\newcommand{\dist}{\mathit{dist}}
\newcommand{\factor}{\mathit{factor}}
\newcommand{\distl}{\mathit{distl}}
\newcommand{\factorl}{\mathit{factorl}}
\newcommand{\iso}{\leftrightarrow}
\newcommand{\proves}{\vdash}
\newcommand{\idc}{\mathit{id}}
\newcommand{\ap}[2]{\mathit{ap}~#1~#2}
\newcommand{\evalone}[2]{#1~\triangleright~#2}
\newcommand{\evaloneb}[2]{#1~\triangleleft~#2}
\newcommand{\Rule}[4]{
\makebox{{\rm #1}
$\displaystyle
\frac{\begin{array}{l}#2\\\end{array}}
{\begin{array}{l}#3\\\end{array}}$
#4}}
\newcommand{\jdg}[3]{#2 \proves_{#1} #3}
\newcommand{\adjoint}[1]{#1^{\dagger}}
\newcommand{\pc}{\fatsemi} % path composition
\newenvironment{floatrule}
{\hrule width \hsize height .33pt \vspace{.5pc}}
{\par\addvspace{.5pc}}
\newcommand{\nodet}[2]{\fcolorbox{black}{white}{$#1$}\fcolorbox{black}{gray!20}{$#2$}}
\newcommand{\cubt}{\mathbb{T}}
\newcommand{\ztone}{\mathbb{0}}
\newcommand{\otone}{\mathbb{1}}
\newcommand{\ptone}[2]{#1 \boxplus #2}
\newcommand{\ttone}[2]{#1 \boxtimes #2}
\newcommand{\eqdef}{\stackrel{\triangle}{=}}
\newcommand{\isoone}{\Leftrightarrow}
\newcommand{\lolli}{\multimap}
\newcommand{\isoarrow}{\stackrel{\sim}{\rightarrow}}
\newcommand{\C}{\mathbf{\mathcal{C}}}
%% \DefineVerbatimEnvironment
%% {code}{Verbatim}
%% {} % Add fancy options here if you like.
\DeclareUnicodeCharacter{8345}{\ensuremath{{}_n}}
\DeclareUnicodeCharacter{8347}{\ensuremath{{}_s}}
\DeclareUnicodeCharacter{8718}{\ensuremath{\qed}}
\DeclareUnicodeCharacter{120792}{\ensuremath{\mathbb{0}}}
\DeclareUnicodeCharacter{120793}{\ensuremath{\mathbb{1}}}
\DeclareUnicodeCharacter{9678}{\ensuremath{\odot}}
\DeclareUnicodeCharacter{9636}{\ensuremath{\Box}}
%% shorten the longarrow
\DeclareUnicodeCharacter{10231}{\ensuremath{\leftrightarrow}}
\DeclareUnicodeCharacter{2097}{\ensuremath{{}_l}}
\DeclareUnicodeCharacter{7523}{\ensuremath{{}_r}}
\DeclareUnicodeCharacter{8343}{\ensuremath{{}_l}}
\DeclareUnicodeCharacter{8779}{\ensuremath{\triplesim}}
\DeclareUnicodeCharacter{9679}{\textbullet}
\newtheorem{theorem}{Theorem}
\newtheorem{conj}{Conjecture}
\newtheorem{definition}{Definition}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Comments
\newif\ifcomments\commentstrue
\ifcomments
\newcommand{\authornote}[3]{\textcolor{#1}{[#3 ---#2]}}
\newcommand{\todo}[1]{\textcolor{red}{[TODO: #1]}}
\else
\newcommand{\authornote}[3]{}
\newcommand{\todo}[1]{}
\fi
%% \newcommand{\amr}[1]{}
%% \newcommand{\jc}[1]{}
\newcommand{\jc}[1]{\authornote{purple}{JC}{#1}}
\newcommand{\amr}[1]{\fbox{\begin{minipage}{0.4\textwidth}\color{red}{Amr says: {#1}}\end{minipage}}}
%% agda code indent
\setlength{\mathindent}{0.1cm}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
\title{HoTT Effects}
\subtitle{Functional Pearl}
\authorinfo{}{}{}
\maketitle
\begin{abstract}
Homotopy type theory is much more complex than plain type
theory. The additional complexity can be justified if one is
concerned with applications to homotopy theory. But can this
complexity be justified from a purely type theoretic perspective?
This pearl gives an intuitive answer building on the observation
that proofs are effectful objects.
\end{abstract}
\AgdaHide{
\begin{code}
{-# OPTIONS --without-K #-}
open import Data.Nat hiding (_⊔_)
open import Data.Sum using (_⊎_; inj₁; inj₂)
open import Data.Product using (_×_; _,_; proj₁; proj₂; Σ; Σ-syntax)
import Relation.Binary.Core as C
open import Relation.Binary.PropositionalEquality hiding (_≡_; refl)
open ≡-Reasoning
infix 4 _≡_
_≡_ : ∀ {ℓ} {A : Set ℓ} → (x y : A) → Set ℓ
_≡_ {ℓ} {A} x y = C._≡_ {ℓ} {A} x y
refl : ∀ {ℓ} {A} → (x : A) → x ≡ x
refl {ℓ} {A} x = C.refl {ℓ} {A} {x}
infixr 8 _∘_
_∘_ : ∀ {ℓ} {A : Set ℓ} {x y z : A} →
(x ≡ y) → (y ≡ z) → (x ≡ z)
_∘_ = trans
! : ∀ {ℓ} {A : Set ℓ} {x y : A} → (p : x ≡ y) → (y ≡ x)
! = sym
unitTransL : {A : Set} {x y : A} → (p : x ≡ y) → (p ≡ refl x ∘ p)
unitTransL {x = x} C.refl = refl (refl x)
\end{code}
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\begin{quote}
Indeed, such questions bring us rapidly to problems such as
calculating the homotopy groups of spheres, a long-standing problem in
algebraic topology for which no simple formula is known. Homotopy type
theory brings a new and powerful viewpoint to bear on such questions,
but it also requires type theory to become as complex as the answers
to these questions~\cite[Sec.~6.1]{hottbook}.
\end{quote}
When attempting to explain Homotopy Type Theory (HoTT) to colleagues
or students with a background in functional programming, dependent
types, type theory, proof theory, proof assistants, etc., it is quite
difficult to motivate the sheer explosion in complexity that comes
with the HoTT perspective. Eventually, often after weeks of study, one
may start to appreciate the rich and beautiful combinatorial structure
of proofs in HoTT, but only if one learns enough topology or higher
category theory. Is there really no other way to appreciate and
understand the HoTT infrastructure?

Instead of trying to say more in this introduction, we invite the
reader to read this relatively short pearl, written in the style of an
explanatory essay. We assume familiarity with the basics of dependent
type theory, and we use Agda as our canonical proof assistant.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Identity Types: The $=$ Interpretation}
\label{sec2}
In Martin-L\"of type theory, given a type $A$ and two elements $x : A$
and $y : A$, the proposition that $x$ and $y$ are equal can be
turned into a type $x ≡ y$. If the proposition is false, the type must
be empty:
\medskip
\begin{code}
p₁ : 4 ≡ 3
p₁ = {!!} -- impossible
\end{code}
\medskip
If, however, the proposition is true, the type $x ≡ y$ must contain at
least one element so that we can provide proofs of the proposition
that $x$ and $y$ are equal. The minimum we might expect is to be able
to prove the proposition if the two objects $x$ and $y$ are actually
identical. In our Agda package, we write $\AgdaFunction{refl}~x$
for that proof object:
\medskip
\begin{code}
p₂ : 4 ≡ 4
p₂ = refl 4
p₃ : 2 * 3 ≡ 5 + 1
p₃ = refl (2 + 4)
\end{code}
\medskip
As is customary, we implicitly rely on Agda's evaluator to simplify
certain expressions, and we treat these as identical. The constructor
\AgdaFunction{refl} establishes that our equality relation $≡$, which
relies on Agda evaluation, is reflexive; we naturally also expect it to
be symmetric and transitive (which it is). We also expect at least two
principles which will be crucial for the rest of the paper:
\begin{itemize}
\item congruence: Given a proof of $x ≡ y$ called $p$,
\AgdaFunction{cong} allows us to infer a proof of $F x ≡ F y$ for
any $F$. The latter proof is called $\AgdaFunction{cong}~F~p$. For example:
\medskip
\begin{code}
p₄ : ∀ {m n} → (m ≡ n) → (suc m ≡ suc n)
p₄ p = cong suc p
p₅ : ∀ {m n} → (m ≡ n) → (m * m + 2 * m ≡ n * n + 2 * n)
p₅ p = cong (λ x → x * x + 2 * x) p
\end{code}
\item Leibniz's principle of the \emph{identity of indiscernibles:}
Given a proof of $x ≡ y$ called $p$, any proof of $P(x)$ for some
property~$P$ can be transformed to a proof of $P(y)$. Put
differently, no property $P$ can distinguish two identical
objects. For example:
\medskip
\begin{code}
p₆ : ∀ {m n} → (m ≡ n) → (m ≥ 0) → (n ≥ 0)
p₆ p gm = subst (λ x → x ≥ 0) p gm
p₇ : ∀ {m n} → (m ≡ n) → (∀ z → m ≥ z)→ (∀ z → n ≥ z)
p₇ p bm = subst (λ x → ∀ z → x ≥ z) p bm
\end{code}
\medskip\noindent The second example illustrates that the property $P$
may be false without affecting the principle: if an absurdity holds
for $m$, it also holds for $n$.
\end{itemize}
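For readers who want an executable caricature of these operations, the following plain Python toy mimics \AgdaFunction{refl}, symmetry, transitivity, \AgdaFunction{cong}, and \AgdaFunction{subst}. It is a deliberately crude simulation with made-up names: a path is modeled as nothing but its pair of endpoints, which forgets everything that makes identity proofs interesting.

```python
# Toy, untyped simulation of the identity type (illustrative only):
# a "path" records just its endpoints; the operations below mirror
# refl, symmetry (!), transitivity (composition), cong, and subst.
from dataclasses import dataclass


@dataclass(frozen=True)
class Path:
    src: object
    dst: object


def refl(x):
    return Path(x, x)                    # the trivial proof x ≡ x


def sym(p):
    return Path(p.dst, p.src)            # ! : (x ≡ y) → (y ≡ x)


def trans(p, q):
    assert p.dst == q.src                # endpoints must match up
    return Path(p.src, q.dst)            # composition of proofs


def cong(f, p):
    return Path(f(p.src), f(p.dst))      # any F respects equality


def subst(P, p, evidence_at_src):
    assert P(p.src)                      # evidence about the source...
    return evidence_at_src               # ...transports to the target


assert cong(lambda n: n + 1, refl(3)) == refl(4)
assert trans(refl(4), sym(refl(4))) == refl(4)
```

Of course, in Agda the endpoints are tracked by the type checker rather than by runtime assertions; the toy only conveys the algebra of the operations.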
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Univalence}
\label{sec3}
Given the notion of identity types, a natural question arises. Do we
want to be able to prove propositions like $ℕ ≡ ℕ × ℕ$? Clearly the
sets $ℕ$ and $ℕ × ℕ$ are not identical and are not simplifiable to one
another by the Agda evaluator. However, because there exists a
bijection between them, the sets are often treated as identical. To
capture this richer notion of identity in a formal system, one might
postulate that whenever two sets $A$ and $B$ are related by a
bijection~$≃$, they can be treated as identical $≡$:
\AgdaHide{
\begin{code}
open import Function renaming (_∘_ to _○_)
open import Data.Product using (Σ; _,_)
open import Level using (_⊔_)
open import Data.Bool
open import Algebra
open import Algebra.Structures
open import Data.Unit
infix 4 _∼_ -- homotopy between two functions
infix 4 _≃_ -- type of equivalences
_∼_ : ∀ {ℓ ℓ'} → {A : Set ℓ} {P : A → Set ℓ'} →
(f g : (x : A) → P x) → Set (ℓ ⊔ ℓ')
_∼_ {ℓ} {ℓ'} {A} {P} f g = (x : A) → f x ≡ g x
record qinv {ℓ ℓ'} {A : Set ℓ} {B : Set ℓ'} (f : A → B) :
Set (ℓ ⊔ ℓ') where
constructor mkqinv
field
g : B → A
α : (f ○ g) ∼ id
β : (g ○ f) ∼ id
record isequiv {ℓ ℓ'} {A : Set ℓ} {B : Set ℓ'} (f : A → B) :
Set (ℓ ⊔ ℓ') where
constructor mkisequiv
field
g : B → A
α : (f ○ g) ∼ id
h : B → A
β : (h ○ f) ∼ id
equiv₁ : ∀ {ℓ ℓ'} → {A : Set ℓ} {B : Set ℓ'} {f : A → B} → qinv f → isequiv f
equiv₁ (mkqinv qg qα qβ) = mkisequiv qg qα qg qβ
_≃_ : ∀ {ℓ ℓ'} (A : Set ℓ) (B : Set ℓ') → Set (ℓ ⊔ ℓ')
A ≃ B = Σ (A → B) isequiv
\end{code}
}
\medskip
\begin{code}
postulate
univalence : ∀ {ℓ} {A B : Set ℓ} → (A ≃ B) → (A ≡ B)
\end{code}
\medskip\noindent This is essentially the \emph{univalence axiom} of
HoTT\footnote{\textbf{Warning:} We are glossing over several
subtleties but without affecting the substance of the message.}. In
this paper, we do not focus on the univalence axiom itself except to
note that it has enriched our notion of equality, giving us the
possibility of having proofs of $≡$ that are not just plain
\AgdaFunction{refl}.
\paragraph*{Example.} We assume one of the standard notions of
equivalence from the HoTT book with a constructor
\AgdaInductiveConstructor{mkisequiv} and we construct two equivalences
(bijections) between the type \AgdaDatatype{Bool} and itself. The
first bijection \AgdaFunction{b≃₁} uses the identity function in both
directions and the second bijection \AgdaFunction{b≃₂} uses boolean
negation in both directions. The proofs that these functions
constitute an isomorphism are trivial. Univalence then allows us to
postulate the existence of two, a priori different, elements of the
type $\AgdaDatatype{Bool} ≡ \AgdaDatatype{Bool}$:
\medskip
\begin{code}
b≃₁ : Bool ≃ Bool
b≃₁ = (id , mkisequiv id refl id refl)
notinv : (b : Bool) → not (not b) ≡ b
notinv true = refl true
notinv false = refl false
b≃₂ : Bool ≃ Bool
b≃₂ = (not , mkisequiv not notinv not notinv)
b≡₁ : Bool ≡ Bool
b≡₁ = univalence b≃₁
b≡₂ : Bool ≡ Bool
b≡₂ = univalence b≃₂
\end{code}
\medskip
The two elements \AgdaFunction{b≡₁} and \AgdaFunction{b≡₂} of type
$\AgdaDatatype{Bool} ≡ \AgdaDatatype{Bool}$ that we have produced must
actually be kept distinct. Collapsing them would allow us, in a few
short steps, to identify \AgdaInductiveConstructor{false} and
\AgdaInductiveConstructor{true}, which would render the entire logic
inconsistent. This distinction must be diligently maintained as we use
\AgdaFunction{b≡₁} and \AgdaFunction{b≡₂} to build larger and larger
proofs and establish richer and richer properties. Consider the
following examples:
\medskip
\begin{code}
bb₁ : (Bool ≡ Bool) → ((Bool × Bool) ≡ (Bool × Bool))
bb₁ p = cong (λ a → a × a) p
bb₂ : (Bool ≡ Bool) → ((Bool × Bool) ≡ (Bool × Bool))
bb₂ p = cong (λ a → a × Bool) p
nonEmpty : (A : Set) → Set
nonEmpty A = Σ[ a ∈ A ] ⊤
ne : (Bool ≡ Bool) → nonEmpty Bool → nonEmpty Bool
ne p b• = subst (λ t → nonEmpty t) p b•
ne₁ : nonEmpty Bool
ne₁ = ne b≡₁ (false , tt) -- witness is false
ne₂ : nonEmpty Bool
ne₂ = ne b≡₂ (false , tt) -- witness becomes true
\end{code}
\medskip
We have built various proofs on pairs of booleans that must be kept
distinct. We have also established properties like
\AgdaFunction{nonEmpty} using different witnesses. All this points to
the beginning of a complex combinatorial structure on proofs.
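The distinction between the two proofs can be sensed even in an untyped language. The following small Python sketch (illustrative only; the names are ours) models each equivalence as a pair of mutually inverse functions and "transports" a witness by applying the forward function, mirroring \AgdaFunction{ne₁} and \AgdaFunction{ne₂} above.

```python
# Two bijections Bool ≃ Bool, modeled as (to, from) pairs of mutually
# inverse functions: the identity in both directions, and negation in
# both directions.
identity = (lambda b: b, lambda b: b)          # analogue of b≃₁
negation = (lambda b: not b, lambda b: not b)  # analogue of b≃₂


def transport_witness(equiv, witness):
    # Transporting a witness along an equivalence applies its
    # forward function.
    to, frm = equiv
    return to(witness)


# The same witness is sent to different inhabitants, so the two
# equivalences (and the two equality proofs univalence makes from
# them) must be kept distinct.
print(transport_witness(identity, False))  # False
print(transport_witness(negation, False))  # True
```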
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Homotopy Interpretation: Paths}
One way to understand and reason about the structure of proofs alluded
to in the previous section is to bring in the perspective of homotopy
theory. Informally speaking, types become topological spaces and
identity proofs become ``elastic continuous deformations'' of one
space to another. We are led to abandon the conventional computational
intuitions of type theory and consider something novel. A new
vocabulary accompanies this change of perspective: instead of proofs,
we will speak of paths; instead of congruence of equality, we will
speak of functions acting on paths; and instead of Leibniz's principle of
the identity of indiscernibles, we will speak of transporting
properties along paths.
%%%%%%%%%
\subsection{Paths}
\label{sub:paths}
Switching language, instead of viewing an element of $A ≡ B$ as a
proof of equality, we view it as a path from the space $A$ to the
space~$B$. This is not a priori a symmetric notion but paths come
equipped with additional combinatorial structure:
\begin{itemize}
\item For every path $p : x \equiv y$, there exists an inverse path
  $! p : y \equiv x$;
\item For all paths $p : x \equiv y$ and $q : y \equiv z$, there
  exists a composite path $p \circ q : x \equiv z$;
\item Compositions and inverses satisfy the following conditions:
\begin{itemize}
\item $p \circ \AgdaFunction{refl}~y \equiv p$
\item $p \equiv \AgdaFunction{refl}~x \circ p$
\item $!p \circ p \equiv \AgdaFunction{refl}~y$
\item $p ~\circ~ !p \equiv \AgdaFunction{refl}~x$
\item $!~(!p) \equiv p$
\item $p \circ (q \circ r) \equiv (p \circ q) \circ r$
\end{itemize}
\item The conditions above generate new paths like $\alpha : p \circ
\AgdaFunction{refl}~y \equiv p$ which are themselves subject to the
same conditions one level higher, e.g., $!~(!\alpha) \equiv \alpha$.
\end{itemize}
%%%%%%%%%
\subsection{\AgdaFunction{cong} to \AgdaFunction{ap}}
Continuing the path interpretation, a function from space $A$ to space
$B$ must map the points of $A$ to the points of $B$ as usual but it
must also \emph{respect the path structure}. Topologically, this
corresponds to saying that every function is \emph{continuous}, i.e.,
functions act on paths by continuously deforming them to new paths. We
relabel \AgdaFunction{cong} to \AgdaFunction{ap} to convey this new
meaning:
\medskip
\begin{code}
ap : ∀ {ℓ₁ ℓ₂} {A : Set ℓ₁} {B : Set ℓ₂} →
(f : A → B) → {x y : A} → (x ≡ y) → (f x ≡ f y)
ap = cong
\end{code}
\medskip\noindent and visualize the situation as follows:
\begin{center}
\begin{tikzpicture}[scale=0.6, every node/.style={scale=0.6}]
\draw (-3,0) ellipse (1.5cm and 3cm);
\draw (3,0) ellipse (1.5cm and 3cm);
\draw[fill] (-3,1.5) circle [radius=0.025];
\draw[fill] (-3,-1.5) circle [radius=0.025];
\node[above] at (-3,1.5) {$x$};
\node[below] at (-3,-1.5) {$y$};
\draw[fill] (3,1.5) circle [radius=0.025];
\draw[fill] (3,-1.5) circle [radius=0.025];
\node[above] at (3,1.5) {$f(x)$};
\node[below] at (3,-1.5) {$f(y)$};
\draw[->,cyan,thick] (-3,1.5) -- (-3,-1.5);
\node[left,cyan] at (-3,0) {$p$};
\draw[->,cyan,thick] (3,1.5) -- (3,-1.5);
\node[right,cyan] at (3,0) {$\mathit{ap}~f~p$};
\draw[->,red,dashed,ultra thick] (-2,2.5) to [out=45, in=135] (2,2.5);
\node[red,below] at (0,3) {$f$};
\end{tikzpicture}
\end{center}
%%%%%%%%%
\subsection{\AgdaFunction{subst} to \AgdaFunction{transport} and \AgdaFunction{apd}}
To understand the new meaning of \AgdaFunction{subst}, we must
consider how dependent functions act on paths as visualized below:
\begin{center}
\begin{tikzpicture}[scale=0.6, every node/.style={scale=0.6}]
\draw (-3,0) ellipse (1.5cm and 3cm);
\draw (3,2) ellipse (0.5cm and 1cm);
\draw (3,-1.5) ellipse (2cm and 2cm);
\node[blue,ultra thick,above] at (3,3) {$P(x)$};
\node[blue,ultra thick,below] at (3,-3.5) {$P(y)$};
\draw[fill] (-3,1.5) circle [radius=0.025];
\draw[fill] (-3,-1.5) circle [radius=0.025];
\node[above] at (-3,1.5) {$x$};
\node[below] at (-3,-1.5) {$y$};
\draw[fill] (3,1.5) circle [radius=0.025];
\draw[fill] (3,-0.5) circle [radius=0.025];
\draw[fill] (3,-2.5) circle [radius=0.025];
\node[above] at (3,1.5) {$f(x)$};
\node[above] at (3,-0.5) {$\mathit{transport}~P~p~f(x)$};
\node[below] at (3,-2.5) {$f(y)$};
\draw[left,cyan,thick] (-3,1.5) -- (-3,-1.5);
\node[left,cyan] at (-3,0) {$p$};
\draw[->,cyan,thick] (3,-0.5) -- (3,-2.5);
\node[right,cyan] at (3,-1.5) {$\mathit{apd}~f~p$};
\draw[->,red,dashed,ultra thick] (-2,2.5) to [out=45, in=135] (2.3,2.5);
\node[red,below] at (0,3) {$f$ (to fiber $P(x)$)};
\draw[->,red,dashed,ultra thick] (-2,-2.5) to [out=-45, in=-135] (1.2,-2.5);
\node[red,above] at (-0.5,-2.5) {$f$ (to fiber $P(y)$)};
\draw[->,red,dashed,ultra thick] (3.6,2.3) to [out=-45, in=45] (3.5,0.6);
\node[red,right] at (3.9,1.45) {$\mathit{transport}~P~p$};
\end{tikzpicture}
\end{center}
Because $f$ is a dependent function, the images $f(x)$ and $f(y)$ of
the endpoints of the path $p$ end up in separate spaces $P(x)$ and $P(y)$
respectively. Thus the way $f$ acts on $p$ is more complicated: we
require that one of the endpoints be \emph{transported} to the other
space first. The transport is exactly what we used to call \AgdaFunction{subst}:
\medskip
\begin{code}
transport : ∀ {ℓ₁ ℓ₂} {A : Set ℓ₁} (B : A → Set ℓ₂) →
{x y : A} → (x ≡ y) → B x → B y
transport = subst
apd : ∀ {ℓ₁ ℓ₂} → {A : Set ℓ₁} {B : A → Set ℓ₂} →
(f : (x : A) → B x) → {x y : A} → (p : x ≡ y) →
transport B p (f x) ≡ f y
apd f C.refl = C.refl
\end{code}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{The Topological Circle}
With the topological perspective in mind, the combinatorial complexity
of proofs (paths) becomes justified. Indeed, understanding the basic
shape of a topological space (its holes) amounts to understanding the
higher-order structure of paths in a concrete way. We explain this
point in detail using one example: the topological circle. This space
refers to the one-dimensional circumference of the two-dimensional
disk in the plane. Abstractly speaking, the space can be specified as
follows:\footnote{Agda is not designed for modeling higher-inductive
types that include points and paths. A proper encoding of the circle
is possible but would be distracting. We present the main
ingredients as postulates for clarity.}
\medskip
\begin{code}
module S¹ where
postulate
S¹ : Set₁
base : S¹
loop : base ≡ base
\end{code}
\medskip
We postulate the existence of a point \AgdaFunction{base} lying on the
circle and a way to go from this point to itself using a path called
\AgdaFunction{loop}. A priori there is another path from
\AgdaFunction{base} to itself,
$\AgdaFunction{refl}~\AgdaFunction{base}$, which involves not
moving at all. Because paths are closed under composition, there is
yet another way to go from \AgdaFunction{base} to itself that involves
going around the loop twice and hence evidently $n$ times for
arbitrary $n$. Also, because paths are closed under inverses, there is
a path $!~\AgdaFunction{loop}$ that involves going in the reverse
direction of \AgdaFunction{loop}; this also induces $n$-fold
iterations of this reverse path. In fact there are now many more paths
like
$(\AgdaFunction{loop} ∘ \AgdaFunction{loop}) ∘ ~!~\AgdaFunction{loop}$
but as explained in Sec.~\ref{sub:paths}, we can reason as follows:
\[\begin{array}{rcl}
&& (\AgdaFunction{loop} ∘ \AgdaFunction{loop}) ∘ ~!~\AgdaFunction{loop} \\
&≡& \AgdaFunction{loop} ∘ (\AgdaFunction{loop} ∘ ~!~\AgdaFunction{loop}) \\
&≡& \AgdaFunction{loop} ∘ (\AgdaFunction{refl}~\AgdaFunction{base}) \\
&≡& \AgdaFunction{loop}
\end{array}\]
to eliminate these paths. It turns out that the resulting
non-trivial structure of paths in the case of the circle is as
depicted below:
\begin{center}
\begin{tikzpicture}[scale=0.6, every node/.style={scale=0.6}]
\draw (0,0) ellipse (5.5cm and 2.5cm);
\draw[fill] (0,0) circle [radius=0.025];
\draw[->,thick,red] (0,0) arc (90:440:0.2);
\node[above,red] at (0,0) {refl};
\draw[->,thick,cyan] (0,0) arc (-180:140:0.7);
\draw[->,thick,cyan] (0,0) arc (-180:150:1.2);
\node[left,cyan] at (1.4,0) {loop};
\node[right,cyan] at (2.4,0) {loop $\circ$ loop $\ldots$};
\draw[->,thick,blue] (0,0) arc (360:40:0.7);
\draw[->,thick,blue] (0,0) arc (360:30:1.2);
\node[right,blue] at (-1.4,0) {!~loop};
\node[left,blue] at (-2.4,0) {$\ldots$ !~loop $\circ$ !~loop};
\end{tikzpicture}
\end{center}
\noindent Generally speaking, calculating the precise structure of
such path spaces is a difficult problem in algebraic topology. The
quote at the beginning of the paper argues that the complications to
type theory brought on by the homotopy interpretation are not
gratuitous: they reflect genuine mathematical structure of topological
spaces. To appreciate this, it is necessary to look at how one would
prove anything about the circle or more generally about topological
spaces as represented in HoTT.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Recursion and Induction Principles}
As is customary in type theory, reasoning about the properties of a
type $A$ is expressed using an appropriate recursion or induction
principle for $A$. We first review the situation for the familiar type
of the natural numbers and then move on to the more interesting
situation of a topological space.
%%%
\subsection{Recursion and Induction for the Natural Numbers}
The recursion and induction principles for the natural numbers are expressed as follows:
\medskip
\begin{code}
recℕ : ∀ {ℓ} → (C : Set ℓ) →
C → (ℕ → C → C) → ℕ → C
recℕ C c f 0 = c
recℕ C c f (suc n) = f n (recℕ C c f n)
indℕ : ∀ {ℓ} → (C : ℕ → Set ℓ) →
C 0 → ((n : ℕ) → C n → C (suc n)) → (n : ℕ) → C n
indℕ C c f 0 = c
indℕ C c f (suc n) = f n (indℕ C c f n)
\end{code}
\medskip
First note that only the types differ; the implementation is
identical. In fact, the only difference between the two principles is
that the induction principle uses dependent functions whereas the
recursion principle uses non-dependent functions. To gain some
intuition, we show two simple functions defined using the recursion
principle and one property established using the induction principle:
\medskip
\begin{code}
double : ℕ → ℕ
double = recℕ ℕ 0 (λ n r → suc (suc r))
add : ℕ → ℕ → ℕ
add = recℕ (ℕ → ℕ) (λ n → n) (λ m r n → suc (r n))
add-assoc : (i j k : ℕ) →
add i (add j k) ≡ add (add i j) k
add-assoc =
indℕ
(λ i → (j k : ℕ) → add i (add j k) ≡ add (add i j) k)
(λ j k → refl (add j k))
(λ i p j k → cong suc (p j k))
\end{code}
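For comparison, \AgdaFunction{recℕ} transcribes directly into a mainstream language. Here is a Python sketch (our own names, assuming non-negative integer inputs; the dependently typed \AgdaFunction{indℕ} has no such direct analogue) of the recursor and the two functions above:

```python
# rec_nat c f is primitive recursion over the naturals: it is the
# direct analogue of recℕ C c f, with n assumed to be >= 0.
def rec_nat(c, f, n):
    if n == 0:
        return c                          # base case: recℕ C c f 0 = c
    return f(n - 1, rec_nat(c, f, n - 1))  # step: f n (recℕ C c f n)


def double(n):
    # double = recℕ ℕ 0 (λ n r → suc (suc r))
    return rec_nat(0, lambda _, r: r + 2, n)


def add(m, n):
    # add = recℕ (ℕ → ℕ) (λ n → n) (λ m r n → suc (r n)):
    # recursion on the first argument returns a function of the second.
    return rec_nat(lambda k: k, lambda _, r: (lambda k: 1 + r(k)), m)(n)


print(double(5))  # 10
print(add(3, 4))  # 7
```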
%%%
\subsection{Recursion and Induction for the Circle}
Defining the recursion and induction principles for interesting
topological spaces is more challenging. We list the recursion and
induction principles for the circle and then spend the remainder of
this section explaining them.
\medskip
\begin{code}
module S¹reasoning where
open S¹
record rec (B : Set₁) (b : B) (p : b ≡ b) : Set₁ where
field
f : S¹ → B
α : f base ≡ b
β : ! α ∘ ap f loop ∘ α ≡ p
record ind (P : S¹ → Set) (b : P base)
(p : transport P loop b ≡ b) : Set₁ where
field
f : (x : S¹) → P x
α : f base ≡ b
β : ! (ap (transport P loop) α) ∘ apd f loop ∘ α ≡ p
\end{code}
\medskip
Both the recursion and induction principles allow us to write
functions from $S¹$ to some other space with enough structure, i.e., a
space with at least a point and a loop on that point. The defined
function is non-dependent in the case of the recursion principle and
is dependent in the case of the induction principle. Both the
recursion and induction principles postulate the existence of the
function and how it acts on the point \AgdaFunction{base} and path
\AgdaFunction{loop}. In both cases, the function sends the point
\AgdaFunction{base} to the point \AgdaFunction{b} as captured by
$\alpha$.\footnote{Our presentation is slightly more complicated than
the one in the book~\cite{hottbook} because $\alpha$ is expressed as
a propositional equality. This is harmless.} To understand the type
of $\beta$, first in the case of the recursion principle, it is
helpful to visualize the situation:
\begin{center}
\begin{tikzpicture}[scale=0.6, every node/.style={scale=0.6}]
\draw (-3,0) ellipse (1.5cm and 3cm);
\draw (3,0) ellipse (3cm and 3cm);
\node[blue,ultra thick,above] at (-3,3) {$S¹$};
\node[blue,ultra thick,above] at (3,3) {$B$};
\draw[fill] (-3,1.5) circle [radius=0.025];
\draw[fill] (-3,-1.5) circle [radius=0.025];
\node[above] at (-3,1.5) {\AgdaFunction{base}};
\node[below] at (-3,-1.5) {\AgdaFunction{base}};
\draw[fill] (1.5,1.5) circle [radius=0.025];
\node[above] at (1.5,1.5) {$f(\AgdaFunction{base})$};
\draw[fill] (1.5,-1.5) circle [radius=0.025];
\node[below] at (1.5,-1.5) {$f(\AgdaFunction{base})$};
\draw[->,left,cyan,thick] (1.5,1.5) -- (1.5,-1.5);
\node[left] at (1.5,0) {$\AgdaFunction{ap}~f~\AgdaFunction{loop}$};
\draw[fill] (4.5,1.5) circle [radius=0.025];
\node[above] at (4.5,1.5) {$b$};
\draw[->,left,cyan,thick] (1.5,1.5) -- (4.5,1.5);
\node[above] at (3,1.5) {$\alpha$};
\draw[->,left,cyan,thick] (1.5,-1.5) -- (4.5,-1.5);
\node[below] at (3,-1.5) {$\alpha$};
\node[below] at (4.5,-1.5) {$b$};
\draw[->,left,cyan,thick] (4.5,1.5) to [out=-45, in=45] (4.5,-1.5);
\node[right] at (5.1,0) {$p$};
\draw[->,left,dashed,cyan,thick] (4.5,1.5) to [out=-135, in=135] (4.5,-1.5);
%% \node[left] at (3.9,0) {$\AgdaFunction{transport}\ldots$};
\draw[->,double,red,thick] (4,0) -- (5,0);
\node[below,red] at (4.5,0) {$\beta$};
\node[above] at (3,1.5) {$\alpha$};
\draw[->,left,cyan,thick] (-3,1.5) -- (-3,-1.5);
\node[left,cyan] at (-3,0) {\AgdaFunction{loop}};
\draw[->,red,dashed,ultra thick] (-2,2.5) to [out=45, in=135] (2,2.8);
\node[red,below] at (0,3.2) {$f$};
\end{tikzpicture}
\end{center}
Intuitively, in order to define the ``recursive'' function $f$ from
$S¹$ to some space $B$, the user must provide a point in $B$
corresponding to $f(\AgdaFunction{base})$ and a path
$\AgdaFunction{ap}~f~\AgdaFunction{loop}$ of type
$f(\AgdaFunction{base}) ≡ f(\AgdaFunction{base})$. We also have in the
space $B$ two known paths: $\alpha$ which relates
$f(\AgdaFunction{base})$ to $\AgdaFunction{b}$ and the path
$\AgdaFunction{p} : \AgdaFunction{b} ≡ \AgdaFunction{b}$ which is
assumed to exist because the target space must have enough structure. What
$\beta$ asserts is that this path~$\AgdaFunction{p}$ cannot be an
arbitrary path in $B$: it must incorporate whatever the action of $f$
on \AgdaFunction{loop} was. In the figure, it is apparent that the
obvious way to relate $\AgdaFunction{ap}~f~\AgdaFunction{loop}$ and
\AgdaFunction{p} is to insist that $p$ be equivalent to $! \alpha ∘
\AgdaFunction{ap}~f~\AgdaFunction{loop} ∘ \alpha$ which
is represented by the dashed path that follows the path from the top
occurrence of $b$ to the bottom one in the counterclockwise direction.
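That is, the dashed path imposes the constraint:
\[
p ≡~! \alpha ∘ \AgdaFunction{ap}~f~\AgdaFunction{loop} ∘ \alpha
\]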
There is another presentation of the dashed path that provides
additional insight. This dashed path can be written as:
\[
\AgdaFunction{transport}~(\lambda x → x ≡ x)~\alpha~(\AgdaFunction{ap}~f~\AgdaFunction{loop})
\]
which, in the terminology of homotopy theory, means that it
corresponds to transporting the path
$(\AgdaFunction{ap}~f~\AgdaFunction{loop})$ along $\alpha$. The two
perspectives are equivalent as the following lemma shows.
%% β : transport (λ x → x ≡ x) α (ap f loop) ≡ p
%% β : transport
%% (λ x → transport P loop x ≡ x) α (apd f loop)
%% ≡ p
\medskip
\begin{code}
transportId : ∀ {ℓ} {A : Set ℓ} {y z : A} →
  (p : y ≡ z) → (q : y ≡ y) →
  transport (λ x → x ≡ x) p q ≡ ! p ∘ q ∘ p
transportId {y = y} C.refl q =
  begin (transport (λ x → x ≡ x) C.refl q
           ≡⟨ refl q ⟩
         q
           ≡⟨ unitTransR q ⟩
         q ∘ refl y
           ≡⟨ refl (q ∘ refl y) ⟩
         ! C.refl ∘ q ∘ C.refl ∎)
  where
    unitTransR : ∀ {ℓ} {A : Set ℓ} {x y : A} →
      (p : x ≡ y) → (p ≡ p ∘ refl y)
    unitTransR {x = x} C.refl = refl (refl x)
\end{code}
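Instantiating the lemma with $p := \alpha$ and
$q := \AgdaFunction{ap}~f~\AgdaFunction{loop}$ yields precisely the
equality between the two presentations of the dashed path:
\[
\AgdaFunction{transport}~(\lambda x → x ≡ x)~\alpha~(\AgdaFunction{ap}~f~\AgdaFunction{loop})
≡~! \alpha ∘ \AgdaFunction{ap}~f~\AgdaFunction{loop} ∘ \alpha
\]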
\medskip
There is a third perspective on the dashed path and its connection to
\AgdaFunction{loop} and \AgdaFunction{p} that is useful to keep in
mind: a programming language theory perspective. Both
\AgdaFunction{refl}~\AgdaFunction{base} and
\AgdaFunction{loop} are paths in the space $\AgdaFunction{base} ≡
\AgdaFunction{base}$. The first is a trivial path, which we will
think of as a pure function with no side effects; the second is a path
that cannot be identified with the trivial \AgdaFunction{refl} path,
which we will think of as a function with possible side effects. The
function~$f$ is arbitrary, but given that \AgdaFunction{loop} is
viewed as an object with side effects, its action on
\AgdaFunction{loop} may also have side effects. Therefore, when moving
from the space $S¹$ to
another space $B$, the side effects in the action of $f$ on
\AgdaFunction{loop} must be taken into account and the path
$\AgdaFunction{p}$ can neither discard nor duplicate these side
effects. We will develop this perspective further in the next section.
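As a first taste of that perspective, the following toy model (ours,
and only an analogy: it is not part of the formal development)
represents a path in $\AgdaFunction{base} ≡ \AgdaFunction{base}$ by
its winding number: \AgdaFunction{refl} is $0$, \AgdaFunction{loop} is
$+1$, composition is addition, and inversion is negation.

```python
# Toy model (ours, an analogy only): a path in base ≡ base is
# represented by its winding number, an integer counting net
# traversals of loop.
refl = 0      # trivial path: no effect
loop = 1      # one traversal of loop: one unit of "effect"

def compose(p, q):
    """Path composition p ∘ q: effects accumulate."""
    return p + q

def invert(p):
    """Path inversion ! p: the effect experienced in reverse."""
    return -p

def ap(f_effect, p):
    """Action on paths of a map f that performs f_effect units of
    effect per traversal of loop."""
    return f_effect * p

# Effects cancel only when experienced in the opposite direction ...
assert compose(loop, invert(loop)) == refl
# ... and can be neither discarded nor duplicated:
assert compose(loop, loop) != loop
assert ap(2, compose(loop, loop)) == compose(ap(2, loop), ap(2, loop))
```

The last assertion is the functoriality of \AgdaFunction{ap} in this
model: the action of $f$ scales the effect uniformly, so it can
neither drop a traversal of \AgdaFunction{loop} nor invent an extra
one.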

The induction principle for $S¹$ differs from the recursion principle
in two aspects: first the dependent version \AgdaFunction{apd} is used
instead of \AgdaFunction{ap}; second the path $p$ is not just of type
$\AgdaFunction{b} ≡ \AgdaFunction{b}$ but rather of type
$\AgdaFunction{transport}~P~\AgdaFunction{loop}~\AgdaFunction{b} ≡
\AgdaFunction{b}$.
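Schematically, writing $\AgdaFunction{indS¹}$ for the induction
principle (the name and layout here are only indicative), its type has
the shape:
\[
\AgdaFunction{indS¹} : (P : S¹ → \mathsf{Set}) →
(b : P~\AgdaFunction{base}) →
(\AgdaFunction{transport}~P~\AgdaFunction{loop}~b ≡ b) →
(x : S¹) → P~x
\]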

To understand this subtlety we reason as follows. Recall that the
circle is constructed from a point \AgdaFunction{base} and a path
\AgdaFunction{loop} on which we can travel starting from
\AgdaFunction{base} all the way around back to \AgdaFunction{base}. A
property $P$ of $S¹$ must consider both the point and the action of
traveling along \AgdaFunction{loop}. In the induction principle,
$\AgdaFunction{b}$ is a proof that the property holds for
$\AgdaFunction{base}$ and $\AgdaFunction{p}$ is a proof that the
property of \AgdaFunction{base} established by \AgdaFunction{b}
remains true when transported along \AgdaFunction{loop}. Note that
Agda does not care about this subtlety and would happily typecheck an
``induction principle'' in which \AgdaFunction{p} had the type
$\AgdaFunction{b} ≡ \AgdaFunction{b}$. Thinking again of the effects
perspective, the proof of $P(\AgdaFunction{base})$ must be robust and
hold even if the side effects represented by \AgdaFunction{loop} are
taken into account.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Effectful Paths and Proofs}
The above material, except perhaps for its exposition, is
standard. The goal was to show enough of HoTT to get an appreciation
of its complexity and understand that the main reason for this
complexity is the existence of non-trivial paths which, in some
intuitive sense, can be thought of as operations with side effects. To
sum it up, given a space~$A$, two points $x$ and $y$ in $A$, and two
proofs $p$ and $q$ of type $x ≡ y$, we, in general, know nothing about
the relationship between $p$ and~$q$. It is true that they are both
proofs of the identity $x ≡ y$, but these proofs may differ in a
critical way that would make it inconsistent to identify them. In the
topological world, it is easy to visualize such a situation:
\begin{center}
\begin{tikzpicture}[scale=0.6, every node/.style={scale=0.6}]
\draw (0,0) rectangle (5,5);
\node[left] at (0,0) {$x$};
\node[right] at (5,5) {$y$};
\fill[blue!40!white] (2,2) rectangle (3,3);
\draw[->,cyan,thick] (0,0) to [out=10, in=-100] (5,5);
\draw[->,cyan,thick] (0,0) to [out=80, in=-170] (5,5);
\end{tikzpicture}
\end{center}
The idea is that the large space $A$ (the square) has a hole (in
blue). The two paths connect $x$ and $y$ but are separated by the hole
and hence there is no way to perform a continuous deformation (without
cutting and gluing) of one path into the other. The question is
whether a similar intuition can be developed in a conventional
computational setting.
%%%
\subsection{Concurrency}
A first hint comes from a paper
by~\citet{Gunawardena:2001:HC:377786.377845} which views the diagram
above as representing the concurrent execution of two processes. One
process $H$ makes progress depicted along the horizontal dimension and
the other $V$ makes progress depicted along the vertical
dimension. The paths from $x$ to $y$ correspond to all the legal
interleavings of the two processes. Now assume that during their
computation, the processes need to acquire exclusive access to a
shared resource. As soon as process $H$ acquires the resource, all
interleavings moving up in the vertical direction are forbidden until
the resource is released and symmetrically if process $V$ acquires the
resource all interleavings moving to the right in the horizontal
direction are forbidden until the resource is released. In other
words, the blue square represents a forbidden zone. Furthermore, all
interleavings to the right of the blue square are equivalent
executions in which process $H$ acquires the resource first and
similarly all interleavings going to the left of the blue square are
equivalent executions in which process $V$ acquires the resource
first. It would clearly be incorrect to equate executions that are on
different sides of the blue square.
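To make the picture concrete, here is a small sketch (the step counts
and encoding are ours) that enumerates the interleavings of two
four-step processes whose middle two steps require an exclusive lock,
and checks that every legal interleaving falls on one of the two sides
of the forbidden zone:

```python
from itertools import permutations

# Toy model (ours) of the square diagram: processes H and V each take
# four steps; steps 1 and 2 of each (0-indexed) form a critical
# section requiring an exclusive lock -- the blue forbidden zone.
STEPS = 4
CRITICAL = (1, 2)

def critical_order(schedule):
    """Owners of the critical steps, in execution order."""
    pos = {'H': 0, 'V': 0}
    order = []
    for who in schedule:
        if pos[who] in CRITICAL:
            order.append(who)
        pos[who] += 1
    return order

def legal(schedule):
    # The lock is exclusive: one process must finish both of its
    # critical steps before the other starts its own.
    return critical_order(schedule) in (['H', 'H', 'V', 'V'],
                                        ['V', 'V', 'H', 'H'])

schedules = set(permutations('H' * STEPS + 'V' * STEPS))
legal_scheds = [s for s in schedules if legal(s)]

# Every legal interleaving lies on one side of the hole or the other:
# either H acquires the lock first or V does; nothing in between.
classes = {critical_order(s)[0] for s in legal_scheds}
assert classes == {'H', 'V'}
assert 0 < len(legal_scheds) < len(schedules)
```

Intuitively, exchanging two adjacent independent steps deforms a
schedule into a nearby one, but no sequence of such exchanges can
carry a schedule across the forbidden zone, mirroring the two
non-deformable paths around the hole.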

The connection to concurrency outlined above suggests that paths can
be viewed as embodying conventional computational effects related to
acquiring and releasing locks and not just the special effects arising
from the homotopy theoretic interpretation (if one may call the
trajectory of a path around holes in the space a form of
computational effect). Even more generally, and abstracting from the
particular example of concurrency theory, we argue that paths may be
viewed as embodying some general notion of computational effects.
%%%
\subsection{Effectful Proofs}
Forgetting about the homotopy theoretic interpretation and returning
to the terminology of Secs.~\ref{sec2} and~\ref{sec3}, objects of type
$x ≡ y$ are proofs of the equivalence of $x$ and $y$. If we postulate
for a moment that these proofs may embody computational effects, then
all the special treatment of paths arising from the homotopy
interpretation can be justified on purely computational grounds by
appealing to the familiar reasoning laws governing computational
effects using monads, type-and-effect systems, effect handlers,
etc. For example, if we imagine that \AgdaFunction{loop} in the
definition of $S¹$ represents a proof with a computational effect,
then given an arbitrary function from $S¹$ to some space $B$, it is
natural to expect the proof object
$\AgdaFunction{ap}~f~\AgdaFunction{loop}$ to also have a computational
effect, in which case this proof cannot in general be equivalent to
another proof with no effects or with different computational
effects. Similarly, following \AgdaFunction{loop} twice, thus
experiencing the computational effect twice, would not necessarily be
the same as experiencing it once. However, in order to be consistent
with the tenets of HoTT it should be possible to experience the
computational effect in the opposite direction in a way that cancels
out.\footnote{This reversibility of computational effects does not