TITL:
*Math Expectations*
*As Applied to X-Risk Research*
*By Forrest Landry*
*Oct 15, 2022*.
ABST:
A review of a process collision
between what is expected around
the notion of 'formality in reasoning'
in AGI/APS x-risk assessments,
and what sorts of people are actually needed
to do this kind of work.
TEXT:
> What is your background?
> How is it relevant to the work
> you are planning to do?
Years ago, we started with a strong focus on
civilization design and mitigating x-risk.
These are topics that require
broad generalist capabilities, in many fields,
not just specialist capabilities
in any one single field of study or application.
Hence, as generalists,
we are not specifically persons
who are career mathematicians,
nor even career physicists, chemists,
or career biologists, anthropologists,
or even career philosophers.
Yet when considering the needs
of topics like civ-design and/or x-risk,
it is abundantly clear
that some real skill and expertise
is actually needed in all of these fields.
Understanding anything about x-risk
and/or civilization means needing
to understand key topics regarding
large scale institutional process,
ie; things like governments, businesses,
universities, constitutional law, social
contract theory, representative process,
legal and trade agreements, etc.
Yet people who study markets, economics,
and politics (theory of groups, firms, etc)
who do not also have some real grounding
in actual sociology and anthropology,
are not going to have grounding in
understanding why things happen
in the real world as they tend to do.
And those people are going to need
to understand things like psychology,
developmental psych, theory of education,
interpersonal relationships, attachment,
social communication dynamics, health
of family and community, trauma, etc.
And understanding *those* topics means
having a real grounding in evolutionary theory,
bio-systems, ecology, biology, neurochemistry
and neurology, ecosystem design, permaculture,
and evolutionary psychology, theory of bias, etc.
It is hard to see that we would be able to assess
things like 'sociological bias' as impacting
possible mitigation strategies of x-risk,
if we do not actually also have some real,
deep, informed, and realistic accounting
of the practical implications, in the world,
of *all* of these categories of ideas.
And yet, unfortunately, that is not all,
since understanding of *those* topics themselves
means even more and deeper grounding
in things like organic and inorganic chemistry,
cell process, and the underlying *physics*
of things like that.
Which therefore includes a fairly general
understanding of multiple diverse areas of physics
(mechanical, thermal, electromagnetic, QM, etc),
and thus also of technology -- since that is
directly connected to business, social systems,
world systems infrastructure, internet,
electrical grid and energy management,
transport (for fuel, materials, etc), and
even more politics, advertising and marketing,
rhetorical process and argumentation, etc.
Oh, and of course, a deep and applied
practical knowledge of 'computer science',
since nearly everything in the above
is in one way or another "done with computers".
Maybe, of course, that would also be relevant
when considering the specific category of x-risk
which happens to involve computational concepts
when thinking about artificial superintelligence.
I *have* been a successful practicing engineer
in both large scale US-gov deployed software
and also in product design shipped to millions.
I have personally written more than 900,000
lines of code (mostly Ansi-C, ASM, Javascript)
and have been 'the principal architect' in a team.
I have developed my own computing environments,
languages, procedural methodologies, and
system management tactics, over multiple
process technologies in multiple applied contexts.
I have a reasonably thorough knowledge of CS.
Including the modeling math, control theory, etc.
Ie, I am legitimately "full stack" engineering
from the physics of transistors, up through
CPU design, firmware and embedded systems,
OS level work, application development,
networking, user interface design, and
the social process implications of systems.
I have similarly extensive accomplishments
in some of the other listed disciplines also.
As such, as a proven "career" generalist,
I am also (though not just) a master craftsman,
which includes things like practical knowledge
of how to negotiate contracts,
write all manner of documents,
make all manner of things,
*and* understand the implications
of *all* of this
in the real world, etc.
For the broad category of
valid and reasonable x-risk assessment,
nothing less than
at least some true depth
in nearly *all* of these topics
will do.
:d4e
> You claim that you are going to do a proof.
> So therefore, we will have the expectation
> that you will construct your formal proof
> in the conventional notation/language
> that other specialized mathematicians
> are familiar with.
> And therefore also the *expectations*
> that you will _look_like_ a career mathematician;
> ie; one that has established PhD credentials
> at an accredited university specifically
> for something like advanced topos theory,
> and who also knows about all of the latest
> work in that one field (or in general etc),
> someone with multiple published papers
> demonstrating some original theorems,
> that has also been reviewed and accepted
> by the larger mathematics community, etc,
> at least some people in which
> can vouch for you, as being a respected
> mathematician in that community, etc.
As mentioned, we do not, did not, and have not
claimed to be career mathematicians.
So therefore, some of your expectations
do not seem all that relevant here.
Just tacitly assuming that we 'should' be,
or are 'just mathematicians',
seems like a mistake.
We are *generalists* --
and that is actually what we
actually should be.
Moreover, in the case of constructing a proof
applicable to final conditions of the real world,
we cannot just assume we can restrict ourselves
to just the deterministic domains of mathematics
(eg; algorithmic computability).
Any initial conditions (and other premises)
set at the start of any sort of proof claims
must also correspond empirically and *soundly*
with the dynamics that actually will show up
in practice, in the real situations,
not just in the model.
We are attempting to describe something about
the limits of modeling, *in this application*.
One thus notices that more than *just*
a principled understanding of formal deductive reasoning
is required to derive an impossibility result
regarding actual machine classes in the world:
Since we are dealing with code
that is learned through
statistical approximation/optimization methods,
stored as abstraction layers
of a software/firmware/hardware stack,
and computed/routed as message transmissions
from/to peripherals,
we must have a sound understanding
of relevant domains in computer science.
Given the side-channel-effects of
and noise interference across
the signal transmissions
between AGI/APS internals
and their connected physical surroundings,
a solid grasp of fundamental laws and limits of physics,
of entropy and of information theory,
and of error detection and correction methods
is also required.
Given signal feedback loops
between AGI internals and surroundings,
a principled understanding of cybernetics,
and of nonlinear (chaotic) dynamics
is also required.
Since (digital) code is necessarily embedded
as part of an assembled molecular substrate
and computed/expressed through that substrate,
this necessitates a principled understanding
of molecular chemistry.
Since the existence and continued computation
(actual non-halting) of that code,
over the long term,
depends on the reproduction of
a compatible molecular assembly
and on the preservation of
that assembly's functional integrity,
this requires an understanding of
basic manufacturing processes
and molecular assembly theory.
Since the rate
of reproduction and preservation/survival
of (the learned variants of) code
held within a substrate
is subject to (feedback from)
outside environmental conditions,
this requires a principled understanding of
evolutionary developmental biology
(as including function co-option
and extended phenotypes)
and of eco(toxi)cology.
Since artificial substrate configurations of
(self-learning, generally functional) code
would need and could fulfill different conditions
for continued existence and growth,
their supply of and demand for resources
across an extended artificial ecosystem
would come to differ from the old ecosystem.
Careful analysis is needed
to model inter-ecosystem supply-demand differences,
of equilibria in (the absence of)
inter-ecosystem resource exchanges
and game-theoretical interactions,
and of the 'offense-defense balance'
in available (info/biochem/physical)
attack/leakage vectors
and protective/containment barriers.
Only at this point --
having carefully compiled work
in the above domains (and more)
over the preceding 15 years --
do we have a comprehensive enough
*empirical basis*
from which to even *begin* constructing
a formal and symbolized argument regarding
the long-term ecosystem-terminating feedback dynamics
that we are describing.
Jumping straight to utilizing existing models
without having some sense of which and why
(including of idealized computation)
would be undisciplined, and ill-advised.
Where/if we wanted to construct something
that is focused on *only* just validity,
(by the rules of logic alone,
as a basis of mathematics),
then we could/would have constructed
an overly simplified toy model.
The only problem is that then
everyone would simply discount
all of the work as "not relevant" --
ie, does not correspond to the real world
ie; that validity without soundness/relevance
is simply useless, in this case.
If the point was only just to show
that we could write out some ideas
in conventional notations and forms,
then we would instead simply
hire a specialized mathematician.
Instead, we need to set up careful claims
that *soundly correspond with*
the causal dynamics in the real world --
requiring us to be disciplined about
selecting only for properties and laws
that are generally applicable
and well-established through induction
(ie; through multiple layers of
iterative empirical observation/falsification).
:d9e
> If you do not have the skill of
> a formal mathematician,
> then you cannot claim to be
> "doing a proof", insofar as the
> absence of complete knowledge of
> all of the kinds of things that can
> go wrong with reasoning,
> have not been accounted for.
> People have noticed that even
> having lots and lots of evidence
> for a single specific claim
> can sometimes have established
> unexpected counterexamples
> far deep out into the number field.
> So therefore, if you do not do things
> the way we expect, your work *will*
> be discounted, without inspection,
> as 'inherently incorrect' --
> ie; not worth my time to review.
What we can do, and what we *are doing*,
involves careful, formal reasoning,
with lots of interacting key details
that all need to be tracked,
all manner of assumptions to monitor,
and many types of combined complexity
to organize with clear attention.
In that way, our work is more in the space
of _formal_reasoning_ than anything else --
so we describe it as "like math"
to those people who are _asking_us_
to *describe* our current and planned work.
When attempting to describe ourselves,
and/or certain elements of our work,
we will have to "borrow" whatever terms
are available in the language of the person
we happen to be talking to in that moment,
to attempt to convey, for them, in their
language and metaphors of understanding,
as best as possible, what we are attempting
to convey to them, at their request.
Just because we are not -- do not happen to be
*just* career mathematicians (only and exclusively)
does *not* mean that we are not
exactly the right people
to be doing *this* project --
it may be the case
that very few others, in the world,
(in any single specialist discipline)
will actually have the complete and specific
total *range* of skills necessary
to the actual depth necessary, in each,
for them to actually have any hope at all
of fully encompassing the needed thinking.
So therefore,
not being just a mathematician,
nor looking like one,
is *not* a failing on our part.
It is actually an indication of our rightness
for this work.
So, yes, I am claiming that I do actually have
*some* real depth of knowledge in *each*
of the above mentioned fields of study
(and a bunch of others I forgot to mention),
including various topics in math.
And there can be no doubt that having
*all* of this is actually relevant,
particularly when attempting to assess
the x-risk implications of planned/future
AGI/APS/superintelligence deployments.
:dcu
> What/which university did you study at?
> What were your specific focus topics of study?
> Which branches of math did you specifically
> study, and with who, and did you eventually
> contribute original work to that field?
There is a lot that we could share with you
detailing this. Can you be a bit more specific
as to what you actually need?
> Where are your co-authored papers detailing
> some of your prior results/work published?
> (Note, if it is not already in arxiv.org,
> I will be less interested -- the best work
> tends to go there, and I should be able
> to find it using a quick internet search
> that I am doing right this moment now.
> Also, if you do not have a sufficient number
> of such papers, I am letting you know now
> that I will not have a favorable impression).
> Also, has your work been peer reviewed
> and was it also recognized and accepted
> in a recognized math related journal?
> Does anyone else cite your work?
> What are the views of other mathematicians
> also working in your chosen topic of focus,
> regarding your work, ideas, proofs, etc?
> Can at least someone in the math community
> vouch for you, as being a respected
> mathematician in that community, etc?
We are not claiming to be mathematicians,
though we do recognize that we need to do
careful formal work, and that it will maybe
(probably) involve generally accepted types
of math symbolism, in some form, eventually.
The emphasis is on expressive conceptual clarity,
not the explicit terms or symbols used.
And we do "get", (understand and accept, etc),
that you are needing at least *some* basis
to establish some realistic assessment
that we are reasonable, know something about
what we are attempting to do, and are actually
using the right sort of tools and techniques,
in the right sorts of ways, and moreover,
that we are more likely than most to actually
be able to accomplish our stated goals, etc.
We do see that before listening to and/or
attempting to understand our work,
(and/or maybe recommending it or not),
you will want to know that we have handled
the difference between induction arguments
in physics and the deductive arguments of math,
can correctly apply formal reason,
have examined all relevant priors,
recognized and processed exceptions, etc.
We do have some prior formal work in the area
of x-risk, which involved some technical analysis,
and which was published to
a number of academic forums.
The main one which is closest to what you are
specifically asking for is the 'Dark Fire' paper
(https://authorzilla.com/xzvZz/putting-out-the-dark-fire-constraining-speculative-physics-disasters.html),
which was a consideration of
another class of x-risk entirely,
as better understood in the area of physics,
though some aspects of it were (and probably are)
at least still somewhat controversial,
(though only in the physics community).
The argument in that paper is a complex one,
and it involves a lot of specific understandings
in high energy particle physics and cosmology
so as to constrain the probability of a *maybe*
possible (world ending) collider incident.
In regards to 'is this a good exemplar'
of our type and category of thinking,
I hesitate to recommend too strongly,
due to that complexity (typical of that world).
Insofar as it is based on my concepts,
and insofar as we collaborated extensively
while that paper was written, exploring ideas, etc,
then yes, this is a good example of my work.
Most physicists that I suggest this paper to
tend to reject it for a variety of reasons.
One of the more recent and unexpected reasons
involves my co-author, Anders Sandberg:
although he is highly respected in the x-risk field,
as it happens, his PhD (from Stockholm University)
is in 'computational neuroscience', not physics,
and thus he was not considered 'legit enough'
by some far too strongly opinionated people.
Having my name on it is no help at all,
at least in that respect, unfortunately.
I do not see this as a problem as Anders is also
a generalist, in many of the same relevant ways,
and moreover, someone whom I personally recognize
as being a very careful, disciplined thinker,
whose opinions I value, with multiple clear
and insightful ideas. The work is valuable,
and insofar as I contributed content to that,
I will send that over with these disclaimers.
We can also send over other examples of
some of my prior formal reasoning work,
in the areas of limits of epistemic methods,
though I would rather not,
given you are clearly looking for
something specifically math oriented,
and the topics tend to involve a lot of
fairly specialized technical definitions,
and outside of that context,
are very likely to be misunderstood.
:dfn
> If I happen to get around to reading
> the papers you send over and suggest, *and*
> it happens that I do not also recognize
> and understand (see note 1) the specific symbolism used,
> I *will* judge you as being not a mathematician,
> and therefore will reject any/all claims
> of your formally proving anything at all,
> *particularly* regarding AGI/APS x-risk.
> Also, it will not matter if your arguments are
> provided with clear/exacting definitions, etc --
> it has to actually be formalized in equations,
> using generally the accepted symbolisms --
> else I will not recognize it as 'actual math',
> and therefore I will also judge that
> you do not have a proof of your claims,
> and that therefore your arguments
> are invalid, inapplicable to AGI/APS, etc.
> And no, you cannot send over computer program code,
> nor can any of your "formal" arguments be rendered
> in any sort of 'technical language' other than
> what I will recognize as (judged to be) math.
> Though I am totally sure that I will be able
> to understand anything that you send (see note 2), so
> no worries about that (I am really smart).
Sorry, we cannot help you.
Nor do we even want to, anymore.
~ ~ ~
:djg
> What happened?
> What is the overall assessment?
The pattern seems to be the following sequence:
- 0; we have made well structured observations
which we think we could probably formalize.
- 1; we presented our work as toward doing a 'proof',
as a kind of deductive argument with inductive relevance,
which implied, to most people, some kind of 'math';
and so;
- 2; whereas, based on our own given descriptions
of what we were trying to do,
some people who needed/wanted to evaluate
our "suitability" for this sort of work,
elected to send someone
with an educational background in mathematics
to evaluate us and our claims.
- this would have made sense, of course,
*if* such a person had the more relevant skills
to make such an assessment,
even though in this case,
it did not help.
- 3; that person, asked to evaluate us,
had their own very strong expectations
that we would be 'just like' every other
practicing career mathematician who was
doing formal proof work.
(As such, she asked about our basis of argument,
our prior work in formal papers, etc).
- 4; and where, insofar as we did not look like
a long-term practicing career mathematician
(who has the time to specialize in that topic only);
- 5; that/therefore; we failed out,
with respect to their assessment,
insofar as we did not match the projected expectations
and prior strong opinions of the assessor.
The unfortunate aspect is that
the assessment is clearly wrong --
not in the narrow sense, since we are
indeed not career mathematicians,
but in the space of whether we can do
meaningful work regarding AGI/APS
and inherent terminal x-risk 'proofs'.
It was based on expectations/opinions
that simply (mistakenly) do not apply,
and moreover, are not the right sort of
expectations/opinions needed for assessment.
Better discernment is required regarding
that which is inherently generalist work,
vs that which is inherently specialist work,
and we are definitely the former,
which is much harder to validly assess.
This is in itself to be expected,
since there are very few generalists
in the world today, given the ongoing pressure
on nearly every academic in every institution
to specialize, as designed to make "progress"
(in the modern world, in STEM, markets, etc).
Anyone attempting to evaluate our work
is going to have a hard time
finding the right mix of discernments
to be able to tell that what we are doing
is just inherently better than anything
they can presently have a concept for.
:note1:
...and understand...
The risk here is that 'understanding'
can easily, especially in adverse opinion,
be conflated with 'agreement' and thus with
issues like 'validity' and 'applicability'.
These are not the same, and yet she has given
no assurance, and in fact, many dis-assurances
that she would actually be "reasonable"
and thus actually, in practice, distinguish
personal opinion and bias from logic,
despite claims to the contrary, etc.
Maybe she does not actually *both* understand
*and* agree with all of the claims made?
Is that a true failing of the work itself,
or simply a limit of the reader capability?
Maybe she could find some easily correctable
nit-picks -- how are we to know that she
was not simply creating them, as a way of
discounting opinions/ideas that she does not
want to see established as 'proven'?
How are we to know if the reviewer 'observations'
are fair, reasonable, applicable, correct,
if she simply does not mention them at all,
but simply rejects the entire work
without comment, as it seems very likely
that she is predisposed to 'just reject',
without any other valid reason, anyway.
Also, just about anyone can come up with
infinite synthetic reasons for
disagreements with definitions, etc.
There may be specific reasons why a given
definition is constructed the way it is,
that are not obvious on first inspection.
What if it happens that this disagreement
is based on their failing to understand,
or on failing to meet their own priors, expectations, etc?
:note2:
The implied "vast familiarity with math",
and that this reviewer later indicates
that they "had read analytical philosophy
papers of some other specific author"
with the implication that they would therefore
"for sure correctly read and understand"
a much different analytical philosophy work,
despite our repeated warnings about
the time and care needed to actually understand
the actual specific meanings of the terms used,
(as based on also a number of preceding papers),
seemed a bit aggressive and overly presumptive.
We ended up with the very distinct impression,
(via numerous video gestures, facial expressions, etc)
that they were clearly implying/claiming, to us,
"that they could not (would not ever) possibly
fail to understand, completely and totally,
absolutely anything and everything
that we could/would even possibly do, ever" --
(ie; how could "three men" ever have anything
original to say to someone like them).
This goes far afield of issues of agreement,
let alone of simple logical validity.
This strongly felt, clear absence of any sense
of any type or aspect of intellectual humility,
and/or of actual emotional communicative honesty
(ie; our experience of the review), leads us to
significant concerns about the prevalence of bias,
and thus, to the clear inapplicability of
this particular reviewer as having any kind of
valid and justified opinion of our intended work.
Unfortunately, moreover, as we later discovered,
once we had a chance to review the reviewer's bio,
it turned out to clearly be the case that
this reviewer themselves does not even "fit"
the 'specialized/career mathematician' persona
that they were expecting that we should have,
and that they were projecting for themselves.
This frankly makes their self-presentation,
_as_if_ knowing and being able to evaluate
and thus *judge* all manner of 'math stuff',
not to mention any disciplined philosophy
and/or policy recommendations for AGI work,
look even more presumptive and biased.
The bio profile shows little mention of math,
and much more around *entrepreneurship*,
machine learning, and public communication --
all topics that indicate clearly entangled interests
and the strong possibility of motivated reasoning.
Thus, our initially somewhat charitable impression
of this reviewer as *maybe* having only a mild
judgemental bias gave way to a much stronger
impression of bias held for illegitimate reasons.
Overall, the clearly presumptive attitude
and the level of intellectual entitlement
left us with a very strong impression of
a very strong and illegitimate negative bias
and of a prejudgement of distaste/distrust,
for reasons having nothing to do with
the work itself and/or of our capability
as persons.