I mean, how many people offered theoretical and mathematical proofs of free energy or perpetual motion?

This is a mathematical proof about recursion theory; it is not a physics proof or even a mathematical physics proof. It follows from standard ZFC set theory, the de facto basis for most reasoning that constitutes mathematics proper. It is a priori knowledge, not a posteriori, i.e. it is not experimental physics, or even theoretical physics, since, in that context, theory is in part an attempt to explain experiment. So it is true by definition, in the same way that this statement is true: 'there is no five-sided square'.

anyway, suppose the proof is correct, what then?

People were questioning the validity of the proof, pointing out that plenty of people have claimed to have proven mathematical results, and that there have been examples of incorrect proofs which made it past peer review; after all, we are fallible. First of all, at this point the proof has not yet been verified by any other mathematician, as far as I know. Still, some unverified results, such as many of Ramanujan's, turned out to be correct. Probability plays no role in determining the validity of a proof: proofs are not probably valid or probably invalid; they are one or the other. The objection that the proof could be wrong applies equally to proofs that turn out to be valid and to those that turn out to be invalid, so it hardly distinguishes between the two.

Alternatively, I ask you to engage in the mathematical practice of supposition. The proof is either valid or invalid. Suppose the proof is valid: what do you make of its implications? Suppose it is invalid: what do you make of its implications? So far, you seem to have only seriously considered or examined the latter case. If you consider the possibility that the proof is valid, that makes for interesting and exciting ramifications for A.I. theory and practice.

what's the logical basis of logic?

Foundational mathematics and the philosophy of mathematics do indeed pose difficult questions. Fortunately, if the proof is valid, we are as justified in accepting its results as we are in accepting the standard basis of mathematics, ZFC set theory. Since ZFC was first formulated in the early 1920s (roughly a century ago), no one has found an inconsistency in these axioms, nor is it possible (if they are consistent) for anyone to find a proof from within ZFC that ZFC is consistent, due to Gödel's second incompleteness theorem.

it's not in coq or prolog or isabelle?

Unfortunately I became homeless before I was able to program the proof into a standard theorem prover (and even before that it would have been nontrivial, as formalizing theorems in ATP systems is hard). I then fell out of practice with mathematical reasoning, so I could no longer complete the mechanical verification of the proof even if I were familiar with formalizing proofs in any of these theorem provers. (Note that most accepted proofs have not yet been mechanically verified.) I therefore offer a bounty of $100 to anyone who can take the informal proof write-ups I have, https://frdcsa.org/frdcsa/introduction/writeup.txt and https://frdcsa.org/~andrewdo/pioneer/wk1/frdcsa.pdf (end of page 2, beginning of page 3), and either correct the glaring flaws in them (using their mathematical maturity to work out the intuition behind, and intention of, the proof in order to correct the mistakes, of which I determined at one point there to be at least 3), or just prove the conclusions on their own; then first write an informal version of the proof (and/or perhaps find a more intuitive proof), and finally formalize said proof in a theorem prover (with a sufficiently rich standard library) capable of expressing it.

another way to ask, is What is an application of your proof?

An application of the proof: consider a mathematical problem with real-world implications, such as a problem from formal verification of software (which is used in mission- and safety-critical settings). Rice's theorem says that all non-trivial semantic properties of programs are undecidable. For instance: "Does a given program terminate on all inputs?" Suppose you have a termination-checking program, such as those entered into termination-checking competitions: http://termination-portal.org/wiki/Termination_Competition . Because termination is undecidable, no termination checker can decide in general whether a given program in a Turing-complete language, say Java, terminates. It can only determine termination status for a proper subset of all Java programs. My proof shows that for any fixed termination checker (call it program P) that has been submitted to the competition, there exists another program (call it Q) which can determine the termination status of all programs that P can, plus more. Furthermore, sometimes the Q executable or source must necessarily be larger than the initial P executable or source for this to occur. (Thus, if you iterate this process to obtain a sequence of programs of increasing coverage starting with P_0 := P, eventually there is an n such that the P_n executable or source must necessarily be larger than P_0's; the size of P_i grows without bound as i goes to infinity.)
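As a toy illustration of the P-versus-Q relationship (all program strings and names here are hypothetical; a "checker" is modeled as a partial decision procedure that answers True/False where it can and gives up otherwise):

```python
# A toy sketch, not a real termination analyzer: a checker is built from a
# finite table of programs it happens to understand, and returns None when
# it cannot decide -- which, by undecidability, must happen somewhere.

def make_checker(known):
    """Build a checker from a finite table {program_source: halts?}."""
    def check(src):
        return known.get(src)  # None means "cannot decide"
    return check

# P decides termination for two specific programs.
P = make_checker({
    "while True: pass": False,   # clearly loops forever
    "print('hi')": True,         # clearly halts
})

# Q covers everything P covers, plus one more program: strictly larger coverage.
Q = make_checker({
    "while True: pass": False,
    "print('hi')": True,
    "for i in range(10): pass": True,
})

assert Q("print('hi')") == P("print('hi')")      # Q agrees wherever P decides
assert P("for i in range(10): pass") is None     # P gives up here...
assert Q("for i in range(10): pass") is True     # ...but Q decides it
```

The claim in the proof is about arbitrary checkers, not lookup tables; the table merely makes the "strictly more coverage" relation concrete.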

While this is certainly a useful application of the proof (by no means the only one), it is not the most powerful application, but rather a fairly direct one. The main value of the proof as I have stated it lies in its consequences for practical reasoning, i.e. the use of reason to decide how to act. The theorem makes plain the incentive structure that motivates, by default, certain choices when contemplating how to create A.I. (such as packaging software) over other choices, in the absence of specific information to the contrary. This is analogous to how the infinite-ack (common knowledge) problem motivates maneuvering away from any position in which the impossibility of achieving common knowledge, given uncertain message delivery, would compromise communication with contextually fatal results. In other words, the problem uses infinitary reasoning, yet dictates a practical/pragmatic choice.

Another example would be the "comedy" of a king chasing a pawn which is on its way to queening. If it is the pawn's move and the pawn is closer to its color's eighth rank than the king is, the king cannot catch it, and is always at least one step behind. I will try to think of an indirect ramification of the theory that nevertheless makes sense and has more significant consequences for software development (besides the obvious benefits of packaging software and its relevance for the creation of Turing-equivalent A.I.).
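The chase can be quantified by the chess "rule of the square". Here is a simplified sketch (my own illustration: it ignores the pawn's initial double step and any interposing pieces, and uses 1-8 file/rank coordinates):

```python
# Simplified "rule of the square": with the pawn to move, the defending king
# catches a white passed pawn only if its Chebyshev distance to the promotion
# square is no greater than the number of moves the pawn needs to promote.

def king_catches_pawn(king, pawn_file, pawn_rank):
    """king = (file, rank); white pawn on (pawn_file, pawn_rank) heads for rank 8."""
    pawn_moves = 8 - pawn_rank                           # moves until promotion
    kf, kr = king
    king_moves = max(abs(kf - pawn_file), abs(8 - kr))   # Chebyshev distance
    return king_moves <= pawn_moves                      # pawn moves first

# A king directly behind the pawn is "always one step behind":
assert king_catches_pawn((1, 4), 1, 5) is False
# A king alongside the pawn's path can still make it into the square:
assert king_catches_pawn((3, 5), 1, 5) is True
```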

so from the PDF, the proof you claim is basically that the conglomeration of multiple tools can solve broader problems than any individual tool?

Kind of. Basically, for any program there is always a larger, proof-theoretically stronger program. This means program complexity is an upper bound on proof-theoretic capacity, as measured by set inclusion. The easiest practical method of rapidly increasing the meaningful complexity of programs appears, by default, to be collecting and packaging the software that other people have written. Additionally, we write software to glue that software together in all ways practical and conceivable to us, when useful, and to fill in missing gaps that are critical to the project of gathering more software.

what is the difference to your proof if one collects techniques, or collects libraries, vs collects applications?

Collecting techniques and libraries is just as beneficial as collecting applications; it does not affect the validity of the proof. It is still, when viewed through the eyes of the proof, part of the "program."

how is swipl.exe quantifiable in its FSM for instance?

what's the specific reference within that PDF which you're referring to?

well, I did originally ask about "How might it be applied?" before

a practical use case?

don't you think most of us here collect various software & libs? as well as papers and books?

This is definitely good and useful, but a few limitations come to mind.

work together how? everybody's got their own path to work on which maximizes what they want to do

There are problems which affect all of us. If we at least do something collaborative on those problems, we 1) lighten the load and 2) reduce the total number of problems that we collectively must solve for everyone to pursue their own paths successfully. Not only are you helping me with my problem when we solve a shared problem, but I am helping you with yours. By aligning our shared tasks we shorten the distance for everyone to follow the path that maximizes what they want to do.

aindilis, you do recall the noble efforts of Russell and Whitehead?

Yes, the infinite regress in the foundations of mathematics, and the three-volume Principia Mathematica, in which it took several hundred pages to prove, finally, that 1 + 1 = 2 (the key proposition appears late in Volume I, with the proof completed in Volume II). This is good work, because it makes explicit that which is tacitly assumed. Computers have difficulty filling in the gaps. Codifying the intrinsic assumptions used for computation, and the general topology of axiomatic systems, is central to automated theorem proving.
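For contrast, a modern proof assistant dispatches the same fact in one line, because 1 + 1 = 2 holds definitionally in its Peano-style construction of the natural numbers (Lean 4 syntax):

```lean
-- Both sides reduce to Nat.succ (Nat.succ Nat.zero), so reflexivity suffices.
theorem one_plus_one_eq_two : 1 + 1 = 2 := rfl
```

The hard part Russell and Whitehead faced, building arithmetic up from raw logic, is here baked into the prover's standard library.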

if somebody is for instance using python, and someone else uses Java, what does that mean to get a package from your system, instead of the individual AI tool plus a language interface library from some random google/github hit?

Well, first of all, in this system, such as with System Recommender and/or CSO, all programs are searchable together. There are tools that use natural language processing and information retrieval to help locate the corresponding tool.

Retrieval is done through the appropriate mechanism. Debian, for instance, makes packages out of both Python and Java libraries and tools. So ideally, yes, there would be (redundant) OS packages for pip and Maven.

what is a package in your system?

The archetypal package format in our project is Debian's apt packaging format (.deb files). Our initial goal is to make Debian packages out of nearly everything. But we intend to follow that up with other free/libre Linux distro packaging systems, like Red Hat's RPM (.rpm, installed via yum), Slackware's .tgz, Arch's .pkg, and so forth. These may be generated via semi-automatic translation from the Debian packages in most cases, such as by using or extending the Alien package translation tool. But it is perfectly valid according to our definition to make packages for any ecosystem, such as Ruby gems, Python pip, Perl CPAN modules, Java Maven artifacts, etc. A "package" here is anything whose installation is a one-off CLI invocation rather than a convoluted effort to dig up and manually, recursively install all dependencies and then build the software.

what does it take to grab a random AI tool and make it a package? that's what you're asking people to do

Presently one must either work on the automated packaging toolchain or, more likely, learn the ropes of packaging software. The FRDCSA system includes a lot of helper functionality for locating (RADAR) and for retrieving and packaging (Packager; Debian's .deb is all that is implemented at present) software systems. It turns packaging into an interactive REPL in which the user is asked questions and the system does a bunch of transformations on the data. But packaging is undeniably a difficult topic at present, which is why I'm not asking everyone. If you can build most software, though, that is 99% of what's needed to make packages; the rest can be learned relatively quickly by people who are that competent to begin with. Here is the link: https://mentors.debian.net/intro-maintainers. Note that, in the interest of making packages quickly, for starters we have relaxed some of the quality-assurance requirements and created our own external (unofficial) Debian repository which may be included in /etc/apt/sources.list. Therefore you need not package to the same degree of quality. Our rationale is that it is very important just to get the functionality to the end user; package quality can always be improved later, semi-automatically, such as via lintian and rebuilding (in the case of Debian packages).

so wait, you're not actually offering any help in composing AI applications together in your packaging? just the literal install process?

Even if we just use software independently, and do not compose it together, the more software you have, the more problems you will be able to solve on average. That said, FRDCSA has quite a few ways of composing AI applications together. Let me talk about that now.

First, there are numerous ways to interface (some extant, more planned); none is a panacea.

FRDCSA has a "MultiAgent System" called UniLang which allows interprocess communication with a number of useful features. When one interfaces a program with UniLang so that it can talk to every agent in the system, we call that agentifying the system. UniLang allows us to communicate with any system that has a UniLang client library implementation; currently this is limited to Perl, Prolog, Java, and Emacs Lisp. However, most FRDCSA custom code is implemented in these languages, so that provides access to most of the core systems - BUT - these core systems then use various mechanisms to bidirectionally wrap other languages such as Python, C/C++, C#, Haskell, etc. So in effect you can talk bidirectionally between almost any pair of programming languages.

How does this happen? There is no one way.

In a lot of cases it happens through Perl's ability to wrap the terminal using the Expect system. This enables us to wrap command-line applications written in any language. A majority of our systems that do this are in these subdirectories: Capability, System and PerlLib.

Secondarily, we use a lot of invocation via the shell. All of our languages have shell-command and shell-command-to-string functions.
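As a sketch of this shell-invocation pattern (in Python for illustration; the project's own glue is in Perl, Prolog, and Emacs Lisp), a shell-command-to-string analogue is a thin wrapper over the standard subprocess module:

```python
import subprocess

def shell_command_to_string(cmd):
    """Run a command through the shell and return its stdout, raising on a
    nonzero exit status -- the analogue of Emacs's shell-command-to-string."""
    result = subprocess.run(cmd, shell=True, capture_output=True,
                            text=True, check=True)
    return result.stdout

assert shell_command_to_string("echo hello").strip() == "hello"
```

Any language with an equivalent of this one function can drive any command-line tool, which is why it is such a universal integration fallback.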

Thirdly, on an individual basis, we connect to third-party mechanisms, such as RPC, RESTful APIs, SOAP, WSDL, etc. Perl does a lot of the wrapping.

Fourthly, we embed languages in other languages, as we do with Inline::Java, Language::Prolog::Yaswi, etc.

These are by no means the only ways to integrate systems; they are merely the methods that we have been able to use, quite successfully, to get systems to talk to one another. But more work in this area is a desired todo item.

Furthermore, UniLang provides a pre-JSON interlingua which maps cleanly between Emacs Lisp, Prolog and Perl, and most closely resembles Prolog terms.
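The actual UniLang wire format is not reproduced here, but a hypothetical sketch of a Prolog-term-style interlingua, a functor plus arguments rendered as functor(arg1,...), might look like this in Python:

```python
# Hypothetical sketch only: how a Prolog-term-like message might be built and
# rendered. This is not UniLang's real representation.

def term(functor, *args):
    """A compound term is just a functor name plus a tuple of arguments."""
    return (functor, args)

def render(t):
    """Render a term in Prolog-ish syntax; bare strings are atoms."""
    if isinstance(t, tuple):
        functor, args = t
        if not args:
            return functor                      # zero-arity term prints as an atom
        return f"{functor}({','.join(render(a) for a in args)})"
    if isinstance(t, str):
        return t
    return repr(t)                              # numbers, etc.

msg = term("message", term("from", "agent1"), term("to", "agent2"),
           term("contents", "hello"))
assert render(msg) == "message(from(agent1),to(agent2),contents(hello))"
```

A format like this maps naturally onto Prolog terms, Lisp s-expressions, and Perl nested data structures, which is presumably why it predates and parallels JSON.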

right, but nothing is presented to actually achieve that yet. It's only bundling for installation so far?

Is the only benefit easier installation? Packaging is merely the first step. There is actually a system which plans how to integrate systems, called, straightforwardly, Architect. The CSA of FRDCSA stands for Cluster, Study and Apply: RADAR does the clustering, Packager does the "studying" so to speak, and Architect does the applying of the software. Architect tracks the features that each piece of software provides, and hallucinates new ways to combine them based on what it knows.
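I do not have Architect's internals to show, but the kind of combination planning described, tracking features and proposing new compositions, can be sketched hypothetically as type-matching tool outputs to tool inputs (every tool name and type below is invented for illustration):

```python
from itertools import permutations

# Invented catalogue: each tool maps an input type to an output type.
tools = {
    "ocr":        ("image", "text"),
    "summarizer": ("text", "text"),
    "parser":     ("text", "parse-tree"),
}

def candidate_pipelines(tools):
    """Yield two-step pipelines (a, b) where a's output type feeds b's input."""
    for (a, (_, out_a)), (b, (in_b, _)) in permutations(tools.items(), 2):
        if out_a == in_b:
            yield (a, b)

pipes = set(candidate_pipelines(tools))
assert ("ocr", "summarizer") in pipes      # image -> text -> text works
assert ("ocr", "parser") in pipes          # image -> text -> parse-tree works
assert ("summarizer", "ocr") not in pipes  # text does not feed an image input
```

A real planner would of course reason over richer feature descriptions than a single type, but the enumeration-plus-constraint shape is the same.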

would you say that your glue allows me to integrate the usefulness of another utility into one I'm already familiar with, or the glue itself runs the composed underlying software systems on my behalf?

Both: FRDCSA is multistrategy. TIMTOWTDI.

so if I'm in otter for instance, I can integrate usefulness from cyc etc?

TPTP. (Yes.) But I don't have everything; I'm trying to get it, though.

right, the big question is "how", because there's a ton of devil-in-the-details in that sort of stuff, which people want to be able to see/evaluate before committing into it


but in order to use any bit of tech, you have to learn each tool, and each individual composition library?

aindilis: what all have you done in cyc?

btw, is the game.rules something that couldn't be done in a similar amount of code in prolog?

So what did Cyc bring to the table?

so, what does Cyc do that is so great?

Can you find the hidden (and not-so-hidden) assumptions in your system that you've described? Can Cyc find them?

wait, so is TAC doing the work of "random blog text -> relationship detection" and Cyc is just holding the results of that?

"make implications to somethign without detail"? ;)

eg what ttmrichter is actually asking for? ;)

well you heard of the Terrorism Knowledge Base?

but what are its features?

and when people build projects around cyc, how much programming is in actual cyc rules/facts, vs in external programs that use cyc as fancy subroutines?

I think the main question is how much of that is unique to Cyc, and how much of it is actually other tools or general programming?

If Cyc is so powerful, why does it always show up as a small component in a large system without anything that seems like it couldn't be done with other technologies?

What does Cyc bring to the table?

WHy would I want to spend the time and the money to use Cyc?

Could I make large quantities of data and give them to this natural language thing? Only my data says that aindilis is BFFs with some terrorist leader. Would this "AI" then recommend assassinating him? People make spam bots on social media all the time

But also, if these claims about Cyc/whatever are true, how do we know that the analysis it's doing is actually good? It could be just saying "assassinate Bugs Bunny... Bugs is always causing mischief"

But alas, Cyc already knows Bugs Bunny is a fictional character!

CYC(51): (c #$BugsBunny-FictionalCharacter )

all-term-assertions:

#<AS:(#$nameString #$BugsBunny-FictionalCharacter "Bugs Bunny"):#$EnglishMt>
#<AS:(#$isa #$BugsBunny-FictionalCharacter #$FictionalCharacter):#$FictionalWorksMt>
#<AS:(#$characterInCW #$BugsBunny-FictionalCharacter #$BugsBunny-TVShow):#$FictionalWorksMt>
#<AS:(#$isa #$BugsBunny-FictionalCharacter #$Individual):#$UniversalVocabularyMt>
#<AS:(#$characterInCW #$BugsBunny-FictionalCharacter #$BugsBunny-TVShow):#$FACToryVerifiedSentenceMt>
#<AS:(#$candidateWikipediaArticleName-ManyToOne #$BugsBunny-FictionalCharacter "Bugs_Bunny"):#<(#$ContextOfPCWFn #$Wikipedia-WebSite)>>


comment:

NIL

[Time: 0.007 secs]
NIL
CYC(52): 
and of course, coming up with the one true packaging system is basically this: https://xkcd.com/927/

Well, I'm using the existing packaging systems, not reinventing any. I'm just packaging for existing packaging systems like apt-get, yum, ports, gems, pip, cpan, etc.

your idea sounds a bit too centralized. centralized & managed is often more brittle than individuals each doing a piece on their own, in terms of resilience