"AI" was something I "invented." Clearly, the idea of Artificial Intelligence predates me - the concept having been named back in around 1956 at the Dartmouth Convention. And I was born over twenty years later. So I obviously don't mean I came up with the well-known concept of AI, either in academia or the more vulgar science fiction concept popularized in lots of movies, for instance. When I say I "invented" it, this is an issue of theory of mind, sort of like how Europeans "discovered" America, when there were already tons of people living there. So clearly by "invented AI", I meant something relative to myself.
In point of fact, what I "invented" was a personal concept of "AI." I couldn't tell you what it was right after I "invented" it, but I could tell you what it wasn't. I had a visualization of a system that would help anyone achieve "the evasion of chance in survival." There are many aspects to that early, personal notion of "AI", but it was largely about helping people to optimally and methodically solve the problems that were negatively impacting them, using computers.
So how was it that, a few years later, I "solved" "AI"? (Again, it's my personal notion of "Artificial Intelligence" that I refer to.) Well, after initially declining to look into accepted research (for fear of confirmation bias), trying very experimental and original computer programming projects, and getting nowhere for a year or a year and a half, I began to study what others had done. Full stop. I spent a year attending graduate classes and seminars in computer science and mathematics from 8:30 AM until 6:00 PM, then reading at night, six days a week. By the end, I had gotten up to speed on much of the theory behind academic AI. I then used what I had learned to independently discover a tiny corollary to (i.e., a consequence of) one of the most important theorems of mathematics.
This corollary was the "solution" to the "AI" that I "invented." It was a "non-constructive" solution, meaning it tells you that the solution exists, but not exactly what that solution is; quite a bit more work is required to find that out. Moreover, anyone I knew who understood the solution (just me, for the most part) could plainly see that it mandated a particular approach to building "AI", one which was clearly not being taken by anyone at that time: a large programme of software conglomeration and integration, *en masse*. Not just collecting a few programs relevant to your research, but collecting *everything* you could get your hands on, and then systematically integrating it into one software ecosystem with a custom integration toolchain.
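To make the shape of that approach concrete, here is a minimal sketch in Python. Every name in it (`PackageRecord`, `Registry`, the capability strings) is hypothetical and invented for illustration; this is my toy rendering of "conglomerate everything, then integrate it systematically," not the actual toolchain.

```python
# Toy sketch of mass software conglomeration and integration.
# All names here are hypothetical illustrations, not the real toolchain.

from dataclasses import dataclass

@dataclass
class PackageRecord:
    """One harvested program, described by what it can do."""
    name: str
    capabilities: set[str]  # e.g. {"parse-dates", "plan-tasks"}

class Registry:
    """Shared index: the 'ecosystem' into which everything is integrated."""
    def __init__(self):
        self._by_capability: dict[str, list[PackageRecord]] = {}

    def integrate(self, pkg: PackageRecord) -> None:
        # The systematic integration step: every package collected,
        # not just a few, is indexed under each capability it provides.
        for cap in pkg.capabilities:
            self._by_capability.setdefault(cap, []).append(pkg)

    def provider(self, capability: str) -> PackageRecord:
        """Route a problem to any program that claims to handle it."""
        providers = self._by_capability.get(capability, [])
        if not providers:
            raise LookupError(f"no integrated package provides {capability!r}")
        return providers[0]

# Usage: collect *everything* you can, then integrate it all.
registry = Registry()
for pkg in [PackageRecord("dateparser-like", {"parse-dates"}),
            PackageRecord("planner-like", {"plan-tasks"})]:
    registry.integrate(pkg)
print(registry.provider("plan-tasks").name)
```

The point of the sketch is only the design choice: a single uniform registry over *all* collected software, so that capabilities compose across packages rather than living in isolated research codebases.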
Twenty years after I made my "discovery," there is still no evidence that anyone else is pursuing this approach. But I can prove that the solution works. My concept of "AI" is, after all, mathematical in nature. I view real life as being subject to tons of mathematical constraints, and the project of "AI" includes gathering these constraints into the computer, evaluating them mathematically, and acting on the solutions generated.
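As an illustration of that constraint-based view, here is a tiny self-contained example. The scheduling scenario and every constraint in it are invented for illustration; it shows only the general loop of gathering constraints into the computer, evaluating them, and acting on a solution.

```python
# Toy illustration of the constraint-based view of "AI":
# gather real-life constraints, evaluate them mathematically,
# and act on the generated solution. The scenario is invented
# for illustration, not a formal construction from the text.

from itertools import product

# Variables: which hour (9..17) to schedule each daily task.
tasks = ["cook", "exercise", "study"]
hours = range(9, 18)

# Constraints gathered "into the computer" as predicates.
constraints = [
    lambda s: len(set(s.values())) == len(s),   # no two tasks collide
    lambda s: s["cook"] in (11, 12, 17),        # cook near a mealtime
    lambda s: s["exercise"] < s["study"],       # exercise before study
]

def solve():
    """Exhaustively evaluate the constraint system; return one model."""
    for assignment in product(hours, repeat=len(tasks)):
        schedule = dict(zip(tasks, assignment))
        if all(c(schedule) for c in constraints):
            return schedule
    return None  # constraints are unsatisfiable

# "Acting on the solution" is, here, just printing the schedule.
print(solve())  # {'cook': 11, 'exercise': 9, 'study': 10}
```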
What's more, whenever I spoke of "AI", everyone *assumed* I meant AI, either research AI (in the halls of academia) or science fiction AI (popularized in movies and books). But since most people weren't acquainted with academic research AI and the affinities it has with my "AI", they defaulted to thinking about science fiction AI, and "AI" had almost nothing to do with that concept. No matter how much I pleaded with people to recognize this distinction, they wouldn't.
But my "solution to AI" is easily understood by any mathematicians or computer scientists who have studied the particular prerequisite subjects of mathematics. The result is considered, among the properly educated mathematicians I have talked to, so obvious and easy to demonstrate as to be "uninteresting" (a term of the trade), i.e. not worth really mentioning. But then I say, so why aren't you implementing what the result mandates should be done? There is an appreciation for what the result says, mathematicians can appreciate that it easily follows from that major theorem. But they can't seem to appreciate what it means, what it motivates, the practical consequences of it.
It's kind of like the scene with Sallah and Marcus Brody in "Indiana Jones and the Last Crusade." Some enemy secret agents come to kidnap them, and Sallah says to Marcus, "RUN!" And Marcus, who heard but does not understand, says, "Yes?" And Sallah says "**RUN**!!!", and Marcus says "Yes?!" But Marcus never runs, at least not until Sallah finally shouts "RUN!!!!" and then decks one of the enemies.
So there you have it; there are a few inequalities here: "AI" != (academic) AI != (science fiction) AI. But "AI" is actually very close to (academic) AI. I would say they are synonymous, but since my theory isn't being practiced anywhere except by me (or, possibly, DARPA), I have to make the distinction.
So the fact of the matter is: yes, I have a non-constructive, pure existence proof for "AI" (and some necessary but insufficient conditions on it), which, by practical reasoning, mandates a particular approach to building it. So, in the senses that I have given, I am entirely justified in claiming 'I "solved" "AI".' And no one can convince me otherwise.
For more information, see the FRDCSA Technical Reference.