I have been reading an admirable and thought-provoking essay published back in 2012 in Aeon magazine, "How close are we to creating artificial intelligence?", written by David Deutsch, along with some responses to it: "The real reasons we don't have AGI yet" by Ben Goertzel, the YouTube series on artificial creativity by Dennis Hackethal, and Demis Hassabis's take on creativity and AI. So here is my view.
Let's begin with a pedestrian observation: if you already have an algorithm that is not computationally expensive, you should probably not replace it with a lookup into a table of results precomputed by that same algorithm, unless the list of its legitimate inputs is really short. The reasons are obvious: if the list is very long, the cache might require huge storage and a long time to prepare in the first place; and if the list is infinite, or unanticipatable (as with a list of all future open problems in physics that require an explanation, for example), that transformation is an impossible task.
When comparing the task of developing a simple temperature-converter program to the task of developing a program that exhibits AGI, Deutsch avoided that issue by constraining the input to the range from -89.2 to +57.8 in increments of 0.1. He did not, however, avoid the awkwardness of entertaining the lookup-table possibility for the first task, precisely in order to demonstrate its inadequacy for the second.
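As a minimal sketch of that contrast (assuming a Celsius-to-Fahrenheit converter; Deutsch's essay may have a different one in mind), both forms compute the same function, and the table only stays manageable because the input list was constrained in advance:

```python
# A minimal sketch of the two approaches being contrasted (assumed
# Celsius-to-Fahrenheit conversion; the essay's exact converter may differ).

def c_to_f(celsius):
    """The algorithm: cheap to compute for any input."""
    return celsius * 9 / 5 + 32

# The lookup-table alternative: feasible only because the legitimate inputs
# are constrained to -89.2..+57.8 in 0.1 increments (1471 entries).
TABLE = {round(c / 10, 1): c_to_f(c / 10) for c in range(-892, 579)}

def c_to_f_cached(celsius):
    return TABLE[round(celsius, 1)]

assert c_to_f_cached(25.0) == c_to_f(25.0)
```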
That is because everyone understands intuitively what general intelligence is about. If you were interviewing someone for a job that requires good problem-solving skills, you would probably not be satisfied to find out that the candidate can present a sensible answer only if it was prepared in advance; and if it had to be prepared by someone else, the candidate might have no problem-solving skills at all. Even if the answer was prepared by the candidate, you would still need to know how performant that preparation process was in order to judge the candidate's intelligence. So, regardless of how comprehensive the preparation was, and how correct the presentation of its results is, a sufficiently thorough examination can always reveal the limits of that preparation and expose the person's real capabilities, by finding questions that require an unprepared answer.
That is what Hassabis is talking about when he says that expert systems lack two things needed to be considered intelligent: learning capability and generality.
And that is what Magnus Carlsen is talking about when he says that he wants to play against a human opponent, not against his opponent's computer; that is why he diverges early from the theoretical lines for which professional players are prepared, potentially better than he is.
So there is no big insight in saying that if we want intelligence, we expect something more than just a huge and infallible memory; computers have certainly had that for a long time. And not just that: they possess perfection and speed in performing any algorithm given in advance.
So what do we expect, then? Alan Turing said in his Lecture to the London Mathematical Society on 20 February 1947: "if a machine is expected to be infallible, it cannot also be intelligent". He expected 'initiative' (which occasionally leads to mistakes) in addition to 'discipline' (the ability to perform a given algorithm perfectly); in other words, the ability to try something new. Routine was then seen as mindlessness, and 'mechanical' was used to mean 'devoid of intelligence', so the concept of machine intelligence was probably seen as an apparent oxymoron in his times.
Nowadays, David Deutsch expects creativity, the ability to create new explanations, though without precisely defining, at least in this essay, what exactly counts as a new explanation. From his other work it is obvious he means useful novel problem-solving ideas in general. The question is whether, by that logic, the ability of an intelligent agent to acquire some standard locomotion skill (like riding a bike or ice skating) from scratch is a hallmark of general intelligence. Probably yes, even if it were possible to download the information needed for that skill from some repository and just install it locally, instead of learning it from scratch. In any case, he gives one concrete example of what counts as a new explanation: something that "addresses some outstanding problem in theoretical physics and that is plausible and rigorous enough to meet the criteria for publication in an academic journal". Still, compare the ability to learn something no human has understood so far with the ability to learn something the majority of people can learn in their childhood: the former is an indicator of superintelligence in a certain field, the latter of average intelligence, but neither tells us anything about the generality of that intelligence.
I don't want to sound like Neil deGrasse Tyson, who dismisses philosophy as "a discipline that deals with questions about the meaning of the meaning", but I don't believe that an exact theory can ever be built around fuzzy terms, and the meaning of "explanation" is really a bit unclear in this context; it requires an explanation itself, as funny as that sounds.
Let's assume for a moment, however, that a "new explanation" means a "new algorithm". An old algorithm is actually data that can be retrieved from memory and deployed as a program, according to the underlying principle of the von Neumann architecture, and as such it requires no more than memory to hold it and an idea of when to apply it (which can also be difficult: deciding whether it can be found by a local or global search, or whether an absolutely new algorithm is required). According to this view, the problem of a human writing a program that exhibits AGI might be connected with the problem of a computer writing useful novel programs on its own, i.e., a human finding a general algorithm that can automatically find other, new algorithms, according to requirements unanticipatable at the moment the general algorithm was created. With respect to that idea, Deutsch has an interesting remark too: the general algorithm itself cannot be found by the technique of 'evolutionary algorithms', because the Turing test cannot itself be automated without first knowing how to write an AGI program, since the 'judges' of a program need to have the target ability themselves. That is true, but it is also unnecessary and a little overambitious: evolutionary algorithms could be used only by the general algorithm to find other new algorithms, at the moment the requirements for them arrive, while the general algorithm itself could be written by a human and never changed by the computer, neither by the general algorithm itself nor by some other external program working in conjunction with it.
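Here is a minimal sketch of that division of labour, with all names hypothetical and a deliberately toy search space (programs of the form a*x + b). The outer loop is fixed and human-written; the evolutionary search is only its tool for meeting requirements as they arrive:

```python
# A minimal sketch (all names hypothetical): an immutable, human-written
# "general algorithm" that reuses known algorithms when it can, and falls
# back to an evolutionary search for a new one when it cannot.
import random

LIBRARY = {}  # already-known algorithms, keyed by requirement: 'old' ones are just data

def evolve(spec, generations=200, pop_size=50):
    """Evolutionary search over a toy space of programs x -> a*x + b."""
    def err(p):
        return sum(abs(p[0] * x + p[1] - y) for x, y in spec)
    pop = [(random.randint(-9, 9), random.randint(-9, 9)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=err)
        if err(pop[0]) == 0:              # a program passing every test case
            a, b = pop[0]
            return lambda x, a=a, b=b: a * x + b
        parents = pop[:pop_size // 2]     # selection
        pop = parents + [                 # mutation of surviving parents
            (random.choice(parents)[0] + random.randint(-1, 1),
             random.choice(parents)[1] + random.randint(-1, 1))
            for _ in range(pop_size - len(parents))]
    return None

def general_algorithm(requirement, spec):
    """The immutable outer loop: reuse if known, otherwise create and remember."""
    if requirement not in LIBRARY:
        found = evolve(spec)              # search for a 'new algorithm'
        if found is None:
            return None
        LIBRARY[requirement] = found
    return LIBRARY[requirement]

f = general_algorithm("double and add one", [(0, 1), (1, 3), (5, 11)])
print(f(10))  # 21 (the search converges with overwhelming probability)
```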
That sounds reasonable to me, since if you intend to implement a person by such a general algorithm, something about it has to be immutable, something that maintains its identity. Its beliefs, skills, experience and a lot of other stuff may change, but something has to stay constant.
Here is one example of such a program: AI Programmer: Autonomously Creating Software Programs Using Genetic Algorithms, with its code on GitHub and a SlideShare presentation. The obstacle preventing this approach from yielding something comparable to human general intelligence applied to the narrow domain of software development, or something that could be a basis for an AGI, is not philosophical; it is purely in the domain of engineering. And some nice engineering has been applied here, such as a simplified yet Turing-complete instruction set that reduces the search space for machine learning, security checks performed by an embedded interpreter so that generation of harmful code is not allowed, etc.
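For illustration, here is a toy version (not the project's actual code) of those two engineering ideas: a tiny Turing-complete instruction set, and an embedded interpreter whose step budget rejects runaway candidates before they can do harm:

```python
# A toy illustration (not the project's actual code) of a tiny
# Brainfuck-style instruction set plus an interpreter with a step budget
# that stops runaway candidates produced by the genetic search.

def run(program, steps_limit=10_000, tape_len=256):
    """Interpret a program over '><+-.[]' (brackets assumed matched)."""
    tape, ptr, out, ip, steps = [0] * tape_len, 0, [], 0, 0
    while ip < len(program):
        steps += 1
        if steps > steps_limit:              # safety/termination check
            return None                      # candidate rejected by the GA
        op = program[ip]
        if op == '>': ptr = (ptr + 1) % tape_len
        elif op == '<': ptr = (ptr - 1) % tape_len
        elif op == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif op == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif op == '.': out.append(tape[ptr])
        elif op == '[' and tape[ptr] == 0:   # jump past matching ']'
            depth = 1
            while depth:
                ip += 1
                depth += {'[': 1, ']': -1}.get(program[ip], 0)
        elif op == ']' and tape[ptr] != 0:   # jump back to matching '['
            depth = 1
            while depth:
                ip -= 1
                depth += {']': 1, '[': -1}.get(program[ip], 0)
        ip += 1
    return out

print(run('+++.'))   # [3]
print(run('+[]'))    # None: infinite loop cut off by the step budget
```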
This can also be inspirational for those who think about the biological evolution of multicellular organisms and want to know the exact mechanism by which it actually works, since the current state of knowledge is insufficient and does not provide much explanation beyond "random mutation and directed selection". One such person is Gregory Chaitin, who wants to prove Darwin's theory mathematically, and there are many parallels between these two problems: explaining how a multicellular organism evolves, and explaining how to make a computer write a new (and useful) program. Among them are the quality of the mutation and crossover operators applied between generations, and the heuristic that guides selection based on the results of the "unit tests" prepared for the target program.
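That "unit test" heuristic can be sketched as follows (a common genetic-programming device, with a hypothetical spec): fitness measures how far a candidate's outputs are from the expected ones, so selection has a gradient to climb instead of a flat pass/fail signal:

```python
# A sketch of the selection heuristic: graded distance from the expected
# outputs of each "unit test", rather than a binary pass/fail.

TESTS = [((2, 3), 5), ((10, 4), 14), ((0, 0), 0)]  # hypothetical spec: add two numbers

def fitness(candidate):
    """Lower is better; 0 means every unit test passes exactly."""
    total = 0
    for args, expected in TESTS:
        try:
            total += abs(candidate(*args) - expected)
        except Exception:        # crashing candidates are heavily penalized
            total += 1_000
    return total

print(fitness(lambda a, b: a + b))   # 0  (perfect)
print(fitness(lambda a, b: a * b))   # 27 = |6-5| + |40-14| + |0-0|
```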
All of this is relevant only if new explanations are about new algorithms, or about anything else related to computation per se. An explanation of dark matter probably isn't. An explanation is about finding novel ideas that change our understanding of the world; so in the context of AGI, it may be about the output of a computation (using existing algorithms) that causes a new understanding in the humans who receive that output.
An explanation can be about mere translation: for example, some Chinese people want something from me, and I don't understand what it is, because they speak Chinese. Or it can be about proving some mathematical fact (a theorem) that is not obvious before it gets explained step by step, during the proof. Here it is again about the initiative to find those steps, and the discipline too, to apply deductive operations rigorously in order to get from the axioms, or from some intermediate point, to the desired statement that requires a proof. It can also be about some technical, technological or organizational invention, or a scientific breakthrough that shows how to improve human life; maybe that, before anything else.
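To make "initiative plus discipline" concrete, here is a toy proof (a standard textbook example, not one from Deutsch's essay): the initiative is guessing the closed form, the discipline is the mechanical inductive step.

```latex
% Claim (initiative: guessing the closed form):
\sum_{k=1}^{n} (2k-1) = n^2
% Discipline (deductive steps): the base case n=1 gives 1 = 1^2; assuming
% the claim for n, adding the next odd number 2n+1 yields
n^2 + (2n+1) = (n+1)^2
% which is the claim for n+1, so the statement holds for all n by induction.
```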
An explanation can be about understanding one's motives to do something, which is not obvious and requires some emotional intelligence. Or it can be about us humans not understanding chess as well as AlphaZero does, regardless of how intensively we use that entity for training and game analysis. That is what Lex Fridman is talking about when he mentions that artificial neural networks possess symbolic, abstract knowledge, collected and stored in a connectionist modus operandi as a result of extensive exploration of a game tree, but we humans have a problem extracting that knowledge from them, or abstracting it from them in order to use it ourselves in our games, whatever the proper term for that would be.
The same applies to our ability to understand the logic of programs generated using genetic algorithms, which may prove to be superior to a human programmer's logic, just as AlphaZero's chess skill is superior to that of the best professional human players (and incomprehensible to them). Professional players' skill is likewise superior to that of amateurs, but there the skill transfer is a different kind of problem from the one professionals face when they use chess engines, or the one humans face when they try to understand how complex molecules perform their tasks and to reproduce them artificially in a laboratory: in the latter cases, the entities that possess the superior knowledge do not possess the explanatory power to communicate it to people. There have been attempts to develop programs for automated chess tutoring, or even just for automated annotation of chess games, which is a less interactive form of tutoring, but with limited success, even though they are based on super-strong chess engines whose development was a huge success. So yes, although chess engines can beat the best human players 100 games out of 100, they cannot explain to us how they do it in a way that brings us closer to their level of skill.
All the things mentioned require the ability of a machine to learn them first, in order to produce explanations of them for humans. In the case of a dark matter explanation, for example, what approach might a computer take to gain the required knowledge first, and does this question present the problem of specification? If programmers were to receive the dark matter explanation as part of the program specification, so that they could hide it in a program, then neither they nor the computer would be creative. If the programmers themselves were to specify very precisely the approach the computer should take to find that explanation, then they would be creative, but the computer would not. Roughly then, one possible approach would be for the program to find on the internet all relevant repositories of knowledge about dark matter, acquire all that is known, process it, and come up with a theory from the available data, as no human has been able to. Or, as a superintelligent entity, maybe it could deduce it all from some logical basis without ever analyzing any empirical data, with no need to search any repositories created by humans?
Another thing I don't quite understand is Deutsch's insistence that the test of AGI cannot be purely behavioural. As AGI is a software agent, the usual approach to categorizing its testing is functional, i.e. how well the output is computed from legitimate input, versus non-functional, such as resource consumption, robustness in the presence of illegitimate input (for example, security vulnerabilities) or under increased legitimate load, and a plethora of other things. Both categories, however, deal with software behaviour, something observable from outside the agent treated as a black box. And it is not as if white-box testing, i.e. testing its internal workings, is unheard of. The rationale here is probably that, as a truly intelligent agent, an AGI would be able to "fake the output", "refuse to get tested", "intentionally underperform", and do similar kinds of things?
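For contrast, here is what purely behavioural testing looks like in ordinary software terms (a minimal sketch reusing the toy converter from above); Deutsch's worry would then be that a truly intelligent subject can game any such fixed suite:

```python
# A minimal sketch of black-box (behavioural) testing: we observe only
# input/output, never the internals. c_to_f is the toy converter from above.

def c_to_f(celsius):
    return celsius * 9 / 5 + 32

def test_functional():
    # legitimate input: is the output computed well?
    assert c_to_f(100.0) == 212.0

def test_non_functional_robustness():
    # illegitimate input: failing loudly is acceptable observable behaviour
    try:
        c_to_f("not a number")
    except TypeError:
        pass

test_functional()
test_non_functional_robustness()
```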
Also, I am not convinced by Deutsch's claim that consciousness is a very ambiguous term, or at least not by the way he tried to demonstrate that ambiguity, because the elementary part of the nature of subjective sensations is that we lose them under general anaesthetic. This is not a one-to-one relation: besides subjective sensations, we also lose the ability to think, and to recall past subjective sensations and experience in general, but that is only because these are also part of consciousness (that which we lose under general anaesthetic). I agree, however, that people sometimes conflate the term with focus, which is what we lack when we are conscious but not paying attention; and it should also be mentioned that we do not understand dreams. As for the qualia problem, which is basically the fact that no one has ever experienced any subjective sensation as someone else, or being someone else, so we cannot know whether we all feel the color red or a 440 Hz tone the same way, or whether each of us has different sensations despite agreeing in our reports of the same sensations under the same stimuli: that is a somewhat interesting question, but if we follow that logic, I think we all sense roundness vs. squareness the same way, visually, tactilely, and logically/mathematically. Since these three are in concord within one person, I believe they must be in concord across persons too. So I think qualia basically have an undeserved status of mystery, unlike dreams. Of course, we can always wonder how come we have subjective sensations and consciousness in the first place, but that is just another instance of the question of how come there is something rather than nothing; a tough question, regardless of what Lawrence Krauss thinks about it.
Finally, Demis Hassabis's division of creativity into three types or levels, interpolation, extrapolation and invention, gives a certain insight too. Interpolation is what he calls averaging, described at https://en.wikipedia.org/wiki/Computational_creativity as combinatorial creativity or conceptual blending, and extrapolation corresponds to exploratory or transformational creativity. I just think that his example of invention could have been better, because inventing a new board game does not sound much like an example of a real-life problem solution. I also do not understand why inventing a new game would be hard for a computer, but that is only me; I know this guy knows what he is talking about.
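As a loose numerical analogy of my own (not Hassabis's definition), the first two levels map onto what a fitted model does inside and outside the range of its training examples:

```python
# A loose numerical analogy (mine, not Hassabis's definition): a model
# fitted to known examples answers inside the range it has seen
# (interpolation) far more reliably than beyond it (extrapolation).

xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 2 for x in xs]        # hidden truth: y = x^2

def fit_line(xs, ys):
    """Least-squares line through the data (the 'averaging' of known cases)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return lambda x: a * (x - mx) + my

model = fit_line(xs, ys)
print(model(1.5), 1.5 ** 2)      # inside the data range: 3.5 vs 2.25
print(model(10.0), 10.0 ** 2)    # beyond it: 29.0 vs 100.0, the error explodes
```

On this crude analogy, invention, the third level, would correspond to choosing a different model class altogether, which is plausibly the hard part.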