Blowers Racing

Spearheading Sports Quality

The Conundrum of Intelligent Machines

The Sony AIBO, a robotic pet dog, is in its third generation; Philip K. Dick raises an eyebrow. The latest model walks to its charging cradle before its battery depletes. It retails for US$1,999 and is available from select retailers.

Next up the evolutionary ladder is the Sony QRIO, a diminutive bipedal robot with ball-and-socket joints, akin to a person in both structure and function. It has been impressively showcased at a myriad of events since its debut in 2003, demonstrating ball dribbling, a traditional Japanese fan dance, dance and exercise routines, and numerous other displays of robot prowess.

Other global companies have responded to an emerging market with prototypes such as Toyota's trumpet-playing robot; Hitachi's EMIEW, perched on wheels; and, perhaps the most advanced, Honda's ASIMO. Like the rest, ASIMO is modeled on the human form, but it can also run, as the QRIO can, in a full-size humanoid body. Amusingly, one of its biggest problems is battery life.

General Motors is busy with a self-driving car slated for a 2008 release, carrying an array of built-in sensors that tell the car what is in its environment. The car responds accordingly, without driver intervention.

Hitachi, Toyota and the like believe their robots will have important, specific uses, such as caring for the elderly or working under unfavorable conditions.

It is easy to imagine robots like these melding into society: restaurant owners programming their robot waitresses to be snobby. The AIBO, with its latest M3 upgrade, can read one's email, respond to a large number of verbal commands, and guard the house by recording movement with its built-in video camera.

Although robots have not yet become as commonplace as DVD players, they will likely become integral members of society. A 2004 press release from the United Nations Economic Commission for Europe predicts that sales of personal and private-use robots will grow dramatically over the next few years.

Most notably, domestic robots such as vacuum cleaners and lawnmowers are projected to rise from the 610,000 in operation at the end of 2003 to 4.1 million by the end of 2007. Personal entertainment robots in circulation are predicted to climb similarly.

Progress in Artificial Intelligence (AI) has made possible something like Honda's ASIMO: a machine that can move "autonomously" and interact meaningfully with its environment. However, "autonomously" is a misnomer. Behind the curtain is either a team of human operators working remotely or tightly constrained preprogramming.

Robot autonomy has not been achieved in the sense the high expectations suggest. AI faces problems that are, for now, intractable, and they are of such significance that unless they are resolved, not only robot autonomy but also human-like intelligence will remain out of reach.

To begin, although programming has improved, robots cannot yet cope with the many unexpected events that humans seamlessly overcome. This translates into tribulations with tasks as simple as walking to the pharmacy in a busy city.

This shortfall of AI programming is known as the "Frame Problem": robots have trouble considering the full set of environmental variables x, y, z… when planning their future actions. They tend to simplify situations and assume that most things in their surroundings will remain constant.
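The frame assumption can be sketched in a few lines. The action model below is a hypothetical illustration in the style of classical STRIPS planners: an action lists only the facts it changes, and every fact it does not mention is presumed to stay constant, which is exactly where the trouble starts.

```python
# A minimal sketch of the frame assumption in STRIPS-style planning.
# Action and fact names here are invented for illustration.

def apply(state, action):
    """Apply an action under the frame assumption: any fact not named
    in the action's effects is presumed unchanged."""
    if not action["preconditions"] <= state:
        raise ValueError("preconditions not met")
    return (state - action["delete"]) | action["add"]

# The robot's model of "walk to the pharmacy" mentions only its own location.
walk = {
    "preconditions": {"at_home", "sidewalk_clear"},
    "add": {"at_pharmacy"},
    "delete": {"at_home"},
}

world = {"at_home", "sidewalk_clear"}
print(apply(world, walk))  # predicted: at_pharmacy, sidewalk still clear

# In a busy city, the unmodeled world intervenes: a crowd blocks the
# sidewalk, and the planner's quiet assumption that it stays clear fails.
real_world = {"at_home"}
try:
    apply(real_world, walk)
except ValueError as e:
    print("plan failed:", e)
```

The planner is correct only as long as nothing outside its short list of tracked facts ever changes, which is precisely the simplification the Frame Problem names.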

If robots are to be independent, then, more powerful programming is fundamental. Building more effective reasoning models based on the human mind is paramount, and that work is ongoing.

One instance of research aimed at improving autonomy continues at the University of British Columbia's Laboratory for Computational Intelligence, which has produced formidable work spanning a broad blend of computational systems intended for intelligent applications.

This research has culminated in a robot, SPINOZA, that combines and builds upon these technologies. It localizes itself by mapping its environment: Simultaneous Localization and Mapping (SLAM), coupled with the Scale Invariant Feature Transform (SIFT), enables the robot to quickly build new maps as it goes, negotiating objects viewed from different angles as well as new objects that appear. Its sensors include sonar, laser range finders and vision.
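The core idea behind probabilistic localization can be illustrated with a toy one-dimensional Bayes filter. This is only a sketch of the principle, not SPINOZA's actual system: the corridor map, sensor accuracy, and motion model below are all invented for the example.

```python
# Toy 1-D histogram localization: the robot holds a probability ("belief")
# for each map cell, sharpened by sensing and shifted by motion.

world = ["door", "door", "wall", "door", "wall"]  # hypothetical corridor map
belief = [1.0 / len(world)] * len(world)          # uniform prior over cells

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Reweight the belief by how well each cell matches the sensor reading."""
    weighted = [b * (p_hit if cell == measurement else p_miss)
                for b, cell in zip(belief, world)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, step):
    """Shift the belief to reflect known motion along a cyclic corridor."""
    n = len(belief)
    return [belief[(i - step) % n] for i in range(n)]

belief = sense(belief, "door")   # the robot sees a door
belief = move(belief, 1)         # it moves one cell to the right
belief = sense(belief, "door")   # it sees a door again
print(max(range(len(belief)), key=belief.__getitem__))  # prints 1
```

Only cell 1 is consistent with the whole sequence (a door, a step right, another door), so the belief peaks there; full SLAM additionally estimates the map itself while localizing within it.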

SIFT, developed and patented by David Lowe, breaks an image down into many small, overlapping feature descriptors. Each descriptor is then compared individually against those of stored objects, and the matching pieces are assembled back together.

If there are enough feature matches between a stored object and the image, the object being viewed is recognized and oriented on the robot's continually updated map. It all happens in less than one second.
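The matching step can be sketched with Lowe's nearest-neighbour "ratio test", the criterion actually used with SIFT descriptors. The 128-dimensional vectors below are random stand-ins for real SIFT output, so only the matching logic, not the feature extraction, is shown.

```python
import numpy as np

# Synthetic stand-ins for SIFT descriptors (real ones are 128-dimensional).
rng = np.random.default_rng(0)
model = rng.normal(size=(50, 128))                # descriptors of a stored object
noise = rng.normal(scale=0.05, size=(10, 128))
scene = np.vstack([model[:10] + noise,            # 10 true matches, perturbed
                   rng.normal(size=(40, 128))])   # 40 unrelated scene features

def match(scene_desc, model_desc, ratio=0.8):
    """Accept a match only if the best model descriptor is clearly closer
    than the second best (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(scene_desc):
        dists = np.linalg.norm(model_desc - d, axis=1)
        best, second = np.argsort(dists)[:2]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

good = match(scene, model)
print(len(good))  # the 10 perturbed copies should dominate the matches
```

The ratio test is what keeps recognition reliable in clutter: a genuinely matching feature has one clear nearest neighbour, while an unrelated feature is roughly equidistant from everything and is discarded.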

This allows for multi-object recognition and localization in changing environments, a blanket technology integrated into many visual robotic systems, including the AIBO. The achievement is autonomy within constrained parameters.

As technology like this evolves, living with robots becomes ever more real. But since robots only approximate autonomy, can they still be said to be intelligent? A valid question, since intelligence can be present without autonomy (consider Christopher Reeve). Another suitable question: even with full autonomy, could they be labeled intelligent?

These questions call for a clarification of what intelligence means, providing criteria to measure against. A survey of dictionary definitions characterizes intelligent entities as having an affinity for knowledge, the capacity to understand it, and the ability to solve problems through cognition.

This implies autonomy, as such a thing must be able to store information (an affinity for knowledge, to be used in learning) and to overcome obstacles in an unpredictable environment (solving problems via cognition). Understanding is a special aspect, as will be seen.

Alan Turing was thinking along the same lines circa 1950, when he developed his "Turing Test": an intelligence test, but for computers. Turing held that if his computer could fool a human contestant into thinking it was human in a question-and-answer game, then intelligent it was.

An oversimplification, however. By comparison, current IQ tests for humans that are valid gauges of intelligence exercise a set of "cognitive abilities" such as analytical reasoning, spatial awareness and verbal ability. Psychometricians have measured over seventy of these abilities. Robert Sternberg (IBM Professor of Psychology and Education at Yale University) condensed this number to three: analytic, creative and practical.

These abilities are postulated as being parts of inter-correlated systems that make up intelligence. Of course, they are metaphors that hover over their respective brain regions.

Together, they account for a wide range of intellectual power: mathematical reasoning, finding solutions to new problems, creative writing, making quick decisions with long-lasting implications, and so on. They dictate the essence of intelligent thought. The Turing Test, by contrast, probes a program in a narrowly compartmentalized way.

Taking these cognitive abilities as the driving force of human thought, and adding the necessity of a physical structure such as a brain or ganglion to process the information, yields a contestable definition of intelligence: "a physical thing that has the ability to learn and understand knowledge by exercising a set of cognitive abilities through which knowledge is gained."

Given the inadequacy of the Turing Test, there remains the possibility of administering common standardized tests to computers: the SAT, GMAT, GRE and so forth. If they passed, would we say they are intelligent by definition? After all, the relationship between a computer's hardware and its program is strikingly similar to that between our physical bodies and our DNA.

While the QRIO or EMIEW might be able to pass, there are a couple of problems. One aspect that defines intelligence, perhaps the most important, is that a thing should "understand", something difficult for an entity composed of hunks of plastic.

Professor John R. Searle's Chinese Room analogy, from his 1980 article "Minds, Brains, and Programs", illustrates this point. A computer is not concerned with what things mean (semantics) but only with the manipulation of symbolic representations, as in "if x then output z". Whatever "x" and "z" actually mean is irrelevant.
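Searle's point can be made concrete with a toy rule-book program. The rules below are invented for illustration: the program produces fluent-looking responses by pure symbol lookup, with no representation of meaning anywhere in it.

```python
# A toy "Chinese Room": input symbols map to output symbols by rote rule.
# The program never touches what any symbol means; it only matches shapes.

rules = {
    "你好吗?": "我很好, 谢谢。",      # "How are you?" -> "I am fine, thanks."
    "死是什么?": "死是生命的终结。",   # "What is death?" -> a dictionary answer
}

def room(symbols):
    """Return the rule-book response, or a stock symbol if no rule applies."""
    return rules.get(symbols, "请再说一遍。")  # "Please say that again."

print(room("你好吗?"))  # a fluent reply, produced with zero understanding
```

To an outside observer the room "speaks Chinese", yet nothing inside it, person or program, understands a word: syntax alone, Searle argues, never adds up to semantics.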

Take the word "death". Data from Star Trek can relay that "death" is a noun, that it has five letters, and its dictionary meaning. However, because he lacks emotions, he cannot understand it the way a human does.

If Captain Picard were murdered in his presence, Data's reaction would be flat. For sentient beings, unlike Data, death carries a series of emotional and perceptual tags: sadness, fear, the faces of perished family members. Such tags lend understanding to concepts like death, and they are tags a robot does not have.

Beyond understanding, emotions are deemed an important precursor to other intellectual processes, as they contribute to both motivation and temporality in thought: the rush after solving a math problem (goal attainment), or the memory of a moment of elevated happiness.

Because emotions support motivation, and with it goal setting, they are imperative to a sentient being's ability to plan and make decisions, abilities necessarily linked with autonomy. Temporality has an important function in planning and decision-making as well, and temporality in thought is itself a product of consciousness. Consciousness is difficult to test for, but the consensus is that computers lack this important attribute, and consequently lack any sense of self-awareness.

Self-awareness allows one to stand "alongside" one's life while living it: the third-person perspective. This, in turn, has many complex implications for our decision-making processes. Depending on the person, some decisions are based on goals well into the future: "Where do I want to be in five years?"

This is the ability to converse with an "inner voice", the third person, giving sufficiently sophisticated sentient beings a unique and powerful tool for decision-making and planning.

Providing a machine with such a sense of place and time is, given the available stock of research, a conundrum even to pondering laymen. Yet this is the essence of life, the quintessence of the autonomy that even an insect has. Planning and anticipating are the core of what it is to be alive, the driving force in all life, whether reproducing or, more ambitiously, becoming the next President.

A simple planning action like "I think it will be quiet at the airport next Wednesday; I shall book my flight" arises from emotions and consciousness: emotions, through the unpleasant feelings associated with large crowds and overbooked planes; anticipation, through a set of past perceptual experiences (consciousness). The latter is an offshoot of inductive reasoning.

But even if programs overcome the problems of decision-making and give an appearance of autonomy, without consciousness and emotions computers lack understanding and a sense of self, and thus lack capacities necessary for intelligent thought.

They stand accused of being mere empty impostors that understand nothing. Although they satisfy some of the criteria, they fail to meet the most basic definition of human intelligence.

Programmers are refining and blending logical systems from both AI and Philosophy to create more powerful programs. Currently, no one logical system is powerful enough to sufficiently model human reasoning or create a practical, fully autonomous robot.

Although AI has moved far beyond Turing's era, much remains to be done if fully autonomous robots are to emerge from their long-standing regimen of theory and testing.

The Blue Brain Project, a collaboration between IBM and EPFL, is an attempt to better model the reasoning power of the human mind. Its starting point is microscopic.

According to the official project website, “Using the huge computational capacity of IBM’s eServer Blue Gene, researchers from IBM and EPFL (Ecole Polytechnique Fédérale de Lausanne) will be able to create a detailed model of the circuitry in the neocortex – the largest and most complex part of the human brain.”

A complete three-dimensional computer model of the human brain would help researchers better understand the processes underlying thought, a significant contribution to AI research. It may also, ambitiously, spawn positronic brains for robots; in some schools of thought, positronic brains might give rise to consciousness.

Researchers at the Massachusetts Institute of Technology took an evolutionary angle in 1997, modeling their robot KISMET on a child. KISMET was unique in that it was an autonomous, anthropomorphic robot intended to interact emotionally with humans. That was a start.

The result of all this robot evolution was well covered at this year's EXPO 2005 in Aichi, Japan, which produced a generous demonstration of robots, from child-care robots to portrait-painting ones. With 63 prototypes on display, there appeared to be a robot for everything.

However, AI research has not addressed the marriage of emotions and consciousness with intelligence, perhaps the key to human-like intelligence. In their own way, robots could be construed as "intelligent". But if something understands nothing, it is difficult to include "robot" and "intelligent" in the same clause.

Perhaps the answer is in cybernetics.