All this nonsense got me thinking further, though on quite a different tack.
Feeling sorry for the Asimov fans subscribing to all this Trek-talk, I
thought I'd throw them a conundrum: what would happen if you threw Gödel's
Theorem at the Three Laws of Robotics? For a start, you would probably
reconstrue all those cuddly eccentric AI types out there.
Presumably, whether the Laws worked would depend on whether people were
construed as "human". In turn this suggests that a computer program
(whether housed in a positronic brain or not) would become conscious at the
point at which it began to construe. Now, computer programs still operate
at a level way below that; but if one throws sufficient data at a
sufficiently complex program, and requires sufficiently complex decisions
within the context of a system to which Gödel's Theorem applies (so that
some questions cannot be settled within the program's own rules),
presumably one could in principle force construing upon it.
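Just to make the dependence concrete, here is a toy sketch in Python (my
own illustration; the names and the rule inside are entirely hypothetical,
not anything from Asimov or real robotics) of how the First Law can only
fire downstream of a construal of "human":

    # Hypothetical sketch: the First Law can only operate once an
    # entity has been construed as "human". The Law itself says
    # nothing about how that discrimination is to be made.

    def construe_as_human(entity):
        # Placeholder discrimination: whatever rule goes here is a
        # construct somebody has imposed on the robot's experience.
        return entity.get("species") == "human"

    def first_law_permits(action, affected):
        # "A robot may not injure a human being or, through
        # inaction, allow a human being to come to harm."
        harmed = [e for e in affected if action in e.get("harmed_by", [])]
        return not any(construe_as_human(e) for e in harmed)

    bystanders = [{"species": "human", "harmed_by": ["push"]},
                  {"species": "android", "harmed_by": ["push"]}]
    print(first_law_permits("push", bystanders))   # False: a construed human is harmed

All the interesting work, of course, is hidden inside construe_as_human:
whatever rule goes there is itself a construct, which is exactly the point.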
As further food for thought, someone (I think Marvin Minsky) suggested that
consciousness requires, minimally, that the system have the capacity to
remember what its state was a moment ago. Isn't that the minimal
requirement for construing, and hence for anticipation, too?
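As a toy illustration of that minimal condition (my sketch, not Minsky's),
consider a system that keeps its state of a moment ago and uses the
remembered transition to anticipate what comes next:

    # A system that remembers its previous state and replays the
    # observed transitions to anticipate the next state. Anticipation
    # in the barest possible sense.

    class Anticipator:
        def __init__(self):
            self.previous = None        # state a moment ago
            self.transitions = {}       # remembered (prev, next) pairs

        def observe(self, state):
            if self.previous is not None:
                self.transitions[self.previous] = state
            self.previous = state

        def anticipate(self):
            # Predict by replaying the last transition seen from here.
            return self.transitions.get(self.previous)

    a = Anticipator()
    for s in ["dark", "light", "dark", "light", "dark"]:
        a.observe(s)
    print(a.anticipate())   # -> "light"

Nothing here is conscious, naturally; the point is only that even the
barest anticipation already presupposes a memory of the state a moment ago.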
I realise that this is still a pretty leaky proposition, but if the AI
people were to focus more on process and less on performance, such a
situation might be envisaged as arising.
Any thoughts, anyone?
Bill.
Bill Ramsay,
Dept. of Educational Studies,
University of Strathclyde,
Jordanhill Campus,
GLASGOW,
G13 1PP,
Scotland.
'phone: +44 (0)141 950 3364
'fax: +44 (0)141 950 3367