At OEB2018 the last session I led was on the subject of AI and machine learning in combination with old philosophers and new learning. The session drew a big group of very outspoken, intelligent people, making it a wonderful source of ideas on philosophy and AI.
As promised to the participants, I am adding my notes taken during the session. There were a lot of ideas, so my apologies if I missed any. The notes are below; after them I embed the slides that preceded the session, to indicate where the idea for the workshop came from.
Notes from the session Old Philosophers and New Learning @OldPhilNewLearn #AI #machineLearning and #Philosophy
The session started off with the choices embedded in any AI, e.g. a Tesla car about to run into people: will it run into a grandmother or into two kids? What is the ‘best solution’? Further into the session this question gained additional dimensions: we as humans do not necessarily see what is best, as we do not have all the parameters. And we could build into the car that, in case of emergency, the lives of others are more important than the lives of those inside the car, so that it simply crashes into a wall, avoiding both the grandmother and the kids.
The developer or creator gives parameters to the AI; with machine learning embedded, the AI starts to learn from there, based on feedback on or directed at those parameters. This contrasts with rule-based, computer-based learning, where the rules are given and are either successful or not, but are no basis for new rules to be derived.
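To make that contrast concrete, a minimal sketch (my own illustration, not something shown in the session): a fixed rule stays exactly as the developer wrote it, while a learned parameter is nudged by feedback. All names, thresholds and step sizes below are hypothetical.

```python
def rule_based(speed_kmh: float) -> str:
    # Rule-based system: the threshold is fixed by the developer and never changes.
    return "brake" if speed_kmh > 50 else "continue"


class FeedbackLearner:
    # Learning system: the developer only sets a starting parameter;
    # feedback ("that was wrong") shifts it over time.
    def __init__(self, threshold: float = 50.0, step: float = 1.0):
        self.threshold = threshold
        self.step = step

    def decide(self, speed_kmh: float) -> str:
        return "brake" if speed_kmh > self.threshold else "continue"

    def feedback(self, speed_kmh: float, should_have_braked: bool) -> None:
        # Adjust the threshold in the direction the feedback points.
        decided_brake = self.decide(speed_kmh) == "brake"
        if should_have_braked and not decided_brake:
            self.threshold -= self.step   # become more cautious
        elif decided_brake and not should_have_braked:
            self.threshold += self.step   # become less cautious


learner = FeedbackLearner()
learner.feedback(speed_kmh=48.0, should_have_braked=True)
print(learner.threshold)  # 49.0 – the learned parameter moved; a fixed rule never would
```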
From a philosophical point of view, the impact of AI (including potential bias coming from the developers or from the feedback received) could be analysed using Hannah Arendt’s ‘power of the system’. In her time this referred to the power mechanisms during WWII, but the abstract lines align with the power of the AI system.
The growth of an AI based on human algorithms does not necessarily mean that the AI will think like us. It might derive different conclusions, based on the priority algorithms it chooses, and as such current paradigms may shift.
Throughout the ages, the focus of humankind has changed with new developments, new thoughts, new insights in philosophy. If humans put parameters into AI, those parameters (seen as priority parameters) will therefore also change over time. We can see where AI starts, but not where it is heading.
How many ‘safety stops’ are built into AI?
Can we put some kind of ‘weighting’ into the AI parameters, enabling the AI to fall back on more important or less important parameters when a risk needs to be considered?
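A hedged sketch of what such ‘weighting’ could look like (my own illustration, not from the session): each risk parameter carries a weight, a top-priority signal acts as a hard safety stop, and the lower-priority parameters are only weighed against each other when nothing critical is detected. The signal names, weights and thresholds are hypothetical.

```python
# Hypothetical priorities: higher weight = more important parameter.
RISK_WEIGHTS = {
    "pedestrian_detected": 1.0,   # highest priority
    "arrival_time":        0.2,
    "passenger_comfort":   0.1,   # lowest priority
}


def choose_action(signals: dict[str, bool]) -> str:
    # Hard 'safety stop': a top-priority signal overrides everything else.
    if signals.get("pedestrian_detected"):
        return "emergency_brake"
    # Otherwise weigh the remaining, lower-priority signals.
    score = sum(RISK_WEIGHTS.get(name, 0.0) for name, active in signals.items() if active)
    return "slow_down" if score > 0.25 else "continue"


print(choose_action({"pedestrian_detected": False, "arrival_time": True}))  # continue
print(choose_action({"pedestrian_detected": True}))                          # emergency_brake
```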
For humans, failure can result in growth based on those failures. AI also learns from ‘failures’, but it learns from differences in data points. At present the AI only receives a message ‘this is wrong’; at that same moment – if something is wrong – humans make a wide variety of risk considerations. In the bigger picture one can see an analogy with Darwin’s evolutionary theory, where time finds what works based on evolutionary diversity, but with AI the speed of adaptation increases immensely.
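As a rough illustration of that gap (again my own, hypothetical sketch, not anything shown in the session): the feedback a machine typically receives is a bare ‘this is wrong’ label, whereas a human weighs several dimensions at once in the same moment. All field names are invented.

```python
from dataclasses import dataclass

# What the machine usually receives: a bare error signal on one data point.
machine_feedback = {"prediction_id": 42, "label": "wrong"}


@dataclass
class HumanConsideration:
    # What a human might weigh at the same moment – several dimensions at once.
    severity: float        # how bad was the outcome?
    reversibility: float   # can it be undone?
    who_is_affected: str   # bystanders, passengers, the system itself…
    context: str           # weather, time pressure, social setting…


human_feedback = HumanConsideration(
    severity=0.8, reversibility=0.2, who_is_affected="bystanders", context="wet road"
)
```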
With mechanical AI it was easier to define which parameters were right or wrong. With Go or chess, for example, you have specific boundaries and specific rules. Within these boundaries there are multiple options, but choosing among them is a straight path of considerations. Humans, at present, weigh many more considerations for a single conundrum or action: a whole array of considerations that can also involve emotions, preferences… Looking at philosophy, you can see an abundance of standpoints you can take, some even directly opposing each other (Hayek versus Dewey on democracy), and this diversity sometimes gives good, workable solutions for both sides – solutions that can be debated as valuable outcomes even though they are based on different priorities, and even on very different takes on a concept. The choices or arguments made in philosophy over time also clearly point to the power of society, technology and the reigning culture at that point in time: what is good now in one place can be considered wrong in another place, or at another point in time.
It could benefit teachers if they were supported by AI that signals students with problems (though of course this assumes that ‘care’ is one of the parameters society finds important; in another society those students with problems might simply be set aside). Either choice is valid, but each builds on a different view: do we care by ‘supporting all’, or by ‘supporting those who can, so we can move forward more quickly’? It is human emotion that makes the difference in which choice might be the ‘better’ one.
AI works in the virtual world. Always. Humans distinguish between the real and the virtual world, but for the AI everything is real (though virtual to us).
Asimov’s laws of robotics still apply.
Transparency is needed to enable us to see which algorithms are behind decisions, and how we – as humans – might change them if deemed necessary.
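One possible shape such transparency could take (a sketch under my own assumptions, not anything presented in the session): every decision records which parameters contributed and by how much, so humans can inspect them and, if deemed necessary, change the weights. All names and numbers are invented.

```python
from dataclasses import dataclass


@dataclass
class ExplainedDecision:
    action: str
    contributions: dict[str, float]  # parameter name -> how much it weighed in


def decide(features: dict[str, float], weights: dict[str, float]) -> ExplainedDecision:
    # Keep the per-parameter contributions alongside the decision itself,
    # so the reasoning can be inspected and the weights adjusted by humans.
    contributions = {name: weights.get(name, 0.0) * value
                     for name, value in features.items()}
    action = "flag_for_review" if sum(contributions.values()) > 1.0 else "approve"
    return ExplainedDecision(action, contributions)


decision = decide({"missed_deadlines": 3.0, "forum_posts": 5.0},
                  {"missed_deadlines": 0.6, "forum_posts": -0.1})
print(decision.action)         # flag_for_review
print(decision.contributions)  # {'missed_deadlines': 1.8, 'forum_posts': -0.5}
```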
Lawsuits become more difficult: a group of developers can set the basis of an AI, but the machine takes it from there by learning itself. The machine learns, and as such the machine becomes liable if something goes wrong, but ….? (e.g. the Tesla crash).
Trust in AI needs to be built over time. This also implies empathy in dialogue (e.g. the sugar pill / placebo effect in medicine, which is enhanced if the doctor or healthcare worker provides it with additional care and attention to the patient).
Similarly, smart-object dialogue took off once a feeling of attention was built into it: e.g. replies from Google Home or Alexa in the realm of “Thank you” when hearing a compliment. Currently machines fool us with faked empathy. This faked empathy also points to the difference between feeling ‘related to’ something and being ‘attached to’ something.
Imperfections will become more important and attractive than the perfections we sometimes strive for at this moment.
AI is still framed in terms of good and bad (ethics) and of ‘improvement’, which is linked to the definition of what is ‘best’ at that time.
Societal decisions: what do we develop first with AI – solutions for the refugee crisis or self-driving cars? This affects the parameters at the start. Compare it to idiot savants, where high intelligence does not necessarily imply active consciousness.
Currently some humans are already bound by AI: e.g. astronauts, where the system calculates everything.