AI to help human thinking processes
AI is rapidly expanding its reach: there are initiatives where AI generates meaningful curated content for e-learning (e.g. Wildfire http://www.wildfirelearning.co.uk/ ), legal research is analysed and organised by AI (e.g. http://www.rossintelligence.com/ ), and multiple AIs mould social media interactions based on factors such as friends, exchanged ideas, and similar shared content (sometimes opinions)… basically, industry sees AI as a means to refocus on the less repetitive parts of its business or profit goals (https://insidebigdata.com/2017/01/29/amplifying-human-potential-towards-purposeful-artificial-intelligence-a-perspective-for-cios/ ).
But I am wondering whether there are research projects that use AI for text analysis while also including cognitive language use to enhance critical thinking. For instance: if you have echo chambers, why not use AI to pick up frequently used arguments from ‘the other side’ to generate more in-depth arguments for either side? Or, for those looking to become dominating world leaders (devil’s advocate here): create something that goes beyond fake news, using arguments that feel right but are actually built with persuasive language constructions to trigger a feeling of ‘that is right’ and parallel what a person believes is morally correct (I said it was a devil’s advocate example :D ).
AI in education
With all the talk about new citizens needing a ‘creative’ mindset above anything else, that creativity does not yet seem to emerge from AI in education; the focus is still on rehashing what is already there, with even more emphasis on the norm (I could be wrong, so feel free to provide arguments on why creativity is indeed boosted by AI in education).
A couple of examples where AI is used to boost learning along the lines of existing norms, but which are nevertheless of interest:
Deep Knowledge Tracing. One of the interesting strands of AI-in-education research is Deep Knowledge Tracing (a good read is the 2015 paper by Piech, Bassen, Huang, Ganguli, Sahami, Guibas and Sohl-Dickstein: https://web.stanford.edu/~cpiech/bio/papers/deepKnowledgeTracing.pdf ). This allows a machine to model the knowledge of a student as they interact with coursework, and it can be used, for instance, to extrapolate student performance. This seems good, but it is based on ‘what we expect of students’, which is not necessarily what would be good for humanity or social thinking.
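To make the idea a bit more tangible, here is a minimal sketch (in Python/PyTorch, my own illustration and not the authors' code) of the kind of model the paper describes: a recurrent network that reads a student's sequence of exercise attempts and predicts how likely they are to answer each exercise correctly next. The exercise-bank size, layer sizes and random data below are assumptions made for the example.

```python
# A minimal sketch of the Deep Knowledge Tracing idea (Piech et al., 2015):
# an LSTM sees a student's sequence of (exercise, correct/incorrect) interactions
# and outputs, at each step, a probability of answering each exercise correctly.
# Exercise counts, layer sizes and the random data are illustrative assumptions.
import torch
import torch.nn as nn

NUM_EXERCISES = 50      # assumed size of the exercise bank
HIDDEN_SIZE = 64

class DKT(nn.Module):
    def __init__(self, num_exercises, hidden_size):
        super().__init__()
        # input: one-hot of (exercise id x correctness) -> 2 * num_exercises features
        self.lstm = nn.LSTM(2 * num_exercises, hidden_size, batch_first=True)
        # output: one probability per exercise for the next interaction
        self.out = nn.Linear(hidden_size, num_exercises)

    def forward(self, interactions):
        hidden, _ = self.lstm(interactions)
        return torch.sigmoid(self.out(hidden))

# Fake interaction sequence for one student: 10 steps of (exercise, correctness).
exercise_ids = torch.randint(0, NUM_EXERCISES, (1, 10))
correct = torch.randint(0, 2, (1, 10))
# Encode each step as a one-hot over 2 * NUM_EXERCISES (shifted by correctness).
x = torch.zeros(1, 10, 2 * NUM_EXERCISES)
x.scatter_(2, (exercise_ids + correct * NUM_EXERCISES).unsqueeze(-1), 1.0)

model = DKT(NUM_EXERCISES, HIDDEN_SIZE)
next_step_probs = model(x)          # shape: (1, 10, NUM_EXERCISES)
print(next_step_probs[0, -1, :5])   # estimated mastery of the first 5 exercises
```

The point to notice is that the model only ever learns the structure of the existing exercises and expected answers, which is exactly the 'norm' concern raised above.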
Assessing future scores. Another example is the algorithm built by Google and Stanford which relates to a student’s learning ability (well, more specifically, how a student would answer questions): http://www.dailymail.co.uk/sciencetech/article-3380374/The-end-exams-Algorithm-predict-students-answer-questions-explain-questions-wrong.html . Here too, learning seems to be equated with taking exams… which does not seem to promote creative thinking.
IBM Watson for Education (https://www.ibm.com/watson/education ) starts from the idea of personalised learning (and passion, so I really love that starting point), but when I looked at the videos, the definition of personalised learning seemed to be limited to personal interests (in the educator video), which narrows the concept of personalised learning. And though it is good to provide skill-level content, if the content base you pull from is standard… the standards will again be the norm, which does not necessarily result in creative ideas or insights.
AI based on language data
One example I found that uses AI for natural language processing is NexLP (https://www.nexlp.com/ ). Quoting from their page: “leveraging the latest advances in Natural Language Processing (NLP), Cognitive Analytics, and Machine Learning, Story Engine turns disparate, unstructured data - including email communications, business chat messages, contracts and legal documents - into meaningful insight that can be used to act, as well as combined with structured data to create a truly comprehensive view of the entire data universe.” The people behind NexLP state that they use cognitive analysis to add more context to the actual text analysis.
But at first glance it looks more like an enhanced interactive dashboard, which means it feels more like a quantitative AI implementation than a qualitative one. One of the solutions to filter meaningful content is wikification (where you link entities to a knowledge base, see https://en.wikipedia.org/wiki/Entity_linking ), which seems to be an effective way to add context to text analytics technology (https://www.nexlp.com/blog/2017/12/26/nlp-technology-architecture ).
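As a rough illustration of what wikification could look like in practice (my own sketch using spaCy and the public Wikipedia search API, not NexLP's implementation), the idea is to detect named entities in a text and link each one to a likely Wikipedia page, which then serves as extra context for the analysis.

```python
# A rough sketch of wikification / entity linking: detect named entities with spaCy
# and look up a candidate Wikipedia page for each via the public MediaWiki API.
# This illustrates the general idea only, not NexLP's actual pipeline.
import requests
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def wikify(text):
    doc = nlp(text)
    links = {}
    for ent in doc.ents:
        # Ask Wikipedia's opensearch endpoint for the best-matching page.
        resp = requests.get(
            "https://en.wikipedia.org/w/api.php",
            params={"action": "opensearch", "search": ent.text,
                    "limit": 1, "format": "json"},
            timeout=10,
        ).json()
        if resp[3]:  # opensearch returns [query, titles, descriptions, urls]
            links[ent.text] = resp[3][0]
    return links

print(wikify("IBM Watson was used in education projects at Stanford."))
# e.g. {'IBM Watson': 'https://en.wikipedia.org/wiki/IBM_Watson', 'Stanford': '...'}
```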
Past fake news or beyond critical thinking
The term fake news is now a given in many politicians’ speeches, both in its originally intended definition and in popular debate, where it functions as a way to ridicule and diminish the truth or value of an opposing person’s argument. But maybe we can turn this around: create algorithms that enhance our debating skills and our critical thinking by surfacing the arguments most frequently used by groups gently opposing our views. I say gently opposing, as persuasive arguments are rarely harsh, completely opposing arguments.
I see this as a possible way to tear down
the echo chambers created by filter bubbles, and build bridges. Or at least get
a conversation started.
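As a very rough sketch of where such an algorithm could start (my own illustration in Python with scikit-learn; the corpus, phrase lengths and counts are made-up assumptions), the simplest version would just surface the phrases that come up most often in texts from a gently opposing group, so they can be offered back to the reader as starting points for counter-arguments.

```python
# A very rough sketch: surface the most frequently used phrases in a corpus of
# texts from a (gently) opposing group, as raw material for counter-arguments.
# The example corpus and parameters are made up for illustration.
from sklearn.feature_extraction.text import CountVectorizer

opposing_posts = [
    "Standardised testing keeps schools accountable and comparable.",
    "Without standardised testing we cannot compare schools fairly.",
    "Accountability through testing protects students from weak teaching.",
]

# Count 2- to 3-word phrases, ignoring common English stop words.
vectorizer = CountVectorizer(ngram_range=(2, 3), stop_words="english")
counts = vectorizer.fit_transform(opposing_posts)

totals = counts.sum(axis=0).A1  # total occurrences of each phrase
phrases = vectorizer.get_feature_names_out()
top = sorted(zip(phrases, totals), key=lambda p: p[1], reverse=True)[:5]

for phrase, n in top:
    print(f"{n}x  {phrase}")   # e.g. "2x  standardised testing"
```

A real system would obviously need much more than phrase counting (argument mining, stance detection, and some notion of ‘gently’ opposing), but even this crude step makes visible what the other side keeps saying.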
Feel free to share your thoughts or link to
examples.
Picture from http://cdn.nanalyze.com/uploads/2017/08/mckinsey.jpg