First of all, let me say how great it is that MIT and Lex Fridman have started sharing parts of their AI classes free for everyone to attend!
In this one, Josh Tenenbaum talks about his research into building machines that see, learn, and think like people.
To me it’s fascinating to get an in-depth look at the challenges the researchers are confronted with, and at the successes they have achieved so far.
You get a clear sense of what an engineering approach in this field looks like, in contrast to a more theoretical, less “hands-on” way of looking at it.
Josh finds nice ways to explain, by example, the major challenges they are confronted with — challenges that could easily be overlooked if you’re not familiar with this way of approaching things.
The engineering approach forces you to look at a given problem with a clear, step-by-step method of analysis and problem solving.
He points to examples of cognitive capabilities in nature, e.g. behavioral patterns of animals and humans, that AI research, in its quest for AGI (artificial general intelligence), is currently lacking.
It’s also very interesting to learn where we actually are right now (the video is from 2018) in terms of progress on pattern detection, image analysis and so on.
You’ll see, quite impressively, that the simpler pattern-detection AIs work pretty well up to a certain point, but that human-like cognitive capabilities seem out of reach for now.
When AI is discussed in general, people often go far beyond what is possible today. They’re stuck in all the pop-culture imagery, sounds and movies that tend to dwell on the negative, scary outcomes we imagine as very likely to become reality.
In fact, if you look at what has happened in AI research over the last 25 years, you can find many examples of AI being used for malevolent purposes, but nothing comparable to the “Terminator” or “HAL 9000” scenarios we love to be scared by.
On the other hand, things move fast in this field, so it’s absolutely justified to take a long, deep look at the dark sides of AI. That’s our best chance to raise awareness and get people involved in the discussion of its controversial use cases (weapons, privacy-compromising analytics and so on).
And here is where my mind wandered to...
With the “freedom” of not looking at this through engineers’ eyes, I would, until recently, have said the following: if you lump together a few ANI (artificial narrow intelligence) script and engine snippets — like, for example, the “physics engines” known from the video-game industry, which govern how objects or entities in a game behave under physical laws — you should be able to build a self-learning machine that constantly improves not only its general capabilities but, above all, “does its own thing” in terms of looking for ways to learn more, faster!
By completely giving up control over the “how” in this way, I reckoned we’d hit that “technological singularity” thing in the blink of an eye. Hahaha!
For the moment this seems absurd, I know. But to a certain extent, I assume that massively parallel efforts on different problem fields could certainly speed up the evolution of AI towards AGI dramatically.
Back to Josh’s more hands-on approach…
This is a talk by Josh Tenenbaum for course 6.S099: Artificial General Intelligence. This class is free and open to everyone. Our goal is to take an engineering approach to exploring possible paths toward building human-level intelligence for a better world.
— Video description (source: YouTube)
If you find the time to watch this video, please let me know what you think about it, and about my 2 sats on it, down in the comments!
Do you feel, for example, that Josh’s approach of having the machine learn cognitive capabilities in baby steps is the right way to make progress towards AGI?