
I learn therefore I am, by Marco Giannini

Can intelligence be reduced to learning?

January 17, 2017

[This is a guest post by Marco Giannini, originally published on the Italian website Sentiero Digitale]

 

 
For several years now, Google Images has offered the possibility of searching for images using keywords, and of sorting through the results by comparing them to similar images.

The most recent advancement is Terrapattern, a prototype search engine in which users select a satellite picture and look for similar shots within a given (limited) territory. Basically, when you click on a square of the satellite map, the application uses a complex machine learning tool to find portions of terrain within the same city that are similar in color and shape.
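The core idea of such a visual search can be sketched in a few lines. This is not Terrapattern's actual code: it assumes, hypothetically, that every map tile has already been reduced to a numeric feature vector (as a neural network would do), and then ranks tiles by cosine similarity to the clicked one.

```python
import numpy as np

# Hypothetical feature vectors: one 64-dimensional descriptor per map tile,
# as a neural network might produce for each square of satellite imagery.
rng = np.random.default_rng(0)
tile_features = rng.normal(size=(1000, 64))  # 1000 tiles in one city

def most_similar(query_index, features, k=5):
    """Return the indices of the k tiles most similar to the query tile,
    ranked by cosine similarity of their feature vectors."""
    q = features[query_index]
    norms = np.linalg.norm(features, axis=1) * np.linalg.norm(q)
    scores = features @ q / norms            # cosine similarity to the query
    scores[query_index] = -np.inf            # exclude the query tile itself
    return np.argsort(scores)[::-1][:k]      # best matches first

neighbors = most_similar(42, tile_features)
print(neighbors)  # indices of the five most similar tiles
```

The interesting work, of course, is hidden in how the feature vectors are computed; the search itself is just nearest-neighbor ranking.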

In Terrapattern, areas similar to the selected one appear pinpointed in red. At the lower right, the search results are arranged in rows and columns. At the upper right, their geographical locations are marked on the city map (here, New York City), with a plot of their relative similarity alongside.

Although it covers the territory of only seven large cities so far (six in the US: Pittsburgh, San Francisco, New York City, Detroit, Miami and Austin, plus Berlin in Germany), Terrapattern is the first tangible step toward a powerful tool for planners, scientists, journalists and anyone interested in comparing hidden details of a portion of the Earth: the shape of a stretch of coastline, say, or the detail of a building's ledge and how it has been modified over the decades to conceal illegal construction.


The application makes use of a set of algorithms (borrowed from Google's Deep Dream) that compose a Convolutional Neural Network: a machine learning system, inspired by studies of the animal cerebral cortex, built on a pretrained grid. The network repeatedly asks itself what is recognizable in the image it is given and builds a schematic version of it, enhancing certain salient features; then it interrogates the pattern it has just produced to give back an even more essential outline, and so on, refining the same image at each cycle and enriching its database with the experience so acquired.
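The basic operation such a network repeats is the convolution itself: sliding a small filter over the image and recording, at each position, how strongly the local patch matches the filter. A minimal sketch of a single convolutional layer (toy example, not Terrapattern's or Deep Dream's code):

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over the image; each output cell records how
    strongly the local patch matches the filter (one CNN layer, no padding)."""
    h, w = kernel.shape
    out = np.empty((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+h, j:j+w] * kernel)
    return np.maximum(out, 0)  # ReLU: keep only positive responses

# A vertical-edge detector applied to a toy 6x6 image:
image = np.zeros((6, 6))
image[:, 3:] = 1.0                       # dark left half, bright right half
edge_filter = np.array([[-1., 1.],
                        [-1., 1.]])
feature_map = conv2d(image, edge_filter)
print(feature_map)  # responds only along the vertical edge
```

A real CNN stacks many such layers, each feeding its feature maps to the next, which is exactly the "refining the image at each cycle" the article describes.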


For instance, a neural network is responsible for looking for an eye exactly where you would expect to find an eye, for a pair of pointed ears on the head of what looks like a collie, for three aces in the hand of the winner of a poker game (thus learning that a pair might not be enough). By answering itself "yes, that looks like an eye" and "that looks like an ear", it is persuaded it has hit the nail on the head, and its idea of what a dog is becomes somewhat clearer. Maybe it won't understand why a dog is playing poker, but remember that this is still a first step, filtered through images and the deductions related to them.
Why should a piece of software understand anything? Simply because recognizing a single instance as belonging to a known category is the best way to sort through millions of images and to become autonomous, i.e. capable of responding to stimuli and making its own decisions, even without being urged to do so.

In a few words, this is the software's best way to become an intelligent being.

Aristotle, whose thoughts on the cataloging abilities of the human mind still rule philosophy, might say that this case concerns the efficient cause (the agent which actually brings something about) or the formal cause (the structure or design of a being), and not the final cause, which is what distinguishes who (or what) is intelligent from what is not. But for the moment let's set the philosophical question aside, or rather defer it to the end.

Open data and open source

Terrapattern was created by a team of developers led by Golan Levin, a professor of computational arts at Carnegie Mellon University.

The satellite industry is at the forefront in the US, and as early as 2013 Google, in collaboration with the US Geological Survey, NASA and Time, built a spectacular, humongous archive of satellite photos covering almost every place on Earth, with the aim of surveying changes in the environment since 1984.


As we know, Google Earth is open data: it makes a limitless repository of high-definition satellite imagery of the Earth freely available to anyone. But it is not open source: the technology that powers the Google Maps website and Google Earth is proprietary, and the images carry legal restrictions.

By contrast, Terrapattern is based on OpenStreetMap, an active, collaborative collection of maps and geographical data from across the world, released under a free license: the archive can be re-used for any purpose (including commercial) as long as the source is credited. The code is open, meaning that it is continuously modified and updated by volunteers spread over five continents; anyone can adapt it to their own needs, and possibly publish and distribute their own version for the common good. There are also several tutorials on the Internet for do-it-yourselfers, organized encyclopedically in wiki style.

Is intelligence hiding in spontaneity?

The neural network at the base of software equipped with adaptive, self-learning patterns is made up of a sort of cybernetic neurons, connected to one another and able to exchange messages containing data if they "deem" those data relevant (i.e., once again, analyzed according to the grid). The network is then asked to solve a problem (typically, cataloging a disorganized dataset); it engages and gets some results: some lead to the success of the operation, others to its failure. The positive results reinforce the connections between neurons, while the negative ones are excluded from the experiential process; then the routine starts again, stronger than before thanks to what it has just learned.
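The "reinforce the connections that lead to success" loop described above can be seen in miniature in the classic perceptron learning rule: a single artificial neuron whose connection weights are strengthened when they contribute to a correct answer and corrected when they lead to a mistake. A toy sketch, learning the logical AND:

```python
import numpy as np

# A single artificial neuron learning AND via the perceptron rule:
# weights are the "connections", adjusted after every success or failure.
inputs  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)

weights, bias, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):                          # repeat the routine
    for x, t in zip(inputs, targets):
        y = 1.0 if x @ weights + bias > 0 else 0.0
        error = t - y                            # failure -> nonzero error
        weights += lr * error * x                # reinforce / weaken connections
        bias    += lr * error

predictions = [1.0 if x @ weights + bias > 0 else 0.0 for x in inputs]
print(predictions)  # [0.0, 0.0, 0.0, 1.0] -- AND has been learned
```

A real network chains thousands of such units and uses subtler update rules (backpropagation rather than this simple correction), but the principle is the same: experience adjusts the strength of the connections.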


A fun application of these concepts is another Google tool, the interactive TensorFlow Playground by Daniel Smilkov and Shan Carter. By setting the type of incoming data and the features to analyze, you can watch how a small number of nodes carries out the recognition and cataloging of the data, and how the analysis yields a more or less homogeneous result (in which positive cases, in orange, are distinguished from negative ones, in blue). In the final quadrant (labeled "output"), the fields over which the colored dots are scattered represent the forecast of future results: in other words, the likelihood that newly analyzed data, similar to the above, will be classified as positive or negative cases. The prediction field can be refined further, increasing its accuracy but also the risk of false positives and false negatives.

Playing with this app, you have the feeling of assuming the role of a monkey that presses a button and learns how to get a prize (a banana, obviously), but with the advantage of being able to observe how the nodes through which the initial information passes work together to produce consistent effects; otherwise the system is doomed to failure. Needless to say, this adaptive system spontaneously opens up new possibilities.

Does this mean that software can trust its experience? Is this really intelligent behavior?

Yes, it is intelligent behavior, but only if the initial instructions (i.e. the matrix) are good.

If they are pliant enough, automation can use them to act wisely and avoid mistakes. Intelligence is thus to be understood as education, or training: the ability to adapt.
So, starting from a mapping tool that learns to recognize the shapes of the world, we have finally arrived at the problem of understanding what intelligence is.

Or rather we have gone back to it, because it lies at the basis of every machine learning effort attempted so far. From the fraudulent chess-playing automatons of the nineteenth century, to Deep Blue, which in 1997 managed to beat the world chess champion Garry Kasparov, down to intelligent vacuum cleaners that learn to move around your house while you're away, all that seems to matter is the initial supply of information by which a learning system orients its choices. In all these cases it is the code's developer who makes the difference, ensuring a certain result even in the face of unpredictable situations.

But when we deal with neural networks composed of thousands of interdependent decision-making centers that coordinate themselves to produce consistent behavior, our general feeling is of being in front of a conscious entity, aware of itself thanks to a sort of instinct, just like the instinct we observe in action, and somehow admire, in the animal kingdom. With a certain lazy platitude, we are used to ascribing such behavior to Darwinian evolution; indeed we connote it as something automatic, something that leaves no choice. Common sense marks intelligence and instinct as opposite poles; on the other hand, it is sometimes difficult to speak of instinct when faced with certain animal behaviors, such as play.

The Cartesian theater and the purpose of the actions

Having clarified the neural network model, let's pay a visit to the old, outdated concept of intelligence, which dates back to Cartesian dualism: it consists in assessing the intelligence of a behavior by its intentionality. If we take for granted the old adage that what we perceive (and react to under its influence) is conveyed to a central organ and processed there, we end up in a circular argument that exchanges premises for consequences, and merges into the form of a unitary experience what would actually seem to be a network of functions. The latter theoretical model has received a huge developmental boost since Dennett's studies, pushing the Cartesian model (together with other prototypes) into oblivion. Yet the old model had the power to measure human purposes and means in a single glance, especially when Descartes, who advocated it, devoted his works to referring every human goal to God.

Today there remains the unresolved problem of explaining how this network of functions, however complex, differs from a simple system that identifies a certain stimulus and behaves accordingly, and which we would never call "smart": a speed camera, for example, that snaps our picture only if we pass at a speed above a certain limit.
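The contrast is worth making concrete. The speed camera is a pure stimulus-response system: one fixed rule, set once by its programmer, with nothing learned and nothing adapted (the threshold value here is an arbitrary example):

```python
# A pure stimulus-response system: a fixed rule, set once by the
# programmer. Nothing is learned; nothing ever changes with experience.
SPEED_LIMIT = 50  # km/h -- an arbitrary example value

def camera(speed):
    """Snap a picture only if the stimulus exceeds the fixed threshold."""
    return "snap" if speed > SPEED_LIMIT else "ignore"

print(camera(80))  # snap
print(camera(40))  # ignore
```

The question the article raises is precisely whether a network of thousands of such rules, once it begins rewriting its own thresholds through experience, has become something qualitatively different.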

If we call this network a "thinking mind", we neglect the fact that it detaches the intentionality powering an action from the thinking of that action: however well the network simulates the functionality of a real mind, it doesn't know why it is deciding to do what it does.
On the other side, we must accept the possibility that in the way we ourselves understand and catalog there is at least an appeal to habit, i.e. a blurry and inexact kind of routine, and a mind that questions what it already knows and takes comfort in it, and precisely because of this is, as a matter of fact, functional.

 

Marco Giannini works as an infographics editor for the Italian newspaper La Repubblica, in parallel with his freelancing for clients all over the world. You can see his work right here on Visualoop, as well as on Flickr.

Written by Tiago Veloso

Tiago Veloso is the founder and editor of Visualoop and Visualoop Brasil . He is Portuguese, currently based in Bonito, Brazil.
