Power and creativity in Artificial Intelligence

Very often, Artificial Intelligence appears inseparable from the use of large volumes of data and a high consumption of GPUs (and therefore computing power). This vision narrows the diversity of AI approaches and obscures the variety of models the field offers.

While it is true that algorithms such as BERT-type transformers are trained on several gigabytes of data (more than 130 GB for CamemBERT), there are other paths in Artificial Intelligence, such as the one proposed by Another Brain, for example, which is frugal in both energy and data.

Thus, throughout its history, AI has been built around a wide variety of algorithms and techniques, including:

  • Rule-based or knowledge-based systems,
  • Artificial neural networks,
  • Constraint programming,
  • Linear regressions and support vector machines.

In the 1990s, Knowledge-Based Systems (KBS) were the ones that gave the most concrete results for modelling business expertise.

KBS even managed to offer satisfactory solutions in use cases dealing with "continuous data". These systems provided answers for the processing of sound waves or for the management of blast furnaces, with real-time retrieval of information from sensors. We did not yet speak of the IoT, but these were already objects transmitting information.
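
To make this concrete: a knowledge-based system of that era typically encoded business expertise as "if-then" rules fired by an inference engine. Below is a minimal sketch of forward chaining in Python; the rules and sensor facts are invented for illustration, not taken from any real furnace-control system.

    # Minimal forward-chaining inference engine (illustrative sketch).
    # Each rule maps a set of premise facts to a single conclusion fact.
    RULES = [
        ({"temperature_high", "pressure_rising"}, "risk_of_overheating"),
        ({"risk_of_overheating"}, "reduce_fuel_feed"),
        ({"sensor_silent"}, "schedule_sensor_check"),
    ]

    def forward_chain(facts, rules):
        """Fire rules repeatedly until no new fact can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)  # the rule fires
                    changed = True
        return facts

    # Hypothetical real-time readings from furnace sensors:
    print(forward_chain({"temperature_high", "pressure_rising"}, RULES))
    # The derived facts include 'risk_of_overheating' and 'reduce_fuel_feed'.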

Artificial neural networks, heavy consumers of computation, did not yet provide results as interesting as today's within reasonable time frames. However, the nascent commercial offering (including companies such as Nestor Inc.) showed that this was a promising direction. Solutions combining expert systems and neural networks seemed to make it possible to go beyond the limits of each approach. And France held a very special position with its pioneering trade fair, Neuro-Nîmes.

Constraint programming was mainly used to tackle complex combinatorial problems (NP-complete or NP-hard) such as scheduling, planning, resource allocation, etc.
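
As an illustration, here is a tiny constraint solver in pure Python applied to a toy scheduling problem; the tasks, slots and constraints are hypothetical, and real constraint programming engines use far more sophisticated propagation than this plain backtracking sketch.

    # Toy scheduling as constraint satisfaction (illustrative sketch).
    # Three tasks must be assigned to distinct time slots 0..2.
    TASKS = ["casting", "rolling", "inspection"]
    SLOTS = [0, 1, 2]

    def consistent(assignment):
        """Check every constraint expressible on the partial assignment."""
        # No two tasks may share a slot (a single shared resource).
        if len(set(assignment.values())) != len(assignment):
            return False
        # 'rolling' must take place after 'casting'.
        if "casting" in assignment and "rolling" in assignment:
            if assignment["rolling"] <= assignment["casting"]:
                return False
        # 'inspection' must occupy the final slot.
        if "inspection" in assignment and assignment["inspection"] != 2:
            return False
        return True

    def backtrack(assignment):
        """Depth-first search, pruning branches that violate a constraint."""
        if len(assignment) == len(TASKS):
            return dict(assignment)
        task = next(t for t in TASKS if t not in assignment)
        for slot in SLOTS:
            assignment[task] = slot
            if consistent(assignment):
                solution = backtrack(assignment)
                if solution is not None:
                    return solution
            del assignment[task]
        return None

    print(backtrack({}))  # -> {'casting': 0, 'rolling': 1, 'inspection': 2}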

Machine learning techniques also relied on hybrid approaches: learning by generalisation, inductive learning, incremental learning, etc.

In the mid-1990s, Yann LeCun finalised LeNet, a convolutional neural network combining "classical" image processing techniques (convolution matrices), neural networks and the famous gradient backpropagation.
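
For readers who want to see the idea in code, here is a compact LeNet-style network written with today's PyTorch, an anachronism assumed for readability; the layer sizes follow the classic LeNet-5 description, and the data is random, for illustration only.

    import torch
    import torch.nn as nn

    class LeNet5(nn.Module):
        """LeNet-style CNN: convolutions extract local features,
        pooling reduces resolution, fully connected layers classify."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 6, kernel_size=5),   # 32x32 -> 28x28
                nn.Tanh(),
                nn.AvgPool2d(2),                  # 28x28 -> 14x14
                nn.Conv2d(6, 16, kernel_size=5),  # 14x14 -> 10x10
                nn.Tanh(),
                nn.AvgPool2d(2),                  # 10x10 -> 5x5
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(16 * 5 * 5, 120),
                nn.Tanh(),
                nn.Linear(120, 84),
                nn.Tanh(),
                nn.Linear(84, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    model = LeNet5()
    images = torch.randn(8, 1, 32, 32)  # a random batch of 32x32 grayscale images
    labels = torch.randint(0, 10, (8,))
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()  # gradient backpropagation computes all parameter gradients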

Hardware limitations often meant finding suitable ways to manage all the data and to obtain results within reasonable computing times, in order to bring real help to users.

Thus, as often happens, scarcity forces creativity.

There were many examples of how these technologies could be combined to solve problems of decision support, language processing or optimisation.

The possibility of taking advantage of different approaches has not completely disappeared from researchers' thinking. For example, Facebook is trying to use Deep Learning to tackle the usual problems of symbolic AI (computing derivatives and primitives, solving equations). The debate on the usefulness of coupling symbolic AI with connectionist AI remains open.

But today's computing power could end up curbing this imagination. Indeed, many neural network algorithms can answer a given problem, and there is therefore a "risk of ease": ingesting gigabytes of training data without worrying about the amount of memory used, the number of processors involved, and so on.

Indeed, despite the undeniable successes of AI, and in particular of artificial neural networks, the majority of understanding and decision-making tasks performed by humans remain beyond its reach. No autonomous vehicle today is capable of anticipating an "unlearned" driving situation, even though this is a relatively mundane task for a human.

Understanding the differences between the learning mechanisms of machines and humans is still a major topic of AI research. Basing AI performance solely on the availability of large volumes of data and significant computing power, for models that cannot generalise to other situations, cannot be satisfactory.

The human brain often learns a situation from very few examples, with an excellent ability to generalise to other situations.

This ability to "generalise" from a few "events" justifies looking for models that are not exclusively based on data and power.

Moreover, in my opinion, the search for an optimum approach and the development of "aesthetic" solutions remain important.

Thus, limiting memory consumption, reducing computing time, avoiding unnecessary learning or unjustified complexity should always guide the implementation of an IT solution, and a fortiori of Artificial Intelligence.

Hybridisation of algorithms can undoubtedly be part of this panoply of potential approaches. Human imagination remains indispensable to create a high-performance, aesthetic and ethical AI.


Laurent Cervoni, AI Director at Talan